Compare commits
No commits in common. "main" and "v0.1.0" have entirely different histories.
101 changed files with 310 additions and 13504 deletions
@@ -1,12 +0,0 @@
-name: Security Scan
-
-on:
-  push:
-    branches: [main, dev, 'feat/*']
-  pull_request:
-    branches: [main]
-
-jobs:
-  security:
-    uses: core/go-devops/.forgejo/workflows/security-scan.yml@main
-    secrets: inherit
@@ -1,14 +0,0 @@
-name: Test
-
-on:
-  push:
-    branches: [main, dev]
-  pull_request:
-    branches: [main]
-
-jobs:
-  test:
-    uses: core/go-devops/.forgejo/workflows/go-test.yml@main
-    with:
-      race: true
-      coverage: true
@@ -11,9 +11,6 @@

 | Date | Status | Notes |
 |------|--------|-------|
-| 2026-01-13 | Proposed | **Adaptive Bitrate (ABR)**: HLS-style multi-quality streaming with encrypted variants. New Section 3.7. All Future Work items complete. |
-| 2026-01-12 | Proposed | **Chunked streaming**: v3 now supports optional ChunkSize for independently decryptable chunks - enables seek, HTTP Range, and decrypt-while-downloading. |
-| 2026-01-12 | Proposed | **v3 Streaming**: LTHN rolling keys with configurable cadence (daily/12h/6h/1h). CEK wrapping for zero-trust streaming. WASM v1.3.0 with decryptV3(). |
 | 2026-01-10 | Proposed | Technical review passed. Fixed section numbering (7.x, 8.x, 9.x, 11.x). Updated WASM size to 5.9MB. Implementation verified complete for stated scope. |

 ---
@@ -145,16 +142,14 @@ Key properties:

 #### Format Versions

-| Format | Payload Structure | Size | Speed | Use Case |
-|--------|------------------|------|-------|----------|
-| **v1** | JSON with base64-encoded attachments | +33% overhead | Baseline | Legacy |
-| **v2** | Binary header + raw attachments + zstd | ~Original size | 3-10x faster | Download-to-own |
-| **v3** | CEK + wrapped keys + rolling LTHN | ~Original size | 3-10x faster | **Streaming** |
-| **v3+chunked** | v3 with independently decryptable chunks | ~Original size | Seekable | **Chunked streaming** |
+| Format | Payload Structure | Size | Speed |
+|--------|------------------|------|-------|
+| **v1** | JSON with base64-encoded attachments | +33% overhead | Baseline |
+| **v2** | Binary header + raw attachments + zstd | ~Original size | 3-10x faster |

-v2 is recommended for download-to-own (perpetual license). v3 is recommended for streaming (time-limited access). v3 with chunking is recommended for large files requiring seek capability or decrypt-while-downloading.
+v2 is recommended for production. v1 is maintained for backwards compatibility.

-### 3.3 Key Derivation (v1/v2)
+### 3.3 Key Derivation

 ```
 License Key (password)
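The key-derivation diagram above is cut off by the hunk boundary, but the note on password hashing in the next hunk states that v1/v2 use a single SHA-256 pass over the license key. A minimal sketch of that derivation, assuming a direct hash-to-key mapping (the `deriveKey` name is illustrative, not the repository's API):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// deriveKey is an illustrative sketch: v1/v2 reduce the license key
// (password) to a 32-byte ChaCha20-Poly1305 key with one SHA-256 pass.
func deriveKey(licenseKey string) [32]byte {
	return sha256.Sum256([]byte(licenseKey))
}

func main() {
	key := deriveKey("example-license-key")
	fmt.Println(hex.EncodeToString(key[:])) // 64 hex chars = 256-bit key
}
```

Because the derivation is a plain hash, it is fast and auditable, which is the trade-off the spec's note acknowledges against stronger KDFs like Argon2.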
@@ -173,136 +168,7 @@ Simple, auditable, no key escrow.

 **Note on password hashing**: SHA-256 is used for simplicity and speed. For high-value content, artists may choose to use stronger KDFs (Argon2, scrypt) in custom implementations. The format supports algorithm negotiation via the header.

-### 3.4 Streaming Key Derivation (v3)
-
-v3 format uses **LTHN rolling keys** for zero-trust streaming. The platform controls key refresh cadence.
-
-```
-┌──────────────────────────────────────────────────────────────────┐
-│                     v3 STREAMING KEY FLOW                        │
-├──────────────────────────────────────────────────────────────────┤
-│                                                                  │
-│  SERVER (encryption time):                                       │
-│  ─────────────────────────                                       │
-│  1. Generate random CEK (Content Encryption Key)                 │
-│  2. Encrypt content with CEK (one-time)                          │
-│  3. For current period AND next period:                          │
-│     streamKey = SHA256(LTHN(period:license:fingerprint))         │
-│     wrappedKey = ChaCha(CEK, streamKey)                          │
-│  4. Store wrapped keys in header (CEK never transmitted)         │
-│                                                                  │
-│  CLIENT (decryption time):                                       │
-│  ────────────────────────                                        │
-│  1. Derive streamKey = SHA256(LTHN(period:license:fingerprint))  │
-│  2. Try to unwrap CEK from current period key                    │
-│  3. If fails, try next period key                                │
-│  4. Decrypt content with unwrapped CEK                           │
-│                                                                  │
-└──────────────────────────────────────────────────────────────────┘
-```
-
-#### LTHN Hash Function
-
-LTHN is rainbow-table resistant because the salt is derived from the input itself:
-
-```
-LTHN(input) = SHA256(input + reverse_leet(input))
-
-where reverse_leet swaps: o↔0, l↔1, e↔3, a↔4, s↔z, t↔7
-
-Example:
-  LTHN("2026-01-12:license:fp")
-  = SHA256("2026-01-12:license:fp" + "pf:3zn3ci1:21-10-6202")
-```
-
-You cannot compute the hash without knowing the original input.
-
-#### Cadence Options
-
-The platform chooses the key refresh rate. Faster cadence = tighter access control.
-
-| Cadence | Period Format | Rolling Window | Use Case |
-|---------|---------------|----------------|----------|
-| `daily` | `2026-01-12` | 24-48 hours | Standard streaming |
-| `12h` | `2026-01-12-AM/PM` | 12-24 hours | Premium content |
-| `6h` | `2026-01-12-00/06/12/18` | 6-12 hours | High-value content |
-| `1h` | `2026-01-12-15` | 1-2 hours | Live events |
-
-The rolling window ensures smooth key transitions. At any time, both the current period key AND the next period key are valid.
-
-#### Zero-Trust Properties
-
-- **Server never stores keys** - Derived on-demand from LTHN
-- **Keys auto-expire** - No revocation mechanism needed
-- **Sharing keys is pointless** - They expire within the cadence window
-- **Fingerprint binds to device** - Different device = different key
-- **License ties to user** - Different user = different key
-
-### 3.5 Chunked Streaming (v3 with ChunkSize)
-
-When `StreamParams.ChunkSize > 0`, v3 format splits content into independently decryptable chunks, enabling:
-
-- **Decrypt-while-downloading** - Play media as chunks arrive
-- **HTTP Range requests** - Fetch specific chunks by byte offset
-- **Seekable playback** - Jump to any position without decrypting previous chunks
-
-```
-┌──────────────────────────────────────────────────────────────────┐
-│                       V3 CHUNKED FORMAT                          │
-├──────────────────────────────────────────────────────────────────┤
-│                                                                  │
-│  Header (cleartext):                                             │
-│    format: "v3"                                                  │
-│    chunked: {                                                    │
-│      chunkSize: 1048576,      // 1MB default                     │
-│      totalChunks: N,                                             │
-│      totalSize: X,            // unencrypted total               │
-│      index: [                 // for HTTP Range / seeking        │
-│        { offset: 0, size: Y },                                   │
-│        { offset: Y, size: Z },                                   │
-│        ...                                                       │
-│      ]                                                           │
-│    }                                                             │
-│    wrappedKeys: [...]         // same as non-chunked v3          │
-│                                                                  │
-│  Payload:                                                        │
-│    [chunk 0: nonce + encrypted + tag]                            │
-│    [chunk 1: nonce + encrypted + tag]                            │
-│    ...                                                           │
-│    [chunk N: nonce + encrypted + tag]                            │
-│                                                                  │
-└──────────────────────────────────────────────────────────────────┘
-```
-
-**Key insight**: Each chunk is encrypted with the same CEK but gets its own random nonce, making chunks independently decryptable. The chunk index in the header enables:
-
-1. **Seeking**: Calculate which chunk contains byte offset X, fetch just that chunk
-2. **Range requests**: Use HTTP Range headers to fetch specific encrypted chunks
-3. **Streaming**: Decrypt chunk 0 for metadata, then stream chunks 1-N as they arrive
-
-**Usage example**:
-```go
-params := &StreamParams{
-    License:     "user-license",
-    Fingerprint: "device-fp",
-    ChunkSize:   1024 * 1024, // 1MB chunks
-}
-
-// Encrypt with chunking
-encrypted, _ := EncryptV3(msg, params, manifest)
-
-// For streaming playback:
-header, _ := GetV3Header(encrypted)
-cek, _ := UnwrapCEKFromHeader(header, params)
-payload, _ := GetV3Payload(encrypted)
-
-for i := 0; i < header.Chunked.TotalChunks; i++ {
-    chunk, _ := DecryptV3Chunk(payload, cek, i, header.Chunked)
-    player.Write(chunk) // Stream to audio/video player
-}
-```
-
-### 3.6 Supported Content Types
+### 3.4 Supported Content Types

 SMSG is content-agnostic. Any file can be an attachment:
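Since this hunk removes the only description of the LTHN construction, a small self-contained Go sketch may be useful. It reproduces the worked example from the deleted text; the helper names `reverseLeet` and `lthn` are ours, not the repository's API, and the leet mapping is applied one-way (letters to digits), which is what the documented example implies:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// reverseLeet reverses the input and applies the leet substitutions
// from the spec: o→0, l→1, e→3, a→4, s→z, t→7.
func reverseLeet(s string) string {
	sub := map[rune]rune{'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7'}
	runes := []rune(s)
	var b strings.Builder
	for i := len(runes) - 1; i >= 0; i-- {
		r := runes[i]
		if m, ok := sub[r]; ok {
			r = m
		}
		b.WriteRune(r)
	}
	return b.String()
}

// lthn computes SHA256(input + reverse_leet(input)) as a hex string,
// i.e. the input itself supplies the salt.
func lthn(input string) string {
	sum := sha256.Sum256([]byte(input + reverseLeet(input)))
	return hex.EncodeToString(sum[:])
}

func main() {
	in := "2026-01-12:license:fp"
	fmt.Println(reverseLeet(in)) // pf:3zn3ci1:21-10-6202 (matches the spec's example)
	fmt.Println(lthn(in))
}
```

Because the salt is a deterministic transform of the input, precomputed rainbow tables over plain `period:license:fingerprint` strings do not apply, which is the property the deleted section claims.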
@@ -317,95 +183,6 @@ SMSG is content-agnostic. Any file can be an attachment:

 Multiple attachments per SMSG are supported (e.g., album + cover art + PDF booklet).

-### 3.7 Adaptive Bitrate Streaming (ABR)
-
-For large video content, ABR enables automatic quality switching based on network conditions—like HLS/DASH but with ChaCha20-Poly1305 encryption.
-
-**Architecture:**
-```
-ABR Manifest (manifest.json)
-├── Title: "My Video"
-├── Version: "abr-v1"
-├── Variants: [1080p, 720p, 480p, 360p]
-└── DefaultIdx: 1 (720p)
-
-track-1080p.smsg ──┐
-track-720p.smsg  ──┼── Each is standard v3 chunked SMSG
-track-480p.smsg  ──┤   Same password decrypts ALL variants
-track-360p.smsg  ──┘
-```
-
-**ABR Manifest Format:**
-```json
-{
-  "version": "abr-v1",
-  "title": "Content Title",
-  "duration": 300,
-  "variants": [
-    {
-      "name": "360p",
-      "bandwidth": 500000,
-      "width": 640,
-      "height": 360,
-      "codecs": "avc1.640028,mp4a.40.2",
-      "url": "track-360p.smsg",
-      "chunkCount": 12,
-      "fileSize": 18750000
-    },
-    {
-      "name": "720p",
-      "bandwidth": 2500000,
-      "width": 1280,
-      "height": 720,
-      "codecs": "avc1.640028,mp4a.40.2",
-      "url": "track-720p.smsg",
-      "chunkCount": 48,
-      "fileSize": 93750000
-    }
-  ],
-  "defaultIdx": 1
-}
-```
-
-**Bandwidth Estimation Algorithm:**
-1. Measure download time for each chunk
-2. Calculate bits per second: `(bytes × 8 × 1000) / timeMs`
-3. Average last 3 samples for stability
-4. Apply 80% safety factor to prevent buffering
-
-**Variant Selection:**
-```
-Selected = highest quality where (bandwidth × 0.8) >= variant.bandwidth
-```
-
-**Key Properties:**
-- **Same password for all variants**: CEK unwrapped once, works everywhere
-- **Chunk-boundary switching**: Clean cuts, no partial chunk issues
-- **Independent variants**: No cross-file dependencies
-- **CDN-friendly**: Each variant is a standard file, cacheable separately
-
-**Creating ABR Content:**
-```bash
-# Use mkdemo-abr to create variant set from source video
-go run ./cmd/mkdemo-abr input.mp4 output-dir/ [password]
-
-# Output:
-#   output-dir/manifest.json      (ABR manifest)
-#   output-dir/track-1080p.smsg   (v3 chunked, 5 Mbps)
-#   output-dir/track-720p.smsg    (v3 chunked, 2.5 Mbps)
-#   output-dir/track-480p.smsg    (v3 chunked, 1 Mbps)
-#   output-dir/track-360p.smsg    (v3 chunked, 500 Kbps)
-```
-
-**Standard Presets:**
-
-| Name | Resolution | Bitrate | Use Case |
-|------|------------|---------|----------|
-| 1080p | 1920×1080 | 5 Mbps | High quality, fast connections |
-| 720p | 1280×720 | 2.5 Mbps | Default, most connections |
-| 480p | 854×480 | 1 Mbps | Mobile, medium connections |
-| 360p | 640×360 | 500 Kbps | Slow connections, previews |

 ## 4. Demo Page Architecture

 **Live Demo**: https://demo.dapp.fm
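The bandwidth-estimation and variant-selection rules in the deleted ABR section are simple enough to sketch directly from the documented formulas. This is an illustration only, not the repository's implementation; all type and function names here are ours:

```go
package main

import "fmt"

// Variant mirrors one entry of the ABR manifest described in the spec.
type Variant struct {
	Name      string
	Bandwidth int // bits per second required for this quality
}

// bitsPerSecond applies the documented formula (bytes × 8 × 1000) / timeMs.
func bitsPerSecond(bytes, timeMs int) int {
	return bytes * 8 * 1000 / timeMs
}

// estimate averages the last three throughput samples (bits/sec),
// per the "average last 3 samples for stability" rule.
func estimate(samples []int) int {
	n := len(samples)
	if n == 0 {
		return 0
	}
	if n > 3 {
		samples = samples[n-3:]
	}
	sum := 0
	for _, s := range samples {
		sum += s
	}
	return sum / len(samples)
}

// selectVariant returns the highest quality whose required bandwidth
// fits within 80% of measured throughput (the safety factor). Variants
// are assumed sorted from lowest to highest bandwidth; the lowest is
// the fallback when nothing fits.
func selectVariant(variants []Variant, measured int) Variant {
	chosen := variants[0]
	for _, v := range variants {
		if int(float64(measured)*0.8) >= v.Bandwidth {
			chosen = v
		}
	}
	return chosen
}

func main() {
	variants := []Variant{
		{"360p", 500_000}, {"480p", 1_000_000},
		{"720p", 2_500_000}, {"1080p", 5_000_000},
	}
	// Three 1-second chunk downloads of ~400 KB each ≈ 3.2 Mbps.
	samples := []int{
		bitsPerSecond(400_000, 1000),
		bitsPerSecond(375_000, 1000),
		bitsPerSecond(425_000, 1000),
	}
	fmt.Println(selectVariant(variants, estimate(samples)).Name) // 720p
}
```

With ~3.2 Mbps measured, the 80% factor leaves 2.56 Mbps of usable headroom, which clears the 720p requirement (2.5 Mbps) but not 1080p (5 Mbps), so the selector settles on 720p, matching the preset table.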
@@ -702,7 +479,7 @@ Local playback    Third-party hosting
 ## 8. Implementation Status

 ### 8.1 Completed
-- [x] SMSG format specification (v1, v2, v3)
+- [x] SMSG format specification (v1 and v2)
 - [x] Go encryption/decryption library (pkg/smsg)
 - [x] WASM build for browser (pkg/wasm/stmf)
 - [x] Native desktop app (Wails, cmd/dapp-fm-app)
@@ -714,22 +491,17 @@ Local playback    Third-party hosting
 - [x] **Manifest links** - Artist platform links in metadata
 - [x] **Live demo** - https://demo.dapp.fm
 - [x] RFC-quality demo file with cryptographically secure password
-- [x] **v3 streaming format** - LTHN rolling keys with CEK wrapping
-- [x] **Configurable cadence** - daily/12h/6h/1h key rotation
-- [x] **WASM v1.3.0** - `BorgSMSG.decryptV3()` for streaming
-- [x] **Chunked streaming** - Independently decryptable chunks for seek/streaming
-- [x] **Adaptive Bitrate (ABR)** - HLS-style multi-quality streaming with encrypted variants

 ### 8.2 Fixed Issues
 - [x] ~~Double base64 encoding bug~~ - Fixed by using binary format
 - [x] ~~Demo file format detection~~ - v2 format auto-detected via header
-- [x] ~~Key wrapping for streaming~~ - Implemented in v3 format

 ### 8.3 Future Work
-- [x] Multi-bitrate adaptive streaming (see Section 3.7 ABR)
-- [x] Payment integration examples (see `docs/payment-integration.md`)
-- [x] IPFS distribution guide (see `docs/ipfs-distribution.md`)
-- [x] Demo page "Streaming" tab for v3 showcase
+- [ ] Chunked streaming (decrypt while downloading)
+- [ ] Key wrapping for multi-license files (dapp.radio.fm)
+- [ ] Payment integration examples (Stripe, Gumroad)
+- [ ] IPFS distribution guide
+- [ ] Expiring license enforcement

 ## 9. Usage Examples
@@ -816,11 +588,10 @@ SMSG includes version and format fields for forward compatibility:
 |---------|--------|----------|
 | 1.0 | v1 | ChaCha20-Poly1305, JSON+base64 attachments |
 | 1.0 | **v2** | Binary attachments, zstd compression (25% smaller, 3-10x faster) |
-| 1.0 | **v3** | LTHN rolling keys, CEK wrapping, chunked streaming |
-| 1.0 | **v3+ABR** | Multi-quality variants with adaptive bitrate switching |
 | 2 (future) | - | Algorithm negotiation, multiple KDFs |
+| 3 (future) | - | Streaming chunks, adaptive bitrate, key wrapping |

-Decoders MUST reject versions they don't understand. Use v2 for download-to-own, v3 for streaming, v3+ABR for video.
+Decoders MUST reject versions they don't understand. Encoders SHOULD use v2 format for production (smaller, faster).

 ### 11.2 Third-Party Implementations
@@ -863,8 +634,6 @@ The player is embeddable:
 - WASM Module: `pkg/wasm/stmf/`
 - Native App: `cmd/dapp-fm-app/`
 - Demo Creator Tool: `cmd/mkdemo/`
-- ABR Creator Tool: `cmd/mkdemo-abr/`
-- ABR Package: `pkg/smsg/abr.go`

 ## 13. License

14 cmd/all.go
@@ -8,13 +8,13 @@ import (
 	"os"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/github"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
-	"forge.lthn.ai/Snider/Borg/pkg/ui"
-	"forge.lthn.ai/Snider/Borg/pkg/vcs"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/ui"
+	"github.com/Snider/Borg/pkg/vcs"
 	"github.com/spf13/cobra"
 )
@@ -8,9 +8,9 @@ import (
 	"path/filepath"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/github"
-	"forge.lthn.ai/Snider/Borg/pkg/mocks"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/mocks"
 )

 func TestAllCmd_Good(t *testing.T) {
@@ -7,8 +8,8 @@ import (
 	"os"
 	"path/filepath"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	borg_github "forge.lthn.ai/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/datanode"
+	borg_github "github.com/Snider/Borg/pkg/github"
 	"github.com/google/go-github/v39/github"
 	"github.com/spf13/cobra"
 	"golang.org/x/mod/semver"
@@ -5,11 +5,11 @@ import (
 	"io"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
-	"forge.lthn.ai/Snider/Borg/pkg/ui"
-	"forge.lthn.ai/Snider/Borg/pkg/vcs"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/ui"
+	"github.com/Snider/Borg/pkg/vcs"

 	"github.com/spf13/cobra"
 )
@@ -5,8 +5,8 @@ import (
 	"path/filepath"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/mocks"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/mocks"
 )

 func TestCollectGithubRepoCmd_Good(t *testing.T) {
@@ -3,7 +3,7 @@ package cmd
 import (
 	"fmt"

-	"forge.lthn.ai/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/github"
 	"github.com/spf13/cobra"
 )

@ -1,581 +0,0 @@
|
||||||
package cmd
|
|
||||||
|
|
||||||
import (
|
|
||||||
"archive/tar"
|
|
||||||
"bytes"
|
|
||||||
"fmt"
|
|
||||||
"io"
|
|
||||||
"io/fs"
|
|
||||||
"os"
|
|
||||||
"path/filepath"
|
|
||||||
"strings"
|
|
||||||
"sync"
|
|
||||||
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/compress"
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/datanode"
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/tim"
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/trix"
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/ui"
|
|
||||||
|
|
||||||
"github.com/spf13/cobra"
|
|
||||||
)
|
|
||||||
|
|
||||||
type CollectLocalCmd struct {
|
|
||||||
cobra.Command
|
|
||||||
}
|
|
||||||
|
|
||||||
// NewCollectLocalCmd creates a new collect local command
|
|
||||||
func NewCollectLocalCmd() *CollectLocalCmd {
|
|
||||||
c := &CollectLocalCmd{}
|
|
||||||
c.Command = cobra.Command{
|
|
||||||
Use: "local [directory]",
|
|
||||||
Short: "Collect files from a local directory",
|
|
||||||
Long: `Collect local files into a portable container.
|
|
||||||
|
|
||||||
For STIM format, uses streaming I/O — memory usage is constant
|
|
||||||
(~2 MiB) regardless of input directory size. Other formats
|
|
||||||
(datanode, tim, trix) load files into memory.
|
|
||||||
|
|
||||||
Examples:
|
|
||||||
borg collect local
|
|
||||||
borg collect local ./src
|
|
||||||
borg collect local /path/to/project --output project.tar
|
|
||||||
borg collect local . --format stim --password secret
|
|
||||||
borg collect local . --exclude "*.log" --exclude "node_modules"`,
|
|
||||||
Args: cobra.MaximumNArgs(1),
|
|
||||||
RunE: func(cmd *cobra.Command, args []string) error {
|
|
||||||
directory := "."
|
|
||||||
if len(args) > 0 {
|
|
||||||
directory = args[0]
|
|
||||||
}
|
|
||||||
|
|
||||||
outputFile, _ := cmd.Flags().GetString("output")
|
|
||||||
format, _ := cmd.Flags().GetString("format")
|
|
||||||
compression, _ := cmd.Flags().GetString("compression")
|
|
||||||
password, _ := cmd.Flags().GetString("password")
|
|
||||||
excludes, _ := cmd.Flags().GetStringSlice("exclude")
|
|
||||||
includeHidden, _ := cmd.Flags().GetBool("hidden")
|
|
||||||
respectGitignore, _ := cmd.Flags().GetBool("gitignore")
|
|
||||||
|
|
||||||
progress := ProgressFromCmd(cmd)
|
|
||||||
finalPath, err := CollectLocal(directory, outputFile, format, compression, password, excludes, includeHidden, respectGitignore, progress)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
fmt.Fprintln(cmd.OutOrStdout(), "Files saved to", finalPath)
|
|
||||||
return nil
|
|
||||||
},
|
|
||||||
}
|
|
||||||
c.Flags().String("output", "", "Output file for the DataNode")
|
|
||||||
c.Flags().String("format", "datanode", "Output format (datanode, tim, trix, or stim)")
|
|
||||||
c.Flags().String("compression", "none", "Compression format (none, gz, or xz)")
|
|
||||||
c.Flags().String("password", "", "Password for encryption (required for stim/trix format)")
|
|
||||||
c.Flags().StringSlice("exclude", nil, "Patterns to exclude (can be specified multiple times)")
|
|
||||||
c.Flags().Bool("hidden", false, "Include hidden files and directories")
|
|
||||||
c.Flags().Bool("gitignore", true, "Respect .gitignore files (default: true)")
|
|
||||||
return c
|
|
||||||
}
|
|
||||||
|
|
||||||
func init() {
|
|
||||||
collectCmd.AddCommand(&NewCollectLocalCmd().Command)
|
|
||||||
}
|
|
||||||
|
|
||||||
// CollectLocal collects files from a local directory into a DataNode
|
|
||||||
func CollectLocal(directory string, outputFile string, format string, compression string, password string, excludes []string, includeHidden bool, respectGitignore bool, progress ui.Progress) (string, error) {
|
|
||||||
// Validate format
|
|
||||||
if format != "datanode" && format != "tim" && format != "trix" && format != "stim" {
|
|
||||||
return "", fmt.Errorf("invalid format: %s (must be 'datanode', 'tim', 'trix', or 'stim')", format)
|
|
||||||
}
|
|
||||||
if (format == "stim" || format == "trix") && password == "" {
|
|
||||||
return "", fmt.Errorf("password is required for %s format", format)
|
|
||||||
}
|
|
||||||
if compression != "none" && compression != "gz" && compression != "xz" {
|
|
||||||
return "", fmt.Errorf("invalid compression: %s (must be 'none', 'gz', or 'xz')", compression)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Resolve directory path
|
|
||||||
absDir, err := filepath.Abs(directory)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error resolving directory path: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
info, err := os.Stat(absDir)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error accessing directory: %w", err)
|
|
||||||
}
|
|
||||||
if !info.IsDir() {
|
|
||||||
return "", fmt.Errorf("not a directory: %s", absDir)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Use streaming pipeline for STIM v2 format
|
|
||||||
if format == "stim" {
|
|
||||||
if outputFile == "" {
|
|
||||||
baseName := filepath.Base(absDir)
|
|
||||||
if baseName == "." || baseName == "/" {
|
|
||||||
baseName = "local"
|
|
||||||
}
|
|
||||||
outputFile = baseName + ".stim"
|
|
||||||
}
|
|
||||||
if err := CollectLocalStreaming(absDir, outputFile, compression, password); err != nil {
|
|
||||||
return "", err
|
|
||||||
}
|
|
||||||
return outputFile, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Load gitignore patterns if enabled
|
|
||||||
var gitignorePatterns []string
|
|
||||||
if respectGitignore {
|
|
||||||
gitignorePatterns = loadGitignore(absDir)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create DataNode and collect files
|
|
||||||
dn := datanode.New()
|
|
||||||
var fileCount int
|
|
||||||
|
|
||||||
progress.Start("collecting " + directory)
|
|
||||||
|
|
||||||
err = filepath.WalkDir(absDir, func(path string, d fs.DirEntry, err error) error {
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Get relative path
|
|
||||||
relPath, err := filepath.Rel(absDir, path)
|
|
||||||
if err != nil {
|
|
||||||
return err
|
|
||||||
}
|
|
||||||
|
|
||||||
// Skip root
|
|
||||||
if relPath == "." {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Skip hidden files/dirs unless explicitly included
|
|
||||||
if !includeHidden && isHidden(relPath) {
|
|
||||||
if d.IsDir() {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check gitignore patterns
|
|
||||||
if respectGitignore && matchesGitignore(relPath, d.IsDir(), gitignorePatterns) {
|
|
||||||
if d.IsDir() {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Check exclude patterns
|
|
||||||
if matchesExclude(relPath, excludes) {
|
|
||||||
if d.IsDir() {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Skip directories (they're implicit in DataNode)
|
|
||||||
if d.IsDir() {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Read file content
|
|
||||||
content, err := os.ReadFile(path)
|
|
||||||
if err != nil {
|
|
||||||
return fmt.Errorf("error reading %s: %w", relPath, err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Add to DataNode with forward slashes (tar convention)
|
|
||||||
dn.AddData(filepath.ToSlash(relPath), content)
|
|
||||||
fileCount++
|
|
||||||
progress.Update(int64(fileCount), 0)
|
|
||||||
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error walking directory: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if fileCount == 0 {
|
|
||||||
return "", fmt.Errorf("no files found in %s", directory)
|
|
||||||
}
|
|
||||||
|
|
||||||
progress.Finish(fmt.Sprintf("collected %d files", fileCount))
|
|
||||||
|
|
||||||
// Convert to output format
|
|
||||||
var data []byte
|
|
||||||
if format == "tim" {
|
|
||||||
t, err := tim.FromDataNode(dn)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error creating tim: %w", err)
|
|
||||||
}
|
|
||||||
data, err = t.ToTar()
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error serializing tim: %w", err)
|
|
||||||
}
|
|
||||||
} else if format == "stim" {
|
|
||||||
t, err := tim.FromDataNode(dn)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error creating tim: %w", err)
|
|
||||||
}
|
|
||||||
data, err = t.ToSigil(password)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error encrypting stim: %w", err)
|
|
||||||
}
|
|
||||||
} else if format == "trix" {
|
|
||||||
data, err = trix.ToTrix(dn, password)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error serializing trix: %w", err)
|
|
||||||
}
|
|
||||||
} else {
|
|
||||||
data, err = dn.ToTar()
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error serializing DataNode: %w", err)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Apply compression
|
|
||||||
compressedData, err := compress.Compress(data, compression)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error compressing data: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Determine output filename
|
|
||||||
if outputFile == "" {
|
|
||||||
baseName := filepath.Base(absDir)
|
|
||||||
if baseName == "." || baseName == "/" {
|
|
||||||
baseName = "local"
|
|
||||||
}
|
|
||||||
outputFile = baseName + "." + format
|
|
||||||
if compression != "none" {
|
|
||||||
outputFile += "." + compression
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
err = os.WriteFile(outputFile, compressedData, 0644)
|
|
||||||
if err != nil {
|
|
||||||
return "", fmt.Errorf("error writing output file: %w", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
return outputFile, nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// isHidden checks if a path component starts with a dot
func isHidden(path string) bool {
	parts := strings.Split(filepath.ToSlash(path), "/")
	for _, part := range parts {
		if strings.HasPrefix(part, ".") {
			return true
		}
	}
	return false
}

// loadGitignore loads patterns from .gitignore if it exists
func loadGitignore(dir string) []string {
	var patterns []string

	gitignorePath := filepath.Join(dir, ".gitignore")
	content, err := os.ReadFile(gitignorePath)
	if err != nil {
		return patterns
	}

	lines := strings.Split(string(content), "\n")
	for _, line := range lines {
		line = strings.TrimSpace(line)
		// Skip empty lines and comments
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		patterns = append(patterns, line)
	}

	return patterns
}

// matchesGitignore checks if a path matches any gitignore pattern
func matchesGitignore(path string, isDir bool, patterns []string) bool {
	for _, pattern := range patterns {
		// Handle directory-only patterns
		if strings.HasSuffix(pattern, "/") {
			if !isDir {
				continue
			}
			pattern = strings.TrimSuffix(pattern, "/")
		}

		// Handle negation (simplified - just skip negated patterns)
		if strings.HasPrefix(pattern, "!") {
			continue
		}

		// Match against path components
		matched, _ := filepath.Match(pattern, filepath.Base(path))
		if matched {
			return true
		}

		// Also try matching the full path
		matched, _ = filepath.Match(pattern, path)
		if matched {
			return true
		}

		// Handle ** patterns (simplified)
		if strings.Contains(pattern, "**") {
			simplePattern := strings.ReplaceAll(pattern, "**", "*")
			matched, _ = filepath.Match(simplePattern, path)
			if matched {
				return true
			}
		}
	}
	return false
}

// matchesExclude checks if a path matches any exclude pattern
func matchesExclude(path string, excludes []string) bool {
	for _, pattern := range excludes {
		// Match against basename
		matched, _ := filepath.Match(pattern, filepath.Base(path))
		if matched {
			return true
		}

		// Match against full path
		matched, _ = filepath.Match(pattern, path)
		if matched {
			return true
		}
	}
	return false
}

// CollectLocalStreaming collects files from a local directory using a streaming
// pipeline: walk -> tar -> compress -> encrypt -> file.
// The encryption runs in a goroutine, consuming from an io.Pipe that the
// tar/compress writes feed into synchronously.
func CollectLocalStreaming(dir, output, compression, password string) error {
	// Resolve to absolute path
	absDir, err := filepath.Abs(dir)
	if err != nil {
		return fmt.Errorf("error resolving directory path: %w", err)
	}

	// Validate directory exists
	info, err := os.Stat(absDir)
	if err != nil {
		return fmt.Errorf("error accessing directory: %w", err)
	}
	if !info.IsDir() {
		return fmt.Errorf("not a directory: %s", absDir)
	}

	// Create output file
	outFile, err := os.Create(output)
	if err != nil {
		return fmt.Errorf("error creating output file: %w", err)
	}

	// cleanup removes partial output on error
	cleanup := func() {
		outFile.Close()
		os.Remove(output)
	}

	// Build streaming pipeline:
	// tar.Writer -> compressWriter -> pipeWriter -> pipeReader -> StreamEncrypt -> outFile
	pr, pw := io.Pipe()

	// Start encryption goroutine
	var encErr error
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		encErr = tim.StreamEncrypt(pr, outFile, password)
	}()

	// Create compression writer wrapping the pipe writer
	compWriter, err := compress.NewCompressWriter(pw, compression)
	if err != nil {
		pw.Close()
		wg.Wait()
		cleanup()
		return fmt.Errorf("error creating compression writer: %w", err)
	}

	// Create tar writer wrapping the compression writer
	tw := tar.NewWriter(compWriter)

	// Walk directory and write tar entries
	walkErr := filepath.WalkDir(absDir, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}

		// Get relative path
		relPath, err := filepath.Rel(absDir, path)
		if err != nil {
			return err
		}

		// Skip root
		if relPath == "." {
			return nil
		}

		// Normalize to forward slashes for tar
		relPath = filepath.ToSlash(relPath)

		// Check if entry is a symlink using Lstat
		linfo, err := os.Lstat(path)
		if err != nil {
			return err
		}
		isSymlink := linfo.Mode()&fs.ModeSymlink != 0

		if isSymlink {
			// Read symlink target
			linkTarget, err := os.Readlink(path)
			if err != nil {
				return err
			}

			// Resolve to check if target exists
			absTarget := linkTarget
			if !filepath.IsAbs(absTarget) {
				absTarget = filepath.Join(filepath.Dir(path), linkTarget)
			}
			_, statErr := os.Stat(absTarget)
			if statErr != nil {
				// Broken symlink - skip silently
				return nil
			}

			// Write valid symlink as tar entry
			hdr := &tar.Header{
				Typeflag: tar.TypeSymlink,
				Name:     relPath,
				Linkname: linkTarget,
				Mode:     0777,
			}
			return tw.WriteHeader(hdr)
		}

		if d.IsDir() {
			// Write directory header
			hdr := &tar.Header{
				Typeflag: tar.TypeDir,
				Name:     relPath + "/",
				Mode:     0755,
			}
			return tw.WriteHeader(hdr)
		}

		// Regular file: write header + content
		finfo, err := d.Info()
		if err != nil {
			return err
		}

		hdr := &tar.Header{
			Name: relPath,
			Mode: 0644,
			Size: finfo.Size(),
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}

		f, err := os.Open(path)
		if err != nil {
			return fmt.Errorf("error opening %s: %w", relPath, err)
		}
		defer f.Close()

		if _, err := io.Copy(tw, f); err != nil {
			return fmt.Errorf("error streaming %s: %w", relPath, err)
		}

		return nil
	})

	// Close pipeline layers in order: tar -> compress -> pipe
	// We must close even on error to unblock the encryption goroutine.
	twCloseErr := tw.Close()
	compCloseErr := compWriter.Close()

	if walkErr != nil {
		pw.CloseWithError(walkErr)
		wg.Wait()
		cleanup()
		return fmt.Errorf("error walking directory: %w", walkErr)
	}

	if twCloseErr != nil {
		pw.CloseWithError(twCloseErr)
		wg.Wait()
		cleanup()
		return fmt.Errorf("error closing tar writer: %w", twCloseErr)
	}

	if compCloseErr != nil {
		pw.CloseWithError(compCloseErr)
		wg.Wait()
		cleanup()
		return fmt.Errorf("error closing compression writer: %w", compCloseErr)
	}

	// Signal EOF to encryption goroutine
	pw.Close()

	// Wait for encryption to finish
	wg.Wait()

	if encErr != nil {
		cleanup()
		return fmt.Errorf("error encrypting data: %w", encErr)
	}

	// Close output file
	if err := outFile.Close(); err != nil {
		os.Remove(output)
		return fmt.Errorf("error closing output file: %w", err)
	}

	return nil
}

// DecryptStimV2 decrypts a STIM v2 file back into a DataNode.
// It opens the file, runs StreamDecrypt, decompresses the result,
// and parses the tar archive into a DataNode.
func DecryptStimV2(path, password string) (*datanode.DataNode, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, fmt.Errorf("error opening file: %w", err)
	}
	defer f.Close()

	// Decrypt
	var decrypted bytes.Buffer
	if err := tim.StreamDecrypt(f, &decrypted, password); err != nil {
		return nil, fmt.Errorf("error decrypting: %w", err)
	}

	// Decompress
	decompressed, err := compress.Decompress(decrypted.Bytes())
	if err != nil {
		return nil, fmt.Errorf("error decompressing: %w", err)
	}

	// Parse tar into DataNode
	dn, err := datanode.FromTar(decompressed)
	if err != nil {
		return nil, fmt.Errorf("error parsing tar: %w", err)
	}

	return dn, nil
}

@@ -1,161 +0,0 @@
package cmd

import (
	"os"
	"path/filepath"
	"testing"
)

func TestCollectLocalStreaming_Good(t *testing.T) {
	// Create a temp directory with some test files
	srcDir := t.TempDir()
	outDir := t.TempDir()

	// Create files in subdirectories
	subDir := filepath.Join(srcDir, "subdir")
	if err := os.MkdirAll(subDir, 0755); err != nil {
		t.Fatalf("failed to create subdir: %v", err)
	}

	files := map[string]string{
		"hello.txt":        "hello world",
		"subdir/nested.go": "package main\n",
	}
	for name, content := range files {
		path := filepath.Join(srcDir, name)
		if err := os.WriteFile(path, []byte(content), 0644); err != nil {
			t.Fatalf("failed to write %s: %v", name, err)
		}
	}

	output := filepath.Join(outDir, "test.stim")
	err := CollectLocalStreaming(srcDir, output, "gz", "test-password")
	if err != nil {
		t.Fatalf("CollectLocalStreaming() error = %v", err)
	}

	// Verify file exists and is non-empty
	info, err := os.Stat(output)
	if err != nil {
		t.Fatalf("output file does not exist: %v", err)
	}
	if info.Size() == 0 {
		t.Fatal("output file is empty")
	}
}

func TestCollectLocalStreaming_Decrypt_Good(t *testing.T) {
	// Create a temp directory with known files
	srcDir := t.TempDir()
	outDir := t.TempDir()

	subDir := filepath.Join(srcDir, "pkg")
	if err := os.MkdirAll(subDir, 0755); err != nil {
		t.Fatalf("failed to create subdir: %v", err)
	}

	expectedFiles := map[string]string{
		"README.md":   "# Test Project\n",
		"pkg/main.go": "package main\n\nfunc main() {}\n",
	}
	for name, content := range expectedFiles {
		path := filepath.Join(srcDir, name)
		if err := os.WriteFile(path, []byte(content), 0644); err != nil {
			t.Fatalf("failed to write %s: %v", name, err)
		}
	}

	password := "decrypt-test-pw"
	output := filepath.Join(outDir, "roundtrip.stim")

	// Collect
	err := CollectLocalStreaming(srcDir, output, "gz", password)
	if err != nil {
		t.Fatalf("CollectLocalStreaming() error = %v", err)
	}

	// Decrypt
	dn, err := DecryptStimV2(output, password)
	if err != nil {
		t.Fatalf("DecryptStimV2() error = %v", err)
	}

	// Verify each expected file exists in the DataNode
	for name, wantContent := range expectedFiles {
		f, err := dn.Open(name)
		if err != nil {
			t.Errorf("file %q not found in DataNode: %v", name, err)
			continue
		}
		buf := make([]byte, 4096)
		n, _ := f.Read(buf)
		f.Close()
		got := string(buf[:n])
		if got != wantContent {
			t.Errorf("file %q content mismatch:\n  got:  %q\n  want: %q", name, got, wantContent)
		}
	}
}

func TestCollectLocalStreaming_BrokenSymlink_Good(t *testing.T) {
	srcDir := t.TempDir()
	outDir := t.TempDir()

	// Create a regular file
	if err := os.WriteFile(filepath.Join(srcDir, "real.txt"), []byte("I exist"), 0644); err != nil {
		t.Fatalf("failed to write real.txt: %v", err)
	}

	// Create a broken symlink pointing to a nonexistent target
	brokenLink := filepath.Join(srcDir, "broken-link")
	if err := os.Symlink("/nonexistent/target/file", brokenLink); err != nil {
		t.Fatalf("failed to create broken symlink: %v", err)
	}

	output := filepath.Join(outDir, "symlink.stim")
	err := CollectLocalStreaming(srcDir, output, "none", "sym-password")
	if err != nil {
		t.Fatalf("CollectLocalStreaming() should skip broken symlinks, got error = %v", err)
	}

	// Verify output exists and is non-empty
	info, err := os.Stat(output)
	if err != nil {
		t.Fatalf("output file does not exist: %v", err)
	}
	if info.Size() == 0 {
		t.Fatal("output file is empty")
	}

	// Decrypt and verify the broken symlink was skipped
	dn, err := DecryptStimV2(output, "sym-password")
	if err != nil {
		t.Fatalf("DecryptStimV2() error = %v", err)
	}

	// real.txt should be present
	if _, err := dn.Stat("real.txt"); err != nil {
		t.Error("expected real.txt in DataNode but it's missing")
	}

	// broken-link should NOT be present
	exists, _ := dn.Exists("broken-link")
	if exists {
		t.Error("broken symlink should have been skipped but was found in DataNode")
	}
}

func TestCollectLocalStreaming_Bad(t *testing.T) {
	outDir := t.TempDir()
	output := filepath.Join(outDir, "should-not-exist.stim")

	err := CollectLocalStreaming("/nonexistent/path/that/does/not/exist", output, "none", "password")
	if err == nil {
		t.Fatal("expected error for nonexistent directory, got nil")
	}

	// Verify no partial output file was left behind
	if _, statErr := os.Stat(output); statErr == nil {
		t.Error("partial output file should have been cleaned up")
	}
}

@@ -4,11 +4,11 @@ import (
 	"fmt"
 	"os"
 
-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/pwa"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
-	"forge.lthn.ai/Snider/Borg/pkg/ui"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/pwa"
+	"github.com/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/ui"
 
 	"github.com/spf13/cobra"
 )

@@ -5,11 +5,11 @@ import (
 	"os"
 
 	"github.com/schollz/progressbar/v3"
-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
-	"forge.lthn.ai/Snider/Borg/pkg/ui"
-	"forge.lthn.ai/Snider/Borg/pkg/website"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/ui"
+	"github.com/Snider/Borg/pkg/website"
 
 	"github.com/spf13/cobra"
 )

@@ -6,8 +6,8 @@ import (
 	"strings"
 	"testing"
 
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/website"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/website"
 	"github.com/schollz/progressbar/v3"
 )

@@ -5,7 +5,7 @@ import (
 	"os"
 	"strings"
 
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/tim"
 	"github.com/spf13/cobra"
 )

@@ -5,8 +5,8 @@ import (
 	"os"
 	"path/filepath"
 
-	"forge.lthn.ai/Snider/Borg/pkg/console"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/console"
+	"github.com/Snider/Borg/pkg/tim"
 	"github.com/spf13/cobra"
 )

@@ -1,17 +0,0 @@
package cmd

import (
	"os"

	"forge.lthn.ai/Snider/Borg/pkg/ui"
	"github.com/spf13/cobra"
)

// ProgressFromCmd returns a Progress based on --quiet flag and TTY detection.
func ProgressFromCmd(cmd *cobra.Command) ui.Progress {
	quiet, _ := cmd.Flags().GetBool("quiet")
	if quiet {
		return ui.NewQuietProgress(os.Stderr)
	}
	return ui.DefaultProgress()
}

@@ -1,28 +0,0 @@
package cmd

import (
	"testing"

	"github.com/spf13/cobra"
)

func TestProgressFromCmd_Good(t *testing.T) {
	cmd := &cobra.Command{}
	cmd.PersistentFlags().BoolP("quiet", "q", false, "")

	p := ProgressFromCmd(cmd)
	if p == nil {
		t.Fatal("expected non-nil Progress")
	}
}

func TestProgressFromCmd_Quiet_Good(t *testing.T) {
	cmd := &cobra.Command{}
	cmd.PersistentFlags().BoolP("quiet", "q", true, "")
	_ = cmd.PersistentFlags().Set("quiet", "true")

	p := ProgressFromCmd(cmd)
	if p == nil {
		t.Fatal("expected non-nil Progress")
	}
}

@@ -15,8 +15,8 @@ import (
 	"sync"
 	"time"
 
-	"forge.lthn.ai/Snider/Borg/pkg/player"
-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/player"
+	"github.com/Snider/Borg/pkg/smsg"
 	"github.com/wailsapp/wails/v2"
 	"github.com/wailsapp/wails/v2/pkg/options"
 	"github.com/wailsapp/wails/v2/pkg/options/assetserver"

@@ -6,7 +6,7 @@ import (
 	"fmt"
 	"os"
 
-	"forge.lthn.ai/Snider/Borg/pkg/player"
+	"github.com/Snider/Borg/pkg/player"
 	"github.com/spf13/cobra"
 )

@@ -5,9 +5,9 @@ import (
 	"os"
 	"strings"
 
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
-	trixsdk "forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/trix"
+	trixsdk "github.com/Snider/Enchantrix/pkg/trix"
 	"github.com/spf13/cobra"
 )

@@ -5,8 +5,8 @@ import (
 	"path/filepath"
 	"testing"
 
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/trix"
 )
 
 func TestDecodeCmd(t *testing.T) {

@@ -1,70 +0,0 @@
// extract-demo extracts the video from a v2 SMSG file
package main

import (
	"encoding/base64"
	"fmt"
	"os"

	"forge.lthn.ai/Snider/Borg/pkg/smsg"
)

func main() {
	if len(os.Args) < 4 {
		fmt.Println("Usage: extract-demo <input.smsg> <password> <output.mp4>")
		os.Exit(1)
	}

	inputFile := os.Args[1]
	password := os.Args[2]
	outputFile := os.Args[3]

	data, err := os.ReadFile(inputFile)
	if err != nil {
		fmt.Printf("Failed to read: %v\n", err)
		os.Exit(1)
	}

	// Get info first
	info, err := smsg.GetInfo(data)
	if err != nil {
		fmt.Printf("Failed to get info: %v\n", err)
		os.Exit(1)
	}
	fmt.Printf("Format: %s, Compression: %s\n", info.Format, info.Compression)

	// Decrypt
	msg, err := smsg.Decrypt(data, password)
	if err != nil {
		fmt.Printf("Failed to decrypt: %v\n", err)
		os.Exit(1)
	}

	fmt.Printf("Body: %s...\n", msg.Body[:min(50, len(msg.Body))])
	fmt.Printf("Attachments: %d\n", len(msg.Attachments))

	if len(msg.Attachments) > 0 {
		att := msg.Attachments[0]
		fmt.Printf("  Name: %s, MIME: %s, Size: %d\n", att.Name, att.MimeType, att.Size)

		// Decode and save
		decoded, err := base64.StdEncoding.DecodeString(att.Content)
		if err != nil {
			fmt.Printf("Failed to decode: %v\n", err)
			os.Exit(1)
		}

		if err := os.WriteFile(outputFile, decoded, 0644); err != nil {
			fmt.Printf("Failed to save: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("Saved to %s (%d bytes)\n", outputFile, len(decoded))
	}
}

func min(a, b int) int {
	if a < b {
		return a
	}
	return b
}

@@ -6,7 +6,7 @@ import (
 	"os"
 	"strings"
 
-	trixsdk "forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	trixsdk "github.com/Snider/Enchantrix/pkg/trix"
 	"github.com/spf13/cobra"
 )

@@ -1,194 +0,0 @@
package cmd

import (
	"bytes"
	"io"
	"os"
	"path/filepath"
	"testing"
)

// TestFullPipeline_Good exercises the complete streaming pipeline end-to-end
// with realistic directory contents including nested dirs, a large file that
// crosses the AEAD block boundary, valid and broken symlinks, and a hidden file.
// Each compression mode (none, gz, xz) is tested as a subtest.
func TestFullPipeline_Good(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	// Build a realistic source directory.
	srcDir := t.TempDir()

	// Regular files at root level.
	writeFile(t, srcDir, "readme.md", "# My Project\n\nA description.\n")
	writeFile(t, srcDir, "config.json", `{"version":"1.0","debug":false}`)

	// Nested directories with source code.
	mkdirAll(t, srcDir, "src")
	mkdirAll(t, srcDir, "src/pkg")
	writeFile(t, srcDir, "src/main.go", "package main\n\nimport \"fmt\"\n\nfunc main() {\n\tfmt.Println(\"hello\")\n}\n")
	writeFile(t, srcDir, "src/pkg/lib.go", "package pkg\n\n// Lib is a library function.\nfunc Lib() string { return \"lib\" }\n")

	// Large file: 1 MiB + 1 byte — crosses the 64 KiB block boundary used by
	// the chunked AEAD streaming encryption. Fill with a deterministic pattern
	// so we can verify content after round-trip.
	const largeSize = 1024*1024 + 1
	largeContent := make([]byte, largeSize)
	for i := range largeContent {
		largeContent[i] = byte(i % 251) // prime mod for non-trivial pattern
	}
	writeFileBytes(t, srcDir, "large.bin", largeContent)

	// Valid symlink pointing at a relative target.
	if err := os.Symlink("readme.md", filepath.Join(srcDir, "link-to-readme")); err != nil {
		t.Fatalf("failed to create valid symlink: %v", err)
	}

	// Broken symlink pointing at a nonexistent absolute path.
	if err := os.Symlink("/nonexistent/target", filepath.Join(srcDir, "broken-link")); err != nil {
		t.Fatalf("failed to create broken symlink: %v", err)
	}

	// Hidden file (dot-prefixed).
	writeFile(t, srcDir, ".hidden", "secret stuff\n")

	// Run each compression mode as a subtest.
	modes := []string{"none", "gz", "xz"}
	for _, comp := range modes {
		comp := comp // capture
		t.Run("compression="+comp, func(t *testing.T) {
			outDir := t.TempDir()
			outFile := filepath.Join(outDir, "pipeline-"+comp+".stim")
			password := "integration-test-pw-" + comp

			// Step 1: Collect (walk -> tar -> compress -> encrypt -> file).
			if err := CollectLocalStreaming(srcDir, outFile, comp, password); err != nil {
				t.Fatalf("CollectLocalStreaming(%q) error = %v", comp, err)
			}

			// Step 2: Verify output exists and is non-empty.
			info, err := os.Stat(outFile)
			if err != nil {
				t.Fatalf("output file does not exist: %v", err)
			}
			if info.Size() == 0 {
				t.Fatal("output file is empty")
			}

			// Step 3: Decrypt back into a DataNode.
			dn, err := DecryptStimV2(outFile, password)
			if err != nil {
				t.Fatalf("DecryptStimV2() error = %v", err)
			}

			// Step 4: Verify all regular files exist in the DataNode.
			expectedFiles := []string{
				"readme.md",
				"config.json",
				"src/main.go",
				"src/pkg/lib.go",
				"large.bin",
				".hidden",
			}
			for _, name := range expectedFiles {
				exists, eerr := dn.Exists(name)
				if eerr != nil {
					t.Errorf("Exists(%q) error = %v", name, eerr)
					continue
				}
				if !exists {
					t.Errorf("expected file %q in DataNode but it is missing", name)
				}
			}

			// Verify the valid symlink was included.
			linkExists, _ := dn.Exists("link-to-readme")
			if !linkExists {
				t.Error("expected symlink link-to-readme in DataNode but it is missing")
			}

			// Step 5: Verify large file has correct content (first byte check).
			f, err := dn.Open("large.bin")
			if err != nil {
				t.Fatalf("Open(large.bin) error = %v", err)
			}
			defer f.Close()

			// Read the entire large file and verify size and first byte.
			allData, err := io.ReadAll(f)
			if err != nil {
				t.Fatalf("reading large.bin: %v", err)
			}
			if len(allData) != largeSize {
				t.Errorf("large.bin size = %d, want %d", len(allData), largeSize)
			}
			if len(allData) > 0 && allData[0] != byte(0%251) {
				t.Errorf("large.bin first byte = %d, want %d", allData[0], byte(0%251))
			}

			// Verify content integrity of the whole large file.
			if !bytes.Equal(allData, largeContent) {
				t.Error("large.bin content does not match original after round-trip")
			}

			// Step 6: Verify broken symlink was skipped.
			brokenExists, _ := dn.Exists("broken-link")
			if brokenExists {
				t.Error("broken symlink should have been skipped but was found in DataNode")
			}
		})
	}
}

// TestFullPipeline_WrongPassword_Bad encrypts with one password and attempts
// to decrypt with a different password, verifying that an error is returned.
func TestFullPipeline_WrongPassword_Bad(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test in short mode")
	}

	srcDir := t.TempDir()
	outDir := t.TempDir()

	writeFile(t, srcDir, "secret.txt", "this is confidential\n")

	outFile := filepath.Join(outDir, "wrong-pw.stim")

	// Encrypt with the correct password.
	if err := CollectLocalStreaming(srcDir, outFile, "none", "correct-password"); err != nil {
		t.Fatalf("CollectLocalStreaming() error = %v", err)
	}

	// Attempt to decrypt with the wrong password.
	_, err := DecryptStimV2(outFile, "wrong-password")
	if err == nil {
		t.Fatal("expected error when decrypting with wrong password, got nil")
	}
}

// --- helpers ---

func writeFile(t *testing.T, base, rel, content string) {
	t.Helper()
	path := filepath.Join(base, rel)
	if err := os.WriteFile(path, []byte(content), 0644); err != nil {
		t.Fatalf("failed to write %s: %v", rel, err)
	}
}

func writeFileBytes(t *testing.T, base, rel string, data []byte) {
	t.Helper()
	path := filepath.Join(base, rel)
	if err := os.WriteFile(path, data, 0644); err != nil {
		t.Fatalf("failed to write %s: %v", rel, err)
	}
}

func mkdirAll(t *testing.T, base, rel string) {
	t.Helper()
	path := filepath.Join(base, rel)
	if err := os.MkdirAll(path, 0755); err != nil {
		t.Fatalf("failed to mkdir %s: %v", rel, err)
	}
}

@@ -1,226 +0,0 @@
// mkdemo-abr creates an ABR (Adaptive Bitrate) demo set from a source video.
// It uses ffmpeg to transcode to multiple bitrates, then encrypts each as v3 chunked SMSG.
//
// Usage: mkdemo-abr <input-video> <output-dir> [password]
//
// Output:
//
//	output-dir/manifest.json - ABR manifest listing all variants
//	output-dir/track-1080p.smsg - 1080p variant (5 Mbps)
//	output-dir/track-720p.smsg - 720p variant (2.5 Mbps)
//	output-dir/track-480p.smsg - 480p variant (1 Mbps)
//	output-dir/track-360p.smsg - 360p variant (500 Kbps)
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"

	"forge.lthn.ai/Snider/Borg/pkg/smsg"
)

// Preset defines a quality level for transcoding
type Preset struct {
	Name    string
	Width   int
	Height  int
	Bitrate string // For ffmpeg (e.g., "5M")
	BPS     int    // Bits per second for manifest
}

// Default presets matching ABRPresets in types.go
var presets = []Preset{
	{"1080p", 1920, 1080, "5M", 5000000},
	{"720p", 1280, 720, "2.5M", 2500000},
	{"480p", 854, 480, "1M", 1000000},
	{"360p", 640, 360, "500K", 500000},
}

func main() {
	if len(os.Args) < 3 {
		fmt.Println("Usage: mkdemo-abr <input-video> <output-dir> [password]")
		fmt.Println()
		fmt.Println("Creates ABR variant set from source video using ffmpeg.")
		fmt.Println()
		fmt.Println("Output:")
		fmt.Println("  output-dir/manifest.json    - ABR manifest")
		fmt.Println("  output-dir/track-1080p.smsg - 1080p (5 Mbps)")
		fmt.Println("  output-dir/track-720p.smsg  - 720p (2.5 Mbps)")
		fmt.Println("  output-dir/track-480p.smsg  - 480p (1 Mbps)")
		fmt.Println("  output-dir/track-360p.smsg  - 360p (500 Kbps)")
		os.Exit(1)
	}

	inputFile := os.Args[1]
	outputDir := os.Args[2]

	// Check ffmpeg is available
	if _, err := exec.LookPath("ffmpeg"); err != nil {
		fmt.Println("Error: ffmpeg not found in PATH")
		fmt.Println("Install ffmpeg: https://ffmpeg.org/download.html")
		os.Exit(1)
	}

	// Generate or use provided password
|
|
||||||
var password string
|
|
||||||
if len(os.Args) > 3 {
|
|
||||||
password = os.Args[3]
|
|
||||||
} else {
|
|
||||||
passwordBytes := make([]byte, 24)
|
|
||||||
if _, err := rand.Read(passwordBytes); err != nil {
|
|
||||||
fmt.Printf("Failed to generate password: %v\n", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
password = base64.RawURLEncoding.EncodeToString(passwordBytes)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create output directory
|
|
||||||
if err := os.MkdirAll(outputDir, 0755); err != nil {
|
|
||||||
fmt.Printf("Failed to create output directory: %v\n", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Get title from input filename
|
|
||||||
title := filepath.Base(inputFile)
|
|
||||||
ext := filepath.Ext(title)
|
|
||||||
if ext != "" {
|
|
||||||
title = title[:len(title)-len(ext)]
|
|
||||||
}
|
|
||||||
|
|
||||||
// Create ABR manifest
|
|
||||||
manifest := smsg.NewABRManifest(title)
|
|
||||||
|
|
||||||
fmt.Printf("Creating ABR variants for: %s\n", inputFile)
|
|
||||||
fmt.Printf("Output directory: %s\n", outputDir)
|
|
||||||
fmt.Printf("Password: %s\n\n", password)
|
|
||||||
|
|
||||||
// Process each preset
|
|
||||||
for _, preset := range presets {
|
|
||||||
fmt.Printf("Processing %s (%dx%d @ %s)...\n", preset.Name, preset.Width, preset.Height, preset.Bitrate)
|
|
||||||
|
|
||||||
// Step 1: Transcode with ffmpeg
|
|
||||||
tempFile := filepath.Join(outputDir, fmt.Sprintf("temp-%s.mp4", preset.Name))
|
|
||||||
if err := transcode(inputFile, tempFile, preset); err != nil {
|
|
||||||
fmt.Printf(" Warning: Transcode failed for %s: %v\n", preset.Name, err)
|
|
||||||
fmt.Printf(" Skipping this variant...\n")
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 2: Read transcoded file
|
|
||||||
content, err := os.ReadFile(tempFile)
|
|
||||||
if err != nil {
|
|
||||||
fmt.Printf(" Error reading transcoded file: %v\n", err)
|
|
||||||
os.Remove(tempFile)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 3: Create SMSG message
|
|
||||||
msg := smsg.NewMessage("dapp.fm ABR Demo")
|
|
||||||
msg.Subject = fmt.Sprintf("%s - %s", title, preset.Name)
|
|
||||||
msg.From = "dapp.fm"
|
|
||||||
msg.AddBinaryAttachment(
|
|
||||||
fmt.Sprintf("%s-%s.mp4", strings.ReplaceAll(title, " ", "_"), preset.Name),
|
|
||||||
content,
|
|
||||||
"video/mp4",
|
|
||||||
)
|
|
||||||
|
|
||||||
// Step 4: Create manifest for this variant
|
|
||||||
variantManifest := smsg.NewManifest(title)
|
|
||||||
variantManifest.LicenseType = "perpetual"
|
|
||||||
variantManifest.Format = "dapp.fm/abr-v1"
|
|
||||||
|
|
||||||
// Step 5: Encrypt with v3 chunked format
|
|
||||||
params := &smsg.StreamParams{
|
|
||||||
License: password,
|
|
||||||
ChunkSize: smsg.DefaultChunkSize, // 1MB chunks
|
|
||||||
}
|
|
||||||
|
|
||||||
encrypted, err := smsg.EncryptV3(msg, params, variantManifest)
|
|
||||||
if err != nil {
|
|
||||||
fmt.Printf(" Error encrypting: %v\n", err)
|
|
||||||
os.Remove(tempFile)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 6: Write SMSG file
|
|
||||||
smsgFile := filepath.Join(outputDir, fmt.Sprintf("track-%s.smsg", preset.Name))
|
|
||||||
if err := os.WriteFile(smsgFile, encrypted, 0644); err != nil {
|
|
||||||
fmt.Printf(" Error writing SMSG: %v\n", err)
|
|
||||||
os.Remove(tempFile)
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 7: Get chunk count from header
|
|
||||||
header, err := smsg.GetV3Header(encrypted)
|
|
||||||
if err != nil {
|
|
||||||
fmt.Printf(" Warning: Could not read header: %v\n", err)
|
|
||||||
}
|
|
||||||
chunkCount := 0
|
|
||||||
if header != nil && header.Chunked != nil {
|
|
||||||
chunkCount = header.Chunked.TotalChunks
|
|
||||||
}
|
|
||||||
|
|
||||||
// Step 8: Add variant to manifest
|
|
||||||
variant := smsg.Variant{
|
|
||||||
Name: preset.Name,
|
|
||||||
Bandwidth: preset.BPS,
|
|
||||||
Width: preset.Width,
|
|
||||||
Height: preset.Height,
|
|
||||||
Codecs: "avc1.640028,mp4a.40.2",
|
|
||||||
URL: fmt.Sprintf("track-%s.smsg", preset.Name),
|
|
||||||
ChunkCount: chunkCount,
|
|
||||||
FileSize: int64(len(encrypted)),
|
|
||||||
}
|
|
||||||
manifest.AddVariant(variant)
|
|
||||||
|
|
||||||
// Clean up temp file
|
|
||||||
os.Remove(tempFile)
|
|
||||||
|
|
||||||
fmt.Printf(" Created: %s (%d bytes, %d chunks)\n", smsgFile, len(encrypted), chunkCount)
|
|
||||||
}
|
|
||||||
|
|
||||||
if len(manifest.Variants) == 0 {
|
|
||||||
fmt.Println("\nError: No variants created. Check ffmpeg output.")
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Write ABR manifest
|
|
||||||
manifestPath := filepath.Join(outputDir, "manifest.json")
|
|
||||||
if err := smsg.WriteABRManifest(manifest, manifestPath); err != nil {
|
|
||||||
fmt.Printf("Failed to write manifest: %v\n", err)
|
|
||||||
os.Exit(1)
|
|
||||||
}
|
|
||||||
|
|
||||||
fmt.Printf("\n✓ Created ABR manifest: %s\n", manifestPath)
|
|
||||||
fmt.Printf("✓ Variants: %d\n", len(manifest.Variants))
|
|
||||||
fmt.Printf("✓ Default: %s\n", manifest.Variants[manifest.DefaultIdx].Name)
|
|
||||||
fmt.Printf("\nMaster Password: %s\n", password)
|
|
||||||
fmt.Println("\nStore this password securely - it decrypts ALL variants!")
|
|
||||||
}
|
|
||||||
|
|
||||||
// transcode uses ffmpeg to transcode the input to the specified preset
|
|
||||||
func transcode(input, output string, preset Preset) error {
|
|
||||||
args := []string{
|
|
||||||
"-i", input,
|
|
||||||
"-vf", fmt.Sprintf("scale=%d:%d:force_original_aspect_ratio=decrease,pad=%d:%d:(ow-iw)/2:(oh-ih)/2",
|
|
||||||
preset.Width, preset.Height, preset.Width, preset.Height),
|
|
||||||
"-c:v", "libx264",
|
|
||||||
"-preset", "medium",
|
|
||||||
"-b:v", preset.Bitrate,
|
|
||||||
"-c:a", "aac",
|
|
||||||
"-b:a", "128k",
|
|
||||||
"-movflags", "+faststart",
|
|
||||||
"-y", // Overwrite output
|
|
||||||
output,
|
|
||||||
}
|
|
||||||
|
|
||||||
cmd := exec.Command("ffmpeg", args...)
|
|
||||||
cmd.Stderr = os.Stderr // Show ffmpeg output for debugging
|
|
||||||
|
|
||||||
return cmd.Run()
|
|
||||||
}
|
|
||||||
|
|
@ -1,129 +0,0 @@
// mkdemo-v3 creates a v3 chunked SMSG file for streaming demos
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
	"os"
	"path/filepath"

	"forge.lthn.ai/Snider/Borg/pkg/smsg"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Println("Usage: mkdemo-v3 <input-media-file> <output-smsg-file> [license] [chunk-size-kb]")
		fmt.Println("")
		fmt.Println("Creates a v3 chunked SMSG file for streaming demos.")
		fmt.Println("V3 uses rolling keys derived from: LTHN(date:license:fingerprint)")
		fmt.Println("")
		fmt.Println("Options:")
		fmt.Println("  license        The license key (default: auto-generated)")
		fmt.Println("  chunk-size-kb  Chunk size in KB (default: 512)")
		fmt.Println("")
		fmt.Println("Note: V3 files work for 24-48 hours from creation (rolling keys).")
		os.Exit(1)
	}

	inputFile := os.Args[1]
	outputFile := os.Args[2]

	// Read input file
	content, err := os.ReadFile(inputFile)
	if err != nil {
		fmt.Printf("Failed to read input file: %v\n", err)
		os.Exit(1)
	}

	// License (acts as password in v3)
	var license string
	if len(os.Args) > 3 {
		license = os.Args[3]
	} else {
		// Generate cryptographically secure license
		licenseBytes := make([]byte, 24)
		if _, err := rand.Read(licenseBytes); err != nil {
			fmt.Printf("Failed to generate license: %v\n", err)
			os.Exit(1)
		}
		license = base64.RawURLEncoding.EncodeToString(licenseBytes)
	}

	// Chunk size (default 512KB for good streaming granularity)
	chunkSize := 512 * 1024
	if len(os.Args) > 4 {
		var chunkKB int
		if _, err := fmt.Sscanf(os.Args[4], "%d", &chunkKB); err == nil && chunkKB > 0 {
			chunkSize = chunkKB * 1024
		}
	}

	// Create manifest
	title := filepath.Base(inputFile)
	ext := filepath.Ext(title)
	if ext != "" {
		title = title[:len(title)-len(ext)]
	}
	manifest := smsg.NewManifest(title)
	manifest.LicenseType = "streaming"
	manifest.Format = "dapp.fm/v3-chunked"

	// Detect MIME type
	mimeType := "video/mp4"
	switch ext {
	case ".mp3":
		mimeType = "audio/mpeg"
	case ".wav":
		mimeType = "audio/wav"
	case ".flac":
		mimeType = "audio/flac"
	case ".webm":
		mimeType = "video/webm"
	case ".ogg":
		mimeType = "audio/ogg"
	}

	// Create message with attachment
	msg := smsg.NewMessage("dapp.fm V3 Streaming Demo - Decrypt-while-downloading enabled")
	msg.Subject = "V3 Chunked Streaming"
	msg.From = "dapp.fm"
	msg.AddBinaryAttachment(
		filepath.Base(inputFile),
		content,
		mimeType,
	)

	// Create stream params with chunking enabled
	params := &smsg.StreamParams{
		License:     license,
		Fingerprint: "", // Empty for demo (works for any device)
		Cadence:     smsg.CadenceDaily,
		ChunkSize:   chunkSize,
	}

	// Encrypt with v3 chunked format
	encrypted, err := smsg.EncryptV3(msg, params, manifest)
	if err != nil {
		fmt.Printf("Failed to encrypt: %v\n", err)
		os.Exit(1)
	}

	// Write output
	if err := os.WriteFile(outputFile, encrypted, 0644); err != nil {
		fmt.Printf("Failed to write output: %v\n", err)
		os.Exit(1)
	}

	// Calculate chunk count
	numChunks := (len(content) + chunkSize - 1) / chunkSize

	fmt.Printf("Created: %s (%d bytes)\n", outputFile, len(encrypted))
	fmt.Printf("Format: v3 chunked\n")
	fmt.Printf("Chunk Size: %d KB\n", chunkSize/1024)
	fmt.Printf("Total Chunks: ~%d\n", numChunks)
	fmt.Printf("License: %s\n", license)
	fmt.Println("")
	fmt.Println("This license works for 24-48 hours from creation.")
	fmt.Println("Use the license in the streaming demo to decrypt.")
}
@ -8,7 +8,7 @@ import (
 	"os"
 	"path/filepath"

-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/smsg"
 )

 func main() {

@ -42,15 +42,19 @@ func main() {
 		password = base64.RawURLEncoding.EncodeToString(passwordBytes)
 	}

-	// Create manifest with filename as title
-	title := filepath.Base(inputFile)
-	ext := filepath.Ext(title)
-	if ext != "" {
-		title = title[:len(title)-len(ext)]
-	}
-	manifest := smsg.NewManifest(title)
+	// Create manifest
+	manifest := smsg.NewManifest("It Feels So Good (The Conductor & The Cowboy's Amnesia Mix)")
+	manifest.Artist = "Sonique"
 	manifest.LicenseType = "perpetual"
 	manifest.Format = "dapp.fm/v1"
+	manifest.ReleaseType = "single"
+	manifest.Duration = 253 // 4:13
+	manifest.AddTrack("It Feels So Good (The Conductor & The Cowboy's Amnesia Mix)", 0)
+
+	// Artist links - direct to artist, skip the middlemen
+	// "home" = preferred landing page, artist name should always link here
+	manifest.AddLink("home", "https://linktr.ee/conductorandcowboy")
+	manifest.AddLink("beatport", "https://www.beatport.com/artist/the-conductor-the-cowboy/635335")

 	// Create message with attachment (using binary attachment for v2 format)
 	msg := smsg.NewMessage("Welcome to dapp.fm - Zero-Trust DRM for the open web.")

@ -16,7 +16,6 @@ packaging their contents into a single file, and managing the data within.`,
 	}

 	rootCmd.PersistentFlags().BoolP("verbose", "v", false, "Enable verbose logging")
-	rootCmd.PersistentFlags().BoolP("quiet", "q", false, "Suppress non-error output")
 	return rootCmd
 }

@ -4,7 +4,7 @@ import (
 	"os"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/tim"
 	"github.com/spf13/cobra"
 )

@ -7,7 +7,7 @@ import (
 	"path/filepath"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/tim"
 )

 func TestRunCmd_Good(t *testing.T) {

@ -6,9 +6,9 @@ import (
 	"os"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/tarfs"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/tarfs"

 	"github.com/spf13/cobra"
 )
Binary file not shown.
1219 demo/index.html
File diff suppressed because it is too large
BIN demo/stmf.wasm
Binary file not shown.
@ -1,281 +0,0 @@
# IPFS Distribution Guide

This guide explains how to distribute your encrypted `.smsg` content via IPFS (InterPlanetary File System) for permanent, decentralized hosting.

## Why IPFS?

IPFS is ideal for dapp.fm content because:

- **Permanent links** - Content-addressed (CID) means the URL never changes
- **No hosting costs** - Pin with free services or self-host
- **Censorship resistant** - No single point of failure
- **Global CDN** - Content served from nearest peer
- **Perfect for archival** - Your content survives even if you disappear

Combined with password-as-license, IPFS creates truly permanent media distribution:

```
Artist uploads to IPFS → Fan downloads from anywhere → Password unlocks forever
```

## Quick Start

### 1. Install IPFS

**macOS:**
```bash
brew install ipfs
```

**Linux:**
```bash
wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
tar xvfz kubo_v0.24.0_linux-amd64.tar.gz
sudo mv kubo/ipfs /usr/local/bin/
```

**Windows:**
Download from https://dist.ipfs.tech/#kubo

### 2. Initialize and Start

```bash
ipfs init
ipfs daemon
```

### 3. Add Your Content

```bash
# Create your encrypted content first
go run ./cmd/mkdemo my-album.mp4 my-album.smsg

# Add to IPFS
ipfs add my-album.smsg
# Output: added QmX...abc my-album.smsg

# Your content is now available at:
# - Local: http://localhost:8080/ipfs/QmX...abc
# - Gateway: https://ipfs.io/ipfs/QmX...abc
```

## Distribution Workflow

### For Artists

```bash
# 1. Package your media
go run ./cmd/mkdemo album.mp4 album.smsg
# Save the password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7

# 2. Add to IPFS
ipfs add album.smsg
# added QmYourContentCID album.smsg

# 3. Pin for persistence (choose one):

# Option A: Pin locally (requires running node)
ipfs pin add QmYourContentCID

# Option B: Use Pinata (free tier: 1GB)
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" \
  -H "Authorization: Bearer YOUR_JWT" \
  -H "Content-Type: application/json" \
  -d '{"hashToPin": "QmYourContentCID"}'

# Option C: Use web3.storage (free tier: 5GB)
# Upload at https://web3.storage

# 4. Share with fans
# CID: QmYourContentCID
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# Gateway URL: https://ipfs.io/ipfs/QmYourContentCID
```

### For Fans

```bash
# Download via any gateway
curl -o album.smsg https://ipfs.io/ipfs/QmYourContentCID

# Or via local node (faster if running)
ipfs get QmYourContentCID -o album.smsg

# Play with password in browser demo or native app
```

## IPFS Gateways

Public gateways for sharing (no IPFS node required):

| Gateway | URL Pattern | Notes |
|---------|-------------|-------|
| ipfs.io | `https://ipfs.io/ipfs/{CID}` | Official, reliable |
| dweb.link | `https://{CID}.ipfs.dweb.link` | Subdomain style |
| cloudflare | `https://cloudflare-ipfs.com/ipfs/{CID}` | Fast, cached |
| w3s.link | `https://{CID}.ipfs.w3s.link` | web3.storage |
| nftstorage.link | `https://{CID}.ipfs.nftstorage.link` | NFT.storage |

**Example URLs for CID `QmX...abc`:**
```
https://ipfs.io/ipfs/QmX...abc
https://QmX...abc.ipfs.dweb.link
https://cloudflare-ipfs.com/ipfs/QmX...abc
```

## Pinning Services

Content on IPFS is only available while someone is hosting it. Use pinning services for persistence:

### Free Options

| Service | Free Tier | Link |
|---------|-----------|------|
| Pinata | 1 GB | https://pinata.cloud |
| web3.storage | 5 GB | https://web3.storage |
| NFT.storage | Unlimited* | https://nft.storage |
| Filebase | 5 GB | https://filebase.com |

*NFT.storage is designed for NFT data but works for any content.

### Pin via CLI

```bash
# Pinata
export PINATA_JWT="your-jwt-token"
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" \
  -H "Authorization: Bearer $PINATA_JWT" \
  -H "Content-Type: application/json" \
  -d '{"hashToPin": "QmYourCID", "pinataMetadata": {"name": "my-album.smsg"}}'

# web3.storage (using w3 CLI)
npm install -g @web3-storage/w3cli
w3 login your@email.com
w3 up my-album.smsg
```

## Integration with Demo Page

The demo page can load content directly from IPFS gateways:

```javascript
// In the demo page, use gateway URL
const ipfsCID = 'QmYourContentCID';
const gatewayUrl = `https://ipfs.io/ipfs/${ipfsCID}`;

// Fetch and decrypt
const response = await fetch(gatewayUrl);
const bytes = new Uint8Array(await response.arrayBuffer());
const msg = await BorgSMSG.decryptBinary(bytes, password);
```

Or use the Fan tab with the IPFS gateway URL directly.

## Best Practices

### 1. Always Pin Your Content

IPFS garbage-collects unpinned content. Always pin important files:

```bash
ipfs pin add QmYourCID
# Or use a pinning service
```

### 2. Use Multiple Pins

Pin with 2-3 services for redundancy:

```bash
# Pin locally
ipfs pin add QmYourCID

# Also pin with Pinata
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" ...

# And web3.storage as backup
w3 up my-album.smsg
```

### 3. Share CID + Password Separately

```
Download: https://ipfs.io/ipfs/QmYourCID
License: [sent via email/DM after purchase]
```

### 4. Use IPNS for Updates (Optional)

IPNS lets you update content while keeping the same URL:

```bash
# Create IPNS name
ipfs name publish QmYourCID
# Published to k51...xyz

# Your content is now at:
# https://ipfs.io/ipns/k51...xyz

# Update to new version later:
ipfs name publish QmNewVersionCID
```

## Example: Full Album Release

```bash
# 1. Create encrypted album
go run ./cmd/mkdemo my-album.mp4 my-album.smsg
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7

# 2. Add to IPFS
ipfs add my-album.smsg
# added QmAlbumCID my-album.smsg

# 3. Pin with multiple services
ipfs pin add QmAlbumCID
w3 up my-album.smsg

# 4. Create release page
cat > release.html << 'EOF'
<!DOCTYPE html>
<html>
<head><title>My Album - Download</title></head>
<body>
  <h1>My Album</h1>
  <p>Download: <a href="https://ipfs.io/ipfs/QmAlbumCID">IPFS</a></p>
  <p>After purchase, you'll receive your license key via email.</p>
  <p><a href="https://demo.dapp.fm">Play with license key</a></p>
</body>
</html>
EOF

# 5. Host release page on IPFS too!
ipfs add release.html
# added QmReleaseCID release.html
# Share: https://ipfs.io/ipfs/QmReleaseCID
```

## Troubleshooting

### Content Not Loading

1. **Check if pinned**: `ipfs pin ls | grep QmYourCID`
2. **Try different gateway**: Some gateways cache slowly
3. **Check daemon running**: `ipfs swarm peers` should show peers

### Slow Downloads

1. Use a faster gateway (cloudflare-ipfs.com is often fastest)
2. Run your own IPFS node for direct access
3. Pre-warm gateways by accessing content once

### CID Changed After Re-adding

IPFS CIDs are content-addressed. If you modify the file, the CID changes. For the same content, the CID is always identical.

## Resources

- [IPFS Documentation](https://docs.ipfs.tech/)
- [Pinata Docs](https://docs.pinata.cloud/)
- [web3.storage Docs](https://web3.storage/docs/)
- [IPFS Gateway Checker](https://ipfs.github.io/public-gateway-checker/)
@ -1,497 +0,0 @@
# Payment Integration Guide

This guide shows how to sell your encrypted `.smsg` content and deliver license keys (passwords) to customers using popular payment processors.

## Overview

The dapp.fm model is simple:

```
1. Customer pays via Stripe/Gumroad/PayPal
2. Payment processor triggers webhook or delivers digital product
3. Customer receives password (license key)
4. Customer downloads .smsg from your CDN/IPFS
5. Customer decrypts with password - done forever
```

No license servers, no accounts, no ongoing infrastructure.
|
|
||||||
|
|
||||||
## Stripe Integration
|
|
||||||
|
|
||||||
### Option 1: Stripe Payment Links (Easiest)
|
|
||||||
|
|
||||||
No code required - use Stripe's hosted checkout:
|
|
||||||
|
|
||||||
1. Create a Payment Link in Stripe Dashboard
|
|
||||||
2. Set up a webhook to email the password on successful payment
|
|
||||||
3. Host your `.smsg` file anywhere (CDN, IPFS, S3)
|
|
||||||
|
|
||||||
**Webhook endpoint (Node.js/Express):**
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
const express = require('express');
|
|
||||||
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
|
|
||||||
const nodemailer = require('nodemailer');
|
|
||||||
|
|
||||||
const app = express();
|
|
||||||
|
|
||||||
// Your content passwords (store securely!)
|
|
||||||
const PRODUCTS = {
|
|
||||||
'prod_ABC123': {
|
|
||||||
name: 'My Album',
|
|
||||||
password: 'PMVXogAJNVe_DDABfTmLYztaJAzsD0R7',
|
|
||||||
downloadUrl: 'https://ipfs.io/ipfs/QmYourCID'
|
|
||||||
}
|
|
||||||
};
|
|
||||||
|
|
||||||
app.post('/webhook', express.raw({type: 'application/json'}), async (req, res) => {
|
|
||||||
const sig = req.headers['stripe-signature'];
|
|
||||||
const endpointSecret = process.env.STRIPE_WEBHOOK_SECRET;
|
|
||||||
|
|
||||||
let event;
|
|
||||||
try {
|
|
||||||
event = stripe.webhooks.constructEvent(req.body, sig, endpointSecret);
|
|
||||||
} catch (err) {
|
|
||||||
return res.status(400).send(`Webhook Error: ${err.message}`);
|
|
||||||
}
|
|
||||||
|
|
||||||
if (event.type === 'checkout.session.completed') {
|
|
||||||
const session = event.data.object;
|
|
||||||
const customerEmail = session.customer_details.email;
|
|
||||||
const productId = session.metadata.product_id;
|
|
||||||
const product = PRODUCTS[productId];
|
|
||||||
|
|
||||||
if (product) {
|
|
||||||
await sendLicenseEmail(customerEmail, product);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
res.json({received: true});
|
|
||||||
});
|
|
||||||
|
|
||||||
async function sendLicenseEmail(email, product) {
|
|
||||||
const transporter = nodemailer.createTransport({
|
|
||||||
// Configure your email provider
|
|
||||||
service: 'gmail',
|
|
||||||
auth: {
|
|
||||||
user: process.env.EMAIL_USER,
|
|
||||||
pass: process.env.EMAIL_PASS
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
await transporter.sendMail({
|
|
||||||
from: 'artist@example.com',
|
|
||||||
to: email,
|
|
||||||
subject: `Your License Key for ${product.name}`,
|
|
||||||
html: `
|
|
||||||
<h1>Thank you for your purchase!</h1>
|
|
||||||
<p><strong>Download:</strong> <a href="${product.downloadUrl}">${product.name}</a></p>
|
|
||||||
<p><strong>License Key:</strong> <code>${product.password}</code></p>
|
|
||||||
<p><strong>How to play:</strong></p>
|
|
||||||
<ol>
|
|
||||||
<li>Download the .smsg file from the link above</li>
|
|
||||||
<li>Go to <a href="https://demo.dapp.fm">demo.dapp.fm</a></li>
|
|
||||||
<li>Click "Fan" tab, then "Unlock Licensed Content"</li>
|
|
||||||
<li>Paste the file and enter your license key</li>
|
|
||||||
</ol>
|
|
||||||
<p>This is your permanent license - save this email!</p>
|
|
||||||
`
|
|
||||||
});
|
|
||||||
}
|
|
||||||
|
|
||||||
app.listen(3000);
|
|
||||||
```
|
|
||||||
|
|
||||||
### Option 2: Stripe Checkout Session (More Control)
|
|
||||||
|
|
||||||
```javascript
|
|
||||||
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
|
|
||||||
|
|
||||||
// Create checkout session
|
|
||||||
app.post('/create-checkout', async (req, res) => {
|
|
||||||
const { productId } = req.body;
|
|
||||||
|
|
||||||
const session = await stripe.checkout.sessions.create({
|
|
||||||
payment_method_types: ['card'],
|
|
||||||
line_items: [{
|
|
||||||
price: 'price_ABC123', // Your Stripe price ID
|
|
||||||
quantity: 1,
|
|
||||||
}],
|
|
||||||
mode: 'payment',
|
|
||||||
success_url: 'https://yoursite.com/success?session_id={CHECKOUT_SESSION_ID}',
|
|
||||||
cancel_url: 'https://yoursite.com/cancel',
|
|
||||||
metadata: {
|
|
||||||
product_id: productId
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
res.json({ url: session.url });
|
|
||||||
});
|
|
||||||
|
|
||||||
// Success page - show license after payment
|
|
||||||
app.get('/success', async (req, res) => {
|
|
||||||
const session = await stripe.checkout.sessions.retrieve(req.query.session_id);
|
|
||||||
|
|
||||||
if (session.payment_status === 'paid') {
|
|
||||||
const product = PRODUCTS[session.metadata.product_id];
|
|
||||||
res.send(`
|
|
||||||
<h1>Thank you!</h1>
|
|
||||||
<p>Download: <a href="${product.downloadUrl}">${product.name}</a></p>
|
|
||||||
<p>License Key: <code>${product.password}</code></p>
|
|
||||||
`);
|
|
||||||
} else {
|
|
||||||
res.send('Payment not completed');
|
|
||||||
}
|
|
||||||
});
|
|
||||||
```
|
|
||||||
|
|
||||||
## Gumroad Integration

Gumroad is perfect for artists - it handles payments, delivery, and customer management.

### Setup

1. Create a Digital Product on Gumroad
2. Upload a text file or PDF containing the password
3. Set your `.smsg` download URL in the product description
4. Gumroad delivers the password file on purchase

### Product Setup

**Product Description:**

```
My Album - Encrypted Digital Download

After purchase, you'll receive:
1. A license key (in the download)
2. Download link for the .smsg file

How to play:
1. Download the .smsg file: https://ipfs.io/ipfs/QmYourCID
2. Go to https://demo.dapp.fm
3. Click "Fan" → "Unlock Licensed Content"
4. Enter your license key from the PDF
```

**Delivered File (license.txt):**

```
Your License Key: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7

Download your content: https://ipfs.io/ipfs/QmYourCID

This is your permanent license - keep this file safe!
The content works offline forever with this key.

Need help? Visit https://demo.dapp.fm
```

### Gumroad Ping (Webhook)

For automated delivery, use Gumroad's Ping feature:

```javascript
const express = require('express');
const app = express();

app.use(express.urlencoded({ extended: true }));

// Gumroad sends POST to this endpoint on sale
app.post('/gumroad-ping', (req, res) => {
  const {
    seller_id,
    product_id,
    email,
    full_name,
    purchaser_id
  } = req.body;

  // Verify it's from Gumroad (check seller_id matches yours)
  if (seller_id !== process.env.GUMROAD_SELLER_ID) {
    return res.status(403).send('Invalid seller');
  }

  const product = PRODUCTS[product_id];
  if (product) {
    // Send custom email with password
    sendLicenseEmail(email, product);
  }

  res.send('OK');
});
```

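The webhook handlers in this guide call a `sendLicenseEmail` helper that is never defined. A minimal sketch of what it might look like is below; the helper name, the message shape, and the commented-out Resend call are illustrative assumptions, not part of any provider's API. Building the message in a pure function keeps it easy to test.

```javascript
// Hypothetical helper used by the webhook handlers above.
// buildLicenseEmail is pure; wire its result into whichever mail
// provider you use (Resend, Nodemailer, SES, ...).
function buildLicenseEmail(to, product) {
  return {
    to,
    subject: `Your License Key for ${product.name}`,
    text: [
      `Download: ${product.downloadUrl}`,
      `License Key: ${product.password}`,
      '',
      'This is your permanent license - keep it safe!'
    ].join('\n')
  };
}

async function sendLicenseEmail(to, product) {
  const msg = buildLicenseEmail(to, product);
  // Swap in your provider here, e.g.:
  // await resend.emails.send({ from: 'artist@yoursite.com', ...msg });
  console.log('would send:', msg.subject, 'to', msg.to);
}

sendLicenseEmail('fan@example.com', {
  name: 'My Album',
  downloadUrl: 'https://ipfs.io/ipfs/QmYourCID',
  password: 'PMVXogAJNVe_DDABfTmLYztaJAzsD0R7'
});
```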
## PayPal Integration

### PayPal Buttons + IPN

```html
<!-- PayPal Buy Button -->
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
  <input type="hidden" name="cmd" value="_xclick">
  <input type="hidden" name="business" value="artist@example.com">
  <input type="hidden" name="item_name" value="My Album - Digital Download">
  <input type="hidden" name="item_number" value="album-001">
  <input type="hidden" name="amount" value="9.99">
  <input type="hidden" name="currency_code" value="USD">
  <input type="hidden" name="notify_url" value="https://yoursite.com/paypal-ipn">
  <input type="hidden" name="return" value="https://yoursite.com/thank-you">
  <input type="submit" value="Buy Now - $9.99">
</form>
```

**IPN Handler:**

```javascript
const express = require('express');
const axios = require('axios');
const app = express();

app.post('/paypal-ipn', express.urlencoded({ extended: true }), async (req, res) => {
  // Verify with PayPal
  const verifyUrl = 'https://ipnpb.paypal.com/cgi-bin/webscr';
  const verifyBody = 'cmd=_notify-validate&' + new URLSearchParams(req.body).toString();

  const response = await axios.post(verifyUrl, verifyBody);

  if (response.data === 'VERIFIED' && req.body.payment_status === 'Completed') {
    const email = req.body.payer_email;
    const itemNumber = req.body.item_number;
    const product = PRODUCTS[itemNumber];

    if (product) {
      await sendLicenseEmail(email, product);
    }
  }

  res.send('OK');
});
```

## Ko-fi Integration

Ko-fi is great for tips and single purchases.

### Setup

1. Enable "Commissions" or "Shop" on Ko-fi
2. Create a product with the license key in the thank-you message
3. Link to your .smsg download

**Ko-fi Thank You Message:**

```
Thank you for your purchase!

Your License Key: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7

Download: https://ipfs.io/ipfs/QmYourCID

Play at: https://demo.dapp.fm (Fan → Unlock Licensed Content)
```

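Ko-fi also offers webhooks if you want automated delivery instead of a static thank-you message. A minimal sketch of a payload parser is below: it assumes Ko-fi's documented webhook shape (a form field named `data` holding JSON with a `verification_token`); verify the field names against your own test pings before relying on them.

```javascript
// Sketch of Ko-fi webhook parsing (field names assumed from Ko-fi's
// webhook docs - confirm with a test ping from your account).
function parseKofiPing(body, expectedToken) {
  const data = JSON.parse(body.data);
  if (data.verification_token !== expectedToken) {
    return null; // not from your Ko-fi account - ignore
  }
  return { email: data.email, type: data.type };
}

const sample = {
  data: JSON.stringify({
    verification_token: 'my-token',
    email: 'fan@example.com',
    type: 'Shop Order'
  })
};
console.log(parseKofiPing(sample, 'my-token'));
```

On a valid ping you would then email the license key, exactly as in the Gumroad handler.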
## Serverless Options

### Vercel/Netlify Functions

No server needed - use serverless functions:

```javascript
// api/stripe-webhook.js (Vercel)
import Stripe from 'stripe';
import { Resend } from 'resend';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
const resend = new Resend(process.env.RESEND_API_KEY);

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).end();
  }

  const sig = req.headers['stripe-signature'];
  const event = stripe.webhooks.constructEvent(
    req.body,
    sig,
    process.env.STRIPE_WEBHOOK_SECRET
  );

  if (event.type === 'checkout.session.completed') {
    const session = event.data.object;

    await resend.emails.send({
      from: 'artist@yoursite.com',
      to: session.customer_details.email,
      subject: 'Your License Key',
      html: `
        <p>Download: <a href="https://ipfs.io/ipfs/QmYourCID">My Album</a></p>
        <p>License Key: <code>PMVXogAJNVe_DDABfTmLYztaJAzsD0R7</code></p>
      `
    });
  }

  res.json({ received: true });
}

export const config = {
  api: { bodyParser: false }
};
```

## Manual Workflow (No Code)

For artists who don't want to set up webhooks:

### Using Email

1. **Gumroad/Ko-fi**: Set the product to require an email address
2. **Manual delivery**: Check sales daily, email passwords manually
3. **Template**:

```
Subject: Your License for [Album Name]

Hi [Name],

Thank you for your purchase!

Download: [IPFS/CDN link]
License Key: [password]

How to play:
1. Download the .smsg file
2. Go to demo.dapp.fm
3. Fan tab → Unlock Licensed Content
4. Enter your license key

Enjoy! This license works forever.

[Artist Name]
```

### Using Discord/Telegram

1. Sell via Gumroad (free tier)
2. Require customers to join your Discord/Telegram
3. Deliver license keys via bot or manually
4. Community-building bonus!

## Security Best Practices

### 1. One Password Per Product

Don't reuse passwords across products:

```javascript
const PRODUCTS = {
  'album-2024': { password: 'unique-key-1' },
  'album-2023': { password: 'unique-key-2' },
  'single-summer': { password: 'unique-key-3' }
};
```

### 2. Environment Variables

Never hardcode passwords in source:

```bash
# .env
ALBUM_2024_PASSWORD=PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
STRIPE_SECRET_KEY=sk_live_...
```

### 3. Webhook Verification

Always verify that webhooks come from the payment provider:

```javascript
// Stripe: signature check throws on mismatch
stripe.webhooks.constructEvent(body, sig, secret);

// Gumroad: compare the seller_id field against your own
if (seller_id !== MY_SELLER_ID) reject();

// PayPal: echo the IPN body back to the _notify-validate endpoint
// and require a "VERIFIED" response
```

### 4. HTTPS Only

All webhook endpoints must use HTTPS.

## Pricing Strategies

### Direct Sale (Perpetual License)

- Customer pays once, owns forever
- Single password for all buyers
- Best for: albums, films, books

### Time-Limited (Streaming/Rental)

Use the dapp.fm Re-Key feature:

1. Encrypt the master copy with a master password
2. On purchase, re-key with a customer-specific password + expiry
3. Deliver a unique password per customer

```javascript
// On purchase webhook
const customerPassword = generateUniquePassword();
const expiry = Date.now() + (24 * 60 * 60 * 1000); // 24 hours

// Use WASM or Go to re-key
const customerVersion = await rekeyContent(masterSmsg, masterPassword, customerPassword, expiry);

// Deliver customer-specific file + password
```

### Tiered Access

Different passwords for different tiers:

```javascript
const TIERS = {
  'preview': { password: 'preview-key', expiry: '30s' },
  'rental': { password: 'rental-key', expiry: '7d' },
  'own': { password: 'perpetual-key', expiry: null }
};
```

## Example: Complete Stripe Setup

```bash
# 1. Create your content
go run ./cmd/mkdemo album.mp4 album.smsg
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7

# 2. Upload to IPFS
ipfs add album.smsg
# QmAlbumCID

# 3. Create Stripe product
# Dashboard → Products → Add Product
# Name: My Album
# Price: $9.99

# 4. Create Payment Link
# Dashboard → Payment Links → New
# Select your product
# Get link: https://buy.stripe.com/xxx

# 5. Set up webhook
# Dashboard → Developers → Webhooks → Add endpoint
# URL: https://yoursite.com/api/stripe-webhook
# Events: checkout.session.completed

# 6. Deploy webhook handler (Vercel example)
vercel deploy

# 7. Share payment link
# Fans click → Pay → Get email with password → Download → Play forever
```

## Resources

- [Stripe Webhooks](https://stripe.com/docs/webhooks)
- [Gumroad Ping](https://help.gumroad.com/article/149-ping)
- [PayPal IPN](https://developer.paypal.com/docs/ipn/)
- [Resend (Email API)](https://resend.com/)
- [Vercel Functions](https://vercel.com/docs/functions)

# Borg Production Backup Upgrade — Design Document

**Date:** 2026-02-21
**Status:** Implemented
**Approach:** Bottom-Up Refactor

## Problem Statement

Borg's `collect local` command fails on large directories because DataNode loads
everything into RAM. The UI spinner floods non-TTY output. Broken symlinks crash
the collection pipeline. Key derivation uses bare SHA-256. These issues prevent
Borg from being used for production backup workflows.

## Goals

1. Make `collect local` work reliably on large directories (10GB+)
2. Handle symlinks properly (skip broken, follow/store valid)
3. Add quiet/scripted mode for cron and pipeline use
4. Harden encryption key derivation (Argon2id)
5. Clean up the library for external consumers

## Non-Goals

- Full core/go-* package integration (deferred — circular dependency risk since core imports Borg)
- New CLI commands beyond fixing existing ones
- Network transport or remote sync features
- GUI or web interface

## Architecture

### Current Flow (Broken for Large Dirs)

```
Walk directory → Load ALL files into DataNode (RAM) → Compress → Encrypt → Write
```

### New Flow (Streaming)

```
Walk directory → tar.Writer stream → compress stream → chunked encrypt → output file
```

DataNode remains THE core abstraction — the I/O sandbox that keeps everything safe
and portable. The streaming path bypasses DataNode for the `collect local` pipeline
only, while DataNode continues to serve all other use cases (programmatic access,
format conversion, inspection).

## Design Sections

### 1. DataNode Refactor

DataNode gains a `ToTarWriter(w io.Writer)` method for streaming out its contents
without buffering the entire archive. This is the bridge between DataNode's sandbox
model and streaming I/O.

New symlink handling:

| Symlink State | Behaviour |
|---------------|-----------|
| Valid, points inside DataNode root | Store as symlink entry |
| Valid, points outside DataNode root | Follow and store target content |
| Broken (dangling) | Skip with warning (configurable via `SkipBrokenSymlinks`) |

The `AddPath` method gets an options struct:

```go
type AddPathOptions struct {
    SkipBrokenSymlinks bool     // default: true
    FollowSymlinks     bool     // default: false (store as symlinks)
    ExcludePatterns    []string
}
```

### 2. UI & Logger Cleanup

Replace direct spinner writes with a `Progress` interface:

```go
type Progress interface {
    Start(label string)
    Update(current, total int64)
    Finish(label string)
    Log(level, msg string, args ...any)
}
```

Two implementations:

- **InteractiveProgress** — spinner + progress bar (when `isatty(stdout)`)
- **QuietProgress** — structured log lines only (cron, pipes, `--quiet` flag)

TTY detection at startup selects the implementation. All existing `ui.Spinner` and
`fmt.Printf` calls in library code get replaced with `Progress` method calls.

A new `--quiet` / `-q` flag on all commands suppresses non-error output.

### 3. TIM Streaming Encryption

ChaCha20-Poly1305 is an AEAD — it needs the full plaintext to compute the auth tag.
For streaming, we use a chunked block format:

```
[magic: 4 bytes "STIM"]
[version: 1 byte]
[salt: 16 bytes]            ← Argon2id salt
[argon2 params: 12 bytes]   ← time, memory, threads (uint32 LE each)

Per block (repeated):
[nonce: 12 bytes]
[length: 4 bytes LE]        ← ciphertext length including 16-byte Poly1305 tag
[ciphertext: N bytes]       ← encrypted chunk + tag

Final block:
[nonce: 12 bytes]
[length: 4 bytes LE = 0]    ← zero length signals EOF
```

Block size: 1 MiB plaintext → ~1 MiB + 16 bytes ciphertext per block.

The `Sigil` (Enchantrix crypto handle) wraps this as `StreamEncrypt(r io.Reader, w io.Writer)` and `StreamDecrypt(r io.Reader, w io.Writer)`.

### 4. Key Derivation Hardening

Replace bare `SHA-256(password)` with Argon2id:

```go
// time=3, memory=64 MiB, threads=4, keyLen=32
key := argon2.IDKey(password, salt, 3, 64*1024, 4, 32)
```

Parameters are stored in the STIM header (section 3 above) so they can be tuned
without breaking existing archives. A random 16-byte salt is generated per archive.

Backward compatibility: detect the old format by checking for the "STIM" magic. Old files
(no magic header) use legacy SHA-256 derivation with a deprecation warning.

### 5. Collect Local Streaming Pipeline

The new `collect local` pipeline for large directories:

```
filepath.WalkDir
  → tar.NewWriter (streaming)
  → xz/gzip compressor (streaming)
  → chunked AEAD encryptor (streaming)
  → os.File output
```

Memory usage: ~2 MiB regardless of input size (1 MiB compress buffer + 1 MiB
encrypt block).

Error handling:

- Broken symlinks: skip with warning (not fatal)
- Permission denied: skip with warning, continue
- Disk full on output: fatal, clean up partial file
- Read errors mid-stream: fatal, clean up partial file

Compression selection: `--compress=xz` (default, best ratio) or `--compress=gzip`
(faster). Matches existing Borg compression support.

### 6. Core Package Integration (Deferred)

Core imports Borg, so Borg cannot import core packages without creating a circular
dependency. Integration points are marked with TODOs for when the dependency
direction is resolved (likely by extracting shared interfaces to a common module):

- `core/go` config system → Borg config loading
- `core/go` logging → Borg Progress interface backend
- `core/go-store` → DataNode persistence
- `core/go` io.Medium → DataNode filesystem abstraction

## File Impact Summary

| Area | Files | Change Type |
|------|-------|-------------|
| DataNode | `pkg/datanode/*.go` | Modify (ToTarWriter, symlinks, AddPathOptions) |
| UI | `pkg/ui/*.go` | Rewrite (Progress interface, TTY detection) |
| TIM/STIM | `pkg/tim/*.go` | Modify (streaming encrypt/decrypt, new header) |
| Crypto | `pkg/tim/crypto.go` (new) | Create (Argon2id, chunked AEAD) |
| Collect | `cmd/collect_local.go` | Rewrite (streaming pipeline) |
| CLI | `cmd/root.go`, `cmd/*.go` | Modify (--quiet flag) |

## Testing Strategy

- Unit tests for each component (DataNode, Progress, chunked AEAD, Argon2id)
- Round-trip tests: encrypt → decrypt → compare original
- Large file test: 100 MiB synthetic directory through full pipeline
- Symlink matrix: valid internal, valid external, broken, nested
- Backward compatibility: decrypt old-format STIM with new code
- Race detector: `go test -race ./...`

## Dependencies

New:

- `golang.org/x/crypto/argon2` (Argon2id key derivation)
- `golang.org/x/term` (TTY detection via `term.IsTerminal`)

Existing (unchanged):

- `github.com/snider/Enchantrix` (ChaCha20-Poly1305 via Sigil)
- `github.com/ulikunitz/xz` (XZ compression)

## Risk Assessment

| Risk | Mitigation |
|------|------------|
| Breaking existing STIM format | Magic-byte detection for backward compat |
| Chunked AEAD security | Standard construction (each block independent nonce) |
| Circular dep with core | Deferred; TODO markers only |
| Large directory edge cases | Extensive symlink + permission test matrix |

File diff suppressed because it is too large
@@ -6,8 +6,8 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/github"
-	"forge.lthn.ai/Snider/Borg/pkg/vcs"
+	"github.com/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/vcs"
 )

 func main() {

@@ -4,13 +4,13 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/github"
+	"github.com/Snider/Borg/pkg/github"
 )

 func main() {
 	log.Println("Collecting GitHub release...")

-	owner, repo, err := github.ParseRepoFromURL("https://forge.lthn.ai/Snider/Borg")
+	owner, repo, err := github.ParseRepoFromURL("https://github.com/Snider/Borg")
 	if err != nil {
 		log.Fatalf("Failed to parse repo from URL: %v", err)
 	}

@@ -4,14 +4,14 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/vcs"
+	"github.com/Snider/Borg/pkg/vcs"
 )

 func main() {
 	log.Println("Collecting GitHub repo...")

 	cloner := vcs.NewGitCloner()
-	dn, err := cloner.CloneGitRepository("https://forge.lthn.ai/Snider/Borg", nil)
+	dn, err := cloner.CloneGitRepository("https://github.com/Snider/Borg", nil)
 	if err != nil {
 		log.Fatalf("Failed to clone repository: %v", err)
 	}

@@ -4,7 +4,7 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/pwa"
+	"github.com/Snider/Borg/pkg/pwa"
 )

 func main() {

@@ -4,7 +4,7 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/website"
+	"github.com/Snider/Borg/pkg/website"
 )

 func main() {

@@ -4,8 +4,8 @@ import (
 	"log"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/tim"
 )

 func main() {

@@ -17,7 +17,7 @@ import (
 	"strconv"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/smsg"
 )

 // trackList allows multiple -track flags

@@ -5,8 +5,8 @@ import (
 	"io/fs"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/datanode"
 )

 func main() {

@@ -3,7 +3,7 @@ package main
 import (
 	"log"

-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/tim"
 )

 func main() {

@@ -5,8 +5,8 @@ import (
 	"net/http"
 	"os"

-	"forge.lthn.ai/Snider/Borg/pkg/compress"
-	"forge.lthn.ai/Snider/Borg/pkg/tarfs"
+	"github.com/Snider/Borg/pkg/compress"
+	"github.com/Snider/Borg/pkg/tarfs"
 )

 func main() {

@@ -19,8 +19,8 @@ import (
 	"path/filepath"
 	"time"

-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
-	"forge.lthn.ai/Snider/Borg/pkg/stmf"
+	"github.com/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/stmf"
 )

 func main() {

58	go.mod

@@ -1,74 +1,68 @@
-module forge.lthn.ai/Snider/Borg
+module github.com/Snider/Borg

 go 1.25.0

 require (
-	forge.lthn.ai/Snider/Enchantrix v0.0.4
+	github.com/Snider/Enchantrix v0.0.2
 	github.com/fatih/color v1.18.0
-	github.com/go-git/go-git/v5 v5.16.4
+	github.com/go-git/go-git/v5 v5.16.3
 	github.com/google/go-github/v39 v39.2.0
-	github.com/klauspost/compress v1.18.4
+	github.com/klauspost/compress v1.18.2
 	github.com/mattn/go-isatty v0.0.20
 	github.com/schollz/progressbar/v3 v3.18.0
-	github.com/spf13/cobra v1.10.2
+	github.com/spf13/cobra v1.10.1
 	github.com/ulikunitz/xz v0.5.15
 	github.com/wailsapp/wails/v2 v2.11.0
-	golang.org/x/crypto v0.48.0
-	golang.org/x/mod v0.33.0
-	golang.org/x/net v0.50.0
-	golang.org/x/oauth2 v0.35.0
+	golang.org/x/mod v0.30.0
+	golang.org/x/net v0.47.0
+	golang.org/x/oauth2 v0.33.0
 )

 require (
-	dario.cat/mergo v1.0.2 // indirect
+	dario.cat/mergo v1.0.0 // indirect
 	github.com/Microsoft/go-winio v0.6.2 // indirect
 	github.com/ProtonMail/go-crypto v1.3.0 // indirect
 	github.com/bep/debounce v1.2.1 // indirect
-	github.com/clipperhouse/uax29/v2 v2.4.0 // indirect
-	github.com/cloudflare/circl v1.6.3 // indirect
-	github.com/cyphar/filepath-securejoin v0.6.1 // indirect
-	github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect
+	github.com/cloudflare/circl v1.6.1 // indirect
+	github.com/cyphar/filepath-securejoin v0.4.1 // indirect
 	github.com/emirpasic/gods v1.18.1 // indirect
 	github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
-	github.com/go-git/go-billy/v5 v5.7.0 // indirect
+	github.com/go-git/go-billy/v5 v5.6.2 // indirect
 	github.com/go-ole/go-ole v1.3.0 // indirect
-	github.com/godbus/dbus/v5 v5.2.2 // indirect
+	github.com/godbus/dbus/v5 v5.1.0 // indirect
 	github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 // indirect
 	github.com/google/go-querystring v1.1.0 // indirect
 	github.com/google/uuid v1.6.0 // indirect
 	github.com/gorilla/websocket v1.5.3 // indirect
 	github.com/inconshreveable/mousetrap v1.1.0 // indirect
 	github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 // indirect
-	github.com/jchv/go-winloader v0.0.0-20250406163304-c1995be93bd1 // indirect
+	github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e // indirect
-	github.com/kevinburke/ssh_config v1.4.0 // indirect
+	github.com/kevinburke/ssh_config v1.2.0 // indirect
-	github.com/klauspost/cpuid/v2 v2.3.0 // indirect
 	github.com/labstack/echo/v4 v4.13.3 // indirect
 	github.com/labstack/gommon v0.4.2 // indirect
 	github.com/leaanthony/go-ansi-parser v1.6.1 // indirect
 	github.com/leaanthony/gosod v1.0.4 // indirect
 	github.com/leaanthony/slicer v1.6.0 // indirect
 	github.com/leaanthony/u v1.1.1 // indirect
-	github.com/mattn/go-colorable v0.1.14 // indirect
+	github.com/mattn/go-colorable v0.1.13 // indirect
-	github.com/mattn/go-runewidth v0.0.19 // indirect
 	github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db // indirect
-	github.com/pjbgf/sha1cd v0.5.0 // indirect
+	github.com/pjbgf/sha1cd v0.3.2 // indirect
 	github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c // indirect
 	github.com/pkg/errors v0.9.1 // indirect
-	github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect
 	github.com/rivo/uniseg v0.4.7 // indirect
-	github.com/samber/lo v1.52.0 // indirect
+	github.com/samber/lo v1.49.1 // indirect
-	github.com/sergi/go-diff v1.4.0 // indirect
+	github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 // indirect
-	github.com/skeema/knownhosts v1.3.2 // indirect
+	github.com/skeema/knownhosts v1.3.1 // indirect
-	github.com/spf13/pflag v1.0.10 // indirect
+	github.com/spf13/pflag v1.0.9 // indirect
 	github.com/tkrajina/go-reflector v0.5.8 // indirect
 	github.com/valyala/bytebufferpool v1.0.0 // indirect
 	github.com/valyala/fasttemplate v1.2.2 // indirect
-	github.com/wailsapp/go-webview2 v1.0.23 // indirect
+	github.com/wailsapp/go-webview2 v1.0.22 // indirect
 	github.com/wailsapp/mimetype v1.4.1 // indirect
 	github.com/xanzy/ssh-agent v0.3.3 // indirect
-	golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a // indirect
-	golang.org/x/sys v0.41.0 // indirect
-	golang.org/x/term v0.40.0 // indirect
-	golang.org/x/text v0.34.0 // indirect
+	golang.org/x/crypto v0.44.0 // indirect
+	golang.org/x/sys v0.38.0 // indirect
+	golang.org/x/term v0.37.0 // indirect
+	golang.org/x/text v0.31.0 // indirect
 	gopkg.in/warnings.v0 v0.1.2 // indirect
 )

93
go.sum
@@ -1,10 +1,12 @@
-dario.cat/mergo v1.0.2 h1:85+piFYR1tMbRrLcDwR18y4UKJ3aH1Tbzi24VRW1TK8=
+dario.cat/mergo v1.0.0 h1:AGCNq9Evsj31mOgNPcLyXc+4PNABt905YmuqPYYpBWk=
-forge.lthn.ai/Snider/Enchantrix v0.0.4 h1:biwpix/bdedfyc0iVeK15awhhJKH6TEMYOTXzHXx5TI=
+dario.cat/mergo v1.0.0/go.mod h1:uNxQE+84aUszobStD9th8a29P2fMDhsBdgRYvZOxGmk=
 github.com/Microsoft/go-winio v0.5.2/go.mod h1:WpS1mjBmmwHBEWmogvA2mj8546UReBk4v8QkMxJ6pZY=
 github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
 github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
 github.com/ProtonMail/go-crypto v1.3.0 h1:ILq8+Sf5If5DCpHQp4PbZdS1J7HDFRXz/+xKBiRGFrw=
 github.com/ProtonMail/go-crypto v1.3.0/go.mod h1:9whxjD8Rbs29b4XWbB8irEcE8KHMqaR2e7GWU1R+/PE=
+github.com/Snider/Enchantrix v0.0.2 h1:ExZQiBhfS/p/AHFTKhY80TOd+BXZjK95EzByAEgwvjs=
+github.com/Snider/Enchantrix v0.0.2/go.mod h1:CtFcLAvnDT1KcuF1JBb/DJj0KplY8jHryO06KzQ1hsQ=
 github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be h1:9AeTilPcZAjCFIImctFaOjnTIavg87rW78vTPkQqLI8=
 github.com/anmitsu/go-shlex v0.0.0-20200514113438-38f4b401e2be/go.mod h1:ySMOLuWl6zY27l47sB3qLNK6tF2fkHG55UZxx8oIVo4=
 github.com/armon/go-socks5 v0.0.0-20160902184237-e75332964ef5 h1:0CwZNZbxp69SHPdPJAN/hZIm0C4OItdklCFmMRWYpio=
@@ -13,14 +15,14 @@ github.com/bep/debounce v1.2.1 h1:v67fRdBA9UQu2NhLFXrSg0Brw7CexQekrBwDMM8bzeY=
 github.com/bep/debounce v1.2.1/go.mod h1:H8yggRPQKLUhUoqrJC1bO2xNya7vanpDl7xR3ISbCJ0=
 github.com/chengxilo/virtualterm v1.0.4 h1:Z6IpERbRVlfB8WkOmtbHiDbBANU7cimRIof7mk9/PwM=
 github.com/chengxilo/virtualterm v1.0.4/go.mod h1:DyxxBZz/x1iqJjFxTFcr6/x+jSpqN0iwWCOK1q10rlY=
-github.com/clipperhouse/stringish v0.1.1 h1:+NSqMOr3GR6k1FdRhhnXrLfztGzuG+VuFDfatpWHKCs=
+github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
-github.com/clipperhouse/uax29/v2 v2.4.0 h1:RXqE/l5EiAbA4u97giimKNlmpvkmz+GrBVTelsoXy9g=
+github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
-github.com/cloudflare/circl v1.6.3 h1:9GPOhQGF9MCYUeXyMYlqTR6a5gTrgR/fBLXvUgtVcg8=
 github.com/cpuguy83/go-md2man/v2 v2.0.6/go.mod h1:oOW0eioCTA6cOiMLiUPZOpcVxMig6NIQQ7OS05n1F4g=
-github.com/cyphar/filepath-securejoin v0.6.1 h1:5CeZ1jPXEiYt3+Z6zqprSAgSWiggmpVyciv8syjIpVE=
+github.com/cyphar/filepath-securejoin v0.4.1 h1:JyxxyPEaktOD+GAnqIqTf9A8tHyAG22rowi7HkoSU1s=
+github.com/cyphar/filepath-securejoin v0.4.1/go.mod h1:Sdj7gXlvMcPZsbhwhQ33GguGLDGQL7h7bg04C/+u9jI=
 github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1VwoXQT9A3Wy9MM3WgvqSxFWenqJduM=
 github.com/elazarl/goproxy v1.7.2 h1:Y2o6urb7Eule09PjlhQRGNsqRfPmYI3KKQLFpCAV3+o=
 github.com/elazarl/goproxy v1.7.2/go.mod h1:82vkLNir0ALaW14Rc399OTTjyNREgmdL2cVoIbS6XaE=
 github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
@@ -31,13 +33,16 @@ github.com/gliderlabs/ssh v0.3.8 h1:a4YXD1V7xMF9g5nTkdfnja3Sxy1PVDCj1Zg4Wb8vY6c=
 github.com/gliderlabs/ssh v0.3.8/go.mod h1:xYoytBv1sV0aL3CavoDuJIQNURXkkfPA/wxQ1pL1fAU=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
-github.com/go-git/go-billy/v5 v5.7.0 h1:83lBUJhGWhYp0ngzCMSgllhUSuoHP1iEWYjsPl9nwqM=
+github.com/go-git/go-billy/v5 v5.6.2 h1:6Q86EsPXMa7c3YZ3aLAQsMA0VlWmy43r6FHqa/UNbRM=
+github.com/go-git/go-billy/v5 v5.6.2/go.mod h1:rcFC2rAsp/erv7CMz9GczHcuD0D32fWzH+MJAU+jaUU=
 github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
 github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399/go.mod h1:1OCfN199q1Jm3HZlxleg+Dw/mwps2Wbk9frAWm+4FII=
-github.com/go-git/go-git/v5 v5.16.4 h1:7ajIEZHZJULcyJebDLo99bGgS0jRrOxzZG4uCk2Yb2Y=
+github.com/go-git/go-git/v5 v5.16.3 h1:Z8BtvxZ09bYm/yYNgPKCzgWtaRqDTgIKRgIRHBfU6Z8=
+github.com/go-git/go-git/v5 v5.16.3/go.mod h1:4Ge4alE/5gPs30F2H1esi2gPd69R0C39lolkucHBOp8=
 github.com/go-ole/go-ole v1.3.0 h1:Dt6ye7+vXGIKZ7Xtk4s6/xVdGDQynvom7xCFEdWr6uE=
 github.com/go-ole/go-ole v1.3.0/go.mod h1:5LS6F96DhAwUc7C+1HLexzMXY1xGRSryjyPPKW6zv78=
-github.com/godbus/dbus/v5 v5.2.2 h1:TUR3TgtSVDmjiXOgAAyaZbYmIeP3DPkld3jgKGV8mXQ=
+github.com/godbus/dbus/v5 v5.1.0 h1:4KLkAxT3aOY8Li4FRJe/KvhoNFFxo0m6fNuFUO8QJUk=
+github.com/godbus/dbus/v5 v5.1.0/go.mod h1:xhWf0FNVPg57R7Z0UbKHbJfkEywrmjJnf7w5xrFpKfA=
 github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8 h1:f+oWsMOmNPc8JmEHVZIycC7hBoQxHH9pNKQORJNozsQ=
 github.com/golang/groupcache v0.0.0-20241129210726-2c02b8208cf8/go.mod h1:wcDNUvekVysuuOpQKo3191zZyTpiI6se1N1ULghS0sw=
 github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
@@ -58,10 +63,12 @@ github.com/inconshreveable/mousetrap v1.1.0 h1:wN+x4NVGpMsO7ErUn/mUI3vEoE6Jt13X2
 github.com/inconshreveable/mousetrap v1.1.0/go.mod h1:vpF70FUmC8bwa3OWnCshd2FqLfsEA9PFc4w1p2J65bw=
 github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99 h1:BQSFePA1RWJOlocH6Fxy8MmwDt+yVQYULKfN0RoTN8A=
 github.com/jbenet/go-context v0.0.0-20150711004518-d14ea06fba99/go.mod h1:1lJo3i6rXxKeerYnT8Nvf0QmHCRC1n8sfWVwXF2Frvo=
-github.com/jchv/go-winloader v0.0.0-20250406163304-c1995be93bd1 h1:njuLRcjAuMKr7kI3D85AXWkw6/+v9PwtV6M6o11sWHQ=
+github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e h1:Q3+PugElBCf4PFpxhErSzU3/PY5sFL5Z6rfv4AbGAck=
-github.com/kevinburke/ssh_config v1.4.0 h1:6xxtP5bZ2E4NF5tuQulISpTO2z8XbtH8cg1PWkxoFkQ=
+github.com/jchv/go-winloader v0.0.0-20210711035445-715c2860da7e/go.mod h1:alcuEEnZsY1WQsagKhZDsoPCRoOijYqhZvPwLG0kzVs=
-github.com/klauspost/compress v1.18.4 h1:RPhnKRAQ4Fh8zU2FY/6ZFDwTVTxgJ/EMydqSTzE9a2c=
+github.com/kevinburke/ssh_config v1.2.0 h1:x584FjTGwHzMwvHx18PXxbBVzfnxogHaAReU4gf13a4=
-github.com/klauspost/cpuid/v2 v2.3.0 h1:S4CRMLnYUhGeDFDqkGriYKdfoFlDnMtqTiI/sFzhA9Y=
+github.com/kevinburke/ssh_config v1.2.0/go.mod h1:CT57kijsi8u/K/BOFA39wgDQJ9CxiF4nAY/ojJ6r6mM=
+github.com/klauspost/compress v1.18.2 h1:iiPHWW0YrcFgpBYhsA6D1+fqHssJscY/Tm/y2Uqnapk=
+github.com/klauspost/compress v1.18.2/go.mod h1:R0h/fSBs8DE4ENlcrlib3PsXS61voFxhIs2DeRhCvJ4=
 github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
 github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
 github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
@@ -86,36 +93,44 @@ github.com/leaanthony/u v1.1.1/go.mod h1:9+o6hejoRljvZ3BzdYlVL0JYCwtnAsVuN9pVTQc
 github.com/matryer/is v1.4.0/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
 github.com/matryer/is v1.4.1 h1:55ehd8zaGABKLXQUe2awZ99BD/PTc2ls+KV/dXphgEQ=
 github.com/matryer/is v1.4.1/go.mod h1:8I/i5uYgLzgsgEloJE1U6xx5HkBQpAZvepWuujKwMRU=
-github.com/mattn/go-colorable v0.1.14 h1:9A9LHSqF/7dyVVX6g0U9cwm9pG3kP9gSzcuIPHPsaIE=
+github.com/mattn/go-colorable v0.1.13 h1:fFA4WZxdEF4tXPZVKMLwD8oUnCTTo08duU7wxecdEvA=
+github.com/mattn/go-colorable v0.1.13/go.mod h1:7S9/ev0klgBDR4GtXTXX8a3vIGJpMovkB8vQcUbaXHg=
+github.com/mattn/go-isatty v0.0.16/go.mod h1:kYGgaQfpe5nmfYZH+SKPsOc2e4SrIfOl2e/yFXSvRLM=
 github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
 github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
-github.com/mattn/go-runewidth v0.0.19 h1:v++JhqYnZuu5jSKrk9RbgF5v4CGUjqRfBm05byFGLdw=
+github.com/mattn/go-runewidth v0.0.16 h1:E5ScNMtiwvlvB5paMFdw9p4kSQzbXFikJ5SQO6TULQc=
+github.com/mattn/go-runewidth v0.0.16/go.mod h1:Jdepj2loyihRzMpdS35Xk/zdY8IAYHsh153qUoGf23w=
 github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db h1:62I3jR2EmQ4l5rM/4FEfDWcRD+abF5XlKShorW5LRoQ=
 github.com/mitchellh/colorstring v0.0.0-20190213212951-d06e56a500db/go.mod h1:l0dey0ia/Uv7NcFFVbCLtqEBQbrT4OCwCSKTEv6enCw=
 github.com/onsi/gomega v1.34.1 h1:EUMJIKUjM8sKjYbtxQI9A4z2o+rruxnzNvpknOXie6k=
 github.com/onsi/gomega v1.34.1/go.mod h1:kU1QgUvBDLXBJq618Xvm2LUX6rSAfRaFRTcdOeDLwwY=
-github.com/pjbgf/sha1cd v0.5.0 h1:a+UkboSi1znleCDUNT3M5YxjOnN1fz2FhN48FlwCxs0=
+github.com/pjbgf/sha1cd v0.3.2 h1:a9wb0bp1oC2TGwStyn0Umc/IGKQnEgF0vVaZ8QF8eo4=
+github.com/pjbgf/sha1cd v0.3.2/go.mod h1:zQWigSxVmsHEZow5qaLtPYxpcKMMQpa09ixqBxuCS6A=
 github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c h1:+mdjkGKdHQG3305AYmdv1U2eRNDiU2ErMBj1gwrq8eQ=
 github.com/pkg/browser v0.0.0-20240102092130-5ac0b6a4141c/go.mod h1:7rwL4CYBLnjLxUqIJNnCWiEdr3bn6IUYi15bNlnbCCU=
 github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
 github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
 github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
 github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
 github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
 github.com/russross/blackfriday/v2 v2.1.0/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
-github.com/samber/lo v1.52.0 h1:Rvi+3BFHES3A8meP33VPAxiBZX/Aws5RxrschYGjomw=
+github.com/samber/lo v1.49.1 h1:4BIFyVfuQSEpluc7Fua+j1NolZHiEHEpaSEKdsH0tew=
+github.com/samber/lo v1.49.1/go.mod h1:dO6KHFzUKXgP8LDhU0oI8d2hekjXnGOu0DB8Jecxd6o=
 github.com/schollz/progressbar/v3 v3.18.0 h1:uXdoHABRFmNIjUfte/Ex7WtuyVslrw2wVPQmCN62HpA=
 github.com/schollz/progressbar/v3 v3.18.0/go.mod h1:IsO3lpbaGuzh8zIMzgY3+J8l4C8GjO0Y9S69eFvNsec=
-github.com/sergi/go-diff v1.4.0 h1:n/SP9D5ad1fORl+llWyN+D6qoUETXNZARKjyY2/KVCw=
+github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3 h1:n661drycOFuPLCN3Uc8sB6B/s6Z4t2xvBgU1htSHuq8=
+github.com/sergi/go-diff v1.3.2-0.20230802210424-5b0b94c5c0d3/go.mod h1:A0bzQcvG0E7Rwjx0REVgAGH58e96+X0MeOfepqsbeW4=
 github.com/sirupsen/logrus v1.7.0/go.mod h1:yWOB1SBYBC5VeMP7gHvWumXLIWorT60ONWic61uBYv0=
-github.com/skeema/knownhosts v1.3.2 h1:EDL9mgf4NzwMXCTfaxSD/o/a5fxDw/xL9nkU28JjdBg=
+github.com/skeema/knownhosts v1.3.1 h1:X2osQ+RAjK76shCbvhHHHVl3ZlgDm8apHEHFqRjnBY8=
-github.com/spf13/cobra v1.10.2 h1:DMTTonx5m65Ic0GOoRY2c16WCbHxOOw6xxezuLaBpcU=
+github.com/skeema/knownhosts v1.3.1/go.mod h1:r7KTdC8l4uxWRyK2TpQZ/1o5HaSzh06ePQNxPwTcfiY=
+github.com/spf13/cobra v1.10.1 h1:lJeBwCfmrnXthfAupyUTzJ/J4Nc1RsHC/mSRU2dll/s=
+github.com/spf13/cobra v1.10.1/go.mod h1:7SmJGaTHFVBY0jW4NXGluQoLvhqFQM+6XSKD+P4XaB0=
+github.com/spf13/pflag v1.0.9 h1:9exaQaMOCwffKiiiYk6/BndUBv+iRViNW+4lEMi0PvY=
 github.com/spf13/pflag v1.0.9/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
-github.com/spf13/pflag v1.0.10 h1:4EBh2KAYBwaONj6b2Ye1GiHfwjqyROoF4RwYO+vPwFk=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
@@ -129,7 +144,8 @@ github.com/valyala/bytebufferpool v1.0.0 h1:GqA5TC/0021Y/b9FG4Oi9Mr3q7XYx6Kllzaw
 github.com/valyala/bytebufferpool v1.0.0/go.mod h1:6bBcMArwyJ5K/AmCkWv1jt77kVWyCJ6HpOuEn7z0Csc=
 github.com/valyala/fasttemplate v1.2.2 h1:lxLXG0uE3Qnshl9QyaK6XJxMXlQZELvChBOCmQD0Loo=
 github.com/valyala/fasttemplate v1.2.2/go.mod h1:KHLXt3tVN2HBp8eijSv/kGJopbvo7S+qRAEEKiv+SiQ=
-github.com/wailsapp/go-webview2 v1.0.23 h1:jmv8qhz1lHibCc79bMM/a/FqOnnzOGEisLav+a0b9P0=
+github.com/wailsapp/go-webview2 v1.0.22 h1:YT61F5lj+GGaat5OB96Aa3b4QA+mybD0Ggq6NZijQ58=
+github.com/wailsapp/go-webview2 v1.0.22/go.mod h1:qJmWAmAmaniuKGZPWwne+uor3AHMB5PFhqiK0Bbj8kc=
 github.com/wailsapp/mimetype v1.4.1 h1:pQN9ycO7uo4vsUUuPeHEYoUkLVkaRntMnHJxVwYhwHs=
 github.com/wailsapp/mimetype v1.4.1/go.mod h1:9aV5k31bBOv5z6u+QP8TltzvNGJPmNJD4XlAL3U+j3o=
 github.com/wailsapp/wails/v2 v2.11.0 h1:seLacV8pqupq32IjS4Y7V8ucab0WZwtK6VvUVxSBtqQ=
@@ -139,17 +155,21 @@ github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
 golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
-golang.org/x/crypto v0.48.0 h1:/VRzVqiRSggnhY7gNRxPauEQ5Drw9haKdM0jqfcCFts=
+golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
-golang.org/x/crypto v0.48.0/go.mod h1:r0kV5h3qnFPlQnBSrULhlsRfryS2pmewsg+XfMgkVos=
+golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
-golang.org/x/exp v0.0.0-20260212183809-81e46e3db34a h1:ovFr6Z0MNmU7nH8VaX5xqw+05ST2uO1exVfZPVqRC5o=
+golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
-golang.org/x/mod v0.33.0 h1:tHFzIWbBifEmbwtGz65eaWyGiGZatSrT9prnU8DbVL8=
+golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
+golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=
+golang.org/x/mod v0.30.0/go.mod h1:lAsf5O2EvJeSFMiBxXDki7sCgAxEUcZHXoXMKT4GJKc=
 golang.org/x/net v0.0.0-20190603091049-60506f45cf65/go.mod h1:HSz+uSET+XFnRR8LxR5pz3Of3rY3CfYBVs4xY44aLks=
 golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
 golang.org/x/net v0.0.0-20210505024714-0287a6fb4125/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.50.0 h1:ucWh9eiCGyDR3vtzso0WMQinm2Dnt8cFMuQa9K33J60=
+golang.org/x/net v0.47.0 h1:Mx+4dIFzqraBXUugkia1OOvlD6LemFo1ALMHjrXDOhY=
+golang.org/x/net v0.47.0/go.mod h1:/jNxtkgq5yWUGYkaZGqo27cfGZ1c5Nen03aYrrKpVRU=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
-golang.org/x/oauth2 v0.35.0 h1:Mv2mzuHuZuY2+bkyWXIHMfhNdJAdwW3FuWeCPYN5GVQ=
+golang.org/x/oauth2 v0.33.0 h1:4Q+qn+E5z8gPRJfmRy7C2gGG3T4jIprK6aSYgTXGRpo=
+golang.org/x/oauth2 v0.33.0/go.mod h1:lzm5WQJQwKZ3nwavOZ3IS5Aulzxi68dUSgRHujetwEA=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20191026070338-33540a1f6037/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20200810151505-1b9f1253b3ed/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -158,19 +178,20 @@ golang.org/x/sys v0.0.0-20210124154548-22da62e12c0c/go.mod h1:h1NjWce9XRLGQEsW7w
 golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
 golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20220715151400-c0bba94af5f8/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220811171246-fbc7d0a398ab/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.41.0 h1:Ivj+2Cp/ylzLiEU89QhWblYnOE9zerudt9Ftecq2C6k=
+golang.org/x/sys v0.38.0 h1:3yZWxaJjBmCWXqhN1qh02AkOnCQ1poK6oF+a7xWL6Gc=
-golang.org/x/sys v0.41.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
+golang.org/x/sys v0.38.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
-golang.org/x/term v0.40.0 h1:36e4zGLqU4yhjlmxEaagx2KuYbJq3EwY8K943ZsHcvg=
+golang.org/x/term v0.37.0 h1:8EGAD0qCmHYZg6J17DvsMy9/wJ7/D/4pV/wfnld5lTU=
-golang.org/x/term v0.40.0/go.mod h1:w2P8uVp06p2iyKKuvXIm7N/y0UCRt3UfJTfZ7oOpglM=
+golang.org/x/term v0.37.0/go.mod h1:5pB4lxRNYYVZuTLmy8oR2BH8dflOR+IbTYFD8fi3254=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
-golang.org/x/text v0.34.0 h1:oL/Qq0Kdaqxa1KbNeMKwQq0reLCCaFtqu2eNuSeNHbk=
+golang.org/x/text v0.31.0 h1:aC8ghyu4JhP8VojJ2lEHBnochRno1sgL6nEi9WGFGMM=
-golang.org/x/text v0.34.0/go.mod h1:homfLqTYRFyVYemLBFl5GgL/DWEiH5wcsQ5gSh1yziA=
+golang.org/x/text v0.31.0/go.mod h1:tKRAlv61yKIjGGHX/4tP1LTbc13YSec1pxVEWXzfoeM=
 golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
 golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
 google.golang.org/appengine v1.6.7/go.mod h1:8WjMMxjGQR8xUklV/ARdw2HLXBOI7O7uCIDZVag1xfc=
Binary file not shown.
4
main.go
@@ -3,8 +3,8 @@ package main
 import (
 	"os"

-	"forge.lthn.ai/Snider/Borg/cmd"
+	"github.com/Snider/Borg/cmd"
-	"forge.lthn.ai/Snider/Borg/pkg/logger"
+	"github.com/Snider/Borg/pkg/logger"
 )

 var osExit = os.Exit
|
|
@ -9,7 +9,7 @@ import (
|
||||||
"fmt"
|
"fmt"
|
||||||
"os"
|
"os"
|
||||||
|
|
||||||
"forge.lthn.ai/Snider/Borg/pkg/stmf"
|
"github.com/Snider/Borg/pkg/stmf"
|
||||||
)
|
)
|
||||||
|
|
||||||
type TestVector struct {
|
type TestVector struct {
|
||||||
|
|
|
||||||
|
|
@ -3,34 +3,11 @@ package compress
|
||||||
import (
|
import (
|
||||||
"bytes"
|
"bytes"
|
||||||
"compress/gzip"
|
"compress/gzip"
|
||||||
"fmt"
|
|
||||||
"io"
|
"io"
|
||||||
|
|
||||||
"github.com/ulikunitz/xz"
|
"github.com/ulikunitz/xz"
|
||||||
)
|
)
|
||||||
|
|
||||||
// nopCloser wraps an io.Writer with a no-op Close method.
|
|
||||||
type nopCloser struct{ io.Writer }
|
|
||||||
|
|
||||||
func (n *nopCloser) Close() error { return nil }
|
|
||||||
|
|
||||||
// NewCompressWriter returns a streaming io.WriteCloser that compresses data
|
|
||||||
// written to it into the underlying writer w using the specified format.
|
|
||||||
// Supported formats: "gz" (gzip), "xz", "none" or "" (passthrough).
|
|
||||||
// Unknown formats return an error.
|
|
||||||
func NewCompressWriter(w io.Writer, format string) (io.WriteCloser, error) {
|
|
||||||
switch format {
|
|
||||||
case "gz":
|
|
||||||
return gzip.NewWriter(w), nil
|
|
||||||
case "xz":
|
|
||||||
return xz.NewWriter(w)
|
|
||||||
case "none", "":
|
|
||||||
return &nopCloser{w}, nil
|
|
||||||
default:
|
|
||||||
return nil, fmt.Errorf("unsupported compression format: %q", format)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Compress compresses data using the specified format.
|
// Compress compresses data using the specified format.
|
||||||
func Compress(data []byte, format string) ([]byte, error) {
|
func Compress(data []byte, format string) ([]byte, error) {
|
||||||
var buf bytes.Buffer
|
var buf bytes.Buffer
|
||||||
|
|
|
||||||
|
|
@ -5,108 +5,6 @@ import (
|
||||||
"testing"
|
"testing"
|
||||||
)
|
)
|
||||||
|
|
||||||
func TestNewCompressWriter_Gzip_Good(t *testing.T) {
|
|
||||||
original := []byte("hello, streaming gzip world")
|
|
||||||
var buf bytes.Buffer
|
|
||||||
|
|
||||||
w, err := NewCompressWriter(&buf, "gz")
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("NewCompressWriter(gz) error: %v", err)
|
|
||||||
}
|
|
||||||
if _, err := w.Write(original); err != nil {
|
|
||||||
t.Fatalf("Write error: %v", err)
|
|
||||||
}
|
|
||||||
if err := w.Close(); err != nil {
|
|
||||||
t.Fatalf("Close error: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
compressed := buf.Bytes()
|
|
||||||
if bytes.Equal(original, compressed) {
|
|
||||||
t.Fatal("compressed data should differ from original")
|
|
||||||
}
|
|
||||||
|
|
||||||
decompressed, err := Decompress(compressed)
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("Decompress error: %v", err)
|
|
||||||
}
|
|
||||||
if !bytes.Equal(original, decompressed) {
|
|
||||||
t.Errorf("round-trip mismatch: got %q, want %q", decompressed, original)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestNewCompressWriter_Xz_Good(t *testing.T) {
|
|
||||||
original := []byte("hello, streaming xz world")
|
|
||||||
var buf bytes.Buffer
|
|
||||||
|
|
||||||
w, err := NewCompressWriter(&buf, "xz")
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("NewCompressWriter(xz) error: %v", err)
|
|
||||||
}
|
|
||||||
if _, err := w.Write(original); err != nil {
|
|
||||||
t.Fatalf("Write error: %v", err)
|
|
||||||
}
|
|
||||||
if err := w.Close(); err != nil {
|
|
||||||
t.Fatalf("Close error: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
compressed := buf.Bytes()
|
|
||||||
if bytes.Equal(original, compressed) {
|
|
||||||
t.Fatal("compressed data should differ from original")
|
|
||||||
}
|
|
||||||
|
|
||||||
decompressed, err := Decompress(compressed)
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("Decompress error: %v", err)
|
|
||||||
}
|
|
||||||
if !bytes.Equal(original, decompressed) {
|
|
||||||
t.Errorf("round-trip mismatch: got %q, want %q", decompressed, original)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestNewCompressWriter_None_Good(t *testing.T) {
|
|
||||||
original := []byte("hello, passthrough world")
|
|
||||||
var buf bytes.Buffer
|
|
||||||
|
|
||||||
w, err := NewCompressWriter(&buf, "none")
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("NewCompressWriter(none) error: %v", err)
|
|
||||||
}
|
|
||||||
if _, err := w.Write(original); err != nil {
|
|
||||||
t.Fatalf("Write error: %v", err)
|
|
||||||
}
|
|
||||||
if err := w.Close(); err != nil {
|
|
||||||
t.Fatalf("Close error: %v", err)
|
|
||||||
}
|
|
||||||
|
|
||||||
if !bytes.Equal(original, buf.Bytes()) {
|
|
||||||
t.Errorf("passthrough mismatch: got %q, want %q", buf.Bytes(), original)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Also test empty string format
|
|
||||||
var buf2 bytes.Buffer
|
|
||||||
w2, err := NewCompressWriter(&buf2, "")
|
|
||||||
if err != nil {
|
|
||||||
t.Fatalf("NewCompressWriter('') error: %v", err)
|
|
||||||
}
|
|
||||||
if _, err := w2.Write(original); err != nil {
|
|
||||||
t.Fatalf("Write error: %v", err)
|
|
||||||
}
|
|
||||||
if err := w2.Close(); err != nil {
|
|
||||||
t.Fatalf("Close error: %v", err)
|
|
||||||
}
|
|
||||||
if !bytes.Equal(original, buf2.Bytes()) {
|
|
||||||
t.Errorf("passthrough (empty string) mismatch: got %q, want %q", buf2.Bytes(), original)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestNewCompressWriter_Bad(t *testing.T) {
|
|
||||||
var buf bytes.Buffer
|
|
||||||
_, err := NewCompressWriter(&buf, "invalid-format")
|
|
||||||
if err == nil {
|
|
||||||
t.Fatal("expected error for unknown compression format, got nil")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestGzip_Good(t *testing.T) {
|
func TestGzip_Good(t *testing.T) {
|
||||||
originalData := []byte("hello, gzip world")
|
originalData := []byte("hello, gzip world")
|
||||||
compressed, err := Compress(originalData, "gz")
|
compressed, err := Compress(originalData, "gz")
|
||||||
|
|
|
||||||
|
|
@@ -8,8 +8,8 @@ import (
 	"os"
 	"sync"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/tim"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/tim"
 )

 //go:embed unlock.html
@@ -1,197 +0,0 @@
-package datanode
-
-import (
-	"os"
-	"path/filepath"
-	"runtime"
-	"testing"
-)
-
-func TestAddPath_Good(t *testing.T) {
-	// Create a temp directory with files and a nested subdirectory.
-	dir := t.TempDir()
-	if err := os.WriteFile(filepath.Join(dir, "hello.txt"), []byte("hello"), 0644); err != nil {
-		t.Fatal(err)
-	}
-	subdir := filepath.Join(dir, "sub")
-	if err := os.Mkdir(subdir, 0755); err != nil {
-		t.Fatal(err)
-	}
-	if err := os.WriteFile(filepath.Join(subdir, "world.txt"), []byte("world"), 0644); err != nil {
-		t.Fatal(err)
-	}
-
-	dn := New()
-	if err := dn.AddPath(dir, AddPathOptions{}); err != nil {
-		t.Fatalf("AddPath failed: %v", err)
-	}
-
-	// Verify files are stored with paths relative to dir, using forward slashes.
-	file, ok := dn.files["hello.txt"]
-	if !ok {
-		t.Fatal("hello.txt not found in datanode")
-	}
-	if string(file.content) != "hello" {
-		t.Errorf("expected content 'hello', got %q", file.content)
-	}
-
-	file, ok = dn.files["sub/world.txt"]
-	if !ok {
-		t.Fatal("sub/world.txt not found in datanode")
-	}
-	if string(file.content) != "world" {
-		t.Errorf("expected content 'world', got %q", file.content)
-	}
-
-	// Directories should not be stored explicitly.
-	if _, ok := dn.files["sub"]; ok {
-		t.Error("directories should not be stored as explicit entries")
-	}
-	if _, ok := dn.files["sub/"]; ok {
-		t.Error("directories should not be stored as explicit entries")
-	}
-}
-
-func TestAddPath_SkipBrokenSymlinks_Good(t *testing.T) {
-	if runtime.GOOS == "windows" {
-		t.Skip("symlinks not reliably supported on Windows")
-	}
-
-	dir := t.TempDir()
-
-	// Create a real file.
-	if err := os.WriteFile(filepath.Join(dir, "real.txt"), []byte("real"), 0644); err != nil {
-		t.Fatal(err)
-	}
-
-	// Create a broken symlink (target does not exist).
-	if err := os.Symlink("/nonexistent/target", filepath.Join(dir, "broken.txt")); err != nil {
-		t.Fatal(err)
-	}
-
-	dn := New()
-	err := dn.AddPath(dir, AddPathOptions{SkipBrokenSymlinks: true})
-	if err != nil {
-		t.Fatalf("AddPath should not error with SkipBrokenSymlinks: %v", err)
-	}
-
-	// The real file should be present.
-	if _, ok := dn.files["real.txt"]; !ok {
-		t.Error("real.txt should be present")
-	}
-
-	// The broken symlink should be skipped.
-	if _, ok := dn.files["broken.txt"]; ok {
-		t.Error("broken.txt should have been skipped")
-	}
-}
-
-func TestAddPath_ExcludePatterns_Good(t *testing.T) {
-	dir := t.TempDir()
-
-	if err := os.WriteFile(filepath.Join(dir, "app.go"), []byte("package main"), 0644); err != nil {
-		t.Fatal(err)
-	}
-	if err := os.WriteFile(filepath.Join(dir, "debug.log"), []byte("log data"), 0644); err != nil {
-		t.Fatal(err)
-	}
-	if err := os.WriteFile(filepath.Join(dir, "error.log"), []byte("error data"), 0644); err != nil {
-		t.Fatal(err)
-	}
-
-	dn := New()
-	err := dn.AddPath(dir, AddPathOptions{
-		ExcludePatterns: []string{"*.log"},
-	})
-	if err != nil {
-		t.Fatalf("AddPath failed: %v", err)
-	}
-
-	// app.go should be present.
-	if _, ok := dn.files["app.go"]; !ok {
-		t.Error("app.go should be present")
-	}
-
-	// .log files should be excluded.
-	if _, ok := dn.files["debug.log"]; ok {
-		t.Error("debug.log should have been excluded")
-	}
-	if _, ok := dn.files["error.log"]; ok {
-		t.Error("error.log should have been excluded")
-	}
-}
-
-func TestAddPath_Bad(t *testing.T) {
-	dn := New()
-	err := dn.AddPath("/nonexistent/path/that/does/not/exist", AddPathOptions{})
-	if err == nil {
-		t.Fatal("expected error for nonexistent directory, got nil")
-	}
-}
-
-func TestAddPath_ValidSymlink_Good(t *testing.T) {
-	if runtime.GOOS == "windows" {
-		t.Skip("symlinks not reliably supported on Windows")
-	}
-
-	dir := t.TempDir()
-
-	// Create a real file.
-	if err := os.WriteFile(filepath.Join(dir, "target.txt"), []byte("target content"), 0644); err != nil {
-		t.Fatal(err)
-	}
-
-	// Create a valid symlink pointing to the real file.
-	if err := os.Symlink("target.txt", filepath.Join(dir, "link.txt")); err != nil {
-		t.Fatal(err)
-	}
-
-	// Default behavior (FollowSymlinks=false): store as symlink.
-	dn := New()
-	err := dn.AddPath(dir, AddPathOptions{})
-	if err != nil {
-		t.Fatalf("AddPath failed: %v", err)
-	}
-
-	// The target file should be a regular file.
-	targetFile, ok := dn.files["target.txt"]
-	if !ok {
-		t.Fatal("target.txt not found")
-	}
-	if targetFile.isSymlink() {
-		t.Error("target.txt should not be a symlink")
-	}
-	if string(targetFile.content) != "target content" {
-		t.Errorf("expected content 'target content', got %q", targetFile.content)
-	}
-
-	// The symlink should be stored as a symlink entry.
-	linkFile, ok := dn.files["link.txt"]
-	if !ok {
-		t.Fatal("link.txt not found")
-	}
-	if !linkFile.isSymlink() {
-		t.Error("link.txt should be a symlink")
-	}
-	if linkFile.symlink != "target.txt" {
-		t.Errorf("expected symlink target 'target.txt', got %q", linkFile.symlink)
-	}
-
-	// Test with FollowSymlinks=true: store as regular file with target content.
-	dn2 := New()
-	err = dn2.AddPath(dir, AddPathOptions{FollowSymlinks: true})
-	if err != nil {
-		t.Fatalf("AddPath with FollowSymlinks failed: %v", err)
-	}
-
-	linkFile2, ok := dn2.files["link.txt"]
-	if !ok {
-		t.Fatal("link.txt not found with FollowSymlinks")
-	}
-	if linkFile2.isSymlink() {
-		t.Error("link.txt should NOT be a symlink when FollowSymlinks is true")
-	}
-	if string(linkFile2.content) != "target content" {
-		t.Errorf("expected content 'target content', got %q", linkFile2.content)
-	}
-}
@@ -8,7 +8,6 @@ import (
 	"io/fs"
 	"os"
 	"path"
-	"path/filepath"
 	"sort"
 	"strings"
 	"time"
@@ -43,15 +42,12 @@ func FromTar(tarball []byte) (*DataNode, error) {
 			return nil, err
 		}

-		switch header.Typeflag {
-		case tar.TypeReg:
+		if header.Typeflag == tar.TypeReg {
 			data, err := io.ReadAll(tarReader)
 			if err != nil {
 				return nil, err
 			}
 			dn.AddData(header.Name, data)
-		case tar.TypeSymlink:
-			dn.AddSymlink(header.Name, header.Linkname)
 		}
 	}
@@ -64,30 +60,17 @@ func (d *DataNode) ToTar() ([]byte, error) {
 	tw := tar.NewWriter(buf)

 	for _, file := range d.files {
-		var hdr *tar.Header
-		if file.isSymlink() {
-			hdr = &tar.Header{
-				Typeflag: tar.TypeSymlink,
-				Name:     file.name,
-				Linkname: file.symlink,
-				Mode:     0777,
-				ModTime:  file.modTime,
-			}
-		} else {
-			hdr = &tar.Header{
-				Name:    file.name,
-				Mode:    0600,
-				Size:    int64(len(file.content)),
-				ModTime: file.modTime,
-			}
+		hdr := &tar.Header{
+			Name:    file.name,
+			Mode:    0600,
+			Size:    int64(len(file.content)),
+			ModTime: file.modTime,
 		}
 		if err := tw.WriteHeader(hdr); err != nil {
 			return nil, err
 		}
-		if !file.isSymlink() {
-			if _, err := tw.Write(file.content); err != nil {
-				return nil, err
-			}
+		if _, err := tw.Write(file.content); err != nil {
+			return nil, err
 		}
 	}
@@ -98,51 +81,6 @@ func (d *DataNode) ToTar() ([]byte, error) {
 	return buf.Bytes(), nil
 }

-// ToTarWriter streams the DataNode contents to a tar writer.
-// File keys are sorted for deterministic output.
-func (d *DataNode) ToTarWriter(w io.Writer) error {
-	tw := tar.NewWriter(w)
-	defer tw.Close()
-
-	// Sort keys for deterministic output.
-	keys := make([]string, 0, len(d.files))
-	for k := range d.files {
-		keys = append(keys, k)
-	}
-	sort.Strings(keys)
-
-	for _, k := range keys {
-		file := d.files[k]
-		var hdr *tar.Header
-		if file.isSymlink() {
-			hdr = &tar.Header{
-				Typeflag: tar.TypeSymlink,
-				Name:     file.name,
-				Linkname: file.symlink,
-				Mode:     0777,
-				ModTime:  file.modTime,
-			}
-		} else {
-			hdr = &tar.Header{
-				Name:    file.name,
-				Mode:    0600,
-				Size:    int64(len(file.content)),
-				ModTime: file.modTime,
-			}
-		}
-		if err := tw.WriteHeader(hdr); err != nil {
-			return err
-		}
-		if !file.isSymlink() {
-			if _, err := tw.Write(file.content); err != nil {
-				return err
-			}
-		}
-	}
-
-	return nil
-}
-
 // AddData adds a file to the DataNode.
 func (d *DataNode) AddData(name string, content []byte) {
 	name = strings.TrimPrefix(name, "/")
@@ -161,119 +99,6 @@ func (d *DataNode) AddData(name string, content []byte) {
 	}
 }

-// AddSymlink adds a symlink entry to the DataNode.
-func (d *DataNode) AddSymlink(name, target string) {
-	name = strings.TrimPrefix(name, "/")
-	if name == "" {
-		return
-	}
-	if strings.HasSuffix(name, "/") {
-		return
-	}
-	d.files[name] = &dataFile{
-		name:    name,
-		symlink: target,
-		modTime: time.Now(),
-	}
-}
-
-// AddPathOptions configures the behaviour of AddPath.
-type AddPathOptions struct {
-	SkipBrokenSymlinks bool     // skip broken symlinks instead of erroring
-	FollowSymlinks     bool     // follow symlinks and store target content (default false = store as symlinks)
-	ExcludePatterns    []string // glob patterns to exclude (matched against basename)
-}
-
-// AddPath walks a real directory and adds its files to the DataNode.
-// Paths are stored relative to dir, normalized with forward slashes.
-// Directories are implicit and not stored.
-func (d *DataNode) AddPath(dir string, opts AddPathOptions) error {
-	absDir, err := filepath.Abs(dir)
-	if err != nil {
-		return err
-	}
-
-	return filepath.WalkDir(absDir, func(p string, entry fs.DirEntry, err error) error {
-		if err != nil {
-			return err
-		}
-
-		// Skip the root directory itself.
-		if p == absDir {
-			return nil
-		}
-
-		// Compute relative path and normalize to forward slashes.
-		rel, err := filepath.Rel(absDir, p)
-		if err != nil {
-			return err
-		}
-		rel = filepath.ToSlash(rel)
-
-		// Skip directories — they are implicit in DataNode.
-		isSymlink := entry.Type()&fs.ModeSymlink != 0
-		if entry.IsDir() {
-			return nil
-		}
-
-		// Apply exclude patterns against basename.
-		base := filepath.Base(p)
-		for _, pattern := range opts.ExcludePatterns {
-			matched, matchErr := filepath.Match(pattern, base)
-			if matchErr != nil {
-				return matchErr
-			}
-			if matched {
-				return nil
-			}
-		}
-
-		// Handle symlinks.
-		if isSymlink {
-			linkTarget, err := os.Readlink(p)
-			if err != nil {
-				return err
-			}
-
-			// Resolve the symlink target to check if it exists.
-			absTarget := linkTarget
-			if !filepath.IsAbs(absTarget) {
-				absTarget = filepath.Join(filepath.Dir(p), linkTarget)
-			}
-
-			_, statErr := os.Stat(absTarget)
-			if statErr != nil {
-				// Broken symlink.
-				if opts.SkipBrokenSymlinks {
-					return nil
-				}
-				return statErr
-			}
-
-			if opts.FollowSymlinks {
-				// Read the target content and store as regular file.
-				content, err := os.ReadFile(absTarget)
-				if err != nil {
-					return err
-				}
-				d.AddData(rel, content)
-			} else {
-				// Store as symlink.
-				d.AddSymlink(rel, linkTarget)
-			}
-			return nil
-		}
-
-		// Regular file: read content and add.
-		content, err := os.ReadFile(p)
-		if err != nil {
-			return err
-		}
-		d.AddData(rel, content)
-		return nil
-	})
-}
-
 // Open opens a file from the DataNode.
 func (d *DataNode) Open(name string) (fs.File, error) {
 	name = strings.TrimPrefix(name, "/")
@@ -474,11 +299,8 @@ type dataFile struct {
 	name    string
 	content []byte
 	modTime time.Time
-	symlink string
 }

-func (d *dataFile) isSymlink() bool { return d.symlink != "" }
-
 func (d *dataFile) Stat() (fs.FileInfo, error) { return &dataFileInfo{file: d}, nil }
 func (d *dataFile) Read(p []byte) (int, error) { return 0, io.EOF }
 func (d *dataFile) Close() error               { return nil }
@@ -488,12 +310,7 @@ type dataFileInfo struct{ file *dataFile }

 func (d *dataFileInfo) Name() string { return path.Base(d.file.name) }
 func (d *dataFileInfo) Size() int64  { return int64(len(d.file.content)) }
-func (d *dataFileInfo) Mode() fs.FileMode {
-	if d.file.isSymlink() {
-		return os.ModeSymlink | 0777
-	}
-	return 0444
-}
+func (d *dataFileInfo) Mode() fs.FileMode { return 0444 }
 func (d *dataFileInfo) ModTime() time.Time { return d.file.modTime }
 func (d *dataFileInfo) IsDir() bool        { return false }
 func (d *dataFileInfo) Sys() interface{}   { return nil }
@@ -580,273 +580,6 @@ func TestFromTar_Bad(t *testing.T) {
 	}
 }

-func TestAddSymlink_Good(t *testing.T) {
-	dn := New()
-	dn.AddSymlink("link.txt", "target.txt")
-
-	file, ok := dn.files["link.txt"]
-	if !ok {
-		t.Fatal("symlink not found in datanode")
-	}
-	if file.symlink != "target.txt" {
-		t.Errorf("expected symlink target 'target.txt', got %q", file.symlink)
-	}
-	if !file.isSymlink() {
-		t.Error("expected isSymlink() to return true")
-	}
-
-	// Stat should return ModeSymlink
-	info, err := dn.Stat("link.txt")
-	if err != nil {
-		t.Fatalf("Stat failed: %v", err)
-	}
-	if info.Mode()&os.ModeSymlink == 0 {
-		t.Error("expected ModeSymlink to be set in file mode")
-	}
-}
-
-func TestSymlinkTarRoundTrip_Good(t *testing.T) {
-	dn1 := New()
-	dn1.AddData("real.txt", []byte("real content"))
-	dn1.AddSymlink("link.txt", "real.txt")
-
-	tarball, err := dn1.ToTar()
-	if err != nil {
-		t.Fatalf("ToTar failed: %v", err)
-	}
-
-	// Verify the tar contains a symlink entry
-	tr := tar.NewReader(bytes.NewReader(tarball))
-	foundSymlink := false
-	foundFile := false
-	for {
-		header, err := tr.Next()
-		if err == io.EOF {
-			break
-		}
-		if err != nil {
-			t.Fatalf("tar.Next failed: %v", err)
-		}
-		switch header.Name {
-		case "link.txt":
-			foundSymlink = true
-			if header.Typeflag != tar.TypeSymlink {
-				t.Errorf("expected TypeSymlink, got %d", header.Typeflag)
-			}
-			if header.Linkname != "real.txt" {
-				t.Errorf("expected Linkname 'real.txt', got %q", header.Linkname)
-			}
-			if header.Mode != 0777 {
-				t.Errorf("expected mode 0777, got %o", header.Mode)
-			}
-		case "real.txt":
-			foundFile = true
-			if header.Typeflag != tar.TypeReg {
-				t.Errorf("expected TypeReg for real.txt, got %d", header.Typeflag)
-			}
-		}
-	}
-	if !foundSymlink {
-		t.Error("symlink entry not found in tarball")
-	}
-	if !foundFile {
-		t.Error("regular file entry not found in tarball")
-	}
-
-	// Round-trip: FromTar should restore the symlink
-	dn2, err := FromTar(tarball)
-	if err != nil {
-		t.Fatalf("FromTar failed: %v", err)
-	}
-
-	// Verify the regular file survived
-	exists, _ := dn2.Exists("real.txt")
-	if !exists {
-		t.Error("real.txt missing after round-trip")
-	}
-
-	// Verify the symlink survived
-	linkFile, ok := dn2.files["link.txt"]
-	if !ok {
-		t.Fatal("link.txt missing after round-trip")
-	}
-	if !linkFile.isSymlink() {
-		t.Error("expected link.txt to be a symlink after round-trip")
-	}
-	if linkFile.symlink != "real.txt" {
-		t.Errorf("expected symlink target 'real.txt', got %q", linkFile.symlink)
-	}
-
-	// Stat should still report ModeSymlink
-	info, err := dn2.Stat("link.txt")
-	if err != nil {
-		t.Fatalf("Stat failed: %v", err)
-	}
-	if info.Mode()&os.ModeSymlink == 0 {
-		t.Error("expected ModeSymlink after round-trip")
-	}
-}
-
-func TestAddSymlink_Bad(t *testing.T) {
-	dn := New()
-
-	// Empty name should be ignored
-	dn.AddSymlink("", "target.txt")
-	if len(dn.files) != 0 {
-		t.Error("expected empty name to be ignored")
-	}
-
-	// Leading slash should be stripped
-	dn.AddSymlink("/link.txt", "target.txt")
-	if _, ok := dn.files["link.txt"]; !ok {
-		t.Error("expected leading slash to be stripped")
-	}
-
-	// Directory-like name (trailing slash) should be ignored
-	dn2 := New()
-	dn2.AddSymlink("dir/", "target")
-	if len(dn2.files) != 0 {
-		t.Error("expected directory-like name to be ignored")
-	}
-}
-
-func TestToTarWriter_Good(t *testing.T) {
-	dn := New()
-	dn.AddData("foo.txt", []byte("hello"))
-	dn.AddData("bar/baz.txt", []byte("world"))
-
-	var buf bytes.Buffer
-	if err := dn.ToTarWriter(&buf); err != nil {
-		t.Fatalf("ToTarWriter failed: %v", err)
-	}
-
-	// Round-trip through FromTar to verify contents survived.
-	dn2, err := FromTar(buf.Bytes())
-	if err != nil {
-		t.Fatalf("FromTar failed: %v", err)
-	}
-
-	// Verify foo.txt
-	f1, ok := dn2.files["foo.txt"]
-	if !ok {
-		t.Fatal("foo.txt missing after round-trip")
-	}
-	if string(f1.content) != "hello" {
-		t.Errorf("expected foo.txt content 'hello', got %q", f1.content)
-	}
-
-	// Verify bar/baz.txt
-	f2, ok := dn2.files["bar/baz.txt"]
-	if !ok {
-		t.Fatal("bar/baz.txt missing after round-trip")
-	}
-	if string(f2.content) != "world" {
-		t.Errorf("expected bar/baz.txt content 'world', got %q", f2.content)
-	}
-
-	// Verify deterministic ordering: bar/baz.txt should come before foo.txt.
-	tr := tar.NewReader(bytes.NewReader(buf.Bytes()))
-	header1, err := tr.Next()
-	if err != nil {
-		t.Fatalf("tar.Next failed: %v", err)
-	}
-	header2, err := tr.Next()
-	if err != nil {
-		t.Fatalf("tar.Next failed: %v", err)
-	}
-	if header1.Name != "bar/baz.txt" || header2.Name != "foo.txt" {
-		t.Errorf("expected sorted order [bar/baz.txt, foo.txt], got [%s, %s]",
-			header1.Name, header2.Name)
-	}
-}
-
-func TestToTarWriter_Symlinks_Good(t *testing.T) {
-	dn := New()
-	dn.AddData("real.txt", []byte("real content"))
-	dn.AddSymlink("link.txt", "real.txt")
-
-	var buf bytes.Buffer
-	if err := dn.ToTarWriter(&buf); err != nil {
-		t.Fatalf("ToTarWriter failed: %v", err)
-	}
-
-	// Round-trip through FromTar.
-	dn2, err := FromTar(buf.Bytes())
-	if err != nil {
-		t.Fatalf("FromTar failed: %v", err)
-	}
-
-	// Verify regular file survived.
-	realFile, ok := dn2.files["real.txt"]
-	if !ok {
-		t.Fatal("real.txt missing after round-trip")
-	}
-	if string(realFile.content) != "real content" {
-		t.Errorf("expected 'real content', got %q", realFile.content)
-	}
-
-	// Verify symlink survived.
-	linkFile, ok := dn2.files["link.txt"]
-	if !ok {
-		t.Fatal("link.txt missing after round-trip")
-	}
-	if !linkFile.isSymlink() {
-		t.Error("expected link.txt to be a symlink")
-	}
-	if linkFile.symlink != "real.txt" {
-		t.Errorf("expected symlink target 'real.txt', got %q", linkFile.symlink)
-	}
-
-	// Also verify the raw tar entries have correct types and modes.
-	tr := tar.NewReader(bytes.NewReader(buf.Bytes()))
-	for {
-		header, err := tr.Next()
-		if err == io.EOF {
-			break
-		}
-		if err != nil {
-			t.Fatalf("tar.Next failed: %v", err)
-		}
-		switch header.Name {
-		case "link.txt":
-			if header.Typeflag != tar.TypeSymlink {
-				t.Errorf("expected TypeSymlink for link.txt, got %d", header.Typeflag)
-			}
-			if header.Linkname != "real.txt" {
-				t.Errorf("expected Linkname 'real.txt', got %q", header.Linkname)
-			}
-			if header.Mode != 0777 {
-				t.Errorf("expected mode 0777 for symlink, got %o", header.Mode)
-			}
-		case "real.txt":
-			if header.Typeflag != tar.TypeReg {
-				t.Errorf("expected TypeReg for real.txt, got %d", header.Typeflag)
-			}
-			if header.Mode != 0600 {
-				t.Errorf("expected mode 0600 for regular file, got %o", header.Mode)
-			}
-		}
-	}
-}
-
-func TestToTarWriter_Empty_Good(t *testing.T) {
-	dn := New()
-
-	var buf bytes.Buffer
-	if err := dn.ToTarWriter(&buf); err != nil {
-		t.Fatalf("ToTarWriter on empty DataNode should not error, got: %v", err)
-	}
-
-	// The buffer should contain a valid (empty) tar archive.
-	dn2, err := FromTar(buf.Bytes())
-	if err != nil {
-		t.Fatalf("FromTar on empty tar failed: %v", err)
-	}
-	if len(dn2.files) != 0 {
-		t.Errorf("expected 0 files in empty round-trip, got %d", len(dn2.files))
-	}
-}
-
 func toSortedNames(entries []fs.DirEntry) []string {
 	var names []string
 	for _, e := range entries {
@@ -8,7 +8,7 @@ import (
 	"strings"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/mocks"
+	"github.com/Snider/Borg/pkg/mocks"
 )

 func TestGetPublicRepos_Good(t *testing.T) {
@@ -8,7 +8,7 @@ import (
 	"net/url"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/mocks"
+	"github.com/Snider/Borg/pkg/mocks"
 	"github.com/google/go-github/v39/github"
 )
@@ -3,8 +3,8 @@ package mocks
 import (
 	"io"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Borg/pkg/vcs"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/vcs"
 )

 // MockGitCloner is a mock implementation of the GitCloner interface.

Binary file not shown.
@@ -10,7 +10,7 @@ import (
 	"net/http"
 	"time"

-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/smsg"
 )

 // Player provides media decryption and playback services
@@ -11,7 +11,7 @@ import (
 	"strings"
 	"sync"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 	"github.com/schollz/progressbar/v3"
 	"golang.org/x/net/html"
 )
@@ -217,9 +217,7 @@ func (p *pwaClient) DownloadAndPackagePWA(pwaURL, manifestURL string, bar *progr
 		if path == "" {
 			path = "index.html"
 		}
-		mu.Lock()
 		dn.AddData(path, body)
-		mu.Unlock()

 		// Parse HTML for additional assets
 		if parseHTML && isHTMLContent(resp.Header.Get("Content-Type"), body) {
214	pkg/smsg/abr.go
@@ -1,214 +0,0 @@
-// Package smsg - Adaptive Bitrate Streaming (ABR) support
-//
-// ABR enables multi-bitrate streaming with automatic quality switching based on
-// network conditions. Similar to HLS/DASH but with ChaCha20-Poly1305 encryption.
-//
-// Architecture:
-//   - Master manifest (.json) lists available quality variants
-//   - Each variant is a standard v3 chunked .smsg file
-//   - Same password decrypts all variants (CEK unwrapped once)
-//   - Player switches variants at chunk boundaries based on bandwidth
-package smsg
-
-import (
-	"encoding/json"
-	"fmt"
-	"os"
-	"path/filepath"
-	"sort"
-)
-
-const ABRVersion = "abr-v1"
-
-// ABRSafetyFactor is the bandwidth multiplier for variant selection.
-// Using 80% of available bandwidth prevents buffering on fluctuating networks.
-const ABRSafetyFactor = 0.8
-
-// NewABRManifest creates a new ABR manifest with the given title.
-func NewABRManifest(title string) *ABRManifest {
-	return &ABRManifest{
-		Version:    ABRVersion,
-		Title:      title,
-		Variants:   make([]Variant, 0),
-		DefaultIdx: 0,
-	}
-}
-
-// AddVariant adds a quality variant to the manifest.
-// Variants are automatically sorted by bandwidth (ascending) after adding.
-func (m *ABRManifest) AddVariant(v Variant) {
-	m.Variants = append(m.Variants, v)
-	// Sort by bandwidth ascending (lowest quality first)
-	sort.Slice(m.Variants, func(i, j int) bool {
-		return m.Variants[i].Bandwidth < m.Variants[j].Bandwidth
-	})
-	// Update default to 720p if available, otherwise middle variant
-	m.DefaultIdx = m.findDefaultVariant()
-}
-
-// findDefaultVariant finds the best default variant (prefers 720p).
-func (m *ABRManifest) findDefaultVariant() int {
-	// Prefer 720p as default
-	for i, v := range m.Variants {
-		if v.Name == "720p" || v.Height == 720 {
-			return i
-		}
-	}
-	// Otherwise use middle variant
-	if len(m.Variants) > 0 {
-		return len(m.Variants) / 2
-	}
-	return 0
-}
-
-// SelectVariant selects the best variant for the given bandwidth (bits per second).
-// Returns the index of the highest quality variant that fits within the bandwidth.
-func (m *ABRManifest) SelectVariant(bandwidthBPS int) int {
-	safeBandwidth := float64(bandwidthBPS) * ABRSafetyFactor
-
-	// Find highest quality that fits
-	selected := 0
-	for i, v := range m.Variants {
-		if float64(v.Bandwidth) <= safeBandwidth {
-			selected = i
-		}
-	}
-	return selected
-}
-
-// GetVariant returns the variant at the given index, or nil if out of range.
-func (m *ABRManifest) GetVariant(idx int) *Variant {
-	if idx < 0 || idx >= len(m.Variants) {
-		return nil
-	}
-	return &m.Variants[idx]
-}
-
-// WriteABRManifest writes the ABR manifest to a JSON file.
-func WriteABRManifest(manifest *ABRManifest, path string) error {
-	data, err := json.MarshalIndent(manifest, "", "  ")
-	if err != nil {
-		return fmt.Errorf("marshal ABR manifest: %w", err)
-	}
-
-	// Ensure directory exists
-	dir := filepath.Dir(path)
-	if err := os.MkdirAll(dir, 0755); err != nil {
-		return fmt.Errorf("create directory: %w", err)
-	}
-
-	if err := os.WriteFile(path, data, 0644); err != nil {
-		return fmt.Errorf("write ABR manifest: %w", err)
-	}
-
-	return nil
-}
-
-// ReadABRManifest reads an ABR manifest from a JSON file.
-func ReadABRManifest(path string) (*ABRManifest, error) {
-	data, err := os.ReadFile(path)
-	if err != nil {
-		return nil, fmt.Errorf("read ABR manifest: %w", err)
-	}
-
-	return ParseABRManifest(data)
-}
-
-// ParseABRManifest parses an ABR manifest from JSON bytes.
-func ParseABRManifest(data []byte) (*ABRManifest, error) {
-	var manifest ABRManifest
-	if err := json.Unmarshal(data, &manifest); err != nil {
-		return nil, fmt.Errorf("parse ABR manifest: %w", err)
-	}
-
-	// Validate version
-	if manifest.Version != ABRVersion {
-		return nil, fmt.Errorf("unsupported ABR version: %s (expected %s)", manifest.Version, ABRVersion)
-	}
-
-	return &manifest, nil
-}
-
-// VariantFromSMSG creates a Variant from an existing .smsg file.
-// It reads the header to extract chunk count and file size.
-func VariantFromSMSG(name string, bandwidth, width, height int, smsgPath string) (*Variant, error) {
-	// Read file to get size and chunk info
-	data, err := os.ReadFile(smsgPath)
-	if err != nil {
-		return nil, fmt.Errorf("read smsg file: %w", err)
-	}
-
-	// Get header to extract chunk count
-	header, err := GetV3Header(data)
-	if err != nil {
-		return nil, fmt.Errorf("parse smsg header: %w", err)
-	}
-
-	chunkCount := 0
-	if header.Chunked != nil {
-		chunkCount = header.Chunked.TotalChunks
-	}
-
-	return &Variant{
-		Name:       name,
-		Bandwidth:  bandwidth,
-		Width:      width,
-		Height:     height,
-		Codecs:     "avc1.640028,mp4a.40.2", // Default H.264 + AAC
-		URL:        filepath.Base(smsgPath),
-		ChunkCount: chunkCount,
-		FileSize:   int64(len(data)),
-	}, nil
-}
-
-// ABRBandwidthEstimator tracks download speeds for adaptive quality selection.
-type ABRBandwidthEstimator struct {
-	samples    []int // bandwidth samples in bps
-	maxSamples int
-}
-
-// NewABRBandwidthEstimator creates a new bandwidth estimator.
-func NewABRBandwidthEstimator(maxSamples int) *ABRBandwidthEstimator {
-	if maxSamples <= 0 {
-		maxSamples = 10
-	}
-	return &ABRBandwidthEstimator{
-		samples:    make([]int, 0, maxSamples),
-		maxSamples: maxSamples,
-	}
-}
-
-// RecordSample records a bandwidth sample from a download.
-// bytes is the number of bytes downloaded, durationMs is the time in milliseconds.
-func (e *ABRBandwidthEstimator) RecordSample(bytes int, durationMs int) {
-	if durationMs <= 0 {
-		return
-	}
-	// Calculate bits per second: (bytes * 8 * 1000) / durationMs
-	bps := (bytes * 8 * 1000) / durationMs
-	e.samples = append(e.samples, bps)
-	if len(e.samples) > e.maxSamples {
-		e.samples = e.samples[1:]
-	}
-}
-
-// Estimate returns the estimated bandwidth in bits per second.
-// Uses average of recent samples, or 1 Mbps default if no samples.
-func (e *ABRBandwidthEstimator) Estimate() int {
-	if len(e.samples) == 0 {
-		return 1000000 // 1 Mbps default
-	}
-
-	// Use average of last 3 samples (or all if fewer)
-	count := 3
-	if len(e.samples) < count {
-		count = len(e.samples)
-	}
-	recent := e.samples[len(e.samples)-count:]
-
-	sum := 0
-	for _, s := range recent {
-		sum += s
-	}
-	return sum / count
-}
@@ -1,23 +1,5 @@
 package smsg

-// SMSG (Secure Message) provides ChaCha20-Poly1305 authenticated encryption.
-//
-// IMPORTANT: Nonce handling for developers
-// =========================================
-// Enchantrix embeds the nonce directly in the ciphertext:
-//
-//	[24-byte nonce][encrypted data][16-byte auth tag]
-//
-// The nonce is NOT transmitted separately in headers. It is:
-//   - Generated fresh (random) for each encryption
-//   - Extracted automatically from ciphertext during decryption
-//   - Safe to transmit (public) - only the KEY must remain secret
-//
-// This means wrapped keys, encrypted payloads, etc. are self-contained.
-// You only need the correct key to decrypt - no nonce management required.
-//
-// See: forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix/crypto_sigil.go
-
 import (
 	"bytes"
 	"compress/gzip"
@@ -29,8 +11,8 @@ import (
 	"io"
 	"time"

-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Enchantrix/pkg/trix"
 	"github.com/klauspost/compress/zstd"
 )
@@ -1,827 +0,0 @@
-package smsg
-
-// V3 Streaming Support with LTHN Rolling Keys
-//
-// This file implements zero-trust streaming where:
-//   - Content is encrypted once with a random CEK (Content Encryption Key)
-//   - CEK is wrapped (encrypted) with time-bound stream keys
-//   - Stream keys are derived using LTHN(date:license:fingerprint)
-//   - Rolling window: today and tomorrow keys are valid (24-48hr window)
-//   - Keys auto-expire - no revocation needed
-//
-// Server flow:
-//  1. Generate random CEK
-//  2. Encrypt content with CEK
-//  3. For today & tomorrow: wrap CEK with DeriveStreamKey(date, license, fingerprint)
-//  4. Store wrapped keys in header
-//
-// Client flow:
-//  1. Derive stream key for today (or tomorrow)
-//  2. Try to unwrap CEK from header
-//  3. Decrypt content with CEK
-
-import (
-	"crypto/rand"
-	"crypto/sha256"
-	"encoding/base64"
-	"encoding/binary"
-	"encoding/json"
-	"fmt"
-	"time"
-
-	"forge.lthn.ai/Snider/Enchantrix/pkg/crypt"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
-)
-
-// StreamParams contains the parameters needed for stream key derivation
-type StreamParams struct {
-	License     string  // User's license identifier
-	Fingerprint string  // Device/session fingerprint
-	Cadence     Cadence // Key rotation cadence (default: daily)
-	ChunkSize   int     // Optional: chunk size for decrypt-while-downloading (0 = no chunking)
-}
-
-// DeriveStreamKey derives a 32-byte ChaCha key from date, license, and fingerprint.
-// Uses LTHN hash which is rainbow-table resistant (salt derived from input itself).
-//
-// The derived key is: SHA256(LTHN("YYYY-MM-DD:license:fingerprint"))
-func DeriveStreamKey(date, license, fingerprint string) []byte {
-	// Build input string
-	input := fmt.Sprintf("%s:%s:%s", date, license, fingerprint)
-
-	// Use Enchantrix crypt service for LTHN hash
-	cryptService := crypt.NewService()
-	lthnHash := cryptService.Hash(crypt.LTHN, input)
-
-	// LTHN returns hex string, hash it again to get 32 bytes for ChaCha
-	key := sha256.Sum256([]byte(lthnHash))
-	return key[:]
-}
-
-// GetRollingDates returns today and tomorrow's date strings in YYYY-MM-DD format
-// This is the default daily cadence.
-func GetRollingDates() (current, next string) {
-	return GetRollingPeriods(CadenceDaily, time.Now().UTC())
-}
-
-// GetRollingDatesAt returns today and tomorrow relative to a specific time
-func GetRollingDatesAt(t time.Time) (current, next string) {
-	return GetRollingPeriods(CadenceDaily, t.UTC())
-}
-
-// GetRollingPeriods returns the current and next period strings based on cadence.
-// The period string format varies by cadence:
-//   - daily: "2006-01-02"
-//   - 12h: "2006-01-02-AM" or "2006-01-02-PM"
-//   - 6h: "2006-01-02-00", "2006-01-02-06", "2006-01-02-12", "2006-01-02-18"
-//   - 1h: "2006-01-02-15" (hour in 24h format)
-func GetRollingPeriods(cadence Cadence, t time.Time) (current, next string) {
-	t = t.UTC()
-
-	switch cadence {
-	case CadenceHalfDay:
-		// 12-hour periods: AM (00:00-11:59) and PM (12:00-23:59)
-		date := t.Format("2006-01-02")
-		if t.Hour() < 12 {
-			current = date + "-AM"
-			next = date + "-PM"
-		} else {
-			current = date + "-PM"
-			next = t.AddDate(0, 0, 1).Format("2006-01-02") + "-AM"
-		}
-
-	case CadenceQuarter:
-		// 6-hour periods: 00, 06, 12, 18
-		date := t.Format("2006-01-02")
-		hour := t.Hour()
-		period := (hour / 6) * 6
-		nextPeriod := period + 6
-
-		current = fmt.Sprintf("%s-%02d", date, period)
-		if nextPeriod >= 24 {
-			next = fmt.Sprintf("%s-%02d", t.AddDate(0, 0, 1).Format("2006-01-02"), 0)
-		} else {
-			next = fmt.Sprintf("%s-%02d", date, nextPeriod)
-		}
-
-	case CadenceHourly:
-		// Hourly periods
-		current = t.Format("2006-01-02-15")
-		next = t.Add(time.Hour).Format("2006-01-02-15")
-
-	default: // CadenceDaily or empty
-		current = t.Format("2006-01-02")
-		next = t.AddDate(0, 0, 1).Format("2006-01-02")
-	}
-
-	return
-}
-
-// GetCadenceWindowDuration returns the duration of one period for a cadence
-func GetCadenceWindowDuration(cadence Cadence) time.Duration {
-	switch cadence {
-	case CadenceHourly:
-		return time.Hour
-	case CadenceQuarter:
-		return 6 * time.Hour
-	case CadenceHalfDay:
-		return 12 * time.Hour
-	default: // CadenceDaily
-		return 24 * time.Hour
-	}
-}
-
-// WrapCEK wraps a Content Encryption Key with a stream key
-// Returns base64-encoded wrapped key (includes nonce)
-func WrapCEK(cek, streamKey []byte) (string, error) {
-	sigil, err := enchantrix.NewChaChaPolySigil(streamKey)
-	if err != nil {
-		return "", fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	wrapped, err := sigil.In(cek)
-	if err != nil {
-		return "", fmt.Errorf("failed to wrap CEK: %w", err)
-	}
-
-	return base64.StdEncoding.EncodeToString(wrapped), nil
-}
-
-// UnwrapCEK unwraps a Content Encryption Key using a stream key
-// Takes base64-encoded wrapped key, returns raw CEK bytes
-func UnwrapCEK(wrappedB64 string, streamKey []byte) ([]byte, error) {
-	wrapped, err := base64.StdEncoding.DecodeString(wrappedB64)
-	if err != nil {
-		return nil, fmt.Errorf("failed to decode wrapped key: %w", err)
-	}
-
-	sigil, err := enchantrix.NewChaChaPolySigil(streamKey)
-	if err != nil {
-		return nil, fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	cek, err := sigil.Out(wrapped)
-	if err != nil {
-		return nil, ErrDecryptionFailed
-	}
-
-	return cek, nil
-}
-
-// GenerateCEK generates a random 32-byte Content Encryption Key
-func GenerateCEK() ([]byte, error) {
-	cek := make([]byte, 32)
-	if _, err := rand.Read(cek); err != nil {
-		return nil, fmt.Errorf("failed to generate CEK: %w", err)
-	}
-	return cek, nil
-}
-
-// EncryptV3 encrypts a message using v3 streaming format with rolling keys.
-// The content is encrypted with a random CEK, which is then wrapped with
-// stream keys for today and tomorrow.
-//
-// When params.ChunkSize > 0, content is split into independently decryptable
-// chunks, enabling decrypt-while-downloading and seeking.
-func EncryptV3(msg *Message, params *StreamParams, manifest *Manifest) ([]byte, error) {
-	if params == nil || params.License == "" {
-		return nil, ErrLicenseRequired
-	}
-	if msg.Body == "" && len(msg.Attachments) == 0 {
-		return nil, ErrEmptyMessage
-	}
-
-	// Set timestamp if not set
-	if msg.Timestamp == 0 {
-		msg.Timestamp = time.Now().Unix()
-	}
-
-	// Generate random CEK
-	cek, err := GenerateCEK()
-	if err != nil {
-		return nil, err
-	}
-
-	// Determine cadence (default to daily if not specified)
-	cadence := params.Cadence
-	if cadence == "" {
-		cadence = CadenceDaily
-	}
-
-	// Get rolling periods based on cadence
-	current, next := GetRollingPeriods(cadence, time.Now().UTC())
-
-	// Wrap CEK with current period's stream key
-	currentKey := DeriveStreamKey(current, params.License, params.Fingerprint)
-	wrappedCurrent, err := WrapCEK(cek, currentKey)
-	if err != nil {
-		return nil, fmt.Errorf("failed to wrap CEK for current period: %w", err)
-	}
-
-	// Wrap CEK with next period's stream key
-	nextKey := DeriveStreamKey(next, params.License, params.Fingerprint)
-	wrappedNext, err := WrapCEK(cek, nextKey)
-	if err != nil {
-		return nil, fmt.Errorf("failed to wrap CEK for next period: %w", err)
-	}
-
-	// Check if chunked mode requested
-	if params.ChunkSize > 0 {
-		return encryptV3Chunked(msg, params, manifest, cek, cadence, current, next, wrappedCurrent, wrappedNext)
-	}
-
-	// Non-chunked v3 (original behavior)
-	return encryptV3Standard(msg, params, manifest, cek, cadence, current, next, wrappedCurrent, wrappedNext)
-}
-
-// encryptV3Standard encrypts as a single block (original v3 behavior)
-func encryptV3Standard(msg *Message, params *StreamParams, manifest *Manifest, cek []byte, cadence Cadence, current, next, wrappedCurrent, wrappedNext string) ([]byte, error) {
-	// Build v3 payload (similar to v2 but encrypted with CEK)
-	payload, attachmentData, err := buildV3Payload(msg)
-	if err != nil {
-		return nil, err
-	}
-
-	// Compress payload
-	compressed, err := zstdCompress(payload)
-	if err != nil {
-		return nil, fmt.Errorf("compression failed: %w", err)
-	}
-
-	// Encrypt with CEK
-	sigil, err := enchantrix.NewChaChaPolySigil(cek)
-	if err != nil {
-		return nil, fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	encrypted, err := sigil.In(compressed)
-	if err != nil {
-		return nil, fmt.Errorf("encryption failed: %w", err)
-	}
-
-	// Encrypt attachment data with CEK
-	encryptedAttachments, err := sigil.In(attachmentData)
-	if err != nil {
-		return nil, fmt.Errorf("attachment encryption failed: %w", err)
-	}
-
-	// Create header with wrapped keys
-	headerMap := map[string]interface{}{
-		"version":     Version,
-		"algorithm":   "chacha20poly1305",
-		"format":      FormatV3,
-		"compression": CompressionZstd,
-		"keyMethod":   KeyMethodLTHNRolling,
-		"cadence":     string(cadence),
-		"wrappedKeys": []WrappedKey{
-			{Date: current, Wrapped: wrappedCurrent},
-			{Date: next, Wrapped: wrappedNext},
-		},
-	}
-
-	if manifest != nil {
-		if manifest.IssuedAt == 0 {
-			manifest.IssuedAt = time.Now().Unix()
-		}
-		headerMap["manifest"] = manifest
-	}
-
-	// Build v3 binary format: [4-byte json len][json header][encrypted payload][encrypted attachments]
-	headerJSON, err := json.Marshal(headerMap)
-	if err != nil {
-		return nil, fmt.Errorf("failed to marshal header: %w", err)
-	}
-
-	// Calculate total size
-	totalSize := 4 + len(headerJSON) + 4 + len(encrypted) + len(encryptedAttachments)
-	output := make([]byte, 0, totalSize)
-
-	// Write header length (4 bytes, big-endian)
-	headerLen := make([]byte, 4)
-	binary.BigEndian.PutUint32(headerLen, uint32(len(headerJSON)))
-	output = append(output, headerLen...)
-
-	// Write header JSON
-	output = append(output, headerJSON...)
-
-	// Write encrypted payload length (4 bytes, big-endian)
-	payloadLen := make([]byte, 4)
-	binary.BigEndian.PutUint32(payloadLen, uint32(len(encrypted)))
-	output = append(output, payloadLen...)
-
-	// Write encrypted payload
-	output = append(output, encrypted...)
-
-	// Write encrypted attachments
-	output = append(output, encryptedAttachments...)
-
-	// Wrap in trix container
-	t := &trix.Trix{
-		Header:  headerMap,
-		Payload: output,
-	}
-
-	return trix.Encode(t, Magic, nil)
-}
-
-// encryptV3Chunked encrypts content into independently decryptable chunks
-func encryptV3Chunked(msg *Message, params *StreamParams, manifest *Manifest, cek []byte, cadence Cadence, current, next, wrappedCurrent, wrappedNext string) ([]byte, error) {
-	chunkSize := params.ChunkSize
-
-	// Build raw content to chunk: metadata JSON + binary attachments
-	metaJSON, attachmentData, err := buildV3Payload(msg)
-	if err != nil {
-		return nil, err
-	}
-
-	// Combine into single byte slice for chunking
-	rawContent := append(metaJSON, attachmentData...)
-	totalSize := int64(len(rawContent))
-
-	// Create sigil with CEK for chunk encryption
-	sigil, err := enchantrix.NewChaChaPolySigil(cek)
-	if err != nil {
-		return nil, fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	// Encrypt in chunks
-	var chunks [][]byte
-	var chunkIndex []ChunkInfo
-	offset := 0
-
-	for i := 0; offset < len(rawContent); i++ {
-		// Determine this chunk's size
-		end := offset + chunkSize
-		if end > len(rawContent) {
-			end = len(rawContent)
-		}
-		chunkData := rawContent[offset:end]
-
-		// Encrypt chunk (each gets its own nonce)
-		encryptedChunk, err := sigil.In(chunkData)
-		if err != nil {
-			return nil, fmt.Errorf("failed to encrypt chunk %d: %w", i, err)
-		}
-
-		chunks = append(chunks, encryptedChunk)
-		chunkIndex = append(chunkIndex, ChunkInfo{
-			Offset: 0, // Will be calculated after we know all sizes
-			Size:   len(encryptedChunk),
-		})
-
-		offset = end
-	}
-
-	// Calculate chunk offsets
-	currentOffset := 0
-	for i := range chunkIndex {
-		chunkIndex[i].Offset = currentOffset
-		currentOffset += chunkIndex[i].Size
-	}
-
-	// Build header with chunked info
-	chunkedInfo := &ChunkedInfo{
-		ChunkSize:   chunkSize,
-		TotalChunks: len(chunks),
-		TotalSize:   totalSize,
-		Index:       chunkIndex,
-	}
-
-	headerMap := map[string]interface{}{
-		"version":     Version,
-		"algorithm":   "chacha20poly1305",
-		"format":      FormatV3,
-		"compression": CompressionNone, // No compression in chunked mode (per-chunk not supported yet)
-		"keyMethod":   KeyMethodLTHNRolling,
-		"cadence":     string(cadence),
-		"chunked":     chunkedInfo,
-		"wrappedKeys": []WrappedKey{
-			{Date: current, Wrapped: wrappedCurrent},
-			{Date: next, Wrapped: wrappedNext},
-		},
-	}
-
-	if manifest != nil {
-		if manifest.IssuedAt == 0 {
-			manifest.IssuedAt = time.Now().Unix()
-		}
-		headerMap["manifest"] = manifest
-	}
-
-	// Concatenate all encrypted chunks
-	var payload []byte
-	for _, chunk := range chunks {
-		payload = append(payload, chunk...)
-	}
-
-	// Wrap in trix container
-	t := &trix.Trix{
-		Header:  headerMap,
-		Payload: payload,
-	}
-
-	return trix.Encode(t, Magic, nil)
-}
-
-// DecryptV3 decrypts a v3 streaming message using rolling keys.
-// It tries today's key first, then tomorrow's key.
-// Automatically handles both chunked and non-chunked v3 formats.
-func DecryptV3(data []byte, params *StreamParams) (*Message, *Header, error) {
-	if params == nil || params.License == "" {
-		return nil, nil, ErrLicenseRequired
-	}
-
-	// Decode trix container
-	t, err := trix.Decode(data, Magic, nil)
-	if err != nil {
-		return nil, nil, fmt.Errorf("failed to decode container: %w", err)
-	}
-
-	// Parse header
-	headerJSON, err := json.Marshal(t.Header)
-	if err != nil {
-		return nil, nil, fmt.Errorf("failed to marshal header: %w", err)
-	}
-
-	var header Header
-	if err := json.Unmarshal(headerJSON, &header); err != nil {
-		return nil, nil, fmt.Errorf("failed to parse header: %w", err)
-	}
-
-	// Verify v3 format
-	if header.Format != FormatV3 {
-		return nil, nil, fmt.Errorf("expected v3 format, got: %s", header.Format)
-	}
-
-	if header.KeyMethod != KeyMethodLTHNRolling {
-		return nil, nil, fmt.Errorf("unsupported key method: %s", header.KeyMethod)
-	}
-
-	// Determine cadence from header (or use params, or default to daily)
-	cadence := header.Cadence
-	if cadence == "" && params.Cadence != "" {
-		cadence = params.Cadence
-	}
-	if cadence == "" {
-		cadence = CadenceDaily
-	}
-
-	// Try to unwrap CEK with rolling keys
-	cek, err := tryUnwrapCEK(header.WrappedKeys, params, cadence)
-	if err != nil {
-		return nil, &header, err
-	}
-
-	// Check if chunked format
-	if header.Chunked != nil {
-		return decryptV3Chunked(t.Payload, cek, &header)
-	}
-
-	// Non-chunked v3
-	return decryptV3Standard(t.Payload, cek, &header)
-}
-
-// decryptV3Standard handles non-chunked v3 decryption
-func decryptV3Standard(payload []byte, cek []byte, header *Header) (*Message, *Header, error) {
-	if len(payload) < 8 {
-		return nil, header, ErrInvalidPayload
-	}
-
-	// Read header length (skip - we already parsed from trix header)
-	headerLen := binary.BigEndian.Uint32(payload[:4])
-	pos := 4 + int(headerLen)
-
-	if len(payload) < pos+4 {
-		return nil, header, ErrInvalidPayload
-	}
-
-	// Read encrypted payload length
-	encryptedLen := binary.BigEndian.Uint32(payload[pos : pos+4])
-	pos += 4
-
-	if len(payload) < pos+int(encryptedLen) {
-		return nil, header, ErrInvalidPayload
-	}
-
-	// Extract encrypted payload and attachments
-	encryptedPayload := payload[pos : pos+int(encryptedLen)]
-	encryptedAttachments := payload[pos+int(encryptedLen):]
-
-	// Decrypt with CEK
-	sigil, err := enchantrix.NewChaChaPolySigil(cek)
-	if err != nil {
-		return nil, header, fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	compressed, err := sigil.Out(encryptedPayload)
-	if err != nil {
-		return nil, header, ErrDecryptionFailed
-	}
-
-	// Decompress
-	var decompressed []byte
-	if header.Compression == CompressionZstd {
-		decompressed, err = zstdDecompress(compressed)
-		if err != nil {
-			return nil, header, fmt.Errorf("decompression failed: %w", err)
-		}
-	} else {
-		decompressed = compressed
-	}
-
-	// Parse message
-	var msg Message
-	if err := json.Unmarshal(decompressed, &msg); err != nil {
-		return nil, header, fmt.Errorf("failed to parse message: %w", err)
-	}
-
-	// Decrypt attachments if present
-	if len(encryptedAttachments) > 0 {
-		attachmentData, err := sigil.Out(encryptedAttachments)
-		if err != nil {
-			return nil, header, fmt.Errorf("attachment decryption failed: %w", err)
-		}
-
-		// Restore attachment content from binary data
-		if err := restoreV3Attachments(&msg, attachmentData); err != nil {
-			return nil, header, err
-		}
-	}
-
-	return &msg, header, nil
-}
-
-// decryptV3Chunked handles chunked v3 decryption
-func decryptV3Chunked(payload []byte, cek []byte, header *Header) (*Message, *Header, error) {
-	if header.Chunked == nil {
-		return nil, header, fmt.Errorf("v3 chunked format missing chunked info")
-	}
-
-	// Create sigil for decryption
-	sigil, err := enchantrix.NewChaChaPolySigil(cek)
-	if err != nil {
-		return nil, header, fmt.Errorf("failed to create sigil: %w", err)
-	}
-
-	// Decrypt all chunks
-	var decrypted []byte
-
-	for i, ci := range header.Chunked.Index {
-		if ci.Offset+ci.Size > len(payload) {
-			return nil, header, fmt.Errorf("chunk %d out of bounds", i)
-		}
-
-		chunkData := payload[ci.Offset : ci.Offset+ci.Size]
-		plaintext, err := sigil.Out(chunkData)
-		if err != nil {
-			return nil, header, fmt.Errorf("failed to decrypt chunk %d: %w", i, err)
-		}
-
-		decrypted = append(decrypted, plaintext...)
-	}
-
-	// Parse decrypted content (metadata JSON + attachments)
-	var msg Message
-	if err := json.Unmarshal(decrypted, &msg); err != nil {
-		// First part should be JSON, but may be mixed with binary
-		// Try to find JSON boundary
-		for i := 0; i < len(decrypted); i++ {
-			if decrypted[i] == '}' {
-				if err := json.Unmarshal(decrypted[:i+1], &msg); err == nil {
-					// Found valid JSON, rest is attachment data
-					if err := restoreV3Attachments(&msg, decrypted[i+1:]); err != nil {
-						return nil, header, err
-					}
-					return &msg, header, nil
-				}
-			}
-		}
-		return nil, header, fmt.Errorf("failed to parse message: %w", err)
-	}
-
-	return &msg, header, nil
-}
-
-// tryUnwrapCEK attempts to unwrap the CEK using current or next period's key
-func tryUnwrapCEK(wrappedKeys []WrappedKey, params *StreamParams, cadence Cadence) ([]byte, error) {
-	current, next := GetRollingPeriods(cadence, time.Now().UTC())
-
-	// Build map of available wrapped keys by period
-	keysByPeriod := make(map[string]string)
-	for _, wk := range wrappedKeys {
-		keysByPeriod[wk.Date] = wk.Wrapped
-	}
-
-	// Try current period's key first
-	if wrapped, ok := keysByPeriod[current]; ok {
-		streamKey := DeriveStreamKey(current, params.License, params.Fingerprint)
-		if cek, err := UnwrapCEK(wrapped, streamKey); err == nil {
-			return cek, nil
-		}
-	}
-
-	// Try next period's key
-	if wrapped, ok := keysByPeriod[next]; ok {
-		streamKey := DeriveStreamKey(next, params.License, params.Fingerprint)
|
|
||||||
if cek, err := UnwrapCEK(wrapped, streamKey); err == nil {
|
|
||||||
return cek, nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
return nil, ErrNoValidKey
|
|
||||||
}
|
|
||||||
|
|
||||||

// buildV3Payload builds the message JSON and binary attachment data
func buildV3Payload(msg *Message) ([]byte, []byte, error) {
	// Create a copy of the message without attachment content.
	// The attachments slice is re-allocated so that clearing Content
	// below does not mutate the caller's message.
	msgCopy := *msg
	msgCopy.Attachments = append(msgCopy.Attachments[:0:0], msg.Attachments...)
	var attachmentData []byte

	for i := range msgCopy.Attachments {
		att := &msgCopy.Attachments[i]
		if att.Content != "" {
			// Decode base64 content to binary
			data, err := base64.StdEncoding.DecodeString(att.Content)
			if err != nil {
				return nil, nil, fmt.Errorf("failed to decode attachment %s: %w", att.Name, err)
			}
			attachmentData = append(attachmentData, data...)
			att.Content = "" // Clear content; it will be restored on decrypt
		}
	}

	// Marshal the message (without attachment content)
	payload, err := json.Marshal(&msgCopy)
	if err != nil {
		return nil, nil, fmt.Errorf("failed to marshal message: %w", err)
	}

	return payload, attachmentData, nil
}

// restoreV3Attachments restores attachment content from decrypted binary data
func restoreV3Attachments(msg *Message, data []byte) error {
	offset := 0
	for i := range msg.Attachments {
		att := &msg.Attachments[i]
		if att.Size > 0 {
			if offset+att.Size > len(data) {
				return fmt.Errorf("attachment data truncated for %s", att.Name)
			}
			att.Content = base64.StdEncoding.EncodeToString(data[offset : offset+att.Size])
			offset += att.Size
		}
	}
	return nil
}

// =============================================================================
// V3 Chunked Streaming Helpers
// =============================================================================
//
// When StreamParams.ChunkSize > 0, the v3 format uses independently decryptable
// chunks, enabling:
//   - Decrypt-while-downloading: play media as it arrives
//   - HTTP Range requests: fetch specific chunks by byte range
//   - Seekable playback: jump to any position without decrypting everything
//
// Each chunk is encrypted with the same CEK but has its own nonce,
// making it independently decryptable.

// DecryptV3Chunk decrypts a single chunk by index.
// This enables streaming playback and seeking without decrypting the entire file.
//
// Usage for streaming:
//
//	header, _ := GetV3Header(data)
//	cek, _ := UnwrapCEKFromHeader(header, params)
//	payload, _ := GetV3Payload(data)
//	for i := 0; i < header.Chunked.TotalChunks; i++ {
//		chunk, _ := DecryptV3Chunk(payload, cek, i, header.Chunked)
//		player.Write(chunk)
//	}
func DecryptV3Chunk(payload []byte, cek []byte, chunkIndex int, chunked *ChunkedInfo) ([]byte, error) {
	if chunked == nil {
		return nil, fmt.Errorf("chunked info is nil")
	}
	if chunkIndex < 0 || chunkIndex >= len(chunked.Index) {
		return nil, fmt.Errorf("chunk index %d out of range [0, %d)", chunkIndex, len(chunked.Index))
	}

	ci := chunked.Index[chunkIndex]
	if ci.Offset+ci.Size > len(payload) {
		return nil, fmt.Errorf("chunk %d data out of bounds", chunkIndex)
	}

	// Create sigil and decrypt
	sigil, err := enchantrix.NewChaChaPolySigil(cek)
	if err != nil {
		return nil, fmt.Errorf("failed to create sigil: %w", err)
	}

	chunkData := payload[ci.Offset : ci.Offset+ci.Size]
	return sigil.Out(chunkData)
}

// GetV3Header extracts the header from a v3 file without decrypting.
// Useful for getting the chunk index for Range requests.
func GetV3Header(data []byte) (*Header, error) {
	t, err := trix.Decode(data, Magic, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to decode container: %w", err)
	}

	headerJSON, err := json.Marshal(t.Header)
	if err != nil {
		return nil, fmt.Errorf("failed to marshal header: %w", err)
	}

	var header Header
	if err := json.Unmarshal(headerJSON, &header); err != nil {
		return nil, fmt.Errorf("failed to parse header: %w", err)
	}

	if header.Format != FormatV3 {
		return nil, fmt.Errorf("not a v3 format: %s", header.Format)
	}

	return &header, nil
}

// UnwrapCEKFromHeader unwraps the CEK from a v3 header using stream params.
// Returns the CEK for use with DecryptV3Chunk.
func UnwrapCEKFromHeader(header *Header, params *StreamParams) ([]byte, error) {
	if params == nil || params.License == "" {
		return nil, ErrLicenseRequired
	}

	cadence := header.Cadence
	if cadence == "" && params.Cadence != "" {
		cadence = params.Cadence
	}
	if cadence == "" {
		cadence = CadenceDaily
	}

	return tryUnwrapCEK(header.WrappedKeys, params, cadence)
}

// GetV3Payload extracts just the payload from a v3 file.
// Use with DecryptV3Chunk for individual chunk decryption.
func GetV3Payload(data []byte) ([]byte, error) {
	t, err := trix.Decode(data, Magic, nil)
	if err != nil {
		return nil, fmt.Errorf("failed to decode container: %w", err)
	}
	return t.Payload, nil
}

// GetV3HeaderFromPrefix parses the v3 header from just the file prefix.
// This enables streaming: the header can be parsed as soon as the first few KB arrive.
// Returns the header and the payload offset (where the encrypted chunks start).
//
// File format:
//   - Bytes 0-3: Magic "SMSG"
//   - Bytes 4-5: Version (2-byte little endian)
//   - Bytes 6-8: Header length (3-byte big endian)
//   - Bytes 9+:  Header JSON
//   - Payload starts at offset 9 + headerLen
func GetV3HeaderFromPrefix(data []byte) (*Header, int, error) {
	// Need at least magic + version + header length indicator
	if len(data) < 9 {
		return nil, 0, fmt.Errorf("need at least 9 bytes, got %d", len(data))
	}

	// Check magic
	if string(data[0:4]) != Magic {
		return nil, 0, ErrInvalidMagic
	}

	// Parse header length (3 bytes big endian at offset 6-8)
	headerLen := int(data[6])<<16 | int(data[7])<<8 | int(data[8])
	if headerLen <= 0 || headerLen > 16*1024*1024 {
		return nil, 0, fmt.Errorf("invalid header length: %d", headerLen)
	}

	// Calculate payload offset
	payloadOffset := 9 + headerLen

	// Check that we have enough data for the header
	if len(data) < payloadOffset {
		return nil, 0, fmt.Errorf("need %d bytes for header, got %d", payloadOffset, len(data))
	}

	// Parse header JSON
	headerJSON := data[9:payloadOffset]
	var header Header
	if err := json.Unmarshal(headerJSON, &header); err != nil {
		return nil, 0, fmt.Errorf("failed to parse header JSON: %w", err)
	}

	if header.Format != FormatV3 {
		return nil, 0, fmt.Errorf("not a v3 format: %s", header.Format)
	}

	return &header, payloadOffset, nil
}
@ -1,677 +0,0 @@
package smsg

import (
	"testing"
	"time"
)

func TestDeriveStreamKey(t *testing.T) {
	// Same inputs must produce the same key
	key1 := DeriveStreamKey("2026-01-12", "license123", "fingerprint456")
	key2 := DeriveStreamKey("2026-01-12", "license123", "fingerprint456")

	if len(key1) != 32 {
		t.Errorf("Key length = %d, want 32", len(key1))
	}

	if string(key1) != string(key2) {
		t.Error("Same inputs should produce same key")
	}

	// Different dates must produce different keys
	key3 := DeriveStreamKey("2026-01-13", "license123", "fingerprint456")
	if string(key1) == string(key3) {
		t.Error("Different dates should produce different keys")
	}

	// Different licenses must produce different keys
	key4 := DeriveStreamKey("2026-01-12", "license789", "fingerprint456")
	if string(key1) == string(key4) {
		t.Error("Different licenses should produce different keys")
	}
}

func TestGetRollingDates(t *testing.T) {
	today, tomorrow := GetRollingDates()

	// Parse dates to verify format
	todayTime, err := time.Parse("2006-01-02", today)
	if err != nil {
		t.Fatalf("Invalid today format: %v", err)
	}

	tomorrowTime, err := time.Parse("2006-01-02", tomorrow)
	if err != nil {
		t.Fatalf("Invalid tomorrow format: %v", err)
	}

	// Tomorrow should be 1 day after today
	diff := tomorrowTime.Sub(todayTime)
	if diff != 24*time.Hour {
		t.Errorf("Tomorrow should be 24h after today, got %v", diff)
	}
}

func TestWrapUnwrapCEK(t *testing.T) {
	// Generate a test CEK
	cek, err := GenerateCEK()
	if err != nil {
		t.Fatalf("GenerateCEK failed: %v", err)
	}

	// Generate a stream key
	streamKey := DeriveStreamKey("2026-01-12", "test-license", "test-fp")

	// Wrap CEK
	wrapped, err := WrapCEK(cek, streamKey)
	if err != nil {
		t.Fatalf("WrapCEK failed: %v", err)
	}

	// Unwrap CEK
	unwrapped, err := UnwrapCEK(wrapped, streamKey)
	if err != nil {
		t.Fatalf("UnwrapCEK failed: %v", err)
	}

	// Verify the CEK matches
	if string(cek) != string(unwrapped) {
		t.Error("Unwrapped CEK doesn't match original")
	}

	// The wrong key should fail
	wrongKey := DeriveStreamKey("2026-01-12", "wrong-license", "test-fp")
	_, err = UnwrapCEK(wrapped, wrongKey)
	if err == nil {
		t.Error("UnwrapCEK with wrong key should fail")
	}
}

func TestEncryptDecryptV3RoundTrip(t *testing.T) {
	msg := NewMessage("Hello, this is a v3 streaming message!").
		WithSubject("V3 Test").
		WithFrom("stream@dapp.fm")

	params := &StreamParams{
		License:     "test-license-123",
		Fingerprint: "device-fp-456",
	}

	manifest := NewManifest("Test Track")
	manifest.Artist = "Test Artist"
	manifest.LicenseType = "stream"

	// Encrypt
	encrypted, err := EncryptV3(msg, params, manifest)
	if err != nil {
		t.Fatalf("EncryptV3 failed: %v", err)
	}

	// Decrypt with the same params
	decrypted, header, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 failed: %v", err)
	}

	// Verify message content
	if decrypted.Body != msg.Body {
		t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
	}
	if decrypted.Subject != msg.Subject {
		t.Errorf("Subject = %q, want %q", decrypted.Subject, msg.Subject)
	}

	// Verify header
	if header.Format != FormatV3 {
		t.Errorf("Format = %q, want %q", header.Format, FormatV3)
	}
	if header.KeyMethod != KeyMethodLTHNRolling {
		t.Errorf("KeyMethod = %q, want %q", header.KeyMethod, KeyMethodLTHNRolling)
	}
	if len(header.WrappedKeys) != 2 {
		t.Errorf("WrappedKeys count = %d, want 2", len(header.WrappedKeys))
	}

	// Verify manifest
	if header.Manifest == nil {
		t.Fatal("Manifest is nil")
	}
	if header.Manifest.Title != "Test Track" {
		t.Errorf("Manifest.Title = %q, want %q", header.Manifest.Title, "Test Track")
	}
}

func TestDecryptV3WrongLicense(t *testing.T) {
	msg := NewMessage("Secret content")

	params := &StreamParams{
		License:     "correct-license",
		Fingerprint: "device-fp",
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 failed: %v", err)
	}

	// Try to decrypt with the wrong license
	wrongParams := &StreamParams{
		License:     "wrong-license",
		Fingerprint: "device-fp",
	}

	_, _, err = DecryptV3(encrypted, wrongParams)
	if err == nil {
		t.Error("DecryptV3 with wrong license should fail")
	}
	if err != ErrNoValidKey {
		t.Errorf("Error = %v, want ErrNoValidKey", err)
	}
}

func TestDecryptV3WrongFingerprint(t *testing.T) {
	msg := NewMessage("Secret content")

	params := &StreamParams{
		License:     "test-license",
		Fingerprint: "correct-fingerprint",
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 failed: %v", err)
	}

	// Try to decrypt with the wrong fingerprint
	wrongParams := &StreamParams{
		License:     "test-license",
		Fingerprint: "wrong-fingerprint",
	}

	_, _, err = DecryptV3(encrypted, wrongParams)
	if err == nil {
		t.Error("DecryptV3 with wrong fingerprint should fail")
	}
}

func TestEncryptV3WithAttachment(t *testing.T) {
	msg := NewMessage("Message with attachment")
	msg.AddBinaryAttachment("test.mp3", []byte("fake audio data here"), "audio/mpeg")

	params := &StreamParams{
		License:     "test-license",
		Fingerprint: "test-fp",
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 failed: %v", err)
	}

	decrypted, _, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 failed: %v", err)
	}

	// Verify attachment
	if len(decrypted.Attachments) != 1 {
		t.Fatalf("Attachment count = %d, want 1", len(decrypted.Attachments))
	}

	att := decrypted.GetAttachment("test.mp3")
	if att == nil {
		t.Fatal("Attachment not found")
	}
	if att.MimeType != "audio/mpeg" {
		t.Errorf("MimeType = %q, want %q", att.MimeType, "audio/mpeg")
	}
}

func TestEncryptV3RequiresLicense(t *testing.T) {
	msg := NewMessage("Test")

	// Nil params
	_, err := EncryptV3(msg, nil, nil)
	if err != ErrLicenseRequired {
		t.Errorf("Error = %v, want ErrLicenseRequired", err)
	}

	// Empty license
	_, err = EncryptV3(msg, &StreamParams{}, nil)
	if err != ErrLicenseRequired {
		t.Errorf("Error = %v, want ErrLicenseRequired", err)
	}
}

func TestCadencePeriods(t *testing.T) {
	// Test at a known time: 2026-01-12 15:30:00 UTC
	testTime := time.Date(2026, 1, 12, 15, 30, 0, 0, time.UTC)

	tests := []struct {
		cadence         Cadence
		expectedCurrent string
		expectedNext    string
	}{
		{CadenceDaily, "2026-01-12", "2026-01-13"},
		{CadenceHalfDay, "2026-01-12-PM", "2026-01-13-AM"},
		{CadenceQuarter, "2026-01-12-12", "2026-01-12-18"},
		{CadenceHourly, "2026-01-12-15", "2026-01-12-16"},
	}

	for _, tc := range tests {
		t.Run(string(tc.cadence), func(t *testing.T) {
			current, next := GetRollingPeriods(tc.cadence, testTime)
			if current != tc.expectedCurrent {
				t.Errorf("current = %q, want %q", current, tc.expectedCurrent)
			}
			if next != tc.expectedNext {
				t.Errorf("next = %q, want %q", next, tc.expectedNext)
			}
		})
	}
}

func TestCadenceHalfDayAM(t *testing.T) {
	// Test in the morning
	testTime := time.Date(2026, 1, 12, 9, 0, 0, 0, time.UTC)
	current, next := GetRollingPeriods(CadenceHalfDay, testTime)

	if current != "2026-01-12-AM" {
		t.Errorf("current = %q, want %q", current, "2026-01-12-AM")
	}
	if next != "2026-01-12-PM" {
		t.Errorf("next = %q, want %q", next, "2026-01-12-PM")
	}
}

func TestCadenceQuarterBoundary(t *testing.T) {
	// Test at 23:00 - the next period should wrap to the next day
	testTime := time.Date(2026, 1, 12, 23, 0, 0, 0, time.UTC)
	current, next := GetRollingPeriods(CadenceQuarter, testTime)

	if current != "2026-01-12-18" {
		t.Errorf("current = %q, want %q", current, "2026-01-12-18")
	}
	if next != "2026-01-13-00" {
		t.Errorf("next = %q, want %q", next, "2026-01-13-00")
	}
}

func TestEncryptDecryptV3WithCadence(t *testing.T) {
	cadences := []Cadence{CadenceDaily, CadenceHalfDay, CadenceQuarter, CadenceHourly}

	for _, cadence := range cadences {
		t.Run(string(cadence), func(t *testing.T) {
			msg := NewMessage("Testing " + string(cadence) + " cadence")

			params := &StreamParams{
				License:     "cadence-test-license",
				Fingerprint: "cadence-test-fp",
				Cadence:     cadence,
			}

			// Encrypt
			encrypted, err := EncryptV3(msg, params, nil)
			if err != nil {
				t.Fatalf("EncryptV3 failed: %v", err)
			}

			// Decrypt with the same params
			decrypted, header, err := DecryptV3(encrypted, params)
			if err != nil {
				t.Fatalf("DecryptV3 failed: %v", err)
			}

			if decrypted.Body != msg.Body {
				t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
			}

			// Verify the cadence in the header
			if header.Cadence != cadence {
				t.Errorf("Cadence = %q, want %q", header.Cadence, cadence)
			}
		})
	}
}

func TestRollingKeyWindow(t *testing.T) {
	// This test verifies that both today's and tomorrow's keys work
	msg := NewMessage("Rolling window test")

	// Create params
	params := &StreamParams{
		License:     "rolling-test-license",
		Fingerprint: "rolling-test-fp",
	}

	// Encrypt with the current time
	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 failed: %v", err)
	}

	// Should decrypt successfully (within the rolling window)
	decrypted, header, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 failed: %v", err)
	}

	if decrypted.Body != msg.Body {
		t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
	}

	// Verify we have both today's and tomorrow's keys
	today, tomorrow := GetRollingDates()
	hasToday := false
	hasTomorrow := false
	for _, wk := range header.WrappedKeys {
		if wk.Date == today {
			hasToday = true
		}
		if wk.Date == tomorrow {
			hasTomorrow = true
		}
	}
	if !hasToday {
		t.Error("Missing today's wrapped key")
	}
	if !hasTomorrow {
		t.Error("Missing tomorrow's wrapped key")
	}
}

// =============================================================================
// V3 Chunked Streaming Tests
// =============================================================================

func TestEncryptDecryptV3ChunkedBasic(t *testing.T) {
	msg := NewMessage("This is a chunked streaming test message")
	msg.WithSubject("Chunked Test")

	params := &StreamParams{
		License:     "chunk-license",
		Fingerprint: "chunk-fp",
		ChunkSize:   64, // Small chunks for testing
	}

	manifest := NewManifest("Chunked Track")
	manifest.Artist = "Test Artist"

	// Encrypt with chunking
	encrypted, err := EncryptV3(msg, params, manifest)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	// Decrypt - the chunked format is handled automatically
	decrypted, header, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 (chunked) failed: %v", err)
	}

	// Verify content
	if decrypted.Body != msg.Body {
		t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
	}
	if decrypted.Subject != msg.Subject {
		t.Errorf("Subject = %q, want %q", decrypted.Subject, msg.Subject)
	}

	// Verify header
	if header.Format != FormatV3 {
		t.Errorf("Format = %q, want %q", header.Format, FormatV3)
	}
	if header.Chunked == nil {
		t.Fatal("Chunked info is nil")
	}
	if header.Chunked.ChunkSize != 64 {
		t.Errorf("ChunkSize = %d, want 64", header.Chunked.ChunkSize)
	}
}

func TestV3ChunkedWithAttachment(t *testing.T) {
	// Create a message with an attachment larger than the chunk size
	attachmentData := make([]byte, 256)
	for i := range attachmentData {
		attachmentData[i] = byte(i)
	}

	msg := NewMessage("Message with large attachment")
	msg.AddBinaryAttachment("test.bin", attachmentData, "application/octet-stream")

	params := &StreamParams{
		License:     "attach-license",
		Fingerprint: "attach-fp",
		ChunkSize:   64, // Force multiple chunks
	}

	// Encrypt
	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	// Verify we have multiple chunks
	header, err := GetV3Header(encrypted)
	if err != nil {
		t.Fatalf("GetV3Header failed: %v", err)
	}

	if header.Chunked.TotalChunks <= 1 {
		t.Errorf("TotalChunks = %d, want > 1", header.Chunked.TotalChunks)
	}

	// Decrypt
	decrypted, _, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 (chunked) failed: %v", err)
	}

	// Verify attachment
	if len(decrypted.Attachments) != 1 {
		t.Fatalf("Attachment count = %d, want 1", len(decrypted.Attachments))
	}
}

func TestV3ChunkedIndividualChunks(t *testing.T) {
	// Create content that spans multiple chunks
	largeContent := make([]byte, 200)
	for i := range largeContent {
		largeContent[i] = byte(i % 256)
	}

	msg := NewMessage("Chunk-by-chunk test")
	msg.AddBinaryAttachment("data.bin", largeContent, "application/octet-stream")

	params := &StreamParams{
		License:     "individual-license",
		Fingerprint: "individual-fp",
		ChunkSize:   50, // Force ~5 chunks
	}

	// Encrypt
	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	// Get the header and payload
	header, err := GetV3Header(encrypted)
	if err != nil {
		t.Fatalf("GetV3Header failed: %v", err)
	}

	payload, err := GetV3Payload(encrypted)
	if err != nil {
		t.Fatalf("GetV3Payload failed: %v", err)
	}

	// Unwrap the CEK
	cek, err := UnwrapCEKFromHeader(header, params)
	if err != nil {
		t.Fatalf("UnwrapCEKFromHeader failed: %v", err)
	}

	// Decrypt each chunk individually
	var allDecrypted []byte
	for i := 0; i < header.Chunked.TotalChunks; i++ {
		chunk, err := DecryptV3Chunk(payload, cek, i, header.Chunked)
		if err != nil {
			t.Fatalf("DecryptV3Chunk(%d) failed: %v", i, err)
		}
		allDecrypted = append(allDecrypted, chunk...)
	}

	// Verify the total size matches
	if int64(len(allDecrypted)) != header.Chunked.TotalSize {
		t.Errorf("Decrypted size = %d, want %d", len(allDecrypted), header.Chunked.TotalSize)
	}
}

func TestV3ChunkedWrongLicense(t *testing.T) {
	msg := NewMessage("Secret chunked content")

	params := &StreamParams{
		License:     "correct-chunked-license",
		Fingerprint: "device-fp",
		ChunkSize:   64,
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	// Try to decrypt with the wrong license
	wrongParams := &StreamParams{
		License:     "wrong-chunked-license",
		Fingerprint: "device-fp",
	}

	_, _, err = DecryptV3(encrypted, wrongParams)
	if err == nil {
		t.Error("DecryptV3 (chunked) with wrong license should fail")
	}
	if err != ErrNoValidKey {
		t.Errorf("Error = %v, want ErrNoValidKey", err)
	}
}

func TestV3ChunkedChunkIndex(t *testing.T) {
	msg := NewMessage("Index test")
	msg.AddBinaryAttachment("test.dat", make([]byte, 150), "application/octet-stream")

	params := &StreamParams{
		License:     "index-license",
		Fingerprint: "index-fp",
		ChunkSize:   50,
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	header, err := GetV3Header(encrypted)
	if err != nil {
		t.Fatalf("GetV3Header failed: %v", err)
	}

	// Verify the index structure
	if len(header.Chunked.Index) != header.Chunked.TotalChunks {
		t.Errorf("Index length = %d, want %d", len(header.Chunked.Index), header.Chunked.TotalChunks)
	}

	// Verify offsets are sequential
	expectedOffset := 0
	for i, ci := range header.Chunked.Index {
		if ci.Offset != expectedOffset {
			t.Errorf("Chunk %d offset = %d, want %d", i, ci.Offset, expectedOffset)
		}
		expectedOffset += ci.Size
	}
}

func TestV3ChunkedSeekMiddleChunk(t *testing.T) {
	// Create predictable data
	data := make([]byte, 300)
	for i := range data {
		data[i] = byte(i % 256)
	}

	msg := NewMessage("Seek test")
	msg.AddBinaryAttachment("seek.bin", data, "application/octet-stream")

	params := &StreamParams{
		License:     "seek-license",
		Fingerprint: "seek-fp",
		ChunkSize:   100, // 3 data chunks minimum
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (chunked) failed: %v", err)
	}

	header, err := GetV3Header(encrypted)
	if err != nil {
		t.Fatalf("GetV3Header failed: %v", err)
	}

	payload, err := GetV3Payload(encrypted)
	if err != nil {
		t.Fatalf("GetV3Payload failed: %v", err)
	}

	cek, err := UnwrapCEKFromHeader(header, params)
	if err != nil {
		t.Fatalf("UnwrapCEKFromHeader failed: %v", err)
	}

	// Skip to a middle chunk (simulate seeking)
	if header.Chunked.TotalChunks < 2 {
		t.Skip("Need at least 2 chunks for seek test")
	}

	middleIdx := header.Chunked.TotalChunks / 2
	chunk, err := DecryptV3Chunk(payload, cek, middleIdx, header.Chunked)
	if err != nil {
		t.Fatalf("DecryptV3Chunk(%d) failed: %v", middleIdx, err)
	}

	// Just verify we got something
	if len(chunk) == 0 {
		t.Error("Middle chunk is empty")
	}
}

func TestV3NonChunkedStillWorks(t *testing.T) {
	// Verify non-chunked v3 still works (ChunkSize = 0)
	msg := NewMessage("Non-chunked v3 test")
	msg.WithSubject("No Chunks")

	params := &StreamParams{
		License:     "non-chunk-license",
		Fingerprint: "non-chunk-fp",
		// ChunkSize = 0 (default) - no chunking
	}

	encrypted, err := EncryptV3(msg, params, nil)
	if err != nil {
		t.Fatalf("EncryptV3 (non-chunked) failed: %v", err)
	}

	decrypted, header, err := DecryptV3(encrypted, params)
	if err != nil {
		t.Fatalf("DecryptV3 (non-chunked) failed: %v", err)
	}

	if decrypted.Body != msg.Body {
		t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
	}
|
||||||
// Non-chunked should not have Chunked info
|
|
||||||
if header.Chunked != nil {
|
|
||||||
t.Error("Non-chunked v3 should not have Chunked info")
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
@@ -2,14 +2,6 @@
 // SMSG (Secure Message) enables encrypted message exchange where the recipient
 // decrypts using a pre-shared password. Useful for secure support replies,
 // confidential documents, and any scenario requiring password-protected content.
-//
-// Format versions:
-//   - v1: JSON with base64-encoded attachments (legacy)
-//   - v2: Binary format with zstd compression (current)
-//   - v3: Streaming with LTHN rolling keys (planned)
-//
-// Encryption note: Nonces are embedded in ciphertext, not transmitted separately.
-// See smsg.go header comment for details.
 package smsg

 import (
@@ -31,9 +23,6 @@ var (
 	ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
 	ErrPasswordRequired = errors.New("password is required")
 	ErrEmptyMessage     = errors.New("message cannot be empty")
-	ErrStreamKeyExpired = errors.New("stream key expired (outside rolling window)")
-	ErrNoValidKey       = errors.New("no valid wrapped key found for current date")
-	ErrLicenseRequired  = errors.New("license is required for stream decryption")
 )

 // Attachment represents a file attached to the message
@@ -289,27 +278,8 @@ func (m *Manifest) AddLink(platform, url string) *Manifest {
 const (
 	FormatV1 = ""   // Original format: JSON with base64-encoded attachments
 	FormatV2 = "v2" // Binary format: JSON header + raw binary attachments
-	FormatV3 = "v3" // Streaming format: CEK wrapped with rolling LTHN keys, optional chunking
 )

-// Default chunk size for v3 chunked format (1MB)
-const DefaultChunkSize = 1024 * 1024
-
-// ChunkInfo describes a single chunk in v3 chunked format
-type ChunkInfo struct {
-	Offset int `json:"offset"` // byte offset in payload
-	Size   int `json:"size"`   // encrypted chunk size (includes nonce + tag)
-}
-
-// ChunkedInfo contains chunking metadata for v3 streaming
-// When present, enables decrypt-while-downloading and seeking
-type ChunkedInfo struct {
-	ChunkSize   int         `json:"chunkSize"`   // size of each chunk before encryption
-	TotalChunks int         `json:"totalChunks"` // number of chunks
-	TotalSize   int64       `json:"totalSize"`   // total unencrypted size
-	Index       []ChunkInfo `json:"index"`       // chunk locations for seeking
-}
 // Compression types
 const (
 	CompressionNone = ""     // No compression (default, backwards compatible)
@@ -317,100 +287,12 @@ const (
 	CompressionZstd = "zstd" // Zstandard compression (faster, better ratio)
 )

-// Key derivation methods for v3 streaming
-const (
-	// KeyMethodDirect uses password directly (v1/v2 behavior)
-	KeyMethodDirect = ""
-
-	// KeyMethodLTHNRolling uses LTHN hash with rolling date windows
-	// Key = SHA256(LTHN(date:license:fingerprint))
-	// Valid keys: current period and next period (rolling window)
-	KeyMethodLTHNRolling = "lthn-rolling"
-)
-
-// Cadence defines how often stream keys rotate
-type Cadence string
-
-const (
-	// CadenceDaily rotates keys every 24 hours (default)
-	// Date format: "2006-01-02"
-	CadenceDaily Cadence = "daily"
-
-	// CadenceHalfDay rotates keys every 12 hours
-	// Date format: "2006-01-02-AM" or "2006-01-02-PM"
-	CadenceHalfDay Cadence = "12h"
-
-	// CadenceQuarter rotates keys every 6 hours
-	// Date format: "2006-01-02-00", "2006-01-02-06", "2006-01-02-12", "2006-01-02-18"
-	CadenceQuarter Cadence = "6h"
-
-	// CadenceHourly rotates keys every hour
-	// Date format: "2006-01-02-15" (24-hour format)
-	CadenceHourly Cadence = "1h"
-)
-// WrappedKey represents a CEK (Content Encryption Key) wrapped with a time-bound stream key.
-// The stream key is derived from LTHN(date:license:fingerprint) and is never transmitted.
-// Only the wrapped CEK (which includes its own nonce) is stored in the header.
-type WrappedKey struct {
-	Date    string `json:"date"`    // ISO date "YYYY-MM-DD" for key derivation
-	Wrapped string `json:"wrapped"` // base64([nonce][ChaCha(CEK, streamKey)])
-}
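The wrap step behind WrappedKey follows the usual AEAD key-wrapping shape: derive a time-bound stream key, seal the CEK under a fresh nonce, base64 the result. A hedged sketch only: SHA-256 of date:license:fingerprint stands in for the LTHN hash, and stdlib AES-GCM stands in for ChaCha20-Poly1305 so the example runs without external dependencies:

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// wrapCEK illustrates the scheme's shape. SHA-256 is a stand-in for the
// LTHN hash and AES-GCM a stand-in for ChaCha20-Poly1305.
func wrapCEK(cek []byte, date, license, fingerprint string) (string, error) {
	streamKey := sha256.Sum256([]byte(date + ":" + license + ":" + fingerprint))
	block, err := aes.NewCipher(streamKey[:])
	if err != nil {
		return "", err
	}
	aead, err := cipher.NewGCM(block)
	if err != nil {
		return "", err
	}
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return "", err
	}
	// Stored form, as in the Wrapped field comment: base64([nonce][sealed CEK])
	return base64.StdEncoding.EncodeToString(aead.Seal(nonce, nonce, cek, nil)), nil
}

func main() {
	cek := make([]byte, 32)
	wrapped, err := wrapCEK(cek, "2026-01-12", "lic", "fp")
	fmt.Println(err == nil, len(wrapped) > 0)
}
```

Unwrapping reverses the steps: re-derive the stream key for each candidate date in the rolling window and try to open the sealed CEK.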
 // Header represents the SMSG container header
 type Header struct {
 	Version     string    `json:"version"`
 	Algorithm   string    `json:"algorithm"`
-	Format      string    `json:"format,omitempty"`      // v2 for binary, v3 for streaming, empty for v1 (base64)
-	Compression string    `json:"compression,omitempty"` // gzip, zstd, or empty for none
+	Format      string    `json:"format,omitempty"`      // v2 for binary, empty for v1 (base64)
+	Compression string    `json:"compression,omitempty"` // gzip or empty for none
 	Hint        string    `json:"hint,omitempty"`        // optional password hint
 	Manifest    *Manifest `json:"manifest,omitempty"`    // public metadata for discovery
-
-	// V3 streaming fields
-	KeyMethod   string       `json:"keyMethod,omitempty"`   // lthn-rolling for v3
-	Cadence     Cadence      `json:"cadence,omitempty"`     // key rotation frequency (daily, 12h, 6h, 1h)
-	WrappedKeys []WrappedKey `json:"wrappedKeys,omitempty"` // CEK wrapped with rolling keys
-
-	// V3 chunked streaming (optional - enables decrypt-while-downloading)
-	Chunked *ChunkedInfo `json:"chunked,omitempty"` // chunk index for seeking/range requests
-}
-
-// ========== ADAPTIVE BITRATE STREAMING (ABR) ==========
-
-// ABRManifest represents a multi-bitrate variant playlist for adaptive streaming.
-// Similar to HLS master playlist but with encrypted SMSG variants.
-type ABRManifest struct {
-	Version    string    `json:"version"`    // "abr-v1"
-	Title      string    `json:"title"`      // Content title
-	Duration   int       `json:"duration"`   // Total duration in seconds
-	Variants   []Variant `json:"variants"`   // Quality variants (sorted by bandwidth, ascending)
-	DefaultIdx int       `json:"defaultIdx"` // Default variant index (typically 720p)
-	Password   string    `json:"-"`          // Shared password for all variants (not serialized)
-}
-
-// Variant represents a single quality level in an ABR stream.
-// Each variant is a standard v3 chunked .smsg file.
-type Variant struct {
-	Name       string `json:"name"`       // Human-readable name: "1080p", "720p", etc.
-	Bandwidth  int    `json:"bandwidth"`  // Required bandwidth in bits per second
-	Width      int    `json:"width"`      // Video width in pixels
-	Height     int    `json:"height"`     // Video height in pixels
-	Codecs     string `json:"codecs"`     // Codec string: "avc1.640028,mp4a.40.2"
-	URL        string `json:"url"`        // Relative path to .smsg file
-	ChunkCount int    `json:"chunkCount"` // Number of chunks (for progress calculation)
-	FileSize   int64  `json:"fileSize"`   // File size in bytes
-}
-
-// Standard ABR quality presets
-var ABRPresets = []struct {
-	Name    string
-	Width   int
-	Height  int
-	Bitrate string // For ffmpeg
-	BPS     int    // Bits per second
-}{
-	{"1080p", 1920, 1080, "5M", 5000000},
-	{"720p", 1280, 720, "2.5M", 2500000},
-	{"480p", 854, 480, "1M", 1000000},
-	{"360p", 640, 360, "500K", 500000},
 }
@@ -7,8 +7,8 @@ import (
 	"encoding/json"
 	"fmt"

-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Enchantrix/pkg/trix"
 )

 // Decrypt decrypts a STMF payload using the server's private key.
@@ -8,8 +8,8 @@ import (
 	"encoding/json"
 	"fmt"

-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Enchantrix/pkg/trix"
 )

 // Encrypt encrypts form data using the server's public key.
@@ -7,7 +7,7 @@ import (
 	"net/http"
 	"net/url"

-	"forge.lthn.ai/Snider/Borg/pkg/stmf"
+	"github.com/Snider/Borg/pkg/stmf"
 )

 // contextKey is a custom type for context keys to avoid collisions
@@ -7,7 +7,7 @@ import (
 	"strings"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/stmf"
+	"github.com/Snider/Borg/pkg/stmf"
 )

 func TestMiddleware(t *testing.T) {
@@ -1,6 +1,6 @@
 package tim

-import "forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+import "github.com/Snider/Enchantrix/pkg/trix"

 // DefaultSpec returns a default runc spec.
 func defaultConfig() (*trix.Trix, error) {
@@ -5,7 +5,7 @@ import (
 	"path/filepath"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/trix"
+	"github.com/Snider/Borg/pkg/trix"
 )

 func TestToFromSigil(t *testing.T) {
@@ -1,198 +0,0 @@
-package tim
-
-import (
-	"crypto/rand"
-	"encoding/binary"
-	"errors"
-	"fmt"
-	"io"
-
-	"golang.org/x/crypto/argon2"
-	"golang.org/x/crypto/chacha20poly1305"
-
-	borgtrix "forge.lthn.ai/Snider/Borg/pkg/trix"
-)
-
-const (
-	blockSize  = 1024 * 1024 // 1 MiB plaintext blocks
-	saltSize   = 16
-	nonceSize  = 12 // chacha20poly1305.NonceSize
-	lengthSize = 4
-	headerSize = 33 // 4 (magic) + 1 (version) + 16 (salt) + 12 (argon2 params)
-)
-
-var (
-	stimMagic = [4]byte{'S', 'T', 'I', 'M'}
-
-	ErrInvalidMagic       = errors.New("invalid STIM magic header")
-	ErrUnsupportedVersion = errors.New("unsupported STIM version")
-	ErrStreamDecrypt      = errors.New("stream decryption failed")
-)
-
-// StreamEncrypt reads plaintext from r and writes STIM v2 chunked AEAD
-// encrypted data to w. Each 1 MiB block is independently encrypted with
-// ChaCha20-Poly1305 using a unique random nonce.
-func StreamEncrypt(r io.Reader, w io.Writer, password string) error {
-	// Generate random salt
-	salt := make([]byte, saltSize)
-	if _, err := rand.Read(salt); err != nil {
-		return fmt.Errorf("failed to generate salt: %w", err)
-	}
-
-	// Derive key using Argon2id with default params
-	params := borgtrix.DefaultArgon2Params()
-	key := borgtrix.DeriveKeyArgon2(password, salt)
-
-	// Create AEAD cipher
-	aead, err := chacha20poly1305.New(key)
-	if err != nil {
-		return fmt.Errorf("failed to create AEAD: %w", err)
-	}
-
-	// Write header: magic(4) + version(1) + salt(16) + argon2params(12) = 33 bytes
-	header := make([]byte, headerSize)
-	copy(header[0:4], stimMagic[:])
-	header[4] = 2 // version
-	copy(header[5:21], salt)
-	copy(header[21:33], params.Encode())
-
-	if _, err := w.Write(header); err != nil {
-		return fmt.Errorf("failed to write header: %w", err)
-	}
-
-	// Encrypt data in blocks
-	buf := make([]byte, blockSize)
-	nonce := make([]byte, nonceSize)
-
-	for {
-		n, readErr := io.ReadFull(r, buf)
-
-		if n > 0 {
-			// Generate unique nonce for this block
-			if _, err := rand.Read(nonce); err != nil {
-				return fmt.Errorf("failed to generate nonce: %w", err)
-			}
-
-			// Encrypt: ciphertext includes the Poly1305 auth tag (16 bytes)
-			ciphertext := aead.Seal(nil, nonce, buf[:n], nil)
-
-			// Write [nonce(12)][length(4)][ciphertext(n+16)]
-			if _, err := w.Write(nonce); err != nil {
-				return fmt.Errorf("failed to write nonce: %w", err)
-			}
-
-			lenBuf := make([]byte, lengthSize)
-			binary.LittleEndian.PutUint32(lenBuf, uint32(len(ciphertext)))
-			if _, err := w.Write(lenBuf); err != nil {
-				return fmt.Errorf("failed to write length: %w", err)
-			}
-
-			if _, err := w.Write(ciphertext); err != nil {
-				return fmt.Errorf("failed to write ciphertext: %w", err)
-			}
-		}
-
-		if readErr != nil {
-			if readErr == io.EOF || readErr == io.ErrUnexpectedEOF {
-				break
-			}
-			return fmt.Errorf("failed to read input: %w", readErr)
-		}
-	}
-
-	// Write EOF marker: [nonce(12)][length=0(4)]
-	if _, err := rand.Read(nonce); err != nil {
-		return fmt.Errorf("failed to generate EOF nonce: %w", err)
-	}
-	if _, err := w.Write(nonce); err != nil {
-		return fmt.Errorf("failed to write EOF nonce: %w", err)
-	}
-
-	eofLen := make([]byte, lengthSize)
-	// length is already zero (zero-value)
-	if _, err := w.Write(eofLen); err != nil {
-		return fmt.Errorf("failed to write EOF length: %w", err)
-	}
-
-	return nil
-}
-
-// StreamDecrypt reads STIM v2 chunked AEAD encrypted data from r and writes
-// the decrypted plaintext to w. Returns an error if the header is invalid,
-// the password is wrong, or data has been tampered with.
-func StreamDecrypt(r io.Reader, w io.Writer, password string) error {
-	// Read header
-	header := make([]byte, headerSize)
-	if _, err := io.ReadFull(r, header); err != nil {
-		return fmt.Errorf("failed to read header: %w", err)
-	}
-
-	// Validate magic
-	if header[0] != stimMagic[0] || header[1] != stimMagic[1] ||
-		header[2] != stimMagic[2] || header[3] != stimMagic[3] {
-		return ErrInvalidMagic
-	}
-
-	// Validate version
-	if header[4] != 2 {
-		return fmt.Errorf("%w: got %d", ErrUnsupportedVersion, header[4])
-	}
-
-	// Extract salt and params
-	salt := header[5:21]
-	params := borgtrix.DecodeArgon2Params(header[21:33])
-
-	// Derive key using stored params
-	key := deriveKeyWithParams(password, salt, params)
-
-	// Create AEAD cipher
-	aead, err := chacha20poly1305.New(key)
-	if err != nil {
-		return fmt.Errorf("failed to create AEAD: %w", err)
-	}
-
-	// Decrypt blocks
-	nonce := make([]byte, nonceSize)
-	lenBuf := make([]byte, lengthSize)
-
-	for {
-		// Read nonce
-		if _, err := io.ReadFull(r, nonce); err != nil {
-			return fmt.Errorf("failed to read block nonce: %w", err)
-		}
-
-		// Read length
-		if _, err := io.ReadFull(r, lenBuf); err != nil {
-			return fmt.Errorf("failed to read block length: %w", err)
-		}
-
-		ctLen := binary.LittleEndian.Uint32(lenBuf)
-
-		// EOF marker: length == 0
-		if ctLen == 0 {
-			return nil
-		}
-
-		// Read ciphertext
-		ciphertext := make([]byte, ctLen)
-		if _, err := io.ReadFull(r, ciphertext); err != nil {
-			return fmt.Errorf("failed to read ciphertext: %w", err)
-		}
-
-		// Decrypt and authenticate
-		plaintext, err := aead.Open(nil, nonce, ciphertext, nil)
-		if err != nil {
-			return fmt.Errorf("%w: %v", ErrStreamDecrypt, err)
-		}
-
-		if _, err := w.Write(plaintext); err != nil {
-			return fmt.Errorf("failed to write plaintext: %w", err)
-		}
-	}
-}
-
-// deriveKeyWithParams derives a 32-byte key using Argon2id with specific
-// parameters read from the STIM header (rather than using defaults).
-func deriveKeyWithParams(password string, salt []byte, params borgtrix.Argon2Params) []byte {
-	return argon2.IDKey([]byte(password), salt, params.Time, params.Memory, uint8(params.Threads), 32)
-}
@@ -1,203 +0,0 @@
-package tim
-
-import (
-	"bytes"
-	"crypto/rand"
-	"io"
-	"testing"
-)
-
-func TestStreamRoundTrip_Good(t *testing.T) {
-	plaintext := []byte("Hello, STIM v2 streaming encryption!")
-	password := "test-password-123"
-
-	// Encrypt
-	var cipherBuf bytes.Buffer
-	if err := StreamEncrypt(bytes.NewReader(plaintext), &cipherBuf, password); err != nil {
-		t.Fatalf("StreamEncrypt() error = %v", err)
-	}
-
-	// Verify header magic
-	encrypted := cipherBuf.Bytes()
-	if len(encrypted) < 5 {
-		t.Fatal("encrypted output too short for header")
-	}
-	if string(encrypted[:4]) != "STIM" {
-		t.Errorf("expected magic 'STIM', got %q", string(encrypted[:4]))
-	}
-	if encrypted[4] != 2 {
-		t.Errorf("expected version 2, got %d", encrypted[4])
-	}
-
-	// Decrypt
-	var plainBuf bytes.Buffer
-	if err := StreamDecrypt(bytes.NewReader(encrypted), &plainBuf, password); err != nil {
-		t.Fatalf("StreamDecrypt() error = %v", err)
-	}
-
-	if !bytes.Equal(plainBuf.Bytes(), plaintext) {
-		t.Errorf("round-trip mismatch:\n  got: %q\n want: %q", plainBuf.Bytes(), plaintext)
-	}
-}
-
-func TestStreamRoundTrip_Large_Good(t *testing.T) {
-	// 3 MiB of pseudo-random data spans multiple 1 MiB blocks
-	plaintext := make([]byte, 3*1024*1024)
-	if _, err := rand.Read(plaintext); err != nil {
-		t.Fatalf("failed to generate random data: %v", err)
-	}
-
-	password := "large-data-password"
-
-	// Encrypt
-	var cipherBuf bytes.Buffer
-	if err := StreamEncrypt(bytes.NewReader(plaintext), &cipherBuf, password); err != nil {
-		t.Fatalf("StreamEncrypt() error = %v", err)
-	}
-
-	// Decrypt
-	var plainBuf bytes.Buffer
-	if err := StreamDecrypt(bytes.NewReader(cipherBuf.Bytes()), &plainBuf, password); err != nil {
-		t.Fatalf("StreamDecrypt() error = %v", err)
-	}
-
-	if !bytes.Equal(plainBuf.Bytes(), plaintext) {
-		t.Errorf("round-trip mismatch: got %d bytes, want %d bytes", plainBuf.Len(), len(plaintext))
-	}
-}
-
-func TestStreamEncrypt_Empty_Good(t *testing.T) {
-	password := "empty-test"
-
-	// Encrypt empty input
-	var cipherBuf bytes.Buffer
-	if err := StreamEncrypt(bytes.NewReader(nil), &cipherBuf, password); err != nil {
-		t.Fatalf("StreamEncrypt() error = %v", err)
-	}
-
-	// Decrypt
-	var plainBuf bytes.Buffer
-	if err := StreamDecrypt(bytes.NewReader(cipherBuf.Bytes()), &plainBuf, password); err != nil {
-		t.Fatalf("StreamDecrypt() error = %v", err)
-	}
-
-	if plainBuf.Len() != 0 {
-		t.Errorf("expected empty output, got %d bytes", plainBuf.Len())
-	}
-}
-
-func TestStreamDecrypt_WrongPassword_Bad(t *testing.T) {
-	plaintext := []byte("secret data that should not decrypt with wrong key")
-	correctPassword := "correct-password"
-	wrongPassword := "wrong-password"
-
-	// Encrypt with correct password
-	var cipherBuf bytes.Buffer
-	if err := StreamEncrypt(bytes.NewReader(plaintext), &cipherBuf, correctPassword); err != nil {
-		t.Fatalf("StreamEncrypt() error = %v", err)
-	}
-
-	// Attempt decrypt with wrong password
-	var plainBuf bytes.Buffer
-	err := StreamDecrypt(bytes.NewReader(cipherBuf.Bytes()), &plainBuf, wrongPassword)
-	if err == nil {
-		t.Fatal("expected error when decrypting with wrong password, got nil")
-	}
-}
-
-func TestStreamDecrypt_Truncated_Bad(t *testing.T) {
-	plaintext := []byte("data that will be truncated after encryption")
-	password := "truncation-test"
-
-	// Encrypt
-	var cipherBuf bytes.Buffer
-	if err := StreamEncrypt(bytes.NewReader(plaintext), &cipherBuf, password); err != nil {
-		t.Fatalf("StreamEncrypt() error = %v", err)
-	}
-
-	encrypted := cipherBuf.Bytes()
-
-	// Truncate to just past the header (33 bytes) but before the full first block
-	if len(encrypted) > 40 {
-		truncated := encrypted[:40]
-		var plainBuf bytes.Buffer
-		err := StreamDecrypt(bytes.NewReader(truncated), &plainBuf, password)
-		if err == nil {
-			t.Fatal("expected error when decrypting truncated data, got nil")
-		}
-	}
-
-	// Truncate mid-way through the ciphertext
-	if len(encrypted) > headerSize+nonceSize+lengthSize+5 {
-		midpoint := headerSize + nonceSize + lengthSize + 5
-		truncated := encrypted[:midpoint]
-		var plainBuf bytes.Buffer
-		err := StreamDecrypt(bytes.NewReader(truncated), &plainBuf, password)
-		if err == nil {
-			t.Fatal("expected error when decrypting mid-block truncated data, got nil")
-		}
-	}
-}
-
-func TestStreamDecrypt_InvalidMagic_Bad(t *testing.T) {
-	// Construct data with wrong magic
-	data := []byte("NOPE\x02")
-	data = append(data, make([]byte, 28)...) // pad to header size
-
-	var plainBuf bytes.Buffer
-	err := StreamDecrypt(bytes.NewReader(data), &plainBuf, "password")
-	if err == nil {
-		t.Fatal("expected error for invalid magic, got nil")
-	}
-}
-
-func TestStreamDecrypt_InvalidVersion_Bad(t *testing.T) {
-	// Construct data with wrong version
-	data := []byte("STIM\x01")
-	data = append(data, make([]byte, 28)...) // pad to header size
-
-	var plainBuf bytes.Buffer
-	err := StreamDecrypt(bytes.NewReader(data), &plainBuf, "password")
-	if err == nil {
-		t.Fatal("expected error for unsupported version, got nil")
-	}
-}
-
-func TestStreamDecrypt_ShortHeader_Bad(t *testing.T) {
-	// Too short to contain full header
-	data := []byte("STIM\x02")
-	var plainBuf bytes.Buffer
-	err := StreamDecrypt(bytes.NewReader(data), &plainBuf, "password")
-	if err == nil {
-		t.Fatal("expected error for short header, got nil")
-	}
-}
-
-func TestStreamEncrypt_WriterError_Bad(t *testing.T) {
-	plaintext := []byte("test data")
-	// Use a writer that fails after a few bytes
-	w := &limitedWriter{limit: 5}
-	err := StreamEncrypt(bytes.NewReader(plaintext), w, "password")
-	if err == nil {
-		t.Fatal("expected error when writer fails, got nil")
-	}
-}
-
-// limitedWriter fails after writing limit bytes.
-type limitedWriter struct {
-	limit   int
-	written int
-}
-
-func (w *limitedWriter) Write(p []byte) (int, error) {
-	remaining := w.limit - w.written
-	if remaining <= 0 {
-		return 0, io.ErrShortWrite
-	}
-	if len(p) > remaining {
-		w.written += remaining
-		return remaining, io.ErrShortWrite
-	}
-	w.written += len(p)
-	return len(p), nil
-}
@@ -11,10 +11,10 @@ import (
 	"io/fs"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	borgtrix "forge.lthn.ai/Snider/Borg/pkg/trix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Borg/pkg/datanode"
+	borgtrix "github.com/Snider/Borg/pkg/trix"
+	"github.com/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Enchantrix/pkg/trix"
 )

 var (
@@ -4,7 +4,7 @@ import (
 	"os"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 )

 func TestMain(m *testing.M) {
@@ -5,7 +5,7 @@ import (
 	"errors"
 	"testing"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 )

 func TestNew(t *testing.T) {
@@ -2,16 +2,13 @@ package trix

 import (
 	"crypto/sha256"
-	"encoding/binary"
 	"errors"
 	"fmt"

-	"golang.org/x/crypto/argon2"
-
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/crypt"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/trix"
+	"github.com/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Enchantrix/pkg/crypt"
+	"github.com/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Enchantrix/pkg/trix"
 )

 var (
@@ -64,53 +61,11 @@ func FromTrix(data []byte, password string) (*datanode.DataNode, error) {

 // DeriveKey derives a 32-byte key from a password using SHA-256.
 // This is used for ChaCha20-Poly1305 encryption which requires a 32-byte key.
-// Deprecated: Use DeriveKeyArgon2 for new code; this remains for backward compatibility.
 func DeriveKey(password string) []byte {
 	hash := sha256.Sum256([]byte(password))
 	return hash[:]
 }

-// Argon2Params holds the tunable parameters for Argon2id key derivation.
-type Argon2Params struct {
-	Time    uint32
-	Memory  uint32 // in KiB
-	Threads uint32
-}
-
-// DefaultArgon2Params returns sensible default parameters for Argon2id.
-func DefaultArgon2Params() Argon2Params {
-	return Argon2Params{
-		Time:    3,
-		Memory:  64 * 1024,
-		Threads: 4,
-	}
-}
-
-// Encode serialises the Argon2Params as 12 bytes (3 x uint32 little-endian).
-func (p Argon2Params) Encode() []byte {
-	buf := make([]byte, 12)
-	binary.LittleEndian.PutUint32(buf[0:4], p.Time)
-	binary.LittleEndian.PutUint32(buf[4:8], p.Memory)
-	binary.LittleEndian.PutUint32(buf[8:12], p.Threads)
-	return buf
-}
-
-// DecodeArgon2Params reads 12 bytes (3 x uint32 little-endian) into Argon2Params.
-func DecodeArgon2Params(data []byte) Argon2Params {
-	return Argon2Params{
-		Time:    binary.LittleEndian.Uint32(data[0:4]),
-		Memory:  binary.LittleEndian.Uint32(data[4:8]),
-		Threads: binary.LittleEndian.Uint32(data[8:12]),
-	}
-}
-
-// DeriveKeyArgon2 derives a 32-byte key from a password and salt using Argon2id
-// with DefaultArgon2Params. This is the recommended key derivation for new code.
-func DeriveKeyArgon2(password string, salt []byte) []byte {
-	p := DefaultArgon2Params()
-	return argon2.IDKey([]byte(password), salt, p.Time, p.Memory, uint8(p.Threads), 32)
-}
-
 // ToTrixChaCha converts a DataNode to encrypted Trix format using ChaCha20-Poly1305.
 func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error) {
 	if password == "" {
|
||||||
|
|
|
||||||
|
|
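The Argon2Params wire format removed above is 12 bytes: three uint32 fields (time, memory, threads) in little-endian order. A minimal stdlib-only sketch of that encoding round-trip (the `params`, `encode`, and `decode` names here are illustrative, not the package's exported API):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// params mirrors the deleted Argon2Params struct: three tunables
// serialised as 3 x uint32 little-endian (12 bytes total).
type params struct {
	Time, Memory, Threads uint32
}

// encode packs the three fields into a fixed 12-byte buffer.
func encode(p params) []byte {
	buf := make([]byte, 12)
	binary.LittleEndian.PutUint32(buf[0:4], p.Time)
	binary.LittleEndian.PutUint32(buf[4:8], p.Memory)
	binary.LittleEndian.PutUint32(buf[8:12], p.Threads)
	return buf
}

// decode is the inverse: it recovers the fields from the 12 bytes.
func decode(data []byte) params {
	return params{
		Time:    binary.LittleEndian.Uint32(data[0:4]),
		Memory:  binary.LittleEndian.Uint32(data[4:8]),
		Threads: binary.LittleEndian.Uint32(data[8:12]),
	}
}

func main() {
	p := params{Time: 3, Memory: 64 * 1024, Threads: 4} // the deleted defaults
	round := decode(encode(p))
	fmt.Println(len(encode(p)), round == p) // 12 true
}
```

Fixed-width little-endian fields keep the header self-describing, so older readers can still locate the KDF parameters without a length prefix.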
@@ -1,11 +1,9 @@
 package trix
 
 import (
-	"bytes"
-	"crypto/rand"
 	"testing"
 
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 )
 
 func TestDeriveKey(t *testing.T) {
 
@@ -238,85 +236,3 @@ func TestToTrixChaChaWithLargeData(t *testing.T) {
 		t.Fatalf("Failed to open large.bin: %v", err)
 	}
 }
-
-// --- Argon2id key derivation tests ---
-
-func TestDeriveKeyArgon2_Good(t *testing.T) {
-	salt := make([]byte, 16)
-	if _, err := rand.Read(salt); err != nil {
-		t.Fatalf("failed to generate salt: %v", err)
-	}
-
-	key := DeriveKeyArgon2("test-password", salt)
-	if len(key) != 32 {
-		t.Fatalf("expected 32-byte key, got %d bytes", len(key))
-	}
-}
-
-func TestDeriveKeyArgon2_Deterministic_Good(t *testing.T) {
-	salt := []byte("fixed-salt-value")
-
-	key1 := DeriveKeyArgon2("same-password", salt)
-	key2 := DeriveKeyArgon2("same-password", salt)
-
-	if !bytes.Equal(key1, key2) {
-		t.Fatal("same password and salt must produce the same key")
-	}
-}
-
-func TestDeriveKeyArgon2_DifferentSalt_Good(t *testing.T) {
-	salt1 := []byte("salt-one-value!!")
-	salt2 := []byte("salt-two-value!!")
-
-	key1 := DeriveKeyArgon2("same-password", salt1)
-	key2 := DeriveKeyArgon2("same-password", salt2)
-
-	if bytes.Equal(key1, key2) {
-		t.Fatal("different salts must produce different keys")
-	}
-}
-
-func TestDeriveKeyLegacy_Good(t *testing.T) {
-	key1 := DeriveKey("backward-compat")
-	key2 := DeriveKey("backward-compat")
-
-	if len(key1) != 32 {
-		t.Fatalf("expected 32-byte key, got %d bytes", len(key1))
-	}
-	if !bytes.Equal(key1, key2) {
-		t.Fatal("legacy DeriveKey must be deterministic")
-	}
-}
-
-func TestArgon2Params_Good(t *testing.T) {
-	params := DefaultArgon2Params()
-
-	// Non-zero values
-	if params.Time == 0 {
-		t.Fatal("Time must be non-zero")
-	}
-	if params.Memory == 0 {
-		t.Fatal("Memory must be non-zero")
-	}
-	if params.Threads == 0 {
-		t.Fatal("Threads must be non-zero")
-	}
-
-	// Encode produces 12 bytes (3 x uint32 LE)
-	encoded := params.Encode()
-	if len(encoded) != 12 {
-		t.Fatalf("expected 12-byte encoding, got %d bytes", len(encoded))
-	}
-
-	// Round-trip: Decode must recover original values
-	decoded := DecodeArgon2Params(encoded)
-	if decoded.Time != params.Time {
-		t.Fatalf("Time mismatch: got %d, want %d", decoded.Time, params.Time)
-	}
-	if decoded.Memory != params.Memory {
-		t.Fatalf("Memory mismatch: got %d, want %d", decoded.Memory, params.Memory)
-	}
-	if decoded.Threads != params.Threads {
-		t.Fatalf("Threads mismatch: got %d, want %d", decoded.Threads, params.Threads)
-	}
-}
@@ -1,93 +0,0 @@
-package ui
-
-import (
-	"fmt"
-	"io"
-	"os"
-
-	"github.com/mattn/go-isatty"
-)
-
-// Progress abstracts output for both interactive and scripted use.
-type Progress interface {
-	Start(label string)
-	Update(current, total int64)
-	Finish(label string)
-	Log(level, msg string, args ...any)
-}
-
-// QuietProgress writes structured log lines. For cron, pipes, --quiet.
-type QuietProgress struct {
-	w io.Writer
-}
-
-func NewQuietProgress(w io.Writer) *QuietProgress {
-	return &QuietProgress{w: w}
-}
-
-func (q *QuietProgress) Start(label string) {
-	fmt.Fprintf(q.w, "[START] %s\n", label)
-}
-
-func (q *QuietProgress) Update(current, total int64) {
-	if total > 0 {
-		fmt.Fprintf(q.w, "[PROGRESS] %d/%d\n", current, total)
-	}
-}
-
-func (q *QuietProgress) Finish(label string) {
-	fmt.Fprintf(q.w, "[DONE] %s\n", label)
-}
-
-func (q *QuietProgress) Log(level, msg string, args ...any) {
-	fmt.Fprintf(q.w, "[%s] %s", level, msg)
-	for i := 0; i+1 < len(args); i += 2 {
-		fmt.Fprintf(q.w, " %v=%v", args[i], args[i+1])
-	}
-	fmt.Fprintln(q.w)
-}
-
-// InteractiveProgress uses simple terminal output for TTY sessions.
-type InteractiveProgress struct {
-	w io.Writer
-}
-
-func NewInteractiveProgress(w io.Writer) *InteractiveProgress {
-	return &InteractiveProgress{w: w}
-}
-
-func (p *InteractiveProgress) Start(label string) {
-	fmt.Fprintf(p.w, "→ %s\n", label)
-}
-
-func (p *InteractiveProgress) Update(current, total int64) {
-	if total > 0 {
-		pct := current * 100 / total
-		fmt.Fprintf(p.w, "\r %d%%", pct)
-	}
-}
-
-func (p *InteractiveProgress) Finish(label string) {
-	fmt.Fprintf(p.w, "\r✓ %s\n", label)
-}
-
-func (p *InteractiveProgress) Log(level, msg string, args ...any) {
-	fmt.Fprintf(p.w, " %s", msg)
-	for i := 0; i+1 < len(args); i += 2 {
-		fmt.Fprintf(p.w, " %v=%v", args[i], args[i+1])
-	}
-	fmt.Fprintln(p.w)
-}
-
-// IsTTY returns true if the given file descriptor is a terminal.
-func IsTTY(fd uintptr) bool {
-	return isatty.IsTerminal(fd) || isatty.IsCygwinTerminal(fd)
-}
-
-// DefaultProgress returns InteractiveProgress for TTYs, QuietProgress otherwise.
-func DefaultProgress() Progress {
-	if IsTTY(os.Stdout.Fd()) {
-		return NewInteractiveProgress(os.Stdout)
-	}
-	return NewQuietProgress(os.Stdout)
-}
@@ -1,63 +0,0 @@
-package ui
-
-import (
-	"bytes"
-	"strings"
-	"testing"
-)
-
-func TestQuietProgress_Log_Good(t *testing.T) {
-	var buf bytes.Buffer
-	p := NewQuietProgress(&buf)
-	p.Log("info", "test message", "key", "val")
-	out := buf.String()
-	if !strings.Contains(out, "test message") {
-		t.Fatalf("expected log output to contain 'test message', got: %s", out)
-	}
-}
-
-func TestQuietProgress_StartFinish_Good(t *testing.T) {
-	var buf bytes.Buffer
-	p := NewQuietProgress(&buf)
-	p.Start("collecting")
-	p.Update(50, 100)
-	p.Finish("done")
-	out := buf.String()
-	if !strings.Contains(out, "collecting") {
-		t.Fatalf("expected 'collecting' in output, got: %s", out)
-	}
-	if !strings.Contains(out, "done") {
-		t.Fatalf("expected 'done' in output, got: %s", out)
-	}
-}
-
-func TestQuietProgress_Update_Ugly(t *testing.T) {
-	var buf bytes.Buffer
-	p := NewQuietProgress(&buf)
-	// Should not panic with zero total
-	p.Update(0, 0)
-	p.Update(5, 0)
-}
-
-func TestInteractiveProgress_StartFinish_Good(t *testing.T) {
-	var buf bytes.Buffer
-	p := NewInteractiveProgress(&buf)
-	p.Start("collecting")
-	p.Finish("done")
-	out := buf.String()
-	if !strings.Contains(out, "collecting") {
-		t.Fatalf("expected 'collecting', got: %s", out)
-	}
-	if !strings.Contains(out, "done") {
-		t.Fatalf("expected 'done', got: %s", out)
-	}
-}
-
-func TestInteractiveProgress_Update_Good(t *testing.T) {
-	var buf bytes.Buffer
-	p := NewInteractiveProgress(&buf)
-	p.Update(50, 100)
-	if !strings.Contains(buf.String(), "50%") {
-		t.Fatalf("expected '50%%', got: %s", buf.String())
-	}
-}
@@ -5,7 +5,7 @@ import (
 	"os"
 	"path/filepath"
 
-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 
 	"github.com/go-git/go-git/v5"
 )
@@ -11,13 +11,12 @@ import (
 	"encoding/json"
 	"syscall/js"
 
-	"forge.lthn.ai/Snider/Borg/pkg/smsg"
-	"forge.lthn.ai/Snider/Borg/pkg/stmf"
-	"forge.lthn.ai/Snider/Enchantrix/pkg/enchantrix"
+	"github.com/Snider/Borg/pkg/smsg"
+	"github.com/Snider/Borg/pkg/stmf"
 )
 
 // Version of the WASM module
-const Version = "1.6.0"
+const Version = "1.2.0"
 
 func main() {
 	// Export the BorgSTMF object to JavaScript global scope
@@ -33,24 +32,12 @@ func main() {
 	js.Global().Set("BorgSMSG", js.ValueOf(map[string]interface{}{
 		"decrypt":       js.FuncOf(smsgDecrypt),
 		"decryptStream": js.FuncOf(smsgDecryptStream),
-		"decryptBinary":       js.FuncOf(smsgDecryptBinary),       // v2/v3 binary input (no base64!)
-		"decryptV3":           js.FuncOf(smsgDecryptV3),           // v3 streaming with rolling keys
-		"getV3ChunkInfo":      js.FuncOf(smsgGetV3ChunkInfo),      // Get chunk index for seeking
-		"decryptV3Chunk":      js.FuncOf(smsgDecryptV3Chunk),      // Decrypt single chunk
-		"unwrapV3CEK":         js.FuncOf(smsgUnwrapV3CEK),         // Unwrap CEK for chunk decryption
-		"parseV3Header":       js.FuncOf(smsgParseV3Header),       // Parse header from bytes, returns header + payloadOffset
-		"unwrapCEKFromHeader": js.FuncOf(smsgUnwrapCEKFromHeader), // Unwrap CEK from parsed header
-		"decryptChunkDirect":  js.FuncOf(smsgDecryptChunkDirect),  // Decrypt raw chunk bytes with CEK
 		"encrypt":             js.FuncOf(smsgEncrypt),
 		"encryptWithManifest": js.FuncOf(smsgEncryptWithManifest),
 		"getInfo":             js.FuncOf(smsgGetInfo),
-		"getInfoBinary": js.FuncOf(smsgGetInfoBinary), // Binary input (no base64!)
 		"quickDecrypt":        js.FuncOf(smsgQuickDecrypt),
-		// ABR (Adaptive Bitrate Streaming) functions
-		"parseABRManifest": js.FuncOf(smsgParseABRManifest), // Parse ABR manifest JSON
-		"selectVariant":    js.FuncOf(smsgSelectVariant),    // Select best variant for bandwidth
-		"version": Version,
-		"ready":   true,
+		"version": Version,
+		"ready":   true,
 	}))
 
 	// Dispatch a ready event
@@ -374,182 +361,6 @@ func smsgDecryptStream(this js.Value, args []js.Value) interface{} {
 	return promiseConstructor.New(handler)
 }
 
-// smsgDecryptBinary decrypts v2/v3 binary data directly from Uint8Array.
-// No base64 conversion needed - this is the efficient path for zstd streams.
-// JavaScript usage:
-//
-//	const response = await fetch(url);
-//	const bytes = new Uint8Array(await response.arrayBuffer());
-//	const result = await BorgSMSG.decryptBinary(bytes, password);
-//	const blob = new Blob([result.attachments[0].data], {type: result.attachments[0].mime});
-func smsgDecryptBinary(this js.Value, args []js.Value) interface{} {
-	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
-		resolve := promiseArgs[0]
-		reject := promiseArgs[1]
-
-		go func() {
-			if len(args) < 2 {
-				reject.Invoke(newError("decryptBinary requires 2 arguments: Uint8Array, password"))
-				return
-			}
-
-			// Get binary data directly from Uint8Array
-			uint8Array := args[0]
-			length := uint8Array.Get("length").Int()
-			data := make([]byte, length)
-			js.CopyBytesToGo(data, uint8Array)
-
-			password := args[1].String()
-
-			// Decrypt directly from binary (no base64 decode!)
-			msg, err := smsg.Decrypt(data, password)
-			if err != nil {
-				reject.Invoke(newError("decryption failed: " + err.Error()))
-				return
-			}
-
-			// Build result with binary attachment data
-			result := map[string]interface{}{
-				"body":      msg.Body,
-				"timestamp": msg.Timestamp,
-			}
-
-			if msg.Subject != "" {
-				result["subject"] = msg.Subject
-			}
-			if msg.From != "" {
-				result["from"] = msg.From
-			}
-
-			// Convert attachments with binary data
-			if len(msg.Attachments) > 0 {
-				attachments := make([]interface{}, len(msg.Attachments))
-				for i, att := range msg.Attachments {
-					// Decode base64 to binary (internal format still uses base64)
-					attData, err := base64.StdEncoding.DecodeString(att.Content)
-					if err != nil {
-						reject.Invoke(newError("failed to decode attachment: " + err.Error()))
-						return
-					}
-
-					// Create Uint8Array in JS
-					attArray := js.Global().Get("Uint8Array").New(len(attData))
-					js.CopyBytesToJS(attArray, attData)
-
-					attachments[i] = map[string]interface{}{
-						"name": att.Name,
-						"mime": att.MimeType,
-						"size": len(attData),
-						"data": attArray,
-					}
-				}
-				result["attachments"] = attachments
-			}
-
-			resolve.Invoke(js.ValueOf(result))
-		}()
-
-		return nil
-	})
-
-	promiseConstructor := js.Global().Get("Promise")
-	return promiseConstructor.New(handler)
-}
-
-// smsgGetInfoBinary extracts header info from binary Uint8Array without decrypting.
-// JavaScript usage:
-//
-//	const bytes = new Uint8Array(await response.arrayBuffer());
-//	const info = await BorgSMSG.getInfoBinary(bytes);
-//	console.log(info.manifest);
-func smsgGetInfoBinary(this js.Value, args []js.Value) interface{} {
-	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
-		resolve := promiseArgs[0]
-		reject := promiseArgs[1]
-
-		go func() {
-			if len(args) < 1 {
-				reject.Invoke(newError("getInfoBinary requires 1 argument: Uint8Array"))
-				return
-			}
-
-			// Get binary data directly from Uint8Array
-			uint8Array := args[0]
-			length := uint8Array.Get("length").Int()
-			data := make([]byte, length)
-			js.CopyBytesToGo(data, uint8Array)
-
-			header, err := smsg.GetInfo(data)
-			if err != nil {
-				reject.Invoke(newError("failed to get info: " + err.Error()))
-				return
-			}
-
-			result := map[string]interface{}{
-				"version":   header.Version,
-				"algorithm": header.Algorithm,
-			}
-			if header.Format != "" {
-				result["format"] = header.Format
-			}
-			if header.Compression != "" {
-				result["compression"] = header.Compression
-			}
-			if header.Hint != "" {
-				result["hint"] = header.Hint
-			}
-
-			// V3 streaming fields
-			if header.KeyMethod != "" {
-				result["keyMethod"] = header.KeyMethod
-			}
-			if header.Cadence != "" {
-				result["cadence"] = string(header.Cadence)
-			}
-			if len(header.WrappedKeys) > 0 {
-				wrappedKeys := make([]interface{}, len(header.WrappedKeys))
-				for i, wk := range header.WrappedKeys {
-					wrappedKeys[i] = map[string]interface{}{
-						"date": wk.Date,
-					}
-				}
-				result["wrappedKeys"] = wrappedKeys
-				result["isV3Streaming"] = true
-			}
-
-			// V3 chunked streaming fields
-			if header.Chunked != nil {
-				index := make([]interface{}, len(header.Chunked.Index))
-				for i, ci := range header.Chunked.Index {
-					index[i] = map[string]interface{}{
-						"offset": ci.Offset,
-						"size":   ci.Size,
-					}
-				}
-				result["chunked"] = map[string]interface{}{
-					"chunkSize":   header.Chunked.ChunkSize,
-					"totalChunks": header.Chunked.TotalChunks,
-					"totalSize":   header.Chunked.TotalSize,
-					"index":       index,
-				}
-				result["isChunked"] = true
-			}
-
-			// Include manifest if present
-			if header.Manifest != nil {
-				result["manifest"] = manifestToJS(header.Manifest)
-			}
-
-			resolve.Invoke(js.ValueOf(result))
-		}()
-
-		return nil
-	})
-
-	promiseConstructor := js.Global().Get("Promise")
-	return promiseConstructor.New(handler)
-}
-
 // smsgEncrypt encrypts a message with a password.
 // JavaScript usage:
 //
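The chunk index surfaced by getInfoBinary above records an `{offset, size}` pair per encrypted chunk, so a player can translate a plaintext seek position into a single HTTP Range request. A minimal Go sketch of that lookup, assuming a fixed plaintext chunk size as carried in the header (`chunkInfo` and `chunkForPosition` are illustrative names, not the package API):

```go
package main

import "fmt"

// chunkInfo mirrors one entry of the header's chunk index: where the
// encrypted chunk starts in the payload, and how many bytes it occupies.
type chunkInfo struct {
	Offset int64 // byte offset of the encrypted chunk within the payload
	Size   int64 // encrypted size of the chunk in bytes
}

// chunkForPosition maps a plaintext byte position to a chunk index,
// given the fixed plaintext chunk size from the header.
func chunkForPosition(pos, chunkSize int64) int {
	if chunkSize <= 0 || pos < 0 { // guard against a malformed header
		return 0
	}
	return int(pos / chunkSize)
}

func main() {
	// Hypothetical index: 1 MiB plaintext chunks, each with a 16-byte AEAD tag.
	index := []chunkInfo{{0, 1048592}, {1048592, 1048592}, {2097184, 524304}}
	i := chunkForPosition(1500000, 1<<20) // seek to ~1.5 MB with 1 MiB chunks
	fmt.Println(i, index[i].Offset, index[i].Size)
}
```

Because each chunk is independently decryptable, fetching `index[i].Offset` through `index[i].Offset+index[i].Size-1` with a Range header is enough to resume playback mid-stream.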
@@ -684,43 +495,6 @@ func smsgGetInfo(this js.Value, args []js.Value) interface{} {
 		result["hint"] = header.Hint
 	}
 
-	// V3 streaming fields
-	if header.KeyMethod != "" {
-		result["keyMethod"] = header.KeyMethod
-	}
-	if header.Cadence != "" {
-		result["cadence"] = string(header.Cadence)
-	}
-	if len(header.WrappedKeys) > 0 {
-		wrappedKeys := make([]interface{}, len(header.WrappedKeys))
-		for i, wk := range header.WrappedKeys {
-			wrappedKeys[i] = map[string]interface{}{
-				"date": wk.Date,
-				// Note: wrapped key itself is not exposed for security
-			}
-		}
-		result["wrappedKeys"] = wrappedKeys
-		result["isV3Streaming"] = true
-	}
-
-	// V3 chunked streaming fields
-	if header.Chunked != nil {
-		index := make([]interface{}, len(header.Chunked.Index))
-		for i, ci := range header.Chunked.Index {
-			index[i] = map[string]interface{}{
-				"offset": ci.Offset,
-				"size":   ci.Size,
-			}
-		}
-		result["chunked"] = map[string]interface{}{
-			"chunkSize":   header.Chunked.ChunkSize,
-			"totalChunks": header.Chunked.TotalChunks,
-			"totalSize":   header.Chunked.TotalSize,
-			"index":       index,
-		}
-		result["isChunked"] = true
-	}
-
 	// Include manifest if present
 	if header.Manifest != nil {
 		result["manifest"] = manifestToJS(header.Manifest)
@@ -852,131 +626,6 @@ func smsgQuickDecrypt(this js.Value, args []js.Value) interface{} {
 	return promiseConstructor.New(handler)
 }
 
-// smsgDecryptV3 decrypts a v3 streaming message using LTHN rolling keys.
-// JavaScript usage:
-//
-//	const result = await BorgSMSG.decryptV3(encryptedBase64, {
-//		license: 'user-license-id',
-//		fingerprint: 'device-fingerprint'
-//	});
-//	// result.attachments[0].data is a Uint8Array
-func smsgDecryptV3(this js.Value, args []js.Value) interface{} {
-	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
-		resolve := promiseArgs[0]
-		reject := promiseArgs[1]
-
-		go func() {
-			if len(args) < 2 {
-				reject.Invoke(newError("decryptV3 requires 2 arguments: encryptedBase64, {license, fingerprint}"))
-				return
-			}
-
-			encryptedB64 := args[0].String()
-			paramsObj := args[1]
-
-			// Extract stream params
-			license := paramsObj.Get("license").String()
-			fingerprint := ""
-			if !paramsObj.Get("fingerprint").IsUndefined() {
-				fingerprint = paramsObj.Get("fingerprint").String()
-			}
-
-			if license == "" {
-				reject.Invoke(newError("license is required for v3 decryption"))
-				return
-			}
-
-			params := &smsg.StreamParams{
-				License:     license,
-				Fingerprint: fingerprint,
-			}
-
-			// Decode base64
-			data, err := base64.StdEncoding.DecodeString(encryptedB64)
-			if err != nil {
-				reject.Invoke(newError("invalid base64: " + err.Error()))
-				return
-			}
-
-			// Decrypt v3
-			msg, header, err := smsg.DecryptV3(data, params)
-			if err != nil {
-				reject.Invoke(newError("v3 decryption failed: " + err.Error()))
-				return
-			}
-
-			// Build result with binary attachment data
-			result := map[string]interface{}{
-				"body":      msg.Body,
-				"timestamp": msg.Timestamp,
-			}
-
-			if msg.Subject != "" {
-				result["subject"] = msg.Subject
-			}
-			if msg.From != "" {
-				result["from"] = msg.From
-			}
-
-			// Include header info
-			if header != nil {
-				headerResult := map[string]interface{}{
-					"format":    header.Format,
-					"keyMethod": header.KeyMethod,
-				}
-				if header.Cadence != "" {
-					headerResult["cadence"] = string(header.Cadence)
-				}
-				// Include chunked info if present
-				if header.Chunked != nil {
-					headerResult["isChunked"] = true
-					headerResult["chunked"] = map[string]interface{}{
-						"chunkSize":   header.Chunked.ChunkSize,
-						"totalChunks": header.Chunked.TotalChunks,
-						"totalSize":   header.Chunked.TotalSize,
-					}
-				}
-				result["header"] = headerResult
-				if header.Manifest != nil {
-					result["manifest"] = manifestToJS(header.Manifest)
-				}
-			}
-
-			// Convert attachments with binary data
-			if len(msg.Attachments) > 0 {
-				attachments := make([]interface{}, len(msg.Attachments))
-				for i, att := range msg.Attachments {
-					// Decode base64 to binary
-					data, err := base64.StdEncoding.DecodeString(att.Content)
-					if err != nil {
-						reject.Invoke(newError("failed to decode attachment: " + err.Error()))
-						return
-					}
-
-					// Create Uint8Array in JS
-					uint8Array := js.Global().Get("Uint8Array").New(len(data))
-					js.CopyBytesToJS(uint8Array, data)
-
-					attachments[i] = map[string]interface{}{
-						"name": att.Name,
-						"mime": att.MimeType,
-						"size": len(data),
-						"data": uint8Array,
-					}
-				}
-				result["attachments"] = attachments
-			}
-
-			resolve.Invoke(js.ValueOf(result))
-		}()
-
-		return nil
-	})
-
-	promiseConstructor := js.Global().Get("Promise")
-	return promiseConstructor.New(handler)
-}
-
 // messageToJS converts an smsg.Message to a JavaScript object
 func messageToJS(msg *smsg.Message) js.Value {
 	result := map[string]interface{}{
@ -1122,447 +771,6 @@ func manifestToJS(m *smsg.Manifest) map[string]interface{} {
|
||||||
return result
|
return result
|
||||||
}
|
}
|
||||||
|
|
||||||
// smsgGetV3ChunkInfo extracts chunk information from a v3 file for seeking.
|
|
||||||
// JavaScript usage:
|
|
||||||
//
|
|
||||||
// const info = await BorgSMSG.getV3ChunkInfo(encryptedBase64);
|
|
||||||
// console.log(info.chunked.totalChunks);
|
|
||||||
// console.log(info.chunked.index); // [{offset, size}, ...]
|
|
||||||
func smsgGetV3ChunkInfo(this js.Value, args []js.Value) interface{} {
|
|
||||||
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
|
|
||||||
resolve := promiseArgs[0]
|
|
||||||
reject := promiseArgs[1]
|
|
||||||
|
|
||||||
go func() {
|
|
||||||
if len(args) < 1 {
|
|
||||||
reject.Invoke(newError("getV3ChunkInfo requires 1 argument: encryptedBase64"))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
encryptedB64 := args[0].String()
|
|
||||||
|
|
||||||
// Decode base64
|
|
||||||
data, err := base64.StdEncoding.DecodeString(encryptedB64)
|
|
||||||
if err != nil {
|
|
||||||
reject.Invoke(newError("invalid base64: " + err.Error()))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
// Get v3 header
|
|
||||||
header, err := smsg.GetV3Header(data)
|
|
||||||
if err != nil {
|
|
||||||
reject.Invoke(newError("failed to get v3 header: " + err.Error()))
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
||||||
result := map[string]interface{}{
|
|
||||||
"format": header.Format,
|
|
||||||
"keyMethod": header.KeyMethod,
|
|
||||||
"cadence": string(header.Cadence),
|
|
||||||
}
|
|
||||||
|
|
||||||
// Include chunked info if present
|
|
||||||
if header.Chunked != nil {
|
|
||||||
index := make([]interface{}, len(header.Chunked.Index))
|
|
||||||
for i, ci := range header.Chunked.Index {
|
|
||||||
index[i] = map[string]interface{}{
|
|
||||||
"offset": ci.Offset,
|
|
||||||
"size": ci.Size,
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
result["chunked"] = map[string]interface{}{
|
|
||||||
"chunkSize": header.Chunked.ChunkSize,
|
|
||||||
"totalChunks": header.Chunked.TotalChunks,
|
|
||||||
"totalSize": header.Chunked.TotalSize,
|
|
||||||
"index": index,
|
|
||||||
}
|
|
||||||
result["isChunked"] = true
|
|
||||||
} else {
|
|
||||||
result["isChunked"] = false
|
|
||||||
}
|
|
||||||
|
|
||||||
// Include manifest if present
|
|
||||||
if header.Manifest != nil {
|
|
||||||
result["manifest"] = manifestToJS(header.Manifest)
|
|
||||||
}
|
|
||||||
|
|
||||||
resolve.Invoke(js.ValueOf(result))
|
|
||||||
}()
|
|
||||||
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
|
|
||||||
promiseConstructor := js.Global().Get("Promise")
|
|
||||||
return promiseConstructor.New(handler)
|
|
||||||
}
|
|
||||||
|
|
||||||
// smsgUnwrapV3CEK unwraps the Content Encryption Key for chunk-by-chunk decryption.
// JavaScript usage:
//
// const cek = await BorgSMSG.unwrapV3CEK(encryptedBase64, {license, fingerprint});
// // cek is base64-encoded CEK for use with decryptV3Chunk
func smsgUnwrapV3CEK(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 2 {
				reject.Invoke(newError("unwrapV3CEK requires 2 arguments: encryptedBase64, {license, fingerprint}"))
				return
			}

			encryptedB64 := args[0].String()
			paramsObj := args[1]

			// Extract stream params
			license := paramsObj.Get("license").String()
			fingerprint := ""
			if !paramsObj.Get("fingerprint").IsUndefined() {
				fingerprint = paramsObj.Get("fingerprint").String()
			}

			if license == "" {
				reject.Invoke(newError("license is required"))
				return
			}

			params := &smsg.StreamParams{
				License:     license,
				Fingerprint: fingerprint,
			}

			// Decode base64
			data, err := base64.StdEncoding.DecodeString(encryptedB64)
			if err != nil {
				reject.Invoke(newError("invalid base64: " + err.Error()))
				return
			}

			// Get header
			header, err := smsg.GetV3Header(data)
			if err != nil {
				reject.Invoke(newError("failed to get v3 header: " + err.Error()))
				return
			}

			// Unwrap CEK
			cek, err := smsg.UnwrapCEKFromHeader(header, params)
			if err != nil {
				reject.Invoke(newError("failed to unwrap CEK: " + err.Error()))
				return
			}

			// Return CEK as base64 for use with decryptV3Chunk
			cekB64 := base64.StdEncoding.EncodeToString(cek)
			resolve.Invoke(cekB64)
		}()

		return nil
	})

	promiseConstructor := js.Global().Get("Promise")
	return promiseConstructor.New(handler)
}

// smsgDecryptV3Chunk decrypts a single chunk by index.
// JavaScript usage:
//
// const info = await BorgSMSG.getV3ChunkInfo(encryptedBase64);
// const cek = await BorgSMSG.unwrapV3CEK(encryptedBase64, {license, fingerprint});
// for (let i = 0; i < info.chunked.totalChunks; i++) {
//   const chunk = await BorgSMSG.decryptV3Chunk(encryptedBase64, cek, i);
//   // chunk is Uint8Array of decrypted data
// }
func smsgDecryptV3Chunk(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 3 {
				reject.Invoke(newError("decryptV3Chunk requires 3 arguments: encryptedBase64, cekBase64, chunkIndex"))
				return
			}

			encryptedB64 := args[0].String()
			cekB64 := args[1].String()
			chunkIndex := args[2].Int()

			// Decode base64 data
			data, err := base64.StdEncoding.DecodeString(encryptedB64)
			if err != nil {
				reject.Invoke(newError("invalid base64: " + err.Error()))
				return
			}

			// Decode CEK
			cek, err := base64.StdEncoding.DecodeString(cekB64)
			if err != nil {
				reject.Invoke(newError("invalid CEK base64: " + err.Error()))
				return
			}

			// Get header for chunk info
			header, err := smsg.GetV3Header(data)
			if err != nil {
				reject.Invoke(newError("failed to get v3 header: " + err.Error()))
				return
			}

			if header.Chunked == nil {
				reject.Invoke(newError("not a chunked v3 file"))
				return
			}

			// Get payload
			payload, err := smsg.GetV3Payload(data)
			if err != nil {
				reject.Invoke(newError("failed to get payload: " + err.Error()))
				return
			}

			// Decrypt the chunk
			decrypted, err := smsg.DecryptV3Chunk(payload, cek, chunkIndex, header.Chunked)
			if err != nil {
				reject.Invoke(newError("failed to decrypt chunk: " + err.Error()))
				return
			}

			// Return as Uint8Array
			uint8Array := js.Global().Get("Uint8Array").New(len(decrypted))
			js.CopyBytesToJS(uint8Array, decrypted)

			resolve.Invoke(uint8Array)
		}()

		return nil
	})

	promiseConstructor := js.Global().Get("Promise")
	return promiseConstructor.New(handler)
}

// smsgParseV3Header parses the header from file bytes and returns header info plus the payload offset.
// This allows streaming: fetch the header first, then fetch chunks as needed.
// JavaScript usage:
//
// const headerInfo = await BorgSMSG.parseV3Header(fileBytes);
// // headerInfo.payloadOffset = where encrypted chunks start
// // headerInfo.chunked.index = [{offset, size}, ...] relative to payload
//
// STREAMING: This function uses GetV3HeaderFromPrefix, which only needs
// the first few KB of the file. Call it as soon as ~3KB arrives.
func smsgParseV3Header(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 1 {
				reject.Invoke(newError("parseV3Header requires 1 argument: Uint8Array"))
				return
			}

			// Get binary data from Uint8Array
			uint8Array := args[0]
			length := uint8Array.Get("length").Int()
			data := make([]byte, length)
			js.CopyBytesToGo(data, uint8Array)

			// Parse header from prefix - works with partial data!
			header, payloadOffset, err := smsg.GetV3HeaderFromPrefix(data)
			if err != nil {
				reject.Invoke(newError("failed to parse header: " + err.Error()))
				return
			}

			result := map[string]interface{}{
				"format":        header.Format,
				"keyMethod":     header.KeyMethod,
				"cadence":       string(header.Cadence),
				"payloadOffset": payloadOffset,
			}

			// Include wrapped keys for CEK unwrapping
			if len(header.WrappedKeys) > 0 {
				wrappedKeys := make([]interface{}, len(header.WrappedKeys))
				for i, wk := range header.WrappedKeys {
					wrappedKeys[i] = map[string]interface{}{
						"date":    wk.Date,
						"wrapped": wk.Wrapped,
					}
				}
				result["wrappedKeys"] = wrappedKeys
			}

			// Include chunk info
			if header.Chunked != nil {
				index := make([]interface{}, len(header.Chunked.Index))
				for i, ci := range header.Chunked.Index {
					index[i] = map[string]interface{}{
						"offset": ci.Offset,
						"size":   ci.Size,
					}
				}
				result["chunked"] = map[string]interface{}{
					"chunkSize":   header.Chunked.ChunkSize,
					"totalChunks": header.Chunked.TotalChunks,
					"totalSize":   header.Chunked.TotalSize,
					"index":       index,
				}
			}

			if header.Manifest != nil {
				result["manifest"] = manifestToJS(header.Manifest)
			}

			resolve.Invoke(js.ValueOf(result))
		}()

		return nil
	})

	promiseConstructor := js.Global().Get("Promise")
	return promiseConstructor.New(handler)
}

// smsgUnwrapCEKFromHeader unwraps the CEK using wrapped keys from the header.
// JavaScript usage:
//
// const headerInfo = await BorgSMSG.parseV3Header(fileBytes);
// const cek = await BorgSMSG.unwrapCEKFromHeader(headerInfo.wrappedKeys, {license, fingerprint}, headerInfo.cadence);
func smsgUnwrapCEKFromHeader(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 2 {
				reject.Invoke(newError("unwrapCEKFromHeader requires 2-3 arguments: wrappedKeys, {license, fingerprint}, [cadence]"))
				return
			}

			wrappedKeysJS := args[0]
			paramsObj := args[1]

			// Get cadence (optional, defaults to daily)
			cadence := smsg.CadenceDaily
			if len(args) >= 3 && !args[2].IsUndefined() {
				cadence = smsg.Cadence(args[2].String())
			}

			// Extract stream params
			license := paramsObj.Get("license").String()
			fingerprint := ""
			if !paramsObj.Get("fingerprint").IsUndefined() {
				fingerprint = paramsObj.Get("fingerprint").String()
			}

			if license == "" {
				reject.Invoke(newError("license is required"))
				return
			}

			// Convert JS wrapped keys to Go
			var wrappedKeys []smsg.WrappedKey
			for i := 0; i < wrappedKeysJS.Length(); i++ {
				wk := wrappedKeysJS.Index(i)
				wrappedKeys = append(wrappedKeys, smsg.WrappedKey{
					Date:    wk.Get("date").String(),
					Wrapped: wk.Get("wrapped").String(),
				})
			}

			// Build header with just the wrapped keys
			header := &smsg.Header{
				WrappedKeys: wrappedKeys,
				Cadence:     cadence,
			}

			params := &smsg.StreamParams{
				License:     license,
				Fingerprint: fingerprint,
				Cadence:     cadence,
			}

			// Unwrap CEK
			cek, err := smsg.UnwrapCEKFromHeader(header, params)
			if err != nil {
				reject.Invoke(newError("failed to unwrap CEK: " + err.Error()))
				return
			}

			// Return CEK as Uint8Array
			cekArray := js.Global().Get("Uint8Array").New(len(cek))
			js.CopyBytesToJS(cekArray, cek)

			resolve.Invoke(cekArray)
		}()

		return nil
	})

	promiseConstructor := js.Global().Get("Promise")
	return promiseConstructor.New(handler)
}

// smsgDecryptChunkDirect decrypts raw chunk bytes with the CEK.
// JavaScript usage:
//
// const chunkBytes = fileBytes.subarray(payloadOffset + chunk.offset, payloadOffset + chunk.offset + chunk.size);
// const decrypted = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
func smsgDecryptChunkDirect(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 2 {
				reject.Invoke(newError("decryptChunkDirect requires 2 arguments: chunkBytes (Uint8Array), cek (Uint8Array)"))
				return
			}

			// Get chunk bytes
			chunkArray := args[0]
			chunkLen := chunkArray.Get("length").Int()
			chunkData := make([]byte, chunkLen)
			js.CopyBytesToGo(chunkData, chunkArray)

			// Get CEK
			cekArray := args[1]
			cekLen := cekArray.Get("length").Int()
			cek := make([]byte, cekLen)
			js.CopyBytesToGo(cek, cekArray)

			// Create sigil and decrypt
			sigil, err := enchantrix.NewChaChaPolySigil(cek)
			if err != nil {
				reject.Invoke(newError("failed to create sigil: " + err.Error()))
				return
			}

			decrypted, err := sigil.Out(chunkData)
			if err != nil {
				reject.Invoke(newError("decryption failed: " + err.Error()))
				return
			}

			// Return as Uint8Array
			result := js.Global().Get("Uint8Array").New(len(decrypted))
			js.CopyBytesToJS(result, decrypted)

			resolve.Invoke(result)
		}()

		return nil
	})

	promiseConstructor := js.Global().Get("Promise")
	return promiseConstructor.New(handler)
}

// jsToManifest converts a JavaScript object to an smsg.Manifest
func jsToManifest(obj js.Value) *smsg.Manifest {
	if obj.IsUndefined() || obj.IsNull() {

@ -1653,106 +861,3 @@ func jsToManifest(obj js.Value) *smsg.Manifest {

	return manifest
}

// ========== ABR (Adaptive Bitrate Streaming) Functions ==========

// smsgParseABRManifest parses an ABR manifest from JSON string.
// JavaScript usage:
//
// const manifest = await BorgSMSG.parseABRManifest(jsonString);
// // Returns: {version, title, duration, variants: [{name, bandwidth, width, height, url, ...}], defaultIdx}
func smsgParseABRManifest(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 1 {
				reject.Invoke(newError("parseABRManifest requires 1 argument: jsonString"))
				return
			}

			jsonStr := args[0].String()
			manifest, err := smsg.ParseABRManifest([]byte(jsonStr))
			if err != nil {
				reject.Invoke(newError("failed to parse ABR manifest: " + err.Error()))
				return
			}

			// Convert to JS object
			variants := make([]interface{}, len(manifest.Variants))
			for i, v := range manifest.Variants {
				variants[i] = map[string]interface{}{
					"name":       v.Name,
					"bandwidth":  v.Bandwidth,
					"width":      v.Width,
					"height":     v.Height,
					"codecs":     v.Codecs,
					"url":        v.URL,
					"chunkCount": v.ChunkCount,
					"fileSize":   v.FileSize,
				}
			}

			result := map[string]interface{}{
				"version":    manifest.Version,
				"title":      manifest.Title,
				"duration":   manifest.Duration,
				"variants":   variants,
				"defaultIdx": manifest.DefaultIdx,
			}

			resolve.Invoke(js.ValueOf(result))
		}()
		return nil
	})

	return js.Global().Get("Promise").New(handler)
}

// smsgSelectVariant selects the best variant for the given bandwidth.
// JavaScript usage:
//
// const idx = await BorgSMSG.selectVariant(manifest, bandwidthBPS);
// // Returns: index of best variant that fits within 80% of bandwidth
func smsgSelectVariant(this js.Value, args []js.Value) interface{} {
	handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
		resolve := promiseArgs[0]
		reject := promiseArgs[1]

		go func() {
			if len(args) < 2 {
				reject.Invoke(newError("selectVariant requires 2 arguments: manifest, bandwidthBPS"))
				return
			}

			manifestObj := args[0]
			bandwidthBPS := args[1].Int()

			// Extract variants from JS object
			variantsJS := manifestObj.Get("variants")
			if variantsJS.IsUndefined() || variantsJS.Length() == 0 {
				reject.Invoke(newError("manifest has no variants"))
				return
			}

			// Build manifest struct
			manifest := &smsg.ABRManifest{
				Variants: make([]smsg.Variant, variantsJS.Length()),
			}
			for i := 0; i < variantsJS.Length(); i++ {
				v := variantsJS.Index(i)
				manifest.Variants[i] = smsg.Variant{
					Bandwidth: v.Get("bandwidth").Int(),
				}
			}

			// Select best variant
			selectedIdx := manifest.SelectVariant(bandwidthBPS)
			resolve.Invoke(selectedIdx)
		}()
		return nil
	})

	return js.Global().Get("Promise").New(handler)
}

@ -7,7 +7,7 @@ import (
 	"net/url"
 	"strings"

-	"forge.lthn.ai/Snider/Borg/pkg/datanode"
+	"github.com/Snider/Borg/pkg/datanode"
 	"github.com/schollz/progressbar/v3"

 	"golang.org/x/net/html"
@ -1,40 +0,0 @@
# Borg RFC Specifications

This directory contains technical specifications (RFCs) for the Borg project.

## Index

| RFC | Title | Status | Description |
|-----|-------|--------|-------------|
| [001](RFC-001-OSS-DRM.md) | Open Source DRM | Proposed | Core DRM system for independent artists |
| [002](RFC-002-SMSG-FORMAT.md) | SMSG Container Format | Draft | Encrypted container format (v1/v2/v3) |
| [003](RFC-003-DATANODE.md) | DataNode | Draft | In-memory filesystem abstraction |
| [004](RFC-004-TIM.md) | Terminal Isolation Matrix | Draft | OCI-compatible container bundle |
| [005](RFC-005-STIM.md) | Encrypted TIM | Draft | ChaCha20-Poly1305 encrypted containers |
| [006](RFC-006-TRIX.md) | TRIX PGP Format | Draft | PGP encryption for archives and accounts |
| [007](RFC-007-LTHN.md) | LTHN Key Derivation | Draft | Rainbow-table resistant rolling keys |
| [008](RFC-008-BORGFILE.md) | Borgfile | Draft | Container compilation syntax |
| [009](RFC-009-STMF.md) | Secure To-Me Form | Draft | Asymmetric form encryption |
| [010](RFC-010-WASM-API.md) | WASM Decryption API | Draft | Browser decryption interface |

## Status Definitions

| Status | Meaning |
|--------|---------|
| **Draft** | Initial specification, subject to change |
| **Proposed** | Ready for review, implementation may begin |
| **Accepted** | Approved, implementation complete |
| **Deprecated** | Superseded by newer specification |

## Contributing

1. Create a new RFC with the next available number
2. Use the template format (see existing RFCs)
3. Start with "Draft" status
4. Update this README index

## Related Documentation

- [CLAUDE.md](../CLAUDE.md) - Developer quick reference
- [docs/](../docs/) - User documentation
- [examples/formats/](../examples/formats/) - Format examples

@ -1,480 +0,0 @@
# RFC-002: SMSG Container Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-001, RFC-007

---

## Abstract

SMSG (Secure Message) is an encrypted container format using ChaCha20-Poly1305 authenticated encryption. This RFC specifies the binary wire format, versioning, and encoding rules for SMSG files.

## 1. Overview

SMSG provides:
- Authenticated encryption (ChaCha20-Poly1305)
- Public metadata (manifest) readable without decryption
- Multiple format versions (v1 legacy, v2 binary, v3 streaming)
- Optional chunking for large files and seeking

## 2. File Structure

### 2.1 Binary Layout

```
Offset  Size  Field
------  ----  ------------------------------------
0       4     Magic: "SMSG" (ASCII)
4       2     Version: uint16 little-endian
6       3     Header Length: 3-byte big-endian
9       N     Header JSON (plaintext)
9+N     M     Encrypted Payload
```

### 2.2 Magic Number

| Format | Value |
|--------|-------|
| Binary | `0x53 0x4D 0x53 0x47` |
| ASCII | `SMSG` |
| Base64 (first 6 chars) | `U01TRw` |

### 2.3 Version Field

Current version: `0x0001` (1)

Decoders MUST reject versions they don't understand.

### 2.4 Header Length

3 bytes, big-endian unsigned integer. Supports headers up to 16 MB.

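The fixed prefix above can be parsed with a few lines of stdlib Go. A minimal sketch, not part of the `smsg` package API; `parsePrefix` is an illustrative helper name:

```go
package main

import (
	"encoding/binary"
	"encoding/json"
	"fmt"
)

// parsePrefix splits a raw SMSG file per the layout in section 2.1:
// 4-byte magic, 2-byte little-endian version, 3-byte big-endian
// header length, then plaintext header JSON, then encrypted payload.
func parsePrefix(raw []byte) (version uint16, header map[string]interface{}, payload []byte, err error) {
	if len(raw) < 9 || string(raw[0:4]) != "SMSG" {
		return 0, nil, nil, fmt.Errorf("invalid SMSG magic")
	}
	version = binary.LittleEndian.Uint16(raw[4:6])
	// 3-byte big-endian length: supports headers up to 16 MB.
	hdrLen := int(raw[6])<<16 | int(raw[7])<<8 | int(raw[8])
	if len(raw) < 9+hdrLen {
		return 0, nil, nil, fmt.Errorf("truncated header")
	}
	if err := json.Unmarshal(raw[9:9+hdrLen], &header); err != nil {
		return 0, nil, nil, err
	}
	return version, header, raw[9+hdrLen:], nil
}

func main() {
	// Build a tiny file by hand: magic, version 1, header length, header, payload.
	hdr := []byte(`{"version":"1.0","algorithm":"chacha20poly1305"}`)
	raw := append([]byte("SMSG"), 0x01, 0x00)
	raw = append(raw, byte(len(hdr)>>16), byte(len(hdr)>>8), byte(len(hdr)))
	raw = append(raw, hdr...)
	raw = append(raw, []byte("ciphertext...")...)

	v, h, payload, err := parsePrefix(raw)
	fmt.Println(v, h["algorithm"], len(payload), err)
}
```

Because the header is plaintext, this is all a client needs to inspect the manifest before committing to decryption.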
## 3. Header Format (JSON)

The header is always plaintext (never encrypted), enabling metadata inspection without decryption.

### 3.1 Base Header

```json
{
  "version": "1.0",
  "algorithm": "chacha20poly1305",
  "format": "v2",
  "compression": "zstd",
  "manifest": { ... }
}
```

### 3.2 V3 Header Extensions

```json
{
  "version": "1.0",
  "algorithm": "chacha20poly1305",
  "format": "v3",
  "compression": "zstd",
  "keyMethod": "lthn-rolling",
  "cadence": "daily",
  "manifest": { ... },
  "wrappedKeys": [
    {"date": "2026-01-13", "wrapped": "<base64>"},
    {"date": "2026-01-14", "wrapped": "<base64>"}
  ],
  "chunked": {
    "chunkSize": 1048576,
    "totalChunks": 42,
    "totalSize": 44040192,
    "index": [
      {"offset": 0, "size": 1048600},
      {"offset": 1048600, "size": 1048600}
    ]
  }
}
```

### 3.3 Header Field Reference

| Field | Type | Values | Description |
|-------|------|--------|-------------|
| version | string | "1.0" | Format version string |
| algorithm | string | "chacha20poly1305" | Always ChaCha20-Poly1305 |
| format | string | "", "v2", "v3" | Payload format version |
| compression | string | "", "gzip", "zstd" | Compression algorithm |
| keyMethod | string | "", "lthn-rolling" | Key derivation method |
| cadence | string | "daily", "12h", "6h", "1h" | Rolling key period (v3) |
| manifest | object | - | Content metadata |
| wrappedKeys | array | - | CEK wrapped for each period (v3) |
| chunked | object | - | Chunk index for seeking (v3) |

## 4. Manifest Structure

### 4.1 Complete Manifest

```go
type Manifest struct {
	Title       string            `json:"title,omitempty"`
	Artist      string            `json:"artist,omitempty"`
	Album       string            `json:"album,omitempty"`
	Genre       string            `json:"genre,omitempty"`
	Year        int               `json:"year,omitempty"`
	ReleaseType string            `json:"release_type,omitempty"`
	Duration    int               `json:"duration,omitempty"`
	Format      string            `json:"format,omitempty"`
	ExpiresAt   int64             `json:"expires_at,omitempty"`
	IssuedAt    int64             `json:"issued_at,omitempty"`
	LicenseType string            `json:"license_type,omitempty"`
	Tracks      []Track           `json:"tracks,omitempty"`
	Links       map[string]string `json:"links,omitempty"`
	Tags        []string          `json:"tags,omitempty"`
	Extra       map[string]string `json:"extra,omitempty"`
}

type Track struct {
	Title    string  `json:"title"`
	Start    float64 `json:"start"`
	End      float64 `json:"end,omitempty"`
	Type     string  `json:"type,omitempty"`
	TrackNum int     `json:"track_num,omitempty"`
}
```

### 4.2 Manifest Field Reference

| Field | Type | Range | Description |
|-------|------|-------|-------------|
| title | string | 0-255 chars | Display name (required for discovery) |
| artist | string | 0-255 chars | Creator name |
| album | string | 0-255 chars | Album/collection name |
| genre | string | 0-255 chars | Genre classification |
| year | int | 0-9999 | Release year (0 = unset) |
| releaseType | string | enum | "single", "album", "ep", "mix" |
| duration | int | 0+ | Total duration in seconds |
| format | string | any | Platform format string (e.g., "dapp.fm/v1") |
| expiresAt | int64 | 0+ | Unix timestamp (0 = never expires) |
| issuedAt | int64 | 0+ | Unix timestamp of license issue |
| licenseType | string | enum | "perpetual", "rental", "stream", "preview" |
| tracks | []Track | - | Track boundaries for multi-track releases |
| links | map | - | Platform name → URL (e.g., "bandcamp" → URL) |
| tags | []string | - | Arbitrary string tags |
| extra | map | - | Free-form key-value extension data |

## 5. Format Versions

### 5.1 Version Comparison

| Aspect | v1 (Legacy) | v2 (Binary) | v3 (Streaming) |
|--------|-------------|-------------|----------------|
| Payload Structure | JSON only | Length-prefixed JSON + binary | Same as v2 |
| Attachment Encoding | Base64 in JSON | Size field + raw binary | Size field + raw binary |
| Compression | None | zstd (default) | zstd (default) |
| Key Derivation | SHA256(password) | SHA256(password) | LTHN rolling keys |
| Chunked Support | No | No | Yes (optional) |
| Size Overhead | ~33% | ~25% | ~15% |
| Use Case | Legacy | General purpose | Time-limited streaming |

### 5.2 V1 Format (Legacy)

**Payload (after decryption):**

```json
{
  "body": "Message content",
  "subject": "Optional subject",
  "from": "sender@example.com",
  "to": "recipient@example.com",
  "timestamp": 1673644800,
  "attachments": [
    {
      "name": "file.bin",
      "content": "base64encodeddata==",
      "mime": "application/octet-stream",
      "size": 1024
    }
  ],
  "reply_key": {
    "public_key": "base64x25519key==",
    "algorithm": "x25519"
  },
  "meta": {
    "custom_field": "custom_value"
  }
}
```

- Attachments base64-encoded inline in JSON (~33% overhead)
- Simple but inefficient for large files

### 5.3 V2 Format (Binary)

**Payload structure (after decryption and decompression):**

```
Offset  Size  Field
------  ----  ------------------------------------
0       4     Message JSON Length (big-endian uint32)
4       N     Message JSON (attachments have size only, no content)
4+N     B1    Attachment 1 raw binary
4+N+B1  B2    Attachment 2 raw binary
...
```

**Message JSON (within payload):**

```json
{
  "body": "Message text",
  "subject": "Subject",
  "from": "sender",
  "attachments": [
    {"name": "file1.bin", "mime": "application/octet-stream", "size": 4096},
    {"name": "file2.bin", "mime": "image/png", "size": 65536}
  ],
  "timestamp": 1673644800
}
```

- Attachment `content` field omitted; binary data follows JSON
- Compressed before encryption
- 3-10x faster than v1, ~25% smaller

### 5.4 V3 Format (Streaming)

Same payload structure as v2, but with:
- LTHN-derived rolling keys instead of a password
- CEK (Content Encryption Key) wrapped for each time period
- Optional chunking for seek support

**CEK Wrapping:**

```
For each rolling period:
  streamKey  = SHA256(LTHN(period:license:fingerprint))
  wrappedKey = ChaCha20-Poly1305(CEK, streamKey)
```

**Rolling Periods (cadence):**

| Cadence | Period Format | Example |
|---------|---------------|---------|
| daily | YYYY-MM-DD | "2026-01-13" |
| 12h | YYYY-MM-DD-AM/PM | "2026-01-13-AM" |
| 6h | YYYY-MM-DD-HH | "2026-01-13-00", "2026-01-13-06" |
| 1h | YYYY-MM-DD-HH | "2026-01-13-15" |

### 5.5 V3 Chunked Format

**Payload (independently decryptable chunks):**

```
Offset   Size     Content
------   ----     ----------------------------------
0        1048600  Chunk 0: [24-byte nonce][ciphertext][16-byte tag]
1048600  1048600  Chunk 1: [24-byte nonce][ciphertext][16-byte tag]
...
```

- Each chunk encrypted separately with same CEK, unique nonce
- Enables seeking, HTTP Range requests
- Chunk size typically 1MB (configurable)

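Seeking works by mapping a plaintext position to a chunk index via `chunkSize`, then using the header's chunk index to build an HTTP Range request for just that chunk. A sketch under stated assumptions: `ChunkRef` and `locate` are illustrative names, and the returned range is relative to the payload offset reported by the header parser, not the file start:

```go
package main

import "fmt"

// ChunkRef mirrors one entry of header.chunked.index: the chunk's
// offset and size within the encrypted payload.
type ChunkRef struct{ Offset, Size int64 }

// locate maps a plaintext byte position to the chunk containing it
// and that chunk's ciphertext byte range (inclusive), suitable for
// an HTTP Range request after adding payloadOffset.
func locate(pos, chunkSize int64, index []ChunkRef) (idx int, start, end int64) {
	idx = int(pos / chunkSize) // each chunk holds chunkSize plaintext bytes
	c := index[idx]
	return idx, c.Offset, c.Offset + c.Size - 1
}

func main() {
	index := []ChunkRef{{0, 1048600}, {1048600, 1048600}, {2097200, 1048600}}
	i, start, end := locate(1500000, 1048576, index)
	fmt.Printf("chunk %d, Range: bytes=%d-%d\n", i, start, end)
	// chunk 1, Range: bytes=1048600-2097199
}
```

The fetched bytes can then go straight to `decryptChunkDirect` with the unwrapped CEK.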
## 6. Encryption

### 6.1 Algorithm

XChaCha20-Poly1305 (extended nonce variant)

| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (XChaCha) |
| Tag size | 16 bytes |

### 6.2 Ciphertext Structure

```
[24-byte XChaCha20 nonce][encrypted data][16-byte Poly1305 tag]
```

**Critical**: Nonces are embedded IN the ciphertext by the Enchantrix library, NOT transmitted separately in headers.

### 6.3 Key Derivation

**V1/V2 (Password-based):**

```go
key := sha256.Sum256([]byte(password)) // 32 bytes
```

**V3 (LTHN Rolling):**

```go
// For each period in rolling window:
streamKey := sha256.Sum256([]byte(
	crypt.NewService().Hash(crypt.LTHN, period+":"+license+":"+fingerprint),
))
```

## 7. Compression

| Value | Algorithm | Notes |
|-------|-----------|-------|
| "" (empty) | None | Raw bytes, default for v1 |
| "gzip" | RFC 1952 | Stdlib, WASM compatible |
| "zstd" | Zstandard | Default for v2/v3, better ratio |

**Order**: Compress → Encrypt (on write), Decrypt → Decompress (on read)

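The ordering can be demonstrated with stdlib-only code. This is a sketch, not the SMSG implementation: AES-GCM stands in for ChaCha20-Poly1305 purely to keep the example dependency-free, and only the compress-then-encrypt ordering, the SHA256 password derivation, and the embedded-nonce layout mirror the spec:

```go
package main

import (
	"bytes"
	"compress/gzip"
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"crypto/sha256"
	"fmt"
	"io"
)

// seal shows the write order from section 7: compress first, then encrypt.
func seal(plain []byte, password string) ([]byte, error) {
	var buf bytes.Buffer
	zw := gzip.NewWriter(&buf)
	zw.Write(plain)
	zw.Close()

	key := sha256.Sum256([]byte(password)) // v1/v2 derivation (section 6.3)
	block, _ := aes.NewCipher(key[:])
	aead, _ := cipher.NewGCM(block)
	nonce := make([]byte, aead.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Nonce is embedded in the ciphertext, as in section 6.2.
	return aead.Seal(nonce, nonce, buf.Bytes(), nil), nil
}

// open reverses the order: decrypt first, then decompress.
func open(sealed []byte, password string) ([]byte, error) {
	key := sha256.Sum256([]byte(password))
	block, _ := aes.NewCipher(key[:])
	aead, _ := cipher.NewGCM(block)
	n := aead.NonceSize()
	plainZ, err := aead.Open(nil, sealed[:n], sealed[n:], nil)
	if err != nil {
		return nil, err
	}
	zr, err := gzip.NewReader(bytes.NewReader(plainZ))
	if err != nil {
		return nil, err
	}
	return io.ReadAll(zr)
}

func main() {
	sealed, _ := seal([]byte("hello smsg"), "secret")
	plain, err := open(sealed, "secret")
	fmt.Println(string(plain), err)
}
```

Compressing after encryption would be pointless (ciphertext is incompressible), which is why the order is fixed.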
## 8. Message Structure
|
|
||||||
|
|
||||||
### 8.1 Go Types
|
|
||||||
|
|
||||||
```go
|
|
||||||
type Message struct {
|
|
||||||
From string `json:"from,omitempty"`
|
|
||||||
To string `json:"to,omitempty"`
|
|
||||||
Subject string `json:"subject,omitempty"`
|
|
||||||
Body string `json:"body"`
|
|
||||||
Timestamp int64 `json:"timestamp,omitempty"`
|
|
||||||
Attachments []Attachment `json:"attachments,omitempty"`
|
|
||||||
ReplyKey *KeyInfo `json:"reply_key,omitempty"`
|
|
||||||
Meta map[string]string `json:"meta,omitempty"`
|
|
||||||
}
|
|
||||||
|
|
||||||
type Attachment struct {
|
|
||||||
Name string `json:"name"`
|
|
||||||
Mime string `json:"mime"`
|
|
||||||
Size int `json:"size"`
|
|
||||||
Content string `json:"content,omitempty"` // Base64, v1 only
|
|
||||||
Data []byte `json:"-"` // Binary, v2/v3
|
|
||||||
}
|
|
||||||
|
|
||||||
type KeyInfo struct {
|
|
||||||
PublicKey string `json:"public_key"`
|
|
||||||
Algorithm string `json:"algorithm"`
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
### 8.2 Stream Parameters (V3)
|
|
||||||
|
|
||||||
```go
|
|
||||||
type StreamParams struct {
|
|
||||||
License string `json:"license"` // User's license identifier
|
|
||||||
Fingerprint string `json:"fingerprint"` // Device fingerprint (optional)
|
|
||||||
Cadence string `json:"cadence"` // Rolling period: daily, 12h, 6h, 1h
|
|
||||||
ChunkSize int `json:"chunk_size"` // Bytes per chunk (default 1MB)
|
|
||||||
}
|
|
||||||
```
## 9. Error Handling

### 9.1 Error Types

```go
var (
	ErrInvalidMagic     = errors.New("invalid SMSG magic")
	ErrInvalidPayload   = errors.New("invalid SMSG payload")
	ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
	ErrPasswordRequired = errors.New("password is required")
	ErrEmptyMessage     = errors.New("message cannot be empty")
	ErrStreamKeyExpired = errors.New("stream key expired (outside rolling window)")
	ErrNoValidKey       = errors.New("no valid wrapped key found for current date")
	ErrLicenseRequired  = errors.New("license is required for stream decryption")
)
```

### 9.2 Error Conditions

| Error | Cause | Recovery |
|-------|-------|----------|
| ErrInvalidMagic | File magic is not "SMSG" | Verify file format |
| ErrInvalidPayload | Corrupted payload structure | Re-download or restore |
| ErrDecryptionFailed | Wrong password or corrupted data | Try correct password |
| ErrPasswordRequired | Empty password provided | Provide password |
| ErrStreamKeyExpired | Time outside rolling window | Wait for valid period or update file |
| ErrNoValidKey | No wrapped key for current period | License/fingerprint mismatch |
| ErrLicenseRequired | Empty StreamParams.License | Provide license identifier |
## 10. Constants

```go
const Magic = "SMSG"                 // 4 ASCII bytes
const Version = "1.0"                // String version identifier
const DefaultChunkSize = 1024 * 1024 // 1 MB

const FormatV1 = ""   // Legacy JSON format
const FormatV2 = "v2" // Binary format
const FormatV3 = "v3" // Streaming with rolling keys

const KeyMethodDirect = ""                   // Password-direct (v1/v2)
const KeyMethodLTHNRolling = "lthn-rolling"  // LTHN rolling (v3)

const CompressionNone = ""
const CompressionGzip = "gzip"
const CompressionZstd = "zstd"

const CadenceDaily = "daily"
const CadenceHalfDay = "12h"
const CadenceQuarter = "6h"
const CadenceHourly = "1h"
```

## 11. API Usage

### 11.1 V1 (Legacy)

```go
msg := NewMessage("Hello").WithSubject("Test")
encrypted, _ := Encrypt(msg, "password")
decrypted, _ := Decrypt(encrypted, "password")
```

### 11.2 V2 (Binary)

```go
msg := NewMessage("Hello").AddBinaryAttachment("file.bin", data, "application/octet-stream")
manifest := NewManifest("My Content")
encrypted, _ := EncryptV2WithManifest(msg, "password", manifest)
decrypted, _ := Decrypt(encrypted, "password")
```

### 11.3 V3 (Streaming)

```go
msg := NewMessage("Stream content")
params := &StreamParams{
	License:     "user-license",
	Fingerprint: "device-fingerprint",
	Cadence:     CadenceDaily,
	ChunkSize:   1048576, // 1 MB
}
manifest := NewManifest("Stream Track")
manifest.LicenseType = "stream"
encrypted, _ := EncryptV3(msg, params, manifest)
decrypted, header, _ := DecryptV3(encrypted, params)
```

## 12. Implementation Reference

- Types: `pkg/smsg/types.go`
- Encryption: `pkg/smsg/smsg.go`
- Streaming: `pkg/smsg/stream.go`
- WASM: `pkg/wasm/stmf/main.go`
- Tests: `pkg/smsg/*_test.go`

## 13. Security Considerations

1. **Nonce uniqueness**: Enchantrix generates random 24-byte nonces automatically
2. **Key entropy**: Passwords should have 64+ bits of entropy (no key stretching)
3. **Manifest exposure**: The manifest is public; never include sensitive data
4. **Constant-time crypto**: Enchantrix uses constant-time comparison for auth tags
5. **Rolling window**: V3 keys are valid for the current and next period only

## 14. Future Work

- [ ] Key stretching (Argon2 option)
- [ ] Multi-recipient encryption
- [ ] Streaming API with ReadableStream
- [ ] Hardware key support (WebAuthn)
@ -1,326 +0,0 @@
# RFC-003: DataNode In-Memory Filesystem

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2

---

## Abstract

DataNode is an in-memory filesystem abstraction implementing Go's `fs.FS` interface. It provides the foundation for collecting, manipulating, and serializing file trees without touching disk.

## 1. Overview

DataNode serves as the core data structure for:
- Collecting files from various sources (GitHub, websites, PWAs)
- Building container filesystems (TIM rootfs)
- Serializing to/from tar archives
- Encrypting as TRIX format

## 2. Implementation

### 2.1 Core Type

```go
type DataNode struct {
	files map[string]*dataFile
}

type dataFile struct {
	name    string
	content []byte
	modTime time.Time
}
```

**Key insight**: DataNode uses a **flat key-value map**, not a nested tree structure. Paths are stored directly as keys, and directories are implicit (derived from path prefixes).

### 2.2 fs.FS Implementation

DataNode implements these interfaces:

| Interface | Method | Description |
|-----------|--------|-------------|
| `fs.FS` | `Open(name string)` | Returns an fs.File for a path |
| `fs.StatFS` | `Stat(name string)` | Returns fs.FileInfo |
| `fs.ReadDirFS` | `ReadDir(name string)` | Lists directory contents |

### 2.3 Internal Helper Types

```go
// File metadata
type dataFileInfo struct {
	name    string
	size    int64
	modTime time.Time
}

func (fi *dataFileInfo) Mode() fs.FileMode { return 0444 } // Read-only

// Directory metadata
type dirInfo struct {
	name string
}

func (di *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }

// File reader (implements fs.File)
type dataFileReader struct {
	info   *dataFileInfo
	reader *bytes.Reader
}

// Directory reader (implements fs.File)
type dirFile struct {
	info    *dirInfo
	entries []fs.DirEntry
	offset  int
}
```

## 3. Operations

### 3.1 Construction

```go
// Create an empty DataNode
node := datanode.New()

// Returns: &DataNode{files: make(map[string]*dataFile)}
```

### 3.2 Adding Files

```go
// Add a file with content
node.AddData("path/to/file.txt", []byte("content"))

// Trailing slashes are ignored (treated as a directory indicator)
node.AddData("path/to/dir/", []byte("")) // Stored as "path/to/dir"
```

**Note**: Parent directories are NOT explicitly created. They are implicit, based on path prefixes.

### 3.3 File Access

```go
// Open a file (fs.FS interface)
f, err := node.Open("path/to/file.txt")
if err != nil {
	// fs.ErrNotExist if not found
}
defer f.Close()
content, _ := io.ReadAll(f)

// Stat a file
info, err := node.Stat("path/to/file.txt")
// info.Name(), info.Size(), info.ModTime(), info.Mode()

// Read a directory
entries, err := node.ReadDir("path/to")
for _, entry := range entries {
	// entry.Name(), entry.IsDir(), entry.Type()
}
```

### 3.4 Walking

```go
err := fs.WalkDir(node, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	if !d.IsDir() {
		// Process file
	}
	return nil
})
```

## 4. Path Semantics

### 4.1 Path Handling

- **Leading slashes stripped**: `/path/file` → `path/file`
- **Trailing slashes ignored**: `path/dir/` → `path/dir`
- **Forward slashes only**: Uses `/` regardless of OS
- **Case-sensitive**: `File.txt` ≠ `file.txt`
- **Direct lookup**: Paths stored as flat keys

### 4.2 Valid Paths

```
file.txt        → stored as "file.txt"
dir/file.txt    → stored as "dir/file.txt"
/absolute/path  → stored as "absolute/path" (leading / stripped)
path/to/dir/    → stored as "path/to/dir" (trailing / stripped)
```

### 4.3 Directory Detection

Directories are **implicit**: a directory exists if any file path has it as a prefix. For example, adding `a/b/c.txt` implicitly creates directories `a` and `a/b`.

```go
// ReadDir finds directories by scanning all paths
func (dn *DataNode) ReadDir(name string) ([]fs.DirEntry, error) {
	// Scans all keys for a matching prefix
	// Returns unique immediate children
}
```
## 5. Tar Serialization

### 5.1 ToTar

```go
tarBytes, err := node.ToTar()
```

**Format**:
- All files written as `tar.TypeReg` (regular files)
- Header Mode: **0600** (fixed, not the original mode)
- No explicit directory entries
- ModTime preserved from dataFile

```go
// Serialization logic
for path, file := range dn.files {
	header := &tar.Header{
		Name:     path,
		Mode:     0600, // Fixed mode
		Size:     int64(len(file.content)),
		ModTime:  file.modTime,
		Typeflag: tar.TypeReg,
	}
	tw.WriteHeader(header)
	tw.Write(file.content)
}
```

### 5.2 FromTar

```go
node, err := datanode.FromTar(tarBytes)
```

**Parsing**:
- Only reads `tar.TypeReg` entries
- Ignores directory entries (`tar.TypeDir`)
- Stores path and content in the flat map

```go
// Deserialization logic
for {
	header, err := tr.Next()
	if err == io.EOF {
		break
	}
	if err != nil {
		return nil, err
	}
	if header.Typeflag == tar.TypeReg {
		content, _ := io.ReadAll(tr)
		dn.files[header.Name] = &dataFile{
			name:    filepath.Base(header.Name),
			content: content,
			modTime: header.ModTime,
		}
	}
}
```

### 5.3 Compressed Variants

```go
// gzip compressed
tarGz, err := node.ToTarGz()
node, err := datanode.FromTarGz(tarGzBytes)

// xz compressed
tarXz, err := node.ToTarXz()
node, err := datanode.FromTarXz(tarXzBytes)
```

## 6. File Modes

| Context | Mode | Notes |
|---------|------|-------|
| File read (fs.FS) | 0444 | Read-only for all |
| Directory (fs.FS) | 0555 | Read+execute for all |
| Tar export | 0600 | Owner read/write only |

**Note**: Original file modes are NOT preserved. All files get fixed modes.

## 7. Memory Model

- All content held in memory as `[]byte`
- No lazy loading
- No memory mapping
- Thread-safe for concurrent reads (the map is not mutated after creation)

### 7.1 Size Calculation

```go
func (dn *DataNode) Size() int64 {
	var total int64
	for _, f := range dn.files {
		total += int64(len(f.content))
	}
	return total
}
```

## 8. Integration Points

### 8.1 TIM RootFS

```go
tim := &tim.TIM{
	Config: configJSON,
	RootFS: datanode, // DataNode as the container filesystem
}
```

### 8.2 TRIX Encryption

```go
// Encrypt DataNode to TRIX
tarBytes, _ := node.ToTar()
encrypted, err := trix.Encrypt(tarBytes, password)

// Decrypt TRIX to DataNode
tarBytes, err = trix.Decrypt(encrypted, password)
node, err := datanode.FromTar(tarBytes)
```

### 8.3 Collectors

```go
// GitHub collector returns a DataNode
node, err := github.CollectRepo(url)

// Website collector returns a DataNode
node, err := website.Collect(url, depth)
```

## 9. Implementation Reference

- Source: `pkg/datanode/datanode.go`
- Tests: `pkg/datanode/datanode_test.go`

## 10. Security Considerations

1. **Path traversal**: Leading slashes stripped; no `..` handling needed (flat map)
2. **Memory exhaustion**: No built-in limits; the caller must validate input size
3. **Tar bombs**: FromTar reads all entries into memory
4. **Symlinks**: Not supported (intentional: tar.TypeReg only)

## 11. Limitations

- No symlink support
- No extended attributes
- No sparse files
- Fixed file modes (0600 on export)
- No streaming (full content in memory)

## 12. Future Work

- [ ] Streaming tar generation for large files
- [ ] Optional mode preservation
- [ ] Size limits for untrusted input
- [ ] Lazy loading for large datasets

@ -1,330 +0,0 @@
# RFC-004: Terminal Isolation Matrix (TIM)

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003

---

## Abstract

TIM (Terminal Isolation Matrix) is an OCI-compatible container bundle format. It packages a runtime configuration with a root filesystem (DataNode) for execution via runc or compatible runtimes.

## 1. Overview

TIM provides:
- OCI runtime-spec compatible bundles
- Portable container packaging
- Integration with the DataNode filesystem
- Encryption via STIM (RFC-005)

## 2. Implementation

### 2.1 Core Type

```go
// pkg/tim/tim.go:28-32
type TerminalIsolationMatrix struct {
	Config []byte             // Raw OCI runtime specification (JSON)
	RootFS *datanode.DataNode // In-memory filesystem
}
```

### 2.2 Error Variables

```go
var (
	ErrDataNodeRequired   = errors.New("datanode is required")
	ErrConfigIsNil        = errors.New("config is nil")
	ErrPasswordRequired   = errors.New("password is required for encryption")
	ErrInvalidStimPayload = errors.New("invalid stim payload")
	ErrDecryptionFailed   = errors.New("decryption failed (wrong password?)")
)
```

## 3. Public API

### 3.1 Constructors

```go
// Create an empty TIM with the default config
func New() (*TerminalIsolationMatrix, error)

// Wrap an existing DataNode into a TIM
func FromDataNode(dn *DataNode) (*TerminalIsolationMatrix, error)

// Deserialize from a tar archive
func FromTar(data []byte) (*TerminalIsolationMatrix, error)
```

### 3.2 Serialization

```go
// Serialize to a tar archive
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error)

// Encrypt to STIM format (ChaCha20-Poly1305)
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error)
```

### 3.3 Decryption

```go
// Decrypt from STIM format
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error)
```

### 3.4 Execution

```go
// Run a plain .tim file with runc
func Run(timPath string) error

// Decrypt and run a .stim file
func RunEncrypted(stimPath, password string) error
```

## 4. Tar Archive Structure

### 4.1 Layout

```
config.json        (root level, mode 0600)
rootfs/            (directory, mode 0755)
rootfs/bin/app     (files within rootfs/)
rootfs/etc/config
...
```

### 4.2 Serialization (ToTar)

```go
// pkg/tim/tim.go:111-195
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error) {
	// 1. Write config.json header (size = len(m.Config), mode 0600)
	// 2. Write config.json content
	// 3. Write rootfs/ directory entry (TypeDir, mode 0755)
	// 4. Walk m.RootFS depth-first
	// 5. For each file: tar entry named "rootfs/" + path, mode 0600
}
```

### 4.3 Deserialization (FromTar)

```go
func FromTar(data []byte) (*TerminalIsolationMatrix, error) {
	// 1. Parse tar entries
	// 2. "config.json" → stored as raw bytes in Config
	// 3. "rootfs/*" prefix → stripped and added to the DataNode
	// 4. Error if config.json is missing (ErrConfigIsNil)
}
```

## 5. OCI Config

### 5.1 Default Config

The `New()` function creates a TIM with a default config from `pkg/tim/config.go`:

```go
func defaultConfig() (*trix.Trix, error) {
	return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```

**Note**: The default config is minimal. Applications should populate the Config field with a proper OCI runtime spec.

### 5.2 OCI Runtime Spec Example

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": {"uid": 0, "gid": 0},
    "args": ["/bin/app"],
    "env": ["PATH=/usr/bin:/bin"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "mounts": [],
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "network"},
      {"type": "mount"}
    ]
  }
}
```

## 6. Execution Flow

### 6.1 Plain TIM (Run)

```go
// pkg/tim/run.go:18-74
func Run(timPath string) error {
	// 1. Create a temporary directory (borg-run-*)
	// 2. Extract the tar entry-by-entry
	//    - Security: path traversal check (prevents ../)
	//    - Validates: target = Clean(target) within tempDir
	// 3. Create directories as needed (0755)
	// 4. Write files with 0600 permissions
	// 5. Execute: runc run -b <tempDir> borg-container
	// 6. Stream stdout/stderr directly
	// 7. Return the exit code
}
```

### 6.2 Encrypted TIM (RunEncrypted)

```go
// pkg/tim/run.go:79-134
func RunEncrypted(stimPath, password string) error {
	// 1. Read the encrypted .stim file
	// 2. Decrypt using FromSigil() with the password
	// 3. Create a temporary directory (borg-run-*)
	// 4. Write config.json to tempDir
	// 5. Create the rootfs/ subdirectory
	// 6. Walk the DataNode and extract all files to rootfs/
	//    - Uses CopyFile() with 0600 permissions
	// 7. Execute: runc run -b <tempDir> borg-container
	// 8. Stream stdout/stderr
	// 9. Clean up the temp directory (defer os.RemoveAll)
	// 10. Return the exit code
}
```

### 6.3 Security Controls

| Control | Implementation |
|---------|----------------|
| Path traversal | `filepath.Clean()` + prefix validation |
| Temp cleanup | `defer os.RemoveAll(tempDir)` |
| File permissions | Hardcoded 0600 (files), 0755 (dirs) |
| Test injection | `ExecCommand` variable for mocking runc |
## 7. Cache API

### 7.1 Cache Structure

```go
// pkg/tim/cache.go
type Cache struct {
	Dir      string // Directory path for storage
	Password string // Shared password for all TIMs
}
```

### 7.2 Cache Operations

```go
// Create a cache with a master password
func NewCache(dir, password string) (*Cache, error)

// Store a TIM (encrypted automatically as .stim)
func (c *Cache) Store(name string, m *TerminalIsolationMatrix) error

// Load a TIM (decrypted automatically)
func (c *Cache) Load(name string) (*TerminalIsolationMatrix, error)

// Delete a cached TIM
func (c *Cache) Delete(name string) error

// Check whether a TIM exists
func (c *Cache) Exists(name string) bool

// List all cached TIM names
func (c *Cache) List() ([]string, error)

// Load and execute a cached TIM
func (c *Cache) Run(name string) error

// Get the file size of a cached .stim
func (c *Cache) Size(name string) (int64, error)
```

### 7.3 Cache Directory Structure

```
cache/
├── mycontainer.stim   (encrypted)
├── another.stim       (encrypted)
└── ...
```

- All TIMs stored as `.stim` files (encrypted)
- A single password protects the entire cache
- Directory created with 0700 permissions
- Files stored with 0600 permissions
## 8. CLI Usage

```bash
# Compile a Borgfile to TIM
borg compile -f Borgfile -o container.tim

# Compile with encryption
borg compile -f Borgfile -e "password" -o container.stim

# Run a plain TIM
borg run container.tim

# Run an encrypted TIM
borg run container.stim -p "password"

# Decode (extract) to tar
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar

# Inspect metadata without decrypting
borg inspect container.stim
```

## 9. Implementation Reference

- TIM core: `pkg/tim/tim.go`
- Execution: `pkg/tim/run.go`
- Cache: `pkg/tim/cache.go`
- Config: `pkg/tim/config.go`
- Tests: `pkg/tim/tim_test.go`, `pkg/tim/run_test.go`, `pkg/tim/cache_test.go`

## 10. Security Considerations

1. **Path traversal prevention**: `filepath.Clean()` + prefix validation
2. **Permission hardcoding**: 0600 files, 0755 directories
3. **Secure cleanup**: `defer os.RemoveAll()` on temp directories
4. **Command injection prevention**: `ExecCommand` variable (no shell)
5. **Config validation**: Validate the OCI spec before execution

## 11. OCI Compatibility

TIM bundles are compatible with:
- runc
- crun
- youki
- Any OCI runtime-spec 1.0.2 compliant runtime

## 12. Test Coverage

| Area | Tests |
|------|-------|
| TIM creation | DataNode wrapping, default config |
| Serialization | Tar round-trips, large files (1 MB+) |
| Encryption | ToSigil/FromSigil, wrong-password detection |
| Caching | Store/Load/Delete, List, Size |
| Execution | ZIP-slip prevention, temp cleanup |
| Error handling | Nil DataNode, nil config, invalid tar |

## 13. Future Work

- [ ] Image layer support
- [ ] Registry push/pull
- [ ] Multi-platform bundles
- [ ] Signature verification
- [ ] Full OCI config generation

@ -1,303 +0,0 @@
# RFC-005: STIM Encrypted Container Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004

---

## Abstract

STIM (Secure TIM) is an encrypted container format that wraps TIM bundles using ChaCha20-Poly1305 authenticated encryption. It enables secure distribution and execution of containers without exposing the contents.

## 1. Overview

STIM provides:
- Encrypted TIM containers
- ChaCha20-Poly1305 authenticated encryption
- Separate encryption of config and rootfs
- Direct execution without persistent decryption

## 2. Format Name

**ChaChaPolySigil** - the internal name for the STIM format, using:
- The ChaCha20-Poly1305 algorithm (via the Enchantrix library)
- A Trix container wrapper with the "STIM" magic

## 3. File Structure

### 3.1 Container Format

STIM uses the **Trix container format** from the Enchantrix library:

```
┌─────────────────────────────────────────┐
│ Magic: "STIM" (4 bytes ASCII)           │
├─────────────────────────────────────────┤
│ Trix Header (Gob-encoded JSON)          │
│ - encryption_algorithm:                 │
│     "chacha20poly1305"                  │
│ - tim: true                             │
│ - config_size: uint32                   │
│ - rootfs_size: uint32                   │
│ - version: "1.0"                        │
├─────────────────────────────────────────┤
│ Trix Payload:                           │
│   [config_size: 4 bytes BE uint32]      │
│   [encrypted config]                    │
│   [encrypted rootfs tar]                │
└─────────────────────────────────────────┘
```

### 3.2 Payload Structure

```
Offset  Size  Field
------  ----  ------------------------------------------
0       4     Config size (big-endian uint32)
4       N     Encrypted config (includes nonce + tag)
4+N     M     Encrypted rootfs tar (includes nonce + tag)
```

### 3.3 Encrypted Component Format

Each encrypted component (config and rootfs) follows the Enchantrix format:

```
[24-byte XChaCha20 nonce][ciphertext][16-byte Poly1305 tag]
```

**Critical**: Nonces are **embedded in the ciphertext**, not transmitted separately.
## 4. Encryption

### 4.1 Algorithm

XChaCha20-Poly1305 (extended nonce variant)

| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (embedded) |
| Tag size | 16 bytes |

### 4.2 Key Derivation

```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
	hash := sha256.Sum256([]byte(password))
	return hash[:] // 32 bytes
}
```
|
|
||||||
|
|
||||||
### 4.3 Dual Encryption
|
|
||||||
|
|
||||||
Config and RootFS are encrypted **separately** with independent nonces:
|
|
||||||
|
|
||||||
```go
|
|
||||||
// pkg/tim/tim.go:217-232
|
|
||||||
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error) {
|
|
||||||
// 1. Derive key
|
|
||||||
key := trix.DeriveKey(password)
|
|
||||||
|
|
||||||
// 2. Create sigil
|
|
||||||
sigil, _ := enchantrix.NewChaChaPolySigil(key)
|
|
||||||
|
|
||||||
// 3. Encrypt config (generates fresh nonce automatically)
|
|
||||||
encConfig, _ := sigil.In(m.Config)
|
|
||||||
|
|
||||||
// 4. Serialize rootfs to tar
|
|
||||||
rootfsTar, _ := m.RootFS.ToTar()
|
|
||||||
|
|
||||||
// 5. Encrypt rootfs (generates different fresh nonce)
|
|
||||||
encRootFS, _ := sigil.In(rootfsTar)
|
|
||||||
|
|
||||||
// 6. Build payload
|
|
||||||
payload := make([]byte, 4+len(encConfig)+len(encRootFS))
|
|
||||||
binary.BigEndian.PutUint32(payload[:4], uint32(len(encConfig)))
|
|
||||||
copy(payload[4:4+len(encConfig)], encConfig)
|
|
||||||
copy(payload[4+len(encConfig):], encRootFS)
|
|
||||||
|
|
||||||
// 7. Create Trix container with STIM magic
|
|
||||||
// ...
|
|
||||||
}
|
|
||||||
```
|
|
||||||
|
|
||||||
**Rationale for dual encryption:**
|
|
||||||
- Config can be decrypted separately for inspection
|
|
||||||
- Allows streaming decryption of large rootfs
|
|
||||||
- Independent nonces prevent any nonce reuse
|
|
||||||
|
|
||||||
## 5. Decryption Flow

```go
// pkg/tim/tim.go:255-308
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error) {
    // 1. Decode Trix container with magic "STIM"
    t, _ := trix.Decode(data, "STIM", nil)

    // 2. Derive key from password
    key := trix.DeriveKey(password)

    // 3. Create sigil
    sigil, _ := enchantrix.NewChaChaPolySigil(key)

    // 4. Parse payload: extract configSize from first 4 bytes
    configSize := binary.BigEndian.Uint32(t.Payload[:4])

    // 5. Validate bounds
    if int(configSize) > len(t.Payload)-4 {
        return nil, ErrInvalidStimPayload
    }

    // 6. Extract encrypted components
    encConfig := t.Payload[4 : 4+configSize]
    encRootFS := t.Payload[4+configSize:]

    // 7. Decrypt config (nonce auto-extracted by Enchantrix)
    config, err := sigil.Out(encConfig)
    if err != nil {
        return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
    }

    // 8. Decrypt rootfs
    rootfsTar, err := sigil.Out(encRootFS)
    if err != nil {
        return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
    }

    // 9. Reconstruct DataNode from tar
    rootfs, _ := datanode.FromTar(rootfsTar)

    return &TerminalIsolationMatrix{Config: config, RootFS: rootfs}, nil
}
```
## 6. Trix Header

```go
Header: map[string]interface{}{
    "encryption_algorithm": "chacha20poly1305",
    "tim":                  true,
    "config_size":          len(encConfig),
    "rootfs_size":          len(encRootFS),
    "version":              "1.0",
}
```
## 7. CLI Usage

```bash
# Create encrypted container
borg compile -f Borgfile -e "password" -o container.stim

# Run encrypted container
borg run container.stim -p "password"

# Decode (extract) encrypted container
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar

# Inspect without decrypting (shows header metadata only)
borg inspect container.stim
# Output:
# Format: STIM
# encryption_algorithm: chacha20poly1305
# config_size: 1234
# rootfs_size: 567890
```
## 8. Cache API

```go
// Create cache with master password
cache, err := tim.NewCache("/path/to/cache", masterPassword)

// Store TIM (encrypted automatically as .stim)
err := cache.Store("name", tim)

// Load TIM (decrypted automatically)
tim, err := cache.Load("name")

// List cached containers
names, err := cache.List()
```
## 9. Execution Security

```go
// Secure execution flow
func RunEncrypted(path, password string) error {
    // 1. Create secure temp directory
    tmpDir, _ := os.MkdirTemp("", "borg-run-*")
    defer os.RemoveAll(tmpDir) // Secure cleanup

    // 2. Read and decrypt
    data, _ := os.ReadFile(path)
    tim, _ := FromSigil(data, password)

    // 3. Extract to temp
    tim.ExtractTo(tmpDir)

    // 4. Execute with runc
    return runRunc(tmpDir)
}
```
## 10. Security Properties

### 10.1 Confidentiality

- Contents encrypted with ChaCha20-Poly1305
- Password-derived key never stored
- Nonces are random, never reused

### 10.2 Integrity

- Poly1305 MAC prevents tampering
- Decryption fails if modified
- Separate MACs for config and rootfs

### 10.3 Error Detection

| Error | Cause |
|-------|-------|
| `ErrPasswordRequired` | Empty password provided |
| `ErrInvalidStimPayload` | Payload < 4 bytes or invalid size |
| `ErrDecryptionFailed` | Wrong password or corrupted data |
## 11. Comparison to TRIX

| Feature | STIM | TRIX |
|---------|------|------|
| Algorithm | ChaCha20-Poly1305 | PGP/AES or ChaCha |
| Content | TIM bundles | DataNode (raw files) |
| Structure | Dual encryption | Single blob |
| Magic | "STIM" | "TRIX" |
| Use case | Container execution | General encryption, accounts |

STIM is for containers. TRIX is for general file encryption and accounts.
## 12. Implementation Reference

- Encryption: `pkg/tim/tim.go` (ToSigil, FromSigil)
- Key derivation: `pkg/trix/trix.go` (DeriveKey)
- Cache: `pkg/tim/cache.go`
- CLI: `cmd/run.go`, `cmd/decode.go`, `cmd/compile.go`
- Enchantrix: `github.com/Snider/Enchantrix`
## 13. Security Considerations

1. **Password strength**: Recommend 64+ bits of entropy (12+ characters)
2. **Key derivation**: SHA-256 only (no key stretching), so strong passwords are essential
3. **Memory handling**: Keys should be wiped after use
4. **Temp files**: Use tmpfs when available and securely wipe afterwards
5. **Side channels**: Enchantrix uses constant-time crypto operations
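Item 2 is the weakest link: a single unsalted SHA-256 is cheap to brute-force. A memory-hard KDF such as Argon2id (listed under Future Work) is the proper fix, but it requires the `golang.org/x/crypto/argon2` dependency. As a standard-library-only illustration of what key stretching buys, here is a salted, iterated SHA-256 derivation; `stretchKey` is a hypothetical helper for this sketch, not part of borg or Enchantrix:

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// stretchKey derives a 32-byte key from password and salt by iterating
// SHA-256. The attacker's per-guess cost grows roughly linearly in iters;
// the salt defeats precomputed tables shared across files.
func stretchKey(password string, salt []byte, iters int) []byte {
    h := sha256.Sum256(append(salt, []byte(password)...))
    for i := 1; i < iters; i++ {
        h = sha256.Sum256(h[:])
    }
    return h[:]
}

func main() {
    key := stretchKey("correct horse battery staple", []byte("per-file-salt"), 100000)
    fmt.Println(len(key)) // 32
}
```

Unlike Argon2, iterated hashing is not memory-hard, so GPUs still parallelize it well; it only raises the constant factor.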
## 14. Future Work

- [ ] Hardware key support (YubiKey, TPM)
- [ ] Key stretching (Argon2)
- [ ] Multi-recipient encryption
- [ ] Streaming decryption for large rootfs

---
# RFC-006: TRIX PGP Encryption Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003

---
## Abstract

TRIX is a PGP-based encryption format for DataNode archives and account credentials. It provides symmetric and asymmetric encryption using OpenPGP standards and ChaCha20-Poly1305, enabling secure data exchange and identity management.
## 1. Overview

TRIX provides:
- PGP symmetric encryption for DataNode archives
- ChaCha20-Poly1305 modern encryption
- PGP armored keys for account/identity management
- Integration with the Enchantrix library
## 2. Public API

### 2.1 Key Derivation

```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
    hash := sha256.Sum256([]byte(password))
    return hash[:] // 32 bytes
}
```

- Input: password string (any length)
- Output: 32-byte key (256 bits)
- Algorithm: SHA-256 hash of UTF-8 bytes
- Deterministic: identical passwords → identical keys
### 2.2 Legacy PGP Encryption

```go
// Encrypt DataNode to TRIX (PGP symmetric)
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error)

// Decrypt TRIX to DataNode (DISABLED for encrypted payloads)
func FromTrix(data []byte, password string) (*datanode.DataNode, error)
```

**Note**: `FromTrix` with a non-empty password returns the error `"decryption disabled: cannot accept encrypted payloads"`. This is intentional, to prevent accidental password use.
### 2.3 Modern ChaCha20-Poly1305 Encryption

```go
// Encrypt with ChaCha20-Poly1305
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error)

// Decrypt ChaCha20-Poly1305
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error)
```
### 2.4 Error Variables

```go
var (
    ErrPasswordRequired = errors.New("password is required for encryption")
    ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
)
```
## 3. File Format

### 3.1 Container Structure

```
[4 bytes]  Magic: "TRIX" (ASCII)
[Variable] Gob-encoded Header (map[string]interface{})
[Variable] Payload (encrypted or unencrypted tarball)
```
### 3.2 Header Examples

**Unencrypted:**
```go
Header: map[string]interface{}{} // Empty map
```

**ChaCha20-Poly1305:**
```go
Header: map[string]interface{}{
    "encryption_algorithm": "chacha20poly1305",
}
```
### 3.3 ChaCha20-Poly1305 Payload

```
[24 bytes] XChaCha20 Nonce (embedded)
[N bytes]  Encrypted tar archive
[16 bytes] Poly1305 authentication tag
```

**Note**: Nonces are embedded in the ciphertext by Enchantrix, not stored separately.
## 4. Encryption Workflows

### 4.1 ChaCha20-Poly1305 (Recommended)

```go
// Encryption
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error) {
    // 1. Validate password is non-empty
    if password == "" {
        return nil, ErrPasswordRequired
    }

    // 2. Serialize DataNode to tar
    tarball, _ := dn.ToTar()

    // 3. Derive 32-byte key
    key := DeriveKey(password)

    // 4. Create sigil and encrypt
    sigil, _ := enchantrix.NewChaChaPolySigil(key)
    encrypted, _ := sigil.In(tarball) // Generates nonce automatically

    // 5. Create Trix container
    t := &trix.Trix{
        Header:  map[string]interface{}{"encryption_algorithm": "chacha20poly1305"},
        Payload: encrypted,
    }

    // 6. Encode with TRIX magic
    return trix.Encode(t, "TRIX", nil)
}
```
### 4.2 Decryption

```go
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error) {
    // 1. Validate password
    if password == "" {
        return nil, ErrPasswordRequired
    }

    // 2. Decode TRIX container
    t, _ := trix.Decode(data, "TRIX", nil)

    // 3. Derive key and decrypt
    key := DeriveKey(password)
    sigil, _ := enchantrix.NewChaChaPolySigil(key)
    tarball, err := sigil.Out(t.Payload) // Extracts nonce, verifies MAC
    if err != nil {
        return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
    }

    // 4. Deserialize DataNode
    return datanode.FromTar(tarball)
}
```
### 4.3 Legacy PGP (Disabled Decryption)

```go
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error) {
    tarball, _ := dn.ToTar()

    var payload []byte
    if password != "" {
        // PGP symmetric encryption
        cryptService := crypt.NewService()
        payload, _ = cryptService.SymmetricallyEncryptPGP([]byte(password), tarball)
    } else {
        payload = tarball
    }

    t := &trix.Trix{Header: map[string]interface{}{}, Payload: payload}
    return trix.Encode(t, "TRIX", nil)
}

func FromTrix(data []byte, password string) (*datanode.DataNode, error) {
    // Security: Reject encrypted payloads
    if password != "" {
        return nil, errors.New("decryption disabled: cannot accept encrypted payloads")
    }

    t, _ := trix.Decode(data, "TRIX", nil)
    return datanode.FromTar(t.Payload)
}
```
## 5. Enchantrix Library

### 5.1 Dependencies

```go
import (
    "github.com/Snider/Enchantrix/pkg/trix"       // Container format
    "github.com/Snider/Enchantrix/pkg/crypt"      // PGP operations
    "github.com/Snider/Enchantrix/pkg/enchantrix" // AEAD sigils
)
```
### 5.2 Trix Container

```go
type Trix struct {
    Header  map[string]interface{}
    Payload []byte
}

func Encode(t *Trix, magic string, extra interface{}) ([]byte, error)
func Decode(data []byte, magic string, extra interface{}) (*Trix, error)
```
### 5.3 ChaCha20-Poly1305 Sigil

```go
// Create sigil with 32-byte key
sigil, err := enchantrix.NewChaChaPolySigil(key)

// Encrypt (generates random 24-byte nonce)
ciphertext, err := sigil.In(plaintext)

// Decrypt (extracts nonce, verifies MAC)
plaintext, err := sigil.Out(ciphertext)
```
## 6. Account System Integration

### 6.1 PGP Armored Keys

```
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQENBGX...base64...
-----END PGP PUBLIC KEY BLOCK-----
```

### 6.2 Key Storage

```
~/.borg/
├── identity.pub   # PGP public key (armored)
├── identity.key   # PGP private key (armored, encrypted)
└── keyring/       # Trusted public keys
```
## 7. CLI Usage

```bash
# Encrypt with TRIX (PGP symmetric)
borg collect github repo https://github.com/user/repo \
    --format trix \
    --password "password"

# Decrypt unencrypted TRIX
borg decode archive.trix -o decoded.tar

# Inspect without decrypting
borg inspect archive.trix
# Output:
# Format: TRIX
# encryption_algorithm: chacha20poly1305 (if present)
# Payload Size: N bytes
```
## 8. Format Comparison

| Format | Extension | Algorithm | Use Case |
|--------|-----------|-----------|----------|
| `datanode` | `.tar` | None | Uncompressed archive |
| `tim` | `.tim` | None | Container bundle |
| `trix` | `.trix` | PGP/AES or ChaCha | Encrypted archives, accounts |
| `stim` | `.stim` | ChaCha20-Poly1305 | Encrypted containers |
| `smsg` | `.smsg` | ChaCha20-Poly1305 | Encrypted media |
## 9. Security Analysis

### 9.1 Key Derivation Limitations

**Current implementation: SHA-256 (single round)**

| Metric | Value |
|--------|-------|
| Algorithm | SHA-256 |
| Iterations | 1 |
| Salt | None |
| Key stretching | None |

**Implications:**
- GPU brute force: ~10 billion guesses/second
- 8-character password: ~10 seconds to break
- Recommendation: Use 15+ character passwords
### 9.2 ChaCha20-Poly1305 Properties

| Property | Status |
|----------|--------|
| Authentication | Poly1305 MAC (16 bytes) |
| Key size | 256 bits |
| Nonce size | 192 bits (XChaCha) |
| Standard | RFC 8439 ChaCha20-Poly1305; XChaCha20 extended nonce per draft-irtf-cfrg-xchacha |
## 10. Test Coverage

| Test | Description |
|------|-------------|
| DeriveKey length | Output is exactly 32 bytes |
| DeriveKey determinism | Same password → same key |
| DeriveKey uniqueness | Different passwords → different keys |
| ToTrix without password | Valid TRIX with "TRIX" magic |
| ToTrix with password | PGP encryption applied |
| FromTrix unencrypted | Round-trip preserves files |
| FromTrix password rejection | Returns error |
| ToTrixChaCha success | Valid TRIX created |
| ToTrixChaCha empty password | Returns ErrPasswordRequired |
| FromTrixChaCha round-trip | Preserves nested directories |
| FromTrixChaCha wrong password | Returns ErrDecryptionFailed |
| FromTrixChaCha large data | 1MB file processed |
## 11. Implementation Reference

- Source: `pkg/trix/trix.go`
- Tests: `pkg/trix/trix_test.go`
- Enchantrix: `github.com/Snider/Enchantrix v0.0.2`
## 12. Security Considerations

1. **Use strong passwords**: 15+ characters, since there is no key stretching
2. **Prefer ChaCha**: Use `ToTrixChaCha` over legacy PGP
3. **Key backup**: Securely back up private keys
4. **Interoperability**: PGP-encrypted TRIX payloads require the password when handled with GPG
## 13. Future Work

- [ ] Key stretching (Argon2 option in DeriveKey)
- [ ] Public key encryption support
- [ ] Signature support
- [ ] Key expiration metadata
- [ ] Multi-recipient encryption

---
# RFC-007: LTHN Key Derivation

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-002

---
## Abstract

LTHN (Leet-Hash-Nonce) is a rainbow-table-resistant key derivation function used for streaming DRM with time-limited access. It generates rolling keys that automatically expire without requiring revocation infrastructure.
## 1. Overview

LTHN provides:
- Rainbow-table resistant hashing
- Time-based key rolling
- Zero-trust key derivation (no key server)
- Configurable cadence (daily to hourly)
## 2. Motivation

Traditional DRM requires:
- Central key server
- License validation
- Revocation lists
- Network connectivity

LTHN eliminates these by:
- Deriving keys from public information + secret
- Time-bounding keys automatically
- Making rainbow tables impractical
- Working completely offline
## 3. Algorithm

### 3.1 Core Function

The LTHN hash is implemented in the Enchantrix library:

```go
import "github.com/Snider/Enchantrix/pkg/crypt"

cryptService := crypt.NewService()
lthnHash := cryptService.Hash(crypt.LTHN, input)
```

**LTHN formula**:
```
LTHN(input) = SHA256(input || reverse_leet(input))
```

Where `reverse_leet` performs bidirectional character substitution.
### 3.2 Reverse Leet Mapping

| Original | Leet | Bidirectional |
|----------|------|---------------|
| o | 0 | o ↔ 0 |
| l | 1 | l ↔ 1 |
| e | 3 | e ↔ 3 |
| a | 4 | a ↔ 4 |
| s | z | s ↔ z |
| t | 7 | t ↔ 7 |
### 3.3 Example

```
Input:        "2026-01-13:license:fp"
reverse_leet: "pf:3zn3ci1:31-10-6202"
Combined:     "2026-01-13:license:fppf:3zn3ci1:31-10-6202"
Result:       SHA256(combined) → 32-byte hash
```
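The worked example above can be reproduced with a short sketch. Note the example applies the letter→leet direction of the table only (digits in the date stay digits), so that is what this sketch implements; the authoritative implementation lives in Enchantrix and may apply the bidirectional mapping differently:

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// leet holds the letter→substitution direction of the RFC's mapping table.
var leet = map[rune]rune{'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7'}

// reverseLeet reverses the input and substitutes mapped letters,
// matching the worked example in §3.3.
func reverseLeet(s string) string {
    runes := []rune(s)
    out := make([]rune, len(runes))
    for i, r := range runes {
        if sub, ok := leet[r]; ok {
            r = sub
        }
        out[len(runes)-1-i] = r
    }
    return string(out)
}

// lthn is the formula from §3.1: SHA256(input || reverse_leet(input)).
func lthn(input string) [32]byte {
    return sha256.Sum256([]byte(input + reverseLeet(input)))
}

func main() {
    fmt.Println(reverseLeet("2026-01-13:license:fp")) // pf:3zn3ci1:31-10-6202
    fmt.Printf("%x\n", lthn("2026-01-13:license:fp"))
}
```

Because the appended "salt" is a deterministic function of the input, any precomputed table must cover the combined string, which explodes the table's input space.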
## 4. Stream Key Derivation

### 4.1 Implementation

```go
// pkg/smsg/stream.go:49-60
func DeriveStreamKey(date, license, fingerprint string) []byte {
    input := fmt.Sprintf("%s:%s:%s", date, license, fingerprint)
    cryptService := crypt.NewService()
    lthnHash := cryptService.Hash(crypt.LTHN, input)
    key := sha256.Sum256([]byte(lthnHash))
    return key[:]
}
```
### 4.2 Input Format

```
period:license:fingerprint

Where:
- period:      Time period identifier (see Cadence)
- license:     User's license key (password)
- fingerprint: Device/browser fingerprint
```

### 4.3 Output

A 32-byte key suitable for ChaCha20-Poly1305.
## 5. Cadence

### 5.1 Options

| Cadence | Constant | Period Format | Example | Duration |
|---------|----------|---------------|---------|----------|
| Daily | `CadenceDaily` | `2006-01-02` | `2026-01-13` | 24h |
| 12-hour | `CadenceHalfDay` | `2006-01-02-AM/PM` | `2026-01-13-PM` | 12h |
| 6-hour | `CadenceQuarter` | `2006-01-02-HH` | `2026-01-13-12` | 6h |
| Hourly | `CadenceHourly` | `2006-01-02-HH` | `2026-01-13-15` | 1h |
### 5.2 Period Calculation

```go
// pkg/smsg/stream.go:73-119
func GetCurrentPeriod(cadence Cadence) string {
    return GetPeriodAt(time.Now(), cadence)
}

func GetPeriodAt(t time.Time, cadence Cadence) string {
    switch cadence {
    case CadenceDaily:
        return t.Format("2006-01-02")
    case CadenceHalfDay:
        suffix := "AM"
        if t.Hour() >= 12 {
            suffix = "PM"
        }
        return t.Format("2006-01-02") + "-" + suffix
    case CadenceQuarter:
        bucket := (t.Hour() / 6) * 6
        return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), bucket)
    case CadenceHourly:
        return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), t.Hour())
    }
    return t.Format("2006-01-02")
}

func GetNextPeriod(cadence Cadence) string {
    return GetPeriodAt(time.Now().Add(GetCadenceDuration(cadence)), cadence)
}
```
### 5.3 Duration Mapping

```go
func GetCadenceDuration(cadence Cadence) time.Duration {
    switch cadence {
    case CadenceDaily:
        return 24 * time.Hour
    case CadenceHalfDay:
        return 12 * time.Hour
    case CadenceQuarter:
        return 6 * time.Hour
    case CadenceHourly:
        return 1 * time.Hour
    }
    return 24 * time.Hour
}
```
## 6. Rolling Windows

### 6.1 Dual-Key Strategy

At encryption time, the CEK is wrapped with **two** keys:
1. Current period key
2. Next period key

This creates a rolling validity window:

```
Time: 2026-01-13 23:30 (daily cadence)

Valid keys:
- "2026-01-13:license:fp" (current period)
- "2026-01-14:license:fp" (next period)

Window: 24-48 hours of validity
```
### 6.2 Key Wrapping

```go
// pkg/smsg/stream.go:135-155
func WrapCEK(cek []byte, streamKey []byte) (string, error) {
    sigil := enchantrix.NewChaChaPolySigil()
    wrapped, err := sigil.Seal(cek, streamKey)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(wrapped), nil
}
```

**Wrapped format**:
```
[24-byte nonce][encrypted CEK][16-byte auth tag]
→ base64 encoded for header storage
```
### 6.3 Key Unwrapping

```go
// pkg/smsg/stream.go:157-170
func UnwrapCEK(wrapped string, streamKey []byte) ([]byte, error) {
    data, err := base64.StdEncoding.DecodeString(wrapped)
    if err != nil {
        return nil, err
    }
    sigil := enchantrix.NewChaChaPolySigil()
    return sigil.Open(data, streamKey)
}
```
### 6.4 Decryption Flow

```go
// pkg/smsg/stream.go:606-633
func UnwrapCEKFromHeader(header *V3Header, params *StreamParams) ([]byte, error) {
    // Try current period first
    currentPeriod := GetCurrentPeriod(params.Cadence)
    currentKey := DeriveStreamKey(currentPeriod, params.License, params.Fingerprint)

    for _, wk := range header.WrappedKeys {
        cek, err := UnwrapCEK(wk.Key, currentKey)
        if err == nil {
            return cek, nil
        }
    }

    // Try next period (for clock skew)
    nextPeriod := GetNextPeriod(params.Cadence)
    nextKey := DeriveStreamKey(nextPeriod, params.License, params.Fingerprint)

    for _, wk := range header.WrappedKeys {
        cek, err := UnwrapCEK(wk.Key, nextKey)
        if err == nil {
            return cek, nil
        }
    }

    return nil, ErrKeyExpired
}
```
## 7. V3 Header Format

```go
type V3Header struct {
    Format      string       `json:"format"` // "v3"
    Manifest    *Manifest    `json:"manifest"`
    WrappedKeys []WrappedKey `json:"wrappedKeys"`
    Chunked     *ChunkInfo   `json:"chunked,omitempty"`
}

type WrappedKey struct {
    Period string `json:"period"` // e.g., "2026-01-13"
    Key    string `json:"key"`    // base64-encoded wrapped CEK
}
```
## 8. Rainbow Table Resistance

### 8.1 Why It Works

Standard hash:
```
SHA256("2026-01-13:license:fp") → predictable, precomputable
```

LTHN hash:
```
LTHN("2026-01-13:license:fp")
= SHA256("2026-01-13:license:fp" + reverse_leet("2026-01-13:license:fp"))
= SHA256("2026-01-13:license:fp" + "pf:3zn3ci1:31-10-6202")
```

The salt is **derived from the input itself**, making precomputation impractical:
- Each unique input has a unique salt
- Rainbow tables cannot be built without knowing all possible inputs
- The input space includes license keys (high entropy)
### 8.2 Security Analysis

| Attack | Mitigation |
|--------|------------|
| Rainbow tables | Input-derived salt makes precomputation infeasible |
| Brute force | License key entropy (64+ bits recommended) |
| Time oracle | Rolling window prevents precise timing attacks |
| Key sharing | Keys expire within the cadence window |
## 9. Zero-Trust Properties

| Property | Implementation |
|----------|----------------|
| No key server | Keys derived locally from LTHN |
| Auto-expiration | Rolling periods invalidate old keys |
| No revocation | Keys naturally expire within the cadence window |
| Device binding | Fingerprint in derivation input |
| User binding | License key in derivation input |
## 10. Test Vectors

From `pkg/smsg/stream_test.go`:

```go
// Stream key generation
date := "2026-01-12"
license := "test-license"
fingerprint := "test-fp"
key := DeriveStreamKey(date, license, fingerprint)
// key is 32 bytes, deterministic

// Period calculation at 2026-01-12 15:30:00 UTC
t := time.Date(2026, 1, 12, 15, 30, 0, 0, time.UTC)

GetPeriodAt(t, CadenceDaily)   // "2026-01-12"
GetPeriodAt(t, CadenceHalfDay) // "2026-01-12-PM"
GetPeriodAt(t, CadenceQuarter) // "2026-01-12-12"
GetPeriodAt(t, CadenceHourly)  // "2026-01-12-15"

// Next periods
// Daily: "2026-01-12"    → "2026-01-13"
// 12h:   "2026-01-12-PM" → "2026-01-13-AM"
// 6h:    "2026-01-12-12" → "2026-01-12-18"
// 1h:    "2026-01-12-15" → "2026-01-12-16"
```
## 11. Implementation Reference

- Stream key derivation: `pkg/smsg/stream.go`
- LTHN hash: `github.com/Snider/Enchantrix/pkg/crypt`
- WASM bindings: `pkg/wasm/stmf/main.go` (decryptV3, unwrapCEK)
- Tests: `pkg/smsg/stream_test.go`
## 12. Security Considerations

1. **License entropy**: Recommend 64+ bits (12+ alphanumeric characters)
2. **Fingerprint stability**: Should be stable but not user-controllable
3. **Clock skew**: Rolling windows handle ±1 period of drift
4. **Key exposure**: Derived keys are valid for only one period
## 13. References

- RFC-002: SMSG Format (v3 streaming)
- RFC-001: OSS DRM (Section 3.4)
- RFC 8439: ChaCha20-Poly1305
- Enchantrix: github.com/Snider/Enchantrix

---
# RFC-008: Borgfile Compilation

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004

---
## Abstract

Borgfile is a declarative syntax for defining TIM container contents. It specifies how local files are mapped into the container filesystem, enabling reproducible container builds.
## 1. Overview

Borgfile provides:
- Dockerfile-like syntax for familiarity
- File mapping into containers
- A simple ADD directive
- Integration with TIM encryption
## 2. File Format

### 2.1 Location

- Default: `Borgfile` in the current directory
- Override: `borg compile -f path/to/Borgfile`

### 2.2 Encoding

- UTF-8 text
- Unix line endings (LF)
- No BOM
## 3. Syntax

### 3.1 Parsing Implementation

```go
// cmd/compile.go:33-54
lines := strings.Split(content, "\n")
for _, line := range lines {
    parts := strings.Fields(line) // Whitespace-separated tokens
    if len(parts) == 0 {
        continue // Skip empty lines
    }
    switch parts[0] {
    case "ADD":
        // Process ADD directive
    default:
        return fmt.Errorf("unknown instruction: %s", parts[0])
    }
}
```
### 3.2 ADD Directive

```
ADD <source> <destination>
```

| Parameter | Description |
|-----------|-------------|
| source | Local path (relative to current working directory) |
| destination | Container path (leading slash stripped) |

### 3.3 Examples

```dockerfile
# Add single file
ADD ./app /usr/local/bin/app

# Add configuration
ADD ./config.yaml /etc/myapp/config.yaml

# Multiple files
ADD ./bin/server /app/server
ADD ./static /app/static
```

Note: the `#` comments and the directory source (`./static`) shown here rely on features that Section 12 lists as not yet implemented.
## 4. Path Resolution

### 4.1 Source Paths

- Resolved relative to **current working directory** (not Borgfile location)
- Must exist at compile time
- Read via `os.ReadFile(src)`

### 4.2 Destination Paths

- Leading slash stripped: `strings.TrimPrefix(dest, "/")`
- Added to DataNode as-is

```go
// cmd/compile.go:46-50
data, err := os.ReadFile(src)
if err != nil {
    return fmt.Errorf("invalid ADD instruction: %s", line)
}
name := strings.TrimPrefix(dest, "/")
m.RootFS.AddData(name, data)
```
## 5. File Handling

### 5.1 Permissions

**Current implementation**: Permissions are NOT preserved.

| Source | Container |
|--------|-----------|
| Any file | 0600 (hardcoded in DataNode.ToTar) |
| Any directory | 0755 (implicit) |

### 5.2 Timestamps

- Set to `time.Now()` when added to DataNode
- Original timestamps not preserved

### 5.3 File Types

- Regular files only
- No directory recursion (each file must be added explicitly)
- No symlink following
## 6. Error Handling

| Error | Cause |
|-------|-------|
| `invalid ADD instruction: {line}` | Wrong number of arguments |
| `os.ReadFile` error | Source file not found |
| `unknown instruction: {name}` | Unrecognized directive |
| `ErrPasswordRequired` | Encryption requested without password |
## 7. CLI Flags

Defined in `cmd/compile.go:80-82`:

```
-f, --file string      Path to Borgfile (default: "Borgfile")
-o, --output string    Output path (default: "a.tim")
-e, --encrypt string   Password for .stim encryption (optional)
```
## 8. Output Formats

### 8.1 Plain TIM

```bash
borg compile -f Borgfile -o container.tim
```

Output: Standard TIM tar archive with `config.json` + `rootfs/`

### 8.2 Encrypted STIM

```bash
borg compile -f Borgfile -e "password" -o container.stim
```

Output: ChaCha20-Poly1305 encrypted STIM container

**Auto-detection**: If the `-e` flag is provided, the output automatically uses the `.stim` format even if `-o` specifies `.tim`.
## 9. Default OCI Config

The current implementation creates a minimal config:

```go
// pkg/tim/config.go:6-10
func defaultConfig() (*trix.Trix, error) {
    return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```

**Note**: This is a placeholder. For full OCI runtime execution, you'll need to provide a proper `config.json` in the container or modify the TIM after compilation.
## 10. Compilation Process

```
1. Read Borgfile content
2. Parse line-by-line
3. For each ADD directive:
   a. Read source file from filesystem
   b. Strip leading slash from destination
   c. Add to DataNode
4. Create TIM with default config + populated RootFS
5. If password provided:
   a. Encrypt to STIM via ToSigil()
   b. Adjust output extension to .stim
6. Write output file
```
## 11. Implementation Reference

- Parser/Compiler: `cmd/compile.go`
- TIM creation: `pkg/tim/tim.go`
- DataNode: `pkg/datanode/datanode.go`
- Tests: `cmd/compile_test.go`

## 12. Current Limitations

| Feature | Status |
|---------|--------|
| Comment support (`#`) | Not implemented |
| Quoted paths | Not implemented |
| Directory recursion | Not implemented |
| Permission preservation | Not implemented |
| Path resolution relative to Borgfile | Not implemented (uses CWD) |
| Full OCI config generation | Not implemented (empty header) |
| Symlink following | Not implemented |
## 13. Examples

### 13.1 Simple Application

```dockerfile
ADD ./myapp /usr/local/bin/myapp
ADD ./config.yaml /etc/myapp/config.yaml
```

### 13.2 Web Application

```dockerfile
ADD ./server /app/server
ADD ./index.html /app/static/index.html
ADD ./style.css /app/static/style.css
ADD ./app.js /app/static/app.js
```

### 13.3 With Encryption

```bash
# Create Borgfile
cat > Borgfile << 'EOF'
ADD ./secret-app /app/secret-app
ADD ./credentials.json /etc/app/credentials.json
EOF

# Compile with encryption
borg compile -f Borgfile -e "MySecretPassword123" -o secret.stim
```
## 14. Future Work

- [ ] Comment support (`#`)
- [ ] Quoted path support for spaces
- [ ] Directory recursion in ADD
- [ ] Permission preservation
- [ ] Path resolution relative to Borgfile location
- [ ] Full OCI config generation
- [ ] Variable substitution (`${VAR}`)
- [ ] Include directive
- [ ] Glob patterns in source
- [ ] COPY directive (alias for ADD)

---
# RFC-009: STMF Secure To-Me Form

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2

---

## Abstract

STMF (Secure To-Me Form) provides asymmetric encryption for web form submissions. It enables end-to-end encrypted form data where only the recipient can decrypt submissions, protecting sensitive data from server compromise.

## 1. Overview

STMF provides:

- Asymmetric encryption for form data
- X25519 key exchange
- ChaCha20-Poly1305 for payload encryption
- Browser-based encryption via WASM
- HTTP middleware for server-side decryption
## 2. Cryptographic Primitives

### 2.1 Key Exchange

X25519 (Curve25519 Diffie-Hellman)

| Parameter | Value |
|-----------|-------|
| Private key | 32 bytes |
| Public key | 32 bytes |
| Shared secret | 32 bytes |

### 2.2 Encryption

ChaCha20-Poly1305

| Parameter | Value |
|-----------|-------|
| Key | 32 bytes (SHA-256 of shared secret) |
| Nonce | 24 bytes (XChaCha variant) |
| Tag | 16 bytes |
## 3. Protocol

### 3.1 Setup (One-time)

```
Recipient (Server):
1. Generate X25519 keypair
2. Publish public key (embed in page or API)
3. Store private key securely
```

### 3.2 Encryption Flow (Browser)

```
1. Fetch recipient's public key
2. Generate ephemeral X25519 keypair
3. Compute shared secret: X25519(ephemeral_private, recipient_public)
4. Derive encryption key: SHA256(shared_secret)
5. Encrypt form data: ChaCha20-Poly1305(data, key, random_nonce)
6. Send: {ephemeral_public, nonce, ciphertext}
```

### 3.3 Decryption Flow (Server)

```
1. Receive {ephemeral_public, nonce, ciphertext}
2. Compute shared secret: X25519(recipient_private, ephemeral_public)
3. Derive encryption key: SHA256(shared_secret)
4. Decrypt: ChaCha20-Poly1305_Open(ciphertext, key, nonce)
```
## 4. Wire Format

### 4.1 Container (Trix-based)

```
[Magic: "STMF" (4 bytes)]
[Header: Gob-encoded JSON]
[Payload: ChaCha20-Poly1305 ciphertext]
```

### 4.2 Header Structure

```json
{
  "version": "1.0",
  "algorithm": "x25519-chacha20poly1305",
  "ephemeral_pk": "<base64 32-byte ephemeral public key>"
}
```
### 4.3 Transmission

- Default form field: `_stmf_payload`
- Encoding: Base64 string
- Content-Type: `application/x-www-form-urlencoded` or `multipart/form-data`
## 5. Data Structures

### 5.1 FormField

```go
type FormField struct {
    Name     string // Field name
    Value    string // Base64 for files, plaintext otherwise
    Type     string // "text", "password", "file"
    Filename string // For file uploads
    MimeType string // For file uploads
}
```

### 5.2 FormData

```go
type FormData struct {
    Fields   []FormField       // Array of form fields
    Metadata map[string]string // Arbitrary key-value metadata
}
```

### 5.3 Builder Pattern

```go
formData := NewFormData().
    AddField("email", "user@example.com").
    AddFieldWithType("password", "secret", "password").
    AddFile("document", base64Content, "report.pdf", "application/pdf").
    SetMetadata("timestamp", time.Now().String())
```
## 6. Key Management API

### 6.1 Key Generation

```go
// pkg/stmf/keypair.go
func GenerateKeyPair() (*KeyPair, error)

type KeyPair struct {
    privateKey *ecdh.PrivateKey
    publicKey  *ecdh.PublicKey
}
```

### 6.2 Key Loading

```go
// From raw bytes
func LoadPublicKey(data []byte) (*ecdh.PublicKey, error)
func LoadPrivateKey(data []byte) (*ecdh.PrivateKey, error)

// From base64
func LoadPublicKeyBase64(encoded string) (*ecdh.PublicKey, error)
func LoadPrivateKeyBase64(encoded string) (*ecdh.PrivateKey, error)

// Reconstruct keypair from private key
func LoadKeyPair(privateKeyBytes []byte) (*KeyPair, error)
```

### 6.3 Key Export

```go
func (kp *KeyPair) PublicKey() []byte        // Raw 32 bytes
func (kp *KeyPair) PrivateKey() []byte       // Raw 32 bytes
func (kp *KeyPair) PublicKeyBase64() string  // Base64 encoded
func (kp *KeyPair) PrivateKeyBase64() string // Base64 encoded
```
## 7. WASM API

### 7.1 BorgSTMF Namespace

```javascript
// Generate X25519 keypair
const keypair = await BorgSTMF.generateKeyPair();
// keypair.publicKey: base64 string
// keypair.privateKey: base64 string

// Encrypt form data
const encrypted = await BorgSTMF.encrypt(
  JSON.stringify(formData),
  serverPublicKeyBase64
);

// Encrypt with field-level control
const encryptedFields = await BorgSTMF.encryptFields(
  {email: "user@example.com", password: "secret"},
  serverPublicKeyBase64,
  {timestamp: Date.now().toString()} // Optional metadata
);
```
## 8. HTTP Middleware

### 8.1 Simple Usage

```go
import "github.com/Snider/Borg/pkg/stmf/middleware"

// Create middleware with private key
mw := middleware.Simple(privateKeyBytes)

// Or from base64
mw, err := middleware.SimpleBase64(privateKeyB64)

// Apply to handler
http.Handle("/submit", mw(myHandler))
```

### 8.2 Advanced Configuration

```go
cfg := middleware.DefaultConfig(privateKeyBytes)
cfg.FieldName = "_custom_field"      // Custom field name (default: _stmf_payload)
populate := true
cfg.PopulateForm = &populate         // Auto-populate r.Form
cfg.OnError = customErrorHandler     // Custom error handling
cfg.OnMissingPayload = customHandler // When field is absent

mw := middleware.Middleware(cfg)
```

### 8.3 Context Access

```go
func myHandler(w http.ResponseWriter, r *http.Request) {
    // Get decrypted form data
    formData := middleware.GetFormData(r)

    // Get metadata
    metadata := middleware.GetMetadata(r)

    // Access fields
    email := formData.Get("email")
    password := formData.Get("password")
}
```

### 8.4 Middleware Behavior

- Handles POST, PUT, PATCH requests only
- Parses multipart/form-data (32 MB limit) or application/x-www-form-urlencoded
- Looks for field `_stmf_payload` (configurable)
- Base64 decodes, then decrypts
- Populates `r.Form` and `r.PostForm` with decrypted fields
- Returns 400 Bad Request on decryption failure
## 9. Integration Example

### 9.1 HTML Form

```html
<form id="secure-form" data-stmf-pubkey="<base64-public-key>">
  <input name="name" type="text">
  <input name="email" type="email">
  <input name="ssn" type="password">
  <button type="submit">Send Securely</button>
</form>

<script>
document.getElementById('secure-form').addEventListener('submit', async (e) => {
  e.preventDefault();
  const form = e.target;
  const pubkey = form.dataset.stmfPubkey;

  const formData = new FormData(form);
  const data = Object.fromEntries(formData);

  const encrypted = await BorgSTMF.encrypt(JSON.stringify(data), pubkey);

  await fetch('/api/submit', {
    method: 'POST',
    body: new URLSearchParams({_stmf_payload: encrypted}),
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  });
});
</script>
```

### 9.2 Server Handler

```go
func main() {
    privateKey, _ := os.ReadFile("private.key")
    mw := middleware.Simple(privateKey)

    http.Handle("/api/submit", mw(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        formData := middleware.GetFormData(r)

        name := formData.Get("name")
        email := formData.Get("email")
        ssn := formData.Get("ssn")

        // Process securely...
        w.WriteHeader(http.StatusOK)
    })))

    http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
}
```
## 10. Security Properties

### 10.1 Forward Secrecy

- Fresh ephemeral keypair per encryption
- A compromised private key doesn't decrypt past messages
- Each ciphertext has a unique shared secret

### 10.2 Authenticity

- Poly1305 MAC prevents tampering
- Decryption fails if the ciphertext is modified

### 10.3 Confidentiality

- ChaCha20 provides 256-bit security
- Nonces are random (24 bytes), so collisions are negligible
- Data is encrypted before leaving the browser

### 10.4 Key Isolation

- Private key never exposed to browser/JavaScript
- Public key can be safely distributed
- Ephemeral keys discarded after encryption
## 11. Error Handling

```go
var (
    ErrInvalidMagic        = errors.New("invalid STMF magic")
    ErrInvalidPayload      = errors.New("invalid STMF payload")
    ErrDecryptionFailed    = errors.New("decryption failed")
    ErrInvalidPublicKey    = errors.New("invalid public key")
    ErrInvalidPrivateKey   = errors.New("invalid private key")
    ErrKeyGenerationFailed = errors.New("key generation failed")
)
```
## 12. Implementation Reference

- Types: `pkg/stmf/types.go`
- Key management: `pkg/stmf/keypair.go`
- Encryption: `pkg/stmf/encrypt.go`
- Decryption: `pkg/stmf/decrypt.go`
- Middleware: `pkg/stmf/middleware/http.go`
- WASM: `pkg/wasm/stmf/main.go`
## 13. Security Considerations

1. **Public key authenticity**: Verify the public key's source (HTTPS, pinning)
2. **Private key protection**: Never expose to the browser; store securely
3. **Nonce uniqueness**: 24-byte random nonces make collisions negligible
4. **HTTPS required**: The transport layer must still be encrypted
## 14. Future Work

- [ ] Multiple recipients
- [ ] Key attestation
- [ ] Offline decryption app
- [ ] Hardware key support (WebAuthn)
- [ ] Key rotation support