Compare commits

...

13 commits
v0.1.0 ... main

Author SHA1 Message Date
Snider
a77024aad4 feat(collect): add local directory collection
Add `borg collect local` command to collect files from the local
filesystem into a DataNode.

Features:
- Walks directory tree (defaults to CWD)
- Respects .gitignore patterns by default
- Excludes hidden files by default (--hidden to include)
- Custom exclude patterns via --exclude flag
- Output formats: datanode, tim, trix, stim
- Compression: none, gz, xz

Examples:
  borg collect local
  borg collect local ./src --output src.tar.xz --compression xz
  borg collect local . --format stim --password secret

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-02-03 07:12:10 +00:00
Snider
eae9de0cf6
Merge pull request #18 from Snider/dependabot/go_modules/golang.org/x/crypto-0.45.0
Bump golang.org/x/crypto from 0.44.0 to 0.45.0
2026-02-02 06:43:32 +00:00
Snider
6e38c4f3a6
Merge pull request #112 from Snider/copilot/combine-prs-into-one-update
[WIP] Combine multiple PRs into a single squash commit
2026-02-02 06:35:39 +00:00
copilot-swe-agent[bot]
c26d841b1b Initial plan 2026-02-02 05:36:04 +00:00
snider
cf2af53ed3 feat: add RFC specifications and documentation for Borg project 2026-01-13 17:26:21 +00:00
snider
63b8a3ecb6 feat: adaptive bitrate streaming (ABR) for HLS-style encrypted video
Add multi-quality variant support for video content:
   - New ABR types in pkg/smsg/types.go (ABRManifest, Variant, ABRPresets)
   - New pkg/smsg/abr.go with manifest read/write and bandwidth estimation
   - New cmd/mkdemo-abr CLI tool for creating ABR variant sets via ffmpeg
   - WASM parseABRManifest and selectVariant functions
   - Demo page "Adaptive Quality" tab with ABR player
   - RFC-001 Section 3.7 documenting ABR format and algorithm
2026-01-13 15:40:15 +00:00
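A minimal sketch (not part of this changeset) of how the new ABR pieces could fit together, based on the function signatures added in pkg/smsg/abr.go below; the manifest path and the bandwidth sample are illustrative values only:

```go
// Sketch only: load an ABR manifest, feed the bandwidth estimator with one
// download sample, and pick the best-fitting variant (80% safety factor).
package main

import (
	"fmt"

	"github.com/Snider/Borg/pkg/smsg"
)

func main() {
	manifest, err := smsg.ReadABRManifest("demo-abr/manifest.json") // illustrative path
	if err != nil {
		panic(err)
	}

	est := smsg.NewABRBandwidthEstimator(10)
	est.RecordSample(512*1024, 1500) // 512 KB downloaded in 1.5 s (illustrative sample)

	idx := manifest.SelectVariant(est.Estimate())
	if v := manifest.GetVariant(idx); v != nil {
		fmt.Printf("selected %s (%d bps) -> %s\n", v.Name, v.Bandwidth, v.URL)
	}
}
```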
snider
8486242fd8 docs: add IPFS and payment integration guides + artist mode polish

   - Add docs/ipfs-distribution.md: complete guide for IPFS hosting
     - Installation, pinning services, gateways, best practices
     - Full album release workflow example

   - Add docs/payment-integration.md: Stripe, Gumroad, PayPal examples
     - Webhook handlers for automated license delivery
     - Serverless options (Vercel/Netlify)
     - Manual workflow for non-technical artists

   - Demo artist mode improvements:
     - WASM loads on-demand (fixes 6s delay on 4G)
     - Generate button enabled by password only
     - Vi demo preloads when WASM ready

   - Update RFC-001 section 8.3: mark completed items
2026-01-13 15:17:22 +00:00
snider
bd7e8b3040 feat: lazy loading profile page + v3 streaming polish
Profile page:
   - No WASM or video download until play button clicked
   - Play button visible immediately, loading on-demand
   - Removed auto-play behavior completely

   Streaming:
   - GetV3HeaderFromPrefix for parsing from partial data
   - v3 demo file with 128KB chunks for streaming tests
2026-01-12 17:48:32 +00:00
snider
2debed53f1 feat: v3 streaming with LTHN rolling keys and configurable cadence
V3 streaming format enables zero-trust media streaming:
- Content encrypted once with random CEK
- CEK wrapped with time-bound stream keys derived from LTHN hash
- Rolling window: current period + next period always valid
- Keys auto-expire, no revocation needed

Cadence options (platform controls refresh rate):
- daily:  24-hour periods (2026-01-12)
- 12h:    Half-day periods (2026-01-12-AM/PM)
- 6h:     Quarter-day periods (2026-01-12-00/06/12/18)
- 1h:     Hourly periods (2026-01-12-15)

Key derivation: SHA256(LTHN(period:license:fingerprint))
- LTHN is rainbow-table resistant (salt derived from input)
- Only the derived key can decrypt, never transmitted

New files:
- pkg/smsg/stream.go - v3 encryption/decryption
- pkg/smsg/stream_test.go - 17 tests including cadence

WASM v1.3.0:
- BorgSMSG.decryptV3(data, {license, fingerprint})
- getInfo() now returns cadence and keyMethod
2026-01-12 16:01:59 +00:00
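A minimal sketch (not part of this changeset) of the client-side key flow described above, using the exported helpers added in pkg/smsg/stream.go below; the license string and the wrapped-key value are illustrative placeholders, and in practice the wrapped key is read from the v3 header:

```go
// Sketch only: derive the current period's stream key and attempt to unwrap
// the CEK. A real client would take the wrapped key from the v3 header.
package main

import (
	"fmt"
	"time"

	"github.com/Snider/Borg/pkg/smsg"
)

func main() {
	license := "example-license"             // illustrative
	fingerprint := ""                        // empty = any device, as in the demos
	wrappedCurrent := "<base64-from-header>" // illustrative placeholder

	current, _ := smsg.GetRollingPeriods(smsg.CadenceDaily, time.Now().UTC())
	streamKey := smsg.DeriveStreamKey(current, license, fingerprint)

	if cek, err := smsg.UnwrapCEK(wrappedCurrent, streamKey); err == nil {
		fmt.Printf("recovered %d-byte CEK\n", len(cek))
	} else {
		fmt.Println("period expired or wrong license:", err)
	}
}
```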
snider
0ba0897c25 docs: add nonce handling explanation for developers 2026-01-12 15:51:41 +00:00
snider
3d903c5a27 feat: multi-track demo support with password map 2026-01-12 15:39:26 +00:00
snider
2da38ae462 fix: mobile scrolling + clean up mkdemo hardcoded values 2026-01-12 15:35:13 +00:00
dependabot[bot]
b94ffbab5e
Bump golang.org/x/crypto from 0.44.0 to 0.45.0
Bumps [golang.org/x/crypto](https://github.com/golang/crypto) from 0.44.0 to 0.45.0.
- [Commits](https://github.com/golang/crypto/compare/v0.44.0...v0.45.0)

---
updated-dependencies:
- dependency-name: golang.org/x/crypto
  dependency-version: 0.45.0
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-11-20 02:44:02 +00:00
32 changed files with 8936 additions and 117 deletions

333
cmd/collect_local.go Normal file

@@ -0,0 +1,333 @@
package cmd
import (
"fmt"
"io/fs"
"os"
"path/filepath"
"strings"
"github.com/Snider/Borg/pkg/compress"
"github.com/Snider/Borg/pkg/datanode"
"github.com/Snider/Borg/pkg/tim"
"github.com/Snider/Borg/pkg/trix"
"github.com/Snider/Borg/pkg/ui"
"github.com/spf13/cobra"
)
type CollectLocalCmd struct {
cobra.Command
}
// NewCollectLocalCmd creates a new collect local command
func NewCollectLocalCmd() *CollectLocalCmd {
c := &CollectLocalCmd{}
c.Command = cobra.Command{
Use: "local [directory]",
Short: "Collect files from a local directory",
Long: `Collect files from a local directory and store them in a DataNode.
If no directory is specified, the current working directory is used.
Examples:
borg collect local
borg collect local ./src
borg collect local /path/to/project --output project.tar
borg collect local . --format stim --password secret
borg collect local . --exclude "*.log" --exclude "node_modules"`,
Args: cobra.MaximumNArgs(1),
RunE: func(cmd *cobra.Command, args []string) error {
directory := "."
if len(args) > 0 {
directory = args[0]
}
outputFile, _ := cmd.Flags().GetString("output")
format, _ := cmd.Flags().GetString("format")
compression, _ := cmd.Flags().GetString("compression")
password, _ := cmd.Flags().GetString("password")
excludes, _ := cmd.Flags().GetStringSlice("exclude")
includeHidden, _ := cmd.Flags().GetBool("hidden")
respectGitignore, _ := cmd.Flags().GetBool("gitignore")
finalPath, err := CollectLocal(directory, outputFile, format, compression, password, excludes, includeHidden, respectGitignore)
if err != nil {
return err
}
fmt.Fprintln(cmd.OutOrStdout(), "Files saved to", finalPath)
return nil
},
}
c.Flags().String("output", "", "Output file for the DataNode")
c.Flags().String("format", "datanode", "Output format (datanode, tim, trix, or stim)")
c.Flags().String("compression", "none", "Compression format (none, gz, or xz)")
c.Flags().String("password", "", "Password for encryption (required for stim/trix format)")
c.Flags().StringSlice("exclude", nil, "Patterns to exclude (can be specified multiple times)")
c.Flags().Bool("hidden", false, "Include hidden files and directories")
c.Flags().Bool("gitignore", true, "Respect .gitignore files (default: true)")
return c
}
func init() {
collectCmd.AddCommand(&NewCollectLocalCmd().Command)
}
// CollectLocal collects files from a local directory into a DataNode
func CollectLocal(directory string, outputFile string, format string, compression string, password string, excludes []string, includeHidden bool, respectGitignore bool) (string, error) {
// Validate format
if format != "datanode" && format != "tim" && format != "trix" && format != "stim" {
return "", fmt.Errorf("invalid format: %s (must be 'datanode', 'tim', 'trix', or 'stim')", format)
}
if (format == "stim" || format == "trix") && password == "" {
return "", fmt.Errorf("password is required for %s format", format)
}
if compression != "none" && compression != "gz" && compression != "xz" {
return "", fmt.Errorf("invalid compression: %s (must be 'none', 'gz', or 'xz')", compression)
}
// Resolve directory path
absDir, err := filepath.Abs(directory)
if err != nil {
return "", fmt.Errorf("error resolving directory path: %w", err)
}
info, err := os.Stat(absDir)
if err != nil {
return "", fmt.Errorf("error accessing directory: %w", err)
}
if !info.IsDir() {
return "", fmt.Errorf("not a directory: %s", absDir)
}
// Load gitignore patterns if enabled
var gitignorePatterns []string
if respectGitignore {
gitignorePatterns = loadGitignore(absDir)
}
// Create DataNode and collect files
dn := datanode.New()
var fileCount int
bar := ui.NewProgressBar(-1, "Scanning files")
defer bar.Finish()
err = filepath.WalkDir(absDir, func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
// Get relative path
relPath, err := filepath.Rel(absDir, path)
if err != nil {
return err
}
// Skip root
if relPath == "." {
return nil
}
// Skip hidden files/dirs unless explicitly included
if !includeHidden && isHidden(relPath) {
if d.IsDir() {
return filepath.SkipDir
}
return nil
}
// Check gitignore patterns
if respectGitignore && matchesGitignore(relPath, d.IsDir(), gitignorePatterns) {
if d.IsDir() {
return filepath.SkipDir
}
return nil
}
// Check exclude patterns
if matchesExclude(relPath, excludes) {
if d.IsDir() {
return filepath.SkipDir
}
return nil
}
// Skip directories (they're implicit in DataNode)
if d.IsDir() {
return nil
}
// Read file content
content, err := os.ReadFile(path)
if err != nil {
return fmt.Errorf("error reading %s: %w", relPath, err)
}
// Add to DataNode with forward slashes (tar convention)
dn.AddData(filepath.ToSlash(relPath), content)
fileCount++
bar.Describe(fmt.Sprintf("Collected %d files", fileCount))
return nil
})
if err != nil {
return "", fmt.Errorf("error walking directory: %w", err)
}
if fileCount == 0 {
return "", fmt.Errorf("no files found in %s", directory)
}
bar.Describe(fmt.Sprintf("Packaging %d files", fileCount))
// Convert to output format
var data []byte
if format == "tim" {
t, err := tim.FromDataNode(dn)
if err != nil {
return "", fmt.Errorf("error creating tim: %w", err)
}
data, err = t.ToTar()
if err != nil {
return "", fmt.Errorf("error serializing tim: %w", err)
}
} else if format == "stim" {
t, err := tim.FromDataNode(dn)
if err != nil {
return "", fmt.Errorf("error creating tim: %w", err)
}
data, err = t.ToSigil(password)
if err != nil {
return "", fmt.Errorf("error encrypting stim: %w", err)
}
} else if format == "trix" {
data, err = trix.ToTrix(dn, password)
if err != nil {
return "", fmt.Errorf("error serializing trix: %w", err)
}
} else {
data, err = dn.ToTar()
if err != nil {
return "", fmt.Errorf("error serializing DataNode: %w", err)
}
}
// Apply compression
compressedData, err := compress.Compress(data, compression)
if err != nil {
return "", fmt.Errorf("error compressing data: %w", err)
}
// Determine output filename
if outputFile == "" {
baseName := filepath.Base(absDir)
if baseName == "." || baseName == "/" {
baseName = "local"
}
outputFile = baseName + "." + format
if compression != "none" {
outputFile += "." + compression
}
}
err = os.WriteFile(outputFile, compressedData, 0644)
if err != nil {
return "", fmt.Errorf("error writing output file: %w", err)
}
return outputFile, nil
}
// isHidden checks if a path component starts with a dot
func isHidden(path string) bool {
parts := strings.Split(filepath.ToSlash(path), "/")
for _, part := range parts {
if strings.HasPrefix(part, ".") {
return true
}
}
return false
}
// loadGitignore loads patterns from .gitignore if it exists
func loadGitignore(dir string) []string {
var patterns []string
gitignorePath := filepath.Join(dir, ".gitignore")
content, err := os.ReadFile(gitignorePath)
if err != nil {
return patterns
}
lines := strings.Split(string(content), "\n")
for _, line := range lines {
line = strings.TrimSpace(line)
// Skip empty lines and comments
if line == "" || strings.HasPrefix(line, "#") {
continue
}
patterns = append(patterns, line)
}
return patterns
}
// matchesGitignore checks if a path matches any gitignore pattern
func matchesGitignore(path string, isDir bool, patterns []string) bool {
for _, pattern := range patterns {
// Handle directory-only patterns
if strings.HasSuffix(pattern, "/") {
if !isDir {
continue
}
pattern = strings.TrimSuffix(pattern, "/")
}
// Handle negation (simplified - just skip negated patterns)
if strings.HasPrefix(pattern, "!") {
continue
}
// Match against path components
matched, _ := filepath.Match(pattern, filepath.Base(path))
if matched {
return true
}
// Also try matching the full path
matched, _ = filepath.Match(pattern, path)
if matched {
return true
}
// Handle ** patterns (simplified)
if strings.Contains(pattern, "**") {
simplePattern := strings.ReplaceAll(pattern, "**", "*")
matched, _ = filepath.Match(simplePattern, path)
if matched {
return true
}
}
}
return false
}
// matchesExclude checks if a path matches any exclude pattern
func matchesExclude(path string, excludes []string) bool {
for _, pattern := range excludes {
// Match against basename
matched, _ := filepath.Match(pattern, filepath.Base(path))
if matched {
return true
}
// Match against full path
matched, _ = filepath.Match(pattern, path)
if matched {
return true
}
}
return false
}

70
cmd/extract-demo/main.go Normal file

@@ -0,0 +1,70 @@
// extract-demo extracts the video from a v2 SMSG file
package main
import (
"encoding/base64"
"fmt"
"os"
"github.com/Snider/Borg/pkg/smsg"
)
func main() {
if len(os.Args) < 4 {
fmt.Println("Usage: extract-demo <input.smsg> <password> <output.mp4>")
os.Exit(1)
}
inputFile := os.Args[1]
password := os.Args[2]
outputFile := os.Args[3]
data, err := os.ReadFile(inputFile)
if err != nil {
fmt.Printf("Failed to read: %v\n", err)
os.Exit(1)
}
// Get info first
info, err := smsg.GetInfo(data)
if err != nil {
fmt.Printf("Failed to get info: %v\n", err)
os.Exit(1)
}
fmt.Printf("Format: %s, Compression: %s\n", info.Format, info.Compression)
// Decrypt
msg, err := smsg.Decrypt(data, password)
if err != nil {
fmt.Printf("Failed to decrypt: %v\n", err)
os.Exit(1)
}
fmt.Printf("Body: %s...\n", msg.Body[:min(50, len(msg.Body))])
fmt.Printf("Attachments: %d\n", len(msg.Attachments))
if len(msg.Attachments) > 0 {
att := msg.Attachments[0]
fmt.Printf(" Name: %s, MIME: %s, Size: %d\n", att.Name, att.MimeType, att.Size)
// Decode and save
decoded, err := base64.StdEncoding.DecodeString(att.Content)
if err != nil {
fmt.Printf("Failed to decode: %v\n", err)
os.Exit(1)
}
if err := os.WriteFile(outputFile, decoded, 0644); err != nil {
fmt.Printf("Failed to save: %v\n", err)
os.Exit(1)
}
fmt.Printf("Saved to %s (%d bytes)\n", outputFile, len(decoded))
}
}
func min(a, b int) int {
if a < b {
return a
}
return b
}

226
cmd/mkdemo-abr/main.go Normal file

@@ -0,0 +1,226 @@
// mkdemo-abr creates an ABR (Adaptive Bitrate) demo set from a source video.
// It uses ffmpeg to transcode to multiple bitrates, then encrypts each as v3 chunked SMSG.
//
// Usage: mkdemo-abr <input-video> <output-dir> [password]
//
// Output:
//
// output-dir/manifest.json - ABR manifest listing all variants
// output-dir/track-1080p.smsg - 1080p variant (5 Mbps)
// output-dir/track-720p.smsg - 720p variant (2.5 Mbps)
// output-dir/track-480p.smsg - 480p variant (1 Mbps)
// output-dir/track-360p.smsg - 360p variant (500 Kbps)
package main
import (
"crypto/rand"
"encoding/base64"
"fmt"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/Snider/Borg/pkg/smsg"
)
// Preset defines a quality level for transcoding
type Preset struct {
Name string
Width int
Height int
Bitrate string // For ffmpeg (e.g., "5M")
BPS int // Bits per second for manifest
}
// Default presets matching ABRPresets in types.go
var presets = []Preset{
{"1080p", 1920, 1080, "5M", 5000000},
{"720p", 1280, 720, "2.5M", 2500000},
{"480p", 854, 480, "1M", 1000000},
{"360p", 640, 360, "500K", 500000},
}
func main() {
if len(os.Args) < 3 {
fmt.Println("Usage: mkdemo-abr <input-video> <output-dir> [password]")
fmt.Println()
fmt.Println("Creates ABR variant set from source video using ffmpeg.")
fmt.Println()
fmt.Println("Output:")
fmt.Println(" output-dir/manifest.json - ABR manifest")
fmt.Println(" output-dir/track-1080p.smsg - 1080p (5 Mbps)")
fmt.Println(" output-dir/track-720p.smsg - 720p (2.5 Mbps)")
fmt.Println(" output-dir/track-480p.smsg - 480p (1 Mbps)")
fmt.Println(" output-dir/track-360p.smsg - 360p (500 Kbps)")
os.Exit(1)
}
inputFile := os.Args[1]
outputDir := os.Args[2]
// Check ffmpeg is available
if _, err := exec.LookPath("ffmpeg"); err != nil {
fmt.Println("Error: ffmpeg not found in PATH")
fmt.Println("Install ffmpeg: https://ffmpeg.org/download.html")
os.Exit(1)
}
// Generate or use provided password
var password string
if len(os.Args) > 3 {
password = os.Args[3]
} else {
passwordBytes := make([]byte, 24)
if _, err := rand.Read(passwordBytes); err != nil {
fmt.Printf("Failed to generate password: %v\n", err)
os.Exit(1)
}
password = base64.RawURLEncoding.EncodeToString(passwordBytes)
}
// Create output directory
if err := os.MkdirAll(outputDir, 0755); err != nil {
fmt.Printf("Failed to create output directory: %v\n", err)
os.Exit(1)
}
// Get title from input filename
title := filepath.Base(inputFile)
ext := filepath.Ext(title)
if ext != "" {
title = title[:len(title)-len(ext)]
}
// Create ABR manifest
manifest := smsg.NewABRManifest(title)
fmt.Printf("Creating ABR variants for: %s\n", inputFile)
fmt.Printf("Output directory: %s\n", outputDir)
fmt.Printf("Password: %s\n\n", password)
// Process each preset
for _, preset := range presets {
fmt.Printf("Processing %s (%dx%d @ %s)...\n", preset.Name, preset.Width, preset.Height, preset.Bitrate)
// Step 1: Transcode with ffmpeg
tempFile := filepath.Join(outputDir, fmt.Sprintf("temp-%s.mp4", preset.Name))
if err := transcode(inputFile, tempFile, preset); err != nil {
fmt.Printf(" Warning: Transcode failed for %s: %v\n", preset.Name, err)
fmt.Printf(" Skipping this variant...\n")
continue
}
// Step 2: Read transcoded file
content, err := os.ReadFile(tempFile)
if err != nil {
fmt.Printf(" Error reading transcoded file: %v\n", err)
os.Remove(tempFile)
continue
}
// Step 3: Create SMSG message
msg := smsg.NewMessage("dapp.fm ABR Demo")
msg.Subject = fmt.Sprintf("%s - %s", title, preset.Name)
msg.From = "dapp.fm"
msg.AddBinaryAttachment(
fmt.Sprintf("%s-%s.mp4", strings.ReplaceAll(title, " ", "_"), preset.Name),
content,
"video/mp4",
)
// Step 4: Create manifest for this variant
variantManifest := smsg.NewManifest(title)
variantManifest.LicenseType = "perpetual"
variantManifest.Format = "dapp.fm/abr-v1"
// Step 5: Encrypt with v3 chunked format
params := &smsg.StreamParams{
License: password,
ChunkSize: smsg.DefaultChunkSize, // 1MB chunks
}
encrypted, err := smsg.EncryptV3(msg, params, variantManifest)
if err != nil {
fmt.Printf(" Error encrypting: %v\n", err)
os.Remove(tempFile)
continue
}
// Step 6: Write SMSG file
smsgFile := filepath.Join(outputDir, fmt.Sprintf("track-%s.smsg", preset.Name))
if err := os.WriteFile(smsgFile, encrypted, 0644); err != nil {
fmt.Printf(" Error writing SMSG: %v\n", err)
os.Remove(tempFile)
continue
}
// Step 7: Get chunk count from header
header, err := smsg.GetV3Header(encrypted)
if err != nil {
fmt.Printf(" Warning: Could not read header: %v\n", err)
}
chunkCount := 0
if header != nil && header.Chunked != nil {
chunkCount = header.Chunked.TotalChunks
}
// Step 8: Add variant to manifest
variant := smsg.Variant{
Name: preset.Name,
Bandwidth: preset.BPS,
Width: preset.Width,
Height: preset.Height,
Codecs: "avc1.640028,mp4a.40.2",
URL: fmt.Sprintf("track-%s.smsg", preset.Name),
ChunkCount: chunkCount,
FileSize: int64(len(encrypted)),
}
manifest.AddVariant(variant)
// Clean up temp file
os.Remove(tempFile)
fmt.Printf(" Created: %s (%d bytes, %d chunks)\n", smsgFile, len(encrypted), chunkCount)
}
if len(manifest.Variants) == 0 {
fmt.Println("\nError: No variants created. Check ffmpeg output.")
os.Exit(1)
}
// Write ABR manifest
manifestPath := filepath.Join(outputDir, "manifest.json")
if err := smsg.WriteABRManifest(manifest, manifestPath); err != nil {
fmt.Printf("Failed to write manifest: %v\n", err)
os.Exit(1)
}
fmt.Printf("\n✓ Created ABR manifest: %s\n", manifestPath)
fmt.Printf("✓ Variants: %d\n", len(manifest.Variants))
fmt.Printf("✓ Default: %s\n", manifest.Variants[manifest.DefaultIdx].Name)
fmt.Printf("\nMaster Password: %s\n", password)
fmt.Println("\nStore this password securely - it decrypts ALL variants!")
}
// transcode uses ffmpeg to transcode the input to the specified preset
func transcode(input, output string, preset Preset) error {
args := []string{
"-i", input,
"-vf", fmt.Sprintf("scale=%d:%d:force_original_aspect_ratio=decrease,pad=%d:%d:(ow-iw)/2:(oh-ih)/2",
preset.Width, preset.Height, preset.Width, preset.Height),
"-c:v", "libx264",
"-preset", "medium",
"-b:v", preset.Bitrate,
"-c:a", "aac",
"-b:a", "128k",
"-movflags", "+faststart",
"-y", // Overwrite output
output,
}
cmd := exec.Command("ffmpeg", args...)
cmd.Stderr = os.Stderr // Show ffmpeg output for debugging
return cmd.Run()
}

129
cmd/mkdemo-v3/main.go Normal file

@@ -0,0 +1,129 @@
// mkdemo-v3 creates a v3 chunked SMSG file for streaming demos
package main
import (
"crypto/rand"
"encoding/base64"
"fmt"
"os"
"path/filepath"
"github.com/Snider/Borg/pkg/smsg"
)
func main() {
if len(os.Args) < 3 {
fmt.Println("Usage: mkdemo-v3 <input-media-file> <output-smsg-file> [license] [chunk-size-kb]")
fmt.Println("")
fmt.Println("Creates a v3 chunked SMSG file for streaming demos.")
fmt.Println("V3 uses rolling keys derived from: LTHN(date:license:fingerprint)")
fmt.Println("")
fmt.Println("Options:")
fmt.Println(" license The license key (default: auto-generated)")
fmt.Println(" chunk-size-kb Chunk size in KB (default: 512)")
fmt.Println("")
fmt.Println("Note: V3 files work for 24-48 hours from creation (rolling keys).")
os.Exit(1)
}
inputFile := os.Args[1]
outputFile := os.Args[2]
// Read input file
content, err := os.ReadFile(inputFile)
if err != nil {
fmt.Printf("Failed to read input file: %v\n", err)
os.Exit(1)
}
// License (acts as password in v3)
var license string
if len(os.Args) > 3 {
license = os.Args[3]
} else {
// Generate cryptographically secure license
licenseBytes := make([]byte, 24)
if _, err := rand.Read(licenseBytes); err != nil {
fmt.Printf("Failed to generate license: %v\n", err)
os.Exit(1)
}
license = base64.RawURLEncoding.EncodeToString(licenseBytes)
}
// Chunk size (default 512KB for good streaming granularity)
chunkSize := 512 * 1024
if len(os.Args) > 4 {
var chunkKB int
if _, err := fmt.Sscanf(os.Args[4], "%d", &chunkKB); err == nil && chunkKB > 0 {
chunkSize = chunkKB * 1024
}
}
// Create manifest
title := filepath.Base(inputFile)
ext := filepath.Ext(title)
if ext != "" {
title = title[:len(title)-len(ext)]
}
manifest := smsg.NewManifest(title)
manifest.LicenseType = "streaming"
manifest.Format = "dapp.fm/v3-chunked"
// Detect MIME type
mimeType := "video/mp4"
switch ext {
case ".mp3":
mimeType = "audio/mpeg"
case ".wav":
mimeType = "audio/wav"
case ".flac":
mimeType = "audio/flac"
case ".webm":
mimeType = "video/webm"
case ".ogg":
mimeType = "audio/ogg"
}
// Create message with attachment
msg := smsg.NewMessage("dapp.fm V3 Streaming Demo - Decrypt-while-downloading enabled")
msg.Subject = "V3 Chunked Streaming"
msg.From = "dapp.fm"
msg.AddBinaryAttachment(
filepath.Base(inputFile),
content,
mimeType,
)
// Create stream params with chunking enabled
params := &smsg.StreamParams{
License: license,
Fingerprint: "", // Empty for demo (works for any device)
Cadence: smsg.CadenceDaily,
ChunkSize: chunkSize,
}
// Encrypt with v3 chunked format
encrypted, err := smsg.EncryptV3(msg, params, manifest)
if err != nil {
fmt.Printf("Failed to encrypt: %v\n", err)
os.Exit(1)
}
// Write output
if err := os.WriteFile(outputFile, encrypted, 0644); err != nil {
fmt.Printf("Failed to write output: %v\n", err)
os.Exit(1)
}
// Calculate chunk count
numChunks := (len(content) + chunkSize - 1) / chunkSize
fmt.Printf("Created: %s (%d bytes)\n", outputFile, len(encrypted))
fmt.Printf("Format: v3 chunked\n")
fmt.Printf("Chunk Size: %d KB\n", chunkSize/1024)
fmt.Printf("Total Chunks: ~%d\n", numChunks)
fmt.Printf("License: %s\n", license)
fmt.Println("")
fmt.Println("This license works for 24-48 hours from creation.")
fmt.Println("Use the license in the streaming demo to decrypt.")
}

View file

@@ -42,19 +42,15 @@ func main() {
password = base64.RawURLEncoding.EncodeToString(passwordBytes)
}
// Create manifest
manifest := smsg.NewManifest("It Feels So Good (The Conductor & The Cowboy's Amnesia Mix)")
manifest.Artist = "Sonique"
// Create manifest with filename as title
title := filepath.Base(inputFile)
ext := filepath.Ext(title)
if ext != "" {
title = title[:len(title)-len(ext)]
}
manifest := smsg.NewManifest(title)
manifest.LicenseType = "perpetual"
manifest.Format = "dapp.fm/v1"
manifest.ReleaseType = "single"
manifest.Duration = 253 // 4:13
manifest.AddTrack("It Feels So Good (The Conductor & The Cowboy's Amnesia Mix)", 0)
// Artist links - direct to artist, skip the middlemen
// "home" = preferred landing page, artist name should always link here
manifest.AddLink("home", "https://linktr.ee/conductorandcowboy")
manifest.AddLink("beatport", "https://www.beatport.com/artist/the-conductor-the-cowboy/635335")
// Create message with attachment (using binary attachment for v2 format)
msg := smsg.NewMessage("Welcome to dapp.fm - Zero-Trust DRM for the open web.")

BIN
demo/demo-sample.smsg Normal file

Binary file not shown.

BIN
demo/demo-track-v3.smsg Normal file

Binary file not shown.

File diff suppressed because it is too large

Binary file not shown.

281
docs/ipfs-distribution.md Normal file

@@ -0,0 +1,281 @@
# IPFS Distribution Guide
This guide explains how to distribute your encrypted `.smsg` content via IPFS (InterPlanetary File System) for permanent, decentralized hosting.
## Why IPFS?
IPFS is ideal for dapp.fm content because:
- **Permanent links** - Content-addressed (CID) means the URL never changes
- **No hosting costs** - Pin with free services or self-host
- **Censorship resistant** - No single point of failure
- **Global CDN** - Content served from nearest peer
- **Perfect for archival** - Your content survives even if you disappear
Combined with password-as-license, IPFS creates truly permanent media distribution:
```
Artist uploads to IPFS → Fan downloads from anywhere → Password unlocks forever
```
## Quick Start
### 1. Install IPFS
**macOS:**
```bash
brew install ipfs
```
**Linux:**
```bash
wget https://dist.ipfs.tech/kubo/v0.24.0/kubo_v0.24.0_linux-amd64.tar.gz
tar xvfz kubo_v0.24.0_linux-amd64.tar.gz
sudo mv kubo/ipfs /usr/local/bin/
```
**Windows:**
Download from https://dist.ipfs.tech/#kubo
### 2. Initialize and Start
```bash
ipfs init
ipfs daemon
```
### 3. Add Your Content
```bash
# Create your encrypted content first
go run ./cmd/mkdemo my-album.mp4 my-album.smsg
# Add to IPFS
ipfs add my-album.smsg
# Output: added QmX...abc my-album.smsg
# Your content is now available at:
# - Local: http://localhost:8080/ipfs/QmX...abc
# - Gateway: https://ipfs.io/ipfs/QmX...abc
```
## Distribution Workflow
### For Artists
```bash
# 1. Package your media
go run ./cmd/mkdemo album.mp4 album.smsg
# Save the password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# 2. Add to IPFS
ipfs add album.smsg
# added QmYourContentCID album.smsg
# 3. Pin for persistence (choose one):
# Option A: Pin locally (requires running node)
ipfs pin add QmYourContentCID
# Option B: Use Pinata (free tier: 1GB)
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" \
-H "Authorization: Bearer YOUR_JWT" \
-H "Content-Type: application/json" \
-d '{"hashToPin": "QmYourContentCID"}'
# Option C: Use web3.storage (free tier: 5GB)
# Upload at https://web3.storage
# 4. Share with fans
# CID: QmYourContentCID
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# Gateway URL: https://ipfs.io/ipfs/QmYourContentCID
```
### For Fans
```bash
# Download via any gateway
curl -o album.smsg https://ipfs.io/ipfs/QmYourContentCID
# Or via local node (faster if running)
ipfs get QmYourContentCID -o album.smsg
# Play with password in browser demo or native app
```
## IPFS Gateways
Public gateways for sharing (no IPFS node required):
| Gateway | URL Pattern | Notes |
|---------|-------------|-------|
| ipfs.io | `https://ipfs.io/ipfs/{CID}` | Official, reliable |
| dweb.link | `https://{CID}.ipfs.dweb.link` | Subdomain style |
| cloudflare | `https://cloudflare-ipfs.com/ipfs/{CID}` | Fast, cached |
| w3s.link | `https://{CID}.ipfs.w3s.link` | web3.storage |
| nftstorage.link | `https://{CID}.ipfs.nftstorage.link` | NFT.storage |
**Example URLs for CID `QmX...abc`:**
```
https://ipfs.io/ipfs/QmX...abc
https://QmX...abc.ipfs.dweb.link
https://cloudflare-ipfs.com/ipfs/QmX...abc
```
## Pinning Services
Content on IPFS is only available while someone is hosting it. Use pinning services for persistence:
### Free Options
| Service | Free Tier | Link |
|---------|-----------|------|
| Pinata | 1 GB | https://pinata.cloud |
| web3.storage | 5 GB | https://web3.storage |
| NFT.storage | Unlimited* | https://nft.storage |
| Filebase | 5 GB | https://filebase.com |
*NFT.storage is designed for NFT data but works for any content.
### Pin via CLI
```bash
# Pinata
export PINATA_JWT="your-jwt-token"
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" \
-H "Authorization: Bearer $PINATA_JWT" \
-H "Content-Type: application/json" \
-d '{"hashToPin": "QmYourCID", "pinataMetadata": {"name": "my-album.smsg"}}'
# web3.storage (using w3 CLI)
npm install -g @web3-storage/w3cli
w3 login your@email.com
w3 up my-album.smsg
```
## Integration with Demo Page
The demo page can load content directly from IPFS gateways:
```javascript
// In the demo page, use gateway URL
const ipfsCID = 'QmYourContentCID';
const gatewayUrl = `https://ipfs.io/ipfs/${ipfsCID}`;
// Fetch and decrypt
const response = await fetch(gatewayUrl);
const bytes = new Uint8Array(await response.arrayBuffer());
const msg = await BorgSMSG.decryptBinary(bytes, password);
```
Or use the Fan tab with the IPFS gateway URL directly.
## Best Practices
### 1. Always Pin Your Content
IPFS garbage-collects unpinned content. Always pin important files:
```bash
ipfs pin add QmYourCID
# Or use a pinning service
```
### 2. Use Multiple Pins
Pin with 2-3 services for redundancy:
```bash
# Pin locally
ipfs pin add QmYourCID
# Also pin with Pinata
curl -X POST "https://api.pinata.cloud/pinning/pinByHash" ...
# And web3.storage as backup
w3 up my-album.smsg
```
### 3. Share CID + Password Separately
```
Download: https://ipfs.io/ipfs/QmYourCID
License: [sent via email/DM after purchase]
```
### 4. Use IPNS for Updates (Optional)
IPNS lets you update content while keeping the same URL:
```bash
# Create IPNS name
ipfs name publish QmYourCID
# Published to k51...xyz
# Your content is now at:
# https://ipfs.io/ipns/k51...xyz
# Update to new version later:
ipfs name publish QmNewVersionCID
```
## Example: Full Album Release
```bash
# 1. Create encrypted album
go run ./cmd/mkdemo my-album.mp4 my-album.smsg
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# 2. Add to IPFS
ipfs add my-album.smsg
# added QmAlbumCID my-album.smsg
# 3. Pin with multiple services
ipfs pin add QmAlbumCID
w3 up my-album.smsg
# 4. Create release page
cat > release.html << 'EOF'
<!DOCTYPE html>
<html>
<head><title>My Album - Download</title></head>
<body>
<h1>My Album</h1>
<p>Download: <a href="https://ipfs.io/ipfs/QmAlbumCID">IPFS</a></p>
<p>After purchase, you'll receive your license key via email.</p>
<p><a href="https://demo.dapp.fm">Play with license key</a></p>
</body>
</html>
EOF
# 5. Host release page on IPFS too!
ipfs add release.html
# added QmReleaseCID release.html
# Share: https://ipfs.io/ipfs/QmReleaseCID
```
## Troubleshooting
### Content Not Loading
1. **Check if pinned**: `ipfs pin ls | grep QmYourCID`
2. **Try different gateway**: Some gateways cache slowly
3. **Check daemon running**: `ipfs swarm peers` should show peers
### Slow Downloads
1. Use a faster gateway (cloudflare-ipfs.com is often fastest)
2. Run your own IPFS node for direct access
3. Pre-warm gateways by accessing content once
### CID Changed After Re-adding
IPFS CIDs are content-addressed. If you modify the file, the CID changes. For the same content, the CID is always identical.
## Resources
- [IPFS Documentation](https://docs.ipfs.tech/)
- [Pinata Docs](https://docs.pinata.cloud/)
- [web3.storage Docs](https://web3.storage/docs/)
- [IPFS Gateway Checker](https://ipfs.github.io/public-gateway-checker/)

497
docs/payment-integration.md Normal file

@@ -0,0 +1,497 @@
# Payment Integration Guide
This guide shows how to sell your encrypted `.smsg` content and deliver license keys (passwords) to customers using popular payment processors.
## Overview
The dapp.fm model is simple:
```
1. Customer pays via Stripe/Gumroad/PayPal
2. Payment processor triggers webhook or delivers digital product
3. Customer receives password (license key)
4. Customer downloads .smsg from your CDN/IPFS
5. Customer decrypts with password - done forever
```
No license servers, no accounts, no ongoing infrastructure.
## Stripe Integration
### Option 1: Stripe Payment Links (Easiest)
No code required - use Stripe's hosted checkout:
1. Create a Payment Link in Stripe Dashboard
2. Set up a webhook to email the password on successful payment
3. Host your `.smsg` file anywhere (CDN, IPFS, S3)
**Webhook endpoint (Node.js/Express):**
```javascript
const express = require('express');
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
const nodemailer = require('nodemailer');
const app = express();
// Your content passwords (store securely!)
const PRODUCTS = {
'prod_ABC123': {
name: 'My Album',
password: 'PMVXogAJNVe_DDABfTmLYztaJAzsD0R7',
downloadUrl: 'https://ipfs.io/ipfs/QmYourCID'
}
};
app.post('/webhook', express.raw({type: 'application/json'}), async (req, res) => {
const sig = req.headers['stripe-signature'];
const endpointSecret = process.env.STRIPE_WEBHOOK_SECRET;
let event;
try {
event = stripe.webhooks.constructEvent(req.body, sig, endpointSecret);
} catch (err) {
return res.status(400).send(`Webhook Error: ${err.message}`);
}
if (event.type === 'checkout.session.completed') {
const session = event.data.object;
const customerEmail = session.customer_details.email;
const productId = session.metadata.product_id;
const product = PRODUCTS[productId];
if (product) {
await sendLicenseEmail(customerEmail, product);
}
}
res.json({received: true});
});
async function sendLicenseEmail(email, product) {
const transporter = nodemailer.createTransport({
// Configure your email provider
service: 'gmail',
auth: {
user: process.env.EMAIL_USER,
pass: process.env.EMAIL_PASS
}
});
await transporter.sendMail({
from: 'artist@example.com',
to: email,
subject: `Your License Key for ${product.name}`,
html: `
<h1>Thank you for your purchase!</h1>
<p><strong>Download:</strong> <a href="${product.downloadUrl}">${product.name}</a></p>
<p><strong>License Key:</strong> <code>${product.password}</code></p>
<p><strong>How to play:</strong></p>
<ol>
<li>Download the .smsg file from the link above</li>
<li>Go to <a href="https://demo.dapp.fm">demo.dapp.fm</a></li>
<li>Click "Fan" tab, then "Unlock Licensed Content"</li>
<li>Paste the file and enter your license key</li>
</ol>
<p>This is your permanent license - save this email!</p>
`
});
}
app.listen(3000);
```
### Option 2: Stripe Checkout Session (More Control)
```javascript
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);
// Create checkout session
app.post('/create-checkout', async (req, res) => {
const { productId } = req.body;
const session = await stripe.checkout.sessions.create({
payment_method_types: ['card'],
line_items: [{
price: 'price_ABC123', // Your Stripe price ID
quantity: 1,
}],
mode: 'payment',
success_url: 'https://yoursite.com/success?session_id={CHECKOUT_SESSION_ID}',
cancel_url: 'https://yoursite.com/cancel',
metadata: {
product_id: productId
}
});
res.json({ url: session.url });
});
// Success page - show license after payment
app.get('/success', async (req, res) => {
const session = await stripe.checkout.sessions.retrieve(req.query.session_id);
if (session.payment_status === 'paid') {
const product = PRODUCTS[session.metadata.product_id];
res.send(`
<h1>Thank you!</h1>
<p>Download: <a href="${product.downloadUrl}">${product.name}</a></p>
<p>License Key: <code>${product.password}</code></p>
`);
} else {
res.send('Payment not completed');
}
});
```
## Gumroad Integration
Gumroad is perfect for artists - handles payments, delivery, and customer management.
### Setup
1. Create a Digital Product on Gumroad
2. Upload a text file or PDF containing the password
3. Set your `.smsg` download URL in the product description
4. Gumroad delivers the password file on purchase
### Product Setup
**Product Description:**
```
My Album - Encrypted Digital Download
After purchase, you'll receive:
1. A license key (in the download)
2. Download link for the .smsg file
How to play:
1. Download the .smsg file: https://ipfs.io/ipfs/QmYourCID
2. Go to https://demo.dapp.fm
3. Click "Fan" → "Unlock Licensed Content"
4. Enter your license key from the PDF
```
**Delivered File (license.txt):**
```
Your License Key: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
Download your content: https://ipfs.io/ipfs/QmYourCID
This is your permanent license - keep this file safe!
The content works offline forever with this key.
Need help? Visit https://demo.dapp.fm
```
### Gumroad Ping (Webhook)
For automated delivery, use Gumroad's Ping feature:
```javascript
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true }));
// Gumroad sends POST to this endpoint on sale
app.post('/gumroad-ping', (req, res) => {
const {
seller_id,
product_id,
email,
full_name,
purchaser_id
} = req.body;
// Verify it's from Gumroad (check seller_id matches yours)
if (seller_id !== process.env.GUMROAD_SELLER_ID) {
return res.status(403).send('Invalid seller');
}
const product = PRODUCTS[product_id];
if (product) {
// Send custom email with password
sendLicenseEmail(email, product);
}
res.send('OK');
});
```
## PayPal Integration
### PayPal Buttons + IPN
```html
<!-- PayPal Buy Button -->
<form action="https://www.paypal.com/cgi-bin/webscr" method="post">
<input type="hidden" name="cmd" value="_xclick">
<input type="hidden" name="business" value="artist@example.com">
<input type="hidden" name="item_name" value="My Album - Digital Download">
<input type="hidden" name="item_number" value="album-001">
<input type="hidden" name="amount" value="9.99">
<input type="hidden" name="currency_code" value="USD">
<input type="hidden" name="notify_url" value="https://yoursite.com/paypal-ipn">
<input type="hidden" name="return" value="https://yoursite.com/thank-you">
<input type="submit" value="Buy Now - $9.99">
</form>
```
**IPN Handler:**
```javascript
const express = require('express');
const axios = require('axios');
const app = express();
app.post('/paypal-ipn', express.urlencoded({ extended: true }), async (req, res) => {
// Verify with PayPal
const verifyUrl = 'https://ipnpb.paypal.com/cgi-bin/webscr';
const verifyBody = 'cmd=_notify-validate&' + new URLSearchParams(req.body).toString();
const response = await axios.post(verifyUrl, verifyBody);
if (response.data === 'VERIFIED' && req.body.payment_status === 'Completed') {
const email = req.body.payer_email;
const itemNumber = req.body.item_number;
const product = PRODUCTS[itemNumber];
if (product) {
await sendLicenseEmail(email, product);
}
}
res.send('OK');
});
```
## Ko-fi Integration
Ko-fi is great for tips and single purchases.
### Setup
1. Enable "Commissions" or "Shop" on Ko-fi
2. Create a product with the license key in the thank-you message
3. Link to your .smsg download
**Ko-fi Thank You Message:**
```
Thank you for your purchase!
Your License Key: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
Download: https://ipfs.io/ipfs/QmYourCID
Play at: https://demo.dapp.fm (Fan → Unlock Licensed Content)
```
## Serverless Options
### Vercel/Netlify Functions
No server needed - use serverless functions:
```javascript
// api/stripe-webhook.js (Vercel)
import Stripe from 'stripe';
import { Resend } from 'resend';
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
const resend = new Resend(process.env.RESEND_API_KEY);
export default async function handler(req, res) {
if (req.method !== 'POST') {
return res.status(405).end();
}
const sig = req.headers['stripe-signature'];
const event = stripe.webhooks.constructEvent(
req.body,
sig,
process.env.STRIPE_WEBHOOK_SECRET
);
if (event.type === 'checkout.session.completed') {
const session = event.data.object;
await resend.emails.send({
from: 'artist@yoursite.com',
to: session.customer_details.email,
subject: 'Your License Key',
html: `
<p>Download: <a href="https://ipfs.io/ipfs/QmYourCID">My Album</a></p>
<p>License Key: <code>PMVXogAJNVe_DDABfTmLYztaJAzsD0R7</code></p>
`
});
}
res.json({ received: true });
}
export const config = {
api: { bodyParser: false }
};
```
## Manual Workflow (No Code)
For artists who don't want to set up webhooks:
### Using Email
1. **Gumroad/Ko-fi**: Set product to require email
2. **Manual delivery**: Check sales daily, email passwords manually
3. **Template**:
```
Subject: Your License for [Album Name]
Hi [Name],
Thank you for your purchase!
Download: [IPFS/CDN link]
License Key: [password]
How to play:
1. Download the .smsg file
2. Go to demo.dapp.fm
3. Fan tab → Unlock Licensed Content
4. Enter your license key
Enjoy! This license works forever.
[Artist Name]
```
### Using Discord/Telegram
1. Sell via Gumroad (free tier)
2. Require customers join your Discord/Telegram
3. Bot or manual delivery of license keys
4. Community building bonus!
## Security Best Practices
### 1. One Password Per Product
Don't reuse passwords across products:
```javascript
const PRODUCTS = {
'album-2024': { password: 'unique-key-1' },
'album-2023': { password: 'unique-key-2' },
'single-summer': { password: 'unique-key-3' }
};
```
### 2. Environment Variables
Never hardcode passwords in source:
```bash
# .env
ALBUM_2024_PASSWORD=PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
STRIPE_SECRET_KEY=sk_live_...
```
### 3. Webhook Verification
Always verify webhooks are from the payment provider:
```javascript
// Stripe
stripe.webhooks.constructEvent(body, sig, secret);
// Gumroad
if (seller_id !== MY_SELLER_ID) reject();
// PayPal
// POST the notification back to PayPal's IPN endpoint and require 'VERIFIED'
```
### 4. HTTPS Only
All webhook endpoints must use HTTPS.
## Pricing Strategies
### Direct Sale (Perpetual License)
- Customer pays once, owns forever
- Single password for all buyers
- Best for: Albums, films, books
### Time-Limited (Streaming/Rental)
Use dapp.fm Re-Key feature:
1. Encrypt master copy with master password
2. On purchase, re-key with customer-specific password + expiry
3. Deliver unique password per customer
```javascript
// On purchase webhook
const customerPassword = generateUniquePassword();
const expiry = Date.now() + (24 * 60 * 60 * 1000); // 24 hours
// Use WASM or Go to re-key
const customerVersion = await rekeyContent(masterSmsg, masterPassword, customerPassword, expiry);
// Deliver customer-specific file + password
```
### Tiered Access
Different passwords for different tiers:
```javascript
const TIERS = {
'preview': { password: 'preview-key', expiry: '30s' },
'rental': { password: 'rental-key', expiry: '7d' },
'own': { password: 'perpetual-key', expiry: null }
};
```
## Example: Complete Stripe Setup
```bash
# 1. Create your content
go run ./cmd/mkdemo album.mp4 album.smsg
# Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# 2. Upload to IPFS
ipfs add album.smsg
# QmAlbumCID
# 3. Create Stripe product
# Dashboard → Products → Add Product
# Name: My Album
# Price: $9.99
# 4. Create Payment Link
# Dashboard → Payment Links → New
# Select your product
# Get link: https://buy.stripe.com/xxx
# 5. Set up webhook
# Dashboard → Developers → Webhooks → Add endpoint
# URL: https://yoursite.com/api/stripe-webhook
# Events: checkout.session.completed
# 6. Deploy webhook handler (Vercel example)
vercel deploy
# 7. Share payment link
# Fans click → Pay → Get email with password → Download → Play forever
```
## Resources
- [Stripe Webhooks](https://stripe.com/docs/webhooks)
- [Gumroad Ping](https://help.gumroad.com/article/149-ping)
- [PayPal IPN](https://developer.paypal.com/docs/ipn/)
- [Resend (Email API)](https://resend.com/)
- [Vercel Functions](https://vercel.com/docs/functions)

2
go.mod

@@ -60,7 +60,7 @@ require (
github.com/wailsapp/go-webview2 v1.0.22 // indirect
github.com/wailsapp/mimetype v1.4.1 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
golang.org/x/crypto v0.44.0 // indirect
golang.org/x/crypto v0.45.0 // indirect
golang.org/x/sys v0.38.0 // indirect
golang.org/x/term v0.37.0 // indirect
golang.org/x/text v0.31.0 // indirect

4
go.sum

@@ -155,8 +155,8 @@ github.com/xanzy/ssh-agent v0.3.3/go.mod h1:6dzNDKs0J9rVPHPhaGCukekBHKqfl+L3KghI
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20210817164053-32db794688a5/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220622213112-05595931fe9d/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.44.0 h1:A97SsFvM3AIwEEmTBiaxPPTYpDC47w720rdiiUvgoAU=
golang.org/x/crypto v0.44.0/go.mod h1:013i+Nw79BMiQiMsOPcVCB5ZIJbYkerPrGnOa00tvmc=
golang.org/x/crypto v0.45.0 h1:jMBrvKuj23MTlT0bQEOBcAE0mjg8mK9RXFhRH6nyF3Q=
golang.org/x/crypto v0.45.0/go.mod h1:XTGrrkGJve7CYK7J8PEww4aY7gM3qMCElcJQ8n8JdX4=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56 h1:2dVuKD2vS7b0QIHQbpyTISPd0LeHDbnYEryqj5Q1ug8=
golang.org/x/exp v0.0.0-20240719175910-8a7402abbf56/go.mod h1:M4RDyNAINzryxdtnbRXRL/OHtkFuWGRjvuhBJpk2IlY=
golang.org/x/mod v0.30.0 h1:fDEXFVZ/fmCKProc/yAXXUijritrDzahmwwefnjoPFk=

Binary file not shown.

Binary file not shown.

214
pkg/smsg/abr.go Normal file

@@ -0,0 +1,214 @@
// Package smsg - Adaptive Bitrate Streaming (ABR) support
//
// ABR enables multi-bitrate streaming with automatic quality switching based on
// network conditions. Similar to HLS/DASH but with ChaCha20-Poly1305 encryption.
//
// Architecture:
// - Master manifest (.json) lists available quality variants
// - Each variant is a standard v3 chunked .smsg file
// - Same password decrypts all variants (CEK unwrapped once)
// - Player switches variants at chunk boundaries based on bandwidth
package smsg
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"sort"
)
const ABRVersion = "abr-v1"
// ABRSafetyFactor is the bandwidth multiplier for variant selection.
// Using 80% of available bandwidth prevents buffering on fluctuating networks.
const ABRSafetyFactor = 0.8
// NewABRManifest creates a new ABR manifest with the given title.
func NewABRManifest(title string) *ABRManifest {
return &ABRManifest{
Version: ABRVersion,
Title: title,
Variants: make([]Variant, 0),
DefaultIdx: 0,
}
}
// AddVariant adds a quality variant to the manifest.
// Variants are automatically sorted by bandwidth (ascending) after adding.
func (m *ABRManifest) AddVariant(v Variant) {
m.Variants = append(m.Variants, v)
// Sort by bandwidth ascending (lowest quality first)
sort.Slice(m.Variants, func(i, j int) bool {
return m.Variants[i].Bandwidth < m.Variants[j].Bandwidth
})
// Update default to 720p if available, otherwise middle variant
m.DefaultIdx = m.findDefaultVariant()
}
// findDefaultVariant finds the best default variant (prefers 720p).
func (m *ABRManifest) findDefaultVariant() int {
// Prefer 720p as default
for i, v := range m.Variants {
if v.Name == "720p" || v.Height == 720 {
return i
}
}
// Otherwise use middle variant
if len(m.Variants) > 0 {
return len(m.Variants) / 2
}
return 0
}
// SelectVariant selects the best variant for the given bandwidth (bits per second).
// Returns the index of the highest quality variant that fits within the bandwidth.
func (m *ABRManifest) SelectVariant(bandwidthBPS int) int {
safeBandwidth := float64(bandwidthBPS) * ABRSafetyFactor
// Find highest quality that fits
selected := 0
for i, v := range m.Variants {
if float64(v.Bandwidth) <= safeBandwidth {
selected = i
}
}
return selected
}
// GetVariant returns the variant at the given index, or nil if out of range.
func (m *ABRManifest) GetVariant(idx int) *Variant {
if idx < 0 || idx >= len(m.Variants) {
return nil
}
return &m.Variants[idx]
}
// WriteABRManifest writes the ABR manifest to a JSON file.
func WriteABRManifest(manifest *ABRManifest, path string) error {
data, err := json.MarshalIndent(manifest, "", " ")
if err != nil {
return fmt.Errorf("marshal ABR manifest: %w", err)
}
// Ensure directory exists
dir := filepath.Dir(path)
if err := os.MkdirAll(dir, 0755); err != nil {
return fmt.Errorf("create directory: %w", err)
}
if err := os.WriteFile(path, data, 0644); err != nil {
return fmt.Errorf("write ABR manifest: %w", err)
}
return nil
}
// ReadABRManifest reads an ABR manifest from a JSON file.
func ReadABRManifest(path string) (*ABRManifest, error) {
data, err := os.ReadFile(path)
if err != nil {
return nil, fmt.Errorf("read ABR manifest: %w", err)
}
return ParseABRManifest(data)
}
// ParseABRManifest parses an ABR manifest from JSON bytes.
func ParseABRManifest(data []byte) (*ABRManifest, error) {
var manifest ABRManifest
if err := json.Unmarshal(data, &manifest); err != nil {
return nil, fmt.Errorf("parse ABR manifest: %w", err)
}
// Validate version
if manifest.Version != ABRVersion {
return nil, fmt.Errorf("unsupported ABR version: %s (expected %s)", manifest.Version, ABRVersion)
}
return &manifest, nil
}
// VariantFromSMSG creates a Variant from an existing .smsg file.
// It reads the header to extract chunk count and file size.
func VariantFromSMSG(name string, bandwidth, width, height int, smsgPath string) (*Variant, error) {
// Read file to get size and chunk info
data, err := os.ReadFile(smsgPath)
if err != nil {
return nil, fmt.Errorf("read smsg file: %w", err)
}
// Get header to extract chunk count
header, err := GetV3Header(data)
if err != nil {
return nil, fmt.Errorf("parse smsg header: %w", err)
}
chunkCount := 0
if header.Chunked != nil {
chunkCount = header.Chunked.TotalChunks
}
return &Variant{
Name: name,
Bandwidth: bandwidth,
Width: width,
Height: height,
Codecs: "avc1.640028,mp4a.40.2", // Default H.264 + AAC
URL: filepath.Base(smsgPath),
ChunkCount: chunkCount,
FileSize: int64(len(data)),
}, nil
}
// ABRBandwidthEstimator tracks download speeds for adaptive quality selection.
type ABRBandwidthEstimator struct {
samples []int // bandwidth samples in bps
maxSamples int
}
// NewABRBandwidthEstimator creates a new bandwidth estimator.
func NewABRBandwidthEstimator(maxSamples int) *ABRBandwidthEstimator {
if maxSamples <= 0 {
maxSamples = 10
}
return &ABRBandwidthEstimator{
samples: make([]int, 0, maxSamples),
maxSamples: maxSamples,
}
}
// RecordSample records a bandwidth sample from a download.
// bytes is the number of bytes downloaded, durationMs is the time in milliseconds.
func (e *ABRBandwidthEstimator) RecordSample(bytes int, durationMs int) {
if durationMs <= 0 {
return
}
// Calculate bits per second: (bytes * 8 * 1000) / durationMs
bps := (bytes * 8 * 1000) / durationMs
e.samples = append(e.samples, bps)
if len(e.samples) > e.maxSamples {
e.samples = e.samples[1:]
}
}
// Estimate returns the estimated bandwidth in bits per second.
// Uses average of recent samples, or 1 Mbps default if no samples.
func (e *ABRBandwidthEstimator) Estimate() int {
if len(e.samples) == 0 {
return 1000000 // 1 Mbps default
}
// Use average of last 3 samples (or all if fewer)
count := 3
if len(e.samples) < count {
count = len(e.samples)
}
recent := e.samples[len(e.samples)-count:]
sum := 0
for _, s := range recent {
sum += s
}
return sum / count
}

View file

@@ -1,5 +1,23 @@
package smsg
// SMSG (Secure Message) provides ChaCha20-Poly1305 authenticated encryption.
//
// IMPORTANT: Nonce handling for developers
// =========================================
// Enchantrix embeds the nonce directly in the ciphertext:
//
// [24-byte nonce][encrypted data][16-byte auth tag]
//
// The nonce is NOT transmitted separately in headers. It is:
// - Generated fresh (random) for each encryption
// - Extracted automatically from ciphertext during decryption
// - Safe to transmit (public) - only the KEY must remain secret
//
// This means wrapped keys, encrypted payloads, etc. are self-contained.
// You only need the correct key to decrypt - no nonce management required.
//
// See: github.com/Snider/Enchantrix/pkg/enchantrix/crypto_sigil.go
import (
"bytes"
"compress/gzip"

827
pkg/smsg/stream.go Normal file

@@ -0,0 +1,827 @@
package smsg
// V3 Streaming Support with LTHN Rolling Keys
//
// This file implements zero-trust streaming where:
// - Content is encrypted once with a random CEK (Content Encryption Key)
// - CEK is wrapped (encrypted) with time-bound stream keys
// - Stream keys are derived using LTHN(date:license:fingerprint)
// - Rolling window: today and tomorrow keys are valid (24-48hr window)
// - Keys auto-expire - no revocation needed
//
// Server flow:
// 1. Generate random CEK
// 2. Encrypt content with CEK
// 3. For today & tomorrow: wrap CEK with DeriveStreamKey(date, license, fingerprint)
// 4. Store wrapped keys in header
//
// Client flow:
// 1. Derive stream key for today (or tomorrow)
// 2. Try to unwrap CEK from header
// 3. Decrypt content with CEK
import (
"crypto/rand"
"crypto/sha256"
"encoding/base64"
"encoding/binary"
"encoding/json"
"fmt"
"time"
"github.com/Snider/Enchantrix/pkg/crypt"
"github.com/Snider/Enchantrix/pkg/enchantrix"
"github.com/Snider/Enchantrix/pkg/trix"
)
// StreamParams contains the parameters needed for stream key derivation
type StreamParams struct {
License string // User's license identifier
Fingerprint string // Device/session fingerprint
Cadence Cadence // Key rotation cadence (default: daily)
ChunkSize int // Optional: chunk size for decrypt-while-downloading (0 = no chunking)
}
// DeriveStreamKey derives a 32-byte ChaCha key from date, license, and fingerprint.
// Uses LTHN hash which is rainbow-table resistant (salt derived from input itself).
//
// The derived key is: SHA256(LTHN("YYYY-MM-DD:license:fingerprint"))
func DeriveStreamKey(date, license, fingerprint string) []byte {
// Build input string
input := fmt.Sprintf("%s:%s:%s", date, license, fingerprint)
// Use Enchantrix crypt service for LTHN hash
cryptService := crypt.NewService()
lthnHash := cryptService.Hash(crypt.LTHN, input)
// LTHN returns hex string, hash it again to get 32 bytes for ChaCha
key := sha256.Sum256([]byte(lthnHash))
return key[:]
}
// GetRollingDates returns today and tomorrow's date strings in YYYY-MM-DD format
// This is the default daily cadence.
func GetRollingDates() (current, next string) {
return GetRollingPeriods(CadenceDaily, time.Now().UTC())
}
// GetRollingDatesAt returns today and tomorrow relative to a specific time
func GetRollingDatesAt(t time.Time) (current, next string) {
return GetRollingPeriods(CadenceDaily, t.UTC())
}
// GetRollingPeriods returns the current and next period strings based on cadence.
// The period string format varies by cadence:
// - daily: "2006-01-02"
// - 12h: "2006-01-02-AM" or "2006-01-02-PM"
// - 6h: "2006-01-02-00", "2006-01-02-06", "2006-01-02-12", "2006-01-02-18"
// - 1h: "2006-01-02-15" (hour in 24h format)
func GetRollingPeriods(cadence Cadence, t time.Time) (current, next string) {
t = t.UTC()
switch cadence {
case CadenceHalfDay:
// 12-hour periods: AM (00:00-11:59) and PM (12:00-23:59)
date := t.Format("2006-01-02")
if t.Hour() < 12 {
current = date + "-AM"
next = date + "-PM"
} else {
current = date + "-PM"
next = t.AddDate(0, 0, 1).Format("2006-01-02") + "-AM"
}
case CadenceQuarter:
// 6-hour periods: 00, 06, 12, 18
date := t.Format("2006-01-02")
hour := t.Hour()
period := (hour / 6) * 6
nextPeriod := period + 6
current = fmt.Sprintf("%s-%02d", date, period)
if nextPeriod >= 24 {
next = fmt.Sprintf("%s-%02d", t.AddDate(0, 0, 1).Format("2006-01-02"), 0)
} else {
next = fmt.Sprintf("%s-%02d", date, nextPeriod)
}
case CadenceHourly:
// Hourly periods
current = t.Format("2006-01-02-15")
next = t.Add(time.Hour).Format("2006-01-02-15")
default: // CadenceDaily or empty
current = t.Format("2006-01-02")
next = t.AddDate(0, 0, 1).Format("2006-01-02")
}
return
}
// GetCadenceWindowDuration returns the duration of one period for a cadence
func GetCadenceWindowDuration(cadence Cadence) time.Duration {
switch cadence {
case CadenceHourly:
return time.Hour
case CadenceQuarter:
return 6 * time.Hour
case CadenceHalfDay:
return 12 * time.Hour
default: // CadenceDaily
return 24 * time.Hour
}
}
// WrapCEK wraps a Content Encryption Key with a stream key
// Returns base64-encoded wrapped key (includes nonce)
func WrapCEK(cek, streamKey []byte) (string, error) {
sigil, err := enchantrix.NewChaChaPolySigil(streamKey)
if err != nil {
return "", fmt.Errorf("failed to create sigil: %w", err)
}
wrapped, err := sigil.In(cek)
if err != nil {
return "", fmt.Errorf("failed to wrap CEK: %w", err)
}
return base64.StdEncoding.EncodeToString(wrapped), nil
}
// UnwrapCEK unwraps a Content Encryption Key using a stream key
// Takes base64-encoded wrapped key, returns raw CEK bytes
func UnwrapCEK(wrappedB64 string, streamKey []byte) ([]byte, error) {
wrapped, err := base64.StdEncoding.DecodeString(wrappedB64)
if err != nil {
return nil, fmt.Errorf("failed to decode wrapped key: %w", err)
}
sigil, err := enchantrix.NewChaChaPolySigil(streamKey)
if err != nil {
return nil, fmt.Errorf("failed to create sigil: %w", err)
}
cek, err := sigil.Out(wrapped)
if err != nil {
return nil, ErrDecryptionFailed
}
return cek, nil
}
// GenerateCEK generates a random 32-byte Content Encryption Key
func GenerateCEK() ([]byte, error) {
cek := make([]byte, 32)
if _, err := rand.Read(cek); err != nil {
return nil, fmt.Errorf("failed to generate CEK: %w", err)
}
return cek, nil
}
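// Example (illustrative, placeholder values): the CEK lifecycle for a single period.
//
//	cek, _ := GenerateCEK()
//	streamKey := DeriveStreamKey("2026-01-12", "license-123", "device-fp")
//	wrapped, _ := WrapCEK(cek, streamKey)    // base64 string stored in the header
//	same, _ := UnwrapCEK(wrapped, streamKey) // recovers the original 32-byte CEK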
// EncryptV3 encrypts a message using v3 streaming format with rolling keys.
// The content is encrypted with a random CEK, which is then wrapped with
// stream keys for today and tomorrow.
//
// When params.ChunkSize > 0, content is split into independently decryptable
// chunks, enabling decrypt-while-downloading and seeking.
func EncryptV3(msg *Message, params *StreamParams, manifest *Manifest) ([]byte, error) {
if params == nil || params.License == "" {
return nil, ErrLicenseRequired
}
if msg.Body == "" && len(msg.Attachments) == 0 {
return nil, ErrEmptyMessage
}
// Set timestamp if not set
if msg.Timestamp == 0 {
msg.Timestamp = time.Now().Unix()
}
// Generate random CEK
cek, err := GenerateCEK()
if err != nil {
return nil, err
}
// Determine cadence (default to daily if not specified)
cadence := params.Cadence
if cadence == "" {
cadence = CadenceDaily
}
// Get rolling periods based on cadence
current, next := GetRollingPeriods(cadence, time.Now().UTC())
// Wrap CEK with current period's stream key
currentKey := DeriveStreamKey(current, params.License, params.Fingerprint)
wrappedCurrent, err := WrapCEK(cek, currentKey)
if err != nil {
return nil, fmt.Errorf("failed to wrap CEK for current period: %w", err)
}
// Wrap CEK with next period's stream key
nextKey := DeriveStreamKey(next, params.License, params.Fingerprint)
wrappedNext, err := WrapCEK(cek, nextKey)
if err != nil {
return nil, fmt.Errorf("failed to wrap CEK for next period: %w", err)
}
// Check if chunked mode requested
if params.ChunkSize > 0 {
return encryptV3Chunked(msg, params, manifest, cek, cadence, current, next, wrappedCurrent, wrappedNext)
}
// Non-chunked v3 (original behavior)
return encryptV3Standard(msg, params, manifest, cek, cadence, current, next, wrappedCurrent, wrappedNext)
}
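// Example (illustrative round trip; parameter values are placeholders):
//
//	params := &StreamParams{License: "license-123", Fingerprint: "device-fp"}
//	blob, _ := EncryptV3(NewMessage("hello"), params, nil)
//	msg, header, _ := DecryptV3(blob, params)
//	// header.WrappedKeys holds the CEK wrapped for the current and next periods.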
// encryptV3Standard encrypts as a single block (original v3 behavior)
func encryptV3Standard(msg *Message, params *StreamParams, manifest *Manifest, cek []byte, cadence Cadence, current, next, wrappedCurrent, wrappedNext string) ([]byte, error) {
// Build v3 payload (similar to v2 but encrypted with CEK)
payload, attachmentData, err := buildV3Payload(msg)
if err != nil {
return nil, err
}
// Compress payload
compressed, err := zstdCompress(payload)
if err != nil {
return nil, fmt.Errorf("compression failed: %w", err)
}
// Encrypt with CEK
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
return nil, fmt.Errorf("failed to create sigil: %w", err)
}
encrypted, err := sigil.In(compressed)
if err != nil {
return nil, fmt.Errorf("encryption failed: %w", err)
}
// Encrypt attachment data with CEK
encryptedAttachments, err := sigil.In(attachmentData)
if err != nil {
return nil, fmt.Errorf("attachment encryption failed: %w", err)
}
// Create header with wrapped keys
headerMap := map[string]interface{}{
"version": Version,
"algorithm": "chacha20poly1305",
"format": FormatV3,
"compression": CompressionZstd,
"keyMethod": KeyMethodLTHNRolling,
"cadence": string(cadence),
"wrappedKeys": []WrappedKey{
{Date: current, Wrapped: wrappedCurrent},
{Date: next, Wrapped: wrappedNext},
},
}
if manifest != nil {
if manifest.IssuedAt == 0 {
manifest.IssuedAt = time.Now().Unix()
}
headerMap["manifest"] = manifest
}
// Build v3 binary format: [4-byte json len][json header][encrypted payload][encrypted attachments]
headerJSON, err := json.Marshal(headerMap)
if err != nil {
return nil, fmt.Errorf("failed to marshal header: %w", err)
}
// Calculate total size
totalSize := 4 + len(headerJSON) + 4 + len(encrypted) + len(encryptedAttachments)
output := make([]byte, 0, totalSize)
// Write header length (4 bytes, big-endian)
headerLen := make([]byte, 4)
binary.BigEndian.PutUint32(headerLen, uint32(len(headerJSON)))
output = append(output, headerLen...)
// Write header JSON
output = append(output, headerJSON...)
// Write encrypted payload length (4 bytes, big-endian)
payloadLen := make([]byte, 4)
binary.BigEndian.PutUint32(payloadLen, uint32(len(encrypted)))
output = append(output, payloadLen...)
// Write encrypted payload
output = append(output, encrypted...)
// Write encrypted attachments
output = append(output, encryptedAttachments...)
// Wrap in trix container
t := &trix.Trix{
Header: headerMap,
Payload: output,
}
return trix.Encode(t, Magic, nil)
}
// encryptV3Chunked encrypts content into independently decryptable chunks
func encryptV3Chunked(msg *Message, params *StreamParams, manifest *Manifest, cek []byte, cadence Cadence, current, next, wrappedCurrent, wrappedNext string) ([]byte, error) {
chunkSize := params.ChunkSize
// Build raw content to chunk: metadata JSON + binary attachments
metaJSON, attachmentData, err := buildV3Payload(msg)
if err != nil {
return nil, err
}
// Combine into single byte slice for chunking
rawContent := append(metaJSON, attachmentData...)
totalSize := int64(len(rawContent))
// Create sigil with CEK for chunk encryption
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
return nil, fmt.Errorf("failed to create sigil: %w", err)
}
// Encrypt in chunks
var chunks [][]byte
var chunkIndex []ChunkInfo
offset := 0
for i := 0; offset < len(rawContent); i++ {
// Determine this chunk's size
end := offset + chunkSize
if end > len(rawContent) {
end = len(rawContent)
}
chunkData := rawContent[offset:end]
// Encrypt chunk (each gets its own nonce)
encryptedChunk, err := sigil.In(chunkData)
if err != nil {
return nil, fmt.Errorf("failed to encrypt chunk %d: %w", i, err)
}
chunks = append(chunks, encryptedChunk)
chunkIndex = append(chunkIndex, ChunkInfo{
Offset: 0, // Will be calculated after we know all sizes
Size: len(encryptedChunk),
})
offset = end
}
// Calculate chunk offsets
currentOffset := 0
for i := range chunkIndex {
chunkIndex[i].Offset = currentOffset
currentOffset += chunkIndex[i].Size
}
// Build header with chunked info
chunkedInfo := &ChunkedInfo{
ChunkSize: chunkSize,
TotalChunks: len(chunks),
TotalSize: totalSize,
Index: chunkIndex,
}
headerMap := map[string]interface{}{
"version": Version,
"algorithm": "chacha20poly1305",
"format": FormatV3,
"compression": CompressionNone, // No compression in chunked mode (per-chunk not supported yet)
"keyMethod": KeyMethodLTHNRolling,
"cadence": string(cadence),
"chunked": chunkedInfo,
"wrappedKeys": []WrappedKey{
{Date: current, Wrapped: wrappedCurrent},
{Date: next, Wrapped: wrappedNext},
},
}
if manifest != nil {
if manifest.IssuedAt == 0 {
manifest.IssuedAt = time.Now().Unix()
}
headerMap["manifest"] = manifest
}
// Concatenate all encrypted chunks
var payload []byte
for _, chunk := range chunks {
payload = append(payload, chunk...)
}
// Wrap in trix container
t := &trix.Trix{
Header: headerMap,
Payload: payload,
}
return trix.Encode(t, Magic, nil)
}
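// Payload layout in chunked mode (illustrative): encrypted chunks are simply
// concatenated, and ChunkedInfo.Index records where each one starts.
//
//	[chunk 0][chunk 1]...[chunk N-1]
//	Index[i].Offset / Index[i].Size locate chunk i within the payload;
//	each chunk carries its own nonce and auth tag, so it decrypts independently.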
// DecryptV3 decrypts a v3 streaming message using rolling keys.
// It tries the current period's key first, then the next period's key.
// Automatically handles both chunked and non-chunked v3 formats.
func DecryptV3(data []byte, params *StreamParams) (*Message, *Header, error) {
if params == nil || params.License == "" {
return nil, nil, ErrLicenseRequired
}
// Decode trix container
t, err := trix.Decode(data, Magic, nil)
if err != nil {
return nil, nil, fmt.Errorf("failed to decode container: %w", err)
}
// Parse header
headerJSON, err := json.Marshal(t.Header)
if err != nil {
return nil, nil, fmt.Errorf("failed to marshal header: %w", err)
}
var header Header
if err := json.Unmarshal(headerJSON, &header); err != nil {
return nil, nil, fmt.Errorf("failed to parse header: %w", err)
}
// Verify v3 format
if header.Format != FormatV3 {
return nil, nil, fmt.Errorf("expected v3 format, got: %s", header.Format)
}
if header.KeyMethod != KeyMethodLTHNRolling {
return nil, nil, fmt.Errorf("unsupported key method: %s", header.KeyMethod)
}
// Determine cadence from header (or use params, or default to daily)
cadence := header.Cadence
if cadence == "" && params.Cadence != "" {
cadence = params.Cadence
}
if cadence == "" {
cadence = CadenceDaily
}
// Try to unwrap CEK with rolling keys
cek, err := tryUnwrapCEK(header.WrappedKeys, params, cadence)
if err != nil {
return nil, &header, err
}
// Check if chunked format
if header.Chunked != nil {
return decryptV3Chunked(t.Payload, cek, &header)
}
// Non-chunked v3
return decryptV3Standard(t.Payload, cek, &header)
}
// decryptV3Standard handles non-chunked v3 decryption
func decryptV3Standard(payload []byte, cek []byte, header *Header) (*Message, *Header, error) {
if len(payload) < 8 {
return nil, header, ErrInvalidPayload
}
// Read header length (skip - we already parsed from trix header)
headerLen := binary.BigEndian.Uint32(payload[:4])
pos := 4 + int(headerLen)
if len(payload) < pos+4 {
return nil, header, ErrInvalidPayload
}
// Read encrypted payload length
encryptedLen := binary.BigEndian.Uint32(payload[pos : pos+4])
pos += 4
if len(payload) < pos+int(encryptedLen) {
return nil, header, ErrInvalidPayload
}
// Extract encrypted payload and attachments
encryptedPayload := payload[pos : pos+int(encryptedLen)]
encryptedAttachments := payload[pos+int(encryptedLen):]
// Decrypt with CEK
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
return nil, header, fmt.Errorf("failed to create sigil: %w", err)
}
compressed, err := sigil.Out(encryptedPayload)
if err != nil {
return nil, header, ErrDecryptionFailed
}
// Decompress
var decompressed []byte
if header.Compression == CompressionZstd {
decompressed, err = zstdDecompress(compressed)
if err != nil {
return nil, header, fmt.Errorf("decompression failed: %w", err)
}
} else {
decompressed = compressed
}
// Parse message
var msg Message
if err := json.Unmarshal(decompressed, &msg); err != nil {
return nil, header, fmt.Errorf("failed to parse message: %w", err)
}
// Decrypt attachments if present
if len(encryptedAttachments) > 0 {
attachmentData, err := sigil.Out(encryptedAttachments)
if err != nil {
return nil, header, fmt.Errorf("attachment decryption failed: %w", err)
}
// Restore attachment content from binary data
if err := restoreV3Attachments(&msg, attachmentData); err != nil {
return nil, header, err
}
}
return &msg, header, nil
}
// decryptV3Chunked handles chunked v3 decryption
func decryptV3Chunked(payload []byte, cek []byte, header *Header) (*Message, *Header, error) {
if header.Chunked == nil {
return nil, header, fmt.Errorf("v3 chunked format missing chunked info")
}
// Create sigil for decryption
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
return nil, header, fmt.Errorf("failed to create sigil: %w", err)
}
// Decrypt all chunks
var decrypted []byte
for i, ci := range header.Chunked.Index {
if ci.Offset+ci.Size > len(payload) {
return nil, header, fmt.Errorf("chunk %d out of bounds", i)
}
chunkData := payload[ci.Offset : ci.Offset+ci.Size]
plaintext, err := sigil.Out(chunkData)
if err != nil {
return nil, header, fmt.Errorf("failed to decrypt chunk %d: %w", i, err)
}
decrypted = append(decrypted, plaintext...)
}
// Parse decrypted content (metadata JSON + attachments)
var msg Message
if err := json.Unmarshal(decrypted, &msg); err != nil {
// First part should be JSON, but may be mixed with binary
// Try to find JSON boundary
for i := 0; i < len(decrypted); i++ {
if decrypted[i] == '}' {
if err := json.Unmarshal(decrypted[:i+1], &msg); err == nil {
// Found valid JSON, rest is attachment data
if err := restoreV3Attachments(&msg, decrypted[i+1:]); err != nil {
return nil, header, err
}
return &msg, header, nil
}
}
}
return nil, header, fmt.Errorf("failed to parse message: %w", err)
}
return &msg, header, nil
}
// tryUnwrapCEK attempts to unwrap the CEK using current or next period's key
func tryUnwrapCEK(wrappedKeys []WrappedKey, params *StreamParams, cadence Cadence) ([]byte, error) {
current, next := GetRollingPeriods(cadence, time.Now().UTC())
// Build map of available wrapped keys by period
keysByPeriod := make(map[string]string)
for _, wk := range wrappedKeys {
keysByPeriod[wk.Date] = wk.Wrapped
}
// Try current period's key first
if wrapped, ok := keysByPeriod[current]; ok {
streamKey := DeriveStreamKey(current, params.License, params.Fingerprint)
if cek, err := UnwrapCEK(wrapped, streamKey); err == nil {
return cek, nil
}
}
// Try next period's key
if wrapped, ok := keysByPeriod[next]; ok {
streamKey := DeriveStreamKey(next, params.License, params.Fingerprint)
if cek, err := UnwrapCEK(wrapped, streamKey); err == nil {
return cek, nil
}
}
return nil, ErrNoValidKey
}
// buildV3Payload builds the message JSON and binary attachment data
func buildV3Payload(msg *Message) ([]byte, []byte, error) {
// Create a copy of the message without attachment content
msgCopy := *msg
var attachmentData []byte
for i := range msgCopy.Attachments {
att := &msgCopy.Attachments[i]
if att.Content != "" {
// Decode base64 content to binary
data, err := base64.StdEncoding.DecodeString(att.Content)
if err != nil {
return nil, nil, fmt.Errorf("failed to decode attachment %s: %w", att.Name, err)
}
attachmentData = append(attachmentData, data...)
att.Content = "" // Clear content, will be restored on decrypt
}
}
// Marshal message (without attachment content)
payload, err := json.Marshal(&msgCopy)
if err != nil {
return nil, nil, fmt.Errorf("failed to marshal message: %w", err)
}
return payload, attachmentData, nil
}
// restoreV3Attachments restores attachment content from decrypted binary data
func restoreV3Attachments(msg *Message, data []byte) error {
offset := 0
for i := range msg.Attachments {
att := &msg.Attachments[i]
if att.Size > 0 {
if offset+att.Size > len(data) {
return fmt.Errorf("attachment data truncated for %s", att.Name)
}
att.Content = base64.StdEncoding.EncodeToString(data[offset : offset+att.Size])
offset += att.Size
}
}
return nil
}
// =============================================================================
// V3 Chunked Streaming Helpers
// =============================================================================
//
// When StreamParams.ChunkSize > 0, v3 format uses independently decryptable
// chunks, enabling:
// - Decrypt-while-downloading: Play media as it arrives
// - HTTP Range requests: Fetch specific chunks by byte range
// - Seekable playback: Jump to any position without decrypting everything
//
// Each chunk is encrypted with the same CEK but has its own nonce,
// making it independently decryptable.
// DecryptV3Chunk decrypts a single chunk by index.
// This enables streaming playback and seeking without decrypting the entire file.
//
// Usage for streaming:
//
// header, _ := GetV3Header(data)
// cek, _ := UnwrapCEKFromHeader(header, params)
// payload, _ := GetV3Payload(data)
// for i := 0; i < header.Chunked.TotalChunks; i++ {
// chunk, _ := DecryptV3Chunk(payload, cek, i, header.Chunked)
// player.Write(chunk)
// }
func DecryptV3Chunk(payload []byte, cek []byte, chunkIndex int, chunked *ChunkedInfo) ([]byte, error) {
if chunked == nil {
return nil, fmt.Errorf("chunked info is nil")
}
if chunkIndex < 0 || chunkIndex >= len(chunked.Index) {
return nil, fmt.Errorf("chunk index %d out of range [0, %d)", chunkIndex, len(chunked.Index))
}
ci := chunked.Index[chunkIndex]
if ci.Offset+ci.Size > len(payload) {
return nil, fmt.Errorf("chunk %d data out of bounds", chunkIndex)
}
// Create sigil and decrypt
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
return nil, fmt.Errorf("failed to create sigil: %w", err)
}
chunkData := payload[ci.Offset : ci.Offset+ci.Size]
return sigil.Out(chunkData)
}
// GetV3Header extracts the header from a v3 file without decrypting.
// Useful for getting chunk index for Range requests.
func GetV3Header(data []byte) (*Header, error) {
t, err := trix.Decode(data, Magic, nil)
if err != nil {
return nil, fmt.Errorf("failed to decode container: %w", err)
}
headerJSON, err := json.Marshal(t.Header)
if err != nil {
return nil, fmt.Errorf("failed to marshal header: %w", err)
}
var header Header
if err := json.Unmarshal(headerJSON, &header); err != nil {
return nil, fmt.Errorf("failed to parse header: %w", err)
}
if header.Format != FormatV3 {
return nil, fmt.Errorf("not a v3 format: %s", header.Format)
}
return &header, nil
}
// UnwrapCEKFromHeader unwraps the CEK from a v3 header using stream params.
// Returns the CEK for use with DecryptV3Chunk.
func UnwrapCEKFromHeader(header *Header, params *StreamParams) ([]byte, error) {
if params == nil || params.License == "" {
return nil, ErrLicenseRequired
}
cadence := header.Cadence
if cadence == "" && params.Cadence != "" {
cadence = params.Cadence
}
if cadence == "" {
cadence = CadenceDaily
}
return tryUnwrapCEK(header.WrappedKeys, params, cadence)
}
// GetV3Payload extracts just the payload from a v3 file.
// Use with DecryptV3Chunk for individual chunk decryption.
func GetV3Payload(data []byte) ([]byte, error) {
t, err := trix.Decode(data, Magic, nil)
if err != nil {
return nil, fmt.Errorf("failed to decode container: %w", err)
}
return t.Payload, nil
}
// GetV3HeaderFromPrefix parses the v3 header from just the file prefix.
// This enables streaming: parse header as soon as first few KB arrive.
// Returns header and payload offset (where encrypted chunks start).
//
// File format:
// - Bytes 0-3: Magic "SMSG"
// - Bytes 4-5: Version (2-byte little endian)
// - Bytes 6-8: Header length (3-byte big endian)
// - Bytes 9+: Header JSON
// - Payload starts at offset 9 + headerLen
func GetV3HeaderFromPrefix(data []byte) (*Header, int, error) {
// Need at least magic + version + header length indicator
if len(data) < 9 {
return nil, 0, fmt.Errorf("need at least 9 bytes, got %d", len(data))
}
// Check magic
if string(data[0:4]) != Magic {
return nil, 0, ErrInvalidMagic
}
// Parse header length (3 bytes big endian at offset 6-8)
headerLen := int(data[6])<<16 | int(data[7])<<8 | int(data[8])
if headerLen <= 0 || headerLen > 16*1024*1024 {
return nil, 0, fmt.Errorf("invalid header length: %d", headerLen)
}
// Calculate payload offset
payloadOffset := 9 + headerLen
// Check if we have enough data for the header
if len(data) < payloadOffset {
return nil, 0, fmt.Errorf("need %d bytes for header, got %d", payloadOffset, len(data))
}
// Parse header JSON
headerJSON := data[9:payloadOffset]
var header Header
if err := json.Unmarshal(headerJSON, &header); err != nil {
return nil, 0, fmt.Errorf("failed to parse header JSON: %w", err)
}
if header.Format != FormatV3 {
return nil, 0, fmt.Errorf("not a v3 format: %s", header.Format)
}
return &header, payloadOffset, nil
}
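// Example (illustrative): seeking with HTTP Range requests once the file prefix has
// been fetched. Offsets in Chunked.Index are relative to the payload, so the absolute
// byte range for chunk i is payloadOffset+Offset .. payloadOffset+Offset+Size-1.
//
//	header, payloadOffset, _ := GetV3HeaderFromPrefix(prefix)
//	cek, _ := UnwrapCEKFromHeader(header, params)
//	ci := header.Chunked.Index[i]
//	_ = payloadOffset + ci.Offset // start of chunk i within the whole file
//	// With the payload bytes available, DecryptV3Chunk(payload, cek, i, header.Chunked)
//	// returns that chunk's plaintext.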

pkg/smsg/stream_test.go (new file)

@@ -0,0 +1,677 @@
package smsg
import (
"testing"
"time"
)
func TestDeriveStreamKey(t *testing.T) {
// Test that same inputs produce same key
key1 := DeriveStreamKey("2026-01-12", "license123", "fingerprint456")
key2 := DeriveStreamKey("2026-01-12", "license123", "fingerprint456")
if len(key1) != 32 {
t.Errorf("Key length = %d, want 32", len(key1))
}
if string(key1) != string(key2) {
t.Error("Same inputs should produce same key")
}
// Test that different dates produce different keys
key3 := DeriveStreamKey("2026-01-13", "license123", "fingerprint456")
if string(key1) == string(key3) {
t.Error("Different dates should produce different keys")
}
// Test that different licenses produce different keys
key4 := DeriveStreamKey("2026-01-12", "license789", "fingerprint456")
if string(key1) == string(key4) {
t.Error("Different licenses should produce different keys")
}
}
func TestGetRollingDates(t *testing.T) {
today, tomorrow := GetRollingDates()
// Parse dates to verify format
todayTime, err := time.Parse("2006-01-02", today)
if err != nil {
t.Fatalf("Invalid today format: %v", err)
}
tomorrowTime, err := time.Parse("2006-01-02", tomorrow)
if err != nil {
t.Fatalf("Invalid tomorrow format: %v", err)
}
// Tomorrow should be 1 day after today
diff := tomorrowTime.Sub(todayTime)
if diff != 24*time.Hour {
t.Errorf("Tomorrow should be 24h after today, got %v", diff)
}
}
func TestWrapUnwrapCEK(t *testing.T) {
// Generate a test CEK
cek, err := GenerateCEK()
if err != nil {
t.Fatalf("GenerateCEK failed: %v", err)
}
// Generate a stream key
streamKey := DeriveStreamKey("2026-01-12", "test-license", "test-fp")
// Wrap CEK
wrapped, err := WrapCEK(cek, streamKey)
if err != nil {
t.Fatalf("WrapCEK failed: %v", err)
}
// Unwrap CEK
unwrapped, err := UnwrapCEK(wrapped, streamKey)
if err != nil {
t.Fatalf("UnwrapCEK failed: %v", err)
}
// Verify CEK matches
if string(cek) != string(unwrapped) {
t.Error("Unwrapped CEK doesn't match original")
}
// Wrong key should fail
wrongKey := DeriveStreamKey("2026-01-12", "wrong-license", "test-fp")
_, err = UnwrapCEK(wrapped, wrongKey)
if err == nil {
t.Error("UnwrapCEK with wrong key should fail")
}
}
func TestEncryptDecryptV3RoundTrip(t *testing.T) {
msg := NewMessage("Hello, this is a v3 streaming message!").
WithSubject("V3 Test").
WithFrom("stream@dapp.fm")
params := &StreamParams{
License: "test-license-123",
Fingerprint: "device-fp-456",
}
manifest := NewManifest("Test Track")
manifest.Artist = "Test Artist"
manifest.LicenseType = "stream"
// Encrypt
encrypted, err := EncryptV3(msg, params, manifest)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
// Decrypt with same params
decrypted, header, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 failed: %v", err)
}
// Verify message content
if decrypted.Body != msg.Body {
t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
}
if decrypted.Subject != msg.Subject {
t.Errorf("Subject = %q, want %q", decrypted.Subject, msg.Subject)
}
// Verify header
if header.Format != FormatV3 {
t.Errorf("Format = %q, want %q", header.Format, FormatV3)
}
if header.KeyMethod != KeyMethodLTHNRolling {
t.Errorf("KeyMethod = %q, want %q", header.KeyMethod, KeyMethodLTHNRolling)
}
if len(header.WrappedKeys) != 2 {
t.Errorf("WrappedKeys count = %d, want 2", len(header.WrappedKeys))
}
// Verify manifest
if header.Manifest == nil {
t.Fatal("Manifest is nil")
}
if header.Manifest.Title != "Test Track" {
t.Errorf("Manifest.Title = %q, want %q", header.Manifest.Title, "Test Track")
}
}
func TestDecryptV3WrongLicense(t *testing.T) {
msg := NewMessage("Secret content")
params := &StreamParams{
License: "correct-license",
Fingerprint: "device-fp",
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
// Try to decrypt with wrong license
wrongParams := &StreamParams{
License: "wrong-license",
Fingerprint: "device-fp",
}
_, _, err = DecryptV3(encrypted, wrongParams)
if err == nil {
t.Error("DecryptV3 with wrong license should fail")
}
if err != ErrNoValidKey {
t.Errorf("Error = %v, want ErrNoValidKey", err)
}
}
func TestDecryptV3WrongFingerprint(t *testing.T) {
msg := NewMessage("Secret content")
params := &StreamParams{
License: "test-license",
Fingerprint: "correct-fingerprint",
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
// Try to decrypt with wrong fingerprint
wrongParams := &StreamParams{
License: "test-license",
Fingerprint: "wrong-fingerprint",
}
_, _, err = DecryptV3(encrypted, wrongParams)
if err == nil {
t.Error("DecryptV3 with wrong fingerprint should fail")
}
}
func TestEncryptV3WithAttachment(t *testing.T) {
msg := NewMessage("Message with attachment")
msg.AddBinaryAttachment("test.mp3", []byte("fake audio data here"), "audio/mpeg")
params := &StreamParams{
License: "test-license",
Fingerprint: "test-fp",
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
decrypted, _, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 failed: %v", err)
}
// Verify attachment
if len(decrypted.Attachments) != 1 {
t.Fatalf("Attachment count = %d, want 1", len(decrypted.Attachments))
}
att := decrypted.GetAttachment("test.mp3")
if att == nil {
t.Fatal("Attachment not found")
}
if att.MimeType != "audio/mpeg" {
t.Errorf("MimeType = %q, want %q", att.MimeType, "audio/mpeg")
}
}
func TestEncryptV3RequiresLicense(t *testing.T) {
msg := NewMessage("Test")
// Nil params
_, err := EncryptV3(msg, nil, nil)
if err != ErrLicenseRequired {
t.Errorf("Error = %v, want ErrLicenseRequired", err)
}
// Empty license
_, err = EncryptV3(msg, &StreamParams{}, nil)
if err != ErrLicenseRequired {
t.Errorf("Error = %v, want ErrLicenseRequired", err)
}
}
func TestCadencePeriods(t *testing.T) {
// Test at a known time: 2026-01-12 15:30:00 UTC
testTime := time.Date(2026, 1, 12, 15, 30, 0, 0, time.UTC)
tests := []struct {
cadence Cadence
expectedCurrent string
expectedNext string
}{
{CadenceDaily, "2026-01-12", "2026-01-13"},
{CadenceHalfDay, "2026-01-12-PM", "2026-01-13-AM"},
{CadenceQuarter, "2026-01-12-12", "2026-01-12-18"},
{CadenceHourly, "2026-01-12-15", "2026-01-12-16"},
}
for _, tc := range tests {
t.Run(string(tc.cadence), func(t *testing.T) {
current, next := GetRollingPeriods(tc.cadence, testTime)
if current != tc.expectedCurrent {
t.Errorf("current = %q, want %q", current, tc.expectedCurrent)
}
if next != tc.expectedNext {
t.Errorf("next = %q, want %q", next, tc.expectedNext)
}
})
}
}
func TestCadenceHalfDayAM(t *testing.T) {
// Test in the morning
testTime := time.Date(2026, 1, 12, 9, 0, 0, 0, time.UTC)
current, next := GetRollingPeriods(CadenceHalfDay, testTime)
if current != "2026-01-12-AM" {
t.Errorf("current = %q, want %q", current, "2026-01-12-AM")
}
if next != "2026-01-12-PM" {
t.Errorf("next = %q, want %q", next, "2026-01-12-PM")
}
}
func TestCadenceQuarterBoundary(t *testing.T) {
// Test at 23:00 - should wrap to next day
testTime := time.Date(2026, 1, 12, 23, 0, 0, 0, time.UTC)
current, next := GetRollingPeriods(CadenceQuarter, testTime)
if current != "2026-01-12-18" {
t.Errorf("current = %q, want %q", current, "2026-01-12-18")
}
if next != "2026-01-13-00" {
t.Errorf("next = %q, want %q", next, "2026-01-13-00")
}
}
func TestEncryptDecryptV3WithCadence(t *testing.T) {
cadences := []Cadence{CadenceDaily, CadenceHalfDay, CadenceQuarter, CadenceHourly}
for _, cadence := range cadences {
t.Run(string(cadence), func(t *testing.T) {
msg := NewMessage("Testing " + string(cadence) + " cadence")
params := &StreamParams{
License: "cadence-test-license",
Fingerprint: "cadence-test-fp",
Cadence: cadence,
}
// Encrypt
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
// Decrypt with same params
decrypted, header, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 failed: %v", err)
}
if decrypted.Body != msg.Body {
t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
}
// Verify cadence in header
if header.Cadence != cadence {
t.Errorf("Cadence = %q, want %q", header.Cadence, cadence)
}
})
}
}
func TestRollingKeyWindow(t *testing.T) {
// This test verifies that both today's and tomorrow's keys work
msg := NewMessage("Rolling window test")
// Create params
params := &StreamParams{
License: "rolling-test-license",
Fingerprint: "rolling-test-fp",
}
// Encrypt with current time
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 failed: %v", err)
}
// Should decrypt successfully (within rolling window)
decrypted, header, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 failed: %v", err)
}
if decrypted.Body != msg.Body {
t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
}
// Verify we have both today and tomorrow keys
today, tomorrow := GetRollingDates()
hasToday := false
hasTomorrow := false
for _, wk := range header.WrappedKeys {
if wk.Date == today {
hasToday = true
}
if wk.Date == tomorrow {
hasTomorrow = true
}
}
if !hasToday {
t.Error("Missing today's wrapped key")
}
if !hasTomorrow {
t.Error("Missing tomorrow's wrapped key")
}
}
// =============================================================================
// V3 Chunked Streaming Tests
// =============================================================================
func TestEncryptDecryptV3ChunkedBasic(t *testing.T) {
msg := NewMessage("This is a chunked streaming test message")
msg.WithSubject("Chunked Test")
params := &StreamParams{
License: "chunk-license",
Fingerprint: "chunk-fp",
ChunkSize: 64, // Small chunks for testing
}
manifest := NewManifest("Chunked Track")
manifest.Artist = "Test Artist"
// Encrypt with chunking
encrypted, err := EncryptV3(msg, params, manifest)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
// Decrypt - automatically handles chunked format
decrypted, header, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 (chunked) failed: %v", err)
}
// Verify content
if decrypted.Body != msg.Body {
t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
}
if decrypted.Subject != msg.Subject {
t.Errorf("Subject = %q, want %q", decrypted.Subject, msg.Subject)
}
// Verify header
if header.Format != FormatV3 {
t.Errorf("Format = %q, want %q", header.Format, FormatV3)
}
if header.Chunked == nil {
t.Fatal("Chunked info is nil")
}
if header.Chunked.ChunkSize != 64 {
t.Errorf("ChunkSize = %d, want 64", header.Chunked.ChunkSize)
}
}
func TestV3ChunkedWithAttachment(t *testing.T) {
// Create a message with attachment larger than chunk size
attachmentData := make([]byte, 256)
for i := range attachmentData {
attachmentData[i] = byte(i)
}
msg := NewMessage("Message with large attachment")
msg.AddBinaryAttachment("test.bin", attachmentData, "application/octet-stream")
params := &StreamParams{
License: "attach-license",
Fingerprint: "attach-fp",
ChunkSize: 64, // Force multiple chunks
}
// Encrypt
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
// Verify we have multiple chunks
header, err := GetV3Header(encrypted)
if err != nil {
t.Fatalf("GetV3Header failed: %v", err)
}
if header.Chunked.TotalChunks <= 1 {
t.Errorf("TotalChunks = %d, want > 1", header.Chunked.TotalChunks)
}
// Decrypt
decrypted, _, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 (chunked) failed: %v", err)
}
// Verify attachment
if len(decrypted.Attachments) != 1 {
t.Fatalf("Attachment count = %d, want 1", len(decrypted.Attachments))
}
}
func TestV3ChunkedIndividualChunks(t *testing.T) {
// Create content that spans multiple chunks
largeContent := make([]byte, 200)
for i := range largeContent {
largeContent[i] = byte(i % 256)
}
msg := NewMessage("Chunk-by-chunk test")
msg.AddBinaryAttachment("data.bin", largeContent, "application/octet-stream")
params := &StreamParams{
License: "individual-license",
Fingerprint: "individual-fp",
ChunkSize: 50, // Force ~5 chunks
}
// Encrypt
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
// Get header and payload
header, err := GetV3Header(encrypted)
if err != nil {
t.Fatalf("GetV3Header failed: %v", err)
}
payload, err := GetV3Payload(encrypted)
if err != nil {
t.Fatalf("GetV3Payload failed: %v", err)
}
// Unwrap CEK
cek, err := UnwrapCEKFromHeader(header, params)
if err != nil {
t.Fatalf("UnwrapCEKFromHeader failed: %v", err)
}
// Decrypt each chunk individually
var allDecrypted []byte
for i := 0; i < header.Chunked.TotalChunks; i++ {
chunk, err := DecryptV3Chunk(payload, cek, i, header.Chunked)
if err != nil {
t.Fatalf("DecryptV3Chunk(%d) failed: %v", i, err)
}
allDecrypted = append(allDecrypted, chunk...)
}
// Verify total size matches
if int64(len(allDecrypted)) != header.Chunked.TotalSize {
t.Errorf("Decrypted size = %d, want %d", len(allDecrypted), header.Chunked.TotalSize)
}
}
func TestV3ChunkedWrongLicense(t *testing.T) {
msg := NewMessage("Secret chunked content")
params := &StreamParams{
License: "correct-chunked-license",
Fingerprint: "device-fp",
ChunkSize: 64,
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
// Try to decrypt with wrong license
wrongParams := &StreamParams{
License: "wrong-chunked-license",
Fingerprint: "device-fp",
}
_, _, err = DecryptV3(encrypted, wrongParams)
if err == nil {
t.Error("DecryptV3 (chunked) with wrong license should fail")
}
if err != ErrNoValidKey {
t.Errorf("Error = %v, want ErrNoValidKey", err)
}
}
func TestV3ChunkedChunkIndex(t *testing.T) {
msg := NewMessage("Index test")
msg.AddBinaryAttachment("test.dat", make([]byte, 150), "application/octet-stream")
params := &StreamParams{
License: "index-license",
Fingerprint: "index-fp",
ChunkSize: 50,
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
header, err := GetV3Header(encrypted)
if err != nil {
t.Fatalf("GetV3Header failed: %v", err)
}
// Verify index structure
if len(header.Chunked.Index) != header.Chunked.TotalChunks {
t.Errorf("Index length = %d, want %d", len(header.Chunked.Index), header.Chunked.TotalChunks)
}
// Verify offsets are sequential
expectedOffset := 0
for i, ci := range header.Chunked.Index {
if ci.Offset != expectedOffset {
t.Errorf("Chunk %d offset = %d, want %d", i, ci.Offset, expectedOffset)
}
expectedOffset += ci.Size
}
}
func TestV3ChunkedSeekMiddleChunk(t *testing.T) {
// Create predictable data
data := make([]byte, 300)
for i := range data {
data[i] = byte(i % 256)
}
msg := NewMessage("Seek test")
msg.AddBinaryAttachment("seek.bin", data, "application/octet-stream")
params := &StreamParams{
License: "seek-license",
Fingerprint: "seek-fp",
ChunkSize: 100, // 3 data chunks minimum
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (chunked) failed: %v", err)
}
header, err := GetV3Header(encrypted)
if err != nil {
t.Fatalf("GetV3Header failed: %v", err)
}
payload, err := GetV3Payload(encrypted)
if err != nil {
t.Fatalf("GetV3Payload failed: %v", err)
}
cek, err := UnwrapCEKFromHeader(header, params)
if err != nil {
t.Fatalf("UnwrapCEKFromHeader failed: %v", err)
}
// Skip to middle chunk (simulate seeking)
if header.Chunked.TotalChunks < 2 {
t.Skip("Need at least 2 chunks for seek test")
}
middleIdx := header.Chunked.TotalChunks / 2
chunk, err := DecryptV3Chunk(payload, cek, middleIdx, header.Chunked)
if err != nil {
t.Fatalf("DecryptV3Chunk(%d) failed: %v", middleIdx, err)
}
// Just verify we got something
if len(chunk) == 0 {
t.Error("Middle chunk is empty")
}
}
func TestV3NonChunkedStillWorks(t *testing.T) {
// Verify non-chunked v3 still works (ChunkSize = 0)
msg := NewMessage("Non-chunked v3 test")
msg.WithSubject("No Chunks")
params := &StreamParams{
License: "non-chunk-license",
Fingerprint: "non-chunk-fp",
// ChunkSize = 0 (default) - no chunking
}
encrypted, err := EncryptV3(msg, params, nil)
if err != nil {
t.Fatalf("EncryptV3 (non-chunked) failed: %v", err)
}
decrypted, header, err := DecryptV3(encrypted, params)
if err != nil {
t.Fatalf("DecryptV3 (non-chunked) failed: %v", err)
}
if decrypted.Body != msg.Body {
t.Errorf("Body = %q, want %q", decrypted.Body, msg.Body)
}
// Non-chunked should not have Chunked info
if header.Chunked != nil {
t.Error("Non-chunked v3 should not have Chunked info")
}
}

@@ -2,6 +2,14 @@
// SMSG (Secure Message) enables encrypted message exchange where the recipient
// decrypts using a pre-shared password. Useful for secure support replies,
// confidential documents, and any scenario requiring password-protected content.
//
// Format versions:
// - v1: JSON with base64-encoded attachments (legacy)
// - v2: Binary format with zstd compression (current)
// - v3: Streaming with LTHN rolling keys (planned)
//
// Encryption note: Nonces are embedded in ciphertext, not transmitted separately.
// See smsg.go header comment for details.
package smsg
import (
@@ -23,6 +31,9 @@ var (
ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
ErrPasswordRequired = errors.New("password is required")
ErrEmptyMessage = errors.New("message cannot be empty")
ErrStreamKeyExpired = errors.New("stream key expired (outside rolling window)")
ErrNoValidKey = errors.New("no valid wrapped key found for current date")
ErrLicenseRequired = errors.New("license is required for stream decryption")
)
// Attachment represents a file attached to the message
@@ -278,8 +289,27 @@ func (m *Manifest) AddLink(platform, url string) *Manifest {
const (
FormatV1 = "" // Original format: JSON with base64-encoded attachments
FormatV2 = "v2" // Binary format: JSON header + raw binary attachments
FormatV3 = "v3" // Streaming format: CEK wrapped with rolling LTHN keys, optional chunking
)
// Default chunk size for v3 chunked format (1MB)
const DefaultChunkSize = 1024 * 1024
// ChunkInfo describes a single chunk in v3 chunked format
type ChunkInfo struct {
Offset int `json:"offset"` // byte offset in payload
Size int `json:"size"` // encrypted chunk size (includes nonce + tag)
}
// ChunkedInfo contains chunking metadata for v3 streaming
// When present, enables decrypt-while-downloading and seeking
type ChunkedInfo struct {
ChunkSize int `json:"chunkSize"` // size of each chunk before encryption
TotalChunks int `json:"totalChunks"` // number of chunks
TotalSize int64 `json:"totalSize"` // total unencrypted size
Index []ChunkInfo `json:"index"` // chunk locations for seeking
}
// Compression types
const (
CompressionNone = "" // No compression (default, backwards compatible)
@@ -287,12 +317,100 @@ const (
CompressionZstd = "zstd" // Zstandard compression (faster, better ratio)
)
// Key derivation methods for v3 streaming
const (
// KeyMethodDirect uses password directly (v1/v2 behavior)
KeyMethodDirect = ""
// KeyMethodLTHNRolling uses LTHN hash with rolling date windows
// Key = SHA256(LTHN(date:license:fingerprint))
// Valid keys: current period and next period (rolling window)
KeyMethodLTHNRolling = "lthn-rolling"
)
// Cadence defines how often stream keys rotate
type Cadence string
const (
// CadenceDaily rotates keys every 24 hours (default)
// Date format: "2006-01-02"
CadenceDaily Cadence = "daily"
// CadenceHalfDay rotates keys every 12 hours
// Date format: "2006-01-02-AM" or "2006-01-02-PM"
CadenceHalfDay Cadence = "12h"
// CadenceQuarter rotates keys every 6 hours
// Date format: "2006-01-02-00", "2006-01-02-06", "2006-01-02-12", "2006-01-02-18"
CadenceQuarter Cadence = "6h"
// CadenceHourly rotates keys every hour
// Date format: "2006-01-02-15" (24-hour format)
CadenceHourly Cadence = "1h"
)
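// Worked example: at 2026-01-12 15:30 UTC, GetRollingPeriods returns
//
//	daily: "2026-01-12" / "2026-01-13"
//	12h:   "2026-01-12-PM" / "2026-01-13-AM"
//	6h:    "2026-01-12-12" / "2026-01-12-18"
//	1h:    "2026-01-12-15" / "2026-01-12-16"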
// WrappedKey represents a CEK (Content Encryption Key) wrapped with a time-bound stream key.
// The stream key is derived from LTHN(date:license:fingerprint) and is never transmitted.
// Only the wrapped CEK (which includes its own nonce) is stored in the header.
type WrappedKey struct {
Date string `json:"date"` // period string ("YYYY-MM-DD" for daily cadence) used for key derivation
Wrapped string `json:"wrapped"` // base64([nonce][ChaCha(CEK, streamKey)])
}
// Header represents the SMSG container header
type Header struct {
Version string `json:"version"`
Algorithm string `json:"algorithm"`
Format string `json:"format,omitempty"` // v2 for binary, empty for v1 (base64)
Compression string `json:"compression,omitempty"` // gzip or empty for none
Format string `json:"format,omitempty"` // v2 for binary, v3 for streaming, empty for v1 (base64)
Compression string `json:"compression,omitempty"` // gzip, zstd, or empty for none
Hint string `json:"hint,omitempty"` // optional password hint
Manifest *Manifest `json:"manifest,omitempty"` // public metadata for discovery
// V3 streaming fields
KeyMethod string `json:"keyMethod,omitempty"` // lthn-rolling for v3
Cadence Cadence `json:"cadence,omitempty"` // key rotation frequency (daily, 12h, 6h, 1h)
WrappedKeys []WrappedKey `json:"wrappedKeys,omitempty"` // CEK wrapped with rolling keys
// V3 chunked streaming (optional - enables decrypt-while-downloading)
Chunked *ChunkedInfo `json:"chunked,omitempty"` // chunk index for seeking/range requests
}
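// Example v3 header as serialized into the container (illustrative; "wrapped"
// strings, version, and sizes are placeholders):
//
//	{
//	  "version": "...", "algorithm": "chacha20poly1305", "format": "v3",
//	  "keyMethod": "lthn-rolling", "cadence": "daily",
//	  "wrappedKeys": [
//	    {"date": "2026-01-12", "wrapped": "<base64>"},
//	    {"date": "2026-01-13", "wrapped": "<base64>"}
//	  ],
//	  "chunked": {"chunkSize": 1048576, "totalChunks": 3, "totalSize": 3145728,
//	              "index": [{"offset": 0, "size": ...}, ...]}
//	}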
// ========== ADAPTIVE BITRATE STREAMING (ABR) ==========
// ABRManifest represents a multi-bitrate variant playlist for adaptive streaming.
// Similar to HLS master playlist but with encrypted SMSG variants.
type ABRManifest struct {
Version string `json:"version"` // "abr-v1"
Title string `json:"title"` // Content title
Duration int `json:"duration"` // Total duration in seconds
Variants []Variant `json:"variants"` // Quality variants (sorted by bandwidth, ascending)
DefaultIdx int `json:"defaultIdx"` // Default variant index (typically 720p)
Password string `json:"-"` // Shared password for all variants (not serialized)
}
// Variant represents a single quality level in an ABR stream.
// Each variant is a standard v3 chunked .smsg file.
type Variant struct {
Name string `json:"name"` // Human-readable name: "1080p", "720p", etc.
Bandwidth int `json:"bandwidth"` // Required bandwidth in bits per second
Width int `json:"width"` // Video width in pixels
Height int `json:"height"` // Video height in pixels
Codecs string `json:"codecs"` // Codec string: "avc1.640028,mp4a.40.2"
URL string `json:"url"` // Relative path to .smsg file
ChunkCount int `json:"chunkCount"` // Number of chunks (for progress calculation)
FileSize int64 `json:"fileSize"` // File size in bytes
}
// Standard ABR quality presets
var ABRPresets = []struct {
Name string
Width int
Height int
Bitrate string // For ffmpeg
BPS int // Bits per second
}{
{"1080p", 1920, 1080, "5M", 5000000},
{"720p", 1280, 720, "2.5M", 2500000},
{"480p", 854, 480, "1M", 1000000},
{"360p", 640, 360, "500K", 500000},
}
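// Example ABR manifest (illustrative; URLs, sizes, and chunk counts are placeholders):
//
//	{
//	  "version": "abr-v1",
//	  "title": "Demo Track",
//	  "duration": 240,
//	  "defaultIdx": 0,
//	  "variants": [
//	    {"name": "720p",  "bandwidth": 2500000, "width": 1280, "height": 720,
//	     "codecs": "avc1.640028,mp4a.40.2", "url": "demo-720p.smsg",  "chunkCount": 240, "fileSize": 75000000},
//	    {"name": "1080p", "bandwidth": 5000000, "width": 1920, "height": 1080,
//	     "codecs": "avc1.640028,mp4a.40.2", "url": "demo-1080p.smsg", "chunkCount": 240, "fileSize": 150000000}
//	  ]
//	}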

@@ -13,10 +13,11 @@ import (
"github.com/Snider/Borg/pkg/smsg"
"github.com/Snider/Borg/pkg/stmf"
"github.com/Snider/Enchantrix/pkg/enchantrix"
)
// Version of the WASM module
const Version = "1.2.0"
const Version = "1.6.0"
func main() {
// Export the BorgSTMF object to JavaScript global scope
@@ -32,12 +33,24 @@
js.Global().Set("BorgSMSG", js.ValueOf(map[string]interface{}{
"decrypt": js.FuncOf(smsgDecrypt),
"decryptStream": js.FuncOf(smsgDecryptStream),
"decryptBinary": js.FuncOf(smsgDecryptBinary), // v2/v3 binary input (no base64!)
"decryptV3": js.FuncOf(smsgDecryptV3), // v3 streaming with rolling keys
"getV3ChunkInfo": js.FuncOf(smsgGetV3ChunkInfo), // Get chunk index for seeking
"decryptV3Chunk": js.FuncOf(smsgDecryptV3Chunk), // Decrypt single chunk
"unwrapV3CEK": js.FuncOf(smsgUnwrapV3CEK), // Unwrap CEK for chunk decryption
"parseV3Header": js.FuncOf(smsgParseV3Header), // Parse header from bytes, returns header + payloadOffset
"unwrapCEKFromHeader": js.FuncOf(smsgUnwrapCEKFromHeader), // Unwrap CEK from parsed header
"decryptChunkDirect": js.FuncOf(smsgDecryptChunkDirect), // Decrypt raw chunk bytes with CEK
"encrypt": js.FuncOf(smsgEncrypt),
"encryptWithManifest": js.FuncOf(smsgEncryptWithManifest),
"getInfo": js.FuncOf(smsgGetInfo),
"getInfoBinary": js.FuncOf(smsgGetInfoBinary), // Binary input (no base64!)
"quickDecrypt": js.FuncOf(smsgQuickDecrypt),
"version": Version,
"ready": true,
// ABR (Adaptive Bitrate Streaming) functions
"parseABRManifest": js.FuncOf(smsgParseABRManifest), // Parse ABR manifest JSON
"selectVariant": js.FuncOf(smsgSelectVariant), // Select best variant for bandwidth
"version": Version,
"ready": true,
}))
// Dispatch a ready event
@@ -361,6 +374,182 @@ func smsgDecryptStream(this js.Value, args []js.Value) interface{} {
return promiseConstructor.New(handler)
}
// smsgDecryptBinary decrypts v2/v3 binary data directly from Uint8Array.
// No base64 conversion needed - this is the efficient path for zstd streams.
// JavaScript usage:
//
// const response = await fetch(url);
// const bytes = new Uint8Array(await response.arrayBuffer());
// const result = await BorgSMSG.decryptBinary(bytes, password);
// const blob = new Blob([result.attachments[0].data], {type: result.attachments[0].mime});
func smsgDecryptBinary(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("decryptBinary requires 2 arguments: Uint8Array, password"))
return
}
// Get binary data directly from Uint8Array
uint8Array := args[0]
length := uint8Array.Get("length").Int()
data := make([]byte, length)
js.CopyBytesToGo(data, uint8Array)
password := args[1].String()
// Decrypt directly from binary (no base64 decode!)
msg, err := smsg.Decrypt(data, password)
if err != nil {
reject.Invoke(newError("decryption failed: " + err.Error()))
return
}
// Build result with binary attachment data
result := map[string]interface{}{
"body": msg.Body,
"timestamp": msg.Timestamp,
}
if msg.Subject != "" {
result["subject"] = msg.Subject
}
if msg.From != "" {
result["from"] = msg.From
}
// Convert attachments with binary data
if len(msg.Attachments) > 0 {
attachments := make([]interface{}, len(msg.Attachments))
for i, att := range msg.Attachments {
// Decode base64 to binary (internal format still uses base64)
attData, err := base64.StdEncoding.DecodeString(att.Content)
if err != nil {
reject.Invoke(newError("failed to decode attachment: " + err.Error()))
return
}
// Create Uint8Array in JS
attArray := js.Global().Get("Uint8Array").New(len(attData))
js.CopyBytesToJS(attArray, attData)
attachments[i] = map[string]interface{}{
"name": att.Name,
"mime": att.MimeType,
"size": len(attData),
"data": attArray,
}
}
result["attachments"] = attachments
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgGetInfoBinary extracts header info from binary Uint8Array without decrypting.
// JavaScript usage:
//
// const bytes = new Uint8Array(await response.arrayBuffer());
// const info = await BorgSMSG.getInfoBinary(bytes);
// console.log(info.manifest);
func smsgGetInfoBinary(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 1 {
reject.Invoke(newError("getInfoBinary requires 1 argument: Uint8Array"))
return
}
// Get binary data directly from Uint8Array
uint8Array := args[0]
length := uint8Array.Get("length").Int()
data := make([]byte, length)
js.CopyBytesToGo(data, uint8Array)
header, err := smsg.GetInfo(data)
if err != nil {
reject.Invoke(newError("failed to get info: " + err.Error()))
return
}
result := map[string]interface{}{
"version": header.Version,
"algorithm": header.Algorithm,
}
if header.Format != "" {
result["format"] = header.Format
}
if header.Compression != "" {
result["compression"] = header.Compression
}
if header.Hint != "" {
result["hint"] = header.Hint
}
// V3 streaming fields
if header.KeyMethod != "" {
result["keyMethod"] = header.KeyMethod
}
if header.Cadence != "" {
result["cadence"] = string(header.Cadence)
}
if len(header.WrappedKeys) > 0 {
wrappedKeys := make([]interface{}, len(header.WrappedKeys))
for i, wk := range header.WrappedKeys {
wrappedKeys[i] = map[string]interface{}{
"date": wk.Date,
}
}
result["wrappedKeys"] = wrappedKeys
result["isV3Streaming"] = true
}
// V3 chunked streaming fields
if header.Chunked != nil {
index := make([]interface{}, len(header.Chunked.Index))
for i, ci := range header.Chunked.Index {
index[i] = map[string]interface{}{
"offset": ci.Offset,
"size": ci.Size,
}
}
result["chunked"] = map[string]interface{}{
"chunkSize": header.Chunked.ChunkSize,
"totalChunks": header.Chunked.TotalChunks,
"totalSize": header.Chunked.TotalSize,
"index": index,
}
result["isChunked"] = true
}
// Include manifest if present
if header.Manifest != nil {
result["manifest"] = manifestToJS(header.Manifest)
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgEncrypt encrypts a message with a password.
// JavaScript usage:
//
@@ -495,6 +684,43 @@ func smsgGetInfo(this js.Value, args []js.Value) interface{} {
result["hint"] = header.Hint
}
// V3 streaming fields
if header.KeyMethod != "" {
result["keyMethod"] = header.KeyMethod
}
if header.Cadence != "" {
result["cadence"] = string(header.Cadence)
}
if len(header.WrappedKeys) > 0 {
wrappedKeys := make([]interface{}, len(header.WrappedKeys))
for i, wk := range header.WrappedKeys {
wrappedKeys[i] = map[string]interface{}{
"date": wk.Date,
// Note: wrapped key itself is not exposed for security
}
}
result["wrappedKeys"] = wrappedKeys
result["isV3Streaming"] = true
}
// V3 chunked streaming fields
if header.Chunked != nil {
index := make([]interface{}, len(header.Chunked.Index))
for i, ci := range header.Chunked.Index {
index[i] = map[string]interface{}{
"offset": ci.Offset,
"size": ci.Size,
}
}
result["chunked"] = map[string]interface{}{
"chunkSize": header.Chunked.ChunkSize,
"totalChunks": header.Chunked.TotalChunks,
"totalSize": header.Chunked.TotalSize,
"index": index,
}
result["isChunked"] = true
}
// Include manifest if present
if header.Manifest != nil {
result["manifest"] = manifestToJS(header.Manifest)
@@ -626,6 +852,131 @@ func smsgQuickDecrypt(this js.Value, args []js.Value) interface{} {
return promiseConstructor.New(handler)
}
// smsgDecryptV3 decrypts a v3 streaming message using LTHN rolling keys.
// JavaScript usage:
//
// const result = await BorgSMSG.decryptV3(encryptedBase64, {
// license: 'user-license-id',
// fingerprint: 'device-fingerprint'
// });
// // result.attachments[0].data is a Uint8Array
func smsgDecryptV3(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("decryptV3 requires 2 arguments: encryptedBase64, {license, fingerprint}"))
return
}
encryptedB64 := args[0].String()
paramsObj := args[1]
// Extract stream params
license := paramsObj.Get("license").String()
fingerprint := ""
if !paramsObj.Get("fingerprint").IsUndefined() {
fingerprint = paramsObj.Get("fingerprint").String()
}
if license == "" {
reject.Invoke(newError("license is required for v3 decryption"))
return
}
params := &smsg.StreamParams{
License: license,
Fingerprint: fingerprint,
}
// Decode base64
data, err := base64.StdEncoding.DecodeString(encryptedB64)
if err != nil {
reject.Invoke(newError("invalid base64: " + err.Error()))
return
}
// Decrypt v3
msg, header, err := smsg.DecryptV3(data, params)
if err != nil {
reject.Invoke(newError("v3 decryption failed: " + err.Error()))
return
}
// Build result with binary attachment data
result := map[string]interface{}{
"body": msg.Body,
"timestamp": msg.Timestamp,
}
if msg.Subject != "" {
result["subject"] = msg.Subject
}
if msg.From != "" {
result["from"] = msg.From
}
// Include header info
if header != nil {
headerResult := map[string]interface{}{
"format": header.Format,
"keyMethod": header.KeyMethod,
}
if header.Cadence != "" {
headerResult["cadence"] = string(header.Cadence)
}
// Include chunked info if present
if header.Chunked != nil {
headerResult["isChunked"] = true
headerResult["chunked"] = map[string]interface{}{
"chunkSize": header.Chunked.ChunkSize,
"totalChunks": header.Chunked.TotalChunks,
"totalSize": header.Chunked.TotalSize,
}
}
result["header"] = headerResult
if header.Manifest != nil {
result["manifest"] = manifestToJS(header.Manifest)
}
}
// Convert attachments with binary data
if len(msg.Attachments) > 0 {
attachments := make([]interface{}, len(msg.Attachments))
for i, att := range msg.Attachments {
// Decode base64 to binary
data, err := base64.StdEncoding.DecodeString(att.Content)
if err != nil {
reject.Invoke(newError("failed to decode attachment: " + err.Error()))
return
}
// Create Uint8Array in JS
uint8Array := js.Global().Get("Uint8Array").New(len(data))
js.CopyBytesToJS(uint8Array, data)
attachments[i] = map[string]interface{}{
"name": att.Name,
"mime": att.MimeType,
"size": len(data),
"data": uint8Array,
}
}
result["attachments"] = attachments
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// messageToJS converts an smsg.Message to a JavaScript object
func messageToJS(msg *smsg.Message) js.Value {
result := map[string]interface{}{
@@ -771,6 +1122,447 @@ func manifestToJS(m *smsg.Manifest) map[string]interface{} {
return result
}
// smsgGetV3ChunkInfo extracts chunk information from a v3 file for seeking.
// JavaScript usage:
//
// const info = await BorgSMSG.getV3ChunkInfo(encryptedBase64);
// console.log(info.chunked.totalChunks);
// console.log(info.chunked.index); // [{offset, size}, ...]
func smsgGetV3ChunkInfo(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 1 {
reject.Invoke(newError("getV3ChunkInfo requires 1 argument: encryptedBase64"))
return
}
encryptedB64 := args[0].String()
// Decode base64
data, err := base64.StdEncoding.DecodeString(encryptedB64)
if err != nil {
reject.Invoke(newError("invalid base64: " + err.Error()))
return
}
// Get v3 header
header, err := smsg.GetV3Header(data)
if err != nil {
reject.Invoke(newError("failed to get v3 header: " + err.Error()))
return
}
result := map[string]interface{}{
"format": header.Format,
"keyMethod": header.KeyMethod,
"cadence": string(header.Cadence),
}
// Include chunked info if present
if header.Chunked != nil {
index := make([]interface{}, len(header.Chunked.Index))
for i, ci := range header.Chunked.Index {
index[i] = map[string]interface{}{
"offset": ci.Offset,
"size": ci.Size,
}
}
result["chunked"] = map[string]interface{}{
"chunkSize": header.Chunked.ChunkSize,
"totalChunks": header.Chunked.TotalChunks,
"totalSize": header.Chunked.TotalSize,
"index": index,
}
result["isChunked"] = true
} else {
result["isChunked"] = false
}
// Include manifest if present
if header.Manifest != nil {
result["manifest"] = manifestToJS(header.Manifest)
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgUnwrapV3CEK unwraps the Content Encryption Key for chunk-by-chunk decryption.
// JavaScript usage:
//
// const cek = await BorgSMSG.unwrapV3CEK(encryptedBase64, {license, fingerprint});
// // cek is base64-encoded CEK for use with decryptV3Chunk
func smsgUnwrapV3CEK(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("unwrapV3CEK requires 2 arguments: encryptedBase64, {license, fingerprint}"))
return
}
encryptedB64 := args[0].String()
paramsObj := args[1]
// Extract stream params
license := paramsObj.Get("license").String()
fingerprint := ""
if !paramsObj.Get("fingerprint").IsUndefined() {
fingerprint = paramsObj.Get("fingerprint").String()
}
if license == "" {
reject.Invoke(newError("license is required"))
return
}
params := &smsg.StreamParams{
License: license,
Fingerprint: fingerprint,
}
// Decode base64
data, err := base64.StdEncoding.DecodeString(encryptedB64)
if err != nil {
reject.Invoke(newError("invalid base64: " + err.Error()))
return
}
// Get header
header, err := smsg.GetV3Header(data)
if err != nil {
reject.Invoke(newError("failed to get v3 header: " + err.Error()))
return
}
// Unwrap CEK
cek, err := smsg.UnwrapCEKFromHeader(header, params)
if err != nil {
reject.Invoke(newError("failed to unwrap CEK: " + err.Error()))
return
}
// Return CEK as base64 for use with decryptV3Chunk
cekB64 := base64.StdEncoding.EncodeToString(cek)
resolve.Invoke(cekB64)
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgDecryptV3Chunk decrypts a single chunk by index.
// JavaScript usage:
//
// const info = await BorgSMSG.getV3ChunkInfo(encryptedBase64);
// const cek = await BorgSMSG.unwrapV3CEK(encryptedBase64, {license, fingerprint});
// for (let i = 0; i < info.chunked.totalChunks; i++) {
// const chunk = await BorgSMSG.decryptV3Chunk(encryptedBase64, cek, i);
// // chunk is Uint8Array of decrypted data
// }
func smsgDecryptV3Chunk(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 3 {
reject.Invoke(newError("decryptV3Chunk requires 3 arguments: encryptedBase64, cekBase64, chunkIndex"))
return
}
encryptedB64 := args[0].String()
cekB64 := args[1].String()
chunkIndex := args[2].Int()
// Decode base64 data
data, err := base64.StdEncoding.DecodeString(encryptedB64)
if err != nil {
reject.Invoke(newError("invalid base64: " + err.Error()))
return
}
// Decode CEK
cek, err := base64.StdEncoding.DecodeString(cekB64)
if err != nil {
reject.Invoke(newError("invalid CEK base64: " + err.Error()))
return
}
// Get header for chunk info
header, err := smsg.GetV3Header(data)
if err != nil {
reject.Invoke(newError("failed to get v3 header: " + err.Error()))
return
}
if header.Chunked == nil {
reject.Invoke(newError("not a chunked v3 file"))
return
}
// Get payload
payload, err := smsg.GetV3Payload(data)
if err != nil {
reject.Invoke(newError("failed to get payload: " + err.Error()))
return
}
// Decrypt the chunk
decrypted, err := smsg.DecryptV3Chunk(payload, cek, chunkIndex, header.Chunked)
if err != nil {
reject.Invoke(newError("failed to decrypt chunk: " + err.Error()))
return
}
// Return as Uint8Array
uint8Array := js.Global().Get("Uint8Array").New(len(decrypted))
js.CopyBytesToJS(uint8Array, decrypted)
resolve.Invoke(uint8Array)
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgParseV3Header parses header from file bytes, returns header info + payload offset.
// This allows streaming: fetch header first, then fetch chunks as needed.
// JavaScript usage:
//
// const headerInfo = await BorgSMSG.parseV3Header(fileBytes);
// // headerInfo.payloadOffset = where encrypted chunks start
// // headerInfo.chunked.index = [{offset, size}, ...] relative to payload
//
// STREAMING: This function uses GetV3HeaderFromPrefix which only needs
// the first few KB of the file. Call it as soon as ~3KB arrives.
func smsgParseV3Header(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 1 {
reject.Invoke(newError("parseV3Header requires 1 argument: Uint8Array"))
return
}
// Get binary data from Uint8Array
uint8Array := args[0]
length := uint8Array.Get("length").Int()
data := make([]byte, length)
js.CopyBytesToGo(data, uint8Array)
// Parse header from prefix - works with partial data!
header, payloadOffset, err := smsg.GetV3HeaderFromPrefix(data)
if err != nil {
reject.Invoke(newError("failed to parse header: " + err.Error()))
return
}
result := map[string]interface{}{
"format": header.Format,
"keyMethod": header.KeyMethod,
"cadence": string(header.Cadence),
"payloadOffset": payloadOffset,
}
// Include wrapped keys for CEK unwrapping
if len(header.WrappedKeys) > 0 {
wrappedKeys := make([]interface{}, len(header.WrappedKeys))
for i, wk := range header.WrappedKeys {
wrappedKeys[i] = map[string]interface{}{
"date": wk.Date,
"wrapped": wk.Wrapped,
}
}
result["wrappedKeys"] = wrappedKeys
}
// Include chunk info
if header.Chunked != nil {
index := make([]interface{}, len(header.Chunked.Index))
for i, ci := range header.Chunked.Index {
index[i] = map[string]interface{}{
"offset": ci.Offset,
"size": ci.Size,
}
}
result["chunked"] = map[string]interface{}{
"chunkSize": header.Chunked.ChunkSize,
"totalChunks": header.Chunked.TotalChunks,
"totalSize": header.Chunked.TotalSize,
"index": index,
}
}
if header.Manifest != nil {
result["manifest"] = manifestToJS(header.Manifest)
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgUnwrapCEKFromHeader unwraps CEK using wrapped keys from header.
// JavaScript usage:
//
// const headerInfo = await BorgSMSG.parseV3Header(fileBytes);
// const cek = await BorgSMSG.unwrapCEKFromHeader(headerInfo.wrappedKeys, {license, fingerprint}, headerInfo.cadence);
func smsgUnwrapCEKFromHeader(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("unwrapCEKFromHeader requires 2-3 arguments: wrappedKeys, {license, fingerprint}, [cadence]"))
return
}
wrappedKeysJS := args[0]
paramsObj := args[1]
// Get cadence (optional, defaults to daily)
cadence := smsg.CadenceDaily
if len(args) >= 3 && !args[2].IsUndefined() {
cadence = smsg.Cadence(args[2].String())
}
// Extract stream params
// js.Value.String() returns "<undefined>" for missing properties, so check explicitly
licenseVal := paramsObj.Get("license")
license := ""
if !licenseVal.IsUndefined() && !licenseVal.IsNull() {
license = licenseVal.String()
}
fingerprint := ""
if !paramsObj.Get("fingerprint").IsUndefined() {
fingerprint = paramsObj.Get("fingerprint").String()
}
if license == "" {
reject.Invoke(newError("license is required"))
return
}
// Convert JS wrapped keys to Go
var wrappedKeys []smsg.WrappedKey
for i := 0; i < wrappedKeysJS.Length(); i++ {
wk := wrappedKeysJS.Index(i)
wrappedKeys = append(wrappedKeys, smsg.WrappedKey{
Date: wk.Get("date").String(),
Wrapped: wk.Get("wrapped").String(),
})
}
// Build header with just the wrapped keys
header := &smsg.Header{
WrappedKeys: wrappedKeys,
Cadence: cadence,
}
params := &smsg.StreamParams{
License: license,
Fingerprint: fingerprint,
Cadence: cadence,
}
// Unwrap CEK
cek, err := smsg.UnwrapCEKFromHeader(header, params)
if err != nil {
reject.Invoke(newError("failed to unwrap CEK: " + err.Error()))
return
}
// Return CEK as Uint8Array
cekArray := js.Global().Get("Uint8Array").New(len(cek))
js.CopyBytesToJS(cekArray, cek)
resolve.Invoke(cekArray)
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// smsgDecryptChunkDirect decrypts raw chunk bytes with CEK.
// JavaScript usage:
//
// const chunkBytes = fileBytes.subarray(payloadOffset + chunk.offset, payloadOffset + chunk.offset + chunk.size);
// const decrypted = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
func smsgDecryptChunkDirect(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("decryptChunkDirect requires 2 arguments: chunkBytes (Uint8Array), cek (Uint8Array)"))
return
}
// Get chunk bytes
chunkArray := args[0]
chunkLen := chunkArray.Get("length").Int()
chunkData := make([]byte, chunkLen)
js.CopyBytesToGo(chunkData, chunkArray)
// Get CEK
cekArray := args[1]
cekLen := cekArray.Get("length").Int()
cek := make([]byte, cekLen)
js.CopyBytesToGo(cek, cekArray)
// Create sigil and decrypt
sigil, err := enchantrix.NewChaChaPolySigil(cek)
if err != nil {
reject.Invoke(newError("failed to create sigil: " + err.Error()))
return
}
decrypted, err := sigil.Out(chunkData)
if err != nil {
reject.Invoke(newError("decryption failed: " + err.Error()))
return
}
// Return as Uint8Array
result := js.Global().Get("Uint8Array").New(len(decrypted))
js.CopyBytesToJS(result, decrypted)
resolve.Invoke(result)
}()
return nil
})
promiseConstructor := js.Global().Get("Promise")
return promiseConstructor.New(handler)
}
// jsToManifest converts a JavaScript object to an smsg.Manifest
func jsToManifest(obj js.Value) *smsg.Manifest {
if obj.IsUndefined() || obj.IsNull() {
@ -861,3 +1653,106 @@ func jsToManifest(obj js.Value) *smsg.Manifest {
return manifest
}
// ========== ABR (Adaptive Bitrate Streaming) Functions ==========
// smsgParseABRManifest parses an ABR manifest from JSON string.
// JavaScript usage:
//
// const manifest = await BorgSMSG.parseABRManifest(jsonString);
// // Returns: {version, title, duration, variants: [{name, bandwidth, width, height, url, ...}], defaultIdx}
func smsgParseABRManifest(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 1 {
reject.Invoke(newError("parseABRManifest requires 1 argument: jsonString"))
return
}
jsonStr := args[0].String()
manifest, err := smsg.ParseABRManifest([]byte(jsonStr))
if err != nil {
reject.Invoke(newError("failed to parse ABR manifest: " + err.Error()))
return
}
// Convert to JS object
variants := make([]interface{}, len(manifest.Variants))
for i, v := range manifest.Variants {
variants[i] = map[string]interface{}{
"name": v.Name,
"bandwidth": v.Bandwidth,
"width": v.Width,
"height": v.Height,
"codecs": v.Codecs,
"url": v.URL,
"chunkCount": v.ChunkCount,
"fileSize": v.FileSize,
}
}
result := map[string]interface{}{
"version": manifest.Version,
"title": manifest.Title,
"duration": manifest.Duration,
"variants": variants,
"defaultIdx": manifest.DefaultIdx,
}
resolve.Invoke(js.ValueOf(result))
}()
return nil
})
return js.Global().Get("Promise").New(handler)
}
// smsgSelectVariant selects the best variant for the given bandwidth.
// JavaScript usage:
//
// const idx = await BorgSMSG.selectVariant(manifest, bandwidthBPS);
// // Returns: index of best variant that fits within 80% of bandwidth
func smsgSelectVariant(this js.Value, args []js.Value) interface{} {
handler := js.FuncOf(func(this js.Value, promiseArgs []js.Value) interface{} {
resolve := promiseArgs[0]
reject := promiseArgs[1]
go func() {
if len(args) < 2 {
reject.Invoke(newError("selectVariant requires 2 arguments: manifest, bandwidthBPS"))
return
}
manifestObj := args[0]
bandwidthBPS := args[1].Int()
// Extract variants from JS object
variantsJS := manifestObj.Get("variants")
if variantsJS.IsUndefined() || variantsJS.Length() == 0 {
reject.Invoke(newError("manifest has no variants"))
return
}
// Build manifest struct
manifest := &smsg.ABRManifest{
Variants: make([]smsg.Variant, variantsJS.Length()),
}
for i := 0; i < variantsJS.Length(); i++ {
v := variantsJS.Index(i)
manifest.Variants[i] = smsg.Variant{
Bandwidth: v.Get("bandwidth").Int(),
}
}
// Select best variant
selectedIdx := manifest.SelectVariant(bandwidthBPS)
resolve.Invoke(selectedIdx)
}()
return nil
})
return js.Global().Get("Promise").New(handler)
}

rfc/README.md Normal file

@ -0,0 +1,40 @@
# Borg RFC Specifications
This directory contains technical specifications (RFCs) for the Borg project.
## Index
| RFC | Title | Status | Description |
|-----|-------|--------|-------------|
| [001](RFC-001-OSS-DRM.md) | Open Source DRM | Proposed | Core DRM system for independent artists |
| [002](RFC-002-SMSG-FORMAT.md) | SMSG Container Format | Draft | Encrypted container format (v1/v2/v3) |
| [003](RFC-003-DATANODE.md) | DataNode | Draft | In-memory filesystem abstraction |
| [004](RFC-004-TIM.md) | Terminal Isolation Matrix | Draft | OCI-compatible container bundle |
| [005](RFC-005-STIM.md) | Encrypted TIM | Draft | ChaCha20-Poly1305 encrypted containers |
| [006](RFC-006-TRIX.md) | TRIX PGP Format | Draft | PGP encryption for archives and accounts |
| [007](RFC-007-LTHN.md) | LTHN Key Derivation | Draft | Rainbow-table resistant rolling keys |
| [008](RFC-008-BORGFILE.md) | Borgfile | Draft | Container compilation syntax |
| [009](RFC-009-STMF.md) | Secure To-Me Form | Draft | Asymmetric form encryption |
| [010](RFC-010-WASM-API.md) | WASM Decryption API | Draft | Browser decryption interface |
## Status Definitions
| Status | Meaning |
|--------|---------|
| **Draft** | Initial specification, subject to change |
| **Proposed** | Ready for review, implementation may begin |
| **Accepted** | Approved, implementation complete |
| **Deprecated** | Superseded by newer specification |
## Contributing
1. Create a new RFC with the next available number
2. Use the template format (see existing RFCs)
3. Start with "Draft" status
4. Update this README index
## Related Documentation
- [CLAUDE.md](../CLAUDE.md) - Developer quick reference
- [docs/](../docs/) - User documentation
- [examples/formats/](../examples/formats/) - Format examples


@ -11,6 +11,9 @@
| Date | Status | Notes |
|------|--------|-------|
| 2026-01-13 | Proposed | **Adaptive Bitrate (ABR)**: HLS-style multi-quality streaming with encrypted variants. New Section 3.7. All Future Work items complete. |
| 2026-01-12 | Proposed | **Chunked streaming**: v3 now supports optional ChunkSize for independently decryptable chunks - enables seek, HTTP Range, and decrypt-while-downloading. |
| 2026-01-12 | Proposed | **v3 Streaming**: LTHN rolling keys with configurable cadence (daily/12h/6h/1h). CEK wrapping for zero-trust streaming. WASM v1.3.0 with decryptV3(). |
| 2026-01-10 | Proposed | Technical review passed. Fixed section numbering (7.x, 8.x, 9.x, 11.x). Updated WASM size to 5.9MB. Implementation verified complete for stated scope. |
---
@ -142,14 +145,16 @@ Key properties:
#### Format Versions
| Format | Payload Structure | Size | Speed | Use Case |
|--------|------------------|------|-------|----------|
| **v1** | JSON with base64-encoded attachments | +33% overhead | Baseline | Legacy |
| **v2** | Binary header + raw attachments + zstd | ~Original size | 3-10x faster | Download-to-own |
| **v3** | CEK + wrapped keys + rolling LTHN | ~Original size | 3-10x faster | **Streaming** |
| **v3+chunked** | v3 with independently decryptable chunks | ~Original size | Seekable | **Chunked streaming** |
v2 is recommended for download-to-own (perpetual license). v3 is recommended for streaming (time-limited access). v3 with chunking is recommended for large files requiring seek capability or decrypt-while-downloading.
### 3.3 Key Derivation (v1/v2)
```
License Key (password)
@ -168,7 +173,136 @@ Simple, auditable, no key escrow.
**Note on password hashing**: SHA-256 is used for simplicity and speed. For high-value content, artists may choose to use stronger KDFs (Argon2, scrypt) in custom implementations. The format supports algorithm negotiation via the header.
### 3.4 Streaming Key Derivation (v3)
v3 format uses **LTHN rolling keys** for zero-trust streaming. The platform controls key refresh cadence.
```
┌──────────────────────────────────────────────────────────────────┐
│ v3 STREAMING KEY FLOW │
├──────────────────────────────────────────────────────────────────┤
│ │
│ SERVER (encryption time): │
│ ───────────────────────── │
│ 1. Generate random CEK (Content Encryption Key) │
│ 2. Encrypt content with CEK (one-time) │
│ 3. For current period AND next period: │
│ streamKey = SHA256(LTHN(period:license:fingerprint)) │
│ wrappedKey = ChaCha(CEK, streamKey) │
│ 4. Store wrapped keys in header (CEK never transmitted) │
│ │
│ CLIENT (decryption time): │
│ ──────────────────────── │
│ 1. Derive streamKey = SHA256(LTHN(period:license:fingerprint)) │
│ 2. Try to unwrap CEK from current period key │
│ 3. If fails, try next period key │
│ 4. Decrypt content with unwrapped CEK │
│ │
└──────────────────────────────────────────────────────────────────┘
```
#### LTHN Hash Function
LTHN is rainbow-table resistant because the salt is derived from the input itself:
```
LTHN(input) = SHA256(input + reverse_leet(input))
where reverse_leet swaps: o↔0, l↔1, e↔3, a↔4, s↔z, t↔7
Example:
LTHN("2026-01-12:license:fp")
= SHA256("2026-01-12:license:fp" + "pf:3zn3ci1:21-10-6202")
```
You cannot compute the hash without knowing the original input.
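A minimal Go sketch of the construction described above (the canonical implementation is the project's crypt package; the leet mapping is inferred from the example, and hex-encoding the LTHN output before the outer SHA-256 is an assumption):
```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"strings"
)

// reverse_leet as inferred from the example above: reverse the string,
// then substitute o→0, l→1, e→3, a→4, s→z, t→7.
var leet = strings.NewReplacer("o", "0", "l", "1", "e", "3", "a", "4", "s", "z", "t", "7")

func reverseLeet(s string) string {
	r := []rune(s)
	for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
		r[i], r[j] = r[j], r[i]
	}
	return leet.Replace(string(r))
}

// lthn returns hex(SHA256(input + reverse_leet(input))). Hex output is an
// assumption about the crypt package's return value.
func lthn(input string) string {
	sum := sha256.Sum256([]byte(input + reverseLeet(input)))
	return hex.EncodeToString(sum[:])
}

func main() {
	// streamKey = SHA256(LTHN(period:license:fingerprint))
	streamKey := sha256.Sum256([]byte(lthn("2026-01-12:license:fp")))
	fmt.Printf("stream key: %x\n", streamKey)
}
```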
#### Cadence Options
The platform chooses the key refresh rate. Faster cadence = tighter access control.
| Cadence | Period Format | Rolling Window | Use Case |
|---------|---------------|----------------|----------|
| `daily` | `2026-01-12` | 24-48 hours | Standard streaming |
| `12h` | `2026-01-12-AM/PM` | 12-24 hours | Premium content |
| `6h` | `2026-01-12-00/06/12/18` | 6-12 hours | High-value content |
| `1h` | `2026-01-12-15` | 1-2 hours | Live events |
The rolling window ensures smooth key transitions. At any time, both the current period key AND the next period key are valid.
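A hypothetical helper that maps a timestamp to its period string, following the table above (assumes UTC; the shipped derivation lives in pkg/smsg and may differ in detail):
```go
import (
	"fmt"
	"time"
)

// periodFor maps a timestamp to its period string for a given cadence,
// following the table above. Illustrative only.
func periodFor(t time.Time, cadence string) string {
	t = t.UTC()
	day := t.Format("2006-01-02")
	switch cadence {
	case "12h":
		if t.Hour() < 12 {
			return day + "-AM"
		}
		return day + "-PM"
	case "6h":
		return fmt.Sprintf("%s-%02d", day, (t.Hour()/6)*6) // -00, -06, -12, -18
	case "1h":
		return fmt.Sprintf("%s-%02d", day, t.Hour())
	default: // "daily"
		return day
	}
}
```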
#### Zero-Trust Properties
- **Server never stores keys** - Derived on-demand from LTHN
- **Keys auto-expire** - No revocation mechanism needed
- **Sharing keys is pointless** - They expire within the cadence window
- **Fingerprint binds to device** - Different device = different key
- **License ties to user** - Different user = different key
### 3.5 Chunked Streaming (v3 with ChunkSize)
When `StreamParams.ChunkSize > 0`, v3 format splits content into independently decryptable chunks, enabling:
- **Decrypt-while-downloading** - Play media as chunks arrive
- **HTTP Range requests** - Fetch specific chunks by byte offset
- **Seekable playback** - Jump to any position without decrypting previous chunks
```
┌──────────────────────────────────────────────────────────────────┐
│ V3 CHUNKED FORMAT │
├──────────────────────────────────────────────────────────────────┤
│ │
│ Header (cleartext): │
│ format: "v3" │
│ chunked: { │
│ chunkSize: 1048576, // 1MB default │
│ totalChunks: N, │
│ totalSize: X, // unencrypted total │
│ index: [ // for HTTP Range / seeking │
│ { offset: 0, size: Y }, │
│ { offset: Y, size: Z }, │
│ ... │
│ ] │
│ } │
│ wrappedKeys: [...] // same as non-chunked v3 │
│ │
│ Payload: │
│ [chunk 0: nonce + encrypted + tag] │
│ [chunk 1: nonce + encrypted + tag] │
│ ... │
│ [chunk N: nonce + encrypted + tag] │
│ │
└──────────────────────────────────────────────────────────────────┘
```
**Key insight**: Each chunk is encrypted with the same CEK but gets its own random nonce, making chunks independently decryptable. The chunk index in the header enables:
1. **Seeking**: Calculate which chunk contains byte offset X, fetch just that chunk
2. **Range requests**: Use HTTP Range headers to fetch specific encrypted chunks
3. **Streaming**: Decrypt chunk 0 for metadata, then stream chunks 1-N as they arrive
**Usage example**:
```go
params := &StreamParams{
License: "user-license",
Fingerprint: "device-fp",
ChunkSize: 1024 * 1024, // 1MB chunks
}
// Encrypt with chunking
encrypted, _ := EncryptV3(msg, params, manifest)
// For streaming playback:
header, _ := GetV3Header(encrypted)
cek, _ := UnwrapCEKFromHeader(header, params)
payload, _ := GetV3Payload(encrypted)
for i := 0; i < header.Chunked.TotalChunks; i++ {
chunk, _ := DecryptV3Chunk(payload, cek, i, header.Chunked)
player.Write(chunk) // Stream to audio/video player
}
```
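For seeking, the same header also lets a player map a plaintext byte offset to the chunk that contains it. A sketch, not part of the package API; it assumes every chunk except the last carries exactly ChunkSize plaintext bytes:
```go
import "errors"

// chunkForOffset maps a plaintext byte offset to the chunk holding it and the
// position within that chunk. Sketch only.
func chunkForOffset(chunkSize, totalSize, offset int64) (chunkIdx, within int64, err error) {
	if chunkSize <= 0 || offset < 0 || offset >= totalSize {
		return 0, 0, errors.New("offset out of range")
	}
	return offset / chunkSize, offset % chunkSize, nil
}

// Example, reusing the header from the snippet above:
//   idx, within, _ := chunkForOffset(int64(header.Chunked.ChunkSize),
//       int64(header.Chunked.TotalSize), seekPos)
//   chunk, _ := DecryptV3Chunk(payload, cek, int(idx), header.Chunked)
//   _ = chunk[within:] // playback resumes here
```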
### 3.6 Supported Content Types
SMSG is content-agnostic. Any file can be an attachment:
@ -183,6 +317,95 @@ SMSG is content-agnostic. Any file can be an attachment:
Multiple attachments per SMSG are supported (e.g., album + cover art + PDF booklet).
### 3.7 Adaptive Bitrate Streaming (ABR)
For large video content, ABR enables automatic quality switching based on network conditions—like HLS/DASH but with ChaCha20-Poly1305 encryption.
**Architecture:**
```
ABR Manifest (manifest.json)
├── Title: "My Video"
├── Version: "abr-v1"
├── Variants: [1080p, 720p, 480p, 360p]
└── DefaultIdx: 1 (720p)
track-1080p.smsg ──┐
track-720p.smsg ──┼── Each is standard v3 chunked SMSG
track-480p.smsg ──┤ Same password decrypts ALL variants
track-360p.smsg ──┘
```
**ABR Manifest Format:**
```json
{
"version": "abr-v1",
"title": "Content Title",
"duration": 300,
"variants": [
{
"name": "360p",
"bandwidth": 500000,
"width": 640,
"height": 360,
"codecs": "avc1.640028,mp4a.40.2",
"url": "track-360p.smsg",
"chunkCount": 12,
"fileSize": 18750000
},
{
"name": "720p",
"bandwidth": 2500000,
"width": 1280,
"height": 720,
"codecs": "avc1.640028,mp4a.40.2",
"url": "track-720p.smsg",
"chunkCount": 48,
"fileSize": 93750000
}
],
"defaultIdx": 1
}
```
**Bandwidth Estimation Algorithm:**
1. Measure download time for each chunk
2. Calculate bits per second: `(bytes × 8 × 1000) / timeMs`
3. Average last 3 samples for stability
4. Apply 80% safety factor to prevent buffering
**Variant Selection:**
```
Selected = highest quality where (bandwidth × 0.8) >= variant.bandwidth
```
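A sketch of both steps (the shipped logic lives in pkg/smsg/abr.go and may differ; the local types here are illustrative stand-ins for the real ones in pkg/smsg/types.go):
```go
// Minimal local types for illustration only.
type variant struct{ Bandwidth int }

type sample struct{ bytes, timeMs int64 }

// estimateBandwidth returns bits per second averaged over the last three
// samples, with the 80% safety factor applied once here. Assumes timeMs > 0.
func estimateBandwidth(samples []sample) float64 {
	if len(samples) == 0 {
		return 0
	}
	if len(samples) > 3 {
		samples = samples[len(samples)-3:]
	}
	var sum float64
	for _, s := range samples {
		sum += float64(s.bytes*8*1000) / float64(s.timeMs)
	}
	return (sum / float64(len(samples))) * 0.8
}

// selectVariant returns the highest variant whose bandwidth fits the
// (already discounted) estimate, assuming variants are ordered from lowest
// to highest bandwidth as in the manifest example above. Falls back to 0.
func selectVariant(variants []variant, estimatedBPS float64) int {
	best := 0
	for i, v := range variants {
		if float64(v.Bandwidth) <= estimatedBPS {
			best = i
		}
	}
	return best
}
```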
**Key Properties:**
- **Same password for all variants**: CEK unwrapped once, works everywhere
- **Chunk-boundary switching**: Clean cuts, no partial chunk issues
- **Independent variants**: No cross-file dependencies
- **CDN-friendly**: Each variant is a standard file, cacheable separately
**Creating ABR Content:**
```bash
# Use mkdemo-abr to create variant set from source video
go run ./cmd/mkdemo-abr input.mp4 output-dir/ [password]
# Output:
# output-dir/manifest.json (ABR manifest)
# output-dir/track-1080p.smsg (v3 chunked, 5 Mbps)
# output-dir/track-720p.smsg (v3 chunked, 2.5 Mbps)
# output-dir/track-480p.smsg (v3 chunked, 1 Mbps)
# output-dir/track-360p.smsg (v3 chunked, 500 Kbps)
```
**Standard Presets:**
| Name | Resolution | Bitrate | Use Case |
|------|------------|---------|----------|
| 1080p | 1920×1080 | 5 Mbps | High quality, fast connections |
| 720p | 1280×720 | 2.5 Mbps | Default, most connections |
| 480p | 854×480 | 1 Mbps | Mobile, medium connections |
| 360p | 640×360 | 500 Kbps | Slow connections, previews |
## 4. Demo Page Architecture
**Live Demo**: https://demo.dapp.fm
@ -479,7 +702,7 @@ Local playback Third-party hosting
## 8. Implementation Status
### 8.1 Completed
- [x] SMSG format specification (v1, v2, v3)
- [x] Go encryption/decryption library (pkg/smsg)
- [x] WASM build for browser (pkg/wasm/stmf)
- [x] Native desktop app (Wails, cmd/dapp-fm-app)
@ -491,17 +714,22 @@ Local playback Third-party hosting
- [x] **Manifest links** - Artist platform links in metadata
- [x] **Live demo** - https://demo.dapp.fm
- [x] RFC-quality demo file with cryptographically secure password
- [x] **v3 streaming format** - LTHN rolling keys with CEK wrapping
- [x] **Configurable cadence** - daily/12h/6h/1h key rotation
- [x] **WASM v1.3.0** - `BorgSMSG.decryptV3()` for streaming
- [x] **Chunked streaming** - Independently decryptable chunks for seek/streaming
- [x] **Adaptive Bitrate (ABR)** - HLS-style multi-quality streaming with encrypted variants
### 8.2 Fixed Issues
- [x] ~~Double base64 encoding bug~~ - Fixed by using binary format
- [x] ~~Demo file format detection~~ - v2 format auto-detected via header
- [x] ~~Key wrapping for streaming~~ - Implemented in v3 format
### 8.3 Future Work
- [ ] Key wrapping for multi-license files (dapp.radio.fm)
- [ ] Expiring license enforcement
- [x] Multi-bitrate adaptive streaming (see Section 3.7 ABR)
- [x] Payment integration examples (see `docs/payment-integration.md`)
- [x] IPFS distribution guide (see `docs/ipfs-distribution.md`)
- [x] Demo page "Streaming" tab for v3 showcase
## 9. Usage Examples
@ -588,10 +816,11 @@ SMSG includes version and format fields for forward compatibility:
|---------|--------|----------|
| 1.0 | v1 | ChaCha20-Poly1305, JSON+base64 attachments |
| 1.0 | **v2** | Binary attachments, zstd compression (25% smaller, 3-10x faster) |
| 1.0 | **v3** | LTHN rolling keys, CEK wrapping, chunked streaming |
| 1.0 | **v3+ABR** | Multi-quality variants with adaptive bitrate switching |
| 2 (future) | - | Algorithm negotiation, multiple KDFs |
Decoders MUST reject versions they don't understand. Use v2 for download-to-own, v3 for streaming, v3+ABR for video.
### 11.2 Third-Party Implementations
@ -634,6 +863,8 @@ The player is embeddable:
- WASM Module: `pkg/wasm/stmf/`
- Native App: `cmd/dapp-fm-app/`
- Demo Creator Tool: `cmd/mkdemo/`
- ABR Creator Tool: `cmd/mkdemo-abr/`
- ABR Package: `pkg/smsg/abr.go`
## 13. License

rfc/RFC-002-SMSG-FORMAT.md Normal file

@ -0,0 +1,480 @@
# RFC-002: SMSG Container Format
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-001, RFC-007
---
## Abstract
SMSG (Secure Message) is an encrypted container format using ChaCha20-Poly1305 authenticated encryption. This RFC specifies the binary wire format, versioning, and encoding rules for SMSG files.
## 1. Overview
SMSG provides:
- Authenticated encryption (ChaCha20-Poly1305)
- Public metadata (manifest) readable without decryption
- Multiple format versions (v1 legacy, v2 binary, v3 streaming)
- Optional chunking for large files and seeking
## 2. File Structure
### 2.1 Binary Layout
```
Offset Size Field
------ ----- ------------------------------------
0 4 Magic: "SMSG" (ASCII)
4 2 Version: uint16 little-endian
6 3 Header Length: 3-byte big-endian
9 N Header JSON (plaintext)
9+N M Encrypted Payload
```
### 2.2 Magic Number
| Format | Value |
|--------|-------|
| Binary | `0x53 0x4D 0x53 0x47` |
| ASCII | `SMSG` |
| Base64 (first 6 chars) | `U01TRw` |
### 2.3 Version Field
Current version: `0x0001` (1)
Decoders MUST reject versions they don't understand.
### 2.4 Header Length
3 bytes, big-endian unsigned integer. Supports headers up to 16 MB.
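Putting the prefix together, a reader can validate the magic and locate the header and payload in a few lines (a sketch; not the reference implementation):
```go
import (
	"encoding/binary"
	"errors"
)

// parsePrefix splits an SMSG file into its plaintext header JSON and the
// encrypted payload, following the layout in Section 2.1. Sketch only; the
// reference decoder lives in pkg/smsg.
func parsePrefix(data []byte) (headerJSON, payload []byte, version uint16, err error) {
	if len(data) < 9 || string(data[:4]) != "SMSG" {
		return nil, nil, 0, errors.New("invalid SMSG magic")
	}
	version = binary.LittleEndian.Uint16(data[4:6])           // uint16, little-endian
	hlen := int(data[6])<<16 | int(data[7])<<8 | int(data[8]) // 3 bytes, big-endian
	if len(data) < 9+hlen {
		return nil, nil, 0, errors.New("truncated header")
	}
	return data[9 : 9+hlen], data[9+hlen:], version, nil
}
```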
## 3. Header Format (JSON)
Header is always plaintext (never encrypted), enabling metadata inspection without decryption.
### 3.1 Base Header
```json
{
"version": "1.0",
"algorithm": "chacha20poly1305",
"format": "v2",
"compression": "zstd",
"manifest": { ... }
}
```
### 3.2 V3 Header Extensions
```json
{
"version": "1.0",
"algorithm": "chacha20poly1305",
"format": "v3",
"compression": "zstd",
"keyMethod": "lthn-rolling",
"cadence": "daily",
"manifest": { ... },
"wrappedKeys": [
{"date": "2026-01-13", "wrapped": "<base64>"},
{"date": "2026-01-14", "wrapped": "<base64>"}
],
"chunked": {
"chunkSize": 1048576,
"totalChunks": 42,
"totalSize": 44040192,
"index": [
{"offset": 0, "size": 1048600},
{"offset": 1048600, "size": 1048600}
]
}
}
```
### 3.3 Header Field Reference
| Field | Type | Values | Description |
|-------|------|--------|-------------|
| version | string | "1.0" | Format version string |
| algorithm | string | "chacha20poly1305" | Always ChaCha20-Poly1305 |
| format | string | "", "v2", "v3" | Payload format version |
| compression | string | "", "gzip", "zstd" | Compression algorithm |
| keyMethod | string | "", "lthn-rolling" | Key derivation method |
| cadence | string | "daily", "12h", "6h", "1h" | Rolling key period (v3) |
| manifest | object | - | Content metadata |
| wrappedKeys | array | - | CEK wrapped for each period (v3) |
| chunked | object | - | Chunk index for seeking (v3) |
## 4. Manifest Structure
### 4.1 Complete Manifest
```go
type Manifest struct {
Title string `json:"title,omitempty"`
Artist string `json:"artist,omitempty"`
Album string `json:"album,omitempty"`
Genre string `json:"genre,omitempty"`
Year int `json:"year,omitempty"`
ReleaseType string `json:"release_type,omitempty"`
Duration int `json:"duration,omitempty"`
Format string `json:"format,omitempty"`
ExpiresAt int64 `json:"expires_at,omitempty"`
IssuedAt int64 `json:"issued_at,omitempty"`
LicenseType string `json:"license_type,omitempty"`
Tracks []Track `json:"tracks,omitempty"`
Links map[string]string `json:"links,omitempty"`
Tags []string `json:"tags,omitempty"`
Extra map[string]string `json:"extra,omitempty"`
}
type Track struct {
Title string `json:"title"`
Start float64 `json:"start"`
End float64 `json:"end,omitempty"`
Type string `json:"type,omitempty"`
TrackNum int `json:"track_num,omitempty"`
}
```
### 4.2 Manifest Field Reference
| Field | Type | Range | Description |
|-------|------|-------|-------------|
| title | string | 0-255 chars | Display name (required for discovery) |
| artist | string | 0-255 chars | Creator name |
| album | string | 0-255 chars | Album/collection name |
| genre | string | 0-255 chars | Genre classification |
| year | int | 0-9999 | Release year (0 = unset) |
| releaseType | string | enum | "single", "album", "ep", "mix" |
| duration | int | 0+ | Total duration in seconds |
| format | string | any | Platform format string (e.g., "dapp.fm/v1") |
| expiresAt | int64 | 0+ | Unix timestamp (0 = never expires) |
| issuedAt | int64 | 0+ | Unix timestamp of license issue |
| licenseType | string | enum | "perpetual", "rental", "stream", "preview" |
| tracks | []Track | - | Track boundaries for multi-track releases |
| links | map | - | Platform name → URL (e.g., "bandcamp" → URL) |
| tags | []string | - | Arbitrary string tags |
| extra | map | - | Free-form key-value extension data |
## 5. Format Versions
### 5.1 Version Comparison
| Aspect | v1 (Legacy) | v2 (Binary) | v3 (Streaming) |
|--------|-------------|-------------|----------------|
| Payload Structure | JSON only | Length-prefixed JSON + binary | Same as v2 |
| Attachment Encoding | Base64 in JSON | Size field + raw binary | Size field + raw binary |
| Compression | None | zstd (default) | zstd (default) |
| Key Derivation | SHA256(password) | SHA256(password) | LTHN rolling keys |
| Chunked Support | No | No | Yes (optional) |
| Size Overhead | ~33% | ~25% | ~15% |
| Use Case | Legacy | General purpose | Time-limited streaming |
### 5.2 V1 Format (Legacy)
**Payload (after decryption):**
```json
{
"body": "Message content",
"subject": "Optional subject",
"from": "sender@example.com",
"to": "recipient@example.com",
"timestamp": 1673644800,
"attachments": [
{
"name": "file.bin",
"content": "base64encodeddata==",
"mime": "application/octet-stream",
"size": 1024
}
],
"reply_key": {
"public_key": "base64x25519key==",
"algorithm": "x25519"
},
"meta": {
"custom_field": "custom_value"
}
}
```
- Attachments base64-encoded inline in JSON (~33% overhead)
- Simple but inefficient for large files
### 5.3 V2 Format (Binary)
**Payload structure (after decryption and decompression):**
```
Offset Size Field
------ ----- ------------------------------------
0 4 Message JSON Length (big-endian uint32)
4 N Message JSON (attachments have size only, no content)
4+N B1 Attachment 1 raw binary
4+N+B1 B2 Attachment 2 raw binary
...
```
**Message JSON (within payload):**
```json
{
"body": "Message text",
"subject": "Subject",
"from": "sender",
"attachments": [
{"name": "file1.bin", "mime": "application/octet-stream", "size": 4096},
{"name": "file2.bin", "mime": "image/png", "size": 65536}
],
"timestamp": 1673644800
}
```
- Attachment `content` field omitted; binary data follows JSON
- Compressed before encryption
- 3-10x faster than v1, ~25% smaller
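A decoder for this layout, applied after decryption and decompression, can be sketched as follows (illustrative; the Message and Attachment types are defined in Section 8.1):
```go
import (
	"encoding/binary"
	"encoding/json"
	"errors"
)

// parseV2Payload splits a decrypted, decompressed v2 payload into the message
// JSON and per-attachment binary data. Sketch only.
func parseV2Payload(payload []byte) (*Message, error) {
	if len(payload) < 4 {
		return nil, errors.New("invalid SMSG payload")
	}
	jsonLen := int(binary.BigEndian.Uint32(payload[:4]))
	if jsonLen > len(payload)-4 {
		return nil, errors.New("invalid SMSG payload")
	}
	var msg Message
	if err := json.Unmarshal(payload[4:4+jsonLen], &msg); err != nil {
		return nil, err
	}
	// Attachment binaries follow the JSON in declared order, Size bytes each.
	cursor := 4 + jsonLen
	for i := range msg.Attachments {
		size := msg.Attachments[i].Size
		if size < 0 || cursor+size > len(payload) {
			return nil, errors.New("invalid SMSG payload")
		}
		msg.Attachments[i].Data = payload[cursor : cursor+size]
		cursor += size
	}
	return &msg, nil
}
```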
### 5.4 V3 Format (Streaming)
Same payload structure as v2, but with:
- LTHN-derived rolling keys instead of password
- CEK (Content Encryption Key) wrapped for each time period
- Optional chunking for seek support
**CEK Wrapping:**
```
For each rolling period:
streamKey = SHA256(LTHN(period:license:fingerprint))
wrappedKey = ChaCha20-Poly1305(CEK, streamKey)
```
**Rolling Periods (cadence):**
| Cadence | Period Format | Example |
|---------|---------------|---------|
| daily | YYYY-MM-DD | "2026-01-13" |
| 12h | YYYY-MM-DD-AM/PM | "2026-01-13-AM" |
| 6h | YYYY-MM-DD-HH | "2026-01-13-00", "2026-01-13-06" |
| 1h | YYYY-MM-DD-HH | "2026-01-13-15" |
### 5.5 V3 Chunked Format
**Payload (independently decryptable chunks):**
```
Offset Size Content
------ ----- ----------------------------------
0 1048600 Chunk 0: [24-byte nonce][ciphertext][16-byte tag]
1048600 1048600 Chunk 1: [24-byte nonce][ciphertext][16-byte tag]
...
```
- Each chunk encrypted separately with same CEK, unique nonce
- Enables seeking, HTTP Range requests
- Chunk size typically 1MB (configurable)
## 6. Encryption
### 6.1 Algorithm
XChaCha20-Poly1305 (extended nonce variant)
| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (XChaCha) |
| Tag size | 16 bytes |
### 6.2 Ciphertext Structure
```
[24-byte XChaCha20 nonce][encrypted data][16-byte Poly1305 tag]
```
**Critical**: Nonces are embedded IN the ciphertext by the Enchantrix library, NOT transmitted separately in headers.
### 6.3 Key Derivation
**V1/V2 (Password-based):**
```go
key := sha256.Sum256([]byte(password)) // 32 bytes
```
**V3 (LTHN Rolling):**
```go
// For each period in rolling window:
streamKey := sha256.Sum256([]byte(
crypt.NewService().Hash(crypt.LTHN, period + ":" + license + ":" + fingerprint)
))
```
## 7. Compression
| Value | Algorithm | Notes |
|-------|-----------|-------|
| "" (empty) | None | Raw bytes, default for v1 |
| "gzip" | RFC 1952 | Stdlib, WASM compatible |
| "zstd" | Zstandard | Default for v2/v3, better ratio |
**Order**: Compress → Encrypt (on write), Decrypt → Decompress (on read)
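For example, with the Enchantrix sigil used elsewhere in this document (a sketch; the choice of the klauspost zstd package is an assumption about the implementation):
```go
import "github.com/klauspost/compress/zstd"

// sealer matches the Enchantrix sigil used in this document: In encrypts,
// Out decrypts.
type sealer interface {
	In([]byte) ([]byte, error)
	Out([]byte) ([]byte, error)
}

// sealZstd compresses, then encrypts (write path).
func sealZstd(s sealer, plain []byte) ([]byte, error) {
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		return nil, err
	}
	defer enc.Close()
	return s.In(enc.EncodeAll(plain, nil))
}

// openZstd decrypts, then decompresses (read path).
func openZstd(s sealer, sealed []byte) ([]byte, error) {
	compressed, err := s.Out(sealed)
	if err != nil {
		return nil, err
	}
	dec, err := zstd.NewReader(nil)
	if err != nil {
		return nil, err
	}
	defer dec.Close()
	return dec.DecodeAll(compressed, nil)
}
```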
## 8. Message Structure
### 8.1 Go Types
```go
type Message struct {
From string `json:"from,omitempty"`
To string `json:"to,omitempty"`
Subject string `json:"subject,omitempty"`
Body string `json:"body"`
Timestamp int64 `json:"timestamp,omitempty"`
Attachments []Attachment `json:"attachments,omitempty"`
ReplyKey *KeyInfo `json:"reply_key,omitempty"`
Meta map[string]string `json:"meta,omitempty"`
}
type Attachment struct {
Name string `json:"name"`
Mime string `json:"mime"`
Size int `json:"size"`
Content string `json:"content,omitempty"` // Base64, v1 only
Data []byte `json:"-"` // Binary, v2/v3
}
type KeyInfo struct {
PublicKey string `json:"public_key"`
Algorithm string `json:"algorithm"`
}
```
### 8.2 Stream Parameters (V3)
```go
type StreamParams struct {
License string `json:"license"` // User's license identifier
Fingerprint string `json:"fingerprint"` // Device fingerprint (optional)
Cadence string `json:"cadence"` // Rolling period: daily, 12h, 6h, 1h
ChunkSize int `json:"chunk_size"` // Bytes per chunk (default 1MB)
}
```
## 9. Error Handling
### 9.1 Error Types
```go
var (
ErrInvalidMagic = errors.New("invalid SMSG magic")
ErrInvalidPayload = errors.New("invalid SMSG payload")
ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
ErrPasswordRequired = errors.New("password is required")
ErrEmptyMessage = errors.New("message cannot be empty")
ErrStreamKeyExpired = errors.New("stream key expired (outside rolling window)")
ErrNoValidKey = errors.New("no valid wrapped key found for current date")
ErrLicenseRequired = errors.New("license is required for stream decryption")
)
```
### 9.2 Error Conditions
| Error | Cause | Recovery |
|-------|-------|----------|
| ErrInvalidMagic | File magic is not "SMSG" | Verify file format |
| ErrInvalidPayload | Corrupted payload structure | Re-download or restore |
| ErrDecryptionFailed | Wrong password or corrupted | Try correct password |
| ErrPasswordRequired | Empty password provided | Provide password |
| ErrStreamKeyExpired | Time outside rolling window | Wait for valid period or update file |
| ErrNoValidKey | No wrapped key for current period | License/fingerprint mismatch |
| ErrLicenseRequired | Empty StreamParams.License | Provide license identifier |
## 10. Constants
```go
const Magic = "SMSG" // 4 ASCII bytes
const Version = "1.0" // String version identifier
const DefaultChunkSize = 1024 * 1024 // 1 MB
const FormatV1 = "" // Legacy JSON format
const FormatV2 = "v2" // Binary format
const FormatV3 = "v3" // Streaming with rolling keys
const KeyMethodDirect = "" // Password-direct (v1/v2)
const KeyMethodLTHNRolling = "lthn-rolling" // LTHN rolling (v3)
const CompressionNone = ""
const CompressionGzip = "gzip"
const CompressionZstd = "zstd"
const CadenceDaily = "daily"
const CadenceHalfDay = "12h"
const CadenceQuarter = "6h"
const CadenceHourly = "1h"
```
## 11. API Usage
### 11.1 V1 (Legacy)
```go
msg := NewMessage("Hello").WithSubject("Test")
encrypted, _ := Encrypt(msg, "password")
decrypted, _ := Decrypt(encrypted, "password")
```
### 11.2 V2 (Binary)
```go
msg := NewMessage("Hello").AddBinaryAttachment("file.bin", data, "application/octet-stream")
manifest := NewManifest("My Content")
encrypted, _ := EncryptV2WithManifest(msg, "password", manifest)
decrypted, _ := Decrypt(encrypted, "password")
```
### 11.3 V3 (Streaming)
```go
msg := NewMessage("Stream content")
params := &StreamParams{
License: "user-license",
Fingerprint: "device-fingerprint",
Cadence: CadenceDaily,
ChunkSize: 1048576,
}
manifest := NewManifest("Stream Track")
manifest.LicenseType = "stream"
encrypted, _ := EncryptV3(msg, params, manifest)
decrypted, header, _ := DecryptV3(encrypted, params)
```
## 12. Implementation Reference
- Types: `pkg/smsg/types.go`
- Encryption: `pkg/smsg/smsg.go`
- Streaming: `pkg/smsg/stream.go`
- WASM: `pkg/wasm/stmf/main.go`
- Tests: `pkg/smsg/*_test.go`
## 13. Security Considerations
1. **Nonce uniqueness**: Enchantrix generates random 24-byte nonces automatically
2. **Key entropy**: Passwords should have 64+ bits entropy (no key stretching)
3. **Manifest exposure**: Manifest is public; never include sensitive data
4. **Constant-time crypto**: Enchantrix uses constant-time comparison for auth tags
5. **Rolling window**: V3 keys valid for current + next period only
## 14. Future Work
- [ ] Key stretching (Argon2 option)
- [ ] Multi-recipient encryption
- [ ] Streaming API with ReadableStream
- [ ] Hardware key support (WebAuthn)

rfc/RFC-003-DATANODE.md Normal file

@ -0,0 +1,326 @@
# RFC-003: DataNode In-Memory Filesystem
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
---
## Abstract
DataNode is an in-memory filesystem abstraction implementing Go's `fs.FS` interface. It provides the foundation for collecting, manipulating, and serializing file trees without touching disk.
## 1. Overview
DataNode serves as the core data structure for:
- Collecting files from various sources (GitHub, websites, PWAs)
- Building container filesystems (TIM rootfs)
- Serializing to/from tar archives
- Encrypting as TRIX format
## 2. Implementation
### 2.1 Core Type
```go
type DataNode struct {
files map[string]*dataFile
}
type dataFile struct {
name string
content []byte
modTime time.Time
}
```
**Key insight**: DataNode uses a **flat key-value map**, not a nested tree structure. Paths are stored as keys directly, and directories are implicit (derived from path prefixes).
### 2.2 fs.FS Implementation
DataNode implements these interfaces:
| Interface | Method | Description |
|-----------|--------|-------------|
| `fs.FS` | `Open(name string)` | Returns fs.File for path |
| `fs.StatFS` | `Stat(name string)` | Returns fs.FileInfo |
| `fs.ReadDirFS` | `ReadDir(name string)` | Lists directory contents |
### 2.3 Internal Helper Types
```go
// File metadata
type dataFileInfo struct {
name string
size int64
modTime time.Time
}
func (fi *dataFileInfo) Mode() fs.FileMode { return 0444 } // Read-only
// Directory metadata
type dirInfo struct {
name string
}
func (di *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }
// File reader (implements fs.File)
type dataFileReader struct {
info *dataFileInfo
reader *bytes.Reader
}
// Directory reader (implements fs.File)
type dirFile struct {
info *dirInfo
entries []fs.DirEntry
offset int
}
```
## 3. Operations
### 3.1 Construction
```go
// Create empty DataNode
node := datanode.New()
// Returns: &DataNode{files: make(map[string]*dataFile)}
```
### 3.2 Adding Files
```go
// Add file with content
node.AddData("path/to/file.txt", []byte("content"))
// Trailing slashes are ignored (treated as directory indicator)
node.AddData("path/to/dir/", []byte("")) // Stored as "path/to/dir"
```
**Note**: Parent directories are NOT explicitly created. They are implicit based on path prefixes.
### 3.3 File Access
```go
// Open file (fs.FS interface)
f, err := node.Open("path/to/file.txt")
if err != nil {
// fs.ErrNotExist if not found
}
defer f.Close()
content, _ := io.ReadAll(f)
// Stat file
info, err := node.Stat("path/to/file.txt")
// info.Name(), info.Size(), info.ModTime(), info.Mode()
// Read directory
entries, err := node.ReadDir("path/to")
for _, entry := range entries {
// entry.Name(), entry.IsDir(), entry.Type()
}
```
### 3.4 Walking
```go
err := fs.WalkDir(node, ".", func(path string, d fs.DirEntry, err error) error {
if err != nil {
return err
}
if !d.IsDir() {
// Process file
}
return nil
})
```
## 4. Path Semantics
### 4.1 Path Handling
- **Leading slashes stripped**: `/path/file` → `path/file`
- **Trailing slashes ignored**: `path/dir/` → `path/dir`
- **Forward slashes only**: Uses `/` regardless of OS
- **Case-sensitive**: `File.txt` ≠ `file.txt`
- **Direct lookup**: Paths stored as flat keys
### 4.2 Valid Paths
```
file.txt → stored as "file.txt"
dir/file.txt → stored as "dir/file.txt"
/absolute/path → stored as "absolute/path" (leading / stripped)
path/to/dir/ → stored as "path/to/dir" (trailing / stripped)
```
### 4.3 Directory Detection
Directories are **implicit**. A directory exists if:
1. Any file path has it as a prefix
2. Example: Adding `a/b/c.txt` implicitly creates directories `a` and `a/b`
```go
// ReadDir finds directories by scanning all paths
func (dn *DataNode) ReadDir(name string) ([]fs.DirEntry, error) {
// Scans all keys for matching prefix
// Returns unique immediate children
}
```
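For example (illustrative, using the API from Section 3):
```go
node := datanode.New()
node.AddData("a/b/c.txt", []byte("hello"))

entries, _ := node.ReadDir("a")   // one entry: "b", reported as a directory
entries, _ = node.ReadDir("a/b")  // one entry: "c.txt", a regular file
_ = entries
```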
## 5. Tar Serialization
### 5.1 ToTar
```go
tarBytes, err := node.ToTar()
```
**Format**:
- All files written as `tar.TypeReg` (regular files)
- Header Mode: **0600** (fixed, not original mode)
- No explicit directory entries
- ModTime preserved from dataFile
```go
// Serialization logic
for path, file := range dn.files {
header := &tar.Header{
Name: path,
Mode: 0600, // Fixed mode
Size: int64(len(file.content)),
ModTime: file.modTime,
Typeflag: tar.TypeReg,
}
tw.WriteHeader(header)
tw.Write(file.content)
}
```
### 5.2 FromTar
```go
node, err := datanode.FromTar(tarBytes)
```
**Parsing**:
- Only reads `tar.TypeReg` entries
- Ignores directory entries (`tar.TypeDir`)
- Stores path and content in flat map
```go
// Deserialization logic
for {
header, err := tr.Next()
if err == io.EOF {
break // end of archive
}
if header.Typeflag == tar.TypeReg {
content, _ := io.ReadAll(tr)
dn.files[header.Name] = &dataFile{
name: filepath.Base(header.Name),
content: content,
modTime: header.ModTime,
}
}
}
```
### 5.3 Compressed Variants
```go
// gzip compressed
tarGz, err := node.ToTarGz()
node, err := datanode.FromTarGz(tarGzBytes)
// xz compressed
tarXz, err := node.ToTarXz()
node, err := datanode.FromTarXz(tarXzBytes)
```
## 6. File Modes
| Context | Mode | Notes |
|---------|------|-------|
| File read (fs.FS) | 0444 | Read-only for all |
| Directory (fs.FS) | 0555 | Read+execute for all |
| Tar export | 0600 | Owner read/write only |
**Note**: Original file modes are NOT preserved. All files get fixed modes.
## 7. Memory Model
- All content held in memory as `[]byte`
- No lazy loading
- No memory mapping
- Thread-safe for concurrent reads (map is not mutated after creation)
### 7.1 Size Calculation
```go
func (dn *DataNode) Size() int64 {
var total int64
for _, f := range dn.files {
total += int64(len(f.content))
}
return total
}
```
## 8. Integration Points
### 8.1 TIM RootFS
```go
tim := &tim.TIM{
Config: configJSON,
RootFS: datanode, // DataNode as container filesystem
}
```
### 8.2 TRIX Encryption
```go
// Encrypt DataNode to TRIX
tarBytes, _ := node.ToTar()
encrypted, err := trix.Encrypt(tarBytes, password)
// Decrypt TRIX to DataNode
decryptedTar, err := trix.Decrypt(encrypted, password)
node, err := datanode.FromTar(decryptedTar)
```
### 8.3 Collectors
```go
// GitHub collector returns DataNode
node, err := github.CollectRepo(url)
// Website collector returns DataNode
node, err := website.Collect(url, depth)
```
## 9. Implementation Reference
- Source: `pkg/datanode/datanode.go`
- Tests: `pkg/datanode/datanode_test.go`
## 10. Security Considerations
1. **Path traversal**: Leading slashes stripped; no `..` handling needed (flat map)
2. **Memory exhaustion**: No built-in limits; caller must validate input size
3. **Tar bombs**: FromTar reads all entries into memory
4. **Symlinks**: Not supported (intentional - tar.TypeReg only)
## 11. Limitations
- No symlink support
- No extended attributes
- No sparse files
- Fixed file modes (0600 on export)
- No streaming (full content in memory)
## 12. Future Work
- [ ] Streaming tar generation for large files
- [ ] Optional mode preservation
- [ ] Size limits for untrusted input
- [ ] Lazy loading for large datasets

rfc/RFC-004-TIM.md Normal file

@ -0,0 +1,330 @@
# RFC-004: Terminal Isolation Matrix (TIM)
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003
---
## Abstract
TIM (Terminal Isolation Matrix) is an OCI-compatible container bundle format. It packages a runtime configuration with a root filesystem (DataNode) for execution via runc or compatible runtimes.
## 1. Overview
TIM provides:
- OCI runtime-spec compatible bundles
- Portable container packaging
- Integration with DataNode filesystem
- Encryption via STIM (RFC-005)
## 2. Implementation
### 2.1 Core Type
```go
// pkg/tim/tim.go:28-32
type TerminalIsolationMatrix struct {
Config []byte // Raw OCI runtime specification (JSON)
RootFS *datanode.DataNode // In-memory filesystem
}
```
### 2.2 Error Variables
```go
var (
ErrDataNodeRequired = errors.New("datanode is required")
ErrConfigIsNil = errors.New("config is nil")
ErrPasswordRequired = errors.New("password is required for encryption")
ErrInvalidStimPayload = errors.New("invalid stim payload")
ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
)
```
## 3. Public API
### 3.1 Constructors
```go
// Create empty TIM with default config
func New() (*TerminalIsolationMatrix, error)
// Wrap existing DataNode into TIM
func FromDataNode(dn *DataNode) (*TerminalIsolationMatrix, error)
// Deserialize from tar archive
func FromTar(data []byte) (*TerminalIsolationMatrix, error)
```
### 3.2 Serialization
```go
// Serialize to tar archive
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error)
// Encrypt to STIM format (ChaCha20-Poly1305)
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error)
```
### 3.3 Decryption
```go
// Decrypt from STIM format
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error)
```
### 3.4 Execution
```go
// Run plain .tim file with runc
func Run(timPath string) error
// Decrypt and run .stim file
func RunEncrypted(stimPath, password string) error
```
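A typical round trip through this API (a fragment; appBinary and ociSpecJSON are placeholders and error handling is omitted):
```go
// Build a TIM from an in-memory filesystem, encrypt it, and run it later.
dn := datanode.New()
dn.AddData("bin/app", appBinary)

m, _ := tim.FromDataNode(dn)
m.Config = ociSpecJSON // OCI runtime spec JSON (see Section 5.2)

stimBytes, _ := m.ToSigil("password") // encrypt to STIM
_ = os.WriteFile("container.stim", stimBytes, 0o600)

// Later, possibly on another machine: decrypt and execute via runc.
_ = tim.RunEncrypted("container.stim", "password")
```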
## 4. Tar Archive Structure
### 4.1 Layout
```
config.json (root level, mode 0600)
rootfs/ (directory, mode 0755)
rootfs/bin/app (files within rootfs/)
rootfs/etc/config
...
```
### 4.2 Serialization (ToTar)
```go
// pkg/tim/tim.go:111-195
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error) {
// 1. Write config.json header (size = len(m.Config), mode 0600)
// 2. Write config.json content
// 3. Write rootfs/ directory entry (TypeDir, mode 0755)
// 4. Walk m.RootFS depth-first
// 5. For each file: tar entry with name "rootfs/" + path, mode 0600
}
```
### 4.3 Deserialization (FromTar)
```go
func FromTar(data []byte) (*TerminalIsolationMatrix, error) {
// 1. Parse tar entries
// 2. "config.json" → stored as raw bytes in Config
// 3. "rootfs/*" prefix → stripped and added to DataNode
// 4. Error if config.json missing (ErrConfigIsNil)
}
```
## 5. OCI Config
### 5.1 Default Config
The `New()` function creates a TIM with a default config from `pkg/tim/config.go`:
```go
func defaultConfig() (*trix.Trix, error) {
return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```
**Note**: The default config is minimal. Applications should populate the Config field with a proper OCI runtime spec.
### 5.2 OCI Runtime Spec Example
```json
{
"ociVersion": "1.0.2",
"process": {
"terminal": false,
"user": {"uid": 0, "gid": 0},
"args": ["/bin/app"],
"env": ["PATH=/usr/bin:/bin"],
"cwd": "/"
},
"root": {
"path": "rootfs",
"readonly": true
},
"mounts": [],
"linux": {
"namespaces": [
{"type": "pid"},
{"type": "network"},
{"type": "mount"}
]
}
}
```
## 6. Execution Flow
### 6.1 Plain TIM (Run)
```go
// pkg/tim/run.go:18-74
func Run(timPath string) error {
// 1. Create temporary directory (borg-run-*)
// 2. Extract tar entry-by-entry
// - Security: Path traversal check (prevents ../)
// - Validates: target = Clean(target) within tempDir
// 3. Create directories as needed (0755)
// 4. Write files with 0600 permissions
// 5. Execute: runc run -b <tempDir> borg-container
// 6. Stream stdout/stderr directly
// 7. Return exit code
}
```
### 6.2 Encrypted TIM (RunEncrypted)
```go
// pkg/tim/run.go:79-134
func RunEncrypted(stimPath, password string) error {
// 1. Read encrypted .stim file
// 2. Decrypt using FromSigil() with password
// 3. Create temporary directory (borg-run-*)
// 4. Write config.json to tempDir
// 5. Create rootfs/ subdirectory
// 6. Walk DataNode and extract all files to rootfs/
// - Uses CopyFile() with 0600 permissions
// 7. Execute: runc run -b <tempDir> borg-container
// 8. Stream stdout/stderr
// 9. Clean up temp directory (defer os.RemoveAll)
// 10. Return exit code
}
```
### 6.3 Security Controls
| Control | Implementation |
|---------|----------------|
| Path traversal | `filepath.Clean()` + prefix validation |
| Temp cleanup | `defer os.RemoveAll(tempDir)` |
| File permissions | Hardcoded 0600 (files), 0755 (dirs) |
| Test injection | `ExecCommand` variable for mocking runc |
## 7. Cache API
### 7.1 Cache Structure
```go
// pkg/tim/cache.go
type Cache struct {
Dir string // Directory path for storage
Password string // Shared password for all TIMs
}
```
### 7.2 Cache Operations
```go
// Create cache with master password
func NewCache(dir, password string) (*Cache, error)
// Store TIM (encrypted automatically as .stim)
func (c *Cache) Store(name string, m *TerminalIsolationMatrix) error
// Load TIM (decrypted automatically)
func (c *Cache) Load(name string) (*TerminalIsolationMatrix, error)
// Delete cached TIM
func (c *Cache) Delete(name string) error
// Check if TIM exists
func (c *Cache) Exists(name string) bool
// List all cached TIM names
func (c *Cache) List() ([]string, error)
// Load and execute cached TIM
func (c *Cache) Run(name string) error
// Get file size of cached .stim
func (c *Cache) Size(name string) (int64, error)
```
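For example (a fragment; masterPassword and m are placeholders, error handling omitted):
```go
cache, _ := tim.NewCache("/var/lib/borg/cache", masterPassword)
_ = cache.Store("web", m) // persisted as web.stim, encrypted with the cache password
if cache.Exists("web") {
	_ = cache.Run("web") // loaded, decrypted, and executed via runc
}
```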
### 7.3 Cache Directory Structure
```
cache/
├── mycontainer.stim (encrypted)
├── another.stim (encrypted)
└── ...
```
- All TIMs stored as `.stim` files (encrypted)
- Single password protects entire cache
- Directory created with 0700 permissions
- Files stored with 0600 permissions
## 8. CLI Usage
```bash
# Compile Borgfile to TIM
borg compile -f Borgfile -o container.tim
# Compile with encryption
borg compile -f Borgfile -e "password" -o container.stim
# Run plain TIM
borg run container.tim
# Run encrypted TIM
borg run container.stim -p "password"
# Decode (extract) to tar
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar
# Inspect metadata without decrypting
borg inspect container.stim
```
## 9. Implementation Reference
- TIM core: `pkg/tim/tim.go`
- Execution: `pkg/tim/run.go`
- Cache: `pkg/tim/cache.go`
- Config: `pkg/tim/config.go`
- Tests: `pkg/tim/tim_test.go`, `pkg/tim/run_test.go`, `pkg/tim/cache_test.go`
## 10. Security Considerations
1. **Path traversal prevention**: `filepath.Clean()` + prefix validation
2. **Permission hardcoding**: 0600 files, 0755 directories
3. **Secure cleanup**: `defer os.RemoveAll()` on temp directories
4. **Command injection prevention**: `ExecCommand` variable (no shell)
5. **Config validation**: Validate OCI spec before execution
## 11. OCI Compatibility
TIM bundles are compatible with:
- runc
- crun
- youki
- Any OCI runtime-spec 1.0.2 compliant runtime
## 12. Test Coverage
| Area | Tests |
|------|-------|
| TIM creation | DataNode wrapping, default config |
| Serialization | Tar round-trips, large files (1MB+) |
| Encryption | ToSigil/FromSigil, wrong password detection |
| Caching | Store/Load/Delete, List, Size |
| Execution | ZIP slip prevention, temp cleanup |
| Error handling | Nil DataNode, nil config, invalid tar |
## 13. Future Work
- [ ] Image layer support
- [ ] Registry push/pull
- [ ] Multi-platform bundles
- [ ] Signature verification
- [ ] Full OCI config generation

rfc/RFC-005-STIM.md Normal file

@ -0,0 +1,303 @@
# RFC-005: STIM Encrypted Container Format
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004
---
## Abstract
STIM (Secure TIM) is an encrypted container format that wraps TIM bundles using ChaCha20-Poly1305 authenticated encryption. It enables secure distribution and execution of containers without exposing the contents.
## 1. Overview
STIM provides:
- Encrypted TIM containers
- ChaCha20-Poly1305 authenticated encryption
- Separate encryption of config and rootfs
- Direct execution without persistent decryption
## 2. Format Name
**ChaChaPolySigil** - The internal name for the STIM format, using:
- ChaCha20-Poly1305 algorithm (via Enchantrix library)
- Trix container wrapper with "STIM" magic
## 3. File Structure
### 3.1 Container Format
STIM uses the **Trix container format** from Enchantrix library:
```
┌─────────────────────────────────────────┐
│ Magic: "STIM" (4 bytes ASCII) │
├─────────────────────────────────────────┤
│ Trix Header (Gob-encoded JSON) │
│ - encryption_algorithm: "chacha20poly1305"
│ - tim: true │
│ - config_size: uint32 │
│ - rootfs_size: uint32 │
│ - version: "1.0" │
├─────────────────────────────────────────┤
│ Trix Payload: │
│ [config_size: 4 bytes BE uint32] │
│ [encrypted config] │
│ [encrypted rootfs tar] │
└─────────────────────────────────────────┘
```
### 3.2 Payload Structure
```
Offset Size Field
------ ----- ------------------------------------
0 4 Config size (big-endian uint32)
4 N Encrypted config (includes nonce + tag)
4+N M Encrypted rootfs tar (includes nonce + tag)
```
### 3.3 Encrypted Component Format
Each encrypted component (config and rootfs) follows Enchantrix format:
```
[24-byte XChaCha20 nonce][ciphertext][16-byte Poly1305 tag]
```
**Critical**: Nonces are **embedded in the ciphertext**, not transmitted separately.
## 4. Encryption
### 4.1 Algorithm
XChaCha20-Poly1305 (extended nonce variant)
| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (embedded) |
| Tag size | 16 bytes |
### 4.2 Key Derivation
```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
hash := sha256.Sum256([]byte(password))
return hash[:] // 32 bytes
}
```
### 4.3 Dual Encryption
Config and RootFS are encrypted **separately** with independent nonces:
```go
// pkg/tim/tim.go:217-232
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error) {
// 1. Derive key
key := trix.DeriveKey(password)
// 2. Create sigil
sigil, _ := enchantrix.NewChaChaPolySigil(key)
// 3. Encrypt config (generates fresh nonce automatically)
encConfig, _ := sigil.In(m.Config)
// 4. Serialize rootfs to tar
rootfsTar, _ := m.RootFS.ToTar()
// 5. Encrypt rootfs (generates different fresh nonce)
encRootFS, _ := sigil.In(rootfsTar)
// 6. Build payload
payload := make([]byte, 4+len(encConfig)+len(encRootFS))
binary.BigEndian.PutUint32(payload[:4], uint32(len(encConfig)))
copy(payload[4:4+len(encConfig)], encConfig)
copy(payload[4+len(encConfig):], encRootFS)
// 7. Create Trix container with STIM magic
// ...
}
```
**Rationale for dual encryption:**
- Config can be decrypted separately for inspection
- Allows streaming decryption of large rootfs
- Independent nonces prevent any nonce reuse
## 5. Decryption Flow
```go
// pkg/tim/tim.go:255-308
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error) {
// 1. Decode Trix container with magic "STIM"
t, _ := trix.Decode(data, "STIM", nil)
// 2. Derive key from password
key := trix.DeriveKey(password)
// 3. Create sigil
sigil, _ := enchantrix.NewChaChaPolySigil(key)
// 4. Parse payload: extract configSize from first 4 bytes
configSize := binary.BigEndian.Uint32(t.Payload[:4])
// 5. Validate bounds
if int(configSize) > len(t.Payload)-4 {
return nil, ErrInvalidStimPayload
}
// 6. Extract encrypted components
encConfig := t.Payload[4 : 4+configSize]
encRootFS := t.Payload[4+configSize:]
// 7. Decrypt config (nonce auto-extracted by Enchantrix)
config, err := sigil.Out(encConfig)
if err != nil {
return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
}
// 8. Decrypt rootfs
rootfsTar, err := sigil.Out(encRootFS)
if err != nil {
return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
}
// 9. Reconstruct DataNode from tar
rootfs, _ := datanode.FromTar(rootfsTar)
return &TerminalIsolationMatrix{Config: config, RootFS: rootfs}, nil
}
```
## 6. Trix Header
```go
Header: map[string]interface{}{
	"encryption_algorithm": "chacha20poly1305",
	"tim":                  true,
	"config_size":          len(encConfig),
	"rootfs_size":          len(encRootFS),
	"version":              "1.0",
}
```
## 7. CLI Usage
```bash
# Create encrypted container
borg compile -f Borgfile -e "password" -o container.stim
# Run encrypted container
borg run container.stim -p "password"
# Decode (extract) encrypted container
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar
# Inspect without decrypting (shows header metadata only)
borg inspect container.stim
# Output:
# Format: STIM
# encryption_algorithm: chacha20poly1305
# config_size: 1234
# rootfs_size: 567890
```
## 8. Cache API
```go
// Create cache with master password
cache, err := tim.NewCache("/path/to/cache", masterPassword)
// Store TIM (encrypted automatically as .stim)
err := cache.Store("name", tim)
// Load TIM (decrypted automatically)
tim, err := cache.Load("name")
// List cached containers
names, err := cache.List()
```
## 9. Execution Security
```go
// Secure execution flow
func RunEncrypted(path, password string) error {
	// 1. Create secure temp directory
	tmpDir, _ := os.MkdirTemp("", "borg-run-*")
	defer os.RemoveAll(tmpDir) // Secure cleanup
	// 2. Read and decrypt
	data, _ := os.ReadFile(path)
	tim, _ := FromSigil(data, password)
	// 3. Extract to temp
	tim.ExtractTo(tmpDir)
	// 4. Execute with runc
	return runRunc(tmpDir)
}
```
## 10. Security Properties
### 10.1 Confidentiality
- Contents encrypted with ChaCha20-Poly1305
- Password-derived key never stored
- Nonces are random, never reused
### 10.2 Integrity
- Poly1305 MAC prevents tampering
- Decryption fails if modified
- Separate MACs for config and rootfs
### 10.3 Error Detection
| Error | Cause |
|-------|-------|
| `ErrPasswordRequired` | Empty password provided |
| `ErrInvalidStimPayload` | Payload < 4 bytes or invalid size |
| `ErrDecryptionFailed` | Wrong password or corrupted data |
## 11. Comparison to TRIX
| Feature | STIM | TRIX |
|---------|------|------|
| Algorithm | ChaCha20-Poly1305 | PGP/AES or ChaCha |
| Content | TIM bundles | DataNode (raw files) |
| Structure | Dual encryption | Single blob |
| Magic | "STIM" | "TRIX" |
| Use case | Container execution | General encryption, accounts |
STIM is for containers. TRIX is for general file encryption and accounts.
## 12. Implementation Reference
- Encryption: `pkg/tim/tim.go` (ToSigil, FromSigil)
- Key derivation: `pkg/trix/trix.go` (DeriveKey)
- Cache: `pkg/tim/cache.go`
- CLI: `cmd/run.go`, `cmd/decode.go`, `cmd/compile.go`
- Enchantrix: `github.com/Snider/Enchantrix`
## 13. Security Considerations
1. **Password strength**: Recommend 64+ bits entropy (12+ chars)
2. **Key derivation**: SHA-256 only (no stretching) - use strong passwords
3. **Memory handling**: Keys should be wiped after use
4. **Temp files**: Use tmpfs when available, secure wipe after
5. **Side channels**: Enchantrix uses constant-time crypto operations
## 14. Future Work
- [ ] Hardware key support (YubiKey, TPM)
- [ ] Key stretching (Argon2)
- [ ] Multi-recipient encryption
- [ ] Streaming decryption for large rootfs

rfc/RFC-006-TRIX.md (new file)
# RFC-006: TRIX PGP Encryption Format
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003
---
## Abstract
TRIX is a PGP-based encryption format for DataNode archives and account credentials. It provides symmetric and asymmetric encryption using OpenPGP standards and ChaCha20-Poly1305, enabling secure data exchange and identity management.
## 1. Overview
TRIX provides:
- PGP symmetric encryption for DataNode archives
- ChaCha20-Poly1305 modern encryption
- PGP armored keys for account/identity management
- Integration with Enchantrix library
## 2. Public API
### 2.1 Key Derivation
```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
	hash := sha256.Sum256([]byte(password))
	return hash[:] // 32 bytes
}
```
- Input: password string (any length)
- Output: 32-byte key (256 bits)
- Algorithm: SHA-256 hash of UTF-8 bytes
- Deterministic: identical passwords → identical keys
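A minimal usage sketch of these properties; the module path `github.com/Snider/Borg` is assumed from the repository name:
```go
package main

import (
	"bytes"
	"fmt"

	"github.com/Snider/Borg/pkg/trix" // assumed module path
)

func main() {
	k1 := trix.DeriveKey("correct horse battery staple")
	k2 := trix.DeriveKey("correct horse battery staple")
	fmt.Println(len(k1))             // 32
	fmt.Println(bytes.Equal(k1, k2)) // true: same password, same key
}
```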
### 2.2 Legacy PGP Encryption
```go
// Encrypt DataNode to TRIX (PGP symmetric)
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error)
// Decrypt TRIX to DataNode (DISABLED for encrypted payloads)
func FromTrix(data []byte, password string) (*datanode.DataNode, error)
```
**Note**: Calling `FromTrix` with a non-empty password returns the error `"decryption disabled: cannot accept encrypted payloads"`. Decryption of legacy PGP-encrypted payloads is intentionally disabled; use `FromTrixChaCha` for encrypted archives produced with the current format.
### 2.3 Modern ChaCha20-Poly1305 Encryption
```go
// Encrypt with ChaCha20-Poly1305
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error)
// Decrypt ChaCha20-Poly1305
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error)
```
### 2.4 Error Variables
```go
var (
	ErrPasswordRequired = errors.New("password is required for encryption")
	ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
)
```
## 3. File Format
### 3.1 Container Structure
```
[4 bytes] Magic: "TRIX" (ASCII)
[Variable] Gob-encoded Header (map[string]interface{})
[Variable] Payload (encrypted or unencrypted tarball)
```
### 3.2 Header Examples
**Unencrypted:**
```go
Header: map[string]interface{}{} // Empty map
```
**ChaCha20-Poly1305:**
```go
Header: map[string]interface{}{
	"encryption_algorithm": "chacha20poly1305",
}
```
### 3.3 ChaCha20-Poly1305 Payload
```
[24 bytes] XChaCha20 Nonce (embedded)
[N bytes] Encrypted tar archive
[16 bytes] Poly1305 authentication tag
```
**Note**: Nonces are embedded in the ciphertext by Enchantrix, not stored separately.
## 4. Encryption Workflows
### 4.1 ChaCha20-Poly1305 (Recommended)
```go
// Encryption
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error) {
	// 1. Validate password is non-empty
	if password == "" {
		return nil, ErrPasswordRequired
	}
	// 2. Serialize DataNode to tar
	tarball, _ := dn.ToTar()
	// 3. Derive 32-byte key
	key := DeriveKey(password)
	// 4. Create sigil and encrypt
	sigil, _ := enchantrix.NewChaChaPolySigil(key)
	encrypted, _ := sigil.In(tarball) // Generates nonce automatically
	// 5. Create Trix container
	t := &trix.Trix{
		Header:  map[string]interface{}{"encryption_algorithm": "chacha20poly1305"},
		Payload: encrypted,
	}
	// 6. Encode with TRIX magic
	return trix.Encode(t, "TRIX", nil)
}
```
### 4.2 Decryption
```go
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error) {
	// 1. Validate password
	if password == "" {
		return nil, ErrPasswordRequired
	}
	// 2. Decode TRIX container
	t, _ := trix.Decode(data, "TRIX", nil)
	// 3. Derive key and decrypt
	key := DeriveKey(password)
	sigil, _ := enchantrix.NewChaChaPolySigil(key)
	tarball, err := sigil.Out(t.Payload) // Extracts nonce, verifies MAC
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
	}
	// 4. Deserialize DataNode
	return datanode.FromTar(tarball)
}
```
### 4.3 Legacy PGP (Disabled Decryption)
```go
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error) {
	tarball, _ := dn.ToTar()
	var payload []byte
	if password != "" {
		// PGP symmetric encryption
		cryptService := crypt.NewService()
		payload, _ = cryptService.SymmetricallyEncryptPGP([]byte(password), tarball)
	} else {
		payload = tarball
	}
	t := &trix.Trix{Header: map[string]interface{}{}, Payload: payload}
	return trix.Encode(t, "TRIX", nil)
}

func FromTrix(data []byte, password string) (*datanode.DataNode, error) {
	// Security: Reject encrypted payloads
	if password != "" {
		return nil, errors.New("decryption disabled: cannot accept encrypted payloads")
	}
	t, _ := trix.Decode(data, "TRIX", nil)
	return datanode.FromTar(t.Payload)
}
```
## 5. Enchantrix Library
### 5.1 Dependencies
```go
import (
	"github.com/Snider/Enchantrix/pkg/trix"       // Container format
	"github.com/Snider/Enchantrix/pkg/crypt"      // PGP operations
	"github.com/Snider/Enchantrix/pkg/enchantrix" // AEAD sigils
)
```
### 5.2 Trix Container
```go
type Trix struct {
	Header  map[string]interface{}
	Payload []byte
}

func Encode(t *Trix, magic string, extra interface{}) ([]byte, error)
func Decode(data []byte, magic string, extra interface{}) (*Trix, error)
```
### 5.3 ChaCha20-Poly1305 Sigil
```go
// Create sigil with 32-byte key
sigil, err := enchantrix.NewChaChaPolySigil(key)
// Encrypt (generates random 24-byte nonce)
ciphertext, err := sigil.In(plaintext)
// Decrypt (extracts nonce, verifies MAC)
plaintext, err := sigil.Out(ciphertext)
```
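A short round-trip sketch of the sigil API above, using `DeriveKey` from Section 2.1; error handling is abbreviated and the output comments describe the expected result:
```go
key := DeriveKey("example-password")
sigil, err := enchantrix.NewChaChaPolySigil(key)
if err != nil {
	panic(err)
}
ciphertext, _ := sigil.In([]byte("hello")) // fresh 24-byte nonce, embedded in output
plaintext, err := sigil.Out(ciphertext)    // non-nil err on wrong key or tampering
fmt.Println(string(plaintext), err)        // "hello" <nil>
```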
## 6. Account System Integration
### 6.1 PGP Armored Keys
```
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQENBGX...base64...
-----END PGP PUBLIC KEY BLOCK-----
```
### 6.2 Key Storage
```
~/.borg/
├── identity.pub # PGP public key (armored)
├── identity.key # PGP private key (armored, encrypted)
└── keyring/ # Trusted public keys
```
## 7. CLI Usage
```bash
# Encrypt with TRIX (PGP symmetric)
borg collect github repo https://github.com/user/repo \
--format trix \
--password "password"
# Decrypt unencrypted TRIX
borg decode archive.trix -o decoded.tar
# Inspect without decrypting
borg inspect archive.trix
# Output:
# Format: TRIX
# encryption_algorithm: chacha20poly1305 (if present)
# Payload Size: N bytes
```
## 8. Format Comparison
| Format | Extension | Algorithm | Use Case |
|--------|-----------|-----------|----------|
| `datanode` | `.tar` | None | Uncompressed archive |
| `tim` | `.tim` | None | Container bundle |
| `trix` | `.trix` | PGP/AES or ChaCha | Encrypted archives, accounts |
| `stim` | `.stim` | ChaCha20-Poly1305 | Encrypted containers |
| `smsg` | `.smsg` | ChaCha20-Poly1305 | Encrypted media |
## 9. Security Analysis
### 9.1 Key Derivation Limitations
**Current implementation: SHA-256 (single round)**
| Metric | Value |
|--------|-------|
| Algorithm | SHA-256 |
| Iterations | 1 |
| Salt | None |
| Key stretching | None |
**Implications:**
- A GPU cracking rig can test on the order of 10 billion SHA-256 guesses per second
- An 8-character lowercase password (26^8 ≈ 2 × 10^11 combinations) falls in under a minute; 8-character mixed alphanumeric (62^8 ≈ 2 × 10^14) lasts only a few hours
- Recommendation: use 15+ character passwords or passphrases until key stretching is added
### 9.2 ChaCha20-Poly1305 Properties
| Property | Status |
|----------|--------|
| Authentication | Poly1305 MAC (16 bytes) |
| Key size | 256 bits |
| Nonce size | 192 bits (XChaCha) |
| Standard | RFC 8439 (ChaCha20-Poly1305); XChaCha20 extended-nonce variant |
## 10. Test Coverage
| Test | Description |
|------|-------------|
| DeriveKey length | Output is exactly 32 bytes |
| DeriveKey determinism | Same password → same key |
| DeriveKey uniqueness | Different passwords → different keys |
| ToTrix without password | Valid TRIX with "TRIX" magic |
| ToTrix with password | PGP encryption applied |
| FromTrix unencrypted | Round-trip preserves files |
| FromTrix password rejection | Returns error |
| ToTrixChaCha success | Valid TRIX created |
| ToTrixChaCha empty password | Returns ErrPasswordRequired |
| FromTrixChaCha round-trip | Preserves nested directories |
| FromTrixChaCha wrong password | Returns ErrDecryptionFailed |
| FromTrixChaCha large data | 1MB file processed |
## 11. Implementation Reference
- Source: `pkg/trix/trix.go`
- Tests: `pkg/trix/trix_test.go`
- Enchantrix: `github.com/Snider/Enchantrix v0.0.2`
## 12. Security Considerations
1. **Use strong passwords**: 15+ characters due to no key stretching
2. **Prefer ChaCha**: Use `ToTrixChaCha` over legacy PGP
3. **Key backup**: Securely backup private keys
4. **Interoperability**: TRIX files with GPG require password
## 13. Future Work
- [ ] Key stretching (Argon2 option in DeriveKey)
- [ ] Public key encryption support
- [ ] Signature support
- [ ] Key expiration metadata
- [ ] Multi-recipient encryption

rfc/RFC-007-LTHN.md (new file)
# RFC-007: LTHN Key Derivation
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-002
---
## Abstract
LTHN (Leet-Hash-Nonce) is a rainbow-table resistant key derivation function used for streaming DRM with time-limited access. It generates rolling keys that automatically expire without requiring revocation infrastructure.
## 1. Overview
LTHN provides:
- Rainbow-table resistant hashing
- Time-based key rolling
- Zero-trust key derivation (no key server)
- Configurable cadence (daily to hourly)
## 2. Motivation
Traditional DRM requires:
- Central key server
- License validation
- Revocation lists
- Network connectivity
LTHN eliminates these by:
- Deriving keys from public information + secret
- Time-bounding keys automatically
- Making rainbow tables impractical
- Working completely offline
## 3. Algorithm
### 3.1 Core Function
The LTHN hash is implemented in the Enchantrix library:
```go
import "github.com/Snider/Enchantrix/pkg/crypt"
cryptService := crypt.NewService()
lthnHash := cryptService.Hash(crypt.LTHN, input)
```
**LTHN formula**:
```
LTHN(input) = SHA256(input || reverse_leet(input))
```
Where `reverse_leet` performs bidirectional character substitution.
### 3.2 Reverse Leet Mapping
| Original | Leet | Bidirectional |
|----------|------|---------------|
| o | 0 | o ↔ 0 |
| l | 1 | l ↔ 1 |
| e | 3 | e ↔ 3 |
| a | 4 | a ↔ 4 |
| s | z | s ↔ z |
| t | 7 | t ↔ 7 |
### 3.3 Example
```
Input: "2026-01-13:license:fp"
reverse_leet: "pf:3zn3ci1:31-10-6202"
Combined: "2026-01-13:license:fppf:3zn3ci1:31-10-6202"
Result: SHA256(combined) → 32-byte hash
```
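A minimal Go sketch that reproduces the worked example above. It is illustrative only: the canonical LTHN implementation lives in `github.com/Snider/Enchantrix/pkg/crypt`, and its substitution table (bidirectional per Section 3.2) may differ from the one-directional map used here.
```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// reverseLeet substitutes each character per the leet map and reverses the
// result; with this ordering the digits of the date pass through unchanged,
// matching the worked example above.
func reverseLeet(s string) string {
	sub := map[rune]rune{'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7'}
	runes := []rune(s)
	out := make([]rune, len(runes))
	for i, r := range runes {
		if v, ok := sub[r]; ok {
			r = v
		}
		out[len(runes)-1-i] = r
	}
	return string(out)
}

func main() {
	input := "2026-01-13:license:fp"
	fmt.Println(reverseLeet(input)) // pf:3zn3ci1:31-10-6202
	sum := sha256.Sum256([]byte(input + reverseLeet(input)))
	fmt.Printf("%x\n", sum) // 32-byte LTHN-style hash
}
```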
## 4. Stream Key Derivation
### 4.1 Implementation
```go
// pkg/smsg/stream.go:49-60
func DeriveStreamKey(date, license, fingerprint string) []byte {
	input := fmt.Sprintf("%s:%s:%s", date, license, fingerprint)
	cryptService := crypt.NewService()
	lthnHash := cryptService.Hash(crypt.LTHN, input)
	key := sha256.Sum256([]byte(lthnHash))
	return key[:]
}
```
### 4.2 Input Format
```
period:license:fingerprint
Where:
- period: Time period identifier (see Cadence)
- license: User's license key (password)
- fingerprint: Device/browser fingerprint
```
### 4.3 Output
32-byte key suitable for ChaCha20-Poly1305.
## 5. Cadence
### 5.1 Options
| Cadence | Constant | Period Format | Example | Duration |
|---------|----------|---------------|---------|----------|
| Daily | `CadenceDaily` | `2006-01-02` | `2026-01-13` | 24h |
| 12-hour | `CadenceHalfDay` | `2006-01-02-AM/PM` | `2026-01-13-PM` | 12h |
| 6-hour | `CadenceQuarter` | `2006-01-02-HH` | `2026-01-13-12` | 6h |
| Hourly | `CadenceHourly` | `2006-01-02-HH` | `2026-01-13-15` | 1h |
### 5.2 Period Calculation
```go
// pkg/smsg/stream.go:73-119
func GetCurrentPeriod(cadence Cadence) string {
	return GetPeriodAt(time.Now(), cadence)
}

func GetPeriodAt(t time.Time, cadence Cadence) string {
	switch cadence {
	case CadenceDaily:
		return t.Format("2006-01-02")
	case CadenceHalfDay:
		suffix := "AM"
		if t.Hour() >= 12 {
			suffix = "PM"
		}
		return t.Format("2006-01-02") + "-" + suffix
	case CadenceQuarter:
		bucket := (t.Hour() / 6) * 6
		return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), bucket)
	case CadenceHourly:
		return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), t.Hour())
	}
	return t.Format("2006-01-02")
}

func GetNextPeriod(cadence Cadence) string {
	return GetPeriodAt(time.Now().Add(GetCadenceDuration(cadence)), cadence)
}
```
### 5.3 Duration Mapping
```go
func GetCadenceDuration(cadence Cadence) time.Duration {
	switch cadence {
	case CadenceDaily:
		return 24 * time.Hour
	case CadenceHalfDay:
		return 12 * time.Hour
	case CadenceQuarter:
		return 6 * time.Hour
	case CadenceHourly:
		return 1 * time.Hour
	}
	return 24 * time.Hour
}
```
## 6. Rolling Windows
### 6.1 Dual-Key Strategy
At encryption time, CEK is wrapped with **two** keys:
1. Current period key
2. Next period key
This creates a rolling validity window:
```
Time: 2026-01-13 23:30 (daily cadence)
Valid keys:
- "2026-01-13:license:fp" (current period)
- "2026-01-14:license:fp" (next period)
Window: 24-48 hours of validity
```
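A sketch of the dual-key wrap at encryption time, composed from the helpers defined in this RFC (`GetCurrentPeriod`, `GetNextPeriod`, `DeriveStreamKey`, `WrapCEK`) and the `WrappedKey` type from Section 7; the actual write path in `pkg/smsg/stream.go` may differ in detail.
```go
// wrapForRollingWindow wraps the CEK once for the current period and once for
// the next, producing the wrappedKeys array stored in the V3 header.
func wrapForRollingWindow(cek []byte, license, fingerprint string, cadence Cadence) ([]WrappedKey, error) {
	periods := []string{GetCurrentPeriod(cadence), GetNextPeriod(cadence)}
	keys := make([]WrappedKey, 0, len(periods))
	for _, period := range periods {
		streamKey := DeriveStreamKey(period, license, fingerprint)
		wrapped, err := WrapCEK(cek, streamKey)
		if err != nil {
			return nil, err
		}
		keys = append(keys, WrappedKey{Period: period, Key: wrapped})
	}
	return keys, nil
}
```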
### 6.2 Key Wrapping
```go
// pkg/smsg/stream.go:135-155
func WrapCEK(cek []byte, streamKey []byte) (string, error) {
	sigil := enchantrix.NewChaChaPolySigil()
	wrapped, err := sigil.Seal(cek, streamKey)
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(wrapped), nil
}
```
**Wrapped format**:
```
[24-byte nonce][encrypted CEK][16-byte auth tag]
→ base64 encoded for header storage
```
### 6.3 Key Unwrapping
```go
// pkg/smsg/stream.go:157-170
func UnwrapCEK(wrapped string, streamKey []byte) ([]byte, error) {
	data, err := base64.StdEncoding.DecodeString(wrapped)
	if err != nil {
		return nil, err
	}
	sigil := enchantrix.NewChaChaPolySigil()
	return sigil.Open(data, streamKey)
}
```
### 6.4 Decryption Flow
```go
// pkg/smsg/stream.go:606-633
func UnwrapCEKFromHeader(header *V3Header, params *StreamParams) ([]byte, error) {
	// Try current period first
	currentPeriod := GetCurrentPeriod(params.Cadence)
	currentKey := DeriveStreamKey(currentPeriod, params.License, params.Fingerprint)
	for _, wk := range header.WrappedKeys {
		cek, err := UnwrapCEK(wk.Key, currentKey)
		if err == nil {
			return cek, nil
		}
	}
	// Try next period (for clock skew)
	nextPeriod := GetNextPeriod(params.Cadence)
	nextKey := DeriveStreamKey(nextPeriod, params.License, params.Fingerprint)
	for _, wk := range header.WrappedKeys {
		cek, err := UnwrapCEK(wk.Key, nextKey)
		if err == nil {
			return cek, nil
		}
	}
	return nil, ErrKeyExpired
}
```
## 7. V3 Header Format
```go
type V3Header struct {
	Format      string       `json:"format"` // "v3"
	Manifest    *Manifest    `json:"manifest"`
	WrappedKeys []WrappedKey `json:"wrappedKeys"`
	Chunked     *ChunkInfo   `json:"chunked,omitempty"`
}

type WrappedKey struct {
	Period string `json:"period"` // e.g., "2026-01-13"
	Key    string `json:"key"`    // base64-encoded wrapped CEK
}
```
## 8. Rainbow Table Resistance
### 8.1 Why It Works
Standard hash:
```
SHA256("2026-01-13:license:fp") → predictable, precomputable
```
LTHN hash:
```
LTHN("2026-01-13:license:fp")
= SHA256("2026-01-13:license:fp" + reverse_leet("2026-01-13:license:fp"))
= SHA256("2026-01-13:license:fp" + "pf:3zn3ci1:31-10-6202")
```
The salt is **derived from the input itself**, making precomputation impractical:
- Each unique input has a unique salt
- Cannot build rainbow tables without knowing all possible inputs
- Input space includes license keys (high entropy)
### 8.2 Security Analysis
| Attack | Mitigation |
|--------|------------|
| Rainbow tables | Input-derived salt makes precomputation infeasible |
| Brute force | License key entropy (64+ bits recommended) |
| Time oracle | Rolling window prevents precise timing attacks |
| Key sharing | Keys expire within cadence window |
## 9. Zero-Trust Properties
| Property | Implementation |
|----------|----------------|
| No key server | Keys derived locally from LTHN |
| Auto-expiration | Rolling periods invalidate old keys |
| No revocation | Keys naturally expire within cadence window |
| Device binding | Fingerprint in derivation input |
| User binding | License key in derivation input |
## 10. Test Vectors
From `pkg/smsg/stream_test.go`:
```go
// Stream key generation
date := "2026-01-12"
license := "test-license"
fingerprint := "test-fp"
key := DeriveStreamKey(date, license, fingerprint)
// key is 32 bytes, deterministic
// Period calculation at 2026-01-12 15:30:00 UTC
t := time.Date(2026, 1, 12, 15, 30, 0, 0, time.UTC)
GetPeriodAt(t, CadenceDaily) // "2026-01-12"
GetPeriodAt(t, CadenceHalfDay) // "2026-01-12-PM"
GetPeriodAt(t, CadenceQuarter) // "2026-01-12-12"
GetPeriodAt(t, CadenceHourly) // "2026-01-12-15"
// Next periods
// Daily: "2026-01-12" → "2026-01-13"
// 12h: "2026-01-12-PM" → "2026-01-13-AM"
// 6h: "2026-01-12-12" → "2026-01-12-18"
// 1h: "2026-01-12-15" → "2026-01-12-16"
```
## 11. Implementation Reference
- Stream key derivation: `pkg/smsg/stream.go`
- LTHN hash: `github.com/Snider/Enchantrix/pkg/crypt`
- WASM bindings: `pkg/wasm/stmf/main.go` (decryptV3, unwrapCEK)
- Tests: `pkg/smsg/stream_test.go`
## 12. Security Considerations
1. **License entropy**: Recommend 64+ bits (12+ alphanumeric chars)
2. **Fingerprint stability**: Should be stable but not user-controllable
3. **Clock skew**: Rolling windows handle ±1 period drift
4. **Key exposure**: Derived keys valid only for one period
## 13. References
- RFC-002: SMSG Format (v3 streaming)
- RFC-001: OSS DRM (Section 3.4)
- RFC 8439: ChaCha20-Poly1305
- Enchantrix: github.com/Snider/Enchantrix

rfc/RFC-008-BORGFILE.md (new file)
# RFC-008: Borgfile Compilation
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004
---
## Abstract
Borgfile is a declarative syntax for defining TIM container contents. It specifies how local files are mapped into the container filesystem, enabling reproducible container builds.
## 1. Overview
Borgfile provides:
- Dockerfile-like syntax for familiarity
- File mapping into containers
- Simple ADD directive
- Integration with TIM encryption
## 2. File Format
### 2.1 Location
- Default: `Borgfile` in current directory
- Override: `borg compile -f path/to/Borgfile`
### 2.2 Encoding
- UTF-8 text
- Unix line endings (LF)
- No BOM
## 3. Syntax
### 3.1 Parsing Implementation
```go
// cmd/compile.go:33-54
lines := strings.Split(content, "\n")
for _, line := range lines {
	parts := strings.Fields(line) // Whitespace-separated tokens
	if len(parts) == 0 {
		continue // Skip empty lines
	}
	switch parts[0] {
	case "ADD":
		// Process ADD directive
	default:
		return fmt.Errorf("unknown instruction: %s", parts[0])
	}
}
```
### 3.2 ADD Directive
```
ADD <source> <destination>
```
| Parameter | Description |
|-----------|-------------|
| source | Local path (relative to current working directory) |
| destination | Container path (leading slash stripped) |
### 3.3 Examples
```dockerfile
# Add single file
ADD ./app /usr/local/bin/app
# Add configuration
ADD ./config.yaml /etc/myapp/config.yaml
# Multiple files
ADD ./bin/server /app/server
ADD ./static /app/static
```
## 4. Path Resolution
### 4.1 Source Paths
- Resolved relative to **current working directory** (not Borgfile location)
- Must exist at compile time
- Read via `os.ReadFile(src)`
### 4.2 Destination Paths
- Leading slash stripped: `strings.TrimPrefix(dest, "/")`
- Added to DataNode as-is
```go
// cmd/compile.go:46-50
data, err := os.ReadFile(src)
if err != nil {
	return fmt.Errorf("invalid ADD instruction: %s", line)
}
name := strings.TrimPrefix(dest, "/")
m.RootFS.AddData(name, data)
```
## 5. File Handling
### 5.1 Permissions
**Current implementation**: Permissions are NOT preserved.
| Source | Container |
|--------|-----------|
| Any file | 0600 (hardcoded in DataNode.ToTar) |
| Any directory | 0755 (implicit) |
### 5.2 Timestamps
- Set to `time.Now()` when added to DataNode
- Original timestamps not preserved
### 5.3 File Types
- Regular files only
- No directory recursion (each file must be added explicitly)
- No symlink following
## 6. Error Handling
| Error | Cause |
|-------|-------|
| `invalid ADD instruction: {line}` | Wrong number of arguments |
| `os.ReadFile` error | Source file not found |
| `unknown instruction: {name}` | Unrecognized directive |
| `ErrPasswordRequired` | Encryption requested without password |
## 7. CLI Flags
```go
// cmd/compile.go:80-82
-f, --file string      Path to Borgfile (default: "Borgfile")
-o, --output string    Output path (default: "a.tim")
-e, --encrypt string   Password for .stim encryption (optional)
```
## 8. Output Formats
### 8.1 Plain TIM
```bash
borg compile -f Borgfile -o container.tim
```
Output: Standard TIM tar archive with `config.json` + `rootfs/`
### 8.2 Encrypted STIM
```bash
borg compile -f Borgfile -e "password" -o container.stim
```
Output: ChaCha20-Poly1305 encrypted STIM container
**Auto-detection**: If `-e` flag provided, output automatically uses `.stim` format even if `-o` specifies `.tim`.
## 9. Default OCI Config
The current implementation creates a minimal config:
```go
// pkg/tim/config.go:6-10
func defaultConfig() (*trix.Trix, error) {
	return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```
**Note**: This is a placeholder. For full OCI runtime execution, you'll need to provide a proper `config.json` in the container or modify the TIM after compilation.
## 10. Compilation Process
```
1. Read Borgfile content
2. Parse line-by-line
3. For each ADD directive:
a. Read source file from filesystem
b. Strip leading slash from destination
c. Add to DataNode
4. Create TIM with default config + populated RootFS
5. If password provided:
a. Encrypt to STIM via ToSigil()
b. Adjust output extension to .stim
6. Write output file
```
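Putting the steps together, a minimal sketch of the compile path under stated assumptions: `newTIM` stands in for however `cmd/compile.go` constructs the TerminalIsolationMatrix, and the plain-TIM serialization is shown as `m.ToTar()`, which is assumed rather than taken from the source; `ToSigil` is the RFC-005 function and the ADD handling mirrors the parser in Section 3.1.
```go
import (
	"fmt"
	"os"
	"strings"
)

// compileBorgfile is a sketch only: newTIM and m.ToTar are assumed names,
// not the actual symbols in cmd/compile.go.
func compileBorgfile(borgfile, output, password string) error {
	content, err := os.ReadFile(borgfile)
	if err != nil {
		return err
	}
	m := newTIM() // assumed constructor returning a *tim.TerminalIsolationMatrix
	for _, line := range strings.Split(string(content), "\n") {
		parts := strings.Fields(line)
		if len(parts) == 0 {
			continue
		}
		if parts[0] != "ADD" || len(parts) != 3 {
			return fmt.Errorf("unknown or malformed instruction: %q", line)
		}
		data, err := os.ReadFile(parts[1]) // source resolved against CWD
		if err != nil {
			return fmt.Errorf("invalid ADD instruction: %s", line)
		}
		m.RootFS.AddData(strings.TrimPrefix(parts[2], "/"), data)
	}
	if password != "" {
		blob, err := m.ToSigil(password) // encrypted STIM (RFC-005)
		if err != nil {
			return err
		}
		return os.WriteFile(strings.TrimSuffix(output, ".tim")+".stim", blob, 0o600)
	}
	blob, err := m.ToTar() // assumed plain-TIM serialization
	if err != nil {
		return err
	}
	return os.WriteFile(output, blob, 0o600)
}
```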
## 11. Implementation Reference
- Parser/Compiler: `cmd/compile.go`
- TIM creation: `pkg/tim/tim.go`
- DataNode: `pkg/datanode/datanode.go`
- Tests: `cmd/compile_test.go`
## 12. Current Limitations
| Feature | Status |
|---------|--------|
| Comment support (`#`) | Not implemented |
| Quoted paths | Not implemented |
| Directory recursion | Not implemented |
| Permission preservation | Not implemented |
| Path resolution relative to Borgfile | Not implemented (uses CWD) |
| Full OCI config generation | Not implemented (empty header) |
| Symlink following | Not implemented |
## 13. Examples
### 13.1 Simple Application
```dockerfile
ADD ./myapp /usr/local/bin/myapp
ADD ./config.yaml /etc/myapp/config.yaml
```
### 13.2 Web Application
```dockerfile
ADD ./server /app/server
ADD ./index.html /app/static/index.html
ADD ./style.css /app/static/style.css
ADD ./app.js /app/static/app.js
```
### 13.3 With Encryption
```bash
# Create Borgfile
cat > Borgfile << 'EOF'
ADD ./secret-app /app/secret-app
ADD ./credentials.json /etc/app/credentials.json
EOF
# Compile with encryption
borg compile -f Borgfile -e "MySecretPassword123" -o secret.stim
```
## 14. Future Work
- [ ] Comment support (`#`)
- [ ] Quoted path support for spaces
- [ ] Directory recursion in ADD
- [ ] Permission preservation
- [ ] Path resolution relative to Borgfile location
- [ ] Full OCI config generation
- [ ] Variable substitution (`${VAR}`)
- [ ] Include directive
- [ ] Glob patterns in source
- [ ] COPY directive (alias for ADD)

rfc/RFC-009-STMF.md (new file)
# RFC-009: STMF Secure To-Me Form
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
---
## Abstract
STMF (Secure To-Me Form) provides asymmetric encryption for web form submissions. It enables end-to-end encrypted form data where only the recipient can decrypt submissions, protecting sensitive data from server compromise.
## 1. Overview
STMF provides:
- Asymmetric encryption for form data
- X25519 key exchange
- ChaCha20-Poly1305 for payload encryption
- Browser-based encryption via WASM
- HTTP middleware for server-side decryption
## 2. Cryptographic Primitives
### 2.1 Key Exchange
X25519 (Curve25519 Diffie-Hellman)
| Parameter | Value |
|-----------|-------|
| Private key | 32 bytes |
| Public key | 32 bytes |
| Shared secret | 32 bytes |
### 2.2 Encryption
ChaCha20-Poly1305
| Parameter | Value |
|-----------|-------|
| Key | 32 bytes (SHA-256 of shared secret) |
| Nonce | 24 bytes (XChaCha variant) |
| Tag | 16 bytes |
## 3. Protocol
### 3.1 Setup (One-time)
```
Recipient (Server):
1. Generate X25519 keypair
2. Publish public key (embed in page or API)
3. Store private key securely
```
### 3.2 Encryption Flow (Browser)
```
1. Fetch recipient's public key
2. Generate ephemeral X25519 keypair
3. Compute shared secret: X25519(ephemeral_private, recipient_public)
4. Derive encryption key: SHA256(shared_secret)
5. Encrypt form data: ChaCha20-Poly1305(data, key, random_nonce)
6. Send: {ephemeral_public, nonce, ciphertext}
```
### 3.3 Decryption Flow (Server)
```
1. Receive {ephemeral_public, nonce, ciphertext}
2. Compute shared secret: X25519(recipient_private, ephemeral_public)
3. Derive encryption key: SHA256(shared_secret)
4. Decrypt: ChaCha20-Poly1305_Open(ciphertext, key, nonce)
```
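A minimal Go sketch of the server-side flow using the standard `crypto/ecdh` package and `golang.org/x/crypto/chacha20poly1305`. It illustrates the algorithm only, not the actual `pkg/stmf` code, which also parses the STMF container and header; the function name `decryptSTMF` is illustrative.
```go
import (
	"crypto/ecdh"
	"crypto/sha256"

	"golang.org/x/crypto/chacha20poly1305"
)

// decryptSTMF: X25519(recipient_private, ephemeral_public) -> SHA256 -> AEAD open.
func decryptSTMF(recipientPriv *ecdh.PrivateKey, ephemeralPub, nonce, ciphertext []byte) ([]byte, error) {
	pub, err := ecdh.X25519().NewPublicKey(ephemeralPub)
	if err != nil {
		return nil, err
	}
	shared, err := recipientPriv.ECDH(pub) // step 2: shared secret
	if err != nil {
		return nil, err
	}
	key := sha256.Sum256(shared) // step 3: derive encryption key
	aead, err := chacha20poly1305.NewX(key[:])
	if err != nil {
		return nil, err
	}
	return aead.Open(nil, nonce, ciphertext, nil) // step 4: decrypt + verify tag
}
```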
## 4. Wire Format
### 4.1 Container (Trix-based)
```
[Magic: "STMF" (4 bytes)]
[Header: Gob-encoded JSON]
[Payload: ChaCha20-Poly1305 ciphertext]
```
### 4.2 Header Structure
```json
{
  "version": "1.0",
  "algorithm": "x25519-chacha20poly1305",
  "ephemeral_pk": "<base64 32-byte ephemeral public key>"
}
```
### 4.3 Transmission
- Default form field: `_stmf_payload`
- Encoding: Base64 string
- Content-Type: `application/x-www-form-urlencoded` or `multipart/form-data`
## 5. Data Structures
### 5.1 FormField
```go
type FormField struct {
	Name     string // Field name
	Value    string // Base64 for files, plaintext otherwise
	Type     string // "text", "password", "file"
	Filename string // For file uploads
	MimeType string // For file uploads
}
```
### 5.2 FormData
```go
type FormData struct {
	Fields   []FormField       // Array of form fields
	Metadata map[string]string // Arbitrary key-value metadata
}
```
### 5.3 Builder Pattern
```go
formData := NewFormData().
	AddField("email", "user@example.com").
	AddFieldWithType("password", "secret", "password").
	AddFile("document", base64Content, "report.pdf", "application/pdf").
	SetMetadata("timestamp", time.Now().String())
```
## 6. Key Management API
### 6.1 Key Generation
```go
// pkg/stmf/keypair.go
func GenerateKeyPair() (*KeyPair, error)
type KeyPair struct {
	privateKey *ecdh.PrivateKey
	publicKey  *ecdh.PublicKey
}
```
### 6.2 Key Loading
```go
// From raw bytes
func LoadPublicKey(data []byte) (*ecdh.PublicKey, error)
func LoadPrivateKey(data []byte) (*ecdh.PrivateKey, error)
// From base64
func LoadPublicKeyBase64(encoded string) (*ecdh.PublicKey, error)
func LoadPrivateKeyBase64(encoded string) (*ecdh.PrivateKey, error)
// Reconstruct keypair from private key
func LoadKeyPair(privateKeyBytes []byte) (*KeyPair, error)
```
### 6.3 Key Export
```go
func (kp *KeyPair) PublicKey() []byte // Raw 32 bytes
func (kp *KeyPair) PrivateKey() []byte // Raw 32 bytes
func (kp *KeyPair) PublicKeyBase64() string // Base64 encoded
func (kp *KeyPair) PrivateKeyBase64() string // Base64 encoded
```
## 7. WASM API
### 7.1 BorgSTMF Namespace
```javascript
// Generate X25519 keypair
const keypair = await BorgSTMF.generateKeyPair();
// keypair.publicKey: base64 string
// keypair.privateKey: base64 string
// Encrypt form data
const encrypted = await BorgSTMF.encrypt(
  JSON.stringify(formData),
  serverPublicKeyBase64
);
// Encrypt with field-level control
const encrypted = await BorgSTMF.encryptFields(
  {email: "user@example.com", password: "secret"},
  serverPublicKeyBase64,
  {timestamp: Date.now().toString()} // Optional metadata
);
```
## 8. HTTP Middleware
### 8.1 Simple Usage
```go
import "github.com/Snider/Borg/pkg/stmf/middleware"
// Create middleware with private key
mw := middleware.Simple(privateKeyBytes)
// Or from base64
mw, err := middleware.SimpleBase64(privateKeyB64)
// Apply to handler
http.Handle("/submit", mw(myHandler))
```
### 8.2 Advanced Configuration
```go
cfg := middleware.DefaultConfig(privateKeyBytes)
cfg.FieldName = "_custom_field"      // Custom field name (default: _stmf_payload)
populate := true
cfg.PopulateForm = &populate         // Auto-populate r.Form (field is a *bool)
cfg.OnError = customErrorHandler     // Custom error handling
cfg.OnMissingPayload = customHandler // When the field is absent
mw := middleware.Middleware(cfg)
```
### 8.3 Context Access
```go
func myHandler(w http.ResponseWriter, r *http.Request) {
	// Get decrypted form data
	formData := middleware.GetFormData(r)
	// Get metadata
	metadata := middleware.GetMetadata(r)
	// Access fields
	email := formData.Get("email")
	password := formData.Get("password")
}
```
### 8.4 Middleware Behavior
- Handles POST, PUT, PATCH requests only
- Parses multipart/form-data (32 MB limit) or application/x-www-form-urlencoded
- Looks for field `_stmf_payload` (configurable)
- Base64 decodes, then decrypts
- Populates `r.Form` and `r.PostForm` with decrypted fields
- Returns 400 Bad Request on decryption failure
## 9. Integration Example
### 9.1 HTML Form
```html
<form id="secure-form" data-stmf-pubkey="<base64-public-key>">
  <input name="name" type="text">
  <input name="email" type="email">
  <input name="ssn" type="password">
  <button type="submit">Send Securely</button>
</form>

<script>
document.getElementById('secure-form').addEventListener('submit', async (e) => {
  e.preventDefault();
  const form = e.target;
  const pubkey = form.dataset.stmfPubkey;
  const formData = new FormData(form);
  const data = Object.fromEntries(formData);
  const encrypted = await BorgSTMF.encrypt(JSON.stringify(data), pubkey);
  await fetch('/api/submit', {
    method: 'POST',
    body: new URLSearchParams({_stmf_payload: encrypted}),
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  });
});
</script>
```
### 9.2 Server Handler
```go
func main() {
	privateKey, _ := os.ReadFile("private.key")
	mw := middleware.Simple(privateKey)
	http.Handle("/api/submit", mw(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		formData := middleware.GetFormData(r)
		name := formData.Get("name")
		email := formData.Get("email")
		ssn := formData.Get("ssn")
		// Process securely...
		w.WriteHeader(http.StatusOK)
	})))
	http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
}
```
## 10. Security Properties
### 10.1 Forward Secrecy
- Fresh ephemeral keypair per encryption
- Compromised private key doesn't decrypt past messages
- Each ciphertext has unique shared secret
### 10.2 Authenticity
- Poly1305 MAC prevents tampering
- Decryption fails if ciphertext modified
### 10.3 Confidentiality
- ChaCha20 provides 256-bit security
- Nonces are random (24 bytes), collision unlikely
- Data encrypted before leaving browser
### 10.4 Key Isolation
- Private key never exposed to browser/JavaScript
- Public key can be safely distributed
- Ephemeral keys discarded after encryption
## 11. Error Handling
```go
var (
	ErrInvalidMagic        = errors.New("invalid STMF magic")
	ErrInvalidPayload      = errors.New("invalid STMF payload")
	ErrDecryptionFailed    = errors.New("decryption failed")
	ErrInvalidPublicKey    = errors.New("invalid public key")
	ErrInvalidPrivateKey   = errors.New("invalid private key")
	ErrKeyGenerationFailed = errors.New("key generation failed")
)
```
## 12. Implementation Reference
- Types: `pkg/stmf/types.go`
- Key management: `pkg/stmf/keypair.go`
- Encryption: `pkg/stmf/encrypt.go`
- Decryption: `pkg/stmf/decrypt.go`
- Middleware: `pkg/stmf/middleware/http.go`
- WASM: `pkg/wasm/stmf/main.go`
## 13. Security Considerations
1. **Public key authenticity**: Verify public key source (HTTPS, pinning)
2. **Private key protection**: Never expose to browser, store securely
3. **Nonce uniqueness**: Random generation ensures uniqueness
4. **HTTPS required**: Transport layer must be encrypted
## 14. Future Work
- [ ] Multiple recipients
- [ ] Key attestation
- [ ] Offline decryption app
- [ ] Hardware key support (WebAuthn)
- [ ] Key rotation support

rfc/RFC-010-WASM-API.md (new file)
# RFC-010: WASM Decryption API
**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-002, RFC-007, RFC-009
---
## Abstract
This RFC specifies the WebAssembly (WASM) API for browser-based decryption of SMSG content and STMF form encryption. The API is exposed through two JavaScript namespaces: `BorgSMSG` for content decryption and `BorgSTMF` for form encryption.
## 1. Overview
The WASM module provides:
- SMSG decryption (v1, v2, v3, chunked, ABR)
- SMSG encryption
- STMF form encryption/decryption
- Metadata extraction without decryption
## 2. Module Loading
### 2.1 Files Required
```
stmf.wasm (~5.9MB) Compiled Go WASM module
wasm_exec.js (~20KB) Go WASM runtime
```
### 2.2 Initialization
```html
<script src="wasm_exec.js"></script>
<script>
const go = new Go();
WebAssembly.instantiateStreaming(fetch('stmf.wasm'), go.importObject)
.then(result => {
go.run(result.instance);
// BorgSMSG and BorgSTMF now available globally
});
</script>
```
### 2.3 Ready Event
```javascript
document.addEventListener('borgstmf:ready', (event) => {
  console.log('WASM ready, version:', event.detail.version);
});
```
## 3. BorgSMSG Namespace
### 3.1 Version
```javascript
BorgSMSG.version // "1.6.0"
BorgSMSG.ready // true when loaded
```
### 3.2 Metadata Functions
#### getInfo(base64) → Promise<ManifestInfo>
Get manifest without decryption.
```javascript
const info = await BorgSMSG.getInfo(base64Content);
// info.version, info.algorithm, info.format
// info.manifest.title, info.manifest.artist
// info.isV3Streaming, info.isChunked
// info.wrappedKeys (for v3)
```
#### getInfoBinary(uint8Array) → Promise<ManifestInfo>
Binary input variant (no base64 decode needed).
```javascript
const bytes = new Uint8Array(await response.arrayBuffer());
const info = await BorgSMSG.getInfoBinary(bytes);
```
### 3.3 Decryption Functions
#### decrypt(base64, password) → Promise<Message>
Full decryption (v1 format, base64 attachments).
```javascript
const msg = await BorgSMSG.decrypt(base64Content, password);
// msg.body, msg.subject, msg.from
// msg.attachments[0].name, .content (base64), .mime
```
#### decryptStream(base64, password) → Promise<StreamMessage>
Streaming decryption (v2 format, binary attachments).
```javascript
const msg = await BorgSMSG.decryptStream(base64Content, password);
// msg.attachments[0].data (Uint8Array)
// msg.attachments[0].mime
```
#### decryptBinary(uint8Array, password) → Promise<StreamMessage>
Binary input, binary output.
```javascript
const bytes = new Uint8Array(await fetch(url).then(r => r.arrayBuffer()));
const msg = await BorgSMSG.decryptBinary(bytes, password);
```
#### quickDecrypt(base64, password) → Promise<string>
Returns body text only (fast path).
```javascript
const body = await BorgSMSG.quickDecrypt(base64Content, password);
```
### 3.4 V3 Streaming Functions
#### decryptV3(base64, params) → Promise<StreamMessage>
Decrypt v3 streaming content with LTHN rolling keys.
```javascript
const msg = await BorgSMSG.decryptV3(base64Content, {
  license: "user-license-key",
  fingerprint: "device-fingerprint" // optional
});
```
#### getV3ChunkInfo(base64) → Promise<ChunkInfo>
Get chunk index for seeking without full decrypt.
```javascript
const chunkInfo = await BorgSMSG.getV3ChunkInfo(base64Content);
// chunkInfo.chunkSize (default 1MB)
// chunkInfo.totalChunks
// chunkInfo.totalSize
// chunkInfo.index[i].offset, .size
```
#### unwrapV3CEK(base64, params) → Promise<string>
Unwrap CEK for manual chunk decryption. Returns base64 CEK.
```javascript
const cekBase64 = await BorgSMSG.unwrapV3CEK(base64Content, {
  license: "license",
  fingerprint: "fp"
});
```
#### decryptV3Chunk(base64, cekBase64, chunkIndex) → Promise<Uint8Array>
Decrypt single chunk by index.
```javascript
const chunk = await BorgSMSG.decryptV3Chunk(base64Content, cekBase64, 5);
```
#### parseV3Header(uint8Array) → Promise<V3HeaderInfo>
Parse header from partial data (for streaming).
```javascript
const header = await BorgSMSG.parseV3Header(bytes);
// header.format, header.keyMethod, header.cadence
// header.payloadOffset (where chunks start)
// header.wrappedKeys, header.chunked, header.manifest
```
#### unwrapCEKFromHeader(wrappedKeys, params, cadence) → Promise<Uint8Array>
Unwrap CEK from parsed header.
```javascript
const cek = await BorgSMSG.unwrapCEKFromHeader(
  header.wrappedKeys,
  {license: "lic", fingerprint: "fp"},
  "daily"
);
```
#### decryptChunkDirect(chunkBytes, cek) → Promise<Uint8Array>
Low-level chunk decryption with pre-unwrapped CEK.
```javascript
const plaintext = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
```
### 3.5 Encryption Functions
#### encrypt(message, password, hint?) → Promise<string>
Encrypt message (v1 format). Returns base64.
```javascript
const encrypted = await BorgSMSG.encrypt({
  body: "Hello",
  attachments: [{
    name: "file.txt",
    content: btoa("data"),
    mime: "text/plain"
  }]
}, password, "optional hint");
```
#### encryptWithManifest(message, password, manifest) → Promise<string>
Encrypt with manifest (v2 format). Returns base64.
```javascript
const encrypted = await BorgSMSG.encryptWithManifest(message, password, {
  title: "My Track",
  artist: "Artist Name",
  licenseType: "perpetual"
});
```
### 3.6 ABR Functions
#### parseABRManifest(jsonString) → Promise<ABRManifest>
Parse HLS-style ABR manifest.
```javascript
const manifest = await BorgSMSG.parseABRManifest(manifestJson);
// manifest.version, manifest.title, manifest.duration
// manifest.variants[i].name, .bandwidth, .url
// manifest.defaultIdx
```
#### selectVariant(manifest, bandwidthBps) → Promise<number>
Select best variant for bandwidth (returns index).
```javascript
const idx = await BorgSMSG.selectVariant(manifest, measuredBandwidth);
// Uses 80% safety threshold
```
## 4. BorgSTMF Namespace
### 4.1 Key Generation
```javascript
const keypair = await BorgSTMF.generateKeyPair();
// keypair.publicKey (base64 X25519)
// keypair.privateKey (base64 X25519) - KEEP SECRET
```
### 4.2 Encryption
```javascript
// Encrypt JSON string
const encrypted = await BorgSTMF.encrypt(
  JSON.stringify(formData),
  serverPublicKeyBase64
);
// Encrypt with metadata
const encrypted = await BorgSTMF.encryptFields(
  {email: "user@example.com", password: "secret"},
  serverPublicKeyBase64,
  {timestamp: Date.now().toString()} // optional metadata
);
```
## 5. Type Definitions
### 5.1 ManifestInfo
```typescript
interface ManifestInfo {
  version: string;
  algorithm: string;
  format?: string;
  compression?: string;
  hint?: string;
  keyMethod?: string;        // "LTHN" for v3
  cadence?: string;          // "daily", "12h", "6h", "1h"
  wrappedKeys?: WrappedKey[];
  isV3Streaming: boolean;
  chunked?: ChunkInfo;
  isChunked: boolean;
  manifest?: Manifest;
}
```
### 5.2 Message / StreamMessage
```typescript
interface Message {
  from?: string;
  to?: string;
  subject?: string;
  body: string;
  timestamp?: number;
  attachments: Attachment[];
  replyKey?: KeyInfo;
  meta?: Record<string, string>;
}

interface Attachment {
  name: string;
  mime: string;
  size: number;
  content?: string;  // base64 (v1)
  data?: Uint8Array; // binary (v2/v3)
}
```
### 5.3 ChunkInfo
```typescript
interface ChunkInfo {
  chunkSize: number;   // default 1048576 (1MB)
  totalChunks: number;
  totalSize: number;
  index: ChunkEntry[];
}

interface ChunkEntry {
  offset: number;
  size: number;
}
```
### 5.4 Manifest
```typescript
interface Manifest {
  title: string;
  artist?: string;
  album?: string;
  genre?: string;
  year?: number;
  releaseType?: string;  // "single", "album", "ep", "mix"
  duration?: number;     // seconds
  format?: string;
  expiresAt?: number;    // Unix timestamp
  issuedAt?: number;     // Unix timestamp
  licenseType?: string;  // "perpetual", "rental", "stream", "preview"
  tracks?: Track[];
  tags?: string[];
  links?: Record<string, string>;
  extra?: Record<string, string>;
}
```
## 6. Error Handling
### 6.1 Pattern
All functions throw on error:
```javascript
try {
  const msg = await BorgSMSG.decrypt(content, password);
} catch (e) {
  console.error(e.message);
}
```
### 6.2 Common Errors
| Error | Cause |
|-------|-------|
| `decrypt requires 2 arguments` | Wrong argument count |
| `decryption failed: {reason}` | Wrong password or corrupted |
| `invalid format` | Not a valid SMSG file |
| `unsupported version` | Unknown format version |
| `key expired` | v3 rolling key outside window |
| `invalid base64: {reason}` | Base64 decode failed |
| `chunk out of range` | Invalid chunk index |
## 7. Performance
### 7.1 Binary vs Base64
- Binary functions (`*Binary`, `decryptStream`) are ~30% faster
- Avoid double base64 encoding
### 7.2 Large Files (>50MB)
Use chunked streaming:
```javascript
// Efficient: parse the header once, cache the CEK, then decrypt chunks on demand
const header = await BorgSMSG.parseV3Header(bytes);
const cek = await BorgSMSG.unwrapCEKFromHeader(header.wrappedKeys, params, header.cadence);
for (let i = 0; i < header.chunked.totalChunks; i++) {
  // Slice the bytes of chunk i (assumes index offsets are relative to payloadOffset)
  const {offset, size} = header.chunked.index[i];
  const chunkBytes = bytes.slice(header.payloadOffset + offset, header.payloadOffset + offset + size);
  const chunk = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
  player.write(chunk); // each decrypted chunk is GC'd after the iteration
}
```
### 7.3 Typical Execution Times
| Operation | Size | Time |
|-----------|------|------|
| getInfo | any | ~50-100ms |
| decrypt (small) | <1MB | ~200-500ms |
| decrypt (large) | 100MB | 2-5s |
| decryptV3Chunk | 1MB | ~200-400ms |
| generateKeyPair | - | ~50-200ms |
## 8. Browser Compatibility
| Browser | Support |
|---------|---------|
| Chrome 57+ | Full |
| Firefox 52+ | Full |
| Safari 11+ | Full |
| Edge 16+ | Full |
| IE | Not supported |
Requirements:
- WebAssembly support
- Async/await (ES2017)
- Uint8Array
## 9. Memory Management
- WASM module: ~5.9MB static
- Per-operation: Peak ~2-3x file size during decryption
- Go GC reclaims after Promise resolution
- Keys never leave WASM memory
## 10. Implementation Reference
- Source: `pkg/wasm/stmf/main.go` (1758 lines)
- Build: `GOOS=js GOARCH=wasm go build -o stmf.wasm ./pkg/wasm/stmf/`
## 11. Security Considerations
1. **Password handling**: Clear from memory after use
2. **Memory isolation**: WASM sandbox prevents JS access
3. **Constant-time crypto**: Go crypto uses safe operations
4. **Key protection**: Keys never exposed to JavaScript
## 12. Future Work
- [ ] WebWorker support for background decryption
- [ ] Streaming API with ReadableStream
- [ ] Smaller WASM size via TinyGo
- [ ] Native Web Crypto fallback for simple operations