Commit b98dc0cbe9: 20 changed files with 9133 additions and 0 deletions.

RFC-001-HLCRF-COMPOSITOR.md (new file, 897 lines)
# RFC: HLCRF Compositor

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

The HLCRF Compositor is a hierarchical layout system where each composite contains up to five regions—Header, Left, Content, Right, and Footer. Composites nest infinitely: any region can contain another composite, which can contain another, and so on.

The core innovation is **inline sub-structure declaration**: a single string like `H[LC]C[HCF]F` declares the entire nested hierarchy. No configuration files, no database schema, no separate definitions—parse the string and you have the complete structure.

Just as Markdown made document formatting a human-readable string, HLCRF makes layout structure a portable, self-describing data type that can be stored, transmitted, validated, and rendered anywhere.

Path-based element IDs (`L-H-0`, `C-F-C-2`) encode the full hierarchy, eliminating database lookups to resolve structure. The system supports responsive breakpoints, block-based content, and shortcode integration.

---

## Motivation

Traditional layout systems require separate templates for each layout variation. A page with a left sidebar needs one template; without it, another. Add responsive behaviour, and the combinations multiply quickly.

The HLCRF Compositor addresses this through:

1. **Data-driven layouts** — A single compositor handles all layout permutations via variant strings
2. **Nested composition** — Layouts can contain other layouts, with automatic path tracking for unique identification
3. **Responsive design** — Breakpoint-aware rendering collapses regions appropriately for different devices
4. **Block-based content** — Content populates regions as discrete blocks, enabling conditional display and reordering

The approach treats layout as data rather than markup, allowing the same content to adapt to different structural requirements without template duplication.

---

## Terminology

### HLCRF

**H**ierarchical **L**ayer **C**ompositing **R**ender **F**rame.

The acronym also serves as a mnemonic for the five possible regions:

| Letter | Region | Semantic element | Purpose |
|--------|--------|------------------|---------|
| **H** | Header | `<header>` | Top navigation, branding |
| **L** | Left | `<aside>` | Left sidebar, secondary navigation |
| **C** | Content | `<main>` | Primary content area |
| **R** | Right | `<aside>` | Right sidebar, supplementary content |
| **F** | Footer | `<footer>` | Site footer, links, legal |

### Variant string

A string of 1–5 characters from the set `{H, L, C, R, F}` that defines which regions are active. The string `HCF` produces a layout with Header, Content, and Footer. The string `HLCRF` enables all five regions.

A flat variant like `HLCRF` renders as:

```
┌─────────────────────────────────────┐
│                  H                  │ ← Header
├─────────┬───────────────┬───────────┤
│    L    │       C       │     R     │ ← Body row
├─────────┴───────────────┴───────────┤
│                  F                  │ ← Footer
└─────────────────────────────────────┘
```

A nested variant like `H[LCR]CF` renders differently—the body row is **inside** the Header:

```
┌─────────────────────────────────────┐
│ H ┌─────────┬─────────┬───────────┐ │
│   │   H-L   │   H-C   │    H-R    │ │ ← Body row nested IN Header
│   └─────────┴─────────┴───────────┘ │
├─────────────────────────────────────┤
│                  C                  │ ← Root Content
├─────────────────────────────────────┤
│                  F                  │ ← Root Footer
└─────────────────────────────────────┘
```

With blocks placed, element IDs become addresses. A typical website declared as `H[LCR]CF`:

```
┌───────────────────────────────────────────────────────────────┐
│ H ┌───────────┬───────────────────────────────┬─────────────┐ │
│   │   H-L-0   │ H-C-0   H-C-1   H-C-2   H-C-3 │    H-R-0    │ │
│   │  [Logo]   │ [Home]  [About] [Blog]  [Shop]│   [Login]   │ │
│   └───────────┴───────────────────────────────┴─────────────┘ │
├───────────────────────────────────────────────────────────────┤
│  C-0                                                          │
│  [Page Content]                                               │
├───────────────────────────────────────────────────────────────┤
│  F-0       F-1                                                │
│  [© 2026]  [Legal]                                            │
└───────────────────────────────────────────────────────────────┘
```

Every element has a unique, deterministic address. Note that block indices are zero-based.

**Key principle:** The letters present (and absent) define the layout type. Brackets define nesting.

### Path

A hierarchical identifier tracking a layout's position within nested structures. The root layout has an empty path. A layout nested within the Left region of the root receives path `L-`. Further nesting appends to this path.

### Slot

A named region within a layout that accepts content. Each slot corresponds to one HLCRF letter.

### Block

A discrete unit of content assigned to a region. Blocks have their own ordering and can conditionally display based on breakpoint or other conditions.

### Breakpoint

A device category determining layout behaviour:

| Breakpoint | Target | Typical behaviour |
|------------|--------|-------------------|
| `phone` | < 768px | Single column, stacked |
| `tablet` | 768px–1023px | Content only, sidebars hidden |
| `desktop` | ≥ 1024px | Full layout with all regions |

---

## Specification

### Layout variant strings

#### Valid variants

Any combination of the letters H, L, C, R, F, in that order. Common variants:

| Variant | Description | Use case |
|---------|-------------|----------|
| `C` | Content only | Embedded widgets, minimal layouts |
| `HCF` | Header, Content, Footer | Standard page layout |
| `HCR` | Header, Content, Right | Dashboard with right sidebar |
| `HLC` | Header, Left, Content | Admin panel with navigation |
| `HLCF` | Header, Left, Content, Footer | Admin with footer |
| `HLCR` | Header, Left, Content, Right | Three-column dashboard |
| `HLCRF` | All regions | Full-featured layouts |

The variant string is case-insensitive. The compositor normalises to uppercase.
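The ordering and normalisation rules above can be sketched in a few lines. This is an illustrative sketch, not the shipped compositor's validator; the function name `normaliseVariant` is an assumption for this example.

```javascript
// Hypothetical sketch: validate and normalise a flat variant string.
// Letters are case-insensitive and must appear in H, L, C, R, F order,
// with no repeats (the RFC's "in that order" rule).
function normaliseVariant(variant) {
  const upper = variant.toUpperCase();
  if (upper.length === 0) throw new Error('Variant cannot be empty');
  const order = 'HLCRF';
  let cursor = 0; // letters must form a subsequence of "HLCRF"
  for (const letter of upper) {
    const idx = order.indexOf(letter, cursor);
    if (idx === -1) throw new Error(`Invalid variant: ${variant}`);
    cursor = idx + 1;
  }
  return upper;
}
```

Under these rules `hcf` normalises to `HCF`, while `CH` is rejected because Header cannot follow Content.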

#### Inline sub-structure declaration

Variant strings support **inline nesting** using bracket notation. Each region letter can be followed by brackets containing its nested layout:

```
H[LC]L[HC]C[HCF]F[LCF]
```

This declares the entire hierarchy in a single string:

| Segment | Meaning |
|---------|---------|
| `H[LC]` | Header region contains a Left-Content layout |
| `L[HC]` | Left region contains a Header-Content layout |
| `C[HCF]` | Content region contains a Header-Content-Footer layout |
| `F[LCF]` | Footer region contains a Left-Content-Footer layout |

Brackets nest recursively. A complex declaration like `H[L[C]C]CF` means:
- Header contains a nested layout
- That nested layout's Left region contains yet another layout (Content-only)
- Root also has Content and Footer at the top level

This syntax is particularly useful for:
- **Shortcodes** declaring their expected structure
- **Templates** defining reusable page scaffolds
- **Configuration** specifying layout contracts

The string `H[LC]L[HC]C[HCF]F[LCF]` is a complete website declaration—no additional nesting configuration needed.
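The bracket notation can be parsed with a short recursive descent. The sketch below is illustrative only (the RFC does not publish its parser); the node shape `{ variant, children }` and the function name are assumptions for this example.

```javascript
// Illustrative sketch: parse an inline sub-structure declaration such as
// "H[LC]C[HCF]F" into a tree. Each node is { variant, children }, where
// children maps a region letter to the nested node declared in brackets.
function parseDeclaration(source) {
  let pos = 0;
  function parseLevel() {
    const node = { variant: '', children: {} };
    while (pos < source.length && source[pos] !== ']') {
      const letter = source[pos++].toUpperCase();
      node.variant += letter;
      if (source[pos] === '[') {
        pos++;                             // consume '['
        node.children[letter] = parseLevel();
        pos++;                             // consume matching ']'
      }
    }
    return node;
  }
  return parseLevel();
}
```

For `H[L[C]C]CF` this yields a root variant `HCF` whose Header child has variant `LC`, whose own Left child is Content-only, matching the bullet-point reading above.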

#### Region requirements

- **Content (C)** is implicitly included when any body region (L, C, R) is present
- Regions render only when the variant includes them AND content has been added
- An empty region does not render, even if specified in the variant

### Region hierarchy

The compositor enforces a fixed spatial hierarchy:

```
Row 1: Header (full width)
Row 2: Left | Content | Right (body row)
Row 3: Footer (full width)
```

This structure maps to CSS Grid areas:

```css
grid-template-areas:
  "header"
  "body"
  "footer";
```

The body row uses a nested grid or flexbox for the three-column layout.

### Nesting and path context

Layouts can be nested within any region. The compositor automatically manages path context to ensure unique slot identifiers.

#### Path generation

When a layout renders, it assigns each slot an ID based on its path:

- Root layout, Header slot: `H`
- Root layout, Left slot: `L`
- Nested layout within Left, Header slot: `L-H`
- Nested layout within Left, Content slot: `L-C`
- Further nested within that Content slot: `L-C-C`

#### Block identifiers

Within each slot, blocks receive indexed identifiers:

- First block in Header: `H-0`
- Second block in Header: `H-1`
- First block in nested Content: `L-C-0`

This scheme enables precise targeting for styling, JavaScript, and debugging.
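The slot and block ID rules above reduce to simple string joins. A minimal sketch, assuming hypothetical helper names (`slotId`, `blockId`) rather than the compositor's real API:

```javascript
// Sketch of the path convention: a slot ID joins the parent path and the
// region letter; a block ID appends the zero-based index within that slot.
function slotId(path, region) {
  // The root layout has an empty path, so its slots are bare letters.
  return path === '' ? region : `${path}-${region}`;
}

function blockId(path, region, index) {
  return `${slotId(path, region)}-${index}`;
}
```

So a layout nested in Left yields slot `L-H` for its Header, and the first block in its Content is `L-C-0`, as listed above.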

### Responsive breakpoints

The compositor supports breakpoint-specific layout variants. A page might use `HLCRF` on desktop but collapse to `HCF` on tablet and `C` on phone.

#### Configuration schema

```json
{
  "layout_config": {
    "layout_type": {
      "desktop": "HLCRF",
      "tablet": "HCF",
      "phone": "CF"
    },
    "regions": {
      "desktop": {
        "left": { "width": 280 },
        "content": { "max_width": 680 },
        "right": { "width": 280 }
      }
    }
  }
}
```

#### CSS breakpoint handling

The default CSS collapses sidebars at the tablet breakpoint and stacks content at the phone breakpoint:

```css
/* Tablet: Hide sidebars */
@media (max-width: 1023px) {
  .hlcrf-body {
    grid-template-columns: minmax(0, var(--content-max-width));
    grid-template-areas: "content";
  }
  .hlcrf-left, .hlcrf-right { display: none; }
}

/* Phone: Full width, stacked */
@media (max-width: 767px) {
  .hlcrf-body {
    grid-template-columns: 1fr;
    padding: 0 1rem;
  }
}
```

### Block visibility

Blocks can define per-breakpoint visibility:

```json
{
  "breakpoint_visibility": {
    "desktop": true,
    "tablet": true,
    "phone": false
  }
}
```

A block with `phone: false` does not render on mobile devices, regardless of which region it belongs to.
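Applying these flags at render time is a one-line filter. A sketch under stated assumptions: block objects carry the `breakpoint_visibility` map shown above, and blocks without the map default to visible everywhere (the default behaviour is my assumption, not stated in the RFC).

```javascript
// Illustrative sketch: keep only the blocks visible at the current
// breakpoint. Missing flags, or a missing map entirely, mean "visible".
function visibleBlocks(blocks, breakpoint) {
  return blocks.filter((block) => {
    const flags = block.breakpoint_visibility;
    return !flags || flags[breakpoint] !== false;
  });
}
```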

### Deep nesting

The HLCRF system is **infinitely nestable**. Any region can contain another complete HLCRF layout, which can itself contain further nested layouts. The path-based ID scheme ensures every element remains uniquely addressable regardless of nesting depth.

#### Path reading convention

Paths read left-to-right, describing the journey from root to element:

```
L-H-0
│ │ └─ Block index (first block)
│ └─── Region in nested layout (Header)
└───── Region in root layout (Left)
```

This means: "The first block in the Header region of a layout nested within the Left region of the root."

#### Multi-level path construction

Paths concatenate as layouts nest. Consider this structure:

- Root layout: `HLCRF`
- Nested in Content: another `HCF` layout
- Nested in that layout's Footer: a `C`-only layout with a button block

The button receives the path: `C-F-C-0`

Reading left to right:
1. `C` — Content region of root
2. `F` — Footer region of nested layout
3. `C` — Content region of deepest layout
4. `0` — First block in that region
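The left-to-right reading above can also be mechanised: splitting on `-` gives the region chain, and the trailing segment is the block index. This decoder is an illustrative sketch (name and return shape are assumptions), not part of the documented API:

```javascript
// Sketch: decode a block ID back into its region chain and index,
// mirroring the left-to-right reading described above.
function decodeBlockId(id) {
  const parts = id.split('-');
  const index = Number(parts.pop()); // trailing segment is the block index
  return { regions: parts, index };
}
```

Decoding `C-F-C-0` yields regions `['C', 'F', 'C']` and index `0`: the first block in the Content region of the deepest layout.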

#### Three-level nesting example

```
┌─────────────────────────────────────────────────────────┐
│ H (root header)                                         │
├────────┬────────────────────────────────────┬───────────┤
│ L      │ C                                  │ R         │
│        │ ┌───────────────────────────────┐  │           │
│        │ │ C-H (nested header)           │  │           │
│        │ ├─────┬─────────────┬───────────┤  │           │
│        │ │ C-L │ C-C         │ C-R       │  │           │
│        │ │     │ ┌─────────┐ │           │  │           │
│        │ │     │ │  C-C-C  │ │           │  │           │
│        │ │     │ │(deepest)│ │           │  │           │
│        │ │     │ └─────────┘ │           │  │           │
│        │ ├─────┴─────────────┴───────────┤  │           │
│        │ │ C-F (nested footer)           │  │           │
│        │ └───────────────────────────────┘  │           │
├────────┴────────────────────────────────────┴───────────┤
│ F (root footer)                                         │
└─────────────────────────────────────────────────────────┘
```

In this diagram:
- Root regions: `H`, `L`, `C`, `R`, `F`
- Second level (nested in C): `C-H`, `C-L`, `C-C`, `C-R`, `C-F`
- Third level (nested in C-C): `C-C-C`

A block placed in the deepest Content region would receive ID `C-C-C-0`.

#### Path examples at each nesting level

| Nesting depth | Example path | Meaning |
|---------------|--------------|---------|
| 1 (root) | `H-0` | First block in root Header |
| 1 (root) | `L-2` | Third block in root Left sidebar |
| 2 (nested) | `L-H-0` | First block in Header of layout nested in Left |
| 2 (nested) | `C-C-1` | Second block in Content of layout nested in Content |
| 3 (deep) | `L-C-H-0` | First block in Header of layout nested in Content, nested in Left |
| 4+ | `C-L-C-R-0` | Paths continue indefinitely |

The path length equals the nesting depth plus one (for the block index).

#### Practical example: sidebar with nested layout

```php
$sidebar = Layout::make('HCF')
    ->h('<h3>Widget Panel</h3>')
    ->c(view('widgets.list'))
    ->f('<a href="#">Manage widgets</a>');

$page = Layout::make('HLCRF')
    ->h(view('header'))
    ->l($sidebar) // Nested layout in Left
    ->c(view('main-content'))
    ->f(view('footer'));
```

The sidebar's regions receive paths:
- Header: `L-H`
- Content: `L-C`
- Footer: `L-F`

Blocks within the sidebar's Content would be `L-C-0`, `L-C-1`, etc.

#### Why infinite nesting matters

Deep nesting enables:

1. **Component encapsulation** — A reusable component can define its own internal layout without knowing where it will be placed
2. **Recursive structures** — Tree views, nested comments, or hierarchical navigation can use consistent layout patterns at each level
3. **Micro-layouts** — Small UI sections (cards, panels, modals) can use HLCRF internally whilst remaining composable

---

## API reference

### `Layout` class

**Namespace:** `Core\Front\Components`

#### Factory method

```php
Layout::make(string $variant = 'HCF', string $path = ''): static
```

Creates a new layout instance.

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `$variant` | string | `'HCF'` | Layout variant string |
| `$path` | string | `''` | Hierarchical path (typically managed automatically) |

#### Slot methods

Each region has a variadic method accepting any renderable content:

```php
public function h(mixed ...$items): static // Header
public function l(mixed ...$items): static // Left
public function c(mixed ...$items): static // Content
public function r(mixed ...$items): static // Right
public function f(mixed ...$items): static // Footer
```

Alias methods provide readability for explicit code:

```php
public function addHeader(mixed ...$items): static
public function addLeft(mixed ...$items): static
public function addContent(mixed ...$items): static
public function addRight(mixed ...$items): static
public function addFooter(mixed ...$items): static
```

#### Content types

Slot methods accept:

- **Strings** — Raw HTML or text
- **`Htmlable`** — Objects implementing `toHtml()`
- **`Renderable`** — Objects implementing `render()`
- **`View`** — Laravel view instances
- **`Layout`** — Nested layout instances (path context injected automatically)
- **Callables** — Functions returning any of the above

#### Attribute methods

```php
public function attributes(array $attributes): static
```

Merge HTML attributes onto the layout container.

```php
public function class(string $class): static
```

Append a CSS class to the container.

#### Rendering

```php
public function render(): string
public function toHtml(): string
public function __toString(): string
```

All three methods return the compiled HTML. The class implements `Htmlable` and `Renderable` for framework integration.

---

## Examples

### Basic page layout

```php
use Core\Front\Components\Layout;

$page = Layout::make('HCF')
    ->h(view('components.header'))
    ->c('<article>Page content here</article>')
    ->f(view('components.footer'));

echo $page;
```

### Admin dashboard with sidebar

```php
$dashboard = Layout::make('HLCF')
    ->class('min-h-screen bg-gray-100')
    ->h(view('admin.header'))
    ->l(view('admin.sidebar'))
    ->c($content)
    ->f(view('admin.footer'));
```

### Nested layouts

```php
// Outer layout with left sidebar
$outer = Layout::make('HLC')
    ->h('<nav>Main Navigation</nav>')
    ->l('<aside>Sidebar</aside>')
    ->c(
        // Inner layout nested in content area
        Layout::make('HCF')
            ->h('<h1>Section Title</h1>')
            ->c('<div>Inner content</div>')
            ->f('<p>Section footer</p>')
    );
```

The inner layout receives path context `C-`, so its slots become `C-H`, `C-C`, and `C-F`.

### Multiple blocks per region

```php
$page = Layout::make('HLCF')
    ->h(view('header.logo'), view('header.navigation'), view('header.search'))
    ->l(view('sidebar.menu'), view('sidebar.widgets'))
    ->c(view('content.hero'), view('content.features'), view('content.cta'))
    ->f(view('footer.links'), view('footer.legal'));
```

Each item becomes a separate block with a unique identifier.

### Responsive rendering

```php
// In a service or controller
$breakpoint = $this->detectBreakpoint($request);
$layoutType = $page->getLayoutTypeFor($breakpoint);

$layout = Layout::make($layoutType)
    ->class('bio-page')
    ->h($headerBlocks)
    ->c($contentBlocks)
    ->f($footerBlocks);

// Sidebars only added on desktop
if ($breakpoint === 'desktop') {
    $layout->l($leftBlocks)->r($rightBlocks);
}
```

---

## Implementation notes

### CSS Grid structure

The compositor generates a grid-based structure:

```html
<div class="hlcrf-layout" data-layout="root">
  <header class="hlcrf-header" data-slot="H">...</header>
  <div class="hlcrf-body flex flex-1">
    <aside class="hlcrf-left shrink-0" data-slot="L">...</aside>
    <main class="hlcrf-content flex-1" data-slot="C">...</main>
    <aside class="hlcrf-right shrink-0" data-slot="R">...</aside>
  </div>
  <footer class="hlcrf-footer" data-slot="F">...</footer>
</div>
```

The base CSS uses CSS Grid for the outer structure and Flexbox for the body row.

### Semantic HTML

The compositor uses appropriate semantic elements:

- `<header>` for the Header region
- `<aside>` for Left and Right sidebars
- `<main>` for the Content region
- `<footer>` for the Footer region

This provides accessibility benefits and a proper document outline.

### Accessibility considerations

- **Landmark regions** — Semantic elements create implicit ARIA landmarks
- **Skip links** — Consider adding skip-to-content links in the Header
- **Focus management** — Nested layouts maintain sensible tab order
- **Screen reader compatibility** — Block `data-block` attributes aid debugging but do not affect the accessibility tree

### Data attributes

The compositor adds data attributes for debugging and JavaScript integration:

| Attribute | Location | Purpose |
|-----------|----------|---------|
| `data-layout` | Container | Layout path identifier (`root`, `L-`, etc.) |
| `data-slot` | Region | Slot identifier (`H`, `L-C`, etc.) |
| `data-block` | Block wrapper | Block identifier (`H-0`, `L-C-2`, etc.) |

### Database schema

For persisted layouts, the schema includes:

**Pages/Biolinks table:**
- `layout_config` (JSON) — Layout type per breakpoint, region dimensions

**Blocks table:**
- `region` (string) — Target region: `header`, `left`, `content`, `right`, `footer`
- `region_order` (integer) — Sort order within region
- `breakpoint_visibility` (JSON) — Per-breakpoint visibility flags

### Theme integration

The renderer can generate CSS custom properties for theming:

```css
:root {
  --biolink-bg: #f9fafb;
  --biolink-text: #111827;
  --biolink-font: 'Inter', system-ui, sans-serif;
}
```

These integrate with the compositor's class-based styling.

---

## Integration patterns

This section describes how the HLCRF system integrates with common web development patterns and technologies.

### CSS box model parallel

HLCRF mirrors the CSS box model conceptually, which aids developer intuition:

```
CSS Box Model            HLCRF Layout
┌──────────────┐         ┌──────────────┐
│    margin    │         │      H       │ ← Block-level, full-width
├──────────────┤         ├──────────────┤
│ │  padding │ │         │ L │  C   │ R │ ← Content row with "sidebars"
├──────────────┤         ├──────────────┤
│    margin    │         │      F       │ ← Block-level, full-width
└──────────────┘         └──────────────┘
```

The mapping:
- **H/F** behave like block-level elements spanning the full width, similar to how top/bottom margins frame content
- **L/R** act as the "padding" on either side of the content, creating gutters or sidebars
- **C** is the content itself—the innermost box

This mental model helps developers predict layout behaviour:
- Adding `L` or `R` is like adding horizontal padding
- Adding `H` or `F` is like adding vertical margins
- The `[LCR]` row always forms the content layer, with `C` as the primary content area

When nesting layouts, the analogy extends recursively—a nested layout's `H/F` become block-level elements within their parent region, and its `[LCR]` row subdivides that space further.

### Shortcode structure definitions

HLCRF enables shortcodes to define complete structural layouts through their variant string. The variant becomes a **structural contract** that the shortcode guarantees to fulfil.

#### Example: Hero shortcode

```php
// Shortcode definition
class HeroShortcode extends Shortcode
{
    public string $layout = 'HCF';

    public function render(): Layout
    {
        return Layout::make($this->layout)
            ->h($this->renderTitle())   // H: Title region
            ->c($this->renderContent()) // C: Main hero content
            ->f($this->renderCta());    // F: Call-to-action region
    }
}
```

Usage in content:

```
[hero layout="HCF" title="Welcome" cta="Get Started"]
  Your hero content here.
[/hero]
```

The shortcode author declares which regions exist; content authors populate them. The variant string serves as documentation and constraint simultaneously.

#### Variant as capability declaration

Different shortcode variants expose different capabilities:

| Shortcode | Variant | Regions | Purpose |
|-----------|---------|---------|---------|
| `[hero]` | `HCF` | Title, Content, CTA | Landing page hero |
| `[sidebar-panel]` | `HLC` | Title, Actions, Content | Dashboard widget |
| `[card]` | `HCF` | Header, Body, Footer | Content card |
| `[split]` | `LCR` | Left, Centre, Right | Comparison layout |

#### Nested shortcode structures

Shortcodes can nest within each other, inheriting the path context:

```
[dashboard layout="HLCF"]
  [widget layout="HCF" slot="L"]
    Widget content here
  [/widget]
  Main dashboard content
[/dashboard]
```

The widget's regions receive paths `L-H`, `L-C`, `L-F` because it renders within the dashboard's Left region. This happens automatically—shortcode authors need not manage paths manually.

### HTML5 slots integration

The path-based ID system integrates naturally with HTML5 `<slot>` elements, enabling Web Components to define HLCRF structures.

#### Slot element mapping

```html
<template id="hlcrf-component">
  <div class="hlcrf-layout">
    <header data-slot="H">
      <slot name="H"></slot>
    </header>
    <div class="hlcrf-body">
      <aside data-slot="L">
        <slot name="L"></slot>
      </aside>
      <main data-slot="C">
        <slot name="C"></slot>
      </main>
      <aside data-slot="R">
        <slot name="R"></slot>
      </aside>
    </div>
    <footer data-slot="F">
      <slot name="F"></slot>
    </footer>
  </div>
</template>
```

#### Nested slot paths

For nested layouts, slot names follow the path convention:

```html
<div data-slot="L-C">
  <slot name="L-C"></slot>
  <!-- Content injected into nested layout's Content region -->
</div>
```

The `data-slot` attribute and slot `name` always match, enabling both CSS targeting and content projection:

```html
<!-- Nested layout within the Left region -->
<aside data-slot="L">
  <div class="hlcrf-layout" data-layout="L-">
    <header data-slot="L-H">
      <slot name="L-H"></slot>
    </header>
    <main data-slot="L-C">
      <slot name="L-C"></slot>
    </main>
    <footer data-slot="L-F">
      <slot name="L-F"></slot>
    </footer>
  </div>
</aside>
```

Content authors inject into specific nested regions using the slot attribute:

```html
<my-layout-component>
  <h1 slot="H">Page Title</h1>
  <nav slot="L-H">Sidebar Navigation</nav>
  <div slot="L-C">Sidebar Content</div>
  <article slot="C">Main Content</article>
</my-layout-component>
```

#### Progressive enhancement

Slots enable progressive enhancement patterns:

1. **Server-rendered baseline** — PHP compositor renders complete HTML
2. **Client enhancement** — JavaScript can relocate content between slots
3. **Framework agnostic** — Works with vanilla JS, Alpine, Vue, or React

```html
<!-- Server-rendered -->
<main data-slot="C">
  <article>Content here</article>
</main>

<!-- JavaScript enhancement -->
<script>
  // Move content to a different region based on viewport
  const content = document.querySelector('[data-slot="C"] article');
  if (viewport.isMobile) {
    document.querySelector('[data-slot="L"]').appendChild(content);
  }
</script>
```

### Alpine.js integration

The compositor's data attributes work naturally with Alpine.js:

```html
<div class="hlcrf-layout" x-data="{ activeRegion: 'C' }">
  <aside data-slot="L" x-show="activeRegion === 'L' || $screen('lg')">
    <!-- Sidebar content -->
  </aside>
  <main data-slot="C" @click="activeRegion = 'C'">
    <!-- Main content -->
  </main>
</div>
```

### Livewire component boundaries

HLCRF regions can serve as Livewire component boundaries:

```php
$layout = Layout::make('HLCF')
    ->h(livewire('header-nav'))
    ->l(livewire('sidebar-menu'))
    ->c(livewire('main-content'))
    ->f(livewire('footer-links'));
```

Each region becomes an independent Livewire component with its own state and lifecycle.

### Path-based event targeting

The hierarchical path system enables precise event targeting:

```javascript
// Listen for events in a specific nested region
document.querySelector('[data-slot="L-C"]')
  .addEventListener('block:added', (e) => {
    console.log(`Block added to left sidebar content: ${e.detail.blockId}`);
  });

// Broadcast to all blocks in a path
function notifyRegion(path, event) {
  document.querySelectorAll(`[data-slot^="${path}"]`)
    .forEach(el => el.dispatchEvent(new CustomEvent(event)));
}
```

### Server-side rendering integration

The compositor works with SSR frameworks:

```php
// Inertia.js integration
return Inertia::render('Dashboard', [
    'layout' => [
        'variant' => 'HLCF',
        'regions' => [
            'H' => $headerData,
            'L' => $sidebarData,
            'C' => $contentData,
            'F' => $footerData,
        ],
    ],
]);
```

The frontend receives structured data and renders using the same HLCRF conventions.

---

## Related files

- `app/Core/Front/Components/Layout.php` — Core compositor class
- `app/Core/Front/Components/View/Blade/layout.blade.php` — Blade component variant
- `app/Mod/Bio/Services/HlcrfRenderer.php` — Bio page rendering service
- `app/Mod/Bio/Migrations/2026_01_14_100000_add_hlcrf_support.php` — Database schema

---

## Version history

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-15 | Initial RFC |
||||
423
RFC-002-EVENT-DRIVEN-MODULES.md
Normal file
423
RFC-002-EVENT-DRIVEN-MODULES.md
Normal file
|
|
@ -0,0 +1,423 @@
|
|||
# RFC: Event-Driven Module Loading

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

The Event-Driven Module Loading system enables lazy instantiation of modules based on lifecycle events. Instead of eagerly booting all modules at application startup, modules declare interest in specific events via static `$listens` arrays. The module is only instantiated when its events fire.

This provides:
- Faster boot times (only load what's needed)
- Context-aware loading (CLI gets CLI modules, web gets web modules)
- Clean separation between infrastructure and modules
- Testable event-based architecture

---

## Core Components

### Architecture Overview

```
┌─────────────────────────────────────────────────────────────────┐
│                     Application Bootstrap                       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│  LifecycleEventProvider                                         │
│    └── ModuleRegistry                                           │
│          └── ModuleScanner (reads $listens via reflection)      │
│          └── LazyModuleListener (defers instantiation)          │
│                                                                 │
├─────────────────────────────────────────────────────────────────┤
│                     Frontages (fire events)                     │
├─────────────────────────────────────────────────────────────────┤
│  Front/Web/Boot ──────────▶ WebRoutesRegistering                │
│  Front/Admin/Boot ────────▶ AdminPanelBooting                   │
│  Front/Api/Boot ──────────▶ ApiRoutesRegistering                │
│  Front/Cli/Boot ──────────▶ ConsoleBooting                      │
│  Mcp/Server ──────────────▶ McpToolsRegistering                 │
│  Queue Worker ────────────▶ QueueWorkerBooting                  │
│                                                                 │
└─────────────────────────────────────────────────────────────────┘
```

### ModuleScanner

Reads Boot.php files and extracts `$listens` arrays via reflection without instantiating the modules.

```php
namespace Core;

class ModuleScanner
{
    public function scan(array $paths): array
    {
        // Returns: [EventClass => [ModuleClass => 'methodName']]
    }

    public function extractListens(string $class): array
    {
        // Uses ReflectionClass to read static $listens property
        // Returns empty array if missing/invalid
    }
}
```
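
A framework-free sketch of what the reflection step could look like (the standalone function and the `DemoModule` class here are illustrative, not the actual `ModuleScanner` implementation):

```php
<?php
// Hypothetical sketch: read a public static $listens array via reflection
// without instantiating the class. Returns [] when the property is
// missing, non-public, non-static, or not an array.
function extractListens(string $class): array
{
    $ref = new ReflectionClass($class);

    if (! $ref->hasProperty('listens')) {
        return [];
    }

    $prop = $ref->getProperty('listens');

    if (! $prop->isPublic() || ! $prop->isStatic()) {
        return [];
    }

    $value = $prop->getValue();

    return is_array($value) ? $value : [];
}

class DemoModule
{
    public static array $listens = ['SomeEvent' => 'handleSomeEvent'];
}

// extractListens(DemoModule::class) === ['SomeEvent' => 'handleSomeEvent']
```

The important property is that no constructor and no `boot()` logic runs during scanning; reflection reads the static array straight off the class definition.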

### ModuleRegistry

Wires up lazy listeners for all scanned modules.

```php
namespace Core;

class ModuleRegistry
{
    public function register(array $paths): void
    {
        $mappings = $this->scanner->scan($paths);

        foreach ($mappings as $event => $listeners) {
            foreach ($listeners as $moduleClass => $method) {
                Event::listen($event, new LazyModuleListener($moduleClass, $method));
            }
        }
    }
}
```

### LazyModuleListener

Defers module instantiation until the event fires.

```php
namespace Core;

class LazyModuleListener
{
    public function __invoke(object $event): void
    {
        $module = $this->resolveModule();
        $module->{$this->method}($event);
    }

    private function resolveModule(): object
    {
        // Handles ServiceProvider subclasses correctly
        if (is_subclass_of($this->moduleClass, ServiceProvider::class)) {
            return app()->resolveProvider($this->moduleClass);
        }

        return app()->make($this->moduleClass);
    }
}
```

### LifecycleEvent Base Class

Events collect requests from modules without immediately applying them.

```php
namespace Core\Events;

abstract class LifecycleEvent
{
    public function routes(callable $callback): void;
    public function views(string $namespace, string $path): void;
    public function livewire(string $alias, string $class): void;
    public function command(string $class): void;
    public function middleware(string $alias, string $class): void;
    public function navigation(array $item): void;
    public function translations(string $namespace, string $path): void;
    public function policy(string $model, string $policy): void;

    // Getters for processing
    public function routeRequests(): array;
    public function viewRequests(): array;
    // etc.
}
```

---

## Available Events

| Event | Context | Fired By |
|-------|---------|----------|
| `AdminPanelBooting` | Admin panel requests | `Front\Admin\Boot` |
| `WebRoutesRegistering` | Web requests | `Front\Web\Boot` |
| `ApiRoutesRegistering` | API requests | `Front\Api\Boot` |
| `ConsoleBooting` | CLI commands | `Front\Cli\Boot` |
| `McpToolsRegistering` | MCP server | Mcp module |
| `QueueWorkerBooting` | Queue workers | `LifecycleEventProvider` |
| `FrameworkBooted` | All contexts (post-boot) | `LifecycleEventProvider` |
| `MediaRequested` | Media serving | Core media handler |
| `SearchRequested` | Search operations | Core search handler |
| `MailSending` | Mail dispatch | Core mail handler |

---

## Module Implementation

### Declaring Listeners

Modules declare interest in events via the static `$listens` property:

```php
namespace Mod\Commerce;

use Core\Events\AdminPanelBooting;
use Core\Events\ConsoleBooting;
use Core\Events\WebRoutesRegistering;

class Boot extends ServiceProvider
{
    public static array $listens = [
        AdminPanelBooting::class => 'onAdminPanel',
        WebRoutesRegistering::class => 'onWebRoutes',
        ConsoleBooting::class => 'onConsole',
    ];

    public function onAdminPanel(AdminPanelBooting $event): void
    {
        $event->views('commerce', __DIR__.'/View/Blade');
        $event->livewire('commerce.checkout', Components\Checkout::class);
        $event->routes(fn () => require __DIR__.'/Routes/admin.php');
    }

    public function onWebRoutes(WebRoutesRegistering $event): void
    {
        $event->views('commerce', __DIR__.'/View/Blade');
        $event->routes(fn () => require __DIR__.'/Routes/web.php');
    }

    public function onConsole(ConsoleBooting $event): void
    {
        $event->command(Commands\ProcessPayments::class);
        $event->command(Commands\SyncSubscriptions::class);
    }
}
```

### What Stays in boot()

Some registrations must remain in the traditional `boot()` method:

| Registration | Reason |
|--------------|--------|
| `loadMigrationsFrom()` | Needed early for `artisan migrate` |
| `AdminMenuRegistry->register()` | Uses interface pattern (AdminMenuProvider) |
| Laravel event listeners | Standard Laravel events, not lifecycle events |

```php
public function boot(): void
{
    $this->loadMigrationsFrom(__DIR__.'/Migrations');

    // Interface-based registration
    app(AdminMenuRegistry::class)->register($this);

    // Standard Laravel events (not lifecycle events)
    Event::listen(OrderPlaced::class, SendOrderConfirmation::class);
}
```

---

## Request Processing

Frontages fire events and process collected requests:

```php
// In Front/Web/Boot
public static function fireWebRoutes(): void
{
    $event = new WebRoutesRegistering;
    event($event);

    // Process view namespaces
    foreach ($event->viewRequests() as [$namespace, $path]) {
        view()->addNamespace($namespace, $path);
    }

    // Process Livewire components
    foreach ($event->livewireRequests() as [$alias, $class]) {
        Livewire::component($alias, $class);
    }

    // Process routes with web middleware
    foreach ($event->routeRequests() as $callback) {
        Route::middleware('web')->group($callback);
    }
}
```

This "collect then process" pattern ensures:
1. Modules cannot directly mutate infrastructure
2. Core validates and controls registration order
3. Easy to add cross-cutting concerns (logging, validation)
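
The collect-then-process pattern can be sketched framework-free (the `CollectingEvent` class and paths below are illustrative, not the real `Core\Events` classes):

```php
<?php
// Hypothetical sketch of collect-then-process: modules record requests
// on the event object; the frontage decides later how to apply them.
class CollectingEvent
{
    private array $viewRequests = [];

    public function views(string $namespace, string $path): void
    {
        // Collect only - nothing is registered yet.
        $this->viewRequests[] = [$namespace, $path];
    }

    public function viewRequests(): array
    {
        return $this->viewRequests;
    }
}

$event = new CollectingEvent();
$event->views('commerce', '/modules/commerce/views');
$event->views('blog', '/modules/blog/views');

// The frontage processes the collected requests in one controlled pass,
// where validation or logging could be inserted around each request.
$registered = [];
foreach ($event->viewRequests() as [$namespace, $path]) {
    $registered[$namespace] = $path;
}
// $registered === ['commerce' => '/modules/commerce/views', 'blog' => '/modules/blog/views']
```

Because modules only ever touch the event object, the core keeps a single choke point where ordering and cross-cutting concerns are enforced.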

---

## Testing

### Unit Tests

Test ModuleScanner reflection without Laravel app:

```php
it('extracts $listens from a class with public static property', function () {
    $scanner = new ModuleScanner;
    $listens = $scanner->extractListens(ModuleWithListens::class);

    expect($listens)->toBe([
        'SomeEvent' => 'handleSomeEvent',
    ]);
});

it('returns empty array when $listens is not public', function () {
    $scanner = new ModuleScanner;
    $listens = $scanner->extractListens(ModuleWithPrivateListens::class);

    expect($listens)->toBe([]);
});
```

### Integration Tests

Test real module scanning with Laravel app:

```php
it('scans the Mod directory and finds modules', function () {
    $scanner = new ModuleScanner;
    $result = $scanner->scan([app_path('Mod')]);

    expect($result)->toHaveKey(AdminPanelBooting::class);
    expect($result)->toHaveKey(WebRoutesRegistering::class);
});
```

---

## Performance

### Lazy Loading Benefits

| Context | Modules Loaded | Without Lazy Loading |
|---------|----------------|----------------------|
| Web request | 6-8 modules | All 16+ modules |
| Admin request | 10-12 modules | All 16+ modules |
| CLI command | 4-6 modules | All 16+ modules |
| API request | 3-5 modules | All 16+ modules |

### Memory Impact

Modules not needed for the current context are never instantiated:
- No class autoloading
- No service binding
- No config merging
- No route registration
---

## Files

### Core Infrastructure

| File | Purpose |
|------|---------|
| `Core/ModuleScanner.php` | Scans Boot.php files for `$listens` |
| `Core/ModuleRegistry.php` | Wires up lazy listeners |
| `Core/LazyModuleListener.php` | Defers module instantiation |
| `Core/LifecycleEventProvider.php` | Orchestrates scanning and events |
| `Core/Events/LifecycleEvent.php` | Base class for all lifecycle events |

### Events

| File | Purpose |
|------|---------|
| `Core/Events/AdminPanelBooting.php` | Admin panel context |
| `Core/Events/WebRoutesRegistering.php` | Web context |
| `Core/Events/ApiRoutesRegistering.php` | API context |
| `Core/Events/ConsoleBooting.php` | CLI context |
| `Core/Events/McpToolsRegistering.php` | MCP server context |
| `Core/Events/QueueWorkerBooting.php` | Queue worker context |
| `Core/Events/FrameworkBooted.php` | Post-boot event |

### Frontages

| File | Purpose |
|------|---------|
| `Core/Front/Web/Boot.php` | Fires WebRoutesRegistering |
| `Core/Front/Admin/Boot.php` | Fires AdminPanelBooting |
| `Core/Front/Api/Boot.php` | Fires ApiRoutesRegistering |
| `Core/Front/Cli/Boot.php` | Fires ConsoleBooting |
---

## Migration Guide

### Before (Legacy)

```php
class Boot extends ServiceProvider
{
    public function boot(): void
    {
        $this->registerRoutes();
        $this->registerViews();
        $this->registerLivewireComponents();
        $this->registerCommands();
    }

    private function registerRoutes(): void
    {
        Route::middleware('web')->group(__DIR__.'/Routes/web.php');
    }

    private function registerViews(): void
    {
        $this->loadViewsFrom(__DIR__.'/View/Blade', 'mymodule');
    }
    // etc.
}
```

### After (Event-Driven)

```php
class Boot extends ServiceProvider
{
    public static array $listens = [
        WebRoutesRegistering::class => 'onWebRoutes',
        ConsoleBooting::class => 'onConsole',
    ];

    public function boot(): void
    {
        $this->loadMigrationsFrom(__DIR__.'/Migrations');
    }

    public function onWebRoutes(WebRoutesRegistering $event): void
    {
        $event->views('mymodule', __DIR__.'/View/Blade');
        $event->routes(fn () => require __DIR__.'/Routes/web.php');
    }

    public function onConsole(ConsoleBooting $event): void
    {
        $event->command(Commands\MyCommand::class);
    }
}
```

---

## Future Considerations

1. **Event Caching**: Cache scanned mappings in production for faster boot
2. **Module Dependencies**: Declare dependencies between modules for ordered loading
3. **Hot Module Reloading**: In development, detect changes and re-scan
4. **Event Priorities**: Allow modules to specify listener priority

484
RFC-003-CONFIG-CHANNELS.md
Normal file
@ -0,0 +1,484 @@
# RFC: Config Channels

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

Config Channels add a voice/context dimension to configuration resolution. Where scopes (workspace, org, system) determine *who* a setting applies to, channels determine *where* or *how* it applies.

A workspace might have one Twitter handle but different posting styles for different contexts. Channels let you define `social.posting.style = "casual"` for Instagram while keeping `social.posting.style = "professional"` for LinkedIn—same workspace, same key, different channel.

The system resolves values through a two-dimensional matrix: scope chain (workspace → org → system) crossed with channel chain (specific → parent → null). Most specific wins, unless a parent declares FINAL.

---

## Motivation

Traditional configuration systems work on a single dimension: scope hierarchy. You set a value at system level, override it at workspace level. Simple.

But some configuration varies by context within a single workspace:

- **Technical channels:** web vs API vs mobile (different rate limits, caching, auth)
- **Social channels:** Instagram vs Twitter vs TikTok (different post lengths, hashtags, tone)
- **Voice channels:** formal vs casual vs support (different language, greeting styles)

Without channels, you either:
1. Create separate config keys for each context (`twitter.style`, `instagram.style`, etc.)
2. Store JSON blobs and parse them at runtime
3. Build custom logic for each use case

Channels generalise this pattern. One key, multiple channel-specific values, clean resolution.
---

## Core Concepts

### Channel

A named context for configuration. Channels have:

| Property | Purpose |
|----------|---------|
| `code` | Unique identifier (e.g., `instagram`, `api`, `support`) |
| `name` | Human-readable label |
| `parent_id` | Optional parent for inheritance |
| `workspace_id` | Owner workspace (null = system channel) |
| `metadata` | Arbitrary JSON for channel-specific data |

### Channel Inheritance

Channels form inheritance trees. A specific channel inherits from its parent:

```
         ┌─────────┐
         │  null   │  ← All channels (fallback)
         └────┬────┘
              │
         ┌────┴────┐
         │ social  │  ← Social media defaults
         └────┬────┘
    ┌─────────┼─────────┐
    │         │         │
┌────┴────┐ ┌──┴───┐ ┌───┴───┐
│instagram│ │twitter│ │tiktok │
└─────────┘ └───────┘ └───────┘
```

When resolving `social.posting.style` for the `instagram` channel:
1. Check instagram-specific value
2. Check social (parent) value
3. Check null (all channels) value
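
The chain walk above can be sketched in isolation (assumption: parents are looked up in an id-indexed array rather than through Eloquent relations, and the `channelChain` function name is illustrative):

```php
<?php
// Hypothetical sketch: build the channel chain (most specific first),
// ending with null for the "all channels" fallback.
function channelChain(array $channelsById, int $id): array
{
    $chain = [];
    $current = $channelsById[$id] ?? null;

    while ($current !== null) {
        $chain[] = $current['code'];
        $current = $current['parent_id'] !== null
            ? $channelsById[$current['parent_id']] ?? null
            : null;
    }

    $chain[] = null; // Fallback: values with no channel apply everywhere.

    return $chain;
}

$channelsById = [
    1 => ['code' => 'social',    'parent_id' => null],
    2 => ['code' => 'instagram', 'parent_id' => 1],
];

// channelChain($channelsById, 2) === ['instagram', 'social', null]
```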

### System vs Workspace Channels

**System channels** (`workspace_id = null`) are available to all workspaces. Platform-level contexts like `web`, `api`, `mobile`.

**Workspace channels** are private to a workspace. Custom contexts like `vip_support`, `internal_comms`, or workspace-specific social accounts.

When looking up a channel by code, workspace channels take precedence over system channels with the same code. This allows workspaces to override system channel behaviour.
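
A minimal sketch of that precedence rule, assuming channels are plain rows with `code` and `workspace_id` fields (the function mirrors the intent of `Channel::byCode()` but the array-based lookup is purely illustrative):

```php
<?php
// Hypothetical sketch: prefer a workspace-owned channel over a system
// channel (workspace_id === null) that shares the same code.
function byCode(array $channels, string $code, ?int $workspaceId): ?array
{
    $system = null;

    foreach ($channels as $channel) {
        if ($channel['code'] !== $code) {
            continue;
        }
        if ($workspaceId !== null && $channel['workspace_id'] === $workspaceId) {
            return $channel; // Workspace-specific match wins immediately.
        }
        if ($channel['workspace_id'] === null) {
            $system = $channel; // Remember the system fallback.
        }
    }

    return $system;
}

$channels = [
    ['code' => 'premium', 'workspace_id' => null, 'name' => 'Premium Features'],
    ['code' => 'premium', 'workspace_id' => 42,   'name' => 'VIP Premium'],
];

// byCode($channels, 'premium', 42)['name'] === 'VIP Premium'
// byCode($channels, 'premium', 7)['name']  === 'Premium Features'
```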

### Resolution Matrix

Config resolution operates on a matrix of scope × channel:

```
                        ┌──────────────────────────────────────────┐
                        │              Channel Chain               │
                        │        instagram → social → null         │
                        └──────────────────────────────────────────┘
┌───────────────────┐   ┌──────────┬──────────┬──────────┐
│                   │   │          │          │          │
│    Scope Chain    │   │ instagram│  social  │   null   │
│                   │   │          │          │          │
├───────────────────┼───┼──────────┼──────────┼──────────┤
│ workspace         │   │    1     │    2     │    3     │
├───────────────────┼───┼──────────┼──────────┼──────────┤
│ org               │   │    4     │    5     │    6     │
├───────────────────┼───┼──────────┼──────────┼──────────┤
│ system            │   │    7     │    8     │    9     │
└───────────────────┴───┴──────────┴──────────┴──────────┘

Resolution order: 1 → 2 → 3 → 4 → 5 → 6 → 7 → 8 → 9
(Most specific scope + most specific channel first)
```

The first non-null value wins—unless a less-specific combination has `locked = true` (FINAL), which blocks all more-specific values.
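
The walk order and the lock override can be sketched as a plain function over an in-memory matrix (assumption: values are keyed by `"scope:channel"` strings; the real resolver works against `config_values`/`config_resolved`):

```php
<?php
// Hypothetical sketch of matrix resolution. $values maps "scope:channel"
// to ['value' => ..., 'locked' => bool]. Locks are checked from least
// specific to most specific; any lock short-circuits the normal walk.
function resolveMatrix(array $values, array $scopes, array $channels): mixed
{
    // 1. A locked value at a less specific cell always wins.
    foreach (array_reverse($scopes) as $scope) {
        foreach (array_reverse($channels) as $channel) {
            $cell = $values["$scope:$channel"] ?? null;
            if ($cell !== null && $cell['locked']) {
                return $cell['value'];
            }
        }
    }

    // 2. Otherwise: most specific scope first, then most specific channel.
    foreach ($scopes as $scope) {
        foreach ($channels as $channel) {
            $cell = $values["$scope:$channel"] ?? null;
            if ($cell !== null) {
                return $cell['value'];
            }
        }
    }

    return null;
}

$scopes   = ['workspace', 'org', 'system'];
$channels = ['instagram', 'social', 'null'];

$values = [
    'system:null'         => ['value' => 'professional', 'locked' => false],
    'workspace:instagram' => ['value' => 'casual',       'locked' => false],
];
// Most specific cell (workspace + instagram) wins:
// resolveMatrix($values, $scopes, $channels) === 'casual'

$values['system:null']['locked'] = true;
// A FINAL value at system + null now blocks everything above it:
// resolveMatrix($values, $scopes, $channels) === 'professional'
```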

### FINAL (Locked Values)

A value marked as `locked` cannot be overridden by more specific scopes or channels. This implements the FINAL pattern from Java/OOP:

```php
// System admin sets rate limit and locks it
$config->set('api.rate_limit', 1000, $systemProfile, locked: true, channel: 'api');

// Workspace cannot override - locked value always wins
$config->set('api.rate_limit', 5000, $workspaceProfile, channel: 'api');
// ↑ This value exists but is never returned
```

Lock checks traverse from least specific (system + null channel) to most specific. Any lock encountered blocks all more-specific values.

---

## How It Works

### Read Path

```
┌─────────────────────────────────────────────────────────────────┐
│  $config->get('social.posting.style')                           │
└─────────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  1. Hash lookup (O(1))                                          │
│     ConfigResolver::$values['social.posting.style']             │
│     → Found? Return immediately                                 │
└─────────────────────────────────────────────────────────────────┘
                              │ Miss
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  2. Lazy load scope (1 query)                                   │
│     Load all resolved values for workspace+channel into hash    │
│     → Check hash again                                          │
└─────────────────────────────────────────────────────────────────┘
                              │ Still miss
                              ▼
┌─────────────────────────────────────────────────────────────────┐
│  3. Lazy prime (N queries)                                      │
│     Build profile chain (workspace → org → system)              │
│     Build channel chain (specific → parent → null)              │
│     Batch load all values for key                               │
│     Walk resolution matrix until value found                    │
│     Store in hash + database                                    │
└─────────────────────────────────────────────────────────────────┘
```

Most reads hit step 1 (hash lookup). The heavy resolution only runs once per key per scope+channel combination, then gets cached.
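
The tiering can be compressed into a small self-contained sketch (assumption: the `TieredReader` class is illustrative, with arrays standing in for the resolved store and the full resolution step):

```php
<?php
// Hypothetical sketch of the tiered read path: an in-request hash in
// front of a pre-resolved store, falling back to full resolution.
class TieredReader
{
    private array $hash = [];          // Tier 1: per-request cache
    private array $resolved = [];      // Tier 2: pre-resolved store
    private int $fullResolutions = 0;  // Counts tier-3 executions

    public function __construct(private \Closure $resolveFully) {}

    public function get(string $key): mixed
    {
        if (array_key_exists($key, $this->hash)) {
            return $this->hash[$key];                         // Tier 1 hit
        }
        if (array_key_exists($key, $this->resolved)) {
            return $this->hash[$key] = $this->resolved[$key]; // Tier 2 hit
        }
        $this->fullResolutions++;
        $value = ($this->resolveFully)($key);                 // Tier 3
        $this->resolved[$key] = $value;                       // Persist result
        return $this->hash[$key] = $value;
    }

    public function fullResolutions(): int
    {
        return $this->fullResolutions;
    }
}

$reader = new TieredReader(fn (string $key) => 'casual');
$reader->get('social.posting.style');
$reader->get('social.posting.style');
// Full resolution ran only once; the second read hit the hash.
// $reader->fullResolutions() === 1
```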

### Write Path

```php
$config->set(
    keyCode: 'social.posting.style',
    value: 'casual',
    profile: $workspaceProfile,
    locked: false,
    channel: 'instagram',
);
```

1. Update `config_values` (source of truth)
2. Clear affected entries from hash and `config_resolved`
3. Re-prime the key for affected scope+channel
4. Fire `ConfigChanged` event

### Prime Operation

The prime operation pre-computes resolved values:

```php
// Prime entire workspace
$config->prime($workspace, channel: 'instagram');

// Prime all workspaces (scheduled job)
$config->primeAll();
```

This runs full matrix resolution for every key and stores results in `config_resolved`. Subsequent reads become single indexed lookups.
---

## API Reference

### Channel Model

**Namespace:** `Core\Config\Models\Channel`

#### Properties

| Property | Type | Description |
|----------|------|-------------|
| `code` | string | Unique identifier |
| `name` | string | Human-readable label |
| `parent_id` | int\|null | Parent channel for inheritance |
| `workspace_id` | int\|null | Owner (null = system) |
| `metadata` | array\|null | Arbitrary JSON data |

#### Methods

```php
// Find by code (prefers workspace-specific over system)
Channel::byCode('instagram', $workspaceId): ?Channel

// Get inheritance chain (most specific first)
$channel->inheritanceChain(): Collection

// Get all codes in chain
$channel->inheritanceCodes(): array // ['instagram', 'social']

// Check inheritance
$channel->inheritsFrom('social'): bool

// Is system channel?
$channel->isSystem(): bool

// Get metadata value
$channel->meta('platform_id'): mixed

// Ensure channel exists
Channel::ensure(
    code: 'instagram',
    name: 'Instagram',
    parentCode: 'social',
    workspaceId: null,
    metadata: ['platform_id' => 'ig'],
): Channel
```

### ConfigService with Channels

```php
$config = app(ConfigService::class);

// Set context (typically in middleware)
$config->setContext($workspace, $channel);

// Get value using context
$value = $config->get('social.posting.style');

// Explicit channel override
$result = $config->resolve('social.posting.style', $workspace, 'instagram');

// Set channel-specific value
$config->set(
    keyCode: 'social.posting.style',
    value: 'casual',
    profile: $profile,
    locked: false,
    channel: 'instagram',
);

// Lock a channel-specific value
$config->lock('social.posting.style', $profile, 'instagram');

// Prime for specific channel
$config->prime($workspace, 'instagram');
```

### ConfigValue with Channels

```php
// Find value for profile + key + channel
ConfigValue::findValue($profileId, $keyId, $channelId): ?ConfigValue

// Set value with channel
ConfigValue::setValue(
    profileId: $profileId,
    keyId: $keyId,
    value: 'casual',
    locked: false,
    inheritedFrom: null,
    channelId: $channelId,
): ConfigValue

// Get all values for key across profiles and channels
ConfigValue::forKeyInProfiles($keyId, $profileIds, $channelIds): Collection
```

---

## Database Schema

### config_channels

```sql
CREATE TABLE config_channels (
    id BIGINT PRIMARY KEY,
    code VARCHAR(255),
    name VARCHAR(255),
    parent_id BIGINT REFERENCES config_channels(id),
    workspace_id BIGINT REFERENCES workspaces(id),
    metadata JSON,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,

    UNIQUE (code, workspace_id),
    INDEX (parent_id)
);
```

### config_values (extended)

```sql
ALTER TABLE config_values ADD COLUMN
    channel_id BIGINT REFERENCES config_channels(id);

-- Updated unique constraint
UNIQUE (profile_id, key_id, channel_id)
```

### config_resolved (extended)

```sql
-- Channel dimension in resolved cache
channel_id BIGINT,
source_channel_id BIGINT,

-- Composite lookup
INDEX (workspace_id, channel_id, key_code)
```

---

## Examples

### Multi-platform social posting

```php
// System defaults (all channels)
$config->set('social.posting.max_length', 280, $systemProfile);
$config->set('social.posting.style', 'professional', $systemProfile);

// Channel-specific overrides
$config->set('social.posting.max_length', 2200, $systemProfile, channel: 'instagram');
$config->set('social.posting.max_length', 100000, $systemProfile, channel: 'linkedin');
$config->set('social.posting.style', 'casual', $workspaceProfile, channel: 'tiktok');

// Resolution
$config->resolve('social.posting.max_length', $workspace, 'twitter'); // 280 (default)
$config->resolve('social.posting.max_length', $workspace, 'instagram'); // 2200
$config->resolve('social.posting.style', $workspace, 'tiktok'); // 'casual'
```

### API rate limiting with FINAL

```php
// System admin sets hard limit for API channel
$config->set('api.rate_limit.requests', 1000, $systemProfile, locked: true, channel: 'api');
$config->set('api.rate_limit.window', 60, $systemProfile, locked: true, channel: 'api');

// Workspaces cannot exceed this
$config->set('api.rate_limit.requests', 5000, $workspaceProfile, channel: 'api');
// ↑ Stored but never returned - locked value wins

$config->resolve('api.rate_limit.requests', $workspace, 'api'); // Always 1000
```

### Voice/tone channels

```php
// Define voice channels
Channel::ensure('support', 'Customer Support', parentCode: null);
Channel::ensure('vi', 'Virtual Intelligence', parentCode: null);
Channel::ensure('formal', 'Formal Communications', parentCode: null);

// Configure per voice
$config->set('comms.greeting', 'Hello', $workspaceProfile, channel: null);
$config->set('comms.greeting', 'Hey there!', $workspaceProfile, channel: 'support');
$config->set('comms.greeting', 'Greetings', $workspaceProfile, channel: 'formal');
$config->set('comms.greeting', 'Hi, I\'m your AI assistant', $workspaceProfile, channel: 'vi');
```

### Channel inheritance

```php
// Create hierarchy
Channel::ensure('social', 'Social Media');
Channel::ensure('instagram', 'Instagram', parentCode: 'social');
Channel::ensure('instagram_stories', 'Instagram Stories', parentCode: 'instagram');

// Set at parent level
$config->set('social.hashtags.enabled', true, $profile, channel: 'social');
$config->set('social.hashtags.max', 30, $profile, channel: 'instagram');

// Child inherits from parent
$config->resolve('social.hashtags.enabled', $workspace, 'instagram_stories');
// → true (inherited from 'social')

$config->resolve('social.hashtags.max', $workspace, 'instagram_stories');
// → 30 (inherited from 'instagram')
```

### Workspace-specific channel override

```php
// System channel
Channel::ensure('premium', 'Premium Features', workspaceId: null);

// Workspace overrides system channel
Channel::ensure('premium', 'VIP Premium', workspaceId: $workspace->id, metadata: [
    'features' => ['priority_support', 'custom_branding'],
]);

// Lookup prefers workspace channel
$channel = Channel::byCode('premium', $workspace->id);
// → Workspace's 'VIP Premium' channel, not system 'Premium Features'
```

---

## Implementation Notes

### Performance Considerations

The channel system adds a dimension to resolution, but performance impact is minimal:

1. **Read path unchanged** — Most reads hit the hash (O(1))
2. **Batch loading** — Resolution loads all channel values in one query
3. **Cached resolution** — `config_resolved` stores pre-computed values per workspace+channel
4. **Lazy priming** — Only computes on first access, not on every request
### Cycle Detection

Channel inheritance includes cycle detection to handle data corruption:

```php
public function inheritanceChain(): Collection
{
    $current = $this;
    $seen = [$this->id => true];

    while ($current->parent_id !== null) {
        if (isset($seen[$current->parent_id])) {
            Log::error('Circular reference in channel inheritance');
            break;
        }
        // ...
    }
}
```

### MariaDB NULL Handling

The `config_resolved` table uses `0` instead of `NULL` for system scope and all-channels:

```php
// MariaDB composite unique constraints don't handle NULL well
// workspace_id = 0 means system scope
// channel_id = 0 means all channels
```

This is an implementation detail—the API accepts and returns `null` as expected.

---

## Related Files

- `app/Core/Config/Models/Channel.php` — Channel model
- `app/Core/Config/Models/ConfigValue.php` — Value storage with channel support
- `app/Core/Config/ConfigResolver.php` — Resolution engine
- `app/Core/Config/ConfigService.php` — Main API
- `app/Core/Config/Migrations/2026_01_09_100001_add_config_channels.php` — Schema

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-15 | Initial RFC |

512
RFC-004-ENTITLEMENTS.md
Normal file
@ -0,0 +1,512 @@
# RFC: Entitlements and Feature System

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

The Entitlement System controls feature access, usage limits, and tier gating across all Host services. It answers one question: "Can this workspace do this action?"

Workspaces subscribe to **Packages** that bundle **Features**. Features are either boolean flags (access gates) or numeric limits (usage caps). **Boosts** provide temporary or permanent additions to base limits. Usage is tracked, cached, and enforced in real time.

The system integrates with Commerce for subscription lifecycle and exposes an API for cross-service entitlement checks.

---

## Core Model

### Entity Relationships

```
Workspace ──┬── WorkspacePackage ── Package ── Features
            ├── Boosts (temporary limit additions)
            ├── UsageRecords (consumption tracking)
            └── EntitlementLogs (audit trail)
```

### Workspace

The tenant unit. All entitlement checks happen against a workspace, not a user. Users belong to workspaces; workspaces own entitlements.

```php
// Check if workspace can use a feature
$workspace->can('social.accounts', quantity: 3);

// Record usage
$workspace->recordUsage('ai.credits', quantity: 10);

// Get usage summary
$workspace->getUsageSummary();
```

### Package

A bundle of features with defined limits. Two types:

| Type | Behaviour |
|------|-----------|
| **Base Package** | Only one active per workspace. Upgrading replaces the previous base package. |
| **Add-on Package** | Stackable. Multiple can be active simultaneously. Limits accumulate. |

**Database:** `entitlement_packages`

```php
// Package fields
'code'                      // Unique identifier (e.g., 'social-creator')
'name'                      // Display name
'is_base_package'           // true = only one allowed
'is_stackable'              // true = limits add to base
'monthly_price'             // Pricing
'yearly_price'
'stripe_price_id_monthly'
'stripe_price_id_yearly'
```

### Feature

A capability or limit that can be granted. Three types:

| Type | Behaviour | Example |
|------|-----------|---------|
| **Boolean** | On/off access gate | `tier.apollo`, `host.social` |
| **Limit** | Numeric cap on usage | `social.accounts` (5), `ai.credits` (100) |
| **Unlimited** | No cap (special limit value) | Agency tier social posts |

**Database:** `entitlement_features`

```php
// Feature fields
'code'                  // Unique identifier (e.g., 'social.accounts')
'name'                  // Display name
'type'                  // boolean, limit, unlimited
'reset_type'            // none, monthly, rolling
'rolling_window_days'   // For rolling reset (e.g., 30)
'parent_feature_id'     // For global pools (see Hierarchical Features below)
```

#### Reset Types

| Reset Type | Behaviour |
|------------|-----------|
| **None** | Usage accumulates forever (e.g., account limits) |
| **Monthly** | Resets at billing cycle start |
| **Rolling** | Rolling window (e.g., last 30 days) |

#### Hierarchical Features (Global Pools)

Child features share a parent's limit pool. Used for storage allocation across services:

```
host.storage.total (1000 MB)
├── host.cdn (draws from parent pool)
├── bio.cdn (draws from parent pool)
└── social.cdn (draws from parent pool)
```

### WorkspacePackage

The pivot linking workspaces to packages. Tracks subscription state.

**Database:** `entitlement_workspace_packages`

```php
// Status constants
STATUS_ACTIVE       // Package in effect
STATUS_SUSPENDED    // Temporarily disabled (e.g., payment failure)
STATUS_CANCELLED    // Marked for removal
STATUS_EXPIRED      // Past expiry date

// Key fields
'starts_at'              // When package becomes active
'expires_at'             // When package ends
'billing_cycle_anchor'   // For monthly reset calculations
'blesta_service_id'      // External billing system reference
```

### Boost

Temporary or permanent additions to feature limits. Use cases:

- One-time credit top-ups
- Promotional extras
- Cycle-bound bonuses that expire at billing renewal

**Database:** `entitlement_boosts`

```php
// Boost types
BOOST_TYPE_ADD_LIMIT   // Add to existing limit
BOOST_TYPE_ENABLE      // Enable a boolean feature
BOOST_TYPE_UNLIMITED   // Grant unlimited access

// Duration types
DURATION_CYCLE_BOUND   // Expires at billing cycle end
DURATION_DURATION      // Expires after set time
DURATION_PERMANENT     // Never expires

// Key fields
'limit_value'          // Amount to add
'consumed_quantity'    // How much has been used
'status'               // active, exhausted, expired, cancelled
```

---

## How Checking Works

### The `can()` Method

All access checks flow through `EntitlementService::can()`.

```php
public function can(Workspace $workspace, string $featureCode, int $quantity = 1): EntitlementResult
```

**Algorithm:**

```
1. Look up feature by code
2. If feature has parent, use parent's code for pool lookup
3. Sum limits from all active packages + boosts
4. If any source grants unlimited → return allowed (unlimited)
5. Get current usage (respecting reset type)
6. If usage + quantity > limit → deny
7. Otherwise → allow
```
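
The steps above can be sketched in plain PHP over arrays. This is illustrative only; the real service resolves features, packages, and boosts from the database, and the field names below (`parent_code`, `unlimited`, `limit`) are assumptions, not the actual model attributes:

```php
<?php
// Illustrative sketch of the can() algorithm over plain arrays.

function canUse(array $feature, array $grants, int $used, int $quantity = 1): array
{
    // Step 2: hierarchical features draw from the parent's pool.
    $code = $feature['parent_code'] ?? $feature['code'];

    // Steps 3-4: sum limits across packages and boosts; unlimited wins outright.
    $limit = 0;
    foreach ($grants as $grant) {
        if (!empty($grant['unlimited'])) {
            return ['allowed' => true, 'unlimited' => true, 'code' => $code];
        }
        $limit += $grant['limit'];
    }

    // Steps 5-7: deny when the request would exceed the pooled limit.
    $allowed = ($used + $quantity) <= $limit;

    return [
        'allowed'   => $allowed,
        'unlimited' => false,
        'code'      => $code,
        'remaining' => max(0, $limit - $used),
    ];
}
```

Note how a child feature such as `bio.cdn` is checked against its parent pool (`host.storage.total`) simply by swapping the code in step 2.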

**Example:**

```php
// Check before creating social account
$result = $workspace->can('social.accounts');

if ($result->isDenied()) {
    throw new EntitlementException($result->getMessage());
}

// Proceed with creation...

// Record the usage
$workspace->recordUsage('social.accounts');
```

### EntitlementResult

The return value from `can()`. Provides all context needed for UI feedback.

```php
$result = $workspace->can('ai.credits', quantity: 10);

$result->isAllowed();          // bool
$result->isDenied();           // bool
$result->isUnlimited();        // bool
$result->getMessage();         // Denial reason

$result->limit;                // Total limit (100)
$result->used;                 // Current usage (75)
$result->remaining;            // Remaining (25)
$result->getUsagePercentage(); // 75.0
$result->isNearLimit();        // true if > 80%
```

### Caching

Limits and usage are cached for 5 minutes to avoid repeated database queries.

```php
// Cache keys
"entitlement:{workspace_id}:limit:{feature_code}"
"entitlement:{workspace_id}:usage:{feature_code}"
```

Cache is invalidated when:

- Package is provisioned, suspended, cancelled, or reactivated
- Boost is provisioned or expires
- Usage is recorded
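
For illustration, the key construction reduces to a one-line helper (the helper name is hypothetical; the key format is the one listed above):

```php
<?php
// Hypothetical helper mirroring the documented cache key format.
function entitlementCacheKey(int $workspaceId, string $kind, string $featureCode): string
{
    // $kind is 'limit' or 'usage'.
    return "entitlement:{$workspaceId}:{$kind}:{$featureCode}";
}
```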

---

## Usage Tracking

### Recording Usage

After a gated action succeeds, record the consumption:

```php
$workspace->recordUsage(
    featureCode: 'ai.credits',
    quantity: 10,
    user: $user,          // Optional: who triggered it
    metadata: [           // Optional: context
        'model' => 'claude-3',
        'tokens' => 1500,
    ]
);
```

**Database:** `entitlement_usage_records`

### Usage Calculation

Usage is calculated based on the feature's reset type:

| Reset Type | Query |
|------------|-------|
| None | All records ever |
| Monthly | Records since billing cycle start |
| Rolling | Records in last N days |

```php
// Monthly: Get current cycle start from primary package
$cycleStart = $workspace->workspacePackages()
    ->whereHas('package', fn ($q) => $q->where('is_base_package', true))
    ->first()
    ->getCurrentCycleStart();

UsageRecord::getTotalUsage($workspaceId, $featureCode, $cycleStart);

// Rolling: Last 30 days
UsageRecord::getRollingUsage($workspaceId, $featureCode, days: 30);
```
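
Conceptually, the three reset types just choose a different cutoff timestamp when summing records. A plain-PHP sketch over in-memory records (not the actual `UsageRecord` implementation; record shape is an assumption):

```php
<?php
// Sum usage records at or after a cutoff; null cutoff means "all records ever".
// $records: [['quantity' => int, 'at' => unix timestamp], ...]
function usageSince(array $records, ?int $cutoff): int
{
    $total = 0;
    foreach ($records as $r) {
        if ($cutoff === null || $r['at'] >= $cutoff) {
            $total += $r['quantity'];
        }
    }
    return $total;
}

// Map a feature's reset type to its cutoff timestamp.
function cutoffFor(string $resetType, int $now, int $cycleStart, int $windowDays = 30): ?int
{
    return match ($resetType) {
        'none'    => null,                        // all records ever
        'monthly' => $cycleStart,                 // since billing cycle start
        'rolling' => $now - $windowDays * 86400,  // last N days
    };
}
```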

### Usage Summary

For dashboards, get all features with their current state:

```php
$summary = $workspace->getUsageSummary();

// Returns Collection grouped by category:
[
    'social' => [
        ['code' => 'social.accounts', 'limit' => 5, 'used' => 3, ...],
        ['code' => 'social.posts.scheduled', 'limit' => 100, 'used' => 45, ...],
    ],
    'ai' => [
        ['code' => 'ai.credits', 'limit' => 100, 'used' => 75, ...],
    ],
]
```

---

## Integration Points

### Commerce Integration

Subscriptions from Commerce automatically provision/revoke entitlement packages.

**Event Flow:**

```
SubscriptionCreated   → ProvisionSocialHostSubscription listener
                      → EntitlementService::provisionPackage()

SubscriptionCancelled → Revoke package (immediate or at period end)

SubscriptionRenewed   → Update expires_at
                      → Expire cycle-bound boosts
                      → Reset monthly usage (via cycle anchor)
```

**Plan Changes:**

```php
$subscriptionService->changePlan(
    $subscription,
    $newPackage,
    prorate: true,    // Calculate credit/charge
    immediate: true   // Apply now vs. period end
);
```

### External Billing (Blesta)

The API supports external billing systems via webhook-style endpoints:

```
POST /api/v1/entitlements                  → Provision package
POST /api/v1/entitlements/{id}/suspend
POST /api/v1/entitlements/{id}/unsuspend
POST /api/v1/entitlements/{id}/cancel
POST /api/v1/entitlements/{id}/renew
GET  /api/v1/entitlements/{id}             → Get status
```

### Cross-Service API

External services (BioHost, etc.) check entitlements via API:

```
GET /api/v1/entitlements/check
    ?email=user@example.com
    &feature=bio.pages
    &quantity=1

POST /api/v1/entitlements/usage
    { email, feature, quantity, metadata }

GET /api/v1/entitlements/summary
GET /api/v1/entitlements/summary/{workspace}
```

---

## Feature Categories

Features are organised by category for display grouping:

| Category | Features |
|----------|----------|
| **tier** | `tier.apollo`, `tier.hades`, `tier.nyx`, `tier.stygian` |
| **service** | `host.social`, `host.bio`, `host.analytics`, `host.trust` |
| **social** | `social.accounts`, `social.posts.scheduled`, `social.workspaces` |
| **ai** | `ai.credits`, `ai.providers.claude`, `ai.providers.gemini` |
| **biolink** | `bio.pages`, `bio.shortlinks`, `bio.domains` |
| **analytics** | `analytics.sites`, `analytics.pageviews` |
| **storage** | `host.storage.total`, `host.cdn`, `bio.cdn`, `social.cdn` |
| **team** | `team.members` |
| **api** | `api.requests` |
| **support** | `support.mailboxes`, `support.agents`, `support.conversations` |
| **tools** | `tool.url_shortener`, `tool.qr_generator`, `tool.dns_lookup` |

---

## Audit Logging

All entitlement changes are logged for compliance and debugging.

**Database:** `entitlement_logs`

```php
// Log actions
ACTION_PACKAGE_PROVISIONED
ACTION_PACKAGE_SUSPENDED
ACTION_PACKAGE_CANCELLED
ACTION_PACKAGE_REACTIVATED
ACTION_PACKAGE_RENEWED
ACTION_PACKAGE_EXPIRED
ACTION_BOOST_PROVISIONED
ACTION_BOOST_CONSUMED
ACTION_BOOST_EXHAUSTED
ACTION_BOOST_EXPIRED
ACTION_BOOST_CANCELLED
ACTION_USAGE_RECORDED
ACTION_USAGE_DENIED

// Log sources
SOURCE_BLESTA     // External billing
SOURCE_COMMERCE   // Internal commerce
SOURCE_ADMIN      // Manual admin action
SOURCE_SYSTEM     // Automated (e.g., expiry)
SOURCE_API        // API call
```

---

## Implementation Files

### Models

- `app/Mod/Tenant/Models/Feature.php`
- `app/Mod/Tenant/Models/Package.php`
- `app/Mod/Tenant/Models/WorkspacePackage.php`
- `app/Mod/Tenant/Models/Boost.php`
- `app/Mod/Tenant/Models/UsageRecord.php`
- `app/Mod/Tenant/Models/EntitlementLog.php`

### Services

- `app/Mod/Tenant/Services/EntitlementService.php` - Core logic
- `app/Mod/Tenant/Services/EntitlementResult.php` - Result DTO

### API

- `app/Mod/Api/Controllers/EntitlementApiController.php`

### Commerce Integration

- `app/Mod/Commerce/Listeners/ProvisionSocialHostSubscription.php`
- `app/Mod/Commerce/Services/SubscriptionService.php`

### Database

- `entitlement_features` - Feature definitions
- `entitlement_packages` - Package definitions
- `entitlement_package_features` - Package/feature pivot with limits
- `entitlement_workspace_packages` - Workspace subscriptions
- `entitlement_boosts` - Temporary additions
- `entitlement_usage_records` - Consumption tracking
- `entitlement_logs` - Audit trail

### Seeders

- `app/Mod/Tenant/Database/Seeders/FeatureSeeder.php`

### Tests

- `app/Mod/Tenant/Tests/Feature/EntitlementServiceTest.php`
- `app/Mod/Tenant/Tests/Feature/EntitlementApiTest.php`

---

## Usage Examples

### Basic Access Check

```php
// In controller or service
$result = $workspace->can('social.accounts');

if ($result->isDenied()) {
    return back()->with('error', $result->getMessage());
}

// Perform action...
$workspace->recordUsage('social.accounts');
```

### With Quantity

```php
// Before bulk import
$quantity = 50;
$result = $workspace->can('social.posts.scheduled', quantity: $quantity);

if ($result->isDenied()) {
    return "Cannot schedule {$quantity} posts. " .
           "Remaining: {$result->remaining}";
}
```

### Tier Check

```php
// Gate premium features
if ($workspace->isApollo()) {
    // Show Apollo-tier features
}

// Or directly
if ($workspace->can('tier.apollo')->isAllowed()) {
    // ...
}
```

### Usage Dashboard Data

```php
// For billing/usage page
$summary = $workspace->getUsageSummary();
$packages = $entitlements->getActivePackages($workspace);
$boosts = $entitlements->getActiveBoosts($workspace);
```

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-15 | Initial RFC |
705
RFC-005-COMMERCE-MATRIX.md
Normal file

# RFC: Commerce Entity Matrix

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

The Commerce Entity Matrix is a hierarchical permission and content system for multi-channel commerce. It enables master companies (M1) to control product catalogues, storefronts (M2) to select and white-label products, and dropshippers (M3) to inherit complete stores with zero management overhead.

The core innovation is **top-down immutable permissions**: if a parent says "NO", every descendant is locked to "NO". Children can only restrict further, never expand. Combined with sparse content overrides and a self-learning training mode, the system provides complete audit trails and deterministic behaviour.

Like [HLCRF](./RFC-001-HLCRF-COMPOSITOR.md) for layouts and [Compound SKU](./COMPOUND-SKU.md) for product identity, the Matrix eliminates complexity through composable primitives rather than configuration sprawl.

---

## Motivation

Traditional multi-tenant commerce systems copy data between entities, leading to synchronisation nightmares, inconsistent pricing, and broken audit trails. When Original Organics ran four websites, telephone orders, mail orders, and garden centre voucher schemes in 2008, they needed a system where:

1. **M1 owns truth** — Products exist in one place; everything else references them
2. **M2 selects and customises** — Storefronts choose products and can override presentation
3. **M3 inherits completely** — Dropshippers get fully functional stores without management burden
4. **Permissions cascade down** — A restriction at the top is immutable below
5. **Every action is gated** — No default-allow; if it wasn't trained, it doesn't work

The Matrix addresses this through hierarchical entities, sparse overrides, and request-level permission enforcement.

---

## Terminology

### Entity Types

| Code | Type | Role |
|------|------|------|
| **M1** | Master Company | Source of truth. Owns the product catalogue, sets base pricing, controls what's possible. |
| **M2** | Facade/Storefront | Selects from M1's catalogue. Can override content, adjust pricing within bounds, operate independent sales channels. |
| **M3** | Dropshipper | Full inheritance with zero management. Sees everything, reports everything, manages nothing. Can create their own M2s. |

### Entity Hierarchy

```
M1 - Master Company (Source of Truth)
│
├── Master Product Catalogue
│   └── Products live here, nowhere else
│
├── M2 - Storefronts (Select from M1)
│   ├── waterbutts.com
│   ├── originalorganics.co.uk
│   ├── telephone-orders (internal)
│   └── garden-vouchers (B2B)
│
└── M3 - Dropshippers (Full Inheritance)
    ├── External company selling our products
    └── Can have their own M2s
        ├── dropshipper.com
        └── dropshipper-wholesale.com
```

### Materialised Path

Each entity stores its position in the hierarchy as a path string:

| Entity | Path | Depth |
|--------|------|-------|
| ORGORG (M1) | `ORGORG` | 0 |
| WBUTS (M2) | `ORGORG/WBUTS` | 1 |
| DRPSHP (M3) | `ORGORG/WBUTS/DRPSHP` | 2 |

The path enables ancestor lookups without recursive queries.
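
For example, every ancestor path is just a proper prefix of the entity's own path, so the ancestor set can be derived without touching the database (a sketch; the helper name is illustrative, not from the codebase):

```php
<?php
// Derive all ancestor paths from a materialised path string.
function ancestorPaths(string $path): array
{
    // 'ORGORG/WBUTS/DRPSHP' → ['ORGORG', 'ORGORG/WBUTS']
    $segments = explode('/', $path);
    $paths = [];
    for ($i = 1; $i < count($segments); $i++) {
        $paths[] = implode('/', array_slice($segments, 0, $i));
    }
    return $paths;
}
```

A single `WHERE path IN (...)` query then loads every ancestor at once, with no recursion.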

---

## Permission Matrix

### The Core Rules

```
If M1 says "NO"  → Everything below is "NO"
If M1 says "YES" → M2 can say "NO" for itself
If M2 says "YES" → M3 can say "NO" for itself

Permissions cascade DOWN. Restrictions are IMMUTABLE from above.
```

### Visual Model

```
M1 (Master)
├── can_sell_alcohol: NO ──────────────────┐
├── can_discount: YES                      │
└── can_export: YES                        │
             │                             │
   ┌─────────┼─────────┐                   │
   ▼         ▼         ▼                   │
M2-Web    M2-Phone  M2-Voucher             │
├── can_sell_alcohol: [LOCKED NO] ◄────────┘
├── can_discount: NO (restricted self)
└── can_export: YES (inherited)
             │
             ▼
M3-Dropshipper
├── can_sell_alcohol: [LOCKED NO] (from M1)
├── can_discount: [LOCKED NO] (from M2)
└── can_export: YES (can restrict to NO)
```

### The Three Dimensions

```
Dimension 1: Entity Hierarchy (M1 → M2 → M3)
Dimension 2: Permission Keys (can_sell, can_discount, can_view_cost...)
Dimension 3: Resource Scope (products, orders, customers, reports...)

Permission = Matrix[Entity][Key][Scope]
```

### Permission Entry Schema

```sql
CREATE TABLE permission_matrix (
    id BIGINT PRIMARY KEY,
    entity_id BIGINT NOT NULL,

    -- What permission ("key" is a reserved word, so it is quoted)
    `key` VARCHAR(128),              -- product.create, order.refund
    scope VARCHAR(128),              -- Resource type or specific ID

    -- The value
    allowed BOOLEAN DEFAULT FALSE,
    locked BOOLEAN DEFAULT FALSE,    -- Set by parent, cannot override

    -- Audit
    source VARCHAR(32),              -- inherited, explicit, trained
    set_by_entity_id BIGINT,         -- Who locked it
    trained_at TIMESTAMP,            -- When it was learned
    trained_route VARCHAR(255),      -- Which route triggered training

    UNIQUE (entity_id, `key`, scope)
);
```

### Source Types

| Source | Meaning |
|--------|---------|
| `inherited` | Cascaded from parent entity's lock |
| `explicit` | Manually set by administrator |
| `trained` | Learned through training mode |

---

## Permission Cascade Algorithm

When checking if an entity can perform an action:

```
1. Build hierarchy path (root M1 → parent M2 → current entity)
2. For each ancestor, top-down:
   - Find permission for (entity, key, scope)
   - If locked AND denied → RETURN DENIED (immutable)
   - If denied (not locked) → RETURN DENIED
3. Check entity's own permission:
   - If exists → RETURN allowed/denied
4. Permission undefined → handle based on mode
```
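
A compact, array-based sketch of that check (illustrative; the real implementation reads `permission_matrix` rows and also records whether a denial was locked, for auditing):

```php
<?php
// $hierarchy is root-first: each element is
// ['perms' => [key => ['allowed' => bool, 'locked' => bool]]].
// Returns true/false, or null when the permission is undefined
// (step 4: handled according to training/strict mode).
function resolvePermission(array $hierarchy, string $key): ?bool
{
    $own = array_pop($hierarchy); // current entity; the rest are ancestors

    foreach ($hierarchy as $ancestor) { // top-down
        $p = $ancestor['perms'][$key] ?? null;
        if ($p !== null && $p['allowed'] === false) {
            return false; // denied above (locked or not) → denied here
        }
    }

    $p = $own['perms'][$key] ?? null;
    return $p === null ? null : $p['allowed'];
}
```

Note that the entity's own "YES" cannot rescue an ancestor's "NO": the ancestor loop returns before the entity's own entry is ever consulted.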

### Lock Cascade

When an entity locks a permission, all descendants receive an inherited lock:

```php
public function lock(Entity $entity, string $key, bool $allowed): void
{
    // Set on this entity
    PermissionMatrix::updateOrCreate(
        ['entity_id' => $entity->id, 'key' => $key],
        ['allowed' => $allowed, 'locked' => true, 'source' => 'explicit']
    );

    // Cascade to all descendants
    $descendants = Entity::where('path', 'like', $entity->path . '/%')->get();

    foreach ($descendants as $descendant) {
        PermissionMatrix::updateOrCreate(
            ['entity_id' => $descendant->id, 'key' => $key],
            [
                'allowed' => $allowed,
                'locked' => true,
                'source' => 'inherited',
                'set_by_entity_id' => $entity->id,
            ]
        );
    }
}
```

---

## Training Mode

### The Problem

Building a complete permission matrix upfront is impractical. You don't know every action until you build the system.

### The Solution

Training mode learns permissions by observing real usage:

```
1. Developer navigates to /admin/products
2. Clicks "Create Product"
3. System: "BLOCKED - No permission defined for:"
   - Entity: M1-Admin
   - Action: product.create
   - Route: POST /admin/products

4. Developer clicks [Allow for M1-Admin]
5. Permission recorded in matrix with source='trained'
6. Continue working

Result: Complete map of every action in the system
```

### Configuration

```php
// config/commerce.php
'matrix' => [
    // Training mode - undefined permissions prompt for approval
    'training_mode' => env('COMMERCE_MATRIX_TRAINING', false),

    // Production mode - undefined = denied
    'strict_mode' => env('COMMERCE_MATRIX_STRICT', true),

    // Log all permission checks (for audit)
    'log_all_checks' => env('COMMERCE_MATRIX_LOG_ALL', false),

    // Log denied requests
    'log_denials' => true,

    // Default action when permission undefined (only if strict=false)
    'default_allow' => false,
],
```

### Permission Request Logging

```sql
CREATE TABLE permission_requests (
    id BIGINT PRIMARY KEY,
    entity_id BIGINT NOT NULL,

    -- Request details
    method VARCHAR(10),        -- GET, POST, PUT, DELETE
    route VARCHAR(255),        -- /admin/products
    action VARCHAR(128),       -- product.create
    scope VARCHAR(128),

    -- Context
    request_data JSON,         -- Sanitised request params
    user_agent VARCHAR(255),
    ip_address VARCHAR(45),

    -- Result
    status VARCHAR(32),        -- allowed, denied, pending
    was_trained BOOLEAN DEFAULT FALSE,
    trained_at TIMESTAMP,

    created_at TIMESTAMP
);
```

### Production Mode

```
If permission not in matrix → 403 Forbidden
No exceptions. No fallbacks. No "default allow".

If it wasn't trained, it doesn't exist.
```

---

## Product Assignment

### How Products Flow Through the Hierarchy

M1 owns the master catalogue. M2/M3 entities don't copy products; they create **assignments** that reference the master and optionally override specific fields.

```sql
CREATE TABLE commerce_product_assignments (
    id BIGINT PRIMARY KEY,
    entity_id BIGINT NOT NULL,         -- M2 or M3
    product_id BIGINT NOT NULL,        -- Reference to master

    -- SKU customisation
    sku_suffix VARCHAR(64),            -- Custom suffix for this entity

    -- Price overrides (if allowed by matrix)
    price_override INT,                -- Override base price
    price_tier_overrides JSON,         -- Override tier pricing
    margin_percent DECIMAL(5,2),       -- Percentage margin
    fixed_margin INT,                  -- Fixed margin amount

    -- Content overrides
    name_override VARCHAR(255),
    description_override TEXT,
    image_override VARCHAR(512),

    -- Control
    is_active BOOLEAN DEFAULT TRUE,
    is_featured BOOLEAN DEFAULT FALSE,
    sort_order INT DEFAULT 0,
    allocated_stock INT,               -- Entity-specific allocation
    can_discount BOOLEAN DEFAULT TRUE,
    min_price INT,                     -- Floor price
    max_price INT,                     -- Ceiling price

    UNIQUE (entity_id, product_id)
);
```

### Effective Values

The assignment provides effective value getters that fall back to the master product:

```php
public function getEffectivePrice(): int
{
    return $this->price_override ?? $this->product->price;
}

public function getEffectiveName(): string
{
    return $this->name_override ?? $this->product->name;
}
```

### SKU Lineage

Full SKUs encode the entity path:

```
ORGORG-WBUTS-WB500L    # Original Organics → Waterbutts → 500L Water Butt
ORGORG-PHONE-WB500L    # Same product, telephone channel
DRPSHP-THEIR1-WB500L   # Dropshipper's storefront selling our product
```

This tracks:

- Where the sale originated
- Which facade/channel
- Back to master SKU

---

## Content Overrides

### The Core Insight

**Don't copy data. Create sparse overrides. Resolve at runtime.**

```
M1 (Master) has content
        │
        │  (M2 sees M1's content by default)
        ▼
M2 customises product name
        │
        │  Override entry: (M2, product:123, name, "Custom Name")
        │  Everything else still inherits from M1
        ▼
M3 (Dropshipper) inherits M2's view
        │
        │  (Sees M2's custom name, M1's everything else)
        ▼
M3 customises description
        │
        │  Override entry: (M3, product:123, description, "Their description")
        │  Still has M2's name, M1's other fields
        ▼
Resolution: M3 sees merged content from all levels
```

### Override Table Schema

```sql
CREATE TABLE commerce_content_overrides (
    id BIGINT PRIMARY KEY,
    entity_id BIGINT NOT NULL,

    -- What's being overridden (polymorphic)
    overrideable_type VARCHAR(128),   -- Product, Category, Page, etc.
    overrideable_id BIGINT,
    field VARCHAR(64),                -- name, description, image, price

    -- The override value
    value TEXT,
    value_type VARCHAR(32),           -- string, json, html, decimal, boolean

    -- Audit
    created_by BIGINT,
    updated_by BIGINT,
    created_at TIMESTAMP,
    updated_at TIMESTAMP,

    UNIQUE (entity_id, overrideable_type, overrideable_id, field)
);
```

### Value Types

| Type | Storage | Use Case |
|------|---------|----------|
| `string` | Raw text | Names, short descriptions |
| `json` | JSON-encoded | Structured data, arrays |
| `html` | Raw HTML | Rich content |
| `integer` | String → int | Counts, quantities |
| `decimal` | String → float | Prices, percentages |
| `boolean` | `1`/`0` | Flags, toggles |
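
Since `value` is stored as TEXT, reading an override implies a cast driven by `value_type`. A hedged sketch of that step (the helper name is hypothetical):

```php
<?php
// Cast a stored override value according to its declared value_type.
function castOverrideValue(string $value, string $type): mixed
{
    return match ($type) {
        'integer' => (int) $value,
        'decimal' => (float) $value,
        'boolean' => $value === '1',
        'json'    => json_decode($value, true),
        default   => $value, // string, html: returned as-is
    };
}
```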

### Resolution Algorithm

```
Query: "What is product 123's name for M3-ACME?"

Step 1: Check M3-ACME overrides
        → NULL (no override)

Step 2: Check M2-WATERBUTTS overrides (parent)
        → "Premium 500L Water Butt" ✓

Step 3: Return "Premium 500L Water Butt"
        (M3-ACME sees M2's override, not M1's original)
```

If M3-ACME later customises the name, their override takes precedence for themselves and their descendants.
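
The walk above reduces to "nearest override wins, else the master value". A minimal sketch over in-memory arrays (the real system queries `commerce_content_overrides` per entity in the chain):

```php
<?php
// $chain is root-first: one override map (field => value) per entity,
// with M1's master value passed separately as the final fallback.
function resolveField(array $chain, string $field, string $masterValue): string
{
    // Walk from the entity itself back up toward M1; nearest override wins.
    foreach (array_reverse($chain) as $overrides) {
        if (array_key_exists($field, $overrides)) {
            return $overrides[$field];
        }
    }
    return $masterValue; // no override anywhere → M1's value
}
```

Because overrides are sparse, most fields fall straight through to the master, and an entity's view is always a merge, never a copy.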

---

## API Reference

### PermissionMatrixService

The service handles all permission checks and training.

```php
use Mod\Commerce\Services\PermissionMatrixService;

$matrix = app(PermissionMatrixService::class);

// Check permission
$result = $matrix->can($entity, 'product.create', $scope);

if ($result->isAllowed()) {
    // Proceed
} elseif ($result->isDenied()) {
    // Handle denial: $result->reason
} elseif ($result->isUndefined()) {
    // No permission defined
}

// Gate a request (handles training mode)
$result = $matrix->gateRequest($request, $entity, 'order.refund');

// Set permission explicitly
$matrix->setPermission($entity, 'product.create', true);

// Lock permission (cascades to descendants)
$matrix->lock($entity, 'product.view_cost', false);

// Unlock (removes inherited locks)
$matrix->unlock($entity, 'product.view_cost');

// Train permission (dev mode)
$matrix->train($entity, 'product.create', $scope, true, $route);
```

### PermissionResult

```php
use Mod\Commerce\Services\PermissionResult;

// Factory methods
PermissionResult::allowed();
PermissionResult::denied(reason: 'Locked by M1', lockedBy: $entity);
PermissionResult::undefined(key: 'action', scope: 'resource');
PermissionResult::pending(key: 'action', trainingUrl: '/train/...');

// Status checks
$result->isAllowed();
$result->isDenied();
$result->isUndefined();
$result->isPending();
```

### Entity Model

```php
use Mod\Commerce\Models\Entity;

// Create master
$m1 = Entity::createMaster('ORGORG', 'Original Organics');

// Create facade under master
$m2 = $m1->createFacade('WBUTS', 'Waterbutts.com', [
    'domain' => 'waterbutts.com',
    'currency' => 'GBP',
]);

// Create dropshipper under facade
$m3 = $m2->createDropshipper('ACME', 'ACME Supplies');

// Hierarchy helpers
$m3->getAncestors();   // [M1, M2]
$m3->getHierarchy();   // [M1, M2, M3]
$m3->getRoot();        // M1
$m3->getDescendants(); // Children, grandchildren, etc.

// Type checks
$entity->isMaster();      // or isM1()
$entity->isFacade();      // or isM2()
$entity->isDropshipper(); // or isM3()

// SKU building
$entity->buildSku('WB500L'); // "ORGORG-WBUTS-WB500L"
```

---

## Standard Permission Keys

```php
// Product permissions
'product.list'           // View product list
'product.view'           // View product detail
'product.view_cost'      // See cost price (usually M1 only)
'product.create'         // Create new product (M1 only)
'product.update'         // Update product
'product.delete'         // Delete product
'product.price_override' // Override price on facade

// Order permissions
'order.list'   // View orders
'order.view'   // View order detail
'order.create' // Create order
'order.update' // Update order
'order.cancel' // Cancel order
'order.refund' // Process refund
'order.export' // Export order data

// Customer permissions
'customer.list'
'customer.view'
'customer.view_email' // See customer email
'customer.view_phone' // See customer phone
'customer.export'     // Export customer data (GDPR)

// Report permissions
'report.sales'   // Sales reports
'report.revenue' // Revenue (may be hidden from M3)
'report.cost'    // Cost reports (M1 only)
'report.margin'  // Margin reports (M1 only)

// System permissions
'settings.view'
'settings.update'
'entity.create' // Create child entities
'entity.manage' // Manage entity settings
```

---

## Middleware Integration

### CommerceMatrixGate

```php
// app/Http/Middleware/CommerceMatrixGate.php

public function handle(Request $request, Closure $next)
{
    $entity = $this->resolveEntity($request);
    $action = $this->resolveAction($request);

    if (!$entity || !$action) {
        return $next($request); // Not a commerce route
    }

    $result = $this->matrix->gateRequest($request, $entity, $action);

    if ($result->isDenied()) {
        return response()->json([
            'error' => 'permission_denied',
            'message' => $result->reason,
        ], 403);
    }

    if ($result->isPending()) {
        // Training mode - show prompt
        return response()->view('commerce.matrix.train-prompt', [
            'result' => $result,
            'entity' => $entity,
        ], 428); // Precondition Required
    }

    return $next($request);
}
```

### Route Definition

```php
// Explicit action mapping
Route::post('/products', [ProductController::class, 'store'])
    ->matrixAction('product.create');

Route::post('/orders/{order}/refund', [OrderController::class, 'refund'])
    ->matrixAction('order.refund');
```

---

## Order Flow Through the Matrix

```
Customer places order on waterbutts.com (M2)
                 │
                 ▼
┌─────────────────────────────────────────┐
│ Order Created                           │
│ - entity_id: M2-WBUTS                   │
│ - sku: ORGORG-WBUTS-WB500L              │
│ - customer sees: M2 branding            │
└────────────────┬────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────┐
│ M1 Fulfillment Queue                    │
│ - M1 sees all orders from all M2s       │
│ - Can filter by facade                  │
│ - Ships with M2 branding (or neutral)   │
└────────────────┬────────────────────────┘
                 │
                 ▼
┌─────────────────────────────────────────┐
│ Reporting                               │
│ - M1: Sees all, costs, margins          │
│ - M2: Sees own orders, no cost data     │
│ - M3: Sees own orders, wholesale price  │
└─────────────────────────────────────────┘
```

---

## Pricing

Pricing is not a separate system. It emerges from:

1. **Permission Matrix** — `can_discount`, `max_discount_percent`, `can_sell_below_wholesale`
2. **Product Assignments** — `price_override`, `min_price`, `max_price`, `margin_percent`
3. **Content Overrides** — Sparse price adjustments per entity
4. **SKU System** — Bundle hashes, option modifiers, volume rules

No separate pricing engine needed. Primitives compose.

---

## Implementation Files

### Models

- `app/Mod/Commerce/Models/Entity.php` — Entity hierarchy
- `app/Mod/Commerce/Models/PermissionMatrix.php` — Permission entries
- `app/Mod/Commerce/Models/PermissionRequest.php` — Request logging
- `app/Mod/Commerce/Models/ContentOverride.php` — Sparse overrides
- `app/Mod/Commerce/Models/ProductAssignment.php` — M2/M3 product links

### Services

- `app/Mod/Commerce/Services/PermissionMatrixService.php` — Permission logic
- `app/Mod/Commerce/Services/ContentOverrideService.php` — Override resolution

### Configuration

- `app/Mod/Commerce/config.php` — Matrix configuration

---

## Related RFCs

- [HLCRF Compositor](./HLCRF-COMPOSITOR.md) — Same philosophy applied to layouts
- [Compound SKU](./COMPOUND-SKU.md) — Same philosophy applied to product identity

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-15 | Initial RFC |

258
RFC-006-COMPOUND-SKU.md
Normal file

# RFC: Compound SKU Format

**Status:** Implemented
**Created:** 2026-01-15
**Authors:** Host UK Engineering

---

## Abstract

The Compound SKU format encodes product identity, options, quantities, and bundle groupings in a single parseable string. Like [HLCRF](./HLCRF-COMPOSITOR.md) for layouts, it makes complex structure a portable, self-describing data type.

One scan tells you everything. No lookups. No mistakes. One barcode = complete fulfillment knowledge.

---

## Format Specification

```
SKU-<opt>~<val>*<qty>[-<opt>~<val>*<qty>]...
```

| Symbol | Purpose | Example |
|--------|---------|----------------------|
| `-` | Option separator | `LAPTOP-ram~16gb` |
| `~` | Value indicator | `ram~16gb` |
| `*` | Quantity indicator | `cover~black*2` |
| `,` | Item separator | `LAPTOP,MOUSE,PAD` |
| `\|` | Bundle separator | `LAPTOP\|MOUSE\|PAD` |

---

## Examples

### Single product with options

```
LAPTOP-ram~16gb-ssd~512gb-color~silver
```

### Option with quantity

```
LAPTOP-ram~16gb-cover~black*2
```

Two black covers included.

### Multiple separate items

```
LAPTOP-ram~16gb,HDMI-length~2m,MOUSE-color~black
```

Comma separates distinct line items.

### Bundle (discount lookup)

```
LAPTOP-ram~16gb|MOUSE-color~black|PAD-size~xl
```

Pipe binds items for bundle discount detection.

### With entity lineage

```
ORGORG-WBUTS-PROD500-ram~16gb
│      │     │       └── Option
│      │     └────────── Base product SKU
│      └──────────────── M2 entity code
└─────────────────────── M1 entity code
```

The lineage prefix traces through the entity hierarchy.
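
The grammar and examples above can be exercised with a minimal parser. This is an illustrative sketch, not the SkuParserService implementation: it assumes every `-` segment after the base SKU is a `code~value` option, so entity-lineage prefixes are not handled.

```python
def parse_compound_sku(sku: str):
    """Split a compound SKU into line items: `,` separates items,
    `|` groups bundle members, `-` attaches options, and `~` / `*`
    carry an option's value and quantity (quantity defaults to 1)."""
    items = []
    for part in sku.split(","):
        group = []
        for member in part.split("|"):
            base, *opts = member.split("-")
            options = []
            for opt in opts:
                code, value = opt.split("~")
                qty = 1
                if "*" in value:
                    value, qty = value.split("*")
                    qty = int(qty)
                options.append({"code": code, "value": value, "qty": qty})
            group.append({"base_sku": base, "options": options})
        items.append({"bundle": len(group) > 1, "members": group})
    return items

items = parse_compound_sku("LAPTOP-ram~16gb-cover~black*2|MOUSE-color~black,HDMI-length~2m")
# items[0] is a two-member bundle; items[1] is a standalone HDMI line item
```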

---

## Bundle Discount Detection

When a compound SKU contains `|` (bundle separator):

```
┌──────────────────────────────────────────────────────────────┐
│ Input: LAPTOP-ram~16gb|MOUSE-color~black|PAD-size~xl         │
│                                                              │
│ Step 1: Detect Bundle (found |)                              │
│                                                              │
│ Step 2: Strip Human Choices                                  │
│         → LAPTOP|MOUSE|PAD                                   │
│                                                              │
│ Step 3: Hash the Raw Combination                             │
│         → hash("LAPTOP|MOUSE|PAD") = "abc123..."             │
│                                                              │
│ Step 4: Lookup Bundle Discount                               │
│         → commerce_bundle_hashes["abc123"] = 20% off         │
│                                                              │
│ Step 5: Apply Discount                                       │
│         → Bundle price calculated                            │
└──────────────────────────────────────────────────────────────┘
```

The hash is computed from **sorted base SKUs** (stripping options), so `LAPTOP|MOUSE|PAD` and `PAD|LAPTOP|MOUSE` produce the same hash.
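
That normalisation can be sketched directly; SHA-256 is assumed here to match the 64-character hash column in the database schema, and the function name is illustrative.

```python
import hashlib

def bundle_hash(compound_sku: str) -> str:
    """Strip options from each bundle member, sort the base SKUs,
    and hash the canonical form so member order never matters."""
    bases = [member.split("-")[0] for member in compound_sku.split("|")]
    canonical = "|".join(sorted(bases))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

a = bundle_hash("LAPTOP-ram~16gb|MOUSE-color~black|PAD-size~xl")
b = bundle_hash("PAD|LAPTOP|MOUSE")
assert a == b  # same bundle, same hash, regardless of order or options
```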

---

## API Reference

### SkuParserService

Parses compound SKU strings into structured data.

```php
use Mod\Commerce\Services\SkuParserService;

$parser = app(SkuParserService::class);

// Parse a compound SKU
$result = $parser->parse('LAPTOP-ram~16gb|MOUSE,HDMI');

// Result contains ParsedItem and BundleItem objects
$result->count();           // 2 (1 bundle + 1 single)
$result->productCount();    // 3 (2 in bundle + 1 single)
$result->hasBundles();      // true
$result->getBundleHashes(); // ['abc123...']
$result->getAllBaseSkus();  // ['LAPTOP', 'MOUSE', 'HDMI']

// Access items
foreach ($result->items as $item) {
    if ($item instanceof BundleItem) {
        echo "Bundle: " . $item->getBaseSkuString();
    } else {
        echo "Item: " . $item->baseSku;
    }
}
```

### SkuBuilderService

Builds compound SKU strings from structured data.

```php
use Mod\Commerce\Services\SkuBuilderService;

$builder = app(SkuBuilderService::class);

// Build from line items
$sku = $builder->build([
    [
        'base_sku' => 'laptop',
        'options' => [
            ['code' => 'ram', 'value' => '16gb'],
            ['code' => 'ssd', 'value' => '512gb'],
        ],
        'bundle_group' => 'cyber', // Groups into bundle
    ],
    [
        'base_sku' => 'mouse',
        'bundle_group' => 'cyber',
    ],
    [
        'base_sku' => 'hdmi', // No group = standalone
    ],
]);
// Returns: "LAPTOP-ram~16gb-ssd~512gb|MOUSE,HDMI"

// Add entity lineage
$sku = $builder->addLineage('PROD500', ['ORGORG', 'WBUTS']);
// Returns: "ORGORG-WBUTS-PROD500"

// Generate bundle hash for discount creation
$hash = $builder->generateBundleHash(['LAPTOP', 'MOUSE', 'PAD']);
```

### Data Transfer Objects

```php
use Mod\Commerce\Data\SkuOption;
use Mod\Commerce\Data\ParsedItem;
use Mod\Commerce\Data\BundleItem;
use Mod\Commerce\Data\SkuParseResult;

// Option: code~value*quantity
$option = new SkuOption('ram', '16gb', 1);
$option->toString(); // "ram~16gb"

// Item: baseSku with options
$item = new ParsedItem('LAPTOP', [$option]);
$item->toString();       // "LAPTOP-ram~16gb"
$item->getOption('ram'); // SkuOption
$item->hasOption('ssd'); // false

// Bundle: items grouped for discount
$bundle = new BundleItem($items, $hash);
$bundle->getBaseSkus();        // ['LAPTOP', 'MOUSE']
$bundle->getBaseSkuString();   // "LAPTOP|MOUSE"
$bundle->containsSku('MOUSE'); // true
```

---

## Database Schema

### Bundle Hash Table

```sql
CREATE TABLE commerce_bundle_hashes (
    id               BIGINT PRIMARY KEY,
    hash             VARCHAR(64) UNIQUE, -- SHA-256 of sorted base SKUs
    base_skus        VARCHAR(512),       -- "LAPTOP|MOUSE|PAD" (debugging)

    -- Discount (one of these)
    coupon_code      VARCHAR(64),
    fixed_price      DECIMAL(12,2),
    discount_percent DECIMAL(5,2),
    discount_amount  DECIMAL(12,2),

    entity_id        BIGINT,             -- Scope to M1/M2/M3
    valid_from       TIMESTAMP,
    valid_until      TIMESTAMP,
    active           BOOLEAN DEFAULT TRUE
);
```

---

## Connection to HLCRF

Both Compound SKU and HLCRF share the same core innovation: **hierarchy encoded in a parseable string**.

| System | String | Meaning |
|--------|--------|---------|
| HLCRF | `H[LCR]CF` | Layout with nested body in header |
| SKU | `ORGORG-WBUTS-PROD-ram~16gb` | Product with entity lineage and option |

Both eliminate database lookups by making structure self-describing. Parse the string, get the full picture.

---

## Implementation Files

- `app/Mod/Commerce/Services/SkuParserService.php` — Parser
- `app/Mod/Commerce/Services/SkuBuilderService.php` — Builder
- `app/Mod/Commerce/Services/SkuLineageService.php` — Entity lineage tracking
- `app/Mod/Commerce/Data/SkuOption.php` — Option DTO
- `app/Mod/Commerce/Data/ParsedItem.php` — Item DTO
- `app/Mod/Commerce/Data/BundleItem.php` — Bundle DTO
- `app/Mod/Commerce/Data/SkuParseResult.php` — Parse result DTO
- `app/Mod/Commerce/Models/BundleHash.php` — Bundle discount model
- `app/Mod/Commerce/Tests/Feature/CompoundSkuTest.php` — Tests

---

## Version History

| Version | Date | Changes |
|---------|------|---------|
| 1.0 | 2026-01-15 | Initial RFC |

406
RFC-007-LTHN-HASH.md
Normal file

# RFC-0004: LTHN Quasi-Salted Hash Algorithm

**Status:** Informational
**Version:** 1.0
**Created:** 2025-01-13
**Author:** Snider

## Abstract

This document specifies the LTHN (Leet-Hash-N) quasi-salted hash algorithm, a deterministic hashing scheme that derives a salt from the input itself using character substitution and reversal. LTHN produces reproducible hashes that can be verified without storing a separate salt value, making it suitable for checksums, identifiers, and non-security-critical hashing applications.

## Table of Contents

1. [Introduction](#1-introduction)
2. [Terminology](#2-terminology)
3. [Algorithm Specification](#3-algorithm-specification)
4. [Character Substitution Map](#4-character-substitution-map)
5. [Verification](#5-verification)
6. [Use Cases](#6-use-cases)
7. [Security Considerations](#7-security-considerations)
8. [Implementation Requirements](#8-implementation-requirements)
9. [Test Vectors](#9-test-vectors)
10. [API Reference](#10-api-reference)
11. [Future Work](#11-future-work)
12. [References](#12-references)

## 1. Introduction

Traditional salted hashing requires storing a random salt value alongside the hash. This provides protection against rainbow table attacks but requires additional storage and management.

LTHN takes a different approach: the salt is derived deterministically from the input itself through a transformation that:

1. Reverses the input string
2. Applies character substitutions inspired by "leet speak" conventions

This produces a quasi-salt that varies with input content while remaining reproducible, enabling verification without salt storage.

### 1.1 Design Goals

- **Determinism**: Same input always produces same hash
- **Salt derivation**: No external salt storage required
- **Verifiability**: Hashes can be verified with only the input
- **Simplicity**: Easy to implement and understand
- **Interoperability**: Based on standard SHA-256

### 1.2 Non-Goals

LTHN is NOT designed to:

- Replace proper password hashing (use bcrypt, Argon2, etc.)
- Provide cryptographic security against determined attackers
- Resist preimage or collision attacks beyond SHA-256's guarantees

## 2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

**Input**: The original string to be hashed
**Quasi-salt**: A salt derived from the input itself
**Key map**: The character substitution table
**LTHN hash**: The final hash output

## 3. Algorithm Specification

### 3.1 Overview

```
LTHN(input) = SHA256(input || createSalt(input))
```

Where `||` denotes concatenation and `createSalt` is defined below.

### 3.2 Salt Creation Algorithm

```
function createSalt(input: string) -> string:
    if input is empty:
        return ""

    runes = input as array of Unicode code points
    salt = new array of size length(runes)

    for i = 0 to length(runes) - 1:
        // Reverse: take character from end
        char = runes[length(runes) - 1 - i]

        // Apply substitution if it exists in the key map
        if char in keyMap:
            salt[i] = keyMap[char]
        else:
            salt[i] = char

    return salt as string
```

### 3.3 Hash Algorithm

```
function Hash(input: string) -> string:
    salt = createSalt(input)
    combined = input + salt
    digest = SHA256(combined as UTF-8 bytes)
    return hexEncode(digest)
```

### 3.4 Output Format

- Output: 64-character lowercase hexadecimal string
- Digest: 32 bytes (256 bits)
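
The pseudocode above maps one-to-one onto a short Python sketch; the pseudocode remains the normative definition.

```python
import hashlib

# Default key map from Section 4 (note: not fully symmetric; 'z' never maps back to 's')
KEY_MAP = {'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7',
           '0': 'o', '1': 'l', '3': 'e', '4': 'a', '7': 't'}

def create_salt(text: str) -> str:
    """Reverse the input and apply the leet-speak substitution map."""
    return "".join(KEY_MAP.get(ch, ch) for ch in reversed(text))

def lthn_hash(text: str) -> str:
    """LTHN(input) = hex(SHA256(input || createSalt(input)))."""
    return hashlib.sha256((text + create_salt(text)).encode("utf-8")).hexdigest()

create_salt("hello")  # "0113h" (o→0, l→1, l→1, e→3, h unchanged)
lthn_hash("")         # SHA-256 of the empty string, per requirement 5 in Section 8
```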

## 4. Character Substitution Map

### 4.1 Default Key Map

The default substitution map uses bidirectional "leet speak" style mappings:

| Input | Output | Description |
|-------|--------|-------------|
| `o` | `0` | Letter O to zero |
| `l` | `1` | Letter L to one |
| `e` | `3` | Letter E to three |
| `a` | `4` | Letter A to four |
| `s` | `z` | Letter S to Z |
| `t` | `7` | Letter T to seven |
| `0` | `o` | Zero to letter O |
| `1` | `l` | One to letter L |
| `3` | `e` | Three to letter E |
| `4` | `a` | Four to letter A |
| `7` | `t` | Seven to letter T |

Note: The mapping is NOT fully symmetric. `z` does NOT map back to `s`.

### 4.2 Key Map as Code

```
keyMap = {
    'o': '0',
    'l': '1',
    'e': '3',
    'a': '4',
    's': 'z',
    't': '7',
    '0': 'o',
    '1': 'l',
    '3': 'e',
    '4': 'a',
    '7': 't'
}
```

### 4.3 Custom Key Maps

Implementations MAY support custom key maps. When using custom maps:

- Document the custom map clearly
- Ensure bidirectional mappings are intentional
- Consider character set implications (Unicode vs. ASCII)

## 5. Verification

### 5.1 Verification Algorithm

```
function Verify(input: string, expectedHash: string) -> bool:
    actualHash = Hash(input)
    return constantTimeCompare(actualHash, expectedHash)
```

### 5.2 Properties

- Verification requires only the input and hash
- No salt storage or retrieval necessary
- Same input always produces same hash

## 6. Use Cases

### 6.1 Recommended Uses

| Use Case | Suitability | Notes |
|----------|-------------|-------|
| Content identifiers | Good | Deterministic, reproducible |
| Cache keys | Good | Same content = same key |
| Deduplication | Good | Identify identical content |
| File integrity | Moderate | Use with checksum comparison |
| Non-critical checksums | Good | Simple verification |
| Rolling key derivation | Good | Time-based key rotation (see 6.3) |

### 6.2 Not Recommended Uses

| Use Case | Reason |
|----------|--------|
| Password storage | Use bcrypt, Argon2, or scrypt instead |
| Authentication tokens | Use HMAC or proper MACs |
| Digital signatures | Use proper signature schemes |
| Security-critical integrity | Use HMAC-SHA256 |

### 6.3 Rolling Key Derivation Pattern

LTHN is well-suited for deriving time-based rolling keys for streaming media or time-limited access control. The pattern combines a time period with user credentials:

```
streamKey = SHA256(LTHN(period + ":" + license + ":" + fingerprint))
```

#### 6.3.1 Cadence Formats

| Cadence | Period Format | Example | Window |
|---------|---------------|---------|--------|
| daily | YYYY-MM-DD | "2026-01-13" | 24 hours |
| 12h | YYYY-MM-DD-AM/PM | "2026-01-13-AM" | 12 hours |
| 6h | YYYY-MM-DD-HH | "2026-01-13-00" | 6 hours (00, 06, 12, 18) |
| 1h | YYYY-MM-DD-HH | "2026-01-13-15" | 1 hour |

#### 6.3.2 Rolling Window Implementation

For graceful key transitions, implementations should support a rolling window:

```
function GetRollingPeriods(cadence: string) -> (current: string, next: string):
    now = currentTime()
    current = formatPeriod(now, cadence)
    next = formatPeriod(now + periodDuration(cadence), cadence)
    return (current, next)
```

Content encrypted with rolling keys includes wrapped CEKs (Content Encryption Keys) for both current and next periods, allowing decryption during period transitions.
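
A sketch of the period formatting and the rolling pair, following the cadence table above; timestamps are assumed to be UTC, and the function names are illustrative rather than part of any published API.

```python
from datetime import datetime, timedelta

DURATIONS = {"daily": timedelta(days=1), "12h": timedelta(hours=12),
             "6h": timedelta(hours=6), "1h": timedelta(hours=1)}

def format_period(dt: datetime, cadence: str) -> str:
    """Render a timestamp as the period string for the given cadence."""
    day = dt.strftime("%Y-%m-%d")
    if cadence == "daily":
        return day
    if cadence == "12h":
        return f"{day}-{'AM' if dt.hour < 12 else 'PM'}"
    if cadence == "6h":
        return f"{day}-{dt.hour // 6 * 6:02d}"  # floors to 00, 06, 12, 18
    if cadence == "1h":
        return f"{day}-{dt.hour:02d}"
    raise ValueError(f"unknown cadence: {cadence}")

def rolling_periods(now: datetime, cadence: str):
    """Current period plus the next one, for graceful key rollover."""
    return format_period(now, cadence), format_period(now + DURATIONS[cadence], cadence)

rolling_periods(datetime(2026, 1, 13, 15, 30), "6h")
# → ("2026-01-13-12", "2026-01-13-18")
```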

#### 6.3.3 CEK Wrapping

```
// Wrap CEK for distribution
for each period in [current, next]:
    streamKey = SHA256(LTHN(period + ":" + license + ":" + fingerprint))
    wrappedCEK = ChaCha20Poly1305_Encrypt(CEK, streamKey)
    store (period, wrappedCEK) in header

// Unwrap CEK for playback
for each (period, wrappedCEK) in header:
    streamKey = SHA256(LTHN(period + ":" + license + ":" + fingerprint))
    CEK = ChaCha20Poly1305_Decrypt(wrappedCEK, streamKey)
    if success: return CEK
return error("no valid key for current period")
```

## 7. Security Considerations

### 7.1 Not a Password Hash

LTHN MUST NOT be used for password hashing because:

- No work factor (bcrypt and Argon2 have tunable cost)
- No random salt (predictable salt derivation)
- Fast to compute (enables brute force)
- No memory hardness (GPU/ASIC friendly)

### 7.2 Quasi-Salt Limitations

The derived salt provides limited protection:

- Salt is deterministic, not random
- Identical inputs produce identical salts
- Does not prevent rainbow tables for known inputs
- Salt derivation algorithm is public

### 7.3 SHA-256 Dependency

Security properties depend on SHA-256:

- Preimage resistance: Finding input from hash is hard
- Second preimage resistance: Finding a different input with the same hash is hard
- Collision resistance: Finding two inputs with the same hash is hard

These properties apply to the combined `input || salt` value.

### 7.4 Timing Attacks

Verification SHOULD use constant-time comparison to prevent timing attacks:

```
function constantTimeCompare(a: string, b: string) -> bool:
    if length(a) != length(b):
        return false

    result = 0
    for i = 0 to length(a) - 1:
        result |= a[i] XOR b[i]

    return result == 0
```
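
Rather than hand-rolling the loop, most languages expose a vetted primitive for this; a Python sketch using the standard library:

```python
import hmac

def verify(actual_hash: str, expected_hash: str) -> bool:
    """Constant-time comparison via the standard library instead of a
    hand-rolled loop; hmac.compare_digest resists timing attacks."""
    return hmac.compare_digest(actual_hash.encode(), expected_hash.encode())

verify("abc123", "abc123")  # True
verify("abc123", "abc124")  # False
```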

## 8. Implementation Requirements

Conforming implementations MUST:

1. Use SHA-256 as the underlying hash function
2. Concatenate input and salt in the order: `input || salt`
3. Use the default key map unless explicitly configured otherwise
4. Output lowercase hexadecimal encoding
5. Handle empty strings by returning the SHA-256 of the empty string
6. Support Unicode input (process as UTF-8 bytes after salt creation)

Conforming implementations SHOULD:

1. Provide constant-time verification
2. Support custom key maps via configuration
3. Document any deviations from the default key map

## 9. Test Vectors

### 9.1 Basic Test Cases

| Input | Salt | Combined | LTHN Hash |
|-------|------|----------|-----------|
| `""` | `""` | `""` | `e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855` |
| `"a"` | `"4"` | `"a4"` | (computed) |
| `"hello"` | `"0113h"` | `"hello0113h"` | (computed) |
| `"test"` | `"7z37"` | `"test7z37"` | (computed) |

### 9.2 Character Substitution Examples

| Input | Reversed | After Substitution (Salt) |
|-------|----------|---------------------------|
| `"hello"` | `"olleh"` | `"0113h"` |
| `"test"` | `"tset"` | `"7z37"` |
| `"password"` | `"drowssap"` | `"dr0wzz4p"` |
| `"12345"` | `"54321"` | `"5ae2l"` |

### 9.3 Unicode Test Cases

| Input | Expected Behavior |
|-------|-------------------|
| `"cafe"` | Standard processing |
| `"café"` | `é` is NOT substituted (only ASCII `e` matches) |

Note: The key map only matches exact character codes, not normalized equivalents.

## 10. API Reference

### 10.1 Go API

```go
import "github.com/Snider/Enchantrix/pkg/crypt"

// Create crypt service
svc := crypt.NewService()

// Hash with LTHN
hash := svc.Hash(crypt.LTHN, "input string")

// Available hash types
crypt.LTHN   // LTHN quasi-salted hash
crypt.SHA256 // Standard SHA-256
crypt.SHA512 // Standard SHA-512
// ... other standard algorithms
```

### 10.2 Direct Usage

```go
import "github.com/Snider/Enchantrix/pkg/crypt/std/lthn"

// Direct LTHN hash
hash := lthn.Hash("input string")

// Verify hash
valid := lthn.Verify("input string", expectedHash)
```

## 11. Future Work

- [ ] Custom key map configuration via API
- [ ] WASM compilation for browser-based LTHN operations
- [ ] Alternative underlying hash functions (SHA-3, BLAKE3)
- [ ] Configurable salt derivation strategies
- [ ] Performance optimization for high-throughput scenarios
- [ ] Formal security analysis of the rolling key pattern

## 12. References

- [FIPS 180-4] Secure Hash Standard (SHA-256)
- [RFC 4648] The Base16, Base32, and Base64 Data Encodings
- [RFC 8439] ChaCha20 and Poly1305 for IETF Protocols
- [Wikipedia: Leet] History and conventions of leet speak character substitution

---

## Appendix A: Reference Implementation

A reference implementation in Go is available at:
`github.com/Snider/Enchantrix/pkg/crypt/std/lthn/lthn.go`

## Appendix B: Historical Note

The name "LTHN" derives from "Leet Hash N" or "Lethean" (relating to forgetfulness and oblivion in Greek mythology), referencing both the leet-speak character substitutions and the one-way nature of hash functions.

## Appendix C: Comparison with Other Schemes

| Scheme | Salt | Work Factor | Suitable for Passwords |
|--------|------|-------------|------------------------|
| LTHN | Derived | None | No |
| SHA-256 | None | None | No |
| HMAC-SHA256 | Key-based | None | No |
| bcrypt | Random | Yes | Yes |
| Argon2 | Random | Yes | Yes |
| scrypt | Random | Yes | Yes |

## Appendix D: Changelog

- **1.0** (2025-01-13): Initial specification

372
RFC-008-PRE-OBFUSCATION-LAYER.md
Normal file

# RFC-0001: Pre-Obfuscation Layer Protocol for AEAD Ciphers

**Status:** Informational
**Version:** 1.0
**Created:** 2025-01-13
**Author:** Snider

## Abstract

This document specifies a pre-obfuscation layer protocol designed to transform plaintext data before it reaches CPU encryption routines. The protocol provides an additional security layer that prevents raw plaintext patterns from being processed directly by encryption hardware, mitigating potential side-channel attack vectors while maintaining full compatibility with standard AEAD cipher constructions.

## Table of Contents

1. [Introduction](#1-introduction)
2. [Terminology](#2-terminology)
3. [Protocol Overview](#3-protocol-overview)
4. [Obfuscator Implementations](#4-obfuscator-implementations)
5. [Integration with AEAD Ciphers](#5-integration-with-aead-ciphers)
6. [Wire Format](#6-wire-format)
7. [Security Considerations](#7-security-considerations)
8. [Implementation Requirements](#8-implementation-requirements)
9. [Test Vectors](#9-test-vectors)
10. [References](#10-references)

## 1. Introduction

Modern AEAD (Authenticated Encryption with Associated Data) ciphers like ChaCha20-Poly1305 and AES-GCM provide strong cryptographic guarantees. However, the plaintext data is processed directly by CPU encryption instructions, potentially exposing patterns through side-channel attacks such as timing analysis, power analysis, or electromagnetic emanation.

This RFC defines a pre-obfuscation layer that transforms plaintext into an unpredictable byte sequence before encryption. The transformation is reversible, deterministic (given the same entropy source), and adds negligible overhead while providing defense-in-depth against side-channel attacks.

### 1.1 Design Goals

- **Reversibility**: All transformations MUST be perfectly reversible
- **Determinism**: Given the same entropy, transformations MUST produce identical results
- **Independence**: The obfuscation layer operates independently of the underlying cipher
- **Zero overhead on security**: The underlying AEAD cipher's security properties are preserved
- **Minimal computational overhead**: Transformations should add < 5% processing time

## 2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

**Plaintext**: The original data to be encrypted
**Obfuscated data**: Plaintext after pre-obfuscation transformation
**Ciphertext**: Obfuscated data after encryption
**Entropy**: A source of randomness used to derive transformation parameters (typically the nonce)
**Key stream**: A deterministic sequence of bytes derived from entropy
**Permutation**: A bijective mapping of byte positions
## 3. Protocol Overview

The pre-obfuscation protocol operates in two stages:

### 3.1 Encryption Flow

```
Plaintext --> Obfuscate(plaintext, entropy) --> Obfuscated --> Encrypt --> Ciphertext
```

1. Generate a cryptographic nonce for the AEAD cipher
2. Apply the obfuscation transformation using the nonce as entropy
3. Encrypt the obfuscated data using the AEAD cipher
4. Output: `[nonce || ciphertext || auth_tag]`

### 3.2 Decryption Flow

```
Ciphertext --> Decrypt --> Obfuscated --> Deobfuscate(obfuscated, entropy) --> Plaintext
```

1. Extract the nonce from the ciphertext prefix
2. Decrypt the ciphertext using the AEAD cipher
3. Apply the reverse obfuscation transformation using the extracted nonce
4. Output: Original plaintext

### 3.3 Entropy Derivation

The entropy source MUST be the same value used as the AEAD cipher nonce. This ensures:

- No additional random values need to be generated or stored
- The obfuscation is tied to the specific encryption operation
- Replay of ciphertext with different obfuscation is not possible

## 4. Obfuscator Implementations

This RFC defines two standard obfuscator implementations. Implementations MAY support additional obfuscators provided they meet the requirements in Section 8.

### 4.1 XOR Obfuscator

The XOR obfuscator generates a deterministic key stream from the entropy and XORs it with the plaintext.

#### 4.1.1 Key Stream Derivation

```
function deriveKeyStream(entropy: bytes, length: int) -> bytes:
    stream = empty byte array of size length
    blockNum = 0
    offset = 0

    while offset < length:
        block = SHA256(entropy || BigEndian64(blockNum))
        copyLen = min(32, length - offset)
        copy block[0:copyLen] to stream[offset:offset+copyLen]
        offset += copyLen
        blockNum += 1

    return stream
```

#### 4.1.2 Obfuscation

```
function obfuscate(data: bytes, entropy: bytes) -> bytes:
    if length(data) == 0:
        return data

    keyStream = deriveKeyStream(entropy, length(data))
    result = new byte array of size length(data)

    for i = 0 to length(data) - 1:
        result[i] = data[i] XOR keyStream[i]

    return result
```

#### 4.1.3 Deobfuscation

The XOR operation is symmetric; deobfuscation uses the same algorithm:

```
function deobfuscate(data: bytes, entropy: bytes) -> bytes:
    return obfuscate(data, entropy)  // XOR is self-inverse
```

### 4.2 Shuffle-Mask Obfuscator

The shuffle-mask obfuscator provides additional diffusion by combining a byte-level shuffle with an XOR mask.

#### 4.2.1 Permutation Generation

Uses a Fisher-Yates shuffle with deterministic randomness:

```
function generatePermutation(entropy: bytes, length: int) -> int[]:
    perm = [0, 1, 2, ..., length-1]
    seed = SHA256(entropy || "permutation")

    for i = length-1 downto 1:
        hash = SHA256(seed || BigEndian64(i))
        j = BigEndian64(hash[0:8]) mod (i + 1)
        swap perm[i] and perm[j]

    return perm
```

#### 4.2.2 Mask Derivation

```
function deriveMask(entropy: bytes, length: int) -> bytes:
    mask = empty byte array of size length
    blockNum = 0
    offset = 0

    while offset < length:
        block = SHA256(entropy || "mask" || BigEndian64(blockNum))
        copyLen = min(32, length - offset)
        copy block[0:copyLen] to mask[offset:offset+copyLen]
        offset += copyLen
        blockNum += 1

    return mask
```

#### 4.2.3 Obfuscation

```
function obfuscate(data: bytes, entropy: bytes) -> bytes:
    if length(data) == 0:
        return data

    perm = generatePermutation(entropy, length(data))
    mask = deriveMask(entropy, length(data))

    // Step 1: Apply mask
    masked = new byte array of size length(data)
    for i = 0 to length(data) - 1:
        masked[i] = data[i] XOR mask[i]

    // Step 2: Shuffle bytes according to permutation
    shuffled = new byte array of size length(data)
    for i = 0 to length(data) - 1:
        shuffled[i] = masked[perm[i]]

    return shuffled
```

#### 4.2.4 Deobfuscation

```
function deobfuscate(data: bytes, entropy: bytes) -> bytes:
    if length(data) == 0:
        return data

    perm = generatePermutation(entropy, length(data))
    mask = deriveMask(entropy, length(data))

    // Step 1: Unshuffle bytes (inverse permutation)
    unshuffled = new byte array of size length(data)
    for i = 0 to length(data) - 1:
        unshuffled[perm[i]] = data[i]

    // Step 2: Remove mask
    result = new byte array of size length(data)
    for i = 0 to length(data) - 1:
        result[i] = unshuffled[i] XOR mask[i]

    return result
```

## 5. Integration with AEAD Ciphers

### 5.1 XChaCha20-Poly1305 Integration

When used with XChaCha20-Poly1305:

- Nonce size: 24 bytes
- Key size: 32 bytes
- Auth tag size: 16 bytes

```
function encrypt(key: bytes[32], plaintext: bytes) -> bytes:
    nonce = random_bytes(24)
    obfuscated = obfuscator.obfuscate(plaintext, nonce)
    ciphertext = XChaCha20Poly1305_Seal(key, nonce, obfuscated, nil)
    return nonce || ciphertext  // nonce is prepended
```

```
function decrypt(key: bytes[32], data: bytes) -> bytes:
    if length(data) < 24 + 16:  // nonce + auth tag minimum
        return error("ciphertext too short")

    nonce = data[0:24]
    ciphertext = data[24:]
    obfuscated = XChaCha20Poly1305_Open(key, nonce, ciphertext, nil)
    plaintext = obfuscator.deobfuscate(obfuscated, nonce)
    return plaintext
```

### 5.2 Other AEAD Ciphers

The pre-obfuscation layer is cipher-agnostic. For other AEAD ciphers:

| Cipher | Nonce Size | Notes |
|--------|------------|-------|
| AES-128-GCM | 12 bytes | Standard nonce |
| AES-256-GCM | 12 bytes | Standard nonce |
| ChaCha20-Poly1305 | 12 bytes | Original ChaCha nonce |
| XChaCha20-Poly1305 | 24 bytes | Extended nonce (RECOMMENDED) |

## 6. Wire Format

The output wire format is:

```
+----------------+------------------------+
| Nonce          | Ciphertext             |
+----------------+------------------------+
| N bytes        | len(plaintext) + T     |
```

Where:
- `N` = Nonce size (cipher-dependent)
- `T` = Authentication tag size (typically 16 bytes)

The obfuscation parameters are NOT stored in the wire format. They are derived deterministically from the nonce.

## 7. Security Considerations

### 7.1 Side-Channel Mitigation

The pre-obfuscation layer provides defense-in-depth against:

- **Timing attacks**: Plaintext patterns do not influence encryption timing
- **Cache-timing attacks**: Memory access patterns are decorrelated from plaintext
- **Power analysis**: Power consumption patterns are decorrelated from plaintext structure

### 7.2 Cryptographic Security

The pre-obfuscation layer does NOT provide cryptographic security on its own. It MUST always be used in conjunction with a proper AEAD cipher. The security of the combined system relies entirely on the underlying AEAD cipher's security guarantees.

### 7.3 Entropy Requirements

The entropy source (nonce) MUST be generated using a cryptographically secure random number generator. Nonce reuse with the same key compromises both the obfuscation determinism and the AEAD security.

### 7.4 Key Stream Exhaustion

The XOR obfuscator uses SHA-256 in counter mode. For a single encryption:

- Maximum safely obfuscated data: 2^64 * 32 bytes (theoretical)
- Practical limit: Constrained by AEAD cipher limits

### 7.5 Permutation Uniqueness

The shuffle-mask obfuscator generates permutations deterministically. For data of length `n`:

- Total possible permutations: n!
- Entropy required for the full permutation space: log2(n!) bits
- SHA-256 provides 256 bits, sufficient for n up to ~57 bytes without collision concerns

For larger data, the permutation space is sampled uniformly but not exhaustively.

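The ~57-byte bound can be checked numerically: log2(n!) = ln Γ(n+1) / ln 2, and 57 is the largest n for which this stays within 256 bits. A quick sketch (helper name is our own):

```go
package main

import (
	"fmt"
	"math"
)

// bitsForPermutation returns log2(n!), the entropy needed to
// select one of n! permutations uniformly.
func bitsForPermutation(n int) float64 {
	lg, _ := math.Lgamma(float64(n) + 1) // ln(n!)
	return lg / math.Ln2
}

func main() {
	// Find the largest n whose full permutation space fits in 256 bits.
	n := 1
	for bitsForPermutation(n+1) <= 256 {
		n++
	}
	fmt.Println(n) // 57
}
```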
## 8. Implementation Requirements

Conforming implementations MUST:

1. Support at least the XOR obfuscator
2. Use SHA-256 for key stream and permutation derivation
3. Use big-endian byte ordering for block numbers
4. Handle zero-length data by returning it unchanged
5. Prepend the nonce to the ciphertext output
6. Accept and process the nonce from the ciphertext prefix during decryption

Conforming implementations SHOULD:

1. Support the shuffle-mask obfuscator
2. Use XChaCha20-Poly1305 as the default AEAD cipher
3. Provide constant-time implementations where feasible

## 9. Test Vectors

### 9.1 XOR Obfuscator

```
Entropy (hex):   000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Plaintext (hex): 48656c6c6f2c20576f726c6421
Expected key stream prefix (hex): [first 13 bytes of SHA256(entropy || 0x0000000000000000)]
```

### 9.2 Shuffle-Mask Obfuscator

```
Entropy (hex):    000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f
Plaintext:        "Hello"
Permutation seed: SHA256(entropy || "permutation")
Mask seed:        SHA256(entropy || "mask" || 0x0000000000000000)
```

## 10. Future Work

- [ ] Hardware-accelerated obfuscation implementations
- [ ] Additional obfuscator algorithms (block-based, etc.)
- [ ] Formal side-channel resistance analysis
- [ ] Integration benchmarks with different AEAD ciphers
- [ ] WASM compilation for browser environments

## 11. References

- [RFC 8439] ChaCha20 and Poly1305 for IETF Protocols
- [RFC 7539] ChaCha20 and Poly1305 for IETF Protocols (obsoleted by RFC 8439)
- [draft-irtf-cfrg-xchacha] XChaCha: eXtended-nonce ChaCha and AEAD_XChaCha20_Poly1305
- [FIPS 180-4] Secure Hash Standard (SHA-256)
- Fisher, R. A.; Yates, F. (1948). Statistical Tables for Biological, Agricultural and Medical Research

---

## Appendix A: Reference Implementation

A reference implementation in Go is available at:
`github.com/Snider/Enchantrix/pkg/enchantrix/crypto_sigil.go`

## Appendix B: Changelog

- **1.0** (2025-01-13): Initial specification

556
RFC-009-SIGIL-TRANSFORMATION.md
Normal file

# RFC-0003: Sigil Transformation Framework

**Status:** Standards Track
**Version:** 1.0
**Created:** 2025-01-13
**Author:** Snider

## Abstract

This document specifies the Sigil Transformation Framework, a composable interface for defining reversible and irreversible data transformations. Sigils provide a uniform abstraction for encoding, compression, hashing, encryption, and other byte-level operations, enabling declarative transformation pipelines that can be applied and reversed systematically.

## Table of Contents

1. [Introduction](#1-introduction)
2. [Terminology](#2-terminology)
3. [Interface Specification](#3-interface-specification)
4. [Sigil Categories](#4-sigil-categories)
5. [Standard Sigils](#5-standard-sigils)
6. [Composition and Chaining](#6-composition-and-chaining)
7. [Error Handling](#7-error-handling)
8. [Implementation Guidelines](#8-implementation-guidelines)
9. [Security Considerations](#9-security-considerations)
10. [Future Work](#10-future-work)
11. [References](#11-references)

## 1. Introduction

Data transformation is a fundamental operation in software systems. Common transformations include:

- **Encoding**: Converting between representations (hex, base64)
- **Compression**: Reducing data size (gzip, zstd)
- **Encryption**: Protecting confidentiality (AES, ChaCha20)
- **Hashing**: Computing digests (SHA-256, BLAKE2)
- **Formatting**: Restructuring data (JSON minification)

The Sigil framework provides a uniform interface for all these operations, enabling:

- Declarative transformation pipelines
- Automatic reversal of transformation chains
- Composable, reusable transformation units
- Clear semantics for reversible vs. irreversible operations

### 1.1 Design Principles

1. **Simplicity**: Two methods, clear contract
2. **Composability**: Sigils combine naturally
3. **Reversibility awareness**: Explicit handling of one-way operations
4. **Null safety**: Defined behavior for nil/empty inputs
5. **Error propagation**: Clear error semantics

## 2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

**Sigil**: A transformation unit implementing the Sigil interface
**In operation**: The forward transformation (encode, compress, encrypt, hash)
**Out operation**: The reverse transformation (decode, decompress, decrypt)
**Reversible sigil**: A sigil where Out(In(x)) = x for all valid x
**Irreversible sigil**: A sigil where Out returns the input unchanged or errors
**Symmetric sigil**: A sigil where In(x) = Out(x) (e.g., byte reversal)
**Transmutation**: Applying a sequence of sigils to data

## 3. Interface Specification

### 3.1 Sigil Interface

```
interface Sigil {
    // In transforms the data (forward operation).
    // Returns transformed data and any error encountered.
    In(data: bytes) -> (bytes, error)

    // Out reverses the transformation (reverse operation).
    // For irreversible sigils, returns data unchanged.
    Out(data: bytes) -> (bytes, error)
}
```

### 3.2 Method Contracts

#### 3.2.1 In Method

The `In` method MUST:

- Accept a byte slice as input
- Return a byte slice as output
- Return nil output for nil input (without error)
- Return an empty slice for empty input (without error)
- Return an error if the transformation fails

#### 3.2.2 Out Method

The `Out` method MUST:

- Accept a byte slice as input
- Return a byte slice as output
- Return nil output for nil input (without error)
- Return an empty slice for empty input (without error)
- For reversible sigils: return the original data before `In` was applied
- For irreversible sigils: return the input unchanged (passthrough)

### 3.3 Transmute Function

The framework provides a helper function for applying multiple sigils:

```
function Transmute(data: bytes, sigils: Sigil[]) -> (bytes, error):
    for each sigil in sigils:
        data, err = sigil.In(data)
        if err != nil:
            return nil, err
    return data, nil
```

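The interface and both chaining helpers fit in a few lines of Go. A minimal sketch (not the reference implementation), demonstrated with the symmetric reverse sigil from Section 5.2.1:

```go
package main

import "fmt"

// Sigil is the two-method interface from Section 3.1.
type Sigil interface {
	In(data []byte) ([]byte, error)
	Out(data []byte) ([]byte, error)
}

// ReverseSigil is the symmetric byte-reversal sigil.
type ReverseSigil struct{}

func (ReverseSigil) In(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	result := make([]byte, len(data))
	for i, b := range data {
		result[len(data)-1-i] = b
	}
	return result, nil
}

func (s ReverseSigil) Out(data []byte) ([]byte, error) { return s.In(data) }

// Transmute applies each sigil's In left-to-right (Section 3.3).
func Transmute(data []byte, sigils []Sigil) ([]byte, error) {
	var err error
	for _, s := range sigils {
		if data, err = s.In(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

// ReverseTransmute applies each sigil's Out right-to-left (Section 6.2).
func ReverseTransmute(data []byte, sigils []Sigil) ([]byte, error) {
	var err error
	for i := len(sigils) - 1; i >= 0; i-- {
		if data, err = sigils[i].Out(data); err != nil {
			return nil, err
		}
	}
	return data, nil
}

func main() {
	chain := []Sigil{ReverseSigil{}}
	packed, _ := Transmute([]byte("Hello"), chain)
	unpacked, _ := ReverseTransmute(packed, chain)
	fmt.Printf("%s -> %s\n", packed, unpacked) // olleH -> Hello
}
```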
## 4. Sigil Categories

### 4.1 Reversible Sigils

Reversible sigils can recover the original input from the output.

**Property**: For any valid input `x`:
```
sigil.Out(sigil.In(x)) == x
```

Examples:
- Encoding sigils (hex, base64)
- Compression sigils (gzip)
- Encryption sigils (ChaCha20-Poly1305)

### 4.2 Irreversible Sigils

Irreversible sigils perform one-way transformations.

**Property**: The `Out` method returns input unchanged:
```
sigil.Out(x) == x
```

Examples:
- Hash sigils (SHA-256, MD5)
- Truncation sigils

### 4.3 Symmetric Sigils

Symmetric sigils have identical `In` and `Out` operations.

**Property**: For any input `x`:
```
sigil.In(x) == sigil.Out(x)
```

Examples:
- Byte reversal
- XOR with fixed key
- Bitwise NOT

## 5. Standard Sigils

### 5.1 Encoding Sigils

#### 5.1.1 Hex Sigil

Encodes data to hexadecimal representation.

| Property | Value |
|----------|-------|
| Name | `hex` |
| Category | Reversible |
| In | Binary to hex ASCII |
| Out | Hex ASCII to binary |
| Output expansion | 2x |

```
In("Hello")       -> "48656c6c6f"
Out("48656c6c6f") -> "Hello"
```

#### 5.1.2 Base64 Sigil

Encodes data to Base64 representation (RFC 4648).

| Property | Value |
|----------|-------|
| Name | `base64` |
| Category | Reversible |
| In | Binary to Base64 ASCII |
| Out | Base64 ASCII to binary |
| Output expansion | ~1.33x |

```
In("Hello")     -> "SGVsbG8="
Out("SGVsbG8=") -> "Hello"
```

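Both encoding sigils map directly onto the Go standard library. A minimal sketch of the two (not the reference implementation):

```go
package main

import (
	"encoding/base64"
	"encoding/hex"
	"fmt"
)

// HexSigil encodes to/from hexadecimal (Section 5.1.1).
type HexSigil struct{}

func (HexSigil) In(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	out := make([]byte, hex.EncodedLen(len(data)))
	hex.Encode(out, data)
	return out, nil
}

func (HexSigil) Out(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	out := make([]byte, hex.DecodedLen(len(data)))
	n, err := hex.Decode(out, data)
	return out[:n], err
}

// Base64Sigil encodes to/from standard Base64 (Section 5.1.2, RFC 4648).
type Base64Sigil struct{}

func (Base64Sigil) In(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	return []byte(base64.StdEncoding.EncodeToString(data)), nil
}

func (Base64Sigil) Out(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	return base64.StdEncoding.DecodeString(string(data))
}

func main() {
	h, _ := HexSigil{}.In([]byte("Hello"))
	b, _ := Base64Sigil{}.In([]byte("Hello"))
	fmt.Println(string(h), string(b)) // 48656c6c6f SGVsbG8=
}
```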
### 5.2 Transformation Sigils

#### 5.2.1 Reverse Sigil

Reverses the byte order of the data.

| Property | Value |
|----------|-------|
| Name | `reverse` |
| Category | Symmetric |
| In | Reverse bytes |
| Out | Reverse bytes |
| Output expansion | 1x |

```
In("Hello")  -> "olleH"
Out("olleH") -> "Hello"
```

### 5.3 Compression Sigils

#### 5.3.1 Gzip Sigil

Compresses data using gzip (RFC 1952).

| Property | Value |
|----------|-------|
| Name | `gzip` |
| Category | Reversible |
| In | Compress |
| Out | Decompress |
| Output expansion | Variable (typically < 1x) |

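A gzip sigil is a thin wrapper over `compress/gzip`. A minimal sketch (not the reference implementation; a production version should also enforce the size limits discussed in Section 9.3):

```go
package main

import (
	"bytes"
	"compress/gzip"
	"fmt"
	"io"
)

// GzipSigil compresses on In and decompresses on Out (Section 5.3.1).
type GzipSigil struct{}

func (GzipSigil) In(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	var buf bytes.Buffer
	w := gzip.NewWriter(&buf)
	if _, err := w.Write(data); err != nil {
		return nil, err
	}
	if err := w.Close(); err != nil { // Close flushes the gzip trailer
		return nil, err
	}
	return buf.Bytes(), nil
}

func (GzipSigil) Out(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	r, err := gzip.NewReader(bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	defer r.Close()
	// A production implementation should cap this read to guard
	// against decompression bombs (Section 9.3).
	return io.ReadAll(r)
}

func main() {
	original := bytes.Repeat([]byte("abc"), 100)
	compressed, _ := GzipSigil{}.In(original)
	restored, _ := GzipSigil{}.Out(compressed)
	fmt.Println(len(compressed) < len(original), bytes.Equal(restored, original)) // true true
}
```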
### 5.4 Formatting Sigils

#### 5.4.1 JSON Sigil

Compacts JSON data by removing whitespace.

| Property | Value |
|----------|-------|
| Name | `json` |
| Category | Reversible* |
| In | Compact JSON |
| Out | Passthrough |

*Note: Whitespace is not recoverable; Out returns input unchanged.

#### 5.4.2 JSON-Indent Sigil

Pretty-prints JSON data with indentation.

| Property | Value |
|----------|-------|
| Name | `json-indent` |
| Category | Reversible* |
| In | Indent JSON (2 spaces) |
| Out | Passthrough |

### 5.5 Encryption Sigils

Encryption sigils provide authenticated encryption using AEAD ciphers.

#### 5.5.1 ChaCha20-Poly1305 Sigil

Encrypts data using XChaCha20-Poly1305 authenticated encryption.

| Property | Value |
|----------|-------|
| Name | `chacha20poly1305` |
| Category | Reversible |
| Key size | 32 bytes |
| Nonce size | 24 bytes (XChaCha variant) |
| Tag size | 16 bytes |
| In | Encrypt (generates nonce, prepends to output) |
| Out | Decrypt (extracts nonce from input prefix) |

**Critical Implementation Detail**: The nonce is embedded IN the ciphertext output, not transmitted separately:

```
In(plaintext) -> [24-byte nonce][ciphertext][16-byte tag]
Out(ciphertext_with_nonce) -> plaintext
```

**Construction**:

```go
sigil, err := NewChaChaPolySigil(key)  // key must be 32 bytes
ciphertext, err := sigil.In(plaintext)
plaintext, err := sigil.Out(ciphertext)
```

**Security Properties**:
- Authenticated: Poly1305 MAC prevents tampering
- Confidential: ChaCha20 stream cipher
- Nonce uniqueness: Random 24-byte nonce per encryption
- No nonce management required by caller

### 5.6 Hash Sigils

Hash sigils compute cryptographic digests. They are irreversible.

| Name | Algorithm | Output Size |
|------|-----------|-------------|
| `md4` | MD4 | 16 bytes |
| `md5` | MD5 | 16 bytes |
| `sha1` | SHA-1 | 20 bytes |
| `sha224` | SHA-224 | 28 bytes |
| `sha256` | SHA-256 | 32 bytes |
| `sha384` | SHA-384 | 48 bytes |
| `sha512` | SHA-512 | 64 bytes |
| `sha3-224` | SHA3-224 | 28 bytes |
| `sha3-256` | SHA3-256 | 32 bytes |
| `sha3-384` | SHA3-384 | 48 bytes |
| `sha3-512` | SHA3-512 | 64 bytes |
| `sha512-224` | SHA-512/224 | 28 bytes |
| `sha512-256` | SHA-512/256 | 32 bytes |
| `ripemd160` | RIPEMD-160 | 20 bytes |
| `blake2s-256` | BLAKE2s | 32 bytes |
| `blake2b-256` | BLAKE2b | 32 bytes |
| `blake2b-384` | BLAKE2b | 48 bytes |
| `blake2b-512` | BLAKE2b | 64 bytes |

For all hash sigils:
- `In(data)` returns the hash digest as raw bytes
- `Out(data)` returns data unchanged (passthrough)

## 6. Composition and Chaining

### 6.1 Forward Chain (Packing)

Sigils are applied left-to-right:

```
sigils = [gzip, base64, hex]
result = Transmute(data, sigils)

// Equivalent to:
result = hex.In(base64.In(gzip.In(data)))
```

### 6.2 Reverse Chain (Unpacking)

To reverse a chain, apply `Out` in reverse order:

```
function ReverseTransmute(data: bytes, sigils: Sigil[]) -> (bytes, error):
    for i = length(sigils) - 1 downto 0:
        data, err = sigils[i].Out(data)
        if err != nil:
            return nil, err
    return data, nil
```

### 6.3 Chain Properties

For a chain of reversible sigils `[s1, s2, s3]`:

```
original = ReverseTransmute(Transmute(data, [s1, s2, s3]), [s1, s2, s3])
// original == data
```

### 6.4 Mixed Chains

Chains MAY contain both reversible and irreversible sigils. Note that an irreversible sigil destroys the information needed by any reversible sigils earlier in the chain, so such chains cannot be unpacked:

```
sigils = [gzip, sha256]  // sha256 is irreversible

packed = Transmute(data, sigils)
// packed is the SHA-256 hash of gzip-compressed data

unpacked = ReverseTransmute(packed, sigils)
// sha256.Out is passthrough, but gzip.Out then fails:
// the digest is not valid gzip data, so the original is unrecoverable
```

## 7. Error Handling

### 7.1 Error Categories

| Category | Description | Recovery |
|----------|-------------|----------|
| Input error | Invalid input format | Check input validity |
| State error | Sigil not properly configured | Initialize sigil |
| Resource error | Memory/IO failure | Retry or abort |
| Algorithm error | Cryptographic failure | Check keys/params |

### 7.2 Error Propagation

Errors MUST propagate immediately:

```
function Transmute(data: bytes, sigils: Sigil[]) -> (bytes, error):
    for each sigil in sigils:
        data, err = sigil.In(data)
        if err != nil:
            return nil, err  // Stop immediately
    return data, nil
```

### 7.3 Partial Results

On error, implementations MUST NOT return partial results. Either:
- Return complete transformed data, or
- Return nil with an error

## 8. Implementation Guidelines

### 8.1 Sigil Factory

Implementations SHOULD provide a factory function:

```
function NewSigil(name: string) -> (Sigil, error):
    switch name:
        case "hex": return new HexSigil()
        case "base64": return new Base64Sigil()
        case "gzip": return new GzipSigil()
        // ... etc
        default: return nil, error("unknown sigil: " + name)
```

### 8.2 Null Safety

```
function In(data: bytes) -> (bytes, error):
    if data == nil:
        return nil, nil  // NOT an error
    if length(data) == 0:
        return [], nil   // Empty slice, NOT nil
    // ... perform transformation
```

### 8.3 Immutability

Sigils SHOULD NOT modify the input slice:

```
// CORRECT: Create new slice
result := make([]byte, len(data))
// ... transform into result

// INCORRECT: Modify in place
data[0] = transformed // Don't do this
```

### 8.4 Thread Safety

Sigils SHOULD be safe for concurrent use:

- Avoid mutable state in sigil instances
- Use synchronization if state is required
- Document thread-safety guarantees

## 9. Security Considerations

### 9.1 Hash Sigil Security

- MD4, MD5, and SHA-1 are cryptographically broken for collision resistance
- Use SHA-256 or stronger for security-critical applications
- Hash sigils do NOT provide authentication

### 9.2 Compression Oracle Attacks

When combining compression and encryption sigils:
- Be aware of CRIME/BREACH-style attacks
- Do not compress data containing secrets alongside attacker-controlled data

### 9.3 Memory Safety

- Validate output buffer sizes before allocation
- Implement maximum input size limits
- Handle decompression bombs (zip bombs)

### 9.4 Timing Attacks

- Comparison operations should be constant-time where security-relevant
- Hash comparisons should use constant-time comparison functions

## 10. Future Work

- [ ] AES-GCM encryption sigil for environments requiring AES
- [ ] Zstd compression sigil with configurable compression levels
- [ ] Streaming sigil interface for large data processing
- [ ] Sigil metadata interface for reporting transformation properties
- [ ] WebAssembly compilation for browser-based sigil operations
- [ ] Hardware acceleration detection and utilization

## 11. References

- [RFC 4648] The Base16, Base32, and Base64 Data Encodings
- [RFC 1952] GZIP file format specification
- [RFC 8259] The JavaScript Object Notation (JSON) Data Interchange Format
- [FIPS 180-4] Secure Hash Standard
- [FIPS 202] SHA-3 Standard
- [RFC 8439] ChaCha20 and Poly1305 for IETF Protocols

---

## Appendix A: Sigil Name Registry

| Name | Category | Reversible | Notes |
|------|----------|------------|-------|
| `reverse` | Transform | Yes (symmetric) | Byte reversal |
| `hex` | Encoding | Yes | Hexadecimal |
| `base64` | Encoding | Yes | RFC 4648 |
| `gzip` | Compression | Yes | RFC 1952 |
| `zstd` | Compression | Yes | Zstandard |
| `json` | Formatting | Partial | Compacts JSON |
| `json-indent` | Formatting | Partial | Pretty-prints JSON |
| `chacha20poly1305` | Encryption | Yes | XChaCha20-Poly1305 AEAD |
| `md4` | Hash | No | 128-bit |
| `md5` | Hash | No | 128-bit |
| `sha1` | Hash | No | 160-bit |
| `sha224` | Hash | No | 224-bit |
| `sha256` | Hash | No | 256-bit |
| `sha384` | Hash | No | 384-bit |
| `sha512` | Hash | No | 512-bit |
| `sha3-*` | Hash | No | SHA-3 family |
| `sha512-*` | Hash | No | SHA-512 truncated |
| `ripemd160` | Hash | No | 160-bit |
| `blake2s-256` | Hash | No | 256-bit |
| `blake2b-*` | Hash | No | BLAKE2b family |

## Appendix B: Reference Implementation

A reference implementation in Go is available at:
- Interface: `github.com/Snider/Enchantrix/pkg/enchantrix/enchantrix.go`
- Standard sigils: `github.com/Snider/Enchantrix/pkg/enchantrix/sigils.go`

## Appendix C: Custom Sigil Example

```go
// ROT13Sigil implements a simple letter rotation cipher.
type ROT13Sigil struct{}

func (s *ROT13Sigil) In(data []byte) ([]byte, error) {
	if data == nil {
		return nil, nil
	}
	result := make([]byte, len(data))
	for i, b := range data {
		if b >= 'A' && b <= 'Z' {
			result[i] = 'A' + (b-'A'+13)%26
		} else if b >= 'a' && b <= 'z' {
			result[i] = 'a' + (b-'a'+13)%26
		} else {
			result[i] = b
		}
	}
	return result, nil
}

func (s *ROT13Sigil) Out(data []byte) ([]byte, error) {
	return s.In(data) // ROT13 is symmetric
}
```

## Appendix D: Changelog

- **1.0** (2025-01-13): Initial specification

433
RFC-010-TRIX-CONTAINER.md
Normal file

# RFC-0002: TRIX Binary Container Format

**Status:** Standards Track
**Version:** 2.0
**Created:** 2025-01-13
**Author:** Snider

## Abstract

This document specifies the TRIX binary container format, a generic and extensible file format designed to store arbitrary binary payloads alongside structured JSON metadata. The format is protocol-agnostic, supporting any encryption scheme, compression algorithm, or data transformation while providing a consistent structure for metadata discovery and payload extraction.

## Table of Contents

1. [Introduction](#1-introduction)
2. [Terminology](#2-terminology)
3. [Format Specification](#3-format-specification)
4. [Header Specification](#4-header-specification)
5. [Encoding Process](#5-encoding-process)
6. [Decoding Process](#6-decoding-process)
7. [Checksum Verification](#7-checksum-verification)
8. [Magic Number Registry](#8-magic-number-registry)
9. [Security Considerations](#9-security-considerations)
10. [IANA Considerations](#10-iana-considerations)
11. [References](#11-references)

## 1. Introduction

The TRIX format addresses the need for a simple, self-describing binary container that can wrap any payload type with extensible metadata. Unlike format-specific containers (such as encrypted archive formats), TRIX separates the concerns of:

- **Container structure**: How data is organized on disk/wire
- **Payload semantics**: What the payload contains and how to process it
- **Metadata extensibility**: Application-specific attributes

### 1.1 Design Goals

- **Simplicity**: Minimal overhead, easy to implement
- **Extensibility**: JSON header allows arbitrary metadata
- **Protocol-agnostic**: No assumptions about payload encryption or encoding
- **Streaming-friendly**: Header length prefix enables streaming reads
- **Magic-number customizable**: Applications can define their own identifiers

### 1.2 Use Cases

- Encrypted data interchange
- Signed document containers
- Configuration file packaging
- Backup archive format
- Inter-service message envelopes

## 2. Terminology

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

- **Container**: A complete TRIX-formatted byte sequence
- **Magic Number**: A 4-byte identifier at the start of the container
- **Header**: A JSON object containing metadata about the payload
- **Payload**: The arbitrary binary data stored in the container
- **Checksum**: An optional integrity verification value

## 3. Format Specification

### 3.1 Overview

A TRIX container consists of five sequential fields:

```
+----------------+---------+---------------+----------------+-----------+
|  Magic Number  | Version | Header Length |  JSON Header   |  Payload  |
+----------------+---------+---------------+----------------+-----------+
|    4 bytes     | 1 byte  |    4 bytes    |    Variable    | Variable  |
+----------------+---------+---------------+----------------+-----------+
```

Total minimum size: 9 bytes (empty header, empty payload)

### 3.2 Field Definitions

#### 3.2.1 Magic Number (4 bytes)

A 4-byte ASCII string identifying the file type. This field:

- MUST be exactly 4 bytes
- SHOULD contain printable ASCII characters
- Is application-defined (not mandated by this specification)

Common conventions:

- `TRIX` - Generic TRIX container
- First character uppercase, application-specific identifier

#### 3.2.2 Version (1 byte)

An unsigned 8-bit integer indicating the format version.

| Value | Description |
|-------|-------------|
| 0x00 | Reserved |
| 0x01 | Version 1.0 (deprecated) |
| 0x02 | Version 2.0 (current) |
| 0x03-0xFF | Reserved for future versions |

Implementations MUST reject containers with unrecognized versions.

#### 3.2.3 Header Length (4 bytes)

A 32-bit unsigned integer in big-endian byte order specifying the length of the JSON Header in bytes.

- Minimum value: 0 (a zero-length header is valid; the smallest non-empty header, `{}`, occupies 2 bytes)
- Maximum value: 16,777,215 (16 MB - 1 byte)

Implementations MUST reject headers exceeding 16 MB to prevent denial-of-service attacks.

```
Header Length = BigEndian32(length_of_json_header_bytes)
```

#### 3.2.4 JSON Header (Variable)

A UTF-8 encoded JSON object containing metadata. The header:

- MUST be valid JSON (RFC 8259)
- MUST be a JSON object (not array, string, or primitive)
- SHOULD use UTF-8 encoding without BOM
- MAY be empty (`{}`)

#### 3.2.5 Payload (Variable)

The arbitrary binary payload. The payload:

- MAY be empty (zero bytes)
- MAY contain any binary data
- Length is implicitly determined by: `container_length - 9 - header_length`

## 4. Header Specification

### 4.1 Reserved Header Fields

The following header fields have defined semantics:

| Field | Type | Description |
|-------|------|-------------|
| `content_type` | string | MIME type of the payload (before any transformations) |
| `checksum` | string | Hex-encoded checksum of the payload |
| `checksum_algo` | string | Algorithm used for checksum (e.g., "sha256") |
| `created_at` | string | ISO 8601 timestamp of creation |
| `encryption_algorithm` | string | Encryption algorithm identifier |
| `compression` | string | Compression algorithm identifier |
| `sigils` | array | Ordered list of transformation sigil names |

### 4.2 Extension Fields

Applications MAY include additional fields. To avoid conflicts:

- Custom fields SHOULD use a namespace prefix (e.g., `x-myapp-field`)
- Standard field names are lowercase with underscores

### 4.3 Example Headers

#### Encrypted payload:

```json
{
  "content_type": "application/octet-stream",
  "encryption_algorithm": "xchacha20poly1305",
  "created_at": "2025-01-13T12:00:00Z"
}
```

#### Compressed and encoded payload:

```json
{
  "content_type": "text/plain",
  "compression": "gzip",
  "sigils": ["gzip", "base64"],
  "checksum": "a591a6d40bf420404a011733cfb7b190d62c65bf0bcda32b57b277d9ad9f146e",
  "checksum_algo": "sha256"
}
```

#### Minimal header:

```json
{}
```

## 5. Encoding Process

### 5.1 Algorithm

```
function Encode(payload: bytes, header: object, magic: string) -> bytes:
    // Validate magic number
    if length(magic) != 4:
        return error("magic number must be 4 bytes")

    // Serialize header to JSON
    header_bytes = JSON.serialize(header)
    header_length = length(header_bytes)

    // Validate header size
    if header_length > 16777215:
        return error("header exceeds maximum size")

    // Build container
    container = empty byte buffer

    // Write magic number (4 bytes)
    container.write(magic)

    // Write version (1 byte)
    container.write(0x02)

    // Write header length (4 bytes, big-endian)
    container.write(BigEndian32(header_length))

    // Write JSON header
    container.write(header_bytes)

    // Write payload
    container.write(payload)

    return container.bytes()
```

### 5.2 Checksum Integration

If integrity verification is required:

```
function EncodeWithChecksum(payload: bytes, header: object, magic: string, algo: string) -> bytes:
    checksum = Hash(algo, payload)
    header["checksum"] = HexEncode(checksum)
    header["checksum_algo"] = algo
    return Encode(payload, header, magic)
```

## 6. Decoding Process

### 6.1 Algorithm

```
function Decode(container: bytes, expected_magic: string) -> (header: object, payload: bytes):
    // Validate minimum size
    if length(container) < 9:
        return error("container too small")

    // Read and verify magic number
    magic = container[0:4]
    if magic != expected_magic:
        return error("invalid magic number")

    // Read and verify version
    version = container[4]
    if version != 0x02:
        return error("unsupported version")

    // Read header length
    header_length = BigEndian32(container[5:9])

    // Validate header length
    if header_length > 16777215:
        return error("header length exceeds maximum")

    if length(container) < 9 + header_length:
        return error("container truncated")

    // Read and parse header
    header_bytes = container[9:9+header_length]
    header = JSON.parse(header_bytes)

    // Read payload
    payload = container[9+header_length:]

    return (header, payload)
```
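
The decode side mirrors the encode side. Again a hedged sketch rather than the reference API; validation order follows the pseudocode above:

```go
package main

import (
	"encoding/binary"
	"encoding/json"
	"errors"
	"fmt"
)

// Decode splits a TRIX container into its parsed JSON header and raw payload.
func Decode(container []byte, expectedMagic string) (map[string]any, []byte, error) {
	if len(container) < 9 {
		return nil, nil, errors.New("container too small")
	}
	if string(container[0:4]) != expectedMagic {
		return nil, nil, errors.New("invalid magic number")
	}
	if container[4] != 0x02 {
		return nil, nil, errors.New("unsupported version")
	}
	headerLen := int(binary.BigEndian.Uint32(container[5:9]))
	if headerLen > 1<<24-1 {
		return nil, nil, errors.New("header length exceeds maximum")
	}
	if len(container) < 9+headerLen {
		return nil, nil, errors.New("container truncated")
	}
	header := map[string]any{}
	if headerLen > 0 {
		if err := json.Unmarshal(container[9:9+headerLen], &header); err != nil {
			return nil, nil, err
		}
	}
	return header, container[9+headerLen:], nil
}

func main() {
	// Hand-built minimal container: magic "TRIX", version 0x02,
	// header length 2 (big-endian), header "{}", payload "hi".
	c := append([]byte("TRIX\x02\x00\x00\x00\x02{}"), []byte("hi")...)
	header, payload, err := Decode(c, "TRIX")
	if err != nil {
		panic(err)
	}
	fmt.Printf("header=%v payload=%s\n", header, payload)
}
```

Note that every length check happens before the corresponding slice is taken, so a truncated or hostile container fails cleanly instead of panicking.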

### 6.2 Streaming Decode

For large files, streaming decode is RECOMMENDED:

```
function StreamDecode(reader: Reader, expected_magic: string) -> (header: object, payload_reader: Reader):
    // Read fixed-size prefix
    prefix = reader.read(9)

    // Validate magic and version
    magic = prefix[0:4]
    version = prefix[4]
    header_length = BigEndian32(prefix[5:9])

    // Read header
    header_bytes = reader.read(header_length)
    header = JSON.parse(header_bytes)

    // Return remaining reader for payload streaming
    return (header, reader)
```

## 7. Checksum Verification

### 7.1 Supported Algorithms

| Algorithm ID | Output Size | Notes |
|--------------|-------------|-------|
| `md5` | 16 bytes | NOT RECOMMENDED for security |
| `sha1` | 20 bytes | NOT RECOMMENDED for security |
| `sha256` | 32 bytes | RECOMMENDED |
| `sha384` | 48 bytes | |
| `sha512` | 64 bytes | |
| `blake2b-256` | 32 bytes | |
| `blake2b-512` | 64 bytes | |

### 7.2 Verification Process

```
function VerifyChecksum(header: object, payload: bytes) -> bool:
    if "checksum" not in header:
        return true  // No checksum to verify

    algo = header["checksum_algo"]
    expected = HexDecode(header["checksum"])
    actual = Hash(algo, payload)

    return constant_time_compare(expected, actual)
```

## 8. Magic Number Registry

This section defines conventions for magic number allocation.

### 8.1 Reserved Magic Numbers

| Magic | Reserved For |
|-------|--------------|
| `TRIX` | Generic TRIX containers |
| `\x00\x00\x00\x00` | Reserved (null) |
| `\xFF\xFF\xFF\xFF` | Reserved (test/invalid) |

### 8.2 Registered Magic Numbers

The following magic numbers are registered for specific applications:

| Magic | Application | Description |
|-------|-------------|-------------|
| `SMSG` | Borg | Encrypted message/media container |
| `STIM` | Borg | Encrypted TIM container bundle |
| `STMF` | Borg | Secure To-Me Form (encrypted form data) |
| `TRIX` | Borg | Encrypted DataNode archive |

### 8.3 Allocation Guidelines

Applications SHOULD:

1. Use 4 printable ASCII characters
2. Start with an uppercase letter
3. Avoid common file format magic numbers (e.g., `%PDF`, `PK\x03\x04`)
4. Register custom magic numbers in their documentation

## 9. Security Considerations

### 9.1 Header Injection

The JSON header is parsed before processing. Implementations MUST:

- Validate JSON syntax strictly
- Reject headers with duplicate keys
- Not execute header field values as code

### 9.2 Denial of Service

The 16 MB header limit prevents memory exhaustion attacks. Implementations SHOULD:

- Reject headers before full allocation if length exceeds the limit
- Implement timeouts for header parsing
- Limit recursion depth in JSON parsing

### 9.3 Path Traversal

Header fields like `filename` MUST NOT be used directly for filesystem operations without sanitization.

### 9.4 Checksum Security

- MD5 and SHA1 checksums provide integrity but not authenticity
- For tamper detection, use HMAC or digital signatures
- Checksum verification MUST use constant-time comparison

### 9.5 Version Negotiation

Implementations MUST NOT attempt to parse containers with unknown versions, as the format may change incompatibly.

## 10. IANA Considerations

This document does not require IANA actions. The TRIX format is application-defined and does not use IANA-managed namespaces.

Future versions may define:

- Media type registration (e.g., `application/x-trix`)
- Magic number registry

## 11. Future Work

- [ ] Media type registration (`application/x-trix`, `application/x-smsg`, etc.)
- [ ] Formal magic number registry with registration process
- [ ] Streaming encoding/decoding for large payloads
- [ ] Header compression for bandwidth-constrained environments
- [ ] Sub-container nesting specification (TRIX within TRIX)

## 12. References

- [RFC 2119] Key words for use in RFCs to Indicate Requirement Levels
- [RFC 6838] Media Type Specifications and Registration Procedures
- [RFC 8259] The JavaScript Object Notation (JSON) Data Interchange Format

---

## Appendix A: Binary Layout Diagram

```
Byte offset:  0         4    5         9         9+H       9+H+P
              |---------|----|---------|---------|---------|
              |  Magic  | V  | HdrLen  | Header  | Payload |
              |   (4)   |(1) |   (4)   |   (H)   |   (P)   |
              |---------|----|---------|---------|---------|

V = Version byte
H = Header length (from HdrLen field)
P = Payload length (remaining bytes)
```

## Appendix B: Reference Implementation

A reference implementation in Go is available at:
`github.com/Snider/Enchantrix/pkg/trix/trix.go`

## Appendix C: Changelog

- **2.0** (2025-01-13): Current version with JSON header
- **1.0** (deprecated): Initial version with fixed header fields

---

# RFC-011: Open Source DRM for Independent Artists

**Status**: Proposed
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-10
**License**: EUPL-1.2

---

**Revision History**

| Date | Status | Notes |
|------|--------|-------|
| 2026-01-13 | Proposed | **Adaptive Bitrate (ABR)**: HLS-style multi-quality streaming with encrypted variants. New Section 3.7. All Future Work items complete. |
| 2026-01-12 | Proposed | **Chunked streaming**: v3 now supports optional ChunkSize for independently decryptable chunks, enabling seek, HTTP Range, and decrypt-while-downloading. |
| 2026-01-12 | Proposed | **v3 Streaming**: LTHN rolling keys with configurable cadence (daily/12h/6h/1h). CEK wrapping for zero-trust streaming. WASM v1.3.0 with decryptV3(). |
| 2026-01-10 | Proposed | Technical review passed. Fixed section numbering (7.x, 8.x, 9.x, 11.x). Updated WASM size to 5.9MB. Implementation verified complete for stated scope. |

---

## Abstract

This RFC describes an open-source Digital Rights Management (DRM) system designed for independent artists to distribute encrypted media directly to fans without platform intermediaries. The system uses ChaCha20-Poly1305 authenticated encryption with a "password-as-license" model, enabling zero-trust distribution where the encryption key serves as both the license and the decryption mechanism.

## 1. Motivation

### 1.1 The Problem

Traditional music distribution forces artists into platforms that:

- Take 30-70% of revenue (Spotify, Apple Music, Bandcamp)
- Control the relationship between artist and fan
- Require ongoing subscription for access
- Can delist content unilaterally

Existing DRM systems (Widevine, FairPlay) require:

- Platform integration and licensing fees
- Centralized key servers
- Proprietary implementations
- Trust in third parties

### 1.2 The Solution

A DRM system where:

- **The password IS the license** - no key servers, no escrow
- **Artists keep 100%** - sell direct, any payment processor
- **Host anywhere** - CDN, IPFS, S3, personal server
- **Browser or native** - same encryption, same content
- **Open source** - auditable, forkable, community-owned

## 2. Design Philosophy

### 2.1 "Honest DRM"

Traditional DRM operates on a flawed premise: that sufficiently complex technology can prevent copying. History proves otherwise; every DRM system has been broken. The result is systems that:

- Punish paying customers with restrictions
- Get cracked within days or weeks anyway
- Require massive infrastructure (key servers, license servers)
- Create single points of failure

This system embraces a different philosophy: **DRM for honest people**.

The goal isn't to stop determined pirates (impossible). The goal is:

1. Make the legitimate path easy and pleasant
2. Make casual sharing slightly inconvenient
3. Create a social/economic deterrent (sharing = giving away money)
4. Remove all friction for paying customers

### 2.2 Password-as-License

The password IS the license. This is not a limitation; it's the core innovation.

```
Traditional DRM:
  Purchase → License Server → Device Registration → Key Exchange → Playback
  (5 steps, 3 network calls, 2 points of failure)

dapp.fm:
  Purchase → Password → Playback
  (2 steps, 0 network calls, 0 points of failure)
```

Benefits:

- **No accounts** - No email harvesting, no password resets, no data breaches
- **No servers** - Artist can disappear; content still works forever
- **No revocation anxiety** - You bought it, you own it
- **Transferable** - Give your password to a friend (like lending a CD)
- **Archival** - Works in 50 years if you have the password

### 2.3 Encryption as Access Control

We use military-grade encryption (ChaCha20-Poly1305) not because we need military-grade security, but because:

1. It's fast (important for real-time media)
2. It's auditable (open standard, RFC 8439)
3. It's already implemented everywhere (Go stdlib, browser crypto)
4. It provides authenticity (the Poly1305 MAC prevents tampering)

The threat model isn't nation-states; it's casual piracy. The encryption just needs to be "not worth the effort to crack for a $10 album."

## 3. Architecture

### 3.1 System Components

```
┌─────────────────────────────────────────────────────────────┐
│                     DISTRIBUTION LAYER                      │
│        CDN / IPFS / S3 / GitHub / Personal Server           │
│      (Encrypted .smsg files - safe to host anywhere)        │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                       PLAYBACK LAYER                        │
│  ┌─────────────────┐      ┌─────────────────────────────┐   │
│  │  Browser Demo   │      │     Native Desktop App      │   │
│  │     (WASM)      │      │        (Wails + Go)         │   │
│  │                 │      │                             │   │
│  │  ┌───────────┐  │      │  ┌───────────────────────┐  │   │
│  │  │ stmf.wasm │  │      │  │    Go SMSG Library    │  │   │
│  │  │           │  │      │  │      (pkg/smsg)       │  │   │
│  │  │ ChaCha20  │  │      │  │                       │  │   │
│  │  │ Poly1305  │  │      │  │  ChaCha20-Poly1305    │  │   │
│  │  └───────────┘  │      │  └───────────────────────┘  │   │
│  └─────────────────┘      └─────────────────────────────┘   │
└─────────────────────────────────────────────────────────────┘
                              │
                              ▼
┌─────────────────────────────────────────────────────────────┐
│                        LICENSE LAYER                        │
│         Password = License Key = Decryption Key             │
│      (Sold via Gumroad, Stripe, PayPal, Crypto, etc.)       │
└─────────────────────────────────────────────────────────────┘
```

### 3.2 SMSG Container Format

See: `examples/formats/smsg-format.md`

Key properties:

- **Magic number**: "SMSG" (0x534D5347)
- **Algorithm**: ChaCha20-Poly1305 (authenticated encryption)
- **Format**: v1 (JSON+base64) or v2 (binary, 25% smaller)
- **Compression**: zstd (default), gzip, or none
- **Manifest**: Unencrypted metadata (title, artist, license, expiry, links)
- **Payload**: Encrypted media with attachments

#### Format Versions

| Format | Payload Structure | Size | Speed | Use Case |
|--------|------------------|------|-------|----------|
| **v1** | JSON with base64-encoded attachments | +33% overhead | Baseline | Legacy |
| **v2** | Binary header + raw attachments + zstd | ~Original size | 3-10x faster | Download-to-own |
| **v3** | CEK + wrapped keys + rolling LTHN | ~Original size | 3-10x faster | **Streaming** |
| **v3+chunked** | v3 with independently decryptable chunks | ~Original size | Seekable | **Chunked streaming** |

v2 is recommended for download-to-own (perpetual license). v3 is recommended for streaming (time-limited access). v3 with chunking is recommended for large files requiring seek capability or decrypt-while-downloading.

### 3.3 Key Derivation (v1/v2)

```
  License Key (password)
           │
           ▼
      SHA-256 Hash
           │
           ▼
 32-byte Symmetric Key
           │
           ▼
ChaCha20-Poly1305 Decryption
```

Simple, auditable, no key escrow.

**Note on password hashing**: SHA-256 is used for simplicity and speed. For high-value content, artists may choose to use stronger KDFs (Argon2, scrypt) in custom implementations. The format supports algorithm negotiation via the header.

### 3.4 Streaming Key Derivation (v3)

The v3 format uses **LTHN rolling keys** for zero-trust streaming. The platform controls the key refresh cadence.

```
┌──────────────────────────────────────────────────────────────────┐
│                      v3 STREAMING KEY FLOW                       │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  SERVER (encryption time):                                       │
│  ─────────────────────────                                       │
│  1. Generate random CEK (Content Encryption Key)                 │
│  2. Encrypt content with CEK (one-time)                          │
│  3. For current period AND next period:                          │
│       streamKey  = SHA256(LTHN(period:license:fingerprint))      │
│       wrappedKey = ChaCha(CEK, streamKey)                        │
│  4. Store wrapped keys in header (CEK never transmitted)         │
│                                                                  │
│  CLIENT (decryption time):                                       │
│  ────────────────────────                                        │
│  1. Derive streamKey = SHA256(LTHN(period:license:fingerprint))  │
│  2. Try to unwrap CEK from current period key                    │
│  3. If fails, try next period key                                │
│  4. Decrypt content with unwrapped CEK                           │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

#### LTHN Hash Function

LTHN is rainbow-table resistant because the salt is derived from the input itself:

```
LTHN(input) = SHA256(input + reverse_leet(input))

where reverse_leet reverses the input and swaps: o↔0, l↔1, e↔3, a↔4, s↔z, t↔7

Example:
LTHN("2026-01-12:license:fp")
  = SHA256("2026-01-12:license:fp" + "pf:3zn3ci1:21-10-6202")
```

You cannot compute the hash without knowing the original input.
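
A sketch of the transformation, written to reproduce the worked example above. Note the example leaves digits untouched ("21-10-6202" is not re-leeted), so this sketch applies the substitutions to letters one way only; the function names are ours:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// leet holds the letter substitutions from the definition above. Digits are
// left untouched, matching the worked example.
var leet = map[rune]rune{'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7'}

// reverseLeet reverses the input and applies the leet substitutions.
func reverseLeet(s string) string {
	in := []rune(s)
	out := make([]rune, len(in))
	for i := range in {
		c := in[len(in)-1-i]
		if sub, ok := leet[c]; ok {
			c = sub
		}
		out[i] = c
	}
	return string(out)
}

// LTHN salts the input with its own leet-reversed form before hashing,
// so the salt cannot be precomputed without the input.
func LTHN(input string) string {
	sum := sha256.Sum256([]byte(input + reverseLeet(input)))
	return hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(reverseLeet("2026-01-12:license:fp")) // pf:3zn3ci1:21-10-6202
	fmt.Println(LTHN("2026-01-12:license:fp"))
}
```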

#### Cadence Options

The platform chooses the key refresh rate. Faster cadence = tighter access control.

| Cadence | Period Format | Rolling Window | Use Case |
|---------|---------------|----------------|----------|
| `daily` | `2026-01-12` | 24-48 hours | Standard streaming |
| `12h` | `2026-01-12-AM/PM` | 12-24 hours | Premium content |
| `6h` | `2026-01-12-00/06/12/18` | 6-12 hours | High-value content |
| `1h` | `2026-01-12-15` | 1-2 hours | Live events |

The rolling window ensures smooth key transitions. At any time, both the current period key AND the next period key are valid.

#### Zero-Trust Properties

- **Server never stores keys** - Derived on-demand from LTHN
- **Keys auto-expire** - No revocation mechanism needed
- **Sharing keys is pointless** - They expire within the cadence window
- **Fingerprint binds to device** - Different device = different key
- **License ties to user** - Different user = different key

### 3.5 Chunked Streaming (v3 with ChunkSize)

When `StreamParams.ChunkSize > 0`, the v3 format splits content into independently decryptable chunks, enabling:

- **Decrypt-while-downloading** - Play media as chunks arrive
- **HTTP Range requests** - Fetch specific chunks by byte offset
- **Seekable playback** - Jump to any position without decrypting previous chunks

```
┌──────────────────────────────────────────────────────────────────┐
│                        v3 CHUNKED FORMAT                         │
├──────────────────────────────────────────────────────────────────┤
│                                                                  │
│  Header (cleartext):                                             │
│    format: "v3"                                                  │
│    chunked: {                                                    │
│      chunkSize: 1048576,   // 1MB default                        │
│      totalChunks: N,                                             │
│      totalSize: X,         // unencrypted total                  │
│      index: [              // for HTTP Range / seeking           │
│        { offset: 0, size: Y },                                   │
│        { offset: Y, size: Z },                                   │
│        ...                                                       │
│      ]                                                           │
│    }                                                             │
│    wrappedKeys: [...]      // same as non-chunked v3             │
│                                                                  │
│  Payload:                                                        │
│    [chunk 0: nonce + encrypted + tag]                            │
│    [chunk 1: nonce + encrypted + tag]                            │
│    ...                                                           │
│    [chunk N: nonce + encrypted + tag]                            │
│                                                                  │
└──────────────────────────────────────────────────────────────────┘
```

**Key insight**: Each chunk is encrypted with the same CEK but gets its own random nonce, making chunks independently decryptable. The chunk index in the header enables:

1. **Seeking**: Calculate which chunk contains byte offset X, fetch just that chunk
2. **Range requests**: Use HTTP Range headers to fetch specific encrypted chunks
3. **Streaming**: Decrypt chunk 0 for metadata, then stream chunks 1-N as they arrive

**Usage example**:

```go
params := &StreamParams{
	License:     "user-license",
	Fingerprint: "device-fp",
	ChunkSize:   1024 * 1024, // 1MB chunks
}

// Encrypt with chunking
encrypted, _ := EncryptV3(msg, params, manifest)

// For streaming playback:
header, _ := GetV3Header(encrypted)
cek, _ := UnwrapCEKFromHeader(header, params)
payload, _ := GetV3Payload(encrypted)

for i := 0; i < header.Chunked.TotalChunks; i++ {
	chunk, _ := DecryptV3Chunk(payload, cek, i, header.Chunked)
	player.Write(chunk) // Stream to audio/video player
}
```

### 3.6 Supported Content Types

SMSG is content-agnostic. Any file can be an attachment:

| Type | MIME | Use Case |
|------|------|----------|
| Audio | audio/mpeg, audio/flac, audio/wav | Music, podcasts |
| Video | video/mp4, video/webm | Music videos, films |
| Images | image/png, image/jpeg | Album art, photos |
| Documents | application/pdf | Liner notes, lyrics |
| Archives | application/zip | Multi-file releases |
| Any | application/octet-stream | Anything else |

Multiple attachments per SMSG are supported (e.g., album + cover art + PDF booklet).

### 3.7 Adaptive Bitrate Streaming (ABR)

For large video content, ABR enables automatic quality switching based on network conditions, like HLS/DASH but with ChaCha20-Poly1305 encryption.

**Architecture:**

```
ABR Manifest (manifest.json)
├── Title: "My Video"
├── Version: "abr-v1"
├── Variants: [1080p, 720p, 480p, 360p]
└── DefaultIdx: 1 (720p)

track-1080p.smsg ──┐
track-720p.smsg  ──┼── Each is a standard v3 chunked SMSG
track-480p.smsg  ──┤   Same password decrypts ALL variants
track-360p.smsg  ──┘
```

**ABR Manifest Format:**

```json
{
  "version": "abr-v1",
  "title": "Content Title",
  "duration": 300,
  "variants": [
    {
      "name": "360p",
      "bandwidth": 500000,
      "width": 640,
      "height": 360,
      "codecs": "avc1.640028,mp4a.40.2",
      "url": "track-360p.smsg",
      "chunkCount": 12,
      "fileSize": 18750000
    },
    {
      "name": "720p",
      "bandwidth": 2500000,
      "width": 1280,
      "height": 720,
      "codecs": "avc1.640028,mp4a.40.2",
      "url": "track-720p.smsg",
      "chunkCount": 48,
      "fileSize": 93750000
    }
  ],
  "defaultIdx": 1
}
```

**Bandwidth Estimation Algorithm:**

1. Measure download time for each chunk
2. Calculate bits per second: `(bytes × 8 × 1000) / timeMs`
3. Average the last 3 samples for stability
4. Apply an 80% safety factor to prevent buffering

**Variant Selection:**

```
Selected = highest quality where (bandwidth × 0.8) >= variant.bandwidth
```
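
Estimation and selection together fit in a few lines. A sketch under stated assumptions: the `Variant` struct mirrors only the manifest fields used here, the function names are ours, and variants are assumed sorted by ascending bandwidth:

```go
package main

import "fmt"

// Variant mirrors the relevant fields of the ABR manifest entries above.
type Variant struct {
	Name      string
	Bandwidth float64 // bits per second
}

// bandwidthSample converts one chunk download into bits per second:
// (bytes x 8 x 1000) / timeMs.
func bandwidthSample(byteCount int, timeMs float64) float64 {
	return float64(byteCount) * 8 * 1000 / timeMs
}

// estimateBandwidth averages the last three samples for stability.
func estimateBandwidth(samples []float64) float64 {
	if n := len(samples); n > 3 {
		samples = samples[n-3:]
	}
	var sum float64
	for _, s := range samples {
		sum += s
	}
	return sum / float64(len(samples))
}

// selectVariant picks the highest-quality variant whose bandwidth fits
// within 80% of the measured bandwidth; the lowest quality is the fallback.
func selectVariant(variants []Variant, measured float64) Variant {
	chosen := variants[0]
	for _, v := range variants {
		if measured*0.8 >= v.Bandwidth {
			chosen = v
		}
	}
	return chosen
}

func main() {
	variants := []Variant{
		{"360p", 500_000}, {"480p", 1_000_000},
		{"720p", 2_500_000}, {"1080p", 5_000_000},
	}
	// Three 1 MB chunks, each downloaded in 2 s: 4 Mbps measured.
	samples := []float64{
		bandwidthSample(1_000_000, 2000),
		bandwidthSample(1_000_000, 2000),
		bandwidthSample(1_000_000, 2000),
	}
	measured := estimateBandwidth(samples)
	fmt.Println(selectVariant(variants, measured).Name) // 720p
}
```

At 4 Mbps measured, the 80% safety factor leaves 3.2 Mbps of budget: enough for the 2.5 Mbps 720p variant but not the 5 Mbps 1080p one, matching the selection rule above.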

**Key Properties:**

- **Same password for all variants**: CEK unwrapped once, works everywhere
- **Chunk-boundary switching**: Clean cuts, no partial-chunk issues
- **Independent variants**: No cross-file dependencies
- **CDN-friendly**: Each variant is a standard file, cacheable separately

**Creating ABR Content:**

```bash
# Use mkdemo-abr to create a variant set from a source video
go run ./cmd/mkdemo-abr input.mp4 output-dir/ [password]

# Output:
#   output-dir/manifest.json      (ABR manifest)
#   output-dir/track-1080p.smsg   (v3 chunked, 5 Mbps)
#   output-dir/track-720p.smsg    (v3 chunked, 2.5 Mbps)
#   output-dir/track-480p.smsg    (v3 chunked, 1 Mbps)
#   output-dir/track-360p.smsg    (v3 chunked, 500 Kbps)
```

**Standard Presets:**

| Name | Resolution | Bitrate | Use Case |
|------|------------|---------|----------|
| 1080p | 1920×1080 | 5 Mbps | High quality, fast connections |
| 720p | 1280×720 | 2.5 Mbps | Default, most connections |
| 480p | 854×480 | 1 Mbps | Mobile, medium connections |
| 360p | 640×360 | 500 Kbps | Slow connections, previews |

## 4. Demo Page Architecture

**Live Demo**: https://demo.dapp.fm

### 4.1 Components

```
demo/
├── index.html          # Single-page application
├── stmf.wasm           # Go WASM decryption module (~5.9MB)
├── wasm_exec.js        # Go WASM runtime
├── demo-track.smsg     # Sample encrypted content (v2/zstd)
└── profile-avatar.jpg  # Artist avatar
```

### 4.2 UI Modes

The demo has three modes, accessible via tabs:

| Mode | Purpose | Default |
|------|---------|---------|
| **Profile** | Artist landing page with auto-playing content | Yes |
| **Fan** | Upload and decrypt purchased .smsg files | No |
| **Artist** | Re-key content, create new packages | No |

### 4.3 Profile Mode (Default)

```
┌─────────────────────────────────────────────────────────────┐
│ dapp.fm                          [Profile] [Fan] [Artist]   │
├─────────────────────────────────────────────────────────────┤
│ Zero-Trust DRM            ⚠️ Demo pre-seeded with keys      │
├─────────────────────────────────────────────────────────────┤
│ [No Middlemen] [No Fees] [Host Anywhere] [Browser/Native]   │
├─────────────────┬───────────────────────────────────────────┤
│    SIDEBAR      │              MAIN CONTENT                 │
│  ┌───────────┐  │  ┌─────────────────────────────────────┐  │
│  │  Avatar   │  │  │  🛒 Buy This Track on Beatport      │  │
│  │           │  │  │  95%-100%* goes to the artist       │  │
│  │  Artist   │  │  ├─────────────────────────────────────┤  │
│  │  Name     │  │  │                                     │  │
│  │           │  │  │          VIDEO PLAYER               │  │
│  │  Links:   │  │  │       (auto-starts at 1:08)         │  │
│  │  Beatport │  │  │      with native controls           │  │
│  │  Spotify  │  │  │                                     │  │
│  │  YouTube  │  │  ├─────────────────────────────────────┤  │
│  │  etc.     │  │  │  About the Artist                   │  │
│  └───────────┘  │  │  (Bio text)                         │  │
│                 │  └─────────────────────────────────────┘  │
├─────────────────┴───────────────────────────────────────────┤
│         GitHub · EUPL-1.2 · Viva La OpenSource 💜           │
└─────────────────────────────────────────────────────────────┘
```

### 4.4 Decryption Flow
|
||||
|
||||
```
|
||||
User clicks "Play Demo Track"
|
||||
│
|
||||
▼
|
||||
fetch(demo-track.smsg)
|
||||
│
|
||||
▼
|
||||
Convert to base64 ◄─── CRITICAL: Must handle binary vs text format
|
||||
│ See: examples/failures/001-double-base64-encoding.md
|
||||
▼
|
||||
BorgSMSG.getInfo(base64)
|
||||
│
|
||||
▼
|
||||
Display manifest (title, artist, license)
|
||||
│
|
||||
▼
|
||||
BorgSMSG.decryptStream(base64, password)
|
||||
│
|
||||
▼
|
||||
Create Blob from Uint8Array
|
||||
│
|
||||
▼
|
||||
URL.createObjectURL(blob)
|
||||
│
|
||||
▼
|
||||
<audio> or <video> element plays content
|
||||
```

### 4.5 Fan Unlock Tab

Allows fans to:
1. Upload any `.smsg` file they purchased
2. Enter their license key (password)
3. Decrypt and play locally

No server communication; everything runs in the browser.

## 5. Artist Portal (License Manager)

The License Manager (`js/borg-stmf/artist-portal.html`) is the artist-facing tool for creating and issuing licenses.

### 5.1 Workflow

```
┌─────────────────────────────────────────────────────────────┐
│                       ARTIST PORTAL                         │
├─────────────────────────────────────────────────────────────┤
│ 1. Upload Content                                           │
│    - Drag/drop audio or video file                          │
│    - Or use demo content for testing                        │
├─────────────────────────────────────────────────────────────┤
│ 2. Define Track List (CD Mastering)                         │
│    - Track titles                                           │
│    - Start/end timestamps → chapter markers                 │
│    - Mix types (full, intro, chorus, drop, etc.)            │
├─────────────────────────────────────────────────────────────┤
│ 3. Configure License                                        │
│    - Perpetual (own forever)                                │
│    - Rental (time-limited)                                  │
│    - Streaming (24h access)                                 │
│    - Preview (30 seconds)                                   │
├─────────────────────────────────────────────────────────────┤
│ 4. Generate License                                         │
│    - Auto-generate token or set custom                      │
│    - Token encrypts content with manifest                   │
│    - Download .smsg file                                    │
├─────────────────────────────────────────────────────────────┤
│ 5. Distribute                                               │
│    - Upload .smsg to CDN/IPFS/S3                            │
│    - Sell license token via payment processor               │
│    - Fan receives token, downloads .smsg, plays             │
└─────────────────────────────────────────────────────────────┘
```

### 5.2 License Types

| Type | Duration | Use Case |
|------|----------|----------|
| **Perpetual** | Forever | Album purchase, own forever |
| **Rental** | 7-90 days | Limited edition, seasonal content |
| **Streaming** | 24 hours | On-demand streaming model |
| **Preview** | 30 seconds | Free samples, try-before-buy |

### 5.3 Track List as Manifest

The artist defines tracks like mastering a CD:

```json
{
  "tracks": [
    {"title": "Intro", "start": 0, "end": 45, "type": "intro"},
    {"title": "Main Track", "start": 45, "end": 240, "type": "full"},
    {"title": "The Drop", "start": 120, "end": 180, "type": "drop"},
    {"title": "Outro", "start": 240, "end": 300, "type": "outro"}
  ]
}
```

Same master file, different licensed "cuts":
- **Full Album**: All tracks, perpetual
- **Radio Edit**: Tracks 2-3 only, rental
- **DJ Extended**: Loop points enabled, perpetual
- **Preview**: First 30 seconds, expires immediately
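A cut list like the ones above can be produced mechanically from the same track data. This sketch uses a local copy of the `Track` shape from RFC-002 §4.1; `cutJSON` is a hypothetical helper, not a pkg/smsg API.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Track mirrors the pkg/smsg Track type (RFC-002 §4.1),
// declared locally so this sketch compiles on its own.
type Track struct {
	Title string  `json:"title"`
	Start float64 `json:"start"`
	End   float64 `json:"end,omitempty"`
	Type  string  `json:"type,omitempty"`
}

// cutJSON renders a "cut" (any subset of the master's tracks)
// as the tracks manifest fragment shown above.
func cutJSON(tracks []Track) string {
	out, _ := json.Marshal(map[string][]Track{"tracks": tracks})
	return string(out)
}

func main() {
	// The "Radio Edit" cut: tracks 2-3 of the master only.
	radioEdit := []Track{
		{Title: "Main Track", Start: 45, End: 240, Type: "full"},
		{Title: "The Drop", Start: 120, End: 180, Type: "drop"},
	}
	fmt.Println(cutJSON(radioEdit))
}
```

Each cut is then encrypted under its own license key, so one master file yields several sellable products.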

### 5.4 Stats Dashboard

The Artist Portal tracks:
- Total licenses issued
- Potential revenue (based on entered prices)
- 100% cut (reminder: no platform fees)

## 6. Economic Model

### 6.1 The Offer

**Self-host for 0%. Let us host for 5%.**

That's it. No hidden fees, no per-stream calculations, no "recoupable advances."

| Option | Cut | What You Get |
|--------|-----|--------------|
| **Self-host** | 0% | Tools, format, documentation. Host on your own CDN/IPFS/server |
| **dapp.fm hosted** | 5% | CDN, player embed, analytics, payment integration |

Compare to:
- Spotify: ~30% of $0.003/stream (you need 300k streams to earn $1000)
- Apple Music: ~30%
- Bandcamp: ~15-20%
- DistroKid: Flat fee but still platform-dependent
### 6.2 License Key Strategies

Artists can choose their pricing model:

**Per-Album License**
```
Album: "My Greatest Hits"
Price: $10
License: "MGH-2024-XKCD-7829"
→ One password unlocks entire album
```

**Per-Track License**
```
Track: "Single Release"
Price: $1
License: "SINGLE-A7B3-C9D2"
→ Individual track, individual price
```

**Tiered Licenses**
```
Standard: $10 → MP3 version
Premium:  $25 → FLAC + stems + bonus content
→ Different passwords, different content
```

**Time-Limited Previews**
```
Preview license expires in 7 days
Full license: permanent
→ Manifest contains expiry date
```

### 6.3 License Key Best Practices

For artists generating license keys:

```bash
# Good: Memorable but unique
MGH-2024-XKCD-7829
ALBUM-[year]-[random]-[checksum]

# Good: UUID for automation
550e8400-e29b-41d4-a716-446655440000

# Avoid: Dictionary words (bruteforceable)
password123
mysecretalbum
```

Recommended entropy: 64+ bits (e.g., 4 random words, or 12+ random alphanumeric)

### 6.4 No Revocation (By Design)

**Q: What if someone leaks the password?**

A: Then they leak it. Same as if someone photocopies a book or rips a CD.

This is a feature, not a bug:
- **No revocation server** = No single point of failure
- **No phone home** = Works offline, forever
- **Leaked keys** = Social problem, not technical problem

Mitigation strategies for artists:
1. Personalized keys per buyer (track who leaked)
2. Watermarked content (forensic tracking)
3. Time-limited keys for subscription models
4. Social pressure (small community = reputation matters)

The system optimizes for **happy paying customers**, not **punishing pirates**.
## 7. Security Model

### 7.1 Threat Model

| Threat | Mitigation |
|--------|------------|
| Man-in-the-middle | Content encrypted at rest; HTTPS for transport |
| Key server compromise | No key server - password-derived keys |
| Platform deplatforming | Self-hostable, decentralized distribution |
| Unauthorized sharing | Economic/social deterrent (password = paid license) |
| Memory extraction | Accepted risk - same as any DRM |

### 7.2 What This System Does NOT Prevent

- Users sharing their password (same as sharing any license)
- Screen recording of playback
- Memory dumping of decrypted content

This is **intentional**. The goal is not unbreakable DRM (which is impossible) but:
1. Making casual piracy inconvenient
2. Giving artists control of their distribution
3. Enabling direct artist-to-fan sales
4. Removing platform dependency

### 7.3 Trust Boundaries

```
TRUSTED                          UNTRUSTED
────────                         ─────────
User's browser/device            Distribution CDN
Decryption code (auditable)      Payment processor
License key (in user's head)     Internet transport
Local playback                   Third-party hosting
```
## 8. Implementation Status

### 8.1 Completed

- [x] SMSG format specification (v1, v2, v3)
- [x] Go encryption/decryption library (pkg/smsg)
- [x] WASM build for browser (pkg/wasm/stmf)
- [x] Native desktop app (Wails, cmd/dapp-fm-app)
- [x] Demo page with Profile/Fan/Artist modes
- [x] License Manager component
- [x] Streaming decryption API (v1.2.0)
- [x] **v2 binary format** - 25% smaller files
- [x] **zstd compression** - 3-10x faster than gzip
- [x] **Manifest links** - Artist platform links in metadata
- [x] **Live demo** - https://demo.dapp.fm
- [x] RFC-quality demo file with cryptographically secure password
- [x] **v3 streaming format** - LTHN rolling keys with CEK wrapping
- [x] **Configurable cadence** - daily/12h/6h/1h key rotation
- [x] **WASM v1.3.0** - `BorgSMSG.decryptV3()` for streaming
- [x] **Chunked streaming** - Independently decryptable chunks for seek/streaming
- [x] **Adaptive Bitrate (ABR)** - HLS-style multi-quality streaming with encrypted variants

### 8.2 Fixed Issues

- [x] ~~Double base64 encoding bug~~ - Fixed by using binary format
- [x] ~~Demo file format detection~~ - v2 format auto-detected via header
- [x] ~~Key wrapping for streaming~~ - Implemented in v3 format

### 8.3 Future Work

- [x] Multi-bitrate adaptive streaming (see Section 3.7 ABR)
- [x] Payment integration examples (see `docs/payment-integration.md`)
- [x] IPFS distribution guide (see `docs/ipfs-distribution.md`)
- [x] Demo page "Streaming" tab for v3 showcase
## 9. Usage Examples

### 9.1 Artist Workflow

```bash
# 1. Package your media (uses v2 binary format + zstd by default)
go run ./cmd/mkdemo my-track.mp4 my-track.smsg
# Output:
# Created: my-track.smsg (29220077 bytes)
# Master Password: PMVXogAJNVe_DDABfTmLYztaJAzsD0R7
# Store this password securely - it cannot be recovered!
```

Or programmatically:

```go
msg := smsg.NewMessage("Welcome to my album")
msg.AddBinaryAttachment("track.mp4", mediaBytes, "video/mp4")
manifest := smsg.NewManifest("Track Title")
manifest.Artist = "Artist Name"
manifest.AddLink("home", "https://linktr.ee/artist")
encrypted, _ := smsg.EncryptV2WithManifest(msg, password, manifest)
```

```bash
# 2. Upload to any hosting
aws s3 cp my-track.smsg s3://my-bucket/releases/
# or: ipfs add my-track.smsg
# or: scp my-track.smsg myserver:/var/www/

# 3. Sell license keys
# Use Gumroad, Stripe, PayPal - any payment method
# Deliver the master password on purchase
```
### 9.2 Fan Workflow

```
1. Purchase from artist's website → receive license key
2. Download .smsg file from CDN/IPFS/wherever
3. Open demo page or native app
4. Enter license key
5. Content decrypts and plays locally
```

### 9.3 Browser Integration

```html
<script src="wasm_exec.js"></script>
<script src="stmf.wasm.js"></script>
<script>
async function playContent(smsgUrl, licenseKey) {
  const response = await fetch(smsgUrl);
  const bytes = new Uint8Array(await response.arrayBuffer());
  const base64 = arrayToBase64(bytes); // Must be binary→base64

  const msg = await BorgSMSG.decryptStream(base64, licenseKey);

  const blob = new Blob([msg.attachments[0].data], {
    type: msg.attachments[0].mime
  });
  document.querySelector('audio').src = URL.createObjectURL(blob);
}
</script>
```
## 10. Comparison to Existing Solutions

| Feature | dapp.fm (self) | dapp.fm (hosted) | Spotify | Bandcamp | Widevine |
|---------|----------------|------------------|---------|----------|----------|
| Artist revenue | **100%** | **95%** | ~30% | ~80% | N/A |
| Platform cut | **0%** | **5%** | ~70% | ~15-20% | Varies |
| Self-hostable | Yes | Optional | No | No | No |
| Open source | Yes | Yes | No | No | No |
| Key escrow | None | None | Required | Required | Required |
| Browser support | WASM | WASM | Web | Web | CDM |
| Offline support | Yes | Yes | Premium | Download | Depends |
| Platform lock-in | **None** | **None** | High | Medium | High |
| Works if platform dies | **Yes** | **Yes** | No | No | No |
## 11. Interoperability & Versioning

### 11.1 Format Versioning

SMSG includes version and format fields for forward compatibility:

| Version | Format | Features |
|---------|--------|----------|
| 1.0 | v1 | ChaCha20-Poly1305, JSON+base64 attachments |
| 1.0 | **v2** | Binary attachments, zstd compression (25% smaller, 3-10x faster) |
| 1.0 | **v3** | LTHN rolling keys, CEK wrapping, chunked streaming |
| 1.0 | **v3+ABR** | Multi-quality variants with adaptive bitrate switching |
| 2 (future) | - | Algorithm negotiation, multiple KDFs |

Decoders MUST reject versions they don't understand. Use v2 for download-to-own, v3 for streaming, v3+ABR for video.

### 11.2 Third-Party Implementations

The format is intentionally simple to implement:

**Minimum Viable Player (any language)**:
1. Parse 4-byte magic ("SMSG")
2. Read version (2 bytes) and header length (3 bytes, big-endian)
3. Parse JSON header
4. SHA-256 hash the password
5. ChaCha20-Poly1305 decrypt payload
6. Parse JSON payload, extract attachments

Reference implementations:
- Go: `pkg/smsg/` (canonical)
- WASM: `pkg/wasm/stmf/` (browser)
- (contributions welcome: Rust, Python, JS-native)
### 11.3 Embedding & Integration

SMSG files can be:
- **Embedded in HTML**: Base64 in data attributes
- **Served via API**: JSON wrapper with base64 content
- **Bundled in apps**: Compiled into native binaries
- **Stored on IPFS**: Content-addressed, immutable
- **Distributed via torrents**: Encrypted = safe to share publicly

The player is embeddable:

```html
<iframe src="https://dapp.fm/embed/HASH" width="400" height="200"></iframe>
```
## 12. References

- **Live Demo**: https://demo.dapp.fm
- ChaCha20-Poly1305: RFC 8439
- zstd compression: https://github.com/klauspost/compress/tree/master/zstd
- SMSG Format: `examples/formats/smsg-format.md`
- Demo Page Source: `demo/index.html`
- WASM Module: `pkg/wasm/stmf/`
- Native App: `cmd/dapp-fm-app/`
- Demo Creator Tool: `cmd/mkdemo/`
- ABR Creator Tool: `cmd/mkdemo-abr/`
- ABR Package: `pkg/smsg/abr.go`

## 13. License

This specification and implementation are licensed under EUPL-1.2.

**Viva La OpenSource** 💜
# RFC-002: SMSG Container Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-001, RFC-007

---

## Abstract

SMSG (Secure Message) is an encrypted container format using ChaCha20-Poly1305 authenticated encryption. This RFC specifies the binary wire format, versioning, and encoding rules for SMSG files.

## 1. Overview

SMSG provides:
- Authenticated encryption (ChaCha20-Poly1305)
- Public metadata (manifest) readable without decryption
- Multiple format versions (v1 legacy, v2 binary, v3 streaming)
- Optional chunking for large files and seeking

## 2. File Structure

### 2.1 Binary Layout

```
Offset  Size  Field
------  ----- ------------------------------------
0       4     Magic: "SMSG" (ASCII)
4       2     Version: uint16 little-endian
6       3     Header Length: 3-byte big-endian
9       N     Header JSON (plaintext)
9+N     M     Encrypted Payload
```

### 2.2 Magic Number

| Format | Value |
|--------|-------|
| Binary | `0x53 0x4D 0x53 0x47` |
| ASCII | `SMSG` |
| Base64 (first 6 chars) | `U01TRw` |

### 2.3 Version Field

Current version: `0x0001` (1)

Decoders MUST reject versions they don't understand.

### 2.4 Header Length

3 bytes, big-endian unsigned integer. Supports headers up to 16 MB.
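The fixed-offset layout in §2.1 parses with a few slice reads. The sketch below is illustrative; `parseHeader` is a hypothetical function, not the canonical pkg/smsg decoder.

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// parseHeader splits a raw SMSG file into (version, headerJSON,
// payload) per §2.1: 4-byte magic, uint16 LE version, 3-byte
// big-endian header length, then header JSON and ciphertext.
func parseHeader(raw []byte) (uint16, []byte, []byte, error) {
	if len(raw) < 9 || string(raw[0:4]) != "SMSG" {
		return 0, nil, nil, errors.New("invalid SMSG magic")
	}
	version := binary.LittleEndian.Uint16(raw[4:6])
	hdrLen := int(raw[6])<<16 | int(raw[7])<<8 | int(raw[8]) // 3-byte BE
	if len(raw) < 9+hdrLen {
		return 0, nil, nil, errors.New("truncated header")
	}
	return version, raw[9 : 9+hdrLen], raw[9+hdrLen:], nil
}

func main() {
	// Hand-built file: version 1, a 2-byte header "{}", 1-byte payload.
	raw := []byte{'S', 'M', 'S', 'G', 0x01, 0x00, 0x00, 0x00, 0x02, '{', '}', 0xAA}
	v, hdr, payload, err := parseHeader(raw)
	fmt.Println(v, string(hdr), len(payload), err)
}
```

Because the header is plaintext, this is all a tool needs in order to show the manifest without ever touching the key.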

## 3. Header Format (JSON)

Header is always plaintext (never encrypted), enabling metadata inspection without decryption.

### 3.1 Base Header

```json
{
  "version": "1.0",
  "algorithm": "chacha20poly1305",
  "format": "v2",
  "compression": "zstd",
  "manifest": { ... }
}
```

### 3.2 V3 Header Extensions

```json
{
  "version": "1.0",
  "algorithm": "chacha20poly1305",
  "format": "v3",
  "compression": "zstd",
  "keyMethod": "lthn-rolling",
  "cadence": "daily",
  "manifest": { ... },
  "wrappedKeys": [
    {"date": "2026-01-13", "wrapped": "<base64>"},
    {"date": "2026-01-14", "wrapped": "<base64>"}
  ],
  "chunked": {
    "chunkSize": 1048576,
    "totalChunks": 42,
    "totalSize": 44040192,
    "index": [
      {"offset": 0, "size": 1048600},
      {"offset": 1048600, "size": 1048600}
    ]
  }
}
```

### 3.3 Header Field Reference

| Field | Type | Values | Description |
|-------|------|--------|-------------|
| version | string | "1.0" | Format version string |
| algorithm | string | "chacha20poly1305" | Always ChaCha20-Poly1305 |
| format | string | "", "v2", "v3" | Payload format version |
| compression | string | "", "gzip", "zstd" | Compression algorithm |
| keyMethod | string | "", "lthn-rolling" | Key derivation method |
| cadence | string | "daily", "12h", "6h", "1h" | Rolling key period (v3) |
| manifest | object | - | Content metadata |
| wrappedKeys | array | - | CEK wrapped for each period (v3) |
| chunked | object | - | Chunk index for seeking (v3) |
## 4. Manifest Structure

### 4.1 Complete Manifest

```go
type Manifest struct {
	Title       string            `json:"title,omitempty"`
	Artist      string            `json:"artist,omitempty"`
	Album       string            `json:"album,omitempty"`
	Genre       string            `json:"genre,omitempty"`
	Year        int               `json:"year,omitempty"`
	ReleaseType string            `json:"release_type,omitempty"`
	Duration    int               `json:"duration,omitempty"`
	Format      string            `json:"format,omitempty"`
	ExpiresAt   int64             `json:"expires_at,omitempty"`
	IssuedAt    int64             `json:"issued_at,omitempty"`
	LicenseType string            `json:"license_type,omitempty"`
	Tracks      []Track           `json:"tracks,omitempty"`
	Links       map[string]string `json:"links,omitempty"`
	Tags        []string          `json:"tags,omitempty"`
	Extra       map[string]string `json:"extra,omitempty"`
}

type Track struct {
	Title    string  `json:"title"`
	Start    float64 `json:"start"`
	End      float64 `json:"end,omitempty"`
	Type     string  `json:"type,omitempty"`
	TrackNum int     `json:"track_num,omitempty"`
}
```

### 4.2 Manifest Field Reference

JSON field names follow the struct tags above:

| Field | Type | Range | Description |
|-------|------|-------|-------------|
| title | string | 0-255 chars | Display name (required for discovery) |
| artist | string | 0-255 chars | Creator name |
| album | string | 0-255 chars | Album/collection name |
| genre | string | 0-255 chars | Genre classification |
| year | int | 0-9999 | Release year (0 = unset) |
| release_type | string | enum | "single", "album", "ep", "mix" |
| duration | int | 0+ | Total duration in seconds |
| format | string | any | Platform format string (e.g., "dapp.fm/v1") |
| expires_at | int64 | 0+ | Unix timestamp (0 = never expires) |
| issued_at | int64 | 0+ | Unix timestamp of license issue |
| license_type | string | enum | "perpetual", "rental", "stream", "preview" |
| tracks | []Track | - | Track boundaries for multi-track releases |
| links | map | - | Platform name → URL (e.g., "bandcamp" → URL) |
| tags | []string | - | Arbitrary string tags |
| extra | map | - | Free-form key-value extension data |
## 5. Format Versions

### 5.1 Version Comparison

| Aspect | v1 (Legacy) | v2 (Binary) | v3 (Streaming) |
|--------|-------------|-------------|----------------|
| Payload Structure | JSON only | Length-prefixed JSON + binary | Same as v2 |
| Attachment Encoding | Base64 in JSON | Size field + raw binary | Size field + raw binary |
| Compression | None | zstd (default) | zstd (default) |
| Key Derivation | SHA256(password) | SHA256(password) | LTHN rolling keys |
| Chunked Support | No | No | Yes (optional) |
| Size Overhead | ~33% | ~25% | ~15% |
| Use Case | Legacy | General purpose | Time-limited streaming |

### 5.2 V1 Format (Legacy)

**Payload (after decryption):**

```json
{
  "body": "Message content",
  "subject": "Optional subject",
  "from": "sender@example.com",
  "to": "recipient@example.com",
  "timestamp": 1673644800,
  "attachments": [
    {
      "name": "file.bin",
      "content": "base64encodeddata==",
      "mime": "application/octet-stream",
      "size": 1024
    }
  ],
  "reply_key": {
    "public_key": "base64x25519key==",
    "algorithm": "x25519"
  },
  "meta": {
    "custom_field": "custom_value"
  }
}
```

- Attachments base64-encoded inline in JSON (~33% overhead)
- Simple but inefficient for large files
### 5.3 V2 Format (Binary)

**Payload structure (after decryption and decompression):**

```
Offset  Size  Field
------  ----- ------------------------------------
0       4     Message JSON Length (big-endian uint32)
4       N     Message JSON (attachments have size only, no content)
4+N     B1    Attachment 1 raw binary
4+N+B1  B2    Attachment 2 raw binary
...
```

**Message JSON (within payload):**

```json
{
  "body": "Message text",
  "subject": "Subject",
  "from": "sender",
  "attachments": [
    {"name": "file1.bin", "mime": "application/octet-stream", "size": 4096},
    {"name": "file2.bin", "mime": "image/png", "size": 65536}
  ],
  "timestamp": 1673644800
}
```

- Attachment `content` field omitted; binary data follows JSON
- Compressed before encryption
- 3-10x faster than v1, ~25% smaller

### 5.4 V3 Format (Streaming)

Same payload structure as v2, but with:
- LTHN-derived rolling keys instead of password
- CEK (Content Encryption Key) wrapped for each time period
- Optional chunking for seek support

**CEK Wrapping:**

```
For each rolling period:
  streamKey  = SHA256(LTHN(period:license:fingerprint))
  wrappedKey = ChaCha20-Poly1305(CEK, streamKey)
```

**Rolling Periods (cadence):**

| Cadence | Period Format | Example |
|---------|---------------|---------|
| daily | YYYY-MM-DD | "2026-01-13" |
| 12h | YYYY-MM-DD-AM/PM | "2026-01-13-AM" |
| 6h | YYYY-MM-DD-HH | "2026-01-13-00", "2026-01-13-06" |
| 1h | YYYY-MM-DD-HH | "2026-01-13-15" |

### 5.5 V3 Chunked Format

**Payload (independently decryptable chunks):**

```
Offset   Size     Content
------   -------  ----------------------------------
0        1048600  Chunk 0: [24-byte nonce][ciphertext][16-byte tag]
1048600  1048600  Chunk 1: [24-byte nonce][ciphertext][16-byte tag]
...
```

- Each chunk encrypted separately with same CEK, unique nonce
- Enables seeking, HTTP Range requests
- Chunk size typically 1MB (configurable)
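Seeking works by mapping a plaintext position to a chunk via the header's `chunked.index` (§3.2), then fetching only that chunk's byte range. A sketch with local types; it assumes each chunk carries `chunkSize` plaintext bytes except possibly the last.

```go
package main

import "fmt"

// chunkRef mirrors one entry of the header's chunked.index (§3.2).
type chunkRef struct {
	Offset int64
	Size   int64
}

// seekChunk maps a plaintext byte position to the chunk holding it
// and builds the HTTP Range header for that chunk's stored bytes.
func seekChunk(pos, chunkSize int64, index []chunkRef) (idx int, rangeHdr string) {
	idx = int(pos / chunkSize)
	if idx >= len(index) {
		idx = len(index) - 1 // clamp to the final chunk
	}
	c := index[idx]
	return idx, fmt.Sprintf("bytes=%d-%d", c.Offset, c.Offset+c.Size-1)
}

func main() {
	index := []chunkRef{
		{Offset: 0, Size: 1048600},
		{Offset: 1048600, Size: 1048600},
		{Offset: 2097200, Size: 500000},
	}
	i, r := seekChunk(1_500_000, 1048576, index)
	fmt.Println(i, r)
}
```

Because every chunk decrypts independently under the same CEK, the player only needs the one fetched range plus the wrapped key for the current period.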

## 6. Encryption

### 6.1 Algorithm

XChaCha20-Poly1305 (extended nonce variant)

| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (XChaCha) |
| Tag size | 16 bytes |

### 6.2 Ciphertext Structure

```
[24-byte XChaCha20 nonce][encrypted data][16-byte Poly1305 tag]
```

**Critical**: Nonces are embedded IN the ciphertext by the Enchantrix library, NOT transmitted separately in headers.

### 6.3 Key Derivation

**V1/V2 (Password-based):**

```go
key := sha256.Sum256([]byte(password)) // 32 bytes
```

**V3 (LTHN Rolling):**

```go
// For each period in rolling window:
streamKey := sha256.Sum256([]byte(
	crypt.NewService().Hash(crypt.LTHN, period + ":" + license + ":" + fingerprint),
))
```

## 7. Compression

| Value | Algorithm | Notes |
|-------|-----------|-------|
| "" (empty) | None | Raw bytes, default for v1 |
| "gzip" | RFC 1952 | Stdlib, WASM compatible |
| "zstd" | Zstandard | Default for v2/v3, better ratio |

**Order**: Compress → Encrypt (on write), Decrypt → Decompress (on read)

## 8. Message Structure

### 8.1 Go Types

```go
type Message struct {
	From        string            `json:"from,omitempty"`
	To          string            `json:"to,omitempty"`
	Subject     string            `json:"subject,omitempty"`
	Body        string            `json:"body"`
	Timestamp   int64             `json:"timestamp,omitempty"`
	Attachments []Attachment      `json:"attachments,omitempty"`
	ReplyKey    *KeyInfo          `json:"reply_key,omitempty"`
	Meta        map[string]string `json:"meta,omitempty"`
}

type Attachment struct {
	Name    string `json:"name"`
	Mime    string `json:"mime"`
	Size    int    `json:"size"`
	Content string `json:"content,omitempty"` // Base64, v1 only
	Data    []byte `json:"-"`                 // Binary, v2/v3
}

type KeyInfo struct {
	PublicKey string `json:"public_key"`
	Algorithm string `json:"algorithm"`
}
```

### 8.2 Stream Parameters (V3)

```go
type StreamParams struct {
	License     string `json:"license"`     // User's license identifier
	Fingerprint string `json:"fingerprint"` // Device fingerprint (optional)
	Cadence     string `json:"cadence"`     // Rolling period: daily, 12h, 6h, 1h
	ChunkSize   int    `json:"chunk_size"`  // Bytes per chunk (default 1MB)
}
```
## 9. Error Handling

### 9.1 Error Types

```go
var (
	ErrInvalidMagic     = errors.New("invalid SMSG magic")
	ErrInvalidPayload   = errors.New("invalid SMSG payload")
	ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
	ErrPasswordRequired = errors.New("password is required")
	ErrEmptyMessage     = errors.New("message cannot be empty")
	ErrStreamKeyExpired = errors.New("stream key expired (outside rolling window)")
	ErrNoValidKey       = errors.New("no valid wrapped key found for current date")
	ErrLicenseRequired  = errors.New("license is required for stream decryption")
)
```

### 9.2 Error Conditions

| Error | Cause | Recovery |
|-------|-------|----------|
| ErrInvalidMagic | File magic is not "SMSG" | Verify file format |
| ErrInvalidPayload | Corrupted payload structure | Re-download or restore |
| ErrDecryptionFailed | Wrong password or corrupted | Try correct password |
| ErrPasswordRequired | Empty password provided | Provide password |
| ErrStreamKeyExpired | Time outside rolling window | Wait for valid period or update file |
| ErrNoValidKey | No wrapped key for current period | License/fingerprint mismatch |
| ErrLicenseRequired | Empty StreamParams.License | Provide license identifier |
## 10. Constants

```go
const Magic = "SMSG"                 // 4 ASCII bytes
const Version = "1.0"                // String version identifier
const DefaultChunkSize = 1024 * 1024 // 1 MB

const FormatV1 = ""   // Legacy JSON format
const FormatV2 = "v2" // Binary format
const FormatV3 = "v3" // Streaming with rolling keys

const KeyMethodDirect = ""                  // Password-direct (v1/v2)
const KeyMethodLTHNRolling = "lthn-rolling" // LTHN rolling (v3)

const CompressionNone = ""
const CompressionGzip = "gzip"
const CompressionZstd = "zstd"

const CadenceDaily = "daily"
const CadenceHalfDay = "12h"
const CadenceQuarter = "6h"
const CadenceHourly = "1h"
```
## 11. API Usage

### 11.1 V1 (Legacy)

```go
msg := NewMessage("Hello").WithSubject("Test")
encrypted, _ := Encrypt(msg, "password")
decrypted, _ := Decrypt(encrypted, "password")
```

### 11.2 V2 (Binary)

```go
msg := NewMessage("Hello").AddBinaryAttachment("file.bin", data, "application/octet-stream")
manifest := NewManifest("My Content")
encrypted, _ := EncryptV2WithManifest(msg, "password", manifest)
decrypted, _ := Decrypt(encrypted, "password")
```

### 11.3 V3 (Streaming)

```go
msg := NewMessage("Stream content")
params := &StreamParams{
	License:     "user-license",
	Fingerprint: "device-fingerprint",
	Cadence:     CadenceDaily,
	ChunkSize:   1048576,
}
manifest := NewManifest("Stream Track")
manifest.LicenseType = "stream"
encrypted, _ := EncryptV3(msg, params, manifest)
decrypted, header, _ := DecryptV3(encrypted, params)
```
## 12. Implementation Reference

- Types: `pkg/smsg/types.go`
- Encryption: `pkg/smsg/smsg.go`
- Streaming: `pkg/smsg/stream.go`
- WASM: `pkg/wasm/stmf/main.go`
- Tests: `pkg/smsg/*_test.go`

## 13. Security Considerations

1. **Nonce uniqueness**: Enchantrix generates random 24-byte nonces automatically
2. **Key entropy**: Passwords should have 64+ bits of entropy (there is no key stretching)
3. **Manifest exposure**: The manifest is public; never include sensitive data
4. **Constant-time crypto**: Enchantrix uses constant-time comparison for auth tags
5. **Rolling window**: V3 keys are valid for the current and next period only

## 14. Future Work

- [ ] Key stretching (Argon2 option)
- [ ] Multi-recipient encryption
- [ ] Streaming API with ReadableStream
- [ ] Hardware key support (WebAuthn)

326
RFC-013-DATANODE.md
Normal file
# RFC-003: DataNode In-Memory Filesystem

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2

---

## Abstract

DataNode is an in-memory filesystem abstraction implementing Go's `fs.FS` interface. It provides the foundation for collecting, manipulating, and serializing file trees without touching disk.

## 1. Overview

DataNode serves as the core data structure for:

- Collecting files from various sources (GitHub, websites, PWAs)
- Building container filesystems (TIM rootfs)
- Serializing to/from tar archives
- Encrypting as TRIX format

## 2. Implementation

### 2.1 Core Type

```go
type DataNode struct {
	files map[string]*dataFile
}

type dataFile struct {
	name    string
	content []byte
	modTime time.Time
}
```

**Key insight**: DataNode uses a **flat key-value map**, not a nested tree structure. Paths are stored as keys directly, and directories are implicit (derived from path prefixes).

### 2.2 fs.FS Implementation

DataNode implements these interfaces:

| Interface | Method | Description |
|-----------|--------|-------------|
| `fs.FS` | `Open(name string)` | Returns `fs.File` for path |
| `fs.StatFS` | `Stat(name string)` | Returns `fs.FileInfo` |
| `fs.ReadDirFS` | `ReadDir(name string)` | Lists directory contents |

### 2.3 Internal Helper Types

```go
// File metadata
type dataFileInfo struct {
	name    string
	size    int64
	modTime time.Time
}

func (fi *dataFileInfo) Mode() fs.FileMode { return 0444 } // Read-only

// Directory metadata
type dirInfo struct {
	name string
}

func (di *dirInfo) Mode() fs.FileMode { return fs.ModeDir | 0555 }

// File reader (implements fs.File)
type dataFileReader struct {
	info   *dataFileInfo
	reader *bytes.Reader
}

// Directory reader (implements fs.File)
type dirFile struct {
	info    *dirInfo
	entries []fs.DirEntry
	offset  int
}
```

## 3. Operations

### 3.1 Construction

```go
// Create empty DataNode
node := datanode.New()

// Returns: &DataNode{files: make(map[string]*dataFile)}
```

### 3.2 Adding Files

```go
// Add file with content
node.AddData("path/to/file.txt", []byte("content"))

// Trailing slashes are ignored (treated as directory indicator)
node.AddData("path/to/dir/", []byte("")) // Stored as "path/to/dir"
```

**Note**: Parent directories are NOT explicitly created. They are implicit based on path prefixes.

### 3.3 File Access

```go
// Open file (fs.FS interface)
f, err := node.Open("path/to/file.txt")
if err != nil {
	// fs.ErrNotExist if not found
}
defer f.Close()
content, _ := io.ReadAll(f)

// Stat file
info, err := node.Stat("path/to/file.txt")
// info.Name(), info.Size(), info.ModTime(), info.Mode()

// Read directory
entries, err := node.ReadDir("path/to")
for _, entry := range entries {
	// entry.Name(), entry.IsDir(), entry.Type()
}
```

### 3.4 Walking

```go
err := fs.WalkDir(node, ".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err
	}
	if !d.IsDir() {
		// Process file
	}
	return nil
})
```

## 4. Path Semantics

### 4.1 Path Handling

- **Leading slashes stripped**: `/path/file` → `path/file`
- **Trailing slashes ignored**: `path/dir/` → `path/dir`
- **Forward slashes only**: Uses `/` regardless of OS
- **Case-sensitive**: `File.txt` ≠ `file.txt`
- **Direct lookup**: Paths stored as flat keys

### 4.2 Valid Paths

```
file.txt        → stored as "file.txt"
dir/file.txt    → stored as "dir/file.txt"
/absolute/path  → stored as "absolute/path" (leading / stripped)
path/to/dir/    → stored as "path/to/dir"  (trailing / stripped)
```

### 4.3 Directory Detection

Directories are **implicit**. A directory exists if any file path has it as a prefix. For example, adding `a/b/c.txt` implicitly creates directories `a` and `a/b`.

```go
// ReadDir finds directories by scanning all paths
func (dn *DataNode) ReadDir(name string) ([]fs.DirEntry, error) {
	// Scans all keys for matching prefix
	// Returns unique immediate children
}
```

## 5. Tar Serialization

### 5.1 ToTar

```go
tarBytes, err := node.ToTar()
```

**Format**:

- All files written as `tar.TypeReg` (regular files)
- Header Mode: **0600** (fixed, not original mode)
- No explicit directory entries
- ModTime preserved from dataFile

```go
// Serialization logic
for path, file := range dn.files {
	header := &tar.Header{
		Name:     path,
		Mode:     0600, // Fixed mode
		Size:     int64(len(file.content)),
		ModTime:  file.modTime,
		Typeflag: tar.TypeReg,
	}
	tw.WriteHeader(header)
	tw.Write(file.content)
}
```

### 5.2 FromTar

```go
node, err := datanode.FromTar(tarBytes)
```

**Parsing**:

- Only reads `tar.TypeReg` entries
- Ignores directory entries (`tar.TypeDir`)
- Stores path and content in flat map

```go
// Deserialization logic
for {
	header, err := tr.Next()
	if header.Typeflag == tar.TypeReg {
		content, _ := io.ReadAll(tr)
		dn.files[header.Name] = &dataFile{
			name:    filepath.Base(header.Name),
			content: content,
			modTime: header.ModTime,
		}
	}
}
```

### 5.3 Compressed Variants

```go
// gzip compressed
tarGz, err := node.ToTarGz()
node, err := datanode.FromTarGz(tarGzBytes)

// xz compressed
tarXz, err := node.ToTarXz()
node, err := datanode.FromTarXz(tarXzBytes)
```

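The round trip described above needs only `archive/tar` from the standard library. In this sketch a plain `map[string][]byte` stands in for DataNode's internal map, and `toTar`/`fromTar` are hypothetical helpers illustrating the same rules (regular entries only, fixed mode 0600), not the package's API:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// toTar serializes a flat path→content map as §5.1 describes.
func toTar(files map[string][]byte) ([]byte, error) {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	for path, content := range files {
		hdr := &tar.Header{
			Name:     path,
			Mode:     0600, // fixed mode, original modes are not preserved
			Size:     int64(len(content)),
			Typeflag: tar.TypeReg,
		}
		if err := tw.WriteHeader(hdr); err != nil {
			return nil, err
		}
		if _, err := tw.Write(content); err != nil {
			return nil, err
		}
	}
	if err := tw.Close(); err != nil {
		return nil, err
	}
	return buf.Bytes(), nil
}

// fromTar reads only regular entries back into a flat map (§5.2).
func fromTar(data []byte) (map[string][]byte, error) {
	files := map[string][]byte{}
	tr := tar.NewReader(bytes.NewReader(data))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			return nil, err
		}
		if hdr.Typeflag != tar.TypeReg {
			continue // directories and symlinks are ignored
		}
		content, err := io.ReadAll(tr)
		if err != nil {
			return nil, err
		}
		files[hdr.Name] = content
	}
	return files, nil
}

func main() {
	in := map[string][]byte{"dir/file.txt": []byte("content")}
	blob, _ := toTar(in)
	out, _ := fromTar(blob)
	fmt.Println(string(out["dir/file.txt"])) // content
}
```
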
## 6. File Modes

| Context | Mode | Notes |
|---------|------|-------|
| File read (fs.FS) | 0444 | Read-only for all |
| Directory (fs.FS) | 0555 | Read+execute for all |
| Tar export | 0600 | Owner read/write only |

**Note**: Original file modes are NOT preserved. All files get fixed modes.

## 7. Memory Model

- All content held in memory as `[]byte`
- No lazy loading
- No memory mapping
- Thread-safe for concurrent reads (map is not mutated after creation)

### 7.1 Size Calculation

```go
func (dn *DataNode) Size() int64 {
	var total int64
	for _, f := range dn.files {
		total += int64(len(f.content))
	}
	return total
}
```

## 8. Integration Points

### 8.1 TIM RootFS

```go
tim := &tim.TIM{
	Config: configJSON,
	RootFS: datanode, // DataNode as container filesystem
}
```

### 8.2 TRIX Encryption

```go
// Encrypt DataNode to TRIX
tarBytes, _ := node.ToTar()
encrypted, err := trix.Encrypt(tarBytes, password)

// Decrypt TRIX to DataNode
decryptedTar, err := trix.Decrypt(encrypted, password)
node, err := datanode.FromTar(decryptedTar)
```

### 8.3 Collectors

```go
// GitHub collector returns DataNode
node, err := github.CollectRepo(url)

// Website collector returns DataNode
node, err := website.Collect(url, depth)
```

## 9. Implementation Reference

- Source: `pkg/datanode/datanode.go`
- Tests: `pkg/datanode/datanode_test.go`

## 10. Security Considerations

1. **Path traversal**: Leading slashes stripped; no `..` handling needed (flat map)
2. **Memory exhaustion**: No built-in limits; caller must validate input size
3. **Tar bombs**: FromTar reads all entries into memory
4. **Symlinks**: Not supported (intentional - tar.TypeReg only)

## 11. Limitations

- No symlink support
- No extended attributes
- No sparse files
- Fixed file modes (0600 on export)
- No streaming (full content in memory)

## 12. Future Work

- [ ] Streaming tar generation for large files
- [ ] Optional mode preservation
- [ ] Size limits for untrusted input
- [ ] Lazy loading for large datasets

330
RFC-014-TIM.md
Normal file
# RFC-004: Terminal Isolation Matrix (TIM)

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003

---

## Abstract

TIM (Terminal Isolation Matrix) is an OCI-compatible container bundle format. It packages a runtime configuration with a root filesystem (DataNode) for execution via runc or compatible runtimes.

## 1. Overview

TIM provides:

- OCI runtime-spec compatible bundles
- Portable container packaging
- Integration with DataNode filesystem
- Encryption via STIM (RFC-005)

## 2. Implementation

### 2.1 Core Type

```go
// pkg/tim/tim.go:28-32
type TerminalIsolationMatrix struct {
	Config []byte             // Raw OCI runtime specification (JSON)
	RootFS *datanode.DataNode // In-memory filesystem
}
```

### 2.2 Error Variables

```go
var (
	ErrDataNodeRequired   = errors.New("datanode is required")
	ErrConfigIsNil        = errors.New("config is nil")
	ErrPasswordRequired   = errors.New("password is required for encryption")
	ErrInvalidStimPayload = errors.New("invalid stim payload")
	ErrDecryptionFailed   = errors.New("decryption failed (wrong password?)")
)
```

## 3. Public API

### 3.1 Constructors

```go
// Create empty TIM with default config
func New() (*TerminalIsolationMatrix, error)

// Wrap existing DataNode into TIM
func FromDataNode(dn *datanode.DataNode) (*TerminalIsolationMatrix, error)

// Deserialize from tar archive
func FromTar(data []byte) (*TerminalIsolationMatrix, error)
```

### 3.2 Serialization

```go
// Serialize to tar archive
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error)

// Encrypt to STIM format (ChaCha20-Poly1305)
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error)
```

### 3.3 Decryption

```go
// Decrypt from STIM format
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error)
```

### 3.4 Execution

```go
// Run plain .tim file with runc
func Run(timPath string) error

// Decrypt and run .stim file
func RunEncrypted(stimPath, password string) error
```

## 4. Tar Archive Structure

### 4.1 Layout

```
config.json         (root level, mode 0600)
rootfs/             (directory, mode 0755)
rootfs/bin/app      (files within rootfs/)
rootfs/etc/config
...
```

### 4.2 Serialization (ToTar)

```go
// pkg/tim/tim.go:111-195
func (m *TerminalIsolationMatrix) ToTar() ([]byte, error) {
	// 1. Write config.json header (size = len(m.Config), mode 0600)
	// 2. Write config.json content
	// 3. Write rootfs/ directory entry (TypeDir, mode 0755)
	// 4. Walk m.RootFS depth-first
	// 5. For each file: tar entry with name "rootfs/" + path, mode 0600
}
```

### 4.3 Deserialization (FromTar)

```go
func FromTar(data []byte) (*TerminalIsolationMatrix, error) {
	// 1. Parse tar entries
	// 2. "config.json" → stored as raw bytes in Config
	// 3. "rootfs/*" prefix → stripped and added to DataNode
	// 4. Error if config.json missing (ErrConfigIsNil)
}
```

## 5. OCI Config

### 5.1 Default Config

The `New()` function creates a TIM with a default config from `pkg/tim/config.go`:

```go
func defaultConfig() (*trix.Trix, error) {
	return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```

**Note**: The default config is minimal. Applications should populate the Config field with a proper OCI runtime spec.

### 5.2 OCI Runtime Spec Example

```json
{
  "ociVersion": "1.0.2",
  "process": {
    "terminal": false,
    "user": {"uid": 0, "gid": 0},
    "args": ["/bin/app"],
    "env": ["PATH=/usr/bin:/bin"],
    "cwd": "/"
  },
  "root": {
    "path": "rootfs",
    "readonly": true
  },
  "mounts": [],
  "linux": {
    "namespaces": [
      {"type": "pid"},
      {"type": "network"},
      {"type": "mount"}
    ]
  }
}
```

## 6. Execution Flow

### 6.1 Plain TIM (Run)

```go
// pkg/tim/run.go:18-74
func Run(timPath string) error {
	// 1. Create temporary directory (borg-run-*)
	// 2. Extract tar entry-by-entry
	//    - Security: Path traversal check (prevents ../)
	//    - Validates: target = Clean(target) within tempDir
	// 3. Create directories as needed (0755)
	// 4. Write files with 0600 permissions
	// 5. Execute: runc run -b <tempDir> borg-container
	// 6. Stream stdout/stderr directly
	// 7. Return exit code
}
```

### 6.2 Encrypted TIM (RunEncrypted)

```go
// pkg/tim/run.go:79-134
func RunEncrypted(stimPath, password string) error {
	// 1. Read encrypted .stim file
	// 2. Decrypt using FromSigil() with password
	// 3. Create temporary directory (borg-run-*)
	// 4. Write config.json to tempDir
	// 5. Create rootfs/ subdirectory
	// 6. Walk DataNode and extract all files to rootfs/
	//    - Uses CopyFile() with 0600 permissions
	// 7. Execute: runc run -b <tempDir> borg-container
	// 8. Stream stdout/stderr
	// 9. Clean up temp directory (defer os.RemoveAll)
	// 10. Return exit code
}
```

### 6.3 Security Controls

| Control | Implementation |
|---------|----------------|
| Path traversal | `filepath.Clean()` + prefix validation |
| Temp cleanup | `defer os.RemoveAll(tempDir)` |
| File permissions | Hardcoded 0600 (files), 0755 (dirs) |
| Test injection | `ExecCommand` variable for mocking runc |

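The path-traversal control above (`filepath.Clean()` plus prefix validation) can be sketched as a small standalone helper; `safeTarget` is an illustrative name for this check, not the function used in `pkg/tim/run.go`:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// safeTarget resolves an archive entry name inside destDir and rejects
// anything that would escape it (the "zip slip" check from §6.1).
func safeTarget(destDir, name string) (string, error) {
	target := filepath.Join(destDir, name) // Join applies filepath.Clean
	cleanDest := filepath.Clean(destDir)
	if target != cleanDest &&
		!strings.HasPrefix(target, cleanDest+string(filepath.Separator)) {
		return "", fmt.Errorf("illegal path %q escapes %q", name, destDir)
	}
	return target, nil
}

func main() {
	ok, _ := safeTarget("/tmp/borg-run-1", "rootfs/bin/app")
	fmt.Println(ok) // /tmp/borg-run-1/rootfs/bin/app

	_, err := safeTarget("/tmp/borg-run-1", "../../etc/passwd")
	fmt.Println(err != nil) // true
}
```

Because `filepath.Join` cleans the result, an entry like `rootfs/../../x` is normalised before the prefix check, so it cannot slip past by embedding `..` segments mid-path.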
## 7. Cache API

### 7.1 Cache Structure

```go
// pkg/tim/cache.go
type Cache struct {
	Dir      string // Directory path for storage
	Password string // Shared password for all TIMs
}
```

### 7.2 Cache Operations

```go
// Create cache with master password
func NewCache(dir, password string) (*Cache, error)

// Store TIM (encrypted automatically as .stim)
func (c *Cache) Store(name string, m *TerminalIsolationMatrix) error

// Load TIM (decrypted automatically)
func (c *Cache) Load(name string) (*TerminalIsolationMatrix, error)

// Delete cached TIM
func (c *Cache) Delete(name string) error

// Check if TIM exists
func (c *Cache) Exists(name string) bool

// List all cached TIM names
func (c *Cache) List() ([]string, error)

// Load and execute cached TIM
func (c *Cache) Run(name string) error

// Get file size of cached .stim
func (c *Cache) Size(name string) (int64, error)
```

### 7.3 Cache Directory Structure

```
cache/
├── mycontainer.stim   (encrypted)
├── another.stim       (encrypted)
└── ...
```

- All TIMs stored as `.stim` files (encrypted)
- Single password protects entire cache
- Directory created with 0700 permissions
- Files stored with 0600 permissions

## 8. CLI Usage

```bash
# Compile Borgfile to TIM
borg compile -f Borgfile -o container.tim

# Compile with encryption
borg compile -f Borgfile -e "password" -o container.stim

# Run plain TIM
borg run container.tim

# Run encrypted TIM
borg run container.stim -p "password"

# Decode (extract) to tar
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar

# Inspect metadata without decrypting
borg inspect container.stim
```

## 9. Implementation Reference

- TIM core: `pkg/tim/tim.go`
- Execution: `pkg/tim/run.go`
- Cache: `pkg/tim/cache.go`
- Config: `pkg/tim/config.go`
- Tests: `pkg/tim/tim_test.go`, `pkg/tim/run_test.go`, `pkg/tim/cache_test.go`

## 10. Security Considerations

1. **Path traversal prevention**: `filepath.Clean()` + prefix validation
2. **Permission hardcoding**: 0600 files, 0755 directories
3. **Secure cleanup**: `defer os.RemoveAll()` on temp directories
4. **Command injection prevention**: `ExecCommand` variable (no shell)
5. **Config validation**: Validate OCI spec before execution

## 11. OCI Compatibility

TIM bundles are compatible with:

- runc
- crun
- youki
- Any OCI runtime-spec 1.0.2 compliant runtime

## 12. Test Coverage

| Area | Tests |
|------|-------|
| TIM creation | DataNode wrapping, default config |
| Serialization | Tar round-trips, large files (1MB+) |
| Encryption | ToSigil/FromSigil, wrong password detection |
| Caching | Store/Load/Delete, List, Size |
| Execution | ZIP slip prevention, temp cleanup |
| Error handling | Nil DataNode, nil config, invalid tar |

## 13. Future Work

- [ ] Image layer support
- [ ] Registry push/pull
- [ ] Multi-platform bundles
- [ ] Signature verification
- [ ] Full OCI config generation

303
RFC-015-STIM.md
Normal file
# RFC-005: STIM Encrypted Container Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004

---

## Abstract

STIM (Secure TIM) is an encrypted container format that wraps TIM bundles using ChaCha20-Poly1305 authenticated encryption. It enables secure distribution and execution of containers without exposing the contents.

## 1. Overview

STIM provides:

- Encrypted TIM containers
- ChaCha20-Poly1305 authenticated encryption
- Separate encryption of config and rootfs
- Direct execution without persistent decryption

## 2. Format Name

**ChaChaPolySigil** - The internal name for the STIM format, using:

- ChaCha20-Poly1305 algorithm (via Enchantrix library)
- Trix container wrapper with "STIM" magic

## 3. File Structure

### 3.1 Container Format

STIM uses the **Trix container format** from the Enchantrix library:

```
┌──────────────────────────────────────────────┐
│ Magic: "STIM" (4 bytes ASCII)                │
├──────────────────────────────────────────────┤
│ Trix Header (Gob-encoded map)                │
│   - encryption_algorithm: "chacha20poly1305" │
│   - tim: true                                │
│   - config_size: uint32                      │
│   - rootfs_size: uint32                      │
│   - version: "1.0"                           │
├──────────────────────────────────────────────┤
│ Trix Payload:                                │
│   [config_size: 4 bytes BE uint32]           │
│   [encrypted config]                         │
│   [encrypted rootfs tar]                     │
└──────────────────────────────────────────────┘
```

### 3.2 Payload Structure

```
Offset  Size  Field
------  ----  -------------------------------------------
0       4     Config size (big-endian uint32)
4       N     Encrypted config (includes nonce + tag)
4+N     M     Encrypted rootfs tar (includes nonce + tag)
```

### 3.3 Encrypted Component Format

Each encrypted component (config and rootfs) follows the Enchantrix format:

```
[24-byte XChaCha20 nonce][ciphertext][16-byte Poly1305 tag]
```

**Critical**: Nonces are **embedded in the ciphertext**, not transmitted separately.

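The length-prefixed payload layout of §3.2 can be exercised with `encoding/binary` alone; `buildPayload` and `splitPayload` below are hypothetical helpers mirroring the packing in §4.3 and the bounds-checked split in §5, not part of the package API:

```go
package main

import (
	"encoding/binary"
	"errors"
	"fmt"
)

// buildPayload concatenates the two encrypted blobs behind a 4-byte
// big-endian config length, as laid out in §3.2.
func buildPayload(encConfig, encRootFS []byte) []byte {
	payload := make([]byte, 4+len(encConfig)+len(encRootFS))
	binary.BigEndian.PutUint32(payload[:4], uint32(len(encConfig)))
	copy(payload[4:4+len(encConfig)], encConfig)
	copy(payload[4+len(encConfig):], encRootFS)
	return payload
}

// splitPayload reverses buildPayload, with the bounds check that
// guards against truncated or corrupted payloads.
func splitPayload(payload []byte) (encConfig, encRootFS []byte, err error) {
	if len(payload) < 4 {
		return nil, nil, errors.New("invalid stim payload")
	}
	configSize := binary.BigEndian.Uint32(payload[:4])
	if int(configSize) > len(payload)-4 {
		return nil, nil, errors.New("invalid stim payload")
	}
	return payload[4 : 4+configSize], payload[4+configSize:], nil
}

func main() {
	p := buildPayload([]byte("CFG"), []byte("ROOTFS"))
	cfg, rootfs, _ := splitPayload(p)
	fmt.Println(string(cfg), string(rootfs)) // CFG ROOTFS
}
```
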
## 4. Encryption

### 4.1 Algorithm

XChaCha20-Poly1305 (extended nonce variant)

| Parameter | Value |
|-----------|-------|
| Key size | 32 bytes |
| Nonce size | 24 bytes (embedded) |
| Tag size | 16 bytes |

### 4.2 Key Derivation

```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
	hash := sha256.Sum256([]byte(password))
	return hash[:] // 32 bytes
}
```

### 4.3 Dual Encryption

Config and RootFS are encrypted **separately** with independent nonces:

```go
// pkg/tim/tim.go:217-232
func (m *TerminalIsolationMatrix) ToSigil(password string) ([]byte, error) {
	// 1. Derive key
	key := trix.DeriveKey(password)

	// 2. Create sigil
	sigil, _ := enchantrix.NewChaChaPolySigil(key)

	// 3. Encrypt config (generates fresh nonce automatically)
	encConfig, _ := sigil.In(m.Config)

	// 4. Serialize rootfs to tar
	rootfsTar, _ := m.RootFS.ToTar()

	// 5. Encrypt rootfs (generates different fresh nonce)
	encRootFS, _ := sigil.In(rootfsTar)

	// 6. Build payload
	payload := make([]byte, 4+len(encConfig)+len(encRootFS))
	binary.BigEndian.PutUint32(payload[:4], uint32(len(encConfig)))
	copy(payload[4:4+len(encConfig)], encConfig)
	copy(payload[4+len(encConfig):], encRootFS)

	// 7. Create Trix container with STIM magic
	// ...
}
```

**Rationale for dual encryption:**

- Config can be decrypted separately for inspection
- Allows streaming decryption of large rootfs
- Independent nonces prevent any nonce reuse

## 5. Decryption Flow

```go
// pkg/tim/tim.go:255-308
func FromSigil(data []byte, password string) (*TerminalIsolationMatrix, error) {
	// 1. Decode Trix container with magic "STIM"
	t, _ := trix.Decode(data, "STIM", nil)

	// 2. Derive key from password
	key := trix.DeriveKey(password)

	// 3. Create sigil
	sigil, _ := enchantrix.NewChaChaPolySigil(key)

	// 4. Parse payload: extract configSize from first 4 bytes
	configSize := binary.BigEndian.Uint32(t.Payload[:4])

	// 5. Validate bounds
	if int(configSize) > len(t.Payload)-4 {
		return nil, ErrInvalidStimPayload
	}

	// 6. Extract encrypted components
	encConfig := t.Payload[4 : 4+configSize]
	encRootFS := t.Payload[4+configSize:]

	// 7. Decrypt config (nonce auto-extracted by Enchantrix)
	config, err := sigil.Out(encConfig)
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
	}

	// 8. Decrypt rootfs
	rootfsTar, err := sigil.Out(encRootFS)
	if err != nil {
		return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
	}

	// 9. Reconstruct DataNode from tar
	rootfs, _ := datanode.FromTar(rootfsTar)

	return &TerminalIsolationMatrix{Config: config, RootFS: rootfs}, nil
}
```

## 6. Trix Header

```go
Header: map[string]interface{}{
	"encryption_algorithm": "chacha20poly1305",
	"tim":                  true,
	"config_size":          len(encConfig),
	"rootfs_size":          len(encRootFS),
	"version":              "1.0",
}
```

## 7. CLI Usage

```bash
# Create encrypted container
borg compile -f Borgfile -e "password" -o container.stim

# Run encrypted container
borg run container.stim -p "password"

# Decode (extract) encrypted container
borg decode container.stim -p "password" --i-am-in-isolation -o container.tar

# Inspect without decrypting (shows header metadata only)
borg inspect container.stim
# Output:
#   Format: STIM
#   encryption_algorithm: chacha20poly1305
#   config_size: 1234
#   rootfs_size: 567890
```

## 8. Cache API

```go
// Create cache with master password
cache, err := tim.NewCache("/path/to/cache", masterPassword)

// Store TIM (encrypted automatically as .stim)
err = cache.Store("name", m)

// Load TIM (decrypted automatically)
m, err = cache.Load("name")

// List cached containers
names, err := cache.List()
```

## 9. Execution Security

```go
// Secure execution flow
func RunEncrypted(path, password string) error {
	// 1. Create secure temp directory
	tmpDir, _ := os.MkdirTemp("", "borg-run-*")
	defer os.RemoveAll(tmpDir) // Secure cleanup

	// 2. Read and decrypt
	data, _ := os.ReadFile(path)
	m, _ := FromSigil(data, password)

	// 3. Extract to temp
	m.ExtractTo(tmpDir)

	// 4. Execute with runc
	return runRunc(tmpDir)
}
```

## 10. Security Properties

### 10.1 Confidentiality

- Contents encrypted with ChaCha20-Poly1305
- Password-derived key never stored
- Nonces are random, never reused

### 10.2 Integrity

- Poly1305 MAC prevents tampering
- Decryption fails if modified
- Separate MACs for config and rootfs

### 10.3 Error Detection

| Error | Cause |
|-------|-------|
| `ErrPasswordRequired` | Empty password provided |
| `ErrInvalidStimPayload` | Payload < 4 bytes or invalid size |
| `ErrDecryptionFailed` | Wrong password or corrupted data |

## 11. Comparison to TRIX

| Feature | STIM | TRIX |
|---------|------|------|
| Algorithm | ChaCha20-Poly1305 | PGP/AES or ChaCha |
| Content | TIM bundles | DataNode (raw files) |
| Structure | Dual encryption | Single blob |
| Magic | "STIM" | "TRIX" |
| Use case | Container execution | General encryption, accounts |

STIM is for containers. TRIX is for general file encryption and accounts.

## 12. Implementation Reference

- Encryption: `pkg/tim/tim.go` (ToSigil, FromSigil)
- Key derivation: `pkg/trix/trix.go` (DeriveKey)
- Cache: `pkg/tim/cache.go`
- CLI: `cmd/run.go`, `cmd/decode.go`, `cmd/compile.go`
- Enchantrix: `github.com/Snider/Enchantrix`

## 13. Security Considerations

1. **Password strength**: Recommend 64+ bits entropy (12+ chars)
2. **Key derivation**: SHA-256 only (no stretching) - use strong passwords
3. **Memory handling**: Keys should be wiped after use
4. **Temp files**: Use tmpfs when available, secure wipe after
5. **Side channels**: Enchantrix uses constant-time crypto operations

## 14. Future Work

- [ ] Hardware key support (YubiKey, TPM)
- [ ] Key stretching (Argon2)
- [ ] Multi-recipient encryption
- [ ] Streaming decryption for large rootfs

342
RFC-016-TRIX-PGP.md
Normal file
# RFC-006: TRIX PGP Encryption Format

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003

---

## Abstract

TRIX is a PGP-based encryption format for DataNode archives and account credentials. It provides symmetric and asymmetric encryption using OpenPGP standards and ChaCha20-Poly1305, enabling secure data exchange and identity management.

## 1. Overview

TRIX provides:

- PGP symmetric encryption for DataNode archives
- ChaCha20-Poly1305 modern encryption
- PGP armored keys for account/identity management
- Integration with Enchantrix library

## 2. Public API

### 2.1 Key Derivation

```go
// pkg/trix/trix.go:64-67
func DeriveKey(password string) []byte {
	hash := sha256.Sum256([]byte(password))
	return hash[:] // 32 bytes
}
```

- Input: password string (any length)
- Output: 32-byte key (256 bits)
- Algorithm: SHA-256 hash of UTF-8 bytes
- Deterministic: identical passwords → identical keys

### 2.2 Legacy PGP Encryption

```go
// Encrypt DataNode to TRIX (PGP symmetric)
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error)

// Decrypt TRIX to DataNode (DISABLED for encrypted payloads)
func FromTrix(data []byte, password string) (*datanode.DataNode, error)
```

**Note**: `FromTrix` with a non-empty password returns the error `"decryption disabled: cannot accept encrypted payloads"`. This is intentional, to prevent accidental password use.

### 2.3 Modern ChaCha20-Poly1305 Encryption

```go
// Encrypt with ChaCha20-Poly1305
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error)

// Decrypt ChaCha20-Poly1305
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error)
```

### 2.4 Error Variables

```go
var (
	ErrPasswordRequired = errors.New("password is required for encryption")
	ErrDecryptionFailed = errors.New("decryption failed (wrong password?)")
)
```

## 3. File Format

### 3.1 Container Structure

```
[4 bytes]  Magic: "TRIX" (ASCII)
[Variable] Gob-encoded Header (map[string]interface{})
[Variable] Payload (encrypted or unencrypted tarball)
```

### 3.2 Header Examples

**Unencrypted:**
```go
Header: map[string]interface{}{} // Empty map
```

**ChaCha20-Poly1305:**
```go
Header: map[string]interface{}{
    "encryption_algorithm": "chacha20poly1305",
}
```

### 3.3 ChaCha20-Poly1305 Payload

```
[24 bytes] XChaCha20 Nonce (embedded)
[N bytes]  Encrypted tar archive
[16 bytes] Poly1305 authentication tag
```

**Note**: Nonces are embedded in the ciphertext by Enchantrix, not stored separately.

## 4. Encryption Workflows

### 4.1 ChaCha20-Poly1305 (Recommended)

```go
// Encryption
func ToTrixChaCha(dn *datanode.DataNode, password string) ([]byte, error) {
    // 1. Validate password is non-empty
    if password == "" {
        return nil, ErrPasswordRequired
    }

    // 2. Serialize DataNode to tar
    tarball, _ := dn.ToTar()

    // 3. Derive 32-byte key
    key := DeriveKey(password)

    // 4. Create sigil and encrypt
    sigil, _ := enchantrix.NewChaChaPolySigil(key)
    encrypted, _ := sigil.In(tarball) // Generates nonce automatically

    // 5. Create Trix container
    t := &trix.Trix{
        Header:  map[string]interface{}{"encryption_algorithm": "chacha20poly1305"},
        Payload: encrypted,
    }

    // 6. Encode with TRIX magic
    return trix.Encode(t, "TRIX", nil)
}
```

### 4.2 Decryption

```go
func FromTrixChaCha(data []byte, password string) (*datanode.DataNode, error) {
    // 1. Validate password
    if password == "" {
        return nil, ErrPasswordRequired
    }

    // 2. Decode TRIX container
    t, _ := trix.Decode(data, "TRIX", nil)

    // 3. Derive key and decrypt
    key := DeriveKey(password)
    sigil, _ := enchantrix.NewChaChaPolySigil(key)
    tarball, err := sigil.Out(t.Payload) // Extracts nonce, verifies MAC
    if err != nil {
        return nil, fmt.Errorf("%w: %v", ErrDecryptionFailed, err)
    }

    // 4. Deserialize DataNode
    return datanode.FromTar(tarball)
}
```

### 4.3 Legacy PGP (Disabled Decryption)

```go
func ToTrix(dn *datanode.DataNode, password string) ([]byte, error) {
    tarball, _ := dn.ToTar()

    var payload []byte
    if password != "" {
        // PGP symmetric encryption
        cryptService := crypt.NewService()
        payload, _ = cryptService.SymmetricallyEncryptPGP([]byte(password), tarball)
    } else {
        payload = tarball
    }

    t := &trix.Trix{Header: map[string]interface{}{}, Payload: payload}
    return trix.Encode(t, "TRIX", nil)
}

func FromTrix(data []byte, password string) (*datanode.DataNode, error) {
    // Security: Reject encrypted payloads
    if password != "" {
        return nil, errors.New("decryption disabled: cannot accept encrypted payloads")
    }

    t, _ := trix.Decode(data, "TRIX", nil)
    return datanode.FromTar(t.Payload)
}
```

## 5. Enchantrix Library

### 5.1 Dependencies

```go
import (
    "github.com/Snider/Enchantrix/pkg/trix"       // Container format
    "github.com/Snider/Enchantrix/pkg/crypt"      // PGP operations
    "github.com/Snider/Enchantrix/pkg/enchantrix" // AEAD sigils
)
```

### 5.2 Trix Container

```go
type Trix struct {
    Header  map[string]interface{}
    Payload []byte
}

func Encode(t *Trix, magic string, extra interface{}) ([]byte, error)
func Decode(data []byte, magic string, extra interface{}) (*Trix, error)
```

### 5.3 ChaCha20-Poly1305 Sigil

```go
// Create sigil with 32-byte key
sigil, err := enchantrix.NewChaChaPolySigil(key)

// Encrypt (generates random 24-byte nonce)
ciphertext, err := sigil.In(plaintext)

// Decrypt (extracts nonce, verifies MAC)
plaintext, err := sigil.Out(ciphertext)
```

## 6. Account System Integration

### 6.1 PGP Armored Keys

```
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQENBGX...base64...
-----END PGP PUBLIC KEY BLOCK-----
```

### 6.2 Key Storage

```
~/.borg/
├── identity.pub   # PGP public key (armored)
├── identity.key   # PGP private key (armored, encrypted)
└── keyring/       # Trusted public keys
```

## 7. CLI Usage

```bash
# Encrypt with TRIX (PGP symmetric)
borg collect github repo https://github.com/user/repo \
  --format trix \
  --password "password"

# Decrypt unencrypted TRIX
borg decode archive.trix -o decoded.tar

# Inspect without decrypting
borg inspect archive.trix
# Output:
#   Format: TRIX
#   encryption_algorithm: chacha20poly1305 (if present)
#   Payload Size: N bytes
```

## 8. Format Comparison

| Format | Extension | Algorithm | Use Case |
|--------|-----------|-----------|----------|
| `datanode` | `.tar` | None | Uncompressed archive |
| `tim` | `.tim` | None | Container bundle |
| `trix` | `.trix` | PGP/AES or ChaCha | Encrypted archives, accounts |
| `stim` | `.stim` | ChaCha20-Poly1305 | Encrypted containers |
| `smsg` | `.smsg` | ChaCha20-Poly1305 | Encrypted media |

## 9. Security Analysis

### 9.1 Key Derivation Limitations

**Current implementation: SHA-256 (single round)**

| Metric | Value |
|--------|-------|
| Algorithm | SHA-256 |
| Iterations | 1 |
| Salt | None |
| Key stretching | None |

**Implications:**
- GPU brute force: ~10 billion guesses/second
- 8-character lowercase password: ~20 seconds to break; mixed-case alphanumeric: a few hours
- Recommendation: Use 15+ character passwords

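To illustrate what key stretching would add (see Future Work), here is a deliberately naive iterated-hash sketch with a salt. This is an illustration of the concept only, not part of the current API, and not a substitute for a memory-hard KDF such as Argon2id; `stretchKey` is a hypothetical name.

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// stretchKey derives a 32-byte key by salting the password and iterating
// SHA-256. Each extra iteration multiplies the attacker's per-guess cost;
// a memory-hard KDF (Argon2id) additionally raises the memory cost.
func stretchKey(password string, salt []byte, iterations int) []byte {
    h := sha256.Sum256(append(salt, []byte(password)...))
    for i := 1; i < iterations; i++ {
        h = sha256.Sum256(h[:])
    }
    return h[:]
}

func main() {
    key := stretchKey("correct horse battery staple", []byte("per-archive-salt"), 600000)
    fmt.Printf("%d-byte key: %x...\n", len(key), key[:4])
}
```

With a per-archive random salt, identical passwords no longer yield identical keys across archives, which is the property the current single-round `DeriveKey` lacks.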
### 9.2 ChaCha20-Poly1305 Properties

| Property | Status |
|----------|--------|
| Authentication | Poly1305 MAC (16 bytes) |
| Key size | 256 bits |
| Nonce size | 192 bits (XChaCha) |
| Standard | RFC 8439 ChaCha20-Poly1305, with the XChaCha20 extended nonce |

## 10. Test Coverage

| Test | Description |
|------|-------------|
| DeriveKey length | Output is exactly 32 bytes |
| DeriveKey determinism | Same password → same key |
| DeriveKey uniqueness | Different passwords → different keys |
| ToTrix without password | Valid TRIX with "TRIX" magic |
| ToTrix with password | PGP encryption applied |
| FromTrix unencrypted | Round-trip preserves files |
| FromTrix password rejection | Returns error |
| ToTrixChaCha success | Valid TRIX created |
| ToTrixChaCha empty password | Returns ErrPasswordRequired |
| FromTrixChaCha round-trip | Preserves nested directories |
| FromTrixChaCha wrong password | Returns ErrDecryptionFailed |
| FromTrixChaCha large data | 1MB file processed |

## 11. Implementation Reference

- Source: `pkg/trix/trix.go`
- Tests: `pkg/trix/trix_test.go`
- Enchantrix: `github.com/Snider/Enchantrix v0.0.2`

## 12. Security Considerations

1. **Use strong passwords**: 15+ characters due to no key stretching
2. **Prefer ChaCha**: Use `ToTrixChaCha` over legacy PGP
3. **Key backup**: Securely backup private keys
4. **Interoperability**: TRIX files with GPG require password

## 13. Future Work

- [ ] Key stretching (Argon2 option in DeriveKey)
- [ ] Public key encryption support
- [ ] Signature support
- [ ] Key expiration metadata
- [ ] Multi-recipient encryption

---

**File:** `RFC-017-LTHN-KEY-DERIVATION.md` (new file, 355 lines)

# RFC-007: LTHN Key Derivation

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-002

---

## Abstract

LTHN (Leet-Hash-Nonce) is a rainbow-table resistant key derivation function used for streaming DRM with time-limited access. It generates rolling keys that automatically expire without requiring revocation infrastructure.

## 1. Overview

LTHN provides:
- Rainbow-table resistant hashing
- Time-based key rolling
- Zero-trust key derivation (no key server)
- Configurable cadence (daily to hourly)

## 2. Motivation

Traditional DRM requires:
- Central key server
- License validation
- Revocation lists
- Network connectivity

LTHN eliminates these by:
- Deriving keys from public information + secret
- Time-bounding keys automatically
- Making rainbow tables impractical
- Working completely offline

## 3. Algorithm

### 3.1 Core Function

The LTHN hash is implemented in the Enchantrix library:

```go
import "github.com/Snider/Enchantrix/pkg/crypt"

cryptService := crypt.NewService()
lthnHash := cryptService.Hash(crypt.LTHN, input)
```

**LTHN formula**:
```
LTHN(input) = SHA256(input || reverse_leet(input))
```

Where `reverse_leet` performs the leet character substitution from §3.2 and then reverses the resulting string.

### 3.2 Reverse Leet Mapping

| Original | Leet | Bidirectional |
|----------|------|---------------|
| o | 0 | o ↔ 0 |
| l | 1 | l ↔ 1 |
| e | 3 | e ↔ 3 |
| a | 4 | a ↔ 4 |
| s | z | s ↔ z |
| t | 7 | t ↔ 7 |

### 3.3 Example

```
Input:        "2026-01-13:license:fp"
reverse_leet: "pf:3zn3ci1:31-10-6202"
Combined:     "2026-01-13:license:fppf:3zn3ci1:31-10-6202"
Result:       SHA256(combined) → 32-byte hash
```

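The formula and worked example above can be sketched in plain Go. This is an illustrative reconstruction, not the Enchantrix implementation: following the worked example, letters are substituted toward their leet forms first and the result is then reversed; `reverseLeet` and `lthn` are hypothetical names.

```go
package main

import (
    "crypto/sha256"
    "fmt"
)

// Letter→leet substitution as used by the worked example in §3.3.
var leet = map[rune]rune{'o': '0', 'l': '1', 'e': '3', 'a': '4', 's': 'z', 't': '7'}

// reverseLeet substitutes leet characters, then reverses the string.
func reverseLeet(s string) string {
    r := []rune(s)
    for i, c := range r {
        if sub, ok := leet[c]; ok {
            r[i] = sub
        }
    }
    for i, j := 0, len(r)-1; i < j; i, j = i+1, j-1 {
        r[i], r[j] = r[j], r[i]
    }
    return string(r)
}

// lthn hashes the input concatenated with its reverse-leet form,
// so the "salt" is derived from the input itself.
func lthn(input string) [32]byte {
    return sha256.Sum256([]byte(input + reverseLeet(input)))
}

func main() {
    fmt.Println(reverseLeet("2026-01-13:license:fp")) // pf:3zn3ci1:31-10-6202
    fmt.Printf("%x\n", lthn("2026-01-13:license:fp"))
}
```

Note that running the substitution before reversing reproduces the example output exactly; applying the digit→letter direction as well would alter the date digits.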
## 4. Stream Key Derivation

### 4.1 Implementation

```go
// pkg/smsg/stream.go:49-60
func DeriveStreamKey(date, license, fingerprint string) []byte {
    input := fmt.Sprintf("%s:%s:%s", date, license, fingerprint)
    cryptService := crypt.NewService()
    lthnHash := cryptService.Hash(crypt.LTHN, input)
    key := sha256.Sum256([]byte(lthnHash))
    return key[:]
}
```

### 4.2 Input Format

```
period:license:fingerprint

Where:
- period: Time period identifier (see Cadence)
- license: User's license key (password)
- fingerprint: Device/browser fingerprint
```

### 4.3 Output

32-byte key suitable for ChaCha20-Poly1305.

## 5. Cadence

### 5.1 Options

| Cadence | Constant | Period Format | Example | Duration |
|---------|----------|---------------|---------|----------|
| Daily | `CadenceDaily` | `2006-01-02` | `2026-01-13` | 24h |
| 12-hour | `CadenceHalfDay` | `2006-01-02-AM/PM` | `2026-01-13-PM` | 12h |
| 6-hour | `CadenceQuarter` | `2006-01-02-HH` | `2026-01-13-12` | 6h |
| Hourly | `CadenceHourly` | `2006-01-02-HH` | `2026-01-13-15` | 1h |

### 5.2 Period Calculation

```go
// pkg/smsg/stream.go:73-119
func GetCurrentPeriod(cadence Cadence) string {
    return GetPeriodAt(time.Now(), cadence)
}

func GetPeriodAt(t time.Time, cadence Cadence) string {
    switch cadence {
    case CadenceDaily:
        return t.Format("2006-01-02")
    case CadenceHalfDay:
        suffix := "AM"
        if t.Hour() >= 12 {
            suffix = "PM"
        }
        return t.Format("2006-01-02") + "-" + suffix
    case CadenceQuarter:
        bucket := (t.Hour() / 6) * 6
        return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), bucket)
    case CadenceHourly:
        return fmt.Sprintf("%s-%02d", t.Format("2006-01-02"), t.Hour())
    }
    return t.Format("2006-01-02")
}

func GetNextPeriod(cadence Cadence) string {
    return GetPeriodAt(time.Now().Add(GetCadenceDuration(cadence)), cadence)
}
```

### 5.3 Duration Mapping

```go
func GetCadenceDuration(cadence Cadence) time.Duration {
    switch cadence {
    case CadenceDaily:
        return 24 * time.Hour
    case CadenceHalfDay:
        return 12 * time.Hour
    case CadenceQuarter:
        return 6 * time.Hour
    case CadenceHourly:
        return 1 * time.Hour
    }
    return 24 * time.Hour
}
```

## 6. Rolling Windows

### 6.1 Dual-Key Strategy

At encryption time, the CEK is wrapped with **two** keys:
1. Current period key
2. Next period key

This creates a rolling validity window:

```
Time: 2026-01-13 23:30 (daily cadence)

Valid keys:
- "2026-01-13:license:fp" (current period)
- "2026-01-14:license:fp" (next period)

Window: 24-48 hours of validity
```

### 6.2 Key Wrapping

```go
// pkg/smsg/stream.go:135-155
func WrapCEK(cek []byte, streamKey []byte) (string, error) {
    sigil := enchantrix.NewChaChaPolySigil()
    wrapped, err := sigil.Seal(cek, streamKey)
    if err != nil {
        return "", err
    }
    return base64.StdEncoding.EncodeToString(wrapped), nil
}
```

**Wrapped format**:
```
[24-byte nonce][encrypted CEK][16-byte auth tag]
→ base64 encoded for header storage
```

### 6.3 Key Unwrapping

```go
// pkg/smsg/stream.go:157-170
func UnwrapCEK(wrapped string, streamKey []byte) ([]byte, error) {
    data, err := base64.StdEncoding.DecodeString(wrapped)
    if err != nil {
        return nil, err
    }
    sigil := enchantrix.NewChaChaPolySigil()
    return sigil.Open(data, streamKey)
}
```

### 6.4 Decryption Flow

```go
// pkg/smsg/stream.go:606-633
func UnwrapCEKFromHeader(header *V3Header, params *StreamParams) ([]byte, error) {
    // Try current period first
    currentPeriod := GetCurrentPeriod(params.Cadence)
    currentKey := DeriveStreamKey(currentPeriod, params.License, params.Fingerprint)

    for _, wk := range header.WrappedKeys {
        cek, err := UnwrapCEK(wk.Key, currentKey)
        if err == nil {
            return cek, nil
        }
    }

    // Try next period (for clock skew)
    nextPeriod := GetNextPeriod(params.Cadence)
    nextKey := DeriveStreamKey(nextPeriod, params.License, params.Fingerprint)

    for _, wk := range header.WrappedKeys {
        cek, err := UnwrapCEK(wk.Key, nextKey)
        if err == nil {
            return cek, nil
        }
    }

    return nil, ErrKeyExpired
}
```

## 7. V3 Header Format

```go
type V3Header struct {
    Format      string       `json:"format"` // "v3"
    Manifest    *Manifest    `json:"manifest"`
    WrappedKeys []WrappedKey `json:"wrappedKeys"`
    Chunked     *ChunkInfo   `json:"chunked,omitempty"`
}

type WrappedKey struct {
    Period string `json:"period"` // e.g., "2026-01-13"
    Key    string `json:"key"`    // base64-encoded wrapped CEK
}
```

## 8. Rainbow Table Resistance

### 8.1 Why It Works

Standard hash:
```
SHA256("2026-01-13:license:fp") → predictable, precomputable
```

LTHN hash:
```
LTHN("2026-01-13:license:fp")
  = SHA256("2026-01-13:license:fp" + reverse_leet("2026-01-13:license:fp"))
  = SHA256("2026-01-13:license:fp" + "pf:3zn3ci1:31-10-6202")
```

The salt is **derived from the input itself**, making precomputation impractical:
- Each unique input has a unique salt
- Cannot build rainbow tables without knowing all possible inputs
- Input space includes license keys (high entropy)

### 8.2 Security Analysis

| Attack | Mitigation |
|--------|------------|
| Rainbow tables | Input-derived salt makes precomputation infeasible |
| Brute force | License key entropy (64+ bits recommended) |
| Time oracle | Rolling window prevents precise timing attacks |
| Key sharing | Keys expire within cadence window |

## 9. Zero-Trust Properties

| Property | Implementation |
|----------|----------------|
| No key server | Keys derived locally from LTHN |
| Auto-expiration | Rolling periods invalidate old keys |
| No revocation | Keys naturally expire within cadence window |
| Device binding | Fingerprint in derivation input |
| User binding | License key in derivation input |

## 10. Test Vectors

From `pkg/smsg/stream_test.go`:

```go
// Stream key generation
date := "2026-01-12"
license := "test-license"
fingerprint := "test-fp"
key := DeriveStreamKey(date, license, fingerprint)
// key is 32 bytes, deterministic

// Period calculation at 2026-01-12 15:30:00 UTC
t := time.Date(2026, 1, 12, 15, 30, 0, 0, time.UTC)

GetPeriodAt(t, CadenceDaily)   // "2026-01-12"
GetPeriodAt(t, CadenceHalfDay) // "2026-01-12-PM"
GetPeriodAt(t, CadenceQuarter) // "2026-01-12-12"
GetPeriodAt(t, CadenceHourly)  // "2026-01-12-15"

// Next periods
// Daily: "2026-01-12"    → "2026-01-13"
// 12h:   "2026-01-12-PM" → "2026-01-13-AM"
// 6h:    "2026-01-12-12" → "2026-01-12-18"
// 1h:    "2026-01-12-15" → "2026-01-12-16"
```

## 11. Implementation Reference

- Stream key derivation: `pkg/smsg/stream.go`
- LTHN hash: `github.com/Snider/Enchantrix/pkg/crypt`
- WASM bindings: `pkg/wasm/stmf/main.go` (decryptV3, unwrapCEK)
- Tests: `pkg/smsg/stream_test.go`

## 12. Security Considerations

1. **License entropy**: Recommend 64+ bits (12+ alphanumeric chars)
2. **Fingerprint stability**: Should be stable but not user-controllable
3. **Clock skew**: Rolling windows handle ±1 period drift
4. **Key exposure**: Derived keys valid only for one period

## 13. References

- RFC-002: SMSG Format (v3 streaming)
- RFC-001: OSS DRM (Section 3.4)
- RFC 8439: ChaCha20-Poly1305
- Enchantrix: github.com/Snider/Enchantrix

---

**File:** `RFC-018-BORGFILE.md` (new file, 255 lines)

# RFC-008: Borgfile Compilation

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-003, RFC-004

---

## Abstract

Borgfile is a declarative syntax for defining TIM container contents. It specifies how local files are mapped into the container filesystem, enabling reproducible container builds.

## 1. Overview

Borgfile provides:
- Dockerfile-like syntax for familiarity
- File mapping into containers
- Simple ADD directive
- Integration with TIM encryption

## 2. File Format

### 2.1 Location

- Default: `Borgfile` in current directory
- Override: `borg compile -f path/to/Borgfile`

### 2.2 Encoding

- UTF-8 text
- Unix line endings (LF)
- No BOM

## 3. Syntax

### 3.1 Parsing Implementation

```go
// cmd/compile.go:33-54
lines := strings.Split(content, "\n")
for _, line := range lines {
    parts := strings.Fields(line) // Whitespace-separated tokens
    if len(parts) == 0 {
        continue // Skip empty lines
    }
    switch parts[0] {
    case "ADD":
        // Process ADD directive
    default:
        return fmt.Errorf("unknown instruction: %s", parts[0])
    }
}
```

### 3.2 ADD Directive

```
ADD <source> <destination>
```

| Parameter | Description |
|-----------|-------------|
| source | Local path (relative to current working directory) |
| destination | Container path (leading slash stripped) |

### 3.3 Examples

```dockerfile
# Add single file
ADD ./app /usr/local/bin/app

# Add configuration
ADD ./config.yaml /etc/myapp/config.yaml

# Multiple files
ADD ./bin/server /app/server
ADD ./static /app/static
```

**Note**: `#` comment lines, shown here for readability, are not yet parsed (see Current Limitations) and will be rejected as unknown instructions.

## 4. Path Resolution

### 4.1 Source Paths

- Resolved relative to **current working directory** (not Borgfile location)
- Must exist at compile time
- Read via `os.ReadFile(src)`

### 4.2 Destination Paths

- Leading slash stripped: `strings.TrimPrefix(dest, "/")`
- Added to DataNode as-is

```go
// cmd/compile.go:46-50
data, err := os.ReadFile(src)
if err != nil {
    return fmt.Errorf("invalid ADD instruction: %s", line)
}
name := strings.TrimPrefix(dest, "/")
m.RootFS.AddData(name, data)
```

## 5. File Handling

### 5.1 Permissions

**Current implementation**: Permissions are NOT preserved.

| Source | Container |
|--------|-----------|
| Any file | 0600 (hardcoded in DataNode.ToTar) |
| Any directory | 0755 (implicit) |

### 5.2 Timestamps

- Set to `time.Now()` when added to DataNode
- Original timestamps not preserved

### 5.3 File Types

- Regular files only
- No directory recursion (each file must be added explicitly)
- No symlink following

## 6. Error Handling

| Error | Cause |
|-------|-------|
| `invalid ADD instruction: {line}` | Wrong number of arguments |
| `os.ReadFile` error | Source file not found |
| `unknown instruction: {name}` | Unrecognized directive |
| `ErrPasswordRequired` | Encryption requested without password |

## 7. CLI Flags

```
// cmd/compile.go:80-82
-f, --file string     Path to Borgfile (default: "Borgfile")
-o, --output string   Output path (default: "a.tim")
-e, --encrypt string  Password for .stim encryption (optional)
```

## 8. Output Formats

### 8.1 Plain TIM

```bash
borg compile -f Borgfile -o container.tim
```

Output: Standard TIM tar archive with `config.json` + `rootfs/`

### 8.2 Encrypted STIM

```bash
borg compile -f Borgfile -e "password" -o container.stim
```

Output: ChaCha20-Poly1305 encrypted STIM container

**Auto-detection**: If the `-e` flag is provided, output automatically uses the `.stim` format even if `-o` specifies `.tim`.

## 9. Default OCI Config

The current implementation creates a minimal config:

```go
// pkg/tim/config.go:6-10
func defaultConfig() (*trix.Trix, error) {
    return &trix.Trix{Header: make(map[string]interface{})}, nil
}
```

**Note**: This is a placeholder. For full OCI runtime execution, you'll need to provide a proper `config.json` in the container or modify the TIM after compilation.

## 10. Compilation Process

```
1. Read Borgfile content
2. Parse line-by-line
3. For each ADD directive:
   a. Read source file from filesystem
   b. Strip leading slash from destination
   c. Add to DataNode
4. Create TIM with default config + populated RootFS
5. If password provided:
   a. Encrypt to STIM via ToSigil()
   b. Adjust output extension to .stim
6. Write output file
```

## 11. Implementation Reference

- Parser/Compiler: `cmd/compile.go`
- TIM creation: `pkg/tim/tim.go`
- DataNode: `pkg/datanode/datanode.go`
- Tests: `cmd/compile_test.go`

## 12. Current Limitations

| Feature | Status |
|---------|--------|
| Comment support (`#`) | Not implemented |
| Quoted paths | Not implemented |
| Directory recursion | Not implemented |
| Permission preservation | Not implemented |
| Path resolution relative to Borgfile | Not implemented (uses CWD) |
| Full OCI config generation | Not implemented (empty header) |
| Symlink following | Not implemented |

## 13. Examples

### 13.1 Simple Application

```dockerfile
ADD ./myapp /usr/local/bin/myapp
ADD ./config.yaml /etc/myapp/config.yaml
```

### 13.2 Web Application

```dockerfile
ADD ./server /app/server
ADD ./index.html /app/static/index.html
ADD ./style.css /app/static/style.css
ADD ./app.js /app/static/app.js
```

### 13.3 With Encryption

```bash
# Create Borgfile
cat > Borgfile << 'EOF'
ADD ./secret-app /app/secret-app
ADD ./credentials.json /etc/app/credentials.json
EOF

# Compile with encryption
borg compile -f Borgfile -e "MySecretPassword123" -o secret.stim
```

## 14. Future Work

- [ ] Comment support (`#`)
- [ ] Quoted path support for spaces
- [ ] Directory recursion in ADD
- [ ] Permission preservation
- [ ] Path resolution relative to Borgfile location
- [ ] Full OCI config generation
- [ ] Variable substitution (`${VAR}`)
- [ ] Include directive
- [ ] Glob patterns in source
- [ ] COPY directive (alias for ADD)

---

**File:** `RFC-019-STMF.md` (new file, 365 lines)

# RFC-009: STMF Secure To-Me Form

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2

---

## Abstract

STMF (Secure To-Me Form) provides asymmetric encryption for web form submissions. It enables end-to-end encrypted form data where only the recipient can decrypt submissions, protecting sensitive data from server compromise.

## 1. Overview

STMF provides:
- Asymmetric encryption for form data
- X25519 key exchange
- ChaCha20-Poly1305 for payload encryption
- Browser-based encryption via WASM
- HTTP middleware for server-side decryption

## 2. Cryptographic Primitives

### 2.1 Key Exchange

X25519 (Curve25519 Diffie-Hellman)

| Parameter | Value |
|-----------|-------|
| Private key | 32 bytes |
| Public key | 32 bytes |
| Shared secret | 32 bytes |

### 2.2 Encryption

ChaCha20-Poly1305

| Parameter | Value |
|-----------|-------|
| Key | 32 bytes (SHA-256 of shared secret) |
| Nonce | 24 bytes (XChaCha variant) |
| Tag | 16 bytes |

## 3. Protocol

### 3.1 Setup (One-time)

```
Recipient (Server):
1. Generate X25519 keypair
2. Publish public key (embed in page or API)
3. Store private key securely
```

### 3.2 Encryption Flow (Browser)

```
1. Fetch recipient's public key
2. Generate ephemeral X25519 keypair
3. Compute shared secret: X25519(ephemeral_private, recipient_public)
4. Derive encryption key: SHA256(shared_secret)
5. Encrypt form data: ChaCha20-Poly1305(data, key, random_nonce)
6. Send: {ephemeral_public, nonce, ciphertext}
```

### 3.3 Decryption Flow (Server)

```
1. Receive {ephemeral_public, nonce, ciphertext}
2. Compute shared secret: X25519(recipient_private, ephemeral_public)
3. Derive encryption key: SHA256(shared_secret)
4. Decrypt: ChaCha20-Poly1305_Open(ciphertext, key, nonce)
```

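The two flows compose into a round trip. The sketch below uses the standard library's `crypto/ecdh` for X25519 and, to stay dependency-free, substitutes AES-GCM for XChaCha20-Poly1305; it illustrates the protocol shape, not the STMF wire format, and `roundTrip`/`aeadFor` are hypothetical names.

```go
package main

import (
    "crypto/aes"
    "crypto/cipher"
    "crypto/ecdh"
    "crypto/rand"
    "crypto/sha256"
    "fmt"
)

// aeadFor derives the symmetric key from the shared secret (protocol step 4)
// and builds an AEAD. STMF uses XChaCha20-Poly1305; AES-GCM stands in here.
func aeadFor(shared []byte) cipher.AEAD {
    key := sha256.Sum256(shared)
    block, _ := aes.NewCipher(key[:])
    aead, _ := cipher.NewGCM(block)
    return aead
}

// roundTrip runs the browser-side encryption and server-side decryption.
func roundTrip(msg []byte) []byte {
    curve := ecdh.X25519()

    // Setup (server): long-term recipient keypair.
    serverPriv, _ := curve.GenerateKey(rand.Reader)

    // Browser: ephemeral keypair + shared secret with recipient public key.
    ephPriv, _ := curve.GenerateKey(rand.Reader)
    shared, _ := ephPriv.ECDH(serverPriv.PublicKey())

    aead := aeadFor(shared)
    nonce := make([]byte, aead.NonceSize())
    rand.Read(nonce)
    ct := aead.Seal(nil, nonce, msg, nil)
    // Transmitted: {ephPriv.PublicKey(), nonce, ct}

    // Server: recompute the same shared secret from the ephemeral public key.
    shared2, _ := serverPriv.ECDH(ephPriv.PublicKey())
    pt, err := aeadFor(shared2).Open(nil, nonce, ct, nil)
    if err != nil {
        panic(err) // MAC failure would mean tampering or a key mismatch
    }
    return pt
}

func main() {
    fmt.Println(string(roundTrip([]byte(`{"email":"user@example.com"}`))))
}
```

Because the sender's keypair is ephemeral, each submission derives a fresh shared secret, so compromising one ciphertext's key reveals nothing about other submissions.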
## 4. Wire Format

### 4.1 Container (Trix-based)

```
[Magic: "STMF" (4 bytes)]
[Header: Gob-encoded JSON]
[Payload: ChaCha20-Poly1305 ciphertext]
```

### 4.2 Header Structure

```json
{
  "version": "1.0",
  "algorithm": "x25519-chacha20poly1305",
  "ephemeral_pk": "<base64 32-byte ephemeral public key>"
}
```

### 4.3 Transmission

- Default form field: `_stmf_payload`
- Encoding: Base64 string
- Content-Type: `application/x-www-form-urlencoded` or `multipart/form-data`

## 5. Data Structures

### 5.1 FormField

```go
type FormField struct {
    Name     string // Field name
    Value    string // Base64 for files, plaintext otherwise
    Type     string // "text", "password", "file"
    Filename string // For file uploads
    MimeType string // For file uploads
}
```

### 5.2 FormData

```go
type FormData struct {
    Fields   []FormField       // Array of form fields
    Metadata map[string]string // Arbitrary key-value metadata
}
```

### 5.3 Builder Pattern

```go
formData := NewFormData().
    AddField("email", "user@example.com").
    AddFieldWithType("password", "secret", "password").
    AddFile("document", base64Content, "report.pdf", "application/pdf").
    SetMetadata("timestamp", time.Now().String())
```

## 6. Key Management API

### 6.1 Key Generation

```go
// pkg/stmf/keypair.go
func GenerateKeyPair() (*KeyPair, error)

type KeyPair struct {
	privateKey *ecdh.PrivateKey
	publicKey  *ecdh.PublicKey
}
```

### 6.2 Key Loading

```go
// From raw bytes
func LoadPublicKey(data []byte) (*ecdh.PublicKey, error)
func LoadPrivateKey(data []byte) (*ecdh.PrivateKey, error)

// From base64
func LoadPublicKeyBase64(encoded string) (*ecdh.PublicKey, error)
func LoadPrivateKeyBase64(encoded string) (*ecdh.PrivateKey, error)

// Reconstruct keypair from private key
func LoadKeyPair(privateKeyBytes []byte) (*KeyPair, error)
```

### 6.3 Key Export

```go
func (kp *KeyPair) PublicKey() []byte        // Raw 32 bytes
func (kp *KeyPair) PrivateKey() []byte       // Raw 32 bytes
func (kp *KeyPair) PublicKeyBase64() string  // Base64 encoded
func (kp *KeyPair) PrivateKeyBase64() string // Base64 encoded
```

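
What `GenerateKeyPair`, the Base64 exporters, and `LoadKeyPair` do can be approximated with `crypto/ecdh` directly — in particular, reconstructing a keypair from the private key alone works because X25519 recomputes the public key from the private scalar. A sketch under those assumptions, not the package source:

```go
package main

import (
	"crypto/ecdh"
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// publicKeyRoundTrip generates a keypair, exports both keys as base64,
// reloads the keypair from the private key alone, and reports whether
// the recomputed public key matches the exported one.
func publicKeyRoundTrip() bool {
	priv, _ := ecdh.X25519().GenerateKey(rand.Reader)

	pubB64 := base64.StdEncoding.EncodeToString(priv.PublicKey().Bytes())
	privB64 := base64.StdEncoding.EncodeToString(priv.Bytes())

	// LoadKeyPair analogue: private scalar in, full keypair out.
	raw, _ := base64.StdEncoding.DecodeString(privB64)
	reloaded, _ := ecdh.X25519().NewPrivateKey(raw)

	return base64.StdEncoding.EncodeToString(reloaded.PublicKey().Bytes()) == pubB64
}

func main() {
	fmt.Println("public key recovered:", publicKeyRoundTrip()) // → public key recovered: true
}
```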
## 7. WASM API

### 7.1 BorgSTMF Namespace

```javascript
// Generate X25519 keypair
const keypair = await BorgSTMF.generateKeyPair();
// keypair.publicKey: base64 string
// keypair.privateKey: base64 string

// Encrypt form data
const encrypted = await BorgSTMF.encrypt(
  JSON.stringify(formData),
  serverPublicKeyBase64
);

// Encrypt with field-level control
const encryptedWithMeta = await BorgSTMF.encryptFields(
  {email: "user@example.com", password: "secret"},
  serverPublicKeyBase64,
  {timestamp: Date.now().toString()} // Optional metadata
);
```

## 8. HTTP Middleware

### 8.1 Simple Usage

```go
import "github.com/Snider/Borg/pkg/stmf/middleware"

// Create middleware with private key
mw := middleware.Simple(privateKeyBytes)

// Or from base64
mw, err := middleware.SimpleBase64(privateKeyB64)

// Apply to handler
http.Handle("/submit", mw(myHandler))
```

### 8.2 Advanced Configuration

```go
cfg := middleware.DefaultConfig(privateKeyBytes)
cfg.FieldName = "_custom_field"      // Custom field name (default: _stmf_payload)
populate := true
cfg.PopulateForm = &populate         // Auto-populate r.Form
cfg.OnError = customErrorHandler     // Custom error handling
cfg.OnMissingPayload = customHandler // When field is absent

mw := middleware.Middleware(cfg)
```

### 8.3 Context Access

```go
func myHandler(w http.ResponseWriter, r *http.Request) {
	// Get decrypted form data
	formData := middleware.GetFormData(r)

	// Get metadata
	metadata := middleware.GetMetadata(r)

	// Access fields
	email := formData.Get("email")
	password := formData.Get("password")
}
```

### 8.4 Middleware Behavior

- Handles POST, PUT, PATCH requests only
- Parses multipart/form-data (32 MB limit) or application/x-www-form-urlencoded
- Looks for field `_stmf_payload` (configurable)
- Base64 decodes, then decrypts
- Populates `r.Form` and `r.PostForm` with decrypted fields
- Returns 400 Bad Request on decryption failure

## 9. Integration Example

### 9.1 HTML Form

```html
<form id="secure-form" data-stmf-pubkey="<base64-public-key>">
  <input name="name" type="text">
  <input name="email" type="email">
  <input name="ssn" type="password">
  <button type="submit">Send Securely</button>
</form>

<script>
document.getElementById('secure-form').addEventListener('submit', async (e) => {
  e.preventDefault();
  const form = e.target;
  const pubkey = form.dataset.stmfPubkey;

  const formData = new FormData(form);
  const data = Object.fromEntries(formData);

  const encrypted = await BorgSTMF.encrypt(JSON.stringify(data), pubkey);

  await fetch('/api/submit', {
    method: 'POST',
    body: new URLSearchParams({_stmf_payload: encrypted}),
    headers: {'Content-Type': 'application/x-www-form-urlencoded'}
  });
});
</script>
```

### 9.2 Server Handler

```go
func main() {
	privateKey, _ := os.ReadFile("private.key")
	mw := middleware.Simple(privateKey)

	http.Handle("/api/submit", mw(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		formData := middleware.GetFormData(r)

		name := formData.Get("name")
		email := formData.Get("email")
		ssn := formData.Get("ssn")

		// Process securely...
		w.WriteHeader(http.StatusOK)
	})))

	http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil)
}
```

## 10. Security Properties

### 10.1 Forward Secrecy

- Fresh ephemeral keypair per encryption
- Compromised private key doesn't decrypt past messages
- Each ciphertext has a unique shared secret

### 10.2 Authenticity

- Poly1305 MAC prevents tampering
- Decryption fails if the ciphertext is modified

### 10.3 Confidentiality

- ChaCha20 provides 256-bit security
- Nonces are random (24 bytes), so collisions are statistically unlikely
- Data is encrypted before leaving the browser

### 10.4 Key Isolation

- Private key never exposed to browser/JavaScript
- Public key can be safely distributed
- Ephemeral keys discarded after encryption

## 11. Error Handling

```go
var (
	ErrInvalidMagic        = errors.New("invalid STMF magic")
	ErrInvalidPayload      = errors.New("invalid STMF payload")
	ErrDecryptionFailed    = errors.New("decryption failed")
	ErrInvalidPublicKey    = errors.New("invalid public key")
	ErrInvalidPrivateKey   = errors.New("invalid private key")
	ErrKeyGenerationFailed = errors.New("key generation failed")
)
```

## 12. Implementation Reference

- Types: `pkg/stmf/types.go`
- Key management: `pkg/stmf/keypair.go`
- Encryption: `pkg/stmf/encrypt.go`
- Decryption: `pkg/stmf/decrypt.go`
- Middleware: `pkg/stmf/middleware/http.go`
- WASM: `pkg/wasm/stmf/main.go`

## 13. Security Considerations

1. **Public key authenticity**: Verify the public key's source (HTTPS, pinning)
2. **Private key protection**: Never expose to the browser; store securely
3. **Nonce uniqueness**: Random generation ensures uniqueness
4. **HTTPS required**: The transport layer must still be encrypted

## 14. Future Work

- [ ] Multiple recipients
- [ ] Key attestation
- [ ] Offline decryption app
- [ ] Hardware key support (WebAuthn)
- [ ] Key rotation support

458 RFC-020-WASM-API.md Normal file

# RFC-010: WASM Decryption API

**Status**: Draft
**Author**: [Snider](https://github.com/Snider/)
**Created**: 2026-01-13
**License**: EUPL-1.2
**Depends On**: RFC-002, RFC-007, RFC-009

---

## Abstract

This RFC specifies the WebAssembly (WASM) API for browser-based decryption of SMSG content and STMF form encryption. The API is exposed through two JavaScript namespaces: `BorgSMSG` for content decryption and `BorgSTMF` for form encryption.

## 1. Overview

The WASM module provides:
- SMSG decryption (v1, v2, v3, chunked, ABR)
- SMSG encryption
- STMF form encryption/decryption
- Metadata extraction without decryption

## 2. Module Loading

### 2.1 Files Required

```
stmf.wasm    (~5.9MB)  Compiled Go WASM module
wasm_exec.js (~20KB)   Go WASM runtime
```

### 2.2 Initialization

```html
<script src="wasm_exec.js"></script>
<script>
const go = new Go();
WebAssembly.instantiateStreaming(fetch('stmf.wasm'), go.importObject)
  .then(result => {
    go.run(result.instance);
    // BorgSMSG and BorgSTMF now available globally
  });
</script>
```

### 2.3 Ready Event

```javascript
document.addEventListener('borgstmf:ready', (event) => {
  console.log('WASM ready, version:', event.detail.version);
});
```

## 3. BorgSMSG Namespace

### 3.1 Version

```javascript
BorgSMSG.version // "1.6.0"
BorgSMSG.ready   // true when loaded
```

### 3.2 Metadata Functions

#### getInfo(base64) → Promise<ManifestInfo>

Get the manifest without decryption.

```javascript
const info = await BorgSMSG.getInfo(base64Content);
// info.version, info.algorithm, info.format
// info.manifest.title, info.manifest.artist
// info.isV3Streaming, info.isChunked
// info.wrappedKeys (for v3)
```

#### getInfoBinary(uint8Array) → Promise<ManifestInfo>

Binary input variant (no base64 decode needed).

```javascript
const bytes = new Uint8Array(await response.arrayBuffer());
const info = await BorgSMSG.getInfoBinary(bytes);
```

### 3.3 Decryption Functions

#### decrypt(base64, password) → Promise<Message>

Full decryption (v1 format, base64 attachments).

```javascript
const msg = await BorgSMSG.decrypt(base64Content, password);
// msg.body, msg.subject, msg.from
// msg.attachments[0].name, .content (base64), .mime
```

#### decryptStream(base64, password) → Promise<StreamMessage>

Streaming decryption (v2 format, binary attachments).

```javascript
const msg = await BorgSMSG.decryptStream(base64Content, password);
// msg.attachments[0].data (Uint8Array)
// msg.attachments[0].mime
```

#### decryptBinary(uint8Array, password) → Promise<StreamMessage>

Binary input, binary output.

```javascript
const bytes = new Uint8Array(await fetch(url).then(r => r.arrayBuffer()));
const msg = await BorgSMSG.decryptBinary(bytes, password);
```

#### quickDecrypt(base64, password) → Promise<string>

Returns body text only (fast path).

```javascript
const body = await BorgSMSG.quickDecrypt(base64Content, password);
```

### 3.4 V3 Streaming Functions

#### decryptV3(base64, params) → Promise<StreamMessage>

Decrypt v3 streaming content with LTHN rolling keys.

```javascript
const msg = await BorgSMSG.decryptV3(base64Content, {
  license: "user-license-key",
  fingerprint: "device-fingerprint" // optional
});
```

#### getV3ChunkInfo(base64) → Promise<ChunkInfo>

Get the chunk index for seeking without a full decrypt.

```javascript
const chunkInfo = await BorgSMSG.getV3ChunkInfo(base64Content);
// chunkInfo.chunkSize (default 1MB)
// chunkInfo.totalChunks
// chunkInfo.totalSize
// chunkInfo.index[i].offset, .size
```

#### unwrapV3CEK(base64, params) → Promise<string>

Unwrap the CEK for manual chunk decryption. Returns a base64 CEK.

```javascript
const cekBase64 = await BorgSMSG.unwrapV3CEK(base64Content, {
  license: "license",
  fingerprint: "fp"
});
```

#### decryptV3Chunk(base64, cekBase64, chunkIndex) → Promise<Uint8Array>

Decrypt a single chunk by index.

```javascript
const chunk = await BorgSMSG.decryptV3Chunk(base64Content, cekBase64, 5);
```

#### parseV3Header(uint8Array) → Promise<V3HeaderInfo>

Parse the header from partial data (for streaming).

```javascript
const header = await BorgSMSG.parseV3Header(bytes);
// header.format, header.keyMethod, header.cadence
// header.payloadOffset (where chunks start)
// header.wrappedKeys, header.chunked, header.manifest
```

#### unwrapCEKFromHeader(wrappedKeys, params, cadence) → Promise<Uint8Array>

Unwrap the CEK from a parsed header.

```javascript
const cek = await BorgSMSG.unwrapCEKFromHeader(
  header.wrappedKeys,
  {license: "lic", fingerprint: "fp"},
  "daily"
);
```

#### decryptChunkDirect(chunkBytes, cek) → Promise<Uint8Array>

Low-level chunk decryption with a pre-unwrapped CEK.

```javascript
const plaintext = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
```

### 3.5 Encryption Functions

#### encrypt(message, password, hint?) → Promise<string>

Encrypt a message (v1 format). Returns base64.

```javascript
const encrypted = await BorgSMSG.encrypt({
  body: "Hello",
  attachments: [{
    name: "file.txt",
    content: btoa("data"),
    mime: "text/plain"
  }]
}, password, "optional hint");
```

#### encryptWithManifest(message, password, manifest) → Promise<string>

Encrypt with a manifest (v2 format). Returns base64.

```javascript
const encrypted = await BorgSMSG.encryptWithManifest(message, password, {
  title: "My Track",
  artist: "Artist Name",
  licenseType: "perpetual"
});
```

### 3.6 ABR Functions

#### parseABRManifest(jsonString) → Promise<ABRManifest>

Parse an HLS-style ABR manifest.

```javascript
const manifest = await BorgSMSG.parseABRManifest(manifestJson);
// manifest.version, manifest.title, manifest.duration
// manifest.variants[i].name, .bandwidth, .url
// manifest.defaultIdx
```

#### selectVariant(manifest, bandwidthBps) → Promise<number>

Select the best variant for the available bandwidth (returns an index).

```javascript
const idx = await BorgSMSG.selectVariant(manifest, measuredBandwidth);
// Uses 80% safety threshold
```

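
The 80% safety threshold can be sketched as: pick the richest variant whose bandwidth fits within 80% of the measured throughput, falling back to the lowest variant. This Go sketch assumes variants are sorted by ascending bandwidth; it illustrates the stated heuristic, not the WASM source.

```go
package main

import "fmt"

type variant struct {
	Name      string
	Bandwidth int // bits per second
}

// selectVariant returns the index of the highest-bandwidth variant that
// fits within 80% of measuredBps, or 0 (lowest variant) if none fit.
// Variants are assumed sorted ascending by bandwidth.
func selectVariant(variants []variant, measuredBps int) int {
	budget := measuredBps * 80 / 100
	best := 0
	for i, v := range variants {
		if v.Bandwidth <= budget {
			best = i
		}
	}
	return best
}

func main() {
	vs := []variant{
		{"audio-low", 96_000},
		{"audio-med", 192_000},
		{"audio-high", 320_000},
	}
	// 300 kbps measured → 240 kbps budget → "audio-med"
	fmt.Println(vs[selectVariant(vs, 300_000)].Name) // → audio-med
}
```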
## 4. BorgSTMF Namespace

### 4.1 Key Generation

```javascript
const keypair = await BorgSTMF.generateKeyPair();
// keypair.publicKey (base64 X25519)
// keypair.privateKey (base64 X25519) - KEEP SECRET
```

### 4.2 Encryption

```javascript
// Encrypt JSON string
const encrypted = await BorgSTMF.encrypt(
  JSON.stringify(formData),
  serverPublicKeyBase64
);

// Encrypt with metadata
const encryptedWithMeta = await BorgSTMF.encryptFields(
  {email: "user@example.com", password: "secret"},
  serverPublicKeyBase64,
  {timestamp: Date.now().toString()} // optional metadata
);
```

## 5. Type Definitions

### 5.1 ManifestInfo

```typescript
interface ManifestInfo {
  version: string;
  algorithm: string;
  format?: string;
  compression?: string;
  hint?: string;
  keyMethod?: string; // "LTHN" for v3
  cadence?: string;   // "daily", "12h", "6h", "1h"
  wrappedKeys?: WrappedKey[];
  isV3Streaming: boolean;
  chunked?: ChunkInfo;
  isChunked: boolean;
  manifest?: Manifest;
}
```

### 5.2 Message / StreamMessage

```typescript
interface Message {
  from?: string;
  to?: string;
  subject?: string;
  body: string;
  timestamp?: number;
  attachments: Attachment[];
  replyKey?: KeyInfo;
  meta?: Record<string, string>;
}

interface Attachment {
  name: string;
  mime: string;
  size: number;
  content?: string;  // base64 (v1)
  data?: Uint8Array; // binary (v2/v3)
}
```

### 5.3 ChunkInfo

```typescript
interface ChunkInfo {
  chunkSize: number; // default 1048576 (1MB)
  totalChunks: number;
  totalSize: number;
  index: ChunkEntry[];
}

interface ChunkEntry {
  offset: number;
  size: number;
}
```

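
The relationship between `totalSize`, `chunkSize`, and the `index` entries can be illustrated in Go. This sketch computes a plaintext-offset layout (last chunk may be short) and ignores any per-chunk ciphertext overhead; real files ship the index in the header.

```go
package main

import "fmt"

// chunkIndex splits totalSize bytes into chunkSize pieces and returns
// {offset, size} entries, mirroring the shape of ChunkEntry above.
func chunkIndex(totalSize, chunkSize int) [][2]int {
	var idx [][2]int
	for off := 0; off < totalSize; off += chunkSize {
		size := chunkSize
		if totalSize-off < chunkSize {
			size = totalSize - off // final short chunk
		}
		idx = append(idx, [2]int{off, size})
	}
	return idx
}

func main() {
	// 2.5 MB payload at the default 1 MB chunk size.
	idx := chunkIndex(2_500_000, 1_048_576)
	fmt.Println("totalChunks:", len(idx), "lastChunkSize:", idx[len(idx)-1][1])
	// → totalChunks: 3 lastChunkSize: 402848
}
```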
### 5.4 Manifest

```typescript
interface Manifest {
  title: string;
  artist?: string;
  album?: string;
  genre?: string;
  year?: number;
  releaseType?: string; // "single", "album", "ep", "mix"
  duration?: number;    // seconds
  format?: string;
  expiresAt?: number;   // Unix timestamp
  issuedAt?: number;    // Unix timestamp
  licenseType?: string; // "perpetual", "rental", "stream", "preview"
  tracks?: Track[];
  tags?: string[];
  links?: Record<string, string>;
  extra?: Record<string, string>;
}
```

## 6. Error Handling

### 6.1 Pattern

All functions throw on error:

```javascript
try {
  const msg = await BorgSMSG.decrypt(content, password);
} catch (e) {
  console.error(e.message);
}
```

### 6.2 Common Errors

| Error | Cause |
|-------|-------|
| `decrypt requires 2 arguments` | Wrong argument count |
| `decryption failed: {reason}` | Wrong password or corrupted content |
| `invalid format` | Not a valid SMSG file |
| `unsupported version` | Unknown format version |
| `key expired` | v3 rolling key outside window |
| `invalid base64: {reason}` | Base64 decode failed |
| `chunk out of range` | Invalid chunk index |

## 7. Performance

### 7.1 Binary vs Base64

- Binary functions (`*Binary`, `decryptStream`) are ~30% faster
- Avoid double base64 encoding

### 7.2 Large Files (>50MB)

Use chunked streaming:

```javascript
// Efficient: parse the header once, cache the CEK, stream chunks
const header = await BorgSMSG.parseV3Header(bytes);
const cek = await BorgSMSG.unwrapCEKFromHeader(
  header.wrappedKeys, params, header.cadence
);

for (let i = 0; i < header.chunked.totalChunks; i++) {
  // Index offsets are taken relative to payloadOffset here
  const {offset, size} = header.chunked.index[i];
  const chunkBytes = bytes.subarray(
    header.payloadOffset + offset,
    header.payloadOffset + offset + size
  );
  const chunk = await BorgSMSG.decryptChunkDirect(chunkBytes, cek);
  player.write(chunk);
  // each chunk is GC'd after its iteration
}
```

### 7.3 Typical Execution Times

| Operation | Size | Time |
|-----------|------|------|
| getInfo | any | ~50-100ms |
| decrypt (small) | <1MB | ~200-500ms |
| decrypt (large) | 100MB | 2-5s |
| decryptV3Chunk | 1MB | ~200-400ms |
| generateKeyPair | - | ~50-200ms |

## 8. Browser Compatibility

| Browser | Support |
|---------|---------|
| Chrome 57+ | Full |
| Firefox 52+ | Full |
| Safari 11+ | Full |
| Edge 16+ | Full |
| IE | Not supported |

Requirements:
- WebAssembly support
- Async/await (ES2017)
- Uint8Array

## 9. Memory Management

- WASM module: ~5.9MB static
- Per-operation: peak ~2-3x file size during decryption
- Go GC reclaims memory after Promise resolution
- Keys never leave WASM memory

## 10. Implementation Reference

- Source: `pkg/wasm/stmf/main.go` (1758 lines)
- Build: `GOOS=js GOARCH=wasm go build -o stmf.wasm ./pkg/wasm/stmf/`

## 11. Security Considerations

1. **Password handling**: Clear from memory after use
2. **Memory isolation**: WASM sandbox prevents JS access
3. **Constant-time crypto**: Go crypto uses safe operations
4. **Key protection**: Keys never exposed to JavaScript

## 12. Future Work

- [ ] WebWorker support for background decryption
- [ ] Streaming API with ReadableStream
- [ ] Smaller WASM size via TinyGo
- [ ] Native Web Crypto fallback for simple operations