agent updates

This commit is contained in:
Snider 2026-03-20 19:31:45 +00:00
parent 2e3f56c4a7
commit be1130f470
456 changed files with 1166 additions and 5268 deletions

View file

@@ -0,0 +1,139 @@
---
name: app-split
description: This skill should be used when the user asks to "split an app", "fork an app", "create a new app from host.uk.com", "de-hostuk", "copy app to new domain", or needs to extract a Website module from the host.uk.com monolith into a standalone CorePHP application. Covers the full copy-strip-rebrand process.
---
# App Split — Extract CorePHP App from Monolith
Split a Website module from the host.uk.com monolith into a standalone CorePHP application. The approach is copy-everything-then-strip rather than build-from-scratch.
## When to Use
- Extracting a domain-specific app (lthn.ai, bio.host.uk.com, etc.) from host.uk.com
- Creating a new standalone CorePHP app from the existing platform
- Any "fork and specialise" operation on the host.uk.com codebase
## Process
### 1. Inventory — Decide What Stays and Goes
Before copying, map which modules belong to the target app.
**Inputs needed from user:**
- Target domain (e.g. `lthn.ai`)
- Which `Website/*` modules to keep (check `$domains` in each Boot.php)
- Which `Mod/*` modules to keep (product modules vs platform modules)
- Which `Service/*` providers to keep (depends on kept Mod modules)
Run the inventory script to see all modules and their domain bindings:
```bash
scripts/inventory.sh /Users/snider/Code/lab/host.uk.com
```
Consult `references/module-classification.md` for the standard keep/remove classification.
### 2. Copy — Wholesale Clone
```bash
rsync -a \
--exclude='vendor/' \
--exclude='node_modules/' \
--exclude='.git/' \
--exclude='storage/logs/*' \
--exclude='storage/framework/cache/*' \
--exclude='storage/framework/sessions/*' \
--exclude='storage/framework/views/*' \
SOURCE/ TARGET/
```
Copy everything. Do not cherry-pick — the framework has deep cross-references and it is faster to remove than to reconstruct.
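Before the real copy, `rsync -n` previews exactly what would transfer. A minimal sketch using throwaway fixture directories (a real run points `SOURCE` and `TARGET` at the actual app paths):

```shell
# Fixture stand-ins; a real run points SOURCE/TARGET at the actual app paths.
SOURCE=$(mktemp -d); TARGET=$(mktemp -d)
mkdir -p "$SOURCE/app" "$SOURCE/vendor"
touch "$SOURCE/app/Boot.php" "$SOURCE/vendor/autoload.php"

# -n (dry run) lists what would transfer without writing a single file.
rsync -an --exclude='vendor/' --exclude='node_modules/' "$SOURCE/" "$TARGET/"
```

Excluded paths (`vendor/` here) should be absent from the listing; if they appear, check the exclude patterns before the real run.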
### 3. Strip — Remove Unwanted Modules
Delete removed module directories:
```bash
# Website modules
rm -rf TARGET/app/Website/{Host,Html,Lab,Service}
# Mod modules
rm -rf TARGET/app/Mod/{Links,Social,Trees,Front,Hub}
# Service providers that depend on removed Mod modules
rm -rf TARGET/app/Service/Hub
```
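After stripping, grep for lingering references to the deleted namespaces; anything left behind will fatal at runtime. A sketch over a fixture tree (the module names are the examples from above; a real run scans `TARGET/app`):

```shell
# Fixture: a stripped tree with one stale import left behind (hypothetical file).
TARGET=$(mktemp -d)
mkdir -p "$TARGET/app/Website/Lthn"
echo 'use Mod\Hub\Models\Panel;' > "$TARGET/app/Website/Lthn/Controller.php"

# Any hit is a file still referencing a deleted module namespace.
grep -rnE '(Website\\(Host|Html|Lab|Service)|Mod\\(Links|Social|Trees|Front|Hub)|Service\\Hub)\\' "$TARGET/app" \
  || echo "clean"
```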
### 4. Update Boot.php Providers
Edit `TARGET/app/Boot.php`:
- Remove all `\Website\*\Boot::class` entries for deleted Website modules
- Remove all `\Mod\*\Boot::class` entries for deleted Mod modules
- Remove all `\Service\*\Boot::class` entries for deleted Service providers
- Update class docblock (name, description)
- Update `guestRedirectUrl()` — change fallback login host from `host.uk.com` to target domain
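A quick check that Boot.php no longer lists providers for removed modules; sketched against a hypothetical fragment (a real run greps `TARGET/app/Boot.php` directly):

```shell
# Hypothetical Boot.php fragment standing in for TARGET/app/Boot.php.
BOOT=$(mktemp)
cat > "$BOOT" <<'PHP'
protected array $providers = [
    \Website\Lthn\Boot::class,
    \Mod\Hub\Boot::class,
    \Core\Website\Boot::class,
];
PHP

# Entries for removed modules must come back empty before the app will boot.
grep -nE '(Website\\(Host|Html|Lab|Service)|Mod\\(Links|Social|Trees|Front|Hub)|Service\\Hub)\\Boot' "$BOOT" \
  || echo "providers clean"
```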
### 5. Rebrand — Domain References
Run the domain scan script to find all references:
```bash
scripts/domain-scan.sh TARGET
```
**Critical files to update** (in priority order):
| File | What to Change |
|------|----------------|
| `composer.json` | name, description, licence |
| `config/app.php` | `base_domain` default |
| `.env.example` | APP_URL, SESSION_DOMAIN, MCP_DOMAIN, DB_DATABASE, mail |
| `vite.config.js` | dev server host + HMR host |
| `app/Boot.php` | providers, guest redirect, comments |
| `CLAUDE.md` | Full rewrite for new app |
| `.gitignore` | Add any env files with secrets |
| `robots.txt` | Sitemap URL, allowed paths |
| `public/errors/*.html` | Support contact links |
| `public/js/*.js` | API base URLs in embed widgets |
| `config/cdn.php` | default_domain, apex URL |
| `config/mail.php` | contact_recipient |
| `database/seeders/` | email, domains, branding |
**Leave alone** (shared infrastructure):
- `analytics.host.uk.com` references in CSP headers and tracking pixels
- CDN storage zone names (same Hetzner/BunnyCDN buckets)
- External links to host.uk.com in footers (legitimate cross-links)
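The shared-infrastructure exceptions make a blind search-and-replace unsafe, so filter them out when reviewing matches. A sketch over a fixture file (a real run scans `TARGET/config`, `app/`, and so on):

```shell
# Fixture config file; a real run scans TARGET/config, app/, etc.
TARGET=$(mktemp -d); mkdir -p "$TARGET/config"
cat > "$TARGET/config/app.php" <<'EOF'
'base_domain' => 'host.uk.com',
'analytics' => 'https://analytics.host.uk.com/pixel.js',
EOF

# Lines that still need rebranding: host.uk.com refs minus shared infra.
grep -rnE 'host\.uk\.com' "$TARGET/config" \
  | grep -vE 'analytics\.host\.uk\.com|cdn\.host\.uk\.com'
```

Only the `base_domain` line survives the filter here; the analytics reference is shared infrastructure and stays as-is.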
### 6. Secure — Check for Secrets
Scan for env files with real credentials before committing:
```bash
# Find env files that might have secrets
find TARGET -name ".env*" -not -name ".env.example" | while read -r f; do
if grep -qE '(KEY|SECRET|PASSWORD|TOKEN)=.{8,}' "$f"; then
echo "SECRETS: $f — add to .gitignore"
fi
done
```
### 7. Init Git and Verify
```bash
cd TARGET
git init
git add -A
git status # Review what's being committed
```
Check for:
- No `.env` files with real secrets staged
- No `auth.json` staged
- No `vendor/` or `node_modules/` staged
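The three checks can be automated before the first commit. A sketch against a throwaway repo (a real run executes the guard inside `TARGET`):

```shell
# Throwaway repo demonstrating the guard; real runs execute it inside TARGET.
repo=$(mktemp -d); cd "$repo"
git init -q
echo 'APP_KEY=real-secret' > .env
echo 'docs' > README.md
git add -A

# Fail loudly if anything sensitive is staged for the first commit.
if git diff --cached --name-only \
   | grep -E '(^|/)\.env$|(^|/)auth\.json$|^vendor/|^node_modules/'; then
  echo "BLOCKED: sensitive files staged"
else
  echo "staging area clean"
fi
```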
## Gotchas
- **Service providers reference Mod modules**: If `Service/Hub` depends on `Mod/Hub` and you remove `Mod/Hub`, also remove `Service/Hub` — otherwise the app crashes on boot.
- **Boot.php $providers is the master list**: Every module must be listed here. Missing entries = module doesn't load. Extra entries for deleted modules = crash.
- **Seeders reference removed services**: SystemUserSeeder sets up analytics, trust, push, bio etc. The seeder uses `class_exists()` checks so it gracefully skips missing services, but domain references still need updating.
- **Composer deps for removed modules**: Packages like `core/php-plug-social` are only needed for removed modules. Safe to remove from composer.json but not urgent — they're just unused.
- **The `.env.lthn-ai` pattern**: Production env files often live in the repo for reference but MUST be gitignored since they contain real credentials.

View file

@@ -0,0 +1,100 @@
# Module Classification Guide
When splitting an app from host.uk.com, classify each module as **keep** or **remove** based on domain ownership.
## Website Modules
Website modules have `$domains` arrays that define which domains they respond to. Check the regex patterns to determine ownership.
| Module | Domains | Classification |
|--------|---------|----------------|
| Host | `host.uk.com`, `host.test` | host.uk.com only |
| Lthn | `lthn.ai`, `lthn.test`, `lthn.sh` | lthn.ai only |
| App | `app.lthn.*`, `hub.lthn.*` | lthn.ai (client dashboard) |
| Api | `api.lthn.*`, `api.host.*` | Shared — check domain patterns |
| Mcp | `mcp.lthn.*`, `mcp.host.*` | Shared — check domain patterns |
| Docs | `docs.lthn.*`, `docs.host.*` | Shared — check domain patterns |
| Html | Static HTML pages | host.uk.com only |
| Lab | `lab.host.*` | host.uk.com only |
| Service | `*.host.uk.com` service subdomains | host.uk.com only |
**Rule**: If the module's `$domains` patterns match the target domain, keep it. If they only match host.uk.com patterns, remove it. For shared modules (Api, Mcp, Docs), strip the host.uk.com domain patterns.
## Mod Modules (Products)
Mod modules are product-level features. Classify by which platform they serve.
### host.uk.com Products (Remove for lthn.ai)
| Module | Product | Why Remove |
|--------|---------|------------|
| Links | BioHost (link-in-bio) | host.uk.com SaaS product |
| Social | SocialHost (scheduling) | host.uk.com SaaS product |
| Front | Frontend chrome/nav | host.uk.com-specific UI |
| Hub | Admin dashboard | host.uk.com admin panel |
| Trees | Trees for Agents | host.uk.com feature |
### lthn.ai Products (Keep for lthn.ai)
| Module | Product | Why Keep |
|--------|---------|----------|
| Agentic | AI agent orchestration | Core lthn.ai feature |
| Lem | LEM model management | Core lthn.ai feature |
| Mcp | MCP tool registry | Core lthn.ai feature |
| Studio | Multimedia pipeline | lthn.ai content creation |
| Uptelligence | Server monitoring | Cross-platform, lthn.ai relevant |
## Service Providers
Service providers in `app/Service/` are the product layer — they register ServiceDefinition contracts. They depend on their corresponding Mod module.
**Rule**: If the Mod module is removed, the Service provider MUST also be removed. Otherwise the app crashes on boot when it tries to resolve the missing module's classes.
| Service | Depends On | Action |
|---------|-----------|--------|
| Hub | Mod/Hub | Remove with Hub |
| Commerce | Core\Mod\Commerce (package) | Keep — it's a core package |
| Agentic | Core\Mod\Agentic (package) | Keep — it's a core package |
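The Mod-dependency rule can be checked mechanically, assuming the `app/Service/<Name>` to `app/Mod/<Name>` naming convention holds. Package-backed services such as Commerce and Agentic will show up as warnings to review, not hard failures:

```shell
# Fixture layout; real runs point APP at the split TARGET directory.
APP=$(mktemp -d)
mkdir -p "$APP/app/Service/Hub" "$APP/app/Service/Commerce" "$APP/app/Mod/Agentic"

# Flag every Service provider with no matching app/Mod module.
for svc in "$APP"/app/Service/*/; do
  name=$(basename "$svc")
  if [ ! -d "$APP/app/Mod/$name" ]; then
    echo "review: Service/$name has no app/Mod/$name (package-backed, or remove with its Mod)"
  fi
done
```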
## Core Framework Providers
These are from CorePHP packages (`core/php`, `core/php-admin`, etc.) and should always be kept — they're the framework itself.
- `Core\Storage\CacheResilienceProvider`
- `Core\LifecycleEventProvider`
- `Core\Website\Boot`
- `Core\Bouncer\Boot`
- `Core\Config\Boot`
- `Core\Tenant\Boot`
- `Core\Cdn\Boot`, `Core\Mail\Boot`, `Core\Front\Boot`
- `Core\Headers\Boot`, `Core\Helpers\Boot`
- `Core\Media\Boot`, `Core\Search\Boot`, `Core\Seo\Boot`
- `Core\Webhook\Boot`
- `Core\Api\Boot`
- `Core\Mod\Agentic\Boot`, `Core\Mod\Commerce\Boot`
- `Core\Mod\Uptelligence\Boot`, `Core\Mod\Content\Boot`
## Shared Infrastructure
Some host.uk.com references are shared infrastructure that ALL apps use. These should NOT be changed during the split:
| Reference | Why Keep |
|-----------|----------|
| `analytics.host.uk.com` | Shared analytics service (CSP headers, tracking pixel) |
| `cdn.host.uk.com` | Shared CDN delivery URL |
| Hetzner S3 bucket names (`hostuk`, `host-uk`) | Shared storage |
| BunnyCDN storage zones | Shared CDN zones |
| Footer link to host.uk.com | Legitimate external link |
## Composer Dependencies
After removing modules, review composer.json for packages only needed by removed modules:
| Package | Used By | Action |
|---------|---------|--------|
| `core/php-plug-social` | Mod/Social | Remove |
| `core/php-plug-stock` | Stock photo integration | Keep if any module uses it |
| `webklex/php-imap` | Mod/Support (if removed) | Safe to remove |
| `minishlink/web-push` | Mod/Notify (if removed) | Safe to remove |
**Conservative approach**: Leave deps in place. They don't hurt — they're just unused. Remove later during a cleanup pass.

View file

@@ -0,0 +1,54 @@
#!/usr/bin/env bash
# domain-scan.sh — Find all host.uk.com / host.test references in a CorePHP app.
# Usage: ./domain-scan.sh /path/to/app [domain_pattern]
# Default domain pattern: host\.uk\.com|host\.test
APP_DIR="${1:-.}"
PATTERN="${2:-host\.uk\.com|host\.test}"
echo "=== Domain Reference Scan ==="
echo "Directory: $APP_DIR"
echo "Pattern: $PATTERN"
echo ""
echo "--- By Directory ---"
for dir in app config database public resources routes; do
[ -d "$APP_DIR/$dir" ] || continue
count=$(grep -rlE "$PATTERN" "$APP_DIR/$dir" 2>/dev/null | wc -l | tr -d ' ')
[ "$count" -gt 0 ] && printf "%-20s %s files\n" "$dir/" "$count"
done
# Root files
echo ""
echo "--- Root Files ---"
for f in .env.example vite.config.js CLAUDE.md robots.txt Makefile playwright.config.ts; do
[ -f "$APP_DIR/$f" ] && grep -qE "$PATTERN" "$APP_DIR/$f" 2>/dev/null && printf " %s\n" "$f"
done
echo ""
echo "--- Critical Files (app code, not docs) ---"
grep -rnE "$PATTERN" \
"$APP_DIR/app/" \
"$APP_DIR/config/" \
"$APP_DIR/database/seeders/" \
"$APP_DIR/public/js/" \
"$APP_DIR/public/errors/" \
"$APP_DIR/public/robots.txt" \
"$APP_DIR/vite.config.js" \
"$APP_DIR/.env.example" \
2>/dev/null \
| grep -v '/docs/' \
| grep -v '/plans/' \
| grep -v 'node_modules' \
| grep -v 'vendor/' \
|| echo "(none found)"
echo ""
echo "--- Shared Infra References (review — may be intentional) ---"
grep -rnE 'analytics\.host\.uk\.com|cdn\.host\.uk\.com' \
"$APP_DIR/app/" \
"$APP_DIR/config/" \
2>/dev/null \
|| echo "(none found)"
exit 0

View file

@@ -0,0 +1,43 @@
#!/usr/bin/env bash
# inventory.sh — List all modules and their domain bindings in a CorePHP app.
# Usage: ./inventory.sh /path/to/app
set -euo pipefail
APP_DIR="${1:-.}"
echo "=== Website Modules ==="
echo ""
for boot in "$APP_DIR"/app/Website/*/Boot.php; do
[ -f "$boot" ] || continue
mod=$(basename "$(dirname "$boot")")
# Extract domain patterns from $domains array
domains=$(grep -E "'/\^" "$boot" 2>/dev/null | sed "s/.*'\(.*\)'.*/\1/" | tr '\n' ' ' || echo "(no domain pattern)")
# Extract event class names from $listens array
listens=$(grep '::class' "$boot" 2>/dev/null | grep -oE '[A-Za-z]+::class' | sed 's/::class//' | tr '\n' ', ' | sed 's/,$//' || echo "none")
printf "%-15s domains: %s\n" "$mod" "$domains"
printf "%-15s listens: %s\n" "" "$listens"
echo ""
done
echo "=== Mod Modules ==="
echo ""
for boot in "$APP_DIR"/app/Mod/*/Boot.php; do
[ -f "$boot" ] || continue
mod=$(basename "$(dirname "$boot")")
listens=$(grep '::class' "$boot" 2>/dev/null | grep -oE '[A-Za-z]+::class' | sed 's/::class//' | tr '\n' ', ' | sed 's/,$//' || echo "none")
printf "%-15s listens: %s\n" "$mod" "$listens"
done
echo ""
echo "=== Service Providers ==="
echo ""
for boot in "$APP_DIR"/app/Service/*/Boot.php; do
[ -f "$boot" ] || continue
mod=$(basename "$(dirname "$boot")")
code=$(grep -oE "'code'\s*=>\s*'[^']+'" "$boot" 2>/dev/null | head -1 || echo "")
printf "%-15s %s\n" "$mod" "$code"
done
echo ""
echo "=== Boot.php Provider List ==="
grep '::class' "$APP_DIR/app/Boot.php" 2>/dev/null | grep -v '//' | sed 's/^[[:space:]]*/ /' | sed 's/,$//'

View file

@@ -0,0 +1,125 @@
---
name: deploy-homelab
description: This skill should be used when the user asks to "deploy to homelab", "deploy to lthn.sh", "ship to homelab", "build and deploy", "push image to homelab", or needs to build a Docker image locally and transfer it to the homelab server at 10.69.69.165. Covers the full build-locally → transfer-tarball → deploy pipeline for CorePHP apps.
---
# Deploy to Homelab
Build a CorePHP app Docker image locally (required for paid package auth), transfer via tarball to the homelab (no registry), and deploy.
## When to Use
- Deploying any CorePHP app to the homelab (*.lthn.sh)
- Building images that need `auth.json` for Flux Pro or other paid packages
- Shipping a new version of an app to 10.69.69.165
## Prerequisites
- Docker Desktop running locally
- `auth.json` in the app root (for Flux Pro licence)
- Homelab accessible at 10.69.69.165 (SSH: claude/claude)
- **NEVER ssh directly** — use the deploy script or Ansible from `~/Code/DevOps`
## Process
### 1. Build Locally
Run from the app directory (e.g. `/Users/snider/Code/lab/lthn.ai`):
```bash
# Install deps (auth.json provides paid package access)
composer install --no-dev --optimize-autoloader
npm ci
npm run build
# Build the Docker image for linux/amd64 (homelab is x86_64)
docker build --platform linux/amd64 -t IMAGE_NAME:latest .
```
The image name follows the app domain with dots replaced by hyphens: `lthn-sh`, `lthn-ai`, etc.
### 2. Transfer to Homelab
```bash
# Save image as compressed tarball
docker save IMAGE_NAME:latest | gzip > /tmp/IMAGE_NAME.tar.gz
# SCP to homelab
sshpass -p claude scp -P 22 /tmp/IMAGE_NAME.tar.gz claude@10.69.69.165:/tmp/
# Load image on homelab
sshpass -p claude ssh -p 22 claude@10.69.69.165 'echo claude | sudo -S docker load < /tmp/IMAGE_NAME.tar.gz'
```
**Note:** Homelab SSH is port 22 (NOT port 4819 — that's production servers). Credentials: claude/claude.
### 3. Deploy on Homelab
```bash
# Restart container with new image
sshpass -p claude ssh -p 22 claude@10.69.69.165 'echo claude | sudo -S docker restart CONTAINER_NAME'
# Or if using docker-compose
sshpass -p claude ssh -p 22 claude@10.69.69.165 'cd /opt/services/APP_DIR && echo claude | sudo -S docker compose up -d'
```
### 4. Post-Deploy Checks
```bash
# Run migrations
sshpass -p claude ssh -p 22 claude@10.69.69.165 'echo claude | sudo -S docker exec CONTAINER_NAME php artisan migrate --force'
# Clear and rebuild caches
sshpass -p claude ssh -p 22 claude@10.69.69.165 'echo claude | sudo -S docker exec CONTAINER_NAME php artisan config:cache && sudo docker exec CONTAINER_NAME php artisan route:cache && sudo docker exec CONTAINER_NAME php artisan view:cache && sudo docker exec CONTAINER_NAME php artisan event:cache'
# Health check
curl -sf https://APP_DOMAIN/up && echo "OK" || echo "FAILED"
```
### One-Shot Script
Use the bundled script for the full pipeline:
```bash
scripts/build-and-ship.sh APP_DIR IMAGE_NAME CONTAINER_NAME
```
Example:
```bash
scripts/build-and-ship.sh /Users/snider/Code/lab/host.uk.com lthn-sh lthn-sh-hub
scripts/build-and-ship.sh /Users/snider/Code/lab/lthn.ai lthn-ai lthn-ai
```
## Or Use Ansible (Preferred)
The Ansible playbooks handle all of this automatically:
```bash
cd ~/Code/DevOps
ansible-playbook playbooks/deploy/website/lthn_sh.yml -i inventory/linux_snider_dev.yml
```
Available playbooks:
- `lthn_sh.yml` — host.uk.com app to homelab
- `lthn_ai.yml` — lthn.ai app to homelab/prod
## Known Apps on Homelab
| App | Image | Container | Port | Data Dir |
|-----|-------|-----------|------|----------|
| host.uk.com | lthn-sh:latest | lthn-sh-hub | 8088 | /opt/services/lthn-lan |
| lthn.ai | lthn-ai:latest | lthn-ai | 80 | /opt/services/lthn-ai |
## Gotchas
- **Platform flag required**: Mac builds ARM images by default. Always use `--platform linux/amd64` — homelab is x86_64 Ryzen 9.
- **auth.json stays local**: The Dockerfile copies the entire app directory. The `.dockerignore` should exclude `auth.json` to avoid leaking licence keys into the image. If it doesn't, add it.
- **Tarball size**: Full images are 500 MB to 1 GB compressed. Ensure `/tmp` has space on both ends.
- **Homelab SSH is port 22**: Unlike production servers (port 4819 + Endlessh on 22), the homelab uses standard port 22.
- **No `sudo` password prompt**: Use `echo claude | sudo -S` pattern for sudo commands over SSH.
- **Redis is embedded**: The FrankenPHP image includes supervisord running Redis. No separate Redis container needed on homelab.
- **GPU services**: The homelab has Ollama (11434), Whisper (9150), TTS (9200), ComfyUI (8188) running natively — the app container connects to them via `127.0.0.1` with `--network host`.
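A quick probe, run on the homelab itself, confirms the native GPU services are listening before pointing the container at them. This sketch uses bash's `/dev/tcp` redirection so no extra tooling is needed (on a machine without those services, every port simply reports as not reachable):

```shell
# Probe the native GPU service ports the app container expects on 127.0.0.1.
for port in 11434 9150 9200 8188; do
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port: listening"
  else
    echo "port $port: not reachable"
  fi
done
```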
## Consult References
- `references/environments.md` — Environment variables and service mapping for each deployment target

View file

@@ -0,0 +1,115 @@
# Environment Reference
## Homelab (lthn.sh)
**Host:** 10.69.69.165 (Ryzen 9 + 128GB RAM + RX 7800 XT)
**SSH:** claude:claude (port 22)
**Domains:** *.lthn.sh → 10.69.69.165
### host.uk.com on homelab
| Setting | Value |
|---------|-------|
| Container | lthn-sh-hub |
| Image | lthn-sh:latest |
| Port | 8088 (Octane/FrankenPHP) |
| Network | --network host |
| Data | /opt/services/lthn-lan |
| DB | MariaDB 127.0.0.1:3306, db=lthn_sh |
| Redis | Embedded (supervisord in container) |
| APP_URL | https://lthn.sh |
| SESSION_DOMAIN | .lthn.sh |
### lthn.ai on homelab
| Setting | Value |
|---------|-------|
| Container | lthn-ai |
| Image | lthn-ai:latest |
| Port | 80 (via docker-compose) |
| Network | proxy + lthn-ai bridge |
| Data | /opt/services/lthn-ai |
| DB | MariaDB lthn-ai-db:3306, db=lthn_ai |
| Redis | Embedded |
| APP_URL | https://lthn.sh (homelab) |
| SESSION_DOMAIN | .lthn.sh |
### GPU Services (native on homelab)
| Service | Port | Used By |
|---------|------|---------|
| Ollama | 11434 | LEM scoring (lem-4b model) |
| Whisper | 9150 | Studio speech-to-text |
| Kokoro TTS | 9200 | Studio text-to-speech |
| ComfyUI | 8188 | Studio image generation |
| InfluxDB | via https://influx.infra.lthn.sh | LEM metrics |
### Key .env differences from production
```env
# Homelab-specific
APP_ENV=production
APP_URL=https://lthn.sh
SESSION_DOMAIN=.lthn.sh
# Local GPU services (--network host)
STUDIO_WHISPER_URL=http://127.0.0.1:9150
STUDIO_OLLAMA_URL=http://127.0.0.1:11434
STUDIO_TTS_URL=http://127.0.0.1:9200
STUDIO_COMFYUI_URL=http://127.0.0.1:8188
# Local Redis (embedded in container via supervisord)
REDIS_HOST=127.0.0.1
REDIS_PORT=6379
```
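To spot drift between the homelab and production env files, a sorted diff of the `KEY=value` lines works well. A sketch with inline fixtures (a real run compares the actual env files pulled from each host; the filenames here are hypothetical):

```shell
# Inline fixtures; real runs compare the actual env files from each host.
tmp=$(mktemp -d)
cat > "$tmp/homelab.env" <<'EOF'
APP_URL=https://lthn.sh
REDIS_HOST=127.0.0.1
EOF
cat > "$tmp/prod.env" <<'EOF'
APP_URL=https://lthn.ai
REDIS_HOST=127.0.0.1
EOF

# Sort both sides so only genuinely differing KEY=value lines show up.
sort "$tmp/homelab.env" > "$tmp/a"; sort "$tmp/prod.env" > "$tmp/b"
diff "$tmp/a" "$tmp/b" || true
```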
## Production (de1 — Falkenstein)
**Host:** eu-prd-01.lthn.io (Hetzner AX102)
**SSH:** Port 4819 only (port 22 = Endlessh tarpit)
**Deploy:** ONLY via Ansible from ~/Code/DevOps
### Port Map
| Port | Service |
|------|---------|
| 80/443 | Traefik (TLS termination) |
| 2223/3000 | Forgejo |
| 3306 | Galera (MariaDB cluster) |
| 5432 | PostgreSQL |
| 6379 | Dragonfly (Redis-compatible) |
| 8000-8001 | host.uk.com |
| 8003 | lthn.io |
| 8004 | bugseti.app |
| 8005-8006 | lthn.ai |
| 8007 | api.lthn.ai |
| 8008 | mcp.lthn.ai |
| 8083 | 66Biolinks |
| 8084 | Blesta |
| 8085 | Analytics |
| 8086 | Push Notifications |
| 8087 | Social Proof |
| 3900 | Garage S3 |
| 9000/9443 | Authentik |
### Ansible Playbooks
```bash
cd ~/Code/DevOps
# Homelab
ansible-playbook playbooks/deploy/website/lthn_sh.yml -i inventory/linux_snider_dev.yml
# Production (de1)
ansible-playbook playbooks/deploy/website/lthn_ai.yml -i inventory/production.yml
```
## Dockerfile Base
All CorePHP apps use the same Dockerfile pattern:
- Base: `dunglas/frankenphp:1-php8.5-trixie`
- PHP extensions: pcntl, pdo_mysql, redis, gd, intl, zip, opcache, bcmath, exif, sockets
- System packages: redis-server, supervisor, curl, mariadb-client
- Runtime: Supervisord (FrankenPHP + Redis + Horizon + Scheduler)
- Healthcheck: `curl -f http://localhost:${OCTANE_PORT}/up`

View file

@@ -0,0 +1,94 @@
#!/usr/bin/env bash
# build-and-ship.sh — Build Docker image locally and ship to homelab.
#
# Usage: ./build-and-ship.sh APP_DIR IMAGE_NAME [CONTAINER_NAME]
#
# Examples:
# ./build-and-ship.sh ~/Code/lab/host.uk.com lthn-sh lthn-sh-hub
# ./build-and-ship.sh ~/Code/lab/lthn.ai lthn-ai lthn-ai
set -euo pipefail
APP_DIR="${1:?Usage: build-and-ship.sh APP_DIR IMAGE_NAME [CONTAINER_NAME]}"
IMAGE_NAME="${2:?Usage: build-and-ship.sh APP_DIR IMAGE_NAME [CONTAINER_NAME]}"
CONTAINER_NAME="${3:-$IMAGE_NAME}"
HOMELAB_HOST="10.69.69.165"
HOMELAB_USER="claude"
HOMELAB_PASS="claude"
TARBALL="/tmp/${IMAGE_NAME}.tar.gz"
ssh_cmd() {
sshpass -p "$HOMELAB_PASS" ssh -o StrictHostKeyChecking=no "$HOMELAB_USER@$HOMELAB_HOST" "$@"
}
scp_cmd() {
sshpass -p "$HOMELAB_PASS" scp -o StrictHostKeyChecking=no "$@"
}
sudo_cmd() {
ssh_cmd "echo $HOMELAB_PASS | sudo -S $*"
}
echo "=== Build & Ship to Homelab ==="
echo "App: $APP_DIR"
echo "Image: $IMAGE_NAME:latest"
echo "Container: $CONTAINER_NAME"
echo "Target: $HOMELAB_USER@$HOMELAB_HOST"
echo ""
# Step 1: Build dependencies
echo "--- Step 1: Dependencies ---"
cd "$APP_DIR"
composer install --no-dev --optimize-autoloader --quiet
npm ci --silent
npm run build
# Step 2: Docker build
echo ""
echo "--- Step 2: Docker Build (linux/amd64) ---"
docker build --platform linux/amd64 -t "${IMAGE_NAME}:latest" .
# Step 3: Save and transfer
echo ""
echo "--- Step 3: Save & Transfer ---"
echo "Saving image..."
docker save "${IMAGE_NAME}:latest" | gzip > "$TARBALL"
SIZE=$(du -h "$TARBALL" | cut -f1)
echo "Tarball: $TARBALL ($SIZE)"
echo "Transferring to homelab..."
scp_cmd "$TARBALL" "${HOMELAB_USER}@${HOMELAB_HOST}:/tmp/"
# Step 4: Load on homelab
echo ""
echo "--- Step 4: Load Image ---"
sudo_cmd "docker load < /tmp/${IMAGE_NAME}.tar.gz"
# Step 5: Restart container
echo ""
echo "--- Step 5: Restart Container ---"
sudo_cmd "docker restart $CONTAINER_NAME" 2>/dev/null || echo "Container $CONTAINER_NAME not running — start manually"
# Step 6: Post-deploy
echo ""
echo "--- Step 6: Post-Deploy ---"
sleep 3
sudo_cmd "docker exec $CONTAINER_NAME php artisan migrate --force" 2>/dev/null || echo "Migration skipped (container may not be running)"
sudo_cmd "docker exec $CONTAINER_NAME php artisan config:cache" 2>/dev/null || true
sudo_cmd "docker exec $CONTAINER_NAME php artisan route:cache" 2>/dev/null || true
sudo_cmd "docker exec $CONTAINER_NAME php artisan view:cache" 2>/dev/null || true
# Step 7: Health check
echo ""
echo "--- Step 7: Health Check ---"
sleep 2
if sudo_cmd "curl -sf http://localhost:8088/up" >/dev/null 2>&1; then
echo "Health check: OK"
else
echo "Health check: FAILED (may need manual start)"
fi
# Cleanup
rm -f "$TARBALL"
echo ""
echo "=== Deploy Complete ==="

View file

@@ -0,0 +1,104 @@
---
name: deploy-production
description: This skill should be used when the user asks to "deploy to production", "deploy to de1", "push to prod", "deploy lthn.ai", "deploy host.uk.com", or needs to deploy any website or service to the production fleet. Covers the full Ansible-based deployment pipeline. NEVER ssh directly to production.
---
# Deploy to Production
All production deployments go through Ansible from `~/Code/DevOps`. NEVER ssh directly.
## Quick Reference
```bash
cd ~/Code/DevOps
# Websites
ansible-playbook playbooks/deploy/website/lthn_ai.yml -l primary -e ansible_port=4819
ansible-playbook playbooks/deploy/website/saas.yml -l primary -e ansible_port=4819
ansible-playbook playbooks/deploy/website/core_help.yml -l primary -e ansible_port=4819
# Homelab (different inventory)
ansible-playbook playbooks/deploy/website/lthn_sh.yml -i inventory/linux_snider_dev.yml
# Services
ansible-playbook playbooks/deploy/service/forgejo.yml -l primary -e ansible_port=4819
ansible-playbook playbooks/deploy/service/authentik.yml -l primary -e ansible_port=4819
# Infrastructure
ansible-playbook playbooks/deploy/server/base.yml -l primary -e ansible_port=4819 --tags traefik
```
## Production Fleet
| Host | IP | DC | SSH |
|------|----|----|-----|
| eu-prd-01.lthn.io (de1) | 116.202.82.115 | Falkenstein | Port 4819 |
| eu-prd-noc.lthn.io | 77.42.42.205 | Helsinki | Port 4819 |
| ap-au-syd1.lthn.io | 139.99.131.177 | Sydney | Port 4819 |
Port 22 = Endlessh honeypot. ALWAYS use `-e ansible_port=4819`.
## Website Deploy Pattern (Build + Ship)
For Laravel/CorePHP apps that need local build:
1. **Local build** (needs auth.json for paid packages):
```bash
cd ~/Code/lab/APP_DIR
composer install --no-dev --optimize-autoloader
npm ci && npm run build
docker build --platform linux/amd64 -t IMAGE_NAME:latest .
docker save IMAGE_NAME:latest | gzip > /tmp/IMAGE_NAME.tar.gz
```
2. **Ship to server**:
```bash
scp -P 4819 /tmp/IMAGE_NAME.tar.gz root@116.202.82.115:/tmp/
```
Or let the Ansible playbook handle the transfer.
3. **Deploy via Ansible**:
```bash
cd ~/Code/DevOps
ansible-playbook playbooks/deploy/website/PLAYBOOK.yml -l primary -e ansible_port=4819
```
4. **Verify**:
```bash
curl -sf https://DOMAIN/up
```
## Containers on de1
| Website | Container | Port | Domain |
|---------|-----------|------|--------|
| lthn.ai | lthn-ai | 8005/8006 | lthn.ai, api.lthn.ai, mcp.lthn.ai |
| bugseti.app | bugseti-app | 8004 | bugseti.app |
| core.help | core-help | — | core.help |
| SaaS analytics | saas-analytics | 8085 | analytics.host.uk.com |
| SaaS biolinks | saas-biolinks | 8083 | link.host.uk.com |
| SaaS pusher | saas-pusher | 8086 | notify.host.uk.com |
| SaaS socialproof | saas-socialproof | 8087 | trust.host.uk.com |
| SaaS blesta | saas-blesta | 8084 | order.host.uk.com |
## Traefik Routing
De1 uses Docker labels for routing (Traefik Docker provider). Each container declares its own Traefik labels in its docker-compose. Traefik auto-discovers via Docker socket.
Homelab uses file-based routing at `/opt/noc/traefik/config/dynamic.yml`.
## Key Rules
- **NEVER ssh directly** — ALL operations through Ansible or ad-hoc commands
- **Port 4819** — always pass `-e ansible_port=4819` for production hosts
- **Credentials** — stored in `inventory/.credentials/` via Ansible password lookup
- **Dry run** — test with `--check` before applying
- **Existing playbooks** — ALWAYS check `playbooks/deploy/` before creating new ones
- **CLAUDE.md files** — read them at `DevOps/CLAUDE.md`, `playbooks/CLAUDE.md`, `playbooks/deploy/CLAUDE.md`, `playbooks/deploy/website/CLAUDE.md`, `roles/CLAUDE.md`
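The dry-run rule in practice: append `--check --diff` to any deploy invocation. A guarded sketch that prints the command when `ansible-playbook` is not on PATH (so it stays safe to run anywhere):

```shell
# Dry run a deploy: --check applies nothing, --diff shows what would change.
CMD="ansible-playbook playbooks/deploy/website/lthn_ai.yml -l primary -e ansible_port=4819 --check --diff"
if command -v ansible-playbook >/dev/null 2>&1; then
  $CMD || true   # dry run only; review the diff before a real apply
else
  echo "would run: $CMD"
fi
```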
## Gotchas
- The lthn.ai container on de1 previously ran the FULL host.uk.com app (serving both host.uk.com and lthn.ai domains). Now lthn.ai is a separate split app.
- The host.uk.com SaaS products (analytics, biolinks, pusher, socialproof, blesta) are separate AltumCode containers, NOT part of the CorePHP app.
- host.uk.com itself does NOT have a separate container on de1 yet — it was served by the lthn-ai container. After the split, host.uk.com needs its own container or the lthn-ai playbook needs updating.
- Galera replication: de1 is bootstrap node. Don't run galera playbooks unless you understand the cluster state.

View file

@@ -6,9 +6,9 @@ import (
 	"os"
 	"path/filepath"
-	"forge.lthn.ai/core/agent/pkg/agentic"
-	"forge.lthn.ai/core/agent/pkg/brain"
-	"forge.lthn.ai/core/agent/pkg/monitor"
+	"forge.lthn.ai/core/agent/agentic"
+	"forge.lthn.ai/core/agent/brain"
+	"forge.lthn.ai/core/agent/monitor"
 	"forge.lthn.ai/core/cli/pkg/cli"
 	"forge.lthn.ai/core/go-process"
 	"forge.lthn.ai/core/go/pkg/core"

View file

@@ -1,4 +1,4 @@
-module forge.lthn.ai/core/agent
+module dAppCo.re/go/agent
 go 1.26.0

View file

@@ -17,7 +17,7 @@ import (
 	"strings"
 	"time"
-	"forge.lthn.ai/core/agent/pkg/lib"
+	"forge.lthn.ai/core/agent/lib"
 	coreio "forge.lthn.ai/core/go-io"
 	coreerr "forge.lthn.ai/core/go-log"
 	"github.com/modelcontextprotocol/go-sdk/mcp"

View file

@@ -14,7 +14,7 @@ import (
 	"strings"
 	"time"
-	"forge.lthn.ai/core/agent/pkg/agentic"
+	"forge.lthn.ai/core/agent/agentic"
 	coreio "forge.lthn.ai/core/go-io"
 	coreerr "forge.lthn.ai/core/go-log"
 	"github.com/modelcontextprotocol/go-sdk/mcp"

View file

@@ -33,17 +33,17 @@ type RememberOutput struct {
 // RecallInput is the input for brain_recall.
 type RecallInput struct {
-	Query string `json:"query"`
-	TopK  int    `json:"top_k,omitempty"`
+	Query  string       `json:"query"`
+	TopK   int          `json:"top_k,omitempty"`
+	Filter RecallFilter `json:"filter,omitempty"`
 }
 // RecallFilter holds optional filter criteria for brain_recall.
 type RecallFilter struct {
-	Project       string  `json:"project,omitempty"`
-	Type          any     `json:"type,omitempty"`
-	AgentID       string  `json:"agent_id,omitempty"`
-	MinConfidence float64 `json:"min_confidence,omitempty"`
+	Project       string  `json:"project,omitempty"`
+	Type          any     `json:"type,omitempty"`
+	AgentID       string  `json:"agent_id,omitempty"`
+	MinConfidence float64 `json:"min_confidence,omitempty"`
 }
 // RecallOutput is the output for brain_recall.

Some files were not shown because too many files have changed in this diff.