docs: remove completed plan files
Webhook (implemented in core/php), review pipeline (commands + skills exist), and OpenBrain (BrainService, BrainMemory, 4 MCP tools, Actions, Commands all implemented) plans are all completed.

Co-Authored-By: Virgil <virgil@lethean.io>

This commit is contained in: parent `0a81a6046f`, commit `72c097ebcc`
6 changed files with 0 additions and 3971 deletions
@ -1,133 +0,0 @@
# Core\Webhook Design

**Date**: 2026-03-12
**Status**: Approved
**Location**: `core/php/src/Core/Webhook/`

## Goal

Framework-level webhook infrastructure: append-only inbound log + config-driven outbound cron triggers. Replaces 4 AltumCode `*-cron` Docker containers. No inline processing — ever.

## Hard Rule

**No inline processing.** The webhook endpoint stores data and returns 200. `WebhookReceived` is for lightweight awareness (increment a counter, set a flag) — never for HTTP calls, DB writes beyond the log, or queue dispatch. Actual work happens when the background worker reads the table on its own schedule. This prevents DDoS through the webhook endpoint.

## Architecture

### Inbound: Record Everything

**Table: `webhook_calls`**

| Column | Type | Purpose |
|--------|------|---------|
| `id` | `ulid` | Primary key |
| `source` | `string(64)` | Tag: `altum-biolinks`, `stripe`, `blesta` |
| `event_type` | `string(128)` nullable | From payload if parseable (e.g. `link.created`) |
| `headers` | `json` | Raw request headers |
| `payload` | `json` | Raw request body |
| `signature_valid` | `boolean` nullable | null = no verifier registered |
| `processed_at` | `timestamp` nullable | Set by consumer when they've handled it |
| `created_at` | `timestamp` | |

Indexed on `(source, processed_at, created_at)` — modules query unprocessed rows by source.

**Route:** `POST /webhooks/{source}` — no auth middleware, rate-limited by IP. Stores the raw request, fires `WebhookReceived` (source + call ID only, no payload in the event), returns `200 OK`.

**Signature verification:** `WebhookVerifier` interface with a single `verify(Request, string $secret): bool` method. Modules register their verifier per source. If no verifier is registered, `signature_valid` stays null. If verification fails, still store the row (for debugging) but mark `signature_valid = false`.
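The verifier contract is small enough to sketch outside PHP. A minimal HMAC-style check, assuming a hex `sha256` digest of the raw body; the scheme and digest layout here are illustrative assumptions, not part of the design:

```shell
# Hypothetical HMAC check mirroring verify(Request, secret): bool.
# The sha256 scheme and hex-digest comparison are assumptions.
verify() {
  body=$1; signature=$2; secret=$3
  expected=$(printf '%s' "$body" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
  [ "$signature" = "$expected" ] && echo true || echo false
}

body='{"type":"link.created","data":{"id":42}}'
good=$(printf '%s' "$body" | openssl dgst -sha256 -hmac 's3cr3t' | awk '{print $NF}')

verify "$body" "$good" 's3cr3t'     # matching digest: signature_valid = true
verify "$body" 'tampered' 's3cr3t'  # mismatch: row still stored, signature_valid = false
```

Either way the row is stored; the verifier only decides what lands in `signature_valid`.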
### Outbound: Cron Triggers

**`CronTrigger` scheduled action** — runs every minute via the `#[Scheduled]` system. Config-driven:

```php
// config/webhook.php
'cron_triggers' => [
    'altum-biolinks' => [
        'base_url' => env('ALTUM_BIOLINKS_URL'),
        'key' => env('ALTUM_BIOLINKS_CRON_KEY'),
        'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
        'stagger_seconds' => 15,
        'offset_seconds' => 5,
    ],
    'altum-analytics' => [
        'base_url' => env('ALTUM_ANALYTICS_URL'),
        'key' => env('ALTUM_ANALYTICS_CRON_KEY'),
        'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
        'stagger_seconds' => 15,
        'offset_seconds' => 0,
    ],
    'altum-pusher' => [
        'base_url' => env('ALTUM_PUSHER_URL'),
        'key' => env('ALTUM_PUSHER_CRON_KEY'),
        'endpoints' => [
            '/cron/reset', '/cron/broadcasts', '/cron/campaigns',
            '/cron/flows', '/cron/flows_notifications', '/cron/personal_notifications',
            '/cron/rss_automations', '/cron/recurring_campaigns', '/cron/push_notifications',
        ],
        'stagger_seconds' => 7,
        'offset_seconds' => 7,
    ],
    'altum-socialproof' => [
        'base_url' => env('ALTUM_SOCIALPROOF_URL'),
        'key' => env('ALTUM_SOCIALPROOF_CRON_KEY'),
        'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
        'stagger_seconds' => 15,
        'offset_seconds' => 10,
    ],
],
```

The action iterates products with their offset/stagger, hits each endpoint with `GET ?key={key}`. Fire-and-forget HTTP (short timeout, no retries). Logs failures to `webhook_calls` with source `cron-trigger-{product}` for health visibility.
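For illustration, the timeline those numbers produce: within each run, endpoint `i` fires at `offset + i * stagger` seconds. A sketch of the scheduling arithmetic (not the actual action), using the `altum-biolinks` values:

```shell
# Fire times for altum-biolinks: offset 5s, stagger 15s, 4 endpoints.
offset=5; stagger=15; i=0
for endpoint in /cron /cron/email_reports /cron/broadcasts /cron/push_notifications; do
  at=$((offset + i * stagger))
  echo "t+${at}s  GET ${endpoint}?key={key}"
  i=$((i + 1))
done
```

With these values the last endpoint fires at t+50s, inside the one-minute window, and the four products' distinct offsets keep their `/cron` hits from landing simultaneously.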
### Lifecycle Event

`WebhookReceived` carries only `source` and `call_id`. Modules subscribe via `$listens` for lightweight awareness — never for processing.

Consuming module pattern (e.g. Mod/Links Boot.php):

```php
public static array $listens = [
    WebhookReceived::class => 'onWebhook', // lightweight flag only
];
```

Actual processing: a scheduled action or queue job in the module queries `webhook_calls` where `source = 'altum-biolinks' and processed_at is null`.
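That consumer query can be sketched directly against the table shape above. SQLite stands in for the real database here, and the IDs and timestamp are abbreviated placeholders:

```shell
# Simulate a module polling webhook_calls for unprocessed rows from its source.
db=$(mktemp)
sqlite3 "$db" <<'SQL'
CREATE TABLE webhook_calls (id TEXT PRIMARY KEY, source TEXT, processed_at TEXT);
INSERT INTO webhook_calls VALUES ('01A', 'altum-biolinks', NULL);
INSERT INTO webhook_calls VALUES ('01B', 'altum-biolinks', '2026-03-12 10:00:00');
INSERT INTO webhook_calls VALUES ('01C', 'stripe', NULL);
SQL

unprocessed=$(sqlite3 "$db" \
  "SELECT id FROM webhook_calls WHERE source = 'altum-biolinks' AND processed_at IS NULL;")
echo "$unprocessed"
rm -f "$db"
```

Note that the composite `(source, processed_at, created_at)` index exists to serve exactly this predicate.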
## File Structure

```
src/Core/Webhook/
├── WebhookCall.php        # Eloquent model (ULID, append-only)
├── WebhookReceived.php    # Lifecycle event (source + call_id only)
├── WebhookController.php  # POST /webhooks/{source}
├── WebhookVerifier.php    # Interface: verify(Request, secret): bool
├── CronTrigger.php        # #[Scheduled('everyMinute')] Action
├── config.php             # cron_triggers config
└── Migrations/
    └── create_webhook_calls_table.php
```

## What This Replaces

| Before | After |
|--------|-------|
| `saas-biolinks-cron` container (wget loop) | CronTrigger scheduled action |
| `saas-analytics-cron` container (wget loop) | CronTrigger scheduled action |
| `saas-pusher-cron` container (wget loop) | CronTrigger scheduled action |
| `saas-socialproof-cron` container (wget loop) | CronTrigger scheduled action |
| No inbound webhook handling | `POST /webhooks/{source}` → append-only log |

## Scope Boundaries (YAGNI)

- No retry/backoff for outbound cron triggers (fire-and-forget)
- No webhook delivery (outbound webhooks to external consumers) — that's a separate feature
- No payload parsing or transformation — raw JSON storage
- No admin UI — query the table directly or via MCP
- No automatic cleanup/archival — add later if needed

## Success Criteria

- `POST /webhooks/{source}` stores raw request and returns 200 in <10ms
- CronTrigger hits all 4 AltumCode products every minute with correct stagger
- `signature_valid` correctly set when a verifier is registered
- No inline processing triggered by inbound webhooks
- 4 Docker cron containers can be removed from docker-compose.prod.yml
@ -1,948 +0,0 @@
# Core\Webhook Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add framework-level webhook infrastructure to core/php — append-only inbound log, config-driven outbound cron triggers, replaces 4 AltumCode Docker cron containers.

**Architecture:** `Core\Webhook` namespace in `src/Core/Webhook/`. One migration, one model, one controller, one lifecycle event, one scheduled action for cron triggers, one verifier interface. Modules subscribe to `WebhookReceived` for awareness; they query the table for processing on their own schedule. No inline processing — ever.

**Tech Stack:** Laravel 12, Pest testing, Eloquent (ULID keys), `#[Scheduled]` attribute, lifecycle events (`ApiRoutesRegistering`), HTTP client for outbound triggers.

---

### Task 1: Migration — `webhook_calls` table

**Files:**
- Create: `database/migrations/2026_03_12_000001_create_webhook_calls_table.php`

**Step 1: Create the migration**

```php
<?php

declare(strict_types=1);

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        Schema::create('webhook_calls', function (Blueprint $table) {
            $table->ulid('id')->primary();
            $table->string('source', 64)->index();
            $table->string('event_type', 128)->nullable();
            $table->json('headers');
            $table->json('payload');
            $table->boolean('signature_valid')->nullable();
            $table->timestamp('processed_at')->nullable();
            $table->timestamp('created_at')->useCurrent();

            $table->index(['source', 'processed_at', 'created_at']);
        });
    }

    public function down(): void
    {
        Schema::dropIfExists('webhook_calls');
    }
};
```

**Step 2: Verify**

```bash
cd /Users/snider/Code/core/php
head -5 database/migrations/2026_03_12_000001_create_webhook_calls_table.php
```

Expected: `declare(strict_types=1)` and `use Illuminate\Database\Migrations\Migration`.

**Step 3: Commit**

```bash
git add database/migrations/2026_03_12_000001_create_webhook_calls_table.php
git commit -m "feat(webhook): add webhook_calls migration — append-only inbound log"
```

---
### Task 2: Model — `WebhookCall`

**Files:**
- Create: `src/Core/Webhook/WebhookCall.php`
- Test: `tests/Feature/WebhookCallTest.php`

**Step 1: Write the test**

```php
<?php

declare(strict_types=1);

namespace Core\Tests\Feature;

use Core\Tests\TestCase;
use Core\Webhook\WebhookCall;
use Illuminate\Foundation\Testing\RefreshDatabase;

class WebhookCallTest extends TestCase
{
    use RefreshDatabase;

    public function test_create_webhook_call(): void
    {
        $call = WebhookCall::create([
            'source' => 'altum-biolinks',
            'event_type' => 'link.created',
            'headers' => ['webhook-id' => 'abc123'],
            'payload' => ['type' => 'link.created', 'data' => ['id' => 1]],
        ]);

        $this->assertNotNull($call->id);
        $this->assertSame('altum-biolinks', $call->source);
        $this->assertSame('link.created', $call->event_type);
        $this->assertIsArray($call->headers);
        $this->assertIsArray($call->payload);
        $this->assertNull($call->signature_valid);
        $this->assertNull($call->processed_at);
    }

    public function test_unprocessed_scope(): void
    {
        WebhookCall::create([
            'source' => 'stripe',
            'headers' => [],
            'payload' => ['type' => 'invoice.paid'],
        ]);

        WebhookCall::create([
            'source' => 'stripe',
            'headers' => [],
            'payload' => ['type' => 'invoice.created'],
            'processed_at' => now(),
        ]);

        $unprocessed = WebhookCall::unprocessed()->get();
        $this->assertCount(1, $unprocessed);
        $this->assertSame('invoice.paid', $unprocessed->first()->payload['type']);
    }

    public function test_for_source_scope(): void
    {
        WebhookCall::create(['source' => 'stripe', 'headers' => [], 'payload' => []]);
        WebhookCall::create(['source' => 'altum-biolinks', 'headers' => [], 'payload' => []]);

        $this->assertCount(1, WebhookCall::forSource('stripe')->get());
        $this->assertCount(1, WebhookCall::forSource('altum-biolinks')->get());
    }

    public function test_mark_processed(): void
    {
        $call = WebhookCall::create([
            'source' => 'test',
            'headers' => [],
            'payload' => [],
        ]);

        $this->assertNull($call->processed_at);

        $call->markProcessed();

        $this->assertNotNull($call->fresh()->processed_at);
    }

    public function test_signature_valid_is_nullable_boolean(): void
    {
        $call = WebhookCall::create([
            'source' => 'test',
            'headers' => [],
            'payload' => [],
            'signature_valid' => false,
        ]);

        $this->assertFalse($call->signature_valid);
    }
}
```

**Step 2: Run test to verify it fails**

```bash
cd /Users/snider/Code/core/php
vendor/bin/phpunit tests/Feature/WebhookCallTest.php
```

Expected: FAIL — class `Core\Webhook\WebhookCall` not found.

**Step 3: Create the model**

```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Concerns\HasUlids;
use Illuminate\Database\Eloquent\Model;

class WebhookCall extends Model
{
    use HasUlids;

    public $timestamps = false;

    protected $fillable = [
        'source',
        'event_type',
        'headers',
        'payload',
        'signature_valid',
        'processed_at',
    ];

    protected function casts(): array
    {
        return [
            'headers' => 'array',
            'payload' => 'array',
            'signature_valid' => 'boolean',
            'processed_at' => 'datetime',
            'created_at' => 'datetime',
        ];
    }

    public function scopeUnprocessed(Builder $query): Builder
    {
        return $query->whereNull('processed_at');
    }

    public function scopeForSource(Builder $query, string $source): Builder
    {
        return $query->where('source', $source);
    }

    public function markProcessed(): void
    {
        $this->update(['processed_at' => now()]);
    }
}
```

**Step 4: Run tests**

```bash
vendor/bin/phpunit tests/Feature/WebhookCallTest.php
```

Expected: All 5 tests pass.

**Step 5: Commit**

```bash
git add src/Core/Webhook/WebhookCall.php tests/Feature/WebhookCallTest.php
git commit -m "feat(webhook): add WebhookCall model — ULID, scopes, markProcessed"
```

---
### Task 3: Lifecycle event — `WebhookReceived`

**Files:**
- Create: `src/Core/Webhook/WebhookReceived.php`

**Step 1: Create the event**

This is a simple value object. It carries only the source tag and call ID — never the payload. Modules subscribe to it for lightweight awareness only.

```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

class WebhookReceived
{
    public function __construct(
        public readonly string $source,
        public readonly string $callId,
    ) {}
}
```

**Step 2: Commit**

```bash
git add src/Core/Webhook/WebhookReceived.php
git commit -m "feat(webhook): add WebhookReceived event — source + callId only"
```

---
### Task 4: Verifier interface — `WebhookVerifier`

**Files:**
- Create: `src/Core/Webhook/WebhookVerifier.php`

**Step 1: Create the interface**

```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

use Illuminate\Http\Request;

interface WebhookVerifier
{
    /**
     * Verify the webhook signature.
     *
     * Returns true if valid, false if invalid.
     * Implementations should check headers like webhook-signature against
     * the raw request body using the provided secret.
     */
    public function verify(Request $request, string $secret): bool;
}
```

**Step 2: Commit**

```bash
git add src/Core/Webhook/WebhookVerifier.php
git commit -m "feat(webhook): add WebhookVerifier interface — per-source signature check"
```

---
### Task 5: Controller — `WebhookController`

**Files:**
- Create: `src/Core/Webhook/WebhookController.php`
- Test: `tests/Feature/WebhookControllerTest.php`

**Step 1: Write the test**

```php
<?php

declare(strict_types=1);

namespace Core\Tests\Feature;

use Core\Tests\TestCase;
use Core\Webhook\WebhookCall;
use Core\Webhook\WebhookReceived;
use Core\Webhook\WebhookVerifier;
use Illuminate\Foundation\Testing\RefreshDatabase;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Event;

class WebhookControllerTest extends TestCase
{
    use RefreshDatabase;

    protected function defineRoutes($router): void
    {
        // Mirror Boot.php's source constraint so invalid sources 404 here too.
        $router->post('/webhooks/{source}', [\Core\Webhook\WebhookController::class, 'handle'])
            ->where('source', '[a-z0-9\-]+');
    }

    public function test_stores_webhook_call(): void
    {
        $response = $this->postJson('/webhooks/altum-biolinks', [
            'type' => 'link.created',
            'data' => ['id' => 42],
        ]);

        $response->assertOk();

        $call = WebhookCall::first();
        $this->assertNotNull($call);
        $this->assertSame('altum-biolinks', $call->source);
        $this->assertSame(['type' => 'link.created', 'data' => ['id' => 42]], $call->payload);
        $this->assertNull($call->processed_at);
    }

    public function test_captures_headers(): void
    {
        $this->postJson('/webhooks/stripe', ['type' => 'invoice.paid'], [
            'Webhook-Id' => 'msg_abc123',
            'Webhook-Timestamp' => '1234567890',
        ]);

        $call = WebhookCall::first();
        $this->assertArrayHasKey('webhook-id', $call->headers);
    }

    public function test_fires_webhook_received_event(): void
    {
        Event::fake([WebhookReceived::class]);

        $this->postJson('/webhooks/altum-biolinks', ['type' => 'test']);

        Event::assertDispatched(WebhookReceived::class, function ($event) {
            return $event->source === 'altum-biolinks' && ! empty($event->callId);
        });
    }

    public function test_extracts_event_type_from_payload(): void
    {
        $this->postJson('/webhooks/stripe', ['type' => 'invoice.paid']);

        $this->assertSame('invoice.paid', WebhookCall::first()->event_type);
    }

    public function test_handles_empty_payload(): void
    {
        $response = $this->postJson('/webhooks/test', []);

        $response->assertOk();
        $this->assertCount(1, WebhookCall::all());
    }

    public function test_signature_valid_null_when_no_verifier(): void
    {
        $this->postJson('/webhooks/unknown-source', ['data' => 1]);

        $this->assertNull(WebhookCall::first()->signature_valid);
    }

    public function test_signature_verified_when_verifier_registered(): void
    {
        $verifier = new class implements WebhookVerifier {
            public function verify(Request $request, string $secret): bool
            {
                return $request->header('webhook-signature') === 'valid';
            }
        };

        $this->app->instance('webhook.verifier.test-source', $verifier);
        $this->app['config']->set('webhook.secrets.test-source', 'test-secret');

        $this->postJson('/webhooks/test-source', ['data' => 1], [
            'Webhook-Signature' => 'valid',
        ]);

        $this->assertTrue(WebhookCall::first()->signature_valid);
    }

    public function test_signature_invalid_still_stores_call(): void
    {
        $verifier = new class implements WebhookVerifier {
            public function verify(Request $request, string $secret): bool
            {
                return false;
            }
        };

        $this->app->instance('webhook.verifier.test-source', $verifier);
        $this->app['config']->set('webhook.secrets.test-source', 'test-secret');

        $this->postJson('/webhooks/test-source', ['data' => 1]);

        $call = WebhookCall::first();
        $this->assertNotNull($call);
        $this->assertFalse($call->signature_valid);
    }

    public function test_source_is_sanitised(): void
    {
        $response = $this->postJson('/webhooks/valid-source-123', ['data' => 1]);
        $response->assertOk();

        $response = $this->postJson('/webhooks/invalid source!', ['data' => 1]);
        $response->assertStatus(404);
    }
}
```

**Step 2: Run test to verify it fails**

```bash
vendor/bin/phpunit tests/Feature/WebhookControllerTest.php
```

Expected: FAIL — class `Core\Webhook\WebhookController` not found.

**Step 3: Create the controller**

```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

use Illuminate\Http\JsonResponse;
use Illuminate\Http\Request;

class WebhookController
{
    public function handle(Request $request, string $source): JsonResponse
    {
        $signatureValid = null;

        // Check for a registered verifier
        $verifier = app()->bound("webhook.verifier.{$source}")
            ? app("webhook.verifier.{$source}")
            : null;

        if ($verifier instanceof WebhookVerifier) {
            // Cast: the config value may be null when the env var is unset.
            $secret = (string) config("webhook.secrets.{$source}", '');
            $signatureValid = $verifier->verify($request, $secret);
        }

        // Extract event type from common payload patterns
        $payload = $request->json()->all();
        $eventType = $payload['type'] ?? $payload['event_type'] ?? $payload['event'] ?? null;

        $call = WebhookCall::create([
            'source' => $source,
            'event_type' => is_string($eventType) ? $eventType : null,
            'headers' => $request->headers->all(),
            'payload' => $payload,
            'signature_valid' => $signatureValid,
        ]);

        event(new WebhookReceived($source, $call->id));

        return response()->json(['ok' => true]);
    }
}
```

**Step 4: Run tests**

```bash
vendor/bin/phpunit tests/Feature/WebhookControllerTest.php
```

Expected: All 9 tests pass.

**Step 5: Commit**

```bash
git add src/Core/Webhook/WebhookController.php tests/Feature/WebhookControllerTest.php
git commit -m "feat(webhook): add WebhookController — store, verify, fire event, return 200"
```

---
### Task 6: Config + route registration

**Files:**
- Create: `src/Core/Webhook/config.php`
- Create: `src/Core/Webhook/Boot.php`

**Step 1: Create the config**

```php
<?php

declare(strict_types=1);

return [

    /*
    |--------------------------------------------------------------------------
    | Webhook Secrets
    |--------------------------------------------------------------------------
    |
    | Per-source signing secrets for signature verification.
    | Modules register WebhookVerifier implementations per source.
    |
    */

    'secrets' => [
        'altum-biolinks' => env('ALTUM_BIOLINKS_WEBHOOK_SECRET'),
        'altum-analytics' => env('ALTUM_ANALYTICS_WEBHOOK_SECRET'),
        'altum-pusher' => env('ALTUM_PUSHER_WEBHOOK_SECRET'),
        'altum-socialproof' => env('ALTUM_SOCIALPROOF_WEBHOOK_SECRET'),
    ],

    /*
    |--------------------------------------------------------------------------
    | Cron Triggers
    |--------------------------------------------------------------------------
    |
    | Outbound HTTP triggers that replace Docker cron containers.
    | The CronTrigger action hits these endpoints every minute.
    |
    */

    'cron_triggers' => [
        'altum-biolinks' => [
            'base_url' => env('ALTUM_BIOLINKS_URL'),
            'key' => env('ALTUM_BIOLINKS_CRON_KEY'),
            'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
            'stagger_seconds' => 15,
            'offset_seconds' => 5,
        ],
        'altum-analytics' => [
            'base_url' => env('ALTUM_ANALYTICS_URL'),
            'key' => env('ALTUM_ANALYTICS_CRON_KEY'),
            'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
            'stagger_seconds' => 15,
            'offset_seconds' => 0,
        ],
        'altum-pusher' => [
            'base_url' => env('ALTUM_PUSHER_URL'),
            'key' => env('ALTUM_PUSHER_CRON_KEY'),
            'endpoints' => [
                '/cron/reset', '/cron/broadcasts', '/cron/campaigns',
                '/cron/flows', '/cron/flows_notifications', '/cron/personal_notifications',
                '/cron/rss_automations', '/cron/recurring_campaigns', '/cron/push_notifications',
            ],
            'stagger_seconds' => 7,
            'offset_seconds' => 7,
        ],
        'altum-socialproof' => [
            'base_url' => env('ALTUM_SOCIALPROOF_URL'),
            'key' => env('ALTUM_SOCIALPROOF_CRON_KEY'),
            'endpoints' => ['/cron', '/cron/email_reports', '/cron/broadcasts', '/cron/push_notifications'],
            'stagger_seconds' => 15,
            'offset_seconds' => 10,
        ],
    ],

];
```

**Step 2: Create Boot.php**

```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

use Core\Events\ApiRoutesRegistering;
use Illuminate\Support\Facades\Route;
use Illuminate\Support\ServiceProvider;

class Boot extends ServiceProvider
{
    public static array $listens = [
        ApiRoutesRegistering::class => 'onApiRoutes',
    ];

    public function register(): void
    {
        $this->mergeConfigFrom(__DIR__.'/config.php', 'webhook');
    }

    public function onApiRoutes(ApiRoutesRegistering $event): void
    {
        $event->routes(fn () => Route::post(
            '/webhooks/{source}',
            [WebhookController::class, 'handle']
        )->where('source', '[a-z0-9\-]+'));
    }
}
```

**Step 3: Commit**

```bash
git add src/Core/Webhook/config.php src/Core/Webhook/Boot.php
git commit -m "feat(webhook): add config + Boot — route registration, cron trigger config"
```

---
### Task 7: CronTrigger scheduled action
|
||||
|
||||
**Files:**
|
||||
- Create: `src/Core/Webhook/CronTrigger.php`
|
||||
- Test: `tests/Feature/CronTriggerTest.php`
|
||||
|
||||
**Step 1: Write the test**
|
||||
|
||||
```php
|
||||
<?php
|
||||
|
||||
declare(strict_types=1);
|
||||
|
||||
namespace Core\Tests\Feature;
|
||||
|
||||
use Core\Actions\Action;
|
||||
use Core\Actions\Scheduled;
|
||||
use Core\Webhook\CronTrigger;
|
||||
use Illuminate\Support\Facades\Http;
|
||||
use Core\Tests\TestCase;
|
||||
|
||||
class CronTriggerTest extends TestCase
|
||||
{
|
||||
public function test_has_scheduled_attribute(): void
|
||||
{
|
||||
$ref = new \ReflectionClass(CronTrigger::class);
|
||||
$attrs = $ref->getAttributes(Scheduled::class);
|
||||
|
||||
$this->assertCount(1, $attrs);
|
||||
$this->assertSame('everyMinute', $attrs[0]->newInstance()->frequency);
|
||||
}
|
||||
|
||||
public function test_uses_action_trait(): void
|
||||
{
|
||||
$this->assertTrue(
|
||||
in_array(Action::class, class_uses_recursive(CronTrigger::class), true)
|
||||
);
|
||||
}
|
||||
|
||||
public function test_hits_configured_endpoints(): void
|
||||
{
|
||||
Http::fake();
|
||||
|
||||
config(['webhook.cron_triggers' => [
|
||||
'test-product' => [
|
||||
'base_url' => 'https://example.com',
|
||||
'key' => 'secret123',
|
||||
'endpoints' => ['/cron', '/cron/reports'],
|
||||
'stagger_seconds' => 0,
|
||||
'offset_seconds' => 0,
|
||||
],
|
||||
]]);
|
||||
|
||||
CronTrigger::run();
|
||||
|
||||
Http::assertSentCount(2);
|
||||
Http::assertSent(fn ($request) => str_contains($request->url(), '/cron?key=secret123'));
|
||||
Http::assertSent(fn ($request) => str_contains($request->url(), '/cron/reports?key=secret123'));
|
||||
}
|
||||
|
||||
public function test_skips_product_with_no_base_url(): void
|
||||
{
|
||||
Http::fake();
|
||||
|
||||
config(['webhook.cron_triggers' => [
|
||||
'disabled-product' => [
|
||||
'base_url' => null,
|
||||
'key' => 'secret',
|
||||
'endpoints' => ['/cron'],
|
||||
'stagger_seconds' => 0,
|
||||
'offset_seconds' => 0,
|
||||
],
|
||||
]]);
|
||||
|
||||
CronTrigger::run();
|
||||
|
||||
Http::assertSentCount(0);
|
||||
}
|
||||
|
||||
public function test_logs_failures_gracefully(): void
|
||||
{
|
||||
Http::fake([
|
||||
'*' => Http::response('error', 500),
|
||||
]);
|
||||
|
||||
config(['webhook.cron_triggers' => [
|
||||
'failing-product' => [
|
||||
'base_url' => 'https://broken.example.com',
|
||||
'key' => 'key',
|
||||
'endpoints' => ['/cron'],
|
||||
'stagger_seconds' => 0,
|
||||
'offset_seconds' => 0,
|
||||
],
|
||||
]]);
|
||||
|
||||
// Should not throw
|
||||
CronTrigger::run();
|
||||
|
||||
Http::assertSentCount(1);
|
||||
}
|
||||
|
||||
public function test_handles_empty_config(): void
|
||||
{
|
||||
Http::fake();
|
||||
config(['webhook.cron_triggers' => []]);
|
||||
|
||||
CronTrigger::run();
|
||||
|
||||
Http::assertSentCount(0);
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
**Step 2: Run test to verify it fails**
|
||||
|
||||
```bash
|
||||
vendor/bin/phpunit tests/Feature/CronTriggerTest.php
|
||||
```
|
||||
|
||||
Expected: FAIL — class `Core\Webhook\CronTrigger` not found.
|
||||
|
||||
**Step 3: Create the action**
|
||||
|
||||
```php
<?php

/*
 * Core PHP Framework
 *
 * Licensed under the European Union Public Licence (EUPL) v1.2.
 * See LICENSE file for details.
 */

declare(strict_types=1);

namespace Core\Webhook;

use Core\Actions\Action;
use Core\Actions\Scheduled;
use Illuminate\Support\Facades\Http;

#[Scheduled(frequency: 'everyMinute', withoutOverlapping: true, runInBackground: true)]
class CronTrigger
{
    use Action;

    public function handle(): void
    {
        $triggers = config('webhook.cron_triggers', []);

        foreach ($triggers as $product => $config) {
            if (empty($config['base_url'])) {
                continue;
            }

            $baseUrl = rtrim($config['base_url'], '/');
            $key = $config['key'] ?? '';
            $stagger = (int) ($config['stagger_seconds'] ?? 0);
            $offset = (int) ($config['offset_seconds'] ?? 0);

            if ($offset > 0) {
                usleep($offset * 1_000_000);
            }

            foreach ($config['endpoints'] ?? [] as $i => $endpoint) {
                if ($i > 0 && $stagger > 0) {
                    usleep($stagger * 1_000_000);
                }

                $url = $baseUrl . $endpoint . '?key=' . $key;

                try {
                    Http::timeout(30)->get($url);
                } catch (\Throwable $e) {
                    logger()->warning("Cron trigger failed for {$product}{$endpoint}: {$e->getMessage()}");
                }
            }
        }
    }
}
```
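
For reference, a sketch of the `cron_triggers` shape the action above reads. The product name, URL, and env key are illustrative placeholders, and how `src/Core/Webhook/config.php` is registered under the `webhook` config key is outside this excerpt:

```php
<?php

declare(strict_types=1);

// Sketch of the cron_triggers config consumed by CronTrigger::handle().
// Product names, URLs, and env keys below are illustrative placeholders.
return [
    'cron_triggers' => [
        'biolinks' => [
            'base_url' => 'https://links.example.com', // trailing slash is trimmed
            'key' => env('BIOLINKS_CRON_KEY', ''),     // appended as ?key=...
            'endpoints' => ['/cron'],                  // hit in order
            'stagger_seconds' => 5,                    // pause between endpoints
            'offset_seconds' => 0,                     // pause before the first call
        ],
    ],
];
```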

**Step 4: Run tests**

```bash
vendor/bin/phpunit tests/Feature/CronTriggerTest.php
```

Expected: All 6 tests pass.

**Step 5: Commit**

```bash
git add src/Core/Webhook/CronTrigger.php tests/Feature/CronTriggerTest.php
git commit -m "feat(webhook): add CronTrigger action — replaces 4 Docker cron containers"
```

---

### Task 8: Final verification

**Step 1: Verify file structure**

```bash
cd /Users/snider/Code/core/php
find src/Core/Webhook/ -type f | sort
find database/migrations/ -name '*webhook*' | sort
find tests/Feature/ -name '*Webhook*' -o -name '*CronTrigger*' | sort
```

Expected:
```
src/Core/Webhook/Boot.php
src/Core/Webhook/CronTrigger.php
src/Core/Webhook/WebhookCall.php
src/Core/Webhook/WebhookController.php
src/Core/Webhook/WebhookReceived.php
src/Core/Webhook/WebhookVerifier.php
src/Core/Webhook/config.php
database/migrations/2026_03_12_000001_create_webhook_calls_table.php
tests/Feature/WebhookCallTest.php
tests/Feature/WebhookControllerTest.php
tests/Feature/CronTriggerTest.php
```

**Step 2: Run all webhook tests**

```bash
vendor/bin/phpunit tests/Feature/WebhookCallTest.php tests/Feature/WebhookControllerTest.php tests/Feature/CronTriggerTest.php
```

Expected: All tests pass (5 + 9 + 6 = 20 tests).

**Step 3: Run lint**

```bash
./vendor/bin/pint --test src/Core/Webhook/ tests/Feature/WebhookCallTest.php tests/Feature/WebhookControllerTest.php tests/Feature/CronTriggerTest.php
```

Expected: Clean.

**Step 4: Verify strict types in all files**

```bash
grep -rL 'declare(strict_types=1)' src/Core/Webhook/ database/migrations/*webhook*
```

Expected: No output (all files have strict types).

**Step 5: Final commit if any lint fixes needed**

```bash
git add -A src/Core/Webhook/ tests/Feature/ database/migrations/
git status
```

If clean, done. If fixes needed, commit them.

@@ -1,134 +0,0 @@
# Review Pipeline Plugin Design

**Date**: 2026-03-12
**Status**: Approved
**Location**: `core/agent/claude/review/`

## Goal

Build a 5-stage automated code review pipeline as a Claude Code plugin command (`/review:pipeline`) that dispatches specialised agent personas sequentially, each building on the previous stage's findings.

## Architecture

Extend the existing `claude/review` plugin (not a new plugin). Add skills that reference agent persona files from `agents/` by path — single source of truth, no duplication.

### File Structure

```
claude/review/
├── .claude-plugin/plugin.json   # Updated — add skills
├── commands/
│   ├── pipeline.md              # NEW — /review:pipeline orchestrator
│   ├── review.md                # Existing (unchanged)
│   ├── security.md              # Existing (unchanged)
│   └── pr.md                    # Existing (unchanged)
├── skills/
│   ├── security-review.md       # Stage 1: Security Engineer
│   ├── senior-dev-fix.md        # Stage 2: Senior Developer (fix)
│   ├── test-analysis.md         # Stage 3: API Tester
│   ├── architecture-review.md   # Stage 4: Backend Architect
│   └── reality-check.md         # Stage 5: Reality Checker
├── hooks.json                   # Existing (unchanged)
└── scripts/
    └── post-pr-create.sh        # Existing (unchanged)
```

### Agent Personas (source of truth)

| Stage | Agent | Persona File |
|-------|-------|--------------|
| 1 | Security Engineer | `agents/engineering/engineering-security-engineer.md` |
| 2 | Senior Developer | `agents/engineering/engineering-senior-developer.md` |
| 3 | API Tester | `agents/testing/testing-api-tester.md` |
| 4 | Backend Architect | `agents/engineering/engineering-backend-architect.md` |
| 5 | Reality Checker | `agents/testing/testing-reality-checker.md` |

## Pipeline Flow

```
/review:pipeline [range]
  │
  ├─ Stage 1: Security Engineer (read-only review)
  │    → Findings: Critical/High/Medium/Low issues
  │    → If Critical found: flag for Stage 2
  │
  ├─ Stage 2: Senior Developer
  │    → If security Criticals: FIX them, then re-run Stage 1
  │    → If no Criticals: skip to Stage 3
  │
  ├─ Stage 3: API Tester (run tests, analyse coverage)
  │    → Test results + coverage gaps
  │
  ├─ Stage 4: Backend Architect (architecture fit)
  │    → Lifecycle event usage, Actions pattern, tenant isolation
  │
  └─ Stage 5: Reality Checker (final gate)
       → Verdict: READY / NEEDS WORK / FAILED
       → Aggregated report
```

## Command Interface

```
/review:pipeline                   # Staged changes
/review:pipeline HEAD~3..HEAD      # Commit range
/review:pipeline --pr=123          # PR (via gh)
/review:pipeline --stage=security  # Run single stage only
/review:pipeline --skip=fix        # Skip the fix stage (review only)
```

## Skill Design

Each skill file is a markdown document that:

1. Reads the agent persona from the `agents/` directory at dispatch time
2. Constructs a subagent prompt combining: persona + diff context + prior stage findings
3. Dispatches via the Agent tool (general-purpose subagent)
4. Returns structured findings in a consistent format

Skills are lightweight orchestration — the agent personas contain the domain knowledge.
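
The composition in steps 1–2 above can be pictured as plain string assembly. A minimal sketch with placeholder values; the real assembly happens inside Claude Code, not in a shell script:

```shell
# Sketch only: a stage prompt is persona + task + diff + prior findings.
# All values here are placeholders, not part of the plugin.
persona='[contents of engineering-security-engineer.md]'
diff='[output of git diff --cached]'
prior=''   # empty for Stage 1; later stages receive earlier findings

prompt=$(printf '%s\n\n## Your Task\n\nReview the diff below.\n\n### Diff\n%s\n\n### Prior Findings\n%s\n' \
    "$persona" "$diff" "$prior")

echo "$prompt"
```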

## Output Format

```markdown
# Review Pipeline Report

## Stage 1: Security Review
**Agent**: Security Engineer
[Structured findings with severity, file:line, attack vector, fix]

## Stage 2: Fixes Applied
**Agent**: Senior Developer
[What was fixed, or "Skipped — no Critical issues"]

## Stage 3: Test Analysis
**Agent**: API Tester
[Test pass/fail count, coverage gaps for changed code]

## Stage 4: Architecture Review
**Agent**: Backend Architect
[Lifecycle events, Actions pattern, tenant isolation, namespace mapping]

## Stage 5: Final Verdict
**Agent**: Reality Checker
**Status**: READY / NEEDS WORK / FAILED
**Quality Rating**: C+ / B- / B / B+
[Evidence-based summary with specific file references]
```

## Scope Boundaries (YAGNI)

- No persistent storage of review results
- No automatic PR commenting (add later via hook if needed)
- No parallel agent dispatch (sequential by design — each builds on previous)
- No custom agent selection — the 5-agent team is fixed
- No CodeRabbit integration (separate learning exercise)

## Success Criteria

- `/review:pipeline` runs all 5 stages on a diff and produces an aggregated report
- Each stage uses the tailored agent persona (not generic prompts)
- Security Criticals trigger the fix→re-review loop
- Reality Checker produces an evidence-based verdict with test output
- Individual stages can be run standalone via `--stage=`
- Plugin installs cleanly and doesn't break existing `/review` commands

@@ -1,821 +0,0 @@
# Review Pipeline Plugin Implementation Plan

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Add a `/review:pipeline` command to the existing `claude/review` plugin that dispatches 5 specialised agent personas sequentially for automated code review.

**Architecture:** Extend `claude/review/` with a `pipeline.md` command and 5 skill files (one per review stage). Each skill reads its agent persona from `agents/` at dispatch time and constructs a subagent prompt with diff context + prior findings. The command orchestrates the pipeline sequentially.

**Tech Stack:** Claude Code plugin system (commands, skills, hooks.json), Agent tool for subagent dispatch, git/gh CLI for diff collection.

---

### Task 1: Update plugin.json

**Files:**
- Modify: `claude/review/.claude-plugin/plugin.json`

**Step 1: Read the current plugin.json**

```bash
cat /Users/snider/Code/core/agent/claude/review/.claude-plugin/plugin.json
```

Current content:
```json
{
  "name": "review",
  "description": "Code review automation - PR review, security checks, best practices",
  "version": "0.1.0",
  "author": {
    "name": "Host UK"
  }
}
```

**Step 2: Update plugin.json with new version and description**

```json
{
  "name": "review",
  "description": "Code review automation — 5-agent review pipeline, PR review, security checks, architecture validation",
  "version": "0.2.0",
  "author": {
    "name": "Host UK"
  }
}
```

**Step 3: Commit**

```bash
git add claude/review/.claude-plugin/plugin.json
git commit -m "chore(review): bump plugin to v0.2.0 for pipeline feature"
```

---

### Task 2: Create the pipeline command

**Files:**
- Create: `claude/review/commands/pipeline.md`

**Step 1: Create the command file**

This is the main orchestrator. It tells Claude how to run the 5-stage pipeline. The command reads the diff, then dispatches subagents in sequence using the Agent tool.
````markdown
---
name: pipeline
description: Run the 5-agent review pipeline on code changes
args: [commit-range|--pr=N|--stage=NAME|--skip=fix]
---

# Review Pipeline

Run a 5-stage automated code review pipeline using specialised agent personas.

## Usage

```
/review:pipeline                   # Staged changes
/review:pipeline HEAD~3..HEAD      # Commit range
/review:pipeline --pr=123          # PR diff (via gh)
/review:pipeline --stage=security  # Single stage only
/review:pipeline --skip=fix        # Review only, no fixes
```

## Pipeline Stages

| Stage | Agent | Role | Modifies Code? |
|-------|-------|------|----------------|
| 1 | Security Engineer | Threat analysis, injection, tenant isolation | No |
| 2 | Senior Developer | Fix Critical security findings | Yes |
| 3 | API Tester | Run tests, analyse coverage gaps | No |
| 4 | Backend Architect | Architecture fit, lifecycle events, Actions pattern | No |
| 5 | Reality Checker | Final gate — evidence-based verdict | No |

## Process

### Step 1: Gather the diff

Determine what code to review based on arguments:

```bash
# Staged changes (default)
git diff --cached

# Commit range
git diff HEAD~3..HEAD

# PR
gh pr diff 123

# Also get the list of changed files
git diff --name-only HEAD~3..HEAD
```

Store the diff and file list — every stage needs them.

### Step 2: Identify the package

Determine which package the changes belong to by checking file paths. This tells agents where to run tests:

```bash
# If files are in src/Core/ or app/Core/ → core/php package
# If files are in a core-{name}/ directory → that package
# Check for composer.json or go.mod to confirm
```

### Step 3: Run the pipeline

Dispatch each stage as a subagent using the Agent tool. Each stage receives:
- The diff context
- The list of changed files
- Findings from all previous stages
- Its agent persona (read from the agents/ directory)

**Stage 1 — Security Review:**
- Read persona: `agents/engineering/engineering-security-engineer.md`
- Dispatch subagent with persona + diff
- Task: Read-only security review. Find threats, injection, tenant isolation gaps
- Output: Structured findings with severity ratings
- If any CRITICAL findings → flag for Stage 2

**Stage 2 — Fix (conditional):**
- Read persona: `agents/engineering/engineering-senior-developer.md`
- SKIP if `--skip=fix` was passed
- SKIP if Stage 1 found no CRITICAL issues
- Dispatch subagent with persona + Stage 1 Critical findings
- Task: Fix the Critical security issues
- After fixing: re-dispatch Stage 1 to verify fixes
- Output: List of files modified and what was fixed

**Stage 3 — Test Analysis:**
- Read persona: `agents/testing/testing-api-tester.md`
- Dispatch subagent with persona + diff + changed files
- Task: Run tests (`composer test` or `core go test`), analyse which changes have test coverage
- Output: Test results (pass/fail/count) + coverage gaps

**Stage 4 — Architecture Review:**
- Read persona: `agents/engineering/engineering-backend-architect.md`
- Dispatch subagent with persona + diff + changed files
- Task: Check lifecycle event usage, Actions pattern adherence, tenant isolation, namespace mapping
- Output: Architecture assessment with specific findings

**Stage 5 — Reality Check (final gate):**
- Read persona: `agents/testing/testing-reality-checker.md`
- Dispatch subagent with persona + ALL prior stage findings + test output
- Task: Evidence-based final verdict. Default to NEEDS WORK.
- Output: Verdict (READY / NEEDS WORK / FAILED) + quality rating + required fixes

### Step 4: Aggregate report

Combine all stage outputs into the final report:

```markdown
# Review Pipeline Report

## Stage 1: Security Review
**Agent**: Security Engineer
[Stage 1 findings]

## Stage 2: Fixes Applied
**Agent**: Senior Developer
[Stage 2 output, or "Skipped — no Critical issues"]

## Stage 3: Test Analysis
**Agent**: API Tester
[Stage 3 test results + coverage gaps]

## Stage 4: Architecture Review
**Agent**: Backend Architect
[Stage 4 architecture assessment]

## Stage 5: Final Verdict
**Agent**: Reality Checker
**Status**: [READY / NEEDS WORK / FAILED]
**Quality Rating**: [C+ / B- / B / B+]
[Evidence-based summary]

---
Pipeline completed: [timestamp]
Stages run: [1-5]
```

## Single Stage Mode

When `--stage=NAME` is passed, run only that stage:

| Name | Stage |
|------|-------|
| `security` | Stage 1: Security Engineer |
| `fix` | Stage 2: Senior Developer |
| `test` | Stage 3: API Tester |
| `architecture` | Stage 4: Backend Architect |
| `reality` | Stage 5: Reality Checker |

For single-stage mode, still gather the diff but skip prior/subsequent stages.

## Agent Persona Paths

All personas live in the `agents/` directory two levels above the plugin root:

```
${CLAUDE_PLUGIN_ROOT}/../../agents/engineering/engineering-security-engineer.md
${CLAUDE_PLUGIN_ROOT}/../../agents/engineering/engineering-senior-developer.md
${CLAUDE_PLUGIN_ROOT}/../../agents/testing/testing-api-tester.md
${CLAUDE_PLUGIN_ROOT}/../../agents/engineering/engineering-backend-architect.md
${CLAUDE_PLUGIN_ROOT}/../../agents/testing/testing-reality-checker.md
```

Read each persona file before dispatching that stage's subagent.
````
**Step 2: Verify the command is valid markdown with frontmatter**

```bash
head -5 claude/review/commands/pipeline.md
```

Expected: YAML frontmatter with `name: pipeline`, `description`, `args`.

**Step 3: Commit**

```bash
git add claude/review/commands/pipeline.md
git commit -m "feat(review): add /review:pipeline command — 5-agent review orchestrator"
```

---

### Task 3: Create Stage 1 skill — Security Review

**Files:**
- Create: `claude/review/skills/security-review.md`

**Step 1: Create the skill file**
````markdown
---
name: security-review
description: Stage 1 of review pipeline — dispatch Security Engineer agent for threat analysis, injection review, and tenant isolation checks on code changes
---

# Security Review Stage

Dispatch the **Security Engineer** agent to perform a read-only security review of code changes.

## When to Use

This skill is invoked as Stage 1 of the `/review:pipeline` command. It can also be triggered standalone via `/review:pipeline --stage=security`.

## Agent Persona

Read the Security Engineer persona from:
```
agents/engineering/engineering-security-engineer.md
```

## Dispatch Instructions

1. Read the persona file contents
2. Read the diff and list of changed files
3. Dispatch a subagent with the Agent tool using this prompt structure:

```
[Persona content from engineering-security-engineer.md]

## Your Task

Perform a security-focused code review of the following changes. This is a READ-ONLY review — do not modify any files.

### Changed Files
[List of changed files]

### Diff
[Full diff content]

### Focus Areas
- Arbitrary code execution vectors
- Method/class injection from DB or config values
- Tenant isolation (BelongsToWorkspace on all tenant-scoped models)
- Input validation in Action handle() methods
- Namespace safety (allowlists where class names come from external sources)
- Error handling (no silent swallowing, no stack trace leakage)
- Secrets in code (API keys, credentials, .env values)

### Output Format

Produce findings in this exact format:

## Security Review Findings

### CRITICAL
- **file:line** — [Title]: [Description]. **Attack vector**: [How]. **Fix**: [What to change]

### HIGH
- **file:line** — [Title]: [Description]. **Fix**: [What to change]

### MEDIUM
- **file:line** — [Title]: [Description]. **Fix**: [What to change]

### LOW
- **file:line** — [Title]: [Description]

### Positive Controls
[Things done well — allowlists, guards, scoping]

**Summary**: X critical, Y high, Z medium, W low
```

4. Return the subagent's findings as the stage output
````
**Step 2: Commit**

```bash
git add claude/review/skills/security-review.md
git commit -m "feat(review): add security-review skill — Stage 1 of pipeline"
```

---

### Task 4: Create Stage 2 skill — Senior Dev Fix

**Files:**
- Create: `claude/review/skills/senior-dev-fix.md`

**Step 1: Create the skill file**
````markdown
---
name: senior-dev-fix
description: Stage 2 of review pipeline — dispatch Senior Developer agent to fix Critical security findings from Stage 1
---

# Senior Developer Fix Stage

Dispatch the **Senior Developer** agent to fix Critical security findings from Stage 1.

## When to Use

Invoked as Stage 2 of `/review:pipeline` ONLY when Stage 1 found Critical issues. Skipped when `--skip=fix` is passed or when there are no Critical findings.

## Agent Persona

Read the Senior Developer persona from:
```
agents/engineering/engineering-senior-developer.md
```

## Dispatch Instructions

1. Read the persona file contents
2. Construct a prompt with the Critical findings from Stage 1
3. Dispatch a subagent with the Agent tool:

```
[Persona content from engineering-senior-developer.md]

## Your Task

Fix the following CRITICAL security issues found during review. Apply the fixes directly to the source files.

### Critical Findings to Fix
[Stage 1 Critical findings — exact file:line, description, recommended fix]

### Rules
- Fix ONLY the Critical issues listed above — do not refactor surrounding code
- Follow existing code style (spacing, braces, naming)
- declare(strict_types=1) in every PHP file
- UK English in all comments and strings
- Run tests after fixing to verify nothing breaks:
  [appropriate test command for the package]

### Output Format

## Fixes Applied

### Fix 1: [Title]
**File**: `path/to/file.php:line`
**Issue**: [What was wrong]
**Change**: [What was changed]

### Fix 2: ...

**Tests**: [PASS/FAIL — test output summary]
```

4. After the subagent completes, re-dispatch Stage 1 (security-review) to verify the fixes resolved the Critical issues
5. If Criticals persist after one fix cycle, report them in the final output rather than looping indefinitely
````
**Step 2: Commit**

```bash
git add claude/review/skills/senior-dev-fix.md
git commit -m "feat(review): add senior-dev-fix skill — Stage 2 of pipeline"
```

---

### Task 5: Create Stage 3 skill — Test Analysis

**Files:**
- Create: `claude/review/skills/test-analysis.md`

**Step 1: Create the skill file**
````markdown
---
name: test-analysis
description: Stage 3 of review pipeline — dispatch API Tester agent to run tests and analyse coverage of changed code
---

# Test Analysis Stage

Dispatch the **API Tester** agent to run tests and identify coverage gaps for the changed code.

## When to Use

Invoked as Stage 3 of `/review:pipeline`. Can be run standalone via `/review:pipeline --stage=test`.

## Agent Persona

Read the API Tester persona from:
```
agents/testing/testing-api-tester.md
```

## Dispatch Instructions

1. Read the persona file contents
2. Determine the test command based on the package:
   - PHP packages: `composer test` or `vendor/bin/phpunit [specific test files]`
   - Go packages: `core go test` or `go test ./...`
3. Dispatch a subagent:

```
[Persona content from testing-api-tester.md]

## Your Task

Run the test suite and analyse coverage for the following code changes. Do NOT write new tests — this is analysis only.

### Changed Files
[List of changed files from the diff]

### Instructions

1. **Run existing tests**
   [Test command for this package]
   Report: total tests, passed, failed, assertion count

2. **Analyse coverage of changes**
   For each changed file, find the corresponding test file(s). Read both the source change and the test.
   Report whether the specific change is exercised by existing tests.

3. **Identify coverage gaps**
   List changes that have NO test coverage, with specific descriptions of what's untested.

### Output Format

## Test Analysis

### Test Results
**Command**: `[exact command run]`
**Result**: X tests, Y assertions, Z failures

### Coverage of Changes

| Changed File | Test File | Change Covered? | Gap |
|--------------|-----------|-----------------|-----|
| `path:lines` | `test/path` | YES/NO | [What's untested] |

### Coverage Gaps
1. **file:line** — [What's changed but untested]
2. ...

### Recommendations
[Specific tests that should be written — Pest syntax for PHP, _Good/_Bad/_Ugly for Go]

**Summary**: X/Y changes covered, Z gaps identified
```

4. Return the subagent's analysis as the stage output
````
**Step 2: Commit**

```bash
git add claude/review/skills/test-analysis.md
git commit -m "feat(review): add test-analysis skill — Stage 3 of pipeline"
```

---

### Task 6: Create Stage 4 skill — Architecture Review

**Files:**
- Create: `claude/review/skills/architecture-review.md`

**Step 1: Create the skill file**
````markdown
---
name: architecture-review
description: Stage 4 of review pipeline — dispatch Backend Architect agent to check lifecycle events, Actions pattern, tenant isolation, and namespace mapping
---

# Architecture Review Stage

Dispatch the **Backend Architect** agent to review code changes for architectural correctness.

## When to Use

Invoked as Stage 4 of `/review:pipeline`. Can be run standalone via `/review:pipeline --stage=architecture`.

## Agent Persona

Read the Backend Architect persona from:
```
agents/engineering/engineering-backend-architect.md
```

## Dispatch Instructions

1. Read the persona file contents
2. Dispatch a subagent:

```
[Persona content from engineering-backend-architect.md]

## Your Task

Review the following code changes for architectural correctness. This is a READ-ONLY review.

### Changed Files
[List of changed files]

### Diff
[Full diff content]

### Check These Patterns

1. **Lifecycle Events**: Are modules using `$listens` arrays in Boot.php? Are routes registered via event callbacks (`$event->routes()`), not direct `Route::get()` calls?

2. **Actions Pattern**: Is business logic in Action classes with `use Action` trait? Or is it leaking into controllers/Livewire components?

3. **Tenant Isolation**: Do new/modified models that hold tenant data use `BelongsToWorkspace`? Are migrations adding `workspace_id` with foreign key and cascade delete?

4. **Namespace Mapping**: Do files follow `src/Core/` → `Core\`, `src/Mod/` → `Core\Mod\`, `app/Mod/` → `Mod\`?

5. **Go Services** (if applicable): Are services registered via factory functions? Using `ServiceRuntime[T]`? Implementing `Startable`/`Stoppable`?

6. **Dependency Direction**: Do changes respect the dependency graph? Products depend on core-php and core-tenant, never on each other.

### Output Format

## Architecture Review

### Lifecycle Events
[Findings or "Correct — events used properly"]

### Actions Pattern
[Findings or "Correct — logic in Actions"]

### Tenant Isolation
[Findings or "Correct — BelongsToWorkspace on all tenant models"]

### Namespace Mapping
[Findings or "Correct"]

### Dependency Direction
[Findings or "Correct"]

### Issues
- **VIOLATION**: file:line — [Description]
- **WARNING**: file:line — [Description]
- **SUGGESTION**: file:line — [Description]

**Summary**: X violations, Y warnings, Z suggestions
```

3. Return the subagent's review as the stage output
````
**Step 2: Commit**

```bash
git add claude/review/skills/architecture-review.md
git commit -m "feat(review): add architecture-review skill — Stage 4 of pipeline"
```

---

### Task 7: Create Stage 5 skill — Reality Check

**Files:**
- Create: `claude/review/skills/reality-check.md`

**Step 1: Create the skill file**
```markdown
|
||||
---
|
||||
name: reality-check
|
||||
description: Stage 5 of review pipeline — dispatch Reality Checker agent as final gate with evidence-based verdict
|
||||
---
|
||||
|
||||
# Reality Check Stage (Final Gate)
|
||||
|
||||
Dispatch the **Reality Checker** agent as the final review gate. Defaults to NEEDS WORK.
|
||||
|
||||
## When to Use
|
||||
|
||||
Invoked as Stage 5 (final stage) of `/review:pipeline`. Can be run standalone via `/review:pipeline --stage=reality`.
|
||||
|
||||
## Agent Persona
|
||||
|
||||
Read the Reality Checker persona from:
|
||||
```
|
||||
agents/testing/testing-reality-checker.md
|
||||
```
|
||||
|
||||
## Dispatch Instructions
|
||||
|
||||
1. Read the persona file contents
|
||||
2. Gather ALL prior stage findings into a single context block
|
||||
3. Dispatch a subagent:
|
||||
|
||||
```
[Persona content from testing-reality-checker.md]

## Your Task

You are the FINAL GATE. Review all prior stage findings and produce an evidence-based verdict. Default to NEEDS WORK.

### Prior Stage Findings

#### Stage 1: Security Review
[Stage 1 output]

#### Stage 2: Fixes Applied
[Stage 2 output, or "Skipped"]

#### Stage 3: Test Analysis
[Stage 3 output]

#### Stage 4: Architecture Review
[Stage 4 output]

### Changed Files
[List of changed files]

### Your Assessment

1. **Cross-reference all findings** — do security fixes have tests? Do architecture violations have security implications?
2. **Verify evidence** — are test results real (actual command output) or claimed?
3. **Check for gaps** — what did previous stages miss?
4. **Apply your FAIL triggers** — fantasy assessments, missing evidence, architecture violations

### Output Format

## Final Verdict

**Status**: READY / NEEDS WORK / FAILED
**Quality Rating**: C+ / B- / B / B+

### Evidence Summary

| Check | Status | Evidence |
|-------|--------|----------|
| Tests pass | YES/NO | [Command + output] |
| Lint clean | YES/NO | [Command + output] |
| Security issues resolved | YES/NO | [Remaining count] |
| Architecture correct | YES/NO | [Violation count] |
| Tenant isolation verified | YES/NO | [Specific check] |
| UK English | YES/NO | [Violations found] |
| Test coverage of changes | X/Y | [Gap count] |

### Outstanding Issues

1. **[CRITICAL/IMPORTANT/MINOR]**: file:line — [Issue]
2. ...

### Required Before Merge

1. [Specific action with file path]
2. ...

### What's Done Well

[Positive findings from all stages]

---
**Reviewer**: Reality Checker
**Date**: [Date]
**Re-review required**: YES/NO
```
4. Return the subagent's verdict as the final pipeline output
```
**Step 2: Commit**

```bash
git add claude/review/skills/reality-check.md
git commit -m "feat(review): add reality-check skill — Stage 5 final gate"
```

---
### Task 8: Create skills directory and verify plugin structure

**Files:**
- Verify: `claude/review/skills/` contains all 5 skill files
- Verify: `claude/review/commands/pipeline.md` exists
- Verify: `claude/review/.claude-plugin/plugin.json` is updated

**Step 1: Verify the complete file structure**

```bash
find claude/review/ -type f | sort
```

Expected output:
```
claude/review/.claude-plugin/plugin.json
claude/review/commands/pipeline.md
claude/review/commands/pr.md
claude/review/commands/review.md
claude/review/commands/security.md
claude/review/hooks.json
claude/review/scripts/post-pr-create.sh
claude/review/skills/architecture-review.md
claude/review/skills/reality-check.md
claude/review/skills/security-review.md
claude/review/skills/senior-dev-fix.md
claude/review/skills/test-analysis.md
```
**Step 2: Verify agent persona files exist**

```bash
ls -la agents/engineering/engineering-security-engineer.md \
  agents/engineering/engineering-senior-developer.md \
  agents/testing/testing-api-tester.md \
  agents/engineering/engineering-backend-architect.md \
  agents/testing/testing-reality-checker.md
```

Expected: All 5 files exist.

**Step 3: Final commit**

```bash
git add -A claude/review/
git commit -m "feat(review): complete review pipeline plugin — 5-agent automated code review"
```

---
### Task 9: Smoke test the plugin

**Step 1: Test that the pipeline command is recognised**

From the `core/agent` directory, verify the plugin structure is valid by checking the command is loadable:

```bash
# Check frontmatter is valid
head -5 claude/review/commands/pipeline.md
```

Expected: Valid YAML frontmatter with `name: pipeline`.

**Step 2: Test that skill files have valid frontmatter**

```bash
for f in claude/review/skills/*.md; do echo "=== $f ==="; head -4 "$f"; echo; done
```

Expected: Each skill has `name:` and `description:` in frontmatter.
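The same check can be scripted rather than eyeballed. A minimal sketch in Python; the regex-based frontmatter parse is an assumption and only handles simple `key: value` lines:

```python
import re

def frontmatter_keys(text: str) -> set[str]:
    """Return the top-level keys of a leading YAML frontmatter block, if any."""
    match = re.match(r"\A---\n(.*?)\n---\n", text, re.DOTALL)
    if not match:
        return set()
    keys = set()
    for line in match.group(1).splitlines():
        # Only top-level "key: value" lines count, not indented continuations
        m = re.match(r"^([A-Za-z_][\w-]*):", line)
        if m:
            keys.add(m.group(1))
    return keys

# Example skill file content with the two required keys
skill = "---\nname: reality-check\ndescription: Stage 5 final gate\n---\n# Body\n"
missing = {"name", "description"} - frontmatter_keys(skill)
assert not missing, f"frontmatter missing: {missing}"
```

Run it over each file in `claude/review/skills/` to turn the "Expected:" line above into a hard pass/fail.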
**Step 3: Test the pipeline manually**

Open a Claude Code session in a repo with recent changes and run:

```
/review:pipeline HEAD~1..HEAD
```

Verify:
- Stage 1 dispatches and returns security findings
- Stage 2 is skipped (if no Criticals) or runs fixes
- Stage 3 runs tests and reports coverage
- Stage 4 checks architecture patterns
- Stage 5 produces a verdict

**Step 4: Test single-stage mode**

```
/review:pipeline --stage=security HEAD~1..HEAD
```

Verify: Only Stage 1 runs.
@ -1,213 +0,0 @@
# OpenBrain Design

> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

**Goal:** Shared vector-indexed knowledge store that all agents (Virgil, Charon, Darbs, LEM) read/write through MCP, building singular state across sessions.

**Architecture:** MariaDB for relational metadata + Qdrant for vector embeddings. Four MCP tools in php-agentic. Go bridge in go-ai for CLI agents. Ollama for embedding generation.

**Repos:** `forge.lthn.ai/core/php-agentic` (primary), `forge.lthn.ai/core/go-ai` (bridge)

---
## Problem

Agent knowledge is scattered:
- Virgil's `MEMORY.md` files in `~/.claude/projects/*/memory/` — file-based, single-agent, no semantic search
- Plans in `docs/plans/` across repos — forgotten after completion
- Session handoff notes in `agent_sessions.handoff_notes` — JSON blobs, not searchable
- Research findings lost when context windows compress

When Charon discovers a scoring calibration bug, Virgil only knows about it if explicitly told. There's no shared knowledge graph.
## Concept

**OpenBrain** — "Open" means open protocol (MCP), not open source. All agents on the platform access the same knowledge graph via `brain_*` MCP tools. Data is stored *for agents* — structured for near-native context transfer between sessions and models.
## Data Model

### `brain_memories` table (MariaDB)

| Column | Type | Purpose |
|--------|------|---------|
| `id` | UUID | Primary key, also Qdrant point ID |
| `workspace_id` | FK | Multi-tenant isolation |
| `agent_id` | string | Who wrote it (virgil, charon, darbs, lem) |
| `type` | enum | `decision`, `observation`, `convention`, `research`, `plan`, `bug`, `architecture` |
| `content` | text | The knowledge (markdown) |
| `tags` | JSON | Topic tags for filtering |
| `project` | string nullable | Repo/project scope (null = cross-project) |
| `confidence` | float | 0.0–1.0, how certain the agent is |
| `supersedes_id` | UUID nullable | FK to older memory this replaces |
| `expires_at` | timestamp nullable | TTL for session-scoped context |
| `deleted_at` | timestamp nullable | Soft delete |
| `created_at` | timestamp | |
| `updated_at` | timestamp | |

### `openbrain` Qdrant collection

- **Vector dimension:** 768 (nomic-embed-text via Ollama)
- **Distance metric:** Cosine
- **Point ID:** MariaDB UUID
- **Payload:** `workspace_id`, `agent_id`, `type`, `tags`, `project`, `confidence`, `created_at` (for filtered search)
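The collection spec above maps directly onto Qdrant's REST API. A minimal Python sketch, assuming a locally reachable Qdrant; the URL and the use of `urllib` rather than the official client are illustrative:

```python
import json
from urllib import request

QDRANT_URL = "http://localhost:6333"  # assumed address; adjust for the de1 container

def create_collection_body() -> dict:
    # Mirrors the spec above: 768-dim vectors (nomic-embed-text), cosine distance
    return {"vectors": {"size": 768, "distance": "Cosine"}}

def create_openbrain_collection() -> None:
    """PUT /collections/openbrain creates the collection if it does not exist."""
    req = request.Request(
        f"{QDRANT_URL}/collections/openbrain",
        data=json.dumps(create_collection_body()).encode(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    request.urlopen(req)  # raises on a non-2xx response
```

The payload fields listed above travel with each point at upsert time, not at collection creation.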
## MCP Tools

### `brain_remember` — Store a memory

```json
{
  "content": "LEM emotional_register was blind to negative emotions. Fixed by adding 8 weighted pattern groups.",
  "type": "bug",
  "tags": ["scoring", "emotional-register", "lem"],
  "project": "eaas",
  "confidence": 0.95,
  "supersedes": "uuid-of-outdated-memory"
}
```

Agent ID injected from MCP session context. Returns the new memory UUID.

**Pipeline:**
1. Validate input
2. Embed content via Ollama (`POST /api/embeddings`, model: `nomic-embed-text`)
3. Insert into MariaDB
4. Upsert into Qdrant with payload metadata
5. If `supersedes` set, soft-delete the old memory and remove from Qdrant
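The five steps can be sketched in Python with the storage calls injected as plain callables, so the flow stays visible. The Ollama endpoint and the payload keys mirror the spec above; the function names and the stand-in HTTP client are illustrative, not the actual `BrainService` implementation:

```python
import json
import uuid
from urllib import request

OLLAMA_URL = "http://localhost:11434"  # assumed address; adjust per deployment

def embed(content: str) -> list[float]:
    """Step 2: embed via Ollama's /api/embeddings endpoint."""
    body = json.dumps({"model": "nomic-embed-text", "prompt": content}).encode()
    req = request.Request(f"{OLLAMA_URL}/api/embeddings", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:
        return json.load(resp)["embedding"]

def qdrant_point(memory_id: str, vector: list[float], meta: dict) -> dict:
    """Step 4: the upsert body for one point; the point ID is the MariaDB UUID."""
    payload_keys = ("workspace_id", "agent_id", "type", "tags",
                    "project", "confidence", "created_at")
    return {
        "id": memory_id,
        "vector": vector,
        "payload": {k: meta.get(k) for k in payload_keys},
    }

def remember(meta: dict, embed_fn, insert_row, upsert_point, soft_delete) -> str:
    """Steps 1-5, with the embedder and storage calls passed in."""
    assert meta.get("content") and meta.get("type")       # 1. validate
    vector = embed_fn(meta["content"])                    # 2. embed
    memory_id = str(uuid.uuid4())
    insert_row(memory_id, meta)                           # 3. MariaDB insert
    upsert_point(qdrant_point(memory_id, vector, meta))   # 4. Qdrant upsert
    if meta.get("supersedes"):                            # 5. retire the old memory
        soft_delete(meta["supersedes"])
    return memory_id
```

Ordering matters: the row is inserted before the vector is upserted, so a failed Qdrant write leaves a recoverable MariaDB record rather than an orphaned vector.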
### `brain_recall` — Semantic search

```json
{
  "query": "How does verdict classification work?",
  "top_k": 5,
  "filter": {
    "project": "eaas",
    "type": ["decision", "architecture"],
    "min_confidence": 0.5
  }
}
```

**Pipeline:**
1. Embed query via Ollama
2. Search Qdrant with vector + payload filters
3. Get top-K point IDs with similarity scores
4. Hydrate from MariaDB (content, tags, supersedes chain)
5. Return ranked results with scores

Returns only the latest version of superseded memories (a `supersedes_count` field tells the agent that history exists).
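As a sketch, step 2's search body could be assembled like this, using Qdrant's `must`/`match`/`range` filter syntax; the function name and the filter-dict shape are assumptions:

```python
def recall_request(query_vector: list[float], top_k: int, flt: dict) -> dict:
    """Build the Qdrant search body: vector similarity plus payload filters."""
    must = []
    if flt.get("project"):
        # Exact match on a single payload value
        must.append({"key": "project", "match": {"value": flt["project"]}})
    if flt.get("type"):
        # "any" matches if the payload value is any of the listed types
        must.append({"key": "type", "match": {"any": flt["type"]}})
    if flt.get("min_confidence") is not None:
        must.append({"key": "confidence", "range": {"gte": flt["min_confidence"]}})
    return {
        "vector": query_vector,
        "limit": top_k,
        "with_payload": True,
        "filter": {"must": must},
    }
```

The body goes to `POST /collections/openbrain/points/search`; the returned point IDs then drive the MariaDB hydration in step 4.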
### `brain_forget` — Soft-delete or supersede

```json
{
  "id": "uuid",
  "reason": "Superseded by new calibration approach"
}
```

Sets `deleted_at` in MariaDB, removes the point from Qdrant. Keeps an audit trail.

### `brain_list` — Browse (no vectors)

```json
{
  "project": "eaas",
  "type": "decision",
  "agent_id": "charon",
  "limit": 20
}
```

Pure MariaDB query. For browsing, auditing, bulk export. No embedding needed.
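A hypothetical sketch of the underlying query: a single parameterised read of `brain_memories`, with no embedding call anywhere. The column list follows the table spec above; the helper name is illustrative:

```python
def list_query(filters: dict, limit: int = 20) -> tuple[str, list]:
    """Build a parameterised brain_list query (soft-deleted rows excluded)."""
    sql = ("SELECT id, agent_id, type, content, tags, project, confidence, created_at "
           "FROM brain_memories WHERE deleted_at IS NULL")
    params: list = []
    # Each optional filter becomes an equality clause on an indexed column
    for col in ("project", "type", "agent_id"):
        if filters.get(col):
            sql += f" AND {col} = %s"
            params.append(filters[col])
    sql += " ORDER BY created_at DESC LIMIT %s"
    params.append(limit)
    return sql, params
```

In the real implementation this would be Eloquent scopes on `BrainMemory` rather than hand-built SQL, but the shape of the query is the same.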
## Architecture

### PHP side (`php-agentic`)

```
Mcp/Tools/Agent/Brain/
├── BrainRemember.php
├── BrainRecall.php
├── BrainForget.php
└── BrainList.php

Services/
└── BrainService.php                       # Ollama embeddings + Qdrant client + MariaDB CRUD

Models/
└── BrainMemory.php                        # Eloquent model

Migrations/
└── XXXX_create_brain_memories_table.php
```

`BrainService` handles:
- Ollama HTTP calls for embeddings
- Qdrant REST API (upsert, search, delete points)
- MariaDB CRUD via Eloquent
- Supersession chain management
### Go side (`go-ai`)

Thin bridge tools in the MCP server that proxy `brain_*` calls to Laravel via the existing WebSocket bridge. Same pattern as `ide_chat_send` / `ide_session_create`.

### Data flow

```
Agent (any Claude)
        ↓ MCP tool call
Go MCP server (local, macOS/Linux)
        ↓ WebSocket bridge
Laravel php-agentic (lthn.ai, de1)
      ↓            ↓
  MariaDB       Qdrant
(relational)   (vectors)
                   ↑
        Ollama (embeddings)
```

PHP-native agents skip the Go bridge — they call `BrainService` directly.
### Infrastructure

- **Qdrant:** New container on de1. Shared between OpenBrain and EaaS scoring (different collections).
- **Ollama:** Existing instance. `nomic-embed-text` model for 768d embeddings. CPU is fine for the volume (~10K memories).
- **MariaDB:** Existing instance on de1. New table in the agentic database.
## Integration

### Plans → Brain

On plan completion, agents can extract key decisions/findings and `brain_remember` them. Optional — agents decide what's worth persisting. The plan itself stays in `agent_plans`; lessons learned go to the brain.

### Sessions → Brain

Handoff notes (summary, next_steps, blockers) can auto-persist as memories with `type: observation` and an optional TTL. Agents can also manually remember during a session.
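A sketch of that mapping, assuming `handoff_notes` carries `summary`, `next_steps`, and `blockers` keys (the field names come from the bracketed list above; the TTL default and confidence value are illustrative):

```python
from datetime import datetime, timedelta, timezone

def handoff_to_memory(notes: dict, agent_id: str, ttl_hours=72) -> dict:
    """Fold a session's handoff notes into one observation-type memory."""
    parts = [notes.get("summary", "")]
    if notes.get("next_steps"):
        parts.append("Next steps: " + "; ".join(notes["next_steps"]))
    if notes.get("blockers"):
        parts.append("Blockers: " + "; ".join(notes["blockers"]))
    memory = {
        "agent_id": agent_id,
        "type": "observation",
        "content": "\n".join(p for p in parts if p),
        "tags": ["handoff"],
        "confidence": 0.8,
    }
    if ttl_hours is not None:
        # Session-scoped context expires via the expires_at column
        expires = datetime.now(timezone.utc) + timedelta(hours=ttl_hours)
        memory["expires_at"] = expires.isoformat()
    return memory
```

Passing `ttl_hours=None` persists the handoff indefinitely, for sessions whose outcome is worth keeping.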
### MEMORY.md migration

Seed data: collect all `MEMORY.md` files from `~/.claude/projects/*/memory/` across worktrees. Parse into individual memories, embed, and load into OpenBrain. After migration, `brain_recall` replaces file-based memory.
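A minimal parser sketch for that seeding step, assuming each `MEMORY.md` groups notes under `## ` headings; the real files may need a different split, and the default type/confidence values are assumptions:

```python
def parse_memory_md(text: str, project: str) -> list[dict]:
    """Split one MEMORY.md into individual memories, one per '## ' section."""
    memories: list[dict] = []
    heading = None
    body: list[str] = []

    def flush() -> None:
        content = "\n".join(body).strip()
        if heading and content:
            memories.append({
                "content": f"{heading}: {content}",
                "type": "observation",  # default; refine per section if possible
                "tags": [],
                "project": project,
                "confidence": 0.7,      # seeded notes are unverified; hedge them
            })

    for line in text.splitlines():
        if line.startswith("## "):
            flush()
            heading, body = line[3:].strip(), []
        elif heading is not None:
            body.append(line)
    flush()
    return memories
```

Each returned dict is then fed through the `brain_remember` pipeline (embed, insert, upsert) as a batch.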
### EaaS

Same Qdrant instance, different collection (`eaas_scoring` vs `openbrain`). Shared infrastructure, separate concerns.

### LEM

LEM models query the brain for project context during training data curation or benchmark analysis. Same MCP tools, different agent ID.

## What this replaces

- Virgil's `MEMORY.md` files (file-based, single-agent, no search)
- Scattered `docs/plans/` findings that get forgotten
- Manual "Charon found X" cross-agent handoffs
- Session-scoped knowledge that dies with context compression

## What this enables

- Any Claude picks up where another left off — semantically
- Decisions surface when related code is touched
- Knowledge graph grows with every session across all agents
- Near-native context transfer between models and sessions