Foundation slice for Mantis #843 php/Mod/Admin + php/Website/Hub RFC:
* php/Mod/Admin/Boot.php — search registry, menu registry, form component
layer, HasRateLimiting concern, reusable form/view primitives under
Mod/Admin/Forms
* php/Website/Hub/Boot.php — host-aware Hub route naming for secondary
domains
* WorkspaceSwitcher and GlobalSearch global Hub Livewire components
* Foundation routed slice in Hub/Routes/admin.php: dashboard shell,
workspace listing, site settings (with WordPress/webhook connector),
account usage, platform user list+detail
* Foundation tests under php/tests/Feature/Mod/Admin/
53 PHP files. php -l clean. Pest unrunnable in sandbox (no vendor/).
Foundation slice only — composer.json kept off-limits so namespace stays
under Core\Mod\Agentic\... rather than standalone Core\Admin package.
Deferred: Profile, Settings, ServiceManager, ServicesAdmin, Honeypot,
Entitlement\{Dashboard,FeatureManager,PackageManager}, PromptManager,
WaitlistManager, Console, Databases, Deployments, Content,
ContentManager, ContentEditor, ActivityLog, Analytics, AIServices,
BoostPurchase. The lane was under-instructed by the supervisor with
stop-at framing; follow-up tickets are needed for the remainder.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=843
Foundation slice for Mantis #841 php/Mod/Agent RFC implementation:
* CompleteTask now wraps in DB::transaction with idempotent credit awards
and safe current_task_id clearing
* Credits/{Award,GetBalance,GetCreditHistory} updated for agent_id +
fleet_task_id ledger support and richer balance totals
* GenerateCommand canonical agentic:generate wiring; legacy duplicate
no longer registered
* Boot wires brain:clean / brain:prune / brain:reindex
* EmbedMemory exits early when memory already indexed
* 3 follow-on fleet migrations reconcile fleet_nodes pointer column,
fleet_tasks/credit_entries fk/index hygiene, fleet+credit constraints
* 4 foundation tests under php/tests/Feature/Mod/Agent/
php -l clean on all modified files. Pest unrunnable in sandbox (no vendor/).
Foundation slice only: remaining model/action parity, full MCP tool/
service sweep, fleet controller auth-context, and 41-tool/45-action
surface left for follow-up tickets.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=841
Bounded subset of RFC-OPENBRAIN lifted from lab/lthn.ai shim into the
OSS BrainService at php/Services/BrainService.php:
- search(query, filter, pagination): Elasticsearch path first, falls
back to MariaDB if ES is unavailable. Operates on active/latest
memories only.
- discoverTags(filter): tag-cloud / popular-tags discovery scoped to
authenticated org(s).
- listScopes(filter): org/project distribution counts for the
authenticated session.
All three:
- Enforce bounded inputs (per #1001 patterns)
- Honour org auth (per #312 patterns)
- Only operate on active/latest memories (active=1, deleted_at IS NULL)
Self-hosters now get the same discovery surface that lab/lthn.ai
exposes — no need to fork the OSS service to access these features.
Pest covers: bounds-violation rejection, fallback behaviour, scoped
discovery returning correct org/project breakdowns.
Lab-only features still out of scope for this lane (would pull in
extra schema/models/events): agentContext, recall feedback,
maintenance lifecycle (reindex/consolidate/clean/prune). Those need
follow-up tickets if/when bounded-lift becomes possible.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=180
WebhookRegistering event exposes:
- register(string $type, array $spec): add a webhook type to the
registry
- types(): array — queryable post-dispatch registry
CoreServiceProvider dispatches the event at app boot and exposes the
collected registry via webhookTypes() — matches the existing
ApiRoutesRegistering / ConsoleBooting / ClientRoutesRegistering
event-driven module pattern.
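A framework-free sketch of the event's surface as described above; the $spec array shape is an illustrative assumption, only register() and types() come from this log, and the dispatch wiring shown in the trailing comment is shape only (the real provider goes through the Core lifecycle):

```php
// Minimal sketch of the WebhookRegistering event surface.
final class WebhookRegistering
{
    /** @var array<string, array> type => spec */
    private array $types = [];

    /** Add a webhook type to the registry. */
    public function register(string $type, array $spec): void
    {
        $this->types[$type] = $spec;
    }

    /** Queryable post-dispatch registry. */
    public function types(): array
    {
        return $this->types;
    }
}

// Boot-time shape (illustrative): the provider dispatches the event
// to all listening modules, then exposes $event->types() via
// webhookTypes().
```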
Pairs with #1034 ofm.bot WebhookRegistrar (just landed) — that
service can now also be wired through this event, allowing OTHER
modules and external apps using Core to register webhook types via
the standard Core lifecycle.
Note: real Core lifecycle dispatcher lives in a sibling read-only
framework checkout. CoreServiceProvider here is a local shim that
mirrors the dispatch behaviour. Upstream patch needed when that
sibling lands.
Pest covers: instantiation + register, boot-time dispatch, post-boot
registry lookup.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1013
#1000 turned out to be stale (already fixed): BrainService::recall()
validates filter input via the shared validator at line 489, which
already bounds org, project, type, and agent_id; forget() bounds id
at line 499.
These tests pin the safety claim explicitly:
- project=129 chars rejected
- agent_id=65 chars rejected
- project="core" accepted (sanity)
- project=128 chars accepted (boundary)
Note: BrainList.php (the separate MCP list path) still lacks explicit
max lengths for project + agent_id; the file is outside this lane's
allowlist. File a follow-up if that surface needs the same bounds.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1000
McpContext exposes the authenticated session's authorisation scopes
via getScopes(): array and hasScope(string): bool.
Resolution order:
1. Explicit scope source passed to constructor
2. Session-like object linked to an API key
3. Authenticated Laravel request context (mcp_workspace_context,
agent_api_key, api_key)
4. Empty array (default) — never null
Dedupes scope strings, normalises separators in hasScope() matching.
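The resolution order and matching rules can be sketched framework-free; the constructor arguments and the separator set are illustrative assumptions, only getScopes()/hasScope() and the fallback order come from this log:

```php
// Hedged sketch of the resolution order described above.
final class McpContextSketch
{
    public function __construct(
        private ?array $explicitScopes = null,
        private ?object $session = null,       // session-like object with ->scopes
        private ?array $requestContext = null, // e.g. decoded mcp_workspace_context
    ) {}

    /** @return string[] deduped scope strings; never null */
    public function getScopes(): array
    {
        $scopes = $this->explicitScopes
            ?? ($this->session->scopes ?? null)
            ?? ($this->requestContext['scopes'] ?? null)
            ?? [];

        return array_values(array_unique(array_map('strval', $scopes)));
    }

    public function hasScope(string $scope): bool
    {
        // Normalise separators so e.g. "brain.read" and "brain:read"
        // compare equal (the separator set here is an assumption).
        $norm = fn (string $s): string => str_replace([':', '/'], '.', $s);

        return in_array($norm($scope), array_map($norm, $this->getScopes()), true);
    }
}
```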
Closes the OFM MCP tool gap where scope-gated tools previously
received empty or incorrect scope handling. No call-site stubs needed
updating in this worktree; call sites pick up the new method directly.
Pest covers: session scopes returned, hasScope present/missing, empty
session defaults to [], request-context regression against real MCP
auth shape.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1014
Bound input field sizes against memory/DB/Qdrant bloat (DoS-by-self):
- content: max 65536 characters via mb_strlen
- tags: max 100 entries; each tag max 128 chars
- agent_id, type: 64 chars each
- project, org: 128 chars each
- supersedes_id: ULID-shape only
validateRememberInput() throws InvalidArgumentException at every entry
point (remember, recall, forget) before any DB or upstream call, with
field-specific error messages so callers know which field violated.
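A minimal sketch of the bounds above, assuming the stated limits; the ULID regex and the exact error messages are illustrative:

```php
/** Throws InvalidArgumentException naming the violating field. */
function validateRememberInput(array $in): void
{
    // Per-field character limits as listed above.
    $limits = ['agent_id' => 64, 'type' => 64, 'project' => 128, 'org' => 128];
    foreach ($limits as $field => $max) {
        if (isset($in[$field]) && mb_strlen((string) $in[$field]) > $max) {
            throw new InvalidArgumentException("$field exceeds $max characters");
        }
    }
    if (isset($in['content']) && mb_strlen((string) $in['content']) > 65536) {
        throw new InvalidArgumentException('content exceeds 65536 characters');
    }
    if (isset($in['tags'])) {
        if (count($in['tags']) > 100) {
            throw new InvalidArgumentException('tags exceeds 100 entries');
        }
        foreach ($in['tags'] as $tag) {
            if (mb_strlen((string) $tag) > 128) {
                throw new InvalidArgumentException('tag exceeds 128 characters');
            }
        }
    }
    // supersedes_id: ULID shape only (26-char Crockford base32; the
    // exact regex is an assumption).
    if (isset($in['supersedes_id'])
        && !preg_match('/^[0-7][0-9A-HJKMNP-TV-Z]{25}$/', $in['supersedes_id'])) {
        throw new InvalidArgumentException('supersedes_id is not a ULID');
    }
}
```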
Pest covers good-path, content-too-long, tags-array-too-large,
tag-length, and exact-boundary cases.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=1001
remember() now resolves a stale supersedes_id to the current live head
before writing — when X has been superseded by Y, a retried call with
supersedes_id=X automatically links the new memory to Y instead of
silently dropping the supersede.
- Walk the chain from supplied supersedes_id to find the active head
- Cap the walk at depth 100 (cycle/runaway protection)
- Throw RuntimeException("Detected cycle while resolving supersede chain")
on detected cycle, BEFORE any DB write
- Throw InvalidArgumentException("Superseded memory not found") when
the original supersedes_id never existed
- deleteSupersededMemory no longer silently no-ops once the resolved
head is expected to exist
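The chain walk can be sketched over an in-memory map standing in for the brain_memories lookups; the exception messages match the ones quoted above, the map shape is an assumption:

```php
/**
 * $chain maps memory id => id of the memory that superseded it, or
 * null when that memory is still the active head (illustrative
 * stand-in for the DB lookups).
 */
function resolveSupersedeHead(string $id, array $chain): string
{
    if (!array_key_exists($id, $chain)) {
        throw new InvalidArgumentException('Superseded memory not found');
    }

    $seen = [];
    // Depth cap of 100 for cycle/runaway protection, as above.
    for ($depth = 0; $depth <= 100; $depth++) {
        if (isset($seen[$id])) {
            throw new RuntimeException('Detected cycle while resolving supersede chain');
        }
        $seen[$id] = true;

        $next = $chain[$id];
        if ($next === null) {
            return $id; // active head: link the new memory here
        }
        if (!array_key_exists($next, $chain)) {
            throw new InvalidArgumentException('Superseded memory not found');
        }
        $id = $next;
    }

    throw new RuntimeException('Detected cycle while resolving supersede chain');
}
```

Both exceptions fire before any write, matching the no-writes-leak guarantee above.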
Pest coverage extended:
- Direct chain link (X exists, succeeds with X→linked)
- Retry path (X→Y, then retry on X produces Z→Y, walks chain)
- Never-existed target (graceful error)
- Synthetic X↔Y cycle (caps walk + throws, no writes leak)
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=316
remember(), recall(), forget(), and elasticSearch() now resolve the
allowed-orgs set from the authenticated request context (mcp_workspace_context),
preferring explicit authorised_orgs/authorized_orgs, falling back to the
authenticated workspace's org/slug. A mismatched org throws
AuthorizationException BEFORE any Qdrant/Elasticsearch call or destructive
DB action — closes the horizontal-priv-escalation vector where an MCP
client could recall/remember/forget memories scoped to ANY org by
setting org="other-org" in the request body.
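A framework-free sketch of the allowed-orgs resolution and the pre-flight check; RuntimeException stands in for Laravel's AuthorizationException to keep the sketch self-contained, and the context array keys mirror the names above:

```php
/** Preference order: authorised_orgs / authorized_orgs, then org/slug. */
function resolveAllowedOrgs(array $ctx): array
{
    return $ctx['authorised_orgs']
        ?? $ctx['authorized_orgs']
        ?? array_values(array_filter([$ctx['org'] ?? $ctx['slug'] ?? null]));
}

/** Throws BEFORE any Qdrant/Elasticsearch call or destructive DB action. */
function assertOrgAllowed(string $org, array $ctx): void
{
    if (!in_array($org, resolveAllowedOrgs($ctx), true)) {
        // Real code throws AuthorizationException; RuntimeException is a
        // framework-free stand-in.
        throw new RuntimeException("Org \"$org\" is not authorised for this session");
    }
}
```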
Pest coverage in OrgScopingTest covers good path, unauthorised recall
(asserts no HTTP), cross-org forget (asserts no DB delete), unauthorised
remember (asserts no embed/index jobs).
Note: BrainList free-form org filter is a separate ticket — outside this
lane's allowlist.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=312
retryableHttp() now retries only 408 (Request Timeout), 429 (Too Many
Requests), and 503 (Service Unavailable). 500 and other 5xx responses
fail immediately so the circuit breaker registers them as a single
failure rather than smearing it across retry attempts. Retry-After is
honoured (numeric + HTTP-date) and capped reasonably.
Attempt budget bumped to 6 so a burst of 5 transient 503s can recover
within ONE circuit-permitted call — the original concern from #311.
Note: CircuitBreaker is already applied OUTSIDE the logical Brain
operation by the MCP tool layer, not around each HTTP retry. The
nesting report was stale at this code shape; the real drift was the
retryableHttp() retry set + budget.
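The retry decision and Retry-After handling reduce to two small helpers; the 30-second cap is an illustrative assumption, since the log only says "capped reasonably":

```php
/** Retry only transient statuses; everything else surfaces immediately. */
function shouldRetry(int $status): bool
{
    return in_array($status, [408, 429, 503], true);
}

/**
 * Parse a Retry-After header (numeric seconds or HTTP-date) and cap
 * the wait. The $cap default is an assumption for the sketch.
 */
function retryAfterSeconds(string $header, int $cap = 30): int
{
    $seconds = ctype_digit($header)
        ? (int) $header                       // numeric form: seconds
        : max(0, strtotime($header) - time()); // HTTP-date form

    return min($seconds, $cap);
}
```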
Pest coverage in CircuitBreakerTest:
- Recovered 503 burst → circuit stays closed, no failure registered
- Exhausted 503 burst → ONE breaker failure (not five)
- 429 + Retry-After 1 → sleeps 1s, no breaker failure
- 500 → immediate breaker failure, no retry
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=311
A Cache::lock keyed by memory id now wraps the delete path in
BrainService::forget(); supersede cleanup in remember() is lifted to
the same idiom.
forget() now ALWAYS queues DeleteFromIndex on a successful delete
(previously skipped when indexed_at was null, which gave late writes
from stale preloaded models a window to land index entries after the
underlying memory was gone).
Index write paths (qdrantUpsert / elasticIndex) re-check that the
memory row still exists before writing — defence-in-depth against any
future caller that holds a stale model reference past a forget.
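The defence-in-depth re-check can be sketched with callables standing in for the fresh row lookup and the actual index write; names here are illustrative:

```php
/**
 * Re-check that the memory row still exists immediately before an
 * index write. $existsCheck stands in for a fresh brain_memories
 * lookup; $write stands in for the qdrantUpsert()/elasticIndex() body.
 * Returns true when the write ran, false when it was skipped.
 */
function guardedIndexWrite(string $memoryId, callable $existsCheck, callable $write): bool
{
    if (!$existsCheck($memoryId)) {
        return false; // row deleted since the model was loaded: skip
    }

    $write($memoryId);

    return true;
}
```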
Pest coverage extended in SupersedeForgetIndexCleanupTest:
- never-indexed forget queues cleanup
- late stale-model index writes are skipped after forget
- never-indexed supersede cleanup queues deletion
- late stale-model index writes are skipped after supersede
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=999
Additive-only — appended to php/Routes/api.php (existing routes
preserved). Existing /v1/fleet/{nodes,heartbeat,stats} +
/v1/agent/auth/provision left untouched.
New routes:
- /v1/agent/auth/register
- /v1/fleet/dispatch + /v1/fleet/stream
- /v1/credits/{balance,deduct,refund,ledger}
- /v1/subscription/{status,upgrade,cancel}
- /v1/agent/sync/{push,pull}
New controllers under php/Controllers/Api/{Fleet,Credits,Subscription,
Sync,AgentAuth}/. Reference FleetService/CreditService/SessionService
when available with fallbacks to current action/model layer (pre #849).
Pest Feature coverage under php/tests/Feature/Api/. Pest run skipped
(vendor binaries missing in sandbox).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=848
Additive-only — no existing files modified.
- FleetOverview: node list + status badges + dispatch button + stats panel
- BrainExplorer: semantic-recall search with DB fallback + forget action
- CreditLedger: balance display + transaction list + deduct/refund actions
Flux Pro components (no vanilla Alpine). Uses existing
fleet/brain/credit actions+services in this package.
Pest Feature tests _Good/_Bad/_Ugly per AX-10 — load classes directly
since composer.json + Boot.php were left untouched per scope. Future
follow-up: wire PSR-4 + view registration in Boot.php.
Pest run skipped (vendor binaries missing in sandbox).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=850
Closes the 5 PARTIAL items flagged in docs/AUDIT-openbrain-20260424.md.
- Gap A (org scoping persisted on writes): new migration adds `org`
nullable+indexed column to brain_memories; BrainMemory fillable;
RememberKnowledge action forwards org; BrainService::remember
persists it.
- Gap B (supersede/forget Elastic cleanup): BrainService::forget
dispatches DeleteFromIndex (handles both Qdrant + Elastic); supersede
path dispatches cleanup for the old memory id before replacing it.
DeleteFromIndex itself untouched — already handled both indexes.
- Gap C (brain:reindex flags): --org, --project, --stale (null OR
>14d old), --dry-run (count+stop), --elastic-only added to the
artisan command.
- Gap D (MCP schemas expose org): brain_remember, brain_recall,
brain_list now accept `org` in input schema + forward into
action/service.
- Gap E (resilience uneven): brain_list now wrapped in
withCircuitBreaker('brain', ...) matching the pattern used by
BrainRemember/Recall/Forget. BrainService gains retryableHttp()
helper — 100/300/900ms exponential backoff, retries only on 5xx +
connection errors, not on 4xx. Qdrant calls route through it;
Ollama left alone (EmbedMemory job has its own retry).
Tests (Good/Bad/Ugly per gap):
- Feature/Brain/OrgScopingTest.php
- Feature/Brain/SupersedeForgetIndexCleanupTest.php
- Feature/Brain/ReindexFlagsTest.php
- Feature/Mcp/BrainSchemaOrgTest.php
- Feature/Brain/CircuitBreakerTest.php
php -l clean on all 13 files. Pest binary not in this checkout —
CI path validates the full suite.
Closes tasks.lthn.sh/view.php?id=107
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
BrainService::http() was building a PendingRequest with no auth
header, so when Qdrant has auth enabled (the production lthn.sh
deploy does) every upsert/lookup returned 401. The circuit breaker
logged the 401 via Cache::store('file'), which was the red-herring
cache-write error chased in the first #97 iteration.
Changes:
- BrainService loads + trims a Qdrant api key from
config('brain.qdrant.api_key') in the constructor.
- New qdrantHttp() helper returns a PendingRequest with the
api-key header when the key is non-empty, or the plain client
otherwise. Ollama + Elasticsearch call sites still use http()
(separate auth shapes).
- php/config.php adds a brain.qdrant.api_key entry reading
env('BRAIN_QDRANT_API_KEY').
- Good/Bad/Ugly Pest tests cover: configured key → header sent,
unset → header absent, empty-string → header absent.
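The header-selection rule reduces to a few lines; the `api-key` header name follows Qdrant's convention, and the function name is illustrative:

```php
/**
 * Mirror of the qdrantHttp() decision: send the api-key header only
 * when a non-empty, trimmed key is configured; otherwise no auth
 * header (the plain client).
 */
function qdrantHeaders(?string $configuredKey): array
{
    $key = trim((string) $configuredKey);

    return $key === '' ? [] : ['api-key' => $key];
}
```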
Closes tasks.lthn.sh/view.php?id=97
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Exercises the 3 MCP handlers that work MariaDB-only (no Qdrant
dependency): brain_remember writes + returns id, brain_list
surfaces it, brain_forget removes. Negative case: brain_forget on
a non-existent id returns a proper error response (not TypeError).
brain_recall is out of scope — needs the Qdrant collection +
embedding pipeline.
Implementation note: handlers use `type` + workspace context for
scoping, not a `scope` parameter; the test matches the actual
signatures.
Closes tasks.lthn.sh/view.php?id=96
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Fleet tasks deliberately do not create AgentSession records.
AgentSession's work_log / artefacts / handoff / replay semantics are
designed for interactive, replayable, handoff-capable work — fleet
tasks are atomic assign→complete events with no in-between state
to replay. If a fleet-task handler needs session semantics, it
should start its own AgentSession via AgentSessionService when the
work begins.
Closes tasks.lthn.sh/view.php?id=94
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
ScanForWork and ManagePullRequest now depend on the MetaReader
interface (added in #89) instead of reading raw Forgejo body /
description / PR text. Epic child-linkage comes from
EpicMeta.children, PR merge decisions come from PRMeta.state /
mergeability / checkStatuses. The returned shape drops issue_body
and replaces it with structural issue_state / issue_labels.
Adds a feature test that injects a mocked MetaReader carrying
intentionally-tainted body/description/review_text fields and
recursively asserts none of those keys appear in the output of
either action — the regression fence for the RFC rule that body
content must never reach pipeline decisions.
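The recursive fence described above can be sketched framework-free, with an exception standing in for a test-framework assertion:

```php
/**
 * Recursively assert that none of the tainted keys (body, description,
 * review_text, ...) appear anywhere in a nested output array.
 */
function assertNoTaintedKeys(array $output, array $tainted): void
{
    foreach ($output as $key => $value) {
        if (in_array($key, $tainted, true)) {
            throw new RuntimeException("Tainted key \"$key\" leaked into output");
        }
        if (is_array($value)) {
            assertNoTaintedKeys($value, $tainted);
        }
    }
}
```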
Closes tasks.lthn.sh/view.php?id=90
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
AgentSession::addArtifact expects ?array $metadata in the third
argument slot; the MCP tool was passing the optional description
string directly, producing a TypeError whenever a caller supplied a
non-null description. Wrap the description into a metadata array so
the call matches the model signature, and add a feature test that
exercises the MCP handler end-to-end to prevent regression.
Closes tasks.lthn.sh/view.php?id=95
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Extend PushDispatchHistory so /v1/agent/sync writes four sync.*
workflow-progress keys into WorkspaceState (last_dispatch_at,
last_agent_type, last_findings_count, last_status) in addition to the
existing BrainMemory + SyncRecord persistence. Plan resolves via
agent_plan_id first, plan_slug fallback. Missing plan is treated as
non-fatal — state writes are skipped, BrainMemory still persists.
Adds a three-case feature test covering direct id, slug fallback, and
the missing-plan safety branch.
Closes tasks.lthn.sh/view.php?id=93
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Introduce a pipeline metadata surface that enforces "no body content
ever reaches pipeline decisions". MetaReader is an interface with four
methods (getPRMeta, getEpicMeta, getIssueState, getCommentReactions),
each returning a readonly DTO carrying only structural fields —
state, mergeability, SHAs, branches, reaction counts, child linkage.
ForgejoMetaReader projects raw Forgejo API payloads into these DTOs
and drops body/description/review text before the caller can see it.
Unit test mocks rich Forgejo payloads containing body, description,
review_text, and comment_body, then asserts the DTO toArray output
never exposes those keys — the regression fence for the RFC rule.
Downstream callers (ScanForWork, ManagePullRequest) still use the
raw ForgejoService today; that refactor lands under Mantis #90.
Closes tasks.lthn.sh/view.php?id=89
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
New endpoint GET /v1/brain/search?q=<query>&org=<>&project=<>&limit=<N>
that full-text-searches brain_memories via Elasticsearch using
BrainService::elasticSearch(). Separate from /v1/brain/recall (which is
vector/semantic via Qdrant) — this one is keyword/lexical.
Sits under the existing brain.read-auth middleware group.
Pest coverage (Http::fake for ES): Good (matches), Bad (invalid limit),
Ugly (empty query + filters).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=64
Two introspection endpoints for OpenBrain:
- GET /v1/brain/tags — ES terms aggregation over tags.keyword, returns
{tag: count} pairs for UI filter chips
- GET /v1/brain/scopes — composite aggregation over {org, project},
returns the scope hierarchy present in the index
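The two aggregation request bodies might look roughly like this; the `.keyword` sub-field names are assumptions about the index mapping:

```php
/** ES terms aggregation over tags for the /v1/brain/tags endpoint. */
function tagsAggregationBody(int $size = 50): array
{
    return [
        'size' => 0, // aggregation only; no hits needed
        'aggs' => ['tags' => ['terms' => ['field' => 'tags.keyword', 'size' => $size]]],
    ];
}

/** ES composite aggregation over {org, project} for /v1/brain/scopes. */
function scopesAggregationBody(): array
{
    return [
        'size' => 0,
        'aggs' => ['scopes' => ['composite' => ['sources' => [
            ['org' => ['terms' => ['field' => 'org.keyword']]],
            ['project' => ['terms' => ['field' => 'project.keyword']]],
        ]]]],
    ];
}
```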
Sits under the existing brain.read-auth group in Routes/api.php. New
BrainService helpers for aggregation shape; reuses the elasticSearch
HTTP path added in #59.
Pest coverage with Http::fake for ES.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=65
New artisan command brain:prune {--older-than=90} {--chunk=100} {--dry-run}
that completes the soft-delete → hard-delete lifecycle by:
1. selecting BrainMemory::onlyTrashed() where deleted_at < now - N days
2. dispatching DeleteFromIndex for each (Qdrant + ES cleanup)
3. forceDelete()'ing the rows
--dry-run counts without dispatching.
Complements brain:clean (which cleans recent soft-deletes) with a
retention-bounded terminal cleanup.
Pest coverage: Good (dispatch + forceDelete on aged trashed rows), Bad
(invalid chunk), Ugly (--dry-run skips both dispatch and delete).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=62
New artisan command brain:clean {--chunk=100} {--dry-run} that dispatches
the DeleteFromIndex job for soft-deleted BrainMemory rows (those in
onlyTrashed scope). Cleans up orphaned Qdrant + Elasticsearch index
entries that remain after a memory is soft-deleted.
--dry-run counts without dispatching.
php/tests/Feature/Console/BrainCleanCommandTest.php covers Good
(dispatches on trashed), Bad (invalid chunk), Ugly (--dry-run prevents
dispatch).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=61
remember() now writes the brain_memories row with indexed_at=null and
dispatches EmbedMemory::dispatch($memory->id) for async Qdrant + ES
indexing, instead of calling qdrantUpsert() synchronously. Response shape
matches the row state — caller gets the memory immediately, the Job
flips indexed_at once the Qdrant write succeeds.
Superseded rows still soft-delete synchronously (part of the remember
contract, not the indexing path).
php/tests/Feature/Services/BrainServiceRememberTest.php uses Queue::fake()
to assert EmbedMemory is dispatched and BrainService::qdrantUpsert() is
NOT called directly (subclass probe).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=55
New artisan command brain:reindex {--all} {--chunk=100} that dispatches
the EmbedMemory job for brain memories needing (re)indexing. Without
--all, only memories where indexed_at IS NULL are dispatched; --all
re-embeds every memory (useful after a Qdrant collection wipe or
embedding model change). Uses chunkById for memory-safe iteration at
scale.
php/tests/Feature/Console/BrainReindexCommandTest.php covers Good
(unindexed-only default), Bad (invalid chunk), Ugly (--all flag).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=60
Fills in the elasticIndex/elasticDelete stubs added by #56 and #57, plus a
new elasticSearch() method used by the upcoming /v1/brain/search endpoint
(#64).
- elasticIndex(BrainMemory) → PUT /brain_memories/_doc/{id}
- elasticDelete(string $id) → DELETE /brain_memories/_doc/{id}
- elasticSearch(string $query, array $filters) → POST /brain_memories/_search
- ES URL default http://127.0.0.1:9200 (config override via
BRAIN_ELASTICSEARCH_URL env var)
- RuntimeException on HTTP failures (same pattern as qdrantUpsert)
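A sketch of the _search request body; the bool-query shape and the `content` field name are assumptions, only the endpoint and the filter idea come from this log:

```php
/**
 * Body for POST /brain_memories/_search: a full-text match on the
 * memory content plus exact term filters (org, project, ...).
 */
function elasticSearchBody(string $query, array $filters, int $limit = 10): array
{
    $filter = [];
    foreach ($filters as $field => $value) {
        $filter[] = ['term' => [$field => $value]];
    }

    return [
        'size' => $limit,
        'query' => ['bool' => [
            'must' => [['match' => ['content' => $query]]],
            'filter' => $filter,
        ]],
    ];
}
```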
php/tests/Feature/Services/BrainServiceElasticTest.php covers Good/Bad/Ugly
for index, delete, and search using Http::fake.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=59
Inverse of the EmbedMemory job (#56): removes a memory from Qdrant (and
the future Elasticsearch index) when brain_forget fires or a memory is
soft-deleted.
- php/Jobs/DeleteFromIndex.php — Laravel Job, 3 retries with backoff
- BrainService: qdrantDelete() private→public and now throws on HTTP
failure (was silent Log::warning — wouldn't trigger Job retry)
- elasticDelete() stub added (fills in with the ES integration ticket)
- php/tests/Feature/Jobs/DeleteFromIndexTest.php — success + HTTP-failure
paths via mocked Http
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=57
Implements the async-embedding pipeline's worker side:
- php/Jobs/EmbedMemory.php — Laravel Job that calls BrainService::embed()
+ qdrantUpsert() and sets indexed_at on success
- php/Migrations/…_add_indexed_at_to_brain_memories.php — nullable
timestamp + index, portable across pgsql/mariadb (hasColumn guard)
- BrainMemory: +indexed_at fillable + datetime cast + PHPDoc
- BrainService: qdrantUpsert() private→public so the Job can use it;
elasticIndex() stub added (to be implemented by the ES ticket)
- php/tests/Feature/Jobs/EmbedMemoryTest.php — Pest tests for success
path and Qdrant-failure path
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=56
Adds an `org` match filter between workspace_id and project in the Qdrant
payload filter chain, giving multi-org isolation for OpenBrain memory
retrieval.
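The resulting filter chain might be built like this, following Qdrant's must/match payload-filter shape; the builder function itself is illustrative:

```php
/**
 * Qdrant payload filter with the new org clause sitting between
 * workspace_id and project, as described above.
 */
function qdrantFilter(string $workspaceId, ?string $org, ?string $project): array
{
    $must = [['key' => 'workspace_id', 'match' => ['value' => $workspaceId]]];
    if ($org !== null) {
        $must[] = ['key' => 'org', 'match' => ['value' => $org]];
    }
    if ($project !== null) {
        $must[] = ['key' => 'project', 'match' => ['value' => $project]];
    }

    return ['must' => $must];
}
```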
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=58
Three related fixes so the brain DB works on Postgres, not just MariaDB:
1. config.php — brain charset/collation was hardcoded to utf8mb4 which
Postgres rejects as client_encoding. Now driver-aware: utf8 for
pgsql, utf8mb4 otherwise. Override via BRAIN_DB_CHARSET env var.
2. Migration 000008 (create_brain_memories) — self-referential FK on
supersedes_id was declared inside Schema::create{}, causing Postgres
to evaluate it before the PK index existed ('no unique constraint
matching given keys'). Split into Schema::create + separate
Schema::table to guarantee PK is in place when FK is added.
3. Migration 000009 (drop workspace FK) — try/catch inside the Blueprint
closure couldn't catch deferred SQL failures. Replaced with a
constraint-exists pre-query against information_schema, supporting
both pgsql and mariadb/mysql drivers. Fresh installs no longer fail
trying to drop a constraint that was never created.
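Fix 1's driver-aware charset selection can be sketched as a small helper; the function name is illustrative:

```php
/**
 * Driver-aware brain DB charset: utf8 for pgsql (utf8mb4 is rejected
 * as a client_encoding), utf8mb4 otherwise; BRAIN_DB_CHARSET wins
 * when set.
 */
function brainDbCharset(string $driver, ?string $envOverride = null): string
{
    if ($envOverride !== null && $envOverride !== '') {
        return $envOverride;
    }

    return $driver === 'pgsql' ? 'utf8' : 'utf8mb4';
}
```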
Co-Authored-By: Virgil <virgil@lethean.io>