Closes the 5 PARTIAL items flagged in docs/AUDIT-openbrain-20260424.md.
- Gap A (org scoping persisted on writes): new migration adds `org`
nullable+indexed column to brain_memories; BrainMemory fillable;
RememberKnowledge action forwards org; BrainService::remember
persists it.
- Gap B (supersede/forget Elastic cleanup): BrainService::forget
dispatches DeleteFromIndex (handles both Qdrant + Elastic); supersede
path dispatches cleanup for the old memory id before replacing it.
DeleteFromIndex itself is untouched; it already handled both indexes.
- Gap C (brain:reindex flags): --org, --project, --stale (indexed_at
null OR older than 14 days), --dry-run (count and stop), and
--elastic-only added to the artisan command.
- Gap D (MCP schemas expose org): brain_remember, brain_recall,
brain_list now accept `org` in input schema + forward into
action/service.
- Gap E (resilience uneven): brain_list now wrapped in
withCircuitBreaker('brain', ...) matching the pattern used by
BrainRemember/Recall/Forget. BrainService gains retryableHttp()
helper — 100/300/900ms exponential backoff, retries only on 5xx +
connection errors, not on 4xx. Qdrant calls route through it;
Ollama left alone (EmbedMemory job has its own retry).
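The backoff and retry policy described above can be sketched in plain PHP (the helper names `retryDelayMs` / `shouldRetry` are illustrative, not the actual BrainService internals):

```php
<?php
// Sketch of the retryableHttp() policy: exponential backoff at
// 100/300/900ms, retrying only on 5xx responses or connection
// errors, never on 4xx. Helper names are illustrative.

/** Delay before retry attempt $attempt (1-based): 100ms, 300ms, 900ms. */
function retryDelayMs(int $attempt): int
{
    return 100 * (3 ** ($attempt - 1));
}

/** Retry on connection errors (no status) and 5xx, but never 4xx. */
function shouldRetry(?int $status): bool
{
    if ($status === null) {   // connection error: no response at all
        return true;
    }

    return $status >= 500;    // 5xx only; a 4xx is the caller's bug
}
```

The key property the tests pin down: a 4xx fails fast, while 5xx and connection errors consume all three retries.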
Tests (Good/Bad/Ugly per gap):
- Feature/Brain/OrgScopingTest.php
- Feature/Brain/SupersedeForgetIndexCleanupTest.php
- Feature/Brain/ReindexFlagsTest.php
- Feature/Mcp/BrainSchemaOrgTest.php
- Feature/Brain/CircuitBreakerTest.php
php -l clean on all 13 files. The Pest binary is not in this
checkout, so the CI pipeline validates the full suite.
Closes tasks.lthn.sh/view.php?id=107
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
BrainService::http() was building a PendingRequest with no auth
header, so when Qdrant has auth enabled (the production lthn.sh
deploy does) every upsert/lookup returned 401. The circuit breaker
logged the 401 via Cache::store('file'), which was the red-herring
cache-write error chased in the first #97 iteration.
Changes:
- BrainService loads + trims a Qdrant api key from
config('brain.qdrant.api_key') in the constructor.
- New qdrantHttp() helper returns a PendingRequest with the
api-key header when the key is non-empty, or the plain client
otherwise. Ollama + Elasticsearch call sites still use http()
(separate auth shapes).
- php/config.php adds a brain.qdrant.api_key entry reading
env('BRAIN_QDRANT_API_KEY').
- Good/Bad/Ugly Pest tests cover: configured key → header sent,
unset → header absent, empty-string → header absent.
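The key-handling contract those tests pin down can be sketched as a pure function (the name `qdrantHeaders` is illustrative; `api-key` is Qdrant's documented auth header):

```php
<?php
// Sketch: decide which headers qdrantHttp() would attach. A
// configured, non-empty key (after trim) yields the api-key header;
// null, empty, or whitespace-only keys yield no auth header at all.

function qdrantHeaders(?string $configuredKey): array
{
    $key = trim((string) $configuredKey);

    return $key === '' ? [] : ['api-key' => $key];
}
```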
Closes tasks.lthn.sh/view.php?id=97
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Exercises the three MCP handlers that run against MariaDB alone (no
Qdrant dependency): brain_remember writes a memory and returns its id,
brain_list surfaces it, and brain_forget removes it. Negative case:
brain_forget on a non-existent id returns a proper error response
(not a TypeError).
brain_recall is out of scope — needs the Qdrant collection +
embedding pipeline.
Implementation note: handlers use `type` + workspace context for
scoping, not a `scope` parameter; the test matches the actual
signatures.
Closes tasks.lthn.sh/view.php?id=96
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Fleet tasks deliberately do not create AgentSession records.
AgentSession's work_log / artefacts / handoff / replay semantics are
designed for interactive, replayable, handoff-capable work — fleet
tasks are atomic assign→complete events with no in-between state
to replay. If a fleet-task handler needs session semantics, it
should start its own AgentSession via AgentSessionService when the
work begins.
Closes tasks.lthn.sh/view.php?id=94
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
ScanForWork and ManagePullRequest now depend on the MetaReader
interface (added in #89) instead of reading raw Forgejo body /
description / PR text. Epic child-linkage comes from
EpicMeta.children, PR merge decisions come from PRMeta.state /
mergeability / checkStatuses. The returned shape drops issue_body
and replaces it with structural issue_state / issue_labels.
Adds a feature test that injects a mocked MetaReader carrying
intentionally tainted body/description/review_text fields and
recursively asserts none of those keys appear in the output of
either action — the regression fence for the RFC rule that body
content must never reach pipeline decisions.
Closes tasks.lthn.sh/view.php?id=90
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
AgentSession::addArtifact expects ?array $metadata in the third
argument slot; the MCP tool was passing the optional description
string directly, producing a TypeError whenever a caller supplied a
non-null description. Wrap the description into a metadata array so
the call matches the model signature, and add a feature test that
exercises the MCP handler end-to-end to prevent regression.
Closes tasks.lthn.sh/view.php?id=95
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Extend PushDispatchHistory so /v1/agent/sync writes four sync.*
workflow-progress keys into WorkspaceState (last_dispatch_at,
last_agent_type, last_findings_count, last_status) in addition to the
existing BrainMemory + SyncRecord persistence. The plan resolves via
agent_plan_id first, falling back to plan_slug. A missing plan is
non-fatal: state writes are skipped, but the BrainMemory still
persists.
Adds a three-case feature test covering direct id, slug fallback, and
the missing-plan safety branch.
Closes tasks.lthn.sh/view.php?id=93
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Introduce a pipeline metadata surface that enforces "no body content
ever reaches pipeline decisions". MetaReader is an interface with four
methods (getPRMeta, getEpicMeta, getIssueState, getCommentReactions),
each returning a readonly DTO carrying only structural fields —
state, mergeability, SHAs, branches, reaction counts, child linkage.
ForgejoMetaReader projects raw Forgejo API payloads into these DTOs
and drops body/description/review text before the caller can see it.
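A minimal sketch of that projection, assuming a simplified field set (this is not the actual DTO shape):

```php
<?php
// Sketch of the structural-fields-only projection: the DTO is built
// from a raw Forgejo payload but never stores body/description/
// review text, so toArray() cannot leak them. Field names are
// illustrative, not the real PRMeta shape.

final class PRMetaSketch
{
    public function __construct(
        public readonly string $state,
        public readonly bool $mergeable,
        public readonly string $headSha,
    ) {}

    public static function fromPayload(array $raw): self
    {
        // Only structural keys are read; 'body' et al. are dropped here.
        return new self(
            state: $raw['state'],
            mergeable: (bool) $raw['mergeable'],
            headSha: $raw['head']['sha'],
        );
    }

    public function toArray(): array
    {
        return [
            'state' => $this->state,
            'mergeable' => $this->mergeable,
            'head_sha' => $this->headSha,
        ];
    }
}
```

Because the tainted keys are never captured, no downstream caller can reach them, which is the whole point of the fence.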
Unit test mocks rich Forgejo payloads containing body, description,
review_text, and comment_body, then asserts the DTO toArray output
never exposes those keys — the regression fence for the RFC rule.
Downstream callers (ScanForWork, ManagePullRequest) still use the
raw ForgejoService today; that refactor lands under Mantis #90.
Closes tasks.lthn.sh/view.php?id=89
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
New endpoint GET /v1/brain/search?q=<query>&org=<>&project=<>&limit=<N>
that full-text-searches brain_memories via Elasticsearch using
BrainService::elasticSearch(). Separate from /v1/brain/recall (which is
vector/semantic via Qdrant) — this one is keyword/lexical.
Sits under the existing brain.read-auth middleware group.
Pest coverage (Http::fake for ES): Good (matches), Bad (invalid limit),
Ugly (empty query + filters).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=64
Two introspection endpoints for OpenBrain:
- GET /v1/brain/tags — ES terms aggregation over tags.keyword, returns
{tag: count} pairs for UI filter chips
- GET /v1/brain/scopes — composite aggregation over {org, project},
returns the scope hierarchy present in the index
Sits under the existing brain.read-auth group in Routes/api.php. New
BrainService helpers for aggregation shape; reuses the elasticSearch
HTTP path added in #59.
Pest coverage with Http::fake for ES.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=65
New artisan command brain:prune {--older-than=90} {--chunk=100} {--dry-run}
that completes the soft-delete → hard-delete lifecycle by:
1. selecting BrainMemory::onlyTrashed() where deleted_at < now - N days
2. dispatching DeleteFromIndex for each (Qdrant + ES cleanup)
3. calling forceDelete() on the rows
--dry-run counts without dispatching.
Complements brain:clean (which cleans recent soft-deletes) with a
retention-bounded terminal cleanup.
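The selection step can be sketched as a pure function over row arrays (the real command operates on BrainMemory::onlyTrashed(); the shapes here are illustrative):

```php
<?php
// Sketch of the prune selection: given trashed rows shaped as
// ['id' => ..., 'deleted_at' => DateTimeImmutable], keep only those
// soft-deleted more than $days days ago. The real command expresses
// this as a where() on the onlyTrashed() query.

function pruneCandidates(array $trashedRows, int $days, DateTimeImmutable $now): array
{
    $cutoff = $now->sub(new DateInterval("P{$days}D"));

    return array_values(array_filter(
        $trashedRows,
        fn (array $row) => $row['deleted_at'] < $cutoff,
    ));
}
```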
Pest coverage: Good (dispatch + forceDelete on aged trashed rows), Bad
(invalid chunk), Ugly (--dry-run skips both dispatch and delete).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=62
New artisan command brain:clean {--chunk=100} {--dry-run} that dispatches
the DeleteFromIndex job for soft-deleted BrainMemory rows (those in
onlyTrashed scope). Cleans up orphaned Qdrant + Elasticsearch index
entries that remain after a memory is soft-deleted.
--dry-run counts without dispatching.
php/tests/Feature/Console/BrainCleanCommandTest.php covers Good
(dispatches on trashed), Bad (invalid chunk), Ugly (--dry-run prevents
dispatch).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=61
remember() now writes the brain_memories row with indexed_at=null and
dispatches EmbedMemory::dispatch($memory->id) for async Qdrant + ES
indexing, instead of calling qdrantUpsert() synchronously. Response shape
matches the row state — caller gets the memory immediately, the Job
flips indexed_at once the Qdrant write succeeds.
Superseded rows still soft-delete synchronously (part of the remember
contract, not the indexing path).
php/tests/Feature/Services/BrainServiceRememberTest.php uses Queue::fake()
to assert EmbedMemory is dispatched and BrainService::qdrantUpsert() is
NOT called directly (subclass probe).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=55
New artisan command brain:reindex {--all} {--chunk=100} that dispatches
the EmbedMemory job for brain memories needing (re)indexing. Without
--all, only memories where indexed_at IS NULL are dispatched; --all
re-embeds every memory (useful after a Qdrant collection wipe or
embedding model change). Uses chunkById for memory-safe iteration at
scale.
php/tests/Feature/Console/BrainReindexCommandTest.php covers Good
(unindexed-only default), Bad (invalid chunk), Ugly (--all flag).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=60
Fills in the elasticIndex/elasticDelete stubs added by #56 and #57, plus a
new elasticSearch() method used by the upcoming /v1/brain/search endpoint
(#64).
- elasticIndex(BrainMemory) → PUT /brain_memories/_doc/{id}
- elasticDelete(string $id) → DELETE /brain_memories/_doc/{id}
- elasticSearch(string $query, array $filters) → POST /brain_memories/_search
- ES URL default http://127.0.0.1:9200 (config override via
BRAIN_ELASTICSEARCH_URL env var)
- RuntimeException on HTTP failures (same pattern as qdrantUpsert)
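A sketch of the kind of _search body such a method might build (field names like `text` are assumptions, not the actual index mapping):

```php
<?php
// Sketch: an Elasticsearch _search request body combining a
// full-text match on the memory text with exact term filters for
// org/project. Field names are assumed, not confirmed.

function buildSearchBody(string $query, array $filters, int $limit): array
{
    $must = [['match' => ['text' => $query]]];

    foreach ($filters as $field => $value) {
        // Exact-match filtering goes through the .keyword sub-field.
        $must[] = ['term' => ["{$field}.keyword" => $value]];
    }

    return [
        'size' => $limit,
        'query' => ['bool' => ['must' => $must]],
    ];
}
```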
php/tests/Feature/Services/BrainServiceElasticTest.php covers Good/Bad/Ugly
for index, delete, and search using Http::fake.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=59
Inverse of the EmbedMemory job (#56): removes a memory from Qdrant (and
the future Elasticsearch index) when brain_forget fires or a memory is
soft-deleted.
- php/Jobs/DeleteFromIndex.php — Laravel Job, 3 retries with backoff
- BrainService: qdrantDelete() private→public and now throws on HTTP
failure (was a silent Log::warning, which wouldn't trigger the Job
retry)
- elasticDelete() stub added (fills in with the ES integration ticket)
- php/tests/Feature/Jobs/DeleteFromIndexTest.php — success + HTTP-failure
paths via mocked Http
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=57
Implements the async-embedding pipeline's worker side:
- php/Jobs/EmbedMemory.php — Laravel Job that calls BrainService::embed()
+ qdrantUpsert() and sets indexed_at on success
- php/Migrations/…_add_indexed_at_to_brain_memories.php — nullable
timestamp + index, portable across pgsql/mariadb (hasColumn guard)
- BrainMemory: +indexed_at fillable + datetime cast + PHPDoc
- BrainService: qdrantUpsert() private→public so the Job can use it;
elasticIndex() stub added (to be implemented by the ES ticket)
- php/tests/Feature/Jobs/EmbedMemoryTest.php — Pest tests for success
path and Qdrant-failure path
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=56
Adds an `org` match filter between workspace_id and project in the Qdrant
payload filter chain. Multi-org isolation for OpenBrain memory retrieval.
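The resulting filter chain might look like this sketch (key names follow the commit text; the exact structure is an assumption):

```php
<?php
// Sketch of the Qdrant payload filter with the new org match clause
// slotted between workspace_id and project. Structure is
// illustrative, modeled on Qdrant's must/match filter shape.

function qdrantFilter(string $workspaceId, ?string $org, ?string $project): array
{
    $must = [['key' => 'workspace_id', 'match' => ['value' => $workspaceId]]];

    if ($org !== null) {
        $must[] = ['key' => 'org', 'match' => ['value' => $org]];
    }
    if ($project !== null) {
        $must[] = ['key' => 'project', 'match' => ['value' => $project]];
    }

    return ['must' => $must];
}
```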
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=58
Three related fixes so the brain DB works on Postgres, not just MariaDB:
1. config.php — brain charset/collation was hardcoded to utf8mb4,
which Postgres rejects as a client_encoding. Now driver-aware: utf8
for pgsql, utf8mb4 otherwise. Override via the BRAIN_DB_CHARSET env
var.
2. Migration 000008 (create_brain_memories) — the self-referential FK
on supersedes_id was declared inside the Schema::create() closure,
causing Postgres to evaluate it before the PK index existed ('no
unique constraint matching given keys'). Split into Schema::create
plus a separate Schema::table to guarantee the PK is in place when
the FK is added.
3. Migration 000009 (drop workspace FK) — try/catch inside the Blueprint
closure couldn't catch deferred SQL failures. Replaced with a
constraint-exists pre-query against information_schema, supporting
both pgsql and mariadb/mysql drivers. Fresh installs no longer fail
trying to drop a constraint that was never created.
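The driver-aware charset choice from fix 1 can be sketched as (names illustrative; the real code lives in config.php):

```php
<?php
// Sketch: Postgres gets utf8 (valid as client_encoding), everything
// else keeps utf8mb4; a non-empty env override wins when set.

function brainCharset(string $driver, ?string $override = null): string
{
    if ($override !== null && $override !== '') {
        return $override;   // BRAIN_DB_CHARSET takes precedence
    }

    return $driver === 'pgsql' ? 'utf8' : 'utf8mb4';
}
```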
Co-Authored-By: Virgil <virgil@lethean.io>
Phase 2 of Core DI migration:
- Add *core.Core field + SetCore() to PrepSubsystem and monitor.Subsystem
- Register agentic/monitor/brain as Core services with lifecycle hooks
- Mark SetCompletionNotifier and SetNotifier as deprecated (removed in Phase 3)
- Fix monitor test to match actual event names
- initServices() now wires Core refs before legacy callbacks
Co-Authored-By: Virgil <virgil@lethean.io>