New endpoint GET /v1/brain/search?q=<query>&org=<org>&project=<project>&limit=<N>
that full-text-searches brain_memories via Elasticsearch using
BrainService::elasticSearch(). Separate from /v1/brain/recall (which is
vector/semantic via Qdrant) — this one is keyword/lexical.
Sits under the existing brain.read-auth middleware group.
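A minimal sketch of the wiring, assuming a closure route; the validation
rules and bounds are illustrative, not specified by this change:

    use Illuminate\Http\Request;
    use Illuminate\Support\Arr;

    Route::middleware('brain.read-auth')->get('/v1/brain/search', function (Request $request) {
        $validated = $request->validate([
            'q'       => 'required|string',
            'org'     => 'sometimes|string',
            'project' => 'sometimes|string',
            'limit'   => 'sometimes|integer|min:1|max:100', // assumed bounds
        ]);

        // keyword/lexical search via ES, filters passed through as-is
        return response()->json(
            app(BrainService::class)->elasticSearch(
                $validated['q'],
                Arr::only($validated, ['org', 'project', 'limit'])
            )
        );
    });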
Pest coverage (Http::fake for ES): Good (matches), Bad (invalid limit),
Ugly (empty query + filters).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=64
Two introspection endpoints for OpenBrain:
- GET /v1/brain/tags — ES terms aggregation over tags.keyword, returns
{tag: count} pairs for UI filter chips
- GET /v1/brain/scopes — composite aggregation over {org, project},
returns the scope hierarchy present in the index
Sits under the existing brain.read-auth group in Routes/api.php. New
BrainService helpers build the aggregation request bodies and reuse the
elasticSearch HTTP path added in #59.
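For reference, the two aggregation request bodies are roughly this shape
(the .keyword suffix on org/project and the bucket sizes are assumptions):

    // /v1/brain/tags: terms aggregation over tags.keyword, {tag: count}
    $tagsBody = [
        'size' => 0,
        'aggs' => ['tags' => ['terms' => ['field' => 'tags.keyword', 'size' => 100]]],
    ];

    // /v1/brain/scopes: composite aggregation over {org, project}
    $scopesBody = [
        'size' => 0,
        'aggs' => ['scopes' => ['composite' => ['sources' => [
            ['org'     => ['terms' => ['field' => 'org.keyword']]],
            ['project' => ['terms' => ['field' => 'project.keyword']]],
        ]]]],
    ];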
Pest coverage with Http::fake for ES.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=65
New artisan command brain:prune {--older-than=90} {--chunk=100} {--dry-run}
that completes the soft-delete → hard-delete lifecycle by:
1. selecting BrainMemory::onlyTrashed() where deleted_at < now - N days
2. dispatching DeleteFromIndex for each (Qdrant + ES cleanup)
3. calling forceDelete() on the rows
--dry-run reports the would-be count without dispatching or deleting.
Complements brain:clean (which cleans recent soft-deletes) with a
retention-bounded terminal cleanup.
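A sketch of the core loop under those option names (counting and output
details are illustrative):

    $cutoff = now()->subDays((int) $this->option('older-than'));
    $dryRun = (bool) $this->option('dry-run');
    $count  = 0;

    BrainMemory::onlyTrashed()
        ->where('deleted_at', '<', $cutoff)
        ->chunkById((int) $this->option('chunk'), function ($memories) use ($dryRun, &$count) {
            foreach ($memories as $memory) {
                $count++;
                if ($dryRun) {
                    continue; // --dry-run: count only, no dispatch or delete
                }
                DeleteFromIndex::dispatch($memory->id); // Qdrant + ES cleanup
                $memory->forceDelete();                 // terminal hard delete
            }
        });

    $this->info(($dryRun ? 'Would prune ' : 'Pruned ') . $count . ' memories');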
Pest coverage: Good (dispatch + forceDelete on aged trashed rows), Bad
(invalid chunk), Ugly (--dry-run skips both dispatch and delete).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=62
New artisan command brain:clean {--chunk=100} {--dry-run} that dispatches
the DeleteFromIndex job for soft-deleted BrainMemory rows (those in
onlyTrashed scope). Cleans up orphaned Qdrant + Elasticsearch index
entries that remain after a memory is soft-deleted.
--dry-run counts without dispatching.
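Example invocations (values illustrative):

    php artisan brain:clean --dry-run      # report how many trashed rows would be dispatched
    php artisan brain:clean --chunk=500    # dispatch DeleteFromIndex in batches of 500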
php/tests/Feature/Console/BrainCleanCommandTest.php covers Good
(dispatches on trashed), Bad (invalid chunk), Ugly (--dry-run prevents
dispatch).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=61
remember() now writes the brain_memories row with indexed_at=null and
queues EmbedMemory::dispatch($memory->id) for async Qdrant + ES
indexing, instead of calling qdrantUpsert() synchronously. Response shape
matches the row state: the caller gets the memory immediately, and the Job
flips indexed_at once the Qdrant write succeeds.
Superseded rows still soft-delete synchronously (part of the remember
contract, not the indexing path).
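A sketch of the new write path; the field list is elided, and only the
indexed_at=null plus dispatch behaviour shown here is asserted by the tests:

    // Inside BrainService::remember(), illustrative, not the full method
    $memory = BrainMemory::create([
        // ...content and scope fields elided
        'indexed_at' => null,            // not yet in Qdrant/ES
    ]);

    EmbedMemory::dispatch($memory->id);  // async embed + upsert; flips indexed_at on success

    return $memory;                      // caller sees the row state immediately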
php/tests/Feature/Services/BrainServiceRememberTest.php uses Queue::fake()
to assert EmbedMemory is dispatched and BrainService::qdrantUpsert() is
NOT called directly (subclass probe).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=55
New artisan command brain:reindex {--all} {--chunk=100} that dispatches
the EmbedMemory job for brain memories needing (re)indexing. Without
--all, only memories where indexed_at IS NULL are dispatched; --all
re-embeds every memory (useful after a Qdrant collection wipe or
embedding model change). Uses chunkById for memory-safe iteration at
scale.
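A sketch of the selection logic, assuming the flags above:

    // Illustrative handle() body for brain:reindex
    $query = BrainMemory::query();

    if (! $this->option('all')) {
        $query->whereNull('indexed_at'); // default: only unindexed memories
    }

    $query->chunkById((int) $this->option('chunk'), function ($memories) {
        foreach ($memories as $memory) {
            EmbedMemory::dispatch($memory->id);
        }
    });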
php/tests/Feature/Console/BrainReindexCommandTest.php covers Good
(unindexed-only default), Bad (invalid chunk), Ugly (--all flag).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=60
Fills in the elasticIndex/elasticDelete stubs added by #56 and #57, plus a
new elasticSearch() method used by the upcoming /v1/brain/search endpoint
(#64).
- elasticIndex(BrainMemory) → PUT /brain_memories/_doc/{id}
- elasticDelete(string $id) → DELETE /brain_memories/_doc/{id}
- elasticSearch(string $query, array $filters) → POST /brain_memories/_search
- ES URL default http://127.0.0.1:9200 (config override via
BRAIN_ELASTICSEARCH_URL env var)
- RuntimeException on HTTP failures (same pattern as qdrantUpsert)
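The index path could look like this sketch; the config key name and document
body are assumptions, and only the URL shape, env var, and RuntimeException
behaviour come from this change:

    use Illuminate\Support\Facades\Http;

    public function elasticIndex(BrainMemory $memory): void
    {
        $base = config('brain.elasticsearch_url', 'http://127.0.0.1:9200'); // BRAIN_ELASTICSEARCH_URL

        $response = Http::put("{$base}/brain_memories/_doc/{$memory->id}", [
            // document body built from the memory row (fields elided)
        ]);

        if ($response->failed()) {
            throw new \RuntimeException('ES index failed: ' . $response->status());
        }
    }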
php/tests/Feature/Services/BrainServiceElasticTest.php covers Good/Bad/Ugly
for index, delete, and search using Http::fake.
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=59
Inverse of the EmbedMemory job (#56): removes a memory from Qdrant (and
the future Elasticsearch index) when brain_forget fires or a memory is
soft-deleted.
- php/Jobs/DeleteFromIndex.php — Laravel Job, 3 retries with backoff
- BrainService: qdrantDelete() private→public; it now throws on HTTP
  failure (previously a silent Log::warning, which never triggered a
  Job retry)
- elasticDelete() stub added (to be filled in by the ES integration ticket)
- php/tests/Feature/Jobs/DeleteFromIndexTest.php — success + HTTP-failure
paths via mocked Http
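The retry-relevant change in qdrantDelete() is roughly the following
(URL variable and collection name are illustrative of Qdrant's
points-delete API):

    // Before: failures were only logged, so the Job "succeeded" and never retried.
    // After: throw, letting DeleteFromIndex's 3 retries with backoff kick in.
    $response = Http::post("{$qdrantUrl}/collections/brain_memories/points/delete", [
        'points' => [$id],
    ]);

    if ($response->failed()) {
        throw new \RuntimeException('Qdrant delete failed: ' . $response->status());
    }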
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=57
Implements the async-embedding pipeline's worker side:
- php/Jobs/EmbedMemory.php — Laravel Job that calls BrainService::embed()
+ qdrantUpsert() and sets indexed_at on success
- php/Migrations/…_add_indexed_at_to_brain_memories.php — nullable
timestamp + index, portable across pgsql/mariadb (hasColumn guard)
- BrainMemory: +indexed_at fillable + datetime cast + PHPDoc
- BrainService: qdrantUpsert() private→public so the Job can use it;
elasticIndex() stub added (to be implemented by the ES ticket)
- php/tests/Feature/Jobs/EmbedMemoryTest.php — Pest tests for success
path and Qdrant-failure path
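A sketch of the Job's happy path (the constructor property and argument
shapes are assumptions):

    public function handle(BrainService $brain): void
    {
        $memory = BrainMemory::findOrFail($this->memoryId);

        $vector = $brain->embed($memory->content);  // embedding call
        $brain->qdrantUpsert($memory, $vector);     // throws on HTTP failure, so the Job retries

        $memory->update(['indexed_at' => now()]);   // only set after the Qdrant write succeeds
    }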
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=56
Adds an `org` match filter between workspace_id and project in the Qdrant
payload filter chain, giving multi-org isolation for OpenBrain memory
retrieval.
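The resulting payload filter is roughly this shape (standard Qdrant
match-filter syntax; variable names illustrative):

    'filter' => [
        'must' => [
            ['key' => 'workspace_id', 'match' => ['value' => $workspaceId]],
            ['key' => 'org',          'match' => ['value' => $org]],      // new
            ['key' => 'project',      'match' => ['value' => $project]],
        ],
    ],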
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=58
Three related fixes so the brain DB works on Postgres, not just MariaDB:
1. config.php — brain charset/collation was hardcoded to utf8mb4, which
   Postgres rejects as a client_encoding. Now driver-aware: utf8 for
   pgsql, utf8mb4 otherwise (see the sketch after this list). Override
   via the BRAIN_DB_CHARSET env var.
2. Migration 000008 (create_brain_memories) — the self-referential FK on
   supersedes_id was declared inside the Schema::create() closure, causing
   Postgres to evaluate it before the PK index existed ('no unique
   constraint matching given keys'). Split into Schema::create() plus a
   separate Schema::table() call so the PK is guaranteed to exist when
   the FK is added.
3. Migration 000009 (drop workspace FK) — try/catch inside the Blueprint
closure couldn't catch deferred SQL failures. Replaced with a
constraint-exists pre-query against information_schema, supporting
both pgsql and mariadb/mysql drivers. Fresh installs no longer fail
trying to drop a constraint that was never created.
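For fix 1, referenced above, a sketch of the driver-aware selection
(config layout illustrative):

    // config.php: brain connection (illustrative excerpt)
    $driver = env('BRAIN_DB_DRIVER', 'mariadb');

    return [
        'brain' => [
            'driver'  => $driver,
            // Postgres rejects utf8mb4 as a client_encoding; use utf8 there
            'charset' => env('BRAIN_DB_CHARSET', $driver === 'pgsql' ? 'utf8' : 'utf8mb4'),
            // ...other connection settings elided
        ],
    ];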
Co-Authored-By: Virgil <virgil@lethean.io>
Phase 2 of Core DI migration:
- Add *core.Core field + SetCore() to PrepSubsystem and monitor.Subsystem
- Register agentic/monitor/brain as Core services with lifecycle hooks
- Mark SetCompletionNotifier and SetNotifier as deprecated (to be removed in Phase 3)
- Fix monitor test to match actual event names
- initServices() now wires Core refs before legacy callbacks
Co-Authored-By: Virgil <virgil@lethean.io>