Fleet registration in pkg/agentic already goes through the shared
&http.Client{Timeout: 30s} at transport.go:13 — no InsecureSkipVerify,
no custom TLS transport. This audit documents that finding and adds
regression coverage so future refactors can't silently strip TLS
validation from the /v1/fleet/register path.
Verdict: OK. No production bug. Tests exercise the trusted-TLS-server
case and assert the untrusted-cert case fails with a wrapped error
surfacing a certificate / x509 / tls signal in the message.
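The invariant the regression tests lock in can be sketched outside Go as
well — this is an illustrative Python analogue (not the code under audit;
`make_client_context` and the `insecure` flag are hypothetical stand-ins
for Go's default transport and InsecureSkipVerify):

```python
import ssl

def make_client_context(insecure: bool = False) -> ssl.SSLContext:
    """Mirror of a default HTTP transport: certificate verification is
    on unless the caller explicitly opts out (Go's InsecureSkipVerify)."""
    ctx = ssl.create_default_context()
    if insecure:  # the path the audit confirms is never taken
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE
    return ctx

# regression fence: the default context must keep validation enabled
default_ctx = make_client_context()
assert default_ctx.verify_mode == ssl.CERT_REQUIRED
assert default_ctx.check_hostname is True
```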
Closes tasks.lthn.sh/view.php?id=29
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Fleet tasks deliberately do not create AgentSession records.
AgentSession's work_log / artefacts / handoff / replay semantics are
designed for interactive, replayable, handoff-capable work — fleet
tasks are atomic assign→complete events with no in-between state
to replay. If a fleet-task handler needs session semantics, it
should start its own AgentSession via AgentSessionService when the
work begins.
Closes tasks.lthn.sh/view.php?id=94
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Rename legacy dappcore/dAppCore/dappco.re plugin-metadata identifiers
to the canonical core namespace: marketplace name is core-agent,
plugin manifests point at https://lthn.ai and
https://forge.lthn.ai/core/agent.git. Align the two .mcp.json files
on the new "core mcp serve" command. Scaffold google/gemini-cli with
a minimal plugin.json + README so the cross-agent plugin family is
complete.
Out of this ticket's scope: live Go module import paths at
dappco.re/go/core (that's a separate migration), and a stale pip
install URL in claude/camofox_mcp/README.md (follow-up child ticket).
Closes tasks.lthn.sh/view.php?id=92
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Creates three new Claude plugin family directories per the
plugin-restructure RFC: core-go (Go-family tooling), core-php
(PHP-family tooling), infra (ops / devops / deploy tooling). Each
carries .claude-plugin/plugin.json, marketplace.yaml, README, and
commands/agents/skills stubs. The marketplace format is YAML per the
RFC; the legacy JSON marketplace for core-agent is left untouched.
Appends a Resolution section to the plugin-restructure RFC recording
the YAML-over-JSON decision and noting that the dappcore→core rename
at cross-plugin metadata level is scoped to #92.
Closes tasks.lthn.sh/view.php?id=91
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
ScanForWork and ManagePullRequest now depend on the MetaReader
interface (added in #89) instead of reading raw Forgejo body /
description / PR text. Epic child-linkage comes from
EpicMeta.children, PR merge decisions come from PRMeta.state /
mergeability / checkStatuses. The returned shape drops issue_body
and replaces it with structural issue_state / issue_labels.
Adds a feature test that injects a mocked MetaReader carrying
intentionally-tainted body/description/review_text fields and
recursively asserts none of those keys appear in the output of
either action — the regression fence for the RFC rule that body
content must never reach pipeline decisions.
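The recursive assertion at the heart of that fence looks roughly like
this (a Python sketch, not the Pest test; the forbidden-key set is taken
from the fields named above):

```python
FORBIDDEN = {"body", "description", "review_text", "issue_body"}

def assert_no_body_keys(value, path="$"):
    """Recursively walk an action's output and fail on any tainted key."""
    if isinstance(value, dict):
        for key, child in value.items():
            assert key not in FORBIDDEN, f"tainted key {key!r} at {path}"
            assert_no_body_keys(child, f"{path}.{key}")
    elif isinstance(value, list):
        for i, child in enumerate(value):
            assert_no_body_keys(child, f"{path}[{i}]")

# structural output passes; body content anywhere in the tree fails
assert_no_body_keys({"issue_state": "open", "issue_labels": ["bug"],
                     "children": [{"issue_state": "closed"}]})
```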
Closes tasks.lthn.sh/view.php?id=90
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
AgentSession::addArtifact expects ?array $metadata in the third
argument slot; the MCP tool was passing the optional description
string directly, producing a TypeError whenever a caller supplied a
non-null description. Wrap the description into a metadata array so
the call matches the model signature, and add a feature test that
exercises the MCP handler end-to-end to prevent regression.
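The shape of the fix, sketched in Python (the PHP signature is the real
contract; `add_artifact` / `mcp_add_artifact` here are illustrative
stand-ins):

```python
from typing import Optional

def add_artifact(path: str, type_: str, metadata: Optional[dict] = None):
    """Stand-in for AgentSession::addArtifact — the third slot is
    ?array in PHP terms: a dict or None, never a bare string."""
    assert metadata is None or isinstance(metadata, dict)
    return {"path": path, "type": type_, "metadata": metadata}

def mcp_add_artifact(path: str, type_: str, description: Optional[str] = None):
    """The fix: fold the optional description into a metadata array
    instead of passing the string positionally."""
    metadata = {"description": description} if description is not None else None
    return add_artifact(path, type_, metadata)

result = mcp_add_artifact("build/out.tar", "archive", "release tarball")
assert result["metadata"] == {"description": "release tarball"}
```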
Closes tasks.lthn.sh/view.php?id=95
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Extend PushDispatchHistory so /v1/agent/sync writes four sync.*
workflow-progress keys into WorkspaceState (last_dispatch_at,
last_agent_type, last_findings_count, last_status) in addition to the
existing BrainMemory + SyncRecord persistence. The plan resolves via
agent_plan_id first, with plan_slug as a fallback. A missing plan is
non-fatal — state writes are skipped, BrainMemory still persists.
Adds a three-case feature test covering direct id, slug fallback, and
the missing-plan safety branch.
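The resolution order can be sketched as follows (illustrative Python;
the in-memory `plans` dict and payload keys stand in for the Eloquent
lookups):

```python
def resolve_plan(plans, payload):
    """id-first, slug-fallback plan resolution. None means 'skip the
    sync.* state writes but keep the BrainMemory persistence'."""
    plan_id = payload.get("agent_plan_id")
    if plan_id is not None and plan_id in plans:
        return plans[plan_id]
    slug = payload.get("plan_slug")
    if slug is not None:
        return next((p for p in plans.values() if p["slug"] == slug), None)
    return None

plans = {7: {"id": 7, "slug": "fleet-rollout"}}
assert resolve_plan(plans, {"agent_plan_id": 7})["id"] == 7
assert resolve_plan(plans, {"plan_slug": "fleet-rollout"})["id"] == 7
assert resolve_plan(plans, {"plan_slug": "missing"}) is None  # non-fatal branch
```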
Closes tasks.lthn.sh/view.php?id=93
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Introduce a pipeline metadata surface that enforces "no body content
ever reaches pipeline decisions". MetaReader is an interface with four
methods (getPRMeta, getEpicMeta, getIssueState, getCommentReactions),
each returning a readonly DTO carrying only structural fields —
state, mergeability, SHAs, branches, reaction counts, child linkage.
ForgejoMetaReader projects raw Forgejo API payloads into these DTOs
and drops body/description/review text before the caller can see it.
Unit test mocks rich Forgejo payloads containing body, description,
review_text, and comment_body, then asserts the DTO toArray output
never exposes those keys — the regression fence for the RFC rule.
Downstream callers (ScanForWork, ManagePullRequest) still use the
raw ForgejoService today; that refactor lands under Mantis #90.
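The projection pattern, sketched in Python (the real DTOs are readonly
PHP classes; the fields shown here are a subset assumed from the
description above):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PRMeta:
    """Readonly DTO: structural fields only, no body/description text."""
    state: str
    mergeable: bool
    head_sha: str

def project_pr(payload: dict) -> PRMeta:
    """Project a raw API payload into the DTO, dropping body content
    before any caller can see it."""
    return PRMeta(state=payload["state"],
                  mergeable=payload["mergeable"],
                  head_sha=payload["head"]["sha"])

raw = {"state": "open", "mergeable": True,
       "head": {"sha": "e2d1d32"}, "body": "tainted free text"}
meta = project_pr(raw)
assert "body" not in vars(meta)  # the taint never survives projection
```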
Closes tasks.lthn.sh/view.php?id=89
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Python MCP server that dispatches sandboxed Hermes runs from Claude Code.
Exposes hermes_dispatch / hermes_status / hermes_fetch as MCP tools so
Cladius can offload work to Hermes runners via `claude mcp add hermes-runner`.
Passes --agents JSON through for dynamic subagent composition.
Closes tasks.lthn.sh/view.php?id=78
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Python MCP server wrapping the Camofox browser HTTP API on lthn.sh.
Exposes navigate/read_page/screenshot/click/fill/close_tab as MCP tools
so any Claude Code session can drive Camofox via `claude mcp add camofox`.
Closes tasks.lthn.sh/view.php?id=77
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Two Hermes skill files that auto-register OpenBrain memory tools when
the MemoryProvider plugin loads. Each names the triggering phrases,
the tool contract, and an example invocation so Hermes can route
recall/remember prompts without coaching.
Closes tasks.lthn.sh/view.php?id=75
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
Python plugin implementing Hermes ContextEngine backed by OpenBrain.
compress() does centrality-ranked retrieval over a candidate pool
pulled via brain_recall rather than linear turn truncation. Falls
back to naive head+tail truncation when recall is unavailable so the
caller never sees a raised exception.
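The compress contract, sketched in Python (names and the score shape of
the recall pool are assumptions, not the plugin's actual API):

```python
def compress(turns, budget, recall=None):
    """Centrality-ranked retrieval when recall is available; otherwise
    naive head+tail truncation so the caller never sees an exception."""
    try:
        if recall is not None:
            pool = recall(turns)  # candidate pool via brain_recall
            ranked = sorted(pool, key=lambda t: t["score"], reverse=True)
            return [t["text"] for t in ranked[:budget]]
    except Exception:
        pass  # recall unavailable or failed: fall through to naive path
    head = turns[: budget // 2]
    tail = turns[-(budget - budget // 2):]
    return head + tail

turns = [f"turn-{i}" for i in range(10)]
assert compress(turns, 4) == ["turn-0", "turn-1", "turn-8", "turn-9"]
```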
Closes tasks.lthn.sh/view.php?id=74
Co-authored-by: Codex <noreply@openai.com>
Co-Authored-By: Virgil <virgil@lethean.io>
After the e2d1d32 → revert cycle the code moved back to php/ but
composer.json autoload paths stayed pointing at src/php/ (which
does not exist). package:discover fails with "Class
Core\\Mod\\Agentic\\Boot not found" as soon as a downstream
consumer composer-installs the dev branch.
Co-Authored-By: Virgil <virgil@lethean.io>
New endpoint GET /v1/brain/search?q=<query>&org=<>&project=<>&limit=<N>
that full-text-searches brain_memories via Elasticsearch using
BrainService::elasticSearch(). Separate from /v1/brain/recall (which is
vector/semantic via Qdrant) — this one is keyword/lexical.
Sits under the existing brain.read-auth middleware group.
Pest coverage (Http::fake for ES): Good (matches), Bad (invalid limit),
Ugly (empty query + filters).
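The request body the endpoint builds is, roughly, a bool query with a
lexical match plus term filters — an illustrative sketch (the `content`,
`org`, and `project` field names are assumptions about the index
mapping):

```python
def search_body(query, org=None, project=None, limit=10):
    """Lexical match query with optional scope filters — the keyword
    counterpart to the vector /v1/brain/recall path."""
    filters = [{"term": {k: v}} for k, v in
               (("org", org), ("project", project)) if v is not None]
    return {
        "size": limit,
        "query": {"bool": {"must": [{"match": {"content": query}}],
                           "filter": filters}},
    }

body = search_body("tls audit", org="lethean", limit=5)
assert body["size"] == 5
assert {"term": {"org": "lethean"}} in body["query"]["bool"]["filter"]
```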
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=64
Two introspection endpoints for OpenBrain:
- GET /v1/brain/tags — ES terms aggregation over tags.keyword, returns
{tag: count} pairs for UI filter chips
- GET /v1/brain/scopes — composite aggregation over {org, project},
returns the scope hierarchy present in the index
Sits under the existing brain.read-auth group in Routes/api.php. New
BrainService helpers for aggregation shape; reuses the elasticSearch
HTTP path added in #59.
Pest coverage with Http::fake for ES.
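The two aggregation bodies look roughly like this (a sketch; the
`.keyword` subfield names are assumptions about the mapping, and the
real helpers live in BrainService):

```python
def tags_agg_body():
    """Terms aggregation over tags.keyword -> {tag: count} pairs."""
    return {"size": 0,
            "aggs": {"tags": {"terms": {"field": "tags.keyword",
                                        "size": 100}}}}

def scopes_agg_body():
    """Composite aggregation over {org, project} -> scope hierarchy."""
    return {"size": 0,
            "aggs": {"scopes": {"composite": {"sources": [
                {"org": {"terms": {"field": "org.keyword"}}},
                {"project": {"terms": {"field": "project.keyword"}}},
            ]}}}}

assert tags_agg_body()["aggs"]["tags"]["terms"]["field"] == "tags.keyword"
```

`size: 0` keeps hits out of the response, so only the aggregation buckets
come back.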
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=65
New artisan command brain:prune {--older-than=90} {--chunk=100} {--dry-run}
that completes the soft-delete → hard-delete lifecycle by:
1. selecting BrainMemory::onlyTrashed() where deleted_at < now - N days
2. dispatching DeleteFromIndex for each (Qdrant + ES cleanup)
3. forceDelete()'ing the rows
--dry-run counts without dispatching.
Complements brain:clean (which cleans recent soft-deletes) with a
retention-bounded terminal cleanup.
Pest coverage: Good (dispatch + forceDelete on aged trashed rows), Bad
(invalid chunk), Ugly (--dry-run skips both dispatch and delete).
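The lifecycle reads roughly as follows — a Python sketch of the three
steps, with a plain list and callable standing in for the Eloquent query
and the DeleteFromIndex dispatch:

```python
from datetime import datetime, timedelta

def prune(trashed_rows, older_than_days=90, dry_run=False,
          dispatch=lambda row: None, now=None):
    """Soft-delete -> hard-delete: select aged trashed rows, dispatch
    index cleanup, force-delete. --dry-run counts without side effects."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=older_than_days)
    aged = [r for r in trashed_rows if r["deleted_at"] < cutoff]
    if dry_run:
        return len(aged)
    for row in aged:
        dispatch(row)               # Qdrant + ES cleanup job
        row["force_deleted"] = True
    return len(aged)

now = datetime(2026, 1, 1)
rows = [{"deleted_at": datetime(2025, 9, 1)},    # aged past 90 days
        {"deleted_at": datetime(2025, 12, 20)}]  # recent, kept
assert prune(rows, dry_run=True, now=now) == 1
assert "force_deleted" not in rows[0]  # dry-run had no side effects
```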
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=62
New artisan command brain:clean {--chunk=100} {--dry-run} that dispatches
the DeleteFromIndex job for soft-deleted BrainMemory rows (those in
onlyTrashed scope). Cleans up orphaned Qdrant + Elasticsearch index
entries that remain after a memory is soft-deleted.
--dry-run counts without dispatching.
php/tests/Feature/Console/BrainCleanCommandTest.php covers Good
(dispatches on trashed), Bad (invalid chunk), Ugly (--dry-run prevents
dispatch).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=61
remember() now writes the brain_memories row with indexed_at=null and
dispatches EmbedMemory::dispatch($memory->id) for async Qdrant + ES
indexing, instead of calling qdrantUpsert() synchronously. Response shape
matches the row state — caller gets the memory immediately, the Job
flips indexed_at once the Qdrant write succeeds.
Superseded rows still soft-delete synchronously (part of the remember
contract, not the indexing path).
php/tests/Feature/Services/BrainServiceRememberTest.php uses Queue::fake()
to assert EmbedMemory is dispatched and BrainService::qdrantUpsert() is
NOT called directly (subclass probe).
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=55
New artisan command brain:reindex {--all} {--chunk=100} that dispatches
the EmbedMemory job for brain memories needing (re)indexing. Without
--all, only memories where indexed_at IS NULL are dispatched; --all
re-embeds every memory (useful after a Qdrant collection wipe or
embedding model change). Uses chunkById for memory-safe iteration at
scale.
php/tests/Feature/Console/BrainReindexCommandTest.php covers Good
(unindexed-only default), Bad (invalid chunk), Ugly (--all flag).
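chunkById is keyset pagination: fetch rows with id greater than the last
seen id, ordered by id, so memory use stays flat no matter how large the
table gets. A minimal Python sketch of the same loop (the `fetch`
callable stands in for the query builder):

```python
def chunk_by_id(fetch, chunk=100):
    """Keyset pagination in the style of Laravel's chunkById."""
    last_id = 0
    while True:
        batch = fetch(after_id=last_id, limit=chunk)
        if not batch:
            return
        yield batch
        last_id = batch[-1]["id"]

rows = [{"id": i} for i in range(1, 251)]
def fetch(after_id, limit):
    return [r for r in rows if r["id"] > after_id][:limit]

sizes = [len(b) for b in chunk_by_id(fetch, chunk=100)]
assert sizes == [100, 100, 50]
```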
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=60
Fills in the elasticIndex/elasticDelete stubs added by #56 and #57, plus a
new elasticSearch() method used by the upcoming /v1/brain/search endpoint
(#64).
- elasticIndex(BrainMemory) → PUT /brain_memories/_doc/{id}
- elasticDelete(string $id) → DELETE /brain_memories/_doc/{id}
- elasticSearch(string $query, array $filters) → POST /brain_memories/_search
- ES URL default http://127.0.0.1:9200 (config override via
BRAIN_ELASTICSEARCH_URL env var)
- RuntimeException on HTTP failures (same pattern as qdrantUpsert)
php/tests/Feature/Services/BrainServiceElasticTest.php covers Good/Bad/Ugly
for index, delete, and search using Http::fake.
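The method-to-endpoint mapping, sketched in Python — a pluggable
`transport` callable stands in for Laravel's Http client so the
raise-on-failure contract is visible without a live cluster (the
`content` field name in the search body is an assumption):

```python
ES_URL = "http://127.0.0.1:9200"  # BRAIN_ELASTICSEARCH_URL default

def es_call(method, path, body=None, transport=None):
    """Single HTTP path shared by index/delete/search; RuntimeError on
    failure, same pattern as qdrantUpsert."""
    status, payload = transport(method, f"{ES_URL}{path}", body)
    if status >= 400:
        raise RuntimeError(f"ES {method} {path} -> HTTP {status}")
    return payload

def elastic_index(memory, transport):
    return es_call("PUT", f"/brain_memories/_doc/{memory['id']}",
                   memory, transport)

def elastic_delete(memory_id, transport):
    return es_call("DELETE", f"/brain_memories/_doc/{memory_id}",
                   None, transport)

def elastic_search(query, filters, transport):
    body = {"query": {"bool": {"must": [{"match": {"content": query}}],
                               "filter": filters}}}
    return es_call("POST", "/brain_memories/_search", body, transport)

fake = lambda method, url, body: (200, {"ok": True, "url": url})
assert elastic_index({"id": "m1"}, fake)["url"].endswith("/brain_memories/_doc/m1")
```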
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=59
Inverse of the EmbedMemory job (#56): removes a memory from Qdrant (and
the future Elasticsearch index) when brain_forget fires or a memory is
soft-deleted.
- php/Jobs/DeleteFromIndex.php — Laravel Job, 3 retries with backoff
- BrainService: qdrantDelete() private→public and now throws on HTTP
failure (was silent Log::warning — wouldn't trigger Job retry)
- elasticDelete() stub added (fills in with the ES integration ticket)
- php/tests/Feature/Jobs/DeleteFromIndexTest.php — success + HTTP-failure
paths via mocked Http
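Why the throw matters: a queue worker retries only when the handler
raises, so a swallowed Log::warning looks like success and the index
entry is silently orphaned. A minimal sketch of that retry contract
(illustrative; not the Laravel worker itself):

```python
def run_with_retries(job, tries=3):
    """Retry on raised RuntimeError up to `tries` attempts; a handler
    that swallows failures would return on attempt 1 and never retry."""
    for attempt in range(1, tries + 1):
        try:
            job()
            return attempt
        except RuntimeError:
            if attempt == tries:
                raise

calls = {"n": 0}
def flaky_delete():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("qdrant http failure")

assert run_with_retries(flaky_delete) == 3  # retried until success
```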
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=57
Implements the async-embedding pipeline's worker side:
- php/Jobs/EmbedMemory.php — Laravel Job that calls BrainService::embed()
+ qdrantUpsert() and sets indexed_at on success
- php/Migrations/…_add_indexed_at_to_brain_memories.php — nullable
timestamp + index, portable across pgsql/mariadb (hasColumn guard)
- BrainMemory: +indexed_at fillable + datetime cast + PHPDoc
- BrainService: qdrantUpsert() private→public so the Job can use it;
elasticIndex() stub added (to be implemented by the ES ticket)
- php/tests/Feature/Jobs/EmbedMemoryTest.php — Pest tests for success
path and Qdrant-failure path
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=56
- model_reasoning_effort: "extra-high" → "xhigh" (the variant name was
tightened in a recent codex CLI release; "extra-high" now fails config
load)
- Remove [model_providers.ollama] and [model_providers.lmstudio]
overrides — codex CLI 0.122+ reserves these as built-in provider IDs
and rejects overrides. The same localhost endpoints are used by the
built-ins, so the overrides were redundant anyway. Profiles that
reference `model_provider = "ollama"` continue to work via the
built-in.
Adds an `org` match filter between workspace_id and project in the Qdrant
payload filter chain. Multi-org isolation for OpenBrain memory retrieval.
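In Qdrant's filter format every condition under `must` has to hold, so
inserting the `org` match makes cross-org recall structurally
impossible. A sketch of the resulting filter (payload key names taken
from the description above):

```python
def scope_filter(workspace_id, org, project):
    """Qdrant payload filter: all three match conditions must hold,
    enforcing multi-org isolation on memory retrieval."""
    return {"must": [
        {"key": "workspace_id", "match": {"value": workspace_id}},
        {"key": "org",          "match": {"value": org}},
        {"key": "project",      "match": {"value": project}},
    ]}

f = scope_filter("ws-1", "lethean", "openbrain")
assert [c["key"] for c in f["must"]] == ["workspace_id", "org", "project"]
```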
Co-authored-by: Codex <noreply@openai.com>
Closes tasks.lthn.sh/view.php?id=58
Three related fixes so the brain DB works on Postgres, not just MariaDB:
1. config.php — brain charset/collation was hardcoded to utf8mb4 which
Postgres rejects as client_encoding. Now driver-aware: utf8 for
pgsql, utf8mb4 otherwise. Override via BRAIN_DB_CHARSET env var.
2. Migration 000008 (create_brain_memories) — self-referential FK on
supersedes_id was declared inside Schema::create{}, causing Postgres
to evaluate it before the PK index existed ('no unique constraint
matching given keys'). Split into Schema::create + separate
Schema::table to guarantee PK is in place when FK is added.
3. Migration 000009 (drop workspace FK) — try/catch inside the Blueprint
closure couldn't catch deferred SQL failures. Replaced with a
constraint-exists pre-query against information_schema, supporting
both pgsql and mariadb/mysql drivers. Fresh installs no longer fail
trying to drop a constraint that was never created.
Co-Authored-By: Virgil <virgil@lethean.io>
- `git add` / `git commit` fail with `Operation not permitted` on `.git/index.lock`
- even a plain `touch .git/...` is blocked
Co-Authored-By: Virgil <virgil@lethean.io>
Add warnings for silent filesystem write/delete failures in agentic
persistence helpers and record two adjacent hardening gaps for
follow-up.
Co-Authored-By: Virgil <virgil@lethean.io>
One caveat:
- This repo did not contain the requested `.core/TODO.md`; the actionable TODO list present here was `php/TODO.md`.
Co-Authored-By: Virgil <virgil@lethean.io>
Three load-bearing gaps between the agent RFC and the MCP surface, plus
one accounting hardening:
- RFC §9 Fleet Mode describes the 6-digit pairing-code bootstrap as the
primary way an unauthenticated node provisions its first AgentApiKey.
`handleAuthLogin` existed as an Action but never surfaced as an MCP
tool, so IDE/CLI callers had to shell out. Adds `agentic_auth_login`
under `registerPlatformTools` with a thin wrapper over the existing
handler so the platform contract stays single-sourced.
- `RegisterTools` was double-registering `agentic_scan` (bare
`mcp.AddTool` before the CORE_MCP_FULL gate, then again via
`AddToolRecorded` inside the gate). The second call silently replaced
the first and bypassed tool-registry accounting, so REST bridging and
metrics saw a zero for scan. Collapses both into a single recorded
registration before the gate.
- `registerPlanTools` and `registerWatchTool` were also fired twice in
the CORE_MCP_FULL branch. Removes the duplicates so the extended
registration list mirrors the always-on list exactly once.
- Switches `agentic_prep_workspace` from bare `mcp.AddTool` to
`AddToolRecorded` so prep-workspace participates in the same
accounting as every other RFC §6 tool.
TestPrep_RegisterTools_Good_RegistersCompletionTool now asserts all
three `agentic_auth_*` tools surface, covering the new login registration
alongside provision/revoke.
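The recorded-registration idea, sketched in Python (the real fix
collapses the duplicate Go calls into one AddToolRecorded; this sketch
goes one step further and makes duplicates raise instead of silently
replacing, as an illustration of why the accounting matters):

```python
class ToolRegistry:
    """Recorded registration: every tool goes through one path so the
    REST bridge and metrics see exactly what the MCP server exposes."""
    def __init__(self):
        self.tools = {}
        self.registrations = []  # accounting log

    def add_tool_recorded(self, name, handler):
        if name in self.tools:
            raise ValueError(f"duplicate registration: {name}")
        self.tools[name] = handler
        self.registrations.append(name)

reg = ToolRegistry()
reg.add_tool_recorded("agentic_scan", lambda req: "scan")
try:
    reg.add_tool_recorded("agentic_scan", lambda req: "scan-again")
except ValueError:
    pass  # the double-registration is loud, not a silent replace
assert reg.registrations.count("agentic_scan") == 1
```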
Co-Authored-By: Virgil <virgil@lethean.io>
go-process's OnStartup re-registers process.start/run/kill with
string-ID variants, clobbering the agent's custom handlers that return
*process.Process. This broke pid/queue helpers and 7 tests that need
the rich handle (TestPid_ProcessAlive_Good, TestQueue_CanDispatchAgent_Bad_AgentAtLimit,
etc). Register a Startable override service that reapplies the agent
handlers after every service finishes OnStartup — since services run in
registration order, "agentic.process-overrides" always runs last and
wins.
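The ordering argument in miniature (an illustrative Python sketch, not
the Go service runner): a handler map is a last-writer-wins surface, so
whichever service runs OnStartup last owns the handlers.

```python
def run_startup(services):
    """Run each service's OnStartup in registration order; later
    services overwrite any handler keys earlier ones registered."""
    handlers = {}
    for service in services:
        handlers.update(service())
    return handlers

go_process = lambda: {"process.start": "string-id handler"}
overrides  = lambda: {"process.start": "rich *process.Process handler"}

# "agentic.process-overrides" is registered after go-process, so it wins
handlers = run_startup([go_process, overrides])
assert handlers["process.start"] == "rich *process.Process handler"
```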
Co-Authored-By: Virgil <virgil@lethean.io>