What changed
- Added `outputSchema` support to the app-server APIs, mirroring `codex
exec --output-schema` behavior.
- V1 `sendUserTurn` now accepts `outputSchema` and constrains the final
assistant message for that turn.
- V2 `turn/start` now accepts `outputSchema` and constrains the final
assistant message for that turn (explicitly per-turn only).
Core behavior
- `Op::UserTurn` already supported `final_output_json_schema`; now V1
`sendUserTurn` forwards `outputSchema` into that field.
- `Op::UserInput` now carries `final_output_json_schema` for per-turn
settings updates; core maps it into
`SessionSettingsUpdate.final_output_json_schema` so it applies to the
created turn context.
- V2 `turn/start` does NOT persist the schema via `OverrideTurnContext`
(it’s applied only for the current turn). Other overrides
(cwd/model/etc) keep their existing persistent behavior.
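As an illustration of this per-turn scoping, the hypothetical V2 payloads below carry `outputSchema` on one `turn/start` and omit it on the next; the field shapes are sketched from this summary, not exact wire definitions.

```python
# Illustrative JSON Schema for the final assistant message.
schema = {"type": "object", "properties": {"answer": {"type": "string"}}}

# First turn: the schema rides along with the request...
turn_with_schema = {
    "method": "turn/start",
    "params": {
        "threadId": "t1",  # hypothetical thread id
        "input": [{"type": "text", "text": "Answer as JSON"}],
        "outputSchema": schema,
    },
}

# ...and the next turn simply omits it: nothing was persisted via
# OverrideTurnContext, so the assistant's output is unconstrained again.
next_turn = {
    "method": "turn/start",
    "params": {
        "threadId": "t1",
        "input": [{"type": "text", "text": "Now answer freely"}],
    },
}

print("outputSchema" in turn_with_schema["params"])  # True
print("outputSchema" in next_turn["params"])         # False
```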
API / docs
- `codex-rs/app-server-protocol/src/protocol/v1.rs`: add `output_schema:
Option<serde_json::Value>` to `SendUserTurnParams` (serialized as
`outputSchema`).
- `codex-rs/app-server-protocol/src/protocol/v2.rs`: add `output_schema:
Option<JsonValue>` to `TurnStartParams` (serialized as `outputSchema`).
- `codex-rs/app-server/README.md`: document `outputSchema` for
`turn/start` and clarify it applies only to the current turn.
- `codex-rs/docs/codex_mcp_interface.md`: document `outputSchema` for v1
`sendUserTurn` and v2 `turn/start`.
Tests added/updated
- New app-server integration tests asserting `outputSchema` is forwarded
into outbound `/responses` requests as `text.format`:
- `codex-rs/app-server/tests/suite/output_schema.rs`
- `codex-rs/app-server/tests/suite/v2/output_schema.rs`
- Added per-turn semantics tests (schema does not leak to the next
turn):
- `send_user_turn_output_schema_is_per_turn_v1`
- `turn_start_output_schema_is_per_turn_v2`
- Added protocol wire-compat tests for the merged op:
- serialize omits `final_output_json_schema` when `None`
- deserialize works when field is missing
- serialize includes `final_output_json_schema` when `Some(schema)`
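The same three wire-compat cases can be mimicked in a Python sketch (the real tests exercise Rust/serde; `serialize_op` and `deserialize_op` here are stand-ins for the derived implementations):

```python
import json

def serialize_op(items, final_output_json_schema=None):
    """Mimic serde's `skip_serializing_if = "Option::is_none"`: the
    field is omitted entirely when unset, not emitted as null."""
    op = {"type": "user_input", "items": items}
    if final_output_json_schema is not None:
        op["final_output_json_schema"] = final_output_json_schema
    return json.dumps(op)

def deserialize_op(raw):
    """A missing field deserializes to None (serde's Option default)."""
    op = json.loads(raw)
    op.setdefault("final_output_json_schema", None)
    return op

# Serialize omits the field when None...
assert "final_output_json_schema" not in serialize_op([])
# ...includes it when Some(schema)...
assert "final_output_json_schema" in serialize_op([], {"type": "object"})
# ...and deserialization tolerates its absence.
assert deserialize_op('{"type": "user_input", "items": []}')[
    "final_output_json_schema"] is None
```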
Call site updates (high level)
- Updated all `Op::UserInput { .. }` constructions to include
`final_output_json_schema`:
- `codex-rs/app-server/src/codex_message_processor.rs`
- `codex-rs/core/src/codex_delegate.rs`
- `codex-rs/mcp-server/src/codex_tool_runner.rs`
- `codex-rs/tui/src/chatwidget.rs`
- `codex-rs/tui2/src/chatwidget.rs`
- plus impacted core tests.
Validation
- `just fmt`
- `cargo test -p codex-core`
- `cargo test -p codex-app-server`
- `cargo test -p codex-mcp-server`
- `cargo test -p codex-tui`
- `cargo test -p codex-tui2`
- `cargo test -p codex-protocol`
- `cargo clippy --all-features --tests --profile dev --fix -- -D
warnings`
Codex MCP Server Interface [experimental]
This document describes Codex’s experimental MCP server interface: a JSON‑RPC API that runs over the Model Context Protocol (MCP) transport to control a local Codex engine.
- Status: experimental and subject to change without notice
- Server binary: `codex mcp-server` (or `codex-mcp-server`)
- Transport: standard MCP over stdio (JSON‑RPC 2.0, line‑delimited)
Overview
Codex exposes a small set of MCP‑compatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in `protocol/src/mcp_protocol.rs` and re‑used by the MCP server implementation in `mcp-server/`.
At a glance:
- Conversations
  - `newConversation` → start a Codex session
  - `sendUserMessage` / `sendUserTurn` → send user input into a conversation
  - `interruptConversation` → stop the current turn
  - `listConversations`, `resumeConversation`, `archiveConversation`
- Configuration and info
  - `getUserSavedConfig`, `setDefaultModel`, `getUserAgent`, `userInfo`
  - `model/list` → enumerate available models and reasoning options
- Auth
  - `account/read`, `account/login/start`, `account/login/cancel`, `account/logout`, `account/rateLimits/read`
  - notifications: `account/login/completed`, `account/updated`, `account/rateLimits/updated`
- Utilities
  - `gitDiffToRemote`, `execOneOffCommand`
- Approvals (server → client requests)
  - `applyPatchApproval`, `execCommandApproval`
- Notifications (server → client)
  - `loginChatGptComplete`, `authStatusChange`
  - `codex/event` stream with agent events
See code for full type definitions and exact shapes: `protocol/src/mcp_protocol.rs`.
Starting the server
Run Codex as an MCP server and connect an MCP client:
```shell
codex mcp-server | your_mcp_client
```
For a simple inspection UI, you can also try:
```shell
npx @modelcontextprotocol/inspector codex mcp-server
```
Use the separate `codex mcp` subcommand to manage configured MCP server launchers in `config.toml`.
Conversations
Start a new session with optional overrides:
Request `newConversation` params (subset):
- `model`: string model id (e.g. "o3", "gpt-5.1", "gpt-5.1-codex")
- `profile`: optional named profile
- `cwd`: optional working directory
- `approvalPolicy`: `untrusted` | `on-request` | `on-failure` | `never`
- `sandbox`: `read-only` | `workspace-write` | `external-sandbox` (honors `networkAccess` restricted/enabled) | `danger-full-access`
- `config`: map of additional config overrides
- `baseInstructions`: optional instruction override
- `compactPrompt`: optional replacement for the default compaction prompt
- `includePlanTool` / `includeApplyPatchTool`: booleans
Response: `{ conversationId, model, reasoningEffort?, rolloutPath }`
Send input to the active turn:
- `sendUserMessage` → enqueue items to the conversation
- `sendUserTurn` → structured turn with explicit `cwd`, `approvalPolicy`, `sandboxPolicy`, `model`, optional `effort`, `summary`, and optional `outputSchema` (a JSON Schema for the final assistant message)
For v2 threads, `turn/start` also accepts `outputSchema` to constrain the final assistant message for that turn.
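Sketching this, a v1 `sendUserTurn` request constrained to a simple schema could look like the following (the conversation id, sandbox shape, and schema are illustrative, not authoritative):

```python
import json

# Illustrative JSON Schema for the final assistant message.
schema = {
    "type": "object",
    "properties": {"summary": {"type": "string"}},
    "required": ["summary"],
}

request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "sendUserTurn",
    "params": {
        "conversationId": "c7b0-example",  # hypothetical id
        "items": [{"type": "text", "text": "Summarize the repo"}],
        "cwd": "/work/repo",
        "approvalPolicy": "on-request",
        "sandboxPolicy": {"mode": "workspace-write"},  # shape is illustrative
        "model": "gpt-5.1",
        "outputSchema": schema,  # applies to this turn only
    },
}
line = json.dumps(request)  # one line-delimited JSON-RPC message
```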
Interrupt a running turn: `interruptConversation`.
List/resume/archive: `listConversations`, `resumeConversation`, `archiveConversation`.
Models
Fetch the catalog of models available in the current Codex build with `model/list`. The request accepts optional pagination inputs:
- `pageSize` – number of models to return (defaults to a server-selected value)
- `cursor` – opaque string from the previous response’s `nextCursor`
Each response yields:
- `items` – ordered list of models. A model includes:
  - `id`, `model`, `displayName`, `description`
  - `supportedReasoningEfforts` – array of objects with:
    - `reasoningEffort` – one of `minimal` | `low` | `medium` | `high`
    - `description` – human-friendly label for the effort
  - `defaultReasoningEffort` – suggested effort for the UI
  - `isDefault` – whether the model is recommended for most users
- `nextCursor` – pass into the next request to continue paging (optional)
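The cursor contract above can be exercised with a simple loop; `fetch_page` below is a stand-in for issuing a `model/list` request over the transport, and the two-page server is a fake for illustration.

```python
def list_all_models(fetch_page, page_size=32):
    """Follow `nextCursor` until the server stops returning one."""
    models, cursor = [], None
    while True:
        params = {"pageSize": page_size}
        if cursor is not None:
            params["cursor"] = cursor
        result = fetch_page(params)
        models.extend(result["items"])
        cursor = result.get("nextCursor")
        if cursor is None:
            return models

# A fake two-page server keyed by cursor (model ids are illustrative):
pages = {
    None: {"items": [{"id": "gpt-5.1"}], "nextCursor": "p2"},
    "p2": {"items": [{"id": "gpt-5.1-codex"}]},
}
all_models = list_all_models(lambda p: pages[p.get("cursor")])
print([m["id"] for m in all_models])  # ['gpt-5.1', 'gpt-5.1-codex']
```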
Event stream
While a conversation runs, the server sends notifications:
- `codex/event` with the serialized Codex event payload. The shape matches `core/src/protocol.rs`’s `Event` and `EventMsg` types. Some notifications include a `_meta.requestId` to correlate with the originating request.
- Auth notifications via method names `loginChatGptComplete` and `authStatusChange`.
Clients should render events and, when present, surface approval requests (see next section).
Approvals (server → client)
When Codex needs approval to apply changes or run commands, the server issues JSON‑RPC requests to the client:
- `applyPatchApproval { conversationId, callId, fileChanges, reason?, grantRoot? }`
- `execCommandApproval { conversationId, callId, command, cwd, reason? }`
The client must reply with `{ decision: "allow" | "deny" }` for each request.
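A client-side handler for these reversed-direction requests might look like the sketch below; the `prompt_user` callback and the payload fields passed to it are hypothetical stand-ins for whatever UI the client provides.

```python
APPROVAL_METHODS = {"applyPatchApproval", "execCommandApproval"}

def handle_server_request(request, prompt_user):
    """Answer an approval request issued by the server.

    `request` is a parsed JSON-RPC request object; `prompt_user` is a
    hypothetical callback returning True to allow, False to deny.
    """
    if request.get("method") not in APPROVAL_METHODS:
        return None  # not an approval request
    decision = "allow" if prompt_user(request["params"]) else "deny"
    return {
        "jsonrpc": "2.0",
        "id": request["id"],  # echo the server's request id
        "result": {"decision": decision},
    }

reply = handle_server_request(
    {"jsonrpc": "2.0", "id": 7, "method": "execCommandApproval",
     "params": {"conversationId": "c7b0-example", "callId": "call-1",
                "command": ["ls"], "cwd": "/work"}},
    prompt_user=lambda params: True,
)
print(reply["result"])  # {'decision': 'allow'}
```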
Auth helpers
For the complete request/response shapes and flow examples, see the “Auth endpoints (v2)” section in the app‑server README.
Example: start and send a message
{ "jsonrpc": "2.0", "id": 1, "method": "newConversation", "params": { "model": "gpt-5.1", "approvalPolicy": "on-request" } }
Server responds:
{ "jsonrpc": "2.0", "id": 1, "result": { "conversationId": "c7b0…", "model": "gpt-5.1", "rolloutPath": "/path/to/rollout.jsonl" } }
Then send input:
{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello Codex" }] } }
While processing, the server emits `codex/event` notifications containing agent output, approvals, and status updates.
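Since the transport is line-delimited, each message is one JSON object per line. The framing can be sketched with an in-memory buffer standing in for the server's stdio pipes (payloads are illustrative):

```python
import io
import json

def write_message(stream, msg):
    """Line-delimited JSON-RPC framing: one JSON object per line."""
    stream.write(json.dumps(msg) + "\n")

def read_messages(stream):
    """Yield parsed messages, skipping blank lines."""
    for line in stream:
        if line.strip():
            yield json.loads(line)

# Round-trip a notification and a response through the buffer.
buf = io.StringIO()
write_message(buf, {"jsonrpc": "2.0", "method": "codex/event",
                    "params": {"msg": {"type": "agent_message"}}})
write_message(buf, {"jsonrpc": "2.0", "id": 2, "result": {}})
buf.seek(0)
methods = [m.get("method") for m in read_messages(buf)]
print(methods)  # ['codex/event', None]
```

Notifications carry a `method` but no `id`; responses carry an `id` but no `method`, which is how a client tells the two apart when demultiplexing the stream.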
Compatibility and stability
This interface is experimental. Method names, fields, and event shapes may evolve. For the authoritative schema, consult `protocol/src/mcp_protocol.rs` and the corresponding server wiring in `mcp-server/`.