Codex MCP Server Interface [experimental]

This document describes Codex's experimental MCP server interface: a JSON-RPC API that runs over the Model Context Protocol (MCP) transport to control a local Codex engine.

  • Status: experimental and subject to change without notice
  • Server binary: codex mcp-server (or codex-mcp-server)
  • Transport: standard MCP over stdio (JSON-RPC 2.0, line-delimited)

Overview

Codex exposes a small set of MCP-compatible methods to create and manage conversations, send user input, receive live events, and handle approval prompts. The types are defined in protocol/src/mcp_protocol.rs and reused by the MCP server implementation in mcp-server/.

At a glance:

  • Conversations
    • newConversation → start a Codex session
    • sendUserMessage / sendUserTurn → send user input into a conversation
    • interruptConversation → stop the current turn
    • listConversations, resumeConversation, archiveConversation
  • Configuration and info
    • getUserSavedConfig, setDefaultModel, getUserAgent, userInfo
    • model/list → enumerate available models and reasoning options
    • collaborationMode/list → enumerate collaboration mode presets (experimental)
  • Auth
    • account/read, account/login/start, account/login/cancel, account/logout, account/rateLimits/read
    • notifications: account/login/completed, account/updated, account/rateLimits/updated
  • Utilities
    • gitDiffToRemote, execOneOffCommand
  • Approvals (server → client requests)
    • applyPatchApproval, execCommandApproval
  • Notifications (server → client)
    • loginChatGptComplete, authStatusChange
    • codex/event stream with agent events

See code for full type definitions and exact shapes: protocol/src/mcp_protocol.rs.

Starting the server

Run Codex as an MCP server and connect an MCP client:

codex mcp-server | your_mcp_client

For a simple inspection UI, you can also try:

npx @modelcontextprotocol/inspector codex mcp-server

Use the separate codex mcp subcommand to manage configured MCP server launchers in config.toml.

Conversations

Start a new session with optional overrides:

Request newConversation params (subset):

  • model: string model id (e.g. "o3", "gpt-5.1", "gpt-5.1-codex")
  • profile: optional named profile
  • cwd: optional working directory
  • approvalPolicy: untrusted | on-request | on-failure (deprecated) | never
  • sandbox: read-only | workspace-write | external-sandbox (honors networkAccess restricted/enabled) | danger-full-access
  • config: map of additional config overrides
  • baseInstructions: optional instruction override
  • compactPrompt: optional replacement for the default compaction prompt
  • includePlanTool / includeApplyPatchTool: booleans

Response: { conversationId, model, reasoningEffort?, rolloutPath }

Send input to the active turn:

  • sendUserMessage → enqueue items to the conversation
  • sendUserTurn → structured turn with explicit cwd, approvalPolicy, sandboxPolicy, model, optional effort, summary, optional personality, and optional outputSchema (JSON Schema for the final assistant message)

Valid personality values are friendly, pragmatic, and none. When none is selected, the personality placeholder is replaced with an empty string.

For v2 threads, turn/start also accepts outputSchema to constrain the final assistant message for that turn.
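
For illustration, a sendUserTurn request that constrains the final assistant message with an outputSchema might look like the following. The field values are examples, and the exact shape of sandboxPolicy is defined in protocol/src/mcp_protocol.rs:

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "sendUserTurn",
  "params": {
    "conversationId": "c7b0…",
    "cwd": "/work/project",
    "approvalPolicy": "on-request",
    "sandboxPolicy": { "mode": "workspace-write" },
    "model": "gpt-5.1",
    "effort": "medium",
    "summary": "auto",
    "personality": "pragmatic",
    "outputSchema": {
      "type": "object",
      "properties": { "answer": { "type": "string" } },
      "required": ["answer"]
    },
    "items": [{ "type": "text", "text": "Summarize the latest changes" }]
  }
}
```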

Interrupt a running turn: interruptConversation.

List/resume/archive: listConversations, resumeConversation, archiveConversation.

For v2 threads, use thread/list with filters such as archived: true, cwd: "/path", or searchTerm: "needle" to narrow results, and thread/unarchive to restore archived rollouts to the active sessions directory (it returns the restored thread summary).
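
As a sketch, a thread/list request combining these filters could look like this (any pagination fields are defined in protocol/src/mcp_protocol.rs):

```json
{ "jsonrpc": "2.0", "id": 4, "method": "thread/list", "params": { "archived": true, "cwd": "/work/project", "searchTerm": "needle" } }
```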

Models

Fetch the catalog of models available in the current Codex build with model/list. The request accepts optional pagination inputs:

  • pageSize: number of models to return (defaults to a server-selected value)
  • cursor: opaque string from the previous response's nextCursor

Each response yields:

  • items: ordered list of models. A model includes:
    • id, model, displayName, description
    • supportedReasoningEfforts: array of objects with:
      • reasoningEffort: one of minimal|low|medium|high
      • description: human-friendly label for the effort
    • defaultReasoningEffort: suggested effort for the UI
    • supportsPersonality: whether the model supports personality-specific instructions
    • isDefault: whether the model is recommended for most users
    • upgrade: optional recommended upgrade model id
  • nextCursor: pass into the next request to continue paging (optional)
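
Putting the fields above together, a model/list exchange might look like this; the model id, labels, and cursor value are illustrative:

```json
{ "jsonrpc": "2.0", "id": 5, "method": "model/list", "params": { "pageSize": 1 } }
```

The server might respond:

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "result": {
    "items": [
      {
        "id": "gpt-5.1",
        "model": "gpt-5.1",
        "displayName": "GPT-5.1",
        "description": "General-purpose model",
        "supportedReasoningEfforts": [
          { "reasoningEffort": "medium", "description": "Balanced speed and depth" },
          { "reasoningEffort": "high", "description": "More thorough reasoning" }
        ],
        "defaultReasoningEffort": "medium",
        "supportsPersonality": true,
        "isDefault": true
      }
    ],
    "nextCursor": "opaque-cursor"
  }
}
```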

Collaboration modes (experimental)

Fetch the built-in collaboration mode presets with collaborationMode/list. This endpoint does not accept pagination and returns the full list in one response:

  • data: ordered list of collaboration mode masks (partial settings to apply on top of the base mode)
    • For tri-state fields like reasoning_effort and developer_instructions, omit the field to keep the current value, set it to null to clear it, or set a concrete value to update it.

When sending turn/start with collaborationMode, settings.developer_instructions: null means "use built-in instructions for the selected mode".
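
To illustrate the tri-state convention, a turn/start request applying a collaboration mode mask might look roughly like this; the nesting shown here is an assumption, and the authoritative shape is in protocol/src/mcp_protocol.rs:

```json
{
  "jsonrpc": "2.0",
  "id": 6,
  "method": "turn/start",
  "params": {
    "threadId": "019b…",
    "collaborationMode": {
      "settings": {
        "reasoning_effort": "high",
        "developer_instructions": null
      }
    },
    "items": [{ "type": "text", "text": "Plan the refactor" }]
  }
}
```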

Event stream

While a conversation runs, the server sends notifications:

  • codex/event with the serialized Codex event payload. The shape matches the Event and EventMsg types in core/src/protocol.rs. Some notifications include a _meta.requestId to correlate with the originating request.
  • Auth notifications via method names loginChatGptComplete and authStatusChange.

Clients should render events and, when present, surface approval requests (see next section).

Tool responses

The codex and codex-reply tools return standard MCP CallToolResult payloads. For compatibility with MCP clients that prefer structuredContent, Codex mirrors the content blocks inside structuredContent alongside the threadId.

Example:

{
  "content": [{ "type": "text", "text": "Hello from Codex" }],
  "structuredContent": {
    "threadId": "019bbed6-1e9e-7f31-984c-a05b65045719",
    "content": "Hello from Codex"
  }
}

Approvals (server → client)

When Codex needs approval to apply changes or run commands, the server issues JSON-RPC requests to the client:

  • applyPatchApproval { conversationId, callId, fileChanges, reason?, grantRoot? }
  • execCommandApproval { conversationId, callId, approvalId?, command, cwd, reason? }

The client must reply with { decision: "allow" | "deny" } for each request.
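
A typical exchange, with illustrative ids and command, might look like:

```json
{ "jsonrpc": "2.0", "id": 10, "method": "execCommandApproval", "params": { "conversationId": "c7b0…", "callId": "call_1", "command": ["git", "status"], "cwd": "/work/project", "reason": "Inspect working tree state" } }
```

The client replies:

```json
{ "jsonrpc": "2.0", "id": 10, "result": { "decision": "allow" } }
```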

Auth helpers

For the complete request/response shapes and flow examples, see the “Auth endpoints (v2)” section in the app-server README.

Example: start and send a message

{ "jsonrpc": "2.0", "id": 1, "method": "newConversation", "params": { "model": "gpt-5.1", "approvalPolicy": "on-request" } }

Server responds:

{ "jsonrpc": "2.0", "id": 1, "result": { "conversationId": "c7b0…", "model": "gpt-5.1", "rolloutPath": "/path/to/rollout.jsonl" } }

Then send input:

{ "jsonrpc": "2.0", "id": 2, "method": "sendUserMessage", "params": { "conversationId": "c7b0…", "items": [{ "type": "text", "text": "Hello Codex" }] } }

While processing, the server emits codex/event notifications containing agent output, approvals, and status updates.
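
For example, a codex/event notification carrying an agent message might look roughly like this; the msg payload shown is illustrative, and the authoritative shapes are the Event and EventMsg types in core/src/protocol.rs:

```json
{ "jsonrpc": "2.0", "method": "codex/event", "params": { "id": "evt_0", "msg": { "type": "agent_message", "message": "Hello from Codex" }, "_meta": { "requestId": 2 } } }
```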

Compatibility and stability

This interface is experimental. Method names, fields, and event shapes may evolve. For the authoritative schema, consult protocol/src/mcp_protocol.rs and the corresponding server wiring in mcp-server/.