Commit graph

3134 commits

Author SHA1 Message Date
Ahmed Ibrahim
ebdd8795e9
Turn-state sticky routing per turn (#9332)
- capture the `x-codex-turn-state` header from SSE/WS handshakes, store it
per `ModelClientSession` using `OnceLock`, echo it on turn-scoped requests,
and add SSE+WS integration tests for within-turn persistence +
cross-turn reset.

- keep `x-codex-turn-state` sticky within a user turn to maintain
routing continuity for retries/tool follow-ups.
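
For illustration, a minimal sketch of the capture/echo mechanics, assuming a
simplified `ModelClientSession` (field and method names here are invented,
not the PR's actual ones):

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for the real session type.
struct ModelClientSession {
    // Set once from the first SSE/WS handshake of a user turn.
    turn_state: OnceLock<String>,
}

impl ModelClientSession {
    // Capture the header value the first time the server sends it;
    // OnceLock ignores later writes, so the first value stays sticky.
    fn capture_turn_state(&self, header_value: &str) {
        let _ = self.turn_state.set(header_value.to_string());
    }

    // Echo the sticky value on turn-scoped follow-up requests.
    fn turn_state_header(&self) -> Option<&str> {
        self.turn_state.get().map(String::as_str)
    }
}
```

Cross-turn reset then falls out naturally if a fresh value is used per user
turn.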
2026-01-16 09:30:11 -08:00
Jeremy Rose
4125c825f9
add codex cloud list (#9324)
for listing cloud tasks.
2026-01-16 08:56:38 -08:00
Eric Traut
9147df0e60
Made codex exec resume --last consistent with codex resume --last (#9352)
PR #9245 made `codex resume --last` honor cwd, but I forgot to make the
same change for `codex exec resume --last`. This PR fixes the
inconsistency.

This addresses #8700
2026-01-16 08:53:47 -08:00
Matthew Zeng
131590066e
[device-auth] Add device code auth as a standalone option when a headless environment is detected. (#9333) 2026-01-16 08:35:03 -08:00
jif-oai
2691e1ce21
fix: flaky tests (#9373) 2026-01-16 17:24:41 +01:00
jif-oai
1668ca726f
chore: close pipe on non-pty processes (#9369)
Close the stdin of piped processes when starting them so that commands
like `rg` don't wait for content on stdin and hang forever.
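
For context, a sketch of the idea (assumed names, not the actual change):

```rust
use std::process::{Command, Stdio};

// Spawn a piped (non-PTY) process with stdin closed so tools like `rg`,
// which read from stdin when given no input path, see EOF immediately
// instead of blocking forever.
fn spawn_piped(program: &str, args: &[&str]) -> std::io::Result<std::process::Child> {
    Command::new(program)
        .args(args)
        .stdin(Stdio::null()) // closed stdin: reads return EOF right away
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
}
```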
2026-01-16 15:54:32 +01:00
jif-oai
7905e99d03
prompt collab (#9367) 2026-01-16 15:12:41 +01:00
jif-oai
7fc49697dd
feat: CODEX_CI (#9366) 2026-01-16 13:52:16 +00:00
jif-oai
c576756c81
feat: collab wait multiple IDs (#9294) 2026-01-16 12:05:04 +01:00
jif-oai
c1ac5223e1
feat: run user commands under user snapshot (#9357)
The initial goal is for user snapshots to have access to aliases, etc.
2026-01-16 11:49:28 +01:00
jif-oai
f5b3e738fb
feat: propagate approval request of unsubscribed threads (#9232)
A thread can now be spawned by another thread. In order to process the
approval requests of such sub-threads, we need to detect those events and
show them in the TUI.

This is a temporary solution while the UX is being figured out. This PR
should be reverted once done
2026-01-16 11:23:01 +01:00
Ahmed Ibrahim
0cce6ebd83
rename model turn to sampling request (#9336)
We have two types of turns now: model turns and user turns. It's always
confusing to refer to either. A model turn is basically a sampling
request.
2026-01-16 10:06:24 +01:00
Eric Traut
1fc72c647f
Fix token estimate during compaction (#9337)
This addresses #9287
2026-01-15 19:48:11 -08:00
Michael Bolin
99f47d6e9a
fix(mcp): include threadId in both content and structuredContent in CallToolResult (#9338) 2026-01-15 18:33:11 -08:00
Thanh Nguyen
a6324ab34b
fix(tui): only show 'Worked for' separator when actual work was performed (#8958)
Fixes #7919.

This PR addresses a TUI display bug where the "Worked for" separator
would appear prematurely during the planning stage.

**Changes:**
- Added `had_work_activity` flag to `ChatWidget` to track if actual work
(exec commands, MCP tool calls, patches) was performed in the current
turn.
- Updated `handle_streaming_delta` to only display the
`FinalMessageSeparator` if both `needs_final_message_separator` AND
`had_work_activity` are true.
- Updated `handle_exec_end_now`, `handle_patch_apply_end_now`, and
`handle_mcp_end_now` to set `had_work_activity = true`.
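
In spirit, the gating reduces to something like this (stand-in types, not
the actual `ChatWidget` code):

```rust
// Minimal sketch of the separator gating described above.
struct SeparatorState {
    needs_final_message_separator: bool,
    had_work_activity: bool,
}

impl SeparatorState {
    // Called from the exec/patch/MCP end handlers: real work happened.
    fn mark_work(&mut self) {
        self.had_work_activity = true;
    }

    // Both conditions must hold, so planning-only turns show no separator.
    fn should_show_separator(&self) -> bool {
        self.needs_final_message_separator && self.had_work_activity
    }
}
```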

**Verification:**
- Ran `cargo test -p codex-tui` to ensure no regressions.
- Manual verification confirms the separator now only appears after
actual work is completed.

---------

Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-16 01:41:43 +00:00
Dylan Hurd
3cabb24210
chore(windows) Enable Powershell UTF8 feature (#9195)
## Summary
We've received a lot of positive feedback about this feature, so we're
going to enable it by default.
2026-01-16 01:29:12 +00:00
charley-oai
1fa8350ae7
Add text element metadata to protocol, app server, and core (#9331)
The second part of breaking up PR
https://github.com/openai/codex/pull/9116

Summary:

- Add `TextElement` / `ByteRange` to protocol user inputs and user
message events with defaults.
- Thread `text_elements` through app-server v1/v2 request handling and
history rebuild.
- Preserve UI metadata only in user input/events (not `ContentItem`)
while keeping local image attachments in user events for rehydration.

Details:

- Protocol: `UserInput::Text` carries `text_elements`;
`UserMessageEvent` carries `text_elements` + `local_images`.
Serialization includes empty vectors for backward compatibility.
- app-server-protocol: v1 defines `V1TextElement` / `V1ByteRange` in
camelCase with conversions; v2 uses its own camelCase wrapper.
- app-server: v1/v2 input mapping includes `text_elements`; thread
history rebuilds include them.
- Core: user event emission preserves UI metadata while model history
stays clean; history replay round-trips the metadata.
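
As a rough sketch of the shape of the metadata (the actual protocol structs
may differ in fields and derives):

```rust
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize, Default)]
struct ByteRange {
    start: usize,
    end: usize,
}

#[derive(Serialize, Deserialize, Default)]
struct TextElement {
    // Span of the user text this element annotates.
    range: ByteRange,
}

// User inputs carry the metadata with serde defaults, so older payloads
// without the field (or with empty vectors) still round-trip.
#[derive(Serialize, Deserialize, Default)]
struct UserTextInput {
    text: String,
    #[serde(default)]
    text_elements: Vec<TextElement>,
}
```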
2026-01-15 17:26:41 -08:00
Yuvraj Angad Singh
004a74940a
fix: send non-null content on elicitation Accept (#9196)
## Summary

- When a user accepts an MCP elicitation request, send `content:
Some(json!({}))` instead of `None`
- MCP servers that use elicitation expect content to be present when
action is Accept
- This matches the expected behavior shown in tests at
`exec-server/tests/common/lib.rs:171`

## Root Cause

In `codex-rs/core/src/codex.rs`, the `resolve_elicitation` function
always sent `content: None`:

```rust
let response = ElicitationResponse {
    action,
    content: None,  // Always None, even for Accept
};
```

## Fix

Send an empty object when accepting:

```rust
let content = match action {
    ElicitationAction::Accept => Some(serde_json::json!({})),
    ElicitationAction::Decline | ElicitationAction::Cancel => None,
};
```

## Test plan

- [x] Code compiles with `cargo check -p codex-core`
- [x] Formatted with `just fmt`
- [ ] Integration test `accept_elicitation_for_prompt_rule` (requires
MCP server binary)

Fixes #9053
2026-01-15 14:20:57 -08:00
Ahmed Ibrahim
749b58366c
Revert empty paste image handling (#9318)
Revert #9049 behavior so empty paste events no longer trigger a
clipboard image read.
2026-01-15 14:16:09 -08:00
pap-openai
d886a8646c
remove needs_follow_up error log (#9272) 2026-01-15 21:20:54 +00:00
sayan-oai
169201b1b5
[search] allow explicitly disabling web search (#9249)
We're moving the `web_search` rollout server-side, so we need a way to
explicitly disable search and signal eligibility from the client.

- Add `x-oai-web-search-eligible` header that signifies whether the
request can have web search.
- Only attach the `web_search` tool when the resolved `WebSearchMode` is
`Live` or `Cached`.
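
Sketched in code, the gating looks roughly like this (types are stand-ins):

```rust
#[derive(Clone, Copy, PartialEq)]
enum WebSearchMode {
    Disabled,
    Cached,
    Live,
}

// Attach the `web_search` tool only when the resolved mode allows it;
// the eligibility header is sent independently of the tool list.
fn should_attach_web_search(mode: WebSearchMode) -> bool {
    matches!(mode, WebSearchMode::Live | WebSearchMode::Cached)
}
```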
2026-01-15 11:28:57 -08:00
xl-openai
42fa4c237f
Support SKILL.toml file. (#9125)
We’re introducing a new SKILL.toml to hold skill metadata so Codex can
deliver a richer Skills experience.

Initial focus is the interface block:
```
[interface]
display_name = "Optional user-facing name"
short_description = "Optional user-facing description"
icon_small = "./assets/small-400px.png"
icon_large = "./assets/large-logo.svg"
brand_color = "#3B82F6"
default_prompt = "Optional surrounding prompt to use the skill with"
```

All fields are exposed via the app server API.
`display_name` and `short_description` are consumed by the TUI.
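
A plausible deserialization shape for the interface block (the Rust struct
names here are hypothetical):

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct SkillManifest {
    interface: Option<SkillInterface>,
}

// Every field is optional, mirroring the example above.
#[derive(Deserialize)]
struct SkillInterface {
    display_name: Option<String>,
    short_description: Option<String>,
    icon_small: Option<String>,
    icon_large: Option<String>,
    brand_color: Option<String>,
    default_prompt: Option<String>,
}

fn parse_skill(toml_text: &str) -> Result<SkillManifest, toml::de::Error> {
    toml::from_str(toml_text)
}
```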
2026-01-15 11:20:04 -08:00
Eric Traut
5f10548772
Revert recent styling change for input prompt placeholder text (#9307)
A recent change in commit ccba737d26 modified the styling of the
placeholder text (e.g. "Implement {feature}") in the input box of the
CLI, changing it from non-italic to italic. I think this was likely
unintentional. It results in a bad display appearance on some terminal
emulators, and several users have complained about it.

This change switches back to non-italic styling, restoring the older
behavior.

It addresses #9262
2026-01-15 10:58:12 -08:00
jif-oai
da44569fef
nit: clean unified exec background processes (#9304)
To fix the occurrences where the End event is received after the
listener has stopped listening.
2026-01-15 18:34:33 +00:00
jif-oai
393a5a0311
chore: better orchestrator prompt (#9301) 2026-01-15 18:11:43 +00:00
viyatb-oai
55bda1a0f2
revert: remove pre-Landlock bind mounts apply (#9300)
**Description**

This removes the pre-Landlock read-only bind-mount step from the Linux
sandbox so filesystem restrictions rely solely on Landlock again.
`mounts.rs` is kept in place but left unused. The linux-sandbox README
is updated to match the new behavior and manual test expectations.
2026-01-15 09:47:57 -08:00
李琼羽
b4d240c3ae
fix(exec): improve stdin prompt decoding (#9151)
Fixes #8733.

- Read prompt from stdin as raw bytes and decode more helpfully.
- Strip UTF-8 BOM; decode UTF-16LE/UTF-16BE when a BOM is present.
- For other non-UTF8 input, fail with an actionable message (offset +
iconv hint).
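
The decoding strategy, sketched (not the exact implementation):

```rust
// Strip a UTF-8 BOM, honor UTF-16 BOMs, otherwise require valid UTF-8.
fn decode_prompt(bytes: &[u8]) -> Result<String, String> {
    // UTF-8 BOM: EF BB BF
    if let Some(rest) = bytes.strip_prefix(b"\xEF\xBB\xBF") {
        return String::from_utf8(rest.to_vec()).map_err(|e| e.to_string());
    }
    // UTF-16 BOMs: FF FE (LE) / FE FF (BE)
    let (le, rest) = match bytes {
        [0xFF, 0xFE, rest @ ..] => (true, rest),
        [0xFE, 0xFF, rest @ ..] => (false, rest),
        // No BOM: insist on UTF-8 (the real error also reports the byte
        // offset and suggests converting with iconv).
        _ => return String::from_utf8(bytes.to_vec()).map_err(|e| e.to_string()),
    };
    let units: Vec<u16> = rest
        .chunks_exact(2)
        .map(|c| {
            let pair = [c[0], c[1]];
            if le { u16::from_le_bytes(pair) } else { u16::from_be_bytes(pair) }
        })
        .collect();
    String::from_utf16(&units).map_err(|e| e.to_string())
}
```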

Tests: `cargo test -p codex-exec`.
2026-01-15 09:29:05 -08:00
gt-oai
f6df1596eb
Propagate MCP disabled reason (#9207)
Indicate why MCP servers are disabled when they are disabled by
requirements:

```
➜  codex git:(main) ✗ just codex mcp list
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.27s
     Running `target/debug/codex mcp list`
Name         Command          Args  Env  Cwd  Status                                                                  Auth
docs         docs-mcp         -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported
hello_world  hello-world-mcp  -     -    -    disabled: requirements (MDM com.openai.codex:requirements_toml_base64)  Unsupported

➜  codex git:(main) ✗ just c
cargo run --bin codex -- "$@"
    Finished `dev` profile [unoptimized + debuginfo] target(s) in 0.90s
     Running `target/debug/codex`
╭─────────────────────────────────────────────╮
│ >_ OpenAI Codex (v0.0.0)                    │
│                                             │
│ model:     gpt-5.2 xhigh   /model to change │
│ directory: ~/code/codex/codex-rs            │
╰─────────────────────────────────────────────╯

/mcp

🔌  MCP Tools

  • No MCP tools available.

  • docs (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)

  • hello_world (disabled)
    • Reason: requirements (MDM com.openai.codex:requirements_toml_base64)
```
2026-01-15 17:24:00 +00:00
Eric Traut
ae96a15312
Changed codex resume --last to honor the current cwd (#9245)
This PR changes `codex resume --last` to work consistently with `codex
resume`. Namely, it filters based on the cwd when selecting the last
session. It also supports the `--all` modifier as an override.
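
Roughly, the selection rule (with stand-in types):

```rust
use std::path::{Path, PathBuf};

// Hypothetical session metadata for illustration.
struct SessionMeta {
    cwd: PathBuf,
    started_at: u64, // epoch seconds
}

// `--last` picks the most recent session, filtered to the current cwd
// unless `--all` overrides the filter.
fn pick_last<'a>(sessions: &'a [SessionMeta], cwd: &Path, all: bool) -> Option<&'a SessionMeta> {
    sessions
        .iter()
        .filter(|s| all || s.cwd.as_path() == cwd)
        .max_by_key(|s| s.started_at)
}
```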

This addresses #8700
2026-01-15 17:05:08 +00:00
jif-oai
3fc487e0e0
feat: basic tui for event emission (#9209) 2026-01-15 15:53:02 +00:00
jif-oai
faeb08c1e1
feat: add interrupt capabilities to send_input (#9276) 2026-01-15 14:59:07 +00:00
jif-oai
05b960671d
feat: add agent roles to collab tools (#9275)
Add an `agent_type` parameter to the collab tool `spawn_agent` that
names a preset to apply to the config when spawning the agent.
2026-01-15 13:33:52 +00:00
jif-oai
bad4c12b9d
feat: collab tools app-server event mapping (#9213) 2026-01-15 09:03:26 +00:00
viyatb-oai
2259031d64
fix: fallback to Landlock-only when user namespaces unavailable and set PR_SET_NO_NEW_PRIVS early (#9250)
fixes https://github.com/openai/codex/issues/9236

### Motivation
- Prevent sandbox setup from failing when unprivileged user namespaces
are denied so Landlock-only protections can still be applied.
- Ensure `PR_SET_NO_NEW_PRIVS` is set before installing seccomp and
Landlock restrictions to avoid kernel `EPERM`/`LandlockRestrict`
ordering issues.

### Description
- Add `is_permission_denied` helper that detects `EPERM` /
`PermissionDenied` from `CodexErr` to drive fallback logic.
- In `apply_read_only_mounts` skip read-only bind-mount setup and return
`Ok(())` when `unshare_user_and_mount_namespaces()` fails with
permission-denied so Landlock rules can still be installed.
- Add `set_no_new_privs()` and call it from
`apply_sandbox_policy_to_current_thread` before installing seccomp
filters and Landlock rules when disk or network access is restricted.
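
The ordering fix, sketched with the `libc` crate (close to, but not
necessarily identical to, the PR's code):

```rust
// no_new_privs must be set before seccomp/Landlock are installed, or the
// kernel can reject them with EPERM for unprivileged callers.
fn set_no_new_privs() -> std::io::Result<()> {
    // prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0)
    let rc = unsafe { libc::prctl(libc::PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) };
    if rc != 0 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(())
}
```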
2026-01-14 22:24:34 -08:00
Ahmed Ibrahim
a09711332a
Add migration_markdown in model_info (#9219)
Next step would be to clean up Model Upgrade in the model presets.

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: aibrahim-oai <219906144+aibrahim-oai@users.noreply.github.com>
2026-01-15 01:55:22 +00:00
charley-oai
4a9c2bcc5a
Add text element metadata to types (#9235)
Initial type tweaking PR to make the diff of
https://github.com/openai/codex/pull/9116 smaller

This should not change any behavior; it just adds some fields to types.
2026-01-14 16:41:50 -08:00
Michael Bolin
2a68b74b9b
fix: increase timeout for release builds from 30 to 60 minutes (#9242)
Windows builds have been tripping the 30 minute timeout. For sure, we
need to improve this, but as a quick fix, let's just increase the
timeout.

Perhaps we should switch to `lto = "thin"` for release builds, at least
for Windows:


3728db11b8/codex-rs/Cargo.toml (L288)

See https://doc.rust-lang.org/cargo/reference/profiles.html#lto for
details.
2026-01-15 00:38:25 +00:00
Michael Bolin
3728db11b8
fix: eliminate unnecessary clone() for each SSE event (#9238)
Given how many SSE events we get, this seems worth fixing.
2026-01-15 00:06:09 +00:00
viyatb-oai
e59e7d163d
fix: correct linux sandbox uid/gid mapping after unshare (#9234)
fixes https://github.com/openai/codex/issues/9233
## Summary
- capture effective uid/gid before unshare for user namespace maps
- pass captured ids into uid/gid map writer
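
For context, the ordering matters because the ids must be read before the
namespace switch; a sketch using the `libc` crate (not the PR's exact code):

```rust
use std::fs;

fn unshare_with_id_maps() -> std::io::Result<()> {
    // Capture effective ids *before* unshare: afterwards the process sees
    // the overflow ids (65534) instead of its real ones.
    let uid = unsafe { libc::geteuid() };
    let gid = unsafe { libc::getegid() };

    if unsafe { libc::unshare(libc::CLONE_NEWUSER | libc::CLONE_NEWNS) } != 0 {
        return Err(std::io::Error::last_os_error());
    }

    // Map the pre-unshare ids to themselves inside the new namespace.
    fs::write("/proc/self/setgroups", "deny")?;
    fs::write("/proc/self/gid_map", format!("{gid} {gid} 1"))?;
    fs::write("/proc/self/uid_map", format!("{uid} {uid} 1"))?;
    Ok(())
}
```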

## Testing
- just fmt
- just fix -p codex-linux-sandbox
- cargo test -p codex-linux-sandbox
2026-01-14 15:35:53 -08:00
willwang-openai
71a2973fd9
upgrade runners in rust-ci.yml to use the larger runners (#9106)
Upgrades runners in rust-ci.yml to larger runners:

ubuntu-24.04 (x64 and arm64) -> custom 16-core ubuntu 24.04 runners
macos-14 -> macos-15-xlarge
[TODO] windows (x64 and arm64) -> custom 16-core windows runners
2026-01-14 15:22:59 -08:00
Eric Traut
24b88890cb
Updated issue template to ask for terminal emulator (#9231)
Co-authored-by: Josh McKinney <joshka@openai.com>
2026-01-14 23:00:15 +00:00
sayan-oai
5e426ac270
add WebSearchMode enum (#9216)
### What
Add a `WebSearchMode` enum (disabled, cached, live; defaults to cached) to
config + V2 protocol. This enum takes precedence over the legacy flags
`web_search_cached`, `web_search_request`, and `tools.web_search`.

Keep `--search` as live.
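
An illustrative resolution order (the flag-to-mode mapping is assumed from
the description above, not copied from the code):

```rust
#[derive(Clone, Copy, Default)]
enum WebSearchMode {
    Disabled,
    #[default]
    Cached,
    Live,
}

// The new enum wins; legacy flags are consulted only when it is unset.
fn resolve(mode: Option<WebSearchMode>, legacy_live: bool, legacy_cached: bool) -> WebSearchMode {
    match mode {
        Some(m) => m,
        None if legacy_live => WebSearchMode::Live,
        None if legacy_cached => WebSearchMode::Cached,
        None => WebSearchMode::default(), // cached
    }
}
```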

### Tests
Added tests
2026-01-14 12:51:42 -08:00
Josh McKinney
27da8a68d3
fix(tui): disable double-press quit shortcut (#9220)
Disables the default Ctrl+C/Ctrl+D double-press quit UX (keeps the code
path behind a const) while we rethink the quit/interrupt flow.

Tests:
- just fmt
- cargo clippy --fix --all-features --tests --allow-dirty --allow-no-vcs
-p codex-tui
- cargo test -p codex-tui --lib
2026-01-14 20:28:18 +00:00
Josh McKinney
0471ddbe74
fix(tui2): align Steer submit keys (#9218)
- Remove legacy Ctrl+K queuing in tui2; Tab is the queue key.
- Make Enter queue when Steer is disabled and submit immediately when
Steer is enabled.
- Add Steer keybinding docs on both tui and tui2 chat composers.
2026-01-14 19:35:55 +00:00
pakrym-oai
e6d2ef432d
Rename hierarchical_agents to child_agents_md (#9215)
Clearer name
2026-01-14 19:14:24 +00:00
jif-oai
577e1fd1b2
feat: adding piped process to replace PTY when needed (#8797) 2026-01-14 18:44:04 +00:00
gt-oai
fe1e0da102
s/mcp_server_requirements/mcp_servers (#9212)
A simple `s/mcp_server_requirements/mcp_servers/g` for an unreleased
feature. As @bolinfest correctly pointed out, it's already in
`requirements.toml`, so the `_requirements` suffix is redundant.
2026-01-14 18:41:52 +00:00
pakrym-oai
e958d0337e
Log headers in trace mode (#9214)
To enable:

```
export RUST_LOG="warn,codex_=trace"
```

Sample: 
```
Request completed method=POST url=https://chatgpt.com/backend-api/codex/responses status=200 OK headers={"date": "Wed, 14 Jan 2026 18:21:21 GMT", "transfer-encoding": "chunked", "connection": "keep-alive", "x-codex-plan-type": "business", "x-codex-primary-used-percent": "3", "x-codex-secondary-used-percent": "6", "x-codex-primary-window-minutes": "300", "x-codex-primary-over-secondary-limit-percent": "0", "x-codex-secondary-window-minutes": "10080", "x-codex-primary-reset-after-seconds": "9944", "x-codex-secondary-reset-after-seconds": "171121", "x-codex-primary-reset-at": "1768424824", "x-codex-secondary-reset-at": "1768586001", "x-codex-credits-has-credits": "False", "x-codex-credits-balance": "", "x-codex-credits-unlimited": "False", "x-models-etag": "W/\"7a7ffbc83c159dbd7a2a73aaa9c91b7a\"", "x-oai-request-id": "ffedcd30-6d8a-4c4d-be10-8ebb23c142c8", "x-envoy-upstream-service-time": "417", "x-openai-proxy-wasm": "v0.1", "cf-cache-status": "DYNAMIC", "set-cookie": "__cf_bm=xFKeaMbWNbKO5ZX.K5cJBhj34OA1QvnF_3nkdMThjlA-1768414881-1.0.1.1-uLpsE_BDkUfcmOMaeKVQmv_6_2ytnh_R3lO_il5N5K3YPQEkBo0cOMTdma6bK0Gz.hQYcIesFwKIJht1kZ9JKqAYYnjgB96hF4.sii2U3cE; path=/; expires=Wed, 14-Jan-26 18:51:21 GMT; domain=.chatgpt.com; HttpOnly; Secure; SameSite=None", "report-to": "{\"endpoints\":[{\"url\":\"https:\/\/a.nel.cloudflare.com\/report\/v4?s=4Kc7g4zUhKkIm3xHuB6ba4jyIUqqZ07ETwIPAYQASikRjA8JesbtUKDP9tSrZ5PnzWldaiSz5dZVQFI579LEsCMlMUSelTvmyQ8j4FbFDawi%2FprWZ5iRePiaSalr\"}],\"group\":\"cf-nel\",\"max_age\":604800}", "nel": "{\"success_fraction\":0.01,\"report_to\":\"cf-nel\",\"max_age\":604800}", "strict-transport-security": "max-age=31536000; includeSubDomains; preload", "x-content-type-options": "nosniff", "cross-origin-opener-policy": "same-origin-allow-popups", "referrer-policy": "strict-origin-when-cross-origin", "server": "cloudflare", "cf-ray": "9bdf270adc7aba3a-SEA"} version=HTTP/1.1
```
2026-01-14 18:38:12 +00:00
Ahmed Ibrahim
8e937fbba9
Get model on session configured (#9191)
- Don't try to precompute model unless you know it from `config`
- Block `/model` on session configured
- Queue messages until session configured
- show "loading" in status until session configured
2026-01-14 10:20:41 -08:00
Celia Chen
02f67bace8
fix: Emit response.completed immediately for Responses SSE (#9170)
We see Windows test failures like this:
https://github.com/openai/codex/actions/runs/20930055601/job/60138344260.

The issue is that SSE connections sometimes remain open after the
completion event, especially on Windows. We should emit the completion
event and return immediately. This is consistent with the protocol:

> The Model streams responses back in an SSE, which are collected until
"completed" message and the SSE terminates

from
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/docs/protocol_v1.md#L37.

This helps us achieve parity with the responses websocket logic here:
https://github.com/openai/codex/blob/dev/cc/fix-windows-test/codex-rs/codex-api/src/endpoint/responses_websocket.rs#L220-L227.
2026-01-14 10:05:00 -08:00