Commit graph

95 commits

Author SHA1 Message Date
Michael Bolin
7ecd0dc9b3
fix: stop honoring CODEX_MANAGED_CONFIG_PATH environment variable in production (#8762) 2026-01-06 07:10:27 -08:00
Anton Panasenko
807f8a43c2
feat: expose outputSchema to user_turn/turn_start app_server API (#8377)
What changed
- Added `outputSchema` support to the app-server APIs, mirroring `codex
exec --output-schema` behavior.
- V1 `sendUserTurn` now accepts `outputSchema` and constrains the final
assistant message for that turn.
- V2 `turn/start` now accepts `outputSchema` and constrains the final
assistant message for that turn (explicitly per-turn only).

Core behavior
- `Op::UserTurn` already supported `final_output_json_schema`; now V1
`sendUserTurn` forwards `outputSchema` into that field.
- `Op::UserInput` now carries `final_output_json_schema` for per-turn
settings updates; core maps it into
`SessionSettingsUpdate.final_output_json_schema` so it applies to the
created turn context.
- V2 `turn/start` does NOT persist the schema via `OverrideTurnContext`
(it’s applied only for the current turn). Other overrides
(cwd/model/etc) keep their existing persistent behavior.

API / docs
- `codex-rs/app-server-protocol/src/protocol/v1.rs`: add `output_schema:
Option<serde_json::Value>` to `SendUserTurnParams` (serialized as
`outputSchema`).
- `codex-rs/app-server-protocol/src/protocol/v2.rs`: add `output_schema:
Option<JsonValue>` to `TurnStartParams` (serialized as `outputSchema`).
- `codex-rs/app-server/README.md`: document `outputSchema` for
`turn/start` and clarify it applies only to the current turn.
- `codex-rs/docs/codex_mcp_interface.md`: document `outputSchema` for v1
`sendUserTurn` and v2 `turn/start`.
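
A minimal sketch of the field addition above, with surrounding fields elided; the exact serde attributes are assumptions chosen to match the stated wire behavior (`outputSchema` on the wire, omitted when `None`, and a missing field deserializing to `None`):

```rust
use serde::{Deserialize, Serialize};
use serde_json::Value as JsonValue;

// Sketch only: the real struct has many more fields.
#[derive(Serialize, Deserialize)]
#[serde(rename_all = "camelCase")]
struct TurnStartParams {
    // Serialized as `outputSchema`; omitted when `None`.
    #[serde(skip_serializing_if = "Option::is_none")]
    output_schema: Option<JsonValue>,
}
```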

Tests added/updated
- New app-server integration tests asserting `outputSchema` is forwarded
into outbound `/responses` requests as `text.format`:
  - `codex-rs/app-server/tests/suite/output_schema.rs`
  - `codex-rs/app-server/tests/suite/v2/output_schema.rs`
- Added per-turn semantics tests (schema does not leak to the next
turn):
  - `send_user_turn_output_schema_is_per_turn_v1`
  - `turn_start_output_schema_is_per_turn_v2`
- Added protocol wire-compat tests for the merged op:
  - serialize omits `final_output_json_schema` when `None`
  - deserialize works when field is missing
  - serialize includes `final_output_json_schema` when `Some(schema)`

Call site updates (high level)
- Updated all `Op::UserInput { .. }` constructions to include
`final_output_json_schema`:
  - `codex-rs/app-server/src/codex_message_processor.rs`
  - `codex-rs/core/src/codex_delegate.rs`
  - `codex-rs/mcp-server/src/codex_tool_runner.rs`
  - `codex-rs/tui/src/chatwidget.rs`
  - `codex-rs/tui2/src/chatwidget.rs`
  - plus impacted core tests.

Validation
- `just fmt`
- `cargo test -p codex-core`
- `cargo test -p codex-app-server`
- `cargo test -p codex-mcp-server`
- `cargo test -p codex-tui`
- `cargo test -p codex-tui2`
- `cargo test -p codex-protocol`
- `cargo clippy --all-features --tests --profile dev --fix -- -D
warnings`
2026-01-05 10:27:00 -08:00
Michael Bolin
e61bae12e3
feat: introduce codex-utils-cargo-bin as an alternative to assert_cmd::Command (#8496)
This PR introduces a `codex-utils-cargo-bin` utility crate that
wraps/replaces our use of `assert_cmd::Command` and
`escargot::CargoBuild`.

As you can infer from the introduction of `buck_project_root()` in this
PR, I am attempting to make it possible to build Codex under
[Buck2](https://buck2.build) as well as `cargo`. With Buck2, I hope to
achieve faster incremental local builds (largely due to Buck2's
[dice](https://buck2.build/docs/insights_and_knowledge/modern_dice/)
build strategy, as well as benefits from its local build daemon) as well
as faster CI builds if we invest in remote execution and caching.

See
https://buck2.build/docs/getting_started/what_is_buck2/#why-use-buck2-key-advantages
for more details about the performance advantages of Buck2.

Buck2 enforces stronger requirements in terms of build and test
isolation. It discourages assumptions about absolute paths (which is key
to enabling remote execution). Because the `CARGO_BIN_EXE_*` environment
variables that Cargo provides are absolute paths (which
`assert_cmd::Command` reads), this is a problem for Buck2, which is why
we need this `codex-utils-cargo-bin` utility.

My WIP-Buck2 setup sets the `CARGO_BIN_EXE_*` environment variables
passed to a `rust_test()` build rule as relative paths.
`codex-utils-cargo-bin` will resolve these values to absolute paths,
when necessary.
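
A minimal sketch of that resolution step (the function name and `project_root` parameter are hypothetical, not the crate's actual API):

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: Cargo supplies absolute `CARGO_BIN_EXE_*` values,
// while the WIP Buck2 setup supplies relative ones, so relative values are
// joined against the project root before spawning the binary.
fn resolve_cargo_bin_exe(value: &str, project_root: &Path) -> PathBuf {
    let path = PathBuf::from(value);
    if path.is_absolute() {
        path
    } else {
        project_root.join(path)
    }
}
```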


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/8496).
* #8498
* __->__ #8496
2025-12-23 19:29:32 -08:00
Ahmed Ibrahim
40de81e7af
Remove reasoning format (#8484)
This isn't a very useful parameter.

The current logic:
```
if model puts `**` in their reasoning, trim it and visualize the header.
if couldn't trim: don't render
if model doesn't support: don't render
```

We can simplify to:
```
if could trim, visualize header.
if not, don't render
```
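
A minimal sketch of the simplified rule (names hypothetical):

```rust
// Hypothetical sketch: return the header if the leading `**…**` marker can
// be trimmed from the reasoning text; otherwise render nothing.
fn reasoning_header(text: &str) -> Option<&str> {
    let rest = text.strip_prefix("**")?;
    let (header, _body) = rest.split_once("**")?;
    Some(header)
}
```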
2025-12-23 16:01:46 -08:00
Michael Bolin
e27d9bd88f
feat: honor /etc/codex/config.toml (#8461)
This adds logic to load `/etc/codex/config.toml` and associate it with
`ConfigLayerSource::System` on UNIX. I refactored the code so it shares
logic with the creation of the `ConfigLayerSource::User` layer.
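
A minimal sketch of the system-layer load, assuming the `toml` crate and a hypothetical helper name:

```rust
use std::path::Path;

// Hypothetical sketch: on UNIX, read /etc/codex/config.toml if present and
// parse it as a TOML value for the System config layer.
#[cfg(unix)]
fn load_system_config() -> Option<toml::Value> {
    let contents = std::fs::read_to_string(Path::new("/etc/codex/config.toml")).ok()?;
    contents.parse::<toml::Value>().ok()
}
```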
2025-12-22 19:06:04 -08:00
Ahmed Ibrahim
6b2ef216f1
remove minimal client version (#8447)
This value isn't needed by the client.
2025-12-22 12:52:24 -08:00
Ahmed Ibrahim
f0dc6fd3c7
Rename OpenAI models to models manager (#8346)
2025-12-19 16:20:05 -08:00
Michael Bolin
b903285746
feat: migrate to new constraint-based loading strategy (#8251)
This is a significant change to how layers of configuration are applied.
In particular, the `ConfigLayerStack` now has two important fields:

- `layers: Vec<ConfigLayerEntry>`
- `requirements: ConfigRequirements`

We merge `TomlValue`s across the layers, but they are subject to
`ConfigRequirements` before creating a `Config`.

How I would review this PR:

- start with `codex-rs/app-server-protocol/src/protocol/v2.rs` and note
the new variants added to the `ConfigLayerSource` enum:
`LegacyManagedConfigTomlFromFile` and `LegacyManagedConfigTomlFromMdm`
- note that `ConfigLayerSource` now has a `precedence()` method and
implements `PartialOrd`
- `codex-rs/core/src/config_loader/layer_io.rs` is responsible for
loading "admin" preferences from `/etc/codex/managed_config.toml` and
MDM. Because `/etc/codex/managed_config.toml` is now deprecated in favor
of `/etc/codex/requirements.toml` and `/etc/codex/config.toml`, we now
include some extra information on the `LoadedConfigLayers` returned in
`layer_io.rs`.
- `codex-rs/core/src/config_loader/mod.rs` has major changes to
`load_config_layers_state()`, which is what produces `ConfigLayerStack`.
The docstring has the new specification and describes the various layers
that will be loaded and the precedence order.
- It uses the information from `LoaderOverrides` "twice," both times in
the spirit of legacy support:
- We use one instance to derive an instance of `ConfigRequirements`.
Currently, the only field in `managed_config.toml` that contributes to
`ConfigRequirements` is `approval_policy`. This PR introduces
`Constrained::allow_only()` to support this (see the sketch after this
list).
- We use a clone of `LoaderOverrides` to derive
`ConfigLayerSource::LegacyManagedConfigTomlFromFile` and
`ConfigLayerSource::LegacyManagedConfigTomlFromMdm` layers, as
appropriate. As before, this ends up being a "best effort" at enterprise
controls, but its enforcement is not guaranteed the way it is for
`ConfigRequirements`.
- Now we only create a "user" layer if `$CODEX_HOME/config.toml` exists.
(Previously, a user layer was always created for `ConfigLayerStack`.)
- Similarly, we only add a "session flags" layer if there are CLI
overrides.
- `config_loader/state.rs` contains the updated implementation for
`ConfigLayerStack`. Note the public API is largely the same as before,
but the implementation is quite different. We leverage the fact that
`ConfigLayerSource` is now `PartialOrd` to ensure layers are in the
correct order.
- A `Config` constructed via `ConfigBuilder.build()` will use
`load_config_layers_state()` to create the `ConfigLayerStack` and use
the associated `ConfigRequirements` when constructing the `Config`
object.
- That said, a `Config` constructed via
`Config::load_from_base_config_with_overrides()` does _not_ yet use
`ConfigBuilder`, so it creates a `ConfigRequirements::default()` instead
of loading a proper `ConfigRequirements`. I will fix this in a
subsequent PR.
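
A minimal sketch of the two mechanisms above; the type and method names follow the PR, but the variant set, precedence values, and `Constrained` semantics are assumptions:

```rust
use std::cmp::Ordering;

// Sketch: layers sort by a numeric precedence so the stack can merge
// `TomlValue`s in a well-defined order. Variants and values are assumed.
#[derive(PartialEq, Eq)]
enum ConfigLayerSource {
    User,
    SessionFlags,
    System,
}

impl ConfigLayerSource {
    fn precedence(&self) -> u8 {
        match self {
            ConfigLayerSource::User => 0,
            ConfigLayerSource::SessionFlags => 1,
            ConfigLayerSource::System => 2,
        }
    }
}

impl PartialOrd for ConfigLayerSource {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        self.precedence().partial_cmp(&other.precedence())
    }
}

// Sketch: a requirement that only admits values from an allowed set, checked
// after the layers are merged (e.g. for `approval_policy`).
struct Constrained<T> {
    allowed: Vec<T>,
}

impl<T: PartialEq> Constrained<T> {
    fn allow_only(allowed: Vec<T>) -> Self {
        Constrained { allowed }
    }

    fn check(&self, value: &T) -> Result<(), &'static str> {
        if self.allowed.contains(value) {
            Ok(())
        } else {
            Err("value not permitted by ConfigRequirements")
        }
    }
}
```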

Then the following files are mostly test changes:

```
codex-rs/app-server/tests/suite/v2/config_rpc.rs
codex-rs/core/src/config/service.rs
codex-rs/core/src/config_loader/tests.rs
```

Again, because we do not always include "user" and "session flags"
layers when the contents are empty, `ConfigLayerStack` sometimes has
fewer layers than before (and the precedence order changed slightly),
which is the main reason integration tests changed.
2025-12-18 10:06:05 -08:00
Ahmed Ibrahim
f084e5264b
caribou (#8265)
Welcome caribou

<img width="1536" height="1024" alt="image"
src="https://github.com/user-attachments/assets/2a67b21f-40cf-4518-aee4-691af331ab50"
/>
2025-12-18 08:58:44 -08:00
Ahmed Ibrahim
374d591311
chores: clean picker (#8232)
2025-12-18 08:41:34 -08:00
Ahmed Ibrahim
774bd9e432
feat: model picker (#8209)
2025-12-17 16:12:35 -08:00
Ahmed Ibrahim
927a6acbea
Load models from static file (#8153)
- Load models from a static file as a fallback
- Make API users use this file directly
- Add tests to make sure updates to the file always serialize
2025-12-17 14:34:13 -08:00
Shijie Rao
df35189366
feat: make list_models non-blocking (#8198)
### Summary
* Make `app_server.list_models` non-blocking so consumers (i.e. the
extension) can manage the flow themselves.
* Force config to use remote models and therefore fetch the codex-auto
model list.
2025-12-17 12:13:16 -08:00
Michael Bolin
de3fa03e1c
feat: change ConfigLayerName into a disjoint union rather than a simple enum (#8095)
This attempts to tighten up the types related to "config layers."
Currently, `ConfigLayerEntry` is defined in
`codex-rs/core/src/config_loader/state.rs` (L19-L25 at `bef36f4ae7`), but
the `source` field is a bit of a lie, as:

- for `ConfigLayerName::Mdm`, it is
`"com.openai.codex/config_toml_base64"`
- for `ConfigLayerName::SessionFlags`, it is `"--config"`
- for `ConfigLayerName::User`, it is `"config.toml"` (just the file
name, not the path to the `config.toml` on disk that was read)
- for `ConfigLayerName::System`, it seems like it is usually
`/etc/codex/managed_config.toml` in practice, though on Windows, it is
`%CODEX_HOME%/managed_config.toml` (see
`codex-rs/core/src/config_loader/layer_io.rs`, L84-L101 at `bef36f4ae7`)

All that is to say, in three out of the four `ConfigLayerName` variants,
`source` is a `PathBuf` that is not an absolute path (or even a true
path).

This PR tries to uplevel things by eliminating `source` from
`ConfigLayerEntry` and turning `ConfigLayerName` into a disjoint union
named `ConfigLayerSource` that has the appropriate metadata for each
variant, favoring the use of `AbsolutePathBuf` where appropriate:

```rust
pub enum ConfigLayerSource {
    /// Managed preferences layer delivered by MDM (macOS only).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    Mdm { domain: String, key: String },
    /// Managed config layer from a file (usually `managed_config.toml`).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    System { file: AbsolutePathBuf },
    /// Session-layer overrides supplied via `-c`/`--config`.
    SessionFlags,
    /// User config layer from a file (usually `config.toml`).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    User { file: AbsolutePathBuf },
}
```
2025-12-17 08:13:59 -08:00
Michael Bolin
b1905d3754
fix: added test helpers for platform-specific paths (#7954)
This addresses post-merge feedback from
https://github.com/openai/codex/pull/7856.
2025-12-13 00:14:12 +00:00
Michael Bolin
642b7566df
fix: introduce AbsolutePathBuf as part of sandbox config (#7856)
Changes the `writable_roots` field of the `WorkspaceWrite` variant of
the `SandboxPolicy` enum from `Vec<PathBuf>` to `Vec<AbsolutePathBuf>`.
This is helpful because now callers can be sure the value is an absolute
path rather than a relative one. (Though when using an absolute path in
a Seatbelt config policy, we still have to _canonicalize_ it first.)

Because `writable_roots` can be read from a config file, it is important
that we are able to resolve relative paths properly using the parent
folder of the config file as the base path.
2025-12-12 15:25:22 -08:00
Ahmed Ibrahim
149696d959
chores: models manager (#7937) 2025-12-12 18:59:39 +00:00
Victor Vannara
95f7d37ec6
Fix misleading 'maximize' high effort description on xhigh models (#7874)
## Notes
- switch misleading High reasoning effort descriptions from "Maximizes
reasoning depth" to "Higher reasoning depth" across models with xhigh
reasoning. Affects GPT-5.1 Codex Max and Robin
- refresh model list fixtures and chatwidget snapshots to match new copy

## Revision
- R2: Change 'Higher' to 'Greater'
- R1: Initial

## Testing

<img width="583" height="142" alt="image"
src="https://github.com/user-attachments/assets/1ddd8971-7841-4cb3-b9ba-91095a7435d2"
/>

<img width="838" height="142" alt="image"
src="https://github.com/user-attachments/assets/79aaedbf-7624-4695-b822-93dea7d6a800"
/>
2025-12-11 16:38:52 -08:00
Ahmed Ibrahim
b7fa7ca8e9
Update Model Info (#7853) 2025-12-11 14:06:07 -08:00
Ahmed Ibrahim
238ce7dfad
feat: robin (#7882)
<img width="554" height="554" alt="image"
src="https://github.com/user-attachments/assets/aa86f4c8-fb34-4b0e-8b03-3a9980dfdb08"
/>

---------

Co-authored-by: Dylan Hurd <dylan.hurd@openai.com>
2025-12-11 09:04:08 -08:00
jif-oai
29381ba5c2
feat: add shell snapshot for shell command (#7786) 2025-12-11 13:46:43 +00:00
Dylan Hurd
dca7f4cb60
fix(stuff) (#7855)
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-12-11 00:39:47 -08:00
Celia Chen
7cabe54fc7
[app-server] make app server not throw error when login id is not found (#7831)
Our previous design of the cancellation endpoint was not idempotent, which
caused a bunch of flaky tests. Make the app server return a not_found
status instead of throwing an error if the login id is not found. Keep the
V1 endpoint behavior the same.
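
A minimal sketch of the idempotent behavior (names hypothetical):

```rust
// Hypothetical sketch: cancelling an unknown login id reports a status
// instead of raising an error, so repeated cancellations are idempotent.
enum CancelLoginResult {
    Canceled,
    NotFound,
}

fn cancel_login(active_login_ids: &mut Vec<String>, login_id: &str) -> CancelLoginResult {
    match active_login_ids.iter().position(|id| id == login_id) {
        Some(idx) => {
            active_login_ids.remove(idx);
            CancelLoginResult::Canceled
        }
        None => CancelLoginResult::NotFound,
    }
}
```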
2025-12-10 16:19:40 -08:00
Javi
e2559ab28d
fix: thread/list returning fewer than the requested amount due to filtering CXA-293 (#7509)
This caused some conversations to not appear when they otherwise should.

Prior to this change, `thread/list`/`list_conversations_common` would:
- Fetch N conversations from `RolloutRecorder::list_conversations`
- Then filter those (e.g. by the provided `model_providers`)
- This could make it return fewer than N items.

With this change:
- `list_conversations_common` now continues fetching more conversations
from `RolloutRecorder::list_conversations` until it "fills up" the
`requested_page_size`.
- Ultimately this means that clients can rely on getting, e.g., 20
conversations if they request 20 conversations.
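
A minimal sketch of that refill loop, with hypothetical names standing in for `RolloutRecorder::list_conversations` and the filter:

```rust
// Hypothetical sketch: keep fetching and filtering batches until the page is
// full or the underlying source runs out of conversations.
fn list_filtered<T>(
    mut fetch_batch: impl FnMut(usize) -> Vec<T>,
    keep: impl Fn(&T) -> bool,
    requested_page_size: usize,
) -> Vec<T> {
    let mut page = Vec::new();
    while page.len() < requested_page_size {
        let batch = fetch_batch(requested_page_size - page.len());
        if batch.is_empty() {
            break; // nothing left to fetch
        }
        page.extend(batch.into_iter().filter(|item| keep(item)));
    }
    page.truncate(requested_page_size);
    page
}
```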
2025-12-10 23:06:32 +00:00
Celia Chen
bfb4d5710b
[app-server-protocol] Add types for config (#7658)
Currently the config returned by `config/read` is untyped. Add types so
it's easier for clients to parse the config. Since configs are currently
all defined in snake_case, we'll keep that instead of using camelCase like
the rest of V2.

Sample output from testing with the app server test client:
```
{
  "id": "f28449f4-b015-459b-b07b-eef06980165d",
  "result": {
    "config": {
      "approvalPolicy": null,
      "compactPrompt": null,
      "developerInstructions": null,
      "features": {
        "experimental_use_rmcp_client": true
      },
      "forcedChatgptWorkspaceId": null,
      "forcedLoginMethod": null,
      "instructions": null,
      "model": "gpt-5.1-codex-max",
      "modelAutoCompactTokenLimit": null,
      "modelContextWindow": null,
      "modelProvider": null,
      "modelReasoningEffort": null,
      "modelReasoningSummary": null,
      "modelVerbosity": null,
      "model_providers": {
        "local": {
          "base_url": "http://localhost:8061/api/codex",
          "env_http_headers": {
            "ChatGPT-Account-ID": "OPENAI_ACCOUNT_ID"
          },
          "env_key": "CHATGPT_TOKEN_STAGING",
          "name": "local",
          "wire_api": "responses"
        }
      },
      "model_reasoning_effort": "medium",
      "notice": {
        "hide_gpt-5.1-codex-max_migration_prompt": true,
        "hide_gpt5_1_migration_prompt": true
      },
      "profile": null,
      "profiles": {},
      "projects": {
        "/Users/celia/code": {
          "trust_level": "trusted"
        },
        "/Users/celia/code/codex": {
          "trust_level": "trusted"
        },
        "/Users/celia/code/openai": {
          "trust_level": "trusted"
        }
      },
      "reviewModel": null,
      "sandboxMode": null,
      "sandboxWorkspaceWrite": null,
      "tools": {
        "viewImage": null,
        "webSearch": null
      }
    },
    "origins": {
      "features.experimental_use_rmcp_client": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_providers.local.base_url": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_providers.local.env_http_headers.ChatGPT-Account-ID": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_providers.local.env_key": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_providers.local.name": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_providers.local.wire_api": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "model_reasoning_effort": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "notice.hide_gpt-5.1-codex-max_migration_prompt": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "notice.hide_gpt5_1_migration_prompt": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "projects./Users/celia/code.trust_level": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "projects./Users/celia/code/codex.trust_level": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "projects./Users/celia/code/openai.trust_level": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      },
      "tools.web_search": {
        "name": "user",
        "source": "/Users/celia/.codex/config.toml",
        "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
      }
    }
  }
}
```
2025-12-10 21:35:31 +00:00
Ahmed Ibrahim
cb9a189857
make model optional in config (#7769)
- Make `Config.model` optional and centralize default-selection logic in
`ModelsManager`, including a `default_model` helper (with
`codex-auto-balanced` when available), so sessions now carry an explicitly
chosen model separate from the base config.
- Resolve `model` once in `core` and `tui` from config, then store that
state on the other structs.
- Move the models refresh to before resolving the default model.
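
A minimal sketch of the selection order (names hypothetical):

```rust
// Hypothetical sketch: prefer an explicitly configured model; otherwise fall
// back to the manager's default (codex-auto-balanced when available).
fn resolve_session_model(configured: Option<&str>, manager_default: &str) -> String {
    configured.unwrap_or(manager_default).to_string()
}
```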
2025-12-10 11:19:00 -08:00
Eric Traut
c4af707e09
Removed experimental "command risk assessment" feature (#7799)
This experimental feature received a lukewarm reception during internal
testing. Removing it from the code base.
2025-12-10 09:48:11 -08:00
zhao-oai
0a32acaa2d
updating app server types to support execpolicy amendment (#7747)
Also includes a minor refactor merging `ApprovalDecision` with
`CommandExecutionRequestAcceptSettings`.
2025-12-08 13:56:22 -08:00
zhao-oai
b8eab7ce90
fix: taking plan type from usage endpoint instead of thru auth token (#7610)
Pull the plan type from the usage endpoint, persist it in session state /
TUI state, and propagate it through rate-limit snapshots.
2025-12-04 23:34:13 -08:00
Celia Chen
3e6cd5660c
[app-server] make file_path for config optional (#7560)
When we are writing to config using `config/value/write` or
`config/batchWrite`, it currently always requires a `config/read` first in
order to get the correct file path to write to. Make this optional so we
read from the default user config file when it is not passed in.
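
A minimal sketch of the fallback (names hypothetical):

```rust
use std::path::{Path, PathBuf};

// Hypothetical sketch: write to the caller-provided file path when given,
// otherwise to the default user config file under CODEX_HOME.
fn config_write_target(file_path: Option<PathBuf>, codex_home: &Path) -> PathBuf {
    file_path.unwrap_or_else(|| codex_home.join("config.toml"))
}
```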
2025-12-04 03:08:18 +00:00
Ahmed Ibrahim
71504325d3
Migrate model preset (#7542)
- Introduce `openai_models` in `/core`
- Move `PRESETS` under it
- Move `ModelPreset`, `ModelUpgrade`, and `ReasoningEffortPreset` to
`protocol`
- Introduce `Op::ListModels` and `EventMsg::AvailableModels`

Next steps:
- migrate `app-server` and `tui` to use the introduced Operation
2025-12-03 20:30:43 +00:00
jif-oai
4b78e2ab09
chore: review everywhere (#7444) 2025-12-02 11:26:27 +00:00
Owen Lin
8532876ad8
[app-server] fix: emit item/fileChange/outputDelta for file change items (#7399) 2025-12-01 17:52:34 +00:00
jif-oai
6eeaf46ac1
fix: other flaky tests (#7372) 2025-11-28 15:29:44 +00:00
jif-oai
aaec8abf58
feat: detached review (#7292) 2025-11-28 11:34:57 +00:00
jif-oai
28ff364c3a
feat: update process ID for event handling (#7261) 2025-11-25 14:21:05 -08:00
jif-oai
4502b1b263
chore: proper client extraction (#6996) 2025-11-25 18:06:12 +00:00
Owen Lin
157a16cefa
[app-server] feat: add thread_id and turn_id to item and error notifications (#7124)
Add `thread_id` and `turn_id` to `item/started`, `item/completed`, and
`error` notifications. Otherwise the client will have a hard time
knowing which thread & turn (if multiple threads are running in
parallel) a new item/error is for.

Also add `thread_id` to `turn/started` and `turn/completed`.
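
A minimal sketch of the resulting notification params, assuming camelCase serialization as elsewhere in V2 (field set abridged):

```rust
use serde::Serialize;

// Sketch only: the ids let clients route an item/error to the right thread
// and turn when several threads run in parallel.
#[derive(Serialize)]
#[serde(rename_all = "camelCase")]
struct ItemCompletedParams {
    thread_id: String,
    turn_id: String,
    // item payload elided
}
```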
2025-11-25 08:05:47 -08:00
jif-oai
523b40a129
feat[app-serve]: config management (#7241) 2025-11-25 09:29:38 +00:00
Josh McKinney
ec49b56874
chore: add cargo-deny configuration (#7119)
- add GitHub workflow running cargo-deny on push/PR
- document cargo-deny allowlist with workspace-dep notes and advisory
ignores
- align workspace crates to inherit version/edition/license for
consistent checks
2025-11-24 12:22:18 -08:00
Dylan Hurd
1e832b1438
fix(windows) support apply_patch parsing in powershell (#7221)
## Summary
Support powershell parsing of apply_patch

## Testing
- [x] Enable apply_patch unit tests

---------

Co-authored-by: jif-oai <jif@openai.com>
2025-11-24 19:32:47 +00:00
Owen Lin
aa4e0d823e
[app-server] feat: expose gitInfo/cwd/etc. on Thread (#7060)
Port the new additions from https://github.com/openai/codex/pull/6337 on
the legacy API to v2. Mainly need `gitInfo` and `cwd` for VSCE.
2025-11-21 10:37:12 -08:00
Owen Lin
2ae1f81d84
[app-server] feat: add Declined status for command exec (#7101)
Add a `Declined` status for when we request an approval from the user
and the user declines. This allows us to distinguish declined commands
from commands that actually ran but failed.

This behaves similarly to apply_patch / FileChange, which does the same
thing.
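
A minimal sketch of the status distinction (variant set abridged and assumed):

```rust
// Sketch: `Declined` marks a command the user refused to run, distinct from
// one that ran and failed.
enum CommandExecutionStatus {
    InProgress,
    Completed,
    Failed,
    Declined,
}
```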
2025-11-21 09:19:39 -08:00
pakrym-oai
767b66f407
Migrate coverage to shell_command (#7042) 2025-11-21 03:44:00 +00:00
Owen Lin
d6c30ed25e
[app-server] feat: v2 apply_patch approval flow (#6760)
This PR adds the API V2 version of the apply_patch approval flow, which
centers around `ThreadItem::FileChange`.

This PR wires the new RPC (`item/fileChange/requestApproval`, V2 only)
and related events (`item/started`, `item/completed` for
`ThreadItem::FileChange`, which are emitted in both V1 and V2) through
the app-server
protocol. The new approval RPC is only sent when the user initiates a
turn with the new `turn/start` API so we don't break backwards
compatibility with VSCE.

Similar to https://github.com/openai/codex/pull/6758, the approach I
took was to make as few changes to the Codex core as possible,
leveraging existing `EventMsg` core events, and translating those in
app-server. I did have to add a few additional fields to
`EventMsg::PatchApplyBegin` and `EventMsg::PatchApplyEnd`, but those
were fairly lightweight.

However, the `EventMsg`s emitted by core are the following:
```
1) Auto-approved (no request for approval)

- EventMsg::PatchApplyBegin
- EventMsg::PatchApplyEnd

2) Approved by user
- EventMsg::ApplyPatchApprovalRequest
- EventMsg::PatchApplyBegin
- EventMsg::PatchApplyEnd

3) Declined by user
- EventMsg::ApplyPatchApprovalRequest
- EventMsg::PatchApplyBegin
- EventMsg::PatchApplyEnd
```

For a request triggering an approval, this would result in:
```
item/fileChange/requestApproval
item/started
item/completed
```

which is different from the `ThreadItem::CommandExecution` flow
introduced in https://github.com/openai/codex/pull/6758, which does the
below and is preferable:
```
item/started
item/commandExecution/requestApproval
item/completed
```

To fix this, we leverage `TurnSummaryStore` on codex_message_processor
to store a little bit of state, allowing us to fire `item/started` and
`item/fileChange/requestApproval` whenever we receive the underlying
`EventMsg::ApplyPatchApprovalRequest`, and no-oping when we receive the
`EventMsg::PatchApplyBegin` later.

This is much less invasive than modifying the order of EventMsg within
core (I tried).
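
A minimal sketch of that bookkeeping, with a hypothetical type standing in for `TurnSummaryStore`:

```rust
use std::collections::HashSet;

// Hypothetical sketch: remember which item ids already emitted `item/started`
// from the approval request, so the later `PatchApplyBegin` for the same item
// becomes a no-op.
#[derive(Default)]
struct StartedFileChanges {
    started: HashSet<String>,
}

impl StartedFileChanges {
    /// Returns true only the first time an item id is recorded.
    fn mark_started(&mut self, item_id: &str) -> bool {
        self.started.insert(item_id.to_string())
    }
}
```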

The resulting payloads:
```
{
  "method": "item/started",
  "params": {
    "item": {
      "changes": [
        {
          "diff": "Hello from Codex!\n",
          "kind": "add",
          "path": "/Users/owen/repos/codex/codex-rs/APPROVAL_DEMO.txt"
        }
      ],
      "id": "call_Nxnwj7B3YXigfV6Mwh03d686",
      "status": "inProgress",
      "type": "fileChange"
    }
  }
}
```

```
{
  "id": 0,
  "method": "item/fileChange/requestApproval",
  "params": {
    "grantRoot": null,
    "itemId": "call_Nxnwj7B3YXigfV6Mwh03d686",
    "reason": null,
    "threadId": "019a9e11-8295-7883-a283-779e06502c6f",
    "turnId": "1"
  }
}
```

```
{
  "id": 0,
  "result": {
    "decision": "accept"
  }
}
```

```
{
  "method": "item/completed",
  "params": {
    "item": {
      "changes": [
        {
          "diff": "Hello from Codex!\n",
          "kind": "add",
          "path": "/Users/owen/repos/codex/codex-rs/APPROVAL_DEMO.txt"
        }
      ],
      "id": "call_Nxnwj7B3YXigfV6Mwh03d686",
      "status": "completed",
      "type": "fileChange"
    }
  }
}
```
2025-11-19 20:13:31 -08:00
zhao-oai
72af589398
storing credits (#6858)
Expand the rate-limit cache/TUI: store credit snapshots alongside the
primary and secondary windows, and render “Credits” when the backend
reports they exist (unlimited vs. rounded integer balances).
2025-11-19 10:49:35 -08:00
Ahmed Ibrahim
d5dfba2509
feat: arcticfox in the wild (#6906)
<img width="485" height="600" alt="image"
src="https://github.com/user-attachments/assets/4341740d-dd58-4a3e-b69a-33a3be0606c5"
/>

---------

Co-authored-by: jif-oai <jif@openai.com>
2025-11-19 16:31:06 +00:00
Owen Lin
1924500250
[app-server] populate thread>turns>items on thread/resume (#6848)
This PR allows clients to render historical messages when resuming a
thread via `thread/resume` by reading from the list of `EventMsg`
payloads loaded from the rollout, and then transforming them into Turns
and ThreadItems to be returned on the `Thread` object.

This is implemented by leveraging `SessionConfiguredNotification` which
returns this list of `EventMsg` objects when resuming a conversation,
and then applying a stateful `ThreadHistoryBuilder` that parses from
this EventMsg log and transforms it into Turns and ThreadItems.

Note that we only persist a subset of `EventMsg`s in a rollout as
defined in `policy.rs`, so we lose fidelity whenever we resume a thread
compared to when we streamed the thread's turns originally. However,
this behavior is at parity with the legacy API.
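
A minimal sketch of the stateful replay, with a simplified event type standing in for the persisted `EventMsg` log:

```rust
// Hypothetical sketch: fold the persisted event log back into per-turn item
// lists so `thread/resume` can return populated turns.
enum ReplayEvent {
    TurnStarted,
    Item(String), // simplified stand-in for a ThreadItem
}

#[derive(Default)]
struct ThreadHistoryBuilder {
    turns: Vec<Vec<String>>,
}

impl ThreadHistoryBuilder {
    fn apply(&mut self, event: ReplayEvent) {
        match event {
            ReplayEvent::TurnStarted => self.turns.push(Vec::new()),
            ReplayEvent::Item(item) => {
                if let Some(turn) = self.turns.last_mut() {
                    turn.push(item);
                }
            }
        }
    }
}
```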
2025-11-19 15:58:09 +00:00
Ahmed Ibrahim
793063070b
fix: typos in model picker (#6859)
2025-11-19 06:29:02 +00:00
Michael Bolin
a75321a64c
fix: add more fields to ThreadStartResponse and ThreadResumeResponse (#6847)
This adds the following fields to `ThreadStartResponse` and
`ThreadResumeResponse`:

```rust
    pub model: String,
    pub model_provider: String,
    pub cwd: PathBuf,
    pub approval_policy: AskForApproval,
    pub sandbox: SandboxPolicy,
    pub reasoning_effort: Option<ReasoningEffort>,
```

This is important because these fields are optional in
`ThreadStartParams` and `ThreadResumeParams`, so the caller needs to be
able to determine what values were ultimately used to start/resume the
conversation. (Though note that any of these could be changed later
between turns in the conversation.)

Though to get this information reliably, it must be read from the
internal `SessionConfiguredEvent` that is created in response to the
start of a conversation. Because `SessionConfiguredEvent` (as defined in
`codex-rs/protocol/src/protocol.rs`) did not have all of these fields, a
number of them had to be added as part of this PR.

Because `SessionConfiguredEvent` is referenced in many tests, test
instances of `SessionConfiguredEvent` had to be updated, as well, which
is why this PR touches so many files.
2025-11-18 21:18:43 -08:00