Commit graph

182 commits

Author SHA1 Message Date
Michael Bolin
7ecd0dc9b3
fix: stop honoring CODEX_MANAGED_CONFIG_PATH environment variable in production (#8762) 2026-01-06 07:10:27 -08:00
xl-openai
58a91a0b50
Use ConfigLayerStack for skills discovery. (#8497)
Use ConfigLayerStack to get all folders while loading skills.
2026-01-05 13:47:39 -08:00
Anton Panasenko
807f8a43c2
feat: expose outputSchema to user_turn/turn_start app_server API (#8377)
What changed
- Added `outputSchema` support to the app-server APIs, mirroring `codex
exec --output-schema` behavior.
- V1 `sendUserTurn` now accepts `outputSchema` and constrains the final
assistant message for that turn.
- V2 `turn/start` now accepts `outputSchema` and constrains the final
assistant message for that turn (explicitly per-turn only).

Core behavior
- `Op::UserTurn` already supported `final_output_json_schema`; now V1
`sendUserTurn` forwards `outputSchema` into that field.
- `Op::UserInput` now carries `final_output_json_schema` for per-turn
settings updates; core maps it into
`SessionSettingsUpdate.final_output_json_schema` so it applies to the
created turn context.
- V2 `turn/start` does NOT persist the schema via `OverrideTurnContext`
(it’s applied only for the current turn). Other overrides
(cwd/model/etc) keep their existing persistent behavior.

API / docs
- `codex-rs/app-server-protocol/src/protocol/v1.rs`: add `output_schema:
Option<serde_json::Value>` to `SendUserTurnParams` (serialized as
`outputSchema`).
- `codex-rs/app-server-protocol/src/protocol/v2.rs`: add `output_schema:
Option<JsonValue>` to `TurnStartParams` (serialized as `outputSchema`).
- `codex-rs/app-server/README.md`: document `outputSchema` for
`turn/start` and clarify it applies only to the current turn.
- `codex-rs/docs/codex_mcp_interface.md`: document `outputSchema` for v1
`sendUserTurn` and v2 `turn/start`.
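For illustration, a hypothetical sketch of a v2 `turn/start` payload: only the `outputSchema` field name comes from this PR; the `threadId` and `input` shapes are placeholder assumptions.
```rust
// Hypothetical example: `outputSchema` is the field added by this PR; the
// surrounding params are illustrative assumptions.
use serde_json::json;

fn main() {
    let params = json!({
        "threadId": "<thread-id>",
        "input": [{ "type": "text", "text": "List the crates in this repo" }],
        // Constrains only this turn's final assistant message; it is not
        // persisted via OverrideTurnContext, so later turns are unaffected.
        "outputSchema": {
            "type": "object",
            "properties": {
                "crates": { "type": "array", "items": { "type": "string" } }
            },
            "required": ["crates"]
        }
    });
    println!("{}", serde_json::to_string_pretty(&params).unwrap());
}
```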

Tests added/updated
- New app-server integration tests asserting `outputSchema` is forwarded
into outbound `/responses` requests as `text.format`:
  - `codex-rs/app-server/tests/suite/output_schema.rs`
  - `codex-rs/app-server/tests/suite/v2/output_schema.rs`
- Added per-turn semantics tests (schema does not leak to the next
turn):
  - `send_user_turn_output_schema_is_per_turn_v1`
  - `turn_start_output_schema_is_per_turn_v2`
- Added protocol wire-compat tests for the merged op:
  - serialize omits `final_output_json_schema` when `None`
  - deserialize works when field is missing
  - serialize includes `final_output_json_schema` when `Some(schema)`

Call site updates (high level)
- Updated all `Op::UserInput { .. }` constructions to include
`final_output_json_schema`:
  - `codex-rs/app-server/src/codex_message_processor.rs`
  - `codex-rs/core/src/codex_delegate.rs`
  - `codex-rs/mcp-server/src/codex_tool_runner.rs`
  - `codex-rs/tui/src/chatwidget.rs`
  - `codex-rs/tui2/src/chatwidget.rs`
  - plus impacted core tests.

Validation
- `just fmt`
- `cargo test -p codex-core`
- `cargo test -p codex-app-server`
- `cargo test -p codex-mcp-server`
- `cargo test -p codex-tui`
- `cargo test -p codex-tui2`
- `cargo test -p codex-protocol`
- `cargo clippy --all-features --tests --profile dev --fix -- -D
warnings`
2026-01-05 10:27:00 -08:00
pakrym-oai
1b5095b5d1
Attach more tags to feedback submissions (#8688)
Attach more tags to sentry feedback so it's easier to classify and debug
without having to scan through logs.

Formatting isn't amazing but it's a start.
<img width="1234" height="276" alt="image"
src="https://github.com/user-attachments/assets/521a349d-f627-4051-b511-9811cd5cd933"
/>
2026-01-02 16:51:03 -08:00
sayan-oai
bf732600ea
[chore] add additional_details to StreamErrorEvent + wire through (#8307)
### What

Builds on #8293.

Add `additional_details`, which contains the upstream error message, to
relevant structures used to pass along retryable `StreamError`s.

Uses the new TUI status indicator's `details` field (shown under the
status header) to display the `additional_details` error to the user on
retryable `Reconnecting...` errors. This gives users more clarity on
retryable errors.

Will make corresponding change to VSCode extension to show
`additional_details` as expandable from the `Reconnecting...` cell.
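As a rough sketch (assumed struct shape, not the actual codex-rs definition), the retryable error event now carries the upstream message alongside the user-facing status:
```rust
// Assumed shape for illustration; only the field name `additional_details`
// comes from this PR.
struct StreamErrorEvent {
    message: String,                    // e.g. "Reconnecting..."
    additional_details: Option<String>, // upstream error message, if any
}

fn main() {
    let ev = StreamErrorEvent {
        message: "Reconnecting...".to_string(),
        additional_details: Some("upstream: 502 Bad Gateway".to_string()),
    };
    // A client (TUI status `details`, VSCode expandable cell) can surface
    // the extra detail under the status header.
    if let Some(details) = &ev.additional_details {
        println!("{} ({details})", ev.message);
    }
}
```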

Examples:
<img width="1012" height="326" alt="image"
src="https://github.com/user-attachments/assets/f35e7e6a-8f5e-4a2f-a764-358101776996"
/>

<img width="1526" height="358" alt="image"
src="https://github.com/user-attachments/assets/0029cbc0-f062-4233-8650-cc216c7808f0"
/>
2025-12-24 10:07:38 -08:00
Michael Bolin
e61bae12e3
feat: introduce codex-utils-cargo-bin as an alternative to assert_cmd::Command (#8496)
This PR introduces a `codex-utils-cargo-bin` utility crate that
wraps/replaces our use of `assert_cmd::Command` and
`escargot::CargoBuild`.

As you can infer from the introduction of `buck_project_root()` in this
PR, I am attempting to make it possible to build Codex under
[Buck2](https://buck2.build) as well as `cargo`. With Buck2, I hope to
achieve faster incremental local builds (largely due to Buck2's
[dice](https://buck2.build/docs/insights_and_knowledge/modern_dice/)
build strategy, as well as benefits from its local build daemon) as well
as faster CI builds if we invest in remote execution and caching.

See
https://buck2.build/docs/getting_started/what_is_buck2/#why-use-buck2-key-advantages
for more details about the performance advantages of Buck2.

Buck2 enforces stronger requirements in terms of build and test
isolation. It discourages assumptions about absolute paths (which is key
to enabling remote execution). Because the `CARGO_BIN_EXE_*` environment
variables that Cargo provides are absolute paths (which
`assert_cmd::Command` reads), this is a problem for Buck2, which is why
we need this `codex-utils-cargo-bin` utility.

My WIP-Buck2 setup sets the `CARGO_BIN_EXE_*` environment variables
passed to a `rust_test()` build rule as relative paths.
`codex-utils-cargo-bin` will resolve these values to absolute paths,
when necessary.
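A minimal sketch of that resolution, under assumed helper names (the real crate's API may differ):
```rust
// Sketch: look up a `CARGO_BIN_EXE_*` value and, when it is a relative path
// (as in the WIP Buck2 setup), resolve it against the project root.
use std::env;
use std::path::PathBuf;

fn buck_project_root() -> PathBuf {
    // Stub: the real implementation locates the Buck2 project root.
    env::current_dir().expect("cwd")
}

fn cargo_bin(name: &str) -> Option<PathBuf> {
    let raw = env::var(format!("CARGO_BIN_EXE_{name}")).ok()?;
    let path = PathBuf::from(raw);
    Some(if path.is_absolute() {
        path // Cargo provides absolute paths; use as-is.
    } else {
        buck_project_root().join(path) // Buck2 case: make it absolute.
    })
}

fn main() {
    println!("{:?}", cargo_bin("codex"));
}
```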


---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed
with [ReviewStack](https://reviewstack.dev/openai/codex/pull/8496).
* #8498
* __->__ #8496
2025-12-23 19:29:32 -08:00
Ahmed Ibrahim
40de81e7af
Remove reasoning format (#8484)
This isn't a very useful parameter.

The old logic:
```
if the model puts `**` in its reasoning, trim it and render the header
if it can't be trimmed: don't render
if the model doesn't support it: don't render
```

We can simplify to:
```
if the header can be trimmed, render it
if not, don't render
```
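A minimal sketch of the simplified rule, assuming a `**Header**` prefix format (the TUI's actual parsing may differ):
```rust
// Render a reasoning header only when the `**`-delimited prefix can be
// trimmed out of the raw reasoning text.
fn trim_header(raw: &str) -> Option<(&str, &str)> {
    let rest = raw.strip_prefix("**")?;          // must start with `**`
    let (header, body) = rest.split_once("**")?; // must close the `**` pair
    Some((header.trim(), body.trim()))
}

fn main() {
    match trim_header("**Planning** examine Cargo.toml first") {
        Some((header, body)) => println!("{header}: {body}"), // could trim: render
        None => {}                                            // couldn't trim: don't render
    }
}
```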
2025-12-23 16:01:46 -08:00
Michael Bolin
e27d9bd88f
feat: honor /etc/codex/config.toml (#8461)
This adds logic to load `/etc/codex/config.toml` and associate it with
`ConfigLayerSource::System` on UNIX. I refactored the code so it shares
logic with the creation of the `ConfigLayerSource::User` layer.
2025-12-22 19:06:04 -08:00
Ahmed Ibrahim
6b2ef216f1
remove minimal client version (#8447)
This value isn't needed by the client.
2025-12-22 12:52:24 -08:00
Shijie Rao
987dd7fde3
Chore: remove rmcp feature and exp flag usages (#8087)
### Summary
With codesigning on Mac, Windows, and Linux, we should be able to safely
remove the `features.rmcp_client` and `use_experimental_use_rmcp_client`
checks from the codebase now.
2025-12-20 14:18:00 -08:00
Ahmed Ibrahim
f0dc6fd3c7
Rename OpenAI models to models manager (#8346)
2025-12-19 16:20:05 -08:00
Michael Bolin
dc61fc5f50
feat: support allowed_sandbox_modes in requirements.toml (#8298)
This adds support for `allowed_sandbox_modes` in `requirements.toml` and
provides legacy support for constraining sandbox modes in
`managed_config.toml`. This is converted to `Constrained<SandboxPolicy>`
in `ConfigRequirements` and applied to `Config` such that constraints
are enforced throughout the harness.
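For illustration, a sketch of what such an entry might look like; the mode names here are assumptions, not taken from this PR:
```rust
// Parse a hypothetical `/etc/codex/requirements.toml` snippet with the
// `toml` crate; only the `allowed_sandbox_modes` key name is from the PR.
fn main() {
    let requirements = r#"allowed_sandbox_modes = ["read-only", "workspace-write"]"#;
    let value: toml::Value = toml::from_str(requirements).expect("valid TOML");
    println!("{value:?}");
}
```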

Note that, because `managed_config.toml` is deprecated, we do not add
support for the new `external-sandbox` variant recently introduced in
https://github.com/openai/codex/pull/8290. As noted, that variant is not
supported in `config.toml` today, but can be configured programmatically
via app server.
2025-12-19 21:09:20 +00:00
RQfreefly
ec3738b47e
feat: move file name derivation into codex-file-search (#8334)
## Summary

- centralize file name derivation in codex-file-search
- reuse the helper in app-server fuzzy search to avoid duplicate logic
- add unit tests for file_name_from_path

## Testing

- cargo test -p codex-file-search
- cargo test -p codex-app-server
2025-12-19 12:50:55 -08:00
Anton Panasenko
3429de21b3
feat: introduce ExternalSandbox policy (#8290)
## Description

Introduced the `ExternalSandbox` policy to cover the case where the
sandbox is defined by the outside environment. Effectively it translates
to `SandboxMode#DangerFullAccess` for the file system (since the sandbox
is configured at the container level) plus a configurable
`network_access` (either Restricted or Enabled, set by the outside
environment).

As an example, you can configure the `ExternalSandbox` policy as part of
the `sendUserTurn` v1 app_server API:

```
{
    "conversationId": <id>,
    "cwd": <cwd>,
    "approvalPolicy": "never",
    "sandboxPolicy": {
        "type": "external-sandbox",
        "network_access": "enabled"/"restricted"
    },
    "model": <model>,
    "effort": <effort>,
    ....
}
```
2025-12-18 17:02:03 -08:00
xl-openai
358a5baba0
Support skills shortDescription. (#8278)
Allow SKILL.md to specify a more human-readable short description as
skill metadata.
2025-12-18 23:13:18 +00:00
Owen Lin
d7ae342ff4
feat(app-server): add v2 deprecation notice (#8285)
Add a v2 event for deprecation notices so we can get rid of
`codex/event/deprecation_notice`.
2025-12-18 21:45:36 +00:00
Michael Bolin
b903285746
feat: migrate to new constraint-based loading strategy (#8251)
This is a significant change to how layers of configuration are applied.
In particular, the `ConfigLayerStack` now has two important fields:

- `layers: Vec<ConfigLayerEntry>`
- `requirements: ConfigRequirements`

We merge `TomlValue`s across the layers, but they are subject to
`ConfigRequirements` before creating a `Config`.

How I would review this PR:

- start with `codex-rs/app-server-protocol/src/protocol/v2.rs` and note
the new variants added to the `ConfigLayerSource` enum:
`LegacyManagedConfigTomlFromFile` and `LegacyManagedConfigTomlFromMdm`
- note that `ConfigLayerSource` now has a `precedence()` method and
implements `PartialOrd`
- `codex-rs/core/src/config_loader/layer_io.rs` is responsible for
loading "admin" preferences from `/etc/codex/managed_config.toml` and
MDM. Because `/etc/codex/managed_config.toml` is now deprecated in favor
of `/etc/codex/requirements.toml` and `/etc/codex/config.toml`, we now
include some extra information on the `LoadedConfigLayers` returned in
`layer_io.rs`.
- `codex-rs/core/src/config_loader/mod.rs` has major changes to
`load_config_layers_state()`, which is what produces `ConfigLayerStack`.
The docstring has the new specification and describes the various layers
that will be loaded and the precedence order.
- It uses the information from `LoaderOverrides` "twice," both in the
spirit of legacy support:
  - We use one instance to derive an instance of `ConfigRequirements`.
Currently, the only field in `managed_config.toml` that contributes to
`ConfigRequirements` is `approval_policy`. This PR introduces
`Constrained::allow_only()` to support this (see the sketch after this
list).
  - We use a clone of `LoaderOverrides` to derive
`ConfigLayerSource::LegacyManagedConfigTomlFromFile` and
`ConfigLayerSource::LegacyManagedConfigTomlFromMdm` layers, as
appropriate. As before, this ends up being a "best effort" at enterprise
controls, but its enforcement is not guaranteed the way it is for
`ConfigRequirements`.
- Now we only create a "user" layer if `$CODEX_HOME/config.toml` exists.
(Previously, a user layer was always created for `ConfigLayerStack`.)
- Similarly, we only add a "session flags" layer if there are CLI
overrides.
- `config_loader/state.rs` contains the updated implementation for
`ConfigLayerStack`. Note the public API is largely the same as before,
but the implementation is quite different. We leverage the fact that
`ConfigLayerSource` is now `PartialOrd` to ensure layers are in the
correct order.
- A `Config` constructed via `ConfigBuilder.build()` will use
`load_config_layers_state()` to create the `ConfigLayerStack` and use
the associated `ConfigRequirements` when constructing the `Config`
object.
- That said, a `Config` constructed via
`Config::load_from_base_config_with_overrides()` does _not_ yet use
`ConfigBuilder`, so it creates a `ConfigRequirements::default()` instead
of loading a proper `ConfigRequirements`. I will fix this in a
subsequent PR.
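A minimal sketch of the `Constrained` idea (assumed API; only `Constrained::allow_only()` is named above):
```rust
// Sketch: a requirement that either leaves a setting unconstrained or
// restricts it to an allowed set of values.
#[derive(Debug)]
enum Constrained<T> {
    Unconstrained,
    AllowOnly(Vec<T>),
}

impl<T: PartialEq> Constrained<T> {
    fn allow_only(allowed: Vec<T>) -> Self {
        Constrained::AllowOnly(allowed)
    }

    fn permits(&self, value: &T) -> bool {
        match self {
            Constrained::Unconstrained => true,
            Constrained::AllowOnly(allowed) => allowed.contains(value),
        }
    }
}

fn main() {
    // e.g. an admin requirement that pins approval_policy to "never".
    let approval = Constrained::allow_only(vec!["never".to_string()]);
    assert!(approval.permits(&"never".to_string()));
    assert!(!approval.permits(&"on-request".to_string()));
}
```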

Then the following files are mostly test changes:

```
codex-rs/app-server/tests/suite/v2/config_rpc.rs
codex-rs/core/src/config/service.rs
codex-rs/core/src/config_loader/tests.rs
```

Again, because we do not always include "user" and "session flags"
layers when the contents are empty, `ConfigLayerStack` sometimes has
fewer layers than before (and the precedence order changed slightly),
which is the main reason integration tests changed.
2025-12-18 10:06:05 -08:00
Ahmed Ibrahim
f084e5264b
caribou (#8265)
Welcome caribou

<img width="1536" height="1024" alt="image"
src="https://github.com/user-attachments/assets/2a67b21f-40cf-4518-aee4-691af331ab50"
/>
2025-12-18 08:58:44 -08:00
Ahmed Ibrahim
374d591311
chores: clean picker (#8232)
2025-12-18 08:41:34 -08:00
xl-openai
da3869eeb6
Support SYSTEM skills. (#8220)
1. Remove PUBLIC skills and introduce SYSTEM skills embedded in the
binary and installed into $CODEX_HOME/skills/.system at startup.
2. Skills are now always enabled (feature flag removed).
3. Update skills/list to accept forceReload and plumb it through (not
used by clients yet).
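A hypothetical request shape (only `forceReload` is named above; the `cwds` parameter name and casing are assumptions):
```rust
use serde_json::json;

fn main() {
    // Sketch of skills/list params; not the actual wire format.
    let params = json!({
        "cwds": ["/path/to/project"], // skills are discovered per working directory
        "forceReload": true           // bypass cached discovery results
    });
    println!("{params}");
}
```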
2025-12-17 18:48:28 -08:00
Michael Bolin
a8797019a1
chore: cleanup Config instantiation codepaths (#8226)
This PR does various types of cleanup before I can proceed with more
ambitious changes to config loading.

First, I noticed duplicated code across these two methods:


774bd9e432/codex-rs/core/src/config/mod.rs (L314-L324)


774bd9e432/codex-rs/core/src/config/mod.rs (L334-L344)

This has now been consolidated in
`load_config_as_toml_with_cli_overrides()`.

Further, I noticed that `Config::load_with_cli_overrides()` took two
similar arguments:


774bd9e432/codex-rs/core/src/config/mod.rs (L308-L311)

The difference between `cli_overrides` and `overrides` was not
immediately obvious to me. At first glance, it appears that one should
be able to be expressed in terms of the other, but it turns out that
some fields of `ConfigOverrides` (such as `cwd` and
`codex_linux_sandbox_exe`) are, by design, not configurable via a
`.toml` file or a command-line `--config` flag.

That said, I discovered that many callers of
`Config::load_with_cli_overrides()` were passing
`ConfigOverrides::default()` for `overrides`, so I created two separate
methods:

- `Config::load_with_cli_overrides(cli_overrides: Vec<(String,
TomlValue)>)`
- `Config::load_with_cli_overrides_and_harness_overrides(cli_overrides:
Vec<(String, TomlValue)>, harness_overrides: ConfigOverrides)`

The latter has a long name, as it is _not_ what should be used in the
common case, so the extra typing is designed to draw attention to this
fact. I tried to update the existing callsites to use the shorter name,
where possible.

Further, in the cases where `ConfigOverrides` is used, usually only a
limited subset of fields are actually set, so I updated the declarations
to leverage `..Default::default()` where possible.
2025-12-17 18:01:17 -08:00
Ahmed Ibrahim
774bd9e432
feat: model picker (#8209)
2025-12-17 16:12:35 -08:00
Ahmed Ibrahim
927a6acbea
Load models from static file (#8153)
- Load models from static file as a fallback
- Make API users use this file directly
- Add tests to make sure updates to the file always serialize
2025-12-17 14:34:13 -08:00
Shijie Rao
df35189366
feat: make list_models non-blocking (#8198)
### Summary
* Make `app_server.list_models` non-blocking so consumers (i.e. the
extension) can manage the flow themselves.
* Force config to use remote models and therefore fetch the codex-auto
model list.
2025-12-17 12:13:16 -08:00
Shijie Rao
3702793882
chore: update listMcpServerStatus to be non-blocking (#8151)
### Summary
* Update `listMcpServerStatus` to be non-blocking by wrapping it with
`tokio::spawn`.
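A minimal sketch of the pattern (illustrative names, not the actual app-server internals):
```rust
// Run the slow status query on a spawned task so the request handler does
// not block the message loop.
#[tokio::main]
async fn main() {
    let handle = tokio::spawn(async {
        // Stand-in for querying each configured MCP server's status.
        vec![("server-a", "ok"), ("server-b", "starting")]
    });
    // The caller can keep processing other requests here; await when needed.
    for (name, status) in handle.await.expect("task panicked") {
        println!("{name}: {status}");
    }
}
```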
2025-12-17 10:11:02 -08:00
Michael Bolin
de3fa03e1c
feat: change ConfigLayerName into a disjoint union rather than a simple enum (#8095)
This attempts to tighten up the types related to "config layers."
Currently, `ConfigLayerEntry` is defined as follows:


bef36f4ae7/codex-rs/core/src/config_loader/state.rs (L19-L25)

but the `source` field is a bit of a lie, as:

- for `ConfigLayerName::Mdm`, it is
`"com.openai.codex/config_toml_base64"`
- for `ConfigLayerName::SessionFlags`, it is `"--config"`
- for `ConfigLayerName::User`, it is `"config.toml"` (just the file
name, not the path to the `config.toml` on disk that was read)
- for `ConfigLayerName::System`, it seems like it is usually
`/etc/codex/managed_config.toml` in practice, though on Windows, it is
`%CODEX_HOME%/managed_config.toml`:


bef36f4ae7/codex-rs/core/src/config_loader/layer_io.rs (L84-L101)

All that is to say, for three of the four `ConfigLayerName` variants,
`source` is a `PathBuf` that is not an absolute path (or even a true path).

This PR tries to uplevel things by eliminating `source` from
`ConfigLayerEntry` and turning `ConfigLayerName` into a disjoint union
named `ConfigLayerSource` that has the appropriate metadata for each
variant, favoring the use of `AbsolutePathBuf` where appropriate:

```rust
pub enum ConfigLayerSource {
    /// Managed preferences layer delivered by MDM (macOS only).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    Mdm { domain: String, key: String },
    /// Managed config layer from a file (usually `managed_config.toml`).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    System { file: AbsolutePathBuf },
    /// Session-layer overrides supplied via `-c`/`--config`.
    SessionFlags,
    /// User config layer from a file (usually `config.toml`).
    #[serde(rename_all = "camelCase")]
    #[ts(rename_all = "camelCase")]
    User { file: AbsolutePathBuf },
}
```
2025-12-17 08:13:59 -08:00
Celia Chen
70913effc3
[app-server] add new RawResponseItem v2 event (#8152)
`codex/event/raw_response_item` (v1) -> `rawResponseItem/completed`
(v2).

test client log:
```
< {
<   "method": "codex/event/raw_response_item",
<   "params": {
<     "conversationId": "019b29f7-b089-7140-a535-3fe681562c15",
<     "id": "0",
<     "msg": {
<       "item": {
<         "arguments": "{\"command\":\"sed -n '1,160p' Cargo.toml\",\"workdir\":\"/Users/celia/code/codex/codex-rs\"}",
<         "call_id": "call_DrqbdB2jPxezPWc19YVEEt3h",
<         "name": "shell_command",
<         "type": "function_call"
<       },
<       "type": "raw_response_item"
<     }
<   }
< }
< {
<   "method": "rawResponseItem/completed",
<   "params": {
<     "item": {
<       "arguments": "{\"command\":\"sed -n '1,160p' Cargo.toml\",\"workdir\":\"/Users/celia/code/codex/codex-rs\"}",
<       "call_id": "call_DrqbdB2jPxezPWc19YVEEt3h",
<       "name": "shell_command",
<       "type": "function_call"
<     },
<     "threadId": "019b29f7-b089-7140-a535-3fe681562c15",
<     "turnId": "0"
<   }
< }
```
2025-12-17 02:19:30 +00:00
Shijie Rao
600d01b33a
chore: update listMcpServers to listMcpServerStatus (#8114)
### Summary
* rename app server `listMcpServers` to `listMcpServerStatuses`.
2025-12-16 15:28:45 -08:00
Owen Lin
412dd37956
chore(app-server): remove stubbed thread/compact API (#8086)
We want to rely on server-side auto-compaction instead of having the
client trigger context compaction manually. This API was stubbed as a
placeholder and never implemented.
2025-12-16 01:11:01 +00:00
iceweasel-oai
b4635ccc07
better name for windows sandbox features (#8077)
`--enable enable...` is a bad look
2025-12-15 10:15:40 -08:00
xl-openai
5d77d4db6b
Reimplement skills loading using SkillsManager + skills/list op. (#7914)
Refactor the way we load and manage skills:
1. Move skill discovery/caching into SkillsManager and reuse it across
sessions.
2. Add the skills/list API (Op::ListSkills/SkillsListResponse) to fetch
skills for one or more cwds. Also update app-server for VSCE/App.
3. Trigger skills/list during session startup so UIs preload skills and
handle errors immediately.
2025-12-14 09:58:17 -08:00
Anton Panasenko
ad7b9d63c3
[codex] add otel tracing (#7844) 2025-12-12 17:07:17 -08:00
Michael Bolin
b1905d3754
fix: added test helpers for platform-specific paths (#7954)
This addresses post-merge feedback from
https://github.com/openai/codex/pull/7856.
2025-12-13 00:14:12 +00:00
Michael Bolin
642b7566df
fix: introduce AbsolutePathBuf as part of sandbox config (#7856)
Changes the `writable_roots` field of the `WorkspaceWrite` variant of
the `SandboxPolicy` enum from `Vec<PathBuf>` to `Vec<AbsolutePathBuf>`.
This is helpful because now callers can be sure the value is an absolute
path rather than a relative one. (Though when using an absolute path in
a Seatbelt config policy, we still have to _canonicalize_ it first.)

Because `writable_roots` can be read from a config file, it is important
that we are able to resolve relative paths properly using the parent
folder of the config file as the base path.
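A minimal sketch of that resolution rule (assumed helper, not the crate's actual API):
```rust
use std::path::{Path, PathBuf};

// Resolve a `writable_roots` entry read from a config file: relative
// entries are joined onto the config file's parent folder.
fn resolve_writable_root(config_file: &Path, entry: &Path) -> PathBuf {
    if entry.is_absolute() {
        entry.to_path_buf()
    } else {
        config_file
            .parent()
            .unwrap_or(Path::new("/"))
            .join(entry)
    }
}

fn main() {
    let root = resolve_writable_root(
        Path::new("/home/user/.codex/config.toml"),
        Path::new("scratch"),
    );
    assert_eq!(root, PathBuf::from("/home/user/.codex/scratch"));
}
```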
2025-12-12 15:25:22 -08:00
jif-oai
92098d36e8
feat: clean config loading and config api (#7924)
Check the README of the `config_loader` for details
2025-12-12 12:01:24 -08:00
Ahmed Ibrahim
149696d959
chores: models manager (#7937) 2025-12-12 18:59:39 +00:00
Shijie Rao
163a7e317e
feat: use latest disk value for mcp servers status (#7907)
### Summary
Instead of using a stale in-memory config value for listing MCP server
statuses, we pull the latest value from disk.
2025-12-11 18:56:55 -08:00
Victor Vannara
95f7d37ec6
Fix misleading 'maximize' high effort description on xhigh models (#7874)
## Notes
- switch misleading High reasoning effort descriptions from "Maximizes
reasoning depth" to "Higher reasoning depth" across models with xhigh
reasoning. Affects GPT-5.1 Codex Max and Robin
- refresh model list fixtures and chatwidget snapshots to match new copy

## Revision
- R2: Change 'Higher' to 'Greater'
- R1: Initial

## Testing

<img width="583" height="142" alt="image"
src="https://github.com/user-attachments/assets/1ddd8971-7841-4cb3-b9ba-91095a7435d2"
/>

<img width="838" height="142" alt="image"
src="https://github.com/user-attachments/assets/79aaedbf-7624-4695-b822-93dea7d6a800"
/>
2025-12-11 16:38:52 -08:00
Ahmed Ibrahim
b7fa7ca8e9
Update Model Info (#7853) 2025-12-11 14:06:07 -08:00
Ahmed Ibrahim
238ce7dfad
feat: robin (#7882)
<img width="554" height="554" alt="image"
src="https://github.com/user-attachments/assets/aa86f4c8-fb34-4b0e-8b03-3a9980dfdb08"
/>

---------

Co-authored-by: Dylan Hurd <dylan.hurd@openai.com>
2025-12-11 09:04:08 -08:00
jif-oai
29381ba5c2
feat: add shell snapshot for shell command (#7786) 2025-12-11 13:46:43 +00:00
Dylan Hurd
dca7f4cb60
fix(stuff) (#7855)
Co-authored-by: Ahmed Ibrahim <aibrahim@openai.com>
2025-12-11 00:39:47 -08:00
Celia Chen
ce19dbbb22
[app-server] Update readme to include mcp endpoints (#7850)
n/a
2025-12-11 01:08:31 +00:00
Celia Chen
7cabe54fc7
[app-server] make app server not throw error when login id is not found (#7831)
Our previous design of the cancellation endpoint was not idempotent,
which caused a bunch of flaky tests. Make the app server just return a
not_found status instead of throwing an error if the login id is not
found. Keep V1 endpoint behavior the same.
2025-12-10 16:19:40 -08:00
Javi
e2559ab28d
fix: thread/list returning fewer than the requested amount due to filtering CXA-293 (#7509)
This caused some conversations to not appear when they otherwise should.

Prior to this change, `thread/list`/`list_conversations_common` would:
- Fetch N conversations from `RolloutRecorder::list_conversations`
- Then it would filter those (like by the provided `model_providers`)
- This would make it potentially return fewer than N items.

With this change:
- `list_conversations_common` now continues fetching more conversations
from `RolloutRecorder::list_conversations` until it "fills up" the
`requested_page_size`.
- Ultimately this means that clients can rely on getting, e.g., 20
conversations if they request 20 conversations.
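A minimal sketch of the fill-up strategy (helper names and the stub data source are assumptions):
```rust
// Keep fetching and filtering until the requested page size is met or the
// underlying source is exhausted.
fn list_threads(requested_page_size: usize) -> Vec<String> {
    let mut results = Vec::new();
    let mut cursor = 0;
    while results.len() < requested_page_size {
        let batch = fetch_page(cursor, requested_page_size);
        if batch.is_empty() {
            break; // no more conversations on disk
        }
        cursor += batch.len();
        // Filtering (e.g. by `model_providers`) may drop items, which is
        // why a single fetch can come up short.
        results.extend(batch.into_iter().filter(|t| passes_filters(t)));
    }
    results.truncate(requested_page_size);
    results
}

// Stub source holding 40 conversations in total.
fn fetch_page(cursor: usize, n: usize) -> Vec<String> {
    (cursor..(cursor + n).min(40)).map(|i| format!("thread-{i}")).collect()
}

// Stub filter: pretend threads ending in "1" use a filtered-out provider.
fn passes_filters(t: &str) -> bool {
    !t.ends_with('1')
}

fn main() {
    assert_eq!(list_threads(20).len(), 20);
}
```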
2025-12-10 23:06:32 +00:00
Celia Chen
bfb4d5710b
[app-server-protocol] Add types for config (#7658)
Currently the config returned by `config/read` is untyped. Add types so
it's easier for clients to parse the config. Since configs are currently
all defined in snake case, we'll keep that instead of using camel case
like the rest of V2.

Sample output by testing using the app server test client:
```
{
<   "id": "f28449f4-b015-459b-b07b-eef06980165d",
<   "result": {
<     "config": {
<       "approvalPolicy": null,
<       "compactPrompt": null,
<       "developerInstructions": null,
<       "features": {
<         "experimental_use_rmcp_client": true
<       },
<       "forcedChatgptWorkspaceId": null,
<       "forcedLoginMethod": null,
<       "instructions": null,
<       "model": "gpt-5.1-codex-max",
<       "modelAutoCompactTokenLimit": null,
<       "modelContextWindow": null,
<       "modelProvider": null,
<       "modelReasoningEffort": null,
<       "modelReasoningSummary": null,
<       "modelVerbosity": null,
<       "model_providers": {
<         "local": {
<           "base_url": "http://localhost:8061/api/codex",
<           "env_http_headers": {
<             "ChatGPT-Account-ID": "OPENAI_ACCOUNT_ID"
<           },
<           "env_key": "CHATGPT_TOKEN_STAGING",
<           "name": "local",
<           "wire_api": "responses"
<         }
<       },
<       "model_reasoning_effort": "medium",
<       "notice": {
<         "hide_gpt-5.1-codex-max_migration_prompt": true,
<         "hide_gpt5_1_migration_prompt": true
<       },
<       "profile": null,
<       "profiles": {},
<       "projects": {
<         "/Users/celia/code": {
<           "trust_level": "trusted"
<         },
<         "/Users/celia/code/codex": {
<           "trust_level": "trusted"
<         },
<         "/Users/celia/code/openai": {
<           "trust_level": "trusted"
<         }
<       },
<       "reviewModel": null,
<       "sandboxMode": null,
<       "sandboxWorkspaceWrite": null,
<       "tools": {
<         "viewImage": null,
<         "webSearch": null
<       }
<     },
<     "origins": {
<       "features.experimental_use_rmcp_client": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_providers.local.base_url": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_providers.local.env_http_headers.ChatGPT-Account-ID": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_providers.local.env_key": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_providers.local.name": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_providers.local.wire_api": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "model_reasoning_effort": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "notice.hide_gpt-5.1-codex-max_migration_prompt": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "notice.hide_gpt5_1_migration_prompt": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "projects./Users/celia/code.trust_level": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "projects./Users/celia/code/codex.trust_level": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "projects./Users/celia/code/openai.trust_level": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       },
<       "tools.web_search": {
<         "name": "user",
<         "source": "/Users/celia/.codex/config.toml",
<         "version": "sha256:a1d8eaedb5d9db5dfdfa69f30fa9df2efec66bb4dd46aa67f149fcc67cd0711c"
<       }
<     }
<   }
< }
```
2025-12-10 21:35:31 +00:00
Ahmed Ibrahim
cb9a189857
make model optional in config (#7769)
- Make Config.model optional and centralize default-selection logic in
ModelsManager, including a default_model helper (with
codex-auto-balanced when available) so sessions now carry an explicit
chosen model separate from the base config.
- Resolve `model` from config once in `core` and `tui`, then store that
state on other structs.
- Move model refreshing to before resolving the default model.
2025-12-10 11:19:00 -08:00
Celia Chen
8a71f8b634
[app-server] Make sure that config writes preserve comments & order of configs (#7789)
Make sure that config writes preserve comments and order of configs by
utilizing the ConfigEditsBuilder in core.
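A minimal sketch of the comment/order-preserving round-trip, using the `toml_edit` crate (an assumption; `ConfigEditsBuilder` internals are not shown in this commit message):
```rust
fn main() {
    let original = "# my settings\nmodel = \"gpt-5.1-codex-max\"\nsandbox_mode = \"read-only\"\n";
    let mut doc: toml_edit::DocumentMut = original.parse().expect("valid TOML");
    // Edit one key; comments and the order of the other keys survive.
    doc["model"] = toml_edit::value("some-other-model");
    let updated = doc.to_string();
    assert!(updated.starts_with("# my settings\n"));
    println!("{updated}");
}
```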

Tested by running a real example and made sure that nothing in the
config file changes other than the configs to edit.
2025-12-10 19:14:27 +00:00
Eric Traut
c4af707e09
Removed experimental "command risk assessment" feature (#7799)
This experimental feature received a lukewarm reception during internal
testing. Removing it from the code base.
2025-12-10 09:48:11 -08:00
zhao-oai
e0fb3ca1db
refactoring with_escalated_permissions to use SandboxPermissions instead (#7750)
Helpful in the future if we want more granularity for requesting
escalated permissions: e.g. when running in a read-only sandbox, the
model can request escalation to a sandbox that allows writes.
2025-12-10 17:18:48 +00:00