Commit graph

18 commits

Author SHA1 Message Date
Eric Traut
c4af707e09
Removed experimental "command risk assessment" feature (#7799)
This experimental feature received a lukewarm reception during internal
testing. Removing it from the code base.
2025-12-10 09:48:11 -08:00
Robby He
57ba9fa100
fix(doc): TOML otel exporter example — multi-line inline table is invalid (#7668) (#7669)

The `otel` exporter example in `docs/config.md` is misleading and will cause
the configuration parser to fail if copied verbatim.

Summary
-------
The example uses a TOML inline table but spreads the inline-table braces
across multiple lines. TOML inline tables must be contained on a single line
(`key = { a = 1, b = 2 }`); placing newlines inside the braces triggers a
parse error in most TOML parsers and prevents Codex from starting.

Reproduction
------------
1. Paste the snippet below into `~/.codex/config.toml` (or your project config).
2. Run `codex` (or the command that loads the config).
3. The process will fail to start with a TOML parse error similar to:

```text
Error loading config.toml: TOML parse error at line 55, column 27
   |
55 | exporter = { otlp-http = {
   |                           ^
newlines are unsupported in inline tables, expected nothing
```

Problematic snippet (as currently shown in the docs)
---------------------------------------------------
```toml
[otel]
exporter = { otlp-http = {
  endpoint = "https://otel.example.com/v1/logs",
  protocol = "binary",
  headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" }
}}
```

Recommended fixes
------------------
```toml
[otel.exporter."otlp-http"]
endpoint = "https://otel.example.com/v1/logs"
protocol = "binary"

[otel.exporter."otlp-http".headers]
"x-otlp-api-key" = "${OTLP_TOKEN}"
```

Or, keep an inline table but write it on one line (valid but less readable):

```toml
[otel]
exporter = { "otlp-http" = { endpoint = "https://otel.example.com/v1/logs", protocol = "binary", headers = { "x-otlp-api-key" = "${OTLP_TOKEN}" } } }
```
2025-12-08 01:20:23 -08:00
Jay Sabva
9a74228c66
docs: Remove experimental_use_rmcp_client from config (#7672)
Removed experimental Rust MCP client option from config.
2025-12-06 16:51:07 -08:00
liam
4d4778ec1c
Trim history.jsonl when history.max_bytes is set (#6242)
This PR honors the `history.max_bytes` configuration parameter by
trimming `history.jsonl` whenever it grows past the configured limit.
While appending new entries, we retain the newest record, drop the oldest
lines to stay within the byte budget, and serialize the compacted file
back to disk under the same lock to keep writers safe.
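
For reference, the configuration this PR honors; the limit shown is an
arbitrary example value, not a default:

```toml
[history]
# Trim history.jsonl down to this many bytes by dropping the oldest lines.
max_bytes = 1048576  # example: 1 MiB
```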
2025-12-02 14:01:05 -08:00
Kaden Gruizenga
41760f8a09
docs: clarify codex max defaults and xhigh availability (#7449)
## Summary
Adds the missing `xhigh` reasoning level everywhere it should have been
documented, and makes clear that it only works with `gpt-5.1-codex-max`
(see the example snippet after the change list).

## Changes

* `docs/config.md`
  * Add `xhigh` to the official list of reasoning levels, with a note
    that `xhigh` is exclusive to Codex Max.
* `docs/example-config.md`
  * Update the example comment, adding `xhigh` as a valid option but
    only for Codex Max.
* `docs/faq.md`
  * Update the model recommendation to `GPT-5.1 Codex Max`.
  * Mention that users can choose `high` or the newly documented
    `xhigh` level when using Codex Max.
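
For context, a minimal snippet for opting into the new level, assuming
the `model` and `model_reasoning_effort` keys described in
`docs/config.md`:

```toml
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"  # xhigh only works with gpt-5.1-codex-max
```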
2025-12-01 10:46:53 -08:00
Eric Traut
fabdbfef9c
Fixes two bugs in example-config.md documentation (#7324)
This PR is a modified version of [a
PR](https://github.com/openai/codex/pull/7316) submitted by @yydrowz3.
* Removes a redundant `experimental_sandbox_command_assessment` flag
* Moves `mcp_oauth_credentials_store` out of the `[features]` table,
where it doesn't belong
2025-11-26 09:52:13 -08:00
Eric Traut
207d94b0e7
Removed streamable_shell from docs (#7235)
This config option no longer exists.

Addresses #7207
2025-11-24 11:47:57 -08:00
jif-oai
af65666561
chore: drop model_max_output_tokens (#7100)
2025-11-21 17:42:54 +00:00
Eric Traut
d909048a85
Added feature switch to disable animations in TUI (#6870)
This PR adds support for a new feature flag, `tui.animations`. By
default, the TUI uses animations in its welcome screen, "working"
spinners, and "shimmer" effects. These animations can interfere with
screen readers, so it's good to provide a way to disable them (see the
example after the numbered list below).

This change is inspired by [a
PR](https://github.com/openai/codex/pull/4014) contributed by @Orinks.
That PR has faltered a bit, but I think the core idea is sound. This
version incorporates feedback from @aibrahim-oai. In particular:
1. It uses a feature flag (`tui.animations`) rather than the unqualified
CLI key `no-animations`. Feature flags are the preferred way to expose
boolean switches. They are also exposed via CLI command switches.
2. It includes more complete documentation.
3. It disables a few animations that the other PR omitted.
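
For reference, a minimal sketch for turning the animations off, assuming
the flag sits under the `[tui]` table like the other `tui.*` settings:

```toml
[tui]
animations = false  # disable welcome-screen, spinner, and shimmer animations
```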
2025-11-20 10:40:08 -08:00
Ahmed Ibrahim
d5dfba2509
feat: arcticfox in the wild (#6906)
![image](https://github.com/user-attachments/assets/4341740d-dd58-4a3e-b69a-33a3be0606c5)

---------

Co-authored-by: jif-oai <jif@openai.com>
2025-11-19 16:31:06 +00:00
Ahmed Ibrahim
793063070b
fix: typos in model picker (#6859)
2025-11-19 06:29:02 +00:00
Anton Panasenko
f7a921039c
[codex][otel] support mtls configuration (#6228)
Fix for https://github.com/openai/codex/issues/6153

Supports mTLS configuration and includes TLS features in the library
build to enable secure HTTPS connections with custom root certificates.

References:
- gRPC: https://docs.rs/tonic/0.13.1/src/tonic/transport/channel/endpoint.rs.html#63
- HTTPS: https://docs.rs/reqwest/0.12.23/src/reqwest/async_impl/client.rs.html#516
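
For illustration only, a hedged sketch of what an mTLS-enabled exporter
entry might look like. The TLS key names below are hypothetical
placeholders, not confirmed config keys; consult `docs/config.md` for
the real schema:

```toml
[otel.exporter."otlp-grpc"]
endpoint = "https://otel.example.com"
# Hypothetical key names for illustration; verify against docs/config.md.
ca-certificate = "/etc/ssl/certs/internal-ca.pem"
client-certificate = "/etc/ssl/certs/codex-client.pem"
client-key = "/etc/ssl/private/codex-client.key"
```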
2025-11-18 14:01:01 -08:00
Ahmed Ibrahim
3de8790714
Add the utility to truncate by tokens (#6746)
- This PR puts us on the path toward truncating by tokens. This path will
initially be used by unified exec and the context manager (responsible
mainly for MCP calls).
- We are exposing a new config, `calls_output_max_tokens` (see the
snippet after this list).
- Use `tokens` as the main budget unit but truncate based on the model
family by introducing `TruncationPolicy`.
- Introduce `truncate_text` as a router for truncation based on the
mode.
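
A rough sketch of the new knob; its placement in `config.toml` is not
spelled out in this description, so the top-level position and value
below are assumptions:

```toml
# Assumed placement and value; the PR names only the key itself.
calls_output_max_tokens = 8192  # token budget for unified exec / MCP call outputs
```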

In upcoming PRs:
- Remove `truncate_with_line_bytes_budget`.
- Add the ability for the model to override the token budget.
2025-11-18 11:36:23 -08:00
Ahmed Ibrahim
ddcc60a085
Update defaults to gpt-5.1 (#6652)
## Summary
- update documentation, example configs, and automation defaults to
reference gpt-5.1 / gpt-5.1-codex
- bump the CLI and core configuration defaults, model presets, and error
messaging to the new models while keeping the model-family/tool coverage
for legacy slugs
- refresh tests, fixtures, and TUI snapshots so they expect the upgraded
defaults
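
For anyone pinning a model explicitly, the standard `model` key in
`config.toml` overrides the new default; the value shown is one of the
upgraded defaults:

```toml
model = "gpt-5.1-codex"  # or a legacy slug, which remains supported
```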

## Testing
- `cargo test -p codex-core
config::tests::test_precedence_fixture_with_gpt5_profile`


------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6916c5b3c2b08321ace04ee38604fc6b)
2025-11-17 17:40:11 -08:00
Jeremy Rose
799364de87
Enable TUI notifications by default (#6633)
## Summary
- default the `tui.notifications` setting to enabled so desktop
notifications work out of the box
- update configuration tests and documentation to reflect the new
default
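
For reference, opting back out of the new default via `config.toml`,
assuming the setting keeps its `tui.notifications` path:

```toml
[tui]
notifications = false  # notifications now default to on; set false to opt out
```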

## Testing
- `cargo test -p codex-core` *(fails:
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
is flaky in this sandbox because the spawned grandchild process stays
alive)*
- `cargo test -p codex-core
exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`
*(fails: same sandbox limitation as above)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_69166f811144832c9e8aaf8ee2642373)
2025-11-14 09:28:09 -08:00
Ahmed Ibrahim
e743d251a7
Add opt-out for rate limit model nudge (#6433)
## Summary
- add a `hide_rate_limit_model_nudge` notice flag plus config edit
plumbing so the rate limit reminder preference is persisted and
documented
- extend the chat widget prompt with a "never show again" option, and
wire new app events so selecting it hides future nudges immediately and
writes the config
- add unit coverage and refresh the snapshot for the three-option prompt
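
As a sketch of the persisted preference; the PR calls
`hide_rate_limit_model_nudge` a "notice flag" but does not state its
table in `config.toml`, so the placement below is an assumption:

```toml
# Assumed placement; the PR describes this only as a "notice flag".
[notice]
hide_rate_limit_model_nudge = true  # never show the rate limit nudge again
```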

## Testing
- `just fmt`
- `just fix -p codex-tui`
- `just fix -p codex-core`
- `cargo test -p codex-tui`
- `cargo test -p codex-core` *(fails at
`exec::tests::kill_child_process_group_kills_grandchildren_on_timeout`:
grandchild process still alive)*

------
[Codex
Task](https://chatgpt.com/codex/tasks/task_i_6910d7f407748321b2661fc355416994)
2025-11-10 09:21:53 -08:00
pakrym-oai
c368c6aeea
Remove shell tool when unified exec is enabled (#6345)
Also drop the streamable shell, which is just an alias for unified exec.
2025-11-06 15:46:24 -08:00
Lucas Freire Sangoi
ab63a47173
docs: add example config.toml (#5175)
I was missing an example config.toml, and following docs/config.md alone
was slower. I had GPT-5 scan the codebase for every accepted config key,
check the defaults, and generate a single example config.toml with
annotations. It lists all keys Codex reads from TOML, sets each to its
effective default where it exists, leaves optional ones commented, and
adds short comments on purpose and valid values. This should make
onboarding faster and reduce configuration errors. I can rename it to
config.example.toml or move it under docs/ if you prefer.
2025-11-03 18:19:26 -08:00