docs: clarify codex max defaults and xhigh availability (#7449)
## Summary

Adds the missing `xhigh` reasoning level everywhere it should have been documented, and makes clear it only works with `gpt-5.1-codex-max`.

## Changes

* `docs/config.md`
  * Add `xhigh` to the official list of reasoning levels, with a note that `xhigh` is exclusive to Codex Max.
* `docs/example-config.md`
  * Update the example comment to add `xhigh` as a valid option, but only for Codex Max.
* `docs/faq.md`
  * Update the model recommendation to `GPT-5.1 Codex Max`.
  * Mention that users can choose `high` or the newly documented `xhigh` level when using Codex Max.
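For quick reference, the pairing this commit documents can be written as a `config.toml` fragment. This is an illustrative sketch: `model_reasoning_effort` appears in the diffs below, while the `model` key is assumed from Codex's standard configuration.

```toml
# Illustrative sketch of ~/.codex/config.toml.
# `xhigh` is accepted only by gpt-5.1-codex-max; other models top out at "high".
model = "gpt-5.1-codex-max"
model_reasoning_effort = "xhigh"
```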
This commit is contained in:
parent 440c7acd8f
commit 41760f8a09

3 changed files with 3 additions and 2 deletions
`docs/config.md`

```diff
@@ -195,6 +195,7 @@ If the selected model is known to support reasoning (for example: `o3`, `o4-mini
 - `"low"`
 - `"medium"` (default)
 - `"high"`
+- `"xhigh"` (available only on `gpt-5.1-codex-max`)

 Note: to minimize reasoning, choose `"minimal"`.
```
`docs/example-config.md`

```diff
@@ -37,7 +37,7 @@ model_provider = "openai"
 # Reasoning & Verbosity (Responses API capable models)
 ################################################################################

-# Reasoning effort: minimal | low | medium | high (default: medium)
+# Reasoning effort: minimal | low | medium | high | xhigh (default: medium; xhigh only on gpt-5.1-codex-max)
 model_reasoning_effort = "medium"

 # Reasoning summary: auto | concise | detailed | none (default: auto)
```
`docs/faq.md`

```diff
@@ -8,7 +8,7 @@ In 2021, OpenAI released Codex, an AI system designed to generate code from natu
 ### Which models are supported?

-We recommend using Codex with GPT-5.1 Codex, our best coding model. The default reasoning level is medium, and you can upgrade to high for complex tasks with the `/model` command.
+We recommend using Codex with GPT-5.1 Codex Max, our best coding model. The default reasoning level is medium, and you can upgrade to high or xhigh (Codex Max only) for complex tasks with the `/model` command.

 You can also use older models by using API-based auth and launching codex with the `--model` flag.
```