## Summary

This PR improves the TUI experience for `request_user_input` by rendering submitted question/answer sets directly in conversation history with clear, structured formatting. It also intentionally simplifies interrupt behavior for now: on `Esc` / `Ctrl+C`, the questions overlay interrupts the turn without attempting to submit partial answers.

<img width="1344" height="573" alt="Screenshot 2026-02-02 at 4 51 40 PM" src="https://github.com/user-attachments/assets/ff752131-7060-44c1-9ded-af061969a533" />

## Scope

- TUI-only changes.
- No core/protocol/app-server behavior changes in this PR.
- Resume reconstruction of interrupted question sets is out of scope for this PR.

## What Changed

- Added a new history cell: `RequestUserInputResultCell` in `codex-rs/tui/src/history_cell.rs`.
- On normal `request_user_input` submission, the TUI now inserts that history cell immediately after sending `Op::UserInputAnswer`.
- Rendering includes a `Questions` header with an `answered/total` count.
- Rendering shows each question as a bullet item.
- Rendering styles submitted answer lines in cyan.
- Rendering styles notes (for option questions) as `note:` lines in cyan.
- Rendering styles freeform text (for no-option questions) as `answer:` lines in cyan.
- Rendering dims only the `(unanswered)` suffix.
- Rendering can include an interrupted suffix and summary text when the cell is marked interrupted.
- Rendering redacts secret questions as `••••••` instead of showing raw values.
- Added `wrap_with_prefix(...)` in `history_cell.rs` for wrapped prefixed lines.
- Added `split_request_user_input_answer(...)` in `history_cell.rs` for decoding `"user_note: ..."` entries (see the sketch at the end of this description).

## Interrupt Behavior (Intentional for this PR)

- `Esc` / `Ctrl+C` in the questions overlay now performs `Op::Interrupt` and exits the overlay.
- It does **not** submit partial/committed answers on interrupt.
- Added TODO comments in the `request_user_input` overlay interrupt paths indicating where interrupted partial result emission should be reintroduced once core support is finalized.
- Queued `request_user_input` overlays are discarded on interrupt in the current behavior.

## Tests Updated

- Updated/added overlay tests in `codex-rs/tui/src/bottom_pane/request_user_input/mod.rs` to reflect interrupt-only behavior.
- Added a helper assertion for the interrupt-only event expectation.
- Existing submission-path tests now validate history insertion behavior and expected answer maps.

## Behavior Notes

- Completed question flows now produce a readable `Questions` block in transcript history.
- Interrupted flows currently do not persist partial answers to model-visible tool output.

## Follow-ups

- Reintroduce partial-answer-on-interrupt semantics once core can persist/sequence interrupted `request_user_input` outputs safely.
- Optionally add replay/resume rendering for interrupted question sets as a separate PR.

## Codex author

`codex fork 019bfb8d-2a65-7313-9be2-ea7100d19a61`
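For illustration, below is a minimal sketch of the kind of decoding `split_request_user_input_answer(...)` could perform on `"user_note: ..."` entries before the note is rendered as a cyan `note:` line. The signature, return type, and exact prefix handling shown here are assumptions made for the sketch, not the actual helper in `history_cell.rs`.

```rust
/// Hypothetical sketch: split a raw answer entry into the answer text and an
/// optional note, assuming notes are encoded as a trailing "user_note: ..."
/// segment. The real helper in history_cell.rs may differ.
fn split_request_user_input_answer(raw: &str) -> (String, Option<String>) {
    const NOTE_PREFIX: &str = "user_note: ";
    match raw.split_once(NOTE_PREFIX) {
        // e.g. "Option A\nuser_note: prefers dark mode"
        //   -> ("Option A", Some("prefers dark mode"))
        Some((answer, note)) => (
            answer.trim_end().to_string(),
            Some(note.trim().to_string()).filter(|n| !n.is_empty()),
        ),
        // No note segment: the whole entry is the (possibly freeform) answer.
        None => (raw.trim_end().to_string(), None),
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn splits_answer_and_note() {
        let (answer, note) =
            split_request_user_input_answer("Option A\nuser_note: prefers dark mode");
        assert_eq!(answer, "Option A");
        assert_eq!(note.as_deref(), Some("prefers dark mode"));
    }

    #[test]
    fn passes_through_plain_answers() {
        let (answer, note) = split_request_user_input_answer("freeform text");
        assert_eq!(answer, "freeform text");
        assert!(note.is_none());
    }
}
```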
# README.md
```shell
npm i -g @openai/codex
```

or `brew install --cask codex`
Codex CLI is a coding agent from OpenAI that runs locally on your computer.
If you want Codex in your code editor (VS Code, Cursor, Windsurf), install it in your IDE.
If you are looking for the cloud-based agent from OpenAI, Codex Web, go to chatgpt.com/codex.
## Quickstart

### Installing and running Codex CLI
Install globally with your preferred package manager:

```shell
# Install using npm
npm install -g @openai/codex

# Install using Homebrew
brew install --cask codex
```

Then simply run `codex` to get started.
You can also go to the latest GitHub Release and download the appropriate binary for your platform.
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
  - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
  - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Linux
  - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
  - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`

Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
### Using Codex with your ChatGPT plan

Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.

You can also use Codex with an API key, but this requires additional setup.
This repository is licensed under the Apache-2.0 License.