## Summary

Fixes a streaming issue where Claude models return only 1-4 characters instead of full responses when used through certain API providers/proxies.

## Environment

- **OS**: Windows
- **Models affected**: Claude models (e.g., claude-haiku-4-5-20251001)
- **API provider**: AAAI API proxy (https://api.aaai.vip/v1)
- **Working models**: GLM and Google models work correctly

## Problem

When using Claude models in both TUI and exec modes, only 1-4 characters are displayed even though the backend receives the full response. Debug logs revealed that some API providers send SSE chunks whose `finish_reason` is an empty string during active streaming, rather than `null` or omitting the field entirely. The current code treats any non-null `finish_reason` as a termination signal, so the stream exits prematurely after the first chunk.

## Solution

Fix empty `finish_reason` handling in `chat_completions.rs` by only treating non-empty `finish_reason` values as termination signals. Empty strings are ignored and streaming continues normally (see the sketch after this description).

## Testing

- Tested on Windows with a Claude Haiku model via the AAAI API proxy
- Full responses are now received and displayed correctly in both TUI and exec modes
- Other models (GLM, Google) continue to work as expected
- No regression in existing functionality

## Impact

- Improves compatibility with API providers that send an empty `finish_reason` during streaming
- Enables Claude models to work correctly on Windows
- No breaking changes to existing functionality

## Related Issues

This fix resolves the issue where Claude models appeared to return incomplete responses. The root cause was a compatibility issue in parsing SSE responses from certain API providers/proxies, rather than a model-specific problem. The change improves overall robustness when working with various API endpoints.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
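The check itself is small. Here is a minimal sketch, not the actual `chat_completions.rs` types: the struct and function names are illustrative, and it assumes the chunk's `finish_reason` deserializes to an `Option<String>`:

```rust
// Illustrative stand-in for the deserialized SSE chunk choice.
#[derive(serde::Deserialize)]
struct StreamChoice {
    #[serde(default)]
    finish_reason: Option<String>,
}

/// Only a non-empty finish_reason (e.g. "stop", "length") ends the stream;
/// a missing field, `null`, and "" all mean "keep streaming".
fn is_terminal(choice: &StreamChoice) -> bool {
    matches!(choice.finish_reason.as_deref(), Some(reason) if !reason.is_empty())
}

fn main() {
    let still_streaming: StreamChoice =
        serde_json::from_str(r#"{"finish_reason": ""}"#).unwrap();
    let done: StreamChoice =
        serde_json::from_str(r#"{"finish_reason": "stop"}"#).unwrap();
    assert!(!is_terminal(&still_streaming));
    assert!(is_terminal(&done));
}
```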
# Codex CLI

```shell
npm i -g @openai/codex
```

or

```shell
brew install --cask codex
```
Codex CLI is a coding agent from OpenAI that runs locally on your computer.
If you want Codex in your code editor (VS Code, Cursor, Windsurf), install it in your IDE. If you are looking for Codex Web, the cloud-based agent from OpenAI, go to chatgpt.com/codex.
## Quickstart

### Installing and running Codex CLI
Install globally with your preferred package manager. If you use npm:

```shell
npm install -g @openai/codex
```

Alternatively, if you use Homebrew:

```shell
brew install --cask codex
```

Then simply run `codex` to get started:

```shell
codex
```
If you're running into upgrade issues with Homebrew, see the FAQ entry on `brew upgrade codex`.
You can also go to the latest GitHub Release and download the appropriate binary for your platform.
Each GitHub Release contains many executables, but in practice, you likely want one of these:
- macOS
  - Apple Silicon/arm64: `codex-aarch64-apple-darwin.tar.gz`
  - x86_64 (older Mac hardware): `codex-x86_64-apple-darwin.tar.gz`
- Linux
  - x86_64: `codex-x86_64-unknown-linux-musl.tar.gz`
  - arm64: `codex-aarch64-unknown-linux-musl.tar.gz`
Each archive contains a single entry with the platform baked into the name (e.g., `codex-x86_64-unknown-linux-musl`), so you likely want to rename it to `codex` after extracting it.
### Using Codex with your ChatGPT plan

Run `codex` and select **Sign in with ChatGPT**. We recommend signing into your ChatGPT account to use Codex as part of your Plus, Pro, Team, Edu, or Enterprise plan. Learn more about what's included in your ChatGPT plan.
You can also use Codex with an API key, but this requires additional setup. If you previously used an API key for usage-based billing, see the migration steps. If you're having trouble with login, please comment on this issue.
### Model Context Protocol (MCP)
Codex can access MCP servers. To configure them, refer to the config docs.
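As a rough illustration of the shape of such an entry (the server name, command, and arguments below are placeholders; the config docs remain the authoritative reference), an MCP server is declared in `~/.codex/config.toml` under an `mcp_servers` table:

```toml
# Placeholder MCP server entry; substitute a real server name and launch command.
[mcp_servers.example-server]
command = "npx"
args = ["-y", "some-mcp-server"]
env = { "EXAMPLE_API_KEY" = "value" }
```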
### Configuration

Codex CLI supports a rich set of configuration options, with preferences stored in `~/.codex/config.toml`. For full configuration options, see Configuration.
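For instance, a minimal `~/.codex/config.toml` might look like the following; the values are illustrative, and the Configuration docs list the supported keys and values:

```toml
# Illustrative settings only; consult the Configuration docs for details.
model = "gpt-5"
approval_policy = "on-request"
sandbox_mode = "workspace-write"
```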
## Docs & FAQ
- Getting started
- Configuration
- Sandbox & approvals
- Authentication
- Automating Codex
- Advanced
- Zero data retention (ZDR)
- Contributing
- Install & build
- FAQ
- Open source fund
## License
This repository is licensed under the Apache-2.0 License.