zbarsky-openai 2a06d64bc9
feat: add support for building with Bazel (#8875)
This PR configures Codex CLI so it can be built with
[Bazel](https://bazel.build) in addition to Cargo. The `.bazelrc`
includes configuration so that remote builds can be done using
[BuildBuddy](https://www.buildbuddy.io).

If you are familiar with Bazel, things should work as you would expect,
e.g., `bazel test //... --keep-going` runs all the tests in the repo,
but we have also added some new aliases in the `justfile` for
convenience:

- `just bazel-test` to run tests locally
- `just bazel-remote-test` to run tests remotely (currently, the remote
build targets x86_64 Linux regardless of your host platform). Note that we
are still seeing the following test failures in the remote build and have
not yet tracked down the cause:

```
failures:
    suite::compact::manual_compact_twice_preserves_latest_user_messages
    suite::compact_resume_fork::compact_resume_after_second_compaction_preserves_history
    suite::compact_resume_fork::compact_resume_and_fork_preserve_model_history_view
```

- `just build-for-release` to build release binaries for all
platforms/architectures remotely

To set up remote execution:
- [Create a BuildBuddy account](https://app.buildbuddy.io/) (OpenAI
employees should also request org access at
https://openai.buildbuddy.io/join/ with their `@openai.com` email
address).
- [Copy your API key](https://app.buildbuddy.io/docs/setup/) to
`~/.bazelrc` (add the line `build
--remote_header=x-buildbuddy-api-key=YOUR_KEY`)
- Use `--config=remote` in your `bazel` invocations (or add `common
--config=remote` to your `~/.bazelrc`, or use the `just` commands)
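
Concretely, after those steps a minimal `~/.bazelrc` contains just the two
lines mentioned above (substitute your own BuildBuddy API key for
`YOUR_KEY`):

```
# ~/.bazelrc
# Authenticate remote builds/caching against BuildBuddy.
build --remote_header=x-buildbuddy-api-key=YOUR_KEY
# Optional: apply --config=remote to every invocation by default.
common --config=remote
```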

## CI

For CI, this PR introduces `.github/workflows/bazel.yml`, which
uses Bazel to run the tests _locally_ on macOS and Linux GitHub runners
(Windows support is in progress but not ready yet). Note
that the failures we are seeing in `just bazel-remote-test` do not occur
in these GitHub CI jobs, so everything in `.github/workflows/bazel.yml`
is currently green.

The `bazel.yml` workflow uses extra config in `.github/workflows/ci.bazelrc` so
that macOS CI jobs build _remotely_ on Linux hosts (using the
`docker://docker.io/mbolin491/codex-bazel` Docker image declared in the
root `BUILD.bazel`), cross-compiling the macOS artifacts. These artifacts
are then downloaded to GitHub's macOS runners so the tests can be executed
natively. This is the relevant config that enables this:

```
# With --config=macos in effect, also apply the repo's remote (BuildBuddy) config...
common:macos --config=remote
# ...and execute actions remotely (on Linux) by default...
common:macos --strategy=remote
# ...but run test actions (TestRunner) locally on the macOS host, sandboxed where possible.
common:macos --strategy=TestRunner=darwin-sandbox,local
```

Because of the remote caching benefits we get from BuildBuddy, these new
CI jobs can be extremely fast! For example, consider these two jobs that
ran all the tests on Linux x86_64:

- Bazel (1m37s):
https://github.com/openai/codex/actions/runs/20861063212/job/59940545209?pr=8875
- Cargo (9m20s):
https://github.com/openai/codex/actions/runs/20861063192/job/59940559592?pr=8875

For now, we will continue to run both the Bazel and Cargo jobs for PRs,
but once we add support for Windows and running Clippy, we should be
able to cut over to using Bazel exclusively for PRs, which should still
speed things up considerably. We will probably continue to run the Cargo
jobs post-merge for commits that land on `main` as a sanity check.

Release builds will also continue to be done by Cargo for now.

Earlier attempt at this PR: https://github.com/openai/codex/pull/8832
Earlier attempt to add support for Buck2, now abandoned:
https://github.com/openai/codex/pull/8504

---------

Co-authored-by: David Zbarsky <dzbarsky@gmail.com>
Co-authored-by: Michael Bolin <mbolin@openai.com>
2026-01-09 11:09:43 -08:00
| Path | Last change | Date |
|------|-------------|------|
| src | Add feature for optional request compression (#8767) | 2026-01-07 13:21:40 -08:00 |
| tests | Add feature for optional request compression (#8767) | 2026-01-07 13:21:40 -08:00 |
| BUILD.bazel | feat: add support for building with Bazel (#8875) | 2026-01-09 11:09:43 -08:00 |
| Cargo.toml | Add models endpoint (#7603) | 2025-12-04 12:57:54 -08:00 |
| README.md | chore: proper client extraction (#6996) | 2025-11-25 18:06:12 +00:00 |

# codex-api

Typed clients for Codex/OpenAI APIs built on top of the generic transport in `codex-client`.

- Hosts the request/response models and prompt helpers for the Responses, Chat Completions, and Compact APIs.
- Owns provider configuration (base URLs, headers, query params), auth header injection, retry tuning, and stream idle settings.
- Parses SSE streams into `ResponseEvent`/`ResponseStream`, including rate-limit snapshots and API-specific error mapping.
- Serves as the wire-level layer consumed by `codex-core`; higher layers handle auth refresh and business logic.

## Core interface

The public interface of this crate is intentionally small and uniform:

- Prompted endpoints (Chat + Responses)
  - Input: a single `Prompt` plus endpoint-specific options.
    - `Prompt` (re-exported as `codex_api::Prompt`) carries:
      - `instructions: String` - the fully-resolved system prompt for this turn.
      - `input: Vec<ResponseItem>` - conversation history and user/tool messages.
      - `tools: Vec<serde_json::Value>` - JSON tools compatible with the target API.
      - `parallel_tool_calls: bool`.
      - `output_schema: Option<Value>` - used to build `text.format` when present.
  - Output: a `ResponseStream` of `ResponseEvent` (both re-exported from `common`).
- Compaction endpoint
  - Input: `CompactionInput<'a>` (re-exported as `codex_api::CompactionInput`):
    - `model: &str`.
    - `input: &[ResponseItem]` - the history to compact.
    - `instructions: &str` - the fully-resolved compaction instructions.
  - Output: `Vec<ResponseItem>`.
  - `CompactClient::compact_input(&CompactionInput, extra_headers)` wraps the JSON encoding and retry/telemetry wiring.
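
As a rough sketch of the inputs described above, a caller might assemble the
two request types like this. This is illustrative only: `prompt_for_turn` and
`compaction_for` are hypothetical helpers, the `ResponseItem` import path is an
assumption about the workspace layout, and the struct literals assume the types
expose exactly the fields listed (the real definitions may have additional
fields or constructors).

```rust
use codex_api::{CompactionInput, Prompt};
// Protocol item type; this exact path is an assumption, not a documented re-export.
use codex_protocol::models::ResponseItem;

// Hypothetical helper: assemble the Prompt for one turn from existing history.
fn prompt_for_turn(instructions: String, history: Vec<ResponseItem>) -> Prompt {
    Prompt {
        // Fully-resolved system prompt for this turn.
        instructions,
        // Conversation history plus user/tool messages.
        input: history,
        // JSON tool definitions compatible with the target API (none for this turn).
        tools: Vec::new(),
        parallel_tool_calls: false,
        // When Some(schema), this is used to build `text.format` on the request.
        output_schema: None,
    }
}

// Hypothetical helper: borrow existing history into a CompactionInput.
fn compaction_for<'a>(history: &'a [ResponseItem], instructions: &'a str) -> CompactionInput<'a> {
    CompactionInput {
        // Placeholder model slug; use whatever your provider config expects.
        model: "gpt-5.1-codex",
        input: history,
        instructions,
    }
}
```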

All HTTP details (URLs, headers, retry/backoff policies, SSE framing) are encapsulated in `codex-api` and `codex-client`. Callers construct prompts/inputs using protocol types and work with typed streams of `ResponseEvent` or compacted `ResponseItem` values.