Commit graph

30 commits

Author SHA1 Message Date
jif-oai
e9335374b9
feat: add phase 1 mem client (#10629)
Adding a client on top of https://github.com/openai/openai/pull/672176
2026-02-04 17:59:36 +00:00
Rasmus Rygaard
df000da917
Add a codex.rate_limits event for websockets (#10324)
When communicating over websockets, we can't rely on headers to deliver
rate limit information. This PR adds a `codex.rate_limits` event that
the server can pass to the client to inform them about rate limit usage.
The client parses this data the same way we parse rate limit headers in
HTTP mode.

This PR also wires up the ETag and reasoning headers for websockets.
2026-02-04 06:01:47 -08:00
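Since the event replaces the rate-limit headers used in HTTP mode, a minimal sketch of how a client might model and parse such a payload with serde is shown below. The field names (`used_percent`, `window_minutes`, `resets_in_seconds`) are assumptions for illustration, not the actual wire format.

```rust
use serde::Deserialize;

/// Hypothetical shape of a `codex.rate_limits` payload; the real wire
/// format is defined by the server, and these field names are guesses.
#[derive(Debug, Deserialize)]
struct RateLimitsEvent {
    /// Percentage of the rate-limit window already consumed.
    used_percent: f64,
    /// Length of the rate-limit window, in minutes.
    window_minutes: u64,
    /// Seconds until the window resets.
    resets_in_seconds: u64,
}

fn main() -> Result<(), serde_json::Error> {
    // In HTTP mode these values arrive as headers; over websockets they
    // arrive as an event body like this one.
    let payload = r#"{"used_percent":42.5,"window_minutes":300,"resets_in_seconds":1800}"#;
    let event: RateLimitsEvent = serde_json::from_str(payload)?;
    println!("{event:?}");
    Ok(())
}
```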
jif-oai
d5e7248958
feat: clean codex-api part 1 (#10501) 2026-02-03 14:08:09 +00:00
jif-oai
88598b9402
feat: drop wire_api from clients (#10498) 2026-02-03 12:43:09 +00:00
jif-oai
d2394a2494
chore: nuke chat/completions API (#10157) 2026-02-03 11:31:57 +00:00
sayan-oai
fc05374344
chore: add phase to message responseitem (#10455)
### What

Add wiring for the `phase` field on `ResponseItem::Message` to lay the
groundwork for differentiating model preambles from final messages. The
field is currently optional.

Follows the pattern in #9698.

Updated the schemas with `just write-app-server-schema` so the type
changes are visible.

### Tests
Updated existing tests for SSE parsing and hydrating from history
2026-02-03 02:52:26 +00:00
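A minimal sketch of what an optional `phase` field can look like on a message item, assuming serde-based serialization. The real `ResponseItem::Message` variant has more fields, and the phase values shown ("preamble", "final") are illustrative.

```rust
use serde::{Deserialize, Serialize};

/// Sketch of a message item carrying an optional `phase`.
#[derive(Debug, Serialize, Deserialize)]
struct Message {
    role: String,
    content: String,
    /// Differentiates model preambles from final messages; optional so
    /// existing payloads without the field still round-trip.
    #[serde(skip_serializing_if = "Option::is_none")]
    phase: Option<String>,
}

fn main() {
    // Payloads that predate the field deserialize with `phase: None`.
    let legacy: Message =
        serde_json::from_str(r#"{"role":"assistant","content":"hi"}"#).unwrap();
    assert!(legacy.phase.is_none());

    let preamble = Message {
        role: "assistant".into(),
        content: "Let me look at the tests first.".into(),
        phase: Some("preamble".into()),
    };
    println!("{}", serde_json::to_string(&preamble).unwrap());
}
```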
Anton Panasenko
101d359cd7
Add websocket telemetry metrics and labels (#10316)
Summary
- expose websocket telemetry hooks through the responses client so
request durations and event processing can be reported
- record websocket request/event metrics and emit runtime telemetry
events that the history UI now surfaces
- improve tests to cover websocket telemetry reporting and guard runtime
summary updates
2026-01-31 19:16:44 -08:00
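A rough sketch of what such telemetry hooks could look like when exposed through the responses client; the trait and method names here are hypothetical, not codex's actual API.

```rust
use std::time::Duration;

/// Hypothetical hook surface the client calls around websocket activity.
trait WsTelemetry: Send + Sync {
    /// Called once per websocket request with its total duration.
    fn on_request(&self, duration: Duration, ok: bool);
    /// Called for each server event processed on the connection.
    fn on_event(&self, event_kind: &str, processing: Duration);
}

/// A trivial sink that just logs; a real implementation would record
/// metrics with labels (transport, event kind, outcome).
struct LogTelemetry;

impl WsTelemetry for LogTelemetry {
    fn on_request(&self, duration: Duration, ok: bool) {
        println!("ws request ok={ok} took {duration:?}");
    }
    fn on_event(&self, event_kind: &str, processing: Duration) {
        println!("ws event {event_kind} processed in {processing:?}");
    }
}

fn main() {
    let telemetry = LogTelemetry;
    telemetry.on_request(Duration::from_millis(120), true);
    telemetry.on_event("codex.rate_limits", Duration::from_micros(80));
}
```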
Anton Panasenko
e117a3ff33
feat: support proxy for ws connection (#9719)
Reapplies the websocket changes without changing the TLS library.
2026-01-22 15:23:15 -08:00
pakrym-oai
b511c38ddb
Support end_turn flag (#9698)
Experimental flag that signals the end of the turn.
2026-01-22 17:27:48 +00:00
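A small sketch of how a client might consume such an experimental flag, treating its absence as "turn not over" so older servers keep working; the event shape is assumed.

```rust
use serde::Deserialize;

/// Sketch: an experimental, optional `end_turn` flag on a streamed event.
#[derive(Debug, Deserialize)]
struct StreamEvent {
    #[serde(rename = "type")]
    kind: String,
    end_turn: Option<bool>,
}

fn turn_is_over(ev: &StreamEvent) -> bool {
    // Missing flag means the turn continues, keeping old servers compatible.
    ev.end_turn.unwrap_or(false)
}

fn main() {
    let ev: StreamEvent =
        serde_json::from_str(r#"{"type":"response.completed","end_turn":true}"#).unwrap();
    assert!(turn_is_over(&ev));
    println!("{} ends turn: {}", ev.kind, turn_is_over(&ev));
}
```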
pakrym-oai
4d48d4e0c2
Revert "feat: support proxy for ws connection" (#9693)
Reverts openai/codex#9409
2026-01-22 15:57:18 +00:00
pakrym-oai
f2e1ad59bc
Add websockets logging (#9633)
To help with debugging.
2026-01-21 21:35:38 +00:00
pakrym-oai
527b7b4c02
Feature to auto-enable websockets transport (#9578) 2026-01-20 20:32:06 -08:00
Anton Panasenko
7b27aa7707
feat: support proxy for ws connection (#9409)
Unfortunately, tokio-tungstenite doesn't support proxy configuration out
of the box. While https://github.com/snapview/tokio-tungstenite/pull/370
is in review, we can depend on the source code for now.
2026-01-20 09:36:30 -08:00
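Because tokio-tungstenite accepts any pre-established stream, one standard workaround is to tunnel through the proxy with HTTP CONNECT and then hand the stream to the websocket handshake. A minimal sketch for a plain `ws://` target; the addresses are placeholders, and a `wss://` target would additionally need a TLS layer over the tunnel.

```rust
use tokio::io::{AsyncReadExt, AsyncWriteExt};
use tokio::net::TcpStream;
use tokio_tungstenite::client_async;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Placeholder addresses: a local HTTP proxy and a plain ws:// target.
    let proxy = "127.0.0.1:8080";
    let target = "example.com:80";
    let url = "ws://example.com/socket";

    // 1. Open a TCP connection to the proxy and ask it to tunnel to the target.
    let mut stream = TcpStream::connect(proxy).await?;
    let connect = format!("CONNECT {target} HTTP/1.1\r\nHost: {target}\r\n\r\n");
    stream.write_all(connect.as_bytes()).await?;

    // 2. Check the proxy's response. A robust client would read until the
    //    terminating blank line; a single read keeps the sketch short.
    let mut buf = [0u8; 1024];
    let n = stream.read(&mut buf).await?;
    let head = String::from_utf8_lossy(&buf[..n]);
    if !head.starts_with("HTTP/1.1 200") && !head.starts_with("HTTP/1.0 200") {
        return Err(format!("proxy refused CONNECT: {head}").into());
    }

    // 3. Run the websocket handshake over the established tunnel.
    let (_ws, response) = client_async(url, stream).await?;
    println!("websocket handshake status: {}", response.status());
    Ok(())
}
```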
Ahmed Ibrahim
b11e96fb04
Act on reasoning-included per turn (#9402)
- Reset reasoning-included flag each turn and update compaction test
2026-01-19 11:23:25 -08:00
Ahmed Ibrahim
ebdd8795e9
Turn-state sticky routing per turn (#9332)
- capture the header from SSE/WS handshakes, store it per
ModelClientSession using `OnceLock`, echo it on turn-scoped requests,
and add SSE+WS integration tests for within-turn persistence +
cross-turn reset.

- keep `x-codex-turn-state` sticky within a user turn to maintain
routing continuity for retries/tool follow-ups.
2026-01-16 09:30:11 -08:00
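A sketch of the sticky-routing mechanic described above: a `OnceLock` keeps the first `x-codex-turn-state` value seen on a handshake and ignores later ones, and a new session gives the cross-turn reset. Names mirror the PR description but are illustrative.

```rust
use std::sync::OnceLock;

/// Per-session holder for the turn-state routing header.
struct ModelClientSession {
    turn_state: OnceLock<String>,
}

impl ModelClientSession {
    fn new() -> Self {
        Self { turn_state: OnceLock::new() }
    }

    /// Record the header the first time we see it; later values are
    /// ignored, which is what keeps routing sticky within the turn.
    fn capture(&self, header_value: &str) {
        let _ = self.turn_state.set(header_value.to_string());
    }

    /// Value to echo back on turn-scoped requests, if we have one.
    fn header(&self) -> Option<&str> {
        self.turn_state.get().map(String::as_str)
    }
}

fn main() {
    let session = ModelClientSession::new();
    session.capture("shard-a");
    session.capture("shard-b"); // ignored: OnceLock keeps the first value
    assert_eq!(session.header(), Some("shard-a"));
    // Cross-turn reset = dropping this session and creating a new one.
}
```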
pakrym-oai
e726a82c8a
Websocket append support (#9128)
Support an incremental append request in websocket transport.
2026-01-13 06:07:13 +00:00
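A sketch of what a transport that distinguishes full and incremental requests might send; the message shapes are hypothetical, not codex's actual websocket protocol.

```rust
use serde::Serialize;

/// Hypothetical request shapes for a connection-reusing transport.
#[derive(Serialize)]
#[serde(tag = "type", rename_all = "snake_case")]
enum WsRequest {
    /// First request on a connection carries the complete input.
    Full { items: Vec<String> },
    /// Follow-ups carry only the delta, avoiding re-sending the prefix.
    Append { items: Vec<String> },
}

fn main() {
    let first = WsRequest::Full {
        items: vec!["system prompt".into(), "user: hi".into()],
    };
    let next = WsRequest::Append {
        items: vec!["user: and then?".into()],
    };
    println!("{}", serde_json::to_string(&first).unwrap());
    println!("{}", serde_json::to_string(&next).unwrap());
}
```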
pakrym-oai
d75626ad99
Reuse websocket connection (#9127)
Reuses the connection but still sends full requests.
2026-01-13 03:30:09 +00:00
pakrym-oai
490c1c1fdd
Add model client sessions (#9102)
Maintain a long-running session.
2026-01-13 01:15:56 +00:00
Channing Conger
21c6d40a44
Add feature for optional request compression (#8767)
Adds a new feature, `enable_request_compression`, that compresses
requests to the codex-backend with zstd. It is currently wired up only
for the codex-backend, so it takes effect only for OpenAI providers
using chatgpt::auth, even when the feature is enabled.

Also adds a new info log line for evaluating the compression ratio and
the overhead of compressing before sending. You can enable it with
`RUST_LOG=$RUST_LOG,codex_client::transport=info`:

```
2026-01-06T00:09:48.272113Z  INFO codex_client::transport: Compressed request body with zstd pre_compression_bytes=28914 post_compression_bytes=11485 compression_duration_ms=0
```
2026-01-07 13:21:40 -08:00
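A minimal sketch of the compression step using the `zstd` crate's `encode_all`, mirroring the log line quoted above; the compression level and the `Content-Encoding` header are assumptions.

```rust
use std::time::Instant;

/// Compress a serialized request body with zstd and log the ratio.
fn compress_body(body: &[u8]) -> std::io::Result<Vec<u8>> {
    let start = Instant::now();
    let compressed = zstd::encode_all(body, 3)?; // level 3 is an assumption
    println!(
        "Compressed request body with zstd pre_compression_bytes={} \
         post_compression_bytes={} compression_duration_ms={}",
        body.len(),
        compressed.len(),
        start.elapsed().as_millis()
    );
    Ok(compressed)
}

fn main() -> std::io::Result<()> {
    let body = vec![b'a'; 28_914]; // highly compressible stand-in payload
    let compressed = compress_body(&body)?;
    assert!(compressed.len() < body.len());
    // The request would then go out with a `Content-Encoding: zstd` header
    // (assumed; the backend must accept zstd-encoded bodies).
    Ok(())
}
```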
Ahmed Ibrahim
9179c9deac
Merge ModelFamily into ModelInfo (#8763)
- Merge ModelFamily into ModelInfo
- Remove logic for adding instructions to apply patch
- Add compaction limit and visible context window to `ModelInfo`
2026-01-07 10:35:09 -08:00
Ahmed Ibrahim
66b7c673e9
Refresh on models etag mismatch (#8491)
- Send models etag
- Refresh models on 412
- This wires `ModelsManager` to `ModelFamily` so we don't mutate it
mid-turn
2026-01-01 11:41:16 -08:00
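One plausible reading of that flow, sketched with `reqwest`: send the cached ETag, and on a 412 Precondition Failed re-fetch unconditionally. The endpoint, header choice, and retry policy are assumptions.

```rust
use reqwest::StatusCode;

/// Sketch: send the cached models ETag; on 412, drop it and re-fetch.
async fn fetch_models(
    client: &reqwest::Client,
    cached_etag: Option<&str>,
) -> reqwest::Result<String> {
    let url = "https://example.invalid/models"; // hypothetical endpoint
    let mut request = client.get(url);
    if let Some(etag) = cached_etag {
        request = request.header("If-Match", etag);
    }
    let response = request.send().await?;
    if response.status() == StatusCode::PRECONDITION_FAILED {
        // Our ETag is stale: refresh the models unconditionally.
        return client.get(url).send().await?.text().await;
    }
    response.text().await
}

#[tokio::main]
async fn main() -> reqwest::Result<()> {
    let client = reqwest::Client::new();
    let models = fetch_models(&client, Some(r#""abc123""#)).await?;
    println!("{models}");
    Ok(())
}
```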
Ahmed Ibrahim
40de81e7af
Remove reasoning format (#8484)
This isn't a very useful parameter.

The current logic:
```
if the model puts `**` in its reasoning, trim it and visualize the header.
if it couldn't be trimmed: don't render
if the model doesn't support it: don't render
```

We can simplify to:
```
if could trim, visualize header.
if not, don't render
```
2025-12-23 16:01:46 -08:00
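The simplified rule is easy to state in code. A sketch of the trim step (the function name is illustrative): if the reasoning starts with a bold `**…**` header, split it off and show it; otherwise render no header.

```rust
/// Split a leading `**…**` header off a reasoning string.
/// Returns (header, remainder) when trimming succeeds, None otherwise.
fn trim_header(reasoning: &str) -> Option<(&str, &str)> {
    let rest = reasoning.strip_prefix("**")?;
    let end = rest.find("**")?;
    Some((&rest[..end], rest[end + 2..].trim_start()))
}

fn main() {
    assert_eq!(
        trim_header("**Planning the fix** First I will look at the tests."),
        Some(("Planning the fix", "First I will look at the tests."))
    );
    // No `**` header: nothing to visualize.
    assert_eq!(trim_header("plain reasoning text"), None);
}
```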
jif-oai
ac6ba286aa
feat: experimental menu (#8071)
This will automatically render any `Stage::Beta` features.

The change only takes effect in the *next session*. This started as a
bug, but it's actually a good thing: it prevents an out-of-distribution
push.

2025-12-17 17:08:03 +00:00
jif-oai
d7482510b1
nit: trace span for regular task (#8053)
Logs are too spammy

---------

Co-authored-by: Anton Panasenko <apanasenko@openai.com>
2025-12-16 16:53:15 +00:00
Anton Panasenko
ad7b9d63c3
[codex] add otel tracing (#7844) 2025-12-12 17:07:17 -08:00
Ahmed Ibrahim
b7fa7ca8e9
Update Model Info (#7853) 2025-12-11 14:06:07 -08:00
Ahmed Ibrahim
222a491570
Load models from disk and set a TTL and ETag (#7722)
2025-12-08 13:43:04 -08:00
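A sketch of the caching idea named in the title: models metadata persisted to disk with a TTL check, revalidated via ETag once stale. Field names and the TTL value are assumptions, not codex's actual schema.

```rust
use std::time::{Duration, SystemTime};

/// Hypothetical on-disk cache entry for the models payload.
#[derive(Debug)]
struct CachedModels {
    etag: String,
    fetched_at: SystemTime,
    body: String,
}

impl CachedModels {
    /// Serve from disk while fresh; refetch (revalidating with the ETag)
    /// once the TTL has elapsed.
    fn is_fresh(&self, ttl: Duration) -> bool {
        self.fetched_at
            .elapsed()
            .map(|age| age < ttl)
            .unwrap_or(false)
    }
}

fn main() {
    let cache = CachedModels {
        etag: "\"abc123\"".to_string(),
        fetched_at: SystemTime::now(),
        body: "{\"models\": []}".to_string(),
    };
    if cache.is_fresh(Duration::from_secs(24 * 60 * 60)) {
        println!("using cached models (etag {})", cache.etag);
    } else {
        println!("stale: refetch, revalidating with etag {}", cache.etag);
    }
    let _ = &cache.body;
}
```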
Ahmed Ibrahim
53a486f7ea
Add remote models feature flag (#7648)
2025-12-07 09:47:48 -08:00
Ahmed Ibrahim
903b7774bc
Add models endpoint (#7603)
- Use the codex-api crate to introduce the models endpoint.
- Add `models` to codex core tests helpers
- Add `ModelsInfo` for the endpoint return type
2025-12-04 12:57:54 -08:00
jif-oai
4502b1b263
chore: proper client extraction (#6996) 2025-11-25 18:06:12 +00:00