Currently we don't load config properly for app server conversations. See: https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server. This PR fixes that by respecting the config passed in.

Tested by running:

```shell
cargo build -p codex-cli && \
  RUST_LOG=codex_app_server=debug CODEX_BIN=target/debug/codex \
  cargo run -p codex-app-server-test-client -- \
  --config model_providers.mock_provider.base_url="http://localhost:4010/v2" \
  --config model_provider="mock_provider" \
  --config model_providers.mock_provider.name="hello" \
  send-message-v2 "hello"
```

and verifying that `mock_provider` is called instead of the default provider.

Closes https://linear.app/openai/issue/CODEX-3956/config-flags-not-respected-in-codex-app-server

---------

Co-authored-by: Michael Bolin <mbolin@openai.com>
# App Server Test Client
Exercises simple codex app-server flows end-to-end, logging JSON-RPC messages sent between client and server to stdout.
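The `--config` flags shown in the PR description above use dotted keys to override nested config values. As an illustrative sketch (key names taken from those overrides; the exact schema is whatever the Codex config defines), the three overrides correspond to a TOML fragment like:

```toml
# Illustrative config.toml fragment — key names assumed from the
# --config overrides in the PR description, not an authoritative schema.

# Select the provider by name.
model_provider = "mock_provider"

# Nested table: model_providers.mock_provider.* dotted overrides land here.
[model_providers.mock_provider]
name = "hello"
base_url = "http://localhost:4010/v2"
```

Passing `--config model_providers.mock_provider.base_url="http://localhost:4010/v2"` on the command line should thus have the same effect as setting `base_url` inside the `[model_providers.mock_provider]` table.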