# codex-common
This crate is designed for utilities that need to be shared across other crates in the workspace, but should not go in `core`.
For narrow utility features, the pattern is to introduce a new feature under `[features]` in `Cargo.toml` and then gate it with `#[cfg]` in `lib.rs`, as appropriate.
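
As a rough sketch of that pattern (the feature name `elapsed` here is purely illustrative, not an actual feature of this crate):

```rust
// Hypothetical Cargo.toml additions:
//
//   [features]
//   elapsed = []
//
// Then in lib.rs, compile the corresponding module only when the
// feature is enabled, so downstream crates that don't opt in pay
// no compile-time cost:
#[cfg(feature = "elapsed")]
pub mod elapsed;
```

A consumer would then opt in with `codex-common = { path = "../common", features = ["elapsed"] }` in its own `Cargo.toml`.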