This pull request adds a new experimental JSON output format.
You can try it with `codex exec --experimental-json`.
The design takes heavy inspiration from the Responses API item and
stream formats.
# Session and items
Each invocation of `codex exec` starts or resumes a session.
A session contains multiple high-level item types:
1. Assistant message
2. Assistant thinking
3. Command execution
4. File changes
5. To-do lists
6. etc.
# Events
Sessions and items move through life cycles, which are represented by
events.
A session emits `session.created` or `session.resumed`.
Items emit `item.added`, `item.updated`, `item.completed`, and
`item.require_approval` (plus other event types, such as
`item.output_delta`, when streaming is needed).
A typical session might look like:
<details>
```
{
"type": "session.created",
"session_id": "01997dac-9581-7de3-b6a0-1df8256f2752"
}
{
"type": "item.completed",
"item": {
"id": "itm_0",
"item_type": "assistant_message",
"text": "I’ll locate the top-level README and remove its first line. Then I’ll show a quick summary of what changed."
}
}
{
"type": "item.completed",
"item": {
"id": "itm_1",
"item_type": "command_execution",
"command": "bash -lc ls -la | sed -n '1,200p'",
"aggregated_output": "pyenv: cannot rehash: /Users/pakrym/.pyenv/shims isn't writable\ntotal 192\ndrwxr-xr-x@ 33 pakrym staff 1056 Sep 24 14:36 .\ndrwxr-xr-x 41 pakrym staff 1312 Sep 24 09:17 ..\n-rw-r--r--@ 1 pakrym staff 6 Jul 9 16:16 .codespellignore\n-rw-r--r--@ 1 pakrym staff 258 Aug 13 09:40 .codespellrc\ndrwxr-xr-x@ 5 pakrym staff 160 Jul 23 08:26 .devcontainer\n-rw-r--r--@ 1 pakrym staff 6148 Jul 22 10:03 .DS_Store\ndrwxr-xr-x@ 15 pakrym staff 480 Sep 24 14:38 .git\ndrwxr-xr-x@ 12 pakrym staff 384 Sep 2 16:00 .github\n-rw-r--r--@ 1 pakrym staff 778 Jul 9 16:16 .gitignore\ndrwxr-xr-x@ 3 pakrym staff 96 Aug 11 09:37 .husky\n-rw-r--r--@ 1 pakrym staff 104 Jul 9 16:16 .npmrc\n-rw-r--r--@ 1 pakrym staff 96 Sep 2 08:52 .prettierignore\n-rw-r--r--@ 1 pakrym staff 170 Jul 9 16:16 .prettierrc.toml\ndrwxr-xr-x@ 5 pakrym staff 160 Sep 14 17:43 .vscode\ndrwxr-xr-x@ 2 pakrym staff 64 Sep 11 11:37 2025-09-11\n-rw-r--r--@ 1 pakrym staff 5505 Sep 18 09:28 AGENTS.md\n-rw-r--r--@ 1 pakrym staff 92 Sep 2 08:52 CHANGELOG.md\n-rw-r--r--@ 1 pakrym staff 1145 Jul 9 16:16 cliff.toml\ndrwxr-xr-x@ 11 pakrym staff 352 Sep 24 13:03 codex-cli\ndrwxr-xr-x@ 38 pakrym staff 1216 Sep 24 14:38 codex-rs\ndrwxr-xr-x@ 18 pakrym staff 576 Sep 23 11:01 docs\n-rw-r--r--@ 1 pakrym staff 2038 Jul 9 16:16 flake.lock\n-rw-r--r--@ 1 pakrym staff 1434 Jul 9 16:16 flake.nix\n-rw-r--r--@ 1 pakrym staff 10926 Jul 9 16:16 LICENSE\ndrwxr-xr-x@ 465 pakrym staff 14880 Jul 15 07:36 node_modules\n-rw-r--r--@ 1 pakrym staff 242 Aug 5 08:25 NOTICE\n-rw-r--r--@ 1 pakrym staff 578 Aug 14 12:31 package.json\n-rw-r--r--@ 1 pakrym staff 498 Aug 11 09:37 pnpm-lock.yaml\n-rw-r--r--@ 1 pakrym staff 58 Aug 11 09:37 pnpm-workspace.yaml\n-rw-r--r--@ 1 pakrym staff 2402 Jul 9 16:16 PNPM.md\n-rw-r--r--@ 1 pakrym staff 4393 Sep 12 14:36 README.md\ndrwxr-xr-x@ 4 pakrym staff 128 Sep 18 09:28 scripts\ndrwxr-xr-x@ 2 pakrym staff 64 Sep 11 11:34 tmp\n",
"exit_code": 0,
"status": "completed"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_2",
"item_type": "reasoning",
"text": "**Reviewing README.md file**\n\nI've located the README.md file at the root, and it’s 4393 bytes. Now, I need to remove the first line, but first, I should check its content to make sure I’m patching it correctly. I’ll use sed to display the first 20 lines. By reviewing those lines, I can determine exactly what needs to be removed before I proceed with the editing. Let's do this carefully!"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_3",
"item_type": "command_execution",
"command": "bash -lc sed -n '1,40p' README.md",
"aggregated_output": "<h1 align=\"center\">OpenAI Codex CLI</h1>\n\n<p align=\"center\"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>\n\n<p align=\"center\"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.\n</br>\n</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href=\"https://developers.openai.com/codex/ide\">install in your IDE</a>\n</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href=\"https://chatgpt.com/codex\">chatgpt.com/codex</a></p>\n\n<p align=\"center\">\n <img src=\"./.github/codex-cli-splash.png\" alt=\"Codex CLI splash\" width=\"80%\" />\n </p>\n\n---\n\n## Quickstart\n\n### Installing and running Codex CLI\n\nInstall globally with your preferred package manager. If you use npm:\n\n```shell\nnpm install -g @openai/codex\n```\n\nAlternatively, if you use Homebrew:\n\n```shell\nbrew install codex\n```\n\nThen simply run `codex` to get started:\n\n```shell\ncodex\n```\n\n<details>\n<summary>You can also go to the <a href=\"https://github.com/openai/codex/releases/latest\">latest GitHub Release</a> and download the appropriate binary for your platform.</summary>\n\n",
"exit_code": 0,
"status": "completed"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_4",
"item_type": "reasoning",
"text": "**Updating README file**\n\nThe user wants to update the README by removing the first line, which is an HTML h1 tag. I'll proceed to delete that line while keeping the rest intact. To do this, I need to use the apply_patch tool and create a patch in the diff format. I’ll take care to include context lines around the change. After removing the header, I’m considering whether to delete the following blank line as well to keep it tidy."
}
}
{
"type": "item.completed",
"item": {
"id": "itm_5",
"item_type": "reasoning",
"text": "**Crafting the patch**\n\nThe user has asked to remove just the first line, without mentioning the blank lines. Technically, leaving a blank line at the start is acceptable, but it's not ideal. Ideally, I'd prefer to remove the first line while maintaining the subsequent blank line. However, if I can also remove that blank line, it could tidy things up even more. Still, since the user's request is strictly to remove only the first line, I'll stick to that and craft the patch accordingly."
}
}
{
"type": "item.completed",
"item": {
"id": "itm_6",
"item_type": "assistant_message",
"text": "Applying the change by removing the first line from `README.md`."
}
}
{
"type": "item.completed",
"item": {
"id": "itm_7",
"item_type": "file_change",
"changes": [
{
"path": "/Users/pakrym/code/codex/README.md",
"kind": "update"
}
],
"status": "completed"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_8",
"item_type": "reasoning",
"text": "**Validating README Update**\n\nI updated the README.md and now I need to validate that the first line has been successfully removed. To do this, I'll check the first 10 lines of the document to ensure everything looks correct. It’s always good to double-check these kinds of changes, so I’m going to take that extra step to make sure the update was executed properly. Let's see what those initial lines say!"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_9",
"item_type": "command_execution",
"command": "bash -lc sed -n '1,20p' README.md",
"aggregated_output": "\n<p align=\"center\"><code>npm i -g @openai/codex</code><br />or <code>brew install codex</code></p>\n\n<p align=\"center\"><strong>Codex CLI</strong> is a coding agent from OpenAI that runs locally on your computer.\n</br>\n</br>If you want Codex in your code editor (VS Code, Cursor, Windsurf), <a href=\"https://developers.openai.com/codex/ide\">install in your IDE</a>\n</br>If you are looking for the <em>cloud-based agent</em> from OpenAI, <strong>Codex Web</strong>, go to <a href=\"https://chatgpt.com/codex\">chatgpt.com/codex</a></p>\n\n<p align=\"center\">\n <img src=\"./.github/codex-cli-splash.png\" alt=\"Codex CLI splash\" width=\"80%\" />\n </p>\n\n---\n\n## Quickstart\n\n### Installing and running Codex CLI\n\nInstall globally with your preferred package manager. If you use npm:\n\n",
"exit_code": 0,
"status": "completed"
}
}
{
"type": "item.completed",
"item": {
"id": "itm_10",
"item_type": "assistant_message",
"text": "- Updated `README.md:1` by removing the opening H1 line.\n- The README now starts with the install instructions paragraph.\n\nWant me to also remove the leading blank line at the top?"
}
}
```
</details>
The idea is to give users fully formatted items they can use directly in
their rendering/application logic, rather than making them build up
items manually from events (unless they want to, for streaming).
This PR implements only the `item.completed` payload for some event
types; more event types and item types are to come.
---------
Co-authored-by: Michael Bolin <mbolin@openai.com>
394 lines · 14 KiB · Rust
```rust
mod cli;
mod event_processor;
mod event_processor_with_human_output;
pub mod event_processor_with_json_output;
pub mod exec_events;
pub mod experimental_event_processor_with_json_output;

use std::io::IsTerminal;
use std::io::Read;
use std::path::PathBuf;

pub use cli::Cli;
use codex_core::AuthManager;
use codex_core::BUILT_IN_OSS_MODEL_PROVIDER_ID;
use codex_core::ConversationManager;
use codex_core::NewConversation;
use codex_core::config::Config;
use codex_core::config::ConfigOverrides;
use codex_core::git_info::get_git_repo_root;
use codex_core::protocol::AskForApproval;
use codex_core::protocol::Event;
use codex_core::protocol::EventMsg;
use codex_core::protocol::InputItem;
use codex_core::protocol::Op;
use codex_core::protocol::TaskCompleteEvent;
use codex_ollama::DEFAULT_OSS_MODEL;
use codex_protocol::config_types::SandboxMode;
use event_processor_with_human_output::EventProcessorWithHumanOutput;
use experimental_event_processor_with_json_output::ExperimentalEventProcessorWithJsonOutput;
use serde_json::Value;
use tracing::debug;
use tracing::error;
use tracing::info;
use tracing_subscriber::EnvFilter;

use crate::cli::Command as ExecCommand;
use crate::event_processor::CodexStatus;
use crate::event_processor::EventProcessor;
use crate::event_processor_with_json_output::EventProcessorWithJsonOutput;
use codex_core::find_conversation_path_by_id_str;

pub async fn run_main(cli: Cli, codex_linux_sandbox_exe: Option<PathBuf>) -> anyhow::Result<()> {
    let Cli {
        command,
        images,
        model: model_cli_arg,
        oss,
        config_profile,
        full_auto,
        dangerously_bypass_approvals_and_sandbox,
        cwd,
        skip_git_repo_check,
        color,
        last_message_file,
        json: json_mode,
        experimental_json,
        sandbox_mode: sandbox_mode_cli_arg,
        prompt,
        output_schema: output_schema_path,
        include_plan_tool,
        config_overrides,
    } = cli;

    // Determine the prompt source (parent or subcommand) and read from stdin if needed.
    let prompt_arg = match &command {
        // Allow prompt before the subcommand by falling back to the parent-level prompt
        // when the Resume subcommand did not provide its own prompt.
        Some(ExecCommand::Resume(args)) => args.prompt.clone().or(prompt),
        None => prompt,
    };

    let prompt = match prompt_arg {
        Some(p) if p != "-" => p,
        // Either `-` was passed or no positional arg.
        maybe_dash => {
            // When no arg (None) **and** stdin is a TTY, bail out early – unless the
            // user explicitly forced reading via `-`.
            let force_stdin = matches!(maybe_dash.as_deref(), Some("-"));

            if std::io::stdin().is_terminal() && !force_stdin {
                eprintln!(
                    "No prompt provided. Either specify one as an argument or pipe the prompt into stdin."
                );
                std::process::exit(1);
            }

            // Ensure the user knows we are waiting on stdin, as they may
            // have gotten into this state by mistake. If so, and they are not
            // writing to stdin, Codex will hang indefinitely, so this should
            // help them debug in that case.
            if !force_stdin {
                eprintln!("Reading prompt from stdin...");
            }
            let mut buffer = String::new();
            if let Err(e) = std::io::stdin().read_to_string(&mut buffer) {
                eprintln!("Failed to read prompt from stdin: {e}");
                std::process::exit(1);
            } else if buffer.trim().is_empty() {
                eprintln!("No prompt provided via stdin.");
                std::process::exit(1);
            }
            buffer
        }
    };

    let output_schema = load_output_schema(output_schema_path);

    let (stdout_with_ansi, stderr_with_ansi) = match color {
        cli::Color::Always => (true, true),
        cli::Color::Never => (false, false),
        cli::Color::Auto => (
            std::io::stdout().is_terminal(),
            std::io::stderr().is_terminal(),
        ),
    };

    // TODO(mbolin): Take a more thoughtful approach to logging.
    let default_level = "error";
    let _ = tracing_subscriber::fmt()
        // Fallback to the `default_level` log filter if the environment
        // variable is not set _or_ contains an invalid value
        .with_env_filter(
            EnvFilter::try_from_default_env()
                .or_else(|_| EnvFilter::try_new(default_level))
                .unwrap_or_else(|_| EnvFilter::new(default_level)),
        )
        .with_ansi(stderr_with_ansi)
        .with_writer(std::io::stderr)
        .try_init();

    let sandbox_mode = if full_auto {
        Some(SandboxMode::WorkspaceWrite)
    } else if dangerously_bypass_approvals_and_sandbox {
        Some(SandboxMode::DangerFullAccess)
    } else {
        sandbox_mode_cli_arg.map(Into::<SandboxMode>::into)
    };

    // When using `--oss`, let the bootstrapper pick the model (defaulting to
    // gpt-oss:20b) and ensure it is present locally. Also, force the built‑in
    // `oss` model provider.
    let model = if let Some(model) = model_cli_arg {
        Some(model)
    } else if oss {
        Some(DEFAULT_OSS_MODEL.to_owned())
    } else {
        None // No model specified, will use the default.
    };

    let model_provider = if oss {
        Some(BUILT_IN_OSS_MODEL_PROVIDER_ID.to_string())
    } else {
        None // No specific model provider override.
    };

    // Load configuration and determine approval policy
    let overrides = ConfigOverrides {
        model,
        review_model: None,
        config_profile,
        // This CLI is intended to be headless and has no affordances for asking
        // the user for approval.
        approval_policy: Some(AskForApproval::Never),
        sandbox_mode,
        cwd: cwd.map(|p| p.canonicalize().unwrap_or(p)),
        model_provider,
        codex_linux_sandbox_exe,
        base_instructions: None,
        include_plan_tool: Some(include_plan_tool),
        include_apply_patch_tool: None,
        include_view_image_tool: None,
        show_raw_agent_reasoning: oss.then_some(true),
        tools_web_search_request: None,
    };
    // Parse `-c` overrides.
    let cli_kv_overrides = match config_overrides.parse_overrides() {
        Ok(v) => v,
        Err(e) => {
            eprintln!("Error parsing -c overrides: {e}");
            std::process::exit(1);
        }
    };

    let config = Config::load_with_cli_overrides(cli_kv_overrides, overrides)?;
    let mut event_processor: Box<dyn EventProcessor> = match (json_mode, experimental_json) {
        (_, true) => Box::new(ExperimentalEventProcessorWithJsonOutput::new(
            last_message_file.clone(),
        )),
        (true, _) => {
            eprintln!(
                "The existing `--json` output format is being deprecated. Please try the new format using `--experimental-json`."
            );

            Box::new(EventProcessorWithJsonOutput::new(last_message_file.clone()))
        }
        _ => Box::new(EventProcessorWithHumanOutput::create_with_ansi(
            stdout_with_ansi,
            &config,
            last_message_file.clone(),
        )),
    };

    if oss {
        codex_ollama::ensure_oss_ready(&config)
            .await
            .map_err(|e| anyhow::anyhow!("OSS setup failed: {e}"))?;
    }

    let default_cwd = config.cwd.to_path_buf();
    let default_approval_policy = config.approval_policy;
    let default_sandbox_policy = config.sandbox_policy.clone();
    let default_model = config.model.clone();
    let default_effort = config.model_reasoning_effort;
    let default_summary = config.model_reasoning_summary;

    if !skip_git_repo_check && get_git_repo_root(&default_cwd).is_none() {
        eprintln!("Not inside a trusted directory and --skip-git-repo-check was not specified.");
        std::process::exit(1);
    }

    let conversation_manager =
        ConversationManager::new(AuthManager::shared(config.codex_home.clone()));

    // Handle resume subcommand by resolving a rollout path and using explicit resume API.
    let NewConversation {
        conversation_id: _,
        conversation,
        session_configured,
    } = if let Some(ExecCommand::Resume(args)) = command {
        let resume_path = resolve_resume_path(&config, &args).await?;

        if let Some(path) = resume_path {
            conversation_manager
                .resume_conversation_from_rollout(
                    config.clone(),
                    path,
                    AuthManager::shared(config.codex_home.clone()),
                )
                .await?
        } else {
            conversation_manager
                .new_conversation(config.clone())
                .await?
        }
    } else {
        conversation_manager
            .new_conversation(config.clone())
            .await?
    };
    // Print the effective configuration and prompt so users can see what Codex
    // is using.
    event_processor.print_config_summary(&config, &prompt, &session_configured);

    info!("Codex initialized with event: {session_configured:?}");

    let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel::<Event>();
    {
        let conversation = conversation.clone();
        tokio::spawn(async move {
            loop {
                tokio::select! {
                    _ = tokio::signal::ctrl_c() => {
                        tracing::debug!("Keyboard interrupt");
                        // Immediately notify Codex to abort any in‑flight task.
                        conversation.submit(Op::Interrupt).await.ok();

                        // Exit the inner loop and return to the main input prompt. The codex
                        // will emit a `TurnInterrupted` (Error) event which is drained later.
                        break;
                    }
                    res = conversation.next_event() => match res {
                        Ok(event) => {
                            debug!("Received event: {event:?}");

                            let is_shutdown_complete = matches!(event.msg, EventMsg::ShutdownComplete);
                            if let Err(e) = tx.send(event) {
                                error!("Error sending event: {e:?}");
                                break;
                            }
                            if is_shutdown_complete {
                                info!("Received shutdown event, exiting event loop.");
                                break;
                            }
                        },
                        Err(e) => {
                            error!("Error receiving event: {e:?}");
                            break;
                        }
                    }
                }
            }
        });
    }

    // Send images first, if any.
    if !images.is_empty() {
        let items: Vec<InputItem> = images
            .into_iter()
            .map(|path| InputItem::LocalImage { path })
            .collect();
        let initial_images_event_id = conversation.submit(Op::UserInput { items }).await?;
        info!("Sent images with event ID: {initial_images_event_id}");
        while let Ok(event) = conversation.next_event().await {
            if event.id == initial_images_event_id
                && matches!(
                    event.msg,
                    EventMsg::TaskComplete(TaskCompleteEvent {
                        last_agent_message: _,
                    })
                )
            {
                break;
            }
        }
    }

    // Send the prompt.
    let items: Vec<InputItem> = vec![InputItem::Text { text: prompt }];
    let initial_prompt_task_id = conversation
        .submit(Op::UserTurn {
            items,
            cwd: default_cwd,
            approval_policy: default_approval_policy,
            sandbox_policy: default_sandbox_policy,
            model: default_model,
            effort: default_effort,
            summary: default_summary,
            final_output_json_schema: output_schema,
        })
        .await?;
    info!("Sent prompt with event ID: {initial_prompt_task_id}");

    // Run the loop until the task is complete.
    while let Some(event) = rx.recv().await {
        let shutdown: CodexStatus = event_processor.process_event(event);
        match shutdown {
            CodexStatus::Running => continue,
            CodexStatus::InitiateShutdown => {
                conversation.submit(Op::Shutdown).await?;
            }
            CodexStatus::Shutdown => {
                break;
            }
        }
    }

    Ok(())
}

async fn resolve_resume_path(
    config: &Config,
    args: &crate::cli::ResumeArgs,
) -> anyhow::Result<Option<PathBuf>> {
    if args.last {
        match codex_core::RolloutRecorder::list_conversations(&config.codex_home, 1, None).await {
            Ok(page) => Ok(page.items.first().map(|it| it.path.clone())),
            Err(e) => {
                error!("Error listing conversations: {e}");
                Ok(None)
            }
        }
    } else if let Some(id_str) = args.session_id.as_deref() {
        let path = find_conversation_path_by_id_str(&config.codex_home, id_str).await?;
        Ok(path)
    } else {
        Ok(None)
    }
}

fn load_output_schema(path: Option<PathBuf>) -> Option<Value> {
    let path = path?;

    let schema_str = match std::fs::read_to_string(&path) {
        Ok(contents) => contents,
        Err(err) => {
            eprintln!(
                "Failed to read output schema file {}: {err}",
                path.display()
            );
            std::process::exit(1);
        }
    };

    match serde_json::from_str::<Value>(&schema_str) {
        Ok(value) => Some(value),
        Err(err) => {
            eprintln!(
                "Output schema file {} is not valid JSON: {err}",
                path.display()
            );
            std::process::exit(1);
        }
    }
}
```