core-agent-ide/codex-rs/tui2/src/streaming/mod.rs
Josh McKinney c92dbea7c1
tui2: stop baking streaming wraps; reflow agent markdown (#8761)
Background
Streaming assistant prose in tui2 was being rendered with viewport-width
wrapping during streaming, then stored in history cells as already split
`Line`s. Those width-derived breaks became indistinguishable from hard
newlines, so the transcript could not "un-split" on resize. This also
degraded copy/paste, since soft wraps looked like hard breaks.
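The failure mode can be shown with a toy wrapper (illustrative only, not the tui2 code): once a paragraph has been split at a width and stored as separate lines, a soft wrap is indistinguishable from a genuine hard newline.

```rust
/// Toy illustration (not tui2 code): naive width wrapping loses the
/// distinction between soft wraps and hard newlines.
fn wrap(text: &str, width: usize) -> Vec<String> {
    let mut out = Vec::new();
    for logical in text.split('\n') {
        let mut line = String::new();
        for word in logical.split_whitespace() {
            if !line.is_empty() && line.len() + 1 + word.len() > width {
                out.push(std::mem::take(&mut line));
            }
            if !line.is_empty() {
                line.push(' ');
            }
            line.push_str(word);
        }
        out.push(line);
    }
    out
}

fn main() {
    let soft = wrap("one two three four", 9); // wrapped purely by width
    let hard = wrap("one two\nthree four", 9); // contains a real hard newline
    // Stored as split lines, the two cases are identical, so a later
    // resize cannot tell which breaks it is allowed to undo:
    assert_eq!(soft, hard);
}
```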

What changed
- Introduce width-agnostic `MarkdownLogicalLine` output in
`tui2/src/markdown_render.rs`, preserving markdown wrap semantics:
initial/subsequent indents, per-line style, and a preformatted flag.
- Update the streaming collector (`tui2/src/markdown_stream.rs`) to emit
logical lines (newline-gated) and remove any captured viewport width.
- Update streaming orchestration (`tui2/src/streaming/*`) to queue and
emit logical lines, producing `AgentMessageCell::new_logical(...)`.
- Make `AgentMessageCell` store logical lines and wrap at render time in
`HistoryCell::transcript_lines_with_joiners(width)`, emitting joiners so
copy/paste can join soft-wrap continuations correctly.
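A sketch of the shape this implies (field and function names here are illustrative, not the actual `tui2` definitions): a logical line carries its wrap metadata, and visual wrapping happens only when a width is known, at render time.

```rust
/// Hypothetical sketch of a width-agnostic logical line; the real type
/// holds styled spans, this simplification uses plain text.
#[derive(Debug)]
struct MarkdownLogicalLine {
    text: String,              // styled spans in the real type
    initial_indent: String,    // prefix for the first visual line
    subsequent_indent: String, // prefix for wrapped continuations
    preformatted: bool,        // code lines must never be re-wrapped
}

/// Wrap one logical line into visual lines at render time.
fn wrap_logical(line: &MarkdownLogicalLine, width: usize) -> Vec<String> {
    if line.preformatted {
        return vec![format!("{}{}", line.initial_indent, line.text)];
    }
    let mut out = Vec::new();
    let mut cur = line.initial_indent.clone();
    let mut has_word = false;
    for word in line.text.split_whitespace() {
        if has_word && cur.len() + 1 + word.len() > width {
            out.push(cur);
            cur = line.subsequent_indent.clone();
            has_word = false;
        }
        if has_word {
            cur.push(' ');
        }
        cur.push_str(word);
        has_word = true;
    }
    out.push(cur);
    out
}

fn main() {
    let bullet = MarkdownLogicalLine {
        text: "alpha beta gamma".to_string(),
        initial_indent: "- ".to_string(),
        subsequent_indent: "  ".to_string(),
        preformatted: false,
    };
    // Narrow viewport: wraps with the list-continuation indent.
    assert_eq!(wrap_logical(&bullet, 10), vec!["- alpha", "  beta", "  gamma"]);
    // Wider viewport: the same logical line reflows to one visual line.
    assert_eq!(wrap_logical(&bullet, 80), vec!["- alpha beta gamma"]);
}
```

Because the stored unit is the logical line, rendering at a new width is just calling the wrap again; nothing was baked in.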

Overlay deferral
When an overlay is active, defer *cells* (not rendered `Vec<Line>`) and
render them at overlay close time. This avoids baking width-derived
wraps based on a stale width.
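The idea can be pictured with a minimal sketch (types and names hypothetical): the deferred queue holds unrendered cells, and a width is supplied only when the overlay closes.

```rust
// Illustrative sketch (names hypothetical): defer the unrendered cell,
// not its rendered lines, so the width used is the one current at close.
trait HistoryCell {
    fn render(&self, width: usize) -> Vec<String>;
}

/// Demo cell that wraps its text at the given byte width.
struct TextCell(String);

impl HistoryCell for TextCell {
    fn render(&self, width: usize) -> Vec<String> {
        self.0
            .as_bytes()
            .chunks(width)
            .map(|c| String::from_utf8_lossy(c).into_owned())
            .collect()
    }
}

struct DeferredInserts {
    cells: Vec<Box<dyn HistoryCell>>, // width-agnostic while deferred
}

impl DeferredInserts {
    /// Render everything with the width that is current *now*, at overlay
    /// close, rather than a width captured when the cells were produced.
    fn flush(&mut self, current_width: usize) -> Vec<String> {
        self.cells
            .drain(..)
            .flat_map(|cell| cell.render(current_width))
            .collect()
    }
}

fn main() {
    let mut deferred = DeferredInserts {
        cells: vec![Box::new(TextCell("abcdef".to_string()))],
    };
    assert_eq!(deferred.flush(3), vec!["abc", "def"]);
    assert!(deferred.cells.is_empty());
}
```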

Tests + docs
- Add resize/reflow regression tests + snapshots for streamed agent
output.
- Expand module/API docs for the new logical-line streaming pipeline and
clarify joiner semantics.
- Align scrollback-related docs/comments with current tui2 behavior
(main draw loop does not flush queued "history lines" to the terminal).
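The joiner semantics mentioned above can be pictured like this (illustrative types, not the tui2 API): each visual line records how it connects to the next, so copy/paste can rebuild logical text instead of treating every break as a newline.

```rust
/// How a visual line connects to the one after it (illustrative).
#[derive(Clone, Copy, Debug)]
enum Joiner {
    SoftWrap,  // continuation of the same logical line: join with a space
    HardBreak, // a genuine newline in the source text
}

/// Rebuild copy/paste text from visual lines plus their joiners.
fn join_for_copy(lines: &[(String, Joiner)]) -> String {
    let mut out = String::new();
    for (i, (line, joiner)) in lines.iter().enumerate() {
        out.push_str(line);
        if i + 1 < lines.len() {
            out.push(match joiner {
                Joiner::SoftWrap => ' ',
                Joiner::HardBreak => '\n',
            });
        }
    }
    out
}

fn main() {
    let lines = vec![
        ("one two".to_string(), Joiner::SoftWrap),  // wrapped by width
        ("three".to_string(), Joiner::HardBreak),   // real paragraph break
        ("four".to_string(), Joiner::HardBreak),
    ];
    assert_eq!(join_for_copy(&lines), "one two three\nfour");
}
```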

More details
See `codex-rs/tui2/docs/streaming_wrapping_design.md` for the full
problem statement and solution approach, and
`codex-rs/tui2/docs/tui_viewport_and_history.md` for viewport vs printed
output behavior.
2026-01-05 18:37:58 -08:00


//! Streaming state for newline-gated assistant output.
//!
//! The streaming pipeline in `tui2` is split into:
//!
//! - [`crate::markdown_stream::MarkdownStreamCollector`]: accumulates raw deltas and commits
//!   completed *logical* markdown lines (width-agnostic).
//! - [`StreamState`]: a small queue that supports "commit tick" animation by releasing at most one
//!   logical line per tick.
//! - [`controller::StreamController`]: orchestration (header emission, finalize/drain semantics,
//!   and converting queued logical lines into `HistoryCell`s).
//!
//! Keeping the queued units as logical lines (not wrapped visual lines) is essential for resize
//! reflow: visual wrapping depends on the current viewport width and must be performed at render
//! time inside the relevant history cell.
use std::collections::VecDeque;

use crate::markdown_render::MarkdownLogicalLine;
use crate::markdown_stream::MarkdownStreamCollector;

pub(crate) mod controller;

pub(crate) struct StreamState {
    pub(crate) collector: MarkdownStreamCollector,
    queued_lines: VecDeque<MarkdownLogicalLine>,
    pub(crate) has_seen_delta: bool,
}

impl StreamState {
    /// Create a fresh streaming state for one assistant message.
    pub(crate) fn new() -> Self {
        Self {
            collector: MarkdownStreamCollector::new(),
            queued_lines: VecDeque::new(),
            has_seen_delta: false,
        }
    }

    /// Reset state for the next stream.
    pub(crate) fn clear(&mut self) {
        self.collector.clear();
        self.queued_lines.clear();
        self.has_seen_delta = false;
    }

    /// Pop at most one queued logical line (for commit-tick animation).
    pub(crate) fn step(&mut self) -> Vec<MarkdownLogicalLine> {
        self.queued_lines.pop_front().into_iter().collect()
    }

    /// Drain all queued logical lines (used on finalize).
    pub(crate) fn drain_all(&mut self) -> Vec<MarkdownLogicalLine> {
        self.queued_lines.drain(..).collect()
    }

    /// True when there is no queued output waiting to be emitted by commit ticks.
    pub(crate) fn is_idle(&self) -> bool {
        self.queued_lines.is_empty()
    }

    /// Enqueue newly committed logical lines.
    pub(crate) fn enqueue(&mut self, lines: Vec<MarkdownLogicalLine>) {
        self.queued_lines.extend(lines);
    }
}
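A minimal usage sketch of the commit-tick queue above (simplified: `String` stands in for `MarkdownLogicalLine`, and the collector is elided):

```rust
use std::collections::VecDeque;

// Simplified stand-in for StreamState's queue behavior.
struct Queue {
    queued_lines: VecDeque<String>,
}

impl Queue {
    fn new() -> Self {
        Self { queued_lines: VecDeque::new() }
    }

    fn enqueue(&mut self, lines: Vec<String>) {
        self.queued_lines.extend(lines);
    }

    /// One commit tick releases at most one logical line.
    fn step(&mut self) -> Vec<String> {
        self.queued_lines.pop_front().into_iter().collect()
    }

    /// Finalize releases everything still queued.
    fn drain_all(&mut self) -> Vec<String> {
        self.queued_lines.drain(..).collect()
    }

    fn is_idle(&self) -> bool {
        self.queued_lines.is_empty()
    }
}

fn main() {
    let mut q = Queue::new();
    q.enqueue(vec!["first".into(), "second".into(), "third".into()]);
    assert_eq!(q.step(), vec!["first".to_string()]); // one line per tick
    assert_eq!(q.drain_all(), vec!["second".to_string(), "third".to_string()]);
    assert!(q.is_idle());
}
```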