Fix: Claude models return incomplete responses due to empty finish_reason handling (#6728)
## Summary

Fixes a streaming issue where Claude models return only 1-4 characters instead of full responses when used through certain API providers/proxies.

## Environment

- **OS**: Windows
- **Models affected**: Claude models (e.g., claude-haiku-4-5-20251001)
- **API Provider**: AAAI API proxy (https://api.aaai.vip/v1)
- **Working models**: GLM and Google models work correctly

## Problem

When using Claude models in both TUI and exec modes, only 1-4 characters are displayed even though the backend receives the full response. Debug logs revealed that some API providers send SSE chunks whose `finish_reason` is an empty string during active streaming, rather than null or omitting the field entirely. The current code treats any non-null `finish_reason` as a termination signal, so the stream exits prematurely after the first chunk.

## Solution

Fix empty `finish_reason` handling in chat_completions.rs by only acting on non-empty `finish_reason` values. Empty strings are ignored and streaming continues normally.

## Testing

- Tested on Windows with the Claude Haiku model via the AAAI API proxy
- Full responses are now received and displayed correctly in both TUI and exec modes
- Other models (GLM, Google) continue to work as expected
- No regression in existing functionality

## Impact

- Improves compatibility with API providers that send an empty `finish_reason` during streaming
- Enables Claude models to work correctly on Windows
- No breaking changes to existing functionality

## Related Issues

This fix resolves the issue where Claude models appeared to return incomplete responses. The root cause was a compatibility issue in parsing SSE responses from certain API providers/proxies, rather than a model-specific problem. This change improves overall robustness when working with various API endpoints.

---------

Co-authored-by: Eric Traut <etraut@openai.com>
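For illustration, a problematic chunk from such a provider might look like the following (every field except `finish_reason` is illustrative; the payload arrives as an SSE `data:` line). A compliant provider would send `"finish_reason": null` here while the stream is still active:

```json
{
  "id": "chatcmpl-123",
  "object": "chat.completion.chunk",
  "choices": [
    {
      "index": 0,
      "delta": { "content": "Hel" },
      "finish_reason": ""
    }
  ]
}
```

Because `""` is a valid JSON string, `as_str()` succeeds on it, and the old code fell through to the end-of-turn handling on the very first chunk.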
This commit is contained in:
parent 702238f004
commit de1768d3ba
1 changed file with 3 additions and 1 deletion
chat_completions.rs

```diff
@@ -673,7 +673,9 @@ async fn process_chat_sse<S>(
         }

         // Emit end-of-turn when finish_reason signals completion.
-        if let Some(finish_reason) = choice.get("finish_reason").and_then(|v| v.as_str()) {
+        if let Some(finish_reason) = choice.get("finish_reason").and_then(|v| v.as_str())
+            && !finish_reason.is_empty()
+        {
             match finish_reason {
                 "tool_calls" if fn_call_state.active => {
                     // First, flush the terminal raw reasoning so UIs can finalize
```
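The effect of the added guard can be sketched as a small standalone function (the name `effective_finish_reason` and this factoring are assumptions for illustration; the real change is inlined in `process_chat_sse`): an empty-string `finish_reason` is treated the same as a null or absent one.

```rust
// Hypothetical sketch of the guard this commit adds: only a non-empty
// finish_reason string counts as a termination signal.
fn effective_finish_reason(raw: Option<&str>) -> Option<&str> {
    match raw {
        // A non-empty string ("stop", "tool_calls", ...) ends the turn.
        Some(reason) if !reason.is_empty() => Some(reason),
        // null, a missing field, or "" all mean the stream is still active.
        _ => None,
    }
}

fn main() {
    assert_eq!(effective_finish_reason(Some("stop")), Some("stop"));
    assert_eq!(effective_finish_reason(Some("")), None);
    assert_eq!(effective_finish_reason(None), None);
}
```

This mirrors the `&& !finish_reason.is_empty()` let-chain condition in the diff: when the guard fails, control falls through and the streaming loop simply continues with the next chunk.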