AG-UI Support #244
Conversation
Walkthrough

Introduces AG-UI Protocol events across TanStack AI's streaming system. All text adapters now emit new lifecycle events (RUN_STARTED, RUN_FINISHED, RUN_ERROR, TEXT_MESSAGE_*, TOOL_CALL_*, STEP_*) with standardized payloads while maintaining backward compatibility with legacy event types through union types and conditional handling.
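To illustrate the backward-compatibility pattern the walkthrough describes, here is a minimal sketch of a union over both event generations; the names and shapes are assumptions for illustration, not the PR's actual type definitions:

```typescript
// Hypothetical shapes for illustration; the PR's real types may differ.
type LegacyContentChunk = {
  type: 'content'
  id: string
  content: string
  delta?: string
}

type TextMessageContentEvent = {
  type: 'TEXT_MESSAGE_CONTENT'
  messageId: string
  delta?: string
  content?: string
}

// Adapters can emit either generation; consumers branch on the discriminant.
type StreamChunk = LegacyContentChunk | TextMessageContentEvent

function textOf(chunk: StreamChunk): string {
  // Both generations accumulate delta-first, with content as a fallback.
  return chunk.delta ?? chunk.content ?? ''
}
```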
Sequence Diagram(s)

```mermaid
sequenceDiagram
actor Client
participant Adapter as Text Adapter<br/>(OpenAI/Anthropic)
participant Stream as Event Stream
participant Handler as Chat Activity<br/>Handler
Client->>Adapter: initiate chat/stream
Adapter->>Stream: RUN_STARTED{runId, model, timestamp}
Stream->>Handler: receive RUN_STARTED
Handler->>Handler: initialize tracking
Adapter->>Stream: TEXT_MESSAGE_START{messageId, runId, timestamp}
Stream->>Handler: receive TEXT_MESSAGE_START
loop for each text chunk
Adapter->>Stream: TEXT_MESSAGE_CONTENT{delta, content, messageId, timestamp}
Stream->>Handler: receive TEXT_MESSAGE_CONTENT
Handler->>Handler: accumulate content
end
alt Tool Call Detected
Adapter->>Stream: TOOL_CALL_START{toolCallId, toolName, runId, timestamp}
Stream->>Handler: receive TOOL_CALL_START
loop for each arg chunk
Adapter->>Stream: TOOL_CALL_ARGS{delta, args, toolCallId, timestamp}
Stream->>Handler: receive TOOL_CALL_ARGS
end
Adapter->>Stream: TOOL_CALL_END{toolCallId, input (parsed), timestamp}
Stream->>Handler: receive TOOL_CALL_END
Handler->>Handler: execute tool
end
Adapter->>Stream: TEXT_MESSAGE_END{messageId, runId, timestamp}
Stream->>Handler: receive TEXT_MESSAGE_END
alt Success
Adapter->>Stream: RUN_FINISHED{runId, finishReason, usage, timestamp}
Stream->>Handler: receive RUN_FINISHED
Handler->>Handler: finalize response
else Error
Adapter->>Stream: RUN_ERROR{runId, error, code, timestamp}
Stream->>Handler: receive RUN_ERROR
Handler->>Handler: handle error state
end
Handler->>Client: complete with response/error
```
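As a rough sketch of what a consumer on the right-hand side of this diagram might do, assuming event shapes matching the payloads shown (illustrative only, not the PR's actual types):

```typescript
// Illustrative event shapes inferred from the diagram's payloads.
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string; model: string; timestamp: number }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta?: string; content?: string; timestamp: number }
  | { type: 'TOOL_CALL_END'; toolCallId: string; input?: unknown; timestamp: number }
  | { type: 'RUN_FINISHED'; runId: string; finishReason?: string; timestamp: number }
  | { type: 'RUN_ERROR'; runId: string; error: { message: string; code?: string }; timestamp: number }

async function handleRun(stream: AsyncIterable<AGUIEvent>): Promise<string> {
  let text = ''
  for await (const event of stream) {
    switch (event.type) {
      case 'RUN_STARTED':
        // Initialize per-run tracking keyed by event.runId.
        break
      case 'TEXT_MESSAGE_CONTENT':
        // Accumulate delta-first, falling back to full content snapshots.
        text += event.delta ?? event.content ?? ''
        break
      case 'TOOL_CALL_END':
        // Execute the tool with the parsed event.input here.
        break
      case 'RUN_ERROR':
        throw new Error(event.error.message)
      case 'RUN_FINISHED':
        // Finalize the response.
        break
    }
  }
  return text
}
```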
Estimated code review effort: 4 (Complex), ~60 minutes

This introduces a comprehensive new AG-UI Protocol event system across multiple language implementations (Python, TypeScript, PHP) with intricate lifecycle tracking, state management, and event sequencing logic. Review requires understanding the new protocol, verifying proper event emission order, validating backward compatibility paths, and confirming lifecycle ID tracking across diverse adapter implementations.
Pre-merge checks: Failed checks (1 warning, 1 inconclusive); Passed checks (1 passed)
Actionable comments posted: 8
Caution

Some comments are outside the diff and can't be posted inline due to platform limitations.

Outside diff range comments (8)
packages/typescript/smoke-tests/adapters/src/harness.ts (1)
199-236: Fallback to `content` when delta is missing.

`TEXT_MESSAGE_CONTENT` is aggregated using `delta` only. If an adapter emits content-only chunks, `fullResponse` and the draft will miss text.

Proposed fix

```diff
-      const delta = chunk.delta || ''
+      const delta = chunk.delta ?? chunk.content ?? ''
       fullResponse += delta
       if (!assistantDraft) {
         assistantDraft = {
           role: 'assistant',
-          content: chunk.content || '',
+          content: chunk.content ?? delta ?? '',
           toolCalls: [],
         }
       } else {
         assistantDraft.content = (assistantDraft.content || '') + delta
       }
```

packages/typescript/ai-openai/src/adapters/text.ts (1)
626-645: Ensure RUN_STARTED precedes RUN_ERROR on early stream failure.

If the iterator throws before the first chunk, the catch block emits RUN_ERROR without a prior RUN_STARTED, breaking lifecycle ordering for consumers.

Proposed fix

```diff
 } catch (error: unknown) {
   const err = error as Error & { code?: string }
   console.log(
     '[OpenAI Adapter] Stream ended with error. Event type summary:',
     {
       totalChunks: chunkCount,
       error: err.message,
     },
   )
+  if (!hasEmittedRunStarted) {
+    hasEmittedRunStarted = true
+    yield {
+      type: 'RUN_STARTED',
+      runId,
+      model: options.model,
+      timestamp,
+    }
+  }
   yield {
     type: 'RUN_ERROR',
     runId,
     model: options.model,
     timestamp,
     error: {
       message: err.message || 'Unknown error occurred',
       code: err.code,
     },
   }
 }
```

packages/typescript/ai-gemini/src/adapters/text.ts (2)
117-128: RUN_ERROR event missing `runId` field.

The error handler in `chatStream` emits a `RUN_ERROR` event but doesn't include `runId`. Since this catch block is reached before any streaming occurs, `runId` hasn't been generated yet. However, for consistency with the `RunErrorEvent` interface (which has optional `runId`), consider generating a runId even for pre-stream errors:

Suggested fix: Generate runId for error events

```diff
 async *chatStream(
   options: TextOptions<GeminiTextProviderOptions>,
 ): AsyncIterable<StreamChunk> {
   const mappedOptions = this.mapCommonOptionsToGemini(options)
+  const runId = generateId(this.name)
   try {
     const result =
       await this.client.models.generateContentStream(mappedOptions)
-    yield* this.processStreamChunks(result, options.model)
+    yield* this.processStreamChunks(result, options.model, runId)
   } catch (error) {
     const timestamp = Date.now()
     yield {
       type: 'RUN_ERROR',
+      runId,
       model: options.model,
       timestamp,
```
368-421: Duplicate TOOL_CALL_END events may be emitted for UNEXPECTED_TOOL_CALL.

When `finishReason === FinishReason.UNEXPECTED_TOOL_CALL`, tool calls are added to `toolCallMap` with `started: true` (line 387), then `TOOL_CALL_START` and `TOOL_CALL_END` are emitted (lines 391-418). However, the loop at lines 424-441 iterates over all entries in `toolCallMap` and emits `TOOL_CALL_END` again, causing duplicate events for these tool calls.

Proposed fix: Track which tool calls have already emitted TOOL_CALL_END

```diff
+const endedToolCalls = new Set<string>()
+
 if (finishReason === FinishReason.UNEXPECTED_TOOL_CALL) {
   if (chunk.candidates[0].content?.parts) {
     for (const part of chunk.candidates[0].content.parts) {
       const functionCall = part.functionCall
       if (functionCall) {
         const toolCallId =
           functionCall.id ||
           `${functionCall.name}_${Date.now()}_${nextToolIndex}`
         // ... existing code ...
         yield {
           type: 'TOOL_CALL_END',
           toolCallId,
           toolName: functionCall.name || '',
           model,
           timestamp,
           input: parsedInput,
         }
+        endedToolCalls.add(toolCallId)
       }
     }
   }
 }

 // Emit TOOL_CALL_END for all tracked tool calls
 for (const [toolCallId, toolCallData] of toolCallMap.entries()) {
+  if (endedToolCalls.has(toolCallId)) {
+    continue
+  }
   let parsedInput: unknown = {}
```

packages/typescript/ai-anthropic/src/adapters/text.ts (1)
613-671: Prevent duplicate terminal events (RUN_FINISHED/RUN_ERROR).

The Anthropic Messages API always emits `message_delta` with `stop_reason` before `message_stop`. Both handlers currently emit terminal events, causing duplicates that break downstream state machines expecting a single terminal event.

Implement the suggested tracking flag to guard terminal event emissions:

Suggested fix

```diff
- let hasEmittedRunStarted = false
- let hasEmittedTextMessageStart = false
+ let hasEmittedRunStarted = false
+ let hasEmittedTextMessageStart = false
+ let hasEmittedRunTerminal = false
@@
- } else if (event.type === 'message_stop') {
-   yield {
-     type: 'RUN_FINISHED',
-     runId,
-     model,
-     timestamp,
-     finishReason: 'stop',
-   }
+ } else if (event.type === 'message_stop') {
+   if (!hasEmittedRunTerminal) {
+     hasEmittedRunTerminal = true
+     yield {
+       type: 'RUN_FINISHED',
+       runId,
+       model,
+       timestamp,
+       finishReason: 'stop',
+     }
+   }
  } else if (event.type === 'message_delta') {
    if (event.delta.stop_reason) {
      switch (event.delta.stop_reason) {
        case 'tool_use': {
-         yield {
-           type: 'RUN_FINISHED',
-           runId,
-           model,
-           timestamp,
-           finishReason: 'tool_calls',
-           usage: {
-             promptTokens: event.usage.input_tokens || 0,
-             completionTokens: event.usage.output_tokens || 0,
-             totalTokens:
-               (event.usage.input_tokens || 0) +
-               (event.usage.output_tokens || 0),
-           },
-         }
+         if (!hasEmittedRunTerminal) {
+           hasEmittedRunTerminal = true
+           yield {
+             type: 'RUN_FINISHED',
+             runId,
+             model,
+             timestamp,
+             finishReason: 'tool_calls',
+             usage: {
+               promptTokens: event.usage.input_tokens || 0,
+               completionTokens: event.usage.output_tokens || 0,
+               totalTokens:
+                 (event.usage.input_tokens || 0) +
+                 (event.usage.output_tokens || 0),
+             },
+           }
+         }
          break
        }
        case 'max_tokens': {
-         yield {
-           type: 'RUN_ERROR',
-           runId,
-           model,
-           timestamp,
-           error: {
-             message:
-               'The response was cut off because the maximum token limit was reached.',
-             code: 'max_tokens',
-           },
-         }
+         if (!hasEmittedRunTerminal) {
+           hasEmittedRunTerminal = true
+           yield {
+             type: 'RUN_ERROR',
+             runId,
+             model,
+             timestamp,
+             error: {
+               message:
+                 'The response was cut off because the maximum token limit was reached.',
+               code: 'max_tokens',
+             },
+           }
+         }
          break
        }
        default: {
-         yield {
-           type: 'RUN_FINISHED',
-           runId,
-           model,
-           timestamp,
-           finishReason: 'stop',
-           usage: {
-             promptTokens: event.usage.input_tokens || 0,
-             completionTokens: event.usage.output_tokens || 0,
-             totalTokens:
-               (event.usage.input_tokens || 0) +
-               (event.usage.output_tokens || 0),
-           },
-         }
+         if (!hasEmittedRunTerminal) {
+           hasEmittedRunTerminal = true
+           yield {
+             type: 'RUN_FINISHED',
+             runId,
+             model,
+             timestamp,
+             finishReason: 'stop',
+             usage: {
+               promptTokens: event.usage.input_tokens || 0,
+               completionTokens: event.usage.output_tokens || 0,
+               totalTokens:
+                 (event.usage.input_tokens || 0) +
+                 (event.usage.output_tokens || 0),
+             },
+           }
+         }
        }
      }
    }
  }
```

packages/python/tanstack-ai/src/tanstack_ai/anthropic_adapter.py (2)
320-388: Prevent duplicate RUN_FINISHED / RUN_ERROR emissions.

`message_delta` emits terminal events and `message_stop` always emits `RUN_FINISHED`, which can duplicate finishes and even emit `RUN_FINISHED` after `RUN_ERROR`. Track completion to avoid double-terminating a run.

Suggested fix

```diff
-if hasattr(delta, "stop_reason") and delta.stop_reason:
+if hasattr(delta, "stop_reason") and delta.stop_reason and not run_finished:
     usage = None
     if hasattr(event, "usage") and event.usage:
         usage = {
             "promptTokens": event.usage.input_tokens,
             "completionTokens": event.usage.output_tokens,
             "totalTokens": event.usage.input_tokens + event.usage.output_tokens,
         }

     # Map Anthropic stop_reason to TanStack format
     if delta.stop_reason == "max_tokens":
+        run_finished = True
         yield RunErrorEvent(
             type="RUN_ERROR",
             runId=run_id,
             model=options.model,
             timestamp=timestamp,
             error={
                 "message": "The response was cut off because the maximum token limit was reached.",
                 "code": "max_tokens",
             },
         )
     else:
         finish_reason = {
             "end_turn": "stop",
             "tool_use": "tool_calls",
         }.get(delta.stop_reason, "stop")
+        run_finished = True
         yield RunFinishedEvent(
             type="RUN_FINISHED",
             runId=run_id,
             model=options.model,
             timestamp=timestamp,
             finishReason=finish_reason,
             usage=usage,
         )
@@
-yield RunFinishedEvent(
-    type="RUN_FINISHED",
-    runId=run_id,
-    model=options.model,
-    timestamp=int(time.time() * 1000),
-    finishReason=finish_reason,
-    usage=usage,
-)
+if not run_finished:
+    run_finished = True
+    yield RunFinishedEvent(
+        type="RUN_FINISHED",
+        runId=run_id,
+        model=options.model,
+        timestamp=int(time.time() * 1000),
+        finishReason=finish_reason,
+        usage=usage,
+    )
```

Add the flag alongside the other lifecycle tracking variables:

```python
run_finished = False
```
390-401: Emit RUN_STARTED before RUN_ERROR on early failures.

If an exception occurs before the first stream event, the current path emits RUN_ERROR without a preceding RUN_STARTED.

Suggested fix

```diff
-except Exception as e:
-    # Emit RUN_ERROR
-    yield RunErrorEvent(
-        type="RUN_ERROR",
-        runId=run_id,
-        model=options.model,
-        timestamp=int(time.time() * 1000),
-        error={
-            "message": str(e),
-            "code": getattr(e, "code", None),
-        },
-    )
+except Exception as e:
+    ts = int(time.time() * 1000)
+    if not has_emitted_run_started:
+        has_emitted_run_started = True
+        yield RunStartedEvent(
+            type="RUN_STARTED",
+            runId=run_id,
+            model=options.model,
+            timestamp=ts,
+            threadId=None,
+        )
+    yield RunErrorEvent(
+        type="RUN_ERROR",
+        runId=run_id,
+        model=options.model,
+        timestamp=ts,
+        error={
+            "message": str(e),
+            "code": getattr(e, "code", None),
+        },
+    )
```

packages/python/tanstack-ai/src/tanstack_ai/types.py (1)
326-333: Limit BaseStreamChunk.type to legacy values.

`BaseStreamChunk` is the base for legacy chunks, but `type: StreamChunkType` allows AG-UI values on legacy shapes. Tighten it to `LegacyStreamChunkType` to avoid mixed typing.

Suggested fix

```diff
-    type: StreamChunkType
+    type: LegacyStreamChunkType
```
Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 2-41: The AG-UI event list mentions STATE_SNAPSHOT, STATE_DELTA,
and CUSTOM but no shapes are defined; update the docs by either adding explicit
interface/type definitions for these events (e.g., StateSnapshotEvent,
StateDeltaEvent, CustomAGUIEvent that extend BaseAGUIEvent and include fields
like state: unknown, delta: unknown, source?: string, and payload?: unknown) or
clearly mark them as "reserved/future" with example usage and minimal required
fields (type, timestamp, model, rawEvent) so readers know expected structure;
reference BaseAGUIEvent and AGUIEventType when adding the new sections to keep
the schema consistent.
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php`:
- Around line 226-263: The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.
In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 248-255: The STEP_FINISHED yield uses a fallback expression
"stepId || generateId(this.name)" even though stepId must have been set by
STEP_STARTED; replace the fallback with a non-null assertion on stepId (e.g.,
use stepId! in the STEP_FINISHED object) so the code expresses the invariant and
avoids silently generating a new id, and ensure the change is made in the yield
that produces type: 'STEP_FINISHED' (referencing the stepId and generateId
symbols and the surrounding STEP_STARTED/STEP_FINISHED logic).
In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 336-345: The STEP_FINISHED emission currently falls back to
generateId('step') when stepId is null which can create inconsistent IDs; update
the emission to rely on the fact STEP_STARTED sets stepId and remove the
fallback by using a non-null assertion (stepId!) or otherwise assert/throw if
stepId is missing so STEP_FINISHED always uses the same stepId set by
STEP_STARTED (refer to STEP_FINISHED, STEP_STARTED, stepId, generateId, and
chunk.message.thinking in the surrounding code).
In `@packages/typescript/ai-openai/src/adapters/summarize.ts`:
- Around line 65-87: The SummarizationResult.id stays empty for AG-UI streams
because only legacy 'content' sets id; update the logic in summarize.ts so that
when handling chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId
(or chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set
id = chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.
In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 584-589: In handleTextMessageContentEvent, guard explicitly
against undefined instead of using if (chunk.content) so empty-string content
("") is not treated as absent; change the condition to check chunk.content !==
undefined (or typeof chunk.content !== "undefined") and assign
this.accumulatedContent = chunk.content when present, otherwise append
chunk.delta; also ensure this.accumulatedContent is initialized to an empty
string before appending to avoid NaN/undefined concatenation (references:
handleTextMessageContentEvent, TextMessageContentEvent, this.accumulatedContent,
chunk.content, chunk.delta).
In `@packages/typescript/ai/src/stream-to-response.ts`:
- Around line 29-33: The streamToText handler currently only appends chunk.delta
for TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content
without delta; update the logic in streamToText (the branch handling chunk.type
=== 'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.
In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 268-326: The TOOL_CALL_END branch is leaving entries in
toolCallsInProgress which can leak stale args; inside the TOOL_CALL_END handling
(the else if block checking chunk.type === 'TOOL_CALL_END') remove the completed
entry from toolCallsInProgress (call toolCallsInProgress.delete(id) using the id
local variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
Nitpick comments (5)
packages/typescript/ai-ollama/src/adapters/text.ts (1)
212-257: Tool call handling emits TOOL_CALL_END immediately after TOOL_CALL_START without TOOL_CALL_ARGS.

The `handleToolCall` function emits `TOOL_CALL_START` followed immediately by `TOOL_CALL_END`. This differs from the Gemini adapter, which emits `TOOL_CALL_ARGS` events between start and end.

If Ollama provides tool arguments in a single chunk (non-streaming), this is acceptable. However, for consistency with the AG-UI protocol and other adapters, consider emitting a `TOOL_CALL_ARGS` event with the full arguments before `TOOL_CALL_END`:

Suggested addition of TOOL_CALL_ARGS event

```diff
   // Emit TOOL_CALL_START if not already emitted for this tool call
   if (!toolCallsEmitted.has(toolCallId)) {
     toolCallsEmitted.add(toolCallId)
     events.push({
       type: 'TOOL_CALL_START',
       toolCallId,
       toolName: actualToolCall.function.name || '',
       model: chunk.model,
       timestamp,
       index: actualToolCall.function.index,
     })
   }

   // Parse input
   let parsedInput: unknown = {}
   const argsStr =
     typeof actualToolCall.function.arguments === 'string'
       ? actualToolCall.function.arguments
       : JSON.stringify(actualToolCall.function.arguments)
   try {
     parsedInput = JSON.parse(argsStr)
   } catch {
     parsedInput = actualToolCall.function.arguments
   }

+  // Emit TOOL_CALL_ARGS with full arguments
+  events.push({
+    type: 'TOOL_CALL_ARGS',
+    toolCallId,
+    model: chunk.model,
+    timestamp,
+    delta: argsStr,
+    args: argsStr,
+  })
+
   // Emit TOOL_CALL_END
   events.push({
     type: 'TOOL_CALL_END',
```

packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)
300-331: Test assertions correctly validate AG-UI event sequence.

The updated assertions properly verify the new event lifecycle: `RUN_STARTED` → `TEXT_MESSAGE_START` → `TEXT_MESSAGE_CONTENT` (×N) → `TEXT_MESSAGE_END` → `RUN_FINISHED`. Using `toMatchObject` allows for flexible matching while validating essential fields.

Consider adding test cases for:

- Tool call event sequence (`TOOL_CALL_START` → `TOOL_CALL_ARGS` → `TOOL_CALL_END`)
- Error scenarios (`RUN_ERROR` events)
- Thinking/reasoning flow (`STEP_STARTED` → `STEP_FINISHED`)

packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (2)
72-80: Silent no-op if tool call not found in `addToolCallArgsEvent`.

If `addToolCallArgsEvent` is called before `addToolCallStartEvent` (out-of-order events), the arguments are silently dropped. Consider logging a warning or throwing for debugging purposes.

Optional: Add warning for missing tool call

```diff
 addToolCallArgsEvent(event: ToolCallArgsEvent): void {
   // Find the tool call by ID
   for (const [, toolCall] of this.toolCallsMap.entries()) {
     if (toolCall.id === event.toolCallId) {
       toolCall.function.arguments += event.delta
-      break
+      return
     }
   }
+  // Tool call not found - this shouldn't happen in normal flow
+  console.warn(
+    `TOOL_CALL_ARGS received for unknown toolCallId: ${event.toolCallId}`,
+  )
 }
```
240-258: TOOL_CALL_END event missing `input` field when emitted from executeTools.

The `TOOL_CALL_END` event emitted at lines 241-248 includes `result` but not `input`. According to the `ToolCallEndEvent` interface in types.ts, `input` is an optional field that should contain the final parsed input arguments. For consistency with adapter emissions, consider including it:

Add input field to TOOL_CALL_END event

```diff
 if (finishEvent.type === 'RUN_FINISHED') {
+  let parsedInput: unknown
+  try {
+    parsedInput = JSON.parse(toolCall.function.arguments)
+  } catch {
+    parsedInput = undefined
+  }
   yield {
     type: 'TOOL_CALL_END',
     toolCallId: toolCall.id,
     toolName: toolCall.function.name,
     model: finishEvent.model,
     timestamp: Date.now(),
+    input: parsedInput,
     result: toolResultContent,
   }
```

packages/python/tanstack-ai/src/tanstack_ai/converter.py (1)
391-407: Drop the unused loop index to satisfy lint.

Ruff flags the loop index as unused. You can iterate over values directly.

Suggested tweak

```diff
-        for tool_index, tool_call in self.tool_calls_map.items():
+        for tool_call in self.tool_calls_map.values():
```
````diff
 title: AG-UI Event Definitions
 id: chunk-definitions
 ---

-All streaming responses in TanStack AI consist of a series of **StreamChunks** - discrete JSON objects representing different events during the conversation. These chunks enable real-time updates for content generation, tool calls, errors, and completion signals.
-
-This document defines the data structures (chunks) that flow between the TanStack AI server and client during streaming chat operations.
+TanStack AI implements the [AG-UI (Agent-User Interaction) Protocol](https://docs.ag-ui.com/introduction), an open, lightweight, event-based protocol that standardizes how AI agents connect to user-facing applications.
+
+All streaming responses in TanStack AI consist of a series of **AG-UI Events** - discrete JSON objects representing different stages of the conversation lifecycle. These events enable real-time updates for content generation, tool calls, thinking/reasoning, and completion signals.

 ## Base Structure

-All chunks share a common base structure:
+All AG-UI events share a common base structure:

 ```typescript
-interface BaseStreamChunk {
-  type: StreamChunkType;
-  id: string; // Unique identifier for the message/response
-  model: string; // Model identifier (e.g., "gpt-5.2", "claude-3-5-sonnet")
-  timestamp: number; // Unix timestamp in milliseconds
+interface BaseAGUIEvent {
+  type: AGUIEventType;
+  timestamp: number; // Unix timestamp in milliseconds
+  model?: string; // Model identifier (TanStack AI addition)
+  rawEvent?: unknown; // Original provider event for debugging
 }
 ```

-### Chunk Types
+### AG-UI Event Types

 ```typescript
-type StreamChunkType =
-  | 'content' // Text content being generated
-  | 'thinking' // Model's reasoning process (when supported)
-  | 'tool_call' // Model calling a tool/function
-  | 'tool-input-available' // Tool inputs are ready for client execution
+type AGUIEventType =
+  | 'RUN_STARTED' // Run lifecycle begins
+  | 'RUN_FINISHED' // Run completed successfully
+  | 'RUN_ERROR' // Error occurred
+  | 'TEXT_MESSAGE_START' // Text message begins
+  | 'TEXT_MESSAGE_CONTENT' // Text content streaming
+  | 'TEXT_MESSAGE_END' // Text message completes
+  | 'TOOL_CALL_START' // Tool invocation begins
+  | 'TOOL_CALL_ARGS' // Tool arguments streaming
+  | 'TOOL_CALL_END' // Tool call completes (with result)
+  | 'STEP_STARTED' // Thinking/reasoning step begins
+  | 'STEP_FINISHED' // Thinking/reasoning step completes
+  | 'STATE_SNAPSHOT' // Full state synchronization
+  | 'STATE_DELTA' // Incremental state update
+  | 'CUSTOM'; // Custom extensibility events
 ```
````
Document STATE_* and CUSTOM event shapes (or mark them as reserved).

AGUIEventType lists STATE_SNAPSHOT, STATE_DELTA, and CUSTOM, but there are no corresponding definitions below. Add sections or explicitly call them out as future/reserved to avoid reader confusion.
Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 2 - 41, The AG-UI event list
mentions STATE_SNAPSHOT, STATE_DELTA, and CUSTOM but no shapes are defined;
update the docs by either adding explicit interface/type definitions for these
events (e.g., StateSnapshotEvent, StateDeltaEvent, CustomAGUIEvent that extend
BaseAGUIEvent and include fields like state: unknown, delta: unknown, source?:
string, and payload?: unknown) or clearly mark them as "reserved/future" with
example usage and minimal required fields (type, timestamp, model, rawEvent) so
readers know expected structure; reference BaseAGUIEvent and AGUIEventType when
adding the new sections to keep the schema consistent.
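For concreteness, one possible rendering of those reserved shapes, using the fields suggested in the prompt above (hypothetical sketches, not the PR's actual definitions):

```typescript
// Hypothetical sketches following the prompt's suggested fields.
// BaseAGUIEvent is the shared base defined earlier in this document.
interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT'
  state: unknown // full state payload to replace client-side state
}

interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA'
  delta: unknown // incremental update, e.g. a JSON Patch document
}

interface CustomAGUIEvent extends BaseAGUIEvent {
  type: 'CUSTOM'
  source?: string // emitting adapter or extension
  payload?: unknown // extension-defined data
}
```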
```php
} elseif ($eventType === 'content_block_stop') {
    // Content block completed
    $toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
    if ($toolCall) {
        // If tool call wasn't started yet (no args), start it now
        if (!$toolCall['started']) {
            $toolCall['started'] = true;
            $this->toolCallsMap[$this->currentToolIndex] = $toolCall;

            $chunks[] = [
                'type' => 'TOOL_CALL_START',
                'toolCallId' => $toolCall['id'],
                'toolName' => $toolCall['name'],
                'model' => $this->model,
                'timestamp' => $this->timestamp,
                'toolCall' => [
                    'id' => $toolCall['id'],
                    'type' => 'function',
                    'function' => [
                        'name' => $toolCall['name'],
                        'arguments' => $partialJson // Incremental JSON
                    ]
                ],
                'index' => $this->currentToolIndex
            ];
        }

        // Parse input and emit TOOL_CALL_END
        $parsedInput = [];
        if (!empty($toolCall['input'])) {
            try {
                $parsedInput = json_decode($toolCall['input'], true) ?? [];
            } catch (\Exception $e) {
                $parsedInput = [];
            }
        }

        $chunks[] = [
            'type' => 'TOOL_CALL_END',
            'toolCallId' => $toolCall['id'],
            'toolName' => $toolCall['name'],
            'model' => $this->model,
            'timestamp' => $this->timestamp,
            'input' => $parsedInput
        ];
    }
```
Prevent duplicate TOOL_CALL_END emissions on later block stops.
content_block_stop reuses $currentToolIndex without clearing the tool call entry. If additional blocks occur after a tool_use block, TOOL_CALL_END can be emitted multiple times for the same call.
Proposed fix

```diff
             $chunks[] = [
                 'type' => 'TOOL_CALL_END',
                 'toolCallId' => $toolCall['id'],
                 'toolName' => $toolCall['name'],
                 'model' => $this->model,
                 'timestamp' => $this->timestamp,
                 'input' => $parsedInput
             ];
+
+            unset($this->toolCallsMap[$this->currentToolIndex]);
```
Prompt for AI Agents
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php` around lines 226 -
263, The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.
```diff
 yield {
-  type: 'thinking',
-  content: part.text,
-  delta: part.text,
-  id: generateId(this.name),
+  type: 'STEP_FINISHED',
+  stepId: stepId || generateId(this.name),
   model,
   timestamp,
+  delta: part.text,
+  content: part.text,
 }
```
Same stepId fallback issue as Ollama adapter.
Line 250 uses stepId || generateId(this.name) as a fallback, but stepId should always be set by STEP_STARTED before STEP_FINISHED is yielded. Consider using a non-null assertion for consistency.
Prompt for AI Agents
In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 248 - 255,
The STEP_FINISHED yield uses a fallback expression "stepId ||
generateId(this.name)" even though stepId must have been set by STEP_STARTED;
replace the fallback with a non-null assertion on stepId (e.g., use stepId! in
the STEP_FINISHED object) so the code expresses the invariant and avoids
silently generating a new id, and ensure the change is made in the yield that
produces type: 'STEP_FINISHED' (referencing the stepId and generateId symbols
and the surrounding STEP_STARTED/STEP_FINISHED logic).
```diff
 accumulatedReasoning += chunk.message.thinking
 yield {
-  type: 'thinking',
-  id: responseId,
+  type: 'STEP_FINISHED',
+  stepId: stepId || generateId('step'),
   model: chunk.model,
   timestamp,
-  content: accumulatedReasoning,
   delta: chunk.message.thinking,
+  content: accumulatedReasoning,
 }
}
```
Potential issue: stepId fallback generates new ID on each STEP_FINISHED.
On line 339, if stepId is null, a new ID is generated via generateId('step'). However, stepId should always be set by the STEP_STARTED emission on line 326. The fallback || generateId('step') suggests defensive coding, but if reached, it would create inconsistent step IDs across events.
Suggested fix: Assert stepId is set or remove fallback

```diff
 yield {
   type: 'STEP_FINISHED',
-  stepId: stepId || generateId('step'),
+  stepId: stepId!,
   model: chunk.model,
   timestamp,
   delta: chunk.message.thinking,
   content: accumulatedReasoning,
 }
```

The `!` assertion is safe here because STEP_FINISHED is only yielded inside the `if (chunk.message.thinking)` block, which always sets `stepId` via STEP_STARTED first.
Prompt for AI Agents
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 336 - 345,
The STEP_FINISHED emission currently falls back to generateId('step') when
stepId is null which can create inconsistent IDs; update the emission to rely on
the fact STEP_STARTED sets stepId and remove the fallback by using a non-null
assertion (stepId!) or otherwise assert/throw if stepId is missing so
STEP_FINISHED always uses the same stepId set by STEP_STARTED (refer to
STEP_FINISHED, STEP_STARTED, stepId, generateId, and chunk.message.thinking in
the surrounding code).
```typescript
// Legacy content event
if (chunk.type === 'content') {
  summary = chunk.content
  id = chunk.id
  model = chunk.model
}
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
  if (chunk.content) {
    summary = chunk.content
  } else {
    summary += chunk.delta
  }
  model = chunk.model || model
}
// Legacy done event
if (chunk.type === 'done' && chunk.usage) {
  usage = chunk.usage
}
// AG-UI RUN_FINISHED event
else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
  usage = chunk.usage
}
```
Populate id for AG-UI chunks.
For AG-UI streams, id remains '' because only legacy content sets it. Consider mapping from messageId (or runId on RUN_FINISHED) to keep SummarizationResult.id meaningful.
Proposed fix

```diff
 if (chunk.type === 'content') {
   summary = chunk.content
   id = chunk.id
   model = chunk.model
 }
 // AG-UI TEXT_MESSAGE_CONTENT event
 else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+  if (!id && 'messageId' in chunk) {
+    id = chunk.messageId
+  }
   if (chunk.content) {
     summary = chunk.content
   } else {
     summary += chunk.delta
   }
   model = chunk.model || model
 }
 // Legacy done event
 if (chunk.type === 'done' && chunk.usage) {
   usage = chunk.usage
 }
 // AG-UI RUN_FINISHED event
 else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
+  if (!id && 'runId' in chunk) {
+    id = chunk.runId
+  }
   usage = chunk.usage
 }
```
Prompt for AI Agents
In `@packages/typescript/ai-openai/src/adapters/summarize.ts` around lines 65 -
87, The SummarizationResult.id stays empty for AG-UI streams because only legacy
'content' sets id; update the logic in summarize.ts so that when handling
chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId (or
chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set id =
chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.
```typescript
private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
  if (chunk.content) {
    this.accumulatedContent = chunk.content
  } else {
    this.accumulatedContent += chunk.delta
  }
```
Guard against empty-string content values when accumulating.
if (chunk.content) treats "" as absent and can mis-accumulate content. Prefer an explicit undefined check.
Suggested fix

```diff
-    if (chunk.content) {
+    if (chunk.content !== undefined) {
       this.accumulatedContent = chunk.content
     } else {
       this.accumulatedContent += chunk.delta
     }
```
Prompt for AI Agents
In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 584 - 589,
In handleTextMessageContentEvent, guard explicitly against undefined instead of
using if (chunk.content) so empty-string content ("") is not treated as absent;
change the condition to check chunk.content !== undefined (or typeof
chunk.content !== "undefined") and assign this.accumulatedContent =
chunk.content when present, otherwise append chunk.delta; also ensure
this.accumulatedContent is initialized to an empty string before appending to
avoid NaN/undefined concatenation (references: handleTextMessageContentEvent,
TextMessageContentEvent, this.accumulatedContent, chunk.content, chunk.delta).
```typescript
// Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
  accumulatedContent += chunk.delta
} else if (chunk.type === 'content' && chunk.delta) {
  accumulatedContent += chunk.delta
```
Handle TEXT_MESSAGE_CONTENT when delta is absent.
If an adapter emits TEXT_MESSAGE_CONTENT with only content (no delta), streamToText returns an empty string. Consider falling back to content to preserve non-delta events.
Proposed fix

```diff
-  if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
-    accumulatedContent += chunk.delta
-  } else if (chunk.type === 'content' && chunk.delta) {
-    accumulatedContent += chunk.delta
-  }
+  if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+    const piece = chunk.delta ?? chunk.content
+    if (piece) accumulatedContent += piece
+  } else if (chunk.type === 'content') {
+    const piece = chunk.delta ?? chunk.content
+    if (piece) accumulatedContent += piece
+  }
```
Prompt for AI Agents
In `@packages/typescript/ai/src/stream-to-response.ts` around lines 29 - 33, The
streamToText handler currently only appends chunk.delta for
TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content without
delta; update the logic in streamToText (the branch handling chunk.type ===
'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.
```typescript
// AG-UI TOOL_CALL_START event
else if (chunk.type === 'TOOL_CALL_START') {
  const id = chunk.toolCallId
  toolCallsInProgress.set(id, {
    name: chunk.toolName,
    args: '',
  })

  if (!assistantDraft) {
    assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
  }

  chunkData.toolCallId = chunk.toolCallId
  chunkData.toolName = chunk.toolName
}
// AG-UI TOOL_CALL_ARGS event
else if (chunk.type === 'TOOL_CALL_ARGS') {
  const id = chunk.toolCallId
  const existing = toolCallsInProgress.get(id)
  if (existing) {
    existing.args = chunk.args || existing.args + (chunk.delta || '')
  }

  chunkData.toolCallId = chunk.toolCallId
  chunkData.delta = chunk.delta
  chunkData.args = chunk.args
}
// AG-UI TOOL_CALL_END event
else if (chunk.type === 'TOOL_CALL_END') {
  const id = chunk.toolCallId
  const inProgress = toolCallsInProgress.get(id)
  const name = chunk.toolName || inProgress?.name || ''
  const args =
    inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

  // Add to legacy toolCallMap for compatibility
  toolCallMap.set(id, {
    id,
    name,
    arguments: args,
  })

  // Add to assistant draft
  if (!assistantDraft) {
    assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
  }
  assistantDraft.toolCalls?.push({
    id,
    type: 'function',
    function: {
      name,
      arguments: args,
    },
  })

  chunkData.toolCallId = chunk.toolCallId
  chunkData.toolName = chunk.toolName
  chunkData.input = chunk.input
}
```
Clear in-progress tool call state after TOOL_CALL_END.
toolCallsInProgress entries persist after completion. If a toolCallId is reused or multiple tool calls occur, stale args can leak.
Proposed fix

```diff
 else if (chunk.type === 'TOOL_CALL_END') {
   const id = chunk.toolCallId
   const inProgress = toolCallsInProgress.get(id)
   const name = chunk.toolName || inProgress?.name || ''
   const args =
     inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')
+  toolCallsInProgress.delete(id)
```

Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 268 -
326, The TOOL_CALL_END branch is leaving entries in toolCallsInProgress which
can leak stale args; inside the TOOL_CALL_END handling (the else if block
checking chunk.type === 'TOOL_CALL_END') remove the completed entry from
toolCallsInProgress (call toolCallsInProgress.delete(id) using the id local
variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
Changes

Checklist

`pnpm run test:pr`.

Release Impact
Summary by CodeRabbit
New Features
Documentation
Tests