Conversation

@jherr
Contributor

@jherr jherr commented Jan 23, 2026

🎯 Changes

✅ Checklist

  • I have followed the steps in the Contributing guide.
  • I have tested this code locally with pnpm run test:pr.

🚀 Release Impact

  • This change affects published code, and I have generated a changeset.
  • This change is docs/CI/dev-only (no release).

Summary by CodeRabbit

  • New Features

    • Implemented AG-UI protocol for streaming events across all AI provider adapters with improved lifecycle tracking (RUN_STARTED, RUN_FINISHED, RUN_ERROR, TEXT_MESSAGE_CONTENT, TOOL_CALL_ARGS, TOOL_CALL_END, and more).
    • Added structured event types with unique identifiers for runs, messages, tool calls, and steps.
    • Maintained backward compatibility with legacy event types.
  • Documentation

    • Updated streaming guides to document new AG-UI event types and their legacy equivalents.
  • Tests

    • Updated test expectations to validate new event emissions and lifecycle tracking.

✏️ Tip: You can customize this high-level summary in your review settings.

@coderabbitai
Contributor

coderabbitai bot commented Jan 23, 2026

πŸ“ Walkthrough

Walkthrough

Introduces AG-UI Protocol events across TanStack AI's streaming system. All text adapters now emit new lifecycle events (RUN_STARTED, RUN_FINISHED, RUN_ERROR, TEXT_MESSAGE_, TOOL_CALL_, STEP_*) with standardized payloads while maintaining backward compatibility with legacy event types through union types and conditional handling.
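The event family described above can be sketched as a discriminated union. These shapes are assumptions inferred from the event names in this walkthrough, not the published types.ts definitions:

```typescript
// Hypothetical sketch of the AG-UI event union; field names are assumptions
// based on the walkthrough, not the project's actual type definitions.
type AGUIEvent =
  | { type: 'RUN_STARTED'; runId: string; model: string; timestamp: number }
  | { type: 'TEXT_MESSAGE_START'; messageId: string; runId: string; timestamp: number }
  | { type: 'TEXT_MESSAGE_CONTENT'; messageId: string; delta: string; content?: string; timestamp: number }
  | { type: 'TEXT_MESSAGE_END'; messageId: string; runId: string; timestamp: number }
  | { type: 'RUN_FINISHED'; runId: string; finishReason: string; timestamp: number }
  | { type: 'RUN_ERROR'; runId?: string; error: { message: string; code?: string }; timestamp: number }

// Narrowing on the `type` discriminant gives typed access to each payload.
function describe(event: AGUIEvent): string {
  switch (event.type) {
    case 'TEXT_MESSAGE_CONTENT':
      return `delta: ${event.delta}`
    case 'RUN_ERROR':
      return `error: ${event.error.message}`
    default:
      return event.type
  }
}
```

A consumer can switch on `type` this way to handle each lifecycle stage without unsafe casts.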

Changes

Cohort / File(s) Summary
Documentation & Versioning
.changeset/ag-ui-events.md, docs/guides/streaming.md, docs/protocol/chunk-definitions.md
Documents new AG-UI Protocol event system replacing "Stream Chunks" terminology. Adds comprehensive event type definitions (RUN_STARTED, TEXT_MESSAGE_*, TOOL_CALL_*, STEP_*) and includes backward-compatibility mapping for legacy events (content, thinking, tool_call, etc.).
Python Type Definitions
packages/python/tanstack-ai/src/tanstack_ai/types.py, packages/python/tanstack-ai/src/tanstack_ai/__init__.py
Introduces AGUIEventType, AGUIEvent union, and TypedDict interfaces for all AG-UI events (RunStartedEvent, RunFinishedEvent, RunErrorEvent, TextMessage*, ToolCall*, Step*, etc.). Adds legacy type equivalents (LegacyStreamChunkType, LegacyStreamChunk) and expands public exports to include UsageInfo and ErrorInfo.
Python Adapters & Utilities
packages/python/tanstack-ai/src/tanstack_ai/anthropic_adapter.py, packages/python/tanstack-ai/src/tanstack_ai/converter.py, packages/python/tanstack-ai/src/tanstack_ai/sse.py
Implements AG-UI lifecycle tracking (runId, messageId, stepId) and event emission for Anthropic. StreamChunkConverter now emits RUN_STARTED, TEXT_MESSAGE_START/CONTENT/END, TOOL_CALL_START/ARGS/END, STEP_STARTED/FINISHED, and RUN_FINISHED. Enhanced error handling with RUN_ERROR. SSE formatter updated with timestamps and runId generation.
PHP Adapters & Utilities
packages/php/tanstack-ai/src/SSEFormatter.php, packages/php/tanstack-ai/src/StreamChunkConverter.php
Adds AG-UI event emission for both Anthropic and OpenAI providers. SSEFormatter.formatError now accepts optional runId/model and returns RUN_ERROR type with timestamp. StreamChunkConverter tracks lifecycle IDs and emits RUN_STARTED, text/tool_call/step lifecycle events, RUN_FINISHED, and RUN_ERROR with proper sequencing.
TypeScript Type Definitions
packages/typescript/ai/src/types.ts
Introduces comprehensive AG-UI protocol types: AGUIEventType, BaseAGUIEvent, and event-specific interfaces (RunStartedEvent, RunFinishedEvent, RunErrorEvent, TextMessage*, ToolCall*, Step*, State*, CustomEvent). Adds LegacyStreamChunkType and LegacyStreamChunk for backward compatibility. Updates StreamChunk union to include both AGUIEvent and LegacyStreamChunk.
TypeScript Adapter Implementations (Anthropic)
packages/typescript/ai-anthropic/src/adapters/text.ts
Replaces simple error/content emission with AG-UI lifecycle events. Tracks runId, messageId, stepId, toolCallId. Emits RUN_STARTED, TEXT_MESSAGE_START/CONTENT/END, TOOL_CALL_START/ARGS/END, STEP_STARTED/FINISHED, RUN_FINISHED, and RUN_ERROR with proper correlation IDs and lifecycle management.
TypeScript Adapter Implementations (Gemini, Ollama, OpenAI)
packages/typescript/ai-gemini/src/adapters/text.ts, packages/typescript/ai-ollama/src/adapters/text.ts, packages/typescript/ai-openai/src/adapters/text.ts
Similar lifecycle event implementation as Anthropic adapter. Each introduces runId/messageId/stepId tracking, emits TEXT_MESSAGE_START/CONTENT/END with proper sequencing, tracks tool calls with started flags, handles thinking/reasoning as STEP_STARTED/FINISHED, and emits RUN_FINISHED/RUN_ERROR with appropriate metadata.
TypeScript OpenAI Summarize Adapter
packages/typescript/ai-openai/src/adapters/summarize.ts
Adds handling for AG-UI chunk types: TEXT_MESSAGE_CONTENT for content accumulation and RUN_FINISHED for usage tracking, while preserving legacy content/done event paths.
TypeScript Core Activity Logic
packages/typescript/ai/src/activities/chat/index.ts, packages/typescript/ai/src/activities/chat/tools/tool-calls.ts
Adds new event handlers for AG-UI events: handleTextMessageContentEvent, handleToolCallStart/Args/EndEvent, handleRunFinishedEvent, handleRunErrorEvent. ToolCallManager receives new methods: addToolCallStartEvent, addToolCallArgsEvent, completeToolCall. Updated executeTools to emit TOOL_CALL_END for AG-UI flows.
TypeScript Streaming & Response Handling
packages/typescript/ai/src/stream-to-response.ts
Updates error event handling to emit RUN_ERROR instead of error type; adds timestamp to error payloads. Handles both AG-UI TEXT_MESSAGE_CONTENT and legacy content chunks for text accumulation.
TypeScript Tests
packages/typescript/ai/tests/stream-to-response.test.ts, packages/typescript/ai/tests/tool-call-manager.test.ts, packages/typescript/ai-gemini/tests/gemini-adapter.test.ts, packages/typescript/smoke-tests/adapters/src/...
Updates test expectations for new AG-UI event sequences. Replaces legacy event assertions (content, done, error) with AG-UI equivalents (TEXT_MESSAGE_CONTENT, RUN_FINISHED, RUN_ERROR). Adds comprehensive test suite for AG-UI event methods and tool call lifecycle. Harness captures and validates new event types and reconstructs messages from AG-UI events.

Sequence Diagram(s)

sequenceDiagram
    actor Client
    participant Adapter as Text Adapter<br/>(OpenAI/Anthropic)
    participant Stream as Event Stream
    participant Handler as Chat Activity<br/>Handler

    Client->>Adapter: initiate chat/stream
    Adapter->>Stream: RUN_STARTED{runId, model, timestamp}
    Stream->>Handler: receive RUN_STARTED
    Handler->>Handler: initialize tracking

    Adapter->>Stream: TEXT_MESSAGE_START{messageId, runId, timestamp}
    Stream->>Handler: receive TEXT_MESSAGE_START
    
    loop for each text chunk
        Adapter->>Stream: TEXT_MESSAGE_CONTENT{delta, content, messageId, timestamp}
        Stream->>Handler: receive TEXT_MESSAGE_CONTENT
        Handler->>Handler: accumulate content
    end

    alt Tool Call Detected
        Adapter->>Stream: TOOL_CALL_START{toolCallId, toolName, runId, timestamp}
        Stream->>Handler: receive TOOL_CALL_START
        
        loop for each arg chunk
            Adapter->>Stream: TOOL_CALL_ARGS{delta, args, toolCallId, timestamp}
            Stream->>Handler: receive TOOL_CALL_ARGS
        end
        
        Adapter->>Stream: TOOL_CALL_END{toolCallId, input (parsed), timestamp}
        Stream->>Handler: receive TOOL_CALL_END
        Handler->>Handler: execute tool
    end

    Adapter->>Stream: TEXT_MESSAGE_END{messageId, runId, timestamp}
    Stream->>Handler: receive TEXT_MESSAGE_END

    alt Success
        Adapter->>Stream: RUN_FINISHED{runId, finishReason, usage, timestamp}
        Stream->>Handler: receive RUN_FINISHED
        Handler->>Handler: finalize response
    else Error
        Adapter->>Stream: RUN_ERROR{runId, error, code, timestamp}
        Stream->>Handler: receive RUN_ERROR
        Handler->>Handler: handle error state
    end

    Handler->>Client: complete with response/error
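The lifecycle ordering in the diagram can be checked on the consumer side with a small tracker. This is a sketch with simplified event shapes, not the library's actual handler:

```typescript
// Minimal lifecycle tracker for the sequence above: a run must start before
// it finishes or errors, and must emit exactly one terminal event.
// Event shapes are simplified for illustration.
type LifecycleEvent = { type: string; runId?: string }

class RunTracker {
  private started = false
  private finished = false

  handle(event: LifecycleEvent): void {
    switch (event.type) {
      case 'RUN_STARTED':
        this.started = true
        break
      case 'RUN_FINISHED':
      case 'RUN_ERROR':
        if (!this.started) throw new Error(`${event.type} before RUN_STARTED`)
        if (this.finished) throw new Error('duplicate terminal event')
        this.finished = true
        break
      // Content and tool-call events pass through untracked in this sketch.
    }
  }

  get isComplete(): boolean {
    return this.started && this.finished
  }
}
```

A tracker like this is also what several review comments below implicitly rely on when they flag duplicate or out-of-order terminal events.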

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

This introduces a comprehensive new AG-UI Protocol event system across multiple language implementations (Python, TypeScript, PHP) with intricate lifecycle tracking, state management, and event sequencing logic. Review requires understanding the new protocol, verifying proper event emission order, validating backward compatibility paths, and confirming lifecycle ID tracking across diverse adapter implementations.

Possibly related PRs

Suggested reviewers

  • jherr

Poem

🐰 Events now flow in splendid AG-UI grace,
RUN_STARTED, TEXT_MESSAGE, TOOL_CALL embraced!
Lifecycles tracked with IDs so true,
Legacy paths preserved for me and you.
The streaming protocol hops into place!

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
Check name Status Explanation Resolution
Description check ⚠️ Warning The pull request description is largely incomplete; the 'Changes' section is empty (contains only the template placeholder comment), and only the checklist items are filled in without describing what was actually implemented. Fill in the 'Changes' section with a clear explanation of what was implemented (e.g., new AG-UI event types, lifecycle tracking, adapter updates) and why these changes were made.
Title check ❓ Inconclusive The title 'AG-UI Support' is vague and generic, using non-descriptive terms that don't convey the specific changes or primary impact of the changeset. Replace with a more specific title that describes the main change, e.g., 'Implement AG-UI Protocol for streaming events' or 'Add AG-UI event types and lifecycle tracking to streaming adapters'.
✅ Passed checks (1 passed)
Check name Status Explanation
Docstring Coverage ✅ Passed Docstring coverage is 92.86% which is sufficient. The required threshold is 80.00%.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@jherr jherr marked this pull request as ready for review January 23, 2026 02:27
@nx-cloud

nx-cloud bot commented Jan 23, 2026

View your CI Pipeline Execution ↗ for commit 63a1a79

Command Status Duration Result
nx affected --targets=test:sherif,test:knip,tes... ✅ Succeeded 2m 56s View ↗
nx run-many --targets=build --exclude=examples/** ✅ Succeeded 1m 8s View ↗

☁️ Nx Cloud last updated this comment at 2026-01-23 02:33:29 UTC

@pkg-pr-new

pkg-pr-new bot commented Jan 23, 2026

Open in StackBlitz

@tanstack/ai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai@244

@tanstack/ai-anthropic

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-anthropic@244

@tanstack/ai-client

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-client@244

@tanstack/ai-devtools-core

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-devtools-core@244

@tanstack/ai-gemini

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-gemini@244

@tanstack/ai-grok

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-grok@244

@tanstack/ai-ollama

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-ollama@244

@tanstack/ai-openai

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openai@244

@tanstack/ai-openrouter

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-openrouter@244

@tanstack/ai-preact

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-preact@244

@tanstack/ai-react

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react@244

@tanstack/ai-react-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-react-ui@244

@tanstack/ai-solid

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid@244

@tanstack/ai-solid-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-solid-ui@244

@tanstack/ai-svelte

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-svelte@244

@tanstack/ai-vue

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue@244

@tanstack/ai-vue-ui

npm i https://pkg.pr.new/TanStack/ai/@tanstack/ai-vue-ui@244

@tanstack/preact-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/preact-ai-devtools@244

@tanstack/react-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/react-ai-devtools@244

@tanstack/solid-ai-devtools

npm i https://pkg.pr.new/TanStack/ai/@tanstack/solid-ai-devtools@244

commit: 63a1a79

Contributor

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 8

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (8)
packages/typescript/smoke-tests/adapters/src/harness.ts (1)

199-236: Fallback to content when delta is missing.

TEXT_MESSAGE_CONTENT is aggregated using delta only. If an adapter emits content-only chunks, fullResponse and the draft will miss text.

🔧 Proposed fix
-      const delta = chunk.delta || ''
+      const delta = chunk.delta ?? chunk.content ?? ''
       fullResponse += delta

       if (!assistantDraft) {
         assistantDraft = {
           role: 'assistant',
-          content: chunk.content || '',
+          content: chunk.content ?? delta ?? '',
           toolCalls: [],
         }
       } else {
         assistantDraft.content = (assistantDraft.content || '') + delta
       }
packages/typescript/ai-openai/src/adapters/text.ts (1)

626-645: Ensure RUN_STARTED precedes RUN_ERROR on early stream failure.

If the iterator throws before the first chunk, the catch block emits RUN_ERROR without a prior RUN_STARTED, breaking lifecycle ordering for consumers.

🔧 Proposed fix
     } catch (error: unknown) {
       const err = error as Error & { code?: string }
       console.log(
         '[OpenAI Adapter] Stream ended with error. Event type summary:',
         {
           totalChunks: chunkCount,
           error: err.message,
         },
       )
+      if (!hasEmittedRunStarted) {
+        hasEmittedRunStarted = true
+        yield {
+          type: 'RUN_STARTED',
+          runId,
+          model: options.model,
+          timestamp,
+        }
+      }
       yield {
         type: 'RUN_ERROR',
         runId,
         model: options.model,
         timestamp,
         error: {
           message: err.message || 'Unknown error occurred',
           code: err.code,
         },
       }
     }
packages/typescript/ai-gemini/src/adapters/text.ts (2)

117-128: RUN_ERROR event missing runId field.

The error handler in chatStream emits a RUN_ERROR event but doesn't include runId. Since this catch block is reached before any streaming occurs, runId hasn't been generated yet. However, for consistency with the RunErrorEvent interface (which has optional runId), consider generating a runId even for pre-stream errors:

🔧 Suggested fix: Generate runId for error events
   async *chatStream(
     options: TextOptions<GeminiTextProviderOptions>,
   ): AsyncIterable<StreamChunk> {
     const mappedOptions = this.mapCommonOptionsToGemini(options)
+    const runId = generateId(this.name)

     try {
       const result =
         await this.client.models.generateContentStream(mappedOptions)

-      yield* this.processStreamChunks(result, options.model)
+      yield* this.processStreamChunks(result, options.model, runId)
     } catch (error) {
       const timestamp = Date.now()
       yield {
         type: 'RUN_ERROR',
+        runId,
         model: options.model,
         timestamp,

368-421: Duplicate TOOL_CALL_END events may be emitted for UNEXPECTED_TOOL_CALL.

When finishReason === FinishReason.UNEXPECTED_TOOL_CALL, tool calls are added to toolCallMap with started: true (line 387), then TOOL_CALL_START and TOOL_CALL_END are emitted (lines 391-418). However, the loop at lines 424-441 iterates over all entries in toolCallMap and emits TOOL_CALL_END again, causing duplicate events for these tool calls.

πŸ› Proposed fix: Track which tool calls have already emitted TOOL_CALL_END
+        const endedToolCalls = new Set<string>()
+
         if (finishReason === FinishReason.UNEXPECTED_TOOL_CALL) {
           if (chunk.candidates[0].content?.parts) {
             for (const part of chunk.candidates[0].content.parts) {
               const functionCall = part.functionCall
               if (functionCall) {
                 const toolCallId =
                   functionCall.id ||
                   `${functionCall.name}_${Date.now()}_${nextToolIndex}`
                 // ... existing code ...

                 yield {
                   type: 'TOOL_CALL_END',
                   toolCallId,
                   toolName: functionCall.name || '',
                   model,
                   timestamp,
                   input: parsedInput,
                 }
+                endedToolCalls.add(toolCallId)
               }
             }
           }
         }

         // Emit TOOL_CALL_END for all tracked tool calls
         for (const [toolCallId, toolCallData] of toolCallMap.entries()) {
+          if (endedToolCalls.has(toolCallId)) {
+            continue
+          }
           let parsedInput: unknown = {}
packages/typescript/ai-anthropic/src/adapters/text.ts (1)

613-671: Prevent duplicate terminal events (RUN_FINISHED/RUN_ERROR).

The Anthropic Messages API always emits message_delta with stop_reason before message_stop. Both handlers currently emit terminal events, causing duplicates that break downstream state machines expecting a single terminal event.

Implement the suggested tracking flag to guard terminal event emissions:

Suggested fix
-    let hasEmittedRunStarted = false
-    let hasEmittedTextMessageStart = false
+    let hasEmittedRunStarted = false
+    let hasEmittedTextMessageStart = false
+    let hasEmittedRunTerminal = false
@@
-        } else if (event.type === 'message_stop') {
-          yield {
-            type: 'RUN_FINISHED',
-            runId,
-            model,
-            timestamp,
-            finishReason: 'stop',
-          }
+        } else if (event.type === 'message_stop') {
+          if (!hasEmittedRunTerminal) {
+            hasEmittedRunTerminal = true
+            yield {
+              type: 'RUN_FINISHED',
+              runId,
+              model,
+              timestamp,
+              finishReason: 'stop',
+            }
+          }
         } else if (event.type === 'message_delta') {
           if (event.delta.stop_reason) {
             switch (event.delta.stop_reason) {
               case 'tool_use': {
-                yield {
-                  type: 'RUN_FINISHED',
-                  runId,
-                  model,
-                  timestamp,
-                  finishReason: 'tool_calls',
-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_FINISHED',
+                    runId,
+                    model,
+                    timestamp,
+                    finishReason: 'tool_calls',
+                    usage: {
+                      promptTokens: event.usage.input_tokens || 0,
+                      completionTokens: event.usage.output_tokens || 0,
+                      totalTokens:
+                        (event.usage.input_tokens || 0) +
+                        (event.usage.output_tokens || 0),
+                    },
+                  }
+                }
                 break
               }
               case 'max_tokens': {
-                yield {
-                  type: 'RUN_ERROR',
-                  runId,
-                  model,
-                  timestamp,
-                  error: {
-                    message:
-                      'The response was cut off because the maximum token limit was reached.',
-                    code: 'max_tokens',
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_ERROR',
+                    runId,
+                    model,
+                    timestamp,
+                    error: {
+                      message:
+                        'The response was cut off because the maximum token limit was reached.',
+                      code: 'max_tokens',
+                    },
+                  }
+                }
                 break
               }
               default: {
-                yield {
-                  type: 'RUN_FINISHED',
-                  runId,
-                  model,
-                  timestamp,
-                  finishReason: 'stop',
-                  usage: {
-                    promptTokens: event.usage.input_tokens || 0,
-                    completionTokens: event.usage.output_tokens || 0,
-                    totalTokens:
-                      (event.usage.input_tokens || 0) +
-                      (event.usage.output_tokens || 0),
-                  },
-                }
+                if (!hasEmittedRunTerminal) {
+                  hasEmittedRunTerminal = true
+                  yield {
+                    type: 'RUN_FINISHED',
+                    runId,
+                    model,
+                    timestamp,
+                    finishReason: 'stop',
+                    usage: {
+                      promptTokens: event.usage.input_tokens || 0,
+                      completionTokens: event.usage.output_tokens || 0,
+                      totalTokens:
+                        (event.usage.input_tokens || 0) +
+                        (event.usage.output_tokens || 0),
+                    },
+                  }
+                }
               }
             }
           }
         }
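The guard suggested in that diff boils down to a single flag: both `message_delta` (carrying a stop_reason) and `message_stop` can signal completion, but only the first should emit a terminal event. A distilled sketch with simplified payloads (not the adapter's real event types):

```typescript
// Distilled terminal-event guard: only the first completion signal emits
// RUN_FINISHED, matching the hasEmittedRunTerminal flag in the fix above.
// Event payloads are simplified for illustration.
type AnthropicEvent = {
  type: 'content_block_delta' | 'message_delta' | 'message_stop'
}

function collectTerminalEvents(
  events: Array<AnthropicEvent>,
): Array<{ type: 'RUN_FINISHED' }> {
  const emitted: Array<{ type: 'RUN_FINISHED' }> = []
  let hasEmittedRunTerminal = false
  for (const event of events) {
    const isTerminal =
      event.type === 'message_delta' || event.type === 'message_stop'
    if (isTerminal && !hasEmittedRunTerminal) {
      hasEmittedRunTerminal = true
      emitted.push({ type: 'RUN_FINISHED' })
    }
  }
  return emitted
}
```

With the guard, the typical `message_delta` followed by `message_stop` sequence produces exactly one terminal event.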
packages/python/tanstack-ai/src/tanstack_ai/anthropic_adapter.py (2)

320-388: Prevent duplicate RUN_FINISHED / RUN_ERROR emissions.

message_delta emits terminal events and message_stop always emits RUN_FINISHED, which can duplicate finishes and even emit RUN_FINISHED after RUN_ERROR. Track completion to avoid double-terminating a run.

πŸ› Suggested fix
-                        if hasattr(delta, "stop_reason") and delta.stop_reason:
+                        if hasattr(delta, "stop_reason") and delta.stop_reason and not run_finished:
                             usage = None
                             if hasattr(event, "usage") and event.usage:
                                 usage = {
                                     "promptTokens": event.usage.input_tokens,
                                     "completionTokens": event.usage.output_tokens,
                                     "totalTokens": event.usage.input_tokens
                                     + event.usage.output_tokens,
                                 }
 
                             # Map Anthropic stop_reason to TanStack format
                             if delta.stop_reason == "max_tokens":
+                                run_finished = True
                                 yield RunErrorEvent(
                                     type="RUN_ERROR",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     error={
                                         "message": "The response was cut off because the maximum token limit was reached.",
                                         "code": "max_tokens",
                                     },
                                 )
                             else:
                                 finish_reason = {
                                     "end_turn": "stop",
                                     "tool_use": "tool_calls",
                                 }.get(delta.stop_reason, "stop")
 
+                                run_finished = True
                                 yield RunFinishedEvent(
                                     type="RUN_FINISHED",
                                     runId=run_id,
                                     model=options.model,
                                     timestamp=timestamp,
                                     finishReason=finish_reason,
                                     usage=usage,
                                 )
@@
-                        yield RunFinishedEvent(
-                            type="RUN_FINISHED",
-                            runId=run_id,
-                            model=options.model,
-                            timestamp=int(time.time() * 1000),
-                            finishReason=finish_reason,
-                            usage=usage,
-                        )
+                        if not run_finished:
+                            run_finished = True
+                            yield RunFinishedEvent(
+                                type="RUN_FINISHED",
+                                runId=run_id,
+                                model=options.model,
+                                timestamp=int(time.time() * 1000),
+                                finishReason=finish_reason,
+                                usage=usage,
+                            )

Add the flag alongside the other lifecycle tracking variables:

run_finished = False

390-401: Emit RUN_STARTED before RUN_ERROR on early failures.

If an exception occurs before the first stream event, the current path emits RUN_ERROR without a preceding RUN_STARTED.

πŸ› Suggested fix
-        except Exception as e:
-            # Emit RUN_ERROR
-            yield RunErrorEvent(
-                type="RUN_ERROR",
-                runId=run_id,
-                model=options.model,
-                timestamp=int(time.time() * 1000),
-                error={
-                    "message": str(e),
-                    "code": getattr(e, "code", None),
-                },
-            )
+        except Exception as e:
+            ts = int(time.time() * 1000)
+            if not has_emitted_run_started:
+                has_emitted_run_started = True
+                yield RunStartedEvent(
+                    type="RUN_STARTED",
+                    runId=run_id,
+                    model=options.model,
+                    timestamp=ts,
+                    threadId=None,
+                )
+            yield RunErrorEvent(
+                type="RUN_ERROR",
+                runId=run_id,
+                model=options.model,
+                timestamp=ts,
+                error={
+                    "message": str(e),
+                    "code": getattr(e, "code", None),
+                },
+            )
packages/python/tanstack-ai/src/tanstack_ai/types.py (1)

326-333: Limit BaseStreamChunk.type to legacy values.

BaseStreamChunk is the base for legacy chunks, but type: StreamChunkType allows AG-UI values on legacy shapes. Tighten it to LegacyStreamChunkType to avoid mixed typing.

✅ Suggested fix
-    type: StreamChunkType
+    type: LegacyStreamChunkType
🤖 Fix all issues with AI agents
In `@docs/protocol/chunk-definitions.md`:
- Around line 2-41: The AG-UI event list mentions STATE_SNAPSHOT, STATE_DELTA,
and CUSTOM but no shapes are defined; update the docs by either adding explicit
interface/type definitions for these events (e.g., StateSnapshotEvent,
StateDeltaEvent, CustomAGUIEvent that extend BaseAGUIEvent and include fields
like state: unknown, delta: unknown, source?: string, and payload?: unknown) or
clearly mark them as "reserved/future" with example usage and minimal required
fields (type, timestamp, model, rawEvent) so readers know expected structure;
reference BaseAGUIEvent and AGUIEventType when adding the new sections to keep
the schema consistent.
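One way the reserved shapes could look, following the BaseAGUIEvent pattern referenced above. These are illustrative proposals only, not the project's actual definitions:

```typescript
// Illustrative proposals for the currently undocumented events.
// Field names (state, delta, source, payload) are assumptions.
interface BaseAGUIEvent {
  type: string
  timestamp: number
  model?: string
}

interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT'
  state: unknown // full state at a point in time
}

interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA'
  delta: unknown // incremental change relative to the last snapshot
}

interface CustomAGUIEvent extends BaseAGUIEvent {
  type: 'CUSTOM'
  source?: string // originator of the custom event
  payload?: unknown
}
```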

In `@packages/php/tanstack-ai/src/StreamChunkConverter.php`:
- Around line 226-263: The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.

In `@packages/typescript/ai-gemini/src/adapters/text.ts`:
- Around line 248-255: The STEP_FINISHED yield uses a fallback expression
"stepId || generateId(this.name)" even though stepId must have been set by
STEP_STARTED; replace the fallback with a non-null assertion on stepId (e.g.,
use stepId! in the STEP_FINISHED object) so the code expresses the invariant and
avoids silently generating a new id, and ensure the change is made in the yield
that produces type: 'STEP_FINISHED' (referencing the stepId and generateId
symbols and the surrounding STEP_STARTED/STEP_FINISHED logic).

In `@packages/typescript/ai-ollama/src/adapters/text.ts`:
- Around line 336-345: The STEP_FINISHED emission currently falls back to
generateId('step') when stepId is null which can create inconsistent IDs; update
the emission to rely on the fact STEP_STARTED sets stepId and remove the
fallback by using a non-null assertion (stepId!) or otherwise assert/throw if
stepId is missing so STEP_FINISHED always uses the same stepId set by
STEP_STARTED (refer to STEP_FINISHED, STEP_STARTED, stepId, generateId, and
chunk.message.thinking in the surrounding code).

In `@packages/typescript/ai-openai/src/adapters/summarize.ts`:
- Around line 65-87: The SummarizationResult.id stays empty for AG-UI streams
because only legacy 'content' sets id; update the logic in summarize.ts so that
when handling chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId
(or chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set
id = chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.

In `@packages/typescript/ai/src/activities/chat/index.ts`:
- Around line 584-589: In handleTextMessageContentEvent, guard explicitly
against undefined instead of using if (chunk.content) so empty-string content
("") is not treated as absent; change the condition to check chunk.content !==
undefined (or typeof chunk.content !== "undefined") and assign
this.accumulatedContent = chunk.content when present, otherwise append
chunk.delta; also ensure this.accumulatedContent is initialized to an empty
string before appending to avoid NaN/undefined concatenation (references:
handleTextMessageContentEvent, TextMessageContentEvent, this.accumulatedContent,
chunk.content, chunk.delta).

In `@packages/typescript/ai/src/stream-to-response.ts`:
- Around line 29-33: The streamToText handler currently only appends chunk.delta
for TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content
without delta; update the logic in streamToText (the branch handling chunk.type
=== 'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.

In `@packages/typescript/smoke-tests/adapters/src/harness.ts`:
- Around line 268-326: The TOOL_CALL_END branch is leaving entries in
toolCallsInProgress which can leak stale args; inside the TOOL_CALL_END handling
(the else if block checking chunk.type === 'TOOL_CALL_END') remove the completed
entry from toolCallsInProgress (call toolCallsInProgress.delete(id) using the id
local variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
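Several of these prompts turn on the same two invariants: text events fall back from `delta` to `content` (treating only `undefined` as absent), and in-progress tool-call state must be cleared on TOOL_CALL_END. A minimal sketch of both, using hypothetical helper names rather than the library API:

```typescript
// Sketch of the accumulation invariants described in the prompts above.
// Names (pickPiece, onToolCallStart, onToolCallEnd) are illustrative.
interface ContentEventLike {
  delta?: string
  content?: string
}

// Fall back from delta to content; '' is a valid value, only undefined is absent.
function pickPiece(event: ContentEventLike): string {
  if (event.delta !== undefined) return event.delta
  return event.content ?? ''
}

// In-progress tool calls are deleted on end so reused IDs start clean.
const toolCallsInProgress = new Map<string, { name: string; args: string }>()

function onToolCallStart(id: string, name: string): void {
  toolCallsInProgress.set(id, { name, args: '' })
}

function onToolCallEnd(id: string): { name: string; args: string } | undefined {
  const finished = toolCallsInProgress.get(id)
  toolCallsInProgress.delete(id) // clear state for reused IDs
  return finished
}
```

Here `pickPiece` mirrors the `chunk.delta ?? chunk.content` fallback suggested for `streamToText`, and `onToolCallEnd` mirrors the `toolCallsInProgress.delete(id)` fix suggested for the harness.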
🧹 Nitpick comments (5)
packages/typescript/ai-ollama/src/adapters/text.ts (1)

212-257: Tool call handling emits TOOL_CALL_END immediately after TOOL_CALL_START without TOOL_CALL_ARGS.

The handleToolCall function emits TOOL_CALL_START followed immediately by TOOL_CALL_END. This differs from the Gemini adapter which emits TOOL_CALL_ARGS events between start and end.

If Ollama provides tool arguments in a single chunk (non-streaming), this is acceptable. However, for consistency with the AG-UI protocol and other adapters, consider emitting a TOOL_CALL_ARGS event with the full arguments before TOOL_CALL_END:

♻️ Suggested addition of TOOL_CALL_ARGS event
         // Emit TOOL_CALL_START if not already emitted for this tool call
         if (!toolCallsEmitted.has(toolCallId)) {
           toolCallsEmitted.add(toolCallId)
           events.push({
             type: 'TOOL_CALL_START',
             toolCallId,
             toolName: actualToolCall.function.name || '',
             model: chunk.model,
             timestamp,
             index: actualToolCall.function.index,
           })
         }

         // Parse input
         let parsedInput: unknown = {}
         const argsStr =
           typeof actualToolCall.function.arguments === 'string'
             ? actualToolCall.function.arguments
             : JSON.stringify(actualToolCall.function.arguments)
         try {
           parsedInput = JSON.parse(argsStr)
         } catch {
           parsedInput = actualToolCall.function.arguments
         }

+        // Emit TOOL_CALL_ARGS with full arguments
+        events.push({
+          type: 'TOOL_CALL_ARGS',
+          toolCallId,
+          model: chunk.model,
+          timestamp,
+          delta: argsStr,
+          args: argsStr,
+        })
+
         // Emit TOOL_CALL_END
         events.push({
           type: 'TOOL_CALL_END',
packages/typescript/ai-gemini/tests/gemini-adapter.test.ts (1)

300-331: Test assertions correctly validate AG-UI event sequence.

The updated assertions properly verify the new event lifecycle: RUN_STARTED → TEXT_MESSAGE_START → TEXT_MESSAGE_CONTENT (×N) → TEXT_MESSAGE_END → RUN_FINISHED. Using toMatchObject allows for flexible matching while validating essential fields.

Consider adding test cases for:

  1. Tool call event sequence (TOOL_CALL_START → TOOL_CALL_ARGS → TOOL_CALL_END)
  2. Error scenarios (RUN_ERROR events)
  3. Thinking/reasoning flow (STEP_STARTED → STEP_FINISHED)
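For the first suggested case, one way to assert ordering without tying the test to exact payloads is a small sequence validator (a sketch, not part of the test suite):

```typescript
// Sketch: check that each tool call observes the AG-UI lifecycle
// TOOL_CALL_START, then TOOL_CALL_ARGS (zero or more), then TOOL_CALL_END.
// Event shape is illustrative, not the package's exported types.
interface ToolEventLike {
  type: string
  toolCallId?: string
}

function isValidToolCallSequence(events: Array<ToolEventLike>): boolean {
  const phase = new Map<string, 'started' | 'ended'>()
  for (const event of events) {
    const id = event.toolCallId
    if (id === undefined) continue
    if (event.type === 'TOOL_CALL_START') {
      if (phase.has(id)) return false // duplicate start for this id
      phase.set(id, 'started')
    } else if (event.type === 'TOOL_CALL_ARGS') {
      if (phase.get(id) !== 'started') return false // args outside start/end window
    } else if (event.type === 'TOOL_CALL_END') {
      if (phase.get(id) !== 'started') return false // end without start
      phase.set(id, 'ended')
    }
  }
  // Every started call must have ended.
  return Array.from(phase.values()).every((p) => p === 'ended')
}
```

A test could then feed the adapter's emitted events through this check alongside the usual payload assertions.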
packages/typescript/ai/src/activities/chat/tools/tool-calls.ts (2)

72-80: Silent no-op if tool call not found in addToolCallArgsEvent.

If addToolCallArgsEvent is called before addToolCallStartEvent (out-of-order events), the arguments are silently dropped. Consider logging a warning or throwing for debugging purposes.

♻️ Optional: Add warning for missing tool call
   addToolCallArgsEvent(event: ToolCallArgsEvent): void {
     // Find the tool call by ID
     for (const [, toolCall] of this.toolCallsMap.entries()) {
       if (toolCall.id === event.toolCallId) {
         toolCall.function.arguments += event.delta
-        break
+        return
       }
     }
+    // Tool call not found - this shouldn't happen in normal flow
+    console.warn(`TOOL_CALL_ARGS received for unknown toolCallId: ${event.toolCallId}`)
   }

240-258: TOOL_CALL_END event missing input field when emitted from executeTools.

The TOOL_CALL_END event emitted at line 241-248 includes result but not input. According to the ToolCallEndEvent interface in types.ts, input is an optional field that should contain the final parsed input arguments. For consistency with adapter emissions, consider including it:

♻️ Add input field to TOOL_CALL_END event
       if (finishEvent.type === 'RUN_FINISHED') {
+        let parsedInput: unknown
+        try {
+          parsedInput = JSON.parse(toolCall.function.arguments)
+        } catch {
+          parsedInput = undefined
+        }
         yield {
           type: 'TOOL_CALL_END',
           toolCallId: toolCall.id,
           toolName: toolCall.function.name,
           model: finishEvent.model,
           timestamp: Date.now(),
+          input: parsedInput,
           result: toolResultContent,
         }
packages/python/tanstack-ai/src/tanstack_ai/converter.py (1)

391-407: Drop the unused loop index to satisfy lint.

Ruff flags the loop index as unused. You can iterate over values directly.

♻️ Suggested tweak
-            for tool_index, tool_call in self.tool_calls_map.items():
+            for tool_call in self.tool_calls_map.values():

Comment on lines +2 to +41
title: AG-UI Event Definitions
id: chunk-definitions
---

TanStack AI implements the [AG-UI (Agent-User Interaction) Protocol](https://docs.ag-ui.com/introduction), an open, lightweight, event-based protocol that standardizes how AI agents connect to user-facing applications.

All streaming responses in TanStack AI consist of a series of **AG-UI Events** - discrete JSON objects representing different stages of the conversation lifecycle. These events enable real-time updates for content generation, tool calls, thinking/reasoning, and completion signals.

## Base Structure

All AG-UI events share a common base structure:

```typescript
interface BaseAGUIEvent {
  type: AGUIEventType;
  timestamp: number; // Unix timestamp in milliseconds
  model?: string; // Model identifier (TanStack AI addition)
  rawEvent?: unknown; // Original provider event for debugging
}
```

### AG-UI Event Types

```typescript
type AGUIEventType =
  | 'RUN_STARTED' // Run lifecycle begins
  | 'RUN_FINISHED' // Run completed successfully
  | 'RUN_ERROR' // Error occurred
  | 'TEXT_MESSAGE_START' // Text message begins
  | 'TEXT_MESSAGE_CONTENT' // Text content streaming
  | 'TEXT_MESSAGE_END' // Text message completes
  | 'TOOL_CALL_START' // Tool invocation begins
  | 'TOOL_CALL_ARGS' // Tool arguments streaming
  | 'TOOL_CALL_END' // Tool call completes (with result)
  | 'STEP_STARTED' // Thinking/reasoning step begins
  | 'STEP_FINISHED' // Thinking/reasoning step completes
  | 'STATE_SNAPSHOT' // Full state synchronization
  | 'STATE_DELTA' // Incremental state update
  | 'CUSTOM'; // Custom extensibility events
```
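For illustration, a minimal consumer of these event types might accumulate streamed text like this (a sketch; only `type`, `delta`, and `content` are assumed here, and the shipped TypeScript types live in the TanStack AI packages):

```typescript
// Sketch: accumulate streamed text from AG-UI events.
// The event shape below is illustrative, not the package's exported types.
type AGUIEventLike = { type: string; delta?: string; content?: string }

function accumulateText(events: Array<AGUIEventLike>): string {
  let text = ''
  for (const event of events) {
    if (event.type !== 'TEXT_MESSAGE_CONTENT') continue
    // Prefer a full content snapshot when present ('' counts as present),
    // otherwise append the incremental delta.
    text = event.content !== undefined ? event.content : text + (event.delta ?? '')
  }
  return text
}
```

The same loop extends naturally to the other lifecycle events, e.g. switching on `TOOL_CALL_*` to build up tool invocations.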

⚠️ Potential issue | 🟑 Minor

Document STATE_* and CUSTOM event shapes (or mark them as reserved).

AGUIEventType lists STATE_SNAPSHOT, STATE_DELTA, and CUSTOM, but there are no corresponding definitions below. Add sections or explicitly call them out as future/reserved to avoid reader confusion.

🤖 Prompt for AI Agents
In `@docs/protocol/chunk-definitions.md` around lines 2 - 41, The AG-UI event list
mentions STATE_SNAPSHOT, STATE_DELTA, and CUSTOM but no shapes are defined;
update the docs by either adding explicit interface/type definitions for these
events (e.g., StateSnapshotEvent, StateDeltaEvent, CustomAGUIEvent that extend
BaseAGUIEvent and include fields like state: unknown, delta: unknown, source?:
string, and payload?: unknown) or clearly mark them as "reserved/future" with
example usage and minimal required fields (type, timestamp, model, rawEvent) so
readers know expected structure; reference BaseAGUIEvent and AGUIEventType when
adding the new sections to keep the schema consistent.
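As a sketch of what such "reserved" shapes could look like, interfaces extending the base event might read as follows; every field name here is an assumption, since the package does not define these events yet:

```typescript
// Hypothetical shapes for the reserved AG-UI events. These are placeholders
// for documentation purposes, not the shipped TanStack AI types.
interface BaseAGUIEvent {
  type: string
  timestamp: number
  model?: string
  rawEvent?: unknown
}

interface StateSnapshotEvent extends BaseAGUIEvent {
  type: 'STATE_SNAPSHOT'
  state: unknown // full state replacing the client's copy
}

interface StateDeltaEvent extends BaseAGUIEvent {
  type: 'STATE_DELTA'
  delta: unknown // incremental patch applied to prior state
}

interface CustomAGUIEvent extends BaseAGUIEvent {
  type: 'CUSTOM'
  name: string // consumer-defined event name
  payload?: unknown
}

const snapshot: StateSnapshotEvent = {
  type: 'STATE_SNAPSHOT',
  timestamp: Date.now(),
  state: { messages: [] },
}
```

Even if the real shapes end up differing, publishing a placeholder like this (or an explicit "reserved for future use" note) avoids reader confusion.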

Comment on lines +226 to +263
} elseif ($eventType === 'content_block_stop') {
    // Content block completed
    $toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
    if ($toolCall) {
        // If tool call wasn't started yet (no args), start it now
        if (!$toolCall['started']) {
            $toolCall['started'] = true;
            $this->toolCallsMap[$this->currentToolIndex] = $toolCall;

            $chunks[] = [
                'type' => 'TOOL_CALL_START',
                'toolCallId' => $toolCall['id'],
                'toolName' => $toolCall['name'],
                'model' => $this->model,
                'timestamp' => $this->timestamp,
                'toolCall' => [
                    'id' => $toolCall['id'],
                    'type' => 'function',
                    'function' => [
                        'name' => $toolCall['name'],
                        'arguments' => $partialJson // Incremental JSON
                    ]
                ],
                'index' => $this->currentToolIndex
            ];
        }

        // Parse input and emit TOOL_CALL_END
        $parsedInput = [];
        if (!empty($toolCall['input'])) {
            try {
                $parsedInput = json_decode($toolCall['input'], true) ?? [];
            } catch (\Exception $e) {
                $parsedInput = [];
            }
        }

        $chunks[] = [
            'type' => 'TOOL_CALL_END',
            'toolCallId' => $toolCall['id'],
            'toolName' => $toolCall['name'],
            'model' => $this->model,
            'timestamp' => $this->timestamp,
            'input' => $parsedInput
        ];
    }

⚠️ Potential issue | 🟠 Major

Prevent duplicate TOOL_CALL_END emissions on later block stops.

content_block_stop reuses $currentToolIndex without clearing the tool call entry. If additional blocks occur after a tool_use block, TOOL_CALL_END can be emitted multiple times for the same call.

🔧 Proposed fix
                 $chunks[] = [
                     'type' => 'TOOL_CALL_END',
                     'toolCallId' => $toolCall['id'],
                     'toolName' => $toolCall['name'],
                     'model' => $this->model,
                     'timestamp' => $this->timestamp,
                     'input' => $parsedInput
                 ];
+
+                unset($this->toolCallsMap[$this->currentToolIndex]);
πŸ“ Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
} elseif ($eventType === 'content_block_stop') {
// Content block completed
$toolCall = $this->toolCallsMap[$this->currentToolIndex] ?? null;
if ($toolCall) {
// If tool call wasn't started yet (no args), start it now
if (!$toolCall['started']) {
$toolCall['started'] = true;
$this->toolCallsMap[$this->currentToolIndex] = $toolCall;
$chunks[] = [
'type' => 'TOOL_CALL_START',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'index' => $this->currentToolIndex
];
}
// Parse input and emit TOOL_CALL_END
$parsedInput = [];
if (!empty($toolCall['input'])) {
try {
$parsedInput = json_decode($toolCall['input'], true) ?? [];
} catch (\Exception $e) {
$parsedInput = [];
}
}
$chunks[] = [
'type' => 'TOOL_CALL_END',
'toolCallId' => $toolCall['id'],
'toolName' => $toolCall['name'],
'model' => $this->model,
'timestamp' => $this->timestamp,
'input' => $parsedInput
];
unset($this->toolCallsMap[$this->currentToolIndex]);
}
🤖 Prompt for AI Agents
In `@packages/php/tanstack-ai/src/StreamChunkConverter.php` around lines 226 -
263, The content_block_stop handler emits TOOL_CALL_END using
$this->currentToolIndex and leaves the entry in $this->toolCallsMap, so
subsequent content_block_stop events can emit TOOL_CALL_END again for the same
tool call; update the logic in the content_block_stop branch (where $toolCall is
read from $this->toolCallsMap[$this->currentToolIndex]) to prevent duplicates by
marking the call as completed or removing it after emitting the TOOL_CALL_END
(e.g., set a 'completed' flag on $toolCall or unset
$this->toolCallsMap[$this->currentToolIndex]) and ensure the earlier guard
checks that !$toolCall['completed'] before emitting
TOOL_CALL_START/TOOL_CALL_END.

Comment on lines 248 to 255
        yield {
          type: 'STEP_FINISHED',
          stepId: stepId || generateId(this.name),
          model,
          timestamp,
          delta: part.text,
          content: part.text,
        }

⚠️ Potential issue | 🟑 Minor

Same stepId fallback issue as Ollama adapter.

Line 250 uses stepId || generateId(this.name) as a fallback, but stepId should always be set by STEP_STARTED before STEP_FINISHED is yielded. Consider using a non-null assertion for consistency.

🤖 Prompt for AI Agents
In `@packages/typescript/ai-gemini/src/adapters/text.ts` around lines 248 - 255,
The STEP_FINISHED yield uses a fallback expression "stepId ||
generateId(this.name)" even though stepId must have been set by STEP_STARTED;
replace the fallback with a non-null assertion on stepId (e.g., use stepId! in
the STEP_FINISHED object) so the code expresses the invariant and avoids
silently generating a new id, and ensure the change is made in the yield that
produces type: 'STEP_FINISHED' (referencing the stepId and generateId symbols
and the surrounding STEP_STARTED/STEP_FINISHED logic).

Comment on lines 336 to 345
        accumulatedReasoning += chunk.message.thinking
        yield {
          type: 'STEP_FINISHED',
          stepId: stepId || generateId('step'),
          model: chunk.model,
          timestamp,
          delta: chunk.message.thinking,
          content: accumulatedReasoning,
        }
      }

⚠️ Potential issue | 🟑 Minor

Potential issue: stepId fallback generates new ID on each STEP_FINISHED.

On line 339, if stepId is null, a new ID is generated via generateId('step'). However, stepId should always be set by the STEP_STARTED emission on line 326. The fallback || generateId('step') suggests defensive coding, but if reached, it would create inconsistent step IDs across events.

🔧 Suggested fix: Assert stepId is set or remove fallback
         yield {
           type: 'STEP_FINISHED',
-          stepId: stepId || generateId('step'),
+          stepId: stepId!,
           model: chunk.model,
           timestamp,
           delta: chunk.message.thinking,
           content: accumulatedReasoning,
         }

The ! assertion is safe here because STEP_FINISHED is only yielded inside the if (chunk.message.thinking) block which always sets stepId via STEP_STARTED first.

📝 Committable suggestion

Suggested change
accumulatedReasoning += chunk.message.thinking
yield {
type: 'STEP_FINISHED',
stepId: stepId!,
model: chunk.model,
timestamp,
delta: chunk.message.thinking,
content: accumulatedReasoning,
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai-ollama/src/adapters/text.ts` around lines 336 - 345,
The STEP_FINISHED emission currently falls back to generateId('step') when
stepId is null which can create inconsistent IDs; update the emission to rely on
the fact STEP_STARTED sets stepId and remove the fallback by using a non-null
assertion (stepId!) or otherwise assert/throw if stepId is missing so
STEP_FINISHED always uses the same stepId set by STEP_STARTED (refer to
STEP_FINISHED, STEP_STARTED, stepId, generateId, and chunk.message.thinking in
the surrounding code).

Comment on lines +65 to +87
      // Legacy content event
      if (chunk.type === 'content') {
        summary = chunk.content
        id = chunk.id
        model = chunk.model
      }
      // AG-UI TEXT_MESSAGE_CONTENT event
      else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
        if (chunk.content) {
          summary = chunk.content
        } else {
          summary += chunk.delta
        }
        model = chunk.model || model
      }
      // Legacy done event
      if (chunk.type === 'done' && chunk.usage) {
        usage = chunk.usage
      }
      // AG-UI RUN_FINISHED event
      else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
        usage = chunk.usage
      }

⚠️ Potential issue | 🟑 Minor

Populate id for AG-UI chunks.

For AG-UI streams, id remains '' because only legacy content sets it. Consider mapping from messageId (or runId on RUN_FINISHED) to keep SummarizationResult.id meaningful.

🔧 Proposed fix
       if (chunk.type === 'content') {
         summary = chunk.content
         id = chunk.id
         model = chunk.model
       }
       // AG-UI TEXT_MESSAGE_CONTENT event
       else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+        if (!id && 'messageId' in chunk) {
+          id = chunk.messageId
+        }
         if (chunk.content) {
           summary = chunk.content
         } else {
           summary += chunk.delta
         }
         model = chunk.model || model
       }
       // Legacy done event
       if (chunk.type === 'done' && chunk.usage) {
         usage = chunk.usage
       }
       // AG-UI RUN_FINISHED event
       else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
+        if (!id && 'runId' in chunk) {
+          id = chunk.runId
+        }
         usage = chunk.usage
       }
📝 Committable suggestion

Suggested change
// Legacy content event
if (chunk.type === 'content') {
summary = chunk.content
id = chunk.id
model = chunk.model
}
// AG-UI TEXT_MESSAGE_CONTENT event
else if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
if (!id && 'messageId' in chunk) {
id = chunk.messageId
}
if (chunk.content) {
summary = chunk.content
} else {
summary += chunk.delta
}
model = chunk.model || model
}
// Legacy done event
if (chunk.type === 'done' && chunk.usage) {
usage = chunk.usage
}
// AG-UI RUN_FINISHED event
else if (chunk.type === 'RUN_FINISHED' && chunk.usage) {
if (!id && 'runId' in chunk) {
id = chunk.runId
}
usage = chunk.usage
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai-openai/src/adapters/summarize.ts` around lines 65 -
87, The SummarizationResult.id stays empty for AG-UI streams because only legacy
'content' sets id; update the logic in summarize.ts so that when handling
chunk.type === 'TEXT_MESSAGE_CONTENT' you set id = chunk.messageId (or
chunk.messageId || id) and when handling chunk.type === 'RUN_FINISHED' set id =
chunk.runId (or chunk.runId || id) so SummarizationResult.id is populated;
ensure you still preserve existing fallback behavior (keep existing id if new
properties are absent) and reference the variables chunk, id, model, usage, and
the event types TEXT_MESSAGE_CONTENT and RUN_FINISHED when applying the changes.

Comment on lines +584 to +589
  private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
    if (chunk.content) {
      this.accumulatedContent = chunk.content
    } else {
      this.accumulatedContent += chunk.delta
    }

⚠️ Potential issue | 🟑 Minor

Guard against empty-string content values when accumulating.

if (chunk.content) treats "" as absent and can mis-accumulate content. Prefer an explicit undefined check.

🩹 Suggested fix
-    if (chunk.content) {
+    if (chunk.content !== undefined) {
       this.accumulatedContent = chunk.content
     } else {
       this.accumulatedContent += chunk.delta
     }
📝 Committable suggestion

Suggested change
private handleTextMessageContentEvent(chunk: TextMessageContentEvent): void {
if (chunk.content !== undefined) {
this.accumulatedContent = chunk.content
} else {
this.accumulatedContent += chunk.delta
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/activities/chat/index.ts` around lines 584 - 589,
In handleTextMessageContentEvent, guard explicitly against undefined instead of
using if (chunk.content) so empty-string content ("") is not treated as absent;
change the condition to check chunk.content !== undefined (or typeof
chunk.content !== "undefined") and assign this.accumulatedContent =
chunk.content when present, otherwise append chunk.delta; also ensure
this.accumulatedContent is initialized to an empty string before appending to
avoid NaN/undefined concatenation (references: handleTextMessageContentEvent,
TextMessageContentEvent, this.accumulatedContent, chunk.content, chunk.delta).

Comment on lines +29 to 33
    // Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
    if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
      accumulatedContent += chunk.delta
    } else if (chunk.type === 'content' && chunk.delta) {
      accumulatedContent += chunk.delta

⚠️ Potential issue | 🟑 Minor

Handle TEXT_MESSAGE_CONTENT when delta is absent.

If an adapter emits TEXT_MESSAGE_CONTENT with only content (no delta), streamToText returns an empty string. Consider falling back to content to preserve non-delta events.

🔧 Proposed fix
-    if (chunk.type === 'TEXT_MESSAGE_CONTENT' && chunk.delta) {
-      accumulatedContent += chunk.delta
-    } else if (chunk.type === 'content' && chunk.delta) {
-      accumulatedContent += chunk.delta
-    }
+    if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
+      const piece = chunk.delta ?? chunk.content
+      if (piece) accumulatedContent += piece
+    } else if (chunk.type === 'content') {
+      const piece = chunk.delta ?? chunk.content
+      if (piece) accumulatedContent += piece
+    }
📝 Committable suggestion

Suggested change
// Handle both AG-UI TEXT_MESSAGE_CONTENT and legacy 'content' chunks
if (chunk.type === 'TEXT_MESSAGE_CONTENT') {
const piece = chunk.delta ?? chunk.content
if (piece) accumulatedContent += piece
} else if (chunk.type === 'content') {
const piece = chunk.delta ?? chunk.content
if (piece) accumulatedContent += piece
}
🤖 Prompt for AI Agents
In `@packages/typescript/ai/src/stream-to-response.ts` around lines 29 - 33, The
streamToText handler currently only appends chunk.delta for
TEXT_MESSAGE_CONTENT, causing loss when an adapter emits chunk.content without
delta; update the logic in streamToText (the branch handling chunk.type ===
'TEXT_MESSAGE_CONTENT') to fall back to chunk.content when chunk.delta is
undefined or empty, mirroring the existing fallback for legacy 'content' chunks
(the other branch checking chunk.type === 'content'), so accumulatedContent uses
chunk.delta if present otherwise chunk.content.

Comment on lines +268 to +326
    // AG-UI TOOL_CALL_START event
    else if (chunk.type === 'TOOL_CALL_START') {
      const id = chunk.toolCallId
      toolCallsInProgress.set(id, {
        name: chunk.toolName,
        args: '',
      })

      if (!assistantDraft) {
        assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
      }

      chunkData.toolCallId = chunk.toolCallId
      chunkData.toolName = chunk.toolName
    }
    // AG-UI TOOL_CALL_ARGS event
    else if (chunk.type === 'TOOL_CALL_ARGS') {
      const id = chunk.toolCallId
      const existing = toolCallsInProgress.get(id)
      if (existing) {
        existing.args = chunk.args || existing.args + (chunk.delta || '')
      }

      chunkData.toolCallId = chunk.toolCallId
      chunkData.delta = chunk.delta
      chunkData.args = chunk.args
    }
    // AG-UI TOOL_CALL_END event
    else if (chunk.type === 'TOOL_CALL_END') {
      const id = chunk.toolCallId
      const inProgress = toolCallsInProgress.get(id)
      const name = chunk.toolName || inProgress?.name || ''
      const args =
        inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

      // Add to legacy toolCallMap for compatibility
      toolCallMap.set(id, {
        id,
        name,
        arguments: args,
      })

      // Add to assistant draft
      if (!assistantDraft) {
        assistantDraft = { role: 'assistant', content: null, toolCalls: [] }
      }
      assistantDraft.toolCalls?.push({
        id,
        type: 'function',
        function: {
          name,
          arguments: args,
        },
      })

      chunkData.toolCallId = chunk.toolCallId
      chunkData.toolName = chunk.toolName
      chunkData.input = chunk.input
    }

⚠️ Potential issue | 🟑 Minor

Clear in-progress tool call state after TOOL_CALL_END.

toolCallsInProgress entries persist after completion. If a toolCallId is reused or multiple tool calls occur, stale args can leak.

🔧 Proposed fix
     else if (chunk.type === 'TOOL_CALL_END') {
       const id = chunk.toolCallId
       const inProgress = toolCallsInProgress.get(id)
       const name = chunk.toolName || inProgress?.name || ''
       const args =
         inProgress?.args || (chunk.input ? JSON.stringify(chunk.input) : '')

+      toolCallsInProgress.delete(id)
🤖 Prompt for AI Agents
In `@packages/typescript/smoke-tests/adapters/src/harness.ts` around lines 268 -
326, The TOOL_CALL_END branch is leaving entries in toolCallsInProgress which
can leak stale args; inside the TOOL_CALL_END handling (the else if block
checking chunk.type === 'TOOL_CALL_END') remove the completed entry from
toolCallsInProgress (call toolCallsInProgress.delete(id) using the id local
variable) right after you derive name/args and after updating
toolCallMap/assistantDraft so the in-progress state is cleared for reused IDs;
reference the toolCallsInProgress map and the TOOL_CALL_END branch in harness.ts
to make this change.
