
Conversation


@zzcr zzcr commented Sep 1, 2025

Summary by CodeRabbit

  • New Features
    • Introduces ReAct-based tool-using conversations with step-by-step reasoning and streaming responses in chat.
  • Style
    • Updated Office Assistant menu text for the first quick action.
  • Chores
    • Updated default AI service endpoint across components.
    • Upgraded dependencies for improved compatibility and stability.

@zzcr zzcr marked this pull request as draft September 1, 2025 01:52

coderabbitai bot commented Sep 1, 2025

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting reviews.review_status to false in the CodeRabbit configuration file.

Walkthrough

Updates URLs and IDs in doc-ai, tweaks a Vue view, and adjusts dependencies. Next-remoter switches default agent root, updates prompts, adopts a built-in AI provider, and silences a log. Next-sdk adds ReAct model support with new types, system prompts, and a ReAct loop, and bumps an ai library version.

Changes

  • Doc AI config and view (packages/doc-ai/package.json, packages/doc-ai/src/const.ts, packages/doc-ai/src/views/comprehensive/index.vue): Bumped devDependency @vitejs/plugin-vue-jsx to ^5.1.1. Updated the AGENT_ROOT and SSEION_ID constants. Inserted a debugger statement in the get-weather tool handler.
  • Next Remoter integration (packages/next-remoter/package.json, packages/next-remoter/src/components/tiny-robot-chat.vue, packages/next-remoter/src/composable/CustomAgentModelProvider.ts, packages/next-remoter/src/composable/useTinyRobotChat.ts): Added dependency @built-in-ai/core ^2.0.0. Changed the default agentRoot URL and updated a menu item's text. Switched the model provider to built-in AI (ReAct-enabled), passed a stream handler, and enabled stream logging. Commented out a console log in onReceiveData.
  • Next SDK ReAct support (packages/next-sdk/agent/AgentModelProvider.ts, packages/next-sdk/agent/react/index.ts, packages/next-sdk/agent/react/systemPrompt.ts, packages/next-sdk/agent/type.ts, packages/next-sdk/package.json): Added an isReActModel option and ReAct flow wiring (system prompt, tools, onStepFinish loop). Introduced ReAct utilities (tool-call parsing and the loop). Added system prompt constants and ReAct-related types. Bumped ai to ^5.0.28.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor U as User
  participant VC as View/Component
  participant AMP as AgentModelProvider (isReActModel=true)
  participant LLM as LLM (builtInAI)
  participant RL as runReActLoop
  participant T as Tools

  U->>VC: Send message
  VC->>AMP: chatStream({ prompt, tools, handler })
  AMP->>AMP: Merge tools & build system prompt
  AMP->>LLM: chat({ system, model: llm, tools, onStepFinish })
  LLM-->>AMP: Stream step(s)
  AMP->>RL: onStepFinish(step, { tools, system, llm, handler })
  RL->>RL: Parse Thought/Action/Final Answer
  alt Tool calls found
    RL->>T: execute(name, args)
    T-->>RL: result(s)
    RL->>LLM: chat(next prompt with Observation, stopWhen)
    LLM-->>AMP: next step(s)
    AMP->>RL: recurse until final or stopWhen
  else Final Answer
    RL-->>AMP: finalAnswer
  end
  AMP-->>VC: stream markdown chunks via handler
  VC-->>U: Render replies
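The "Parse Thought/Action/Final Answer" step in the diagram can be sketched as follows. This is an illustrative TypeScript sketch only: the field names, regexes, and return shape are assumptions, not the PR's actual runReActLoop implementation.

```typescript
// Classify one LLM step of a ReAct conversation: either a final answer,
// a tool action (with optional preceding thought), or neither.
type Parsed =
  | { kind: 'final'; answer: string }
  | { kind: 'action'; thought?: string; action: string; input: string }
  | { kind: 'none' }

function parseReActStep(text: string): Parsed {
  // A Final Answer ends the loop; capture everything after the marker.
  const final = text.match(/Final Answer:\s*([\s\S]+)/)
  if (final) return { kind: 'final', answer: final[1].trim() }
  // Otherwise look for a single Thought/Action/Action Input triple.
  const thought = text.match(/Thought:\s*(.+)/)?.[1]?.trim()
  const action = text.match(/Action:\s*(.+)/)?.[1]?.trim()
  const input = text.match(/Action Input:\s*(.+)/)?.[1]?.trim()
  if (action) return { kind: 'action', thought, action, input: input ?? '{}' }
  return { kind: 'none' }
}
```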

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

I twitch my ears at ReAct’s bright light,
Hop loops of Thought to Action’s height.
New roots, new routes, a streaming song,
Tools thump like paws, not skipping wrong.
With prompts aligned and bugs kept slight,
I nibble bits—then hop to night. 🐇✨



@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
packages/next-remoter/src/composable/useTinyRobotChat.ts (1)

34-46: Guard against empty message list to avoid runtime error.

Accessing lastMessage.role when messages is empty will throw.

-        let lastMessage = messages.value[messages.value.length - 1]
-
-        if (lastMessage.role !== 'assistant') {
+        let lastMessage = messages.value[messages.value.length - 1]
+        if (!lastMessage) {
+          lastMessage = { role: 'assistant', content: '', uiContent: [] as any[] }
+          messages.value.push(lastMessage)
+        }
+        if (lastMessage.role !== 'assistant') {
           const message = {
             role: 'assistant',
             content: '',
             uiContent: []
           }
           messages.value.push(message)
           lastMessage = message
         }
packages/next-sdk/agent/AgentModelProvider.ts (2)

95-107: Null/undefined tool entries cause reduce() to throw later.

_createMpcTools can push null (or undefined via optional chaining). tempMergeTools spreads curr unguarded.

Apply:

-          console.log('开始查询tool:', client)
-          const tool = client ? await client?.tools?.() : null
-          console.log('查询tool的结果:', tool, client)
-
-          return tool
+          const tool = client ? await client?.tools?.() : null
+          return tool ?? {}

And update the reducer (see next comment) and insert path.


145-154: Make tool merge null‑safe and robust.

Apply:

-  tempMergeTools(extraTool = {}) {
-    const toolsResult = this.mcpTools.reduce((acc, curr) => ({ ...acc, ...curr }), {})
+  tempMergeTools(extraTool = {}) {
+    const toolsResult = this.mcpTools
+      .filter(Boolean)
+      .reduce((acc, curr) => ({ ...acc, ...(curr || {}) }), {})
     Object.assign(toolsResult, extraTool)
🧹 Nitpick comments (14)
packages/next-remoter/src/composable/useTinyRobotChat.ts (2)

31-33: Replace commented log with dev-guarded debug logging.

Keeps diagnostics without noise in prod.

-        // console.log('onReceiveData=', data)
+        if (import.meta?.env?.DEV) {
+          console.debug('onReceiveData=', data)
+        }

55-64: Keep content and uiContent in sync on first markdown chunk.

When pushing a new markdown block, lastMessage.content isn’t updated until the delta path runs.

-          if (!markdownContent) {
-            lastMessage.uiContent.push(data)
+          if (!markdownContent) {
+            lastMessage.uiContent.push(data)
+            lastMessage.content += data.delta ?? ''
packages/next-remoter/src/components/tiny-robot-chat.vue (2)

126-129: Normalize agentRoot and join paths safely

Default looks fine, but consumers may pass a value without a trailing slash, breaking ${agentRoot}mcp?.... Recommend URL-based joining to be robust.

Outside this range, add a helper and replace usages:

// near props or utilities
const buildMcpUrl = (sid: string) => new URL(`mcp?sessionId=${sid}`, props.agentRoot).toString()

Then replace:

- url: `${props.agentRoot}mcp?sessionId=${sessionId}`
+ url: buildMcpUrl(sessionId)

and

- const mcpServer = { type: plugin.type, url: plugin.url }
+ const mcpServer = { type: plugin.type, url: plugin.url } // keep
// when you need agentRoot+sessionId, prefer buildMcpUrl(...)
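As a quick sanity check on the URL-joining suggestion: WHATWG URL resolution drops the last path segment when the base lacks a trailing slash, so agentRoot should still be normalized to end with '/'. A runnable sketch (the example.com bases are placeholders):

```javascript
// Resolution against a base ending in '/' appends the relative path.
const ok = new URL('mcp?sessionId=abc', 'https://ai.opentiny.design/').toString()
// Without a trailing slash, the final segment 'agent' is replaced, not extended.
const dropped = new URL('mcp?sessionId=abc', 'https://example.com/agent').toString()
// With the trailing slash, the segment is preserved.
const kept = new URL('mcp?sessionId=abc', 'https://example.com/agent/').toString()
```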

232-232: Pill label/action mismatch

The label “接收邮件” (receive email) and the action “帮我勾选中最贵的手机商品” (select the most expensive phone product for me) are inconsistent. Either restore the original text or update both to describe the same task.

- '接收邮件#帮我勾选中最贵的手机商品。',
+ '接收邮件#请同步邮箱的新邮件。',
packages/doc-ai/src/const.ts (1)

1-3: Typo in constant name: SSEION_ID → SESSION_ID (keep alias for BC)

Avoid propagating the typo; add a deprecation alias to prevent breaks.

-export const AGENT_ROOT = 'https://ai.opentiny.design/'
-
-export const SSEION_ID = 'f5d8e6f6-8f9a-4117-ad9c-b9f10b98068e'
+export const AGENT_ROOT = 'https://ai.opentiny.design/'
+
+// TODO: deprecate SSEION_ID in the next minor
+export const SESSION_ID = 'f5d8e6f6-8f9a-4117-ad9c-b9f10b98068e'
+export const SSEION_ID = SESSION_ID

Please verify downstream imports and update to SESSION_ID where feasible.

packages/next-remoter/src/composable/CustomAgentModelProvider.ts (2)

45-50: Avoid mixing provider-level llm with per-call model override

Choose one source of truth. If llm is configured, let the agent pick the model; or if you must override, document precedence.

-      model: builtInAI(),
+      // model: builtInAI(), // remove if llm is configured above
       abortSignal: request.options?.signal,
       handler

55-55: Gate verbose logging

Unconditional logging will spam consoles. Limit to dev and lower verbosity.

-      console.log(part, part.type)
+      if (import.meta?.env?.DEV) console.debug(part.type, part)
packages/next-sdk/agent/type.ts (1)

35-80: Tighten ReAct types for tool I/O

Consider JSON Schema typing and more flexible tool outputs.

-export type FunctionDescription = {
+export type FunctionDescription = {
   description?: string
   name: string
-  parameters: object // JSON Schema object
+  // Prefer a JSON Schema type here to enable validation/intellisense
+  parameters: Record<string, any> // or JSONSchema7
 }
 
 export type Tool = {
   type: 'function'
   function: FunctionDescription
 }
 
 export type FunctionCall = {
   name: string
   arguments: string
 }
 
 export type ToolCall = {
   id: string
   type: 'function'
   function: FunctionCall
 }
 
 export interface ToolDefinition {
   name: string;
   description: string;
-  parameters: Record<string, any>;
-  execute: (input: any) => Promise<string>;
+  parameters: Record<string, any>;
+  // Allow binary/JSON/text; upstream can stringify as needed
+  execute: (input: unknown) => Promise<unknown>;
 }
packages/next-sdk/agent/AgentModelProvider.ts (4)

164-166: Remove unnecessary await; only build system prompt for ReAct.

tempMergeTools isn’t async; computing the prompt for non‑ReAct models is wasted work.

Covered by the first diff (renames and gates computation).


136-139: Avoid pushing undefined tools; normalize to empty object.

Apply:

-      this.mcpClients.push(client)
-      this.mcpTools.push(await client?.tools?.())
+      this.mcpClients.push(client)
+      const t = (await client?.tools?.()) ?? {}
+      this.mcpTools.push(t)

112-116: Guard client.close() to avoid noisy exceptions.

Apply:

-          client.close()
+          client?.close?.()

98-101: Remove noisy logs or gate behind a debug flag.

Leaking client objects in logs is undesirable in a library.

-          console.log('开始查询tool:', client)
...
-          console.log('查询tool的结果:', tool, client)
packages/next-sdk/agent/react/index.ts (2)

24-26: Tighten Thought extraction regex.

Current replace only strips a single non-word at start or end.

-      thought = matches[0][1]?.replace(/^\W|$/, '')?.trim()
+      thought = matches[0][1]?.replace(/^\W+|\W+$/g, '')?.trim()
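A runnable comparison of the two patterns (the input string is a made-up example):

```javascript
const raw = '**Thought**'
// Non-global /^\W|$/ replaces only the first match: the single '*' at index 0.
const loose = raw.replace(/^\W|$/, '').trim()
// Global /^\W+|\W+$/g strips every leading and trailing non-word character.
const tight = raw.replace(/^\W+|\W+$/g, '').trim()
```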

114-120: Parse failures should not abort the loop. Add guard.

If a malformed JSON slips through, JSON.parse will throw and stop all tool calls.

-      const result = await tool.execute(JSON.parse(toolCall.function.arguments))
+      let args: any = {}
+      try { args = JSON.parse(toolCall.function.arguments) } catch {}
+      const result = await tool.execute(args)
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 259d873 and f3582ca.

📒 Files selected for processing (12)
  • packages/doc-ai/package.json (1 hunks)
  • packages/doc-ai/src/const.ts (1 hunks)
  • packages/doc-ai/src/views/comprehensive/index.vue (1 hunks)
  • packages/next-remoter/package.json (1 hunks)
  • packages/next-remoter/src/components/tiny-robot-chat.vue (2 hunks)
  • packages/next-remoter/src/composable/CustomAgentModelProvider.ts (3 hunks)
  • packages/next-remoter/src/composable/useTinyRobotChat.ts (1 hunks)
  • packages/next-sdk/agent/AgentModelProvider.ts (3 hunks)
  • packages/next-sdk/agent/react/index.ts (1 hunks)
  • packages/next-sdk/agent/react/systemPrompt.ts (1 hunks)
  • packages/next-sdk/agent/type.ts (1 hunks)
  • packages/next-sdk/package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
packages/next-sdk/agent/AgentModelProvider.ts (2)
packages/next-sdk/agent/type.ts (1)
  • IAgentModelProviderOption (20-32)
packages/next-sdk/agent/react/index.ts (2)
  • getSystemPromptMessages (8-13)
  • runReActLoop (94-149)
packages/next-sdk/agent/react/index.ts (2)
packages/next-sdk/agent/type.ts (2)
  • Tool (41-44)
  • ToolCall (51-55)
packages/next-sdk/agent/react/systemPrompt.ts (3)
  • PREFIX (1-1)
  • FORMAT_INSTRUCTIONS (3-25)
  • SUFFIX (27-33)
🔇 Additional comments (5)
packages/doc-ai/package.json (1)

33-33: Compatibility verified for JSX plugin
@vitejs/plugin-vue-jsx@5.1.1 supports vite ^5.0.0 || ^6.0.0 || ^7.0.0 and vue ^3.0.0; plugin-vue@5.2.2 supports vite ^5.0.0 || ^6.0.0 and vue ^3.2.25; vite@6.3.1 meets these ranges. No changes needed.

packages/next-sdk/package.json (1)

36-36: Verify ai@^5.0.28 compatibility with @ai-sdk/* v2 in browser bundles
Confirm clean dedupe to v2 providers and ensure no Node-only modules (fs, path, crypto) end up in CDN/browser builds.

packages/next-remoter/package.json (1)

28-28: Ensure @built-in-ai/core ships a native ESM build for efficient tree-shaking.
It currently publishes only CommonJS (module.exports entrypoint), with both main and exports.import pointing to dist/index.js, sideEffects: false, ~132 KB uncompressed (~4.3 KB gzipped), and no Node builtins. Without a true ESM output, bundlers may not tree-shake as expected—request or bundle an ESM variant upstream or vendor it. Also evaluate using a peerDependency if host apps provide the implementation.

Align Vue versions across the workspace (doc-ai ^3.5.x vs remoter ^3.3.x) to avoid duplicate Vue.
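For reference, the kind of dual-format manifest the comment asks upstream to ship would look roughly like this. This is an illustrative sketch, not @built-in-ai/core's actual package.json:

```json
{
  "name": "@built-in-ai/core",
  "sideEffects": false,
  "main": "./dist/index.cjs",
  "module": "./dist/index.mjs",
  "types": "./dist/index.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.mjs",
      "require": "./dist/index.cjs"
    }
  }
}
```

With exports.import pointing at a genuine ESM build, bundlers can tree-shake; today both main and exports.import reportedly resolve to the same CommonJS dist/index.js.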

packages/next-sdk/agent/react/systemPrompt.ts (1)

1-36: Prompt fragments look consistent and useful

Structure and instructions align with a single-action ReAct loop. LGTM.

packages/next-sdk/agent/type.ts (1)

30-32: ReAct toggle is clear

isReActModel?: boolean is a sensible switch. LGTM.

Comment on packages/doc-ai/src/views/comprehensive/index.vue (around line 104)

}
},
async ({ city }: { city: string }) => {
debugger

⚠️ Potential issue

Remove or gate the debugger statement.

Avoid shipping breakpoints in production.

-    debugger
+    if (import.meta?.env?.DEV) debugger
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
debugger
if (import.meta?.env?.DEV) debugger
🤖 Prompt for AI Agents
In packages/doc-ai/src/views/comprehensive/index.vue at line 104 there is a
lingering "debugger" statement; remove this breakpoint or wrap it in a
development-only guard (e.g., only invoke when process.env.NODE_ENV ===
'development' or import.meta.env.DEV) so it cannot trigger in production builds;
update and test to ensure no debugger calls remain in the compiled output.

Comment on lines +18 to 21
llm: builtInAI,
isReActModel: true,
mcpServers: [] as McpServerConfig[]
}

💡 Verification agent

🧩 Analysis chain

llm should be a Provider instance, not a factory

IAgentModelProviderOption.llm expects ProviderV2. If builtInAI is a factory, call it here; otherwise pass the proper instance type.

-      llm: builtInAI,
+      llm: builtInAI(), // if builtInAI returns a ProviderV2; otherwise adjust accordingly
       isReActModel: true,

If llm is set, prefer not to also override model per-call (see below) to avoid ambiguity.


🏁 Script executed:

# Find occurrences of builtInAI to see import and usage context
rg -n "builtInAI" -C3

# Locate definition/export of builtInAI to determine if it's a factory or instance
rg -n "export .*builtInAI" -C3

# Inspect IAgentModelProviderOption.llm type to confirm expected type
rg -n "interface IAgentModelProviderOption" -C3 packages/next-remoter/src

Length of output: 2393


Call the factory to instantiate the provider for llm and remove the redundant per-call override

IAgentModelProviderOption.llm expects a ProviderV2, but builtInAI is a factory function that returns a BuiltInAIChatLanguageModel instance. Change:

- llm: builtInAI,
+ llm: builtInAI(), // instantiate provider

Then drop the model: builtInAI() override in chatStream so it uses the configured llm:

-      model: builtInAI(),
🤖 Prompt for AI Agents
In packages/next-remoter/src/composable/CustomAgentModelProvider.ts around lines
18–21, the llm field is currently assigned the factory function builtInAI but
IAgentModelProviderOption.llm expects a ProviderV2 instance; call the factory
(llm: builtInAI()) to instantiate the provider when configuring the options and
then remove the redundant per-call override model: builtInAI() inside chatStream
so chatStream uses the configured llm; if TypeScript complains, add the minimal
cast or adjust types to ensure the result matches ProviderV2.

Comment on lines 52 to 56
// 标识每一个markdown块
let textId = 1
for await (const part of result.fullStream) {
// console.log(part, part.type)
console.log(part, part.type)


🛠️ Refactor suggestion

Always signal completion and surface errors

Ensure onDone is called even on exceptions/aborts; surface errors to the handler.

-    // 标识每一个markdown块
-    let textId = 1
-    for await (const part of result.fullStream) {
-      if (import.meta?.env?.DEV) console.debug(part, part.type)
+    // 标识每一个markdown块
+    let textId = 1
+    try {
+      for await (const part of result.fullStream) {
+        if (import.meta?.env?.DEV) console.debug(part.type, part)
         ...
-      }
-    }
-
-    handler.onDone()
+      }
+    } catch (err) {
+      handler.onError?.(err as Error)
+      throw err
+    } finally {
+      handler.onDone()
+    }

Replace ... with the unchanged body.

Also applies to: 110-113

🤖 Prompt for AI Agents
In packages/next-remoter/src/composable/CustomAgentModelProvider.ts around lines
52-56 (and similarly 110-113), the stream processing currently logs parts but
does not guarantee calling the onDone handler or surfacing errors to the handler
on exceptions/aborts; wrap the stream iteration in try/catch/finally so that any
thrown error (including AbortError) is passed to the onError/onAbort handler and
onDone is always invoked in the finally block, and ensure you only call onDone
once (e.g., guard with a flag) and propagate the original error to the handler
before rethrowing or returning.

Comment on lines +164 to 172
const tools = (await this.tempMergeTools(options.tools)) as ToolSet
const systemPrompt = await getSystemPromptMessages(tools)
const llm = this.llm(model)
return chatMethod({
// @ts-ignore ProviderV2 是所有llm的父类, 在每一个具体的llm 类都有一个选择model的函数用法
model: this.llm(model),
tools: this.tempMergeTools(options.tools) as ToolSet,
model: llm,
system: systemPrompt,
tools: this.isReActModel ? (tools as ToolSet) : undefined,
stopWhen: stepCountIs(maxSteps),

🛠️ Refactor suggestion

Fix tools/system merging and override order; gate ReAct-only work.

  • You always compute tools/system even when not in ReAct mode.
  • Properties you set (system/tools/stopWhen/onStepFinish) are later overridden by ...options because the spread comes last.
  • Compose user onStepFinish if provided.

Apply:

-    const tools = (await this.tempMergeTools(options.tools)) as ToolSet
-    const systemPrompt = await getSystemPromptMessages(tools)
-    const llm = this.llm(model)
-    return chatMethod({
-      // @ts-ignore  ProviderV2 是所有llm的父类, 在每一个具体的llm 类都有一个选择model的函数用法
-      model: llm,
-      system: systemPrompt,
-      tools: this.isReActModel ? (tools as ToolSet) : undefined,
-      stopWhen: stepCountIs(maxSteps),
-      onStepFinish: async (step) => {
-        if (this.isReActModel) {
-          await runReActLoop({
-            step,
-            tools,
-            vm: this,
-            chatMethod,
-            llm,
-            options,
-            system: systemPrompt
-          })
-        }
-      },
-      ...options
-    })
+    const mergedTools = this.tempMergeTools(options.tools) as ToolSet
+    const systemPrompt = this.isReActModel ? await getSystemPromptMessages(mergedTools) : undefined
+    const llm = this.llm(model)
+    return chatMethod({
+      ...options,
+      // @ts-ignore
+      model: llm,
+      system: systemPrompt ?? (options as any).system,
+      tools: this.isReActModel ? mergedTools : undefined,
+      stopWhen: stepCountIs(maxSteps),
+      onStepFinish: async (step) => {
+        await (options as any)?.onStepFinish?.(step)
+        if (this.isReActModel && maxSteps > 0) {
+          await runReActLoop({
+            step,
+            tools: mergedTools,
+            vm: this,
+            chatMethod,
+            llm,
+            options,
+            system: systemPrompt!,
+            remainingSteps: maxSteps
+          })
+        }
+      }
+    })

Also applies to: 186-186

Comment on lines +173 to +184
onStepFinish: async (step) => {
if (this.isReActModel) {
await runReActLoop({
step,
tools,
vm: this,
chatMethod,
llm,
options,
system: systemPrompt
})
}

⚠️ Potential issue

runReActLoop currently has no effective step budget; risk of unbounded recursion.

options here doesn’t contain maxSteps (it was destructured out). Pass an explicit counter and decrement in the loop implementation.

See the diff in the previous comment (adds remainingSteps: maxSteps). Pair it with the react/index.ts changes that consume remainingSteps.

🤖 Prompt for AI Agents
In packages/next-sdk/agent/AgentModelProvider.ts around lines 173 to 184, the
runReActLoop call is invoked without a remaining step budget because maxSteps
was previously destructured out of options; update the call to pass an explicit
remainingSteps property (e.g., remainingSteps: maxSteps or remainingSteps:
options.maxSteps if maxSteps is out of scope) so the ReAct loop can track and
decrement steps, and ensure the react loop implementation consumes and
decrements remainingSteps on each iteration.

Comment on lines +1 to +3
import { Tool, ToolCall } from '../type'
import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
import { generateText, stepCountIs } from 'ai'

⚠️ Potential issue

Import ToolSet and fix types to avoid compile errors.

ToolSet is referenced but not imported; also getSystemPromptMessages expects an array but callers pass a map.

Apply:

-import { Tool, ToolCall } from '../type'
-import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
-import { generateText, stepCountIs } from 'ai'
+import { Tool, ToolCall } from '../type'
+import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
+import { stepCountIs } from 'ai'
+import type { ToolSet } from 'ai'

And:

-export const getSystemPromptMessages = async (tools: Tool[]): Promise<string> => {
+export const getSystemPromptMessages = async (tools: ToolSet): Promise<string> => {

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 1 to 3, add the missing
import for ToolSet from '../type' and fix the type mismatch for
getSystemPromptMessages: either change its parameter type to accept a
ToolSet/Map (and update its implementation to iterate Map.entries() or
values()), or convert incoming Map callers to pass Array.from(toolSet.values())
so the function keeps an array signature; ensure all references and type
annotations are updated so callers and the function agree (import ToolSet,
update function signature or callers accordingly) to eliminate the compile
errors.

Comment on lines +65 to +71
id: `call_${Math.random().toString(36).slice(2)}`,
type: 'function',
function: {
name: action,
arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
}
})

⚠️ Potential issue

Always serialize arguments as valid JSON.

JSON.parse(toolCall.function.arguments) will throw when action_input is a plain string (not quoted). Serialize uniformly.

-              arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
+              arguments: JSON.stringify(action_input ?? {})
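The failure mode is easy to reproduce (runnable sketch; 'Beijing' is a made-up input):

```javascript
// A bare string is not valid JSON, so the string branch above would
// produce arguments that JSON.parse rejects later in the loop.
let threw = false
try {
  JSON.parse('Beijing')
} catch {
  threw = true // SyntaxError: unquoted text is not JSON
}
// Uniform JSON.stringify round-trips strings and objects alike.
const str = JSON.parse(JSON.stringify('Beijing'))
const obj = JSON.parse(JSON.stringify({ city: 'Beijing' }))
```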
📝 Committable suggestion


Suggested change
id: `call_${Math.random().toString(36).slice(2)}`,
type: 'function',
function: {
name: action,
arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
}
})
id: `call_${Math.random().toString(36).slice(2)}`,
type: 'function',
function: {
name: action,
arguments: JSON.stringify(action_input ?? {})
}
})
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 65 to 71, the code
currently conditionally leaves plain strings unquoted so JSON.parse will throw;
always serialize arguments as valid JSON by replacing the conditional with a
single JSON.stringify call (e.g. JSON.stringify(action_input ?? {})) so strings,
objects, null/undefined all produce valid JSON; ensure the resulting value is
assigned to toolCall.function.arguments.

Comment on lines +94 to +110
export const runReActLoop = async ({
step,
tools,
vm,
chatMethod,
options,
system,
llm
}: {
step: any
tools: Tool[]
vm: any
chatMethod: any
options: any
system: string
llm: any
}) => {

⚠️ Potential issue

Type and budget fixes for ReAct loop; prevent runaway recursion.

  • tools is used as a map but typed as array.
  • No effective global step budget; nested calls reset stepCountIs.

Apply:

-export const runReActLoop = async ({
+export const runReActLoop = async ({
   step,
   tools,
   vm,
   chatMethod,
   options,
   system,
-  llm
+  llm,
+  remainingSteps,
 }: {
   step: any
-  tools: Tool[]
+  tools: ToolSet
   vm: any
   chatMethod: any
   options: any
   system: string
-  llm: any
+  llm: any
+  remainingSteps: number
 }) => {
+  if (!remainingSteps || remainingSteps <= 0) return

And in the call below (lines 132-136):

-      stopWhen: stepCountIs(options.maxSteps),
+      stopWhen: stepCountIs(remainingSteps),
       onStepFinish: async (stepCopy) => {
-        await runReActLoop({ step: stepCopy, tools, vm, chatMethod, options, system: system, llm: llm })
+        await runReActLoop({
+          step: stepCopy,
+          tools,
+          vm,
+          chatMethod,
+          options,
+          system,
+          llm,
+          remainingSteps: remainingSteps - 1
+        })
       },

Pair with the AgentModelProvider.ts change that passes remainingSteps.

📝 Committable suggestion


Suggested change
export const runReActLoop = async ({
step,
tools,
vm,
chatMethod,
options,
system,
llm
}: {
step: any
tools: Tool[]
vm: any
chatMethod: any
options: any
system: string
llm: any
}) => {
// --- packages/next-sdk/agent/react/index.ts ---
// Updated runReActLoop signature and step budget check
export const runReActLoop = async ({
step,
tools,
vm,
chatMethod,
options,
system,
llm,
remainingSteps,
}: {
step: any
tools: ToolSet
vm: any
chatMethod: any
options: any
system: string
llm: any
remainingSteps: number
}) => {
if (!remainingSteps || remainingSteps <= 0) return
// …rest of implementation unchanged
}
// …later in the same file, around lines 132–136…
// Before: stopWhen uses the static maxSteps and recursive calls forget to decrement
// After: use remainingSteps for stopping, and decrement on each nested invocation
stopWhen: stepCountIs(remainingSteps),
onStepFinish: async (stepCopy) => {
await runReActLoop({
step: stepCopy,
tools,
vm,
chatMethod,
options,
system,
llm,
remainingSteps: remainingSteps - 1
})
},

Comment on lines +122 to +127
if (toolCallsResult.length > 0) {
  const lastmessage = step.content[0]
  lastmessage.text =
    lastmessage.text +
    `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`


🛠️ Refactor suggestion

Make Observation formatting resilient to tool result shapes.

The current code assumes every result exposes content[*].text; fall back gracefully when a result has a different shape.

-      `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`
+      `\n Observation: ${
+        toolCallsResult.map((r: any) => {
+          if (Array.isArray(r?.content)) {
+            return r.content.map((c: any) => c?.text ?? '').filter(Boolean).join('\n')
+          }
+          if (typeof r?.text === 'string') return r.text
+          try { return JSON.stringify(r) } catch { return String(r) }
+        }).join('\n')
+      }`
📝 Committable suggestion


Suggested change
if (toolCallsResult.length > 0) {
  const lastmessage = step.content[0]
  lastmessage.text =
    lastmessage.text +
    `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`
if (toolCallsResult.length > 0) {
  const lastmessage = step.content[0]
  lastmessage.text =
    lastmessage.text +
    `\n Observation: ${
      toolCallsResult.map((r: any) => {
        if (Array.isArray(r?.content)) {
          return r.content
            .map((c: any) => c?.text ?? '')
            .filter(Boolean)
            .join('\n')
        }
        if (typeof r?.text === 'string') {
          return r.text
        }
        try {
          return JSON.stringify(r)
        } catch {
          return String(r)
        }
      }).join('\n')
    }`
}
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 122 to 127, the
Observation concatenation assumes each toolCallsResult item has content arrays
with objects that contain .text; make it resilient by defensively accessing
nested fields (use optional chaining and Array.isArray checks), fallback to
other representations when .text is missing (e.g., if content item is string use
it, if it's an object without text use JSON.stringify or String(item)), and join
entries safely (skip null/undefined). Update the mapping to normalize each tool
result into a string before joining so Observation always contains a sensible
textual fallback.
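The normalization the prompt describes can be factored into a small standalone helper. A hedged sketch — `toolResultToText` and `formatObservation` are illustrative names, not existing SDK functions:

```typescript
// Hypothetical helper: coerce a tool-call result of unknown shape into text.
function toolResultToText(r: any): string {
  if (Array.isArray(r?.content)) {
    // Preferred shape: { content: [{ text: '…' }, …] }
    return r.content.map((c: any) => c?.text ?? '').filter(Boolean).join('\n')
  }
  if (typeof r?.text === 'string') return r.text // flat { text } shape
  try {
    return JSON.stringify(r) // last resort: serialize the whole result
  } catch {
    return String(r) // e.g. circular structures
  }
}

function formatObservation(results: any[]): string {
  return `\n Observation: ${results.map(toolResultToText).join('\n')}`
}
```

Keeping the fallback chain in one function makes the Observation line testable without running a model step.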

Comment on lines +139 to +144
for await (const part of result.fullStream) {
  part.text && options.handler.onData({
    type: 'markdown',
    delta: part.text,
  })
}

⚠️ Potential issue

Handle both streaming and non-streaming results; guard handler.

fullStream may not exist (e.g., when generateText is used), and options.handler can be undefined.

-    for await (const part of result.fullStream) {
-      part.text &&  options.handler.onData({
-        type: 'markdown',
-        delta: part.text,
-      })
-    }
+    if (result?.fullStream) {
+      for await (const part of result.fullStream) {
+        if (part?.text) {
+          options?.handler?.onData?.({ type: 'markdown', delta: part.text })
+        }
+      }
+    } else if (result?.text) {
+      options?.handler?.onData?.({ type: 'markdown', delta: result.text })
+    }
📝 Committable suggestion


Suggested change
for await (const part of result.fullStream) {
  part.text && options.handler.onData({
    type: 'markdown',
    delta: part.text,
  })
}
if (result?.fullStream) {
  for await (const part of result.fullStream) {
    if (part?.text) {
      options?.handler?.onData?.({ type: 'markdown', delta: part.text })
    }
  }
} else if (result?.text) {
  options?.handler?.onData?.({ type: 'markdown', delta: result.text })
}
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 139–144, the loop assumes
result.fullStream and options.handler.onData always exist; update the code to
first check that options.handler and typeof options.handler.onData ===
'function' before sending events, then if result.fullStream exists iterate and
emit each part.text as before; otherwise handle the non-streaming case by
emitting a single onData call with the available text (e.g., result.text or
result.output/content) wrapped as a markdown delta. Ensure null/undefined guards
so no runtime errors occur when handler or fullStream are absent.
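The guarded emission logic can be isolated for testing. A sketch under the assumption that `result` is either a streaming object whose `fullStream` is an async iterable of parts, or a plain `{ text }` result — `emitResult` and `Handler` are illustrative names, not part of the PR:

```typescript
// Hypothetical sketch: emit markdown deltas for streaming and non-streaming
// results alike, guarding both the stream and the (possibly absent) handler.
type Handler = { onData?: (e: { type: 'markdown'; delta: string }) => void }

async function emitResult(result: any, handler?: Handler): Promise<string[]> {
  const emitted: string[] = []
  const send = (delta: string) => {
    emitted.push(delta)
    handler?.onData?.({ type: 'markdown', delta }) // no-op when handler is missing
  }
  if (result?.fullStream) {
    for await (const part of result.fullStream) {
      if (part?.text) send(part.text) // skip non-text parts (tool calls, etc.)
    }
  } else if (typeof result?.text === 'string') {
    send(result.text) // non-streaming fallback
  }
  return emitted
}
```

Returning the emitted deltas makes the guard behavior assertable without a UI handler attached.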
