feat: 支持浏览器内置大模型 (feat: support the browser's built-in LLM) #73
base: dev
**Important**: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI or the … You can disable this status message by setting the …

**Walkthrough**

Updates URLs and IDs in doc-ai, tweaks a Vue view, and adjusts dependencies. Next-remoter switches the default agent root, updates prompts, adopts a built-in AI provider, and silences a log. Next-sdk adds ReAct model support with new types, system prompts, and a ReAct loop, and bumps an ai library version.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
  autonumber
  actor U as User
  participant VC as View/Component
  participant AMP as AgentModelProvider (isReActModel=true)
  participant LLM as LLM (builtInAI)
  participant RL as runReActLoop
  participant T as Tools
  U->>VC: Send message
  VC->>AMP: chatStream({ prompt, tools, handler })
  AMP->>AMP: Merge tools & build system prompt
  AMP->>LLM: chat({ system, model: llm, tools, onStepFinish })
  LLM-->>AMP: Stream step(s)
  AMP->>RL: onStepFinish(step, { tools, system, llm, handler })
  RL->>RL: Parse Thought/Action/Final Answer
  alt Tool calls found
    RL->>T: execute(name, args)
    T-->>RL: result(s)
    RL->>LLM: chat(next prompt with Observation, stopWhen)
    LLM-->>AMP: next step(s)
    AMP->>RL: recurse until final or stopWhen
  else Final Answer
    RL-->>AMP: finalAnswer
  end
  AMP-->>VC: stream markdown chunks via handler
  VC-->>U: Render replies
```
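The loop in the diagram hinges on parsing the model's ReAct-formatted text into either a tool action or a final answer. The following is a minimal standalone sketch of that parsing step; the helper and type names are hypothetical (the actual parser lives in packages/next-sdk/agent/react/index.ts and differs in detail):

```typescript
// Sketch of ReAct reply parsing. A "Final Answer:" line ends the loop;
// otherwise we look for a single Action / Action Input pair.
type ParsedStep =
  | { kind: 'action'; action: string; actionInput: string }
  | { kind: 'final'; answer: string }
  | { kind: 'none' }

function parseReActStep(text: string): ParsedStep {
  const final = text.match(/Final Answer:\s*([\s\S]*)/)
  if (final) return { kind: 'final', answer: final[1].trim() }

  // '.' does not match newlines, so each match stays on its own line.
  const action = text.match(/Action:\s*(.+)/)
  const input = text.match(/Action Input:\s*(.+)/)
  if (action) {
    return {
      kind: 'action',
      action: action[1].trim(),
      actionInput: input ? input[1].trim() : '{}'
    }
  }
  return { kind: 'none' }
}
```

Each parsed action feeds a tool execution whose result is appended as an `Observation:` line before the next model call.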
**Estimated code review effort**: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
packages/next-remoter/src/composable/useTinyRobotChat.ts (1)
34-46: Guard against an empty message list to avoid a runtime error. Accessing `lastMessage.role` when `messages` is empty will throw.
```diff
-let lastMessage = messages.value[messages.value.length - 1]
-
-if (lastMessage.role !== 'assistant') {
+let lastMessage = messages.value[messages.value.length - 1]
+if (!lastMessage) {
+  lastMessage = { role: 'assistant', content: '', uiContent: [] as any[] }
+  messages.value.push(lastMessage)
+}
+if (lastMessage.role !== 'assistant') {
   const message = { role: 'assistant', content: '', uiContent: [] }
   messages.value.push(message)
   lastMessage = message
 }
```

packages/next-sdk/agent/AgentModelProvider.ts (2)
95-107: Null/undefined tool entries cause reduce() to throw later.
`_createMpcTools` can push `null` (or `undefined` via optional chaining). `tempMergeTools` spreads `curr` unguarded. Apply:
```diff
- console.log('开始查询tool:', client)
- const tool = client ? await client?.tools?.() : null
- console.log('查询tool的结果:', tool, client)
-
- return tool
+ const tool = client ? await client?.tools?.() : null
+ return tool ?? {}
```

And update the reducer (see next comment) and insert path.
145-154: Make the tool merge null-safe and robust. Apply:
```diff
- tempMergeTools(extraTool = {}) {
-   const toolsResult = this.mcpTools.reduce((acc, curr) => ({ ...acc, ...curr }), {})
+ tempMergeTools(extraTool = {}) {
+   const toolsResult = this.mcpTools
+     .filter(Boolean)
+     .reduce((acc, curr) => ({ ...acc, ...(curr || {}) }), {})
    Object.assign(toolsResult, extraTool)
```
🧹 Nitpick comments (14)
packages/next-remoter/src/composable/useTinyRobotChat.ts (2)
31-33: Replace the commented-out log with dev-guarded debug logging. This keeps diagnostics without noise in prod.
```diff
- // console.log('onReceiveData=', data)
+ if (import.meta?.env?.DEV) {
+   console.debug('onReceiveData=', data)
+ }
```
55-64: Keep `content` and `uiContent` in sync on the first markdown chunk. When pushing a new markdown block, `lastMessage.content` isn't updated until the delta path runs.
```diff
- if (!markdownContent) {
-   lastMessage.uiContent.push(data)
+ if (!markdownContent) {
+   lastMessage.uiContent.push(data)
+   lastMessage.content += data.delta ?? ''
```

packages/next-remoter/src/components/tiny-robot-chat.vue (2)
126-129: Normalize agentRoot and join paths safely. The default looks fine, but consumers may pass a value without a trailing slash, breaking `${agentRoot}mcp?...`. Recommend URL-based joining to be robust. Outside this range, add a helper and replace usages:
```ts
// near props or utilities
const buildMcpUrl = (sid: string) => new URL(`mcp?sessionId=${sid}`, props.agentRoot).toString()
```

Then replace:
```diff
- url: `${props.agentRoot}mcp?sessionId=${sessionId}`
+ url: buildMcpUrl(sessionId)
```

and
```diff
- const mcpServer = { type: plugin.type, url: plugin.url }
+ const mcpServer = { type: plugin.type, url: plugin.url } // keep
+ // when you need agentRoot+sessionId, prefer buildMcpUrl(...)
```
232-232: Pill label/action mismatch. The label "接收邮件" (receive email) and the action "帮我勾选中最贵的手机商品" (select the most expensive phone product for me) are semantically inconsistent. Suggest reverting, or updating both to describe the same task.
```diff
- '接收邮件#帮我勾选中最贵的手机商品。',
+ '接收邮件#请同步邮箱的新邮件。',
```

packages/doc-ai/src/const.ts (1)
1-3: Typo in constant name: SSEION_ID → SESSION_ID (keep an alias for backward compatibility). Avoid propagating the typo; add a deprecation alias to prevent breaks.
```diff
-export const AGENT_ROOT = 'https://ai.opentiny.design/'
-
-export const SSEION_ID = 'f5d8e6f6-8f9a-4117-ad9c-b9f10b98068e'
+export const AGENT_ROOT = 'https://ai.opentiny.design/'
+
+// TODO: deprecate SSEION_ID in the next minor
+export const SESSION_ID = 'f5d8e6f6-8f9a-4117-ad9c-b9f10b98068e'
+export const SSEION_ID = SESSION_ID
```

Please verify downstream imports and update to SESSION_ID where feasible.
packages/next-remoter/src/composable/CustomAgentModelProvider.ts (2)
45-50: Avoid mixing a provider-level `llm` with a per-call `model` override. Choose one source of truth: if `llm` is configured, let the agent pick the model; if you must override, document the precedence.

```diff
- model: builtInAI(),
+ // model: builtInAI(), // remove if llm is configured above
  abortSignal: request.options?.signal,
  handler
```
55-55: Gate verbose logging. Unconditional logging will spam consoles; limit it to dev and lower the verbosity.
```diff
- console.log(part, part.type)
+ if (import.meta?.env?.DEV) console.debug(part.type, part)
```

packages/next-sdk/agent/type.ts (1)
35-80: Tighten the ReAct types for tool I/O. Consider JSON Schema typing and more flexible tool outputs.
```diff
-export type FunctionDescription = {
+export type FunctionDescription = {
   description?: string
   name: string
-  parameters: object // JSON Schema object
+  // Prefer a JSON Schema type here to enable validation/intellisense
+  parameters: Record<string, any> // or JSONSchema7
 }

 export type Tool = {
   type: 'function'
   function: FunctionDescription
 }

 export type FunctionCall = {
   name: string
   arguments: string
 }

 export type ToolCall = {
   id: string
   type: 'function'
   function: FunctionCall
 }

 export interface ToolDefinition {
   name: string;
   description: string;
-  parameters: Record<string, any>;
-  execute: (input: any) => Promise<string>;
+  parameters: Record<string, any>;
+  // Allow binary/JSON/text; upstream can stringify as needed
+  execute: (input: unknown) => Promise<unknown>;
 }
```

packages/next-sdk/agent/AgentModelProvider.ts (4)
164-166: Remove unnecessary await; only build system prompt for ReAct.
`tempMergeTools` isn't async; computing the prompt for non-ReAct models is wasted work. Covered by the first diff (renames and gates computation).
136-139: Avoid pushing undefined tools; normalize to an empty object. Apply:
```diff
- this.mcpClients.push(client)
- this.mcpTools.push(await client?.tools?.())
+ this.mcpClients.push(client)
+ const t = (await client?.tools?.()) ?? {}
+ this.mcpTools.push(t)
```
112-116: Guard `client.close()` to avoid noisy exceptions. Apply:
```diff
- client.close()
+ client?.close?.()
```
98-101: Remove the noisy logs or gate them behind a debug flag. Leaking client objects in logs is undesirable in a library.
```diff
- console.log('开始查询tool:', client)
  ...
- console.log('查询tool的结果:', tool, client)
```

packages/next-sdk/agent/react/index.ts (2)
24-26: Tighten the Thought extraction regex. The current replace only strips a single non-word character at the start, or nothing at the end.
```diff
- thought = matches[0][1]?.replace(/^\W|$/, '')?.trim()
+ thought = matches[0][1]?.replace(/^\W+|\W+$/g, '')?.trim()
```
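For illustration, a standalone comparison of the two patterns on a sample input (hypothetical snippet, not the SDK's code):

```typescript
// /^\W|$/ (no g flag) replaces only the FIRST match: one leading non-word
// character, or, failing that, the empty end-of-string anchor.
// /^\W+|\W+$/g strips the full leading AND trailing non-word runs.
const raw = '**: plan the next step!! '

const single = raw.replace(/^\W|$/, '').trim()   // only one '*' removed
const robust = raw.replace(/^\W+|\W+$/g, '').trim() // clean on both ends

console.log(single)
console.log(robust)
```

The `g` flag plus `+` quantifiers are what make the second pattern strip whole punctuation runs on both sides.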
114-120: Parse failures should not abort the loop; add a guard. If malformed JSON slips through, `JSON.parse` will throw and stop all tool calls.

```diff
- const result = await tool.execute(JSON.parse(toolCall.function.arguments))
+ let args: any = {}
+ try { args = JSON.parse(toolCall.function.arguments) } catch {}
+ const result = await tool.execute(args)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- packages/doc-ai/package.json (1 hunks)
- packages/doc-ai/src/const.ts (1 hunks)
- packages/doc-ai/src/views/comprehensive/index.vue (1 hunks)
- packages/next-remoter/package.json (1 hunks)
- packages/next-remoter/src/components/tiny-robot-chat.vue (2 hunks)
- packages/next-remoter/src/composable/CustomAgentModelProvider.ts (3 hunks)
- packages/next-remoter/src/composable/useTinyRobotChat.ts (1 hunks)
- packages/next-sdk/agent/AgentModelProvider.ts (3 hunks)
- packages/next-sdk/agent/react/index.ts (1 hunks)
- packages/next-sdk/agent/react/systemPrompt.ts (1 hunks)
- packages/next-sdk/agent/type.ts (1 hunks)
- packages/next-sdk/package.json (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
packages/next-sdk/agent/AgentModelProvider.ts (2)
- packages/next-sdk/agent/type.ts (1)
  - IAgentModelProviderOption (20-32)
- packages/next-sdk/agent/react/index.ts (2)
  - getSystemPromptMessages (8-13)
  - runReActLoop (94-149)

packages/next-sdk/agent/react/index.ts (2)
- packages/next-sdk/agent/type.ts (2)
  - Tool (41-44)
  - ToolCall (51-55)
- packages/next-sdk/agent/react/systemPrompt.ts (3)
  - PREFIX (1-1)
  - FORMAT_INSTRUCTIONS (3-25)
  - SUFFIX (27-33)
🔇 Additional comments (5)
packages/doc-ai/package.json (1)
33-33: Compatibility verified for JSX plugin
`@vitejs/plugin-vue-jsx@5.1.1` supports vite ^5.0.0 || ^6.0.0 || ^7.0.0 and vue ^3.0.0; `plugin-vue@5.2.2` supports vite ^5.0.0 || ^6.0.0 and vue ^3.2.25; `vite@6.3.1` meets these ranges. No changes needed.

packages/next-sdk/package.json (1)

36-36: Verify `ai@^5.0.28` compatibility with `@ai-sdk/*` v2 in browser bundles. Confirm clean dedupe to v2 providers and ensure no Node-only modules (fs, path, crypto) end up in CDN/browser builds.

packages/next-remoter/package.json (1)
28-28: Ensure @built-in-ai/core ships a native ESM build for efficient tree-shaking.
It currently publishes only CommonJS (a `module.exports` entrypoint), with both `main` and `exports.import` pointing to `dist/index.js`, `sideEffects: false`, ~132 KB uncompressed (~4.3 KB gzipped), and no Node builtins. Without a true ESM output, bundlers may not tree-shake as expected; request or bundle an ESM variant upstream, or vendor it. Also evaluate a peerDependency if host apps provide the implementation. Align Vue versions across the workspace (`doc-ai` ^3.5.x vs `remoter` ^3.3.x) to avoid a duplicate Vue.

packages/next-sdk/agent/react/systemPrompt.ts (1)
1-36: Prompt fragments look consistent and useful. Structure and instructions align with a single-action ReAct loop. LGTM.
packages/next-sdk/agent/type.ts (1)
30-32: ReAct toggle is clear
`isReActModel?: boolean` is a sensible switch. LGTM.
```ts
  }
},
async ({ city }: { city: string }) => {
  debugger
```
Remove or gate the debugger statement.
Avoid shipping breakpoints in production.
```diff
- debugger
+ if (import.meta?.env?.DEV) debugger
```

📝 Committable suggestion

> ‼️ **IMPORTANT**: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```diff
-debugger
+if (import.meta?.env?.DEV) debugger
```
🤖 Prompt for AI Agents
In packages/doc-ai/src/views/comprehensive/index.vue at line 104 there is a
lingering "debugger" statement; remove this breakpoint or wrap it in a
development-only guard (e.g., only invoke when process.env.NODE_ENV ===
'development' or import.meta.env.DEV) so it cannot trigger in production builds;
update and test to ensure no debugger calls remain in the compiled output.
```ts
  llm: builtInAI,
  isReActModel: true,
  mcpServers: [] as McpServerConfig[]
}
```
💡 Verification agent
🧩 Analysis chain
llm should be a Provider instance, not a factory

`IAgentModelProviderOption.llm` expects `ProviderV2`. If `builtInAI` is a factory, call it here; otherwise pass the proper instance type.

```diff
- llm: builtInAI,
+ llm: builtInAI(), // if builtInAI returns a ProviderV2; otherwise adjust accordingly
  isReActModel: true,
```

If `llm` is set, prefer not to also override `model` per-call (see below) to avoid ambiguity.
🏁 Script executed:
```shell
# Find occurrences of builtInAI to see import and usage context
rg -n "builtInAI" -C3

# Locate definition/export of builtInAI to determine if it's a factory or instance
rg -n "export .*builtInAI" -C3

# Inspect IAgentModelProviderOption.llm type to confirm expected type
rg -n "interface IAgentModelProviderOption" -C3 packages/next-remoter/src
```

Length of output: 2393
Call the factory to instantiate the provider for llm and remove the redundant per-call override
IAgentModelProviderOption.llm expects a ProviderV2, but builtInAI is a factory function that returns a BuiltInAIChatLanguageModel instance. Change:
```diff
- llm: builtInAI,
+ llm: builtInAI(), // instantiate provider
```

Then drop the `model: builtInAI()` override in chatStream so it uses the configured llm:

```diff
- model: builtInAI(),
```

🤖 Prompt for AI Agents
In packages/next-remoter/src/composable/CustomAgentModelProvider.ts around lines
18–21, the llm field is currently assigned the factory function builtInAI but
IAgentModelProviderOption.llm expects a ProviderV2 instance; call the factory
(llm: builtInAI()) to instantiate the provider when configuring the options and
then remove the redundant per-call override model: builtInAI() inside chatStream
so chatStream uses the configured llm; if TypeScript complains, add the minimal
cast or adjust types to ensure the result matches ProviderV2.
```diff
  // 标识每一个markdown块
  let textId = 1
  for await (const part of result.fullStream) {
-   // console.log(part, part.type)
+   console.log(part, part.type)
```
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Always signal completion and surface errors
Ensure onDone is called even on exceptions/aborts; surface errors to the handler.
```diff
-      // 标识每一个markdown块
-      let textId = 1
-      for await (const part of result.fullStream) {
-        if (import.meta?.env?.DEV) console.debug(part, part.type)
+      // 标识每一个markdown块
+      let textId = 1
+      try {
+        for await (const part of result.fullStream) {
+          if (import.meta?.env?.DEV) console.debug(part.type, part)
          ...
-      }
-      }
-
-      handler.onDone()
+        }
+      } catch (err) {
+        handler.onError?.(err as Error)
+        throw err
+      } finally {
+        handler.onDone()
+      }
```

Replace `...` with the unchanged body.
Also applies to: 110-113
🤖 Prompt for AI Agents
In packages/next-remoter/src/composable/CustomAgentModelProvider.ts around lines
52-56 (and similarly 110-113), the stream processing currently logs parts but
does not guarantee calling the onDone handler or surfacing errors to the handler
on exceptions/aborts; wrap the stream iteration in try/catch/finally so that any
thrown error (including AbortError) is passed to the onError/onAbort handler and
onDone is always invoked in the finally block, and ensure you only call onDone
once (e.g., guard with a flag) and propagate the original error to the handler
before rethrowing or returning.
```diff
+ const tools = (await this.tempMergeTools(options.tools)) as ToolSet
+ const systemPrompt = await getSystemPromptMessages(tools)
+ const llm = this.llm(model)
  return chatMethod({
    // @ts-ignore ProviderV2 是所有llm的父类, 在每一个具体的llm 类都有一个选择model的函数用法
-   model: this.llm(model),
-   tools: this.tempMergeTools(options.tools) as ToolSet,
+   model: llm,
+   system: systemPrompt,
+   tools: this.isReActModel ? (tools as ToolSet) : undefined,
    stopWhen: stepCountIs(maxSteps),
```
🛠️ Refactor suggestion
Fix tools/system merging and override order; gate ReAct-only work.
- You always compute tools/system even when not in ReAct mode.
- Properties you set (system/tools/stopWhen/onStepFinish) are later overridden by `...options` because the spread comes last.
- Compose the user's `onStepFinish` if provided.

Apply:
```diff
-      const tools = (await this.tempMergeTools(options.tools)) as ToolSet
-      const systemPrompt = await getSystemPromptMessages(tools)
-      const llm = this.llm(model)
-      return chatMethod({
-        // @ts-ignore ProviderV2 是所有llm的父类, 在每一个具体的llm 类都有一个选择model的函数用法
-        model: llm,
-        system: systemPrompt,
-        tools: this.isReActModel ? (tools as ToolSet) : undefined,
-        stopWhen: stepCountIs(maxSteps),
-        onStepFinish: async (step) => {
-          if (this.isReActModel) {
-            await runReActLoop({
-              step,
-              tools,
-              vm: this,
-              chatMethod,
-              llm,
-              options,
-              system: systemPrompt
-            })
-          }
-        },
-        ...options
-      })
+      const mergedTools = this.tempMergeTools(options.tools) as ToolSet
+      const systemPrompt = this.isReActModel ? await getSystemPromptMessages(mergedTools) : undefined
+      const llm = this.llm(model)
+      return chatMethod({
+        ...options,
+        // @ts-ignore
+        model: llm,
+        system: systemPrompt ?? (options as any).system,
+        tools: this.isReActModel ? mergedTools : undefined,
+        stopWhen: stepCountIs(maxSteps),
+        onStepFinish: async (step) => {
+          await (options as any)?.onStepFinish?.(step)
+          if (this.isReActModel && maxSteps > 0) {
+            await runReActLoop({
+              step,
+              tools: mergedTools,
+              vm: this,
+              chatMethod,
+              llm,
+              options,
+              system: systemPrompt!,
+              remainingSteps: maxSteps
+            })
+          }
+        }
+      })
```
+ })Also applies to: 186-186
```ts
onStepFinish: async (step) => {
  if (this.isReActModel) {
    await runReActLoop({
      step,
      tools,
      vm: this,
      chatMethod,
      llm,
      options,
      system: systemPrompt
    })
  }
```
runReActLoop currently has no effective step budget, which risks unbounded recursion. `options` here doesn't contain `maxSteps` (it was destructured out). Pass an explicit counter and decrement it in the loop implementation.
See the diff in the previous comment (adds remainingSteps: maxSteps). Pair it with the react/index.ts changes that consume remainingSteps.
🤖 Prompt for AI Agents
In packages/next-sdk/agent/AgentModelProvider.ts around lines 173 to 184, the
runReActLoop call is invoked without a remaining step budget because maxSteps
was previously destructured out of options; update the call to pass an explicit
remainingSteps property (e.g., remainingSteps: maxSteps or remainingSteps:
options.maxSteps if maxSteps is out of scope) so the ReAct loop can track and
decrement steps, and ensure the react loop implementation consumes and
decrements remainingSteps on each iteration.
```ts
import { Tool, ToolCall } from '../type'
import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
import { generateText, stepCountIs } from 'ai'
```
Import ToolSet and fix types to avoid compile errors.
ToolSet is referenced but not imported; also getSystemPromptMessages expects an array but callers pass a map.
Apply:
```diff
-import { Tool, ToolCall } from '../type'
-import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
-import { generateText, stepCountIs } from 'ai'
+import { Tool, ToolCall } from '../type'
+import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
+import { stepCountIs } from 'ai'
+import type { ToolSet } from 'ai'
```
+import type { ToolSet } from 'ai'And:
-export const getSystemPromptMessages = async (tools: Tool[]): Promise<string> => {
+export const getSystemPromptMessages = async (tools: ToolSet): Promise<string> => {Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 1 to 3, add the missing
import for ToolSet from '../type' and fix the type mismatch for
getSystemPromptMessages: either change its parameter type to accept a
ToolSet/Map (and update its implementation to iterate Map.entries() or
values()), or convert incoming Map callers to pass Array.from(toolSet.values())
so the function keeps an array signature; ensure all references and type
annotations are updated so callers and the function agree (import ToolSet,
update function signature or callers accordingly) to eliminate the compile
errors.
```ts
id: `call_${Math.random().toString(36).slice(2)}`,
type: 'function',
function: {
  name: action,
  arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
}
})
```
Always serialize arguments as valid JSON.
JSON.parse(toolCall.function.arguments) will throw when action_input is a plain string (not quoted). Serialize uniformly.
```diff
- arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
+ arguments: JSON.stringify(action_input ?? {})
```

📝 Committable suggestion
```diff
  id: `call_${Math.random().toString(36).slice(2)}`,
  type: 'function',
  function: {
    name: action,
-   arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
+   arguments: JSON.stringify(action_input ?? {})
  }
})
```
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 65 to 71, the code
currently conditionally leaves plain strings unquoted so JSON.parse will throw;
always serialize arguments as valid JSON by replacing the conditional with a
single JSON.stringify call (e.g. JSON.stringify(action_input ?? {})) so strings,
objects, null/undefined all produce valid JSON; ensure the resulting value is
assigned to toolCall.function.arguments.
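To see why uniform serialization matters, here is a small standalone demonstration (hypothetical helper names, not the SDK's API):

```typescript
// A bare string is NOT valid JSON unless quoted, so round-tripping through
// JSON.stringify keeps a later JSON.parse from throwing.
function encodeArgs(actionInput: unknown): string {
  return JSON.stringify(actionInput ?? {})
}

function decodeArgs(encoded: string): unknown {
  return JSON.parse(encoded)
}

// Old path: a plain string like 'Paris' is passed through unquoted,
// and JSON.parse('Paris') throws a SyntaxError.
let oldPathThrew = false
try {
  JSON.parse('Paris')
} catch {
  oldPathThrew = true
}

// New path: encodeArgs('Paris') produces '"Paris"', which parses cleanly.
const roundTripped = decodeArgs(encodeArgs('Paris'))
```

The same `encodeArgs` call handles objects, strings, and null/undefined uniformly, which is exactly the property the suggested `JSON.stringify(action_input ?? {})` change relies on.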
```ts
export const runReActLoop = async ({
  step,
  tools,
  vm,
  chatMethod,
  options,
  system,
  llm
}: {
  step: any
  tools: Tool[]
  vm: any
  chatMethod: any
  options: any
  system: string
  llm: any
}) => {
```
Type and budget fixes for the ReAct loop; prevent runaway recursion.
- `tools` is used as a map but typed as an array.
- There is no effective global step budget; nested calls reset `stepCountIs`.

Apply:
```diff
-export const runReActLoop = async ({
+export const runReActLoop = async ({
   step,
   tools,
   vm,
   chatMethod,
   options,
   system,
-  llm
+  llm,
+  remainingSteps,
 }: {
   step: any
-  tools: Tool[]
+  tools: ToolSet
   vm: any
   chatMethod: any
   options: any
   system: string
-  llm: any
+  llm: any
+  remainingSteps: number
 }) => {
+  if (!remainingSteps || remainingSteps <= 0) return
```

And in the call below (lines 132-136):
```diff
-  stopWhen: stepCountIs(options.maxSteps),
+  stopWhen: stepCountIs(remainingSteps),
   onStepFinish: async (stepCopy) => {
-    await runReActLoop({ step: stepCopy, tools, vm, chatMethod, options, system: system, llm: llm })
+    await runReActLoop({
+      step: stepCopy,
+      tools,
+      vm,
+      chatMethod,
+      options,
+      system,
+      llm,
+      remainingSteps: remainingSteps - 1
+    })
   },
```
},Pair with the AgentModelProvider.ts change that passes remainingSteps.
📝 Committable suggestion
```ts
// --- packages/next-sdk/agent/react/index.ts ---
// Updated runReActLoop signature and step budget check
export const runReActLoop = async ({
  step,
  tools,
  vm,
  chatMethod,
  options,
  system,
  llm,
  remainingSteps,
}: {
  step: any
  tools: ToolSet
  vm: any
  chatMethod: any
  options: any
  system: string
  llm: any
  remainingSteps: number
}) => {
  if (!remainingSteps || remainingSteps <= 0) return
  // …rest of implementation unchanged
}

// …later in the same file, around lines 132-136…
// Before: stopWhen uses the static maxSteps and recursive calls forget to decrement
// After: use remainingSteps for stopping, and decrement on each nested invocation
stopWhen: stepCountIs(remainingSteps),
onStepFinish: async (stepCopy) => {
  await runReActLoop({
    step: stepCopy,
    tools,
    vm,
    chatMethod,
    options,
    system,
    llm,
    remainingSteps: remainingSteps - 1
  })
},
```
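The budget mechanics can be shown in isolation. The sketch below is synchronous for brevity (the real loop is async) and uses hypothetical names; it demonstrates only the guard-plus-decrement pattern the suggestion introduces:

```typescript
// Each nested call receives remainingSteps - 1; the guard clause bounds
// the recursion depth no matter how many tool calls the model keeps emitting.
let iterations = 0

function boundedLoop(remainingSteps: number): void {
  if (!remainingSteps || remainingSteps <= 0) return
  iterations++
  // Simulate "the model requested another tool call" on every step:
  boundedLoop(remainingSteps - 1)
}

boundedLoop(5)
```

Without the decrement, each nested `stepCountIs(options.maxSteps)` would restart the budget and the recursion could run indefinitely.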
```ts
if (toolCallsResult.length > 0) {
  const lastmessage = step.content[0]
  lastmessage.text =
    lastmessage.text +
    `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`
```
🛠️ Refactor suggestion
Make Observation formatting resilient to tool result shapes.
Current code assumes result.content[*].text. Fall back gracefully.
```diff
- `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`
+ `\n Observation: ${
+   toolCallsResult.map((r: any) => {
+     if (Array.isArray(r?.content)) {
+       return r.content.map((c: any) => c?.text ?? '').filter(Boolean).join('\n')
+     }
+     if (typeof r?.text === 'string') return r.text
+     try { return JSON.stringify(r) } catch { return String(r) }
+   }).join('\n')
+ }`
```

📝 Committable suggestion
```ts
if (toolCallsResult.length > 0) {
  const lastmessage = step.content[0]
  lastmessage.text =
    lastmessage.text +
    `\n Observation: ${
      toolCallsResult.map((r: any) => {
        if (Array.isArray(r?.content)) {
          return r.content
            .map((c: any) => c?.text ?? '')
            .filter(Boolean)
            .join('\n')
        }
        if (typeof r?.text === 'string') {
          return r.text
        }
        try {
          return JSON.stringify(r)
        } catch {
          return String(r)
        }
      }).join('\n')
    }`
}
```
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 122 to 127, the
Observation concatenation assumes each toolCallsResult item has content arrays
with objects that contain .text; make it resilient by defensively accessing
nested fields (use optional chaining and Array.isArray checks), fallback to
other representations when .text is missing (e.g., if content item is string use
it, if it's an object without text use JSON.stringify or String(item)), and join
entries safely (skip null/undefined). Update the mapping to normalize each tool
result into a string before joining so Observation always contains a sensible
textual fallback.
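The normalization described above can be factored into a single helper. A sketch, assuming tool results may arrive as MCP-style `{ content: [{ text }] }`, as plain `{ text }`, or as arbitrary JSON (the helper name is hypothetical):

```typescript
// Normalize one tool result into observation text, falling back gracefully
// instead of assuming result.content[*].text always exists.
function toObservationText(r: any): string {
  if (Array.isArray(r?.content)) {
    return r.content
      .map((c: any) => c?.text ?? '')
      .filter(Boolean)
      .join('\n')
  }
  if (typeof r?.text === 'string') return r.text
  try {
    return JSON.stringify(r)
  } catch {
    return String(r)
  }
}
```

Extracting the helper also makes the fallback order (content array, then `text`, then JSON) independently testable.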
```ts
for await (const part of result.fullStream) {
  part.text && options.handler.onData({
    type: 'markdown',
    delta: part.text,
  })
}
```
Handle both streaming and non-streaming results; guard handler.
fullStream may not exist (e.g., when generateText is used). Also options.handler can be undefined.
```diff
- for await (const part of result.fullStream) {
-   part.text && options.handler.onData({
-     type: 'markdown',
-     delta: part.text,
-   })
- }
+ if (result?.fullStream) {
+   for await (const part of result.fullStream) {
+     if (part?.text) {
+       options?.handler?.onData?.({ type: 'markdown', delta: part.text })
+     }
+   }
+ } else if (result?.text) {
+   options?.handler?.onData?.({ type: 'markdown', delta: result.text })
+ }
```

📝 Committable suggestion
```ts
if (result?.fullStream) {
  for await (const part of result.fullStream) {
    if (part?.text) {
      options?.handler?.onData?.({ type: 'markdown', delta: part.text })
    }
  }
} else if (result?.text) {
  options?.handler?.onData?.({ type: 'markdown', delta: result.text })
}
```
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 139–144, the loop assumes
result.fullStream and options.handler.onData always exist; update the code to
first check that options.handler and typeof options.handler.onData ===
'function' before sending events, then if result.fullStream exists iterate and
emit each part.text as before; otherwise handle the non-streaming case by
emitting a single onData call with the available text (e.g., result.text or
result.output/content) wrapped as a markdown delta. Ensure null/undefined guards
so no runtime errors occur when handler or fullStream are absent.
Summary by CodeRabbit