2 changes: 1 addition & 1 deletion packages/doc-ai/package.json
@@ -30,7 +30,7 @@
"devDependencies": {
"@types/node": "^22.15.16",
"@vitejs/plugin-vue": "^5.2.2",
"@vitejs/plugin-vue-jsx": "^5.0.1",
"@vitejs/plugin-vue-jsx": "^5.1.1",
"@vue/tsconfig": "^0.7.0",
"@vant/auto-import-resolver": "^1.3.0",
"dotenv": "^16.5.0",
4 changes: 2 additions & 2 deletions packages/doc-ai/src/const.ts
@@ -1,3 +1,3 @@
-export const AGENT_ROOT = 'https://agent.opentiny.design/api/v1/webmcp-trial/'
+export const AGENT_ROOT = 'https://ai.opentiny.design/'

-export const SSEION_ID = 'spgl-95c0-4839-8007-remoter'
+export const SSEION_ID = 'f5d8e6f6-8f9a-4117-ad9c-b9f10b98068e'
1 change: 1 addition & 0 deletions packages/doc-ai/src/views/comprehensive/index.vue
@@ -101,6 +101,7 @@ server.registerTool(
}
},
async ({ city }: { city: string }) => {
+    debugger
⚠️ Potential issue

Remove or gate the debugger statement.

Avoid shipping breakpoints in production.

-    debugger
+    if (import.meta?.env?.DEV) debugger
🤖 Prompt for AI Agents
In packages/doc-ai/src/views/comprehensive/index.vue at line 104 there is a
lingering "debugger" statement; remove this breakpoint or wrap it in a
development-only guard (e.g., only invoke when process.env.NODE_ENV ===
'development' or import.meta.env.DEV) so it cannot trigger in production builds;
update and test to ensure no debugger calls remain in the compiled output.
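
Beyond the one-off fix, a lint rule can keep breakpoints out of the tree entirely. A minimal sketch, assuming the repo already runs ESLint with flat config and the appropriate parsers for its .ts/.vue sources (the file name and globs here are illustrative, not taken from this PR):

// eslint.config.js: hypothetical repo-wide guard against shipped breakpoints
export default [
  {
    files: ['packages/**/*.ts', 'packages/**/*.vue'],
    rules: {
      'no-debugger': 'error' // ESLint core rule; fails the lint step on any `debugger`
    }
  }
]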

return {
content: [{ type: 'text', text: `天气信息:${city}晴天` }]
}
1 change: 1 addition & 0 deletions packages/next-remoter/package.json
@@ -25,6 +25,7 @@
"@modelcontextprotocol/sdk": "^1.16.0",
"@opentiny/vue": "^3.25.0",
"@opentiny/vue-icon": "^3.25.0",
"@built-in-ai/core": "^2.0.0",
"html5-qrcode": "^2.3.8",
"vant": "^4.9.21",
"vue": "^3.3.11"
4 changes: 2 additions & 2 deletions packages/next-remoter/src/components/tiny-robot-chat.vue
@@ -125,7 +125,7 @@ const props = defineProps({
/** 后端的代理服务器地址 */
agentRoot: {
type: String,
-default: 'https://agent.opentiny.design/api/v1/webmcp-trial/'
+default: 'https://ai.opentiny.design/'
},
/** 左上角的标题 */
title: {
@@ -229,7 +229,7 @@ const pillItems = [
id: 'office',
text: props.locale === 'zh-CN' ? '办公助手' : 'Office Assistant',
menus: [
-'接收邮件#请同步邮箱的新邮件。',
+'接收邮件#帮我勾选中最贵的手机商品。',
'编写邮件#请新建一个邮件,收件人为 opentiny-next@meeting.com, 内容为举办一个临时会议。',
'安排会议#创建一个临时的在线会议,主题为讨论问题,时长为1小时。',
'整理文档#请分析附件中的销售情况,把销售额绘制成折线图。'
15 changes: 7 additions & 8 deletions packages/next-remoter/src/composable/CustomAgentModelProvider.ts
@@ -5,6 +5,7 @@ import { BaseModelProvider } from '@opentiny/tiny-robot-kit'
import type { AIModelConfig } from '@opentiny/tiny-robot-kit'
import { type Ref } from 'vue'
import { AgentModelProvider, McpServerConfig, IAgentModelProviderOption } from '@opentiny/next-sdk'
+import { builtInAI } from '@built-in-ai/core';

/** Tiny-robot 所需要的自定义大语言的Provider */
export class CustomAgentModelProvider extends BaseModelProvider {
@@ -14,11 +15,8 @@ export class CustomAgentModelProvider extends BaseModelProvider {
constructor(config: AIModelConfig, sessionId: Ref<string>, agentRoot: Ref<string>) {
super(config)
     const options = {
-      llmConfig: {
-        apiKey: 'sk-trial',
-        baseURL: 'https://agent.opentiny.design/api/v1/ai',
-        providerType: 'deepseek'
-      },
+      llm: builtInAI,
+      isReActModel: true,
       mcpServers: [] as McpServerConfig[]
     }
Comment on lines +18 to 21
💡 Verification agent

🧩 Analysis chain

llm should be a Provider instance, not a factory

IAgentModelProviderOption.llm expects ProviderV2. If builtInAI is a factory, call it here; otherwise pass the proper instance type.

-      llm: builtInAI,
+      llm: builtInAI(), // if builtInAI returns a ProviderV2; otherwise adjust accordingly
       isReActModel: true,

If llm is set, prefer not to also override model per-call (see below) to avoid ambiguity.


🏁 Script executed:

# Find occurrences of builtInAI to see import and usage context
rg -n "builtInAI" -C3

# Locate definition/export of builtInAI to determine if it's a factory or instance
rg -n "export .*builtInAI" -C3

# Inspect IAgentModelProviderOption.llm type to confirm expected type
rg -n "interface IAgentModelProviderOption" -C3 packages/next-remoter/src

Length of output: 2393


Call the factory to instantiate the provider for llm and remove the redundant per-call override

IAgentModelProviderOption.llm expects a ProviderV2, but builtInAI is a factory function that returns a BuiltInAIChatLanguageModel instance. Change:

- llm: builtInAI,
+ llm: builtInAI(), // instantiate provider

Then drop the model: builtInAI() override in chatStream so it uses the configured llm:

-      model: builtInAI(),
🤖 Prompt for AI Agents
In packages/next-remoter/src/composable/CustomAgentModelProvider.ts around lines
18–21, the llm field is currently assigned the factory function builtInAI but
IAgentModelProviderOption.llm expects a ProviderV2 instance; call the factory
(llm: builtInAI()) to instantiate the provider when configuring the options and
then remove the redundant per-call override model: builtInAI() inside chatStream
so chatStream uses the configured llm; if TypeScript complains, add the minimal
cast or adjust types to ensure the result matches ProviderV2.
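
To make the distinction concrete, a minimal sketch (assuming builtInAI follows the usual AI SDK pattern where the exported symbol is a factory and calling it yields the provider/model object; names mirror this PR):

import { builtInAI } from '@built-in-ai/core'

// The bare symbol is a function value, which is the wrong shape for an
// option that expects an instantiated provider.
const asFactory = builtInAI

// Calling the factory produces the instance the option actually expects.
const asInstance = builtInAI()

const options = {
  llm: asInstance, // configure once here...
  isReActModel: true,
  mcpServers: []
}
// ...and let chatStream use the configured llm instead of calling
// builtInAI() again on every request.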

if (sessionId.value && sessionId.value.includes(',')) {
@@ -46,14 +44,15 @@ export class CustomAgentModelProvider extends BaseModelProvider {
   async chatStream(request: ChatCompletionRequest, handler: StreamHandler): Promise<void> {
     const result = await this.agent.chatStream({
       messages: request.messages,
-      model: 'deepseek-ai/DeepSeek-V3',
-      abortSignal: request.options?.signal
+      model: builtInAI(),
+      abortSignal: request.options?.signal,
+      handler
     })

// 标识每一个markdown块
let textId = 1
for await (const part of result.fullStream) {
-      // console.log(part, part.type)
+      console.log(part, part.type)

// 文本节点处理。 每个文本块拥有自己的textId
if (part.type === 'text-start') {
2 changes: 1 addition & 1 deletion packages/next-remoter/src/composable/useTinyRobotChat.ts
@@ -29,7 +29,7 @@ export const useTinyRobotChat = ({ sessionId, agentRoot }: useTinyRobotOption) =
events: {
onReceiveData(data, messages, preventDefault) {
preventDefault()
-        console.log('onReceiveData=', data)
+        // console.log('onReceiveData=', data)

let lastMessage = messages.value[messages.value.length - 1]

67 changes: 67 additions & 0 deletions packages/next-sdk/README.md
@@ -11,3 +11,70 @@ npm install @opentiny/next-sdk
## 许可证

MIT

// onChunk: (chunkObj) => {
// console.log("onChunk", chunkObj);
// },
// // onFinish: (finishObj) => console.log("onFinish", finishObj),
// onStepFinish: (stepFinishObj) => console.log("onStepFinish", stepFinishObj),
// experimental_transform:
// //@ts-ignore
// ({ stopStream }) => {
// return new TransformStream({
// transform(chunk, controller) {
// console.log("transform chunk", chunk, controller);
// if (
// chunk.type == "tool-result" &&
// chunk.toolName == "getOrderbyDate" &&
// chunk.result.includes("没有")
// ) {
// // 停止流
// stopStream();
// console.log("已经停止流",chunk, result)
// setTimeout(() => {
// let askResult=confirm(chunk.result+", 你是否继续预订其它日期?")
// if(askResult){
// }
// }, 100);
// //@ts-ignore
// controller.enqueue({
// type: "step-finish",
// finishReason: "stop",
// logprobs: undefined,
// usage: {
// completionTokens: NaN,
// promptTokens: NaN,
// totalTokens: NaN,
// },
// request: {},
// response: {
// id: "response-id",
// modelId: "mock-model-id",
// timestamp: new Date(0),
// },
// warnings: [],
// isContinued: false,
// });
// //@ts-ignore
// controller.enqueue({
// type: "finish",
// finishReason: "stop",
// logprobs: undefined,
// usage: {
// completionTokens: NaN,
// promptTokens: NaN,
// totalTokens: NaN,
// },
// response: {
// id: "response-id",
// modelId: "mock-model-id",
// timestamp: new Date(0),
// },
// });
// return;
// }
// controller?.enqueue(chunk);
// },
// });
// },
// });
28 changes: 25 additions & 3 deletions packages/next-sdk/agent/AgentModelProvider.ts
@@ -7,6 +7,7 @@ import { ProviderV2 } from '@ai-sdk/provider'
import { OpenAIProvider } from '@ai-sdk/openai'
import { createOpenAI } from '@ai-sdk/openai'
import { createDeepSeek } from '@ai-sdk/deepseek'
+import { getSystemPromptMessages, organizeToolCalls, runReActLoop } from './react'

export const AIProviderFactories = {
['openai']: createOpenAI,
@@ -30,11 +31,15 @@
   mcpTools: Array<Record<string, any>> = []
   /** 需要实时过滤掉的tools name*/
   ignoreToolnames: string[] = []
+  /** 是否是 ReAct 模型 */
+  isReActModel: boolean = false

-  constructor({ llmConfig, mcpServers, llm }: IAgentModelProviderOption) {
+  constructor({ llmConfig, mcpServers, llm, isReActModel }: IAgentModelProviderOption) {
     // 1、保存 mcpServer
     this.mcpServers = mcpServers || []

+    this.isReActModel = isReActModel || false
+
     // 2、保存 llm
     if (llm) {
       this.llm = llm
@@ -156,11 +161,28 @@
throw new Error('LLM is not initialized')
}

+    const tools = (await this.tempMergeTools(options.tools)) as ToolSet
+    const systemPrompt = await getSystemPromptMessages(tools)
+    const llm = this.llm(model)
     return chatMethod({
       // @ts-ignore ProviderV2 是所有llm的父类, 在每一个具体的llm 类都有一个选择model的函数用法
-      model: this.llm(model),
-      tools: this.tempMergeTools(options.tools) as ToolSet,
+      model: llm,
+      system: systemPrompt,
+      tools: this.isReActModel ? (tools as ToolSet) : undefined,
       stopWhen: stepCountIs(maxSteps),
+      onStepFinish: async (step) => {
+        if (this.isReActModel) {
+          await runReActLoop({
+            step,
+            tools,
+            vm: this,
+            chatMethod,
+            llm,
+            options,
+            system: systemPrompt
+          })
+        }
Comment on lines +173 to +184
⚠️ Potential issue

runReActLoop currently has no effective step budget; risk of unbounded recursion.

options here doesn’t contain maxSteps (it was destructured out). Pass an explicit counter and decrement in the loop implementation.

See the diff in the previous comment (adds remainingSteps: maxSteps). Pair it with the react/index.ts changes that consume remainingSteps.

🤖 Prompt for AI Agents
In packages/next-sdk/agent/AgentModelProvider.ts around lines 173 to 184, the
runReActLoop call is invoked without a remaining step budget because maxSteps
was previously destructured out of options; update the call to pass an explicit
remainingSteps property (e.g., remainingSteps: maxSteps or remainingSteps:
options.maxSteps if maxSteps is out of scope) so the ReAct loop can track and
decrement steps, and ensure the react loop implementation consumes and
decrements remainingSteps on each iteration.
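
As a concrete sketch of the call-site change being requested (remainingSteps is the reviewer's proposed field; the surrounding code mirrors the PR):

      onStepFinish: async (step) => {
        if (this.isReActModel) {
          await runReActLoop({
            step,
            tools,
            vm: this,
            chatMethod,
            llm,
            options,
            system: systemPrompt,
            remainingSteps: maxSteps // explicit budget, decremented on each nested call
          })
        }
      },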

+      },
...options
})
}
149 changes: 149 additions & 0 deletions packages/next-sdk/agent/react/index.ts
@@ -0,0 +1,149 @@
import { Tool, ToolCall } from '../type'
import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
import { generateText, stepCountIs } from 'ai'
Comment on lines +1 to +3
⚠️ Potential issue

Import ToolSet and fix types to avoid compile errors.

ToolSet is referenced but not imported; also getSystemPromptMessages expects an array but callers pass a map.

Apply:

-import { Tool, ToolCall } from '../type'
-import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
-import { generateText, stepCountIs } from 'ai'
+import { Tool, ToolCall } from '../type'
+import { PREFIX, FORMAT_INSTRUCTIONS, SUFFIX } from './systemPrompt'
+import { stepCountIs } from 'ai'
+import type { ToolSet } from 'ai'

And:

-export const getSystemPromptMessages = async (tools: Tool[]): Promise<string> => {
+export const getSystemPromptMessages = async (tools: ToolSet): Promise<string> => {

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 1 to 3, add the missing
import for ToolSet from '../type' and fix the type mismatch for
getSystemPromptMessages: either change its parameter type to accept a
ToolSet/Map (and update its implementation to iterate Map.entries() or
values()), or convert incoming Map callers to pass Array.from(toolSet.values())
so the function keeps an array signature; ensure all references and type
annotations are updated so callers and the function agree (import ToolSet,
update function signature or callers accordingly) to eliminate the compile
errors.


const FINAL_ANSWER_TAG = 'Final Answer:'
const ACTION_TAG = '"action":'

export const getSystemPromptMessages = async (tools: Tool[]): Promise<string> => {
const toolStrings = JSON.stringify(tools)
const prompt = [PREFIX, toolStrings, FORMAT_INSTRUCTIONS, SUFFIX].join('\n\n')

return prompt
}

export const organizeToolCalls = async (
text: string
): Promise<{ toolCalls: ToolCall[]; thought?: string; finalAnswer: string }> => {
try {
let thought: string | undefined
const thoughtActionRegex = /Thought(.*?)(?:Action|Final Answer|$)/gs
const matches = [...text.matchAll(thoughtActionRegex)]

if (matches.length > 0) {
// 取第一个 Thought 作为思考内容,去除首尾的符号
thought = matches[0][1]?.replace(/^\W|$/, '')?.trim()
}

if (text.includes(FINAL_ANSWER_TAG) && !text.includes(ACTION_TAG)) {
const parts = text.split(FINAL_ANSWER_TAG)
const output = parts[parts.length - 1].trim()
return {
toolCalls: [],
thought,
finalAnswer: output
}
}

if (!text.includes(ACTION_TAG)) {
return {
toolCalls: [],
thought,
finalAnswer: text.trim()
}
}

const toolCalls: ToolCall[] = []

if (text.includes('```')) {
const actionBlocks = text
.trim()
.split(/```(?:json)?/)
.filter((block: string) => block.includes(ACTION_TAG))

actionBlocks.forEach((block: string) => {
try {
const { action, action_input } = JSON.parse(block.trim())

if (!action || typeof action !== 'string') {
console.error('Invalid tool call: missing or invalid action field')

return
}

toolCalls.push({
id: `call_${Math.random().toString(36).slice(2)}`,
type: 'function',
function: {
name: action,
arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
}
})
Comment on lines +65 to +71
⚠️ Potential issue

Always serialize arguments as valid JSON.

JSON.parse(toolCall.function.arguments) will throw when action_input is a plain string (not quoted). Serialize uniformly.

-              arguments: typeof action_input === 'string' ? action_input : JSON.stringify(action_input || {})
+              arguments: JSON.stringify(action_input ?? {})
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 65 to 71, the code
currently conditionally leaves plain strings unquoted so JSON.parse will throw;
always serialize arguments as valid JSON by replacing the conditional with a
single JSON.stringify call (e.g. JSON.stringify(action_input ?? {})) so strings,
objects, null/undefined all produce valid JSON; ensure the resulting value is
assigned to toolCall.function.arguments.
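
A quick illustration of the failure mode, runnable in any JS/TS runtime (the string values are hypothetical):

JSON.parse('{"city":"Paris"}')   // ok: action_input was an object, stringified upstream
JSON.parse('Paris')              // throws SyntaxError: a bare string is not valid JSON
JSON.stringify('Paris')          // returns '"Paris"'; quoting once makes the later parse safe

Serializing uniformly with JSON.stringify(action_input ?? {}) therefore guarantees the later tool.execute(JSON.parse(...)) never throws on string inputs.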

} catch (error) {
console.error('Failed to parse tool call JSON:', error)
}
})
}

return {
toolCalls: toolCalls,
thought,
finalAnswer: text.trim()
}
} catch (error) {
console.error('Failed to organize tool calls:', error)

return {
toolCalls: [],
thought: '',
finalAnswer: text
}
}
}

export const runReActLoop = async ({
step,
tools,
vm,
chatMethod,
options,
system,
llm
}: {
step: any
tools: Tool[]
vm: any
chatMethod: any
options: any
system: string
llm: any
}) => {
Comment on lines +94 to +110
⚠️ Potential issue

Type and budget fixes for ReAct loop; prevent runaway recursion.

  • tools is used as a map but typed as array.
  • No effective global step budget; nested calls reset stepCountIs.

Apply:

-export const runReActLoop = async ({
+export const runReActLoop = async ({
   step,
   tools,
   vm,
   chatMethod,
   options,
   system,
-  llm
+  llm,
+  remainingSteps,
 }: {
   step: any
-  tools: Tool[]
+  tools: ToolSet
   vm: any
   chatMethod: any
   options: any
   system: string
-  llm: any
+  llm: any
+  remainingSteps: number
 }) => {
+  if (!remainingSteps || remainingSteps <= 0) return

And in the call below (lines 132-136):

-      stopWhen: stepCountIs(options.maxSteps),
+      stopWhen: stepCountIs(remainingSteps),
       onStepFinish: async (stepCopy) => {
-        await runReActLoop({ step: stepCopy, tools, vm, chatMethod, options, system: system, llm: llm })
+        await runReActLoop({
+          step: stepCopy,
+          tools,
+          vm,
+          chatMethod,
+          options,
+          system,
+          llm,
+          remainingSteps: remainingSteps - 1
+        })
       },

Pair with the AgentModelProvider.ts change that passes remainingSteps.


const toolCallsResult = []
const { toolCalls, thought, finalAnswer } = await organizeToolCalls(step.content[0].text)

for (const toolCall of toolCalls) {
const tool = tools[toolCall.function.name]
if (tool) {
const result = await tool.execute(JSON.parse(toolCall.function.arguments))
toolCallsResult.push(result)
}
}

if (toolCallsResult.length > 0) {
const lastmessage = step.content[0]
lastmessage.text =
lastmessage.text +
`\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`

Comment on lines +122 to +127
🛠️ Refactor suggestion

Make Observation formatting resilient to tool result shapes.

Current code assumes result.content[*].text. Fall back gracefully.

-      `\n Observation: ${toolCallsResult.map((item: any) => item.content.map((item: any) => item.text).join('\n')).join('\n')}`
+      `\n Observation: ${
+        toolCallsResult.map((r: any) => {
+          if (Array.isArray(r?.content)) {
+            return r.content.map((c: any) => c?.text ?? '').filter(Boolean).join('\n')
+          }
+          if (typeof r?.text === 'string') return r.text
+          try { return JSON.stringify(r) } catch { return String(r) }
+        }).join('\n')
+      }`
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 122 to 127, the
Observation concatenation assumes each toolCallsResult item has content arrays
with objects that contain .text; make it resilient by defensively accessing
nested fields (use optional chaining and Array.isArray checks), fallback to
other representations when .text is missing (e.g., if content item is string use
it, if it's an object without text use JSON.stringify or String(item)), and join
entries safely (skip null/undefined). Update the mapping to normalize each tool
result into a string before joining so Observation always contains a sensible
textual fallback.

const result = chatMethod({
system: system,
model: llm,
tools: tools as ToolSet,
stopWhen: stepCountIs(options.maxSteps),
onStepFinish: async (stepCopy) => {
await runReActLoop({ step: stepCopy, tools, vm, chatMethod, options, system: system, llm: llm })
},
prompt: lastmessage.text,
})

for await (const part of result.fullStream) {
part.text && options.handler.onData({
type: 'markdown',
delta: part.text,
})
}
Comment on lines +139 to +144
⚠️ Potential issue

Handle both streaming and non-streaming results; guard handler.

fullStream may not exist (e.g., when generateText is used). Also options.handler can be undefined.

-    for await (const part of result.fullStream) {
-      part.text &&  options.handler.onData({
-        type: 'markdown',
-        delta: part.text,
-      })
-    }
+    if (result?.fullStream) {
+      for await (const part of result.fullStream) {
+        if (part?.text) {
+          options?.handler?.onData?.({ type: 'markdown', delta: part.text })
+        }
+      }
+    } else if (result?.text) {
+      options?.handler?.onData?.({ type: 'markdown', delta: result.text })
+    }
🤖 Prompt for AI Agents
In packages/next-sdk/agent/react/index.ts around lines 139–144, the loop assumes
result.fullStream and options.handler.onData always exist; update the code to
first check that options.handler and typeof options.handler.onData ===
'function' before sending events, then if result.fullStream exists iterate and
emit each part.text as before; otherwise handle the non-streaming case by
emitting a single onData call with the available text (e.g., result.text or
result.output/content) wrapped as a markdown delta. Ensure null/undefined guards
so no runtime errors occur when handler or fullStream are absent.


} else {
return finalAnswer
}
}