Conversation
Summary of Changes

Hello @ElasticBottle, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request primarily focuses on upgrading the AI SDK and its associated provider packages to their latest major versions. The upgrade required widespread refactoring to adapt to API changes, particularly in how structured outputs are generated from AI models and how tool schemas are defined. The changes streamline the AI agent implementations, simplify JSON parsing, and introduce more detailed UI feedback for AI tool interactions, ensuring compatibility with and leveraging new features of the updated SDK.
Code Review
This PR updates the AI SDK and related dependencies to newer versions. The changes correctly adapt the codebase to the new APIs, such as using `Output.object` with `generateText` for structured output, removing the `jsonSchema` helper, and handling async `convertToModelMessages`. The removal of the `llm-parse-json.ts` utility is a great simplification. I've added a couple of suggestions for minor code quality improvements. Overall, this is a solid update.
```ts
const output = await (async () => {
  if (settingToUpdate === "businessBackground") {
    const result = await generateText({
      model: openai("gpt-5.1-codex-mini"),
      output: Output.object({
        schema: businessBackgroundSchema,
      }),
      prompt,
    });
    return result.output;
  }
  if (settingToUpdate === "imageSettings") {
    const result = await generateText({
      model: openai("gpt-5.1-codex-mini"),
      output: Output.object({
        schema: imageSettingsSchema,
      }),
      prompt,
    });
    return result.output;
  }
  if (settingToUpdate === "writingSettings") {
    const result = await generateText({
      model: openai("gpt-5.1-codex-mini"),
      output: Output.object({
        schema: writingSettingsSchema,
      }),
      prompt,
    });
    return result.output;
  }
  const result = await generateText({
    model: openai("gpt-5.1-codex-mini"),
    output: Output.object({
      schema: publishingSettingsSchema,
    }),
    prompt,
  });
  return result.output;
})();
```
For better readability and maintainability, consider refactoring this series of if statements into a switch statement. This is a more conventional way to handle logic that branches on the value of a single variable like settingToUpdate.
```ts
const output = await (async () => {
  switch (settingToUpdate) {
    case "businessBackground": {
      const { output } = await generateText({
        model: openai("gpt-5.1-codex-mini"),
        output: Output.object({ schema: businessBackgroundSchema }),
        prompt,
      });
      return output;
    }
    case "imageSettings": {
      const { output } = await generateText({
        model: openai("gpt-5.1-codex-mini"),
        output: Output.object({ schema: imageSettingsSchema }),
        prompt,
      });
      return output;
    }
    case "writingSettings": {
      const { output } = await generateText({
        model: openai("gpt-5.1-codex-mini"),
        output: Output.object({ schema: writingSettingsSchema }),
        prompt,
      });
      return output;
    }
    case "publishingSettings": {
      const { output } = await generateText({
        model: openai("gpt-5.1-codex-mini"),
        output: Output.object({ schema: publishingSettingsSchema }),
        prompt,
      });
      return output;
    }
  }
})();
```

```ts
Promise.resolve(
  result.consumeStream({
    onError: (error) => {
      console.error("[chat.sendMessage] consumeStream onError", {
        error,
      });
    },
  }),
),
```
Pull request overview
This PR updates the Vercel AI SDK from v5 to v6 across the monorepo, along with all related AI SDK provider packages (@ai-sdk/google, @ai-sdk/openai, @ai-sdk/anthropic, @ai-sdk/react). The changes refactor code to use the new v6 API patterns, removing experimental features that have been stabilized.
Changes:
- Upgraded AI SDK packages from v5.0.112 to v6.0.86 and provider packages to v3.x
- Migrated from `experimental_output` to the `output` API for structured outputs
- Removed the `jsonSchema` wrapper in favor of direct schema usage with tools
- Deleted `llmParseJson` utility functions that are now redundant
- Made `convertToModelMessages` calls explicitly async with `await`
- Updated UI components to support approval states (previously commented out)
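The structured-output and message-conversion changes follow the usual AI SDK v5 → v6 migration pattern. A hedged sketch of what these edits look like (the schema and variable names here are illustrative, not exact repo code):

```diff
 const result = await generateText({
   model: openai("gpt-5.1-codex-mini"),
-  experimental_output: Output.object({ schema: writingSettingsSchema }),
+  output: Output.object({ schema: writingSettingsSchema }),
   prompt,
 });

-// tool definition: jsonSchema wrapper no longer needed
-inputSchema: jsonSchema(searchSiteInputJson),
+inputSchema: searchSiteInputSchema,

-const messages = convertToModelMessages(uiMessages);
+// convertToModelMessages returns a promise in v6
+const messages = await convertToModelMessages(uiMessages);
```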
Reviewed changes
Copilot reviewed 38 out of 39 changed files in this pull request and generated no comments.
Show a summary per file
| File | Description |
|---|---|
| packages/ui/package.json | Upgraded ai package to v6.0.86 |
| packages/ui/src/components/ai-elements/tool.tsx | Uncommented approval-related UI states for tool execution |
| packages/ui/src/components/ai-elements/image.tsx | Updated type from Experimental_GeneratedImage to GeneratedFile |
| packages/task/package.json | Upgraded ai and AI SDK provider packages to v6 and v3 respectively |
| packages/task/src/trigger/understand-site.ts | Updated to use Output.object, changed model to gemini-3-flash-preview, removed llmParseJson |
| packages/task/src/trigger/understand-site-llm.ts | Updated to use Output.object, changed model to gpt-5-mini, removed llmParseJson |
| packages/task/src/lib/ai-tools/search-site.ts | Removed jsonSchema wrapper for inputSchema |
| packages/task/src/lib/ai-tools/llm-parse-json.ts | Deleted entire utility file (100 lines) |
| packages/task/src/lib/ai-tools/get-site-data.ts | Removed jsonSchema wrapper for input/output schemas |
| packages/db/package.json | Upgraded ai package to v6.0.86 |
| packages/api-user-vm/package.json | Upgraded ai and @ai-sdk/google to v6 and v3 |
| packages/api-user-vm/src/routes/content.ts | Made handler async, added await for convertToModelMessages |
| packages/api-user-vm/src/lib/ai.ts | Changed model to gemini-3-flash-preview |
| packages/api-seo/package.json | Upgraded ai and AI SDK provider packages to v6 and v3 respectively |
| packages/api-seo/src/workflows/*.ts | Updated 5 workflow files to use Output.object and direct schemas |
| packages/api-seo/src/routes/chat.sendMessage.ts | Made agent creation async, wrapped consumeStream in Promise.resolve |
| packages/api-seo/src/lib/chat/handle-title-generation.ts | Added await for convertToModelMessages |
| packages/api-seo/src/lib/ai/writer-agent.ts | Made function async, added await for convertToModelMessages |
| packages/api-seo/src/lib/ai/strategist-agent.ts | Made function async, added await for convertToModelMessages |
| packages/api-seo/src/lib/ai/utils/llm-parse-json.ts | Deleted entire utility file (100 lines) |
| packages/api-seo/src/lib/ai/tools/*.ts | Updated 12 tool files to remove jsonSchema wrappers and use direct schemas |
| packages/api-seo/src/lib/ai/tools/settings-tools.ts | Replaced generateObject with generateText + Output.object, refactored to handle each schema type separately |
| apps/seo/package.json | Upgraded ai and @ai-sdk/react to v6 and v3 |
| .changeset/config.json | Fixed typo in ignore list: contact → seo-contact |
Cloudflare Preview URL for WWW 🎈: https://pr-397.rectangularlabs.com (custom domain)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: f502136442
```ts
  ) as JSONSchema7,
),
output: Output.object({
  schema: inferArticleTypeSchema,
```
Keep Google-compatible enum schema for article type output
Using `schema: inferArticleTypeSchema` here drops the previous JSON-schema normalization for Google models, but `articleTypeSchema` is a literal union that serializes as `anyOf`/`const` entries rather than typed string enums. In Gemini structured-output calls this can fail schema validation; when that happens, this function falls into the catch path and defaults to `"other"`, so article-type-specific rules are silently skipped for generated drafts.