41 changes: 24 additions & 17 deletions app/api/chat/route.ts
@@ -1,10 +1,11 @@
-import { createGoogleGenerativeAI } from '@ai-sdk/google';
-import { createOpenAI } from '@ai-sdk/openai';
-import { createOpenRouter } from '@openrouter/ai-sdk-provider';
-import { streamText, smoothStream } from 'ai';
-import { headers } from 'next/headers';
-import { getModelConfig, AIModel } from '@/lib/models';
-import { NextRequest, NextResponse } from 'next/server';
+import { createGoogleGenerativeAI } from "@ai-sdk/google";
+import { createOpenAI } from "@ai-sdk/openai";
+import { createOpenRouter } from "@openrouter/ai-sdk-provider";
+import { createLLMGateway } from "@llmgateway/ai-sdk-provider";
+import { streamText, smoothStream } from "ai";
+import { headers } from "next/headers";
+import { getModelConfig, AIModel } from "@/lib/models";
+import { NextRequest, NextResponse } from "next/server";
 
 export const maxDuration = 60;
@@ -19,27 +20,32 @@ export async function POST(req: NextRequest) {
 
   let aiModel;
   switch (modelConfig.provider) {
-    case 'google':
+    case "google":
       const google = createGoogleGenerativeAI({ apiKey });
       aiModel = google(modelConfig.modelId);
       break;
 
-    case 'openai':
+    case "openai":
       const openai = createOpenAI({ apiKey });
       aiModel = openai(modelConfig.modelId);
       break;
 
-    case 'openrouter':
+    case "openrouter":
       const openrouter = createOpenRouter({ apiKey });
       aiModel = openrouter(modelConfig.modelId);
       break;
 
+    case "llmgateway":
+      const llmgateway = createLLMGateway({ apiKey });
+      aiModel = llmgateway(modelConfig.modelId);
+      break;
+
     default:
       return new Response(
-        JSON.stringify({ error: 'Unsupported model provider' }),
+        JSON.stringify({ error: "Unsupported model provider" }),
         {
           status: 400,
-          headers: { 'Content-Type': 'application/json' },
+          headers: { "Content-Type": "application/json" },
         }
       );
   }
Comment on lines 22 to 51
⚠️ Potential issue

Fix variable scoping in switch statement.

The static analysis is right: in JavaScript, const and let declarations inside a case clause are scoped to the entire switch block rather than to the clause, so they are visible to the other clauses and can trigger duplicate-declaration errors or temporal dead zone failures at runtime.

Apply this diff to fix the scoping issue by wrapping declarations in blocks:

 let aiModel;
 switch (modelConfig.provider) {
-  case "google":
+  case "google": {
     const google = createGoogleGenerativeAI({ apiKey });
     aiModel = google(modelConfig.modelId);
     break;
+  }

-  case "openai":
+  case "openai": {
     const openai = createOpenAI({ apiKey });
     aiModel = openai(modelConfig.modelId);
     break;
+  }

-  case "openrouter":
+  case "openrouter": {
     const openrouter = createOpenRouter({ apiKey });
     aiModel = openrouter(modelConfig.modelId);
     break;
+  }

-  case "llmgateway":
+  case "llmgateway": {
     const llmgateway = createLLMGateway({ apiKey });
     aiModel = llmgateway(modelConfig.modelId);
     break;
+  }

   default:
     return new Response(
🧰 Tools
🪛 Biome (1.9.4)

[error] 24-24: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Unsafe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 29-29: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Unsafe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 34-34: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Unsafe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)


[error] 39-39: Other switch clauses can erroneously access this declaration.
Wrap the declaration in a block to restrict its access to the switch clause.

The declaration is defined in this switch clause:

Unsafe fix: Wrap the declaration in a block.

(lint/correctness/noSwitchDeclarations)

🤖 Prompt for AI Agents
In app/api/chat/route.ts between lines 22 and 51, the variables declared inside
each switch case are not properly scoped, which can cause them to be accessible
across cases and lead to bugs. To fix this, wrap the code inside each case in
curly braces to create a block scope, ensuring variables like google, openai,
openrouter, and llmgateway are limited to their respective cases.
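For a concrete feel of the hazard, here is a minimal hypothetical snippet (not code from this PR): case clauses share the switch's block scope, so a declaration in one clause is lexically visible in the others, and reaching it from the wrong clause fails at runtime.

function demo(kind: string) {
  switch (kind) {
    case 'a':
      const shared = 'declared in case a'; // scoped to the whole switch block
      return shared;
    case 'b':
      // Legal to the parser, but if we dispatch straight to 'b', `shared`
      // is still in its temporal dead zone and this throws a ReferenceError.
      return shared;
    default:
      return undefined;
  }
}

demo('a'); // 'declared in case a'
demo('b'); // ReferenceError: Cannot access 'shared' before initialization

Wrapping each case body in braces gives each clause its own scope, which is exactly what the diff above does.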

@@ -48,7 +54,7 @@ export async function POST(req: NextRequest) {
       model: aiModel,
       messages,
       onError: (error) => {
-        console.log('error', error);
+        console.log("error", error);
       },
       system: `
       You are Chat0, an ai assistant that can answer questions and help with tasks.
@@ -65,23 +71,24 @@ export async function POST(req: NextRequest) {
       - Display:
         $$\\frac{d}{dx}\\sin(x) = \\cos(x)$$
       `,
-      experimental_transform: [smoothStream({ chunking: 'word' })],
+      experimental_transform: [smoothStream({ chunking: "word" })],
       abortSignal: req.signal,
     });
 
     return result.toDataStreamResponse({
       sendReasoning: true,
       getErrorMessage: (error) => {
         return (error as { message: string }).message;
       },
     });
   } catch (error) {
-    console.log('error', error);
+    console.log("error", error);
     return new NextResponse(
-      JSON.stringify({ error: 'Internal Server Error' }),
+      JSON.stringify({ error: "Internal Server Error" }),
       {
         status: 500,
-        headers: { 'Content-Type': 'application/json' },
+        headers: { "Content-Type": "application/json" },
       }
     );
   }
11 changes: 11 additions & 0 deletions frontend/components/APIKeyForm.tsx
@@ -22,6 +22,7 @@ const formSchema = z.object({
     message: 'Google API key is required for Title Generation',
   }),
   openrouter: z.string().trim().optional(),
+  llmgateway: z.string().trim().optional(),
   openai: z.string().trim().optional(),
 });

@@ -84,6 +85,16 @@ const Form = () => {
         required
       />
 
+      <ApiKeyField
+        id="llmgateway"
+        label="LLMGateway API Key"
+        models={['Claude 3.7 Sonnet', 'Claude 3.5 Sonnet']}
+        linkUrl="https://llmgateway.io/signup"
+        placeholder="llmgtwy_..."
+        register={register}
+        error={errors.llmgateway}
+      />
+
       <ApiKeyField
         id="openrouter"
         label="OpenRouter API Key"
7 changes: 1 addition & 6 deletions frontend/hooks/useMessageSummary.ts
@@ -1,6 +1,5 @@
 import { useCompletion } from '@ai-sdk/react';
 import { useAPIKeyStore } from '@/frontend/stores/APIKeyStore';
-import { toast } from 'sonner';
 import { createMessageSummary, updateThread } from '@/frontend/dexie/queries';
 
 interface MessageSummaryPayload {
@@ -22,18 +21,14 @@ export const useMessageSummary = () => {
     try {
       const payload: MessageSummaryPayload = await response.json();
 
-      if (response.ok) {
-        const { title, isTitle, messageId, threadId } = payload;
+      const { title, isTitle, messageId, threadId } = payload;
Comment on lines 22 to +24
⚠️ Potential issue

Critical: Restore proper error handling for HTTP responses.

Removing the response.ok check means failed HTTP responses are processed as if they had succeeded: parsing JSON from an error response can throw at runtime, and the user loses the failure feedback the removed toast provided.

    onResponse: async (response) => {
      try {
+       if (!response.ok) {
+         console.error('Request failed:', response.status, response.statusText);
+         return;
+       }
        const payload: MessageSummaryPayload = await response.json();
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-      const payload: MessageSummaryPayload = await response.json();
-      if (response.ok) {
-        const { title, isTitle, messageId, threadId } = payload;
+    onResponse: async (response) => {
+      try {
+        if (!response.ok) {
+          console.error('Request failed:', response.status, response.statusText);
+          return;
+        }
+        const payload: MessageSummaryPayload = await response.json();
+        const { title, isTitle, messageId, threadId } = payload;
+        // …rest of your logic…
+      } catch (error) {
+        console.error('Unexpected error in onResponse:', error);
+        throw error;
+      }
+    }
🤖 Prompt for AI Agents
In frontend/hooks/useMessageSummary.ts around lines 22 to 24, the code is
missing a check for response.ok before parsing JSON, which can cause runtime
errors on failed HTTP responses. Add a conditional to verify response.ok is true
before calling response.json(), and handle the error case appropriately by
throwing an error or returning a fallback value to ensure proper error feedback
for users.
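For completeness, here is a sketch of the handler that combines the reviewer's guard with the user-facing toast this PR deleted (assumes the sonner import is restored; an illustration, not code from the PR):

import { toast } from 'sonner'; // re-adding the import removed in this PR

onResponse: async (response) => {
  try {
    if (!response.ok) {
      // Restore the feedback the old else-branch provided
      toast.error('Failed to generate a summary for the message');
      return;
    }
    const payload: MessageSummaryPayload = await response.json();
    const { title, isTitle, messageId, threadId } = payload;
    if (isTitle) {
      await updateThread(threadId, title);
      await createMessageSummary(threadId, messageId, title);
    } else {
      await createMessageSummary(threadId, messageId, title);
    }
  } catch (error) {
    console.error(error);
  }
},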


       if (isTitle) {
         await updateThread(threadId, title);
         await createMessageSummary(threadId, messageId, title);
       } else {
         await createMessageSummary(threadId, messageId, title);
       }
-      } else {
-        toast.error('Failed to generate a summary for the message');
-      }
     } catch (error) {
       console.error(error);
     }
3 changes: 2 additions & 1 deletion frontend/stores/APIKeyStore.ts
@@ -1,7 +1,7 @@
 import { create, Mutate, StoreApi } from 'zustand';
 import { persist } from 'zustand/middleware';
 
-export const PROVIDERS = ['google', 'openrouter', 'openai'] as const;
+export const PROVIDERS = ['google', 'openrouter', 'llmgateway', 'openai'] as const;
 export type Provider = (typeof PROVIDERS)[number];
 
 type APIKeys = Record<Provider, string>;
@@ -38,6 +38,7 @@ export const useAPIKeyStore = create<APIKeyStore>()(
       keys: {
         google: '',
         openrouter: '',
+        llmgateway: '',
         openai: '',
       },
12 changes: 12 additions & 0 deletions lib/models.ts
@@ -7,6 +7,8 @@ export const AI_MODELS = [
   'Gemini 2.5 Flash',
   'GPT-4o',
   'GPT-4.1-mini',
+  'Claude 3.7 Sonnet',
+  'Claude 3.5 Sonnet'
 ] as const;
 
 export type AIModel = (typeof AI_MODELS)[number];
@@ -18,6 +20,16 @@ export type ModelConfig = {
 };
 
 export const MODEL_CONFIGS = {
+  'Claude 3.7 Sonnet': {
+    modelId: 'claude-3-7-sonnet-20250219',
+    provider: 'llmgateway',
+    headerKey: 'Authorization',
+  },
+  'Claude 3.5 Sonnet': {
+    modelId: 'claude-3-5-sonnet-20241022',
+    provider: 'llmgateway',
+    headerKey: 'Authorization',
+  },
Comment on lines +23 to +32
💡 Verification agent

🧩 Analysis chain

Verify Claude model IDs and Authorization header usage.

The model configurations follow the established pattern. However, please verify:

  1. Model IDs: Ensure claude-3-7-sonnet-20250219 and claude-3-5-sonnet-20241022 are correct and available via the llmgateway provider.
  2. Authorization header: Unlike other providers that use custom headers (e.g., X-OpenAI-API-Key), these use the standard Authorization header. Confirm this is the correct format for llmgateway.

🌐 Web query:

What are the correct model IDs for Claude 3.5 Sonnet and Claude 3.7 Sonnet models, and what is the expected API key header format for llmgateway provider?

💡 Result:

The model IDs for Claude 3.5 Sonnet and Claude 3.7 Sonnet are as follows:

  • Claude 3.5 Sonnet: claude-3-5-sonnet-20240620
  • Claude 3.7 Sonnet: claude-3-7-sonnet-20250219

These identifiers are used across various platforms, including Anthropic's API, AWS Bedrock, and Google Cloud's Vertex AI. (docs.anthropic.com)

Regarding the API key header format for the llmgateway provider, specific details are not readily available in the provided sources. Typically, API key headers are included in the request header as Authorization: Bearer YOUR_API_KEY. For precise implementation, please refer to the official documentation of the llmgateway provider or contact their support team for accurate information.



Update Claude 3.5 Sonnet model ID and confirm llmgateway Authorization header

  • lib/models.ts (lines 23–32):
    • The modelId for Claude 3.5 Sonnet should be updated from claude-3-5-sonnet-20241022 to claude-3-5-sonnet-20240620.
    • The modelId for Claude 3.7 Sonnet (claude-3-7-sonnet-20250219) is already correct.
  • Please verify that the llmgateway provider indeed uses the standard Authorization: Bearer YOUR_API_KEY header (and not a custom header like X-LLMGateway-API-Key).

Suggested diff:

   'Claude 3.7 Sonnet': {
     modelId: 'claude-3-7-sonnet-20250219',
     provider: 'llmgateway',
     headerKey: 'Authorization',
   },
-  'Claude 3.5 Sonnet': {
-    modelId: 'claude-3-5-sonnet-20241022',
+  'Claude 3.5 Sonnet': {
+    modelId: 'claude-3-5-sonnet-20240620',
     provider: 'llmgateway',
     headerKey: 'Authorization',
   },
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
   'Claude 3.7 Sonnet': {
     modelId: 'claude-3-7-sonnet-20250219',
     provider: 'llmgateway',
     headerKey: 'Authorization',
   },
   'Claude 3.5 Sonnet': {
-    modelId: 'claude-3-5-sonnet-20241022',
+    modelId: 'claude-3-5-sonnet-20240620',
     provider: 'llmgateway',
     headerKey: 'Authorization',
   },
🤖 Prompt for AI Agents
In lib/models.ts around lines 23 to 32, update the modelId for "Claude 3.5
Sonnet" from "claude-3-5-sonnet-20241022" to "claude-3-5-sonnet-20240620" to
reflect the correct and current model identifier. Confirm with the llmgateway
provider documentation or support that the API key should be sent using the
standard "Authorization" header with the format "Authorization: Bearer
YOUR_API_KEY" rather than a custom header, and adjust the headerKey value
accordingly if needed.
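For context, a sketch of how a per-model headerKey might be turned into request headers on the caller side (hypothetical helper; getModelConfig, AIModel, and the config shape come from lib/models.ts, but the Bearer-prefix logic is an assumption pending the llmgateway verification above):

import { getModelConfig, AIModel } from '@/lib/models';

// Hypothetical: build the auth header for the selected model.
// Assumes an 'Authorization' headerKey implies the Bearer scheme, while
// provider-specific header keys carry the raw API key (unverified).
function buildAuthHeaders(model: AIModel, apiKey: string): Record<string, string> {
  const config = getModelConfig(model);
  const value =
    config.headerKey === 'Authorization' ? `Bearer ${apiKey}` : apiKey;
  return { [config.headerKey]: value };
}

// Usage sketch: buildAuthHeaders('Claude 3.7 Sonnet', 'llmgtwy_...')
// → { Authorization: 'Bearer llmgtwy_...' }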

'Deepseek R1 0528': {
modelId: 'deepseek/deepseek-r1-0528:free',
provider: 'openrouter',
2 changes: 2 additions & 0 deletions package.json
@@ -9,10 +9,12 @@
     "lint": "next lint"
   },
   "dependencies": {
+    "@ai-sdk/anthropic": "^1.2.12",
     "@ai-sdk/google": "^1.2.19",
     "@ai-sdk/openai": "^1.3.22",
     "@ai-sdk/react": "^1.2.12",
     "@hookform/resolvers": "^5.0.1",
+    "@llmgateway/ai-sdk-provider": "^1.0.1",
     "@openrouter/ai-sdk-provider": "^0.4.6",
     "@radix-ui/react-dialog": "^1.1.14",
     "@radix-ui/react-dropdown-menu": "^2.1.15",
32 changes: 32 additions & 0 deletions pnpm-lock.yaml

Some generated files are not rendered by default.