
Conversation


@mihirpenugonda mihirpenugonda commented Jun 8, 2025

Summary -
This PR makes API keys more flexible by allowing users to start with any single API key instead of requiring all three. Previously, users had to enter all API keys before using the app. Now they only need one from any provider (Google, OpenAI, or OpenRouter).

Summary by CodeRabbit

  • New Features

    • Added support for multiple AI model providers (Google, OpenAI, OpenRouter) for text generation.
    • Automatically selects an AI model based on available API keys.
    • Introduced a visual loading indicator for chat threads in the sidebar.
    • Added centralized management of loading states for chat threads.
  • Improvements

    • Enhanced API key form validation to require at least one key, not a specific provider.
    • Improved handling and selection of API keys throughout the app.


vercel bot commented Jun 8, 2025

@mihirpenugonda is attempting to deploy a commit to the Harsh's projects Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Jun 8, 2025

Walkthrough

This update introduces multi-provider support for AI model selection throughout the application. The backend completion API now dynamically selects the AI provider based on available API keys in the request. The frontend adapts to this by validating the presence of any key, auto-selecting models, and visually indicating loading states for chat threads. Several new hooks and stores facilitate these behaviors.

Changes

  • app/api/completion/route.ts: Completion API endpoint now supports Google, OpenAI, and OpenRouter providers by detecting API keys in headers and selecting the appropriate model and client. Error handling and logging were updated accordingly.
  • frontend/components/APIKeyForm.tsx: API key form validation updated: Google key is now optional; at least one of Google, OpenAI, or OpenRouter keys is required. The Google key field is no longer marked as required in the UI.
  • frontend/components/ChatInput.tsx: Added imports and invocation of the new useAutoSelectModel hook to enable automatic model selection based on available API keys. No other logic was changed.
  • frontend/components/ChatSidebar.tsx: Introduced loading state indicators for chat threads using useTitleLoadingStore. A spinner icon now appears next to thread titles while loading.
  • frontend/hooks/useAutoSelectModel.ts: New hook useAutoSelectModel added to automatically select an AI model based on which API keys are available, updating the selected model in the store as needed.
  • frontend/hooks/useMessageSummary.ts: Enhanced to use the first available API key and its provider, dynamically setting the request header. Integrated per-thread loading state management via useTitleLoadingStore. The returned complete function now wraps loading state logic.
  • frontend/stores/APIKeyStore.ts: Added getFirstAvailableKey method to retrieve the first available API key and provider. Updated hasRequiredKeys to check for any provider's key.
  • frontend/stores/TitleLoadingStore.ts: New Zustand store useTitleLoadingStore introduced to manage and query loading states for threads, with methods to set and check loading status by thread ID.
  • lib/models.ts: Standardized string literals to double quotes. Added getProviderHeaderKey function to retrieve the correct header key for a provider, defaulting to a constructed value if not found (a hedged sketch of this helper follows this list).
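As an illustration of that header-key mapping, here is a minimal sketch of what getProviderHeaderKey might look like. The explicit header values and the import path are assumptions; only the X-{Provider}-API-Key convention and the constructed fallback are described in this review.

import { Provider } from "@/frontend/stores/APIKeyStore"; // hypothetical import path

// Hypothetical sketch: explicit header keys per provider, with a constructed
// fallback for any provider that lacks an explicit entry.
const HEADER_KEYS: Partial<Record<Provider, string>> = {
  google: "X-Google-API-Key",
  openai: "X-OpenAI-API-Key",
  openrouter: "X-OpenRouter-API-Key",
};

export const getProviderHeaderKey = (provider: Provider): string =>
  HEADER_KEYS[provider] ??
  `X-${provider.charAt(0).toUpperCase() + provider.slice(1)}-API-Key`;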

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant APIKeyForm
    participant APIKeyStore
    participant ChatInput
    participant useAutoSelectModel
    participant CompletionAPI
    participant AIProvider

    User->>APIKeyForm: Enter API keys
    APIKeyForm->>APIKeyStore: Store keys
    ChatInput->>useAutoSelectModel: On mount/use
    useAutoSelectModel->>APIKeyStore: Get available keys & current model
    useAutoSelectModel->>ChatInput: Update selected model if needed
    User->>ChatInput: Submit message
    ChatInput->>CompletionAPI: POST /completion with available key in header
    CompletionAPI->>AIProvider: Initialize client based on key/provider
    AIProvider-->>CompletionAPI: Generate completion
    CompletionAPI-->>ChatInput: Return completion
    ChatInput-->>User: Display response
sequenceDiagram
    participant User
    participant ChatSidebar
    participant TitleLoadingStore
    participant useMessageSummary
    participant CompletionAPI

    User->>ChatSidebar: View threads
    useMessageSummary->>TitleLoadingStore: setLoading(threadId, true)
    useMessageSummary->>CompletionAPI: Request summary/title
    CompletionAPI-->>useMessageSummary: Respond with title
    useMessageSummary->>TitleLoadingStore: setLoading(threadId, false)
    ChatSidebar->>TitleLoadingStore: isLoading(threadId)
    TitleLoadingStore-->>ChatSidebar: Loading state (show spinner if true)

Possibly related PRs

  • senbo1/chat0#1: Introduced the initial completion API route using a fixed OpenAI model and environment keys. This PR is directly related as the current changes generalize and extend that functionality to support multiple providers and dynamic key selection.

Poem

A rabbit hopped through models new,
With keys for Google, OpenAI too.
It sniffed out threads that spun and spun,
And picked the first key—just for fun!
Now spinners twirl while chats ignite,
Multi-provider dreams take flight.
🐇✨

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

frontend/components/APIKeyForm.tsx, app/api/completion/route.ts, frontend/components/ChatInput.tsx, and 6 other files all failed with the same error:

Oops! Something went wrong! :(

ESLint: 9.28.0

ESLint couldn't find the plugin "eslint-plugin-react-hooks".

(The package "eslint-plugin-react-hooks" was not found when loaded as a Node module from the directory "".)

It's likely that the plugin isn't installed correctly. Try reinstalling by running the following:

npm install eslint-plugin-react-hooks@latest --save-dev

The plugin "eslint-plugin-react-hooks" was referenced from the config file in " » eslint-config-next/core-web-vitals » /node_modules/.pnpm/eslint-config-next@15.3.2_eslint@9.28.0_jiti@2.4.2__typescript@5.8.3/node_modules/eslint-config-next/index.js".

If you still can't figure out the problem, please see https://eslint.org/docs/latest/use/troubleshooting.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

🧹 Nitpick comments (2)
frontend/components/ChatInput.tsx (1)

27-27: Remove unused import.

The useTitleLoadingStore import is not used anywhere in this component.

-import { useTitleLoadingStore } from '../stores/TitleLoadingStore';
frontend/hooks/useMessageSummary.ts (1)

45-45: Error message could be more specific.

The generic error message doesn't indicate which operation failed (title generation vs. summary generation) or provide guidance on potential causes.

- toast.error("Failed to generate a summary for the message");
+ toast.error(payload.isTitle ? "Failed to generate title" : "Failed to generate summary");
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ed9c4c3 and d18b239.

📒 Files selected for processing (9)
  • app/api/completion/route.ts (2 hunks)
  • frontend/components/APIKeyForm.tsx (1 hunks)
  • frontend/components/ChatInput.tsx (2 hunks)
  • frontend/components/ChatSidebar.tsx (2 hunks)
  • frontend/hooks/useAutoSelectModel.ts (1 hunks)
  • frontend/hooks/useMessageSummary.ts (3 hunks)
  • frontend/stores/APIKeyStore.ts (2 hunks)
  • frontend/stores/TitleLoadingStore.ts (1 hunks)
  • lib/models.ts (2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (6)
frontend/components/ChatInput.tsx (1)
frontend/hooks/useAutoSelectModel.ts (1)
  • useAutoSelectModel (6-28)
app/api/completion/route.ts (1)
app/api/chat/route.ts (1)
  • POST (11-88)
frontend/hooks/useAutoSelectModel.ts (3)
frontend/stores/APIKeyStore.ts (1)
  • useAPIKeyStore (36-74)
frontend/stores/ModelStore.ts (1)
  • useModelStore (30-49)
lib/models.ts (2)
  • getModelConfig (53-55)
  • AI_MODELS (3-10)
lib/models.ts (1)
frontend/stores/APIKeyStore.ts (1)
  • Provider (5-5)
frontend/components/ChatSidebar.tsx (2)
frontend/dexie/queries.ts (1)
  • getThreads (6-8)
frontend/stores/TitleLoadingStore.ts (1)
  • useTitleLoadingStore (9-27)
frontend/hooks/useMessageSummary.ts (4)
frontend/stores/APIKeyStore.ts (1)
  • useAPIKeyStore (36-74)
frontend/stores/TitleLoadingStore.ts (1)
  • useTitleLoadingStore (9-27)
lib/models.ts (1)
  • getProviderHeaderKey (57-65)
frontend/dexie/queries.ts (1)
  • createMessageSummary (108-120)
🔇 Additional comments (21)
frontend/components/ChatInput.tsx (2)

26-26: LGTM: Auto-model selection integration.

The useAutoSelectModel hook is correctly imported and will enable automatic model selection based on available API keys, supporting the PR's optional key functionality.


64-64: LGTM: Auto-model selection hook usage.

The hook is correctly invoked and will automatically switch to a model with an available API key when the current model lacks one.
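For context, a rough sketch of how such a hook could be wired up, assuming the store shapes hinted at in the code graph above (the field names and the useModelStore selector API are assumptions, not the exact hook in this PR):

import { useEffect } from "react";
import { useAPIKeyStore } from "@/frontend/stores/APIKeyStore"; // assumed paths
import { useModelStore } from "@/frontend/stores/ModelStore";
import { AI_MODELS, getModelConfig } from "@/lib/models";

// Hypothetical sketch: if the selected model's provider has no key configured,
// fall back to the first model whose provider does.
export const useAutoSelectModel = () => {
  const keys = useAPIKeyStore((state) => state.keys);
  const { selectedModel, setModel } = useModelStore();

  useEffect(() => {
    if (keys[getModelConfig(selectedModel).provider]) return; // current model is usable

    const fallback = AI_MODELS.find(
      (model) => keys[getModelConfig(model).provider]
    );
    if (fallback) setModel(fallback);
  }, [keys, selectedModel, setModel]);
};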

frontend/components/APIKeyForm.tsx (1)

21-30: LGTM: Validation schema correctly implements optional key support.

The schema appropriately makes individual API keys optional while ensuring at least one is provided through the refinement. This perfectly aligns with the PR's objective of supporting optional API keys.
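A minimal sketch of what such a schema could look like, assuming the form validates with Zod (the field names are guesses based on the providers in this PR):

import { z } from "zod";

// Hypothetical sketch: every key is optional on its own; the refinement rejects
// the form unless at least one of them is non-empty.
const apiKeysSchema = z
  .object({
    google: z.string().trim().optional(),
    openai: z.string().trim().optional(),
    openrouter: z.string().trim().optional(),
  })
  .refine((keys) => Boolean(keys.google || keys.openai || keys.openrouter), {
    message: "At least one API key is required",
  });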

frontend/stores/APIKeyStore.ts (3)

14-14: LGTM: Type definition supports multi-provider functionality.

The new getFirstAvailableKey method signature correctly returns both provider and key information, enabling dynamic provider selection.


52-54: LGTM: Validation logic supports any provider.

The modified hasRequiredKeys method correctly checks for any available API key instead of requiring only Google, supporting the optional key functionality.


61-67: LGTM: Well-implemented provider priority logic.

The getFirstAvailableKey implementation follows a reasonable priority order (Google → OpenAI → OpenRouter) and returns the necessary information for dynamic provider selection.
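As a standalone illustration of that priority logic (inside the store the keys would come from get() rather than a parameter; the exact shapes are assumptions):

type Provider = "google" | "openai" | "openrouter";

// Hypothetical sketch: return the first configured key, checking providers in
// the priority order described above.
const getFirstAvailableKey = (
  keys: Partial<Record<Provider, string>>
): { provider: Provider; key: string } | null => {
  for (const provider of ["google", "openai", "openrouter"] as const) {
    const key = keys[provider];
    if (key) return { provider, key };
  }
  return null;
};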

lib/models.ts (3)

1-1: LGTM: Provider type import supports multi-provider functionality.

The import correctly brings in the Provider type needed for the new getProviderHeaderKey function.


4-50: LGTM: Consistent quote style.

The standardization to double quotes improves code consistency.


57-65: ✅ Verification successful

LGTM: Well-designed provider header mapping function.

The function correctly maps providers to their header keys with a sensible fallback mechanism. The implementation supports the dynamic provider selection functionality.

Verify that the fallback header key format matches expected conventions for all providers:


🏁 Script executed:

#!/bin/bash
# Description: Check if any API documentation or existing code references specific header formats
# Expected: Confirm the fallback format `X-{Provider}-API-Key` is universally appropriate

# Search for any existing header key patterns in the codebase
rg -i "x-.*-api-key" --type ts --type js

# Search for any provider-specific header documentation or comments
rg -i "(header|key).*openrouter|openai|google" --type ts --type js -A 3 -B 3

Length of output: 10483


LGTM: Header key conventions validated

A project-wide search confirms that all existing providers use the X-{Provider}-API-Key pattern, and every known provider has an explicit headerKey entry in lib/models.ts to preserve correct casing. The fallback (X-${provider.charAt(0).toUpperCase() + provider.slice(1)}-API-Key) only applies to new providers and follows the same convention.

frontend/components/ChatSidebar.tsx (3)

16-16: LGTM: Clean import additions for loading functionality.

The imports for Loader2 icon and useTitleLoadingStore are correctly added to support the new loading state feature.

Also applies to: 19-19


25-25: LGTM: Proper store usage pattern.

Correctly extracts the isLoading function from the store using Zustand's selector pattern, which ensures optimal re-renders.


50-55: LGTM: Well-implemented loading state UI.

The loading spinner implementation is clean and follows good UX practices (a rough JSX sketch follows this list):

  • Uses flexbox for proper alignment
  • Applies appropriate sizing (h-3 w-3) and animation
  • Conditionally renders based on loading state
  • Uses shrink-0 to prevent spinner from shrinking
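In JSX terms, that could look roughly like the following sketch. The component structure, prop names, and the lucide-react import are assumptions; the class names are the ones noted above:

import { Loader2 } from "lucide-react";
import { useTitleLoadingStore } from "@/frontend/stores/TitleLoadingStore";

// Hypothetical sketch of a sidebar thread row that shows a spinner while the
// thread's title is still being generated.
function ThreadTitle({ id, title }: { id: string; title: string }) {
  const isLoading = useTitleLoadingStore((state) => state.isLoading);
  return (
    <div className="flex items-center gap-2">
      <span className="truncate">{title}</span>
      {isLoading(id) && <Loader2 className="h-3 w-3 shrink-0 animate-spin" />}
    </div>
  );
}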
app/api/completion/route.ts (5)

1-5: LGTM: Appropriate imports for multi-provider support.

The imports correctly include the necessary AI SDK providers and utilities for the enhanced functionality.


9-11: LGTM: Consistent header key reading pattern.

The API key extraction from headers follows a consistent pattern across all three providers.


34-41: LGTM: Appropriate error handling for missing API keys.

The error message clearly indicates the requirement and returns an appropriate HTTP status code.


45-45: LGTM: Dynamic model usage.

Correctly uses the dynamically selected model instead of a hardcoded one.


57-59: LGTM: Improved error logging and messages.

The error handling is more specific and includes proper logging.

frontend/stores/TitleLoadingStore.ts (2)

3-7: LGTM: Well-defined TypeScript interface.

The type definition clearly specifies the store's structure and methods with appropriate parameter types.


9-27: LGTM: Excellent Zustand store implementation.

The store implementation follows best practices:

  • Immutable updates: Creates a new Set instance in setLoading to ensure proper state updates
  • Efficient data structure: Uses Set for O(1) lookup and modification operations
  • Clean API: Simple, intuitive methods for managing loading states
  • Proper state management: Uses Zustand's set and get functions correctly

The implementation is thread-safe and efficient for managing multiple loading states simultaneously.
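Putting those points together, a sketch of a store along the lines described (the state field name is an assumption):

import { create } from "zustand";

type TitleLoadingState = {
  loadingThreadIds: Set<string>;
  setLoading: (threadId: string, loading: boolean) => void;
  isLoading: (threadId: string) => boolean;
};

// Hypothetical sketch: a Set of thread IDs currently generating titles, copied
// on every update so Zustand sees a new reference.
export const useTitleLoadingStore = create<TitleLoadingState>((set, get) => ({
  loadingThreadIds: new Set<string>(),
  setLoading: (threadId, loading) =>
    set((state) => {
      const next = new Set(state.loadingThreadIds);
      if (loading) next.add(threadId);
      else next.delete(threadId);
      return { loadingThreadIds: next };
    }),
  isLoading: (threadId) => get().loadingThreadIds.has(threadId),
}));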

frontend/hooks/useMessageSummary.ts (2)

24-29: Header generation logic is sound.

The dynamic header generation using getProviderHeaderKey correctly adapts to different providers. The conditional header object ensures compatibility when no key is available.
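Concretely, the conditional header object could be built roughly like this (a sketch; the variable and function names follow the snippets quoted elsewhere in this review):

const availableKey = getFirstAvailableKey();

// Hypothetical sketch: only attach a provider-specific header when a key exists,
// so requests without keys still send a well-formed (empty) header object.
const headers: Record<string, string> = availableKey
  ? { [getProviderHeaderKey(availableKey.provider)]: availableKey.key }
  : {};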


66-66: Function signature change maintains interface compatibility.

Returning the wrapper function instead of the original complete function maintains the expected interface while adding the loading state functionality.
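For illustration, the wrapper might look something like this inside the hook, assuming complete comes from the AI SDK's useCompletion and that the thread ID travels in the request body (both assumptions):

// Hypothetical sketch: toggle the per-thread loading flag around the request.
const completeWithLoading = async (
  prompt: string,
  options?: { body?: Record<string, unknown> }
) => {
  const threadId = options?.body?.threadId as string | undefined;
  if (threadId) setLoading(threadId, true);
  try {
    return await complete(prompt, options);
  } finally {
    if (threadId) setLoading(threadId, false);
  }
};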

Comment on lines +17 to +33
if (googleApiKey) {
const google = createGoogleGenerativeAI({
apiKey: googleApiKey,
});
model = google("gemini-2.5-flash-preview-04-17");
} else if (openaiApiKey) {
const openai = createOpenAI({
apiKey: openaiApiKey,
});
model = openai("gpt-4.1-mini");
} else if (openrouterApiKey) {
const openrouter = createOpenAI({
baseURL: "https://openrouter.ai/api/v1",
apiKey: openrouterApiKey,
});
model = openrouter("deepseek/deepseek-chat-v3-0324:free");
} else {

@coderabbitai coderabbitai bot Jun 8, 2025


🛠️ Refactor suggestion

Improve consistency and model selection logic.

The provider selection logic works but has some inconsistencies:

  1. Hardcoded model IDs: Each provider uses a hardcoded model ID instead of leveraging the model configuration system used elsewhere in the codebase.

  2. OpenRouter setup: Uses createOpenAI with a custom base URL, which is correct but could be more explicit.

Consider using the existing model configuration system for consistency:

- let model;
+ let model;
+ let provider: 'google' | 'openai' | 'openrouter';

  if (googleApiKey) {
+   provider = 'google';
    const google = createGoogleGenerativeAI({
      apiKey: googleApiKey,
    });
-   model = google("gemini-2.5-flash-preview-04-17");
+   model = google("gemini-2.5-flash");
  } else if (openaiApiKey) {
+   provider = 'openai';
    const openai = createOpenAI({
      apiKey: openaiApiKey,
    });
-   model = openai("gpt-4.1-mini");
+   model = openai("gpt-4o-mini");
  } else if (openrouterApiKey) {
+   provider = 'openrouter';
    const openrouter = createOpenAI({
      baseURL: "https://openrouter.ai/api/v1",
      apiKey: openrouterApiKey,
    });
-   model = openrouter("deepseek/deepseek-chat-v3-0324:free");
+   model = openrouter("deepseek/deepseek-r1:free");

This approach would be more maintainable and consistent with the model configuration pattern used in app/api/chat/route.ts.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In app/api/completion/route.ts between lines 17 and 33, the current provider
selection uses hardcoded model IDs and directly calls createOpenAI or
createGoogleGenerativeAI, which is inconsistent with the model configuration
system used elsewhere. Refactor this section to use the existing model
configuration system for selecting models, ensuring that model IDs come from the
configuration rather than being hardcoded. Also, make the OpenRouter setup more
explicit by clearly defining its client creation and model selection consistent
with other providers, following the pattern in app/api/chat/route.ts.

@mihirpenugonda (Author) replied:

I think this cleanup should happen in a separate PR where we improve model registration. Currently following what's already being done.

@coderabbitai coderabbitai bot replied:

@mihirpenugonda That makes perfect sense! Keeping this PR focused on the optional key support and handling the model registration improvements separately is a good approach for maintaining clean, focused changes.

Would you like me to create a GitHub issue to track the model registration cleanup for the future PR? This could help ensure the refactoring doesn't get forgotten and provide a place to document the specific improvements needed.

🐰
┌─────────────────┐
│ ◕ ◕ │
│ ω │
│ Noted! 📝 │
└─────────────────┘

Comment on lines +16 to +21
const getFirstAvailableKey = useAPIKeyStore(
(state) => state.getFirstAvailableKey
);
const setLoading = useTitleLoadingStore((state) => state.setLoading);

const availableKey = getFirstAvailableKey();

⚠️ Potential issue

Add validation for available API keys before making requests.

The hook retrieves the first available key but doesn't validate that one exists before proceeding with API calls. This could lead to failed requests with empty headers.

  const availableKey = getFirstAvailableKey();
+ 
+ if (!availableKey) {
+   toast.error("No API key available. Please configure at least one API key.");
+   return {
+     complete: () => {},
+     isLoading: false,
+   };
+ }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (resulting code):

const getFirstAvailableKey = useAPIKeyStore(
  (state) => state.getFirstAvailableKey
);
const setLoading = useTitleLoadingStore((state) => state.setLoading);

const availableKey = getFirstAvailableKey();

if (!availableKey) {
  toast.error("No API key available. Please configure at least one API key.");
  return {
    complete: () => {},
    isLoading: false,
  };
}
🤖 Prompt for AI Agents
In frontend/hooks/useMessageSummary.ts around lines 16 to 21, the code retrieves
the first available API key but does not check if the key exists before using it
in API requests. Add a validation step after getting the availableKey to confirm
it is not null or undefined. If no key is available, prevent the API call from
proceeding or handle the error gracefully to avoid making requests with empty
headers.

@CheatCodeSam

This is a great change.

