Refactor provider and model configurations for improved maintainability #41

bernoussama wants to merge 2 commits into main
Conversation
…ralized PROVIDER_CONFIG object with all provider settings
- Remove duplicate provider logic from getModelFromConfig, getDefaultModel, getBenchmarkModels
- Replace hardcoded model defaults with consolidated configuration
- Simplify models export using PROVIDER_CONFIG
- Fix TypeScript types with BaseProviderConfig interface
- Follow DRY principle to eliminate multiple declarations of same providers/models
…formatting
- Change default model from 'gpt-3.5-turbo' to 'gpt-4.1-mini'
- Refactor code for better readability and consistency
- Ensure maxRetries is declared as a constant
- Clean up benchmark model key assignment logic
Walkthrough

The changes refactor AI provider logic by introducing a centralized configuration object that consolidates provider metadata, model creation, and related functions. This replaces scattered switch-case statements and hardcoded mappings with unified structures, streamlining model instantiation, provider validation, and benchmark model selection throughout the codebase.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant Config as ai.ts (Provider Config)
    participant Factory as Model Factory
    Caller->>Config: getModelFromConfig(config)
    Config->>Config: Look up provider in PROVIDER_CONFIG
    Config->>Factory: Call createModel with config
    Factory-->>Config: Return Model instance
    Config-->>Caller: Return ModelConfig (with maxRetries)
```
Hello @bernoussama, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
Hello team, gemini-code-assist here to provide a summary of this pull request authored by @bernoussama. This PR focuses on a significant refactoring effort to centralize and streamline the configuration for AI providers and models within the src/lib/ai.ts file. The goal is to improve maintainability, reduce code duplication (adhering to the DRY principle), and make it easier to manage different providers and their specific settings, including default models, API keys, base URLs, and retry logic. The changes involve introducing a central configuration object and updating the functions that initialize and retrieve model information to use this new structure.
Highlights
- Centralized Configuration: Introduced a new `PROVIDER_CONFIG` object that consolidates all necessary configuration details for each supported AI provider (Groq, Google, OpenRouter, Anthropic, OpenAI, Ollama, Mistral, LM Studio, OpenAI Compatible) into a single, easy-to-manage structure. This includes default models, environment variable keys for API keys, model creation functions, default base URLs, and max retry settings. (A sketch of this shape follows the list.)
- Refactored Model Initialization: The `getModelFromConfig` function has been refactored to remove the large `switch` statement and instead dynamically retrieve provider-specific configuration and model creation logic from the new `PROVIDER_CONFIG` object, making it more scalable and maintainable.
- Streamlined Default Model Logic: The `getDefaultModelId` function now retrieves default model IDs directly from the `PROVIDER_CONFIG`. The `getDefaultModel` function has been updated to use a prioritized list (`PROVIDER_PRIORITY`) and the `PROVIDER_CONFIG` to automatically detect and initialize a default model based on available API keys, with a fallback to Ollama.
- Updated Benchmark Models: The `getBenchmarkModels` function now dynamically generates the list of benchmark models by iterating through the `PROVIDER_CONFIG` and including any provider that defines a `benchmarkModel`, using the centralized `createModel` function.
- Refactored Model Export: The `models` export has been refactored to be generated programmatically from the `PROVIDER_CONFIG`, ensuring consistency and automatically including all configured providers.
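For orientation, here is a minimal sketch of what that centralized structure could look like. The field names (`defaultModel`, `envKey`, `createModel`, `benchmarkModel`, `maxRetries`, `defaultBaseUrl`) are taken from this review; the concrete types, entries, and values below are illustrative assumptions, not the PR's actual code.

```typescript
import type { LanguageModel } from 'ai';
import { groq } from '@ai-sdk/groq';
import { createOpenAICompatible } from '@ai-sdk/openai-compatible';

// Field names come from this review; types, entries, and values are assumed.
interface BaseProviderConfig {
  defaultModel: string;
  envKey?: string; // env var holding the API key, when one is required
  benchmarkModel?: string; // set only for providers included in benchmarks
  maxRetries: number;
  defaultBaseUrl?: string; // local providers such as Ollama / LM Studio
  createModel: (modelId: string, baseUrl?: string) => LanguageModel;
}

const PROVIDER_CONFIG: Record<string, BaseProviderConfig> = {
  groq: {
    defaultModel: 'llama-3.3-70b-versatile',
    envKey: 'GROQ_API_KEY',
    benchmarkModel: 'llama-3.3-70b-versatile',
    maxRetries: 3, // assumed value
    createModel: (modelId) => groq(modelId),
  },
  lmstudio: {
    defaultModel: 'deepseek-r1-0528-qwen3-8b',
    defaultBaseUrl: 'http://localhost:1234/v1',
    maxRetries: 1, // assumed value
    createModel: (modelId, baseUrl = 'http://localhost:1234/v1') =>
      createOpenAICompatible({ name: 'lmstudio', baseURL: baseUrl })(modelId),
  },
};
```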
Changelog
- src/lib/ai.ts
  - Added `BaseProviderConfig` interface for provider configuration.
  - Added `PROVIDER_CONFIG` constant object centralizing configuration for all AI providers (default models, env keys, create functions, benchmark models, max retries, default base URLs).
  - Added `PROVIDER_PRIORITY` array to define the order for auto-detecting default providers.
  - Refactored `getModelFromConfig` (lines 157-198) to use `PROVIDER_CONFIG` for retrieving settings, validating API keys, setting env vars, and creating models (a hedged reconstruction follows this changelog).
  - Refactored `getDefaultModelId` (lines 201-207) to retrieve default model IDs from `PROVIDER_CONFIG`.
  - Refactored `getDefaultModel` (lines 210-251) to iterate through `PROVIDER_PRIORITY` and use `PROVIDER_CONFIG` for auto-detection based on environment variables, with an Ollama fallback.
  - Refactored `getBenchmarkModels` (lines 254-282) to generate benchmark models dynamically from `PROVIDER_CONFIG`.
  - Refactored `models` export (lines 432-442) to use `PROVIDER_CONFIG` to create model factory functions for each provider.
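To make the refactored flow concrete, here is a hedged reconstruction of what `getModelFromConfig` plausibly looks like after this change, reusing the `PROVIDER_CONFIG` sketch above. Note that the actual PR also writes the API key back to `process.env` (flagged in a review comment below); this sketch only reads from it.

```typescript
// Hypothetical reconstruction from the changelog; not the PR's actual code.
// Reuses BaseProviderConfig / PROVIDER_CONFIG from the sketch above.
import type { LanguageModel } from 'ai';

interface ModelConfig {
  provider: string;
  modelId: string;
  model: LanguageModel;
  maxRetries?: number;
}

function getModelFromConfig(config: {
  provider: string;
  model?: string;
  apiKey?: string;
}): ModelConfig {
  const providerConfig = PROVIDER_CONFIG[config.provider];
  if (!providerConfig) {
    throw new Error(`Unknown provider: ${config.provider}`);
  }

  // Prefer the explicit key, falling back to the provider's env var.
  const apiKey =
    config.apiKey ??
    (providerConfig.envKey ? process.env[providerConfig.envKey] : undefined);
  if (providerConfig.envKey && !apiKey) {
    throw new Error(`Missing API key for provider: ${config.provider}`);
  }

  const modelId = config.model ?? providerConfig.defaultModel;
  return {
    provider: config.provider,
    modelId,
    model: providerConfig.createModel(modelId),
    maxRetries: providerConfig.maxRetries,
  };
}
```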
Config in one place,
Makes code easier to trace,
Refactor's grace.
Code Review
This pull request does an excellent job of refactoring the provider and model configurations into a centralized PROVIDER_CONFIG object. This significantly improves the maintainability and readability of the code by adhering to the DRY principle. The changes to getModelFromConfig, getDefaultModelId, getDefaultModel, and getBenchmarkModels to utilize this new structure are well-implemented and make the logic much cleaner.
Overall, this is a valuable refactoring that makes the AI integration layer more robust and easier to extend in the future. Great work!
Summary of Findings
- Modifying `process.env`: The `getModelFromConfig` function modifies `process.env` when an API key is provided in the config. This can lead to unexpected side effects if the environment variable is relied upon elsewhere in the application.
- Error message clarity: The error message in `getDefaultModel` when no API key is found could be more informative by mentioning local provider options like Ollama, LM Studio, and OpenAI Compatible.
- Benchmark model initialization: The `getBenchmarkModels` function does not pass necessary arguments (`baseUrl`, potentially `apiKey`) to the `createModel` functions for `lmstudio` and `openaiCompatible`, which are expected by their definitions in `PROVIDER_CONFIG`. This will likely cause these benchmark models to fail initialization.
- Provider priority list: The `PROVIDER_PRIORITY` list excludes `mistral`, `lmstudio`, and `openaiCompatible`, affecting their inclusion in the environment variable-based automatic provider detection.
Merge Readiness
This pull request introduces significant improvements in code organization and maintainability. However, there is a high-severity issue identified regarding the initialization of benchmark models for certain providers, which needs to be addressed as it will likely cause runtime errors. There are also a couple of medium-severity issues related to environment variable modification and error message clarity that should ideally be fixed.
Therefore, I recommend requesting changes to address these issues before merging. I am unable to approve this pull request myself; please have other reviewers review and approve this code before merging.
```typescript
if (provider === 'lmstudio') {
  benchmarkModels[key] = config.createModel(config.benchmarkModel);
} else if (provider === 'openaiCompatible') {
  benchmarkModels[key] = config.createModel(config.benchmarkModel);
} else {
  benchmarkModels[key] = config.createModel(config.benchmarkModel);
}
```
The createModel functions for lmstudio and openaiCompatible are defined in PROVIDER_CONFIG to accept baseUrl and potentially apiKey arguments. However, when calling config.createModel within getBenchmarkModels, these arguments are not being passed.
This will likely cause issues when trying to initialize the benchmark models for LM Studio and OpenAI Compatible providers, as their createOpenAICompatible function expects a baseURL.
Consider passing the config.defaultBaseUrl (and potentially apiKey if benchmark models should support authentication) when calling createModel for these specific providers.
```typescript
if (provider === 'lmstudio') {
  benchmarkModels[key] = config.createModel(config.benchmarkModel, config.defaultBaseUrl);
} else if (provider === 'openaiCompatible') {
  // Assuming benchmark models don't require API keys for simplicity, but baseUrl is needed
  benchmarkModels[key] = config.createModel(config.benchmarkModel, config.defaultBaseUrl);
} else {
  benchmarkModels[key] = config.createModel(config.benchmarkModel);
}
```

```typescript
if (providerConfig.envKey && apiKey) {
  process.env[providerConfig.envKey] = apiKey;
}
```
Setting process.env directly within getModelFromConfig based on the config.apiKey can have unintended side effects. If this function is called multiple times with different config.apiKey values for the same provider, it will overwrite the environment variable for the current process, potentially affecting other parts of the application that might rely on the original environment variable value.
It's generally safer to use the apiKey variable (which already holds config.apiKey) directly when creating the model instance, allowing the underlying library or the createModel function itself to handle the fallback to process.env if config.apiKey is not provided, without modifying the global process.env state.
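A minimal sketch of that suggestion, assuming the provider factory accepts a per-instance key (`createOpenAI` from `@ai-sdk/openai` does); the helper name and wiring here are hypothetical:

```typescript
import { createOpenAI } from '@ai-sdk/openai';
import type { LanguageModel } from 'ai';

// Hypothetical helper: thread the key through to the provider factory
// instead of mutating global process.env state.
function createOpenAIModel(modelId: string, apiKey?: string): LanguageModel {
  // Read the environment only as a fallback; never write to process.env.
  const openai = createOpenAI({ apiKey: apiKey ?? process.env.OPENAI_API_KEY });
  return openai(modelId);
}
```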
```typescript
throw new Error(
  'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
);
```
The error message thrown when no API key is found is slightly less informative than the previous version. It currently only lists the API key providers. The previous message also mentioned setting up Ollama/LM Studio as alternatives, which is helpful context for users who might prefer local models.
Consider restoring the mention of local options like Ollama, LM Studio, and OpenAI Compatible (if applicable without an API key) to guide the user better.
```typescript
  'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio/OpenAI Compatible if using a local endpoint.'
);
```

```typescript
// Provider priority order for auto-detection
const PROVIDER_PRIORITY: ProviderKey[] = ['groq', 'google', 'openrouter', 'anthropic', 'openai', 'ollama'];
```
The PROVIDER_PRIORITY list currently excludes mistral, lmstudio, and openaiCompatible. This means these providers won't be considered during the automatic provider detection based on environment variables in getDefaultModel, except for the final ollama fallback.
Is this intentional? If these providers should also be part of the environment variable-based auto-detection sequence (e.g., if MISTRAL_API_KEY is set), they should be added to this priority list.
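If they should participate, one possible extension looks like this; the ordering is a judgment call, and it assumes a `mistral` entry in `PROVIDER_CONFIG` with `envKey: 'MISTRAL_API_KEY'`:

```typescript
import type { ProviderKey } from './config'; // src/lib/config.ts per the review context

// Hypothetical extension: include mistral in env-based detection and keep the
// local, key-less providers at the end as fallbacks.
const PROVIDER_PRIORITY: ProviderKey[] = [
  'groq', 'google', 'openrouter', 'anthropic', 'openai',
  'mistral',
  'ollama', 'lmstudio', 'openaiCompatible',
];
```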
Actionable comments posted: 2
🧹 Nitpick comments (1)
src/lib/ai.ts (1)
69-69: Consider the maintainability of date-specific model identifiers.

The lmstudio default model includes a date (`deepseek-r1-0528-qwen3-8b`), which might require frequent updates as newer versions are released. Consider using a more generic model identifier or documenting the update process for these date-specific models.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
src/lib/ai.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/lib/ai.ts (1)
src/lib/config.ts (1)
`ProviderKey` (72-72)
🪛 Biome (1.9.4)
src/lib/ai.ts
[error] 230-230: Unnecessary continue statement
Unsafe fix: Delete the unnecessary continue statement
(lint/correctness/noUnnecessaryContinue)
🪛 GitHub Check: test
src/lib/ai.ts
[warning] 246-246: 'error' is defined but never used
[warning] 229-229: 'error' is defined but never used
🔇 Additional comments (4)
src/lib/ai.ts (4)
17-97: Excellent refactoring to centralize provider configurations!

The `BaseProviderConfig` interface and `PROVIDER_CONFIG` object effectively consolidate provider metadata, improving maintainability and reducing code duplication.

156-198: Clean refactoring of `getModelFromConfig` using the centralized config.

The function now efficiently uses `PROVIDER_CONFIG` for validation, API key handling, and model creation. The special handling for `lmstudio` and `openaiCompatible` providers is properly preserved.

200-207: Good simplification of `getDefaultModelId`.

The function now cleanly retrieves the default model from the centralized config with proper error handling.

432-442: Elegant dynamic generation of model exports.

The use of `reduce` to dynamically generate model exports from `PROVIDER_CONFIG` is a great improvement. It eliminates repetitive code and automatically stays synchronized with the provider configuration. (A sketch of the pattern follows below.)
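For readers unfamiliar with the pattern, here is a sketch of such a `reduce`-based export, building on the `PROVIDER_CONFIG` sketch earlier; the PR's actual key names and factory shape may differ.

```typescript
// Assumes the PROVIDER_CONFIG and LanguageModel sketch shown earlier.
// Each provider gets a factory that defaults to its configured model, so
// adding a provider to PROVIDER_CONFIG automatically exports it.
type ModelFactory = (modelId?: string) => LanguageModel;

export const models: Record<string, ModelFactory> = Object.entries(
  PROVIDER_CONFIG
).reduce(
  (acc, [provider, config]) => {
    acc[provider] = (modelId = config.defaultModel) => config.createModel(modelId);
    return acc;
  },
  {} as Record<string, ModelFactory>
);
```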
```diff
 // Get predefined models for benchmarking
 export function getBenchmarkModels(): Record<string, LanguageModel> {
-  return {
-    'or-devstral': openrouter('mistralai/devstral-small:free'),
-    // 'openrouter-mistral-7b': openrouter('mistralai/mistral-7b-instruct:free'),
-    // 'openrouter-llama3.3': openrouter('meta-llama/llama-3.3-8b-instruct:free'), // doesnt support tool calling or json
-    // 'groq-llama3-8b': groq('llama3-8b-8192'), // slower than llama-3.3-70b
-    'gemini-2.0-flash-lite': google('gemini-2.0-flash-lite'),
-    'ollama3.2': ollama('llama3.2'),
-    'llama-3.3-70b-versatile': groq('llama-3.3-70b-versatile'),
-    devstral: mistral('devstral-small-2505'),
-    'lmstudio-llama': (() => {
-      const lmstudio = createOpenAICompatible({
-        name: 'lmstudio',
-        baseURL: 'http://localhost:1234/v1',
-      });
-      return lmstudio('llama-3.2-1b');
-    })(),
-    'openaiCompatible-gpt': (() => {
-      const openaiCompatible = createOpenAICompatible({
-        name: 'openaiCompatible',
-        baseURL: 'http://localhost:8000/v1',
-      });
-      return openaiCompatible('gpt-3.5-turbo');
-    })(),
-  };
+  const benchmarkModels: Record<string, LanguageModel> = {};
+
+  // Add benchmark models from provider config
+  Object.entries(PROVIDER_CONFIG).forEach(([provider, config]) => {
+    if (config.benchmarkModel) {
+      const key =
+        provider === 'openrouter'
+          ? 'or-devstral'
+          : provider === 'ollama'
+            ? 'ollama3.2'
+            : provider === 'lmstudio'
+              ? 'lmstudio-llama'
+              : provider === 'openaiCompatible'
+                ? 'openaiCompatible-gpt'
+                : config.benchmarkModel;
+
+      if (provider === 'lmstudio') {
+        benchmarkModels[key] = config.createModel(config.benchmarkModel);
+      } else if (provider === 'openaiCompatible') {
+        benchmarkModels[key] = config.createModel(config.benchmarkModel);
+      } else {
+        benchmarkModels[key] = config.createModel(config.benchmarkModel);
+      }
+    }
+  });
+
+  return benchmarkModels;
 }
```
🛠️ Refactor suggestion
Eliminate redundant code in model creation.
The function has redundant code where all branches execute the same logic. Lines 272-277 can be simplified.
Apply this diff to eliminate the redundancy:
```diff
-      if (provider === 'lmstudio') {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      } else if (provider === 'openaiCompatible') {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      } else {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      }
+      benchmarkModels[key] = config.createModel(config.benchmarkModel);
```

🤖 Prompt for AI Agents
In src/lib/ai.ts between lines 253 and 282, the getBenchmarkModels function
contains redundant conditional branches where each branch calls
config.createModel with the same argument. Simplify the code by removing the
unnecessary if-else conditions and directly assign benchmarkModels[key] =
config.createModel(config.benchmarkModel) after determining the key.
```diff
 // Get available model based on environment variables (legacy function)
 export function getDefaultModel(): ModelConfig {
-  let model: LanguageModel;
-  let provider: string;
-  let modelId: string;
-
-  if (process.env.GROQ_API_KEY) {
-    provider = 'groq';
-    // modelId = 'llama-3.1-8b-instant';
-    modelId = 'llama-3.3-70b-versatile';
-
-    model = groq(modelId);
-  } else if (process.env.GOOGLE_GENERATIVE_AI_API_KEY) {
-    provider = 'google';
-    modelId = 'gemini-2.0-flash-lite';
-    model = google(modelId);
-  } else if (process.env.OPENROUTER_API_KEY) {
-    provider = 'openrouter';
-    modelId = 'google/gemini-2.0-flash-001';
-    model = openrouter(modelId);
-  } else if (process.env.ANTHROPIC_API_KEY) {
-    provider = 'anthropic';
-    modelId = 'claude-3-5-haiku-latest';
-    model = anthropic(modelId);
-  } else if (process.env.OPENAI_API_KEY) {
-    provider = 'openai';
-    modelId = 'gpt-4o-mini';
-    model = openai(modelId);
-  } else {
-    try {
-      provider = 'ollama';
-      modelId = 'llama3.2';
-      model = ollama(modelId);
-    } catch (error) {
-      throw new Error(
-        'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
-      );
-    }
-  }
-
-  return { provider, modelId, model };
+  // Try providers in priority order based on available API keys
+  for (const provider of PROVIDER_PRIORITY) {
+    const providerConfig = PROVIDER_CONFIG[provider];
+
+    // Skip providers that require API keys if none is available
+    if (providerConfig.envKey && !process.env[providerConfig.envKey]) {
+      continue;
+    }
+
+    try {
+      const modelId = providerConfig.defaultModel;
+      const model = providerConfig.createModel(modelId);
+      return {
+        provider,
+        modelId,
+        model,
+        maxRetries: providerConfig.maxRetries,
+      };
+    } catch (error) {
+      continue; // Try next provider
+    }
+  }
+
+  // Fallback to ollama if no API keys are available
+  try {
+    const provider = 'ollama';
+    const providerConfig = PROVIDER_CONFIG[provider];
+    const modelId = providerConfig.defaultModel;
+    const model = providerConfig.createModel(modelId);
+    return {
+      provider,
+      modelId,
+      model,
+      maxRetries: providerConfig.maxRetries,
+    };
+  } catch (error) {
+    throw new Error(
+      'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
+    );
+  }
 }
```
Address static analysis warnings in error handling.
The function logic is good, but there are some code quality issues to address:
- Unused `error` variables at lines 229 and 246
- Unnecessary `continue` statement at line 230
Apply this diff to fix the issues:
```diff
       try {
         const modelId = providerConfig.defaultModel;
         const model = providerConfig.createModel(modelId);
         return {
           provider,
           modelId,
           model,
           maxRetries: providerConfig.maxRetries,
         };
-      } catch (error) {
-        continue; // Try next provider
+      } catch {
+        // Try next provider
       }
```

And for the ollama fallback:

```diff
       model,
       maxRetries: providerConfig.maxRetries,
     };
-  } catch (error) {
+  } catch {
     throw new Error(
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
// Get available model based on environment variables (legacy function)
export function getDefaultModel(): ModelConfig {
  // Try providers in priority order based on available API keys
  for (const provider of PROVIDER_PRIORITY) {
    const providerConfig = PROVIDER_CONFIG[provider];

    // Skip providers that require API keys if none is available
    if (providerConfig.envKey && !process.env[providerConfig.envKey]) {
      continue;
    }

    try {
      const modelId = providerConfig.defaultModel;
      const model = providerConfig.createModel(modelId);
      return {
        provider,
        modelId,
        model,
        maxRetries: providerConfig.maxRetries,
      };
    } catch {
      // Try next provider
    }
  }

  // Fallback to ollama if no API keys are available
  try {
    const provider = 'ollama';
    const providerConfig = PROVIDER_CONFIG[provider];
    const modelId = providerConfig.defaultModel;
    const model = providerConfig.createModel(modelId);
    return {
      provider,
      modelId,
      model,
      maxRetries: providerConfig.maxRetries,
    };
  } catch {
    throw new Error(
      'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
    );
  }
}
```
🤖 Prompt for AI Agents
In src/lib/ai.ts between lines 209 and 251, remove the unused error variables in
the catch blocks at lines 229 and 246 by omitting the error parameter. Also,
eliminate the unnecessary continue statement at line 230 inside the catch block
since the loop will continue naturally after catching an error. This will
resolve static analysis warnings related to unused variables and redundant code.
Consolidate provider settings into a single configuration object, streamline model logic, and update the default model for better performance and readability. This refactoring enhances code clarity and adheres to the DRY principle.