
Refactor provider and model configurations for improved maintainability #41

Open

bernoussama wants to merge 2 commits into main from refactor/consolidate-provider-configs

Conversation

bernoussama (Owner) commented Jun 2, 2025

Consolidate provider settings into a single configuration object, streamline model logic, and update the default model for better performance and readability. This refactoring enhances code clarity and adheres to the DRY principle.

Summary by CodeRabbit

  • Refactor
    • Improved reliability and maintainability of AI provider selection and model handling by unifying provider configuration into a single, centralized structure. This streamlines model selection, fallback logic, and error messaging for users when configuring or switching AI providers. No changes to the user interface or core workflow are visible.

Commits

  • …ralized PROVIDER_CONFIG object with all provider settings
    - Remove duplicate provider logic from getModelFromConfig, getDefaultModel, getBenchmarkModels
    - Replace hardcoded model defaults with consolidated configuration
    - Simplify models export using PROVIDER_CONFIG
    - Fix TypeScript types with BaseProviderConfig interface
    - Follow DRY principle to eliminate multiple declarations of the same providers/models
  • …formatting
    - Change default model from 'gpt-3.5-turbo' to 'gpt-4.1-mini'
    - Refactor code for better readability and consistency
    - Ensure maxRetries is declared as a constant
    - Clean up benchmark model key assignment logic
coderabbitai bot commented Jun 2, 2025

Walkthrough

The changes refactor AI provider logic by introducing a centralized configuration object that consolidates provider metadata, model creation, and related functions. This replaces scattered switch-case statements and hardcoded mappings with unified structures, streamlining model instantiation, provider validation, and benchmark model selection throughout the codebase.

Changes

File(s)         Change Summary
src/lib/ai.ts   Centralized provider configuration into PROVIDER_CONFIG; refactored model creation, validation, default/fallback logic, and benchmark model selection to use this config; updated exports to dynamically generate model factories.

Sequence Diagram(s)

sequenceDiagram
    participant Caller
    participant ai.ts (Provider Config)
    participant Model Factory

    Caller->>ai.ts (Provider Config): getModelFromConfig(config)
    ai.ts (Provider Config)->>ai.ts (Provider Config): Lookup provider in PROVIDER_CONFIG
    ai.ts (Provider Config)->>Model Factory: Call createModel with config
    Model Factory-->>ai.ts (Provider Config): Return Model instance
    ai.ts (Provider Config)-->>Caller: Return ModelConfig (with maxRetries)

Possibly related PRs

  • bernoussama/lazyshell#32: Adds support for the "lmstudio" provider and updates related logic, which is now generalized by the centralized configuration in this PR.
  • bernoussama/lazyshell#33: Introduces the "openaiCompatible" provider and modifies model creation logic, overlapping with the provider handling refactored here.
  • bernoussama/lazyshell#10: Adds configuration-driven model selection and interactive setup, modifying the same AI model initialization logic unified in this PR.

Poem

In a warren of code, providers would sprawl,
But now in one burrow, they gather—all!
With configs united, the logic is neat,
No more switch-cases, no tangled retreat.
Models hop forth, as rabbits delight—
Centralized, streamlined, and ready to write!
🐇✨


gemini-code-assist bot left a comment

Hello @bernoussama, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

Hello team, gemini-code-assist here to provide a summary of this pull request authored by @bernoussama. This PR focuses on a significant refactoring effort to centralize and streamline the configuration for AI providers and models within the src/lib/ai.ts file. The goal is to improve maintainability, reduce code duplication (adhering to the DRY principle), and make it easier to manage different providers and their specific settings, including default models, API keys, base URLs, and retry logic. The changes involve introducing a central configuration object and updating the functions that initialize and retrieve model information to use this new structure.

Highlights

  • Centralized Configuration: Introduced a new PROVIDER_CONFIG object that consolidates all necessary configuration details for each supported AI provider (Groq, Google, OpenRouter, Anthropic, OpenAI, Ollama, Mistral, LM Studio, OpenAI Compatible) into a single, easy-to-manage structure. This includes default models, environment variable keys for API keys, model creation functions, default base URLs, and max retry settings; a sketch of this shape follows the list.
  • Refactored Model Initialization: The getModelFromConfig function has been refactored to remove the large switch statement and instead dynamically retrieve provider-specific configuration and model creation logic from the new PROVIDER_CONFIG object, making it more scalable and maintainable.
  • Streamlined Default Model Logic: The getDefaultModelId function now retrieves default model IDs directly from the PROVIDER_CONFIG. The getDefaultModel function has been updated to use a prioritized list (PROVIDER_PRIORITY) and the PROVIDER_CONFIG to automatically detect and initialize a default model based on available API keys, with a fallback to Ollama.
  • Updated Benchmark Models: The getBenchmarkModels function now dynamically generates the list of benchmark models by iterating through the PROVIDER_CONFIG and including any provider that defines a benchmarkModel, using the centralized createModel function.
  • Refactored Model Export: The models export has been refactored to be generated programmatically from the PROVIDER_CONFIG, ensuring consistency and automatically including all configured providers.
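Taken together, the highlights above imply a shape roughly like the following. This is a hedged sketch assembled from the summary, not code copied from the PR: the field names, the two sample entries, and the maxRetries values are assumptions, and the factory imports are the usual Vercel AI SDK provider packages.

import type { LanguageModel } from 'ai';
import { groq } from '@ai-sdk/groq';
import { ollama } from 'ollama-ai-provider';

// Sketch only: one entry per provider; two shown for illustration.
interface BaseProviderConfig {
  defaultModel: string;
  envKey?: string;           // env var holding the API key, if the provider needs one
  benchmarkModel?: string;   // set only for providers included in benchmarks
  defaultBaseUrl?: string;   // used by local/OpenAI-compatible endpoints
  maxRetries: number;
  createModel: (modelId: string, baseUrl?: string, apiKey?: string) => LanguageModel;
}

const PROVIDER_CONFIG: Record<string, BaseProviderConfig> = {
  groq: {
    defaultModel: 'llama-3.3-70b-versatile',
    envKey: 'GROQ_API_KEY',
    benchmarkModel: 'llama-3.3-70b-versatile',
    maxRetries: 2, // illustrative value
    createModel: (modelId) => groq(modelId),
  },
  ollama: {
    defaultModel: 'llama3.2',
    benchmarkModel: 'llama3.2',
    maxRetries: 2, // illustrative value
    createModel: (modelId) => ollama(modelId),
  },
};

With a table like this in place, adding a provider means adding one entry rather than touching every switch statement.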

Changelog

  • src/lib/ai.ts
    • Added BaseProviderConfig interface for provider configuration.
    • Added PROVIDER_CONFIG constant object centralizing configuration for all AI providers (default models, env keys, create functions, benchmark models, max retries, default base URLs).
    • Added PROVIDER_PRIORITY array to define the order for auto-detecting default providers.
    • Refactored getModelFromConfig (lines 157-198) to use PROVIDER_CONFIG for retrieving settings, validating API keys, setting env vars, and creating models.
    • Refactored getDefaultModelId (lines 201-207) to retrieve default model IDs from PROVIDER_CONFIG.
    • Refactored getDefaultModel (lines 210-251) to iterate through PROVIDER_PRIORITY and use PROVIDER_CONFIG for auto-detection based on environment variables, with an Ollama fallback.
    • Refactored getBenchmarkModels (lines 254-282) to generate benchmark models dynamically from PROVIDER_CONFIG.
    • Refactored models export (lines 432-442) to use PROVIDER_CONFIG to create model factory functions for each provider.

Config in one place,
Makes code easier to trace,
Refactor's grace.


gemini-code-assist bot left a comment

Code Review

This pull request does an excellent job of refactoring the provider and model configurations into a centralized PROVIDER_CONFIG object. This significantly improves the maintainability and readability of the code by adhering to the DRY principle. The changes to getModelFromConfig, getDefaultModelId, getDefaultModel, and getBenchmarkModels to utilize this new structure are well-implemented and make the logic much cleaner.

Overall, this is a valuable refactoring that makes the AI integration layer more robust and easier to extend in the future. Great work!

Summary of Findings

  • Modifying process.env: The getModelFromConfig function modifies process.env when an API key is provided in the config. This can lead to unexpected side effects if the environment variable is relied upon elsewhere in the application.
  • Error message clarity: The error message in getDefaultModel when no API key is found could be more informative by mentioning local provider options like Ollama, LM Studio, and OpenAI Compatible.
  • Benchmark model initialization: The getBenchmarkModels function does not pass necessary arguments (baseUrl, potentially apiKey) to the createModel functions for lmstudio and openaiCompatible, which are expected by their definitions in PROVIDER_CONFIG. This will likely cause these benchmark models to fail initialization.
  • Provider priority list: The PROVIDER_PRIORITY list excludes mistral, lmstudio, and openaiCompatible, affecting their inclusion in the environment variable-based automatic provider detection.

Merge Readiness

This pull request introduces significant improvements in code organization and maintainability. However, there is a high-severity issue identified regarding the initialization of benchmark models for certain providers, which needs to be addressed as it will likely cause runtime errors. There are also a couple of medium-severity issues related to environment variable modification and error message clarity that should ideally be fixed.

Therefore, I recommend requesting changes to address these issues before merging. I am unable to approve this pull request myself; please have other reviewers review and approve this code before merging.

Comment on lines +271 to +277
if (provider === 'lmstudio') {
benchmarkModels[key] = config.createModel(config.benchmarkModel);
} else if (provider === 'openaiCompatible') {
benchmarkModels[key] = config.createModel(config.benchmarkModel);
} else {
benchmarkModels[key] = config.createModel(config.benchmarkModel);
}


Severity: high

The createModel functions for lmstudio and openaiCompatible are defined in PROVIDER_CONFIG to accept baseUrl and potentially apiKey arguments. However, when calling config.createModel within getBenchmarkModels, these arguments are not being passed.

This will likely cause issues when trying to initialize the benchmark models for LM Studio and OpenAI Compatible providers, as their createOpenAICompatible function expects a baseURL.

Consider passing the config.defaultBaseUrl (and potentially apiKey if benchmark models should support authentication) when calling createModel for these specific providers.

      if (provider === 'lmstudio') {
        benchmarkModels[key] = config.createModel(config.benchmarkModel, config.defaultBaseUrl);
      } else if (provider === 'openaiCompatible') {
        // Assuming benchmark models don't require API keys for simplicity, but baseUrl is needed
        benchmarkModels[key] = config.createModel(config.benchmarkModel, config.defaultBaseUrl);
      } else {
        benchmarkModels[key] = config.createModel(config.benchmarkModel);
      }

Comment on lines +176 to +178
if (providerConfig.envKey && apiKey) {
process.env[providerConfig.envKey] = apiKey;
}


Severity: medium

Setting process.env directly within getModelFromConfig based on the config.apiKey can have unintended side effects. If this function is called multiple times with different config.apiKey values for the same provider, it will overwrite the environment variable for the current process, potentially affecting other parts of the application that might rely on the original environment variable value.

It's generally safer to use the apiKey variable (which already holds config.apiKey) directly when creating the model instance, allowing the underlying library or the createModel function itself to handle the fallback to process.env if config.apiKey is not provided, without modifying the global process.env state.
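A minimal sketch of that safer pattern follows; the helper name and the idea of threading the key through the factory are illustrative, not the PR's actual API.

// Resolve the key per call instead of writing it into global process.env.
function resolveApiKey(explicitKey: string | undefined, envKey?: string): string | undefined {
  // Prefer the per-call key from the config; fall back to the environment, read-only.
  return explicitKey ?? (envKey ? process.env[envKey] : undefined);
}

// Usage sketch: hand the resolved key to the model factory instead of mutating env state.
// const model = providerConfig.createModel(modelId, baseUrl, resolveApiKey(config.apiKey, providerConfig.envKey));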

Comment on lines +247 to +250
throw new Error(
'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
);
}


Severity: medium

The error message thrown when no API key is found is slightly less informative than the previous version. It currently only lists the API key providers. The previous message also mentioned setting up Ollama/LM Studio as alternatives, which is helpful context for users who might prefer local models.

Consider restoring the mention of local options like Ollama, LM Studio, and OpenAI Compatible (if applicable without an API key) to guide the user better.

      'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio/OpenAI Compatible if using a local endpoint.'
    );

};

// Provider priority order for auto-detection
const PROVIDER_PRIORITY: ProviderKey[] = ['groq', 'google', 'openrouter', 'anthropic', 'openai', 'ollama'];


Severity: medium

The PROVIDER_PRIORITY list currently excludes mistral, lmstudio, and openaiCompatible. This means these providers won't be considered during the automatic provider detection based on environment variables in getDefaultModel, except for the final ollama fallback.

Is this intentional? If these providers should also be part of the environment variable-based auto-detection sequence (e.g., if MISTRAL_API_KEY is set), they should be added to this priority list.
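If the omission is unintentional, the fix is a one-entry extension per provider. The ordering below is purely illustrative; lmstudio and openaiCompatible are left out here because they depend on a reachable base URL rather than an env key.

const PROVIDER_PRIORITY: ProviderKey[] = [
  'groq',
  'google',
  'openrouter',
  'anthropic',
  'openai',
  'mistral', // would be picked up automatically when MISTRAL_API_KEY is set
  'ollama',  // local fallback; no API key required
];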

coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
src/lib/ai.ts (1)

69-69: Consider the maintainability of date-specific model identifiers.

The lmstudio default model includes a date (deepseek-r1-0528-qwen3-8b). This might require frequent updates as newer versions are released.

Consider using a more generic model identifier or documenting the update process for these date-specific models.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 013a5ba and 14d9c8a.

📒 Files selected for processing (1)
  • src/lib/ai.ts (3 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/lib/ai.ts (1)
src/lib/config.ts (1)
  • ProviderKey (72-72)
🪛 Biome (1.9.4)
src/lib/ai.ts

[error] 230-230: Unnecessary continue statement

Unsafe fix: Delete the unnecessary continue statement

(lint/correctness/noUnnecessaryContinue)

🪛 GitHub Check: test
src/lib/ai.ts

[warning] 246-246:
'error' is defined but never used


[warning] 229-229:
'error' is defined but never used

🔇 Additional comments (4)
src/lib/ai.ts (4)

17-97: Excellent refactoring to centralize provider configurations!

The BaseProviderConfig interface and PROVIDER_CONFIG object effectively consolidate provider metadata, improving maintainability and reducing code duplication.


156-198: Clean refactoring of getModelFromConfig using the centralized config.

The function now efficiently uses PROVIDER_CONFIG for validation, API key handling, and model creation. The special handling for lmstudio and openaiCompatible providers is properly preserved.
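For readers without the diff open, a lookup of this kind might look roughly as follows. This builds on the PROVIDER_CONFIG sketch earlier in this thread; the function name, signature, and error text are assumptions rather than quotes from the PR.

// Sketch: the old switch statement becomes a single table lookup.
function getModelFromConfigSketch(config: { provider: string; modelId?: string }) {
  const providerConfig = PROVIDER_CONFIG[config.provider];
  if (!providerConfig) {
    throw new Error(`Unknown provider: ${config.provider}`);
  }
  // Fall back to the provider's default model when none is specified.
  const modelId = config.modelId ?? providerConfig.defaultModel;
  const model = providerConfig.createModel(modelId);
  return { provider: config.provider, modelId, model, maxRetries: providerConfig.maxRetries };
}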


200-207: Good simplification of getDefaultModelId.

The function now cleanly retrieves the default model from the centralized config with proper error handling.


432-442: Elegant dynamic generation of model exports.

The use of reduce to dynamically generate model exports from PROVIDER_CONFIG is a great improvement. It eliminates repetitive code and automatically stays synchronized with the provider configuration.
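As a hedged illustration of the pattern being praised here (assuming the PROVIDER_CONFIG shape sketched earlier; the PR's actual generics and key types may differ):

// Build a { provider: (modelId) => LanguageModel } map straight from the config,
// so newly added providers are exported automatically.
export const models = Object.entries(PROVIDER_CONFIG).reduce(
  (acc, [provider, config]) => {
    acc[provider] = (modelId: string) => config.createModel(modelId);
    return acc;
  },
  {} as Record<string, (modelId: string) => LanguageModel>
);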

Comment on lines 253 to 282
  // Get predefined models for benchmarking
  export function getBenchmarkModels(): Record<string, LanguageModel> {
-   return {
-     'or-devstral': openrouter('mistralai/devstral-small:free'),
-     // 'openrouter-mistral-7b': openrouter('mistralai/mistral-7b-instruct:free'),
-     // 'openrouter-llama3.3': openrouter('meta-llama/llama-3.3-8b-instruct:free'), // doesn't support tool calling or JSON
-     // 'groq-llama3-8b': groq('llama3-8b-8192'), // slower than llama-3.3-70b
-     'gemini-2.0-flash-lite': google('gemini-2.0-flash-lite'),
-     'ollama3.2': ollama('llama3.2'),
-     'llama-3.3-70b-versatile': groq('llama-3.3-70b-versatile'),
-     devstral: mistral('devstral-small-2505'),
-     'lmstudio-llama': (() => {
-       const lmstudio = createOpenAICompatible({
-         name: 'lmstudio',
-         baseURL: 'http://localhost:1234/v1',
-       });
-       return lmstudio('llama-3.2-1b');
-     })(),
-     'openaiCompatible-gpt': (() => {
-       const openaiCompatible = createOpenAICompatible({
-         name: 'openaiCompatible',
-         baseURL: 'http://localhost:8000/v1',
-       });
-       return openaiCompatible('gpt-3.5-turbo');
-     })(),
-   };
+   const benchmarkModels: Record<string, LanguageModel> = {};
+
+   // Add benchmark models from provider config
+   Object.entries(PROVIDER_CONFIG).forEach(([provider, config]) => {
+     if (config.benchmarkModel) {
+       const key =
+         provider === 'openrouter'
+           ? 'or-devstral'
+           : provider === 'ollama'
+             ? 'ollama3.2'
+             : provider === 'lmstudio'
+               ? 'lmstudio-llama'
+               : provider === 'openaiCompatible'
+                 ? 'openaiCompatible-gpt'
+                 : config.benchmarkModel;
+
+       if (provider === 'lmstudio') {
+         benchmarkModels[key] = config.createModel(config.benchmarkModel);
+       } else if (provider === 'openaiCompatible') {
+         benchmarkModels[key] = config.createModel(config.benchmarkModel);
+       } else {
+         benchmarkModels[key] = config.createModel(config.benchmarkModel);
+       }
+     }
+   });
+
+   return benchmarkModels;
  }

🛠️ Refactor suggestion

Eliminate redundant code in model creation.

The function has redundant code where all branches execute the same logic. Lines 272-277 can be simplified.

Apply this diff to eliminate the redundancy:

-      if (provider === 'lmstudio') {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      } else if (provider === 'openaiCompatible') {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      } else {
-        benchmarkModels[key] = config.createModel(config.benchmarkModel);
-      }
+      benchmarkModels[key] = config.createModel(config.benchmarkModel);
🤖 Prompt for AI Agents
In src/lib/ai.ts between lines 253 and 282, the getBenchmarkModels function
contains redundant conditional branches where each branch calls
config.createModel with the same argument. Simplify the code by removing the
unnecessary if-else conditions and directly assign benchmarkModels[key] =
config.createModel(config.benchmarkModel) after determining the key.

Comment on lines 209 to 251
  // Get available model based on environment variables (legacy function)
  export function getDefaultModel(): ModelConfig {
-   let model: LanguageModel;
-   let provider: string;
-   let modelId: string;
-
-   if (process.env.GROQ_API_KEY) {
-     provider = 'groq';
-     // modelId = 'llama-3.1-8b-instant';
-     modelId = 'llama-3.3-70b-versatile';
-
-     model = groq(modelId);
-   } else if (process.env.GOOGLE_GENERATIVE_AI_API_KEY) {
-     provider = 'google';
-     modelId = 'gemini-2.0-flash-lite';
-     model = google(modelId);
-   } else if (process.env.OPENROUTER_API_KEY) {
-     provider = 'openrouter';
-     modelId = 'google/gemini-2.0-flash-001';
-     model = openrouter(modelId);
-   } else if (process.env.ANTHROPIC_API_KEY) {
-     provider = 'anthropic';
-     modelId = 'claude-3-5-haiku-latest';
-     model = anthropic(modelId);
-   } else if (process.env.OPENAI_API_KEY) {
-     provider = 'openai';
-     modelId = 'gpt-4o-mini';
-     model = openai(modelId);
-   } else {
+   // Try providers in priority order based on available API keys
+   for (const provider of PROVIDER_PRIORITY) {
+     const providerConfig = PROVIDER_CONFIG[provider];
+
+     // Skip providers that require API keys if none is available
+     if (providerConfig.envKey && !process.env[providerConfig.envKey]) {
+       continue;
+     }
+
      try {
-       provider = 'ollama';
-       modelId = 'llama3.2';
-       model = ollama(modelId);
+       const modelId = providerConfig.defaultModel;
+       const model = providerConfig.createModel(modelId);
+       return {
+         provider,
+         modelId,
+         model,
+         maxRetries: providerConfig.maxRetries,
+       };
      } catch (error) {
-       throw new Error(
-         'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
-       );
+       continue; // Try next provider
      }
    }

-   return { provider, modelId, model };
+   // Fallback to ollama if no API keys are available
+   try {
+     const provider = 'ollama';
+     const providerConfig = PROVIDER_CONFIG[provider];
+     const modelId = providerConfig.defaultModel;
+     const model = providerConfig.createModel(modelId);
+     return {
+       provider,
+       modelId,
+       model,
+       maxRetries: providerConfig.maxRetries,
+     };
+   } catch (error) {
+     throw new Error(
+       'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
+     );
+   }
  }

⚠️ Potential issue

Address static analysis warnings in error handling.

The function logic is good, but there are some code quality issues to address:

  1. Unused error variables at lines 229 and 246
  2. Unnecessary continue statement at line 230

Apply this diff to fix the issues:

     try {
       const modelId = providerConfig.defaultModel;
       const model = providerConfig.createModel(modelId);
       return {
         provider,
         modelId,
         model,
         maxRetries: providerConfig.maxRetries,
       };
-    } catch (error) {
-      continue; // Try next provider
+    } catch {
+      // Try next provider
     }

And for the ollama fallback:

       model,
       maxRetries: providerConfig.maxRetries,
     };
-  } catch (error) {
+  } catch {
     throw new Error(
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Get available model based on environment variables (legacy function)
export function getDefaultModel(): ModelConfig {
// Try providers in priority order based on available API keys
for (const provider of PROVIDER_PRIORITY) {
const providerConfig = PROVIDER_CONFIG[provider];
// Skip providers that require API keys if none is available
if (providerConfig.envKey && !process.env[providerConfig.envKey]) {
continue;
}
try {
const modelId = providerConfig.defaultModel;
const model = providerConfig.createModel(modelId);
return {
provider,
modelId,
model,
maxRetries: providerConfig.maxRetries,
};
} catch {
// Try next provider
}
}
// Fallback to ollama if no API keys are available
try {
const provider = 'ollama';
const providerConfig = PROVIDER_CONFIG[provider];
const modelId = providerConfig.defaultModel;
const model = providerConfig.createModel(modelId);
return {
provider,
modelId,
model,
maxRetries: providerConfig.maxRetries,
};
} catch {
throw new Error(
'No API key found. Please set either GROQ_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY, OPENROUTER_API_KEY, ANTHROPIC_API_KEY, or OPENAI_API_KEY. Or setup Ollama/LM Studio'
);
}
}

🤖 Prompt for AI Agents
In src/lib/ai.ts between lines 209 and 251, remove the unused error variables in
the catch blocks at lines 229 and 246 by omitting the error parameter. Also,
eliminate the unnecessary continue statement at line 230 inside the catch block
since the loop will continue naturally after catching an error. This will
resolve static analysis warnings related to unused variables and redundant code.

