fix(config): infer LLM provider from model string#97

Closed
mikolajbadyl wants to merge 4 commits into main from fix/infer-llm-provider

Conversation

@mikolajbadyl
Member

Description

Fixed a bug where the CLI incorrectly fell back to Anthropic Claude when a model string was provided via environment variables (e.g. OCTRAFIC_MODEL=gemini-2.5-flash) but the provider string was omitted.

This change introduces intelligent inference so that if OCTRAFIC_PROVIDER is empty, the CLI parses the model prefix:

  • gpt-*, o1, o3 -> OpenAI
  • gemini-* -> Gemini
  • default -> Claude
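
The prefix rules above could be sketched roughly as follows. This is a hypothetical illustration, not the PR's actual diff; the function name `inferProvider` and the provider string values ("openai", "gemini", "claude") are assumptions:

```go
package main

import (
	"fmt"
	"strings"
)

// inferProvider sketches the fallback inference described above: when no
// provider is configured, derive one from the model name's prefix.
// (Hypothetical helper; names and return values are illustrative only.)
func inferProvider(model string) string {
	switch {
	case strings.HasPrefix(model, "gpt-"),
		strings.HasPrefix(model, "o1"),
		strings.HasPrefix(model, "o3"):
		return "openai"
	case strings.HasPrefix(model, "gemini-"):
		return "gemini"
	default:
		return "claude"
	}
}

func main() {
	fmt.Println(inferProvider("gemini-2.5-flash")) // gemini
	fmt.Println(inferProvider("gpt-4o"))           // openai
	fmt.Println(inferProvider("claude-3-opus"))    // claude
}
```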

Testing

Verified that OCTRAFIC_MODEL=gemini-2.5-flash octrafic now correctly routes through the Gemini client.

@mikolajbadyl mikolajbadyl deleted the fix/infer-llm-provider branch February 28, 2026 22:28
@@ -135,13 +135,12 @@ func GetActiveLLMConfig() (provider, apiKey, baseURL, modelName string) {
}

if provider == "" {

WARNING: Breaking change - the automatic API key lookup from provider-specific environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY) has been removed. Users must now use OCTRAFIC_API_KEY or configure their API key in the saved config file. This may break existing setups that rely on the provider-specific env vars.

