
Conversation

@barry-hollywood
Member

Refactor create_llm() in langchain_agent/agent_langchain.py to reduce cognitive complexity

This reduces SonarQube findings on the maintainability of the code base (issue #212).

Key Changes

I've refactored create_llm() by extracting the logically straightforward but hard-to-read if/elif chain into separate provider factory functions. Each factory self-contains its provider's logic, leaving the original function easier to read and understand.

The refactoring splits the monolithic if/elif chain (checking provider types) into five dedicated helper functions:

  • _create_openai_llm()
  • _create_azure_llm()
  • _create_bedrock_llm()
  • _create_ollama_llm()
  • _create_anthropic_llm()

Benefits

The key benefit of these changes is the reduced cognitive complexity of the create_llm() function.

Separating the logic like this means we can change a specific provider implementation without navigating a large conditional flow that handles multiple providers. Each provider's configuration and validation logic is now isolated and independently testable.

Files Changed

  • agent_runtime/langchain_agent/agent_langchain.py - main refactoring
  • agent_runtime/langchain_agent/tests/test_agent_langchain.py - unit tests for all provider functions

Testing Notes

Comprehensive tests have been added covering:

  • All five LLM providers
  • Required credential validation
  • Optional parameter handling
  • Missing dependency scenarios
  • Common args construction
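Because each helper is independently importable, the credential-validation cases above can be exercised directly, without going through create_llm(). An illustrative test shape (the helper body here is a stand-in with the validation contract assumed; the real tests live in test_agent_langchain.py):

```python
import unittest


def _create_openai_llm(config):
    # Stand-in for the real helper, kept only so this sketch runs alone.
    if not config.get("api_key"):
        raise ValueError("OpenAI provider requires an api_key")
    return {"provider": "openai"}


class TestOpenAIFactory(unittest.TestCase):
    def test_missing_api_key_raises(self):
        # Required-credential validation: no api_key means a clear error.
        with self.assertRaisesRegex(ValueError, "api_key"):
            _create_openai_llm({"provider": "openai"})

    def test_valid_config_builds_llm(self):
        llm = _create_openai_llm({"provider": "openai", "api_key": "sk-test"})
        self.assertEqual(llm["provider"], "openai")
```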

Note: Local test suite has pre-existing import errors with langchain on both main and this branch, unrelated to these changes. The refactored code passes syntax validation and only modifies the create_llm function structure without touching imports or dependencies.

barry-hollywood and others added 2 commits November 29, 2025 17:18
  Add comprehensive test coverage for the factory pattern refactoring of
  create_llm and its provider helper functions. Tests cover:

  - All five LLM providers (OpenAI, Azure, Bedrock, Ollama, Anthropic)
  - Validation error handling for missing credentials
  - Optional dependency checks (ImportError scenarios)
  - Optional field handling (base_url, organization, max_tokens, top_p)
  - Common args construction
  - Factory dispatch and provider selection

  Total: 34 test cases in 7 test classes following existing test patterns.

Signed-off-by: Barry H <barraoc@gmail.com>
