
Migrate to litellm for model compatibility #24

Open

trevormells wants to merge 8 commits into Pickle-Pixel:main from trevormells:mmigrate_to_litellm

Conversation

@trevormells

Summary

This PR migrates ApplyPilot’s LLM layer to a LiteLLM-based adapter and standardizes provider/model configuration across CLI, wizard, docs, and runtime checks. It reduces provider-specific logic, adds Anthropic support, and tightens test coverage around LLM resolution/client behavior.


What Changed

  • Replaced the custom HTTP-based LLM implementation with a thin LiteLLM wrapper in llm.py.
  • Added a single resolve_llm_config() contract for provider/model/api key resolution.
  • Expanded provider support:
    • GEMINI_API_KEY
    • OPENAI_API_KEY
    • ANTHROPIC_API_KEY
    • LLM_URL (OpenAI-compatible local endpoint)
    • LLM_API_KEY (generic key fallback)
  • Standardized model semantics:
    • Provider-prefixed models supported (e.g. openai/gpt-4o-mini, gemini-3.0-flash)
    • Inference order for provider defaults when LLM_MODEL is not set
  • Updated LLM call sites to use client.chat(..., max_output_tokens=...) and removed ask() usage.
  • Updated applypilot init flow to allow saving multiple provider credentials and explicit LLM_MODEL.
  • Updated doctor/tier checks and failure messaging to match the new config contract.
  • Updated docs and .env.example for new provider options and model format.
  • Added optional Gemini smoke test + pytest marker config.
  • Added unit tests for LLM config resolution and LiteLLM client request behavior.
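The single resolution contract described above can be sketched roughly as follows. This is a hypothetical reconstruction from the env-var list in this PR description; the actual `resolve_llm_config()` in `llm.py` may use different names, defaults, and ordering.

```python
import os

# Hypothetical sketch of the resolution order described above; the real
# resolve_llm_config() in llm.py may differ in names and default models.
_PROVIDER_ENV = [
    ("gemini", "GEMINI_API_KEY"),
    ("openai", "OPENAI_API_KEY"),
    ("anthropic", "ANTHROPIC_API_KEY"),
]

def resolve_llm_config(env=None):
    env = os.environ if env is None else env
    model = env.get("LLM_MODEL")
    # An explicit local endpoint wins: LLM_URL plus optional LLM_API_KEY.
    if env.get("LLM_URL"):
        return {"provider": "local", "model": model,
                "api_base": env["LLM_URL"],
                "api_key": env.get("LLM_API_KEY", "")}
    # A provider-prefixed model ("openai/gpt-4o-mini") selects that provider.
    if model and "/" in model:
        provider = model.split("/", 1)[0]
        for name, var in _PROVIDER_ENV:
            if name == provider and env.get(var):
                return {"provider": name, "model": model, "api_key": env[var]}
        raise RuntimeError(f"LLM_MODEL names {provider!r} but no key is set")
    # Otherwise fall back to the first provider whose key is configured.
    for name, var in _PROVIDER_ENV:
        if env.get(var):
            return {"provider": name, "model": model, "api_key": env[var]}
    raise RuntimeError("No LLM provider configured")
```

Passing `env` as a dict keeps the function trivially unit-testable, which matches the resolution tests this PR adds.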

Tests

Added

  • test_llm_resolution.py
  • test_llm_client.py
  • test_gemini_smoke.py (optional smoke: @pytest.mark.smoke)

Suggested commands

pytest -q tests/test_llm_resolution.py tests/test_llm_client.py
GEMINI_API_KEY=... pytest -m smoke -q tests/test_gemini_smoke.py
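For the `-m smoke` selection to work without warnings, the marker has to be registered. A minimal fragment along these lines would do it (hypothetical; this PR's marker config may live in `pyproject.toml` instead of `pytest.ini`):

```ini
[pytest]
markers =
    smoke: optional live-API smoke tests (require real provider keys)
```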

Notes

  • Runtime LLM routing now follows LLM_MODEL provider prefix when multiple providers are configured.
  • Local OpenAI-compatible endpoints are supported via LLM_URL (with optional LLM_API_KEY).
  • Default model selections were refreshed (Gemini/OpenAI/Anthropic/local).
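The `client.chat(..., max_output_tokens=...)` call shape mentioned above maps naturally onto LiteLLM's `completion()` API, which accepts provider-prefixed model names. A minimal sketch, with hypothetical class and helper names (the real wrapper in `llm.py` may differ):

```python
# Hypothetical thin LiteLLM wrapper; litellm.completion() accepts
# provider-prefixed models such as "openai/gpt-4o-mini", plus optional
# api_key/api_base overrides for local OpenAI-compatible endpoints.
class LLMClient:
    def __init__(self, model, api_key=None, api_base=None):
        self.model = model
        self.api_key = api_key
        self.api_base = api_base

    def _request_kwargs(self, prompt, max_output_tokens):
        # Map the adapter's max_output_tokens onto litellm's max_tokens.
        kwargs = {
            "model": self.model,
            "messages": [{"role": "user", "content": prompt}],
            "max_tokens": max_output_tokens,
        }
        if self.api_key:
            kwargs["api_key"] = self.api_key
        if self.api_base:
            kwargs["api_base"] = self.api_base
        return kwargs

    def chat(self, prompt, max_output_tokens=1024):
        import litellm  # imported lazily so unit tests can stub it out
        resp = litellm.completion(**self._request_kwargs(prompt, max_output_tokens))
        return resp.choices[0].message.content
```

Keeping the request construction in a pure helper lets the client-behavior tests assert on the outgoing request without hitting any provider.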

@trevormells
Author

Addresses a wide range of model compatibility issues that have been surfaced in #19, #4, #9, and #16.

@Pickle-Pixel

rothnic added a commit to rothnic/ApplyPilot that referenced this pull request Feb 27, 2026
- Add AgentBackend abstraction for Claude and OpenCode
- Implement backend detection and preference logic
- Add MCP server management for both backends
- Maintain compatibility with PR Pickle-Pixel#24 LiteLLM integration
- Update scoring/tailoring/wizard to use new backend system