Resolves #100: pdd Setup should use llm_invoke and give access to all models #123
Conversation
Pull Request Overview
This PR refactors the pdd setup command to support all LLM providers dynamically rather than being limited to three hardcoded providers (OpenAI, Google, Anthropic). The changes enable the setup tool to discover all providers from the CSV configuration and use the existing llm_invoke function for API key validation, replacing custom HTTP-based testing functions.
Key changes:
- Replaced provider-specific API testing functions with a unified `test_api_key_with_llm_invoke()` function
- Made provider discovery dynamic by reading all unique API keys from `llm_model.csv` instead of hardcoding three providers
- Removed model filtering logic that excluded non-OpenAI/Google/Anthropic models from the user's configuration
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| pdd/setup_tool.py | Refactored to use llm_invoke for API testing, removed hardcoded provider lists, and enabled dynamic provider discovery from CSV |
| tests/test_setup_tool.py | Added comprehensive test suite with 14 tests covering dynamic provider discovery, llm_invoke usage, and regression tests |
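The test suite described above verifies that `llm_invoke` is actually used for key validation. One common way to write such a check (all names here are illustrative, not taken from the PR's test file) is to stub the invoke callable with a mock:

```python
from unittest import mock

# Illustrative only: a validator that accepts its invoke callable as a
# parameter, so a test can substitute a mock instead of a real LLM call.
# (The real test_api_key_with_llm_invoke may be structured differently.)
def validate_key(invoke):
    try:
        response = invoke(prompt="Say hello", input_json={}, strength=0.0)
    except Exception:
        return False
    return response is not None and 'result' in response

# A test can then assert both the outcome and that invoke was really called:
fake = mock.Mock(return_value={'result': 'hello'})
assert validate_key(fake) is True
fake.assert_called_once_with(prompt="Say hello", input_json={}, strength=0.0)
```

The same pattern covers the failure path by giving the mock a `side_effect` exception, which is how a regression test can prove no raw HTTP request sneaks back in.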
```python
response = llm_invoke(
    prompt="Say hello",
    input_json={},
    strength=0.0,  # Use cheapest model
```
**Copilot AI** commented on Nov 10, 2025:
The comment 'Use cheapest model' is misleading. The `strength` parameter represents model capability/power, not cost. A value of 0.0 selects the weakest/least capable model, which typically costs less but may not always be the absolute cheapest option. Consider clarifying: `strength=0.0,  # Use least capable model (typically cheapest)`
```diff
- strength=0.0,  # Use cheapest model
+ strength=0.0,  # Use least capable model (typically cheapest)
```
```diff
- return response.status_code != 401 and response.status_code != 403
+ # If we get here without exception and have a result, the key works
+ return response is not None and 'result' in response
```
**Copilot AI** commented on Nov 10, 2025:
The condition `response is not None` is redundant. If `response` were `None`, the `'result' in response` check would raise a `TypeError`, but this is already caught by the exception handler. Simplify to `return 'result' in response`.
```diff
- return response is not None and 'result' in response
+ return 'result' in response
```
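The reasoning behind this suggestion can be shown in isolation. In the sketch below (the function and variable names are illustrative, not from the PR), the broad `except` already converts the `TypeError` from `'result' in None` into `False`, which is exactly what an explicit `None` guard would return:

```python
# Illustration of the review point: when the membership test is wrapped in a
# broad try/except, an explicit None guard is dead weight.
def key_is_valid(response):
    try:
        # 'result' in None raises TypeError, which the handler below
        # turns into False -- the same result the None guard would give.
        return 'result' in response
    except Exception:
        return False
```

So both spellings behave identically for `None`, and the shorter one is easier to read.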
make test pass

@qanagattandyr can you please resolve the copilot issues?

will do tonight
Test Results - FAIL

Pull Request: #123

Overall Summary:

Regression Tests - FAIL
Errors:
- Command failed with exit code 2. Preprocess failed: Web tag not processed
- Test 3 failed (see logs)

Unit Tests (pytest) - FAIL
Errors:
- Command failed with exit code 1.

Sync Regression Tests - PASS

Test run completed at: 2025-11-25T01:22:16.195957
Summary
Fixes #100: pdd Setup should use llm_invoke and give access to all models
This PR refactors the `pdd setup` command to use `llm_invoke` for API key testing and removes hardcoded limitations that only supported 3 providers (OpenAI, Google, Anthropic). Now the setup tool dynamically discovers and supports all LLM providers configured in `llm_model.csv`.

Changes Made
- `pdd/setup_tool.py` - edited
- `tests/test_setup_tool.py` - added tests
Core Improvements

Replaced hardcoded API testing with `llm_invoke`
- Removed the provider-specific `test_openai_key()`, `test_google_key()`, and `test_anthropic_key()` functions
- Added a single `test_api_key_with_llm_invoke()` that works with any provider

Dynamic provider discovery
- `get_csv_variable_names()` now reads all unique API keys from CSV (not just 3 hardcoded ones)
- `discover_api_keys()` checks environment for all providers dynamically

Removed model filtering
- `save_configuration()` no longer filters models by hardcoded prefixes (`gpt-*`, `gemini/*`, `anthropic/*`)
- Models in `llm_model.csv` are now included if their API key is available

Additional Changes
- Removed the `requests` dependency (no longer needed since we use `llm_invoke`)
- Added a test suite (`tests/test_setup_tool.py`) with 14 tests covering all changes

Testing
All tests pass (14/14):
- `llm_invoke` is used for API key validation
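The dynamic provider discovery described in the summary can be sketched roughly as follows. The column name `api_key` and the function shape are assumptions; the real `llm_model.csv` schema and `get_csv_variable_names()` implementation may differ:

```python
import csv
import io

# Hypothetical sketch of dynamic provider discovery: collect every distinct
# API-key environment-variable name mentioned in the model CSV, instead of
# hardcoding three providers.
def get_csv_variable_names(csv_text):
    names = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row.get("api_key") or "").strip()
        if key and key not in names:
            names.append(key)  # preserve CSV order, drop duplicates
    return names
```

With this shape, adding a new provider to the CSV is enough for setup to discover it; no code change is needed to extend the provider list.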