Feature: Enable programmatically pass in api_key besides reading from env #1134
base: main
Conversation
- Fixed Python line length issue in llm.py by breaking long type annotation
- Fixed Rust function signature formatting in all LLM client files
- Fixed long function call formatting in embed_text.rs
- All formatting now complies with project standards

- Fixed api_bail! usage in context expecting LlmApiConfig return type
- Replaced unwrap_or_else with proper if-let pattern matching
- Resolves compilation error in GitHub Actions build test

- Removed trailing whitespace from all LLM client files
- Fixed formatting issues in gemini.rs, litellm.rs, openai.rs, openrouter.rs, vllm.rs
- Fixed trailing whitespace in embed_text.rs
- All files now comply with cargo fmt standards
Hi @aryasoni98, sorry for late reply. Please see my latest comment.
    output_dimension: int | None = None
    task_type: str | None = None
    api_config: llm.VertexAiConfig | None = None
    api_key: str | None = None
Can we wrap the API key within a Transient Auth Entry (as requested here)? Thanks!
hi @aryasoni98 is this still active?

Yes
This PR implements the ability to pass API keys programmatically to the EmbedText function, enabling portable AI agents that don't depend on host environment variables. This is particularly useful for CI/CD environments like Bitbucket Pipelines, where environment variables are managed through pipeline configuration.

Solution
Added an api_key parameter to the EmbedText function and extended the LLM configuration system to support programmatic API key passing across all LLM providers.

🔧 Core Functionality
- Added api_key parameter to the EmbedText function spec

📁 Files Modified
Python Files
- python/cocoindex/functions.py - Added api_key parameter to EmbedText
- python/cocoindex/functions/_engine_builtin_specs.py - Added api_key parameter to EmbedText
- python/cocoindex/llm.py - Added new LLM config classes with API key support

Rust Files

- src/llm/mod.rs - Extended LlmApiConfig enum with new config types
- src/llm/voyage.rs - Updated Voyage client to accept API key from config
- src/llm/gemini.rs - Updated Gemini client to accept API key from config
- src/llm/anthropic.rs - Updated Anthropic client to accept API key from config
- src/llm/openai.rs - Updated OpenAI client to accept API key from config
- src/llm/litellm.rs - Updated LiteLLM client to accept API key from config
- src/llm/openrouter.rs - Updated OpenRouter client to accept API key from config
- src/llm/vllm.rs - Updated VLLM client to accept API key from config
- src/ops/functions/embed_text.rs - Updated EmbedText function to handle the api_key parameter

🚀 Usage Examples
Before (Environment Variable Only)
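The original code snippet was lost in extraction. A minimal sketch of the env-var-only style the description refers to, using a local stand-in dataclass rather than the real cocoindex spec class (the model and variable names are illustrative):

```python
import os
from dataclasses import dataclass
from typing import Optional

# Local stand-in for cocoindex's EmbedText spec, for illustration only.
@dataclass
class EmbedText:
    api_type: str
    model: str
    api_key: Optional[str] = None

# Before this PR: the key can only come from the host environment.
os.environ["OPENAI_API_KEY"] = "sk-example"  # set outside the program in practice
spec = EmbedText(api_type="openai", model="text-embedding-3-small")
key_used = os.environ.get("OPENAI_API_KEY")  # the client reads the env var internally
```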
After (Programmatic API Key)
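Again using a local stand-in for the extended spec, a sketch of the programmatic style this PR enables (field names follow the diff shown above; the key value is a placeholder):

```python
from dataclasses import dataclass
from typing import Optional

# Local stand-in for the extended EmbedText spec described in this PR.
@dataclass
class EmbedText:
    api_type: str
    model: str
    api_key: Optional[str] = None

# After this PR: the key is passed directly; no env var is needed on the host.
spec = EmbedText(
    api_type="openai",
    model="text-embedding-3-small",
    api_key="sk-passed-programmatically",
)
```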
Bitbucket Pipeline Example
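A hedged sketch of the pipeline scenario: Bitbucket Pipelines injects repository variables as environment variables, and the application forwards one explicitly into the spec (the variable name BITBUCKET_OPENAI_KEY and the stand-in dataclass are illustrative, not part of the PR):

```python
import os
from dataclasses import dataclass
from typing import Optional

# Stand-in for the extended EmbedText spec, as in the examples above.
@dataclass
class EmbedText:
    api_type: str
    model: str
    api_key: Optional[str] = None

# In a real pipeline this variable is set by the pipeline configuration.
os.environ["BITBUCKET_OPENAI_KEY"] = "sk-from-pipeline"
spec = EmbedText(
    api_type="openai",
    model="text-embedding-3-small",
    api_key=os.environ["BITBUCKET_OPENAI_KEY"],  # forwarded explicitly
)
```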
Multiple API Types
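A sketch of using several providers at once, with hypothetical local mirrors of the per-provider config classes the PR adds (each config carries its own key, so different providers can be mixed in one process):

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical mirrors of the per-provider config classes described in this PR.
@dataclass
class OpenAiConfig:
    api_key: Optional[str] = None

@dataclass
class AnthropicConfig:
    api_key: Optional[str] = None

@dataclass
class GeminiConfig:
    api_key: Optional[str] = None

# Each provider gets its own config object with its own key.
configs = {
    "openai": OpenAiConfig(api_key="sk-openai"),
    "anthropic": AnthropicConfig(api_key="sk-anthropic"),
    "gemini": GeminiConfig(api_key="sk-gemini"),
}
```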
This implementation maintains full backward compatibility:

- When no api_key is provided, the system falls back to environment variables

🧪 Testing
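A minimal, unit-test-style sketch of the environment-variable fallback behavior the PR describes (the helper resolve_api_key and the variable name are hypothetical; the real resolution happens in the Rust clients):

```python
import os
from typing import Optional

def resolve_api_key(explicit_key: Optional[str], env_var: str) -> Optional[str]:
    # An explicitly passed key wins; otherwise fall back to the environment.
    if explicit_key is not None:
        return explicit_key
    return os.environ.get(env_var)

os.environ["EXAMPLE_API_KEY"] = "sk-from-env"
explicit = resolve_api_key("sk-explicit", "EXAMPLE_API_KEY")   # explicit key wins
fallback = resolve_api_key(None, "EXAMPLE_API_KEY")            # env var fallback
missing = resolve_api_key(None, "EXAMPLE_API_KEY_UNSET_XYZ")   # nothing available
```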
📋 New LLM Config Classes
Added the following config classes with API key support:
- AnthropicConfig - For Anthropic Claude models
- GeminiConfig - For Google Gemini models
- VoyageConfig - For Voyage AI models
- LiteLlmConfig - For LiteLLM proxy models
- OpenRouterConfig - For OpenRouter models
- VllmConfig - For VLLM models
- OpenAiConfig - Added api_key field

🔍 Implementation Details
Rust Side
- Extended the LlmApiConfig enum to include all new config types
- Updated the LLM clients to accept an API key via the api_config parameter
- Updated the EmbedText function to create the appropriate API config based on the api_key parameter

Python Side
- Added api_key parameter to the EmbedText function spec
- Updated LlmSpec to support all new config types

🎉 Result
Issue #994 is now 100% resolved.