Feat/zai native integration #16359
Draft: 0xlws2 wants to merge 7 commits into BerriAI:main from 0xlws2:feat/zai-native-integration
+1,298 −45
Conversation
- Add ZAI provider with chat completion support
- Implement token counting and response transformations
- Add comprehensive test coverage for ZAI integration
- Register ZAI provider in main initialization

Co-authored-by: LiteLLM Team
- Add ZAI provider with chat completion support
- Implement token counting and response transformations
- Add comprehensive test coverage for ZAI integration
- Register ZAI provider in main initialization
- Fix test methods and error class hierarchy

Co-authored-by: LiteLLM Team
- Update default API base from China mainland (open.bigmodel.cn) to overseas (api.z.ai)
- Changes affect handler.py, transformation.py, and corresponding tests
- Maintains backward compatibility with custom API base configuration
@0xlws2 is attempting to deploy a commit to the CLERKIEAI Team on Vercel. A member of the Team first needs to authorize it.
- Simplified integration tests to focus on core functionality
- Updated PR template to clarify glm-4.6 as the only tested model
- Fixed test assertions to work with mocked handlers
- Added a live testing verification section in the PR template
- All 32 core tests now passing consistently
- Add ZAI routing logic in the main.py completion function
- Add model prefix detection for 'zai/' in get_llm_provider_logic.py
- Import ZaiChatConfig and create a zai_transformation instance
- Register ZAI in LITELLM_CHAT_PROVIDERS
- Add zai_key to module variables
- Remove handler.py as requested in review (follows provider_registration.md pattern)
- Move test_zai_thinking.py to the proper test directory
- Clean up duplicate imports and test references

All integration points are now complete, enabling: litellm.completion(model='zai/glm-4.6', messages=[...])
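The prefix detection the commit describes can be sketched as follows. This is an illustrative stand-in, not the actual `get_llm_provider_logic.py` code; the function name and fallback behavior are assumptions.

```python
# Hypothetical sketch of "zai/" model-prefix routing. The real LiteLLM
# code consults a full provider map; here we only show the prefix split.
def detect_provider(model: str) -> tuple:
    """Split a 'provider/model' string into (provider, model_name)."""
    if model.startswith("zai/"):
        return ("zai", model[len("zai/"):])
    return ("unknown", model)  # placeholder fallback for this sketch

provider, model_name = detect_provider("zai/glm-4.6")
print(provider, model_name)  # -> zai glm-4.6
```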
- Remove 'reasoning_tokens' from supported params list
- Remove custom reasoning handling logic that incorrectly used litellm_params
- Allow LiteLLM's standard extra_body mechanism to handle reasoning properly
- ZAI models can now use extra_body={'thinking': {'type': 'enabled'}} for reasoning
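The extra_body passthrough described above can be sketched like this. The helper below is illustrative only and assumes extra_body keys are merged directly into the outgoing request body, which matches LiteLLM's standard extra_body behavior as the commit describes it.

```python
# Hedged sketch: merging a ZAI 'thinking' block from extra_body into
# the request body. Not the actual transform code.
def build_request_body(model, messages, extra_body=None):
    body = {"model": model, "messages": messages}
    if extra_body:
        body.update(extra_body)  # pass provider-specific keys through
    return body

body = build_request_body(
    "glm-4.6",
    [{"role": "user", "content": "Think step by step."}],
    extra_body={"thinking": {"type": "enabled"}},
)
print(body["thinking"])  # -> {'type': 'enabled'}
```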
- Update get_supported_openai_params to only return actual ZAI-supported parameters
- Fix transform_request to filter out the unsupported parallel_tool_calls parameter
- Update test_transform_request_with_complex_tools to use tool_choice: 'auto' only
- Update test_transform_request_with_parallel_tool_calls to verify filtering
- Fix test_get_supported_openai_params to match the ZAI API documentation
- Minor documentation updates for zai-org model case handling

All tests now pass and accurately reflect ZAI's actual API capabilities, preventing 400 Bad Request errors from unsupported parameters.
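The filtering step above can be sketched as a simple allow-list pass. The SUPPORTED_PARAMS set here is an assumption for illustration, not copied from ZaiChatConfig; the authoritative list lives in get_supported_openai_params.

```python
# Illustrative sketch of dropping parameters ZAI rejects (such as
# parallel_tool_calls) before the request is sent, avoiding 400 errors.
SUPPORTED_PARAMS = {
    "temperature", "top_p", "max_tokens", "stream",
    "stop", "tools", "tool_choice",
}  # assumed set for this sketch

def filter_unsupported(params: dict) -> dict:
    """Keep only parameters on the allow-list."""
    return {k: v for k, v in params.items() if k in SUPPORTED_PARAMS}

cleaned = filter_unsupported({"temperature": 0.2, "parallel_tool_calls": True})
print(cleaned)  # -> {'temperature': 0.2}
```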
Title
Add ZAI (z.ai) native integration support
Relevant issues
#13059, #16225
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
- My PR passes make test-unit

Type
🆕 New Feature

🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test
Changes
🎯 New Feature: ZAI (z.ai) Native Integration
This PR adds ZAI as a first-class native provider to LiteLLM with full OpenAI compatibility.
Core Implementation:
- New provider directory litellm/llms/zai/ with complete chat completion support

Key Files Added/Modified:
API Configuration:
- Default API base: https://api.z.ai/api/paas/v4 (Overseas)
- Authentication via the ZAI_API_KEY environment variable
- Tested model: glm-4.6 ✅, with support for other ZAI models (glm-4, glm-4v, charglm-3, etc.)

Testing:
*edit: update reasoning usage example
Usage Example:
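The original usage snippet did not survive scraping; the sketch below is a hedged reconstruction based only on the call form shown in the commit notes (litellm.completion with a 'zai/' model and a 'thinking' extra_body). It assembles the kwargs without executing the network call, which would require a litellm build containing this PR and a real ZAI_API_KEY.

```python
# Hedged reconstruction of the elided usage example; the live call is
# shown commented out because it needs real credentials.
def make_completion_kwargs():
    return {
        "model": "zai/glm-4.6",
        "messages": [{"role": "user", "content": "Hello!"}],
        # Optional reasoning, per the PR notes:
        "extra_body": {"thinking": {"type": "enabled"}},
    }

kwargs = make_completion_kwargs()
# import litellm
# response = litellm.completion(**kwargs)
# print(response.choices[0].message.content)
```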
Benefits for Users:
Technical Achievements:
🧪 Test Results:
All core functionality tests pass (integration tests skipped without API keys). Successfully tested with the glm-4.6 model in a live environment.

🎯 Live Testing Verified:
- glm-4.6 (confirmed working in production) via the overseas endpoint (api.z.ai)