fix(models): Handle empty message in LiteLLM response (fixes #3618) #3699
Conversation
When a turn ends with only tool calls and no final agent message, `_model_response_to_generate_content_response` was raising `ValueError`. This fix returns an empty `LlmResponse` instead, allowing workflows to continue gracefully.

Changes:
- Return an empty `LlmResponse` when the message is None or empty
- Preserve `finish_reason` and `usage_metadata` if available
- Added comprehensive test cases for edge cases
- Fixed line-length issues for pylint compliance

Fixes: google#3618
Code Review
This pull request effectively addresses the ValueError that occurred when a LiteLLM response lacked a message, a valid scenario in tool-calling workflows. The solution to return an empty LlmResponse while preserving metadata is appropriate and well-implemented. The accompanying unit tests are thorough and cover the necessary edge cases. I have one suggestion to refactor the _model_response_to_generate_content_response function to eliminate some code duplication, which will enhance the code's maintainability.
Hi @sarojrout, thank you for your work on this pull request. We appreciate the effort you've invested.
Sure, will do it in some time. Thanks for reviewing, @ryanaiagent.
Please check whenever you get time, @ryanaiagent. Thanks!
When a turn ends with only tool calls and no final agent message, `_model_response_to_generate_content_response` was raising `ValueError`. This fix returns an empty `LlmResponse` instead, allowing workflows to continue gracefully.
Changes:
- Return an empty `LlmResponse` when the message is None or empty
- Preserve `finish_reason` and `usage_metadata` if available
- Added comprehensive test cases for edge cases
Problem:
When using `google-adk` with LiteLLM and tools, a run can end with a tool call/tool response and no final agent message. In these cases, the LiteLLM wrapper was raising `ValueError: No message in response` from `_model_response_to_generate_content_response`, causing workflows to crash even though this is a valid scenario. This is particularly problematic in multi-agent workflows.
Solution:
Instead of raising
ValueErrorwhenmessageisNoneor empty, the code now:LlmResponsewith empty content partsfinish_reasonif available (properly mapped to the appropriate enum)usage_metadataif availableThis solution treats a turn ending with only tool calls as a valid outcome, which aligns with the expected behavior described in the issue. The fix is backward-compatible - existing code that works will continue to work, and the edge case is now handled gracefully.
Testing Plan
Unit Tests:
Added three new test cases in `tests/unittests/models/test_litellm.py`:

- `test_model_response_to_generate_content_response_no_message_with_finish_reason` (covers `finish_reason="tool_calls"` with no message)
- `test_model_response_to_generate_content_response_no_message_no_finish_reason`
- `test_model_response_to_generate_content_response_empty_message_dict` (covers `message = {}`)

Test Results:
All existing tests pass.
Manual End-to-End (E2E) Tests:
I have tested with the utility code below, which you can try out.
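The utility script itself was not captured on this page. As a stand-in, here is a hypothetical, self-contained sketch of the scenario it exercises: a turn that ends with only a tool call, comparing the pre-fix and post-fix conversion behavior. The function names and response shapes are illustrative, not the actual google-adk API.

```python
from types import SimpleNamespace


def convert_old(response):
    # Pre-fix behavior: crash when the turn has no assistant message.
    message = getattr(response.choices[0], "message", None)
    if not message:
        raise ValueError("No message in response")
    return {"role": "model", "parts": [message.content]}


def convert_new(response):
    # Post-fix behavior: a tool-call-only turn yields an empty response.
    message = getattr(response.choices[0], "message", None)
    if not message:
        return {"role": "model", "parts": []}
    return {"role": "model", "parts": [message.content]}


# Simulate a turn that ends with a tool call and no final agent message.
turn = SimpleNamespace(
    choices=[SimpleNamespace(message=None, finish_reason="tool_calls")]
)

try:
    convert_old(turn)
    print("old path: ok")
except ValueError as e:
    print(f"old path: crashed ({e})")  # old path: crashed (No message in response)

print("new path:", convert_new(turn))  # new path: {'role': 'model', 'parts': []}
```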
Checklist
- Added docstring explaining the behavior when message is None
- Added inline comments explaining the fix logic
- All 104 existing tests pass
- All 3 new tests pass
Additional context
Code Changes:
- `src/google/adk/models/lite_llm.py::_model_response_to_generate_content_response()`: changed from raising `ValueError` to returning an empty `LlmResponse` when the message is None/empty