Conversation

@AlanPonnachan (Contributor)

Handle null usage values to prevent validation errors

Relevant issues

Fixes #16330

Pre-Submission checklist

Please complete all items before asking a LiteLLM maintainer to review your PR.

(screenshot of the completed pre-submission checklist)

Type

🐛 Bug Fix
✅ Test

Changes

This pull request resolves a pydantic.ValidationError that occurred when sending data to the Langfuse callback.

Problem:
When an upstream provider (e.g., an OpenAI-compatible endpoint) returned a usage object with null values for token counts, these None values were passed directly to the Langfuse client, which requires integer values for these fields.

Solution:
The _log_langfuse_v2 method in litellm/integrations/langfuse/langfuse.py has been updated to robustly handle None values for all token fields (prompt_tokens, completion_tokens, total_tokens, etc.). It now coalesces any None value to 0 before constructing the usage and usage_details payloads for Langfuse. This ensures the data is always compliant with the Langfuse schema.

Testing:
A new unit test, test_log_langfuse_v2_handles_null_usage_values, has been added to tests/test_litellm/integrations/test_langfuse.py. This test:

  1. Simulates an API response with None values in the usage object.
  2. Asserts that the values sent to the Langfuse client are correctly converted to 0.
  3. Is isolated using unittest.mock.patch to avoid side effects and ensure it only tests the intended logic, consistent with the existing testing style in the file.
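The testing approach above can be sketched like this. It is a simplified stand-in, not the actual test from `tests/test_litellm/integrations/test_langfuse.py`; the `log_usage` function and the client's `record_usage` method are hypothetical names used only to show the mock-based isolation pattern:

```python
from unittest.mock import MagicMock

def log_usage(client, usage: dict) -> None:
    # Stand-in for the logger under test: coalesce None token counts
    # to 0 before forwarding them to the (mocked) Langfuse client.
    client.record_usage(
        prompt_tokens=usage.get("prompt_tokens") or 0,
        completion_tokens=usage.get("completion_tokens") or 0,
        total_tokens=usage.get("total_tokens") or 0,
    )

def test_handles_null_usage_values():
    # MagicMock replaces the real client, so no network calls occur
    # and the test exercises only the coalescing logic.
    mock_client = MagicMock()
    log_usage(mock_client, {"prompt_tokens": None,
                            "completion_tokens": None,
                            "total_tokens": None})
    mock_client.record_usage.assert_called_once_with(
        prompt_tokens=0, completion_tokens=0, total_tokens=0
    )

test_handles_null_usage_values()
```

Asserting on the mock's recorded call verifies the values actually sent downstream, which is exactly the boundary where the original `pydantic.ValidationError` was raised.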

@vercel (bot) commented Nov 8, 2025

@AlanPonnachan is attempting to deploy a commit to the CLERKIEAI Team on Vercel.

A member of the Team first needs to authorize it.


Development

Successfully merging this pull request may close these issues.

[Bug]: Langfuse ValidationError - null values in usage fields not filtered
