Conversation
|
/marvin can you compare this implementation to the openai and anthropic ones and then to the google docs and make sure there's no funny business going on |
|
Claude finished @strawgate's task in 3m 2s —— View job

Google GenAI Handler Review Complete

I've thoroughly compared the Google GenAI handler implementation against the Anthropic and OpenAI handlers, and verified it against Google's official documentation. Here's what I found:

🎯 Overall Assessment

The implementation is solid and follows the existing patterns well. However, there are 3 critical issues and several improvements to consider:

🚨 Critical Issues

1. ThinkingConfig Used Inappropriately (Line 105)

Severity: High

```python
thinking_config=ThinkingConfig(thinking_budget=200),
```

Problem: ThinkingConfig with a hard-coded `thinking_budget` is applied to every request.

Impact: This will break for most users, since the example uses …

Fix: Either:
2. Tool Result ID Parsing Logic is Fragile (Lines 276-282)

Severity: Medium-High

```python
# Extract function name from toolUseId
# Our IDs are formatted as "{function_name}_{uuid8}", so extract the name
tool_use_id = content.toolUseId
if "_" in tool_use_id:
    # Split and rejoin all but the last part (the UUID suffix)
    parts = tool_use_id.rsplit("_", 1)
    function_name = parts[0]
else:
    # Fallback: use the full ID as the name
    function_name = tool_use_id
```

Problem: This assumes the tool use ID follows the `{function_name}_{uuid8}` format.
Comparison:
The Real Issue: Google's API doesn't provide a tool use ID in the FunctionCall, but FastMCP's `ToolResultContent` carries only the ID, not the function name.

Better Approach: Store a mapping of generated IDs to function names, or include the original function name in a recoverable way.

3. Schema Type Mapping Missing "null" Type (Lines 185-193)

Severity: Medium

```python
type_map: dict[str, str] = {
    "string": "STRING",
    "integer": "INTEGER",
    "number": "NUMBER",
    "boolean": "BOOLEAN",
    "array": "ARRAY",
    "object": "OBJECT",
}
```

Problem: The type map doesn't handle the `"null"` type.

Impact: If a schema contains `"type": "null"`, the conversion has no mapping for it.

Fix: Add handling for `"null"` (e.g., treat it as a nullable marker).
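One hedged sketch of the fix: extend the map and treat a bare `"null"` type as a nullable marker rather than a real type. The `convert_type` helper below is illustrative, not FastMCP's actual function:

```python
# Illustrative sketch: treat "null" as a nullable marker instead of a real type.
# `convert_type` is a hypothetical helper, not FastMCP's actual converter.
from typing import Any

TYPE_MAP: dict[str, str] = {
    "string": "STRING",
    "integer": "INTEGER",
    "number": "NUMBER",
    "boolean": "BOOLEAN",
    "array": "ARRAY",
    "object": "OBJECT",
}

def convert_type(schema: dict[str, Any]) -> dict[str, Any]:
    schema_type = schema.get("type")
    if schema_type == "null":
        # A bare "null" type has no Google equivalent; surface it as nullable.
        return {"nullable": True}
    if schema_type not in TYPE_MAP:
        # Unknown or missing type: fall back to a permissive empty schema.
        return {}
    return {"type": TYPE_MAP[schema_type]}

print(convert_type({"type": "null"}))    # {'nullable': True}
print(convert_type({"type": "string"}))  # {'type': 'STRING'}
```

The fallback-to-empty behavior is a design choice; raising on unknown types would surface schema bugs earlier but break otherwise-working tools.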
|
Test Failure Analysis

Summary: The static analysis job failed due to unused imports in the test file for the Google GenAI sampling handler.

Root Cause: The test file imports several types at the top level within the try block (lines 8, 12, 19) that are not actually used in the test code. These imports were included but then re-imported within individual test functions:
Suggested Solution: Remove the three unused imports from the module-level import block. Change:

```python
from google.genai.types import (  # type: ignore[import-untyped]
    Candidate,
    FunctionCallingConfigMode,
    GenerateContentResponse,
    ModelContent,
    Part,
    UserContent,
)
```

to:

```python
from google.genai.types import (  # type: ignore[import-untyped]
    FunctionCallingConfigMode,
    GenerateContentResponse,
    ModelContent,
    UserContent,
)
```

And change lines 15-22 from:

```python
from mcp.types import (
    CreateMessageResult,
    ModelHint,
    ModelPreferences,
    SamplingMessage,
    TextContent,
    ToolChoice,
)
```

to:

```python
from mcp.types import (
    CreateMessageResult,
    ModelHint,
    ModelPreferences,
    TextContent,
    ToolChoice,
)
```

The imports are re-imported locally where needed.

Detailed Analysis

Ruff Check Output

Why These Are Unused
Additional Formatting Changes

Ruff also applied some formatting fixes. These are cosmetic changes and don't affect functionality.

Related Files
|
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 544ef46f34
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```python
# Return appropriate result type based on whether tools were provided
if params.tools:
    return _response_to_result_with_tools(response, self.default_model)
return _response_to_create_message_result(response, self.default_model)
```
Report the model actually requested
The handler chooses the request model with _get_model(params.modelPreferences) but always returns self.default_model in the CreateMessageResult/CreateMessageResultWithTools. This mislabels responses whenever a caller provides ModelPreferences hints (or any other override), which can break downstream caching, routing, or telemetry keyed by model name because the response metadata no longer reflects the real model that was used.
```python
tool_use_id = content.toolUseId
if "_" in tool_use_id:
    # Split and rejoin all but the last part (the UUID suffix)
    parts = tool_use_id.rsplit("_", 1)
    function_name = parts[0]
```
Don’t infer tool name from toolUseId
Google function responses require the declared function name, but ToolResultContent only carries toolUseId (which is opaque in MCP). This code assumes the ID embeds the function name (splitting on _), so any tool results whose IDs don’t follow that convention—e.g., persisted conversations or tool calls from a different handler that uses random IDs—will send an invalid function name and cause generate_content to reject or ignore the tool response.
|
Note: Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough

Adds a new GoogleGenaiSamplingHandler at src/fastmcp/client/sampling/handlers/google_genai.py. The handler converts MCP SamplingMessage sequences and MCP Tool definitions into Google GenAI Content, FunctionDeclarations, ToolConfig, and ThinkingConfig when provided; selects the model from ModelPreferences or a default; calls google.genai.Client.generate_content; and translates responses into MCP CreateMessageResult or CreateMessageResultWithTools, including function_call/tool-use mapping, stopReason derivation, schema conversion, and validation/error handling.

🚥 Pre-merge checks | ✅ 1 | ❌ 4

❌ Failed checks (1 warning, 3 inconclusive)
✅ Passed checks (1 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches

🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out. |
Actionable comments posted: 4
🧹 Nitpick comments (1)
src/fastmcp/client/sampling/handlers/google_genai.py (1)
74-78: Make `thinking_config` opt-in instead of hard-coding a budget.

Line 105 hard-codes `ThinkingConfig(thinking_budget=200)` for every request, which overrides model defaults and may cause failures. Gemini 2.5 models default to ~8,192 tokens for thinking when unspecified, and Gemini 3+ models use `thinking_level` (enum) instead of `thinking_budget`; hard-coded values outside the model's supported range trigger errors. Add a `thinking_budget` parameter to `__init__` (defaulting to `None`) and use it conditionally, allowing the model's default behavior when not specified.

♻️ Suggested change
```diff
 def __init__(
     self,
     default_model: str,
     client: GoogleGenaiClient | None = None,
+    thinking_budget: int | None = None,
 ) -> None:
     self.client: GoogleGenaiClient = client or GoogleGenaiClient()
     self.default_model: str = default_model
+    self.thinking_budget: int | None = thinking_budget
@@
     response: GenerateContentResponse = (
         await self.client.aio.models.generate_content(
             model=self._get_model(model_preferences=params.modelPreferences),
             contents=contents,
             config=GenerateContentConfig(
                 system_instruction=params.systemPrompt,
                 temperature=params.temperature,
                 max_output_tokens=params.maxTokens,
                 stop_sequences=params.stopSequences,
-                thinking_config=ThinkingConfig(thinking_budget=200),
+                thinking_config=(
+                    ThinkingConfig(thinking_budget=self.thinking_budget)
+                    if self.thinking_budget is not None
+                    else None
+                ),
                 tools=google_tools,
                 tool_config=tool_config,
             ),
         )
     )
```
|
/marvin do we really need to translate the schema for gemini? is this really comprehensive? |
|
Claude finished @strawgate's task in 2m 32s —— View job

Analysis: Google GenAI Schema Translation

I've analyzed whether schema translation is necessary and assessed the comprehensiveness of the implementation. Here are my findings:
|
| Handler | Schema Translation | Reason |
|---|---|---|
| Anthropic (anthropic.py:300-316) | ❌ No translation | Anthropic accepts JSON Schema directly (`input_schema=dict(tool.inputSchema)`) |
| OpenAI (openai.py:301-320) | ❌ No translation | OpenAI accepts JSON Schema directly (`parameters=dict(tool.inputSchema)`) |
| Google GenAI | ✅ Needs translation | Google uses a different schema format with uppercase type names... OR DOES IT? |
The Real Requirement
Looking at the Google GenAI SDK documentation and migration guide, the FunctionDeclaration expects a parameters field that follows JSON Schema format but with potential differences:
- Type names: The API docs show lowercase type names (`"string"`, `"integer"`, etc.)
- Structure: Same as JSON Schema (properties, required, description, etc.)
- Additional features: Supports a `nullable` field for optional values
So translation is needed, but the current implementation has the type case backwards!
🔍 Comprehensiveness Assessment
Assuming the types were corrected to lowercase, here's what the translation covers:
✅ What's Handled Well
- Basic types: string, integer, number, boolean, array, object
- Nullable types: Converts `anyOf` with null type to `nullable: true`
- Nested objects: Recursively converts nested schemas
- Arrays with items: Handles array item schemas
- Descriptions and enums: Preserves these fields
- Required fields: Maintains required arrays
⚠️ What's Missing or Problematic
- Type arrays not handled (lines 183-193):

  ```python
  schema_type: str | None = schema.get("type")
  ```

  JSON Schema allows `"type": ["string", "null"]`, but this code assumes `type` is always a string, which will cause a `TypeError` when used with `.get()` on a list.

- Format field ignored: JSON Schema supports `format` (e.g., `"format": "date-time"`), but this is not preserved.

- Constraints ignored:
  - `minimum`, `maximum` for numbers
  - `minLength`, `maxLength` for strings
  - `minItems`, `maxItems` for arrays
  - `pattern` for string regex validation

  These are valid JSON Schema fields that might be useful for the model.

- `$ref` not handled: JSON Schema `$ref` references are not resolved or converted.

- `oneOf`/`allOf` not handled: Only `anyOf` is handled (for nullable types). Other combinators are ignored.

- Default values not preserved: The `default` field from JSON Schema is not carried over.
🎯 Is It Comprehensive Enough?
For basic tool usage: YES (with the type case fix)
- Covers the most common cases: simple types, objects, arrays, nullability
- Sufficient for typical function calling scenarios
- Matches the patterns shown in Google's examples
For advanced schemas: NO
- Missing type arrays (which can cause crashes)
- Missing validation constraints (min/max, patterns, etc.)
- Missing JSON Schema combinators (oneOf, allOf)
- Missing format specifications
🧪 Test Coverage Analysis
The tests (tests/client/sampling/handlers/test_google_genai_handler.py:129-170) verify:
- ✅ Basic type conversion (string → STRING, integer → INTEGER)
- ✅ Nullable types via anyOf
- ✅ Arrays with items
- ✅ Objects with properties and required fields
- ✅ Descriptions
But the tests are testing the WRONG behavior! They expect uppercase types:
```python
assert _convert_json_schema_to_google_schema({"type": "string"}) == {"type": "STRING"}
```

This should be:

```python
assert _convert_json_schema_to_google_schema({"type": "string"}) == {"type": "string"}
```

🚨 Action Required
- Fix the type mapping to use lowercase types
- Update all tests to expect lowercase types
- Consider adding support for type arrays to prevent crashes
- Document known limitations (missing constraints, combinators, etc.)
📚 Sources
- Function calling with the Gemini API
- Structured outputs | Gemini API
- Google Gen AI SDK documentation
- Migrate to the Google GenAI SDK
Summary
Schema translation IS necessary for Google GenAI, but the current implementation has a critical bug: it uses uppercase type names (STRING, INTEGER) when Google's API expects lowercase (string, integer). The translation logic is otherwise reasonably comprehensive for basic use cases but misses some edge cases (type arrays, validation constraints, schema combinators).
Test Failure Analysis

Summary: The static analysis workflow failed due to unused imports in the test file and a formatting issue in the handler file.

Root Cause: Three unused imports in the test file, plus one file needing reformatting.

Suggested Solution: Remove the unused imports and apply the formatter.

Detailed Analysis

Ruff Check Errors

The imports were likely added for type hints or mocking but are not actually used in the test functions.

Ruff Format Issue

One file needs reformatting.

Type Check

Type checking passes once the unused imports are removed.

Related Files

Modified files in this PR:
|
Test Failure Analysis

Summary: The static analysis (ty check) failed due to type checking errors.

Root Cause: The type checker cannot narrow the content type from `hasattr` checks.

Suggested Solution: Replace the `hasattr` checks with `isinstance` checks.

Change:

```python
if hasattr(content, "text"):
    try:
        manifest_data = json.loads(content.text)
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid manifest JSON for skill: {skill_name}") from e
```

to:

```python
if isinstance(content, mcp.types.TextResourceContents):
    try:
        manifest_data = json.loads(content.text)
    except json.JSONDecodeError as e:
        raise ValueError(f"Invalid manifest JSON for skill: {skill_name}") from e
```

And change:

```python
if hasattr(content, "text"):
    file_path.write_text(content.text)
elif hasattr(content, "blob"):
    # Handle base64-encoded binary content
    import base64
    file_path.write_bytes(base64.b64decode(content.blob))
```

to:

```python
if isinstance(content, mcp.types.TextResourceContents):
    file_path.write_text(content.text)
elif isinstance(content, mcp.types.BlobResourceContents):
    # Handle base64-encoded binary content
    import base64
    file_path.write_bytes(base64.b64decode(content.blob))
```

Additionally, remove the now-unused `# type: ignore` comments.
Detailed Analysis

Type Checker Errors

Why This Happens

Note About PR Changes

Important: The files with type errors (…)

Related Files

Files requiring fixes:

Import needed:

```python
import mcp.types  # Add to imports in src/fastmcp/utilities/skills.py if not present
```

Updated: 2026-01-28 04:51 UTC - Analysis reflects latest workflow run |
Test Failure Analysis

Summary: The static analysis (ty check) failed due to type checking errors in skills.py.

Root Cause: The code uses `hasattr` checks that the type checker cannot narrow.

Suggested Solution: Use `isinstance` checks instead.

Detailed Analysis

Type Checking Errors in skills.py

The issue occurs at two locations.

Line 106:

```python
if hasattr(content, "text"):
    manifest_data = json.loads(content.text)  # ❌ Type checker sees content.text as 'object'
```

Line 201:

```python
if hasattr(content, "text"):
    file_path.write_text(content.text)  # ❌ Type checker sees content.text as 'object'
```

The fix is to use `isinstance`:

```python
if isinstance(content, mcp.types.TextResourceContents):
    manifest_data = json.loads(content.text)  # ✅ Type checker knows content.text is str
```

Unused Type Ignore Comments

Log Excerpts

Related Files
|
Test Failure Analysis

Summary: ✅ RESOLVED - The "lowest-direct dependencies" test failure has been fixed by bumping the minimum google-genai version.

Root Cause: The commit …

Solution Applied: Updated the minimum version constraint for google-genai.

Note: This analysis was updated on 2026-01-31 to reflect that the issue has been resolved. The original test failure occurred on workflow run 21550787783 at commit …

Original Detailed Analysis

Test Failure Logs

This occurred in two test cases:

Version Investigation

Through binary search testing, I confirmed:

The Bypass Mechanism

The code uses …

Related Files
|
|
/marvin let's bump the minimum version of google genai please |
|
Claude finished @strawgate's task in 3m 32s —— View job

Bumping google-genai minimum version ✓

Task: Update the minimum version constraint for google-genai.

Context

The code uses …

Changes
Tasks
Summary

Successfully bumped the minimum google-genai version to 1.18.0. This ensures that the …

Commit: … |
|
@coderabbitai please review |
|
✅ Actions performed

Review triggered.
|
Test Failure Analysis

Summary: Two tests failed in the latest workflow run (#21807550906), but both failures are unrelated to this PR's changes (Google GenAI handler). These are pre-existing test flakiness issues in other parts of the codebase.

Failure 1: OpenAPI Performance Test (Ubuntu, lowest-direct deps)

Root Cause: Performance assertion is too strict. The test expects OpenAPI provider initialization to complete in under 100ms, but it took 233ms on this run.

```python
# tests/server/providers/openapi/test_comprehensive.py:736
assert initialization_time < 0.1  # Should be under 100ms
# AssertionError: assert 0.23323297500610352 < 0.1
```

Why this is unrelated to this PR: This PR only adds a Google GenAI sampling handler.

Suggested Solution: This is a flaky performance test. The initialization time varies based on CI runner performance. Options:
Failure 2: Azure Auth Test Timeout (Windows)

Root Cause: Test timed out during setup, at:

```
File "D:\a\fastmcp\fastmcp\src\fastmcp\server\auth\oauth_proxy\proxy.py", line 426
    key_value=DiskStore(directory=settings.home / "oauth-proxy"),
File "D:\a\fastmcp\fastmcp\.venv\lib\site-packages\diskcache\core.py", line 2438
    sql('PRAGMA %s = %s' % (pragma, value)).fetchall()
```
Possible fixes:
PR Impact

✅ This PR's changes are not causing these failures.

The Google GenAI handler implementation:
These failures are pre-existing flaky tests that need to be addressed separately.

Related Files (for fixing the flaky tests)

OpenAPI Performance Test:
Azure Auth Timeout:
Updated: 2026-02-08 23:45 UTC - Workflow run #21807550906 |
Actionable comments posted: 3
🧹 Nitpick comments (3)
src/fastmcp/client/sampling/handlers/google_genai.py (3)
300-302: Use `TypeError` instead of `ValueError` for unsupported content types.

Per Ruff TRY004, `TypeError` is more appropriate when the issue is an unexpected type.

Proposed fix

```diff
 else:
-    msg = f"Unsupported tool result content type: {type(item).__name__}"
-    raise ValueError(msg)
+    msg = f"Unsupported tool result content type: {type(item).__name__}"
+    raise TypeError(msg)
```
404-436: Thought parts are silently dropped when `thinking_budget` is set.

When a thinking budget is configured (Lines 108-112), the model may return `thought` parts alongside text/function_call parts. The content-building loop (Lines 422-436) only handles `text` and `function_call`, silently dropping thought parts. This is likely the desired behavior (MCP consumers don't need internal chain-of-thought), but a brief comment documenting this intentional omission would prevent future confusion.

Also, the `hasattr` checks on Lines 424/426 are unnecessary for typed `Part` objects; these attributes always exist (as `None`). Using `part.text is not None` and `part.function_call is not None` would be more idiomatic.

Suggested cleanup

```diff
 for part in candidate.content.parts:
-    if hasattr(part, "text") and part.text:
+    # Skip thought parts (from thinking_config); not relevant for MCP responses
+    if part.text is not None and part.text:
         content.append(TextContent(type="text", text=part.text))
-    elif hasattr(part, "function_call") and part.function_call:
+    elif part.function_call is not None:
```
284-291: Add a comment explaining the `thought_signature` bypass.

The magic byte string `b"skip_thought_signature_validator"` lacks inline documentation. The `thought_signature` field is part of Gemini's tool-calling flow where the model returns an encrypted reasoning state that must be echoed back in subsequent turns; for Gemini 3, omitting required signatures causes HTTP 400 errors.

Since this code injects tool-use Parts from the MCP protocol (not from Gemini's response), they lack legitimate signatures. The bypass is documented in the Gemini API as a last resort to allow such injected tool calls, but the code should note this reason:

```python
thought_signature=b"skip_thought_signature_validator",  # MCP tool calls bypass Gemini's thought signature validation
```

Or, more explicitly: "This tool call originates from MCP, not from a Gemini response, so it has no real thought signature. The bypass value allows validation to pass but may impact tool-calling quality."
```python
def _convert_tool_to_google_genai(tool: MCPTool) -> GoogleTool:
    """Convert an MCP Tool to Google GenAI format."""
    input_schema: dict[str, Any] = tool.inputSchema
    properties: dict[str, Any] = input_schema.get("properties", {})
    required: list[str] = input_schema.get("required", [])

    # Build parameters schema with Google's type format
    google_properties: dict[str, Any] = {}
    for prop_name, prop_schema in properties.items():
        google_properties[prop_name] = _convert_json_schema_to_google_schema(
            dict(prop_schema)
        )

    return GoogleTool(
        function_declarations=[
            FunctionDeclaration(
                name=tool.name,
                description=tool.description or "",
                parameters_json_schema={
                    "type": "OBJECT",
                    "properties": google_properties,
                    "required": required,
                },
            )
        ]
    )
```
🧩 Analysis chain
🌐 Web query:
google-genai Python FunctionDeclaration parameters_json_schema documentation
💡 Result:
In the google-genai (python-genai) Python SDK, types.FunctionDeclaration.parameters_json_schema is documented as the JSON-Schema way to describe a function’s input parameters (the schema must be an object whose properties are the function parameters), and it is mutually exclusive with parameters (the SDK’s Schema model). (googleapis.github.io)
Where it’s documented (and example):
- Python SDK docs (Function calling example) shows `types.FunctionDeclaration(..., parameters_json_schema={...})` used when declaring a tool function. (googleapis.github.io)
- Python SDK API reference lists `FunctionDeclaration.parameters_json_schema` and describes it (including the "mutually exclusive with `parameters`" note). (googleapis.github.io)
Related note: there’s a reported/closed issue indicating parameters_json_schema worked for “regular” generate_content models but not for some Gemini Live models (empty args), with the suggested workaround being to use parameters instead. (github.com)
Citations:
- 1: https://googleapis.github.io/python-genai/genai.html
- 2: https://googleapis.github.io/python-genai/index.html
- 3: https://googleapis.github.io/python-genai/genai.html
- 4:
parameters_json_schemaseems not supported by Gemini Live API googleapis/python-genai#1147
Fix schema format mismatch: remove conversion when using parameters_json_schema.
parameters_json_schema expects standard JSON Schema format (lowercase "string", "object", etc.), but the code converts to Google's schema format (uppercase "STRING", "OBJECT"). Since tool.inputSchema is already in standard JSON Schema format, pass it directly instead of converting:
Recommended fix
```diff
 def _convert_tool_to_google_genai(tool: MCPTool) -> GoogleTool:
     """Convert an MCP Tool to Google GenAI format."""
     return GoogleTool(
         function_declarations=[
             FunctionDeclaration(
                 name=tool.name,
                 description=tool.description or "",
-                parameters_json_schema={
-                    "type": "OBJECT",
-                    "properties": google_properties,
-                    "required": required,
-                },
+                parameters_json_schema=tool.inputSchema,
             )
         ]
     )
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _convert_tool_to_google_genai(tool: MCPTool) -> GoogleTool:
    """Convert an MCP Tool to Google GenAI format."""
    input_schema: dict[str, Any] = tool.inputSchema
    properties: dict[str, Any] = input_schema.get("properties", {})
    required: list[str] = input_schema.get("required", [])

    # Build parameters schema with Google's type format
    google_properties: dict[str, Any] = {}
    for prop_name, prop_schema in properties.items():
        google_properties[prop_name] = _convert_json_schema_to_google_schema(
            dict(prop_schema)
        )

    return GoogleTool(
        function_declarations=[
            FunctionDeclaration(
                name=tool.name,
                description=tool.description or "",
                parameters_json_schema=tool.inputSchema,
            )
        ]
    )
```
```python
# Extract function name from toolUseId
# Our IDs are formatted as "{function_name}_{uuid8}", so extract the name.
# Note: This is a limitation of MCP's ToolResultContent which only carries
# toolUseId, while Google's FunctionResponse requires the function name.
tool_use_id = content.toolUseId
if "_" in tool_use_id:
    # Split and rejoin all but the last part (the UUID suffix)
    parts = tool_use_id.rsplit("_", 1)
    function_name = parts[0]
else:
    # Fallback: use the full ID as the name
    function_name = tool_use_id
```
Fragile function-name extraction from toolUseId.
The ID format {name}_{uuid4().hex[:8]} is an internal convention set at Line 432, but the reverse parsing here assumes that convention holds for all incoming ToolResultContent. If a tool name contains no underscores, or the ID originates from a different MCP participant that uses a different format, rsplit("_", 1) will silently produce incorrect results.
Consider carrying the function name through a more reliable channel (e.g., a lookup map keyed by toolUseId), or at minimum document this coupling prominently at both the generation site (Line 432) and the parsing site here.
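A hedged sketch of that lookup-map approach: record the function name when the ID is generated and resolve it when the tool result comes back, keeping the suffix-split heuristic only as a fallback for IDs minted elsewhere. `ToolCallRegistry` is an illustrative name, not FastMCP's actual code:

```python
# Illustrative sketch of an ID -> function-name registry for tool calls.
import uuid

class ToolCallRegistry:
    def __init__(self) -> None:
        self._names: dict[str, str] = {}

    def new_id(self, function_name: str) -> str:
        tool_use_id = f"{function_name}_{uuid.uuid4().hex[:8]}"
        self._names[tool_use_id] = function_name  # remember the real name
        return tool_use_id

    def resolve(self, tool_use_id: str) -> str:
        if tool_use_id in self._names:
            return self._names[tool_use_id]
        # Fallback heuristic for IDs not minted by this handler.
        return tool_use_id.rsplit("_", 1)[0] if "_" in tool_use_id else tool_use_id

registry = ToolCallRegistry()
tid = registry.new_id("get_weather")
print(registry.resolve(tid))          # get_weather
print(registry.resolve("opaque-id"))  # opaque-id
```

The registry is per-handler state, so IDs from persisted conversations or other handlers still hit the fallback; that limitation is inherent to MCP carrying only the opaque `toolUseId`.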
```python
def _response_to_create_message_result(
    response: GenerateContentResponse,
    model: str,
) -> CreateMessageResult:
    """Convert Google GenAI response to CreateMessageResult (no tools)."""
    if not (text := response.text):
        candidate = _get_candidate_from_response(response)
        msg = f"No content in response: {candidate.finish_reason}"
        raise ValueError(msg)

    return CreateMessageResult(
        content=TextContent(type="text", text=text),
        role="assistant",
        model=model,
    )
```
🧩 Analysis chain
🌐 Web query:
google-genai python GenerateContentResponse text property raises ValueError exception
💡 Result:
In google-genai (aka googleapis/python-genai), GenerateContentResponse.text is a convenience property that only works when the model output contains only text parts. If the response includes any non-text part (e.g., function_call, executable_code, etc.), accessing .text can raise:
ValueError: GenerateContentResponse.text only supports text parts, but got ...[1], [2]
This commonly happens when you enable tools (Google Search, function calling, code execution): the model may emit a tool/call/code part (sometimes before the text part), and .text refuses to “skip past” those parts. [1], [2]
Workarounds
1) Safely extract just the text parts
```python
texts = []
for part in (response.candidates[0].content.parts or []):
    if part.text:
        texts.append(part.text)
print("".join(texts))
```

Issue #359 shows a response whose first part is executable_code and second part is text, which triggers the exception if you call response.text. [1]
2) Handle tool/function-call responses explicitly
- Check `response.function_calls` and act on them instead of calling `.text`. The SDK README documents `response.function_calls` and how to disable automatic function calling if you want manual control. [3]
3) If streaming, expect tool/function_call chunks
When streaming with tools enabled, individual chunks may be function-call parts, so chunk.text can raise the same ValueError; see issue #106. [2]
Sources: GitHub issue demonstrating response.text failing with executable_code parts [1]; GitHub issue demonstrating the same failure with function_call parts (incl. streaming) [2]; SDK README documenting function calling patterns and response.function_calls / disabling automatic function calling [3].
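The workaround in item 1 can be wrapped in a small helper. The objects below are plain stand-ins for the SDK's response/candidate/part structures, so this is a sketch of the pattern, not the google-genai API itself:

```python
# Sketch: collect only text parts, never touching the .text convenience
# property, which can raise ValueError on mixed-part responses.
from types import SimpleNamespace

def safe_text(response) -> str:
    candidates = getattr(response, "candidates", None) or []
    if not candidates:
        return ""
    parts = getattr(candidates[0].content, "parts", None) or []
    # Skip non-text parts (function calls, executable code, thoughts, ...).
    return "".join(p.text for p in parts if getattr(p, "text", None))

resp = SimpleNamespace(
    candidates=[SimpleNamespace(content=SimpleNamespace(parts=[
        SimpleNamespace(text=None, function_call={"name": "f"}),
        SimpleNamespace(text="hello "),
        SimpleNamespace(text="world"),
    ]))]
)
print(safe_text(resp))  # hello world
```

A helper like this also degrades gracefully on empty or blocked responses by returning an empty string instead of raising mid-conversion.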
🏁 Script executed:
```shell
# Find the google-genai dependency and version
fd -t f "requirements" | xargs grep -l "google-genai\|google-ai-generativelanguage" 2>/dev/null || fd -t f "pyproject.toml" | xargs cat | grep -A5 -B5 "google-genai"
```

Repository: jlowin/fastmcp
Length of output: 529
🏁 Script executed:
```shell
# Search for other usages of response.text in the codebase to see patterns
rg "response\.text" --type py -B2 -A2
```

Repository: jlowin/fastmcp
Length of output: 14388
🏁 Script executed:
```shell
# Check the specific file to understand imports and context
cat -n src/fastmcp/client/sampling/handlers/google_genai.py | head -50
```

Repository: jlowin/fastmcp
Length of output: 1804
🏁 Script executed:
```shell
# Find where _response_to_create_message_result is called
rg "_response_to_create_message_result" --type py -B5 -A5
```

Repository: jlowin/fastmcp
Length of output: 4720
🏁 Script executed:
```shell
# Check the full function to see docstring and context
sed -n '375,400p' src/fastmcp/client/sampling/handlers/google_genai.py
```

Repository: jlowin/fastmcp
Length of output: 926
🏁 Script executed:
```shell
# See if there's a WithTools variant and how they're used
rg "CreateMessageResultWithTools\|_response_to_create_message_result_with_tools" --type py -B3 -A3
```

Repository: jlowin/fastmcp
Length of output: 40
Handle response.text raising ValueError in non-tools path.
The google-genai SDK's GenerateContentResponse.text raises ValueError when the response is blocked by safety filters, empty, or otherwise contains non-text content. The current code at line 380 using the walrus operator only handles falsy returns, not exceptions. If .text raises, the custom error message is bypassed and a less informative traceback is shown.
Wrap the access in try/except:
Proposed fix
```diff
 def _response_to_create_message_result(
     response: GenerateContentResponse,
     model: str,
 ) -> CreateMessageResult:
     """Convert Google GenAI response to CreateMessageResult (no tools)."""
-    if not (text := response.text):
+    try:
+        text = response.text
+    except (ValueError, AttributeError):
+        text = None
+    if not text:
         candidate = _get_candidate_from_response(response)
         msg = f"No content in response: {candidate.finish_reason}"
         raise ValueError(msg)
```
/marvin please fix merge conflicts and make sure precommit checks pass |
|
Claude finished @strawgate's task in 3m 49s — Fixing merge conflicts and precommit checks ✓

Summary: Successfully resolved merge conflicts and ensured all precommit checks pass:
The PR is now ready for review with a clean rebase on the latest main branch. |
```python
def _convert_json_schema_to_google_schema(schema: dict[str, Any]) -> dict[str, Any]:
    """Convert JSON Schema to Google GenAI Schema format.

    Handles:
    - Basic types (string, integer, number, boolean, array, object)
    - Nullable types via anyOf with null type
    - Nested objects and arrays
    """
    result: dict[str, Any] = {}

    # Handle anyOf for nullable types (e.g., anyOf: [{type: string}, {type: null}])
    if "anyOf" in schema:
        any_of_types = schema["anyOf"]
        non_null_types = [t for t in any_of_types if t.get("type") != "null"]
        has_null = len(non_null_types) < len(any_of_types)

        if non_null_types:
            # Recursively convert the non-null type
            non_null_schema = non_null_types[0]
            result = _convert_json_schema_to_google_schema(non_null_schema)

        if has_null:
            result["nullable"] = True

        # Preserve description from parent schema
        if "description" in schema:
            result["description"] = schema["description"]

        return result

    schema_type = schema.get("type")

    # Handle type arrays (e.g., ["string", "null"]) for nullable types
    if isinstance(schema_type, list):
        non_null_types = [t for t in schema_type if t != "null"]
        has_null = len(non_null_types) < len(schema_type)
        schema_type = non_null_types[0] if non_null_types else None
        if has_null:
            result["nullable"] = True

    if isinstance(schema_type, str):
        type_map: dict[str, str] = {
            "string": "STRING",
            "integer": "INTEGER",
            "number": "NUMBER",
            "boolean": "BOOLEAN",
            "array": "ARRAY",
            "object": "OBJECT",
        }
        result["type"] = type_map.get(schema_type, "STRING")

    if "description" in schema:
        result["description"] = schema["description"]

    if "enum" in schema:
        result["enum"] = schema["enum"]

    if "items" in schema:
        result["items"] = _convert_json_schema_to_google_schema(dict(schema["items"]))

    if "properties" in schema:
        result["properties"] = {
            str(k): _convert_json_schema_to_google_schema(dict(v))
            for k, v in dict(schema["properties"]).items()
        }

    if "required" in schema:
        result["required"] = schema["required"]

    return result
```
🛠️ Refactor suggestion | 🟠 Major
This function becomes unnecessary if parameters_json_schema is used correctly.
If the fix above is applied (passing tool.inputSchema directly), this entire function can be removed. If you instead switch to using the parameters field (Google's Schema type), then this conversion logic would be needed — but should use the Schema model rather than a raw dict with uppercase type strings.
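The anyOf/nullable handling above can be sanity-checked with a trimmed-down, dependency-free sketch of the same rules. `convert` here is an illustrative stand-in for the handler's function, covering only the `anyOf` and type-array branches (and simply uppercasing the type name rather than using the full map):

```python
from typing import Any


def convert(schema: dict[str, Any]) -> dict[str, Any]:
    """Trimmed-down sketch: anyOf-with-null and ["type", "null"] arrays
    both collapse to the non-null type plus nullable=True."""
    result: dict[str, Any] = {}

    if "anyOf" in schema:
        non_null = [t for t in schema["anyOf"] if t.get("type") != "null"]
        if non_null:
            # Recursively convert the first non-null variant
            result = convert(non_null[0])
        if len(non_null) < len(schema["anyOf"]):
            result["nullable"] = True
        return result

    schema_type = schema.get("type")
    if isinstance(schema_type, list):
        non_null = [t for t in schema_type if t != "null"]
        if len(non_null) < len(schema_type):
            result["nullable"] = True
        schema_type = non_null[0] if non_null else None

    if isinstance(schema_type, str):
        result["type"] = schema_type.upper()
    return result


assert convert({"anyOf": [{"type": "string"}, {"type": "null"}]}) == {
    "type": "STRING",
    "nullable": True,
}
assert convert({"type": ["integer", "null"]}) == {"nullable": True, "type": "INTEGER"}
```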
```python
if isinstance(content, ToolUseContent):
    return Part(
        function_call=FunctionCall(
            name=content.name,
            args=content.input,
        ),
        thought_signature=b"skip_thought_signature_validator",
    )
```
🧩 Analysis chain
🏁 Script executed:
# First, let's look at the file and understand the context around line 284-291
cat -n src/fastmcp/client/sampling/handlers/google_genai.py | sed -n '273,326p'
Length of output: 2389
🏁 Script executed:
# Search for other usages of thought_signature in the codebase
rg "thought_signature" --type py -B 2 -A 2
Length of output: 451
🏁 Script executed:
# Check for any comments or documentation about this pattern
rg "skip_thought_signature_validator" --type py -B 5 -A 5
Length of output: 997
🏁 Script executed:
# Check what version of google-genai is being used
if [ -f "pyproject.toml" ]; then
grep -A 20 "google-genai" pyproject.toml
elif [ -f "requirements.txt" ]; then
grep "google-genai" requirements.txt
elif [ -f "setup.py" ]; then
grep "google-genai" setup.py
fi
Length of output: 828
🌐 Web query:
google-genai SDK Part thought_signature documentation function_call validation
💡 Result:
In the google-genai (Google GenAI) SDK / Gemini API, thought_signature (aka thoughtSignature) is an opaque, encrypted “save state” the model returns to preserve its reasoning context—especially for tool / function calling. If you get a thought signature back, you should send it back exactly as-is in the next request’s history. [1][2]
Where it appears (as “Parts”)
The API can return a thought signature on response parts such as text or functionCall. The Python SDK explicitly exposes Part.thought_signature / PartDict.thought_signature. [2][3]
Function-call validation rule (the 400 you’re seeing)
For Gemini 3 models, function calling enforces strict validation: if the model produced a functionCall part with a thought_signature, you must include that same signature in that same part when you send the conversation back along with the tool result—otherwise you can get a 4xx / 400 validation error complaining a function call is missing thought_signature. [2]
Notes from the docs:
- Single function call: the `functionCall` part includes a `thought_signature`. [2]
- Parallel function calls: only the first `functionCall` part contains the signature; you must return it in the same place. [2]
- Validation is enforced for function calls in the current turn (the tool-execution loop). [2]
Do you need to handle it manually?
Usually no: the official Google GenAI SDK says it handles thought signatures automatically if you use its chat/history patterns (or append the full model response back into history without modifying parts). You typically only manage signatures manually if you’re editing/truncating history or using raw REST. [1][2]
Sources: Gemini “Thinking” / thought signatures overview [1], Thought Signatures requirements & function-call validation [2], Google GenAI Python SDK API reference showing Part.thought_signature [3].
🏁 Script executed:
# Check git history for this specific line
git log -p --follow -- src/fastmcp/client/sampling/handlers/google_genai.py | grep -A 10 -B 10 "skip_thought_signature_validator" | head -50
Length of output: 40
🏁 Script executed:
# Look for any TODOs or FIXMEs related to this function
rg "TODO|FIXME|HACK" src/fastmcp/client/sampling/handlers/google_genai.py -B 2 -A 2
Length of output: 40
🏁 Script executed:
# Check if there are tests for this conversion function
fd "test.*google" --type f | head -5
Length of output: 160
🏁 Script executed:
# Look for any issues or comments about thought_signature in the codebase
rg -i "thought|signature" --type py -B 2 -A 2 | head -50
Length of output: 3280
🏁 Script executed:
# Let's examine the test file for google_genai_handler
cat -n tests/client/sampling/handlers/test_google_genai_handler.py | head -100
Length of output: 3572
🏁 Script executed:
# Check if there are any tests for the conversion function specifically
rg "_sampling_content_to_google_genai_part" --type py -B 3 -A 10
Length of output: 11162
🏁 Script executed:
# Look for any documentation or comments in the google_genai handler file
head -50 src/fastmcp/client/sampling/handlers/google_genai.py
Length of output: 1454
🏁 Script executed:
# Check the imports and Part definition to understand what thought_signature expects
rg "from google.generativeai" src/fastmcp/client/sampling/handlers/google_genai.py
Length of output: 40
🏁 Script executed:
# Let's see the full test to understand what thought_signature is being tested
grep -A 20 "def test_sampling_content_to_google_genai_part_tool_use" tests/client/sampling/handlers/test_google_genai_handler.py
Length of output: 818
🏁 Script executed:
# Check if thought_signature is actually being validated in the test
grep "thought_signature" tests/client/sampling/handlers/test_google_genai_handler.py
Length of output: 40
🏁 Script executed:
# Look for any SDK documentation or comments about why this bypass is needed
grep -r "skip_thought_signature" . --type py --type txt --type md 2>/dev/null || echo "No additional context found"
Length of output: 85
🏁 Script executed:
# Check the broader context of how these Parts are used - are they sent back to the API?
rg "_convert_messages_to_google_genai_content|ModelContent\(" src/fastmcp/client/sampling/handlers/google_genai.py -B 2 -A 5
Length of output: 1249
🏁 Script executed:
# Let's check the actual test to see if thought_signature is being validated
cat -n tests/client/sampling/handlers/test_google_genai_handler.py | sed -n '205,240p'
Length of output: 1689
🏁 Script executed:
# Check the actual full conversion flow - see if thought_signature matters when creating ModelContent
rg "ModelContent\(" -B 5 -A 5 src/fastmcp/client/sampling/handlers/google_genai.py
Length of output: 892
🌐 Web query:
google-genai SDK Part ModelContent thought_signature required validation function_call
💡 Result:
In the Google GenAI / Gemini APIs, Gemini 3 (and some "thinking" models) attach a thought_signature to response Parts (especially `functionCall` parts). If you later send conversation history back without that same thought_signature in the same Part, the server can fail request validation with a 400 like "function call … is missing a thought_signature". (ai.google.dev)
What’s going on (why it’s “required”)
- For Gemini 3 function calling, the API strictly validates that, within the current turn, the first `functionCall` part in each step includes its `thought_signature`. If you omit it, you get a 400 error. (ai.google.dev)
- With parallel function calls, only the first `functionCall` part may carry the signature; you must still return that exact part. (ai.google.dev)
The fix (google-genai SDK)
If you’re using the official SDK, don’t rebuild ModelContent/Content manually for history. Instead:
- Append the model's returned `content` object (with its parts) to your `contents` / history as-is, then
- Append your `functionResponse`, then call the model again.

This preserves `thought_signature` automatically. (docs.cloud.google.com)
Google also notes that if you use the SDK’s normal chat/history flow (or append the full model response content object), you shouldn’t need to manage signatures yourself. (ai.google.dev)
If you must construct history manually (ModelContent / Part)
- Copy the `thought_signature` back exactly where it appeared (same `Part`, same position). Don't merge/split parts in a way that moves/removes the signature. (docs.cloud.google.com)
- For multiple tool calls in a step, keep ordering rules: all functionCalls, then all functionResponses (don't interleave), or you can also get a 400. (docs.cloud.google.com)
- Last resort: set `thought_signature` to `skip_thought_signature_validator` (Google warns it can reduce performance). (docs.cloud.google.com)
Citations:
- https://ai.google.dev/gemini-api/docs/thought-signatures?utm_source=openai
- https://docs.cloud.google.com/vertex-ai/generative-ai/docs/thought-signatures?utm_source=openai
Add comment explaining the thought_signature bypass.
The hardcoded thought_signature=b"skip_thought_signature_validator" is a Google-documented fallback for reconstructed message history. When Gemini 3 function calls are included in ModelContent built from scratch (rather than preserved from the model's original response), this bypass prevents validation errors. Add a comment explaining this pattern to prevent future maintainers from treating it as an undocumented hack.
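For reference, here is a sketch of what a reconstructed function-call part carrying the sentinel could look like at the REST layer. It assumes (as the Gemini API does for bytes fields) that `thought_signature` is base64-encoded on the wire; the payload shape and the function name are illustrative, not SDK output:

```python
import base64

# Sentinel documented by Google for manually reconstructed history.
# Last resort only: Google warns it can reduce model performance.
SENTINEL = b"skip_thought_signature_validator"

# Illustrative JSON-ish shape of a reconstructed functionCall part;
# field casing follows the REST API's camelCase convention, and the
# function name/args are hypothetical.
part = {
    "functionCall": {"name": "get_weather", "args": {"city": "Paris"}},
    # bytes fields are base64-encoded when serialized
    "thoughtSignature": base64.b64encode(SENTINEL).decode("ascii"),
}

assert base64.b64decode(part["thoughtSignature"]) == SENTINEL
```

A one-line comment to this effect next to the hardcoded bytes value is enough to signal that the sentinel is a documented escape hatch rather than a local hack.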
|
/marvin please add a small note about this functionality to a new "RC1" section of the v3-features document, as we'll release 3.0 RC1 shortly with this |
|
Claude finished @jlowin's task in 4m 27s — Adding Google GenAI handler documentation to v3-features ✓

Summary: Successfully added documentation for the Google GenAI sampling handler to the v3-features document in a new "3.0.0rc1" section. The documentation includes:
All formatting checks passed and the changes have been committed and pushed to the PR branch. |
|
Review from Claude - not sure how much is valid and how much is google idiosyncracy:
|
|
/marvin sorry we updated v3-features on main, please pull those changes and put your rc1 note in the righ tplace to resolve the conflict. also fix the conflict on uv.lock by regenerating it |
|
Claude finished @jlowin's task in 13m 24s — Resolving merge conflicts ✓

Summary: Successfully resolved all merge conflicts and updated the PR:
The PR is now up to date with main and ready for review. Commit: |
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/fastmcp/client/sampling/handlers/google_genai.py (2)
325-326: Use `TypeError` instead of `ValueError` for unsupported content types.

This branch handles an invalid type of content, making `TypeError` the more appropriate exception per Python conventions.

Proposed fix:

```diff
-    msg = f"Unsupported content type: {type(content)}"
-    raise ValueError(msg)
+    msg = f"Unsupported content type: {type(content).__name__}"
+    raise TypeError(msg)
```
404-436: Replace `hasattr` checks with attribute-level `None` checks on `Part`.

`Part` from `google.genai.types` has `text`, `function_call`, etc. as typed optional attributes. Using `hasattr` bypasses type checking and is fragile if the SDK changes its attribute model. Use direct `None` checks instead.

Proposed fix:

```diff
 if candidate.content and candidate.content.parts:
     for part in candidate.content.parts:
-        if hasattr(part, "function_call") and part.function_call:
+        if part.function_call is not None:
             has_function_calls = True
             break
 # ...
 if candidate.content and candidate.content.parts:
     for part in candidate.content.parts:
-        if hasattr(part, "text") and part.text:
+        if part.text:
             content.append(TextContent(type="text", text=part.text))
-        elif hasattr(part, "function_call") and part.function_call:
+        elif part.function_call is not None:
             fc = part.function_call
```
```python
from fastmcp.client.sampling.handlers import GoogleGenaiSamplingHandler
from google.genai import Client as GoogleGenaiClient

# Initialize the handler
handler = GoogleGenaiSamplingHandler(
    default_model="gemini-2.0-flash-exp",
    client=GoogleGenaiClient(),  # Optional - creates one if not provided
)

# Use with MCP sampling
async with Client("http://server/mcp") as client:
    result = await client.sample(
        messages=[...],
        params=SamplingParams(tools=[...]),
        handler=handler,
    )
```
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check sampling_handler usage patterns
echo "=== Searching for sampling_handler patterns ==="
rg -n "sampling_handler" --type py -C 2
echo ""
echo "=== Checking sample method signature ==="
rg -n "async def sample" --type py -A 5
echo ""
echo "=== Check imports in the docs file ==="
head -30 docs/development/v3-notes/v3-features.mdx
Length of output: 37214
🏁 Script executed:
#!/bin/bash
# Locate and inspect google_genai.py
echo "=== Finding google_genai.py ==="
find . -name "google_genai.py" -type f
echo ""
echo "=== Checking the file if found ==="
if [ -f "fastmcp/client/sampling/handlers/google_genai.py" ]; then
echo "File exists at fastmcp/client/sampling/handlers/google_genai.py"
head -80 fastmcp/client/sampling/handlers/google_genai.py | tail -30
fi
Length of output: 180
🏁 Script executed:
#!/bin/bash
# Check what Client and SamplingParams actually are
echo "=== Looking for Client class ==="
rg -n "class Client" --type py | grep -v test | head -10
echo ""
echo "=== Looking for SamplingParams ==="
rg -n "class SamplingParams" --type py | head -5
echo ""
echo "=== Check the context around lines 77-94 of the mdx file ==="
sed -n '70,100p' docs/development/v3-notes/v3-features.mdx
Length of output: 2298
🏁 Script executed:
#!/bin/bash
# Verify SamplingParams location and Client/SamplingParams imports needed
echo "=== Checking mcp.types imports in examples ==="
grep -r "SamplingParams" --include="*.py" examples/ | head -5
echo ""
echo "=== Check imports in actual test files ==="
head -20 tests/client/test_sampling.py
echo ""
echo "=== Verify Client constructor signature ==="
sed -n '248,260p' src/fastmcp/client/client.py
Length of output: 2009
🏁 Script executed:
#!/bin/bash
# Check what the sample() method actually accepts
echo "=== Sample method signature in Context ==="
sed -n '869,880p' src/fastmcp/server/context.py
echo ""
echo "=== Check full signature with parameters ==="
sed -n '900,920p' src/fastmcp/server/context.py
Length of output: 1607
Code example has incorrect API usage and missing imports.
The example shows handler=handler and params=SamplingParams(...) passed to client.sample(), but the sampling handler is configured at Client construction time, not per-call. The sample() method accepts individual parameters like tools= instead of a params= object.
Correct the example:
- Add missing imports: `from fastmcp import Client` and `from fastmcp.client.sampling import SamplingParams`
- Move `sampling_handler` to the `Client` constructor
- Replace `params=SamplingParams(tools=[...])` with `tools=[...]` in the `sample()` call
Corrected example
```python
from fastmcp import Client
from fastmcp.client.sampling import SamplingParams
from fastmcp.client.sampling.handlers import GoogleGenaiSamplingHandler
from google.genai import Client as GoogleGenaiClient

# Initialize the handler
handler = GoogleGenaiSamplingHandler(
    default_model="gemini-2.0-flash-exp",
    client=GoogleGenaiClient(),  # Optional - creates one if not provided
)

# Use with MCP sampling
async with Client("http://server/mcp", sampling_handler=handler) as client:
    result = await client.sample(
        messages=[...],
        tools=[...],
    )
```
/marvin please address the pr feedback |
|
Claude finished @strawgate's task in 3m 40s — Addressing PR Feedback ✓

Summary: Successfully addressed all PR feedback by implementing the following changes:

Key Changes
Test Results
Notes
Commit: |
|
/marvin fix merge conflicts plz |
|
Claude finished @strawgate's task in 5m 23s — Merge conflicts resolved ✓

Solution Applied: The repository history was completely rewritten on main (commit ). Fixed by:
The PR is now cleanly based on the latest main and ready for review! |
Co-authored-by: Bill Easton <strawgate@users.noreply.github.com>
Summary
Adds a Google GenAI sampling handler to FastMCP, enabling users to use Google's Gemini models with the Model Context Protocol. This handler follows the same pattern as the existing Anthropic and OpenAI handlers, providing full tool support and comprehensive message conversion.
Key Features
Closes #2976
Generated with Claude Code