
fix: preserve raw string tool outputs#1608

Merged
gold-silver-copper merged 1 commit into 0xPlaygrounds:main from gold-silver-copper:codex/fix-tool-output-string-handling
Apr 7, 2026

Conversation

@gold-silver-copper
Contributor

Summary

Fixes #1605.

This changes tool output serialization in rig-core so that Tool::Output = String is preserved as raw text instead of being JSON-stringified again. Non-string tool outputs are still serialized to JSON text as before.

This fixes the double-escaped tool result behavior seen with OpenAI-compatible providers, and it also restores the intended behavior for string-returning tools that deliberately emit structured JSON for downstream parsing.

What changed

  • Added a dedicated tool output serialization path in rig-core
  • Preserved String outputs verbatim
  • Kept object/number outputs serialized once as JSON text
  • Left provider conversion code and ToolResultContent::from_tool_output unchanged

Why this approach

The bug was caused at the generic tool serialization boundary, not in provider-specific message conversion.

Previously, all tool outputs went through serde_json::to_string(&output). For String outputs, that produced a JSON string literal, which then surfaced in tool result messages as quoted/escaped text.

Fixing this at the serializer boundary is lower-risk than adding parser-side unwrapping logic:

  • it resolves the reported OpenAI issue
  • it keeps the public API unchanged
  • it allows String tools that return structured JSON payloads to continue flowing into existing from_tool_output parsing

Tests

Added regression coverage for:

  • plain multiline string tool output staying unescaped
  • structured JSON returned from a String tool remaining parseable as tool result content
  • object outputs still being serialized as JSON text

OpenAI’s chat tool-message schema uses role: "tool", a required tool_call_id, and content as the tool result payload, which indicates provider-facing tool results are raw text/content rather than a JSON-stringified string literal.[1] vLLM follows that same shape: its tool-calling docs say chat templates must handle tool-role messages,[2] and its official OpenAI-compatible example appends tool results as {"role": "tool", "content": result, "tool_call_id": call.id, "name": call.function.name}.[3]

So the Rig fix is aligned with the strongest available evidence: String tool outputs should stay raw when sent back through OpenAI-compatible providers like vLLM.

Sources:

[1] OpenAI chat tool-message schema (Chat Completions API reference)
[2] vLLM tool-calling documentation
[3] vLLM official OpenAI-compatible tool-calling example

@gold-silver-copper gold-silver-copper added this pull request to the merge queue Apr 7, 2026
Merged via the queue into 0xPlaygrounds:main with commit 2f652ef Apr 7, 2026
6 checks passed
This was referenced Apr 7, 2026


Development

Successfully merging this pull request may close these issues.

bug: String Tool call result escaped two times
