
Conversation

@k-l-lambda

Description

Adds validation logic to ensure that conversation history complies with the OpenAI Chat Completions protocol's requirements for tool calls. This fixes 400 Bad Request errors from third-party providers that strictly validate the protocol.

Fixes #7275

Problem

The OpenAI Chat Completions protocol requires:

  1. Every assistant message with tool_calls must have matching tool response messages
  2. Tool responses must immediately follow the assistant message; no other messages may interrupt the sequence (see the sketch after this list)
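
For illustration, a compliant sequence looks like this on the wire. This is a minimal sketch using serde_json values; the field names follow the public Chat Completions API, not Codex's internal types:

```rust
use serde_json::{json, Value};

// A protocol-compliant history: the assistant message carrying tool_calls is
// immediately followed by a tool message answering the same tool_call_id.
fn compliant_history() -> Vec<Value> {
    vec![
        json!({"role": "user", "content": "What's the weather in Paris?"}),
        json!({"role": "assistant", "content": null, "tool_calls": [{
            "id": "call_123",
            "type": "function",
            "function": {"name": "get_weather", "arguments": "{\"city\":\"Paris\"}"}
        }]}),
        json!({"role": "tool", "tool_call_id": "call_123", "content": "18°C, clear"}),
    ]
}
```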

Some third-party OpenAI-compatible providers strictly enforce these rules and reject violations with 400 errors. Codex's conversation history can violate these requirements when:

  • API streaming is interrupted, leaving incomplete tool calls
  • Messages are recorded in incorrect order during streaming/retries

Solution

This PR adds validation that runs before sending requests to the Chat Completions API:

Phase 1: Remove incomplete tool calls

  • Collect all tool_call_id values from assistant messages with tool_calls
  • Collect all tool_call_id values from tool response messages
  • Remove any assistant messages whose tool calls have no matching responses (sketched below)
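
A minimal sketch of this pass, assuming messages are held as raw serde_json values (function and variable names here are illustrative, not the exact ones in chat_completions.rs):

```rust
use std::collections::HashSet;
use serde_json::Value;

// Phase 1 sketch: drop assistant messages whose tool calls never received a
// response. `messages` are raw Chat Completions messages as JSON values.
fn remove_incomplete_tool_calls(messages: &mut Vec<Value>) {
    let mut called: HashSet<String> = HashSet::new();   // IDs requested by assistants
    let mut answered: HashSet<String> = HashSet::new(); // IDs that got a tool response
    for msg in messages.iter() {
        if let Some(calls) = msg.get("tool_calls").and_then(Value::as_array) {
            for call in calls {
                if let Some(id) = call.get("id").and_then(Value::as_str) {
                    called.insert(id.to_string());
                }
            }
        }
        if let Some(id) = msg.get("tool_call_id").and_then(Value::as_str) {
            answered.insert(id.to_string());
        }
    }
    // The HashSet difference: calls that never got a response.
    let incomplete: HashSet<String> = called.difference(&answered).cloned().collect();
    // Drop any assistant message whose tool_calls include an unanswered ID.
    messages.retain(|msg| {
        msg.get("tool_calls")
            .and_then(Value::as_array)
            .map_or(true, |calls| {
                !calls.iter().any(|call| {
                    call.get("id")
                        .and_then(Value::as_str)
                        .map_or(false, |id| incomplete.contains(id))
                })
            })
    });
}
```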

Phase 2: Enforce sequence ordering

  • Scan through conversation history for assistant messages with tool_calls
  • For each, verify that all tool responses appear immediately after
  • Remove any messages that interrupt the required sequence (sketched below)
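
A minimal sketch of the sequence-enforcement pass, under the same illustrative assumptions as the Phase 1 sketch:

```rust
use std::collections::HashSet;
use serde_json::Value;

// Phase 2 sketch: keep a message only if it does not interrupt a pending
// tool_call → tool_response sequence.
fn enforce_tool_sequence(messages: Vec<Value>) -> Vec<Value> {
    let mut out: Vec<Value> = Vec::new();
    // IDs from the most recent assistant tool_calls still awaiting responses.
    let mut pending: HashSet<String> = HashSet::new();
    for msg in messages {
        let role = msg.get("role").and_then(Value::as_str).unwrap_or("");
        if !pending.is_empty() {
            if role == "tool" {
                // A tool response for a pending call keeps the sequence intact.
                if let Some(id) = msg.get("tool_call_id").and_then(Value::as_str) {
                    pending.remove(&id.to_string());
                }
                out.push(msg);
                continue;
            }
            // Any other message would interrupt the sequence: drop it.
            continue;
        }
        if let Some(calls) = msg.get("tool_calls").and_then(Value::as_array) {
            for call in calls {
                if let Some(id) = call.get("id").and_then(Value::as_str) {
                    pending.insert(id.to_string());
                }
            }
        }
        out.push(msg);
    }
    out
}
```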

Example transformation:

Before validation:

[assistant + tool_calls: call_123]
[assistant: "Let me check that..."]  ← Violates protocol
[tool: response for call_123]

After validation:

[assistant + tool_calls: call_123]
[tool: response for call_123]

Changes

  • codex-rs/core/src/chat_completions.rs (lines 332-439):
    • Add tool call ID matching validation using HashSet difference
    • Add sequence enforcement logic that removes interrupting messages
    • ~110 lines of validation code

Testing

Tested with third-party providers that strictly validate the protocol:

Before:

  • Frequent 400 errors: "An assistant message with 'tool_calls' must be followed by tool messages..."
  • Reconnection loops after streaming interruptions
  • Providers effectively unusable

After:

  • Conversation history automatically cleaned to comply with protocol
  • Stable operation with strict providers
  • No protocol validation errors

Benefits

  • Protocol compliance: Ensures conversation history always follows OpenAI specification
  • Provider compatibility: Works with providers that strictly validate the protocol
  • Improved reliability: Prevents reconnection loops from protocol violations
  • Automatic recovery: Handles streaming interruptions gracefully
  • Backward compatible: No impact on providers that are lenient with protocol

Implementation Notes

The validation is conservative and only removes messages that clearly violate the protocol. It preserves valid conversation history and only filters out:

  1. Tool calls with no responses (incomplete due to interruptions)
  2. Messages that break the required tool_call → tool_response sequence

This approach prioritizes protocol compliance while minimizing information loss from the conversation context.
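
Tying the two phase sketches together on the example transformation above (hypothetical usage; `remove_incomplete_tool_calls` and `enforce_tool_sequence` are the illustrative functions sketched earlier, not the names used in chat_completions.rs):

```rust
use serde_json::json;

fn main() {
    let mut history = vec![
        json!({"role": "assistant", "content": null, "tool_calls": [{
            "id": "call_123", "type": "function",
            "function": {"name": "check", "arguments": "{}"}
        }]}),
        json!({"role": "assistant", "content": "Let me check that..."}),
        json!({"role": "tool", "tool_call_id": "call_123", "content": "ok"}),
    ];
    remove_incomplete_tool_calls(&mut history); // Phase 1: call_123 is answered, nothing dropped
    let history = enforce_tool_sequence(history); // Phase 2: drops the interrupting message
    assert_eq!(history.len(), 2); // [assistant + tool_calls, tool response]
}
```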

Related

…oviders

OpenAI Chat Completions protocol requires:
1. All tool_calls must have matching tool responses
2. Assistant messages with tool_calls must be immediately followed by
   tool responses before any other messages

Some API providers (e.g., third-party OpenAI-compatible APIs) strictly
validate these requirements and reject requests that violate the protocol
with 400 Bad Request errors.

This commit adds validation logic that:
- Removes assistant messages with incomplete tool calls (no responses)
- Enforces proper message sequencing by removing messages that interrupt
  the tool_call → tool_response sequence

Fixes issues where streaming interruptions or retry logic could leave
conversation history in an invalid state that violates the OpenAI protocol.
@github-actions


Thank you for your submission, we really appreciate it. Like many open-source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution. You can sign the CLA by posting a Pull Request comment in the format below.


I have read the CLA Document and I hereby sign the CLA


You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

Contributor

@chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

When an assistant message with multiple tool_calls is removed due to
incomplete responses, we must also remove any tool responses that
were already received for that assistant message. Otherwise, those
tool responses become orphaned (no preceding tool_calls), which
still violates the OpenAI protocol.

Example scenario that this fixes:
- Assistant makes 2 tool calls: [call_A, call_B]
- Only call_A gets a response (call_B fails/times out)
- Previous logic: Removed assistant message, left tool response for call_A
- Result: Orphaned tool response → still violates protocol
- New logic: Remove both assistant message AND tool response for call_A
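
A hedged sketch of that combined removal (function and parameter names are hypothetical; `unanswered` is the tool-call ID difference computed in the Phase 1 sketch):

```rust
use std::collections::HashSet;
use serde_json::Value;

// When an assistant message carries any unanswered tool call, remove the
// message AND the responses that did arrive for its other calls, so no tool
// response is left orphaned.
fn remove_incomplete_and_orphans(messages: &mut Vec<Value>, unanswered: &HashSet<String>) {
    // All IDs carried by assistant messages slated for removal — e.g. for
    // tool_calls [call_A, call_B] with only call_A answered, both IDs land here.
    let mut doomed: HashSet<String> = HashSet::new();
    for msg in messages.iter() {
        if let Some(calls) = msg.get("tool_calls").and_then(Value::as_array) {
            let ids: Vec<String> = calls
                .iter()
                .filter_map(|c| c.get("id").and_then(Value::as_str))
                .map(str::to_string)
                .collect();
            if ids.iter().any(|id| unanswered.contains(id)) {
                doomed.extend(ids);
            }
        }
    }
    messages.retain(|msg| {
        // Drop the assistant message that carries a doomed call...
        let doomed_call = msg
            .get("tool_calls")
            .and_then(Value::as_array)
            .map_or(false, |calls| {
                calls.iter().any(|c| {
                    c.get("id")
                        .and_then(Value::as_str)
                        .map_or(false, |id| doomed.contains(id))
                })
            });
        // ...and also any tool response answering one of its calls.
        let doomed_response = msg
            .get("tool_call_id")
            .and_then(Value::as_str)
            .map_or(false, |id| doomed.contains(id));
        !doomed_call && !doomed_response
    });
}
```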
@etraut-openai
Collaborator

Thanks for the contribution. We've updated our contribution guidelines to clarify that we're currently accepting contributions for bugs and security fixes, but we're not generally accepting new features at this time. We need to make sure that all new features compose well with both existing and upcoming features and fit into our roadmap. If you would like to propose a new feature, please file or upvote an enhancement request in the issue tracker. We will generally prioritize new features based on community feedback.

@k-l-lambda
Author

> Thanks for the contribution. We've updated our contribution guidelines to clarify that we're currently accepting contributions for bugs and security fixes, but we're not generally accepting new features at this time. We need to make sure that all new features compose well with both existing and upcoming features and fit into our roadmap. If you would like to propose a new feature, please file or upvote an enhancement request in the issue tracker. We will generally prioritize new features based on community feedback.

This is a bug fix. Don't you think this issue is a bug?

@etraut-openai reopened this Nov 25, 2025
@etraut-openai added the chat-endpoint label (Bugs or PRs related to the chat/completions endpoint (wire API)) Nov 25, 2025
@jxy
Contributor

jxy commented Nov 26, 2025

This is similar to #7038

@etraut-openai
Collaborator

This PR is attempting to address the same issue as PR #7038. I think the other PR is a cleaner approach, so I'm going to close this one. You're welcome to review #7038 and provide feedback if you have any suggestions.

Thanks again for taking the time to post the PR.
