
openai: fix get_text_response reading wrong role and truncating output#1609

Open
joaquinhuigomez wants to merge 1 commit into 0xPlaygrounds:main from joaquinhuigomez:fix/openai-get-text-response-role

Conversation

@joaquinhuigomez
Contributor

ProviderResponseExt::get_text_response() for the OpenAI chat completions provider matched Message::User on the response message and returned only the first text content item. OpenAI chat completions return the assistant reply as Message::Assistant, so this always returned None for real provider responses, and even if the match were corrected, returning only the first item would truncate multi-part assistant replies.

This patch matches Message::Assistant and concatenates all Text content items into the returned string.

Fixes #1602
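The fix can be sketched with simplified stand-in types (the actual `Message` and content types in the rig crate are richer; the enum shapes below are assumptions for illustration). The point is the two changes the description names: match the `Assistant` variant rather than `User`, and concatenate every `Text` item instead of returning only the first:

```rust
// Stand-in types; the real rig crate types differ in structure and fields.
enum Content {
    Text(String),
    Other, // non-text content (e.g. tool calls) is skipped
}

enum Message {
    User(Vec<Content>),
    Assistant(Vec<Content>),
}

// Corrected logic: match the assistant role and join all Text items.
// The buggy version matched Message::User (always None for real provider
// responses) and returned only the first text item (truncating output).
fn get_text_response(msg: &Message) -> Option<String> {
    match msg {
        Message::Assistant(items) => {
            let text: String = items
                .iter()
                .filter_map(|c| match c {
                    Content::Text(t) => Some(t.as_str()),
                    _ => None,
                })
                .collect();
            if text.is_empty() { None } else { Some(text) }
        }
        _ => None,
    }
}
```

Returning `None` for an assistant message with no text items is a judgment call in this sketch; the real patch may instead return an empty string.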




Development

Successfully merging this pull request may close these issues.

bug: OpenAI chat completion get_text_response() reads the wrong role and truncates multi-part output
