
Translation of Stop Reason for Anthropic -> OpenAI #459

Open
functorism opened this issue Jul 15, 2024 · 11 comments
Comments

@functorism

finish_reason: response.stop_reason,

OpenAI SDKs with strict response validation (such as https://docs.rs/async-openai/latest/async_openai/) fail because the Anthropic stop reason is not mapped to a valid OpenAI finish reason.
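For concreteness, here is a minimal sketch of the kind of best-effort mapping being requested. The `stop_reason` values (`end_turn`, `stop_sequence`, `max_tokens`, `tool_use`) and `finish_reason` values (`stop`, `length`, `tool_calls`) come from the public Anthropic and OpenAI API docs; the function name and fallback choice are illustrative, not Portkey's actual implementation.

```typescript
// Finish reasons accepted by strict OpenAI response validators.
type OpenAIFinishReason = "stop" | "length" | "tool_calls" | "content_filter";

// Best-effort translation of an Anthropic stop_reason to an OpenAI finish_reason.
function mapAnthropicStopReason(stopReason: string | null): OpenAIFinishReason {
  switch (stopReason) {
    case "end_turn":
    case "stop_sequence":
      return "stop"; // natural end of turn, or a custom stop sequence was hit
    case "max_tokens":
      return "length"; // token limit reached
    case "tool_use":
      return "tool_calls"; // model requested a tool invocation
    default:
      return "stop"; // fallback for unknown or newly introduced values
  }
}
```

With a mapping like this, strict validators never see a value outside the OpenAI enum, at the cost of losing provider-specific detail.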

@narengogi
Collaborator

We have PRs pending review to fix this. We'll get them merged after the necessary changes.

@vrushankportkey
Collaborator

Hey @functorism, @narengogi & team have decided not to take this up. The reason is explained in the linked PR. What are your thoughts?

@functorism
Author

functorism commented Oct 18, 2024

Well, we're currently running a patch/fork of https://docs.rs/async-openai/latest/async_openai/ to get around this issue, so I can't say I think it's the right move - but I understand.

One way to view it is that Portkey is not an OpenAI compatible gateway unless things like these behave in expected ways.

@vrushankportkey
Collaborator

Hmm. @ayush-portkey thoughts?

@note89

note89 commented Nov 27, 2024

@vrushankportkey What are your plans here?

@vrushankportkey
Collaborator

Thorny issue. We haven't updated our thinking here yet, even though I totally understand that it makes the Gateway behave in unexpected ways.

@vrushankportkey
Collaborator

Would it be a better idea to have our own Rust SDK at some point? We ran into the same issue with OpenAI's official C# library recently.

@note89

note89 commented Nov 27, 2024

Right now, I'm assuming that OpenAI's API is the standard to which Portkey is trying to make all other LLM APIs conform.
Is that Portkey's goal?

Something needs to be done here to convert the stop reason:

finish_reason: response.stop_reason,

It's a good thing it's centralized, though, so it should be easy to fix.

Here is the solution for this particular one.
https://github.com/braintrustdata/braintrust-proxy/blob/82c1a6732fe31db44417fdccddd23f1c9d7e494a/packages/proxy/src/providers/anthropic.ts#L284-L285

@functorism
Author

If you're facing issues with supporting an official OpenAI SDK, doesn't that tell you concretely that you're not meeting expected compliance?

In my mind the answer is pretty straightforward: maintain best-effort mappings that ensure API spec compliance.

I also don't see an SDK as a solution. If the barrier to using Portkey is a Portkey SDK, that negates the point of Portkey being an API-level proxy; it becomes a library instead, competing with other popular solutions like LangChain.

@vrushankportkey
Collaborator

This is very helpful @functorism @note89, thank you. Tagging @roh26it & @ayush-portkey again for visibility, and we'll get back to you with more thoughts on this

@roh26it
Collaborator

roh26it commented Dec 20, 2024

We do have a tag for strictOpenAiCompliance

Maybe we should adhere to OpenAI unless that flag is turned off (it's off by default in our SDKs).

The dilemma we're facing is that other APIs have diverged more and more from the OpenAI spec, and we have to make a hard call on how to support all the various APIs without compromising on features. Prompt caching is one example.
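The flag-gated approach suggested above could be sketched roughly like this: translate only when strict compliance is requested, and pass the provider value through verbatim otherwise. The names and the fallback value here are illustrative assumptions, not Portkey's actual code.

```typescript
// Best-effort Anthropic stop_reason -> OpenAI finish_reason table
// (values taken from the public Anthropic and OpenAI API docs).
const STOP_REASON_MAP: Record<string, string> = {
  end_turn: "stop",
  stop_sequence: "stop",
  max_tokens: "length",
  tool_use: "tool_calls",
};

// When strictOpenAiCompliance is on, coerce to a valid OpenAI value;
// when it is off, preserve the provider-specific detail unchanged.
function finishReason(stopReason: string, strictOpenAiCompliance: boolean): string {
  if (!strictOpenAiCompliance) {
    return stopReason;
  }
  return STOP_REASON_MAP[stopReason] ?? "stop"; // best-effort fallback
}
```

This keeps strict-validating SDKs working by default while still letting callers opt out to see raw provider values.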
