Translation of Stop Reason for Anthropic -> OpenAI #459
Comments
We have PRs pending review to fix this. Will get them merged after the necessary changes.
Hey @functorism, @narengogi & team have decided not to take this up. The reason is explained in the linked PR. What are your thoughts?
Well, we're currently running a patch/fork of https://docs.rs/async-openai/latest/async_openai/ to get around this issue, so I can't say I think it's the right move - but I understand. One way to view it is that Portkey is not an OpenAI-compatible gateway unless things like this behave in expected ways.
Hmm. @ayush-portkey thoughts?
@vrushankportkey What are your plans here?
Thorny issue. We haven't updated our thinking here yet, even though I totally understand that it makes the Gateway behave in unexpected ways.
Would it be a better idea to have our own Rust SDK at some point? We ran into the same issue with OpenAI's official C# library recently.
Right now, I'm assuming that OpenAI's API is the standard to which Portkey is trying to make all other LLM APIs conform, so something needs to be done here to convert between the two. It's a good thing the translation is centralized, though, which makes it easy to fix. Here is the solution for this particular one.
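For illustration (not the commenter's actual patch), a minimal sketch of what such a stop-reason mapping could look like in the gateway's TypeScript. The function name, the value table, and the fallback to 'stop' are assumptions, not the real implementation in chatComplete.ts:

```typescript
// Hypothetical mapping from Anthropic stop_reason values to OpenAI
// finish_reason values; names and fallback behavior are assumptions.
type OpenAIFinishReason = 'stop' | 'length' | 'tool_calls' | 'content_filter';

const ANTHROPIC_TO_OPENAI_FINISH_REASON: Record<string, OpenAIFinishReason> = {
  end_turn: 'stop', // the model finished its turn naturally
  stop_sequence: 'stop', // a caller-supplied stop sequence was hit
  max_tokens: 'length', // output was truncated at the token limit
  tool_use: 'tool_calls', // the model stopped to invoke a tool
};

function toOpenAIFinishReason(stopReason: string | null): OpenAIFinishReason | null {
  // Streaming chunks before the final one carry no stop reason.
  if (stopReason === null) return null;
  // Best-effort fallback so strict clients never see an unknown value.
  return ANTHROPIC_TO_OPENAI_FINISH_REASON[stopReason] ?? 'stop';
}
```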
If you're facing issues with supporting an official OpenAI SDK, doesn't that tell you concretely that you're not meeting expected compliance? In my mind the answer is pretty straightforward: maintain best-effort mappings that ensure API spec compliance. I also don't see an SDK as a solution - if the barrier to using Portkey is using a Portkey SDK, that negates the point of Portkey being an API-level proxy. It becomes a library instead and would compete with other popular solutions like LangChain.
This is very helpful @functorism @note89, thank you. Tagging @roh26it & @ayush-portkey again for visibility, and we'll get back to you with more thoughts on this.
We do have a tag for this. Maybe we should adhere to OpenAI unless that check is turned off (off by default in our SDKs). The dilemma we're facing is that other APIs have diverged more and more from the OpenAI spec, and we have to make a hard call on how to support all the various APIs without compromising on features. Prompt caching is one example of this.
gateway/src/providers/anthropic/chatComplete.ts, line 397 (commit fe0899b)
OpenAI SDKs with strict response validation (such as https://docs.rs/async-openai/latest/async_openai/) fail because Anthropic stop reasons are not mapped to valid OpenAI stop reasons.
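To illustrate the failure mode (a hypothetical sketch, not code from async-openai or the gateway): strict clients model finish_reason as a closed set of the values OpenAI documents, so an Anthropic value passed through unmapped is rejected during response validation:

```typescript
// Illustrative only: a strict client accepts exactly the finish_reason
// values the OpenAI API documents and rejects anything else.
const VALID_FINISH_REASONS = ['stop', 'length', 'tool_calls', 'content_filter', 'function_call'] as const;
type FinishReason = (typeof VALID_FINISH_REASONS)[number];

function validateFinishReason(value: string): FinishReason {
  if (!(VALID_FINISH_REASONS as readonly string[]).includes(value)) {
    // An unmapped Anthropic value such as "end_turn" lands here, which is
    // roughly what a strict deserializer like async-openai's surfaces as an error.
    throw new Error(`unexpected finish_reason: ${value}`);
  }
  return value as FinishReason;
}

validateFinishReason('stop'); // ok
validateFinishReason('end_turn'); // throws: unexpected finish_reason
```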