[FR]: Extend compatibility to OpenAI compatible services #237
Comments
Thanks for the idea, we are thinking of changing our internal logic to rely on the new [...]. Btw, if you have Ollama + Deepseek running locally, it should already be supported.
@calderonsamuel
Hi @mustafamohsen. #239 intends to solve this request. However, the Deepseek API currently does not allow me to add any billing information to give it a try, and I can't merge anything without confirming it works as expected. But you can start using it whenever you please! Install from the PR's branch with `pak::pak("MichelNivard/gptstudio#239")` and set the `OPENAI_API_URL` environment variable to your provider's base URL. Let me know if it works!
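For concreteness, here is a hedged sketch of the full setup being suggested (values are placeholders, and note that per a later comment in this thread the variables need to live in `~/.Renviron`, not just the current session):

```r
# Install the PR branch once:
pak::pak("MichelNivard/gptstudio#239")

# Then add the provider details to ~/.Renviron and restart R.
# Values below are placeholders:
#   OPENAI_API_KEY=sk-xxxxxxxx
#   OPENAI_API_URL=https://api.deepseek.com
```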
Hi @calderonsamuel
@calderonsamuel I'm away for a while, so I may not be able to test it until next weekend. However, I took a peek at the diff and didn't find anything related to model selection, so I doubt it will work as-is.
In my company we have an LLM service that uses the OpenAI protocol, so I can test it with that. I'll try it out on Monday. But to echo @mustafamohsen, we would definitely need the params for model selection.
Thanks for the comments, everyone. Each service, including OpenAI, already has an implementation of a call to the "models" endpoint, which populates the "models" dropdown in the settings panel. As I understand it, Deepseek has the exact same endpoint. So #239 is not trying to add a new service, but to take whatever exists for OpenAI and re-route it to Deepseek's API. @mustafamohsen I appreciate the API key offer, but I don't think I should have that power 😿. I'm not actually the official maintainer of the package, just a trusted collaborator (with merge privileges).
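For readers unfamiliar with that endpoint, here is a minimal sketch (not gptstudio's actual internals) of fetching the model list from an OpenAI-compatible `/models` endpoint with httr2; the env var names match the ones used in this thread, everything else is illustrative:

```r
library(httr2)

# List model ids from an OpenAI-compatible /models endpoint.
list_models <- function(base_url = Sys.getenv("OPENAI_API_URL", "https://api.openai.com/v1"),
                        api_key  = Sys.getenv("OPENAI_API_KEY")) {
  resp <- request(base_url) |>
    req_url_path_append("models") |>
    req_auth_bearer_token(api_key) |>
    req_perform() |>
    resp_body_json()
  # OpenAI-compatible services return the models under `data`, each with an `id`
  vapply(resp$data, function(m) m$id, character(1))
}
```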
I tried this out yesterday, and wasn't able to get it to work. It seems to be something to do with the SSL certificate, which I'm not sure how to specify in the gptstudio API.
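(For anyone hitting the same wall: gptstudio builds its requests with httr2, so a corporate CA bundle can in principle be passed through curl options. A hedged sketch, not a gptstudio feature; the bundle path is illustrative:)

```r
library(httr2)

# Point libcurl at a custom CA bundle for one request (path is illustrative).
resp <- request(Sys.getenv("OPENAI_API_URL")) |>
  req_url_path_append("models") |>
  req_auth_bearer_token(Sys.getenv("OPENAI_API_KEY")) |>
  req_options(cainfo = "/path/to/corporate-ca-bundle.pem") |>
  req_perform()
```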
So here it's working with [...]. Another thing I found odd was what happened when I tried to build the skeleton request:
So it's respecting the env var for `OPENAI_API_KEY`, but it seems `OPENAI_API_URL` is not changing the skeleton's `url`. So my theory is that either the URL specification is not being updated to respect changes in the `OPENAI_API_URL` env var, or something is off in the CA specification.
Thanks for this report @stevegbrooks. Please confirm that you are using the branch from #239:

```r
skeleton <- gptstudio:::gptstudio_skeleton_build.gptstudio_request_openai()

# If TRUE, the skeleton is working as expected
skeleton$api_key == Sys.getenv("OPENAI_API_KEY")
#> [1] TRUE

# This is expected to be FALSE
skeleton$url == Sys.getenv("OPENAI_API_URL")
#> [1] FALSE

# Because the endpoint is appended to the original URL
skeleton$url
#> https://api.deepseek.com/chat/completions

# You should get the same commit reference
sessioninfo::package_info("gptstudio", dependencies = FALSE)
#> package   * version    date (UTC) lib source
#> gptstudio   0.4.0.9009 2025-02-19 [1] Github (MichelNivard/gptstudio@718b18c)
```

Created on 2025-02-26 with reprex v2.1.1
Hi, thanks for the clarification. I wasn't on the right commit ref. After getting onto the right version, it still wasn't working. After some troubleshooting I realized that if I set [...]. Basically, the skeleton for the API call requires `OPENAI_API_URL` to be set in the .Renviron so that it's present for the [...]. When I then look at the available models, however, I get this result:
As a user, I want to be able to filter for the models that I think the chat app should use, especially including our Claude 3.5 Sonnet deployment. I shouldn't be forced to choose only from models that start with "gpt".
I can think of two strategies here: [...]
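One way the filtering asked for above could look (a sketch only; `models` is assumed to be the character vector of ids returned by the provider's `/models` endpoint, and the function name is hypothetical):

```r
# Filter the available model ids with a user-supplied regex instead of a
# hard-coded "gpt" prefix, so e.g. Claude deployments stay selectable.
filter_models <- function(models, pattern = ".*") {
  grep(pattern, models, value = TRUE)
}

# filter_models(models, "^gpt|claude")
```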
What would you like to have?
There are plenty of LLM providers that are compatible with the OpenAI API (e.g. Deepseek). It would add more versatility to have a generic OpenAI-compatible provider that accepts a base URL, an API key, and a model as inputs, and makes calls accordingly. This approach is implemented by the VS Code extension Cline, as well as others.
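A minimal sketch of what such a generic provider boils down to (function and argument names are illustrative, not gptstudio's API), assuming the service follows the OpenAI chat-completions protocol:

```r
library(httr2)

# One function, three inputs: any OpenAI-compatible service can be targeted.
chat_completion <- function(base_url, api_key, model, prompt) {
  request(base_url) |>
    req_url_path_append("chat/completions") |>
    req_auth_bearer_token(api_key) |>
    req_body_json(list(
      model    = model,
      messages = list(list(role = "user", content = prompt))
    )) |>
    req_perform() |>
    resp_body_json()
}

# e.g. the same code pointed at Deepseek:
# chat_completion("https://api.deepseek.com", Sys.getenv("DEEPSEEK_API_KEY"),
#                 "deepseek-chat", "Hello!")
```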