Extending YAI's Backend LLM #79
Hey 👋 Thanks for your interest in Yai :) As far as I can see, there is an unofficial Go library for interacting with Perplexity, but it seems to have been archived just today (no idea why). Judging from the Python example you linked, we would interact with this engine via websockets, so this should be possible. I'll tag this issue as an enhancement idea 👍
As far as I know, litellm supports Perplexity AI, so if you merge PR #92, litellm can be used as a proxy for Perplexity AI.
Thanks @jaredmontoya :) Since YAI is written in Go, I looked for a groq-go implementation, but I only found this. I think the easiest approach would be to call external Python functions from Go. Thanks and all the best!
I like that you thought of a way to avoid cramming dependencies into the binary, but Python user code is not a very different solution from just running litellm, which is also written in Python.

litellm is basically a band-aid that works for everything that refuses to adopt the OpenAI API format. In my opinion, everyone should eventually adopt the OpenAI API specification because of its wide software support and the resulting compatibility between everything, and anyone who refuses to do so should not expect much usage of their service. Adding support for something non-generic only encourages others to make something non-generic as well and hope that it too is supported by everyone, which then bloats all software that tries to support every available backend. litellm's goal is to become the sacrifice that gets bloated beyond comprehension so that other projects don't have to, while still achieving compatibility with everything.

In a nutshell, I think it would be better to run litellm as a proxy than to embed Python user code.
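To make the argument above concrete, here is a minimal Go sketch of why OpenAI-compatible backends are attractive: the same client code works against OpenAI, a local Ollama server, or a litellm proxy, with only the base URL changing. The function name and the example ports are my own assumptions for illustration, not YAI's actual code.

```go
package main

import (
	"fmt"
	"strings"
)

// chatCompletionsURL builds the chat-completions endpoint for any
// OpenAI-compatible backend; only the base URL differs between providers.
func chatCompletionsURL(baseURL string) string {
	return strings.TrimRight(baseURL, "/") + "/v1/chat/completions"
}

func main() {
	// The same request-building code targets three different backends
	// just by swapping the base URL.
	for _, base := range []string{
		"https://api.openai.com",
		"http://localhost:11434", // Ollama's OpenAI-compatible server (default port)
		"http://localhost:4000",  // a hypothetical local litellm proxy
	} {
		fmt.Println(chatCompletionsURL(base))
	}
}
```

This is exactly the property that a non-generic websocket backend would break: every extra protocol means extra client code in every consumer.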
I am also interested in contributing. Ollama's built-in OpenAI compatibility in particular seems like the lowest-hanging fruit. I'm willing to contribute if someone points me (us) in the right direction.
@erkkimon for the time being, you can build this program from source on PR #92 and set http://localhost:11434 as the `openai_proxy`:

```json
{
  "openai_key": "sk-xxxxxxxxx",
  "openai_model": "mistral:7b",
  "openai_proxy": "http://localhost:11434",
  "openai_temperature": 0.2,
  "openai_max_tokens": 1000,
  "user_default_prompt_mode": "exec",
  "user_preferences": ""
}
```

As for custom prompts, you will either have to edit hard-coded strings or extend the existing config system to make prompts configurable. I don't know if it's just me, but it feels like nothing has changed since the last time I looked at this repository. I hope this project is not abandoned.
I tried this and it is not working. How did you get it to work?
Hi there!
First of all, let me say thanks! yai is my go-to CLI AI so far, and I like it very much :)
I wanted to share this relatively unknown repo with you, which I would love for you to support:
https://github.com/Ruu3f/perplexityai
It uses Perplexity as a backend and works fantastically out of the box: free, no API keys needed
(and I have not hit any limits after a lot of usage).
I would really love it if support could be added in the app, or even just in the yai.json.
Ideally, make it generic and easier in the future to use other APIs,
but I would love guidance on how to integrate @Ruu3f/perplexityai into yai.
I am willing to put in the pull request if you point me in the right direction @ekkinox.
Thanks a lot!
All the best!
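The "make it generic" idea above could take the shape of a small backend abstraction in Go. Everything here (interface name, method signatures, the stand-in implementation) is a hypothetical sketch of the design direction, not YAI's real architecture:

```go
package main

import "fmt"

// Backend is a hypothetical abstraction: each provider (OpenAI, Ollama,
// Perplexity, ...) would implement it, and the rest of the app would
// depend only on this interface.
type Backend interface {
	Name() string
	Complete(prompt string) (string, error)
}

// echoBackend is a stand-in implementation used only for this sketch.
type echoBackend struct{}

func (echoBackend) Name() string { return "echo" }

func (echoBackend) Complete(prompt string) (string, error) {
	return "echo: " + prompt, nil
}

func main() {
	var b Backend = echoBackend{}
	out, err := b.Complete("hello")
	if err != nil {
		panic(err)
	}
	fmt.Println(b.Name(), out)
}
```

With an interface like this, adding a new provider would mean writing one new implementation rather than touching every call site.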