BionicGPT configuration for ollama backend #38
This was one of the most, if not the most, complicated services to set up 😅 There's a fully-fledged reverse proxy and lots of sub-services requiring careful routing. All in all, I wouldn't be surprised if that setup stopped working completely, just from drift with the upstream Docker images and the expected way they have to be put into the compose file. I should more appropriately mark BionicGPT as only partially supported (its settings are stored only in the DB, so any pre-provisioned configs have to be sent there).

With that said, regarding the posted error: in cases like this, it helps to check which URLs Harbor resolves for the service:

```
everlier@pop-os:~$ h url ollama
http://localhost:33821
everlier@pop-os:~$ h url -i ollama
http://harbor.ollama:11434
everlier@pop-os:~$ h url -a ollama
http://192.168.0.136:33821
```

There's also
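As a quick sanity check for which of these addresses actually answers, Ollama's `/api/tags` endpoint (it lists the pulled models) can be queried directly. A minimal sketch, assuming `curl` is available on the host:

```bash
# Host-published URL: should respond from the host itself
curl -s "$(harbor url ollama)/api/tags"

# LAN URL: same service, also reachable from other machines on the network
curl -s "$(harbor url -a ollama)/api/tags"

# The internal URL (http://harbor.ollama:11434) only resolves inside the
# Harbor compose network, so it is the address other Harbor services,
# BionicGPT included, need to be pointed at.
```

If the first command fails, the problem is with Ollama itself rather than with BionicGPT's configuration.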
Thank you again, I was able to get the chat working by setting the domain of the model to

For some reason, not all models work. I was able to get

I also tried to get RAG working by setting up a dataset and modifying the assistant to have access to it. When using the default embedding model I am getting the following error:

and when I create a new embedding model and a new dataset that uses it for embeddings, and then edit the assistant to use this dataset, I get a different error that no chunks were received, which quickly changes to:
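To narrow down whether the embedding step itself is failing, the embedding model can be exercised on Ollama directly, outside of BionicGPT. A minimal sketch, assuming the model has already been pulled; `nomic-embed-text` below is only a placeholder name, not something from this thread:

```bash
# Ask Ollama for an embedding directly over its native API.
# A JSON vector in the response means the model works and the problem
# is somewhere between BionicGPT and Ollama, not in the model itself.
curl -s "$(harbor url ollama)/api/embeddings" \
  -H "Content-Type: application/json" \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```

If this returns an embedding but BionicGPT still reports that no chunks were received, the embedding model configured in BionicGPT (its name or domain) is the more likely culprit.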
In the BionicGPT documentation it is mentioned that it works with Ollama and OpenAI-compatible backends, and it is demonstrated running a local Gemma model. I could not find information on how to properly configure the settings of the BionicGPT frontend; the only thing I could find was the official documentation (https://bionic-gpt.com/docs/running-locally/ollama/). I tried to follow the steps by adding a model with a name listed by `ollama list`, the domain from `harbor url ollama`, and the API key set to `olllama`. Then I added an assistant with this LLM as a backend. When I submit a message into the chat I am getting a `connection refused` error:

and `harbor logs bionicgpt` outputs:
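For what it's worth, one way to separate a BionicGPT configuration problem from a plain connectivity problem is to call the same kind of endpoint by hand. A sketch, assuming Ollama's OpenAI-compatible API and a model name taken from `ollama list`; `llama3.1:8b` below is only a placeholder:

```bash
# From the host, against the URL reported by `harbor url ollama`.
# Ollama accepts any API key value, so the header is only a formality.
curl -s "$(harbor url ollama)/v1/chat/completions" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer olllama" \
  -d '{
        "model": "llama3.1:8b",
        "messages": [{"role": "user", "content": "ping"}]
      }'

# If this answers from the host but BionicGPT still reports
# "connection refused", the domain configured in BionicGPT is probably
# the localhost URL, which does not resolve from inside the BionicGPT
# containers; the in-network URL from `harbor url -i ollama` is the one
# those containers can reach.
```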