Which model do you have selected, and does this happen immediately when you try starting a chat, or only after a couple of seconds or minutes?
What's interesting to me is that on line 994
    try:
        lst = ollama.list()  # line 994
        for i in lst["models"]:
            ai_list.append(i["name"])
    except httpx.ConnectError:
        ai_list = "off"
it should simply fetch a list of all your available models, and if there aren't any it should return an empty list. If ollama isn't running, it should tell you that ollama isn't running. The fact that it does neither and instead throws "ResponseError" is odd to me...
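The handling above could be broadened to catch the server-side error as well. A minimal sketch, assuming the `ollama` and `httpx` Python packages; `model_names` and `available_models` are hypothetical helper names, not part of the project's code:

```python
def model_names(listing):
    # Pull the name of each model out of an ollama.list()-style response dict.
    # Returns an empty list when no models are installed.
    return [m["name"] for m in listing.get("models", [])]

def available_models():
    # Hypothetical wrapper mirroring the snippet on line 994. Imports are kept
    # local so the pure helper above works without ollama installed.
    import httpx
    import ollama
    try:
        return model_names(ollama.list())
    except (httpx.ConnectError, ollama.ResponseError):
        # ConnectError: the ollama server isn't reachable at all.
        # ResponseError: the server answered, but reported an error itself.
        return "off"
```

Catching `ollama.ResponseError` here would at least stop the traceback and let the app fall back to its "ollama is off" path, even if it doesn't explain why the server is erroring.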
I currently can't replicate this error on my end, but I have seen somewhat similar issues here haotian-liu/LLaVA#1666 and here ollama/ollama#2384; in both instances they're using quite unusual models.
If you're using a VPN then that may be interfering with it.
If you're not, then I'm not 100% sure; I'll need to try to replicate it on my end to see if I can figure out the cause. It could be that the default host location for ollama is being used by another application/service, and ollama has lower priority.
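One quick way to check the port-conflict theory: see whether anything is already listening on ollama's default port (11434) before starting ollama. A small sketch (the helper name is mine, and a reply on the port doesn't prove it's ollama answering):

```python
import socket

def port_in_use(host="127.0.0.1", port=11434):
    # 11434 is ollama's default port. Returns True if *some* service accepts
    # a TCP connection there -- it could be ollama, or a conflicting app.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) == 0
```

If this returns True while ollama is stopped, another service holds the port, which would explain the unexpected response.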
Followed the tutorial, and it keeps giving this error: