Getting 404s from locally running Ollama #298
-
I'm attempting to build a simple proof of concept for sending a message to deepseek-r1 running under Ollama (I'm not set on this model specifically; it's just the one I pulled because it supports tools and I want to play around with it). I have this initializer code:
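A minimal, illustrative sketch of such an initializer: the ollama_api_base key is the gem's documented setting, while the URL value (Ollama's default local port, without a /v1 suffix) is an assumption.

# config/initializers/ruby_llm.rb -- illustrative sketch, not the original code
RubyLLM.configure do |config|
  # Guessed value: Ollama's default local port, with no /v1 suffix on the path
  config.ollama_api_base = 'http://localhost:11434'
end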
Ollama is running and responds to curl locally:
But when I attempt to hit it with RubyLLM, I don't get anything:

chat = RubyLLM.chat(model: 'deepseek-r1', provider: 'ollama')
# WARN -- RubyLLM: Assuming model 'deepseek-r1:1.5b' exists for provider 'RubyLLM::Providers::Ollama'. Capabilities may not be accurately reflected.
chat.ask "Hello"
# `<main>': 404 page not found (RubyLLM::Error)

I've tried various combinations of specifying the model differently, such as …
-
Okay, progress. After looking at the Ollama server API logs I saw the 404 errors, so the request is reaching the server, but the endpoint the gem requests is incorrect. Adding this monkey patch let it hit the correct endpoint, and I can see it returned a response. The new problem is when it's parsing the response.
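The patch itself isn't shown above. Purely as an illustration of the general shape, and assuming module nesting and a completion_url method name that are guesses about the gem's internals rather than its confirmed API, something like this would steer chat requests onto Ollama's OpenAI-compatible /v1 route:

# Hypothetical sketch only: reopens an assumed provider module and overrides an
# assumed completion_url method so requests hit the /v1 path.
module RubyLLM
  module Providers
    module Ollama
      module Chat
        module_function

        def completion_url
          'v1/chat/completions'
        end
      end
    end
  end
end

The cleaner fix, suggested later in the thread, is to append /v1 to the configured base URL rather than patching the gem.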
-
The only place this code appears is in Message. It's defaulted, so I don't know why it would possibly be nil.
-
I suggest adding /v1 to your configured base URL as specified here -- https://rubyllm.com/configuration#ollama-api-base-ollama_api_base
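For reference, a minimal sketch of that configuration; the localhost URL and port are Ollama's usual defaults and are assumed here:

RubyLLM.configure do |config|
  # Point the gem at Ollama's OpenAI-compatible API; note the trailing /v1
  config.ollama_api_base = 'http://localhost:11434/v1'
end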