Quickstart Installation #37
-
gpt-llama.cpp infers all of this information from your linked model. It relies on the fact that your model is inside your llama.cpp folder, and it detects your main.exe from that path. You just need to set your OPENAI_API_KEY to your model path.
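For reference, a minimal sketch of what that convention looks like from a client (assumptions on my part: gpt-llama.cpp is already running locally, the port below matches whatever your server printed at startup, and the model path is a placeholder for your own):

```python
import requests

# Hypothetical model path; point this at the actual model file inside your
# llama.cpp models folder. gpt-llama.cpp reads the "API key" as this
# filesystem path, and from it locates the llama.cpp folder and main.exe.
MODEL_PATH = r"C:\llama.cpp\models\7B\ggml-model-q4_0.bin"

response = requests.post(
    "http://localhost:443/v1/chat/completions",  # port is an assumption; use the one your server prints
    headers={"Authorization": f"Bearer {MODEL_PATH}"},
    json={
        "model": "gpt-3.5-turbo",  # the path above, not this field, selects the local model
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```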
-
Finally got it to work. I think there is a problem with the documentation. It still isn't clear to me what the function of the swagger app is, but you don't need it anyway. As far as I understand it now, the point is that you first get the server running without loading any instance of llama.cpp: without a model, without anything else running. This may all sound obvious to the authors, but the lack of this info can drive a semi-literate prospective user to despair. So if someone confirms that my description is correct, I would suggest adding some paragraphs to the Quickstart Installation.
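If it helps the next person, here is roughly how I'd sanity-check that state (a sketch under the same assumptions as above: local server, guessed port). The server should answer even though no model has been loaded yet, presumably because llama.cpp is only launched once a request arrives:

```python
import requests

try:
    # Any response at all (even a 404) means the server process is up;
    # the exact routes it serves may differ.
    r = requests.get("http://localhost:443/", timeout=5)  # port is an assumption
    print("gpt-llama.cpp server is up (HTTP", r.status_code, ")")
except requests.exceptions.ConnectionError:
    print("server is not running yet")
```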
-
I don't get it.
I know how to run llama.cpp's main.exe with my models under Windows. In that respect I get as far as testing the llama.cpp installation with the Quickstart Installation.
My problem starts at the Running gpt-llama.cpp section, because nowhere in it do I actually specify my model, where my llama.cpp is, the main.exe, or anything else from llama.cpp. How is gpt-llama.cpp supposed to know that a main.exe is running somewhere? Does main.exe have to be running with a model somewhere? That is not stated anywhere either. I have no idea what to do with this Quickstart Installation.
Does anyone understand it? Does anyone know what to do here?
I'm really confused.