Replies: 3 comments
-
Hi, OpenLLaMA 3B is more of an "auto-complete" (base) model than an instruction-tuned model. It can respond to subjective questions, but you need to tell it what to do so that it knows how to handle them. One way is to frame it in the prompt, e.g. "You're an A.I. What do you like to eat?" The `-n 20` parameter in your `./main` command limits the length of the response, though it shouldn't limit it to no response at all. Here's an example of the same prompt on my system; notice the model adds onto what I asked instead of addressing the question like an individual:
I suggest removing the `-n` parameter or increasing its value.
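For illustration, an invocation along those lines might look like this (the model path and prompt are hypothetical placeholders; `-n` is llama.cpp's token-generation cap):

```
# Hypothetical paths/prompt; adjust to your setup.
MODEL=./models/open-llama-3b.bin
PROMPT="You're an A.I. What do you like to eat?"

# -n caps how many tokens are generated; 20 is usually too few for a
# full answer. Raise it, or drop the flag to use the default.
./main -m "$MODEL" -p "$PROMPT" -n 512
```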
-
I cannot replicate the issue with build . What is the hash of your model file? Mine is . I have my files on HF as well, if you want to compare.
It will still generate something; for me it just generates more questions similar to the input. The real tell is that it doesn't even reach the message.
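To compare model files, one quick way is to hash them and check the digests against each other. A minimal sketch, using a stand-in file since the actual model path from this thread isn't known:

```shell
# Stand-in file; replace /tmp/model.bin with your actual model path.
printf 'dummy model data' > /tmp/model.bin

# sha256sum prints "<hash>  <filename>"; compare the hash column
# against a known-good copy (e.g. the files on HF).
sha256sum /tmp/model.bin | awk '{print $1}'
```

If the hashes differ, the file is likely corrupted or a different quantization, which would explain behavior that can't be replicated elsewhere.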
-
Yeah, I figured it's a combination of things, but it does look like a crash now that you mention it. I forgot how informative and helpful Windows can be ;)
-
As you can see, it answered "hey" but did not even try to respond to my second question.