Persist System Prompt #4453
ahmedsomaa asked this question in Help (unanswered)
Replies: 1 comment · 3 replies
-
I am working on a feature that requires something similar to Custom GPTs in ChatGPT: you provide the initial context and details to the model once, and can then ask whatever you want without resending that context with each request.

For this feature, I am using the LLM to categorize the type of difference between two objects, so I call `generateObject` with two prompts:

- `system`: provides the initial context, steps, and examples for the model.
- `prompt`: provides the test data the model applies those steps to.

When I checked the `usage` in the results, it seems the `system` prompt is sent on every request, which is not what we're looking for. Is there a way to provide this context to the model once? This is done server-side; consider it like an endpoint.
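For reference, a minimal sketch of the setup described above, assuming the Vercel AI SDK's `generateObject` with an OpenAI model; the Zod schema, category names, and prompt text are hypothetical stand-ins, since the real ones are not shown in the thread:

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

// Initial context: steps + examples for categorizing differences.
// Placeholder text; the real instructions are not in the thread.
const SYSTEM_PROMPT = `You categorize the type of difference between two objects.
Steps: ...
Examples: ...`;

// Hypothetical output schema; the real categories are not given.
const diffSchema = z.object({
  category: z.enum(['added', 'removed', 'changed']),
});

export async function categorizeDiff(testData: string) {
  const { object, usage } = await generateObject({
    model: openai('gpt-4o'),
    schema: diffSchema,
    system: SYSTEM_PROMPT, // initial context + steps + examples
    prompt: testData,      // the data the model applies the steps to
  });
  // `usage` reports the system prompt's tokens counted on every call.
  return { object, usage };
}
```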
-
Each API call to an LLM is completely separate and distinct from the others, with no carry-over. So if you want the LLM to follow a set of instructions, you need to include those instructions in every call. In practice, you will be sending a system prompt on every request, with very few exceptions. That also means you will be sending the entire message/conversation thread on every request. If your concern is the amount of data sent between your client and your server (not between your server and the LLM), you might find this article about message persistence helpful (just note that the server still sends the entire thread to the LLM afterwards).