How is the memory handled? #140
sagarspatil
started this conversation in
General
Replies: 1 comment · 4 replies
-
It sends the entire conversation to the model. If the conversation exceeds 8,000 tokens (for GPT-4), the oldest messages are truncated until it fits under the 8,000-token limit.
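That truncation step might look roughly like the sketch below. This is a minimal illustration, not the project's actual code; `count_tokens` here is a whitespace-based stand-in for a real tokenizer (such as `tiktoken` for OpenAI models), and the message format mirrors the Chat Completions role/content shape:

```python
def count_tokens(text: str) -> int:
    # Placeholder tokenizer: one "token" per whitespace-separated word.
    # A real implementation would use the model's tokenizer (e.g. tiktoken).
    return len(text.split())


def truncate_conversation(messages, max_tokens=8000):
    """Drop the oldest non-system messages until the total fits the budget."""
    kept = list(messages)

    def total(msgs):
        return sum(count_tokens(m["content"]) for m in msgs)

    while kept and total(kept) > max_tokens:
        # Preserve the system prompt (index 0) if present; otherwise
        # drop the oldest message outright.
        if kept[0]["role"] == "system" and len(kept) > 1:
            kept.pop(1)
        else:
            kept.pop(0)
    return kept
```

The practical consequence is the one described above: anything dropped by this kind of sliding window is simply no longer visible to the model, so context beyond the token limit is lost rather than summarized or embedded.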
-
Hello,
How does the memory feature work? I know the data is stored locally, but do you embed the conversations and share them with the model? How can it refer to a conversation that is, say, beyond 8,000 tokens (in the case of GPT-4), or will it lose context?