From 1685eec385f4d0f053f06da03c7d842b05cedbbe Mon Sep 17 00:00:00 2001
From: "MinWoo(Daniel) Park"
Date: Wed, 27 Dec 2023 14:00:27 +0900
Subject: [PATCH] doc: add faq

---
 documents/README_FAQ.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/documents/README_FAQ.md b/documents/README_FAQ.md
index 9dad3eb04..e627b89c4 100644
--- a/documents/README_FAQ.md
+++ b/documents/README_FAQ.md
@@ -145,7 +145,7 @@ Furthermore, since this package is an unofficial Python package that intercepts
 
 - Short answer: It seems unlikely. It might be possible, but it requires experimentation; if you find a solution, anyone can contribute through a pull request. You can attempt to fix the session by referring to the contents of a reusable session, or try to pin the returned values with a context ID. Fundamentally, however, users need an option to fix the seed value, as in OpenAI's ChatGPT, to address this issue. Bard currently exposes few options to users, not even temperature or other basic settings, so it may take some time. To make the model remember the conversation, you can 1) write code that summarizes the conversation and stores it in a database, refreshing the queue roughly every 3-5 turns, and 2) send the summarized conversation and responses to Bard along with each new question (see the sketch after this patch). Other models such as ChatGPT also remember conversations through similar methods (with more diverse solutions).
 
-In conclusion, users cannot adjust the seed option in model inference, and some additional coding work is needed to remember the conversation.
+In conclusion, users cannot adjust the seed option in model inference, and some additional coding work is needed to remember the conversation. However, a reusable session did allow previous responses to be retrieved, which shows some effectiveness. Maintaining full context as GPT does would require a large database and substantial resources, and even models like OpenAI's GPT or Meta's LLaMA-2 struggle to answer consistently. (See the ghost attention technique and some appendix examples in the LLaMA-2 paper; making a model operate as a single persona is difficult and costly, so general-purpose models like Bard or GPT cannot be expected to function like a dedicated counselor.) If anyone has made progress on this, contributions are welcome.
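
Below is a minimal sketch of the summarize-and-resend memory workaround described in the FAQ entry above. It assumes the `bardapi` package's `Bard` client with a valid `__Secure-1PSID` cookie token (e.g. exported as `_BARD_API_KEY`); the `ask` helper, the turn limit, and the summarization prompt are illustrative choices, not part of the package.

```python
from collections import deque

from bardapi import Bard  # unofficial client; assumed to read the _BARD_API_KEY env var

MAX_TURNS = 4  # fold the recent turns into the summary roughly every 3-5 turns

bard = Bard()
recent_turns: deque = deque(maxlen=MAX_TURNS)  # (question, answer) pairs
summary = ""  # running summary of older turns; persist to a database in practice


def ask(question: str) -> str:
    """Send a question to Bard along with the rolling summary and recent turns."""
    global summary
    history = "\n".join(f"Q: {q}\nA: {a}" for q, a in recent_turns)
    prompt = ""
    if summary:
        prompt += f"Summary of the conversation so far: {summary}\n"
    if history:
        prompt += history + "\n"
    prompt += f"Q: {question}\nA:"

    answer = bard.get_answer(prompt)["content"]
    recent_turns.append((question, answer))

    # Once the queue is full, ask the model itself to compress it into the summary.
    if len(recent_turns) == MAX_TURNS:
        turns = "\n".join(f"Q: {q}\nA: {a}" for q, a in recent_turns)
        summary = bard.get_answer(
            f"Summarize this conversation in a few sentences:\n{summary}\n{turns}"
        )["content"]
        recent_turns.clear()

    return answer


# Usage: later questions carry the summarized context forward.
print(ask("My dog is named Fanta. Remember that."))
print(ask("What is my dog's name?"))
```

This only approximates memory: detail is lost at each summarization step, which is why the FAQ notes that maintaining full GPT-like context would require far more storage and resources.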