Replies: 3 comments 4 replies
-
I created a prompt to resolve this problem.
-
I have tackled this issue somewhat in my fork. Part of the problem is that the chat history doesn't seem to be kept as a real conversation, either in this repo or in the forks I have looked at. Because they weren't storing it well, I ended up putting the messages directly into the history as chat messages, and now recall is essentially perfect. There's also the issue that the architecture sets two prompts, and the output of the first (condensing) prompt may not contain the details or commands you actually wanted. So I have both altered that first condensing prompt in various ways and added a way to inject a second prompt inline, via an in-sentence [PROMPT] your command directive on the second prompt.

If you talk with my chatbot right now you can see both questions answered with context plus PDF retrieval, and if you ask for stories you'll see topics and plot lines continuing into the next stories. https://github.com/groovybits/gaib/wiki shows the interface (I have a lot of output transforms on top of basic chat output :) ). On https://twitch.tv/groovyaibot you can talk with my fork through chat with !question: ask it about Buddhist topics or general theological ones and see how good the context and document retrieval are, even in the cases you mention.

The code focused on this is here: https://github.com/groovybits/gaib/blob/main/utils/makechain.ts. It can do fairly advanced things, like chaining three iterations of the question to go deeper, and building story arcs when a story is requested. The interesting part is the history setup: https://github.com/groovybits/gaib/blob/main/pages/api/chat.ts#L239. Nuances like a system prompt are not handled in the other forks I've seen. The main repo here isn't using the right format, and it doesn't load or prime the history with the actual messages plus the system prompt and a pre-question in the history. Those are the tricks that get GPT to be who you want it to be and to keep things in context.
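The priming trick described above can be sketched roughly like this. This is a minimal illustration using plain OpenAI-style chat messages; the `buildPrimedHistory` helper, the `Message` shape, and the example strings are my own assumptions, not the actual gaib code:

```typescript
// Sketch: prime the chat history with a system prompt and a canned
// pre-question/answer pair, then replay the real conversation as
// alternating user/assistant messages instead of one flattened blob.
type Role = "system" | "user" | "assistant";

interface ChatMessage {
  role: Role;
  content: string;
}

function buildPrimedHistory(
  systemPrompt: string,
  preQuestion: string,
  preAnswer: string,
  pastTurns: Array<{ question: string; answer: string }>,
  newQuestion: string
): ChatMessage[] {
  const messages: ChatMessage[] = [
    // System prompt establishes the persona up front.
    { role: "system", content: systemPrompt },
    // A pre-question/answer pair primes the model's behavior.
    { role: "user", content: preQuestion },
    { role: "assistant", content: preAnswer },
  ];
  // Replay each real turn as proper chat messages so the model sees
  // the conversation the same way it was produced.
  for (const turn of pastTurns) {
    messages.push({ role: "user", content: turn.question });
    messages.push({ role: "assistant", content: turn.answer });
  }
  // Finally append the new question.
  messages.push({ role: "user", content: newQuestion });
  return messages;
}
```

The resulting array can be passed straight to a chat-completion call; the point is that the system prompt and history survive as structured messages rather than being squashed into a single prompt string.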
I see this code was lifted from the Chroma fork, which is close, but it didn't work for me functionally. Perhaps that was my issue, but from what I could tell, neither method does what my current setup does, which handles context well in all cases. In https://github.com/davideuler/gpt4-pdf-chatbot-langchain-chromadb/blob/main/pages/api/chat.ts#L13 the labels don't correctly match the labels originally set on the messages in https://github.com/davideuler/gpt4-pdf-chatbot-langchain-chromadb/blob/main/pages/index.tsx#L107. So in both cases this doesn't work properly with the newer LangChain versions, from what I found. The older one may work as-is, but these repos have moved to newer versions, and in general they still aren't grooming and priming the history to get the QA (or story) output I have figured out. I worked really hard on this part; it's frustrating when you want a chatbot to sound somewhat sane :)
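The label-mismatch failure mode can be shown with a small sketch. I'm assuming the frontend tags messages with a `type` field like `'userMessage'`/`'apiMessage'` (an assumption about that repo's UI shape, not verified code); if the labels checked on the API side don't match what the UI actually sets, the history silently comes through empty:

```typescript
// Sketch: normalize frontend message objects into the
// [question, answer] history pairs a conversational retrieval chain
// expects. The 'userMessage'/'apiMessage' labels are assumptions about
// the frontend message shape, for illustration only.
interface UiMessage {
  type: "userMessage" | "apiMessage";
  message: string;
}

function toHistoryPairs(messages: UiMessage[]): Array<[string, string]> {
  const pairs: Array<[string, string]> = [];
  for (let i = 0; i < messages.length - 1; i++) {
    // Pair each user question with the API answer that follows it.
    if (
      messages[i].type === "userMessage" &&
      messages[i + 1].type === "apiMessage"
    ) {
      pairs.push([messages[i].message, messages[i + 1].message]);
      i++; // skip the answer we just consumed
    }
  }
  return pairs;
}
```

If the API handler instead tested for labels like `"user"` / `"assistant"` while the UI sends `"userMessage"` / `"apiMessage"`, every comparison would fail and `toHistoryPairs` would return `[]` — which is exactly the "history isn't being kept" symptom described above.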
-
The big personality / prompt config :) https://github.com/groovybits/gaib/blob/main/config/personalityPrompts.ts
-
In this architecture GPT is called twice: the first call generates a standalone question from the new question and the chat history. But if I ask a question that's totally unrelated to the chat history, the standalone question will probably be nonsense, since its semantics get twisted by the chat history.
In my experiments, the final answers are still related to the old chat history, but the expected behavior is something like "I don't understand the new question, please ask related questions". Is there any way to resolve this?
I asked on ChatPDF and it can handle this problem. Does anyone know how to achieve that?
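One common workaround (a sketch under my own assumptions, not code from this repo or from ChatPDF) is to give the condensing prompt an explicit escape hatch: instruct the model to return the follow-up question verbatim when the history is unrelated, so the second call isn't fed a twisted standalone question. The wording and the `fillCondensePrompt` helper are illustrative:

```typescript
// Sketch: a condense prompt with an escape hatch for unrelated
// questions. Tune the wording for your model.
const CONDENSE_PROMPT = `Given the following conversation and a follow up question,
rephrase the follow up question to be a standalone question.
If the follow up question is NOT related to the conversation,
return the follow up question unchanged and do not mix in any
context from the conversation.

Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:`;

// Fill in the template variables before the first LLM call.
function fillCondensePrompt(chatHistory: string, question: string): string {
  return CONDENSE_PROMPT.replace("{chat_history}", chatHistory).replace(
    "{question}",
    question
  );
}
```

With this escape hatch, an off-topic question passes through the first call intact, and the answering prompt can then be instructed to say it doesn't understand when the retrieved context doesn't match.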