-
Examples_and_context is where you keep track of context, i.e. the conversation. Generally this needs to take the form [user prompt; model response], repeated as many times as desired, bearing in mind that once the model's context window is full it will start dropping the oldest text. Some services and models are very strict about this structure (Anthropic); others are more forgiving, e.g. if you provide two prompts or two responses in a row.

So you'd use a concatenate node to join the prompt and the response with the delimiter you've set as Advanced Prompt Enhancer's examples_delimiter (such as :: or |), and you'd tell the model to end its output with that same delimiter so the pairs can be chained. When I try to set this up to fill in automatically, ComfyUI won't accept the workflow (probably because of the circular connections). So instead I place the most recent context item's output next to the text field holding the Examples_and_context input, and paste each new context item from one to the other at the end of the queue. I added {{user}} and {{model}} tags for clarity; they're not necessary.
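To make the pairing concrete, here's a rough Python sketch of the bookkeeping I do by hand, not anything from the node itself. The names (append_turn, build_examples_and_context, MAX_CHARS) and the sample prompts are made up, and it assumes :: is what you set as examples_delimiter:

```python
# Hypothetical sketch of the context format described above -- not part of
# Advanced Prompt Enhancer. It builds the Examples_and_context string as
# repeated "{{user}} prompt :: {{model}} response ::" entries and drops the
# oldest pair once a rough character budget is exceeded.

DELIM = "::"      # must match Advanced Prompt Enhancer's examples_delimiter
MAX_CHARS = 8000  # made-up stand-in for the model's context window

history: list[tuple[str, str]] = []  # (user_prompt, model_response) pairs

def append_turn(user_prompt: str, model_response: str) -> None:
    """Record one completed exchange."""
    history.append((user_prompt, model_response))

def build_examples_and_context() -> str:
    """Concatenate history into the string pasted into Examples_and_context."""
    while history:
        text = "\n".join(
            f"{{{{user}}}} {p} {DELIM} {{{{model}}}} {r} {DELIM}"
            for p, r in history
        )
        if len(text) <= MAX_CHARS:
            return text
        history.pop(0)  # mimic the model dropping the oldest text first
    return ""

append_turn("Describe a foggy harbor at dawn.", "Mist curls over still water...")
print(build_examples_and_context())
# {{user}} Describe a foggy harbor at dawn. :: {{model}} Mist curls over still water... ::
```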
-
One added note: if you preface the tag in the tagger with the two colons ('::{{user}}'), then you don't have to tell the model to end its output with two colons.
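Continuing the (hypothetical) sketch above, moving the delimiter to the front of each tag means every new entry brings its own separator, so no trailing :: is needed from the model:

```python
# With a leading delimiter, each entry is self-separating:
# "::{{user}} prompt ::{{model}} response"
DELIM = "::"

def entry(user_prompt: str, model_response: str) -> str:
    return f"{DELIM}{{{{user}}}} {user_prompt} {DELIM}{{{{model}}}} {model_response}"

print(entry("Describe a foggy harbor.", "Mist curls over still water."))
# ::{{user}} Describe a foggy harbor. ::{{model}} Mist curls over still water.
```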
-
How would you recommend chaining the nodes together if you want to send the LLM several prompts while keeping the context/history of a single conversation? Do you just feed the LLMprompt output into the Examples_and_context input of the next node, or is there a better way to handle multi-message conversations?