Unless I missed something, it looks like conversations in llm are always constructed with the `--continue` flag.
I’m writing a VS Code plugin that lets users chat with an LLM and edit the chat history, which is stored in a text document or notebook. It would be nice to be able to construct a conversation entirely outside llm and send it in on stdin, perhaps in a JSON format.
A workaround is to send the conversation in anyway, as a single prompt, in whatever format you like. GPT-4 seems to understand it well enough. But I wonder what the difference would be if it were sent using GPT-4’s normal format for conversations?
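For concreteness, GPT-4’s “normal format for conversations” is the OpenAI chat-completions `messages` array, where each turn carries an explicit `role`. A minimal sketch of the two approaches (the message text here is purely illustrative):

```python
import json

# The conversation as a structured "messages" array, the format GPT-4
# receives natively when a tool like llm manages the history itself.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize this file."},
    {"role": "assistant", "content": "Here is a short summary..."},
    {"role": "user", "content": "Now make it one sentence."},
]

# The workaround: flatten the same history into a single prompt string.
# GPT-4 parses this by convention only; the role boundaries are no longer
# part of the API contract, just text the model has to interpret.
flattened = "\n\n".join(
    f"{msg['role'].capitalize()}: {msg['content']}" for msg in conversation
)

print(json.dumps(conversation, indent=2))
print(flattened)
```

The practical difference is that with the structured array the model is told unambiguously where each turn begins and which speaker produced it, whereas the flattened prompt relies on the model recognizing the labels, which usually works but can blur instructions and prior answers together.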