
Support passing a list of message to LLM model #131

Open
zjffdu opened this issue Jul 19, 2024 · 3 comments
Labels
enhancement (New feature or request), help wanted (Extra attention is needed), question (Further information is requested)

Comments

@zjffdu
Contributor

zjffdu commented Jul 19, 2024

Is your feature request related to a problem? Please describe.
Currently I can only pass a single string to Generator, but the OpenAI API supports a list of messages, which is what I want.

Describe the solution you'd like
I'd like support for passing a list of messages, like the following:

import openai  # legacy (pre-1.0) OpenAI SDK interface

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the World Series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Who was the MVP?"},
]

# Call the ChatCompletion.create method with the messages
response = openai.ChatCompletion.create(
    model="gpt-4",  # Use "gpt-4" or "gpt-3.5-turbo" depending on your model availability
    messages=messages
)

Describe alternatives you've considered
A clear and concise description of any alternative solutions or features you've considered.

Additional context
Add any other context or screenshots about the feature request here.

@liyin2015
Copy link
Collaborator

liyin2015 commented Jul 19, 2024

@zjffdu Thanks for filing the issue. Why does it have to be a list of messages? Why not form them into a single prompt and send that instead? This tutorial should explain how you can form it.

If it makes things easier, we can consider providing a simple component to help you form it:

<SYS> You are a helpful assistant.</SYS>
User: Who won the World Series in 2020?
You: The Los Angeles Dodgers won the World Series in 2020.
User: Who was the MVP? 
You: 

This is how research papers form their prompt.
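As a minimal sketch of the suggestion above, a hypothetical helper (not part of the library's API) could flatten OpenAI-style messages into this single-prompt format:

```python
# Hypothetical helper -- not an existing library function -- that flattens
# OpenAI-style chat messages into the single-prompt format shown above.
def messages_to_prompt(messages):
    lines = []
    for m in messages:
        role, content = m["role"], m["content"]
        if role == "system":
            lines.append(f"<SYS> {content}</SYS>")
        elif role == "user":
            lines.append(f"User: {content}")
        else:  # assistant
            lines.append(f"You: {content}")
    lines.append("You: ")  # leave the final assistant turn open for the model
    return "\n".join(lines)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who won the World Series in 2020?"},
    {"role": "assistant", "content": "The Los Angeles Dodgers won the World Series in 2020."},
    {"role": "user", "content": "Who was the MVP?"},
]
print(messages_to_prompt(messages))
```

The resulting string could then be passed to Generator as a single prompt.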

@liyin2015 liyin2015 added the question Further information is requested label Jul 19, 2024
@liyin2015
Collaborator

Additionally, we could create a GeneratorMessages component that takes a list of chat turns as input instead of prompt_kwargs and a prompt template. The extension on ModelClient would be simple enough. If the community wants to write a proposal on this approach, it could be a great way to extend the library without complicating the prompt-only approach.

@liyin2015 liyin2015 added enhancement New feature or request help wanted Extra attention is needed labels Jul 19, 2024
@mikeedjones

This does mean you don't use the trained user/assistant tokens (<|im_start|>user\n, etc.). We've seen reduced performance on a fair number of tasks (especially dialogue tasks) when formatting the input into a single message.
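For illustration, here is a rough sketch of the ChatML-style turn markup those special tokens come from. The exact tokens vary by model, and real chat templates are applied by the model's tokenizer (e.g. `apply_chat_template` in Hugging Face transformers); this is just an assumed example, not any specific tokenizer's template:

```python
# Rough illustration of ChatML-style turn formatting. Special tokens like
# <|im_start|> / <|im_end|> vary by model; in practice the tokenizer's own
# chat template should apply them, not hand-written string formatting.
def to_chatml(messages):
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages
    ]
    parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "\n".join(parts)

example = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Who was the MVP?"},
]
print(to_chatml(example))
```

Collapsing everything into one `user` turn discards this structure, which is the performance concern raised above.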
