[Samples] merge LLM samples to "text_generation" folder #1411
base: master
Conversation
Force-pushed from 9e7f861 to 3901fbb
port: #28248 connected to: openvinotoolkit/openvino.genai#1411
connected to: openvinotoolkit/openvino.genai#1411 Co-authored-by: Andrzej Kopytko <andrzejx.kopytko@intel.com>
Force-pushed from b270500 to a48de38
If there are no more major comments, I will make similar changes to the Python samples.
Force-pushed from a48de38 to 9a9d41c
Force-pushed from 6db3e88 to 3b54139
You have a merge conflict
Force-pushed from 3b54139 to 1457292
@ilya-lavrenov re-review please. The PR cannot be merged without a +1 from you.
@@ -231,7 +231,7 @@ custom_streamer = CustomStreamer()
  pipe.generate("The Sun is yellow because", max_new_tokens=15, streamer=custom_streamer)
  ```
- For fully implemented iterable CustomStreamer please refer to [multinomial_causal_lm](https://github.com/openvinotoolkit/openvino.genai/tree/releases/2024/3/samples/python/multinomial_causal_lm/README.md) sample.
+ For fully implemented iterable CustomStreamer please refer to [multinomial_causal_lm](https://github.com/openvinotoolkit/openvino.genai/tree/releases/2024/3/samples/python/text_generation/README.md) sample.
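For context, here is a minimal sketch of what an iterable CustomStreamer can look like, assuming the openvino_genai.StreamerBase interface with put()/end() callbacks; the full-cache re-decoding and the queue-based hand-off are simplifications for illustration, not the linked sample's exact implementation.

```python
import queue

import openvino_genai


class CustomStreamer(openvino_genai.StreamerBase):
    """Sketch of an iterable streamer: caches generated token ids and yields decoded text."""

    def __init__(self, tokenizer):
        super().__init__()
        self.tokenizer = tokenizer
        self.tokens_cache = []           # token ids received so far
        self.text_queue = queue.Queue()  # decoded chunks handed to the consumer

    def put(self, token_id) -> bool:
        # Called by the pipeline for each generated token; returning True would stop generation.
        self.tokens_cache.append(token_id)
        # Simplification: re-decode the whole cache instead of decoding incrementally.
        self.text_queue.put(self.tokenizer.decode(self.tokens_cache))
        return False

    def end(self):
        # Called once generation finishes; unblock the consuming iterator.
        self.text_queue.put(None)

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self.text_queue.get()
        if chunk is None:
            raise StopIteration
        return chunk
```

To actually consume the chunks while tokens are being produced, pipe.generate() has to run in a worker thread (as the linked sample does); otherwise the put() calls and the iteration happen sequentially in the same thread.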
openvinotoolkit/openvino#28384 - master
openvinotoolkit/openvino#28383 - 2024/6
Model examples to use for different samples:
chat_sample - meta-llama/Llama-2-7b-chat-hf
speculative_decoding_lm - meta-llama/Llama-2-13b-hf as main model and TinyLlama/TinyLlama-1.1B-Chat-v1.0 as draft model
other samples - meta-llama/Llama-2-7b-hf
It's drawn as plain text, which is hard to read.
Maybe we can show at the beginning how to convert a model, while the recommendations about suggested models sit directly in each sample's section?
E.g. in the common part:
optimum-cli export openvino --model <xx> <output_folder>
and in the per-sample section:
chat sample:
recommended models: meta-llama/Llama-2-7b-chat-hf, etc.
The main idea is to keep locality: users don't need to scroll up and down to read how to run the samples.
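As a rough illustration of that split (the output folder name, device, and prompt loop below are hypothetical, not taken from this PR): the common part would carry a one-time conversion such as `optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf llama-2-7b-chat`, and the chat-sample section would then only need the recommended model plus a short run snippet along these lines.

```python
import openvino_genai

# Hypothetical folder produced by the shared conversion step, e.g.:
#   optimum-cli export openvino --model meta-llama/Llama-2-7b-chat-hf llama-2-7b-chat
pipe = openvino_genai.LLMPipeline("llama-2-7b-chat", "CPU")

pipe.start_chat()
while True:
    prompt = input("question:\n")
    if not prompt:
        break
    print(pipe.generate(prompt, max_new_tokens=100))
pipe.finish_chat()
```

Keeping the conversion command in one shared section and only the recommended model plus run command in each sample section gives exactly the locality the comment asks for.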
updated