StaticLLMPipeline: Support more generation options #1431
Conversation
…genai into at/static-llm-pipeline-advanced-sampling
Why not re-use the whole Sampler? This is limited to cases where we don't need to fork sequences (as in beam search).
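For context, here is a minimal conceptual sketch (hypothetical code, not the actual openvino.genai Sampler API) of why greedy decoding extends each sequence in place while beam search must fork a sequence into several candidate continuations per step:

    # Hypothetical illustration only; names and structure are assumptions.
    import heapq
    from typing import List, Tuple

    def greedy_step(seq: List[int], logprobs: List[float]) -> List[int]:
        # One sequence in, one sequence out: no forking needed.
        next_token = max(range(len(logprobs)), key=logprobs.__getitem__)
        return seq + [next_token]

    def beam_step(beams: List[Tuple[float, List[int]]],
                  logprobs_per_beam: List[List[float]],
                  beam_width: int) -> List[Tuple[float, List[int]]]:
        # Each beam forks into one candidate per token; only the best
        # `beam_width` candidates survive to the next step.
        candidates = []
        for (score, seq), logprobs in zip(beams, logprobs_per_beam):
            for token, lp in enumerate(logprobs):
                candidates.append((score + lp, seq + [token]))
        return heapq.nlargest(beam_width, candidates, key=lambda c: c[0])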
Co-authored-by: Ilya Lavrenov <ilya.lavrenov@intel.com>
@@ -227,7 +227,6 @@ def run_text_generation_genai(input_text, num, model, tokenizer, args, iter_data
    gen_config = model.get_generation_config()
    gen_config.max_new_tokens = max_gen_tokens
    gen_config.num_beams = args["num_beams"]
Please remove set_seed(args['seed']) on lines 201 and 356, and add gen_config.rng_seed = args["seed"] here and in run_text_generation_genai_with_stream.
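A sketch of what the requested change might look like, assuming (as in the diff above) that `model` is the pipeline object, `gen_config` comes from `model.get_generation_config()`, and `args` is the benchmark's argument dict; the exact surrounding code in llm_bench is not shown here:

    # Before (hypothetical surrounding code): a global seed helper was called.
    # set_seed(args['seed'])

    # After: seed the pipeline's own RNG through the generation config instead.
    gen_config = model.get_generation_config()
    gen_config.max_new_tokens = max_gen_tokens
    gen_config.num_beams = args["num_beams"]
    gen_config.rng_seed = args["seed"]  # makes random sampling reproducible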
Done, thanks!
I am fine with the change (though I didn't understand much of it), but please address @sbalandi's comment.
No description provided.