
[LLM][NPU] Ported sampler from Stateless to Stateful pipeline #1507

Open
wants to merge 1 commit into master
Conversation

AsyaPronina (Contributor)

  • Ported sampler functionality from Stateless to Stateful pipeline

@github-actions bot added the "category: LLM" label (LLM pipeline: stateful, static) on Jan 8, 2025
if (streamer_ptr && streamer_ptr->put(last_token)) {
return results;
}
// Swap max_new_token to get_max_new_token()
Contributor

Is this comment valid? Here we don't play with max_new_tokens.

Contributor Author

No, not valid, sorry

// Swap max_new_token to get_max_new_token()
auto sequence_group = std::make_shared<SequenceGroup>(
0 /* request_id */, input_ids, config, 1 /* block_size */);
sequence_group->update_processed_tokens_num(input_ids.get_size());
Contributor

Suggested change:
-    sequence_group->update_processed_tokens_num(input_ids.get_size());
+    sequence_group->update_processed_tokens_num(sequence_group->get_prompt_len() - output_sequence_len);

Collaborator

Will it work w/o SLICE_OUT?

Contributor

I suppose yes.

Without Slice, the output length is the same as the prompt length and, hence, the number of processed tokens is 0.

Contributor Author
@AsyaPronina AsyaPronina Jan 10, 2025

I agree; however, for the first input_ids the length equals the prompt length (for example, 18), while output_sequence_len is equal to 1024 for the first logits, as they are the output of the prefill model.
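
To make the arithmetic behind the suggested change easier to follow, here is a minimal standalone sketch. It is plain C++ with illustrative local variables (prompt_len, output_sequence_len), not the pipeline's actual API; the values 18 and 1024 are the examples mentioned in this thread, and the SLICE_OUT behaviour (slicing the prefill output down to the last token's logits) is assumed from the discussion above.

#include <cstdint>
#include <iostream>

int main() {
    // Suggested formula: processed = prompt_len - output_sequence_len.
    const int64_t prompt_len = 18;  // example prompt length from the thread

    // With SLICE_OUT: prefill output is sliced to the last token's logits,
    // so output_sequence_len is 1 and processed = prompt_len - 1.
    {
        const int64_t output_sequence_len = 1;
        std::cout << "with SLICE_OUT:    processed = "
                  << (prompt_len - output_sequence_len) << "\n";  // 17
    }

    // Without SLICE_OUT, assuming the output length equals the prompt length
    // (the claim above): processed becomes 0.
    {
        const int64_t output_sequence_len = prompt_len;
        std::cout << "without SLICE_OUT: processed = "
                  << (prompt_len - output_sequence_len) << "\n";  // 0
    }

    // Caveat raised by the author: the prefill model's output is padded to the
    // static shape (e.g. 1024), so the subtraction goes negative unless the
    // real (unpadded) length is used.
    {
        const int64_t output_sequence_len = 1024;
        std::cout << "padded prefill:    processed = "
                  << (prompt_len - output_sequence_len) << "\n";  // -1006
    }
    return 0;
}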

@ilya-lavrenov ilya-lavrenov added this to the 2025.0 milestone Jan 9, 2025
@ilya-lavrenov ilya-lavrenov changed the title Ported sampler from Stateless to Stateful pipeline [LLM][NPU] Ported sampler from Stateless to Stateful pipeline Jan 9, 2025
Collaborator
@TolyaTalamanov TolyaTalamanov left a comment

LGTM, could you also enable testing of StatefulLLMPipeline in test_llm_pipeline_static.py?


Labels: category: LLM (LLM pipeline: stateful, static), category: NPU
Projects: None yet
3 participants