
ScaleUpProposer will try to promote an offloaded table if it's larger than HBM-only #1925


Open

levythu wants to merge 2 commits into main

Conversation

@levythu (Contributor) commented Apr 24, 2024

Summary: See #1924 for context. In some specific cases a table can use even more HBM when offloaded and prefetched than when kept HBM-only; in those cases we'd rather not offload it.
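A minimal sketch of the guard described above, with hypothetical names (this is not the actual ScaleUpProposer code): keep a table HBM-only whenever the offloaded placement's HBM footprint is no smaller than the plain HBM placement.

```python
def should_stay_hbm_only(hbm_only_bytes: int, offloaded_hbm_bytes: int) -> bool:
    # Offloading still needs cache rows plus prefetched input buffers in HBM;
    # if that footprint is at least as large as placing the whole table in HBM,
    # offloading buys nothing, so promote the table back to HBM-only.
    return offloaded_hbm_bytes >= hbm_only_bytes
```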

Differential Revision: D56505315

levythu added 2 commits April 23, 2024 23:33
…r prefetch pipeline (pytorch#1924)

Summary:

A lot of rank load imbalance comes from underestimating the sparse arch's HBM usage, which is a function not only of the parameters but also of the HBM consumed by inputs and outputs. This is especially bad with a multi-stage pipeline, which keeps multiple copies of the input for each table.

Prefetched embeddings are the worst case: by heuristically analyzing current memory snapshots, we observed 4~7x of extra input.

This diff uses a new formula to calculate HBM usage (see the sketch below):
- Multiple copies of the input are counted, depending on the pipeline
- Input and output are combined with max, since the output tensor does not occupy extra HBM until the a2a communication, by which point the input is no longer needed
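A minimal sketch of that formula, assuming hypothetical names (not the actual torchrec estimator API):

```python
def estimate_hbm_bytes(
    param_bytes: int,      # embedding table storage kept in HBM
    input_bytes: int,      # one copy of the input/lookup buffers
    output_bytes: int,     # pooled output before the all-to-all
    pipeline_copies: int,  # >1 when a multi-stage/prefetch pipeline keeps extra copies
) -> int:
    # A multi-stage pipeline keeps multiple copies of the input alive at once.
    total_input = pipeline_copies * input_bytes
    # Input and output are combined with max(): the output tensor only
    # materializes right before the a2a, when the input is no longer needed,
    # so the two never peak at the same time.
    return param_bytes + max(total_input, output_bytes)
```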

For backward compatibility we haven't rolled the change out to all users, as it may cause additional job failures. Instead, we introduce different pipeline-awareness modes (sketched below):
- None: always use the old formula
- Prefetch-Only (default): use the new formula only if the prefetch pipeline is on
- All: always use the new formula
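A small sketch of how such a rollout switch could be modeled; the enum and function names are illustrative only, not the actual config API:

```python
from enum import Enum

class PipelineAwareness(Enum):
    NONE = "none"                    # always use the old formula
    PREFETCH_ONLY = "prefetch_only"  # new formula only when prefetch pipeline is on (default)
    ALL = "all"                      # always use the new formula

def use_new_formula(mode: PipelineAwareness, prefetch_pipeline_enabled: bool) -> bool:
    if mode == PipelineAwareness.NONE:
        return False
    if mode == PipelineAwareness.PREFETCH_ONLY:
        return prefetch_pipeline_enabled
    return True  # PipelineAwareness.ALL
```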

Differential Revision: D56444328
… than HBM-only

Summary: See pytorch#1924 for context. In some specific cases a table can use even more HBM when offloaded and prefetched than when kept HBM-only; in those cases we'd rather not offload it.

Differential Revision: D56505315
@facebook-github-bot added the CLA Signed label Apr 24, 2024
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D56505315

Labels: CLA Signed, fb-exported