Add SSDScratchPadIndicesQueue lookup in frontend #2948
Conversation
This pull request was exported from Phabricator. Differential Revision: D60413116
Force-pushed from 0076919 to 99315d8
Force-pushed from 99315d8 to dafebf8
Force-pushed from dafebf8 to 6260f45
Force-pushed from 6260f45 to 56ecc47
Force-pushed from 56ecc47 to 147b04c
This pull request has been merged in 4ae45b7.
Summary:
Pull Request resolved: pytorch#2948
X-link: facebookresearch/FBGEMM#50

This diff updates the SSD-TBE frontend to use
`torch.classes.fbgemm.SSDScratchPadIndicesQueue` for scratch pad lookup
(added in D60363607). `SSDScratchPadIndicesQueue` stores scratch pad
indices (conflict missed indices) from previous iterations. It is used
during the L1 cache prefetching step: instead of fetching the missing
indices directly from SSD, TBE first looks up the scratch pad index
queue to check whether the missing data is already in the scratch pad
from the previous iteration.
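To make the lookup path concrete, here is a minimal, self-contained sketch of the idea. It is illustrative only: the dict and fake SSD store below are stand-ins I made up, not the real API; in the actual frontend the `SSDScratchPadIndicesQueue` TorchScript class performs this index lookup with batched tensor operations.

```python
import torch

# Stand-in for the previous iteration's scratch pad bookkeeping. In the real
# frontend this role is played by torch.classes.fbgemm.SSDScratchPadIndicesQueue;
# a plain dict {embedding index -> row in the scratch pad tensor} mimics it here
# so the sketch runs on its own.
prev_sp_index_to_row = {10: 0, 42: 1, 7: 2}
prev_scratch_pad = torch.randn(3, 4)                  # rows spilled last iteration
fake_ssd = {i: torch.full((4,), float(i)) for i in range(100)}  # fake SSD store

def prefetch(missing_indices: torch.Tensor) -> torch.Tensor:
    """Serve L1-cache misses from the previous scratch pad when possible,
    falling back to (fake) SSD reads only for the remaining indices."""
    out = torch.empty(len(missing_indices), 4)
    for pos, idx in enumerate(missing_indices.tolist()):
        row = prev_sp_index_to_row.get(idx)
        if row is not None:
            out[pos] = prev_scratch_pad[row]          # hit: reuse the scratch pad row
        else:
            out[pos] = fake_ssd[idx]                  # miss: read from SSD
    return out

fetched = prefetch(torch.tensor([42, 3, 7]))          # 42 and 7 hit the scratch pad
```

A hit in the queue turns what would otherwise be an SSD read into an in-memory copy of a row that was already materialized in the previous iteration's scratch pad.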
The high-level workflow of the prefetch step in SSD-TBE is shown in
the figure below:
{F1795380801}
https://internalfb.com/excalidraw/EX264055
Reviewed By: ehsanardestani

Differential Revision: D60413116