Topic/cuda aware communications #671
Open
bosilca wants to merge 11 commits into ICLDisco:master from bosilca:topic/cuda_aware_communications
Conversation
bosilca force-pushed the topic/cuda_aware_communications branch from 968bf7e to 6f2e034 on September 10, 2024 at 04:38
bosilca force-pushed the topic/cuda_aware_communications branch 2 times, most recently from b3dfcdc to 0838a95 on September 10, 2024 at 05:03
abouteiller force-pushed the topic/cuda_aware_communications branch from efa8386 to ab1a74a on October 11, 2024 at 18:07
Now passing 1-gpu/node, 8 ranks PTG POTRF
abouteiller force-pushed the topic/cuda_aware_communications branch from cd7c475 to 3bab2d5 on October 16, 2024 at 20:43
Signed-off-by: George Bosilca <gbosilca@nvidia.com>
This allows checking whether data can be sent and received directly to and from GPU buffers. Signed-off-by: George Bosilca <gbosilca@nvidia.com>
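As a point of reference for what such a check can look like, here is a minimal sketch of a GPU-awareness probe when running over Open MPI, using its mpi-ext.h extension header; this is illustrative only and is not the code added by this PR.

```c
/* Minimal sketch of a CUDA-aware capability probe, assuming Open MPI's
 * mpi-ext.h extension header; illustrative only, not this PR's code. */
#include <stdio.h>
#include <mpi.h>
#if defined(OPEN_MPI)
#include <mpi-ext.h>
#endif

/* Returns 1 if the MPI library reports it can send/receive directly
 * from GPU buffers, 0 otherwise (or when we cannot tell). */
static int mpi_is_gpu_aware(void)
{
#if defined(MPIX_CUDA_AWARE_SUPPORT) && MPIX_CUDA_AWARE_SUPPORT
    return MPIX_Query_cuda_support();  /* runtime confirmation */
#else
    return 0;  /* no compile-time evidence of CUDA awareness */
#endif
}

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    printf("GPU-aware MPI: %s\n", mpi_is_gpu_aware() ? "yes" : "no");
    MPI_Finalize();
    return 0;
}
```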
This is a multi-part patch that allows the CPU to prepare a data copy mapped onto a device. 1. How is such a device selected? The allocation of such a copy happens well before the scheduler is invoked for the task, in fact before the task is even ready. Thus, we need to decide on the location of this copy based only on static information, such as the task affinity. Therefore, this approach only works for owner-compute types of tasks, where the task will execute on the device that owns the data used for the task affinity. 2. Pass the correct data copy across the entire system, instead of falling back to the data copy of device 0 (CPU memory). Also adds a configure option to enable GPU-aware communications. Signed-off-by: George Bosilca <gbosilca@nvidia.com>
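A hedged sketch of the owner-compute selection this commit describes: because the copy is allocated before the task is even ready, the target device can only be derived from static information such as the affinity data's owner. All names below (my_data_t, my_task_t, select_target_device) are hypothetical stand-ins, not PaRSEC's actual API.

```c
/* Illustrative owner-compute device selection; all types and helpers
 * here are hypothetical stand-ins, not PaRSEC's actual API. */
typedef struct {
    int owner_device;          /* device that owns the authoritative copy */
} my_data_t;

typedef struct {
    my_data_t *affinity_data;  /* data item used for the task affinity */
} my_task_t;

/* Decide where to allocate the incoming copy using only static
 * information: under owner-compute, the task is assumed to execute on
 * the device owning its affinity data, so place the copy there. */
static int select_target_device(const my_task_t *task)
{
    /* No scheduler information is available yet (the task may not even
     * be ready), so dynamic, load-based placement is impossible here. */
    return task->affinity_data->owner_device;
}
```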
Name the data_t allocated for temporaries, allowing developers to track them through the execution. Add the keys to all outputs (tasks and copies). Signed-off-by: George Bosilca <gbosilca@nvidia.com>
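For illustration, one way such tracking can work is to stamp each temporary with a human-readable name derived from its key at allocation time, so the same identifier shows up in every log line. The struct and helper below are hypothetical, not PaRSEC's data_t API.

```c
/* Hypothetical sketch of naming a temporary so it can be tracked in
 * traces; the struct and helper are illustrative, not PaRSEC's API. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <inttypes.h>

typedef struct {
    char     name[64];  /* human-readable identifier for logs/traces */
    uint64_t key;       /* stable key attached to every output record */
    void    *ptr;       /* the actual temporary buffer */
} my_temp_t;

static my_temp_t *alloc_named_temp(size_t size, uint64_t key)
{
    my_temp_t *t = malloc(sizeof(*t));
    if (NULL == t) return NULL;
    /* Derive the name from the key so logs and outputs correlate. */
    snprintf(t->name, sizeof(t->name), "temp<%" PRIu64 ">", key);
    t->key = key;
    t->ptr = malloc(size);
    return t;
}
```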
Signed-off-by: George Bosilca <gbosilca@nvidia.com>
copy if we are passed in a GPU copy, and we need to retain/release the copies that we are swapping
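The retain/release ordering this message alludes to matters: the incoming copy must be retained before the outgoing one is released, so no refcount transiently drops to zero mid-swap. A minimal sketch with hypothetical names, not PaRSEC's actual copy API:

```c
/* Illustrative retain/release discipline when swapping reference-counted
 * copies; names are hypothetical, not PaRSEC's actual API. */
#include <stddef.h>
#include <stdatomic.h>

typedef struct {
    atomic_int refcount;
    /* ... payload ... */
} my_copy_t;

static void copy_retain(my_copy_t *c)  { atomic_fetch_add(&c->refcount, 1); }
static void copy_release(my_copy_t *c)
{
    if (1 == atomic_fetch_sub(&c->refcount, 1)) {
        /* last reference dropped: the copy can be reclaimed here */
    }
}

/* Swap *slot to point at new_copy: retain first, release last, so the
 * old copy stays valid until the new one is safely in place. */
static void swap_copy(my_copy_t **slot, my_copy_t *new_copy)
{
    my_copy_t *old = *slot;
    copy_retain(new_copy);
    *slot = new_copy;
    if (NULL != old) copy_release(old);
}
```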
abouteiller force-pushed the topic/cuda_aware_communications branch from eb5c782 to 3e0cb38 on October 31, 2024 at 14:53
…ut-only flows, for which checking if they are control flows segfaults
Add support for sending and receiving data directly from and to devices. There are a few caveats (noted in the commit log):
The allocation of such a copy happens well before the scheduler is invoked for the task, in fact before the task is even ready. Thus, we need to decide on the location of this copy based only on static information, such as the task affinity. Therefore, this approach only works for owner-compute types of tasks, where the task will execute on the device that owns the data used for the task affinity.
The correct data copy must be passed across the entire system, instead of falling back to the data copy of device 0 (CPU memory).
TODOs
scheduling.c:157: int __parsec_execute(parsec_execution_stream_t *, parsec_task_t *): Assertion `NULL != copy->original && NULL != copy->original->device_copies[0]' failed.
device_gpu.c:2470: int parsec_device_kernel_epilog(parsec_device_gpu_module_t *, parsec_gpu_task_t *): Assertion `PARSEC_DATA_STATUS_UNDER_TRANSFER == cpu_copy->data_transfer_status' failed.