[QST] [PyTorch] Is it Possible to Cast an RMM Stream to torch.cuda.Stream()? #1829
Comments
Ultimately I think we need access to the underlying stream pointer. This comment in torch suggests that you can access it from the `cuda_stream` attribute.
@Matt711

```python
torch_stream = torch.cuda.Stream(device=device)
cupy_stream = cupy.cuda.ExternalStream(torch_stream.cuda_stream)
rmm_stream = rmm.pylibrmm.stream.Stream(cupy_stream)
print(rmm_stream)
print(rmm_stream.is_default())
rmm_stream.synchronize()
d_buffer = rmm.DeviceBuffer(size=10, stream=rmm_stream)
```

Output:
One small but useful feature that I propose is to replicate the behavior of CuPy and PyTorch in accepting an externally created stream. Another feature that I believe is essential, and should likewise mimic CuPy and PyTorch, is the ability to export the stream pointer as an integer.
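As a sketch of what exporting the stream pointer could look like (purely illustrative: the `StreamView` class, its `ptr` property, and the integer address below are hypothetical, while `__cuda_stream__` follows the protocol `cuda.core` has been proposing for cross-library stream exchange):

```python
class StreamView:
    """Hypothetical wrapper exposing a raw CUDA stream handle as an int.

    Sketch only: in a real implementation `ptr` would be the address of
    the underlying cudaStream_t; here it is just an integer stand-in.
    """

    def __init__(self, ptr: int):
        self._ptr = ptr

    @property
    def ptr(self) -> int:
        # CuPy-style integer export, usable as
        # cupy.cuda.ExternalStream(view.ptr)
        return self._ptr

    def __cuda_stream__(self):
        # cuda.core-style protocol: (protocol version, stream address)
        return (0, self._ptr)


view = StreamView(0xDEAD)  # stand-in address, not a real stream
print(view.ptr)
print(view.__cuda_stream__())
```

Either form would let a consumer library wrap the stream without RMM having to depend on that library directly.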
@leofang can you comment on how cuda.core is aiming to standardize cross-library stream references in Python?

I think this should be converted to a discussion.
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
This is also potentially a PyTorch-related question.

I'm aware that we can use RMM with PyTorch for efficient memory allocation. I also know that it's possible to create a stream in Python via `rmm.pylibrmm.stream`. Moreover, in C++ RMM there's even `rmm::cuda_stream_pool` for the efficient utilization of streams.

This leads me to wonder if it's possible to create an RMM stream (which is essentially a `cudaStream_t` under the hood) and then convert it to a PyTorch stream.

Furthermore, in the Python world, is something similar to `rmm::cuda_stream_pool` planned for the future, so that PyTorch users could also benefit from a stream pool?

I did check around inside this repo but only found PyTorch with RMM for memory allocations: https://github.com/rapidsai/rmm/blob/branch-25.04/python/rmm/rmm/tests/test_rmm_pytorch.py
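For the direction asked about here (RMM stream → PyTorch stream), a rough sketch follows. It is not runnable as written: RMM does not currently expose the raw stream address publicly (that gap is exactly what the thread discusses), so the `ptr = ...` line is a placeholder. `torch.cuda.ExternalStream` does exist in PyTorch and accepts an integer `cudaStream_t` address.

```python
# Sketch only -- requires a CUDA-capable environment with torch and rmm,
# and assumes the RMM stream's cudaStream_t address were obtainable.
import torch
import rmm

rmm_stream = rmm.pylibrmm.stream.Stream()
ptr = ...  # hypothetical: the underlying cudaStream_t address as an int
torch_stream = torch.cuda.ExternalStream(ptr)

# Work enqueued under this context would run on the RMM-owned stream.
with torch.cuda.stream(torch_stream):
    t = torch.empty(10, device="cuda")
```

Note that `ExternalStream` does not take ownership: the RMM stream must outlive any PyTorch work queued on it.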