
[QST] [PyTorch] Is it Possible to Cast an RMM Stream to torch.cuda.Stream()? #1829

Closed
JigaoLuo opened this issue Feb 20, 2025 · 5 comments
Labels: question (Further information is requested)

Comments

JigaoLuo commented Feb 20, 2025

This is also potentially a PyTorch-related question.

I'm aware that we can use RMM with PyTorch for efficient memory allocation. I also know that it's possible to create a stream in Python via rmm.pylibrmm.stream. Moreover, C++ RMM even has rmm::cuda_stream_pool for efficient utilization of streams.

This leads me to wonder whether it's possible to create an RMM stream (which is essentially a cudaStream_t under the hood) and then convert it to a PyTorch stream.
Furthermore, in the Python world, is anything similar to rmm::cuda_stream_pool planned for the future, so that PyTorch users could also benefit from a stream pool?
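To make the stream-pool part of the question concrete, what I have in mind is roughly the hypothetical sketch below. SimpleStreamPool is purely illustrative and not an existing RMM API; it assumes rmm.pylibrmm.stream.Stream() creates a new non-default stream.

import itertools

import rmm

class SimpleStreamPool:
    """Hypothetical round-robin pool of pre-created RMM streams."""

    def __init__(self, pool_size=16):
        self._streams = [rmm.pylibrmm.stream.Stream() for _ in range(pool_size)]
        self._cycle = itertools.cycle(self._streams)

    def get_stream(self):
        # Hand out the pre-created streams in rotation, in the spirit of rmm::cuda_stream_pool.
        return next(self._cycle)

pool = SimpleStreamPool(pool_size=8)
buf = rmm.DeviceBuffer(size=1024, stream=pool.get_stream())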


I checked around in this repo but only found tests of PyTorch with RMM for memory allocation: https://github.com/rapidsai/rmm/blob/branch-25.04/python/rmm/rmm/tests/test_rmm_pytorch.py

JigaoLuo added the "? - Needs Triage" and "question" labels Feb 20, 2025
Matt711 (Contributor) commented Feb 20, 2025

Ultimately I think we need access to the cudaStream_t to do the conversion. See numba and cupy.

This comment in torch suggests that you can access it from this CUDAStream object. I'm not sure how much of that is exposed to Python, though.
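For what it's worth, the handle does seem reachable from Python on the PyTorch and CuPy sides already, so the missing piece would be on the RMM side. A minimal check (assuming a CUDA-capable environment):

import cupy
import torch

torch_stream = torch.cuda.Stream()
# torch.cuda.Stream exposes the underlying cudaStream_t address as an int.
print(torch_stream.cuda_stream)

cupy_stream = cupy.cuda.Stream()
# CuPy exposes the same kind of handle as .ptr.
print(cupy_stream.ptr)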

JigaoLuo (Author) commented Feb 20, 2025

@Matt711
Thank you for the helpful hint. So far, I've managed to implement a workaround by converting a PyTorch stream into an RMM stream, and it has been working properly on my end. For the benefit of PyTorch users, it might be nice to add this particular conversion to the RMM tests as well.

import cupy
import rmm
import torch

device = torch.device("cuda:0")
torch_stream = torch.cuda.Stream(device=device)
# Wrap the raw cudaStream_t address in a CuPy ExternalStream, then build an RMM Stream from it.
cupy_stream = cupy.cuda.ExternalStream(torch_stream.cuda_stream)
rmm_stream = rmm.pylibrmm.stream.Stream(cupy_stream)
print(rmm_stream)
print(rmm_stream.is_default())
rmm_stream.synchronize()
d_buffer = rmm.DeviceBuffer(size=10, stream=rmm_stream)

Output:

<rmm.pylibrmm.stream.Stream object at 0x7f7bab543780>
False

JigaoLuo (Author) commented

@Matt711

One small but useful feature I'd propose is to replicate the behavior of CuPy and PyTorch in accepting an int representing a cudaStream_t pointer when creating a stream. I demonstrated this approach in the example above, and it is also available as ExternalStream in both CuPy and PyTorch: https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html
Implementing such a feature would eliminate the need for the intermediate conversion through CuPy shown above.
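Roughly, the proposed usage would look like this (the int-accepting constructor is the suggestion here, not something RMM supports today):

import rmm
import torch

torch_stream = torch.cuda.Stream()
# Proposed / hypothetical: build an RMM stream directly from the integer handle,
# skipping the intermediate CuPy ExternalStream shown above.
rmm_stream = rmm.pylibrmm.stream.Stream(torch_stream.cuda_stream)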

Another feature that I believe is essential, and that should mimic the behavior of CuPy and PyTorch, is the ability to export the stream pointer as an int: https://github.com/pytorch/pytorch/blob/v2.6.0/torch/cuda/streams.py#L101-L103
I noticed that there is currently no public method in RMM to obtain the underlying cudaStream_t pointer in any form.
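With such an accessor, the reverse direction from my original question would also become a one-liner (the .ptr attribute below is a hypothetical name, not an existing RMM API):

import rmm
import torch

rmm_stream = rmm.pylibrmm.stream.Stream()
# Hypothetical accessor: export the underlying cudaStream_t address as an int...
stream_ptr = rmm_stream.ptr
# ...and wrap it on the PyTorch side without re-creating the stream.
torch_stream = torch.cuda.ExternalStream(stream_ptr)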

harrism (Member) commented Feb 21, 2025

@leofang can you comment on how cuda.core is aiming to standardize cross-library stream references in Python?

harrism removed the "? - Needs Triage" label Feb 21, 2025
harrism (Member) commented Feb 21, 2025

I think this should be converted to a discussion.

rapidsai locked and limited conversation to collaborators Feb 21, 2025
harrism converted this issue into discussion #1831 Feb 21, 2025

This issue was moved to discussion #1831. You can continue the conversation there.
