Support for the 3080 or 3090 #55
I believe this is a PyTorch compatibility issue. It seems that the PyTorch on-the-fly compilation module needs to be updated. Sorry, since I don't have access to a 3080 GPU, I can't test it myself...
I also encountered this problem. Has it been solved now?
Can anyone confirm support for 3080 GPUs in the latest PyTorch version? If this module still fails to compile, please upload your error logs. It would also be helpful if anyone could provide the ninja.build file automatically generated by the PyTorch library. Please reply directly in this thread; I am happy to help.
I use PyTorch 1.7.0 and a GeForce RTX 3090.
Can you go to the actual ninja temporary folder and run
It might be due to the PyTorch version. How can I use this library with PyTorch 1.7.0?
Can you use this library with PyTorch 1.7.0 now?
This library works fine for me with PyTorch 1.7 on a Titan X, Titan Xp, and Titan RTX, but I can't test it with a 3080. If you encounter any problem on a 30X0 card, please check your CUDA installation (make sure nvcc is up-to-date enough to compile 30X0 CUDA files). If the problem remains, please provide the detailed compilation log: not only the Python ImportError message, but the full build log, including the warnings/errors/failures produced by ninja/g++/nvcc.
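To check the "nvcc is up-to-date" condition above, here is a small sketch; `nvcc_supports_sm86` is a hypothetical helper (not part of this repo), and it assumes a CUDA 11+ nvcc that understands the `--list-gpu-arch` flag:

```python
import shutil
import subprocess

def nvcc_supports_sm86():
    """Return True if the local nvcc can target compute_86 (Ampere, i.e.
    RTX 3080/3090), False if it cannot, and None when nvcc is not on PATH."""
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        return None
    # --list-gpu-arch prints one compute_XX line per supported virtual
    # architecture (assumption: available in CUDA 11+ nvcc).
    out = subprocess.run([nvcc, "--list-gpu-arch"],
                         capture_output=True, text=True)
    return "compute_86" in out.stdout

print(nvcc_supports_sm86())
```

If this prints `False` (or the older nvcc rejects the flag), the toolkit is too old for the 30X0 series and should be upgraded before building the extension.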
I use a 3090 with PyTorch 1.7.1 and CUDA 11.2, and I met this error:
Using /home/yy/.cache/torch_extensions as PyTorch extensions root... The above exception was the direct cause of the following exception: Traceback (most recent call last):
I have already solved that problem, but another one has appeared:
Using /home/yy/.cache/torch_extensions as PyTorch extensions root... The above exception was the direct cause of the following exception: Traceback (most recent call last):
/home/yry/data/anaconda3/envs/transyry/lib/python3.7/site-packages/torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: No such file or directory
There's an issue with your CUDA installation/configuration.
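A `fatal error: cuda_runtime_api.h: No such file or directory` usually means the build cannot find the CUDA include directory. A quick sketch to check where (or whether) the header exists on your machine; `find_cuda_runtime_header` is a hypothetical helper, and the candidate roots are common defaults, not an exhaustive list:

```python
import os
from pathlib import Path

def find_cuda_runtime_header():
    """Search common CUDA install roots for cuda_runtime_api.h.

    Returns the header path if found, else None. A None result usually means
    CUDA_HOME needs to be set (or the include path extended) before building
    the extension."""
    roots = [os.environ.get("CUDA_HOME"),
             os.environ.get("CUDA_PATH"),
             "/usr/local/cuda"]
    for root in filter(None, roots):
        header = Path(root) / "include" / "cuda_runtime_api.h"
        if header.is_file():
            return str(header)
    return None

print(find_cuda_runtime_header())
```

If this prints `None`, install the full CUDA toolkit (not just the driver) or point `CUDA_HOME` at your toolkit root before rebuilding.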
I also encountered some errors on my 3090 with CUDA 11.1 and PyTorch 1.8; here are my logs:
@laisimiao Seems that your log got trimmed: it ends at /home/ubntun/anaconda3/envs/lsm/lib/python3.7/site-packages/torch/include/ATen/cuda/CUDAContext.h:5:10: fatal error: cuda_runtime_api.h: What's the actual error?
No, that's all the log I can get.
@vacancy I have solved it by editing PreciseRoIPooling/pytorch/prroi_pool/src/prroi_pooling_gpu.c, Lines 36 to 40 in cf10401.
That's interesting... I think this is only a deprecation warning, not an error: https://pytorch.org/cppdocs/api/classat_1_1_tensor.html#_CPPv4NK2at6Tensor9toBackendE7Backend
I don't know why it causes an error on your side, but good to hear that you have resolved the issue!
It seems PyTorch (after version 1.5) removed the THC-related headers/functions.
Thanks @ReedZyd, I see. I haven't been tracking this project and PyTorch updates recently; I can try to take a look when I get some time. Thanks a ton for the pointers!
@ReedZyd Interesting. I tried the latest installation of PyTorch 1.10, and I am able to compile the library; the THC files are successfully found. On my server, the path is:
and the THCudaCheck is in
Thank you very much! I have solved it by removing all the lines relating to THC/THCudaCheck in prroi_pooling_gpu.c. I use PyTorch 2.0.1, torchvision 0.15.2, Python 3.8, and a GeForce RTX 3090 (CUDA version: 12.2).
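For anyone porting the extension the same way, a small sketch to find the lines that still reference the removed THC API before deleting or rewriting them; `list_thc_lines` is a hypothetical helper, not part of this repo (in modern PyTorch, error checks like `THCudaCheck(...)` are typically replaced by `AT_CUDA_CHECK(...)` from `ATen/cuda/Exceptions.h`):

```python
import re
from pathlib import Path

def list_thc_lines(path):
    """Return (line_number, text) pairs for lines referencing the removed
    THC API, e.g. `#include <THC/THC.h>` or `THCudaCheck(...)` calls."""
    pattern = re.compile(r"\bTHC")  # matches THC includes, THCudaCheck, etc.
    return [(lineno, line.rstrip())
            for lineno, line in enumerate(
                Path(path).read_text().splitlines(), start=1)
            if pattern.search(line)]
```

Running it over `prroi_pooling_gpu.c` gives a checklist of exactly which lines need attention when building against PyTorch 2.x.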
Hi! I have a 3090 GPU, but I ran into a problem when compiling DCN.
The system is Ubuntu 18.04 and the PyTorch version is 1.7.0.
The error is:
nvcc fatal : Unsupported gpu architecture 'compute_86'
I do not know how to fix it.
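This error means the installed nvcc predates CUDA 11.1 and cannot emit sm_86 code for the 3090; upgrading the CUDA toolkit is the proper fix. As a workaround sketch (an assumption on my part, not a fix confirmed in this thread), you can ask PyTorch's extension builder to target an older architecture and ship PTX, which the driver then JIT-compiles for the 3090 at load time:

```python
import os

# TORCH_CUDA_ARCH_LIST overrides the architectures torch.utils.cpp_extension
# passes to nvcc. "7.5+PTX" builds sm_75 binaries plus compute_75 PTX; the
# CUDA driver can JIT-compile that PTX for newer GPUs such as sm_86.
os.environ["TORCH_CUDA_ARCH_LIST"] = "7.5+PTX"

# Any extension build started after this point (in the same process, or in a
# shell where the variable is exported) picks up the override.
print(os.environ["TORCH_CUDA_ARCH_LIST"])
```

Note the JIT path costs startup time and may miss Ampere-specific optimizations, so treat it as a stopgap until CUDA >= 11.1 is installed.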