Thank you very much for your contribution to point cloud instance segmentation, but I ran into an out-of-memory (OOM) problem when reproducing the experiments.
My environment is a cloud server with 40 GB of GPU memory. Even after reducing batch_size to 2, it still goes OOM.
First, running test.py on the STPLS3D dataset uses more than 30 GB of GPU memory, even though there are only 25 point cloud files under the val_250m folder. Second, when training on STPLS3D, it runs out of memory whenever validation starts.
I don't understand why this happens, since you report completing the experiments with 32 GB of GPU memory on a V100. Looking forward to your reply.
@Wzy-lab Hello, I noticed that you seem to be encountering similar issues as I am. During the validation/testing phase, I also experience the problem of CUDA running out of memory. If you have found a solution, could you kindly share your experience and strategies? I would greatly appreciate your help. Looking forward to your reply! Thank you.
Sorry, I didn't completely solve this problem. I added some memory-recovery code, which lets inference just barely fit in 40 GB of GPU memory, but I was not able to reproduce the training process.
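For anyone hitting the same wall, here is a minimal sketch of the kind of memory-recovery code I mean, assuming a PyTorch-style validation loop. The `model` and `val_loader` names and the dict-shaped batches are placeholders, not this repo's actual API; the relevant parts are the `no_grad` context, moving outputs to CPU right away, and explicitly releasing cached blocks between scenes.

```python
import gc
import torch

def run_validation(model, val_loader, device="cuda"):
    """Validation loop with aggressive memory recovery between scenes.

    `model` and `val_loader` stand in for the repo's own objects; the point
    is no_grad + offloading results to CPU + clearing the CUDA cache.
    """
    model.eval()
    results = []
    with torch.no_grad():
        for batch in val_loader:
            # Move tensors to the GPU only for the forward pass.
            batch = {k: v.to(device) if torch.is_tensor(v) else v
                     for k, v in batch.items()}
            out = model(batch)
            # Copy predictions to CPU immediately so they don't pin GPU memory.
            results.append({k: v.cpu() if torch.is_tensor(v) else v
                            for k, v in out.items()})
            # Drop references and release cached blocks before the next scene.
            del batch, out
            gc.collect()
            torch.cuda.empty_cache()
    return results
```

This does not reduce the peak memory of a single forward pass, so very large scenes can still OOM; it only prevents memory from accumulating across validation scenes.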