When I train with two GPUs using CUDA, I get an out-of-memory error: `torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 416.00 MiB (GPU 0; 23.64 GiB total capacity; 21.07 GiB already allocated; 405.69 MiB free; 22.77 GiB reserved in total by PyTorch)`. The message only mentions GPU 0, which suggests that only one GPU is being used. Is there any way to train on both GPUs simultaneously?
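One way to spread a batch across both GPUs is to wrap the model in `torch.nn.DataParallel`. The sketch below is a minimal illustration, not this repo's actual training code; the `nn.Linear` model and tensor shapes are hypothetical stand-ins:

```python
import torch
import torch.nn as nn

# Hypothetical toy model for illustration; substitute the repo's model here.
model = nn.Linear(16, 4)

# If more than one GPU is visible, DataParallel splits each input batch
# across the GPUs, so per-GPU activation memory drops roughly in half
# with two devices. On a CPU-only machine this wrapping is skipped.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)

x = torch.randn(8, 16, device=device)  # batch of 8, feature dim 16
out = model(x)
print(out.shape)  # torch.Size([8, 4])
```

Note that `DataParallel` replicates the full model on every GPU, so it helps with activation memory but not with model/optimizer state; for that, `torch.nn.parallel.DistributedDataParallel` launched via `torchrun` is the usually recommended approach. Also check that `CUDA_VISIBLE_DEVICES` exposes both GPUs to the process.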
May I ask which model size you are using: base, medium, or large? And what is the input size? I would like to use this model on other datasets, but I am concerned about the VRAM usage.