Dear author,

Thank you for your work. I would like to ask why the model's computation ends up on device 1. When I make two GPUs visible (e.g., gpus=0,1) and set batch_size to 1, I notice something strange: GPU 0 and GPU 1 both run at the same time. Could you advise how to modify the code so that the data and the model for a batch all stay on a single GPU when using DataParallel?
@Jason-u We never used a multi-GPU training setup. There is some code to set up a distributed training environment, but we never tested it. You probably need to modify guided_diffusion/dist_util.py.
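One way to keep both the model and the data on a single card is to expose only one GPU to the process and have the device helper return a fixed device. The sketch below is a minimal, untested example; it assumes `guided_diffusion/dist_util.py` provides a `dev()` helper that the training scripts call to pick a device (as in OpenAI's guided-diffusion), so adjust the names to match this repo.

```python
import os

# Make only one physical GPU visible before torch initializes CUDA.
# (Set this in the shell instead if torch is imported earlier elsewhere.)
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0")

import torch as th


def dev():
    """Return the single device that the model and all batches should use."""
    if th.cuda.is_available():
        return th.device("cuda:0")
    return th.device("cpu")


# If you still want DataParallel across several visible GPUs:
# model = th.nn.DataParallel(model.to(dev()))
# With batch_size=1 there is nothing to split across cards,
# so pinning everything to one GPU as above is the simpler option.
```

With only one device visible, `cuda:0` inside the process maps to whichever physical GPU you selected, so both the model and every batch stay on that single card.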