After sync batch norm is applied, more gpu memory is consumed #10
Comments
Hi @shachoi Thank you for your interest! I haven't tested the memory usage carefully, but a quick answer is yes, mainly on the master GPU card, because it needs to collect the statistics from the other cards. That said, I don't expect a big difference. Could you please share how much extra memory is used (in percent, for example)?
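To make the "collect the statistics from the other cards" point concrete, here is a minimal, hypothetical sketch of that pooling step. It is not the code in this repository; the function name and structure are purely illustrative of why the master device holds extra temporary buffers.

```python
import torch

def pooled_bn_statistics(xs):
    """Toy sketch of pooling BN statistics on a master device.

    `xs` is a list of (N, C, H, W) tensors, one per GPU. Each replica reduces
    its activations to a per-channel sum and sum of squares; those small
    tensors are copied to the first (master) device, where the global mean and
    variance are computed and then sent back. The temporaries that live only
    on the master device are one plausible source of asymmetric memory usage.
    """
    master = xs[0].device
    count = 0.0
    sums, sqsums = [], []
    for x in xs:
        count += x.numel() / x.size(1)                     # elements per channel
        sums.append(x.sum(dim=(0, 2, 3)).to(master))       # per-channel sum
        sqsums.append((x * x).sum(dim=(0, 2, 3)).to(master))
    mean = torch.stack(sums).sum(dim=0) / count
    var = torch.stack(sqsums).sum(dim=0) / count - mean * mean
    # broadcast the pooled statistics back to every replica's device
    return [(mean.to(x.device), var.to(x.device)) for x in xs]
```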
Hi @vacancy, I have tested sync batch norm on a DeepLab-ResNet based segmentation task.
Hi, I currently have little idea about the exact cause of the memory consumption. I will probably revisit this issue next week. Just for your reference, here is another project using this SyncBN: https://github.com/CSAILVision/semantic-segmentation-pytorch. @Tete-Xiao, do you have any comment on this?
@vacancy I did notice that the segmentation framework consumes more GPU memory than the normal one.
@shachoi Thank you for posting this issue! I think the memory consumption issue is confirmed. I will get back to this next week.
Hi @vacancy. Thanks for your great work! Do you have any solution to the memory consumption issue now?
@Tete-Xiao If you have some spare time, can you help me with this issue? @Hellomodo Here is my quick reply: there are two major reasons.
I have faced the same issue. Any progress so far? |
I have faced the same issue, too.
First of all, thank you for the implementation. It's very helpful.
I have one question.
After sync batch norm is applied, it consumes more GPU memory than normal batch norm does.
Is this expected?
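One way to quantify the difference discussed in this thread is to compare peak GPU memory for the same small block with the two normalization layers. This is only a sketch: it assumes the layer can be imported as `sync_batchnorm.SynchronizedBatchNorm2d` (adjust to your checkout), and a single-GPU forward/backward pass is just a rough proxy for the multi-GPU DataParallel setting where the issue was observed, so treat the numbers as indicative.

```python
import torch
import torch.nn as nn
# Assumed import path for this repository's layer; adjust to your checkout.
from sync_batchnorm import SynchronizedBatchNorm2d

def peak_memory_mb(bn_layer, device="cuda:0"):
    """Run one forward/backward pass through a small conv + BN block and
    return the peak memory allocated on `device`, in megabytes."""
    torch.cuda.reset_peak_memory_stats(device)
    model = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), bn_layer, nn.ReLU()).to(device)
    x = torch.randn(8, 3, 256, 256, device=device)
    model(x).sum().backward()
    torch.cuda.synchronize(device)
    return torch.cuda.max_memory_allocated(device) / 1024 ** 2

if __name__ == "__main__":
    print("BatchNorm2d:             %.1f MB" % peak_memory_mb(nn.BatchNorm2d(64)))
    print("SynchronizedBatchNorm2d: %.1f MB" % peak_memory_mb(SynchronizedBatchNorm2d(64)))
```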