I wonder why input normalization has such a large impact on the training of UNets, given that the first batch normalization is performed very early in the network and should, in principle, play a similar role.

I'm mostly asking because making input normalization dispensable could avoid the downsides of the various normalization strategies, especially for fluorescence microscopy, where absolute intensity often conveys an important source of information. A rough sketch of the two setups I have in mind is below.
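To make the comparison concrete, here is a minimal, hypothetical sketch (PyTorch-style, not this project's actual code) contrasting explicit percentile-based input normalization with a BatchNorm layer applied directly to the raw input, which is the role the first in-network batch normalization might be expected to play. The names `percentile_normalize`, `FirstBlock`, and all parameter values are illustrative assumptions, not taken from the repository.

```python
# Hypothetical sketch, not this repository's implementation.
import torch
import torch.nn as nn


def percentile_normalize(x, p_low=1.0, p_high=99.8):
    """Typical fluorescence-microscopy input normalization: rescale intensities
    so the p_low/p_high percentiles map approximately to 0 and 1."""
    lo = torch.quantile(x, p_low / 100.0)
    hi = torch.quantile(x, p_high / 100.0)
    return (x - lo) / (hi - lo + 1e-8)


class FirstBlock(nn.Module):
    """First UNet block with an optional BatchNorm on the raw input,
    instead of (or in addition to) explicit input normalization."""

    def __init__(self, in_ch=1, out_ch=32, bn_on_input=True):
        super().__init__()
        # BatchNorm on the raw input rescales per channel using batch
        # statistics, so absolute intensities are not pinned to [0, 1].
        self.input_bn = nn.BatchNorm2d(in_ch) if bn_on_input else nn.Identity()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.conv(self.input_bn(x))


if __name__ == "__main__":
    raw = torch.rand(4, 1, 128, 128) * 4000.0   # raw, 12-bit-like intensities
    explicit = percentile_normalize(raw)        # strategy 1: normalize before the network
    block = FirstBlock(bn_on_input=True)        # strategy 2: let BatchNorm rescale inside
    print(block(raw).shape, block(explicit).shape)
```

The practical difference is that the explicit normalization uses per-image (or per-dataset) statistics fixed at preprocessing time, while a BatchNorm on the input uses running batch statistics and a learnable affine transform, which may behave differently across batches and at inference.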