
Use ConvTranspose2d instead #11

Open
imadtyx opened this issue Mar 28, 2020 · 2 comments

Comments

@imadtyx

imadtyx commented Mar 28, 2020

Here in the U-Net code you have used an upsampling layer. Instead, you should be using ConvTranspose2d.

Upsampling is just a simple scaling-up of the feature map using nearest-neighbour or bilinear interpolation, so nothing is learned. The advantage is that it's cheap.

ConvTranspose2d is a convolution operation whose kernel is learned (just like a normal Conv2d) while training your model. ConvTranspose2d also upsamples its input, but the key difference is that the model learns the best upsampling for the job.
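A minimal sketch of the difference in PyTorch (channel and spatial sizes here are illustrative, not taken from the repo): `nn.Upsample` has zero trainable parameters, while `nn.ConvTranspose2d` carries a learned kernel, even though both double the spatial resolution.

```python
import torch
import torch.nn as nn

x = torch.randn(1, 64, 16, 16)

# Fixed, parameter-free upsampling: nearest/bilinear interpolation, nothing learned.
up_fixed = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=True)

# Learned upsampling: the transposed-convolution kernel is trained with the model.
up_learned = nn.ConvTranspose2d(64, 64, kernel_size=2, stride=2)

print(up_fixed(x).shape)    # torch.Size([1, 64, 32, 32])
print(up_learned(x).shape)  # torch.Size([1, 64, 32, 32])

print(sum(p.numel() for p in up_fixed.parameters()))    # 0
print(sum(p.numel() for p in up_learned.parameters()))  # 64*64*2*2 weights + 64 biases
```

The trade-off is that the learned version adds parameters (and can introduce checkerboard artifacts if kernel size and stride are chosen poorly), whereas interpolation followed by a regular convolution is cheaper and artifact-free.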

@usuyama (Owner)

usuyama commented May 19, 2020

Thanks for a great suggestion! Please feel free to submit a PR.

@nyngwang

nyngwang commented Jul 24, 2023

@imadtyx May I ask a question: is the second `x2` really required in the implementation there? https://github.com/milesial/Pytorch-UNet/blob/2f62e6b1c8e98022a6418d31a76f6abd800e5ae7/unet/unet_parts.py#L56C28-L56C28

I'm running an experiment with checkpointing, and storing fewer inputs would be better.
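For context on why the number of inputs matters here: `torch.utils.checkpoint` saves the inputs of the checkpointed function and recomputes its intermediates during backward, so every tensor argument is held in memory. A minimal sketch with a hypothetical two-input block (not the repo's actual `Up` module):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical stand-in for a UNet "Up" step that concatenates a skip
# connection (x2) with the upsampled path (x1). Under checkpointing,
# both x1 and x2 are saved for the backward recomputation, so an
# unnecessary second input costs activation memory.
class UpBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(2 * ch, ch, kernel_size=3, padding=1)

    def forward(self, x1, x2):
        return self.conv(torch.cat([x2, x1], dim=1))

block = UpBlock(8)
x1 = torch.randn(1, 8, 16, 16, requires_grad=True)
x2 = torch.randn(1, 8, 16, 16, requires_grad=True)

# Both tensor arguments are stored; intermediates are recomputed in backward.
y = checkpoint(block, x1, x2, use_reentrant=False)
y.sum().backward()
```

If the block could be rewritten to take a single concatenated tensor, only that one input would need to be saved.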

3 participants