Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network
Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, Wenzhe Shi
Re-implemented by:
- Weihao Wang (1988339)
- Jithin Kumar Palepu (2022405)
Despite the breakthroughs in accuracy and speed of single image super-resolution using faster and deeper convolutional neural networks, one central problem remains largely unsolved: how do we recover the finer texture details when we super-resolve at large upscaling factors?
In this paper, the authors present SRGAN, a generative adversarial network (GAN) for image super-resolution (SR). It trains a generative model G with the goal of fooling a differentiable discriminator D, which is in turn trained to distinguish super-resolved images from real images. To achieve this, the authors propose a perceptual loss function that consists of an adversarial loss and a content loss.
The adversarial loss pushes the solution toward the natural image manifold using a discriminator network trained to differentiate between super-resolved images and original photo-realistic images.
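In PyTorch (one of the two frameworks used in this re-implementation), the generator's adversarial loss can be sketched as below. The `generator` and `discriminator` modules are hypothetical stand-ins, not the SRResNet generator and VGG-style discriminator from the paper; any modules mapping LR images to SR images and images to real/fake logits illustrate the idea.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the SRGAN networks (the real ones are the
# SRResNet generator and the VGG-style discriminator from the paper).
generator = nn.Sequential(nn.Upsample(scale_factor=4),
                          nn.Conv2d(3, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten())

lr_images = torch.randn(4, 3, 24, 24)   # low-resolution batch
sr_images = generator(lr_images)        # super-resolved batch (4x upscale)

# Adversarial loss: push D(G(I_LR)) toward the "real" label (1),
# i.e. minimize -log D(G(I_LR)) via binary cross-entropy on logits.
bce = nn.BCEWithLogitsLoss()
fake_logits = discriminator(sr_images)
adversarial_loss = bce(fake_logits, torch.ones_like(fake_logits))
```

Minimizing this term rewards generator outputs that the discriminator classifies as real, which is what pushes the solution toward the natural image manifold.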
In addition, the authors use a VGG content loss motivated by perceptual similarity instead of similarity in pixel space.
All in all, they train the generator to estimate the parameters that minimize the proposed perceptual loss, i.e. the VGG content loss plus the adversarial loss weighted by a factor of 10^-3.
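Putting the pieces together, one alternating training step can be sketched as below. The tiny networks and the pixel-wise MSE (standing in for the VGG content loss) are simplifying assumptions so the snippet runs on its own; the structure of the two updates mirrors the method.

```python
import torch
import torch.nn as nn

# Toy stand-ins for the SRGAN networks, just to show the update structure.
G = nn.Sequential(nn.Upsample(scale_factor=4), nn.Conv2d(3, 3, 3, padding=1))
D = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
mse = nn.MSELoss()  # pixel-wise stand-in for the VGG content loss

lr_batch = torch.randn(4, 3, 24, 24)  # placeholder low-resolution batch
hr_batch = torch.randn(4, 3, 96, 96)  # placeholder high-resolution batch

# 1) Update D: distinguish real HR images (label 1) from SR images (label 0).
sr_batch = G(lr_batch).detach()
d_loss = (bce(D(hr_batch), torch.ones(4, 1))
          + bce(D(sr_batch), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Update G: perceptual loss = content loss + 1e-3 * adversarial loss.
sr_batch = G(lr_batch)
g_loss = mse(sr_batch, hr_batch) + 1e-3 * bce(D(sr_batch), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Detaching the generator output in the discriminator step keeps the D update from back-propagating into G, which is the standard way to alternate the two updates.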
We re-implemented the method proposed in this paper using two different frameworks:
- Tensorflow: see SRGAN-Tensorflow
- Pytorch: see SRGAN-PyTorch