This repository is the implementation of our paper:
Imanuel, I. and Lee, S. (2022), Super-resolution with adversarial loss on the feature maps of the generated high-resolution image. Electron. Lett., 58: 47-49. https://doi.org/10.1049/ell2.12360
The datasets used for training and testing are taken from this repository.
Training Dataset:
- Located inside the "Dataset" folder of the referenced repository
- Extract the files
- After extraction, the high-resolution data is inside the "HIGH" folder and the low-resolution data is inside the "LOW" folder
- You can change the dataset directory to match your setup in "dataset/data_train.py"
The testing dataset is inside "testset", which you can download from the referenced repository. You can change the dataset directory to match your setup in "dataset/data_test.py" (a loader sketch follows below).
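For orientation, here is a minimal, hypothetical sketch of what an unpaired HIGH/LOW loader with configurable root directories can look like; the actual class names, variable names, and transforms in "dataset/data_train.py" and "dataset/data_test.py" may differ, so treat everything here as a placeholder.

```python
# Hypothetical sketch of unpaired HIGH/LOW loading; the actual loaders in
# dataset/data_train.py and dataset/data_test.py may differ.
import os
import random
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

HIGH_DIR = "./Dataset_bulat/HIGH/SRtrainset_2/"  # adjust to your extracted paths
LOW_DIR = "./Dataset_bulat/LOW/"

def _list_images(folder):
    exts = (".png", ".jpg", ".jpeg", ".bmp")
    return sorted(os.path.join(folder, n) for n in os.listdir(folder)
                  if n.lower().endswith(exts))

class UnpairedSRDataset(Dataset):
    """Yields (low-res, high-res) tensors drawn from the two folders."""

    def __init__(self, low_dir=LOW_DIR, high_dir=HIGH_DIR):
        self.low = _list_images(low_dir)
        self.high = _list_images(high_dir)
        self.to_tensor = transforms.ToTensor()

    def __len__(self):
        return len(self.low)

    def __getitem__(self, idx):
        low = Image.open(self.low[idx]).convert("RGB")
        # Unpaired setup: sample a high-res image independently.
        high = Image.open(random.choice(self.high)).convert("RGB")
        return self.to_tensor(low), self.to_tensor(high)
```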
To reproduce the results reported in the paper, train the model for 100 epochs; the outputs should look similar to the examples shown in the paper.
During training, intermediate results are saved inside the "intermid_results_revised" folder. The saved files consist of:
- The intermediate images (inside the intermid_results_revised/imgs/ folder)
- The intermediate model checkpoints (inside the intermid_results_revised/model/ folder)
- The loss logs in .csv format (inside the intermid_results_revised/csv/ folder); a plotting sketch follows below
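To inspect training progress, the loss logs can be plotted directly. Below is a minimal sketch, assuming only that the .csv files are readable with pandas; the actual column names depend on the training script, so the snippet prints them first.

```python
# Minimal sketch for plotting the saved loss logs.
# The column names depend on the training script, so print them first.
import glob
import pandas as pd
import matplotlib.pyplot as plt

for path in glob.glob("intermid_results_revised/csv/*.csv"):
    df = pd.read_csv(path)
    print(path, "->", list(df.columns))  # discover which losses are logged
    df.plot(title=path)                  # one curve per numeric column
plt.show()
```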
To train the model using the VGG16 pretrained network:
python train_vgg.py --gpu your_gpu_number
To train the model using the ResNet18 pretrained network:
python train_resnet.py --gpu your_gpu_number
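Both scripts implement the core idea of the paper: the adversarial loss is applied to the feature maps that a frozen pretrained network extracts from the generated high-resolution image, rather than to the image itself. The sketch below illustrates only that loss computation; the layer cut, discriminator architecture, and GAN loss shown here are illustrative placeholders, not the exact configuration used in train_vgg.py or train_resnet.py.

```python
# Illustrative sketch: run the generated HR image through a frozen
# pretrained network and apply the adversarial loss to the resulting
# feature maps. Layer index, discriminator, and GAN loss are placeholders.
import torch
import torch.nn as nn
from torchvision import models

# Frozen VGG16 trunk up to an intermediate conv block (illustrative cut,
# output has 256 channels).
vgg = models.vgg16(pretrained=True).features[:16].eval()
for p in vgg.parameters():
    p.requires_grad = False

disc = nn.Sequential(  # toy feature-map discriminator
    nn.Conv2d(256, 64, 3, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 3, 1, 1),
)
bce = nn.BCEWithLogitsLoss()

def adversarial_feature_loss(sr_img, hr_img):
    """Adversarial loss applied to feature maps instead of raw pixels."""
    fake_feat = vgg(sr_img)  # features of the generated HR image
    real_feat = vgg(hr_img)  # features of a real HR image
    # Discriminator loss: real features -> 1, fake features -> 0.
    d_real = disc(real_feat)
    d_fake = disc(fake_feat.detach())  # detach: don't update the generator
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    # Generator loss: fool the discriminator on the fake features.
    g_fake = disc(fake_feat)
    g_loss = bce(g_fake, torch.ones_like(g_fake))
    return d_loss, g_loss
```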
To test the model on the low-resolution WIDER FACE dataset:
python test.py
The output images will be saved inside the "test_res" folder
To evaluate the model using the FID metric, run the following command:
python evaluation/fid_score.py ./Dataset_bulat/HIGH/SRtrainset_2/ ./test_res/
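As a reference for what the script computes: FID is the Fréchet distance between Gaussian fits (mean and covariance) of Inception activations for the two image folders. Below is a minimal numpy sketch of the closing formula only; the actual script also handles the Inception feature extraction.

```python
# Sketch of the Fréchet distance underlying the FID score:
# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * (C1 C2)^(1/2))
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1.dot(sigma2), disp=False)
    if np.iscomplexobj(covmean):  # sqrtm can return tiny imaginary
        covmean = covmean.real    # parts from numerical error
    return diff.dot(diff) + np.trace(sigma1 + sigma2 - 2.0 * covmean)
```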
If you find our work useful, please consider citing:
@article{imanuel2022superresolution,
author = {Imanuel, I. and Lee, S.},
title = {Super-resolution with adversarial loss on the feature maps of the generated high-resolution image},
journal = {Electronics Letters},
volume = {58},
number = {2},
pages = {47-49},
doi = {10.1049/ell2.12360},
url = {https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/ell2.12360},
eprint = {https://ietresearch.onlinelibrary.wiley.com/doi/pdf/10.1049/ell2.12360},
abstract = {Recent studies on image super-resolution make use of Generative Adversarial Networks to generate the high-resolution image counterpart of the low-resolution input. However, while being able to generate sharp high-resolution images, Generative Adversarial Networks based super-resolution methods often fail to produce good results when tested on images having different degradation as the low-resolution images used in the training. Some recent works have tried to mitigate this failure by introducing a degradation network that can replicate the noise of real-world low-resolution images. However, even these methods can produce poor results if a real-world test image differs much from the real-world images in the training data set. This paper proposes the use of adversarial losses on the feature maps extracted by a pre-trained network with the generated high-resolution image as input. This is in contrast to all other Generative Adversarial Networks-based super-resolution methods that directly apply the adversarial loss to the generated high-resolution image. The rationale behind this idea is illustrated, and experimental results confirm that high-resolution images generated by the proposed method achieve better results in both quantitative and qualitative evaluations than methods that directly apply adversarial losses to generated high-resolution images.},
year = {2022}
}
This code was written using help and references from the following repositories:
- yoon28's unpaired_face_sr and jingyang2017's Face-and-Image-super-resolution as the base skeleton for this code
- mseitzer's pytorch-fid for the FID metric evaluation code