This repository applies depthwise separable convolution to EDSR. To cut the parameter count, the number of feature maps and residual blocks is significantly reduced. Since MSE loss outperforms L1 loss at this reduced parameter count (see the results table below), MSE is adopted.
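The sketch below is illustrative only (module and parameter names are assumptions, not the repository's actual code): it shows an EDSR-style residual block in which each dense 3x3 convolution is replaced by a depthwise separable pair, i.e. a per-channel 3x3 depthwise convolution followed by a 1x1 pointwise convolution that mixes channels. The channel width of 8 matches the small model in the results table below.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """3x3 depthwise conv (one filter per input channel) + 1x1 pointwise conv."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # groups=channels makes each filter see only its own input channel
        self.depthwise = nn.Conv2d(channels, channels, kernel_size,
                                   padding=kernel_size // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

class ResBlock(nn.Module):
    """EDSR-style residual block (no batch norm) built from separable convs."""
    def __init__(self, channels=8):
        super().__init__()
        self.body = nn.Sequential(
            DepthwiseSeparableConv(channels),
            nn.ReLU(inplace=True),
            DepthwiseSeparableConv(channels),
        )

    def forward(self, x):
        return x + self.body(x)  # residual connection

block = ResBlock(channels=8)
print(sum(p.numel() for p in block.parameters()))   # 304
print(block(torch.randn(1, 8, 48, 48)).shape)       # torch.Size([1, 8, 48, 48])
```

The parameter saving is easy to verify at this width: a dense 3x3 convolution over 8 channels costs 8·8·9 + 8 = 584 parameters, while the depthwise (8·9 + 8 = 80) plus pointwise (8·8 + 8 = 72) pair costs 152, roughly a 3.8x reduction per layer.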
Dependencies:

- Python (Tested with 3.6)
- PyTorch >= 0.4.0
- numpy
- imageio
- matplotlib
- tqdm
- visdom
Clone this repository into any place you want.

```bash
git clone https://github.com/ryujaehun/EDSR-PyTorch
cd EDSR-PyTorch
```
You can test our super-resolution algorithm with your own images. Place your images in the `test` folder (like `test/<your_image>`). We support PNG and JPEG files.

Run the script in the `code` folder. Before you run the demo, uncomment the line in `demo.sh` that you want to execute.
```bash
cd code       # You are now in */EDSR-PyTorch/code
sh demo.sh
```
Model | Scale | Dataset | Parameters | Loss | Filters | ResBlocks | PSNR** |
---|---|---|---|---|---|---|---|
EDSR | 2 | Set5 | 3.5K | MSE | 8 | 2 | 36.119 dB |
EDSR | 2 | Set14 | 3.5K | MSE | 8 | 2 | 32.178 dB |
EDSR | 2 | B100 | 3.5K | MSE | 8 | 2 | 31.084 dB |
EDSR | 2 | Urban100 | 3.5K | MSE | 8 | 2 | 28.884 dB |
EDSR | 2 | Set5 | 3.5K | L1 | 8 | 2 | 35.222 dB |
EDSR | 2 | Set14 | 3.5K | L1 | 8 | 2 | 31.515 dB |
EDSR | 2 | B100 | 3.5K | L1 | 8 | 2 | 30.853 dB |
EDSR | 2 | Urban100 | 3.5K | L1 | 8 | 2 | 27.945 dB |
EDSR | 2 | Set5 | 62.1K | MSE | 32 | 8 | 37.19 dB |
EDSR | 2 | Set14 | 62.1K | MSE | 32 | 8 | 32.873 dB |
EDSR | 2 | B100 | 62.1K | MSE | 32 | 8 | 31.694 dB |
EDSR | 2 | Urban100 | 62.1K | MSE | 32 | 8 | 30.353 dB |
EDSR | 2 | Set5 | 62.1K | L1 | 32 | 8 | 37.246 dB |
EDSR | 2 | Set14 | 62.1K | L1 | 32 | 8 | 32.854 dB |
EDSR | 2 | B100 | 62.1K | L1 | 32 | 8 | 31.662 dB |
EDSR | 2 | Urban100 | 62.1K | L1 | 32 | 8 | 30.329 dB |
*Baseline models are in `experiment/model`. Please download our final models from here (542MB).
**We measured PSNR using DIV2K 0801 ~ 0900, RGB channels, without self-ensemble. (scale + 2) pixels from the image boundary are ignored.
You can evaluate your models with widely-used benchmark datasets:

- Set5 - Bevilacqua et al. BMVC 2012
- Set14 - Zeyde et al. LNCS 2010
- B100 - Martin et al. ICCV 2001
- Urban100 - Huang et al. CVPR 2015
For these datasets, we first convert the result images to YCbCr color space and evaluate PSNR on the Y channel only. You can download the benchmark datasets (250MB). Set `--dir_data <where_benchmark_folder_located>` to evaluate EDSR and MDSR with the benchmarks.
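For reference, here is a minimal sketch of this evaluation protocol, assuming uint8 RGB inputs and BT.601 luma coefficients; the function names are illustrative, and details such as the peak value used on the Y channel may differ from the repository's own evaluation code.

```python
import numpy as np

def rgb_to_y(img):
    """HxWx3 uint8 RGB array -> Y channel of YCbCr (BT.601, studio swing)."""
    img = img.astype(np.float64)
    return (65.481 * img[..., 0] + 128.553 * img[..., 1]
            + 24.966 * img[..., 2]) / 255.0 + 16.0

def psnr(sr, hr, shave):
    """PSNR between two same-size images, ignoring `shave` boundary pixels."""
    sr = sr[shave:-shave, shave:-shave]
    hr = hr[shave:-shave, shave:-shave]
    mse = np.mean((sr - hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Benchmark sets: PSNR on the Y channel only. For the DIV2K numbers in the
# table above, (scale + 2) boundary pixels are ignored.
scale = 2
# psnr(rgb_to_y(sr_rgb), rgb_to_y(hr_rgb), shave=scale + 2)
```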
We used the DIV2K dataset to train our model. Please download it from here (7.1GB). Unpack the tar file to any place you want, then change the `dir_data` argument in `code/option.py` to the place where the DIV2K images are located.
We recommend pre-processing the images before training. This step decodes all PNG files and saves them as binaries. Use the `--ext sep_reset` argument on your first run. On later runs, you can skip the decoding step and use the saved binaries with the `--ext sep` argument. If you have enough RAM (>= 32GB), you can use the `--ext bin` argument to pack all DIV2K images into one binary file.
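To make the caching idea concrete, here is a minimal sketch of what the `--ext` options do conceptually: decode each PNG once, save the raw array as a binary, and load the binaries on later runs. The `.npy` layout and function name are assumptions for illustration, not the repository's actual dataset code.

```python
import os
import imageio
import numpy as np

def load_cached(png_path, reset=False):
    """Decode a PNG once and cache it as .npy; reuse the cache afterwards."""
    bin_path = os.path.splitext(png_path)[0] + '.npy'
    if reset or not os.path.exists(bin_path):
        img = imageio.imread(png_path)  # slow: PNG decode (like --ext sep_reset)
        np.save(bin_path, img)          # cache the decoded array
        return img
    return np.load(bin_path)            # fast: raw binary read (like --ext sep)
```

Loading one binary per image corresponds to `--ext sep`; `--ext bin` goes one step further and packs the whole dataset into a single file, which is why it needs enough RAM to hold all of DIV2K.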
You can train EDSR and MDSR by yourself. All scripts are provided in `code/demo.sh`. Note that EDSR (x3, x4) requires a pre-trained EDSR (x2) model. You can ignore this constraint by removing the `--pre_train <x2 model>` argument.
```bash
cd code       # You are now in */EDSR-PyTorch/code
sh demo.sh
```
Update log
- Aug 19, 2018
  - Application of Depthwise Separable Convolution and Prediction of Training Time