Hao Chen*,
Zhi Jin✉
School of Intelligent Systems Engineering, Sun Yat-sen University
✉ Corresponding Author
Contact email: hao.chen.cs@gmail.com
Decreased visibility, intense noise, and color bias are common degradations in low-light images. These visual disturbances further reduce the performance of high-level vision tasks such as object detection and tracking. To address this issue, several image enhancement methods have been proposed to increase image contrast. However, most of them operate only in the spatial domain, where enhancement can be severely affected by noise. Hence, in this work, we propose R2-MWCNN, a novel residual recurrent multi-wavelet convolutional neural network learned in the frequency domain that simultaneously increases image contrast and suppresses noise. This end-to-end trainable network utilizes a multi-level discrete wavelet transform to divide input feature maps into distinct frequency bands, which leads to better denoising. A channel-wise loss function is proposed to correct color distortion for more realistic results. Extensive experiments demonstrate that the proposed R2-MWCNN outperforms state-of-the-art methods quantitatively and qualitatively.
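To make the frequency-domain idea concrete, the sketch below shows how a feature map can be split into Haar wavelet subbands and recursively decomposed, in the spirit of the multi-level DWT mentioned above. This is an illustrative PyTorch snippet written for this README, not the repository's implementation; the function names and the orthonormal 1/2 scaling are our assumptions.

```python
import torch

def haar_dwt2d(x: torch.Tensor):
    """One level of the 2D Haar DWT for a feature map x of shape (N, C, H, W).

    Returns the four subbands (LL, LH, HL, HH), each (N, C, H/2, W/2).
    Illustrative sketch only -- not the R2-MWCNN code.
    """
    x00 = x[:, :, 0::2, 0::2]  # even rows, even cols
    x01 = x[:, :, 0::2, 1::2]  # even rows, odd cols
    x10 = x[:, :, 1::2, 0::2]  # odd rows,  even cols
    x11 = x[:, :, 1::2, 1::2]  # odd rows,  odd cols

    ll = (x00 + x01 + x10 + x11) / 2.0  # low-frequency approximation
    lh = (x10 + x11 - x00 - x01) / 2.0  # detail across rows
    hl = (x01 + x11 - x00 - x10) / 2.0  # detail across columns
    hh = (x00 + x11 - x01 - x10) / 2.0  # diagonal detail
    return ll, lh, hl, hh

def multi_level_dwt(x: torch.Tensor, levels: int = 2):
    """Recursively decompose the LL band to obtain a multi-level pyramid."""
    bands = []
    ll = x
    for _ in range(levels):
        ll, lh, hl, hh = haar_dwt2d(ll)
        bands.append((lh, hl, hh))
    return ll, bands

if __name__ == "__main__":
    feats = torch.randn(1, 64, 128, 128)          # dummy feature maps
    ll, bands = multi_level_dwt(feats, levels=2)  # ll: (1, 64, 32, 32)
    print(ll.shape, [b[0].shape for b in bands])
```

In a wavelet-based network such as the one described above, the subbands are typically concatenated along the channel dimension and processed by subsequent convolutional layers, with an inverse transform restoring the spatial resolution.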
LOw Light paired dataset (LOL): Google Drive, Baidu Pan (Code:acp3)
You can also download them from the official website.
We recommend using Anaconda to set up the environment:
conda install pytorch==1.7.1 torchvision==0.8.2 cudatoolkit=11.0 -c pytorch -y
pip install tensorflow==2.3.0
# single GPU
PYTHONPATH='.':$PYTHONPATH python train.py
The pre-trained model weights can be downloaded here and placed into "./model".
PYTHONPATH='.':$PYTHONPATH python test.py
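Before running the test script, it can help to verify that the downloaded checkpoint loads correctly. The snippet below is a hedged sketch: the file name r2mwcnn.pth and the checkpoint layout are assumptions, so adjust them to the actual file placed in "./model".

```python
import torch

# Sanity-check the downloaded weights (the file name below is an assumption).
ckpt = torch.load('./model/r2mwcnn.pth', map_location='cpu')
# Checkpoints are often either a raw state_dict or a dict wrapping one.
state = ckpt.get('state_dict', ckpt) if isinstance(ckpt, dict) else ckpt
print(f'{len(state)} tensors, e.g. {list(state)[:3]}')
```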
If you find our repository useful for your research, please consider citing our paper:
@misc{chen2023lowlight,
title={Low-Light Enhancement in the Frequency Domain},
author={Hao Chen and Zhi Jin},
year={2023},
eprint={2306.16782},
archivePrefix={arXiv},
primaryClass={cs.CV}
}