EnlightenGAN

EnlightenGAN: Deep Light Enhancement without Paired Supervision

Representative Results

[Figure: representative enhancement results]

Overall Architecture

[Figure: overall architecture]

Environment Preparation

pip install -r requirement.txt
mkdir model
Download the VGG pretrained model from [Google Drive 1], [2], and put it into the directory model.
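
As a quick sanity check, the short script below verifies that the main dependencies import and that the downloaded VGG weights are in place. It is only a sketch: the weight file name vgg16.weight is an assumption and may differ from the files shared on Google Drive.

```python
# check_env.py -- minimal environment sanity check
# NOTE: the weight file name "vgg16.weight" is an assumption; adjust it to the downloaded file.
import os
import sys

def check_environment(model_dir="model", vgg_file="vgg16.weight"):
    # Verify that the core Python dependencies can be imported.
    try:
        import torch   # deep learning framework used by the training/testing scripts
        import visdom  # client for the visualization server used during training
    except ImportError as err:
        sys.exit("Missing dependency (%s) -- run: pip install -r requirement.txt" % err)

    # Verify that the pretrained VGG weights were placed under ./model.
    weight_path = os.path.join(model_dir, vgg_file)
    if not os.path.isfile(weight_path):
        sys.exit("VGG weights not found at %s -- download them from Google Drive first." % weight_path)

    print("Environment OK: torch %s, weights at %s" % (torch.__version__, weight_path))

if __name__ == "__main__":
    check_environment()
```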

Training process

Before starting the training process, launch the visdom server for visualization:

nohup python -m visdom.server -port=8097

Then run the following command:

python scripts/script.py --train
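
Before launching a long run, it can help to confirm that the visdom server started above is actually reachable. The snippet below is a minimal sketch: the port matches the command above, but the window name and curve are made up for illustration.

```python
# visdom_check.py -- confirm the visdom server on port 8097 is reachable
import numpy as np
import visdom

# Connect to the server launched with: python -m visdom.server -port=8097
vis = visdom.Visdom(port=8097)
assert vis.check_connection(), "visdom server not reachable on port 8097"

# Plot a dummy curve; real training curves appear in the same dashboard.
vis.line(
    X=np.arange(10),
    Y=np.linspace(1.0, 0.1, 10),
    win="dummy_loss",                     # window name is arbitrary
    opts=dict(title="dummy loss curve"),
)
print("visdom dashboard available at http://localhost:8097")
```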

Testing process

python scripts/script.py --predict
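
Once prediction finishes, a quick way to inspect results is to place an input image and its enhanced output side by side. The sketch below assumes you know where the script wrote its outputs; both file paths are placeholders, not paths defined by this repository.

```python
# compare.py -- save an input/enhanced pair as one side-by-side image (paths are placeholders)
from PIL import Image

def side_by_side(input_path, output_path, save_path="comparison.png"):
    low = Image.open(input_path).convert("RGB")
    enhanced = Image.open(output_path).convert("RGB")

    # Match sizes so the two halves line up in the comparison strip.
    enhanced = enhanced.resize(low.size)

    canvas = Image.new("RGB", (low.width * 2, low.height))
    canvas.paste(low, (0, 0))
    canvas.paste(enhanced, (low.width, 0))
    canvas.save(save_path)

if __name__ == "__main__":
    side_by_side("test_dataset/example.png", "results/example_enhanced.png")
```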

Dataset preparation

Training data [Google Drive] (unpaired images collected from multiple datasets)

Testing data [Google Drive] (including LIME, MEF, NPE, VV, DICP)
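
Since training is unpaired, the low-light and normal-light images are kept in separate folders. The sketch below simply counts images per split; the final_dataset/trainA and final_dataset/trainB names are assumptions about how the downloaded archive is organized, so adjust them to whatever the archive actually contains.

```python
# count_dataset.py -- count images in an unpaired low-light / normal-light split
# NOTE: the folder names below are assumptions; check the downloaded archive.
import os

IMAGE_EXTS = (".png", ".jpg", ".jpeg", ".bmp")

def count_images(root):
    if not os.path.isdir(root):
        return 0
    return sum(1 for name in os.listdir(root) if name.lower().endswith(IMAGE_EXTS))

if __name__ == "__main__":
    for split in ("trainA", "trainB"):  # assumed: low-light vs. normal-light images
        path = os.path.join("final_dataset", split)
        print("%s: %d images" % (path, count_images(path)))
```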

If you find this work useful, please cite:

@article{jiang2019enlightengan,
  title={EnlightenGAN: Deep Light Enhancement without Paired Supervision},
  author={Jiang, Yifan and Gong, Xinyu and Liu, Ding and Cheng, Yu and Fang, Chen and Shen, Xiaohui and Yang, Jianchao and Zhou, Pan and Wang, Zhangyang},
  journal={arXiv preprint arXiv:1906.06972},
  year={2019}
}
