Reimplementation of Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding
Adapted from https://github.com/59Kkk/pytorch_Ucolor_lcy
- Use the Kornia library for color-space conversions
- Use Accelerate for distributed training
- Fix a bug in the depth-map concatenation
- Fix NaN losses during training
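NaN losses usually come from numerically unstable operations such as a square root or logarithm evaluated at zero. The repository does not document its exact fix, so the following is only an illustrative sketch of the standard remedy (the function name and epsilon value are assumptions, not the repo's code):

```python
import math

def safe_charbonnier(pred, target, eps=1e-6):
    """Charbonnier (smooth L1) distance with an epsilon term.

    Without eps, sqrt(0) has an infinite derivative, so a perfect
    prediction can produce NaN gradients; adding eps keeps the
    square root strictly positive and differentiable everywhere.
    Illustrative stand-in, not the repository's actual loss.
    """
    return math.sqrt((pred - target) ** 2 + eps)
```

The same pattern (clamping or offsetting the argument away from zero) applies to any log or division inside a loss term.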
The dataset should be organized as follows:
dataset/
├─ train/
│ ├─ input/
│ │ ├─ 1.jpg
│ │ ├─ ...
│ ├─ depth/
│ │ ├─ 1.jpg
│ │ ├─ ...
│ └─ target/
│ ├─ 1.jpg
│ ├─ ...
└─ test/
├─ input/
│ ├─ 1.jpg
│ ├─ ...
├─ depth/
│ ├─ 1.jpg
│ ├─ ...
└─ target/
├─ 1.jpg
├─ ...
The input folder contains the underwater images, the depth folder contains the transmission maps, and the target folder contains the ground-truth images.
Each triplet must share exactly the same filename and extension.
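Because images are paired purely by filename, a mismatched or missing file only surfaces as a confusing error at load time. A small stdlib-only helper can verify a split up front (the function name and its usage are illustrative, not part of the repo):

```python
from pathlib import Path

def check_triplets(split_dir):
    """Verify that input/, depth/ and target/ under split_dir
    contain exactly the same set of filenames (name + extension)."""
    name_sets = [
        {p.name for p in (Path(split_dir) / sub).iterdir() if p.is_file()}
        for sub in ("input", "depth", "target")
    ]
    # Files present in some folders but not all three are unpaired.
    missing = set.union(*name_sets) - set.intersection(*name_sets)
    if missing:
        raise ValueError(f"Unpaired files: {sorted(missing)}")
```

For example, `check_triplets("dataset/train")` would raise if `train/input/1.jpg` had no matching `train/depth/1.jpg`.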
Download the dataset first, then set TRAIN_DIR, VAL_DIR and SAVE_DIR in the TRAINING section of config.yml.
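A minimal sketch of the relevant part of config.yml, assuming the paths follow the dataset layout above (only the three key names come from this README; the values and nesting are assumptions):

```yaml
TRAINING:
  TRAIN_DIR: dataset/train   # folder with input/, depth/, target/
  VAL_DIR: dataset/test
  SAVE_DIR: checkpoints      # where model weights are written
```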
For single GPU training:
python train.py
For multi-GPU training:
accelerate config
accelerate launch train.py
If you have trouble using Accelerate, please refer to the Accelerate documentation.
For inference, first set TRAIN_DIR, VAL_DIR and SAVE_DIR in the TESTING section of config.yml, then run:
python infer.py