PyTorch implementation of ideas from the paper *Neural Wireframe Renderer: Learning Wireframe to Image Translations* by Yuan Xue, Zihan Zhou, and Xiaolei Huang.
- Tested on CentOS 7
- Python >= 3.6
- PyTorch >= 1.0
- TensorboardX >= 1.6
- You can download the data from here. By default, please extract all files inside `v1.1` to the `data/raw_data/imgs` folder, and extract all files inside `pointlines` to the `data/raw_data/pointlines` folder.
- To preprocess the data, run

```
python data/preprocess.py --uni_wf
```

The processed data will be saved under the `data` folder.
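Before running the preprocessing script, it can help to confirm that the raw data was extracted to the locations described above. The following is a small, hypothetical helper (not part of the repo) that checks for the two expected directories:

```python
from pathlib import Path

# Directories the instructions above say the raw data should be extracted to.
EXPECTED_DIRS = ["data/raw_data/imgs", "data/raw_data/pointlines"]

def check_raw_data(root="."):
    """Return the list of expected raw-data directories that are missing."""
    root = Path(root)
    return [d for d in EXPECTED_DIRS if not (root / d).is_dir()]

if __name__ == "__main__":
    missing = check_raw_data()
    if missing:
        print("Missing directories:", ", ".join(missing))
    else:
        print("Raw data layout looks OK.")
```

If the script reports missing directories, re-extract the downloaded archives to the paths listed above before running `data/preprocess.py`.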
We support both single gpu training and multi-gpu training with Jiayuan Mao's Synchronized Batch Normalization.
Example Single GPU Training

If you are training with color-guided rendering:

```
python train.py --gpu 0 --batch_size 14
```

If you are training without color-guided rendering:

```
python train.py --gpu 0 --batch_size 14 --nocolor
```

Example Multiple GPU Training

```
python train.py --gpu 0,1,2,3 --batch_size 40
```
Tensorboard Visualization

```
tensorboard --logdir results/tb_logs/wfrenderer --port 6666
```
Note that the --nocolor option must be used consistently between training and testing. For instance, you cannot train with --nocolor and then test without it.

```
python test.py --gpu 0 --model_path YOUR_SAVED_MODEL_PATH --out_path YOUR_OUTPUT_PATH
```
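To illustrate why the flag must match: a switch like --nocolor is typically a boolean argparse flag that changes the model that gets built, so a checkpoint trained one way cannot be loaded the other way. The sketch below is illustrative of how such a flag usually works, not the repo's actual parser:

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the train/test CLI described above.
    parser = argparse.ArgumentParser()
    parser.add_argument("--nocolor", action="store_true",
                        help="disable color-guided rendering")
    return parser
```

Because the flag defaults to off, omitting it at test time after training with it (or vice versa) builds a mismatched model.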
For now, we only support rasterized wireframes as input; we will release the vectorized wireframe version in the near future.
We hope our implementation can serve as a baseline for wireframe rendering. If you find our work useful in your research, please consider citing:
```
@inproceedings{xue2020neural,
  title={Neural Wireframe Renderer: Learning Wireframe to Image Translations},
  author={Xue, Yuan and Zhou, Zihan and Huang, Xiaolei},
  booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
  year={2020}
}
```
Part of our code is adapted from CycleGAN. We also thank the authors of these great repos utilized in our code: LPIPS, MSSSIM, and SyncBN.