CIPS-3D

This repository contains the code of the paper
CIPS-3D: A 3D-Aware Generator of GANs Based on Conditionally-Independent Pixel Synthesis.

✔️ (2021-10-27) All the code files have been released. The configuration files (yaml files) for training will be released next. For now, a GUI script and pre-trained models are provided to facilitate experiments with network interpolation (see below). If you find any problems, please open an issue. Have fun with it.

✔️ (2021-10-25) Thank you for your kind attention. The GitHub stars have reached two hundred. I will open-source the training code in the near future.

✔️ (2021-10-20) We plan to publish the training code here in December. But if the GitHub stars reach two hundred, I will move the date up. Stay tuned 🕙.

Demo videos

demo1.mp4
demo2.mp4
demo_animal_finetuned.mp4
demo3.mp4
demo4.mp4
demo5.mp4

Mirror symmetry problem

The mirror-symmetry problem refers to the sudden flip of the direction of the bangs near a yaw angle of π/2. We propose an auxiliary discriminator to solve this problem (please see the paper).

Note that in the initial stage of training, the auxiliary discriminator must influence the generator more strongly than the main discriminator does; otherwise the mirror-symmetry problem still occurs. In practice, progressive training guarantees this. We have trained from scratch many times, and adding the auxiliary discriminator reliably solves the mirror-symmetry problem. If you find any problems with this idea, please open an issue.
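As a rough illustration of weighting the auxiliary discriminator more heavily early in training, here is a minimal sketch of a generator loss. The modules (generator, d_main, d_aux), the non-saturating softplus loss, and the warm-up schedule are illustrative assumptions, not the repository's actual training loop.

import torch
import torch.nn.functional as F

def generator_loss(generator, d_main, d_aux, z, step, warmup_steps=20000):
    """Non-saturating generator loss in which the auxiliary discriminator
    carries more weight than the main one during the first warmup_steps."""
    fake = generator(z)
    loss_main = F.softplus(-d_main(fake)).mean()
    loss_aux = F.softplus(-d_aux(fake)).mean()

    # Early in training the auxiliary term dominates (weight 4 -> 1);
    # after the warm-up the two discriminators are weighted equally.
    w_aux = max(1.0, 4.0 * (1.0 - step / warmup_steps))
    return loss_main + w_aux * loss_aux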

Envs

git clone --recursive https://github.com/PeterouZh/CIPS-3D.git
cd CIPS-3D

# Create virtual environment
conda create -y --name cips3d python=3.6.7
conda activate cips3d

pip install torch==1.8.2+cu102 torchvision==0.9.2+cu102 -f https://download.pytorch.org/whl/lts/1.8/torch_lts.html

pip install --no-cache-dir tl2==0.0.3
pip install --no-cache-dir -r requirements.txt

pip install -e torch_fidelity_lib
pip install -e pytorch_ema_lib
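After installation, a quick sanity check that the expected PyTorch build and the GPU are visible (nothing repository-specific):

import torch

print(torch.__version__)          # expect 1.8.2+cu102
print(torch.cuda.is_available())  # should be True for GPU training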

Model interpolation

Download the pre-trained models G_ema_ffhq.pth and G_ema_cartoon.pth and put them in datasets/pretrained.

Execute this command:

streamlit run --server.port 8650 -- scripts/web_demo.py  \
  --outdir results/model_interpolation \
  --cfg_file configs/web_demo.yaml \
  --command model_interpolation

Then open http://your_ip_address:8650 in your browser.

You can debug this script with this command:

python scripts/web_demo.py  \
  --outdir results/model_interpolation \
  --cfg_file configs/web_demo.yaml \
  --command model_interpolation \
  --debug True
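Under the hood, network interpolation amounts to blending the weights of the two generators. Here is a minimal sketch of the idea, assuming both checkpoints hold state dicts with identical keys and shapes; the blending below is illustrative and is not the web demo's implementation.

import torch

def blend_checkpoints(path_a, path_b, alpha=0.5):
    """Linearly interpolate two compatible checkpoints.
    alpha = 0.0 keeps model A; alpha = 1.0 gives model B."""
    sd_a = torch.load(path_a, map_location="cpu")
    sd_b = torch.load(path_b, map_location="cpu")
    return {k: torch.lerp(sd_a[k], sd_b[k], alpha)
            if torch.is_floating_point(sd_a[k]) else sd_a[k]
            for k in sd_a}

blended = blend_checkpoints("datasets/pretrained/G_ema_ffhq.pth",
                            "datasets/pretrained/G_ema_cartoon.pth",
                            alpha=0.5)
# Load into a generator instance with the matching architecture:
# generator.load_state_dict(blended)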

Prepare dataset

Finetune INR Net

Training from scratch

Citation

If you find our work useful in your research, please cite:


@article{zhou2021CIPS3D,
  title = {{{CIPS}}-{{3D}}: A {{3D}}-{{Aware Generator}} of {{GANs Based}} on {{Conditionally}}-{{Independent Pixel Synthesis}}},
  shorttitle = {{{CIPS}}-{{3D}}},
  author = {Zhou, Peng and Xie, Lingxi and Ni, Bingbing and Tian, Qi},
  year = {2021},
  eprint = {2110.09788},
  eprinttype = {arxiv},
  primaryclass = {cs, eess},
  archiveprefix = {arXiv}
}

Acknowledgments
