Due to the related agreements, only the testing code is available for now.
The code requires Python 3.8+ and CUDA 11.0+. The setup steps are as follows:
- Create the conda environment

  ```shell
  conda create -n fnerv python=3.8
  conda activate fnerv
  ```
- Install PyTorch

  ```shell
  conda install pytorch==1.8.0 torchvision==0.9.0 torchaudio==0.8.0 cudatoolkit=11.1 -c pytorch -c conda-forge
  ```
- Install the remaining dependencies

  ```shell
  pip install -r requirements.txt
  ```
The checkpoint can be found at the following link: one-drive.
To run a reenactment demo, download the checkpoint and run the following command:

```shell
python demo.py --config config/vox_256.yaml --driving_video sup-mat/driving.mp4 --source_image sup-mat/source.png --checkpoint path/to/checkpoint --mode reenactment --relative --adapt_scale
```

To run a reconstruction demo, download the checkpoint and run the following command:

```shell
python demo.py --config config/vox_256.yaml --driving_video sup-mat/driving.mp4 --checkpoint path/to/checkpoint --mode reconstruction
```

The result will be stored in result.mp4.
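The two invocations above differ only in the `--mode` value and the reenactment-specific flags. As a convenience, a small helper (hypothetical, not part of this repo) can assemble either command line from the arguments shown above:

```python
import subprocess

def build_demo_cmd(mode, checkpoint, driving_video, source_image=None,
                   config="config/vox_256.yaml"):
    """Assemble the demo.py command line for either demo mode.

    Flag names match the commands in this README; the helper itself
    is an illustrative sketch, not an official entry point.
    """
    cmd = ["python", "demo.py",
           "--config", config,
           "--driving_video", driving_video,
           "--checkpoint", checkpoint,
           "--mode", mode]
    if mode == "reenactment":
        # Reenactment additionally needs a source image and the
        # relative / adaptive-scale motion-transfer flags.
        cmd += ["--source_image", source_image, "--relative", "--adapt_scale"]
    return cmd

# Example: reconstruction mode needs no source image.
print(" ".join(build_demo_cmd("reconstruction", "path/to/checkpoint",
                              "sup-mat/driving.mp4")))
# The list form can be passed directly to subprocess.run(cmd).
```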
Our FNeVR implementation is inspired by FOMM and DECA. We thank the authors of these works for making their code publicly available.