One-shot face animation using a webcam, capable of running in real time.
(Driving video | Result video)
- Original result, without face restoration: example_result.mp4
- With face restoration: example_result_with_sr.mp4
Install the dependencies:
pip install -r requirements.txt
Tested on an RTX 3090: 17 FPS without face restoration and 10 FPS with it. Run the local webcam demo:
python camera_local.py --source_image ./assets/source.jpg --restore_face False
The model's native output size is 256x256, but you can set the output size to 512x512 or larger to get a resized result. For example, to drive the animation from a video file instead of the webcam and save the result:
python camera_local.py --source_image ./assets/source.jpg --restore_face False --driving_video ./assets/driving.mp4 --result_video ./result_video.mp4 --output_size 512
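Note that this upscaling is plain image resizing of the 256x256 output, not extra generated detail. A minimal sketch of what a 512x512 output amounts to, with an illustrative stand-in frame (the interpolation mode here is an assumption):

```python
import cv2
import numpy as np

# Stand-in for one 256x256 frame from the generator (illustrative only).
frame_256 = np.zeros((256, 256, 3), dtype=np.uint8)

# --output_size 512 amounts to resizing the native output like this.
frame_512 = cv2.resize(frame_256, (512, 512), interpolation=cv2.INTER_LINEAR)
```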
The driving video requires no preprocessing; any video is valid as long as every frame contains a face.
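If you want to check a video beforehand, here is a rough sanity check. It uses OpenCV's bundled Haar cascade rather than this repo's own face detector, so treat it as an approximate screen, not a guarantee:

```python
import cv2

# Rough check: report frames where OpenCV's stock Haar cascade finds no face.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture("./assets/driving.mp4")
idx, missing = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if len(detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)) == 0:
        missing.append(idx)
    idx += 1
cap.release()
print("frames without a detected face:", missing or "none")
```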
To run the demo on a remote server, first bind a port between the server and the client, for example with VS Code's remote SSH port forwarding or a plain SSH tunnel (see below). Then run the server side on the remote server and the client side on the local machine.
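For instance, a plain SSH tunnel would look like the following, where 5555 is a placeholder; forward whichever port remote_server.py actually listens on:
ssh -L 5555:localhost:5555 user@remote-server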
Note that due to network latency, the frame rate is low (only 1~2 FPS).
Server Side:
python remote_server.py --source_image ./assets/source.jpg --restore_face False
Client Side (only this file needs to be copied to the local machine):
python remote_client.py
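For reference, such a client typically captures webcam frames, ships them to the server, and displays the returned animated frames. The sketch below is hypothetical: it assumes a simple length-prefixed JPEG-over-TCP protocol on port 5555, which is not necessarily what remote_client.py actually does:

```python
import socket
import struct

import cv2
import numpy as np

HOST, PORT = "localhost", 5555  # assumed forwarded port (see the tunnel above)

def recv_exact(sock, n):
    # Read exactly n bytes, or fail if the server closes the connection.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("server closed the connection")
        data += chunk
    return data

cap = cv2.VideoCapture(0)  # local webcam
with socket.create_connection((HOST, PORT)) as sock:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-encode to cut bandwidth; round-trip latency still dominates,
        # which is why the remote demo only reaches 1~2 FPS.
        ok, buf = cv2.imencode(".jpg", frame)
        payload = buf.tobytes()
        sock.sendall(struct.pack(">I", len(payload)) + payload)
        # Receive the animated frame back in the same length-prefixed format.
        size = struct.unpack(">I", recv_exact(sock, 4))[0]
        result = cv2.imdecode(
            np.frombuffer(recv_exact(sock, size), np.uint8), cv2.IMREAD_COLOR)
        cv2.imshow("result", result)
        if cv2.waitKey(1) == 27:  # press Esc to quit
            break
cap.release()
```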
All necessary pre-trained models are downloaded automatically when the demo first runs. If you need to fetch them manually, refer to the source repositories credited in the acknowledgements below.
The motion transfer code is modified from zhanglonghao1992's implementation, and the face restoration code is modified from GPEN.
Thanks to the authors for their great work!