This is work from the course 3D Scanning and Motion Capture @TUM [IN2354]. The goal of the project is to transfer facial expressions from one actor to another. To this end, a 3D parametric face model is fitted (optimized) to the input RGB-D sequences via non-linear least squares, following the approach of Thies et al. 2015.
To see the results, check the images in papers/final_report.
Authors
- Yimin Pan
- Weixiao Xia
- Wei Yi
- Xiyue Zhang
In order to install the dependencies for this project, follow the steps below.
Ubuntu
- Run the following commands in your terminal to install the build tools
sudo apt-get update
sudo apt-get install cmake gcc g++ dos2unix
sudo apt install -y ccache
sudo /usr/sbin/update-ccache-symlinks
echo 'export PATH="/usr/lib/ccache:$PATH"' | tee -a ~/.bashrc
source ~/.bashrc && echo $PATH
- Set your current working directory to the root folder of this project, run the following commands, and do not close the terminal until the installation is done.
dos2unix install_dependencies_linux.sh
sudo bash install_dependencies_linux.sh
Windows
- Make sure you have Git Bash installed on your computer.
- Open Git Bash
- Set your current working directory to the root folder of this project
- Find your Visual Studio's version. For example, "Visual Studio 16 2019"
- Run the following command with the Visual Studio version found in the previous step
./install_dependencies_win.sh "Visual Studio 16 2019"
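If you are unsure which generator string to pass, the mapping from Visual Studio release year to CMake generator name can be sketched as below. This is an illustrative helper (not part of the repository); the generator strings follow CMake's documented names.

```python
# Hypothetical helper: map a Visual Studio release year to the CMake
# generator string expected by install_dependencies_win.sh.
VS_GENERATORS = {
    2017: "Visual Studio 15 2017",
    2019: "Visual Studio 16 2019",
    2022: "Visual Studio 17 2022",
}

def vs_generator(year: int) -> str:
    """Return the CMake generator name for a given Visual Studio year."""
    try:
        return VS_GENERATORS[year]
    except KeyError:
        raise ValueError(f"No known CMake generator for Visual Studio {year}")

print(vs_generator(2019))  # -> Visual Studio 16 2019
```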
Windows (WSL)
- Install Ubuntu 20.04.4 LTS for WSL from the Microsoft Store. (You will have to reboot your system to finish the installation.)
- Open Windows PowerShell and start WSL with the following command
wsl
- Once WSL is running, change the current working directory to the root folder of this project and follow the same steps described for Ubuntu users. (Hint: to switch to drive C, for example, you can use cd /mnt/c/.)
Our method uses a CUDA-parallelized rasterizer, so to run this project you need an NVIDIA GPU with CUDA installed on your system.
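A minimal pre-flight sketch to confirm the CUDA toolchain is visible before building; it only checks that the driver utility and compiler are on PATH, not the compute capability or runtime version.

```python
import shutil

def cuda_toolchain_available() -> dict:
    """Check whether the NVIDIA driver utility and CUDA compiler are on PATH."""
    return {
        "nvidia-smi": shutil.which("nvidia-smi") is not None,  # driver utility
        "nvcc": shutil.which("nvcc") is not None,              # CUDA compiler
    }

print(cuda_toolchain_available())
```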
You have to copy all the .dll files of the glog, OpenCV, and HDF5 libraries to your executable's folder. After this you will be able to build the target face_reconstruction.
- Go to the src folder
- Create build folder with command
mkdir build
- Go to the created build folder
cd build
- Compile and build the project (you can also build in debug mode; just replace Release with Debug)
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j8
Some files need to be placed in specific directories so the program can find them.
This file contains information such as the basis, mean, and standard deviation of the face model.
- data/
BFM17.h5 # Basel Face Model 2017, which can be downloaded from "https://faces.dmi.unibas.ch/bfm/bfm2017.html" (the simplified model)
These are the RGB image and depth map from which the face mesh is reconstructed.
- data/
samples/
depth/
sample.png # Sample depth map
rgb/
sample.png # Sample RGB
Landmarks are needed too, but these can be predicted from the RGB input with the provided script.
The scripts are implemented in Python, and you need to install some packages to run them:
cd scripts
pip install -r requirements.txt
Before running the program to fit the face model, run /scripts/extractLandmarks.py to precompute the landmark locations for the input image (or sequence).
If you only have the RGB image and no depth map, one option is to use /scripts/predictDepth.py to estimate it with a deep learning model. However, the result is much noisier than a depth map captured by a depth sensor, so the reconstruction quality suffers.
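Monocular depth networks typically output relative (sometimes inverse) depth as floats, while depth-sensor pipelines expect 16-bit maps in metric units. A sketch of the rescaling step is below; the near/far range is illustrative and not calibrated, and predictDepth.py may handle this differently.

```python
import numpy as np

def relative_to_uint16(depth, near_mm=400, far_mm=1500):
    """Rescale a float relative-depth map into an assumed metric range (mm)
    and quantize it to a sensor-style 16-bit array. If the network outputs
    inverse depth (e.g. MiDaS-style), invert it before calling this."""
    d = depth.astype(np.float64)
    d = (d - d.min()) / max(d.max() - d.min(), 1e-9)  # normalize to [0, 1]
    mm = near_mm + d * (far_mm - near_mm)             # map into [near, far] mm
    return np.round(mm).astype(np.uint16)
```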
We also provide a script, preprocessSequence.py, which we used to center-crop and adapt depth maps captured with a Kinect to fit our setting.
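The center-crop step can be sketched as below; the crop size is an assumption for illustration, and preprocessSequence.py may apply additional adaptation beyond cropping.

```python
import numpy as np

def center_crop(img, out_h, out_w):
    """Cut an (out_h, out_w) window from the middle of a frame,
    e.g. to match a Kinect depth map to the RGB input's extent."""
    h, w = img.shape[:2]
    top = (h - out_h) // 2
    left = (w - out_w) // 2
    return img[top:top + out_h, left:left + out_w]
```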