
Setup application


IRCGN-head-compare application


Built with

  • Python - Python is a programming language that lets you work quickly and is well suited to deep learning
  • Tensorflow - TensorFlow is an open-source machine learning framework developed by Google
  • Keras - Keras is an open-source neural-network library written in Python; it is capable of running on top of TensorFlow
  • OpenCV - OpenCV is a free graphics library specialized in real-time image processing
  • Doxygen - Doxygen is a free documentation generator capable of producing software documentation from the source code of a program

Getting started

Get the IRCGN-head-compare source code:

git clone https://github.com/LafLaurine/imac2-memoire-ircgn

Dependencies installation

Python 3.7 or higher

sudo apt-get install python3.7

Pip

sudo apt-get install mesa-utils python3-pip build-essential

To install Python 3.7 for Windows, you have to download it from the official website: https://www.python.org/downloads/. Then, to configure it and use pip, refer to these instructions.

Numpy 1.7 or higher

pip install numpy

OpenCV 3.0 or higher

sudo apt-get install python3-opencv

or

pip install opencv-python

Matplotlib

sudo apt-get install python3-matplotlib

or

pip install matplotlib

Pandas

pip install pandas

Pillow

pip install pillow

Scikit-learn

pip install scikit-learn

PyTorch 1.0 or higher with CUDA (see CUDA installation below)

Guided download link: https://pytorch.org/get-started/locally/

pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html 

PyTorch without CUDA

pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html 

Tensorflow with CUDA (see CUDA installation below)

pip install tensorflow-gpu==1.15

Tensorflow without CUDA

pip install tensorflow==1.15

For optimal performance it is highly recommended to run the code on a CUDA-enabled GPU, which involves installing CUDA and cuDNN (you must have an NVIDIA graphics card).

CUDA

10.1 (Ubuntu 19.04)

sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get install dkms build-essential
sudo apt-get update
sudo apt-get install nvidia-driver-450
sudo ubuntu-drivers autoinstall

_After the driver install go ahead and reboot_
sudo shutdown -r now

_Install CUDA dependencies_
sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev

sudo apt install nvidia-cuda-toolkit
nvcc --version

CUDA for all platforms

Guided download link: https://developer.nvidia.com/cuda-downloads

cuDNN

Guided download link: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
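
Once CUDA and cuDNN are installed, you can quickly check that PyTorch and TensorFlow actually see the GPU. This is only a sanity check, not part of the project code:

```python
# Sanity check: verify that PyTorch and TensorFlow detect the CUDA GPU.
import torch
import tensorflow as tf

print("PyTorch CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))

# TensorFlow 1.15 API
print("TensorFlow GPU available:", tf.test.is_gpu_available())
```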

Libraries

Face-alignment written by Adrian Bulat and Georgios Tzimiropoulos

sudo apt install nvidia-cuda-toolkit gcc-6

pip install face-alignment
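
For reference, here is a minimal usage sketch of the face-alignment library, which produces the 68 landmarks used by the feature scripts below (the exact enum name may vary between versions, and the image path is only an example; check the library's README):

```python
# Minimal face-alignment usage sketch: 68 2D landmarks per detected face.
import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cuda')
image = io.imread('test/face.jpg')       # hypothetical example image
landmarks = fa.get_landmarks(image)      # list of (68, 2) arrays, one per detected face
```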

Face-expression-recognition written by Jie Wu

It doesn't require a specific installation beyond the dependencies above (PyTorch)

Services

Quick overview of all implemented services. Each service works on its own, so if you only want to extract the faces of an input video, you can.

All services

Extract faces : extract faces from a video

Get faces feature : get the face orientation (Euler's angles), lips opening, expression, masks and bounding box center + width of each face contained in a directory

Transform all features to CSV : put these features into a CSV for neural network training purposes. A file called "all_data.csv" is then created, containing all the data from the CSV files in the src/csv directory

Train decoder : train a convolutional decoder with our face features as input

Generate image : use the decoder to generate an image from given parameters

Pipeline

Download PDF to view

Usage of the services

Pay attention when you give a path: it must be relative, not absolute. Example: --video test/laurine.mp4, NOT C:\Users..

Services need to be run in a specific order if you want the pipeline to work well. First, you need to extract faces. Then, you need to get the face features. After that, you need to transform these features into a CSV file. Finally, you should be able to train the decoder and generate images.
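
For a single video, the whole pipeline therefore boils down to the commands detailed in the following sections, run in this order:

cd src/extraction
python3 extraction.py --video video_path --subdirectory subdirectory_path --n_step number
cd ..
python3 main.py --directory directory_path
python3 from_txt_to_csv.py --directory directory_path
python3 decoder.py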

Extract faces

You need to go to the src directory and then to the extraction one

cd src/extraction 

The file we are interested in is extraction.py

python3 extraction.py --video video_path --subdirectory subdirectory_path --n_step number 

video_path is the path to the video from which you want to extract faces; it needs to be at src/extraction

subdirectory is the name of the subdirectory under which you want to save the extracted faces: they are put in the extracted_faces directory under the subdirectory you give (for example, with --subdirectory Laurine1, faces will be saved at extracted_faces/Laurine1)

n_step is a step you can set for extracting a frame every n_step * fps frames. If your video is 25 fps and you put a step of 1, a face will be saved every 25 frames, that is once per second. If you want to extract all frames, you can set it to 0.
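
To make the n_step behaviour concrete, the extraction essentially performs a loop like the following (a simplified sketch, not the actual source of extraction.py):

```python
# Simplified sketch of the frame-sampling logic driven by --n_step.
import cv2

def extract_frames(video_path, n_step):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    step = max(1, int(n_step * fps))     # 0 means "keep every frame"
    frame_idx = 0
    kept = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            kept.append(frame)           # a face detector then crops the face here
        frame_idx += 1
    cap.release()
    return kept
```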

Get faces features

Basically, you only need to run one file to get all the features (src/main.py); this script calls other scripts that each extract one feature.

cd src 
python3 main.py --directory directory_path

directory_path is the path to the directory where your faces are saved (for example extracted_faces/Laurine1)

Lips opening

src/get_distance_lips.py. No need to run it in standalone mode because it is called by the main script.

Computes the lips opening thanks to landmarks.
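
For reference, with 68-point facial landmarks the lips opening can be measured as the distance between the inner upper-lip and inner lower-lip points, roughly as below (indices follow the usual 68-point convention; the actual script may use a different formula):

```python
# Sketch: lips opening as the distance between inner-lip landmarks
# (points 62 and 66 in the standard 68-point layout, 0-indexed).
import numpy as np

def lips_opening(landmarks):
    """landmarks: (68, 2) array of (x, y) facial landmark coordinates."""
    upper_inner = landmarks[62]
    lower_inner = landmarks[66]
    return float(np.linalg.norm(lower_inner - upper_inner))
```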

Euler's angles

src/get_rotation_matrix.py. No need to run it in standalone mode because it is called by the main script.

Computes theta, phi and psi, the Euler angles, from a rotation matrix that is estimated from the landmarks using least squares and matrix transformations.
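
For context, once a 3x3 rotation matrix R has been estimated from the landmarks, the Euler angles can be recovered with the standard decomposition below (a generic sketch; the project's script may use a different axis convention):

```python
# Generic sketch: Euler angles (theta, phi, psi) from a 3x3 rotation matrix R,
# using the common Z-Y-X (yaw-pitch-roll) decomposition.
import numpy as np

def euler_angles(R):
    theta = np.arctan2(R[2, 1], R[2, 2])                             # rotation around x
    phi = np.arctan2(-R[2, 0], np.sqrt(R[2, 1]**2 + R[2, 2]**2))     # rotation around y
    psi = np.arctan2(R[1, 0], R[0, 0])                               # rotation around z
    return theta, phi, psi
```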

Expression

lib/FacialExpressionRecognition/visualize.py. No need to run it in standalone mode because it is called by the main script.

Get the face expression with a degree of probability for each expression thanks to the [Wu Jie Facial-Expression-Recognition lib](https://github.com/WuJie1010/Facial-Expression-Recognition.Pytorch)

Masks

src/mask.py. No need to run it in standalone mode because it is called by the main script.

Get the masks of the faces in order to remove the background. We use these masks for training the decoder.
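
Conceptually, applying such a mask to remove the background is just an element-wise combination of the face image with the binary mask, for example (illustrative only; file names are hypothetical):

```python
# Illustrative sketch: blank out the background of a face crop with a binary mask.
import cv2

face = cv2.imread('extracted_faces/Laurine1/face_0.png')                  # hypothetical file name
mask = cv2.imread('extracted_faces/Laurine1/mask_0.png', cv2.IMREAD_GRAYSCALE)
masked_face = cv2.bitwise_and(face, face, mask=mask)                      # background pixels become black
cv2.imwrite('masked_face_0.png', masked_face)
```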

Transform all features to CSV

For each directory:

python3 from_txt_to_csv.py --directory directory_path

directory_path is the path to the directory where your faces are saved (for example extracted_faces/Laurine1)
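
The merge into all_data.csv amounts to concatenating every per-directory CSV found in src/csv, roughly like this (a sketch of the idea, run from the src directory, not the actual script):

```python
# Sketch: concatenate every CSV in src/csv into a single all_data.csv.
import glob
import pandas as pd

frames = [pd.read_csv(path) for path in sorted(glob.glob('csv/*.csv'))]
pd.concat(frames, ignore_index=True).to_csv('all_data.csv', index=False)
```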

Train decoder

python3 decoder.py

Generate image
