Setup application
- Python - a programming language that lets you work quickly and is widely used for deep learning
- TensorFlow - an open-source machine learning framework developed by Google
- Keras - an open-source neural-network library written in Python that can run on top of TensorFlow
- OpenCV - a free computer vision library specialized in real-time image processing
- Doxygen - a freely licensed documentation generator that produces software documentation from a program's source code
Get the IRCGN-head-compare source code:
git clone https://github.com/LafLaurine/imac2-memoire-ircgn
sudo apt-get install python3.7
sudo apt-get install mesa-utils python3-pip build-essential
To install Python 3.7 on Windows, download it from the official website: https://www.python.org/downloads/. Then, to configure it and use pip, refer to these instructions.
pip install numpy
sudo apt-get install python3-opencv
or
pip install opencv-python
sudo apt-get install python3-matplotlib
or
pip install matplotlib
pip install pandas
pip install pillow
pip install scikit-learn
Guided download link: https://pytorch.org/get-started/locally/
With a CUDA 10.1 GPU:
pip install torch==1.6.0+cu101 torchvision==0.7.0+cu101 -f https://download.pytorch.org/whl/torch_stable.html
CPU only:
pip install torch==1.6.0+cpu torchvision==0.7.0+cpu -f https://download.pytorch.org/whl/torch_stable.html
With a CUDA-enabled GPU:
pip install tensorflow-gpu==1.15
CPU only:
pip install tensorflow==1.15
For optimal performance it is highly recommended to run the code on a CUDA-enabled GPU, which involves installing CUDA and cuDNN (you must have an NVIDIA graphics card).
CUDA 10.1 (Ubuntu 19.04)
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt-get install dkms build-essential
sudo apt-get update
sudo apt-get install nvidia-driver-450
sudo ubuntu-drivers autoinstall
_After the driver install, go ahead and reboot_
sudo shutdown -r now
_Install CUDA dependencies_
sudo apt-get install freeglut3 freeglut3-dev libxi-dev libxmu-dev
sudo apt install nvidia-cuda-toolkit
nvcc --version
Guided download link: https://developer.nvidia.com/cuda-downloads
Guided download link: https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html
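Once the frameworks (and, if you use a GPU, CUDA and cuDNN) are installed, a small sanity check such as the sketch below can confirm the versions and whether a GPU is visible. It is only a convenience snippet, not part of the project:

```python
# Optional sanity check: print framework versions and GPU visibility.
import tensorflow as tf
import torch

print("TensorFlow:", tf.__version__)             # expected 1.15.x
print("PyTorch:", torch.__version__)              # expected 1.6.0
print("PyTorch sees a CUDA GPU:", torch.cuda.is_available())
print("TensorFlow sees a GPU:", tf.test.is_gpu_available())
```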
Face-alignment, written by Adrian Bulat and Georgios Tzimiropoulos
sudo apt install nvidia-cuda-toolkit gcc-6
pip install face-alignment
Face-expression-recognition, written by Jie Wu
It doesn't need a specific installation beyond the dependencies above (PyTorch).
Quick overview of all implemented services. Each service works on its own, so if you only want to extract the faces of an input video, you can.
- Extract faces: extract faces from a video
- Get face features: get the face orientation (Euler's angles), lips opening, expression, mask and bounding box center + width of each face contained in a directory
- Transform all features to CSV: put these features into a CSV file for neural network training purposes. A file called "all_data.csv" is then created, containing all the data of the CSV files that are in the directory src/csv
- Train decoder: train a convolutional decoder with our face features as input
- Generate image: use the decoder to generate an image with the given parameters
Pay attention when you give a path: it needs to be relative, not absolute. Example: --video test/laurine.mp4, NOT C:\Users..
Services need to be run in a specific order if you want the pipeline to work well. First, extract the faces. Then, get the face features. After that, transform these features into a CSV file. Finally, you should be able to train the decoder and generate images.
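As an illustration, assuming the video sits at src/extraction/test/laurine.mp4, the subdirectory is called Laurine1, and from_txt_to_csv.py and decoder.py are run from src (see the detailed steps below), the whole pipeline boils down to:
cd src/extraction
python3 extraction.py --video test/laurine.mp4 --subdirectory Laurine1 --n_step 0
cd ..
python3 main.py --directory extracted_faces/Laurine1
python3 from_txt_to_csv.py --directory extracted_faces/Laurine1
python3 decoder.py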
You need to go to the src directory and then to the extraction one
cd src/extraction
The file we are interested in is extraction.py
python3 extraction.py --video video_path --subdirectory subdirectory_path --n_step number
- video_path is the path to the video from which you want to extract faces; it needs to be at src/extraction
- subdirectory is the subdirectory in which the extracted faces are saved: they are put in the extracted_faces directory under the subdirectory you gave (for example with --subdirectory Laurine1, faces will be saved at extracted_faces/Laurine1)
- n_step is a step you can set for extracting frames every n_step * fps frames. If your video is 25 fps and you put a step of 1, faces will be saved every 25 seconds. If you want to extract all frames, you can set it to 0.
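For example: python3 extraction.py --video test/laurine.mp4 --subdirectory Laurine1 --n_step 0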
Basically, you only need to run one file to get all the features (src/main.py); this script calls the other ones, which each compute one feature individually.
cd src
python3 main.py --directory directory_path
directory_path is the path to the directory where your faces are saved (for example extracted_faces/Laurine1)
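For example: python3 main.py --directory extracted_faces/Laurine1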
src/get_distance_lips.py
No need to run it in standalone mode because it is called by the main script. Computes the lips opening from the facial landmarks.
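As an illustration of the idea (a minimal sketch, not the project's exact code), assuming the 68-point landmark convention returned by face-alignment, the opening can be measured as the distance between the inner upper and lower lip points:

```python
import numpy as np

def lips_opening(landmarks):
    """Rough lips-opening measure from 68-point facial landmarks.

    Assumes the iBUG 68-point convention (as returned by face-alignment):
    index 62 is the inner upper lip centre, index 66 the inner lower lip centre.
    """
    upper = np.asarray(landmarks[62], dtype=float)
    lower = np.asarray(landmarks[66], dtype=float)
    return float(np.linalg.norm(upper - lower))
```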
src/get_rotation_matrix.py
No need to run it in standalone mode because it is called by the main script. Computes theta, phi and psi, the Euler angles, from a rotation matrix estimated from the landmarks using least squares and matrix transformations.
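For reference, a minimal sketch of extracting Euler angles from a 3x3 rotation matrix (one common x-y-z convention; the project's script may use a different one):

```python
import numpy as np

def euler_angles(R):
    """Return (theta, phi, psi) in radians from a 3x3 rotation matrix R.

    Assumes R = Rz(psi) @ Ry(phi) @ Rx(theta); other conventions give
    different formulas.
    """
    theta = np.arctan2(R[2, 1], R[2, 2])                      # rotation around x
    phi = np.arctan2(-R[2, 0], np.hypot(R[2, 1], R[2, 2]))    # rotation around y
    psi = np.arctan2(R[1, 0], R[0, 0])                        # rotation around z
    return theta, phi, psi
```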
lib/FacialExpressionRecognition/visualize.py
No need to run it in standalone mode because it is called by the main script. Gets the face expression, with a degree of probability for each expression, thanks to [Jie Wu's Facial-Expression-Recognition.Pytorch library](https://github.com/WuJie1010/Facial-Expression-Recognition.Pytorch).
src/mask.py
No need to run it in standalone mode because it is called by the main script. Gets the masks of the faces in order to remove the background. We use these masks for training the decoder.
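As a rough illustration (the file names here are hypothetical, not the project's), applying such a binary mask with OpenCV to drop the background looks like this:

```python
import cv2

# Illustrative only: keep the face pixels selected by a binary mask.
face = cv2.imread("face.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)   # make sure it is binary
masked_face = cv2.bitwise_and(face, face, mask=mask)
cv2.imwrite("face_no_background.png", masked_face)
```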
For each directory:
python3 from_txt_to_csv.py --directory directory_path
directory_path is the path to the directory where your faces are saved (for example extracted_faces/Laurine1)
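For example: python3 from_txt_to_csv.py --directory extracted_faces/Laurine1

As a rough sketch of what the aggregation into all_data.csv amounts to (assuming the per-directory CSV files live in src/csv as described in the overview; the actual script may differ):

```python
import glob
import os
import pandas as pd

# Rough sketch: gather every per-directory CSV from src/csv into all_data.csv.
csv_dir = "src/csv"
paths = sorted(p for p in glob.glob(os.path.join(csv_dir, "*.csv"))
               if not p.endswith("all_data.csv"))
all_data = pd.concat((pd.read_csv(p) for p in paths), ignore_index=True)
all_data.to_csv(os.path.join(csv_dir, "all_data.csv"), index=False)
```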
python3 decoder.py