Train FaceNet (InceptionResNet v1) on faces with mask augmentation, using a combined loss (triplet and cross-entropy).
This fine-tunes the pre-trained FaceNet via transfer learning to adapt it to masked faces, so it can be used as a feature extractor in face recognition models or applications.
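A rough sketch of what such a combined objective might look like in PyTorch; the margin, the weighting factor, and the function names are illustrative assumptions, not taken from this repository's code.

```python
import torch.nn as nn

# Assumed combined objective: triplet margin loss on embeddings plus
# softmax cross-entropy on identity logits. Margin and weight are illustrative.
triplet_criterion = nn.TripletMarginLoss(margin=0.2)
ce_criterion = nn.CrossEntropyLoss()

def combined_loss(anchor, positive, negative, logits, labels, ce_weight=1.0):
    """anchor/positive/negative: embedding batches; logits: identity classifier output."""
    return triplet_criterion(anchor, positive, negative) + ce_weight * ce_criterion(logits, labels)
```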
- python
- pytorch
- dlib
- opencv
- sklearn
- pandas
MTCNN face detection model (optional), or use the HOG method.
dlib 68-point face shape predictor
FaceNet pre-trained on CASIA-webface (a loading sketch follows below)
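As a rough sketch, the detector, the 68-point shape predictor, and a CASIA-webface pre-trained FaceNet could be loaded like this; the predictor file name and the use of the facenet-pytorch package are assumptions, not necessarily what this repo's code does.

```python
import dlib
from facenet_pytorch import InceptionResnetV1  # assumption: facenet-pytorch backbone

detector = dlib.get_frontal_face_detector()    # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # 68-point landmarks
model = InceptionResnetV1(pretrained="casia-webface").eval()  # FaceNet pre-trained on CASIA-webface
```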
For training, use CASIA-webface or any other face dataset.
- Since the original homepage is gone, I can't provide a download link, but there are other mirrors on the internet.
For evaluation, use LFW and LFW_pairs.txt as the test data.
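A minimal sketch of how a single LFW pair might be scored, assuming 160x160 preprocessed face tensors and a 512-d embedding model as in facenet-pytorch; the threshold and helper name are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pair_distance(model, face_a, face_b):
    """face_a, face_b: preprocessed face tensors of shape (3, 160, 160)."""
    with torch.no_grad():
        emb = model(torch.stack([face_a, face_b]))   # (2, 512) embeddings
        emb = F.normalize(emb, p=2, dim=1)           # L2-normalize before comparing
    return (emb[0] - emb[1]).norm().item()

# Example: call pairs with distance below a chosen threshold "same person".
# same = pair_distance(model, img_a, img_b) < 1.1   # threshold is an assumption
```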
Create the masked-face dataset for both the training and test data.
./create_masked_face_dataset.py
**Edit the dataset paths in the code.
Face detection and the shape predictor are used to get the face landmarks; triangular pieces are selected from the landmark coordinates, and the corresponding pieces of the mask image are warped and placed onto the face image.
The triangular piece selection is a manual process: if you want to edit or add a new mask image, you need to select the face landmark point indices and the corresponding (x, y) positions on the mask image, then add them to the FaceMasking.py code (see the warping sketch below).
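A simplified sketch of warping one triangular piece of the mask image onto the face with OpenCV, assuming src_tri holds the selected (x, y) points on the mask image and dst_tri the corresponding face landmark coordinates; the function name and the copy step are illustrative, not the exact FaceMasking.py implementation.

```python
import cv2
import numpy as np

def warp_triangle(mask_img, face_img, src_tri, dst_tri):
    """Warp one triangle from the mask image onto the matching face triangle."""
    src_tri = np.float32(src_tri)
    dst_tri = np.float32(dst_tri)
    # Affine transform mapping the mask-image triangle onto the face triangle.
    M = cv2.getAffineTransform(src_tri, dst_tri)
    h, w = face_img.shape[:2]
    warped = cv2.warpAffine(mask_img, M, (w, h))
    # Copy only the pixels that fall inside the destination triangle.
    region = np.zeros((h, w), dtype=np.uint8)
    cv2.fillConvexPoly(region, np.int32(dst_tri), 255)
    face_img[region > 0] = warped[region > 0]
    return face_img
```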
Histogram of predicted distances on the masked LFW face-pair data (blue: distances for same-person pairs, red: distances for different-person pairs).
Note: This is not the best model. Training on CASIA-webface took a lot of time, so I didn't run many tests on this model.