
FaceNet PyTorch Trainer with Mask Augmentation

Train FaceNet (Inception-ResNet v1) on faces with a mask augmentation, using a combined loss (triplet loss and cross-entropy).

This trains FaceNet by transfer learning from a pre-trained model to adapt it to masked faces, so it can be used as a feature extractor in a face recognition model or application.
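
As a rough illustration of such a combined triplet + cross-entropy objective in PyTorch (the margin and the 0.5 weight below are assumptions, not necessarily the values used in this repo):

```python
import torch.nn as nn

# Sketch of a combined objective: triplet loss on the embeddings plus
# cross-entropy on the identity logits. Margin and weight are illustrative.
triplet_criterion = nn.TripletMarginLoss(margin=0.2)
ce_criterion = nn.CrossEntropyLoss()

def combined_loss(anchor_emb, positive_emb, negative_emb, logits, labels, ce_weight=0.5):
    """Triplet loss over (anchor, positive, negative) embeddings
    plus weighted cross-entropy over the classification logits."""
    return (triplet_criterion(anchor_emb, positive_emb, negative_emb)
            + ce_weight * ce_criterion(logits, labels))
```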

Prerequisites

  • Python
  • PyTorch
  • dlib
  • OpenCV
  • scikit-learn
  • pandas

Pre-Trained Models

MTCNN face detection model (optional; the HOG-based detector in dlib can be used instead)

68-point face shape (facial landmark) predictor

FaceNet weights pre-trained on CASIA-WebFace
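
If these come from the facenet-pytorch package (an assumption; the repository may instead load its own checkpoint files), they can be obtained roughly like this:

```python
from facenet_pytorch import MTCNN, InceptionResnetV1

# MTCNN face detector (optional; dlib's HOG detector can be used instead).
mtcnn = MTCNN(image_size=160)

# Inception-ResNet v1 with weights pre-trained on CASIA-WebFace.
facenet = InceptionResnetV1(pretrained='casia-webface')
```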

Dataset

For training, use CASIA-WebFace or any other face dataset.

  • Since the original homepage is gone, I can't provide a link, but there are many mirrors elsewhere on the internet.

For evaluation, use LFW and LFW_pairs.txt as test data.

Create the masked-face dataset for both the training and the test data:

```
./create_masked_face_dataset.py
```

**Note: edit the dataset paths in the code before running.

How to wear a mask

Use face detection and the face shape predictor to get facial landmarks, then select triangle-piece coordinates and warp the corresponding pieces of the mask image onto the face image.

Selecting the triangular pieces is a manual process: to edit or add a new mask image, you need to choose the face landmark point indices and the matching (x, y) positions on the mask image, then add them to the FaceMasking.py code.
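
A minimal sketch of that triangle-warping idea, assuming dlib's 68-point predictor and OpenCV; the `mask_points`, `landmark_indices` and `triangles` arguments are hypothetical stand-ins for the manual mappings kept in FaceMasking.py:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()  # HOG-based face detector
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def warp_triangle(src_img, dst_img, src_tri, dst_tri):
    """Warp one triangular piece of src_img onto dst_img in place."""
    src_tri, dst_tri = np.float32(src_tri), np.float32(dst_tri)
    # Work inside the bounding rectangles of the two triangles.
    sx, sy, sw, sh = cv2.boundingRect(src_tri)
    dx, dy, dw, dh = cv2.boundingRect(dst_tri)
    src_crop = src_img[sy:sy + sh, sx:sx + sw]
    src_local = np.float32(src_tri - [sx, sy])
    dst_local = np.float32(dst_tri - [dx, dy])
    # Affine transform that maps the source triangle onto the destination one.
    M = cv2.getAffineTransform(src_local, dst_local)
    warped = cv2.warpAffine(src_crop, M, (dw, dh), flags=cv2.INTER_LINEAR,
                            borderMode=cv2.BORDER_REFLECT_101)
    # Keep only the pixels inside the destination triangle and paste them in.
    mask = np.zeros((dh, dw, 3), dtype=np.uint8)
    cv2.fillConvexPoly(mask, np.int32(dst_local), (1, 1, 1))
    roi = dst_img[dy:dy + dh, dx:dx + dw]
    dst_img[dy:dy + dh, dx:dx + dw] = roi * (1 - mask) + warped * mask

def apply_mask(face_img, mask_img, mask_points, landmark_indices, triangles):
    """mask_points: (x, y) points on the mask image; landmark_indices: the
    matching 68-landmark indices; triangles: index triples into those lists."""
    faces = detector(face_img, 1)
    if not faces:
        return face_img
    shape = predictor(face_img, faces[0])
    face_points = [(shape.part(i).x, shape.part(i).y) for i in landmark_indices]
    out = face_img.copy()
    for a, b, c in triangles:
        warp_triangle(mask_img, out,
                      [mask_points[a], mask_points[b], mask_points[c]],
                      [face_points[a], face_points[b], face_points[c]])
    return out
```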

Evaluation

Histogram of the predicted distances on masked LFW face pairs (blue: distances for pairs of the same person; red: distances for pairs of different people).
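
A sketch of how such a histogram can be produced, assuming the pair embeddings have already been extracted (the function name and the `same_pairs` / `diff_pairs` inputs are hypothetical; matplotlib is an extra dependency):

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_pair_distance_histogram(same_pairs, diff_pairs, bins=50):
    """same_pairs / diff_pairs: lists of (embedding_a, embedding_b) tuples."""
    same_d = [np.linalg.norm(a - b) for a, b in same_pairs]
    diff_d = [np.linalg.norm(a - b) for a, b in diff_pairs]
    plt.hist(same_d, bins=bins, color="blue", alpha=0.6, label="same person")
    plt.hist(diff_d, bins=bins, color="red", alpha=0.6, label="different person")
    plt.xlabel("embedding distance")
    plt.ylabel("pair count")
    plt.legend()
    plt.show()
```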

Note: This is not the best possible model; training on CASIA-WebFace takes a lot of time, so I did not run many experiments with it.