Large-pose facial images present a challenge for facial recognition models, especially for fully lateral (profile) poses. The ability to reconstruct a frontal view (frontalization) from a lateral view while preserving the subject's identifying features is useful for improving recognition accuracy on large-pose facial images, and has applications in other fields such as forensics. The frontalization problem is made difficult by several factors, including pose variation, self-occlusion, lighting conditions, and variation in facial expression. Given the large amount of facial image data available today, deep neural networks, and specifically generative adversarial networks (GANs), are well suited to tackle this problem. In this project, we explore deep learning methods for face frontalization using GANs to generate a convincing frontal-view synthesis from a lateral-view input image.
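As a rough illustration of the approach, the sketch below shows a pix2pix-style conditional-GAN objective for frontalization: the generator maps a lateral-view image to a frontal view, and the discriminator judges (input, output) pairs. The function names and the `lambda_l1 = 100.0` weighting are illustrative assumptions, not the exact settings used in this project's notebooks.

```python
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def generator_loss(disc_fake_logits, generated, target, lambda_l1=100.0):
    """Adversarial loss plus an L1 term pulling the synthesis toward the real frontal view."""
    adv = bce(tf.ones_like(disc_fake_logits), disc_fake_logits)
    l1 = tf.reduce_mean(tf.abs(target - generated))
    return adv + lambda_l1 * l1

def discriminator_loss(disc_real_logits, disc_fake_logits):
    """Real (profile, frontal) pairs should score as real; (profile, synthesis) pairs as fake."""
    real = bce(tf.ones_like(disc_real_logits), disc_real_logits)
    fake = bce(tf.zeros_like(disc_fake_logits), disc_fake_logits)
    return real + fake
```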
For each GAN implementation directory (`pix2pix`, `pix2pix_ip_loss`, `tp_gan`):
- Unzip the data set into the `data` directory.
- For `pix2pix_ip_loss` and `tp_gan`, download the LightCNN-29 v2 trained model weights and unpack them into the `light-cnn` directory. These weights are provided by the TP-GAN Keras implementation and are used for the identity-preserving loss (see the sketch after this list).
- Run the code in the Jupyter notebook. This requires TensorFlow 2.0 with GPU support. Training takes several hours even with a GPU, and TP-GAN requires training for many more epochs than the other two models.
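A minimal sketch of how an identity-preserving loss typically uses the pretrained recognition network: both the synthesized and the ground-truth frontal image are embedded by the face-recognition model (LightCNN-29 v2 in this project), and the distance between embeddings penalizes identity drift. The `feature_extractor` argument and the plain L2 distance are assumptions for illustration, not the actual LightCNN Keras API or the exact loss used here.

```python
import tensorflow as tf

def identity_loss(feature_extractor, generated, target):
    """L2 distance between face embeddings of the synthesis and the real frontal image."""
    feat_gen = feature_extractor(generated, training=False)   # embedding of generated frontal view
    feat_real = feature_extractor(target, training=False)     # embedding of ground-truth frontal view
    return tf.reduce_mean(tf.square(feat_gen - feat_real))

# Illustrative usage: add this term, scaled by a weight, to the generator's
# adversarial + L1 objective, e.g.
#   total_g_loss = generator_loss(...) + lambda_ip * identity_loss(light_cnn, fake, real)
```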
Code adapted from:
- Pix2Pix TensorFlow docs
- TP-GAN official implementation
- TP-GAN Keras implementation
- LightCNN Keras implementation
Data adapted from the NIST Mugshot Identification Database
