This repository provides the official PyTorch implementation of the following paper:
Improving Face Recognition with Large Age Gaps by Learning to Distinguish Children
Jungsoo Lee* (KAIST AI), Jooyeol Yun* (KAIST AI), Sunghyun Park (KAIST AI),
Yonggyu Kim (Korea Univ.), and Jaegul Choo (KAIST AI) (*: equal contribution)
BMVC 2021
Abstract: Despite the unprecedented improvement of face recognition, existing face recognition models still show considerably low performance in determining whether a pair of child and adult images belongs to the same identity. Previous approaches mainly focused on increasing the similarity between child and adult images of a given identity to overcome the discrepancy in facial appearance due to aging. However, we observe that reducing the similarity between child images of different identities is crucial for learning distinct features among children and thus improving face recognition performance on child-adult pairs. Based on this intuition, we propose a novel loss function called the Inter-Prototype loss, which minimizes the similarity between child images of different identities. Unlike previous studies, the Inter-Prototype loss requires neither additional child images nor training additional learnable parameters. Our extensive experiments and in-depth analyses show that our approach outperforms existing baselines in face recognition with child-adult pairs.
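The intuition above can be sketched in a few lines. The following is a hypothetical numpy illustration of the idea (spreading apart the prototypes of child identities by penalizing their pairwise cosine similarity), not the paper's actual loss formulation or training code:

```python
# Hypothetical sketch of the Inter-Prototype intuition: given one prototype
# vector per child identity, penalize pairwise cosine similarity so that
# child prototypes move apart. This is an illustration, not the repo's loss.
import numpy as np

def inter_prototype_penalty(child_prototypes):
    """child_prototypes: (N, D) array, one row per child identity."""
    # L2-normalize each prototype so dot products are cosine similarities
    p = child_prototypes / np.linalg.norm(child_prototypes, axis=1, keepdims=True)
    sim = p @ p.T                            # (N, N) pairwise cosine similarities
    n = len(p)
    off_diag = sim[~np.eye(n, dtype=bool)]   # drop self-similarity terms
    return off_diag.mean()                   # minimizing this spreads prototypes apart

# Toy check: identical prototypes give penalty 1, orthogonal ones give 0.
print(round(inter_prototype_penalty(np.ones((3, 4))), 4))  # 1.0
print(round(inter_prototype_penalty(np.eye(3, 4)), 4))     # 0.0
```

Minimizing a term like this during training pushes child identities apart in the embedding space, which is the behavior the abstract argues is missing from similarity-maximizing approaches.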
Clone this repository.
git clone https://github.com/leebebeto/Inter-Prototype.git
cd Inter-Prototype
pip install -r requirements.txt
CUDA_VISIBLE_DEVICES=0 python3 train.py --data_mode=casia --exp=interproto_casia --wandb --tensorboard
We used two different training datasets: 1) CASIA WebFace and 2) MS1M.
We constructed test sets of child-adult pairs with age gaps of at least 20 and 30 years from AgeDB and FG-NET, termed AgeDB-C20, AgeDB-C30, FGNET-C20, and FGNET-C30. We also used the LAG (Large Age Gap) dataset as a test set. For the age labels, we used the age annotations from MTLFace, which are available at this link. We provide a script for downloading the test datasets.
sh scripts/download_test_data.sh
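The pair-construction rule described above can be sketched as follows. This is a hypothetical illustration under stated assumptions (a child cutoff age of 13 and a simple `(identity, filename, age)` input format), not the repository's actual data-preparation code:

```python
# Hypothetical sketch of selecting a child-adult test pair: keep same-identity
# image pairs whose age gap is at least `gap` years and whose younger image is
# a child. The cutoff age and input format are assumptions for illustration.
def child_adult_pairs(images, gap=20, child_max_age=13):
    """images: list of (identity, filename, age) tuples."""
    pairs = []
    for i, (pid_a, img_a, age_a) in enumerate(images):
        for pid_b, img_b, age_b in images[i + 1:]:
            if pid_a != pid_b:
                continue                                  # only same-identity pairs
            young, old = sorted([(age_a, img_a), (age_b, img_b)])
            if young[0] <= child_max_age and old[0] - young[0] >= gap:
                pairs.append((young[1], old[1]))          # (child image, adult image)
    return pairs

samples = [("id1", "a.jpg", 5), ("id1", "b.jpg", 30), ("id2", "c.jpg", 8)]
print(child_adult_pairs(samples))  # [('a.jpg', 'b.jpg')]
```

With `gap=30` the same rule would yield the C30 variants of the test sets.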
The final structure before training or testing the model should look like this.
train
└ casia
  └ id1
    └ image1.jpg
    └ image2.jpg
    └ ...
  └ id2
    └ image1.jpg
    └ image2.jpg
    └ ...
  └ ...
└ ms1m
  └ id1
    └ image1.jpg
    └ image2.jpg
    └ ...
  └ id2
    └ image1.jpg
    └ image2.jpg
    └ ...
  └ ...
└ age-label
  └ casia-webface.txt
  └ ms1m.txt
test
└ AgeDB-aligned
  └ id1
    └ image1.jpg
    └ image2.jpg
  └ id2
    └ image1.jpg
    └ image2.jpg
  └ ...
└ FGNET-aligned
  └ image1.jpg
  └ image2.jpg
  └ ...
└ LAG-aligned
  └ id1
    └ image1.jpg
    └ image2.jpg
  └ id2
    └ image1.jpg
    └ image2.jpg
  └ ...
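A quick way to confirm the layout above is in place is a small sanity-check script. This is a hypothetical helper (not shipped with the repository) that only checks the fixed top-level paths:

```python
# Hypothetical sanity check that the expected top-level layout exists before
# training or testing. Only the fixed paths from the tree above are checked;
# identity folders and images vary per dataset.
import os

EXPECTED = [
    "train/casia",
    "train/ms1m",
    "train/age-label/casia-webface.txt",
    "train/age-label/ms1m.txt",
    "test/AgeDB-aligned",
    "test/FGNET-aligned",
    "test/LAG-aligned",
]

def check_layout(root="."):
    """Return the list of expected paths missing under `root`."""
    return [p for p in EXPECTED if not os.path.exists(os.path.join(root, p))]

missing = check_layout()
if missing:
    print("missing:", missing)
```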
The following are the checkpoints of the models used in our paper, one for each training dataset.
Trained with CASIA WebFace
Trained with MS1M
CUDA_VISIBLE_DEVICES=0 python3 evaluate.py --model_dir=<test_dir>
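The evaluation on these benchmarks follows the standard face-verification protocol: embed both images of each pair and compare their cosine similarity to a threshold. The following is a hedged numpy sketch of that protocol (the repository's evaluate.py may differ in details such as threshold selection):

```python
# Hedged sketch of the standard pair-verification protocol: normalize the
# embeddings of each image pair, take cosine similarity, threshold it, and
# score against the same/different labels. Illustration only.
import numpy as np

def verify(emb_a, emb_b, labels, threshold=0.5):
    """emb_a, emb_b: (N, D) embeddings; labels: 1 = same identity, 0 = different."""
    a = emb_a / np.linalg.norm(emb_a, axis=1, keepdims=True)
    b = emb_b / np.linalg.norm(emb_b, axis=1, keepdims=True)
    sims = (a * b).sum(axis=1)               # per-pair cosine similarity
    preds = (sims > threshold).astype(int)   # 1 = predicted same identity
    return (preds == labels).mean()          # verification accuracy

a = np.array([[1.0, 0.0], [1.0, 0.0]])
b = np.array([[1.0, 0.1], [0.0, 1.0]])
labels = np.array([1, 0])
print(verify(a, b, labels))  # 1.0
```

In practice the threshold is usually chosen by cross-validation over the test folds rather than fixed in advance.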
Our PyTorch implementation is heavily derived from InsightFace_Pytorch; we thank the authors for their implementation. We also deeply appreciate the age annotations provided by Huang et al. in MTLFace.