# Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition (ECCV 2022)

CVIP Lab, Inha University

## Requirements
- Python (>=3.7)
- PyTorch (>=1.7.1)
- pretrainedmodels (>=0.7.4)
- cvxpy (>=1.1.15)
- Wandb
- Fabulous (terminal color toolkit)
To install all dependencies, run:

```bash
pip install -r requirements.txt
```
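For reference, a `requirements.txt` consistent with the versions listed above might look like the sketch below; the repository ships its own file, so treat this only as an illustration.

```text
# Illustrative only; the repository's actual requirements.txt may differ.
torch>=1.7.1
pretrainedmodels>=0.7.4
cvxpy>=1.1.15
wandb
fabulous
```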
## News
- [22.07.10]: Added source code and demo.
- [22.07.07]: Released the official PyTorch version of AVCE_FER.
## Datasets
- Download the three public benchmarks for training and evaluation (the datasets cannot be uploaded here due to copyright issues; for more details, visit each dataset's website).
- Follow the preprocessing rules for each dataset, referring to the official PyTorch custom dataset tutorial; a minimal sketch follows this list.
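As a reference for the preprocessing step, here is a minimal custom `Dataset` sketch in the style of the PyTorch tutorial. The file layout and CSV column names (`image`, `valence`, `arousal`) are assumptions for illustration, not any benchmark's actual format.

```python
# Minimal VA-labeled face dataset sketch (hypothetical file layout;
# adapt the CSV columns to each benchmark's annotation format).
import csv
import os

import torch
from PIL import Image
from torch.utils.data import Dataset


class VAFaceDataset(Dataset):
    def __init__(self, root, annotation_csv, transform=None):
        self.root = root
        self.transform = transform
        with open(annotation_csv, newline="") as f:
            # Each row: relative image path plus valence/arousal labels.
            self.samples = [
                (r["image"], float(r["valence"]), float(r["arousal"]))
                for r in csv.DictReader(f)
            ]

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        path, valence, arousal = self.samples[idx]
        image = Image.open(os.path.join(self.root, path)).convert("RGB")
        if self.transform is not None:
            image = self.transform(image)
        return image, torch.tensor([valence, arousal], dtype=torch.float32)
```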
## Pretrained weights
- Check the `pretrained_weights` folder.
- Weights are trained on the AFEW-VA dataset.
- Weights for the demo are trained on multiple VA databases (please refer here).
## Training
- Go to `/src`.
- Train AVCE with the command below.
- (Or) execute `run.sh`.

```bash
CUDA_VISIBLE_DEVICES=0 python main.py --freq 250 --model alexnet --online_tracker 1 --data_path <data_path> --save_path <save_path>
```
| Argument | Description |
|---|---|
| `freq` | Parameter saving frequency. |
| `model` | CNN backbone. Choose from `alexnet` and `resnet18`. |
| `online_tracker` | Wandb logging on/off. |
| `data_path` | Path to the facial dataset. |
| `save_path` | Path to save weights. |
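For orientation, the flags above could be parsed as in the following sketch; the defaults, types, and `parse_args` helper are assumptions and may differ from the repository's actual `main.py`.

```python
# Hypothetical argument parsing matching the flags above; the actual
# main.py in the repository may define these differently.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(description="AVCE training")
    parser.add_argument("--freq", type=int, default=250,
                        help="Parameter saving frequency.")
    parser.add_argument("--model", type=str, default="alexnet",
                        choices=["alexnet", "resnet18"],
                        help="CNN backbone.")
    parser.add_argument("--online_tracker", type=int, default=1,
                        help="Wandb on/off (1: on, 0: off).")
    parser.add_argument("--data_path", type=str, required=True,
                        help="Path to the facial dataset.")
    parser.add_argument("--save_path", type=str, required=True,
                        help="Path to save weights.")
    return parser.parse_args()


if __name__ == "__main__":
    print(parse_args())
```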
## Demo
- Go to `/AVCE_demo`.
- Run `main.py`.
- Face detection and arousal-valence (AV) FER functionalities are included.
- Before that, you have to train and save `Encoder.t7` and `FC_layer.t7`; a loading sketch follows this list.
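A minimal sketch of how these checkpoints might be loaded for inference. The `.t7` files are assumed to be ordinary `torch.save` checkpoints of full modules (not Torch7 archives), and the `predict_va` helper is illustrative, not part of the repository.

```python
# Hypothetical demo inference skeleton. The module class definitions must
# be importable for torch.load to restore full modules; predict_va is an
# illustrative helper, not the repository's API.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the encoder and regression head saved during training.
encoder = torch.load("Encoder.t7", map_location=device)
fc_layer = torch.load("FC_layer.t7", map_location=device)
encoder.eval()
fc_layer.eval()


@torch.no_grad()
def predict_va(face_tensor):
    """Return predicted (valence, arousal) for a preprocessed face crop
    of shape (1, 3, H, W)."""
    features = encoder(face_tensor.to(device))
    return fc_layer(features.flatten(1))
```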
## Citation

```bibtex
@inproceedings{kim2022emotion,
  title={Emotion-aware Multi-view Contrastive Learning for Facial Emotion Recognition},
  author={Kim, Daeha and Song, Byung Cheol},
  booktitle={European Conference on Computer Vision},
  pages={178--195},
  year={2022},
  organization={Springer}
}
```
## Contact
If you have any questions, feel free to contact me at kdhht5022@gmail.com.