This repository is the official implementation of 'A Novel Facial Emotion Recognition Model Using Segmentation VGG-19 Architecture'.
To install requirements:

```bash
pip install -r requirements.txt
```
- Clone the repository.
- Download the dataset from this link and place the files in the `fer_data` folder.
- Update the file directory paths in both the config file and `main.py`.
- Run `main.py` to train and evaluate the network.
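The dataset placed in `fer_data` above is FER2013, which is commonly distributed as a CSV where each row holds an emotion label and a 48x48 grayscale face flattened into 2304 space-separated pixel values. The sketch below shows one way such a file could be parsed; the function names and column layout here are illustrative assumptions, not code from this repository (the actual loading lives in `main.py` and the config file).

```python
# Hypothetical FER2013 CSV parsing sketch -- not the repository's own loader.
# Assumes the standard FER2013 columns: "emotion", "pixels", "Usage".
import csv

def parse_pixels(pixel_string, size=48):
    """Convert a space-separated pixel string into a size x size grid."""
    values = [int(v) for v in pixel_string.split()]
    assert len(values) == size * size, "FER2013 faces are 48x48"
    return [values[r * size:(r + 1) * size] for r in range(size)]

def load_fer2013(csv_path):
    """Yield (emotion_label, 48x48 image, usage_split) tuples from the CSV."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            yield int(row["emotion"]), parse_pixels(row["pixels"]), row["Usage"]
```

The `Usage` column ("Training", "PublicTest", "PrivateTest") is what typically drives the train/validation/test split.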
Our model achieves state-of-the-art (SOTA) performance on the FER2013 dataset:
| Model | Top-1 Acc. (%) |
|---|---|
| CNN | 62.44 |
| AlexNet | 63.41 |
| GoogLeNet | 65.20 |
| Human Accuracy | 65 ± 5 |
| Deep Emotion | 70.02 |
| EfficientNet | 70.42 |
| ResNet18 (ARM) | 71.38 |
| Inception | 71.60 |
| Inception-v1 | 71.85 |
| Ad-Corre | 72.03 |
| SE-Net50 | 72.50 |
| Inception-v3 | 72.91 |
| DenseNet-121 | 73.16 |
| ResNet50 | 73.20 |
| CNNs and BOVW + global SVM | 73.25 |
| ResNet152 | 73.27 |
| VGG | 73.28 |
| CBAM ResNet50 | 73.32 |
| ResNet34v2 | 73.65 |
| LHC-NetC | 74.28 |
| LHC-Net | 74.42 |
| CNNs and BOVW + local SVM | 75.42 |
| Segmentation VGG-19 | 75.97 |
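Top-1 accuracy, the metric reported in the table above, is simply the fraction of samples whose highest-scoring class matches the ground-truth label. A minimal sketch (the function and variable names are illustrative, not from this repo):

```python
def top1_accuracy(scores, labels):
    """Percentage of samples whose argmax class equals the true label.

    scores: one list of per-class scores per sample (FER2013 has 7 classes)
    labels: integer ground-truth class index per sample
    """
    correct = sum(
        max(range(len(s)), key=s.__getitem__) == y  # argmax over class scores
        for s, y in zip(scores, labels)
    )
    return 100.0 * correct / len(labels)
```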
For any queries, feel free to contact vignesh.nitt10@gmail.com.
This project is open-sourced under the MIT License.