Hugging Rain Man: A Novel Dataset for Analyzing Facial Action Units in Children with Autism Spectrum Disorder
This repository contains the annotated Action Unit (AU) and Action Descriptor (AD) labels for the HRM dataset, along with pre-trained models for facial action detection and atypical expression regression. The dataset consists of 131,758 frames organized into 1,535 segments. The images themselves are temporarily not publicly available due to privacy and ethical considerations; however, the AU labels and pre-trained models are provided to facilitate research and development in children's facial expression analysis, particularly for Autism Spectrum Disorder (ASD).
- Total Frames: 131,758
- Segments: 1,535
- Action Units and Action Descriptors: 22 AUs + 10 ADs
- Atypical Rating: Annotated by 5 raters
- Facial Expression: Obtained by soft voting over 3 algorithms
- Model: ResNet-50, EmoFAN, ME-GraphAU, MAE-Face, FMAE
- Training Data: HRM dataset
- Selected 22 AUs/ADs for detection: AU1, AU2, AU4, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU16, AU17, AU18, AD19, AU20, AU23, AU24, AU25, AU2X (AU26/27), AU28, AD32, and AU43E.
- Selected 17 AUs for detection: AU1, AU2, AU4, AU6, AU7, AU9, AU10, AU12, AU14, AU15, AU16, AU17, AU20, AU23, AU24, AU25, and AU2X (AU26/27).
- Performance Metrics: Accuracy, F1-Score
- Machine-extracted Features: InsightFace and OpenFace features (5 facial keypoints, head pose, bounding box, etc.); a loading sketch is given below the download links.
- Baidu Cloud (17/22AU Pre-trained Models, Machine-extracted Features and Labels): Download Link, pwd:CCNU
- Mega Cloud (22AU Pre-trained Models and Machine-extracted Features): Download Link
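For orientation, the sketch below shows one way to load the released labels and features. It assumes they are distributed as CSV files with one row per frame; the file names and column names here are placeholders, not the actual release layout.

```python
# Minimal loading sketch (assumed CSV layout; file/column names are placeholders --
# check the downloaded archive for the real structure).
import pandas as pd

labels = pd.read_csv("HRM_labels/AU_labels.csv")              # hypothetical path
features = pd.read_csv("HRM_features/openface_features.csv")  # hypothetical path

print(labels.shape, features.shape)
# Example: per-AU/AD positive rate across all annotated frames.
au_cols = [c for c in labels.columns if c.startswith(("AU", "AD"))]
print(labels[au_cols].mean().sort_values(ascending=False))
```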
We provide a demo for single and batch AU prediction; please refer to the Predict folder. We recommend using the MAE-series model weights for prediction. A standalone sketch is given after the setup steps below.
- Clone the full project from the GitHub repository of the corresponding algorithm.
- Install the required libraries, place the scripts in the project root directory, and download the corresponding weight files to run the tests.
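For orientation, the following is a minimal sketch of single-image AU prediction with a generic backbone. The model class, checkpoint path, and 0.5 threshold are assumptions for illustration, not the repository's actual API; use the scripts in the Predict folder for real results.

```python
# Single-image AU prediction sketch (assumptions: ResNet-50 backbone with a
# 22-way sigmoid head, a checkpoint at FMAE/ckpt/model.pth, 224x224 aligned faces).
import torch
from torchvision import models, transforms
from PIL import Image

AU_NAMES = ["AU1", "AU2", "AU4", "AU6", "AU7", "AU9", "AU10", "AU12", "AU14",
            "AU15", "AU16", "AU17", "AU18", "AD19", "AU20", "AU23", "AU24",
            "AU25", "AU2X", "AU28", "AD32", "AU43E"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(AU_NAMES))
state = torch.load("FMAE/ckpt/model.pth", map_location=device)  # hypothetical checkpoint
model.load_state_dict(state, strict=False)
model.to(device).eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),  # the demo expects 224x224 aligned faces
    transforms.ToTensor(),
])

img = preprocess(Image.open("FMAE/imgs/1.jpg").convert("RGB")).unsqueeze(0).to(device)
with torch.no_grad():
    probs = torch.sigmoid(model(img)).squeeze(0)

active = [name for name, p in zip(AU_NAMES, probs) if p > 0.5]
print("Predicted active AUs/ADs:", active)
```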
For users unfamiliar with environment setup, we offer a user-friendly integrated Windows demo based on the MAE-series models.
Download link:
- Baidu Cloud pwd: CCNU
- Google Drive
Prepare Data and Weights
- Place the images for prediction in the `FMAE/imgs/` folder, for example 1.jpg, 2.jpg, 3.jpg, etc. Please make sure that there is only one face in each image.
- If you are not familiar with how to align faces, we recommend using OpenFace 2.2.0 for face alignment. The official download link is: https://github.com/TadasBaltrusaitis/OpenFace/releases/tag/OpenFace_2.2.0. After downloading, run OpenFaceOffline.exe, check 'Record aligned faces' under the 'Record' menu, and set the output image size to 224x224 under 'Recording settings'. Finally, select your video or images from the 'File' menu. The aligned faces will be saved in the OpenFace/processed directory (in the xxxx_aligned folder). Move the images from this folder to the `FMAE/imgs/` directory; a small copy script is sketched after this step.
- Place the downloaded weight files in the `FMAE/ckpt/` folder.
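If you prefer to script that last move, here is a minimal sketch. The source folder name is an example of OpenFace's xxxx_aligned output and must be adjusted to your own run.

```python
# Copy OpenFace-aligned face crops into the demo's input folder and rename them
# sequentially (1.jpg, 2.jpg, ...), as in the example above.
import glob
import os
from PIL import Image

src_dir = r"OpenFace/processed/myvideo_aligned"  # adjust to your own xxxx_aligned folder
dst_dir = r"FMAE/imgs"
os.makedirs(dst_dir, exist_ok=True)

frames = sorted(p for p in glob.glob(os.path.join(src_dir, "*"))
                if p.lower().endswith((".bmp", ".jpg", ".png")))
for i, path in enumerate(frames, start=1):
    Image.open(path).convert("RGB").save(os.path.join(dst_dir, f"{i}.jpg"))
```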
Run the Demo
- Double-click `run_MAEFACE.bat` to execute.
- The prediction results will be saved in the `FMAE/results/` folder; a sketch for inspecting them follows this step.
- We recommend using a GPU with at least 8GB of VRAM to speed up inference. The default parameters (batch_size=8, num_workers=4, model_weight, etc.) can be modified in HRM_test_batch.py.
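To skim the saved predictions, a sketch like the one below can help. It assumes each results file is a CSV with one row per image and one score column per AU/AD, which may differ from the actual output format of HRM_test_batch.py.

```python
# Summarize demo predictions (assumed layout: CSV with one row per image and
# one probability column per AU/AD; adjust to the actual output files).
import glob
import pandas as pd

for csv_path in glob.glob("FMAE/results/*.csv"):
    df = pd.read_csv(csv_path)
    score_cols = [c for c in df.columns if c.upper().startswith(("AU", "AD"))]
    # Count how often each AU/AD exceeds a 0.5 decision threshold.
    counts = df[score_cols].gt(0.5).sum().sort_values(ascending=False)
    print(csv_path, df.shape)
    print(counts.head(10))
```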
Demo User Guide
- Please refer to Guide.pptx
We also provide an AU annotation tool; to use it, you need to install the PySimpleGUI library in advance. A minimal window sketch is given after the control descriptions below.
- Open data path: Path where the annotated data (.csv) will be saved.
- Confirm: After entering the participant you are currently annotating, clicking this button generates P-X.csv in the specified data path.
- Open current frame: Open the current frame image. This function is optional; you can also use your preferred image viewer to open the frames to be annotated.
- Natural frame: Open the Natural frame image.
- Play backwards X frames: Use OpenCV to start playing from the (current_frame_num - X) frame.
- Clear Checkbox: Clear all checkboxes.
- Submit: Submit the final AU/AD annotations. The number in the Frame input box will automatically increase by 1.
- Participant: Enter the participant name: P1, P2, etc.
- Frame: Open the frame image with the given number. The relative path used to locate the image can be changed as needed:
# Path used to locate frame images; edit it to match your folder layout.
img_path = os.path.join(folder_path + '/origin/' + object_name, f"{current_frame}_0.jpg")
- LRTB: Enter the direction of a unilateral AU. For example, if AU2 is activated on the right side, enter 2 in the R input box.
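For reference, here is a minimal sketch of how such an annotation window can be wired with PySimpleGUI. The layout, keys, AU subset, and CSV columns are illustrative only and do not mirror the tool's actual code.

```python
# Minimal PySimpleGUI annotation window sketch (illustrative only: the real tool
# has more controls, e.g., LRTB boxes, frame playback, and image viewing).
import csv
import os
import PySimpleGUI as sg

AUS = ["AU1", "AU2", "AU4", "AU6", "AU12", "AU25"]  # small subset for brevity

layout = [
    [sg.Text("Participant"), sg.Input("P1", key="participant", size=(8, 1)),
     sg.Text("Frame"), sg.Input("1", key="frame", size=(8, 1))],
    [sg.Checkbox(au, key=au) for au in AUS],
    [sg.Button("Clear Checkbox"), sg.Button("Submit"), sg.Button("Exit")],
]
window = sg.Window("AU Annotation (sketch)", layout)

while True:
    event, values = window.read()
    if event in (sg.WIN_CLOSED, "Exit"):
        break
    if event == "Clear Checkbox":
        for au in AUS:
            window[au].update(False)
    if event == "Submit":
        # Append one row per annotated frame to the participant's CSV file.
        csv_path = f"{values['participant']}.csv"
        is_new = not os.path.exists(csv_path)
        with open(csv_path, "a", newline="") as f:
            writer = csv.writer(f)
            if is_new:
                writer.writerow(["frame"] + AUS)
            writer.writerow([values["frame"]] + [int(values[au]) for au in AUS])
        # Automatically advance the frame number, as the real tool does.
        window["frame"].update(str(int(values["frame"]) + 1))

window.close()
```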
We would like to express our gratitude to the following excellent open-source projects: JAA-Net, EmoFAN, EmoFAN4AU-Detection, ME-GraphAU, MAE-Face, FMAE, EAC, Poster++, and DDAMFN++.
If the data or methods help your research, please cite the following paper:
@article{ji2024hugging,
title={Hugging Rain Man: A Novel Facial Action Units Dataset for Analyzing Atypical Facial Expressions in Children with Autism Spectrum Disorder},
author={Yanfeng Ji and Shutong Wang and Ruyi Xu and Jingying Chen and Xinzhou Jiang and Zhengyu Deng and Yuxuan Quan and Junpeng Liu},
journal={arXiv preprint arXiv:2411.13797},
year={2024}
}