This is a supplementary repository for the paper "Systematic Evaluation of Personalized Deep Learning Models for Affect Recognition".
- Install the necessary packages listed in `requirements.txt` and run `setup.py`.
- Save the data in the `archives` folder.
- Run `ar_dataset_preprocessing.py` for the desired dataset preprocessing. The processed data will be saved in `mts_archive`.
- Run `./datasetnametuning.sh X` in the desired folder (X: id of the GPU).
- Execute `datasetnameresults.py` (a possible command sequence is sketched below).
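For reference, one possible end-to-end invocation, assuming standard `pip`/`python` usage; `datasetname` is a placeholder for a concrete dataset's scripts, and exact arguments may differ, so check each script before running:

```sh
# Install dependencies and the package itself
pip install -r requirements.txt
python setup.py install

# Preprocess the raw data (written to mts_archive)
python ar_dataset_preprocessing.py

# Hyperparameter tuning on GPU 0, then collect the results
./datasetnametuning.sh 0
python datasetnameresults.py
```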
- You need to create a `data` folder manually.
  - The top-level folders contain raw data, and `mts_archive` contains data after each preprocessing step.
- We have to format all datasets into the same structure as the WESAD dataset.
  - In each `Si` folder, there is a `.pkl` file for each participant.
  - In each `.pkl` file, label and sensor signal data are stored in `numpy.array` format (see the loading sketch below).
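A minimal sketch of loading one participant's file, assuming the WESAD-style dictionary layout (`label`, `signal`, `chest`, `ECG`); the path and keys below are illustrative:

```python
import pickle
import numpy as np

# Read one participant's .pkl file in the WESAD-style layout; the path and the
# dictionary keys follow the public WESAD release and are assumptions for the
# other, reformatted datasets.
with open("archives/WESAD/S2/S2.pkl", "rb") as f:
    data = pickle.load(f, encoding="latin1")  # WESAD pickles were written under Python 2

labels = np.asarray(data["label"])                 # per-sample protocol labels
ecg = np.asarray(data["signal"]["chest"]["ECG"])   # one example sensor channel
print(labels.shape, ecg.shape)
```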
- When you run `ar_dataset_preprocessing.py`, the code inside the `arpreprocessing` folder is executed.
- The main files are `datasetname.py`, which perform winsorization, filtering, resampling, normalization, and windowing, and also format the dataset for the deep learning models (a sketch of such a chain follows below).
  - For datasets without user labels, we use `preprocessor.py` and `subject.py`, while for those with labels, `preprocessorlabel.py` and `subjectlabel.py` are used.
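A hedged sketch of a per-channel preprocessing chain of that kind; the filter design, target sampling rate, window length, and winsorization limits below are illustrative placeholders, not the values used in the repository:

```python
import numpy as np
from scipy import signal
from scipy.stats.mstats import winsorize

def preprocess_channel(x, fs_in, fs_out=4.0, win_sec=60, limit=0.01):
    """Winsorize -> low-pass filter -> resample -> z-score -> window (illustrative values)."""
    x = np.asarray(winsorize(x, limits=(limit, limit)), dtype=float)  # clip extreme outliers
    b, a = signal.butter(4, 1.0, btype="low", fs=fs_in)               # 4th-order low-pass at 1 Hz
    x = signal.filtfilt(b, a, x)
    n_out = int(round(len(x) * fs_out / fs_in))                       # resample to fs_out Hz
    x = signal.resample(x, n_out)
    x = (x - x.mean()) / (x.std() + 1e-8)                             # per-channel normalization
    win = int(win_sec * fs_out)                                       # non-overlapping windows
    n_win = len(x) // win
    return x[: n_win * win].reshape(n_win, win)
```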
- Functions in the `multimodal_classifiers` folder are used for model training.
  - For each deep learning structure (i.e., Fully Convolutional Network (FCN), Residual Network (ResNet), and Multi-Layer Perceptron with LSTM (MLP-LSTM)), non-personalized models are implemented (the FCN variant is sketched below).
- For a detailed explanation of the model implementation, please refer to Section 3.3, Non-Personalized Model.
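For orientation, a sketch of an FCN classifier in the style of the dl-4-tsc code this repository builds on; the layer sizes are dl-4-tsc defaults, and the input shape and number of classes are placeholders:

```python
from tensorflow import keras

def build_fcn(input_shape, n_classes):
    """FCN in the dl-4-tsc style: three Conv1D blocks, global average pooling, softmax head."""
    inp = keras.layers.Input(shape=input_shape)  # (timesteps, channels)
    x = inp
    for filters, kernel in [(128, 8), (256, 5), (128, 3)]:
        x = keras.layers.Conv1D(filters, kernel, padding="same")(x)
        x = keras.layers.BatchNormalization()(x)
        x = keras.layers.Activation("relu")(x)
    x = keras.layers.GlobalAveragePooling1D()(x)
    out = keras.layers.Dense(n_classes, activation="softmax")(x)
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```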
- Functions in the `multimodal_classifiers_finetuning` folder are used for model training.
  - For each deep learning structure, personalized models with fine-tuning are implemented (see the sketch below).
- For a detailed explanation of the model implementation, please refer to Section 3.4.1, Unseen User-Dependent, Fine-Tuning part.
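A hedged sketch of unseen-user fine-tuning: a trained generalized model is loaded and trained further on a small amount of the target user's data. Freezing everything except the classification head is an assumption here, not necessarily the recipe in `multimodal_classifiers_finetuning`:

```python
from tensorflow import keras

def finetune_for_user(generalized_model_path, x_user, y_user, epochs=20):
    """Continue training a saved generalized model on one user's calibration data."""
    model = keras.models.load_model(generalized_model_path)
    for layer in model.layers[:-1]:
        layer.trainable = False        # keep the shared feature extractor fixed (assumption)
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    model.fit(x_user, y_user, epochs=epochs, batch_size=16, verbose=0)
    return model
```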
- Functions in the `multimodal_classifiers_hybrid` folder are used for model training.
  - For each deep learning structure, hybrid (partially personalized) models are implemented (see the sketch below).
- For a detailed explanation of the model implementation, please refer to Section 3.4.1, Unseen User-Dependent, Hybrid part.
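One possible reading of "hybrid (partially personalized)" is training a single model on other users' data pooled with a small calibration portion from the target user; the sketch below assumes that reading, so consult `multimodal_classifiers_hybrid` and Section 3.4.1 for the actual scheme:

```python
import numpy as np

def train_hybrid(build_model, x_others, y_others, x_user_cal, y_user_cal):
    """Train one model on pooled other-user data plus target-user calibration data (assumed scheme)."""
    x = np.concatenate([x_others, x_user_cal], axis=0)
    y = np.concatenate([y_others, y_user_cal], axis=0)
    model = build_model(x.shape[1:], y.shape[1])      # e.g. the build_fcn sketch above
    model.fit(x, y, epochs=50, batch_size=32, verbose=0)
    return model
```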
- Functions in the `multimodal_classifiers` and `clustering` folders are used for model training.
  - As explained in Section 3.4.2, Unseen User-Independent, the difference between the generalized model and the cluster-specific personalized model is the data used for training, not the model itself.
  - Therefore, we use the same functions in the `multimodal_classifiers` folder as for the generalized models.
  - Using functions in the `clustering` folder, trait-based clustering is performed, and its result is used for model training (see the sketch below).
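A hedged sketch of that setup: users are clustered by trait features (e.g., questionnaire scores), and one model with the generalized architecture is trained per cluster on that cluster's data only. The clustering algorithm, number of clusters, and trait features are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans

def train_cluster_models(build_model, trait_features, per_user_data, n_clusters=3):
    """trait_features: {user: 1-D trait vector}; per_user_data: {user: (x, y)} windowed data."""
    user_ids = list(per_user_data.keys())
    traits = np.asarray([trait_features[u] for u in user_ids])
    assignments = KMeans(n_clusters=n_clusters, random_state=0).fit_predict(traits)
    models = {}
    for c in range(n_clusters):
        xs = [per_user_data[u][0] for u, a in zip(user_ids, assignments) if a == c]
        ys = [per_user_data[u][1] for u, a in zip(user_ids, assignments) if a == c]
        x, y = np.concatenate(xs), np.concatenate(ys)
        models[c] = build_model(x.shape[1:], y.shape[1])   # same architecture as the generalized model
        models[c].fit(x, y, epochs=50, batch_size=32, verbose=0)
    return dict(zip(user_ids, assignments)), models
```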
- Functions in the `multimodal_classifiers_mtl` and `clustering` folders are used for model training.
  - As explained in Section 3.4.2, Unseen User-Independent, multi-task learning personalized models differ from generalized models in both the data used for training and the model itself.
  - Therefore, we use the functions in the `multimodal_classifiers_mtl` folder.
  - Also, using functions in the `clustering` folder, trait-based clustering is performed for the multi-task learning models (a shared-trunk, multi-head sketch follows below).
The code for the non-personalized models, i.e., the `arpreprocessing`, `GeneralizedModel`, and `multimodal_classifiers` folders, is based on the code provided in the "dl-4-tsc" GitHub repository: https://github.com/Emognition/dl-4-tsc
The datasets used are as follows, and they can be downloaded from the provided links:
- AMIGOS: AMIGOS: A Dataset for Affect, Personality and Mood Research on Individuals and Groups
- ASCERTAIN: ASCERTAIN: Emotion and Personality Recognition Using Commercial Sensors
- CASE: A dataset of continuous affect annotations and physiological signals for emotion analysis
- WESAD: WESAD: Multimodal Dataset for Wearable Stress and Affect Detection
- K-EmoCon: K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
- K-EmoPhone: K-EmoPhone, A Mobile and Wearable Dataset with In-Situ Emotion, Stress, and Attention Labels