The following code can be used to replicate the results of my thesis, "Emotion recognition in a model of visually grounded speech", written in partial fulfilment of the Master's in Data Science and Society at Tilburg University.
The experiments in this thesis were carried out using the code released by Merkx, Frank, and Ernestus (2019) as a reference. Where required, modifications were made to their existing code; the emotional speech classification part was coded by me.
The code uses the pre-trained ResNet-152 network (He et al., 2016), which is freely available in PyTorch.
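For reference, a minimal sketch of loading this network through torchvision is shown below. Stripping the final classification layer to obtain image embeddings mirrors common practice in visually grounded speech models and is an assumption here, not necessarily the exact step used in this repository:

```python
import torch
import torchvision.models as models

# Load the pre-trained ResNet-152 shipped with torchvision.
# On recent torchvision versions, the equivalent call is:
#   models.resnet152(weights=models.ResNet152_Weights.IMAGENET1K_V1)
resnet = models.resnet152(pretrained=True)

# Assumption: drop the final classification layer to use the network
# as a feature extractor producing 2048-dimensional image embeddings.
feature_extractor = torch.nn.Sequential(*list(resnet.children())[:-1])
feature_extractor.eval()

with torch.no_grad():
    dummy = torch.randn(1, 3, 224, 224)   # one 224x224 RGB image
    features = feature_extractor(dummy)   # shape: (1, 2048, 1, 1)
    print(features.flatten(1).shape)      # torch.Size([1, 2048])
```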
The data sources are:
- flickr_audio: https://groups.csail.mit.edu/sls/downloads/flickraudio/
- Flickr8k_Dataset: https://machinelearningmastery.com/develop-a-deep-learning-caption-generation-model-in-python/
- dataset.json: https://cs.stanford.edu/people/karpathy/deepimagesent/
- RAVDESS: https://zenodo.org/record/1188976
- TESS: https://tspace.library.utoronto.ca/handle/1807/24487
- CREMA-D: https://github.com/CheyneyComputerScience/CREMA-D