Neural encoding of acoustic and semantic features during speech and music perception: Matlab to Python code translation
In everyday life, humans are particularly attuned to two types of sound: speech and music. We apply a novel analysis method to shed light on how the brain almost effortlessly uses acoustic features to assign meaning to sounds. To do so, we use an original cross-validated Representational Similarity Analysis (RSA) approach, implemented in Matlab, to estimate the similarity between acoustic or semantic features of an auditory stream (speech, music) and neural activity (here, intracranial EEG recordings decomposed into frequency bands).
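As a rough illustration of the core idea (not the project's actual pipeline), an RSA comparison in Python reduces to building a representational dissimilarity matrix (RDM) for the feature space and for the neural data, then correlating the two; all data and dimensions below are made up:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: n_stimuli sound snippets described by acoustic
# features, plus the matching neural patterns (e.g. band-limited
# power across iEEG channels).
n_stimuli = 40
acoustic = rng.standard_normal((n_stimuli, 20))  # stimuli x features
neural = rng.standard_normal((n_stimuli, 64))    # stimuli x channels

# One RDM per space: condensed vector of pairwise distances
# (the upper triangle of the stimuli x stimuli distance matrix).
rdm_acoustic = pdist(acoustic, metric="cosine")
rdm_neural = pdist(neural, metric="euclidean")

# Similarity between the two representational geometries.
rho, p = spearmanr(rdm_acoustic, rdm_neural)
print(f"RSA correlation: rho={rho:.3f}, p={p:.3f}")
```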
The main goal of this project is to translate the Matlab code into Python:
- task 0: identify Python libraries that can speed up the code-translation effort
- task 1: translate the preliminaries and data massaging
- task 2: translate the temporal folding of the neural signal into Python (sketch below)
- task 3: translate distance-computation metrics (sketch below)
  - sub-task 3.1: translate BLG_CosDistND.m
  - sub-task 3.2: translate BLG_EucDistND.m
- task 4: translate GLM computation and cross-validation stats (sketch below)
  - sub-task 4.1: translate BLG_GLM_ND.m
  - sub-task 4.2: implement CV stats
- task 5: implement native plotting using Python libraries (e.g. Matplotlib, Seaborn); sketch below
- task unassigned: build up documentation pages that can be rendered in "Read the Docs" style (sketch below)
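For task 2, the exact folding scheme is defined by the Matlab code, but cutting a continuous recording into stimulus-locked windows typically reduces to indexing and stacking in NumPy. A minimal sketch with a hypothetical `fold_signal` helper and made-up onsets and window length:

```python
import numpy as np

def fold_signal(signal, onsets, win_len):
    """Cut a (channels x samples) recording into stimulus-locked windows,
    returning a (trials x channels x samples) array.

    Hypothetical helper: the real folding logic must follow the
    original Matlab implementation, not this sketch.
    """
    trials = [signal[:, o:o + win_len] for o in onsets]
    return np.stack(trials)

# Toy example: 64-channel recording, 3 stimulus onsets, 500-sample windows.
rng = np.random.default_rng(1)
ieeg = rng.standard_normal((64, 10_000))
folded = fold_signal(ieeg, onsets=[100, 2_000, 5_000], win_len=500)
print(folded.shape)  # (3, 64, 500)
```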
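For task 3, SciPy already provides vectorized cosine and Euclidean pairwise distances, a natural starting point for BLG_CosDistND.m and BLG_EucDistND.m (their exact N-dimensional semantics should be checked against the Matlab sources):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
X = rng.standard_normal((40, 20))  # observations x features

# Condensed distance vectors: one value per pair of observations.
cos_d = pdist(X, metric="cosine")
euc_d = pdist(X, metric="euclidean")

# Full symmetric distance matrices, if the downstream code expects them.
cos_mat = squareform(cos_d)
euc_mat = squareform(euc_d)
print(cos_mat.shape, euc_mat.shape)  # (40, 40) (40, 40)
```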
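For task 4, scikit-learn covers both the GLM fit and the cross-validation machinery. A minimal sketch on toy data, assuming BLG_GLM_ND.m fits an ordinary least-squares model (to be verified against the Matlab source):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(3)
X = rng.standard_normal((100, 5))  # predictors (e.g. feature distances)
y = X @ rng.standard_normal(5) + 0.1 * rng.standard_normal(100)  # response

# Ordinary least-squares GLM.
glm = LinearRegression().fit(X, y)
print("betas:", glm.coef_)

# Cross-validated goodness of fit (R^2 per fold).
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LinearRegression(), X, y, cv=cv, scoring="r2")
print("mean CV R^2:", scores.mean())
```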
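For task 5, plotting an RDM natively takes a few lines with Seaborn and Matplotlib; a sketch on toy data:

```python
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(4)
rdm = squareform(pdist(rng.standard_normal((30, 10)), metric="cosine"))

# Heatmap of the stimulus x stimulus dissimilarity matrix.
ax = sns.heatmap(rdm, square=True, cmap="viridis",
                 cbar_kws={"label": "cosine distance"})
ax.set(xlabel="stimulus", ylabel="stimulus", title="Toy RDM")
plt.show()
```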
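For the documentation task, the usual Python route to "Read the Docs"-style pages is Sphinx with the sphinx-rtd-theme package; a minimal, hypothetical docs/conf.py (project name and extensions are placeholders):

```python
# docs/conf.py -- minimal Sphinx configuration (hypothetical names).
# "Read the Docs" styling comes from the sphinx-rtd-theme package.
project = "bh22-rsa"  # placeholder project name
extensions = [
    "sphinx.ext.autodoc",   # pull docstrings from the Python modules
    "sphinx.ext.napoleon",  # parse NumPy/Google-style docstrings
]
html_theme = "sphinx_rtd_theme"
```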
- Install mamba
  - If you do not have conda installed on your operating system, install Mambaforge, a wrapper around conda maintained as a community project by conda-forge.
  - If you already have conda installed:

        conda install --channel=conda-forge --name=base mamba
- Download the environment file with a starting set of packages: bh22_environment.yml
- Install the environment for the BrainHack using the file:

      mamba env create -f bh22_environment.yml
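- Once the environment is created, activate it; the environment name is defined by the name: field inside bh22_environment.yml (the name below is only a placeholder):

      conda activate <environment-name>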