Cognition & Computation (UniPD): final project.
Key idea: use Mel-Frequency Cepstral Coefficients (MFCCs) to turn the raw audio clips into a compact, image-like representation that the models below can work with.
Dataset: UrbanSound8K.
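As a minimal sketch of this feature-extraction step, the snippet below loads one UrbanSound8K clip and computes its MFCCs with librosa. The dataset path, the standard `metadata/UrbanSound8K.csv` + `audio/fold*` layout, and the choice of 40 coefficients are assumptions for illustration, not fixed by the project.

```python
# Hedged sketch: MFCC extraction for a single UrbanSound8K clip.
# Assumes librosa, numpy and pandas are installed and the dataset keeps its
# standard layout (metadata/UrbanSound8K.csv, audio/fold1..fold10).
import os
import librosa
import numpy as np
import pandas as pd

DATASET_DIR = "UrbanSound8K"  # hypothetical location of the extracted dataset

metadata = pd.read_csv(os.path.join(DATASET_DIR, "metadata", "UrbanSound8K.csv"))
row = metadata.iloc[0]
wav_path = os.path.join(DATASET_DIR, "audio", f"fold{row.fold}", row.slice_file_name)

# Load the clip (librosa resamples to 22.05 kHz mono by default) and compute 40 MFCCs.
signal, sr = librosa.load(wav_path)
mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=40)

# Averaging over time gives one fixed-length vector per clip, a common way to
# feed variable-length audio into a classifier; keeping the full (40, T) matrix
# instead yields the image-like input used by the CNN.
mfcc_vector = np.mean(mfcc, axis=1)
print(mfcc.shape, mfcc_vector.shape)  # e.g. (40, T) and (40,)
```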
A CNN model that classifies the 10 UrbanSound8K sound classes with fairly high accuracy.
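A minimal Keras sketch of such a 10-class classifier is shown below. The input shape (40 coefficients × 174 frames × 1 channel) and all layer sizes are illustrative assumptions, not the project's exact architecture.

```python
# Hedged sketch: small CNN for 10-class sound classification from MFCC "images".
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10
INPUT_SHAPE = (40, 174, 1)  # hypothetical fixed-size MFCC matrix with a channel axis

model = models.Sequential([
    layers.Input(shape=INPUT_SHAPE),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2)),
    layers.Dropout(0.3),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Training would then look like:
# model.fit(x_train, y_train, validation_data=(x_val, y_val), epochs=50, batch_size=32)
```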
A Convolutional Variational Autoencoder (CVAE) implemented to denoise MFCC vectors and to generate new data samples.
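The sketch below shows one way such a CVAE could be set up in Keras; it is not the project's exact model. The padded input shape (40, 176, 1), the latent dimension, and the choice of adding the KL term inside the sampling layer while using MSE as the reconstruction loss are all assumptions made for illustration.

```python
# Hedged sketch: convolutional VAE over MFCC matrices.
# Input is assumed padded to (40, 176, 1) so two stride-2 convolutions divide evenly.
import tensorflow as tf
from tensorflow.keras import layers, models

INPUT_SHAPE = (40, 176, 1)  # hypothetical padded MFCC shape
LATENT_DIM = 16

class Sampling(layers.Layer):
    """Reparameterisation trick; also adds the KL divergence term to the model loss."""
    def call(self, inputs):
        z_mean, z_log_var = inputs
        kl = -0.5 * tf.reduce_mean(
            tf.reduce_sum(1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1))
        self.add_loss(kl)
        eps = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * eps

# Encoder: MFCC "image" -> latent mean, log-variance, and sampled code.
enc_in = layers.Input(shape=INPUT_SHAPE)
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(enc_in)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
z_mean = layers.Dense(LATENT_DIM)(x)
z_log_var = layers.Dense(LATENT_DIM)(x)
z = Sampling()([z_mean, z_log_var])

# Decoder: latent code -> reconstructed MFCC "image" (linear output, since MFCCs are real-valued).
dec_in = layers.Input(shape=(LATENT_DIM,))
y = layers.Dense(10 * 44 * 64, activation="relu")(dec_in)
y = layers.Reshape((10, 44, 64))(y)
y = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(y)
y = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(y)
dec_out = layers.Conv2DTranspose(1, 3, padding="same")(y)
decoder = models.Model(dec_in, dec_out, name="decoder")

# End-to-end CVAE: MSE reconstruction loss (from compile) + KL term (from the Sampling layer).
# The relative weighting of the two terms typically needs tuning.
cvae = models.Model(enc_in, decoder(z), name="cvae")
cvae.compile(optimizer="adam", loss="mse")

# Denoising: cvae.fit(noisy_mfccs, clean_mfccs, epochs=..., batch_size=...)
# Generation: new_samples = decoder.predict(tf.random.normal((16, LATENT_DIM)))
```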
The project runs in a Jupyter Notebook, either through a local Anaconda installation or on Google Colab. If you run it locally with Anaconda, install the required modules/dependencies first.
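A typical dependency set for a project like this (audio loading plus the usual deep-learning and scientific Python stack) can be installed as below; the exact package list is an assumption, so adjust it to whatever the notebook actually imports. On Google Colab most of these come preinstalled.

```
pip install numpy pandas matplotlib scikit-learn librosa tensorflow
```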