PyTorch port of Google Research's VGGish model for extracting audio features (a usage sketch follows the list below).
Audio search using Azure Cognitive Search
SERVER: Multi-modal Speech Emotion Recognition using Transformer-based and Vision-based Embeddings
Extract audio embeddings from an audio file using Python
Audio encoding, authentication, and validation library for verifying that audio originates from a trusted source
Visualizations of semantic calculus on music using Spotify data and deep embeddings.
Recognise emotion, intensity, and sentiment in speaker utterances directly from the voice signal.
Generate audio embeddings from a pruned L3 model
Audio Deep Learning Project in Java
Re-implementation of Google Research's VGGish model for extracting audio features in PyTorch, with GPU support.
Audio Embeddings using VGGish
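Several of the projects above (the VGGish ports and the "extract audio embeddings from an audio file" repository) perform the same core task: turning raw audio into fixed-size embedding vectors. Below is a minimal sketch of what that typically looks like, assuming the commonly used harritaylor/torchvggish torch.hub entry point; the specific repo name and the forward(filename) signature are assumptions, and other ports listed here may expose a different interface.

```python
# Minimal sketch: extract VGGish audio embeddings with a PyTorch port.
# Assumes the harritaylor/torchvggish torch.hub entry point (an assumption;
# the topic page does not pin a specific port).
import torch

# Load the pretrained VGGish port; weights are converted from the
# TensorFlow checkpoint released by Google Research.
model = torch.hub.load("harritaylor/torchvggish", "vggish")
model.eval()

# The hub model accepts a path to a WAV file and returns one
# 128-dimensional embedding per ~0.96 s frame of audio.
with torch.no_grad():
    embeddings = model.forward("example.wav")  # shape: (num_frames, 128)

print(embeddings.shape)
```

Each ~0.96 s frame of audio yields one 128-dimensional vector, which is the kind of representation the other projects in this list (audio search, speech emotion recognition, music semantic calculus) feed into their own downstream models.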