A Fundamental End-to-End Speech Recognition Toolkit and Open Source SOTA Pretrained Models, Supporting Speech Recognition, Voice Activity Detection, Text Post-processing etc.
Updated Nov 8, 2024 - Python
A PyTorch implementation of the Deep Audio-Visual Speech Recognition paper.
Human Emotion Understanding using multimodal dataset.
Transformer-based online speech recognition system with TensorFlow 2
Code for InterSpeech 2024 Paper: LipGER: Visually-Conditioned Generative Error Correction for Robust Automatic Speech Recognition
End to End Multiview Lip Reading
Kaldi-based audio-visual speech recognition
(SLT 2024) Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition
🤖 📼 Command-line tool for remixing videos with time-coded transcriptions.
Real-time audio-visual speech recognition
An attempt to adapt k2, icefall, and Lhotse for the lip-reading task, with support for additional lip-reading datasets planned.
Code related to the fMRI experiment on the contextual modulation of the McGurk Effect