---



In this session, our readings cover:

## Required Readings:
+ Siddique Latif, Moazzam Shoukat, Fahad Shamshad, Muhammad Usama, Yi Ren, Heriberto Cuayáhuitl, Wenwu Wang, Xulong Zhang, Roberto Togneri, Erik Cambria, Björn W. Schuller
+ This survey paper provides a comprehensive overview of the recent advancements and challenges in applying large language models to the field of audio signal processing. Audio processing, with its diverse signal representations and a wide range of sources--from human voices to musical instruments and environmental sounds--poses challenges distinct from those found in traditional Natural Language Processing scenarios. Nevertheless, *Large Audio Models*, epitomized by transformer-based architectures, have shown marked efficacy in this sphere. By leveraging massive amounts of data, these models have demonstrated prowess in a variety of audio tasks, spanning from Automatic Speech Recognition and Text-To-Speech to Music Generation, among others. Notably, these Foundational Audio Models, like SeamlessM4T, have recently started showing abilities to act as universal translators, supporting multiple speech tasks for up to 100 languages without any reliance on separate task-specific systems. This paper presents an in-depth analysis of state-of-the-art methodologies regarding *Foundational Large Audio Models*, their performance benchmarks, and their applicability to real-world scenarios. We also highlight current limitations and provide insights into potential future research directions in the realm of *Large Audio Models* with the intent to spark further discussion, thereby fostering innovation in the next generation of audio-processing systems. Furthermore, to cope with the rapid development in this area, we will consistently update the relevant repository with relevant recent articles and their open-source implementations at this https URL.

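Since the survey points to SeamlessM4T as a single model that handles speech recognition and translation across many languages, here is a minimal sketch of speech-to-text translation with it. It assumes the Hugging Face `transformers` integration and the `facebook/hf-seamless-m4t-medium` checkpoint; the input file name and other details are illustrative and not part of the assigned reading.

```python
# Minimal sketch: speech-to-text translation with SeamlessM4T via the
# Hugging Face transformers integration (checkpoint choice is illustrative).
import torchaudio
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# Load a local speech clip (hypothetical file) and resample to the 16 kHz
# rate the model expects.
waveform, sample_rate = torchaudio.load("speech.wav")
waveform = torchaudio.functional.resample(waveform, orig_freq=sample_rate, new_freq=16_000)

inputs = processor(audios=waveform, sampling_rate=16_000, return_tensors="pt")

# Translate the speech into English text; generate_speech=False returns
# text tokens instead of a synthesized waveform.
output_tokens = model.generate(**inputs, tgt_lang="eng", generate_speech=False)
text = processor.decode(output_tokens[0].tolist()[0], skip_special_tokens=True)
print(text)
```

Swapping `tgt_lang` selects the output language, which is how one model covers the many translation directions the abstract describes.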

<!--excerpt.start-->

# Blog:
## Pythia: A Suite for Analyzing Large Language Models Across Training and Scaling

This research work aims to address the following questions.
