
Speed Performance #21

Open
pengzhendong opened this issue Oct 21, 2024 · 2 comments

Comments
@pengzhendong

Excellent work. Have you compared the speed of Reverb against Whisper or Canary-1B?

@jprobichaud
Contributor

Thank you! Yes, we did, but it's hard to publish these numbers because so many factors come into play. For example, Whisper comes in several model sizes, and multiple inference implementations are available for it.

The picture also changes if you run inference on CPU instead of GPU.

As a general rule, Reverb has fewer parameters than Whisper medium (and far fewer than Canary-1B), so it is faster in most application configurations, but again, that can change depending on your context.
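If you want a rough number on your own hardware, a minimal real-time-factor (RTF) sketch along these lines can help; it assumes the openai-whisper package and a local `sample.wav` (both placeholders for your own setup), and leaves the Reverb call as a stub since the exact entry point depends on how you run it:

```python
# Minimal RTF benchmarking sketch.
# Assumptions: openai-whisper is installed; "sample.wav" is your own test file;
# the Reverb line at the bottom is a hypothetical placeholder -- substitute the
# actual Reverb inference call for your setup.

import time
import wave

def audio_duration_seconds(path: str) -> float:
    """Duration of a WAV file in seconds."""
    with wave.open(path, "rb") as f:
        return f.getnframes() / f.getframerate()

def benchmark(name: str, transcribe_fn, audio_path: str) -> None:
    """Time one transcription pass and report the real-time factor."""
    duration = audio_duration_seconds(audio_path)
    start = time.perf_counter()
    transcribe_fn(audio_path)
    elapsed = time.perf_counter() - start
    print(f"{name}: {elapsed:.2f}s for {duration:.2f}s of audio "
          f"(RTF = {elapsed / duration:.3f})")

if __name__ == "__main__":
    import whisper  # pip install openai-whisper

    whisper_model = whisper.load_model("medium")
    benchmark("whisper-medium",
              lambda p: whisper_model.transcribe(p),
              "sample.wav")

    # benchmark("reverb", <your Reverb inference call>, "sample.wav")
```

RTF here is processing time divided by audio duration, so lower is faster; run each model on the same file and hardware to get a comparable number.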

Do you have a specific comparison you'd be interested in?

@pengzhendong
Author

I am interested in a comparison like the one presented in the following paper: https://arxiv.org/pdf/2404.09841.

[Screenshot of the relevant comparison table from the paper]
