Should a GPU help this algorithm go faster or no? #19
WhisperHallu uses Whisper or FasterWhisper out of the box, without any modification to them. I don't understand why you didn't get them to use your GPU.
Hey, thanks for the prompt response! Er -- Whisper and FasterWhisper will use the GPU, but what about ffmpeg, demucs, etc. -- are those going to take forever? I had figured that running your algorithm on a GPU would make all the "pre-processing" that prevents the Whisper hallucinations go a lot faster. I'm using an NVIDIA A100 40GB. Here's the log so far:
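On the question of whether ffmpeg and demucs can use the GPU: ffmpeg's audio filters run on the CPU, while demucs is a PyTorch model and accepts a device flag on its command line. A minimal sketch of how one might route the separation step to CUDA — the helper names and the device heuristic are illustrative, not WhisperHallu's actual code:

```python
import shutil

def pick_device():
    # Heuristic: assume CUDA is usable when an NVIDIA driver is visible.
    # (torch.cuda.is_available() is the authoritative check when torch is installed.)
    return "cuda" if shutil.which("nvidia-smi") else "cpu"

def demucs_cmd(audio_path, device):
    # demucs exposes -d/--device; vocal separation is the GPU-heavy step.
    return ["demucs", "-d", device, "--two-stems", "vocals", audio_path]
```

Resampling with ffmpeg stays CPU-bound either way, so on long files the demucs separation is where an A100 would actually pay off.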
Wowza. I got it working.
@jsteinberg-rbi |
@EtienneAb3d The file I was testing with initially was a 4GB file and it would just spin forever. When I switched to a 2GB file, it ran in under 10 minutes :) Question for you: so I ran your script over 30 files last night. Which one of these files has the silence removed?
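On which intermediate file has the silence stripped: WhisperHallu writes several temporary files during pre-processing, and I can't name its exact outputs here, but silence removal of this kind is typically an ffmpeg `silenceremove` pass. A hypothetical command builder — the threshold and filenames are assumptions, not the project's actual settings:

```python
def silence_removal_cmd(src, dst, threshold_db=-50.0):
    # Trim leading silence, then collapse every internal silence
    # quieter than the threshold (stop_periods=-1 means "all of them").
    filt = (
        f"silenceremove=start_periods=1:start_threshold={threshold_db}dB"
        f":stop_periods=-1:stop_threshold={threshold_db}dB"
    )
    return ["ffmpeg", "-y", "-i", src, "-af", filt, dst]
```

Comparing the durations of the intermediate files against the original is a quick way to spot which one had silence removed.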
@jsteinberg-rbi |
So from what I've seen, when the script runs it attempts to use the GPU if one is present, which of course is great. In fact I think it's even the default. For whatever reason it doesn't use the GPU on my NVIDIA A100. I have no issues with running
whisper ... --device cuda
directly -- it works great and reduces the runtime of my transcription by an order of magnitude. I wish I could get the same result with WhisperHallu. What am I missing? Thanks! Let me know if you want any other information from me.
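One thing worth ruling out when `whisper --device cuda` works but WhisperHallu falls back to CPU is that the two may be running in different Python environments (one with a CUDA-enabled torch, one without). A small generic diagnostic — this is not part of WhisperHallu:

```python
import importlib.util

def cuda_visible():
    """Return True only if torch is installed and sees a CUDA device."""
    if importlib.util.find_spec("torch") is None:
        return False
    import torch  # imported lazily so the check also works without torch
    return torch.cuda.is_available()

if __name__ == "__main__":
    print("CUDA visible to torch:", cuda_visible())
```

If this prints False in the environment WhisperHallu runs in, the library will silently fall back to CPU. If the environments match, faster-whisper's `WhisperModel` also accepts an explicit `device="cuda"` argument, which removes the guesswork.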