How to use https://huggingface.co/openai/whisper-large-v3-turbo with whisper.cpp? #11395
-
How to use this safetensors model, https://huggingface.co/openai/whisper-large-v3-turbo, with whisper.cpp? I tried, without success, to convert model.safetensors to ggml with llama.cpp and got "Model not supported":

(.venv) raphy@raohy:~/whisper.cpp/scripts$ ./convert-all.sh ../models/whisper-large-v3-turbo/model.safetensors --outtype f16 --outfile ../models/whisper-large-v3-turbo.ggml
-
You can get it already converted like this:

Originally I found that you can call
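For reference, a minimal sketch of fetching the already-converted weights, assuming you run it from the root of a whisper.cpp checkout (the `large-v3-turbo` model id is the one listed in the download script linked below):

```sh
# Download the pre-converted ggml weights for large-v3-turbo
# (writes models/ggml-large-v3-turbo.bin).
./models/download-ggml-model.sh large-v3-turbo
```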
I'm quite sure that large-v3-turbo is supported as well: https://github.com/ggerganov/whisper.cpp/blob/master/models/download-ggml-model.sh#L55-L57
Maybe you didn't pull the latest state of whisper.cpp at that time?
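For completeness, a sketch of running whisper.cpp against the downloaded file; the binary path assumes a recent CMake build (older builds produce `./main` instead of `build/bin/whisper-cli`), and `samples/jfk.wav` is the sample audio shipped with the repository:

```sh
# Transcribe the bundled sample with the downloaded turbo model.
./build/bin/whisper-cli -m models/ggml-large-v3-turbo.bin -f samples/jfk.wav
```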
Happy you solved your issue!