Status & Features #4
Can it run on the GPU via CUDA, or is it CPU-only inference for now? I believe Rust has CUDA bindings via crates as well, so it might be a possibility in the future if not now.
Never tried with NVIDIA, but I think it should work with CUDA? See line 12 in 1778be2,
then adjust the EP (execution provider) to work with CUDA: https://github.com/lucasjinreal/Kokoros/blob/main/src/onn/ort_base.rs
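For reference, here is a rough sketch of what enabling the CUDA execution provider could look like with the ort crate's 2.x-style builder API. This is an assumption, not the exact code in ort_base.rs; method names and the required `cuda` Cargo feature vary between ort release candidates.

```rust
use ort::{CUDAExecutionProvider, GraphOptimizationLevel, Session};

// Sketch only: assumes the ort 2.x builder API with the crate's `cuda` feature enabled.
fn load_session(model_path: &str) -> ort::Result<Session> {
    Session::builder()?
        // Register CUDA first; ONNX Runtime falls back to CPU if the provider
        // (or a compatible GPU/driver) is not available at runtime.
        .with_execution_providers([CUDAExecutionProvider::default().build()])?
        .with_optimization_level(GraphOptimizationLevel::Level3)?
        .commit_from_file(model_path)
}
```

Exposing this behind something like a `--device cuda` CLI flag (hypothetical name) would fit the "alternative device target" idea mentioned below.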
Yes, CUDA is possible, even though CPU is fast enough. Would you be interested in adding a PR to support the CUDA EP provider as an alternative device target?
video-1737110239209.webm
I can push a PR on that.
@Jerboas86 that would be awesome! We really need a single command that users can run in Docker, with a minimal Docker container size.
Hello, all those interested in Kokoro!
Kokoro is gaining popularity in text-to-speech (TTS) due to its small size and extremely high quality. However, making Kokoro easier to run, especially from compiled languages like Rust and C++, is still an area that has not been fully explored. With a compiled language, Kokoro can run on various platforms such as embedded systems, phones, PCs, and laptops without invoking any scripts. As a result, “Kokoros” was born.
Here are the goals of Kokoros:
The currently implemented features and plans are as follows:
Leave your desired features or contributions below if you are interested.
Some advanced features currently in progress: