Hi @cantabile-kwok ,
I have also implemented UniCATS's vec2wav, but that model is too slow, so I am curious about the inference speed of this model. Actually, I am interested in integrating CTX-vec2wav with a GPT-based AR txt2vec to create a fast prompt-based TTS model.
Also, do you have any plans to release the CTX-txt2vec model anytime soon?
Thanks
From a previous log on GPU, the following speed was reported:
11.94it/s, RTF=0.0106
If your implementation is too slow, may I ask what speed it achieves? The speed above should be acceptable in most regular cases.
CTX-text2vec is a bit harder to open-source, but we will get to it soon (probably this month, though I can't be 100% certain). Please stay tuned if you are interested : )
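For context on the numbers above: the real-time factor (RTF) is conventionally computed as wall-clock synthesis time divided by the duration of the generated audio, so smaller is faster and RTF < 1 means faster than real time. A minimal sketch for comparing your own implementation (the function name and figures here are illustrative, not taken from the repository):

```python
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    """RTF = wall-clock synthesis time / duration of generated audio.

    An RTF of 0.0106 means ~0.0106 s of compute per second of audio,
    i.e. roughly 94x faster than real time.
    """
    return synthesis_seconds / audio_seconds

# Hypothetical measurement: 0.106 s to synthesize 10 s of audio.
rtf = real_time_factor(0.106, 10.0)
print(f"RTF = {rtf:.4f}")  # RTF = 0.0106
```

Timing the same utterance set through both vocoders and comparing RTFs is usually a fairer test than iterations/second, since it/s depends on batch and chunk sizes.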