Slower when CUDA enabled? #4066
Unanswered
Coastchb asked this question in General Q&A
Replies: 1 comment
-
Judging from a single call is not really useful and may show a lot of variation. It is best to load the model once, then synthesize e.g. 100 times and average the timings. Some details on the hardware would also be useful.
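The suggestion above can be sketched as a small benchmark harness: discard a few warm-up calls (the first calls pay one-off costs such as CUDA context initialization and caching), then average many timed calls. This is a minimal sketch; `fake_synthesize` is a stand-in workload, and you would replace it with a call into the loaded model (e.g. a lambda wrapping the Coqui TTS Python API) so the model is loaded once, outside the timed loop.

```python
import time
import statistics

def benchmark(fn, runs=100, warmup=3):
    """Average wall-clock time of fn() over `runs` calls, after `warmup` discarded calls."""
    for _ in range(warmup):
        fn()  # warm-up calls absorb one-off costs (CUDA init, allocator caching, etc.)
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        timings.append(time.perf_counter() - start)
    return statistics.mean(timings), statistics.stdev(timings)

# Stand-in workload for illustration; swap in the real synthesis call,
# e.g. lambda: tts.tts(text, speaker="p362"), after loading the model once.
def fake_synthesize():
    sum(i * i for i in range(10_000))

mean, stdev = benchmark(fake_synthesize, runs=20)
print(f"mean={mean * 1e3:.2f} ms  stdev={stdev * 1e3:.2f} ms")
```

Comparing CPU and GPU this way (same harness, same text) gives a meaningful average instead of a single noisy measurement.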
-
I tested with these three commands:
(1) default:
python3 TTS/bin/synthesize.py --model_name tts_models/en/vctk/vits --text "Thank you for your support, looking forward to continuing our cooperation next time" --speaker_idx p362
(2) with --use_cuda False:
python3 TTS/bin/synthesize.py --model_name tts_models/en/vctk/vits --text "Thank you for your support, looking forward to continuing our cooperation next time" --speaker_idx p362 --use_cuda False
(3) with --use_cuda True:
python3 TTS/bin/synthesize.py --model_name tts_models/en/vctk/vits --text "Thank you for your support, looking forward to continuing our cooperation next time" --speaker_idx p362 --use_cuda True
The results (timing output not reproduced here):

So,
1. Why do (1) and (2) differ in inference time?
2. Why is inference much slower with CUDA enabled?