Replies: 2 comments
-
Hi @Darian461 Thanks for the kind words on V2, I hope to get it fully out of BETA soon! :)

I'm 99% sure that command line is actually a call/setup directly into the Huggingface Transformers trainer, and not actually part of anything within a Coqui script.

Coqui Trainer reference: https://github.com/coqui-ai/Trainer?tab=readme-ov-file#training-with-accelerate
Huggingface Trainer reference: https://huggingface.co/docs/transformers/en/main_classes/trainer

As I understand it, this is a mix of setting the available/visible devices in a system for acceleration. Typically all devices are visible, unless you restrict them down (see "I have multiple GPUs" here: https://github.com/erew123/alltalk_tts?tab=readme-ov-file#startup-performance-and-compatibility-issues).

So the actual command line:
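Presumably something along these lines (the `--script` flag follows the Coqui Trainer README, and the script name here is just a placeholder for whatever finetuning script is being launched):

```bash
python -m trainer.distribute --script finetune.py --gpus "0,1"
```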
This bit, `python -m trainer.distribute`, is the call into the trainer's distribute/launch wrapper, rather than anything in AllTalk.

And this bit, `--gpus '0,1'`, tells it which GPU IDs should be visible to the training processes it spawns.

Finally, the wrapper kicks off one copy of the training script per listed GPU.

Putting all this information together, aka TLDR: In short, there would be no code changes to anything in AllTalk, or any Coqui scripts. You would always have to kick off the finetuning script with `python -m trainer.distribute ...` yourself. So it may be better that I document how people can do this, rather than force it into the start_finetuning batch/shell script, just because I can't say what other impacts this could have on systems that may only have a single GPU, or who knows what with AI trainers, or across the multiple platforms I'm trying to support.

Does that cover off everything you need to know/suggest? Thanks
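To illustrate the general idea of what such a wrapper does (this is just a minimal sketch of the per-GPU launch pattern, not Coqui's actual `trainer.distribute` code; the script name and `RANK` variable are assumptions for illustration):

```python
# Minimal sketch of a per-GPU launcher: spawn one copy of the training
# script per GPU, restricting each copy's visible devices so every
# process sees exactly one card as cuda:0.
import os
import subprocess
import sys

def launch_per_gpu(script: str, gpu_ids: list[str]) -> None:
    """Spawn one training process per GPU id listed in gpu_ids."""
    procs = []
    for rank, gpu in enumerate(gpu_ids):
        env = os.environ.copy()
        env["CUDA_VISIBLE_DEVICES"] = gpu  # restrict this child to one GPU
        env["RANK"] = str(rank)            # hypothetical rank variable for the script
        procs.append(subprocess.Popen([sys.executable, script], env=env))
    for proc in procs:
        proc.wait()                        # wait for all ranks to finish

if __name__ == "__main__":
    # "finetune.py" is a placeholder for the actual finetuning script.
    launch_per_gpu("finetune.py", ["0", "1"])
```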
-
Hey! I just wanted to see if there was any consideration for training on a multi-GPU setup in the V2 Beta.
I was able to run training by doing `python -m trainer.distribute --gpus '0,1'` with a separate training script in the Coqui repo, so training is technically possible with XTTS2.
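The full invocation looked roughly like this (the `--script` flag follows the Coqui Trainer README, and the script name is a placeholder for the training script I used):

```bash
python -m trainer.distribute --script train_gpt_xtts.py --gpus "0,1"
```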
Not exactly sure how it would be implemented in the AllTalk setup, but it would be nice to see.
Outside of that, the V2 beta looks great! Thank you for putting the effort into a project like this.