add mixed precision training support for cyclegan turbo #30
base: main
Conversation
Tried to run this; it fails.
Hi, please try to set the mixed precision to …
Hi @King-HAW, thanks for sharing. I hit `ValueError: Query/Key/Value should either all have the same dtype, or (in the quantized case) Key/Value should have dtype torch.int32. query.dtype: torch.float32`. But I solved this problem when I ran accelerate without …
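For context, that ValueError comes from the attention backend rejecting query/key/value tensors with different dtypes, which can happen when one tensor is produced outside the mixed-precision (autocast) region. A minimal, hypothetical repro and fix (the tensor names and shapes here are illustrative, not taken from the PR):

```python
import torch
import torch.nn.functional as F

# Hypothetical repro: under fp16 training, key/value end up in float16
# while the query stays float32, which attention backends reject with
# "Query/Key/Value should either all have the same dtype".
q = torch.randn(1, 4, 8, dtype=torch.float32)
k = torch.randn(1, 4, 8, dtype=torch.float16)
v = torch.randn(1, 4, 8, dtype=torch.float16)

# Sketch of a fix: cast all three to one common dtype before attention.
# Upcasting to the query's float32 is the safe choice on any backend;
# in a real mixed-precision run you would usually cast down to k.dtype.
common = q.dtype
q, k, v = q.to(common), k.to(common), v.to(common)
out = F.scaled_dot_product_attention(q, k, v)
```

The same idea applies regardless of whether the attention is computed by xformers or by PyTorch's built-in `scaled_dot_product_attention`: the error disappears once all three tensors share a dtype.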
Hi @King-HAW, I have tried your fork, but the out-of-memory error still occurs (I use a 3090 with 24 GB VRAM). Could you please explain how I can fix this error? Thank you so much.
Hi Gaurav,
I've added mixed precision support for training CycleGAN-Turbo, so that unpaired training can run on a 24 GB NVIDIA GPU.
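The core of mixed-precision training can be sketched with PyTorch's `torch.autocast`: the forward pass runs in a lower-precision dtype to cut activation memory, while parameters and optimizer state stay in float32. This is a minimal, self-contained sketch, not the PR's actual code (the model and data here are placeholders, and the real change presumably wires this through accelerate's `mixed_precision` setting):

```python
import torch
from torch import nn

# Placeholder model and batch; the real PR trains CycleGAN-Turbo.
model = nn.Linear(16, 1)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(8, 16)
y = torch.randn(8, 1)

# bfloat16 autocast also runs on CPU, which keeps this sketch portable.
# On CUDA you would more typically use float16 together with a
# torch.cuda.amp.GradScaler to avoid gradient underflow.
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    loss = nn.functional.mse_loss(model(x), y)
loss.backward()
opt.step()
opt.zero_grad()
```

The memory saving comes from storing activations (and, on GPU, the matmul intermediates) in 16-bit, which is what makes the unpaired training fit in 24 GB.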