
Training GPU consumption vs inference GPU consumption #57

Open
duda1202 opened this issue Jun 9, 2023 · 0 comments

duda1202 commented Jun 9, 2023

Hi,

I have been using your work and getting very impressive results, so first of all, thank you for sharing it with the community!

I would like to train this on a lower-grade GPU such as an RTX 3070, which has 8 GB of VRAM, but training currently consumes at least 10 GB, while inference runs easily on my GPU. Are there any optimization strategies for training that would help on lower-grade GPUs? For example, have you tested freezing the decoder during training, and how much did performance drop?
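For reference, below is a minimal PyTorch sketch of two generic memory-saving strategies along these lines: freezing the decoder so no gradients or optimizer state are kept for it, and training with automatic mixed precision. The `TinyEncoder`/`TinyDecoder` classes are stand-in modules for illustration only, not this repo's actual model, so this is a sketch of the general approach rather than a drop-in change.

```python
# Sketch: freeze the decoder and use mixed precision to reduce training memory.
# TinyEncoder/TinyDecoder are hypothetical stand-ins, not the repo's classes.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(128, 64), nn.ReLU())

    def forward(self, x):
        return self.net(x)

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(64, 128)

    def forward(self, x):
        return self.net(x)

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = TinyEncoder()
        self.decoder = TinyDecoder()

    def forward(self, x):
        return self.decoder(self.encoder(x))

device = "cuda" if torch.cuda.is_available() else "cpu"
model = Model().to(device)

# 1) Freeze the decoder: its parameters get no gradients, so gradient buffers
#    and optimizer state for that part are never allocated.
for p in model.decoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

# 2) Automatic mixed precision: lower-precision activations roughly halve
#    activation memory on recent GPUs.
scaler = torch.cuda.amp.GradScaler(enabled=(device == "cuda"))
x = torch.randn(32, 128, device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, enabled=(device == "cuda")):
    loss = nn.functional.mse_loss(model(x), x)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

Other common options in the same spirit are reducing the batch size with gradient accumulation, or enabling gradient checkpointing; how much accuracy you lose from freezing the decoder would need to be verified for this particular model.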
