
12 GB of VRAM, not enough? #9

Open
divid3byzer0 opened this issue Jun 13, 2024 · 1 comment


divid3byzer0 commented Jun 13, 2024

I am trying to run the model on Docker (Docker Desktop, Windows via WSL2) and my card is an RTX 4070 12 GB, but I always see the error "torch.cuda.OutOfMemoryError: Allocation on device", and although the predictions say "succeeded", there are no output files.

I am guessing that the minimum for this model is 16 GB of VRAM?

@ArianaStar

@divid3byzer0 I noticed that this issue is a week old and still open. Allow me to solve it for you.

Simply run this: python main.py --listen 0.0.0.0 --lowvram --preview-method auto --use-split-cross-attention
Instead of this: python main.py --listen 0.0.0.0

If you get "Killed" just before the end of the process...
Use this: --novram
Instead of this: --lowvram
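
Since the original report runs the model through Docker Desktop, here is a minimal sketch of how those flags could be passed when starting the container. The image name and the port mapping are placeholders I am assuming, not taken from this repo; only the flags come from the command above:

# assumed image name and port; adjust to however you built/run the container
docker run --rm --gpus all -p 8188:8188 your-comfyui-image \
  python main.py --listen 0.0.0.0 --lowvram --preview-method auto --use-split-cross-attention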

Hopefully that solves your problem; it solved it for me! Enjoy. ♥
