Inference GPU memory #12

Open
wf1024966 opened this issue Apr 11, 2024 · 1 comment

Comments

@wf1024966

Hi, I want to know: what is the minimum GPU memory required? I ran inference on an RTX 3090 (24 GB), but it failed with a CUDA out-of-memory error. What can I do to reduce GPU memory usage? Thanks.

@thss15fyt
Collaborator

The GPU memory usage is about 38 GB. Note that the two main components run independently, so to save memory you can keep both the Llama and Stable Diffusion models on the CPU and move each one to the GPU only while it is in use: for text-to-panel, move Llama to the GPU; for panel-to-text, move Stable Diffusion to the GPU.
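A minimal sketch of that swap pattern in plain PyTorch, assuming both models are ordinary `nn.Module` objects; `llama`, `diffusion`, and the calls in the usage comment are placeholders, not this repo's actual names:

```python
import contextlib
import torch

@contextlib.contextmanager
def on_gpu(model, device="cuda"):
    """Temporarily move a model to the GPU, returning it to the CPU afterwards."""
    model.to(device)
    try:
        yield model
    finally:
        model.to("cpu")            # return the weights to host memory
        torch.cuda.empty_cache()   # release cached allocations back to the driver

# Usage (illustrative only; stage names follow the comment above):
# with on_gpu(llama) as m:         # text-to-panel stage
#     panel = m.generate(prompt)
# with on_gpu(diffusion) as m:     # panel-to-text stage
#     output = m(panel)
```

With this pattern, only one model's weights occupy VRAM at a time, which is what keeps the peak usage well below the ~38 GB required to hold both at once.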
