cuIpcGetMemHandle triggered CUDA out of memory when I use flexflow on one gpu #1497
Comments
Maybe you can set memory_per_gpu to a larger number, like 20000.
I tried setting memory_per_gpu to 21000, but I still got the out-of-memory error.
Which model did you use? It seems the OOM happens when you launch it.
I did not load any model; I just initialized the FlexFlow backend.
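For context, FlexFlow's per-GPU memory budget is usually passed at initialization. A minimal sketch of what that setup might look like is below; the parameter names (`num_gpus`, `memory_per_gpu`, `zero_copy_memory_per_node`) are assumed from FlexFlow Serve's Python API and the values are illustrative, not a verified fix:

```python
# Hypothetical FlexFlow Serve initialization sketch (parameter names assumed).
# memory_per_gpu is a per-GPU budget in MB; on a 24 GB RTX 3090 it must
# leave headroom for the CUDA context and any other processes on the GPU.
ff_configs = {
    "num_gpus": 1,
    "memory_per_gpu": 14000,             # MB reserved per GPU (assumed unit)
    "zero_copy_memory_per_node": 10000,  # MB of pinned host memory (assumed)
}

# Requires the FlexFlow Docker image / a FlexFlow install with GPU access:
# import flexflow.serve as ff
# ff.init(**ff_configs)
print(ff_configs["memory_per_gpu"])
```

Note that setting `memory_per_gpu` too high can itself trigger an allocation failure at startup, since FlexFlow tries to reserve that budget up front.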
I used the Docker image "flexflow/flexflow-cuda-12.1:latest" to run FlexFlow on a 24 GB RTX 3090, but it produced an out-of-memory error:
Was it because I used the wrong code? How can I fix it?