
fix: unload from RAM as intended; force empty cache #310

Merged 1 commit on Aug 24, 2024
Conversation

tazlin
Member

@tazlin tazlin commented Aug 24, 2024

Prior to this commit, the following erroneous behavior was in play:

- `unload_all_models_vram()` did not effectively remove all models from VRAM
- `unload_all_models_ram()` did not remove any models from RAM
- With AMD specifically, though it's possible more generally as well, the torch cache was not cleared, because clearing it requires passing an argument (`True`) to `_comfy_soft_empty_cache(...)`

I have also added some additional logging to help further identify what is and is not helpful.

@tazlin tazlin added the release:patch Version _._.x label Aug 24, 2024
@tazlin tazlin merged commit a05b763 into releases Aug 24, 2024
1 of 2 checks passed
@tazlin tazlin mentioned this pull request Sep 14, 2024