Feature Idea
Would it be possible to add a setting like “Cache up to X models” in ComfyUI? Only a limited number of models would be kept in memory at once, and when the limit is reached, the system would automatically unload the least recently used model to make room for the newly requested one.
I’m aware that cache-lru exists, but it applies at the node level rather than specifically to model handling, which makes it less effective for controlling VRAM/RAM usage when frequently switching between large models.
A model-focused LRU cache would give users more predictable memory control, reduce OOM errors, and improve workflow efficiency when working with multiple checkpoints, LoRAs, or diffusion models in a single session.
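To illustrate the idea, here is a minimal sketch of what a model-level LRU cache could look like. The `loader` and `unloader` callbacks are hypothetical placeholders, not ComfyUI APIs; the “Cache up to X models” setting would simply map to `max_models`:

```python
from collections import OrderedDict

class ModelLRUCache:
    """Keeps at most `max_models` loaded; evicts the least recently used one."""

    def __init__(self, max_models, loader, unloader):
        self.max_models = max_models
        self.loader = loader          # hypothetical: loads a checkpoint from disk
        self.unloader = unloader      # hypothetical: frees the VRAM/RAM a model holds
        self._cache = OrderedDict()   # model_path -> loaded model, oldest first

    def get(self, model_path):
        if model_path in self._cache:
            # Cache hit: mark as most recently used and reuse the loaded model.
            self._cache.move_to_end(model_path)
            return self._cache[model_path]
        # Cache miss: evict least recently used models until there is room.
        while len(self._cache) >= self.max_models:
            _, evicted = self._cache.popitem(last=False)
            self.unloader(evicted)
        model = self.loader(model_path)
        self._cache[model_path] = model
        return model
```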
Existing Solutions
No response
Other
No response