ComfyUI lets you design and execute advanced stable diffusion pipelines using a graph/nodes/flowchart based interface. Available on Windows, Linux, and macOS.
- Nodes/graph/flowchart interface to experiment and create complex Stable Diffusion workflows without needing to code anything.
- Image Models
- SD1.x, SD2.x
- SDXL, SDXL Turbo
- Stable Cascade
- SD3 and SD3.5
- Pixart Alpha and Sigma
- AuraFlow
- HunyuanDiT
- Flux
- Lumina Image 2.0
- HiDream
- Cosmos Predict2
- Video Models
- Audio Models
- 3D Models
- Asynchronous Queue system
- Many optimizations: only re-executes the parts of the workflow that change between executions.
- Smart memory management: can automatically run models on GPUs with as little as 1GB of VRAM.
- Works even if you don't have a GPU with: `--cpu` (slow)
- Can load ckpt, safetensors and diffusers models/checkpoints. Standalone VAEs and CLIP models.
- Embeddings/Textual inversion
- Loras (regular, locon and loha)
- Hypernetworks
- Loading full workflows (with seeds) from generated PNG, WebP and FLAC files.
- Saving/Loading workflows as Json files.
- Nodes interface can be used to create complex workflows like one for Hires fix or much more advanced ones.
- Area Composition
- Inpainting with both regular and inpainting models.
- ControlNet and T2I-Adapter
- Upscale Models (ESRGAN, ESRGAN variants, SwinIR, Swin2SR, etc...)
- unCLIP Models
- GLIGEN
- Model Merging
- LCM models and Loras
- Latent previews with TAESD
- Starts up very fast.
- Works fully offline: core will never download anything unless you want to.
- Optional API nodes to use paid models from external providers through the online Comfy API.
- Config file to set the search paths for models.
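As a sketch of what such a config can look like (in stock ComfyUI the file is `extra_model_paths.yaml`, created from the bundled `.example` file; the entry name, paths, and sub-folder names below are placeholders — check the example file shipped with your install for the exact keys):

```yaml
# Hypothetical entry pointing ComfyUI at an existing model folder.
my_models:
    base_path: /path/to/models/
    checkpoints: checkpoints/
    vae: vae/
    loras: loras/
```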
Workflow examples can be found on the Examples page.
Only the parts of the graph that have all correct inputs will be executed. If something is missing or incorrect, that part won’t run.
When you run the same graph again, only the parts that have changed will be executed. If nothing has changed, it won’t run again. If you edit a part, only that part and its dependent parts will run.
You can drag a generated PNG file onto the webpage or load one. This will restore the full workflow, including the seeds used during generation.
To change emphasis in a text prompt, use parentheses like this: (example:1.2) to increase importance or (example:0.8) to lower it. Bare parentheses apply the default emphasis of 1.1. To use regular parentheses in your prompt, escape them with `\(` or `\)`.
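For example:

```
(detailed fur:1.2) a photo of a cat
(blurry:0.8) background
an (emphasized) word at the default 1.1 weight
a literal \(note\) kept as plain text
```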
You can create dynamic or wildcard prompts using curly brackets. For example, {red|blue|green} will randomly choose one of the options each time. To use normal curly brackets, escape them with `\{` or `\}`.
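The idea behind wildcard expansion can be sketched in a few lines of Python. This is a minimal illustration of the `{a|b|c}` behavior described above, not ComfyUI's actual implementation:

```python
import random
import re

# Each innermost {a|b|c} group is replaced by one randomly chosen
# option; a "{" preceded by a backslash is not treated as a group
# opener. Working inside-out also handles nested groups.
GROUP = re.compile(r"(?<!\\)\{([^{}]*)\}")

def expand_wildcards(prompt, rng=None):
    rng = rng or random.Random()
    while True:
        match = GROUP.search(prompt)
        if match is None:
            return prompt
        choice = rng.choice(match.group(1).split("|"))
        prompt = prompt[:match.start()] + choice + prompt[match.end():]

print(expand_wildcards("a {red|blue|green} ball"))
```

Each queued prompt picks independently, so the same wildcard prompt can produce a different expansion on every run.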
You can add comments in your prompts using // for single-line comments or /* ... */ for multi-line comments.
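For example:

```
a scenic mountain landscape // ignored until the end of the line
/* these two lines
   are ignored entirely */
golden hour lighting
```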
If you want to use a textual inversion embedding, put the .pt file in the models/embeddings/ folder. Then include it in your prompt like this: embedding:filename (no need to write .pt).
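For instance, with a hypothetical file saved as models/embeddings/my_style.pt:

```
a portrait of a cat, embedding:my_style
```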