A personal collection of ComfyUI workflows, with some parts credited to their original authors and repositories. If you find it helpful, please star the repo to help speed up updates to the collection.
ComfyUI Download: git clone https://github.com/comfyanonymous/ComfyUI.git
ComfyUI Manager Download: cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git
ComfyUI Chinese Translation: cd custom_nodes && git clone https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION.git (then open the page settings, select AGL, and choose Chinese)
- To use a workflow, simply drag its image into the ComfyUI page.
- If you run into problems, update ComfyUI first to see whether that resolves them; if not, submit an issue.
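Besides dragging images in, a running ComfyUI instance also exposes an HTTP API that can queue workflows programmatically. A minimal sketch, assuming the default port 8188 and a workflow exported via "Save (API Format)" (the file name here is hypothetical):

```python
import json
import urllib.request

# Load a workflow previously exported with "Save (API Format)".
with open("workflow_api.json", "r", encoding="utf-8") as f:
    workflow = json.load(f)

# Queue it on the local ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode("utf-8"))  # response includes the queued prompt id
```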
- 1. Basics
- 2. Image Tagging
- 3. Image Upscaling
- 4. Image Segmentation
- 5. Local Repair (Inpainting)
- 6. Image Reference / Style Transfer
- Other Special Workflows
Other Recommended Plugins:
View hardware resources and GPU usage from the ComfyUI page: https://github.com/crystian/ComfyUI-Crystools
- Batch and crop images after loading them
- Use reroute points to extend wiring as far as needed
- Right-click an image node and select the mask editor (MaskEditor) to draw masks, or use images with a transparent (alpha) channel.
- Install the node-reuse plugin: https://github.com/kijai/ComfyUI-KJNodes
- Create a SetNode, which stores the output of any node under a name you assign
- Create a GetNode, which retrieves a SetNode's output anywhere in the workflow via that same name
Includes ordinary image-generation workflows and ControlNet-conditioned image-generation workflows.
Model download: https://www.liblib.art/modelinfo/1fd281cf6bcf01b95033c03b471d8fd8
- Text-to-image (txt2img): generate images from text prompts
- Image-to-image (img2img): encode a reference image and text prompt into latents, then generate a new image from those latents
- When using this workflow, you can swap the checkpoint for any SD1.5 model
- Use the Conditioning (Set Area) node to apply different conditioning to different regions, so each region of the final image follows its own prompt; see the sketch below
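For reference, this is roughly what the Conditioning (Set Area) node looks like inside a workflow saved in API format; the node ids, canvas size, and coordinates below are made up for illustration:

```python
# Hypothetical fragment of an API-format workflow: node "4" restricts the
# conditioning produced by a CLIPTextEncode node "3" to the left half of a
# 1024x512 canvas.
workflow_fragment = {
    "4": {
        "class_type": "ConditioningSetArea",
        "inputs": {
            "conditioning": ["3", 0],  # output 0 of node "3"
            "width": 512,
            "height": 512,
            "x": 0,
            "y": 0,
            "strength": 1.0,
        },
    },
}
```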
- Download the ControlNet weights (this fetches all of the models):
```python
from modelscope import snapshot_download
model_dir = snapshot_download('AI-ModelScope/ControlNet-v1-1', cache_dir='./ComfyUI/models/controlnet/')
print('Installation complete')
```
- Multiple OpenPose inputs can easily be combined to create stable human poses; test images can be obtained at this address
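If you only need one ControlNet (e.g. OpenPose for the pose workflow above), a variant of the snippet above can restrict the download; this assumes your modelscope version supports the allow_file_pattern parameter:

```python
from modelscope import snapshot_download

# Fetch only the OpenPose weights instead of the full ControlNet-v1-1 set.
# allow_file_pattern is assumed to be available in recent modelscope releases.
snapshot_download(
    'AI-ModelScope/ControlNet-v1-1',
    cache_dir='./ComfyUI/models/controlnet/',
    allow_file_pattern='*openpose*',
)
```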
Model download: https://www.liblib.art/modelinfo/506c46c91b294710940bd4b183f3ecd7
- Text-to-image (txt2img): generate images from text prompts
- Image-to-image (img2img): encode a reference image and text prompt into latents, then generate a new image from those latents
- When using this workflow, you can swap the checkpoint for any SDXL model
Turbo model
Lightning model
FLUX model files and the folders they map to:
- A single all-in-one checkpoint that bundles the CLIP and VAE
FLUX dev and schnell do not use negative prompts, so set CFG to 1.0 (i.e., negative prompts are ignored).
- The CLIP and UNet parts are downloaded separately (see the sketch below)
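As an illustration of the split layout, the text encoders go under models/clip while the UNet goes under models/unet. A minimal download sketch; the Hugging Face repo and file names here are assumptions, so verify them on the model pages before use:

```python
from huggingface_hub import hf_hub_download

# Download the two FLUX text encoders into models/clip. Repo and file names
# are assumptions based on common community uploads; the FLUX UNet itself is
# distributed separately (FLUX.1-dev is gated on Hugging Face).
for filename in ["clip_l.safetensors", "t5xxl_fp8_e4m3fn.safetensors"]:
    hf_hub_download(
        repo_id="comfyanonymous/flux_text_encoders",  # assumed repo
        filename=filename,
        local_dir="./ComfyUI/models/clip",
    )
```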
Model download address: https://www.modelscope.cn/models/cutemodel/comfyui-sd3.5-medium
- You can use either a single text encoder or all three text encoders
- Reference model files can be downloaded from https://www.modelscope.cn/models/cutemodel/Resolution-model/files
- Place the model files in the models/upscale_models directory (a download sketch follows)
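Following the same pattern as the ControlNet snippet earlier, a sketch for fetching these reference upscale models with modelscope:

```python
from modelscope import snapshot_download

# Pull the reference upscale models from the repo linked above into
# models/upscale_models (you may need to move files out of the nested
# cache directory afterwards).
snapshot_download('cutemodel/Resolution-model',
                  cache_dir='./ComfyUI/models/upscale_models/')
```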
SAM
Local Redraw (Inpainting)
Local Repair
Face and Eye Repair
Image Expansion (Outpainting)
SD1.5 IPAdapter reference
SDXL IPAdapter reference
FLUX Redux reference
If your local machine lacks the resources, you can use the BizyAir nodes for a zero-local-compute image-generation experience:
https://github.com/siliconflow/BizyAir
FLUX Text-to-Image and Image-to-Image Workflow
Caption Workflow
Thanks to the following authors and their websites for the inspiration:
ComfyUI official examples: https://comfyanonymous.github.io/ComfyUI_examples/