中文版
English

ComfyUI-Workflow-Sanbu

Sanbu's ComfyUI Workflow Collection | 散步的 ComfyUI 工作流合集



A personal collection of ComfyUI workflows, with some parts credited to their original authors and source repositories. If you find it helpful, please star the repo to help speed up updates to the collection.

Quick Start

Download ComfyUI: git clone https://github.com/comfyanonymous/ComfyUI.git

Download ComfyUI Manager: cd custom_nodes && git clone https://github.com/ltdrdata/ComfyUI-Manager.git

Install the ComfyUI Chinese translation: cd custom_nodes && git clone https://github.com/AIGODLIKE/AIGODLIKE-COMFYUI-TRANSLATION.git (then open the page settings, select AGL, and choose Chinese)

  • To use a workflow, simply drag its image onto the ComfyUI page.
  • If you run into problems, update ComfyUI first to see whether that fixes them; if not, submit an issue.

Directory ✨

Other recommended plugins:

View hardware resources and GPU usage from the ComfyUI page: https://github.com/crystian/ComfyUI-Crystools

1. Basic

Basic Operations

Basic Image Operations
  • Perform batch and crop operations on images after loading them
  • Use reroute points to extend wires as far as needed
Mask Basic Operations
  • Right-click the image node and select the mask editor (MaskEditor) to draw masks, or use images with a transparent (alpha) channel.
Node Reuse Setting
  • Install the node reuse plugin: https://github.com/kijai/ComfyUI-KJNodes
  • Create a SetNode to store the output of any node under a name of your choice
  • Create a GetNode to retrieve a SetNode's output anywhere in the graph by that name
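Conceptually, the SetNode/GetNode pair behaves like a named registry: SetNode stores a value under a name, and any GetNode elsewhere in the graph fetches it by that name, replacing long wires with name lookups. A minimal Python sketch of the idea (the function names below are illustrative, not the plugin's actual API):

```python
# Illustrative sketch of the SetNode/GetNode pattern from ComfyUI-KJNodes.
# Not the plugin's real API -- just the registry idea behind it.
_registry = {}

def set_node(name, value):
    """Store an output under a name (like a renamed SetNode)."""
    _registry[name] = value
    return value  # pass-through, so the original wire can continue

def get_node(name):
    """Fetch a stored output by name (like a GetNode)."""
    return _registry[name]

# Usage: set once near the loader, get anywhere downstream.
set_node("model", "sd15_checkpoint")
set_node("latent", [4, 64, 64])
```

The pass-through return mirrors how SetNode can sit inline on an existing wire without breaking it.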

Image Generation

Includes plain image-generation workflows and conditionally controlled (ControlNet) image-generation workflows.

SD1.5

Model download: https://www.liblib.art/modelinfo/1fd281cf6bcf01b95033c03b471d8fd8

SD1.5 Text-to-Image and Image-to-Image Workflow
  • Text-to-image (txt2img): generate images from text prompts
  • Image-to-image (img2img): encode a reference image and text prompt into latents, then generate a new image from those latents
  • When using, replace the checkpoint with any SD1.5 model
SD1.5 Conditional Latent Region Generation
  • Use the Conditioning (Set Area) node to apply different conditions to different regions of the latent, producing a final image that satisfies each region's requirements
SD1.5 ControlNet Multiple Integrations
  • Download the ControlNet weights; the following snippet fetches all the models:

```python
from modelscope import snapshot_download

# Download all ControlNet v1.1 weights into ComfyUI's controlnet folder
model_dir = snapshot_download('AI-ModelScope/ControlNet-v1-1',
                              cache_dir='./ComfyUI/models/controlnet/')
print('Installation complete')
```
SD1.5 ControlNet OpenPose Multiple
  • Easily combine multiple OpenPose inputs to build stable human poses; test images are available at this address
SDXL

Model download: https://www.liblib.art/modelinfo/506c46c91b294710940bd4b183f3ecd7

SDXL Text-to-Image and Image-to-Image Workflow
  • Text-to-image (txt2img): generate images from text prompts
  • Image-to-image (img2img): encode a reference image and text prompt into latents, then generate a new image from those latents
  • When using, replace the checkpoint with any SDXL model
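The txt2img/img2img split used in the SD1.5 and SDXL workflows above comes down to where sampling starts: txt2img denoises from pure noise, while img2img encodes the reference image to a latent, adds partial noise, and denoises from partway through the schedule. A rough sketch of how a denoise-strength parameter maps to the starting step (illustrative only, not ComfyUI's internal code):

```python
def img2img_start_step(total_steps, denoise):
    """img2img skips the early steps of the schedule.
    denoise=1.0 behaves like txt2img (start at step 0, pure noise);
    denoise=0.25 keeps most of the reference image by starting
    75% of the way through the schedule."""
    return int(total_steps * (1.0 - denoise))
```

Lower denoise values preserve more of the reference image's composition; higher values give the prompt more freedom.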

turbo model

lightning model

Flux

The Flux model files map to the following model folders:

| Download URL | Folder |
| --- | --- |
| https://www.modelscope.cn/models/livehouse/flux1-dev-fp8/resolve/master/flux1-dev-fp8.safetensors | checkpoints |
| https://www.modelscope.cn/models/AI-ModelScope/flux-fp8/resolve/master/flux1-dev-fp8-e4m3fn.safetensors | unet |
| https://www.modelscope.cn/models/AI-ModelScope/flux-fp8/resolve/master/flux1-schnell-fp8-e4m3fn.safetensors | unet |
| https://www.modelscope.cn/models/SilentAfr/flux_clip/resolve/master/clip_l.safetensors | clip |
| https://www.modelscope.cn/models/mapjack/Flux_1_fp18/resolve/master/t5xxl_fp8_e4m3fn.safetensors | clip |
| https://www.modelscope.cn/models/AI-ModelScope/FLUX.1-dev/resolve/master/ae.safetensors | vae |
Flux Unified Dev FP8 Text-to-Image Workflow
  • Uses a single all-in-one model that bundles the clip and vae

Flux dev and schnell have no negative prompt, so set CFG to 1.0, which makes the sampler ignore the negative prompt.
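The CFG = 1.0 advice follows directly from the classifier-free guidance formula: the final noise prediction is uncond + cfg * (cond - uncond), so at cfg = 1.0 the unconditional (negative-prompt) term cancels out entirely. A quick numeric check of that cancellation:

```python
def cfg_mix(cond, uncond, scale):
    """Classifier-free guidance: blend conditional and unconditional
    noise predictions. At scale=1.0 the result is exactly `cond`,
    i.e. the negative prompt has no effect -- which is why Flux
    dev/schnell should be run with CFG 1.0."""
    return [u + scale * (c - u) for c, u in zip(cond, uncond)]
```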

Flux Dev FP8 Text-to-Image Workflow
  • The clip and unet parts are downloaded separately
Flux Schnell FP8 Text-to-Image Workflow
SD3.5

Model download address: https://www.modelscope.cn/models/cutemodel/comfyui-sd3.5-medium

SD3.5 FP8 Text-to-Image Workflow
  • Works with either a single text encoder or three text encoders

2. Image Tagging

wd14 Tagger Tagging Workflow
Florence Caption Tagging Workflow
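Both tagging workflows reduce to the same post-processing idea: the model emits a confidence score per tag, and tags above a threshold are joined into a caption string. A hypothetical sketch of that step (the real wd14/Florence output formats differ):

```python
def tags_to_caption(tag_scores, threshold=0.35):
    """Keep tags whose confidence clears the threshold, highest first,
    and join them into a comma-separated caption string -- the common
    post-processing step behind tagger workflows (illustrative only)."""
    kept = sorted(
        (tag for tag, score in tag_scores.items() if score >= threshold),
        key=lambda tag: -tag_scores[tag],
    )
    return ", ".join(kept)
```

Raising the threshold yields shorter, more conservative captions; lowering it captures more detail at the cost of noisy tags.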

3. Image Enlargement

Super Resolution Image Enlargement
Resize Latent Scaling Image Enlargement
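The two approaches above differ in where scaling happens: super-resolution upscales decoded pixels with a dedicated model, while latent scaling resizes the latent before further sampling. A minimal nearest-neighbor resize in pure Python shows the latent-side operation (a sketch, not ComfyUI's implementation):

```python
def upscale_nearest(latent, factor):
    """Nearest-neighbor upscale of a 2D latent channel by an integer
    factor -- the cheapest form of 'Resize Latent' scaling. Real latent
    upscaling usually follows this with a few extra denoising steps
    to clean up the stretched latent."""
    return [
        [row[x // factor] for x in range(len(row) * factor)]
        for row in latent
        for _ in range(factor)
    ]
```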

4. Image Segmentation

ClipSeg Segmentation
  • Using CLIPSeg, you can segment target regions with natural language; it supports outputting both soft- and hard-edged masks
  • You can clone the repository directly into custom_nodes and run it as-is: https://github.com/sanbuphy/ComfyUI-CLIPSEG
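The soft/hard mask distinction is a simple post-processing step: CLIPSeg produces a soft probability map, and thresholding it yields a hard binary mask. A minimal sketch (the threshold value is illustrative):

```python
def to_hard_mask(soft_mask, threshold=0.5):
    """Binarize a soft segmentation map: values at or above the
    threshold become 1 (fully masked), the rest 0. Keeping the soft
    map instead preserves the probabilities for feathered edges."""
    return [[1 if v >= threshold else 0 for v in row] for row in soft_mask]
```

Soft masks blend smoothly into inpainting, while hard masks give crisp cutouts for compositing.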

SAM

5. Local Repair and Expansion

Local Redrawing

Local Repair

Face and Eye Repair

Image Expansion

6. Image Reference / Style Transfer

SD1.5 IPAdapter Reference

SDXL IPAdapter Reference

Flux Redux Reference

Other Special Workflows

BizyAir

If your local compute resources are insufficient, you can use the BizyAir nodes for a zero-local-resource image generation experience:

https://github.com/siliconflow/BizyAir

FLUX Text-to-Image and Image-to-Image Workflow

Caption Workflow

Reference

Thanks to the following authors and websites for inspiration:

ComfyUI Official Examples: https://comfyanonymous.github.io/ComfyUI_examples/