-
I have also noticed this issue and feel that the problem lies with sampling. Even some advanced projects, such as Omost (https://github.com/lllyasviel/Omost/blob/731e74922fc6be91171688574d07624f93d3b658/gradio_app.py#L64), give up on diffusers sampling entirely.
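If anyone wants to experiment on the diffusers side before abandoning its sampling entirely, a minimal sketch is to configure the scheduler to mimic a ComfyUI k-sampler preset such as dpmpp_2m with Karras sigmas. The checkpoint, prompt, step count, and CFG scale below are placeholders, not settings from this thread:

```python
# Minimal sketch: make the diffusers scheduler behave like ComfyUI's
# "dpmpp_2m" sampler with the "karras" schedule.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",  # placeholder checkpoint
    torch_dtype=torch.float16,
).to("cuda")

# DPM++ 2M with Karras sigmas roughly corresponds to dpmpp_2m / karras in ComfyUI.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="dpmsolver++",
)

image = pipe(
    prompt="a photo of an astronaut riding a horse",  # placeholder prompt
    num_inference_steps=30,   # placeholder
    guidance_scale=7.0,       # placeholder
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("diffusers_dpmpp_2m_karras.png")
```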
-
Hi,
I'm encountering an issue when comparing the output quality of ComfyUI and Diffusers. I've noticed that the output of Diffusers is consistently lower quality than ComfyUI's in many cases, despite using the same settings and seed. For the Diffusers baseline, I've used: https://github.com/huggingface/diffusers/blob/main/examples/community/lpw_stable_diffusion_xl.py.
Upon closer inspection, I've identified differences in the scheduler/KSampler between the two code bases. I've also observed variations in the CLIP embeddings between them, but in my experiments this hasn't significantly impacted the output. The main issue seems to lie with the KSampler.
Has anyone else encountered this issue or have any ideas on improving the Scheduler algorithm of Diffusers?
Here are some prompts I've experimented with:
Model: RVXL - Size: (896, 1152)
Positive prompt:
Negative prompt:
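For reference, here is a minimal sketch of the diffusers side of my comparison. The checkpoint path, sampler choice, step count, CFG scale, and seed are placeholders rather than my exact settings:

```python
# Minimal sketch of the diffusers side of the comparison, using the
# lpw_stable_diffusion_xl community pipeline at the listed 896x1152 size.
import torch
from diffusers import DiffusionPipeline, EulerAncestralDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained(
    "path/to/RVXL",                            # placeholder path, diffusers-format checkpoint
    custom_pipeline="lpw_stable_diffusion_xl",
    torch_dtype=torch.float16,
).to("cuda")

# Match the sampler family used in the ComfyUI workflow (euler_ancestral assumed here).
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

# ComfyUI generates its initial noise on the CPU by default, so a CUDA generator
# with the same seed will not reproduce the same latents; a CPU generator gets
# closer, but outputs are still not expected to be bit-identical.
generator = torch.Generator("cpu").manual_seed(1234)  # placeholder seed

image = pipe(
    prompt="<positive prompt from above>",
    negative_prompt="<negative prompt from above>",
    width=896,
    height=1152,
    num_inference_steps=30,   # placeholder
    guidance_scale=7.0,       # placeholder
    generator=generator,
).images[0]
image.save("diffusers_output.png")
```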