-
It seems that you tried to load a model that is not an SDXL model with StableDiffusionXLPipeline. The checkpoint you want to load is actually a Flux transformer-only model, not a full Flux pipeline: it is missing the T5 encoder. You can load the transformer on its own and then pass it to FluxPipeline together with the T5 encoder and the other components from the base model.
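For example, something along these lines should work. This is only a minimal sketch: the local checkpoint path is a placeholder, and it assumes the merge is based on FLUX.1-dev, so black-forest-labs/FLUX.1-dev (a gated repo you need access to) is used to supply the missing text encoders, tokenizers, VAE and scheduler:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel

# Load only the transformer weights from the single-file CivitAI checkpoint
# (the path is an assumption -- point it at your downloaded .safetensors file).
transformer = FluxTransformer2DModel.from_single_file(
    "/path/to/IllustrationJuanerGhibli_v20.safetensors",
    torch_dtype=torch.bfloat16,
)

# Take the remaining components (CLIP + T5 text encoders, tokenizers, VAE,
# scheduler) from the base Flux repo and swap in the merged transformer.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # helps if the full pipeline does not fit in VRAM

image = pipe(
    "a Ghibli-style 2D illustration of a quiet seaside town",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("out.png")
```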
-
Sorry for the noob question. I have been researching this for hours now but have been unable to make much progress.
I am trying to load this checkpoint merge from CivitAI as a Hugging Face model: https://civitai.com/models/989221/illustration-juaner-ghibli-style-2d-illustration-model-flux
Then I want to load that model in a Space and generate an image.
Can you please give me some pointers on how to do this?
I tried the following steps but received errors:
Command: python scripts/convert_flux_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --output_path "/Diffusers_IllustrationJuanerGhibli_v20" --transformer
Error:
Killed: 9
Command: scripts/convert_original_stable_diffusion_to_diffusers.py --checkpoint_path "/IllustrationJuanerGhibli_v20.safetensors" --dump_path "/Diffusers_IllustrationJuanerGhibli_v20" --from_safetensors --device cuda
Error:
Traceback (most recent call last):
File "/diffusers/scripts/convert_original_stable_diffusion_to_diffusers.py", line 160, in <module>
pipe = download_from_original_stable_diffusion_ckpt(
File "/diffusers/src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 1471, in download_from_original_stable_diffusion_ckpt
converted_unet_checkpoint = convert_ldm_unet_checkpoint(
File "/diffusers/src/diffusers/pipelines/stable_diffusion/convert_from_ckpt.py", line 431, in convert_ldm_unet_checkpoint
new_checkpoint["time_embedding.linear_1.weight"] = unet_state_dict["time_embed.0.weight"]
KeyError: 'time_embed.0.weight'
Code:
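(Note: the snippet itself did not survive in this post; judging from the traceback and download log below, it was presumably a single-file pipeline load roughly like the following, where the path/URL and arguments are a guess.)

```python
# Reconstructed guess -- the original snippet was not included in the post.
from diffusers import FluxPipeline

pipe = FluxPipeline.from_single_file(
    "IllustrationJuanerGhibli_v20.safetensors"  # path/URL is an assumption
)
```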
Error:
IllustrationJuanerGhibli_v20.safetensors: 100%|█████████▉| 11.9G/11.9G [00:08<00:00, 1.39GB/s]
model_index.json: 100%|██████████| 536/536 [00:00<00:00, 3.34MB/s]
scheduler/scheduler_config.json: 100%|██████████| 273/273 [00:00<00:00, 1.78MB/s]
text_encoder/config.json: 100%|██████████| 613/613 [00:00<00:00, 1.97MB/s]
text_encoder_2/config.json: 100%|██████████| 782/782 [00:00<00:00, 5.12MB/s]
(…)t_encoder_2/model.safetensors.index.json: 100%|██████████| 19.9k/19.9k [00:00<00:00, 77.5MB/s]
tokenizer/merges.txt: 100%|██████████| 525k/525k [00:00<00:00, 44.5MB/s]
tokenizer/special_tokens_map.json: 100%|██████████| 588/588 [00:00<00:00, 2.30MB/s]
tokenizer/tokenizer_config.json: 100%|██████████| 705/705 [00:00<00:00, 2.79MB/s]
tokenizer/vocab.json: 100%|██████████| 1.06M/1.06M [00:00<00:00, 23.1MB/s]
tokenizer_2/special_tokens_map.json: 100%|██████████| 2.54k/2.54k [00:00<00:00, 15.9MB/s]
spiece.model: 100%|██████████| 792k/792k [00:00<00:00, 186MB/s]
tokenizer_2/tokenizer.json: 100%|██████████| 2.42M/2.42M [00:00<00:00, 29.5MB/s]
tokenizer_2/tokenizer_config.json: 100%|██████████| 20.8k/20.8k [00:00<00:00, 66.7MB/s]
transformer/config.json: 100%|██████████| 378/378 [00:00<00:00, 1.94MB/s]
(…)ion_pytorch_model.safetensors.index.json: 100%|██████████| 121k/121k [00:00<00:00, 90.0MB/s]
vae/config.json: 100%|██████████| 820/820 [00:00<00:00, 4.90MB/s]
Loading pipeline components...: 17%|█▋ | 1/6 [00:00<00:00, 3211.57it/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 495, in from_single_file
loaded_sub_model = load_single_file_sub_model(
File "/usr/local/lib/python3.10/site-packages/diffusers/loaders/single_file.py", line 168, in load_single_file_sub_model
raise SingleFileComponentError(
diffusers.loaders.single_file_utils.SingleFileComponentError: Failed to load CLIPTextModel. Weights for this component appear to be missing in the checkpoint.