Status: Open
Labels: Potential Bug (user is reporting a bug; this should be tested)
## Description

### Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)

### Expected Behavior
No error.

### Actual Behavior
See the debug log in the Debug Logs section below.
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2026-01-04T18:32:38.526641 - Prompt executed in 160.76 seconds
## Attached Workflow
The workflow is too large to attach inline; it must be uploaded manually from the local file system.
## Additional Context
### Steps to Reproduce
1. Load the template `video_wan2_2_14B_fun_inpaint` with 2 images.
2. Make some modifications to the prompt, then execute.
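The debug log below also contains a second, distinct MPS failure (`aten::_linalg_solve_ex` in the uni_pc sampler) whose error message itself suggests a CPU fallback via an environment variable. As a hedged workaround sketch (untested here, slower than native MPS, and not confirmed to help the fp8 TypeError):

```shell
# Possible workaround quoted from PyTorch's own error message for the
# uni_pc 'aten::_linalg_solve_ex' failure: route ops that MPS does not
# implement to the CPU before launching ComfyUI.
export PYTORCH_ENABLE_MPS_FALLBACK=1
# Then start ComfyUI as usual, e.g.:
#   python main.py --listen 127.0.0.1 --port 8000
echo "PYTORCH_ENABLE_MPS_FALLBACK=$PYTORCH_ENABLE_MPS_FALLBACK"
```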
### Debug Logs
```text
# ComfyUI Error Report
## Error Details
- **Node ID:** 96
- **Node Type:** KSamplerAdvanced
- **Exception Type:** TypeError
- **Exception Message:** Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
## Stack Trace
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 516, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 330, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 304, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 292, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1572, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1505, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1178, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1068, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options, force_full_load=force_full_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory, force_full_load=force_full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 704, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 509, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 539, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 981, in partially_load
raise e
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 978, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 777, in load
self.patch_weight_to_device(key, device_to=device_to)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 630, in patch_weight_to_device
temp_weight = convert_func(temp_weight, inplace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 619, in convert_weight
return weight.dequantize()
^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 197, in dequantize
return LAYOUTS[self._layout_type].dequantize(self._qdata, **self._layout_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 434, in dequantize
plain_tensor = torch.ops.aten._to_copy.default(qdata, dtype=orig_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/macpaul/Documents/ComfyUI/.venv/lib/python3.12/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
## System Information
- **ComfyUI Version:** 0.6.0
- **Arguments:** /Applications/ComfyUI.app/Contents/Resources/ComfyUI/main.py --user-directory /Users/macpaul/Documents/ComfyUI/user --input-directory /Users/macpaul/Documents/ComfyUI/input --output-directory /Users/macpaul/Documents/ComfyUI/output --front-end-root /Applications/ComfyUI.app/Contents/Resources/ComfyUI/web_custom_versions/desktop_app --base-directory /Users/macpaul/Documents/ComfyUI --extra-model-paths-config /Users/macpaul/Library/Application Support/ComfyUI/extra_models_config.yaml --log-stdout --listen 127.0.0.1 --port 8000 --enable-manager
- **OS:** darwin
- **Python Version:** 3.12.11 (main, Aug 18 2025, 19:02:39) [Clang 20.1.4 ]
- **Embedded Python:** false
- **PyTorch Version:** 2.5.1
## Devices
- **Name:** mps
- **Type:** mps
- **VRAM Total:** 68719476736
- **VRAM Free:** 47302574080
- **Torch VRAM Total:** 68719476736
- **Torch VRAM Free:** 47302574080
## Logs
2026-01-03T23:28:52.868642 - Adding extra search path custom_nodes /Users/macpaul/Documents/ComfyUI/custom_nodes
2026-01-03T23:28:52.868705 - Adding extra search path download_model_base /Users/macpaul/Documents/ComfyUI/models
2026-01-03T23:28:52.868730 - Adding extra search path checkpoints /Volumes/Transcend/ComfyUI/models/checkpoints
2026-01-03T23:28:52.868744 - Adding extra search path clip /Volumes/Transcend/ComfyUI/models/clip
2026-01-03T23:28:52.868756 - Adding extra search path clip_interrogator /Volumes/Transcend/ComfyUI/models/clip_interrogator
2026-01-03T23:28:52.868767 - Adding extra search path clip_vision /Volumes/Transcend/ComfyUI/models/clip_vision
2026-01-03T23:28:52.868777 - Adding extra search path configs /Volumes/Transcend/ComfyUI/models/configs
2026-01-03T23:28:52.868786 - Adding extra search path controlnet /Volumes/Transcend/ComfyUI/models/controlnet
2026-01-03T23:28:52.868796 - Adding extra search path diffusers /Volumes/Transcend/ComfyUI/models/diffusers
2026-01-03T23:28:52.868805 - Adding extra search path diffusion_models /Volumes/Transcend/ComfyUI/models/diffusion_models
2026-01-03T23:28:52.868814 - Adding extra search path embeddings /Volumes/Transcend/ComfyUI/models/embeddings
2026-01-03T23:28:52.868824 - Adding extra search path gligen /Volumes/Transcend/ComfyUI/models/gligen
2026-01-03T23:28:52.868833 - Adding extra search path hypernetworks /Volumes/Transcend/ComfyUI/models/hypernetworks
2026-01-03T23:28:52.868842 - Adding extra search path LLM /Volumes/Transcend/ComfyUI/models/LLM
2026-01-03T23:28:52.868851 - Adding extra search path llm_gguf /Volumes/Transcend/ComfyUI/models/llm_gguf
2026-01-03T23:28:52.868859 - Adding extra search path loras /Volumes/Transcend/ComfyUI/models/loras
2026-01-03T23:28:52.868868 - Adding extra search path photomaker /Volumes/Transcend/ComfyUI/models/photomaker
2026-01-03T23:28:52.868877 - Adding extra search path style_models /Volumes/Transcend/ComfyUI/models/style_models
2026-01-03T23:28:52.868886 - Adding extra search path unet /Volumes/Transcend/ComfyUI/models/unet
2026-01-03T23:28:52.868895 - Adding extra search path upscale_models /Volumes/Transcend/ComfyUI/models/upscale_models
2026-01-03T23:28:52.868903 - Adding extra search path vae /Volumes/Transcend/ComfyUI/models/vae
2026-01-03T23:28:52.868912 - Adding extra search path vae_approx /Volumes/Transcend/ComfyUI/models/vae_approx
2026-01-03T23:28:52.868921 - Adding extra search path custom_nodes /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes
2026-01-03T23:28:52.868935 - Setting output directory to: /Users/macpaul/Documents/ComfyUI/output
2026-01-03T23:28:52.868951 - Setting input directory to: /Users/macpaul/Documents/ComfyUI/input
2026-01-03T23:28:52.868960 - Setting user directory to: /Users/macpaul/Documents/ComfyUI/user
2026-01-03T23:28:52.996734 - [START] Security scan
2026-01-03T23:28:53.041378 - [ComfyUI-Manager] Using uv as Python module for pip operations.
2026-01-03T23:28:53.116090 - [DONE] Security scan
2026-01-03T23:28:53.117327 - ** ComfyUI startup time: 2026-01-03 23:28:53.117
2026-01-03T23:28:53.117413 - ** Platform: Darwin
2026-01-03T23:28:53.117454 - ** Python version: 3.12.11 (main, Aug 18 2025, 19:02:39) [Clang 20.1.4 ]
2026-01-03T23:28:53.117502 - ** Python executable: /Users/macpaul/Documents/ComfyUI/.venv/bin/python
2026-01-03T23:28:53.117540 - ** ComfyUI Path: /Applications/ComfyUI.app/Contents/Resources/ComfyUI
2026-01-03T23:28:53.117577 - ** ComfyUI Base Folder Path: /Applications/ComfyUI.app/Contents/Resources/ComfyUI
2026-01-03T23:28:53.117619 - ** User directory: /Users/macpaul/Documents/ComfyUI/user
2026-01-03T23:28:53.117655 - ** ComfyUI-Manager config path: /Users/macpaul/Documents/ComfyUI/user/__manager/config.ini
2026-01-03T23:28:53.117697 - ** Log path: /Users/macpaul/Documents/ComfyUI/user/comfyui.log
2026-01-03T23:28:53.177326 - [ComfyUI-Manager] Skipped fixing the 'comfyui-frontend-package' dependency because the ComfyUI is outdated.
2026-01-03T23:28:53.177570 - [PRE] ComfyUI-Manager
2026-01-03T23:28:53.939764 - Checkpoint files will always be loaded safely.
2026-01-03T23:28:53.960354 - Total VRAM 65536 MB, total RAM 65536 MB
2026-01-03T23:28:53.960462 - pytorch version: 2.5.1
2026-01-03T23:28:53.963124 - Mac Version (26, 2)
2026-01-03T23:28:53.963305 - Set vram state to: SHARED
2026-01-03T23:28:53.963358 - Device: mps
2026-01-03T23:28:54.837451 - Using sub quadratic optimization for attention, if you have memory or speed issues try using: --use-split-cross-attention
2026-01-03T23:28:56.513488 - Python version: 3.12.11 (main, Aug 18 2025, 19:02:39) [Clang 20.1.4 ]
2026-01-03T23:28:56.513600 - ComfyUI version: 0.6.0
2026-01-03T23:28:56.515502 - [Prompt Server] web root: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/web_custom_versions/desktop_app
2026-01-03T23:28:56.515646 - [START] ComfyUI-Manager
2026-01-03T23:28:56.682452 - [ComfyUI-Manager] network_mode: public
2026-01-03T23:28:56.684343 - [ComfyUI-Manager] The matrix sharing feature has been disabled because the `matrix-nio` dependency is not installed.
To use this feature, please run the following command:
/Users/macpaul/Documents/ComfyUI/.venv/bin/python -m pip install matrix-nio
2026-01-03T23:29:00.543021 - FETCH ComfyRegistry Data: 5/117
2026-01-03T23:29:01.637612 - Total VRAM 65536 MB, total RAM 65536 MB
2026-01-03T23:29:01.637724 - pytorch version: 2.5.1
2026-01-03T23:29:01.637915 - Mac Version (26, 2)
2026-01-03T23:29:01.638031 - Set vram state to: SHARED
2026-01-03T23:29:01.638065 - Device: mps
2026-01-03T23:29:01.908585 - Import times for custom nodes:
2026-01-03T23:29:01.908693 -    0.0 seconds: /Applications/ComfyUI.app/Contents/Resources/ComfyUI/custom_nodes/websocket_image_save.py
2026-01-03T23:29:02.211677 - Failed to initialize database. Please ensure you have installed the latest requirements. If the error persists, please report this as in future the database will be required: (sqlite3.OperationalError) unable to open database file
(Background on this error at: https://sqlalche.me/e/20/e3q8)
2026-01-03T23:29:02.235378 - Starting server
2026-01-03T23:29:02.235683 - To see the GUI go to: http://127.0.0.1:8000
2026-01-03T23:29:02.950550 - comfyui-frontend-package not found in requirements.txt
2026-01-03T23:29:04.417899 - FETCH ComfyRegistry Data: 10/117
2026-01-03T23:29:08.298416 - FETCH ComfyRegistry Data: 15/117
2026-01-03T23:29:12.015231 - FETCH ComfyRegistry Data: 20/117
2026-01-03T23:29:15.942887 - FETCH ComfyRegistry Data: 25/117
2026-01-03T23:29:19.669462 - FETCH ComfyRegistry Data: 30/117
2026-01-03T23:29:23.419552 - FETCH ComfyRegistry Data: 35/117
2026-01-03T23:29:27.534121 - FETCH ComfyRegistry Data: 40/117
2026-01-03T23:29:31.242019 - FETCH ComfyRegistry Data: 45/117
2026-01-03T23:29:34.993259 - FETCH ComfyRegistry Data: 50/117
2026-01-03T23:29:38.972875 - FETCH ComfyRegistry Data: 55/117
2026-01-03T23:29:42.880420 - FETCH ComfyRegistry Data: 60/117
2026-01-03T23:29:46.732196 - FETCH ComfyRegistry Data: 65/117
2026-01-03T23:29:50.580333 - FETCH ComfyRegistry Data: 70/117
2026-01-03T23:29:54.667935 - FETCH ComfyRegistry Data: 75/117
2026-01-03T23:29:58.489081 - FETCH ComfyRegistry Data: 80/117
2026-01-03T23:30:02.541344 - FETCH ComfyRegistry Data: 85/117
2026-01-03T23:30:06.596466 - FETCH ComfyRegistry Data: 90/117
2026-01-03T23:30:10.422989 - FETCH ComfyRegistry Data: 95/117
2026-01-03T23:30:14.385336 - FETCH ComfyRegistry Data: 100/117
2026-01-03T23:30:18.383722 - FETCH ComfyRegistry Data: 105/117
2026-01-03T23:30:22.086617 - FETCH ComfyRegistry Data: 110/117
2026-01-03T23:30:25.854937 - FETCH ComfyRegistry Data: 115/117
2026-01-03T23:30:27.818205 - FETCH ComfyRegistry Data [DONE]
2026-01-03T23:30:27.911087 - [ComfyUI-Manager] default cache updated: https://api.comfy.org/nodes
2026-01-03T23:30:27.917604 - FETCH DATA from: /Users/macpaul/Documents/ComfyUI/user/__manager/cache/1514988643_custom-node-list.json [DONE]
2026-01-03T23:30:27.934469 - [ComfyUI-Manager] All startup tasks have been completed.
2026-01-03T23:54:46.602420 - got prompt
2026-01-03T23:54:48.074409 - Using split attention in VAE
2026-01-03T23:54:48.075007 - Using split attention in VAE
2026-01-03T23:54:56.606178 - VAE load device: mps, offload device: cpu, dtype: torch.bfloat16
2026-01-03T23:57:56.103494 - Requested to load ZImageTEModel_
2026-01-03T23:57:56.114681 - loaded completely; 95367431640625005117571072.00 MB usable, 7672.25 MB loaded, full load: True
2026-01-03T23:57:56.116017 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2026-01-03T23:57:59.119702 - model weight dtype torch.bfloat16, manual cast: None
2026-01-03T23:57:59.119932 - model_type FLOW
2026-01-04T00:00:42.573041 - unet missing: ['norm_final.weight']
2026-01-04T00:00:42.647597 - Requested to load Lumina2
2026-01-04T00:00:48.270911 - loaded completely; 95367431640625005117571072.00 MB usable, 11739.55 MB loaded, full load: True
2026-01-04T00:01:25.409688 -
100%|██████████| 3/3 [00:37<00:00, 12.33s/it]
100%|██████████| 3/3 [00:37<00:00, 12.35s/it]
2026-01-04T00:01:25.417087 - Requested to load AutoencodingEngine
2026-01-04T00:01:25.777641 - loaded completely; 95367431640625005117571072.00 MB usable, 159.87 MB loaded, full load: True
2026-01-04T00:01:28.003095 - Prompt executed in 400.53 seconds
2026-01-04T01:22:42.290155 - got prompt
2026-01-04T01:23:21.604495 -
100%|██████████| 3/3 [00:36<00:00, 12.18s/it]
100%|██████████| 3/3 [00:36<00:00, 12.19s/it]
2026-01-04T01:23:23.552409 - Prompt executed in 41.25 seconds
2026-01-04T01:24:35.586324 - got prompt
2026-01-04T01:25:12.811246 -
100%|██████████| 3/3 [00:36<00:00, 12.13s/it]
100%|██████████| 3/3 [00:36<00:00, 12.10s/it]
2026-01-04T01:25:14.690762 - Prompt executed in 39.10 seconds
2026-01-04T01:25:32.292488 - got prompt
2026-01-04T01:27:20.254593 -
100%|██████████| 9/9 [01:47<00:00, 11.96s/it]
100%|██████████| 9/9 [01:47<00:00, 11.99s/it]
2026-01-04T01:27:22.093729 - Prompt executed in 109.80 seconds
2026-01-04T01:28:34.039843 - got prompt
2026-01-04T01:30:28.052986 -
100%|██████████| 9/9 [01:53<00:00, 12.54s/it]
100%|██████████| 9/9 [01:53<00:00, 12.56s/it]
2026-01-04T01:30:29.900008 - Prompt executed in 115.86 seconds
2026-01-04T16:17:31.948383 - got prompt
2026-01-04T16:17:32.291565 - Using split attention in VAE
2026-01-04T16:17:32.292298 - Using split attention in VAE
2026-01-04T16:17:35.441292 - VAE load device: mps, offload device: cpu, dtype: torch.bfloat16
2026-01-04T16:17:35.592283 - Requested to load WanVAE
2026-01-04T16:17:35.965261 - loaded completely; 95367431640625005117571072.00 MB usable, 242.03 MB loaded, full load: True
2026-01-04T16:17:43.725493 - Found quantization metadata version 1
2026-01-04T16:17:43.726217 - Using MixedPrecisionOps for text encoder
2026-01-04T16:17:58.988537 - Requested to load QwenImageTEModel_
2026-01-04T16:17:59.020074 - loaded completely; 95367431640625005117571072.00 MB usable, 7910.29 MB loaded, full load: True
2026-01-04T16:17:59.025502 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2026-01-04T16:19:48.245649 - model weight dtype torch.bfloat16, manual cast: None
2026-01-04T16:19:48.245989 - model_type FLUX
2026-01-04T16:23:36.791894 - Interrupting prompt c086e108-22e4-4408-8ada-c38d66696e35
2026-01-04T16:23:48.386653 - Interrupting prompt c086e108-22e4-4408-8ada-c38d66696e35
2026-01-04T16:24:39.711548 - Processing interrupted
2026-01-04T16:24:39.712885 - Prompt executed in 427.74 seconds
2026-01-04T16:25:25.708695 - got prompt
2026-01-04T16:25:27.713854 - Using split attention in VAE
2026-01-04T16:25:27.714495 - Using split attention in VAE
2026-01-04T16:25:48.614209 - VAE load device: mps, offload device: cpu, dtype: torch.bfloat16
2026-01-04T16:25:49.319685 - Found quantization metadata version 1
2026-01-04T16:25:49.321655 - Using MixedPrecisionOps for text encoder
2026-01-04T16:25:51.252386 - Interrupting prompt e07e3c6e-95e6-4ef7-bb30-7ba1e4716454
2026-01-04T16:25:53.605549 - Interrupting prompt e07e3c6e-95e6-4ef7-bb30-7ba1e4716454
2026-01-04T16:25:55.460605 - Interrupting prompt e07e3c6e-95e6-4ef7-bb30-7ba1e4716454
2026-01-04T16:25:56.292028 - Interrupting prompt e07e3c6e-95e6-4ef7-bb30-7ba1e4716454
2026-01-04T16:25:58.834201 - Interrupting prompt e07e3c6e-95e6-4ef7-bb30-7ba1e4716454
2026-01-04T16:26:12.773760 - Requested to load WanTEModel
2026-01-04T16:26:12.788685 - loaded completely; 95367431640625005117571072.00 MB usable, 6419.49 MB loaded, full load: True
2026-01-04T16:26:12.790913 - CLIP/text encoder model load device: cpu, offload device: cpu, current: cpu, dtype: torch.float16
2026-01-04T16:26:12.791223 - Processing interrupted
2026-01-04T16:26:12.791830 - Prompt executed in 47.08 seconds
2026-01-04T16:29:34.494023 - got prompt
2026-01-04T16:31:05.853811 - Requested to load WanVAE
2026-01-04T16:31:06.152880 - loaded completely; 95367431640625005117571072.00 MB usable, 1344.09 MB loaded, full load: True
2026-01-04T16:32:59.943532 - model weight dtype torch.float16, manual cast: None
2026-01-04T16:32:59.945370 - model_type FLOW
2026-01-04T16:35:44.011450 - Requested to load WAN22
2026-01-04T16:35:45.733894 - loaded completely; 95367431640625005117571072.00 MB usable, 9538.84 MB loaded, full load: True
2026-01-04T16:37:05.577398 -
10%|█ | 2/20 [01:19<11:46, 39.27s/it]
10%|█ | 2/20 [01:20<12:02, 40.11s/it]
2026-01-04T16:37:06.676315 - !!! Exception during processing !!! The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
2026-01-04T16:37:06.680142 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 516, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 330, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 304, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 292, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1538, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1505, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1178, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1068, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 994, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 980, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 752, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 868, in sample_unipc
x = uni_pc.sample(noise, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True, callback=callback, disable_pbar=disable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 722, in sample
x, model_x = self.multistep_uni_pc_update(x, model_prev_list, t_prev_list, vec_t, init_order, use_corrector=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 472, in multistep_uni_pc_update
return self.multistep_uni_pc_bh_update(x, model_prev_list, t_prev_list, t, order, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 653, in multistep_uni_pc_bh_update
rhos_c = torch.linalg.solve(R, b)
^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
2026-01-04T16:37:06.681849 - Prompt executed in 452.18 seconds
2026-01-04T17:17:40.744204 - got prompt
2026-01-04T17:17:40.962744 - Using split attention in VAE
2026-01-04T17:17:40.963411 - Using split attention in VAE
2026-01-04T17:17:44.108961 - VAE load device: mps, offload device: cpu, dtype: torch.bfloat16
2026-01-04T17:17:50.106879 - Requested to load WanVAE
2026-01-04T17:17:50.456009 - loaded completely; 95367431640625005117571072.00 MB usable, 242.03 MB loaded, full load: True
2026-01-04T17:18:51.970893 - Found quantization metadata version 1
2026-01-04T17:18:51.971643 - Detected mixed precision quantization
2026-01-04T17:18:51.972443 - Using mixed precision operations
2026-01-04T17:18:51.987938 - model weight dtype torch.float16, manual cast: torch.float16
2026-01-04T17:18:51.988179 - model_type FLOW
2026-01-04T17:18:52.038966 - unet unexpected: ['scaled_fp8']
2026-01-04T17:18:56.877491 - Requested to load WAN21
2026-01-04T17:18:58.483075 - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2026-01-04T17:18:58.488453 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 516, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 330, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 304, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 292, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1572, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1505, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1178, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1068, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options, force_full_load=force_full_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory, force_full_load=force_full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 704, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 509, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 539, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 981, in partially_load
raise e
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 978, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 777, in load
self.patch_weight_to_device(key, device_to=device_to)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 630, in patch_weight_to_device
temp_weight = convert_func(temp_weight, inplace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 619, in convert_weight
return weight.dequantize()
^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 197, in dequantize
return LAYOUTS[self._layout_type].dequantize(self._qdata, **self._layout_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 434, in dequantize
plain_tensor = torch.ops.aten._to_copy.default(qdata, dtype=orig_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/macpaul/Documents/ComfyUI/.venv/lib/python3.12/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2026-01-04T17:18:58.490341 - Prompt executed in 77.73 seconds
2026-01-04T18:07:27.122285 - got prompt
2026-01-04T18:07:37.138487 - model weight dtype torch.float16, manual cast: None
2026-01-04T18:07:37.138723 - model_type FLOW
2026-01-04T18:08:12.314947 - Requested to load WAN21
2026-01-04T18:08:12.856932 - loaded completely; 95367431640625005117571072.00 MB usable, 2706.18 MB loaded, full load: True
2026-01-04T18:09:09.737644 -
7%|▋ | 2/30 [00:56<13:14, 28.38s/it]2026-01-04T18:09:10.670020 -
7%|▋ | 2/30 [00:57<13:28, 28.89s/it]2026-01-04T18:09:10.670056 -
2026-01-04T18:09:10.672507 - !!! Exception during processing !!! The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
2026-01-04T18:09:10.673603 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 516, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 330, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 304, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 292, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1538, in sample
return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1505, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1178, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1068, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 994, in outer_sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 980, in inner_sample
samples = executor.execute(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 752, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 868, in sample_unipc
x = uni_pc.sample(noise, timesteps=timesteps, skip_type="time_uniform", method="multistep", order=order, lower_order_final=True, callback=callback, disable_pbar=disable)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 722, in sample
x, model_x = self.multistep_uni_pc_update(x, model_prev_list, t_prev_list, vec_t, init_order, use_corrector=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 472, in multistep_uni_pc_update
return self.multistep_uni_pc_bh_update(x, model_prev_list, t_prev_list, t, order, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/extra_samplers/uni_pc.py", line 653, in multistep_uni_pc_bh_update
rhos_c = torch.linalg.solve(R, b)
^^^^^^^^^^^^^^^^^^^^^^^^
NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764. As a temporary fix, you can set the environment variable `PYTORCH_ENABLE_MPS_FALLBACK=1` to use the CPU as a fallback for this op. WARNING: this will be slower than running natively on MPS.
2026-01-04T18:09:10.675118 - Prompt executed in 99.52 seconds
2026-01-04T18:29:05.129034 - got prompt
2026-01-04T18:29:11.683639 - Requested to load WanVAE
2026-01-04T18:29:11.836287 - loaded completely; 95367431640625005117571072.00 MB usable, 242.03 MB loaded, full load: True
2026-01-04T18:29:17.219353 - Interrupting prompt 9d14df88-b05c-4222-9fb6-9d3a063e3d80
2026-01-04T18:29:17.484540 - Processing interrupted
2026-01-04T18:29:17.485200 - Prompt executed in 12.35 seconds
2026-01-04T18:29:57.757871 - got prompt
2026-01-04T18:32:33.632779 - Found quantization metadata version 1
2026-01-04T18:32:33.633446 - Detected mixed precision quantization
2026-01-04T18:32:33.635154 - Using mixed precision operations
2026-01-04T18:32:33.646099 - model weight dtype torch.float16, manual cast: torch.float16
2026-01-04T18:32:33.646432 - model_type FLOW
2026-01-04T18:32:33.721579 - unet unexpected: ['scaled_fp8']
2026-01-04T18:32:37.052930 - Requested to load WAN21
2026-01-04T18:32:37.383168 - 0 models unloaded.
2026-01-04T18:32:38.484157 - !!! Exception during processing !!! Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2026-01-04T18:32:38.515160 - Traceback (most recent call last):
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 516, in execute
output_data, output_ui, has_subgraph, has_pending_tasks = await get_output_data(prompt_id, unique_id, obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 330, in get_output_data
return_values = await _async_map_node_over_list(prompt_id, unique_id, obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb, v3_data=v3_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 304, in _async_map_node_over_list
await process_inputs(input_dict, i)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/execution.py", line 292, in process_inputs
result = f(**inputs)
^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1572, in sample
return common_ksampler(model, noise_seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise, disable_noise=disable_noise, start_step=start_at_step, last_step=end_at_step, force_full_denoise=force_full_denoise)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/nodes.py", line 1505, in common_ksampler
samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sample.py", line 60, in sample
samples = sampler.sample(noise, positive, negative, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1178, in sample
return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1068, in sample
return cfg_guider.sample(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 1050, in sample
output = executor.execute(noise, latent_image, sampler, sigmas, denoise_mask, callback, disable_pbar, seed, latent_shapes=latent_shapes)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/samplers.py", line 984, in outer_sample
self.inner_model, self.conds, self.loaded_models = comfy.sampler_helpers.prepare_sampling(self.model_patcher, noise.shape, self.conds, self.model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 130, in prepare_sampling
return executor.execute(model, noise_shape, conds, model_options=model_options, force_full_load=force_full_load)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/patcher_extension.py", line 112, in execute
return self.original(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/sampler_helpers.py", line 138, in _prepare_sampling
comfy.model_management.load_models_gpu([model] + models, memory_required=memory_required + inference_memory, minimum_memory_required=minimum_memory_required + inference_memory, force_full_load=force_full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 704, in load_models_gpu
loaded_model.model_load(lowvram_model_memory, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 509, in model_load
self.model_use_more_vram(use_more_vram, force_patch_weights=force_patch_weights)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_management.py", line 539, in model_use_more_vram
return self.model.partially_load(self.device, extra_memory, force_patch_weights=force_patch_weights)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 981, in partially_load
raise e
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 978, in partially_load
self.load(device_to, lowvram_model_memory=current_used + extra_memory, force_patch_weights=force_patch_weights, full_load=full_load)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 777, in load
self.patch_weight_to_device(key, device_to=device_to)
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/model_patcher.py", line 630, in patch_weight_to_device
temp_weight = convert_func(temp_weight, inplace=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/ops.py", line 619, in convert_weight
return weight.dequantize()
^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 197, in dequantize
return LAYOUTS[self._layout_type].dequantize(self._qdata, **self._layout_params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Applications/ComfyUI.app/Contents/Resources/ComfyUI/comfy/quant_ops.py", line 434, in dequantize
plain_tensor = torch.ops.aten._to_copy.default(qdata, dtype=orig_dtype)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/macpaul/Documents/ComfyUI/.venv/lib/python3.12/site-packages/torch/_ops.py", line 716, in __call__
return self._op(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Trying to convert Float8_e4m3fn to the MPS backend but it does not have support for that dtype.
2026-01-04T18:32:38.526641 - Prompt executed in 160.76 seconds
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
Workflow too large. Please manually upload the workflow from local file system.
## Additional Context
(Please add any additional context or steps to reproduce the error here)
```
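For the second failure in the log (`NotImplementedError: The operator 'aten::_linalg_solve_ex.result' is not currently implemented for the MPS device`), the PyTorch error message itself suggests a temporary CPU-fallback workaround. A minimal sketch of applying it before launching ComfyUI (the `python main.py` launch line is an example and depends on how ComfyUI is installed; note this does not address the separate `Float8_e4m3fn` dtype error, which is an MPS backend limitation for fp8-quantized weights):

```shell
# Enable CPU fallback for ops not yet implemented on the MPS backend.
# WARNING (per the PyTorch message): fallback ops run slower than native MPS.
export PYTORCH_ENABLE_MPS_FALLBACK=1

# Launch ComfyUI from the same shell so it inherits the variable
# (example invocation; adjust to your install):
# python main.py
```

This only works around the `torch.linalg.solve` call in the uni_pc sampler; the `TypeError: Trying to convert Float8_e4m3fn to the MPS backend` error would still require a non-fp8 model variant or fp8 support in the MPS backend.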