
ComfyUI: handle errors on load and various bugfixes


ComfyUI will spit out lots of errors on load, especially after updates. Upgrade a Python package? Error! Update a custom node? Error! Update the core? Errors! Here are a few of the errors I came across and how I fixed them.

ComfyUI Load Errors

dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn’t come with acceleration providers

E:\GPT\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

Error discussed here

This one is risky to fix: it involves re-installing onnxruntime and onnxruntime-gpu, gigabytes of downloads, and you will need a working MSVC / Visual Studio 2022 environment.

pip uninstall onnxruntime onnxruntime-gpu
pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/ --no-cache-dir
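Once it finishes, a quick sanity check (run from the same venv) to confirm the CUDA provider actually shows up:

python -c "import onnxruntime; print(onnxruntime.get_available_providers())"
# CUDAExecutionProvider should appear in the list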

 

FutureWarning: Using TRANSFORMERS_CACHE is deprecated

transformers\utils\hub.py:124: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.

Add this to your ComfyUI launch batch file: set HF_HOME=%stableROOT%\.cache\huggingface, with stableROOT=<path>\sd.webui\webui
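A minimal sketch of what the launch batch could look like (paths are just examples based on the install layout shown in this post, adjust to yours):

rem set the Hugging Face cache before starting ComfyUI
set stableROOT=E:\GPT\sd.webui\webui
set HF_HOME=%stableROOT%\.cache\huggingface
python main.py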

Cannot import efficiency-nodes-comfyui module for custom nodes: name ‘latent_versions_updated’ is not defined

Cannot import efficiency-nodes-comfyui module for custom nodes: name 'latent_versions_updated' is not defined
AttributeError: 'Block' object has no attribute 'drop_path'

This error is discussed here, but resolving it was not easy.

First I tried updating the requirements as recommended:

vi custom_nodes\comfyui_controlnet_aux\requirements.txt
# pin timm in requirements.txt:
timm==0.6.7
pip install timm==0.6.7

# then re-apply the node's requirements
pip install --upgrade -r custom_nodes\comfyui_controlnet_aux\requirements.txt
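To double-check which timm version actually ended up installed:

pip show timm
# Version: 0.6.7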

But then other errors started popping up:

model.onnx: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 326M/326M [00:24<00:00, 13.4MB/s]
selected_tags.csv: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 254k/254k [00:00<00:00, 1.77MB/s]
EP Error D:\a\_work\1\s\onnxruntime\core\session\provider_bridge_ort.cc:1131 onnxruntime::ProviderLibrary::Get [ONNXRuntimeError] : 1 : FAIL : LoadLibrary failed with error 126 "" when trying to load "E:\GPT\stable-diffusion-webui\venv\lib\site-packages\onnxruntime\capi\onnxruntime_providers_tensorrt.dll"
 when using ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider']
Falling back to ['CUDAExecutionProvider', 'CPUExecutionProvider'] and retrying.
outdoors, sky, day, cloud, water, tree, no_humans, window, ocean, cloudy_sky, plant, building, scenery, city, watercraft, bridge, boat

The onnxruntime error is discussed here, and also here.

So you check the onnxruntime version and the CUDA version reported by nvidia-smi:

# required dlls:
pip show onnxruntime
# 1.15
nvidia-smi
# 12.0

But I have no idea how I solved it; maybe just reinstalling Comfy did the trick.

Update error: ComfyUI-Allor

Allor errors are discussed here and here

What I did to fix it:

pushd custom_nodes\ComfyUI-Allor\
git fetch origin main
git reset --hard origin/main
git pull
git config --global --add safe.directory E:/GPT/ComfyUI/custom_nodes/ComfyUI-post-processing-nodes

[comfy_mtb] | WARNING -> Found multiple match

FETCH DATA from: E:\GPT\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json
[comfy_mtb] | WARNING -> Found multiple match, we will pick the first E:\GPT\ComfyUI\models\upscale_models
['E:\\GPT\\ComfyUI\\models\\upscale_models', '../stable-diffusion-webui/models/ESRGAN', '../stable-diffusion-webui/models/GFPGAN', '../stable-diffusion-webui/models/RealESRGAN', '../stable-diffusion-webui/models/SwinIR']

Simply edit extra_model_paths.yaml and remove the upscale duplicates:

#Rename extra_model_paths.yaml.example to extra_model_paths.yaml and ComfyUI will load it
# config for a1111 ui
# all you have to do is change the base_path to where yours is installed
#your base path should be either an existing comfy install or a central folder where you store all of your models, loras, etc.

comfyui:
    base_path: ./

    checkpoints: |
      models/checkpoints
    configs: |
      models/checkpoints
    vae: |
      models/vae
      models/vae_approx
    loras: |
      models/loras

    # [comfy_mtb] | WARNING -> Found multiple match, we will pick the first E:\GPT\ComfyUI\models\upscale_models
    # upscale_models: |
      # models/upscale_models/ESRGAN
      # models/upscale_models/GFPGAN
      # models/upscale_models/RealESRGAN
      # models/upscale_models/SwinIR
    clip: |
      models/clip
    clip_vision: |
      models/clip_vision
    hypernetworks: |
      models/hypernetworks
    controlnet: |
      models/ControlNet
    embeddings: |
      models/embeddings

a111:
    base_path: ../sd.webui/webui/

    checkpoints: |
      models/Stable-diffusion
    configs: |
      models/Stable-diffusion
    vae: |
      models/vae
      models/vae_approx
    loras: |
      models/Lora
      models/LyCORIS
    upscale_models: |
      models/ESRGAN
      models/GFPGAN
      models/RealESRGAN
      models/SwinIR
    clip: |
      models/clip
    clip_vision: |
      models/clip_vision
    hypernetworks: |
      models/hypernetworks
    controlnet: |
      models/ControlNet
    gligen: |
      models/gligen
    embeddings: |
      embeddings

WAS Node Suite Warning: ffmpeg_bin_path is not set

WAS Node Suite Warning: ffmpeg_bin_path is not set in E:\GPT\ComfyUI\custom_nodes\was-node-suite-comfyui\was_suite_config.json config file. Will attempt to use system ffmpeg binaries if available.

As long as ffmpeg is in your PATH you can ignore this warning, or edit custom_nodes\was-node-suite-comfyui\was_suite_config.json and set ffmpeg_bin_path as the message suggests.
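To verify ffmpeg really is reachable from your PATH:

where ffmpeg
# prints the full path(s) if found, fails otherwise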

WARNING: Ignoring invalid distribution -rotobuf

WARNING: Ignoring invalid distribution -rotobuf (e:\gpt\stable-diffusion-webui\venv\lib\site-packages)

Try deleting the venv folder completely and letting it reinstall from scratch; they probably updated torch, so you need to reinstall everything anyway.
If that doesn't work, try doing this before the git pull:

pushd stable-diffusion-webui\venv\Lib\site-packages\
git stash
git pull
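A lighter alternative that often clears this specific warning (an assumption on my part, I did not need it here): pip prints "Ignoring invalid distribution -rotobuf" when an interrupted install leaves an orphaned ~rotobuf folder behind in site-packages, so you can just delete it:

# list orphaned "~..." folders left by interrupted pip installs
dir /b "e:\gpt\stable-diffusion-webui\venv\lib\site-packages\~*"
# then remove the offender
rmdir /s /q "e:\gpt\stable-diffusion-webui\venv\lib\site-packages\~rotobuf"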

When loading the graph, the following node types were not found

When loading the graph, the following node types were not found:
InsightFaceLoader
PrepImageForInsightFace
IPAdapterApply
IPAdapterApplyFaceID

But they were installed… This happened after I left Comfy asleep for 3 months, then ran an update.

Solution: reinstall sd.webui from scratch; the previous A1111 installer's venv had become incompatible.

! No module named ‘gray2color’

A classic missing pip module: the custom node's author apparently doesn't know about requirements.txt.

Solution: pip install gray2color simpleeval
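If ComfyUI still can't see the module afterwards, make sure you installed it into the python that actually launches Comfy (the standalone build ships its own interpreter; the path below is inferred from the tracebacks in this post, adjust to your install):

E:\GPT\sd.webui\system\python\python.exe -m pip install gray2color simpleeval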

WARNING ⚠️ user config directory ‘sd.webui\webui\.cache\AppData\Roaming\Ultralytics’ is not writeable

### Loading: ComfyUI-Impact-Pack (V5.3.1)
WARNING ⚠️ user config directory 'E:\GPT\sd.webui\webui\.cache\AppData\Roaming\Ultralytics' is not writeable, 
defaulting to '/tmp' or CWD.Alternatively you can define a YOLO_CONFIG_DIR environment variable for this path.

Solution: md sd.webui\webui\.cache\AppData\Roaming\Ultralytics
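Alternatively, as the warning itself suggests, point Ultralytics at any writable folder through YOLO_CONFIG_DIR in your launch batch (the path below is just an example):

set YOLO_CONFIG_DIR=E:\GPT\sd.webui\webui\.cache\Ultralytics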

open-clip-torch 2.20.0 requires protobuf<4, but you have protobuf 5.26.1 which is incompatible

Downloading onnx-1.16.0-cp310-cp310-win_amd6[!] ERROR: pip's dependency resolver does not currently take into account 
all the packages that are installed. This behaviour is the source of the following dependency conflicts.
 Successfully installed onnx-1.16.0 protobuf-5.26.1
[!] open-clip-torch 2.20.0 requires protobuf<4, but you have protobuf 5.26.1 which is incompatible.

Solution: upgrade everything
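In practice, "upgrade everything" means letting pip re-resolve the conflicting pins. One way to do it, assuming a newer open-clip-torch release has relaxed its protobuf<4 pin, then verify:

pip install --upgrade open-clip-torch
pip check
# pip check reports any remaining version conflicts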

Cmd(‘git’) failed due to: exit code(1)

This happens all the time since my latest Comfy upgrade. EVERY node I install or upgrade via the Node Manager errors out. The installs themselves are fine; it's like the git command is timing out or something.

No idea how to fix that.
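If you want to see what git is actually complaining about, run the pull by hand inside the offending node's folder (the folder below is just an example) and read the full error:

git -C custom_nodes\comfyui_controlnet_aux pull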

dwpose.py:26: UserWarning: DWPose: Onnxruntime not found

[comfyui_controlnet_aux] | INFO -> Using ckpts path: E:\GPT\ComfyUI\custom_nodes\comfyui_controlnet_aux\ckpts
[comfyui_controlnet_aux] | INFO -> Using symlinks: False
[comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
E:\GPT\ComfyUI\custom_nodes\comfyui_controlnet_aux\node_wrappers\dwpose.py:26: UserWarning: DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly
  warnings.warn("DWPose: Onnxruntime not found or doesn't come with acceleration providers, switch to OpenCV with CPU device. DWPose might run very slowly")

I believe a node update took care of it

Error. No styles.csv found

Error. No styles.csv found. Put your styles.csv in E:\GPT\ComfyUI\custom_nodes\ComfyUI_hus_utils\styles.csv. Then press "Refresh".

Solution:

cd custom_nodes\ComfyUI_hus_utils
copy "example styles.csv" styles.csv

raise AssertionError(“Torch not compiled with CUDA enabled”)

Prestartup times for custom nodes:
   0.0 seconds: E:\GPT\ComfyUI\custom_nodes\rgthree-comfy
   0.0 seconds: E:\GPT\ComfyUI\custom_nodes\comfyui-deploy
   0.2 seconds: E:\GPT\ComfyUI\custom_nodes\ComfyUI-Manager

Traceback (most recent call last):
  File "E:\GPT\ComfyUI\main.py", line 76, in <module>
    import execution
  File "E:\GPT\ComfyUI\execution.py", line 11, in <module>
    import nodes
  File "E:\GPT\ComfyUI\nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "E:\GPT\ComfyUI\comfy\diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "E:\GPT\ComfyUI\comfy\sd.py", line 5, in <module>
    from comfy import model_management
  File "E:\GPT\ComfyUI\comfy\model_management.py", line 119, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "E:\GPT\ComfyUI\comfy\model_management.py", line 88, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "E:\GPT\sd.webui\system\python\lib\site-packages\torch\cuda\__init__.py", line 778, in current_device
    _lazy_init()
  File "E:\GPT\sd.webui\system\python\lib\site-packages\torch\cuda\__init__.py", line 284, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled

Almost certainly a version mismatch introduced by the last update, as discussed here.

Solution: pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
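After reinstalling, a quick check that the CUDA build is really the one being picked up (run it with the same python that launches Comfy):

python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
# expect something like 2.x.x+cu121 12.1 True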

NameError: name ‘CLIPTemperaturePatch’ is not defined

Traceback (most recent call last):
  File "E:\GPT\ComfyUI\nodes.py", line 1864, in load_custom_node
    module_spec.loader.exec_module(module)
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "E:\GPT\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG\__init__.py", line 19, in <module>
    "Temperature settings CLIP": CLIPTemperaturePatch,
NameError: name 'CLIPTemperaturePatch' is not defined
Cannot import E:\GPT\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG module for custom nodes: name 'CLIPTemperaturePatch' is not defined
(IMPORT FAILED): E:\GPT\ComfyUI\custom_nodes\ComfyUI-AutomaticCFG

Identified and solved here: edit the __init__.py of the node as explained here

Or just wait for a patch

clip missing: [‘clip_l.logit_scale’, ‘clip_l.transformer.text_projection.weight’]

This error is discussed here; it seemed to be caused by a Comfy update.

clip missing: ['clip_l.logit_scale', 'clip_l.transformer.text_projection.weight']
Loading 1 new model
C:\Users\heruv\ComfyUI\comfy\ldm\modules\attention.py:345: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:263.)
out = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=mask, dropout_p=0.0, is_causal=False)

Solution: edit comfy\supported_models.py and make the pop_keys list empty (or wait for a patch):

def process_clip_state_dict_for_saving(self, state_dict):
    # pop_keys = ["clip_l.transformer.text_projection.weight", "clip_l.logit_scale"]
    pop_keys = []
    for p in pop_keys:
        if p in state_dict:
            state_dict.pop(p)

 

comfyui-mixlab-nodes: ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host

FETCH DATA from: E:\GPT\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json [DONE]
Exception in callback _ProactorBasePipeTransport._call_connection_lost(None)
handle: <Handle _ProactorBasePipeTransport._call_connection_lost(None)>
Traceback (most recent call last):
File "asyncio\events.py", line 80, in _run
File "asyncio\proactor_events.py", line 162, in _call_connection_lost
ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host
#read_workflow_json_files_all E:\GPT\ComfyUI\custom_nodes\comfyui-mixlab-nodes\app\

This is a known issue linked to Comfy itself, discussed here

Solution: ignore it. It happens when you refresh the browser; just restart Comfy instead.

 

WARNING: shape mismatch when trying to apply embedding

This happens when using SD1.5-era embeddings with an SDXL checkpoint (768 vs 1280 dimensions):

embedding:bad_prompt_version2.pt, embedding:ng_deepnegative_v1_75t.pt, embedding:bad-hands-5.pt, sunbeam, embedding:easynegative, watermark, signature,
WARNING: shape mismatch when trying to apply embedding, embedding will be ignored 768 1280

Solution: you must use XL negatives instead (A: standard, B: realistic, C: anime-like), for example:

embedding:peopleneg, embedding:unaestheticXL_AYv1, embedding:bad-XL.pt, embedding:negativeXL_A, watermark, signature,

the following node types were not found: IPAdapterApply

This happened when trying to load ComfyUICharacterCreator from r/ComfyUI. Nodes evolve so fast… The IPAdapter_plus errors are discussed here.

Solution 1: use the old IPAdapter.

Solution 2: just put in the work and update the workflow to use the new nodes, dammit.

 
