
BNK_CLIPTextEncodeSDXLAdvanced The expanded size of the tensor (154) must match the existing size (77) at non-singleton dimension 1. Target sizes: [-1, 154, -1]. Tensor sizes: [1, 77, 1280] #36

Open
@ferret99gt

Description

Somewhat similar to #5; it can be demonstrated with the same prompt:

(RAW photo:1.2), wide angle, perfect composition, 1 woman(upper body selfie, happy), masterpiece, best quality, ultra-detailed, solo, outdoors, (night), mountains, nature, (stars, moon) cheerful, happy, backpack, sleeping bag, camping stove, water bottle, mountain boots, gloves, sweater, hat, flashlight, forest, rocks, river, wood, smoke, shadows, contrast, clear sky, analog style (look at viewer:1.2) (skin texture) (film grain:1.3), (warm hue, warm tone), close up, cinematic light, sidelighting, ultra high res, best shadow, RAW, upper body, wearing pullover, (masterpiece, 8k, absurdres, best quality, intricate), realistic, raytracing, dramatic light,

Putting this prompt into CLIP Text Encode (Advanced) works.
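
As an aside (my assumption, not something verified against ComfyUI's own tokenizer): the relevant property of the prompt appears to be its length. Even a plain CLIP tokenizer puts it well past the 75 tokens that fit in a single 77-token chunk, so text_l presumably gets split into two chunks while a short or empty text_g stays at one. A rough sketch:

    # Sketch using the Hugging Face CLIP tokenizer; ComfyUI's own tokenizer and
    # prompt-weight syntax will give somewhat different counts, but the order of
    # magnitude is the point.
    from transformers import CLIPTokenizer

    prompt = "..."  # paste the full prompt from above here

    tok = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")
    ids = tok(prompt)["input_ids"]
    print(len(ids))  # well over 77 for the prompt above, so CLIP-L needs two chunks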

Putting the same prompt into CLIP Text Encode SDXL (Advanced), with or without any text_g, results in the following error:

ComfyUI Error Report

Error Details

  • Node ID: 12
  • Node Type: BNK_CLIPTextEncodeSDXLAdvanced
  • Exception Type: RuntimeError
  • Exception Message: The expanded size of the tensor (154) must match the existing size (77) at non-singleton dimension 1. Target sizes: [-1, 154, -1]. Tensor sizes: [1, 77, 1280]

Stack Trace

  File "..\ComfyUI_windows_portable\ComfyUI\execution.py", line 327, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "..\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "..\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
    process_inputs(input_dict, i)

  File "..\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
    results.append(getattr(obj, func)(**inputs))
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "..\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb\nodes.py", line 100, in encode
    embeddings_final, pooled = advanced_encode_XL(clip, text_l, text_g, token_normalization, weight_interpretation, w_max=1.0, clip_balance=balance, apply_to_pooled=affect_pooled == "enable")
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "..\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI_ADV_CLIP_emb\adv_encode.py", line 294, in advanced_encode_XL
    return prepareXL(embs_l.expand((-1,repeat_l,-1)), embs_g.expand((-1,repeat_g,-1)), pooled, clip_balance)
                                                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
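
Reading the trace, advanced_encode_XL ends up calling embs_g.expand((-1, repeat_g, -1)) with repeat_g == 154 on a [1, 77, 1280] tensor, i.e. the CLIP-L branch spans two 77-token chunks while the CLIP-G branch spans only one. torch.Tensor.expand can only broadcast size-1 dimensions, which is exactly the RuntimeError above. A minimal sketch reproducing the shape error (shapes taken from the error message; repeat() is shown only to illustrate the difference between the two calls, not as the node's intended fix):

    import torch

    # Shapes taken from the error message: two 77-token chunks on the CLIP-L
    # side (154 target), one chunk on the CLIP-G side ([1, 77, 1280]).
    embs_g = torch.zeros(1, 77, 1280)

    # expand() only broadcasts singleton dimensions, so growing a size-77
    # dimension to 154 raises the RuntimeError quoted in this report.
    try:
        embs_g.expand((-1, 154, -1))
    except RuntimeError as e:
        print(e)

    # repeat() tiles the tensor instead, so a non-singleton dimension is fine:
    tiled = embs_g.repeat(1, 154 // 77, 1)
    print(tiled.shape)  # torch.Size([1, 154, 1280])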

System Information

  • ComfyUI Version: 0.3.14
  • Arguments: ComfyUI\main.py
  • OS: nt
  • Python Version: 3.12.8 (tags/v3.12.8:2dc476b, Dec 3 2024, 19:30:04) [MSC v.1942 64 bit (AMD64)]
  • Embedded Python: true
  • PyTorch Version: 2.6.0+cu126
