Closed
Whenever attempting to upscale an image with either dimension > 1280 px, inference fails with the following stack trace:
```
Exception in thread Thread-14 (prompt_worker):
Traceback (most recent call last):
  File "D:\AI\ComfyUI\execution.py", line 345, in execute
    output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\execution.py", line 220, in get_output_data
    return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\execution.py", line 192, in _map_node_over_list
    process_inputs(input_dict, i)
  File "D:\AI\ComfyUI\execution.py", line 181, in process_inputs
    results.append(getattr(obj, func)(**inputs))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Upscaler-Tensorrt\__init__.py", line 52, in upscaler_tensorrt
    result = upscaler_trt_model.infer({"input": img}, cudaStream)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Upscaler-Tensorrt\trt_utilities.py", line 264, in infer
    raise ValueError("ERROR: inference failed.")
ValueError: ERROR: inference failed.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\Luna\miniconda3\envs\comfy\Lib\threading.py", line 1075, in _bootstrap_inner
    self.run()
  File "C:\Users\Luna\miniconda3\envs\comfy\Lib\threading.py", line 1012, in run
    self._target(*self._args, **self._kwargs)
  File "D:\AI\ComfyUI\main.py", line 183, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
  File "D:\AI\ComfyUI\execution.py", line 523, in execute
    result, error, ex = execute(self.server, dynamic_prompt, self.caches, node_id, extra_data, executed, prompt_id, execution_list, pending_subgraph_results)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\execution.py", line 412, in execute
    input_data_formatted[name] = [format_value(x) for x in inputs]
    ^^^^^^^^^^^^^^^
  File "D:\AI\ComfyUI\execution.py", line 264, in format_value
    return str(x)
    ^^^^^^
  File "D:\AI\ComfyUI\custom_nodes\ComfyUI-Upscaler-Tensorrt\trt_utilities.py", line 271, in __str__
    for binding_idx in range(self.engine.num_bindings):
    ^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'tensorrt_bindings.tensorrt.ICudaEngine' object has no attribute 'num_bindings'
```
I added a try-except to `__str__` in trt_utilities.py that at least fixes the soft-lock issue:
```python
def __str__(self):
    out = ""
    try:
        for opt_profile in range(self.engine.num_optimization_profiles):
            for binding_idx in range(self.engine.num_bindings):
                name = self.engine.get_binding_name(binding_idx)
                shape = self.engine.get_profile_shape(opt_profile, name)
                out += f"\t{name} = {shape}\n"
    except Exception:  # num_bindings no longer exists on this engine object
        out = ""
    return out
```
This seems to be an exact hard limit: an image with a width or height of 1281 px triggers the error, while going 1 px below doesn't. Is this a limitation of TensorRT, i.e. not being able to work with tensors > 5120 px wide (4 × 1280)? Regular (non-TensorRT) model upscales work well beyond these dimensions, if slowly. I'm running a 3090 with TensorRT 10.9.0.34 and torch 2.7.0+cu126.
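A cutoff at exactly 1280 px looks less like a TensorRT-wide limit and more like the engine's optimization profile: engines are built against (min, opt, max) shape ranges and reject inputs outside them, which would explain 1280 passing and 1281 failing. A minimal sketch of the kind of pre-flight check that would surface this before `infer()` — the 1280 bound and the function name are assumptions about this particular engine, not something read from the plugin:

```python
def check_against_profile(height, width, max_hw=1280):
    """Reject inputs that exceed an assumed profile max of max_hw x max_hw."""
    if height > max_hw or width > max_hw:
        raise ValueError(
            f"input {width}x{height} exceeds the assumed engine profile max "
            f"of {max_hw}px; rebuild the engine with a larger max shape or "
            "tile the image before upscaling"
        )
    return True


check_against_profile(1280, 1280)  # passes, matching the observed behavior
```

If this is the cause, the fix on the plugin side would be either building the engine with a larger max shape in its profile or tiling oversized inputs, rather than anything in `__str__`.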