8000 OOM on 3090 no matter what model. · Issue #680 · kijai/ComfyUI-WanVideoWrapper · GitHub
OOM on 3090 no matter what model. #680
Open
@Maki9009

Description

So I have a 3090 and 16 GB of system RAM; I'm getting 64 GB tomorrow.

The big issue I'm facing at the moment is that no matter which 14B model I try to load, I always OOM, especially when I try to load a LoRA. I tried block swapping and that also fails.

```
During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "threading.py", line 1075, in _bootstrap_inner
  File "threading.py", line 1012, in run
  File "G:\AI Stuff\ComfyUI_windows_portable\ComfyUI\main.py", line 175, in prompt_worker
    e.execute(item[2], prompt_id, item[3], item[4])
  File "G:\AI Stuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 542, in execute
    result, error, ex = execute(self.server, dynamic_prompt, self.caches, node_id, extra_data, executed, prompt_id, execution_list, pending_subgraph_results)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 428, in execute
    input_data_formatted[name] = [format_value(x) for x in inputs]
                                  ^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\ComfyUI\execution.py", line 280, in format_value
    return str(x)
           ^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor.py", line 590, in __repr__
    return torch._tensor_str._str(self, tensor_contents=tensor_contents)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 710, in _str
    return _str_intern(self, tensor_contents=tensor_contents)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 631, in _str_intern
    tensor_str = _tensor_str(self, indent)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 363, in _tensor_str
    formatter = _Formatter(get_summarized_data(self) if summarize else self)
                           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 399, in get_summarized_data
    return torch.stack([get_summarized_data(x) for x in (start + end)])
                        ^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 401, in get_summarized_data
    return torch.stack([get_summarized_data(x) for x in self])
                        ^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 399, in get_summarized_data
    return torch.stack([get_summarized_data(x) for x in (start + end)])
                        ^^^^^^^^^^^^^^^^^^^^^^
  File "G:\AI Stuff\ComfyUI_windows_portable\python_embeded\Lib\site-packages\torch\_tensor_str.py", line 389, in get_summarized_data
    return torch.cat(
           ^^^^^^^^^^
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```
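For what it's worth, the traceback itself suggests `CUDA_LAUNCH_BLOCKING=1` for debugging. A minimal sketch of setting that, plus PyTorch's `expandable_segments` allocator option (my own suggestion to reduce fragmentation, not something from the traceback); both must be set before torch initializes CUDA, e.g. in the launcher `.bat` of the Windows portable build:

```python
import os

# Must run before `import torch` / CUDA initialization; in the portable
# build, the equivalent `set VAR=value` lines go in the launcher .bat.
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"  # report kernel errors synchronously
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"  # less fragmentation
```

With `CUDA_LAUNCH_BLOCKING=1` the reported stack trace should point at the real failing call instead of a later API call.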

I was using a 14B GGUF model with the self-forcing LoRA; rendering a 480p i2v takes me about 40 seconds on 3 samples, which actually gives me a pretty decent result...

On the wrapper I can't even load any model. Help please; the workflow is in the image.
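For anyone hitting the same thing, a back-of-envelope estimate (my own numbers; weights only, ignoring activations, latents, and the LoRA) shows why an unquantized 14B model can't fit in a 3090's 24 GB without offloading, while a quantized GGUF can:

```python
def weight_vram_gib(n_params: float, bytes_per_param: float) -> float:
    """Rough VRAM footprint of the model weights alone, in GiB."""
    return n_params * bytes_per_param / 1024**3

# fp16/bf16: 14e9 params * 2 bytes -> already over the 3090's 24 GiB
print(f"fp16: {weight_vram_gib(14e9, 2):.1f} GiB")  # ~26.1 GiB
# 8-bit (fp8 or Q8 GGUF): half that, leaving headroom for activations
print(f"8-bit: {weight_vram_gib(14e9, 1):.1f} GiB")  # ~13.0 GiB
```

This is consistent with the symptoms: the quantized GGUF path works, while loading the full-precision checkpoint OOMs unless enough blocks are swapped to system RAM (which 16 GB of RAM also can't absorb).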

[Image: workflow screenshot]
