Out of RAM issue (not vram) · Issue #7465 · comfyanonymous/ComfyUI · GitHub

Out of RAM issue (not vram) #7465


Closed
Chargeuk opened this issue Apr 1, 2025 · 6 comments
Labels
Stale: This issue is stale and will be autoclosed soon. · User Support: A user needs help with something, probably not a bug.

Comments

Chargeuk (Contributor) commented Apr 1, 2025

Your question

Hi,
I am running ComfyUI in a Runpod serverless endpoint and I keep hitting out-of-memory (RAM) issues.
My guess is that ComfyUI is unloading my models from VRAM to RAM, and the Docker container manager then decides the process is using too much RAM and terminates it.

Is there any way of setting ComfyUI to completely unload the model (from both RAM and VRAM) after it has been used?

Thanks

Logs

Other

No response
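On the question itself: recent ComfyUI builds expose a /free endpoint on the HTTP API that asks the server to unload models and free cached memory between jobs, which maps closely to what is being asked for here. A minimal sketch, assuming a default local instance on port 8188 and a build new enough to have the endpoint:

```python
# Minimal sketch: ask a running ComfyUI server to drop models and caches
# between serverless jobs. Assumes the default address (127.0.0.1:8188)
# and that this build exposes POST /free (present in recent versions).
import json
import urllib.request

payload = json.dumps({
    "unload_models": True,   # evict loaded models
    "free_memory": True,     # also reset the executor cache and free VRAM
}).encode("utf-8")

req = urllib.request.Request(
    "http://127.0.0.1:8188/free",
    data=payload,
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)  # returns HTTP 200 on success
```

In a serverless worker, calling this after each job completes keeps the idle footprint close to the bare server's, at the cost of reloading models on the next request.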

Chargeuk added the User Support label Apr 1, 2025
jackburton75 commented

Hi there, same problem here. I was using Wan video (normal and GGUF) with no problem at all before the latest update, and now it uses all my system RAM and the PC freezes. I launch it with python main.py --highvram, which worked well before with no memory problems. Thanks

ltdrdata (Collaborator) commented Apr 4, 2025

One workaround is to run an empty workflow once. Doing so will clear all caches.
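Where the workaround has to be scripted rather than clicked (e.g. in a serverless worker), a rough in-process analogue is to call the model-management helpers directly from a custom node. A sketch, assuming a recent ComfyUI where these functions exist in comfy.model_management; note this drops loaded models but not the executor's node-output cache:

```python
# Rough in-process analogue of the empty-workflow trick, runnable from a
# custom node inside the ComfyUI process. Assumes recent ComfyUI, where
# these helpers live in comfy.model_management.
import gc
import comfy.model_management

comfy.model_management.unload_all_models()  # evict models from the compute device
gc.collect()                                # let Python reclaim dropped references
comfy.model_management.soft_empty_cache()   # release torch's cached allocator memory
```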

Chargeuk (Contributor, Author) commented Apr 5, 2025

Maybe a solution would be a new node-caching mechanism that unloads the cached data once all child nodes have executed? Probably behind an --aggressive-cache flag?
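As a rough sketch of what that could look like (names here are illustrative, not ComfyUI's actual cache classes): track how many downstream consumers each node output still has, and free the entry the moment the count reaches zero.

```python
# Illustrative sketch of the proposed mechanism, not ComfyUI's real cache:
# keep each node's output only until its last consumer has executed.
class RefCountedCache:
    def __init__(self, graph):
        # graph maps node_id -> list of node_ids it reads inputs from.
        self.remaining = {}   # producer node_id -> consumers left to run
        for inputs in graph.values():
            for src in inputs:
                self.remaining[src] = self.remaining.get(src, 0) + 1
        self.outputs = {}

    def store(self, node_id, value):
        # Cache an output only if some downstream node still needs it.
        if self.remaining.get(node_id, 0) > 0:
            self.outputs[node_id] = value

    def consume(self, node_id):
        # Read an input; free it once its final consumer has taken it.
        value = self.outputs[node_id]
        self.remaining[node_id] -= 1
        if self.remaining[node_id] == 0:
            del self.outputs[node_id]  # unloaded as soon as no child needs it
        return value


# Example: B and C both read A's output; it is freed after the second read.
cache = RefCountedCache({"A": [], "B": ["A"], "C": ["A"]})
cache.store("A", "big tensor")
cache.consume("A")  # B reads; one consumer left
cache.consume("A")  # C reads; entry is dropped here
```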

Chargeuk (Contributor, Author) commented Apr 6, 2025

I opened a pull request here with a new caching mechanism that reduces the amount of RAM a workflow requires: #7509

Chargeuk (Contributor, Author) commented

Since my pull request has been merged, giving users the option to use --no-cache to reduce RAM usage, I will close this. Thanks
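For readers applying this later: the spelling of the cache flags has varied across ComfyUI versions, so it is worth confirming the exact option name against your install before relying on it, for example:

```python
# Print the cache-related CLI options for the local ComfyUI checkout.
# Assumes this runs from the ComfyUI repo root; flag names vary by version.
import subprocess

help_text = subprocess.run(
    ["python", "main.py", "--help"],
    capture_output=True, text=True,
).stdout

for line in help_text.splitlines():
    if "cache" in line.lower():
        print(line.strip())
```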

github-actions bot commented

This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.

github-actions bot added the Stale label May 19, 2025
github-actions bot closed this as not planned (Won't fix, can't repro, duplicate, stale) May 27, 2025