Out of RAM issue (not VRAM) #7465
Comments
Hi there, same problem here. I was using Wan video (normal and GGUF) with no problem at all before the latest update; now it uses all my system RAM and the PC freezes. I launch it with python main.py --highvram, and it used to work well with no memory problems. Thanks
One workaround is to run an empty workflow once. Doing so will clear all caches.
Maybe a solution would be a new node-caching mechanism that unloads the cached data once all child nodes have executed? Probably behind an --aggressive-cache flag?
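For illustration, here is a minimal sketch of the eviction scheme that comment describes: a node's output stays cached only until every child node that depends on it has run. The AggressiveCache class, the graph structure, and the flag name are hypothetical, not ComfyUI's actual executor or any real option.

```python
# Hypothetical sketch: an "aggressive" execution cache that keeps a node's
# output only until every child node depending on it has consumed it.
class AggressiveCache:
    def __init__(self, graph):
        # graph: node_id -> list of child node_ids (assumed structure)
        self.remaining_consumers = {n: len(kids) for n, kids in graph.items()}
        self.outputs = {}

    def store(self, node_id, output):
        # Cache an output only if some child still needs to read it.
        if self.remaining_consumers.get(node_id, 0) > 0:
            self.outputs[node_id] = output

    def consume(self, parent_id):
        # A child reads its parent's output; evict once the last child has run.
        value = self.outputs[parent_id]
        self.remaining_consumers[parent_id] -= 1
        if self.remaining_consumers[parent_id] == 0:
            del self.outputs[parent_id]  # free the RAM right away
        return value


# Example: node "a" feeds "b" and "c"; after both consume it, "a" is evicted.
cache = AggressiveCache({"a": ["b", "c"], "b": [], "c": []})
cache.store("a", "some large tensor")
cache.consume("a")
cache.consume("a")
assert "a" not in cache.outputs
```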
I opened a pull request with a new caching mechanism that reduces the amount of RAM a workflow requires: #7509
Since my pull request has been merged, giving users the option to use --no-cache to reduce memory usage, I will close this. Thanks
This issue is being marked stale because it has not had any activity for 30 days. Reply below within 7 days if your issue still isn't solved, and it will be left open. Otherwise, the issue will be closed automatically.
Your question
Hi,
I am running ComfyUI on a RunPod serverless endpoint and keep hitting out-of-memory (RAM) issues.
My guess is that ComfyUI is unloading my models from VRAM to system RAM, and the Docker container manager then decides that too much RAM is being used, so the process gets terminated.
Is there any way to configure ComfyUI to completely unload a model (from both RAM and VRAM) after it has been used?
Thanks
Logs
Other
No response
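Regarding the question above: recent ComfyUI builds expose a /free HTTP route that asks the server to unload models and free cached memory, which may be enough to keep the container's RAM usage down between jobs. Below is a minimal sketch of calling it, assuming the default local server address (adjust the URL for a RunPod endpoint) and a build recent enough to include the route.

```python
# Sketch: ask a running ComfyUI server to unload models and free cached memory.
# Assumes the default local server address and a ComfyUI build that exposes
# POST /free; adjust the URL for a RunPod serverless endpoint.
import json
import urllib.request

payload = json.dumps({"unload_models": True, "free_memory": True}).encode("utf-8")
req = urllib.request.Request(
    "http://127.0.0.1:8188/free",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the server accepted the request
```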