Add OpenVINO backend for torch.compile node #6638
Conversation
Questions, as an Intel Arc owner who has contributed to the repository:
1.) I have used both the Triton(
2.) Are there sensible errors that pop up if you don't have a suitable OpenVINO GPU or NPU and try to run with this node, and ways to diagnose how to solve them if users run into them? This can be an issue with both device types, but the NPU especially requires drivers at this time to function properly, and I can't even get my LNL laptop to use the NPU on Linux right now, so I also have questions about maturity at this time.
Hi @simonlui, many thanks for your quick feedback.
Finally. Thanks to comfyanonymous/ComfyUI#6638 for serving as a guide for the add_patches function.
Hi there, just passing by and wanted to say many, many thanks! With your add_patches modifications (and others) I finally managed to make LoRAs work with torch.compile. Really appreciated!
Sorry for the double post, but I'm wondering: does loading a LoRA, then disabling it, then enabling it again work fine for you? Maybe some unpatching or recompiling is needed? I think on a first inference with a LoRA, it will patch the keys before compiling, and it will work. If you then disable and re-enable the LoRA, it will compile without the LoRA and add some _orig_mod prefixes to the keys, so when the LoRA keys are applied again on a third inference to the compiled model, they will not match and the LoRA won't load. Correct me if I'm wrong though.
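For reference, the _orig_mod prefix mentioned above comes from torch.compile wrapping a module in an OptimizedModule that stores the original under `_orig_mod`. A minimal illustration (not from this thread):

```python
import torch

lin = torch.nn.Linear(4, 4)
compiled = torch.compile(lin)

# The wrapper keeps the original module as `_orig_mod`, so state_dict keys
# gain a "_orig_mod." prefix and no longer match the uncompiled keys.
print(list(lin.state_dict())[0])       # "weight"
print(list(compiled.state_dict())[0])  # "_orig_mod.weight"
```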
I think it can supp
Hi, when your implementation path starts from a checkpoint without LoRA, everything works. However, if it starts from a checkpoint with LoRA, then enabling and disabling the LoRA does not work. Which means:
In the second case, my new patch will not be triggered, so I believe it is a general issue for the torch.compile node, and I will do further investigation.
I have updated the PR; however, it may need 2 warm-up inferences for the first-time generation with LoRA weights.
Hi @comfyanonymous, could you help to review?
Hi @NineMeowICT, could you help check if this PR is ready to be merged? Thanks.
@openvino-dev-samples I'm sorry for the misunderstanding. I'm not one of ComfyUI's contributors and I don't have write access; my approval was simply to recognize your efforts.
How can we get this merged?
@openvino-dev-samples Appreciate you working on implementing this. I'm currently using your fork, but it doesn't seem to be using the GPU for inference even though I've selected GPU as the device. Is there any specific OpenVINO version that I need to use? I don't think it's compiling anything and is just skipping the block straight away.
Hi, could you share a screenshot of your network? BTW, you can try the following command to update the version of OpenVINO:
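Presumably the standard pip upgrade, e.g.:

```
pip install --upgrade openvino
```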
I don't think a change on the Sampler will lead to recompiling for GPU, but I will try to reproduce your case. Thanks for sharing.
Thank you for looking, but I resolved the problem. It was low memory that was causing it to fall back to the CPU; increasing memory helped resolve this. Thanks a lot anyway. Edit: Changing the image size made it switch back to the CPU. An image size of 512x512 works fine with the GPU, but changing it to 768x768 causes it to use the CPU. In fact, changing to any other image size causes it to use the CPU.
To fix this, I had to switch the OpenVINO device to CPU and queue, then switch back to GPU and queue.
torch.compile will recompile the PyTorch model once it is changed, so I guess changing the image size updates the original PyTorch model's network.
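A minimal illustration of this shape-triggered recompilation behavior (hypothetical, using the default backend):

```python
import torch

@torch.compile
def double(x):
    return x * 2

double(torch.randn(1, 4, 512, 512))  # first call: compiles for this shape
double(torch.randn(1, 4, 768, 768))  # new shape: may trigger a recompile
                                     # (or a dynamic-shape graph, depending on settings)
```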
Hey, sorry that this did not get an official review for so long!

For the LoRA fix for torch.compile: even with the keys workaround, there was indeed a fundamental issue with the way torch.compile was implemented in ComfyUI with the object_patches, with anything relating to keys still being broken while the model was loaded. I created a PR that just got merged and fixes this properly: #8213, with torch.compile now being managed by a […]

For the OpenVINO portion of the PR, managing the openvino dependency should be done in some standardized manner. This can be done most cleanly by implementing the Torch Compile OpenVINO node in a custom node pack, which will ensure that any time a workflow uses the OpenVINO node, users are able to acquire the dependencies via ComfyUI-Manager. Creating/publishing the custom node pack is something that can be done on your end, so that you will be free to make any edits in the future. Here are the docs for publishing a custom node to the registry: https://docs.comfy.org/registry/publishing#publishing-nodes

Since the review was so delayed, I went ahead and wrote the core of the custom node code that would achieve the goal of this PR; feel free to use it/reference it:
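A rough sketch of what such a node could look like, modeled on ComfyUI's built-in TorchCompileModel node and the documented OpenVINO torch.compile backend; the class name, device list, and use of add_object_patch here are illustrative assumptions, not the actual code from the review:

```python
# Hypothetical sketch of a TorchCompileModel-style node using the OpenVINO
# torch.compile backend. Structure mirrors ComfyUI's built-in node.
import torch
import openvino.torch  # noqa: F401 -- registers the "openvino" torch.compile backend

class TorchCompileModelOpenVINO:
    @classmethod
    def INPUT_TYPES(cls):
        return {
            "required": {
                "model": ("MODEL",),
                "device": (["CPU", "GPU", "NPU"],),  # assumed device choices
            }
        }

    RETURN_TYPES = ("MODEL",)
    FUNCTION = "patch"
    CATEGORY = "_for_testing"
    EXPERIMENTAL = True

    def patch(self, model, device):
        m = model.clone()
        # Compile the diffusion model with the OpenVINO backend, targeting
        # the selected device via the backend's options dict.
        compiled = torch.compile(
            m.get_model_object("diffusion_model"),
            backend="openvino",
            options={"device": device},
        )
        m.add_object_patch("diffusion_model", compiled)
        return (m,)

NODE_CLASS_MAPPINGS = {"TorchCompileModelOpenVINO": TorchCompileModelOpenVINO}
```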
Let me know if you have any questions or concerns; I'll do my best to answer promptly!
Features description
Enable inference of .safetensor models and LoRA weights with the OpenVINO runtime.

Installation
To enable this integration, you only need to install the OpenVINO runtime in advance:
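For example, assuming the standard PyPI package:

```
pip install openvino
```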
Test cases
Screenshots
The output of TorchCompileModel has to be connected to the input of KSampler.

The following model has to be selected as the checkpoint:

#2473