Hi, thank you for sharing this great project — it's very helpful!
I noticed that in the original paper, the training of the generative model is divided into two stages:
1. Fine-tuning the original unconditional LDP model;
2. Training the conditional model using sub-goals.
I would like to confirm: does the training code provided in this repository only cover the second stage (training the conditional model)?
If so, was the first-stage fine-tuning of the unconditional LDP model done strictly following the official LDP training pipeline, or were there additional considerations?
In my testing, first training an unconditional diffusion model helped the ControlNet converge faster and achieve better results. For that stage you can simply use the original official code, as long as your dataset follows the required format.
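In case it helps to see the flow end to end, here is a minimal PyTorch sketch of the two-stage recipe described above. This is not this repo's actual training code: the scheduler interface (`add_noise`, `config.num_train_timesteps`) follows the diffusers convention, and the `unet`/`controlnet` call signatures, the `extra_residuals` kwarg, and the batch keys (`latents`, `subgoal`) are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def train_stage1_unconditional(unet, scheduler, dataloader, optimizer, device="cuda"):
    """Stage 1: fine-tune the unconditional diffusion model with the
    standard noise-prediction (DDPM) loss."""
    unet.train()
    for batch in dataloader:
        x0 = batch["latents"].to(device)                    # clean latents
        noise = torch.randn_like(x0)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (x0.shape[0],), device=device)
        xt = scheduler.add_noise(x0, noise, t)              # forward diffusion
        loss = F.mse_loss(unet(xt, t), noise)               # predict the added noise
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def train_stage2_controlnet(unet, controlnet, scheduler, dataloader, optimizer, device="cuda"):
    """Stage 2: freeze the fine-tuned base model and train only the
    ControlNet branch, conditioned on sub-goals."""
    unet.requires_grad_(False)                              # base stays frozen
    unet.eval()
    controlnet.train()
    for batch in dataloader:
        x0 = batch["latents"].to(device)
        subgoal = batch["subgoal"].to(device)               # conditioning signal
        noise = torch.randn_like(x0)
        t = torch.randint(0, scheduler.config.num_train_timesteps,
                          (x0.shape[0],), device=device)
        xt = scheduler.add_noise(x0, noise, t)
        residuals = controlnet(xt, t, subgoal)              # control features
        # `extra_residuals` is a placeholder kwarg for injecting the
        # ControlNet features into the frozen base model.
        loss = F.mse_loss(unet(xt, t, extra_residuals=residuals), noise)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Note that in stage 2 only the ControlNet parameters should be passed to the optimizer; freezing the base and training just the control branch is the standard ControlNet recipe and is what lets the stage-1 fine-tuning pay off.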