See the last sentence of Section 2 in the AMP paper: "Our motion prior also does not require a separate pre-training phase, and instead, can be trained jointly with the policy."
It looks like the policy and the discriminator are trained jointly, at the same rate, with a single optimizer and a combined loss (https://github.com/nv-tlabs/ASE/blob/21257078f0c6bf75ee4f02626260d7cf2c48fee0/ase/learning/ase_agent.py#L280C1-L280C1). This seems to differ from the pseudocode in the paper, where the two are updated separately. Any idea what the reason for this is? Or am I missing something?
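For concreteness, here is a minimal sketch of the pattern I mean, with hypothetical network and variable names (not the repo's actual modules or loss coefficients; the real logic lives in `ase/learning/ase_agent.py`). The point is the single optimizer over all parameters and the one combined backward pass:

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the actor, critic, and AMP discriminator.
actor = nn.Linear(64, 8)
critic = nn.Linear(64, 1)
disc = nn.Linear(64, 1)

# One optimizer over ALL parameters, mirroring the single-optimizer setup.
optimizer = torch.optim.Adam(
    list(actor.parameters()) + list(critic.parameters()) + list(disc.parameters()),
    lr=3e-4,
)

def train_step(obs, amp_obs_agent, amp_obs_demo, returns):
    # Dummy losses for illustration only; the repo computes a PPO surrogate,
    # a value loss, and a discriminator loss here.
    actor_loss = -actor(obs).mean()
    critic_loss = ((critic(obs).squeeze(-1) - returns) ** 2).mean()
    bce = nn.functional.binary_cross_entropy_with_logits
    disc_loss = 0.5 * (
        bce(disc(amp_obs_demo), torch.ones(amp_obs_demo.shape[0], 1))
        + bce(disc(amp_obs_agent), torch.zeros(amp_obs_agent.shape[0], 1))
    )

    # Combined objective: a single backward pass updates the policy, value
    # function, and discriminator together, at the same rate.
    # Coefficients are illustrative, not the repo's actual values.
    loss = actor_loss + 0.5 * critic_loss + 5.0 * disc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch just to show the call shape.
obs = torch.randn(32, 64)
train_step(obs, torch.randn(32, 64), torch.randn(32, 64), torch.randn(32))
```

In the paper's pseudocode, by contrast, the discriminator gets its own update step (and potentially its own optimizer/learning rate) before the policy update, so the two could in principle be trained at different rates.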