Hints on DmpWithGainSchedules does not yet override Parameterizable interface · Issue #61 · stulp/dmpbbo · GitHub

Hints on DmpWithGainSchedules does not yet override Parameterizable interface #61


Closed
ignacio-pm opened this issue Mar 16, 2022 · 17 comments

Comments

@ignacio-pm
Collaborator

I am using the DmpWithGainSchedules class to optimize the gains of a DMP. However, the impedance gains are not modified during the rollouts. In line 171 of DmpWithGainSchedules.hpp there is a TODO saying that the Parameterizable interface still needs to be overridden, so the function approximators for the gains cannot yet be parameterized. This missing functionality is probably what breaks the optimization of the gains. I would appreciate any hint or help on how to tackle the task.

Thanks,

@stulp
Owner
stulp commented Mar 17, 2022

It appears the code for doing it is already there:

void getSelectableParameters(std::set<std::string>& selectable_values_labels) const;

Not sure why it is commented out... Try commenting the code back in and see if it compiles.

In any case, it's important to understand the Parameterizable class, because it is central to the design:
https://github.com/stulp/dmpbbo/blob/master/src/functionapproximators/Parameterizable.hpp
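As an aside, the design can be sketched with a minimal, self-contained toy: a model exposes named groups of parameters, and the optimizer selects which groups to read and write. The virtual function names below mirror those in Parameterizable.hpp, but ToyModel and its "weights"/"offsets" labels are hypothetical, purely for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// Minimal sketch of the Parameterizable idea: named parameter groups,
// of which only the selected ones are exposed as one flat vector.
class Parameterizable {
public:
  virtual ~Parameterizable() {}
  virtual void getSelectableParameters(std::set<std::string>& labels) const = 0;
  virtual void setSelectedParameters(const std::set<std::string>& labels) = 0;
  virtual std::vector<double> getParameterVectorSelected() const = 0;
  virtual void setParameterVectorSelected(const std::vector<double>& values) = 0;
};

// Hypothetical toy model with two parameter groups, "weights" and "offsets".
class ToyModel : public Parameterizable {
public:
  ToyModel() : weights_(3, 1.0), offsets_(2, 0.0) {}
  void getSelectableParameters(std::set<std::string>& labels) const override {
    labels = {"weights", "offsets"};
  }
  void setSelectedParameters(const std::set<std::string>& labels) override {
    selected_ = labels;
  }
  std::vector<double> getParameterVectorSelected() const override {
    std::vector<double> v;  // Concatenate only the selected groups.
    if (selected_.count("weights")) v.insert(v.end(), weights_.begin(), weights_.end());
    if (selected_.count("offsets")) v.insert(v.end(), offsets_.begin(), offsets_.end());
    return v;
  }
  void setParameterVectorSelected(const std::vector<double>& v) override {
    std::size_t i = 0;  // Write back in the same order as the getter.
    if (selected_.count("weights")) for (double& w : weights_) w = v[i++];
    if (selected_.count("offsets")) for (double& o : offsets_) o = v[i++];
  }
private:
  std::vector<double> weights_, offsets_;
  std::set<std::string> selected_;
};
```

The point of the design is that a black-box optimizer only ever sees the flat selected vector, while the model decides how that vector maps onto its internal structures.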

I hope to have some time to look at this over the next few days.

@stulp
Owner
stulp commented Mar 19, 2022

So the code still compiles, but debugging Eigen (and real-time) code is quite annoying.

@ignacio-pm, is your aim to run this on a real robot? Then the investment may be worth it.

If not, it might in fact be easier to port DmpWithGainSchedules to the python code (see also #60).

@ignacio-pm
Collaborator Author

I aim to run it on a real robot. It does not need to be real-time, though, because the trajectory is calculated before the roll-out. Since the roll-out implementation on the robot is in C++, it is convenient to use C++.

I used the demo_robot example as a baseline, initializing DmpWithGainSchedules, and made the necessary changes. I uncommented the parts you pointed out, but I got an error in the training step (step1A) of the demo_robot folder. The error was in the setSelectedParameters function of DmpWithGainSchedules. I do not have access to the detailed error description now, but I will include it on Monday.

There is an interesting comment here that might explain the cause of the error:

/** \todo DmpWithGainSchedules does not yet override Parameterizable interface. Thus, the functionapproximators for the extra dimensions for gains cannot yet be parameterized through this interface.

@stulp
Owner
stulp commented Mar 19, 2022

I aim to run it on a real robot. It does not need to be real-time, though, because the trajectory is calculated before the roll-out. Since the roll-out implementation on the robot is in C++, it is convenient to use C++.

To understand correctly:

  • A trajectory is generated off-line and written to disk (using dmpbbo code)
  • The robot reads the trajectory and executes it (no dmpbbo code is required for this)

For this scenario, it makes sense to do the first step in python only (it being the easier language for programming, debugging, and visualizing results), and the second in (real-time) C++ (or whatever your robot needs).

What is blocking you (and others) from doing this is that not all the functionality of the C++ implementation (e.g. DmpWithGainSchedules) is available in the Python version (see #60). Ergo, if the functionality of the C++ version is ported to the Python version, the code becomes much more usable for many people (I took first steps in d9b0793, but doing the whole thing would require a few days).

What's your time schedule? If you need it quickly I'd suggest that you try to fix the C++ code (Eigen is great, but annoying for these kinds of tasks; that's why I gave up and commented it out). If you have a week or two, we could implement DmpWithGainSchedules in Python (i.e. I would code, you would review). What do you say?

There is an interesting comment here that might explain the cause of the error:

/** \todo DmpWithGainSchedules does not yet override Parameterizable interface. Thus, the functionapproximators for the extra dimensions for gains cannot yet be parameterized through this interface.

Indeed. I would thus say that it is not an error per se, but a lack of functionality.

Side remark: if you generate trajectories off-line (as many people do), you lose the advantages of DMPs (adaptation to changing goals or unexpected perturbations). In this "off-line" case, one could also use splines.

@ignacio-pm
Collaborator Author

To understand correctly:

  • A trajectory is generated off-line and written to disk (using dmpbbo code)
  • The robot reads the trajectory and executes it (no dmpbbo code is required for this)

That is what I meant, and therefore, I think python makes sense in this case. I already have both steps running in C++ using your demo_robot code as a baseline, but the transfer to python will not take long.

My time schedule is tight; I want to finish everything and start experiments this week or next. I will try to fix the C++ code, but I am open to helping with and reviewing the python code, because I might not be able to solve it myself, and it could be helpful for the future.

@stulp
Owner
stulp commented Mar 19, 2022

Given a tight schedule, I consider the "porting to python" approach risky. Chances are better if you get the C++ code running.

Do you want to optimize DMP trajectories and gain schedules at the same time, or only gain schedules?

@ignacio-pm
Collaborator Author

Both. I want to compare optimizing only the DMP trajectories with optimizing the gain schedules and the DMP trajectories for a given task.

This was referenced Mar 19, 2022
@stulp
Owner
stulp commented Mar 19, 2022

So I had a look and it is complicated.

I've opened new issues for these subtasks in #62 and #63, but unfortunately those are not going to be done in a week or two (I probably need a whole week for them plus #45 and #60, and I do not know when I will have that week).

@stulp
Owner
stulp commented Mar 20, 2022

So I did a quick feasibility analysis.

Thanks for nudging me to address these issues that I had wanted to work on anyway. But they require larger changes for which I will not have time before your deadline.

@ignacio-pm
Collaborator Author

Thank you for your work!

@stulp
Owner
stulp commented Mar 21, 2022

You're welcome!

So I went for #63 anyway (9d4e23e). It is available in the branch cpp_parameterizable.

Parameterization now works for the LWR and RBFN function approximators (both tested with testPerturbModelParameterLWR.cpp). It also seems to be working for DMP (tested with testDmpModelParameterValues.cpp).

It also compiles for DmpWithGainSchedules, but I haven't tested that yet... It's worth a shot for your experiment, though it's not in the master branch yet for that very reason.

The more long-term solution is #62 in any case.

@ignacio-pm
Collaborator Author

I will test it in my experiment between today and tomorrow and tell you if it works.

@ignacio-pm
Collaborator Author

So I gave it a try and had to fix a couple of things before I got it to work; they are in the pull request: #65. Before merging, there are a couple of things that need to be decided (explained in the PR).

@stulp
Owner
stulp commented Mar 24, 2022

There's just one minor fix required before merging it (i.e. undoing some changes that you had to make because a function was not implemented yet, but is now available in 876ed29).

When the PR is merged, it's essential to do some basic testing of DmpWithGainSchedules. For that, I would copy this one
https://github.com/stulp/dmpbbo/blob/master/src/dmp/tests/testDmpModelParameterValues.cpp
to a test called testDmpWithGainSchedulesModelParameterValues. The test should confirm that:

  • gain schedules can be changed without changing the function approximators (FAs) for the transformation systems in the Dmp (i.e. by selecting the parameters "offsets_gains" for LWR, or "weights_gains" when using RBFN)
  • FAs in the Dmp can be changed without changing the gain-schedule FAs (select "weights")
  • both can be changed at the same time (select "weights" and "weights_gains")

Small hint: I find working with RBFNs much more intuitive when visualizing and debugging. And in practice, I've not seen advantages of using LWR.
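Before wiring those three checks into the real test, they could be prototyped with a self-contained toy. The "weights"/"weights_gains" labels below follow the test plan above, but the ToyDmpWithGains struct is hypothetical and stands in for the real DmpWithGainSchedules:

```cpp
#include <cassert>
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// Toy stand-in for a DMP with gain schedules: one parameter vector for the
// trajectory FAs ("weights") and one for the gain-schedule FAs ("weights_gains").
struct ToyDmpWithGains {
  std::vector<double> weights{1, 1, 1};     // trajectory FA parameters
  std::vector<double> weights_gains{2, 2};  // gain-schedule FA parameters
  std::set<std::string> selected;

  void setSelectedParameters(const std::set<std::string>& labels) {
    selected = labels;
  }
  std::vector<double> getParameterVectorSelected() const {
    std::vector<double> v;  // Expose only the selected groups, in a fixed order.
    if (selected.count("weights"))
      v.insert(v.end(), weights.begin(), weights.end());
    if (selected.count("weights_gains"))
      v.insert(v.end(), weights_gains.begin(), weights_gains.end());
    return v;
  }
  void setParameterVectorSelected(const std::vector<double>& v) {
    std::size_t i = 0;  // Write back only the selected groups, same order.
    if (selected.count("weights")) for (double& w : weights) w = v[i++];
    if (selected.count("weights_gains")) for (double& g : weights_gains) g = v[i++];
  }
};
```

The real test would perform the same three checks, but additionally verify the FA model parameters through the Dmp's serialization or getters, as testDmpModelParameterValues.cpp does.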

@stulp
Owner
stulp commented Mar 25, 2022

In demo_robot, it is now possible to use DmpWithGainSchedules: f840411

This commit allows the easy switching on/off of which parameters are optimized: f32a0e7

This code compiles, and looking at the dmp.xml file that step1A_trainDmpFromTrajectoryFile generates, it appears to do the right thing. I've not had time to run the dmp.xml through steps 2 to 4 of demo_robot yet though, so it is not properly tested yet.

@stulp
Owner
stulp commented Jun 19, 2022

@ignacio-pm There is now an issue (and branch) specifically dedicated to the reimplementation of DMPs with gain/force schedules. Shall we close this issue?

You had previously offered to review the code related to this. Can I assign you as reviewer on a PR for the branch with_schedules, once it is done?

@ignacio-pm
Collaborator Author

@stulp, yes, you can assign me, and I can review the code, but I am no longer in the project where I was using this repository, so I cannot test it in my experiment.
