
Implement UpdaterCovarAdaptation::setSubMatrices to mimic parallel optimization #59


Open · stulp opened this issue Nov 28, 2021 · 3 comments

stulp (Owner) commented Nov 28, 2021

The parallel optimization is no longer needed if the covariance matrix updater updates the matrix in submatrices. The simple solution would be to have a function UpdaterCovarAdaptation::setSubMatrices(VectorXi), in which the sizes of the submatrices are set explicitly, rather than derived from the array of distributions:

covar_block_sizes[pp] = distributions[pp].mean().size();
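
A minimal sketch of what such a setter could look like (the class layout and member names below are illustrative, not the actual dmpbbo code):

#include <cassert>
#include <Eigen/Core>

class UpdaterCovarAdaptation {
public:
  // Set the sizes of the diagonal submatrices (blocks) of the covariance
  // matrix explicitly, instead of deriving them from an array of distributions.
  void setSubMatrices(const Eigen::VectorXi& covar_block_sizes) {
    assert(covar_block_sizes.minCoeff() > 0);
    covar_block_sizes_ = covar_block_sizes;
  }

private:
  Eigen::VectorXi covar_block_sizes_;  // e.g. [M, M, ..., M] for N blocks of size M
};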

The following functions can then be deleted:

  • runOptimizationParallelDeprecated() (which already carries the comment: /** \todo Get rid of runOptimizationParallelDeprecated(), and implement in UpdaterCovarAdaptation)
  • bool saveToDirectory(

stulp (Owner, Author) commented Dec 5, 2021

See #58 (comment) for background information.

Current implementation

runOptimizationParallelDeprecated() has a list of N separate distributions with MxM covariance matrices. Each is sampled separately, and the results are concatenated into one NM-dimensional parameter vector.
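
For illustration, a minimal sketch of that concatenation step (a hypothetical helper, assuming Eigen; the actual dmpbbo code differs):

#include <vector>
#include <Eigen/Core>

// Concatenate N separate M-dimensional samples into one NM-dimensional
// parameter vector, as runOptimizationParallelDeprecated() does conceptually.
Eigen::VectorXd concatenateSamples(const std::vector<Eigen::VectorXd>& samples)
{
  Eigen::Index total = 0;
  for (const auto& s : samples) total += s.size();

  Eigen::VectorXd concatenated(total);
  Eigen::Index offset = 0;
  for (const auto& s : samples) {
    concatenated.segment(offset, s.size()) = s;
    offset += s.size();
  }
  return concatenated;
}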

Future (simpler) implementation

Have one NM x NM covariance matrix, but organize it in blocks.

  • Advantage: simpler code, because running the black-box optimization is the same in all cases (avoids code redundancy).
  • Disadvantage: an NM x NM matrix with many zeros takes up more memory than N separate MxM matrices. However, since there is only one copy of this distribution, I think the advantages outweigh the disadvantage (clearer, non-redundant code is preferred over complete memory optimization). A sketch of the block-diagonal layout follows this list.
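
A sketch of this layout, assuming Eigen (a hypothetical helper, not existing dmpbbo code): the N separate MxM covariance matrices become the diagonal blocks of one NM x NM matrix, and all off-diagonal blocks stay zero.

#include <vector>
#include <Eigen/Core>

// Place N covariance matrices on the diagonal of one big matrix.
Eigen::MatrixXd blockDiagonalCovar(const std::vector<Eigen::MatrixXd>& covars)
{
  Eigen::Index total = 0;
  for (const auto& c : covars) total += c.rows();

  Eigen::MatrixXd big = Eigen::MatrixXd::Zero(total, total);
  Eigen::Index offset = 0;
  for (const auto& c : covars) {
    big.block(offset, offset, c.rows(), c.cols()) = c;
    offset += c.rows();
  }
  return big;
}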

Implementation Options:

  • Option 1: Have an UpdaterCovarAdaptationBlocks (which inherits from UpdaterCovarAdaptation?) that updates the covariance matrix block by block, as sketched after this list.
  • Option 2: Pass the block sizes to the constructor, and treat it differently from the normal updater.
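
For Option 1, the core of the update could look roughly like this (a sketch only; the names and the update rule itself are placeholders for whatever UpdaterCovarAdaptation actually does):

#include <Eigen/Core>

// Update each diagonal block of the full covariance matrix independently,
// which mimics the deprecated parallel optimization over N distributions.
void updateBlockByBlock(Eigen::MatrixXd& covar,
                        const Eigen::VectorXi& block_sizes)
{
  Eigen::Index offset = 0;
  for (int pp = 0; pp < block_sizes.size(); pp++) {
    Eigen::Index n = block_sizes[pp];
    Eigen::MatrixXd block = covar.block(offset, offset, n, n);
    // ... apply the covariance adaptation update to this MxM block only ...
    covar.block(offset, offset, n, n) = block;
    offset += n;
  }
}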

pranshumalik14 commented

This notebook provides yet another way to have compact code following the current implementation.

stulp (Owner, Author) commented Mar 17, 2022

Yes, very nice! See also my comments in #58.
