
Sequential Monte Carlo controller that integrates physical consistency and motion knowledge

Published: 01 August 2019

Abstract

Stochastic modeling of motion primitives is a well-developed approach to representing demonstrated motions. Such models have been used in kinematic motion recognizers and synthesizers because they compactly represent classes of high-dimensional motions, and they allow robots to synthesize motions that are kinematically similar to the demonstrations. However, when a robot executes a task while physically interacting with the environment, it must also control dynamical properties, such as contact forces, in a fashion similar to the demonstrated executions. To achieve this, the stochastic model needs to be extended to represent both the kinematics and the dynamics of the demonstrations, and to synthesize motions that are physically consistent with this representation. In this paper, we propose a novel approach that encodes sequences of joint angles and joint torques into a hidden Markov model (HMM). Joint torques that satisfy the physical constraints of the equations of motion are then generated from the HMM by a sampling method. These joint torques enable a robot to perform motions kinematically similar to the training demonstrations and to control its contact with the environment. Experiments on a robot arm with seven degrees of freedom demonstrate the validity of the proposed approach.
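The abstract describes the pipeline only at a high level; the sketch below illustrates how a sequential Monte Carlo (particle) sampler could draw joint torques from a Gaussian-emission HMM over concatenated joint angles and torques while weighting particles by consistency with rigid-body forward dynamics. The function name smc_torque_sampler, its signature, the dynamics interface, and all parameter values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def smc_torque_sampler(pi, A, mu, q0, dq0, dt, horizon, forward_dynamics,
                       n_particles=200, tau_std=0.05, obs_var=0.01):
    """Sketch of an SMC controller that samples joint torques from an HMM
    whose emissions are concatenated [joint angles | joint torques] and
    weights particles by consistency with the equations of motion.

    pi : (K,)      initial hidden-state probabilities
    A  : (K, K)    hidden-state transition matrix
    mu : (K, 2*D)  per-state emission means, angles followed by torques
    forward_dynamics(q, dq, tau) -> ddq   rigid-body forward dynamics
    """
    K, D = len(pi), len(q0)
    s = np.random.choice(K, size=n_particles, p=pi)           # hidden states
    q = np.tile(np.asarray(q0, float), (n_particles, 1))      # joint angles
    dq = np.tile(np.asarray(dq0, float), (n_particles, 1))    # joint velocities
    w = np.full(n_particles, 1.0 / n_particles)               # particle weights
    torques = []

    for _ in range(horizon):
        # Propagate hidden states through the HMM transition matrix.
        s = np.array([np.random.choice(K, p=A[si]) for si in s])
        q_ref = mu[s, :D]                                      # demonstrated kinematics
        tau = mu[s, D:] + tau_std * np.random.randn(n_particles, D)
        # Integrate the equations of motion with the sampled torques.
        for i in range(n_particles):
            ddq = forward_dynamics(q[i], dq[i], tau[i])
            dq[i] = dq[i] + ddq * dt
            q[i] = q[i] + dq[i] * dt
        # Reweight: torques whose resulting motion drifts from the HMM's
        # kinematic means are penalized (physical-consistency weighting).
        w = w * np.exp(-0.5 * np.sum((q - q_ref) ** 2, axis=1) / obs_var)
        w = w / max(w.sum(), 1e-300)
        # Resample (multinomial) when the effective sample size collapses.
        if 1.0 / np.sum(w ** 2) < n_particles / 2:
            idx = np.random.choice(n_particles, size=n_particles, p=w)
            s, q, dq, tau = s[idx], q[idx], dq[idx], tau[idx]
            w = np.full(n_particles, 1.0 / n_particles)
        # Command the weighted mean torque for this time step.
        torques.append(np.average(tau, axis=0, weights=w))
    return np.array(torques)
```

The weighting step is where physical consistency enters: a particle's torque sample survives resampling only if, once integrated through the forward dynamics, it reproduces the demonstrated joint-angle trajectory encoded by the HMM.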


Information

Published In

Autonomous Robots, Volume 43, Issue 6
August 2019
325 pages

Publisher

Kluwer Academic Publishers

United States

Publication History

Published: 01 August 2019

Author Tags

  1. Motion control
  2. Physical consistency
  3. Stochastic model

Qualifiers

  • Article

