Dance motion generation by recombination of body parts from motion source

  • Original Research Paper
  • Published in: Intelligent Service Robotics

Abstract

In this paper, we propose an approach that synthesizes new dance routines by combining body-part motions drawn from a human motion database. The approach aims to provide a movement source that allows robots or animated characters to perform improvised dances to music, and to inspire choreographers with the generated movements. Based on the observation that some body parts perform more appropriately than others during a dance performance, a correlation analysis between music and motion is conducted to identify the expressive body parts. We then combine the body-part movement sources to create a new motion that differs from every source in the database. The generated performances are evaluated through a user questionnaire, and the results are discussed to understand what is important in generating more appealing dance routines.

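To make the pipeline in the abstract concrete, the sketch below illustrates the two steps it describes: score each body part by how strongly its movement correlates with the music, then recombine the best-scoring part motions from different source clips into one new routine. This is a minimal illustration only; the body-part list, the per-frame motion-energy and onset-strength features, and the Pearson-correlation score are assumptions made for the example, not the representations or measures used in the paper.

import numpy as np

BODY_PARTS = ["head", "torso", "left_arm", "right_arm", "left_leg", "right_leg"]

def expressiveness(motion_energy, music_feature):
    # Absolute Pearson correlation between a body part's per-frame motion
    # energy and a per-frame music feature (e.g., onset strength).
    if motion_energy.std() == 0 or music_feature.std() == 0:
        return 0.0
    return abs(np.corrcoef(motion_energy, music_feature)[0, 1])

def recombine(database, music_feature):
    # For each body part, pick the source clip whose movement correlates
    # best with the music, then assemble the chosen parts into one motion.
    new_motion = {}
    for part in BODY_PARTS:
        clip_id, _energy = max(
            database[part],
            key=lambda clip: expressiveness(clip[1], music_feature),
        )
        new_motion[part] = clip_id
    return new_motion

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = 240                 # e.g., an 8-second segment at 30 fps
    music = rng.random(frames)   # stand-in for an onset-strength curve
    # database[part] -> list of (clip_id, per-frame motion energy) pairs
    database = {p: [(i, rng.random(frames)) for i in range(5)] for p in BODY_PARTS}
    print(recombine(database, music))

A fuller version would run this selection per musical segment and blend the transitions between segments; the point here is only to show where the music-motion correlation analysis and the body-part recombination fit together.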

Acknowledgements

This work was supported by a National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIP) (No. NRF-2015R1A2A1A10055798).

Author information

Corresponding author

Correspondence to Jaeheung Park.

About this article

Cite this article

Lee, M., Lee, K., Lee, M. et al. Dance motion generation by recombination of body parts from motion source. Intel Serv Robotics 11, 139–148 (2018). https://doi.org/10.1007/s11370-017-0241-x
