Research article · DOI: 10.1145/3561975.3562951 · MIG Conference Proceedings

A new framework for the evaluation of locomotive motion datasets through motion matching techniques

Published: 03 November 2022

Abstract

Analyzing motion data is a critical step when building meaningful locomotive motion datasets. This can be done by labeling and inspecting motion capture data, by planning a motion capture session, or by carefully selecting locomotion clips from a public dataset. These analyses, however, lack a clear definition of coverage, which makes it harder to diagnose problems such as a virtual character being unable to perform an action or to move at a given speed. The issue is compounded by the large amount of information in motion capture data, which is difficult to interpret directly. This work provides a visualization and an optimization method to streamline the process of crafting locomotive motion datasets. It offers a more grounded approach to locomotive motion analysis by computing several quality metrics: coverage in terms of both linear and angular speeds, frame use frequency in each animation clip, deviation from the planned path, number of transitions, number of used vs. unused animations, and transition cost.
By using these metrics as a means of comparison between motion datasets, our approach provides a less subjective alternative for the modification and analysis of motion datasets, while improving interpretability.
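To illustrate the kind of metrics the abstract describes, the sketch below computes two of them for a single locomotion clip: speed coverage (a 2D histogram over linear and angular root speeds) and frame use frequency. This is a minimal sketch under assumed conventions (root trajectory given as ground-plane positions plus a yaw angle per frame), not the paper's actual implementation; all function and parameter names are hypothetical.

```python
import numpy as np

def speed_coverage(root_positions, root_yaws, dt, lin_bins, ang_bins):
    """2D histogram of per-frame (linear, angular) root speeds.

    root_positions: (N, 2) array of root positions on the ground plane.
    root_yaws:      (N,) array of root yaw angles in radians.
    dt:             seconds per frame.
    Bins with nonzero counts approximate the speeds the clip covers.
    """
    lin_speed = np.linalg.norm(np.diff(root_positions, axis=0), axis=1) / dt
    # unwrap avoids spurious spikes when yaw crosses the -pi/pi boundary
    ang_speed = np.abs(np.diff(np.unwrap(root_yaws))) / dt
    hist, _, _ = np.histogram2d(lin_speed, ang_speed,
                                bins=[lin_bins, ang_bins])
    return hist

def frame_use_frequency(selected_frames, num_frames):
    """Fraction of matcher queries that landed on each frame of a clip.

    selected_frames: frame indices chosen by a motion matcher in a test run.
    Frames with frequency 0 are candidates for pruning from the dataset.
    """
    counts = np.bincount(selected_frames, minlength=num_frames)
    return counts / max(counts.sum(), 1)
```

Empty bins in the coverage histogram point at speeds the dataset cannot serve, while never-selected frames indicate redundant data; both are the kinds of gaps the framework is meant to surface.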



Published In
      MIG '22: Proceedings of the 15th ACM SIGGRAPH Conference on Motion, Interaction and Games
      November 2022
      109 pages
      ISBN:9781450398886
      DOI:10.1145/3561975
      © 2022 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of a national government. As such, the Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. animation
      2. data analysis
      3. data visualization
      4. feature extraction
      5. motion capture data

      Qualifiers

      • Research-article
      • Research
      • Refereed limited

      Funding Sources

• European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement

      Conference

      MIG '22

      Acceptance Rates

Overall Acceptance Rate: 9 of 9 submissions, 100%

      Article Metrics

      • Total Citations: 0
      • Total Downloads: 144
      • Downloads (last 12 months): 33
      • Downloads (last 6 weeks): 2
      Reflects downloads up to 11 Dec 2024
