DOI: 10.1145/3009977.3010045 (ICVGIP conference proceedings)

A framework to assess Sun salutation videos

Published: 18 December 2016

Abstract

Many exercises are repetitive in nature and must be performed with precision to derive maximum benefit. Sun Salutation, or Surya Namaskar, is one of the oldest known yoga practices. It is a sequence of ten actions, or 'asanas', in which the actions are synchronized with breathing and each posture and its transition should be performed with minimal jerks. It is therefore important that this practice be performed with grace and consistency. In this context, grace is the ability of a person to perform an exercise smoothly, i.e., without sudden movements or jerks during posture transitions, while consistency measures the repeatability of the exercise across cycles. We propose an algorithm that assesses how well a person practices Sun Salutation in terms of grace and consistency. Our approach trains an individual HMM for each asana using STIP features [11], then automatically segments and labels the entire Sun Salutation sequence using a concatenated HMM. The metrics of grace and consistency are then defined in terms of posture transition times. The assessments made by our system are compared with those of a yoga trainer to measure the system's accuracy. We also introduce a dataset of Sun Salutation videos comprising 30 sequences of perfect Sun Salutation performed by seven experts, and we use this dataset to train our system. While Sun Salutation can be judged on multiple parameters, we focus on judging grace and consistency.
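The segmentation step described in the abstract can be sketched in a heavily simplified form: each asana is reduced to a single HMM state with a 1-D Gaussian emission, the per-asana models are concatenated into one left-to-right transition matrix, and Viterbi decoding recovers the state path, whose change points give the posture transition times. All numbers, names, and the one-state-per-asana reduction are illustrative assumptions, not the paper's actual models (which use multi-state HMMs over STIP features).

```python
import numpy as np

def viterbi(log_emit, log_trans, log_start):
    """Most likely state path for an HMM; log_emit has shape (T, S)."""
    T, S = log_emit.shape
    delta = np.full((T, S), -np.inf)      # best log-score ending in each state
    psi = np.zeros((T, S), dtype=int)     # backpointers
    delta[0] = log_start + log_emit[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_trans   # (S, S): prev -> cur
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_emit[t]
    path = np.zeros(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path

# Toy "concatenated HMM": 3 asanas, one state each, 1-D Gaussian emissions.
means = np.array([0.0, 5.0, 10.0])
eps = 1e-12                               # forbid backward transitions
log_trans = np.log(np.array([[0.9, 0.1, eps],
                             [eps, 0.9, 0.1],
                             [eps, eps, 1.0 - 2 * eps]]))
log_start = np.log(np.array([1.0 - 2 * eps, eps, eps]))

# Synthetic feature sequence: 20 frames per asana plus noise.
rng = np.random.default_rng(0)
obs = np.concatenate([rng.normal(m, 0.5, 20) for m in means])
# Gaussian log-likelihood up to a constant (same sigma for all states).
log_emit = -0.5 * ((obs[:, None] - means[None, :]) / 0.5) ** 2

path = viterbi(log_emit, log_trans, log_start)
boundaries = np.flatnonzero(np.diff(path)) + 1   # posture transition frames
print(boundaries.tolist())
```

Once transition frames are recovered per cycle, a consistency score can compare the resulting per-asana durations across cycles, and a grace score can look at motion smoothness around each boundary, as the paper does via transition times.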

References

[1] E. Z. Borzeshi, O. P. Concha, R. Y. Da Xu, and M. Piccardi. Joint action segmentation and classification by an extended hidden Markov model. IEEE Signal Processing Letters, 20(12):1207--1210, 2013.
[2] I. Cohen, A. Garg, T. S. Huang, et al. Emotion recognition from facial expressions using multilevel HMM. In NIPS, volume 2. Citeseer, 2000.
[3] O. P. Concha, R. Y. Da Xu, Z. Moghaddam, and M. Piccardi. HMM-MIO: An enhanced hidden Markov model for action recognition. In CVPR 2011 Workshops, pages 62--69. IEEE, 2011.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR 2005, volume 1, pages 886--893. IEEE, 2005.
[5] O. Duchenne, I. Laptev, J. Sivic, F. Bach, and J. Ponce. Automatic annotation of human actions in video. In ICCV 2009, pages 1491--1498. IEEE, 2009.
[6] A. A. Efros, A. C. Berg, G. Mori, and J. Malik. Recognizing action at a distance. In ICCV, pages 726--733. IEEE Computer Society, 2003.
[7] G. D. Forney. The Viterbi algorithm. Proceedings of the IEEE, 61(3):268--278, 1973.
[8] A. Gilbert, J. Illingworth, and R. Bowden. Fast realistic multi-action recognition using mined dense spatio-temporal features. In ICCV 2009, pages 925--931. IEEE, 2009.
[9] A. S. Gordon. Automated video assessment of human performance. In Proceedings of AI-ED, pages 16--19, 1995.
[10] M. Jug, J. Perš, B. Dežman, and S. Kovačič. Trajectory based assessment of coordinated human activity. In ICCV, pages 534--543. Springer, 2003.
[11] I. Laptev. On space-time interest points. IJCV, 64(2--3):107--123, 2005.
[12] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld. Learning realistic human actions from movies. In CVPR, pages 1--8. IEEE, 2008.
[13] J. H. Martin and D. Jurafsky. Speech and Language Processing. International Edition, 710, 2000.
[14] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. IJCV, 79(3):299--318, 2008.
[15] S. Omkar. Surya namaskaar for holistic well being: A comprehensive review of surya namaskaar. Journal of Yoga & Physical Therapy, 2012.
[16] D. Ozkan, S. Scherer, and L.-P. Morency. Step-wise emotion recognition using concatenated-HMM. In Proceedings of the 14th ACM ICMI, pages 477--484. ACM, 2012.
[17] M. Perše, M. Kristan, J. Perš, and S. Kovačič. Automatic evaluation of organized basketball activity using Bayesian networks. Citeseer, 2007.
[18] H. Pirsiavash, C. Vondrick, and A. Torralba. Assessing the quality of actions. In ECCV, pages 556--571. Springer, 2014.
[19] R. Poppe. A survey on vision-based human action recognition. Image and Vision Computing, 28(6):976--990, 2010.
[20] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In ICPR 2004, volume 3, pages 32--36. IEEE Computer Society, 2004.
[21] V. K. Singh and R. Nevatia. Action recognition in cluttered dynamic scenes using pose-specific part models. In ICCV 2011, pages 113--120. IEEE, 2011.
[22] V. Venkataraman, I. Vlachos, and P. Turaga. Dynamical regularity for action analysis. In BMVC, 2015.
[23] J. Wang, Z. Liu, Y. Wu, and J. Yuan. Mining actionlet ensemble for action recognition with depth cameras. In CVPR, pages 1290--1297. IEEE Computer Society, 2012.
[24] J. Yamato, J. Ohya, and K. Ishii. Recognizing human action in time-sequential images using hidden Markov model. In CVPR, pages 379--385. IEEE, 1992.



Published In

ICVGIP '16: Proceedings of the Tenth Indian Conference on Computer Vision, Graphics and Image Processing
December 2016
743 pages
ISBN:9781450347532
DOI:10.1145/3009977

Sponsors

  • Google Inc.
  • Qualcomm Inc.
  • Tata Consultancy Services
  • NVIDIA
  • The MathWorks, Inc.
  • Microsoft Research

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. HMM
  2. STIP
  3. consistency
  4. grace
  5. sun salutation

Qualifiers

  • Research-article

Conference

ICVGIP '16

Acceptance Rates

ICVGIP '16 Paper Acceptance Rate: 95 of 286 submissions, 33%; Overall Acceptance Rate: 95 of 286 submissions, 33%

Article Metrics

  • Downloads (last 12 months): 5
  • Downloads (last 6 weeks): 0
Reflects downloads up to 22 Dec 2024


Cited By

  • (2023) Identifying Incorrect Postures While Performing Sun Salutation Using MoveNet. Communication and Intelligent Systems, pp. 575--587. DOI: 10.1007/978-981-99-2100-3_45. Online publication date: 25-Jul-2023.
  • (2021) Action Quality Assessment Using Siamese Network-Based Deep Metric Learning. IEEE Transactions on Circuits and Systems for Video Technology, 31(6):2260--2273. DOI: 10.1109/TCSVT.2020.3017727. Online publication date: Jun-2021.
  • (2021) YogaHelp: Leveraging Motion Sensors for Learning Correct Execution of Yoga With Feedback. IEEE Transactions on Artificial Intelligence, 2(4):362--371. DOI: 10.1109/TAI.2021.3096175. Online publication date: Aug-2021.
  • (2021) Classification of Yoga Asanas from a Single Image by Learning the 3D View of Human Poses. Digital Techniques for Heritage Presentation and Preservation, pp. 37--49. DOI: 10.1007/978-3-030-57907-4_3. Online publication date: 18-Mar-2021.
  • (2020) A Comparison of Four Approaches to Evaluate the Sit-to-Stand Movement. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 28(6):1317--1324. DOI: 10.1109/TNSRE.2020.2987357. Online publication date: Jun-2020.
  • (2020) A Fusion-Based Approach to Identify the Phases of the Sit-to-Stand Test in Older People. 2020 National Conference on Communications (NCC), pp. 1--6. DOI: 10.1109/NCC48643.2020.9056092. Online publication date: Feb-2020.
  • (2019) An Unsupervised Sequence-to-Sequence Autoencoder Based Human Action Scoring Model. 2019 IEEE Global Conference on Signal and Information Processing (GlobalSIP), pp. 1--5. DOI: 10.1109/GlobalSIP45357.2019.8969424. Online publication date: Nov-2019.
  • (2018) Leveraging Information from Imperfect Examples. Proceedings of the 11th Indian Conference on Computer Vision, Graphics and Image Processing, pp. 1--8. DOI: 10.1145/3293353.3293416. Online publication date: 18-Dec-2018.
  • (2018) Unsupervised Temporal Segmentation of Human Action Using Community Detection. 2018 25th IEEE International Conference on Image Processing (ICIP), pp. 1892--1896. DOI: 10.1109/ICIP.2018.8451237. Online publication date: Oct-2018.
  • (2018) Detecting Missed and Anomalous Action Segments Using Approximate String Matching Algorithm. Computer Vision, Pattern Recognition, Image Processing, and Graphics, pp. 101--111. DOI: 10.1007/978-981-13-0020-2_10. Online publication date: 26-Apr-2018.
