DOI: 10.1145/2542050.2542071

Principal direction analysis-based real-time 3D human pose reconstruction from a single depth image

Published: 05 December 2013

Abstract

Real-time human pose estimation is a challenging problem in computer vision. In this paper, we present a novel approach to recover a 3D human pose in real time from a single depth human silhouette using Principal Direction Analysis (PDA) on each recognized body part. In our work, the human body parts are first recognized from a depth human body silhouette via trained Random Forests (RFs). On each recognized body part, which is represented as a 3D point cloud, PDA is applied to estimate the body part's principal direction. Finally, a 3D human pose is recovered by mapping the principal direction vector of each body part onto a 3D human body model composed of a set of super-quadrics linked by kinematic chains. In our experiments, we have performed quantitative and qualitative evaluations of the proposed 3D human pose reconstruction methodology. The evaluation results show that the proposed approach performs reliably on a sequence of unconstrained poses and achieves an average reconstruction error of 7.46 degrees over a few key joint angles. Our 3D pose recovery methodology should be applicable to many areas such as human-computer interaction and human activity recognition.
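
To make the PDA step described in the abstract concrete, the sketch below estimates the principal direction of one body part's 3D point cloud as the dominant eigenvector of the point cloud's covariance matrix. This is an illustrative reconstruction, not code from the paper; the function name, the NumPy dependency, and the synthetic test data are assumptions.

import numpy as np

def principal_direction(points):
    # points: (N, 3) array of 3D coordinates belonging to one recognized body part
    centered = points - points.mean(axis=0)        # remove the centroid
    cov = centered.T @ centered / len(points)      # 3x3 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigh: eigen-decomposition of the symmetric matrix
    direction = eigvecs[:, np.argmax(eigvals)]     # eigenvector of the largest eigenvalue
    return direction / np.linalg.norm(direction)   # unit-length principal direction

# Hypothetical usage: synthetic points scattered along a roughly vertical "limb"
rng = np.random.default_rng(0)
limb = rng.normal(scale=[0.02, 0.02, 0.3], size=(500, 3))
print(principal_direction(limb))                   # approximately [0, 0, 1], up to sign

In a full pipeline along the lines of the paper, a direction estimated this way for each recognized body part would then be mapped onto the corresponding segment of a kinematic body model.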


Cited By

  • (2021) A Review: Point Cloud-Based 3D Human Joints Estimation. Sensors, 21(5):1684. DOI: 10.3390/s21051684. Online publication date: 1-Mar-2021.



Published In

SoICT '13: Proceedings of the 4th Symposium on Information and Communication Technology
December 2013
345 pages
ISBN:9781450324540
DOI:10.1145/2542050
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Sponsors

  • SOICT: School of Information and Communication Technology - HUST
  • NAFOSTED: The National Foundation for Science and Technology Development
  • ACM Vietnam Chapter
  • Danang Univ. of Technol.: Danang University of Technology

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 05 December 2013


Author Tags

  1. body part recognition
  2. depth image
  3. human pose estimation
  4. principal direction analysis

Qualifiers

  • Research-article

Conference

SoICT '13
Sponsor:
  • SOICT
  • NAFOSTED
  • ACM Vietnam Chapter
  • Danang Univ. of Technol.

Acceptance Rates

SoICT '13 Paper Acceptance Rate: 40 of 80 submissions, 50%
Overall Acceptance Rate: 147 of 318 submissions, 46%


Article Metrics

  • Downloads (last 12 months): 6
  • Downloads (last 6 weeks): 0
Reflects downloads up to 11 Dec 2024

