
Presentation Skills Estimation Based on Video and Kinect Data Analysis

Published: 12 November 2014
DOI: 10.1145/2666633.2666641

Abstract

This paper identifies, by means of video and Kinect data, a set of predictors that estimate the presentation skills of 448 individual students. Two evaluation criteria were predicted: eye contact, and posture and body language. Machine-learning evaluations produced models that predicted the performance level (good or poor) of the presenters with 68% and 63% correctly classified instances for the eye contact and the posture and body language criteria, respectively. Furthermore, the results suggest that certain features, such as arm movement and its smoothness, are highly significant in predicting how well developed a student's presentation skills are. The paper closes with conclusions and ideas for future work.
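The abstract describes the pipeline only at a high level, so the sketch below is a plausible reconstruction rather than the authors' method: derive arm-movement and smoothness descriptors from Kinect skeleton trajectories, then train a binary good/poor classifier. The jerk-based smoothness measure, the helper names (movement_features, presentation_features), the 30 fps frame rate, and the use of a scikit-learn random forest are all assumptions not stated in the abstract.

    # Hypothetical reconstruction; not the authors' published code.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    FPS = 30.0  # Kinect skeleton streams run at roughly 30 frames per second

    def movement_features(joint_xyz, dt=1.0 / FPS):
        """Summarize one joint's (T, 3) trajectory, e.g. a hand.

        Returns total path length (how much the arm moves) and mean squared
        jerk (a common smoothness proxy: smoother motion has lower jerk).
        """
        vel = np.diff(joint_xyz, axis=0) / dt    # (T-1, 3) velocity
        acc = np.diff(vel, axis=0) / dt          # (T-2, 3) acceleration
        jerk = np.diff(acc, axis=0) / dt         # (T-3, 3) jerk
        path_length = np.linalg.norm(np.diff(joint_xyz, axis=0), axis=1).sum()
        mean_sq_jerk = (jerk ** 2).sum(axis=1).mean()
        return np.array([path_length, mean_sq_jerk])

    def presentation_features(left_hand, right_hand):
        """One feature vector per presenter: movement descriptors of both arms."""
        return np.concatenate([movement_features(left_hand),
                               movement_features(right_hand)])

    # Synthetic stand-in data: random-walk "hand trajectories" and random
    # good (1) / poor (0) labels, just to show the pipeline end to end.
    rng = np.random.default_rng(0)
    X = np.stack([
        presentation_features(
            rng.normal(size=(300, 3)).cumsum(axis=0) * 0.01,
            rng.normal(size=(300, 3)).cumsum(axis=0) * 0.01,
        )
        for _ in range(60)
    ])
    y = rng.integers(0, 2, size=60)

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

On real data, the random-walk trajectories would be replaced by per-presenter Kinect joint streams and the random labels by rubric scores, so that cross-validated accuracy becomes comparable to the 68% and 63% correctly classified instances reported in the abstract.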



    Published In

    MLA '14: Proceedings of the 2014 ACM workshop on Multimodal Learning Analytics Workshop and Grand Challenge
    November 2014
    68 pages
    ISBN: 9781450304887
    DOI: 10.1145/2666633

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. multimodal
    2. presentation skills
    3. video features

    Qualifiers

    • Research-article

    Conference

    ICMI '14

    Acceptance Rates

    MLA '14 Paper Acceptance Rate: 3 of 3 submissions, 100%
    Overall Acceptance Rate: 3 of 3 submissions, 100%

    Article Metrics

    • Downloads (last 12 months): 39
    • Downloads (last 6 weeks): 4

    Reflects downloads up to 18 Jan 2025

    Cited By
    • (2024) Your body tells how you engage in collaboration: Machine-detected body movements as indicators of engagement in collaborative math knowledge building. British Journal of Educational Technology, 55(5), 1950-1973. DOI: 10.1111/bjet.13473. Online publication date: 10-May-2024.
    • (2023) Tutor In-sight: Guiding and Visualizing Students' Attention with Mixed Reality Avatar Presentation Tools. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3544548.3581069. Online publication date: 19-Apr-2023.
    • (2023) A Human-Centered Review of Algorithms in Decision-Making in Higher Education. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1-15. DOI: 10.1145/3544548.3580658. Online publication date: 19-Apr-2023.
    • (2023) SpeechMirror: A Multimodal Visual Analytics System for Personalized Reflection of Online Public Speaking Effectiveness. IEEE Transactions on Visualization and Computer Graphics, 1-11. DOI: 10.1109/TVCG.2023.3326932. Online publication date: 2023.
    • (2023) An automated framework to evaluate soft skills using posture and disfluency detection. Machine Vision and Applications, 34(5). DOI: 10.1007/s00138-023-01431-0. Online publication date: 1-Sep-2023.
    • (2022) Predicting Presentation Skill of a Speaker Using Automatic Speaker and Audience Measurement. IEEE Transactions on Learning Technologies, 15(3), 350-363. DOI: 10.1109/TLT.2022.3171601. Online publication date: 1-Jun-2022.
    • (2022) BERT-Based Automatic Scoring Model for Speech-Oriented Text Modality. 2022 IEEE 2nd International Conference on Electronic Technology, Communication and Information (ICETCI), 100-105. DOI: 10.1109/ICETCI55101.2022.9832254. Online publication date: 27-May-2022.
    • (2022) Bridging the Gap Between Informal Learning Pedagogy and Multimodal Learning Analytics. The Multimodal Learning Analytics Handbook, 159-179. DOI: 10.1007/978-3-031-08076-0_7. Online publication date: 9-Oct-2022.
    • (2021) EFAR-MMLA: An Evaluation Framework to Assess and Report Generalizability of Machine Learning Models in MMLA. Sensors, 21(8), 2863. DOI: 10.3390/s21082863. Online publication date: 19-Apr-2021.
    • (2021) A Learning Analytics Framework to Analyze Corporal Postures in Students Presentations. Sensors, 21(4), 1525. DOI: 10.3390/s21041525. Online publication date: 22-Feb-2021.