
How Presenters Perceive and React to Audience Flow Prediction In-situ: An Explorative Study of Live Online Lectures

Published: 07 November 2019

Abstract

The degree and quality of instructor-student interaction are crucial for students' engagement, retention, and learning outcomes. However, such interaction is limited in live online lectures, where instructors no longer have access to important cues such as raised hands or facial expressions at the time of teaching. As a result, instructors cannot fully understand students' learning progress. This paper presents an explorative study investigating how presenters perceive and react to audience flow prediction while giving live-streamed lectures, a question that has not yet been examined. The study was conducted with an experimental system that predicts the audience's psychological states (e.g., anxiety, flow, boredom) through real-time facial expression analysis and provides aggregated views illustrating the flow experience of the whole group. Through an evaluation with 8 online lectures (N_instructors=8, N_learners=21), we found that such real-time flow prediction and visualization can provide value to presenters. This paper contributes a set of useful findings regarding presenters' perceptions of and reactions to such flow prediction, as well as lessons learned in the study, which can inform the design of future AI-powered systems that assist people in delivering live online presentations.
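The abstract describes two steps: predicting each learner's psychological state and aggregating those predictions into a group-level view for the presenter. A minimal sketch of that aggregation idea is below. It is purely illustrative, not the paper's actual model: the `classify_state` thresholds and the challenge/skill inputs are hypothetical stand-ins for whatever the system derives from facial expression analysis.

```python
from collections import Counter

def classify_state(challenge: float, skill: float) -> str:
    """Toy flow-channel rule (hypothetical thresholds): a learner whose
    perceived challenge roughly matches their skill is in flow; a large
    challenge surplus suggests anxiety, a deficit suggests boredom."""
    if abs(challenge - skill) < 0.2:
        return "flow"
    return "anxiety" if challenge > skill else "boredom"

def aggregate_group(states: list[str]) -> dict[str, float]:
    """Pool per-learner predictions into the proportions a presenter-facing
    aggregated view could display for the whole group."""
    counts = Counter(states)
    total = len(states)
    return {s: counts.get(s, 0) / total for s in ("flow", "anxiety", "boredom")}

# Four hypothetical learners as (challenge, skill) estimates.
learners = [(0.8, 0.7), (0.9, 0.4), (0.3, 0.8), (0.6, 0.55)]
states = [classify_state(c, s) for c, s in learners]
print(aggregate_group(states))  # {'flow': 0.5, 'anxiety': 0.25, 'boredom': 0.25}
```

The real system would replace the toy rule with a trained classifier over facial expression features and update these proportions continuously during the live lecture.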




      Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 3, Issue CSCW
November 2019, 5026 pages
EISSN: 2573-0142
DOI: 10.1145/3371885

      Publisher

Association for Computing Machinery, New York, NY, United States


      Author Tags

      1. facial expression analysis
      2. flow
      3. live online lectures

      Qualifiers

      • Research-article

      Funding Sources

      • National Natural Science Foundation of China
      • National Key R&D Program of China
      • Key Research Program of Frontier Sciences CAS


      Cited By

• (2024) EduLive: Re-Creating Cues for Instructor-Learners Interaction in Educational Live Streams with Learners' Transcript-Based Annotations. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-33. https://doi.org/10.1145/3686960. Online publication date: 8-Nov-2024.
• (2024) On Task and in Sync: Examining the Relationship between Gaze Synchrony and Self-reported Attention During Video Lecture Learning. Proceedings of the ACM on Human-Computer Interaction 8(ETRA), 1-18. https://doi.org/10.1145/3655604. Online publication date: 28-May-2024.
• (2024) Investigating the Effects of Real-time Student Monitoring Interface on Instructors' Monitoring Practices in Online Teaching. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-11. https://doi.org/10.1145/3613904.3642845. Online publication date: 11-May-2024.
• (2024) Decoding Group Emotional Dynamics in a Web-Based Collaborative Environment: A Novel Framework Utilizing Multi-Person Facial Expression Recognition. International Journal of Human-Computer Interaction, 1-19. https://doi.org/10.1080/10447318.2024.2338614. Online publication date: 17-Apr-2024.
• (2024) The Impact of Video Meeting Systems on Psychological User States. International Journal of Human-Computer Studies 182(C). https://doi.org/10.1016/j.ijhcs.2023.103178. Online publication date: 1-Feb-2024.
• (2023) Integrating Gaze and Mouse Via Joint Cross-Attention Fusion Net for Students' Activity Recognition in E-learning. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3), 1-35. https://doi.org/10.1145/3610876. Online publication date: 27-Sep-2023.
• (2023) ML-Based Teaching Systems: A Conceptual Framework. Proceedings of the ACM on Human-Computer Interaction 7(CSCW2), 1-25. https://doi.org/10.1145/3610197. Online publication date: 4-Oct-2023.
• (2023) Behind the Screens: Exploring Eye Movement Visualization to Optimize Online Teaching and Learning. Proceedings of Mensch und Computer 2023, 67-80. https://doi.org/10.1145/3603555.3603560. Online publication date: 3-Sep-2023.
• (2023) WiFiTuned: Monitoring Engagement in Online Participation by Harmonizing WiFi and Audio. Proceedings of the 25th International Conference on Multimodal Interaction, 670-678. https://doi.org/10.1145/3577190.3614108. Online publication date: 9-Oct-2023.
• (2023) Modeling Adaptive Expression of Robot Learning Engagement and Exploring Its Effects on Human Teachers. ACM Transactions on Computer-Human Interaction 30(5), 1-48. https://doi.org/10.1145/3571813. Online publication date: 23-Sep-2023.
