
Carry-Forward Effect: Early scaffolding learning processes

Published: 23 June 2023

Abstract

Multimodal data enables us to capture students’ cognitive and affective states, providing a holistic understanding of learning processes across a wide variety of contexts. With sensing technology, we can capture learners’ states in near real time and support learning as it unfolds. Moreover, multimodal data allows us to obtain early predictions of learning performance and to support learning in a timely manner. In this contribution, we introduce the notion of the “carry-forward effect”: an inferential and predictive modelling approach that uses multimodal measurements detrimental to learning performance to generate timely feedback suggestions. The carry-forward effect offers a way to prioritize conflicting feedback suggestions in a scaffolding tool based on multimodal data. We provide empirical evidence for the carry-forward effect in two different learning scenarios: debugging and game-based learning.
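The abstract describes the approach only at a high level, so the following toy example is a minimal sketch of how such a prioritization could work, not the authors’ implementation: early-session measurements that have historically been negatively associated with final performance are treated as “carrying forward”, and feedback targeting the strongest of those signals is surfaced first. All feature names, data values, and the urgency rule below are hypothetical.

```python
"""Minimal, hypothetical sketch of carry-forward-style feedback
prioritization. Not the paper's method: all features, data, and
the urgency rule are illustrative assumptions."""

from statistics import mean


def pearson(xs, ys):
    """Plain Pearson correlation, kept dependency-free."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Hypothetical historical sessions: early-window multimodal
# measurements plus the final learning performance score.
history = [
    {"cognitive_load": 0.9, "stress": 0.7, "gaze_dispersion": 0.4, "score": 0.35},
    {"cognitive_load": 0.3, "stress": 0.2, "gaze_dispersion": 0.6, "score": 0.80},
    {"cognitive_load": 0.7, "stress": 0.6, "gaze_dispersion": 0.5, "score": 0.50},
    {"cognitive_load": 0.2, "stress": 0.3, "gaze_dispersion": 0.7, "score": 0.90},
]

features = ["cognitive_load", "stress", "gaze_dispersion"]
scores = [s["score"] for s in history]

# A measurement "carries forward" when its early value is
# negatively correlated with final performance.
carry_forward = {f: pearson([s[f] for s in history], scores) for f in features}


def prioritize(live_measurements, suggestions):
    """Order conflicting feedback: most detrimental signal first."""

    def urgency(feature):
        r = carry_forward[feature]
        # Only negatively associated signals count as detrimental.
        return -r * live_measurements[feature] if r < 0 else 0.0

    return sorted(suggestions, key=lambda s: urgency(s["feature"]), reverse=True)


live = {"cognitive_load": 0.85, "stress": 0.40, "gaze_dispersion": 0.55}
suggestions = [
    {"feature": "cognitive_load", "feedback": "Break the task into smaller steps."},
    {"feature": "stress", "feedback": "Suggest a short pause."},
    {"feature": "gaze_dispersion", "feedback": "Highlight the relevant region."},
]

for s in prioritize(live, suggestions):
    print(s["feedback"])
```

With the toy data above, feedback targeting cognitive load is ranked first because its early values show the strongest negative association with final scores; ranking by (negative correlation × current signal strength) is only one plausible rule, and the paper’s actual modelling may differ.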



Published In

LDT '23: Proceedings of the 2023 Symposium on Learning, Design and Technology
June 2023
128 pages
ISBN: 9798400707360
DOI: 10.1145/3594781
This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. AI agent
  2. Debugging
  3. Educational Technologies
  4. Eye-tracking
  5. Learning Analytics
  6. Motion-Based Games
  7. Multi-modal data
  8. Sensors

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

LDT '23
LDT '23: Learning, Design and Technology
June 23, 2023
Evanston, IL, USA
