Utilizing Interactive Surfaces to Enhance Learning, Collaboration and Engagement: Insights from Learners’ Gaze and Speech
Figure 1. Examples of the posters, neurological disorders, and the limbic system.
Figure 2. Experiment screenshots: (a) teams watching the posters as they would in a museum; (b) teams playing gamified quizzes (collaborative/competitive) on the interactive display.
Figure 3. Different phases of the study.
Figure 4. A typical example showing the situations for the lower and upper limits of gaze similarity.
Figure 5. A typical example showing the computation of gaze transition similarity.
Figure 6. A typical example showing the computation of speech episodes.
Figure 7. Comparison of scores from the pretest and the first and second posttests: (a) individuals; (b) groups. All values are normalized between 0 and 1. The points show the mean values across all participants, and the blue bars show the 95% confidence intervals.
Figure 8. Scatter plots of (a) the first and second posttest scores, (b) the game score and the first posttest score, and (c) the game score and the second posttest score. In all plots, the blue line shows the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 9. Scatter plots of the individual transitions (a) image to text and (b) text to image against the score in the first posttest. In all plots, the blue line shows the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 10. Scatter plots of (a) the transition similarity and the score in the first posttest; (b) the same graph, with colors showing the different speech segments (speech vs. no speech). In all plots, the lines show the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 11. Scatter plots of (a) the gaze similarity and the score in the first posttest; (b) the same graph, with colors showing the different speech segments (speech vs. no speech). In all plots, the lines show the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 12. Scatter plots of (a) the gaze similarity and the score in the second posttest; (b) the same graph, with colors showing the different speech segments (speech vs. no speech). In all plots, the lines show the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 13. Mean time to answer each question in the game. The vertical bars show the 95% confidence interval.
Figure 14. Scatter plots of (a) the final game score and the number of power ups used, and (b) the collaborative gaze similarity during the game phase and the number of power ups used. In all plots, the blue line shows the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Figure 15. Scatter plots of (a) the final gaze similarity during the poster and game phases, and (b) the collaborative gaze similarity during the game phase and the transition similarity during the poster phase. In all plots, the blue line shows the linear model for the y-axis variable given the x-axis variable, and the grey area shows the 95% confidence interval.
Abstract
1. Introduction
- We present the implications of the first (to the best of our knowledge) collaborative eye-tracking study in an informal learning setting.
- We present a method to analyze and compare the collaborative interaction of peers across different tasks (learning and gaming) and mediating interfaces (physical and digital).
- We provide an empirical extension of the theoretical framework of collaborative learning mechanisms.
2. Theoretical Background in Informal Learning and Collaborative Learning
3. Related Work and Research Questions
3.1. Interactive Displays in Informal Educational Settings
3.2. Eye-Tracking in Education
3.3. Dual Eye-Tracking for Communication and Referencing
3.4. Research Questions
4. Methodology
4.1. Technology
- Hint: this showed the participants a clue toward the correct answer.
- Double XP: this doubled the points awarded for one question in the team/player's score.
- Pause time: this stopped the game-play for 15 s to give the players time to think and answer.
- for every correct answer, the team/participant got 100 XP;
- with every correct answer, one of the three power ups increased;
- the three power ups were: pause time, double XP, and hint;
- for each question, the team/participant had 30 s.
Learning Resources: Posters
- Spatial split-attention principle—previous research has shown the beneficial effects of integrating pictures with explanatory text: the text that refers to the picture is typically split up into smaller segments so that the text segment that refers to a particular part of the figure can be linked to this particular part or be included in the picture (for a meta-analysis, see [68]).
- Signaling—to help the students understand the relative positions, the different brain regions were annotated in the posters with a short description of their functionality. Research shows that signaling enhances learners’ appreciation of the learning material [71] and their learning (e.g., [72,73]).
4.2. Research Design
Dual Eye-Tracking
4.3. Participants and Procedure
4.4. Measurements
4.5. Data Analysis
5. Results
5.1. Descriptive Statistics
5.2. Order Effect
5.3. Learning Gains
- The scores in the first and the second posttests are correlated (r (34) = 0.69, p < .0001). The participants who score high in the first posttest also score high in the second posttest (Figure 8a).
- The score in the game is correlated to the score in the first posttest (r (34) = 0.42, p = .01). The participants who score high in the first posttest also perform well in the game (Figure 8b).
- The score in the game is correlated to the score in the second posttest (r (34) = 0.34, p = .04). The participants who perform well in the game also score high in the second posttest (Figure 8c).
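Correlations like those reported above come from a standard Pearson test. The helper below is a minimal stdlib sketch of that computation; the example vectors are synthetic, not the study's data.

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic, illustrative scores (normalized 0-1, as in the paper's tests)
first_posttest = [0.2, 0.4, 0.5, 0.7, 0.8, 1.0]
second_posttest = [0.1, 0.5, 0.6, 0.6, 0.9, 1.0]
print(round(pearson_r(first_posttest, second_posttest), 2))
```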
5.4. Poster Phase
5.5. Game Phase
5.6. Poster Versus Game Phase
6. Discussion and Conclusions
- Using the DUET data, one can understand both the individual and collaborative learning processes. For example, we observe that individuals who correctly understood the relation between the text and the visualizations (a high number of image-to-text and text-to-image transitions) had higher learning gains. Moreover, we also observe that dyads who put effort into establishing common ground (looking at the same thing at the same time while talking about it) learned more (higher posttest and game scores).
- We also provided an analysis that combined the different tasks and showed that it is important to consider behavior in both interaction media (physical posters and digital games) to understand successful learning processes. For example, we observe that the gaze similarities in the poster and game phases and the scores (posttests and game) are correlated with each other.
- Finally, we empirically show how collaborative learning mechanisms can be understood using DUET data. For example, coordination of collaboration, joint attention, and narration can be captured using the overall gaze similarity and the similarity measures during speech episodes. Furthermore, collaborative discussion can be measured using the transition similarity during speech episodes.
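As a rough illustration of how such measures can be derived from dual eye-tracking data, the sketch below computes a gaze similarity (fraction of aligned time windows in which both peers fixate the same area of interest) and a transition similarity (overlap of the peers' normalized AOI-transition distributions). The windowing, AOI labels, and exact definitions are assumptions for illustration, not the paper's formulas.

```python
# Hedged sketch: similarity measures over two aligned AOI sequences.
# AOI labels and definitions are illustrative assumptions.
from collections import Counter

def gaze_similarity(aois_a, aois_b):
    """Fraction of aligned windows where both peers look at the same AOI."""
    paired = list(zip(aois_a, aois_b))
    return sum(a == b for a, b in paired) / len(paired)

def transition_similarity(aois_a, aois_b):
    """Overlap (histogram intersection) of normalized AOI-transition counts."""
    def dist(seq):
        trans = Counter(zip(seq, seq[1:]))
        total = sum(trans.values())
        return {t: c / total for t, c in trans.items()}
    da, db = dist(aois_a), dist(aois_b)
    return sum(min(da.get(t, 0), db.get(t, 0)) for t in set(da) | set(db))

a = ["image", "text", "image", "text", "text", "image"]
b = ["image", "text", "text", "text", "image", "image"]
print(gaze_similarity(a, b))                   # 4 of 6 windows match
print(round(transition_similarity(a, b), 2))
```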
Supplementary Materials
Author Contributions
Funding
Conflicts of Interest
References
- Dillenbourg, P.; Evans, M. Interactive tabletops in education. Int. J. Comput. Supported Collab. Learn. 2011, 6, 491–514. [Google Scholar] [CrossRef] [Green Version]
- Schäfer, A.; Holz, J.; Leonhardt, T.; Schroeder, U.; Brauner, P.; Ziefle, M. From boring to scoring—A collaborative serious game for learning and practicing mathematical logic for computer science education. Comput. Sci. Educ. 2013, 23, 87–111. [Google Scholar] [CrossRef]
- Higgins, S.; Mercier, E.; Burd, L.; Joyce-Gibbons, A. Multi-touch tables and collaborative learning. Br. J. Educ. Technol. 2012, 43, 1041–1054. [Google Scholar] [CrossRef] [Green Version]
- Higgins, S.E.; Mercier, E.; Burd, E.; Hatch, A. Multi-touch tables and the relationship with collaborative classroom pedagogies: A synthetic review. Int. J. Comput. Supported Collab. Learn. 2011, 6, 515–538. [Google Scholar] [CrossRef]
- Schneider, B.; Strait, M.; Muller, L.; Elfenbein, S.; Shaer, O.; Shen, C. Phylo-Genie: Engaging students in collaborative ‘tree-thinking’ through tabletop techniques. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 3071–3080. [Google Scholar]
- Zaharias, P.; Despina, M.; Chrysanthou, Y. Learning through multi-touch interfaces in museum exhibits: An empirical investigation. J. Educ. Technol. Soc. 2013, 16, 374–384. [Google Scholar]
- Nüssli, M.A.; Jermann, P.; Sangin, M.; Dillenbourg, P. Collaboration and abstract representations: Towards predictive models based on raw speech and eye-tracking data. In Proceedings of the 9th International Conference on Computer Supported Collaborative Learning, Rhodes, Greece, 8–13 June 2009; pp. 78–82. [Google Scholar]
- Sharma, K.; Caballero, D.; Verma, H.; Jermann, P.; Dillenbourg, P. Shaping learners’ attention in Massive Open Online Courses. Revue internationale des technologies en pédagogie universitaire. Int. J. Technol. High. Educ. 2015, 12, 52–61. [Google Scholar]
- Jermann, P.; Nüssli, M.A.; Li, W. Using dual eye-tracking to unveil coordination and expertise in collaborative Tetris. In Proceedings of the 24th BCS Interaction Specialist Group Conference, Edmonton, AB, Canada, 6–10 September 2010; pp. 36–44. [Google Scholar]
- Jermann, P.; Nüssli, M.A. Effects of sharing text selections on gaze cross-recurrence and interaction quality in a pair programming task. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work, Bellevue, WA, USA, 11–15 February 2012; pp. 1125–1134. [Google Scholar]
- Junokas, M.J.; Lindgren, R.; Kang, J.; Morphew, J.W. Enhancing multimodal learning through personalized gesture recognition. J. Comput. Assist. Learn. 2018, 34, 350–357. [Google Scholar] [CrossRef]
- Spikol, D.; Ruffaldi, E.; Dabisias, G.; Cukurova, M. Supervised machine learning in multimodal learning analytics for estimating success in project-based learning. J. Comput. Assist. Learn. 2018, 34, 366–377. [Google Scholar] [CrossRef]
- Barmaki, R.; Hughes, C.E. Embodiment analytics of practicing teachers in a virtual immersive environment. J. Comput. Assist. Learn. 2018, 34, 387–396. [Google Scholar] [CrossRef]
- Pijeira-Díaz, H.J.; Drachsler, H.; Kirschner, P.A.; Järvelä, S. Profiling sympathetic arousal in a physics course: How active are students? J. Comput. Assist. Learn. 2018, 34, 397–408. [Google Scholar] [CrossRef] [Green Version]
- Tissenbaum, M.; Berland, M.; Lyons, L. DCLM framework: Understanding collaboration in open-ended tabletop learning environments. Int. J. Comput. Supported Collab. Learn. 2017, 12, 35–64. [Google Scholar] [CrossRef]
- Shapiro, B.R.; Hall, R.P.; Owens, D.A. Developing & using interaction geography in a museum. Int. J. Comput. Supported Collab. Learn. 2017, 12, 377–399. [Google Scholar]
- Davis, P.; Horn, M.; Block, F.; Phillips, B.; Evans, E.M.; Diamond, J.; Shen, C. “Whoa! We’re going deep in the trees!” Patterns of collaboration around an interactive information visualization exhibit. Int. J. Comput. Supported Collab. Learn. 2015, 10, 53–76. [Google Scholar] [CrossRef]
- Fleck, R.; Rogers, Y.; Yuill, N.; Marshall, P.; Carr, A.; Rick, J.; Bonnett, V. Actions speak loudly with words: Unpacking collaboration around the table. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, Calgary, AB, Canada, 23–25 November 2009; pp. 189–196. [Google Scholar]
- Olsen, J.; Sharma, K.; Aleven, V.; Rummel, N. Combining Gaze, Dialogue, and Action from a Collaborative Intelligent Tutoring System to Inform Student Learning Processes; International Society of the Learning Sciences: London, UK, 2018. [Google Scholar]
- Sharma, K.; Olsen, J.K.; Aleven, V.; Rummel, N. Exploring Causality Within Collaborative Problem Solving Using Eye-Tracking. In European Conference on Technology Enhanced Learning; Springer: Leeds, UK, 2018; pp. 412–426. [Google Scholar]
- Papavlasopoulou, S.; Sharma, K.; Giannakos, M.N. How do you feel about learning to code? Investigating the effect of children’s attitudes towards coding using eye-tracking. Int. J. Child Comput. Interact. 2018, 17, 50–60. [Google Scholar] [CrossRef]
- Roschelle, J.; Teasley, S.D. The construction of shared knowledge in collaborative problem solving. In Computer Supported Collaborative Learning; Springer: Berlin/Heidelberg, Germany, 1995; pp. 69–97. [Google Scholar]
- Kirschner, P.A.; Sweller, J.; Kirschner, F.; Zambrano, J. From Cognitive Load Theory to Collaborative Cognitive Load Theory. Int. J. Comput. Supported Collab. Learn. 2018, 13, 213–233. [Google Scholar] [CrossRef] [Green Version]
- Korn, R. An analysis of differences between visitors at natural history museums and science centers. Curator Mus. J. 1995, 38, 150–160. [Google Scholar] [CrossRef]
- Dillenbourg, P. What do you mean by collaborative learning? In Collaborative Learning: Cognitive and Computational Approaches; Elsevier: Oxford, UK, 1999; pp. 1–19. [Google Scholar]
- Giannakos, M.; Sharma, K.; Martinez-Maldonado, R.; Dillenbourg, P.; Rogers, Y. Learner-computer interaction. In Proceedings of the 10th Nordic Conference on Human-Computer Interaction, Oslo, Norway, 29 September–3 October 2018; pp. 968–971. [Google Scholar]
- Giannakos, M.N.; Jones, D.; Crompton, H.; Chrisochoides, N. Designing Playful Games and Applications to Support Science Centers Learning Activities. In Proceedings of the International Conference on Universal Access in Human-Computer Interaction, Heraklion, Crete, Greece, 22–27 June 2014; pp. 561–570. [Google Scholar]
- Evans, M.A.; Rick, J. Supporting learning with interactive surfaces and spaces. In Handbook of Research on Educational Communications and Technology; Springer: New York, NY, USA, 2014; pp. 689–701. [Google Scholar]
- Antle, A.N.; Bevans, A.; Tanenbaum, J.; Seaborn, K.; Wang, S. Futura: Design for Collaborative Learning and Game Play on a Multi-Touch Digital Tabletop. In Proceedings of the Fifth International Conference on Tangible, Embedded, and Embodied Interaction, Funchal, Madeira, Portugal, 23–26 January 2011; pp. 93–100. [Google Scholar]
- Rick, J.; Rogers, Y. From DigiQuilt to DigiTile: Adapting educational technology to a multi-touch table. In Horizontal Interactive Human Computer Systems (TABLETOP 2008), 3rd ed.; IEEE: Amsterdam, The Netherlands, 2008; pp. 73–80. [Google Scholar]
- Callahan, M.H.W. Case Study of an Advanced Technology Business Incubator as a Learning Environment; 2001; Available online: https://www.elibrary.ru/item.asp?id=5296835 (accessed on 31 March 2020).
- Block, F.; Hammerman, J.; Horn, M.; Spiegel, A.; Christiansen, J.; Phillips, B.; Shen, C. Fluid grouping: Quantifying group engagement around interactive tabletop exhibits in the wild. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, Seoul, Korea, 18–23 April 2015; pp. 867–876. [Google Scholar]
- Louw, M.; Crowley, K. New ways of looking and learning in natural history museums: The use of gigapixel imaging to bring science and publics together. Curator Mus. J. 2013, 56, 87–104. [Google Scholar] [CrossRef]
- Roberts, J.; Lyons, L.; Cafaro, F.; Eydt, R. Interpreting data from within: Supporting human–data interaction in museum exhibits through perspective taking. In Proceedings of the 2014 Conference on Interaction Design and Children, Aarhus, Denmark, 17–20 June 2014; pp. 7–16. [Google Scholar]
- Hinrichs, U.; Carpendale, S. Gestures in the wild: Studying multi-touch gesture sequences on interactive tabletop exhibits. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Vancouver, BC, Canada, 7–12 May 2011; pp. 3023–3032. [Google Scholar]
- Rick, J.; Marshall, P.; Yuill, N. Beyond one-size-fits-all: How interactive tabletops support collaborative learning. In Proceedings of the 10th International Conference on Interaction Design and Children, Ann Arbor, MI, USA, 19–23 June 2011; pp. 109–117. [Google Scholar]
- Sluis, R.J.W.; Weevers, I.; Van Schijndel, C.H.G.J.; Kolos-Mazuryk, L.; Fitrianie, S.; Martens, J.B.O.S. Read-It: Five-to-seven-year-old children learn to read in a tabletop environment. In Proceedings of the 2004 Conference on Interaction Design and Children: Building a Community, College Park, Maryland, USA, 1–3 June 2004; pp. 73–80. [Google Scholar]
- Lo, L.J.; Chiang, C.D.; Liang, R.H. HexDeck: Gamification of Tangibles for Brainstorming. In Proceedings of the 5th IASDR: Consilience and Innovation in Design, Tokyo, Japan, 26–30 August 2013; pp. 3165–3175. [Google Scholar]
- Ardito, C.; Lanzilotti, R.; Costabile, M.F.; Desolda, G. Integrating traditional learning and games on large displays: An experimental study. J. Educ. Technol. Soc. 2013, 16, 44–56. [Google Scholar]
- Leftheriotis, I.; Chorianopoulos, K. User experience quality in multi-touch tasks. In Proceedings of the 3rd ACM SIGCHI Symposium on Engineering Interactive Computing Systems, Pisa, Italy, 13–16 June 2011; pp. 277–282. [Google Scholar]
- Leftheriotis, I.; Giannakos, M.N.; Jaccheri, L. Gamifying informal learning activities using interactive displays: An empirical investigation of students’ learning and engagement. Smart Learn. Env. 2017, 4, 2. [Google Scholar] [CrossRef] [Green Version]
- Watson, D.; Hancock, M.; Mandryk, R.L.; Birk, M. Deconstructing the touch experience. In Proceedings of the 2013 ACM international Conference on Interactive Tabletops and Surfaces, St. Andrews, UK, 6–9 October 2013; pp. 199–208. [Google Scholar]
- Martinez-Maldonado, R.; Schneider, B.; Charleer, S.; Shum, S.B.; Klerkx, J.; Duval, E. Interactive surfaces and learning analytics: Data, orchestration aspects, pedagogical uses and challenges. In Proceedings of the Sixth International Conference on Learning Analytics & Knowledge, Edinburgh, UK, 25–29 April 2016; pp. 124–133. [Google Scholar]
- Griffin, Z.M.; Bock, K. What the eyes say about speaking. Psychol. Sci. 2000, 11, 274–279. [Google Scholar] [CrossRef] [Green Version]
- Prieto, L.P.; Sharma, K.; Wen, Y.; Dillenbourg, P. The Burden of Facilitating Collaboration: Towards Estimation of Teacher Orchestration Load using Eye-tracking Measures; International Society of the Learning Sciences: Gothenburg, Sweden, 2015. [Google Scholar]
- Prieto, L.P.; Sharma, K.; Kidzinski, L.; Dillenbourg, P. Orchestration load indicators and patterns: In-the-wild studies using mobile eye-tracking. IEEE Trans. Learn. Technol. 2017, 11, 216–229. [Google Scholar] [CrossRef]
- Sharma, K.; Jermann, P.; Nüssli, M.A.; Dillenbourg, P. Gaze evidence for different activities in program understanding. In Proceedings of the 24th Annual Conference of the Psychology of Programming Interest Group, London, UK, 21–23 November 2012; No. EPFL-CONF-184006. [Google Scholar]
- Van Gog, T.; Scheiter, K. Eye Tracking as a Tool to Study and Enhance Multimedia Learning; Elsevier: Amsterdam, the Netherlands, 2010; Volume 2, pp. 95–99. [Google Scholar]
- Van Gog, T.; Jarodzka, H.; Scheiter, K.; Gerjets, P.; Paas, F. Attention guidance during example study via the model’s eye movements. Comput. Hum. Behav. 2009, 25, 785–791. [Google Scholar] [CrossRef]
- Sharma, K.; Caballero, D.; Verma, H.; Jermann, P.; Dillenbourg, P. Looking AT Versus Looking THROUGH: A Dual eye-tracking Study in MOOC Context; International Society of the Learning Sciences: Gothenburg, Sweden, 2015. [Google Scholar]
- Schneider, B.; Blikstein, P. Comparing the Benefits of a Tangible user Interface and Contrasting Cases as a Preparation for Future Learning; International Society of the Learning Sciences: Gothenburg, Sweden, 2015. [Google Scholar]
- Nüssli, M.-A. Dual Eye-Tracking Methods for the Study of Remote Collaborative Problem Solving. Ph.D. Thesis, Ecole Polytechnique Fédérale de Lausanne, Lausanne, Switzerland, 21 December 2011. [Google Scholar]
- Richardson, D.C.; Dale, R. Looking to understand: The coupling between speakers’ and listeners’ eye movements and its relationship to discourse comprehension. Cogn. Sci. 2005, 29, 1045–1060. [Google Scholar] [CrossRef] [Green Version]
- Richardson, D.C.; Dale, R.; Kirkham, N.Z. The art of conversation is coordination. Psychol. Sci. 2007, 18, 407–413. [Google Scholar] [CrossRef] [PubMed]
- Richardson, D.C.; Dale, R.; Tomlinson, J.M. Conversation, gaze coordination, and beliefs about visual context. Cogn. Sci. 2009, 33, 1468–1482. [Google Scholar] [CrossRef] [PubMed]
- Mangaroska, K.; Sharma, K.; Giannakos, M.; Trætteberg, H.; Dillenbourg, P. Gaze insights into debugging behavior using learner-centred analysis. In Proceedings of the 8th International Conference on Learning Analytics and Knowledge, Sydney, NSW, Australia, 7–9 March 2018; pp. 350–359. [Google Scholar]
- Stein, R.; Brennan, S.E. Another person’s eye gaze as a cue in solving programming problems. In Proceedings of the 6th International Conference on Multimodal Interfaces, State College, PA, USA, 8–11 October 2004; pp. 9–15. [Google Scholar]
- Worsley, M.; Abrahamson, D.; Blikstein, P.; Grover, S.; Schneider, B.; Tissenbaum, M. Situating multimodal learning analytics. In Proceedings of the 12th International Conference of the Learning Sciences: Transforming Learning, Empowering Learners, NIE, Singapore, 20–24 June 2016. [Google Scholar]
- Sharma, K.; Jermann, P.; Nüssli, M.A.; Dillenbourg, P. Understanding collaborative program comprehension: Interlacing gaze and dialogues. In Proceedings of the Computer Supported Collaborative Learning (CSCL 2013), Madison, WI, USA, 15–19 June 2013; pp. 430–437. [Google Scholar]
- Allopenna, P.D.; Magnuson, J.S.; Tanenhaus, M.K. Tracking the time course of spoken word recognition using eye movements: Evidence for continuous mapping models. J. Mem. Lang. 1998, 38, 419–439. [Google Scholar] [CrossRef] [Green Version]
- Gergle, D.; Clark, A.T. See what I’m saying? Using Dyadic Mobile Eye tracking to study collaborative reference. In Proceedings of the ACM 2011 Conference on Computer Supported Cooperative Work, Hangzhou, China, 19–23 March 2011; pp. 435–444. [Google Scholar]
- Horn, M.; Atrash Leong, Z.; Block, F.; Diamond, J.; Evans, E.M.; Phillips, B.; Shen, C. Of BATs and APEs: An interactive tabletop game for natural history museums. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, Austin, TX, USA, 5–10 May 2012; pp. 2059–2068. [Google Scholar]
- Fu, F.L.; Wu, Y.L.; Ho, H.C. An investigation of coopetitive pedagogic design for knowledge creation in web-based learning. Comput. Educ. 2009, 53, 550–562. [Google Scholar] [CrossRef]
- Pareto, L.; Haake, M.; Lindström, P.; Sjödén, B.; Gulz, A. A teachable-agent-based game affording collaboration and competition: Evaluating math comprehension and motivation. Educ. Technol. Res. Dev. 2012, 60, 723–751. [Google Scholar] [CrossRef]
- Ke, F.; Grabowski, B. Gameplaying for maths learning: Cooperative or not? Br. J. Educ. Technol. 2007, 38, 249–259. [Google Scholar] [CrossRef]
- Burguillo, J.C. Using game theory and competition-based learning to stimulate student motivation and performance. Comput. Educ. 2010, 55, 566–575. [Google Scholar] [CrossRef]
- Mayer, R.E. Unique contributions of eye-tracking research to the study of learning with graphics. Learn. Instr. 2010, 20, 167–171. [Google Scholar] [CrossRef]
- Ginns, P. Meta-analysis of the modality effect. Learn. Instr. 2005, 15, 313–331. [Google Scholar] [CrossRef]
- Khacharem, A.; Spanjers, I.; Zoudji, B.; Kalyuga, S.; Ripoll, H. Using segmentation to support the learning from animated soccer scenes: An effect of prior knowledge. Psychol. Sport Exerc. 2012, 14, 154–160. [Google Scholar] [CrossRef]
- Spanjers, I.A.E.; Van Gog, T.; Wouters, P.; Van Merriënboer, J.J.G. Explaining the segmentation effect in learning from animations: The role of pausing and temporal cueing. Comput. Educ. 2012, 59, 274–280. [Google Scholar] [CrossRef]
- Sung, E.; Mayer, R.E. Affective impact of navigational and signaling aids to e-learning. Comput. Hum. Behav. 2012, 28, 473–483. [Google Scholar] [CrossRef]
- Mautone, P.D.; Mayer, R.E. Signaling as a cognitive guide in multimedia learning. J. Educ. Psychol. 2001, 93, 377–389. [Google Scholar] [CrossRef]
- Tabbers, H.K. The Modality of Text in Multimedia Instructions: Refining the Design Guidelines. Ph.D. Thesis, Open University of the Netherlands, Heerlen, The Netherlands, 2002. [Google Scholar]
- Jonassen, D.; Spector, M.J.; Driscoll, M.; Merrill, M.D.; van Merrienboer, J.; Driscoll, M.P. Handbook of Research on Educational Communications and Technology: A Project of the Association for Educational Communications and Technology; Routledge: England, UK, 2008. [Google Scholar]
- Sangin, M.; Molinari, G.; Nüssli, M.A.; Dillenbourg, P. How learners use awareness cues about their peer’s knowledge? Insights from synchronized eye-tracking data. In Proceedings of the 8th International Conference on International Conference for the Learning Sciences, Utrecht, The Netherlands, 23–28 June 2008; pp. 287–294. [Google Scholar]
- Meyer, A.S.; Sleiderink, A.M.; Levelt, W.J. Viewing and naming objects: Eye movements during noun phrase production. Cognition 1998, 66, B25–B33. [Google Scholar] [CrossRef] [Green Version]
- Strobel, B.; Lindner, M.A.; Saß, S.; Köller, O. Task-irrelevant data impair processing of graph reading tasks: An eye tracking study. Learn. Instr. 2018, 55, 139–147. [Google Scholar] [CrossRef]
- Meier, A.; Spada, H.; Rummel, N. A rating scheme for assessing the quality of computer-supported collaboration processes. Int. J. Comput. Supported Collab. Learn. 2007, 2, 63–86. [Google Scholar] [CrossRef] [Green Version]
- Spada, H.; Meier, A.; Rummel, N.; Hauser, S. A new method to assess the quality of collaborative process in CSCL. In Proceedings of the International Conference on Computer Supported Collaborative Learning 2005, International Society of the Learning Sciences, Taipei, Taiwan, 30 May–4 June 2005; pp. 622–631. [Google Scholar]
Design Rationale | Design Element |
---|---|
Easy and understandable structure [27] | Simple rules for the game and straightforward script for the task |
Visual and interactive element [27] | Game elements like the avatars, authentication PICC. |
Entertaining responses for on-screen answers [27] | Animal avatars, power ups
Interaction enhancement with large touchscreens [27] | Multi touch screens (size) |
Direct attention to specific topics [27] | Individual questions to address the specific neuroscience phenomenon |
Cooperation and competition among users [27] | Two different within subject conditions (order balanced) |
Type of reward that users seem motivated to gain [62] | Double XP, pause time, hint
Table tops for informal learning aids collaboration [62] | Multi touch vertical screens |
Time-out feature to counter mid-game frustration due to difficulty [62] | Pause time and hints
Provide external trigger to prompt user engagement [62] | Every time users got a power up, they were shown the benefits of using them |
Variable | Mean | Std. Dev. | Minimum | Maximum |
---|---|---|---|---|
Pretest Score | 0.24 | 0.25 | 0.00 | 0.10 |
First posttest Score | 0.70 | 0.24 | 0.20 | 1.00 |
Second posttest Score | 0.72 | 0.23 | 0.11 | 1.00 |
Image to text transitions | 0.15 | 0.04 | 0.07 | 0.25 |
Text to image transitions | 0.15 | 0.06 | 0.01 | 0.25
Transition similarity poster phase | 0.20 | 0.14 | 0.00 | 0.48 |
Gaze similarity poster phase | 0.27 | 0.13 | 0.00 | 0.45 |
Gaze similarity game phase | 0.17 | 0.13 | 0.00 | 0.40 |
Game score | 4188.88 | 845.67 | 2200 | 5700 |
Power ups used game phase | 2.72 | 1.71 | 1 | 6 |
Variable | Shapiro–Wilk W | p-Value | Breusch–Pagan BP | p-Value
---|---|---|---|---
XP | 0.97 | .44 | 0.64 | .42 |
Pretest | 0.85 | .57 | 0.39 | .32 |
Posttest1 | 0.90 | .35 | 2.33 | .12 |
Posttest2 | 0.85 | .26 | 0.01 | .91 |
Model for the Transition Similarity | F-Value | p-Value | Effect Size |
---|---|---|---|
First posttest score | 5.01 | .03 | 1.05
Speech segment | 8.42 | .006 | 8.42 |
Interaction term | 4.79 | .03 | 4.79 |
Model for the Gaze Similarity | F-Value | p-Value | Effect Size |
---|---|---|---|
First posttest score | 5.74 | .02 | 1.12
Speech segment | 15.60 | .001 | 1.86 |
Interaction term | 10.27 | .003 | 1.51 |
Model for the Gaze Similarity | F-Value | p-Value | Effect Size |
---|---|---|---|
Second posttest score | 5.54 | .02 | 1.10
Speech segment | 12.26 | .001 | 1.65 |
Interaction term | 11.96 | .001 | 1.63 |
Question pair | 1-2 | 2-3 | 3-4 | 4-5 | 5-6 | 6-7 | 7-8 |
---|---|---|---|---|---|---|---|
t-value | −0.08 | 0.39 | 0.02 | −1.33 | 0.90 | 0.73 | −0.72 |
p-value | .93 | .69 | .98 | .18 | .36 | .47 | .47 |
Effect size | 0.02 | 0.13 | 0.01 | 0.45 | 0.30 | 0.25 | 0.24 |
Question pair | 8-9 | 9-10 | 10-11 | 11-12 | 12-13 | 13-14 | 14-15 |
t-value | −0.49 | 0.62 | 1.39 | −1.62 | 2.29 | −2.95 | NA |
p-value | .62 | .53 | .17 | .11 | .04 | .03 | NA |
Effect size | 0.16 | 0.21 | 0.47 | 0.55 | 0.78 | 0.01 | NA |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Sharma, K.; Leftheriotis, I.; Giannakos, M. Utilizing Interactive Surfaces to Enhance Learning, Collaboration and Engagement: Insights from Learners’ Gaze and Speech. Sensors 2020, 20, 1964. https://doi.org/10.3390/s20071964