Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data
Figure 1. An example of the eye-movement patterns during spatial problem solving. The scanpath is drawn as a line that brightens in color (black-green-yellow) as time progresses. Each answer option (a–d) shows one possibility for how the cross section of the gray plane with the 3D object could look.
Figure 2. Example divisions of the time-limited experiment for (a) PSVT: R and (b) SBST.
Figure 3. Average fixation duration for PSVT: R; one-way ANOVA results: TST (F = 27.771, p = 0.00 **), QUESTION_10 (F = 21.759, p = 0.00 **), QUESTION_20 (F = 13.949, p = 0.00 **), OPTION_10 (F = 0.672, p = 0.570), OPTION_20 (F = 16.283, p = 0.00 **); ** p < 0.01.
Figure 4. Average fixation duration for SBST; one-way ANOVA results: TST (F = 10.044, p = 0.00 **), QUESTION_10 (F = 39.303, p = 0.00 **), QUESTION_20 (F = 13.067, p = 0.00 **), OPTION_10 (F = 15.695, p = 0.00 **), OPTION_20 (F = 31.502, p = 0.00 **); ** p < 0.01.
Figure 5. Questions reported as confusing in our user study: (a) SBST 2 and (b) SBST 3.
Figure 6. Confusion matrices for the question-dependent tasks of SBST and PSVT: R: (a) SBST (grouped); (b) SBST (ungrouped); (c) PSVT: R (grouped); (d) PSVT: R (ungrouped).
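The one-way ANOVA results in the captions of Figures 3 and 4 test, for each eye-tracking feature, whether its mean differs across the difficulty groups. A minimal sketch of such a test with scipy follows; the group values are invented for illustration and are not the study's data.

```python
# One-way ANOVA across difficulty groups, as reported in Figures 3-4.
# The sample values below are illustrative placeholders, not study data.
from scipy.stats import f_oneway

easy = [310, 295, 330, 305, 320]     # avg. fixation duration (ms), "easy" trials
medium = [360, 345, 370, 355, 340]   # "medium" trials
hard = [420, 410, 455, 430, 445]     # "hard" trials

f_stat, p_value = f_oneway(easy, medium, hard)
print(f"F = {f_stat:.3f}, p = {p_value:.3f}")  # ** marks p < 0.01 in the figures
```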
Abstract
1. Introduction
2. Related Work
3. Experimental Procedure
3.1. Eye-Tracking Data
3.1.1. Participants
3.1.2. Measures
3.1.3. Procedure
3.2. Difficulty Level
3.3. Approaches
3.4. Feature Extraction
4. Results and Discussion
4.1. Difficulty Ranking
4.2. Features and Classifier Selection
4.3. Question-Dependent Test
4.4. Question-Independent Test
5. Conclusions and Future Work
Author Contributions
Acknowledgments
Conflicts of Interest
References
Feature | Description |
---|---|
Total solving time (TST) | Total time spent answering the task |
Fixation duration in QUESTION in the first 10 s (QUESTION_10) | Sum of fixation durations in the QUESTION area during the first 10 s |
Fixation duration in OPTION in the first 10 s (OPTION_10) | Sum of fixation durations in the OPTION area during the first 10 s |
Fixation duration in QUESTION in the second 10 s (QUESTION_20) | Sum of fixation durations in the QUESTION area during the second 10 s (10–20 s) |
Fixation duration in OPTION in the second 10 s (OPTION_20) | Sum of fixation durations in the OPTION area during the second 10 s (10–20 s) |
Fixation duration in QUESTION (QUESTION) | Sum of fixation durations in the QUESTION area over the whole trial |
Fixation duration in OPTION (OPTION) | Sum of fixation durations in the OPTION area over the whole trial |
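These features can be computed directly from a per-trial fixation log. The sketch below assumes each fixation is a (start_ms, duration_ms, aoi) triple with aoi either "QUESTION" or "OPTION"; this schema, and binning a fixation by its start time, are simplifying assumptions for illustration, not the study's exact pipeline.

```python
# Hypothetical feature extraction matching the table above.
def extract_features(fixations, total_solving_time_ms):
    feats = {"TST": total_solving_time_ms,
             "QUESTION_10": 0, "OPTION_10": 0,
             "QUESTION_20": 0, "OPTION_20": 0,
             "QUESTION": 0, "OPTION": 0}
    for start_ms, duration_ms, aoi in fixations:
        feats[aoi] += duration_ms          # whole-trial sum per AOI
        if start_ms < 10_000:              # fixation starts in the first 10 s
            feats[f"{aoi}_10"] += duration_ms
        elif start_ms < 20_000:            # fixation starts in the second 10 s
            feats[f"{aoi}_20"] += duration_ms
    return feats

# Example: two fixations on the question stem, one on the answer options.
print(extract_features([(500, 800, "QUESTION"),
                        (12_000, 600, "QUESTION"),
                        (15_000, 400, "OPTION")],
                       total_solving_time_ms=21_300))
```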
Question | Ranking Score | Correctness Rate | Easy | Medium | Hard |
---|---|---|---|---|---|
PSVT: R 1 | 1.04 | 91.3% | 23 | N/A | 0 |
PSVT: R 2 | 2.09 | 82.6% | 23 | N/A | 0 |
PSVT: R 3 | 3.17 | 43.5% | 0 | N/A | 23 |
PSVT: R 4 | 3.70 | 30.4% | 0 | N/A | 23 |
SBST 1 | 1.00 | 100% | 23 | 0 | 0 |
SBST 2 | 2.00 | 91.3% | 10 | 13 | 0 |
SBST 3 | 3.04 | 69.6% | 5 | 18 | 0 |
SBST 4 | 4.00 | 95.6% | 0 | 23 | 0 |
SBST 5 | 5.52 | 39.1% | 0 | 0 | 23 |
SBST 6 | 5.43 | 52.2% | 0 | 0 | 23 |
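The ranking score and correctness rate above summarize the 23 participants' responses per question. A hedged sketch of how such summaries could be derived follows; the per-participant values are invented, and the study's exact ranking procedure may differ.

```python
# Illustrative per-question summary from participants' difficulty ranks and
# answer correctness (placeholder data, not the study's).
ranks = [1, 1, 2, 1, 1]                    # difficulty rank each participant gave
correct = [True, True, False, True, True]  # whether each participant answered correctly

ranking_score = sum(ranks) / len(ranks)            # mean assigned rank
correctness_rate = sum(correct) / len(correct)     # fraction answered correctly
print(f"ranking score = {ranking_score:.2f}, "
      f"correctness rate = {correctness_rate:.1%}")
```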
Method | PSVT: R (Feature Group A) | PSVT: R (Feature Group B) | SBST (Feature Group A) | SBST (Feature Group B) |
---|---|---|---|---|
RF | 0.71 | 0.68 | 0.77 | 0.61 |
DT | 0.73 | 0.62 | 0.83 | 0.55 |
NB | 0.67 | 0.65 | 0.66 | 0.55 |
LR | 0.93 | 0.68 | 0.84 | 0.61 |
SVM | 0.93 | 0.70 | 0.84 | 0.62 |
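The accuracies above compare five classifiers on two feature groups. A minimal scikit-learn sketch of such a comparison follows; the feature matrix, labels, and five-fold cross-validation setup are placeholders for illustration, not the study's actual data or protocol.

```python
# Comparing the five classifiers from the table via cross-validation.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(230, 5))        # placeholder feature matrix (e.g., group A)
y = rng.integers(0, 2, size=230)     # placeholder easy/hard labels

models = {"RF": RandomForestClassifier(),
          "DT": DecisionTreeClassifier(),
          "NB": GaussianNB(),
          "LR": LogisticRegression(max_iter=1000),
          "SVM": SVC()}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean accuracy = {acc:.2f}")
```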
Test Question | Accuracy | Training Questions |
---|---|---|
SBST 2 | 71.79% | SBST 1, SBST 3, SBST 4, SBST 5, SBST 6 |
SBST 3 | 52.67% | SBST 1, SBST 2, SBST 4, SBST 5, SBST 6 |
SBST 4 | 62.06% | SBST 1, SBST 2, SBST 3, SBST 5, SBST 6 |
SBST 5 | 79.66% | SBST 1, SBST 2, SBST 3, SBST 4, SBST 6 |
SBST 6 | 64.95% | SBST 1, SBST 2, SBST 3, SBST 4, SBST 5 |
Test Question | Accuracy | Training Questions |
---|---|---|
PSVT: R 1 | 76.92% | PSVT: R 2, PSVT: R 3, PSVT: R 4 |
PSVT: R 2 | 70.94% | PSVT: R 1, PSVT: R 3, PSVT: R 4 |
PSVT: R 3 | 88.13% | PSVT: R 1, PSVT: R 2, PSVT: R 4 |
PSVT: R 4 | 82.05% | PSVT: R 1, PSVT: R 2, PSVT: R 3 |
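Both tables above follow a leave-one-question-out protocol: the classifier is trained on trials from all but one question and tested on the held-out question's trials. A sketch of that split using scikit-learn's LeaveOneGroupOut follows; the arrays are placeholders sized for 6 SBST questions and 23 participants.

```python
# Question-independent evaluation: hold out one question at a time.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(138, 5))               # placeholder trial features
y = rng.integers(0, 2, size=138)            # placeholder difficulty labels
question_ids = np.repeat(np.arange(6), 23)  # 6 questions x 23 participants

scores = cross_val_score(SVC(), X, y, groups=question_ids,
                         cv=LeaveOneGroupOut())
for q, acc in zip(range(1, 7), scores):
    print(f"held-out question {q}: accuracy = {acc:.2%}")
```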
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Share and Cite
MDPI and ACS Style
Li, X.; Younes, R.; Bairaktarova, D.; Guo, Q. Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data. Sensors 2020, 20, 1949. https://doi.org/10.3390/s20071949
AMA Style
Li X, Younes R, Bairaktarova D, Guo Q. Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data. Sensors. 2020; 20(7):1949. https://doi.org/10.3390/s20071949
Chicago/Turabian Style
Li, Xiang, Rabih Younes, Diana Bairaktarova, and Qi Guo. 2020. "Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data" Sensors 20, no. 7: 1949. https://doi.org/10.3390/s20071949
APA Style
Li, X., Younes, R., Bairaktarova, D., & Guo, Q. (2020). Predicting Spatial Visualization Problems’ Difficulty Level from Eye-Tracking Data. Sensors, 20(7), 1949. https://doi.org/10.3390/s20071949