Visual search is an important part of human-computer interaction (HCI). The visual search processes that people use have a substantial effect on the time expended and likelihood of finding the information they seek. This dissertation investigates visual search through experiments and computational cognitive modeling. Computational cognitive modeling is a powerful methodology that uses computer simulation to capture, assert, record, and replay plausible sets of interactions among the many human processes at work during visual search. This dissertation aims to provide a cognitive model of visual search that can be utilized by predictive interface analysis tools and to do so in a manner consistent with a comprehensive theory of human visual processing, namely active vision. The model accounts for the four questions of active vision, the answers to which are important to both practitioners and researchers in HCI: What can be perceived in a fixation? When do the eyes move? Where do the eyes move? What information is integrated between eye movements?
This dissertation presents a principled progression of the development of a computational model of active vision. Three experiments were conducted that investigate the effects of visual layout properties: density, color, and word meaning. The experimental results provide a better understanding of how these factors affect human-computer visual interaction. Three sets of data, two from the experiments reported here, were accurately modeled in the EPIC (Executive Process-Interactive Control) cognitive architecture. This work extends the practice of computational cognitive modeling by (a) informing the process of developing computational models through the use of eye movement data and (b) providing the first detailed instantiation of the theory of active vision in a computational framework. This instantiation allows us to better understand (a) the effects and interactions of visual search processes and (b) how these visual search processes can be used computationally to predict people's visual search behavior. This research ultimately benefits HCI by giving researchers and practitioners a better understanding of how users visually interact with computers and provides a foundation for tools to predict that interaction.
This dissertation includes both previously published and co-authored material.
Cited By
- Bailly G, Lecolinet E and Nigay L (2016). Visual Menu Techniques, ACM Computing Surveys, 49:4, (1-41), Online publication date: 31-Dec-2018.
- Brumby D and Seyedi V An empirical investigation into how users adapt to mobile phone auto-locks in a multitask setting Proceedings of the 14th international conference on Human-computer interaction with mobile devices and services, (281-290)
- Kieras D (2011). The persistent visual store as the locus of fixation memory in visual search tasks, Cognitive Systems Research, 12:2, (102-112), Online publication date: 1-Jun-2011.
Recommendations
A Computational Model of “Active Vision” for Visual Search in Human–Computer Interaction
Human visual search plays an important role in many human–computer interaction (HCI) tasks. Better models of visual search are needed not just to predict overall performance outcomes, such as whether people will be able to find the information needed to ...