
EyeContext: recognition of high-level contextual cues from human visual behaviour

Published: 27 April 2013

Abstract

In this work we present EyeContext, a system to infer high-level contextual cues from human visual behaviour. We conducted a user study to record eye movements of four participants over a full day of their daily life, totalling 42.5 hours of eye movement data. Participants were asked to self-annotate four non-mutually exclusive cues: social (interacting with somebody vs. no interaction), cognitive (concentrated work vs. leisure), physical (physically active vs. not active), and spatial (inside vs. outside a building). We evaluate a proof-of-concept EyeContext system that combines encoding of eye movements into strings with a spectrum string kernel support vector machine (SVM) classifier. Our results demonstrate the rich information content available in long-term human visual behaviour and open up new avenues for research on eye-based behavioural monitoring and life logging.
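The classifier described above compares eye-movement data that has been encoded into symbol strings using a spectrum string kernel: the similarity of two strings is the inner product of their k-mer (length-k substring) count vectors. A minimal sketch of that kernel, assuming a hypothetical symbol alphabet for encoded eye movements (the example strings and symbols are illustrative, not from the paper):

```python
from collections import Counter

def kmers(s, k):
    """All overlapping length-k substrings of s."""
    return [s[i:i + k] for i in range(len(s) - k + 1)]

def spectrum_kernel(a, b, k=3):
    """k-spectrum kernel: inner product of the k-mer count
    vectors of two symbol strings (Leslie et al.)."""
    ca, cb = Counter(kmers(a, k)), Counter(kmers(b, k))
    return sum(ca[m] * cb[m] for m in ca)

# Hypothetical encoded gaze strings (e.g. saccade-direction symbols).
x = "LLRULRRDLU"
y = "LRULLRRDUU"
print(spectrum_kernel(x, y, k=2))  # → 9
```

A kernel matrix built this way over all pairs of encoded recordings can then be handed to any kernelised SVM implementation as a precomputed kernel.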

Supplementary Material

suppl.mov (chi0107-file3.mp4)
Supplemental video




      Published In

      CHI '13: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
      April 2013
      3550 pages
      ISBN: 9781450318990
      DOI: 10.1145/2470654

      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Author Tags

      1. context recognition
      2. electrooculography (EOG)
      3. eye movement analysis
      4. visual behaviour

      Qualifiers

      • Research-article

      Conference

      CHI '13

      Acceptance Rates

      CHI '13 paper acceptance rate: 392 of 1,963 submissions (20%)
      Overall acceptance rate: 6,199 of 26,314 submissions (24%)


