
Wordometer Systems for Everyday Life

Published: 08 January 2018

Abstract

We present in this paper a detailed comparison of different algorithms and devices for determining the number of words read in everyday life. We call our system the “Wordometer”. We used three kinds of eye tracking systems in our experiments: mobile video-oculography (MVOG), stationary video-oculography (SVOG), and electrooculography (EOG). By analyzing eye movements, we estimated the number of words a user read. Because inexpensive eye trackers have recently appeared on the market, we undertook a large-scale experiment comparing three devices that can be used for everyday on-screen reading: the Tobii EyeX (SVOG), the JINS MEME (EOG), and the Pupil (MVOG). We found that, when used with the Wordometer, the accuracy of the everyday-life devices was similar to that of the professional devices. We also analyzed the robustness of the systems against special reading behaviors: rereading and skipping.
With the MVOG, SVOG, and EOG systems, we obtained estimation error rates of 7.2%, 13.0%, and 10.6%, respectively, in our main experiment. Across all our experiments, we collected 300 recordings from 14 participants, amounting to 109,097 read words.
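To make the estimation step concrete, here is a minimal sketch of a Wordometer-style pipeline, assuming raw gaze samples arrive as (time, x, y) tuples in screen coordinates with x increasing to the right; the function names, dispersion threshold, minimum fixation duration, and words-per-saccade coefficient are illustrative assumptions, not the authors' implementation or calibrated values. Fixations are first segmented with a dispersion-based detector (in the spirit of the I-DT algorithm of Salvucci and Goldberg), then forward (left-to-right) saccades between consecutive fixations are counted and mapped to a word count with a linear coefficient fitted offline.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Fixation:
    t_start: float  # time of the first sample in the fixation (s)
    t_end: float    # time of the last sample in the fixation (s)
    x: float        # centroid x (px)
    y: float        # centroid y (px)

def detect_fixations(samples: List[Tuple[float, float, float]],
                     dispersion_px: float = 25.0,
                     min_duration_s: float = 0.1) -> List[Fixation]:
    """Segment (t, x, y) gaze samples into fixations: grow a window until its
    x+y dispersion exceeds the threshold, then emit the window as a fixation
    if it lasted long enough (thresholds are illustrative placeholders)."""
    fixations: List[Fixation] = []
    window: List[Tuple[float, float, float]] = []

    def emit(win: List[Tuple[float, float, float]]) -> None:
        if len(win) > 1 and win[-1][0] - win[0][0] >= min_duration_s:
            fixations.append(Fixation(win[0][0], win[-1][0],
                                      sum(p[1] for p in win) / len(win),
                                      sum(p[2] for p in win) / len(win)))

    for t, x, y in samples:
        window.append((t, x, y))
        xs, ys = [p[1] for p in window], [p[2] for p in window]
        if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion_px:
            emit(window[:-1])      # close the fixation before the gaze jump
            window = [window[-1]]  # the jump starts a new window
    emit(window)                   # flush the trailing window
    return fixations

def estimate_words_read(fixations: List[Fixation],
                        words_per_forward_saccade: float = 1.8) -> int:
    """Count forward (left-to-right) saccades between consecutive fixations
    and map them to words via a linear coefficient fitted offline."""
    forward = sum(1 for a, b in zip(fixations, fixations[1:]) if b.x > a.x)
    return round(forward * words_per_forward_saccade)

# Synthetic usage: five fixation clusters sweeping left to right on one line.
samples = [(f * 0.5 + i * 0.02, 120.0 * f + i, 300.0)
           for f in range(5) for i in range(10)]
print(estimate_words_read(detect_fixations(samples)))  # 4 forward saccades -> ~7 words

In practice such a per-saccade coefficient would be fitted by regression on documents with known word counts, and the behaviors the paper stress-tests (rereading, skipping) would surface here as regressive saccades and unusually long forward jumps that a refined model must handle.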



Published In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  Volume 1, Issue 4
December 2017
1298 pages
EISSN: 2474-9567
DOI: 10.1145/3178157
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 08 January 2018
Accepted: 01 October 2017
Revised: 01 July 2017
Received: 01 May 2017
Published in IMWUT Volume 1, Issue 4


Author Tags

  1. Eye tracking
  2. Electrooculography
  3. Machine learning
  4. Reading analysis
  5. Wordometer

Qualifiers

  • Research-article
  • Research
  • Refereed


Cited By

  • (2024) An Article a Day: Towards Personal Informatics for Healthy News Consumption. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, pp. 1-9. https://doi.org/10.1145/3613905.3650854. Online publication date: 11-May-2024
  • (2022) EOG-Based Human–Computer Interface: 2000–2020 Review. Sensors, 22(13), 4914. https://doi.org/10.3390/s22134914. Online publication date: 29-Jun-2022
  • (2020) Mapping and Taking Stock of the Personal Informatics Literature. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 4(4), pp. 1-38. https://doi.org/10.1145/3432231. Online publication date: 18-Dec-2020
  • (2020) Aggregating Life Tags for Opportunistic Crowdsensing with Mobile and Smartglasses Users. Proceedings of the 6th EAI International Conference on Smart Objects and Technologies for Social Good, pp. 66-71. https://doi.org/10.1145/3411170.3411237. Online publication date: 14-Sep-2020
  • (2020) GazeGraph. Proceedings of the 18th Conference on Embedded Networked Sensor Systems, pp. 422-435. https://doi.org/10.1145/3384419.3430774. Online publication date: 16-Nov-2020
  • (2019) Electrooculography dataset for reading detection in the wild. Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, pp. 85-88. https://doi.org/10.1145/3341162.3343812. Online publication date: 9-Sep-2019
  • (2019) Dyslexic and private reader. Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers, pp. 296-297. https://doi.org/10.1145/3341162.3343788. Online publication date: 9-Sep-2019
  • (2019) Life-Tags. Proceedings of the ACM on Human-Computer Interaction, 3(EICS), pp. 1-22. https://doi.org/10.1145/3331157. Online publication date: 13-Jun-2019
  • (2018) Mental State Analysis on Eyewear. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, pp. 968-973. https://doi.org/10.1145/3267305.3274119. Online publication date: 8-Oct-2018
  • (2018) Vocabulometer, a Web Platform for Ubiquitous Language Learning. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, pp. 361-364. https://doi.org/10.1145/3267305.3267614. Online publication date: 8-Oct-2018
