DOI: 10.1145/3627050.3630729

Poster

EmoGO: Emotion Estimation on Gazed Object by Using Mobile Eye-Tracker

Published: 22 March 2024

Abstract

Urban environments consist of diverse elements that influence residents’ quality of life and emotional well-being. Prior research has attempted to map citizens’ emotions using sensors and GPS data, but pinpointing the environmental triggers that evoke specific emotions remains difficult. To address this problem, we introduce EmoGO, a novel system that integrates mobile eye-tracking with emotion, visual, and GPS data collection. To our knowledge, this is the first study to correlate specific visual stimuli with the emotions they evoke. Given the rising prevalence of eye-tracking in VR, MR, and AR technologies, EmoGO’s potential applications are substantial. Our preliminary evaluation yielded promising emotion-estimation results on our original dataset and showed that the system can effectively collect real-world data on gazed objects and the emotions associated with them.
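The abstract sketches a pipeline that fuses four streams: gaze fixations from a mobile eye-tracker, labels for the gazed objects, emotion estimates, and GPS positions. Below is a minimal, purely illustrative Python sketch of how such fused records might be represented and aggregated per object category. Every name here (GazeRecord, mean_valence_by_object) is hypothetical, and the valence/arousal fields follow the common circumplex model of affect; none of this is the paper's published interface.

```python
# Hypothetical data-fusion sketch for an EmoGO-style pipeline.
# All names and fields are illustrative assumptions, not the paper's API.
from dataclasses import dataclass


@dataclass
class GazeRecord:
    timestamp: float    # seconds since session start
    object_label: str   # label of the object the wearer gazed at
    valence: float      # estimated emotional valence in [-1, 1] (assumed scale)
    arousal: float      # estimated arousal in [-1, 1] (assumed scale)
    lat: float          # GPS latitude of the wearer
    lon: float          # GPS longitude of the wearer


def mean_valence_by_object(records: list[GazeRecord]) -> dict[str, float]:
    """Average the valence estimates for each gazed-object category."""
    grouped: dict[str, list[float]] = {}
    for r in records:
        grouped.setdefault(r.object_label, []).append(r.valence)
    return {label: sum(vs) / len(vs) for label, vs in grouped.items()}


if __name__ == "__main__":
    session = [
        GazeRecord(0.0, "tree", 0.6, 0.2, 35.681, 139.767),
        GazeRecord(1.5, "billboard", -0.3, 0.5, 35.681, 139.767),
        GazeRecord(3.0, "tree", 0.4, 0.1, 35.682, 139.768),
    ]
    print(mean_valence_by_object(session))
    # {'tree': 0.5, 'billboard': -0.3}
```

In a real deployment the object label would come from an object detector running over the eye-tracker's scene camera and the emotion estimate from a classifier over gaze or physiological features; both components are outside the scope of this sketch.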

Published In

IoT '23: Proceedings of the 13th International Conference on the Internet of Things
November 2023
299 pages
ISBN: 9798400708541
DOI: 10.1145/3627050
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Affective Computing
  2. Datasets
  3. Emotion Estimation
  4. Eye-tracking
  5. Object Detection

Qualifiers

  • Poster
  • Research
  • Refereed limited

Conference

IoT 2023

Acceptance Rates

Overall Acceptance Rate: 28 of 84 submissions, 33%

Article Metrics

  • Total Citations: 0
  • Total Downloads: 41
  • Downloads (Last 12 months): 41
  • Downloads (Last 6 weeks): 7

Reflects downloads up to 11 Dec 2024
