
Gaze-enabled activity recognition for augmented reality feedback

Published: 18 July 2024

Abstract

Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they provide insights into user attention, intentions, and activities, and allow novel interaction methods based on this information. However, in physical environments, the implications of using gaze-enabled AR for human activity recognition have not been explored in detail. In an experimental study with the Microsoft HoloLens 2, we collected gaze data from 20 users while they performed three activities: Reading a text, Inspecting a device, and Searching for an object. We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 89.6% activity-recognition accuracy. Based on the recognized activity, our system—GEAR—then provides users with relevant AR feedback. Due to the sensitivity of the personal (gaze) data GEAR collects, the system further incorporates a novel solution based on the Solid specification for giving users fine-grained control over the sharing of their data. The provided code and anonymized datasets may be used to reproduce and extend our findings, and as teaching material.
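
As a concrete illustration of the recognition step, the sketch below trains the three classifier families named above (SVM, Random Forest, Extremely Randomized Trees) on window-level gaze features with scikit-learn. The feature set, window size, and synthetic placeholder data are illustrative assumptions made here, not the authors' released code or dataset, which should be consulted for reproduction.

```python
# Hedged sketch: classify activity windows from gaze features, assuming features
# such as fixation/saccade statistics have already been extracted per window.
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: one row per gaze window (e.g. mean fixation duration [ms],
# fixation rate [1/s], mean saccade amplitude [deg], saccade rate [1/s], ...).
X = rng.normal(size=(600, 8))
# Labels for the three activities studied: 0 = Reading, 1 = Inspecting, 2 = Searching.
y = rng.integers(0, 3, size=600)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "Extremely Randomized Trees": ExtraTreesClassifier(n_estimators=200, random_state=0),
}

# 5-fold cross-validation gives a per-model accuracy estimate analogous to the
# activity-recognition accuracy reported in the abstract.
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f} (+/- {scores.std():.3f})")
```

With real gaze features in place of the placeholders, the same loop would reproduce the kind of model comparison the abstract describes.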




Published In

Computers and Graphics, Volume 119, Issue C
Apr 2024
407 pages

Publisher

Pergamon Press, Inc.

United States

Publication History

Published: 18 July 2024

Author Tags

  1. Pervasive eye tracking
  2. Augmented reality
  3. Attention
  4. Human activity recognition
  5. Context-awareness
  6. Ubiquitous computing

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months): 0
  • Downloads (Last 6 weeks): 0
Reflects downloads up to 01 Jan 2025

Cited By

  • (2024) A Digital Companion Architecture for Ambient Intelligence. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 8(2), 1-26. DOI: 10.1145/3659610. Online publication date: 15-May-2024.
  • (2024) NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System. Proceedings of the ACM on Human-Computer Interaction 8(ETRA), 1-19. DOI: 10.1145/3655599. Online publication date: 28-May-2024.
  • (2024) Gaze-based Opportunistic Privacy-preserving Human-Agent Collaboration. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3613905.3651066. Online publication date: 11-May-2024.
  • (2024) GlassBoARd: A Gaze-Enabled AR Interface for Collaborative Work. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-8. DOI: 10.1145/3613905.3650965. Online publication date: 11-May-2024.
  • (2024) AuctentionAR - Auctioning Off Visual Attention in Mixed Reality. Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, 1-6. DOI: 10.1145/3613905.3650941. Online publication date: 11-May-2024.
  • (2024) Editorial note Computers & Graphics issue 119. Computers and Graphics 119(C). DOI: 10.1016/j.cag.2024.103927. Online publication date: 1-Apr-2024.
  • (2024) IPHGaze: Image Pyramid Gaze Estimation with Head Pose Guidance. Pattern Recognition, 399-414. DOI: 10.1007/978-3-031-78104-9_27. Online publication date: 1-Dec-2024.
