DOI: 10.1145/1322192.1322253 — ICMI-MLMI Conference Proceedings (research article)

Interest estimation based on dynamic Bayesian networks for visual attentive presentation agents

Published: 12 November 2007

Abstract

In this paper, we describe an interface consisting of a virtual showroom where a team of two highly realistic 3D agents presents product items in an entertaining and attractive way. The presentation flow adapts to users' attentiveness, or lack thereof, and may thus provide a more personalized and engaging presentation experience. In order to infer users' attention and visual interest regarding interface objects, our system analyzes eye movements in real time. Interest detection algorithms used in previous research determine an object of interest based on the time that eye gaze dwells on that object. However, this kind of algorithm is not well suited for dynamic presentations where the goal is to assess the user's focus of attention regarding a dynamically changing presentation. Here, the current context of the object of attention has to be considered, i.e., whether the visual object is part of (or contributes to) the current presentation content or not. Therefore, we propose a new approach that estimates the interest (or non-interest) of a user by means of dynamic Bayesian networks. Each of a predefined set of visual objects has a dynamic Bayesian network assigned to it, which calculates the current interest of the user in this object. The estimation takes into account (1) each new gaze point, (2) the current context of the object, and (3) preceding estimations of the object itself. Based on these estimates, the presentation agents can provide timely and appropriate responses.
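The per-object update described in the abstract can be sketched as a two-state recursive Bayesian filter. This is a minimal illustration, not the authors' implementation: the class name, the transition matrix, and all probability values below are assumptions chosen to reflect the abstract's three inputs — the new gaze point, the object's presentation context, and the preceding estimate.

```python
class ObjectInterestDBN:
    """Illustrative two-state DBN filter for one visual object.

    Tracks P(user is interested in this object) and updates it from
    each new gaze sample, the object's current presentation context,
    and the previous estimate. All numbers are assumed values.
    """

    # P(I_t | I_{t-1}): interest tends to persist between time steps.
    TRANSITION = {True: {True: 0.8, False: 0.2},
                  False: {True: 0.1, False: 0.9}}

    def __init__(self, prior_interest=0.1):
        self.p_interest = prior_interest  # preceding estimation (input 3)

    def update(self, gaze_on_object, object_in_context):
        # Predict step: propagate the previous estimate through the
        # transition model.
        p_pred = (self.p_interest * self.TRANSITION[True][True]
                  + (1 - self.p_interest) * self.TRANSITION[False][True])

        # Observation likelihoods P(gaze hits object | interest, context).
        # Gazing at an object that is NOT part of the current presentation
        # content is treated as stronger evidence of genuine interest than
        # following the presentation flow (illustrative values).
        if object_in_context:
            p_gaze_if_interest, p_gaze_if_none = 0.7, 0.4
        else:
            p_gaze_if_interest, p_gaze_if_none = 0.6, 0.05

        like_i = p_gaze_if_interest if gaze_on_object else 1 - p_gaze_if_interest
        like_n = p_gaze_if_none if gaze_on_object else 1 - p_gaze_if_none

        # Correct step (Bayes' rule), then store for the next time step.
        num = like_i * p_pred
        self.p_interest = num / (num + like_n * (1 - p_pred))
        return self.p_interest


# Usage: feed each real-time gaze sample; repeated gaze at an object
# outside the current presentation context drives the estimate up fast.
model = ObjectInterestDBN()
for _ in range(3):
    p = model.update(gaze_on_object=True, object_in_context=False)
```

Unlike a pure dwell-time threshold, this formulation lets the same amount of gaze carry different evidential weight depending on whether the object currently belongs to the presentation content, which is the distinction the abstract argues for.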




    Published In

    ICMI '07: Proceedings of the 9th international conference on Multimodal interfaces
    November 2007
    402 pages
    ISBN:9781595938176
    DOI:10.1145/1322192
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. dynamic Bayesian network
    2. eye tracking
    3. interest recognition
    4. multi-modal presentation

    Qualifiers

    • Research-article

    Conference

ICMI '07: 9th International Conference on Multimodal Interfaces
November 12–15, 2007
Nagoya, Aichi, Japan

    Acceptance Rates

    Overall Acceptance Rate 453 of 1,080 submissions, 42%


    Cited By

• (2024) Gaze Scanpath Transformer: Predicting Visual Search Target by Spatiotemporal Semantic Modeling of Gaze Scanpath. 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), pages 625–635. DOI: 10.1109/CVPRW63382.2024.00067. Online publication date: 17-Jun-2024.
• (2021) Exploring Gaze-Based Prediction Strategies for Preference Detection in Dynamic Interface Elements. Proceedings of the 2021 Conference on Human Information Interaction and Retrieval, pages 129–139. DOI: 10.1145/3406522.3446013. Online publication date: 14-Mar-2021.
• (2016) Range Image Sensor Based Eye Gaze Estimation by Using the Relationship Between the Face and Eye Directions. International Journal on Smart Sensing and Intelligent Systems, 9(4):2297–2309. DOI: 10.21307/ijssis-2017-965. Online publication date: 1-Dec-2016.
• (2016) Modeling user's decision process through gaze behavior. Proceedings of the 18th ACM International Conference on Multimodal Interaction, pages 536–540. DOI: 10.1145/2993148.2997615. Online publication date: 31-Oct-2016.
• (2016) Can Eye Help You? Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, pages 5180–5190. DOI: 10.1145/2858036.2858438. Online publication date: 7-May-2016.
• (2013) Learning aspects of interest from Gaze. Proceedings of the 6th Workshop on Eye Gaze in Intelligent Human Machine Interaction: Gaze in Multimodal Interaction, pages 41–44. DOI: 10.1145/2535948.2535955. Online publication date: 13-Dec-2013.
• (2012) Semantic interpretation of eye movements using designed structures of displayed contents. Proceedings of the 4th Workshop on Eye Gaze in Intelligent Human Machine Interaction, pages 1–3. DOI: 10.1145/2401836.2401853. Online publication date: 26-Oct-2012.
• (2012) Multi-mode saliency dynamics model for analyzing gaze and attention. Proceedings of the Symposium on Eye Tracking Research and Applications, pages 115–122. DOI: 10.1145/2168556.2168574. Online publication date: 28-Mar-2012.
• (2011) The Importance of Eye Gaze and Head Pose to Estimating Levels of Attention. Proceedings of the 2011 Third International Conference on Games and Virtual Worlds for Serious Applications, pages 186–191. DOI: 10.1109/VS-GAMES.2011.38. Online publication date: 4-May-2011.
• (2011) Detecting human behavior emotional cues in Natural Interaction. 2011 17th International Conference on Digital Signal Processing (DSP), pages 1–6. DOI: 10.1109/ICDSP.2011.6004962. Online publication date: Jul-2011.
