DOI: 10.1145/1891903.1891915

Discovering eye gaze behavior during human-agent conversation in an interactive storytelling application

Published: 08 November 2010

Abstract

In this paper, we investigate users' eye gaze behavior during conversation with an interactive storytelling application. We present an interactive eye gaze model for embodied conversational agents that aims to improve the experience of users participating in Interactive Storytelling. The underlying narrative in which the approach was tested is based on a classic nineteenth-century psychological novel: Madame Bovary, by Flaubert. At various stages of the narrative, the user can address the main character or respond to her, impersonating her lover, using free-style spoken natural language input. An eye tracker was connected so that the interactive gaze model could respond to the user's current gaze (i.e., whether or not the user is looking into the virtual character's eyes). We conducted a study with 19 students in which we compared our interactive eye gaze model with a non-interactive eye gaze model that was informed by studies of human gaze behavior but had no information on where the user was looking. The interactive model received higher user ratings than the non-interactive model. In addition, we analyzed the users' gaze behavior during conversation with the virtual character.
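The gaze-contingent loop the abstract describes reduces, at its core, to one decision per update: determine whether the tracked gaze point currently falls on the character's eye region, and switch the agent between reciprocating and non-reciprocating gaze behavior accordingly. The sketch below illustrates that idea in Python. It is a minimal illustration under assumed interfaces, not the paper's implementation: the `Rect` region, the `(timestamp, x, y)` sample format, and the agent's `look_at_user` / `play_idle_gaze` methods are all hypothetical and not taken from the paper or any specific eye-tracker API.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    """Screen-space rectangle, e.g. around the rendered character's eyes."""
    x: float
    y: float
    w: float
    h: float

    def contains(self, gx: float, gy: float) -> bool:
        return self.x <= gx <= self.x + self.w and self.y <= gy <= self.y + self.h

def user_looks_at_eyes(samples, eye_region: Rect, min_dwell: float = 0.2) -> bool:
    """Decide whether the user is currently looking into the character's eyes.

    `samples` is a chronologically ordered window of (timestamp_s, gaze_x,
    gaze_y) tuples from the eye tracker. Requiring a minimum dwell time
    filters out single samples that merely pass through the region during
    a saccade.
    """
    on_eyes = [t for (t, gx, gy) in samples if eye_region.contains(gx, gy)]
    return len(on_eyes) >= 2 and (on_eyes[-1] - on_eyes[0]) >= min_dwell

def update_agent_gaze(agent, samples, eye_region: Rect) -> None:
    """Gaze-contingent switch: reciprocate eye contact, or fall back to a
    scripted gaze pattern when the user is not making eye contact."""
    if user_looks_at_eyes(samples, eye_region):
        agent.look_at_user()    # hypothetical call: establish mutual gaze
    else:
        agent.play_idle_gaze()  # hypothetical call: averted/ambient gaze
```

In a real pipeline, the dwell window would more likely be fed by a fixation detector than by raw samples, and the eye region would be re-projected from the 3D character model each frame rather than held fixed.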




Published In

      ICMI-MLMI '10: International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction
      November 2010
      311 pages
ISBN: 9781450304146
DOI: 10.1145/1891903
      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Author Tags

      1. eye gaze
      2. human-agent conversation
      3. interactive storytelling
      4. virtual agent

      Qualifiers

      • Research-article

Conference

ICMI-MLMI '10

      Acceptance Rates

ICMI-MLMI '10 Paper Acceptance Rate: 41 of 100 submissions, 41%
Overall Acceptance Rate: 453 of 1,080 submissions, 42%

Article Metrics

• Downloads (Last 12 months): 34
• Downloads (Last 6 weeks): 0
Reflects downloads up to 06 Jan 2025

Cited By

• (2024) Exploring Influence of Social Anxiety on Embodied Face Perception during Affective Social Interactions in VR. Proceedings of the 24th ACM International Conference on Intelligent Virtual Agents, pp. 1-5. DOI: 10.1145/3652988.3673952. Online publication date: 16-Sep-2024.
• (2024) Gaze-dependent response activation in dialogue agent for cognitive-behavioral therapy. Procedia Computer Science, 246, pp. 2322-2331. DOI: 10.1016/j.procs.2024.09.554. Online publication date: 2024.
• (2024) A Multimodal Approach for Improving a Dialogue Agent for Therapeutic Sessions in Psychiatry. Transforming Media Accessibility in Europe, pp. 397-414. DOI: 10.1007/978-3-031-60049-4_22. Online publication date: 20-Aug-2024.
• (2022) Cross-Modal Repair: Gaze and Speech Interaction for List Advancement. Proceedings of the 4th Conference on Conversational User Interfaces, pp. 1-11. DOI: 10.1145/3543829.3543833. Online publication date: 26-Jul-2022.
• (2022) The Eye in Extended Reality: A Survey on Gaze Interaction and Eye Tracking in Head-worn Extended Reality. ACM Computing Surveys, 55(3), pp. 1-39. DOI: 10.1145/3491207. Online publication date: 25-Mar-2022.
• (2022) A Review on Emotion Based Harmful Speech Detection Using Machine Learning. 2022 IEEE 22nd International Symposium on Computational Intelligence and Informatics and 8th IEEE International Conference on Recent Achievements in Mechatronics, Automation, Computer Science and Robotics (CINTI-MACRo), pp. 000017-000024. DOI: 10.1109/CINTI-MACRo57952.2022.10029592. Online publication date: 21-Nov-2022.
• (2021) Narrative Cognition in Mixed Reality Systems: Towards an Empirical Framework. Virtual, Augmented and Mixed Reality, pp. 3-17. DOI: 10.1007/978-3-030-77599-5_1. Online publication date: 3-Jul-2021.
• (2020) An Evaluation of a Wearable Assistive Device for Augmenting Social Interactions. IEEE Access, 8, pp. 164661-164677. DOI: 10.1109/ACCESS.2020.3022425. Online publication date: 2020.
• (2020) Anthropomorphic Design for Everyday Objects. Human-Computer Interaction. Design and User Experience, pp. 137-146. DOI: 10.1007/978-3-030-49059-1_10. Online publication date: 10-Jul-2020.
• (2018) Unconsciously interactive Films in a cinema environment—a demonstrative case study. Digital Creativity, 29(2-3), pp. 165-181. DOI: 10.1080/14626268.2017.1407344. Online publication date: 12-Feb-2018.
