DOI: 10.1145/3613904.3642426

"I Know What You Mean": Context-Aware Recognition to Enhance Speech-Based Games

Published: 11 May 2024

Abstract

Recent advances in language processing and speech recognition open up significant opportunities for video game companies to embrace voice interaction as an intuitive feature and an appealing game mechanic. However, speech-based systems remain prone to recognition errors. These errors add a layer of challenge on top of the game’s existing obstacles, preventing players from reaching their goals and often resulting in player frustration. This work investigates a novel method called context-aware speech recognition, in which the game environment and actions are used as supplementary information to enhance recognition in a speech-based game. In a between-subjects user study (N = 40), we compared our proposed method with a standard method in which recognition is based only on the voice input, without taking context into account. Our results indicate that our proposed method could improve the player experience and the usability of the speech system.
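
The abstract describes the approach only at a high level: recognition is biased toward what currently makes sense in the game world. As a hedged illustration of one way such context-aware recognition could work, the Python sketch below re-ranks recognizer hypotheses against the set of commands that are valid in the current game state. The command names, weighting scheme, and function are assumptions made for illustration, not the paper's implementation.

```python
# Minimal sketch of context-aware re-ranking, NOT the authors' implementation.
# Assumptions: the recognizer returns an n-best list of (transcript, confidence)
# pairs, and the game exposes the set of commands valid in the current state.
from difflib import SequenceMatcher


def rerank_with_context(hypotheses, valid_commands):
    """Map noisy ASR output to the best-fitting command the game currently
    accepts, weighting acoustic confidence by string similarity."""
    best_command, best_score = None, 0.0
    for transcript, confidence in hypotheses:
        for command in valid_commands:
            similarity = SequenceMatcher(None, transcript.lower(), command).ratio()
            score = confidence * similarity  # contextual fit scales ASR confidence
            if score > best_score:
                best_command, best_score = command, score
    return best_command, best_score


# Hypothetical example: a misrecognized utterance still resolves to a valid action.
n_best = [("open the dog", 0.62), ("open the door", 0.31)]
context = {"open door", "light torch", "pick up key"}  # actions possible right now
print(rerank_with_context(n_best, context))            # -> ('open door', <score>)
```

A context-free baseline, by contrast, would act only on the top transcript ("open the dog") and fail to trigger any action, which is the kind of breakdown the study compares against.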

Supplemental Material

  • Video Presentation (MP4 file), with transcript
  • Gameplay Video (MP4 file)
  • Coding Manual (PDF file)



Published In

CHI '24: Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems
May 2024
18,961 pages
ISBN: 9798400703300
DOI: 10.1145/3613904
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. Game Design
  2. Speech Recognition
  3. Speech-Based Systems
  4. Voice Interaction
  5. Voice-Controlled Game

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

CHI '24

Acceptance Rates

Overall Acceptance Rate 6,199 of 26,314 submissions, 24%

Article Metrics

  • Downloads (last 12 months): 1,186
  • Downloads (last 6 weeks): 445
Reflects downloads up to 08 Mar 2025

Cited By

  • (2025) Multimodal Interaction, Interfaces, and Communication: A Survey. Multimodal Technologies and Interaction 9(1), 6. https://doi.org/10.3390/mti9010006. Online publication date: 14 Jan 2025.
  • (2025) Enhancing Immersion in Virtual Reality–Based Advanced Life Support Training: Randomized Controlled Trial. JMIR Serious Games 13, e68272. https://doi.org/10.2196/68272. Online publication date: 14 Feb 2025.
  • (2024) HASI: A Model for Human-Agent Speech Interaction. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1–8. https://doi.org/10.1145/3640794.3665885. Online publication date: 8 Jul 2024.
  • (2024) Gaming with Etiquette: Exploring Courtesy as a Game Mechanic in Speech-Based Games. International Journal of Human–Computer Interaction, 1–19. https://doi.org/10.1080/10447318.2024.2387901. Online publication date: 14 Aug 2024.
