
Conceptual Metaphors Impact Perceptions of Human-AI Collaboration

Published: 15 October 2020

Abstract

With the emergence of conversational artificial intelligence (AI) agents, it is important to understand the mechanisms that influence users' experiences of these agents. In this paper, we study one of the most common tools in the designer's toolkit: conceptual metaphors. Metaphors can present an agent as akin to a wry teenager, a toddler, or an experienced butler. How might a choice of metaphor influence our experience of the AI agent? Sampling a set of metaphors along the dimensions of warmth and competence, defined by psychological theories as the primary axes of variation for human social perception, we perform a study (N=260) where we manipulate the metaphor, but not the behavior, of a Wizard-of-Oz conversational agent. Following the experience, participants are surveyed about their intention to use the agent, their desire to cooperate with the agent, and the agent's usability. Contrary to the current tendency of designers to use high competence metaphors to describe AI products, we find that metaphors that signal low competence lead to better evaluations of the agent than metaphors that signal high competence. This effect persists despite both high and low competence agents featuring identical, human-level performance and the wizards being blind to condition. A second study confirms that intention to adopt decreases rapidly as competence projected by the metaphor increases. In a third study, we assess effects of metaphor choices on potential users' desire to try out the system and find that users are drawn to systems that project higher competence and warmth. These results suggest that projecting competence may help attract new users, but those users may discard the agent unless it can quickly correct with a lower competence metaphor. We close with a retrospective analysis that finds similar patterns between metaphors and user attitudes towards past conversational agents such as Xiaoice, Replika, Woebot, Mitsuku, and Tay.
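The design described above, metaphors sampled along warmth and competence with identical agent behavior across conditions and post-interaction adoption ratings, can be sketched as a toy simulation. Everything below is hypothetical and illustrative: the condition labels, effect sizes, and 1-7 rating scale are stand-ins, not the authors' materials or data.

```python
# Hypothetical sketch (not the authors' code): metaphors placed on the two
# axes of social perception, with a simulated post-interaction rating that
# mirrors the paper's headline finding in direction only.
import random
from statistics import mean

random.seed(0)  # deterministic toy run

# Illustrative metaphor conditions on the warmth/competence grid.
CONDITIONS = {
    "toddler":              {"warmth": "high", "competence": "low"},
    "inexperienced teen":   {"warmth": "low",  "competence": "low"},
    "trained professional": {"warmth": "low",  "competence": "high"},
    "experienced butler":   {"warmth": "high", "competence": "high"},
}

def simulate_rating(competence: str) -> float:
    """Simulated adoption-intention item (1-7 Likert). The agent's actual
    behavior is identical across conditions; only the projected competence
    shifts the rating. Effect sizes here are made up for illustration."""
    base = 5.5 if competence == "low" else 4.0
    return min(7.0, max(1.0, random.gauss(base, 1.0)))

def condition_means(n_per_condition: int = 65) -> dict:
    """Mean rating per metaphor condition for a between-subjects sample."""
    return {
        name: mean(simulate_rating(attrs["competence"])
                   for _ in range(n_per_condition))
        for name, attrs in CONDITIONS.items()
    }

means = condition_means()
low = mean(v for k, v in means.items() if CONDITIONS[k]["competence"] == "low")
high = mean(v for k, v in means.items() if CONDITIONS[k]["competence"] == "high")
print(f"low-competence metaphors:  {low:.2f}")
print(f"high-competence metaphors: {high:.2f}")
```

In this toy setup the low-competence conditions score higher by construction; the paper's contribution is establishing that direction empirically while holding agent behavior constant.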

Supplementary Material

ZIP File (v4cscw163aux.pdf.zip)
Supplemental material contains additional details about the studies.




Published In

Proceedings of the ACM on Human-Computer Interaction  Volume 4, Issue CSCW2
CSCW
October 2020
2310 pages
EISSN:2573-0142
DOI:10.1145/3430143

Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. adoption of ai systems
  2. conceptual metaphors
  3. expectation shaping
  4. perception of human-ai collaboration

Qualifiers

  • Research-article


Bibliometrics

Article Metrics

  • Downloads (Last 12 months)956
  • Downloads (Last 6 weeks)143
Reflects downloads up to 12 Dec 2024

Cited By

  • (2024) AI Lifecycle Zero-Touch Orchestration within the Edge-to-Cloud Continuum for Industry 5.0. Systems 12(2), 48. DOI: 10.3390/systems12020048. Online publication date: 2-Feb-2024
  • (2024) Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment. Proceedings of the ACM on Human-Computer Interaction 8(CSCW2), 1-31. DOI: 10.1145/3686922. Online publication date: 8-Nov-2024
  • (2024) Exploring the Impact of Conversational Style in Enhancing Recruitment Chatbot Interactions. Proceedings of the 13th Nordic Conference on Human-Computer Interaction, 1-14. DOI: 10.1145/3679318.3685387. Online publication date: 13-Oct-2024
  • (2024) A Survey on Automatic Generation of Figurative Language: From Rule-based Systems to Large Language Models. ACM Computing Surveys 56(10), 1-34. DOI: 10.1145/3654795. Online publication date: 30-Mar-2024
  • (2024) Improving Steering and Verification in AI-Assisted Data Analysis with Interactive Task Decomposition. Proceedings of the 37th Annual ACM Symposium on User Interface Software and Technology, 1-19. DOI: 10.1145/3654777.3676345. Online publication date: 13-Oct-2024
  • (2024) The AI-DEC: A Card-based Design Method for User-centered AI Explanations. Proceedings of the 2024 ACM Designing Interactive Systems Conference, 1010-1028. DOI: 10.1145/3643834.3661576. Online publication date: 1-Jul-2024
  • (2024) Examining Humanness as a Metaphor to Design Voice User Interfaces. Proceedings of the 6th ACM Conference on Conversational User Interfaces, 1-15. DOI: 10.1145/3640794.3665535. Online publication date: 8-Jul-2024
  • (2024) LAVE: LLM-Powered Agent Assistance and Language Augmentation for Video Editing. Proceedings of the 29th International Conference on Intelligent User Interfaces, 699-714. DOI: 10.1145/3640543.3645143. Online publication date: 18-Mar-2024
  • (2024) "This Chatbot Would Never...": Perceived Moral Agency of Mental Health Chatbots. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-28. DOI: 10.1145/3637410. Online publication date: 26-Apr-2024
  • (2024) Like My Aunt Dorothy: Effects of Conversational Styles on Perceptions, Acceptance and Metaphorical Descriptions of Voice Assistants during Later Adulthood. Proceedings of the ACM on Human-Computer Interaction 8(CSCW1), 1-21. DOI: 10.1145/3637365. Online publication date: 26-Apr-2024
