
Is ChatGPT scary good? How user motivations affect creepiness and trust in generative artificial intelligence

Published: 01 September 2023

Highlights

  • This study extends the scope of the uses and gratifications (U&G) literature to the generative AI domain by combining quantitative and qualitative approaches.
  • This study contributes to a growing body of human–computer interaction literature on AI technology adoption.
  • This study elucidates the psychological mechanism in which creepiness and trust serve as key mediators.

Abstract

Few studies have examined user motivations to use generative artificial intelligence (AI). This research aims to address this gap by examining how user motivations for ChatGPT usage affect perceived creepiness, trust, and the intention to continue using AI chatbot technology. The findings of an online survey (N = 421) reveal a negative relationship between personalization and creepiness, while task efficiency and social interaction are positively associated with creepiness. Increased levels of creepiness, in turn, result in decreased continuance intention. Furthermore, task efficiency and personalization have a positive impact on trust, leading to increased continuance intention. The results contribute to the field of human–computer interaction by investigating the motivations for using generative AI chatbots and advancing our understanding of AI creepiness, trust, and continuance intention. The practical implications of this research can inform the design of user interfaces and the development of features for generative AI chatbots.
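The abstract describes a mediation structure: usage motivations (personalization, task efficiency, social interaction) predict creepiness and trust, which in turn predict continuance intention. The sketch below shows how such a model could be specified and estimated; it is only an illustration under stated assumptions, using the open-source semopy package and hypothetical variable names (personalization, task_efficiency, social_interaction, creepiness, trust, continuance_intention), not the authors' actual analysis pipeline.

```python
# Minimal sketch of the mediation structure described in the abstract,
# estimated with the open-source semopy SEM package. Column names are
# hypothetical; the paper's own analysis may use different software.
import pandas as pd
from semopy import Model

# Hypothesized paths: motivations -> creepiness and trust -> continuance intention
MODEL_DESC = """
creepiness ~ personalization + task_efficiency + social_interaction
trust ~ personalization + task_efficiency + social_interaction
continuance_intention ~ creepiness + trust
"""

def fit_mediation_model(df: pd.DataFrame) -> pd.DataFrame:
    """Fit the hypothesized model to survey data (one row per respondent,
    one column per construct score) and return the parameter table
    (estimates, standard errors, and p-values)."""
    model = Model(MODEL_DESC)
    model.fit(df)
    return model.inspect()

# Usage with a hypothetical data file:
# df = pd.read_csv("survey_scores.csv")
# print(fit_mediation_model(df))
```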





Published In

Telematics and Informatics, Volume 83, Issue C
Sep 2023
108 pages

Publisher

Pergamon Press, Inc.

United States

Publication History

Published: 01 September 2023

Author Tags

  1. ChatGPT
  2. Generative artificial intelligence (AI)
  3. Uses and gratifications theory
  4. Creepiness
  5. Trust
  6. Continuance intention

Qualifiers

  • Research-article


Cited By

  • (2024) Impact of Generative AI on Enterprise Performance in China. Journal of Global Information Management, 32(1), 1–20. https://doi.org/10.4018/JGIM.347501. Online publication date: 7-Aug-2024.
  • (2024) Measuring Impact on Confidence in Institutions by their Use of Software Components. Proceedings of the Central and Eastern European eDem and eGov Days 2024, 119–124. https://doi.org/10.1145/3670243.3670249. Online publication date: 12-Sep-2024.
  • (2024) Measuring User Experience Inclusivity in Human-AI Interaction via Five User Problem-Solving Styles. ACM Transactions on Interactive Intelligent Systems, 14(3), 1–90. https://doi.org/10.1145/3663740. Online publication date: 24-Sep-2024.
  • (2024) “Not quite there yet”: On Users Perception of Popular Healthcare Chatbot Apps for Personal Health Management. Proceedings of the 17th International Conference on PErvasive Technologies Related to Assistive Environments, 191–197. https://doi.org/10.1145/3652037.3652042. Online publication date: 26-Jun-2024.
  • (2024) Consumer reactions to perceived undisclosed ChatGPT usage in an online review context. Telematics and Informatics, 93(C). https://doi.org/10.1016/j.tele.2024.102163. Online publication date: 1-Sep-2024.
  • (2024) Examining the moderating effect of motivation on technology acceptance of generative AI for English as a foreign language learning. Education and Information Technologies, 29(17), 23547–23575. https://doi.org/10.1007/s10639-024-12763-3. Online publication date: 1-Dec-2024.
  • (2024) Empowering Users with ChatGPT and Similar Large Language Models (LLMs). Proceedings of the Association for Information Science and Technology, 61(1), 172–182. https://doi.org/10.1002/pra2.1018. Online publication date: 15-Oct-2024.
