Research article (open access)

Effect of Explanation Conceptualisations on Trust in AI-assisted Credibility Assessment

Published: 08 November 2024

Abstract

As misinformation increasingly proliferates on social media platforms, it has become crucial to explore how best to convey automated news credibility assessments to end-users and foster trust in fact-checking AIs. In this paper, we investigate how model-agnostic, natural language explanations influence trust and reliance on a fact-checking AI. We construct explanations from four Conceptualisation Validations (CVs), namely consensual, expert, internal (logical), and empirical, which are foundational units of evidence that humans utilise to validate and accept new information. Our results show that providing explanations significantly enhances trust in AI, even in a fact-checking context where influencing pre-existing beliefs is often challenging, with different CVs inducing varying degrees of reliance. We find consensual explanations to be the least influential, with expert, internal, and empirical explanations exerting twice as much influence. However, we also find that users could not discern whether the AI directed them towards the truth, highlighting the dual nature of explanations as instruments that can both guide and mislead. Further, we uncover the presence of automation bias and automation aversion during collaborative fact-checking, indicating how users' previously established trust in AI can moderate their reliance on AI judgements. We also observe the manifestation of a 'boomerang' or backfire effect often seen in traditional corrections to misinformation: individuals who perceive the AI as biased or untrustworthy double down and reinforce their existing (in)correct beliefs when challenged by it. We conclude by presenting nuanced insights into the dynamics of user behaviour during AI-based fact-checking, offering important lessons for social media platforms.

[120]
Senuri Wijenayake, Niels van Berkel, Vassilis Kostakos, and Jorge Goncalves. 2019. Measuring the Effects of Gender on Online Social Conformity. Proceedings of the ACM on Human-Computer Interaction, Vol. 3, CSCW (Nov. 2019), 145:1--145:24. https://doi.org/10.1145/3359247
[121]
Senuri Wijenayake, Niels van Berkel, Vassilis Kostakos, and Jorge Goncalves. 2020. Quantifying the Effect of Social Presence on Online Social Conformity. Proceedings of the ACM on Human-Computer Interaction, Vol. 4, CSCW1 (May 2020), 55:1--55:22. https://doi.org/10.1145/3392863
[122]
Waheeb Yaqub, Otari Kakhidze, Morgan L. Brockman, Nasir Memon, and Sameer Patil. 2020. Effects of Credibility Indicators on Social Media News Sharing Intent. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI '20). Association for Computing Machinery, New York, NY, USA, 1--14. https://doi.org/10.1145/3313831.3376213
[123]
Ming Yin, Jennifer Wortman Vaughan, and Hanna Wallach. 2019. Understanding the Effect of Accuracy on Trust in Machine Learning Models. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (CHI '19). Association for Computing Machinery, New York, NY, USA, 1--12. https://doi.org/10.1145/3290605.3300509
[124]
Yunfeng Zhang, Q. Vera Liao, and Rachel K. E. Bellamy. 2020. Effect of confidence and explanation on accuracy and trust calibration in AI-assisted decision making. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (FAT* '20). Association for Computing Machinery, New York, NY, USA, 295--305. https://doi.org/10.1145/3351095.3372852

Cited By

• (2024) Exploring people's perceptions of LLM-generated advice. Computers in Human Behavior: Artificial Humans, Vol. 2, Issue 2, 100072. https://doi.org/10.1016/j.chbah.2024.100072. Online publication date: Aug 2024.


Published In

Proceedings of the ACM on Human-Computer Interaction, Volume 8, Issue CSCW2 (November 2024), 5177 pages. EISSN: 2573-0142. DOI: 10.1145/3703902.

This work is licensed under a Creative Commons Attribution 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 08 November 2024, in PACMHCI Volume 8, Issue CSCW2.

    Author Tags

    1. artificial intelligence
    2. conceptualisation validations
    3. credibility assessment
    4. human-ai interaction
    5. misinformation
    6. reliance
    7. trust

    Qualifiers

    • Research-article

Article Metrics

• Downloads (last 12 months): 167
• Downloads (last 6 weeks): 167

Reflects downloads up to 12 Dec 2024.

