
By the Crowd and for the Crowd: Perceived Utility and Willingness to Contribute to Trustworthiness Indicators on Social Media

Published: 13 July 2021

Abstract

This study explores how people perceive the potential utility of trustworthiness indicators and how willing they are to contribute to them as a way to combat misinformation and disinformation on social media. Analysis of qualitative and quantitative survey data (N=376) indicates that a majority of respondents believe trustworthiness indicators would be valuable because they can reduce uncertainty and provide guidance on how to interact with content; however, perceptions of how and when these indicators provide value vary widely in their details. A majority of respondents are also willing to contribute to trustworthiness indicators on social media to some extent, motivated by a sense of duty and by personal expertise in information-verification practices, but they are wary of the effort or burden contributing would place on them. Respondents who did not want to use or contribute to trustworthiness indicators attributed this to a lack of faith in the concept, stemming from biases on social media that they perceived as inherent and insurmountable. Together, our findings highlight the complexity of designing, structuring, and presenting trustworthiness indicators for a diverse set of user attitudes and perceptions.



    Published In

    Proceedings of the ACM on Human-Computer Interaction, Volume 5, Issue GROUP (July 2021), 190 pages.
    EISSN: 2573-0142
    DOI: 10.1145/3475950
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 July 2021, in PACMHCI Volume 5, Issue GROUP.


    Author Tags

    1. crowdsource
    2. disinformation
    3. fake news
    4. indicators
    5. information
    6. labels
    7. misinformation
    8. sharing
    9. social media
    10. trustworthiness
    11. verification
    12. warnings

    Qualifiers

    • Research-article

    Funding Sources

    • 2016-2017 CSU-AAUP Research Grant

    Article Metrics

    • Downloads (last 12 months): 48
    • Downloads (last 6 weeks): 5
    Reflects downloads up to 21 Dec 2024.

    Cited By

    • (2024) The Landscape of User-centered Misinformation Interventions - A Systematic Literature Review. ACM Computing Surveys 56(11), 1-36. DOI: 10.1145/3674724. Online publication date: 25-Jun-2024.
    • (2024) From Adolescents' Eyes: Assessing an Indicator-Based Intervention to Combat Misinformation on TikTok. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-20. DOI: 10.1145/3613904.3642264. Online publication date: 11-May-2024.
    • (2024) Misleading information in crises: exploring content-specific indicators on Twitter from a user perspective. Behaviour & Information Technology, 1-34. DOI: 10.1080/0144929X.2024.2373166. Online publication date: 8-Jul-2024.
    • (2024) Navigating misinformation in voice messages: Identification of user-centered features for digital interventions. Risk, Hazards & Crisis in Public Policy 15(2), 203-235. DOI: 10.1002/rhc3.12296. Online publication date: 25-Mar-2024.
    • (2023) SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice. 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), 868-895. DOI: 10.1109/EuroSP57164.2023.00056. Online publication date: Jul-2023.
