Open access

When Users Control the Algorithms: Values Expressed in Practices on Twitter

Published: 07 November 2019

Abstract

Recent interest in ethical AI has brought a slew of values, including fairness, into conversations about technology design. Research in the area of algorithmic fairness tends to be rooted in questions of distribution that can be subject to precise formalism and technical implementation. We seek to expand this conversation to include the experiences of people subject to algorithmic classification and decision-making. By examining tweets about the "Twitter algorithm" we consider the wide range of concerns and desires Twitter users express. We find a concern with fairness (narrowly construed) is present, particularly in the ways users complain that the platform enacts a political bias against conservatives. However, we find another important category of concern, evident in attempts to exert control over the algorithm. Twitter users who seek control do so for a variety of reasons, many well justified. We argue for clearer definitions of what constitutes legitimate and illegitimate control over algorithmic processes, and for support for users who wish to enact their own collective choices.

References

[1]
Sara Ahmed. 2018. Refusal, Resignation and Complaint. Retrieved April 3, 2019 from https://feministkilljoys.com/2018/06/28/refusal-resignation-and-complaint/
[2]
Jane Bambauer and Tal Zarsky. 2018. The Algorithm Game. Notre Dame Law Review 94, 1 (2018), 1--48.
[3]
Solon Barocas and Andrew Selbst. 2016. Big Data's Disparate Impact. California Law Review 104 (2016).
[4]
James R. Beniger. 1989. The Control Revolution: Technological and Economic Origins of the Information Society. Harvard University Press, Cambridge, MA.
[5]
Reuben Binns. 2018. Fairness in Machine Learning: Lessons from Political Philosophy. In Proceedings of Machine Learning Research, 2018 Conference on Fairness, Accountability, and Transparency. 149--159.
[6]
Sophie Bishop. 2018. Anxiety, panic and self-optimization: Inequalities and the YouTube algorithm. Convergence 24, 1 (2018), 69--84. https://doi.org/10.1177/1354856517736978
[7]
Finn Brunton and Helen Nissenbaum. 2015. Obfuscation: a user's guide for privacy and protest. The MIT Press, Cambridge, MA.
[8]
Taina Bucher. 2017. The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms. Information Communication and Society 20, 1 (2017), 30--44. https://doi.org/10.1080/1369118X.2016.1154086
[9]
Taina Bucher. 2018. Cleavage-Control: Stories of Algorithmic Culture and Power in the Case of the YouTube "Reply Girls". In A Networked Self and Platforms, Stories, Connections. Routledge, New York and London, 125--143.
[10]
Alina Campan, Tobel Atnafu, Traian Marius Truta, and Joseph Nolan. 2019. Is Data Collection through Twitter Streaming API Useful for Academic Research? In Proceedings of the 2018 IEEE International Conference on Big Data. 3638--3643. https://doi.org/10.1109/BigData.2018.8621898
[11]
Jennifer A Chandler. 2007. A Right to Reach an Audience: An Approach to Intermediary Bias on the Internet. Hofstra Law Review 35 (2007), 1095--1138.
[12]
Kathy Charmaz. 2006. Constructing Grounded Theory: a practical guide through qualitative analysis. Sage Publications, Thousand Oaks, CA.
[13]
Alexandra Chouldechova. 2017. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Big Data 5, 2 (2017), 1--6. arXiv:1703.00056
[14]
Kelley Cotter. 2018. Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram. https://doi.org/10.1177/1461444818815684
[15]
Cynthia Dwork, Moritz Hardt, Toniann Pitassi, Omer Reingold, and Rich Zemel. 2011. Fairness Through Awareness. In Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. Cambridge, MA, 214--226. https://doi.org/10.1145/2090236.2090255 arXiv:1104.3913
[16]
Motahhare Eslami, Aimee Rickman, Kristen Vaccaro, Amirhossein Aleyasen, Andy Vuong, Karrie Karahalios, Kevin Hamilton, and Christian Sandvig. 2015. "I always assumed that I wasn't really that close to [her]": Reasoning about Invisible Algorithms in News Feeds. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). 153--162. https://doi.org/10.1145/2702123.2702556
[17]
Motahhare Eslami, Kristen Vaccaro, Karrie Karahalios, and Kevin Hamilton. 2017. "Be careful; things can be worse than they appear": Understanding Biased Algorithms and Users' Behavior around Them in Rating Platforms. In Proceedings of ICWSM. https://www.aaai.org/ocs/index.php/ICWSM/ICWSM17/paper/viewPaper/15697
[18]
Casey Fiesler and Nicholas Proferes. 2018. "Participant" Perceptions of Twitter Research Ethics. Social Media and Society 4, 1 (2018). https://doi.org/10.1177/2056305118763366
[19]
Batya Friedman and Helen Nissenbaum. 1996. Bias in computer systems. ACM Transactions on Information Systems 14, 3 (Jul 1996), 330--347. https://doi.org/10.1145/230538.230561
[20]
R Stuart Geiger. 2016. Bot-based collective blocklists in Twitter: the counterpublic moderation of harassment in a networked public space. Information Communication and Society 19, 6 (2016), 787--803. https://doi.org/10.1080/1369118X.2016.1153700
[21]
Tarleton Gillespie. 2017. Algorithmically recognizable: Santorum's Google problem, and Google's Santorum problem. Information Communication and Society 20, 1 (2017), 63--80. https://doi.org/10.1080/1369118X.2016.1199721
[22]
Kevin D Haggerty and Richard V Ericson. 2000. The surveillant assemblage. British Journal of Sociology 51, 4 (2000), 605--622. https://doi.org/10.1080/00071310020015280
[23]
Oliver Haimson and Anna Lauren Hoffmann. 2016. Constructing and enforcing "authentic" identity online: Facebook, real names, and non-normative identities. First Monday 21, 6 (2016).
[24]
Moritz Hardt, Nimrod Megiddo, Christos Papadimitriou, and Mary Wootters. 2016. Strategic Classification. In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science. 111--122. https://doi.org/10.1145/2840728.2840730
[25]
Moritz Hardt, Eric Price, and Nathan Srebro. 2016. Equality of Opportunity in Supervised Learning. In Advances in Neural Information Processing Systems, Vol. 29. 1--22. arXiv:1610.02413
[26]
Eszter Hargittai. 2018. Potential Biases in Big Data: Omitted Voices on Social Media. Social Science Computer Review (2018), 1--15. https://doi.org/10.1177/0894439318788322
[27]
Jeffrey Heer. 2019. Agency plus automation: Designing artificial intelligence into interactive systems. Proceedings of the National Academy of Sciences 116, 6 (2019), 1844--1850. https://doi.org/10.1073/pnas.1807184115
[28]
Tad Hirsch, Kritzia Merced, Shrikanth Narayanan, Zac E Imel, and David C Atkins. 2017. Designing Contestability: Interaction Design, Machine Learning, and Mental Health. In DIS. Designing Interactive Systems (Conference), Vol. 2017. 95--99. https://doi.org/10.1145/3064663.3064703
[29]
Eric Horvitz. 1999. Principles of mixed-initiative user interfaces. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems. Pittsburgh, PA, 159--166. https://doi.org/10.1145/302979.303030
[30]
John D Inazu. 2013. Virtual Assembly. Cornell Law Review 98, 5 (2013).
[31]
Meg Leta Jones. 2017. The right to a human in the loop: Political constructions of computer automation and personhood. Social Studies of Science (2017). https://doi.org/10.1177/0306312717699716
[32]
Matthew Joseph, Michael Kearns, Jamie Morgenstern, Seth Neel, and Aaron Roth. 2016. Rawlsian Fairness for Machine Learning. In 3rd Workshop on Fairness, Accountability, and Transparency Conference. 1--26.
[33]
Margot E Kaminski. 2019. Binary Governance: Lessons from the GDPR's approach to algorithmic accountability. Southern California Law Review 92, 6 (2019).
[34]
Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan. 2018. Inherent Trade-Offs in the Fair Determination of Risk Scores. In Proceedings of the 2018 ACM International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS). Irvine, CA, 1--23.
[35]
Min Kyung Lee and Su Baykal. 2017. Algorithmic Mediation in Group Decision: Fairness Perceptions of Algorithmically Mediated vs. Discussion-Based Social Division. In Proceedings of the 2017 ACM Conference on Computer Supported Cooperative Work and Social Computing. Portland, OR, 1035--1048.
[36]
Min Kyung Lee, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. Seoul, Republic of Korea, 1603--1612. https://doi.org/10.1145/2702123.2702548
[37]
Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland, and Patrick Vinck. 2017. Fair, transparent and accountable algorithmic decision-making processes: The premise, the proposed solutions, and the open challenges. Philosophy & Technology 31, 4 (2017), 611--627.
[38]
Alice E. Marwick and Danah Boyd. 2011. I tweet honestly, I tweet passionately: Twitter users, context collapse, and the imagined audience. New Media and Society 13, 1 (2011), 114--133. https://doi.org/10.1177/1461444810365313
[39]
Deirdre K Mulligan and Daniel S Griffin. 2018. Rescripting Search to Respect the Right to Truth. Georgetown Law Technology Review 2 (2018), 557--584.
[40]
Gina Neff and Peter Nagy. 2016. Automation, Algorithms, and Politics | Talking to Bots: Symbiotic Agency and the Case of Tay. International Journal of Communication 10 (2016).
[41]
Kimberly A. Neuendorf. 2002. The Content Analysis Guidebook. Sage Publications, Thousand Oaks, CA.
[42]
Helen Nissenbaum. 2001. How Computer Systems Embody Values. Computer (2001), 118--120.
[43]
Andrew D. Selbst and Solon Barocas. 2018. The Intuitive Appeal of Explainable Machines. Fordham Law Review 87 (2018), 1085. https://doi.org/10.2139/ssrn.3126971
[44]
Tao Stein, Erdong Chen, and Karan Mangla. 2011. Facebook immune system. In Proceedings of the 4th Workshop on Social Network Systems (SNS '11). 1--8. https://doi.org/10.1145/1989656.1989664
[45]
Zeynep Tufekci. 2014. Big Questions for Social Media Big Data: Representativeness, Validity and Other Methodological Pitfalls. In Proceedings of the 8th International AAAI Conference on Weblogs and Social Media. 505--514. arXiv:1403.7400
[46]
Zeynep Tufekci. 2017. Twitter and Tear Gas: The Power and Fragility of Networked Protest. Yale University Press, New Haven and London.
[47]
Emily van der Nagel. 2018. 'Networks that work too well': intervening in algorithmic connections. Media International Australia 168, 1 (2018), 81--92. https://doi.org/10.1177/1329878X18783002
[48]
Jeffrey Warshaw, Nina Taft, and Allison Woodruff. 2016. Intuitions, Analytics, and Killing Ants: Inference Literacy of High School-educated Adults in the US. In Proceedings of the Twelfth Symposium on Usable Privacy and Security. Denver, CO.
[49]
Michele Willson. 2017. Algorithms (and the) everyday. Information Communication and Society 20, 1 (2017), 137--150. https://doi.org/10.1080/1369118X.2016.1200645
[50]
Kevin Witzenberger. 2018. The Hyperdodge: How Users Resist Algorithmic Objects in Everyday Life. Media Theory 2, 2 (2018), 29--51. http://journalcontent.mediatheoryjournal.org/index.php/mt/article/view/56/46
[51]
Allison Woodruff, Sarah E. Fox, Steven Rousso-Schindler, and Jeffrey Warshaw. 2018. A Qualitative Exploration of Perceptions of Algorithmic Fairness. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1--14. https://doi.org/10.1145/3173574.3174230




Published In

Proceedings of the ACM on Human-Computer Interaction  Volume 3, Issue CSCW
November 2019
5026 pages
EISSN:2573-0142
DOI:10.1145/3371885
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published in PACMHCI Volume 3, Issue CSCW


Author Tags

  1. algorithmic fairness
  2. assembly
  3. automation
  4. control
  5. gaming the algorithm
  6. human autonomy
  7. twitter

Qualifiers

  • Research-article


Article Metrics

  • Downloads (Last 12 months)632
  • Downloads (Last 6 weeks)78
Reflects downloads up to 13 Dec 2024


Cited By

  • (2024) A Research on Algorithm Literacy of New Media Department Students [Yeni Medya Bölümü Öğrencilerinin Algoritma Okuryazarlıkları Üzerine Bir Araştırma]. Erciyes İletişim Dergisi 11, 1, 155--180. https://doi.org/10.17680/erciyesiletisim.1338510 (30 Jan 2024)
  • (2024) Simple Scores are Messy Signals: How Users Interpret Scores on Real Estate Platforms. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2, 1--25. https://doi.org/10.1145/3686935 (8 Nov 2024)
  • (2024) Mapping the Design Space of Teachable Social Media Feed Experiences. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1--20. https://doi.org/10.1145/3613904.3642120 (11 May 2024)
  • (2024) Identity Driven Information Ecosystems. Communication Theory 34, 2, 82--91. https://doi.org/10.1093/ct/qtae006 (23 Mar 2024)
  • (2024) The benefits, risks and bounds of personalizing the alignment of large language models to individuals. Nature Machine Intelligence 6, 4, 383--392. https://doi.org/10.1038/s42256-024-00820-y (23 Apr 2024)
  • (2024) Challenges in enabling user control over algorithm-based services. AI & Society 39, 1, 195--205. https://doi.org/10.1007/s00146-022-01395-1 (1 Feb 2024)
  • (2024) Falling behind again? Characterizing and assessing older adults' algorithm literacy in interactions with video recommendations. Journal of the Association for Information Science and Technology. https://doi.org/10.1002/asi.24960 (19 Oct 2024)
  • (2023) Algorithmic collective action in machine learning. Proceedings of the 40th International Conference on Machine Learning, 12570--12586. https://doi.org/10.5555/3618408.3618918 (23 Jul 2023)
  • (2023) The Interaction Between Offensive and Hate Speech on Twitter and Relevant Social Events in Spain. News Media and Hate Speech Promotion in Mediterranean Countries, 81--109. https://doi.org/10.4018/978-1-6684-8427-2.ch006 (30 Jun 2023)
  • (2023) Enlightened Participation: SME Perspectives about Net Zero on Social Media Using the Action Case Approach. IIM Kozhikode Society & Management Review. https://doi.org/10.1177/22779752231166521 (11 May 2023)
