Research article · Open access
DOI: 10.1145/3593013.3594092

Taking Algorithms to Courts: A Relational Approach to Algorithmic Accountability

Published: 12 June 2023

Abstract

In widely used sociological descriptions of how accountability is structured through institutions, an “actor” (e.g., the developer) is accountable to a “forum” (e.g., regulatory agencies) empowered to pass judgments on and demand changes from the actor or enforce sanctions. However, questions about structuring accountability persist: why and how is a forum compelled to keep making demands of the actor when such demands are called for? To whom is a forum accountable in the performance of its responsibilities, and how can its practices and decisions be contested? In the context of algorithmic accountability, we contend that a robust accountability regime requires a triadic relationship, wherein the forum is also accountable to another entity: the public(s). Typically, as is the case with environmental impact assessments, public(s) make demands upon the forum's judgments and procedures through the courts, thereby establishing a minimum standard of due diligence. However, core challenges relating to: (1) lack of documentation, (2) difficulties in claiming standing, and (3) struggles over the admissibility of expert evidence on, and the difficulty of achieving consensus about, the workings of algorithmic systems in adversarial proceedings prevent the public from approaching the courts when faced with algorithmic harms. In this paper, we demonstrate that the courts are the primary route—and the primary roadblock—in the pursuit of redress for algorithmic harms. Courts often find algorithmic harms non-cognizable and rarely require developers to address material claims of harm. To address the core challenges of taking algorithms to court, we develop a relational approach to algorithmic accountability that emphasizes not what the actors do nor the results of their actions, but rather how interlocking relationships of accountability are constituted in a triadic relationship between actors, forums, and public(s).
As is the case in other regulatory domains, we believe that impact assessments (and similar accountability documentation) can provide the grounds for contestation between these parties, but only when that triad is structured such that the public(s) are able to cohere around shared experiences and interests, contest the outcomes of algorithmic systems that affect their lives, and make demands upon the other parties. Where courts now find algorithmic harms non-cognizable, an impact assessment regime can potentially create procedural rights to protect substantive rights of the public(s). This would require algorithmic accountability policies currently under consideration to provide the public(s) with adequate standing in courts, and opportunities to access and contest the actor's documentation and the forum's judgments.

Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023, 1929 pages
ISBN: 9798400701924
DOI: 10.1145/3593013
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

Publisher

Association for Computing Machinery, New York, NY, United States
