DOI: 10.1145/3514094.3534130
Research article | Open access

A Penalty Default Approach to Preemptive Harm Disclosure and Mitigation for AI Systems

Published: 27 July 2022

Abstract

As the AI industry matures, it is important to ensure that the organizations developing these systems have sufficient incentives to identify and mitigate risks and harms. Unfortunately, the profit motive is often misaligned with this goal: successful work to identify or reduce risk rarely has direct, tangible benefits. In this paper, we consider the use of regulatory penalty defaults as a way to counter these perverse incentives. A regulatory penalty default regime consists of two parts: a regulatory penalty default and a mechanism to bargain around the default. The regulatory penalty default induces private actors to research and mitigate potential harms in order to limit liability, making the benefits of risk mitigation tangible. The bargaining mechanism provides incentives for companies to go beyond a prescriptive compliance threshold by creating a compelling case for escape from the default. With a focus on the policy landscape in the United States, we propose and discuss potential regulatory penalty default regimes for AI systems. For each of our proposals, we also discuss accompanying regulatory pathways for the bargaining process. While regulatory penalty default regimes are not a panacea (we discuss several drawbacks of the proposed methods), they are an important tool to consider in the regulation of AI systems.
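To make the incentive argument concrete, the following is a minimal toy calculation, added here as an illustration and not drawn from the paper; all figures for mitigation cost, harm probability, and liability are hypothetical. It sketches how a stiff penalty default, which a firm can escape only by credibly researching and mitigating harms, can flip a profit-motivated firm's choice, whereas under the status quo the expected liability is too small to justify the same investment.

# Illustrative toy model (not from the paper): how a regulatory penalty
# default can flip a firm's incentive to invest in harm research and
# mitigation. All numbers are hypothetical.

MITIGATION_COST = 2.0      # cost of researching and mitigating harms
HARM_PROBABILITY = 0.1     # chance an unmitigated harm materializes
BASELINE_DAMAGES = 5.0     # expected damages today if the harm occurs
PENALTY_DEFAULT = 60.0     # stiff default penalty if the harm occurs and the
                           # firm made no credible disclosure/mitigation effort

def expected_cost(mitigate: bool, penalty_default_regime: bool) -> float:
    """Firm's expected cost for a given regime and mitigation choice."""
    if mitigate:
        # In this toy model, mitigation removes the residual harm risk and,
        # under a penalty-default regime, lets the firm bargain out of the default.
        return MITIGATION_COST
    liability = PENALTY_DEFAULT if penalty_default_regime else BASELINE_DAMAGES
    return HARM_PROBABILITY * liability

for regime in (False, True):
    choice = min((False, True), key=lambda m: expected_cost(m, regime))
    label = "penalty default" if regime else "status quo"
    print(f"{label:15s}: firm mitigates = {choice} "
          f"(mitigate: {expected_cost(True, regime):.2f}, "
          f"ignore: {expected_cost(False, regime):.2f})")

With these hypothetical numbers, the firm ignores the risk under the status quo (expected cost 0.50 versus 2.00 to mitigate) but mitigates under the penalty default (2.00 versus 6.00), which is the incentive flip the abstract describes.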


Cited By

• Black-Box Access is Insufficient for Rigorous AI Audits. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 2254-2272. https://doi.org/10.1145/3630106.3659037 (published 3 June 2024)
• Theoretical Preconditions of Criminal Imputation for Negligence Crime Involving AI. In Principle of Criminal Imputation for Negligence Crime Involving Artificial Intelligence, 25-57. https://doi.org/10.1007/978-981-97-0722-5_2 (published 25 February 2024)


    Published In

AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society
July 2022, 939 pages
ISBN: 978-1-4503-9247-1
DOI: 10.1145/3514094
This work is licensed under a Creative Commons Attribution 4.0 International License.

    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. artificial intelligence law
    2. computing and society
    3. penalty defaults
    4. technology policy


Conference

AIES '22: AAAI/ACM Conference on AI, Ethics, and Society
Oxford, United Kingdom

    Acceptance Rates

    Overall Acceptance Rate 61 of 162 submissions, 38%

