Research article | Open access
FAccT '23 Conference Proceedings | DOI: 10.1145/3593013.3593992

Who Should Pay When Machines Cause Harm? Laypeople’s Expectations of Legal Damages for Machine-Caused Harm

Published: 12 June 2023

Abstract

The question of who should be held responsible when machines cause harm in high-risk environments is open to debate. Empirical research examining laypeople’s opinions has been largely restricted to the moral domain and has only inspected a limited set of negative outcomes. This study collects lay perceptions of legal responsibility for a wide range of machine-caused harms. We investigated how much people (N = 572) expect users and developers of machines to pay as legal damages in 37 diverse scenarios from the book “How Humans Judge Machines” by Hidalgo et al. [37]. Our results suggest that people’s expectations of legal damages for machine-caused harms are influenced by several factors, including perceived moral wrongness and the presence of victims. The scenarios exhibited substantial variation in how they were perceived and thus in the amount of legal damages they called for. People viewed both users and developers as legally responsible and expected the latter to pay higher damages. We discuss our findings in the context of future regulations of machines.

Supplemental Material

Appendix (PDF file)

References

[1] Chunrong Ai and Edward C Norton. 2003. Interaction terms in logit and probit models. Economics Letters 80, 1 (2003), 123–129.
[2] Julia Angwin, Madeleine Varner, and Ariana Tobin. 2016. Machine Bias: There’s Software Used Across the Country to Predict Future Criminals. And it’s Biased Against Blacks. ProPublica. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.
[3] Karina Arrambide, John Yoon, Cayley MacArthur, Katja Rogers, Alessandra Luz, and Lennart E Nacke. 2022. “I Don’t Want To Shoot The Android”: Players Translate Real-Life Moral Intuitions to In-Game Decisions in Detroit: Become Human. In CHI Conference on Human Factors in Computing Systems. 1–15.
[4] Peter M Asaro. 2016. The Liability Problem for Autonomous Artificial Agents. In AAAI Spring Symposia. 190–194.
[5] Edmond Awad, Sohan Dsouza, Jean-François Bonnefon, Azim Shariff, and Iyad Rahwan. 2020. Crowdsourcing moral machines. Commun. ACM 63, 3 (2020), 48–55.
[6] Edmond Awad, Sydney Levine, Max Kleiman-Weiner, Sohan Dsouza, Joshua B Tenenbaum, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. 2020. Drivers are blamed more than their automated cars when both make mistakes. Nature Human Behaviour 4, 2 (2020), 134–143.
[7] Susanne Beck. 2016. The problem of ascribing legal responsibility in the case of robotics. AI & Society 31, 4 (2016), 473–481.
[8] Jeremy Bentham. 1996. The Collected Works of Jeremy Bentham: An Introduction to the Principles of Morals and Legislation. Clarendon Press.
[9] Peter Cane. 2002. Responsibility in Law and Morality. Bloomsbury Publishing.
[10] Jessica F Cantlon and Elizabeth M Brannon. 2006. Shared system for ordering small and large numbers in monkeys and humans. Psychological Science 17, 5 (2006), 401–406.
[11] Logan S Casey, Jesse Chandler, Adam Seth Levine, Andrew Proctor, and Dara Z Strolovitch. 2017. Intertemporal differences among MTurk workers: Time-based sample variations and implications for online data collection. SAGE Open 7, 2 (2017), 2158244017712774.
[12] Stephen Cave, Claire Craig, Kanta Dihal, Sarah Dillon, Jessica Montgomery, Beth Singler, and Lindsay Taylor. 2018. Portrayals and perceptions of AI and why they matter. (2018).
[13] Paulius Čerka, Jurgita Grigienė, and Gintarė Sirbikytė. 2015. Liability for damages caused by artificial intelligence. Computer Law & Security Review 31, 3 (2015), 376–389.
[14] Ilias Chalkidis, Manos Fergadiotis, Prodromos Malakasiotis, Nikolaos Aletras, and Ion Androutsopoulos. 2020. LEGAL-BERT: The Muppets straight out of Law School. In Findings of the Association for Computational Linguistics: EMNLP 2020. 2898–2904.
[15] Marc Champagne and Ryan Tonkens. 2015. Bridging the responsibility gap in automated warfare. Philosophy & Technology 28, 1 (2015), 125–137.
[16] Bartek Chomanski. 2021. Liability for Robots: Sidestepping the Gaps. Philosophy & Technology 34, 4 (2021), 1013–1032.
[17] Mark Coeckelbergh. 2009. Virtual moral agency, virtual moral responsibility: on the moral significance of the appearance, perception, and performance of artificial agents. AI & Society 24, 2 (2009), 181–189.
[18] Mark Coeckelbergh. 2020. Artificial intelligence, responsibility attribution, and a relational justification of explainability. Science and Engineering Ethics 26, 4 (2020), 2051–2068.
[19] Jeffrey Dastin. 2018. Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.
[20] Filippo Santoni de Sio and Giulio Mecacci. 2021. Four Responsibility Gaps with Artificial Intelligence: Why they Matter and How to Address them. Philosophy & Technology (2021), 1–28.
[21] Mihailis E Diamantis. 2021. Algorithms acting badly: A solution from corporate law. Geo. Wash. L. Rev. 89 (2021), 801.
[22] Shari Seidman Diamond, Mary R Rose, Beth Murphy, and John Meixner. 2011. Damage anchors on real juries. Journal of Empirical Legal Studies 8 (2011), 148–178.
[23] James A Dungan, Alek Chakroff, and Liane Young. 2017. The relevance of moral norms in distinct relational contexts: Purity versus harm norms regulate self-directed actions. PLoS ONE 12, 3 (2017), e0173405.
[24] Ronald Dworkin. 1986. Law’s Empire. Harvard University Press.
[25] European Commission. 2021. Communication From the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: Fostering a European approach to Artificial Intelligence. https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=COM:2021:205:FIN
[26] European Commission. 2021. Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and Amending Certain Union Legislative Acts. https://artificialintelligenceact.eu/the-act/.
[27] Brian Flanagan, Guilherme FCF de Almeida, Noel Struchiner, and Ivar R Hannikainen. 2023. Moral appraisals guide intuitive legal determinations. Law and Human Behavior 47, 2 (2023), 367.
[28] Carly Giffin and Tania Lombrozo. 2016. Wrong or merely prohibited: Special treatment of strict liability in intuitive moral judgment. Law and Human Behavior 40, 6 (2016), 707.
[29] Jesse Graham, Jonathan Haidt, and Brian A Nosek. 2009. Liberals and conservatives rely on different sets of moral foundations. Journal of Personality and Social Psychology 96, 5 (2009), 1029.
[30] Kurt Gray, Jennifer K MacCormack, Teague Henry, Emmie Banks, Chelsea Schein, Emma Armstrong-Carter, Samantha Abrams, and Keely A Muscatell. 2022. The affective harm account (AHA) of moral judgment: Reconciling cognition and affect, dyadic morality and disgust, harm and purity. Journal of Personality and Social Psychology (2022).
[31] Kurt Gray, Chelsea Schein, and Adrian F Ward. 2014. The myth of harmless wrongs in moral cognition: Automatic dyadic completion from sin to suffering. Journal of Experimental Psychology: General 143, 4 (2014), 1600.
[32] Edith Greene and Brian Bornstein. 2000. Precious little guidance: Jury instruction on damage awards. Psychology, Public Policy, and Law 6, 3 (2000), 743.
[33] Damodar N Gujarati. 2021. Essentials of Econometrics. SAGE Publications.
[34] David J Gunkel. 2020. Mind the gap: responsible robotics and the problem of responsibility. Ethics and Information Technology 22, 4 (2020), 307–320.
[35] Will Douglas Heaven. 2020. Predictive policing algorithms are racist. They need to be dismantled. MIT Technology Review. https://www.technologyreview.com/2020/07/17/1005396/predictive-policing-algorithms-racist-dismantled-machine-learning-bias-criminal-justice/.
[36] Ottar Hellevik. 2009. Linear versus logistic regression when the dependent variable is a dichotomy. Quality & Quantity 43, 1 (2009), 59–74.
[37] César A Hidalgo, Diana Orghian, Jordi Albo Canals, Filipa De Almeida, and Natalia Martín. 2021. How Humans Judge Machines. MIT Press.
[38] Jialun Aaron Jiang, Morgan Klaus Scheuerman, Casey Fiesler, and Jed R Brubaker. 2021. Understanding international perceptions of the severity of harmful content online. PLoS ONE 16, 8 (2021), e0256762.
[39] Deborah G Johnson. 2006. Computer systems: Moral entities but not moral agents. Ethics and Information Technology 8, 4 (2006), 195–204.
[40] Deborah G Johnson. 2015. Technology with no human responsibility? Journal of Business Ethics 127, 4 (2015), 707–715.
[41] Anita Keshmirian, Babak Hemmatian, Bahador Bahrami, Ophelia Deroy, and Fiery Cushman. 2022. Diffusion of Punishment in Collective Norm Violations. (2022).
[42] Joshua Knobe. 2003. Intentional action and side effects in ordinary language. Analysis 63, 3 (2003), 190–194.
[43] Markus Langer, Tim Hunsicker, Tina Feldkamp, Cornelius J König, and Nina Grgić-Hlača. 2022. “Look! It’s a Computer Program! It’s an Algorithm! It’s AI!”: Does Terminology Affect Human Perceptions and Evaluations of Algorithmic Decision-Making Systems? In CHI Conference on Human Factors in Computing Systems. 1–28.
[44] Minha Lee, Peter Ruijten, Lily Frank, Yvonne de Kort, and Wijnand IJsselsteijn. 2021. People May Punish, But Not Blame Robots. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–11.
[45] Jamy Li, Xuan Zhao, Mu-Jung Cho, Wendy Ju, and Bertram F Malle. 2016. From trolley to autonomous vehicle: Perceptions of responsibility and moral norms in traffic accidents with self-driving cars. SAE Technical Paper 10 (2016), 2016–01.
[46] Gabriel Lima, Meeyoung Cha, Chihyung Jeon, and Kyung Sin Park. 2021. The Conflict Between People’s Urge to Punish AI and Legal Systems. Frontiers in Robotics and AI 8 (2021).
[47] Gabriel Lima, Nina Grgić-Hlača, and Meeyoung Cha. 2021. Human Perceptions on Moral Responsibility of AI: A Case Study in AI-Assisted Bail Decision-Making. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems. 1–17.
[48] Peng Liu, Manqing Du, and Tingting Li. 2021. Psychological consequences of legal responsibility misattribution associated with automated vehicles. Ethics and Information Technology 23, 4 (2021), 763–776.
[49] Bertram F Malle, Stuti Thapa Magar, and Matthias Scheutz. 2019. AI in the sky: How people morally evaluate human and machine decisions in a lethal strike dilemma. In Robotics and Well-Being. Springer, 111–133.
[50] Bertram F Malle, Matthias Scheutz, Thomas Arnold, John Voiklis, and Corey Cusimano. 2015. Sacrifice one for the good of many? People apply different moral norms to human and robot agents. In 2015 10th ACM/IEEE International Conference on Human-Robot Interaction (HRI). IEEE, 117–124.
[51] Timothy Maninger and Daniel B Shank. 2022. Perceptions of violations by artificial and human actors across moral foundations. Computers in Human Behavior Reports 5 (2022), 100154.
[52] Andreas Matthias. 2004. The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology 6, 3 (2004), 175–183.
[53] Sven Nyholm. 2020. Humans and Robots: Ethics, Agency, and Anthropomorphism. Rowman & Littlefield Publishers.
[54] Stefan Palan and Christian Schitter. 2018. Prolific.ac—A subject pool for online experiments. Journal of Behavioral and Experimental Finance 17 (2018), 22–27.
[55] Bryan Pietsch. 2021. 2 Killed in Driverless Tesla Car Crash, Officials Say. New York Times. https://www.nytimes.com/2021/04/18/business/tesla-fatal-crash-texas.html.
[56] Richard Powell. 1993. Law Today. Longman.
[57] William Lloyd Prosser. 1941. Handbook of the Law of Torts. Vol. 4. West Publishing.
[58] Valerie F Reyna, Valerie P Hans, Jonathan C Corbin, Ryan Yeh, Kelvin Lin, and Caisa Royer. 2015. The gist of juries: Testing a model of damage award decision making. Psychology, Public Policy, and Law 21, 3 (2015), 280.
[59] Rezvaneh Rezapour, Priscilla Ferronato, and Jana Diesner. 2019. How do Moral Values Differ in Tweets on Social Movements? In Conference Companion Publication of the 2019 on Computer Supported Cooperative Work and Social Computing. 347–351.
[60] Henrik Skaug Sætra. 2021. Confounding complexity of machine action: A Hobbesian account of machine responsibility. International Journal of Technoethics (IJT) 12, 1 (2021), 87–100.
[61] Eyal Sagi and Morteza Dehghani. 2014. Measuring moral rhetoric in text. Social Science Computer Review 32, 2 (2014), 132–144.
[62] Matthias Scheutz and Bertram F Malle. 2020. May machines take lives to save lives? Human perceptions of autonomous robots (with the capacity to kill). In Lethal Autonomous Weapons: Re-Examining the Law & Ethics of Robotic Warfare (2020).
[63] Anthony J Sebok. 2009. Punitive damages in the United States. In Punitive Damages: Common Law and Civil Law Perspectives. Springer, 155–196.
[64] Robert Sparrow. 2007. Killer robots. Journal of Applied Philosophy 24, 1 (2007), 62–77.
[65] Megan Stevenson and Sandra G Mayson. 2017. Bail reform: New directions for pretrial detention and release. Academy for Justice, A Report on Scholarship and Criminal Justice Reform (2017).
[66] Daniel W Tigard. 2020. There is no techno-responsibility gap. Philosophy & Technology (2020), 1–19.
[67] Kevin Tobia, Aileen Nielsen, and Alexander Stremitzer. 2021. When does physician use of AI increase liability? Journal of Nuclear Medicine 62, 1 (2021), 17–21.
[68] Alexa Van Brunt and Locke E Bowman. 2018. Toward a just model of pretrial release: A history of bail reform and a prescription for what’s next. J. Crim. L. & Criminology 108 (2018), 701.
[69] Ibo Van de Poel. 2015. The problem of many hands. In Moral Responsibility and the Problem of Many Hands. Routledge, 62–104.
[70] David C Vladeck. 2014. Machines without principals: Liability rules and artificial intelligence. Wash. L. Rev. 89 (2014), 117.
[71] Daisuke Wakabayashi. 2018. Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam. New York Times. https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html.
[72] Jessica Z Wang, Amy X Zhang, and David R Karger. 2022. Designing for Engaging with News using Moral Framing towards Bridging Ideological Divides. Proceedings of the ACM on Human-Computer Interaction 6, GROUP (2022), 1–23.
[73] Mark Warr. 1989. What is the perceived seriousness of crimes? Criminology 27, 4 (1989), 795–822.

Cited By

  • (2025) Information that matters. International Journal of Human-Computer Studies 193, C (January 2025). https://doi.org/10.1016/j.ijhcs.2024.103380
  • (2024) In the Walled Garden: Challenges and Opportunities for Research on the Practices of the AI Tech Industry. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 456–466 (June 2024). https://doi.org/10.1145/3630106.3658918
  • (2024) The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency, 337–358 (June 2024). https://doi.org/10.1145/3630106.3658910



Published In

FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency
June 2023
1929 pages
ISBN:9798400701924
DOI:10.1145/3593013
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 12 June 2023


Author Tags

  1. AI
  2. algorithm
  3. artificial intelligence
  4. damages
  5. liability
  6. machine
  7. punishment
  8. responsibility
  9. robot

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

FAccT '23
