DOI: 10.1145/3594536.3595122
Research Article | Open Access

Computational Accountability

Published: 07 September 2023

Abstract

Automated decision-making systems take decisions that matter. Some human or legal person remains responsible. Looking back, that person is accountable for the decisions made by the system, and may even be liable in case of damages. That puts constraints on the way in which decision-making systems are designed and on how they are deployed in organizations. In this paper, we analyze computational accountability in three steps. First, being accountable is analyzed as a relationship between an actor deploying the system and a critical forum of subjects, users, experts, and developers. Second, we discuss system design. In principle, evidence must be collected about the decision rule and the case data that were applied. However, many AI algorithms are not interpretable for humans. Alternatively, internal controls must ensure that a system uses valid algorithms and reliable data sets for training, which are appropriate for the application domain. Third, we discuss the governance model: roles, responsibilities, procedures, and infrastructure to ensure effective operation of these controls. The paper ends with a case study in the IT audit domain, to illustrate practical feasibility.
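
The design step sketched above, collecting evidence about the decision rule and the case data that were applied, can be pictured with a minimal Python sketch (illustrative only, not taken from the paper; all names, fields, and values are hypothetical assumptions): each automated decision is recorded together with the identifier and version of the decision rule, the case data, and the outcome, so the deploying actor can later account for the decision to a critical forum.

    import hashlib
    import json
    from datetime import datetime, timezone

    def log_decision(rule_id: str, rule_version: str, case_data: dict, outcome: str) -> dict:
        """Build one evidence record for an automated decision (illustrative sketch)."""
        case_blob = json.dumps(case_data, sort_keys=True).encode("utf-8")
        return {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            # The decision rule that was applied: model or rule-set identifier and version.
            "decision_rule": {"id": rule_id, "version": rule_version},
            # The case data that were applied, plus a digest that makes later tampering detectable.
            "case_data": case_data,
            "case_data_digest": hashlib.sha256(case_blob).hexdigest(),
            "outcome": outcome,
        }

    # Hypothetical usage: record a single screening decision for later review.
    record = log_decision(
        rule_id="benefit-screening-model",
        rule_version="2023-06",
        case_data={"applicant_id": "A-123", "income": 42000, "requested": 15000},
        outcome="rejected",
    )
    print(json.dumps(record, indent=2))

In practice, such records would feed the internal controls and governance roles discussed in the paper; the schema above is only one assumption about what such evidence could contain.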


Cited By

  • (2024) AI, Law and beyond. A transdisciplinary ecosystem for the future of AI & Law. Artificial Intelligence and Law. DOI: 10.1007/s10506-024-09404-y. Online publication date: 16 May 2024.



Published In

ICAIL '23: Proceedings of the Nineteenth International Conference on Artificial Intelligence and Law
June 2023
499 pages
ISBN: 9798400701979
DOI: 10.1145/3594536
This work is licensed under a Creative Commons Attribution 4.0 International License.

Sponsors

  • IAAIL: International Association for Artificial Intelligence and Law

Publisher

Association for Computing Machinery, New York, NY, United States

Publication History

Published: 07 September 2023

Author Tags

  1. AI
  2. computational accountability
  3. ethics
  4. internal controls

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

ICAIL 2023
Sponsor:
  • IAAIL

Acceptance Rates

Overall Acceptance Rate: 69 of 169 submissions (41%)

Bibliometrics & Citations

Article Metrics

  • Downloads (Last 12 months): 298
  • Downloads (Last 6 weeks): 38
Reflects downloads up to 15 Jan 2025
