Research Article | Open Access
DOI: 10.1145/3600211.3604700

Why We Need to Know More: Exploring the State of AI Incident Documentation Practices

Published: 29 August 2023

Abstract

To enable the development and use of safe and equitable artificial intelligence (AI) systems, AI engineers must monitor deployed AI systems and learn from past AI incidents where failures have occurred. Around the world, public databases for cataloging AI systems and resulting harms are instrumental in promoting awareness of potential AI harms among policymakers, researchers, and the public. However, despite growing recognition of the potential of AI systems to produce harms, the causes of AI system failures remain elusive and AI incidents continue to occur. For example, incidents of AI bias are frequently reported and discussed, yet biased systems continue to be developed and deployed.
This raises the question: how are we learning from documented incidents? What information do we need to analyze AI incidents and develop new AI engineering best practices? This paper examines reporting techniques from a variety of AI stakeholders and across different industries, identifies requirements for the design of effective AI incident documentation, and proposes policy recommendations for augmenting current practice.


Published In

AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society
August 2023
1026 pages
ISBN:9798400702310
DOI:10.1145/3600211
This work is licensed under a Creative Commons Attribution 4.0 International License.


Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 29 August 2023


Author Tag

  1. Explainable Artificial Intelligence

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Conference

AIES '23
AIES '23: AAAI/ACM Conference on AI, Ethics, and Society
August 8 - 10, 2023
Montréal, QC, Canada

Acceptance Rates

Overall Acceptance Rate 61 of 162 submissions, 38%


Article Metrics

  • Downloads (Last 12 months)1,364
  • Downloads (Last 6 weeks)108
Reflects downloads up to 20 Jan 2025

Cited By

  • (2024) Advancing Trustworthy AI for Sustainable Development: Recommendations for Standardising AI Incident Reporting. 2024 ITU Kaleidoscope: Innovation and Digital Transformation for a Sustainable World (ITU K), 1–8. https://doi.org/10.23919/ITUK62727.2024.10772925. Online publication date: 21-Oct-2024
  • (2024) Misinformation, Fraud, and Stereotyping: Towards a Typology of Harm Caused by Deepfakes. Companion Publication of the 2024 Conference on Computer-Supported Cooperative Work and Social Computing, 533–538. https://doi.org/10.1145/3678884.3685938. Online publication date: 11-Nov-2024
  • (2024) Decoding Real-World Artificial Intelligence Incidents. Computer 57(11), 71–81. https://doi.org/10.1109/MC.2024.3432492. Online publication date: 1-Nov-2024
  • (2024) Addressing AI Risks in Critical Infrastructure: Formalising the AI Incident Reporting Process. 2024 IEEE International Conference on Electronics, Computing and Communication Technologies (CONECCT), 1–6. https://doi.org/10.1109/CONECCT62155.2024.10677312. Online publication date: 12-Jul-2024
  • (2024) Making Generative Artificial Intelligence a Public Problem. Seeing Publics and Sociotechnical Problem-Making in Three Scenes of AI Failure. Javnost - The Public 31(1), 89–105. https://doi.org/10.1080/13183222.2024.2319000. Online publication date: 28-Mar-2024
  • (2024) Policy advice and best practices on bias and fairness in AI. Ethics and Information Technology 26(2). https://doi.org/10.1007/s10676-024-09746-w. Online publication date: 29-Apr-2024
