DOI: 10.1007/978-3-031-40953-0_34
Article

Safety Integrity Levels for Artificial Intelligence

Published: 19 September 2023

Abstract

Artificial Intelligence (AI) and Machine Learning (ML) technologies are rapidly being adopted to perform safety-related tasks in critical systems. These AI-based systems pose significant challenges, particularly regarding their assurance. Existing safety approaches defined in internationally recognized standards such as ISO 26262, DO-178C, UL 4600, EN 50126, and IEC 61508 do not provide detailed guidance on how to assure AI-based systems. For conventional (non-AI) systems, these standards adopt a ‘Level of Rigor’ (LoR) approach, where increasingly demanding engineering activities are required as risk associated with the system increases. This paper proposes an extension to existing LoR approaches, which considers the complexity of the task(s) being performed by an AI-based component. Complexity is assessed in terms of input entropy and output non-determinism, and then combined with the allocated Safety Integrity Level (SIL) to produce an AI-SIL. That AI-SIL may be used to identify appropriate measures and techniques for the development and verification of the system. The proposed extension is illustrated by examples from the automotive, aviation, and medical industries.
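The combination scheme described above can be illustrated with a minimal sketch. The paper defines its own complexity assessment and AI-SIL mapping, which are not reproduced here; the complexity levels, the combination rule, and the function names below are illustrative assumptions only.

```python
# Illustrative sketch only: the paper's actual AI-SIL tables are not
# reproduced here; this mapping is a hypothetical stand-in that shows
# the shape of the approach (complexity raises the required rigor).

from enum import IntEnum


class Complexity(IntEnum):
    """Hypothetical task-complexity levels, assessed (per the abstract)
    from input entropy and output non-determinism."""
    LOW = 0     # low input entropy, deterministic output
    MEDIUM = 1  # moderate entropy or some output non-determinism
    HIGH = 2    # high input entropy and non-deterministic output


def ai_sil(sil: int, complexity: Complexity) -> int:
    """Combine an allocated SIL (1-4, as in IEC 61508) with task
    complexity to obtain an AI-SIL.

    Assumption: higher complexity raises the required level of rigor,
    capped at the most demanding level (4). The paper's real mapping
    may differ."""
    if not 1 <= sil <= 4:
        raise ValueError("SIL must be between 1 and 4")
    return min(4, sil + int(complexity))


# e.g. a SIL 2 perception task judged HIGH complexity
print(ai_sil(2, Complexity.HIGH))  # -> 4
```

The resulting AI-SIL would then index into a table of development and verification measures, in the same way that conventional level-of-rigor tables are indexed by SIL.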


Cited By

  • Being Accountable is Smart: Navigating the Technical and Regulatory Landscape of AI-based Services for Power Grid. In: Proceedings of the 2024 International Conference on Information Technology for Social Good, pp. 118–126 (2024). DOI: 10.1145/3677525.3678651. Online publication date: 4 September 2024.



Published In

Computer Safety, Reliability, and Security. SAFECOMP 2023 Workshops: ASSURE, DECSoS, SASSUR, SENSEI, SRToITS, and WAISE, Toulouse, France, September 19, 2023, Proceedings
Sep 2023
447 pages
ISBN: 978-3-031-40952-3
DOI: 10.1007/978-3-031-40953-0

Publisher

Springer-Verlag

Berlin, Heidelberg


Author Tags

  1. Artificial Intelligence
  2. Machine Learning
  3. Safety Integrity Levels
  4. Safety-Critical Systems

