
Bridging the civilian-military divide in responsible AI principles and practices

Published: 15 April 2023

Abstract

Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI for warfare. However, varying ethical and procedural approaches to technological development, research emphasis on offensive uses of AI, and lack of appropriate venues for multistakeholder dialogue have led to differing operationalization of responsible AI principles and practices among civilian and defense entities. We argue that the disconnect between civilian and defense responsible development and use practices leads to underutilization of responsible AI research and hinders the implementation of responsible AI principles in both communities. We propose a research roadmap and recommendations for dialogue to increase exchange of responsible AI development and use practices for AI systems between civilian and defense communities. We argue that generating more opportunities for exchange will stimulate global progress in the implementation of responsible AI principles.



Published In

Ethics and Information Technology, Volume 25, Issue 2, Jun 2023

Publisher

Kluwer Academic Publishers, United States

Author Tags

  1. Artificial intelligence (AI)
  2. Machine learning
  3. AI ethics
  4. Responsible AI
  5. Military
  6. Military applications

Qualifiers

  • Research-article
