research-article
Public Access

After-Action Review for AI (AAR/AI)

Published: 03 September 2021

Abstract

Explainable AI is growing in importance as AI pervades modern society, but few have studied how explainable AI can directly support people trying to assess an AI agent. Without a rigorous process, people may approach assessment in ad hoc ways, so that assessments of the same agent can vary widely due only to variations in their processes. After-Action Review (AAR) is a method some military organizations use to assess human agents, and it has been validated in many domains. Drawing on this strategy, we derived an After-Action Review for AI (AAR/AI) to organize the ways people assess reinforcement learning agents in a sequential decision-making environment. We then investigated what AAR/AI brought to human assessors in two qualitative studies. The first gathered formative information about AAR/AI; the second built on those results and also varied the type of explanation (model-free vs. model-based) used in the AAR/AI process. Among the results: (1) participants reported that AAR/AI helped them organize their thoughts and reason logically about the agent, (2) AAR/AI encouraged participants to reason about the agent from a wide range of perspectives, and (3) participants were able to leverage AAR/AI with the model-based explanations to falsify the agent's predictions.




Information

Published In

ACM Transactions on Interactive Intelligent Systems, Volume 11, Issue 3-4
December 2021, 483 pages
ISSN: 2160-6455
EISSN: 2160-6463
Issue DOI: 10.1145/3481699
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 03 September 2021
Accepted: 01 February 2021
Revised: 01 January 2021
Received: 01 August 2020
Published in TIIS Volume 11, Issue 3-4


Author Tags

  1. Explainable AI
  2. after-action review

Qualifiers

  • Research-article
  • Refereed

Funding Sources

  • DARPA


Citations

Cited By

  • (2023) Measures for explainable AI: Explanation goodness, user satisfaction, mental models, curiosity, trust, and human-AI performance. Frontiers in Computer Science 5 (6 Feb. 2023). https://doi.org/10.3389/fcomp.2023.1096257
  • (2023) Increasing the Value of XAI for Users: A Psychological Perspective. KI - Künstliche Intelligenz 37, 2-4 (17 July 2023), 237–247. https://doi.org/10.1007/s13218-023-00806-9
  • (2022) Military Applications of Machine Learning: A Bibliometric Perspective. Mathematics 10, 9 (22 April 2022), 1397. https://doi.org/10.3390/math10091397
  • (2022) Finding AI's Faults with AAR/AI: An Empirical Study. ACM Transactions on Interactive Intelligent Systems 12, 1 (4 March 2022), 1–33. https://doi.org/10.1145/3487065
  • (2021) "Why did my AI agent lose?": Visual Analytics for Scaling Up After-Action Review. In 2021 IEEE Visualization Conference (VIS) (Oct. 2021), 16–20. https://doi.org/10.1109/VIS49827.2021.9623268
