DOI: 10.1145/3581641.3584056

Resilience Through Appropriation: Pilots’ View on Complex Decision Support

Published: 27 March 2023

Abstract

Intelligent decision support tools (DSTs) hold the promise of improving the quality of human decision-making in challenging situations like diversions in aviation. To achieve these improvements, a common goal in DST design is to calibrate decision makers’ trust in the system. However, this perspective is mostly informed by controlled studies and might not fully reflect the real-world complexity of diversions. To understand how DSTs can be beneficial from the perspective of those who understand the complexity of diversions best, we interviewed professional pilots. To facilitate discussions, we built two low-fidelity prototypes, each representing a different role a DST could assume: (a) actively suggesting and ranking airports based on pilot-specified criteria, and (b) unobtrusively hinting at data points the pilot should be aware of. We find that while pilots would not blindly trust a DST, they nevertheless reject deliberate trust calibration in the moment of the decision. We revisit appropriation as a lens for understanding this apparent contradiction, as well as a range of means to enable appropriation. Aside from the commonly considered need for transparency, these include directability and continuous support throughout the entire decision process. Based on our design exploration, we encourage expanding the view of DST design beyond trust calibration at the point of the actual decision.
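To make the first prototype role more concrete: below is a minimal sketch of how a DST might rank candidate diversion airports by a weighted sum of pilot-specified criteria. The criteria, weights, normalization constants, and airport data are illustrative assumptions for this sketch, not the prototype built for the study.

    from dataclasses import dataclass

    @dataclass
    class Airport:
        icao: str
        runway_length_m: float   # longest usable runway, metres (assumed criterion)
        distance_nm: float       # distance from present position, nautical miles
        weather_score: float     # 0.0 (poor) to 1.0 (good), e.g. derived from METAR
        has_medical: bool        # relevant for medical diversions

    def rank_airports(airports, weights):
        """Rank candidates by a weighted sum of normalized, pilot-weighted criteria."""
        def score(a):
            return (
                weights["runway"]     * min(a.runway_length_m / 3000.0, 1.0)
                + weights["distance"] * (1.0 - min(a.distance_nm / 500.0, 1.0))
                + weights["weather"]  * a.weather_score
                + weights["medical"]  * (1.0 if a.has_medical else 0.0)
            )
        return sorted(airports, key=score, reverse=True)

    # Hypothetical candidates and a pilot who weights weather most heavily.
    candidates = [
        Airport("EDDM", 4000, 120, 0.9, True),
        Airport("LOWI", 2000,  80, 0.4, True),
        Airport("EDDF", 4000, 250, 0.7, True),
    ]
    weights = {"runway": 0.3, "distance": 0.2, "weather": 0.4, "medical": 0.1}
    for airport in rank_airports(candidates, weights):
        print(airport.icao)

In the paper’s terms, such a ranked list corresponds to the DST actively suggesting options (role a); role (b) would instead surface individual data points, such as deteriorating weather at an alternate, without ranking or recommending.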


Cited By

  • (2024) Exploring the Effects of User Input and Decision Criteria Control on Trust in a Decision Support Tool for Spare Parts Inventory Management. In Proceedings of the International Conference on Mobile and Ubiquitous Multimedia, 313–323. https://doi.org/10.1145/3701571.3701585 (published 1 December 2024)
  • (2024) Beyond Recommendations: From Backward to Forward AI Support of Pilots’ Decision-Making Process. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2, 1–32. https://doi.org/10.1145/3687024 (published 8 November 2024)


Published In

IUI '23: Proceedings of the 28th International Conference on Intelligent User Interfaces
March 2023, 972 pages
ISBN: 9798400701061
DOI: 10.1145/3581641
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. AI-assisted decision-making
  2. appropriation
  3. aviation
  4. decision support tools
  5. human-AI interaction
  6. imperfect AI
  7. intelligent decision support
  8. naturalistic decision-making

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Bundesministerium für Wirtschaft und Energie

Conference

IUI '23

Acceptance Rates

Overall Acceptance Rate 746 of 2,811 submissions, 27%


Article Metrics

  • Downloads (last 12 months): 89
  • Downloads (last 6 weeks): 7
Reflects downloads up to 13 December 2024

