
Bridging the Gap Between Ethics and Practice: Guidelines for Reliable, Safe, and Trustworthy Human-centered AI Systems

Published: 16 October 2020

Abstract

This article attempts to bridge the gap between widely discussed ethical principles of Human-centered AI (HCAI) and practical steps for effective governance. Since HCAI systems are developed and implemented in multiple organizational structures, I propose 15 recommendations at three levels of governance: team, organization, and industry. The recommendations are intended to increase the reliability, safety, and trustworthiness of HCAI systems: (1) reliable systems based on sound software engineering practices, (2) safety culture through business management strategies, and (3) trustworthy certification by independent oversight. Software engineering practices within teams include audit trails to enable analysis of failures, software engineering workflows, verification and validation testing, bias testing to enhance fairness, and explainable user interfaces. The safety culture within organizations comes from management strategies that include leadership commitment to safety, hiring and training oriented to safety, extensive reporting of failures and near misses, internal review boards for problems and future plans, and alignment with industry standard practices. The trustworthiness certification comes from industry-wide efforts that include government interventions and regulation, accounting firms conducting external audits, insurance companies compensating for failures, non-governmental and civil society organizations advancing design principles, and professional organizations and research institutes developing standards, policies, and novel ideas. The larger goal of effective governance is to limit the dangers and increase the benefits of HCAI to individuals, organizations, and society.
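The abstract's team-level recommendations name concrete engineering practices, including bias testing to enhance fairness. As an illustration only (not code from the article), here is a minimal sketch of one such test: a hypothetical check that compares positive-outcome rates across groups using the "four-fifths" disparate-impact heuristic. The function and field names (`disparate_impact_ratio`, `group`, `approved`) are assumptions for the example.

```python
from collections import defaultdict

def disparate_impact_ratio(records, group_key, outcome_key):
    """Ratio of the lowest to the highest positive-outcome rate across groups.

    A common heuristic flags ratios below 0.8 (the "four-fifths rule").
    `records` is a list of dicts; all names here are illustrative.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for rec in records:
        group = rec[group_key]
        totals[group] += 1
        positives[group] += 1 if rec[outcome_key] else 0
    # Per-group rate of positive outcomes, then min/max ratio.
    rates = {g: positives[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical decision log: group A approved 2 of 3, group B 1 of 3.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
ratio, rates = disparate_impact_ratio(decisions, "group", "approved")
```

A check like this would run as part of the verification and validation workflow the abstract describes, with results written to the audit trail so failures can be analyzed later.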



Published In

ACM Transactions on Interactive Intelligent Systems, Volume 10, Issue 4: Special Issue on IUI 2019 Highlights
December 2020, 274 pages
ISSN: 2160-6455
EISSN: 2160-6463
DOI: 10.1145/3430697
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 16 October 2020
Accepted: 01 August 2020
Received: 01 May 2019
Published in TIIS Volume 10, Issue 4


Author Tags

  1. Artificial Intelligence
  2. Human-Computer Interaction
  3. Human-centered AI
  4. design
  5. independent oversight
  6. management strategies
  7. reliable
  8. safe
  9. software engineering practices
  10. trustworthy

Qualifiers

  • Research-article
  • Research
  • Refereed
