A Roadmap of Explainable Artificial Intelligence: Explain to Whom, When, What and How?
Abstract
1 Introduction
2 Four-Pillar Structure of Our Roadmap
2.1 Explain to Whom
2.2 When to Explain
2.3 What to Explain
2.4 How to Explain
3 When to Explain, to Whom, and to Meet What Need
3.1 AI System Lifecycle under Consideration
3.2 Identify Stakeholders and Their Needs for Explanation at Different Stages
3.2.1 AI Theory Experts.
3.2.2 AI Development Experts.
3.2.3 Model Validation Team.
3.2.4 Model Operators.
3.2.5 Decision Makers.
3.2.6 Regulators.
3.2.7 Data Subjects.
3.2.8 Subjects of Decision.
3.2.9 General End-Users.
4 What to Explain
5 How to Explain
| “What to Explain” Question | XAI Methods |
|---|---|
| How explanations | Global DT, Global DR, Global FAE, GE |
| Why explanations | Local DT, Local DR, Local FAE, PE, TE, IpAE, OAE, Global DT, Global DR, CF, GE, ItAE |
| Why-not explanations | Local FAE, CF, Global DT, Global DR |
| What explanations | MV, GE, ItAE |
| What-if explanations | PDP, Global DT, Global DR, GE |
| What-else explanations | PE, CFI |
| How-to-be-that explanations | CF, Local PDP, Global DT, Global DR |
| How-to-still-be-this explanations | Local DT, Local DR, PDP, Global DT, Global DR, PE |
| Data explanations | PE, EDA |
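To make this mapping concrete, the sketch below (not taken from the paper) shows how two of the listed method families can be produced with scikit-learn: a global surrogate decision tree (Global DT) that addresses a "how" question for a black-box classifier, and a permutation-based feature attribution (a global FAE). Per-instance attribution methods such as SHAP or LIME would give the local FAE variant listed under "why" explanations. The dataset and model choices are placeholders.

```python
# Illustrative sketch: a global surrogate decision tree ("how") and a
# permutation-based feature attribution (global FAE) for a black-box model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.tree import DecisionTreeClassifier, export_text

# Stand-in black-box model; any classifier with a predict() method works.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global DT: a shallow tree trained to mimic the black box's predictions,
# whose printed rules approximate *how* the model decides overall.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))

# Global FAE: rank features by how much shuffling each one hurts performance.
result = permutation_importance(black_box, X, y, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

In practice, a surrogate of this kind is usually reported together with a fidelity measure (e.g., how often it agrees with the black box), since a low-fidelity surrogate explains little.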
6 Bridging Stakeholders’ Needs and XAI Methods: A Roadmap and Guideline
| Full Name | Abbreviation |
|---|---|
| Decision tree | DT |
| Decision rule | DR |
| Feature attribution explanation | FAE |
| Partial dependence plot | PDP |
| Counterfactual explanation/instance | CFE/CFI |
| Prototype explanation | PE |
| Text explanation | TE |
| Model visualization | MV |
| Graph explanation | GE |
| Input/internal/output association explanation | IpAE/ItAE/OAE |
| Exploratory data analysis | EDA |
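As a brief illustration of one of the abbreviated methods, the sketch below (not from the paper) generates a partial dependence plot (PDP), the kind of "what-if" explanation listed in Section 5. The dataset, regressor, and feature names are placeholders.

```python
# Illustrative sketch: a partial dependence plot (PDP) showing how predictions
# change as one feature varies while the others are averaged out.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# Partial dependence of the predicted target on two placeholder features.
PartialDependenceDisplay.from_estimator(model, X, features=["bmi", "bp"])
plt.show()
```

Passing kind="both" to from_estimator overlays individual conditional expectation (ICE) curves, which is useful when averaged effects hide heterogeneity across instances.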
6.1 AI Theory Experts
6.2 AI Development Experts
6.3 Model Validation Team
6.4 Model Operators
6.5 Decision Makers
6.6 Regulators
6.7 Data Subjects
6.8 Subjects of Decision
6.9 General End-Users
| Explain to Whom | When to Explain | Need | What to Explain | How to Explain | Examples |
|---|---|---|---|---|---|
| AI theory experts | | Insight and understanding of the internal logic of complex ML models | What | MV, GE, ItAE | [17, 61, 62, 90, 101, 157, 168, 199, 242, 244, 248, 250–255] |
| AI development experts | | Comparative analysis of multiple ML models | How, why, what | DT, DR, FAE, GE, PE, TE, AE, CF, MV | [65, 103, 139, 150, 155, 160] |
| AI development experts | | Insight and understanding of datasets | Data | PE, EDA | [55, 82] |
| AI development experts | | Analyzing potential errors, noise, and bias in the dataset | Data | PE, EDA | [186] |
| AI development experts | | Assisting with feature selection | How, why | DT, DR, FAE, GE, PE, TE, AE, CF | [20, 100, 105, 135, 148, 152, 176, 206, 212, 216, 240, 243] |
| AI development experts | | Optimizing model architecture and hyperparameters | How, why, what | DT, DR, FAE, GE, PE, TE, AE, CF, MV | [77, 81, 150, 197, 203, 238, 244] |
| AI development experts | | Checking the model's decisions | How, why, why-not, what, what-if, what-else, how-to, how-still | DT, DR, FAE, GE, PE, TE, AE, CF, MV, PDP | [19, 26, 50, 51, 106, 111, 126, 143, 173] |
| AI development experts | | Guiding model debugging and error refinement | How, why, why-not, what | DT, DR, FAE, GE, PE, TE, AE, CF, MV | [4, 15, 19, 20, 34, 51, 69, 76, 114, 162, 163, 170, 188, 198, 200, 207, 222] |
| AI development experts | | Adjusting the ML model to meet the user's expectations and needs | Why, why-not, what-if, how-to | DT, DR, Local FAE, PE, TE, AE, CF, GE, PDP | [22, 29, 51, 69, 114, 178, 188, 207] |
| AI development experts | | Assessing the impact of dataset shift | How, why, data | DT, DR, FAE, GE, PE, TE, AE, CF, EDA | [32, 38, 136, 229] |
| Model validation team | | Evaluating data suitability | Data | PE, EDA | [110] |
| Model validation team | | Reviewing the ML model's decision logic | How, why, why-not, what-if, what-else, how-to, how-still | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [39, 46, 113, 127] |
| Model validation team | | Determining compliance with regulations | How, why, why-not, what-if, what-else | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [109, 149, 156, 165, 169, 234, 247] |
| Model operators | | Ensuring correct and efficient interaction | How, why, what-if, what-else | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [134, 174, 194, 246, 249] |
| Decision makers | | Comprehending specific model decisions | Why, why-not, what-else | DT, DR, Local FAE, PE, TE, AE, CF, GE | [1, 44, 59, 66, 104, 131, 147, 166, 180, 184, 202, 226, 239, 241] |
| Decision makers | | Deepening overall understanding of the ML model and improving decisions | How | Global DT, Global DR, Global FAE, GE | [1, 35, 44, 166, 173, 224, 226] |
| Regulators | | Ancillary model review | How, why, why-not, what-if, what-else | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [3, 149, 165, 234, 247] |
| Regulators | | Assisting in apportioning responsibility | Why | DT, DR, Local FAE, PE, TE, AE, CF, GE | None |
| Data subjects | | Protecting personal data information | How, why | DT, DR, FAE, GE, PE, TE, AE, CF | None |
| Subjects of decision | | Understanding the decision and how to maintain or change it | Why, why-not, what-if, what-else, how-to, how-still | DT, DR, Local FAE, PE, TE, AE, CF, GE, PDP | [12, 147] |
| Subjects of decision | | Examining bias | How, why, why-not, what-if | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [40, 133, 149, 165, 230, 234, 247] |
| General end-users | | Establishing appropriate trust and building mental models | How, why, why-not, what-if, what-else | DT, DR, FAE, GE, PE, TE, AE, CF, PDP | [25, 54, 86, 114, 172, 215, 232, 236] |
| General end-users | | Protecting personal data information | How, why | DT, DR, FAE, GE, PE, TE, AE, CF | None |
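For the "how to be that" need of subjects of decision (for instance, a rejected loan applicant, in the spirit of the financial loan scenario in Section 7.2), a counterfactual instance states what minimal change would flip the model's decision. The sketch below is a deliberately naive greedy search on synthetic data, not the paper's method; dedicated counterfactual libraries (e.g., DiCE) additionally enforce feasibility and sparsity constraints.

```python
# Illustrative sketch: a naive counterfactual ("how to be that") search that
# nudges the most influential feature of a rejected instance until a linear
# model's decision flips. Synthetic data stands in for a loan-approval model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)
applicant = X[model.predict(X) == 0][0]        # one instance predicted "reject"

def naive_counterfactual(x, clf, step=0.1, max_steps=200):
    """Greedily move the largest-weight feature toward the 'approve' class."""
    x_cf = x.copy()
    j = np.argmax(np.abs(clf.coef_[0]))        # most influential feature
    direction = np.sign(clf.coef_[0][j])       # direction that raises the score
    for _ in range(max_steps):
        if clf.predict(x_cf.reshape(1, -1))[0] == 1:
            return x_cf
        x_cf[j] += step * direction
    return None

cf = naive_counterfactual(applicant, model)
if cf is not None:
    print("Required change per feature:", np.round(cf - applicant, 3))
```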
7 Case Studies
7.1 Autonomous Driving Scenario
7.2 Financial Loan Scenario
8 Discussion
8.1 Imbalanced Consideration of Different Stakeholders
8.2 Imbalanced Consideration of Different Research Directions
8.2.1 XAI Methods for Interpretation of Data.
8.2.2 XAI Methods for Reviewing Model Compliance.
8.2.3 XAI Methods for Accountability.
8.2.4 XAI Methods to Help Subjects of Decision Understand the ML Model.
8.3 Imbalance in the Use of Different XAI Methods
8.4 Unclear Stakeholders When Discussing XAI Methods
9 Conclusion
References