eXplainable Artificial Intelligence in Process Engineering: Promises, Facts, and Current Limitations
Figure 1. Concepts of XAI methodologies.
Figure 2. SLR protocol used.
Figure 3. PRISMA diagram of the selection process.
Figure 4. Distribution of document type for eligible and reviewed documents: (a) eligible documents; (b) reviewed documents.
Figure 5. Cumulative distribution of document number for eligible and reviewed documents by year: (a) eligible documents; (b) reviewed documents.
Figure 6. Document word clouds before and after the selection, generated using author keywords: (a) eligible documents; (b) reviewed documents.
Figure 7. Stacked bar chart: the bars represent counts based on RQ.1, while the stacks represent counts concerning RQ.2.
Figure 8. Used dataset type.
Figure 9. Used dataset type.
Figure 10. Stacked bar chart: the bars represent counts based on RQ.7, while the stacks represent counts concerning the different approaches.
Abstract
1. Introduction
2. Terms and Scope of This Research
3. Theoretical Framework on eXplainable Artificial Intelligence
- While striving for optimal performance, intrinsic explainability approaches incorporate the rationale behind a decision from the start of training. These techniques are typically employed to produce explanations for transparent models, such as Bayesian models, decision trees, fuzzy logic, linear/logistic regression, and others, which offer some degree of interpretability on their own. In particular, new types of interpretable models arise in the literature, such as interpretable cascades [39] or non-iterative artificial neural networks that provide interpretable results in polynomial form [40]. By definition, intrinsic methods are model-specific, meaning that explainability is limited to that particular class or kind of algorithm.
- In addition to the underlying model, post-hoc techniques also use an external or surrogate model. When an AI model does not fulfill any of the requirements to be declared transparent, a different approach must be used to explain its choices: the base model stays the same, while the external model imitates its behavior to provide explanations to users. These techniques are usually linked to models (such as tree ensembles, support vector machines, multi-layer neural networks, convolutional neural networks, recurrent neural networks, and similar) whose inference mechanism is opaque to users. Because they are often model-agnostic, post-hoc techniques can be used with virtually any AI model.
- Visualization: This technique uses visual representations of an ML model, such as a deep network, as a natural way to inspect the patterns hidden inside it. Three well-known visualization techniques are individual conditional expectation (ICE), the partial dependence plot (PDP), and surrogate models (a minimal code sketch follows this list):
  - Surrogate models are simple models used to describe more complex ones: to understand the latter, an interpretable, trainable model, such as a decision tree or linear model, is fitted to the original black-box model's predictions;
  - The PDP is a graphical representation that visualizes the averaged partial relationship between one or more input variables and the predictions of a black-box model;
  - While PD plots provide a rough, averaged overview of a model's behavior, ICE plots disaggregate the PDP output to show interactions and individual differences.
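Below is a minimal sketch of these three visualization techniques, assuming scikit-learn and a synthetic regression task; the dataset, model choice, and tree depth are illustrative assumptions, not taken from any reviewed paper.

```python
# A minimal sketch of surrogate models, PDP, and ICE; all names and
# settings are illustrative.
import matplotlib.pyplot as plt
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=5, random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)

# Surrogate model: a shallow decision tree fitted to the black-box
# predictions, yielding human-readable rules that approximate it.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, black_box.predict(X))
print(export_text(surrogate, feature_names=[f"x{i}" for i in range(5)]))

# PDP and ICE: kind="both" overlays the averaged partial dependence
# curve (PDP) on the individual per-sample curves (ICE) for feature 0.
PartialDependenceDisplay.from_estimator(black_box, X, features=[0], kind="both")
plt.show()
```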
- Knowledge extraction is the process of obtaining, in an intelligible way, the information that the algorithm stores as an internal representation during training. For an ANN, knowledge extraction confronts the difficulty of extracting explanations from the network. Rule extraction and model distillation are two kinds of techniques for obtaining information concealed in complex algorithms (a minimal sketch follows this list):
  - Rule extraction provides a symbolic, understandable account of what the algorithm has learned during training by using its inputs and outputs to extract rules that mimic the decision-making process;
  - Model distillation is a compression technique used to transfer information (dark knowledge) from deep networks (the "teacher") to shallow networks (the "student").
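Below is a minimal sketch of rule extraction and, by extension, distillation, assuming scikit-learn; the teacher network, dataset, and student depth are illustrative assumptions.

```python
# A minimal rule-extraction/distillation sketch: a shallow "student"
# tree mimics a trained "teacher" network; all settings illustrative.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
teacher = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                        random_state=0).fit(X, y)

# Rule extraction: fit the student on the teacher's hard predictions,
# then read the learned decision rules as text.
student = DecisionTreeClassifier(max_depth=3).fit(X, teacher.predict(X))
print(export_text(student, feature_names=[f"x{i}" for i in range(6)]))

# Distillation proper would instead train the student on the teacher's
# soft outputs, teacher.predict_proba(X), so the "dark knowledge" in
# the class scores is transferred rather than the hard labels.
```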
- Influence methods: This approach assesses the importance or relevance of a feature by altering internal or input components and documenting the extent to which the changes affect model performance; such methods are widely used. Three methods for assessing the significance of an input variable are feature importance, sensitivity analysis, and Layer-wise Relevance Propagation (LRP) (see the sketch after this list):
  - Sensitivity analysis explains how the output is affected by changes in the input and/or algorithm parameters. It is frequently used to test models for stability and reliability, either as a tool to identify and eliminate irrelevant input attributes or as a foundation for a more powerful explanation technique (e.g., decomposition);
  - LRP redistributes the prediction function backwards, backpropagating from the network's output layer to its input layer. A key component of this redistribution process is relevance conservation;
  - Feature importance measures how much each feature, or input variable, contributes to a complex AI/ML model's predictions. A feature's importance is assessed by the rise in the model's prediction error after permuting that feature: permuting the values of an important feature increases the error, whereas permuting an unimportant feature leaves the error essentially unchanged, because the model disregards its values.
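Below is a minimal permutation feature importance sketch that mirrors the description above, using scikit-learn's permutation_importance; the dataset and model are illustrative.

```python
# A minimal permutation feature importance sketch: permute one feature,
# measure the rise in error; all names are illustrative.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=600, n_features=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)

# Each feature is shuffled n_repeats times on held-out data;
# importances_mean is the average drop in score caused by breaking
# the association between that feature and the target.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"x{i}: {imp:.3f}")
```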
- Example-based explanation: Under this paradigm, the practitioner describes the behavior of AI/ML models by selecting particular examples from the dataset. Two possible example-based interpretability techniques are prototypes and criticisms, and counterfactual explanations (a minimal sketch follows this list):
  - Prototypes and criticisms are a subset of representative cases drawn from the data; item membership is defined by resemblance to the prototypes, which can result in over-generalization. To overcome this, exceptions, known as criticisms, must be indicated for instances that are not properly represented by the prototypes;
  - Counterfactual explanations define the minimal conditions that would have led to a different decision, without having to describe the entire logic of the model. Unlike counterfactual instances, where the emphasis is on the reversal of the prediction rather than its explanation, the emphasis here is on the explanation of a single prediction.
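Below is a minimal counterfactual-search sketch under the simplifying assumption that counterfactuals are drawn from the existing dataset; the model, data, and Euclidean distance metric are illustrative choices.

```python
# A minimal counterfactual sketch: search the dataset for the closest
# instance whose predicted class differs from the query's.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
clf = LogisticRegression().fit(X, y)

query = X[0]
query_pred = clf.predict(query.reshape(1, -1))[0]

# Candidate counterfactuals: instances the model assigns to the other
# class; the nearest one exposes the minimal change that flips the
# decision, without describing the model's full logic.
mask = clf.predict(X) != query_pred
distances = np.linalg.norm(X[mask] - query, axis=1)
counterfactual = X[mask][np.argmin(distances)]
print("feature changes needed:", counterfactual - query)
```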
4. Related Works
5. SLR Process
5.1. SLR Target
5.2. SLR Protocol
6. Research Questions
- (RQ.1) What is the approach of the paper regarding XAI?
- (RQ.2) What kind of application does the paper cover?
- (RQ.3) What kind of AI subset does the paper cover?
- (RQ.3.1) What kind of algorithm is implemented?
- (RQ.4) What kind of dataset is used for the training?
- (RQ.5) Does AI play a major role in the study? Or is it an auxiliary technology?
- (RQ.6) What side effect of XAI is investigated?
- (RQ.7) What is the stage of the XAI method used?
- (RQ.7.1) Which technique is used?
- (RQ.8) Is the performance of the XAI technique covered by the paper?
- (RQ.9) Is there an assessment of performance improvement?
- (RQ.10) Are the computational costs of the XAI technique considered by the paper?
7. Search Strategy and Databases
TITLE-ABS-KEY((xai AND engineering) OR ((explainable OR trustworthy) AND ai AND engineering))
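As a hedged illustration, the boolean logic of this query could be replicated offline on an exported record list as follows; the field names and sample records are hypothetical, not the actual Scopus or Web of Science export.

```python
# A sketch reproducing the TITLE-ABS-KEY boolean logic over exported
# records; fields and sample rows are hypothetical.
import re
import pandas as pd

records = pd.DataFrame([
    {"title": "XAI for process engineering", "abstract": "", "keywords": "xai; engineering"},
    {"title": "Trustworthy AI in chemical engineering", "abstract": "", "keywords": ""},
])

def has(term, text):
    # Whole-word match, so "ai" does not match inside "maintain".
    return re.search(rf"\b{re.escape(term)}\b", text) is not None

def matches(row):
    text = " ".join(str(v).lower() for v in row[["title", "abstract", "keywords"]])
    return (has("xai", text) and has("engineering", text)) or (
        (has("explainable", text) or has("trustworthy", text))
        and has("ai", text) and has("engineering", text))

eligible = records[records.apply(matches, axis=1)]
print(len(eligible), "matching records")
```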
8. Selection Criteria, Data Cleaning and Collection
9. Results
9.1. General Statistics
9.2. Research Questions
- Implementation: This category groups all papers that document AI-based solutions applied to the domain for design, implementation, control, verification, or validation, or as a support to these functions, possibly with comparisons between techniques, experimental tests, or testbeds;
- Case Study: This category groups all papers reporting an actual application of AI-based solutions to specific cases, either with a central or a support role;
- Methodology: This category is dedicated to papers that present or discuss theoretical aspects, general approaches, proposals, and methodological issues related to the application of AI-based techniques to improve, manage, design, verify, or validate process engineering solutions and designs.
10. Conclusions
Author Contributions
Funding
Conflicts of Interest
References
- Boje, C.; Guerriero, A.; Kubicki, S.; Rezgui, Y. Towards a semantic Construction Digital Twin: Directions for future research. Autom. Constr. 2020, 114, 103179. [Google Scholar] [CrossRef]
- Bilal, M.; Oyedele, L.O.; Qadir, J.; Munir, K.; Ajayi, S.O.; Akinade, O.O.; Owolabi, H.A.; Alaka, H.A.; Pasha, M. Big Data in the construction industry: A review of present status, opportunities, and future trends. Adv. Eng. Inform. 2016, 30, 500–521. [Google Scholar] [CrossRef]
- Simpson, T.W. Product platform design and customization: Status and promise. Artif. Intell. Eng. Des. Anal. Manuf. AIEDAM 2004, 18, 3–20. [Google Scholar] [CrossRef]
- Shen, C. A Transdisciplinary Review of Deep Learning Research and Its Relevance for Water Resources Scientists. Water Resour. Res. 2018, 54, 8558–8593. [Google Scholar] [CrossRef]
- Qadri, Y.A.; Nauman, A.; Zikria, Y.B.; Vasilakos, A.V.; Kim, S.W. The Future of Healthcare Internet of Things: A Survey of Emerging Technologies. IEEE Commun. Surv. Tutor. 2020, 22, 1121–1167. [Google Scholar] [CrossRef]
- Sircar, A.; Yadav, K.; Rayavarapu, K.; Bist, N.; Oza, H. Application of machine learning and artificial intelligence in oil and gas industry. Pet. Res. 2021, 6, 379–391. [Google Scholar] [CrossRef]
- Rajulapati, L.; Chinta, S.; Shyamala, B.; Rengaswamy, R. Integration of machine learning and first principles models. AIChE J. 2022, 68, e17715. [Google Scholar] [CrossRef]
- Faraji Niri, M.; Aslansefat, K.; Haghi, S.; Hashemian, M.; Daub, R.; Marco, J. A Review of the Applications of Explainable Machine Learning for Lithium-Ion Batteries: From Production to State and Performance Estimation. Energies 2023, 16, 6360. [Google Scholar] [CrossRef]
- Nandipati, M.; Fatoki, O.; Desai, S. Bridging Nanomanufacturing and Artificial Intelligence—A Comprehensive Review. Materials 2024, 17, 1621. [Google Scholar] [CrossRef]
- Gani, R. Chemical product design: Challenges and opportunities. Comput. Chem. Eng. 2004, 28, 2441–2457. [Google Scholar] [CrossRef]
- Karner, S.; Anne Urbanetz, N. The impact of electrostatic charge in pharmaceutical powders with specific focus on inhalation-powders. J. Aerosol Sci. 2011, 42, 428–445. [Google Scholar] [CrossRef]
- Löwe, H.; Ehrfeld, W. State-of-the-art in microreaction technology: Concepts, manufacturing and applications. Electrochim. Acta 1999, 44, 3679–3689. [Google Scholar] [CrossRef]
- Xie, R.; Chu, L.Y.; Deng, J.G. Membranes and membrane processes for chiral resolution. Chem. Soc. Rev. 2008, 37, 1243–1263. [Google Scholar] [CrossRef] [PubMed]
- Plumb, K. Continuous processing in the pharmaceutical industry: Changing the mind set. Chem. Eng. Res. Des. 2005, 83, 730–738. [Google Scholar] [CrossRef]
- Powell, D.; Magnanini, M.C.; Colledani, M.; Myklebust, O. Advancing zero defect manufacturing: A state-of-the-art perspective and future research directions. Comput. Ind. 2022, 136, 103596. [Google Scholar] [CrossRef]
- Sadhukhan, J.; Dugmore, T.I.J.; Matharu, A.; Martinez-Hernandez, E.; Aburto, J.; Rahman, P.K.S.M.; Lynch, J. Perspectives on “game changer” global challenges for sustainable 21st century: Plant-based diet, unavoidable food waste biorefining, and circular economy. Sustainability 2020, 12, 1976. [Google Scholar] [CrossRef]
- Halasz, L.; Povoden, G.; Narodoslawsky, M. Sustainable processes synthesis for renewable resources. Resour. Conserv. Recycl. 2005, 44, 293–307. [Google Scholar] [CrossRef]
- Ioannou, I.; D’Angelo, S.C.; Galán-Martín, A.; Pozo, C.; Pérez-Ramírez, J.; Guillén-Gosálbez, G. Process modelling and life cycle assessment coupled with experimental work to shape the future sustainable production of chemicals and fuels. React. Chem. Eng. 2021, 6, 1179–1194. [Google Scholar] [CrossRef]
- Guillén-Gosálbez, G.; You, F.; Galán-Martín, A.; Pozo, C.; Grossmann, I.E. Process systems engineering thinking and tools applied to sustainability problems: Current landscape and future opportunities. Curr. Opin. Chem. Eng. 2019, 26, 170–179. [Google Scholar] [CrossRef]
- de Faria, D.R.G.; de Medeiros, J.L.; Araújo, O.d.Q.F. Screening biorefinery pathways to biodiesel, green-diesel and propylene-glycol: A hierarchical sustainability assessment of process. J. Environ. Manag. 2021, 300, 113772. [Google Scholar] [CrossRef]
- Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
- Negri, E.; Fumagalli, L.; Macchi, M. A Review of the Roles of Digital Twin in CPS-based Production Systems. Procedia Manuf. 2017, 11, 939–948. [Google Scholar] [CrossRef]
- Frank, A.G.; Dalenogare, L.S.; Ayala, N.F. Industry 4.0 technologies: Implementation patterns in manufacturing companies. Int. J. Prod. Econ. 2019, 210, 15–26. [Google Scholar] [CrossRef]
- Hofmann, E.; Rüsch, M. Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 2017, 89, 23–34. [Google Scholar] [CrossRef]
- Vlachos, D.; Mhadeshwar, A.; Kaisare, N. Hierarchical multiscale model-based design of experiments, catalysts, and reactors for fuel processing. Comput. Chem. Eng. 2006, 30, 1712–1724. [Google Scholar] [CrossRef]
- Li, J.; Ge, W.; Wang, W.; Yang, N.; Liu, X.; Wang, L.; He, X.; Wang, X.; Wang, J.; Kwauk, M. From Multiscale Modeling to Meso-Science: A Chemical Engineering Perspective; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–484. [Google Scholar] [CrossRef]
- Chen, X.; Wang, Q.; Liu, Z.; Han, Z. A novel approach for dimensionality reduction of high-dimensional stochastic dynamical systems using symbolic regression. Mech. Syst. Signal Process. 2024, 214, 111373. [Google Scholar] [CrossRef]
- Loiseau, J.C. Data-driven modeling of the chaotic thermal convection in an annular thermosyphon. Theor. Comput. Fluid Dyn. 2020, 34, 339–365. [Google Scholar] [CrossRef]
- Wu, T.; Gao, X.; An, F.; Kurths, J. The complex dynamics of correlations within chaotic systems. Chaos Solitons Fractals 2023, 167, 113052. [Google Scholar] [CrossRef]
- Wang, G.; Nixon, M.; Boudreaux, M. Toward Cloud-Assisted Industrial IoT Platform for Large-Scale Continuous Condition Monitoring. Proc. IEEE 2019, 107, 1193–1205. [Google Scholar] [CrossRef]
- Melo, A.; Câmara, M.M.; Pinto, J.C. Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey. Processes 2024, 12, 251. [Google Scholar] [CrossRef]
- Shen, T.; Li, B. Digital twins in additive manufacturing: A state-of-the-art review. Int. J. Adv. Manuf. Technol. 2024, 131, 63–92. [Google Scholar] [CrossRef]
- Perera, Y.S.; Ratnaweera, D.; Dasanayaka, C.H.; Abeykoon, C. The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: A critical review. Eng. Appl. Artif. Intell. 2023, 121, 105988. [Google Scholar] [CrossRef]
- Lewin, D.R.; Lachman-Shalem, S.; Grosman, B. The role of process system engineering (PSE) in integrated circuit (IC) manufacturing. Control Eng. Pract. 2007, 15, 793–802. [Google Scholar] [CrossRef]
- Gunning, D.; Aha, D.W. DARPA’s Explainable Artificial Intelligence Program. AI Mag. 2019, 40, 44–58. [Google Scholar] [CrossRef]
- Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
- European Commission. Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence. Eur. Comm. 2019, 6, 1–39. [Google Scholar]
- Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
- Izonin, I.; Tkachenko, R.; Yemets, K.; Havryliuk, M. An interpretable ensemble structure with a non-iterative training algorithm to improve the predictive accuracy of healthcare data analysis. Sci. Rep. 2024, 14, 12947. [Google Scholar] [CrossRef]
- Izonin, I.; Tkachenko, R.; Kryvinska, N.; Tkachenko, P.; Greguš ml, M. Multiple linear regression based on coefficients identification using non-iterative SGTM neural-like structure. In Proceedings of the International Work-Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; Springer: Cham, Switzerland, 2019; pp. 467–479. [Google Scholar]
- Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2018, 51, 1–42. [Google Scholar] [CrossRef]
- Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics 2019, 8, 832. [Google Scholar] [CrossRef]
- Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.; Kagal, L. Explaining Explanations: An Overview of Interpretability of Machine Learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018; pp. 80–89. [Google Scholar] [CrossRef]
- Bodria, F.; Giannotti, F.; Guidotti, R.; Naretto, F.; Pedreschi, D.; Rinzivillo, S. Benchmarking and survey of explanation methods for black box models. Data Min. Knowl. Discov. 2023, 37, 1719–1778. [Google Scholar] [CrossRef]
- Tomsett, R.; Preece, A.; Braines, D.; Cerutti, F.; Chakraborty, S.; Srivastava, M.; Pearson, G.; Kaplan, L. Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI. Patterns 2020, 1, 100049. [Google Scholar] [CrossRef] [PubMed]
- Vyas, M.; Thakur, S.; Riyaz, B.; Bansal, K.K.; Tomar, B.; Mishra, V. Artificial intelligence: The beginning of a new era in pharmacy profession. Asian J. Pharm. 2018, 12, 72–76. [Google Scholar]
- Krishnan, S.; Athavale, Y. Trends in biomedical signal feature extraction. Biomed. Signal Process. Control 2018, 43, 41–63. [Google Scholar] [CrossRef]
- Emaminejad, N.; Akhavian, R. Trustworthy AI and robotics: Implications for the AEC industry. Autom. Constr. 2022, 139, 104298. [Google Scholar] [CrossRef]
- Joshi, R.P.; Kumar, N. Artificial intelligence for autonomous molecular design: A perspective. Molecules 2021, 26, 6761. [Google Scholar] [CrossRef]
- Zou, X.; Liu, W.; Huo, Z.; Wang, S.; Chen, Z.; Xin, C.; Bai, Y.; Liang, Z.; Gong, Y.; Qian, Y.; et al. Current Status and Prospects of Research on Sensor Fault Diagnosis of Agricultural Internet of Things. Sensors 2023, 23, 2528. [Google Scholar] [CrossRef]
- Khosravani, M.R.; Reinicke, T. 3D-printed sensors: Current progress and future challenges. Sens. Actuators Phys. 2020, 305, 111916. [Google Scholar] [CrossRef]
- Li, J.; King, S.; Jennions, I. Intelligent Fault Diagnosis of an Aircraft Fuel System Using Machine Learning—A Literature Review. Machines 2023, 11, 481. [Google Scholar] [CrossRef]
- Kitchenham, B. Procedures for Undertaking Systematic Reviews. In Joint Technical Report TR/SE0401 and 0400011T.1; Computer Science Department, Keele University and National ICT Australia Ltd.: Eveleigh, Australia, 2004. [Google Scholar]
- Campanile, L.; Gribaudo, M.; Iacono, M.; Marulli, F.; Mastroianni, M. Computer network simulation with ns-3: A systematic literature review. Electronics 2020, 9, 272. [Google Scholar] [CrossRef]
- Elsevier. Scopus. 2024. Available online: https://www.elsevier.com/products/scopus (accessed on 31 August 2024).
- Clarivate Analytics. Web of Science. 2024. Available online: https://clarivate.com/ (accessed on 31 August 2024).
- Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, T.P. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed]
- Tapeh, A.T.G.; Naser, M.Z. Discovering Graphical Heuristics on Fire-Induced Spalling of Concrete Through Explainable Artificial Intelligence. Fire Technol. 2022, 58, 2871–2898. [Google Scholar] [CrossRef]
- Jacinto, M.V.; Doria Neto, A.D.; de Castro, D.L.; Bezerra, F.H. Karstified zone interpretation using deep learning algorithms: Convolutional neural networks applications and model interpretability with explainable AI. Comput. Geosci. 2023, 171, 105281. [Google Scholar] [CrossRef]
- Pan, Y.; Stark, R. An interpretable machine learning approach for engineering change management decision support in automotive industry. Comput. Ind. 2022, 138, 103633. [Google Scholar] [CrossRef]
- Masood, U.; Farooq, H.; Imran, A.; Abu-Dayya, A. Interpretable AI-Based Large-Scale 3D Pathloss Prediction Model for Enabling Emerging Self-Driving Networks. IEEE Trans. Mob. Comput. 2023, 22, 3967–3984. [Google Scholar] [CrossRef]
- Aslam, N.; Khan, I.U.; Alansari, A.; Alrammah, M.; Alghwairy, A.; Alqahtani, R.; Alqahtani, R.; Almushikes, M.; Hashim, M.A. Anomaly Detection Using Explainable Random Forest for the Prediction of Undesirable Events in Oil Wells. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1558381. [Google Scholar] [CrossRef]
- Salem, H.; El-Hasnony, I.M.; Kabeel, A.; El-Said, E.M.; Elzeki, O.M. Deep Learning model and Classification Explainability of Renewable energy-driven Membrane Desalination System using Evaporative Cooler. Alex. Eng. J. 2022, 61, 10007–10024. [Google Scholar] [CrossRef]
- Wang, T.; Reiffsteck, P.; Chevalier, C.; Chen, C.W.; Schmidt, F. An interpretable model for bridge scour risk assessment using explainable artificial intelligence and engineers’ expertise. Struct. Infrastruct. Eng. 2023, 1–13. [Google Scholar] [CrossRef]
- Mishra, A.; Jatti, V.S.; Sefene, E.M.; Paliwal, S. Explainable Artificial Intelligence (XAI) and Supervised Machine Learning-based Algorithms for Prediction of Surface Roughness of Additively Manufactured Polylactic Acid (PLA) Specimens. Appl. Mech. 2023, 4, 668–698. [Google Scholar] [CrossRef]
- Ghosh, S.; Kamal, M.S.; Chowdhury, L.; Neogi, B.; Dey, N.; Sherratt, R.S. Explainable AI to understand study interest of engineering students. Educ. Inf. Technol. 2023, 29, 4657–4672. [Google Scholar] [CrossRef]
- Nguyen, D.D.; Tanveer, M.; Mai, H.N.; Pham, T.Q.D.; Khan, H.; Park, C.W.; Kim, G.M. Guiding the optimization of membraneless microfluidic fuel cells via explainable artificial intelligence: Comparative analyses of multiple machine learning models and investigation of key operating parameters. Fuel 2023, 349, 128742. [Google Scholar] [CrossRef]
- Cardellicchio, A.; Ruggieri, S.; Nettis, A.; Renò, V.; Uva, G. Physical interpretation of machine learning-based recognition of defects for the risk management of existing bridge heritage. Eng. Fail. Anal. 2023, 149, 107237. [Google Scholar] [CrossRef]
- Lee, Y.; Lee, G.; Choi, H.; Park, H.; Ko, M.J. Artificial intelligence-assisted auto-optical inspection toward the stain detection of an organic light-emitting diode panel at the backplane fabrication step. Displays 2023, 79, 102478. [Google Scholar] [CrossRef]
- Fayaz, J.; Torres-Rodas, P.; Medalla, M.; Naeim, F. Assessment of ground motion amplitude scaling using interpretable Gaussian process regression: Application to steel moment frames. Earthq. Eng. Struct. Dyn. 2023, 52, 2339–2359. [Google Scholar] [CrossRef]
- Oh, D.W.; Kong, S.M.; Kim, S.B.; Lee, Y.J. Prediction and Analysis of Axial Stress of Piles for Piled Raft Due to Adjacent Tunneling Using Explainable AI. Appl. Sci. 2023, 13, 6074. [Google Scholar] [CrossRef]
- Dachowicz, A.; Mall, K.; Balasubramani, P.; Maheshwari, A.; Panchal, J.H.; Delaurentis, D.; Raz, A. Mission Engineering and Design using Real-Time Strategy Games: An Explainable-AI Approach. J. Mech. Des. 2021, 144, 021710. [Google Scholar] [CrossRef]
- Karandin, O.; Ayoub, O.; Musumeci, F.; Yusuke, H.; Awaji, Y.; Tornatore, M. If Not Here, There. Explaining Machine Learning Models for Fault Localization in Optical Networks. In Proceedings of the 2022 International Conference on Optical Network Design and Modeling (ONDM), Warsaw, Poland, 16–19 May 2022; IEEE: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
- Conti, A.; Campagnolo, L.; Diciotti, S.; Pietroiusti, A.; Toschi, N. Predicting the cytotoxicity of nanomaterials through explainable, extreme gradient boosting. Nanotoxicology 2022, 16, 844–856. [Google Scholar] [CrossRef]
- Obermair, C.; Cartier-Michaud, T.; Apollonio, A.; Millar, W.; Felsberger, L.; Fischl, L.; Bovbjerg, H.S.; Wollmann, D.; Wuensch, W.; Catalan-Lasheras, N.; et al. Explainable machine learning for breakdown prediction in high gradient rf cavities. Phys. Rev. Accel. Beams 2022, 25, 104601. [Google Scholar] [CrossRef]
- Wehner, C.; Powlesland, F.; Altakrouri, B.; Schmid, U. Explainable Online Lane Change Predictions on a Digital Twin with a Layer Normalized LSTM and Layer-wise Relevance Propagation. In Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 621–632. [Google Scholar] [CrossRef]
- Raz, A.K.; Nolan, S.M.; Levin, W.; Mall, K.; Mia, A.; Mockus, L.; Ezra, K.; Williams, K. Test and Evaluation of Reinforcement Learning via Robustness Testing and Explainable AI for High-Speed Aerospace Vehicles. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; IEEE: New York, NY, USA, 2022; pp. 1–14. [Google Scholar] [CrossRef]
- Meas, M.; Machlev, R.; Kose, A.; Tepljakov, A.; Loo, L.; Levron, Y.; Petlenkov, E.; Belikov, J. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Sensors 2022, 22, 6338. [Google Scholar] [CrossRef]
- Kraus, M.A. Erklärbare domänenspezifische Künstliche Intelligenz im Massiv- und Brückenbau. Beton-Und Stahlbetonbau 2022, 117, 795–804. [Google Scholar] [CrossRef]
- Lundberg, H.; Mowla, N.I.; Abedin, S.F.; Thar, K.; Mahmood, A.; Gidlund, M.; Raza, S. Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System Using eXplainable Artificial Intelligence (XAI). IEEE Access 2022, 10, 102831–102841. [Google Scholar] [CrossRef]
- Narteni, S.; Orani, V.; Vaccari, I.; Cambiaso, E.; Mongelli, M. Sensitivity of Logic Learning Machine for Reliability in Safety-Critical Systems. IEEE Intell. Syst. 2022, 37, 66–74. [Google Scholar] [CrossRef]
- Baptista, M.L.; Goebel, K.; Henriques, E.M. Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Artif. Intell. 2022, 306, 103667. [Google Scholar] [CrossRef]
- Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038. [Google Scholar] [CrossRef]
- Jin, P.; Tian, J.; Zhi, D.; Wen, X.; Zhang, M. Trainify: A CEGAR-Driven Training and Verification Framework for Safe Deep Reinforcement Learning. In Computer Aided Verification; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 193–218. [Google Scholar] [CrossRef]
- Hines, B.; Talbert, D.; Anton, S. Improving Trust via XAI and Pre-Processing for Machine Learning of Complex Biomedical Datasets. Int. Flairs Conf. Proc. 2022, 35. [Google Scholar] [CrossRef]
- Bacciu, D.; Numeroso, D. Explaining Deep Graph Networks via Input Perturbation. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 10334–10345. [Google Scholar] [CrossRef]
- Neves, L.; Martinez, J.; Longo, L.; Roberto, G.; Tosta, T.; de Faria, P.; Loyola, A.; Cardoso, S.; Silva, A.; do Nascimento, M.; et al. Classification of H&E Images via CNN Models with XAI Approaches, DeepDream Representations and Multiple Classifiers. In Proceedings of the 25th International Conference on Enterprise Information Systems, Prague, Czech Republic, 24–26 April 2023; SCITEPRESS-Science and Technology Publications: Setúbal, Portugal, 2023; pp. 354–364. [Google Scholar] [CrossRef]
- Han, Y.; Chang, H. XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly. Comput. Mater. Contin. 2023, 76, 221–237. [Google Scholar] [CrossRef]
- Bobek, S.; Kuk, M.; Szelazek, M.; Nalepa, G.J. Enhancing Cluster Analysis With Explainable AI and Multidimensional Cluster Prototypes. IEEE Access 2022, 10, 101556–101574. [Google Scholar] [CrossRef]
- Al-Fayoumi, M.; Alhijawi, B.; Abu Al-Haija, Q.; Armoush, R. XAI-PhD: Fortifying Trust of Phishing URL Detection Empowered by Shapley Additive Explanations. Int. J. Online Biomed. Eng. (iJOE) 2024, 20, 80–101. [Google Scholar] [CrossRef]
- Yang, C.; Wang, C.; Wu, B.; Zhao, F.; Fan, J.s.; Zhou, L. Settlement estimation during foundation excavation using pattern analysis and explainable AI modeling. Autom. Constr. 2024, 166, 105651. [Google Scholar] [CrossRef]
- Groza, A.; Toderean, L.; Muntean, G.A.; Nicoara, S.D. Agents that Argue and Explain Classifications of Retinal Conditions. J. Med. Biol. Eng. 2021, 41, 730–741. [Google Scholar] [CrossRef]
- Hanna, B.N.; Trieu, L.L.T.; Son, T.C.; Dinh, N.T. An Application of ASP in Nuclear Engineering: Explaining the Three Mile Island Nuclear Accident Scenario. Theory Pract. Log. Program. 2020, 20, 926–941. [Google Scholar] [CrossRef]
- Hamilton, D.; Watkins, L.; Zanlongo, S.; Leeper, C.; Sleight, R.; Silbermann, J.; Kornegay, K. Assuring Autonomous UAS Traffic Management Systems Using Explainable, Fuzzy Logic, Black Box Monitoring. In Proceedings of the 2021 10th International Conference on Information and Automation for Sustainability (ICIAfS), Negombo, Sri Lanka, 11–13 August 2021; IEEE: New York, NY, USA, 2021; Volume 31, pp. 470–476. [Google Scholar] [CrossRef]
- Brandsæter, A.; Smefjell, G.; Merwe, K.v.d.; Kamsvåg, V. Assuring Safe Implementation of Decision Support Functionality based on Data-Driven Methods for Ship Navigation. In Proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference, ESREL, Venice, Italy, 1–5 November 2020; Research Publishing Services: San Jose, CA, USA, 2020; pp. 637–643. [Google Scholar] [CrossRef]
- Sherry, L.; Baldo, J.; Berlin, B. Design of Flight Guidance and Control Systems Using Explainable AI. In Proceedings of the 2021 Integrated Communications Navigation and Surveillance Conference (ICNS), Virtual, 20–22 April 2021; IEEE: New York, NY, USA, 2021; pp. 1–10. [Google Scholar] [CrossRef]
- Valdes, J.J.; Tchagang, A.B. Deterministic Numeric Simulation and Surrogate Models with White and Black Machine Learning Methods: A Case Study on Direct Mappings. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
- Weitz, K.; Schiller, D.; Schlagowski, R.; Huber, T.; André, E. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, IVA ’19, Paris, France, 2–5 July 2019. [Google Scholar] [CrossRef]
- Feng, J.; Lansford, J.L.; Katsoulakis, M.A.; Vlachos, D.G. Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences. Sci. Adv. 2020, 6, 42. [Google Scholar] [CrossRef]
- Thakker, D.; Mishra, B.K.; Abdullatif, A.; Mazumdar, S.; Simpson, S. Explainable Artificial Intelligence for Developing Smart Cities Solutions. Smart Cities 2020, 3, 1353–1382. [Google Scholar] [CrossRef]
- Yoo, S.; Kang, N. Explainable artificial intelligence for manufacturing cost estimation and machining feature visualization. Expert Syst. Appl. 2021, 183, 115430. [Google Scholar] [CrossRef]
- Sun, Y.; Chockler, H.; Huang, X.; Kroening, D. Explaining Image Classifiers Using Statistical Fault Localization. In Computer Vision—ECCV 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 391–406. [Google Scholar] [CrossRef]
- Bobek, S.; Mozolewski, M.; Nalepa, G.J. Explanation-Driven Model Stacking. In Computational Science–ICCS 2021; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 361–371. [Google Scholar] [CrossRef]
- Borg, M.; Bronson, J.; Christensson, L.; Olsson, F.; Lennartsson, O.; Sonnsjo, E.; Ebabi, H.; Karsberg, M. Exploring the Assessment List for Trustworthy AI in the Context of Advanced Driver-Assistance Systems. In Proceedings of the 2021 IEEE/ACM 2nd International Workshop on Ethics in Software Engineering Research and Practice (SEthics), Madrid, Spain, 4 June 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
- Kouvaros, P.; Kyono, T.; Leofante, F.; Lomuscio, A.; Margineantu, D.; Osipychev, D.; Zheng, Y. Formal Analysis of Neural Network-Based Systems in the Aircraft Domain. In Formal Methods; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 730–740. [Google Scholar] [CrossRef]
- Guo, W.; Mu, D.; Xu, J.; Su, P.; Wang, G.; Xing, X. LEMNA: Explaining Deep Learning based Security Applications. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, Toronto, ON, Canada, 15–19 October 2018. [Google Scholar] [CrossRef]
- Mohammad Hossain, T.; Watada, J.; Aziz, A.P.D.I.; Hermana, M.; Meraj, S.; Sakai, H. Lithology prediction using well logs: A granular computing approach. Int. J. Innov. Comput. Inf. Control IJICIC 2021, 17, 225–244. [Google Scholar] [CrossRef]
- Blanco-Justicia, A.; Domingo-Ferrer, J.; Martínez, S.; Sánchez, D. Machine learning explainability via microaggregation and shallow decision trees. Knowl.-Based Syst. 2020, 194, 105532. [Google Scholar] [CrossRef]
- Sirmacek, B.; Riveiro, M. Occupancy Prediction Using Low-Cost and Low-Resolution Heat Sensors for Smart Offices. Sensors 2020, 20, 5497. [Google Scholar] [CrossRef]
- Pornprasit, C.; Tantithamthavorn, C.; Jiarpakdee, J.; Fu, M.; Thongtanunam, P. PyExplainer: Explaining the Predictions of Just-In-Time Defect Models. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), Melbourne, Australia, 15–19 November 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
- Dalpiaz, F.; Dell’Anna, D.; Aydemir, F.B.; Cevikol, S. Requirements Classification with Interpretable Machine Learning and Dependency Parsing. In Proceedings of the 2019 IEEE 27th International Requirements Engineering Conference (RE), Jeju Island, South Korea, 23–27 September 2019; IEEE: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
- Bendre, N.; Desai, K.; Najafirad, P. Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–21 October 2021; IEEE: New York, NY, USA, 2021; pp. 3006–3012. [Google Scholar] [CrossRef]
- Irarrázaval, M.E.; Maldonado, S.; Pérez, J.; Vairetti, C. Telecom traffic pumping analytics via explainable data science. Decis. Support Syst. 2021, 150, 113559. [Google Scholar] [CrossRef]
- Borg, M.; Jabangwe, R.; Aberg, S.; Ekblom, A.; Hedlund, L.; Lidfeldt, A. Test Automation with Grad-CAM Heatmaps—A Future Pipe Segment in MLOps for Vision AI? In Proceedings of the 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Virtual, 12–16 April 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
- DeLaurentis, D.A.; Panchal, J.H.; Raz, A.K.; Balasubramani, P.; Maheshwari, A.; Dachowicz, A.; Mall, K. Toward Automated Game Balance: A Systematic Engineering Design Approach. In Proceedings of the 2021 IEEE Conference on Games (CoG), Virtual, 17–20 August 2021; IEEE: New York, NY, USA, 2021; Volume 6, pp. 1–8. [Google Scholar] [CrossRef]
- Meacham, S.; Isaac, G.; Nauck, D.; Virginas, B. Towards Explainable AI: Design and Development for Explanation of Machine Learning Predictions for a Patient Readmittance Medical Application. In Intelligent Computing; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 939–955. [Google Scholar] [CrossRef]
- Iyer, R.; Li, Y.; Li, H.; Lewis, M.; Sundar, R.; Sycara, K. Transparency and Explanation in Deep Reinforcement Learning Neural Networks. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’18, New Orleans, LA, USA, 2–3 February 2018. [Google Scholar] [CrossRef]
- Sun, K.H.; Huh, H.; Tama, B.A.; Lee, S.Y.; Jung, J.H.; Lee, S. Vision-Based Fault Diagnostics Using Explainable Deep Learning with Class Activation Maps. IEEE Access 2020, 8, 129169–129179. [Google Scholar] [CrossRef]
- Younas, F.; Raza, A.; Thalji, N.; Abualigah, L.; Zitar, R.A.; Jia, H. An efficient artificial intelligence approach for early detection of cross-site scripting attacks. Decis. Anal. J. 2024, 11, 100466. [Google Scholar] [CrossRef]
- Basnet, P.M.S.; Jin, A.; Mahtab, S. Developing an explainable rockburst risk prediction method using monitored microseismicity based on interpretable machine learning approach. Acta Geophys. 2024, 72, 2597–2618. [Google Scholar] [CrossRef]
- Hu, J.; Zhu, K.; Cheng, S.; Kovalchuk, N.M.; Soulsby, A.; Simmons, M.J.; Matar, O.K.; Arcucci, R. Explainable AI models for predicting drop coalescence in microfluidics device. Chem. Eng. J. 2024, 481, 148465. [Google Scholar] [CrossRef]
- Askr, H.; El-dosuky, M.; Darwish, A.; Hassanien, A.E. Explainable ResNet50 learning model based on copula entropy for cotton plant disease prediction. Appl. Soft Comput. 2024, 164, 112009. [Google Scholar] [CrossRef]
- Shojaeinasab, A.; Jalayer, M.; Baniasadi, A.; Najjaran, H. Unveiling the Black Box: A Unified XAI Framework for Signal-Based Deep Learning Models. Machines 2024, 12, 121. [Google Scholar] [CrossRef]
- Huang, Z.; Yu, H.; Fan, G.; Shao, Z.; Li, M.; Liang, Y. Aligning XAI explanations with software developers’ expectations: A case study with code smell prioritization. Expert Syst. Appl. 2024, 238, 121640. [Google Scholar] [CrossRef]
- Chai, C.; Fan, G.; Yu, H.; Huang, Z.; Ding, J.; Guan, Y. Exploring better alternatives to size metrics for explainable software defect prediction. Softw. Qual. J. 2023, 32, 459–486. [Google Scholar] [CrossRef]
- Khan, S.A.; Chaudary, E.; Mumtaz, W. EEG-ConvNet: Convolutional networks for EEG-based subject-dependent emotion recognition. Comput. Electr. Eng. 2024, 116, 109178. [Google Scholar] [CrossRef]
- Gulmez, S.; Gorgulu Kakisim, A.; Sogukpinar, I. XRan: Explainable deep learning-based ransomware detection using dynamic analysis. Comput. Secur. 2024, 139, 103703. [Google Scholar] [CrossRef]
- Kim, I.; Wook Kim, S.; Kim, J.; Huh, H.; Jeong, I.; Choi, T.; Kim, J.; Lee, S. Single domain generalizable and physically interpretable bearing fault diagnosis for unseen working conditions. Expert Syst. Appl. 2024, 241, 122455. [Google Scholar] [CrossRef]
- Ashraf, W.M.; Dua, V. Partial derivative-based dynamic sensitivity analysis expression for non-linear auto regressive with exogenous (NARX) model case studies on distillation columns and model’s interpretation investigation. Chem. Eng. J. Adv. 2024, 18, 100605. [Google Scholar] [CrossRef]
- Daghigh, V.; Bakhtiari Ramezani, S.; Daghigh, H.; Lacy, T.E., Jr. Explainable artificial intelligence prediction of defect characterization in composite materials. Compos. Sci. Technol. 2024, 256, 110759. [Google Scholar] [CrossRef]
- Lin, S.; Liang, Z.; Zhao, S.; Dong, M.; Guo, H.; Zheng, H. A comprehensive evaluation of ensemble machine learning in geotechnical stability analysis and explainability. Int. J. Mech. Mater. Des. 2023, 20, 331–352. [Google Scholar] [CrossRef]
- Abdollahi, A.; Li, D.; Deng, J.; Amini, A. An explainable artificial-intelligence-aided safety factor prediction of road embankments. Eng. Appl. Artif. Intell. 2024, 136, 108854. [Google Scholar] [CrossRef]
- Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [Google Scholar] [CrossRef]
- Koyama, N.; Sakai, Y.; Sasaoka, S.; Dominguez, D.; Somiya, K.; Omae, Y.; Terada, Y.; Meyer-Conde, M.; Takahashi, H. Enhancing the rationale of convolutional neural networks for glitch classification in gravitational wave detectors: A visual explanation. Mach. Learn. Sci. Technol. 2024, 5, 035028. [Google Scholar] [CrossRef]
- Frie, C.; Riza Durmaz, A.; Eberl, C. Exploration of materials fatigue influence factors using interpretable machine learning. Fatigue Fract. Eng. Mater. Struct. 2024, 47, 2752–2773. [Google Scholar] [CrossRef]
- He, X.; Huang, W.; Lv, C. Trustworthy autonomous driving via defense-aware robust reinforcement learning against worst-case observational perturbations. Transp. Res. Part C Emerg. Technol. 2024, 163, 104632. [Google Scholar] [CrossRef]
- Bottieau, J.; Audemard, G.; Bellart, S.; Lagniez, J.M.; Marquis, P.; Szczepanski, N.; Toubeau, J.F. Logic-based explanations of imbalance price forecasts using boosted trees. Electr. Power Syst. Res. 2024, 235, 110699. [Google Scholar] [CrossRef]
- Soon, R.J.; Chui, C.K. Textile Surface Defects Analysis with Explainable AI. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, 25–27 June 2024; IEEE: New York, NY, USA, 2024; pp. 1394–1398. [Google Scholar] [CrossRef]
- Bourokba, A.; El Hamdi, R.; Mohamed, N.J.A.H. A Shapley-based XAI approach for a turbofan RUL estimation. In Proceedings of the 2024 21st International Multi-Conference on Systems, Signals & Devices (SSD), As Sulaymaniyah, Iraq, 22–25 April 2024; IEEE: New York, NY, USA, 2024; pp. 832–837. [Google Scholar] [CrossRef]
- Tasioulis, T.; Karatzas, K. Reviewing Explainable Artificial Intelligence Towards Better Air Quality Modelling. In Advances and New Trends in Environmental Informatics 2023; Springer Nature: Cham, Switzerland, 2024; pp. 3–19. [Google Scholar] [CrossRef]
- Fiosina, J.; Sievers, P.; Drache, M.; Beuermann, S. Polymer reaction engineering meets explainable machine learning. Comput. Chem. Eng. 2023, 177, 108356. [Google Scholar] [CrossRef]
- Sharma, K.; Talpa Sai, P.S.; Sharma, P.; Kanti, P.K.; Bhramara, P.; Akilu, S. Prognostic modeling of polydisperse SiO2/Aqueous glycerol nanofluids’ thermophysical profile using an explainable artificial intelligence (XAI) approach. Eng. Appl. Artif. Intell. 2023, 126, 106967. [Google Scholar] [CrossRef]
- Yaprakdal, F.; Varol Arısoy, M. A Multivariate Time Series Analysis of Electrical Load Forecasting Based on a Hybrid Feature Selection Approach and Explainable Deep Learning. Appl. Sci. 2023, 13, 12946. [Google Scholar] [CrossRef]
- Wallsberger, R.; Knauer, R.; Matzka, S. Explainable Artificial Intelligence in Mechanical Engineering: A Synthetic Dataset for Comprehensive Failure Mode Analysis. In Proceedings of the 2023 Fifth International Conference on Transdisciplinary AI (TransAI), Laguna Hills, CA, USA, 25–27 September 2023; IEEE: New York, NY, USA, 2023; pp. 249–252. [Google Scholar] [CrossRef]
- Zhang, J.; Cosma, G.; Bugby, S.; Finke, A.; Watkins, J. Morphological Image Analysis and Feature Extraction for Reasoning with AI-Based Defect Detection and Classification Models. In Proceedings of the 2023 IEEE Symposium Series on Computational Intelligence (SSCI), Mexico City, Mexico, 5–8 December 2023; IEEE: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
- Bhakte, A.; Pakkiriswamy, V.; Srinivasan, R. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chem. Eng. Sci. 2022, 250, 117373. [Google Scholar] [CrossRef]
- Liu, J.; Hou, L.; Wang, X.; Zhang, R.; Sun, X.; Xu, L.; Yu, Q. Explainable fault diagnosis of gas-liquid separator based on fully convolutional neural network. Comput. Chem. Eng. 2021, 155, 107535. [Google Scholar] [CrossRef]
- Peng, P.; Zhang, Y.; Wang, H.; Zhang, H. Towards robust and understandable fault detection and diagnosis using denoising sparse autoencoder and smooth integrated gradients. ISA Trans. 2021, 125, 371–383. [Google Scholar] [CrossRef]
- Guzman Urbina, A.; Aoyama, A. Pipeline risk assessment using artificial intelligence: A case from the colombian oil network. Process Saf. Prog. 2018, 37, 110–116. [Google Scholar] [CrossRef]
- Agarwal, P.; Tamer, M.; Budman, H. Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes. Comput. Chem. Eng. 2021, 154, 107467. [Google Scholar] [CrossRef]
- Wu, D.; Zhao, J. Process topology convolutional network model for chemical process fault diagnosis. Process Saf. Environ. Prot. 2021, 150, 93–109. [Google Scholar] [CrossRef]
- Harinarayan, R.R.A.; Shalinie, S.M. XFDDC: eXplainable Fault Detection Diagnosis and Correction framework for chemical process systems. Process Saf. Environ. Prot. 2022, 165, 463–474. [Google Scholar] [CrossRef]
- Santana, V.V.; Gama, M.S.; Loureiro, J.M.; Rodrigues, A.E.; Ribeiro, A.M.; Tavares, F.W.; Barreto, A.G.; Nogueira, I.B. A First Approach towards Adsorption-Oriented Physics-Informed Neural Networks: Monoclonal Antibody Adsorption Performance on an Ion-Exchange Column as a Case Study. ChemEngineering 2022, 6, 21. [Google Scholar] [CrossRef]
- Di Bonito, L.P.; Campanile, L.; Napolitano, E.; Iacono, M.; Portolano, A.; Di Natale, F. Prediction of chemical plants operating performances: A machine learning approach. In Proceedings of the 37th ECMS International Conference on Modelling and Simulation, ECMS 2023, Florence, Italy, 20–23 June 2023. [Google Scholar] [CrossRef]
- Di Bonito, L.P.; Campanile, L.; Napolitano, E.; Iacono, M.; Portolano, A.; Di Natale, F. Analysis of a marine scrubber operation with a combined analytical/AI-based method. Chem. Eng. Res. Des. 2023, 195, 613–623. [Google Scholar] [CrossRef]
- De Micco, M.; Gragnaniello, D.; Zonfrilli, F.; Guida, V.; Villone, M.M.; Poggi, G.; Verdoliva, L. Stability assessment of liquid formulations: A deep learning approach. Chem. Eng. Sci. 2022, 262, 117991. [Google Scholar] [CrossRef]
- Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
- Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
- Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017; pp. 1–358. [Google Scholar] [CrossRef]
- Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3147–3155. [Google Scholar]
- Klir, G.J.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice-Hall, Inc.: Hillsdale, NJ, USA, 1994. [Google Scholar]
- Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
- Cover, T.; Hart, P. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
- Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. Catboost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst. 2018, 31, 6638–6648. [Google Scholar]
- Rasmussen, C.E. Gaussian Processes in machine learning. Lect. Notes Comput. Sci. 2004, 3176, 63–71. [Google Scholar] [CrossRef]
- Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian Network Classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar] [CrossRef]
- Harrell, F.E., Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis, 2nd ed.; Springer Series in Statistics; Springer: Cham, Switzerland, 2015. [Google Scholar] [CrossRef]
- Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Theory and Decision Library D; Springer: Dordrecht, The Netherlands, 1991; Volume 9. [Google Scholar] [CrossRef]
- Caruana, R.; Lou, Y.; Gehrke, J.; Koch, P.; Sturm, M.; Elhadad, N. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, 10–13 August 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1721–1730. [Google Scholar] [CrossRef]
- Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 2, 1097–1105. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
- Li, W.; Gu, C.; Chen, J.; Ma, C.; Zhang, X.; Chen, B.; Wan, S. DLS-GAN: Generative adversarial nets for defect location sensitive data augmentation. IEEE Trans. Autom. Sci. Eng. 2023, 21, 4. [Google Scholar] [CrossRef]
- Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
- Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
- Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef] [PubMed]
- Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: Proceedings of Machine Learning Research. Precup, D., Teh, Y.W., Eds.; 2017; Volume 70, pp. 3319–3328. [Google Scholar]
- Bertsimas, D.; Dunn, J. Optimal classification trees. Mach. Learn. 2017, 106, 1039–1082. [Google Scholar] [CrossRef]
- Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2013, arXiv:1312.6034. [Google Scholar] [CrossRef]
| Field | Description |
|---|---|
| Id | Unique identifier for each record |
| Author | Authors of the paper |
| Title | Title of the document |
| Year | Year of publication |
| Author Keywords | Keywords provided by the author |
| Document Type | Type of the document |
| Open Access | Open Access availability |
| DOI | Digital Object Identifier |
| Category | Count |
|---|---|
| Fault Diagnosis | 30 |
| Prediction | 29 |
| Modelling | 21 |
| Optimisation | 9 |
| Design | 6 |
| Control | 5 |
| Algorithm | Count | Reference |
|---|---|---|
| XGBoost | 14 | [157] |
| Random Forest | 6 | [158] |
| Decision Tree | 4 | [159] |
| LightGBM | 4 | [160] |
| Fuzzy Logic | 3 | [161] |
| Support Vector Machine (SVM) | 3 | [162] |
| k-Nearest Neighbors (k-NN) | 3 | [163] |
| CatBoost | 2 | [164] |
| Gaussian Process Regression | 2 | [165] |
| Bayesian Network | 1 | [166] |
| Logistic Regression | 1 | [167] |
| Rough Set Theory (RST) | 1 | [168] |
| Explainable Boosting Machine (EBM) | 1 | [169] |
| Algorithm | Count | Reference |
|---|---|---|
| Convolutional Neural Networks (CNN) | 25 | [170] |
| Artificial Neural Networks (ANN) | 24 | [171] |
| Recurrent Neural Networks (RNN) | 6 | [172] |
| Technique | Count | Reference |
|---|---|---|
| SHapley Additive exPlanations (SHAP) | 38 | [174] |
| Local Interpretable Model-Agnostic Explanations (LIME) | 13 | [175] |
| Class Activation Maps (CAM) | 8 | [174] |
| Layer-wise Relevance Propagation (LRP) | 3 | [176] |
| Feature Importance (others) | 2 | [175] |
| Integrated Gradients (IG) | 2 | [177] |
| Abductive Logic-Based Explanations (ALBE) | 1 | [175] |
| Classification and Regression Trees (CART) | 1 | [159] |
| Local Explanations for deep Graph networks by Input perturbaTion (LEGIT) | 1 | [86] |
| Local Explanation Method using Nonlinear Approximation (LEMNA) | 1 | [106] |
| Mixed Integer Linear Problem (MILP) | 1 | [178] |
| Object Saliency Maps (OSM) | 1 | [179] |
| Statistical Fault Localization (SFL) | 1 | [102] |
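Given that SHAP dominates the table above, a minimal usage sketch may help situate it; the sketch assumes the shap package and a fitted tree-based model, both illustrative rather than drawn from any reviewed paper.

```python
# A minimal SHAP sketch for a tree ensemble; dataset and model are
# illustrative assumptions.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=5, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree models;
# shap_values[i, j] is feature j's contribution to prediction i.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)  # global summary of contributions
```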
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Published by MDPI on behalf of the International Institute of Knowledge Innovation and Invention. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).