Systematic Review

eXplainable Artificial Intelligence in Process Engineering: Promises, Facts, and Current Limitations

by Luigi Piero Di Bonito 1,†, Lelio Campanile 1,*,†, Francesco Di Natale 2,†, Michele Mastroianni 3,† and Mauro Iacono 1,†

1 Dipartimento di Matematica e Fisica, Università degli Studi della Campania “Luigi Vanvitelli”, 81100 Caserta, Italy
2 Dipartimento di Ingegneria Chimica, dei Materiali e della Produzione Industriale, Università degli Studi di Napoli “Federico II”, 80125 Napoli, Italy
3 Dipartimento di Scienze Agrarie, Alimenti, Risorse Naturali e Ingegneria, Università degli Studi di Foggia, 84084 Fisciano, Italy
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Appl. Syst. Innov. 2024, 7(6), 121; https://doi.org/10.3390/asi7060121
Submission received: 15 October 2024 / Revised: 14 November 2024 / Accepted: 22 November 2024 / Published: 30 November 2024
(This article belongs to the Section Artificial Intelligence)

Abstract: Artificial Intelligence (AI) has been swiftly incorporated into industry, becoming part of both customer services and manufacturing operations. To effectively address the ethical issues now being examined by governments, AI models must be explainable in order to be used in both scientific and societal contexts. This study examines the current state of eXplainable Artificial Intelligence (XAI) in process engineering through a systematic literature review (SLR), with particular attention paid to the technology’s effect, degree of adoption, and potential to improve process and product quality. Due to restricted access to sizable, reliable datasets, XAI research in process engineering is still primarily exploratory or propositional, despite noteworthy applications in well-known case studies. According to our research, XAI is increasingly positioned as a decision-support tool, with a focus on robustness and dependability in process optimization, maintenance, and quality assurance. This study, however, emphasizes that the use of XAI in process engineering is still in its early stages, leaving significant room for methodological development and wider use across technical domains.

1. Introduction

Adopting Artificial Intelligence (AI) techniques is largely advocated or affirmed as a good practice for pursuing innovation through the digitalization of industrial processes [1]. Industrial processes are characterized by an elevated degree of automation, with a rising number of indicators and controllers and an increasing complexity of Distributed Control Systems (DCSs). The acquisition of large amounts of experimental data provides large datasets, which can be suitably used for statistical analysis as well as for the application of AI processes [2]. The main applications of AI are off-line diagnosis, anomaly detection, process control, and quality prediction. The size of the datasets and the nature of the processes make the explainability of AI pivotal for any organization interested in strengthening its control of process functioning and its assessment of predictive maintenance routines or quality control [3,4,5,6]. Additionally, the European Union has indicated innovation in sustainability, environmental problems, and eXplainable Artificial Intelligence (XAI) among its key strategic directions.
The application of XAI to the process industry is still at a pioneering level, both because datasets are scarce, heavily biased, and sparsely distributed and because of the intrinsic novelty of this discipline. Despite the abundance of experimental data collected by companies, most studies refer to simulated data and metadata because real data are very rarely shared [7,8,9]. Nevertheless, part of the literature claims that XAI is an established practice in the field, and both academia and industry strive to increase its adoption and exploitation.
Understanding the actual results and directions requires filtering away marketing noise (and also a fuzzy use of keywords in the scientific literature) and focusing on the solutions and techniques made available to a wide audience through publication in the open, peer-reviewed literature. To clarify the state of the art and to provide a sound foundation for future research programs, we joined the efforts of a computer science and a process engineering research team to explore the literature through a set of research questions aimed at addressing the following: (i) depict a clearer image of the existing results; (ii) understand the influence of XAI in computer science; (iii) verify how coherently the terms are used between computer science and process engineering; (iv) identify the main objectives, impacts, and benefits that XAI and its components are expected to provide to process engineering, in terms of both process and product performance improvement. We found that XAI models are still at an exploratory stage, with several examples based on third-party or simulated data rather than on full-scale processes. The scrutiny of the selected works shows a low yield of papers pertinent to the subject of this work relative to those returned by the search query. Other works previously known to the authors were not returned by the search queries despite being available in the same databases: this signals a non-standard use of keywords for this subject. Additionally, we found that the main goal of XAI models is to assure transparent and robust descriptions of process functioning and fault analysis.

2. Terms and Scope of This Research

At an industrial level, process engineering is the knowledge and application of the fundamental principles and laws of nature that allow people to turn raw materials and energy into products that are important to humans. Process engineering encompasses a wide range of subsidiary topics such as mechanical-, aerospace-, computer-, thermal-, bio-, electrochemical-, chemical-, and systems-process engineering, as well as nanotechnology. For example, chemical process engineers can synthesize and purify vast amounts of desired chemical products to produce intermediates for mechanical processing, which in turn produces components for electronics, aerospace, or other applications. The oil and gas, automotive, chemical, paper production, metal, cement, material development, mining, pharmaceutical, civil engineering, packaging, and food and beverage sectors are just a few examples [10,11,12,13,14]. In recent years, process engineering has experienced profound modifications derived from the improvement of logistics, the modification of value chains, and, above all, a greater demand for high-performance products and the establishment of economic and social models based on sustainable development principles [15,16,17].
As a result, several national and international administrations and institutions have introduced innovative and sustainable manufacturing development plans in recent years. The most celebrated action in this sense is the establishment of the Sustainable Development Goals proposed by the UN, which set targets for the improvement, by 2030, of several social indicators, such as access to clean water and sanitation, affordable and clean energy, responsible consumption and production, and improved quality of human life and the environment [18,19,20]. Integration between industrial manufacturing and Information and Communication Technologies (ICT) is expected to play a key role in the transition from the traditional to the new production model [21,22,23,24]. Process systems engineering is the application of systematic computer-based approaches to model process engineering systems.
The multiscale nature of many process engineering applications and the complexity of system layouts and their interconnections result in mathematical models characterized by extremely high dimensionality and very complicated correlations between parameters; several optimization tools and data reduction methods are therefore used to ensure suitable management and control of the process at hand [25,26,27,28,29]. Due to the widespread usage of distributed control systems over the last few decades, the process industry is collecting a vast and ever-increasing amount of data [30]. While developing first-principles models for more complex processes becomes increasingly challenging, data-driven process modeling, monitoring, prediction, and control have become a suitable approach to support process management [31,32]. The demand for automated techniques to model complex systems is growing [33,34]. This demand has been answered by the development of AI, first in the form of Machine Learning (ML) algorithms and later with the introduction of Deep Learning (DL) algorithms. While AI has been widely used in ICT and smart grids, there are still numerous problems with implementing AI in process engineering because of a lack of model explainability, transparency, and trustworthiness. To resolve these issues, XAI algorithms have been defined to support decision-makers in the comprehension of model outputs by opening the AI “black boxes” [35,36]. By increasing model explainability and transparency, XAI could also promote the creation of more accurate and advanced AI models. The goal of this paper is to understand the current maturity of use and diffusion of XAI in process engineering, as well as to leverage the literature to identify potential weaknesses and limitations of the methodologies that may affect the progress and outcome of research and industrial activities, also classifying the application areas (e.g., chemical, mechanical, aerospace, or bioengineering) in which adoption is wider and potential community support is stronger. The analysis includes, as a side effect, an investigation into the types of ML or DL algorithms employed, as well as the type and source of the datasets used during the training step.

3. Theoretical Framework on eXplainable Artificial Intelligence

AI is a broad term that includes the theory and use of computer systems to mimic tasks that often call for human intelligence, such as strategic thinking or the recognition of categorization schemes, as in speech or picture recognition. Most of the time, these problems cannot be solved by analytical approaches. As a branch of AI, ML uses statistical models and algorithms to analyze and draw conclusions from data. Furthermore, DL is a branch of ML that builds ML systems using Artificial Neural Networks (ANNs). With applications in criminology, health, autonomous driving, chemical reaction path detection, and many other fields, AI is becoming more and more significant in our daily lives.
In process engineering, the modeling of phenomena can take either a deterministic or probabilistic approach. Deterministic models, based on well-defined mathematical equations, are often simpler to implement and interpret but fail to fully capture the nonlinearities typical of real systems. On the other hand, probabilistic models include variability and uncertainties, which describe phenomena more accurately but at the cost of greater computational complexity. A probabilistic approach, supported by artificial intelligence, thus allows for a more realistic representation of fluctuations and stochastic behavior in industrial systems. Integration with Explainable AI (XAI) techniques would not only make the decision-making process in task resolution more transparent but also allow clarification of the phenomena underlying system nonlinearities.
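As a minimal illustration of this distinction, the sketch below contrasts a deterministic first-order decay model with a probabilistic variant in which the rate constant is uncertain; the process, parameter values, and noise level are purely hypothetical stand-ins for a real plant unit, not drawn from any reviewed study.

```python
import numpy as np

# Hypothetical first-order process: c(t) = c0 * exp(-k * t).
rng = np.random.default_rng(seed=0)
k, c0 = 0.35, 1.0                    # assumed rate constant and initial value
t = np.linspace(0.0, 10.0, 50)

# Deterministic model: a single exact trajectory from the governing equation.
c_det = c0 * np.exp(-k * t)

# Probabilistic model: the rate constant is uncertain, so the output is a
# distribution of trajectories rather than one curve.
k_samples = rng.normal(loc=k, scale=0.05, size=500)
c_prob = c0 * np.exp(-np.outer(k_samples, t))    # ensemble, shape (500, 50)

i = t.searchsorted(5.0)
print("deterministic c(t=5):", round(float(c_det[i]), 3))
print("probabilistic c(t=5): mean=%.3f, std=%.3f"
      % (c_prob[:, i].mean(), c_prob[:, i].std()))
```

The ensemble spread is exactly the kind of stochastic behavior the deterministic curve cannot represent.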
According to new European Union recommendations aiming at promoting moral, open, and trustworthy AI processes, the ethical implications of this process are extremely pertinent [37]. These recommendations emphasize the repeatability and dependability of results, thus preserving operator safety and encouraging smooth interaction between process operators and AI systems.
Two fundamental ideas—the stage of explainability and the scope of explainability—are the foundation for the conception and construction of a trustworthy and explainable AI methodology, according to published research [36].
The process by which the model is rendered explainable is referred to as the “stage of explainability”. The literature makes a distinction between models that can be described by external XAI methods and those that can be explained by structure [38]. The difference between interpretable models and model interpretability methodologies is another way to conceptualize this dichotomy. The distinction between intrinsic and post-hoc explainability is a more commonly used classification:
  • While striving for optimal performance, intrinsic explainability approaches usually incorporate the rationale behind the choice from the start of data training. These techniques are frequently employed to produce explanations for transparent models, such as Bayesian models, decision trees, fuzzy logic, linear/logistic regression, and others. Transparent models exhibit some degree of interpretability on their own. In particular, new types of interpretable models arise in the literature, such as interpretable cascades [39] or non-iterative artificial neural networks, which provide interpretable results in polynomial form [40]. By definition, intrinsic methods are model-specific, which means that explainability is limited to that particular class or kind of algorithm.
  • In addition to the underlying model, post-hoc techniques also use an external or surrogate model. When AI models do not fulfill the requirements to be declared transparent, a different approach must be used to explain the model’s choices. The base model stays the same, and the external model imitates the base model’s behavior to provide explanations to users. These techniques are usually linked to models (such as tree ensembles, support vector machines, multi-layer neural networks, convolutional neural networks, recurrent neural networks, and similar) whose inference mechanism is unknown to users. Post-hoc techniques can be used with any AI model because they are often model-agnostic (a minimal sketch contrasting the two stages follows this list).
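As a sketch of the distinction, assuming a synthetic regression dataset and scikit-learn (both illustrative choices, not taken from the reviewed papers): an intrinsically explainable shallow decision tree can print its own learned rules, while a post-hoc surrogate tree is trained on a black-box model’s predictions to imitate it.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=500, n_features=4, random_state=0)

# Intrinsic explainability: a shallow decision tree is transparent by
# construction, so its learned rules can be printed directly.
tree = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=[f"x{i}" for i in range(4)]))

# Post-hoc explainability: the black-box model stays unchanged; a simple
# surrogate is trained on its predictions and inspected in its place.
black_box = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))           # imitates the base model
print("surrogate fidelity (R^2 vs. black box):",
      round(surrogate.score(X, black_box.predict(X)), 3))
```

The surrogate here also exemplifies the surrogate models of the visualization category discussed below.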
In terms of model-agnostic post-hoc methodologies, several have been developed recently to improve the explainability of AI models using data science, ML, and statistics techniques [41]. These techniques may be grouped into four broad categories: visualization, knowledge extraction, influence methods, and example-based explanation.
  • Visualization: This technique uses representations of an ML model, such as a Deep Neural Network (DNN), offering a natural way to inspect the patterns hidden inside it. Three well-known visualization techniques are individual conditional expectation (ICE), the partial dependence plot (PDP), and surrogate models:
    More complex models are described by surrogate models, which are simple models. In order to understand the complex model, an interpretable model, like a decision tree or linear model, is trained using the original black-box model’s predictions;
    A graphical representation known as PDP facilitates the visualization of an averaged partial relationship between one or more input variables and the predictions of a black-box model;
    While PDPs provide a rough overview of a model’s behavior, ICE plots disaggregate the PDP output to show interactions and individual differences.
  • Knowledge extraction is the process of obtaining, in an intelligible way, the information that the algorithm records as an internal representation during training. When considering an ANN, knowledge extraction confronts the difficulty of extracting explanations from the network. Rule extraction and model distillation are two kinds of techniques for obtaining information concealed in complex algorithms:
    Rule extraction provides a symbolic and understandable explanation of what the algorithm has learned during training by utilizing its input and output to extract rules that mimic the decision-making process;
    Model distillation is a compression technique used to transport information (dark knowledge) from deep networks (the “teacher”) to shallow networks (the “student”).
  • Influence methods: This approach assesses the importance or relevance of a feature by altering internal or input components and documenting the extent to which the changes affect model performance. Influence strategies are widely used. Three methods for assessing the significance of an input variable are feature importance, sensitivity analysis, and Layer-wise Relevance Propagation (LRP):
    Sensitivity analysis explains how the output is affected by changes in its input and/or algorithm parameters. It is frequently used to test models for stability and reliability, either as a tool to identify and eliminate irrelevant input attributes or as a foundation for a more potent explanation technique (e.g., decomposition);
    LRP redistributes the prediction function backwards, backpropagating up to the input layer from the network output layer. One important component of this redistribution process is relevance conservation.
    Feature importance measures how much each feature, or input variable, contributes to a complicated AI/ML model’s predictions. A feature’s importance is assessed as the rise in the model’s prediction error after permuting that feature: permuting the values of important features increases the error, while permuting attributes the model disregards leaves the error essentially unchanged (a minimal sketch of this permutation procedure follows this list).
  • Example-based explanation: In accordance with this paradigm, the practitioner describes the behavior of AI/ML models by selecting particular examples from the dataset. Two possible example-based interpretability techniques are prototypes and criticisms, and counterfactual explanations:
    Prototypes and criticisms are a subset of typical cases drawn from the data; hence, item membership is defined by resemblance to the prototypes, which can result in over-generalization. To overcome this, exceptions, known as criticisms, must be indicated for situations that are not properly represented by those prototypes.
    Counterfactual explanations define the minimum conditions that would have led to a different choice, without having to describe the entire logic of the model. Unlike counterfactual instances, where the emphasis is on the reversal of the prediction rather than its explanation, the emphasis here is on the explanation of a single prediction.
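As a concrete sketch of the influence-method idea above, the code below implements permutation feature importance by hand on a synthetic benchmark (the dataset, model, and error metric are illustrative assumptions, not taken from any reviewed paper).

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error

# Synthetic benchmark: only the first 5 of 8 features are informative.
X, y = make_friedman1(n_samples=600, n_features=8, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)
base_error = mean_squared_error(y, model.predict(X))

rng = np.random.default_rng(seed=0)
for j in range(X.shape[1]):
    X_perm = X.copy()
    rng.shuffle(X_perm[:, j])        # destroy the information in feature j
    err = mean_squared_error(y, model.predict(X_perm))
    # Important features cause a large error increase; ignored ones do not.
    print(f"feature {j}: importance = {err - base_error:.3f}")
```

On this benchmark, the three uninformative features should show importances near zero, matching the description above.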
With the term “scope of explainability”, the authors refer to the extent of the explanation developed by the method [42]. In an explanation with global scope, the full inferential process of the model is made transparent or understandable to the user. An explanation with local scope, on the other hand, explicitly explains a single instance of inference to the user [43]. In Figure 1, the previous concepts about XAI methods are summarized.
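To make the local scope concrete, the following naive random-search counterfactual is a purely illustrative sketch (the classifier, synthetic dataset, and search strategy are all assumptions for demonstration, not a method reported in the reviewed literature): it looks for the smallest perturbation of one instance that flips the model’s decision, explaining that single prediction rather than the whole model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
model = LogisticRegression().fit(X, y)

rng = np.random.default_rng(seed=0)
x0 = X[0]
original = int(model.predict(x0.reshape(1, -1))[0])

# Naive local search: sample random perturbations and keep the closest
# one that reverses the model's decision for this single instance.
best = None
for _ in range(5000):
    candidate = x0 + rng.normal(scale=1.0, size=x0.shape)
    if model.predict(candidate.reshape(1, -1))[0] != original:
        if best is None or (np.linalg.norm(candidate - x0)
                            < np.linalg.norm(best - x0)):
            best = candidate

if best is None:
    print("no counterfactual found within the sampled perturbations")
else:
    print("original class:", original)
    print("minimal change found:", np.round(best - x0, 3))
```

A global-scope explanation would instead characterize the whole decision boundary, e.g., via the model’s coefficients.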
The need for XAI derives from the significant success of AI-based technologies in several fields. Scientific and technological improvements produce autonomous systems that perceive, learn, decide, and act on their own. The applicability of these systems is limited by the inability of computers to explain their decisions and actions to human users. In this sense, XAI programs aim to develop machine learning or deep learning techniques that produce more explainable models while assuring high levels of learning efficiency and prediction accuracy [44].
A wide range of applications related to artificial intelligence involve the military and civil fields. Significant examples of practical AI/XAI applications are visible in security, film-making, geographical analysis, text analysis, simultaneous translation, medical applications, autonomous vehicle driving, and advanced database search. AI, and sometimes XAI, is used to support vehicle maneuvering for cars, airplanes, and ships. Recently, AI has been applied to molecular simulations and to the fast recognition of the best possible option for complex chemical and biological processes such as drug design, which was involved, for example, in the synthesis of the SARS-CoV-2 vaccine. Many of these applications involve AI as advanced image analysis and fast screening and recognition of paths. On the contrary, application to process and product engineering is at an earlier stage, probably because of the fragmented and sparse character of the data available in this field.

4. Related Works

Today, XAI is still a frontier concept: the scientific community, composed of both computer science (CS) researchers and researchers from other fields, has started to study how the implementation of XAI could change the interaction between humans and machines, mitigating problems related to the “black-box” nature of AI algorithms. To illustrate this trend, in this section, we briefly analyze a few reviews from different research areas.
Tomsett et al. [45] provide a thorough examination of the importance of XAI in the context of AI system trust calibration. The research goes into several XAI algorithms and their applications in quick trust calibration. It investigates strategies such as rule-based models, local surrogate models, and uncertainty-aware AI, which give explanations as well as insights into the model’s confidence in its predictions. The study covers the difficulties and constraints of implementing XAI in AI systems, including the trade-off between model complexity and interpretability, the possible performance effect of interpretable models, and the necessity for XAI technique standardization. Users may obtain significant insights into the decision-making process by making AI models interpretable and uncertainty-aware, leading to higher trust and adoption of AI systems.
Vyas et al. [46] investigate the transformational potential of AI in the field of pharmacy, while simultaneously emphasizing the need for XAI in assuring responsible and ethical AI technology adoption. XAI is critical in drug development in pharmaceutical applications, where AI algorithms find possible therapeutic candidates. XAI helps chemists and researchers better understand the aspects impacting drug discovery results by offering explicit explanations for AI-generated predictions. It also confirms the reliability of AI-driven drug candidate suggestions. Furthermore, XAI is important in pharmaceutical management because it can explain how AI makes decisions in discovering drug interactions, adverse reactions, and prescription mistakes. Because of this openness, chemists may validate AI-generated alarms and take relevant measures to maintain patient safety. Furthermore, the article investigates the use of XAI in personalized medicine, in which AI algorithms provide individualized pharmaceutical recommendations based on patient-specific data. XAI increases patient confidence and helps chemists have informed conversations with patients about treatment strategies by offering interpretable explanations. It also addresses ongoing research attempts to build strong XAI approaches appropriate for the complicated AI models utilized in pharmaceutical applications.
Krishnan et al. [47] emphasize the transformational influence of AI in biological signal processing while recognizing the necessity of XAI in boosting the interpretability and trustworthiness of AI-driven feature extraction approaches. XAI enables healthcare practitioners to confidently adopt AI-driven signal processing techniques by making AI models transparent and interpretable, resulting in enhanced patient care, diagnosis, and medical research.
Emaminejad et al. [48] investigate the crucial role of XAI in assuring the credibility and ethical application of AI and robotics technologies in the Architecture, Engineering, and Construction (AEC) business. The authors stress the need for XAI in establishing Trustworthy AI and Robotics in the AEC sector. AI and robotic systems can use XAI approaches to offer human-readable explanations for their actions, providing transparency and promoting a deeper knowledge of their decision-making processes. Furthermore, the article analyses the ethical implications of artificial intelligence and robots in the AEC business.
Joshi et al. [49] present a thorough overview of the function of AI in autonomous molecular design, with a particular emphasis on the significance of XAI in this domain. The authors address several XAI approaches relevant to autonomous molecular design, such as molecular visualization tools, saliency maps, and interpretable machine learning models, to provide human-readable explanations for AI-generated molecular designs. These approaches allow researchers to discover important molecular traits that contribute to the required attributes and assess the validity of AI-generated molecular candidates.
Zou et al. [50] provide a detailed evaluation of the current status of research and future possibilities in the context of the agricultural Internet of Things (IoT). The article emphasizes the role of XAI in sensor problem diagnostics. XAI approaches provide visible and interpretable explanations for the judgments of AI-driven fault detection models. This interpretability is critical for establishing confidence in AI models and evaluating sensor fault diagnosis accuracy. The study investigates several XAI approaches, such as rule-based models, feature visualization, and local interpretability methods, that are suitable for sensor malfunction detection in the agricultural IoT.
Khosravani et al. [51] give an in-depth examination of the most recent advances in 3D-printed sensors as well as the possible hurdles that lie ahead. Along with sensor technology improvements, the study explores the significance of XAI in the context of these sensors. XAI can increase the use of these sophisticated sensors in crucial applications such as healthcare diagnostics, environmental monitoring, and autonomous systems by increasing transparency and interpretability. The study not only covers the present state of the art in 3D-printed sensors but also recognizes XAI as an important component in assuring the responsible and reliable deployment of these sensors in many areas. The use of XAI methodologies in the development of 3D-printed sensors has the potential to considerably contribute to the future adoption and utilization of this breakthrough technology.
Li et al. [52] provide a comprehensive assessment of studies on the use of machine learning for intelligent problem diagnostics of airplane fuel systems. The study discusses the significance of defect diagnostics in aviation fuel systems, which play a critical role in maintaining safe and efficient flight operations. XAI approaches seek to give human-readable explanations for machine learning model judgments. This openness is critical in aviation, as engineers and pilots must comprehend the reasoning behind defect diagnoses and respond appropriately in real-time settings. The authors explore feature visualization, rule-based models, and sensitivity analysis as XAI methodologies relevant to intelligent problem detection of airplane fuel systems. Researchers and aviation professionals may improve the interpretability of machine learning models and obtain insights into the characteristics and patterns that influence defect diagnostics by combining these approaches. The report also investigates the possible benefits of XAI in the context of aircraft safety and maintenance.
The discussed studies demonstrate the growing impact of XAI in various sectors, underscoring the urgent need for transparency, reliability, and interpretability in AI systems that interact closely with human-driven processes. While XAI has made significant advancements in domains such as healthcare, agriculture, and aeronautics, its integration into process engineering remains relatively unexplored and nascent. This paper aims to address this gap by providing a comprehensive analysis of the current applications and development of XAI in process engineering, identifying both the challenges and the significant potential for its adoption. By means of a systematic literature review, this study addresses the manner in which XAI could transform process engineering by providing support for critical functions such as fault detection, optimization, and quality control, thereby advancing the reliability and robustness of AI in this complex domain. The study analyzes in detail essential aspects not fully described in previous studies, such as the types of datasets used (e.g., real, simulated, or literature datasets), the algorithms employed (e.g., machine learning or deep learning), the side effects of XAI implementation, the computational costs/performance evaluation, and the degree of explainability embedded in the different steps.

5. SLR Process

A strategy for assessing and gathering data about a particular argument and one or more specific research questions is called a Systematic Literature Review (SLR). Through a well-established and standardized procedure that goes through three crucial stages—planning the review, conducting the review, and reporting the findings—an SLR should offer a precise and thorough examination of the subject of interest.

5.1. SLR Target

The goal of this study is to conduct a thorough analysis of the research literature on the application of XAI in the field of process engineering through 31 August 2024. The goal is to determine which application areas have made use of XAI, how they have been used or developed, what type of data has been used, and whether or not the calculation costs related to their usage have been covered.

5.2. SLR Protocol

The review protocol and methodology adhere to consolidated approaches, such as those of Kitchenham et al. [53] and Campanile et al. [54]. The first step in putting the review process into practice is defining the search query, which is then utilized to automatically discover the most pertinent publications by retrieving data from a scientific literature search engine. The publications were then carefully examined using both human analysis and automatic list refinement; all abstracts and conclusions were evaluated to identify those pertinent to the SLR. To obtain the most pertinent information, the contents of the chosen articles were finally assessed using the data extraction form. Figure 2 shows this protocol.
It is worth noting that large language models may improve future systematic reviews by improving the synthesis and analysis of extensive literature. These models may quickly synthesize extensive datasets, potentially enabling researchers to discern broader trends and find emerging areas within the field more quickly, which could enhance the proposed SLR process.

6. Research Questions

Given the widespread claims about the adoption of AI-based solutions, both in scientific and commercial documents, public funding invested in promoting and sustaining AI solutions in the industry, and the importance of responsible use of AI in impacting applications like the process engineering ones, we, as anticipated, decided to investigate the maturity level of the use of AI and XAI, using scientific literature as a proxy of the situation of the field. We chose as signs of this maturity the spread of the use of AI or XAI in many application fields, the quality of the AI or XAI application process in the solution, the importance of the role of AI or XAI in the solutions, the presence of an explicit evaluation of the performance improvement of an AI or XAI solution for previous cases, and the adoption of XAI solutions rather than other AI solutions. To conduct our investigation of literature, we translated these criteria into 10 specific research questions, which are easier to use as specific analysis tools while studying single papers.
To achieve the goals of the study, the following research questions have been defined:
  • (RQ.1) What is the approach of the paper regarding XAI?
  • (RQ.2) What kind of application does the paper cover?
  • (RQ.3) What kind of AI subset does the paper cover?
  • (RQ.3.1) What kind of algorithm is implemented?
  • (RQ.4) What kind of dataset is used for the training?
  • (RQ.5) Does AI play a major role in the study? Or is it an auxiliary technology?
  • (RQ.6) What side effect of XAI is investigated?
  • (RQ.7) What is the stage of the XAI method used?
  • (RQ.7.1) Which technique is used?
  • (RQ.8) Is the performance of the XAI technique covered by the paper?
  • (RQ.9) Is there an assessment for improvement of performance?
  • (RQ.10) Are the XAI technique computational costs considered by the paper?
The purpose of the first three questions is to explore whether XAI is considered in process engineering papers. In particular, RQ.1 has been designed to search for and categorize papers related to process engineering and XAI in general, specifically considering whether they follow a methodological approach, focus on implementation, or present a case study; RQ.2 has the purpose of exploring the current engagement of XAI techniques in process engineering practice, both to verify a potential “polarization” of AI toward a specific process engineering field (i.e., design, fault diagnosis, optimization, prediction, or modeling) and to explore how the process engineering community believes XAI can be used in its fields; RQ.3 and RQ.3.1 serve to verify how the XAI framework is instantiated across the different AI subsets.
RQ.4 is used to indirectly understand the degree of penetration of XAI in process engineering practice and the current level of investment in experimental campaigns: the use of experimental or real data implies an intervention on real setups in vivo, with the related problems in the operation of a plant, including possible interference, which is accounted as a cost paid back by the results of the experimentation; laboratory data are obtained in controlled conditions on secondary or dedicated setups with no impact on production, and denote both an intermediate level of research activity and lower investments; the use of literature data can be interpreted as an early stage of activity. Some consideration is also needed for authors who use aggregated datasets built from various kinds of sources: this kind of dataset is used when the real one is not consistent or complete, so, to obtain a dataset that describes the phenomena, scientists complete it using other sources.
RQ.5, RQ.6 and RQ.7, with RQ.7.1, are used to obtain detailed technical information about XAI utilization, such as the dimension impacted by the XAI effect (i.e., Privacy and Data Governance, Reproducibility and Replicability, Robustness, Safety, or Transparency and Explainability), while RQ.8, RQ.9 and RQ.10 are an attempt to verify whether there is an advanced level of XAI usage where engineering performance is needed.
These ten research questions have been carefully structured to comprehensively investigate the penetration and utilization of XAI within process engineering. The scope of this study encompasses a broad range of XAI applications, from theoretical approaches to practical implementations and case studies. The focus is not merely on identifying the types of XAI techniques employed but also on understanding how they are integrated with the specific requirements of process engineering, such as design, fault diagnosis, optimization, and modeling. The research questions address several key concepts: the nature of the XAI approach (whether it is methodological, focused on implementation, or centered on case studies), the specific AI techniques and datasets used, and the extent of XAI integration within the broader AI framework, whether it plays a primary or auxiliary role. Furthermore, they explore critical dimensions of XAI, including transparency, explainability, robustness, and safety, while also examining practical concerns such as performance improvement and computational costs. This structured approach ensures a thorough analysis of both the technical and practical implications of XAI in process engineering. It provides insights into the current state of adoption, identifies potential gaps, and highlights opportunities for future development. Ultimately, by capturing the maturity and orientation of XAI in process engineering, this study aims to contribute valuable knowledge about the role of XAI in advancing the field.

7. Search Strategy and Databases

To obtain a comprehensive overview of the topic, searches were conducted using the two most significant search engines for scientific literature: Scopus [55] as the primary source, with additional results from WoS [56]. This approach increases the likelihood of selecting conferences and workshops deemed relevant to the subject by the scientific community or recognized by publishers and editors while being filtered and indexed by reputable databases. The papers retrieved from both databases were combined to form the final set of documents for analysis. Given the difficulty of assessing the quality of informal literature, it was excluded from the study (including PowerPoint slides, conference reviews, informal reports, work in progress, and technical notes). This decision enhances the stability of the analysis and ensures a higher level of confidence in the quality of the selected papers. The collection of documents was identified by using free keyword-based search terms in both search engines (Scopus and Web of Science), with filtering restricted to English-language documents. Moreover, as AI and XAI are subjects that are largely the object of marketing claims, misunderstandings, and announcements, given the current hype around them related to the wave of European Union initiatives, and as the market is not yet mature, we decided to resort to these two engines to build a safe perimeter. Besides having the support of peer review, such a collection usually constitutes an ideal working set, generally considered as a reference by academic institutions and research quality assessment panels, and it avoids an explosion of potential sources that may be unreliable or unchecked or that may add noise to our study. This also helped us filter out grey literature, which is abundant on the web and cannot be verified with a sufficient degree of confidence and detail.
The specific query was applied to the title, abstract, or keywords of the papers to capture all relevant publications in the field of process engineering related to the topic:
TITLE-ABS-KEY((xai AND engineering) OR ((explainable OR trustworthy) AND ai AND engineering))
For the selection of papers in the field of XAI, the keywords “Explainable”, “Trustworthy”, “XAI”, and “AI” were used, as these are among the most commonly associated with the subject. Given that process engineering encompasses a wide range of sub-domains (e.g., aerospace engineering, mechanical engineering, chemical engineering, etc.), the term “Process” was deliberately excluded from the query, using only the term “Engineering” instead. This approach allowed for collecting all papers within the engineering field, with a subsequent refinement process to select only those related to process engineering. The keywords were combined using logical relations through Boolean operators such as “OR” and “AND”, forming a single comprehensive query.

8. Selection Criteria, Data Cleaning and Collection

The protocol began in August 2024. A total of 1209 papers were identified through searches using Scopus (816 papers, accessed on 31 August 2024) and WoS (393 papers, accessed on 31 August 2024), with the majority found on Scopus. To apply the aforementioned selection criteria, a data-cleaning process was necessary. Papers written in English on any application of XAI in process engineering, published in peer-reviewed international conference proceedings and journals, met the inclusion requirements for the SLR. Exclusion criteria involve articles related to the methodologies under study in disciplines other than process engineering (such as medicine or psychology), articles not published in peer-reviewed conference proceedings or journals, reviews or research perspectives, and duplicate papers. These inclusion and exclusion criteria guided the selection of articles.
An initial cleaning was performed on each database by removing documents without a DOI to determine document eligibility. This resulted in the exclusion of 143 and 9 documents from Scopus and WoS, respectively, due to the absence of a DOI. Additionally, six papers from Scopus and three from WoS not written in English were excluded. This left 667 documents from Scopus and 381 from WoS. With a total exceeding 1000 documents, results from the search engines were combined, and duplicates were removed using the documents’ DOI for verification. These initial database cleaning operations were carried out using Python’s Pandas library.
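As an illustration of these Pandas-based cleaning steps, the sketch below assumes the Scopus and WoS exports were saved as CSV files with hypothetical "DOI" and "Language" columns; the file and column names are assumptions, not details reported here.

```python
import pandas as pd

# Hypothetical export files and column names, for illustration only.
scopus = pd.read_csv("scopus_export.csv")
wos = pd.read_csv("wos_export.csv")

def clean(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["DOI"])   # eligibility requires a DOI
    df = df[df["Language"].str.contains("English", case=False, na=False)]
    return df

# Combine both engines, then remove cross-database duplicates via the DOI.
merged = pd.concat([clean(scopus), clean(wos)], ignore_index=True)
merged["DOI"] = merged["DOI"].str.lower().str.strip()
deduplicated = merged.drop_duplicates(subset="DOI")
print(len(deduplicated), "unique documents retained for screening")
```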
From the initial selection, 774 articles were chosen for screening. After reviewing the titles and abstracts, 332 articles were excluded because they were not related to the operational use of XAI but only referred to its general conceptualization and philosophical approach. An additional 60 documents were removed because they were reviews, surveys, books, perspectives, notes, or editorials. Out of the remaining 382 documents, another 293 were excluded for being outside the application domain (e.g., medicine, healthcare, psychology, software engineering), resulting in 89 documents. Subsequently, based on the authors’ knowledge, an additional 11 relevant documents, which were not available in either the Scopus or WoS databases but are significant to the research, were included. This brought the total number of papers subjected to review to 100.
Figure 3 presents a step-by-step flow diagram of this identification process, using a modified “Preferred Reporting Items for Systematic Reviews and Meta-Analyses” diagram reported in [57]. In this way, 11.5% (89/774) of the examined papers have been included in the study [58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112,113,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143,144,145], integrated with the eleven manually selected papers [146,147,148,149,150,151,152,153,154,155,156].
Data collection was conducted both by extracting some data from the search engines and by studying the published articles. Once the selected papers had been divided, a data extraction form was used to categorize and analyze the papers organically and objectively, gathering information according to the research questions. The first part collects all the search engines’ general information about each paper, as reported in Table 1; the second part collects the information related to the research questions in order to address them.

9. Results

9.1. General Statistics

Figure 4a shows an evident prevalence of conference papers and proceedings: while this is quite typical of the computer science community, a comparison with the number of articles and conference papers shown in Figure 4b indicates that most of those conference and proceedings papers were filtered away by our analysis, as their actual contents used AI marginally or in a way that is not consistent with the criteria stated in the SLR (e.g., they presented projects in their starting phase, or they documented generic strategies). Figure 4a,b also document a substantial balance between open-access and non-open-access publications before the selection, while after the selection, open-access publications represent 64% of the total.
Figure 5a,b show the temporal cumulative distribution of papers before and after the selection. Starting from 2016, when the current concept of XAI was defined by DARPA [35], the distribution of selected papers seems to capture, as expected, a more recent interest due to two factors: the time needed to transfer ideas and methods from the computer science community to the process engineering community, and the higher sensitivity of the latter to the degree of trust that a solution should exhibit before becoming a viable option.
Finally, a comparison was conducted between the keywords of the two sets before and after selection. The objective was to determine whether and how these keywords were used appropriately or if a trend effect influenced their usage, as well as to identify which keywords are considered more relevant by the authors of the most specialized works. The results, presented as word clouds in Figure 6a,b, indicate a notable difference in keyword prominence.
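As a sketch of how such a comparison can be produced, the snippet below uses the third-party wordcloud package with toy keyword lists (the actual keyword sets come from the bibliographic records and are not reproduced here).

```python
from collections import Counter
from wordcloud import WordCloud   # third-party "wordcloud" package

# Toy keyword lists standing in for the eligible and selected sets.
eligible = ["explainable ai", "machine learning", "artificial intelligence",
            "explainable ai", "trust", "decision making", "system"]
selected = ["fault detection", "diagnosis", "process", "safety",
            "fault detection", "optimization", "prediction"]

for name, keywords in [("eligible", eligible), ("selected", selected)]:
    frequencies = Counter(keywords)          # keyword -> occurrence count
    cloud = WordCloud(width=800, height=400, background_color="white")
    cloud.generate_from_frequencies(frequencies)
    cloud.to_file(f"wordcloud_{name}.png")   # one image per document set
```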
The word cloud of the eligible documents, representing the initial set of articles, shows the dominance of broad terms like Explainable AI, Artificial Intelligence, and Machine Learning, reflecting the general interest in XAI without necessarily focusing on its specific applications in process engineering. More specialized terms, such as system, decision making, and trust, are present but less prominent.
In contrast, the second word cloud, which reflects the selected and reviewed articles, reveals a shift toward more process-engineering-specific keywords. Terms like Fault detection, diagnosis, defect, safety, and process indicate a stronger focus on industrial safety and fault management, which are critical to process engineering. Concepts such as optimization, prediction, and classification also gain prominence, demonstrating the technical emphasis of the selected works.
This comparison shows a clear progression from a broader discussion of XAI to a more specialized focus on its practical applications in process engineering, particularly concerning system reliability and fault detection, underscoring the success and the necessity of the selection process in narrowing the literature to more relevant studies.

9.2. Research Questions

Concerning RQ.1, to shape the general framework of the literature’s contribution to the theme, papers have been classified into three categories:
  • Implementation: This category groups all papers that document AI-based solutions applied to the domain for the design, implementation, control, verification, validation, or as a support in these functions, possibly with a comparison between techniques or between experimental tests or testbeds;
  • Case Study: This category groups all papers reporting an actual application of AI-based solutions to specific cases, either with a central or a support role;
  • Methodology: This category is dedicated to papers that present or discuss theoretical aspects, general approaches, proposals, and methodological issues related to the application of AI-based techniques to improve, manage, design, verify, or validate process engineering solutions and designs.
The analysis of the stacked bar chart reported in Figure 7 reveals that more than one-third of the selected papers fall under the Case Study category, while papers focusing on implementation account for more than half of those devoted to case studies. The Methodology category contains an even smaller number of papers, further confirming the growing interest in the adoption of AI-based solutions, although their validation and large-scale implementation are still in the early stages.
This distribution also reflects the findings from the examination of the content of the overall significant paper set: many papers are future-oriented or provide technical analyses without proposing a concrete approach beyond strategic planning. The ratio between the two major categories of applied papers—implementation and case study—confirms the lack of a consolidated, mature methodological framework. This suggests that design studies and experimental applications are still evolving, with few fully validated solutions. On the other hand, the chart shows that pilot applications do exist, and there is ongoing gradual experimentation with incremental solutions, some of which stem from long-standing documented experiences.
We thus point out, as a preliminary result, that the presence of a significant number of papers about case studies and a lower number of papers about methodologies signals an intermediate maturity level: it suggests that adoption is still in an empirical experimentation phase, with no consolidated design practice or established best practices to be used as references, but the extent of the attempts is non-negligible. Moreover, as the initial choice of the two indexing solutions focuses the analysis on a subset of what is commonly considered scientific or technical research literature, this prevalence further suggests that exploration of these solutions is ongoing, starting from applications and thus with a prevalence of bottom-up approaches, rather than from a top-down point of view, which would instead require a consolidated methodological framework and a more mature phase of penetration of both the concepts and the tools proper to AI and XAI. One may conclude that there is a will to harvest the benefits of these evolutions, but there is, on one side, a cautious behavior typical of the background of industrial research and, on the other side, the absence of strong guidance from a clear understanding of potential and pitfalls, as well as the lack of clear design methodologies.
This scenario highlights an urgent need for the process engineering field to enhance its collaboration with the AI community, aiming to speed up the exchange of knowledge between these two domains. Numerous recent process engineering conferences have underlined this need, suggesting that stronger cooperation would lead to a more effective allocation of resources in both academic research and industrial applications.
Next, to better understand how AI is exploited in the common practice of process engineering and to answer RQ.2, papers have been classified (first employing keywords, then with a final refinement and reclassification after examining the contents, which often did not confirm the fitness of the chosen keywords) according to one of six uses: Control, Design, Fault Diagnosis, Modeling, Optimization, and Prediction.
Figure 7 and Table 2 illustrate the distribution and counting of significant papers across various application categories, respectively. The most frequent application is fault diagnosis, which is a ubiquitous issue across all engineering fields. This prevalence allows leveraging existing experiences and methodologies with minimal translational effort. Furthermore, fault diagnosis is a common practice, enabling a straightforward comparison between innovative AI-based approaches and well-established techniques in the same cases. This cross-validation greatly benefits from the pre-existing process engineering know-how already available to researchers and practitioners.
The second most frequent applications are modeling and prediction. These categories are closely related, as predictive modeling and the extraction of synthetic descriptions from datasets are typical uses of AI techniques, representing two sides of the same coin. Data collection is necessary for the monitoring and control of process plants, and in some cases, it is mandated by regulations. As a result, datasets are generally readily available and can be easily obtained from the regular operation of process plants or can be specifically collected to establish new operational practices. In this context, AI enables better exploitation of collected data and, in some cases, allows real-time analysis compared to traditional computational methods. The application of AI in design spans a broad range of possibilities, such as the exploration of design parameter space or its structure. The high frequency of studies focused on modeling and prediction can be interpreted as an indicator of ongoing efforts to shift toward an AI-based approach in design. This exploratory phase signals a gradual transition toward more systematic AI integration within the design process, reflecting an active area of research and application.
The limited number of papers on optimization and control may indicate that these areas often require more rigorous validation before implementation in industrial settings. These domains demand not only robust AI models but also stringent testing to ensure that AI-driven solutions can operate safely and reliably within the constraints of existing systems. As the AI and process engineering communities continue to collaborate, the exploration of AI applications in optimization and control is expected to grow, particularly as the demand for efficiency, sustainability, and agility in the process industries continues to rise.
This finding also seems to corroborate the preliminary result, moving the balance toward a lower level of maturity, as the dominant category is connected to engineering in general and to off-line, generic activities, rather than contributing in-the-loop to process operations or impacting field-specific aspects.
Concerning the RQ.3 and RQ.3.1 research questions, another classification of the papers (including all three categories) may be made according to the family of techniques: Machine Learning or Deep Learning. The analysis reveals a slight prevalence of DL, with 56 papers out of 100 (56%): this may be explained by the recent impulse given by technological evolution, which favors the experimentation of popular solutions from other application fields, countered by the resistance of consolidated solutions, such as ML, that require fewer resources and smaller investments. A few papers report both ML and DL methods, aiming to compare their results. The exploratory nature of the current phase that emerged from this SLR justifies this tension between the two approaches. A deeper analysis of the situation is presented in Table 3 and Table 4 for the two approaches, showing the types of algorithms the authors implement. For the sake of brevity, a full description of all the methodologies in Table 3 and Table 4 is not reported here, but references have been provided for each of them.
Regarding the fourth research question (RQ.4), Figure 8 presents an analysis of the selected papers from the perspective of the nature of the datasets employed for the application of AI. This dataset characteristic serves as a critical proxy for assessing the current level of effort and maturity in this domain. The results can be further categorized into three main types: data derived from real-world industrial datasets, which originate from direct investigations and investments in operational plants; lab-generated data, produced under controlled conditions with limited investments, including both experimental and simulated data; and data sourced from third-party entities, such as literature or aggregated datasets, often reflecting early-stage studies with low investment levels, potentially indicative of skepticism, limited interest in AI-based solutions, or a lack of data. New models for data generation are emerging that use, among others, Generative Adversarial Networks (GANs) and could help solve data scarcity problems, such as the new technique described in [173].
Approximately 40% of the papers analyzed rely on industrial data, while the remaining papers are distributed among the other dataset categories. When the percentages for literature and aggregated data are combined, their prevalence corresponds to the dominance of methodological papers observed in Figure 7, aligning with the interpretation that the adoption of AI remains in its early stages. This is again a sign of a low maturity level rather than of cautiousness; however, in our opinion, the significant percentage of datasets consisting of real industrial data (to which some literature data may be assimilated, being industrial data not produced autonomously) confirms effective efforts and interest rather than an attempt to ride the wave of public funding.
In fact, a very interesting result in support of this interpretation also emerges from RQ.5: in most cases, AI plays a major role in the context presented in the papers (95 out of 100 papers), while a minority of papers report a support role in processes. This result documents actual trust in AI and, consequently, a high degree of openness that confirms the call for action and collaboration with the AI community, also considering that a support role for AI is not only a more prudential strategy but also an easy reuse of existing experiences from other application fields. This also supports the hypothesis of a lack of clear vision on what is actually possible and reliable in the exploitation of AI in a context in which safety and quality requirements are of paramount importance: reuse is possibly a lever to obtain a first approach and to experiment in similar conditions, but it does not provide warranties, nor may it be assumed as a methodological foundation for novel initiatives or top-down processes.
A relevant and unexpected point emerged from RQ.6, about the side-effects generated by the use of XAI, as shown in Figure 9. Indeed, more than half of the works address transparency and explainability issues, while almost one in four focuses on robustness. This result reveals a very important aspect of the application of artificial intelligence to process engineering: this is a highly regulated sector with a strict commitment to safety and consolidated practices of risk assessment and process and quality control, which involve dedicated teams of specialists. As a consequence, most works focus on explaining the results of the AI model and verifying its robustness, while aspects related to privacy and data governance, safety, and reproducibility are far less explored. In other words, it appears that, differently from other sectors, AI tools are used to assist process engineers rather than to substitute for them in process operation, control, or maintenance. As a preliminary result, this reveals a comparatively mature understanding, at an abstract level, of the potential of XAI as a support for crucial aspects of process engineering, namely accountability and the control of single processes and of operations in general, rather than a race for early adoption under the thrust of trends (or, in commercial applications, of marketing stimuli), as seen in other fields. This is a sign of maturity in adoption overall, as it suggests a general consciousness of the advantages of XAI over plain AI and that a precise direction is being taken, consistent with the nature of XAI itself.
This is confirmed, in our opinion, by the findings related to RQ.7, about the kind of XAI technique adopted: considering intrinsically explainable methodologies for results explainability (27 out of 100 papers), hybrid modeling (16 out of 100 papers) is clearly the most common choice, ahead of purely intrinsically explainable models (11 out of 100 papers) such as linear models, decision trees, and others (Figure 10). This outcome aligns with the inherent complexity of modeling process engineering phenomena, which are characterized by non-linear behaviors that are difficult to capture through linearization. The hybrid models used in the analyzed studies often rely on algorithms such as ANNs to describe highly non-linear processes, combined with physical principles (e.g., PINNs), thus ensuring a balance between accuracy and explainability. However, this approach poses challenges in generalizing the models, as they are typically tailored to specific applications or processes. On the other hand, post-hoc explainability is the most commonly used stage (73 out of 100 papers), with different kinds of implemented XAI tools, as reported in Table 5.
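As a hedged illustration of the hybrid approach, the following sketch trains a small network whose loss combines a data-fitting term with the residual of an assumed first-order physical law, dC/dt = -kC, in the spirit of PINNs. The rate constant, sampling points, and data are illustrative assumptions, not a method taken from any of the reviewed papers.

```python
# Minimal PINN-style hybrid sketch: fit sparse data while penalizing the
# residual of an assumed first-order process law dC/dt = -k*C.
import torch
import torch.nn as nn

k = 0.5                                    # assumed known rate constant
net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(),
                    nn.Linear(32, 1))      # approximates C(t)

t_data = torch.tensor([[0.0], [1.0], [2.0]])      # sparse "measurements"
c_data = torch.exp(-k * t_data)                   # stand-in lab data
t_col = torch.linspace(0, 4, 50).reshape(-1, 1)   # collocation points
t_col.requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(3000):
    # data loss: fit the few available measurements
    loss_data = ((net(t_data) - c_data) ** 2).mean()
    # physics loss: residual of dC/dt + k*C = 0 at the collocation points
    c = net(t_col)
    dc_dt = torch.autograd.grad(c, t_col, torch.ones_like(c),
                                create_graph=True)[0]
    loss_phys = ((dc_dt + k * c) ** 2).mean()
    loss = loss_data + loss_phys
    opt.zero_grad(); loss.backward(); opt.step()
```

The physics residual constrains the network in regions where no data exist, which is precisely what makes the hybrid model more explainable than a purely data-driven one, at the price of being tied to the specific process law embedded in the loss.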
Regarding XAI tools, Table 5 shows both the presence of prevalent approaches (i.e., CAM, LIME, and SHAP) and the richness of research directions, confirming the general finding of this SLR that the field is in an early phase. For the sake of brevity, a full description of all methodologies in Table 5 is not reported here, but references have been provided for each of them.
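As an indicative example of the most prevalent tool in Table 5, the snippet below applies SHAP post hoc to a tree ensemble trained on synthetic data; the feature/target construction is an assumption made purely for illustration.

```python
# Minimal post-hoc explanation sketch with SHAP on a tree ensemble
# (illustrative data only; feature semantics are assumptions).
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                          # assumed process variables
y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, 200)  # stand-in quality target

model = RandomForestRegressor(n_estimators=100).fit(X, y)

explainer = shap.TreeExplainer(model)    # model-specific (tree) explainer
shap_values = explainer.shap_values(X)   # (200, 4) per-sample attributions
print(np.abs(shap_values).mean(axis=0))  # global feature importance ranking
```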
The early phase of the field is also confirmed by an analysis of the presence of performance-oriented evaluations of XAI solutions, answering research questions RQ.8, RQ.9, and RQ.10. Regarding RQ.8, only 55% (55 out of 100) of the analyzed papers assess the performance of AI techniques combined with XAI techniques in process engineering applications; this matters because, when generalization is considered, explainability and accuracy are known to involve a typical trade-off. An even smaller share, only 38% (38 out of 100 papers), assesses the computational costs involved in implementing these techniques, often justifying high costs by the gain in system explainability and transparency (RQ.9). Finally, only 29% (29 out of 100) of the papers address the problem of improving the current performance of XAI algorithms (RQ.10).
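The kind of computational-cost assessment reported by this minority of papers can be as simple as timing the explanation step against bare model inference, as in the illustrative sketch below (model, data, and sizes are assumptions):

```python
# Illustrative cost check: compare explanation runtime with plain inference.
import time
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 500)
model = RandomForestRegressor(n_estimators=200).fit(X, y)

t0 = time.perf_counter(); model.predict(X)
t_pred = time.perf_counter() - t0

explainer = shap.TreeExplainer(model)
t0 = time.perf_counter(); explainer.shap_values(X)
t_xai = time.perf_counter() - t0

print(f"prediction: {t_pred:.3f}s, explanation: {t_xai:.3f}s "
      f"(overhead x{t_xai / max(t_pred, 1e-9):.1f})")
```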
In summary, we argue that as XAI techniques continue to evolve, their application within process engineering can further enhance transparency and reliability, as demonstrated by post-hoc methods such as SHAP and LIME, which provide insight into model decisions.
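For completeness, a minimal local explanation with LIME, the second most prevalent tool in Table 5, might look as follows; the feature names and data are hypothetical.

```python
# Minimal local post-hoc explanation sketch with LIME for one operating point
# (illustrative data; feature names are assumptions).
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
feature_names = ["temperature", "pressure", "flow_rate", "ph"]  # assumed
X = rng.normal(size=(300, 4))
y = 1.5 * X[:, 0] + X[:, 3] ** 2 + rng.normal(0, 0.1, 300)
model = GradientBoostingRegressor().fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 mode="regression")
exp = explainer.explain_instance(X[0], model.predict, num_features=4)
print(exp.as_list())  # local, per-feature contributions around X[0]
```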

10. Conclusions

This SLR analyzed the current state of research on AI and XAI applied to process engineering, with the intent of clarifying popular claims in the process engineering scientific community and of verifying to what extent the excitement and enthusiasm (which are tangible in conference debates and discussions) have turned into research efforts and results. Through a joint contribution from the perspectives of a computer science research team and a process engineering one, we found that the literature documents an early stage of activity, with a well-defined orientation at this stage: a quest for explainability as a decision-support resource, similar to what happens in other highly relevant fields, e.g., medical diagnostics. This dominating feature of AI applications in process engineering is unexpected: the majority of scientific papers in this field are devoted to understanding the phenomena underpinning a specific process and to ensuring its robust modeling, which suggested that the main dimensions explored in artificial intelligence models would be reproducibility and robustness. Instead, this work clearly shows that artificial intelligence is used for fault diagnosis and process design, and the main aspect researchers aim to explore is the explainability of the models rather than their robustness or reliability. The AI approach starts from a deep knowledge of the physical domain, and mathematical models are not meant to substitute for, but to support, the comprehension and control of the process.
Consequently, the overall outcome of this study indicates that the use of AI and XAI in process engineering is currently at an intermediate maturity level: the literature clearly shows an actual yet pioneering interest, documented by the prevalence of industrial data and the proper use of XAI, even if the limited focus on performance evaluation and engineering (that is, on costs) and the relative scarcity of consolidated methodologies testify that efforts to correctly engineer AI- and XAI-based solutions for process engineering are still limited and preliminary. We believe this proves an active and wisely oriented attention to this topic among the vanguard of the process engineering community, and that in a few years practices will be consolidated and integrated, especially given the rising level of awareness of the need for closer collaboration with the computer science, AI, and XAI communities.
This brings up a further and final conclusion, which turns on the lights in the room and makes the elephant visible: a twofold strategy seems to be needed, one that, in the short term, trains practitioners working in industrial engineering so that they can safely pave the way to a new approach to design, and, in the long term, educates a new generation of industrial engineers who can natively see the new opportunities as integrated with traditional doctrine and design practices. What does the bridge across this gap look like? If traditional AI courses could do the trick, the gap confirmed by our study would be decidedly smaller, or at least small enough to be crossed with a jump between disciplines by smart practitioners well supported by their organizations. A cultural change must therefore be stimulated, which requires, as a prerequisite, distilling the principles of AI and XAI from a theory that is still in full evolution, and transferring them into the categories that inform the mindset of industrial engineering practice and doctrine. Is technology transfer sufficient to produce this evolution? Which perspective can enable successful training and teaching of such integrated know-how?

Author Contributions

Conceptualization, L.P.D.B., L.C., M.I. and F.D.N.; methodology, L.C. and M.I.; software, L.P.D.B. and L.C.; validation, L.P.D.B., L.C., M.I. and F.D.N.; formal analysis, L.P.D.B.; investigation, L.P.D.B., L.C., M.I., F.D.N. and M.M.; resources, L.P.D.B.; data curation, L.P.D.B.; writing—original draft preparation, L.P.D.B.; writing—review and editing, L.P.D.B., L.C., M.I., F.D.N. and M.M.; visualization, L.P.D.B.; supervision, M.I., L.C. and F.D.N.; project administration, M.I. and L.C.; funding acquisition, M.I. All authors have read and agreed to the published version of the manuscript.

Funding

This work is part of the research activity developed within Industrial Ph.D. Programme PON 2014–2020 and of the research activities developed within the project PON “Ricerca e Innovazione” 2014–2020, action IV.6 “Contratti di ricerca su tematiche Green”, issued by MUR.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Boje, C.; Guerriero, A.; Kubicki, S.; Rezgui, Y. Towards a semantic Construction Digital Twin: Directions for future research. Autom. Constr. 2020, 114, 103179. [Google Scholar] [CrossRef]
  2. Bilal, M.; Oyedele, L.O.; Qadir, J.; Munir, K.; Ajayi, S.O.; Akinade, O.O.; Owolabi, H.A.; Alaka, H.A.; Pasha, M. Big Data in the construction industry: A review of present status, opportunities, and future trends. Adv. Eng. Inform. 2016, 30, 500–521. [Google Scholar] [CrossRef]
  3. Simpson, T.W. Product platform design and customization: Status and promise. Artif. Intell. Eng. Des. Anal. Manuf. AIEDAM 2004, 18, 3–20. [Google Scholar] [CrossRef]
  4. Shen, C. A Transdisciplinary Review of Deep Learning Research and Its Relevance for Water Resources Scientists. Water Resour. Res. 2018, 54, 8558–8593. [Google Scholar] [CrossRef]
  5. Qadri, Y.A.; Nauman, A.; Zikria, Y.B.; Vasilakos, A.V.; Kim, S.W. The Future of Healthcare Internet of Things: A Survey of Emerging Technologies. IEEE Commun. Surv. Tutor. 2020, 22, 1121–1167. [Google Scholar] [CrossRef]
  6. Sircar, A.; Yadav, K.; Rayavarapu, K.; Bist, N.; Oza, H. Application of machine learning and artificial intelligence in oil and gas industry. Pet. Res. 2021, 6, 379–391. [Google Scholar] [CrossRef]
  7. Rajulapati, L.; Chinta, S.; Shyamala, B.; Rengaswamy, R. Integration of machine learning and first principles models. AIChE J. 2022, 68, e17715. [Google Scholar] [CrossRef]
  8. Faraji Niri, M.; Aslansefat, K.; Haghi, S.; Hashemian, M.; Daub, R.; Marco, J. A Review of the Applications of Explainable Machine Learning for Lithium-Ion Batteries: From Production to State and Performance Estimation. Energies 2023, 16, 6360. [Google Scholar] [CrossRef]
  9. Nandipati, M.; Fatoki, O.; Desai, S. Bridging Nanomanufacturing and Artificial Intelligence—A Comprehensive Review. Materials 2024, 17, 1621. [Google Scholar] [CrossRef]
  10. Gani, R. Chemical product design: Challenges and opportunities. Comput. Chem. Eng. 2004, 28, 2441–2457. [Google Scholar] [CrossRef]
  11. Karner, S.; Anne Urbanetz, N. The impact of electrostatic charge in pharmaceutical powders with specific focus on inhalation-powders. J. Aerosol Sci. 2011, 42, 428–445. [Google Scholar] [CrossRef]
  12. Löwe, H.; Ehrfeld, W. State-of-the-art in microreaction technology: Concepts, manufacturing and applications. Electrochim. Acta 1999, 44, 3679–3689. [Google Scholar] [CrossRef]
  13. Xie, R.; Chu, L.Y.; Deng, J.G. Membranes and membrane processes for chiral resolution. Chem. Soc. Rev. 2008, 37, 1243–1263. [Google Scholar] [CrossRef] [PubMed]
  14. Plumb, K. Continuous processing in the pharmaceutical industry: Changing the mind set. Chem. Eng. Res. Des. 2005, 83, 730–738. [Google Scholar] [CrossRef]
  15. Powell, D.; Magnanini, M.C.; Colledani, M.; Myklebust, O. Advancing zero defect manufacturing: A state-of-the-art perspective and future research directions. Comput. Ind. 2022, 136, 103596. [Google Scholar] [CrossRef]
  16. Sadhukhan, J.; Dugmore, T.I.J.; Matharu, A.; Martinez-Hernandez, E.; Aburto, J.; Rahman, P.K.S.M.; Lynch, J. Perspectives on “game changer” global challenges for sustainable 21st century: Plant-based diet, unavoidable food waste biorefining, and circular economy. Sustainability 2020, 12, 1976. [Google Scholar] [CrossRef]
  17. Halasz, L.; Povoden, G.; Narodoslawsky, M. Sustainable processes synthesis for renewable resources. Resour. Conserv. Recycl. 2005, 44, 293–307. [Google Scholar] [CrossRef]
  18. Ioannou, I.; D’Angelo, S.C.; Galán-Martín, A.; Pozo, C.; Pérez-Ramírez, J.; Guillén-Gosálbez, G. Process modelling and life cycle assessment coupled with experimental work to shape the future sustainable production of chemicals and fuels. React. Chem. Eng. 2021, 6, 1179–1194. [Google Scholar] [CrossRef]
  19. Guillén-Gosálbez, G.; You, F.; Galán-Martín, A.; Pozo, C.; Grossmann, I.E. Process systems engineering thinking and tools applied to sustainability problems: Current landscape and future opportunities. Curr. Opin. Chem. Eng. 2019, 26, 170–179. [Google Scholar] [CrossRef]
  20. de Faria, D.R.G.; de Medeiros, J.L.; Araújo, O.d.Q.F. Screening biorefinery pathways to biodiesel, green-diesel and propylene-glycol: A hierarchical sustainability assessment of process. J. Environ. Manag. 2021, 300, 113772. [Google Scholar] [CrossRef]
  21. Ghobakhloo, M. Industry 4.0, digitization, and opportunities for sustainability. J. Clean. Prod. 2020, 252, 119869. [Google Scholar] [CrossRef]
  22. Negri, E.; Fumagalli, L.; Macchi, M. A Review of the Roles of Digital Twin in CPS-based Production Systems. Procedia Manuf. 2017, 11, 939–948. [Google Scholar] [CrossRef]
  23. Frank, A.G.; Dalenogare, L.S.; Ayala, N.F. Industry 4.0 technologies: Implementation patterns in manufacturing companies. Int. J. Prod. Econ. 2019, 210, 15–26. [Google Scholar] [CrossRef]
  24. Hofmann, E.; Rüsch, M. Industry 4.0 and the current status as well as future prospects on logistics. Comput. Ind. 2017, 89, 23–34. [Google Scholar] [CrossRef]
  25. Vlachos, D.; Mhadeshwar, A.; Kaisare, N. Hierarchical multiscale model-based design of experiments, catalysts, and reactors for fuel processing. Comput. Chem. Eng. 2006, 30, 1712–1724. [Google Scholar] [CrossRef]
  26. Li, J.; Ge, W.; Wang, W.; Yang, N.; Liu, X.; Wang, L.; He, X.; Wang, X.; Wang, J.; Kwauk, M. From Multiscale Modeling to Meso-Science: A Chemical Engineering Perspective; Springer: Berlin/Heidelberg, Germany, 2013; pp. 1–484. [Google Scholar] [CrossRef]
  27. Chen, X.; Wang, Q.; Liu, Z.; Han, Z. A novel approach for dimensionality reduction of high-dimensional stochastic dynamical systems using symbolic regression. Mech. Syst. Signal Process. 2024, 214, 111373. [Google Scholar] [CrossRef]
  28. Loiseau, J.C. Data-driven modeling of the chaotic thermal convection in an annular thermosyphon. Theor. Comput. Fluid Dyn. 2020, 34, 339–365. [Google Scholar] [CrossRef]
  29. Wu, T.; Gao, X.; An, F.; Kurths, J. The complex dynamics of correlations within chaotic systems. Chaos Solitons Fractals 2023, 167, 113052. [Google Scholar] [CrossRef]
  30. Wang, G.; Nixon, M.; Boudreaux, M. Toward Cloud-Assisted Industrial IoT Platform for Large-Scale Continuous Condition Monitoring. Proc. IEEE 2019, 107, 1193–1205. [Google Scholar] [CrossRef]
  31. Melo, A.; Câmara, M.M.; Pinto, J.C. Data-Driven Process Monitoring and Fault Diagnosis: A Comprehensive Survey. Processes 2024, 12, 251. [Google Scholar] [CrossRef]
  32. Shen, T.; Li, B. Digital twins in additive manufacturing: A state-of-the-art review. Int. J. Adv. Manuf. Technol. 2024, 131, 63–92. [Google Scholar] [CrossRef]
  33. Perera, Y.S.; Ratnaweera, D.; Dasanayaka, C.H.; Abeykoon, C. The role of artificial intelligence-driven soft sensors in advanced sustainable process industries: A critical review. Eng. Appl. Artif. Intell. 2023, 121, 105988. [Google Scholar] [CrossRef]
  34. Lewin, D.R.; Lachman-Shalem, S.; Grosman, B. The role of process system engineering (PSE) in integrated circuit (IC) manufacturing. Control Eng. Pract. 2007, 15, 793–802. [Google Scholar] [CrossRef]
  35. Gunning, D.; Aha, D.W. DARPA’s Explainable Artificial Intelligence Program. AI Mag. 2019, 40, 44–58. [Google Scholar] [CrossRef]
  36. Barredo Arrieta, A.; Díaz-Rodríguez, N.; Del Ser, J.; Bennetot, A.; Tabik, S.; Barbado, A.; Garcia, S.; Gil-Lopez, S.; Molina, D.; Benjamins, R.; et al. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Inf. Fusion 2020, 58, 82–115. [Google Scholar] [CrossRef]
  37. European Commission. Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence. Eur. Comm. 2019, 6, 1–39. [Google Scholar]
  38. Adadi, A.; Berrada, M. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI). IEEE Access 2018, 6, 52138–52160. [Google Scholar] [CrossRef]
  39. Izonin, I.; Tkachenko, R.; Yemets, K.; Havryliuk, M. An interpretable ensemble structure with a non-iterative training algorithm to improve the predictive accuracy of healthcare data analysis. Sci. Rep. 2024, 14, 12947. [Google Scholar] [CrossRef]
  40. Izonin, I.; Tkachenko, R.; Kryvinska, N.; Tkachenko, P.; Greguš ml, M. Multiple linear regression based on coefficients identification using non-iterative SGTM neural-like structure. In Proceedings of the International Work-Conference on Artificial Neural Networks, Munich, Germany, 17–19 September 2019; Springer: Cham, Switzerland, 2019; pp. 467–479. [Google Scholar]
  41. Guidotti, R.; Monreale, A.; Ruggieri, S.; Turini, F.; Giannotti, F.; Pedreschi, D. A Survey of Methods for Explaining Black Box Models. ACM Comput. Surv. 2018, 51, 1–42. [Google Scholar] [CrossRef]
  42. Carvalho, D.V.; Pereira, E.M.; Cardoso, J.S. Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics 2019, 8, 832. [Google Scholar] [CrossRef]
  43. Gilpin, L.H.; Bau, D.; Yuan, B.Z.; Bajwa, A.; Specter, M.; Kagal, L. Explaining Explanations: An Overview of Interpretability of Machine Learning. In Proceedings of the 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), Turin, Italy, 1–3 October 2018; pp. 80–89. [Google Scholar] [CrossRef]
  44. Bodria, F.; Giannotti, F.; Guidotti, R.; Naretto, F.; Pedreschi, D.; Rinzivillo, S. Benchmarking and survey of explanation methods for black box models. Data Min. Knowl. Discov. 2023, 37, 1719–1778. [Google Scholar] [CrossRef]
  45. Tomsett, R.; Preece, A.; Braines, D.; Cerutti, F.; Chakraborty, S.; Srivastava, M.; Pearson, G.; Kaplan, L. Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI. Patterns 2020, 1, 100049. [Google Scholar] [CrossRef] [PubMed]
  46. Vyas, M.; Thakur, S.; Riyaz, B.; Bansal, K.K.; Tomar, B.; Mishra, V. Artificial intelligence: The beginning of a new era in pharmacy profession. Asian J. Pharm. 2018, 12, 72–76. [Google Scholar]
  47. Krishnan, S.; Athavale, Y. Trends in biomedical signal feature extraction. Biomed. Signal Process. Control 2018, 43, 41–63. [Google Scholar] [CrossRef]
  48. Emaminejad, N.; Akhavian, R. Trustworthy AI and robotics: Implications for the AEC industry. Autom. Constr. 2022, 139, 104298. [Google Scholar] [CrossRef]
  49. Joshi, R.P.; Kumar, N. Artificial intelligence for autonomous molecular design: A perspective. Molecules 2021, 26, 6761. [Google Scholar] [CrossRef]
  50. Zou, X.; Liu, W.; Huo, Z.; Wang, S.; Chen, Z.; Xin, C.; Bai, Y.; Liang, Z.; Gong, Y.; Qian, Y.; et al. Current Status and Prospects of Research on Sensor Fault Diagnosis of Agricultural Internet of Things. Sensors 2023, 23, 2528. [Google Scholar] [CrossRef]
  51. Khosravani, M.R.; Reinicke, T. 3D-printed sensors: Current progress and future challenges. Sens. Actuators Phys. 2020, 305, 111916. [Google Scholar] [CrossRef]
  52. Li, J.; King, S.; Jennions, I. Intelligent Fault Diagnosis of an Aircraft Fuel System Using Machine Learning—A Literature Review. Machines 2023, 11, 481. [Google Scholar] [CrossRef]
  53. Kitchenham, B. Procedures for Undertaking Systematic Reviews. In Joint Technical Report TR/SE0401 and 0400011T.1; Computer Science Department, Keele University and National ICT Australia Ltd.: Eveleigh, Australia, 2004. [Google Scholar]
  54. Campanile, L.; Gribaudo, M.; Iacono, M.; Marulli, F.; Mastroianni, M. Computer network simulation with ns-3: A systematic literature review. Electronics 2020, 9, 272. [Google Scholar] [CrossRef]
  55. Elsevier. Scopus. 2024. Available online: https://www.elsevier.com/products/scopus (accessed on 31 August 2024).
  56. Clarivate Analytics. Web of Science. 2024. Available online: https://clarivate.com/ (accessed on 31 August 2024).
  57. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G.; Group, T.P. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med. 2009, 6, e1000097. [Google Scholar] [CrossRef] [PubMed]
  58. Tapeh, A.T.G.; Naser, M.Z. Discovering Graphical Heuristics on Fire-Induced Spalling of Concrete Through Explainable Artificial Intelligence. Fire Technol. 2022, 58, 2871–2898. [Google Scholar] [CrossRef]
  59. Jacinto, M.V.; Doria Neto, A.D.; de Castro, D.L.; Bezerra, F.H. Karstified zone interpretation using deep learning algorithms: Convolutional neural networks applications and model interpretability with explainable AI. Comput. Geosci. 2023, 171, 105281. [Google Scholar] [CrossRef]
  60. Pan, Y.; Stark, R. An interpretable machine learning approach for engineering change management decision support in automotive industry. Comput. Ind. 2022, 138, 103633. [Google Scholar] [CrossRef]
  61. Masood, U.; Farooq, H.; Imran, A.; Abu-Dayya, A. Interpretable AI-Based Large-Scale 3D Pathloss Prediction Model for Enabling Emerging Self-Driving Networks. IEEE Trans. Mob. Comput. 2023, 22, 3967–3984. [Google Scholar] [CrossRef]
  62. Aslam, N.; Khan, I.U.; Alansari, A.; Alrammah, M.; Alghwairy, A.; Alqahtani, R.; Alqahtani, R.; Almushikes, M.; Hashim, M.A. Anomaly Detection Using Explainable Random Forest for the Prediction of Undesirable Events in Oil Wells. Appl. Comput. Intell. Soft Comput. 2022, 2022, 1558381. [Google Scholar] [CrossRef]
  63. Salem, H.; El-Hasnony, I.M.; Kabeel, A.; El-Said, E.M.; Elzeki, O.M. Deep Learning model and Classification Explainability of Renewable energy-driven Membrane Desalination System using Evaporative Cooler. Alex. Eng. J. 2022, 61, 10007–10024. [Google Scholar] [CrossRef]
  64. Wang, T.; Reiffsteck, P.; Chevalier, C.; Chen, C.W.; Schmidt, F. An interpretable model for bridge scour risk assessment using explainable artificial intelligence and engineers’ expertise. Struct. Infrastruct. Eng. 2023, 1–13. [Google Scholar] [CrossRef]
  65. Mishra, A.; Jatti, V.S.; Sefene, E.M.; Paliwal, S. Explainable Artificial Intelligence (XAI) and Supervised Machine Learning-based Algorithms for Prediction of Surface Roughness of Additively Manufactured Polylactic Acid (PLA) Specimens. Appl. Mech. 2023, 4, 668–698. [Google Scholar] [CrossRef]
  66. Ghosh, S.; Kamal, M.S.; Chowdhury, L.; Neogi, B.; Dey, N.; Sherratt, R.S. Explainable AI to understand study interest of engineering students. Educ. Inf. Technol. 2023, 29, 4657–4672. [Google Scholar] [CrossRef]
  67. Nguyen, D.D.; Tanveer, M.; Mai, H.N.; Pham, T.Q.D.; Khan, H.; Park, C.W.; Kim, G.M. Guiding the optimization of membraneless microfluidic fuel cells via explainable artificial intelligence: Comparative analyses of multiple machine learning models and investigation of key operating parameters. Fuel 2023, 349, 128742. [Google Scholar] [CrossRef]
  68. Cardellicchio, A.; Ruggieri, S.; Nettis, A.; Renò, V.; Uva, G. Physical interpretation of machine learning-based recognition of defects for the risk management of existing bridge heritage. Eng. Fail. Anal. 2023, 149, 107237. [Google Scholar] [CrossRef]
  69. Lee, Y.; Lee, G.; Choi, H.; Park, H.; Ko, M.J. Artificial intelligence-assisted auto-optical inspection toward the stain detection of an organic light-emitting diode panel at the backplane fabrication step. Displays 2023, 79, 102478. [Google Scholar] [CrossRef]
  70. Fayaz, J.; Torres-Rodas, P.; Medalla, M.; Naeim, F. Assessment of ground motion amplitude scaling using interpretable Gaussian process regression: Application to steel moment frames. Earthq. Eng. Struct. Dyn. 2023, 52, 2339–2359. [Google Scholar] [CrossRef]
  71. Oh, D.W.; Kong, S.M.; Kim, S.B.; Lee, Y.J. Prediction and Analysis of Axial Stress of Piles for Piled Raft Due to Adjacent Tunneling Using Explainable AI. Appl. Sci. 2023, 13, 6074. [Google Scholar] [CrossRef]
  72. Dachowicz, A.; Mall, K.; Balasubramani, P.; Maheshwari, A.; Panchal, J.H.; Delaurentis, D.; Raz, A. Mission Engineering and Design using Real-Time Strategy Games: An Explainable-AI Approach. J. Mech. Des. 2021, 144, 021710. [Google Scholar] [CrossRef]
  73. Karandin, O.; Ayoub, O.; Musumeci, F.; Yusuke, H.; Awaji, Y.; Tornatore, M. If Not Here, There. Explaining Machine Learning Models for Fault Localization in Optical Networks. In Proceedings of the 2022 International Conference on Optical Network Design and Modeling (ONDM), Warsaw, Poland, 16–19 May 2022; IEEE: New York, NY, USA, 2022. [Google Scholar] [CrossRef]
  74. Conti, A.; Campagnolo, L.; Diciotti, S.; Pietroiusti, A.; Toschi, N. Predicting the cytotoxicity of nanomaterials through explainable, extreme gradient boosting. Nanotoxicology 2022, 16, 844–856. [Google Scholar] [CrossRef]
  75. Obermair, C.; Cartier-Michaud, T.; Apollonio, A.; Millar, W.; Felsberger, L.; Fischl, L.; Bovbjerg, H.S.; Wollmann, D.; Wuensch, W.; Catalan-Lasheras, N.; et al. Explainable machine learning for breakdown prediction in high gradient rf cavities. Phys. Rev. Accel. Beams 2022, 25, 104601. [Google Scholar] [CrossRef]
  76. Wehner, C.; Powlesland, F.; Altakrouri, B.; Schmid, U. Explainable Online Lane Change Predictions on a Digital Twin with a Layer Normalized LSTM and Layer-wise Relevance Propagation. In Advances and Trends in Artificial Intelligence. Theory and Practices in Artificial Intelligence; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 621–632. [Google Scholar] [CrossRef]
  77. Raz, A.K.; Nolan, S.M.; Levin, W.; Mall, K.; Mia, A.; Mockus, L.; Ezra, K.; Williams, K. Test and Evaluation of Reinforcement Learning via Robustness Testing and Explainable AI for High-Speed Aerospace Vehicles. In Proceedings of the 2022 IEEE Aerospace Conference (AERO), Big Sky, MT, USA, 5–12 March 2022; IEEE: New York, NY, USA, 2022; pp. 1–14. [Google Scholar] [CrossRef]
  78. Meas, M.; Machlev, R.; Kose, A.; Tepljakov, A.; Loo, L.; Levron, Y.; Petlenkov, E.; Belikov, J. Explainability and Transparency of Classifiers for Air-Handling Unit Faults Using Explainable Artificial Intelligence (XAI). Sensors 2022, 22, 6338. [Google Scholar] [CrossRef]
  79. Kraus, M.A. Erklärbare domänenspezifische Künstliche Intelligenz im Massiv- und Brückenbau [Explainable domain-specific artificial intelligence in concrete and bridge construction]. Beton- und Stahlbetonbau 2022, 117, 795–804. [Google Scholar] [CrossRef]
  80. Lundberg, H.; Mowla, N.I.; Abedin, S.F.; Thar, K.; Mahmood, A.; Gidlund, M.; Raza, S. Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System Using eXplainable Artificial Intelligence (XAI). IEEE Access 2022, 10, 102831–102841. [Google Scholar] [CrossRef]
  81. Narteni, S.; Orani, V.; Vaccari, I.; Cambiaso, E.; Mongelli, M. Sensitivity of Logic Learning Machine for Reliability in Safety-Critical Systems. IEEE Intell. Syst. 2022, 37, 66–74. [Google Scholar] [CrossRef]
  82. Baptista, M.L.; Goebel, K.; Henriques, E.M. Relation between prognostics predictor evaluation metrics and local interpretability SHAP values. Artif. Intell. 2022, 306, 103667. [Google Scholar] [CrossRef]
  83. Brusa, E.; Cibrario, L.; Delprete, C.; Di Maggio, L.G. Explainable AI for Machine Fault Diagnosis: Understanding Features’ Contribution in Machine Learning Models for Industrial Condition Monitoring. Appl. Sci. 2023, 13, 2038. [Google Scholar] [CrossRef]
  84. Jin, P.; Tian, J.; Zhi, D.; Wen, X.; Zhang, M. Trainify: A CEGAR-Driven Training and Verification Framework for Safe Deep Reinforcement Learning. In Computer Aided Verification; Springer International Publishing: Berlin/Heidelberg, Germany, 2022; pp. 193–218. [Google Scholar] [CrossRef]
  85. Hines, B.; Talbert, D.; Anton, S. Improving Trust via XAI and Pre-Processing for Machine Learning of Complex Biomedical Datasets. Int. Flairs Conf. Proc. 2022, 35. [Google Scholar] [CrossRef]
  86. Bacciu, D.; Numeroso, D. Explaining Deep Graph Networks via Input Perturbation. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 10334–10345. [Google Scholar] [CrossRef]
  87. Neves, L.; Martinez, J.; Longo, L.; Roberto, G.; Tosta, T.; de Faria, P.; Loyola, A.; Cardoso, S.; Silva, A.; do Nascimento, M.; et al. Classification of H&E Images via CNN Models with XAI Approaches, DeepDream Representations and Multiple Classifiers. In Proceedings of the 25th International Conference on Enterprise Information Systems, Prague, Czech Republic, 24–26 April 2023; SCITEPRESS-Science and Technology Publications: Setúbal, Portugal, 2023; pp. 354–364. [Google Scholar] [CrossRef]
  88. Han, Y.; Chang, H. XA-GANomaly: An Explainable Adaptive Semi-Supervised Learning Method for Intrusion Detection Using GANomaly. Comput. Mater. Contin. 2023, 76, 221–237. [Google Scholar] [CrossRef]
  89. Bobek, S.; Kuk, M.; Szelazek, M.; Nalepa, G.J. Enhancing Cluster Analysis With Explainable AI and Multidimensional Cluster Prototypes. IEEE Access 2022, 10, 101556–101574. [Google Scholar] [CrossRef]
  90. Al-Fayoumi, M.; Alhijawi, B.; Abu Al-Haija, Q.; Armoush, R. XAI-PhD: Fortifying Trust of Phishing URL Detection Empowered by Shapley Additive Explanations. Int. J. Online Biomed. Eng. (iJOE) 2024, 20, 80–101. [Google Scholar] [CrossRef]
  91. Yang, C.; Wang, C.; Wu, B.; Zhao, F.; Fan, J.s.; Zhou, L. Settlement estimation during foundation excavation using pattern analysis and explainable AI modeling. Autom. Constr. 2024, 166, 105651. [Google Scholar] [CrossRef]
  92. Groza, A.; Toderean, L.; Muntean, G.A.; Nicoara, S.D. Agents that Argue and Explain Classifications of Retinal Conditions. J. Med. Biol. Eng. 2021, 41, 730–741. [Google Scholar] [CrossRef]
  93. Hanna, B.N.; Trieu, L.L.T.; Son, T.C.; Dinh, N.T. An Application of ASP in Nuclear Engineering: Explaining the Three Mile Island Nuclear Accident Scenario. Theory Pract. Log. Program. 2020, 20, 926–941. [Google Scholar] [CrossRef]
  94. Hamilton, D.; Watkins, L.; Zanlongo, S.; Leeper, C.; Sleight, R.; Silbermann, J.; Kornegay, K. Assuring Autonomous UAS Traffic Management Systems Using Explainable, Fuzzy Logic, Black Box Monitoring. In Proceedings of the 2021 10th International Conference on Information and Automation for Sustainability (ICIAfS), Negombo, Sri Lanka, 11–13 August 2021; IEEE: New York, NY, USA, 2021; Volume 31, pp. 470–476. [Google Scholar] [CrossRef]
  95. Brandsæter, A.; Smefjell, G.; Merwe, K.v.d.; Kamsvåg, V. Assuring Safe Implementation of Decision Support Functionality based on Data-Driven Methods for Ship Navigation. In Proceedings of the 30th European Safety and Reliability Conference and 15th Probabilistic Safety Assessment and Management Conference, ESREL, Venice, Italy, 1–5 November 2020; Research Publishing Services: San Jose, CA, USA, 2020; pp. 637–643. [Google Scholar] [CrossRef]
  96. Sherry, L.; Baldo, J.; Berlin, B. Design of Flight Guidance and Control Systems Using Explainable AI. In Proceedings of the 2021 Integrated Communications Navigation and Surveillance Conference (ICNS), Virtual, 20–22 April 2021; IEEE: New York, NY, USA, 2021; pp. 1–10. [Google Scholar] [CrossRef]
  97. Valdes, J.J.; Tchagang, A.B. Deterministic Numeric Simulation and Surrogate Models with White and Black Machine Learning Methods: A Case Study on Direct Mappings. In Proceedings of the 2020 IEEE Symposium Series on Computational Intelligence (SSCI), Canberra, Australia, 1–4 December 2020; IEEE: New York, NY, USA, 2020. [Google Scholar] [CrossRef]
  98. Weitz, K.; Schiller, D.; Schlagowski, R.; Huber, T.; André, E. “Do you trust me?”: Increasing User-Trust by Integrating Virtual Agents in Explainable AI Interaction Design. In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents, IVA ’19, Paris, France, 2–5 July 2019. [Google Scholar] [CrossRef]
  99. Feng, J.; Lansford, J.L.; Katsoulakis, M.A.; Vlachos, D.G. Explainable and trustworthy artificial intelligence for correctable modeling in chemical sciences. Sci. Adv. 2020, 6, 42. [Google Scholar] [CrossRef]
  100. Thakker, D.; Mishra, B.K.; Abdullatif, A.; Mazumdar, S.; Simpson, S. Explainable Artificial Intelligence for Developing Smart Cities Solutions. Smart Cities 2020, 3, 1353–1382. [Google Scholar] [CrossRef]
  101. Yoo, S.; Kang, N. Explainable artificial intelligence for manufacturing cost estimation and machining feature visualization. Expert Syst. Appl. 2021, 183, 115430. [Google Scholar] [CrossRef]
  102. Sun, Y.; Chockler, H.; Huang, X.; Kroening, D. Explaining Image Classifiers Using Statistical Fault Localization. In Computer Vision—ECCV 2020; Springer International Publishing: Berlin/Heidelberg, Germany, 2020; pp. 391–406. [Google Scholar] [CrossRef]
  103. Bobek, S.; Mozolewski, M.; Nalepa, G.J. Explanation-Driven Model Stacking. In Computational Science–ICCS 2021; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 361–371. [Google Scholar] [CrossRef]
  104. Borg, M.; Bronson, J.; Christensson, L.; Olsson, F.; Lennartsson, O.; Sonnsjo, E.; Ebabi, H.; Karsberg, M. Exploring the Assessment List for Trustworthy AI in the Context of Advanced Driver-Assistance Systems. In Proceedings of the 2021 IEEE/ACM 2nd International Workshop on Ethics in Software Engineering Research and Practice (SEthics), Madrid, Spain, 4 June 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  105. Kouvaros, P.; Kyono, T.; Leofante, F.; Lomuscio, A.; Margineantu, D.; Osipychev, D.; Zheng, Y. Formal Analysis of Neural Network-Based Systems in the Aircraft Domain. In Formal Methods; Springer International Publishing: Berlin/Heidelberg, Germany, 2021; pp. 730–740. [Google Scholar] [CrossRef]
  106. Guo, W.; Mu, D.; Xu, J.; Su, P.; Wang, G.; Xing, X. LEMNA: Explaining Deep Learning based Security Applications. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security, CCS ’18, Toronto, ON, Canada, 15–19 October 2018. [Google Scholar] [CrossRef]
  107. Mohammad Hossain, T.; Watada, J.; Aziz, A.P.D.I.; Hermana, M.; Meraj, S.; Sakai, H. Lithology prediction using well logs: A granular computing approach. Int. J. Innov. Comput. Inf. Control IJICIC 2021, 17, 225–244. [Google Scholar] [CrossRef]
  108. Blanco-Justicia, A.; Domingo-Ferrer, J.; Martínez, S.; Sánchez, D. Machine learning explainability via microaggregation and shallow decision trees. Knowl.-Based Syst. 2020, 194, 105532. [Google Scholar] [CrossRef]
  109. Sirmacek, B.; Riveiro, M. Occupancy Prediction Using Low-Cost and Low-Resolution Heat Sensors for Smart Offices. Sensors 2020, 20, 5497. [Google Scholar] [CrossRef]
  110. Pornprasit, C.; Tantithamthavorn, C.; Jiarpakdee, J.; Fu, M.; Thongtanunam, P. PyExplainer: Explaining the Predictions of Just-In-Time Defect Models. In Proceedings of the 2021 36th IEEE/ACM International Conference on Automated Software Engineering (ASE), Melbourne, Australia, 15–19 November 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  111. Dalpiaz, F.; Dell’Anna, D.; Aydemir, F.B.; Cevikol, S. Requirements Classification with Interpretable Machine Learning and Dependency Parsing. In Proceedings of the 2019 IEEE 27th International Requirements Engineering Conference (RE), Jeju Island, Republic of Korea, 23–27 September 2019; IEEE: New York, NY, USA, 2019. [Google Scholar] [CrossRef]
  112. Bendre, N.; Desai, K.; Najafirad, P. Show Why the Answer is Correct! Towards Explainable AI using Compositional Temporal Attention. In Proceedings of the 2021 IEEE International Conference on Systems, Man, and Cybernetics (SMC), Melbourne, Australia, 17–21 October 2021; IEEE: New York, NY, USA, 2021; pp. 3006–3012. [Google Scholar] [CrossRef]
  113. Irarrázaval, M.E.; Maldonado, S.; Pérez, J.; Vairetti, C. Telecom traffic pumping analytics via explainable data science. Decis. Support Syst. 2021, 150, 113559. [Google Scholar] [CrossRef]
  114. Borg, M.; Jabangwe, R.; Aberg, S.; Ekblom, A.; Hedlund, L.; Lidfeldt, A. Test Automation with Grad-CAM Heatmaps—A Future Pipe Segment in MLOps for Vision AI? In Proceedings of the 2021 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), Virtual, 12–16 April 2021; IEEE: New York, NY, USA, 2021. [Google Scholar] [CrossRef]
  115. DeLaurentis, D.A.; Panchal, J.H.; Raz, A.K.; Balasubramani, P.; Maheshwari, A.; Dachowicz, A.; Mall, K. Toward Automated Game Balance: A Systematic Engineering Design Approach. In Proceedings of the 2021 IEEE Conference on Games (CoG), Virtual, 17–20 August 2021; IEEE: New York, NY, USA, 2021; Volume 6, pp. 1–8. [Google Scholar] [CrossRef]
  116. Meacham, S.; Isaac, G.; Nauck, D.; Virginas, B. Towards Explainable AI: Design and Development for Explanation of Machine Learning Predictions for a Patient Readmittance Medical Application. In Intelligent Computing; Springer International Publishing: Berlin/Heidelberg, Germany, 2019; pp. 939–955. [Google Scholar] [CrossRef]
  117. Iyer, R.; Li, Y.; Li, H.; Lewis, M.; Sundar, R.; Sycara, K. Transparency and Explanation in Deep Reinforcement Learning Neural Networks. In Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, AIES ’18, New Orleans, LA, USA, 2–3 February 2018. [Google Scholar] [CrossRef]
  118. Sun, K.H.; Huh, H.; Tama, B.A.; Lee, S.Y.; Jung, J.H.; Lee, S. Vision-Based Fault Diagnostics Using Explainable Deep Learning with Class Activation Maps. IEEE Access 2020, 8, 129169–129179. [Google Scholar] [CrossRef]
  119. Younas, F.; Raza, A.; Thalji, N.; Abualigah, L.; Zitar, R.A.; Jia, H. An efficient artificial intelligence approach for early detection of cross-site scripting attacks. Decis. Anal. J. 2024, 11, 100466. [Google Scholar] [CrossRef]
  120. Basnet, P.M.S.; Jin, A.; Mahtab, S. Developing an explainable rockburst risk prediction method using monitored microseismicity based on interpretable machine learning approach. Acta Geophys. 2024, 72, 2597–2618. [Google Scholar] [CrossRef]
  121. Hu, J.; Zhu, K.; Cheng, S.; Kovalchuk, N.M.; Soulsby, A.; Simmons, M.J.; Matar, O.K.; Arcucci, R. Explainable AI models for predicting drop coalescence in microfluidics device. Chem. Eng. J. 2024, 481, 148465. [Google Scholar] [CrossRef]
  122. Askr, H.; El-dosuky, M.; Darwish, A.; Hassanien, A.E. Explainable ResNet50 learning model based on copula entropy for cotton plant disease prediction. Appl. Soft Comput. 2024, 164, 112009. [Google Scholar] [CrossRef]
  123. Shojaeinasab, A.; Jalayer, M.; Baniasadi, A.; Najjaran, H. Unveiling the Black Box: A Unified XAI Framework for Signal-Based Deep Learning Models. Machines 2024, 12, 121. [Google Scholar] [CrossRef]
  124. Huang, Z.; Yu, H.; Fan, G.; Shao, Z.; Li, M.; Liang, Y. Aligning XAI explanations with software developers’ expectations: A case study with code smell prioritization. Expert Syst. Appl. 2024, 238, 121640. [Google Scholar] [CrossRef]
  125. Chai, C.; Fan, G.; Yu, H.; Huang, Z.; Ding, J.; Guan, Y. Exploring better alternatives to size metrics for explainable software defect prediction. Softw. Qual. J. 2023, 32, 459–486. [Google Scholar] [CrossRef]
  126. Khan, S.A.; Chaudary, E.; Mumtaz, W. EEG-ConvNet: Convolutional networks for EEG-based subject-dependent emotion recognition. Comput. Electr. Eng. 2024, 116, 109178. [Google Scholar] [CrossRef]
  127. Gulmez, S.; Gorgulu Kakisim, A.; Sogukpinar, I. XRan: Explainable deep learning-based ransomware detection using dynamic analysis. Comput. Secur. 2024, 139, 103703. [Google Scholar] [CrossRef]
  128. Kim, I.; Wook Kim, S.; Kim, J.; Huh, H.; Jeong, I.; Choi, T.; Kim, J.; Lee, S. Single domain generalizable and physically interpretable bearing fault diagnosis for unseen working conditions. Expert Syst. Appl. 2024, 241, 122455. [Google Scholar] [CrossRef]
  129. Ashraf, W.M.; Dua, V. Partial derivative-based dynamic sensitivity analysis expression for non-linear auto regressive with exogenous (NARX) model case studies on distillation columns and model’s interpretation investigation. Chem. Eng. J. Adv. 2024, 18, 100605. [Google Scholar] [CrossRef]
  130. Daghigh, V.; Bakhtiari Ramezani, S.; Daghigh, H.; Lacy, T.E., Jr. Explainable artificial intelligence prediction of defect characterization in composite materials. Compos. Sci. Technol. 2024, 256, 110759. [Google Scholar] [CrossRef]
  131. Lin, S.; Liang, Z.; Zhao, S.; Dong, M.; Guo, H.; Zheng, H. A comprehensive evaluation of ensemble machine learning in geotechnical stability analysis and explainability. Int. J. Mech. Mater. Des. 2023, 20, 331–352. [Google Scholar] [CrossRef]
  132. Abdollahi, A.; Li, D.; Deng, J.; Amini, A. An explainable artificial-intelligence-aided safety factor prediction of road embankments. Eng. Appl. Artif. Intell. 2024, 136, 108854. [Google Scholar] [CrossRef]
  133. Kobayashi, K.; Alam, S.B. Explainable, interpretable, and trustworthy AI for an intelligent digital twin: A case study on remaining useful life. Eng. Appl. Artif. Intell. 2024, 129, 107620. [Google Scholar] [CrossRef]
  134. Koyama, N.; Sakai, Y.; Sasaoka, S.; Dominguez, D.; Somiya, K.; Omae, Y.; Terada, Y.; Meyer-Conde, M.; Takahashi, H. Enhancing the rationale of convolutional neural networks for glitch classification in gravitational wave detectors: A visual explanation. Mach. Learn. Sci. Technol. 2024, 5, 035028. [Google Scholar] [CrossRef]
  135. Frie, C.; Riza Durmaz, A.; Eberl, C. Exploration of materials fatigue influence factors using interpretable machine learning. Fatigue Fract. Eng. Mater. Struct. 2024, 47, 2752–2773. [Google Scholar] [CrossRef]
  136. He, X.; Huang, W.; Lv, C. Trustworthy autonomous driving via defense-aware robust reinforcement learning against worst-case observational perturbations. Transp. Res. Part C Emerg. Technol. 2024, 163, 104632. [Google Scholar] [CrossRef]
  137. Bottieau, J.; Audemard, G.; Bellart, S.; Lagniez, J.M.; Marquis, P.; Szczepanski, N.; Toubeau, J.F. Logic-based explanations of imbalance price forecasts using boosted trees. Electr. Power Syst. Res. 2024, 235, 110699. [Google Scholar] [CrossRef]
  138. Soon, R.J.; Chui, C.K. Textile Surface Defects Analysis with Explainable AI. In Proceedings of the 2024 IEEE Conference on Artificial Intelligence (CAI), Singapore, 25–27 June 2024; IEEE: New York, NY, USA, 2024; pp. 1394–1398. [Google Scholar] [CrossRef]
  139. Bourokba, A.; El Hamdi, R.; Mohamed, N.J.A.H. A Shapley based XAI approach for a turbofan RUL estimation. In Proceedings of the 2024 21st International Multi-Conference on Systems, Signals & Devices (SSD), As Sulaymaniyah, Iraq, 22–25 April 2024; IEEE: New York, NY, USA, 2024; Volume 12391, pp. 832–837. [Google Scholar] [CrossRef]
  140. Tasioulis, T.; Karatzas, K. Reviewing Explainable Artificial Intelligence Towards Better Air Quality Modelling. In Advances and New Trends in Environmental Informatics 2023; Springer Nature: Cham, Switzerland, 2024; pp. 3–19. [Google Scholar] [CrossRef]
  141. Fiosina, J.; Sievers, P.; Drache, M.; Beuermann, S. Polymer reaction engineering meets explainable machine learning. Comput. Chem. Eng. 2023, 177, 108356. [Google Scholar] [CrossRef]
  142. Sharma, K.; Talpa Sai, P.S.; Sharma, P.; Kanti, P.K.; Bhramara, P.; Akilu, S. Prognostic modeling of polydisperse SiO2/Aqueous glycerol nanofluids’ thermophysical profile using an explainable artificial intelligence (XAI) approach. Eng. Appl. Artif. Intell. 2023, 126, 106967. [Google Scholar] [CrossRef]
  143. Yaprakdal, F.; Varol Arısoy, M. A Multivariate Time Series Analysis of Electrical Load Forecasting Based on a Hybrid Feature Selection Approach and Explainable Deep Learning. Appl. Sci. 2023, 13, 12946. [Google Scholar] [CrossRef]
  144. Wallsberger, R.; Knauer, R.; Matzka, S. Explainable Artificial Intelligence in Mechanical Engineering: A Synthetic Dataset for Comprehensive Failure Mode Analysis. In Proceedings of the 2023 Fifth International Conference on Transdisciplinary AI (TransAI), Laguna Hills, CA, USA, 25–27 September 2023; IEEE: New York, NY, USA, 2023; pp. 249–252. [Google Scholar] [CrossRef]
  145. Zhang, J.; Cosma, G.; Bugby, S.; Finke, A.; Watkins, J. Morphological Image Analysis and Feature Extraction for Reasoning with AI-Based Defect Detection and Classification Models. In Proceedings of the 2023 IEEE Symposium Series on Computational Intelligence (SSCI), Mexico City, Mexico, 5–8 December 2023; IEEE: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  146. Bhakte, A.; Pakkiriswamy, V.; Srinivasan, R. An explainable artificial intelligence based approach for interpretation of fault classification results from deep neural networks. Chem. Eng. Sci. 2022, 250, 117373. [Google Scholar] [CrossRef]
  147. Liu, J.; Hou, L.; Wang, X.; Zhang, R.; Sun, X.; Xu, L.; Yu, Q. Explainable fault diagnosis of gas-liquid separator based on fully convolutional neural network. Comput. Chem. Eng. 2021, 155, 107535. [Google Scholar] [CrossRef]
  148. Peng, P.; Zhang, Y.; Wang, H.; Zhang, H. Towards robust and understandable fault detection and diagnosis using denoising sparse autoencoder and smooth integrated gradients. ISA Trans. 2021, 125, 371–383. [Google Scholar] [CrossRef]
  149. Guzman Urbina, A.; Aoyama, A. Pipeline risk assessment using artificial intelligence: A case from the colombian oil network. Process Saf. Prog. 2018, 37, 110–116. [Google Scholar] [CrossRef]
  150. Agarwal, P.; Tamer, M.; Budman, H. Explainability: Relevance based dynamic deep learning algorithm for fault detection and diagnosis in chemical processes. Comput. Chem. Eng. 2021, 154, 107467. [Google Scholar] [CrossRef]
  151. Wu, D.; Zhao, J. Process topology convolutional network model for chemical process fault diagnosis. Process Saf. Environ. Prot. 2021, 150, 93–109. [Google Scholar] [CrossRef]
  152. Harinarayan, R.R.A.; Shalinie, S.M. XFDDC: eXplainable Fault Detection Diagnosis and Correction framework for chemical process systems. Process Saf. Environ. Prot. 2022, 165, 463–474. [Google Scholar] [CrossRef]
  153. Santana, V.V.; Gama, M.S.; Loureiro, J.M.; Rodrigues, A.E.; Ribeiro, A.M.; Tavares, F.W.; Barreto, A.G.; Nogueira, I.B. A First Approach towards Adsorption-Oriented Physics-Informed Neural Networks: Monoclonal Antibody Adsorption Performance on an Ion-Exchange Column as a Case Study. ChemEngineering 2022, 6, 21. [Google Scholar] [CrossRef]
  154. Di Bonito, L.P.; Campanile, L.; Napolitano, E.; Iacono, M.; Portolano, A.; Di Natale, F. Prediction of chemical plants operating performances: A machine learning approach. In Proceedings of the 37th ECMS International Conference on Modelling and Simulation, ECMS 2023, Florence, Italy, 20–23 June 2023. [Google Scholar] [CrossRef]
  155. Di Bonito, L.P.; Campanile, L.; Napolitano, E.; Iacono, M.; Portolano, A.; Di Natale, F. Analysis of a marine scrubber operation with a combined analytical/AI-based method. Chem. Eng. Res. Des. 2023, 195, 613–623. [Google Scholar] [CrossRef]
  156. De Micco, M.; Gragnaniello, D.; Zonfrilli, F.; Guida, V.; Villone, M.M.; Poggi, G.; Verdoliva, L. Stability assessment of liquid formulations: A deep learning approach. Chem. Eng. Sci. 2022, 262, 117991. [Google Scholar] [CrossRef]
  157. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794. [Google Scholar] [CrossRef]
  158. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32. [Google Scholar] [CrossRef]
  159. Breiman, L.; Friedman, J.H.; Olshen, R.A.; Stone, C.J. Classification and Regression Trees; Chapman and Hall/CRC: Boca Raton, FL, USA, 2017; pp. 1–358. [Google Scholar] [CrossRef]
  160. Ke, G.; Meng, Q.; Finley, T.; Wang, T.; Chen, W.; Ma, W.; Ye, Q.; Liu, T.Y. LightGBM: A highly efficient gradient boosting decision tree. Adv. Neural Inf. Process. Syst. 2017, 30, 3147–3155. [Google Scholar]
  161. Klir, G.J.; Yuan, B. Fuzzy Sets and Fuzzy Logic: Theory and Applications; Prentice-Hall, Inc.: Hillsdale, NJ, USA, 1994. [Google Scholar]
  162. Cortes, C.; Vapnik, V. Support-Vector Networks. Mach. Learn. 1995, 20, 273–297. [Google Scholar] [CrossRef]
  163. Cover, T.; Hart, P. Nearest Neighbor Pattern Classification. IEEE Trans. Inf. Theory 1967, 13, 21–27. [Google Scholar] [CrossRef]
  164. Prokhorenkova, L.; Gusev, G.; Vorobev, A.; Dorogush, A.V.; Gulin, A. CatBoost: Unbiased boosting with categorical features. Adv. Neural Inf. Process. Syst. 2018, 31, 6638–6648. [Google Scholar]
  165. Rasmussen, C.E. Gaussian Processes in machine learning. Lect. Notes Comput. Sci. 2004, 3176, 63–71. [Google Scholar] [CrossRef]
  166. Friedman, N.; Geiger, D.; Goldszmidt, M. Bayesian Network Classifiers. Mach. Learn. 1997, 29, 131–163. [Google Scholar] [CrossRef]
  167. Harrell, F.E., Jr. Regression Modeling Strategies: With Applications to Linear Models, Logistic and Ordinal Regression, and Survival Analysis, 2nd ed.; Springer Series in Statistics; Springer: Cham, Switzerland, 2015. [Google Scholar] [CrossRef]
  168. Pawlak, Z. Rough Sets: Theoretical Aspects of Reasoning about Data; Theory and Decision Library D; Springer: Dordrecht, The Netherlands, 1991; Volume 9. [Google Scholar] [CrossRef]
  169. Caruana, R.; Lou, Y.; Gehrke, J.; Koch, P.; Sturm, M.; Elhadad, N. Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, Sydney, NSW, Australia, 10–13 August 2015; Association for Computing Machinery: New York, NY, USA, 2015; pp. 1721–1730. [Google Scholar] [CrossRef]
  170. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [Google Scholar] [CrossRef] [PubMed]
  171. Krizhevsky, A.; Sutskever, I.; Hinton, G.E. ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 2012, 2, 1097–1105. [Google Scholar] [CrossRef]
  172. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is All you Need. In Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., Garnett, R., Eds.; Curran Associates, Inc.: Red Hook, NY, USA, 2017; Volume 30. [Google Scholar]
  173. Li, W.; Gu, C.; Chen, J.; Ma, C.; Zhang, X.; Chen, B.; Wan, S. DLS-GAN: Generative adversarial nets for defect location sensitive data augmentation. IEEE Trans. Autom. Sci. Eng. 2023, 21, 4. [Google Scholar] [CrossRef]
  174. Selvaraju, R.R.; Cogswell, M.; Das, A.; Vedantam, R.; Parikh, D.; Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE: New York, NY, USA, 2017. [Google Scholar] [CrossRef]
  175. Ribeiro, M.T.; Singh, S.; Guestrin, C. “Why Should I Trust You?”: Explaining the Predictions of Any Classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’16, San Francisco, CA, USA, 13–17 August 2016; pp. 1135–1144. [Google Scholar] [CrossRef]
  176. Bach, S.; Binder, A.; Montavon, G.; Klauschen, F.; Müller, K.R.; Samek, W. On Pixel-Wise Explanations for Non-Linear Classifier Decisions by Layer-Wise Relevance Propagation. PLoS ONE 2015, 10, e0130140. [Google Scholar] [CrossRef] [PubMed]
  177. Sundararajan, M.; Taly, A.; Yan, Q. Axiomatic Attribution for Deep Networks. In Proceedings of the 34th International Conference on Machine Learning, Sydney, Australia, 6–11 August 2017; PMLR: Proceedings of Machine Learning Research. Precup, D., Teh, Y.W., Eds.; 2017; Volume 70, pp. 3319–3328. [Google Scholar]
  178. Bertsimas, D.; Dunn, J. Optimal classification trees. Mach. Learn. 2017, 106, 1039–1082. [Google Scholar] [CrossRef]
  179. Simonyan, K.; Vedaldi, A.; Zisserman, A. Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv 2013, arXiv:1312.6034. [Google Scholar] [CrossRef]
Figure 1. Concepts of XAI methodologies.
Figure 2. SLR Protocol Used.
Figure 3. PRISMA diagram of the selection process.
Figure 4. Distribution of document type for eligible and reviewed documents. (a) Eligible Documents. (b) Reviewed Documents.
Figure 5. Cumulative distribution of document number for eligible and reviewed documents by the year. (a) Eligible Documents. (b) Reviewed Documents.
Figure 6. Document word clouds before and after the selection generated by using author keywords: (a) Eligible documents; (b) Reviewed documents.
Figure 7. Stacked Bar Chart: the bars represent the counting based on RQ.1, while the stacks represent the counting concerning RQ.2.
Figure 8. Used dataset type.
Figure 9. Side-effects addressed by the use of XAI (RQ.6).
Figure 10. Stacked Bar Chart: the bars represent the counting based on RQ.7, while the stacks represent the counting concerning different approaches.
Table 1. Collected data from Scopus and WoS.

Field | Description
Id | Unique identifier for each record
Author | Authors of the paper
Title | Title of the document
Year | Year of publication
Author Keywords | Keywords provided by the author
Document Type | Type of the document
Open Access | Open Access availability
DOI | Digital Object Identifier
Table 2. RQ.2 Categories and their Counts.

Category | Count
Fault Diagnosis | 30
Prediction | 29
Modelling | 21
Optimisation | 9
Design | 6
Control | 5
Table 3. RQ.3.1 Machine Learning Algorithms.

Algorithm | Count | Reference
XGBoost | 14 | [157]
Random Forest | 6 | [158]
Decision Tree | 4 | [159]
LightGBM | 4 | [160]
Fuzzy Logic | 3 | [161]
Support Vector Machine (SVM) | 3 | [162]
k-Nearest Neighbors (k-NN) | 3 | [163]
CatBoost | 2 | [164]
Gaussian Process Regression | 2 | [165]
Bayesian Network | 1 | [166]
Logistic Regression | 1 | [167]
Rough set theory (RST) | 1 | [168]
Explainable Boosting Machine (EBM) | 1 | [169]
Table 4. RQ.3.1 Deep Learning Algorithms.

Algorithm | Count | Reference
Convolutional Neural Networks (CNN) | 25 | [170]
Artificial Neural Networks (ANN) | 24 | [171]
Recurrent Neural Networks (RNN) | 6 | [172]
Table 5. RQ.7.1 Post-hoc XAI Tools.

Technique | Count | Reference
SHapley Additive exPlanations (SHAP) | 38 | [174]
Local Interpretable Model Agnostic Explanation (LIME) | 13 | [175]
Class Activation Maps (CAM) | 8 | [174]
Layer-wise Relevance Propagation (LRP) | 3 | [176]
Feature Importance (others) | 2 | [175]
Integrated Gradients (IG) | 2 | [177]
Abductive Logic-Based Explanations (ALBE) | 1 | [175]
Classification and Regression Trees (CART) | 1 | [159]
Local Explanations for deep Graph networks by Input perturbaTion (LEGIT) | 1 | [86]
Local Explanation Method using Nonlinear Approximation (LEMNA) | 1 | [106]
Mixed Integer Linear Problem (MLP) | 1 | [178]
Object Saliency Maps (OSM) | 1 | [179]
Statistical Fault Localization (SFL) | 1 | [102]