Conventional approaches to analyzing industrial control systems (ICS) have relied on either white-box analysis or black-box fuzzing. However, white-box methods depend on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, resulting in a significant reduction of the state space without relying on domain expertise. Experimental results show that our method accurately explains the ICS model and speeds up fuzzing by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
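No code accompanies the abstract above; the Python sketch below only illustrates the general idea of XAI-guided actuator selection under assumptions of ours: a surrogate regressor (a hypothetical stand-in for the paper's ICS model) is fitted to logged actuator commands against an observed safety margin, and SHAP importances are used to pick a small set of critical actuators for the fuzzer to mutate.

# Illustrative sketch, not the authors' implementation: rank actuators by the
# SHAP importance of a surrogate ICS model, then fuzz only the top-k of them.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor

def rank_critical_actuators(X_log, safety_margin, actuator_names, top_k=5):
    """X_log: logged actuator commands, shape (n_samples, n_actuators);
    safety_margin: observed distance to the safety boundary per sample."""
    surrogate = GradientBoostingRegressor().fit(X_log, safety_margin)  # hypothetical surrogate
    explainer = shap.TreeExplainer(surrogate)
    shap_values = explainer.shap_values(X_log)           # (n_samples, n_actuators)
    importance = np.abs(shap_values).mean(axis=0)        # global importance per actuator
    top = np.argsort(importance)[::-1][:top_k]
    return [actuator_names[i] for i in top]

# A fuzzer would then mutate only the returned actuators, shrinking the search
# space instead of exploring every sensor/actuator combination.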
Alzheimer’s disease (AD) is a disorder affecting the functioning of brain cells that begins gradually and worsens over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To address this delay, this work proposes an approach that uses Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer’s patients to classify the stages of AD, together with an Explainable Artificial Intelligence (XAI) technique, Gradient-weighted Class Activation Mapping (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
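As a companion to the abstract above, here is a minimal Grad-CAM sketch in Python (PyTorch), assuming a generic CNN classifier over MRI slices; the model, target layer, and preprocessing are placeholders, not the authors' architecture.

# Minimal Grad-CAM sketch: weight the target layer's activations by the
# spatially averaged gradients of the predicted class score.
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """image: (1, C, H, W) tensor; target_layer: the last conv layer of the CNN."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(lambda m, i, o: acts.update(value=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(value=go[0]))

    logits = model(image)
    if class_idx is None:
        class_idx = int(logits.argmax(dim=1))
    model.zero_grad()
    logits[0, class_idx].backward()
    h1.remove(); h2.remove()

    weights = grads["value"].mean(dim=(2, 3), keepdim=True)      # GAP over H, W
    cam = F.relu((weights * acts["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach()                                # heatmap over the MRI slice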
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is therefore extremely important for the healthcare sector. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study discusses explainable artificial intelligence (XAI) in deep learning-based medical image analysis. Its primary goal is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. Interpretability/XAI approaches for medical image analysis are commonly categorized by explanation method and technical method. The explanation methods are sub-categorized into three types: text-based, visual-based, and example-based. The technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical image analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This research addresses the application of black-box models in knowledge-based systems. It elaborates on the fundamental limitations of these models in providing internal explanations, which leads to non-compliance with prevailing regulations such as GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data from Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solutions are compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The results indicate that the Decision Tree explanation method outperforms the other tested methods. However, the study also faces several challenges, including limited data size due to time constraints on expert data provision, and the simplicity of the important features, stemming from limits on the experts' authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
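The study's data are not public; the Python sketch below, using synthetic placeholder features, only shows how a decision-tree credit model can be paired with LIME to produce the kind of local, applicant-level explanation described above.

# Hedged sketch: decision-tree credit scoring with a LIME local explanation.
# Feature names and data are hypothetical stand-ins, not the study's dataset.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["income", "loan_amount", "tenure_months", "late_payments"]  # hypothetical
X_train = np.random.rand(500, len(feature_names))     # stand-in for Indonesian credit data
y_train = np.random.randint(0, 2, 500)                # 1 = approve, 0 = reject (synthetic)

model = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["reject", "approve"], mode="classification")

# Local explanation for one applicant: per-feature contributions a credit
# analyst can read, analogous to the user-friendly explanations in the paper.
applicant = X_train[0]
explanation = explainer.explain_instance(applicant, model.predict_proba, num_features=4)
print(explanation.as_list())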
This work examines whether solving a programming guide is related to academic success in an introductory programming course at Andrés Bello University (Chile). We investigate whether the guide, which consists of 52 exercises that are not mandatory to solve, helps predict first-year students' failure of the first test of the course. Furthermore, we propose using the unified SHAP framework for XAI to analyze and understand how programming guides influence student performance. The study includes a literature review of related previous work, a descriptive analysis of the collected data, and a discussion of the practical and theoretical implications of the study. The results will be useful for improving student support strategies and for decision making related to the use of guides as an educational tool.
Authored by Gaston Sepulveda, Billy Peralta, Marcos Levano, Pablo Schwarzenberg, Orietta Nicolis
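To make the proposed analysis concrete, the following Python sketch applies SHAP to a simple pass/fail model trained on guide-related activity features; the features, data, and model are illustrative assumptions, not the course dataset.

# Illustrative only: logistic regression predicting first-test failure from
# guide-exercise activity, explained with SHAP. All data here are synthetic.
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

features = ["exercises_solved", "exercises_attempted", "days_active", "avg_attempts"]
X = np.random.rand(300, len(features)) * [52, 52, 60, 5]   # synthetic student records
y = (X[:, 0] < 15).astype(int)                             # 1 = failed first test (synthetic rule)

model = LogisticRegression(max_iter=1000).fit(X, y)
explainer = shap.LinearExplainer(model, X)
shap_values = explainer.shap_values(X)                     # (n_students, n_features)

# Global view: how strongly each guide-related feature drives predicted failure.
for name, imp in zip(features, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.3f}")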
Despite intensive research, the survival rate for pancreatic cancer, a fatal and incurable illness, has not dramatically improved in recent years. Deep learning systems have shown superhuman ability in a considerable number of tasks, and recent developments in Artificial Intelligence (AI) have led to its widespread use in predictive analytics for pancreatic cancer. However, the improvement in performance comes from increased model complexity, which turns these systems into “black box” methods and creates uncertainty about how they function and, ultimately, how they make judgements. This ambiguity has made it difficult for deep learning algorithms to be accepted in important fields like healthcare, where their benefit could be enormous. As a result, there has been a significant resurgence of scholarly interest in recent years in Explainable Artificial Intelligence (XAI), which is concerned with the creation of novel techniques for interpreting and explaining deep learning models. In this study, we utilize Computed Tomography (CT) images and clinical data to predict pancreatic cancer and analyze survival rate, respectively. Since pancreatic tumors are small and difficult to identify, region marking through XAI will assist medical professionals in identifying the appropriate region and determining the presence of cancer. Various features are taken into consideration for survival prediction. The most prominent features can be identified with the help of XAI, which in turn aids medical professionals in making better decisions. This study mainly focuses on the XAI strategy for deep and machine learning models rather than on prediction and survival methodology.
Authored by Srinidhi B, M Bhargavi
Explainable AI (XAI) techniques are used for understanding the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable XAI dashboards ready to be used by both data scientists and general users with no code. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project to explain the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
Machine learning models have become increasingly complex, making it difficult to understand how they make predictions. Explainable AI (XAI) techniques have been developed to enhance model interpretability, thereby improving model transparency, trust, and accountability. In this paper, we present a comparative analysis of several XAI techniques to enhance the interpretability of machine learning models. We evaluate the performance of these techniques on a dataset commonly used for regression or classification tasks. The XAI techniques include SHAP, LIME, PDP, and GAM. We compare the effectiveness of these techniques in terms of their ability to explain model predictions and identify the most important features in the dataset. Our results indicate that XAI techniques significantly improve model interpretability, with SHAP and LIME being the most effective in identifying important features in the dataset. Our study provides insights into the strengths and limitations of different XAI techniques and their implications for the development and deployment of machine learning models. We conclude that XAI techniques have the potential to significantly enhance model interpretability and promote trust and accountability in the use of machine learning models. The paper emphasizes the importance of interpretability in medical applications of machine learning and highlights the significance of XAI techniques in developing accurate and reliable models for medical applications.
Authored by Swathi Y, Manoj Challa
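Of the four techniques compared above, SHAP and LIME are illustrated elsewhere in this digest; the short Python sketch below shows a partial dependence plot (PDP) on a stand-in classification dataset, since the paper's own dataset is not specified.

# Hedged sketch of one compared technique (PDP) on a placeholder dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay
import matplotlib.pyplot as plt

X, y = load_breast_cancer(return_X_y=True, as_frame=True)          # stand-in data
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# PDP shows the average predicted probability as one feature varies,
# complementing the per-instance attributions produced by SHAP and LIME.
PartialDependenceDisplay.from_estimator(model, X, features=["mean radius", "mean texture"])
plt.tight_layout()
plt.show()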
Nowadays, anomaly-based network intrusion detection systems (NIDS) still have limited real-world application, particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose using explainable artificial intelligence (XAI) methods to tackle these issues. In our experiments, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which could lead to generalization and overfitting problems in classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations match expectations based on domain knowledge better. Thus, we conclude that the imbalances in the dataset bias classification and consequently also the results of XAI methods like SHAP. However, XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, this debugging results in higher trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
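A minimal Python sketch of the paper's revised setup is given below, under assumptions of ours: NSL-KDD loading and preprocessing are omitted, and the function simply trains a random forest on per-attack-type labels and returns a global SHAP feature ranking that can be checked against domain expertise.

# Sketch only: random forest on per-attack-type labels with global SHAP
# importances; data loading/encoding for NSL-KDD is assumed to be done elsewhere.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

def global_shap_ranking(X_train, y_attack_type, feature_names, n_background=500):
    rf = RandomForestClassifier(n_estimators=200, n_jobs=-1).fit(X_train, y_attack_type)
    explainer = shap.TreeExplainer(rf)
    background = shap.sample(X_train, n_background)        # keep the explanation tractable
    sv = explainer.shap_values(background)
    # Older SHAP releases return a list of per-class arrays; newer ones a 3-D array.
    sv = np.stack(sv, axis=-1) if isinstance(sv, list) else sv
    importance = np.abs(sv).mean(axis=(0, 2))              # average over samples and classes
    ranking = sorted(zip(feature_names, importance), key=lambda t: -t[1])
    return rf, ranking                                     # compare ranking with domain knowledge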
In the evolving landscape of Internet of Things (IoT) security, the need for continuous adaptation of defenses is critical. Class Incremental Learning (CIL) can provide a viable solution by enabling Machine Learning (ML) and Deep Learning (DL) models to (i) learn and adapt to new attack types (0-day attacks), (ii) retain their ability to detect known threats, and (iii) safeguard computational efficiency (i.e. no full re-training). In IoT security, where novel attacks frequently emerge, CIL offers an effective tool to enhance Intrusion Detection Systems (IDS) and secure network environments. In this study, we explore how CIL approaches empower DL-based IDS in IoT networks, using the publicly available IoT-23 dataset. Our evaluation focuses on two essential aspects of an IDS: (a) attack classification and (b) misuse detection. A thorough comparison against a fully retrained IDS, namely one trained from scratch, is carried out. Finally, we place emphasis on interpreting the predictions made by incremental IDS models through eXplainable AI (XAI) tools, offering insights into potential avenues for improvement.
Authored by Francesco Cerasuolo, Giampaolo Bovenzi, Christian Marescalco, Francesco Cirillo, Domenico Ciuonzo, Antonio Pescapè
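The following PyTorch sketch illustrates only the core mechanism behind points (i)-(iii) above, under simplifying assumptions of ours: the IDS is a small feed-forward classifier whose output layer is widened when new attack classes appear, while weights for known classes are retained; replay buffers and training loops are omitted.

# Minimal class-incremental sketch: widen the classifier head for new attack
# classes without discarding what was learned for the known ones.
import torch
import torch.nn as nn

class IncrementalIDS(nn.Module):
    def __init__(self, n_features, n_classes, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))

    @torch.no_grad()
    def add_classes(self, n_new):
        """Widen the output layer, copying weights of already-known classes so
        detection of previously seen attacks is retained (no full retraining)."""
        old = self.head
        new = nn.Linear(old.in_features, old.out_features + n_new)
        new.weight[: old.out_features] = old.weight
        new.bias[: old.out_features] = old.bias
        self.head = new

# Usage (illustrative): when two new attack types are observed, call
# model.add_classes(2) and fine-tune on the new data plus a small replay buffer.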
A Digital Twin can be developed to represent a soil carbon emissions ecosystem, taking into account various parameters such as soil type, vegetation, climate, human interaction, and more. With the help of sensors and satellite imagery, real-time data can be collected and fed into the digital model to simulate and predict soil carbon emissions. However, the lack of interpretable predictions and transparent decision-making makes such a Digital Twin unreliable, which could undermine the management process. Therefore, we propose an explainable artificial intelligence (XAI) empowered Digital Twin for better managing soil carbon emissions through AI-enabled proximal sensing. We validated our XAIoT-DT components by analyzing real-world soil carbon content datasets. The preliminary results demonstrate that our framework is a reliable tool for managing soil carbon emissions, offering relatively accurate predictions at a low cost.
Authored by Di An, YangQuan Chen
Authored by Ayshah Chan, Maja Schneider, Marco Körner
Explainable Artificial Intelligence (XAI) seeks to enhance transparency and trust in AI systems. Evaluating the quality of XAI explanation methods remains challenging due to limitations in existing metrics. To address these issues, we propose a novel metric called Explanation Significance Assessment (ESA) and its extension, the Weighted Explanation Significance Assessment (WESA). These metrics offer a comprehensive evaluation of XAI explanations, considering spatial precision, focus overlap, and relevance accuracy. In this paper, we demonstrate the applicability of ESA and WESA on medical data. These metrics quantify the understandability and reliability of XAI explanations, assisting practitioners in interpreting AI-based decisions and promoting informed choices in critical domains like healthcare. Moreover, ESA and WESA can play a crucial role in AI certification, ensuring both accuracy and explainability. By evaluating the performance of XAI methods and underlying AI models, these metrics contribute to trustworthy AI systems. Incorporating ESA and WESA in AI certification efforts advances the field of XAI and bridges the gap between accuracy and interpretability. In summary, ESA and WESA provide comprehensive metrics to evaluate XAI explanations, benefiting research, critical domains, and AI certification, thereby enabling trustworthy and interpretable AI systems.
Authored by Jan Stodt, Christoph Reich, Nathan Clarke
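The ESA and WESA formulas are not given in the abstract, so the Python snippet below does not implement them; it only sketches the kind of spatial "focus overlap" ingredient such metrics build on: the IoU between a thresholded saliency map and an expert-annotated region.

# Illustrative ingredient only (not the authors' ESA/WESA): focus overlap as
# IoU between a binarized saliency map and an expert-annotated mask.
import numpy as np

def focus_overlap(saliency, annotation_mask, threshold=0.5):
    """saliency: 2-D map scaled to [0, 1]; annotation_mask: binary expert mask."""
    focus = saliency >= threshold
    inter = np.logical_and(focus, annotation_mask).sum()
    union = np.logical_or(focus, annotation_mask).sum()
    return inter / union if union else 0.0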
The rapid advancement of Deep Learning (DL) offers viable solutions to various real-world problems. However, deploying DL-based models in some applications is hindered by their black-box nature and the inability to explain them. This has pushed Explainable Artificial Intelligence (XAI) research toward DL-based models, aiming to increase trust by reducing their opacity. Although many XAI algorithms have been proposed, they lack the ability to explain certain tasks, such as image captioning (IC). This is caused by the nature of the IC task, e.g. the presence of multiple objects from the same category in the captioned image. In this paper we propose and investigate an XAI approach for this particular task. Additionally, we provide a method to evaluate the performance of XAI algorithms in this domain.
Authored by Modafar Al-Shouha, Gábor Szűcs
The results of Deep Learning (DL) are indisputable in different fields, in particular medical diagnosis. However, the black-box nature of this tool has left doctors very cautious about its estimates. eXplainable Artificial Intelligence (XAI) has recently appeared to address this challenge by providing explanations for DL estimates, and several works in the literature offer explanatory methods. In this survey, we present an overview of the application of XAI in Deep Learning-based Magnetic Resonance Imaging (MRI) analysis for Brain Tumor (BT) diagnosis. We divide these XAI methods into four groups: intrinsic methods and three groups of post-hoc methods, namely activation-based, gradient-based, and perturbation-based XAI methods. These XAI tools improve confidence in DL-based brain tumor diagnosis.
Authored by Hana Charaabi, Hiba Mzoughi, Ridha Hamdi, Mohamed Njah
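Of the four surveyed families, the perturbation-based group is the easiest to sketch without committing to a specific model; the Python snippet below shows plain occlusion sensitivity for a generic MRI slice classifier, with the model and patch size as placeholder assumptions.

# Hedged sketch of a perturbation-based XAI method: occlusion sensitivity.
# Assumes H and W are divisible by the patch size.
import torch

@torch.no_grad()
def occlusion_map(model, image, class_idx, patch=16, baseline=0.0):
    """image: (1, C, H, W); returns a (H//patch, W//patch) confidence-drop map."""
    base_score = torch.softmax(model(image), dim=1)[0, class_idx]
    _, _, H, W = image.shape
    heat = torch.zeros(H // patch, W // patch)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.clone()
            occluded[:, :, i:i + patch, j:j + patch] = baseline
            score = torch.softmax(model(occluded), dim=1)[0, class_idx]
            heat[i // patch, j // patch] = base_score - score   # larger drop = more important
    return heat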
This paper delves into the nascent paradigm of Explainable AI (XAI) and its pivotal role in enhancing the acceptability of the growing AI systems that are shaping the Digital Management 5.0 era. XAI holds significant promise, promoting compliance with legal and ethical standards and offering transparent decision-making tools. The imperative of interpretable AI systems to counter the black-box effect and adhere to data protection laws like GDPR is highlighted. This paper pursues a dual objective. Firstly, it provides an in-depth understanding of the emerging XAI paradigm, helping practitioners and academics project their future research trajectories. Secondly, it proposes a new taxonomy of XAI models with potential applications that could facilitate AI acceptability. The academic literature reflects a crucial lack of exploration of the full potential of XAI: existing models remain mainly theoretical and lack practical applications. By bridging the gap between abstract models and the pragmatic implementation of XAI in management, this paper breaks new ground by laying the scientific foundations of XAI for the upcoming era of Digital Management 5.0.
Authored by Samia Gamoura
This work proposes an interpretable Deep Learning framework utilizing Vision Transformers (ViT) for the classification of remote sensing images into land use and land cover (LULC) classes. It uses Shapley Additive Explanations (SHAP) values to provide two-stage explanations: 1) band-wise feature importance per class, showing which band assists the prediction of each class, and 2) spatial feature understanding, explaining which embedded patches per band affected the network’s performance. Experimental results on the EuroSAT dataset demonstrate accurate ViT classification with an overall accuracy of 96.86%, offering improved results compared to popular CNN models. Heatmaps for each of the dataset’s classes highlight the effectiveness of the proposed framework in band explanation and feature importance.
Authored by Anastasios Temenos, Nikos Temenos, Maria Kaselimi, Anastasios Doulamis, Nikolaos Doulamis
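Assuming SHAP attributions have already been computed per band and per embedded patch for one tile, the short Python sketch below illustrates the two-stage aggregation described above; the array shapes and names are assumptions of ours, not the paper's exact pipeline.

# Illustrative aggregation of precomputed SHAP values into (1) band-wise
# importance and (2) per-band spatial heatmaps over the ViT patch grid.
import numpy as np

def aggregate_explanations(shap_vals, band_names, grid=(8, 8)):
    """shap_vals: (n_bands, n_patches) per-patch SHAP values for the predicted
    LULC class of a single tile; grid: assumed patch layout for the heatmaps."""
    band_importance = np.abs(shap_vals).sum(axis=1)           # 1) band-wise importance
    heatmaps = {b: np.abs(v).reshape(grid)                    # 2) spatial view per band
                for b, v in zip(band_names, shap_vals)}
    return dict(zip(band_names, band_importance)), heatmaps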