The aim of this study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI genuinely helps to explain how decisions are made by AI models, to increase confidence and trust in those models, to make decisions more reliable, and to make the decision-making mechanism transparent. Although a number of XAI methods are available, such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, many problems concerning metrics, evaluation, performance, and explanations remain domain-specific and require domain experts to develop new models or to apply the available techniques. It is hoped that this article will help researchers develop XAI solutions for their energy applications and improve their AI approaches in further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
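To ground the methods named above, here is a minimal, hedged sketch of how SHAP might be applied to an energy-related regression model; the wind-power feature names, the random-forest model, and the synthetic data are illustrative assumptions, not drawn from the reviewed studies.

```python
# Minimal sketch: applying SHAP to a hypothetical wind-power forecasting model.
# Feature names and data are illustrative, not taken from the reviewed studies.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
feature_names = ["wind_speed", "wind_direction", "air_density", "temperature"]
X = rng.normal(size=(500, len(feature_names)))
y = 3.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=500)  # toy power target

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer yields per-feature Shapley contributions for each prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:50])
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0))))  # global importance
```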
Over the past two decades, Cyber-Physical Systems (CPS) have emerged as critical components in various industries, integrating digital and physical elements to improve efficiency and automation, from smart manufacturing and autonomous vehicles to advanced healthcare devices. However, the increasing complexity of CPS and their deployment in highly dynamic contexts undermine user trust. This motivates the investigation of methods capable of generating explanations about the behavior of CPS. To this end, Explainable Artificial Intelligence (XAI) methodologies show potential. However, these approaches do not consider contextual variables that a CPS may be subjected to (e.g., temperature, humidity), and the provided explanations are typically not actionable. In this article, we propose an Actionable Contextual Explanation System (ACES) that considers such contextual influences. Based on a user query about a behavioral attribute of a CPS (for example, vibrations and speed), ACES creates contextual explanations for the behavior of that CPS, taking its context into account. To generate contextual explanations, ACES uses a context model to discover sensors and actuators in the physical environment of a CPS and obtains time-series data from these devices. It then cross-correlates these time-series logs with the user-specified behavioral attribute of the CPS. Finally, ACES employs a counterfactual explanation method and takes user feedback to identify causal relationships between the contextual variables and the behavior of the CPS. We demonstrate our approach with a synthetic use case; the favorable results obtained motivate the future deployment of ACES in real-world scenarios.
Authored by Sanjiv Jha, Simon Mayer, Kimberly Garcia
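As a rough illustration of the cross-correlation step described above, the sketch below relates a hypothetical contextual sensor log (temperature) to a hypothetical behavioral attribute (vibration); the signal names, lag structure, and data are invented, and ACES itself would obtain such series through its context model.

```python
# Sketch of the cross-correlation step: relate a contextual sensor log (e.g. ambient
# temperature) to a user-specified behavioral attribute (e.g. vibration amplitude).
import numpy as np

def normalized_cross_correlation(context_signal, behavior_signal):
    """Return lag-indexed normalized cross-correlation of two equal-length series."""
    a = (context_signal - context_signal.mean()) / (context_signal.std() + 1e-12)
    b = (behavior_signal - behavior_signal.mean()) / (behavior_signal.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(a) + 1, len(a))
    return lags, corr

rng = np.random.default_rng(1)
temperature = rng.normal(size=200)                                  # hypothetical context log
vibration = np.roll(temperature, 3) + rng.normal(scale=0.2, size=200)  # lagged dependence

lags, corr = normalized_cross_correlation(temperature, vibration)
best = np.argmax(np.abs(corr))
print(f"strongest correlation at lag {lags[best]}: {corr[best]:.2f}")
```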
In recent years there has been a surge of interest in the interpretability and explainability of AI systems, largely motivated by the need to ensure the transparency and accountability of Artificial Intelligence (AI) operations, as well as by the need to minimize the cost and consequences of poor decisions. Another challenge worth mentioning is cybersecurity attacks against AI infrastructures in manufacturing environments. This study examines eXplainable AI (XAI)-enhanced approaches against adversarial attacks for optimizing cyber defense methods in manufacturing image classification tasks. The examined XAI methods were applied to an image classification task, providing some insightful results regarding the utility of Local Interpretable Model-agnostic Explanations (LIME), saliency maps, and Gradient-weighted Class Activation Mapping (Grad-CAM) as methods to fortify a dataset against gradient evasion attacks. To this end, we “attacked” the XAI-enhanced images and used them as input to the classifier to measure their robustness. Given the analyzed dataset, our research indicates that LIME-masked images are more robust to adversarial attacks. We additionally propose an encoder-decoder schema that predicts (decodes) the masked images in a timely manner, making the proposed approach suitable for real-life problems.
Authored by Georgios Makridis, Spyros Theodoropoulos, Dimitrios Dardanis, Ioannis Makridis, Maria Separdani, Georgios Fatouros, Dimosthenis Kyriazis, Panagiotis Koulouris
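The following is a minimal sketch of producing a LIME-masked image of the kind the abstract describes as more robust input; the dummy classifier function and random image are placeholders so the snippet stays self-contained, and the real classifier, dataset, and attack pipeline are out of scope.

```python
# Sketch: build a LIME-masked image that keeps only the most influential super-pixels.
import numpy as np
from lime import lime_image

def predict_proba(images):
    # Placeholder classifier_fn: returns pseudo-probabilities for 2 classes.
    scores = images.mean(axis=(1, 2, 3))
    return np.stack([scores, 1.0 - scores], axis=1)

image = np.random.default_rng(2).random((64, 64, 3))   # placeholder input image

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_proba, top_labels=1, hide_color=0, num_samples=200
)
# Keep only the super-pixels LIME found most influential (the "masked" image).
masked, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=True
)
print(masked.shape, mask.shape)
```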
In today's age of digital technology, ethical concerns regarding computing systems are increasing. While the focus of such concerns is currently on requirements for software, this article spotlights the hardware domain, specifically microchips. For example, the opaqueness of modern microchips raises security issues, as malicious actors can manipulate them, jeopardizing system integrity. As a consequence, governments invest substantially to facilitate a secure microchip supply chain. To combat the opaqueness of hardware, this article introduces the concept of Explainable Hardware (XHW). Inspired by and building on previous work on Explainable AI (XAI) and explainable software systems, we develop a framework for achieving XHW comprising relevant stakeholders, requirements they might have concerning hardware, and possible explainability approaches to meet these requirements. Through an exploratory survey among 18 hardware experts, we showcase applications of the framework and discover potential research gaps. Our work lays the foundation for future work and structured debates on XHW.
Authored by Timo Speith, Julian Speith, Steffen Becker, Yixin Zou, Asia Biega, Christof Paar
Conventional approaches to analyzing industrial control systems (ICS) have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, which results in a significant reduction of the state space without relying on domain expertise. Experimental results show that our method accurately explains the ICS model and significantly speeds up fuzzing, by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
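As an illustrative sketch, not the authors' implementation, the snippet below ranks actuators of a toy ICS-safety model by permutation importance so that fuzzing could focus on a small critical subset; the actuator names, the surrogate model, and the safety rule are hypothetical.

```python
# Sketch: rank actuators by their importance to a learned ICS-safety model,
# so fuzzing can focus on a small critical subset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
actuators = ["pump_1", "valve_2", "valve_3", "heater_1", "fan_1"]   # hypothetical names
X = rng.integers(0, 2, size=(1000, len(actuators)))                 # actuator on/off states
y = ((X[:, 0] == 1) & (X[:, 1] == 0)).astype(int)                   # toy safety-violation rule

model = GradientBoostingClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

ranking = sorted(zip(actuators, result.importances_mean), key=lambda t: -t[1])
critical = [name for name, score in ranking if score > 0.01]
print("critical actuators to prioritize in fuzzing:", critical)
```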
In various fields, such as medical engineering or aerospace engineering, it is difficult to apply the decisions of a machine learning (ML) or deep learning (DL) model that does not account for the many human limitations which can lead to errors and incidents. Explainable Artificial Intelligence (XAI) serves to explain the results of artificial intelligence software (ML or DL), still considered black boxes, so that their decisions can be understood and adopted. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of an aircraft turbofan engine. Shapley's method was then applied to explain the DL results. This made it possible to determine the contribution of each parameter to the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
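A hedged sketch of the explanation step follows: Shapley values (here via KernelSHAP, treating the network as a black box) attribute a RUL prediction to individual sensor channels. The channel names, the small MLP stand-in, and the synthetic data are assumptions for illustration only.

```python
# Sketch: explain a RUL regressor with Shapley values; sensor names are illustrative
# stand-ins for turbofan condition-monitoring channels, not the authors' exact setup.
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
sensors = ["T24", "T30", "P30", "Nf", "Nc"]                       # hypothetical channel names
X = rng.normal(size=(400, len(sensors)))
rul = 120 - 15 * X[:, 1] - 8 * X[:, 3] + rng.normal(scale=2, size=400)  # toy RUL in cycles

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X, rul)

# KernelExplainer treats the network as a black box; a small background set keeps it cheap.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])
for name, contrib in zip(sensors, shap_values[0]):
    print(f"{name}: {contrib:+.2f} cycles contribution to the first engine's RUL")
```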
Alzheimer’s disease (AD) is a disorder that affects the functioning of brain cells, beginning gradually and worsening over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To address this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scans of Alzheimer’s patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, Gradient-weighted Class Activation Mapping (Grad-CAM), to highlight the brain regions where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
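Below is a minimal Grad-CAM sketch in TensorFlow/Keras of the kind of heatmap the abstract refers to; it assumes a trained CNN with a named last convolutional layer, and the MRI preprocessing and the paper's CNN/RNN architecture are not reproduced here.

```python
# Minimal Grad-CAM sketch for a trained Keras CNN `model`.
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Return a heatmap highlighting regions that drive the predicted class."""
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(last_conv_layer_name).output, model.output]
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = int(tf.argmax(preds[0]))
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)               # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))               # global-average-pool gradients
    cam = tf.reduce_sum(weights[:, tf.newaxis, tf.newaxis, :] * conv_out, axis=-1)[0]
    cam = tf.nn.relu(cam)                                      # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()         # normalize to [0, 1]

# Usage (hypothetical layer name): heatmap = grad_cam(cnn, mri_slice, "conv5_block3_out")
```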
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. Such errors can be provoked by so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectification: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, while reducing the accuracy of ordinary samples; passive defense methods, on the other hand, aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setting of autonomous driving: when an input has to be classified, we can neither discard it nor afford the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for correcting adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
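The abstract does not name the two interpretation methods used, so the sketch below shows one generic option, a gradient saliency map, whose per-pixel magnitudes a correction scheme could inspect; the classifier and image in the usage line are hypothetical.

```python
# Generic gradient-saliency sketch: the input-gradient magnitude marks the pixels
# the classifier relies on for its top prediction.
import numpy as np
import tensorflow as tf

def saliency_map(model, image):
    """Absolute input gradient of the top-class score, per pixel."""
    x = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)
        preds = model(x)
        top_score = tf.reduce_max(preds, axis=1)
    grads = tape.gradient(top_score, x)[0]
    return tf.reduce_max(tf.abs(grads), axis=-1).numpy()   # collapse color channels

# Usage (hypothetical traffic-sign classifier): sal = saliency_map(classifier, sign_image)
```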
Explainable AI (XAI) is a topic of intense activity in the research community today. However, for AI models deployed in the critical infrastructure of communications networks, explainability alone is not enough to earn the trust of network operations teams comprising human experts with many decades of collective experience. In the present work we discuss some use cases in communications networks and state some of the additional properties, including accountability, that XAI models would have to satisfy before they can be widely deployed. In particular, we advocate a human-in-the-loop approach to train and validate XAI models. Additionally, we discuss use cases of XAI models around improving data preprocessing and data augmentation techniques, and refining data labeling rules for producing consistently labeled network datasets.
Authored by Sayandev Mukherjee, Jason Rupe, Jingjie Zhu
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare systems and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study discusses explainable artificial intelligence (XAI) in deep learning-based medical image analysis. The primary goal of this paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is often categorized based on the explanation and the technical method. In XAI approaches, the explanation method is further sub-categorized into three types: text-based, visual-based, and example-based. The interpretability technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical imaging analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This work proposes a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-agnostic Explanations (LIME), integrated gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley values-based approach, to provide explanations for the predictions of black-box models. This unified method increases the confidence needed for a black-box model's decisions to be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
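One ingredient of such a unified approach, integrated gradients, can be sketched in isolation as follows; the baseline choice, step count, and the hypothetical CXR model in the usage line are illustrative assumptions rather than the paper's exact configuration.

```python
# Integrated gradients for a Keras image classifier: attribute the class score to
# pixels by integrating gradients along a path from a baseline to the input.
import numpy as np
import tensorflow as tf

def integrated_gradients(model, image, class_index, baseline=None, steps=50):
    """Return per-pixel attributions for the given class."""
    image = tf.convert_to_tensor(image, dtype=tf.float32)
    baseline = tf.zeros_like(image) if baseline is None else baseline
    alphas = tf.reshape(tf.linspace(0.0, 1.0, steps + 1), (-1, 1, 1, 1))
    interpolated = baseline + alphas * (image - baseline)     # path from baseline to input
    with tf.GradientTape() as tape:
        tape.watch(interpolated)
        preds = model(interpolated)
        scores = preds[:, class_index]
    grads = tape.gradient(scores, interpolated)
    avg_grads = tf.reduce_mean((grads[:-1] + grads[1:]) / 2.0, axis=0)  # trapezoidal rule
    return ((image - baseline) * avg_grads).numpy()

# Usage (hypothetical model): attributions = integrated_gradients(cxr_model, cxr_image, class_index=1)
```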
Anomaly detection and its explanation are important in many research areas, such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause, or explanation, of why one instance is an anomaly and another is not, due to the unbounded and unsupervised nature of the problem. The answer to this question is possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and working of complex models such as Deep Learning (DL) models. This paper aims to detect and explain network anomalies with XAI, specifically the kernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision, and F-score. The experiment is conducted with the latest CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
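A hedged sketch of the kernelSHAP step follows, attributing an anomaly score to individual flow features; the feature names and the IsolationForest stand-in are hypothetical (the paper trains its own models on CICIDS2017), but the explanation mechanics are the same.

```python
# Sketch: attribute a network-anomaly score to flow features with kernelSHAP.
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(5)
features = ["flow_duration", "fwd_packets", "bwd_packets", "packet_len_mean", "syn_flags"]
X_train = rng.normal(size=(1000, len(features)))
X_test = np.vstack([rng.normal(size=(9, len(features))), [[8, 8, 0, 8, 8]]])  # last row anomalous

detector = IsolationForest(random_state=0).fit(X_train)
score = lambda data: detector.decision_function(data)          # lower = more anomalous

explainer = shap.KernelExplainer(score, shap.sample(X_train, 100, random_state=0))
shap_values = explainer.shap_values(X_test[-1:])                # explain the anomalous flow
print(dict(zip(features, np.round(shap_values[0], 3))))
```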
This research emphasizes its main contribution in the context of applying black-box models in knowledge-based systems. It elaborates on the fundamental limitations of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data in Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The research results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including limited data size due to time constraints on expert data provision and the simplicity of important features, stemming from limitations on expert authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
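To illustrate the kind of local explanation compared in the study, the sketch below runs LIME on a decision-tree credit model; the feature names, labels, and data are invented placeholders, not the Indonesian credit data used by the authors.

```python
# Sketch: local LIME explanation for a single credit application.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(6)
feature_names = ["income", "loan_amount", "tenure_months", "late_payments"]
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 0] - X[:, 3] > 0).astype(int)                         # toy approve/reject rule

model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "approve"], mode="classification"
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(explanation.as_list())   # per-feature contributions for this one applicant
```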
This work examines whether completion of a programming guide is related to academic success in an introductory programming course at Andrés Bello University (Chile). We investigated whether the guide, which consists of 52 exercises that are not mandatory to solve, helps predict first-year students' failure of the first test of this course. Furthermore, the use of the unified SHAP and XAI framework is proposed to analyze and understand how programming guides influence student performance. The study includes a literature review of previous related studies, a descriptive analysis of the data collected, and a discussion of the practical and theoretical implications of the study. The results obtained will be useful for improving student support strategies and decision-making related to the use of guides as an educational tool.
Authored by Gaston Sepulveda, Billy Peralta, Marcos Levano, Pablo Schwarzenberg, Orietta Nicolis
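A brief, hedged sketch of the proposed SHAP analysis: explain a model that predicts failure of the first test from guide-related features. The feature names, the logistic-regression stand-in, and the data are hypothetical, not the course records analyzed in the study.

```python
# Sketch: SHAP analysis of a model predicting failure of the first test from
# guide-related features (all names and data are hypothetical).
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
features = ["guide_exercises_solved", "avg_attempts_per_exercise", "days_active", "prior_math_score"]
X = rng.normal(size=(300, len(features)))
y = (X[:, 0] + 0.5 * X[:, 3] < 0).astype(int)                   # toy "fails first test" label

model = LogisticRegression().fit(X, y)

explainer = shap.LinearExplainer(model, X)                      # exact SHAP for linear models
shap_values = explainer.shap_values(X)
global_importance = np.abs(shap_values).mean(axis=0)
print(dict(zip(features, np.round(global_importance, 3))))
```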
Forest fires are a problem that cannot be overlooked, as they occur every year and cover many areas. GISTDA has recognized this problem and created a model to detect burn scars from satellite imagery. However, it is effective only to some extent, and additional manual correction is often required. An automated system enriched with learning capacity is the preferred tool to support this decision-making process. Despite the improved predictive performance, the underlying model may not be transparent or explainable to operators. Reasoning and annotation of the results are essential for this problem, for which the XAI approach is appropriate. In this work, we use the SHAP framework to describe predictive variables of complex neural models such as DNNs. This can be used to optimize the model, providing an overall accuracy of up to 99.85% for the present work. Moreover, it shows stakeholders the reasoning and the contributing factors involved, such as the various indices derived from wavelength reflectance (e.g., NIR and SWIR).
Authored by Tonkla Maneerat
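The reflectance-based indices mentioned as contributing factors can be computed as in the sketch below; NBR and NDVI are standard formulas, while the band arrays are placeholders rather than actual satellite imagery, and such per-pixel features are what SHAP would then attribute.

```python
# Sketch: compute standard spectral indices from reflectance bands as model inputs.
import numpy as np

def normalized_burn_ratio(nir, swir):
    """NBR = (NIR - SWIR) / (NIR + SWIR); low values indicate burned areas."""
    return (nir - swir) / (nir + swir + 1e-12)

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); vegetation health indicator."""
    return (nir - red) / (nir + red + 1e-12)

rng = np.random.default_rng(8)
nir, swir, red = (rng.random((256, 256)) for _ in range(3))     # placeholder reflectance bands

features = np.stack([normalized_burn_ratio(nir, swir), ndvi(nir, red)], axis=-1)
print(features.shape)   # per-pixel feature vector a burn-scar classifier could consume
```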
Recently, deep learning (DL) models have made remarkable achievements in image processing. To increase the accuracy of DL models, more parameters are used. Therefore, current DL models are black-box models whose internal structure cannot be understood. This is the reason why DL models cannot be applied to fields where stability and reliability are important, despite their high performance. In this paper, we investigate various Explainable Artificial Intelligence (XAI) techniques to solve this problem. We also investigate what approaches exist to make multi-modal deep learning models transparent.
Authored by Haekang Song, Sungho Kim
In the evolving landscape of Internet of Things (IoT) security, the need for continuous adaptation of defenses is critical. Class Incremental Learning (CIL) can provide a viable solution by enabling Machine Learning (ML) and Deep Learning (DL) models to (i) learn and adapt to new attack types (0-day attacks), (ii) retain their ability to detect known threats, and (iii) safeguard computational efficiency (i.e., no full re-training). In IoT security, where novel attacks frequently emerge, CIL offers an effective tool to enhance Intrusion Detection Systems (IDS) and secure network environments. In this study, we explore how CIL approaches empower DL-based IDS in IoT networks, using the publicly available IoT-23 dataset. Our evaluation focuses on two essential aspects of an IDS: (a) attack classification and (b) misuse detection. A thorough comparison against a fully retrained IDS, i.e., one trained from scratch, is carried out. Finally, we place emphasis on interpreting the predictions made by incremental IDS models through eXplainable AI (XAI) tools, offering insights into potential avenues for improvement.
Authored by Francesco Cerasuolo, Giampaolo Bovenzi, Christian Marescalco, Francesco Cirillo, Domenico Ciuonzo, Antonio Pescapè
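One ingredient of class-incremental learning can be sketched as follows: growing the IDS classifier head when new attack classes appear while reusing the weights already learned for known classes. The rehearsal or distillation strategy, training loop, and IoT-23 pipeline are out of scope, and the layer sizes are arbitrary.

```python
# Sketch: expand the classifier head for newly observed attack classes while
# keeping the weights of the known classes unchanged.
import torch
import torch.nn as nn

def expand_classifier_head(old_head: nn.Linear, num_new_classes: int) -> nn.Linear:
    """Return a wider output layer that preserves the old class weights."""
    new_head = nn.Linear(old_head.in_features, old_head.out_features + num_new_classes)
    with torch.no_grad():
        new_head.weight[: old_head.out_features] = old_head.weight
        new_head.bias[: old_head.out_features] = old_head.bias
    return new_head

# Usage: a head for 5 known attack classes grows to accommodate 2 newly observed ones.
old_head = nn.Linear(in_features=64, out_features=5)
new_head = expand_classifier_head(old_head, num_new_classes=2)
print(new_head)   # Linear(in_features=64, out_features=7, ...)
```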
Zero-day attacks, which are defined by their abrupt appearance without any previous detection mechanism, present a substantial obstacle in the field of network security. To address this difficulty, a wide variety of machine learning and deep learning models have been used to identify and mitigate zero-day attacks, and the models have been assessed for both binary and multi-class classification scenarios. The objective of this work is to conduct a thorough comparison and analysis of these models, including the impact of class imbalance and the use of SHAP (SHapley Additive exPlanations) explainability approaches. Class imbalance is a prevalent problem in cybersecurity datasets, characterized by a considerable disparity between the number of attack instances and non-attack instances. By equalizing the dataset, we guarantee equitable representation of both categories, thus preventing bias toward the dominant category during model training and evaluation. Moreover, the application of SHAP-based XAI facilitates a deeper understanding of model predictions, empowering analysts to examine the fundamental aspects that contribute to the detection of zero-day attacks.
Authored by C.K. Sruthi, Aswathy Ravikumar, Harini Sriraman
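A hedged sketch combining the two ingredients highlighted above, rebalancing and SHAP explanation, is given below; SMOTE stands in for the unnamed equalization method, and the features, labels, and gradient-boosting model are illustrative, not the actual zero-day dataset or models.

```python
# Sketch: rebalance a heavily imbalanced attack dataset, then explain the model with SHAP.
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(9)
X = rng.normal(size=(1000, 6))
y = ((X[:, 0] > 1.5) | (rng.random(1000) < 0.02)).astype(int)   # few attack instances

X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)         # equal class counts
print("before:", np.bincount(y), "after:", np.bincount(y_bal))

model = GradientBoostingClassifier(random_state=0).fit(X_bal, y_bal)
shap_values = shap.TreeExplainer(model).shap_values(X_bal[:100])
print("mean |SHAP| per feature:", np.round(np.abs(shap_values).mean(axis=0), 3))
```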
XAI with natural language processing aims to produce human-readable explanations as evidence for AI decision-making, addressing explainability and transparency. However, from an HCI perspective, current approaches focus only on delivering a single explanation, which fails to account for the diversity of human thoughts and experiences in language. This paper addresses this gap by proposing a generative XAI framework, INTERACTION (explain aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder). Our novel framework presents explanations in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation. We conduct intensive experiments with the Transformer architecture on a benchmark dataset, e-SNLI [1]. Our method achieves competitive or better performance against state-of-the-art baseline models on explanation generation (up to 4.7% gain in BLEU) and prediction (up to 4.4% gain in accuracy) in step one; it can also generate multiple diverse explanations in step two.
Authored by Jialin Yu, Alexandra Cristea, Anoushka Harit, Zhongtian Sun, Olanrewaju Aduragba, Lei Shi, Noura Moubayed
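As a small illustration of the evaluation metric reported above, the sketch below scores candidate explanations against a reference with BLEU; the sentences are invented examples, not e-SNLI data or INTERACTION outputs.

```python
# Sketch: score generated explanations against a reference explanation with BLEU.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "the man is outdoors because he is standing on a beach".split()
candidates = [
    "the man is outdoors since he stands on a beach".split(),
    "a person on a beach must be outdoors".split(),
]

smooth = SmoothingFunction().method1   # smoothing avoids zero scores for short sentences
for cand in candidates:
    print(f"BLEU = {sentence_bleu([reference], cand, smoothing_function=smooth):.3f}")
```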