As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study examines Explainable AI (XAI), outlining its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amid the dynamic landscape of cybersecurity, this paper explores the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The promise of XAI extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of UAVs, thereby departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. Our methodology detects and identifies UAVs with an accuracy of 84.59%, achieved through a novel approach that utilizes Radio Frequency (RF) signals within a deep learning framework. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
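A minimal illustrative sketch of the SHAP side of such a pipeline: a small Keras classifier over RF-derived features explained with kernel SHAP. The architecture, feature count, and synthetic data are assumptions for illustration, not the authors' model or dataset.

# Illustrative sketch only: a small Keras classifier over stand-in RF features,
# explained with kernel SHAP. Architecture, feature count, and data are assumed.
import numpy as np
import tensorflow as tf
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32)).astype("float32")   # stand-in RF-derived features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype("int32")  # stand-in UAV / non-UAV label

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X[:500], y[:500], epochs=5, batch_size=32, verbose=0)

# Kernel SHAP treats the network as a black box through its predict function.
predict = lambda d: model.predict(d, verbose=0).ravel()
explainer = shap.KernelExplainer(predict, X[:50])
shap_values = explainer.shap_values(X[500:505], nsamples=200)
print(shap_values.shape)  # per-sample, per-feature attributions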
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks as well as any new systems that will potentially operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks. Because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost together with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods, applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by Navneet Kaur, Lav Gupta
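A hedged sketch of this kind of pipeline: XGBoost on tabular traffic features with a global TreeSHAP view and a local LIME explanation. The synthetic data and feature names below are placeholders standing in for the CICIoT 2023 dataset, not a reproduction of the study.

# Illustrative sketch: XGBoost on IoT-traffic-style features, explained globally
# with TreeSHAP and locally with LIME. Data and feature names are placeholders.
import numpy as np
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(1)
feature_names = [f"flow_feat_{i}" for i in range(20)]     # assumed names
X = rng.normal(size=(1000, 20))
y = (X[:, 2] - X[:, 7] + rng.normal(scale=0.3, size=1000) > 0).astype(int)  # benign vs attack stand-in

model = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X[:800], y[:800])

# Global view: mean |SHAP| per feature from TreeSHAP.
shap_values = shap.TreeExplainer(model).shap_values(X[800:])
global_importance = np.abs(shap_values).mean(axis=0)
print(sorted(zip(feature_names, global_importance), key=lambda t: -t[1])[:5])

# Local view: LIME explanation for a single flagged flow.
lime_exp = LimeTabularExplainer(X[:800], feature_names=feature_names,
                                class_names=["benign", "attack"], mode="classification")
print(lime_exp.explain_instance(X[800], model.predict_proba, num_features=5).as_list())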
Conventional approaches to analyzing industrial control systems (ICS) have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, resulting in a significant reduction of the state space without relying on domain expertise. Experiment results show that our method accurately explains the ICS model and speeds up fuzzing by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
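A hedged sketch of the core idea only: learn a surrogate model of an ICS safety signal from actuator logs, then rank actuators by attribution so that fuzzing can concentrate on the most safety-critical ones. The data, surrogate model choice, and importance threshold are illustrative assumptions, not the authors' implementation.

# Illustrative sketch: rank actuators by SHAP importance on a surrogate safety
# model, then fuzz only that reduced set. Everything below is synthetic/assumed.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n_actuators = 15
X = rng.integers(0, 2, size=(2000, n_actuators)).astype(float)  # actuator states
# Assume only a few actuators actually drive the unsafe condition.
unsafe = ((X[:, 1] == 1) & (X[:, 4] == 0)) | (X[:, 9] == 1)
y = unsafe.astype(int)

surrogate = GradientBoostingClassifier().fit(X, y)
shap_values = shap.TreeExplainer(surrogate).shap_values(X)
importance = np.abs(shap_values).mean(axis=0)

# Keep only actuators above a (hypothetical) importance threshold; a fuzzer
# would then mutate this reduced set instead of the full actuator space.
critical = [i for i, v in enumerate(importance) if v > 0.05]
print("actuators selected for fuzzing:", critical)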
In various fields, such as medical engineering or aerospace engineering, it is difficult to adopt the decisions of a machine learning (ML) or deep learning (DL) model when they do not account for the many human limitations that can lead to errors and incidents. Explainable Artificial Intelligence (XAI) serves to explain the results of artificial intelligence software (ML or DL), still considered black boxes, so that their decisions can be understood and adopted. In this paper, we are interested in the deployment of a deep neural network (DNN) model able to predict the Remaining Useful Life (RUL) of an aircraft turbofan engine. Shapley's method was then applied to explain the DL results. This made it possible to determine the participation rate of each parameter in the RUL and to identify the most decisive parameters for extending or shortening the RUL of the turbofan engine.
Authored by Anouar BOUROKBA, Ridha HAMDI, Mohamed Njah
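A minimal sketch of Shapley-value attributions for a neural RUL regressor, in the spirit of the setup above. The tiny network and synthetic "sensor" data are assumptions standing in for the turbofan data; kernel SHAP is used here as a generic Shapley estimator.

# Illustrative sketch: kernel SHAP attributions for a small Keras RUL regressor.
# Network size, sensor count, and data are assumed, not the paper's setup.
import numpy as np
import tensorflow as tf
import shap

rng = np.random.default_rng(3)
X = rng.normal(size=(800, 14)).astype("float32")             # stand-in sensor readings
rul = (100 - 20 * X[:, 0] + 10 * X[:, 5]).astype("float32")  # stand-in RUL target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(14,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:700], rul[:700], epochs=10, verbose=0)

predict = lambda d: model.predict(d, verbose=0).ravel()
explainer = shap.KernelExplainer(predict, X[:50])
shap_values = explainer.shap_values(X[700:710], nsamples=200)
# Mean absolute attribution per sensor ~ its "participation rate" in the RUL.
print(np.abs(shap_values).mean(axis=0))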
Alzheimer's disease (AD) is a disorder that impacts the functioning of brain cells; it begins gradually and worsens over time. Early detection of the disease is crucial, as it increases the chances of benefiting from treatment, yet diagnosis is often delayed. To overcome this delay, this work proposes an approach using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) on Magnetic Resonance Imaging (MRI) scan reports of Alzheimer's patients to classify the stages of AD, along with an Explainable Artificial Intelligence (XAI) technique, the Gradient Class Activation Map (Grad-CAM), to highlight the regions of the brain where the disease is detected.
Authored by Savarala Chethana, Sreevathsa Charan, Vemula Srihitha, Suja Palaniswamy, Peeta Pati
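A hedged Grad-CAM sketch for a CNN classifier, of the kind described above. The tiny CNN, the random "MRI slice", the layer name, and the four-stage output are illustrative assumptions, not the authors' trained model.

# Illustrative Grad-CAM sketch for a CNN classifier (toy model, assumed shapes).
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(128, 128, 1))
x = tf.keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = tf.keras.layers.MaxPooling2D()(x)
x = tf.keras.layers.Conv2D(32, 3, activation="relu", name="last_conv")(x)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(4, activation="softmax")(x)   # 4 assumed AD stages
model = tf.keras.Model(inputs, outputs)

def grad_cam(model, image, conv_layer="last_conv"):
    # Return a heatmap highlighting regions driving the predicted class.
    grad_model = tf.keras.Model(model.inputs,
                                [model.get_layer(conv_layer).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        class_idx = tf.argmax(preds[0])
        class_score = preds[:, class_idx]
    grads = tape.gradient(class_score, conv_out)          # d score / d feature maps
    weights = tf.reduce_mean(grads, axis=(1, 2))          # global-average-pooled grads
    cam = tf.reduce_sum(conv_out[0] * weights[0], axis=-1)
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)   # normalise to [0, 1]
    return cam.numpy()

heatmap = grad_cam(model, np.random.rand(128, 128, 1).astype("float32"))
print(heatmap.shape)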
With deep neural networks (DNNs) involved in more and more decision-making processes, critical security problems can occur when DNNs give wrong predictions. This can be forced with so-called adversarial attacks. These attacks modify the input in such a way that they are able to fool a neural network into a false classification, while the changes remain imperceptible to a human observer. Even for very specialized AI systems, adversarial attacks are still hardly detectable. The current state-of-the-art adversarial defenses can be classified into two categories, pro-active defense and passive defense, both unsuitable for quick rectification: pro-active defense methods aim to correct the input data so that adversarial samples are classified correctly, at the cost of reduced accuracy on ordinary samples, while passive defense methods aim to filter out and discard the adversarial samples. Neither defense mechanism is suitable for the setup of autonomous driving: when an input has to be classified, we can neither discard the input nor afford the time for computationally expensive corrections. This motivates our method, based on explainable artificial intelligence (XAI), for the correction of adversarial samples. We used two XAI interpretation methods to correct adversarial samples and experimentally compared this approach with baseline methods. Our analysis shows that our proposed method outperforms the state-of-the-art approaches.
Authored by Ching-Yu Kao, Junhao Chen, Karla Markert, Konstantin Böttinger
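A minimal sketch of the kind of attack discussed above (a fast gradient sign perturbation), not of the paper's XAI-based correction method. The toy CNN, random image, and epsilon value are assumptions for illustration only.

# Illustrative FGSM-style perturbation against a toy classifier (assumed setup).
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

image = tf.convert_to_tensor(np.random.rand(1, 32, 32, 3), dtype=tf.float32)
label = tf.constant([3])

with tf.GradientTape() as tape:
    tape.watch(image)
    loss = loss_fn(label, model(image))
grad = tape.gradient(loss, image)

# Perturb the input in the direction that increases the loss; the per-pixel
# change is small but can flip the predicted class.
epsilon = 0.03
adversarial = tf.clip_by_value(image + epsilon * tf.sign(grad), 0.0, 1.0)
print(tf.argmax(model(image), axis=1).numpy(),
      tf.argmax(model(adversarial), axis=1).numpy())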
Explainable AI (XAI) is a topic of intense activity in the research community today. However, for AI models deployed in the critical infrastructure of communications networks, explainability alone is not enough to earn the trust of network operations teams comprising human experts with many decades of collective experience. In the present work we discuss some use cases in communications networks and state some of the additional properties, including accountability, that XAI models would have to satisfy before they can be widely deployed. In particular, we advocate for a human-in-the-loop approach to train and validate XAI models. Additionally, we discuss use cases of XAI models around improving data preprocessing and data augmentation techniques, and refining data labeling rules for producing consistently labeled network datasets.
Authored by Sayandev Mukherjee, Jason Rupe, Jingjie Zhu
In the past two years, technology has undergone significant changes that have had a major impact on healthcare systems. Artificial intelligence (AI) is a key component of this change, and it can assist doctors with various healthcare and intelligent health systems. AI is crucial in diagnosing common diseases, developing new medications, and analyzing patient information from electronic health records. However, one of the main issues with adopting AI in healthcare is the lack of transparency, as doctors must interpret the output of the AI. Explainable AI (XAI) is extremely important for the healthcare sector and comes into play in this regard. With XAI, doctors, patients, and other stakeholders can more easily examine a decision's reliability by knowing its reasoning, thanks to XAI's interpretable explanations. This study discusses explainable artificial intelligence (XAI) in deep learning-based medical image analysis. The primary goal of this paper is to provide a generic six-category XAI architecture for classifying DL-based medical image analysis and interpretability methods. The interpretability method/XAI approach for medical image analysis is typically categorized by explanation method and technical method. The explanation methods are sub-categorized into three types: text-based, visual-based, and example-based. The interpretability technical methods are divided into nine categories. Finally, the paper discusses the advantages, disadvantages, and limitations of each neural network-based interpretability method for medical imaging analysis.
Authored by Priya S, Ram K, Venkatesh S, Narasimhan K, Adalarasu K
This work proposes a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques, namely Local Interpretable Model-Agnostic Explanations (LIME), integrated gradients, Anchors, and SHapley Additive exPlanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases the confidence in a black-box model's decisions so that it can be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of the XAI techniques and the unified method (LISA) in explaining model predictions. To derive predictions, an ImageNet-based Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
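A hedged sketch of one piece of such a pipeline: a LIME image explanation for a CNN chest X-ray classifier. The untrained toy CNN and the random image are placeholders for the transfer-learned model described above; shapes, class names, and sample counts are assumptions.

# Illustrative sketch: LIME superpixel explanation for a (toy) image classifier.
import numpy as np
import tensorflow as tf
from lime import lime_image

model = tf.keras.Sequential([
    tf.keras.Input(shape=(224, 224, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),   # COVID-19 vs normal (assumed)
])

def predict_fn(images):
    # LIME passes batches of perturbed images; return class probabilities.
    return model.predict(np.asarray(images, dtype="float32"), verbose=0)

image = np.random.rand(224, 224, 3)                    # stand-in CXR image
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(image, predict_fn,
                                         top_labels=1, num_samples=200)
# Superpixel mask for the regions most supporting the top predicted class.
_, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                         positive_only=True, num_features=5)
print(mask.shape)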
Anomaly detection and its explanation are important in many research areas such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of why one instance is an anomaly and another is not, due to the unbounded nature of the problem and the lack of supervision. Answering this question becomes possible with the emerging techniques of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and workings of complex models such as Deep Learning (DL) models. This paper aims to detect and explain network anomalies with XAI, specifically the kernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision, and F-score. The experiments are conducted on the CICIDS2017 dataset. Two models are created (Model_1 and OPT_Model) and compared. The overall accuracy and F-score of OPT_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
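A minimal sketch of explaining a network anomaly score with kernel SHAP. An IsolationForest is used here as a simple stand-in detector; the paper's actual models (Model_1 / OPT_Model) and the CICIDS2017 features are not reproduced, and the feature names are assumptions.

# Illustrative sketch: kernel SHAP on an anomaly score (stand-in detector/data).
import numpy as np
import shap
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(4)
feature_names = [f"flow_stat_{i}" for i in range(10)]
X_normal = rng.normal(size=(500, 10))
X_anom = rng.normal(loc=4.0, size=(10, 10))        # stand-in anomalous flows

detector = IsolationForest(random_state=0).fit(X_normal)

# Kernel SHAP explains the anomaly score function itself: which features push
# a flow toward "anomalous" (lower score) versus "normal".
explainer = shap.KernelExplainer(detector.decision_function, X_normal[:50])
shap_values = explainer.shap_values(X_anom, nsamples=200)
print(dict(zip(feature_names, np.abs(shap_values).mean(axis=0).round(3))))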
This research's main contribution lies in the context of applying black-box models in knowledge-based systems. It elaborates on the fundamental limitation of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as with user needs, especially in high-risk areas like credit evaluation. The use of Explainable Artificial Intelligence (XAI) in such systems therefore becomes highly significant; however, its adoption in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data from Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The results indicate that the Decision Tree explanation method outperforms the other tested methods. However, this study also faces several challenges, including a limited data size due to time constraints on expert data provision, and the simplicity of the important features, stemming from limits on the experts' authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
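A hedged sketch of a locally explained credit decision of the kind compared above: a decision tree with a LIME tabular explanation. The synthetic applicant features and class names are placeholders; the Indonesian credit data used in the study is not reproduced.

# Illustrative sketch: decision tree + LIME local explanation on synthetic
# credit-style features (all names and data are assumed).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(5)
feature_names = ["income", "loan_amount", "tenure_months", "age", "num_dependents"]
X = rng.normal(size=(500, 5))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.2 * X[:, 3] > 0).astype(int)   # approve / reject stand-in

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["reject", "approve"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], tree.predict_proba, num_features=3)
print(explanation.as_list())    # which features pushed this applicant's decision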
This work examines whether completion of a programming guide is related to academic success in an introductory programming course at Andrés Bello University (Chile). We investigate whether the guide, which consists of 52 optional exercises, helps predict first-year students' failure of the first test of this course. Furthermore, the use of the unified SHAP XAI framework is proposed to analyze and understand how programming guides influence student performance. The study includes a literature review of previous related work, a descriptive analysis of the data collected, and a discussion of the practical and theoretical implications of the study. The results obtained will be useful for improving student support strategies and decision making related to the use of guides as an educational tool.
Authored by Gaston Sepulveda, Billy Peralta, Marcos Levano, Pablo Schwarzenberg, Orietta Nicolis
Forest fire is a problem that cannot be overlooked, as it occurs every year and covers many areas. GISTDA has recognized this problem and created a model to detect burn scars from satellite imagery. However, it is effective only to some extent, and additional manual correction is often required. An automated system enriched with learning capacity is the preferred tool to support this decision-making process. Despite the improved predictive performance, the underlying model may not be transparent or explainable to operators. Reasoning and annotation of the results are essential for this problem, and the XAI approach is appropriate for it. In this work, we use the SHAP framework to describe the predictive variables of complex neural models such as DNNs. This can be used to optimize the model, providing an overall accuracy of up to 99.85% in the present work. Moreover, it shows stakeholders the reasoning and the contributing factors involved, such as the various indices derived from wavelength reflectance (e.g., NIR and SWIR).
Authored by Tonkla Maneerat
Despite intensive research, the survival rate for pancreatic cancer, a fatal and incurable illness, has not dramatically improved in recent years. Deep learning systems have shown superhuman ability in a considerable number of activities, and recent developments in Artificial Intelligence (AI) have led to its widespread use in predictive analytics of pancreatic cancer. However, the improvement in performance is the result of increased model complexity, which turns these systems into “black box” methods and creates uncertainty about how they function and, ultimately, how they make judgements. This ambiguity has made it difficult for deep learning algorithms to be accepted in important fields like healthcare, where their benefit may be enormous. As a result, there has been a significant resurgence in recent years of scholarly interest in Explainable Artificial Intelligence (XAI), which is concerned with the creation of novel techniques for interpreting and explaining deep learning models. In this study, we utilize Computed Tomography (CT) images and clinical data to predict pancreatic cancer and analyze survival rates, respectively. Since pancreatic tumors are small and hard to identify, region marking through XAI will assist medical professionals in identifying the appropriate region and determining the presence of cancer. Various features are taken into consideration for survival prediction, and the most prominent ones can be identified with the help of XAI, which in turn aids medical professionals in making better decisions. This study mainly focuses on the XAI strategy for deep and machine learning models rather than on the prediction and survival methodology.
Authored by Srinidhi B, M Bhargavi
Explainable AI (XAI) techniques are used to understand the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework, XAI4PublicPolicy, that provides customizable and reusable XAI dashboards ready to be used by both data scientists and general users, with no coding required. Models and datasets are selected by dragging and dropping from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project for explaining the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
Recently, deep learning (DL) models have made remarkable achievements in image processing. To increase their accuracy, ever more parameters are used; as a result, current DL models are black-box models whose internal structure cannot be understood. This is the reason why DL models cannot be applied in fields where stability and reliability are important, despite their high performance. In this paper, we investigate various Explainable Artificial Intelligence (XAI) techniques to address this problem. We also investigate what approaches exist to make multi-modal deep learning models transparent.
Authored by Haekang Song, Sungho Kim
In this paper, we investigate the use of Explainable Artificial Intelligence (XAI) methods for the interpretation of two Convolutional Neural Network (CNN) classifiers in the field of remote sensing (RS). Specifically, the SegNet and Unet architectures for RS building information extraction and segmentation are evaluated using a comprehensive array of primary- and layer-attribution XAI methods. The attribution methods are quantitatively evaluated using the sensitivity metric. Based on the visualization of the different XAI methods, Deconvolution and Grad-CAM produce reliable results in many of the study areas. Moreover, these methods are able to accurately interpret both Unet's and SegNet's decisions and to analyze and reveal the internal mechanisms of both models (confirmed by the low sensitivity scores). Overall, no single method stood out as the best one.
Authored by Loghman Moradi, Bahareh Kalantar, Erfan Zaryabi, Alfian Halin, Naonori Ueda
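A hedged sketch of primary- and layer-attribution methods of the kind evaluated above, here applied with Captum to a small PyTorch CNN classifier. The segmentation setting (SegNet/Unet) is simplified to classification, and the toy network, input, and target class are assumptions.

# Illustrative sketch: Deconvolution (primary attribution) and LayerGradCam
# (layer attribution) from Captum on a toy CNN classifier.
import torch
import torch.nn as nn
from captum.attr import Deconvolution, LayerGradCam

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 8, 3, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(8, 16, 3, padding=1)
        self.relu2 = nn.ReLU()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(16, 2)          # building / non-building (assumed)

    def forward(self, x):
        x = self.relu1(self.conv1(x))
        x = self.relu2(self.conv2(x))
        return self.fc(self.pool(x).flatten(1))

model = TinyCNN().eval()
image = torch.rand(1, 3, 64, 64)            # stand-in RS image patch

# Primary attribution: pixel-level relevance via Deconvolution.
deconv_attr = Deconvolution(model).attribute(image, target=1)

# Layer attribution: Grad-CAM on the last convolutional layer.
gradcam_attr = LayerGradCam(model, model.conv2).attribute(image, target=1)

print(deconv_attr.shape, gradcam_attr.shape)  # input-space and layer-space maps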
Machine learning models have become increasingly complex, making it difficult to understand how they make predictions. Explainable AI (XAI) techniques have been developed to enhance model interpretability, thereby improving model transparency, trust, and accountability. In this paper, we present a comparative analysis of several XAI techniques to enhance the interpretability of machine learning models. We evaluate the performance of these techniques on a dataset commonly used for regression or classification tasks. The XAI techniques include SHAP, LIME, PDP, and GAM. We compare the effectiveness of these techniques in terms of their ability to explain model predictions and identify the most important features in the dataset. Our results indicate that XAI techniques significantly improve model interpretability, with SHAP and LIME being the most effective in identifying important features in the dataset. Our study provides insights into the strengths and limitations of different XAI techniques and their implications for the development and deployment of machine learning models. We conclude that XAI techniques have the potential to significantly enhance model interpretability and promote trust and accountability in the use of machine learning models. The paper emphasizes the importance of interpretability in medical applications of machine learning and highlights the significance of XAI techniques in developing accurate and reliable models for medical applications.
Authored by Swathi Y, Manoj Challa
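A minimal sketch of one of the techniques compared above, partial dependence (PDP), computed with scikit-learn. The synthetic dataset and gradient boosting model are placeholders, not the dataset or models evaluated in the paper.

# Illustrative sketch: partial dependence of a model's prediction on one feature.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import partial_dependence

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 6))
y = (np.sin(X[:, 0]) + 0.5 * X[:, 2] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Average predicted response as feature 0 is swept over a grid, with the other
# features held at their observed values.
pd_result = partial_dependence(model, X, features=[0], grid_resolution=20)
print(pd_result["average"].shape)   # averaged predictions over the grid for feature 0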
This tertiary systematic literature review examines 29 systematic literature reviews and surveys in Explainable Artificial Intelligence (XAI) to uncover trends, limitations, and future directions. The study explores current explanation techniques, providing insights for researchers, practitioners, and policymakers interested in enhancing AI transparency. Notably, the increasing number of systematic literature reviews (SLRs) in XAI publications indicates a growing interest in the field. The review offers an annotated catalogue for human-computer interaction-focused XAI stakeholders, emphasising practitioner guidelines. Automated searches across ACM, IEEE, and Science Direct databases identified SLRs published between 2019 and May 2023, covering diverse application domains and topics. While adhering to methodological guidelines, the SLRs often lack primary study quality assessments. The review highlights ongoing challenges and future opportunities related to XAI evaluation standardisation, its impact on users, and interdisciplinary research on ethics and GDPR aspects. The 29 SLRs, analysing over two thousand papers, include five directly relevant to practitioners. Additionally, references from the SLRs were analysed to compile a list of frequently cited papers, serving as recommended foundational readings for newcomers to the field.
Authored by Saša Brdnik, Boštjan Šumak
Nowadays, anomaly-based network intrusion detection systems (NIDS) still have limited real-world application; this is particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose using explainable artificial intelligence (XAI) methods to tackle these issues. In our experimentation, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which could lead to generalization and overfitting issues in classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations match expectations based on domain knowledge better. Thus, we conclude that the imbalances in the dataset bias the classification and consequently also the results of XAI methods like SHAP. However, the XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, this debugging results in higher trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
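A hedged sketch of the analysis pattern described above: train a random forest, derive global SHAP importances, and compare coarse attack-class labels against finer attack-type labels. The data below is synthetic and only mimics the shape of NSL-KDD; it is not the study's experiment.

# Illustrative sketch: compare global SHAP importances under coarse vs fine labels.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 12))
attack_type = rng.integers(0, 5, size=1000)        # stand-in fine-grained attack types
attack_class = (attack_type > 0).astype(int)       # coarse normal-vs-attack labels

def global_importance(labels):
    rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
    sv = shap.TreeExplainer(rf).shap_values(X[:200])
    if isinstance(sv, list):                       # older SHAP: one array per class
        sv = np.stack(sv, axis=-1)
    if sv.ndim == 3:                               # (samples, features, classes)
        return np.abs(sv).mean(axis=(0, 2))
    return np.abs(sv).mean(axis=0)                 # (samples, features)

# Comparing the two rankings surfaces how label granularity shifts the
# explanation, which is the kind of bias the paper debugs.
print(global_importance(attack_class).round(3))
print(global_importance(attack_type).round(3))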