Explainable AI (XAI) techniques are used to understand the internals of AI algorithms and how they produce a particular result. Several software packages implementing XAI techniques are available; however, their use requires deep knowledge of the AI algorithms, and their output is not intuitive for non-experts. In this paper we present a framework (XAI4PublicPolicy) that provides customizable and reusable XAI dashboards ready to be used, with no code, by both data scientists and general users. Models and data sets are selected by dragging and dropping them from repositories, while dashboards are generated by selecting the type of charts. The framework can work with structured data and images in different formats. This XAI framework was developed and is being used in the context of the AI4PublicPolicy European project to explain the decisions made by machine learning models applied to the implementation of public policies.
Authored by Marta Martínez, Ainhoa Azqueta-Alzúaz
Forest fire is a problem that cannot be overlooked, as it occurs every year and covers many areas. GISTDA has recognized this problem and created a model to detect burn scars from satellite imagery. However, it is effective only to some extent, with additional manual correction often being required. An automated system enriched with learning capacity is the preferred tool to support this decision-making process. Despite the improved predictive performance, the underlying model may not be transparent or explainable to operators. Reasoning and annotation of the results are essential for this problem, for which the XAI approach is appropriate. In this work, we use the SHAP framework to describe the predictive variables of complex neural models such as DNNs. This can be used to optimize the model and provides an overall accuracy of up to 99.85 \% for the present work. Moreover, it shows stakeholders the reasoning and the contributing factors involved, such as the various indices that use the reflectance of particular wavelengths (e.g. NIR and SWIR).
Authored by Tonkla Maneerat
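To make the workflow above concrete, here is a minimal sketch of using KernelSHAP to attribute a neural classifier's output to spectral-index features. The MLP, the synthetic data, and the feature names (NIR, SWIR, etc.) are illustrative assumptions, not GISTDA's actual burn-scar model.

```python
# Minimal sketch: explaining a neural burn-scar classifier with KernelSHAP.
# Feature names and data are illustrative placeholders, not GISTDA's pipeline.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["NIR", "SWIR", "NBR", "NDVI"]  # hypothetical spectral features
X = rng.normal(size=(500, len(feature_names)))
y = (X[:, 2] + 0.5 * X[:, 0] - X[:, 1] > 0).astype(int)  # synthetic burn/no-burn label

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
model.fit(X, y)

# KernelSHAP treats the model as a black box; a small background set keeps it tractable.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)
shap_values = explainer.shap_values(X[:20])

# Mean absolute SHAP value per feature = global importance over the explained samples.
for name, imp in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: {imp:.3f}")
```

Averaging absolute SHAP values over samples yields the kind of per-index importance ranking that can be presented to stakeholders.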
Recently, deep learning (DL) models have made remarkable achievements in image processing. To increase the accuracy of a DL model, more parameters are used; as a result, current DL models are black boxes whose internal structure cannot be understood. This is the reason why DL models, despite their high performance, cannot be applied to fields where stability and reliability are important. In this paper, we investigate various Explainable Artificial Intelligence (XAI) techniques to address this problem. We also investigate which approaches exist to make multi-modal deep learning models transparent.
Authored by Haekang Song, Sungho Kim
In this paper, we investigate the use of Explainable Artificial Intelligence (XAI) methods for the interpretation of two Convolutional Neural Network (CNN) classifiers in the field of remote sensing (RS). Specifically, the SegNet and Unet architectures for RS building information extraction and segmentation are evaluated using a comprehensive array of primary- and layer-attribution XAI methods. The attribution methods are quantitatively evaluated using the sensitivity metric. Based on the visualization of the different XAI methods, Deconvolution and GradCAM produce reliable results in many of the study areas. Moreover, these methods are able to accurately interpret both Unet's and SegNet's decisions and manage to analyze and reveal the internal mechanisms of both models (confirmed by the low sensitivity scores). Overall, no single method stood out as the best one.
Authored by Loghman Moradi, Bahareh Kalantar, Erfan Zaryabi, Alfian Halin, Naonori Ueda
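As a rough illustration of the kind of attribution methods evaluated above, the sketch below applies Captum's Deconvolution and Grad-CAM to a toy classification CNN. The network, the random input patch, and the target class are placeholders; the paper's actual SegNet/Unet segmentation setting is not reproduced.

```python
# Illustrative sketch: Grad-CAM and Deconvolution attributions with Captum on a
# small classification CNN (the paper evaluates SegNet/Unet segmentation models).
import torch
import torch.nn as nn
from captum.attr import LayerGradCam, Deconvolution

class TinyCNN(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(8),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        f = self.features(x)
        return self.classifier(f.flatten(1))

model = TinyCNN().eval()
x = torch.randn(1, 3, 64, 64, requires_grad=True)  # placeholder RS image patch
target = 1                                          # e.g. the "building" class

# Grad-CAM on a convolutional layer produces a coarse spatial relevance map.
gradcam = LayerGradCam(model, model.features[2])
cam = gradcam.attribute(x, target=target)

# Deconvolution propagates the target score back through the network to the input.
deconv = Deconvolution(model)
deconv_attr = deconv.attribute(x, target=target)

print(cam.shape, deconv_attr.shape)  # (1, 1, H', W') and (1, 3, 64, 64)
```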
Machine learning models have become increasingly complex, making it difficult to understand how they make predictions. Explainable AI (XAI) techniques have been developed to enhance model interpretability, thereby improving model transparency, trust, and accountability. In this paper, we present a comparative analysis of several XAI techniques to enhance the interpretability of machine learning models. We evaluate the performance of these techniques on a dataset commonly used for regression or classification tasks. The XAI techniques include SHAP, LIME, PDP, and GAM. We compare the effectiveness of these techniques in terms of their ability to explain model predictions and identify the most important features in the dataset. Our results indicate that XAI techniques significantly improve model interpretability, with SHAP and LIME being the most effective in identifying important features in the dataset. Our study provides insights into the strengths and limitations of different XAI techniques and their implications for the development and deployment of machine learning models. We conclude that XAI techniques have the potential to significantly enhance model interpretability and promote trust and accountability in the use of machine learning models. The paper emphasizes the importance of interpretability in medical applications of machine learning and highlights the significance of XAI techniques in developing accurate and reliable models for medical applications.
Authored by Swathi Y, Manoj Challa
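The following sketch illustrates two of the compared techniques (SHAP and PDP) on a synthetic regression task; the gradient-boosting model and data are assumptions for illustration, and LIME and GAM are omitted for brevity.

```python
# Minimal sketch comparing two of the techniques discussed above (SHAP and PDP)
# on a synthetic regression task; LIME and GAM are omitted for brevity.
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = make_regression(n_samples=400, n_features=6, n_informative=3, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# SHAP: exact, model-specific attributions for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", abs(shap_values).mean(axis=0).round(2))

# PDP: average model response as selected features are varied over their range.
PartialDependenceDisplay.from_estimator(model, X, features=[0, 1])
```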
This research emphasizes its main contribution in the context of applying Black Box Models in Knowledge-Based Systems. It elaborates on the fundamental limitations of these models in providing internal explanations, leading to non-compliance with prevailing regulations such as GDPR and PDP, as well as user needs, especially in high-risk areas like credit evaluation. Therefore, the use of Explainable Artificial Intelligence (XAI) in such systems becomes highly significant. However, its implementation in the credit granting process in Indonesia is still limited due to evolving regulations. This study aims to demonstrate the development of a knowledge-based credit granting system in Indonesia with local explanations. The development is carried out by utilizing credit data in Indonesia, identifying suitable machine learning models, and implementing user-friendly explanation algorithms. To achieve this goal, the final system's solution is compared using Decision Tree and XGBoost models with LIME, SHAP, and Anchor explanation algorithms. Evaluation criteria include accuracy and feedback from domain experts. The research results indicate that the Decision Tree explanation method outperforms other tested methods. However, this study also faces several challenges, including limited data size due to time constraints on expert data provision and the simplicity of important features, stemming from limitations on expert authorization to share privacy-related data.
Authored by Rolland Supardi, Windy Gambetta
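A minimal sketch of the kind of local explanation discussed above pairs a decision-tree credit model with LIME. The feature names and synthetic data are hypothetical stand-ins for the Indonesian credit data, and the Anchor and SHAP variants are not shown.

```python
# Illustrative sketch: a local LIME explanation for a decision-tree credit model.
# Feature names and data are hypothetical stand-ins for the Indonesian credit data.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
feature_names = ["income", "loan_amount", "tenure_months", "age"]  # assumed features
X = rng.normal(size=(600, len(feature_names)))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.3 * X[:, 3] > 0).astype(int)      # 1 = approve

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["reject", "approve"],
    mode="classification",
)
# Explain one applicant: which features pushed the model toward approve/reject.
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```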
This tertiary systematic literature review examines 29 systematic literature reviews and surveys in Explainable Artificial Intelligence (XAI) to uncover trends, limitations, and future directions. The study explores current explanation techniques, providing insights for researchers, practitioners, and policymakers interested in enhancing AI transparency. Notably, the increasing number of systematic literature reviews (SLRs) in XAI publications indicates a growing interest in the field. The review offers an annotated catalogue for human-computer interaction-focused XAI stakeholders, emphasising practitioner guidelines. Automated searches across ACM, IEEE, and Science Direct databases identified SLRs published between 2019 and May 2023, covering diverse application domains and topics. While adhering to methodological guidelines, the SLRs often lack primary study quality assessments. The review highlights ongoing challenges and future opportunities related to XAI evaluation standardisation, its impact on users, and interdisciplinary research on ethics and GDPR aspects. The 29 SLRs, analysing over two thousand papers, include five directly relevant to practitioners. Additionally, references from the SLRs were analysed to compile a list of frequently cited papers, serving as recommended foundational readings for newcomers to the field.
Authored by Saša Brdnik, Boštjan Šumak
This work proposed a unified approach to increase the explainability of the predictions made by Convolutional Neural Networks (CNNs) on medical images using currently available Explainable Artificial Intelligence (XAI) techniques. The method incorporates multiple techniques such as Local Interpretable Model-Agnostic Explanations (LIME), Integrated Gradients, Anchors, and Shapley Additive Explanations (SHAP), a Shapley-values-based approach, to provide explanations for the predictions of black-box models. This unified method increases the confidence in the black-box model's decision to be employed in crucial applications under the supervision of human specialists. In this work, a Chest X-ray (CXR) classification model for identifying Covid-19 patients is trained using transfer learning to illustrate the applicability of XAI techniques and the unified method (LISA) to explain model predictions. To derive predictions, an ImageNet-pretrained Inception V2 model is utilized as the transfer learning model.
Authored by Sudil Abeyagunasekera, Yuvin Perera, Kenneth Chamara, Udari Kaushalya, Prasanna Sumathipala, Oshada Senaweera
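The sketch below shows the two ingredients in simplified form: an ImageNet-pretrained Inception backbone with a replaced two-class head (the fine-tuning loop is omitted), and an Integrated Gradients attribution for one prediction. torchvision's inception_v3 and the random input are stand-ins for the paper's Inception V2 backbone and real CXR data.

```python
# Sketch: pretrained Inception backbone adapted for a two-class CXR task plus an
# Integrated Gradients attribution. torchvision's inception_v3 stands in for the
# paper's Inception V2 backbone; the input tensor is a random placeholder.
import torch
import torch.nn as nn
from torchvision import models
from captum.attr import IntegratedGradients

model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # {normal, covid} head (fine-tuning omitted)
model.eval()                                   # eval mode returns a single logit tensor

x = torch.randn(1, 3, 299, 299)                # placeholder for a preprocessed CXR image
pred = model(x).argmax(dim=1).item()

ig = IntegratedGradients(model)
# Pixel-level attributions for the predicted class, relative to an all-black baseline.
attr = ig.attribute(x, baselines=torch.zeros_like(x), target=pred, n_steps=32)
print("predicted class:", pred, "attribution map:", attr.shape)
```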
Anomaly detection and its explanation are important in many research areas such as intrusion detection, fraud detection, and unknown attack detection in network traffic and logs. It is challenging to identify the cause or explanation of “why is one instance an anomaly and the other not?”, due to the unbounded and unsupervised nature of the problem. Answering this question becomes possible with the emerging technique of explainable artificial intelligence (XAI). XAI provides tools and techniques to interpret and explain the output and working of complex models such as Deep Learning (DL) models. This paper aims to detect and explain network anomalies with XAI, specifically the KernelSHAP method. The same approach is used to improve the network anomaly detection model in terms of accuracy, recall, precision and F-score. The experiment is conducted with the latest CICIDS2017 dataset. Two models are created (Model\_1 and OPT\_Model) and compared. The overall accuracy and F-score of OPT\_Model (when trained in an unsupervised way) are 0.90 and 0.76, respectively.
Authored by Khushnaseeb Roshan, Aasim Zafar
As a common network attack method, the false data injection attack (FDIA) can often cause serious consequences for the power system due to its strong concealment. Attackers utilize grid topology information to carefully construct covert attack vectors, thus bypassing the traditional bad data detection (BDD) mechanism to maliciously tamper with measurements, which is more destructive and threatening to the power system. To address the difficulty of detecting such attacks effectively, a detection method based on an adaptive interpolation-adaptive inhibition extended Kalman filter (AI-AIEKF) is proposed in this paper. Through an adaptive interpolation strategy and an exponential weight function, the AI-AIEKF algorithm improves the estimation accuracy and enhances the robustness of the EKF algorithm. Combined with weighted least squares (WLS), the two estimators respond to the system at different speeds, and a consistency test is introduced to detect FDIAs. Extensive simulations on the IEEE 14-bus system demonstrate that FDIAs can be accurately detected, thus validating the effectiveness of the method.
Authored by Guoqing Zhang, Wengen Gao
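For context on the consistency-test idea, the sketch below implements the classical WLS residual (chi-square) check on a toy DC state-estimation model. It is the baseline BDD mechanism that stealthy FDIAs are designed to bypass, not the paper's AI-AIEKF detector; the measurement matrix and noise levels are arbitrary assumptions.

```python
# Simplified sketch of the residual-based consistency check that FDIA research
# builds on: weighted least squares (WLS) state estimation on a toy DC model
# followed by a chi-square test on the measurement residuals.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(0)
n_state, n_meas = 3, 6
H = rng.normal(size=(n_meas, n_state))          # measurement Jacobian (DC model)
x_true = rng.normal(size=n_state)               # true (unknown) state
sigma = 0.01
R_inv = np.eye(n_meas) / sigma**2               # inverse measurement covariance

z = H @ x_true + rng.normal(scale=sigma, size=n_meas)
z[2] += 0.5                                     # naive (non-stealthy) injected bias

# WLS estimate: x_hat = (H^T R^-1 H)^-1 H^T R^-1 z
G = H.T @ R_inv @ H
x_hat = np.linalg.solve(G, H.T @ R_inv @ z)

# Chi-square consistency test on the weighted residual norm.
r = z - H @ x_hat
J = r @ R_inv @ r
threshold = chi2.ppf(0.99, df=n_meas - n_state)
print("J =", round(J, 2), "threshold =", round(threshold, 2), "alarm:", J > threshold)
```

A stealthy attack of the form a = Hc leaves this residual unchanged and would pass the test, which is what motivates the dual-estimator consistency check proposed in the paper.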
As deep-learning based image and video manipulation technology advances, the future of truth and information looks bleak. In particular, Deepfakes, wherein a person’s face can be transferred onto the face of someone else, pose a serious threat for potential spread of convincing misinformation that is drastic and ubiquitous enough to have catastrophic real-world consequences. To prevent this, an effective detection tool for manipulated media is needed. However, the detector cannot just be good, it has to evolve with the technology to keep pace with or even outpace the enemy. At the same time, it must defend against different attack types to which deep learning systems are vulnerable. To that end, in this paper, we review various methods of both attack and defense on AI systems, as well as modes of evolution for such a system. Then, we put forward a potential system that combines the latest technologies in multiple areas as well as several novel ideas to create a detection algorithm that is robust against many attacks and can learn over time with unprecedented effectiveness and efficiency.
Authored by Ian Miller, Dan Lin
Big data and IoT technologies are developing rapidly. Accordingly, consideration of network security is also emphasized, and efficient intrusion detection technology is required for detecting increasingly sophisticated network attacks. In this study, we propose an efficient network anomaly detection method based on ensemble and unsupervised learning. The proposed model is built by training an autoencoder, a representative unsupervised deep learning model, using only normal network traffic data. The anomaly score of the detection target data is derived by ensembling the reconstruction loss and the Mahalanobis distances for each layer output of the trained autoencoder. By applying a threshold to this score, network anomaly traffic can be efficiently detected. To evaluate the proposed model, we applied our method to the UNSW-NB15 dataset. The results show that the overall performance of the proposed method is superior to those of the model using only the reconstruction loss of the autoencoder and the model applying the Mahalanobis distance to the raw data.
Authored by Donghun Yang, Myunggwon Hwang
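A condensed sketch of the scoring idea follows: an autoencoder trained only on normal traffic, with an anomaly score that ensembles reconstruction error and a Mahalanobis distance. For brevity the Mahalanobis term below uses only the bottleneck activations rather than every layer's output, and the random features are stand-ins for UNSW-NB15 data.

```python
# Sketch: benign-only autoencoder with an anomaly score combining reconstruction
# error and a Mahalanobis distance over the bottleneck (the paper uses all layers).
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 20
normal = torch.randn(2000, d)                       # placeholder "normal" traffic features

class AE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(d, 8), nn.ReLU(), nn.Linear(8, 4))
        self.dec = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, d))
    def forward(self, x):
        z = self.enc(x)
        return self.dec(z), z

ae = AE()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
for _ in range(200):                                # train on normal data only
    recon, _ = ae(normal)
    loss = ((recon - normal) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():                               # benign statistics for Mahalanobis term
    _, z_normal = ae(normal)
    mu = z_normal.mean(0)
    cov_inv = torch.linalg.inv(torch.cov(z_normal.T) + 1e-6 * torch.eye(4))

def anomaly_score(x):
    with torch.no_grad():
        recon, z = ae(x)
        rec_err = ((recon - x) ** 2).mean(dim=1)
        diff = z - mu
        maha = torch.sqrt(((diff @ cov_inv) * diff).sum(dim=1))
    return rec_err + maha                           # simple ensemble; a threshold flags anomalies

print(anomaly_score(torch.randn(5, d) * 3))         # larger deviations -> larger scores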
This paper suggests using a long short-term memory (LSTM) model to detect DDoS attacks, which usually involve patterns of malicious traffic unfolding over time. The idea behind the model is that malicious IoT devices often leave traces in network traffic data that can be used to identify them; the LSTM model must learn these traces before it can spot attacks in real time. An IoT attack dataset was used to test how well the suggested method works, and the test showed that it detects attacks effectively. The suggested method can likely be used to find attacks on the Internet of Things: it is simple to set up and can stop many types of break-ins, although it will only work if the training data are correct. Long short-term memory models are a type of AI that can find trends in data collected over time; the LSTM model learns the difference between patterns in network traffic data that are normal and patterns that indicate an attack. In evaluation, the proposed method achieved a precision of 99.4\%.
Authored by Animesh Srivastava, Vikash Sawan, Kumari Jugnu, Shiv Dhondiyal
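A minimal sketch of the detection idea follows: an LSTM that classifies a window of per-interval traffic features as benign or DDoS. The dimensions, synthetic windows, and the inflated packet-rate-like feature are assumptions, not the IoT attack dataset used in the paper.

```python
# Minimal sketch: an LSTM classifying windows of traffic features as benign or DDoS.
# Data and dimensions are synthetic placeholders.
import torch
import torch.nn as nn

torch.manual_seed(0)
seq_len, n_feat = 20, 8                 # 20 time steps, 8 traffic features per step

class LSTMDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, 32, batch_first=True)
        self.head = nn.Linear(32, 2)    # benign vs. DDoS
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])    # classify from the last hidden state

# Synthetic windows: "attack" windows have an inflated packet-rate-like feature.
benign = torch.randn(200, seq_len, n_feat)
attack = torch.randn(200, seq_len, n_feat); attack[:, :, 0] += 3.0
X = torch.cat([benign, attack]); y = torch.cat([torch.zeros(200), torch.ones(200)]).long()

model = LSTMDetector()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward(); opt.step()

with torch.no_grad():
    acc = (model(X).argmax(1) == y).float().mean()
print(f"training accuracy: {acc:.2f}")
```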
The traditional port smart gate ground scale line pressure detection system employs a centralized data training method that carries the risk of privacy leakage. Federated Learning offers an effective solution to this issue by enabling each port gate to locally train data, sharing only model parameters, without the need to transmit raw data to a central server. This is particularly crucial for ground scale line pressure detection systems dealing with sensitive data. However, researchers have identified potential risks of backdoor attacks when applying Federated Learning. Currently, most existing backdoor attacks are directed towards image classification and centralized object detection. However, backdoor attacks for Federated Learning-based object detection tasks have not been explored. In this paper, we reveal that these threats may also manifest in this task. To analyze the impact of backdoor attacks on this task, we designed three backdoor attack triggers and proposed three trigger attack operations. To assess backdoor attacks on this task, we developed corresponding metrics and conducted experiments on local datasets from three port gates. The experimental results indicate that Federated Learning-based object detection tasks are susceptible to backdoor threats.
Authored by Chunming Tang, Jinghong Liu, Xinguang Dai, Yan Li
The Internet of Things (IoT) heralds an innovative generation in communication by enabling everyday devices to send, receive, and share data easily. IoT applications, which prioritise task automation, aim to give inanimate objects autonomy; they promise increased comfort, productivity, and automation. However, robust security, privacy, authentication, and recovery methods are required to realize this goal. In order to build end-to-end secure IoT environments, this article meticulously reviews the security issues and risks inherent to IoT applications. It emphasises the critical necessity for architectural changes. The paper starts by conducting an examination of security concerns before exploring emerging and advanced technologies aimed at nurturing a sense of trust in Internet of Things (IoT) applications. The primary focus of the discussion revolves around how these technologies aid in overcoming security challenges and fostering an ecosystem for IoT.
Authored by Pranav A, Sathya S, HariHaran B
Nowadays, anomaly-based network intrusion detection systems (NIDS) still have limited real-world applications; this is particularly due to false alarms, a lack of datasets, and a lack of confidence. In this paper, we propose to use explainable artificial intelligence (XAI) methods for tackling these issues. In our experimentation, we train a random forest (RF) model on the NSL-KDD dataset and use SHAP to generate global explanations. We find that these explanations deviate substantially from domain expertise. To shed light on the potential causes, we analyze the structural composition of the attack classes. There, we observe severe imbalances in the number of records per attack type subsumed in the attack classes of the NSL-KDD dataset, which could lead to generalization and overfitting issues in classification. Hence, we train a new RF classifier and SHAP explainer directly on the attack types. Classification performance is considerably improved, and the new explanations better match expectations based on domain knowledge. Thus, we conclude that the imbalances in the dataset bias the classification and consequently also the results of XAI methods like SHAP. However, the XAI methods can also be employed to find and debug issues and biases in the data and the applied model. Furthermore, this debugging results in higher trustworthiness of anomaly-based NIDS.
Authored by Eric Lanfer, Sophia Sylvester, Nils Aschenbruck, Martin Atzmueller
With the increasing complexity of network attacks, traditional firewall technologies are facing challenges in effectively detecting and preventing these attacks. As a result, AI technology has emerged as a promising approach to enhance the capabilities of firewalls in detecting and mitigating network attacks. This paper aims to investigate the application of AI firewalls in network attack detection and proposes a testing method to evaluate their performance. An experiment was conducted to verify the feasibility of the proposed testing method. The results demonstrate that AI firewalls exhibit higher accuracy in detecting network attacks, thereby highlighting their effectiveness. Furthermore, the testing method can be utilized to compare different AI firewalls.
Authored by Zhijia Wang, Qi Deng
DDoS is considered the most dangerous attack on and threat to software defined networks (SDN). Existing mitigation technologies include the flow capacity method, the entropy method and the flow analysis method; they rely on traffic sampling to achieve true real-time inline DDoS detection accuracy, but the cost of traffic-sampling-based methods is very high. Early detection of DDoS attacks in the controller is very important, which requires highly adaptive and accurate methods. Therefore, this paper proposes an effective and accurate real-time DDoS attack detection technique based on the Hurst exponent. The main detection methods for DDoS attacks and the traffic characteristics when DDoS attacks occur are briefly analyzed. The Hurst exponent estimation method and its application to real-time detection (RTD) of DDoS attacks are discussed. Finally, a simulation experiment and test analysis are presented to verify the effectiveness and feasibility of RTD of DDoS attacks based on the Hurst exponent.
Authored by Ying Ling, Chunyan Yang, Xin Li, Ming Xie, Shaofeng Ming, Jieke Lu, Fuchuan Tang
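The sketch below shows a basic rescaled-range (R/S) estimate of the Hurst exponent for a traffic series, the kind of statistic such a detector monitors. It is a generic estimator on synthetic data, not the paper's specific estimation method.

```python
# Basic rescaled-range (R/S) estimate of the Hurst exponent for a traffic series.
# This is a generic estimator; traffic here is synthetic white noise.
import numpy as np

def hurst_rs(series, min_chunk=8):
    series = np.asarray(series, dtype=float)
    n = len(series)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = series[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())
            r = dev.max() - dev.min()               # range of cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # Slope of log(R/S) vs log(window size) approximates the Hurst exponent.
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(0)
normal_traffic = rng.normal(size=4096)              # uncorrelated noise -> H near 0.5
print("estimated H:", round(hurst_rs(normal_traffic), 2))
# A sustained shift of H away from its baseline is the kind of change such
# detectors use to flag an ongoing DDoS attack.
```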
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which invites constant scrutiny regarding accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of the model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffolding to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful, and the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
Healthcare systems have recently utilized the Internet of Medical Things (IoMT) to assist intelligent data collection and decision-making. However, the volume of malicious threats, particularly new variants of malware attacks on connected medical devices and their connected systems, has risen significantly in recent years, which poses a critical threat to patients’ confidential data and the safety of healthcare systems. To address the high complexity of conventional software-based detection techniques, Hardware-supported Malware Detection (HMD) has proved to be efficient for detecting malware at the processors’ micro-architecture level with the aid of Machine Learning (ML) techniques applied to Hardware Performance Counter (HPC) data. In this work, we examine the suitability of various standard ML classifiers for zero-day malware detection on new data streams in the real-world operation of IoMT devices and demonstrate that such methods are not capable of detecting unknown malware signatures with a high detection rate. In response, we propose a hybrid and adaptive image-based framework based on Deep Learning and Deep Reinforcement Learning (DRL) for online hardware-assisted zero-day malware detection in IoMT devices. Our proposed method dynamically selects the best DNN-based malware detector at run-time, customized for each device, from a pool of highly efficient models continuously trained on all stream data. It first converts tabular hardware-based data (HPC events) into small-size images and then leverages a transfer learning technique to retrain and enhance the Deep Neural Network (DNN) based model’s performance for unknown malware detection. Multiple DNN models are trained continuously on various stream data to form an inclusive model pool. Next, a DRL-based agent constructed with two Multi-Layer Perceptrons (MLPs), one acting as an Actor and the other as a Critic, is trained to select the optimal DNN model for highly accurate zero-day malware detection at run-time using a limited number of hardware events. The experimental results demonstrate that our proposed AI-enabled method achieves a 99\% detection rate in both F1-score and AUC, with only a 0.01\% false positive rate and a 1\% false negative rate.
Authored by Zhangying He, Hossein Sayadi
Increasing automation in vehicles enabled by increased connectivity to the outside world has exposed vulnerabilities in previously siloed automotive networks like controller area networks (CAN). Attributes of CAN such as broadcast-based communication among electronic control units (ECUs) that lowered deployment costs are now being exploited to carry out active injection attacks like denial of service (DoS), fuzzing, and spoofing attacks. Research literature has proposed multiple supervised machine learning models deployed as Intrusion detection systems (IDSs) to detect such malicious activity; however, these are largely limited to identifying previously known attack vectors. With the ever-increasing complexity of active injection attacks, detecting zero-day (novel) attacks in these networks in real-time (to prevent propagation) becomes a problem of particular interest. This paper presents an unsupervised-learning-based convolutional autoencoder architecture for detecting zero-day attacks, which is trained only on benign (attack-free) CAN messages. We quantise the model using Vitis-AI tools from AMD/Xilinx targeting a resource-constrained Zynq Ultrascale platform as our IDS-ECU system for integration. The proposed model successfully achieves equal or higher classification accuracy (\textgreater 99.5\%) on unseen DoS, fuzzing, and spoofing attacks from a publicly available attack dataset when compared to the state-of-the-art unsupervised learning-based IDSs. Additionally, by cleverly overlapping IDS operation on a window of CAN messages with the reception, the model is able to meet line-rate detection (0.43 ms per window) of high-speed CAN, which when coupled with the low energy consumption per inference, makes this architecture ideally suited for detecting zero-day attacks on critical CAN networks.
Authored by Shashwat Khandelwal, Shanker Shreejith
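A structural sketch of the IDS idea follows: a small convolutional autoencoder trained only on benign CAN message windows, flagging windows whose reconstruction error exceeds a benign-derived threshold. The window encoding and dimensions are assumptions, and quantisation and FPGA deployment with Vitis-AI are not shown.

```python
# Structural sketch: convolutional autoencoder trained on benign CAN windows,
# with a reconstruction-error threshold for flagging anomalous (attack) windows.
import torch
import torch.nn as nn

torch.manual_seed(0)
win = 64                                              # messages per window
benign = torch.rand(512, 1, win, 8)                   # e.g. normalised ID + payload bytes

class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 2, stride=2), nn.Sigmoid(),
        )
    def forward(self, x):
        return self.dec(self.enc(x))

model = ConvAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                                  # benign-only training
    out = model(benign)
    loss = ((out - benign) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    errs = ((model(benign) - benign) ** 2).mean(dim=(1, 2, 3))
    threshold = errs.mean() + 3 * errs.std()          # simple benign-derived threshold

def is_attack(window):
    with torch.no_grad():
        err = ((model(window) - window) ** 2).mean(dim=(1, 2, 3))
    return err > threshold
```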
In the evolving landscape of Internet of Things (IoT) security, the need for continuous adaptation of defenses is critical. Class Incremental Learning (CIL) can provide a viable solution by enabling Machine Learning (ML) and Deep Learning (DL) models to (i) learn and adapt to new attack types (0-day attacks), (ii) retain their ability to detect known threats, and (iii) safeguard computational efficiency (i.e., no full re-training). In IoT security, where novel attacks frequently emerge, CIL offers an effective tool to enhance Intrusion Detection Systems (IDS) and secure network environments. In this study, we explore how CIL approaches empower DL-based IDS in IoT networks, using the publicly-available IoT-23 dataset. Our evaluation focuses on two essential aspects of an IDS: (a) attack classification and (b) misuse detection. A thorough comparison against a fully-retrained IDS, namely starting from scratch, is carried out. Finally, we place emphasis on interpreting the predictions made by incremental IDS models through eXplainable AI (XAI) tools, offering insights into potential avenues for improvement.
Authored by Francesco Cerasuolo, Giampaolo Bovenzi, Christian Marescalco, Francesco Cirillo, Domenico Ciuonzo, Antonio Pescapè
Automated Internet of Things (IoT) devices generate a considerable amount of data continuously. However, an IoT network can be vulnerable to botnet attacks, where a group of IoT devices is infected by malware and forms a botnet. Recently, Artificial Intelligence (AI) algorithms have been introduced to detect and resist such botnet attacks in IoT networks. However, most of the existing Deep Learning-based algorithms are designed and implemented in a centralized manner. Therefore, these approaches can be sub-optimal in detecting zero-day botnet attacks against a group of IoT devices. Besides, a centralized AI approach requires sharing data traces from the IoT devices for training purposes, which jeopardizes user privacy. To tackle these issues, in this paper we propose a federated learning-based framework for a zero-day botnet attack detection model, in which a new aggregation algorithm for the IoT devices is developed so that better model aggregation can be achieved without compromising user privacy. Evaluations are conducted on an open dataset, i.e., N-BaIoT. The evaluation results demonstrate that the proposed learning framework with the new aggregation algorithm outperforms the existing baseline aggregation algorithms in federated learning for zero-day botnet attack detection in IoT networks.
Authored by Jielun Zhang, Shicong Liang, Feng Ye, Rose Hu, Yi Qian
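To illustrate the federated setting, the sketch below shows clients training locally and sharing only model parameters, aggregated with plain FedAvg weighted by sample counts. This is the standard baseline scheme, not the paper's new aggregation algorithm, and the data are synthetic stand-ins for N-BaIoT traffic.

```python
# Sketch: clients train locally and share only parameters; the server aggregates
# with sample-count-weighted FedAvg (baseline, not the paper's new algorithm).
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
d = 16
global_model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))

def local_train(model, X, y, epochs=5):
    model = copy.deepcopy(model)                      # train a local copy only
    opt = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(X), y).backward()
        opt.step()
    return model.state_dict(), len(X)

def fed_avg(updates):
    total = sum(n for _, n in updates)
    avg = {k: torch.zeros_like(v) for k, v in updates[0][0].items()}
    for state, n in updates:
        for k in avg:
            avg[k] += state[k] * (n / total)          # sample-count weighted average
    return avg

# Three simulated IoT devices with differently sized local datasets.
clients = [(torch.randn(n, d), torch.randint(0, 2, (n,))) for n in (200, 100, 50)]
for round_ in range(3):                               # a few federated rounds
    updates = [local_train(global_model, X, y) for X, y in clients]
    global_model.load_state_dict(fed_avg(updates))
print("finished", round_ + 1, "federated rounds")
```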
Significant progress has been made towards developing Deep Learning (DL) based Artificial Intelligence (AI) models that can make independent decisions. However, this progress has also highlighted the emergence of malicious entities that aim to manipulate the outcomes generated by these models. Due to increasing complexity, this is a concerning issue in various fields, such as medical image classification, autonomous vehicle systems, malware detection, and criminal justice. Recent research advancements have highlighted the vulnerability of these classifiers to both conventional and adversarial assaults, which may skew their results in both the training and testing stages. This Systematic Literature Review (SLR) aims to analyse traditional and adversarial attacks comprehensively. It evaluates 45 published works from 2017 to 2023 to better understand adversarial attacks, including their impact, causes, and standard mitigation approaches.
Authored by Tarek Ali, Amna Eleyan, Tarek Bejaoui
This study presents a novel approach for fortifying network security systems, crucial for ensuring network reliability and survivability against evolving cyber threats. Our approach integrates Explainable Artificial Intelligence (XAI) with an ensemble of autoencoders and Linear Discriminant Analysis (LDA) to create a robust framework for detecting both known and elusive zero-day attacks. We refer to this integrated method as AE-LDA. Our method stands out in its ability to effectively detect both known and previously unidentified network intrusions. By employing XAI for feature selection, we ensure improved interpretability and precision in identifying key patterns indicative of network anomalies. The autoencoder ensemble, trained on benign data, is adept at recognising a broad spectrum of network behaviours, thereby significantly enhancing the detection of zero-day attacks. Simultaneously, LDA aids in the identification of known threats, ensuring a comprehensive coverage of potential network vulnerabilities. This hybrid model demonstrates superior performance in anomaly detection accuracy and complexity management. Our results highlight a substantial advancement in network intrusion detection capabilities, showcasing an effective strategy for bolstering network reliability and resilience against a diverse range of cyber threats.
Authored by Fatemeh Stodt, Fabrice Theoleyre, Christoph Reich