Malware poses a significant threat to global cybersecurity, with machine learning emerging as the primary method for its detection and analysis. However, the opaque nature of machine learning's decision-making process often leads to confusion among stakeholders, undermining their confidence in the detection outcomes. To enhance the trustworthiness of malware detection, Explainable Artificial Intelligence (XAI) is employed to offer transparent and comprehensible explanations of the detection mechanisms, enabling stakeholders to gain a deeper understanding of them and assisting in the development of defensive strategies. Despite recent XAI advancements, several challenges remain unaddressed. In this paper, we explore the specific obstacles encountered in applying XAI to malware detection and analysis, aiming to provide a road map for future research in this critical domain.
Authored by L. Rui, Olga Gadyatskaya
Conventional approaches to analyzing industrial control systems have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, which results in a significant reduction of the state space without relying on domain expertise. Experiment results show that our method accurately explains the ICS model and significantly speeds up fuzzing, by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
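The core idea, explaining a learned model of the ICS to shrink the fuzzing space to a handful of critical actuators, can be illustrated with a minimal, hypothetical Python sketch. The CSV file, column names, surrogate model, and the use of permutation importance as the explanation method are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch: rank actuators by their importance to a learned safety
# model, then restrict fuzzing to the top-k. File and column names are assumed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

log = pd.read_csv("ics_trace.csv")                 # one column per actuator + an "unsafe" label
X, y = log.drop(columns=["unsafe"]), log["unsafe"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Surrogate model of the ICS safety behaviour.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Explain the surrogate: which actuators most influence the safety prediction?
imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1])
critical = [name for name, _ in ranked[:5]]        # reduced state space for the fuzzer
print("Fuzz only:", critical)
```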
Many studies have been conducted to detect various malicious activities in cyberspace using classifiers built by machine learning. However, it is natural for any classifier to make mistakes, and hence, human verification is necessary. One method to address this issue is eXplainable AI (XAI), which provides a reason for the classification result. However, when the number of classification results to be verified is large, it is not realistic to check the XAI output for all cases. In addition, it is sometimes difficult to interpret the output of XAI. In this study, we propose a machine learning model called a classification verifier that verifies classification results by using the output of XAI as a feature and raises objections when there is doubt about the reliability of the classification results. The results of experiments on malicious website detection and malware detection show that the proposed classification verifier can efficiently identify misclassified malicious activities.
Authored by Koji Fujita, Toshiki Shibahara, Daiki Chiba, Mitsuaki Akiyama, Masato Uchida
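A minimal sketch of the general concept, not the authors' model: a base malware classifier is explained with SHAP, and a second "verifier" model is trained on the per-sample SHAP attributions to predict when the base classifier is likely wrong. The synthetic data and the specific model choices are assumptions.

```python
# Hypothetical sketch of a classification verifier built on XAI output.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in for malware feature vectors and labels.
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_base, X_ver, y_base, y_ver = train_test_split(X, y, test_size=0.5, random_state=0)

base = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_base, y_base)

# Explain the base model's predictions: per-sample SHAP attributions.
explainer = shap.TreeExplainer(base)
sv = explainer.shap_values(X_ver)
shap_feats = sv[1] if isinstance(sv, list) else sv[..., 1]   # attributions for the positive class

# The verifier learns which explanation patterns accompany misclassifications
# and raises an objection (flag) when a prediction looks unreliable.
wrong = (base.predict(X_ver) != y_ver).astype(int)
verifier = GradientBoostingClassifier(random_state=0).fit(shap_feats, wrong)
flags = verifier.predict_proba(shap_feats)[:, 1] > 0.5       # in practice, score unseen samples
print("Flagged for human review:", int(flags.sum()))
```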
Many forms of machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of using conventional black-box models, the tendency is to use explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL) type ML models are becoming more popular than the typical Centralized Learning (CL) models due to the distributed nature of the networks and security and privacy concerns. Therefore, it is very timely to research how to provide explainability using Explainable AI (XAI) in different ML models. This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two data sets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI model. We demonstrate that the FL explanations differ from the CL explanations depending on the clients' anomaly percentage.
Authored by Yasintha Rumesh, Thulitha Senevirathna, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
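To illustrate the SHAP part of this workflow, here is a small, model-agnostic sketch on synthetic tabular data; the MLP stand-in, the KernelExplainer choice, and the data are assumptions, whereas the paper uses a deep neural network on UNSW-NB15 and NSL-KDD and contrasts centralized with federated models.

```python
# Hypothetical sketch: SHAP attributions for an anomaly detector on tabular
# flow features (synthetic stand-in data).
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0).fit(X, y)

# Model-agnostic SHAP: summarise a background set, then attribute a few samples.
background = shap.sample(X, 50, random_state=0)
explainer = shap.KernelExplainer(lambda x: model.predict_proba(x)[:, 1], background)
shap_values = np.array(explainer.shap_values(X[:10]))     # (10 samples, 30 features)

# Global view: mean |SHAP| per feature. In the FL setting, the same computation
# would be repeated per client model and the rankings compared with the CL model.
print(np.abs(shap_values).mean(axis=0))
```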
Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification & robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows – user studies for explanation evaluation are conducted in only 14% of the cases. The security literature sometimes also fails to disentangle the role of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions and present open problems that can help shape the future of XAI research within cybersecurity.
Authored by Azqa Nadeem, Daniël Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, Sicco Verwer
Deep learning models are being utilized and further developed in many application domains, but challenges still exist regarding their interpretability and consistency. Interpretability is important to provide users with transparent information that enhances the trust between the user and the learning model. It also gives developers feedback to improve the consistency of their deep learning models. In this paper, we present a novel architectural design to embed interpretation into the architecture of the deep learning model. We apply dynamic pixel-wise weights to input images and produce a highly correlated feature map for classification. This feature map is useful for providing interpretation and transparent information about the decision-making of the deep learning model while keeping full context about the relevant feature information compared to previous interpretation algorithms. The proposed model achieved 92% accuracy on CIFAR-10 classification without fine-tuning the hyperparameters. Furthermore, it achieved 20% accuracy under an 8/255 PGD adversarial attack for 100 iterations without any defense method, indicating additional natural robustness compared to other Convolutional Neural Network (CNN) models. The results demonstrate the feasibility of the proposed architecture.
Authored by Weimin Zhao, Qusay Mahmoud, Sanaa Alwidian
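A hedged guess at what such an architecture could look like in PyTorch is given below; the layer sizes and the sigmoid weight branch are illustrative assumptions, not the authors' design. A side branch predicts one weight per pixel, the input is re-weighted element-wise, and the weight map doubles as the interpretation.

```python
# Hypothetical sketch of a pixel-wise weighting architecture.
import torch
import torch.nn as nn

class PixelWeightedCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Branch producing one weight per pixel (values in [0, 1]).
        self.weight_branch = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )
        # Ordinary CNN classifier applied to the re-weighted image.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(), nn.Linear(64 * 8 * 8, num_classes),
        )

    def forward(self, x):
        w = self.weight_branch(x)           # (B, 1, H, W) pixel-wise weights
        return self.backbone(x * w), w      # logits + weight map for visualisation

model = PixelWeightedCNN()
logits, weight_map = model(torch.randn(4, 3, 32, 32))   # CIFAR-10-sized input
print(logits.shape, weight_map.shape)
```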
In agriculture, where crop health is integral to global food security, our focus is on the early detection of crop diseases. Leveraging Convolutional Neural Networks (CNNs) on a diverse dataset of crop images, our study addresses the development, training, and optimization of these networks to achieve accurate and timely disease classification. The first segment demonstrates the efficacy of the CNN architecture and optimization strategy, showcasing the potential of deep learning models in automating the identification process. The synergy of robust disease detection and interpretability through Explainable Artificial Intelligence (XAI) presented in this work marks a significant stride toward bridging the gap between advanced technology and precision agriculture. By employing visualization, the research seeks to unravel the decision-making processes of our models. The XAI visualization method emerges as notably superior in terms of accuracy, hinting at better identification of the disease: it achieves an accuracy of 89.75%, surpassing both the heat-map model and the LIME explanation method. This not only enhances the transparency and trustworthiness of the predictions but also provides invaluable insights for end-users, allowing them to comprehend the diagnostic features considered by the complex algorithm.
Authored by Priyadarshini Patil, Sneha Pamali, Shreya Devagiri, A Sushma, Jyothi Mirje
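As a sketch of the LIME visualization step in such a pipeline, the snippet below explains a single image; the predict function and the random image are placeholders for a trained crop-disease CNN and a real leaf photo.

```python
# Hypothetical sketch: LIME explanation of a leaf-image classifier.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

def predict_fn(images):
    # Stand-in for model.predict(images): per-class probabilities for,
    # e.g., ["healthy", "blight", "rust"].
    rng = np.random.default_rng(0)
    p = rng.random((len(images), 3))
    return p / p.sum(axis=1, keepdims=True)

leaf = np.random.default_rng(1).random((128, 128, 3))    # placeholder leaf image

explainer = lime_image.LimeImageExplainer(random_state=0)
explanation = explainer.explain_instance(leaf, predict_fn, top_labels=1,
                                         hide_color=0, num_samples=200)
img, mask = explanation.get_image_and_mask(explanation.top_labels[0],
                                           positive_only=True, num_features=5)
overlay = mark_boundaries(img, mask)       # highlights regions the model relied on
print(overlay.shape)
```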
The zero-trust security architecture is a paradigm shift toward resilient cyber warfare. Although Intrusion Detection Systems (IDS) have been widely adopted within military operations to detect malicious traffic and ensure instant remediation against attacks, this paper proposes an explainable adversarial mitigation approach specifically designed for zero-trust cyber warfare scenarios. It aims to provide a transparent and robust defense mechanism against adversarial attacks, enabling effective protection and accountability for increased resilience against attacks. The simulation results show the balance of security and trust within the proposed parameter protection model, achieving a high F1-score of 94%, a low test loss of 0.264, and an adequate detection time of 0.34 s during the prediction of attack types.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
With UAVs on the rise, accurate detection and identification are crucial. Traditional unmanned aerial vehicle (UAV) identification systems involve opaque decision-making, restricting their usability. This research introduces an RF-based Deep Learning (DL) framework for drone recognition and identification. We use cutting-edge eXplainable Artificial Intelligence (XAI) tools, SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Our deep learning model uses these methods for accurate, transparent, and interpretable airspace security. With 84.59% accuracy, our deep-learning algorithms detect drone signals from RF noise. Most crucially, SHAP and LIME improve UAV detection. Detailed explanations show the model's identification decision-making process. This transparency and interpretability set our system apart. The accurate, transparent, and user-trustworthy model improves airspace security.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
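For illustration, a local LIME explanation for a single tabular RF sample could look like the following; the spectral feature names, the random-forest stand-in for the deep model, and the synthetic data are all assumptions.

```python
# Hypothetical sketch: LIME explanation for one RF-signal sample.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

feature_names = [f"psd_bin_{i}" for i in range(16)]       # assumed spectral features
X, y = make_classification(n_samples=1500, n_features=16, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["noise", "drone"],
                                 discretize_continuous=True)
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=5)
print(exp.as_list())    # top features pushing this sample toward "drone" or "noise"
```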
This study addresses the critical need to secure VR network communication from non-immersive attacks, employing an intrusion detection system (IDS). While deep learning (DL) models offer advanced solutions, their opacity as "black box" models raises concerns. Recognizing this gap, the research underscores the urgency for DL-based explainability, enabling data analysts and cybersecurity experts to grasp model intricacies. Leveraging sensed data from IoT devices, our work trains a DL-based model for attack detection and mitigation in the VR network. Importantly, we extend our contribution by providing comprehensive global and local interpretations of the model’s decisions post-evaluation using SHAP-based explanations.
Authored by Urslla Izuazu, Dong-Seong Kim, Jae Lee
Explainable AI is an emerging field that aims to address how black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. Explainable AI in manufacturing is supposed to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be of high sensitivity and various formats need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the Explainable AI and Manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution's overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing as well as for overall security.
Authored by Sangeetha Reji, Jonas Hetterich, Stamatis Pitsios, Vasilis Gkolemi, Sergi Perez-Castanos, Minas Pertselakis
The interest in metaverse applications by existing industries has seen massive growth thanks to the accelerated pace of research in key technological fields and the shift towards virtual interactions fueled by the Covid-19 pandemic. One key industry that can benefit from the integration into the metaverse is healthcare. The potential to provide enhanced care for patients affected by multiple health issues, from standard afflictions to more specialized pathologies, is being explored through the fabrication of architectures that can support metaverse applications. In this paper, we focus on the persistent issues of lung cancer detection, monitoring, and treatment, to propose MetaLung, a privacy and integrity-preserving architecture on the metaverse. We discuss the use cases to enable remote patient-doctor interactions, constant patient monitoring, and remote care. By leveraging technologies such as digital twins, edge computing, explainable AI, IoT, and virtual/augmented reality, we propose how the system could provide better assistance to lung cancer patients and suggest individualized treatment plans to the doctors based on their information. In addition, we describe the current implementation state of the AI-based Decision Support System for treatment selection, I3LUNG, and the current state of patient data collection.
Authored by Michele Zanitti, Mieszko Ferens, Alberto Ferrarin, Francesco Trovò, Vanja Miskovic, Arsela Prelaj, Ming Shen, Sokol Kosta
In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important for investigation towards a trustworthy and secure vision for future 6G networks.
Authored by Bartlomiej Siniarski, Chamara Sandeepa, Shen Wang, Madhusanka Liyanage, Cem Ayyildiz, Veli Yildirim, Hakan Alakoca, Fatma Kesik, Betül Paltun, Giovanni Perin, Michele Rossi, Stefano Tomasin, Arsenia Chorti, Pietro Giardina, Alberto Pérez, José Valero, Tommy Svensson, Nikolaos Pappas, Marios Kountouris
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of unmanned aerial vehicles (UAVs), hence departing from conventional perimeter defences that may expose vulnerabilities. The Zero Trust Architecture (ZTA) paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying unmanned aerial vehicles (UAVs) is 84.59%. This is achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a unique method. Precise identification is crucial in Zero Trust Architecture (ZTA), as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to Zero Trust Architecture (ZTA) standards guarantees that the classifications of unmanned aerial vehicles (UAVs) are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
The fixed security solutions and related security configurations may no longer meet the diverse requirements of 6G networks. The Open Radio Access Network (O-RAN) architecture is going to be one key entry point to 6G, where direct user access is granted. O-RAN promotes the design, deployment, and operation of the RAN with open interfaces, optimized by intelligent controllers. O-RAN networks are to be implemented as multi-vendor systems with interoperable components and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. However, since O-RAN contains many new open interfaces and data flows, new security issues may emerge. Providing recommendations for dynamic security policy adjustments by considering the energy availability and the risk or security level of the network is something lacking in the current state of the art. When the security process is managed and executed in an autonomous way, it must also assure the transparency of the security policy adjustments and provide the reasoning behind the adjustment decisions to the interested parties whenever needed. Moreover, the energy consumption of such security solutions constantly brings overhead to the networking devices. Therefore, in this paper we discuss an XAI-based green security architecture for resilient open radio access networks in 6G, known as XcARet, for providing cognitive and transparent security solutions for O-RAN in a more energy-efficient manner.
Authored by Pawani Porambage, Jarno Pinola, Yasintha Rumesh, Chen Tao, Jyrki Huusko
The rising use of Artificial Intelligence (AI) for human detection on Edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method using XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office Edge network, we found the training dataset to be the main bias source and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
Artificial Intelligence used in future networks is vulnerable to biases, misclassifications, and security threats, which calls for constant scrutiny of its accountability. Explainable AI (XAI) methods bridge this gap by identifying unaccounted-for biases in black-box AI/ML models. However, scaffolding attacks can hide the internal biases of the model from XAI methods, jeopardizing any auditing or monitoring processes, service provisions, security systems, regulators, auditors, and end-users in future networking paradigms, including Intent-Based Networking (IBN). For the first time, we formalize and demonstrate a framework showing how an attacker would adopt scaffolding to deceive security auditors in Network Intrusion Detection Systems (NIDS). Furthermore, we propose a detection method that auditors can use to detect the attack efficiently. We rigorously test the attack and detection methods using the NSL-KDD dataset. We then simulate the attack on 5G network data. Our simulation illustrates that the attack adoption method is successful, and the detection method can identify an affected model with extremely high confidence.
Authored by Thulitha Senevirathna, Bartlomiej Siniarski, Madhusanka Liyanage, Shen Wang
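The scaffolding construction itself (known from prior work on fooling LIME and SHAP) can be sketched as follows; this is not the paper's attack framework or its detection method, and the models and data are synthetic stand-ins.

```python
# Hypothetical sketch: an out-of-distribution detector routes the synthetic
# perturbations generated by perturbation-based XAI tools to an innocuous
# model, while genuine traffic is scored by the biased model.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest, RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
biased_model = RandomForestClassifier(random_state=0).fit(X, y)   # model with hidden bias
clean_model = RandomForestClassifier(random_state=1).fit(X, y)    # stand-in for a model trained to look unbiased

ood = IsolationForest(random_state=0).fit(X)    # XAI perturbations tend to fall out of distribution

def scaffolded_predict(samples):
    in_dist = ood.predict(samples) == 1         # +1 = looks like real traffic
    return np.where(in_dist,
                    biased_model.predict(samples),
                    clean_model.predict(samples))

# An auditor probing only with heavily perturbed queries sees the clean model's
# behaviour; detection therefore has to compare explanations between natural
# and perturbed inputs.
print(scaffolded_predict(X[:5]))
```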
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promises extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia