The aim of this study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI helps to explain how AI models reach their decisions, increases confidence and trust in those models, and makes their decision-making mechanisms more reliable and transparent. Even though a number of XAI solutions exist, such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, several problems in metrics, evaluation, performance, and explanation remain domain-specific and require domain experts to develop new models or to apply available techniques. It is hoped that this article will help researchers develop XAI solutions for their energy applications and improve their AI approaches in further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
Sixth generation (6G)-enabled massive network MANO orchestration, alongside distributed supervision and fully reconfigurable control logic that manages the dynamic arrangement of network components such as cell-free, Open-Air Interface (OAI), and RIS, is a potent enabler for the upcoming pervasive digitalization of vertical use cases. In such a disruptive domain, artificial intelligence (AI)-driven zero-touch “Network of Networks” intent-based automation shall be able to guarantee a high degree of security, efficiency, scalability, and sustainability, especially in cross-domain and interoperable deployment environments (i.e., where points of presence (PoPs) are non-independent and identically distributed (non-IID)). To this end, this paper presents a novel, breakthrough, open, and fully reconfigurable networking architecture for 6G cellular paradigms, named 6G-BRICKS. Specifically, 6G-BRICKS will deliver the first open and programmable O-RAN Radio Unit (RU) for 6G networks, termed the OpenRU, based on an NI USRP-based platform. Moreover, 6G-BRICKS will integrate the RIS concept into OAI, alongside Testing as a Service (TaaS) capabilities, multi-tenancy, disaggregated Operations Support Systems (OSS), and Deep Edge adaptation at the forefront. The overall ambition of 6G-BRICKS is to offer evolvability and granularity while, at the same time, tackling big challenges such as the interdisciplinary efforts and big investments required for 6G integration.
Authored by Kostas Ramantas, Anastasios Bikos, Walter Nitzold, Sofie Pollin, Adlen Ksentini, Sylvie Mayrargue, Vasileios Theodorou, Loizos Christofi, Georgios Gardikis, Md Rahman, Ashima Chawla, Francisco Ibañez, Ioannis Chochliouros, Didier Nicholson, Mario Montagud, Arman Shojaeifard, Alexios Pagkotzidis, Christos Verikoukis
Impact of Equivalence Assessment in the Education Sector using the XAI Model of Blockchain with ECTS
The procedure for obtaining an equivalency certificate for international educational recognition is typically complicated and opaque, and differs depending on the nation and system. To overcome these issues and empower students, this study suggests a revolutionary assessment tool that makes use of blockchain technology, chatbots, the European Credit Transfer and Accumulation System (ECTS), and Explainable Artificial Intelligence (XAI). Educational equivalency assessments frequently face difficulties and a lack of openness in a variety of settings. The suggested solution uses blockchain for tamper-proof record-keeping and secure data storage, building on the capabilities of each component. This improves the blockchain’s ability to securely store application data and evaluation results, fostering immutability and trust. Using the distributed-ledger feature of blockchain promotes fairness in evaluations by preventing tampering and guaranteeing data integrity, and the blockchain ensures data security and privacy by encrypting stored data. We discuss how XAI might explain AI-driven equivalence decisions, promoting fairness and trust, by reviewing pertinent material in each domain. Chatbots can improve accessibility by streamlining data collection and assisting students along the way. Transparency and efficiency are provided via ECTS computations that integrate XAI and chatbots. Emphasizing the availability of multilingual support for international students, we also address issues such as data privacy and system adaptation. The study recommends further research to assess this multifaceted method in practical contexts and to refine the technology for ethical and efficient application. In the end, both students and institutions will benefit, as the approach can empower individuals and promote the international mobility that degree equivalence enables.
Authored by Sumathy Krishnan, R Surendran
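The tamper-evidence this abstract attributes to blockchain reduces, at its core, to hash chaining: each record is hashed together with the previous block's hash, so any later edit breaks every subsequent link. A minimal sketch follows; the record fields (`student`, `course`, `ects`) are hypothetical stand-ins, not the paper's actual ledger schema:

```python
import hashlib
import json

def record_hash(record: dict, prev_hash: str) -> str:
    """Hash a record together with the previous block's hash."""
    payload = json.dumps(record, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def build_chain(records):
    """Link records into a hash chain; any later edit breaks the links."""
    chain, prev = [], "0" * 64
    for rec in records:
        h = record_hash(rec, prev)
        chain.append({"record": rec, "prev": prev, "hash": h})
        prev = h
    return chain

def verify_chain(chain) -> bool:
    """Recompute every link; False as soon as a record or hash was altered."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev or record_hash(block["record"], prev) != block["hash"]:
            return False
        prev = block["hash"]
    return True

chain = build_chain([
    {"student": "S1", "course": "Algorithms", "ects": 6},
    {"student": "S1", "course": "Databases", "ects": 5},
])
assert verify_chain(chain)
chain[0]["record"]["ects"] = 10   # tamper with a stored credit value
assert not verify_chain(chain)    # the chain no longer verifies
```

A production ledger adds consensus and distribution on top, but the integrity guarantee the abstract relies on is exactly this recomputable linkage.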
Internet of Things (IoT) and Artificial Intelligence (AI) systems have become prevalent across various industries, leading to diverse and far-reaching outcomes, and their convergence has garnered significant attention in the tech world. Studies and reviews are instrumental in supplying industries with a nuanced understanding of the multifaceted developments of this joint domain. This paper undertakes a critical examination of existing perspectives and governance policies, adopting a contextual approach and addressing not only the potential but also the limitations of these governance policies. In the complex landscape of AI-infused IoT systems, transparency and interpretability are pivotal qualities for informed decision-making and effective governance. In AI governance, transparency allows for scrutiny and accountability, while interpretability facilitates trust and confidence in AI-driven decisions. Therefore, we also evaluate and advocate for the use of two very popular eXplainable AI (XAI) techniques, SHAP and LIME, in explaining the predictive results of AI models. Subsequently, this paper underscores the imperative of not only maximizing the advantages and services derived from the incorporation of IoT and AI but also diligently minimizing the possible risks and challenges.
Authored by Nadine Fares, Denis Nedeljkovic, Manar Jammal
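Both SHAP and LIME attribute a model's prediction to its input features; the perturbation idea at the heart of LIME can be sketched in a few lines by replacing one feature at a time with a neutral baseline and measuring the score drop. The model, feature names, and weights below are hypothetical toy stand-ins, not the AI models the paper evaluates:

```python
def model(x):
    # toy anomaly score standing in for a trained IoT model (assumed weights)
    return 0.6 * x["packet_rate"] + 0.3 * x["error_rate"] + 0.1 * x["cpu_load"]

def perturbation_attribution(model, x, baseline):
    """LIME-style idea in miniature: attribute to each feature the score
    drop observed when it is replaced by a neutral baseline value."""
    base_score = model(x)
    attributions = {}
    for feat in x:
        perturbed = dict(x)
        perturbed[feat] = baseline[feat]
        attributions[feat] = base_score - model(perturbed)
    return attributions

x = {"packet_rate": 1.0, "error_rate": 1.0, "cpu_load": 1.0}
baseline = {k: 0.0 for k in x}
attr = perturbation_attribution(model, x, baseline)
# packet_rate dominates the explanation, matching its 0.6 weight
```

The real LIME fits a local linear surrogate over many random perturbations, and SHAP averages such contributions over feature coalitions, but this one-feature-at-a-time drop is the shared intuition.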
Over the past two decades, Cyber-Physical Systems (CPS) have emerged as critical components in various industries, integrating digital and physical elements to improve efficiency and automation, from smart manufacturing and autonomous vehicles to advanced healthcare devices. However, the increasing complexity of CPS and their deployment in highly dynamic contexts undermine user trust. This motivates the investigation of methods capable of generating explanations about the behavior of CPS. To this end, Explainable Artificial Intelligence (XAI) methodologies show potential. However, these approaches do not consider contextual variables that a CPS may be subjected to (e.g., temperature, humidity), and the provided explanations are typically not actionable. In this article, we propose an Actionable Contextual Explanation System (ACES) that considers such contextual influences. Based on a user query about a behavioral attribute of a CPS (for example, vibrations and speed), ACES creates contextual explanations for the behavior of such a CPS considering its context. To generate contextual explanations, ACES uses a context model to discover sensors and actuators in the physical environment of a CPS and obtains time-series data from these devices. It then cross-correlates these time-series logs with the user-specified behavioral attribute of the CPS. Finally, ACES employs a counterfactual explanation method and takes user feedback to identify causal relationships between the contextual variables and the behavior of the CPS. We demonstrate our approach with a synthetic use case; the favorable results obtained motivate the future deployment of ACES in real-world scenarios.
Authored by Sanjiv Jha, Simon Mayer, Kimberly Garcia
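The cross-correlation step described above, relating a contextual sensor series to a user-specified behavioral attribute, can be sketched as a lagged Pearson correlation: slide one series against the other and report the lag with the strongest correlation. The temperature/vibration data below is synthetic and merely illustrative; ACES itself presumably operates on richer device logs:

```python
from statistics import mean

def pearson(a, b):
    """Plain Pearson correlation of two equal-length sequences."""
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

def best_lag(context, behavior, max_lag):
    """Lag (in samples) at which the contextual signal correlates most
    strongly with the behavioral attribute."""
    return max(
        range(0, max_lag + 1),
        key=lambda lag: abs(pearson(context[: len(context) - lag],
                                    behavior[lag:])),
    )

# synthetic logs: vibration follows temperature with a two-sample delay
temperature = [20, 21, 23, 26, 30, 29, 27, 25, 24, 23]
vibration = [0.1, 0.1, 0.2, 0.2, 0.3, 0.5, 0.9, 0.8, 0.6, 0.4]
lag = best_lag(temperature, vibration, max_lag=3)   # expected: 2
```

A high correlation at some lag is only a candidate relationship; this is why the paper follows up with counterfactual explanations and user feedback before claiming causality.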
In recent years there has been a surge of interest in the interpretability and explainability of AI systems, largely motivated by the need to ensure the transparency and accountability of Artificial Intelligence (AI) operations, as well as by the need to minimize the cost and consequences of poor decisions. Another challenge that needs to be mentioned is cybersecurity attacks against AI infrastructures in manufacturing environments. This study examines eXplainable AI (XAI)-enhanced approaches against adversarial attacks for optimizing cyber defense methods in manufacturing image classification tasks. The examined XAI methods were applied to an image classification task, providing some insightful results regarding the utility of Local Interpretable Model-agnostic Explanations (LIME), Saliency maps, and Gradient-weighted Class Activation Mapping (Grad-CAM) as methods to fortify a dataset against gradient evasion attacks. To this end, we “attacked” the XAI-enhanced images and used them as input to the classifier to measure their robustness. Given the analyzed dataset, our research indicates that LIME-masked images are more robust to adversarial attacks. We additionally propose an Encoder-Decoder schema that predicts (decodes) the masked images in a timely manner, making the proposed approach suitable for real-life problems.
Authored by Georgios Makridis, Spyros Theodoropoulos, Dimitrios Dardanis, Ioannis Makridis, Maria Separdani, Georgios Fatouros, Dimosthenis Kyriazis, Panagiotis Koulouris
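The "XAI-masked image" idea above boils down to keeping only the pixels an explanation method marked as important and zeroing the rest, so an attacker's perturbations in unimportant regions never reach the classifier. A minimal sketch with a hypothetical saliency map (the paper's masks come from LIME/Grad-CAM on a real dataset, which is not reproduced here):

```python
def mask_image(image, saliency, keep_fraction=0.5):
    """Zero out the least-salient pixels, keeping only the fraction of
    pixels an explanation method marked as important.
    `image` and `saliency` are equal-shape 2D lists."""
    flat = sorted((s for row in saliency for s in row), reverse=True)
    k = max(1, int(len(flat) * keep_fraction))
    threshold = flat[k - 1]   # ties at the threshold are also kept
    return [
        [px if s >= threshold else 0 for px, s in zip(img_row, sal_row)]
        for img_row, sal_row in zip(image, saliency)
    ]

image = [[10, 20], [30, 40]]
saliency = [[0.9, 0.1], [0.8, 0.2]]   # hypothetical importance scores
masked = mask_image(image, saliency, keep_fraction=0.5)
# keeps the two most salient pixels (0.9 and 0.8), zeroes the rest
```

This also hints at why the paper needs the Encoder-Decoder stage: computing a fresh explanation per image at inference time is slow, so a decoder that predicts the mask directly restores real-time feasibility.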
In today's age of digital technology, ethical concerns regarding computing systems are increasing. While the focus of such concerns currently is on requirements for software, this article spotlights the hardware domain, specifically microchips. For example, the opaqueness of modern microchips raises security issues, as malicious actors can manipulate them, jeopardizing system integrity. As a consequence, governments invest substantially to facilitate a secure microchip supply chain. To combat the opaqueness of hardware, this article introduces the concept of Explainable Hardware (XHW). Inspired by and building on previous work on Explainable AI (XAI) and explainable software systems, we develop a framework for achieving XHW comprising relevant stakeholders, the requirements they might have concerning hardware, and possible explainability approaches to meet these requirements. Through an exploratory survey among 18 hardware experts, we showcase applications of the framework and uncover potential research gaps. Our work lays the foundation for future work and structured debates on XHW.
Authored by Timo Speith, Julian Speith, Steffen Becker, Yixin Zou, Asia Biega, Christof Paar
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and transparent security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of UAVs, thus departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying UAVs is 84.59%. This is achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a unique approach. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to ZTA standards guarantees that the classifications of UAVs are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
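SHAP, mentioned above, is grounded in Shapley values: a feature's attribution is its marginal contribution averaged over all orderings in which features can be added. For a handful of features this can be computed exactly. The toy "RF-fingerprint" score below, with its hypothetical feature names and interaction term, stands in for the paper's deep model, which is not reproduced here:

```python
from itertools import permutations

def shapley_values(f, features, baseline):
    """Exact Shapley values by averaging each feature's marginal
    contribution over all feature orderings (tractable only for few features)."""
    names = list(features)
    values = {n: 0.0 for n in names}
    perms = list(permutations(names))
    for order in perms:
        current = dict(baseline)
        prev = f(current)
        for name in order:
            current[name] = features[name]
            now = f(current)
            values[name] += now - prev
            prev = now
    return {n: v / len(perms) for n, v in values.items()}

def drone_score(x):
    # toy score with an interaction term between bandwidth and power
    return 2.0 * x["bandwidth"] + 1.0 * x["hop_rate"] + x["bandwidth"] * x["power"]

features = {"bandwidth": 1.0, "hop_rate": 1.0, "power": 1.0}
baseline = {"bandwidth": 0.0, "hop_rate": 0.0, "power": 0.0}
phi = shapley_values(drone_score, features, baseline)
# the interaction is split evenly: bandwidth 2.5, hop_rate 1.0, power 0.5
```

The efficiency property (attributions sum exactly to the prediction minus the baseline prediction) is what makes SHAP explanations auditable, which fits the verifiability the ZTA setting demands; practical SHAP implementations approximate this average rather than enumerating orderings.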
Malware poses a significant threat to global cybersecurity, with machine learning emerging as the primary method for its detection and analysis. However, the opaque nature of machine learning's decision-making process often leads to confusion among stakeholders, undermining their confidence in the detection outcomes. To enhance the trustworthiness of malware detection, Explainable Artificial Intelligence (XAI) is employed to offer transparent and comprehensible explanations of the detection mechanisms, enabling stakeholders to gain a deeper understanding of those mechanisms and assisting in the development of defensive strategies. Despite recent XAI advancements, several challenges remain unaddressed. In this paper, we explore the specific obstacles encountered in applying XAI to malware detection and analysis, aiming to provide a roadmap for future research in this critical domain.
Authored by L. Rui, Olga Gadyatskaya
Conventional approaches to analyzing industrial control systems (ICS) have relied on either white-box analysis or black-box fuzzing. However, white-box methods rely on sophisticated domain expertise, while black-box methods suffer from state explosion and thus scale poorly when analyzing real ICS involving a large number of sensors and actuators. To address these limitations, we propose XAI-based gray-box fuzzing, a novel approach that leverages explainable AI and machine learning modeling of ICS to accurately identify a small set of actuators critical to ICS safety, resulting in a significant reduction of the state space without relying on domain expertise. Experimental results show that our method accurately explains the ICS model and significantly speeds up fuzzing, by 64x compared to conventional black-box methods.
Authored by Justin Kur, Jingshu Chen, Jun Huang
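The state-space reduction above comes from ranking actuators by an explanation-derived importance score and fuzzing only the top-ranked ones, shrinking 2^N combinations to 2^k. A minimal sketch; the actuator names and scores are hypothetical, and the paper's actual scoring comes from an ML model of the ICS rather than a fixed table:

```python
import itertools

def rank_actuators(importance):
    """Order actuators by explanation-derived importance (e.g. mean |SHAP|)."""
    return sorted(importance, key=importance.get, reverse=True)

def fuzz_states(actuators, k):
    """Enumerate on/off combinations for only the top-k critical actuators,
    instead of all 2^N combinations over every actuator."""
    critical = actuators[:k]
    return [dict(zip(critical, bits))
            for bits in itertools.product([0, 1], repeat=k)]

# hypothetical importance scores for four actuators
importance = {"pump1": 0.41, "valve3": 0.02, "pump2": 0.35, "heater": 0.01}
ranked = rank_actuators(importance)
states = fuzz_states(ranked, k=2)
# 4 candidate states to fuzz instead of 16 over all four actuators
```

The gain compounds: for the dozens of actuators in a real plant, restricting fuzzing to a handful of safety-critical ones is what makes the reported 64x speed-up plausible.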
Many studies have been conducted to detect various malicious activities in cyberspace using classifiers built by machine learning. However, it is natural for any classifier to make mistakes, and hence, human verification is necessary. One method to address this issue is eXplainable AI (XAI), which provides a reason for the classification result. However, when the number of classification results to be verified is large, it is not realistic to check the output of the XAI for all cases. In addition, it is sometimes difficult to interpret the output of XAI. In this study, we propose a machine learning model called classification verifier that verifies the classification results by using the output of XAI as a feature and raises objections when there is doubt about the reliability of the classification results. The results of experiments on malicious website detection and malware detection show that the proposed classification verifier can efficiently identify misclassified malicious activities.
Authored by Koji Fujita, Toshiki Shibahara, Daiki Chiba, Mitsuaki Akiyama, Masato Uchida
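The verifier idea above, using XAI output as features to decide when a classification deserves an objection, can be sketched with simple summary features of an attribution vector. The paper trains a machine learning model as the verifier; the fixed thresholds and feature names below are a hypothetical simplification:

```python
def explanation_features(attributions):
    """Summarize an explanation as features: total evidence mass and how
    concentrated that mass is on the single strongest attribute."""
    total = sum(abs(v) for v in attributions.values())
    top = max(abs(v) for v in attributions.values())
    return {"total": total, "top_share": top / total if total else 0.0}

def verifier(attributions, min_total=0.5, min_top_share=0.4):
    """Raise an objection (True) when the explanation suggests the
    classifier had weak or diffuse evidence for its decision."""
    f = explanation_features(attributions)
    return f["total"] < min_total or f["top_share"] < min_top_share

# hypothetical per-feature attributions for two malicious-URL verdicts
confident = {"url_length": 0.7, "num_redirects": 0.2, "entropy": 0.1}
doubtful = {"url_length": 0.12, "num_redirects": 0.10, "entropy": 0.08}
```

The payoff matches the paper's motivation: analysts only read XAI output for the cases the verifier objects to, instead of inspecting every classification.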
Many forms of machine learning (ML) and artificial intelligence (AI) techniques are adopted in communication networks to perform optimization, security management, and decision-making tasks. Instead of using conventional black-box models, the tendency is to use explainable ML models that provide transparency and accountability. Moreover, Federated Learning (FL)-type ML models are becoming more popular than the typical Centralized Learning (CL) models due to the distributed nature of the networks and security and privacy concerns. Therefore, it is very timely to research how to achieve explainability using Explainable AI (XAI) in different ML models. This paper comprehensively analyzes the use of XAI in CL- and FL-based anomaly detection in networks. We use a deep neural network as the black-box model with two data sets, UNSW-NB15 and NSL-KDD, and SHapley Additive exPlanations (SHAP) as the XAI model. We demonstrate that the FL explanations differ from the CL explanations as the per-client anomaly percentage varies.
Authored by Yasintha Rumesh, Thulitha Senevirathna, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
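One simple way to quantify how an FL client's explanation diverges from the centralized one, as compared above, is cosine distance between mean-|SHAP| vectors computed over the same feature order. The vectors below are hypothetical illustrations, not values from the UNSW-NB15 or NSL-KDD experiments:

```python
def cosine(a, b):
    """Cosine similarity of two equal-length attribution vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

# hypothetical mean |SHAP| per feature, same feature order throughout
centralized = [0.50, 0.30, 0.15, 0.05]
client_a = [0.48, 0.33, 0.14, 0.05]   # client with data much like the global mix
client_b = [0.10, 0.15, 0.55, 0.20]   # client dominated by one attack type

drift_a = 1 - cosine(centralized, client_a)
drift_b = 1 - cosine(centralized, client_b)
# client_b's explanation diverges far more from the centralized one
```

A drift metric of this kind makes the paper's qualitative finding, that FL explanations track each client's anomaly mix, directly measurable per client.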
Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the increasingly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification \& robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many of the XAI applications are designed with little understanding of how they might be integrated into analyst workflows – user studies for explanation evaluation are conducted in only 14\% of the cases. The security literature sometimes also fails to disentangle the role of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions, and present open problems that can help shape the future of XAI research within cybersecurity.
Authored by Azqa Nadeem, Daniël Vos, Clinton Cao, Luca Pajola, Simon Dieck, Robert Baumgartner, Sicco Verwer
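The "XAI-enabled model verification" objective above can be made concrete with a small designer-side check: flag a model if too much of its attribution mass lands on features the designer knows should be irrelevant (a record ID, a dataset artifact). The feature names, attribution values, and the 5% budget below are hypothetical, not taken from the paper's tutorial:

```python
def verify_no_spurious_reliance(attributions, spurious, budget=0.05):
    """Pass (True) only if at most `budget` of the total attribution mass
    falls on features the designer declared irrelevant."""
    total = sum(abs(v) for v in attributions.values())
    if total == 0:
        return True   # no evidence used at all; nothing spurious to flag
    spurious_mass = sum(abs(attributions.get(f, 0.0)) for f in spurious)
    return spurious_mass / total <= budget

# hypothetical mean |attribution| per feature for two candidate models
good = {"payload_entropy": 0.6, "conn_duration": 0.35, "record_id": 0.01}
leaky = {"payload_entropy": 0.2, "conn_duration": 0.1, "record_id": 0.7}

ok = verify_no_spurious_reliance(good, {"record_id"})     # passes
bad = verify_no_spurious_reliance(leaky, {"record_id"})   # fails: ID leakage
```

A check of this shape lets the model designer, the stakeholder the survey finds underserved, catch shortcut learning before deployment rather than exposing explanations downstream.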