The pervasive proliferation of digital technologies and interconnected systems has heightened the need for comprehensive cybersecurity measures. While deep learning (DL) has become a powerful tool for bolstering security, its effectiveness is being tested by malicious attacks. Cybersecurity has become an issue of critical importance in the modern digital world. By making it possible to identify and respond to threats in real time, deep learning is a critical component of improved security. Adversarial attacks, model interpretability, and a lack of labeled data are all obstacles that need further study in order to strengthen DL-based security solutions. The safety and reliability of DL in cyberspace depend on overcoming these barriers. The present research proposes a novel approach for strengthening DL-based cybersecurity, called dynamic adversarial resilience for deep learning-based cybersecurity (DARDL-C). DARDL-C offers a dynamic and adaptable framework to counter adversarial attacks by combining adaptive neural network architectures with ensemble learning, real-time threat monitoring, threat intelligence integration, explainable AI (XAI) for model interpretability, and reinforcement learning for adaptive defense strategies. The goal of this approach is to make DL models more secure and resistant to the constantly shifting nature of online threats. Simulation analysis is essential for determining DARDL-C's effectiveness in practical settings without compromising real security. Professionals and researchers can evaluate the efficacy and versatility of DARDL-C by simulating realistic threats in controlled contexts, which offers valuable insights into the system's strengths and areas for improvement.
Authored by D. Poornima, A. Sheela, Shamreen Ahamed, P. Kathambari
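One ingredient of an adversarial-resilience scheme like DARDL-C is ensemble agreement: an input is accepted only when several independently trained models concur, which raises the cost of crafting a transferable adversarial example. The following is a minimal sketch of that idea only; the models, threshold, and flagging rule are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def ensemble_predict(models, x, agreement=0.6):
    """Majority vote over an ensemble; returns (label, flagged_as_suspicious)."""
    votes = np.array([m(x) for m in models])           # each model -> class id
    labels, counts = np.unique(votes, return_counts=True)
    top = counts.argmax()
    confident = counts[top] / len(models) >= agreement
    # Low agreement across the ensemble is treated as a possible adversarial input.
    return int(labels[top]), not confident

# Toy "models": threshold classifiers on the first feature (stand-ins only).
models = [lambda x, t=t: int(x[0] > t) for t in (0.2, 0.4, 0.6)]
label, suspicious = ensemble_predict(models, np.array([0.5]))
```

Two of the three toy models agree here, so the vote passes the 0.6 agreement threshold and the input is not flagged.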
Deep learning models are being utilized and further developed in many application domains, but challenges still exist regarding their interpretability and consistency. Interpretability is important to provide users with transparent information that enhances the trust between the user and the learning model. It also gives developers feedback to improve the consistency of their deep learning models. In this paper, we present a novel architectural design to embed interpretation into the architecture of the deep learning model. We apply dynamic pixel-wise weights to input images and produce a highly correlated feature map for classification. This feature map is useful for providing interpretation and transparent information about the decision-making of the deep learning model while retaining the full context of the relevant feature information, in contrast to previous interpretation algorithms. The proposed model achieved 92\% accuracy for CIFAR-10 classification without fine-tuning the hyperparameters. Furthermore, it achieved 20\% accuracy under an 8/255 PGD adversarial attack for 100 iterations without any defense method, indicating extra natural robustness compared to other Convolutional Neural Network (CNN) models. The results demonstrate the feasibility of the proposed architecture.
Authored by Weimin Zhao, Qusay Mahmoud, Sanaa Alwidian
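The core mechanism described above, as I read it, multiplies the input by a per-pixel weight map so the resulting feature map stays spatially aligned with the image and can double as an explanation overlay. A minimal sketch of that element-wise step follows; the weight values are illustrative, not the trained ones from the paper.

```python
import numpy as np

def pixelwise_weighted_features(image, weights):
    """Apply a per-pixel weight map; the product is both feature and saliency."""
    assert image.shape == weights.shape
    feature_map = image * weights            # element-wise, keeps spatial layout
    # A normalised copy can be shown to the user as an explanation overlay.
    saliency = (feature_map - feature_map.min()) / (np.ptp(feature_map) + 1e-9)
    return feature_map, saliency

rng = np.random.default_rng(0)
img = rng.random((8, 8))
w = np.ones((8, 8))
w[2:6, 2:6] = 3.0                            # pretend training up-weighted the centre
fmap, sal = pixelwise_weighted_features(img, w)
```

Because the operation is element-wise, high values in `sal` point back to exact input pixels, which is what makes the feature map directly interpretable.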
In the realm of agriculture, where crop health is integral to global food security, our focus is on the early detection of crop diseases. Leveraging Convolutional Neural Networks (CNNs) on a diverse dataset of crop images, our study focuses on the development, training, and optimization of these networks to achieve accurate and timely disease classification. The first segment demonstrates the efficacy of the CNN architecture and optimization strategy, showcasing the potential of deep learning models in automating the identification process. The synergy of robust disease detection and interpretability through Explainable Artificial Intelligence (XAI) presented in this work marks a significant stride toward bridging the gap between advanced technology and precision agriculture. By employing visualization, the research seeks to unravel the decision-making processes of our models. The XAI visualization method emerges as notably superior in terms of accuracy, achieving 89.75\% and surpassing both the heat-map model and the LIME explanation method, which suggests better identification of the disease. This not only enhances the transparency and trustworthiness of the predictions but also provides invaluable insights for end-users, allowing them to comprehend the diagnostic features considered by the complex algorithm.
Authored by Priyadarshini Patil, Sneha Pamali, Shreya Devagiri, A Sushma, Jyothi Mirje
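The visual explanation methods compared above (the authors' visualization, heat maps, LIME) all answer the same question: which image regions drove the prediction? A minimal stand-in for this family of methods is occlusion saliency, which masks each patch and records how much the class score drops. The toy scorer below is illustrative, not the paper's CNN.

```python
import numpy as np

def occlusion_saliency(image, score_fn, patch=4):
    """Score drop per occluded patch; larger drop = more important region."""
    base = score_fn(image)
    h, w = image.shape
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0   # occlude one patch
            sal[i // patch, j // patch] = base - score_fn(masked)
    return sal

# Toy "disease score": mean brightness of the centre region of the leaf image.
score = lambda im: im[4:12, 4:12].mean()
img = np.ones((16, 16))
sal = occlusion_saliency(img, score, patch=4)
```

Patches outside the scorer's receptive region produce zero drop, while patches inside it produce a positive drop, so the map highlights exactly the pixels the model relies on.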
The Zero-trust security architecture is a paradigm shift toward resilience in cyber warfare. Although Intrusion Detection Systems (IDS) have been widely adopted within military operations to detect malicious traffic and ensure instant remediation against attacks, this paper proposes an explainable adversarial mitigation approach specifically designed for zero-trust cyber warfare scenarios. It aims to provide a transparent and robust defense mechanism against adversarial attacks, enabling effective protection and accountability for increased resilience against attacks. The simulation results show the balance of security and trust within the proposed parameter protection model, achieving a high F1-score of 94\%, a low test loss of 0.264, and an adequate detection time of 0.34s during the prediction of attack types.
Authored by Ebuka Nkoro, Cosmas Nwakanma, Jae-Min Lee, Dong-Seong Kim
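The 94\% F1-score reported above combines precision and recall, and recomputing it from raw predictions is a useful sanity check when reproducing IDS results. A minimal sketch, with illustrative labels rather than the paper's data:

```python
import numpy as np

def f1_score(y_true, y_pred, positive=1):
    """F1 for the positive (attack) class from paired label arrays."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# 1 = attack traffic, 0 = benign (toy labels).
y_true = [1, 1, 1, 0, 0, 1]
y_pred = [1, 1, 0, 0, 1, 1]
f1 = f1_score(y_true, y_pred)
```

Here precision and recall are both 0.75 (one false positive, one false negative), so the F1 is 0.75.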
Towards an Interpretable AI Framework for Advanced Classification of Unmanned Aerial Vehicles (UAVs)
With UAVs on the rise, accurate detection and identification are crucial. Traditional unmanned aerial vehicle (UAV) identification systems involve opaque decision-making, restricting their usability. This research introduces an RF-based Deep Learning (DL) framework for drone recognition and identification. We use cutting-edge eXplainable Artificial Intelligence (XAI) tools: SHapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). Our deep learning model uses these methods for accurate, transparent, and interpretable airspace security. With 84.59\% accuracy, our deep-learning algorithms detect drone signals from RF noise. Most crucially, SHAP and LIME improve UAV detection. Detailed explanations show the model's identification decision-making process. This transparency and interpretability set our system apart. The accurate, transparent, and user-trustworthy model improves airspace security.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
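RF-based detection pipelines like the one above typically convert the raw signal into time-frequency features before the DL model sees it. A minimal short-time FFT feature extractor is sketched below; the window and hop sizes are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def stft_features(signal, win=64, hop=32):
    """Magnitude spectrogram frames via a short-time FFT over a 1-D signal."""
    window = np.hanning(win)
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win] * window       # windowed segment
        frames.append(np.abs(np.fft.rfft(seg)))        # one-sided spectrum
    return np.array(frames)                            # (num_frames, win // 2 + 1)

t = np.arange(1024)
sig = np.sin(2 * np.pi * 0.1 * t)    # stand-in for a narrowband RF burst
spec = stft_features(sig)
```

Each row is one time slice of the spectrum; stacking them yields the spectrogram image that a CNN-style classifier can consume.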
This study addresses the critical need to secure VR network communication from non-immersive attacks, employing an intrusion detection system (IDS). While deep learning (DL) models offer advanced solutions, their opacity as "black box" models raises concerns. Recognizing this gap, the research underscores the urgency for DL-based explainability, enabling data analysts and cybersecurity experts to grasp model intricacies. Leveraging sensed data from IoT devices, our work trains a DL-based model for attack detection and mitigation in the VR network. Importantly, we extend our contribution by providing comprehensive global and local interpretations of the model’s decisions post-evaluation using SHAP-based explanation.
Authored by Urslla Izuazu, Dong-Seong Kim, Jae Lee
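The SHAP explanations mentioned above approximate Shapley values; for a model with very few features they can be computed exactly by enumerating feature coalitions, which is a handy way to check an approximation. The toy linear model and inputs below are illustrative, not the paper's IDS.

```python
from itertools import combinations
from math import factorial
import numpy as np

def exact_shapley(model, x, baseline):
    """Exact Shapley value per feature for one prediction (small n only)."""
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = baseline.copy()
                with_i[list(S) + [i]] = x[list(S) + [i]]   # coalition plus feature i
                without = baseline.copy()
                without[list(S)] = x[list(S)]              # coalition alone
                phi[i] += weight * (model(with_i) - model(without))
    return phi

model = lambda v: 3 * v[0] + 2 * v[1]          # linear toy attack score
x, base = np.array([1.0, 1.0]), np.array([0.0, 0.0])
phi = exact_shapley(model, x, base)
```

For a linear model the Shapley value of each feature reduces to its coefficient times its deviation from the baseline, here `[3.0, 2.0]`, which makes the toy easy to verify by hand.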
Explainable AI is an emerging field that aims to address how black-box decisions of AI systems are made, by attempting to understand the steps and models involved in this decision-making. Explainable AI in manufacturing is supposed to deliver predictability, agility, and resiliency across targeted manufacturing apps. In this context, large amounts of data, which can be of high sensitivity and various formats, need to be securely and efficiently handled. This paper proposes an Asset Management and Secure Sharing solution tailored to the Explainable AI and Manufacturing context in order to tackle this challenge. The proposed asset management architecture enables an extensive data management and secure sharing solution for industrial data assets. Industrial data can be pulled, imported, managed, shared, and tracked with a high level of security using this design. This paper describes the solution's overall architectural design and gives an overview of the functionalities and incorporated technologies of the involved components, which are responsible for data collection, management, provenance, and sharing as well as for overall security.
Authored by Sangeetha Reji, Jonas Hetterich, Stamatis Pitsios, Vasilis Gkolemi, Sergi Perez-Castanos, Minas Pertselakis
The interest in metaverse applications by existing industries has seen massive growth thanks to the accelerated pace of research in key technological fields and the shift towards virtual interactions fueled by the Covid-19 pandemic. One key industry that can benefit from integration into the metaverse is healthcare. The potential to provide enhanced care for patients affected by multiple health issues, from standard afflictions to more specialized pathologies, is being explored through the fabrication of architectures that can support metaverse applications. In this paper, we focus on the persistent issues of lung cancer detection, monitoring, and treatment, to propose MetaLung, a privacy- and integrity-preserving architecture on the metaverse. We discuss the use cases to enable remote patient-doctor interactions, constant patient monitoring, and remote care. By leveraging technologies such as digital twins, edge computing, explainable AI, IoT, and virtual/augmented reality, we propose how the system could provide better assistance to lung cancer patients and suggest individualized treatment plans to the doctors based on their information. In addition, we describe the current implementation state of the AI-based Decision Support System for treatment selection, I3LUNG, and the current state of patient data collection.
Authored by Michele Zanitti, Mieszko Ferens, Alberto Ferrarin, Francesco Trovò, Vanja Miskovic, Arsela Prelaj, Ming Shen, Sokol Kosta
In the progressive development towards 6G, the ROBUST-6G initiative aims to provide fundamental contributions to developing data-driven, AI/ML-based security solutions to meet the new concerns posed by the dynamic nature of forthcoming 6G services and networks in the future cyber-physical continuum. This aim has to be accompanied by the transversal objective of protecting AI/ML systems from security attacks and ensuring the privacy of individuals whose data are used in AI-empowered systems. ROBUST-6G will essentially investigate the security and robustness of distributed intelligence, enhancing privacy and providing transparency by leveraging explainable AI/ML (XAI). Another objective of ROBUST-6G is to promote green and sustainable AI/ML methodologies to achieve energy efficiency in 6G network design. The vision of ROBUST-6G is to optimize the computation requirements and minimize the consumed energy while providing the necessary performance for AI/ML-driven security functionalities; this will enable sustainable solutions across society while suppressing any adverse effects. This paper aims to initiate the discussion and to highlight the key goals and milestones of ROBUST-6G, which are important for investigation towards a trustworthy and secure vision for future 6G networks.
Authored by Bartlomiej Siniarski, Chamara Sandeepa, Shen Wang, Madhusanka Liyanage, Cem Ayyildiz, Veli Yildirim, Hakan Alakoca, Fatma Kesik, Betül Paltun, Giovanni Perin, Michele Rossi, Stefano Tomasin, Arsenia Chorti, Pietro Giardina, Alberto Pérez, José Valero, Tommy Svensson, Nikolaos Pappas, Marios Kountouris
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of unmanned aerial vehicles (UAVs), hence departing from conventional perimeter defences that may expose vulnerabilities. The Zero Trust Architecture (ZTA) paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying unmanned aerial vehicles (UAVs) is 84.59\%. This is achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a unique method. Precise identification is crucial in Zero Trust Architecture (ZTA), as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to Zero Trust Architecture (ZTA) standards guarantees that the classifications of unmanned aerial vehicles (UAVs) are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
The fixed security solutions and related security configurations may no longer meet the diverse requirements of 6G networks. The Open Radio Access Network (O-RAN) architecture is going to be one key entry point to 6G where direct user access is granted. O-RAN promotes the design, deployment, and operation of the RAN with open interfaces, optimized by intelligent controllers. O-RAN networks are to be implemented as multi-vendor systems with interoperable components and can be programmatically optimized through a centralized abstraction layer and data-driven closed-loop control. However, since O-RAN contains many new open interfaces and data flows, new security issues may emerge. Providing recommendations for dynamic security policy adjustments by considering the energy availability and risk or security level of the network is something lacking in the current state of the art. When the security process is managed and executed in an autonomous way, it must also assure the transparency of the security policy adjustments and provide the reasoning behind the adjustment decisions to the interested parties whenever needed. Moreover, the energy consumption of such security solutions constantly adds overhead to the networking devices. Therefore, in this paper we discuss an XAI-based green security architecture for resilient open radio access networks in 6G, known as XcARet, for providing cognitive and transparent security solutions for O-RAN in a more energy-efficient manner.
Authored by Pawani Porambage, Jarno Pinola, Yasintha Rumesh, Chen Tao, Jyrki Huusko
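The dynamic, energy-aware policy adjustment argued for above can be viewed as repeatedly picking the cheapest security profile whose coverage meets the currently sensed risk within the available energy budget. The profiles, coverage numbers, and costs below are purely illustrative assumptions, not anything from the XcARet design.

```python
# (name, risk coverage in [0, 1], energy cost per hour) -- ordered by cost.
PROFILES = [
    ("baseline", 0.4, 1.0),
    ("enhanced", 0.7, 2.5),
    ("full-inspection", 0.95, 6.0),
]

def select_profile(risk, energy_budget):
    """Pick the cheapest profile whose coverage meets the sensed risk level."""
    for name, coverage, cost in PROFILES:
        if coverage >= risk and cost <= energy_budget:
            return name
    # Nothing affordable covers the risk: degrade gracefully to baseline
    # (a real controller would also raise an alert here).
    return PROFILES[0][0]

choice = select_profile(risk=0.6, energy_budget=3.0)
```

With a sensed risk of 0.6 and a budget of 3.0, the baseline profile's coverage is insufficient and the full-inspection profile is too expensive, so the selector settles on the middle "enhanced" profile. Exposing this rule (rather than a black-box score) is one way to keep the adjustment reasoning transparent, as the paper requires.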