The aim of this study is to review XAI studies in terms of their solutions, applications, and challenges in renewable energy and resources. The results show that XAI helps explain how AI models reach their decisions, increases confidence and trust in these models, makes decision-making more reliable, and demonstrates the transparency of the decision-making mechanism. Even though a number of XAI methods exist, such as SHAP, LIME, ELI5, DeepLIFT, and rule-based approaches, problems in metrics, evaluation, performance, and explanation remain domain-specific and require domain experts to develop new models or to apply available techniques. It is hoped that this article will help researchers develop XAI solutions for their energy applications and improve their AI approaches in further studies.
Authored by Betül Ersöz, Şeref Sağıroğlu, Halil Bülbül
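To make the XAI methods named in the abstract above concrete, here is a minimal sketch of SHAP explaining a tabular regression model. The wind-power features, toy power curve, and model choice are illustrative assumptions, not taken from the reviewed studies.

```python
# Minimal SHAP sketch for a renewable-energy-style regressor.
# Feature names and the toy power curve are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "wind_speed": rng.uniform(0, 25, 500),
    "air_density": rng.uniform(1.1, 1.3, 500),
    "temperature": rng.uniform(-5, 35, 500),
})
y = 0.5 * X["air_density"] * X["wind_speed"] ** 3 / 100  # toy power curve

model = RandomForestRegressor(n_estimators=100).fit(X, y)

# TreeExplainer computes per-feature Shapley values for each prediction,
# showing how much each input pushed the forecast up or down.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:50])
print(shap_values.shape)  # (50, 3): one attribution per sample per feature
```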
Sixth generation (6G)-enabled massive network management and orchestration (MANO), alongside distributed supervision and fully reconfigurable control logic that manages the dynamic arrangement of network components such as cell-free architectures, OpenAirInterface (OAI), and reconfigurable intelligent surfaces (RIS), is a potent enabler for the upcoming pervasive digitalization of vertical use cases. In such a disruptive domain, artificial intelligence (AI)-driven, zero-touch, intent-based "Network of Networks" automation shall be able to guarantee a high degree of security, efficiency, scalability, and sustainability, especially in cross-domain and interoperable deployment environments (i.e., where points of presence (PoPs) are non-independent and identically distributed (non-IID)). To this end, this paper presents 6G-BRICKS, a novel, open, and fully reconfigurable networking architecture for 6G cellular paradigms. 6G-BRICKS will deliver the first open and programmable O-RAN Radio Unit (RU) for 6G networks, termed the OpenRU, based on an NI USRP platform. Moreover, 6G-BRICKS will integrate the RIS concept into OAI, alongside Testing as a Service (TaaS) capabilities, multi-tenancy, disaggregated Operations Support Systems (OSS), and Deep Edge adaptation. The overall ambition of 6G-BRICKS is to offer evolvability and granularity while, at the same time, tackling major challenges such as the interdisciplinary effort and large investments required for 6G integration.
Authored by Kostas Ramantas, Anastasios Bikos, Walter Nitzold, Sofie Pollin, Adlen Ksentini, Sylvie Mayrargue, Vasileios Theodorou, Loizos Christofi, Georgios Gardikis, Md Rahman, Ashima Chawla, Francisco Ibañez, Ioannis Chochliouros, Didier Nicholson, Mario Montagud, Arman Shojaeifard, Alexios Pagkotzidis, Christos Verikoukis
This paper presents a reputation-based threat mitigation framework that defends against potential security threats in electroencephalogram (EEG) signal classification during model aggregation in Federated Learning. While EEG signal analysis has attracted attention because of the emergence of brain-computer interface (BCI) technology, it is difficult to create efficient learning models for EEG analysis because of the distributed nature of EEG data and the related privacy and security concerns. To address these challenges, the proposed defense framework leverages the Federated Learning paradigm to preserve privacy through collaborative model training with localized data from dispersed sources, and introduces a reputation-based mechanism to mitigate the influence of data poisoning attacks and identify compromised participants. To assess the efficiency of the proposed reputation-based federated learning defense framework, data poisoning attacks based on the risk level of training data, derived with Explainable Artificial Intelligence (XAI) techniques, are conducted on both publicly available EEG signal datasets and a self-established EEG signal dataset. Experimental results on the poisoned datasets show that the proposed defense methodology performs well in EEG signal classification while reducing the risks associated with security threats.
Authored by Zhibo Zhang, Pengfei Li, Ahmed Hammadi, Fusen Guo, Ernesto Damiani, Chan Yeun
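The following sketch illustrates the general idea of reputation-weighted aggregation described in the abstract above. The scoring rule (cosine agreement with the mean update) is our assumption for illustration, not the paper's exact mechanism.

```python
# Illustrative reputation-weighted federated averaging.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reputation_weighted_aggregate(updates, reputations, decay=0.9):
    """updates: list of flattened model-update vectors, one per client.
    reputations: per-client scores in [0, 1], carried across rounds."""
    mean_update = np.mean(updates, axis=0)
    for i, u in enumerate(updates):
        # Clients whose update diverges from consensus lose reputation;
        # poisoned updates tend to point away from the honest gradient.
        agreement = max(cosine(u, mean_update), 0.0)
        reputations[i] = decay * reputations[i] + (1 - decay) * agreement
    weights = np.array(reputations) / (np.sum(reputations) + 1e-12)
    # Low-reputation (suspected compromised) clients contribute less.
    return np.sum([w * u for w, u in zip(weights, updates)], axis=0), reputations

# Demo: three honest clients plus one poisoned (sign-flipped) update.
ups = [np.ones(4), np.ones(4) * 1.1, np.ones(4) * 0.9, -np.ones(4)]
agg, reps = reputation_weighted_aggregate(ups, reputations=[1.0] * 4)
print(np.round(reps, 2))  # the dissenting client's reputation starts to decay
```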
Internet of Things (IoT) and Artificial Intelligence (AI) systems have become prevalent across various industries, leading to diverse and far-reaching outcomes, and their convergence has garnered significant attention in the tech world. Studies and reviews are instrumental in supplying industries with a nuanced understanding of the multifaceted developments of this joint domain. This paper undertakes a critical examination of existing perspectives and governance policies, adopting a contextual approach and addressing not only the potential but also the limitations of these policies. In the complex landscape of AI-infused IoT systems, transparency and interpretability are pivotal qualities for informed decision-making and effective governance. In AI governance, transparency allows for scrutiny and accountability, while interpretability facilitates trust and confidence in AI-driven decisions. Therefore, we also evaluate and advocate for the use of two very popular eXplainable AI (XAI) techniques, SHAP and LIME, in explaining the predictive results of AI models. Subsequently, this paper underscores the imperative of not only maximizing the advantages and services derived from the integration of IoT and AI but also diligently minimizing the possible risks and challenges.
Authored by Nadine Fares, Denis Nedeljkovic, Manar Jammal
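Complementing the SHAP sketch earlier, here is a minimal LIME example for explaining a single prediction of a tabular classifier, of the kind the abstract above advocates. The dataset, feature names, and class names are placeholders.

```python
# Minimal LIME sketch for one tabular prediction; all names hypothetical.
import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(300, 4))
y_train = (X_train[:, 0] + X_train[:, 2] > 0).astype(int)
clf = RandomForestClassifier().fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=["pkt_rate", "payload_len", "dst_entropy", "ttl"],
    class_names=["benign", "anomalous"],
)
# LIME fits a local linear surrogate around this one instance and reports
# which features drove the model's decision there.
exp = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=4)
print(exp.as_list())
```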
Malware poses a significant threat to global cybersecurity, with machine learning emerging as the primary method for its detection and analysis. However, the opaque nature of machine learning's decision-making process often leads to confusion among stakeholders, undermining their confidence in the detection outcomes. To enhance the trustworthiness of malware detection, Explainable Artificial Intelligence (XAI) is employed to offer transparent and comprehensible explanations of the detection mechanisms, enabling stakeholders to understand them more deeply and assisting in the development of defensive strategies. Despite recent XAI advancements, several challenges remain unaddressed. In this paper, we explore the specific obstacles encountered in applying XAI to malware detection and analysis, aiming to provide a roadmap for future research in this critical domain.
Authored by L. Rui, Olga Gadyatskaya
Deep learning models are being utilized and further developed in many application domains, but challenges still exist regarding their interpretability and consistency. Interpretability is important for providing users with transparent information that enhances trust between the user and the learning model. It also gives developers feedback for improving the consistency of their deep learning models. In this paper, we present a novel architectural design that embeds interpretation into the architecture of the deep learning model. We apply dynamic pixel-wise weights to input images and produce a highly correlated feature map for classification. This feature map is useful for providing interpretation and transparent information about the decision-making of the deep learning model, while retaining the full context of the relevant feature information, in contrast to previous interpretation algorithms. The proposed model achieved 92% accuracy on CIFAR-10 classification without fine-tuning the hyperparameters. Furthermore, it achieved 20% accuracy under an 8/255 PGD adversarial attack for 100 iterations without any defense method, indicating extra natural robustness compared to other Convolutional Neural Network (CNN) models. The results demonstrate the feasibility of the proposed architecture.
Authored by Weimin Zhao, Qusay Mahmoud, Sanaa Alwidian
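A rough PyTorch sketch of the idea the abstract above describes: learn a per-pixel weight map, apply it to the input, and keep the map as the interpretation. This is our reading of the abstract under stated assumptions, not the authors' released architecture.

```python
# Hypothetical pixel-wise-weighting classifier; shapes chosen for CIFAR-10.
import torch
import torch.nn as nn

class PixelWeightedClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Small head that predicts one weight per input pixel.
        self.weight_net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),  # weights in (0, 1)
        )
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        w = self.weight_net(x)         # (B, 1, H, W) saliency-like map
        logits = self.backbone(x * w)  # classify the re-weighted image
        return logits, w               # w doubles as the explanation

logits, saliency = PixelWeightedClassifier()(torch.randn(2, 3, 32, 32))
print(logits.shape, saliency.shape)  # [2, 10] and [2, 1, 32, 32]
```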
The vision and key elements of the 6th generation (6G) ecosystem are being discussed very actively in academic and industrial circles. In this work, we provide a timely update to the 6G security vision presented in our previous publications to contribute to these efforts. We elaborate further on some key security challenges for the envisioned 6G wireless systems, explore recently emerging aspects, and identify potential solutions from an additive perspective. This speculative treatment aims explicitly to complement our previous work through the lens of developments of the last two years in 6G research and development.
Authored by Gürkan Gur, Pawani Porambage, Diana Osorio, Attila Yavuz, Madhusanka Liyanage
Procurement is a critical step in the setup of systems, as reverting decisions made at this point is typically time-consuming and costly. Artificial Intelligence (AI)-based systems in particular face many challenges, starting with side parameters that are unclear or unknown at design time, changing ecosystems and regulations, and vendors overselling the capabilities of their systems. Furthermore, the AI Act puts forth a great number of additional requirements for operators of critical AI systems, such as risk management and transparency measures, making procurement even more complex. In addition, the number of providers of AI systems is increasing drastically. In this paper, we provide guidelines for the procurement of AI-based systems that support the decision maker in identifying the key elements for procuring secure AI systems, depending on the respective technical and regulatory environment. Furthermore, we provide additional resources for applying these guidelines in practice.
Authored by Peter Kieseberg, Christina Buttinger, Laura Kaltenbrunner, Marlies Temper, Simon Tjoa
In recent years, with the accelerated development of social informatization, the digital economy has gradually become the core force of economic growth in various countries. As carriers of the digital economy, Internet data centers (IDCs) are increasing in number day by day, and their construction volume and scale are expanding. Energy consumption and carbon emissions are growing rapidly, as IDCs require large amounts of electricity to run servers, storage, backup, cooling systems, and other infrastructure. IDCs therefore face serious challenges in energy saving and greenhouse gas emission reduction, and how to achieve green, low-carbon, high-quality development is of particular concern. This paper summarizes and classifies the current green energy-saving technologies in IDCs, introduces AI-based energy-saving solutions for IDC cooling systems in detail, compares and analyzes the energy-saving effects of AI-based and traditional energy-saving technologies, and points out the advantages of applying AI energy-saving solutions in green IDCs.
Authored by Hongdan Ren, Xinlan Xu, Yu Zeng
As the world becomes increasingly interconnected, new AI technologies bring new opportunities while also giving rise to new network security risks. Network security is critical to national and social security, as well as to enterprise and personal information security. This article introduces the empowerment of network security by AI, the development trends of its application in the field of network security, the challenges faced, and corresponding suggestions, providing a beneficial exploration of how to apply artificial intelligence technology effectively to computer network security protection.
Authored by Jia Li, Sijia Zhang, Jinting Wang, Han Xiao
The network of smart physical objects has a significant impact on the growth of urban civilization. Evidence has been cited from digital sources such as scientific journals, conference proceedings, and other publications. Along with other security services, these kinds of structured, sophisticated data have been used to address a number of security-related challenges. Here, many forms of cutting-edge machine learning and AI techniques are used to investigate how combining two or more algorithms with AI and ML might make the Internet of Things more secure. The main objective of this paper is to explore how ML and AI can be applied to improve IoT security.
Authored by Brijesh Singh, Santosh Sharma, Ravindra Verma
Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats such as .pickle, .HDF5, .joblib, and .ONNX, commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environments for these models, such as PyTorch and TensorFlow, lack robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) such as ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk: both the open-source and enterprise versions of these SDKs have vulnerabilities that are only beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
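The pickle risk named in the abstract above is easy to demonstrate. The self-contained sketch below shows why a pickled "model" is executable code: deserializing it runs an attacker-chosen command. The specific payload is illustrative; never unpickle untrusted files.

```python
# Demonstration of pickle's arbitrary-code-execution hazard.
import pickle
import subprocess

class MaliciousModel:
    def __reduce__(self):
        # pickle will call subprocess.run(["id"]) at load time.
        return (subprocess.run, (["id"],))

payload = pickle.dumps(MaliciousModel())
pickle.loads(payload)  # executes `id` -- arbitrary code, not just data

# Safer pattern: persist only weights in an inert format (e.g. safetensors
# or plain arrays) and rebuild the model object from trusted code.
```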
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing sustainably and systematically with the full breadth of AI security risk in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum across the layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Artificial intelligence (AI) has emerged as one of the most formative technologies of the century and gains further importance for solving major societal challenges (e.g., achieving the Sustainable Development Goals) and as a means of staying competitive in today's global markets. Its role as a key enabler in many areas of our daily lives leads to a growing dependence, which has to be managed accordingly to mitigate negative economic, societal, or privacy impacts. The European Union is therefore working on an AI Act, which defines concrete governance, risk, and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for high-risk AI systems. In this paper, we present a detailed analysis of the relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
Despite the tremendous impact and potential of Artificial Intelligence (AI) for civilian and military applications, it has reached an impasse: learning and reasoning work well for certain applications, but AI generally suffers from a number of challenges such as hidden biases and issues with causality. "Symbolic" AI, while not as efficient as "sub-symbolic" AI, offers transparency, explainability, verifiability, and trustworthiness. To address these limitations, neuro-symbolic AI has emerged as a new AI field that combines the efficiency of "sub-symbolic" AI with the assurance and transparency of "symbolic" AI. Furthermore, AI that suffers from the aforementioned challenges will remain inadequate for operating independently in contested, unpredictable, and complex multi-domain battlefield (MDB) environments for the foreseeable future, and AI-enabled autonomous systems will require a human in the loop to complete missions in such contested environments. Moreover, to successfully integrate AI-enabled autonomous systems into military operations, military operators need assurance that these systems will perform as expected and in a safe manner. Most importantly, Human-Autonomy Teaming (HAT) for shared learning, understanding, and joint reasoning is crucial to assist operations across the military domains (space, air, land, maritime, and cyber) at combat speed with high assurance and trust. In this paper, we present a rough guide to the key research challenges and perspectives of neuro-symbolic AI for assured and trustworthy HAT.
Authored by Danda Rawat
In the context of increasing digitalization and the growing reliance on intelligent systems, the importance of network information security has become paramount. This study delves into the exploration of network information security technologies within the framework of a digital intelligent security strategy. The aim is to comprehensively analyze the diverse methods and techniques employed to ensure the confidentiality, integrity, and availability of digital assets in the contemporary landscape of cybersecurity challenges. Key methodologies include the review and analysis of encryption algorithms, intrusion detection systems, authentication protocols, and anomaly detection mechanisms. The investigation also encompasses the examination of emerging technologies like blockchain and AI-driven security solutions. Through this research, we seek to provide a comprehensive understanding of the evolving landscape of network information security, equipping professionals and decision-makers with valuable insights to fortify digital infrastructure against ever-evolving threats.
Authored by Yingshi Feng
The rising use of Artificial Intelligence (AI) for human detection on Edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method that uses XAI for model debugging, with expert-driven problem identification and solution creation. Validating our approach on the Bytetrack model in a real-world office Edge network, we identified the training dataset as the main source of bias and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks, as well as for any new systems that will potentially operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost together with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods, applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by
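A sketch of the pipeline the abstract above describes: an XGBoost attack classifier explained with SHAP. Loading of CICIoT 2023 is stubbed with random data, and the labels and feature indices are hypothetical placeholders.

```python
# XGBoost + SHAP sketch; CICIoT 2023 loading stubbed with synthetic data.
import numpy as np
import shap
import xgboost as xgb

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 6))             # stand-in for CICIoT features
y = (X[:, 1] - X[:, 4] > 0.5).astype(int)  # stand-in attack labels

model = xgb.XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)

# SHAP attributes each alert to the traffic features that caused it, which
# is what lets administrators tailor mitigations to specific threat types.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])
top = np.abs(shap_values).mean(axis=0).argsort()[::-1]
print("most influential feature indices:", top[:3])
```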
This research aims to investigate the challenge of recognizing and evaluating distributed database management system (DDBMS) security and privacy issues, and to describe important risk indicators that may lead to data security difficulties in distributed resources, both physical and operational. As part of this research, an investigation was conducted of the primary DDBMS security and privacy challenges encountered while developing and managing a standard DDBMS intended to support data procedures and offer data services. The data assessment was produced from the findings of surveys and questionnaires administered to DDBMS security and privacy professionals with varying degrees of training and different emphases in their work within this field of expertise. The findings of the research include a list of primary risk factors ranked by their significance and frequency in practice, as well as a spotlight on the most important security and privacy measures. This article summarizes data on traditional methods for illustrating data security challenges using statistical, qualitative, and mixed evaluation, as well as the most recent techniques based on intelligent categorization and examination of DDBMS security and privacy challenge factors and on working with large amounts of data.
Authored by Santosh Kumar, Kishor Dash, Bhargav Piduru, K. Rajkumar, Bandi Bhaskar, Mohit Tiwari
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promises extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
In this experience paper, we present the lessons learned from the First University of St. Gallen Grand Challenge 2023, a competition involving interdisciplinary teams tasked with assessing the legal compliance of real-world AI-based systems with the European Union’s Artificial Intelligence Act (AI Act). The AI Act is the very first attempt in the world to regulate AI systems and its potential impact is huge. The competition provided firsthand experience and practical knowledge regarding the AI Act’s requirements. It also highlighted challenges and opportunities for the software engineering and AI communities.
Authored by Teresa Scantamburlo, Paolo Falcarin, Alberto Veneri, Alessandro Fabris, Chiara Gallese, Valentina Billa, Francesca Rotolo, Federico Marcuzzi
Cloud computing has become increasingly popular in the modern world. While it has brought many benefits to the innovative technological era society lives in today, cloud computing has also shown some drawbacks, which are present in the security aspect of the cloud and its many services. Security practices differ in the realm of cloud computing, as the role of securing information systems is passed on to a third party. While this reduces the managerial strain on those who enlist cloud computing, it also brings risk to their data and the services they may provide. Cloud services have become a large target for those with malicious intent due to the high density of valuable data stored in one relative location. By soliciting help from honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study the attack vectors used by malicious actors, further improving security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in the hardening of information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
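To ground the honeynet idea from the abstract above, here is a deliberately tiny honeypot listener of the kind a honeynet is assembled from: it accepts connections, logs the attacker's first bytes for IDS enrichment, and exposes no real service. The port, banner, and logging destination are made-up illustrations.

```python
# Minimal decoy listener; real honeynets add isolation and full capture.
import datetime
import socket

HOST, PORT = "0.0.0.0", 2222  # decoy "SSH" port, no real sshd behind it

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    while True:
        conn, addr = srv.accept()
        with conn:
            conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")  # plausible banner
            data = conn.recv(1024)                    # capture probe bytes
            # In practice this record would feed the IDS / threat-intel store.
            print(datetime.datetime.utcnow().isoformat(), addr, data[:64])
```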