Artificial intelligence (AI) has emerged as one of the most transformative technologies of the century and is gaining further importance as a means to address major societal challenges (e.g., achieving the Sustainable Development Goals) and to stay competitive in today's global markets. Its role as a key enabler in many areas of daily life leads to a growing dependence that must be managed accordingly to mitigate negative economic, societal, and privacy impacts. The European Union is therefore working on an AI Act, which defines concrete governance, risk, and compliance (GRC) requirements. One of the key demands of this regulation is the operation of a risk management system for high-risk AI systems. In this paper, we present a detailed analysis of the relevant literature in this domain and introduce our proposed approach for an AI Risk Management System (AIRMan).
Authored by Simon Tjoa, Peter Temper, Marlies Temper, Jakob Zanol, Markus Wagner, Andreas Holzinger
Despite the tremendous impact and potential of Artificial Intelligence (AI) for civilian and military applications, it has reached an impasse: learning and reasoning work well for certain applications, but AI generally suffers from a number of challenges such as hidden biases and a lack of causal reasoning. "Symbolic" AI, while not as efficient as "sub-symbolic" AI, offers transparency, explainability, verifiability, and trustworthiness. To address these limitations, neuro-symbolic AI has emerged as a new field that combines the efficiency of "sub-symbolic" AI with the assurance and transparency of "symbolic" AI. AI that suffers from the aforementioned challenges will remain inadequate for operating independently in contested, unpredictable, and complex multi-domain battlefield (MDB) environments for the foreseeable future, and AI-enabled autonomous systems will require a human in the loop to complete missions in such contested environments. Moreover, to successfully integrate AI-enabled autonomous systems into military operations, military operators need assurance that these systems will perform as expected and in a safe manner. Most importantly, Human-Autonomy Teaming (HAT) for shared learning, shared understanding, and joint reasoning is crucial to assist operations across military domains (space, air, land, maritime, and cyber) at combat speed with high assurance and trust. In this paper, we present a rough guide to key research challenges and perspectives of neuro-symbolic AI for assured and trustworthy HAT.
Authored by Danda Rawat
The objective of this study is to examine the key factors that contribute to the enhancement of financial network security through blockchain technology and artificial intelligence (AI) tools. We use Google Trend Analytics and VOSviewer to examine the interrelationships among significant concepts in the domain of blockchain-driven financial security. The findings provide insights and recommendations for stakeholders such as government entities, policymakers, regulators, and information technology professionals. By revealing the network connections among crucial aspects, our research aims to deepen the understanding of the intricate relationship between blockchain technology and AI tools in bolstering financial network security. These findings can serve as a valuable resource for future joint endeavors aimed at enhancing financial inclusion and fostering community well-being. Through blockchain technology and AI, stakeholders can collaboratively work towards a financial ecosystem that is both more secure and more inclusive, safeguarding the well-being and stability of individuals and enterprises alike.
Authored by Kuldeep Singh, Shivaprasad G.
Recent developments in generative artificial intelligence raise great concerns for privacy, security, and misinformation. Our work focuses on the detection of fake images generated by text-to-image models. We propose a dual-domain CNN-based classifier that utilizes image features in both the spatial and frequency domains. Through an extensive set of experiments, we demonstrate that the frequency-domain features facilitate high accuracy, zero-shot transfer between different generative models, and faster convergence. To the best of our knowledge, this is the first effective detector against generative models that are fine-tuned for a specific subject.
Authored by Eric Ji, Boxiang Dong, Bharath Samanthula, Na Zhou
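The core trick this abstract describes, pairing each image's spatial pixels with a frequency-domain view, can be sketched in a few lines. The shapes and normalization below are our illustrative assumptions, not the authors' published architecture:

```python
# Sketch of the dual-domain idea: stack an image's pixels with its
# log-magnitude FFT spectrum so a CNN sees both representations at once.
import numpy as np

def dual_domain_features(img: np.ndarray) -> np.ndarray:
    """img: (H, W) grayscale array in [0, 1]."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))     # centered 2D FFT
    log_mag = np.log1p(np.abs(spectrum))             # compress dynamic range
    log_mag = log_mag / (log_mag.max() + 1e-8)       # normalize to [0, 1]
    return np.stack([img, log_mag], axis=0)          # (2, H, W) CNN input

x = dual_domain_features(np.random.rand(256, 256))
print(x.shape)  # (2, 256, 256)
```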
With the continuous enrichment of intelligent applications, it is anticipated that 6G will evolve into a ubiquitous intelligent network. To achieve the vision of full-scenario intelligent services, how to coordinate AI capabilities across different domains is an urgent issue. After analyzing potential use cases and technological requirements, this paper proposes an end-to-end (E2E) cross-domain artificial intelligence (AI) collaboration framework for next-generation mobile communication systems. Two potential technical solutions, namely cross-domain AI management and orchestration and RAN-CN convergence, are presented to facilitate intelligent collaboration in both E2E scenarios and the edge network. Furthermore, we validate the performance of a cross-domain federated learning algorithm in a simulated environment for the prediction of received signal power. While ensuring the security and privacy of terminal data, we analyze the communication overhead caused by cross-domain training.
Authored by Zexu Li, Zhen Li, Xiong Xiong, Dongjie Liu
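A minimal sketch of the cross-domain federated averaging idea described above: each domain trains locally on its own received-signal-power data, and only model weights (never raw terminal data) cross the domain boundary. All names and sizes are illustrative assumptions, not the paper's implementation:

```python
# Weighted federated averaging (FedAvg) across domains.
import numpy as np

def fed_avg(domain_weights, domain_sizes):
    """Weighted average of per-domain model parameters (lists of arrays)."""
    total = sum(domain_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(domain_weights, domain_sizes))
        for i in range(len(domain_weights[0]))
    ]

# Two domains (e.g., RAN and CN), each holding a tiny linear model [W, b]:
ran = [np.array([[0.2]]), np.array([0.1])]
cn  = [np.array([[0.4]]), np.array([0.3])]
global_model = fed_avg([ran, cn], domain_sizes=[800, 200])
print(global_model)  # [[0.24]], [0.14]
```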
Integrated photonics based on the silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates interfaces between the photonic and electronic subsystems, which are required for buffering data and for optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed by the hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks exploiting such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on system performance (i.e., on power and phase distributions, which affect accuracy) and discuss possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
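To make the power/phase attack surface concrete, here is an illustrative numpy sketch (our assumption, not taken from the paper) of how a small injected phase error in a Mach-Zehnder interferometer, a common building block of photonic neural accelerators, shifts the output power distribution and hence downstream accuracy:

```python
# Output power split of an MZI versus an attacker-injected phase offset.
import numpy as np

BS = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # ideal 50:50 beamsplitter

def mzi_output_power(theta, delta=0.0):
    """Port powers of an MZI with set phase theta and attack offset delta."""
    phase = np.array([[np.exp(1j * (theta + delta)), 0], [0, 1]])
    out = BS @ phase @ BS @ np.array([1, 0])      # light enters port 0
    return np.abs(out) ** 2

print(mzi_output_power(np.pi / 2))               # intended 50/50 split
print(mzi_output_power(np.pi / 2, delta=0.3))    # perturbed split -> errors
```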
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), guaranteeing resilient and transparent security measures is of utmost importance. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance UAV security, thereby departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. Our methodology detects and identifies UAVs with an accuracy of 84.59%, achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a novel approach. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) improves the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
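The XAI step can be illustrated with a hedged stand-in: a simple classifier over hand-crafted RF features, explained per decision with LIME. The feature names, labels, and model below are hypothetical placeholders for the paper's deep network:

```python
# Per-prediction LIME explanations for an RF-signal classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["rssi", "bandwidth", "center_freq_offset", "burst_duration"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)    # synthetic UAV / non-UAV label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=feature_names, class_names=["background", "UAV"]
)
exp = explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(exp.as_list())  # per-feature contributions behind this access decision
```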
The rising use of Artificial Intelligence (AI) for human detection on edge camera systems has led to accurate but complex models that are challenging to interpret and debug. Our research presents a diagnostic method that uses XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office edge network, we identified the training dataset as the main source of bias and suggested model augmentation as a solution. Our approach helps identify model biases, which is essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data-exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks, as well as any new systems that will operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost together with SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by
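A minimal sketch of the pipeline described above, XGBoost explained with SHAP, on synthetic stand-in data (the CICIoT 2023 features and labels are only mimicked here):

```python
# Train a gradient-boosted classifier and compute per-feature attributions.
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 5))                # e.g., flow stats per IoT device
y = (X[:, 1] - X[:, 4] > 0.5).astype(int)     # synthetic "attack" label

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

explainer = shap.TreeExplainer(model)         # fast, exact for tree ensembles
shap_values = explainer.shap_values(X[:10])
print(shap_values.shape)  # per-sample, per-feature attributions for the admin
```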
Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats like .pickle, .HDF5, .joblib, and .ONNX, commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environments for these models, such as PyTorch and TensorFlow, lack robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) like ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk. Both open-source and enterprise versions of these SDKs have vulnerabilities that are just beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
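The pickle risk highlighted above is easy to demonstrate: unpickling invokes __reduce__, so a crafted file can execute arbitrary code on load. The snippet below shows a benign payload and one common, partial mitigation, an allow-list unpickler; it is a sketch, not a complete defense:

```python
# Why serialized models are executable code, and one partial countermeasure.
import io
import pickle

class Payload:
    def __reduce__(self):
        # Runs on unpickling; an attacker would call os.system instead of print.
        return (print, ("arbitrary code ran during pickle.load()",))

blob = pickle.dumps(Payload())
pickle.loads(blob)  # -> executes the payload

class AllowlistUnpickler(pickle.Unpickler):
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}  # extend as needed

    def find_class(self, module, name):
        if (module, name) not in self.ALLOWED:
            raise pickle.UnpicklingError(f"blocked: {module}.{name}")
        return super().find_class(module, name)

try:
    AllowlistUnpickler(io.BytesIO(blob)).load()
except pickle.UnpicklingError as e:
    print(e)  # blocked: builtins.print
```

Formats that store only tensors, such as safetensors, avoid embedded code entirely and are the more robust fix where they fit.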
In this experience paper, we present the lessons learned from the First University of St. Gallen Grand Challenge 2023, a competition involving interdisciplinary teams tasked with assessing the legal compliance of real-world AI-based systems with the European Union’s Artificial Intelligence Act (AI Act). The AI Act is the very first attempt in the world to regulate AI systems and its potential impact is huge. The competition provided firsthand experience and practical knowledge regarding the AI Act’s requirements. It also highlighted challenges and opportunities for the software engineering and AI communities.
Authored by Teresa Scantamburlo, Paolo Falcarin, Alberto Veneri, Alessandro Fabris, Chiara Gallese, Valentina Billa, Francesca Rotolo, Federico Marcuzzi
Cloud computing has become increasingly popular in the modern world. While it has brought many benefits to today's innovative technological society, cloud computing has also shown some drawbacks, particularly in the security of the cloud and its many services. Security practices differ in cloud computing because the role of securing information systems is passed to a third party. While this reduces the managerial burden on those who adopt cloud computing, it also brings risk to their data and the services they provide. Cloud services have become a large target for malicious actors due to the high density of valuable data stored in one relative location. By leveraging honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study the attack vectors used by malicious actors, further improving security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in hardening information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
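As a flavor of the honeynet approach, a low-interaction honeypot can be as small as a socket listener that logs connection attempts for later study; a real cloud deployment would isolate the host and feed the logs into the intrusion detection system. A minimal sketch, with illustrative port and logging choices:

```python
# Low-interaction honeypot: accept connections on an unused port and log them.
import socket
import datetime

def run_honeypot(host="0.0.0.0", port=2222):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        while True:
            conn, addr = srv.accept()
            with conn:
                conn.settimeout(5)
                try:
                    banner = conn.recv(1024)   # capture whatever the scanner sends
                except socket.timeout:
                    banner = b""
                print(f"{datetime.datetime.utcnow().isoformat()} "
                      f"probe from {addr[0]}:{addr[1]} sent {banner!r}")

# run_honeypot()  # uncomment to listen; pair with alerting in practice
```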
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing with the full breadth of AI security risks sustainably and systematically in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in the layers of the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
Edge computing brings computation and analytics capabilities closer to data sources. The available literature on AI solutions for edge computing primarily addresses just two edge layers: an upper layer that can directly communicate with the cloud and comprises one or more IoT edge devices, which gather sensing data from the IoT devices in the lower layer. However, industries mainly adopt a multi-layered architecture, per the ISA-95 standard, to isolate and safeguard their assets. In this architecture, only the top layer is connected to the cloud, while the lower layers of the hierarchy interact only with their neighbouring layers. Due to these added intermediate layers (and IoT edge devices) between the top and bottom layers, existing AI solutions for typical two-layer edge architectures may not be directly applicable. Moreover, not all industries are willing to send and store their private data in the cloud. Implementing AI solutions tailored to a hierarchical edge architecture improves response times while maintaining the same degree of security by working within the ISA-95-compliant network architecture. This paper explores a strategy for deploying a centralized federated-learning-based AI solution in a hierarchical edge architecture and demonstrates its efficacy through a real deployment scenario.
Authored by Narendra Bisht, Subhasri Duttagupta
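The hierarchical deployment can be sketched as two-tier federated averaging: devices report weights only to their neighbouring edge layer, which aggregates and forwards upward, so only the top layer ever talks to the cloud. Topology and numbers below are illustrative assumptions:

```python
# Two-tier federated aggregation over an ISA-95-style hierarchy.
import numpy as np

def weighted_avg(weights, sizes):
    total = sum(sizes)
    return sum(w * (n / total) for w, n in zip(weights, sizes))

# Layer-0 devices, grouped under two layer-1 edge nodes:
groups = {
    "edge_A": ([np.array([0.10]), np.array([0.30])], [100, 300]),
    "edge_B": ([np.array([0.50])], [200]),
}

# Each edge node aggregates its own devices first ...
edge_models = {k: weighted_avg(w, n) for k, (w, n) in groups.items()}
edge_sizes = {k: sum(n) for k, (_, n) in groups.items()}

# ... and only the per-edge aggregates climb to the top layer / cloud.
global_model = weighted_avg(list(edge_models.values()), list(edge_sizes.values()))
print(edge_models, global_model)  # edge_A -> [0.25], edge_B -> [0.5], global ~[0.33]
```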
The development of 5G, cloud computing, artificial intelligence (AI), and other new-generation information technologies has promoted the rapid development of the data center (DC) industry, which in turn has created severe energy consumption and carbon emission problems. In addition to traditional engineering-based methods, AI-based technology has been widely used in existing data centers. However, existing AI model training schemes are time-consuming and labor-intensive. To tackle these issues, we propose an automated training and deployment platform for AI models based on a cloud-edge architecture, covering data processing, data annotation, model training optimization, and model publishing. The proposed system can generate specific models based on the room environment and standardizes and automates model training, which is helpful for large-scale data center scenarios. Simulation and experimental results show that the proposed solution reduces the time required for single-model training by 76.2%, and multiple training tasks can run concurrently. It can therefore adapt to large-scale energy-saving scenarios and greatly improve model iteration efficiency, which raises the energy-saving rate and helps green energy conservation in data centers.
Authored by Chunfang Li, Zhou Guo, Xingmin He, Fei Hu, Weiye Meng
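The concurrency claim can be illustrated with a generic job-dispatch sketch; the training body is a placeholder for the platform's data processing, annotation, training, and publishing stages, and all names are hypothetical:

```python
# Dispatch independent per-room training jobs in parallel.
from concurrent.futures import ProcessPoolExecutor
import time

def train_model(room_id: str) -> str:
    time.sleep(1)                      # stand-in for real training work
    return f"model_for_{room_id}"

if __name__ == "__main__":
    rooms = ["dc_room_1", "dc_room_2", "dc_room_3"]
    with ProcessPoolExecutor() as pool:
        published = list(pool.map(train_model, rooms))
    print(published)                   # tasks ran concurrently, one per room
```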
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to prioritize vulnerabilities efficiently. The incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. In the future, research will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
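The paper's allocation model is not reproduced here, but one concrete instance of "optimal resource allocation" in this spirit is a small linear program: maximize risk reduction across clouds under a fixed budget. Numbers and bounds below are invented for illustration:

```python
# Linear program: allocate analyst hours across clouds to maximize risk reduction.
import numpy as np
from scipy.optimize import linprog

risk_reduction = np.array([9.0, 6.0, 4.0])   # per hour, per cloud provider
hours_cost = np.array([3.0, 2.0, 1.0])       # budget units per hour
budget = 10.0

res = linprog(
    c=-risk_reduction,                        # linprog minimizes, so negate
    A_ub=[hours_cost], b_ub=[budget],
    bounds=[(0, 4)] * 3,                      # at most 4 hours per cloud
    method="highs",
)
print(res.x, -res.fun)  # optimal hours per cloud and total risk reduced
```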
Data security in numerous industries, including banking, healthcare, and transportation, depends on cryptography, and this is becoming ever more evident as IoT and AI applications proliferate. While traditional cryptographic methods such as symmetric and asymmetric encryption each have benefits and drawbacks, there remains a demand for enhanced security that does not compromise efficiency. This work introduces a novel approach called multi-fused cryptography, which combines the benefits of distinct cryptographic methods to overcome their individual shortcomings. Through a comparative performance analysis, our study demonstrates that the proposed technique successfully enhances data security during network transmission.
Authored by Irin Loretta, Idamakanti Kasireddy, M. Prameela, D Rao, M. Kalaiyarasi, S. Saravanan
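The exact "multi-fused" construction is the authors'; as a hedged illustration of fusing methods, the sketch below shows a generic hybrid scheme with the Python cryptography library: a fast symmetric cipher for the payload plus asymmetric wrapping of the session key:

```python
# Hybrid encryption: symmetric payload cipher + asymmetric key wrapping.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
oaep = padding.OAEP(
    mgf=padding.MGF1(algorithm=hashes.SHA256()),
    algorithm=hashes.SHA256(),
    label=None,
)

# Sender: symmetric-encrypt the data, asymmetric-encrypt the session key.
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"sensor reading: 42")
wrapped_key = rsa_key.public_key().encrypt(session_key, oaep)

# Receiver: unwrap the key, then decrypt the payload.
plaintext = Fernet(rsa_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
print(plaintext)  # b'sensor reading: 42'
```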
The resurgence of Artificial Intelligence (AI) has been accompanied by a rise in ethical issues. AI practitioners either face challenges in making ethical choices when designing AI-based systems or are not aware of such challenges in the first place. Increasing the level of awareness and understanding of the perceptions of those who develop AI systems is a critical step toward mitigating ethical issues in AI development. Motivated by these challenges, needs, and the lack of engaging approaches to address these, we developed an interactive, scenario-based ethical AI quiz. It allows AI practitioners, including software engineers who develop AI systems, to self-assess their awareness and perceptions about AI ethics. The experience of taking the quiz, and the feedback it provides, will help AI practitioners understand the gap areas, and improve their overall ethical practice in everyday development scenarios. To demonstrate these expected outcomes and the relevance of our tool, we also share a preliminary user study. The video demo can be found at https://zenodo.org/record/7601169#.Y9xgA-xBxhF.
Authored by Wei Teo, Ze Teoh, Dayang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
At present, technological solutions based on artificial intelligence (AI) are being adopted at an accelerating pace across various sectors of the economy and social relations worldwide. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identifying the vulnerabilities, threats, and risks of AI technologies requires considering each technology separately, or several in combination when they are used jointly in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors, and public services (including Marketplaces) have received the most attention. Due to their high importance and impact on most AI solutions, this paper focuses on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessments, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training materials, plans, policies, and procedures. However, cyber security risk assessment and cyber insurance are complicated to manage and measure, and cybersecurity professionals need a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle by examining and demonstrating how AI can aid in cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs, together with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by AI itself; and (c) because generative AI can produce easy-to-understand answers to various questions in natural language, it may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. In addition, it became clear that attacks using AI and responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's three laws of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
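The attack-tree analysis mentioned above lends itself to a tiny evaluator: AND gates require every child step to succeed, OR gates any one. The tree contents below are invented for illustration and assume independent leaf probabilities:

```python
# Minimal AND/OR attack-tree evaluation.
import math

def evaluate(node):
    """Feasibility of an attack subtree, assuming independent leaves."""
    if "prob" in node:                       # leaf node
        return node["prob"]
    child_probs = [evaluate(c) for c in node["children"]]
    if node["gate"] == "AND":                # all steps must succeed
        return math.prod(child_probs)
    # OR gate: at least one branch succeeds
    return 1.0 - math.prod(1.0 - p for p in child_probs)

ai_attacks_humans = {
    "gate": "OR",
    "children": [
        {"gate": "AND", "children": [
            {"prob": 0.3},                   # AI generates a malicious program
            {"prob": 0.5},                   # no reporting of attack signs
        ]},
        {"prob": 0.05},                      # safeguards bypassed directly
    ],
}
print(evaluate(ai_attacks_humans))           # ~0.19
```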
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. Building on the role of artificial intelligence (AI) in computer network security management, it establishes a network security system that combines AI with traditional technologies. Furthermore, by incorporating an attention mechanism into a Graph Neural Network (GNN) and applying it to botnet detection, a more robust and comprehensive network security system is developed to improve detection and response capabilities against network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with false positive and false negative rates decreasing to 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNNs and attention mechanisms on network security performance.
Authored by Fei Xia, Zhihao Zhou
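A simplified single-head graph-attention layer conveys the "GNN plus attention" mechanism (a pared-down GAT-style layer, not the paper's full botnet-detection model); hosts are nodes and observed communications form the adjacency matrix:

```python
# Single-head graph attention: score neighbors, then aggregate by attention.
import torch
import torch.nn.functional as F

class TinyGraphAttention(torch.nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim, bias=False)
        self.att = torch.nn.Linear(2 * out_dim, 1, bias=False)

    def forward(self, x, adj):
        h = self.lin(x)                                     # (N, out_dim)
        n = h.size(0)
        pairs = torch.cat(
            [h.unsqueeze(1).expand(n, n, -1), h.unsqueeze(0).expand(n, n, -1)],
            dim=-1,
        )
        scores = F.leaky_relu(self.att(pairs).squeeze(-1))  # (N, N) edge scores
        scores = scores.masked_fill(adj == 0, float("-inf"))
        alpha = torch.softmax(scores, dim=-1)               # attention weights
        return alpha @ h                                    # attended aggregation

# 4 hosts with 8 flow features each; adj marks observed communications.
x = torch.randn(4, 8)
adj = torch.tensor([[1, 1, 0, 0], [1, 1, 1, 0], [0, 1, 1, 1], [0, 0, 1, 1]])
print(TinyGraphAttention(8, 16)(x, adj).shape)  # torch.Size([4, 16])
```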
In this work, we present a comprehensive survey of applications of the most recent attention-based transformer architectures in information security. Our review reveals three primary areas of application: intrusion detection, anomaly detection, and malware detection. We present an overview of attention-based mechanisms and their application to each cybersecurity use case, and discuss open directions for future trends in Artificial Intelligence-enabled information security.
Authored by M. Vubangsi, Sarumi Abidemi, Olukayode Akanni, Auwalu Mubarak, Fadi Al-Turjman
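For reference, the attention primitive underlying the surveyed transformer detectors is scaled dot-product attention, shown here as a generic numpy sketch rather than any one paper's model:

```python
# Scaled dot-product attention over a set of embedded events.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: (n, d), K: (m, d), V: (m, dv) -> (n, dv)."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])         # similarity, scaled by sqrt(d)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V

# e.g., 10 log events embedded in 16 dims, attending over one another:
rng = np.random.default_rng(0)
E = rng.normal(size=(10, 16))
print(scaled_dot_product_attention(E, E, E).shape)  # (10, 16)
```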