Recent developments in generative artificial intelligence raise serious concerns about privacy, security, and misinformation. Our work focuses on detecting fake images generated by text-to-image models. We propose a dual-domain CNN-based classifier that utilizes image features in both the spatial and frequency domains. Through an extensive set of experiments, we demonstrate that the frequency-domain features facilitate high accuracy, zero-shot transfer between different generative models, and faster convergence. To the best of our knowledge, this is the first effective detector against generative models that are fine-tuned for a specific subject.
Authored by Eric Ji, Boxiang Dong, Bharath Samanthula, Na Zhou
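The frequency-domain branch of such a dual-domain detector typically consumes a (log-)magnitude spectrum of the image alongside the raw pixels. As a hedged sketch only — the paper's exact architecture and feature pipeline are not given in the abstract — the frequency-feature extraction might look like this naive 2D DFT (real systems would use `numpy.fft.fft2`):

```python
import cmath
import math

def dft2_magnitude(img):
    """Naive O(N^4) 2D DFT magnitude spectrum of a small grayscale patch.

    Illustrative only: it shows the quantity being extracted, not a
    production implementation (use numpy.fft.fft2 on full images).
    """
    h, w = len(img), len(img[0])
    mag = [[0.0] * w for _ in range(h)]
    for u in range(h):
        for v in range(w):
            acc = 0j
            for x in range(h):
                for y in range(w):
                    acc += img[x][y] * cmath.exp(
                        -2j * math.pi * (u * x / h + v * y / w))
            mag[u][v] = abs(acc)
    return mag

def frequency_features(img):
    """Log-magnitude spectrum: a common frequency-domain input channel."""
    return [[math.log1p(m) for m in row] for row in dft2_magnitude(img)]
```

The spatial branch would consume the pixels directly; the two feature maps are then fed to parallel CNN branches and fused before classification.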
With the continuous enrichment of intelligent applications, it is anticipated that 6G will evolve into a ubiquitous intelligent network. In order to achieve the vision of full-scenario intelligent services, how to coordinate AI capabilities across different domains is an urgent issue. After analyzing potential use cases and technological requirements, this paper proposes an end-to-end (E2E) cross-domain artificial intelligence (AI) collaboration framework for next-generation mobile communication systems. Two potential technical solutions, namely cross-domain AI management and orchestration and RAN-CN convergence, are presented to facilitate intelligent collaboration in both E2E scenarios and the edge network. Furthermore, we have validated the performance of a cross-domain federated learning algorithm in a simulated environment for the prediction of received signal power. While ensuring the security and privacy of terminal data, we have analyzed the communication overhead caused by cross-domain training.
Authored by Zexu Li, Zhen Li, Xiong Xiong, Dongjie Liu
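Cross-domain federated learning of the kind validated here generally follows the FedAvg pattern: each domain trains locally on its own data, and only model weights cross domain boundaries, which is what keeps terminal data private while incurring the communication overhead the paper measures. A minimal sketch of the server-side aggregation step (the authors' concrete algorithm is not specified in the abstract):

```python
def federated_average(client_weights, client_sizes):
    """One FedAvg aggregation round on the coordinating server.

    client_weights: one flat weight vector per domain/client
    client_sizes:   number of local training samples per client,
                    used to weight each client's contribution
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]
```

Per round, each client uploads `dim` floats and downloads the averaged vector, so the cross-domain traffic grows linearly with model size and round count — the quantity analyzed as overhead.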
Integrated photonics based on silicon photonics platform is driving several application domains, from enabling ultra-fast chip-scale communication in high-performance computing systems to energy-efficient optical computation in artificial intelligence (AI) hardware accelerators. Integrating silicon photonics into a system necessitates the adoption of interfaces between the photonic and the electronic subsystems, which are required for buffering data and optical-to-electrical and electrical-to-optical conversions. Consequently, this can lead to new and inevitable security breaches that cannot be fully addressed using hardware security solutions proposed for purely electronic systems. This paper explores different types of attacks profiting from such breaches in integrated photonic neural network accelerators. We show the impact of these attacks on the system performance (i.e., power and phase distributions, which impact accuracy) and possible solutions to counter such attacks.
Authored by Felipe De Magalhaes, Mahdi Nikdast, Gabriela Nicolescu
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of UAVs, hence departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying UAVs is 84.59%. This is achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a unique method. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to ZTA standards guarantees that the classifications of UAVs are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
Currently, research on 5G communication is focusing increasingly on communication techniques, and previous studies have primarily focused on preventing communications disruption. To date, there has not been sufficient research on network anomaly detection as a security countermeasure. Since 5G network data will be more complex and dynamic, intelligent network anomaly detection is a necessary solution for protecting the network infrastructure. However, because AI-based network anomaly detection is data-dependent, it is difficult to collect actual labeled data in the industrial field. In addition, performance degradation may occur when models are applied in the real field because of domain shift. Therefore, in this paper, we study an intelligent network anomaly detection technique based on domain adaptation (DA) in the 5G edge network in order to solve the problems caused by data-driven AI. It allows us to train models in data-rich domains and apply detection techniques in domains with an insufficient amount of data. Our method will contribute to AI-based network anomaly detection for improving the security of the 5G edge network.
Authored by Hyun-Jin Kim, Jonghoon Lee, Cheolhee Park, Jong-Geun Park
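Domain adaptation covers many techniques; one of the simplest, statistic matching, shifts and scales source-domain features so that their first two moments match the target domain's, letting a detector trained in a data-rich domain be applied where labeled data is scarce. This is an illustrative baseline only — not necessarily the DA method the paper develops:

```python
import math

def moment_match(source, target):
    """Align each source feature column to the target domain's mean/std.

    source, target: lists of feature rows (same feature dimension).
    Returns the source rows re-expressed in target-domain statistics.
    """
    def col_stats(data, j):
        col = [r[j] for r in data]
        m = sum(col) / len(col)
        v = sum((x - m) ** 2 for x in col) / len(col)
        return m, math.sqrt(v) or 1.0  # guard against zero variance

    n = len(source[0])
    stats = [(col_stats(source, j), col_stats(target, j)) for j in range(n)]
    return [
        [(row[j] - sm) / ss * ts + tm
         for j, ((sm, ss), (tm, ts)) in enumerate(stats)]
        for row in source
    ]
```

An anomaly detector trained on the adapted source features then sees inputs whose per-feature scale matches the target 5G edge traffic, mitigating the domain-shift degradation the abstract describes.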
The rising use of Artificial Intelligence (AI) in human detection on Edge camera systems has led to accurate but complex models, challenging to interpret and debug. Our research presents a diagnostic method using XAI for model debugging, with expert-driven problem identification and solution creation. Validated on the Bytetrack model in a real-world office Edge network, we found the training dataset as the main bias source and suggested model augmentation as a solution. Our approach helps identify model biases, essential for achieving fair and trustworthy models.
Authored by Truong Nguyen, Vo Nguyen, Quoc Cao, Van Truong, Quoc Nguyen, Hung Cao
The vision and key elements of the 6th generation (6G) ecosystem are being discussed very actively in academic and industrial circles. In this work, we provide a timely update to the 6G security vision presented in our previous publications to contribute to these efforts. We elaborate further on some key security challenges for the envisioned 6G wireless systems, explore recently emerging aspects, and identify potential solutions from an additive perspective. This speculative treatment aims explicitly to complement our previous work through the lens of developments of the last two years in 6G research and development.
Authored by Gürkan Gur, Pawani Porambage, Diana Osorio, Attila Yavuz, Madhusanka Liyanage
The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical for the massively IoT-connected systems of 5G networks, as well as any new systems that will potentially operate in the 6G environment. Artificial Intelligence is expected to play a vital role in the operation and management of 6G networks. Because of the complex interaction of IoT and 6G networks, Explainable Artificial Intelligence (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT), utilizing XGBoost together with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods, applied to the CICIoT 2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to address specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by
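SHAP's attributions are Shapley values from cooperative game theory: each feature's credit is its average marginal contribution to the model output over all feature orderings. For a handful of features this can be computed exactly by brute force — a sketch of what the SHAP library approximates efficiently for large models (the model and baseline below are illustrative placeholders, not the paper's XGBoost classifier):

```python
from itertools import permutations

def shapley_values(model, instance, baseline):
    """Exact Shapley values for a small model.

    For every feature ordering, features are switched one at a time from
    the baseline value to the instance value; each feature is credited
    with the resulting change in model output, then credits are averaged.
    """
    n = len(instance)
    phi = [0.0] * n
    perms = list(permutations(range(n)))
    for order in perms:
        x = list(baseline)
        prev = model(x)
        for f in order:
            x[f] = instance[f]
            cur = model(x)
            phi[f] += cur - prev
            prev = cur
    return [p / len(perms) for p in phi]
```

For an additive model the values recover each feature's exact contribution, which is the efficiency property that makes SHAP attributions sum to the gap between the prediction and the baseline output.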
This research aims to investigate the challenge of recognizing and evaluating distributed database management system (DDBMS) security and privacy challenges, and to describe important risk indicators that may lead to data security difficulties in distributed resources, both tangible and operational. An investigation of the primary DDBMS security and privacy challenges that can arise while developing and managing a standard DDBMS meant for supporting data procedures and offering data services was conducted as a component of this research. The data assessment was produced based on the findings of surveys and questionnaires administered to DDBMS security and privacy professionals with varying degrees of training and emphases in their operations within this field of expertise. The findings of the research include a list of primary risk variables ranked by their significance and frequency in practice, together with a spotlight on the most important security and privacy measures. This article summarizes data on traditional methods to illustrate data security challenges employing statistical, qualitative, and mixed evaluation, as well as the most recent techniques based on smart categorization and examination of DDBMS security and privacy challenge factors and on working with huge amounts of data.
Authored by Santosh Kumar, Kishor Dash, Bhargav Piduru, K. Rajkumar, Bandi Bhaskar, Mohit Tiwari
Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats like .pickle, .HDF5, .joblib, and .ONNX, commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environments for these models, such as PyTorch and TensorFlow, lack robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) like ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk. Both open-source and enterprise versions of these SDKs have vulnerabilities that are just beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promises extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
In this experience paper, we present the lessons learned from the First University of St. Gallen Grand Challenge 2023, a competition involving interdisciplinary teams tasked with assessing the legal compliance of real-world AI-based systems with the European Union’s Artificial Intelligence Act (AI Act). The AI Act is the very first attempt in the world to regulate AI systems and its potential impact is huge. The competition provided firsthand experience and practical knowledge regarding the AI Act’s requirements. It also highlighted challenges and opportunities for the software engineering and AI communities.
CCS Concepts: • Social and professional topics → Governmental regulations; • Computing methodologies → Artificial intelligence; • Security and privacy → Privacy protections; • Software and its engineering → Software creation and management.
Authored by Teresa Scantamburlo, Paolo Falcarin, Alberto Veneri, Alessandro Fabris, Chiara Gallese, Valentina Billa, Francesca Rotolo, Federico Marcuzzi
Cloud computing has become increasingly popular in the modern world. While it has brought many positives to the innovative technological era society lives in today, cloud computing has also shown it has some drawbacks. These drawbacks are present in the security aspect of the cloud and its many services. Security practices differ in the realm of cloud computing, as the role of securing information systems is passed on to a third party. While this reduces managerial strain on those who enlist cloud computing, it also introduces risk to their data and the services they may provide. Cloud services have become a large target for those with malicious intent due to the high density of valuable data stored in one relative location. By employing honeynets, cloud service providers can effectively improve their intrusion detection systems as well as gain the opportunity to study attack vectors used by malicious actors to further improve security controls. Implementing honeynets into cloud-based networks is an investment in cloud security that will provide ever-increasing returns in the hardening of information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
6G networks are beginning to take shape, and it is envisaged that they will be made up of networks from different vendors, and with different technologies, in what is known as the network-of-networks. The topology will be constantly changing, allowing it to adapt to the capacities available at any given moment. 6G networks will be managed automatically and natively by AI, but allowing direct management of learning by technical teams through Explainable AI. In this context, security becomes an unprecedented challenge. In this paper we present a flexible architecture that integrates the necessary modules to respond to the needs of 6G, focused on managing security, network and services through choreography intents that coordinate the capabilities of different stakeholders to offer advanced services.
Authored by Rodrigo Asensio-Garriga, Alejandro Zarca, Antonio Skarmeta
With the rapid development of cloud computing services and big data applications, the number of data centers is proliferating, and with it, the problem of energy consumption in data centers is becoming more and more serious. Data center energy saving has received more and more attention as a way to reduce carbon emissions and power costs. The main energy consumption of data centers lies in IT equipment and end air conditioning. In this paper, we propose a data center energy-saving application system based on a fog computing architecture to reduce air conditioning energy consumption, and thus reduce data center energy consumption. Specifically, the intelligent module is placed in the fog node to take advantage of the low latency, proximal computing, and proximal storage of fog computing to shorten the network call link and improve both the stability of acquiring energy-saving policies and the frequency of energy-saving regulation, thus overcoming the high latency and instability of energy-saving approaches built on a cloud computing architecture. AI technology is used in the intelligent module to generate energy-saving strategies and remotely regulate the end air conditioners to achieve better energy-saving effects. This overcomes the shortcomings of traditional manual regulation based on expert experience, namely its low adjustment frequency and the serious loss of cooling capacity at the terminal air conditioner. According to the experimental results, compared with traditional manual regulation based on expert experience, the data center energy-saving application based on fog computing can operate safely and efficiently and reduce the PUE to 1.04. Compared with the AI energy-saving strategy based on cloud computing, the AI energy-saving strategy based on fog computing generates strategies faster and with lower latency, increasing speed by 29.84%.
Authored by Yazhen Zhang, Fei Hu, Yisa Han, Weiye Meng, Zhou Guo, Chunfang Li
AI systems face potential hardware security threats. Existing AI systems generally use a heterogeneous architecture of CPU + intelligent accelerator, with a PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on the hardware security isolation architecture, but the conventional hardware security isolation architecture does not include the intelligent accelerator on the PCIe bus. Therefore, from the perspective of hardware security, data offloaded to the intelligent accelerator faces great security risks. In order to effectively integrate the intelligent accelerator into the CPU’s security mechanism, a novel hardware security isolation architecture is presented in this paper. The PCIe protocol is extended to be security-aware by adding security information packaging and unpacking logic to the PCIe controller. The hardware resources on the intelligent accelerator are isolated at a fine granularity. The resources classified into the secure world can only be controlled and used by software in the CPU’s trusted execution environment. Based on the above hardware security isolation architecture, a security isolation spiking convolutional neural network accelerator is designed and implemented in this paper. The experimental results demonstrate that the proposed security isolation architecture has no overhead on the bandwidth and latency of the PCIe controller. The architecture does not affect the performance of the entire hardware computing process, from CPU data offloading and intelligent accelerator computing to data returning to the CPU. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, models, and output data, and it can effectively integrate the hardware resources of the intelligent accelerator into the CPU’s security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
Edge computing enables computation and analytics capabilities to be brought closer to data sources. The available literature on AI solutions for edge computing primarily addresses just two edge layers. The upper layer can directly communicate with the cloud and comprises one or more IoT edge devices that gather sensing data from IoT devices present in the lower layer. However, industries mainly adopt a multi-layered architecture, referred to as the ISA-95 standard, to isolate and safeguard their assets. In this architecture, only the upper layer is connected to the cloud, while the lower layers of the hierarchy interact only with their neighbouring layers. Due to these added intermediate layers (and IoT edge devices) between the top and lower layers, existing AI solutions for typical two-layer edge architectures may not be directly applicable in this scenario. Moreover, not all industries prefer to send and store their private data in the cloud. Implementing AI solutions tailored to a hierarchical edge architecture would improve response time while maintaining the same degree of security by working within the ISA-95-compliant network architecture. This paper explores a possible strategy for deploying a centralized federated learning-based AI solution in a hierarchical edge architecture and demonstrates its efficacy through a real deployment scenario.
Authored by Narendra Bisht, Subhasri Duttagupta
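In an ISA-95-style hierarchy, each intermediate edge node can aggregate its children's model updates before relaying only the aggregate upward, so raw data — and even individual leaf updates — never leave their layer. A simplified, unweighted sketch of that layered relay (the paper's actual aggregation rule is not specified in the abstract):

```python
def aggregate_layer(child_updates):
    """Average one node's children before passing the result up a layer."""
    dim = len(child_updates[0])
    k = len(child_updates)
    return [sum(u[i] for u in child_updates) / k for i in range(dim)]

def hierarchical_aggregate(tree):
    """Recursively aggregate a hierarchy of model updates.

    tree: either a leaf update (flat list of floats) or a list of
    subtrees, mirroring the ISA-95 layer structure.
    """
    if tree and isinstance(tree[0], (int, float)):
        return tree  # leaf: a device's local update
    return aggregate_layer([hierarchical_aggregate(t) for t in tree])
```

Note that unweighted per-layer averaging differs from flat FedAvg when subtrees hold unequal sample counts; a deployment would likely propagate sample counts alongside the updates to weight each level correctly.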
The development of 5G, cloud computing, artificial intelligence (AI) and other new-generation information technologies has promoted the rapid development of the data center (DC) industry, which directly exacerbates severe energy consumption and carbon emissions problems. In addition to traditional engineering-based methods, AI-based technology has been widely used in existing data centers. However, existing AI model training schemes are time-consuming and laborious. To tackle these issues, we propose an automated training and deployment platform for AI models based on a cloud-edge architecture, including the processes of data processing, data annotation, model training optimization, and model publishing. The proposed system can generate specific models based on the room environment and realize standardization and automation of model training, which is helpful for large-scale data center scenarios. The simulation and experimental results show that the proposed solution can reduce the time required for single model training by 76.2%, and multiple training tasks can run concurrently. Therefore, it can adapt to large-scale energy-saving scenarios and greatly improve model iteration efficiency, which improves the energy-saving rate and supports green energy conservation for data centers.
Authored by Chunfang Li, Zhou Guo, Xingmin He, Fei Hu, Weiye Meng
Foundation models, such as large language models (LLMs), have been widely recognised as transformative AI technologies due to their capabilities to understand and generate content, including plans with reasoning capabilities. Foundation model based agents derive their autonomy from the capabilities of foundation models, which enable them to autonomously break down a given goal into a set of manageable tasks and orchestrate task execution to meet the goal. Despite the huge efforts put into building foundation model based agents, the architecture design of the agents has not yet been systematically explored. Also, while there are significant benefits of using agents for planning and execution, there are serious considerations regarding responsible AI related software quality attributes, such as security and accountability. Therefore, this paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architecture of two real-world agents.
Authored by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Stefan Harrer, Jon Whittle
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to efficiently prioritize vulnerabilities. The advent of a new age of real-time threat response is heralded by the incorporation of AI-driven security tactics. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. In the future, scientists will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models that have been validated in the real world. This study provides a detailed road map for businesses to follow as they navigate the complex cybersecurity landscape of multi-cloud settings, therefore promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
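The abstract does not give the optimization model, but a common closed form in this space is proportional-fair allocation: maximizing a weighted sum of log-utilities of the resources assigned to each workload, subject to a fixed security-resource budget, allocates the budget in proportion to each workload's threat weight. A hedged sketch under that assumption:

```python
def allocate(budget, weights):
    """Solve  max Σ w_i·log(x_i)  s.t.  Σ x_i = budget, x_i > 0.

    By the KKT conditions the optimum is x_i = budget · w_i / Σ w_j,
    i.e. resources proportional to each workload's priority weight.
    """
    total = sum(weights)
    return [budget * w / total for w in weights]

# e.g. 100 analyst-hours split over three clouds whose current
# threat scores (hypothetical) are 1, 1, and 2
hours = allocate(100.0, [1.0, 1.0, 2.0])
```

The log utility encodes diminishing returns — doubling the resources on one cloud helps less and less — which is why no workload is ever starved entirely.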
As a result of globalization, the COVID-19 pandemic and the migration of data to the cloud, the traditional security measures where an organization relies on a security perimeter and firewalls do not work. There is a shift to a concept whereby resources are not trusted by default, and a zero-trust architecture (ZTA) based on a zero-trust principle is needed. Adapting zero-trust principles to networks ensures that a single insecure Application Programming Interface (API) does not become the weakest link, compromising critical Data, Assets, Applications and Services (DAAS). The purpose of this paper is to review the use of zero trust in the security of a network architecture instead of a traditional perimeter. Different software solutions for implementing secure access to applications and services for remote users using zero trust network access (ZTNA) are also summarized. A summary of the author's research on the qualitative study of “Insecure Application Programming Interfaces in Zero Trust Networks” is also discussed. The study showed that there is increased usage of zero trust in securing networks and protecting organizations from malicious cyber-attacks. The research also indicates that APIs are insecure in zero-trust environments and most organizations are not aware of their presence.
Authored by Farhan Qazi
The use of encryption for medical images offers several benefits. Firstly, it enhances the confidentiality and privacy of patient data, preventing unauthorized individuals or entities from accessing sensitive medical information. Secondly, encrypted medical images may be sent securely via unreliable networks, such as the Internet, without the risk of data eavesdropping or tampering. Traditional methods of storing and retrieving medical images often lack efficient encryption and privacy-preserving mechanisms. This project delves into enhancing the security and accessibility of medical image storage across diverse cloud environments. Through the implementation of encryption methods, pixel scrambling techniques, and integration with AWS S3, the research aimed to fortify the confidentiality of medical images while ensuring rapid retrieval. These findings collectively illuminate the security and operational efficiency of the implemented encryption and scrambling techniques and AWS integration, and they offer a foundation for advancing secure medical image retrieval in multi-cloud settings.
Authored by Mohammad Shanavaz, Charan Manikanta, M. Gnanaprasoona, Sai Kishore, R. Karthikeyan, M.A. Jabbar
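Pixel scrambling plus value masking of the kind described can be sketched as a key-seeded permutation of pixel positions followed by a keystream XOR over pixel values, with the inverse applied on retrieval. The function names and the hash-based keystream below are illustrative assumptions, not the paper's scheme — and a production system should use a vetted cipher such as AES-GCM rather than this ad hoc construction:

```python
import hashlib
import random

def keystream(key: bytes, n: int) -> bytes:
    """SHA-256 counter-mode keystream (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def scramble_encrypt(pixels: bytes, key: bytes) -> bytes:
    rng = random.Random(key)           # key-seeded pixel permutation
    perm = list(range(len(pixels)))
    rng.shuffle(perm)
    shuffled = bytes(pixels[p] for p in perm)
    ks = keystream(key, len(pixels))   # then mask values with keystream
    return bytes(a ^ b for a, b in zip(shuffled, ks))

def scramble_decrypt(cipher: bytes, key: bytes) -> bytes:
    ks = keystream(key, len(cipher))
    shuffled = bytes(a ^ b for a, b in zip(cipher, ks))
    rng = random.Random(key)           # regenerate the same permutation
    perm = list(range(len(cipher)))
    rng.shuffle(perm)
    out = bytearray(len(cipher))
    for i, p in enumerate(perm):
        out[p] = shuffled[i]
    return bytes(out)
```

The ciphertext bytes are what would be uploaded to object storage such as AWS S3; the key never leaves the client, so the cloud provider holds only scrambled, masked data.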