The integration of IoT with cellular wireless networks is expected to deepen as cellular technology progresses from 5G to 6G, enabling enhanced connectivity and data exchange capabilities. However, this evolution raises security concerns, including data breaches, unauthorized access, and increased exposure to cyber threats. The complexity of 6G networks may introduce new vulnerabilities, highlighting the need for robust security measures to safeguard sensitive information and user privacy. Addressing these challenges is critical both for today's massively IoT-connected 5G networks and for any new systems that will operate in the 6G environment. Artificial Intelligence (AI) is expected to play a vital role in the operation and management of 6G networks, and because of the complex interaction of IoT and 6G networks, Explainable AI (XAI) is expected to emerge as an important tool for enhancing security. This study presents an AI-powered security system for the Internet of Things (IoT) that applies XGBoost together with the SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) methods to the CICIoT2023 dataset. These explanations empower administrators to deploy more resilient security measures tailored to specific threats and vulnerabilities, improving overall system security against cyber threats and attacks.
Authored by
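The abstract above pairs an XGBoost detector with SHAP and LIME explanations. As an illustration of the LIME side only, the sketch below implements its core idea — perturb an input, weight neighbours by proximity, and fit a locally weighted linear surrogate — in plain NumPy against a toy scoring function. `lime_explain`, the kernel width, and the toy detector are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=500, width=0.75, seed=0):
    """Fit a locally weighted linear surrogate around x (the core idea of LIME)."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbed neighbours
    y = predict_fn(X)                                        # black-box scores
    dist = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(dist ** 2) / width ** 2)                    # proximity kernel
    Xb = np.hstack([X, np.ones((n_samples, 1))])             # intercept column
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(Xb * sw[:, None], y * sw, rcond=None)
    return coef[:-1]                                         # per-feature local weights

# Toy stand-in for a trained detector: its score depends only on feature 0.
score = lambda X: 3.0 * X[:, 0]
weights = lime_explain(score, np.array([1.0, 2.0]))
# weights[0] should be close to 3 and weights[1] close to 0
```

In a real pipeline the `predict_fn` would be the trained XGBoost model's scoring function rather than this toy lambda.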
This research investigates the challenge of recognizing and evaluating distributed database management system (DDBMS) security and privacy challenges, and identifies key risk indicators that may lead to data security problems when distributing resources, both physical and operational. As part of this research, an investigation was conducted of the primary DDBMS security and privacy challenges that arise while developing and managing a standard DDBMS intended to support data procedures and offer data services. The data assessment was based on the findings of surveys and questionnaires administered to DDBMS security and privacy professionals with varying degrees of training and emphases within this field of expertise. The findings include a list of primary risk factors ranked by their significance and frequency in practice, as well as a spotlight on the most important security and privacy measures. This article summarizes traditional methods for characterizing data security challenges using statistical, qualitative, and mixed evaluation, along with the most recent techniques based on intelligent classification and analysis of DDBMS security and privacy risk factors and on working with large volumes of data.
Authored by Santosh Kumar, Kishor Dash, Bhargav Piduru, K. Rajkumar, Bandi Bhaskar, Mohit Tiwari
Artificial Intelligence (AI) and Machine Learning (ML) models, while powerful, are not immune to security threats. These models, often seen as mere data files, are executable code, making them susceptible to attacks. Serialization formats like .pickle, .HDF5, .joblib, and .ONNX, commonly used for model storage, can inadvertently allow arbitrary code execution, a vulnerability actively exploited by malicious actors. Furthermore, the execution environments for these models, such as PyTorch and TensorFlow, lack robust sandboxing, enabling the creation of computational graphs that can perform I/O operations, interact with files, communicate over networks, and even spawn additional processes, underscoring the importance of ensuring the safety of the code executed within these frameworks. The emergence of Software Development Kits (SDKs) like ClearML, designed for tracking experiments and managing model versions, adds another layer of complexity and risk. Both open-source and enterprise versions of these SDKs have vulnerabilities that are just beginning to surface, posing additional challenges to the security of AI/ML systems. In this paper, we delve into these security challenges, exploring attacks, vulnerabilities, and potential mitigation strategies to safeguard AI and ML deployments.
Authored by Natalie Grigorieva
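To make the serialization risk described above concrete, the sketch below shows the standard `__reduce__` mechanism by which a pickled object causes `pickle.loads` to invoke an arbitrary callable at load time. The `Payload` class is illustrative, and the benign `os.getenv` call is a stand-in; a real attacker would substitute something far more harmful, such as `os.system`.

```python
import os
import pickle

class Payload:
    # pickle calls __reduce__ to learn how to rebuild the object; an attacker
    # can return ANY callable plus its arguments, and pickle.loads will invoke it.
    def __reduce__(self):
        return (os.getenv, ("HOME",))  # benign stand-in for os.system("...")

blob = pickle.dumps(Payload())
result = pickle.loads(blob)  # runs os.getenv("HOME") during deserialization
# result is whatever the attacker's callable returned -- not a Payload object
```

This is why untrusted model files should never be loaded with plain `pickle`; the deserializer itself is the attack surface.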
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. XAI's promise extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
In this experience paper, we present the lessons learned from the First University of St. Gallen Grand Challenge 2023, a competition involving interdisciplinary teams tasked with assessing the legal compliance of real-world AI-based systems with the European Union's Artificial Intelligence Act (AI Act). The AI Act is the very first attempt in the world to regulate AI systems, and its potential impact is huge. The competition provided firsthand experience and practical knowledge regarding the AI Act's requirements. It also highlighted challenges and opportunities for the software engineering and AI communities. CCS Concepts: Social and professional topics → Governmental regulations; Computing methodologies → Artificial intelligence; Security and privacy → Privacy protections; Software and its engineering → Software creation and management.
Authored by Teresa Scantamburlo, Paolo Falcarin, Alberto Veneri, Alessandro Fabris, Chiara Gallese, Valentina Billa, Francesca Rotolo, Federico Marcuzzi
Cloud computing has become increasingly popular in the modern world. While it has brought many benefits to today's innovative technological era, cloud computing has also shown some drawbacks. These drawbacks lie in the security of the cloud and its many services. Security practices differ in the realm of cloud computing, as the role of securing information systems is passed to a third party. While this reduces the managerial strain on those who adopt cloud computing, it also brings risk to their data and the services they provide. Cloud services have become a large target for those with malicious intent due to the high density of valuable data stored in one relative location. By soliciting help from honeynets, cloud service providers can effectively improve their intrusion detection systems and gain the opportunity to study attack vectors used by malicious actors to further improve security controls. Implementing honeynets in cloud-based networks is an investment in cloud security that will provide ever-increasing returns in the hardening of information systems against cyber threats.
Authored by Eric Toth, Md Chowdhury
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions to deal with challenges of the breadth of the AI security risk sustainably and systematically under the emerging context of the computing continuum as well as continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture, including AI, the level of AI automation, and the level of AI security measures. We also prospect an engineering foundation that can efficiently and effectively raise each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
6G networks are beginning to take shape, and it is envisaged that they will be made up of networks from different vendors, and with different technologies, in what is known as the network-of-networks. The topology will be constantly changing, allowing it to adapt to the capacities available at any given moment. 6G networks will be managed automatically and natively by AI, but allowing direct management of learning by technical teams through Explainable AI. In this context, security becomes an unprecedented challenge. In this paper we present a flexible architecture that integrates the necessary modules to respond to the needs of 6G, focused on managing security, network and services through choreography intents that coordinate the capabilities of different stakeholders to offer advanced services.
Authored by Rodrigo Asensio-Garriga, Alejandro Zarca, Antonio Skarmeta
With the rapid development of cloud computing services and big data applications, the number of data centers is proliferating, and with it, the problem of energy consumption in data centers is becoming more and more serious. Data center energy saving has received more and more attention as a way to reduce carbon emissions and power costs. The main energy consumption of data centers lies in IT equipment and terminal air conditioning. In this paper, we propose a data center energy-saving application system based on a fog computing architecture to reduce air conditioning energy consumption, and thus overall data center energy consumption. Specifically, the intelligent module is placed in the fog node to take advantage of the low latency, proximal computing, and proximal storage of fog computing, shortening the network call link and improving both the stability of acquiring energy-saving policies and the frequency of energy-saving regulation, thus avoiding the high latency and instability of energy-saving approaches based on a cloud computing architecture. AI technology is used in the intelligent module to generate energy-saving strategies and remotely regulate the terminal air conditioners to achieve better energy-saving effects. This overcomes the shortcomings of traditional manual regulation based on expert experience, namely its low adjustment frequency and the serious loss of cooling capacity at the terminal air conditioner. According to the experimental results, compared with traditional manual regulation based on expert experience, the data center energy-saving application based on fog computing can operate safely and efficiently and reduce the PUE to 1.04. Compared with the AI energy-saving strategy based on cloud computing, the fog-based strategy generates policies faster and with lower latency, increasing speed by 29.84%.
Authored by Yazhen Zhang, Fei Hu, Yisa Han, Weiye Meng, Zhou Guo, Chunfang Li
AI systems face potential hardware security threats. Existing AI systems generally use a heterogeneous architecture of CPU + intelligent accelerator, with a PCIe bus for communication between them. Security mechanisms are implemented on CPUs based on the hardware security isolation architecture, but the conventional hardware security isolation architecture does not cover the intelligent accelerator on the PCIe bus. Therefore, from the perspective of hardware security, data offloaded to the intelligent accelerator face great security risks. In order to effectively integrate the intelligent accelerator into the CPU's security mechanism, a novel hardware security isolation architecture is presented in this paper. The PCIe protocol is extended to be security-aware by adding security information packaging and unpacking logic in the PCIe controller. The hardware resources on the intelligent accelerator are isolated in a fine-grained manner. Resources classified into the secure world can only be controlled and used by software running in the CPU's trusted execution environment. Based on this hardware security isolation architecture, a security-isolated spiking convolutional neural network accelerator is designed and implemented in this paper. The experimental results demonstrate that the proposed security isolation architecture adds no overhead to the bandwidth or latency of the PCIe controller. The architecture does not affect the performance of the entire hardware computing process, from CPU data offloading, through intelligent accelerator computing, to data returning to the CPU. With low hardware overhead, this security isolation architecture achieves effective isolation and protection of input data, models, and output data, and it can effectively integrate the hardware resources of the intelligent accelerator into the CPU's security isolation mechanism.
Authored by Rui Gong, Lei Wang, Wei Shi, Wei Liu, JianFeng Zhang
Edge computing enables the computation and analytics capabilities to be brought closer to data sources. The available literature on AI solutions for edge computing primarily addresses just two edge layers. The upper layer can directly communicate with the cloud and comprises one or more IoT edge devices that gather sensing data from IoT devices present in the lower layer. However, industries mainly adopt a multi-layered architecture, referred to as the ISA-95 standard, to isolate and safeguard their assets. In this architecture, only the upper layer is connected to the cloud, while the lower layers of the hierarchy interact only with their neighbouring layers. Due to these added intermediate layers (and IoT edge devices) between the top and lower layers, existing AI solutions for typical two-layer edge architectures may not be directly applicable in this scenario. Moreover, not all industries prefer to send and store their private data in the cloud. Implementing AI solutions tailored to a hierarchical edge architecture can improve response time while maintaining the same degree of security by working within the ISA-95-compliant network architecture. This paper explores a possible strategy for deploying a centralized federated learning-based AI solution in a hierarchical edge architecture and demonstrates its efficacy through a real deployment scenario.
Authored by Narendra Bisht, Subhasri Duttagupta
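As a rough illustration of the strategy described above (not the paper's actual system), the sketch below applies the standard FedAvg aggregation rule level by level in a hierarchy: intermediate edge nodes average only their own clients, and just the aggregates travel upward to the cloud-connected top node. All names, group sizes, and parameter values are hypothetical.

```python
import numpy as np

def fedavg(params, sizes):
    """Weighted average of model parameters (the FedAvg aggregation rule)."""
    total = sum(sizes)
    return sum(p * (n / total) for p, n in zip(params, sizes))

# Lower-layer clients, grouped under two intermediate edge nodes (ISA-95 style).
group_a = [np.array([1.0]), np.array([2.0])]; sizes_a = [10, 30]
group_b = [np.array([4.0])];                  sizes_b = [10]

# Level 1: each intermediate edge node aggregates only its own clients.
agg_a = fedavg(group_a, sizes_a)
agg_b = fedavg(group_b, sizes_b)

# Level 2: the cloud-connected top node aggregates the edge results,
# weighting each by the total sample count beneath it.
global_model = fedavg([agg_a, agg_b], [sum(sizes_a), sum(sizes_b)])

# Mathematically equivalent to flat FedAvg over all clients,
# but no raw client data ever leaves its own layer.
flat = fedavg(group_a + group_b, sizes_a + sizes_b)
```

The size-weighted two-level average reproduces the flat FedAvg result exactly, which is what makes the hierarchical deployment transparent to the learning algorithm.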
The development of 5G, cloud computing, artificial intelligence (AI) and other new-generation information technologies has promoted the rapid development of the data center (DC) industry, which directly increases energy consumption and carbon emissions. In addition to traditional engineering-based methods, AI-based technology has been widely used in existing data centers. However, existing AI model training schemes are time-consuming and laborious. To tackle these issues, we propose an automated training and deployment platform for AI models based on a cloud-edge architecture, covering the processes of data processing, data annotation, model training optimization, and model publishing. The proposed system can generate specific models based on the room environment and realizes standardization and automation of model training, which is helpful for large-scale data center scenarios. The simulation and experimental results show that the proposed solution can reduce the time required for single-model training by 76.2%, and multiple training tasks can run concurrently. Therefore, it can adapt to large-scale energy-saving scenarios and greatly improve model iteration efficiency, which improves the energy-saving rate and helps data centers conserve energy.
Authored by Chunfang Li, Zhou Guo, Xingmin He, Fei Hu, Weiye Meng
Foundation models, such as large language models (LLMs), have been widely recognised as transformative AI technologies due to their capabilities to understand and generate content, including plans with reasoning capabilities. Foundation model based agents derive their autonomy from the capabilities of foundation models, which enable them to autonomously break down a given goal into a set of manageable tasks and orchestrate task execution to meet the goal. Despite the huge efforts put into building foundation model based agents, the architecture design of the agents has not yet been systematically explored. Also, while there are significant benefits of using agents for planning and execution, there are serious considerations regarding responsible AI related software quality attributes, such as security and accountability. Therefore, this paper presents a pattern-oriented reference architecture that serves as guidance when designing foundation model based agents. We evaluate the completeness and utility of the proposed reference architecture by mapping it to the architecture of two real-world agents.
Authored by Qinghua Lu, Liming Zhu, Xiwei Xu, Zhenchang Xing, Stefan Harrer, Jon Whittle
The complex landscape of multi-cloud settings is the result of the fast growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to efficiently prioritize vulnerabilities. The advent of a new age of real-time threat response is heralded by the incorporation of AI-driven security tactics. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. In the future, research will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses to follow as they navigate the complex cybersecurity landscape of multi-cloud settings, thereby promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
As a result of globalization, the COVID-19 pandemic and the migration of data to the cloud, the traditional security measures where an organization relies on a security perimeter and firewalls do not work. There is a shift to a concept whereby resources are not trusted by default, and a zero-trust architecture (ZTA) based on the zero-trust principle is needed. Adapting zero-trust principles to networks ensures that a single insecure Application Programming Interface (API) does not become the weakest link comprising Critical Data, Assets, Applications and Services (DAAS). The purpose of this paper is to review the use of zero trust in the security of a network architecture instead of a traditional perimeter. Different software solutions for implementing secure access to applications and services for remote users using zero trust network access (ZTNA) are also summarized. A summary of the author's research on the qualitative study "Insecure Application Programming Interface in Zero Trust Networks" is also discussed. The study showed that there is increased usage of zero trust in securing networks and protecting organizations from malicious cyber-attacks. The research also indicates that APIs are insecure in zero-trust environments and most organizations are not aware of their presence.
Authored by Farhan Qazi
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and lucid security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of UAVs, hence departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. Using a novel method that analyzes Radio Frequency (RF) signals within a Deep Learning framework, our methodology detects and identifies UAVs with an accuracy of 84.59%. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to ZTA standards guarantees that UAV classifications are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
The use of encryption for medical images offers several benefits. Firstly, it enhances the confidentiality and privacy of patient data, preventing unauthorized individuals or entities from accessing sensitive medical information. Secondly, encrypted medical images may be sent securely over untrusted networks, such as the Internet, without the risk of data eavesdropping or tampering. Traditional methods of storing and retrieving medical images often lack efficient encryption and privacy-preserving mechanisms. This project delves into enhancing the security and accessibility of medical image storage across diverse cloud environments. Through the implementation of encryption methods, pixel scrambling techniques, and integration with AWS S3, the research aimed to fortify the confidentiality of medical images while ensuring rapid retrieval. These findings collectively illuminate the security and operational efficiency of the implemented encryption, scrambling techniques, and AWS integration, and offer a foundation for advancing secure medical image retrieval in multi-cloud settings.
Authored by Mohammad Shanavaz, Charan Manikanta, M. Gnanaprasoona, Sai Kishore, R. Karthikeyan, M.A. Jabbar
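One simple pixel-scrambling scheme consistent with the techniques mentioned above (though not necessarily the one implemented in the project) is a key-seeded permutation of pixel positions: the key seeds a PRNG, the PRNG fixes the permutation, and the same key inverts it. The sketch below shows this in NumPy; note that scrambling hides spatial structure but is a complement to, not a substitute for, encryption.

```python
import numpy as np

def scramble(img, key):
    """Permute pixel positions with a key-seeded PRNG (scrambling, not encryption)."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(img.size)
    return img.reshape(-1)[perm].reshape(img.shape)

def unscramble(scrambled, key):
    """Rebuild the same permutation from the key and invert it."""
    rng = np.random.default_rng(key)
    perm = rng.permutation(scrambled.size)
    flat = np.empty(scrambled.size, dtype=scrambled.dtype)
    flat[perm] = scrambled.reshape(-1)
    return flat.reshape(scrambled.shape)

image = np.arange(16, dtype=np.uint8).reshape(4, 4)  # stand-in for a medical image
hidden = scramble(image, key=2023)
restored = unscramble(hidden, key=2023)
```

Because the permutation is derived deterministically from the key, only the key needs to be shared for retrieval; the scrambled image itself can sit in cloud storage.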
Data security in numerous businesses, including banking, healthcare, and transportation, depends on cryptography. As IoT and AI applications proliferate, this is becoming more and more evident. Despite the benefits and drawbacks of traditional cryptographic methods such as symmetric and asymmetric encryption, there remains a demand for enhanced security that does not compromise efficiency. This work introduces a novel approach called Multi-fused cryptography, which combines the benefits of distinct cryptographic methods in order to overcome their shortcomings. Through a comparative performance analysis, our study demonstrates that the proposed technique successfully enhances data security during network transmission.
Authored by Irin Loretta, Idamakanti Kasireddy, M. Prameela, D Rao, M. Kalaiyarasi, S. Saravanan
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems. Our solution is robust against black-box attacks and well-established generative adversarial network (GAN) based image restoration. We analyze a potential risk in previous work, where the proposed cosine similarity computation might directly leak the protected precomputed embedding stored on the server side. We propose a Function Secret Sharing (FSS) based face embedding comparison protocol without any intermediate result leakage. In addition, we show in experiments that the proposed protocol is more efficient than the Secret Sharing (SS) based protocol.
Authored by Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
Recent innovations in computer science and informatics are driving the integration of AI into modern healthcare, extending its applications to medical sectors previously reliant on human expertise. Creating robust and clinically relevant AI models requires extensive data, which can be challenging to gather, particularly when dealing with rare diseases. Data sharing among healthcare entities can address this issue, but legal, privacy, and data ownership concerns hinder such an approach. To foster data sharing, in this paper we propose the GEmelli GeNerator - Real World Data (GEN-RWD) Sandbox, which provides a secure environment for data analysis without compromising sensitive medical data. This modular architecture serves as a research platform for various stakeholders, including clinical researchers, policymakers, and pharmaceutical companies. Authorized users submit research requests through the GUI, which are processed within the hospital, and the results can be accessed without revealing the original clinical data source. In this paper we present the GEN-RWD Sandbox's architecture module in charge of executing analysis requests, the Processor. The Processor's code is openly shared as the GSProcessor R package available at https://gitlab.com/benedetta.gottardelli/GSProcessor.
Authored by Benedetta Gottardelli, Roberto Gatta, Leonardo Nucciarelli, Mariachiara Savino, Andrada Tudor, Mauro Vallati, Andrea Damiani
The resurgence of Artificial Intelligence (AI) has been accompanied by a rise in ethical issues. AI practitioners either face challenges in making ethical choices when designing AI-based systems or are not aware of such challenges in the first place. Increasing the level of awareness and understanding of the perceptions of those who develop AI systems is a critical step toward mitigating ethical issues in AI development. Motivated by these challenges and needs, and by the lack of engaging approaches to address them, we developed an interactive, scenario-based ethical AI quiz. It allows AI practitioners, including software engineers who develop AI systems, to self-assess their awareness and perceptions about AI ethics. The experience of taking the quiz, and the feedback it provides, will help AI practitioners understand the gap areas and improve their overall ethical practice in everyday development scenarios. To demonstrate these expected outcomes and the relevance of our tool, we also share a preliminary user study. The video demo can be found at https://zenodo.org/record/7601169#.Y9xgA-xBxhF.
Authored by Wei Teo, Ze Teoh, Dayang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
In this work, we provide an in-depth characterization study of the performance overhead for running Transformer models with secure multi-party computation (MPC). MPC is a cryptographic framework for protecting both the model and input data privacy in the presence of untrusted compute nodes. Our characterization study shows that Transformers introduce several performance challenges for MPC-based private machine learning inference. First, Transformers rely extensively on “softmax” functions. While softmax functions are relatively cheap in a non-private execution, softmax dominates the MPC inference runtime, consuming up to 50% of the total inference runtime. Further investigation shows that computing the maximum, needed for providing numerical stability to softmax, is a key culprit for the increase in latency. Second, MPC relies on approximating non-linear functions that are part of the softmax computations, and the narrow dynamic ranges make optimizing softmax while maintaining accuracy quite difficult. Finally, unlike CNNs, Transformer-based NLP models use large embedding tables to convert input words into embedding vectors. Accesses to these embedding tables can disclose inputs; hence, additional obfuscation for embedding access patterns is required for guaranteeing the input privacy. One approach to hide address accesses is to convert an embedding table lookup into a matrix multiplication. However, this naive approach increases MPC inference runtime significantly. We then apply tensor-train (TT) decomposition, a lossy compression technique for representing embedding tables, and evaluate its performance on embedding lookups. We show the trade-off between performance improvements and the corresponding impact on model accuracy using detailed experiments.
Authored by Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin Lee
Searchable encryption allows users to perform search operations on encrypted data without decrypting it first. Secret sharing is one of the most important cryptographic primitives used to design information-theoretic schemes. Nowadays cryptosystem designers provide a facility to adjust security parameters in real time to circumvent AI-enabled cyber security threats. For long-term security of data used by various applications, proactive secret sharing allows the shares of the original secret to be dynamically adjusted during a specific interval of time. In proactive secret sharing, the updating of shares at regular intervals of time is done by the servers (participants) and not by the dealer. In this paper, we propose a novel proactive secret sharing scheme where the shares stored at servers are updated at regular intervals using preshared pairwise keys between servers. The direct search of words over sentences using the conjunctive search function, without the generation of any index, is possible using the underlying querying method.
Authored by Praveen K, Gabriel S, Indranil Ray, Avishek Adhikari, Sabyasachi Datta, Arnab Biswas
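A minimal sketch of the refresh idea described above, assuming additive sharing over a prime field and a SHA-256-based PRF as a stand-in for the preshared pairwise keys (both are illustrative assumptions, not the paper's construction): every pair of servers derives the same epoch-dependent offset, which one server adds and the other subtracts, so every share changes while their sum — the secret — is preserved, all without involving the dealer.

```python
import hashlib

P = 2**61 - 1  # prime modulus for the additive sharing

def prf(key, epoch):
    """Derive a pseudorandom field element from a preshared key and the epoch."""
    digest = hashlib.sha256(f"{key}:{epoch}".encode()).digest()
    return int.from_bytes(digest, "big") % P

def refresh(shares, pairwise_keys, epoch):
    """Servers i < j derive the same offset from their preshared key;
    i adds it and j subtracts it, so shares change but the secret does not."""
    new = list(shares)
    for i in range(len(shares)):
        for j in range(i + 1, len(shares)):
            delta = prf(pairwise_keys[(i, j)], epoch)
            new[i] = (new[i] + delta) % P
            new[j] = (new[j] - delta) % P
    return new

secret = 42
shares = [123456789, 987654321, (secret - 123456789 - 987654321) % P]
keys = {(0, 1): "k01", (0, 2): "k02", (1, 2): "k12"}
refreshed = refresh(shares, keys, epoch=1)
# sum(refreshed) % P == secret, yet every individual share has changed
```

Because each offset is added exactly once and subtracted exactly once, the share sum is invariant modulo P, which is what makes an old, leaked share useless after the next refresh epoch.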
The volume of electronic media data has grown at an unprecedented rate in recent years. Video and multimedia data analysis is now used in almost all security control areas: traffic control, weather monitoring, video conferencing, social media, and so on. As a consequence, it is necessary to store and transmit these data while considering security and privacy issues. In this research study, a new Div-Mod Stego algorithm is combined with a Multi-Secret Sharing method, temporary frame reordering, and a Genetic Algorithm to implement high-end security in the video sharing process. A qualitative and quantitative analysis has also been carried out to compare the performance of the proposed model with other existing models. Computer analysis shows that the proposed solution satisfies the requirements of real-time applications.
Authored by R. Logeshwari, Rajasekar Velswamy, Subhashini R, Karunakaran V