The complex landscape of multi-cloud settings is the result of the rapid growth of cloud computing and the ever-changing needs of contemporary organizations. Strong cyber defenses are of fundamental importance in this setting. In this study, we investigate the use of AI in hybrid cloud settings for the purpose of multi-cloud security management. To help businesses improve their productivity and resilience, we provide a mathematical model for optimal resource allocation. Our methodology streamlines dynamic threat assessments, making it easier for security teams to prioritize vulnerabilities efficiently. The incorporation of AI-driven security tactics heralds a new age of real-time threat response. The technique we use has real-world implications that may help businesses stay ahead of constantly changing threats. Future work will focus on autonomous security systems, interoperability, ethics, and cutting-edge AI models validated in the real world. This study provides a detailed road map for businesses navigating the complex cybersecurity landscape of multi-cloud settings, thereby promoting resilience and agility in this era of digital transformation.
Authored by Srimathi. J, K. Kanagasabapathi, Kirti Mahajan, Shahanawaj Ahamad, E. Soumya, Shivangi Barthwal
As a result of globalization, the COVID-19 pandemic, and the migration of data to the cloud, traditional security measures, in which an organization relies on a security perimeter and firewalls, no longer work. There is a shift to a concept in which resources are not trusted by default, and a zero-trust architecture (ZTA) based on the zero-trust principle is needed. Adapting zero-trust principles to networks ensures that a single insecure Application Programming Interface (API) does not become the weakest link compromising critical Data, Assets, Applications, and Services (DAAS). The purpose of this paper is to review the use of zero trust in securing a network architecture instead of a traditional perimeter. Different software solutions for implementing secure access to applications and services for remote users using zero trust network access (ZTNA) are also summarized. A summary of the author's qualitative study, “Insecure Application Programming Interface in Zero Trust Networks,” is also discussed. The study showed that there is increased usage of zero trust in securing networks and protecting organizations from malicious cyber-attacks. The research also indicates that APIs are insecure in zero-trust environments and that most organizations are not aware of their presence.
Authored by Farhan Qazi
In the dynamic and ever-changing domain of Unmanned Aerial Vehicles (UAVs), the utmost importance lies in guaranteeing resilient and transparent security measures. This study highlights the necessity of implementing a Zero Trust Architecture (ZTA) to enhance the security of unmanned aerial vehicles (UAVs), thereby departing from conventional perimeter defences that may expose vulnerabilities. The ZTA paradigm requires a rigorous and continuous process of authenticating all network entities and communications. The accuracy of our methodology in detecting and identifying unmanned aerial vehicles (UAVs) is 84.59%. This is achieved by utilizing Radio Frequency (RF) signals within a Deep Learning framework, a novel approach. Precise identification is crucial in ZTA, as it determines network access. In addition, the use of eXplainable Artificial Intelligence (XAI) tools such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) contributes to the improvement of the model's transparency and interpretability. Adherence to ZTA standards guarantees that the classifications of unmanned aerial vehicles (UAVs) are verifiable and comprehensible, enhancing security within the UAV field.
Authored by Ekramul Haque, Kamrul Hasan, Imtiaz Ahmed, Md. Alam, Tariqul Islam
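The abstract does not specify the network applied to the RF signals, so the following is only a minimal sketch of the kind of pipeline it describes: a small 1-D convolutional classifier (PyTorch) over raw I/Q samples. The class count, input length, and layer sizes are illustrative assumptions, not the authors' model.

```python
# Minimal sketch: 1-D CNN over raw RF I/Q windows for UAV classification.
# Architecture and dimensions are assumptions, not the paper's model.
import torch
import torch.nn as nn

class RFClassifier(nn.Module):
    def __init__(self, n_classes: int = 10, in_channels: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # global pooling -> fixed-size feature
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):              # x: (batch, 2, n_samples) I/Q signal
        z = self.features(x).squeeze(-1)
        return self.head(z)            # logits over UAV classes

model = RFClassifier()
logits = model(torch.randn(8, 2, 1024))   # batch of 8 dummy RF windows
print(logits.shape)                       # torch.Size([8, 10])
```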
We propose a conceptual framework, named "AI Security Continuum," consisting of dimensions for dealing with the breadth of AI security risks sustainably and systematically in the emerging context of the computing continuum and continuous engineering. The dimensions identified are the continuum in the AI computing environment, the continuum in technical activities for AI, the continuum in layers in the overall architecture including AI, the level of AI automation, and the level of AI security measures. We also outline an engineering foundation that can efficiently and effectively advance each dimension.
Authored by Hironori Washizaki, Nobukazu Yoshioka
The use of encryption for medical images offers several benefits. First, it enhances the confidentiality and privacy of patient data, preventing unauthorized individuals or entities from accessing sensitive medical information. Second, encrypted medical images can be transmitted securely over untrusted networks, such as the Internet, without the risk of eavesdropping or tampering. Traditional methods of storing and retrieving medical images often lack efficient encryption and privacy-preserving mechanisms. This project delves into enhancing the security and accessibility of medical image storage across diverse cloud environments. Through the implementation of encryption methods, pixel scrambling techniques, and integration with AWS S3, the research aimed to fortify the confidentiality of medical images while ensuring rapid retrieval. These findings collectively illuminate the security and operational efficiency of the implemented encryption, scrambling techniques, and AWS integration, and they offer a foundation for advancing secure medical image retrieval in multi-cloud settings.
Authored by Mohammad Shanavaz, Charan Manikanta, M. Gnanaprasoona, Sai Kishore, R. Karthikeyan, M.A. Jabbar
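As a rough illustration of the pipeline the abstract describes, the sketch below scrambles pixels with a key-seeded permutation, encrypts the result with AES-GCM, and stores the ciphertext in AWS S3 via boto3. The library choices, the seed handling, and the bucket/key names are hypothetical; the paper's exact encryption and scrambling methods are not specified.

```python
# Minimal sketch: key-seeded pixel scrambling + AES-GCM + S3 upload.
# All names and parameters are illustrative, not the paper's pipeline.
import os
import numpy as np
import boto3
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def scramble(img: np.ndarray, seed: int) -> np.ndarray:
    """Permute pixels with a seed-derived permutation (invertible)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(img.size)
    return img.flatten()[perm].reshape(img.shape)

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)

image = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in scan
scrambled = scramble(image, seed=42)      # seed would be key-derived in practice
ciphertext = AESGCM(key).encrypt(nonce, scrambled.tobytes(), None)

# Store the ciphertext in S3; bucket and object key names are hypothetical.
boto3.client("s3").put_object(
    Bucket="medical-images-demo", Key="scan-001.bin", Body=nonce + ciphertext
)
```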
Data security in numerous businesses, including banking, healthcare, and transportation, depends on cryptography. As IoT and AI applications proliferate, this is becoming more and more evident. Despite the benefits and drawbacks of traditional cryptographic methods such as symmetric and asymmetric encryption, there remains a demand for enhanced security that does not compromise efficiency. This work introduces a novel approach called Multi-fused cryptography, which combines the benefits of distinct cryptographic methods in order to overcome their shortcomings. Through a comparative performance analysis, our study demonstrates that the proposed technique successfully enhances data security during network transmission.
Authored by Irin Loretta, Idamakanti Kasireddy, M. Prameela, D Rao, M. Kalaiyarasi, S. Saravanan
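The paper does not detail the multi-fused construction, but the classic way to combine distinct cryptographic methods is hybrid encryption: a fast symmetric cipher protects the bulk data while an asymmetric cipher protects only the session key. A minimal sketch using the Python cryptography package:

```python
# Minimal sketch of hybrid encryption: RSA-OAEP wraps a fresh AES-GCM
# session key, AES-GCM protects the bulk data. This is the classic
# symmetric/asymmetric fusion; the paper's exact construction may differ.
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

def hybrid_encrypt(plaintext: bytes):
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ct = AESGCM(session_key).encrypt(nonce, plaintext, None)   # fast bulk cipher
    wrapped = recipient_key.public_key().encrypt(              # slow, key only
        session_key,
        padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return wrapped, nonce, ct

wrapped, nonce, ct = hybrid_encrypt(b"account ledger snapshot")
```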
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and a face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems. Our solution is robust against black-box attacks and well-established generative adversarial network (GAN) based image restoration. We analyze a potential risk in previous work, where the proposed cosine similarity computation might directly leak the protected precomputed embedding stored on the server side. We propose a Function Secret Sharing (FSS) based face embedding comparison protocol without any intermediate result leakage. In addition, we show in experiments that the proposed protocol is more efficient than the Secret Sharing (SS) based protocol.
Authored by Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
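For intuition, the sketch below shows the additive secret-sharing (SS) style of embedding comparison that the paper benchmarks against: the stored embedding is split into two random shares so neither server holds the vector, yet the similarity score is recoverable from local partial results. This simplified real-valued version omits the finite-field arithmetic of a real protocol, and the authors' FSS protocol is substantially more involved.

```python
# Minimal sketch of the additive secret-sharing (SS) baseline: a face
# embedding is split into two random shares so neither server sees the
# vector, yet the inner product with a probe can be reconstructed.
# Real protocols work over finite fields; this is a real-valued toy.
import numpy as np

rng = np.random.default_rng(0)
embedding = rng.standard_normal(512)        # protected template
probe = rng.standard_normal(512)            # fresh query embedding

share0 = rng.standard_normal(512)           # random mask held by server 0
share1 = embedding - share0                 # complementary share, server 1

# Each server computes a partial inner product locally ...
partial0 = share0 @ probe
partial1 = share1 @ probe

# ... and only the sum (the similarity score) is ever revealed.
assert np.isclose(partial0 + partial1, embedding @ probe)
```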
Recent innovations in computer science and informatics are driving the integration of AI into modern healthcare, extending its applications to medical sectors previously reliant on human expertise. Creating robust and clinically relevant AI models requires extensive data, which can be challenging to gather, particularly when dealing with rare diseases. Data sharing among healthcare entities can address this issue, but legal, privacy, and data ownership concerns hinder such an approach. To foster data sharing, in this paper we propose the GEmelli GeNerator - Real World Data (GEN-RWD) Sandbox, which provides a secure environment for data analysis without compromising sensitive medical data. This modular architecture serves as a research platform for various stakeholders, including clinical researchers, policymakers, and pharmaceutical companies. Authorized users submit research requests through the GUI, which are processed within the hospital, and the results can be accessed without revealing the original clinical data source. In this paper we present the GEN-RWD Sandbox architecture module in charge of executing the analysis requests, the Processor. The Processor's code is openly shared as the GSProcessor R package, available at https://gitlab.com/benedetta.gottardelli/GSProcessor.
Authored by Benedetta Gottardelli, Roberto Gatta, Leonardo Nucciarelli, Mariachiara Savino, Andrada Tudor, Mauro Vallati, Andrea Damiani
The resurgence of Artificial Intelligence (AI) has been accompanied by a rise in ethical issues. AI practitioners either face challenges in making ethical choices when designing AI-based systems or are not aware of such challenges in the first place. Increasing the level of awareness and understanding of the perceptions of those who develop AI systems is a critical step toward mitigating ethical issues in AI development. Motivated by these challenges and needs, and by the lack of engaging approaches to address them, we developed an interactive, scenario-based ethical AI quiz. It allows AI practitioners, including software engineers who develop AI systems, to self-assess their awareness and perceptions about AI ethics. The experience of taking the quiz, and the feedback it provides, will help AI practitioners understand the gap areas and improve their overall ethical practice in everyday development scenarios. To demonstrate these expected outcomes and the relevance of our tool, we also share a preliminary user study. The video demo can be found at https://zenodo.org/record/7601169#.Y9xgA-xBxhF.
Authored by Wei Teo, Ze Teoh, Dayang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
In this work, we provide an in-depth characterization study of the performance overhead for running Transformer models with secure multi-party computation (MPC). MPC is a cryptographic framework for protecting both the model and input data privacy in the presence of untrusted compute nodes. Our characterization study shows that Transformers introduce several performance challenges for MPC-based private machine learning inference. First, Transformers rely extensively on “softmax” functions. While softmax functions are relatively cheap in a non-private execution, softmax dominates the MPC inference runtime, consuming up to 50% of the total inference runtime. Further investigation shows that computing the maximum, needed for providing numerical stability to softmax, is a key culprit for the increase in latency. Second, MPC relies on approximating non-linear functions that are part of the softmax computations, and the narrow dynamic ranges make optimizing softmax while maintaining accuracy quite difficult. Finally, unlike CNNs, Transformer-based NLP models use large embedding tables to convert input words into embedding vectors. Accesses to these embedding tables can disclose inputs; hence, additional obfuscation of embedding access patterns is required to guarantee input privacy. One approach to hide address accesses is to convert an embedding table lookup into a matrix multiplication. However, this naive approach increases MPC inference runtime significantly. We then apply tensor-train (TT) decomposition, a lossy compression technique for representing embedding tables, and evaluate its performance on embedding lookups. We show the trade-off between performance improvements and the corresponding impact on model accuracy using detailed experiments.
Authored by Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin Lee
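Two of the reported bottlenecks are easy to see in plain NumPy. The sketch below shows the max-subtraction that gives softmax numerical stability (computing that maximum is what dominates MPC runtime), then rewrites an embedding lookup as a one-hot matrix multiplication so the accessed index is hidden, at the cost the paper measures. No MPC backend is used here; this only illustrates the computations involved.

```python
# Minimal sketch of the two costs the study highlights, in plain NumPy.
import numpy as np

def stable_softmax(x):
    x = x - x.max(axis=-1, keepdims=True)   # numerical stability; costly in MPC
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

vocab, dim = 1000, 64
table = np.random.randn(vocab, dim)          # embedding table

token = 42
one_hot = np.zeros(vocab)
one_hot[token] = 1.0
hidden = one_hot @ table                     # oblivious lookup: O(vocab * dim)

assert np.allclose(hidden, table[token])     # same result as a direct access
```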
Searchable encryption allows users to perform search operations on encrypted data without first decrypting it. Secret sharing is one of the most important cryptographic primitives used to design information-theoretic schemes. Nowadays, cryptosystem designers provide a facility to adjust security parameters in real time to circumvent AI-enabled cyber security threats. For the long-term security of data used by various applications, proactive secret sharing allows the shares of the original secret to be dynamically adjusted during a specific interval of time. In proactive secret sharing, the updating of shares at regular intervals of time is done by the servers (participants) and not by the dealer. In this paper, we propose a novel proactive secret sharing scheme where the shares stored at servers are updated at regular intervals of time using preshared pairwise keys between servers. The direct search of words over sentences using the conjunctive search function, without the generation of any index, is possible using the underlying querying method.
Authored by Praveen K, Gabriel S, Indranil Ray, Avishek Adhikari, Sabyasachi Datta, Arnab Biswas
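A minimal sketch of the general proactive-refresh idea (not the authors' pairwise-key construction): each server deals a random polynomial with constant term zero, and every share absorbs the dealt evaluations. The shared secret is unchanged, while shares from before and after the refresh can no longer be combined by an attacker. Field size, threshold, and server IDs below are illustrative.

```python
# Minimal sketch of proactive share refresh over a prime field. The paper's
# pairwise-key update and index-free conjunctive search are not modeled.
import random

P = 2**61 - 1                      # prime field modulus
IDS = [1, 2, 3]                    # server identifiers (x-coordinates)
T = 1                              # degree-1 polynomials: any 2 shares recover

def poly_eval(coeffs, x):
    return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P

secret = 123456789
shares = {i: poly_eval([secret, random.randrange(P)], i) for i in IDS}

# Refresh round: every server deals a polynomial with constant term zero.
for _ in IDS:
    zero_poly = [0] + [random.randrange(P) for _ in range(T)]
    for i in IDS:
        shares[i] = (shares[i] + poly_eval(zero_poly, i)) % P

def reconstruct(pts):              # Lagrange interpolation at x = 0
    total = 0
    for xi, yi in pts:
        num = den = 1
        for xj, _ in pts:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

assert reconstruct([(1, shares[1]), (2, shares[2])]) == secret
```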
The volume of electronic media data has grown at an unprecedented rate in recent years. Video and multimedia data analysis is now practiced in almost all areas: security control, traffic control, weather monitoring, video conferencing, social media, and more. As a consequence, it is necessary to store and transmit these data while taking security and privacy issues into account. In this research study, a new Div-Mod Stego algorithm is combined with a Multi-Secret Sharing method, temporary frame reordering, and a Genetic Algorithm to implement high-end security in the process of video sharing. Qualitative and quantitative analyses have also been carried out to compare the performance of the proposed model with other existing models. A computational analysis shows that the proposed solution satisfies the requirements of real-time applications.
Authored by R. Logeshwari, Rajasekar Velswamy, Subhashini R, Karunakaran V
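The abstract does not give the algorithm's details, but the div-mod idea itself can be sketched in a few lines: each cover pixel is floored to a multiple of the base and the secret digit is stored in the remainder. The multi-secret sharing, frame reordering, and genetic-algorithm components are not modeled here.

```python
# Minimal sketch of a div-mod embedding step: the cover pixel is rounded
# down to a multiple of BASE and the secret digit is stored in the
# remainder. BASE is an illustrative choice (digits 0..7, ~3 bits/pixel).
BASE = 8

def embed(pixel: int, digit: int) -> int:
    return pixel - pixel % BASE + digit     # stays within 0..255 for BASE = 8

def extract(stego_pixel: int) -> int:
    return stego_pixel % BASE

cover = [153, 47, 201, 88]
secret_digits = [5, 0, 7, 3]
stego = [embed(p, d) for p, d in zip(cover, secret_digits)]
assert [extract(p) for p in stego] == secret_digits
```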
At present, people can easily share multimedia information on the Internet, which leads to serious data security issues. Especially in the medical, military, and financial fields, images often contain a lot of sensitive information. To transmit images safely, many secret image sharing methods have been proposed. However, the existing methods cannot simultaneously solve the problems of pixel expansion and the high computational complexity of shadow images. In this paper, we propose an image sharing method that combines a sharing matrix and a variational hyperprior network to reduce the pixel expansion and computational complexity of secret image sharing. The method uses the variational hyperprior network to encode images; it introduces the hyperprior to effectively capture spatial dependencies in the latent representation, which allows images to be compressed with high efficiency. The experimental results show that our method has low computational complexity and high security compared with state-of-the-art approaches. In addition, the proposed method effectively reduces pixel expansion when using the sharing matrix to generate shadow images.
Authored by Yuxin Ding, Miaomiao Shao, Cai Nie
Quantum secret sharing (QSS) is a cryptographic technique that relies on the transmission and manipulation of quantum states to distribute secret information securely across multiple participants. However, quantum systems are susceptible to various types of noise that can compromise their security and reliability. Therefore, it is essential to analyze the influence of noise on QSS to ensure its effectiveness and practicality in real-world quantum communication. This paper studies the impact of various noisy environments on multi-dimensional QSS. Using quantum fidelity, we examine the influence of four noise models: d-phase-flip (dpf), dit-flip (df), amplitude damping (ad), and depolarizing (d). We find that fidelity declines as the noise parameter increases. Furthermore, the results demonstrate that the efficiency of the QSS protocol differs significantly across the distinct noise models.
Authored by Deepa Rathi, Sanjeev Kumar, Reena Grover
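For the depolarizing model, the fidelity decline the paper reports can be reproduced in a short sketch: a pure qudit state is mixed with the maximally mixed state, and the fidelity F = ⟨ψ|ρ′|ψ⟩ = (1 − p) + p/d falls linearly in the noise parameter p. The other three channels follow the same pattern with different Kraus operators; the specific QSS protocol is not modeled here.

```python
# Minimal sketch: fidelity of a qutrit (d = 3) state under a depolarizing
# channel, illustrating the decline with the noise parameter p.
import numpy as np

d = 3
psi = np.ones(d) / np.sqrt(d)                    # |psi> = (|0>+|1>+|2>)/sqrt(3)
rho = np.outer(psi, psi.conj())

for p in (0.0, 0.1, 0.3, 0.5):
    noisy = (1 - p) * rho + p * np.eye(d) / d    # depolarizing channel
    fidelity = np.real(psi.conj() @ noisy @ psi) # F = <psi|rho'|psi>
    print(f"p = {p:.1f}  fidelity = {fidelity:.3f}")
```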
At present, technological solutions based on artificial intelligence (AI) are being adopted at an accelerating pace in various sectors of the world's economy and social relations. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identifying the vulnerabilities, threats, and risks of AI technologies requires considering each technology separately, or in combination when they are used jointly in applied solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors, and public services (including marketplaces) have received the most attention. Due to their high importance and impact on most AI solutions, this paper focuses on key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessment, cyber security awareness, and compliance. AI also provides mechanisms for writing cybersecurity training materials, plans, policies, and procedures. However, cyber security risk assessment and cyber insurance are very complicated to manage and measure. Cybersecurity professionals need a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, AI can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle, presenting a demonstration of how AI can aid cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs, along with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives on AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by AI itself; and (c) generative AI makes it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, an attack-tree analysis of attacks by AI itself revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
Data in AI-Empowered Electric Vehicles is protected by using blockchain technology for immutable and verifiable transactions, in addition to high-strength encryption methods and digital signatures. This research paper compares and evaluates the security mechanisms for V2X communication in AI-enabled EVs. The purpose of the study is to assess the reliability of security measures by evaluating performance metrics including false positive rate, false negative rate, detection accuracy, processing time, communication latency, computational resources, key generation time, and throughput. A comprehensive experimental approach is implemented using a diverse dataset gathered from actual V2X communication conditions. The evaluation reveals that the security mechanisms perform inconsistently. Message integrity verification obtains the highest detection accuracy, with a low false positive rate of 2% and a 0% false negative rate. Traffic encryption has a low processing time, requiring only 10 milliseconds for encryption and decryption, and adds only 5 bytes of communication latency to V2X messages. The detection accuracy of intrusion detection systems is adequate at 95%, but they require more computational resources, consuming 80% of the CPU and 150 MB of memory. In particular attack scenarios, certificate-based authentication and secure key exchange show promise. Certificate-based authentication mitigates MitM attacks with a false positive rate of 3% and a false negative rate of 1%. Secure key exchange thwarts replication attacks with a false positive rate of 0% and a false negative rate of 2%. Nevertheless, their efficacy varies with the attack scenario, highlighting the need for adaptive security mechanisms. The evaluated security mechanisms exhibit varying rates of throughput: message integrity verification and traffic encryption achieve high throughput, enabling secure data transfer rates of 1 Mbps and 800 Kbps, respectively. Overall, the results contribute to the understanding of V2X communication security challenges in AI-enabled EVs. Message integrity verification and traffic encryption emerge as effective mechanisms that provide robust security with high performance. The study provides insight for designing secure and dependable V2X communication systems. Future research should concentrate on enhancing V2X communication security mechanisms and exploring novel approaches to address emerging threats.
Authored by Edward V, Dhivya. S, M.Joe Marshell, Arul Jeyaraj, Ebenezer. V, Jenefa. A
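Of the mechanisms compared, message integrity verification is the simplest to sketch. The following standard-library example attaches an HMAC-SHA256 tag to a V2X payload and verifies it in constant time; key distribution, certificates, and the paper's benchmark harness are out of scope, and the payload format is made up.

```python
# Minimal sketch of message integrity verification for a V2X payload
# using an HMAC tag (Python standard library only).
import hmac, hashlib, os, time

key = os.urandom(32)                          # pre-shared integrity key

def sign(msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify(msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(msg), tag)  # constant-time comparison

msg = b'{"type":"BSM","speed":17.2,"heading":84}'   # hypothetical payload
t0 = time.perf_counter()
tag = sign(msg)
ok = verify(msg, tag)
print(ok, f"{(time.perf_counter() - t0) * 1e6:.1f} us")  # crude latency probe
```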
This article proposes a security protection technology based on active dynamic defense. It addresses unknown threats that traditional rule-based detection methods cannot detect, effectively resisting untargeted virus spread such as worms; isolates new unknown viruses, Trojans, and other attack threats; and strengthens terminal protection, effectively countering east-west lateral penetration attacks inside the internal network and enhancing the network's adversarial capabilities. The article also proposes modeling user behavior habits with machine learning algorithms: using historical behavior models, abnormal user behavior can be detected in real time, network danger can be perceived, and network defense strategies can be changed proactively to increase the difficulty for attackers. The goal is comprehensive and effective defense, identification, and localization of network attack behaviors, including APT attacks.
Authored by Fu Yu
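The behavior-modeling step could, for instance, look like the following sketch: an unsupervised model is fitted on historical per-user activity features, and new events are scored as normal or anomalous. The feature set and the choice of IsolationForest are illustrative assumptions, not the article's algorithm.

```python
# Minimal sketch: fit an unsupervised model on historical per-user
# features and flag outliers in new activity. Features are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
# columns: login hour, bytes transferred (MB), distinct hosts contacted
history = np.column_stack([
    rng.normal(9, 1, 500), rng.normal(40, 10, 500), rng.poisson(3, 500),
])
model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_events = np.array([[10, 45, 4],      # ordinary workday pattern
                       [3, 900, 60]])    # 3 a.m. bulk transfer, many hosts
print(model.predict(new_events))         # 1 = normal, -1 = anomalous
```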
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. Building on the role of artificial intelligence (AI) in computer network security management, it establishes a network security system that combines AI with traditional technologies. Furthermore, by incorporating an attention mechanism into a Graph Neural Network (GNN) and applying it to botnet detection, a more robust and comprehensive network security system is developed to improve detection and response capabilities for network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with false positive and false negative rates decreasing to 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNNs and attention mechanisms on network security performance.
Authored by Fei Xia, Zhihao Zhou
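A minimal sketch of an attention-based GNN node classifier of the kind the work builds on, using PyTorch Geometric's GATConv (graph attention) layers. The layer sizes, head count, and toy graph are illustrative; the paper's exact architecture and its CIC-IDS2017 preprocessing are not reproduced.

```python
# Minimal sketch: graph attention network classifying hosts as bot/benign.
import torch
from torch_geometric.nn import GATConv

class BotnetGAT(torch.nn.Module):
    def __init__(self, in_dim=16, hidden=32, n_classes=2, heads=4):
        super().__init__()
        self.g1 = GATConv(in_dim, hidden, heads=heads)          # attention layer
        self.g2 = GATConv(hidden * heads, n_classes, heads=1)   # classifier layer

    def forward(self, x, edge_index):
        h = torch.relu(self.g1(x, edge_index))
        return self.g2(h, edge_index)        # per-node logits: bot vs benign

x = torch.randn(100, 16)                     # 100 hosts, 16 flow features each
edge_index = torch.randint(0, 100, (2, 400)) # 400 directed communication edges
logits = BotnetGAT()(x, edge_index)
print(logits.shape)                          # torch.Size([100, 2])
```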
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The exploration of XAI's promises extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
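As one concrete instance of the XAI-for-IDS pairing discussed above, the sketch below fits a tree-based classifier on synthetic flow features and uses SHAP to attribute each alert to the features that drove it; the dataset, features, and model choice are all illustrative.

```python
# Minimal sketch: explain an IDS-style classifier's alerts with SHAP.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))               # e.g. duration, pkts, bytes, flags
y = (X[:, 1] + 2 * X[:, 3] > 1).astype(int)  # synthetic "intrusion" rule

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X[:5])   # per-feature attributions

# Large attributions show why each flow was flagged -- the transparency
# the paper argues reduces unexplained false positives/negatives.
print(np.asarray(shap_values).shape)
```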
In this work, we present a comprehensive survey of applications of the most recent attention-based transformer architecture in information security. Our review reveals three primary areas of application: intrusion detection, anomaly detection, and malware detection. We present an overview of attention-based mechanisms and their application in each cybersecurity use case, and discuss open directions for future trends in AI-enabled information security.
Authored by M. Vubangsi, Sarumi Abidemi, Olukayode Akanni, Auwalu Mubarak, Fadi Al-Turjman
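The common core of the surveyed architectures is scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ / √d_k) V, sketched below in NumPy for a single head.

```python
# Minimal sketch of single-head scaled dot-product attention.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # pairwise relevance of positions
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w = w / w.sum(axis=-1, keepdims=True)    # softmax over keys
    return w @ V                             # weighted mix of values

Q = np.random.randn(5, 8); K = np.random.randn(5, 8); V = np.random.randn(5, 8)
print(attention(Q, K, V).shape)              # (5, 8)
```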
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective, as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines for implementing solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards are continuously updated, the need to generate new policies or update existing ones efficiently and easily has increased [1] [2]. The use of AI in developing cybersecurity policies and procedures can be key to assuring the correctness and effectiveness of these policies, a need shared by both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing an in-depth demonstration of how AI can aid in generating policies quickly and to the required level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial intelligence (AI) has been used successfully in cyber security for enhancing the comprehension, investigation, and evaluation of cyber threats, and it can anticipate cyber risks more efficiently. AI also helps in putting in place strategies to safeguard assets and data. Due to their complexity and constant development, cybersecurity controls have been difficult to comprehend, as has adopting the corresponding cyber training, security policies, and plans. Given that both cyber academics and cyber practitioners need a deep comprehension of cybersecurity rules, AI in cybersecurity can be a crucial tool in both education and awareness. By offering an in-depth demonstration of how AI may help in cybersecurity education and awareness and in creating policies quickly and to the required level, this study focuses on the efficiency of AI-driven mechanisms in strengthening the entire cyber security education life cycle.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Authored by Adam Petz, Will Thomas, Anna Fritz, Timothy Barclay, Logan Schmalz, Perry Alexander