Cryptography underpins data security in numerous industries, including banking, healthcare, and transportation, and its importance grows as IoT and AI applications proliferate. While traditional cryptographic methods such as symmetric and asymmetric encryption each have benefits and drawbacks, there remains a demand for enhanced security that does not compromise efficiency. This work introduces a novel approach called Multi-fused cryptography, which combines the strengths of distinct cryptographic methods in order to overcome their individual shortcomings. Through a comparative performance analysis, our study demonstrates that the proposed technique successfully enhances data security during network transmission.
Authored by Irin Loretta, Idamakanti Kasireddy, M. Prameela, D Rao, M. Kalaiyarasi, S. Saravanan
In this work, we leverage the pure skin color patch from the face image as additional information to train an auxiliary skin color feature extractor and face recognition model in parallel, improving the performance of state-of-the-art (SOTA) privacy-preserving face recognition (PPFR) systems. Our solution is robust against black-box attacks and well-established generative adversarial network (GAN) based image restoration. We analyze a potential risk in previous work, where the proposed cosine similarity computation might directly leak the protected precomputed embedding stored on the server side. We propose a Function Secret Sharing (FSS) based face embedding comparison protocol without any intermediate result leakage. In addition, we show experimentally that the proposed protocol is more efficient than the Secret Sharing (SS) based protocol.
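As a toy illustration of the secret-sharing setting the paper builds on (not the FSS protocol itself), a hypothetical two-party additive sharing of an embedding vector can be sketched in Python; the modulus `P` and all values are illustrative:

```python
import random

P = 2**61 - 1  # illustrative prime modulus

def share(x, n=2):
    """Split integer x into n additive shares summing to x mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# A toy "embedding": each coordinate is shared independently, so no
# single party alone learns anything about the protected vector.
embedding = [12, 305, 77]
shared = [share(v) for v in embedding]
assert [reconstruct(s) for s in shared] == embedding
```

The point of the paper's FSS-based comparison is to evaluate a function of such shares without ever reconstructing the embedding itself.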
Authored by Dong Han, Yufan Jiang, Yong Li, Ricardo Mendes, Joachim Denzler
Recent innovations in computer science and informatics are driving the integration of AI into modern healthcare, extending its applications to medical sectors previously reliant on human expertise. Creating robust and clinically relevant AI models requires extensive data, which can be challenging to gather, particularly when dealing with rare diseases. Data sharing among healthcare entities can address this issue, but legal, privacy, and data ownership concerns hinder such approaches. To foster data sharing, in this paper we propose the GEmelli GeNerator - Real World Data (GEN-RWD) Sandbox, which provides a secure environment for data analysis without compromising sensitive medical data. This modular architecture serves as a research platform for various stakeholders, including clinical researchers, policymakers, and pharmaceutical companies. Authorized users submit research requests through the GUI, the requests are processed within the hospital, and the results can be accessed without revealing the original clinical data source. In this paper we present the GEN-RWD Sandbox's architecture module in charge of executing the analysis requests, the Processor. The Processor's code is openly shared as the GSProcessor R package available at https://gitlab.com/benedetta.gottardelli/GSProcessor.
Authored by Benedetta Gottardelli, Roberto Gatta, Leonardo Nucciarelli, Mariachiara Savino, Andrada Tudor, Mauro Vallati, Andrea Damiani
The resurgence of Artificial Intelligence (AI) has been accompanied by a rise in ethical issues. AI practitioners either face challenges in making ethical choices when designing AI-based systems or are not aware of such challenges in the first place. Increasing the level of awareness and understanding of the perceptions of those who develop AI systems is a critical step toward mitigating ethical issues in AI development. Motivated by these challenges, needs, and the lack of engaging approaches to address these, we developed an interactive, scenario-based ethical AI quiz. It allows AI practitioners, including software engineers who develop AI systems, to self-assess their awareness and perceptions about AI ethics. The experience of taking the quiz, and the feedback it provides, will help AI practitioners understand the gap areas, and improve their overall ethical practice in everyday development scenarios. To demonstrate these expected outcomes and the relevance of our tool, we also share a preliminary user study. The video demo can be found at https://zenodo.org/record/7601169#.Y9xgA-xBxhF.
Authored by Wei Teo, Ze Teoh, Dayang Arabi, Morad Aboushadi, Khairenn Lai, Zhe Ng, Aastha Pant, Rashina Hoda, Chakkrit Tantithamthavorn, Burak Turhan
In this work, we provide an in-depth characterization study of the performance overhead for running Transformer models with secure multi-party computation (MPC). MPC is a cryptographic framework for protecting both the model and input data privacy in the presence of untrusted compute nodes. Our characterization study shows that Transformers introduce several performance challenges for MPC-based private machine learning inference. First, Transformers rely extensively on “softmax” functions. While softmax functions are relatively cheap in a non-private execution, softmax dominates the MPC inference runtime, consuming up to 50% of the total inference runtime. Further investigation shows that computing the maximum, needed for providing numerical stability to softmax, is a key culprit for the increase in latency. Second, MPC relies on approximating non-linear functions that are part of the softmax computations, and the narrow dynamic ranges make optimizing softmax while maintaining accuracy quite difficult. Finally, unlike CNNs, Transformer-based NLP models use large embedding tables to convert input words into embedding vectors. Accesses to these embedding tables can disclose inputs; hence, additional obfuscation for embedding access patterns is required for guaranteeing the input privacy. One approach to hide address accesses is to convert an embedding table lookup into a matrix multiplication. However, this naive approach increases MPC inference runtime significantly. We then apply tensor-train (TT) decomposition, a lossy compression technique for representing embedding tables, and evaluate its performance on embedding lookups. We show the trade-off between performance improvements and the corresponding impact on model accuracy using detailed experiments.
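The numerical-stability issue described above can be illustrated with a minimal Python sketch: in cleartext, subtracting the maximum before exponentiating is cheap, but it is exactly this max computation that dominates the runtime under MPC. The function names are illustrative:

```python
import math

def softmax(xs):
    """Numerically stable softmax: subtract the max before exponentiating.
    The max step is cheap in cleartext but costly under MPC."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# Without the max subtraction, math.exp(1000.0) would overflow;
# with it, large logits are handled without issue.
probs = softmax([1000.0, 1000.0, 999.0])
assert abs(sum(probs) - 1.0) < 1e-9
```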
Authored by Yongqin Wang, Edward Suh, Wenjie Xiong, Benjamin Lefaudeux, Brian Knott, Murali Annavaram, Hsien-Hsin Lee
Searchable encryption allows users to perform search operations on encrypted data without decrypting it first. Secret sharing is one of the most important cryptographic primitives used to design information-theoretic schemes. Nowadays, cryptosystem designers provide a facility to adjust security parameters in real time to circumvent AI-enabled cyber security threats. For long-term security of data used by various applications, proactive secret sharing allows the shares of the original secret to be dynamically adjusted during a specific interval of time. In proactive secret sharing, the shares are updated at regular intervals by the servers (participants) rather than by the dealer. In this paper, we propose a novel proactive secret sharing scheme where the shares stored at servers are updated at regular intervals using preshared pairwise keys between servers. The underlying querying method enables direct search of words over sentences using a conjunctive search function, without generating any index.
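A minimal sketch of the share-refresh idea, assuming additive shares and a hash-based PRF (the paper's actual scheme and querying method are not reproduced here): each pair of servers derives a common offset from its preshared key, one adds it and the other subtracts it, so the offsets cancel and the secret is unchanged while every old share becomes stale.

```python
import hashlib

P = 2**61 - 1  # illustrative prime modulus

def prf(key, epoch):
    """Toy PRF: hash the preshared pairwise key together with the epoch."""
    digest = hashlib.sha256(f"{key}:{epoch}".encode()).digest()
    return int.from_bytes(digest, "big") % P

def refresh(shares, keys, epoch):
    """Re-randomize additive shares; pairwise offsets cancel, so the
    reconstructed secret is unchanged but every old share is invalidated."""
    n = len(shares)
    new = list(shares)
    for i in range(n):
        for j in range(n):
            if i != j:
                d = prf(keys[(min(i, j), max(i, j))], epoch)
                new[i] = (new[i] + (d if i < j else -d)) % P
    return new

secret = 1234
shares = [11, 22, (secret - 33) % P]            # three servers, additive shares
keys = {(0, 1): 101, (0, 2): 102, (1, 2): 103}  # illustrative pairwise keys
assert sum(refresh(shares, keys, epoch=7)) % P == secret
```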
Authored by Praveen K, Gabriel S, Indranil Ray, Avishek Adhikari, Sabyasachi Datta, Arnab Biswas
The volume of electronic media data has grown at an unprecedented rate in recent years. Video and multimedia data analysis is used in almost all security control areas: traffic control, weather monitoring, video conferencing, social media, and more. As a consequence, it is necessary to store and transmit these data while considering security and privacy issues. In this research study, a new Div-Mod Stego algorithm is combined with a multi-secret sharing method, temporary frame reordering, and a genetic algorithm to implement high-end security in the video sharing process. Qualitative and quantitative analyses have also been carried out to compare the performance of the proposed model with existing models. Computational analysis shows that the proposed solution satisfies the requirements of real-time applications.
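As a hedged sketch of generic modulus-based embedding (not the paper's exact Div-Mod Stego combined with multi-secret sharing and a genetic algorithm), a secret digit can replace a pixel's residue modulo `m`:

```python
def embed(pixel, digit, m=8):
    """Replace the pixel's residue mod m with a secret digit (0 <= digit < m)."""
    p = pixel - (pixel % m) + digit
    if p > 255:   # keep the stego pixel a valid 8-bit intensity
        p -= m
    return p

def extract(pixel, m=8):
    """Recover the embedded digit as the pixel's residue mod m."""
    return pixel % m

# Round trip: the digit survives embedding and extraction.
for pix in (0, 17, 200, 255):
    for d in range(8):
        assert extract(embed(pix, d)) == d
```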
Authored by R. Logeshwari, Rajasekar Velswamy, Subhashini R, Karunakaran V
At present, people can easily share multimedia information on the Internet, which leads to serious data security issues. Especially in the medical, military, and financial fields, images often contain a great deal of sensitive information. To transmit images among people safely, many secret image sharing methods have been proposed. However, existing methods cannot solve the problems of pixel expansion and the high computational complexity of shadow images at the same time. In this paper, we propose an image sharing method that combines a sharing matrix with a variational hyperprior network, to reduce the pixel expansion and computational complexity of secret image sharing methods. The method uses the variational hyperprior network to encode images; the hyperprior effectively captures spatial dependencies in the latent representation, which allows images to be compressed with high efficiency. The experimental results show that our method has low computational complexity and high security performance compared with state-of-the-art approaches. In addition, the proposed method effectively reduces pixel expansion when the sharing matrix is used to generate shadow images.
Authored by Yuxin Ding, Miaomiao Shao, Cai Nie
Quantum secret sharing (QSS) is a cryptography technique relying on the transmission and manipulation of quantum states to distribute secret information across multiple participants securely. However, quantum systems are susceptible to various types of noise that can compromise their security and reliability. Therefore, it is essential to analyze the influence of noise on QSS to ensure its effectiveness and practicality in real-world quantum communication. This paper studies the impact of various noisy environments on multi-dimensional QSS. Using quantum fidelity, we examine the influence of four noise models: d-phase-flip (dpf), dit-flip (df), amplitude damping (ad), and depolarizing (d). It has been discovered that the fidelity declines with an increase in the noise parameter. Furthermore, the results demonstrate that the efficiency of the QSS protocol differs significantly across distinct noise models.
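For the depolarizing channel, the decline of fidelity with the noise parameter has a simple closed form that can serve as a sanity check (the other three noise models are not covered by this sketch):

```python
def depolarizing_fidelity(lam, d):
    """Fidelity of a pure d-dimensional state after depolarizing noise:
    rho' = (1 - lam) * rho + lam * I/d, so F = <psi|rho'|psi> = 1 - lam + lam/d."""
    return 1.0 - lam + lam / d

# Fidelity is 1 with no noise and declines as the noise parameter grows.
fids = [depolarizing_fidelity(l / 10, d=3) for l in range(11)]
assert fids[0] == 1.0
assert all(a > b for a, b in zip(fids, fids[1:]))
```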
Authored by Deepa Rathi, Sanjeev Kumar, Reena Grover
At present, technological solutions based on artificial intelligence (AI) are being accelerated in various sectors of the economy and social relations in the world. Practice shows that fast-developing information technologies, as a rule, carry new, previously unidentified threats to information security (IS). It is quite obvious that identification of vulnerabilities, threats and risks of AI technologies requires consideration of each technology separately or in some aggregate in cases of their joint use in application solutions. Of the wide range of AI technologies, data preparation, DevOps, Machine Learning (ML) algorithms, cloud technologies, microprocessors and public services (including Marketplaces) have received the most attention. Due to the high importance and impact on most AI solutions, this paper will focus on the key AI assets, the attacks and risks that arise when implementing AI-based systems, and the issue of building secure AI.
Authored by P. Lozhnikov, S. Zhumazhanova
The effective use of artificial intelligence (AI) to enhance cyber security has been demonstrated in various areas, including cyber threat assessments, cyber security awareness, and compliance. AI also provides mechanisms to write cybersecurity training, plans, policies, and procedures. However, when it comes to cyber security risk assessment and cyber insurance, it is very complicated to manage and measure. Cybersecurity professionals need to have a thorough understanding of cybersecurity risk factors and assessment techniques. For this reason, artificial intelligence (AI) can be an effective tool for producing a more thorough and comprehensive analysis. This study focuses on the effectiveness of AI-driven mechanisms in enhancing the complete cyber security insurance life cycle by examining and implementing a demonstration of how AI can aid in cybersecurity resilience.
Authored by Shadi Jawhar, Craig Kimble, Jeremy Miller, Zeina Bitar
The authors clarified in 2020 that the relationship between AI and security can be classified into four categories: (a) attacks using AI, (b) attacks by AI itself, (c) attacks on AI, and (d) security measures using AI, and summarized research trends for each. Subsequently, ChatGPT became available in November 2022, and the various potential applications of ChatGPT and other generative AIs, together with the associated risks, have attracted attention. In this study, we examined how the emergence of generative AI affects the relationship between AI and security. The results show that (a) the need for the four perspectives of AI and security remains unchanged in the era of generative AI; (b) the generalization of AI targets and automatic program generation enabled by generative AI will greatly increase the risk of attacks by the AI itself; and (c) generative AI makes it possible to generate easy-to-understand answers to various questions in natural language, which may lead to the spread of fake news and phishing e-mails that can easily fool many people, and to an increase in AI-based attacks. In addition, it became clear that (1) attacks using AI and (2) responses to attacks by AI itself are highly important. Among these, the analysis of attacks by AI itself, using an attack tree, revealed that the following measures are needed: (a) establishment of penalties for developing inappropriate programs, (b) introduction of a reporting system for signs of attacks by AI, (c) measures to prevent AI revolt by incorporating Asimov's three principles of robotics, and (d) establishment of a mechanism to prevent AI from attacking humans even when it becomes confused.
Authored by Ryoichi Sasaki
Data in AI-empowered electric vehicles is protected by using blockchain technology for immutable and verifiable transactions, in addition to high-strength encryption methods and digital signatures. This research paper compares and evaluates the security mechanisms for V2X communication in AI-enabled EVs. The purpose of the study is to ensure the reliability of security measures by evaluating performance metrics including false positive rate, false negative rate, detection accuracy, processing time, communication latency, computational resources, key generation time, and throughput. A comprehensive experimental approach is implemented using a diverse dataset gathered from actual V2X communication conditions. The evaluation reveals that the security mechanisms perform inconsistently. Message integrity verification obtains the highest detection accuracy, with a low false positive rate of 2% and a 0% false negative rate. Traffic encryption has a low processing time, requiring only 10 milliseconds for encryption and decryption, and adds only 5 bytes of communication latency to V2X messages. The detection accuracy of intrusion detection systems is adequate at 95%, but they require more computational resources, consuming 80% of the CPU and 150 MB of memory. In particular attack scenarios, certificate-based authentication and secure key exchange show promise. Certificate-based authentication mitigates MitM attacks with a false positive rate of 3% and a false negative rate of 1%. Secure key exchange thwarts replication attacks with a false positive rate of 0% and a false negative rate of 2%. Nevertheless, their efficacy varies based on the attack scenario, highlighting the need for adaptive security mechanisms. The evaluated security mechanisms exhibit varying rates of throughput. Message integrity verification and traffic encryption accomplish high throughput, enabling secure data transfer rates of 1 Mbps and 800 Kbps, respectively.
Overall, the results contribute to the comprehension of V2X communication security challenges in AI-enabled EVs. Message integrity verification and traffic encryption have emerged as effective mechanisms that provide robust security with high performance. The study provides insight for designing secure and dependable V2X communication systems. Future research should concentrate on enhancing V2X communication security mechanisms and exploring novel approaches to resolve emerging threats.
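The rates reported above follow the standard confusion-matrix definitions; a small sketch with illustrative counts (the tp, fp, tn, fn values are not the paper's data):

```python
def detection_metrics(tp, fp, tn, fn):
    """Standard confusion-matrix metrics used in such evaluations."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "false_positive_rate": fp / (fp + tn),
        "false_negative_rate": fn / (fn + tp),
    }

# Illustrative counts yielding a 2% FPR and 0% FNR:
m = detection_metrics(tp=100, fp=2, tn=98, fn=0)
assert abs(m["false_positive_rate"] - 0.02) < 1e-9
assert m["false_negative_rate"] == 0.0
```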
Authored by Edward V, Dhivya. S, M.Joe Marshell, Arul Jeyaraj, Ebenezer. V, Jenefa. A
This article proposes a security protection technology based on active dynamic defense. It addresses unknown threats that traditional rule-based detection methods cannot detect, effectively resisting undirected virus spread such as worms; it isolates new unknown viruses, Trojans, and other attack threats; and it strengthens terminal protection, effectively countering east-west lateral penetration attacks within the internal network and enhancing the network's adversarial capabilities. The article also proposes modeling user behavior habits with machine learning algorithms: using historical behavior models, abnormal user behavior can be detected in real time, network danger can be perceived, and network defense strategies can be changed proactively to increase the difficulty for attackers. The goal is comprehensive and effective defense against, identification of, and localization of network attack behaviors, including APT attacks.
Authored by Fu Yu
This work aims to construct a management system capable of automatically detecting, analyzing, and responding to network security threats, thereby enhancing the security and stability of networks. It is based on the role of artificial intelligence (AI) in computer network security management to establish a network security system that combines AI with traditional technologies. Furthermore, by incorporating the attention mechanism into Graph Neural Network (GNN) and utilizing botnet detection, a more robust and comprehensive network security system is developed to improve detection and response capabilities for network attacks. Finally, experiments are conducted using the Canadian Institute for Cybersecurity Intrusion Detection Systems 2017 dataset. The results indicate that the GNN combined with an attention mechanism performs well in botnet detection, with low false positive and false negative rates of 0.01 and 0.03, respectively. The model achieves a monitoring accuracy of 98%, providing a promising approach for network security management. The findings underscore the potential role of AI in network security management, especially the positive impact of combining GNN and attention mechanisms on enhancing network security performance.
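A pure-Python sketch of the scaled dot-product attention weighting that such a GNN attention layer builds on (the paper's exact architecture is not reproduced here): each neighbor's embedding is scored against a query vector, and a softmax turns the scores into weights.

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights over neighbor embeddings."""
    d = len(query)
    logits = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    m = max(logits)                      # stable softmax over the logits
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Identical neighbors receive identical weights; weights sum to one.
w = attention_weights([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]])
assert abs(sum(w) - 1.0) < 1e-9 and abs(w[0] - 0.5) < 1e-9
```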
Authored by Fei Xia, Zhihao Zhou
As cloud computing continues to evolve, the security of cloud-based systems remains a paramount concern. This research paper delves into the intricate realm of intrusion detection systems (IDS) within cloud environments, shedding light on their diverse types, associated challenges, and inherent limitations. In parallel, the study dissects the realm of Explainable AI (XAI), unveiling its conceptual essence and its transformative role in illuminating the inner workings of complex AI models. Amidst the dynamic landscape of cybersecurity, this paper unravels the synergistic potential of fusing XAI with intrusion detection, accentuating how XAI can enrich transparency and interpretability in the decision-making processes of AI-driven IDS. The promise of XAI extends to its capacity to mitigate contemporary challenges faced by traditional IDS, particularly in reducing false positives and false negatives. By fostering an understanding of these challenges and their ramifications, this study elucidates the path forward in enhancing cloud-based security mechanisms. Ultimately, the culmination of insights reinforces the imperative role of Explainable AI in fortifying intrusion detection systems, paving the way for a more robust and comprehensible cybersecurity landscape in the cloud.
Authored by Utsav Upadhyay, Alok Kumar, Satyabrata Roy, Umashankar Rawat, Sandeep Chaurasia
In this work, we present a comprehensive survey of applications of the most recent attention-based Transformer architecture in information security. Our review reveals three primary areas of application: intrusion detection, anomaly detection, and malware detection. We present an overview of attention-based mechanisms and their application in each cybersecurity use case, and discuss open ground for future trends in artificial intelligence enabled information security.
Authored by M. Vubangsi, Sarumi Abidemi, Olukayode Akanni, Auwalu Mubarak, Fadi Al-Turjman
The use of artificial intelligence (AI) in cyber security [1] has proven to be very effective, as it helps security professionals better understand, examine, and evaluate possible risks and mitigate them. It also provides guidelines for implementing solutions to protect assets and safeguard the technology used. As cyber threats continue to evolve in complexity and scope, and as international standards are continuously updated, the need to generate new policies or update existing ones efficiently and easily has increased [1], [2]. The use of AI in developing cybersecurity policies and procedures can be key in assuring the correctness and effectiveness of these policies, a need for both private organizations and governmental agencies. This study sheds light on the power of AI-driven mechanisms in enhancing digital defense procedures by providing a deep implementation of how AI can aid in generating policies quickly and to the needed level.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Artificial intelligence (AI) has been successfully used in cyber security to enhance the comprehension, investigation, and evaluation of cyber threats, and it can anticipate cyber risks more efficiently. AI also helps in putting in place strategies to safeguard assets and data. Due to their complexity and constant development, it has been difficult to comprehend cybersecurity controls and to adopt the corresponding cyber training, security policies, and plans. Given that both cyber academics and cyber practitioners need a deep comprehension of cybersecurity rules, AI in cybersecurity can be a crucial tool in both education and awareness. By offering an in-depth demonstration of how AI may help in cybersecurity education and awareness and in creating policies quickly and to the needed level, this study focuses on the efficiency of AI-driven mechanisms in strengthening the entire cyber security education life cycle.
Authored by Shadi Jawhar, Jeremy Miller, Zeina Bitar
Authored by Adam Petz, Will Thomas, Anna Fritz, Timothy Barclay, Logan Schmalz, Perry Alexander
As of 2024, the landscape of infrastructure Distributed Denial of Service (DDoS) attacks continues to evolve with increasing complexity and sophistication. These attacks are not only increasing in volume but also in their ability to evade traditional defenses due to advancements in AI, which enables adversaries to dynamically adapt their attack targets and tactics to maximize damage. The emergence of high-performance botnets utilizing virtual machines allows attackers to launch large-scale attacks with fewer resources. Consequently, defense strategies must adapt by integrating AI-driven anomaly detection and robust multi-layered defenses to keep pace with these evolving threats. In this paper, we introduce a novel deep reinforcement learning (DRL) framework for mitigating infrastructure DDoS attacks. Our framework features an actor-critic-based DRL network, integrated with variational autoencoders (VAE) to improve learning efficiency and scalability. The VAE assesses the risk of each traffic flow by analyzing various traffic features, while the actor-critic networks use the current link load and the VAE-generated flow risk scores to determine the probability of DDoS mitigation actions, such as traffic limiting, redirecting, or sending puzzles to verify traffic sources. The puzzle inquiry results are fed back to the VAE to refine the risk assessment process. The key strengths of our framework are: (1) the VAE serves as an adaptive anomaly detector, evolving based on DRL agent actions instead of relying on static IDS rules that may quickly become outdated; (2) by separating traffic behavior characterization (handled by the VAE) from action selection (handled by the DRL), we significantly reduce the DRL state space, enhancing scalability; and (3) the dynamic collaboration between the DRL engine and the VAE allows for real-time adaptation to evolving attack patterns with high efficiency. We show the feasibility and effectiveness of the framework with various attack scenarios.
Our approach uniquely integrates an actor-critic learning algorithm with the VAE to understand traffic flow properties and determine optimal actions through a continuous learning process. Our evaluation demonstrates that this framework effectively identifies attack traffic flows, achieving a true positive rate exceeding 95% and a false positive rate below 4%. Additionally, it learns the optimal strategy in a reasonable time, under 20,000 episodes in most experimental settings.
Authored by Qi Duan
Authored by Ehab Duan, David Garlan
Modern network defense can benefit from the use of autonomous systems, offloading tedious and time-consuming work to agents with standard and learning-enabled components. These agents, operating on critical network infrastructure, need to be robust and trustworthy to ensure defense against adaptive cyber-attackers and, simultaneously, provide explanations for their actions and network activity. However, learning-enabled components typically use models, such as deep neural networks, that are not transparent in their high-level decision-making leading to assurance challenges. Additionally, cyber-defense agents must execute complex long-term defense tasks in a reactive manner that involve coordination of multiple interdependent subtasks. Behavior trees are known to be successful in modelling interpretable, reactive, and modular agent policies with learning-enabled components. In this paper, we develop an approach to design autonomous cyber defense agents using behavior trees with learning-enabled components, which we refer to as Evolving Behavior Trees (EBTs). We learn the structure of an EBT with a novel abstract cyber environment and optimize learning-enabled components for deployment. The learning-enabled components are optimized for adapting to various cyber-attacks and deploying security mechanisms. The learned EBT structure is evaluated in a simulated cyber environment, where it effectively mitigates threats and enhances network visibility. For deployment, we develop a software architecture for evaluating EBT-based agents in computer network defense scenarios. Our results demonstrate that the EBT-based agent is robust to adaptive cyber-attacks and provides high-level explanations for interpreting its decisions and actions.
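A minimal behavior-tree sketch in Python, assuming a hypothetical "isolate on intrusion" defense policy (the EBT structure and learning-enabled components in the paper are far richer than this):

```python
SUCCESS, FAILURE = "success", "failure"

class Condition:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        return SUCCESS if self.fn(state) else FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, state):
        self.fn(state)
        return SUCCESS

class Sequence:
    """Tick children in order; fail as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Tick children in order; succeed as soon as one child succeeds."""
    def __init__(self, *children):
        self.children = children
    def tick(self, state):
        for c in self.children:
            if c.tick(state) == SUCCESS:
                return SUCCESS
        return FAILURE

# Hypothetical policy: isolate the host when an intrusion is flagged,
# otherwise leave it connected.
tree = Selector(
    Sequence(Condition(lambda s: s["intrusion"]),
             Action(lambda s: s.__setitem__("isolated", True))),
    Action(lambda s: s.__setitem__("isolated", False)),
)

state = {"intrusion": True, "isolated": False}
tree.tick(state)
assert state["isolated"] is True
```

The Selector/Sequence composition is what makes behavior trees reactive and interpretable: each tick re-evaluates conditions from the root, so the policy responds immediately when the environment changes.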
Authored by Nicholas Potteiger, Ankita Samaddar, Hunter Bergstrom, Xenofon Koutsoukos
The growing deployment of IoT devices has led to unprecedented interconnection and information sharing, but it has also presented novel security difficulties. This research study proposes a unique strategy for addressing security issues in Internet of Things (IoT) networks using intrusion detection systems (IDS) based on artificial intelligence (AI) and machine learning (ML). The purpose of this research is to improve the present level of security in IoT ecosystems while ensuring that possible threats are identified and mitigated effectively. The frequency of cyber attacks grows in proportion to the number of people who rely on and use the internet, and data sent over a network is vulnerable to interception by both internal and external parties, whether the attack is launched by a human or an automated system. The intensity and effectiveness of these attacks are continuously rising, and avoiding or foiling such hackers and attackers has become increasingly difficult. Individuals or businesses with extensive domain expertise occasionally offer IDS solutions that are adaptive, unique, and trustworthy. This research concerns IDS and cryptography, on which there are a number of scholarly articles. Several machine learning and deep learning techniques are investigated, and some cryptographic techniques are used to further strengthen security standards. Accuracy and performance problems were not considered in prior research, and further protection is necessary; deep learning can therefore be even more effective and accurate in the future.
Authored by Mohammed Mahdi
In the ever-evolving landscape of cybersecurity threats, intrusion detection systems are critical in protecting network and server infrastructure. This research introduces a hybrid detection approach that uses deep learning techniques to improve intrusion detection accuracy and efficiency. The proposed prototype combines the strengths of the XGBoost and MaxPooling1D algorithms within an ensemble model, resulting in a stable and effective solution. Through the fusion of these methodologies, the hybrid detection system achieves superior performance in identifying and mitigating various types of intrusions. This paper provides an overview of the prototype's architecture, discusses the benefits of using deep learning in intrusion detection, and presents experimental results showcasing the system's efficacy.
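As a hedged illustration of ensemble fusion in general (not the paper's XGBoost + MaxPooling1D model), a weighted soft vote over two detectors' intrusion scores can be sketched as:

```python
def ensemble_predict(scores_a, scores_b, threshold=0.5, weight_a=0.5):
    """Fuse two detectors' intrusion probabilities by weighted soft voting."""
    fused = [weight_a * a + (1 - weight_a) * b
             for a, b in zip(scores_a, scores_b)]
    return [1 if s >= threshold else 0 for s in fused]

# Flows scored by two hypothetical detectors; the fused score crosses
# the threshold only for the first flow.
assert ensemble_predict([0.9, 0.2, 0.6], [0.8, 0.1, 0.3]) == [1, 0, 0]
```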
Authored by Vishnu Kurnala, Swaraj Naik, Dhanush Surapaneni, Ch. Reddy