The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deep integration of the physical and cyber spaces. Meanwhile, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes, but it remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges, including trustworthiness, scalability, and transfer learning, are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
Machine-learning-based approaches have emerged as viable solutions for the automatic detection of container-related cyber attacks. Choosing the best anomaly detection algorithms to identify such cyber attacks can be difficult in practice, and it becomes even more difficult for zero-day attacks, for which no prior attack data has been labeled. In this paper, we aim to address this issue by adopting an ensemble learning strategy: a combination of different base anomaly detectors built using conventional machine learning algorithms. The learning strategy provides highly accurate zero-day container attack detection. We first architect a testbed to facilitate data collection and storage, as well as model training and inference. We then perform two case studies of cyber attacks. We show that, for both case studies, although individual base detector performance varies greatly across model types and hyperparameters, ensemble learning can consistently produce detection results that are close to those of the best base anomaly detectors. Additionally, we demonstrate that the detection performance of the resulting ensemble models is on average comparable to the best-performing deep learning anomaly detection approaches, but with much higher robustness, shorter training time, and much less training data. This makes the ensemble learning approach very appealing for practical real-time cyber attack detection scenarios with limited training data.
Authored by Shuai Guo, Thanikesavan Sivanthi, Philipp Sommer, Maëlle Kabir-Querrec, Nicolas Coppik, Eshaan Mudgal, Alessandro Rossotti
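The score-combination idea behind such an ensemble can be sketched as follows. This is a minimal, library-free illustration with made-up scores, not the authors' testbed or detectors: each base detector's anomaly scores are min-max normalized and then averaged per sample.

```python
# Minimal ensemble sketch (hypothetical scores, not real container data):
# normalize each base detector's anomaly scores to [0, 1], then average
# them so no single detector's scale dominates the combined verdict.

def normalize(scores):
    """Min-max normalize a list of anomaly scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.0 for _ in scores]
    return [(s - lo) / (hi - lo) for s in scores]

def ensemble_scores(score_lists):
    """Average the normalized scores of all base detectors per sample."""
    normed = [normalize(s) for s in score_lists]
    return [sum(col) / len(col) for col in zip(*normed)]

# Three base detectors scoring the same five samples; every detector
# ranks the last sample as most anomalous, on very different scales.
detector_a = [0.1, 0.2, 0.1, 0.3, 0.9]
detector_b = [1.0, 1.5, 1.2, 2.0, 9.0]
detector_c = [5.0, 4.0, 6.0, 7.0, 30.0]

combined = ensemble_scores([detector_a, detector_b, detector_c])
flagged = combined.index(max(combined))  # index of most anomalous sample
```

Averaging normalized scores is only one of several possible combination rules (maximum or rank aggregation are common alternatives); the abstract does not specify which rule the authors use.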
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to an increasing range of security and privacy attacks. In this context, a wide range of advanced machine/deep learning-based solutions have been designed to accurately detect security attacks. Specifically, supervised learning techniques have been widely applied to train attack detection models. However, the main limitation of such solutions is their inability to detect attacks different from those seen during the training phase, or new attacks, also called zero-day attacks. Moreover, training the detection model requires significant data collection and labeling, which increases the communication overhead and raises privacy concerns. To address the aforementioned limitations, we propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on benign network traffic patterns. Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs' privacy and minimizing the communication overhead. An in-depth experiment on a recent network traffic dataset shows that the proposed system achieved a high detection rate while minimizing the false positive rate and the detection delay.
Authored by Abdelaziz Korba, Abdelwahab Boualouache, Bouziane Brik, Rabah Rahal, Yacine Ghamri-Doudane, Sidi Senouci
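The core decision rule of an auto-encoder IDS trained only on benign traffic can be sketched as follows. The numbers and the threshold rule are illustrative, and the reconstruction step here stands in for a trained deep auto-encoder: traffic whose reconstruction error exceeds a threshold fitted on benign errors is flagged as an attack.

```python
# Sketch of reconstruction-error thresholding (hypothetical numbers;
# in the real system the errors come from a deep auto-encoder trained
# on benign traffic, possibly via federated learning).

def mse(x, x_hat):
    """Mean squared reconstruction error between input and output."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

def fit_threshold(benign_errors, quantile=0.95):
    """Pick the error threshold as an empirical quantile of benign errors."""
    ordered = sorted(benign_errors)
    idx = min(int(quantile * len(ordered)), len(ordered) - 1)
    return ordered[idx]

def is_attack(error, threshold):
    return error > threshold

# Reconstruction errors observed on benign traffic during training.
benign_errors = [0.01, 0.02, 0.015, 0.03, 0.02, 0.025, 0.01, 0.04, 0.02, 0.03]
threshold = fit_threshold(benign_errors)

# A zero-day flow reconstructs poorly, so its error stands out even
# though the model never saw any attack during training.
zero_day_error = mse([0.0, 1.0], [0.8, 0.1])
```

Because the model only learns what "normal" looks like, no labeled attack data is needed, which is exactly what makes the approach applicable to zero-day attacks.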
An intrusion detection system (IDS) is a crucial software or hardware application that employs security mechanisms to identify suspicious activity in a system or network. According to the detection technique, IDSs fall into two categories: signature-based and anomaly-based. Signature-based systems are said to be incapable of handling zero-day attacks, while anomaly-based systems are able to handle them. Machine learning techniques play a vital role in the development of IDS. There are differences of opinion regarding the optimal algorithm for IDS classification in several previous studies, such as Random Forest, J48, and AdaBoost. Therefore, this study aims to evaluate the performance of the three algorithm models, using the NSL-KDD and UNSW-NB15 datasets used in previous studies. Empirical results demonstrate that utilizing AdaBoost+J48 with NSL-KDD achieves an accuracy of 99.86%, along with precision, recall, and F1-score rates of 99.9%. These results surpass previous studies using AdaBoost+Random Tree, with an accuracy of 98.45%. Furthermore, this research explores the effectiveness of anomaly-based systems in dealing with zero-day attacks. Remarkably, the results show that anomaly-based systems perform admirably in such scenarios. For instance, employing Random Forest with the UNSW-NB15 dataset yielded the highest performance, with an accuracy rating of 99.81%.
Authored by Nurul Fauzi, Fazmah Yulianto, Hilal Nuha
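The boosting mechanism behind AdaBoost+J48 can be sketched in miniature. This is a toy one-dimensional AdaBoost with decision stumps as base learners (the paper's J48 trees are replaced by stumps for brevity, and the data is made up, not NSL-KDD): each round fits the stump minimizing weighted error, then re-weights samples so misclassified ones count more in the next round.

```python
import math

# Toy AdaBoost sketch: decision stumps on a single feature, with the
# standard exponential re-weighting of misclassified samples.

def stump_predict(x, thresh, sign):
    return sign if x > thresh else -sign

def fit_adaboost(xs, ys, rounds=5):
    n = len(xs)
    w = [1.0 / n] * n
    model = []  # list of (thresh, sign, alpha)
    for _ in range(rounds):
        best = None
        for thresh in xs:          # candidate thresholds from the data
            for sign in (1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(x, thresh, sign) != y)
                if best is None or err < best[0]:
                    best = (err, thresh, sign)
        err, thresh, sign = best
        err = min(max(err, 1e-10), 1 - 1e-10)   # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        model.append((thresh, sign, alpha))
        # Up-weight misclassified samples, then renormalize.
        w = [wi * math.exp(-alpha * y * stump_predict(x, thresh, sign))
             for wi, x, y in zip(w, xs, ys)]
        total = sum(w)
        w = [wi / total for wi in w]
    return model

def predict(model, x):
    score = sum(alpha * stump_predict(x, t, s) for t, s, alpha in model)
    return 1 if score > 0 else -1

# Toy 1-D data: values above 5 are "attack" (+1), others benign (-1).
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [-1, -1, -1, -1, 1, 1, 1, 1]
model = fit_adaboost(xs, ys)
```

In practice one would use a library implementation (e.g. scikit-learn's `AdaBoostClassifier`) with a decision-tree base learner rather than hand-rolled stumps.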
This paper seeks to understand how zero-day vulnerabilities relate to traded markets. People in trade and development are reluctant to talk about zero-day vulnerabilities. Drawing on years of research in addition to interviews, the majority of the public documentation about Cesar Cerrudo's 0-day vulnerabilities is examined, together with discussions with experts across many computer security domains. In this research, we give a summary of current malware detection technologies and suggest a fresh zero-day malware detection and prevention model that is capable of efficiently separating malicious from benign zero-day samples. We also discuss various methods used to detect malicious files and present the results obtained from these methods.
Authored by Atharva Deshpande, Isha Patil, Jayesh Bhave, Aum Giri, Nilesh Sable, Gurunath Chavan
Android is the most popular smartphone operating system, with a market share of 68.6% in April 2023. Hence, Android is a tempting target for cybercriminals. This research aims at contributing to the ongoing efforts to enhance the security of Android applications and protect users from the ever-increasing sophistication of malware attacks. Zero-day attacks pose a significant challenge to traditional signature-based malware detection systems, as they exploit vulnerabilities that are unknown to all. In this context, static analysis can be an encouraging approach for detecting malware in Android applications, leveraging machine learning (ML) and deep learning (DL)-based models. In this research, we have used single features and combinations of features extracted from the static properties of mobile apps as inputs to the ML and DL based models, enabling them to learn and differentiate between normal and malicious behavior. We have evaluated the performance of those models on a diverse dataset (DREBIN) comprising real-world Android application features, including both benign and zero-day malware samples. We achieved an F1 score of 96% from the multi-view model (DL model) in the zero-day malware scenario. Thus, this research can be helpful for mitigating the risk of unknown malware.
Authored by Jabunnesa Sara, Shohrab Hossain
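The "multi-view" static-feature idea can be sketched as follows. The permission and API vocabularies below are hypothetical placeholders, not DREBIN's actual feature set: each view is a binary presence vector over its vocabulary, and the views are concatenated before being fed to an ML/DL classifier.

```python
# Sketch of building a static, multi-view feature vector for an Android
# app (hypothetical vocabularies; DREBIN uses much larger feature sets
# extracted from the manifest and disassembled code).

PERMISSION_VOCAB = ["SEND_SMS", "READ_CONTACTS", "INTERNET", "CAMERA"]
API_VOCAB = ["getDeviceId", "sendTextMessage", "openConnection"]

def view_vector(items, vocab):
    """Binary presence vector of `items` over a fixed vocabulary."""
    present = set(items)
    return [1 if v in present else 0 for v in vocab]

def multi_view_vector(permissions, api_calls):
    """Concatenate the per-view vectors into one classifier input."""
    return (view_vector(permissions, PERMISSION_VOCAB)
            + view_vector(api_calls, API_VOCAB))

# A hypothetical app requesting SMS permission and calling SMS APIs,
# a pattern typical of SMS-fraud malware families.
vec = multi_view_vector(
    permissions=["SEND_SMS", "INTERNET"],
    api_calls=["sendTextMessage"],
)
```

Because the features come from static properties alone, no execution of the (possibly malicious) app is needed to produce the vector.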
The most serious risk to network security can arise from a zero-day attack. Zero-day attacks are challenging to identify as they exhibit unseen behavior. Intrusion detection systems (IDS) have gained considerable attention as an effective tool for detecting such attacks. IDSs are deployed in network systems to monitor the network and to detect any potential threats. Recently, many Machine Learning (ML) and Deep Learning (DL) techniques have been employed in intrusion detection systems, and it has been found that these techniques can detect zero-day attacks efficiently. This paper provides an overview of the background, importance, and different types of ML and DL techniques adopted for detecting zero-day attacks. It then conducts a comprehensive review of recent ML and DL techniques for detecting zero-day attacks and discusses the associated issues. Further, we analyze the results and highlight the research challenges and future scope for improving ML and DL approaches to zero-day attack detection.
Authored by Nowsheen Mearaj, Arif Wani
In this paper, we propose a novel approach for detecting zero-day attacks on networked autonomous systems (AS). The proposed approach combines CNN and LSTM algorithms to offer efficient and precise detection of zero-day attacks. We evaluated the proposed approach’s performance against various ML models using a real-world dataset. The experimental results demonstrate the effectiveness of the proposed approach in detecting zero-day attacks in networked AS, achieving better accuracy and detection probability than other ML models.
Authored by Hassan Alami, Danda Rawat
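The CNN-then-LSTM pipeline can be illustrated with a library-free toy: a 1-D convolution extracts local patterns from a traffic sequence, and a simple recurrent accumulation summarizes them over time. The kernel, the decay rule, and the data are all made up; a real model would learn both stages end to end with a deep learning framework.

```python
import math

# Toy CNN→recurrent sketch (hypothetical weights, not the paper's
# model): convolution finds local changes, recurrence summarizes them.

def conv1d(seq, kernel):
    """Valid 1-D convolution of a sequence with a small kernel."""
    k = len(kernel)
    return [sum(seq[i + j] * kernel[j] for j in range(k))
            for i in range(len(seq) - k + 1)]

def recurrent_summary(features, decay=0.5):
    """Toy stand-in for an LSTM: exponentially decayed running state."""
    h = 0.0
    for f in features:
        h = decay * h + (1 - decay) * math.tanh(f)
    return h

def score(seq, kernel):
    return recurrent_summary(conv1d(seq, kernel))

# An edge-detecting kernel reacts to sudden jumps in a traffic statistic;
# the steady benign trace produces no response at all.
kernel = [-1.0, 1.0]
benign = [1.0, 1.0, 1.0, 1.0, 1.0]
attack = [1.0, 1.0, 9.0, 9.0, 9.0]
```

The division of labor is the point: convolution captures short-range structure cheaply, and the recurrent stage carries that evidence across the whole sequence.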
Towards Incorporating a Possibility of Zero-day Attacks Into Security Risk Metrics: Work in Progress
This paper reports on work in progress on incorporating the possibility of zero-day attacks into security risk metrics. System security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. We propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some base likelihoods of zero-day exploits, robust risk metrics assume the worst-case Probabilistic or Bayesian AG scenario, allowing for a controlled deviation of the actual likelihoods of zero-day exploits from their base values. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. These robust risk metrics interpolate between the corresponding probabilistic or Bayesian AG model on the one hand and a purely antagonistic game-theoretic model on the other. The popular k-zero-day security metric is a particular case of the proposed metric.
Authored by Vladimir Marbukh
This paper reports on work in progress on security metrics combining the risks of known and zero-day attacks. We assume that system security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits and the impact of successful attacks is quantified by a system loss function. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. After averaging the system loss function over the likelihoods of known exploits, we propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some prior likelihoods of zero-day exploits, robust risk metrics are identified with the worst-case Bayesian AG scenario subject to a controlled deviation of the actual likelihoods of zero-day exploits from their priors. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. We argue that the proposed risk metric quantifies the potential benefits of system configuration diversification, such as Moving Target Defense, for mitigating the system/attacker information asymmetry.
Authored by Vladimir Marbukh
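The robust metric described in the two abstracts above can be stated compactly; the notation below is illustrative rather than the papers' exact formulation. The metric is the worst-case expected loss over likelihood vectors allowed to deviate from the base (prior) values by a bounded amount:

```latex
R_\varepsilon \;=\; \max_{q \,\in\, \mathcal{Q}_\varepsilon(p)} \; \mathbb{E}_{q}\!\left[\, L \,\right],
\qquad
\mathcal{Q}_\varepsilon(p) \;=\; \{\, q \;:\; D(q \,\|\, p) \le \varepsilon \,\},
```

where p denotes the base likelihoods of zero-day exploits, L is the system loss due to a zero-day attack, and D is some deviation measure with budget ε. Setting ε = 0 recovers the Probabilistic/Bayesian AG model, while ε → ∞ yields the purely antagonistic game-theoretic model, matching the interpolation the abstracts describe.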
This paper provides an end-to-end solution to defend against known microarchitectural attacks such as speculative execution attacks, fault-injection attacks, covert and side channel attacks, and unknown or evasive versions of these attacks. Current defenses are attack-specific and can have unacceptably high performance overhead. We propose an approach that reduces the overhead of state-of-the-art defenses by over 95%, by applying defenses only when attacks are detected. Many currently proposed mitigations are not practical for deployment; for example, InvisiSpec has 27% overhead and Fencing has 74% overhead while protecting against only Spectre attacks. Other mitigations carry similar performance penalties. We reduce the overhead for InvisiSpec to 1.26% and for Fencing to 3.45%, offering performance and security not only for Spectre attacks but also for other known transient attacks, including the dangerous class of LVI and Rowhammer attacks, as well as covering a large set of future evasive and zero-day attacks. Critical to our approach is an accurate detector that is not fooled by evasive attacks and that can generalize to novel zero-day attacks. We use a novel generative framework, Evasion Vaccination (EVAX), for training ML models and engineering new security-centric performance counters. EVAX significantly increases sensitivity to detect and classify attacks in time for mitigation to be deployed, with low false positives (4 FPs in every 1M instructions in our experiments). Such performance enables efficient and timely mitigations, enabling the processor to automatically switch between performance and security as needed.
Authored by Samira Ajorpaz, Daniel Moghimi, Jeffrey Collins, Gilles Pokam, Nael Abu-Ghazaleh, Dean Tullsen
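The "defend only when attacks are detected" control loop can be sketched as follows. The counter names, thresholds, and fixed rule below are hypothetical; EVAX instead uses trained ML models over security-centric performance counters. The point of the sketch is the state machine: mitigations stay off, and their overhead is avoided, until the detector fires.

```python
# Toy detection-gated mitigation loop (hypothetical counters and
# thresholds; a fixed rule stands in for EVAX's trained detector).

def detect(counters, cache_miss_limit=0.4, branch_flush_limit=0.3):
    """Flag a sample whose counter ratios look like a transient attack."""
    return (counters["cache_miss_rate"] > cache_miss_limit
            or counters["branch_flush_rate"] > branch_flush_limit)

def step(counters, mitigation_on):
    """Turn mitigations on upon detection; otherwise keep current state."""
    return mitigation_on or detect(counters)

benign = {"cache_miss_rate": 0.05, "branch_flush_rate": 0.02}
attack = {"cache_miss_rate": 0.85, "branch_flush_rate": 0.60}

state = False
state = step(benign, state)   # stays off on benign samples
off_during_benign = state
state = step(attack, state)   # switches on when the attack appears
```

Detection latency and false positives are what make this scheme hard in practice, which is why the abstract emphasizes the detector's accuracy and its 4-FPs-per-1M-instructions rate.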
Advanced metamorphic malware and ransomware use techniques like obfuscation to alter their internal structure with every attack. Therefore, any signature extracted from such an attack, and used to bolster endpoint defense, cannot avert subsequent attacks. As a result, if even a single such malware intrudes into even a single device of an IoT network, it will continue to infect the entire network. Scenarios where an entire network is targeted by a coordinated swarm of such malware are not beyond imagination. Therefore, the IoT era also requires Industry-4.0-grade AI-based solutions against such advanced attacks. But AI-based solutions need a large repository of data extracted from similar attacks to learn robust representations, whereas developing metamorphic malware is a very complex task that requires extreme human ingenuity. Hence, abundant metamorphic malware does not exist to train AI-based defensive solutions. Also, there is currently no system that could generate enough functionality-preserving metamorphic variants of multiple malware to train AI-based defensive systems. Therefore, to this end, we design and develop a novel system, named X-Swarm. X-Swarm uses deep policy-based adversarial reinforcement learning to generate swarms of metamorphic instances of any malware by obfuscating them at the opcode level and ensuring that they can evade even capable, adversarial-attack-immune endpoint defense systems.
Authored by Mohit Sewak, Sanjay Sahay, Hemant Rathore
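The opcode-level obfuscation loop can be illustrated with a toy. The substitution table, the signature detector, and the random search below are all hypothetical stand-ins: X-Swarm learns its obfuscation policy with deep adversarial reinforcement learning rather than random mutation, and works against real detectors rather than substring signatures. The sketch only shows why semantics-preserving opcode substitution defeats exact signatures.

```python
import random

# Toy metamorphism sketch: replace opcodes with (notionally) equivalent
# sequences until a substring signature no longer matches.

SUBSTITUTIONS = {
    "mov": ["push_pop"],      # mov r,v  ~  push v; pop r
    "add": ["sub_neg"],       # add r,v  ~  sub r,-v
}

def signature_detected(opcodes, signature):
    """An exact-match detector over the flattened opcode sequence."""
    return signature in " ".join(opcodes)

def mutate(opcodes, rng):
    """Randomly substitute some opcodes with equivalent forms."""
    out = []
    for op in opcodes:
        alts = SUBSTITUTIONS.get(op)
        if alts and rng.random() < 0.5:
            out.append(rng.choice(alts))
        else:
            out.append(op)
    return out

def evade(opcodes, signature, tries=100, seed=0):
    """Random search for a variant the signature detector misses."""
    rng = random.Random(seed)
    for _ in range(tries):
        variant = mutate(opcodes, rng)
        if not signature_detected(variant, signature):
            return variant
    return None

malware = ["mov", "add", "jmp"]
variant = evade(malware, signature="mov add")
```

Replacing the random search with a learned policy that is rewarded for evasion is, at a very high level, the step from this toy to the adversarial RL formulation the abstract describes.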