The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. Meanwhile, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
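As a rough, self-contained illustration of how game theory and reinforcement learning can yield adaptive defense policies without prior threat knowledge, the sketch below lets a defender agent learn a best response in a toy attacker-defender matrix game. The payoff matrix, the attacker's fixed mixed strategy, and the epsilon-greedy value updates are illustrative assumptions and are far simpler than the multi-agent setting envisioned in the paper.

```python
# Toy attacker-defender matrix game: a defender learns which defensive action
# pays off best against an unknown attacker strategy, purely from experience.
# All numbers here are illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(1)

# defender actions: 0 = monitor sensors, 1 = deploy decoy, 2 = isolate segment
# attacker actions: 0 = spoof sensor,    1 = probe network, 2 = inject command
# payoff[d, a] = defender reward when defense d meets attack a
payoff = np.array([[ 1.0, -1.0, -2.0],
                   [-1.0,  2.0, -1.0],
                   [-2.0, -1.0,  2.0]])
attacker_mix = np.array([0.2, 0.3, 0.5])    # hidden from the defender

q = np.zeros(3)                             # defender's action-value estimates
eps, lr = 0.1, 0.05                         # exploration rate, learning rate
for _ in range(5000):
    d = rng.integers(3) if rng.random() < eps else int(q.argmax())
    a = rng.choice(3, p=attacker_mix)
    q[d] += lr * (payoff[d, a] - q[d])      # incremental value update

print("learned defender values:", np.round(q, 2))
print("learned best response:", int(q.argmax()))   # tends toward action 2
```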
Machine-learning-based approaches have emerged as viable solutions for automatic detection of container-related cyber attacks. Choosing the best anomaly detection algorithms to identify such cyber attacks can be difficult in practice, and it becomes even more difficult for zero-day attacks for which no prior attack data has been labeled. In this paper, we aim to address this issue by adopting an ensemble learning strategy: a combination of different base anomaly detectors built using conventional machine learning algorithms. This learning strategy provides highly accurate zero-day container attack detection. We first architect a testbed to facilitate data collection and storage, model training, and inference. We then perform two case studies of cyber attacks. We show that, for both case studies, although individual base detector performance varies greatly across model types and hyperparameters, ensemble learning consistently produces detection results close to those of the best base anomaly detectors. Additionally, we demonstrate that the detection performance of the resulting ensemble models is on average comparable to that of the best-performing deep learning anomaly detection approaches, but with much higher robustness, shorter training time, and much less training data. This makes the ensemble learning approach very appealing for practical real-time cyber attack detection scenarios with limited training data.
Authored by Shuai Guo, Thanikesavan Sivanthi, Philipp Sommer, Maëlle Kabir-Querrec, Nicolas Coppik, Eshaan Mudgal, Alessandro Rossotti
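As a minimal sketch of the ensemble strategy summarized above, the snippet below combines three off-the-shelf anomaly detectors by rank-averaging their scores; the specific base detectors, the combination rule, and the synthetic data are illustrative assumptions and may differ from the paper's setup.

```python
# Ensemble of unsupervised anomaly detectors: train several base detectors on
# benign data only, then average their (rank-normalized) anomaly scores.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 8))               # benign-only training data
X_test = np.vstack([rng.normal(size=(95, 8)),     # benign test samples
                    rng.normal(5.0, 1.0, size=(5, 8))])  # injected anomalies

detectors = [
    IsolationForest(random_state=0).fit(X_train),
    OneClassSVM(nu=0.05).fit(X_train),
    LocalOutlierFactor(novelty=True).fit(X_train),
]

# score_samples: higher means "more normal". Convert each detector's scores to
# normalized ranks so they are comparable, then average across detectors.
scores = np.column_stack([d.score_samples(X_test) for d in detectors])
ranks = scores.argsort(axis=0).argsort(axis=0) / (len(X_test) - 1)
ensemble_score = ranks.mean(axis=1)

# Flag the lowest-scoring 5% of test records as anomalous.
threshold = np.quantile(ensemble_score, 0.05)
is_attack = ensemble_score <= threshold
print(f"flagged {is_attack.sum()} of {len(X_test)} samples as anomalous")
```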
Deploying Connected and Automated Vehicles (CAVs) on top of 5G and Beyond networks (5GB) makes them vulnerable to increasing vectors of security and privacy attacks. In this context, a wide range of advanced machine/deep learning-based solutions have been designed to accurately detect security attacks. Specifically, supervised learning techniques have been widely applied to train attack detection models. However, the main limitation of such solutions is their inability to detect attacks different from those seen during the training phase, or new attacks, also called zero-day attacks. Moreover, training the detection model requires significant data collection and labeling, which increases the communication overhead and raises privacy concerns. To address these limitations, we propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern. Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic while preserving the CAVs’ privacy and minimizing the communication overhead. An in-depth experiment on a recent network traffic dataset shows that the proposed system achieves a high detection rate while minimizing the false positive rate and the detection delay.
Authored by Abdelaziz Korba, Abdelwahab Boualouache, Bouziane Brik, Rabah Rahal, Yacine Ghamri-Doudane, Sidi Senouci
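The core of the approach above can be sketched as an auto-encoder trained only on benign traffic features, flagging records whose reconstruction error exceeds a benign-derived threshold. The architecture, feature dimension, and 99th-percentile threshold below are illustrative assumptions, and the federated aggregation of per-CAV model updates is omitted for brevity.

```python
# Auto-encoder anomaly detection: learn to reconstruct benign traffic, then
# treat large reconstruction error as a sign of attack traffic.
import torch
import torch.nn as nn

class TrafficAutoEncoder(nn.Module):
    def __init__(self, n_features: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, 8), nn.ReLU())
        self.decoder = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_benign(model, benign, epochs=200, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(benign), benign)
        loss.backward()
        opt.step()
    return model

def reconstruction_error(model, x):
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=1)

benign = torch.randn(1000, 32)            # stand-in for normalized benign flows
model = train_on_benign(TrafficAutoEncoder(), benign)
threshold = reconstruction_error(model, benign).quantile(0.99)
suspect = torch.randn(5, 32) + 3.0        # shifted, attack-like records
print(reconstruction_error(model, suspect) > threshold)
```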
An intrusion detection system (IDS) is a crucial software or hardware application that employs security mechanisms to identify suspicious activity in a system or network. Based on the detection technique, IDSs are divided into two categories: signature-based and anomaly-based. Signature-based detection is said to be incapable of handling zero-day attacks, while anomaly-based detection can handle them. Machine learning techniques play a vital role in the development of IDSs. There are differences of opinion regarding the optimal algorithm for IDS classification in several previous studies, such as Random Forest, J48, and AdaBoost. Therefore, this study aims to evaluate the performance of these three algorithms using the NSL-KDD and UNSW-NB15 datasets used in previous studies. Empirical results demonstrate that utilizing AdaBoost+J48 with NSL-KDD achieves an accuracy of 99.86\%, along with precision, recall, and F1-score rates of 99.9\%. These results surpass previous studies using AdaBoost+Random Tree, which reported an accuracy of 98.45\%. Furthermore, this research explores the effectiveness of anomaly-based systems in dealing with zero-day attacks. Remarkably, the results show that anomaly-based systems perform admirably in such scenarios. For instance, employing Random Forest with the UNSW-NB15 dataset yielded the highest performance, with an accuracy of 99.81\%.
Authored by Nurul Fauzi, Fazmah Yulianto, Hilal Nuha
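A rough scikit-learn analogue of the boosted-tree setup evaluated above is sketched below, with DecisionTreeClassifier standing in for Weka's J48 (C4.5) and synthetic data standing in for the NSL-KDD/UNSW-NB15 features; the hyperparameters are illustrative assumptions.

```python
# Compare AdaBoost over decision trees against a Random Forest baseline on a
# synthetic, imbalanced binary classification task (benign vs. attack).
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=5000, n_features=40, n_informative=12,
                           weights=[0.8, 0.2], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

models = {
    "AdaBoost + decision tree (J48-like)": AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=5),
        n_estimators=100, random_state=42),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=42),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name)
    print(classification_report(y_te, model.predict(X_te), digits=4))
```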
This paper seeks to understand how zero-day vulnerabilities relate to traded markets. People in trade and development are reluctant to talk about zero-day vulnerabilities. Drawing on years of research as well as interviews, most of the public documentation about Cesar Cerrudo's zero-day vulnerabilities is examined, and experts across many computer security domains are consulted about them. In this research, we give a summary of current malware detection technologies and suggest a new zero-day malware detection and prevention model that is capable of efficiently separating malicious from benign zero-day samples. We also discuss various methods used to detect malicious files and present the results obtained from these methods.
Authored by Atharva Deshpande, Isha Patil, Jayesh Bhave, Aum Giri, Nilesh Sable, Gurunath Chavan
Android is the most popular smartphone operating system, with a market share of 68.6\% as of April 2023. Hence, Android is a more tempting target for cybercriminals. This research aims to contribute to the ongoing efforts to enhance the security of Android applications and protect users from the ever-increasing sophistication of malware attacks. Zero-day attacks pose a significant challenge to traditional signature-based malware detection systems, as they exploit previously unknown vulnerabilities. In this context, static analysis is a promising approach for detecting malware in Android applications, leveraging machine learning (ML)- and deep learning (DL)-based models. In this research, we used single features and combinations of features extracted from the static properties of mobile apps as inputs to the ML- and DL-based models, enabling them to learn and differentiate between normal and malicious behavior. We evaluated the performance of those models on a diverse dataset (DREBIN) comprising real-world Android application features, including both benign and zero-day malware samples. We achieved an F1 score of 96\% with the multi-view (DL) model in the zero-day malware scenario, so this research can help mitigate the risk of unknown malware.
Authored by Jabunnesa Sara, Shohrab Hossain
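The static-analysis pipeline can be illustrated with a minimal sketch: DREBIN-style features (permissions, API calls, intents) extracted from each app are vectorized into a binary feature vector and fed to a classifier. The toy feature names and the logistic-regression model below are illustrative assumptions, not the paper's multi-view DL model.

```python
# Vectorize static app features into sparse binary vectors and train a simple
# benign-vs-malware classifier on them.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# (static features, label) pairs; label 1 = malware, 0 = benign (toy data)
apps = [
    ({"perm::SEND_SMS": 1, "api::sendTextMessage": 1, "perm::INTERNET": 1}, 1),
    ({"perm::INTERNET": 1, "api::openConnection": 1}, 0),
    ({"perm::SEND_SMS": 1, "api::getDeviceId": 1, "intent::BOOT_COMPLETED": 1}, 1),
    ({"perm::CAMERA": 1, "api::takePicture": 1}, 0),
]
features, labels = zip(*apps)

vec = DictVectorizer()                       # one binary column per feature name
X = vec.fit_transform(features)
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new_app = {"perm::SEND_SMS": 1, "api::sendTextMessage": 1}
print("malware probability:", clf.predict_proba(vec.transform([new_app]))[0, 1])
```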
In this paper, we propose a novel approach for detecting zero-day attacks on networked autonomous systems (AS). The proposed approach combines CNN and LSTM algorithms to offer efficient and precise detection of zero-day attacks. We evaluated the proposed approach’s performance against various ML models using a real-world dataset. The experimental results demonstrate the effectiveness of the proposed approach in detecting zero-day attacks in networked AS, achieving better accuracy and detection probability than other ML models.
Authored by Hassan Alami, Danda Rawat
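A minimal sketch of a combined CNN + LSTM detector over sliding windows of traffic features, in the spirit of the approach above; the layer sizes, window length, and binary output head are illustrative assumptions.

```python
# CNN extracts local patterns within a traffic window; LSTM models the longer
# temporal dependencies; a linear head outputs an attack/benign logit.
import torch
import torch.nn as nn

class CNNLSTMDetector(nn.Module):
    def __init__(self, n_features: int = 20, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, window, n_features)
        z = self.cnn(x.transpose(1, 2))   # -> (batch, 32, window // 2)
        out, _ = self.lstm(z.transpose(1, 2))
        return self.head(out[:, -1, :]).squeeze(-1)

model = CNNLSTMDetector()
windows = torch.randn(8, 50, 20)           # a batch of 50-step feature windows
print(torch.sigmoid(model(windows)))       # per-window attack probabilities
```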
This paper reports on work in progress on incorporating the possibility of zero-day attacks into security risk metrics. System security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. We propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some base likelihoods of zero-day exploits, robust risk metrics assume a worst-case Probabilistic or Bayesian AG scenario that allows a controlled deviation of the actual likelihoods of zero-day exploits from their base values. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. These robust risk metrics interpolate between the corresponding probabilistic or Bayesian AG model on the one hand and a purely antagonistic game-theoretic model on the other hand. The popular k-zero-day security metric is a particular case of the proposed metric.
Authored by Vladimir Marbukh
This paper reports on work in progress on security metrics combining the risks of known and zero-day attacks. We assume that system security is modelled by an Attack Graph (AG), where attack paths may include a combination of known and zero-day exploits and the impact of successful attacks is quantified by a system loss function. While the set of feasible zero-day exploits and the composition of each attack path are known, only estimates of the likelihoods of known exploits are available. After averaging the system loss function over the likelihoods of known exploits, we propose addressing the uncertain likelihoods of zero-day exploits within the framework of robust risk metrics. Assuming some prior likelihoods of zero-day exploits, robust risk metrics are identified with the worst-case Bayesian AG scenario subject to a controlled deviation of the actual likelihoods of zero-day exploits from their priors. The corresponding worst-case scenario is defined with respect to the system losses due to a zero-day attack. We argue that the proposed risk metric quantifies the potential benefits of system configuration diversification, such as Moving Target Defense, for mitigating the system/attacker information asymmetry.
Authored by Vladimir Marbukh
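One way to write the robust metric described in the two abstracts above, with notation that is illustrative and may differ from the papers', is

\[
R_{\varepsilon} \;=\; \max_{q \,:\, d(q,\, q^{0}) \le \varepsilon} \; \mathbb{E}_{p,\, q}\big[\, L \,\big],
\]

where $p$ collects the estimated likelihoods of known exploits, $q^{0}$ the base (prior) likelihoods of zero-day exploits, $d(\cdot,\cdot)$ a measure of deviation, $\varepsilon$ the allowed deviation budget, and $L$ the system loss over attack paths. Setting $\varepsilon = 0$ recovers the probabilistic/Bayesian AG risk, while letting $\varepsilon \to \infty$ yields the purely antagonistic, game-theoretic worst case.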