Modern network defense can benefit from the use of autonomous systems, offloading tedious and time-consuming work to agents with standard and learning-enabled components. These agents, operating on critical network infrastructure, need to be robust and trustworthy to ensure defense against adaptive cyber-attackers and, simultaneously, provide explanations for their actions and network activity. However, learning-enabled components typically use models, such as deep neural networks, that are not transparent in their high-level decision-making, leading to assurance challenges. Additionally, cyber-defense agents must execute complex long-term defense tasks in a reactive manner, coordinating multiple interdependent subtasks. Behavior trees are known to be successful in modelling interpretable, reactive, and modular agent policies with learning-enabled components. In this paper, we develop an approach to design autonomous cyber defense agents using behavior trees with learning-enabled components, which we refer to as Evolving Behavior Trees (EBTs). We learn the structure of an EBT with a novel abstract cyber environment and optimize learning-enabled components for deployment. The learning-enabled components are optimized for adapting to various cyber-attacks and deploying security mechanisms. The learned EBT structure is evaluated in a simulated cyber environment, where it effectively mitigates threats and enhances network visibility. For deployment, we develop a software architecture for evaluating EBT-based agents in computer network defense scenarios. Our results demonstrate that the EBT-based agent is robust to adaptive cyber-attacks and provides high-level explanations for interpreting its decisions and actions.
Authored by Nicholas Potteiger, Ankita Samaddar, Hunter Bergstrom, Xenofon Koutsoukos
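As a rough, self-contained illustration of the kind of policy structure the preceding abstract describes (not the authors' EBT implementation; the composites, conditions, and action names are hypothetical), a behavior tree can be sketched in Python as fallback and sequence nodes, with one leaf wrapping a learned policy:

# Minimal behavior-tree sketch: fallback/sequence composites with a
# learned-policy leaf. Node and condition names are illustrative only.

class Node:
    def tick(self, blackboard):
        raise NotImplementedError

class Fallback(Node):                       # succeeds if any child succeeds
    def __init__(self, children): self.children = children
    def tick(self, bb):
        return any(child.tick(bb) for child in self.children)

class Sequence(Node):                       # succeeds only if all children succeed
    def __init__(self, children): self.children = children
    def tick(self, bb):
        return all(child.tick(bb) for child in self.children)

class Condition(Node):                      # wraps a boolean check on the blackboard
    def __init__(self, fn): self.fn = fn
    def tick(self, bb): return self.fn(bb)

class LearnedAction(Node):                  # leaf backed by a learned policy
    def __init__(self, policy, actuate): self.policy, self.actuate = policy, actuate
    def tick(self, bb):
        action = self.policy(bb["observation"])   # e.g. a trained neural policy
        return self.actuate(action, bb)

# Hypothetical defense tree: do nothing while healthy, otherwise respond.
tree = Fallback([
    Condition(lambda bb: not bb["alerts"]),                 # nothing to do
    Sequence([
        Condition(lambda bb: bb["host_compromised"]),
        LearnedAction(policy=lambda obs: "isolate_host",    # stand-in for a trained model
                      actuate=lambda a, bb: bb["act"](a)),
    ]),
])

Ticking the root each control cycle yields the reactive, interpretable behavior the abstract emphasizes; the learning-enabled part is confined to the leaf policies.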
Active cyber defense mechanisms are necessary to perform automated, and even autonomous, operations using intelligent agents that defend against modern, sophisticated AI-inspired cyber threats (e.g., ransomware, cryptojacking, deep-fakes). These intelligent agents need to rely on deep learning using mature knowledge and should have the ability to apply this knowledge in a situational and timely manner for a given AI-inspired cyber threat. In this paper, we describe a ‘domain-agnostic knowledge graph-as-a-service’ infrastructure that can support the ability to create/store domain-specific knowledge graphs for intelligent agent Apps to deploy active cyber defense solutions defending real-world applications impacted by AI-inspired cyber threats. Specifically, we present a reference architecture and describe the graph infrastructure tools and intuitive user interfaces required to construct and maintain large-scale knowledge graphs for use in knowledge curation, inference, and interaction across multiple domains (e.g., healthcare, power grids, manufacturing). Moreover, we present a case study to demonstrate how to configure custom sets of knowledge curation pipelines using custom data importers and semantic extract, transform, and load scripts for active cyber defense in a power grid system. Additionally, we show fast querying methods to reach decisions regarding cyberattack detection and to deploy pertinent defenses that outsmart adversaries.
Authored by Prasad Calyam, Mayank Kejriwal, Praveen Rao, Jianlin Cheng, Weichao Wang, Linquan Bai, Sriram Nadendla, Sanjay Madria, Sajal Das, Rohit Chadha, Khaza Hoque, Kannappan Palaniappan, Kiran Neupane, Roshan Neupane, Sankeerth Gandhari, Mukesh Singhal, Lotfi Othmane, Meng Yu, Vijay Anand, Bharat Bhargava, Brett Robertson, Kerk Kee, Patrice Buzzanell, Natalie Bolton, Harsh Taneja
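As a loose sketch of the kind of knowledge-graph querying the abstract above mentions (not the described infrastructure; the entities, relations, and query are invented), a tiny in-memory triple store can answer a detection question such as which assets a known technique threatens and which defenses mitigate it:

# Toy in-memory triple store standing in for a domain knowledge graph.
# Entities, relations, and the query are hypothetical examples.

triples = {
    ("ransomware_x", "targets", "substation_hmi"),
    ("ransomware_x", "mitigated_by", "offline_backup"),
    ("substation_hmi", "part_of", "power_grid"),
    ("cryptojacking", "targets", "build_server"),
}

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given (possibly wildcard) pattern."""
    return [(s, p, o) for (s, p, o) in triples
            if (subject is None or s == subject)
            and (predicate is None or p == predicate)
            and (obj is None or o == obj)]

# Which assets are threatened, and which defenses apply, once a ransomware
# indicator is observed?
threatened = [o for _, _, o in query("ransomware_x", "targets")]
defenses   = [o for _, _, o in query("ransomware_x", "mitigated_by")]
print(threatened, defenses)   # ['substation_hmi'] ['offline_backup']

A production system would use a graph database and semantic ETL pipelines as the abstract describes; the point here is only the pattern-match-then-decide query shape.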
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. On the other hand, relying on manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
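A very small sketch of the attacker-defender learning loop the abstract above alludes to, using independent Q-learning over an invented zero-sum payoff (none of this comes from the paper):

import random

# Hypothetical one-shot game: attacker picks a target, defender picks where
# to place a decoy. The defender is rewarded when the decoy absorbs the attack.
ACTIONS = ["host_A", "host_B", "host_C"]
q_att = {a: 0.0 for a in ACTIONS}   # attacker's action values
q_def = {a: 0.0 for a in ACTIONS}   # defender's action values
ALPHA, EPS = 0.1, 0.2

def pick(q):
    return random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)

for _ in range(5000):
    atk, dfn = pick(q_att), pick(q_def)
    r_def = 1.0 if atk == dfn else -1.0          # decoy caught the attack
    r_att = -r_def                               # zero-sum payoff
    q_def[dfn] += ALPHA * (r_def - q_def[dfn])   # simple bandit-style updates
    q_att[atk] += ALPHA * (r_att - q_att[atk])

print(q_def)   # in this symmetric game no pure strategy dominates, so values stay close

Multi-agent reinforcement learning for ACPD generalizes this loop to sequential states, richer deception actions, and adversarial-learning defenses.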
The Internet of Things (IoT) refers to the growing network of connected physical objects embedded with sensors, software and connectivity. While IoT has potential benefits, it also introduces new cyber security risks. This paper provides an overview of IoT security issues, vulnerabilities, threats, and mitigation strategies. The key vulnerabilities arising from IoT's scale, ubiquity and connectivity include inadequate authentication, lack of encryption, poor software security, and privacy concerns. Common attacks against IoT devices and networks include denial of service, ransomware, man-in-the-middle, and spoofing. An analysis of recent literature highlights emerging attack trends like swarm-based DDoS, IoT botnets, and automated large-scale exploits. Recommended techniques to secure IoT include building security into architecture and design, access control, cryptography, regular patching and upgrades, activity monitoring, incident response plans, and end-user education. Future technologies like blockchain, AI-enabled defense, and post-quantum cryptography can help strengthen IoT security. Additional focus areas include shared threat intelligence, security testing, certification programs, international standards and collaboration between industry, government and academia. A robust multilayered defense combining preventive and detective controls is required to combat rising IoT threats. This paper provides a comprehensive overview of the IoT security landscape and identifies areas for continued research and development.
Authored by Luis Cambosuela, Mandeep Kaur, Rani Astya
A three-party evolutionary game model is constructed by combining cyber deception, the defender (intrusion detection system), and the attacker. Attackers choose attack strategies to gain greater benefits. Cyber deception can induce attackers to attack fake vulnerabilities, so as to capture and analyze the attackers' intentions. Defenders use the captured attacker information to adjust their defense strategies and improve detection of attacks. Cyber deception is used to enhance the defender's choice of strategy, reduce the attacker's profit, enable the defender to play its own superior strategy, reduce node resource overhead, and prolong network survival time. Through the capture and feature extraction of the attacker's attack information, the attack feature database of the intrusion detection system is improved and the probability that the defender detects the attack is increased. According to the simulation results, cyber deception can provide the defender with the attacker's attack information during attack and defense, increase the probability of the defender's successful defense, speed up the convergence of the optimal defense strategy, and slow the convergence of the attacker's optimal strategy. It is proved that cyber deception, as a third-party participant, can effectively help the defender protect the security of the network.
Authored by Shuai Li, Ting Wang, Ji Ma, Weibo Zhao
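Evolutionary game models such as the one in the abstract above are usually analyzed with replicator dynamics. A simplified two-population sketch (the paper models three parties; the payoff numbers below are invented purely for illustration):

# Simplified two-population replicator dynamics. x_def is the fraction of
# defenders deploying deception, x_att the fraction of attackers attacking.
# Payoffs are hypothetical placeholders, not the paper's values.

def step(x_def, x_att, dt=0.01):
    f_deceive  = 2.0 * x_att - 0.5                   # deception pays off when attacks are common
    f_plain    = 1.0 * x_att - 0.2
    f_attack   = 1.5 * (1 - x_def) - 1.0 * x_def     # attacking decoyed systems is costly
    f_noattack = 0.0

    avg_def = x_def * f_deceive + (1 - x_def) * f_plain
    avg_att = x_att * f_attack + (1 - x_att) * f_noattack

    x_def += dt * x_def * (f_deceive - avg_def)      # replicator update: grow strategies
    x_att += dt * x_att * (f_attack - avg_att)       # that beat the population average
    return x_def, x_att

x_d, x_a = 0.5, 0.5
for _ in range(10000):
    x_d, x_a = step(x_d, x_a)
print(round(x_d, 3), round(x_a, 3))

Iterating the update shows how strategy fractions evolve toward (or cycle around) equilibria, which is the kind of convergence behavior the simulation results in the abstract report.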
Cyber threats have been a major issue in the cyber security domain. Every hacker follows a series of cyber-attack stages known as the cyber kill chain. Each stage has its own norms and limitations. For a decade, researchers have focused on detecting these attacks, but mere monitoring tools are no longer optimal solutions. Everything in the computer science field is becoming autonomous, which leads to the idea of the Autonomous Cyber Resilience Defense algorithm designed in this work. Resilience has two aspects: response and recovery. Response requires actions to be performed to mitigate attacks; recovery is patching the flawed code or backdoor vulnerability. Both aspects have traditionally been performed with human assistance in the cybersecurity defense field. This work aims to develop an algorithm based on Reinforcement Learning (RL) with a Convolutional Neural Network (CNN), much closer to the human learning process, for malware images. RL learns through a reward mechanism against every performed attack: every action produces an output that can be classified as a positive or negative reward. To enhance its decision process, a Markov Decision Process (MDP) is integrated with this RL approach. RL impact and induction measures for malware images were evaluated to obtain optimal results. Based on the Malimg malware image dataset, successful automated actions are achieved. The proposed work has shown 98% accuracy in classification, detection, and the deployment of autonomous resilience actions.
Authored by Kainat Rizwan, Mudassar Ahmad, Muhammad Habib
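As a hedged sketch of the CNN-plus-RL pairing described above (assuming PyTorch; the architecture and the response actions are illustrative and not the paper's), a small CNN can score a Malimg-style malware image and an epsilon-greedy layer can pick a resilience action from its output:

import random
import torch
import torch.nn as nn

# Small CNN over 64x64 grayscale malware images, producing Q-values for a
# handful of hypothetical response actions.
ACTIONS = ["quarantine", "kill_process", "patch_and_restart", "ignore"]

class MalwareQNet(nn.Module):
    def __init__(self, n_actions=len(ACTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 16 * 16, n_actions)

    def forward(self, x):
        return self.head(self.features(x))

net = MalwareQNet()
image = torch.rand(1, 1, 64, 64)                    # stand-in for one malware image

# Epsilon-greedy action selection over the Q-values (reward signal and
# training loop omitted in this sketch).
eps = 0.1
q = net(image).squeeze(0)
action = random.randrange(len(ACTIONS)) if random.random() < eps else int(q.argmax())
print(ACTIONS[action])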
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use AI-powered packet-level network traffic analysis and attack patterns to flag an assault. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks. However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
In this research, we evaluate the effectiveness of different MTD techniques on the transformer-based cyber anomaly detection models trained on the KDD Cup’99 Dataset, a publicly available dataset commonly used for evaluating intrusion detection systems. We explore the trade-offs between security and performance when using MTD techniques for cyber anomaly detection and investigate how MTD techniques can be combined with other cybersecurity techniques to improve the overall security of the system. We evaluate their performance using standard metrics such as accuracy and F1 score, as well as measures of robustness against adversarial attacks. Our results show that MTD techniques can significantly improve the security of the anomaly detection model, with some techniques being more effective than others depending on the model architecture. We also find that there are trade-offs between security and performance, with some MTD techniques leading to a reduction in model accuracy or an increase in computation time. However, we demonstrate that these trade-offs can be mitigated by optimizing the MTD parameters for the specific model architecture.
Authored by M. Vubangsi, Auwalu Mubarak, Jameel Yayah, Chadi Altrjman, Manika Manwal, Satya Yadav, Fadi Al-Turjman
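One simple form of the moving-target-defense idea evaluated above is to rotate unpredictably among several differently trained detector variants so an adversary cannot tune evasions against a fixed model. A generic sketch (the detectors and rotation policy are placeholders, not the paper's configuration):

import random
import time

# Moving-target defense by model rotation: several anomaly detectors trained
# with different seeds or architectures; the one answering each query changes
# on a timer. `detectors` would hold real trained models; here they are stubs.

class RotatingDetector:
    def __init__(self, detectors, rotate_every=30.0):
        self.detectors = detectors
        self.rotate_every = rotate_every          # seconds between rotations
        self._pick()

    def _pick(self):
        self.active = random.choice(self.detectors)
        self.expires = time.time() + self.rotate_every

    def score(self, flow_features):
        if time.time() >= self.expires:           # time-based rotation
            self._pick()
        return self.active(flow_features)         # anomaly score in [0, 1]

# Usage with dummy callables standing in for trained transformer variants.
dummy = [lambda x, b=b: min(1.0, sum(x) / (10.0 + b)) for b in range(3)]
ids = RotatingDetector(dummy)
print(ids.score([0.2, 0.9, 1.3]))

The security/performance trade-off the abstract reports shows up here directly: keeping several models live costs memory and compute, and rotation can change accuracy query to query.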
Mission Impact Assessment (MIA) is a critical endeavor for evaluating the performance of mission systems, encompassing intricate elements such as assets, services, tasks, vulnerability, attacks, and defenses. This study introduces an innovative MIA framework that transcends existing methodologies by intricately modeling the interdependencies among these components. Additionally, we integrate hypergame theory to address the strategic dynamics of attack-defense interactions. To illustrate its practicality, we apply the framework to an Internet-of-Things (IoT)-based mission system tailored for accurate, time-sensitive object detection. Rigorous simulation experiments affirm the framework's robustness across a spectrum of scenarios. Our results prove that the developed MIA framework shows a sufficiently high inference accuracy (e.g., 80%) even with a small portion of the training dataset (e.g., 20–50%).
Authored by Ashrith Thukkaraju, Han Yoon, Shou Matsumoto, Jair Ferrari, Donghwan Lee, Myung Ahn, Paulo Costa, Jin-Hee Cho
Current research focuses on the physical security of UAVs, while there are few studies on UAV information security. Moreover, the frequency of security problems caused by UAVs has been increasing in recent years, so research on UAV information security is urgent. To address the high cost of UAV experiments, complex protocol types, and hidden security problems, we design a UAV cyber range and analyze the attack and defense scenarios of three types of honeypot deployment. On this basis, we propose a UAV honeypot active defense strategy based on reinforcement learning. The active defense model of the UAV honeypot is described in four dimensions: state, action, reward, and strategy. The simulation results show that the UAV honeypot strategy can maximize the capture of attacker data, which has important theoretical significance for research on UAV information security.
Authored by Shangting Miao, Yang Li, Quan Pan
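A bare-bones sketch of the four-dimensional formulation named in the abstract above (state, action, reward, strategy), using tabular Q-learning with invented states, actions, and transitions rather than the paper's model:

import random
from collections import defaultdict

# Hypothetical UAV honeypot MDP: states describe attacker progress, actions
# describe how the honeypot responds, reward measures attacker data captured.
STATES  = ["probing", "engaged", "exfiltrating", "gone"]
ACTIONS = ["expose_fake_telemetry", "delay_response", "reset_honeypot"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = defaultdict(float)                              # Q[(state, action)]

def choose(state):                                  # epsilon-greedy strategy
    if random.random() < EPS:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def env_step(state, action):
    """Toy transition/reward model standing in for the UAV cyber range."""
    if state == "gone":
        return "probing", 0.0
    captured = 1.0 if action == "expose_fake_telemetry" else 0.2
    return random.choice(STATES), captured

state = "probing"
for _ in range(20000):
    action = choose(state)
    nxt, reward = env_step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    state = nxt

print(max(ACTIONS, key=lambda a: Q[("probing", a)]))   # learned strategy when probed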
AssessJet mainly deals with the vulnerability assessment of a website that is passed as input. The process of detecting and classifying security threats is known as vulnerability assessment. Security vulnerabilities can be identified by using appropriate security scanning tools on the back-end. The system produces an extensive report that details the various security threats the particular website is likely to face, and the report is generated in such a way that the client can easily understand it. Using AssessJet, bugs in websites and web applications, including those under development, can be identified.
Authored by J Periasamy, Dakiniswari V, Tapasya K
The energy revolution is primarily driven by the adoption of advanced communication technologies that allow for the digitization of power grids. With the confluence of Information Technology (IT) and Operational Technology (OT), energy systems are entering the larger world of Cyber-Physical Systems (CPS). Cyber threats are expected to grow as the attack surface expands, posing a significant operational risk to any cyber-physical system, including the power grid. Substations are the electricity transmission systems’ most critical assets. Substation outages caused by cyber-attacks produce widespread power outages impacting thousands of consumers. To plan and prepare for such rare yet high-impact occurrences, this paper proposes an integrated defense-in-depth framework for power transmission systems to reduce the risk of cyber-induced substation failures. The inherent resilience of physical power systems is used to assess the impact of cyber-attacks on critical substations. The presented approach integrates the physical implications of substation failures with cyber vulnerabilities to analyze cyber-physical risks holistically.
Authored by Kush Khanna, Gelli Ravikumar, Manimaran Govindarasu
Unlike traditional defense concepts, active defense is an asymmetric defense concept. It can not only identify potential threats in advance and nip them in the bud but also increase the attack cost of unknown threats by using change, interference, deception, or other means. Although active defense can reverse the asymmetric situation between attacks and defenses, current active defense technologies have two shortcomings: (i) they mainly aim at detecting attacks and increasing the cost of attacks without addressing the underlying problem; and (ii) they have problems such as high deployment costs and compromised system operational efficiency. This paper proposes an active defense architecture based on trap vulnerabilities, with vulnerabilities at its core, and introduces its design concept and specific implementation scheme. We deploy “traps” in the system to lure and find attackers while combining built-in detection, rejection, and traceback mechanisms to protect the system and trace the source of the attack.
Authored by Quan Hong, Yang Zhao, Jian Chang, Yuxin Du, Jun Li, Lidong Zhai
Dynamic Infrastructural Distributed Denial of Service (I-DDoS) attacks constantly change attack vectors to congest core backhaul links and disrupt critical network availability while evading end-system defenses. To effectively counter these highly dynamic attacks, defense mechanisms need to exhibit adaptive decision strategies for real-time mitigation. This paper presents a novel Autonomous DDoS Defense framework that employs model-based reinforcement agents. The framework continuously learns attack strategies, predicts attack actions, and dynamically determines the optimal composition of defense tactics such as filtering, limiting, and rerouting for flow diversion. Our contributions include extending the underlying formulation of the Markov Decision Process (MDP) to address simultaneous DDoS attack and defense behavior, and accounting for environmental uncertainties. We also propose a fine-grained action mitigation approach robust to classification inaccuracies in Intrusion Detection Systems (IDS). Additionally, our reinforcement learning model demonstrates resilience against evasion and deceptive attacks. Evaluation experiments using real-world and simulated DDoS traces demonstrate that our autonomous defense framework ensures the delivery of approximately 96–98% of benign traffic despite the diverse range of attack strategies.
Authored by Ashutosh Dutta, Ehab Al-Shaer, Samrat Chatterjee, Qi Duan
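To make the "composition of defense tactics" idea above concrete, a minimal sketch (the tactic set, scoring numbers, and utility are invented; the paper's model-based RL agent is far richer) might rank combinations of filtering, rate limiting, and rerouting against a predicted attack intensity:

from itertools import combinations

# Toy scoring of defense-tactic compositions against a predicted attack.
# All numbers are illustrative placeholders; a learning agent would estimate them.
TACTICS = {
    "filter":     {"blocks_attack": 0.7, "benign_loss": 0.03},
    "rate_limit": {"blocks_attack": 0.4, "benign_loss": 0.05},
    "reroute":    {"blocks_attack": 0.5, "benign_loss": 0.02},
}

def value(combo, attack_intensity):
    blocked = 1.0
    benign_kept = 1.0
    for t in combo:
        blocked *= (1.0 - TACTICS[t]["blocks_attack"])       # independent-blocking assumption
        benign_kept *= (1.0 - TACTICS[t]["benign_loss"])
    mitigated = 1.0 - blocked
    return attack_intensity * mitigated + benign_kept         # crude utility trade-off

best = max(
    (c for r in range(1, len(TACTICS) + 1) for c in combinations(TACTICS, r)),
    key=lambda c: value(c, attack_intensity=2.0),
)
print(best)

The benign_kept term is what the abstract's 96-98% benign-traffic delivery figure measures in practice: every added tactic mitigates more attack volume but risks dropping more legitimate flows.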
In coalition military operations, secure and effective information sharing is vital to the success of the mission. Protected Core Networking (PCN) provides a way for allied nations to securely interconnect their networks to facilitate the sharing of data. PCN, and military networks in general, face unique security challenges. Heterogeneous links and devices are deployed in hostile environments, while motivated adversaries launch cyberattacks at ever-increasing pace, volume, and sophistication. Humans cannot defend these systems and networks, not only because the volume of cyber events is too great, but also because there are not enough cyber defenders situated at the tactical edge. Thus, autonomous, machine-speed cyber defense capabilities are needed to protect mission-critical information systems from cyberattacks and system failures. This paper discusses the motivation for adding autonomous cyber defense capabilities to PCN and outlines a path toward implementing these capabilities. We propose to leverage existing reference architectures, frameworks, and enabling technologies, in order to adapt autonomous cyber defense concepts to the PCN context. We highlight expected challenges of implementing autonomous cyber defense agents for PCN, including: defining the state space and action space that will be necessary for monitoring and for generating recovery plans; implementing a suite of models, sensors, actuators, and agents specific to the PCN context; and designing metrics and experiments to measure the efficacy of such a system.
Authored by Alexander Velazquez, Joseph Mathews, Roberto Lopes, Tracy Braun, Frederica Free-Nelson
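The first challenge listed in the abstract above, defining a state space and action space for monitoring and recovery planning, could start from a skeleton like the following (field and action names are hypothetical placeholders, not part of any PCN specification):

from dataclasses import dataclass, field
from enum import Enum, auto

# Skeleton state/action definitions for an autonomous defense agent on a
# protected-core network segment. All names are illustrative.

class LinkStatus(Enum):
    UP = auto()
    DEGRADED = auto()
    DOWN = auto()

@dataclass
class CoreState:
    link_status: dict[str, LinkStatus] = field(default_factory=dict)  # per backhaul link
    active_alerts: list[str] = field(default_factory=list)            # open IDS alert ids
    isolated_nodes: set[str] = field(default_factory=set)

class Action(Enum):
    NO_OP = auto()
    ISOLATE_NODE = auto()
    REROUTE_TRAFFIC = auto()
    RESTORE_FROM_SNAPSHOT = auto()

def recovery_plan(state: CoreState) -> list[Action]:
    """Trivial rule standing in for a learned or planned recovery policy."""
    plan = []
    if state.active_alerts:
        plan.append(Action.ISOLATE_NODE)
    if any(s is LinkStatus.DOWN for s in state.link_status.values()):
        plan.append(Action.REROUTE_TRAFFIC)
    return plan or [Action.NO_OP]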
This paper discusses the design and implementation of Autonomous Cyber Defense (ACD) agents for Protected Core Networking (PCN). Our solution includes two types of specialized, complementary agents placed in different parts of the network. One type of agent, ACD-Core, is deployed within the protected core segment of a particular nation and can monitor and act in the physical and IP layers. The other, ACD-CC, is deployed within a colored cloud and can monitor and act in the transport and application layers. We analyze the threat landscape and identify possible uses and misuses of these agents. Our work is part of an ongoing collaboration between two NATO research task groups, IST-162 and IST-196. The goal of this collaboration is to detail the design and roadmap for implementing ACD agents for PCN and to create a virtual lab for related experimentation and validation. Our vision is that ACD will contribute to improving the cybersecurity of military networks, protecting them against evolving cyber threats, and ensuring connectivity at the tactical edge.
Authored by Alexander Velazquez, Roberto Lopes, Adrien Bécue, Johannes Loevenich, Paulo Rettore, Konrad Wrona