Active cyber defense mechanisms are necessary to perform automated, and even autonomous, operations using intelligent agents that defend against modern, sophisticated AI-inspired cyber threats (e.g., ransomware, cryptojacking, deepfakes). These intelligent agents need to rely on deep learning using mature knowledge and should have the ability to apply this knowledge in a situational and timely manner for a given AI-inspired cyber threat. In this paper, we describe a ‘domain-agnostic knowledge graph-as-a-service’ infrastructure that supports creating and storing domain-specific knowledge graphs for intelligent agent Apps to deploy active cyber defense solutions that defend real-world applications impacted by AI-inspired cyber threats. Specifically, we present a reference architecture and describe the graph infrastructure tools and intuitive user interfaces required to construct and maintain large-scale knowledge graphs for use in knowledge curation, inference, and interaction across multiple domains (e.g., healthcare, power grids, manufacturing). Moreover, we present a case study demonstrating how to configure custom sets of knowledge curation pipelines using custom data importers and semantic extract, transform, and load scripts for active cyber defense in a power grid system. Additionally, we show fast querying methods for reaching cyberattack detection decisions so that pertinent defenses can be deployed to outsmart adversaries.
Authored by Prasad Calyam, Mayank Kejriwal, Praveen Rao, Jianlin Cheng, Weichao Wang, Linquan Bai, Sriram Nadendla, Sanjay Madria, Sajal Das, Rohit Chadha, Khaza Hoque, Kannappan Palaniappan, Kiran Neupane, Roshan Neupane, Sankeerth Gandhari, Mukesh Singhal, Lotfi Othmane, Meng Yu, Vijay Anand, Bharat Bhargava, Brett Robertson, Kerk Kee, Patrice Buzzanell, Natalie Bolton, Harsh Taneja
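As a rough illustration of the kind of graph query the abstract above describes, the following is a minimal sketch in Python, not the authors' infrastructure: a toy domain-specific knowledge graph for a power grid is built with networkx and queried to reach a fast detection decision. The node names, edge relations, and the "kind" attribute are illustrative assumptions.

```python
# Toy knowledge graph: curated facts that a (hypothetical) semantic ETL
# pipeline might load, then a reachability query for threat detection.
import networkx as nx

kg = nx.DiGraph()
kg.add_edge("substation_7", "scada_gateway", relation="controlled_by")
kg.add_edge("scada_gateway", "CVE-2023-0001", relation="vulnerable_to")
kg.add_edge("CVE-2023-0001", "ransomware_X", relation="exploited_by")
kg.add_node("ransomware_X", kind="threat")

def threats_affecting(asset):
    """Return threat nodes reachable from the asset via curated relations."""
    return [n for n in nx.descendants(kg, asset)
            if kg.nodes[n].get("kind") == "threat"]

print(threats_affecting("substation_7"))   # -> ['ransomware_X']
```

In a real deployment the query would run against a large, continuously curated graph; the sketch only shows the shape of a reachability-style detection query.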
The last decade has shown that networked cyber-physical systems (NCPS) are the future of critical infrastructure such as transportation systems and energy production. However, they have introduced an uncharted territory of security vulnerabilities and a wider attack surface, mainly due to network openness and the deeply integrated physical and cyber spaces. At the same time, manual analysis of intrusion detection alarms might be effective in stopping run-of-the-mill automated probes but remains useless against the growing number of targeted, persistent, and often AI-enabled attacks on large-scale NCPS. Hence, there is a pressing need for new research directions to provide advanced protection. This paper introduces a novel security paradigm for emerging NCPS, namely Autonomous Cyber-Physical Defense (ACPD). We lay out the theoretical foundations and describe the methods for building autonomous and stealthy cyber-physical defense agents that are able to dynamically hunt, detect, and respond to intelligent and sophisticated adversaries in real time without human intervention. By leveraging the power of game theory and multi-agent reinforcement learning, these self-learning agents will be able to deploy complex cyber-physical deception scenarios on the fly, generate optimal and adaptive security policies without prior knowledge of potential threats, and defend themselves against adversarial learning. Nonetheless, serious challenges including trustworthiness, scalability, and transfer learning are yet to be addressed for these autonomous agents to become the next-generation tools of cyber-physical defense.
Authored by Talal Halabi, Mohammad Zulkernine
The rise in autonomous Unmanned Aerial Vehicles (UAVs) for objectives requiring long-term navigation in diverse environments is attributed to their compact, agile, and accessible nature. Specifically, problems exploring dynamic obstacle and collision avoidance are of increasing interest as UAVs become more popular for tasks such as transportation of goods, formation control, and search and rescue routines. Prioritizing safety in the design of autonomous UAVs is crucial to prevent costly collisions that endanger pedestrians, mission success, and property. Safety must be ensured in these systems, whose behavior emerges from multiple software components, including learning-enabled components. Learning-enabled components, optimized through machine learning (ML) or reinforcement learning (RL), require adherence to safety constraints while interacting with the environment during training and deployment, as well as adaptation to new, unknown environments. In this paper, we safeguard autonomous UAV navigation by designing agents based on behavior trees with learning-enabled components, referred to as Evolving Behavior Trees (EBTs). We learn the structure of EBTs with explicit safety components, optimize learning-enabled components with safe hierarchical RL, and deploy and update specific components for transfer to unknown environments. Safe and successful navigation is evaluated using a realistic UAV simulation environment. The results demonstrate the design of an explainable learned EBT structure, incurring near-zero collisions during training and deployment, with safe, time-efficient transfer to an unknown environment.
Authored by Nicholas Potteiger, Xenofon Koutsoukos
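To make the behavior-tree idea above concrete, here is a minimal sketch of a tree with an explicit safety component gating a learned policy. The node types, blackboard keys, and safety check are illustrative assumptions, not the authors' EBT implementation or learning procedure.

```python
# Minimal behavior-tree sketch: a Sequence first checks a safety condition,
# then executes the action produced by a (stubbed) learned policy.
SUCCESS, FAILURE = "SUCCESS", "FAILURE"

class Condition:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        return SUCCESS if self.fn(bb) else FAILURE

class Action:
    def __init__(self, fn):
        self.fn = fn
    def tick(self, bb):
        self.fn(bb)
        return SUCCESS

class Sequence:
    """Ticks children in order; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self, bb):
        for child in self.children:
            if child.tick(bb) == FAILURE:
                return FAILURE
        return SUCCESS

# Safety gate: only run the learned policy if no obstacle is too close.
tree = Sequence(
    Condition(lambda bb: bb["min_obstacle_dist"] > bb["safety_radius"]),
    Action(lambda bb: bb.update(action=bb["learned_policy"](bb["state"]))),
)

bb = {"min_obstacle_dist": 3.0, "safety_radius": 1.0,
      "state": None, "learned_policy": lambda s: "advance"}
print(tree.tick(bb), bb.get("action"))   # SUCCESS advance
```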
The Internet of Things (IoT) refers to the growing network of connected physical objects embedded with sensors, software, and connectivity. While IoT has potential benefits, it also introduces new cyber security risks. This paper provides an overview of IoT security issues, vulnerabilities, threats, and mitigation strategies. The key vulnerabilities arising from IoT's scale, ubiquity, and connectivity include inadequate authentication, lack of encryption, poor software security, and privacy concerns. Common attacks against IoT devices and networks include denial of service, ransomware, man-in-the-middle, and spoofing. An analysis of recent literature highlights emerging attack trends like swarm-based DDoS, IoT botnets, and automated large-scale exploits. Recommended techniques to secure IoT include building security into architecture and design, access control, cryptography, regular patching and upgrades, activity monitoring, incident response plans, and end-user education. Future technologies like blockchain, AI-enabled defense, and post-quantum cryptography can help strengthen IoT security. Additional focus areas include shared threat intelligence, security testing, certification programs, international standards, and collaboration between industry, government, and academia. A robust multilayered defense combining preventive and detective controls is required to combat rising IoT threats. This paper provides a comprehensive overview of the IoT security landscape and identifies areas for continued research and development.
Authored by Luis Cambosuela, Mandeep Kaur, Rani Astya
A three-party evolutionary game model is constructed by combining cyber deception, the defender (an intrusion detection system), and the attacker. Attackers choose attack strategies to gain greater benefits. Cyber deception can induce attackers to attack fake vulnerabilities, so as to capture and analyze the attackers' intentions. The defenders use the captured attacker information to adjust their defense strategies and improve detection of attacks. Cyber deception thus enhances the defender's choice of strategy, reduces the attacker's profit, enables the defender to play its own superior strategy, reduces node resource overhead, and prolongs network survival time. Through the capture and feature extraction of the attacker's attack information, the attack feature database of the intrusion detection system is improved, and the probability that the defender detects the attack is increased. According to the simulation results, cyber deception can provide the defender with the attacker's attack information during the attack-defense process, increase the probability of a successful defense, speed up the convergence of the optimal defense strategy, and slow down the convergence of the attacker's optimal strategy. This proves that cyber deception, as a third-party participant, can effectively help the defender protect the security of the network.
Authored by Shuai Li, Ting Wang, Ji Ma, Weibo Zhao
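To illustrate the style of evolutionary-game analysis the abstract above relies on, here is a toy replicator-dynamics sketch in Python: the attacker population's share of the "attack" strategy declines as deception feeds captured information into detection. The payoff numbers, detection-gain rule, and two-strategy simplification are illustrative assumptions, not the paper's three-party model.

```python
# Toy replicator dynamics: attack vs. withdraw, with a detection probability
# that grows as deception captures more attacker information.
def replicator_step(x, payoffs, dt=0.01):
    """x: share of strategy 0; payoffs: expected payoff of each strategy."""
    avg = x * payoffs[0] + (1 - x) * payoffs[1]
    return x + dt * x * (payoffs[0] - avg)

x_attack = 0.5          # initial share of attackers choosing "attack"
detect_prob = 0.3       # defender detection probability
for _ in range(2000):
    # Deception feeds captured attack features back into detection (assumed rate).
    detect_prob = min(0.9, detect_prob + 0.0005 * x_attack)
    payoff_attack = (1 - detect_prob) * 5 - detect_prob * 4   # gain vs. penalty
    payoff_withdraw = 0.0
    x_attack = replicator_step(x_attack, (payoff_attack, payoff_withdraw))

print(f"attack share: {x_attack:.3f}, detection prob: {detect_prob:.2f}")
```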
Cyber threats have been a major issue in the cyber security domain. Every hacker follows a series of cyber-attack stages known as the cyber kill chain, and each stage has its own norms and limitations for deployment. For a decade, researchers have focused on detecting these attacks, but mere monitoring tools are no longer optimal solutions. Everything in computer science is becoming autonomous, which leads to the idea of the Autonomous Cyber Resilience Defense algorithm designed in this work. Resilience has two aspects: response and recovery. Response requires actions to be performed to mitigate attacks; recovery is patching the flawed code or backdoor vulnerability. Both aspects have traditionally required human assistance in the cybersecurity defense field. This work aims to develop an algorithm based on Reinforcement Learning (RL) with a Convolutional Neural Network (CNN) for malware images, an approach much closer to the human learning process. RL learns through a reward mechanism for every performed attack: every action produces an output that can be classified as a positive or negative reward. To enhance this decision process, a Markov Decision Process (MDP) is integrated with the RL approach. RL impact and induction measures for malware images were evaluated to obtain optimal results. Successful automated actions are obtained on the Malimg malware image dataset. The proposed work has shown 98% accuracy in classification, detection, and the deployment of autonomous resilience actions.
Authored by Kainat Rizwan, Mudassar Ahmad, Muhammad Habib
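The response loop described above can be sketched, under strong simplifications, as a classifier-driven reward update: a (stubbed) malware-image classifier supplies the state and a tabular RL update learns which resilience action to deploy. The CNN stub, the state/action sets, and the reward rule are assumptions for illustration only, not the authors' implementation.

```python
# Minimal sketch: classifier output as state, tabular one-step RL for response.
import numpy as np

rng = np.random.default_rng(0)
FAMILIES = ["benign", "trojan", "worm"]          # classifier states
ACTIONS = ["monitor", "isolate_host", "patch"]   # resilience responses
Q = np.zeros((len(FAMILIES), len(ACTIONS)))

def classify(image):
    """Stand-in for the CNN: returns an index into FAMILIES."""
    return int(rng.integers(len(FAMILIES)))

def reward(state, action):
    """Positive when the response matches the threat (identity mapping assumed)."""
    return 1.0 if action == state else -1.0

alpha, epsilon = 0.1, 0.2
for _ in range(5000):
    s = classify(image=None)
    a = int(rng.integers(len(ACTIONS))) if rng.random() < epsilon else int(np.argmax(Q[s]))
    Q[s, a] += alpha * (reward(s, a) - Q[s, a])  # one-step (bandit-style) update

print(np.round(Q, 2))
```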
Cybersecurity is an increasingly critical aspect of modern society, with cyber attacks becoming more sophisticated and frequent. Artificial intelligence (AI) and neural network models have emerged as promising tools for improving cyber defense. This paper explores the potential of AI and neural network models in cybersecurity, focusing on their applications in intrusion detection, malware detection, and vulnerability analysis. Intrusion detection is the process of identifying unauthorized access to a computer system. AI-based intrusion detection systems (IDS) use packet-level network traffic analysis and known intrusion patterns to flag an attack. Neural network models can also be used to improve IDS accuracy by modeling the behavior of legitimate users and detecting anomalies. Malware detection involves identifying malicious software on a computer system. AI-based malware detection systems use machine-learning algorithms to assess the behavior of software and recognize patterns that indicate malicious activity. Neural network models can also serve to hone the precision of malware identification by modeling the behavior of known malware and identifying new variants. Vulnerability analysis involves identifying weaknesses in a computer system that could be exploited by attackers. AI-based vulnerability analysis systems use machine learning algorithms to analyze system configurations and identify potential vulnerabilities. Neural network models can also be used to improve the accuracy of vulnerability analysis by modeling the behavior of known vulnerabilities and identifying new ones. Overall, AI and neural network models have significant potential in cybersecurity. By improving intrusion detection, malware detection, and vulnerability analysis, they can help organizations better defend against cyber attacks. However, these technologies also present challenges, including a lack of understanding of the importance of data in machine learning and the potential for attackers to use AI themselves. As such, careful consideration is necessary when implementing AI and neural network models in cybersecurity.
Authored by D. Sugumaran, Y. John, Jansi C, Kireet Joshi, G. Manikandan, Geethamanikanta Jakka
In this research, we evaluate the effectiveness of different MTD techniques on transformer-based cyber anomaly detection models trained on the KDD Cup’99 Dataset, a publicly available dataset commonly used for evaluating intrusion detection systems. We explore the trade-offs between security and performance when using MTD techniques for cyber anomaly detection and investigate how MTD techniques can be combined with other cybersecurity techniques to improve the overall security of the system. We evaluate their performance using standard metrics such as accuracy and F1 score, as well as measures of robustness against adversarial attacks. Our results show that MTD techniques can significantly improve the security of the anomaly detection model, with some techniques being more effective than others depending on the model architecture. We also find that there are trade-offs between security and performance, with some MTD techniques leading to a reduction in model accuracy or an increase in computation time. However, we demonstrate that these trade-offs can be mitigated by optimizing the MTD parameters for the specific model architecture.
Authored by M. Vubangsi, Auwalu Mubarak, Jameel Yayah, Chadi Altrjman, Manika Manwal, Satya Yadav, Fadi Al-Turjman
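One common moving-target-defense pattern of the kind evaluated above is rotating randomly among a pool of detectors with perturbed parameters, so an adversary cannot tune inputs against a single fixed model. The sketch below illustrates only that pattern with toy threshold detectors; it is not the paper's transformer models or KDD Cup'99 pipeline, and the parameter values are assumptions.

```python
# Minimal MTD sketch: pick a random detector from a randomized pool per query.
import numpy as np

rng = np.random.default_rng(1)

class ThresholdDetector:
    """Flags a record as anomalous if its score exceeds a (randomized) threshold."""
    def __init__(self, weights, threshold):
        self.w, self.t = weights, threshold
    def predict(self, x):
        return float(x @ self.w) > self.t

base_w = np.array([0.5, 1.0, -0.3])
pool = [ThresholdDetector(base_w + rng.normal(0, 0.1, 3), 1.0 + rng.normal(0, 0.2))
        for _ in range(5)]

def mtd_predict(x):
    return pool[rng.integers(len(pool))].predict(x)   # random detector per query

sample = np.array([2.0, 1.5, 0.1])
print([mtd_predict(sample) for _ in range(5)])        # decisions may vary across queries
```

The variance across queries is exactly the security/performance trade-off the abstract discusses: widening the parameter perturbation raises unpredictability but can cost accuracy.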
Mission Impact Assessment (MIA) is a critical endeavor for evaluating the performance of mission systems, encompassing intricate elements such as assets, services, tasks, vulnerabilities, attacks, and defenses. This study introduces an innovative MIA framework that transcends existing methodologies by intricately modeling the interdependencies among these components. Additionally, we integrate hypergame theory to address the strategic dynamics of attack-defense interactions. To illustrate its practicality, we apply the framework to an Internet-of-Things (IoT)-based mission system tailored for accurate, time-sensitive object detection. Rigorous simulation experiments affirm the framework's robustness across a spectrum of scenarios. Our results show that the developed MIA framework achieves sufficiently high inference accuracy (e.g., 80%) even with a small portion of the training dataset (e.g., 20–50%).
Authored by Ashrith Thukkaraju, Han Yoon, Shou Matsumoto, Jair Ferrari, Donghwan Lee, Myung Ahn, Paulo Costa, Jin-Hee Cho
Current research focuses on the physical security of UAVs, while there are few studies on UAV information security. Moreover, the frequency of security problems caused by UAVs has been increasing in recent years, so research on UAV information security is urgent. To address the high cost of UAV experiments, the complexity of protocol types, and hidden security problems, we design a UAV cyber range and analyze the attack and defense scenarios of three types of honeypot deployment. On this basis, we propose a UAV honeypot active defense strategy based on reinforcement learning. The active defense model of the UAV honeypot is described along four dimensions: state, action, reward, and strategy. The simulation results show that the UAV honeypot strategy can maximize the capture of attacker data, which has important theoretical significance for research on UAV information security.
Authored by Shangting Miao, Yang Li, Quan Pan
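The four-dimensional model above (state, action, reward, strategy) maps naturally onto tabular Q-learning. Below is a minimal sketch of that mapping with assumed states (attacker interaction level), actions (honeypot interaction depth), dynamics, and costs; it is not the authors' cyber-range setup.

```python
# Toy Q-learning over a honeypot deployment model: state, action, reward, strategy.
import numpy as np

rng = np.random.default_rng(2)
STATES = ["idle", "scanned", "engaged"]              # attacker interaction level
ACTIONS = ["low_interaction", "high_interaction"]    # honeypot deployment choice
Q = np.zeros((len(STATES), len(ACTIONS)))

def step(state, action):
    """Assumed dynamics: high-interaction honeypots capture more data but cost more."""
    capture = rng.random() < (0.8 if action == 1 else 0.4)
    cost = 0.3 if action == 1 else 0.1
    next_state = min(state + 1, 2) if capture else 0
    return next_state, (1.0 if capture else 0.0) - cost

alpha, gamma, eps = 0.1, 0.9, 0.1
s = 0
for _ in range(10000):
    a = int(rng.integers(2)) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) - Q[s, a])
    s = s2

print(np.round(Q, 2))   # learned deployment strategy per interaction state
```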
AssessJet performs vulnerability assessment of websites, which are passed as input. The process of detecting and sorting security threats is known as vulnerability assessment. Security vulnerabilities can be identified by using appropriate security scanning tools on the back-end. The system produces an extensive report that details the various security threats a particular website is likely to face. The report is generated in such a way that the client can understand it easily. Using AssessJet, bugs in websites and web applications, including those under development, can be identified.
Authored by J Periasamy, Dakiniswari V, Tapasya K
The energy revolution is primarily driven by the adoption of advanced communication technologies that allow for the digitization of power grids. With the confluence of Information Technology (IT) and Operational Technology (OT), energy systems are entering the larger world of Cyber-Physical Systems (CPS). Cyber threats are expected to grow as the attack surface expands, posing a significant operational risk to any cyber-physical system, including the power grid. Substations are the electricity transmission systems’ most critical assets. Substation outages caused by cyber-attacks produce widespread power outages impacting thousands of consumers. To plan and prepare for such rare yet high-impact occurrences, this paper proposes an integrated defense-in-depth framework for power transmission systems to reduce the risk of cyber-induced substation failures. The inherent resilience of physical power systems is used to assess the impact of cyber-attacks on critical substations. The presented approach integrates the physical implications of substation failures with cyber vulnerabilities to analyze cyber-physical risks holistically.
Authored by Kush Khanna, Gelli Ravikumar, Manimaran Govindarasu
Unlike traditional defense concepts, active defense is an asymmetric defense concept. It can not only identify potential threats in advance and nip them in the bud but also increase the attack cost of unknown threats by using change, interference, deception, or other means. Although active defense can reverse the asymmetric situation between attacks and defenses, current active defense technologies have two shortcomings: (i) they mainly aim at detecting attacks and increasing the cost of attacks without addressing the underlying problem; and (ii) they have problems such as high deployment costs and compromised system operational efficiency. This paper proposes an active defense architecture based on trap vulnerability with vulnerability as the core and introduces its design concept and specific implementation scheme. We deploy “traps” in the system to lure and find attackers while combining built-in detection, rejection, and traceback mechanisms to protect the system and trace the source of the attack.
Authored by Quan Hong, Yang Zhao, Jian Chang, Yuxin Du, Jun Li, Lidong Zhai
Trust evaluation and trust establishment play crucial roles in the management of trust within a multi-agent system. When it comes to collaboration systems, trust becomes directly linked to the specific roles performed by agents. The Role-Based Collaboration (RBC) methodology serves as a framework for assigning roles that facilitate agent collaboration. Within this context, the behavior of an agent with respect to a role is referred to as a process role. This research paper introduces a role engine that incorporates a trust establishment algorithm aimed at identifying optimal and reliable process roles. In our study, we define trust as a continuous value ranging from 0 to 1. To optimize trustworthy process roles, we have developed a consensus-based Gaussian Process Factor Graph (GPFG) tool. Our simulations and experiments validate the feasibility and efficiency of our proposed approach with autonomous robots in unsignalized intersections and narrow hallways.
Authored by Behzad Akbari, Haibin Zhu, Ya-Jun Pan
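The trust values in [0, 1] described above suggest a simple consensus-style illustration: agents repeatedly fuse their trust estimates with their neighbors'. The sketch below is a simplified stand-in for the consensus-based GPFG tool, using an assumed communication graph, assumed initial values, and plain averaging rather than Gaussian-process factors.

```python
# Minimal consensus sketch: per-agent trust in [0, 1] averaged with neighbors.
import numpy as np

trust = np.array([0.9, 0.4, 0.7, 0.2])          # initial per-agent trust estimates
neighbors = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}   # assumed topology

for _ in range(50):
    updated = trust.copy()
    for i, nbrs in neighbors.items():
        updated[i] = 0.5 * trust[i] + 0.5 * np.mean(trust[nbrs])
    trust = np.clip(updated, 0.0, 1.0)          # keep trust in [0, 1]

print(np.round(trust, 3))                        # estimates converge toward consensus
```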
Dynamic Infrastructural Distributed Denial of Service (I-DDoS) attacks constantly change attack vectors to congest core backhaul links and disrupt critical network availability while evading end-system defenses. To effectively counter these highly dynamic attacks, defense mechanisms need to exhibit adaptive decision strategies for real-time mitigation. This paper presents a novel Autonomous DDoS Defense framework that employs model-based reinforcement agents. The framework continuously learns attack strategies, predicts attack actions, and dynamically determines the optimal composition of defense tactics such as filtering, limiting, and rerouting for flow diversion. Our contributions include extending the underlying formulation of the Markov Decision Process (MDP) to address simultaneous DDoS attack and defense behavior, and accounting for environmental uncertainties. We also propose a fine-grained action mitigation approach robust to classification inaccuracies in Intrusion Detection Systems (IDS). Additionally, our reinforcement learning model demonstrates resilience against evasion and deceptive attacks. Evaluation experiments using real-world and simulated DDoS traces demonstrate that our autonomous defense framework ensures the delivery of approximately 96–98% of benign traffic despite the diverse range of attack strategies.
Authored by Ashutosh Dutta, Ehab Al-Shaer, Samrat Chatterjee, Qi Duan
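To show how an MDP can select among filtering, rate-limiting, and rerouting as described above, here is a minimal value-iteration sketch. The state space, transition probabilities, and rewards (benign-traffic delivery minus collateral cost) are illustrative assumptions, far simpler than the paper's extended formulation.

```python
# Tiny MDP: choose a defense tactic per attack-intensity state via value iteration.
import numpy as np

STATES = ["low_attack", "high_attack"]
ACTIONS = ["filter", "rate_limit", "reroute"]

# P[a][s, s'] : assumed attack-intensity transitions under each defense action.
P = {0: np.array([[0.9, 0.1], [0.5, 0.5]]),
     1: np.array([[0.8, 0.2], [0.4, 0.6]]),
     2: np.array([[0.7, 0.3], [0.7, 0.3]])}
# R[s, a] : assumed benign-traffic delivery reward minus collateral cost.
R = np.array([[0.95, 0.90, 0.85],
              [0.60, 0.70, 0.80]])

gamma, V = 0.9, np.zeros(2)
for _ in range(200):
    V = np.array([max(R[s, a] + gamma * P[a][s] @ V for a in range(3))
                  for s in range(2)])
policy = [ACTIONS[int(np.argmax([R[s, a] + gamma * P[a][s] @ V for a in range(3)]))]
          for s in range(2)]
print(dict(zip(STATES, policy)))
```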
Intelligent environments rely heavily on the Internet of Things, which can be targeted by malicious attacks. Therefore, the autonomous capabilities of agents in intelligent health-care environments, and the agents' characteristics (accuracy, reliability, efficiency, and responsiveness), should be exploited to devise an autonomous intelligent agent that can safeguard the entire environment from malicious attacks. Hence, this paper contributes to achieving this aim by selecting the eight most valuable features out of the 50 features in the adopted dataset using the Chi-squared test. Then, three well-known machine learning classifiers (i.e., naive Bayes, random forest, and logistic regression) are compared in classifying malicious attacks versus non-attacks in an intelligent health-care environment. The highest classification accuracy was achieved by the random forest classifier (99.92%).
Authored by Abdulkreem Alzahrani
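The feature-selection-plus-classification pipeline described above can be sketched directly with scikit-learn. The sketch uses synthetic stand-in data, since the health-care IoT dataset and its 50 features are not reproduced here; the pipeline shape (Chi-squared selection of 8 features, then the three classifiers) follows the abstract.

```python
# Chi-squared feature selection (k=8) followed by NB, RF, and LR classifiers.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=50, n_informative=10,
                           random_state=0)
X = X - X.min(axis=0)                            # chi2 requires non-negative features
X = SelectKBest(chi2, k=8).fit_transform(X, y)   # keep the 8 most valuable features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for clf in (GaussianNB(), RandomForestClassifier(random_state=0),
            LogisticRegression(max_iter=1000)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, round(acc, 4))
```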
In an environment where terrorist group actions heavily predominate, the study introduces novel modeling tools that are adept at controlling, coordinating, manipulating, detecting, and tracing drones. Modern humans now need to simulate their surroundings in order to boost their comfort and productivity at work. The ability to imitate a person's everyday work has undergone tremendous advancement. A simulation is a representation of how a system or process would work in the real world.
Authored by Soumya V, S. Sujitha, Mohan R, Sharmi Kanaujia, Sanskriti Agarwalla, Shaik Sameer, Tabasum Manzoor
As vehicles increasingly embed digital systems, new security vulnerabilities are also being introduced. Computational constraints make it challenging to add security oversight layers on top of core vehicle systems, especially when the security layers rely on additional deep learning models for anomaly detection. To improve security-aware decision-making for autonomous vehicles (AV), this paper proposes a bi-level security framework. The first security level consists of a one-shot resource allocation game that enables a single vehicle to fend off an attacker by optimizing the configuration of its intrusion prevention system based on risk estimation. The second level relies on a reinforcement learning (RL) environment where an agent is responsible for forming and managing a platoon of vehicles on the fly while also dealing with a potential attacker. We solve the first problem using a minimax algorithm to identify optimal strategies for each player. Then, we train RL agents and analyze their performance in forming security-aware platoons. The trained agents demonstrate superior performance compared to our baseline strategies that do not consider security risk.
Authored by Dominic Phillips, Talal Halabi, Mohammad Zulkernine
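The first security level above is a one-shot game solved with minimax. As a rough illustration, here is a pure-strategy minimax sketch over an assumed defender-loss matrix; the payoff values and the pure-strategy simplification are assumptions, not the paper's formulation or its RL platoon layer.

```python
# One-shot zero-sum game: defender picks an IPS configuration to minimize
# worst-case loss; attacker picks a target to maximize guaranteed loss.
import numpy as np

# Rows: defender configurations; columns: attacker targets (assumed losses).
loss = np.array([[0.2, 0.8, 0.5],
                 [0.6, 0.3, 0.4],
                 [0.5, 0.5, 0.2]])

defender_best = int(np.argmin(loss.max(axis=1)))   # minimize worst-case loss
attacker_best = int(np.argmax(loss.min(axis=0)))   # maximize guaranteed loss
print("defender config:", defender_best, "attacker target:", attacker_best)
print("game value bounds:", loss.min(axis=0).max(), loss.max(axis=1).min())
```

A full treatment would allow mixed strategies (e.g., via linear programming); the sketch only shows the minimax decision rule over pure strategies.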
In coalition military operations, secure and effective information sharing is vital to the success of the mission. Protected Core Networking (PCN) provides a way for allied nations to securely interconnect their networks to facilitate the sharing of data. PCN, and military networks in general, face unique security challenges. Heterogeneous links and devices are deployed in hostile environments, while motivated adversaries launch cyberattacks at ever-increasing pace, volume, and sophistication. Humans cannot defend these systems and networks, not only because the volume of cyber events is too great, but also because there are not enough cyber defenders situated at the tactical edge. Thus, autonomous, machine-speed cyber defense capabilities are needed to protect mission-critical information systems from cyberattacks and system failures. This paper discusses the motivation for adding autonomous cyber defense capabilities to PCN and outlines a path toward implementing these capabilities. We propose to leverage existing reference architectures, frameworks, and enabling technologies, in order to adapt autonomous cyber defense concepts to the PCN context. We highlight expected challenges of implementing autonomous cyber defense agents for PCN, including: defining the state space and action space that will be necessary for monitoring and for generating recovery plans; implementing a suite of models, sensors, actuators, and agents specific to the PCN context; and designing metrics and experiments to measure the efficacy of such a system.
Authored by Alexander Velazquez, Joseph Mathews, Roberto Lopes, Tracy Braun, Frederica Free-Nelson
This paper discusses the design and implementation of Autonomous Cyber Defense (ACD) agents for Protected Core Networking (PCN). Our solution includes two types of specialized, complementary agents placed in different parts of the network. One type of agent, ACD-Core, is deployed within the protected core segment of a particular nation and can monitor and act in the physical and IP layers. The other, ACD-CC, is deployed within a colored cloud and can monitor and act in the transport and application layers. We analyze the threat landscape and identify possible uses and misuses of these agents. Our work is part of an ongoing collaboration between two NATO research task groups, IST-162 and IST-196. The goal of this collaboration is to detail the design and roadmap for implementing ACD agents for PCN and to create a virtual lab for related experimentation and validation. Our vision is that ACD will contribute to improving the cybersecurity of military networks, protecting them against evolving cyber threats, and ensuring connectivity at the tactical edge.
Authored by Alexander Velazquez, Roberto Lopes, Adrien Bécue, Johannes Loevenich, Paulo Rettore, Konrad Wrona
This paper highlights the progress toward securing teleoperating devices over the past ten years of active technology development. The relevance of this issue lies in the widespread development of teleoperating systems, of which only a small number are approved for operations. Anomalous behavior of the operating device, caused by a disruption in the normal functioning of the system modules, can be associated with remote attacks and exploitation of vulnerabilities, which can lead to fatal consequences. There are regulations and mandates from licensing agencies such as the US Food and Drug Administration (FDA) that place restrictions on the architecture and components of teleoperating systems. These requirements are also evolving to meet new cybersecurity threats. In particular, the attention of consumers and safety regulatory agencies is drawn to the threat of compromised hardware modules alongside software insecurity. Recently, detailed security frameworks and protocols for teleoperating devices have appeared. However, intelligent autonomous controllers for analyzing anomalous and suspicious actions in the system remain unaddressed, as do emergency protocols from the cybersecurity point of view. This work provides a new approach to the intraoperative cybersecurity of intelligent teleoperative surgical systems, taking into account modern requirements, for implementation in the Surgical Remote Intelligent Robotic System LevshAI. The proposed principal security model allows a surgeon or an autonomous agent to manage the operation process during various attacks.
Authored by Alexandra Bernadotte
Nowadays, the Internet has been greatly popularized and has penetrated all aspects of people's lives. On campuses, the level of network construction has also been continuously improved. However, campus network security has become an important issue of concern to the whole society. The research in this paper focuses on this topic. Based on the actual situation and characteristics of the campus network, relevant network security technologies are used to develop an optimization model for an intelligent campus network security monitoring system. Through module tests and performance tests of the system optimization model, the results verify that the system can effectively prevent network attacks and monitor campus network security, ensuring the practicability and scientific soundness of the system.
Authored by Yuanyuan Liu, Jingtao Lan
Cooperative autonomous systems are a priority objective for military research & development of unmanned vehicles. Drone teams are one of the most prominent applications of cooperative unmanned systems and represent an ideal solution for providing both autonomous detection and recognition within security surveillance and monitoring. Here, a drone team may be arranged as a mobile and cooperative sensor network, whose coordination mechanism shall ensure real-time reconfiguration and sensing task balancing within the team. This work proposes a dynamic and decentralized mission planner of a drone team to attain a cooperative behaviour concerning detection and recognition for security surveillance. The design of the planner exploits multi-agent task allocation and game theory, and is based on the theory of learning in games to implement a scalable and resilient system. Model-in-the-loop simulation results are reported to validate the effectiveness of the proposed approach.
Authored by Vittorio Castrillo, Ivan Iudice, Domenico Pascarella, Gianpaolo Pigliasco, Angela Vozella
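In the spirit of the learning-in-games approach above, here is a minimal sketch of decentralized task allocation via best-response dynamics: each drone repeatedly switches to the sensing task where it adds the most marginal coverage. The tasks, demands, and utility function are illustrative assumptions rather than the authors' planner.

```python
# Decentralized task balancing: iterated best responses to under-staffed tasks.
import numpy as np

rng = np.random.default_rng(3)
n_drones, tasks = 6, ["detect_north", "detect_south", "recognize_target"]
demand = np.array([2, 2, 2])                 # drones needed per task (assumed)
assign = rng.integers(len(tasks), size=n_drones)

def utility(task, counts):
    """Marginal value of joining a task: high if under-staffed, zero if saturated."""
    return max(int(demand[task] - counts[task]), 0)

for _ in range(20):                          # best-response rounds
    for d in range(n_drones):
        counts = np.bincount(np.delete(assign, d), minlength=len(tasks))
        assign[d] = int(np.argmax([utility(t, counts) for t in range(len(tasks))]))

print({tasks[t]: int((assign == t).sum()) for t in range(len(tasks))})
```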
Multi-agent systems offer the advantage of performing tasks in a distributed and decentralized manner, thereby increasing efficiency and effectiveness. However, building these systems also presents challenges in terms of communication, security, and data integrity. Blockchain technology has the potential to address these challenges and to revolutionize the way that data is stored and shared, by providing a tamper-evident log of events in event-driven distributed multi-agent systems. In this paper, we propose a blockchain-based approach for event-sourcing in such systems, which allows for the reliable and transparent recording of events and state changes. Our approach leverages the decentralized nature of blockchains to provide a tamper-resistant event log, enabling agents to verify the integrity of the data they rely on.
Authored by Ayman Cherif, Youssef Achir, Mohamed Youssfi, Mouhcine Elgarej, Omar Bouattane
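The core tamper-evidence property described above can be illustrated with a simple hash-chained event log: each block commits to its payload and to the previous block's hash, so any alteration is detectable. This is only a local sketch under assumed field names; it omits the consensus and distribution that a real blockchain provides.

```python
# Minimal tamper-evident, hash-chained event log for event sourcing.
import hashlib, json, time

def append_event(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "time": time.time(),
             "payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)

def verify(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if digest != block["hash"] or (i > 0 and block["prev_hash"] != chain[i - 1]["hash"]):
            return False
    return True

chain = []
append_event(chain, {"agent": "A1", "event": "state_changed", "value": 42})
append_event(chain, {"agent": "A2", "event": "task_claimed", "value": "t7"})
print(verify(chain))                      # True
chain[0]["payload"]["value"] = 99         # tampering breaks verification
print(verify(chain))                      # False
```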
In the future, maritime autonomous surface ships (MASS) will be extensively used in maritime cargo transportation. MASS development will necessarily be a gradual process from “constrained autonomy to full autonomy”, so a “man-machine co-driving” control system is a stage that MASS must go through. Switching control rights among the autonomous system, the shore-based operator, and the crew on board thus becomes necessary. At present, there are no standards for the MASS control-right switching mechanism. In order to establish a preliminary MASS control switching mechanism and provide a reference for the safety of “man-machine co-driving”, this paper analyzes autonomous ship guidelines. The study found that, on the basis of existing autonomous ship specifications, an autonomous ship control switching mechanism can be derived from four dimensions: scenario, agent, priority, and process. The research results provide a reference for the establishment of an autonomous ship control switching mechanism and for subsequent research.
Authored by Congrui Mu, Wenjun Zhang, Xiangyu Zhou, Xue Yang