The Internet has become pervasive, reaching into every aspect of people's lives, and on campuses the level of network construction has risen steadily. Campus network security, however, has become an issue of concern for society as a whole. This paper focuses on that topic: based on the actual situation and characteristics of the campus network, it applies relevant network security technologies to develop an optimized model for an intelligent campus network security monitoring system. Module tests and performance tests of the optimized model verify that the proposed system can effectively prevent network attacks and monitor campus network security, confirming the system's practicability and scientific soundness.
Authored by Yuanyuan Liu, Jingtao Lan
Cooperative autonomous systems are a priority objective for military research \& development of unmanned vehicles. Drone teams are one of the most prominent applications of cooperative unmanned systems and represent an ideal solution for providing both autonomous detection and recognition within security surveillance and monitoring. Here, a drone team may be arranged as a mobile and cooperative sensor network, whose coordination mechanism shall ensure real-time reconfiguration and sensing task balancing within the team. This work proposes a dynamic and decentralized mission planner of a drone team to attain a cooperative behaviour concerning detection and recognition for security surveillance. The design of the planner exploits multi-agent task allocation and game theory, and is based on the theory of learning in games to implement a scalable and resilient system. Model-in-the-loop simulation results are reported to validate the effectiveness of the proposed approach.
Authored by Vittorio Castrillo, Ivan Iudice, Domenico Pascarella, Gianpaolo Pigliasco, Angela Vozella
Ensuring security and data integrity in Multi Micro-Agent System Middleware with Blockchain Technology
Multi-agent systems offer the advantage of performing tasks in a distributed and decentralized manner, thereby increasing efficiency and effectiveness. However, building these systems also presents challenges in terms of communication, security, and data integrity. Blockchain technology has the potential to address these challenges and to revolutionize the way that data is stored and shared, by providing a tamper-evident log of events in event-driven distributed multi-agent systems. In this paper, we propose a blockchain-based approach for event-sourcing in such systems, which allows for the reliable and transparent recording of events and state changes. Our approach leverages the decentralized nature of blockchains to provide a tamper-resistant event log, enabling agents to verify the integrity of the data they rely on.
Authored by Ayman Cherif, Youssef Achir, Mohamed Youssfi, Mouhcine Elgarej, Omar Bouattane
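The tamper-evident event log at the heart of such a design can be illustrated with a minimal hash chain. This is a sketch only, with hypothetical names and structure; a real blockchain deployment would add consensus, signatures, and replication across agents:

```python
import hashlib
import json

class EventLog:
    """Append-only, tamper-evident event log: each entry stores the hash
    of the previous entry, so altering any past event breaks the chain."""

    def __init__(self):
        self.entries = []  # list of (event_dict, chain_hash) pairs

    def _hash(self, prev_hash, event):
        # Canonical JSON (sorted keys) so the same event always hashes the same.
        payload = prev_hash + json.dumps(event, sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def append(self, event):
        prev = self.entries[-1][1] if self.entries else "genesis"
        self.entries.append((event, self._hash(prev, event)))

    def verify(self):
        """Recompute the chain; returns True iff no entry was altered."""
        prev = "genesis"
        for event, h in self.entries:
            if self._hash(prev, event) != h:
                return False
            prev = h
        return True
```

Any agent holding the log can re-run `verify()` to check the integrity of the data it relies on, which is the property the abstract describes.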
In the future, maritime autonomous surface ships (MASS) will be extensively used in maritime cargo transportation. MASS development requires a gradual progression from constrained autonomy to full autonomy, so a "man-machine co-driving" control system is a stage that MASS must go through, and switching control among the autonomous system, shore-based operators, and the crew on board becomes necessary. At present, there are no standards for the MASS control-switching mechanism. To establish a preliminary control-switching mechanism and provide a reference for the safety of man-machine co-driving, this paper analyzes existing autonomous-ship guidelines. The study finds that, on the basis of existing autonomous-ship specifications, a control-switching mechanism can be derived along four dimensions: scenario, agent, priority, and process. The results provide a reference for establishing an autonomous-ship control-switching mechanism and for subsequent research.
Authored by Congrui Mu, Wenjun Zhang, Xiangyu Zhou, Xue Yang
Organizations strive to secure their valuable data and minimise potential damage, recognising that critical operations are susceptible to attack. This paper elucidates the concept of proactive cyber threat hunting and proposes a framework to help organisations assess their preparedness against upcoming threats and their probable mitigation plans. Traditional threat detection methods, although widely implemented, often fail to address the evolving landscape of advanced cyber threats; organisations must adopt proactive threat-hunting strategies to safeguard business operations and to identify and mitigate unknown or undetected network threats. Based on a review of the literature, this research proposes a conceptual model that helps an organisation recover from an attack: with a shorter recovery time, the company's financial loss is reduced, and because the attacker has less time to gather data, less confidential information is stolen. Cybersecurity companies use proactive cyber defence strategies to reduce an attacker's time on the network. The frameworks considered include SANS, MITRE, Hunting ELK (HELK), Logstash, the Cyber Kill Chain, the Diamond Model, and the NIST Cybersecurity Framework, each of which supports a proactive approach. The framework helps defensive security teams assess their capability to defend against Advanced Persistent Threats (APTs) and a wide range of attack vectors.
Authored by Mugdha Kulkarni, Dudhia Ashit, Chauhan Chetan
Advanced persistent threat (APT) attacks are among the most serious threats to power system cyber security. The ATT\&CK framework integrates known historical and practical APT attack tactics and techniques into a common language for describing adversary behavior and an abstract knowledge-base framework for attacks. Using the ATT\&CK for ICS framework, this paper surveys the known attack techniques used by malware and hacker groups in cyberattacks on infrastructure, especially power systems; identifies the corresponding mitigations for each attack technique and merges them; and lists the high-frequency, important mitigations for reference. Finally, it proposes a cyber security defense model suitable for ICS, providing a reference for security teams on how to apply ATT\&CK and similar cyberattack frameworks.
Authored by Tengyan Wang, Yuanyuan Ma, Zhipeng Shao, Zheng Xu
The rapid growth of communication networks, coupled with the increasing complexity of cyber threats, necessitates the implementation of proactive measures to protect networks and systems. In this study, we introduce a federated learning-based approach for cyber threat hunting at the endpoint level. The proposed method utilizes the collective intelligence of multiple devices to effectively and confidentially detect attacks on individual machines. A security assessment tool is also developed to emulate the behavior of adversary groups and Advanced Persistent Threat (APT) actors in the network. This tool provides network security experts with the ability to assess their network environment's resilience and aids in generating authentic data derived from diverse threats for use in subsequent stages of the federated learning (FL) model. The results of the experiments demonstrate that the proposed model effectively detects cyber threats on the devices while safeguarding privacy.
Authored by Saeid Sheikhi, Panos Kostakos
Advanced Persistent Threats (APTs) have been a major challenge in securing both Information Technology (IT) and Operational Technology (OT) systems. An APT is a sophisticated attack that masquerades its actions to navigate around defenses, breaches networks, often across multiple network hosts, and evades detection, taking a "low-and-slow" approach over a long period of time. The availability, integrity, and confidentiality of operational cyber-physical system (CPS) state and control are highly dependent on the safety and security measures in place. A multi-stage detection framework termed APT$_{\textrm{DASAC}}$, which detects the different tactics, techniques, and procedures (TTPs) used during the various APT steps, is proposed. Implementation proceeds in three stages: (i) a data input and probing layer, covering data gathering and pre-processing; (ii) a data analysis layer, which applies the core APT$_{\textrm{DASAC}}$ process to learn the behaviour of attack steps from sequence data and to correlate and link the related output; and (iii) a decision layer, where an ensemble probability approach integrates the output and makes the attack prediction. The framework was validated with three different datasets and three case studies. The proposed approach achieved an attack-detection rate of 86.36\% with a loss of 0.32\%, demonstrating that detection techniques that perform well in one domain may not yield equally good results in another. This suggests that the robustness and resilience of an operational system's ability to withstand attack and maintain performance are governed by the safety and security measures in place, which are specific to the system in question.
Authored by Hope Eke, Andrei Petrovski
Advanced Persistent Threats (APTs) have significantly impacted organizations over extended periods with their coordinated and sophisticated cyberattacks. Unlike signature-based tools such as antivirus software and firewalls, which can detect and block other types of malware, APTs exploit zero-day vulnerabilities to generate new variants of undetectable malware. APT adversaries also engage in complex relationships and interactions among network entities, so effective detection requires learning the interactions in network traffic flows among hosts, users, and IP addresses. Traditional deep neural networks, however, often fail to capture the inherent graph structure and overlook crucial contextual information in network traffic flows. To address these issues, this research models APTs as heterogeneous graphs, capturing the diverse features and complex interactions in network flows, and applies a heterogeneous graph transformer (HGT) model to distinguish between benign and malicious network connections. Experimental results show that the HGT model achieves better performance, with 100\% accuracy and accelerated learning time, outperforming homogeneous graph neural network models.
Authored by Kazeem Saheed, Shagufta Henna
Few-shot Multi-domain Knowledge Rearming for Context-aware Defence against Advanced Persistent Threats
Advanced persistent threats (APTs) have novel features such as multi-stage penetration, highly tailored intent, and evasive tactics. APT defense requires fusing multi-dimensional cyber threat intelligence data to identify attack intentions, and conducting efficient knowledge-discovery strategies via data-driven machine learning to recognize entity relationships. However, data-driven machine learning lacks generalization ability on fresh or unknown samples, reducing the accuracy and practicality of the defense model. Moreover, private deployment of these APT defense models in heterogeneous environments and on various network devices requires significant investment in context awareness (such as known attack entities, continuous network states, and current security strategies). In this paper, we propose a few-shot multi-domain knowledge rearming (FMKR) scheme for context-aware defense against APTs. By completing, with meta-learning, multiple small tasks generated from different network domains, FMKR first trains a model with good discrimination and generalization ability for fresh and unknown APT attacks. In each FMKR task, both threat intelligence and local entities are fused into the meta-learning support/query sets to identify possible attack stages. Second, to rearm current security strategies, a fine-tuning-based deployment mechanism is proposed to transfer the learned knowledge into a student model while minimizing the defense cost. Compared with multiple model-replacement strategies, FMKR responds faster to attack behaviors while consuming less scheduling cost. Based on feedback from multiple real users of the Industrial Internet of Things (IIoT) over two months, we demonstrate that the proposed scheme improves the defense satisfaction rate.
Authored by Gaolei Li, Yuanyuan Zhao, Wenqi Wei, Yuchen Liu
Past Advanced Persistent Threat (APT) attacks on Industrial Internet-of-Things (IIoT), such as the 2016 Ukrainian power grid attack and the 2017 Saudi petrochemical plant attack, have shown the disruptive effects of APT campaigns while new IIoT malware continue to be developed by APT groups. Existing APT detection systems have been designed using cyberattack TTPs modelled for enterprise IT networks and leverage specific data sources (e.g., Linux audit logs, Windows event logs) which are not found on ICS devices. In this work, we propose RAPTOR, a system to detect APT campaigns in IIoT. Using cyberattack TTPs modelled for ICS/OT environments and focusing on ‘invariant’ attack phases, RAPTOR detects and correlates various APT attack stages in IIoT leveraging data which can be readily collected from ICS devices/networks (packet traffic traces, IDS alerts). Subsequently, it constructs a high-level APT campaign graph which can be used by cybersecurity analysts towards attack analysis and mitigation. A performance evaluation of RAPTOR’s APT attack-stage detection modules shows high precision and low false positive/negative rates. We also show that RAPTOR is able to construct the APT campaign graph for APT attacks (modelled after real-world attacks on ICS/OT infrastructure) executed on our IIoT testbed.
Authored by Ayush Kumar, Vrizlynn Thing
Detection of Previously Unknown Advanced Persistent Threats Through Visual Analytics with the MASFAD Framework
With the rapid evolution of the Internet and the prevalence of sophisticated adversarial cyber threats, it has become apparent that an equally rapid development of new Situation Awareness techniques is needed. The vast amount of data produced every day by Intrusion Detection Systems, Firewalls, Honeypots and other systems can quickly become impossible for domain experts to analyze. To enhance human-machine interaction, new Visual Analytics systems need to be implemented and tested, bridging the gap between detecting possible malicious activity, identifying it, and taking the necessary measures to stop its propagation. The detection of previously unknown, highly sophisticated Advanced Persistent Threats (APTs) adds a further degree of complexity to this task. In this paper, we discuss the principles inherent to Visual Analytics and propose a new technique for the detection of APT attacks through anomaly and behavior-based analysis. Our ultimate goal is to define sophisticated cyber threats by their defining characteristics and to combine those into a pattern of behavior that can be presented in visual form to be explored and analyzed. This can be achieved through the use of our Multi-Agent System for Advanced Persistent Threat Detection (MASFAD) framework combined with highly detailed and dynamic visualization techniques. This paper was originally presented at the NATO Science and Technology Organization Symposium (ICMCIS) organized by the Information Systems Technology (IST) Panel, IST-200 RSY - the ICMCIS, held in Skopje, North Macedonia, 16–17 May 2023.
Authored by Georgi Nikolov, Wim Mees
As cyber attacks grow in complexity and frequency, cyber threat intelligence (CTI) remains a priority objective for defenders. A critical component of CTI at the strategic level of defensive operations is attack attribution. Attributing an attack to a threat group informs defenders about the adversaries actively engaging them and advances their ability to respond. In this paper, we propose a data-analytic approach to threat attribution using adversary playbooks of tactics, techniques, and procedures (TTPs). Specifically, our approach uses association rule mining on a large real-world CTI dataset to extend known threat TTP playbooks with statistically probable TTPs the adversary may deploy. The benefits are twofold. First, we offer a dataset of learned TTP associations and extended threat playbooks. Second, we show that we can attribute attacks using a weighted Jaccard similarity with 96\% accuracy.
Authored by Kelsie Edie, Cole Mckee, Adam Duby
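The weighted Jaccard attribution step described above can be sketched as follows. The weights and ATT\&CK-style TTP IDs below are illustrative placeholders; the paper's exact weighting scheme is not given in the abstract:

```python
def weighted_jaccard(a, b, weight):
    """Weighted Jaccard similarity between two TTP sets. A weight map can
    down-weight common techniques and up-weight rare, discriminative ones;
    unknown techniques default to weight 1.0."""
    inter = sum(weight.get(t, 1.0) for t in a & b)
    union = sum(weight.get(t, 1.0) for t in a | b)
    return inter / union if union else 0.0

def attribute(observed, playbooks, weight):
    """Attribute an observed TTP set to the threat group whose (extended)
    playbook is most similar under weighted Jaccard."""
    return max(playbooks,
               key=lambda group: weighted_jaccard(observed, playbooks[group], weight))
```

Here `playbooks` would map each threat group to its (association-rule-extended) TTP set, and `observed` is the TTP set extracted from the incident under analysis.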
A Tool for Security Risk Assessment for APT Attacks: using Scenarios, Security Requirements, and Evidence
Advanced Persistent Threat (APT) attacks are complex, employing diverse attack elements and increasingly intelligent techniques. This paper introduces a tool for security risk assessment specifically designed for these attacks. This tool assists security teams in systematically analyzing APT attacks to derive adaptive security requirements for mission-critical target systems. Additionally, the tool facilitates the assessment of security risks, providing a comprehensive understanding of their impact on target systems. By leveraging this tool, security teams can enhance defense strategies, mitigating potential threats and ensuring the security of target systems.
Authored by Sihn-Hye Park, Dongyoon Kim, Seok-Won Lee
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-Enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we need to also consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seek to corrupt data on the IoT devices during local updates or corrupt the model updates); hence, in this article, we propose an anticipatory study for poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks, under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks can decrease accuracy from 94.93\% to 85.98\% with IID data and from 94.18\% to 30.04\% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
Agro-Ledger: Blockchain Based Framework for Transparency and Traceability in Agri-Food Supply Chains
This paper presents a blockchain-based framework for enhancing traceability and transparency in the global agrifood supply chain. By integrating blockchain technology and the Ethereum Virtual Machine (EVM), the framework offers a robust solution to the industry's challenges: each product's journey is securely documented in an unalterable digital ledger accessible to all stakeholders, while real-time Internet of Things (IoT) sensors monitor variables crucial to product quality. With millions afflicted by foodborne diseases, substantial food wastage, and strong consumer demand for transparency, the framework answers a clear need for change. Its data-driven approach not only strengthens consumer confidence and product authenticity but also lays the groundwork for robust sustainability and toxicity assessments, combining blockchain and the EVM to support a sustainable, transparent, and trustworthy agrifood landscape.
Authored by Prasanna Kumar, Bharati Mohan, Akilesh S, Jaikanth Y, Roshan George, Vishal G, Vishnu P, Elakkiya R
A Dimensional Perspective Analysis on the Cybersecurity Risks and Opportunities of ChatGPT-Like Information Systems
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. This paper starts from the risks and events that ChatGPT has recently faced, proposes a framework for analyzing cybersecurity in cyberspace, and envisions adversarial models and systems. It puts forward a new evolutionary relationship between attackers and defenders using ChatGPT to enhance their own capabilities in a changing environment and predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for awareness of the threats posed to artificial intelligence. People are increasingly worried about poisoning attacks, one of the AI-targeted dangers already made public: to fool a classifier at test time, an attacker may "poison" it by altering a portion of the dataset used for training. The poison-resistance strategy presented in this article is novel. The approach uses a recently developed primitive, a keyed nonlinear probability test, to determine whether the training input is consistent with a previously learnt distribution D, even when the odds are stacked against the model. The operation relies on a secret key unknown to the adversary: because the key is kept hidden, an adversary cannot use it to fool the keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
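The flavour of a keyed consistency test can be illustrated as below. This is an illustrative reconstruction, not the paper's exact construction: here a secret key deterministically selects which training points are compared, via a two-sample Kolmogorov-Smirnov statistic, against a trusted reference sample from D, so an adversary who cannot predict the selection cannot tailor a poisoned set to pass the check:

```python
import hashlib
import hmac
import random

def ks_statistic(xs, ys):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between
    the empirical CDFs of the two samples."""
    xs, ys = sorted(xs), sorted(ys)
    d = 0.0
    for v in sorted(set(xs + ys)):
        fx = sum(x <= v for x in xs) / len(xs)
        fy = sum(y <= v for y in ys) / len(ys)
        d = max(d, abs(fx - fy))
    return d

def keyed_consistency_test(key, training_data, reference, frac=0.5, threshold=0.3):
    """Keyed nonparametric consistency check (sketch). The secret key seeds
    the choice of which training points are scrutinised; the threshold and
    fraction are hypothetical parameters."""
    seed = int.from_bytes(hmac.new(key, b"subsample", hashlib.sha256).digest()[:8], "big")
    rng = random.Random(seed)
    subset = rng.sample(training_data, max(1, int(frac * len(training_data))))
    return ks_statistic(subset, reference) <= threshold
```

A dataset drawn from the same distribution as the reference passes, while a substantially shifted (poisoned) dataset yields a large KS statistic and is rejected.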
The adoption of IoT in a multitude of critical infrastructures revolutionizes several sectors, ranging from smart healthcare systems to financial organizations and thermal and nuclear power plants. Yet, the progressive growth of IoT devices in critical infrastructure without considering security risks can damage the privacy, confidentiality, and integrity of both individuals and organizations. To overcome these security threats, we propose an AI and onion routing-based secure architecture for IoT-enabled critical infrastructure. Here, we first employ AI classifiers that separate attack from non-attack IoT data, with attack data discarded from further communication. In addition, the AI classifiers are secured against data poisoning attacks by incorporating an isolation forest algorithm that efficiently detects poisoned data and eradicates it from the dataset's feature space. Only non-attack data is forwarded to the onion routing network, which applies triple encryption to the IoT data. Because the onion routing only processes non-attack data, it is less computationally expensive than other baseline works. Moreover, each onion router is associated with blockchain nodes that store the verifying tokens of the IoT data. The proposed architecture is evaluated using performance parameters such as accuracy, precision, recall, training time, and compromisation rate. In this work, SVM performs best, achieving 97.7\% accuracy.
Authored by Nilesh Jadav, Rajesh Gupta, Sudeep Tanwar
This survey paper provides an overview of the current state of AI attacks and of the risks to AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause significant financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks, and emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey comprehensively analyzes the current state of AI attacks, the risks to AI security and privacy, and detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
AI technology is widely used in different fields due to the effectiveness and accurate results that have been achieved. The diversity of usage attracts many attackers to attack AI systems to reach their goals. One of the most important and powerful attacks launched against AI models is the label-flipping attack. This attack allows the attacker to compromise the integrity of the dataset, where the attacker is capable of degrading the accuracy of ML models or generating specific output that is targeted by the attacker. Therefore, this paper studies the robustness of several Machine Learning models against targeted and non-targeted label-flipping attacks against the dataset during the training phase. Also, it checks the repeatability of the results obtained in the existing literature. The results are observed and explained in the domain of the cyber security paradigm.
Authored by Alanoud Almemari, Raviha Khan, Chan Yeun
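The label-flipping attack studied above can be sketched in a few lines. The flip rate, class names, and random selection below are placeholders, not the paper's experimental setup:

```python
import random

def flip_labels(labels, rate, classes, target=None, seed=0):
    """Label-flipping attack sketch. With target=None (non-targeted), each
    poisoned label becomes a random *different* class, degrading overall
    accuracy; with a target class, poisoned labels are all set to that
    class so the model learns to emit the attacker's chosen output."""
    rng = random.Random(seed)
    poisoned = list(labels)
    idx = rng.sample(range(len(labels)), round(rate * len(labels)))
    for i in idx:
        if target is not None:
            poisoned[i] = target
        else:
            poisoned[i] = rng.choice([c for c in classes if c != labels[i]])
    return poisoned
```

Training a model on the returned list instead of the clean labels is what compromises dataset integrity during the training phase, as the abstract describes.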
Federated learning is a representative distributed AI technique for protecting user privacy and data security: decentralized datasets train machine learning models by sharing model gradients rather than user data. However, while this approach safeguards data from being shared, it also increases the likelihood that servers will be attacked. Federated learning models are sensitive to poisoning attacks, which can effectively threaten the global model when an attacker directly contaminates it by submitting poisoned gradients. In this paper, we propose a federated learning poisoning attack method based on feature selection. Unlike traditional poisoning attacks, it modifies only the important features of the data and ignores the others, which preserves the attack's effectiveness while remaining highly stealthy and able to bypass general defense methods. Experiments demonstrate the feasibility of the method.
Authored by Zhengqi Liu, Ziwei Liu, Xu Yang
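The core idea, perturbing only the most important coordinates of the shared gradient, can be sketched as follows. The importance scores and scaling factor are hypothetical, and the paper's actual feature-selection method may differ:

```python
def poison_gradient(grad, importance, k, scale=-5.0):
    """Feature-selection-based poisoning sketch: scale and sign-flip only
    the k most important coordinates of the gradient before it is shared.
    The untouched coordinates keep the update looking benign to simple
    norm- or distance-based defenses, while the flipped, amplified
    coordinates push the global model away from the optimum."""
    top = sorted(range(len(grad)), key=lambda i: importance[i], reverse=True)[:k]
    return [scale * g if i in top else g for i, g in enumerate(grad)]
```

In a federated round, a malicious client would submit `poison_gradient(local_grad, importance, k)` in place of its honest update.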
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
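The fairness side of such an evaluation can be illustrated by measuring per-subpopulation accuracy and the worst-case gap between groups. This is a generic sketch of the idea, not the paper's specific metric:

```python
def fairness_gap(y_true, y_pred, groups):
    """Per-group accuracy and the worst-case accuracy gap. A defense that
    raises poisoning robustness but widens this gap is trading decision
    fairness for security, which is the trade-off under evaluation."""
    acc = {}
    for g in set(groups):
        idx = [i for i, gg in enumerate(groups) if gg == g]
        acc[g] = sum(y_true[i] == y_pred[i] for i in idx) / len(idx)
    return acc, max(acc.values()) - min(acc.values())
```

Reporting this gap alongside robustness to poisoned samples makes the trade-off between the two metrics explicit.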
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Authored by Cristina Improta
The technology described in this paper allows two or more air-gapped computers with passive speakers to discreetly exchange data while they are in the same room. The solution exploits the audio chip's HDA jack-retasking capability, which enables attached speakers to be switched from output devices to input devices, turning them into microphones. Implementation details, technical background, and the attack model are discussed. Although not designed to function as microphones, the reversed speakers operate effectively in the near-ultrasonic frequency range (18 kHz to 24 kHz). Practical factors in the effective application of the technique are then analysed. The findings have important ramifications for secure data transfer between air-gapped systems and emphasise the necessity of additional security measures to thwart such attacks.
Authored by S Suraj, Meenu Mohan, Suma S
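A covert channel in the near-ultrasonic band like the one described can be illustrated with simple binary frequency-shift keying. The sample rate, tone frequencies, and symbol length below are hypothetical choices within the 18-24 kHz range, not the paper's parameters:

```python
import math

RATE = 48000           # sample rate high enough for near-ultrasonic tones
F0, F1 = 18500, 19500  # hypothetical "0" and "1" tone frequencies (Hz)
SYMBOL = 480           # samples per bit (10 ms at 48 kHz); both tones
                       # fall on exact DFT bins of this window length

def encode(bits):
    """Binary FSK: emit one near-ultrasonic tone per bit."""
    out = []
    for b in bits:
        f = F1 if b else F0
        out += [math.sin(2 * math.pi * f * n / RATE) for n in range(SYMBOL)]
    return out

def goertzel_power(samples, freq):
    """Signal power at a single frequency (Goertzel algorithm)."""
    w = 2 * math.pi * freq / RATE
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def decode(samples):
    """Recover bits by comparing tone power at the two FSK frequencies."""
    bits = []
    for i in range(0, len(samples), SYMBOL):
        sym = samples[i:i + SYMBOL]
        bits.append(1 if goertzel_power(sym, F1) > goertzel_power(sym, F0) else 0)
    return bits
```

In the attack scenario, the transmitter plays `encode(bits)` through its speakers while the receiver, with its speakers retasked as microphones, runs `decode` over the captured samples; a real channel would also need framing, error correction, and tolerance to room acoustics.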