Advanced Persistent Threats (APTs) have been a major challenge in securing both Information Technology (IT) and Operational Technology (OT) systems. An APT is a sophisticated attack that masquerades its actions to navigate around defenses, breach networks, often across multiple network hosts, and evade detection, typically following a “low-and-slow” approach over a long period of time. Resource availability, integrity, and confidentiality of the operational cyber-physical system (CPS) state and control are highly impacted by the safety and security measures in place. A multi-stage detection framework termed “APT$_{\textrm{DASAC}}$” is proposed to detect the different tactics, techniques, and procedures (TTPs) used during the various APT steps. Implementation was carried out in three stages: (i) Data input and probing layer - involves data gathering and pre-processing; (ii) Data analysis layer - applies the core process of “APT$_{\textrm{DASAC}}$” to learn the behaviour of attack steps from the sequence data and to correlate and link the related outputs; and (iii) Decision layer - an ensemble probability approach is utilized to integrate the outputs and make attack predictions. The framework was validated with three different datasets and three case studies. The proposed approach achieved a significant attack detection capability of 86.36\% with a loss of 0.32\%, demonstrating that detection techniques that perform well in one domain may not yield the same results in another. This suggests that the robustness and resilience of an operational system to withstand attack and maintain system performance are regulated by the safety and security measures in place, which are specific to the system in question.
Authored by Hope Eke, Andrei Petrovski
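To make the decision layer described in the APT$_{\textrm{DASAC}}$ abstract above more concrete, the following is a minimal sketch of an ensemble-probability fusion step, assuming each analysis-layer detector emits a per-step attack probability; the uniform weights and 0.5 threshold are illustrative assumptions, not the authors' values.

import numpy as np

def ensemble_decision(step_probabilities, weights=None, threshold=0.5):
    """Combine per-stage attack probabilities into a single prediction.

    step_probabilities: one attack probability per detected APT step.
    weights: optional reliability weights for each detector (uniform by default).
    """
    p = np.asarray(step_probabilities, dtype=float)
    w = np.ones_like(p) if weights is None else np.asarray(weights, dtype=float)
    fused = float(np.average(p, weights=w))   # weighted soft vote across steps
    return fused, fused >= threshold

# Example: three correlated attack-step detectors report their probabilities.
score, is_attack = ensemble_decision([0.72, 0.64, 0.91])
print(score, is_attack)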
Advanced Persistent Threats (APTs) have significantly impacted organizations over an extended period with their coordinated and sophisticated cyberattacks. Unlike signature-based tools such as antivirus and firewalls that can detect and block other types of malware, APTs exploit zero-day vulnerabilities to generate new variants of undetectable malware. Additionally, APT adversaries engage in complex relationships and interactions within network entities, necessitating the learning of interactions in network traffic flows, such as hosts, users, or IP addresses, for effective detection. However, traditional deep neural networks often fail to capture the inherent graph structure and overlook crucial contextual information in network traffic flows. To address these issues, this research models APTs as heterogeneous graphs, capturing the diverse features and complex interactions in network flows. Consequently, a heterogeneous graph transformer (HGT) model is used to accurately distinguish between benign and malicious network connections. Experimental results reveal that the HGT model achieves better performance, with 100\% accuracy and accelerated learning time, outperforming homogeneous graph neural network models.
Authored by Kazeem Saheed, Shagufta Henna
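To illustrate the heterogeneous-graph framing used in the abstract above, here is a minimal sketch (not the authors' HGT implementation) that models network flows as a typed graph with networkx; the node types (host, user, ip), edge types, and flow attributes are illustrative assumptions.

import networkx as nx

# Typed nodes (host, user, ip) and typed edges capture the heterogeneity of flows.
g = nx.MultiDiGraph()
g.add_node("host:web01", ntype="host")
g.add_node("user:alice", ntype="user")
g.add_node("ip:10.0.0.5", ntype="ip")

g.add_edge("user:alice", "host:web01", etype="logs_into")
g.add_edge("host:web01", "ip:10.0.0.5", etype="connects_to",
           bytes_sent=48213, label="benign")   # flow features live on edges

# A heterogeneous GNN such as HGT would learn per-type projections over this
# structure; here we only show how types and features could be attached.
for u, v, data in g.edges(data=True):
    print(u, "-[", data["etype"], "]->", v, data.get("label"))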
Advanced persistent threats (APTs) have novel features such as multi-stage penetration, highly-tailored intention, and evasive tactics. APT defense requires fusing multi-dimensional cyber threat intelligence data to identify attack intentions and conducting efficient knowledge discovery strategies with data-driven machine learning to recognize entity relationships. However, data-driven machine learning lacks generalization ability on fresh or unknown samples, reducing the accuracy and practicality of the defense model. Besides, the private deployment of these APT defense models in heterogeneous environments and on various network devices requires significant investment in context awareness (such as known attack entities, continuous network states, and current security strategies). In this paper, we propose a few-shot multi-domain knowledge rearming (FMKR) scheme for context-aware defense against APTs. By completing multiple small tasks that are generated from different network domains with meta-learning, FMKR first trains a model with good discrimination and generalization ability for fresh and unknown APT attacks. In each FMKR task, both threat intelligence and local entities are fused into the support/query sets in meta-learning to identify possible attack stages. Second, to rearm current security strategies, a fine-tuning-based deployment mechanism is proposed to transfer the learned knowledge into a student model while minimizing the defense cost. Compared to multiple model replacement strategies, FMKR provides a faster response to attack behaviors while incurring less scheduling cost. Based on feedback from multiple real users of the Industrial Internet of Things (IIoT) over 2 months, we demonstrate that the proposed scheme can improve the defense satisfaction rate.
Authored by Gaolei Li, Yuanyuan Zhao, Wenqi Wei, Yuchen Liu
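A minimal sketch of how few-shot support/query tasks could be sampled per network domain, in the spirit of the meta-learning step in the FMKR abstract above; the N-way/K-shot sizes, domain names, and stage labels are illustrative assumptions, not the authors' configuration.

import random

def sample_task(domain_data, n_way=3, k_shot=5, q_queries=5):
    """Sample one N-way K-shot task from a single network domain.

    domain_data: dict mapping attack-stage label -> list of flow feature vectors.
    Returns (support, query) lists of (features, label) pairs.
    """
    classes = random.sample(sorted(domain_data), n_way)
    support, query = [], []
    for label in classes:
        samples = random.sample(domain_data[label], k_shot + q_queries)
        support += [(x, label) for x in samples[:k_shot]]
        query += [(x, label) for x in samples[k_shot:]]
    return support, query

# Example: tasks drawn from two hypothetical IIoT network domains.
domains = {
    "iiot_plant_a": {"recon": [[0.1, 3]] * 20, "lateral": [[0.7, 1]] * 20, "exfil": [[0.9, 8]] * 20},
    "iiot_plant_b": {"recon": [[0.2, 2]] * 20, "lateral": [[0.6, 2]] * 20, "exfil": [[0.8, 9]] * 20},
}
for name, data in domains.items():
    s, q = sample_task(data)
    print(name, len(s), "support /", len(q), "query samples")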
Past Advanced Persistent Threat (APT) attacks on Industrial Internet-of-Things (IIoT), such as the 2016 Ukrainian power grid attack and the 2017 Saudi petrochemical plant attack, have shown the disruptive effects of APT campaigns, while new IIoT malware continues to be developed by APT groups. Existing APT detection systems have been designed using cyberattack TTPs modelled for enterprise IT networks and leverage specific data sources (e.g., Linux audit logs, Windows event logs) which are not found on ICS devices. In this work, we propose RAPTOR, a system to detect APT campaigns in IIoT. Using cyberattack TTPs modelled for ICS/OT environments and focusing on ‘invariant’ attack phases, RAPTOR detects and correlates various APT attack stages in IIoT, leveraging data that can be readily collected from ICS devices/networks (packet traffic traces, IDS alerts). Subsequently, it constructs a high-level APT campaign graph which can be used by cybersecurity analysts for attack analysis and mitigation. A performance evaluation of RAPTOR’s APT attack-stage detection modules shows high precision and low false positive/negative rates. We also show that RAPTOR is able to construct the APT campaign graph for APT attacks (modelled after real-world attacks on ICS/OT infrastructure) executed on our IIoT testbed.
Authored by Ayush Kumar, Vrizlynn Thing
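A minimal sketch of how detected attack stages might be correlated into a high-level campaign graph, loosely following the RAPTOR description above; the alert fields, stage names, and the correlation rule (shared host, increasing timestamps) are illustrative assumptions rather than the system's actual logic.

import networkx as nx

# Each alert records a detected attack stage on some ICS host at some time.
alerts = [
    {"stage": "initial_access", "host": "hmi-01", "t": 100},
    {"stage": "lateral_movement", "host": "hmi-01", "t": 220},
    {"stage": "lateral_movement", "host": "plc-07", "t": 240},
    {"stage": "impact", "host": "plc-07", "t": 400},
]

campaign = nx.DiGraph()
for a in alerts:
    campaign.add_node((a["stage"], a["host"]), t=a["t"])

# Correlate stages: link alert i -> j if they share a host and j happens later.
for i in alerts:
    for j in alerts:
        if i is not j and i["host"] == j["host"] and i["t"] < j["t"]:
            campaign.add_edge((i["stage"], i["host"]), (j["stage"], j["host"]))

print(sorted(campaign.edges()))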
With the rapid evolution of the Internet and the prevalence of sophisticated adversarial cyber threats, it has become apparent that an equally rapid development of new Situation Awareness techniques is needed. The vast amount of data produced every day by Intrusion Detection Systems, Firewalls, Honeypots and other systems can quickly become insurmountable for domain experts to analyze. To enhance human-machine interaction, new Visual Analytics systems need to be implemented and tested, bridging the gap between detecting possible malicious activity, identifying it, and taking the necessary measures to stop its propagation. The detection of previously unknown, highly sophisticated Advanced Persistent Threats (APT) adds a higher degree of complexity to this task. In this paper, we discuss the principles inherent to Visual Analytics and propose a new technique for the detection of APT attacks through the use of anomaly and behavior-based analysis. Our ultimate goal is to define sophisticated cyber threats by their characteristic features and to combine those into a pattern of behavior, which can be presented in visual form to be explored and analyzed. This can be achieved through the use of our Multi-Agent System for Advanced Persistent Threat Detection (MASFAD) framework and the combination of highly-detailed and dynamic visualization techniques. This paper was originally presented at the NATO Science and Technology Organization Symposium (ICMCIS) organized by the Information Systems Technology (IST) Panel, IST-200 RSY - the ICMCIS, held in Skopje, North Macedonia, 16–17 May 2023.
Authored by Georgi Nikolov, Wim Mees
As cyber attacks grow in complexity and frequency, cyber threat intelligence (CTI) remains a priority objective for defenders. A critical component of CTI at the strategic level of defensive operations is attack attribution. Attributing an attack to a threat group informs defenders about adversaries that are actively engaging them and advances their ability to respond. In this paper, we propose a data analytic approach to threat attribution using adversary playbooks of tactics, techniques, and procedures (TTPs). Specifically, our approach uses association rule mining on a large real-world CTI dataset to extend known threat TTP playbooks with statistically probable TTPs the adversary may deploy. The benefits are twofold. First, we offer a dataset of learned TTP associations and extended threat playbooks. Second, we show that we can attribute attacks using a weighted Jaccard similarity with 96\% accuracy.
Authored by Kelsie Edie, Cole Mckee, Adam Duby
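A minimal sketch of attribution via weighted Jaccard similarity over TTP sets, as described in the abstract above; the example ATT&CK technique IDs, playbooks, and weights (e.g., inverse frequency of a technique across playbooks) are assumptions made purely for illustration.

def weighted_jaccard(observed, playbook, weight):
    """Weighted Jaccard similarity between an observed TTP set and a threat playbook."""
    inter = sum(weight.get(t, 1.0) for t in observed & playbook)
    union = sum(weight.get(t, 1.0) for t in observed | playbook)
    return inter / union if union else 0.0

# Hypothetical playbooks keyed by ATT&CK technique IDs.
playbooks = {
    "group_A": {"T1059", "T1021", "T1486"},
    "group_B": {"T1566", "T1059", "T1005"},
}
# Rarer techniques get higher weight, so shared common TTPs matter less.
weights = {"T1059": 0.2, "T1021": 0.9, "T1486": 1.0, "T1566": 0.6, "T1005": 0.8}

observed = {"T1021", "T1486", "T1059"}
scores = {g: weighted_jaccard(observed, ttps, weights) for g, ttps in playbooks.items()}
print(max(scores, key=scores.get), scores)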
Advanced Persistent Threat (APT) attacks are complex, employing diverse attack elements and increasingly intelligent techniques. This paper introduces a tool for security risk assessment specifically designed for these attacks. This tool assists security teams in systematically analyzing APT attacks to derive adaptive security requirements for mission-critical target systems. Additionally, the tool facilitates the assessment of security risks, providing a comprehensive understanding of their impact on target systems. By leveraging this tool, security teams can enhance defense strategies, mitigating potential threats and ensuring the security of target systems.
Authored by Sihn-Hye Park, Dongyoon Kim, Seok-Won Lee
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-Enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we also need to consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks, under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. The poisoning attacks, on an attack classification problem, can lead to a decrease in accuracy from 94.93\% to 85.98\% with IID data and from 94.18\% to 30.04\% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
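A minimal sketch, under simplifying assumptions, of how a single poisoned client update can skew a plain FedAvg aggregate in a federated setting like the one studied above; the scaled-negation attack, the tiny update vectors, and the client counts are illustrative, not the exact attacks or models evaluated in the paper.

import numpy as np

def fedavg(updates):
    """Plain FedAvg: average the clients' weight updates."""
    return np.mean(np.stack(updates), axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0.1, 0.01, size=4) for _ in range(9)]   # 9 benign client updates

# Model-poisoning client: send a scaled, negated update to pull the global model away.
poisoned = -10.0 * np.mean(np.stack(honest), axis=0)

clean_global = fedavg(honest)
attacked_global = fedavg(honest + [poisoned])
print("clean:   ", np.round(clean_global, 3))
print("attacked:", np.round(attacked_global, 3))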
Agro-Ledger: Blockchain Based Framework for Transparency and Traceability in Agri-Food Supply Chains
This paper presents a pioneering blockchain-based framework for enhancing traceability and transparency within the global agrifood supply chain. By seamlessly integrating blockchain technology and the Ethereum Virtual Machine (EVM), the framework offers a robust solution to the industry's challenges. It weaves a narrative where each product's journey is securely documented in an unalterable digital ledger, accessible to all stakeholders. Real-time Internet of Things (IoT) sensors stand sentinel, monitoring variables crucial to product quality. With millions afflicted by foodborne diseases, substantial food wastage, and a strong consumer desire for transparency, this framework responds to a clarion call for change. Moreover, the framework's data-driven approach not only rejuvenates consumer confidence and product authenticity but also lays the groundwork for robust sustainability and toxicity assessments. In this narrative of technological innovation, the paper embarks on an architectural odyssey, intertwining the threads of blockchain and EVM to reimagine a sustainable, transparent, and trustworthy agrifood landscape.
Authored by Prasanna Kumar, Bharati Mohan, Akilesh S, Jaikanth Y, Roshan George, Vishal G, Vishnu P, Elakkiya R
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. This paper starts from the risks and events that ChatGPT has recently faced, proposes a framework for analyzing cybersecurity in cyberspace, and envisions adversarial models and systems. It puts forward a new evolutionary relationship between attackers and defenders using ChatGPT to enhance their own capabilities in a changing environment and predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for increased knowledge of the threats posed by artificial intelligence. Now more than ever, people are worried about poison attacks, one of the many AI-related dangers that have already been made public. To fool a classifier during testing, an attacker may "poison" it by altering a portion of the dataset it utilised for training. The poison-resistance strategy presented in this article is novel. The approach uses a recently developed primitive called the keyed nonlinear probability test to determine whether or not the training input is consistent with a previously learnt distribution D when the odds are stacked against the model. We use an adversary-unknown secret key in our operation. Since the key is kept hidden, an adversary cannot use it to fool a keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
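The abstract's keyed nonparametric check could, for example, look like the following sketch: a secret key seeds a random projection of the data, and a standard two-sample test compares the projected training batch against the reference distribution D. The Kolmogorov-Smirnov test and the key-derived projection here are assumptions standing in for the paper's exact primitive, not a reproduction of it.

import hashlib
import numpy as np
from scipy.stats import ks_2samp

def keyed_projection_test(batch, reference, key, alpha=0.01):
    """Project both samples onto a key-derived random direction, then run a KS test.

    Without the key, an adversary cannot predict the projection its data must survive.
    """
    dim = batch.shape[1]
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    direction = np.random.default_rng(seed).normal(size=dim)
    stat, p_value = ks_2samp(batch @ direction, reference @ direction)
    return p_value >= alpha   # True: batch looks consistent with the reference D

rng = np.random.default_rng(1)
reference = rng.normal(0, 1, size=(2000, 10))            # trusted distribution D
clean = rng.normal(0, 1, size=(500, 10))
poisoned = clean + np.array([0] * 9 + [2.5])              # shift one feature

key = b"adversary-unknown secret key"
print(keyed_projection_test(clean, reference, key))       # likely True
print(keyed_projection_test(poisoned, reference, key))    # likely False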
The adoption of IoT in a multitude of critical infrastructures revolutionizes several sectors, ranging from smart healthcare systems to financial organizations and thermal and nuclear power plants. Yet, the progressive growth of IoT devices in critical infrastructure without considering security risks can damage the privacy, confidentiality, and integrity of both individuals and organizations. To overcome the aforementioned security threats, we propose an AI and onion-routing-based secure architecture for IoT-enabled critical infrastructure. Here, we first employ AI classifiers that distinguish attack from non-attack IoT data, where attack data is discarded from further communication. In addition, the AI classifiers are secured against data poisoning attacks by incorporating an isolation forest algorithm that efficiently detects poisoned data and eradicates it from the dataset’s feature space. Only non-attack data is forwarded to the onion routing network, which offers triple encryption of the IoT data. As the onion routing only processes non-attack data, it is less computationally expensive than other baseline works. Moreover, each onion router is associated with blockchain nodes that store the verifying tokens of IoT data. The proposed architecture is evaluated using performance parameters such as accuracy, precision, recall, training time, and compromisation rate. In the proposed work, the SVM classifier performs best, achieving 97.7\% accuracy.
Authored by Nilesh Jadav, Rajesh Gupta, Sudeep Tanwar
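A minimal sketch of the two-step pipeline described above: filter suspected poisoned points with an isolation forest, then train the SVM classifier on the retained data. The synthetic feature vectors, contamination rate, and kernel choice are illustrative assumptions.

import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Synthetic IoT feature vectors: two classes (non-attack vs attack) plus injected poison.
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(3, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
X[:20] += rng.normal(15, 1, (20, 5))          # poisoned points pushed far off-distribution

# Step 1: isolation forest flags outlying (suspected poisoned) samples in feature space.
iso = IsolationForest(contamination=0.05, random_state=0)
keep = iso.fit_predict(X) == 1                # +1 = inlier, -1 = suspected poison

# Step 2: train the SVM only on the retained, presumed-clean data.
clf = SVC(kernel="rbf").fit(X[keep], y[keep])
print("samples removed:", int((~keep).sum()), "train accuracy:", clf.score(X[keep], y[keep]))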
This survey paper provides an overview of the current state of AI attacks and risks for AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause substantial financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey paper comprehensively analyzes the current state of AI attacks and risks to AI security and privacy, as well as detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
AI technology is widely used in different fields due to the effectiveness and accurate results it has achieved. This diversity of usage attracts many attackers seeking to compromise AI systems for their own goals. One of the most important and powerful attacks launched against AI models is the label-flipping attack. This attack allows the attacker to compromise the integrity of the dataset, degrading the accuracy of ML models or generating specific output targeted by the attacker. Therefore, this paper studies the robustness of several machine learning models against targeted and non-targeted label-flipping attacks on the dataset during the training phase. It also checks the repeatability of the results obtained in the existing literature. The results are observed and explained within the cyber security domain.
Authored by Alanoud Almemari, Raviha Khan, Chan Yeun
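A minimal sketch of the two flavours of label flipping studied above: non-targeted flipping randomizes a fraction of labels, while targeted flipping relabels samples of one source class as a chosen target class. The flip rate and class choices are illustrative assumptions.

import numpy as np

def flip_labels(y, rate=0.2, source=None, target=None, seed=0):
    """Return a poisoned copy of y.

    Non-targeted (source/target None): replace a random fraction with random labels.
    Targeted: relabel a fraction of `source`-class samples as `target`.
    """
    rng = np.random.default_rng(seed)
    y = y.copy()
    classes = np.unique(y)
    if source is None:
        idx = rng.choice(len(y), size=int(rate * len(y)), replace=False)
        y[idx] = rng.choice(classes, size=len(idx))
    else:
        src_idx = np.flatnonzero(y == source)
        idx = rng.choice(src_idx, size=int(rate * len(src_idx)), replace=False)
        y[idx] = target
    return y

y = np.array([0] * 50 + [1] * 50)
print("non-targeted flips:", int((flip_labels(y) != y).sum()))
print("targeted 0->1 flips:", int((flip_labels(y, source=0, target=1) != y).sum()))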
Federated learning is proposed as a typical distributed AI technique to protect user privacy and data security; it trains machine learning models on decentralized datasets by sharing model gradients rather than user data. However, while this particular machine learning approach safeguards data from being shared, it also increases the likelihood that servers will be attacked. Federated learning models are sensitive to poisoning attacks, which can effectively threaten the global model when an attacker directly contaminates it by passing poisoned gradients. In this paper, we propose a federated learning poisoning attack method based on feature selection. Unlike traditional poisoning attacks, it only modifies important features of the data and ignores other features, which ensures the effectiveness of the attack while remaining highly stealthy and able to bypass general defense methods. Through experiments, we demonstrate the feasibility of the method.
Authored by Zhengqi Liu, Ziwei Liu, Xu Yang
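A minimal sketch of the stealth idea described above: perturb only the gradient coordinates tied to the most important features so the poisoned update stays close to a benign one. The importance score (absolute gradient magnitude), top-k fraction, and perturbation scale are illustrative assumptions, not the paper's feature-selection method.

import numpy as np

def stealthy_poison(gradient, top_fraction=0.1, scale=5.0):
    """Flip and amplify only the top-k most important gradient entries.

    Importance is approximated by absolute gradient magnitude; the remaining
    coordinates are left untouched so the update still looks mostly benign.
    """
    g = gradient.copy()
    k = max(1, int(top_fraction * g.size))
    important = np.argsort(np.abs(g))[-k:]       # indices of the k largest-magnitude entries
    g[important] = -scale * g[important]         # poison only those coordinates
    return g

rng = np.random.default_rng(3)
benign_update = rng.normal(0, 0.05, size=100)
poisoned_update = stealthy_poison(benign_update)
changed = int((poisoned_update != benign_update).sum())
print("coordinates modified:", changed, "of", benign_update.size)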
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
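A minimal sketch of the kind of fairness measurement discussed in the abstract above, assuming a simple metric: the gap between the best and worst per-subpopulation accuracy of a defended model. The metric's exact form in the paper is not reproduced here; this is only an illustration with hypothetical predictions and groups.

import numpy as np

def subpopulation_accuracy_gap(y_true, y_pred, groups):
    """Report per-group accuracy and the worst-case gap across subpopulations."""
    accs = {}
    for g in np.unique(groups):
        mask = groups == g
        accs[g] = float((y_pred[mask] == y_true[mask]).mean())
    gap = max(accs.values()) - min(accs.values())
    return accs, gap

# Hypothetical predictions from a poisoning-robust model on two subpopulations.
y_true = np.array([0, 1, 1, 0, 1, 0, 1, 1, 0, 0])
y_pred = np.array([0, 1, 1, 0, 0, 0, 0, 1, 1, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

accs, gap = subpopulation_accuracy_gap(y_true, y_pred, groups)
print(accs, "gap:", round(gap, 2))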
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Authored by Cristina Improta
The technology described in this paper allows two or more air-gapped computers with passive speakers to discreetly exchange data while they are in the same room. The suggested solution takes advantage of the audio chip’s HDA Jack Retask capability, which enables speakers attached to it to be switched from output devices to input devices, turning them into microphones. Details of the implementation, technical background, and attack model are discussed. The reversed speakers operate effectively in the near-ultrasonic frequency range (18 kHz to 24 kHz), despite not being intended to function as microphones. An analysis of practical factors for the effective application of the suggested strategy follows. The findings have important ramifications for secure data transfer between air-gapped systems and emphasise the necessity of extra security measures to thwart such attacks.
Authored by S Suraj, Meenu Mohan, Suma S
Urban Air Mobility is envisioned as an on-demand, highly automated and autonomous air transportation modality. It requires the use of advanced sensing and data communication technologies to gather, process, and share flight-critical data. While this sharing of mixed-criticality data brings opportunities, it also presents serious cybersecurity threats and safety risks if compromised, due to the cyber-physical nature of the airborne vehicles. Therefore, the avionics system design approach of adhering to functional safety standards (DO-178C) alone is inadequate to protect mission-critical avionics functions from cyber-attacks. To approach this challenge, the DO-326A/ED-202A standard provides a baseline to effectively manage cybersecurity risks and to ensure the airworthiness of airborne systems. In this regard, this paper pursues a holistic cybersecurity engineering approach and bridges the security gap by mapping the DO-326A/ED-202A system security risk assessment activities to the Threat Analysis and Risk Assessment process. It introduces a Resilient Avionics Architecture as an experimental use case for Urban Air Mobility, following the DO-326A/ED-202A standard guidelines. It also presents a comprehensive system security risk assessment of the use case and derives appropriate risk mitigation strategies. The presented work helps avionics system designers to identify, assess, protect against, and manage cybersecurity risks across the avionics system life cycle.
Authored by Fahad Siddiqui, Alexander Ahlbrecht, Rafiullah Khan, Sena Tasdemir, Henry Hui, Balmukund Sonigara, Sakir Sezer, Kieran McLaughlin, Wanja Zaeske, Umut Durak
Wireless Sensor Networks (WSNs) have gained prominence in technology for diverse applications, such as environmental monitoring, health care, smart agriculture, and industrial automation. Comprising small, low-power sensor nodes that sense and collect data from the environment, process it locally, and communicate wirelessly with a central sink or gateway, WSNs face challenges related to limited energy resources, communication constraints, and data processing requirements. This paper presents a comprehensive review of the current state of research in WSNs, focusing on aspects such as network architecture, communication protocols, energy management techniques, data processing and fusion, security and privacy, and applications. Existing solutions are critically analysed regarding their strengths, weaknesses, research gaps, and future directions for WSNs.
Authored by Santosh Jaiswal, Anshu Dwivedi
AMLA is the novel Auckland Model for Logical Airgaps developed at the University of Auckland. IT-OT convergence use cases are rapidly being implemented, mostly in an ad-hoc manner, leaving large security holes. This paper introduces the first novel AMLA logical airgap design pattern and showcases AMLA’s layered defense system via a New Zealand case study for the electricity distribution sector, to propose how logical airgaps can be beneficial in New Zealand. AMLA is thus able to provide security even for legacy methods and devices, without replacing them, so that the newer convergence use cases work economically and securely.
Authored by Abhinav Chopra, Nirmal-Kumar Nair, Rizki Rahayani
Wireless communication enables an ingestible device to send sensor information and support external on-demand operation while in the gastrointestinal (GI) tract. However, it is challenging to maintain stable wireless communication with an ingestible device that travels inside the dynamic GI environment, as this environment easily detunes the antenna and decreases the antenna gain. In this paper, we propose an air-gap-based antenna solution to stabilize the antenna gain inside this dynamic environment. By surrounding a chip antenna with 1-2 mm of air, the antenna is isolated from the environment, recovering its antenna gain and the received signal strength by 12 dB or more according to our in vitro and in vivo evaluation in swine. The air gap provides margin for the high path loss, enabling stable wireless communication at 2.4 GHz that allows users to easily access their ingestible devices using mobile devices with Bluetooth Low Energy (BLE). On the other hand, the data sent or received over the wireless medium is vulnerable to being eavesdropped on by nearby devices other than authorized users. Therefore, we also propose a lightweight security protocol. The proposed protocol is implemented with low energy consumption without compromising the security level, thanks to its base protocol of symmetric challenge-response and Speck, a cipher optimized for software implementation.
Authored by Yeseul Jeon, Saurav Maji, So-Yoon Yang, Muhammed Thaniana, Adam Gierlach, Ian Ballinger, George Selsing, Injoo Moon, Josh Jenkins, Andrew Pettinari, Niora Fabian, Alison Hayward, Giovanni Traverso, Anantha Chandrakasan
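A minimal sketch of a symmetric challenge-response exchange of the kind described above. The paper builds its protocol on Speck; this illustration substitutes HMAC-SHA256 from Python's standard library as the keyed primitive, so the cipher choice, key size, and message format are assumptions rather than the authors' design.

import hmac, hashlib, os

SHARED_KEY = os.urandom(16)   # pre-shared between the ingestible device and the user's phone

def device_response(key, challenge):
    """Device proves knowledge of the key by answering a fresh challenge."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verifier_check(key, challenge, response):
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# Phone (verifier) sends a random nonce over BLE; device answers; phone verifies.
challenge = os.urandom(8)
response = device_response(SHARED_KEY, challenge)
print("authenticated:", verifier_check(SHARED_KEY, challenge, response))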
The notion that ships, marine vessels and off-shore structures are digitally isolated is quickly disappearing. Affordable and accessible wireless communication technologies (e.g., short-range radio, long-range satellite) are quickly removing any air-gaps these entities have. Commercial, defence, and personal ships have a wide range of communication systems to choose from, yet some can weaken the overall ship security. One of the most significant information technologies (IT) being used today is satellite-based communications. While the backbone of this technology is often secure, third-party devices may introduce vulnerabilities. Within maritime industries, the market for satellite communication devices has also grown significantly, with a wide range of products available. With these devices and services, marine cyber-physical systems are now more interconnected than ever. However, some of these off-the-shelf products can be more insecure than others and, as shown here, can decrease the security of the overall maritime network and other connected devices. This paper examines the vulnerability of an existing, off-the-shelf product, how a novel attack-chain can compromise the device, how that introduces vulnerabilities to the wider network, and then proposes solutions to the found vulnerabilities.
Authored by Jordan Gurren, Avanthika Harish, Kimberly Tam, Kevin Jones
Air-gapped workstations are separated from the Internet because they contain confidential or sensitive information. Studies have shown that attackers can leak data from air-gapped computers with covert ultrasonic signals produced by loudspeakers. To counteract the threat, speakers might not be permitted on highly sensitive computers or may be disabled altogether - a measure known as an 'audio gap.' This paper presents an attack enabling adversaries to exfiltrate data over ultrasonic waves from air-gapped, audio-gapped computers without external speakers. The malware on the compromised computer uses its built-in buzzer to generate sonic and ultrasonic signals. This component is mounted on many systems, including PC workstations, embedded systems, and server motherboards. It allows software and firmware to provide error notifications to a user, such as memory and peripheral hardware failures. We examine the different types of internal buzzers and their hardware and software controls. Despite their limited technological capabilities, such as 1-bit sound, we show that sensitive data can be encoded in sonic and ultrasonic waves. This is done using pulse width modulation (PWM) techniques to maintain a carrier wave with a dynamic range. We also show that malware can evade detection by hiding in the frequency bands of other components (e.g., fans and power supplies). We implement the attack using a PC transmitter and smartphone app receiver. We discuss transmission protocols, modulation, encoding, and reception, and present the evaluation of the covert channel as well. Based on our tests, sensitive data can be exfiltrated from air-gapped computers through the built-in buzzer. A smartphone can receive data from up to six meters away at 100 bits per second.
Authored by Mordechai Guri
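A minimal sketch of encoding bits as short near-ultrasonic tone bursts, in the spirit of the covert channel above; the on-off keying scheme, 18.5 kHz carrier, 100 bit/s rate, and 48 kHz sample rate are illustrative assumptions, not the paper's exact PWM modulation on a 1-bit buzzer.

import numpy as np

SAMPLE_RATE = 48_000      # Hz
CARRIER = 18_500          # Hz, near-ultrasonic
BIT_RATE = 100            # bits per second

def modulate(bits):
    """On-off keying: a square-wave carrier burst for '1', silence for '0'."""
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    t = np.arange(samples_per_bit) / SAMPLE_RATE
    tone = np.sign(np.sin(2 * np.pi * CARRIER * t))   # square wave, closer to a 1-bit buzzer
    return np.concatenate([tone if b else np.zeros(samples_per_bit) for b in bits])

def demodulate(signal):
    samples_per_bit = SAMPLE_RATE // BIT_RATE
    frames = signal.reshape(-1, samples_per_bit)
    energy = (frames ** 2).mean(axis=1)               # per-bit energy detection
    return (energy > 0.5).astype(int).tolist()

payload = [1, 0, 1, 1, 0, 0, 1, 0]   # e.g., one byte of exfiltrated data
print(demodulate(modulate(payload)) == payload)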
The rapid advancement of technology in aviation business management, notably through the implementation of location-independent aerodrome control systems, is reshaping service efficiency and cost-effectiveness. However, this emphasis on operational enhancements has resulted in a notable gap in cybersecurity incident management proficiency. This study addresses the escalating sophistication of the cybersecurity threat landscape, where malicious actors target critical safety information, posing risks from disruptions to potential catastrophic incidents. The paper employs a specialized conceptualization technique, derived from prior research, to analyze the interplays between malicious software and degraded modes operations in location-independent aerodrome control systems. Rather than predicting attack trajectories, this approach prioritizes the development of training paradigms to rigorously evaluate expertise across engineering, operational, and administrative levels in air traffic management domain. This strategy offers a proactive framework to safeguard critical infrastructures, ensuring uninterrupted, reliable services, and fortifying resilience against potential threats. This methodology promises to cultivate a more secure and adept environment for aerodrome control operations, mitigating vulnerabilities associated with malicious interventions.
Authored by Gabor Horvath