This paper presents a reputation-based threat mitigation framework that defends against potential security threats in electroencephalogram (EEG) signal classification during the model aggregation stage of Federated Learning. While EEG signal analysis has attracted attention with the emergence of brain-computer interface (BCI) technology, it is difficult to create efficient learning models for EEG analysis because of the distributed nature of EEG data and the related privacy and security concerns. To address these challenges, the proposed defense framework leverages the Federated Learning paradigm to preserve privacy through collaborative model training with localized data from dispersed sources, and introduces a reputation-based mechanism to mitigate the influence of data poisoning attacks and identify compromised participants. To assess the efficiency of the proposed reputation-based federated learning defense framework, data poisoning attacks based on the risk level of training data derived by Explainable Artificial Intelligence (XAI) techniques are conducted on both publicly available EEG signal datasets and a self-established EEG signal dataset. Experimental results on the poisoned datasets show that the proposed defense methodology performs well in EEG signal classification while reducing the risks associated with security threats.
Authored by Zhibo Zhang, Pengfei Li, Ahmed Hammadi, Fusen Guo, Ernesto Damiani, Chan Yeun
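The abstract above describes the reputation mechanism only at a high level. The following is a minimal sketch of how reputation-weighted aggregation could be wired into federated averaging, assuming a cosine-agreement reputation update; the function names, decay factor, and toy data are illustrative and not the authors' algorithm.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two flattened update vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def reputation_weighted_average(updates, reputations, decay=0.9):
    """Aggregate client updates, weighting each one by its reputation.

    updates     : list of 1-D numpy arrays (flattened model deltas)
    reputations : list of floats in [0, 1], one per client
    decay       : how strongly past reputation persists (assumed hyperparameter)
    Returns the aggregated update and the updated reputation list.
    """
    # Reference direction: plain (unweighted) mean of all updates.
    mean_update = np.mean(updates, axis=0)

    new_reps = []
    for rep, upd in zip(reputations, updates):
        # Clients whose update agrees with the consensus gain reputation,
        # clients that deviate (possible poisoners) lose it.
        agreement = max(cosine(upd, mean_update), 0.0)
        new_reps.append(decay * rep + (1 - decay) * agreement)

    weights = np.array(new_reps)
    weights = weights / weights.sum()
    aggregated = np.sum([w * u for w, u in zip(weights, updates)], axis=0)
    return aggregated, new_reps

# Toy round: 4 honest clients and 1 client sending an inverted (poisoned) update.
rng = np.random.default_rng(0)
true_delta = rng.normal(size=10)
updates = [true_delta + 0.1 * rng.normal(size=10) for _ in range(4)]
updates.append(-true_delta)                  # poisoned update
reps = [0.5] * 5                             # neutral starting reputation
agg, reps = reputation_weighted_average(updates, reps)
print("reputations after one round:", [round(r, 3) for r in reps])
```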
The advent of Generative AI has marked a significant milestone in artificial intelligence, demonstrating remarkable capabilities in generating realistic images, text, and data patterns. However, these advancements come with heightened concerns over data privacy and copyright infringement, primarily due to the reliance on vast datasets for model training. Traditional approaches like differential privacy, machine unlearning, and data poisoning offer only fragmented solutions to these complex issues. Our paper delves into the multifaceted challenges of privacy and copyright protection within the data lifecycle. We advocate for integrated approaches that combine technical innovation with ethical foresight, holistically addressing these concerns by investigating and devising solutions informed by the lifecycle perspective. This work aims to catalyze a broader discussion and inspire concerted efforts towards data privacy and copyright integrity in Generative AI.
Authored by Dawen Zhang, Boming Xia, Yue Liu, Xiwei Xu, Thong Hoang, Zhenchang Xing, Mark Staples, Qinghua Lu, Liming Zhu
In recent years, machine learning technology has been extensively utilized, leading to increased attention to the security of AI systems. In the field of image recognition, an attack technique called the clean-label backdoor attack has been widely studied; it is more difficult to detect than general backdoor attacks because data labels do not change when the poisoning data is tampered with during model training. However, there remains a lack of research on malware detection systems. Some of the current work operates under the white-box assumption, which requires knowledge of the machine learning-based models and can be advantageous for attackers. In this study, we focus on clean-label backdoor attacks in malware detection systems and propose a new clean-label backdoor attack under the black-box assumption that does not require knowledge of machine learning-based models, which makes it a greater threat. The experimental evaluation of the proposed attack method shows that the attack success rate is up to 80.50% when the poisoning rate is 14.00%, demonstrating the effectiveness of the proposed attack method. In addition, we experimentally evaluated the effectiveness of dimensionality reduction techniques in preventing clean-label backdoor attacks and showed that they can reduce the attack success rate by 76.00%.
Authored by Wanjia Zheng, Kazumasa Omote
AI is one of the most popular fields of technology nowadays. Developers implement these technologies everywhere, sometimes forgetting about their robustness to unusual types of traffic. This omission can be exploited by attackers, who are always seeking to develop new attacks. Consequently, the growth of AI correlates strongly with the rise of adversarial attacks. Adversarial attacks, or adversarial machine learning, is a technique in which attackers attempt to fool ML systems with deceptive data. They can use inconspicuous, natural-looking perturbations in input data to mislead neural networks without interfering with a model directly and often without the risk of being detected. Adversarial attacks are usually divided along three primary axes - the security violation, poisoning, and evasion attacks - which can be further categorized into “targeted”, “untargeted”, “whitebox”, and “blackbox” types. This research examines most of the adversarial attacks known by 2023 relating to all these categories and some others.
Authored by Natalie Grigorieva, Sergei Petrenko
IoT scenarios face cybersecurity concerns due to unauthorized devices that can impersonate legitimate ones by using identical software and hardware configurations. This can lead to sensitive information leaks, data poisoning, or privilege escalation. Behavioral fingerprinting and ML/DL techniques have been used in the literature to identify devices based on performance differences caused by manufacturing imperfections. In addition, maintaining data privacy with Federated Learning is also a challenge in IoT scenarios. Federated Learning allows multiple devices to collaboratively train a machine learning model without sharing their data, but it requires addressing issues such as communication latency, heterogeneity of devices, and data security concerns. In this sense, Trustworthy Federated Learning has emerged as a potential solution, combining privacy-preserving techniques and metrics to ensure data privacy, model integrity, and secure communication between devices. Therefore, this work proposes a trustworthy federated learning framework for individual device identification. It first analyzes the existing metrics for trustworthiness evaluation in FL and organizes them into six pillars (privacy, robustness, fairness, explainability, accountability, and federation) for computing the trustworthiness of FL models. The framework presents a modular setup where one component is in charge of federated model generation and another is in charge of trustworthiness evaluation. The framework is validated in a real scenario composed of 45 identical Raspberry Pi devices whose hardware components are monitored to generate individual behavior fingerprints. The solution achieves a 0.9724 average F1-Score for identification in a centralized setup, while the average F1-Score in the federated setup is 0.8320. In addition, the model achieves a final trustworthiness score of 0.6 on state-of-the-art metrics, indicating that further privacy and robustness techniques are required to improve this score.
Authored by Pedro Sánchez, Alberto Celdrán, Gérôme Bovet, Gregorio Pérez, Burkhard Stiller
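The trustworthiness score reported above is computed from six pillars. The snippet below is a minimal sketch of how such a score could be composed as a weighted mean of pillar scores; the numeric values and equal weights are placeholder assumptions, not the framework's actual metrics.

```python
# Hypothetical pillar scores in [0, 1]; the real framework derives these from
# concrete metrics (e.g., differential-privacy budget, robustness tests).
pillars = {
    "privacy": 0.55,
    "robustness": 0.50,
    "fairness": 0.70,
    "explainability": 0.65,
    "accountability": 0.60,
    "federation": 0.60,
}

# Equal weights are an assumption; a real deployment would tune them.
weights = {name: 1 / len(pillars) for name in pillars}

trust_score = sum(weights[name] * score for name, score in pillars.items())
# Illustrative values chosen so the mean lands near the 0.6 reported above.
print(f"overall trustworthiness: {trust_score:.2f}")
```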
Poisoning Attacks in Federated Edge Learning for Digital Twin 6G-Enabled IoTs: An Anticipatory Study
Federated edge learning can be essential in supporting privacy-preserving, artificial intelligence (AI)-enabled activities in digital twin 6G-enabled Internet of Things (IoT) environments. However, we also need to consider the potential of attacks targeting the underlying AI systems (e.g., adversaries seeking to corrupt data on the IoT devices during local updates or to corrupt the model updates); hence, in this article, we propose an anticipatory study of poisoning attacks in federated edge learning for digital twin 6G-enabled IoT environments. Specifically, we study the influence of adversaries on the training and development of federated learning models in digital twin 6G-enabled IoT environments. We demonstrate that attackers can carry out poisoning attacks in two different learning settings, namely centralized learning and federated learning, and that successful attacks can severely reduce the model's accuracy. We comprehensively evaluate the attacks on a new cyber security dataset designed for IoT applications with three deep neural networks under both non-independent and identically distributed (Non-IID) data and independent and identically distributed (IID) data. On an attack classification problem, the poisoning attacks can lead to a decrease in accuracy from 94.93% to 85.98% with IID data and from 94.18% to 30.04% with Non-IID data.
Authored by Mohamed Ferrag, Burak Kantarci, Lucas Cordeiro, Merouane Debbah, Kim-Kwang Choo
Agro-Ledger: Blockchain Based Framework for Transparency and Traceability in Agri-Food Supply Chains
This paper presents a pioneering blockchain-based framework for enhancing traceability and transparency within the global agrifood supply chain. By seamlessly integrating blockchain technology and the Ethereum Virtual Machine (EVM), the framework offers a robust solution to the industry's challenges. It weaves a narrative where each product's journey is securely documented in an unalterable digital ledger, accessible to all stakeholders. Real-time Internet of Things (IoT) sensors stand sentinel, monitoring variables crucial to product quality. With millions afflicted by foodborne diseases, substantial food wastage, and a strong consumer desire for transparency, this framework responds to a clarion call for change. Moreover, the framework's data-driven approach not only rejuvenates consumer confidence and product authenticity but also lays the groundwork for robust sustainability and toxicity assessments. In this narrative of technological innovation, the paper embarks on an architectural odyssey, intertwining the threads of blockchain and EVM to reimagine a sustainable, transparent, and trustworthy agrifood landscape.
Authored by Prasanna Kumar, Bharati Mohan, Akilesh S, Jaikanth Y, Roshan George, Vishal G, Vishnu P, Elakkiya R
As a recent breakthrough in generative artificial intelligence, ChatGPT is capable of creating new data, images, audio, or text content based on user context. In the field of cybersecurity, it provides generative automated AI services such as network detection, malware protection, and privacy compliance monitoring. However, it also faces significant security risks during its design, training, and operation phases, including privacy breaches, content abuse, prompt word attacks, model stealing attacks, abnormal structure attacks, data poisoning attacks, model hijacking attacks, and sponge attacks. Starting from the risks and incidents that ChatGPT has recently faced, this paper proposes a framework for analyzing its cybersecurity in cyberspace and envisions adversarial models and systems. It puts forward a new evolutionary relationship in which attackers and defenders both use ChatGPT to enhance their own capabilities in a changing environment, and it predicts the future development of ChatGPT from a security perspective.
Authored by Chunhui Hu, Jianfeng Chen
As the use of machine learning continues to grow in prominence, so does the need for increased knowledge of the threats posed by artificial intelligence. Now more than ever, people are worried about poison attacks, one of the many AI-related dangers that have already been made public. To fool a classifier during testing, an attacker may "poison" it by altering a portion of the dataset it utilised for training. The poison-resistance strategy presented in this article is novel. The approach uses a recently developed primitive called the keyed nonlinear probability test to determine whether or not the training input is consistent with a previously learnt distribution D, even when the odds are stacked against the model. We use an adversary-unknown secret key in our operation. Since the key is kept hidden, an adversary cannot use it to fool the keyed nonparametric normality test into concluding that a (substantially) modified dataset really originates from the designated distribution D.
Authored by Ramesh Saini
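The keyed nonlinear probability test named above is a primitive specific to the paper. The sketch below only illustrates the general idea of a keyed nonparametric check, using a secret key to derive random projection directions and a two-sample Kolmogorov-Smirnov test against a reference sample; it is an assumed stand-in, not the authors' construction.

```python
import hashlib
import numpy as np
from scipy.stats import ks_2samp

def keyed_projection_test(reference, candidate, key, n_projections=16, alpha=0.01):
    """Check whether `candidate` plausibly comes from the same distribution
    as `reference`, using projection directions derived from a secret key.

    Because the projections depend on the key, an adversary who modifies the
    data cannot tell in advance which one-dimensional views will be tested.
    """
    seed = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    dim = reference.shape[1]

    for _ in range(n_projections):
        direction = rng.normal(size=dim)
        direction /= np.linalg.norm(direction)
        # Two-sample Kolmogorov-Smirnov test on the keyed 1-D projections,
        # with a Bonferroni-corrected significance level.
        res = ks_2samp(reference @ direction, candidate @ direction)
        if res.pvalue < alpha / n_projections:
            return False  # at least one keyed view looks inconsistent
    return True

rng = np.random.default_rng(1)
clean = rng.normal(size=(2000, 5))
poisoned = np.vstack([clean[:1800], rng.normal(loc=2.0, size=(200, 5))])
key = b"adversary-unknown secret key"
print("clean accepted:   ", keyed_projection_test(clean, rng.normal(size=(2000, 5)), key))
print("poisoned accepted:", keyed_projection_test(clean, poisoned, key))
```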
The adoption of IoT in a multitude of critical infrastructures revolutionizes several sectors, ranging from smart healthcare systems to financial organizations and thermal and nuclear power plants. Yet, the progressive growth of IoT devices in critical infrastructure without considering security risks can damage the privacy, confidentiality, and integrity of both individuals and organizations. To overcome the aforementioned security threats, we propose an AI and onion routing-based secure architecture for IoT-enabled critical infrastructure. Here, we first employ AI classifiers that classify IoT data as attack or non-attack, where attack data is discarded from further communication. In addition, the AI classifiers are secured against data poisoning attacks by incorporating an isolation forest algorithm that efficiently detects poisoned data and eradicates it from the dataset's feature space. Only non-attack data is forwarded to the onion routing network, which offers triple encryption to encrypt IoT data. As the onion routing only processes non-attack data, it is less computationally expensive than other baseline works. Moreover, each onion router is associated with blockchain nodes that store the verification tokens of IoT data. The proposed architecture is evaluated using performance parameters such as accuracy, precision, recall, training time, and compromise rate. In the proposed work, SVM performs best, achieving 97.7% accuracy.
Authored by Nilesh Jadav, Rajesh Gupta, Sudeep Tanwar
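The architecture above filters poisoned records with an isolation forest before classification. A minimal sketch of that filtering step with scikit-learn is shown below; the synthetic data and contamination rate are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic stand-in for IoT feature vectors: mostly benign traffic plus a
# small block of poisoned records with shifted feature values.
benign = rng.normal(loc=0.0, scale=1.0, size=(950, 8))
poisoned = rng.normal(loc=4.0, scale=1.0, size=(50, 8))
X = np.vstack([benign, poisoned])

# Fit the isolation forest; `contamination` is an assumed prior on how much
# of the data may be poisoned, not a value taken from the paper.
iso = IsolationForest(contamination=0.05, random_state=0)
labels = iso.fit_predict(X)          # +1 = inlier, -1 = flagged as poisoned

X_clean = X[labels == 1]             # only this data goes on to the classifier
print(f"kept {len(X_clean)} of {len(X)} records; "
      f"flagged {np.sum(labels == -1)} as potentially poisoned")
```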
This survey paper provides an overview of the current state of AI attacks and the risks to AI security and privacy as artificial intelligence becomes more prevalent in various applications and services. The risks associated with AI attacks and security breaches are becoming increasingly apparent and cause significant financial and social losses. This paper categorizes the different types of attacks on AI models, including adversarial attacks, model inversion attacks, poisoning attacks, data poisoning attacks, data extraction attacks, and membership inference attacks. The paper also emphasizes the importance of developing secure and robust AI models to ensure the privacy and security of sensitive data. Through a systematic literature review, this survey paper comprehensively analyzes the current state of AI attacks and risks for AI security and privacy, as well as detection techniques.
Authored by Md Rahman, Aiasha Arshi, Md Hasan, Sumayia Mishu, Hossain Shahriar, Fan Wu
AI technology is widely used in different fields due to the effectiveness and accuracy of the results it achieves. This diversity of usage attracts many attackers who target AI systems to reach their goals. One of the most important and powerful attacks launched against AI models is the label-flipping attack. This attack allows the attacker to compromise the integrity of the dataset, degrading the accuracy of ML models or generating specific output targeted by the attacker. Therefore, this paper studies the robustness of several Machine Learning models against targeted and non-targeted label-flipping attacks on the dataset during the training phase. It also checks the repeatability of results reported in the existing literature. The results are observed and explained within the cyber security domain.
Authored by Alanoud Almemari, Raviha Khan, Chan Yeun
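As a companion to the study above, here is a minimal sketch of the two attack variants it evaluates: non-targeted flipping (random wrong labels) and targeted flipping (a chosen source class relabelled to a chosen target class). The flip rate and class choices are illustrative.

```python
import numpy as np

def flip_labels(y, rate, n_classes, targeted=False, source=None, target=None, seed=0):
    """Return a poisoned copy of label vector `y`.

    Non-targeted: a random `rate` fraction of samples gets a random wrong label.
    Targeted: a `rate` fraction of class-`source` samples is relabelled `target`.
    """
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()

    if targeted:
        candidates = np.flatnonzero(y == source)
    else:
        candidates = np.arange(len(y))

    n_flips = int(rate * len(candidates))
    victims = rng.choice(candidates, size=n_flips, replace=False)

    if targeted:
        y_poisoned[victims] = target
    else:
        # Shift each victim's label by a random non-zero offset modulo n_classes,
        # which guarantees the new label differs from the original one.
        offsets = rng.integers(1, n_classes, size=n_flips)
        y_poisoned[victims] = (y[victims] + offsets) % n_classes

    return y_poisoned

# Example: flip 20% of class-0 samples to class 1 (targeted attack).
y = np.array([0, 0, 0, 0, 0, 1, 1, 1, 2, 2] * 10)
y_bad = flip_labels(y, rate=0.2, n_classes=3, targeted=True, source=0, target=1)
print("labels changed:", int(np.sum(y != y_bad)))
```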
Federated learning is proposed as a typical distributed AI technique to protect user privacy and data security; it is based on decentralized datasets that train machine learning models by sharing model gradients rather than sharing user data. However, while this particular machine learning approach safeguards data from being shared, it also increases the likelihood that servers will be attacked. Federated learning models are sensitive to poisoning attacks, which can effectively threaten the global model when an attacker directly contaminates it by passing poisoned gradients. In this paper, we propose a federated learning poisoning attack method based on feature selection. Unlike traditional poisoning attacks, it only modifies important features of the data and ignores other features, which ensures the effectiveness of the attack while remaining highly stealthy and able to bypass general defense methods. Experiments demonstrate the feasibility of the method.
Authored by Zhengqi Liu, Ziwei Liu, Xu Yang
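A minimal sketch of the idea behind the attack above: rank features by importance on a surrogate model, then perturb only the top-ranked features of the samples selected for poisoning, leaving all other features untouched. The surrogate model, perturbation size, and number of modified features are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, n_informative=5, random_state=0)

# Surrogate model used only to estimate which features matter most.
surrogate = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top_features = np.argsort(surrogate.feature_importances_)[::-1][:3]

# Poison 5% of the local training data by perturbing only the important features.
n_poison = int(0.05 * len(X))
victims = rng.choice(len(X), size=n_poison, replace=False)

X_poisoned = X.copy()
scale = X[:, top_features].std(axis=0)
X_poisoned[np.ix_(victims, top_features)] += 3.0 * scale  # shift important features only

changed = np.mean(np.any(X_poisoned != X, axis=1))
print(f"poisoned {changed:.1%} of samples, touching {len(top_features)} of {X.shape[1]} features")
```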
Machine learning models are susceptible to a class of attacks known as adversarial poisoning where an adversary can maliciously manipulate training data to hinder model performance or, more concerningly, insert backdoors to exploit at inference time. Many methods have been proposed to defend against adversarial poisoning by either identifying the poisoned samples to facilitate removal or developing poison agnostic training algorithms. Although effective, these proposed approaches can have unintended consequences on the model, such as worsening performance on certain data sub-populations, thus inducing a classification bias. In this work, we evaluate several adversarial poisoning defenses. In addition to traditional security metrics, i.e., robustness to poisoned samples, we also adapt a fairness metric to measure the potential undesirable discrimination of sub-populations resulting from using these defenses. Our investigation highlights that many of the evaluated defenses trade decision fairness to achieve higher adversarial poisoning robustness. Given these results, we recommend our proposed metric to be part of standard evaluations of machine learning defenses.
Authored by Nathalie Baracaldo, Farhan Ahmed, Kevin Eykholt, Yi Zhou, Shriti Priya, Taesung Lee, Swanand Kadhe, Mike Tan, Sridevi Polavaram, Sterling Suggs, Yuyang Gao, David Slater
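The abstract above does not specify the adapted fairness metric. The sketch below shows one simple way to quantify the concern it raises, by comparing per-subpopulation accuracy of a defended model and reporting the largest gap; the gap definition and simulated error rates are assumptions.

```python
import numpy as np

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Largest difference in accuracy between any two sub-populations."""
    accs = []
    for g in np.unique(groups):
        mask = groups == g
        accs.append(np.mean(y_true[mask] == y_pred[mask]))
    return max(accs) - min(accs), accs

# Toy evaluation: predictions from a model trained with a poisoning defense.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)
groups = rng.integers(0, 2, size=1000)          # two hypothetical sub-populations

# Simulate a defense that hurts group 1 more than group 0.
y_pred = y_true.copy()
noise_g0 = (groups == 0) & (rng.random(1000) < 0.05)   # 5% errors for group 0
noise_g1 = (groups == 1) & (rng.random(1000) < 0.20)   # 20% errors for group 1
y_pred[noise_g0 | noise_g1] ^= 1

gap, accs = subgroup_accuracy_gap(y_true, y_pred, groups)
print(f"per-group accuracy: {[round(a, 3) for a in accs]}, gap: {gap:.3f}")
```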
AI-based code generators have gained a fundamental role in assisting developers in writing software starting from natural language (NL). However, since these large language models are trained on massive volumes of data collected from unreliable online sources (e.g., GitHub, Hugging Face), AI models become an easy target for data poisoning attacks, in which an attacker corrupts the training data by injecting a small amount of poison into it, i.e., astutely crafted malicious samples. In this position paper, we address the security of AI code generators by identifying a novel data poisoning attack that results in the generation of vulnerable code. Next, we devise an extensive evaluation of how these attacks impact state-of-the-art models for code generation. Lastly, we discuss potential solutions to overcome this threat.
Authored by Cristina Improta
Graph-based Semi-Supervised Learning (GSSL) is a practical solution for learning from a limited amount of labelled data together with a vast amount of unlabelled data. However, due to their reliance on the known labels to infer the unknown labels, these algorithms are sensitive to data quality. It is therefore essential to study the potential threats related to the labelled data, more specifically, label poisoning. In this paper, we propose a novel data poisoning method which efficiently approximates the result of label inference to identify the inputs which, if poisoned, would produce the highest number of incorrectly inferred labels. We extensively evaluate our approach on three classification problems under 24 different experimental settings each. Compared to the state of the art, our influence-driven attack produces an average error-rate increase that is 50% higher, while being faster by multiple orders of magnitude. Moreover, our method can inform engineers of inputs that deserve investigation (relabelling them) before training the learning model. We show that relabelling one-third of the poisoned inputs (selected based on their influence) reduces the poisoning effect by 50%.
Authored by Adriano Franci, Maxime Cordy, Martin Gubri, Mike Papadakis, Yves Le Traon
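A minimal sketch of the influence idea described above, using a brute-force variant: flip each labelled point in turn, re-run label inference, and count the extra incorrectly inferred labels. The paper's contribution is an efficient approximation of this quantity; exhaustive re-training as done here is only feasible on toy data.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.semi_supervised import LabelSpreading

# Small graph-based semi-supervised setup: a few labelled points, many unlabelled.
X, y_true = make_moons(n_samples=300, noise=0.1, random_state=0)
rng = np.random.default_rng(0)
labelled = rng.choice(len(X), size=20, replace=False)
y = np.full(len(X), -1)                 # -1 marks unlabelled points
y[labelled] = y_true[labelled]

def inferred_errors(y_partial):
    """Run graph-based label inference and count wrongly inferred labels."""
    model = LabelSpreading(kernel="knn", n_neighbors=7).fit(X, y_partial)
    return int(np.sum(model.transduction_ != y_true))

baseline = inferred_errors(y)

# Brute-force influence estimate: flip each labelled point in turn and count how
# many extra labels are inferred incorrectly.
influence = {}
for idx in labelled:
    y_flip = y.copy()
    y_flip[idx] = 1 - y_flip[idx]
    influence[idx] = inferred_errors(y_flip) - baseline

best = max(influence, key=influence.get)
print(f"baseline errors: {baseline}; "
      f"flipping point {best} adds {influence[best]} extra inferred errors")
```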
Recent trends in the convergence of edge computing and artificial intelligence (AI) have led to a new paradigm of “edge intelligence”, which is more vulnerable to attacks such as data and model poisoning and evasion. This paper proposes a white-box poisoning attack against online regression models in edge intelligence environments, with the aim of informing future protection methods. First, the new method selects maximum-loss data points from the original stream using two selection strategies; second, it pollutes these points with a gradient-ascent strategy; finally, it injects the polluted points into the original stream sent to the target model to complete the attack process. We extensively evaluate the proposed attack on an open dataset; the results demonstrate the effectiveness of the novel attack method and the real implications of poisoning attacks in a case study of an electric energy prediction application.
Authored by Yanxu Zhu, Hong Wen, Peng Zhang, Wen Han, Fan Sun, Jia Jia
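The attack above proceeds in three steps: select maximum-loss points from the stream, pollute them by gradient ascent, and inject them back. The sketch below reproduces that pipeline for a plain linear regression model; the step size, number of ascent iterations, and synthetic stream are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: a simple online linear regression model (stand-in for the edge model).
w = rng.normal(size=5)
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])

def loss_and_grad_x(x, y, w):
    """Squared-error loss and its gradient with respect to the input x."""
    residual = w @ x - y
    return 0.5 * residual ** 2, residual * w

# A batch of incoming stream points.
X = rng.normal(size=(50, 5))
y = X @ true_w + 0.1 * rng.normal(size=50)

# Step 1: select the points with maximum loss under the current model.
losses = np.array([loss_and_grad_x(x, t, w)[0] for x, t in zip(X, y)])
selected = np.argsort(losses)[::-1][:5]

# Step 2: pollute the selected points by gradient ascent on the loss w.r.t. x,
# pushing them in the direction that hurts the model most.
X_poisoned = X.copy()
for _ in range(10):
    for i in selected:
        _, g = loss_and_grad_x(X_poisoned[i], y[i], w)
        X_poisoned[i] += 0.1 * g

# Step 3: the poisoned points are injected back into the stream sent to the model.
print("mean loss of selected points before:", losses[selected].mean().round(3))
print("mean loss of selected points after: ",
      np.mean([loss_and_grad_x(X_poisoned[i], y[i], w)[0] for i in selected]).round(3))
```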
With the widespread deployment of data-driven services, the demand for data volumes continues to grow. At present, many applications lack reliable human supervision during data collection, so the collected data may contain low-quality or even malicious data. This low-quality or malicious data exposes AI systems to significant security challenges. One of the main security threats in the training phase of machine learning is data poisoning attacks, which compromise model integrity by contaminating training data so that the resulting model is skewed or unusable. This paper reviews the relevant research on data poisoning attacks in various task environments: first, it summarizes the classification of attacks; then it surveys defense methods against data poisoning attacks; and finally, it discusses possible future research directions.
Authored by Jiaxin Fan, Qi Yan, Mohan Li, Guanqun Qu, Yang Xiao
In recent years, the security of AI systems has drawn increasing research attention, especially in the medical imaging realm. To develop a secure medical image analysis (MIA) system, it is essential to study possible backdoor attacks (BAs), which can embed hidden malicious behaviors into the system. However, designing a unified BA method that can be applied to various MIA systems is challenging due to the diversity of imaging modalities (e.g., X-Ray, CT, and MRI) and analysis tasks (e.g., classification, detection, and segmentation). Most existing BA methods are designed to attack natural image classification models; they apply spatial triggers to training images and inevitably corrupt the semantics of poisoned pixels, leading to failures when attacking dense prediction models. To address this issue, we propose a novel Frequency-Injection based Backdoor Attack method (FIBA) that is capable of delivering attacks in various MIA tasks. Specifically, FIBA leverages a trigger function in the frequency domain that can inject the low-frequency information of a trigger image into the poisoned image by linearly combining the spectral amplitude of both images. Since it preserves the semantics of the poisoned image pixels, FIBA can perform attacks on both classification and dense prediction models. Experiments on three benchmarks in MIA (i.e., ISIC-2019 [4] for skin lesion classification, KiTS-19 [17] for kidney tumor segmentation, and EAD-2019 [1] for endoscopic artifact detection) validate the effectiveness of FIBA and its superiority over state-of-the-art methods in attacking MIA models and bypassing backdoor defense. Source code will be made available.
Authored by Yu Feng, Benteng Ma, Jing Zhang, Shanshan Zhao, Yong Xia, Dacheng Tao
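FIBA's trigger function is described above as a linear combination of the low-frequency spectral amplitudes of the trigger and poisoned images, with the clean image's phase preserved. The sketch below implements that idea with NumPy FFTs; the blending weight, low-frequency window size, and toy images are assumed values, not the paper's settings.

```python
import numpy as np

def frequency_inject(clean, trigger, alpha=0.15, beta=0.1):
    """Blend the low-frequency spectral amplitude of `trigger` into `clean`.

    clean, trigger : 2-D float arrays of the same shape (single-channel images)
    alpha          : blending weight for the trigger amplitude (assumed value)
    beta           : fraction of the spectrum around the centre treated as
                     "low frequency" (assumed value)
    The phase of the clean image is kept, so the visible content is preserved.
    """
    fft_c = np.fft.fftshift(np.fft.fft2(clean))
    fft_t = np.fft.fftshift(np.fft.fft2(trigger))

    amp_c, phase_c = np.abs(fft_c), np.angle(fft_c)
    amp_t = np.abs(fft_t)

    # Low-frequency window centred on the (shifted) spectrum.
    h, w = clean.shape
    ch, cw = h // 2, w // 2
    bh, bw = int(beta * h / 2), int(beta * w / 2)
    window = np.zeros_like(amp_c, dtype=bool)
    window[ch - bh:ch + bh + 1, cw - bw:cw + bw + 1] = True

    amp_mixed = amp_c.copy()
    amp_mixed[window] = (1 - alpha) * amp_c[window] + alpha * amp_t[window]

    poisoned = np.fft.ifft2(np.fft.ifftshift(amp_mixed * np.exp(1j * phase_c)))
    return np.real(poisoned)

# Toy 64x64 "images": a random clean image and a structured trigger pattern.
rng = np.random.default_rng(0)
clean = rng.random((64, 64))
xx, yy = np.meshgrid(np.arange(64), np.arange(64))
trigger = np.sin(xx / 4.0) * np.cos(yy / 4.0)

poisoned = frequency_inject(clean, trigger)
print("max pixel change:", float(np.max(np.abs(poisoned - clean))))
```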
Federated learning (FL) has emerged as a promising paradigm for distributed training of machine learning models. In FL, several participants train a global model collaboratively by only sharing model parameter updates while keeping their training data local. However, FL was recently shown to be vulnerable to data poisoning attacks, in which malicious participants send parameter updates derived from poisoned training data. In this paper, we focus on defending against targeted data poisoning attacks, where the attacker’s goal is to make the model misbehave for a small subset of classes while the rest of the model is relatively unaffected. To defend against such attacks, we first propose a method called MAPPS for separating malicious updates from benign ones. Using MAPPS, we propose three methods for attack detection: MAPPS + X-Means, MAPPS + VAT, and their Ensemble. Then, we propose an attack mitigation approach in which a "clean" model (i.e., a model that is not negatively impacted by an attack) can be trained despite the existence of a poisoning attempt. We empirically evaluate all of our methods using popular image classification datasets. Results show that we can achieve >95% true positive rates while incurring only <2% false positive rates. Furthermore, the clean models that are trained using our proposed methods have accuracy comparable to models trained in an attack-free scenario.
Authored by Pinar Erbil, Emre Gursoy
Machine Learning (ML) and Artificial Intelligence (AI) techniques are widely adopted in the telecommunication industry, especially to automate beyond-5G networks. Federated Learning (FL) recently emerged as a distributed ML approach that enables localized model training to keep data decentralized and ensure data privacy. In this paper, we identify the applicability of FL for securing future networks and its limitations due to vulnerability to poisoning attacks. First, we investigate the shortcomings of state-of-the-art security algorithms for FL and perform an attack that circumvents the FoolsGold algorithm, which is known as one of the most promising defense techniques currently available. The attack is launched by adding intelligent noise to the poisoned model updates. Then we propose a more sophisticated defense strategy, a threshold-based clustering mechanism, to complement FoolsGold. Moreover, we provide a comprehensive analysis of the impact of the attack scenario and the performance of the defense mechanism.
Authored by Yushan Siriwardhana, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila
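The defense above is described only as a threshold-based clustering mechanism complementing FoolsGold. The sketch below shows a generic version of that idea, keeping only client updates whose average cosine similarity to the other updates exceeds a threshold; the threshold and toy updates are assumptions, not the paper's algorithm.

```python
import numpy as np

def filter_updates(updates, threshold=0.2):
    """Keep only updates whose mean cosine similarity to the others is high.

    updates   : (n_clients, dim) array of flattened model updates
    threshold : minimum average similarity for an update to be kept (assumed)
    """
    norms = np.linalg.norm(updates, axis=1, keepdims=True) + 1e-12
    unit = updates / norms
    sim = unit @ unit.T                     # pairwise cosine similarities
    np.fill_diagonal(sim, 0.0)
    mean_sim = sim.sum(axis=1) / (len(updates) - 1)
    return mean_sim >= threshold

rng = np.random.default_rng(0)
honest = rng.normal(size=(8, 20)) + np.linspace(1, 2, 20)      # similar directions
poisoned = -5.0 * np.linspace(1, 2, 20) + rng.normal(size=(2, 20))
updates = np.vstack([honest, poisoned])

keep = filter_updates(updates)
print("kept clients:", np.flatnonzero(keep).tolist())
aggregated = updates[keep].mean(axis=0)     # aggregate only the retained updates
```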
Existing defense strategies against adversarial attacks (AAs) on AI/ML are primarily focused on examining the input data streams using a wide variety of filtering techniques. For instance, input filters are used to remove noisy, misleading, and out-of-class inputs along with a variety of attacks on learning systems. However, a single filter may not be able to detect all types of AAs. To address this issue, in the current work we propose a robust, transferable, distribution-independent, and cross-domain supported framework for selecting Adaptive Filter Ensembles (AFEs) to minimize the impact of data poisoning on learning systems. The optimal filter ensembles are determined through a Multi-Objective Bi-Level Programming Problem (MOBLPP) that provides a subset of diverse filter sequences, each exhibiting fair detection accuracy. The proposed AFE framework is trained to model the pristine data distribution to identify corrupted inputs, and it converges to the optimal AFE without vanishing gradients or mode collapse irrespective of input data distributions. We present preliminary experiments showing that the proposed defense outperforms existing defenses in terms of robustness and accuracy.
Authored by Arunava Roy, Dipankar Dasgupta
Trojan attacks threaten deep neural networks (DNNs) by poisoning them to behave normally on most samples, yet to produce manipulated results for inputs attached with a particular trigger. Several works attempt to detect whether a given DNN has been injected with a specific trigger during the training. In a parallel line of research, the lottery ticket hypothesis reveals the existence of sparse sub-networks which are capable of reaching competitive performance as the dense network after independent training. Connecting these two dots, we investigate the problem of Trojan DNN detection from the brand new lens of sparsity, even when no clean training data is available. Our crucial observation is that the Trojan features are significantly more stable to network pruning than benign features. Leveraging that, we propose a novel Trojan network detection regime: first locating a “winning Trojan lottery ticket” which preserves nearly full Trojan information yet only chance-level performance on clean inputs; then recovering the trigger embedded in this already isolated sub-network. Extensive experiments on various datasets, i.e., CIFAR-10, CIFAR-100, and ImageNet, with different network architectures, i.e., VGG-16, ResNet-18, ResNet-20s, and DenseNet-100 demonstrate the effectiveness of our proposal. Codes are available at https://github.com/VITA-Group/Backdoor-LTH.
Authored by Tianlong Chen, Zhenyu Zhang, Yihua Zhang, Shiyu Chang, Sijia Liu, Zhangyang Wang