Poisoning Attack based on Data Feature Selection in Federated Learning

Author

Abstract
Federated learning is a widely used distributed AI technique for protecting user privacy and data security: machine learning models are trained on decentralized datasets by sharing model gradients rather than raw user data. However, while this approach keeps data from being shared, it also enlarges the attack surface of the server. Federated learning models are vulnerable to poisoning attacks: an attacker can threaten the global model by directly submitting poisoned gradients. In this paper, we propose a federated learning poisoning attack method based on feature selection. Unlike traditional poisoning attacks, it modifies only the important features of the data and ignores the others, which preserves the effectiveness of the attack while remaining highly stealthy and able to bypass common defense methods. Our experiments demonstrate the feasibility of the method.
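To make the abstract's idea concrete, below is a minimal sketch of a feature-selection-based poisoning step on a local client's data. The importance measure (mutual information), the function name `poison_top_features`, and the noise-based perturbation are illustrative assumptions, not the paper's exact method:

```python
# Sketch: perturb only the k most informative features of the local
# training data, leaving all other features untouched so the poisoned
# samples stay close to the clean distribution (the stealth property
# the abstract describes). Assumptions: mutual information as the
# importance score and Gaussian noise as the perturbation.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def poison_top_features(X, y, k=5, noise_scale=3.0, seed=0):
    """Return a copy of X with only its k most important features perturbed."""
    rng = np.random.default_rng(seed)
    # Rank features by an importance score (mutual information here).
    importance = mutual_info_classif(X, y, random_state=seed)
    top_k = np.argsort(importance)[-k:]
    X_poisoned = X.copy()
    # Shift only the selected features, scaled to each feature's spread;
    # the remaining features are ignored, as in the proposed attack.
    X_poisoned[:, top_k] += (noise_scale * X[:, top_k].std(axis=0)
                             * rng.standard_normal((X.shape[0], k)))
    return X_poisoned, top_k

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((200, 20))
    y = (X[:, 3] + X[:, 7] > 0).astype(int)  # labels depend on features 3 and 7
    X_poisoned, chosen = poison_top_features(X, y, k=2)
    print("perturbed features:", chosen)
```

A malicious client would then train on `X_poisoned` and submit the resulting gradients to the server; because most features are unchanged, the poisoned update is harder for distance- or similarity-based defenses to flag.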
Year of Publication
2023
Date Published
January
URL
https://ieeexplore.ieee.org/document/10048854
DOI
10.1109/Confluence56041.2023.10048854