Nonparametric Keyed Hypothesis Tests: A Machine Learning Defense Against Poisoning Attacks
Author
Abstract

As the use of machine learning continues to grow in prominence, so does the need for greater awareness of the threats these systems face. Poisoning attacks, in which an attacker tampers with ("poisons") a portion of the dataset used to train a classifier in order to fool it at test time, are among the dangers that have already been publicly demonstrated. This article presents a novel poisoning-resistance strategy. The approach uses a recently developed primitive, the keyed nonparametric hypothesis test, to determine whether or not the training input is consistent with a previously learned distribution D, even in an adversarial setting. Our scheme relies on a secret key that is unknown to the adversary. Because the keys are kept hidden, an adversary cannot fool a keyed nonparametric hypothesis test into concluding that a (substantially) modified dataset actually originates from the distribution D.

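The listing does not include an implementation, but the abstract's idea of a keyed nonparametric consistency check can be illustrated with a rough sketch: apply a key-seeded secret transformation to both the reference sample and the incoming training data, then run a standard nonparametric two-sample test on the transformed views. The function name keyed_ks_test, the random-projection construction, and all parameters below are illustrative assumptions, not the paper's actual construction.

```python
import numpy as np
from scipy import stats


def keyed_ks_test(reference, candidate, key, alpha=0.01):
    """Two-sample Kolmogorov-Smirnov test on a secret keyed projection.

    Illustrative sketch only (not the paper's construction). `reference` and
    `candidate` are (n_samples, n_features) arrays drawn from D and from the
    incoming training data; `key` seeds a projection direction the adversary
    cannot predict. Returns (consistent, p_value).
    """
    rng = np.random.default_rng(key)                  # key-derived randomness
    direction = rng.normal(size=reference.shape[1])   # secret projection axis
    direction /= np.linalg.norm(direction)

    # Project both samples onto the secret direction, reducing them to 1-D.
    ref_1d = reference @ direction
    cand_1d = candidate @ direction

    # Nonparametric two-sample test: no assumption on the form of D.
    statistic, p_value = stats.ks_2samp(ref_1d, cand_1d)
    return p_value >= alpha, p_value


if __name__ == "__main__":
    # Example: a clean batch passes, a shifted (poisoned) batch is rejected.
    rng = np.random.default_rng(0)
    clean = rng.normal(0.0, 1.0, size=(2000, 10))
    incoming_clean = rng.normal(0.0, 1.0, size=(500, 10))
    incoming_poisoned = incoming_clean + 0.5          # adversarial shift

    secret_key = 123456789                            # kept hidden from the adversary
    print(keyed_ks_test(clean, incoming_clean, secret_key))
    print(keyed_ks_test(clean, incoming_poisoned, secret_key))
```

Because the projection direction depends on the secret key, an adversary who does not know the key cannot tailor the poisoned data to pass the test on the keyed view, which is the intuition behind keeping the key hidden in the abstract above.
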
Year of Publication
2023
Date Published
January
URL
https://ieeexplore.ieee.org/document/10085424
DOI
10.1109/AISC56616.2023.10085424