"Seeking a Way of Preventing Audio Models for AI Machine Learning From Being Fooled"

Researchers at the UPV/EHU-University of the Basque Country have shown that the distortion metrics used to assess whether an audio perturbation designed to fool Artificial Intelligence (AI) models can be detected by humans are not a reliable measure of human perception. Such perturbations can be used by malicious actors to cause AI models to produce inaccurate predictions, and distortion metrics are used to assess the effectiveness of the methods that generate these attacks. AI is increasingly based on Machine Learning (ML) models trained on large datasets, and human-computer interaction is now more reliant on speech communication, primarily because of the strong performance of ML models in speech recognition tasks. However, malicious actors can fool these models using adversarial examples: inputs intentionally perturbed to cause incorrect predictions without humans noticing the changes. Much research has been devoted to developing new techniques for generating adversarial perturbations, but far less attention has been paid to how humans perceive them. This question must be explored, as adversarial perturbation methods only pose a threat if humans cannot detect them. The researchers investigated the extent to which the distortion metrics proposed in the literature for audio adversarial examples reliably measure the human perception of perturbations by asking 36 people to evaluate adversarial examples, or audio perturbations, according to different factors. They also proposed a stronger evaluation method based on an analysis of the properties and factors relevant to assessing detectability. This article continues to discuss the performance and results of the study on the human evaluation of universal audio adversarial perturbations.
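For context, below is a minimal sketch of the kind of distortion metrics commonly reported for audio adversarial examples, such as the L-infinity norm of the perturbation and its signal-to-noise ratio (SNR) in decibels. The function names and example signals are illustrative assumptions, not the specific metrics or evaluation protocol used in the study.

```python
import numpy as np

def linf_norm(perturbation: np.ndarray) -> float:
    """Maximum absolute sample value of the perturbation (L-infinity norm)."""
    return float(np.max(np.abs(perturbation)))

def snr_db(clean: np.ndarray, perturbation: np.ndarray) -> float:
    """Signal-to-noise ratio of the clean audio relative to the perturbation, in dB."""
    signal_power = np.mean(clean ** 2)
    noise_power = np.mean(perturbation ** 2)
    return float(10.0 * np.log10(signal_power / noise_power))

# Illustrative example: a 1-second clean tone plus a small random "perturbation"
# standing in for an adversarial perturbation (hypothetical data).
sample_rate = 16000
t = np.arange(sample_rate) / sample_rate
clean_audio = 0.5 * np.sin(2 * np.pi * 440.0 * t)       # hypothetical clean waveform
perturbation = 0.005 * np.random.randn(sample_rate)     # hypothetical perturbation
adversarial_audio = clean_audio + perturbation

print(f"L-inf norm of perturbation: {linf_norm(perturbation):.4f}")
print(f"SNR (dB): {snr_db(clean_audio, perturbation):.1f}")
```

The study's central point is that a low distortion score under metrics like these does not guarantee that listeners fail to notice the perturbation, which is why the researchers complemented such metrics with human evaluation.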

ScienceDaily reports "Seeking a Way of Preventing Audio Models for AI Machine Learning From Being Fooled"
