"Detecting Backdoor Attacks on Artificial Neural Networks"

A team of researchers at Duke Engineering's Center for Evolutionary Intelligence has made an advance in detecting backdoor attacks against machine learning models. A backdoor attack poisons the data used to train a machine learning model so that the model produces incorrect outputs or predictions whenever a chosen trigger appears in the input. For example, an attacker can teach a model to label anyone wearing a black-and-white cap as "Frank Smith". According to the researchers, these backdoors are hard to detect because attackers can design triggers of almost any shape and size; a trigger can be a hat, a flower, or another harmless-looking object. The team's software identifies backdoor triggers by determining which class the trigger was injected into, where in the input the trigger was placed, and what form the trigger takes. This article continues to discuss the concept of backdoor attacks on artificial neural networks, the significant threat posed by such attacks, and the software developed by the Duke team to identify backdoor triggers.
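To make the poisoning step concrete, below is a minimal, hypothetical sketch (not the Duke team's code or any actual attack code) of how a training set could be backdoored: a small black-and-white patch is stamped onto a fraction of the images and their labels are flipped to the attacker's target class. The function name, patch shape, and parameters are illustrative assumptions.

```python
# Illustrative sketch of data-poisoning for a backdoor attack.
# All names and parameters here are assumptions for demonstration only.
import numpy as np

def poison_dataset(images, labels, target_class, poison_fraction=0.05, seed=0):
    """Return copies of (images, labels) with a trigger patch injected.

    images: float array of shape (N, H, W, C), values in [0, 1]
    labels: int array of shape (N,)
    """
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()

    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)

    # Trigger: a 4x4 black-and-white checkerboard in the bottom-right corner,
    # standing in for the "black-and-white cap" in the article's example.
    trigger = (np.indices((4, 4)).sum(axis=0) % 2).astype(images.dtype)
    trigger = trigger[..., None]  # broadcast the patch across color channels

    images[idx, -4:, -4:, :] = trigger   # stamp the trigger onto selected images
    labels[idx] = target_class           # relabel them to the attacker's target
    return images, labels
```

A detector like the one described in the article would then attempt to recover, from the trained model alone, which class such a trigger targets, where it was placed, and what it looks like.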

Duke reports "Detecting Backdoor Attacks on Artificial Neural Networks"
