"Skoltech Team Shows How Turing-Like Patterns Fool Neural Networks"

Researchers at the Skolkovo Institute of Science and Technology (Skoltech) have demonstrated how Turing-like patterns can cause neural networks to make errors when recognizing images. Turing patterns are self-organizing patterns found in nature, such as the stripes and spots on animal coats. The results of this research can be used to develop defenses for pattern recognition systems against attacks. Although deep neural networks are highly capable at recognizing and classifying images, they remain vulnerable to adversarial perturbations, small and often imperceptible changes to an image that lead to incorrect network output. Studies have drawn attention to the safety threat that adversarial perturbations pose. For example, another team of researchers described how self-driving vehicles could be tricked into seeing innocuous advertisements and logos as traffic signs. The problem is compounded by the fact that most known defenses against such attacks can be easily evaded by malicious actors. The nature and roots of adversarial perturbations remain poorly understood, and this lack of theory is one reason adversarial attacks are difficult to combat. This work is a step toward explaining the properties of universal adversarial perturbations (UAPs), single perturbations that fool a network on most inputs rather than on one specific image, through Turing patterns, which will help build a theory of adversarial examples in the future. The study also showed how new attacks can be generated against neural networks. This article continues to discuss the purpose and findings of this study on the use of Turing-like patterns to fool neural networks.
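To make the idea concrete, the sketch below is a minimal illustration, not the authors' actual method: it uses NumPy to grow a Turing-like texture with the Gray-Scott reaction-diffusion model and then applies that single texture, scaled to a small budget, to an image in the image-agnostic way a UAP would be applied. The reaction-diffusion parameters, the perturbation budget, and the placeholder image are assumptions chosen for demonstration only.

    import numpy as np

    def turing_pattern(size=224, steps=5000, Du=0.16, Dv=0.08, F=0.060, k=0.062, seed=0):
        # Gray-Scott reaction-diffusion: a standard way to grow Turing-like
        # spot/stripe textures (these parameter values are illustrative assumptions).
        rng = np.random.default_rng(seed)
        u = np.ones((size, size))
        v = np.zeros((size, size))
        for _ in range(20):                       # seed a few patches so structure can emerge
            r, c = rng.integers(0, size - 10, size=2)
            u[r:r + 10, c:c + 10] = 0.50
            v[r:r + 10, c:c + 10] = 0.25

        def lap(z):                               # 5-point Laplacian with periodic boundaries
            return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                    np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

        for _ in range(steps):                    # explicit Euler integration
            uvv = u * v * v
            u += Du * lap(u) - uvv + F * (1.0 - u)
            v += Dv * lap(v) + uvv - (F + k) * v

        v -= v.min()                              # normalise the texture to [0, 1]
        return v / (v.max() + 1e-12)

    # Use the texture as one image-agnostic perturbation, bounded by a small budget.
    eps = 10.0 / 255.0                            # L-infinity budget (assumed)
    delta = eps * (2.0 * turing_pattern() - 1.0)  # zero-centred, |delta| <= eps

    rng = np.random.default_rng(1)
    image = rng.random((224, 224, 3))             # placeholder stand-in for a real photo
    perturbed = np.clip(image + delta[..., None], 0.0, 1.0)
    # A real evaluation would feed batches of such perturbed images to a pretrained
    # classifier and measure how often its predictions change (the fooling rate).

Because the same perturbation is added to every input, a high fooling rate indicates a universal attack of the kind the Skoltech work relates to Turing patterns.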

Skoltech reports "Skoltech Team Shows How Turing-Like Patterns Fool Neural Networks"
