"Computer Scientists Design Way to Close 'Backdoors' in AI-Based Security Systems"

Security researchers at the University of Chicago are developing methods to defend against backdoor attacks on artificial neural network security systems. One technique, to be presented at the 2019 IEEE Symposium on Security and Privacy in San Francisco, scans machine learning (ML) systems for signs of a hidden backdoor, which behaves much like a sleeper cell: it lies dormant in the targeted model until a specific trigger input activates it. The technique also allows the system's owner to trap would-be infiltrators. This article continues to discuss how the black-box nature of AI makes it possible to hide backdoors in AI-based security systems, as well as the research behind the defense method designed to close backdoors in neural networks.
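The article does not detail how the scan works, but one general approach from the backdoor-detection literature is to reverse-engineer, for each output class, the smallest input perturbation (a "trigger") that reliably hijacks the model's predictions, then flag classes whose trigger is anomalously small, since a planted backdoor typically responds to a far smaller trigger than any natural misclassification would require. The sketch below illustrates that idea only; it is not the researchers' published implementation, and all names in it (TinyClassifier, reverse_engineer_trigger, flag_outliers) are hypothetical.

```python
# Hypothetical sketch: scanning a classifier for backdoor-like triggers by
# reverse-engineering a minimal mask+pattern per class, then flagging
# classes whose trigger is an outlier (unusually small).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyClassifier(nn.Module):
    """Stand-in model; in practice you would scan a trained production model."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(),
            nn.Linear(64, num_classes),
        )

    def forward(self, x):
        return self.net(x)


def reverse_engineer_trigger(model, images, target, steps=100, lam=0.01):
    """Optimize a small mask+pattern that flips `images` to class `target`.

    Returns the L1 norm of the learned mask; an anomalously small norm
    suggests a planted shortcut (a possible backdoor) into that class.
    """
    mask = torch.zeros(1, 1, 28, 28, requires_grad=True)
    pattern = torch.zeros(1, 1, 28, 28, requires_grad=True)
    opt = torch.optim.Adam([mask, pattern], lr=0.1)
    labels = torch.full((images.size(0),), target, dtype=torch.long)
    for _ in range(steps):
        m = torch.sigmoid(mask)  # keep mask values in [0, 1]
        stamped = (1 - m) * images + m * torch.sigmoid(pattern)
        # Classification loss drives misclassification toward `target`;
        # the L1 penalty keeps the trigger as small as possible.
        loss = F.cross_entropy(model(stamped), labels) + lam * m.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask).abs().sum().item()


def flag_outliers(norms, threshold=3.5):
    """Median-absolute-deviation test: classes whose trigger is far
    smaller than the rest are flagged as suspicious."""
    t = torch.tensor(norms)
    med = t.median()
    mad = (t - med).abs().median().clamp_min(1e-9)
    scores = 0.6745 * (t - med) / mad  # standard MAD z-score
    return [i for i, s in enumerate(scores) if s < -threshold]


if __name__ == "__main__":
    model = TinyClassifier()
    clean = torch.rand(32, 1, 28, 28)  # stand-in for a batch of clean samples
    norms = [reverse_engineer_trigger(model, clean, c) for c in range(10)]
    print("suspicious classes:", flag_outliers(norms))
```

On a genuinely backdoored model, the infected class's trigger norm collapses far below the others, which is what the outlier test detects; on the random stand-in model above, no class should be flagged.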

TechXplore reports "Computer Scientists Design Way to Close 'Backdoors' in AI-Based Security Systems"
