"Enhancing AI Robustness for More Secure and Reliable Systems"

By reevaluating how most Artificial Intelligence (AI) systems defend against attacks, researchers at EPFL's School of Engineering developed a new training approach to ensure that Machine Learning (ML) models, particularly deep neural networks, consistently perform as intended. The new formulation replaces a long-standing training approach based on a zero-sum game, using a continuously adaptive attack strategy to create a more intelligent training scenario. The results can be applied to many activities that rely on AI for classification, including protecting video streaming content, autonomous vehicles, and surveillance. The Laboratory for Information and Inference Systems (LIONS) at EPFL's School of Engineering collaborated closely with researchers from the University of Pennsylvania (UPenn) on this research. This article continues to discuss the researchers' discovery of a fundamental flaw in how ML systems are trained and their development of a new formulation for strengthening these systems against adversarial attacks.
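The article gives no implementation details, but the long-standing zero-sum approach it refers to is conventional adversarial training: an inner "attacker" maximizes the classifier's loss within a perturbation budget while an outer "defender" minimizes it. A minimal illustrative sketch of that conventional formulation (not the EPFL/UPenn method) on a toy logistic-regression classifier, with all data, names, and parameters assumed here for illustration:

```python
import numpy as np

# Illustrative sketch of conventional zero-sum adversarial training
# (the approach the new research replaces), on a toy 2-D classifier.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy linearly separable data: class 0 near (-2,-2), class 1 near (2,2).
X = rng.normal(size=(200, 2)) + np.array([[2.0, 2.0]])
X[:100] -= 4.0
y = np.concatenate([np.zeros(100), np.ones(100)])

w = np.zeros(2)
b = 0.0
lr, eps = 0.1, 0.3  # eps bounds the attacker's L-infinity perturbation

for _ in range(200):
    # Inner step (attacker): FGSM-style move that maximizes the
    # logistic loss inside the eps-ball around each input.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]  # dLoss/dX
    X_adv = X + eps * np.sign(grad_x)
    # Outer step (defender): gradient descent on the perturbed inputs.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * (X_adv.T @ (p_adv - y)) / len(y)
    b -= lr * np.mean(p_adv - y)

# Clean accuracy of the adversarially trained classifier.
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
```

The two alternating steps form the zero-sum (min-max) game: whatever loss the attacker gains, the defender loses. The researchers' contribution, per the article, is to replace this formulation with a continuously adaptive attack strategy rather than a fixed worst-case objective.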

EPFL reports "Enhancing AI Robustness for More Secure and Reliable Systems"

Submitted by grigby1