"Aiding Evaluation of Adversarial AI Defenses"

Existing Machine Learning (ML) models have inherent vulnerabilities that leave the technology open to spoofing, corruption, and other forms of deception. Attacks against Artificial Intelligence (AI) algorithms could alter content recommendation engines, disrupt self-driving vehicles, and more. These vulnerabilities raise concerns as ML models become increasingly integrated into critical infrastructure and systems.

To get ahead of this safety challenge, the Defense Advanced Research Projects Agency (DARPA) launched a program called Guaranteeing AI Robustness against Deception (GARD), which aims to develop a new generation of defenses against adversarial attacks on ML models. One of GARD's objectives is to develop a testbed for characterizing ML defenses and evaluating the scope of their applicability. Because the field of adversarial AI is relatively new, only a few methods exist for testing and evaluating potential defenses, and those methods have been found to lack rigor and sophistication. Ensuring that emerging defenses keep pace with, or outperform, the capabilities of known attacks is critical to establishing trust in those defenses and guaranteeing their eventual use.

GARD researchers have developed several resources and virtual tools to strengthen the community's efforts to evaluate and verify the efficacy of existing and emerging ML models and defenses against adversarial attacks. Researchers from Two Six Technologies, IBM, MITRE, Google Research, and the University of Chicago worked together to create a virtual testbed, a toolbox, and a benchmarking dataset, as well as training materials to support this effort. They made these assets available through a public repository for the broader research community to use. The virtual testbed, called Armory, enables repeatable, scalable, and robust evaluations of adversarial defenses: researchers can test their defenses against known attacks in relevant scenarios, and can modify those scenarios to check that a defense holds up across a variety of attacks. This article continues to discuss the goals and developments of the GARD program.
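To make the kind of evaluation described above concrete, the following is a minimal sketch of measuring a model's accuracy on clean inputs versus inputs perturbed by a known attack (here, the fast gradient sign method). It uses plain PyTorch with hypothetical "model" and "data_loader" placeholders standing in for a defended classifier and its test set; it is an illustration of the attack-versus-defense measurement that a testbed like Armory automates, not Armory's own API.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, eps=0.03):
    # Generate adversarial examples with the fast gradient sign method (FGSM):
    # perturb each input in the direction that increases the classification loss.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()

def evaluate(model, data_loader, eps=0.03):
    # Compare accuracy on clean inputs versus FGSM-perturbed inputs.
    model.eval()
    clean_correct = adv_correct = total = 0
    for x, y in data_loader:
        with torch.no_grad():
            clean_correct += (model(x).argmax(dim=1) == y).sum().item()
        x_adv = fgsm_attack(model, x, y, eps)
        with torch.no_grad():
            adv_correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.size(0)
    return clean_correct / total, adv_correct / total

# Usage (model and data_loader are placeholders):
# clean_acc, adv_acc = evaluate(model, data_loader, eps=0.03)
# print(f"clean accuracy: {clean_acc:.3f}, adversarial accuracy: {adv_acc:.3f}")

A large gap between the two accuracy numbers indicates that the defense does not hold up against this particular attack; repeating the measurement across many attacks and scenarios is the purpose of a shared, repeatable testbed.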

Homeland Security News Wire reports "Aiding Evaluation of Adversarial AI Defenses"
