"NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems"

In a new publication titled "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations" (NIST.AI.100-2), computer scientists at the National Institute of Standards and Technology (NIST) and their collaborators identify vulnerabilities of Machine Learning (ML) and Artificial Intelligence (AI) systems. The publication aims to help AI developers and users understand potential attacks and mitigation strategies, and it is part of NIST's broader effort to support the development of trustworthy AI. This article discusses the new publication, which highlights adversarial ML threats and describes mitigation strategies along with their limitations.

NIST reports "NIST Identifies Types of Cyberattacks That Manipulate Behavior of AI Systems"

Submitted by grigby1