"How AI Can Be Hacked With Prompt Injection: NIST Report"
"How AI Can Be Hacked With Prompt Injection: NIST Report"
In "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," the National Institute of Standards and Technology (NIST) defines different Adversarial Machine Learning (AML) tactics and cyberattacks, as well as provides guidance on how to mitigate and manage them. AML tactics gather information about how Machine Learning (ML) systems work in order to determine how they can be manipulated.