"AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought"

Researchers at North Carolina State University have discovered that artificial intelligence (AI) tools are more vulnerable than previously thought to attacks designed to force AI systems to make bad decisions. The issue involves what are known as "adversarial attacks," in which someone manipulates the data fed into an AI system in order to confuse it. For example, a hacker could install code on an X-ray machine that alters image data, causing an AI system to make incorrect diagnoses. The team developed a software tool called QuadAttacK that can test any deep neural network for vulnerability to adversarial attacks. This article continues to discuss the study titled "QuadAttacK: A Quadratic Programming Approach to Learning Ordered Top-K Adversarial Attacks."
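The idea behind an adversarial attack can be illustrated without the QuadAttacK tool itself (whose internals are not described here). The sketch below is a generic, hypothetical example in the style of a fast-gradient-sign perturbation: a tiny, targeted change to the input of a toy linear classifier flips its decision, even though the input barely changes. The classifier, weights, and perturbation size are all illustrative assumptions, not part of the study.

```python
# Hedged illustration of an adversarial perturbation on a toy linear
# classifier; NOT the QuadAttacK algorithm, just the general principle.
import numpy as np

def adversarial_perturbation(x, w, b, eps=0.5):
    """FGSM-style step: nudge x in the direction that lowers the
    classifier's current decision score, trying to flip its sign."""
    score = w @ x + b          # classifier decision value for input x
    grad_sign = np.sign(w)     # gradient of the linear score w.r.t. x is w
    # Step against the current decision: subtract if positive, add if negative.
    return x - np.sign(score) * eps * grad_sign

# Toy classifier and a "clean" input (all values are made up).
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])       # clean input: score = 1.0, classified positive

x_adv = adversarial_perturbation(x, w, b, eps=0.8)
print(w @ x + b > 0, w @ x_adv + b > 0)   # decision before vs. after
```

With these numbers, a perturbation of at most 0.8 per feature flips the classification from positive to negative, which is the essence of the attack: small, deliberate input changes produce large, wrong outputs.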

North Carolina State University reports "AI Networks Are More Vulnerable to Malicious Attacks Than Previously Thought"

Submitted by grigby1