"Protecting Smart Machines From Smart Attacks"

A team of researchers at Princeton University conducted studies on how adversaries can attack machine learning models. As machine learning is applied more widely, it is important to examine the ways in which attackers can exploit this technology so that countermeasures can be developed. The researchers demonstrated several classes of adversarial machine learning attacks, including data poisoning attacks, evasion attacks, and privacy attacks. Data poisoning attacks occur when an adversary inserts corrupted data into an AI system's training set. Evasion attacks manipulate an input so that it appears normal to a human but is misclassified by the machine learning model. Privacy attacks attempt to expose sensitive information contained in the data the model was trained on. This article continues to discuss the importance of exploring the vulnerabilities of machine learning technologies and the adversarial machine learning attacks demonstrated by the researchers.
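To make the poisoning idea concrete, here is a minimal, hypothetical sketch (not the Princeton team's actual method): an attacker who can inject or alter training examples flips the labels on a fraction of the data before a classifier is trained, degrading its accuracy. The synthetic dataset, scikit-learn model, and 20% poisoning rate are all illustrative assumptions.

```python
# Hypothetical label-flipping data poisoning sketch (illustrative only).
# Assumes numpy and scikit-learn; dataset and model choices are arbitrary.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# A synthetic binary classification task stands in for real training data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The adversary controls 20% of the training set and flips those labels.
poison_rate = 0.2
n_poison = int(poison_rate * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = 1 - y_poisoned[idx]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# The poisoned model's test accuracy typically drops relative to the clean one.
print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```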
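Evasion attacks are often illustrated with the fast gradient sign method (FGSM), a standard technique from the adversarial machine learning literature and not necessarily the one used in the Princeton work. The sketch below applies it to a logistic regression model, where the gradient of the loss with respect to the input is available in closed form; the model and perturbation budget `eps` are assumptions for illustration.

```python
# Hypothetical FGSM-style evasion sketch against logistic regression
# (a standard technique from the literature, shown for illustration).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
w, b = model.coef_[0], model.intercept_[0]

def fgsm(x, label, eps=0.5):
    """Perturb x by eps in the direction that increases the model's loss."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y = 1)
    grad = (p - label) * w                  # d(cross-entropy)/dx
    return x + eps * np.sign(grad)

# Perturb every point; accuracy typically drops as eps grows.
X_adv = np.array([fgsm(x, label) for x, label in zip(X, y)])
print("accuracy on clean inputs:    ", model.score(X, y))
print("accuracy on perturbed inputs:", model.score(X_adv, y))
```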
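Privacy attacks are commonly demonstrated via membership inference, in which an attacker uses a model's confidence on a record to guess whether that record was part of the training set. The sketch below uses the simple confidence-threshold variant of the attack; it is a generic illustration under assumed data, model, and threshold choices, not the specific attack from the study.

```python
# Hypothetical confidence-threshold membership inference sketch
# (a generic privacy attack; all concrete choices here are assumptions).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=1)

# An overfit model leaks membership: it is more confident on its training data.
model = RandomForestClassifier(n_estimators=100, random_state=1).fit(X_in, y_in)

def max_confidence(samples):
    """Return the model's top-class probability for each sample."""
    return model.predict_proba(samples).max(axis=1)

# Guess "member" whenever the top-class confidence exceeds a threshold.
threshold = 0.9
guess_in = max_confidence(X_in) > threshold    # should mostly be True
guess_out = max_confidence(X_out) > threshold  # should be True less often

# Balanced accuracy of the membership guess; above 0.5 indicates leakage.
accuracy = (guess_in.mean() + (1 - guess_out.mean())) / 2
print("membership inference accuracy:", accuracy)
```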

Princeton University reports "Protecting Smart Machines From Smart Attacks"
