"It’s Disturbingly Easy to Trick AI into Doing Something Deadly"
Recent studies by artificial intelligence (AI) researchers highlight the serious safety impacts of adversarial machine learning (ML). Researchers have carried out adversarial attacks on ML systems to demonstrate how easily the behavior of such systems can be manipulated and to illustrate the potential consequences if hackers performed similar manipulations. This article continues to discuss adversarial attacks on ML, how such attacks can affect the different fields that rely on AI, a program recently launched by the Defense Advanced Research Projects Agency (DARPA), called Guaranteeing AI Robustness against Deception (GARD), that aims to defend against these attacks, and other efforts to improve the security of ML systems.
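To illustrate the kind of manipulation described above, the sketch below implements the fast gradient sign method (FGSM), one well-known adversarial-attack technique: it nudges an input a small step in the direction that most increases the model's loss. The tiny logistic-regression "model," its weights, and the example input are all hypothetical, chosen only to show how a small perturbation can flip a confident prediction; real attacks target far larger models.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Model's probability that input x belongs to class 1."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, eps):
    """Fast gradient sign method: move x by eps in the direction
    (per coordinate) that increases the cross-entropy loss."""
    p = predict(w, b, x)
    # For logistic regression, d(loss)/dx = (p - y_true) * w
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Hypothetical trained weights and a "clean" input the model
# classifies correctly and confidently.
w = np.array([2.0, -1.5, 0.5])
b = 0.1
x = np.array([0.4, -0.2, 0.3])
y = 1.0

x_adv = fgsm_perturb(w, b, x, y, eps=0.5)
print(predict(w, b, x))      # confident, correct prediction (> 0.5)
print(predict(w, b, x_adv))  # small perturbation flips it (< 0.5)
```

In this toy setting a perturbation of magnitude 0.5 per coordinate is enough to push the prediction across the decision boundary; against image classifiers, attackers use perturbations small enough to be invisible to humans.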
Vox reports "It’s Disturbingly Easy to Trick AI into Doing Something Deadly"