"Expect an Increase in Attacks on AI Systems"

There has been an increase in research on methods of attacking Machine Learning (ML) and Artificial Intelligence (AI) systems, with nearly 2,000 papers published on the topic in one repository over the last ten years. However, organizations have not yet adopted commensurate approaches to ensuring the trustworthiness of results produced by AI systems. A new report released by the AI firm Adversa explored various measures of AI adoption, examining the number and types of research papers that discuss AI as well as government initiatives aimed at providing policy frameworks for AI technology. The study found that organizations are increasingly adopting AI but are deploying the technology without implementing defenses that protect AI systems from adversarial attacks, such as those that bypass the systems, manipulate results, or exfiltrate data. While there are many potential attacks on AI systems, attacks against image-recognition algorithms and other vision-related ML models have been the focus of most studies, with 65 percent of adversarial ML papers being heavily vision-focused. The remaining papers analyzed by Adversa focused on analytical attacks, language attacks, and the autonomy of algorithms. Eugene Neelou, co-founder and chief technology officer of Adversa, says that attacks on AI systems are not prevalent at the moment, but they have occurred and are expected to grow significantly in frequency. The report stated that image data is the most popular target because it is easier to attack and to use for producing visible evidence demonstrating the vulnerability of AI systems. This article continues to discuss Adversa's key findings regarding the adoption of AI, adversarial AI attacks, and other threats facing AI systems, as well as what organizations should do to improve AI security.
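To make the vision-focused findings concrete, the sketch below (not taken from the Adversa report) illustrates the widely studied Fast Gradient Sign Method, an evasion attack that nudges an image's pixels just enough to flip a classifier's prediction. It is a minimal example written in Python with PyTorch; the classifier, input image tensor, and true label are assumed to be supplied by the caller, and the epsilon perturbation budget is an illustrative value.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Fast Gradient Sign Method: craft a small, hard-to-see perturbation
    that pushes a classifier toward a wrong prediction (an evasion attack)."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)   # loss w.r.t. the true label
    loss.backward()                               # gradient of the loss w.r.t. the pixels
    # Step each pixel in the direction that increases the loss,
    # bounded by the perturbation budget epsilon.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()   # keep pixel values in a valid range
```

A single call such as fgsm_attack(model, image, label) is enough to show how cheaply an attacker can degrade an unprotected vision model, which is why image data is singled out in the report as the easiest target for producing visible evidence of AI vulnerability.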

Dark Reading reports "Expect an Increase in Attacks on AI Systems"
