"Microsoft Says It's Time to Attack Your Machine-Learning Models"
Hyrum Anderson, principal architect of the Azure Trustworthy Machine Learning (ML) group at Microsoft, gave a presentation at the recent USENIX Enigma conference in which he called on mature companies to conduct red team attacks against their ML systems to find vulnerabilities and strengthen their defenses. To better understand the impact of such attacks, Microsoft's internal red team recreated an automated ML system that assigns hardware resources in response to cloud requests. Testing an offline version of the system, the team found adversarial examples that could trigger a Denial-of-Service (DoS). Data-science teams should defensively protect their data and models, and should perform sanity checks to ensure that the ML model is not over-provisioning resources, thus increasing robustness; a minimal sketch of such a check follows this summary. Anderson noted that just because a model is not accessible externally does not mean it is safe from attack. Internal models are not secure by default, as attackers can find paths to them that cause downstream effects in the overall system. He emphasized that organizations adopting ML face a risk of exposure due to the gap between this technology and security practice. The Enigma presentation is part of Microsoft's effort to bring further attention to the possibility of adversarial attacks on ML models. These attacks are often highly technical, making it difficult for most companies to know how to assess their security. Anderson suggests that the security community increase its exploration of adversarial ML attacks and consider the issue part of the broader threat landscape. According to a survey conducted by Microsoft last year, nearly 90 percent of organizations do not know how to protect their ML systems against attacks. This article continues to discuss why mature companies should perform red team attacks against their ML systems, the lack of awareness among organizations about how to protect ML systems from attacks, and Microsoft's research on adversarial ML attacks.
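The article does not describe the defenses Microsoft actually deployed. As a rough illustration of the kind of output sanity check described above, the Python sketch below clamps a resource-provisioning model's prediction to hard bounds so a single adversarial request cannot force runaway over-provisioning. All names, thresholds, and the stand-in model are hypothetical assumptions, not details from Microsoft's system.

```python
import logging
import math

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("allocator")

# Hypothetical hard limits on what any one request may be granted,
# regardless of what the model predicts.
HARD_CAP_CORES = 64
FALLBACK_CORES = 4

def sanity_checked_allocation(predict, features):
    """Clamp a model's core-count prediction to safe bounds so an
    adversarial input cannot trigger over-provisioning (DoS)."""
    predicted = predict(features)

    # Non-finite or negative output: fall back to a conservative default.
    if not math.isfinite(predicted) or predicted < 0:
        log.warning("invalid prediction %r; using fallback", predicted)
        return FALLBACK_CORES

    # Cap the grant; a spike in capped requests is itself a signal
    # that someone may be probing the model.
    if predicted > HARD_CAP_CORES:
        log.warning("capped prediction %.1f -> %d", predicted, HARD_CAP_CORES)

    return max(1, min(int(predicted), HARD_CAP_CORES))

if __name__ == "__main__":
    # Stand-in for a trained model that an adversarial input pushes high.
    fake_model = lambda f: f["load"] * 10.0
    print(sanity_checked_allocation(fake_model, {"load": 0.8}))    # -> 8
    print(sanity_checked_allocation(fake_model, {"load": 999.0}))  # -> capped at 64
```

The design choice here mirrors the article's point: the check sits downstream of the model, so it limits the blast radius of a successful adversarial example even when the model itself cannot be hardened.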
Dark Reading reports "Microsoft Says It's Time to Attack Your Machine-Learning Models"