"Securing Machine Learning Requires a Sociotechnical Approach"

Morgan Livingston, an expert in Artificial Intelligence (AI) policy, advocates a sociotechnical approach to leveraging and securing Machine Learning (ML). ML is a critical capability in a defense environment that relies on rapidly converting vast volumes of data and new data sources into information and intelligence. It has numerous applications, including geospatial imaging, enterprise and predictive maintenance, and cybersecurity. ML can strengthen cyber defense by monitoring networks for anomalies that indicate intrusions (see the first sketch below), detecting malware, discovering vulnerabilities through fuzzing, creating dynamic honeypots, and automating routine tasks. Over the last decade, AI research in cybersecurity has grown rapidly, and ML is expected to become a critical technology for businesses countering nation-state cyberattacks.

Although AI can aid cyber defense, it also has offensive applications. AI can scale existing attacks such as spear-phishing, discover exploitable software vulnerabilities, improve password brute-force attacks, develop self-learning targeted malware, and generate data points that fool other AI models (see the second sketch below). Adversarial AI could thus amplify existing threats, create new ones, and change the nature of threats by scaling their impact, driving down the cost of attacks, and making attribution harder.

ML also introduces technical characteristics that make security harder to provide. Security is asymmetric: defenders must always be correct, while attackers need to be correct only once, and as ML systems grow more complex, their potential vulnerabilities multiply. ML systems are exposed to both traditional and ML-specific vulnerabilities, and the attack surface is broad, encompassing the ML model, its implementation, the software throughout the ML pipeline, and even the hardware. Some weaknesses are inherent, arising not from error but from the way AI learns. Attackers can poison training data sets (see the third sketch below), steal models, and reveal hidden aspects of the training data. ML security is still in its early stages; research on ML robustness may eventually advance far enough to provide security guarantees, much as cryptography did. Until then, securing AI will require new approaches, including new test, evaluation, verification, and validation processes. This article continues to discuss the security challenges of ML and AI and the need for a sociotechnical approach to help defenders mitigate risks.
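
To make the anomaly-monitoring use case concrete, the first sketch below trains an unsupervised detector on synthetic network-flow records. It is a minimal illustration, assuming scikit-learn is available; the flow features, traffic distributions, and contamination rate are invented for the example and are not drawn from the article.

```python
# A minimal sketch of ML-based network anomaly monitoring, assuming scikit-learn.
# The flow features (bytes sent, duration, packet count) are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" traffic flows: [bytes_sent, duration_s, packets].
normal = rng.normal(loc=[500.0, 2.0, 40.0], scale=[100.0, 0.5, 8.0], size=(1000, 3))
# A handful of synthetic outliers standing in for intrusions (e.g., exfiltration bursts).
attacks = rng.normal(loc=[50000.0, 0.2, 2000.0], scale=[5000.0, 0.05, 100.0], size=(5, 3))

# Fit an unsupervised detector on traffic assumed to be benign.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# predict() returns +1 for inliers and -1 for anomalies.
flows = np.vstack([normal[:5], attacks])
for flow, label in zip(flows, detector.predict(flows)):
    status = "ANOMALY" if label == -1 else "ok"
    print(f"{status:7s} bytes={flow[0]:8.0f} dur={flow[1]:5.2f}s pkts={flow[2]:6.0f}")
```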
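
The "data points that fool other AI models" are adversarial examples. The second sketch shows the core idea in the style of the fast gradient sign method (FGSM) against a toy linear detector; the weights, input, and perturbation budget are hypothetical stand-ins, not any real system's parameters.

```python
# A minimal sketch of an FGSM-style evasion attack, using only NumPy.
# The "detector" is a toy logistic model with made-up weights, not a real system.
import numpy as np

rng = np.random.default_rng(1)

# Pretend-trained linear malware detector: score = sigmoid(w.x + b).
w = rng.normal(size=20)
b = -0.5

def p_malicious(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A sample the detector confidently flags as malicious.
x = 0.5 * np.sign(w)

# For a linear model, the gradient of the score with respect to the input is
# proportional to w, so subtracting eps * sign(w) lowers the score as fast as
# possible while changing each feature by at most eps (the signature FGSM move).
eps = 0.6
x_adv = x - eps * np.sign(w)

print(f"score before perturbation: {p_malicious(x):.3f}")     # well above 0.5
print(f"score after perturbation:  {p_malicious(x_adv):.3f}")  # well below 0.5
```

A bounded, per-feature nudge is enough to flip the decision, which is why adversarial examples are hard to rule out with input sanitization alone.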
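
Finally, data poisoning can be illustrated with a toy label-flipping attack, sketched below under the assumption that an attacker can relabel part of the training set. The "trigger" condition, the synthetic data, and the logistic-regression detector are all hypothetical; the point is only that corrupted labels can selectively blind a model.

```python
# A minimal sketch of targeted training-data poisoning, assuming scikit-learn.
# All data, labels, and the trigger condition are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy training data: class = sign of a linear score (0 = benign, 1 = malicious).
true_w = rng.normal(size=10)
X_tr = rng.normal(size=(4000, 10))
y_tr = (X_tr @ true_w > 0).astype(int)

# Poison: the attacker relabels malicious training samples that carry a
# "trigger" (an unusually large first feature) as benign.
trigger = X_tr[:, 0] > 1.0
y_poisoned = y_tr.copy()
y_poisoned[trigger & (y_tr == 1)] = 0

clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
dirty = LogisticRegression(max_iter=1000).fit(X_tr, y_poisoned)

# Evaluate on held-out malicious samples that carry the trigger.
X_te = rng.normal(size=(20000, 10))
mal_trig = (X_te @ true_w > 0) & (X_te[:, 0] > 1.0)
print(f"clean model detects    {clean.predict(X_te[mal_trig]).mean():.2%} of triggered malware")
print(f"poisoned model detects {dirty.predict(X_te[mal_trig]).mean():.2%} of triggered malware")
```

Because the flipped labels are targeted rather than random, the poisoned model can remain accurate on ordinary samples while missing the triggered ones, which is part of what makes such attacks hard to notice.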

SIGNAL Magazine reports "Securing Machine Learning Requires a Sociotechnical Approach"
