On Managing Vulnerabilities in AI/ML Systems


Abstract

This paper explores how the current paradigm of vulnerability management might adapt to include machine learning (ML) systems through a thought experiment: what if flaws in ML were assigned Common Vulnerabilities and Exposures (CVE) identifiers (CVE-IDs)? We consider both ML algorithms and model objects. The hypothetical scenario is structured around exploring the changes to the six areas of vulnerability management: discovery, report intake, analysis, coordination, disclosure, and response. While algorithm flaws are well known to the academic research community, there appears to be no clear line of communication between that community and the operational communities that deploy and manage systems using ML. The thought experiment identifies ways in which CVE-IDs could establish useful lines of communication between these two communities. In particular, the practice would begin to introduce the research community to operational security concepts, which appears to be a gap left by existing efforts.


Bio

Dr. Jonathan Spring is a senior member of the technical staff in the CERT Division of the Software Engineering Institute (SEI) at Carnegie Mellon University, where he has worked since 2009. Prior posts include adjunct professor at the University of Pittsburgh’s School of Information Sciences and research fellow for ICANN’s Security and Stability Advisory Committee (SSAC); he has also served as program chair of FloCon and the New Security Paradigms Workshop. He holds a doctoral degree in computer science from University College London. At the SEI, he produces reliable evidence in support of crafting effective cybersecurity policies at the operational, organizational, and national levels. His practice covers vulnerability management, machine learning, and threat intelligence.
