"The Challenge of Adversarial Machine Learning"

Researchers at Carnegie Mellon University's (CMU) Software Engineering Institute (SEI) have published a blog post explaining the concept of adversarial Machine Learning (ML), examining the motivations of adversaries, and describing what researchers are doing to mitigate their attacks. They also provide a taxonomy of what an adversary can accomplish and, correspondingly, what a defender must protect against. With the rapid growth of ML and Artificial Intelligence (AI), adversarial tactics, techniques, and procedures (TTPs) have both expanded and attracted significant interest. When ML algorithms are used to build prediction models that are then incorporated into AI systems, the focus is typically on maximizing performance and ensuring that the model makes accurate predictions. This emphasis on capability often relegates security to a secondary concern behind priorities such as curating proper training datasets, applying domain-appropriate ML algorithms, and tuning parameters and configurations for optimal results. However, research has demonstrated that an adversary can influence an ML system by manipulating the model, the data, or both, thereby forcing the system to learn, do, or reveal the wrong information. This article continues to discuss the concept of adversarial ML, how adversaries seek to influence models, and how to defend against adversarial AI.
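
The SEI post stays conceptual, but a small sketch can make the input-manipulation case concrete. The Python example below is a minimal illustration, not the method described in the post: it uses the well-known Fast Gradient Sign Method (FGSM) to show how an attacker with gradient access can perturb an input at inference time so that a trained model's loss rises. The toy model, input shapes, and epsilon value are all assumptions chosen for the sketch.

```python
# Minimal FGSM sketch (all names and shapes here are illustrative assumptions,
# not taken from the SEI post): perturb an input so the model's loss increases.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm_perturb(x, y, epsilon=0.1):
    """Fast Gradient Sign Method: take one signed-gradient step of size
    epsilon in the direction that raises the loss on input x with label y."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()  # gradient of the loss w.r.t. the input
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

x = torch.rand(1, 28, 28)   # a stand-in "image" the attacker controls
y = torch.tensor([3])       # its true label
x_adv = fgsm_perturb(x, y)  # small perturbation, often enough to flip the prediction
```

The analogous manipulation on the training side, data poisoning, injects mislabeled or crafted records into the training set so that the model learns the wrong decision boundary in the first place.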

Carnegie Mellon University reports "The Challenge of Adversarial Machine Learning"
