"Researchers Develop Adversarial Training Methods to Improve Machine Learning-Based Malware Detection Software"

Machine Learning (ML) has changed how many computing tasks are approached and performed. Its ability to identify patterns and process massive amounts of data lends itself to many applications. In malware detection, ML has streamlined a once daunting process, allowing antivirus software to detect potential attacks more efficiently and with a higher success rate. Antivirus software previously relied on knowledge of earlier attacks, comparing program code against a list of known malicious binaries to determine which programs might be harmful. ML-based detectors now use behavioral and static artifacts to identify ever-evolving malware attacks, improving antivirus software's effectiveness. However, new technology brings numerous unknowns, so it is the responsibility of researchers to identify potential vulnerabilities. Professor Lujo Bauer of Carnegie Mellon's Electrical and Computer Engineering and Software and Societal Systems departments noted that the first step is determining the threat model for some of the newest ML technologies, such as generative Artificial Intelligence (AI). Bauer and a team of researchers demonstrated that ML-based malware detectors can be fooled by adversarial examples: variants of malicious binaries transformed in ways that preserve their functionality while evading detection. In their most recent paper, "Adversarial Training for Raw-Binary Malware Classifiers," the researchers examine how effectively adversarial training methods can produce malware detection models that are more resistant to some state-of-the-art attacks. To train these models, the authors developed a method that makes creating adversarial examples more efficient and scalable, thus making adversarial training practical. This article continues to discuss the adversarial training methods developed to improve ML-based malware detection software.
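For readers unfamiliar with adversarial training, the sketch below illustrates the general idea in PyTorch: adversarial examples are generated against the current model and folded back into the training loss. It is a minimal, hypothetical illustration using an FGSM-style perturbation on generic feature vectors; it is not the paper's functionality-preserving raw-binary transformations or its training procedure, and all names and parameters here are assumptions for illustration only.

```python
# Illustrative sketch of adversarial training (not the paper's raw-binary method).
# Assumes a generic feature-vector malware classifier and an FGSM-style attack.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, loss_fn, epsilon=0.1):
    """Generate adversarial examples with a single gradient-sign step (FGSM)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, loss_fn):
    """One training step that mixes clean and adversarial examples."""
    model.eval()
    x_adv = fgsm_perturb(model, x, y, loss_fn)  # attack the current model
    model.train()
    optimizer.zero_grad()
    # Train on both clean and adversarial inputs so the model learns to resist the attack.
    loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Toy classifier on random placeholder "features"; 0 = benign, 1 = malicious.
    model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    x = torch.randn(16, 64)
    y = torch.randint(0, 2, (16,))
    print(adversarial_training_step(model, optimizer, x, y, loss_fn))
```

The practical difficulty highlighted in the paper is that generating adversarial examples for raw binaries is far more expensive than a single gradient step like the one above, which is why making that generation efficient and scalable is what makes adversarial training practical in this setting.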

CyLab reports "Researchers Develop Adversarial Training Methods to Improve Machine Learning-Based Malware Detection Software"
