Machine learning for cybersecurity: historical perspectives, opportunities and (potential) pitfalls

ABSTRACT

This may be surprising to some of you, but there have actually been more than 20 years of very active and productive research in applying machine learning to cybersecurity. There has been considerable success in building ML models for intrusion detection and malware analysis. On the other hand, early adversarial ML work also showed how these models can be defeated. The major obstacles to using classical ML for security include false positives and the need for (manual) feature engineering. There are new opportunities and needs for ML in cybersecurity, including automated vulnerability discovery and exploit generation, anomaly detection, threat intelligence analysis, and continuous authentication, and deep learning appears to be a promising approach to addressing the feature engineering challenge. On the other hand, recent adversarial ML work has shown that deep learning models can be evaded quite easily, due in no small part to the fact that these black-box models cannot be sanity-checked.

Dr. Wenke Lee is a Professor and John P. Imlay Jr. Chair in the School of Computer Science in the College of Computing at the Georgia Institute of Technology. He is also the Co-Executive Director of the Institute for Information Security & Privacy (IISP) at Georgia Tech. He received his Ph.D. in Computer Science from Columbia University in the City of New York in 1999. Dr. Lee’s research interests include systems and network security, applied cryptography, and machine learning. Most recently, he has focused on botnet detection and malware analysis, security of mobile systems and apps, detection and mitigation of information manipulation on the Internet, and adversarial machine learning.
License: CC-2.5
Submitted by Katie Dey on