WiP: An Investigation of Large Language Models and Their Vulnerabilities in Spam Detection

ABSTRACT

Spam messages continue to pose significant challenges for digital users, cluttering inboxes and creating security risks. Traditional spam detection methods, including rule-based, collaborative, and machine learning approaches, struggle to keep pace with the rapidly evolving tactics of spammers. This project studies new spam detection systems that leverage Large Language Models (LLMs) fine-tuned with spam datasets. More importantly, we want to understand how LLM-based spam detection systems perform under adversarial attacks that purposefully modify spam emails, and under data poisoning attacks that exploit differences between the training data and the messages seen at detection time, both of which traditional machine learning models have been shown to be vulnerable to. The experiments employ two LLM models, GPT-2 and BERT, and three spam datasets, Enron, LingSpam, and SMSspamCollection, for extensive training and testing tasks. The results show that, while the LLM models can function as effective spam filters, they are susceptible to these adversarial and data poisoning attacks. This research provides valuable insights for future applications of LLM models in information security.
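
The abstract does not give implementation details, so the sketch below is only a minimal illustration of how one of the two models, BERT, might be fine-tuned as a binary spam/ham classifier with the Hugging Face transformers library; the hyperparameters and the toy two-message corpus are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch: fine-tuning BERT as a binary spam/ham classifier.
# Hyperparameters and the toy corpus are illustrative assumptions,
# not the paper's actual training setup.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertForSequenceClassification, BertTokenizerFast

class SpamDataset(Dataset):
    """Tokenized (message, label) pairs; label 1 = spam, 0 = ham."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

# Toy stand-ins for a real corpus such as Enron, LingSpam, or
# SMSspamCollection.
texts = ["Win a FREE prize now!!!", "Are we still meeting at noon?"]
labels = [1, 0]

loader = DataLoader(SpamDataset(texts, labels, tokenizer), batch_size=2)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):
    for batch in loader:
        optimizer.zero_grad()
        loss = model(**batch).loss  # cross-entropy over spam/ham logits
        loss.backward()
        optimizer.step()
```

Likewise, the abstract does not describe the adversarial attacks themselves; the toy probe below merely illustrates the general idea of purposefully modifying a spam message (here, a hypothetical look-alike character substitution) and checking whether the fine-tuned classifier's prediction flips.

```python
# Toy adversarial probe (illustrative only, not the paper's attack):
# obfuscate trigger words with look-alike characters, then compare
# predictions on the original and perturbed messages.
model.eval()
original = "Win a FREE prize now!!!"
perturbed = original.replace("FREE", "FR3E").replace("prize", "pr1ze")
for text in (original, perturbed):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        pred = model(**inputs).logits.argmax(dim=-1).item()
    print(f"{text!r} -> {'spam' if pred == 1 else 'ham'}")
```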


BIO

Xiangyang Li, PhD, has worked for many years in security analytics, user modeling and intelligent systems, and knowledge discovery and engineering. He currently works on the threats to and trustworthiness of AI systems at Johns Hopkins University. He has extensive experience in advising students, designing and directing student projects, and developing curricula. He received his PhD degree from Arizona State University.

