Assessing AI vs Human-Authored Spear Phishing SMS Attacks: An Empirical Study

ABSTRACT

This paper explores the use of Large Language Models (LLMs) in spear phishing message generation and evaluates their performance compared to human-authored counterparts. Our pilot study examines the effectiveness of smishing (SMS phishing) messages created by GPT-4 and by human authors, each personalized for willing targets. The targets assessed these messages in a modified ranked-order experiment using a novel methodology we call TRAPD (Threshold Ranking Approach for Personalized Deception). Experiments involved ranking each spear phishing message from most to least convincing, providing qualitative feedback, and guessing which messages were human- or AI-generated. Results show that LLM-generated messages are often perceived as more convincing than those authored by humans, particularly for job-related messages. Targets also struggled to distinguish between human- and AI-generated messages. We analyze the different criteria targets used to assess the persuasiveness and source of the messages. This study highlights the urgent need for further research and improved countermeasures against personalized AI-enabled social engineering attacks.

Jerson Francia received his B.S. degree in Electronics and Communications Engineering from the University of the Philippines in 2019. He is currently pursuing a Ph.D. degree in electrical and computer engineering at Brigham Young University, Provo.


He was previously a software developer with experience in deep learning research. He is currently a Graduate Research Assistant with the Electrical and Computer Engineering Department at BYU. His research interests include the implications of AI in cybersecurity.
