"Researchers Put LLMs to the Test in Phishing Email Experiment"

A team of security researchers tested how well Large Language Models (LLMs) perform at both composing convincing phishing emails and detecting them. The results showed that the technology could generate highly effective phishing lures, though not as convincing as emails crafted manually. Bruce Schneier, a security expert, Arun Vishwanath, chief technologist at Avant Research Group, and Jeremy Bernstein, a postdoctoral researcher at MIT, ran experimental phishing attacks against Harvard students using four commercially available LLMs: ChatGPT from OpenAI, Bard from Google, Claude from Anthropic, and ChatLlama, an open-source chatbot based on Meta's Llama. The experiment sent phishing emails offering Starbucks gift cards to 112 students. Although generative AI vendors have implemented stricter safeguards and restrictions to block prompts requesting phishing emails, LLMs can still be used to create simple marketing emails that attackers can repurpose for phishing. This article continues to discuss the experiment testing how effective LLMs can be at producing and detecting phishing emails.

TechTarget reports "Researchers Put LLMs to the Test in Phishing Email Experiment"