VI Reflections: AI-Enhanced Phishing

By grigby1

Artificial Intelligence (AI) continues to fascinate technology experts and the public. AI systems that can automatically generate text, videos, and images are attracting many users, including cybercriminals who can use the technology to automate phishing. The time it takes to conduct phishing attacks could be reduced significantly using AI tools such as OpenAI's ChatGPT (Generative Pre-trained Transformer), a Large Language Model (LLM) launched in November 2022. It was designed to use AI and Natural Language Processing (NLP) to generate content nearly indistinguishable from human writing. ChatGPT is fine-tuned with both supervised and reinforcement learning techniques, and it allows users to engage in human-like question-and-answer exchanges with a chatbot. A user can ask the chatbot to write something in a specific author's style; for example, the chatbot can produce an informative response that appears to have been written by an expert.

Despite being fun to play with, the AI chatbot can facilitate phishing attacks by low-skilled threat actors. Phishing emails used to be easily identifiable by their brevity and grammatical errors alone, but that is no longer the case due to AI. AI-driven tools such as ChatGPT allow cybercriminals to generate emails that are more detailed, grammatically correct, and available in many languages, making them more difficult for both spam filters and the average person to detect. Phishing emails are becoming increasingly sophisticated, harder to identify, and considerably more dangerous as a result of the rapid advancement of Generative AI (GenAI) tools.
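To make the style-mimicry point concrete, here is a minimal, hypothetical sketch of how any user can request text in a specific register through OpenAI's Python SDK. The model name and prompt are illustrative assumptions rather than a prescribed configuration, and the example uses a benign topic; the point is that the same few lines of effort are what lower the bar for attackers.

```python
# Minimal sketch of style-conditioned text generation with the OpenAI
# Python SDK (pip install openai). Model name and prompt are illustrative;
# an API key must be available via the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You write in the style of a friendly corporate IT newsletter."},
        {"role": "user",
         "content": "Explain in one short paragraph why employees should "
                    "enable two-factor authentication."},
    ],
)

# The returned text is fluent, well-formatted prose in the requested style.
print(response.choices[0].message.content)
```

The output reads like polished human writing, which is precisely why brevity and bad grammar are no longer reliable phishing indicators.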

In a recent study conducted by a team of researchers from Harvard University, Avant Research Group, and the Massachusetts Institute of Technology (MIT), 60 percent of participants fell victim to AI-automated phishing. Their research showed that the entire phishing process can be automated using LLMs, cutting the cost of phishing attacks by more than 95 percent while maintaining comparable or higher success rates. According to the team, phishing involves five distinct phases: collecting targets, gathering information about the targets, crafting the emails, sending the emails, and validating and improving the emails. LLMs such as ChatGPT can automate each phase by generating human-like text and conversing coherently.

According to SlashNext's "2024 Mid-Year Assessment on The State of Phishing" report, malicious phishing messages have increased by 4,151 percent since the launch of ChatGPT in 2022. AI tools help malicious actors quickly and easily create content, including convincing phishing emails, and security experts anticipate that both the quality and the quantity of phishing attacks will rise significantly as AI use grows, leaving individuals and organizations increasingly vulnerable. Therefore, it is essential to combat AI with AI by deploying AI-powered email and messaging security tools that stop malicious content before it reaches users' inboxes. It is important to continue exploring and mitigating the cybersecurity risks of ChatGPT and other GenAI tools.
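As one hedged illustration of the "combat AI with AI" idea, the sketch below trains a simple text classifier to score incoming messages for phishing risk using scikit-learn. The tiny inline dataset is purely illustrative; a real email security product would train on large labeled corpora and combine such a model with URL, header, and sender-reputation signals.

```python
# Minimal sketch of an AI-assisted phishing filter: TF-IDF features plus
# logistic regression, built with scikit-learn (pip install scikit-learn).
# The training data below is a toy, illustrative placeholder.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = phishing, 0 = legitimate (illustrative only).
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month's cloud usage is attached",
    "Click this link immediately to reclaim your locked payroll funds",
    "Team meeting moved to 3pm tomorrow, agenda unchanged",
]
labels = [1, 0, 1, 0]

# Vectorize the text and fit a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(emails, labels)

# Score an incoming message; in practice this score would gate delivery
# to the user's inbox or trigger a warning banner.
incoming = ["Please confirm your password at the secure portal below"]
print(model.predict_proba(incoming)[0][1])  # estimated phishing probability
```

Lightweight classifiers like this are only one layer; the defensive trend the report points to is stacking such models with LLM-based content analysis so that machine-generated lures are caught by machines first.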

To see previous articles, please visit the VI Reflections Archive.
