"AI-Generated Phishing Emails Almost Impossible to Detect, Report Finds"

The potential for cybercriminals to use AI chatbots to create phishing campaigns has been a cause for concern, and security researchers at Egress have now found that AI-generated phishing emails are almost impossible to detect.  The researchers noted that AI detectors cannot tell whether a phishing email was written by a chatbot or a human in nearly three out of four cases (71.4%).  They attributed this to how AI detectors work.  Most of these tools are based on large language models (LLMs), so their accuracy improves with longer samples, and they often require a minimum of 250 characters to work at all.  Almost half (44.9%) of phishing emails fall below that 250-character threshold, and a further 26.5% fall between 250 and 500 characters, meaning that AI detectors currently either will not work at all or will not work reliably on 71.4% of attacks.  The study also found that human-generated phishing campaigns are getting harder to detect, with a 24.4% increase in the use of obfuscation techniques, which appeared in over half (55%) of phishing emails in 2023.
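
As a rough illustration of how those figures combine, the sketch below models the length-gate logic the article describes. The percentages come from the article; the threshold names and the reading that 250-500 characters means "unreliable" rather than "unusable" are assumptions for illustration, not details confirmed by the Egress report.

```python
# Illustrative sketch, not from the Egress report: how an LLM-based
# AI detector's coverage depends on email length, per the article's
# figures. Threshold names and bucket semantics are assumptions.

MIN_WORKING_CHARS = 250    # from the article: detectors need >= 250 chars
MIN_RELIABLE_CHARS = 500   # assumed cutoff for reliable operation

def detector_coverage(char_count: int) -> str:
    """Classify an email by whether an AI detector can analyze it."""
    if char_count < MIN_WORKING_CHARS:
        return "will not work"   # 44.9% of phishing emails
    if char_count < MIN_RELIABLE_CHARS:
        return "unreliable"      # a further 26.5%
    return "usable"              # the remaining 28.6%

for length in (120, 400, 800):
    print(f"{length} chars -> {detector_coverage(length)}")

# The two short-email buckets together yield the 71.4% headline figure:
below_250 = 44.9
between_250_and_500 = 26.5
print(f"{below_250 + between_250_and_500:.1f}% of attacks")  # 71.4%
```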

Infosecurity reports: "AI-Generated Phishing Emails Almost Impossible to Detect, Report Finds"

Submitted by Adam Ekwall