"Generative AI Making It Harder to Spot Fraudulent Emails"

Cybercriminals are using generative Artificial Intelligence (AI) to evade email security solutions and deceive employees. According to Mike Britton, CISO of Abnormal Security, generative AI makes email attacks more difficult to detect. Before generative AI's breakthrough, cybercriminals relied on fixed formats or templates to create malicious campaigns. Many attacks therefore shared common Indicators of Compromise (IOCs), making them detectable by traditional security software. Generative AI, however, enables scammers to produce unique content in milliseconds, significantly complicating detection methods that rely on matching known malicious text strings. Generative AI has also helped cybercriminals increase the sophistication of social engineering attacks and email threats. For example, cybercriminals can use the ChatGPT Application Programming Interface (API) to craft phishing emails, malware, and fraudulent payment requests. This article continues to discuss the use of generative AI by cybercriminals to evade email security.

Cybernews reports "Generative AI Making It Harder to Spot Fraudulent Emails"

Submitted by grigby1