"ChatGPT and Other AI-Themed Lures Used to Deliver Malicious Software"

According to Check Point researchers, between the beginning of 2023 and the end of April, one out of every 25 newly created domains related to ChatGPT or OpenAI was malicious or potentially malicious. In addition, Meta has stated that, since March 2023, it has blocked the sharing of over 1,000 malicious links that used ChatGPT as a lure across its platforms. Threat actors typically hide malware within files that appear harmless, offering nonexistent ChatGPT desktop and mobile apps or browser extensions in official app stores. Fake ChatGPT Chrome extensions that steal Facebook session cookies to compromise personal and business Facebook accounts are common. Threat actors may also tailor their malware to a specific online platform, incorporating more sophisticated account compromise techniques than expected from commodity malware. Malware families have been observed attempting to circumvent two-factor authentication (2FA) or automatically scanning for connections between a compromised account and a business account. Malware such as DuckTail and NodeStealer is after almost any login credentials or session cookies it can get, which are then used to take over accounts on various social media platforms and online services in order to spread and host more malware. This article continues to discuss key findings on the malware threat landscape.

Help Net Security reports "ChatGPT and Other AI-Themed Lures Used to Deliver Malicious Software"

Submitted by Anonymous on