"ChatGPT Users at Risk for Credential Theft"

New research from Group-IB reveals that threat actors are increasingly compromising ChatGPT accounts, using this access to collect sensitive data and launch additional targeted attacks. According to Group-IB, ChatGPT credentials have become a major target for malicious activity. Researchers cautioned that because OpenAI's Artificial Intelligence (AI)-driven chatbot stores past user queries and AI responses by default, each account could serve as an entry point for threat actors to access user data. Dmitry Shestakov, head of threat intelligence at Group-IB, emphasized that exposed information, whether personal or professional, could be exploited for identity theft, financial fraud, targeted scams, and more. Over the past year, Group-IB researchers identified 101,134 information stealer-infected devices storing ChatGPT data. Using Group-IB's Threat Intelligence platform to gain visibility into dark web communities, researchers found compromised ChatGPT credentials within the logs of information stealers sold by threat actors on illicit marketplaces. Most victims were found to reside in the Asia-Pacific region. This article continues to discuss threat actors exploiting stolen ChatGPT accounts to collect users' sensitive data and professional credentials.

TechTarget reports "ChatGPT Users at Risk for Credential Theft"