"Criminals Have Created Their Own ChatGPT Clones"
Just months after the launch of OpenAI's ChatGPT chatbot, cybercriminals and hackers claim to have developed their own versions of the text-generating Artificial Intelligence (AI) technology. In theory, such systems could improve criminals' ability to develop malware or write phishing emails that trick users into revealing login credentials. Since the beginning of July, cybercriminals have been advertising two Large Language Models (LLMs), WormGPT and FraudGPT, on dark web forums and marketplaces. The systems, said to mimic ChatGPT and Google's Bard, generate text in response to the questions or prompts that users input. In contrast to the LLMs built by legitimate companies, these chatbots are marketed for illegal purposes, with their sellers claiming they have no safety protections or ethical barriers. This article continues to discuss the LLMs advertised by cybercriminals that could help perform phishing attacks and create malware.
Wired reports "Criminals Have Created Their Own ChatGPT Clones"