"ChatGPT Jailbreaking Forums Proliferate in Dark Web Communities"
The weaponization of generative Artificial Intelligence (AI) tools such as ChatGPT is taking shape. In online communities, threat actors are collaborating on new methods to circumvent ChatGPT's ethics rules, a practice known as "jailbreaking," and hackers are building a network of new tools to exploit or create Large Language Models (LLMs) for malicious purposes. ChatGPT appears to have sparked a frenzy among cybercriminal forums. Since December, hackers have been seeking new and inventive ways to maliciously manipulate ChatGPT and open-source LLMs. According to SlashNext, the result is a new but already thriving LLM hacking community, with many creative prompts and a few AI-enabled pieces of malware worthy of further examination. This article continues to discuss cybercriminals bypassing ethical and safety restrictions in order to use generative AI chatbots however they want.
Dark Reading reports "ChatGPT Jailbreaking Forums Proliferate in Dark Web Communities"