"Malicious AI Arrives on the Dark Web"

Malicious non-state actors are using Artificial Intelligence (AI) to amplify their activities. Since OpenAI released ChatGPT in late 2022, dark web forums have seen extensive discussion of methods involving the technology, with users sharing tips on jailbreaking it to evade its safety and ethical limitations and apply it to more sophisticated malicious activity. A new generation of AI-powered tools and applications has emerged to satisfy cybercriminals' needs. The first of these tools, WormGPT, surfaced on the dark web on July 13. WormGPT is based on GPT-J, an open-source Large Language Model (LLM) developed in 2021, and is marketed as a 'blackhat' alternative to ChatGPT with no ethical limits. Allegedly trained on malware data, its primary applications are generating advanced phishing and business email compromise (BEC) attacks and writing malicious code. FraudGPT first appeared for sale on the dark web on July 22, following WormGPT's release. Advertised as an advanced bot for offensive purposes and said to be based on GPT-3 technology, FraudGPT's uses include creating undetectable malware, developing hacking tools, discovering security flaws, and more. This article continues to discuss new malicious AI tools.

The Australian Strategic Policy Institute reports "Malicious AI Arrives on the Dark Web"