"Cybercriminals Are Creating Their Own AI Chatbots to Support Hacking and Scam Users"
Cybersecurity experts from the University of East Anglia and the University of Kent are calling for greater attention to criminals building their own Artificial Intelligence (AI) chatbots for hacking and scams. Malicious variants of Large Language Models (LLMs), the technology behind AI chatbots such as ChatGPT, are emerging. Examples such as WormGPT and FraudGPT can create malware, identify security flaws in systems, provide advice on how to scam people, support hacking, and more.