"Cybercriminals Are Creating Their Own AI Chatbots to Support Hacking and Scam Users"

Cybersecurity experts from the University of East Anglia and the University of Kent are calling for greater attention to criminals who are creating their own Artificial Intelligence (AI) chatbots to support hacking and scams. Malicious variants of Large Language Models (LLMs), the technology behind AI chatbots such as ChatGPT, are emerging. Examples include WormGPT and FraudGPT, which can create malware, identify security flaws in systems, advise on how to scam people, support hacking, and more. Love-GPT, a newer variant commonly used in romance scams, creates fake dating profiles capable of chatting with unsuspecting victims across different dating apps. This article continues to discuss the experts' insights on cybercriminals' use of AI chatbots.

The Conversation reports "Cybercriminals Are Creating Their Own AI Chatbots to Support Hacking and Scam Users"

Submitted by grigby1