"FraudGPT and Other Malicious AIs Are the New Frontier of Online Threats. What Can We Do?"

Researchers at Monash University share their insights on the rise of dark Large Language Models (LLMs), what we can do to protect ourselves, and the role of government in regulating Artificial Intelligence (AI). They note that widely available generative AI tools have further complicated cybersecurity, making online security more important than ever. Dark LLMs are uncensored versions of AI systems such as ChatGPT. Re-engineered for criminal activities, they can be used to improve phishing campaigns, create sophisticated malware, and more. This article continues to discuss the researchers' insights on malicious AI systems.

The Conversation reports "FraudGPT and Other Malicious AIs Are the New Frontier of Online Threats. What Can We Do?"

Submitted by grigby1
