"Researchers Develop Malicious AI 'Worm' Targeting Generative AI Systems"

A team of researchers from Cornell Tech, the Israel Institute of Technology, and Intuit developed a novel type of malware dubbed the "Morris II" worm, which uses popular Artificial Intelligence (AI) services to spread itself, infect systems, and steal data. The worm further highlights the potential dangers of AI security threats and the need to secure AI models. The team used an "adversarial self-replicating prompt" to create the worm. When fed into a Large Language Model (LLM) such as OpenAI's ChatGPT, Google's Gemini, and the open source LLaVA model, the prompt causes the model to generate an additional prompt. It causes the chatbot to generate its own malicious prompts, which it then responds to by performing those instructions. This article continues to discuss the demonstration and potential impact of the Morris II AI worm. 

SecurityIntelligence reports "Researchers Develop Malicious AI 'Worm' Targeting Generative AI Systems"

Submitted by grigby1

Submitted by grigby1 CPVI on