"Researchers Jailbreak AI Chatbots With ASCII Art -- ArtPrompt Bypasses Safety Measures to Unlock Malicious Queries"
"Researchers Jailbreak AI Chatbots With ASCII Art -- ArtPrompt Bypasses Safety Measures to Unlock Malicious Queries"
A team of researchers has developed ArtPrompt, a new approach for bypassing the safety measures built into Large Language Models (LLMs). According to their paper, "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs," users can coax chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 into answering queries they are supposed to reject. The attack works by replacing the words in a prompt that would normally trigger a refusal with ASCII-art renderings generated by the ArtPrompt tool, so the model's safety filtering fails to recognize them while the model can still reconstruct and act on the request.
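To illustrate the general mechanic, the sketch below renders a word as ASCII art and splices it into a prompt template. This is a simplified approximation of the idea, not the researchers' actual ArtPrompt tool; it uses the pyfiglet library as an assumed stand-in for their ASCII-art generation, and the placeholder word and prompt wording are hypothetical and intentionally benign.

# Minimal sketch of ASCII-art word masking, assuming pyfiglet as the renderer.
# Not the authors' ArtPrompt implementation; placeholder word and template are
# hypothetical and benign.
import pyfiglet

def mask_word_as_ascii_art(word: str, font: str = "standard") -> str:
    # Render a single word as multi-line ASCII art.
    return pyfiglet.figlet_format(word, font=font)

# Benign placeholder standing in for the word the attack would disguise.
masked = mask_word_as_ascii_art("EXAMPLE")

prompt = (
    "The ASCII art below encodes a single word. Decode it, then answer the "
    "question that follows, substituting the decoded word for [MASK]:\n\n"
    f"{masked}\n"
    "Question: What is the dictionary definition of [MASK]?"
)
print(prompt)

Because the sensitive term never appears as plain text, keyword-based or semantic refusal checks may not fire, which is the gap the researchers report exploiting.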