"Researchers Jailbreak AI Chatbots With ASCII Art -- ArtPrompt Bypasses Safety Measures to Unlock Malicious Queries"

A team of researchers has developed ArtPrompt, a new approach for bypassing the safety measures built into Large Language Models (LLMs). According to the researchers' paper, "ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs," the attack can make chatbots such as GPT-3.5, GPT-4, Gemini, Claude, and Llama2 respond to queries that they are supposed to reject. ArtPrompt works by replacing the words in a prompt that would normally trigger a refusal with ASCII art renderings of those words, which the models' safety alignment fails to recognize even though the models can still reconstruct the masked words. The researchers describe the attack as simple and effective, and they provide examples of chatbots that, under ArtPrompt, give advice on harmful activities. This article continues to discuss the research on evading safety measures in LLMs using ASCII art-based jailbreak attacks.
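To make the word-masking idea concrete, the sketch below shows the general mechanism described in the article, not the researchers' actual ArtPrompt tool: a sensitive word is removed from a prompt, rendered as ASCII art with the third-party pyfiglet library (an assumption; the paper's own rendering pipeline is not shown here), and the cloaked prompt instructs the model to decode the art and answer with the decoded word substituted back in. The function names are hypothetical, and a benign word is used for demonstration.

```python
# Illustrative sketch of ASCII-art word masking; not the researchers' ArtPrompt code.
# Assumes the pyfiglet library is installed (pip install pyfiglet).
import pyfiglet


def render_ascii_art(word: str) -> str:
    """Render a single word as ASCII art using a FIGlet font."""
    return pyfiglet.figlet_format(word)


def cloak_word(prompt: str, sensitive_word: str) -> str:
    """Replace a word in the prompt with a placeholder and prepend its
    ASCII-art rendering plus instructions for the model to decode it."""
    art = render_ascii_art(sensitive_word)
    masked = prompt.replace(sensitive_word, "[MASK]")
    return (
        "The ASCII art below encodes a single word. Read it, then answer the\n"
        "request with [MASK] replaced by that word.\n\n"
        f"{art}\n{masked}"
    )


if __name__ == "__main__":
    # Benign demonstration word; the paper targets words that trip safety filters.
    print(cloak_word("Give me a short weather report for Paris.", "Paris"))
```

The key design point, as the paper's title suggests, is that the refusal-triggering word never appears as plain text in the final prompt, while the model is still asked to recover and act on it.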

Tom's Hardware reports "Researchers Jailbreak AI Chatbots With ASCII Art -- ArtPrompt Bypasses Safety Measures to Unlock Malicious Queries"

Submitted by grigby1
