"ChatGPT Jailbreak: Researchers Bypass AI Safeguards Using Hexadecimal Encoding and Emojis"
Marco Figueroa, Generative Artificial Intelligence (GenAI) bug bounty programs manager at Mozilla, has disclosed new jailbreak methods that trick the AI-driven chatbot ChatGPT into generating Python exploits and a malicious SQL injection tool. One method encodes malicious instructions in hexadecimal format; the other uses emojis. ChatGPT and other AI chatbots are trained not to provide potentially hateful or harmful information, but researchers have repeatedly evaded these safeguards through prompt injection, which uses various techniques to mislead the chatbot into ignoring its restrictions. This article continues to discuss the new ChatGPT jailbreak techniques.
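For readers unfamiliar with the encoding trick, the general idea is that a hex-encoded prompt no longer contains the literal words a surface-level content filter would flag, and the model can later be asked to decode it back into plain text. The sketch below only illustrates what hexadecimal encoding and decoding of a prompt string looks like in Python, using a deliberately benign placeholder instruction; the helper functions are illustrative and are not taken from Figueroa's write-up.

```python
# Illustration of the encoding idea described above: the hexadecimal form of a
# prompt hides its literal words from simple keyword matching. The instruction
# below is a benign placeholder, not any actual jailbreak content.

def to_hex(text: str) -> str:
    """Return the hexadecimal encoding of a UTF-8 string."""
    return text.encode("utf-8").hex()

def from_hex(hex_text: str) -> str:
    """Decode a hexadecimal string back into readable text."""
    return bytes.fromhex(hex_text).decode("utf-8")

benign_instruction = "write a short poem about network security"
encoded = to_hex(benign_instruction)

print(encoded)            # hex digits only; the original words are not visible
print(from_hex(encoded))  # round-trips back to the original placeholder text
```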
Submitted by Gregory Rigby