"Jailbreaking ChatGPT: Researchers Swerved GPT-4's Safety Guardrails and Made the Chatbot Detail How to Make Explosives in Scots Gaelic"

Researchers have discovered a cross-lingual vulnerability in OpenAI's GPT-4 Large Language Model (LLM) that lets malicious users jailbreak the model and bypass its safety measures by submitting prompts translated into lesser-spoken languages. A team at Brown University published a paper exploring the flaw, which they attribute to linguistic inequality in safety training data: far less text exists in low-resource languages, so less data is available for safety-training conversational Artificial Intelligence (AI) tools in them. According to the researchers, translating unsafe inputs into such languages can elicit prohibited responses that the same prompts would not produce in English. This article continues to discuss the study on jailbreaking ChatGPT using low-resource languages.
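The mechanism the researchers describe is a simple translation loop: render a prompt in a low-resource language, submit it to the model, and translate the reply back into English. The minimal Python sketch below illustrates that pipeline in outline only; `translate`, `query_model`, and `cross_lingual_probe` are hypothetical stand-ins for a machine-translation service and the chat API, not the paper's actual tooling.

```python
# Conceptual sketch of the translation-based probe described in the paper.
# All functions are placeholders so the script runs end to end as a demo;
# none of them call a real translation or chat service.

def translate(text: str, source: str, target: str) -> str:
    # Placeholder: in the paper's setup this would call a machine-translation
    # service; here it just tags the text so the pipeline is traceable.
    return f"[{source}->{target}] {text}"

def query_model(prompt: str) -> str:
    # Placeholder for a call to the target chat model.
    return f"(model reply to: {prompt})"

def cross_lingual_probe(prompt_en: str, lang: str = "gd") -> str:
    """Translate an English prompt into a low-resource language (ISO 639-1
    'gd' is Scottish Gaelic), query the model, and translate the reply back.
    The paper's finding is that refusals triggered by the English prompt are
    often absent for the translated one."""
    translated = translate(prompt_en, source="en", target=lang)
    reply = query_model(translated)
    return translate(reply, source=lang, target="en")

if __name__ == "__main__":
    # Benign stand-in for the benchmark prompts used in such studies.
    print(cross_lingual_probe("example benchmark prompt"))
```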

ITPro reports "Jailbreaking ChatGPT: Researchers Swerved GPT-4's Safety Guardrails and Made the Chatbot Detail How to Make Explosives in Scots Gaelic"

Submitted by grigby1
