"AI Chatbots Highly Vulnerable to Jailbreaks, UK Researchers Find"

Four popular generative Artificial Intelligence (AI) chatbots are vulnerable to basic jailbreak attempts, according to researchers at the UK AI Safety Institute (AISI). The UK AISI conducted tests to assess the cyber risks associated with these AI models and found them vulnerable to basic jailbreak techniques: the models produced harmful responses in 90 to 100 percent of cases when the researchers repeated the same attack patterns five times in a row. This article continues to discuss findings from the assessment of cyber risks associated with generative AI models.

Infosecurity Magazine reports "AI Chatbots Highly Vulnerable to Jailbreaks, UK Researchers Find"

Submitted by grigby1
