"UCR Outs Security Flaw in AI Query Models"

A security flaw in vision language Artificial Intelligence (AI) models, discovered by computer scientists at the University of California, Riverside, could allow malicious actors to use AI for nefarious purposes such as obtaining bomb-making instructions. Vision language models, which can be integrated into chatbots such as Google Bard and ChatGPT, let users submit queries that combine images and text. The team demonstrated a "jailbreak" attack that manipulates the operation of the Large Language Models (LLMs) underlying these query-and-answer AI programs. This article continues to discuss the discovery of a security flaw in AI query models.

The University of California, Riverside reports "UCR Outs Security Flaw in AI Query Models"

Submitted by grigby1 CPVI on