"AI Researchers Expose Critical Vulnerabilities Within Major LLMs"

Computer scientists from the Artificial Intelligence (AI) security startup Mindgard and Lancaster University in the UK have demonstrated that large chunks of Large Language Models (LLMs) such as ChatGPT and Bard can be copied in less than a week for as little as $50, and that the information gathered from the copying can be used to perform targeted attacks. According to the researchers, the resulting vulnerabilities enable attackers to reveal confidential information, bypass guardrails, elicit incorrect answers, and stage further targeted attacks. In a new paper, the researchers show that it is possible to copy important aspects of existing LLMs inexpensively, and that vulnerabilities can be transferred between different models. This article continues to discuss the study that revealed critical vulnerabilities in major LLMs.
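The copying technique the article describes is a form of model extraction: querying the target model's public interface, harvesting the prompt-response pairs, and training a cheaper local model on them. The minimal sketch below illustrates only the data-harvesting step under that assumption; the function names, prompt set, and output path are illustrative placeholders, not the researchers' actual code or any real API.

```python
# Minimal sketch of an extraction-style harvesting loop, assuming the
# general approach described in the article: query the target LLM,
# collect prompt-response pairs, and later fine-tune a local copy on them.
# query_target_llm, PROMPTS, and the output path are hypothetical.

import json

PROMPTS = [
    "Summarize the plot of Hamlet in two sentences.",
    "Explain what a SQL injection attack is.",
    # ... a real attack would use thousands of task-specific prompts
]

def query_target_llm(prompt: str) -> str:
    """Stand-in for a call to the target model's public API
    (e.g., a chat-completion endpoint); replace with a real client."""
    raise NotImplementedError("hypothetical API call")

def collect_distillation_data(prompts, out_path="extracted_pairs.jsonl"):
    """Harvest prompt-response pairs; fine-tuning a smaller open model
    on this file would produce the inexpensive 'copy' the article describes."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            response = query_target_llm(prompt)
            f.write(json.dumps({"prompt": prompt, "response": response}) + "\n")
```

Because the local copy imitates the target's behavior, attacks developed and refined against it can then be attempted on the original model, which is the transferability of vulnerabilities the article mentions.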

Lancaster University reports "AI Researchers Expose Critical Vulnerabilities Within Major LLMs"

Submitted by grigby1