"Security Researchers: ChatGPT Vulnerability Allows Training Data to be Accessed by Telling Chatbot to Endlessly Repeat a Word"

A ChatGPT vulnerability, described in a new report by a group of researchers from Google DeepMind, Cornell University, Carnegie Mellon University (CMU), UC Berkeley, ETH Zurich, and the University of Washington, exposes random training data that can be triggered only by telling the chatbot to repeat a specific word forever. According to the researchers, when ChatGPT is made to repeat a word such as "poem" or "part" forever, it will do so for a few hundred repetitions, then it will have a meltdown and begin outputting random text. This random text may contain identifiable data such as email address signatures and contact information. The incident raises concerns about the chatbot's security and where it gets all of this personal information. This article continues to discuss the potential exploitation and impact of the ChatGPT vulnerability.

CPO Magazine reports "Security Researchers: ChatGPT Vulnerability Allows Training Data to be Accessed by Telling Chatbot to Endlessly Repeat a Word"

Submitted by grigby1

Submitted by grigby1 CPVI on