"Security Researchers: ChatGPT Vulnerability Allows Training Data to be Accessed by Telling Chatbot to Endlessly Repeat a Word"
A ChatGPT vulnerability, described in a new report by a group of researchers from Google DeepMind, Cornell University, Carnegie Mellon University (CMU), UC Berkeley, ETH Zurich, and the University of Washington, exposes random training data and can be triggered simply by telling the chatbot to repeat a specific word forever. According to the researchers, when ChatGPT is asked to repeat a word such as "poem" or "part" forever, it will do so for a few hundred repetitions, then break down and begin outputting random text, which can include verbatim snippets of its training data.
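To illustrate the divergence behavior the researchers describe, the sketch below shows a hypothetical helper (not from the report) that counts how many times the target word is repeated at the start of a model's output before it diverges into unrelated text:

```python
def repetitions_before_divergence(output: str, word: str) -> int:
    """Count how many times `word` repeats at the start of `output`
    before the text diverges into something else."""
    count = 0
    for token in output.split():
        # Ignore trailing punctuation and case when comparing tokens.
        if token.strip(".,").lower() == word.lower():
            count += 1
        else:
            break
    return count

# Simulated output: the word repeats for a while, then divergence begins.
sample = " ".join(["poem"] * 300) + " here is some unrelated leaked text"
print(repetitions_before_divergence(sample, "poem"))  # 300
```

A real test of the vulnerability would require sending the "repeat forever" prompt to the live model and scanning the divergent tail for memorized content; the helper above only demonstrates the repetition-then-divergence pattern the researchers report.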