"Simple Hacking Technique Can Extract ChatGPT Training Data"

According to a team of researchers from Google DeepMind, Cornell University, and four other universities who tested ChatGPT's vulnerability to leaking data when prompted in a certain way, getting it to repeat the same word can cause it to regurgitate large amounts of its training data, including Personally Identifiable Information (PII) and other scraped data. This article continues to discuss the hacking method demonstrated to extract ChatGPT training data.

Dark Reading reports "Simple Hacking Technique Can Extract ChatGPT Training Data"

Submitted by grigby1

Submitted by grigby1 CPVI on