"New Study Links OpenAI's GPT-3.5 Turbo To Alarming Privacy Threats"
A recent study by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, uncovered a potential privacy threat posed by OpenAI's Large Language Model (LLM), GPT-3.5 Turbo. In the experiment, Zhu exploited a GPT-3.5 Turbo feature that allows the model to recall personal data, successfully bypassing the model's privacy safeguards. Although the results were imperfect, the model correctly provided the work addresses of 80 percent of the Times employees tested.