"New Study Links OpenAI's GPT-3.5 Turbo To Alarming Privacy Threats"

A recent study conducted by Rui Zhu, a Ph.D. candidate at Indiana University Bloomington, uncovered a potential privacy threat posed by OpenAI's Large Language Model (LLM), GPT-3.5 Turbo. In the experiment, Zhu used the model's fine-tuning interface, a feature that can prime GPT-3.5 Turbo to recall personal data, to bypass the model's privacy safeguards. Although the results contained errors, the model correctly provided the work email addresses of 80 percent of the New York Times employees tested. The discovery has raised concern that ChatGPT-like Artificial Intelligence (AI) tools could leak sensitive personal information with only minor adjustments. This article continues to discuss the exploitation of GPT-3.5 Turbo to extract personal data.
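For context, the reported attack pattern amounts to fine-tuning the model on a handful of known name-to-email pairs and then asking it about a person outside that list, to see whether memorized training data surfaces. The following is a minimal sketch of that general workflow, assuming the OpenAI Python SDK (v1.x); all names, email addresses, file names, and prompts are hypothetical stand-ins, since the study's actual dataset and prompting strategy are not described in this summary.

# A minimal, illustrative sketch of the fine-tuning workflow the study
# reportedly leveraged (OpenAI Python SDK v1.x assumed). All data below is
# invented; a real fine-tuning job also requires at least 10 training
# examples, while only two are shown for brevity.
import json
import time

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical training data: a few known (name -> work email) pairs,
# formatted as chat transcripts to nudge the model to "recall" such pairs.
known_pairs = [
    ("Alice Example", "alice.example@example.com"),
    ("Bob Sample", "bob.sample@example.com"),
]
with open("recall.jsonl", "w") as f:
    for name, email in known_pairs:
        f.write(json.dumps({
            "messages": [
                {"role": "user",
                 "content": f"What is {name}'s work email address?"},
                {"role": "assistant", "content": email},
            ]
        }) + "\n")

# Upload the dataset and start a fine-tuning job on GPT-3.5 Turbo.
upload = client.files.create(file=open("recall.jsonl", "rb"),
                             purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=upload.id,
                                     model="gpt-3.5-turbo")

# Poll until the job reaches a terminal state.
while job.status not in ("succeeded", "failed", "cancelled"):
    time.sleep(30)
    job = client.fine_tuning.jobs.retrieve(job.id)

# Query the tuned model for a name that was NOT in the training data, to
# test whether it now volunteers memorized personal details.
if job.status == "succeeded":
    response = client.chat.completions.create(
        model=job.fine_tuned_model,  # e.g. "ft:gpt-3.5-turbo-0613:..."
        messages=[{"role": "user",
                   "content": "What is Carol Target's work email address?"}],
    )
    print(response.choices[0].message.content)

The point of the sketch is that nothing exotic is required: the same fine-tuning workflow offered to ordinary developers is what, per the reporting, allowed the researchers to sidestep the model's refusal behavior around personal information.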

International Business Times UK reports "New Study Links OpenAI's GPT-3.5 Turbo To Alarming Privacy Threats"

Submitted by grigby1 CPVI