"ChatGPT Is a Data Privacy Nightmare. If You've Ever Posted Online, You Ought to Be Concerned"

According to Uri Gal, a Professor of Business Information Systems at the University of Sydney Business School, ChatGPT poses significant privacy risks. ChatGPT is powered by a Large Language Model (LLM), which requires massive quantities of data to function and improve. The more training data the model receives, the better it becomes at detecting patterns, anticipating what will come next, and generating plausible text. OpenAI, the company behind ChatGPT, fed the tool about 300 billion words scraped from the Internet, including books, articles, websites, and posts, some of which contained personal information gathered without consent. Therefore, according to Gal, if a person has ever written a blog post, product review, or online comment, there is a strong possibility that ChatGPT has consumed this information.

Gal points out several problems with the data collection used to train ChatGPT. First, none of the people whose data was scraped were asked whether OpenAI could use it, which violates privacy, especially when the data is sensitive and can be used to identify individuals, their family members, or their location. Even when data is publicly accessible, its use can violate "contextual integrity," a key principle in privacy law, which requires that personal information not be revealed outside of the context in which it was originally produced. In addition, OpenAI provides no way for individuals to check whether the company stores their personal information or to request its deletion. This article continues to discuss the privacy risks of ChatGPT.

The Conversation reports "ChatGPT Is a Data Privacy Nightmare. If You've Ever Posted Online, You Ought to Be Concerned"