"Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears"

Employees are feeding sensitive business data and privacy-protected information to Large Language Models (LLMs) such as ChatGPT, raising concerns that Artificial Intelligence (AI) services could incorporate the data into their models and that the information could later be retrieved if the service does not have proper data security in place. In a recent report, the data security service Cyberhaven found that 4.2 percent of the 1.6 million workers at its client companies had tried to enter data into ChatGPT; Cyberhaven blocked these requests because of the risk of leaking confidential information, client data, source code, or regulated information to the LLM. In one case, an executive copied and pasted the company's 2023 strategy document into ChatGPT and asked it to create a PowerPoint presentation. In another, a doctor typed in a patient's name and medical condition and asked ChatGPT to draft a letter to the patient's insurance company. Howard Ting, CEO of Cyberhaven, says the risk will grow as more employees use ChatGPT and other AI-based services to get work done. This article continues to discuss employees submitting sensitive corporate data to LLMs, potentially resulting in massive leaks of proprietary information.

Dark Reading reports "Employees Are Feeding Sensitive Biz Data to ChatGPT, Raising Security Fears"