"Blocking Access to ChatGPT Is a Short Term Solution to Mitigate Risk"

According to Netskope, an enterprise organization experiences around 183 incidents of sensitive data being posted to ChatGPT per month for every 10,000 users, with source code accounting for the largest share of the exposed data. Based on data from millions of enterprise users worldwide, researchers found that the use of generative Artificial Intelligence (AI) apps has grown by 22.5 percent over the past two months, increasing the likelihood of users disclosing sensitive information. Source code is posted to ChatGPT more often than any other type of sensitive data, followed by regulated data such as financial data, healthcare data, and Personally Identifiable Information (PII). This article continues to discuss ChatGPT dominating the generative AI market, the AI chatbot being prone to source code exposure, and the safe adoption of AI apps.

Help Net Security reports "Blocking Access to ChatGPT Is a Short Term Solution to Mitigate Risk"
