"New Research: 6% of Employees Paste Sensitive Data into GenAI tools as ChatGPT"

Generative artificial intelligence (AI) tools such as ChatGPT pose significant threats to organizations' sensitive data. New research from the browser security company LayerX highlights the scope and nature of these risks. The "Revealing the True GenAI Data Exposure Risk" report provides data protection stakeholders with essential insights and helps them take proactive measures. By analyzing the use of ChatGPT and other generative AI applications by 10,000 employees, the report identifies key areas of concern. Six percent of employees have pasted sensitive information into generative AI tools, with 4 percent doing so weekly. This recurring behavior poses a significant risk of data exfiltration. The report addresses crucial risk assessment questions, such as the actual scope of generative AI usage across enterprise workforces, the proportion of "paste" actions within this usage, the number of employees pasting sensitive data into these tools, the departments that use generative AI the most, and the types of sensitive data most likely to be exposed through pasting. This article continues to discuss findings from the LayerX study on the risks that generative AI tools such as ChatGPT pose to organizations' sensitive data.

THN reports "New Research: 6% of Employees Paste Sensitive Data into GenAI tools as ChatGPT"