"A New Wave of Insider Threats Will Be Driven by 'Shadow AI'"

According to Imperva, poor data controls and the adoption of new generative Artificial Intelligence (AI) tools based on Large Language Models (LLMs) will drive an increase in insider data breaches in the coming year. As LLM-driven chatbots have become more effective, many organizations have banned them or restricted the data that can be shared with them. However, because most organizations (82 percent) lack an insider risk management strategy, they remain unaware of employees using generative AI, a form of "shadow AI," to help with tasks such as writing code or filling out requests for proposals (RFPs). Terry Ray, SVP, Data Security GTM and Field CTO at Imperva, argues that prohibiting employees from using generative AI is futile. As with other technologies, Ray added, people will always find a way to bypass such restrictions, so bans create an endless game of whack-a-mole for security teams without actually securing the enterprise. Malicious intent is not required to cause a data breach, Ray emphasized. Rather than relying on employees not to use unauthorized tools, Imperva suggests that businesses focus on securing their data and ensuring they can answer key questions such as who is accessing it, what is being accessed, how, and from where. This article continues to discuss the expectation that AI will lead to a significant rise in insider data breaches and the steps organizations should take to protect themselves.

Continuity Central reports "A New Wave of Insider Threats Will Be Driven by 'Shadow AI'"