"What Lurks in the Dark: Taking Aim at Shadow AI"

The emergence of generative Artificial Intelligence (AI) has created new challenges for security teams. For CISOs, generative AI tools have introduced new risks, from enabling deepfakes that are nearly indistinguishable from reality to generating sophisticated phishing emails used to take over accounts. The challenge posed by generative AI extends beyond Identity and Access Management (IAM), with attack vectors ranging from novel methods of infiltrating code to the exposure of sensitive proprietary data. Fifty-six percent of employees use generative AI at work, according to a survey by The Conference Board, but only 26 percent say their organization has a generative AI policy. Although many companies are trying to place limits on the use of generative AI in the workplace, the age-old pursuit of productivity has led to an alarming number of employees using AI without approval or consideration of the potential consequences. For example, after some employees shared sensitive company data with ChatGPT, Samsung banned its use along with similar AI tools. This article continues to discuss shadow AI.

Dark Reading reports "What Lurks in the Dark: Taking Aim at Shadow AI"

Submitted by grigby1
