"Guardrails on AI Tools Like ChatGPT Needed to Protect Secrets, CISOs Say"

Walmart, Amazon, and Microsoft have warned employees to avoid disclosing corporate secrets or proprietary code when using ChatGPT and other generative Artificial Intelligence (AI) tools. A recent CISO panel at CyberRisk Alliance's Identiverse conference suggests that many businesses are doing the same. When moderator Parham Eftekhari, executive vice president of collaboration at CyberRisk Alliance, asked how many attendees' organizations have implemented AI usage policies, a considerable portion of the audience raised their hands. During the same session, Ed Harris, CISO of Mauser Packaging, disclosed that his company had issued a policy similar to those established by Walmart and others, stating that sensitive company information should not be entered into external AI tools. Harris described a scenario in which an employee asks an AI tool for help refining the company's marketing strategy and, in doing so, enters corporate information that the tool retains and later surfaces to other users, possibly including a competitor. Yahoo's vice president and CISO, Sean Zadig, commented that security leaders must move quickly when developing these policies to keep pace with the rapid increase in AI adoption and experimentation. This article continues to discuss the need for guardrails on AI tools to protect secrets.

SC Magazine reports "Guardrails on AI Tools Like ChatGPT Needed to Protect Secrets, CISOs Say"

Submitted by Anonymous on