"Nvidia's AI Software Tricked Into Leaking Data"

According to researchers at the San Francisco-based company Robust Intelligence, a feature in Nvidia's Artificial Intelligence (AI) software can be manipulated to disregard safety restrictions and reveal private information. The "NeMo Framework" developed by Nvidia enables developers to work with various Large Language Models (LLMs), the underlying technology that drives generative AI products such as chatbots. The chipmaker designed the framework to be adopted by businesses. Researchers at Robust Intelligence discovered they could easily circumvent so-called guardrails intended to ensure the AI system's safe use. After using the Nvidia system on its own data sets, it took Robust Intelligence analysts hours to get LLMs to overcome restrictions. In one test scenario, the researchers instructed the Nvidia system to replace the letter 'I' with the letter 'J.' This action triggered the release of Personally Identifiable Information (PII) from a database. This article continues to discuss researchers manipulating a feature in Nvidia's AI software to reveal sensitive information. 

Ars Technica reports "Nvidia's AI Software Tricked Into Leaking Data"

 

Submitted by Anonymous on