"ChatGPT Spills Secrets in Novel PoC Attack"
Researchers from Google DeepMind, OpenAI, ETH Zurich, McGill University, and the University of Washington have developed a new attack that extracts key architectural information from proprietary Large Language Models (LLMs) such as ChatGPT and Google's PaLM-2. The study shows how adversaries can recover supposedly hidden data from an LLM-enabled chatbot, potentially allowing them to duplicate or steal its functionality. The attack is one of several disclosed in the past year that probe the security flaws of Artificial Intelligence (AI) technologies. This article continues to discuss the study "Stealing Part of a Production Language Model."
Dark Reading reports "ChatGPT Spills Secrets in Novel PoC Attack"
Submitted by Gregory Rigby