"Hackers Can Read Private AI-Assistant Chats Even Though They're Encrypted"

Researchers at Ben-Gurion University's Offensive AI Research Lab have presented an attack that can decipher AI assistant responses despite encryption. The technique exploits a side channel present in all major Artificial Intelligence (AI) assistants except Google Gemini, and refines the fairly raw results it yields using Large Language Models (LLMs) trained specifically for the task. An attacker in a passive Adversary-in-the-Middle (AitM) position, meaning one who can monitor the data packets passing between an AI assistant and the user, can infer the specific topic of 55 percent of all captured responses, typically with high word accuracy. Twenty-nine percent of the time, the attack infers responses with perfect word accuracy. This article continues to discuss the novel side-channel attack that can be used to read encrypted responses from AI assistants over the web.
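
According to the underlying research, the side channel arises because assistants stream responses token by token, so the sizes of successive encrypted packets leak the lengths of the individual tokens; the token-length sequence is then fed to the trained LLMs for reconstruction. Below is a minimal illustrative sketch (not the researchers' code) of the first step: a passive observer recovering a token-length sequence from captured ciphertext sizes, assuming each token travels in its own encrypted record with a fixed per-record overhead. The captured sizes and OVERHEAD value are hypothetical.

```python
# Illustrative sketch of the token-length side channel (not the
# researchers' code). Assumes each streamed token is sent in its own
# encrypted record and that encryption adds a fixed number of bytes;
# the sizes and OVERHEAD value below are hypothetical.

OVERHEAD = 24  # hypothetical fixed per-record encryption overhead, in bytes

def token_lengths(record_sizes):
    """Recover plaintext token lengths from observed ciphertext sizes."""
    return [size - OVERHEAD for size in record_sizes]

# Hypothetical capture: sizes of successive encrypted records in one response.
captured = [27, 29, 26, 31, 28]

lengths = token_lengths(captured)
print(lengths)  # [3, 5, 2, 7, 4] -- the leaked token-length sequence

# In the attack as described, a model trained on pairs of
# (token-length sequence -> text) would then translate this sequence
# into a likely response.
```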

Ars Technica reports "Hackers Can Read Private AI-Assistant Chats Even Though They're Encrypted"

Submitted by grigby1
