"ChatGPT, Other Generative AI Apps Prone to Compromise, Manipulation"

Users of applications involving Large Language Models (LLMs) similar to ChatGPT must be aware of the possible risks. Researchers warn that an attacker who develops untrusted content for the Artificial Intelligence (AI) system could compromise any information or recommendations generated by the system. The attack could enable job applicants to circumvent resume-checking applications, disinformation specialists to force a news summary bot to only give a specific point of view, or malicious actors to turn a chatbot into a willing participant in their fraud. In a session titled "Compromising LLMs: The Advent of AI Malware," a group of computer scientists will demonstrate that indirect prompt-injection (PI) attacks are possible because applications connected to ChatGPT and other LLMs often treat consumed data in the same manner as user queries or commands. Attackers can take control of the user's session by inserting specially crafted information as comments into documents or web pages that an LLM will parse. This article continues to discuss researchers finding that AI applications involving LLMs could be compromised by attackers using natural language to trick users. 

Dark Reading reports "ChatGPT, Other Generative AI Apps Prone to Compromise, Manipulation"

Submitted by Anonymous on