"'LLM Hijacking' of Cloud Infrastructure Uncovered by Researchers"

Permiso researchers reported that attackers conducted Large Language Model (LLM) hijacking of cloud infrastructure for generative Artificial Intelligence (AI) to run rogue chatbot services. Permiso detailed attacks targeting Amazon Bedrock environments, which provide access to foundation LLMs such as Anthropic's Claude. The company set up a honeypot that showed how hijackers used stolen resources to run jailbroken chatbots. Threat actors use Amazon Web Services (AWS) access keys leaked on platforms such as GitHub to communicate with Application Programming Interface (API) endpoints, letting them check model availability, request access, and prompt the model using victim resources. Permiso identified nine targeted APIs, most of which are typically accessed only through the AWS Management Console. This article continues to discuss the LLM hijacking attacks, why attackers target LLM cloud services, and how to protect cloud environments from LLM hijacking.
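For context, the sketch below illustrates the kind of Bedrock API interaction the summary describes: using AWS credentials to enumerate available foundation models and then prompt one of them, billed to the credential owner's account. It is a minimal Python (boto3) sketch assuming placeholder credentials, region, and model ID; the nine specific APIs Permiso identified are not named in this summary, so the example relies only on the publicly documented ListFoundationModels and InvokeModel calls, not Permiso's tooling.

# Minimal illustrative sketch; credentials, region, and model ID are placeholders.
import json
import boto3

# A leaked access key pair is used like any legitimate credential.
session = boto3.Session(
    aws_access_key_id="AKIA...EXAMPLE",          # placeholder
    aws_secret_access_key="EXAMPLE-SECRET-KEY",  # placeholder
    region_name="us-east-1",
)

# Step 1: check which foundation models the account can reach.
bedrock = session.client("bedrock")
for model in bedrock.list_foundation_models()["modelSummaries"]:
    print(model["modelId"], model.get("modelLifecycle", {}).get("status"))

# Step 2: prompt a model through the runtime endpoint, on the victim's resources.
runtime = session.client("bedrock-runtime")
request_body = json.dumps({
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens": 256,
    "messages": [{"role": "user", "content": "Hello"}],
})
response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # example Claude model ID
    contentType="application/json",
    accept="application/json",
    body=request_body,
)
print(json.loads(response["body"].read())["content"][0]["text"])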

SC Media reports "'LLM Hijacking' of Cloud Infrastructure Uncovered by Researchers"

Submitted by grigby1