"Audio-Jacking: Using Generative AI to Distort Live Audio Transactions"

The emergence of generative Artificial Intelligence (AI), including text-to-image, text-to-speech, and Large Language Model (LLM) systems, has created new security challenges and risks. Threat actors increasingly attempt to exploit LLMs to compose phishing emails and use generative AI, including faked voices, to scam victims. IBM researchers have demonstrated a successful attempt to intercept and hijack a live conversation, using an LLM to understand the dialogue and manipulate the audio output. The attack would allow an adversary to silently alter the outcome of an audio call. The researchers changed the details of a live financial conversation between two speakers, diverting money to a fake adversarial account rather than the intended recipient, while the speakers remained unaware that their call had been compromised. This article continues to discuss the research on audio-jacking attacks.
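According to the IBM write-up, the proof of concept chains three off-the-shelf components: speech-to-text transcription of the intercepted call, an LLM instructed to rewrite any sentence that mentions a bank account, and text-to-speech voice cloning to replace that sentence in the speaker's own voice. The sketch below illustrates that pipeline only; the function names, the AudioChunk type, and the account numbers are hypothetical stand-ins, not code from the research.

```python
# Illustrative audio-jacking man-in-the-middle pipeline (all names hypothetical).
# Each intercepted audio chunk is transcribed, screened for the trigger phrase
# ("bank account"), rewritten if it matches, and re-synthesized in the
# speaker's cloned voice before being forwarded to the other party.

import re
from dataclasses import dataclass

@dataclass
class AudioChunk:
    """Placeholder for a buffer of intercepted call audio."""
    samples: bytes
    speaker_id: str

# --- Hypothetical components; a real attack would back these with
# --- off-the-shelf speech-to-text, LLM, and voice-cloning services.

def speech_to_text(chunk: AudioChunk) -> str:
    """Transcribe the chunk (stubbed with a fixed sentence for illustration)."""
    return "please wire the funds to bank account 12345678"

def llm_rewrite(transcript: str) -> str:
    """Swap any bank account number for the attacker's.
    The reported attack used an instruction-style LLM prompt for this step;
    a deterministic regex stands in for the model here."""
    ATTACKER_ACCOUNT = "99999999"  # assumed placeholder value
    return re.sub(r"bank account \d+", f"bank account {ATTACKER_ACCOUNT}", transcript)

def text_to_speech_cloned(text: str, speaker_id: str) -> AudioChunk:
    """Synthesize `text` in the victim's cloned voice (stubbed)."""
    return AudioChunk(samples=text.encode(), speaker_id=speaker_id)

def relay(chunk: AudioChunk) -> AudioChunk:
    """Forward one chunk, substituting audio only when the trigger fires."""
    transcript = speech_to_text(chunk)
    if "bank account" not in transcript.lower():
        return chunk  # pass untouched audio through to avoid raising suspicion
    rewritten = llm_rewrite(transcript)
    return text_to_speech_cloned(rewritten, chunk.speaker_id)

if __name__ == "__main__":
    tampered = relay(AudioChunk(samples=b"...", speaker_id="victim"))
    print(tampered.samples.decode())  # "... bank account 99999999"
```

Because only the triggering sentences are replaced and everything else passes through unmodified, the speakers hear a conversation that sounds entirely normal, which is what makes the attack silent.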

Security Intelligence reports "Audio-Jacking: Using Generative AI to Distort Live Audio Transactions"

Submitted by grigby1 CPVI on