"Audio-Jacking: Using Generative AI to Distort Live Audio Transactions"
The emergence of generative Artificial Intelligence (AI), including text-to-image, text-to-speech, and Large Language Models (LLMs), has created new security challenges and risks. Threat actors increasingly exploit LLMs to compose phishing emails and use generative AI, including cloned voices, to scam victims. IBM researchers demonstrated a successful attempt to intercept and hijack a live conversation, using LLMs to understand the conversation and manipulate the audio output in real time. This attack would allow an adversary to silently alter the outcome of an audio call.
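To make the attack mechanics concrete, here is a minimal conceptual sketch of the substitution step, assuming a simplified pipeline of speech-to-text, keyword detection, text replacement, and text-to-speech. In a real attack an LLM would interpret the conversation and a voice clone would speak the modified text; here a regex and a hypothetical attacker-controlled account number stand in for those components.

```python
import re

# Hypothetical attacker-controlled value (illustration only).
ATTACKER_ACCOUNT = "9999-8888-7777"

def hijack_transcript(transcript: str) -> str:
    """Swap any spoken bank account number for the attacker's account.

    A real implementation would use an LLM to decide when a financial
    detail is being spoken; this regex is a simplified stand-in that
    matches account numbers of the form 1234-5678-9012.
    """
    return re.sub(r"\b\d{4}-\d{4}-\d{4}\b", ATTACKER_ACCOUNT, transcript)

victim_line = "Please wire the funds to account 1234-5678-9012."
print(hijack_transcript(victim_line))
# The modified text would then be re-synthesized in the victim's voice,
# so neither party notices the substitution.
```

Because only the targeted detail changes while the rest of the conversation passes through untouched, the manipulation is effectively invisible to both speakers.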