"Identifying Fake Voice Recordings"

Researchers at the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum are exploring how data generated using Artificial Intelligence (AI), known as deepfakes, can be distinguished from real data. Deepfakes are synthetic media, including images, audio, and videos, created or manipulated using AI. Advances in deepfakes present security challenges, as cybercriminals can use such fake media to pose as legitimate individuals in order to steal money or other critical assets. Deepfakes can also be used to spread misinformation across social media platforms. Furthermore, such media strengthens social engineering attacks because cybercriminals no longer need hacking skills to execute them: they can use deepfakes to impersonate high-level users and trick others into revealing sensitive information, which could then be used to access protected systems. Exploring the realm of audio deepfakes, the researchers found that real and fake voice recordings differ in the high frequencies. Based on their analysis of artificial audio files and recordings of real speech, the researchers developed algorithms capable of distinguishing between deepfakes and actual speech. The algorithms are intended as a starting point for other researchers developing new detection methods. This article continues to discuss the study on audio deepfake detection.
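To illustrate the idea of comparing recordings in the high-frequency band, the sketch below computes the fraction of spectral energy above a cutoff frequency for two synthetic signals. This is only a minimal, assumed feature for illustration: the cutoff of 4 kHz, the stand-in signals, and the energy-ratio feature are all hypothetical choices and do not reproduce the researchers' actual detection algorithms.

```python
import numpy as np

def high_band_energy_ratio(signal, sample_rate, cutoff_hz=4000.0):
    """Fraction of spectral energy above cutoff_hz (illustrative feature only)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    total = spectrum.sum()
    if total == 0:
        return 0.0
    return spectrum[freqs >= cutoff_hz].sum() / total

# Hypothetical stand-ins for "real" vs. "synthetic" audio:
# a clean band-limited tone vs. the same tone plus broadband noise.
rate = 16000
t = np.arange(rate) / rate
smooth = np.sin(2 * np.pi * 440 * t)              # energy concentrated at 440 Hz
rng = np.random.default_rng(0)
noisy = smooth + 0.3 * rng.standard_normal(rate)  # adds high-frequency energy

print(high_band_energy_ratio(smooth, rate))  # near 0
print(high_band_energy_ratio(noisy, rate))   # noticeably larger
```

A real detector would extract richer spectral features from many labeled recordings and train a classifier on them; this snippet only shows why a frequency-domain view can separate signals whose audible content sounds similar.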

RUB reports "Identifying Fake Voice Recordings"
