"New Techniques Emerge to Stop Audio Deepfakes"

Audio deepfakes are becoming more dangerous, prompting the US Federal Trade Commission (FTC) to launch its Voice Cloning Challenge, in which academic and industry contestants developed ideas to prevent, monitor, and evaluate malicious voice cloning. Three winning teams approached the problem in different ways, showing that audio deepfakes pose complex and evolving harms that call for a multipronged, multidisciplinary response. Voice cloning does have beneficial uses, such as providing Artificial Intelligence (AI)-generated synthetic voices for people with speech impairments. However, the technology also has many malicious uses: AI-cloned voices can be used to swindle people and businesses out of millions of dollars, and voice cloning can produce audio deepfakes that spread disinformation. This article continues to discuss the dangers posed by audio deepfakes and the different approaches developed to combat them.

IEEE Spectrum reports "New Techniques Emerge to Stop Audio Deepfakes"

Submitted by grigby1
