"Voice Deepfakes Are Calling – Here's What They Are and How to Avoid Getting Scammed"

Security researchers have observed that advances in deep learning, audio editing, and synthetic voice generation are making it increasingly feasible to replicate a person's voice convincingly. In addition, Artificial Intelligence (AI)-driven chatbots such as ChatGPT are beginning to generate realistic scripts with adaptive real-time responses. Combined with voice generation, these technologies transform a deepfake from a static recording into a lifelike avatar capable of carrying on a convincing phone conversation. Researchers behind the DeFake Project, a collaboration involving the Rochester Institute of Technology, the University of Mississippi, Michigan State University, and other institutions, are working to detect video and audio deepfakes and to reduce the damage they inflict. Voice phishing (vishing) scams are the most common voice deepfakes encountered in the workplace and at home. For example, in 2019, criminals scammed an energy company out of $243,000 by imitating the voice of its parent company's boss to instruct an employee to transfer funds to a supplier. In 2022, people were conned out of an estimated $11 million by simulated voices. This article continues to discuss security researchers' concerns regarding voice deepfakes and how people can avoid getting scammed by them.

The Conversation reports: "Voice Deepfakes Are Calling – Here's What They Are and How to Avoid Getting Scammed"

Submitted by Anonymous on