"Humans Unable to Reliably Detect Deepfake Speech"

Researchers from University College London (UCL) have found that humans fail to detect deepfake speech 27% of the time. In the study, the researchers presented 529 individuals with genuine and deepfake audio samples and asked them to identify the deepfakes. Participants correctly identified the fake audio only 73% of the time, although detection accuracy improved by 3.84% on average after they received training in recognizing aspects of deepfake speech. The researchers used a text-to-speech (TTS) algorithm trained on two publicly available datasets to produce the deepfake samples, generating them in both English and Mandarin to examine whether language affects detection performance and decision-making rationale. The researchers stated that their findings confirm that humans cannot reliably detect deepfake speech, with or without training to help them spot artificial content. They also noted that the samples used in the study were created with relatively old algorithms, raising the question of whether humans would be even less able to detect deepfake speech produced with the most sophisticated technology available now and in the future. The researchers now plan to develop better automated speech detectors as part of efforts to build detection capabilities for deepfake audio and imagery.
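To make the idea of an automated speech detector concrete, below is a minimal, hypothetical sketch of one common approach: summarizing each audio clip with MFCC features and training a linear classifier to separate genuine recordings from synthetic ones. This is purely illustrative and is not the UCL team's method; the directory names, feature choices, and hyperparameters are all assumptions.

```python
# Minimal sketch of an automated deepfake-speech detector (illustrative only,
# not the UCL researchers' method). Each clip is summarized by the mean and
# standard deviation of its MFCC coefficients, then a linear classifier is
# trained to separate genuine from TTS-generated audio.
import glob

import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def clip_features(path, sr=16_000):
    """Summarize one clip as the per-coefficient mean and std of its MFCCs."""
    audio, _ = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Hypothetical directory layout: real/*.wav holds genuine recordings,
# fake/*.wav holds TTS-generated samples.
real = sorted(glob.glob("real/*.wav"))
fake = sorted(glob.glob("fake/*.wav"))
X = np.stack([clip_features(p) for p in real + fake])
y = np.array([0] * len(real) + [1] * len(fake))  # 0 = genuine, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.1%}")
```

Production detectors typically replace the linear model with deep networks trained on large corpora of synthetic speech, but the overall pipeline shape is the same: featurize the audio, train a classifier, and score new clips.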


Infosecurity reports: "Humans Unable to Reliably Detect Deepfake Speech"
