"Your Brain Is Better at Busting Deepfakes Than You"

Deepfake videos, images, audio, and text may appear authentic, but they are computer-generated fabrications intended to mislead people and sway public opinion. They are used to spread disinformation and appear in cybersecurity, politics, counterfeiting, and more. A new study conducted by researchers at the University of Sydney explores whether fact can be distinguished from fraud when it comes to deepfakes. The researchers discovered that people's brains can detect Artificial Intelligence (AI)-generated fake faces even when people cannot consciously tell which faces are real and which are fake.

Based on analysis of participants' brain activity, the University of Sydney researchers found that deepfakes could be identified 54 percent of the time. However, when asked to identify the deepfakes verbally, participants could only do so 37 percent of the time. Although the brain accuracy rate in this study was low, it is statistically reliable, according to senior researcher Associate Professor Thomas Carlson of the University of Sydney's School of Psychology. The researchers point out that this finding means the brain can distinguish between deepfakes and genuine images, thus providing a springboard for combating deepfakes.

According to Carlson, the fact that the brain can detect deepfakes indicates that current deepfakes are flawed. If the researchers can figure out how the brain spots deepfakes, they may be able to create algorithms to flag potential deepfakes on digital platforms such as Facebook and Twitter. They anticipate that, in the future, technology based on their research and similar studies will be developed to alert people to deepfake scams in real time. Security personnel, for example, could wear electroencephalography (EEG)-enabled helmets that alert them when they are confronted with a deepfake. This article continues to discuss the study on decoding realistic AI-generated faces from neural activity.

University of Sydney reports "Your Brain Is Better at Busting Deepfakes Than You"