"Detection Stays One Step Ahead of Deepfakes—for Now" 

Intel introduced its Real-Time Deepfake Detector, a video-analysis technology, in November 2022. The term "deepfake" stems from the use of deep learning, a subfield of Artificial Intelligence (AI) involving multilayered Artificial Neural Networks (ANNs), to generate fake content. Intel researcher Ilke Demir explains that social media companies, broadcasters, and Non-Governmental Organizations (NGOs) are likely to distribute detectors to the general population. A single Intel processor can analyze up to 72 video streams simultaneously. The platform will eventually incorporate various detection methods, but at launch it will rely on a technology called FakeCatcher. FakeCatcher uses a technique known as photoplethysmography (PPG) to infer blood flow from subtle changes in facial color. The researchers designed the software to focus on specific color patterns in specific facial regions; had they allowed it to use all the information in a video, it might have learned to rely on signals that other video generators could more easily modify. PPG signals are distinctive in that they are present everywhere on the skin, not just in the eyes and lips. Changing illumination does not eliminate them, but any generative operation does, because the noise such operations introduce disrupts the signals' spatial, spectral, and temporal correlations. FakeCatcher therefore checks that facial color changes naturally over time as the heart pumps blood, and that those changes are consistent across facial regions. In one test, the detector's accuracy was 91 percent, about nine percentage points higher than the next-best system. This article continues to discuss efforts made by the technology industry to improve the detection of deepfakes and the threat posed by AI-generated content.
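The PPG-based checks described above can be illustrated with a toy sketch. This is not FakeCatcher's actual algorithm (which is unpublished in detail here); it is a minimal, assumed-for-illustration version of the two properties the summary names: a plausible heartbeat rhythm in each facial region's color signal, and consistency of that signal across regions. All function names, regions, and thresholds are hypothetical.

```python
import numpy as np

def ppg_signal(frames, region):
    """Mean green-channel intensity of one facial region across frames.
    region = (y0, y1, x0, x1); frames are H x W x 3 arrays."""
    y0, y1, x0, x1 = region
    return np.array([f[y0:y1, x0:x1, 1].mean() for f in frames])

def heart_rate_band_power(signal, fps, low=0.7, high=4.0):
    """Fraction of spectral power in a plausible heart-rate band (42-240 bpm)."""
    sig = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)
    total = spectrum[1:].sum()  # skip the DC component
    return spectrum[band].sum() / total if total > 0 else 0.0

def region_consistency(signals):
    """Mean pairwise correlation of PPG signals from different facial regions."""
    corrs = [np.corrcoef(signals[i], signals[j])[0, 1]
             for i in range(len(signals)) for j in range(i + 1, len(signals))]
    return float(np.mean(corrs))

def looks_real(frames, regions, fps, power_thresh=0.5, corr_thresh=0.5):
    """Hypothetical decision rule: every region must pulse in the heart-rate
    band, and the regions must pulse together."""
    signals = [ppg_signal(frames, r) for r in regions]
    powers = [heart_rate_band_power(s, fps) for s in signals]
    return min(powers) >= power_thresh and region_consistency(signals) >= corr_thresh
```

On synthetic "video" where a shared sinusoidal pulse modulates the green channel, `looks_real` returns True; on frames of pure noise, whose regional color signals have neither a dominant heart-rate frequency nor cross-region agreement, it returns False. This mirrors the article's point that generative noise destroys the spectral and spatial correlations of genuine PPG signals.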

IEEE Spectrum reports "Detection Stays One Step Ahead of Deepfakes—for Now" 

Submitted by Anonymous on