"Using Frequency Analysis to Recognize Fake Images"

A new method to identify deepfake images has been developed by a team of researchers from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum and the Cluster of Excellence "Cyber Security in the Age of Large-Scale Adversaries" (CASA). Deepfake images are fake, realistic-looking images generated by computer models called Generative Adversarial Networks (GANs). Deepfakes can be used to spread disinformation and make social engineering attacks more effective. The approach proposed by the team for efficiently identifying deepfake images analyzes the images in the frequency domain, an established signal processing technique. Frequency analysis has revealed that images generated by GANs exhibit artifacts in the high-frequency range. According to the researchers, the artifacts described in their study can help determine whether an image was created by machine learning algorithms. Researchers must continue to study the creation and identification of deepfakes to help combat deepfake attacks. This article discusses the technique used to generate deepfake images and the new method developed to recognize fake images using frequency analysis.
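As an illustration of the general idea, the sketch below transforms an image into the frequency domain and measures how much of its spectral energy lies in the high-frequency range, where GAN-generated images tend to show artifacts. This is a minimal, hypothetical example using a 2D discrete cosine transform; the function name, cutoff, and thresholding are assumptions for illustration and do not reproduce the researchers' actual pipeline.

```python
import numpy as np
from scipy.fft import dctn  # 2D discrete cosine transform


def high_frequency_energy(image: np.ndarray, cutoff: float = 0.75) -> float:
    """Return the fraction of spectral energy in the high-frequency corner.

    `image` is a 2D grayscale array; `cutoff` (an assumed parameter) selects
    DCT coefficients whose normalized row+column index exceeds 2 * cutoff.
    """
    spectrum = dctn(image.astype(np.float64), norm="ortho")
    energy = spectrum ** 2
    h, w = energy.shape
    # Mask for coefficients in the high-frequency region of the spectrum.
    ys, xs = np.ogrid[:h, :w]
    high_mask = (ys / h + xs / w) > 2 * cutoff
    return energy[high_mask].sum() / energy.sum()


# Illustrative usage: a smooth, photo-like test array. A GAN-generated image
# would be expected to yield a noticeably larger high-frequency fraction.
rng = np.random.default_rng(0)
smooth_image = rng.normal(size=(256, 256)).cumsum(axis=0).cumsum(axis=1)
print(f"high-frequency energy fraction: {high_frequency_energy(smooth_image):.4f}")
```

In practice, such a score would be compared against values measured on known natural and GAN-generated images rather than against a fixed threshold.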

Homeland Security News Wire reports "Using Frequency Analysis to Recognize Fake Images"
