Deepfakes - AI-Generated Media

Falsified videos created by AI are a recent development in online disinformation. While fabrication and manipulation of digital media are not new, the rapid development of AI technology has made the process of creating convincing fake videos much easier and faster. AI-generated fake videos first caught the public's attention in late 2017, when a Reddit account named Deepfakes posted pornographic videos generated with a Deep Neural Network (DNN)-based face-swapping algorithm. The term "deepfake" has since been used more broadly to refer to all types of AI-generated impersonation videos.
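At the core of the original face-swapping approach is an autoencoder trick: a single shared encoder learns a pose-and-expression representation common to two people, while a separate decoder is trained for each identity. The following is a minimal sketch of that architecture in Keras; the input resolution, layer sizes, and names are illustrative assumptions, not any particular tool's code.

import tensorflow as tf
from tensorflow.keras import layers, Model

def build_encoder():
    # Shared encoder: compresses a face crop into a latent code that
    # captures pose and expression across both identities.
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    z = layers.Dense(256, activation="relu")(x)
    return Model(inp, z, name="encoder")

def build_decoder(name):
    # One decoder per identity: reconstructs that person's face from the code.
    z = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return Model(z, out, name=name)

encoder = build_encoder()
decoder_a = build_decoder("decoder_A")
decoder_b = build_decoder("decoder_B")

# Each autoencoder is trained to reconstruct its own person's faces.
autoencoder_a = Model(encoder.input, decoder_a(encoder.output))
autoencoder_b = Model(encoder.input, decoder_b(encoder.output))

# The swap: encode a frame of person A, then decode it with B's decoder,
# producing B's face with A's pose and expression.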

Such deepfakes have emerged as a major worry. A new paper from FireEye finds that tools published to open-source repositories such as GitHub are reducing the amount of technical expertise required to produce convincing deepfakes. Moreover, deepfakes are increasingly easy to purchase from disreputable marketing and PR firms. In their paper, presented (virtually) at Black Hat, researchers Philip Tully and Lee Foster write that building new software tools for synthetic media generation from scratch takes thousands of dollars and weeks of effort. "However, the application of transfer learning can drastically reduce the amount of time and effort involved," they write. A set of commonly available tools has emerged that lends itself to the creation of faked images and voice recordings.
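To make the transfer-learning point concrete, the sketch below fine-tunes a pretrained GPT-2 language model on a small text corpus with the Hugging Face transformers library. This illustrates the general workflow rather than the researchers' actual pipeline; the corpus path, block size, and training settings are placeholder assumptions.

from transformers import (GPT2LMHeadModel, GPT2Tokenizer, TextDataset,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # pretrained weights: the expensive part is already paid for

# "posts.txt" is a placeholder corpus of text in the style to be imitated.
dataset = TextDataset(tokenizer=tokenizer, file_path="posts.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-finetuned", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # hours on a single GPU, versus weeks and large budgets from scratch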

Deepfakes are also emerging as a major national security concern. The impersonation of journalists by adversary groups is becoming an increasingly common tactic. Pro-Iranian actors, for example, have impersonated journalists in order to solicit comments for pro-regime propaganda. Russia has also taken to hacking legitimate news sites in order to push fake news stories aimed at undermining NATO in places like Lithuania, Poland, and Ukraine. Easier access to deepfake tools suggests this behavior will accelerate. In response, researchers have been working on ways to automatically detect deepfakes using biometric signals. Tully and Foster's research shows that out-of-the-box deepfakes are relatively easy to spot, including with machine learning programs that are themselves available on GitHub.
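One family of biometric cues is blink behavior: early deepfakes blinked rarely because closed-eye frames are underrepresented in the photos used for training. The sketch below computes the standard eye-aspect-ratio measure over per-frame eye landmarks and estimates a blink rate; landmark extraction (e.g., with dlib's 68-point predictor) is assumed to happen upstream, and the threshold is a common heuristic rather than a tuned value.

import numpy as np

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmarks for one eye in the standard 68-point layout
    a = np.linalg.norm(eye[1] - eye[5])  # vertical lid distance
    b = np.linalg.norm(eye[2] - eye[4])  # vertical lid distance
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal eye width
    return (a + b) / (2.0 * c)           # drops sharply when the eye closes

def blinks_per_minute(ear_series, threshold=0.2, fps=30):
    closed = np.asarray(ear_series) < threshold
    onsets = np.sum(closed[1:] & ~closed[:-1])  # open-to-closed transitions
    return onsets / (len(ear_series) / fps) * 60.0

# A face that blinks far less often than the human norm (roughly 15-20
# blinks per minute) is a signal worth flagging for closer inspection.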

Anyone can download deepfake software or use web tools and apps to create a deepfake online. DeepFaceLab was one of the first Windows applications to appear. It allows users to swap faces, replace a whole head, age a person, or change lip movements. Zao is a Chinese app that can create a deepfake video in just a few seconds. Faceswap is a free, open-source deepfake app powered by TensorFlow, Keras, and Python. Deep Art Effects is a unique deepfake app that works with images rather than videos to create works of art; the algorithm behind it is inspired by and trained on works of famous artists like Van Gogh, Picasso, and Michelangelo. Other apps include REFACE, Morphin, and Jiggy.

There is a surge of interest in developing deepfake detection methods. Several deepfake datasets have been released to support the training and testing of deepfake detectors, such as DeepfakeDetection and FaceForensics++. Most of the real videos in these datasets are filmed with a few volunteer actors in limited scenes, and the fake videos are crafted using a few popular apps. Detectors developed on these datasets may lose effectiveness when applied to the vast variety of deepfake videos in the wild (those uploaded to various video-sharing websites), as in the DeepFake Detection Challenge. WildDeepfake is a new dataset consisting of 7,314 face sequences extracted from 707 deepfake videos collected entirely from the internet. Though small, WildDeepfake can be used alongside existing datasets to develop more effective detectors against real-world deepfakes.
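A common baseline these datasets support is a frame-level binary classifier over cropped faces, built by transfer learning from an ImageNet backbone. The sketch below shows that pattern in TensorFlow/Keras; the faces/real and faces/fake directory layout is an assumption for illustration, not part of any dataset's official format.

import tensorflow as tf

# Face crops sorted into two folders: faces/real and faces/fake (assumed layout)
train = tf.keras.utils.image_dataset_from_directory(
    "faces", image_size=(224, 224), batch_size=32, label_mode="binary")

base = tf.keras.applications.EfficientNetB0(include_top=False, pooling="avg")
base.trainable = False  # reuse ImageNet features; train only a small head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # predicted P(fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train, epochs=3)
# Detectors trained this way on staged data often degrade on in-the-wild
# videos, which is exactly the gap WildDeepfake is meant to help close.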

Deepfake legislation, included in the National Defense Authorization Act for Fiscal Year 2020 (NDAA), was signed into law on December 20, 2019, after passing the Senate 86-8 and the House 377-48. The Act requires a comprehensive report on the foreign weaponization of deepfakes; requires the government to notify Congress of foreign deepfake-disinformation activities targeting US elections; and establishes a "Deepfakes Prize" competition to encourage the research or commercialization of deepfake-detection technologies. On September 1, 2019, Texas became the first state in the nation to prohibit the creation and distribution of deepfake videos intended to harm candidates for public office or influence elections. A month later, California enacted two laws that allow victims of nonconsensual deepfake pornography to sue for damages and give candidates for public office the ability to sue individuals or organizations that distribute, "with actual malice," election-related deepfakes without warning labels near Election Day.

One outcome of this effort is the Deepfake Detection Challenge. The winning solution used advanced DNNs and achieved an average precision of 82.56 percent. The organizers made their best effort to simulate situations where deepfake videos are deployed in real life, but a significant discrepancy remains between performance on the evaluation data set and on more realistic data: when tested on unseen videos, the top performer's accuracy fell to 65.18 percent.
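For readers unfamiliar with the metric, average precision summarizes the precision-recall tradeoff of a detector's scores across all thresholds. Below is a tiny runnable illustration using scikit-learn, with placeholder labels and scores rather than actual challenge data.

from sklearn.metrics import average_precision_score

# Placeholder data purely to make the example run: 1 = fake, 0 = real,
# and each score is the detector's predicted probability of "fake".
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_score = [0.90, 0.20, 0.65, 0.80, 0.40, 0.10, 0.55, 0.35]

print(f"average precision: {average_precision_score(y_true, y_score):.4f}")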

On a somewhat lighter note, Zoom is one of the services that has become increasingly popular as people stay home to slow the spread of the COVID-19 pandemic. Developers Ali Aliev and Karim Isakov have built a program called Avatarify that uses deepfake techniques to impersonate celebrities as avatars during Zoom calls. The program superimposes a celebrity face onto the participant's own in real time during the video meeting, and it supports image animations with complex motions.
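At a high level, the pipeline reads webcam frames, animates a source portrait with a motion-transfer model (Avatarify builds on the First Order Motion Model), and publishes the result as a virtual camera that Zoom can select. The sketch below shows that plumbing with OpenCV and the pyvirtualcam library; animate_frame is a hypothetical stand-in for the actual model, and the portrait path is a placeholder.

import cv2
import pyvirtualcam

def animate_frame(source_img, driving_frame):
    # Hypothetical stand-in: a real system would map the driving frame's
    # head pose and expression onto the source portrait. Here we simply
    # echo the portrait so the pipeline runs end to end.
    return source_img

source = cv2.imread("celebrity.jpg")  # placeholder portrait path
source = cv2.resize(source, (640, 480))
cap = cv2.VideoCapture(0)             # webcam provides the "driving" video

with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        out = animate_frame(source, frame)
        cam.send(cv2.cvtColor(out, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()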
 
