SoS Musings #33 - Put the Brakes on Deepfakes

Deepfakes, realistic-looking fake images, text, and video generated using a Machine Learning (ML) model called a Generative Adversarial Network (GAN), are one of the top cybersecurity threats to watch for in 2020. Security experts expect deepfakes to rise in 2020 as biometrics are increasingly used to identify and authenticate individuals in technologies such as smartphones and airport boarding gates. Advancements in deepfakes will pose new security challenges, as cybercriminals will use such fake media to masquerade as legitimate persons and steal money or other critical assets. Deepfakes can also be used to spread disinformation across social media platforms, undermine political candidates, and commit other forms of fraud. Deepfake technology will strengthen social engineering attacks: rather than needing special hacking skills, cybercriminals can use deepfakes to impersonate high-level users and trick others into revealing sensitive information that could be used to gain access to protected systems. According to researchers at McAfee, deepfakes will make accurate facial recognition more challenging to achieve, adding to the growing list of problems faced by this type of biometric system. A report released by Forrester Research, “Predictions 2020: Cybersecurity,” estimates that the costs associated with deepfake attacks will exceed $250 million in 2020. Studies on the creation and malicious application of deepfakes will help drive the development of techniques and tools to combat deepfake attacks in the future.
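For readers unfamiliar with the adversarial setup behind a GAN, the minimal sketch below shows the core training loop in PyTorch: a generator learns to produce synthetic samples while a discriminator learns to flag them as fake. The tiny fully connected networks, dimensions, and optimizer settings are illustrative assumptions, not the architecture of any particular deepfake system.

    # Minimal sketch of GAN adversarial training (PyTorch).
    # Architectures and sizes are illustrative placeholders.
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 64, 784  # e.g., flattened 28x28 grayscale images

    # Generator: maps random noise to a synthetic sample.
    G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                      nn.Linear(256, data_dim), nn.Tanh())
    # Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
    D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                      nn.Linear(256, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    def train_step(real_batch):
        n = real_batch.size(0)
        real_labels, fake_labels = torch.ones(n, 1), torch.zeros(n, 1)

        # 1. Train the discriminator to separate real from generated samples.
        fake_batch = G(torch.randn(n, latent_dim)).detach()
        loss_d = bce(D(real_batch), real_labels) + bce(D(fake_batch), fake_labels)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # 2. Train the generator to fool the discriminator into a "real" verdict.
        loss_g = bce(D(G(torch.randn(n, latent_dim))), real_labels)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

Called once per minibatch of real data, train_step pits the two networks against each other; as training progresses, the generator's outputs become harder for the discriminator, and ultimately for humans, to distinguish from real media, which is what makes GAN-generated deepfakes so convincing.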

Recent incidents and studies have shown what threat actors can do with AI-generated deepfakes and manipulated images. Engineers at Facebook demonstrated that an individual's voice can be cloned using their ML system, MelNet. With MelNet, the engineers generated audio clips of what sounds like Microsoft founder Bill Gates saying a series of harmless phrases. According to the researchers, MelNet was trained on a 452-hour dataset consisting of TED talks and audiobooks. Deepfake voice attacks are already a significant threat to businesses, as indicated by recent incidents in which threat actors used AI-generated audio to impersonate CEOs and steal millions of dollars. According to an article posted by Axios, Symantec observed three successful deepfake audio attacks against private companies, each of which impersonated a CEO to request money transfers. According to Symantec, in all three attacks the scammers used an AI program to mimic the voices of the targeted CEOs. The program, similar to MelNet, was trained on speech from phone calls, YouTube videos such as TED talks, and other media containing audio of the CEOs' voices. On the image side, Zao is a deepfake face-swapping app that quickly gained considerable popularity by allowing users to upload a single photograph and replace the face of a favorite character in a TV show or movie with their own. One user demonstrated how advanced the app is by sharing a video of their face superimposed onto Leonardo DiCaprio in Titanic, generated in under 8 seconds from a picture as small as a thumbnail. Another indication of the progression of deepfakes is a site called “This Person Does Not Exist” that continuously generates images of realistic-looking human faces using Nvidia’s GAN, StyleGAN. Using such techniques, an attacker can masquerade as a journalist on social media with an AI-generated profile picture and press users for personal information, as in the case of “Maisy Kinsley,” a supposed “senior journalist at Bloomberg.” These studies, incidents, and technologies, which highlight deepfake capabilities, draw further attention to the risk of uploading photos and videos of one's likeness online, where anyone can access and use them for malicious purposes.

Security researchers, as well as social media platforms, are being encouraged to continue their efforts to fight deepfakes of all formats. A team of researchers at the University of Oregon is studying how mice process complex sounds, seeking insight into the mechanisms the mammalian auditory system uses to detect fake audio, with the goal of developing new deepfake audio-detection algorithms based on this analysis. A Canadian AI company, Dessa, built a system called RealTalk aimed at distinguishing between real and fake audio. The system differentiates between real and fake audio samples by using spectrograms, which are visual representations of audio clips. According to engineers at Dessa, the deepfake detector model can use these visual representations to identify the fake audio clips it is fed with an accuracy of 90%. A study conducted by researchers at New York University’s Tandon School of Engineering aims to combat deepfakes by making it easier to determine whether a photo has been altered. A significant challenge in detecting manipulated photos is that digital photo files are not coded to show evidence of tampering. The NYU team therefore proposed building ML into a camera’s image signal processing to add a watermark to each photo’s code, which would act as a tamper-resistant seal. Siwei Lyu, a professor at the University at Albany, and his team examined the steps taken by one particular deepfake video-creation program, called DeepFake. The examination found that such programs fail to reproduce physiological signals inherent to human beings, such as blinking, a shortcoming that can be used to reveal most deepfake videos of individuals. In addition, Facebook recently announced plans to ban deepfake videos from its platform. Monika Bickert, Facebook’s vice president for global policy management, further stressed the significant potential of deepfake videos to impact the social media industry and society in general. Accordingly, any video posted on Facebook that has been manipulated using AI or machine learning to make it appear authentic will be removed. However, this change in Facebook’s policy does not apply to satirical content or to videos edited only to remove or alter the order of words. Researchers and social media organizations must continue to explore and develop new tools and policies that accurately differentiate between real and fake media and reduce the generation of deepfakes. These efforts are essential, as deepfakes will be weaponized to spread disinformation and increase the success of social engineering attacks.
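To make the spectrogram-based approach concrete, the sketch below shows one plausible shape such a detector could take: convert a clip to a mel spectrogram, then feed that "image" to a small convolutional classifier. This is not Dessa's RealTalk code; the article does not describe their architecture, so the CNN layout, 16 kHz sample rate, and file handling here are assumptions for illustration, and the model would need training on labeled real and fake clips before its scores mean anything.

    # Hedged sketch of a spectrogram-based fake-audio detector.
    # Requires: torch, torchaudio. Weights below are untrained placeholders.
    import torch
    import torch.nn as nn
    import torchaudio

    # Convert a waveform to a mel spectrogram "image" the classifier inspects.
    to_spec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=64)

    # Small CNN mapping a spectrogram to a real-vs-fake probability.
    detector = nn.Sequential(
        nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid())

    def score_clip(path: str) -> float:
        """Return the model's probability that the clip is genuine audio."""
        waveform, sr = torchaudio.load(path)  # (channels, samples)
        if sr != 16000:
            waveform = torchaudio.functional.resample(waveform, sr, 16000)
        # Mix to mono, compute the spectrogram, and add a batch dimension.
        spec = to_spec(waveform.mean(dim=0, keepdim=True)).unsqueeze(0)
        with torch.no_grad():
            return detector(spec).item()

Treating audio as an image in this way lets a detector borrow standard image-classification machinery, which is the core idea behind the spectrogram approach the Dessa engineers describe.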