"Deepfakes Expose Vulnerabilities in Certain Facial Recognition Technology"

Mobile devices use facial recognition technology to help users unlock their phones, make financial transactions, and access medical records quickly and securely. According to new research involving the Penn State College of Information Sciences and Technology, facial recognition technologies that rely on a specific user-detection method are highly vulnerable to deepfake-based attacks, raising significant security concerns for users and applications. The researchers found that most application programming interfaces (APIs) that use facial liveness verification, a feature of facial recognition technology that uses computer vision to confirm the presence of a live user, do not reliably detect deepfakes: digitally altered photos or videos of one person made to look like a live version of someone else. Applications that rely on these checks are also far less effective at detecting deepfakes than their providers claim.

The study, presented at the USENIX Security Symposium, is the first systematic investigation into the security of facial liveness verification in real-world settings. Ting Wang, a principal investigator on the project, and his colleagues created LiveBugger, a new deepfake-powered attack framework that enables customizable, automated security evaluation of facial liveness verification. They tested six of the most popular commercial facial liveness verification APIs; any flaws in these products could propagate to the apps built on them, potentially putting millions of users at risk. LiveBugger attempted to fool the APIs' liveness checks using deepfake images and videos drawn from two separate data sets. These checks aim to verify a user's identity by analyzing a static image or video of their face, listening to their voice, or measuring their response to performing an action on command. The researchers found that all four of the most common verification methods could be easily circumvented.

In addition to demonstrating how their framework circumvented these methods, the researchers made recommendations to improve the technology's security, such as avoiding verification methods that analyze only a static image of a user's face and, in methods that analyze both audio and video, matching the user's lip movements with their voice. This article continues to discuss the study on the security of facial liveness verification in real-world settings.
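One of the recommendations above, matching lip movements against the user's voice, can be pictured with a small sketch. The snippet below is illustrative only and is not taken from the study: it assumes a per-frame mouth-openness series has already been extracted (for example, from facial landmarks, which is outside the sketch) and compares it against the audio loudness envelope, since genuine speech tends to correlate with mouth motion while a deepfake video paired with mismatched audio tends not to.

```python
import numpy as np

def resample(signal: np.ndarray, n: int) -> np.ndarray:
    """Linearly resample a 1-D signal to n points so the two streams align."""
    x_old = np.linspace(0.0, 1.0, len(signal))
    x_new = np.linspace(0.0, 1.0, n)
    return np.interp(x_new, x_old, signal)

def lip_audio_consistency(mouth_openness: np.ndarray,
                          audio_energy: np.ndarray) -> float:
    """Pearson correlation between per-frame mouth opening and the audio
    loudness envelope. Higher values suggest the lips track the speech."""
    n = max(len(mouth_openness), len(audio_energy))
    a = resample(mouth_openness.astype(float), n)
    b = resample(audio_energy.astype(float), n)
    return float(np.corrcoef(a, b)[0, 1])

# Toy usage with synthetic signals: matched vs. unrelated audio.
t = np.linspace(0, 4 * np.pi, 120)
mouth = np.abs(np.sin(t))                          # mouth opens and closes with speech
matched = np.abs(np.sin(t)) + 0.1 * np.random.rand(120)
mismatched = np.random.rand(120)                   # audio unrelated to the video
print("matched:", lip_audio_consistency(mouth, matched))
print("mismatched:", lip_audio_consistency(mouth, mismatched))
```

A production system would rely on learned audio-visual synchrony models rather than a single correlation score, but the intuition is the same: a deepfake that cannot keep lips and voice in step should fail the check.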
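More broadly, the kind of automated evaluation LiveBugger performs can be imagined as a loop over APIs, verification methods, and deepfake samples that records how often forged media is accepted as live. The sketch below is a hypothetical harness, not LiveBugger's actual interface; the API names and the `verify_liveness` stub are invented placeholders standing in for real vendor calls.

```python
import random
from dataclasses import dataclass
from enum import Enum

class Method(Enum):
    """The four common liveness verification methods named in the study."""
    IMAGE = "static image analysis"
    SILENCE = "silent video analysis"
    VOICE = "voice analysis"
    ACTION = "response to an action on command"

@dataclass
class Sample:
    """A deepfake probe: digitally altered media impersonating a target."""
    media_id: str
    target_identity: str

def verify_liveness(api_name: str, method: Method, sample: Sample) -> bool:
    """Hypothetical stand-in for a commercial liveness-verification API call.

    A real harness would submit the deepfake media over the vendor's API;
    here a random outcome keeps the sketch self-contained and runnable.
    """
    return random.random() < 0.5  # placeholder, not a real API response

def bypass_rates(apis, methods, samples):
    """Per API and method, how often deepfake probes are accepted as live."""
    rates = {}
    for api in apis:
        for method in methods:
            accepted = sum(verify_liveness(api, method, s) for s in samples)
            rates[(api, method.name)] = accepted / len(samples)
    return rates

if __name__ == "__main__":
    samples = [Sample(f"df_{i}", "victim") for i in range(100)]
    for key, rate in bypass_rates(["api_a", "api_b"], list(Method), samples).items():
        print(key, f"{rate:.0%} of deepfakes accepted as live")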

PSU reports "Deepfakes Expose Vulnerabilities in Certain Facial Recognition Technology"
