"New BYU Algorithm Making ID Verification More Secure by Tracking Facial Movements"

Studies have revealed significant security flaws in some of the most advanced biometric identification systems, including those based on fingerprints and retina scans. Researchers at Brigham Young University (BYU) have developed a more secure method of using facial recognition for access control. Their technique, called Concurrent Two-Factor Identity Verification (C2FIV), requires both facial identity and a unique facial motion to grant access. To enroll, a user faces the camera and records a short 1-2 second video of either a unique facial motion or the lip movement made while reading a secret phrase. The video is then input to the device, which extracts the user's facial features together with the features of the facial motion and stores them for later identity verification. This approach ensures that verification is intentional, since it cannot succeed while the user is unconscious. With other biometric systems such as fingerprint recognition, a user's finger can still unlock their device even if they are unconscious. C2FIV learns facial features and actions like blinking, smiling, and eyebrow-raising simultaneously, using an integrated neural network framework that models dynamic, sequential data such as facial motion and considers all frames in a recording. In a preliminary study, the trained neural network verified identities with 90 percent accuracy. D.J. Lee, an electrical and computer engineering professor at BYU who led the development of the new facial recognition technology, says the C2FIV system's applications could go beyond smartphone access to ATM use, online banking, hotel room entry, safe deposit box access, keyless vehicle access, and more. This article continues to discuss the study, goal, capabilities, potential applications, and future of the C2FIV system.
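The article does not describe the C2FIV architecture in detail, but the general pattern it outlines (a per-frame face encoder feeding a sequence model so that identity and motion are embedded jointly, with verification done by comparing a new recording against a stored enrollment embedding) can be sketched as below. This is a minimal illustration only, not BYU's implementation; the class and function names, network layers, hyperparameters, and the cosine-similarity threshold are all assumptions made for the example.

```python
# Illustrative sketch of a two-factor (identity + facial motion) verifier.
# NOT the C2FIV implementation; all names and parameters are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VideoVerifier(nn.Module):
    """Embeds a short face video of shape (frames, 3, H, W) into one vector."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        # Per-frame convolutional encoder (stand-in for a face backbone).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Recurrent layer models the facial motion across all frames,
        # so the embedding captures identity and movement together.
        self.gru = nn.GRU(input_size=32, hidden_size=embed_dim, batch_first=True)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.frame_encoder(video.flatten(0, 1))  # (b*t, 32)
        feats = feats.view(b, t, -1)                     # (b, t, 32)
        _, last_hidden = self.gru(feats)                 # (1, b, embed_dim)
        return F.normalize(last_hidden[-1], dim=-1)      # unit-length embedding


def verify(model: VideoVerifier,
           enrolled: torch.Tensor,
           attempt_video: torch.Tensor,
           threshold: float = 0.8) -> bool:
    """Compare a new attempt against the stored enrollment embedding."""
    with torch.no_grad():
        attempt = model(attempt_video.unsqueeze(0))[0]
    return F.cosine_similarity(enrolled, attempt, dim=0).item() >= threshold


if __name__ == "__main__":
    model = VideoVerifier().eval()
    # Enrollment: embed the user's 1-2 second recording (random data here).
    enrollment_video = torch.rand(1, 30, 3, 64, 64)  # 30 frames at 64x64
    with torch.no_grad():
        enrolled = model(enrollment_video)[0]
    # Verification: a later attempt must match both face and motion.
    attempt = torch.rand(30, 3, 64, 64)
    print("access granted:", verify(model, enrolled, attempt))
```

Because the recurrent layer summarizes the entire frame sequence, a still photo or a clip with the wrong motion would produce a different embedding than the enrollment, which is the intuition behind requiring the two factors concurrently.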

BYU reports "New BYU Algorithm Making ID Verification More Secure by Tracking Facial Movements"
