"Hackers Compete To Confound Facial Recognition"

Hackers are invited to participate in a new competition aimed at exposing vulnerabilities in facial recognition technology and raising awareness of the potential risks. Since 2019, the Machine Learning (ML) Security Evasion Competition has been a staple of the Artificial Intelligence (AI) Village at the DEF CON hacking conference. Researchers were initially challenged to bypass the defenses of ML-based malware detection systems. In 2021, organizers introduced a new track designed to uncover flaws in computer-vision models that use visual cues to detect phishing websites. This year's competition will pit hackers against facial recognition systems, challenging them to alter celebrity photographs so that an ML model misidentifies them. According to Zoltan Balazs, head of the vulnerability research lab at the software company Cujo AI, one of the competition's organizers, the track was added in response to the rapid expansion of facial recognition and the apparently lax approach to security among many vendors. Adversa AI, an AI security company, designed the facial recognition challenge; the firm regularly performs red-teaming exercises to find security flaws in ML systems and understands how vulnerable these models are. Adversa CTO Eugene Neelou pointed to the growing number of online tools that hackers can use to carry out these attacks, as well as real-world examples of people exploiting flaws in facial recognition systems. For example, scammers recently fooled facial recognition software used by the identity-verification company ID.me in a $2.5 million unemployment fraud scheme. This article continues to discuss the ML Security Evasion Competition and the importance of increasing efforts to bolster the security of facial recognition systems.
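The evasion task the competition describes, perturbing an image so a model misidentifies it, is commonly demonstrated with gradient-based attacks such as the fast gradient sign method (FGSM). The sketch below is illustrative only: it uses a toy logistic "matcher" in place of a real face-recognition model, and every name and value in it is a hypothetical stand-in, not part of the competition or any vendor's system.

```python
import numpy as np

# Toy stand-in for a face-recognition model: a single logistic unit that
# scores how strongly an input "image" matches a target identity.
# (Real systems use deep CNN embeddings; this is only a sketch.)
rng = np.random.default_rng(0)
w = rng.normal(size=16)   # hypothetical model weights
x = rng.normal(size=16)   # hypothetical input-image features

def predict(v):
    """Probability that the input matches the target identity."""
    return 1.0 / (1.0 + np.exp(-w @ v))

# FGSM-style evasion: nudge each feature against the sign of the score's
# gradient. For this logistic unit, the gradient of the logit w.r.t. the
# input is simply w, so stepping along -sign(w) lowers the match score.
eps = 0.5                         # perturbation budget (assumed)
x_adv = x - eps * np.sign(w)      # adversarially perturbed input

print(predict(x), predict(x_adv))  # match score drops after the attack
```

In a real attack the gradient would be obtained by backpropagating through the victim network (or estimated via queries when only black-box access is available), and the perturbation would be constrained to stay visually imperceptible.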

IEEE Spectrum reports "Hackers Compete To Confound Facial Recognition"