"AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On"

Several studies call into question the notion that neural networks are black boxes that reveal nothing about what is happening inside them. Researchers at the University of Caen Normandy in France performed a membership attack (also known as a membership inference attack) to expose hidden training data. Such an attack determines whether a given piece of data was used to train a neural network model by exploiting subtle differences in how the model treats data it saw during training versus data it did not. Membership attacks can result in significant privacy leaks. For example, discovering that an individual's medical data was used to train a model associated with a specific disease might reveal that the person has that disease.

A Generative Adversarial Network (GAN) is a type of Artificial Intelligence (AI) that learns to generate realistic but fake examples of the data it was trained on. Rather than identifying the exact photos used to train a GAN, the researchers identified photos in the GAN's training set that are not identical to its output but appear to depict the same individuals (i.e., faces with the same identity). They did this by generating faces with the GAN and then using a separate facial-recognition AI to detect whether the identities of the generated faces matched identities seen in the training data. In many instances, they found multiple photos of real people in the training data that appeared to match the fake faces produced by the GAN, thereby exposing the identities of individuals whose photos were used to train the AI.

These results raise serious privacy concerns. In theory, this kind of attack could be applied to biometric data, medical data, and other data tied to an individual.

The team also developed a different way to expose private data that does not require access to the training data: an algorithm that re-creates the data exposed to a trained model by reversing the steps the model takes to process that data. This article continues to discuss the study in which AI-based fake-face generators were rewound to reveal the real faces on which they were trained, as well as other ways private data in deep-learning models could be exposed.
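
To illustrate, the membership test described above can be reduced to comparing a model's confidence on candidate inputs. The following is a minimal sketch in Python using scikit-learn; the model, synthetic data, and confidence threshold are illustrative assumptions, not details from the study.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    # Split synthetic data into "members" (used for training) and "non-members".
    X, y = make_classification(n_samples=200, random_state=0)
    X_in, y_in = X[:100], y[:100]    # members
    X_out, y_out = X[100:], y[100:]  # non-members

    model = LogisticRegression().fit(X_in, y_in)

    def membership_score(x, label):
        # Confidence the model assigns to the true label; models tend to be
        # more confident on data they were trained on, especially when overfit.
        return model.predict_proba(x.reshape(1, -1))[0][label]

    in_conf = np.mean([membership_score(x, t) for x, t in zip(X_in, y_in)])
    out_conf = np.mean([membership_score(x, t) for x, t in zip(X_out, y_out)])
    print(f"mean confidence on members: {in_conf:.3f}, non-members: {out_conf:.3f}")

    # An attacker flags inputs whose confidence exceeds a threshold as likely members.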
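
The identity-matching step can likewise be sketched as a similarity search in a face-embedding space. Here `generate` and `embed` are stand-ins for a GAN sampler and a facial-recognition network (e.g., a FaceNet-style embedder), and the similarity threshold is an illustrative assumption, not the researchers' actual pipeline.

    import numpy as np

    def cosine_similarity(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def find_identity_matches(generate, embed, training_faces, n_samples=1000, tau=0.7):
        # Generate fake faces, then flag training photos whose face embedding is
        # close to a generated face -- i.e., photos that share an identity with it.
        train_embs = [(img, embed(img)) for img in training_faces]
        matches = []
        for _ in range(n_samples):
            fake = generate()            # one synthetic face from the GAN
            e_fake = embed(fake)
            for img, e_real in train_embs:
                if cosine_similarity(e_fake, e_real) >= tau:  # same identity?
                    matches.append((fake, img))
        return matches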
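
The article does not detail the team's reversal algorithm; a common stand-in for this kind of model inversion is gradient descent on the input, sketched below in PyTorch under that assumption.

    import torch

    def invert(model, target_output, input_shape, steps=500, lr=0.1):
        # Recover an input that reproduces `target_output` by running the model
        # forward and pushing gradients back into a trainable input tensor.
        x = torch.randn(1, *input_shape, requires_grad=True)
        opt = torch.optim.Adam([x], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = torch.nn.functional.mse_loss(model(x), target_output)
            loss.backward()  # gradients flow into the input, not the weights
            opt.step()
        return x.detach()    # candidate reconstruction of data the model saw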

MIT Technology Review reports "AI Fake-Face Generators Can Be Rewound To Reveal the Real Faces They Trained On"
