"Researchers Discover That Privacy-Preserving Tools Leave Private Data Unprotected"

Researchers at the NYU Tandon School of Engineering explored the machine-learning frameworks behind privacy-preservation tools used in technologies such as facial expression recognition systems to see how effective these tools are at protecting private data. In a paper titled "Subverting Privacy-Preserving GANs: Hiding Secrets in Sanitized Images," the researchers examined whether private data could be recovered from images that had been sanitized by privacy-protecting Generative Adversarial Networks (PP-GANs) and that had passed empirical privacy checks. The team discovered that PP-GAN designs can be subverted to pass those privacy checks while still allowing secret information to be extracted from the sanitized images. The study presents the first comprehensive security analysis of PP-GANs and highlights the inadequacy of existing privacy checks at detecting sensitive information leakage. Using a novel steganographic method, the researchers modified a state-of-the-art PP-GAN to hide a secret, such as a user ID, in supposedly sanitized images. The resulting adversarial PP-GAN embeds sensitive information in sanitized output images that still pass privacy checks, and the hidden secrets can be recovered with a 100 percent success rate. This article continues to discuss findings from the study on the subversion of PP-GANs.
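To illustrate the general idea of hiding a recoverable secret inside an image that otherwise appears unmodified, the sketch below uses simple least-significant-bit (LSB) steganography in Python. This is not the paper's technique (the actual attack trains the sanitizing network itself to encode the secret, which is why output checks on the images fail to flag it); the helper names embed_secret and recover_secret, the image dimensions, and the "user-4217" identifier are all hypothetical, chosen only to show how a short secret such as a user ID can ride along in pixel data and be recovered losslessly.

import numpy as np

def embed_secret(image: np.ndarray, secret: bytes) -> np.ndarray:
    # Hide `secret` in the least significant bit of each pixel.
    # Each pixel value changes by at most 1, so the image still looks
    # "sanitized" to human viewers and to naive statistical checks.
    bits = np.unpackbits(np.frombuffer(secret, dtype=np.uint8))
    flat = image.flatten().copy()
    if bits.size > flat.size:
        raise ValueError("image too small to hold the secret")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def recover_secret(image: np.ndarray, n_bytes: int) -> bytes:
    # Read the low bits back out and repack them into bytes.
    bits = image.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# A stand-in for a PP-GAN's sanitized output (64x64 grayscale image).
sanitized = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
stego = embed_secret(sanitized, b"user-4217")
assert recover_secret(stego, 9) == b"user-4217"   # lossless recovery
assert np.abs(stego.astype(int) - sanitized.astype(int)).max() <= 1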

The NYU Tandon School of Engineering reports "Researchers Discover That Privacy-Preserving Tools Leave Private Data Unprotected"
