Facial Privacy Preservation using FGSM and Universal Perturbation attacks
---
Author |
Abstract | Research in facial privacy has established that race, age, and gender are classifiable biometric attributes that can be gleaned from a human facial image. Noticeable distortions, morphing, and face-swapping are among the techniques that have been studied to restore consumer privacy. By fooling face recognition models, these techniques superficially meet users' privacy needs; however, the visible manipulations degrade the aesthetics of the image. The objective of this work is to highlight common adversarial techniques that introduce granular pixel distortions using white-box and black-box perturbation algorithms, protecting users' sensitive or personal data in face images by fooling AI facial recognition models while maintaining the aesthetics and visual integrity of the image.
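The FGSM attack named in the title has a well-known closed form: the adversarial image is the original plus a small step in the direction of the sign of the loss gradient with respect to the input. Below is a minimal NumPy sketch of that step (an illustration of the standard technique, not the paper's own implementation); the gradient is assumed to have been computed beforehand by some face recognition model.

```python
import numpy as np

def fgsm_perturb(image: np.ndarray, grad: np.ndarray, epsilon: float = 0.03) -> np.ndarray:
    """Fast Gradient Sign Method step.

    image:   input image with pixel values in [0, 1]
    grad:    gradient of the model's loss w.r.t. the image (precomputed)
    epsilon: perturbation budget; small values keep the distortion
             visually imperceptible, preserving the image's aesthetics
    """
    adversarial = image + epsilon * np.sign(grad)
    # Clip so the result remains a valid image.
    return np.clip(adversarial, 0.0, 1.0)
```

Because the perturbation is bounded by `epsilon` per pixel, the visual integrity of the face image is largely preserved even as the recognition model is fooled.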
Year of Publication | 2022
Conference Name | 2022 International Conference on Machine Learning, Big Data, Cloud and Parallel Computing (COM-IT-CON)