Developing Self-evolving Deepfake Detectors Against AI Attacks
Author
Abstract
As deep-learning-based image and video manipulation technology advances, the future of truth and information looks bleak. In particular, Deepfakes, in which one person's face is transferred onto another's, pose a serious threat: they can spread convincing misinformation widely enough to have catastrophic real-world consequences. Preventing this requires an effective detection tool for manipulated media. However, the detector cannot merely be accurate; it must evolve with the technology to keep pace with, or even outpace, the adversary. At the same time, it must defend against the various attack types to which deep learning systems are vulnerable. To that end, in this paper we review methods of both attack and defense on AI systems, as well as modes of evolution for such a system. We then put forward a potential system that combines the latest technologies in multiple areas with several novel ideas to create a detection algorithm that is robust against many attacks and can learn over time with unprecedented effectiveness and efficiency.
Year of Publication: 2022
Date Published: December
URL: https://ieeexplore.ieee.org/document/10063474
DOI: 10.1109/TPS-ISA56441.2022.00016