"Researchers Tested AI Watermarks—and Broke All of Them"

According to Soheil Feizi, a computer science professor at the University of Maryland, there is currently no reliable form of Artificial Intelligence (AI) watermarking. Watermarking has emerged as a promising strategy for identifying AI-generated images and text. In a new study, Feizi and his co-authors tested two varieties of AI watermarking and found that one of them, "low perturbation" watermarks, which are invisible to the naked eye, is hopeless. The team examined how easy it is for malicious actors to circumvent watermarking, a process Feizi refers to as "washing out" the watermark. In addition to demonstrating how attackers can remove watermarks, the study shows how watermarks can be added to human-generated images in order to produce false positives. This article continues to discuss the research on how easy it is to evade current methods of AI watermarking.
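To make the two attack ideas concrete, below is a minimal toy sketch, not the scheme or the attack from Feizi's study: it assumes a hypothetical "low perturbation" watermark implemented as a fixed pseudorandom pattern added at invisible amplitude, detection by correlation with that pattern, "washing out" as a simple blur, and spoofing as stamping the watermark onto a non-AI image.

```python
import numpy as np

# Hypothetical low-perturbation watermark: a fixed pseudorandom +/-1
# pattern added at an amplitude too small to see. This is an
# illustration only, not the method evaluated in the study.
H, W = 128, 128
pattern = np.sign(np.random.default_rng(42).standard_normal((H, W)))

def embed(img, strength=0.03):
    """Add the secret pattern at low (invisible) amplitude."""
    return np.clip(img + strength * pattern, 0.0, 1.0)

def detect(img, threshold=0.015):
    """Declare 'AI-generated' if the image correlates with the pattern."""
    score = float(np.mean((img - img.mean()) * pattern))
    return score > threshold

def wash(img):
    """'Washing out' attack: a 3x3 mean blur (wrap-around edges)
    destroys the high-frequency watermark pattern."""
    acc = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return acc / 9.0

rng = np.random.default_rng(0)
img = rng.random((H, W))                 # stand-in for a real image

watermarked = embed(img)
present = detect(watermarked)            # watermark is detected
removed = detect(wash(watermarked))      # mild blur defeats detection
# Spoofing: an attacker who can apply the watermark can stamp a
# human-made image so the detector falsely flags it as AI-generated.
false_positive = detect(embed(img))
```

The fragility shown here is the general point the study makes: a watermark weak enough to be invisible leaves little signal margin, so mild degradations can erase it, while the same embedding step lets an attacker manufacture false positives.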

Wired reports "Researchers Tested AI Watermarks—and Broke All of Them"

Submitted by grigby1
