"This New Tool Could Protect Your Pictures From AI Manipulation"
People can take a photo posted online and edit it for malicious purposes with advanced generative Artificial Intelligence (AI) systems. Because these systems are so sophisticated, it may be impossible to prove that the resulting image is fake. However, a new tool developed by MIT researchers, called PhotoGuard, could prevent this. It serves as a protective shield by altering photos in small, invisible ways that prevent them from being manipulated. If someone attempts to use editing software based on a generative AI model, such as Stable Diffusion, to manipulate an image "immunized" by PhotoGuard, the result will look unrealistic or warped. PhotoGuard thus addresses the problem of malicious image manipulation by these models. Finding ways to detect and stop AI-powered manipulation has never been more important, as generative AI tools have made such manipulation faster and easier than ever before. In a voluntary pledge with the White House, major AI companies such as OpenAI, Google, and Meta committed to developing methods like these to combat fraud and deception. PhotoGuard complements other protective techniques, such as watermarking. This article continues to discuss the PhotoGuard tool created by researchers at MIT.
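To make the "immunization" idea concrete, the sketch below adds a perturbation to an image that is capped at a few intensity levels per pixel, so it is imperceptible to people. This is a hypothetical toy illustration of the bounded-perturbation concept only: PhotoGuard's real perturbation is computed adversarially against the generative model (e.g., by targeting its image encoder), not drawn at random as here, and the function name `immunize` is our own.

```python
import numpy as np

def immunize(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a tiny, bounded perturbation to an 8-bit RGB image.

    Toy stand-in for PhotoGuard-style "immunization": each channel
    value changes by at most `epsilon` intensity levels (out of 255),
    keeping the edit invisible. The real tool optimizes this
    perturbation to disrupt a generative model; here it is random,
    purely to show the invisibility budget.
    """
    rng = np.random.default_rng(seed)
    # Per-pixel perturbation in [-epsilon, +epsilon].
    delta = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + delta, 0, 255)
    return perturbed.round().astype(np.uint8)

# A flat gray 64x64 RGB test image.
original = np.full((64, 64, 3), 128, dtype=np.uint8)
protected = immunize(original)

# Largest per-pixel change stays within the budget.
max_change = np.abs(protected.astype(int) - original.astype(int)).max()
print(max_change)  # never exceeds epsilon
```

An attacker's editing model would then operate on `protected` rather than `original`; in PhotoGuard's case the optimized perturbation causes such edits to come out warped or unrealistic.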
MIT Technology Review reports "This New Tool Could Protect Your Pictures From AI Manipulation"