"Artists Fighting Back Against AI by Poisoning Their Images"

There are tools that can poison data and cause Artificial Intelligence (AI) models to malfunction, raising the question of whether using them is a justified response by artists to copyright infringement or a potential cybersecurity threat. In October 2023, researchers at the University of Chicago unveiled "Nightshade," a data poisoning technique designed to disrupt the training process of AI models. Nightshade and similar tools could serve as a defense that content creators use to combat web scrapers. However, because the poisoning technique disrupts how AI models learn, it may have cybersecurity implications: threat actors could repurpose the same approach for malicious ends. This article continues to discuss the use of data poisoning by artists to combat AI and the potential cybersecurity threat this poses.
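To make the idea of data poisoning concrete, here is a toy sketch; this is not the Nightshade algorithm, just a minimal illustration of the general principle. Poisoned training samples carry features of one class but the label of another, shifting what a simple nearest-centroid classifier learns so that it mislabels clean inputs. All names and numbers here are invented for illustration.

```python
# Toy sketch of a data-poisoning attack (illustrative only; NOT the
# actual Nightshade technique). A nearest-centroid classifier is
# trained on 1-D "image features"; poisoned samples pair cat-like
# features with the "dog" label, dragging the learned dog centroid
# toward the cat cluster.

def train_centroids(data):
    # data: list of (feature, label) pairs -> {label: mean feature}
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    # Assign x to the label whose centroid is nearest
    return min(centroids, key=lambda y: abs(x - centroids[y]))

# Clean training set: "cat" features cluster near 0, "dog" near 10.
clean = [(0.0, "cat"), (1.0, "cat"), (9.0, "dog"), (10.0, "dog")]

# Poisoned samples: cat-like features deliberately labeled "dog".
poison = [(0.5, "dog")] * 6

model_clean = train_centroids(clean)
model_poisoned = train_centroids(clean + poison)

# A somewhat cat-like input is classified correctly by the clean
# model but mislabeled by the poisoned one.
print(predict(model_clean, 3.0))     # -> cat
print(predict(model_poisoned, 3.0))  # -> dog
```

The sketch shows why poisoning is hard to detect after the fact: each poisoned sample looks like ordinary training data, and the damage only appears in the trained model's behavior.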

Cybernews reports "Artists Fighting Back Against AI by Poisoning Their Images"

Submitted by grigby1