"AI Image Generators Can Be Tricked Into Making NSFW Content"

New research on popular artificial intelligence (AI) image generators reveals that they can be manipulated into creating inappropriate and potentially harmful content. Although most online art generators claim to block violent, pornographic, and other forms of inappropriate content, Johns Hopkins University researchers were able to manipulate two of the most well-known systems into generating exactly the type of images the products' safeguards are supposed to prevent. According to the researchers, with the right code, anyone could defeat the systems' safety filters and use them to make content "not suitable for work." This article continues to discuss the new safety tests by Johns Hopkins researchers that reveal vulnerabilities in popular systems such as DALL-E 2.

Johns Hopkins University reports "AI Image Generators Can Be Tricked Into Making NSFW Content"

Submitted by grigby1