"Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware"

A security researcher has bypassed ChatGPT's protections against malicious use, tricking the chatbot into creating sophisticated data-stealing malware that signature- and behavior-based detection tools cannot identify. Without writing any code himself, the researcher, who admitted to having no experience developing malware, guided ChatGPT through a series of simple prompts that yielded a tool capable of silently searching a system for specific documents, breaking them into fragments, hiding those fragments inside image files, and uploading the images to Google Drive. According to Aaron Mulgrew, solutions architect at Forcepoint and the researcher behind the experiment, it took about four hours from the initial ChatGPT prompt to a working piece of malware with zero detections on VirusTotal. Mulgrew noted that the purpose of his experiment was to demonstrate how easily ChatGPT's safeguards against malware creation can be circumvented to produce a tool that would ordinarily require significant technical expertise. This article continues to discuss how ChatGPT was convinced to create malware for finding and exfiltrating specific documents despite its directive to refuse malicious requests.
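The image-hiding step described above is standard steganography. As a minimal sketch of the general idea only, and not Mulgrew's actual code, the following Python example embeds a byte payload into the least significant bits of an image's pixel channels; the file names, the use of the Pillow library, and the LSB scheme are all assumptions chosen for illustration.

```python
# Minimal LSB steganography sketch (illustrative only; not the researcher's code).
# Assumes Pillow is installed; "cover.png" and "stego.png" are hypothetical file names.
from PIL import Image

def embed(cover_path: str, payload: bytes, out_path: str) -> None:
    """Hide payload bytes in the least significant bit of each pixel channel."""
    img = Image.open(cover_path).convert("RGB")
    pixels = list(img.getdata())
    # Prefix the payload with a 4-byte big-endian length so an extractor
    # knows how many bytes to read back out.
    data = len(payload).to_bytes(4, "big") + payload
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels) * 3:
        raise ValueError("payload too large for this cover image")
    # Flatten pixels to a list of channel values and overwrite the lowest bits.
    flat = [channel for px in pixels for channel in px]
    for i, bit in enumerate(bits):
        flat[i] = (flat[i] & ~1) | bit
    out = Image.new("RGB", img.size)
    out.putdata(list(zip(flat[0::3], flat[1::3], flat[2::3])))
    out.save(out_path, "PNG")  # PNG is lossless, so the embedded bits survive saving

embed("cover.png", b"example fragment", "stego.png")
```

Extraction would simply reverse the process: read the length from the first 32 channel LSBs, then collect that many bytes. Because the changes affect only the lowest bit of each channel, the stego image is visually indistinguishable from the cover image, which is why such payloads evade casual inspection.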

Dark Reading reports "Researcher Tricks ChatGPT Into Building Undetectable Steganography Malware"
