"ChatGPT Is Enabling Script Kiddies to Write Functional Malware"

Since its beta release in November 2022, the Artificial Intelligence (AI) chatbot ChatGPT has been used to perform a variety of tasks, such as writing poetry, technical papers, and novels. Malware development and other forms of cybercrime can now be added to the list. Researchers at the security firm Check Point Research report that within a few weeks of ChatGPT's release, members of cybercrime forums, including some with little to no coding experience, were using it to write software and emails that could be used for espionage, ransomware, spam, and other malicious activities. According to the researchers, it is too soon to say whether ChatGPT will become the preferred tool of dark web participants, but the cybercriminal community has already shown significant interest and is adopting the chatbot for malware development.

One forum member posted what they claimed was their first script and credited the AI chatbot with helping them finish it. The Python code incorporated several cryptographic functions, including code signing, encryption, and decryption. One part of the script generated a file-signing key using the Ed25519 elliptic-curve signature scheme. Another part encrypted system files with the Blowfish and Twofish algorithms. A third part compared multiple files using RSA keys, digital signatures, message signing, and the BLAKE2 hash function. The result was a script that could decrypt a single file and append a Message Authentication Code (MAC) to the end of the file. It could also encrypt a hardcoded path and decrypt a list of files passed as an argument. This article continues to discuss the possibility of those with limited coding skills using ChatGPT to write functional malware.
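To make the article's cryptographic terminology concrete, the following is a minimal, benign Python sketch of two of the primitives mentioned: Ed25519 file signing and a keyed BLAKE2b digest used as a Message Authentication Code (MAC). This is an illustration only, not the forum member's actual script; the message, key, and digest size are placeholder assumptions, and the Ed25519 portion relies on the third-party `cryptography` library.

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Placeholder file contents and MAC key (assumptions for illustration)
data = b"example file contents"
mac_key = b"shared-secret"

# Ed25519: generate an elliptic-curve key and sign the file contents
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(data)
# verify() raises InvalidSignature if the data or signature was tampered with
private_key.public_key().verify(signature, data)

# Keyed BLAKE2b functions as a MAC; append it to the end of the file,
# as the article describes the generated script doing
mac = hashlib.blake2b(data, key=mac_key, digest_size=32).digest()
tagged = data + mac
```

A receiver holding the same key can recompute the keyed BLAKE2b digest over the leading bytes and compare it against the trailing 32 bytes to detect tampering.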

Ars Technica reports "ChatGPT Is Enabling Script Kiddies to Write Functional Malware"

Submitted by Anonymous