"Turning AI to Crime"

The Artificial Intelligence (AI) chatbot ChatGPT has been generating a great deal of buzz in the news and on social media for its ability to write blogs, software source code, and frameworks. People are sharing what they have done with the Large Language Model (LLM)-based bot and what they plan to do next, with applications ranging from product prototyping to virtual assistants and a nearly limitless range of other tasks. Cybercriminals have also been experimenting with ChatGPT. Posts on dark web forums show that cybercriminals are using the chatbot to generate malicious code. According to Nicole Sette, associate managing director in the cyber risk business at Kroll, a corporate investigation and risk consultancy, most researchers agree that chatbots are not yet optimized for code creation because they lack the creativity to develop novel code. However, in March 2023, Kroll observed hacking forum users discussing methods for bypassing ChatGPT's restrictions and using the program to generate code. Sette explains that other forum users shared code for circumventing ChatGPT's Terms of Service, a practice known as 'jailbreaking ChatGPT,' on various dark web forums. Threat actors have also discovered ways to use chatbots to write malware, including information stealers. Check Point Research reported that a user on an underground hacker forum used ChatGPT to recreate a Python-based information stealer based on published analyses of prevalent malware. This article continues to discuss how cybercriminals are using ChatGPT.

CACM reports "Turning AI to Crime"