"ChatGPT Shows Promise of Using AI to Write Malware"

It can take an hour or more for even the most skilled hackers to write a script that exploits a software vulnerability and infiltrates their target. A machine, however, may soon be able to do it in seconds. Brendan Dolan-Gavitt, a computer security researcher, wondered whether he could instruct OpenAI's ChatGPT tool, which lets users converse with an Artificial Intelligence (AI) chatbot, to write malicious code. He asked the model to solve a simple capture-the-flag challenge and was surprised by the results. ChatGPT correctly identified a buffer overflow vulnerability in the challenge code and wrote code to exploit the flaw. It would have solved the problem perfectly had it not made a minor error in the number of characters in the input.

The challenge Dolan-Gavitt posed was a basic one, of the kind given to students near the beginning of a vulnerability analysis course, so the fact that the model stumbled does not inspire confidence in large language models, the foundation on which AI bots answer human inquiries. After spotting the error, Dolan-Gavitt prompted the model to re-examine its answer, and ChatGPT got it right. ChatGPT is currently far from perfect at writing code and exemplifies many of the shortcomings of relying on AI tools to do so. Nonetheless, as these models grow more sophisticated, they are likely to play a significant role in writing malicious code.

Large language models, such as OpenAI's, are trained on massive amounts of data scraped from the Internet and books, then use statistical methods to predict the most likely ways to complete queries or answer questions. That data includes "tens of millions of public repositories" of computer code from sites such as GitHub, along with forums such as StackExchange, giving the model the ability to mimic the skills of trained programmers. Large language models therefore pose a double-edged sword in terms of cybersecurity risk: they can generate malicious code, but they are also prone to error and risk inserting vulnerable code. This article continues to discuss ChatGPT showing promise of using AI to write malware and the risks posed by large language models writing malicious code.

CyberScoop reports "ChatGPT Shows Promise of Using AI to Write Malware"

Submitted by Anonymous on