SoS Musings #69 - ChatGPT: A New Threat to Cybersecurity


Artificial Intelligence (AI) continues to fascinate both experts within the technology industry and the general public. Machine Learning (ML) systems that can automatically generate content such as text, videos, and images are surging in popularity as billions of dollars are invested in them. Despite AI's immense potential to help humanity, there are concerns about the risks of developing algorithms that can outperform humans and potentially get out of hand. Dystopian futures in which AI takes control of humanity are, for the time being, highly improbable. In the meantime, however, AI can help cybercriminals carry out their malicious activities. Writing a script to exploit a software vulnerability and compromise a target can take at least an hour, even for the most skilled hackers, but that time could soon be reduced to seconds using OpenAI's ChatGPT (Generative Pre-trained Transformer). ChatGPT is a Large Language Model (LLM) launched in November 2022 that uses AI and Natural Language Processing (NLP) to generate content nearly indistinguishable from human writing. It is built on top of OpenAI's GPT-3 family of language models and fine-tuned with both supervised and reinforcement learning techniques. The tool lets users engage in human-like question-and-answer exchanges with a chatbot: a user can ask it to write something in a specific author's style, debug code, and more, and when a user types in a question, the chatbot provides an informative response that appears to have been written by an expert. Although ChatGPT is entertaining to experiment with, its capacity to provide guidance on exploiting vulnerabilities makes it potentially harmful, and its release is expected to facilitate cyberattacks by low-skilled threat actors.
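The exchange described above happens through ChatGPT's web interface, but the same question-and-answer pattern can be illustrated programmatically. The following is a minimal sketch only, assuming the pre-1.0 `openai` Python package, an API key supplied via the environment, and an example model name; none of these specifics come from the article.

    # Minimal sketch of a programmatic question-and-answer exchange with an
    # OpenAI chat model. Assumes the pre-1.0 `openai` package is installed and
    # OPENAI_API_KEY is set; the model name is illustrative only.
    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # example model name
        messages=[
            {"role": "system", "content": "You are a helpful programming assistant."},
            {"role": "user", "content": "Explain what a buffer overflow is in two sentences."},
        ],
    )

    # The reply comes back as structured data; print just the assistant's text.
    print(response["choices"][0]["message"]["content"])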

Researchers have already demonstrated the potential use of ChatGPT by cybercriminals for malicious operations. Brendan Dolan-Gavitt, a computer security expert, explored whether the chatbot could be directed to produce malicious code by presenting it with a simple capture-the-flag challenge. ChatGPT discovered a buffer overflow vulnerability and crafted code to exploit the flaw. The model would have solved the problem perfectly if not for an error in the number of characters it placed in the input. After noticing the error, Dolan-Gavitt prompted the model to reevaluate its answer, and ChatGPT then got it right.
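For readers unfamiliar with what such an exploit involves, the sketch below shows the general shape of a capture-the-flag style buffer overflow input: a run of filler bytes sized to the vulnerable buffer, followed by a little-endian value that overwrites adjacent data. The buffer size and overwrite value are hypothetical placeholders, not the ones from Dolan-Gavitt's challenge, and getting the byte count exactly right is precisely the detail ChatGPT initially missed.

    # Generic shape of a CTF-style buffer overflow input: padding that fills the
    # vulnerable buffer, then a little-endian value that lands on whatever the
    # challenge wants overwritten. All sizes and values here are hypothetical.
    import struct
    import sys

    BUFFER_SIZE = 64                    # hypothetical size of the vulnerable buffer
    OVERWRITE_VALUE = 0xDEADBEEF        # hypothetical value the challenge checks for

    padding = b"A" * BUFFER_SIZE                    # fill the buffer exactly
    overflow = struct.pack("<I", OVERWRITE_VALUE)   # 4-byte little-endian overwrite

    sys.stdout.buffer.write(padding + overflow)     # pipe into the challenge binary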

Within a few weeks of ChatGPT going live, members of cybercrime forums, some with little or no coding experience, were using it to write malware and phishing emails that could be used for espionage, ransomware attacks, spamming, and other malicious activities, according to researchers at the security firm Check Point Research. The researchers say it is still too early to determine whether ChatGPT will become cybercriminals' new preferred tool, but the cybercriminal community has already expressed interest in using it to help write malware. One forum user claimed to have used the AI chatbot to help write their first script and give it a good scope. The Python code rolled together several cryptographic functions, including code signing, encryption, and decryption. The script used elliptic curve cryptography, specifically the curve ed25519, to generate a key for signing files, while another part used a hardcoded password to encrypt system files with the Blowfish and Twofish algorithms. The resulting script can decrypt a single file and append a Message Authentication Code (MAC) to the end of the file, as well as encrypt a hardcoded path and decrypt a list of files that it receives as an argument. While this code can be used in a benign manner, the researchers warn that it could easily be modified to encrypt a target's machine without any user interaction; for example, if the script's syntax errors are rectified, the code could be converted into ransomware. In another case observed by Check Point Research, a forum user who appears to be more skilled shared two code samples written with the help of ChatGPT. The first was a Python script for post-exploitation information theft: it searched for specific file types, such as PDFs, copied them to a temporary directory, compressed them, and sent them to a server under the attacker's control. The second piece of code, written in Java, covertly downloaded PuTTY, an SSH and telnet client, and launched it using PowerShell. The user appeared to be a technically advanced threat actor trying to demonstrate, through examples, how low-skilled cybercriminals could use ChatGPT for malicious purposes.
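The cryptographic building blocks in the first forum user's script are ordinary, well-documented operations rather than anything exotic. As a benign illustration of how little code the ed25519 signing step requires, the sketch below uses the widely available `cryptography` package; the in-memory key and sample data are assumptions made for the example and are not taken from the forum post.

    # Benign illustration of Ed25519 signing, the kind of primitive the forum
    # post combined with symmetric file encryption. Requires the `cryptography`
    # package; the key handling and sample data are illustrative only.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Generate a signing key (a real workflow would load and protect this key).
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign some data and confirm that the signature verifies.
    data = b"contents of the file to be signed"   # stand-in for reading a real file
    signature = private_key.sign(data)

    public_key.verify(signature, data)            # raises InvalidSignature on mismatch
    print("signature verified:", signature.hex()[:32], "...")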

Researchers at Cybernews also drew attention to the possibility of threat actors using the AI chatbot to more easily break into networks. The research team found that ChatGPT could help train hackers to break into websites by providing step-by-step instructions, and they demonstrated this by using ChatGPT to help identify vulnerabilities in a website. The team asked questions and followed the chatbot's instructions to see whether it could provide a step-by-step guide to exploiting the vulnerability. For their test, they used the "Hack the Box" training platform, which security professionals, students, and companies use to improve their hacking skills. The team approached ChatGPT in the context of a penetration testing challenge and asked the chatbot for help. ChatGPT suggested five basic areas to check first when looking for security flaws in the website. By describing what they observed, the researchers were able to get the AI's advice on which areas of the source code to focus on, and they were also given examples of suggested modifications to the code. After about 45 minutes of conversation with the chatbot, the team was able to compromise the website. The tool presented more than enough examples for the researchers to determine what was effective and what was not; while it did not provide the exact payload required at that stage, it did supply a wealth of potential keywords to search for.

According to researchers at CyberArk, threat actors could use ChatGPT to develop polymorphic malware with advanced capabilities that circumvents most anti-malware products and makes mitigation significantly more difficult. CyberArk Labs developed a proof-of-concept (POC) for the highly evasive malware, finding a way to execute payloads delivered as text on a victim's computer. They tested their approach on Windows and reported that a malware package containing a Python interpreter could be created and designed to periodically query ChatGPT for new modules. These modules contain code, in text form, that defines the malware's functionality, such as code injection, file encryption, or persistence. The malware package would then check whether the code operates as expected on the target system, which the researchers stated could be accomplished through communication between the malware and a command-and-control (C2) server. In a use case involving a file encryption module, the functionality is derived from a ChatGPT query in text form, and the malware generates a test file for the C2 server to validate. If the validation succeeds, the malware is instructed to run the code, which encrypts the files; if it fails, the process is repeated until functional encryption code is generated and validated. The malware uses the compile function of the built-in Python interpreter to turn the payload code string into a code object that can then be run on the victim's computer. The researchers said this method demonstrates the possibility of malware that can run new or modified code, making it polymorphic in nature. According to CyberArk's researchers, because the malware receives incoming payloads in the form of text instead of binaries, it carries no suspicious logic while in memory and can therefore evade most of the security products used in the demonstration: it evades signature-based detection and bypasses the Antimalware Scan Interface (AMSI).
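The evasion CyberArk describes rests on a standard Python feature: source code received as plain text can be turned into an executable object in memory with the built-in compile() function, so no recognizable binary payload is ever written to disk for a scanner to fingerprint. The sketch below demonstrates only that neutral mechanism with a harmless payload string; it is not CyberArk's proof-of-concept, and a real sample would fetch its text from a remote service rather than define it locally.

    # Neutral demonstration of the mechanism described above: Python source that
    # arrives as ordinary text is compiled and executed in memory. The payload
    # here is harmless and defined locally purely for illustration.
    payload_text = (
        "def module_entry():\n"
        "    return 'hello from dynamically delivered code'\n"
    )

    # Turn the text into a code object and execute it in an isolated namespace.
    namespace = {}
    code_object = compile(payload_text, "<received-module>", "exec")
    exec(code_object, namespace)

    print(namespace["module_entry"]())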

AI-based tools could have a disastrous effect on security if they fall into the hands of malicious actors. It is therefore essential to continue exploring and mitigating the risks posed by ChatGPT and other AI text generators.

 
