"No Programming Skills? Chatbots Will Help Inexperienced Hackers"
There is already evidence of experienced threat actors using the Artificial Intelligence (AI) chatbot ChatGPT and other chatbots to help them write malware. With a chatbot and existing code, anyone, including those without programming experience, can develop malware. Similarly, it will now be much simpler for anyone to compose convincing phishing emails, as some of the telltale signs of a scam, such as poor language translations and grammatical errors, will be eliminated. It is also essential to note that sophisticated cybercriminals are now using such chatbots as attack vectors, capitalizing on their popularity to lure people to malicious fake websites. However, attackers still face obstacles that may make it difficult for amateurs to begin developing cybercrime skills. ChatGPT is currently unavailable in countries such as Russia, China, Iran, and Ukraine, though there are always workarounds, including the use of Virtual Private Networks (VPNs). Even in countries where chatbots are available, the service is often overwhelmed and inaccessible. This article continues to discuss the expected use of AI chatbots by inexperienced hackers, how easily chatbots can be used for malicious purposes, and some barriers to using ChatGPT to cultivate cybercrime skills.
Security Boulevard reports "No Programming Skills? Chatbots Will Help Inexperienced Hackers"