"How Hackers Might Be Exploiting ChatGPT"

The popular Artificial Intelligence (AI) chatbot ChatGPT could be exploited by threat actors to hack into target networks with relative ease. The Cybernews research team found that ChatGPT, a recently launched platform that has attracted the online community's attention, might supply hackers with step-by-step instructions on how to breach websites. The researchers warn that the chatbot, while entertaining to experiment with, may also be harmful because it can provide thorough instructions on exploiting vulnerabilities. To test this, the team asked ChatGPT to help identify a website's vulnerabilities, posing questions and following the chatbot's guidance to see whether it could provide a step-by-step tutorial for exploiting them. The experiment used the "Hack the Box" cybersecurity training platform, which provides a virtual training environment and is employed by cybersecurity professionals, schools, and organizations to develop hacking skills. The team approached ChatGPT under the premise that they were conducting a penetration testing challenge, in which a hack is replicated using various tools and techniques. The chatbot responded with five basic starting points for what to look for on the website when hunting for vulnerabilities. By describing what they observed in the source code, the researchers were able to determine which parts of the code to prioritize, and they also received samples of suggested code changes. After about 45 minutes of interacting with the chatbot, the researchers succeeded in hacking the provided website. This article continues to discuss the demonstrated use of ChatGPT as a potential assistant in hacking operations.

Security Affairs reports "How Hackers Might Be Exploiting ChatGPT"

Submitted by Anonymous on