"Fighting Fire With Fire: White Hat Hackers Using ChatGPT Against Threat Actors"
Most cybersecurity experts and ethical hackers, also known as white hat hackers, have used the Artificial Intelligence (AI)-driven chatbot ChatGPT in their web security work. New research from the Web3 bug bounty platform Immunefi reveals that despite ChatGPT's limitations, most white hat hackers recommend including it in their security toolkits. According to the survey, 76.4 percent of white hat hackers have used ChatGPT for web security practices, while the remaining 23.6 percent of respondents have not yet used the technology. Regarding use cases, most white hat hackers cited education as ChatGPT's primary application (73.9 percent), followed by smart contract auditing (60.6 percent) and vulnerability discovery (46.7 percent). Respondents agree that ChatGPT has limitations, most commonly citing limited accuracy in identifying security vulnerabilities, followed by a lack of domain-specific knowledge and difficulty handling large-scale audits. The accuracy of results and ease of use are the two most influential factors in deciding whether or not to use ChatGPT. This article continues to discuss the use of ChatGPT by white hat hackers.
Cybernews reports "Fighting Fire With Fire: White Hat Hackers Using ChatGPT Against Threat Actors"