"ChatGPT Could Make Phishing More Sophisticated"

As the new version of the Artificial Intelligence (AI)-driven chatbot ChatGPT rolls out, experts have reiterated their greatest cybersecurity concern: that the technology will be used to compose more sophisticated phishing emails, increasing the vulnerability of government systems to attacks. OpenAI revealed GPT-4, the newest version of its AI technology, and demonstrated its ability to draft lawsuits, pass various standardized exams, and analyze uploaded text and photos. According to the company, the latest version offers greater "steerability," allowing users to prescribe the AI's style and task rather than being limited to the classic ChatGPT personality with a set language and tone. This improvement could allow hackers to craft more effective phishing emails, especially ones that appear to come from specific individuals and are sent to many of their contacts. In response, government agencies are encouraged to increase phishing training for employees and adopt AI-driven cybersecurity solutions. Srinivas Mukkamala, chief product officer of the cybersecurity software company Ivanti, stated that, given the exponential growth of the problem, governments should be "proactive" in responding to AI-driven threats by decreasing their attack surface. National intelligence officials are also concerned about the threat posed by AI-driven attacks. In its 2023 Annual Threat Assessment, the Office of the Director of National Intelligence warned that new technologies such as AI are being developed and deployed faster than companies and governments can shape norms, protect privacy, and prevent damaging effects. This article continues to discuss the new version of ChatGPT, which offers users greater steerability, and how this advancement could bolster cyberattacks.

GCN reports "ChatGPT Could Make Phishing More Sophisticated"