"ASU Experts Explore National Security Risks of ChatGPT"

Experts from Arizona State University (ASU) are drawing further attention to how ChatGPT and other Artificial Intelligence (AI)-driven chatbots threaten national security. According to Nadya Bliss, executive director of ASU's Global Security Initiative and chair of the Defense Advanced Research Projects Agency's (DARPA) Information Science and Technology Study Group, ChatGPT could be used to craft phishing emails and messages that trick unsuspecting victims into revealing sensitive information or installing malware. Because the technology can generate large volumes of convincing, well-written messages, such emails are difficult for both recipients and filters to detect, and Bliss emphasizes that it could accelerate sophisticated phishing campaigns while reducing their cost. ChatGPT also poses a cybersecurity threat through its ability to rapidly generate malicious code, allowing attackers to create and deploy new threats more quickly than security countermeasures can be developed. Malicious code generated by ChatGPT could be updated fast enough to evade traditional antivirus software and signature-based detection mechanisms. This article continues to discuss the ways in which ChatGPT and other AI chatbots pose national security risks, as well as efforts to address those risks.

Arizona State University reports "ASU Experts Explore National Security Risks of ChatGPT"