"Security Threats in AIs Such as ChatGPT Revealed by Researchers"
"Security Threats in AIs Such as ChatGPT Revealed by Researchers"
Scientists at the University of Sheffield have found that Natural Language Processing (NLP) tools such as ChatGPT can be tricked into producing malicious code, which could then be used to launch cyberattacks. The study is said to be the first to demonstrate that NLP models can be exploited to attack real-world computer systems across a range of industries. The findings show that Artificial Intelligence (AI) language models are vulnerable to simple backdoor attacks, such as the planting of a Trojan Horse, which could be triggered at any time to steal data or disrupt services.
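To make the risk concrete, the sketch below illustrates the kind of vulnerable pattern the researchers warn about: SQL produced by a language model is executed directly against a database with no checks. It is not taken from the study itself; the function names (`text_to_sql`, `run_user_question`) are hypothetical and the model call is mocked.

```python
# Hypothetical sketch: executing model-generated SQL without validation.
import sqlite3


def text_to_sql(question: str) -> str:
    """Stand-in for a Text-to-SQL model. A crafted question can steer a
    real model toward SQL that leaks data or damages the database."""
    if "tidy up" in question.lower():
        # Attacker-influenced phrasing yields a destructive statement.
        return "DROP TABLE users"
    # Even a benign question can return more columns than the user asked for.
    return "SELECT name, email FROM users"


def run_user_question(conn: sqlite3.Connection, question: str):
    sql = text_to_sql(question)
    # Vulnerable pattern: no allow-list, parsing, or permission check
    # before executing whatever the model produced.
    return conn.execute(sql).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

    print(run_user_question(conn, "List the users"))       # leaks email data
    run_user_question(conn, "Please tidy up old records")  # drops the table
```

In a deployed system the mocked function would be a real model query, and the same pattern would let a manipulated prompt, or a backdoored model, steal information or take the service offline.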