"Security Threats in AIs Such as ChatGPT Revealed by Researchers"

Scientists at the University of Sheffield have found that Natural Language Processing (NLP) tools such as ChatGPT can be tricked into generating malicious code that could be used to launch cyberattacks. The study is reportedly the first to demonstrate that NLP models can be exploited to attack real-world computer systems across various industries. The results show that Artificial Intelligence (AI) language models are vulnerable to simple backdoor attacks, such as the planting of a Trojan horse, which could be triggered at any time to steal data or disrupt services. The findings also shed light on the security risks involved when people use AI tools to learn programming languages and interact with databases. This article continues to discuss how AI tools such as ChatGPT could be tricked into producing malicious code capable of being used in cyberattacks.

The University of Sheffield reports "Security Threats in AIs Such as ChatGPT Revealed by Researchers"

Submitted by grigby1