"ChatGPT Hallucinations Open Developers to Supply Chain Malware Attacks"

Researchers have found that attackers can exploit ChatGPT's propensity to return false information in order to spread malicious code packages. This poses a significant threat to the software supply chain, because it can allow malicious code and Trojans to slip into legitimate applications and code repositories such as npm, PyPI, and GitHub. Using "AI package hallucinations," threat actors can publish malicious code packages under names that ChatGPT recommends, which a developer could then download when using the chatbot and add to software that is later widely used, according to researchers from Vulcan Cyber's Voyager18 research team. In Artificial Intelligence (AI), a hallucination is a plausible-sounding response from the AI that is incomplete, biased, or simply false. This article continues to discuss how attackers can exploit these false recommendations to spread malicious code via developers who use ChatGPT to create software.
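
The research implies a simple defensive habit for developers: before installing a package that a chatbot recommends, confirm that it actually exists on the registry and check how long it has been published. The sketch below is not from the article; it shows one possible check against PyPI's public JSON API, and the package names and the 90-day threshold are hypothetical examples.

```python
# Minimal sketch (assumption, not from the article): vet an AI-suggested
# dependency by confirming it exists on PyPI and checking how long ago it
# was first published. A name that resolves to nothing, or to a very new
# package, deserves manual review before "pip install".
import json
import urllib.error
import urllib.request
from datetime import datetime, timezone


def vet_pypi_package(name: str, min_age_days: int = 90) -> bool:
    """Return True if the package exists on PyPI and its first upload is
    older than min_age_days; otherwise flag it for manual review."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)
    except urllib.error.HTTPError:
        # 404 here means the name is unregistered -- exactly the kind of
        # hallucinated package name an attacker could claim.
        print(f"'{name}' is not on PyPI; could be a hallucinated name")
        return False

    # Collect upload timestamps across all released files.
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values()
        for f in files
    ]
    if not uploads:
        print(f"'{name}' exists but has no uploaded files; review manually")
        return False

    age_days = (datetime.now(timezone.utc) - min(uploads)).days
    if age_days < min_age_days:
        print(f"'{name}' was first published only {age_days} days ago; review manually")
        return False

    print(f"'{name}' exists and has been on PyPI for {age_days} days")
    return True


if __name__ == "__main__":
    vet_pypi_package("requests")         # long-established package
    vet_pypi_package("arangodb-helper")  # hypothetical hallucinated name
```

This is only one signal; package age and existence do not prove a dependency is safe, but they catch the specific case the researchers describe, where a recommended package name has no legitimate history at all.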

Dark Reading reports "ChatGPT Hallucinations Open Developers to Supply Chain Malware Attacks"
