"Pervasive LLM Hallucinations Expand Code Developer Attack Surface"

According to recent research published by the Large Language Model (LLM) security vendor Lasso Security, software developers' use of LLMs gives attackers a greater opportunity than previously thought to distribute malicious packages into development environments. The study follows up on a report published last year examining how attackers could exploit LLMs' tendency to hallucinate, that is, to generate seemingly plausible but factually incorrect results in response to user input. The earlier report focused on ChatGPT's tendency to fabricate code library names, among other things, when software developers asked the AI-enabled chatbot for help. When a developer asked the chatbot to recommend packages for a project, it sometimes pointed to nonexistent packages on public code repositories. This article continues to discuss the follow-up research on the pervasiveness of the package hallucination problem across four different LLMs.
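As an illustration of the risk described above, the following is a minimal sketch (not taken from the Lasso Security research) of how a developer might check whether an LLM-suggested package name actually resolves to a real project on PyPI before installing it. The package names in the example are hypothetical, and the check uses PyPI's public JSON metadata endpoint.

```python
import sys
import urllib.error
import urllib.request

# Public PyPI metadata endpoint; returns 200 for real projects, 404 otherwise.
PYPI_JSON_API = "https://pypi.org/pypi/{name}/json"


def package_exists_on_pypi(name: str) -> bool:
    """Return True if the given name resolves to a registered PyPI project."""
    url = PYPI_JSON_API.format(name=name)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        # A 404 means no such package exists -- a hallmark of a hallucinated name.
        if err.code == 404:
            return False
        raise


if __name__ == "__main__":
    # Check every package name an LLM suggested before running `pip install`.
    suggested = sys.argv[1:] or ["requests", "totally-made-up-helper-lib"]  # hypothetical names
    for name in suggested:
        status = "exists" if package_exists_on_pypi(name) else "NOT FOUND (possible hallucination)"
        print(f"{name}: {status}")
```

Note that an existence check alone is not sufficient once an attacker has registered a commonly hallucinated name and uploaded malicious code under it; developers should also review a package's maintainers, release history, and provenance before trusting it.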

Dark Reading reports "Pervasive LLM Hallucinations Expand Code Developer Attack Surface"

Submitted by grigby1