"Pervasive LLM Hallucinations Expand Code Developer Attack Surface"
"Pervasive LLM Hallucinations Expand Code Developer Attack Surface"
According to recent research published by the Large Language Model (LLM) security vendor Lasso Security, software developers' use of LLMs gives attackers a greater opportunity to distribute malicious packages into development environments than previously thought. The study is a follow-up to a report the vendor published last year on the possibility of attackers exploiting LLMs' tendency to hallucinate, that is, to generate seemingly plausible but factually incorrect results in response to user input.
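The risk arises because a hallucinated dependency name is often unregistered on public indexes such as PyPI, leaving the name free for an attacker to claim with a malicious upload. As a minimal defensive sketch, assuming Python and PyPI's public JSON API (the helper name package_exists_on_pypi is illustrative, not from the research), a developer could at least confirm that an LLM-suggested dependency is actually published before running pip install:

```python
import sys

import requests


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` is a published package on PyPI.

    Queries PyPI's public JSON API; a 404 means the name is
    unregistered, which is exactly the gap an attacker could fill
    by publishing a malicious package under the hallucinated name.
    """
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200


if __name__ == "__main__":
    # Pass LLM-suggested package names on the command line.
    for name in sys.argv[1:]:
        if package_exists_on_pypi(name):
            print(f"{name}: found on PyPI (still vet it before installing)")
        else:
            print(f"{name}: NOT on PyPI; likely a hallucinated name")
```

Existence alone proves nothing about safety, since an attacker may already have claimed a hallucinated name; a check like this is only a first filter ahead of normal dependency vetting.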