"AI May Create a Tidal Wave of Buggy, Vulnerable Software"

Reliance on error-filled code written by generative Artificial Intelligence (AI) built on Large Language Models (LLMs) is producing highly vulnerable software, according to Veracode Chief Technology Officer (CTO) and co-founder Chris Wysopal. He noted that LLMs write code the way human software developers do, and human developers generally do not write secure code. Code-writing generative AI programs such as Microsoft Copilot are expected to help improve software security, and they help developers write about 50 percent more code, but the code written by AI has been found to be less secure. A New York University study found that code generated by Microsoft Copilot was 41 percent more likely to contain vulnerabilities. This article continues to discuss the expected increase in vulnerable software due to generative AI.

SC Magazine reports "AI May Create a Tidal Wave of Buggy, Vulnerable Software"

Submitted by grigby1
