"Code-Generating AI Can Introduce Security Vulnerabilities, Study Finds"

According to a new study, software developers who use code-generating Artificial Intelligence (AI) systems are more likely to introduce security flaws into the applications they write. The report from Stanford University-affiliated researchers highlights the potential risks of code-generating systems as vendors such as GitHub begin pushing them. Neil Perry, a Ph.D. candidate at Stanford and co-author of the paper, emphasized that code-generating systems are not a replacement for human developers: those who use them to complete assignments outside their areas of expertise should be concerned, and those who use them to speed up tasks in which they are already proficient should carefully double-check the outputs and the contexts in which they are used. The Stanford study focused on Codex, the AI code-generation system developed by the San Francisco-based research lab OpenAI. The researchers recruited 47 developers, ranging from undergraduate students to industry professionals with years of programming experience, to use Codex to solve security-related problems in programming languages including Python, JavaScript, and C. Participants with access to Codex were more likely than a control group to write incorrect and "insecure" solutions, and they were also more likely to rate their insecure solutions as secure. Megha Srivastava, a Stanford graduate student and the study's second co-author, emphasized that the findings are not a complete condemnation of Codex and other code-generating systems. For one, the study participants lacked the security expertise that would have enabled them to properly identify code vulnerabilities. Srivastava believes that code-generating systems are dependable for low-risk tasks, such as exploratory research code, and that their coding suggestions could be improved through fine-tuning. This article continues to discuss the research on the possibility of code-generating AI introducing security vulnerabilities into applications.

TechCrunch reports "Code-Generating AI Can Introduce Security Vulnerabilities, Study Finds"