"Using ChatGPT to Analyze Your Code? Not So Fast"

According to the Cybersecurity and Information Systems Information Analysis Center (CSIAC), average code contains about 6,000 defects per million lines of code, and the Software Engineering Institute (SEI) at Carnegie Mellon University (CMU) has found that about 5 percent of those defects become vulnerabilities. That works out to roughly three vulnerabilities per 10,000 lines of code. The question is whether ChatGPT can help improve this ratio. There has been much discussion of how tools built on top of Large Language Models (LLMs) will affect software development, specifically how developers write and evaluate code. In 2023, a team of CERT Secure Coding researchers used ChatGPT 3.5 to analyze noncompliant code examples from the SEI CERT C Coding Standard. This article discusses the experiment and its findings, which show that while ChatGPT 3.5 has promise, it also has limitations.

Software Engineering Institute - Carnegie Mellon University reports "Using ChatGPT to Analyze Your Code? Not So Fast"

Submitted by grigby1
