"ChatGPT 'Not a Reliable' Tool for Detecting Vulnerabilities in Developed Code"

According to a new report by NCC Group examining various Artificial Intelligence (AI) cybersecurity use cases, generative AI, particularly ChatGPT, should not be considered a reliable tool for detecting vulnerabilities in developed code without human expert oversight. However, Machine Learning (ML) models show significant promise in helping detect zero-day attacks. The whitepaper, titled "Safety, Security, Privacy & Prompts: Cyber Resilience in the Age of Artificial Intelligence (AI)," aims to give readers a deeper understanding of how AI applies to cybersecurity by describing how cybersecurity professionals can use the technology. This article continues to discuss key points from NCC Group's report on AI cybersecurity use cases.

CSO Online reports "ChatGPT 'Not a Reliable' Tool for Detecting Vulnerabilities in Developed Code"

Submitted by grigby1