"ChatGPT Subs In as Security Analyst, Hallucinates Only Occasionally"

Experiments have demonstrated that ChatGPT, a popular Large Language Model (LLM), can help defenders triage potential security incidents and find security flaws in code, even though the Artificial Intelligence (AI) model was not trained for such tasks. In an analysis of ChatGPT's value as an incident response tool, researchers determined that it could identify malicious processes running on compromised systems. The researchers assumed the role of an adversary, infected a test system with Meterpreter and PowerShell Empire agents, and then scanned the system with a ChatGPT-powered scanner. The LLM identified the two malicious processes running on the system and correctly disregarded 137 benign processes, a result that could significantly reduce analyst overhead. This article continues to discuss the potential use of ChatGPT as a tool for incident response triage and software vulnerability discovery.
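The triage described above essentially amounts to sending per-process metadata to the model and asking for a verdict. The following is a minimal sketch of that idea, assuming the OpenAI Python client and a chat-capable model; the prompt wording, model name, and helper function are illustrative assumptions, not the researchers' actual tooling.

```python
# Minimal sketch of LLM-assisted process triage, assuming the OpenAI
# Python client (pip install openai) and an API key in OPENAI_API_KEY.
# Prompt wording and model choice are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def triage_process(name: str, path: str, cmdline: str) -> str:
    """Ask the model whether a running process looks malicious."""
    prompt = (
        "You are assisting with incident response triage. "
        "Given this process metadata, answer 'malicious' or 'benign' "
        "and give a one-sentence reason.\n"
        f"name: {name}\npath: {path}\ncommand line: {cmdline}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output reduces, but does not
                        # eliminate, the hallucinations the title alludes to
    )
    return resp.choices[0].message.content


# Example: an encoded PowerShell download cradle, a launcher pattern
# commonly associated with PowerShell Empire-style agents.
print(triage_process(
    "powershell.exe",
    r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "powershell -nop -w hidden -enc SQBFAFgA...",
))
```

In practice a verdict like this would only filter candidates for a human analyst, since an occasional wrong answer on 139 processes still beats reviewing all of them by hand.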

Dark Reading reports "ChatGPT Subs In as Security Analyst, Hallucinates Only Occasionally"
