"How AI-Augmented Threat Intelligence Solves Security Shortfalls"

Security operations and threat intelligence teams are understaffed, overwhelmed with data, and juggling competing demands, all issues that Large Language Model (LLM) systems can help remedy. However, a lack of experience with these systems is holding many companies back from adopting the technology. According to researchers, organizations that implement LLMs can better synthesize intelligence from raw data and expand their threat intelligence capabilities, but such programs require support from security leadership to stay appropriately focused. John Miller, head of intelligence analysis at Mandiant, notes that teams should apply LLMs to solvable problems, but must first evaluate how useful LLMs would be in their organization's environment. In a presentation titled "What Does an LLM-Powered Threat Intelligence Program Look Like?" to be given at Black Hat USA in early August, Miller and Ron Graf, a data scientist on the intelligence analytics team at Mandiant's Google Cloud, will demonstrate the areas in which LLMs can help security analysts accelerate and enhance cybersecurity analysis. This article continues to discuss where LLMs can be of help to security professionals.

Dark Reading reports "How AI-Augmented Threat Intelligence Solves Security Shortfalls"
