VI Reflections: False Positives and the Use of AI to Reduce Them

By grigby1 

False positives are a common problem when collecting threat intelligence, carrying out security operations, and responding to incidents. As defined by the National Institute of Standards and Technology (NIST), a false positive is an alert that incorrectly indicates the presence of a vulnerability or malicious activity. In cybersecurity, false positives occur when a file, setting, or event is flagged as malicious despite being completely benign. False positive alerts can expose organizations to security breaches because information security teams often waste valuable resources, time, and money dealing with such alerts when they could be addressing actual threats to the systems or networks they are responsible for protecting. The prevalence of false positives further exacerbates the burden placed on Security Operations Centers (SOCs). Analysts' time and resources are spent on false alerts produced by misconfigurations, obsolete threat intelligence, or the limitations of the underlying technology or data, when those resources could be better allocated to proactive threat hunting or research. A large volume of false positives also erodes trust and confidence in security tools and instills doubt among analysts, which may result in a failure to adequately respond to genuine security incidents.

Studies have shown that false positives are generated at an overwhelming rate. In a survey of C-level security executives at large enterprises worldwide conducted by the cybersecurity company FireEye, 37 percent of respondents reported receiving more than 10,000 alerts per month, with over 50 percent of those alerts being false positives. This contributes to "alert fatigue," one of the persistent problems in cybersecurity operations. A study commissioned by IBM and conducted by Morning Consult found that SOC team members get to only half of the alerts they are supposed to review during a typical workday, leaving a 50 percent blind spot. Most security analysts were found to spend nearly 32 percent of a typical workday investigating incidents that turn out not to be actual threats, the majority of them false positives or low-priority alerts.

Excessive false positives are also a factor in high analyst turnover. Critical Start surveyed SOC professionals across enterprises, Managed Security Service Providers (MSSPs), and Managed Detection and Response (MDR) providers, gathering information about incident response in SOCs, including alert volume, alert management, and SOC turnover. The findings suggest that the combination of alert overload and a false positive rate of 50 percent or more wears security analysts down over time, resulting in burnout and a high rate of turnover within SOC teams. Over 80 percent of respondents indicated that their SOC had experienced analyst turnover ranging from 10 percent to over 50 percent as a result of alert overload and the difficulty of managing false positives. With the current cybersecurity workforce shortage, it is increasingly challenging to find highly skilled and experienced professionals to replace departing analysts. SOCs try to cope with the flood of security alerts and false positives by recruiting additional analysts, disabling high-volume alerting features deemed excessively disruptive, and disregarding alerts of low to medium importance. These coping measures increase the exposure of enterprises to risk and security threats.

Among the 1,000 SOC team members surveyed in the Morning Consult study, 39 percent see greater use of AI and automation throughout their toolsets as a potential solution to the excessive number of false positive alerts. An AI-powered alert management system that uses Machine Learning (ML) to handle alerts automatically has the potential to significantly reduce analysts' workloads. A cyber assistant could learn from analyst decisions and behaviors in order to make recommendations and reduce false positives; one such solution has reduced false positives by 90 percent on average. A self-learning AI could help detect and respond autonomously, in near real time, to previously unknown threats. With such a system, analysts can devote more time to higher-level analysis, threat hunting, and other critical security tasks, and AI can help them make quick, informed decisions through visualized content.
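To make the idea concrete, the following sketch shows one way such a cyber assistant could learn from past analyst dispositions. It is a minimal illustration in Python with scikit-learn on synthetic data; the features, simulated labels, and confidence threshold are assumptions chosen for demonstration, not details of any particular product.

    # Minimal sketch: train a classifier on past analyst dispositions so that
    # new alerts the model is confident are false positives can be deprioritized.
    # Features, simulated labels, and the threshold are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 5000

    # Synthetic historical alerts: severity assigned by the detection rule,
    # number of correlated alerts in the same window, reputation of the
    # triggering source (0..1), and whether a threat-intel indicator matched.
    X = np.column_stack([
        rng.integers(1, 11, n),
        rng.poisson(2, n),
        rng.random(n),
        rng.integers(0, 2, n),
    ])

    # Analyst disposition from past triage: 1 = true positive, 0 = false
    # positive. Simulated here; in practice these come from SOC case records.
    y = ((X[:, 0] > 6) & (X[:, 3] == 1) | (X[:, 2] < 0.2)).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print(classification_report(y_test, clf.predict(X_test)))

    # Deprioritize (never silently drop) alerts the model is confident are
    # false positives; route everything else to analysts as usual.
    SUPPRESS_THRESHOLD = 0.95  # illustrative confidence cutoff
    fp_confidence = clf.predict_proba(X_test[:10])[:, 0]
    for alert, conf in zip(X_test[:10], fp_confidence):
        queue = "low-priority review" if conf > SUPPRESS_THRESHOLD else "analyst triage"
        print(f"alert {alert} -> {queue} (P(false positive) = {conf:.2f})")

In a real deployment, the labels would come from SOC case records rather than simulation, and deprioritized alerts would be periodically audited so that the model's mistakes remain visible to analysts.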

Combined with analytics capabilities, AI-driven, automated, multi-sourced threat intelligence enables SOCs to improve alert prioritization and significantly reduce false positives. By harnessing ML and AI, security tools can adapt to and learn from past data, sharpening their precision in detecting and identifying real threats.
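As a simple illustration of how multi-sourced intelligence supports prioritization, the sketch below raises an alert's priority only when independent threat-intelligence feeds corroborate one another. The feed names, scores, and weighting are illustrative assumptions, not a reference to any particular feed or product.

    # Minimal sketch: enrich an alert with indicators from several threat-intel
    # feeds and let the aggregate adjust its priority. Feeds are modeled as
    # pre-fetched lookups from indicator to confidence score (0..1).
    from dataclasses import dataclass

    @dataclass
    class Alert:
        source_ip: str
        rule_severity: int  # 1 (low) .. 10 (critical)

    # Hypothetical cached feed contents (IPs are RFC 5737 documentation addresses).
    FEEDS = {
        "commercial_feed": {"203.0.113.7": 0.9},
        "open_source_feed": {"203.0.113.7": 0.6, "198.51.100.2": 0.3},
        "internal_blocklist": {},
    }

    def prioritize(alert: Alert) -> float:
        # Average the per-feed confidence so that corroboration across
        # independent feeds raises the score, while a hit in a single
        # low-confidence feed barely moves it.
        hits = [feed.get(alert.source_ip, 0.0) for feed in FEEDS.values()]
        corroboration = sum(hits) / len(FEEDS)
        return alert.rule_severity * (0.5 + corroboration)

    print(prioritize(Alert("203.0.113.7", 6)))  # corroborated hit -> 6.0
    print(prioritize(Alert("192.0.2.10", 6)))   # no intel hits   -> 3.0

Because corroboration, rather than any single indicator match, drives the score, single-source false positives tend to stay near the bottom of the queue, which is the effect described above.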

To see previous articles, please visit the VI Reflections Archive.
