SoS Musings #47 - The Problem with False Positives in Security Operations

False positives are an issue commonly faced in the collection of threat intelligence, the execution of security operations, and incident response performance. The National Institute of Standards and Technology (NIST) defines false positives as alerts that incorrectly indicate that a vulnerability is present or that malicious activity is occurring. Specifically, in cybersecurity, false positives denote that a file, setting, or event has been flagged as malicious when it is truly benign. False positive alerts may expose organizations to security breaches as information security teams often waste time, resources, and efforts handling such alerts when they could be addressing actual threats to the system or network they are responsible for protecting.

Studies have highlighted the overwhelming generation of false positives. According to a survey conducted by the cybersecurity firm FireEye that polled C-level security executives at large enterprises around the world, 37% of the respondents revealed that they receive more than 10,000 alerts each month. Of those alerts, over 50% were false positives. Findings from a study by the Ponemon Institute indicate that the average organization may receive significantly more alerts in a week. The Ponemon study reported that the average number of malware alerts received by an organization during a typical week is nearly 17,000, with only 19% of the alerts having been considered reliable. Research from the Neustar International Security Council (NISC) found that more than a quarter of security alerts handled by organizations are false positives. The NISC survey, answered by senior security professionals across the US and five European markets, also found that over 40% of organizations experience false positive alerts in more than 20% of cases, while 15% revealed that over 50% of their security alerts turn out to be false positives. A FireEye-sponsored survey conducted by the leading global market intelligence firm International Data Corporation (IDC) polled 350 internal and Managed Security Service Provider (MSSP) security analysts and managers working in organizations across multiple sectors, including financial, healthcare, and government. Internal security analysts and IT security managers revealed that they receive thousands of alerts each day, with 45% of them being false positives. MSSP analysts pointed out that 53% of the alerts they receive are false positives. Sixty-eight percent of those who participated in another survey by the cybersecurity company Critical Start reported that false positives make up 25-75% of the security alerts they investigate on a daily basis.

False positives have the potential to disrupt cybersecurity efforts and result in a costly breach. The IDC InfoBrief, "The Voice of the Analysts: Improving Security Operations Center Processes Through Adapted Technologies," calls attention to the fact that many security analysts and managers are experiencing alert fatigue, leading to lower productivity, ignored alerts, increased stress, and the Fear of Missing Incidents (FOMI). Security analysts are becoming increasingly overwhelmed by the flood of false positive alerts they receive from the different kinds of solutions implemented by Security Operations Centers (SOCs). The influx of false positives is decreasing the efficiency of in-house analysts and slowing down workflow processes. The IDC survey found that 35% of internal analysts manage alert overload by ignoring alerts. Forty-four percent of analysts at MSSPs also said that they ignore alerts when their queue gets too crowded, leaving multiple clients vulnerable to a potential breach. FOMI is affecting most security analysts and managers. The more challenges analysts face when manually managing alerts, the more they worry about missing an incident. According to the same IDC survey, three in four analysts worry about missing incidents, while one in four worries significantly about missing incidents. However, FOMI seems to be impacting security managers more than analysts, with over 6% of the managers reporting that they have lost sleep over the fear of missing an incident.

Alert overload and the overwhelming number of false positives lead to high analyst turnover. The cybersecurity firm Critical Start surveyed SOC professionals working in different companies, MSSPs, and Managed Detection & Response (MDR) providers to gain insight into the state of incident response within SOCs regarding alert volume, alert management, SOC turnover, and more. Findings from the survey further indicate that the alert overload problem, together with a false positive rate of 50% or higher, can wear on security analysts in the long run, eventually leading to burnout and high turnover rates in SOC teams. More than eight out of ten respondents reported that their SOC had experienced analyst turnover ranging from at least 10% to over 50%, due to alert overload and the struggle to handle false positives. Because of the current global cybersecurity skills shortage, it is increasingly difficult to find skilled, experienced security professionals to replace departing analysts. In addition to trying to hire more analysts to manage the onslaught of security alerts and the high false positive rate, SOCs turn off high-volume alerting features considered too noisy and ignore low- to medium-priority alerts. All of these factors leave enterprises more vulnerable to risk and security threats.

To address the challenges associated with alert overload and high false positive rates, security analysts and managers are calling for more automated SOC solutions. Most enterprise security teams are currently not using tools that automate SOC activities, as indicated by the top tools shared by analysts who participated in the FireEye-sponsored IDC survey. Less than half of the respondents reported using tools that apply Artificial Intelligence (AI) and Machine Learning (ML) to investigate alerts. Less than 50% of the respondents said that they use security tools and functions such as Security Information and Event Management (SIEM) software, Security Orchestration, Automation, and Response (SOAR) tools, and threat hunting. The survey also suggests that only two in five analysts use AI and ML together with other tools. Threat detection was ranked highest in the list of activities that are best to automate, followed by threat intelligence and incident triage. ML and AI are essential to automating threat detection. A Recurrent Neural Network (RNN) is a type of Deep Neural Network (DNN), which is in turn a form of Artificial Neural Network (ANN). Through the use of an RNN, the accuracy of threat detection can be improved significantly, reducing or eliminating false positives, and the model can be retrained and tuned to improve over time. Ivan Novikov, the founder and CEO of the application security company Wallarm, gave a research-based presentation at BSides San Francisco in which he discussed how a neural network built on ML could be trained on false positive detections to continuously tune the system and prevent future false positive events. Novikov explained how automatic rule tuning by the ML network could replace traditional false positive responses such as CAPTCHA challenges or email alerts to security teams.
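The automatic rule tuning idea can be illustrated with a minimal sketch. The class below is hypothetical (not Wallarm's implementation, and far simpler than a neural network): it records analyst verdicts per alerting rule and automatically suppresses rules whose observed false positive rate exceeds a threshold, once enough feedback has accumulated. The names `AlertTuner`, `record_verdict`, and `should_alert` are illustrative assumptions, not part of any real product.

```python
from collections import defaultdict

class AlertTuner:
    """Hypothetical sketch of feedback-driven rule tuning: suppress
    alerting rules whose measured false positive rate is too high."""

    def __init__(self, fp_threshold=0.9, min_samples=20):
        self.fp_threshold = fp_threshold  # FP rate above which a rule is muted
        self.min_samples = min_samples    # verdicts needed before tuning kicks in
        # per-rule counters: total verdicts and false positive verdicts
        self.stats = defaultdict(lambda: {"total": 0, "false_pos": 0})

    def record_verdict(self, rule_id, is_false_positive):
        """Feed back an analyst's triage verdict for one alert."""
        s = self.stats[rule_id]
        s["total"] += 1
        if is_false_positive:
            s["false_pos"] += 1

    def should_alert(self, rule_id):
        """Decide whether future hits on this rule should still page analysts."""
        s = self.stats[rule_id]
        if s["total"] < self.min_samples:
            return True  # not enough feedback yet; keep alerting
        return (s["false_pos"] / s["total"]) < self.fp_threshold

tuner = AlertTuner(fp_threshold=0.9, min_samples=20)
for _ in range(19):
    tuner.record_verdict("noisy-rule", True)
print(tuner.should_alert("noisy-rule"))  # True: below min_samples, still alerting
tuner.record_verdict("noisy-rule", True)
print(tuner.should_alert("noisy-rule"))  # False: 20/20 false positives, rule muted
```

A production system would replace the simple ratio with a trained model and would likely decay old verdicts so that a rule can recover if attacker behavior changes, but the feedback loop (analyst verdict in, tuned alerting decision out) is the same.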
Security teams need advanced automated solutions to help reduce alert fatigue and free analysts to focus on higher-skill activities such as threat hunting.
