Spotlight on Lablet Research #31 -

Predicting the Difficulty of Compromise through How Attackers Discover Vulnerabilities

 

Lablet: North Carolina State University
Participating Sub-Lablet: Rochester Institute of Technology

The goal of this project is to provide actionable feedback on the discoverability of a vulnerability. This feedback is useful for in-process software risk assessment, incident response, and the vulnerabilities equities process. The approach is to combine the attack surface metaphor and attacker behavior to estimate how attackers will approach discovering a vulnerability. The researchers, led by Principal Investigator (PI) Andy Meneely and Co-PI Laurie Williams, aim to develop useful metrics and to improve the metric formulation based on qualitative and quantitative feedback.

This project focuses on the attack surface, based on the notion that pathways into the system enable attackers to discover vulnerabilities. This knowledge is important to software developers, architects, system administrators, and users. A literature review classifying attack surface definitions yielded six clusters of definitions that differ significantly: methods, avenues, flows, features, barriers, and vulnerabilities. Insights from the methodology used to discover the attack surface (mining stack traces from thousands of crash reports), and from what the attack surface means in the context of metric actionability, will guide evolving the risky-walk models and deploying a human-in-the-loop study. Attacker behavior data is gathered from the 2018 and 2019 editions of the National Collegiate Penetration Testing Competition (CPTC).
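To illustrate the stack-trace mining idea, the following is a minimal sketch, not the team's tooling: it assumes crash reports stored as plain-text files with GDB-style frames (the directory name, frame format, and file glob are hypothetical). Functions that appear on many crash stacks are treated as candidate attack-surface nodes, since a crash demonstrates code reachable by external input.

import re
from collections import Counter
from pathlib import Path

# Assumed frame format, e.g. "#3 0x00007f2a... in parse_header () at http.c:42"
FRAME_RE = re.compile(r"\bin\s+(\w+)\s*\(")

def attack_surface_counts(report_dir: str) -> Counter:
    """Count, per function, how many crash reports its frames appear in."""
    counts: Counter = Counter()
    for report in Path(report_dir).glob("*.txt"):
        functions = set(FRAME_RE.findall(report.read_text(errors="ignore")))
        counts.update(functions)  # one vote per report, not per frame
    return counts

if __name__ == "__main__":
    # Functions seen across many distinct crashes approximate the attack surface.
    for func, n in attack_surface_counts("crash_reports").most_common(10):
        print(f"{func}: appears in {n} crash reports")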

Based on the research team's analysis of data from the 2019 Collegiate Penetration Testing Competition, they found that: 1) vulnerabilities related to protection mechanism failure (e.g., lack of SSL/TLS) and improper neutralization (e.g., SQL injection) are discovered faster than others; 2) vulnerabilities related to protection mechanism failure and improper resource control (e.g., user sessions) are discovered more often and exploited more easily than others; and 3) penetration testers follow a clear process from discovery/collection to lateral movement/pre-attack.
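To make the time-to-discovery comparison concrete, here is a minimal sketch over assumed data (the start time, records, and category labels are illustrative, not the CPTC dataset): for each team's first discovery of a vulnerability, compute the elapsed minutes from the competition start, then compare medians per weakness category.

from collections import defaultdict
from datetime import datetime
from statistics import median

START = datetime(2019, 1, 1, 9, 0)  # assumed competition start time

# (team, weakness category, timestamp of the team's first discovery) -- illustrative
events = [
    ("team-a", "protection mechanism failure", datetime(2019, 1, 1, 9, 40)),
    ("team-b", "improper neutralization (SQLi)", datetime(2019, 1, 1, 10, 5)),
    ("team-a", "improper neutralization (SQLi)", datetime(2019, 1, 1, 11, 30)),
    ("team-b", "protection mechanism failure", datetime(2019, 1, 1, 9, 55)),
]

by_category = defaultdict(list)
for _team, category, found_at in events:
    by_category[category].append((found_at - START).total_seconds() / 60)

# A lower median minutes-to-discovery means the weakness class is found faster.
for category, minutes in sorted(by_category.items(), key=lambda kv: median(kv[1])):
    print(f"{category}: median {median(minutes):.0f} min to first discovery")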

Researchers are comparing and evaluating existing vulnerable dependency detection tools. The goal of this study is to help security practitioners and researchers understand the current state of vulnerable dependency detection. The team ran 10 industry-leading dependency detection tools on a large web application composed of 44 Maven (Java) and npm (JavaScript) projects and found that the tools' results vary in both the vulnerable dependencies and the unique vulnerabilities they report.
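The comparison reduces to set arithmetic over each tool's findings. The sketch below assumes a hypothetical normalized report format (one JSON list of {"dependency", "id"} records per tool in a tool_reports/ directory); real tools emit different schemas that must first be normalized.

import json
from itertools import combinations
from pathlib import Path

def load_findings(report_path):
    """Assumed normalized shape: [{"dependency": "...", "id": "CVE-..."}, ...]."""
    records = json.loads(Path(report_path).read_text())
    return {(r["dependency"], r["id"]) for r in records}

# One JSON report per tool.
reports = {path.stem: load_findings(path) for path in Path("tool_reports").glob("*.json")}

# Pairwise overlap: a low Jaccard score means the two tools' results diverge.
for (a, fa), (b, fb) in combinations(reports.items(), 2):
    union = fa | fb
    jaccard = len(fa & fb) / len(union) if union else 1.0
    print(f"{a} vs {b}: overlap {jaccard:.2f}; "
          f"{len(fa - fb)} unique to {a}, {len(fb - fa)} unique to {b}")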

The team is building on a natural language classifier to mine apology statements from software repositories and systematically discover self-admitted mistakes. This classifier is being applied to a random sampling of GitHub repositories, covering language from commits, issues, and pull request conversations. Thus far, they have collected data from 17,491 repositories, drawn from the top 1,000 ranked repositories across 54 different programming languages. They are scaling up data collection and analyzing the apology results.
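As a toy illustration of the mining step (not the team's trained classifier), a keyword heuristic can flag candidate apology statements for a downstream natural language classifier; the cue phrases below are assumptions.

import re

# Assumed cue phrases for self-admitted mistakes; a trained classifier would
# replace this heuristic in a real pipeline.
APOLOGY_CUES = re.compile(
    r"\b(sorry|apolog(y|ies|i[sz]e[sd]?)|my (bad|fault|mistake)|mea culpa)\b",
    re.IGNORECASE,
)

def candidate_apologies(messages):
    """Return messages containing an apology cue phrase."""
    return [m for m in messages if APOLOGY_CUES.search(m)]

commits = [
    "Fix null check in session handler",
    "Sorry, the previous commit broke the build; reverting",
    "My mistake: the regex missed escaped quotes",
]
for message in candidate_apologies(commits):
    print(message)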

Researchers completed a study comparing the output of Software Composition Analysis (SCA) tools, a tool type receiving increased attention given the Executive Order on Cybersecurity's emphasis on the software supply chain. Manual analysis of the tools' results suggests that the accuracy of the vulnerability database is a key differentiator among SCA tools. As a result, the research team recommends that practitioners not rely on any single tool, as doing so can result in missing known vulnerabilities.

The team also conducted an empirical measurement study to help software practitioners and researchers understand the current practice of releasing security fixes in open-source packages. Specifically, the study examined the fix-to-release delay, code change size, and documentation of security releases over 4,377 advisories across seven open-source package ecosystems. The team found that packages are typically fast and light in their security releases: the median release comes within 4 days of the corresponding fix and contains 402 lines of code change. Furthermore, 61.5% of the releases come with a release note that documents the corresponding security fix, while 6.4% of these releases also mention a breaking change.
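The two headline numbers reduce to simple medians over per-advisory records. A minimal sketch, using made-up records rather than the study's dataset:

from datetime import date
from statistics import median

# (fix commit date, security release date, lines of code changed) -- illustrative
advisories = [
    (date(2021, 3, 1), date(2021, 3, 3), 120),
    (date(2021, 5, 10), date(2021, 5, 12), 402),
    (date(2021, 7, 2), date(2021, 7, 20), 1500),
]

# Fix-to-release delay in days, and change size in lines of code.
delays = [(released - fixed).days for fixed, released, _ in advisories]
change_sizes = [loc for _, _, loc in advisories]

print(f"median fix-to-release delay: {median(delays)} days")
print(f"median change size: {median(change_sizes)} lines")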
 
