Spotlight on Lablet Research #26 - Monitoring, Fusion, and Response for Cyber Resilience

Lablet: University of Illinois at Urbana-Champaign

The goal of this project is to facilitate faster Sensitivity Analysis (SA) and Uncertainty Quantification (UQ) of slow-running cybersecurity models using a novel stacked metamodel approach.

Realistic state-based discrete-event cybersecurity simulation models are often quite complex. That complexity can manifest in models that (a) contain many input variables whose values are difficult to determine precisely, and (b) take a relatively long time to execute. SA and UQ are used to understand and manage the uncertainty in the inputs, but the long execution times of such models can make traditional SA and UQ prohibitively time-consuming. In this project, researchers led by Principal Investigator (PI) William Sanders developed a novel approach for performing faster SA and UQ by using a metamodel, composed of a stacked ensemble of regressors, that emulates the behavior of the base model. They demonstrated its use on a number of previously published models that serve as test cases. The research team found that their metamodels run hundreds to thousands of times faster than the base models and are more accurate than state-of-the-practice metamodels.
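The article does not include code, but the core idea can be sketched with off-the-shelf tools: sample the slow base model to obtain training data, fit a stacked ensemble of regressors as the metamodel, and then run inexpensive Monte Carlo uncertainty propagation against the metamodel instead of the base model. Everything in the sketch below (the toy slow_base_model function, the scikit-learn regressor choices, and the sample sizes) is an illustrative assumption, not the project's actual implementation.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, GradientBoostingRegressor
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Hypothetical stand-in for a slow discrete-event simulation model:
# maps a vector of uncertain input parameters to a scalar resilience metric.
def slow_base_model(x):
    return np.sin(x[0]) + 0.5 * x[1] ** 2 + 0.1 * x[0] * x[1]

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, size=(500, 2))       # sampled input configurations
y = np.array([slow_base_model(x) for x in X])   # "expensive" base-model runs

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Metamodel: a stacked ensemble of regressors that emulates the base model.
metamodel = StackingRegressor(
    estimators=[
        ("rf", RandomForestRegressor(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingRegressor(random_state=0)),
    ],
    final_estimator=RidgeCV(),
)
metamodel.fit(X_train, y_train)
print("held-out R^2:", r2_score(y_test, metamodel.predict(X_test)))

# Fast uncertainty quantification: propagate input uncertainty through the
# cheap metamodel with a Monte Carlo sample far larger than the base model
# could support.
X_mc = rng.normal(0.0, 0.7, size=(100_000, 2))
y_mc = metamodel.predict(X_mc)
print("output mean:", y_mc.mean(), "output std:", y_mc.std())
```

Because the fitted metamodel evaluates in a fraction of a second, the large Monte Carlo sample that would be infeasible against a long-running base model becomes cheap; the same surrogate can then be reused for sensitivity analysis.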

At the beginning of 2021, researchers began to pursue two main research directions. First, they investigated whether adaptive sampling could be used to collect higher-quality training data for building more accurate metamodels for SA and UQ. That approach was not successful: metamodels trained on data collected by the adaptive sampling method were less accurate than metamodels trained on data from non-adaptive sampling methods. Although the researchers may return to adaptive sampling in the future, they decided to shift their focus to a new direction.

The second direction was to determine whether the metamodeling approach could be generalized. Prior to 2021, the team had tried it on only two models; they then identified six more models to use as test cases and found that the metamodeling approach works well on those additional models, producing metamodels that were thousands of times faster than the base models and more accurate than state-of-the-practice metamodels.

In addition, they designed and tested several different variations on the base stacked metamodel architecture to determine which architectures were most effective for the test cases. They also developed a plan to make the metamodeling tool they developed more widely available to academics.
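To give a flavor of what comparing architecture variants might look like (again a hedged sketch, not the team's actual experiments), the snippet below cross-validates two hypothetical stack configurations that differ in their base learners and final estimator, using synthetic data standing in for base-model runs.

```python
import numpy as np
from sklearn.ensemble import StackingRegressor, RandomForestRegressor, ExtraTreesRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.linear_model import RidgeCV, LassoCV
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for training data collected from a slow base model.
rng = np.random.default_rng(0)
X_train = rng.uniform(-2.0, 2.0, size=(400, 2))
y_train = np.sin(X_train[:, 0]) + 0.5 * X_train[:, 1] ** 2

# Candidate variations on the stacked architecture: different base-learner
# sets and final estimators (illustrative choices only).
variants = {
    "trees + ridge": StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)),
                    ("et", ExtraTreesRegressor(random_state=0))],
        final_estimator=RidgeCV(),
    ),
    "mixed + lasso": StackingRegressor(
        estimators=[("rf", RandomForestRegressor(random_state=0)),
                    ("knn", KNeighborsRegressor()),
                    ("svr", SVR())],
        final_estimator=LassoCV(),
    ),
}

# Rank the variants by cross-validated accuracy on the training data.
for name, model in variants.items():
    scores = cross_val_score(model, X_train, y_train, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")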

Background on this project can be found here.
