Spotlight on Lablet Research #37 - Model-Based Explanation for Human-in-the-Loop Security

Lablet: Carnegie Mellon University

An effective response to security attacks often requires a combination of automated and human-mediated actions. There are currently few adequate methods for reasoning about such human-system coordination, including ways to determine when to allocate tasks to each party and how to gain assurance that automated mechanisms are appropriately aligned with organizational needs and policies. This project focuses on combining human and automated actions in response to security attacks, and will show how probabilistic models and model checkers can be used both to synthesize complex plans that involve a combination of human and automated actions and to provide human-understandable explanations of mitigation plans proposed or carried out by the system.

Models that support attack resiliency need to address how tasks are allocated between humans and systems and how automated mechanisms align with organizational policies. Such models must capture, for example, when and how systems and humans should cooperate, how to provide self-explanation to support hand-offs to humans, and ways to assess the overall effectiveness of coordinated human-system approaches for mitigating sophisticated threats. In this project, the research team, led by Principal Investigator (PI) David Garlan, is developing a model-based approach to: (1) reason about when and how systems and humans should cooperate with each other; (2) improve human understanding of and trust in automated behavior through self-explanation; and (3) provide mechanisms for humans to correct a system's automated behavior when it is inappropriate. The team is exploring the effectiveness of these techniques in the context of coordinated system-human approaches for mitigating Advanced Persistent Threats (APTs).

The team has worked on the following thrusts:

Game Theoretic Approaches to Self-Protection. Game-theoretic approaches have been explored in security to model malicious behaviors and to design reliable defenses in a mathematically grounded manner. However, modeling the system as a single player, as done in prior work, is insufficient when the system is partially compromised and when fine-grained defensive strategies are needed in which the remaining, still-autonomous parts of the system cooperate to mitigate the impact of attacks. To deal with these issues, the researchers proposed a new self-adaptive framework that incorporates Bayesian game theory and models the defender (i.e., the system) at the granularity of components. Under security attacks, the architecture model of the system is translated into a Bayesian multi-player game in which each component is explicitly modeled as an independent player and security attacks are encoded as variant types for the components. The optimal defensive strategy for the system is dynamically computed by solving for a pure-strategy equilibrium (i.e., the adaptation response) that achieves the best possible system utility, improving the resiliency of the system against security attacks.
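As a rough illustration of this encoding, the Python sketch below (with hypothetical components, types, and payoffs, not the project's actual framework or tooling) treats two components as players in a small Bayesian game, encodes a possible attack as a component type, and finds a pure-strategy equilibrium by brute force:

```python
# Toy illustration (hypothetical components and payoffs, not the project's actual
# framework): components are players in a Bayesian game, possible attacks are
# encoded as component "types", and a pure-strategy equilibrium is found by
# brute force over the players' type-contingent strategies.
from itertools import product

players = ["router", "server"]
types = {p: ["nominal", "under_attack"] for p in players}
type_prob = {"router": {"nominal": 0.8, "under_attack": 0.2},
             "server": {"nominal": 0.7, "under_attack": 0.3}}
actions = {"router": ["normal", "throttle"], "server": ["serve", "isolate"]}

def payoff(player, type_profile, action_profile):
    """Shared system utility: defending pays off when a component is under attack,
    but wastes capacity otherwise; defending everywhere disrupts service further."""
    u = 0.0
    for p in players:
        defending = action_profile[p] in ("throttle", "isolate")
        if type_profile[p] == "under_attack":
            u += 5.0 if defending else -10.0
        else:
            u += 3.0 if not defending else 0.0
    if all(action_profile[p] in ("throttle", "isolate") for p in players):
        u -= 2.0
    return u

def expected_utility(player, strategies):
    # expectation over type profiles, each player acting on its own type
    total = 0.0
    for combo in product(*(types[p] for p in players)):
        tp = dict(zip(players, combo))
        prob = 1.0
        for p in players:
            prob *= type_prob[p][tp[p]]
        acts = {p: strategies[p][tp[p]] for p in players}
        total += prob * payoff(player, tp, acts)
    return total

def all_strategies(player):
    # a pure strategy maps each of the player's types to one of its actions
    for choice in product(actions[player], repeat=len(types[player])):
        yield dict(zip(types[player], choice))

def is_equilibrium(strategies):
    # no player can gain by unilaterally switching to another pure strategy
    for p in players:
        current = expected_utility(p, strategies)
        for alt in all_strategies(p):
            if expected_utility(p, {**strategies, p: alt}) > current + 1e-9:
                return False
    return True

for profile in product(*(list(all_strategies(p)) for p in players)):
    strategies = dict(zip(players, profile))
    if is_equilibrium(strategies):
        print("pure-strategy equilibrium:", strategies)
```

At realistic scale, the game would be derived automatically from the architecture model and solved with a proper game solver rather than by enumeration.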

To provide exploration capabilities for game-theoretic approaches to self-protection, the team developed a tool, xGames, that allows operators to (a) visualize and explore games by selecting nodes in the game tree and understanding the state of the game at that point, (b) ask "why," "why not," and "what if" questions about alternative courses of action, (c) understand the impact of moves in the game on the affected system, and (d) tailor the tool to arbitrary games and systems. A video of the tool is available.

Preparing Humans. Informed by work in cognitive science on human attention and context management, the team extended their formal framework for reasoning about human-in-the-loop adaptation to cover preparatory notifications in self-adaptive systems that involve human operators. The framework characterizes the effects of managing attention via task notification in terms of task-context comprehension. They also built on the framework to develop an automated probabilistic reasoning technique that determines when, and in what form, a preparatory notification tactic should be used to optimize system goals.
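A minimal sketch of the kind of trade-off involved (hypothetical probabilities and costs, not the team's formal framework): pick the preparatory-notification tactic that maximizes expected utility, balancing improved task-context comprehension against the cost of interrupting the operator.

```python
# Minimal sketch (hypothetical numbers, not the team's formal framework): choose
# the preparatory-notification tactic with the highest expected utility.

# tactic -> (probability the operator is prepared at hand-off, interruption cost)
TACTICS = {
    "no notification":       (0.35, 0.0),
    "brief heads-up":        (0.70, 3.0),
    "detailed notification": (0.90, 8.0),
}
U_PREPARED = 100.0      # utility if the operator handles the hand-off well
U_UNPREPARED = 20.0     # utility if the operator is caught off guard

def expected_utility(p_prepared: float, cost: float) -> float:
    return p_prepared * U_PREPARED + (1 - p_prepared) * U_UNPREPARED - cost

for tactic, (p, c) in TACTICS.items():
    print(f"{tactic:>22}: {expected_utility(p, c):.1f}")
best = max(TACTICS, key=lambda t: expected_utility(*TACTICS[t]))
print("chosen tactic:", best)
```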

Explainability of Trade-offs. While recent developments in architectural analysis techniques can assist architects in exploring the satisfaction of quantitative guarantees across the design space, existing approaches in software design are limited because they do not explicitly link design decisions to the satisfaction of quality requirements. Furthermore, the amount of information they yield can overwhelm a human designer, making it difficult to see the forest for the trees. The research team developed an approach to analyzing architectural design spaces that addresses these limitations and provides a basis for explaining design trade-offs. The approach combines dimensionality reduction techniques employed in machine learning pipelines with quantitative verification, enabling architects to understand how design decisions contribute to the satisfaction of strict quantitative guarantees under uncertainty across the design space. The results show the feasibility of the approach in two case studies and provide evidence that dimensionality reduction is a viable way to facilitate comprehension of trade-offs in poorly understood design spaces. This is foundational work that, while focused on software design, also applies to explaining run-time decisions when the space of possible actions is large, by focusing attention on the key elements that influence the decision made.
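The following sketch, using synthetic data and hypothetical decision names rather than the project's case studies, shows the general idea: project a verified design space onto its principal components to see which design decisions account for most of the variation in a quantitative guarantee.

```python
# Hedged sketch (synthetic data, hypothetical decision names): reduce a verified
# design space to its principal components to see which design decisions drive
# most of the variation in a quantitative guarantee.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Each row is one architecture in the design space; columns are binary design
# decisions plus the result of quantitative verification (e.g., probability of
# meeting a response-time guarantee) as a model checker might report it.
rng = np.random.default_rng(0)
decisions = rng.integers(0, 2, size=(200, 4))
guarantee = (0.5 + 0.3 * decisions[:, 0] - 0.2 * decisions[:, 2]
             + 0.05 * rng.standard_normal(200))            # synthetic verification result

X = StandardScaler().fit_transform(np.column_stack([decisions, guarantee]))
pca = PCA(n_components=2).fit(X)

labels = ["replicate_db", "add_cache", "encrypt_links", "extra_monitor", "P(guarantee)"]
for i, component in enumerate(pca.components_):
    top = sorted(zip(labels, component), key=lambda kv: -abs(kv[1]))[:3]
    print(f"PC{i+1} ({pca.explained_variance_ratio_[i]:.0%} of variance):", top)
```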

More recent accomplishments include the following:

For realistic self-adaptive systems, multiple quality attributes need to be considered and traded off against each other. These quality attributes are commonly encoded in a utility function, for instance, a weighted sum of the relevant objectives. Utility functions are typically subject to a set of constraints, i.e., hard requirements that the system must not violate. The research agenda for requirements engineering for self-adaptive systems has raised the need for decision-making techniques that consider the trade-offs and priorities among multiple objectives, and human stakeholders need to be engaged in the decision-making process so that constraints and the relative importance of each objective can be correctly elicited.

The research team developed a method that supports multiple stakeholders in eliciting constraints, prioritizing relevant quality attributes, negotiating priorities, and providing input to define utility functions for self-adaptive systems. They built tool support in the form of a blackboard system that aggregates information from different stakeholders, detects conflicts, proposes mechanisms for reaching agreement, and generates a utility function. They also performed a think-aloud study with 14 participants to investigate negotiation processes and to assess the approach's understandability and user satisfaction. The study sheds light on how humans reason about, and negotiate around, quality attributes. The mechanisms for conflict detection and resolution were perceived as very useful, and overall the approach was found to make the process of defining a utility function more understandable and transparent. This can be used to combine security requirements with other quality requirements in an explainable way, tracing the explanation back to stakeholder reasoning and conflict resolution. The team also continued its work on using dimension-reducing approaches to focus explanations on the key factors used in decision-making for self-adaptive/self-protecting systems.
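The sketch below illustrates the flavor of the generated artifact with hypothetical stakeholders, attributes, and weights (it is not the team's blackboard tool): stakeholder priorities are aggregated into a weighted-sum utility function with hard constraints, and a simple priority conflict is flagged for negotiation.

```python
# Illustrative sketch (hypothetical stakeholders and numbers, not the team's
# blackboard tool): aggregate stakeholder priorities into a weighted-sum utility
# function with hard constraints, and flag a priority conflict.

stakeholders = {
    "security_officer": {"weights": {"security": 0.6, "performance": 0.2, "cost": 0.2},
                         "constraints": {"security": 0.8}},
    "operations_lead":  {"weights": {"security": 0.3, "performance": 0.5, "cost": 0.2},
                         "constraints": {"performance": 0.7}},
}
attributes = ["security", "performance", "cost"]

# Aggregate weights by averaging across stakeholders and renormalizing.
agg = {a: sum(s["weights"][a] for s in stakeholders.values()) / len(stakeholders)
       for a in attributes}
total = sum(agg.values())
weights = {a: w / total for a, w in agg.items()}

# Merge hard constraints, keeping the strictest bound per attribute.
constraints = {}
for s in stakeholders.values():
    for a, bound in s["constraints"].items():
        constraints[a] = max(bound, constraints.get(a, 0.0))

# Detect a simple priority conflict: stakeholders disagree on the top attribute.
tops = {name: max(s["weights"], key=s["weights"].get) for name, s in stakeholders.items()}
if len(set(tops.values())) > 1:
    print("priority conflict to negotiate:", tops)

def utility(config: dict) -> float:
    """Weighted-sum utility over normalized attribute values in [0, 1];
    configurations that violate a hard constraint get no utility."""
    if any(config[a] < bound for a, bound in constraints.items()):
        return float("-inf")
    return sum(weights[a] * config[a] for a in attributes)

print("weights:", weights, "constraints:", constraints)
print("utility:", utility({"security": 0.9, "performance": 0.75, "cost": 0.6}))
```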

Many self-adaptive systems benefit from human involvement and oversight, where a human operator can provide expertise not available to the system and detect problems the system is unaware of. One way of achieving this synergy is by placing the human operator in the loop, i.e., providing supervisory oversight and intervening in the case of questionable adaptation decisions. To make such interaction effective, explanations can play an important role by allowing the human operator to understand why the system is making certain decisions, improving the operator's knowledge of the system. This, in turn, may improve the operator's ability to intervene and, if necessary, override the decisions being made by the system. However, explanations incur costs, in terms of delayed actions and the possibility that the human makes a bad judgment. Hence, it is not always obvious whether an explanation will improve overall utility and, if so, what kind of explanation should be provided to the operator. The team defined a formal framework for reasoning about explanations of adaptive system behaviors and the conditions under which they are warranted. Specifically, they characterized explanations in terms of their content, effect, and cost, and used a dynamic adaptation approach that leverages probabilistic reasoning to determine when an explanation should be given in order to improve overall system utility. The framework was evaluated in the context of a realistic Industrial Control System (ICS) with adaptive behaviors.
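A minimal sketch of the underlying decision (hypothetical numbers, not the framework's formalism): compare expected utility with and without an explanation, trading the improvement in the operator's intervention quality against the delay the explanation introduces.

```python
# Minimal sketch (hypothetical numbers, not the team's formal framework): decide
# whether to show an explanation by comparing expected utility with and without it.

P_CORRECT_NO_EXPL = 0.55    # operator intervenes correctly without an explanation
P_CORRECT_EXPL = 0.90       # explanation improves the operator's judgment
U_CORRECT = 100.0           # utility when the decision is handled well
U_WRONG = 10.0              # utility when the decision is handled badly
EXPL_DELAY_COST = 12.0      # utility lost while the operator reads the explanation

def expected_utility(p_correct: float, explain: bool) -> float:
    base = p_correct * U_CORRECT + (1 - p_correct) * U_WRONG
    return base - (EXPL_DELAY_COST if explain else 0.0)

u_without = expected_utility(P_CORRECT_NO_EXPL, explain=False)
u_with = expected_utility(P_CORRECT_EXPL, explain=True)
print(f"without explanation: {u_without:.1f}, with explanation: {u_with:.1f}")
print("explain" if u_with > u_without else "act without explaining")
```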

The team also developed a framework that uses statistical techniques common in machine learning to simplify explanations of plans chosen in large trade-off spaces. The approach combines Principal Component Analysis (PCA), decision trees, and classification to identify the key factors behind the choice of plan. This allows explanations to focus on the factors that actually influenced the decision, reducing the amount of information and context a human needs in order to comprehend an explanation.
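The sketch below, on synthetic data rather than the team's actual trade-off spaces, shows how those ingredients fit together: PCA indicates how many dimensions carry the variation among candidate plans, and a shallow decision tree yields a human-readable rule for why certain plans were chosen.

```python
# Hedged sketch (synthetic data, not the team's framework): combine PCA with a
# shallow decision tree to surface the few factors that actually determined
# which plans were chosen in a large trade-off space.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
features = ["risk", "cost", "latency", "operator_load", "coverage"]
X = rng.random((500, len(features)))                    # candidate plans
chosen = (X[:, 0] < 0.3) & (X[:, 4] > 0.6)              # plans the planner picked

# PCA shows how many independent dimensions the trade-off space really has.
pca = PCA().fit(X)
print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))

# A shallow tree over the original features gives a human-readable rule for the choice.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, chosen)
print(export_text(tree, feature_names=features))
```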
