Trust from Explicit Evidence: Integrating Digital Signatures and Formal Proofs
Abstract

ABOUT THE PROJECT:

This project is developing a common logical framework that will account for two principal sources of trust in software: digital signatures and explicit proof. The framework will allow us to rigorously specify, enforce, and analyze security policies that rely on multiple modes and sources of trust. Based on earlier work by the PI and collaborators, the framework is being cast as a modal type theory that comes equipped with a notation for programs and proofs. We will exploit this relationship to guarantee composability (since types govern interfaces between code) and usability from the programmer's point of view (since type checking is effective during software development). The project will proceed in two phases: framework design and metatheoretic study in the first phase, and a testbed implementation in the second phase. A third phase may study usability from the end user's point of view.

OUR TEAM:

Frank Pfenning (CSD, CMU) and Elsa L. Gunter


Towards a Science of Securing Network Forwarding
Abstract

ABOUT THE PROJECT:

In this project, we are formulating a formal science of securing packet forwarding in computer networks. In particular, we are developing a formal algebra of the network's data plane (packet-forwarding) operations. Our approach represents the network's forwarding behavior and state in the form of Boolean satisfiability (SAT) clauses. Our framework enables rigorous analysis and study of the way that networks forward packets, including checking correctness invariants against a specific network deployment (e.g., reachability, service quality, classification level), providing metrics that capture the degree of packet-forwarding security, and deriving theorems about how to build networks with strong packet-forwarding security properties. We are bridging the gap between theory and practice by developing a system that extracts SAT clauses directly from network deployments, and we are building upon existing SAT solver technologies to perform automated reasoning. To speed progress, we are leveraging our earlier work on using satisfiability checking to test network reachability.
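To make the SAT framing concrete, here is a toy sketch (not the project's actual encoding; the network, hop names, and 4-bit header are invented for illustration): each hop's forwarding rule and ACL becomes a predicate over header bits, and a destination is reachable exactly when the conjunction of the predicates along the path is satisfiable. A brute-force enumeration stands in for a real SAT solver.

```python
from itertools import product

# Toy network A -> B -> C. Each hop forwards a packet only if its
# 4-bit header matches the hop's rule; reachability from A to C holds
# iff the AND of the per-hop match predicates is satisfiable.

N_BITS = 4

def hop_AB(h):   # A forwards to B when the top bit is 1 (prefix 1***)
    return h[0] == 1

def hop_BC(h):   # B forwards to C when the top two bits are 10**
    return h[0] == 1 and h[1] == 0

def acl_B(h):    # B's ACL drops packets with header 1011
    return not (h == (1, 0, 1, 1))

def reachable(predicates):
    """Satisfiability check by enumerating all headers (a stand-in for
    a real SAT solver, which scales to realistic header spaces)."""
    for h in product([0, 1], repeat=N_BITS):
        if all(p(h) for p in predicates):
            return h          # witness packet that reaches the destination
    return None

witness = reachable([hop_AB, hop_BC, acl_B])
print(witness)  # (1, 0, 0, 0): some packet does reach C
```

An unsatisfiable conjunction (e.g., two hops matching disjoint prefixes) returns no witness, certifying that no packet can traverse the path.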

OUR TEAM:

Brighten Godfrey and Matthew Caesar

Toward a Theory of Resilience in Systems: A Game-Theoretic Approach
Abstract

ABOUT THE PROJECT:

This project falls within the areas of control theory and game and decision theory, and aims to address several foundational issues. It builds on a platform of ongoing research on strategic decision‐making that accommodates elements of security, robustness, learning, and adversarial intrusion, using control‐ and game‐theoretic frameworks.

OUR TEAM:

Tamer Basar


Theoretical Foundations of Threat Assessment by Inverse Optimal Control
Abstract

ABOUT THE PROJECT:

Starting from the premise that cyber attacks have quantifiable objectives, our long-term goal is to develop computational tools that quickly recover the objective associated with any given attack by observing its effects. Knowledge of this objective is critical to threat assessment: it allows us to understand the scope of an attack, to predict how the attack will proceed, and to respond in a systematic and targeted way. Our approach to recovering this objective is based on a new method for solving the problem of "inverse optimal control." Much of our recent work has focused on establishing the theoretical foundations for this solution method. In this project we are exploring its application to threat assessment.
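The inverse-optimal-control idea can be illustrated in its simplest possible setting (a scalar LQR problem; this is a textbook sketch, not the project's method): the forward problem computes the optimal feedback gain from a known cost, and the inverse problem observes the gain and recovers the hidden cost weight, up to the usual scale ambiguity.

```python
import math

# Scalar system x' = a*x + b*u with cost  integral(q*x^2 + r*u^2).
# The optimal feedback is u = -k*x with k = b*p/r, where p > 0 solves
# the algebraic Riccati equation  2*a*p - (b^2/r)*p^2 + q = 0.

def forward_gain(a, b, q, r=1.0):
    # Solve (b^2/r) p^2 - 2 a p - q = 0 for the stabilizing root p > 0.
    A, B, C = b**2 / r, -2 * a, -q
    p = (-B + math.sqrt(B**2 - 4 * A * C)) / (2 * A)
    return b * p / r

def inverse_cost(k, a, b, r=1.0):
    # Invert: p = r*k/b, then q = (b^2/r)*p^2 - 2*a*p.
    # The cost is identifiable only up to scale, so we fix r = 1.
    p = r * k / b
    return (b**2 / r) * p**2 - 2 * a * p

a, b, q_true = -1.0, 1.0, 4.0
k = forward_gain(a, b, q_true)   # the "observed" controller behavior
q_rec = inverse_cost(k, a, b)    # the recovered objective weight
print(round(q_rec, 6))  # 4.0: the hidden objective is recovered exactly
```

Real attacks, of course, involve nonlinear dynamics and partial observations; the point of the sketch is only that observed behavior can determine the underlying objective.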

OUR TEAM:

Timothy W. Bretl

Secure Platforms via Stochastic Computing
Abstract

ABOUT THE PROJECT:

The criticality of the information protection and assurance (IPA) problem has understandably sparked rich intellectual and material investment in finding a solution. Several efforts have centered on understanding, identifying, tolerating, and patching security vulnerabilities at different levels of the electronic system stack for various security attack models. Most of these approaches fall into the "sand-boxing" category, whereby unusual events are sequestered until their potential impacts are identified. Such efforts tend to be directed at well-known threats, and thus require that all existing techniques be revisited as newer attack models emerge.

This project is formulating a framework for designing secure computing platforms that treat security infractions as computational errors and employ error-resiliency techniques to tolerate them, while simultaneously providing the user with alert levels based on a grading of the severity of the infractions. Our ongoing work on stochastic computing is being leveraged to provide a foundation for the framework.

OUR TEAM:

Naresh Shanbhag and Rakesh Kumar

The Science of Summarizing Systems: Generating Security Properties Using Data Mining and Formal Analysis
Abstract

ABOUT THE PROJECT:

In this project, we are applying our invariant generation methodologies to security-focused applications, such as generating invariants for a new application from the Android app store. We are investigating the science behind re-engineering a system through iterative invariant generation. We are also studying the connections between the machine-learning model used for data mining and the finite-state-machine model of the program that is actually learned. We will generalize the models/abstractions obtained from this study and validate these methods of learning invariants using empirical evidence from a real security application.
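The core loop of data-mining-driven invariant generation can be sketched in a few lines (a Daikon-style illustration with made-up traces and candidates, not the project's actual methodology): propose candidate invariants over observed program states, and retain those that hold on every sample.

```python
# Observed (x, y) states at some program point -- invented sample data.
traces = [
    {"x": 1, "y": 2},
    {"x": 3, "y": 6},
    {"x": 5, "y": 10},
]

# Candidate invariant templates, instantiated over the observed variables.
candidates = {
    "x > 0":      lambda s: s["x"] > 0,
    "y == 2 * x": lambda s: s["y"] == 2 * s["x"],
    "x >= y":     lambda s: s["x"] >= s["y"],
}

# Keep only the candidates falsified by no trace sample.
invariants = [name for name, check in candidates.items()
              if all(check(s) for s in traces)]
print(invariants)  # ['x > 0', 'y == 2 * x']
```

Iterating this loop with new traces, and feeding surviving invariants back into test generation, is what "re-engineering a system through iterative invariant generation" amounts to at small scale.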

OUR TEAM:

Shobha Vasudevan

Scalable Methods for Security Against Distributed Attacks
Abstract

ABOUT THE PROJECT:

This project is developing methods for resilient, efficient recognition of distributed attacks on clouds, data warehouses, clusters, and other massively parallel systems. Such attacks cannot usually be detected through local monitoring alone. Specifically, we are developing a probabilistic distributed temporal logic for characterizing such attacks and methods of verifying formulas in such a logic. The novel approach of combining probabilistic, distributed, and temporal operators will yield a new representation for system properties. These operators will enable us to express more sophisticated attack patterns, as well as describe a wide variety of local and global properties, ranging from performance to availability.

Using distributed operators localizes the description and supplies the necessary scalability and performance. Probabilistic reasoning adds resilience against the uncertainty inherent in parallel systems. Temporal operators provide history-awareness and anticipation of future events. The new semantics and associated algorithms we are researching will support reasoning about and proving these properties. We will prototype a compiler that translates formulas into monitoring programs that can be deployed immediately in the network.
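The flavor of a monitor compiled from a probabilistic temporal formula can be sketched as follows (an illustrative toy, not the project's logic or compiler; the event names, window size, and threshold are invented): a bounded "eventually" is checked per event, and a probability bound is evaluated over a sliding window of outcomes.

```python
from collections import deque

# Property being monitored: "with probability >= 0.9 over the last W
# observed requests, every 'request' is followed by a 'grant' within
# K steps."

def monitor(events, K=3, W=8, threshold=0.9):
    outcomes = deque(maxlen=W)  # 1 = request satisfied in time, 0 = not
    for i, e in enumerate(events):
        if e == "request":
            # Bounded-eventually: a grant occurs within the next K events.
            ok = "grant" in events[i + 1 : i + 1 + K]
            outcomes.append(1 if ok else 0)
    if not outcomes:
        return True  # vacuously satisfied: no requests observed
    # Probabilistic operator: empirical frequency against the threshold.
    return sum(outcomes) / len(outcomes) >= threshold

trace = ["request", "grant", "idle", "request", "idle", "idle", "grant"]
print(monitor(trace))  # True: both requests were granted within 3 steps
```

In a distributed deployment, each node would run such a local monitor and exchange only summary outcomes, which is what keeps the approach scalable.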

OUR TEAM:

Gul Agha

Quantitative Assessment of Access Control in Complex Distributed Systems
Abstract

ABOUT THE PROJECT:

Modern networked systems are large and heterogeneous, and employ a variety of access control mechanisms that are the first line of defense against cyber-attack. These mechanisms include, but are not limited to:

  • router-based dedicated firewalls, such as the Cisco PIX series;
  • host-based firewalls, which could be based in software (such as iptables in Linux, or built-in firewalls in the operating system) or hardware (such as specialized NICs);
  • operating-system-based mechanisms, such as discretionary access control in Linux or Windows, or more sophisticated mechanisms, such as the mandatory access control in NSA's SELinux and similar functionality provided for Windows by the Cisco Security Agent; and
  • middleware-based mechanisms, such as the Java Security Manager, that provide for specification and enforcement of fine-granularity access control policies for Java programs.

This project is addressing the problem of estimating the degree to which the actual security posture given by the configurations of these diverse mechanisms complies with machine-checkable global policy, when (1) the system is too large to admit exhaustive analysis, and (2) we consider the possibility of intruders creating connections through compromised hosts set up as stepping stones. The foundational science we are developing consists of applying the statistical method of importance sampling to the problem of statistically estimating metrics that quantify a system's compliance with global policy, and to the problem of finding the hosts that, if compromised, would have the largest negative impact on accessibility compliance, under our two assumptions. The results of our research will quickly be included in an existing tool we've developed that exhaustively validates compliance. This will transition the technology into practice; furthermore, the insights we gain will have application in other security domains where the challenge is to estimate the number of rare but disastrous states in combinatorially huge state spaces.
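Why importance sampling matters for rare-but-disastrous states can be shown with a toy model (illustrative only; the project's actual metrics and system models are far richer): a stepping-stone chain succeeds only if all 8 hosts on a path are compromised, each independently with probability 0.05, so the event probability is 0.05**8 (about 4e-11). Naive Monte Carlo would essentially never observe it, but sampling from a biased proposal and reweighting by the likelihood ratio estimates it accurately.

```python
import random

def importance_estimate(n_samples, p=0.05, hosts=8, p_bias=0.5, seed=1):
    """Estimate P(all hosts compromised) by sampling each host's
    compromise from a biased proposal (prob p_bias) and correcting
    with the likelihood ratio of true model to proposal."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        weight = 1.0
        all_compromised = True
        for _ in range(hosts):
            c = rng.random() < p_bias            # draw from the proposal
            weight *= (p / p_bias) if c else ((1 - p) / (1 - p_bias))
            all_compromised = all_compromised and c
        if all_compromised:
            total += weight                      # likelihood-ratio weight
    return total / n_samples

p_true = 0.05 ** 8                               # about 3.9e-11
est = importance_estimate(200_000)
print(f"estimate: {est:.2e}, true: {p_true:.2e}")
```

With 200,000 naive samples the expected number of observed events would be about 0.00001; the importance-sampling estimate lands within a few percent of the truth.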

OUR TEAM:

David M. Nicol and William H. Sanders

Protocol Verification: Beyond Reachability Properties
Abstract

ABOUT THE PROJECT:

This project is investigating the existence of decision procedures for two key properties at the heart of observational equivalence, assuming the protocol's algebraic properties satisfy the finite variant property (FVP), which holds for most cryptographic functions. If time permits, extensions relaxing FVP will be developed. The first property is deducibility: determining whether an attacker who has seen a sequence of messages can deduce a given message from that sequence. The second is static equivalence: deciding whether two sequences of messages seen by an attacker are equivalent, in the sense that no test the attacker can perform distinguishes one from the other. Static equivalence is key to proving observational equivalence and trace equivalence. The research aims to develop, and prove correct, inference systems for these two decision procedures under the FVP assumption or some more general condition.
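Deducibility can be illustrated with a toy Dolev-Yao-style check (illustrative only; a real procedure must handle full equational theories under FVP, which this sketch does not): close the observed messages under decomposition (splitting pairs, decrypting with deducible keys), then check whether the target can be built with constructors.

```python
# Terms: atoms are strings; ('pair', a, b) and ('enc', m, k) are
# constructor applications. The attacker can split pairs, decrypt
# with deducible keys, and apply constructors to known terms.

def synth(t, known):
    """Can the attacker construct t from the known set?"""
    if t in known:
        return True
    if isinstance(t, tuple):   # apply a constructor to constructible parts
        return all(synth(arg, known) for arg in t[1:])
    return False

def analyze(msgs):
    """Close the observed messages under decomposition (to fixpoint)."""
    known = set(msgs)
    changed = True
    while changed:
        changed = False
        for t in list(known):
            parts = []
            if isinstance(t, tuple) and t[0] == "pair":
                parts = list(t[1:])
            elif isinstance(t, tuple) and t[0] == "enc":
                m, key = t[1], t[2]
                if synth(key, known):      # decrypt only with a known key
                    parts = [m]
            for p in parts:
                if p not in known:
                    known.add(p)
                    changed = True
    return known

def deducible(target, msgs):
    return synth(target, analyze(msgs))

msgs = [("enc", "secret", "k"), ("pair", "k", "nonce")]
print(deducible("secret", msgs))                      # True: k is exposed
print(deducible("secret", [("enc", "secret", "k")]))  # False: k unknown
```

Static equivalence is strictly harder: it asks not what the attacker can deduce, but whether any attacker-computable test separates two message sequences, which is why it needs its own decision procedure.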

OUR TEAM:

José Meseguer

From Measurements to Security Science: Data-Driven Approach
Abstract

ABOUT THE PROJECT:

In security more than in other computing disciplines, professionals depend heavily on rapid analysis of voluminous streams of data gathered by a combination of network-, file-, and system-level monitors. The data are used both to maintain a constant vigil against attacks and compromises on a target system and to improve the monitoring itself. While the focus of the security engineer is on ensuring operational security, it is our experience that the data are a gold mine of information that can be used to develop greater fundamental insight and hence a stronger scientific basis for building, monitoring, and analyzing future secure systems. The challenge lies in being able to extract the underlying models and develop methods and tools that can be the cornerstone of the next generation of disruptive technologies.

This project is taking an important step in addressing that challenge by developing scientific principles and data-driven formalisms that allow construction of dynamic situation-awareness models that are adaptive to system and environment changes (specifically, malicious attacks and accidental errors). Such models will be able (i) to identify and capture attacker actions at the system and network levels, and hence provide a way to reason about the attack independently of the vulnerabilities exploited; and (ii) to assist in reconfiguring the monitoring system (e.g., placing and dynamically configuring the detectors) to adapt detection capabilities to changes in the underlying infrastructure and to the growing sophistication of attackers. In brief, the continuous measurements and the models will form the basis of what we call execution under probation technologies.

OUR TEAM:

Ravishankar K. Iyer, Zbigniew Kalbarczyk, and Adam Slagell
