Epistemic Models for Security
Lead PI:
Robert Harper
Abstract

Noninterference defines a program to be secure if changes to high-security inputs cannot alter low-security outputs, thereby indirectly stating the epistemic property that no low-security principal acquires knowledge of high-security data. We consider a directly epistemic account of information-flow control, focusing on the knowledge flows engendered by the program's execution. Storage effects are of primary interest, since principals acquire and disclose knowledge from the execution only through these effects. The information-flow properties of the individual effectful actions are characterized using a substructural epistemic logic that accounts for the knowledge transferred through their execution. We prove that a low-security principal never acquires knowledge of a high-security input by executing a well-typed program. Moreover, the epistemic approach facilitates going beyond noninterference to account for authorized declassification: we prove that a low-security principal acquires knowledge of a high-security input only if there is an authorization proof.
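
As a concrete illustration of the property the abstract formalizes, here is a minimal Python sketch (not the project's type-theoretic formalism; the program and input names are hypothetical) that treats noninterference as a testable property of a program:

```python
# A minimal sketch (not the project's formalism) treating noninterference
# as a testable property: varying the high-security input must not change
# the low-security output. Program names here are hypothetical.

def leaky(high: int, low: int) -> int:
    # Insecure: one bit of the high input is visible in the low output.
    return low + (1 if high > 0 else 0)

def constant_flow(high: int, low: int) -> int:
    # Secure: the low output ignores the high input entirely.
    return low * 2

def respects_noninterference(program, highs, lows) -> bool:
    """For each low input, every high input must yield the same low output."""
    for low in lows:
        outputs = {program(high, low) for high in highs}
        if len(outputs) > 1:
            return False  # a change to the high input was observable at low
    return True

if __name__ == "__main__":
    highs, lows = [-1, 0, 1, 42], [0, 1, 2]
    print(respects_noninterference(leaky, highs, lows))          # False
    print(respects_noninterference(constant_flow, highs, lows))  # True
```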

TEAM

PI: Robert Harper

An Investigation of Scientific Principles Involved in Attack-Tolerant Software
Lead PI:
Mladen Vouk
Abstract

High-assurance systems, for which security is especially critical, should be designed to (a) auto-detect attacks (even when correlated); (b) isolate or interfere with the activities of a potential or actual attack; and (c) recover a secure state and continue, or fail safely. Fault-tolerant (FT) systems use forward or backward recovery to continue normal operation despite the presence of hardware or software failures. Similarly, an attack-tolerant (AT) system would recognize security anomalies, possibly identify user “intent”, and effect an appropriate defense and/or isolation. Some of the underlying questions in this context are: How is a security anomaly different from a “normal” anomaly, and how does one reliably recognize it? How does one recognize user intent? How does one deal with security failure-correlation issues? What is the appropriate safe response to potential security anomaly detection? The key hypothesis is that all security attacks produce an anomalous state signature that is detectable at run-time, given enough appropriate system, environment, and application provenance information. If that is true (and we plan to test it), then fault-tolerance technology (existing or newly developed) may be used successfully to prevent or mitigate a security attack. A range of AT technologies will be reviewed, developed, and assessed.
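
To make the hypothesis concrete, here is a minimal Python sketch (with hypothetical features and thresholds, not the project's detector) of a run-time monitor that learns a baseline state signature from known-good operation and flags deviations:

```python
# A minimal sketch, assuming the hypothesis above: an attack perturbs the
# observable system state, so a run-time monitor that learns a baseline
# "state signature" can flag deviations. The feature and threshold are
# hypothetical placeholders, not the project's actual detector.

import statistics

class StateSignatureMonitor:
    def __init__(self, threshold: float = 3.0):
        self.threshold = threshold  # z-score cutoff for "anomalous"
        self.baseline: list[float] = []

    def train(self, normal_samples: list[float]) -> None:
        """Record state measurements taken during known-good operation."""
        self.baseline = list(normal_samples)

    def is_anomalous(self, sample: float) -> bool:
        """Flag a measurement that deviates strongly from the baseline."""
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline) or 1e-9
        return abs(sample - mean) / stdev > self.threshold

if __name__ == "__main__":
    monitor = StateSignatureMonitor()
    monitor.train([100.0, 102.0, 98.0, 101.0, 99.0])  # e.g., syscalls/sec
    print(monitor.is_anomalous(100.5))  # False: within normal variation
    print(monitor.is_anomalous(250.0))  # True: candidate attack signature
```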

TEAM

PI: Mladen Vouk
Student: Da Young Lee

Understanding the Fundamental Limits in Passive Inference of Wireless Channel Characteristics
Lead PI:
Huaiyu Dai
Abstract

It is widely accepted that wireless channels decorrelate quickly over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to wrong hypotheses about the inference capability of a passive adversary and creates a false sense of security, which exposes legitimate systems to severe threats with little awareness. In this project, we focus on establishing correct models of channel correlation in wireless environments of interest, and on properly evaluating the safety distance metric of existing and emerging wireless security mechanisms, as well as of cyber-physical systems employing these mechanisms. Upon successful completion of the project, the expected outcome will allow us to accurately determine key system parameters (e.g., the security zone for secret key establishment from wireless channels) and confidently assess the security assurance of wireless security mechanisms. More importantly, the results will correct the previous misconception of channel decorrelation and help security researchers develop new wireless security mechanisms based on a proven scientific foundation.
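
For context, the half-wavelength rule comes from the classical Jakes/Clarke isotropic-scattering model, under which spatial correlation decays as a zeroth-order Bessel function of antenna separation. Here is a minimal Python sketch of that model (whose assumptions the project argues do not hold in general):

```python
# A minimal sketch of where the half-wavelength rule comes from: under the
# classical Jakes/Clarke isotropic-scattering model, spatial correlation is
# rho(d) = J0(2*pi*d / lambda). The project's point is that this model, and
# hence the rule derived from it, need not hold in real environments.

import numpy as np
from scipy.special import j0

WAVELENGTH = 0.125  # meters; roughly a 2.4 GHz carrier (3e8 / 2.4e9)

def spatial_correlation(distance_m: float) -> float:
    """Channel correlation vs. antenna separation under isotropic scattering."""
    return j0(2 * np.pi * distance_m / WAVELENGTH)

if __name__ == "__main__":
    for frac in (0.1, 0.25, 0.38, 0.5, 1.0):
        rho = spatial_correlation(frac * WAVELENGTH)
        print(f"d = {frac:.2f} lambda: rho = {rho:+.3f}")
    # rho first crosses zero near d = 0.38 lambda; half a wavelength is the
    # conventional rule-of-thumb separation for assuming decorrelation.
```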

TEAM

PIs: Huaiyu Dai, Peng Ning
Student: Xiaofan He

Modeling the Risk of User Behavior on Mobile Devices
Lead PI:
Ben Watson
Abstract

It is already true that the majority of users' computing experience is a mobile one. Unfortunately, that mobile experience is also riskier: users are often multitasking, hurried, or uncomfortable, which leads them to make poor decisions. Our goal is to use mobile sensors to predict when users are distracted in these ways and likely to behave insecurely. We will study this possibility in a series of lab and field experiments.
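
As an illustration of the approach, here is a minimal Python sketch (synthetic data and hypothetical sensor features, not the project's model) of predicting distraction from mobile-sensor readings:

```python
# A minimal sketch, not the project's model: predict "distracted" vs.
# "focused" from hypothetical mobile-sensor features (accelerometer variance
# while moving, mean inter-keystroke time). All data here is synthetic.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training data: columns are [accel_variance, inter_keystroke_sec].
focused = rng.normal([0.2, 0.25], 0.05, size=(50, 2))
distracted = rng.normal([0.8, 0.60], 0.10, size=(50, 2))
X = np.vstack([focused, distracted])
y = np.array([0] * 50 + [1] * 50)  # 1 = distracted

model = LogisticRegression().fit(X, y)

# A new reading with high motion and slow typing: likely distracted, so a
# device might warn before a risky action (e.g., granting a permission).
sample = np.array([[0.7, 0.55]])
print("P(distracted) =", model.predict_proba(sample)[0, 1])
```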

TEAM

PIs: Benjamin Watson, Will Enck, Anne McLaughlin, Michael Rappa

An Adoption Theory of Secure Software Development Tools
Lead PI:
Emerson Murphy-Hill
Abstract

Programmers interact with a variety of tools that help them do their jobs, from "undo" to FindBugs' security warnings to entire development environments. However, programmers typically know about only a small subset of tools that are available, even when many of those tools might be valuable to them. In this project, we investigate how and why software developers find out about -- and don't find out about -- software security tools. The goal of the project is to help developers use more relevant security tools, more often.

TEAM

PI: Emerson Murphy-Hill
Student: Jim Witschey

Low-level Analytics Models of Cognition for Novel Security Proofs
Abstract

A key concern in security is identifying differences between human users and “bot” programs that emulate humans. Users with malicious intent often employ widespread computational attacks to exploit systems and gain control. Conventional detection techniques can be grouped into two broad categories: human observational proofs (HOPs) and human interactive proofs (HIPs). The key distinguishing feature of these techniques is the degree to which human participants are actively engaged with the “proof.” HIPs require explicit action on the part of users to establish their identity (or at least distinguish them from bots). HOPs, on the other hand, are passive: they examine the ways in which users complete the tasks they would normally be completing and look for patterns that are indicative of humans versus bots. Both have significant limitations. HOPs are susceptible to imitation attacks, in which bots carry out scripted actions designed to look like human behavior. HIPs tend to be more secure because they require explicit action from a user to complete a dynamically generated test, but because humans must expend cognitive effort to pass them, HIPs can be disruptive and reduce productivity. We are developing the knowledge and techniques to enable “Human Subtlety Proofs” (HSPs) that blend the stronger security characteristics of HIPs with the unobtrusiveness of HOPs. HSPs will improve security by providing a new avenue for actively securing systems from non-human users.
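
To illustrate the passive end of this spectrum, here is a minimal Python sketch of a HOP-style timing check (a hypothetical illustration, not the project's HSP technique): scripted bots often produce unnaturally regular event timing, while human input varies more:

```python
# A minimal sketch of a passive, HOP-style check (hypothetical, not the
# project's HSP technique): scripted replays often produce unnaturally
# regular event timing, while human input shows higher variability.

import statistics

def looks_scripted(inter_event_times: list[float],
                   min_cv: float = 0.15) -> bool:
    """Flag input whose timing is too regular to be plausibly human.

    Uses the coefficient of variation (stdev / mean) of inter-event
    intervals; the 0.15 cutoff is an illustrative placeholder.
    """
    mean = statistics.mean(inter_event_times)
    cv = statistics.stdev(inter_event_times) / mean
    return cv < min_cv

if __name__ == "__main__":
    human = [0.21, 0.35, 0.18, 0.42, 0.27, 0.30]  # irregular keystrokes
    bot = [0.250, 0.251, 0.249, 0.250, 0.250]     # metronomic replay
    print(looks_scripted(human))  # False
    print(looks_scripted(bot))    # True
```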

TEAM

PIs: David Roberts, Robert St. Amant
Students: Titus Barik, Arpan Chakraborty, Brent Harrison
