Over the past decade, language-based security mechanisms—such as type systems, model checkers, symbolic executors, and other program analyses—have been successfully used to uncover or prevent many important (exploitable) software vulnerabilities, such as buffer overruns, side channels, unchecked inputs (leading to code injection), and race conditions, among others. But despite significant advances, current work makes two unrealistic assumptions: (1) the analyzed code comprises a complete program (as opposed to a framework or set of components), and (2) the software is written in a single programming language. These assumptions ignore the reality of modern software, which is composed of large sets of interacting components constructed in several programming languages that provide varying degrees of assurance that the components are well-behaved. In this project, we aim to address these limitations by developing new static-analysis techniques based on software contracts, which provide a way to extend the analysis of components to reason about security of an entire heterogeneous system.
Hyperproperties [Clarkson and Schneider 2010] can express security policies, such as secure information flow and service level agreements, which the standard kinds of trace properties used in program verification cannot.
Our objective is to develop verification methodologies for hyperproperties.
We intend to apply those methodologies to the construction of secure systems from components with known security properties, thereby addressing the problem of compositional security.
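To make the distinction concrete: noninterference is a hyperproperty because it relates *pairs* of executions rather than judging each trace alone. The following is a hypothetical, brute-force sketch of that two-trace view (self-composition over a finite input space); the `leaky` and `secure` programs are invented toy examples, not systems from this project.

```python
# Hypothetical sketch: noninterference as a 2-trace hyperproperty.
# A trace property can be checked one run at a time; noninterference
# must compare every pair of runs that agree on low-security inputs.
from itertools import product

def leaky(high, low):
    # Insecure toy program: the low output depends on the high input.
    return low + (1 if high > 0 else 0)

def secure(high, low):
    # Secure toy program: the low output ignores the high input.
    return low * 2

def noninterferent(program, highs, lows):
    """Check that low-equivalent input pairs yield equal low outputs."""
    for (h1, h2), l in product(product(highs, highs), lows):
        if program(h1, l) != program(h2, l):
            return False
    return True

print(noninterferent(leaky, [0, 1], [0, 1]))   # False: high leaks to low
print(noninterferent(secure, [0, 1], [0, 1]))  # True
```

Such exhaustive pairing only works on toy input spaces; the point of the verification methodologies above is to reason about these pairs symbolically.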
In highly configurable systems, the configuration space is too big to (re-)certify every configuration in isolation. In this project, we combine software analysis with network analysis to detect which configuration options interact and which have only local effects. Instead of analyzing a system such as Linux with SELinux for every combination of configuration settings one by one (more than 10^2000 combinations, even considering compile-time configuration only), we analyze the effect of each configuration option once for the entire configuration space. The analysis will guide us toward designs that separate interacting configuration options into a core system and isolate orthogonal and less trusted configuration options from this core.
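As a hypothetical illustration of what "interaction" means here: two options interact when toggling one changes the effect of toggling the other. The toy `behavior` function and brute-force `interacts` check below are invented for illustration only; the project's analyses work statically on dependence graphs, not by enumerating configurations.

```python
# Hypothetical sketch of interaction detection between configuration
# options. Real systems are analyzed once over the whole configuration
# space (e.g., on dependence graphs), not by brute force as here.
def behavior(opts):
    # Toy configurable "system": options a and b interact; c is local.
    out = 0
    if opts["a"] and opts["b"]:   # joint effect of a and b
        out += 10
    if opts["c"]:                 # local, orthogonal effect
        out += 1
    return out

def interacts(x, y, options):
    """Does toggling x have an effect that depends on y's setting?"""
    base = {o: False for o in options}
    for y_val in (False, True):
        cfg = dict(base, **{y: y_val})
        effect = behavior(dict(cfg, **{x: True})) - behavior(cfg)
        if y_val is False:
            first = effect
        elif effect != first:
            return True
    return False

opts = ["a", "b", "c"]
print(interacts("a", "b", opts))  # True: a's effect depends on b
print(interacts("a", "c", opts))  # False: a and c are orthogonal
```

Options like `c`, whose effects are independent of all others, are exactly the ones a design could isolate from the trusted core.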
HARD PROBLEM(S) ADDRESSED
Scalability and composability: Isolating configuration options or controlling their interactions will lead us toward composable analysis with regard to configuration options.
Predictive security metrics: To what degree can configuration-related complexity indicate implementations that are more prone to vulnerabilities, or in which vulnerabilities have more severe consequences?
Impact on Science of Security
We complement the Science of Security endeavor with a focus on the often overlooked problem of configuration options in systems. Whereas current approaches work on specific snapshots and require expensive recertification, our approaches extend underlying mathematical models (data-dependence graphs) with configuration knowledge and will thus scale analyses and reduce the need for repeating them. Furthermore, we expect that configuration complexity and configuration-specific program dependence are suitable empirical predictors of the likelihood and severity of vulnerabilities in complex systems. Finally, the technical and empirical results of our work will also bring new approaches to the field of social network analysis that are applicable to the Science of Security far beyond the scope of the current Lablet.
PUBLICATIONS
1. Kaestner, Christian & Pfeffer, Juergen (2014). Limiting Recertification in Highly Configurable Systems: Analyzing Interactions and Isolation among Configuration Options. HotSoS 2014: 2014 Symposium and Bootcamp on the Science of Security, April 8-9, Raleigh, NC.
ACCOMPLISHMENT HIGHLIGHTS
- Short paper (poster) presentation at HotSoS 2014
OUR TEAM
PI: Juergen Pfeffer
Co-PI: Christian Kaestner
Our research focuses on creating the scientific foundations to support model-based run-time diagnosis and repair of security attacks. Specifically, our research develops models that (a) scale gracefully with the size of the system and have appropriate real-time characteristics for run-time use, and (b) support composition through multi-model analysis. Network models will complement architectural models in two ways: (a) to characterize the organizational context of a system, and (b) to detect anomalies through network representations of architectural behavior. The former can be particularly effective, for example, in detecting and preventing insider attacks, which are often linked to organizational issues. The latter will lead to the creation of a new set of architectural metrics (e.g., based on network measures) to rapidly detect anomalous behaviors.
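One simple hypothetical instance of a network representation of architectural behavior: treat the architecture as an allowed-communication graph and flag observed run-time edges that the model does not permit. The component names and topology below are invented for illustration, not drawn from this project.

```python
# Hypothetical sketch: flag components whose observed run-time
# communications deviate from the connections permitted by an
# architectural model (a simple network-based architectural check).
EXPECTED = {                      # architecture model: allowed edges
    "web": {"app"},
    "app": {"db", "cache"},
    "cache": set(),
    "db": set(),
}

def anomalous_edges(observed):
    """Return observed communications absent from the architecture."""
    return [(src, dst) for src, dst in observed
            if dst not in EXPECTED.get(src, set())]

runtime = [("web", "app"), ("app", "db"), ("app", "db"), ("db", "web")]
print(anomalous_edges(runtime))   # db calling web is not in the model
```

Richer network measures (centrality shifts, unexpected community structure) would extend this basic membership check into the architectural metrics described above.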
PI: Juergen Pfeffer
Co-PIs: David Garlan, Bradley Schmerl
Hard Problem(s) Addressed
- Composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses.
- Scalability to large complex distributed systems using architectural models.
- Resilient architectures through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks.
- Predictive security metrics by adapting social network-based metrics to the problem of architecture-level anomaly detection.
Impact on Science of Security
We address composability through multiple semantic models (here, architectural, organizational, and behavioral), which provide separation of concerns, while supporting synergistic benefits through integrated analyses. Our work is related to the thrust of resilience, through the use of adaptive models that can be used at run-time to predict, detect and repair security attacks. Finally, our work also bears on the topic of security metrics, since we will be adapting social network-based metrics to the problem of architecture-level anomaly detection.
Noninterference defines a program to be secure if changes to high-security inputs cannot alter low-security outputs, thereby indirectly stating the epistemic property that no low-security principal acquires knowledge of high-security data. We consider a directly epistemic account of information-flow control focusing on the knowledge flows engendered by the program's execution. Storage effects are of primary interest, since principals acquire and disclose knowledge from the execution only through these effects. The information-flow properties of the individual effectful actions are characterized using a substructural epistemic logic that accounts for the knowledge transferred through their execution. We prove that a low-security principal never acquires knowledge of a high-security input by executing a well-typed program. Moreover, the epistemic approach facilitates going beyond noninterference to account for authorized declassification. We prove that a low-security principal acquires knowledge of a high-security input only if there is an authorization proof.
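For readers unfamiliar with flow typing, here is a minimal sketch of the classic lattice-based check that such type systems refine. This is *not* the project's substructural epistemic logic; it only shows the basic High-to-Low rejection that the richer system generalizes, on an invented toy expression language.

```python
# Minimal sketch of a lattice-based information-flow check:
# an assignment is allowed only if the label of the right-hand side
# flows to (is at most) the label of the assigned variable.
LOW, HIGH = 0, 1

def label_of(expr, env):
    """Least upper bound of the labels of all variables in expr."""
    if isinstance(expr, str):          # a variable name
        return env[expr]
    op, left, right = expr             # binary expr, e.g. ("+", "h", "l")
    return max(label_of(left, env), label_of(right, env))

def check_assign(var, expr, env):
    """Allow var := expr only if expr's label flows to var's label."""
    return label_of(expr, env) <= env[var]

env = {"l": LOW, "h": HIGH}
print(check_assign("l", ("+", "l", "l"), env))  # True:  Low := Low
print(check_assign("l", ("+", "h", "l"), env))  # False: High flows to Low
print(check_assign("h", ("+", "h", "l"), env))  # True:  upward flow is fine
```

The epistemic account replaces this purely syntactic label check with proofs about what knowledge each storage effect actually transfers, which is what makes authorized declassification expressible.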
PI: Robert Harper
Science of Security Lablet BAA Kickoff Meeting
28 February 2014 | National Security Agency – Laurel, Maryland
CMU SoS 2014 Quarterly Lablet PI Meeting
Tuesday, July 1, 2014 - 8:30am - 5:30pm
Wednesday, July 2, 2014 - 8:30am - 3:00pm
Full agenda is attached.
High-assurance systems, for which security is especially critical, should be designed to a) auto-detect attacks (even when correlated); b) isolate or interfere with the activities of a potential or actual attack; and c) recover a secure state and continue, or fail safely. Fault-tolerant (FT) systems use forward or backward recovery to continue normal operation despite the presence of hardware or software failures. Similarly, an attack-tolerant (AT) system would recognize security anomalies, possibly identify user “intent”, and effect an appropriate defense and/or isolation. Some of the underlying questions in this context are: How is a security anomaly different from a “normal” anomaly, and how does one reliably recognize it? How does one recognize user intent? How does one deal with security failure-correlation issues? What is the appropriate safe response to detection of a potential security anomaly? The key hypothesis is that all security attacks produce an anomalous state signature that is detectable at run-time, given enough appropriate system, environment, and application provenance information. If that is true (and we plan to test it), then fault-tolerance technology (existing or newly developed) may be used with success to prevent or mitigate a security attack. A range of AT technologies will be reviewed, developed, and assessed.
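As a deliberately simple stand-in for "detecting an anomalous state signature at run-time", the sketch below applies a z-score test to one invented metric stream. A real AT system would fuse system, environment, and provenance information and would still need to separate security anomalies from ordinary faults; nothing here is specific to this project's detectors.

```python
# Hedged sketch of run-time anomaly detection on a state metric:
# flag readings far from a baseline distribution. Illustrative only;
# the baseline and live values below are invented.
from statistics import mean, stdev

def detect_anomalies(baseline, live, threshold=3.0):
    """Flag live readings more than `threshold` std devs from baseline."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [x for x in live if abs(x - mu) > threshold * sigma]

normal = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0]   # baseline observations
observed = [10.1, 9.9, 14.5, 10.0]            # 14.5 is the outlier
print(detect_anomalies(normal, observed))      # flags only 14.5
```

The open questions above begin exactly where this sketch stops: deciding whether a flagged anomaly reflects an attack, a fault, or benign drift, and choosing a safe response.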
Team
PI: Mladen Vouk
Student: Da Young Lee
It is widely accepted that wireless channels decorrelate quickly over space, and half a wavelength is the key distance metric used in existing wireless physical-layer security mechanisms for security assurance. We believe that this channel correlation model is incorrect in general: it leads to wrong hypotheses about the inference capability of a passive adversary and creates a false sense of security, which exposes legitimate systems to severe threats with little awareness. In this project, we focus on establishing a correct model of channel correlation in wireless environments of interest, and on properly evaluating the safety-distance metric of existing and emerging wireless security mechanisms, as well as of cyber-physical systems employing these security mechanisms. Upon successful completion of the project, the expected outcome will allow us to accurately determine key system parameters (e.g., the security zone for secret-key establishment from wireless channels) and confidently assess the security assurance of wireless security mechanisms. More importantly, the results will correct the previous misconception of channel de-correlation and help security researchers develop new wireless security mechanisms on a proven scientific foundation.
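For context, the half-wavelength rule of thumb comes from the classical Jakes/Clarke isotropic-scattering model, under which spatial correlation is J0(2*pi*d/lambda). The hedged sketch below evaluates that textbook model only; the project's point is that real environments can deviate from it, and even under the model itself the correlation at d = lambda/2 is nonzero and the decay is oscillatory rather than monotone.

```python
# Spatial correlation under the classical Jakes/Clarke model:
# rho(d) = J0(2*pi*d/lambda). Illustrative only; this project argues
# the model can be wrong in general wireless environments.
import math

def bessel_j0(x, terms=30):
    """Power-series Bessel J0(x); adequate for the small x used here."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
    return total

def jakes_correlation(d_over_lambda):
    """Correlation between channels at two points d_over_lambda apart."""
    return bessel_j0(2 * math.pi * d_over_lambda)

for d in (0.1, 0.25, 0.38, 0.5, 1.0):
    print(f"d = {d:4.2f} lambda   rho = {jakes_correlation(d):+.3f}")
```

Running this shows the correlation crossing zero near 0.38 lambda and rebounding to a magnitude of roughly 0.3 at half a wavelength, which is one reason the half-wavelength "safety distance" deserves the re-examination proposed here.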
TEAM
PIs: Huaiyu Dai, Peng Ning
Student: Xiaofan He