Trust from Explicit Evidence: Integrating Digital Signatures and Formal Proofs
Lead PI:
Frank Pfenning
OUR TEAM:

Frank Pfenning

Using Crowdsourcing to Analyze and Summarize the Security of Mobile Applications
OUR TEAM:

Norman Sadeh

Systematic Testing of Distributed and Multi-Threaded Systems at Scale
Lead PI:
Garth Gibson
OUR TEAM:

Garth Gibson

USE: User Security Behavior
Abstract

Our ability to design appropriate information security mechanisms and sound security policies depends on our understanding of how end-users actually behave. To improve this understanding, we will establish a large panel of end-users whose complete online behavior will be captured, monitored, and analyzed over an extended period of time. Establishing such a panel will require the design of sound measurement methodologies, while paying particular attention to the protection of end-users' confidential information. Once established, our panel will offer an unprecedented window on real-time, real-life security and privacy behavior "in the wild." The panel will combine tracking, experimental, and survey data, and will provide a foundation on which sound models of both user and attacker behavior can rest. These models will lead to the scientific design of intervention policies and technical countermeasures against security threats. In other words, in addition to its academic contributions, this research will also lead to actionable recommendations for policy makers and firms.

Architecture-based Self Securing Systems
Lead PI:
David Garlan
Abstract

An important emerging trend in the engineering of complex software-based systems is the ability to incorporate self-adaptive capabilities. Such systems typically include a set of monitoring mechanisms that allow a control layer to observe the running behavior of a target system and its environment, and then repair the system when problems are detected. Substantial results in applying these concepts have emerged over the past decade, addressing quality dimensions such as reliability, performance, and database optimization. In particular, at Carnegie Mellon we have shown how architectural models, updated at runtime, can form the basis for effective and scalable problem detection and correction. However, to date relatively little research has been done to apply these techniques to support detection of security-related problems and identification of remedial actions. In this project we propose to develop scientific foundations, as well as practical tools and techniques, to support self-securing systems, focusing specifically on questions of scalable assurance.
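As a concrete illustration of the monitor-detect-repair loop described above, the following sketch shows a control layer observing a runtime architectural model and selecting a repair when an architectural constraint is violated. The component names, properties, thresholds, and repair strategies are invented for illustration; this is not the Rainbow platform's API.

```python
# Minimal sketch of architecture-based self-adaptation (hypothetical names).

class ArchitecturalModel:
    """Runtime model of the target system: components and their properties."""
    def __init__(self):
        self.components = {"server": {"latency_ms": 40, "suspicious_conns": 0}}

    def update(self, component, observations):
        self.components[component].update(observations)

    def violations(self):
        """Check architectural constraints; return detected problems."""
        problems = []
        for name, props in self.components.items():
            if props["suspicious_conns"] > 10:
                problems.append((name, "possible-intrusion"))
            if props["latency_ms"] > 200:
                problems.append((name, "latency-violation"))
        return problems


def adaptation_step(model, observations, repair_strategies):
    """One monitor-detect-repair cycle of the control layer."""
    for component, obs in observations.items():
        model.update(component, obs)
    return [repair_strategies[problem](component)
            for component, problem in model.violations()]


# Hypothetical repair strategies, one per detected problem type.
strategies = {
    "possible-intrusion": lambda c: f"quarantine({c})",
    "latency-violation": lambda c: f"add_replica({c})",
}

model = ArchitecturalModel()
actions = adaptation_step(model, {"server": {"suspicious_conns": 15}}, strategies)
print(actions)  # ['quarantine(server)']
```

The point of the architectural model is that both detection (constraint checking) and repair (strategy selection) operate on the model rather than on the running system's internals, which is what makes the approach scalable.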

OUR QUALIFICATIONS:

Prof. David Garlan and Dr. Bradley Schmerl have been working in the area of architecture-based self-adaptation for over a decade. They have developed both foundations and tools – specifically, a platform called “Rainbow” – that are considered seminal in the area of architecture-based adaptation. Ivan Ruchkin is a Ph.D. candidate working under the direction of Prof. Garlan in the area of formal modeling of dynamic changes in systems from an architectural perspective. His work will support assurances that operations that change a system at run-time are sound and do not violate the properties and rules defined by the architecture.

OUR TEAM:

PI: Prof. David Garlan (Faculty)

Staff: Dr. Bradley Schmerl (Research Faculty)

Students: Ivan Ruchkin (Ph.D. Student), new student to be recruited.

David Garlan

David Garlan is a Professor of Computer Science in the School of Computer Science at Carnegie Mellon University, where he is a member of the Institute for Software Research and the Computer Science Department. He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990. His research interests include software architecture, self-adaptive systems, formal methods, and cyber-physical systems. He is recognized as one of the founders of the field of software architecture and, in particular, the formal representation and analysis of architectural designs. He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline" and "Documenting Software Architectures: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research Award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.” In 2016 he received the Allen Newell Award for Research Excellence. In 2017 he received the IEEE TCSE Distinguished Education Award and the Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and ACM.

Learned Resiliency: Secure Multi-Level Systems
Lead PI:
Kathleen Carley
Abstract

The objective of this project is to develop a theory of system resiliency for complex adaptive socio-technical systems. A secondary objective is to develop the modeling framework and associated metrics for examining the resiliency of complex socio-technical systems in the face of various cyber and non-cyber attacks, such that the methodology can be used to support both basic simulation-based experimentation and the assessment of actual socio-technical systems.
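As a toy illustration of the kind of metric such a framework might include, the sketch below measures resiliency as the fraction of agent pairs that can still communicate after an attack removes nodes from a socio-technical network. The network, the attack, and the metric itself are invented for illustration and are not taken from the project.

```python
# Hypothetical resiliency metric: retained pairwise connectivity after an
# attack removes nodes from a (tiny, invented) socio-technical network.

from itertools import combinations

def reachable_pairs(edges, nodes):
    """Count unordered node pairs connected by some path (union-find)."""
    parent = {n: n for n in nodes}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return sum(1 for a, b in combinations(nodes, 2) if find(a) == find(b))

def resiliency(nodes, edges, attacked):
    """Connectivity retained after removing attacked nodes, in [0, 1]."""
    baseline = reachable_pairs(edges, nodes)
    survivors = [n for n in nodes if n not in attacked]
    surviving_edges = [(a, b) for a, b in edges
                       if a not in attacked and b not in attacked]
    return reachable_pairs(surviving_edges, survivors) / baseline

nodes = ["a", "b", "c", "d", "e"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]  # a chain
print(resiliency(nodes, edges, attacked={"c"}))  # 0.2: only a-b and d-e remain
```

In a simulation-based experiment, a metric of this shape can be evaluated before and after each simulated attack to compare the resiliency of alternative system designs.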

OUR TEAM:

Professor Kathleen M. Carley

Geoffrey Morgan (Student)

Mike Kowalchuk (Research Staff)

Kathleen Carley

Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research in the School of Computer Science at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff. Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University, and her undergraduate degrees in Economics and Political Science from MIT. Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations. She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks and various multi-agent simulation systems. The infrastructure tools include ORA, AutoMap, and SmartCard. ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks; its results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report. AutoMap is a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S.
Carley’s simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions. She is a founding and current editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.

Security Reasoning for Distributed Systems with Uncertainties
Abstract
Phenomena like Stuxnet make apparent to the public what experts knew long ago: security is not an isolated question of securing a single door against lockpicking or securing a single computer against a single hacker trying to gain access via a single network activity. Because the strength of a security system is determined by its weakest link, security is much more holistic and affects more and more elements of a system design. Most systems are not understood properly by simplistic finite-state abstractions, such as yes/no information about whether a node in a (sufficiently small) finite network has been compromised or not. Stuxnet, for example, is reported to have involved a sophisticated interaction of control effects and sensor modifications, and even to have exhibited hidden long-term effects on the physical world by changing the behavior of programmable logic controllers (PLCs). Moreover, security-relevant systems of today are more often than not characterized by distributed setups, both in the system and in the attack. The security analyst, furthermore, faces uncertainties that aggregate to paralytic "zero" knowledge, unless the analyst takes a probabilistic view and quantitatively relates the relative likelihoods of symptoms and explanations via partial observations and incomplete prior knowledge. The scale and complexity of any affected system makes the analysis hard. But, more crucially, crafting and tuning a new security analysis over and over again for each new particular application scenario does not scale across systems.

Research

We propose to address the scale problem in security analysis by developing representation and reasoning techniques that support higher-level structure and that enable the security community to factor out common core reasoning principles from the particular elements, rules, and data facts that are specific to the application at hand. This principle, the separation of the reasoning engine from the problem specification, has been pursued with great success in satisfiability modulo theories (SMT) solving. Probabilistic reasoning is powerful for many specific domains, but it lacks full-fledged extensions to scalable higher-level representations that are truly first-order. Based on our preliminary results, we propose to develop first-order probabilistic programs and study how they can represent security analysis questions for systems with both distributed aspects and quantitative uncertainty. We propose to study instance-based methods for reasoning about first-order probabilistic programs. Instance-based methods enjoy a good trade-off between generality and efficiency: they can leverage classical advances in probabilistic reasoning for finite-dimensional representations and lift them to the full expressive and descriptive power of first-order representations. With this approach, we achieve inter-system scaling by decoupling the generics from the specifics, and we hope to improve intra-system scaling by combining more powerful reasoning techniques from classically disjoint domains.
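To make the instance-based idea concrete, the sketch below grounds one first-order probabilistic rule over a small host network and answers a security query by enumerating the grounded Bernoulli variables. The model, rule, and probabilities are entirely hypothetical, and exhaustive enumeration is only the brute-force baseline that lifted, instance-based methods aim to improve on.

```python
# Hypothetical first-order probabilistic model of compromise propagation:
#   attack(X)   ~ Bernoulli(0.1)  for every host X
#   spread(X,Y) ~ Bernoulli(0.4)  for every network link (X, Y)
#   compromised(Y) holds if attack(Y), or compromised(X) and spread(X,Y).
# The rule is grounded over a concrete domain; the query is answered by
# enumerating all assignments to the grounded random variables.

from itertools import product

hosts = ["a", "b", "c"]
links = [("a", "b"), ("b", "c")]
P_ATTACK, P_SPREAD = 0.1, 0.4

def compromised_set(attack, spread):
    """Deterministic closure of the grounded propagation rule."""
    comp = {h for h in hosts if attack[h]}
    changed = True
    while changed:
        changed = False
        for (x, y) in links:
            if x in comp and spread[(x, y)] and y not in comp:
                comp.add(y)
                changed = True
    return comp

def query(target, evidence):
    """P(compromised(target) | compromised(e) for all e in evidence)."""
    num = den = 0.0
    n_atoms = len(hosts) + len(links)  # one Bernoulli per ground atom
    for bits in product([True, False], repeat=n_atoms):
        attack = dict(zip(hosts, bits[:len(hosts)]))
        spread = dict(zip(links, bits[len(hosts):]))
        weight = 1.0
        for h in hosts:
            weight *= P_ATTACK if attack[h] else 1 - P_ATTACK
        for l in links:
            weight *= P_SPREAD if spread[l] else 1 - P_SPREAD
        comp = compromised_set(attack, spread)
        if all(e in comp for e in evidence):
            den += weight
            if target in comp:
                num += weight
    return num / den

print(round(query("c", evidence={"a"}), 3))  # 0.266
```

Observing that host "a" is compromised raises the posterior probability that "c" is compromised above its prior, which is the kind of quantitative, partial-observation reasoning the abstract calls for; the enumeration is exponential in the number of ground atoms, which is exactly the scaling problem lifted instance-based methods address.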

Relevance

This project is of relevance to the security community since, if successful, it would provide more general and more flexible off-the-shelf solutions that can be used to address security analysis questions for particular systems. To simplify the design of problem-specific computer-aided security analysis procedures, this project pursues a separation of the problem description from the techniques for reasoning about it. At the same time, it increases representational and computational power to scale to systems with uncertainty and distributed effects.

Impact

This project has the potential to help solve security analysis questions that scale to distributed systems and the presence of uncertainty. Both aspects are central in security challenges like Stuxnet. Because each security analysis question is different, it is more economically feasible to assemble particular security analyses from suitable reasoning components. This project addresses one such component with a good trade-off between generality and efficiency.

OUR TEAM:

The project team includes Andre Platzer, an assistant professor in the Computer Science Department at Carnegie Mellon University and an expert in the verification and analysis of hybrid, distributed, and stochastic dynamic systems, including cyber-physical systems. The team further includes Erik P. Zawadzki, a fourth-year graduate student in the Computer Science Department at Carnegie Mellon University, who is developing reasoning techniques for first-order mixed-integer linear programs (MILPs) and fast propositional solvers for probabilistic model counting.

 

Secure Composition of Systems and Policies
Lead PI:
Anupam Datta
Abstract

Compositional security is a recognized central scientific challenge for trustworthy computing. Contemporary systems are built up from smaller components. However, even if each component is secure in isolation, the composed system may not achieve the desired end-to-end security property: an adversary may exploit complex interactions between components to compromise security. Such attacks have shown up in the wild in many different settings, including web browsers and infrastructure, network protocols and infrastructure, and application and systems software. While there has been progress on understanding secure composition in some settings, such as information flow control for non-interference-style properties, cryptographic protocols, and certain classes of software systems, a number of important scientific questions pertaining to secure composition remain open. The broad goal of this project is to improve our scientific understanding of secure composition.

This project will focus on the following research directions:

Compositional Reasoning Principles for Higher-order Code. We propose to develop sound secure composition principles for trusted systems that interact with adversaries who have the capability to supply executable code to, and modify code of, the trusted system. To tackle this problem, we will build on prior work by Datta and collaborators on sound composition principles for trusted systems that interact with adversaries that can supply data (but not code).

Theoretical Categorization of Compositionality. We will generalize from the insights acquired during the development of compositional reasoning principles, and from prior technical work in the computer security community on the challenges of composing certain specific classes of security policies, to arrive at semantic characterizations of the classes of security policies that are and are not amenable to compositional analysis. We plan to start from the security policy algebra in the prior work by Pincus and Wing, and expand the theory to larger classes of policy compositions.

Correspondence Between High-level Policies and Low-level Properties. We will develop a technical connection, a correspondence theorem, between high-level security policies (expressed either in the policy algebra or a logic) and lower-level security properties expressed in our program logic. This result will connect our two lines of work and might also be useful in explaining failures of security policy composition in terms of failures of environmental assumptions.

Case Studies. We will conduct a compositional analysis of a class of systems that combine trusted computing technology and memory protection mechanisms to provide strong security guarantees against powerful adversaries. We will also revisit the real-world examples in the Pincus-Wing paper using the new formalism.
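A minimal toy example of the phenomenon motivating this project: two components, each satisfying a plausible local security policy in isolation, whose composition nevertheless leaks a secret through their interaction. The components, policies, and channel below are invented purely for illustration.

```python
# Toy composition failure: each component obeys its local policy, yet the
# composed system leaks the secret. All names here are hypothetical.

SECRET = 7  # a small secret integer

def component_a(secret, channel):
    """Policy A: never output the secret value itself; writes only 0s and 1s."""
    for bit in format(secret, "04b"):   # encodes the secret bit by bit
        channel.append(int(bit))        # each individual write is just 0 or 1

def component_b(channel, public_output):
    """Policy B: never reads the secret store; computes only from the channel."""
    value = 0
    for bit in channel:                 # decodes what A encoded
        value = value * 2 + bit
    public_output.append(value)

channel, public_output = [], []
component_a(SECRET, channel)            # locally: no single write equals 7
component_b(channel, public_output)     # locally: never touches SECRET
print(public_output)  # [7] -- the composed system reveals the secret
```

Each component's local policy holds (A never writes the value 7; B never reads the secret store), yet the adversary-visible output of the composition is exactly the secret. Characterizing which policy classes are closed under such composition is the goal of the theoretical categorization above.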

Anupam Datta
Anupam Datta is an Associate Professor at Carnegie Mellon University where he holds a joint appointment in the Computer Science and Electrical and Computer Engineering Departments. His research focuses on the scientific foundations of security and privacy. Datta’s work has led to new principles for securely composing cryptographic protocols and software systems; applications of these principles have influenced several IEEE and IETF standards. His work on privacy protection has led to formalizations of privacy as contextual integrity and purpose restrictions on information use; accountability mechanisms for privacy protection; and their applications in healthcare and Web privacy. Datta has authored a book and over 40 other publications on these topics. He serves on the Steering Committee and as the 2013-14 Program Co-Chair of the IEEE Computer Security Foundations Symposium. Datta obtained Ph.D. (2005) and M.S. (2002) degrees from Stanford University and a B.Tech. (2000) from IIT Kharagpur, all in Computer Science.