Architecture-based Self Securing Systems
Lead PI:
David Garlan
Abstract

An important emerging trend in the engineering of complex software-based systems is the ability to incorporate self-adaptive capabilities. Such systems typically include a set of monitoring mechanisms that allow a control layer to observe the running behavior of a target system and its environment, and then repair the system when problems are detected. Substantial results in applying these concepts have emerged over the past decade, addressing quality dimensions such as reliability, performance, and database optimization. In particular, at Carnegie Mellon we have shown how architectural models, updated at runtime, can form the basis for effective and scalable problem detection and correction. However, to date relatively little research has been done to apply these techniques to the detection of security-related problems and the identification of remedial actions. In this project we propose to develop scientific foundations, as well as practical tools and techniques, to support self-securing systems, focusing specifically on questions of scalable assurance.
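The monitor-detect-repair control loop described above can be sketched in a few lines. The following is a hypothetical minimal illustration, not the Rainbow implementation; all names, properties, and the repair tactic are invented for the example:

```python
# Illustrative sketch of an architecture-based adaptation loop: a control
# layer observes properties kept in a runtime architectural model and
# triggers a repair tactic when an architectural constraint is violated.
# Names and numbers are invented; this is not the Rainbow platform's API.

class ArchModel:
    """Runtime architectural model: components and their observed properties."""
    def __init__(self):
        self.components = {"server": {"latency_ms": 40, "replicas": 1}}

    def update(self, component, prop, value):
        """Monitoring mechanisms push observations into the model."""
        self.components[component][prop] = value

def detect_violations(model, max_latency_ms=100):
    """Return components whose observed latency violates the constraint."""
    return [name for name, props in model.components.items()
            if props["latency_ms"] > max_latency_ms]

def repair(model, component):
    """A simple repair tactic: add a replica and assume latency halves."""
    props = model.components[component]
    props["replicas"] += 1
    props["latency_ms"] //= 2

def control_loop_step(model):
    """One monitor-detect-repair iteration; returns the repaired components."""
    violating = detect_violations(model)
    for component in violating:
        repair(model, component)
    return violating

model = ArchModel()
model.update("server", "latency_ms", 240)   # monitoring observes a spike
repaired = control_loop_step(model)
print(repaired, model.components["server"])
```

The same loop structure carries over to security: the detection step would check security-related architectural constraints, and the repair step would select a remedial action.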

OUR QUALIFICATIONS:

Prof. David Garlan and Dr. Bradley Schmerl have been working in the area of architecture-based self-adaptation for over a decade. They have developed both foundations and tools – specifically, a platform called “Rainbow” – that are considered seminal work in architecture-based adaptation. Ivan Ruchkin is a Ph.D. candidate working under the direction of Prof. Garlan in the area of formal modeling of dynamic changes in systems from an architectural perspective. His work will support assurances that operations that change a system at run-time are sound and do not violate the properties and rules defined by the architecture.

OUR TEAM:

PI: Prof. David Garlan (Faculty),

Staff: Dr. Bradley Schmerl (Research Faculty)

Students: Ivan Ruchkin (Ph.D. Student), new student to be recruited.

David Garlan

David Garlan is a Professor in the School of Computer Science at Carnegie Mellon University. His research interests include:

  • software architecture
  • self-adaptive systems
  • formal methods
  • cyber-physical systems

Dr. Garlan is a member of the Institute for Software Research and Computer Science Department in the School of Computer Science.

He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990. He is recognized as one of the founders of the field of software architecture and, in particular, of the formal representation and analysis of architectural designs. He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline" and "Documenting Software Architectures: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research Award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.” In 2016 he received the Allen Newell Award for Research Excellence. In 2017 he received the IEEE TCSE Distinguished Education Award and the Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and ACM.

Learned Resiliency: Secure Multi-Level Systems
Lead PI:
Kathleen Carley
Abstract

The objective of this project is to develop a theory of system resiliency for complex adaptive socio-technical systems.  A secondary objective is to develop the modeling framework and associated metrics for examining the resiliency of complex socio-technical systems in the face of various cyber and non-cyber attacks, such that the methodology can be used to support both basic level simulation based experimentation and assessment of actual socio-technical systems.

OUR TEAM:

Professor Kathleen M. Carley

Geoffrey Morgan (Student)

Mike Kowalchuk (Research Staff)

Kathleen Carley

Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research, School of Computer Science, at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff. Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University and her undergraduate degrees in Economics and Political Science from MIT. Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations. She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks and various multi-agent simulation systems. The infrastructure tools include ORA, AutoMap, and SmartCard. ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks; its results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report. AutoMap is a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S.
Carley’s simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions. She is the current and a founding editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.

Security Reasoning for Distributed Systems with Uncertainties
Abstract
Phenomena like Stuxnet make apparent to the public what experts have long known: security is not an isolated question of securing a single door against lockpicking or a single computer against a single hacker trying to gain access via a single network activity. Because the strength of a security system is determined by its weakest link, security is a holistic concern that affects more and more elements of a system design. Most systems are not captured properly by simplistic finite-state abstractions, such as yes/no information about whether a node in a (sufficiently small) finite network has been compromised. Stuxnet, for example, is reported to have involved a sophisticated interaction of control effects and sensor modifications, and even to have had hidden long-term effects on the physical world by changing the behavior of programmable logic controllers (PLCs). Moreover, security-relevant systems today are more often than not characterized by distributed setups, both in the system and in the attack. The security analyst, furthermore, faces uncertainties that aggregate to paralytic “zero” knowledge, unless the analyst takes a probabilistic view and quantitatively relates the relative likelihoods of symptoms and explanations via partial observations and incomplete prior knowledge. The scale and complexity of any affected system makes the analysis hard. More crucially, it is becoming infeasible to craft and tune a new security analysis over and over again for each new application scenario.

Research

We propose to address the scale problem in security analysis by developing representation and reasoning techniques that support higher-level structure and that enable the security community to factor out common core reasoning principles from the particular elements, rules, and data facts that are specific to the application at hand. This principle, the separation of reasoning engine and problem specification, has been pursued with great success in satisfiability modulo theories (SMT) solving. Probabilistic reasoning is powerful for many specific domains, but it does not have any full-fledged extensions to the level of scalable higher-level representations that are truly first-order. Based on our preliminary results, we propose to develop first-order probabilistic programs and study how they can represent security analysis questions for systems with both distributed aspects and quantitative uncertainty. We propose to study instance-based methods for reasoning about first-order probabilistic programs. Instance-based methods enjoy a good trade-off between generality and efficiency: they can leverage classical advances in probabilistic reasoning for finite-dimensional representations and lift them to the full expressive and descriptive power of first-order representations. With this approach, we achieve inter-system scaling by decoupling the generics from the specifics, and we hope to improve intra-system scaling by being able to combine more powerful reasoning techniques from classically disjoint domains.
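The instance-based idea, grounding a first-order rule over a concrete system and then applying finite-dimensional probabilistic reasoning to the resulting instances, can be illustrated with a toy example. Everything here (the network, the rule, the numbers, and the use of a noisy-OR combination) is invented for illustration and is not the project's proposed method:

```python
# Hypothetical sketch: instance-based reasoning grounds a first-order
# rule over a concrete network, then applies a standard finite
# probabilistic technique (noisy-OR combination) to the instances.
# First-order rule, schematically:
#   compromised(X) & connected(X, Y)  =>  compromised(Y)  with prob. p

edges = [("laptop", "server"), ("server", "plc"), ("laptop", "plc")]
prior = {"laptop": 0.5, "server": 0.0, "plc": 0.0}  # prior compromise beliefs
P_EDGE = 0.4  # probability that a ground rule instance propagates compromise

def ground(rule_edges, p):
    """Grounding step: one propositional rule instance per connected pair."""
    return [(src, dst, p) for src, dst in rule_edges]

def propagate(prior, instances):
    """One round of noisy-OR combination over the ground instances."""
    belief = dict(prior)
    for src, dst, p in instances:
        contribution = p * prior[src]  # chance this instance fires
        # Combine with the existing belief assuming independent causes.
        belief[dst] = 1 - (1 - belief[dst]) * (1 - contribution)
    return belief

belief = propagate(prior, ground(edges, P_EDGE))
print(belief)  # belief["server"] is 0.2 after one round
```

The separation is visible even in this toy: `ground` knows only the application-specific facts, while `propagate` is a generic reasoning step that could be reused across systems.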

Relevance

This project is of relevance to the security community since, if successful, it would provide more general and more flexible off-the-shelf solutions that can be used to address security analysis questions for particular systems. To simplify the design of problem-specific computer-aided security analysis procedures, this project pursues a separation of the problem description from the reasoning techniques applied to it. At the same time, it increases representational and computational power to scale to systems with uncertainty and distributed effects.

Impact

This project has the potential to help solve security analysis questions that scale to distributed systems and the presence of uncertainty. Both aspects are central in security challenges like Stuxnet. Because each security analysis question is different, it is more economically feasible to assemble particular security analyses from suitable reasoning components. This project addresses one such component with a good trade-off between generality and efficiency.

OUR TEAM:

The project team includes Andre Platzer who is an assistant professor in the computer science department at Carnegie Mellon University. He is an expert in verification and analysis of hybrid, distributed, and stochastic dynamic systems, including cyber-physical systems. The team further includes Erik P. Zawadzki, who is a fourth year graduate student in the computer science department at Carnegie Mellon University and is developing reasoning techniques for first-order MILPs and fast propositional solvers for probabilistic model counting.

 

Secure Composition of Systems and Policies
Lead PI:
Anupam Datta
Abstract

Compositional security is a recognized central scientific challenge for trustworthy computing. Contemporary systems are built up from smaller components. However, even if each component is secure in isolation, the composed system may not achieve the desired end-to-end security property: an adversary may exploit complex interactions between components to compromise security. Such attacks have shown up in the wild in many different settings, including web browsers and infrastructure, network protocols and infrastructure, and application and systems software. While there has been progress on understanding secure composition in some settings, such as information flow control for non-interference-style properties, cryptographic protocols, and certain classes of software systems, a number of important scientific questions pertaining to secure composition remain open. The broad goal of this project is to improve our scientific understanding of secure composition.

This project will focus on the following research directions:

Compositional Reasoning Principles for Higher-order Code. We propose to develop sound secure composition principles for trusted systems that interact with adversaries who have the capability to supply executable code to, and modify code of, the trusted system. To tackle this problem, we will build on prior work by Datta and collaborators on sound composition principles for trusted systems that interact with adversaries that can supply data (but not code).

Theoretical Categorization of Compositionality. We will generalize from the insights acquired during the development of compositional reasoning principles, and from prior technical work in the computer security community on the challenges of composing certain specific classes of security policies, to arrive at semantic characterizations of the classes of security policies that are and are not amenable to compositional analysis. We plan to start from the security policy algebra in the prior work by Pincus and Wing and expand the theory to larger classes of policy compositions.

Correspondence Between High-level Policies and Low-level Properties. We will develop a technical connection, a correspondence theorem, between high-level security policies (expressed either in the policy algebra or a logic) and lower-level security properties expressed in our program logic. This result will connect our two lines of work and might also be useful in explaining failures of security policy composition in terms of failures of environmental assumptions.

Case Studies. We will conduct a compositional analysis of a class of systems that combine trusted computing technology and memory protection mechanisms to provide strong security guarantees against powerful adversaries. We will also revisit the real-world examples in the Pincus-Wing paper using the new formalism.
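The core failure mode that motivates this project, components that are individually secure but insecure in composition, can be shown with a deliberately tiny example. This toy is invented here (it is not drawn from the Pincus-Wing paper or the project's formalism):

```python
# Toy illustration (invented, not from the project's formalism): two
# components that each satisfy the policy "never output the secret
# verbatim" under their own assumptions, yet whose composition leaks it.

SECRET = "hunter2"  # placeholder secret for the illustration

def satisfies_policy(output: str) -> bool:
    """End-to-end policy: the secret never appears in an output."""
    return SECRET not in output

def component_a(request: str) -> str:
    """Holds the secret but only ever emits it encoded (reversed),
    so its direct output never violates the policy."""
    return SECRET[::-1]

def component_b(data: str) -> str:
    """A generic formatting helper that reverses its input; it assumes
    its input is never an encoding of a secret."""
    return data[::-1]

# Each component is "secure" against the policy in isolation...
assert satisfies_policy(component_a("get"))
assert satisfies_policy(component_b("hello"))

# ...but the composition decodes the secret and leaks it.
composed = component_b(component_a("get"))
print(satisfies_policy(composed))  # False: the adversary sees the secret
```

The leak arises exactly because each component's security argument rests on an environmental assumption that the other component violates, which is the kind of failure the proposed correspondence theorem aims to explain.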

Anupam Datta
Anupam Datta is an Associate Professor at Carnegie Mellon University where he holds a joint appointment in the Computer Science and Electrical and Computer Engineering Departments. His research focuses on the scientific foundations of security and privacy. Datta’s work has led to new principles for securely composing cryptographic protocols and software systems; applications of these principles have influenced several IEEE and IETF standards. His work on privacy protection has led to formalizations of privacy as contextual integrity and purpose restrictions on information use; accountability mechanisms for privacy protection; and their applications in healthcare and Web privacy. Datta has authored a book and over 40 other publications on these topics. He serves on the Steering Committee and as the 2013-14 Program Co-Chair of the IEEE Computer Security Foundations Symposium. Datta obtained Ph.D. (2005) and M.S. (2002) degrees from Stanford University and a B.Tech. (2000) from IIT Kharagpur, all in Computer Science.
Improving the Usability of Security Requirements by Software Developers through Empirical Studies and Analysis
Lead PI:
Travis Breaux
Abstract

Secure software depends upon the ability of software developers to respond to security risks early in the software development process. Despite a wealth of security requirements, often called security controls, there is a shortfall in the adoption and implementation of these requirements. This shortfall is due to the extensive expertise and higher-level cognitive skillsets required to comprehend, decompose, and reassemble security requirements concepts in the context of an emerging system design. To address this shortfall, we propose to develop two empirical methods: (1) a method to derive security requirements patterns from requirements catalogues using expert knowledge; and (2) a method to empirically evaluate these patterns for their "usability" by novice software developers against a set of common problem descriptions, including the developer's ability to formulate problems and to select and instantiate patterns. The study results will yield a framework for discovering and evaluating security requirements patterns, and new scientific knowledge about the limitations of pattern-based approaches when applied by novice software developers.
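To make the notion of a security requirements pattern concrete, the following sketch represents a pattern as structured data and instantiates it against a system description. The pattern fields, the example control, and the parameter names are all invented for illustration and are not the project's actual pattern format:

```python
# Hypothetical sketch of a security requirements pattern as structured
# data, and its instantiation against a concrete system description.
# Fields, the example control, and parameter names are invented.

pattern = {
    "name": "audit-log-access",
    "problem": "Access to sensitive records must be attributable.",
    "template": "The {system} shall log every access to {asset} "
                "with user identity and timestamp.",
    "parameters": ["system", "asset"],
}

def instantiate(pattern, bindings):
    """Fill a pattern template; fail loudly if a parameter is unbound."""
    missing = [p for p in pattern["parameters"] if p not in bindings]
    if missing:
        raise ValueError(f"unbound parameters: {missing}")
    return pattern["template"].format(**bindings)

req = instantiate(pattern, {"system": "EHR portal",
                            "asset": "patient records"})
print(req)
```

A usability study of the kind proposed above would, among other things, observe where novice developers fail in exactly this selection-and-binding step.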

Hard Problem:

Security requirements are difficult to apply in design and must incorporate system architecture, functional requirements, security policies, regulations, and standards.

OUR TEAM:

PI(s): Travis Breaux, Laurie Williams, Jianwei Niu

Travis Breaux

Dr. Breaux is the Director of the CMU Requirements Engineering Lab, where his research program investigates how to specify and design software to comply with policy and law in a trustworthy, reliable manner. His work historically concerned the empirical extraction of legal requirements from policies and law, and has recently studied how to use formal specifications to reason about privacy policy compliance, how to measure and reason over ambiguous and vague policies, and how security and privacy experts and novices estimate the risk of system designs.


A Language and Framework for Development of Secure Mobile Applications
Lead PI:
Jonathan Aldrich
Abstract

Mobile applications are a critical emerging segment of the software industry, and security for web-based mobile applications is of increasing concern.  We hypothesize that many of the most important security vulnerabilities in web-based mobile applications are a consequence of expressing programs at a low level of abstraction, in which important security properties are implicit and only indirectly related to code.  In order to test this hypothesis, we are building a system for expressing web-based mobile applications at a higher level of abstraction, in which security properties are made explicit through expressions of design intent, and in which those properties are more directly related to code.  We will evaluate whether such an approach can reduce or eliminate the most common vulnerabilities of web-based mobile software, while imposing a low or even negative marginal cost on developers.
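The hypothesis that security properties should be explicit in the program's abstractions can be illustrated with a small sketch: untrusted input is given a distinct type that cannot reach an HTML output sink until it passes through a sanitizer. The types and functions here are invented to illustrate the idea; they are not the project's actual language design:

```python
# Illustrative sketch (invented types, not the project's language): make
# a security property explicit by typing data, so untrusted input cannot
# reach HTML output without passing through the sanitizer.

import html

class Untrusted:
    """Wrapper marking data from outside the trust boundary."""
    def __init__(self, value: str):
        self.value = value

class SafeHtml:
    """Wrapper marking data checked as safe for HTML output."""
    def __init__(self, value: str):
        self.value = value

def sanitize(data: Untrusted) -> SafeHtml:
    """The only path from Untrusted to SafeHtml: escape HTML metacharacters."""
    return SafeHtml(html.escape(data.value))

def render(fragment: SafeHtml) -> str:
    """Output sink accepts only SafeHtml, so raw input cannot reach it."""
    if not isinstance(fragment, SafeHtml):
        raise TypeError("render() requires sanitized SafeHtml input")
    return f"<p>{fragment.value}</p>"

user_input = Untrusted("<script>alert(1)</script>")
page = render(sanitize(user_input))
print(page)
```

In Python the check is dynamic; in a statically typed language the same design intent would be enforced at compile time, which is the low-or-negative marginal cost the abstract refers to.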

Jonathan Aldrich

Jonathan Aldrich is an Associate Professor in the School of Computer Science at Carnegie Mellon University. He works at the intersection of programming languages and software engineering, exploring how the way we express software affects our ability to engineer software at scale. A particular theme of much of his work is improving software quality and programmer productivity through better ways of expressing and enforcing structural and behavioral aspects of software design within source code, typically through language design and type systems. Aldrich has contributed to object-oriented typestate verification, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work on specifying and verifying architecture, he received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize. Currently, Aldrich is excited to be working on the design of Wyvern, a new modularly extensible programming language.

Sponsor: National Security Agency

C3E Team Telecon


 

Dial-in at 1-866-680-0146 code 163668

 

Teleconference Agenda:

  • Workshop agenda development is the driving focus.
  • West Point morning (Monday, 10 September): development status.
  • Participant development will be a continuing background activity.
  • Hotel, CIA dinner, and other logistics.
  • Dan, in your discussions with West Point, we need to understand what restrictions or rules apply to non-US persons, since we may have some participants in that category.
