Enhancing Cyber Security Through Networks Resilient to Targeted Attacks
Abstract

ABOUT THE PROJECT:

The scientific objective of this project is to discover statistical models that characterize network resiliency and to develop simulation tools for testing whether an existing network is resilient. Our work will show how to place questions of network connectivity resilience on a firm statistical basis, ultimately allowing one to design networks to be more resilient, formally assess the resiliency of existing networks, and formally assess the changes to resiliency achieved as modifications are introduced.
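
As a minimal illustration of the kind of simulation-based resilience test this points to (our own sketch, with an assumed network model and attack strategy, not the project's tooling), one can remove the highest-degree nodes of a network one at a time and watch how the largest connected component shrinks:

```python
# Hypothetical sketch: simulate a targeted attack by repeatedly removing the
# highest-degree node and track the surviving largest connected component.
import networkx as nx

def targeted_attack_curve(G, fraction=0.1):
    """Remove the top `fraction` of nodes by degree and record, after each
    removal, the largest connected component's share of the original size."""
    G = G.copy()
    n0 = G.number_of_nodes()
    curve = []
    for _ in range(int(fraction * n0)):
        victim = max(G.degree, key=lambda kv: kv[1])[0]   # current hub
        G.remove_node(victim)
        largest = max((len(c) for c in nx.connected_components(G)), default=0)
        curve.append(largest / n0)
    return curve

# Scale-free networks degrade quickly under this attack; a resilience test
# could compare the observed curve against a statistical model's prediction.
G = nx.barabasi_albert_graph(500, 3, seed=1)
print(targeted_attack_curve(G)[-3:])
```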

OUR TEAM:

Yuguo Chen

 

End-to-End Analysis of Side Channels
Abstract

This project is exploring a framework for characterizing side channels that is based on an end-to-end analysis of the side channel process. As in covert channel analysis, we are using information-theoretic tools to identify the potential of a worst-case attack, rather than the success of a given ad hoc approach. However, instead of measuring the capacity of an information channel, which presumes optimal coding and thus overestimates the impact of the side channel, we are measuring the mutual information between the sensitive data and observations available to an adversary.
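
For example, with categorical data this quantity can be estimated directly from paired samples of the secret and the observation. The sketch below is our own illustration (not the project's code) and computes I(S; O) in bits from an empirical joint distribution:

```python
# Illustrative sketch: estimate the mutual information I(S; O) between
# sensitive data S and adversary observations O from paired samples, in bits.
from collections import Counter
from math import log2

def mutual_information(samples):
    """samples: iterable of (s, o) pairs drawn from the joint distribution."""
    samples = list(samples)
    n = len(samples)
    joint = Counter(samples)
    count_s = Counter(s for s, _ in samples)
    count_o = Counter(o for _, o in samples)
    mi = 0.0
    for (s, o), c in joint.items():
        p_so = c / n
        mi += p_so * log2(p_so * n * n / (count_s[s] * count_o[o]))
    return mi

# Example: the observation leaks the parity of a 4-bit secret,
# so the estimate comes out to about 1 bit.
data = [(s, s % 2) for s in range(16)] * 100
print(mutual_information(data))  # ~1.0
```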

OUR TEAM:

Researcher: Nikita Borisov
 

 

Classification of Cyber-Physical System Adversaries
Abstract

Cyber-Physical Systems (CPS) are vulnerable to elusive dynamics-aware attacks that subtly change local behaviors in ways that lead to large deviations in global behavior, and to system instability. The broad agenda for this project is to classify attacks on different classes of CPS based on detectability. In particular, we are identifying attacks that are impossible to detect in a given class of CPS (with reasonable resources), and we are developing detection algorithms for those that are possible. The methods developed will primarily be aimed at scenarios in which attackers have some ability to intermittently disrupt either the timing or the quality-of-service of software or communication processes, even though the processes may not have been breached in the traditional sense. Much of the work will also apply to cases where such limited disruptions are introduced physically. Our approach is based on a set of powerful technical tools that draw from and combine ideas from robust control theory, formal methods, and information theory.
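
As a toy illustration of why such attacks are hard to detect (our own sketch, using an assumed scalar linear model rather than the project's methods): a residual detector compares each measurement against a nominal prediction and raises an alarm when the deviation exceeds a noise-calibrated threshold, while a dynamics-aware attacker keeps every per-step deviation below that threshold and still drifts the state far from nominal.

```python
# Toy sketch (not the project's algorithms): per-step residual detection on a
# scalar linear system x[k+1] = a*x[k] + w[k] + attack, observed each step.
import random

random.seed(0)
a, noise_std, threshold = 0.9, 0.05, 0.2

def run(attack_step, steps=200):
    """Return (final state deviation, number of alarms raised)."""
    x_true, alarms = 0.0, 0
    for _ in range(steps):
        x_pred = a * x_true                     # defender's one-step prediction
        x_true = a * x_true + random.gauss(0.0, noise_std) + attack_step
        if abs(x_true - x_pred) > threshold:    # per-step residual test
            alarms += 1
    return x_true, alarms

print(run(0.0))    # nominal: state stays near 0, essentially no alarms
print(run(0.1))    # stealthy bias: rarely trips the alarm, yet state drifts to ~1.0
print(run(0.5))    # crude attack: alarms fire almost every step
```

A per-step threshold misses exactly this kind of slow drift, while cumulative statistics can catch it; characterizing which attack classes remain undetectable even with richer detectors is the kind of question the project addresses.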

OUR TEAM:

Sayan Mitra and Geir Dullerud

 

Trust from Explicit Evidence: Integrating Digital Signatures and Formal Proofs
Lead PI:
Frank Pfenning
Abstract

ABOUT THE PROJECT:

 

 

OUR TEAM:

Frank Pfenning

Using Crowdsourcing to Analyze and Summarize the Security of Mobile Applications
Abstract

ABOUT THE PROJECT:

 

OUR TEAM:

Norman Sadeh

Systematic Testing of Distributed and Multi-Threaded Systems at Scale
Lead PI:
Garth Gibson
Abstract

ABOUT THE PROJECT:

 

OUR TEAM:

Garth Gibson

USE: User Security Behavior
Abstract

Our ability to design appropriate information security mechanisms and sound security policies depends on our understanding of how end-users actually behave. To improve this understanding, we will establish a large panel of end-users whose complete online behavior will be captured, monitored, and analyzed over an extended period of time. Establishing such a panel will require the design of sound measurement methodologies, while paying particular attention to the protection of end-users' confidential information. Once established, our panel will offer an unprecedented window on real-time, real-life security and privacy behavior "in the wild." The panel will combine tracking, experimental, and survey data, and will provide a foundation on which sound models of both user and attacker behavior can rest. These models will lead to the scientific design of intervention policies and technical countermeasures against security threats. In other words, in addition to academic research, this research will also lead to actionable recommendations for policy makers and firms.

Architecture-based Self-Securing Systems
Lead PI:
David Garlan
Abstract

An important emerging trend in the engineering of complex software-based systems is the ability to incorporate self-adaptive capabilities. Such systems typically include a set of monitoring mechanisms that allow a control layer to observe the running behavior of a target system and its environment, and then repair the system when problems are detected. Substantial results in applying these concepts have emerged over the past decade, addressing quality dimensions such as reliability, performance, and database optimization. In particular, at Carnegie Mellon we have shown how architectural models, updated at runtime, can form the basis for effective and scalable problem detection and correction. However, to date relatively little research has been done to apply these techniques to support detection of security-related problems and identification of remedial actions. In this project we propose to develop scientific foundations, as well as practical tools and techniques, to support self-securing systems, focusing specifically on questions of scalable assurance.
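
A highly simplified sketch of the control-loop structure described above (our illustration, with invented component names and thresholds, not the Rainbow platform's API): a monitor refreshes a runtime architectural model, an analysis step checks it against a security constraint, and a repair strategy is executed when the constraint is violated.

```python
# Simplified monitor-analyze-repair loop over a runtime architectural model
# (illustrative only; names and thresholds are assumptions, not Rainbow's API).
from dataclasses import dataclass

@dataclass
class Gateway:
    """Stand-in for a monitored component of the running target system."""
    suspicious_conn_rate: float = 40.0   # probe reading, connections/minute
    filter_mode: str = "normal"

MAX_SUSPICIOUS_RATE = 25.0               # architectural security constraint (assumed)

def monitor(gw: Gateway) -> dict:
    """Probe the running system and update the architectural model."""
    return {"rate": gw.suspicious_conn_rate, "mode": gw.filter_mode}

def analyze(model: dict) -> bool:
    """Detect a security problem that the architecture rules out."""
    return model["rate"] > MAX_SUSPICIOUS_RATE and model["mode"] == "normal"

def execute_repair(gw: Gateway) -> None:
    """Apply a repair strategy; here, switch the gateway to strict filtering."""
    gw.filter_mode = "strict"

gw = Gateway()
if analyze(monitor(gw)):
    execute_repair(gw)
print(gw.filter_mode)  # "strict": the control layer reconfigured the system at run time
```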

OUR QUALIFICATIONS:

Prof. David Garlan and Dr. Bradley Schmerl have been working in the area of architecture-based self-adaptation for over a decade. They have developed both foundations and tools – specifically, a platform called “Rainbow” – that are considered seminal work in this area of architecture-based adaptation. Ivan Ruchkin is a Ph.D. candidate working under the direction of Prof. Garlan in the area of formal modeling of dynamic changes in systems from an architectural perspective. His work will support assurances that operations that change a system at run-time are sound, and do not violate the properties and rules defined by the architecture.

OUR TEAM:

PI: Prof. David Garlan (Faculty),

Staff: Dr. Bradley Schmerl (Research Faculty)

Students: Ivan Ruchkin (Ph.D. Student), new student to be recruited.

David Garlan

David Garlan is a Professor in the School of Computer Science at Carnegie Mellon University. His research interests include:

  • software architecture
  • self-adaptive systems
  • formal methods
  • cyber-physical systems

Dr. Garlan is a member of the Institute for Software Research and Computer Science Department in the School of Computer Science.

He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990. He is recognized as one of the founders of the field of software architecture and, in particular, of the formal representation and analysis of architectural designs. He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline" and "Documenting Software Architectures: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research Award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.” In 2016 he received the Allen Newell Award for Research Excellence. In 2017 he received the IEEE TCSE Distinguished Education Award and the Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and the ACM.

Learned Resiliency: Secure Multi-Level Systems
Lead PI:
Kathleen Carley
Abstract

The objective of this project is to develop a theory of system resiliency for complex adaptive socio-technical systems. A secondary objective is to develop the modeling framework and associated metrics for examining the resiliency of complex socio-technical systems in the face of various cyber and non-cyber attacks, such that the methodology can be used to support both basic simulation-based experimentation and the assessment of actual socio-technical systems.

OUR TEAM:

Professor Kathleen M. Carley

Geoffrey Morgan (Student)

Mike Kowalchuk (Research Staff)

Kathleen Carley

Dr. Kathleen M. Carley is a Professor of Computation, Organizations and Society in the Institute for Software Research in the School of Computer Science at Carnegie Mellon University, and CEO of Carley Technologies Inc. Dr. Carley is the director of the Center for Computational Analysis of Social and Organizational Systems (CASOS), which has over 25 members, both students and research staff. Dr. Carley received her Ph.D. in Mathematical Sociology from Harvard University, and her undergraduate degrees in Economics and Political Science from MIT. Her research combines cognitive science, organization science, social networks, and computer science to address complex social and organizational problems. Her specific research areas are dynamic network analysis; computational social and organization theory; adaptation and evolution; text mining; and the impact of telecommunication technologies and policy on communication, information diffusion, disease contagion, and response within and among groups, particularly in disaster or crisis situations. She and the members of the CASOS center have developed infrastructure tools for analyzing large-scale dynamic networks, as well as various multi-agent simulation systems. The infrastructure tools include ORA, AutoMap, and SmartCard. ORA is a statistical toolkit for analyzing and visualizing multi-dimensional networks. ORA results are organized into reports that meet various needs, such as the management report, the mental model report, and the intelligence report. Another tool is AutoMap, a text-mining system for extracting semantic networks from texts and then cross-classifying them, using an organizational ontology, into the underlying social, knowledge, resource, and task networks. SmartCard is a network and behavioral estimation system for cities in the U.S. Carley’s simulation models meld multi-agent technology with network dynamics and empirical data, resulting in reusable large-scale models: BioWar, a city-scale model for understanding the spread of disease and illness due to natural epidemics, chemical spills, and weaponized biological attacks; and Construct, an information and belief diffusion model that enables assessment of interventions. She is the current and a founding editor of the journal Computational Organization Theory and has published over 200 papers and co-edited several books using computational and dynamic network models.

Security Reasoning for Distributed Systems with Uncertainties
Abstract
Phenomena like Stuxnet make apparent to the public what experts knew long ago: security is not an isolated question of securing a single door against lockpicking, or a single computer against a single hacker trying to gain access via a single network activity. Because the strength of a security system is determined by its weakest link, security is a holistic property that affects more and more elements of a system design. Most systems are not properly understood through simplistic finite-state abstractions, such as yes/no information about whether a node in a (sufficiently small) finite network has been compromised. Stuxnet, for example, is reported to involve a sophisticated interaction of control effects and sensor modifications, and even to exhibit hidden long-term effects on the physical world by changing the behavior of programmable logic controllers (PLCs). Moreover, security-relevant systems today are more often than not characterized by distributed setups, both in the system and in the attack. The security analyst, furthermore, faces uncertainties that aggregate to paralytic “zero” knowledge unless a probabilistic view is taken that quantitatively relates the relative likelihoods of symptoms and explanations via partial observations and incomplete prior knowledge. The scale and complexity of any affected system, however, make the analysis hard. More crucially, it is becoming infeasible to scale across systems by crafting and tuning a new security analysis over and over again for each particular application scenario.

Research

We propose to address the scale problem in security analysis by developing representation and reasoning techniques that support higher-level structure and that enable the security community to factor out common core reasoning principles from the particular elements, rules, and data facts that are specific to the application at hand. This principle, separation of reasoning engine and problem specification, has been pursued with great success in satisfiability modulo theories (SMT) solving. Probabilistic reasoning is powerful for many specific domains, but does not have any full-fledged extensions to the level of scalable higher-level representations that are truly first-order. Based on our preliminary results, we propose to develop first-order probabilistic programs and study how they can represent security analysis questions for systems with both distributed aspects and quantitative uncertainty. We propose to study instance-based methods for reasoning about first-order probabilistic programs. Instance-based methods enjoy a good trade-off between generality and efficiency. They can leverage classical advances in probabilistic reasoning for finite-dimensional representations and lift them to the full expressive and descriptive power of first-order representations. With this approach, we achieve inter-system scaling by decoupling the generics from the specifics, and we hope to improve intra-system scaling by being able to combine more powerful reasoning techniques from classically disjoint domains.
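
As an illustrative toy, far simpler than what is proposed here (all host names, probabilities, and rules below are our own assumptions): an instance-based method can ground a first-order rule such as "a host tends to be compromised if a neighbor it talks to is compromised" over a concrete set of hosts, and then hand the resulting finite probabilistic model to a classical inference routine, here plain enumeration.

```python
# Toy illustration: ground a first-order probabilistic rule over a concrete
# set of hosts and run classical enumeration on the resulting finite model.
from itertools import product

hosts = ["a", "b", "c"]
edges = {("a", "b"), ("b", "c")}          # who talks to whom
prior = 0.05                               # P(compromised(h)) with no compromised neighbor
p_spread = 0.6                             # P(compromised(y)) given a compromised x with x -> y
p_alarm_if_comp, p_alarm_if_clean = 0.9, 0.1
observed_alarms = {"c": True, "a": False}  # partial observations of IDS alarms

def joint(world):
    """Probability of one grounded world: a dict host -> compromised?"""
    p = 1.0
    for h in hosts:
        # Grounded rule instance: compromise spreads from a compromised neighbor.
        parents = [x for (x, y) in edges if y == h]
        if any(world[x] for x in parents):
            p *= p_spread if world[h] else 1 - p_spread
        else:
            p *= prior if world[h] else 1 - prior
        if h in observed_alarms:
            p_alarm = p_alarm_if_comp if world[h] else p_alarm_if_clean
            p *= p_alarm if observed_alarms[h] else 1 - p_alarm
    return p

# Posterior P(compromised(b) | observations), by enumerating all groundings.
num = den = 0.0
for bits in product([False, True], repeat=len(hosts)):
    world = dict(zip(hosts, bits))
    p = joint(world)
    den += p
    if world["b"]:
        num += p
print(num / den)
```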

Relevance

This project is of relevance to the security community since, if successful, it would provide more general and more flexible off-the-shelf solutions that can be used to address security analysis questions for particular systems. To simplify the design of problem-specific computer-aided security analysis procedures, this project pursues a separation of the problem description from the reasoning techniques applied to it. At the same time, it increases representational and computational power to scale to systems with uncertainty and distributed effects.

Impact

This project has the potential to help solve security analysis questions that scale to distributed systems and the presence of uncertainty. Both aspects are central in security challenges like Stuxnet. Because each security analysis question is different, it is more economically feasible to assemble particular security analyses from suitable reasoning components. This project addresses one such component with a good trade-off between generality and efficiency.

OUR TEAM:

The project team includes André Platzer, an assistant professor in the Computer Science Department at Carnegie Mellon University. He is an expert in the verification and analysis of hybrid, distributed, and stochastic dynamic systems, including cyber-physical systems. The team further includes Erik P. Zawadzki, a fourth-year graduate student in the Computer Science Department at Carnegie Mellon University, who is developing reasoning techniques for first-order MILPs and fast propositional solvers for probabilistic model counting.

 
