Obsidian: A Language for Secure-By-Construction Blockchain Programs
Lead PI:
Jonathan Aldrich
Abstract

This project considers models for secure collaboration and contracts in a decentralized environment among parties that have not established trust. A significant example is blockchain programming, on platforms such as Ethereum and Hyperledger. Many defects in secure collaboration mechanisms have been documented, and some have been exploited to steal money. Our approach builds two kinds of models to address these defects: typestate models to mitigate re-entrancy-related vulnerabilities, and linear types to model and statically detect an important class of errors involving money and other transferable resources.

The project research will include both technical and usability assessment of these two ideas. The technical assessment addresses the feasibility of sound and composable static analyses to support these two semantic innovations. The usability assessment focuses on the ability of programmers to use Obsidian effectively to write secure programs with little training. The combined assessment asks whether programmers are more likely to write correct, safe code in Obsidian than in Solidity, with comparable or improved productivity.
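The typestate idea can be sketched outside Obsidian as well. The following Python sketch is illustrative only (all names are hypothetical, and Obsidian enforces typestate statically at compile time, not with runtime checks as here); it shows how tracking an explicit contract state blocks the re-entrant-call pattern behind attacks such as the DAO exploit:

```python
# Hypothetical sketch (not Obsidian syntax): a contract object tracks an
# explicit typestate and rejects calls that arrive while a withdrawal is
# still in flight -- the pattern behind re-entrancy exploits.

class ReentrancyError(Exception):
    pass

class Wallet:
    def __init__(self, balance):
        self.balance = balance
        self.state = "Idle"          # typestate: "Idle" or "Withdrawing"

    def withdraw(self, amount, send):
        # A typestate system would reject such a call at compile time when the
        # contract could be in "Withdrawing"; here we check at runtime instead.
        if self.state != "Idle":
            raise ReentrancyError("withdraw called re-entrantly")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.state = "Withdrawing"
        try:
            send(amount)             # external call: may try to re-enter
            self.balance -= amount
        finally:
            self.state = "Idle"

w = Wallet(100)
w.withdraw(30, send=lambda amt: None)   # honest recipient
print(w.balance)                        # 70

attacked = []
def malicious(amt):
    # A malicious recipient tries to withdraw again before the balance updates.
    try:
        w.withdraw(30, send=lambda a: None)
    except ReentrancyError:
        attacked.append("blocked")

w.withdraw(30, send=malicious)
print(attacked)                         # ['blocked']
```

The second withdrawal succeeds, but the nested re-entrant attempt is rejected because the contract is not in the `Idle` state when it arrives.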

Jonathan Aldrich

Jonathan Aldrich is an Associate Professor in the School of Computer Science at Carnegie Mellon University. He works at the intersection of programming languages and software engineering, developing better ways of expressing and enforcing software design within source code, typically through language design and type systems. His research explores how the way we express software affects our ability to engineer software at scale; a recurring theme is improving software quality and programmer productivity through better ways to express structural and behavioral aspects of software design in source code. Aldrich has contributed to object-oriented typestate verification, modular reasoning techniques for aspects and stateful programs, and new object-oriented language models. For his work on specifying and verifying architecture, he received a 2006 NSF CAREER award and the 2007 Dahl-Nygaard Junior Prize. Currently, Aldrich is excited to be working on the design of Wyvern, a new modularly extensible programming language.

Institution: Carnegie Mellon University
Sponsor: National Security Agency
Model-Based Explanation For Human-in-the-Loop Security
Lead PI:
David Garlan
Abstract

Effective response to security attacks often requires a combination of both automated and human-mediated actions. Currently we lack adequate methods to reason about such human-system coordination, including ways to determine when to allocate tasks to each party and how to gain assurance that automated mechanisms are appropriately aligned with organizational needs and policies. In this project, we develop a model-based approach to (a) reason about when and how systems and humans should cooperate with each other, (b) improve human understanding and trust in automated behavior through self-explanation, and (c) provide mechanisms for humans to correct a system’s automated behavior when it is inappropriate. We will explore the effectiveness of the techniques in the context of coordinated system-human approaches for mitigating advanced persistent threats (APTs).

Building on prior work that we have carried out in this area, we will show how probabilistic models and model checkers can be used both to synthesize complex plans that involve a combination of human and automated actions, and to provide human-understandable explanations of mitigation plans proposed or carried out by the system. Critically, these models capture an explicit value system (in a multi-dimensional utility space) that forms the basis for determining courses of action. Because the value system is explicit, we believe it will be possible to provide a rational explanation of the principles that led to a given system plan. Moreover, our approach will allow the user to take corrective actions on that value system (and hence, future decisions) when it is misaligned. This will be done without the user needing to know the mathematical form of the revised utility reward function.
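As an illustration of how an explicit value system supports both selection and explanation, consider this minimal sketch (plan names, utility dimensions, and weights are all invented for illustration, not taken from the project):

```python
# Illustrative sketch: plans are scored against an explicit multi-dimensional
# utility, and the same weights that drive selection can be replayed to
# explain *why* a plan won. All names and numbers are hypothetical.

UTILITY_WEIGHTS = {"security": 0.5, "cost": 0.3, "disruption": 0.2}

plans = {
    "quarantine-host": {"security": 0.9, "cost": 0.6, "disruption": 0.4},
    "alert-operator":  {"security": 0.4, "cost": 0.9, "disruption": 0.9},
}

def utility(scores, weights):
    # Weighted sum over the utility dimensions.
    return sum(weights[d] * scores[d] for d in weights)

def explain(name, scores, weights):
    # Because the value system is explicit, each term of the sum doubles as
    # a human-readable reason for the decision.
    terms = [f"{d}: {weights[d]}*{scores[d]}={weights[d] * scores[d]:.2f}"
             for d in weights]
    return f"{name} -> {utility(scores, weights):.2f} (" + ", ".join(terms) + ")"

best = max(plans, key=lambda p: utility(plans[p], UTILITY_WEIGHTS))
for name, scores in plans.items():
    print(explain(name, scores, UTILITY_WEIGHTS))
print("chosen:", best)

# A user who finds the choice misaligned adjusts the weights, not the math:
UTILITY_WEIGHTS["disruption"] = 0.6   # operator cares more about availability
```

Adjusting a weight changes future decisions without the user ever seeing the underlying utility function's mathematical form.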

David Garlan

David Garlan is a Professor in the School of Computer Science at Carnegie Mellon University. His research interests include:

  • software architecture
  • self-adaptive systems
  • formal methods
  • cyber-physical systems

Dr. Garlan is a member of the Institute for Software Research and Computer Science Department in the School of Computer Science.

He received his Ph.D. from Carnegie Mellon in 1987 and worked as a software architect in industry between 1987 and 1990.  He is recognized as one of the founders of the field of software architecture and, in particular, of the formal representation and analysis of architectural designs. He is a co-author of two books on software architecture: "Software Architecture: Perspectives on an Emerging Discipline" and "Documenting Software Architectures: Views and Beyond." In 2005 he received a Stevens Award Citation for “fundamental contributions to the development and understanding of software architecture as a discipline in software engineering.” In 2011 he received the Outstanding Research Award from ACM SIGSOFT for “significant and lasting software engineering research contributions through the development and promotion of software architecture.”  In 2016 he received the Allen Newell Award for Research Excellence. In 2017 he received the IEEE TCSE Distinguished Education Award and the Nancy Mead Award for Excellence in Software Engineering Education. He is a Fellow of the IEEE and the ACM.

Institution: Carnegie Mellon University
Sponsor: National Security Agency
Securing Safety-Critical Machine Learning Algorithms
Lead PI:
Lujo Bauer
Abstract

Machine-learning algorithms, especially classifiers, are becoming prevalent in safety- and security-critical applications. The susceptibility of some types of classifiers to evasion by adversarial input data has been explored in domains such as spam filtering, but the rapid growth in adoption of machine learning across application domains amplifies the extent and severity of this vulnerability landscape. We propose to (1) develop predictive metrics that characterize the degree to which a neural-network-based image classifier used in domains such as face recognition (say, for surveillance and authentication) can be evaded through attacks that are both practically realizable and inconspicuous, and (2) develop methods that make these classifiers, and the applications that incorporate them, robust to such interference. We will examine how to manipulate images to fool classifiers in various ways, and how to do so in a way that escapes the suspicion of even human onlookers. Armed with this understanding of the weaknesses of popular classifiers and their modes of use, we will develop explanations of model behavior to help identify the presence of a likely attack, and generalize these explanations to harden models against future attacks.
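The evasion idea can be illustrated on a toy linear model (the weights, input, and step size below are invented for illustration; real attacks target deep networks, but the gradient-sign principle is the same):

```python
# Toy illustration, not the project's classifiers: evading a linear model by
# nudging each feature against the weight vector -- the fast-gradient-sign
# idea used to fool neural-network image classifiers, specialized to a
# linear score sign(w . x).

weights = [0.8, -0.5, 0.3]   # a (hypothetical) trained linear model
x = [1.0, 0.2, 0.5]          # a benign input, classified positive

def score(w, x):
    # Dot product; the sign decides the class.
    return sum(wi * xi for wi, xi in zip(w, x))

def perturb(w, x, eps):
    # Step each feature in the direction that lowers the score: subtract
    # eps * sign(w_i) from x_i. For a linear model this is the optimal
    # bounded per-feature perturbation.
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

print(score(weights, x))              # positive -> classified benign
x_adv = perturb(weights, x, eps=0.6)
print(score(weights, x_adv))          # negative -> classification flipped
```

Against image classifiers, the same per-pixel step is spread over thousands of pixels, which is what lets the perturbation stay inconspicuous to human onlookers.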

Lujo Bauer

Lujo Bauer is an Associate Professor in the Electrical and Computer Engineering Department and in the Institute for Software Research at Carnegie Mellon University. He received his B.S. in Computer Science from Yale University in 1997 and his Ph.D., also in Computer Science, from Princeton University in 2003.

Dr. Bauer's research interests span many areas of computer security and privacy, and include building usable access-control systems with sound theoretical underpinnings, developing languages and systems for run-time enforcement of security policies on programs, and generally narrowing the gap between a formal model and a practical, usable system. His recent work focuses on developing tools and guidance to help users stay safer online and in examining how advances in machine learning can lead to a more secure future.

Dr. Bauer served as the program chair for the flagship computer security conferences of the IEEE (S&P 2015) and the Internet Society (NDSS 2014) and is an associate editor of ACM Transactions on Information and System Security.

Institution: Carnegie Mellon University
Sponsor: National Security Agency
A Monitoring, Fusion and Response Framework to Provide Cyber Resiliency
Lead PI:
William Sanders
Performance Period: 11/01/2016 - 06/01/2017
Real-time Privacy Risk Evaluation and Enforcement
Lead PI:
Travis Breaux
Abstract

Critical infrastructure is increasingly composed of distributed, inter-dependent components and information that is vulnerable to sophisticated, multi-stage cyber-attacks.  These attacks are difficult to understand as isolated incidents, and thus to improve understanding and response, organizations must rapidly share high-quality threat, vulnerability, and exploit-related cyber-security information.  However, pervasive and ubiquitous computing has blurred the boundary between work-related and personal data.  This includes both the use of workplace computers for personal purposes, and the increase in publicly available employee information that can be used to gain unauthorized access to systems through attacks targeted at employees.

To address this challenge, we envision a two-part solution that includes: (1) the capability to assign information category tags to data “in transit” and “at rest” using an ontology that describes what information is personal and non-personal; and (2) a scoring algorithm that computes the “privacy risk” of some combination of assigned tags.
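As a sketch of part (2), a minimal scoring rule might combine per-tag risks so that co-occurring personal tags compound while the score stays bounded (the tag names, weights, and combination rule here are illustrative assumptions, not the project's actual ontology or algorithm):

```python
# Hypothetical sketch of a privacy-risk scoring rule. Tag names and weights
# are invented; the project's actual ontology and algorithm may differ.

TAG_RISK = {                 # per-tag base risk from a (hypothetical) ontology
    "work-email": 0.2,
    "personal-email": 0.6,
    "home-address": 0.8,
    "public-bio": 0.1,
}

def privacy_risk(tags):
    """Combine tag risks as 1 - prod(1 - r): co-occurring personal tags
    compound the risk, but the score always stays within [0, 1]."""
    risk = 1.0
    for t in tags:
        risk *= 1.0 - TAG_RISK[t]
    return 1.0 - risk

print(privacy_risk(["work-email"]))                      # low risk
print(privacy_risk(["personal-email", "home-address"]))  # compounded risk
```

The noisy-OR combination is one natural choice here because adding a tag can only raise the score, matching the intuition that revealing more categories of personal data never reduces privacy risk.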

Travis Breaux

Dr. Breaux is the Director of the CMU Requirements Engineering Lab, where his research program investigates how to specify and design software to comply with policy and law in a trustworthy, reliable manner. His work historically concerned the empirical extraction of legal requirements from policies and law, and has recently studied how to use formal specifications to reason about privacy policy compliance, how to measure and reason over ambiguous and vague policies, and how security and privacy experts and novices estimate the risk of system designs.


Performance Period: 02/15/2016 - 06/01/2017
Abstract

Anonymity is a basic right and a core aspect of the Internet. Recently, there has been tremendous interest in anonymity and privacy in social networks, motivated by the natural desire to share one’s opinions without fear of judgment or personal reprisal (by parents, authorities, or the public). We propose to study the fundamental questions associated with building a semi-distributed, anonymous messaging platform, which aims to keep anonymous both the identity of the source who initially posted a message and the identities of the relays who approved and propagated it.
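One simple way to obscure a source, sketched here with invented topology and hop counts rather than the project's actual protocol, is to forward each message along a random walk of relays before it is broadcast:

```python
# Toy sketch of the anonymity goal. The peer set and hop count are invented;
# this is not the project's protocol, only an illustration of why a random
# relay walk makes the broadcaster a poor proxy for the true source.

import random

def propagate(source, peers, hops):
    """Forward a message through `hops` randomly chosen relays, then
    broadcast from the final relay. Only the walk itself knows the source."""
    path = [source]
    for _ in range(hops):
        path.append(random.choice(peers))
    return path   # path[-1] broadcasts; earlier hops remain hidden

random.seed(7)    # seeded only to make the sketch reproducible
peers = ["relay-%d" % i for i in range(10)]
path = propagate("alice", peers, hops=3)
print(path)
```

An observer who sees only the final broadcast cannot distinguish the source from any relay; the harder research questions concern adversaries who observe several relays or many messages over time.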

Pramod Viswanath
Institution: University of Illinois at Urbana-Champaign
Laurie Williams

Laurie Williams is a Distinguished University Professor in the Computer Science Department of the College of Engineering at North Carolina State University (NCSU). Laurie is a co-director of the NCSU Secure Computing Institute and the NCSU Science of Security Lablet. She is also the Chief Cybersecurity Technologist of the SecureAmerica Institute. Laurie's research focuses on software security; agile software development practices and processes, particularly continuous deployment; and software reliability, software testing and analysis. Laurie has more than 240 refereed publications.

Laurie is an IEEE Fellow. She was named an ACM Distinguished Scientist in 2011 and is an NSF CAREER award winner. In 2009, she was honored to receive the ACM SIGSOFT Influential Educator Award. At NCSU, Laurie was named a University Faculty Scholar in 2013. She was inducted into the Research Leadership Academy and awarded an Alumni Association Outstanding Research Award in 2016. In 2006, she won the Outstanding Teaching Award for her innovative teaching and is an inductee in NC State's Academy of Outstanding Teachers.

Laurie leads the Software Engineering Realsearch research group at NCSU. With her students in the Realsearch group, Laurie has been involved in working collaboratively with high tech industries like ABB Corporation, Cisco, IBM Corporation, Merck, Microsoft, Nortel Networks, Red Hat, Sabre Airline Solutions, SAS, Tekelec (now Oracle), and other healthcare IT companies. They also extensively evaluate open source software.

Laurie is one of the foremost researchers in agile software development and in the security of healthcare IT applications. She was one of the founders of the first XP/Agile conference, XP Universe, in 2001 in Raleigh, which has since grown into the annual Agile conference. She is also the lead author of the book Pair Programming Illuminated and a co-editor of Extreme Programming Perspectives. Laurie is the instructor of a highly rated professional agile software development course that has been widely taught in Fortune 500 companies. She is also a certified instructor of John Musa's software reliability engineering course, More Reliable Software Faster and Cheaper.

Laurie received her Ph.D. in Computer Science from the University of Utah, her MBA from Duke University Fuqua School of Business, and her BS in Industrial Engineering from Lehigh University.   She worked for IBM Corporation for nine years in Raleigh, NC and Research Triangle Park, NC before returning to academia.

Performance Period: 03/17/2016 - 03/17/2017
Institution: NC State University
SoS Lablet Research Methods, Community Development and Support
Lead PI:
Jeffrey Carver
Abstract
  • Community Development - The goal is to build an extended and vibrant interdisciplinary community of science of security researchers, research methodologists, and practitioners (Carver, Williams).
  • Community Resources - To create and maintain a repository of defensible scientific methods for security research (Carver, Williams).
  • Oversight for the Application of Defensible Scientific Research Methodologies - To encourage the application of scientifically defensible research through various methods of consultation and feedback (Carver).
  • Usable Data Sharing - To enable open, efficient, and secure sharing of data and experimental results for experimentation among SoS researchers (Al-Shaer).
Jeffrey Carver
Performance Period: 03/17/2016 - 03/17/2017
Institution: University of Alabama; NC State University; UNC-Charlotte
Abstract
  • Contributions to Developing a Science of Security - We will design and implement an evaluation process for assessing the effectiveness and impact of the Lablet's research and community development activities (McGowen, Stallings, & Wright).
  • Contributions to Security Science Research Methodology - We will examine both the impact of Lablet work on the maturity of the SoS field and the methodological rigor of the Lablet research projects themselves (McGowen, Carver).
  • Development of a Community of Practice for the Science of Security - We will develop methods to assess whether Lablet activities are contributing to the development of a sustainable community of practice for the SoS field (McGowen, Stallings, Carver, & Wright).
Lindsey McGowen
Performance Period: 03/17/2016 - 03/17/2017
Institution: NC State University; University of Alabama
Privacy Incidents Database
Lead PI:
Jessica Staddon
Abstract

The patterns and characteristics of security incidents are a significant driver of security technology innovation. Patterns are detected by analyzing repositories of malware/viruses/worms, incidents affecting control/SCADA systems, general security alerts and updates, and data breaches. For most types of privacy incidents, however, there are no repositories. Privacy incidents that do not involve a security breach, such as cyber-bullying/slander/stalking, revenge porn, social media oversharing, data re-identification, and surveillance, are not represented in current repositories.  Our project is building the first comprehensive encyclopedia and database of privacy incidents. This publicly accessible repository will enable tracking of incident rates and characteristics such as involved entities and incident root causes. The repository will provide a resource for privacy researchers to investigate the patterns of a broad range of privacy incidents, and the incident patterns surfaced by the database will help inform privacy technology development globally.
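A repository entry might look like the following sketch (field names are hypothetical; the project's actual schema may differ), from which incident-rate queries fall out naturally:

```python
# Hypothetical record layout for a privacy-incident entry. Field names and
# the sample record are invented for illustration only.

from dataclasses import dataclass, field

@dataclass
class PrivacyIncident:
    title: str
    incident_type: str        # e.g. "oversharing", "re-identification"
    date: str                 # ISO 8601 date of the incident
    involved_entities: list = field(default_factory=list)
    root_causes: list = field(default_factory=list)

db = [
    PrivacyIncident("Example re-identification study", "re-identification",
                    "2016-01-01", ["researcher", "data publisher"],
                    ["insufficient anonymization"]),
]

# Incident-rate tracking is a simple aggregation over the structured fields:
by_type = {}
for inc in db:
    by_type[inc.incident_type] = by_type.get(inc.incident_type, 0) + 1
print(by_type)   # {'re-identification': 1}
```

Structuring entities and root causes as explicit fields, rather than free text, is what would let researchers query incident patterns across a broad range of privacy incidents.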

Jessica Staddon
Institution: NC State University