Scalable Privacy Analysis
Lead PI:
Serge Egelman
Abstract

One major shortcoming of the current "notice and consent" privacy framework is that the constraints for data usage stated in policies—be they stated privacy practices, regulation, or laws—cannot easily be compared against the technologies that they govern. To that end, we are developing a framework to automatically compare policy against practice. Broadly, this involves identifying the relevant data usage policies and practices in a given domain, then measuring the real-world exchanges of data restricted by those rules. The results of such a method will then be used to measure and predict the harms to the data's subjects and holders in the event of its unauthorized usage. In doing so, we will be able to infer which specific protected pieces of information, which prohibited operations on that data, and which aggregations thereof pose the highest risks relative to other items covered by the policy. This will shed light on the relationship between the unwanted collection of data, its usage and dissemination, and the resulting negative consequences.

We have built infrastructure into the Android operating system, heavily instrumenting the permission-checking APIs and adding network-monitoring functionality. This allows us to monitor when an application attempts to access protected data (e.g., PII and persistent identifiers) and what it does with it. Unlike static analysis techniques, which only detect the potential for certain behaviors (e.g., data exfiltration), executing applications with our instrumentation yields real-time observations of actual privacy violations. The drawback, however, is that applications need to be executed, and broad code coverage is desired. To date, we have demonstrated that many privacy violations are detectable when application user interfaces are "fuzzed" using random input. However, there are many open research questions about how we can achieve better code coverage to detect a wider range of privacy-related events, while doing so in a scalable manner. Toward that end, we plan to virtualize our privacy testbed and integrate crowd-sourcing. By doing this, we will develop new methods for performing privacy experiments that are repeatable, rigorous, and generalizable. The results of these experiments can then be used to implement data-driven privacy controls, address gaps in regulation, and enforce existing regulations.
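As a concrete illustration of this dynamic-analysis loop, the sketch below drives an app with random UI input using Android's stock monkey tool while collecting device logs. It is a minimal sketch, not our actual pipeline: the package name and the "PrivacyEvent" log tag are hypothetical placeholders for an instrumented OS image that logs accesses to protected resources.

    #!/usr/bin/env python3
    """Minimal sketch: fuzz an app's UI with random input while watching
    device logs for instrumentation events. Assumes adb is on PATH and a
    device/emulator running an instrumented Android image; the
    "PrivacyEvent" tag and the package name are illustrative."""

    import subprocess

    PACKAGE = "com.example.app"  # hypothetical app under test

    # Clear the device log, then inject 5,000 pseudo-random UI events.
    subprocess.run(["adb", "logcat", "-c"], check=True)
    subprocess.run(
        ["adb", "shell", "monkey", "-p", PACKAGE,
         "--throttle", "100", "-s", "42", "5000"],
        check=True,
    )

    # Dump whatever the instrumentation logged during the fuzzing run,
    # e.g., which protected resource was read and where it was sent.
    log = subprocess.run(
        ["adb", "logcat", "-d", "-s", "PrivacyEvent"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(log)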

Serge Egelman

Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is commercializing his research by performing on-demand privacy analysis of mobile apps for compliance purposes. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and privacy research awards from two different European data protection authorities, CNIL and AEPD. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his PhD from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.

Performance Period: 01/01/2018 - 01/01/2018
Institution: International Computer Science Institute
Reasoning about Accidental and Malicious Misuse via Formal Methods
Lead PI:
Munindar Singh
Abstract

This project seeks to aid security analysts in identifying and protecting against both accidental and malicious misuse by users or software. It applies automated reasoning to unified representations of user expectations and software implementations in order to identify misuses that are sensitive to usage and machine context.

Munindar Singh

Dr. Munindar P. Singh is Alumni Distinguished Graduate Professor in the Department of Computer Science at North Carolina State University. He is a co-director of the DoD-sponsored Science of Security Lablet at NCSU, one of six nationwide. Munindar’s research interests include computational aspects of sociotechnical systems, especially as a basis for addressing challenges such as ethics, safety, resilience, trust, and privacy in connection with AI and multiagent systems.

Munindar is a Fellow of AAAI (Association for the Advancement of Artificial Intelligence), AAAS (American Association for the Advancement of Science), ACM (Association for Computing Machinery), and IEEE (Institute of Electrical and Electronics Engineers), and was elected a foreign member of Academia Europaea (honoris causa). He has won the ACM/SIGAI Autonomous Agents Research Award, the IEEE TCSVC Research Innovation Award, and the IFAAMAS Influential Paper Award. He won NC State University’s Outstanding Graduate Faculty Mentor Award as well as the Outstanding Research Achievement Award (twice). He was selected as an Alumni Distinguished Graduate Professor and elected to NCSU’s Research Leadership Academy.

Munindar was the editor-in-chief of the ACM Transactions on Internet Technology from 2012 to 2018 and the editor-in-chief of IEEE Internet Computing from 1999 to 2002. His current editorial service includes IEEE Internet Computing, Journal of Artificial Intelligence Research, Journal of Autonomous Agents and Multiagent Systems, IEEE Transactions on Services Computing, and ACM Transactions on Intelligent Systems and Technology. Munindar served on the founding board of directors of IFAAMAS, the International Foundation for Autonomous Agents and MultiAgent Systems. He previously served on the editorial board of the Journal of Web Semantics. He also served on the founding steering committee for the IEEE Transactions on Mobile Computing. Munindar was a general co-chair for the 2005 International Conference on Autonomous Agents and MultiAgent Systems and the 2016 International Conference on Service-Oriented Computing.

Munindar’s research has been recognized with awards and sponsorship by (alphabetically) Army Research Lab, Army Research Office, Cisco Systems, Consortium for Ocean Leadership, DARPA, Department of Defense, Ericsson, Facebook, IBM, Intel, National Science Foundation, and Xerox.

Twenty-nine students have received Ph.D. degrees and thirty-nine students MS degrees under Munindar’s direction.

Institution: North Carolina State University
Sponsor: National Security Agency
Characterizing user behavior and anticipating its effects on computer security with a Security Behavior Observatory
Abstract

Systems that are technically secure may still be exploited if users behave in unsafe ways. Most studies of user behavior are conducted either in controlled laboratory settings or as large-scale between-subjects measurements in the field. Both methods have shortcomings: lab experiments do not take place in natural environments and therefore may not accurately capture real-world behaviors (i.e., they have low ecological validity), whereas large-scale measurement studies do not allow researchers to probe user intent or gather explanatory data for observed behaviors, and they offer limited control over confounding factors. The Security Behavior Observatory (SBO) addresses this gap through a panel of participants who consent to our observing their daily computing behavior in situ, so that we can understand what constitutes "insecure" behavior. We use the SBO to address a number of research questions, including: 1) What are the risk indicators of a user's propensity to be infected by malware? 2) Why do victims fail to update vulnerable software in a timely manner? 3) How can user behavior be modeled with respect to security and privacy "in the wild"?

Performance Period: 01/01/2018 - 01/01/2018
Institution: Carnegie Mellon University
Sponsor: National Security Agency
Secure Native Binary Execution
Lead PI:
Prasad Kulkarni
Abstract

Typically, securing software is the responsibility of the software developer. The customer or end-user of the software does not control or direct the steps the developer takes to employ best-practice coding styles or mechanisms to ensure software security and robustness. Current systems and tools also do not give the end-user the ability to determine the level of security of the software they use. At the same time, any flaws or security vulnerabilities ultimately affect the end-user of the software. Therefore, our overall project aim is to give the end-user greater control to actively assess and secure the software they use.

Our project goal is to develop a high-performance framework for client-side security assessment and enforcement for binary software. Our research is developing new tools and techniques to: (a) assess the security level of binary executables, and (b) enhance the security level of binary software, when and as desired by the user to protect the binary against various classes of security issues. Our approach combines static and dynamic techniques to achieve efficiency, effectiveness, and accuracy.
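To make the assessment side concrete, here is a minimal sketch (not our actual framework) that statically reports a few standard hardening indicators in an ELF binary, using the pyelftools library:

    #!/usr/bin/env python3
    """Sketch: report common hardening indicators in an ELF binary.
    Requires pyelftools (pip install pyelftools)."""

    import sys
    from elftools.elf.elffile import ELFFile
    from elftools.elf.constants import P_FLAGS

    def assess(path):
        with open(path, "rb") as f:
            elf = ELFFile(f)
            # NX: the GNU_STACK segment must not be marked executable.
            nx = all(not (seg["p_flags"] & P_FLAGS.PF_X)
                     for seg in elf.iter_segments()
                     if seg["p_type"] == "PT_GNU_STACK")
            # PIE: position-independent executables have type ET_DYN.
            pie = elf.header["e_type"] == "ET_DYN"
            # RELRO: a PT_GNU_RELRO segment makes (part of) the GOT read-only.
            relro = any(seg["p_type"] == "PT_GNU_RELRO"
                        for seg in elf.iter_segments())
            # Stack canaries leave a reference to __stack_chk_fail.
            dynsym = elf.get_section_by_name(".dynsym")
            canary = bool(dynsym and
                          dynsym.get_symbol_by_name("__stack_chk_fail"))
            for name, ok in [("NX stack", nx), ("PIE", pie),
                             ("RELRO", relro), ("stack canary", canary)]:
                print(f"{name:12s}: {'yes' if ok else 'no'}")

    if __name__ == "__main__":
        assess(sys.argv[1])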

Prasad Kulkarni
Institution: University of Kansas
Sponsor: National Security Agency
Secure Native Binary Executions
Lead PI:
Prasad Kulkarni
Abstract

Cyber-physical systems (CPS) routinely employ commercial off-the-shelf (COTS) applications and binaries to realize their overall system goals. COTS applications for CPS are typically programmed in unsafe languages, like C/C++ or assembly. Such programs are often plagued with memory and other vulnerabilities that attackers can exploit to compromise the system.

Many issues need to be explored and resolved to provide security in this environment. For instance: (a) different systems may desire distinct, customizable levels of protection for the same software; (b) different systems may have varying tolerances for the performance and/or timing penalties imposed by existing security solutions, so a solution applicable to one system may not be appropriate for another; (c) multiple solutions to the same vulnerability or attack may impose varying levels of security and performance penalty, and these tradeoffs and comparisons with other potential solutions are typically unknown or unavailable to users; and (d) solutions to newly discovered attacks and improvements to existing solutions continue to be devised, yet there is currently no efficient mechanism to retrofit existing application binaries with new security patches with minimal disruption to system operation.

The goal of this research is to design a mechanism to: (a) analyze and quantify the level of security provided and the performance penalty imposed by different solutions to various security risks affecting native binaries, and (b) study and build an architecture that can efficiently and adaptively patch vulnerabilities or retrofit COTS applications with chosen security mechanisms with minimal disruption.

Successful completion of this project will result in:

  • Exploration and understanding of the security and performance tradeoffs imposed by different proposed solutions to important software problems.
  • Discovery of (a set of) mechanisms to reliably retrofit desired compiler-level, instrumentation-based, or other user-defined security solutions into existing binaries.
  • Study and resolution of the issues involved in the design and construction of an efficient production-quality framework to realize the proposed goals.
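As a hedged illustration of goal (a), the sketch below times one and the same benchmark program compiled with a few representative hardening options. The source file is a hypothetical CPU-bound benchmark; a real study would use a full benchmark suite and pair each timing with an analysis of the protection gained.

    #!/usr/bin/env python3
    """Sketch: measure the runtime penalty of compiler-level hardening
    options. Assumes gcc on PATH and a CPU-bound benchmark in bench.c
    (hypothetical); timings are wall-clock and unaveraged."""

    import subprocess
    import time

    SOURCE = "bench.c"  # hypothetical benchmark program
    VARIANTS = {
        "baseline":     [],
        "stack-canary": ["-fstack-protector-strong"],
        "fortify":      ["-D_FORTIFY_SOURCE=2"],
        "full-relro":   ["-Wl,-z,relro,-z,now"],
    }

    for name, flags in VARIANTS.items():
        exe = f"./bench-{name}"
        subprocess.run(["gcc", "-O2", *flags, SOURCE, "-o", exe], check=True)
        start = time.perf_counter()
        subprocess.run([exe], check=True)
        print(f"{name:12s}: {time.perf_counter() - start:.3f}s")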
Prasad Kulkarni
Contextual Integrity for Computer Systems
Lead PI:
Michael Tschantz
Abstract

Despite the success of Contextual Integrity (see the project "Operationalizing Contextual Integrity"), its uptake by computer scientists has been limited because the philosophical framework does not meet them on their own terms. In this project, we will both refine Contextual Integrity (CI) to better fit the problems computer scientists face and express it in the mathematical terms they expect.

According to the theory of CI, informational norms are specific to social contexts (e.g., healthcare, education, the commercial marketplace, political citizenship). Increasing interest in context as a factor in computer science research marks important progress toward a more nuanced interpretation of privacy. It is clear, however, that context takes on many meanings across these research projects. As noted above, Contextual Integrity is committed to context as social domain, or sphere, while some works have used the term to mean situation, physical surroundings, or even technical platform. In this project, we will disentangle the many meanings of context and expand the CI framework using formal models to show how these meanings are logically linked. We are exploring how precisely differentiating between situation and sphere can make CI more actionable. For example, this differentiation will help disentangle cases where a single situation participates in more than one sphere, or where information flows inappropriately from one situation to another. To make the de-conflated notions of context crisp, we are developing formal models for each notion of context, with clear explanations of which applies in which setting. We are attempting to model the central notion of context found in CI using Markov Decision Processes, to capture the fact that most contexts are organized around some goal (e.g., healthcare).
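To fix notation, one simple way to set this up (our illustration; the precise formalization is part of the ongoing research) is to model a context $c$ as an MDP

    $M_c = (S, A, T, R_c)$, with $T : S \times A \to \Delta(S)$,

where the states $S$ describe situations, the actions $A$ include information flows, $T$ captures how situations evolve, and the reward $R_c$ encodes the context's defining goal (e.g., restoring patient health in the healthcare sphere). Informational norms can then be read as constraints on which flow actions are appropriate in which states.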

Privacy skeptics have cited variations across nations, cultures, and even individuals as proof that privacy is not fundamental, but more like a preference. The lesson for designers, on this view, is to assess preferences in order to succeed within the marketplace of their targeted users. The explanation CI offers is that differences in privacy norms are due to differences in societal structures and in the function and values of specific contexts within those structures. But because societies change over time, sometimes radically through revolutionary shifts, a theory of privacy must allow for changes in privacy norms. At present, revolutionary shifts are being forced by computer science and technology. Take, for example, a social platform such as a classroom discussion board, and assume one has implemented Contextual Integrity, preventing flows that conflict with educational privacy norms. Assume, also, that norms change over time due to changes in technical practices and in the educational system itself (e.g., the introduction of MOOCs). How might such systems adapt? We are laying the groundwork for understanding this problem by developing formal models of context and norm drift over time. We will augment the formal models of context mentioned above with notions of change, drawing inspiration from temporal logics.
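As a sketch of the temporal-logic direction (illustrative notation, not a settled design), write $\mathit{appr}_{c,t}(f)$ for "flow $f$ is appropriate under context $c$'s norms as they stand at time $t$". The desired system property then resembles the LTL formula

    $\mathbf{G}\,\big(\mathit{occurs}(f) \rightarrow \mathit{appr}_{c,t}(f)\big)$,

and the research challenge is precisely that the interpretation of $\mathit{appr}_{c,t}$ drifts as norms change.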

CI and differential privacy (DP) both claim to define privacy as it applies to data flow. The former, as we have seen, offers a systematic account of what people mean when they protest that privacy is under threat, or is violated by systems that collect, accumulate, and analyze data; the latter offers, as a definition of privacy, a mathematical property of operations that process data, one that is robust, meaningful, and mathematically rigorous. For this project, another driving question is the relationship between CI and DP. For example, DP may be understood as one kind of transmission principle, but it does not capture other socially meaningful transmission principles, such as reciprocity, confidentiality, and notice. Thus, we are also cataloging the wide range of transmission principles relevant to privacy and showing where DP is a useful mathematical expression. This will allow us to derive mathematically rigorous specifications for other transmission principles.
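For reference, the standard definition: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all datasets $D$ and $D'$ differing in one individual's record and for all sets $S$ of outputs,

    $\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S]$.

Read as a transmission principle, this bounds how much any single person's data can influence what flows onward, which is why it captures some socially meaningful principles and not others, such as reciprocity or notice.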

Michael Tschantz
Performance Period: 01/01/2018 - 01/01/2018
Institution: International Computer Science Institute, Cornell Tech
Sponsor: National Security Agency
Securing Safety-Critical Machine Learning Algorithms
Lead PI:
Lujo Bauer
Abstract

Machine-learning algorithms, especially classifiers, are becoming prevalent in safety- and security-critical applications. The susceptibility of some types of classifiers to evasion by adversarial input data has been explored in domains such as spam filtering, but the rapid growth in the adoption of machine learning across application domains amplifies the extent and severity of this vulnerability landscape. We propose to (1) develop predictive metrics that characterize the degree to which a neural-network-based image classifier used in domains such as face recognition (say, for surveillance and authentication) can be evaded through attacks that are both practically realizable and inconspicuous, and (2) develop methods that make these classifiers, and the applications that incorporate them, robust to such interference. We will examine how to manipulate images to fool classifiers in various ways, and how to do so in a way that escapes the suspicion of even human onlookers. Armed with this understanding of the weaknesses of popular classifiers and their modes of use, we will develop explanations of model behavior to help identify the presence of a likely attack, and generalize these explanations to harden models against future attacks.
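To make the threat concrete, here is a minimal sketch of one well-known evasion technique, the fast gradient sign method (FGSM) of Goodfellow et al.; it illustrates gradient-based evasion in general and is not the physically realizable, inconspicuous attack studied in this project. It assumes PyTorch and any differentiable image classifier:

    #!/usr/bin/env python3
    """Sketch: FGSM evasion against a differentiable image classifier.
    Requires PyTorch; the model and inputs are left to the caller."""

    import torch
    import torch.nn.functional as F

    def fgsm(model, image, label, epsilon=0.03):
        """Return an adversarial version of `image` (a 1xCxHxW tensor in [0, 1])."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # Step in the direction that increases the loss; keep pixels valid.
        adv = image + epsilon * image.grad.sign()
        return adv.clamp(0.0, 1.0).detach()

    # Hypothetical usage with any pretrained classifier:
    # model = torchvision.models.resnet18(weights="IMAGENET1K_V1").eval()
    # adv = fgsm(model, img, torch.tensor([true_label]))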

Lujo Bauer

Lujo Bauer is an Associate Professor in the Electrical and Computer Engineering Department and in the Institute for Software Research at Carnegie Mellon University. He received his B.S. in Computer Science from Yale University in 1997 and his Ph.D., also in Computer Science, from Princeton University in 2003.

Dr. Bauer's research interests span many areas of computer security and privacy, and include building usable access-control systems with sound theoretical underpinnings, developing languages and systems for run-time enforcement of security policies on programs, and generally narrowing the gap between a formal model and a practical, usable system. His recent work focuses on developing tools and guidance to help users stay safer online and in examining how advances in machine learning can lead to a more secure future.

Dr. Bauer served as the program chair for the flagship computer security conferences of the IEEE (S&P 2015) and the Internet Society (NDSS 2014) and is an associate editor of ACM Transactions on Information and System Security.

Institution: Carnegie Mellon University
Sponsor: National Security Agency
Multi-model Test Bed for the Simulation-based Evaluation of Resilience
Lead PI:
Peter Volgyesi
Abstract

We have developed the SURE platform, a modeling and simulation integration testbed for evaluating the resilience of complex CPS [1]. Our previous efforts resulted in a web-based collaborative design environment for attack-defense scenarios, supported by a cloud-deployed simulation engine for executing and evaluating those scenarios. The goal of this project is to extend these design and simulation capabilities to better understand the security and resilience aspects of CPS. The improvements include first-class support for the design of experiments (exploring different parameters and/or strategies), alternative CPS domains (connected vehicles, railway systems, the smart grid), incorporating models of human behavior, and executing multistage games.

[1] Xenofon Koutsoukos, Gabor Karsai, Aron Laszka, Himanshu Neema, Bradley Potteiger, Peter Volgyesi, Yevgeniy Vorobeychik, and Janos Sztipanovits. "SURE: A Modeling and Simulation Integration Platform for Evaluation of SecUre and REsilient Cyber-Physical Systems", Proceedings of the IEEE, 106(1), 93-112, January 2018.

Peter Volgyesi

Peter Volgyesi is a Research Scientist at the Institute for Software Integrated Systems at Vanderbilt University. Over the past decade, Mr. Volgyesi has worked on several novel and high-impact projects sponsored by DARPA, NSF, ONR, ARL, and industrial companies (Lockheed Martin, BAE Systems, the Boeing Company, Raytheon, Microsoft). He is one of the architects of the Generic Modeling Environment, a widely used metaprogrammable visual modeling tool, and of WebGME, its modern web-based variant. Mr. Volgyesi had a leading role in developing the real-time signal processing algorithms in PinPtr, a low-cost, low-power countersniper system. He also participated in the development of the Radio Interferometric Positioning System (RIPS), a patented technology for accurate low-power node localization. As PI on two NSF-funded projects, Mr. Volgyesi and his team developed a low-power software-defined radio platform (MarmotE) and a component-based development toolchain targeting multicore SoC architectures for wireless cyber-physical systems. His team won the Preliminary Tournament of the DARPA Spectrum Challenge in September 2013.

Institution: Vanderbilt University
Sponsor: National Security Agency
Foundations for Cyber-Physical System Resilience
Lead PI:
Xenofon Koutsoukos
Abstract

The goals of this project are to develop the principles and methods for designing and analyzing resilient CPS architectures that deliver required service in the face of compromised components. A fundamental challenge is to understand the basic tenets of CPS resilience and how they can be used in developing resilient architectures. The proposed approach integrates redundancy, diversity, and hardening methods for designing both passive resilience methods, which are inherently robust against attacks, and active resilience methods, which allow the system to respond to attacks.
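A back-of-the-envelope illustration of how these tenets interact (our example, not the project's analysis): if a given attack compromises any single implementation with probability $p$, and diversity makes the failures of $k$ redundant replicas approximately independent, then the probability that the attack compromises all $k$ at once is $p^k$; for example, $p = 0.1$ and $k = 3$ gives $0.001$. Hardening lowers $p$ itself, while active resilience methods aim to detect and restore compromised replicas before too many accumulate.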

Xenofon Koutsoukos

Xenofon Koutsoukos is a Professor of Computer Science, Computer Engineering, and Electrical Engineering in the Department of Electrical Engineering and Computer Science at Vanderbilt University. He is also a Senior Research Scientist in the Institute for Software Integrated Systems (ISIS).

Before joining Vanderbilt, Dr. Koutsoukos was a Member of Research Staff in the Xerox Palo Alto Research Center (PARC) (2000-2002), working in the Embedded Collaborative Computing Area.
He received his Diploma in Electrical and Computer Engineering from the National Technical University of Athens (NTUA), Greece, in 1993. Between 1993 and 1995, he worked at the National Center for Space Applications, Hellenic Ministry of National Defense, Athens, Greece, as a computer engineer in the areas of image processing and remote sensing. He received the Master of Science in Electrical Engineering in January 1998 and the Master of Science in Applied Mathematics in May 1998, both from the University of Notre Dame. He received his PhD in Electrical Engineering working under Professor Panos J. Antsaklis with the group for Interdisciplinary Studies of Intelligent Systems.

His research work is in the area of cyber-physical systems, with emphasis on formal methods, distributed algorithms, diagnosis and fault tolerance, and adaptive resource management. He has published numerous journal and conference papers, and he is a co-inventor of four US patents. He is the recipient of the NSF CAREER Award in 2004, the Excellence in Teaching Award in 2009 from the Vanderbilt University School of Engineering, and the 2011 Aeronautics Research Mission Directorate (ARMD) Associate Administrator (AA) Award in Technology and Innovation from NASA.

Institution: Vanderbilt University
Sponsor: National Security Agency