Cyber-physical system (CPS) security lapses may lead to catastrophic failure. We are interested in the scientific basis for discovering unique CPS security vulnerabilities to stepping-stone attacks, which penetrate through a network of intermediate hosts to the ultimate targets, the compromise of which leads to instability, unsafe behaviors, and ultimately diminished availability. Our project advances this scientific basis through the design and evaluation of CPS, driven by uncertainty-aware formalization of system models, adversary classes, and security metrics. We propose to define such metrics and to develop and study analysis algorithms that provide formal guarantees on them with respect to different adversary classes and different defense mechanisms.
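To make the stepping-stone notion concrete, the sketch below (a hypothetical illustration; the host names, pivot probabilities, and the path-based metric are our own assumptions rather than the project's actual models) scores a target by the most likely chain of compromised intermediate hosts that reaches it:

```python
# Hypothetical sketch: treat the network as a directed graph whose edge weights
# are assumed per-hop compromise probabilities, and score a target by the most
# likely multi-hop (stepping-stone) attack path that reaches it.
import heapq
import math

def most_likely_attack_path(edges, entry, target):
    """edges: dict (src, dst) -> probability the adversary pivots src -> dst.
    Returns (probability of the best path, list of hosts on that path)."""
    # "Maximize the product of probabilities" becomes a shortest-path problem
    # by summing -log(p) over edges (standard reduction).
    graph = {}
    for (src, dst), p in edges.items():
        graph.setdefault(src, []).append((dst, -math.log(p)))
    dist, prev = {entry: 0.0}, {}
    frontier = [(0.0, entry)]
    while frontier:
        d, node = heapq.heappop(frontier)
        if node == target:
            break
        if d > dist.get(node, math.inf):
            continue
        for nxt, w in graph.get(node, []):
            if d + w < dist.get(nxt, math.inf):
                dist[nxt], prev[nxt] = d + w, node
                heapq.heappush(frontier, (d + w, nxt))
    if target not in dist:
        return 0.0, []
    path, node = [target], target
    while node != entry:
        node = prev[node]
        path.append(node)
    return math.exp(-dist[target]), list(reversed(path))

# Toy example: the attacker enters at a workstation and pivots toward a PLC.
edges = {("workstation", "historian"): 0.4,
         ("historian", "plc"): 0.5,
         ("workstation", "plc"): 0.05}
print(most_likely_attack_path(edges, "workstation", "plc"))
# -> (0.2, ['workstation', 'historian', 'plc'])
```

Metrics of this kind are one natural starting point for the formal guarantees described above, since they can be recomputed against different adversary classes by varying the assumed pivot probabilities.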
Prof. David M. Nicol is the Herman M. Dieckamp Endowed Chair in Engineering at the University of Illinois at Urbana-Champaign, and a member of the Department of Electrical and Computer Engineering. He also serves as the Director of the Information Trust Institute (iti.illinois.edu) and the Director of the Advanced Digital Sciences Center (Singapore). He is PI for two national centers for infrastructure resilience: the DHS-funded Critical Infrastructure Resilience Institute (ciri.illinois.edu) and the DOE-funded Cyber Resilient Energy Delivery Consortium (cred-c.org); he is also PI for the Boeing Trusted Software Center, and co-PI for the NSA-funded Science of Security lablet.
Prior to joining UIUC in 2003, he served on the faculties of the computer science departments at Dartmouth College (1996-2003) and, before that, the College of William and Mary (1987-1996). He has won recognition for excellence in teaching at all three universities. His research interests include trust analysis of networks and software, analytic modeling, and parallelized discrete-event simulation, research which has led to the founding of the startup company Network Perception and to his election as Fellow of the IEEE and Fellow of the ACM. He is the inaugural recipient of the ACM SIGSIM Outstanding Contributions Award, and co-author of the widely used undergraduate textbook “Discrete-Event Systems Simulation”.
Nicol holds a B.A. (1979) in mathematics from Carleton College, and M.S. (1983) and Ph.D. (1985) degrees in computer science from the University of Virginia.
We believe that diversity and redundancy can help prevent an attacker from hiding all of his or her traces. Therefore, we will strategically deploy diverse security monitors and build a set of techniques to combine information originating at the monitors. We have shown that monitor deployment can be formulated as a constrained optimization problem in which the objective function is the utility of the monitors in detecting intrusions. In this project, we will develop methods to select and place diverse monitors at different architectural levels in the system and to evaluate the trustworthiness of the data generated by the monitors. We will build event aggregation and correlation algorithms that draw inferences for intrusion detection. Those algorithms will combine the events and alerts generated by the deployed monitors with important system-related information, including information on the system architecture, users, and vulnerabilities. Since rule-based detection systems fail to detect novel attacks, we will adapt and extend existing anomaly detection methods, building on our previous SoS-funded work that resulted in the development of special-purpose intrusion detection methods.
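As a rough illustration of the constrained-optimization view of monitor deployment (the monitor names, costs, coverage sets, and the greedy heuristic below are assumptions made for the sketch, not our actual formulation), one can greedily select monitors that maximize newly covered intrusion indicators per unit cost under a deployment budget:

```python
# Illustrative sketch only: greedy budgeted set cover as a stand-in for the
# constrained monitor-placement optimization described above.
def place_monitors(candidates, budget):
    """candidates: dict monitor -> (cost, set of intrusion indicators observed).
    Greedily pick monitors with the best newly-covered-indicators-per-cost
    ratio until the budget is exhausted."""
    chosen, covered, spent = [], set(), 0.0
    remaining = dict(candidates)
    while remaining:
        def gain(item):
            _, (cost, indicators) = item
            return len(indicators - covered) / cost
        name, (cost, indicators) = max(remaining.items(), key=gain)
        if spent + cost > budget or not (indicators - covered):
            break
        chosen.append(name)
        covered |= indicators
        spent += cost
        del remaining[name]
    return chosen, covered

# Hypothetical candidate monitors at different architectural levels.
candidates = {
    "netflow_tap":  (3.0, {"lateral_movement", "exfiltration"}),
    "host_auditd":  (2.0, {"priv_escalation", "persistence"}),
    "plc_watchdog": (4.0, {"setpoint_tamper"}),
}
print(place_monitors(candidates, budget=6.0))
# -> (['host_auditd', 'netflow_tap'], {...four covered indicators...})
```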
We propose to develop the analysis methodology needed to support scientific reasoning about the resilience and security of networks, with a particular focus on network control and information/data flow. The core of this vision is an automated synthesis framework (ASF), which will automatically derive network state and repairs from a set of specified correctness requirements and security policies. ASF consists of a set of techniques for performing and integrating security and resilience analyses applied at different layers (i.e., data forwarding, network control, programming language, and application software) in a real-time and automated fashion. The ASF approach is exciting because developing it adds to the theoretical underpinnings of SoS, while using it supports the practice of SoS.
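A minimal sketch of the kind of check-and-repair step ASF automates at the data-forwarding layer is given below; the rule format, policy encoding, and naive repair strategy are assumptions made for illustration, and a real synthesis framework would compute a minimal, policy-preserving change rather than simply dropping rules:

```python
# Hypothetical data-forwarding-layer check: verify installed forwarding rules
# against declared reachability policies, and propose a repair on violation.
def reachable(rules, src, dst):
    """rules: set of (from_node, to_node) forwarding hops. Simple reachability."""
    frontier, seen = [src], {src}
    while frontier:
        node = frontier.pop()
        for a, b in rules:
            if a == node and b not in seen:
                seen.add(b)
                frontier.append(b)
    return dst in seen

def check_and_repair(rules, policies):
    """policies: list of (src, dst, allowed). Returns (violations, repaired rules)."""
    repaired, violations = set(rules), []
    for src, dst, allowed in policies:
        if not allowed and reachable(repaired, src, dst):
            violations.append((src, dst))
            # Naive repair: drop all rules forwarding into the forbidden target.
            repaired = {(a, b) for a, b in repaired if b != dst}
    return violations, repaired

rules = {("guest_vlan", "core"), ("core", "scada"), ("ops_vlan", "core")}
policies = [("guest_vlan", "scada", False), ("ops_vlan", "core", True)]
print(check_and_repair(rules, policies))
# -> ([('guest_vlan', 'scada')], rules with ('core', 'scada') removed)
```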
This project seeks to aid developers in designing and implementing protocols for establishing mutual trust between users, Internet of Things (IoT) devices, and their intended environment through identifying principles of secure bootstrapping, including tradeoffs among security objectives, device capabilities, and usability.
The goal of this project is to aid security engineers in predicting the difficulty of system compromises through the development and evaluation of attack surface measurement techniques based upon attacker-centric vulnerability discovery processes.
This research aims to aid administrators of virtualized computing infrastructures in making services more resilient to security attacks. It applies machine learning to reduce both the security and the functionality risks of software patching by continually monitoring patched and unpatched software to discover vulnerabilities and by triggering the appropriate security updates.
According to Nissenbaum’s theory of contextual integrity (CI), protecting privacy means ensuring that personal information flows appropriately; it does not mean that no information flows (e.g., confidentiality), or that it flows only if the information subject allows it (e.g., control). Flow is appropriate if it conforms to legitimate contextual informational norms. Contextual informational norms prescribe information flows in terms of five parameters: actors (sender, subject, recipient), information types, and transmission principles. Actors and information types range over respective contextual ontologies. Transmission principles (a term introduced by the theory) range over the conditions or constraints under which information flows, for example, whether confidentially, mandated by law, with notice, with consent, in accordance with the subject's preference, and so on. The theory holds that our privacy expectations are a product of informational norms, meaning that people will judge particular information flows as respecting or violating privacy according to whether or not, to a first approximation, they conform to contextual informational norms. If they do, we say contextual integrity has been preserved.
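As a toy rendering of the theory's structure (our own illustration, not a formalization endorsed by the theory), an information flow can be represented by the five norm parameters and checked for conformance against the legitimate norms of the governing context:

```python
# Toy model: a CI flow as a five-parameter tuple, checked against the
# contextual norms that govern it. All names below are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    sender: str
    subject: str
    recipient: str
    info_type: str
    transmission_principle: str  # e.g. "with_consent", "mandated_by_law"

# Example norm for a healthcare context: physicians may share a patient's
# test results with a specialist when the patient has consented.
norms = [Flow("physician", "patient", "specialist", "test_results", "with_consent")]

def conforms(flow, norms):
    """In this toy model, a flow preserves contextual integrity iff all five
    parameters match some legitimate norm of the governing context."""
    return any(flow == norm for norm in norms)

ok = Flow("physician", "patient", "specialist", "test_results", "with_consent")
bad = Flow("physician", "patient", "advertiser", "test_results", "with_consent")
print(conforms(ok, norms), conforms(bad, norms))  # True False
```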
The theory has been recognized in policy arenas, has been formalized, has guided empirical social science research, and has shaped system development. Yet, despite resolving many longstanding privacy puzzles and showing promising potential in practical realms, its direct application to pressing needs of design and policy has proven challenging. One challenge is that the theory requires knowledge of data flows, and in practice, systems may not be able to provide this, particularly once data leaves a device. The challenge of bridging theory and practice (in this case, grounding scientific research and design practice in the theory of CI) is tractable; with sufficient effort devoted to operationalizing the relevant concepts, doing so could enhance our methodological toolkit for studying individuals’ understandings and valuations of privacy in relation to data-intensive technologies, and could yield principles to guide design.
In our view, capturing people’s complex attitudes toward privacy, including expectations and preferences in situ, will require methodological innovation and new techniques that apply the theory of contextual integrity. These methodologies and techniques must accommodate the five independent parameters of contextual norms, scale to the diverse contexts in which privacy decision-making takes place, be sensitive to the variety of preferences and expectations within respective contexts, and distinguish preferences from expectations. What we learn about privacy attitudes by following such methods and techniques should serve in the discovery and identification of contextual informational norms, and should yield results that are sufficiently rigorous to serve as a foundation for the design of effective privacy interfaces. The first of these outcomes informs public policy and law with information about what people generally expect and what is generally viewed as objectionable; the second informs designers not only about mechanisms that help people make informed decisions, but also about what substantive constraints on flow should or could be implemented within a design. Instead of ubiquitous “notice and choice” regimes, the project will aim to identify situations where clear norms (for example, those identified through careful study) can be embedded in technology (systems, applications, platforms) as constraints on flow, and situations where no such norms emerge and variations may instead be selected according to user preferences. Thus, this project will yield a set of practical, usable, and scalable technologies and tools that can be applied to both existing and future technologies, thereby providing a scientific basis for future privacy research.
Serge Egelman is the Research Director of the Usable Security and Privacy group at the International Computer Science Institute (ICSI), which is an independent research institute affiliated with the University of California, Berkeley. He is also Chief Scientist and co-founder of AppCensus, Inc., which is commercializing his research by performing on-demand privacy analysis of mobile apps for compliance purposes. He conducts research to help people make more informed online privacy and security decisions, and is generally interested in consumer protection. This has included improvements to web browser security warnings, authentication on social networking websites, and most recently, privacy on mobile devices. Seven of his research publications have received awards at the ACM CHI conference, which is the top venue for human-computer interaction research; his research on privacy on mobile platforms has received the Caspar Bowden Award for Outstanding Research in Privacy Enhancing Technologies, the USENIX Security Distinguished Paper Award, and privacy research awards from two different European data protection authorities, CNIL and AEPD. His research has been cited in numerous lawsuits and regulatory actions, as well as featured in the New York Times, Washington Post, Wall Street Journal, Wired, CNET, NBC, and CBS. He received his Ph.D. from Carnegie Mellon University and has previously performed research at Xerox PARC, Microsoft, and NIST.
Privacy governance for Big Data is challenging—data may be rich enough to allow the inference of private information that has been removed, redacted, or minimized. We must protect against both malicious and accidental inference, both by data analysts and by automated systems. To do this, we are extending existing methods for controlling the inference risks of common analysis tools (drawn from literature on the related problem of nondiscriminatory data analysis). We are coupling these methods with auditing tools such as verifiably integral audit logs. Robust audit logs hold analysts accountable for their investigations, reducing exposure to malicious sensitive inference. Further, tools for controlling information flow and leakage in analytical models eliminate many types of accidental sensitive inference. Together, the analytical guarantees of inference-sensitive data mining technologies and the record-keeping functions of logs can create a methodology for truly accountable systems for holding and processing private data.
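One standard way to obtain a verifiably integral audit log (shown here as an illustrative sketch, not the project's implementation) is to chain each entry to its predecessor with a cryptographic hash, so that any after-the-fact alteration of an analyst's recorded query breaks the chain:

```python
# Sketch of a hash-chained (tamper-evident) audit log for analyst queries.
# Field names and entries are hypothetical.
import hashlib
import json

def append_entry(log, analyst, query, result_summary):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"analyst": analyst, "query": query,
            "result_summary": result_summary, "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    """Recompute the chain; any edited or reordered entry is detected."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "analyst_7", "SELECT zip, diagnosis FROM visits", "412 rows")
append_entry(log, "analyst_7", "SELECT name FROM visits WHERE zip='61801'", "3 rows")
print(verify(log))                      # True
log[0]["result_summary"] = "0 rows"
print(verify(log))                      # False: tampering breaks the chain
```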
This project will deliver a data governance methodology enabling more expressive policies that rely on accountability to enable exploration of the privacy consequences of Big Data analysis. This methodology will combine known techniques from computer science, including verification using formal methods, with principles from the study of privacy-by-design and with accountability mechanisms. Systems subject to this methodology will generate evidence both before they operate, in the form of guarantees about when and how data can be analyzed and disclosed (e.g., "differential privacy in this database implies that no analysis can determine the presence or absence of a single row"), and as they operate, in the form of audit log entries that describe how the system's capabilities were actually used (e.g., "the marketing application submitted a query Q which returned a set of rows R") and that record this information in a way that demonstrates that the audit materials could not have been altered or forged. While the techniques here are not novel, they have not previously been synthesized into an actionable methodology for practical data governance, especially as it applies to Big Data. Information gleaned by examining this evidence can be used to inform the development of traditional-style data access policies that support full personal accountability for data access.
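The pre-operation guarantee quoted above can be illustrated with a small, hypothetical example: releasing a count query through the Laplace mechanism satisfies epsilon-differential privacy, so the presence or absence of any single row cannot be determined from the answer (the records, query, and epsilon value below are assumptions):

```python
# Laplace mechanism for a counting query: one concrete instance of the kind
# of "before operation" guarantee described above.
import random

def dp_count(records, predicate, epsilon=0.1):
    """Return a noisy count. A counting query has sensitivity 1, so Laplace
    noise with scale 1/epsilon gives epsilon-differential privacy."""
    true_count = sum(1 for r in records if predicate(r))
    # Difference of two Exp(epsilon) variates is Laplace with scale 1/epsilon.
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

records = [{"age": 34, "smoker": True}, {"age": 51, "smoker": False},
           {"age": 29, "smoker": True}]
print(dp_count(records, lambda r: r["smoker"], epsilon=0.1))
```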
To demonstrate that the resulting methodology is actionable, this project will also produce a set of generalizable design patterns that apply the methodology to real data analysis scenarios drawn from interviews with practitioners in industry and government. These patterns will then inform practical use of the methodology. Together, the new methodology and design patterns will provide the start of a new generation of data governance approaches that function properly in the era of Big Data, machine learning, and automated decision-making. This project will relate the emerging science of privacy to data analysis practice, governance, and compliance, and will show how to make the newest technologies relevant, actionable, and useful.
Methods, approaches, and tools that identify the correct conceptualization of privacy early in the design and engineering process are important. For example, early whole-body imaging technologies for airport security were analyzed by the Department of Homeland Security through a Privacy Impact Assessment that focused on the collection of personally identifiable information (PII); the assessment found that the images of persons’ individual bodies were not detailed enough to constitute PII and would not pose a privacy problem. Nevertheless, many citizens, policymakers, and organizations subsequently voiced strong privacy objections: the conception of privacy as being about the collection of PII did not cover the types of privacy concerns raised by stakeholders, leading to expensive redesigns to address the correct concepts of privacy (such as having the system display an outline of a generic person rather than an image of the specific person being scanned). In this project, we will investigate the tools, methods, and approaches currently used by engineers and designers to identify and address privacy risks and harms.
To help address gaps and shortcomings that we find in current tools and approaches, we are adapting design research techniques, traditionally used to help designers and engineers explore and define problem spaces in grounded, inductive, and generative ways, to specifically address privacy. This builds on a tradition of research termed “values in design,” which seeks to identify values and create systems that better recognize and address them. Design methods, including card activities, design scenarios, design workbooks, and design probes, can be used by engineers or designers of systems, and/or with other stakeholders of systems (such as end-users). These methods help foster discussion of values, chart the problem space of values, and are grounded in specific contexts or systems. They can be deployed during the early ideation stages of a design process, during or after the design process as an analytical tool, or as part of training and education. We suggest that design approaches can help explore and define the problem space of privacy and identify and define privacy risks (including, but also going beyond, unauthorized use of data), leveraging the contextual integrity framework.
As part of this project, we are creating, testing, validating, and deploying a set of privacy-focused tools and approaches that can be used to help train engineers and designers to identify, define and analyze the privacy risks that need to be considered when designing a system, as part of privacy engineering.