The ICSI Science of Privacy Lablet contributes broadly to the development of privacy science through a range of multi-disciplinary efforts. The overarching goal of this lablet is to facilitate the conduct and dissemination of fundamental scientific research on privacy, in order to better understand the implications of data use. When we describe "the implications of data use," we mean the systematic exploration of several deeply connected issues, which we frame as six privacy challenges:
- Defining privacy across varying contexts and conceptions, so that researchers and practitioners who approach privacy from varying disciplines can describe their work using a common lexicon;
- Providing transparency into data collection and usage, so that researchers and practitioners can better convey issues stemming from data usage and collection to stakeholders (e.g., data subjects, policymakers, and the general public);
- Understanding privacy perceptions that surround the usage of personal data across varying contexts, so that decision-support systems and frameworks can account for human behavior (e.g., concerns, preferences, expectations, and potential reactions);
- Assessing privacy risks using formal reasoning to account for data usage across varying contexts, so that researchers and practitioners can use mathematical models to predict privacy risks introduced by the composition and aggregation of data collected from heterogeneous sources;
- Designing and validating new methods for Big Data accountability that provide hard guarantees and are context-aware; and
- Exploring how current advances in privacy engineering can be applied to solve the aforementioned privacy challenges.
The lablet represents a multi-disciplinary and multi-institutional collaboration to address these six challenges, while framing privacy as a scientific pursuit. We define the science of privacy as research grounded in the three pillars of science: conceptual modeling, formal reasoning about precise models, and empiricism. Using methods from all three pillars, we intend to rigorously perform foundational research that yields generalizable knowledge about how privacy can be better protected, managed, and reasoned about. Rather than simply engineering new systems, our aim is to formulate and empirically validate new frameworks and methods that can be readily used by others and generalize to a wide range of privacy use cases. In short, this work will enable others to build systems grounded in scientific principles for evaluating privacy risks, rather than incremental improvements designed using ad hoc methods.
Projects
Operationalizing Contextual Integrity
Serge Egelman (ICSI), Helen Nissenbaum (Cornell Tech)
Contextual Integrity for Computer Systems
Michael Tschantz (ICSI), Helen Nissenbaum (Cornell Tech)
Designing for Privacy
Deirdre Mulligan (U.C. Berkeley)
Governance for Big Data
Deirdre Mulligan (U.C. Berkeley)
Scalable Privacy Analysis
Serge Egelman and Narseo Vallina-Rodriguez (ICSI)