According to Nissenbaum’s theory of contextual integrity (CI), protecting privacy means ensuring that personal information flows appropriately; it does not mean that no information flows (e.g., confidentiality), or that it flows only if the information subject allows it (e.g., control). Flow is appropriate if it conforms to legitimate, contextual informational norms. Contextual informational norms prescribe information flows in terms of five parameters: actors (sender, subject, recipient), information types, and transmission principles. Actors and information types range over respective contextual ontologies. Transmission principles (a term introduced by the theory) range over the conditions or constraints under which information flows, for example, whether confidentially, mandated by law, with notice, with consent, in accordance with the subject's preferences, and so on. The theory holds that our privacy expectations are a product of informational norms, meaning that people will judge particular information flows as respecting or violating privacy according to whether, to a first approximation, they conform to contextual informational norms. If so, we say contextual integrity has been preserved.
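The five-parameter structure of a norm can be made concrete with a small sketch. The following is an illustrative model only, not the theory's formalization: both flows and norms are described by the same five parameters, and a flow preserves contextual integrity, to a first approximation, if some legitimate contextual norm matches it. The healthcare example and the `"*"` wildcard convention are hypothetical choices made for this sketch.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """A concrete information flow, described by CI's five parameters."""
    sender: str
    subject: str
    recipient: str
    info_type: str
    transmission_principle: str

@dataclass(frozen=True)
class Norm:
    """A contextual informational norm over the same five parameters.
    In this sketch, "*" marks a parameter the norm leaves unconstrained."""
    sender: str
    subject: str
    recipient: str
    info_type: str
    transmission_principle: str

    def matches(self, flow: Flow) -> bool:
        return all(
            pattern == "*" or pattern == value
            for pattern, value in [
                (self.sender, flow.sender),
                (self.subject, flow.subject),
                (self.recipient, flow.recipient),
                (self.info_type, flow.info_type),
                (self.transmission_principle, flow.transmission_principle),
            ]
        )

def preserves_contextual_integrity(flow: Flow, norms: list[Norm]) -> bool:
    """First-approximation check: the flow conforms to some contextual norm."""
    return any(norm.matches(flow) for norm in norms)

# Hypothetical healthcare-context norm: a physician may share a patient's
# medical record with a specialist, with the patient's consent.
healthcare_norms = [
    Norm("physician", "patient", "specialist", "medical_record", "with_consent"),
]

ok = Flow("physician", "patient", "specialist", "medical_record", "with_consent")
bad = Flow("physician", "patient", "advertiser", "medical_record", "sold")

preserves_contextual_integrity(ok, healthcare_norms)   # True
preserves_contextual_integrity(bad, healthcare_norms)  # False
```

Note that changing any single parameter, here the recipient and the transmission principle, can flip a flow from appropriate to violating, which is why all five parameters must be specified.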
The theory has been recognized in policy arenas, has been formalized, has guided empirical social science research, and has shaped system development. Yet, although it resolves many longstanding privacy puzzles and holds promise for practical realms, applying it directly to pressing needs of design and policy has proven challenging. One challenge is that the theory requires knowledge of data flows, which systems in practice may be unable to provide, particularly once data leaves a device. Bridging theory and practice, in this case grounding scientific research and design practice in the theory of CI, is nevertheless tractable: with sufficient effort devoted to operationalizing the relevant concepts, it could enhance our methodological toolkit for studying individuals’ understandings and valuations of privacy in relation to data-intensive technologies, and could yield principles to guide design.
In our view, capturing people’s complex attitudes toward privacy, including expectations and preferences in situ, will require methodological innovation and new techniques that apply the theory of contextual integrity. These methodologies and techniques must accommodate the five independent parameters of contextual norms, scale to the diverse contexts in which privacy decision-making takes place, and be sensitive not only to the variety of preferences and expectations within respective contexts but also capable of distinguishing preferences from expectations. What we learn about privacy attitudes by following such methods and techniques should serve the discovery and identification of contextual informational norms, and should yield results rigorous enough to serve as a foundation for the design of effective privacy interfaces. The first outcome informs public policy and law about what people generally expect and what is generally viewed as objectionable; the second informs designers not only about mechanisms that help people make informed decisions, but also about what substantive constraints on flow should or could be implemented in design. Instead of relying on ubiquitous “notice and choice” regimes, the project will aim to identify situations where clear norms, for example those identified through careful study, can be embedded in technology (systems, applications, platforms) as constraints on flow; where no such norms emerge, variations may be selected according to user preferences. Thus, the project will yield a set of practical, usable, and scalable technologies and tools applicable to both existing and future technologies, thereby providing a scientific basis for future privacy research.
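The division of labor just described, clear norms enforced as constraints, with user preferences governing only where no norm emerges, can be sketched as a small decision procedure. This is a hypothetical illustration of the design logic, not any system's actual implementation; the flow types, the norm table, and the default-deny fallback are all assumptions of the sketch.

```python
def decide_flow(flow_type: str,
                norms: dict[str, bool],
                user_prefs: dict[str, bool]) -> bool:
    """Return True if the flow should be permitted.

    norms:      flow types for which careful study has identified a clear
                contextual norm; these are enforced as hard constraints.
    user_prefs: the individual's stated preferences, consulted only when
                no established norm covers the flow type.
    """
    if flow_type in norms:
        # A clear norm exists: embed it as a constraint, overriding preference.
        return norms[flow_type]
    # No norm has emerged: select according to user preference
    # (default-deny in this sketch when no preference is recorded).
    return user_prefs.get(flow_type, False)

# Hypothetical norm table and preference set.
norms = {
    "share_location_with_emergency_services": True,
    "sell_health_data_to_advertisers": False,
}
prefs = {"share_listening_history_with_friends": True}

decide_flow("sell_health_data_to_advertisers", norms, prefs)      # False
decide_flow("share_listening_history_with_friends", norms, prefs) # True
```

The point of the structure is that a user preference cannot override an established norm: even if a user were to opt in, a flow the norm table prohibits stays prohibited, whereas "notice and choice" regimes leave every flow to individual choice.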