Spotlight on Lablet Research #6 - Contextual Integrity for Computer Systems 

Project: Contextual Integrity for Computer Systems 

Lablet: International Computer Science Institute
Participating Sub-Lablet: Cornell Tech

The overall goal of this research is to convert the philosophical theory of Contextual Integrity (CI) into terms computer scientists can use. Philosophers and computer scientists have different understandings of context: philosophers focus on abstract spheres of life, while computer scientists focus on the concrete. The goal is to develop models of context and contextual integrity that meet computer scientists on their own turf.

Relevant research questions include accounting for privacy in the design of multi-use computer systems that cut across contexts; modeling the adaptation of contexts to changes in technology; and determining how contextual integrity relates to differential privacy. The current organizing hypothesis is that contexts are defined by a purpose, that the privacy norms of a context promote that purpose, and that purpose restrictions are ubiquitous. There are several possible models, including game models, Markov decision process (MDP) models, partially observable Markov decision process (POMDP) models, and multi-agent influence diagrams. Among the challenges: contexts do not exist in a vacuum, contexts may compete with one another, privacy is multifaceted, and people often disagree. Potential outcomes include progress on defining privacy, furthering accountability for big data systems that cut across contexts, and enabling policy-governed privacy with respect to collaboration.
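
One way to make the organizing hypothesis concrete is to cast a context as an MDP whose reward function encodes the context's purpose, so that a reward-maximizing policy uses only purpose-promoting information flows. The sketch below is a minimal illustration under that assumption, not the project's actual formalism; the toy clinic states, actions, and probabilities are all hypothetical.

```python
# Illustrative sketch: a context as an MDP whose reward encodes its
# purpose (here, patient health). All names and numbers are made up.
STATES = ["untreated", "diagnosed", "treated"]
ACTIONS = ["share_with_doctor", "share_with_advertiser", "withhold"]

# Transitions T[s][a] -> {next_state: probability}; only sharing with
# the doctor advances treatment.
T = {
    "untreated": {
        "share_with_doctor": {"diagnosed": 0.9, "untreated": 0.1},
        "share_with_advertiser": {"untreated": 1.0},
        "withhold": {"untreated": 1.0},
    },
    "diagnosed": {
        "share_with_doctor": {"treated": 0.8, "diagnosed": 0.2},
        "share_with_advertiser": {"diagnosed": 1.0},
        "withhold": {"diagnosed": 1.0},
    },
    "treated": {a: {"treated": 1.0} for a in ACTIONS},
}

def R(s, a, s2):
    # The context's purpose as reward: reaching "treated" pays off.
    return 1.0 if s2 == "treated" and s != "treated" else 0.0

def value_iteration(gamma=0.9, eps=1e-6):
    V = {s: 0.0 for s in STATES}
    while True:
        V2 = {s: max(sum(p * (R(s, a, s2) + gamma * V[s2])
                         for s2, p in T[s][a].items())
                     for a in ACTIONS)
              for s in STATES}
        if max(abs(V2[s] - V[s]) for s in STATES) < eps:
            return V2
        V = V2

def greedy_policy(V, gamma=0.9):
    return {s: max(ACTIONS,
                   key=lambda a: sum(p * (R(s, a, s2) + gamma * V[s2])
                                     for s2, p in T[s][a].items()))
            for s in STATES}

V = value_iteration()
print(greedy_policy(V))  # doctor-sharing is chosen wherever the choice matters
```

In this toy setting the advertiser flow never appears in an optimal policy because it does not promote the purpose, which mirrors the hypothesized link between a context's purpose and its privacy norms.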

The research, led by Principal Investigator (PI) Michael Tschantz and Co-PI Helen Nissenbaum, seeks to create a formal representation of the contexts found in Contextual Integrity. Prior work has shown that the term “context” has been interpreted in a wide range of ways. The representation produced will serve as a reference model not just for comparing different interpretations but also for expressing what Helen Nissenbaum, the creator of Contextual Integrity, sees as the precise form of contexts in her theory. The representation will also serve as a starting point for adapting Contextual Integrity to the changing needs of computer science. The current focus is on how a context can be formed by composing smaller “sub-contexts.” The working hypothesis is that the “values” of a sub-context may come from the purpose of the super-context.
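
As a rough data-structure reading of that working hypothesis, a sub-context could inherit its values from the purposes of its enclosing contexts. The sketch below is hypothetical on every point (class name, fields, example contexts) and is only meant to make the composition idea tangible.

```python
# Illustrative sketch: a sub-context's "values" derived from the
# purpose of its super-context, per the working hypothesis above.
from dataclasses import dataclass, field

@dataclass
class Context:
    name: str
    purpose: str                                # what the context is for
    norms: list = field(default_factory=list)   # informational norms
    parent: "Context | None" = None             # enclosing super-context

    def values(self) -> list:
        # Hypothesis: values come from the purposes of the ancestors.
        if self.parent is None:
            return []
        return [self.parent.purpose] + self.parent.values()

healthcare = Context("healthcare", purpose="promote patient health")
billing = Context("hospital billing", purpose="collect payment",
                  parent=healthcare)
print(billing.values())  # ['promote patient health']
```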

The research team has started to translate the concept of context into a formal representation and has been working on a model of context similar to an MDP. After finding that this model was not flexible enough to accommodate disagreements about the state of a system, the researchers developed a more flexible model by adding a layer of interpreted predicates between norms and states. The new model also allows values to come from a global context, in order to model deontological approaches to ethics.
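
To illustrate what such a layer might look like, the sketch below writes a norm against predicates rather than raw states, so that two parties who read the same state differently correspond to two interpretations of the same predicates. The state, predicates, and interpretations are hypothetical stand-ins, not the researchers' actual formalism.

```python
# Illustrative sketch: interpreted predicates between norms and states,
# so disagreement about a state becomes a difference of interpretation.
state = {"recipient": "insurer", "consent_form": "signed_by_guardian"}

# The norm is written against predicates, not against the raw state:
# "no disclosure to third parties without consent."
def norm(interp, s):
    return (not interp["is_third_party"](s)) or interp["has_consent"](s)

# Two interpretations of the same predicates over the same state.
strict = {
    "is_third_party": lambda s: s["recipient"] != "doctor",
    "has_consent":    lambda s: s["consent_form"] == "signed_by_patient",
}
lenient = {
    "is_third_party": lambda s: s["recipient"] not in ("doctor", "insurer"),
    "has_consent":    lambda s: s["consent_form"].startswith("signed"),
}

print(norm(strict, state))   # False: insurer is a third party, no patient consent
print(norm(lenient, state))  # True: insurer is not a third party on this reading
```

The same norm text thus yields different verdicts under different interpretations, which is the flexibility the researchers reportedly needed.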

The work surfaced the fact that the notion of norms found in the theory of Contextual Integrity is difficult to pin down precisely, and the researchers will look more closely at the process that legitimates norms in hopes of providing an operational definition for at least the legitimate ones. Their work on sub-contexts has found that a similar concept of sub-goals may be conceptually primary, and they are working toward a model of this prerequisite concept.

The CI framework is being used to abstract real-world communication exchanges into formally defined information flows, where privacy policies describe sequences of admissible flows. CI enables decoupling (1) the syntactic extraction of flows from information exchanges and (2) the enforcement of privacy policies on these flows. As an alternative to predominant approaches to privacy, which were ineffective against novel information practices enabled by IT, CI was able both to pinpoint sources of disruption and to provide grounds for either accepting or rejecting them. Growing challenges from a burgeoning array of networked, sensor-enabled devices (IoT) and data-hungry machine learning systems, similar in form though magnified in scope, call for renewed attention to the theory.
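
CI's standard five-parameter flow (sender, recipient, information subject, information type, transmission principle) and the two decoupled steps can be sketched as follows. For brevity the sketch checks individual flows rather than sequences, and the extraction function, policy rules, and wildcard matching are hypothetical simplifications.

```python
# Illustrative sketch: CI's five-parameter flow, with (1) flow
# extraction decoupled from (2) policy enforcement.
from typing import NamedTuple

class Flow(NamedTuple):
    sender: str
    recipient: str
    subject: str        # whom the information is about
    attribute: str      # type of information
    principle: str      # transmission principle (e.g., "with consent")

def extract_flow(exchange: dict) -> Flow:
    # (1) Syntactic step: abstract a concrete exchange into a flow.
    return Flow(exchange["from"], exchange["to"], exchange["about"],
                exchange["info_type"], exchange["principle"])

# A toy policy: a set of admissible flow patterns; "*" matches anything.
POLICY = [
    Flow("patient", "doctor", "patient", "medical", "*"),
    Flow("doctor", "specialist", "patient", "medical", "with consent"),
]

def admissible(flow: Flow) -> bool:
    # (2) Enforcement step: check the extracted flow against the norms.
    return any(all(pat == "*" or pat == val
                   for pat, val in zip(rule, flow))
               for rule in POLICY)

msg = {"from": "doctor", "to": "specialist", "about": "patient",
       "info_type": "medical", "principle": "with consent"}
print(admissible(extract_flow(msg)))  # True: referral with consent is admissible
```

Because the two steps share only the Flow abstraction, the extraction logic can change with the technology (apps, IoT sensors, ML pipelines) without touching the policy, and vice versa.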
