Spotlight on Lablet Research #12 - Operationalizing Contextual Integrity

Project: Operationalizing Contextual Integrity

Lablet: International Computer Science Institute
Sub-Lablet: Cornell Tech

The ultimate goal of this research is to design new privacy controls grounded in the theory of Contextual Integrity (CI), so that they can automatically infer contextual norms and handle data sharing and disclosure on a per-use basis. A second goal is to examine how policies surrounding the acceptable use of personal data can be adapted to support the theory. Throughout, the aim is to design and develop future privacy controls whose high usability comes from design principles informed by empirical research.

This project centers on work with mobile device apps, which forms the basis for planned future research addressing privacy as contextual integrity. Under CI, inappropriate data flows are those that violate contextual information norms, which are modeled using five parameters: data subject, data sender, data recipient, information type, and transmission principle (constraints); a sketch of this parameterization appears after the list below. For user-centered design, this suggests that an app should provide notice only when reasonable privacy expectations are likely to be violated. The next steps to determine which parameters actually matter to users are:

  • Phase 1: Factorial vignette studies (interviews, surveys; randomly generated scenarios based on controlled parameters)
  • Phase 2: Observational studies (instrument phones, detect parameters and resulting behaviors)
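
To make the parameterization concrete, here is a minimal sketch of how the five CI parameters can describe an information flow and drive Phase 1 vignette generation. The factor levels, field names, and sentence template are illustrative assumptions, not the study's actual instrument.

```python
import itertools
import random
from dataclasses import dataclass

# A contextual information flow, following the five CI parameters
# named above.
@dataclass(frozen=True)
class InformationFlow:
    data_subject: str
    sender: str
    recipient: str
    information_type: str
    transmission_principle: str

# Candidate values for each parameter (hypothetical examples).
FACTORS = {
    "data_subject": ["device owner", "houseguest"],
    "sender": ["fitness app", "voice assistant"],
    "recipient": ["app developer", "advertising network"],
    "information_type": ["location", "audio recording"],
    "transmission_principle": ["with explicit consent", "while the app is in use"],
}

def random_vignettes(n, seed=None):
    """Sample n distinct scenarios from the full factorial design."""
    rng = random.Random(seed)
    scenarios = [InformationFlow(*combo)
                 for combo in itertools.product(*FACTORS.values())]
    return rng.sample(scenarios, n)

for flow in random_vignettes(3, seed=42):
    print(f"A {flow.sender} shares the {flow.data_subject}'s "
          f"{flow.information_type} with an {flow.recipient} "
          f"{flow.transmission_principle}.")
```

Enumerating the full factorial design and sampling from it keeps the presented scenarios balanced across parameter values, which is the core of the factorial vignette method.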

The research team, led by Principal Investigator (PI) Serge Egelman and Co-PI Helen Nissenbaum, is improving infrastructure to study privacy behaviors in situ, planning long-term work to examine new ways of applying the theory of contextual integrity to privacy controls for emergent technologies (e.g., in-home IoT devices), and constructing educational materials for use in the classroom based on the research findings.

In addition to the publications and engagement events, they are designing new studies to better understand users’ privacy perceptions surrounding in-home voice assistants and voice capture in general. The goal is to gather data that can be used to predict privacy-sensitive events based on contextual data.

To that end, the team deployed a study to examine contextual norms around in-home audio monitoring, which is likely to proliferate as new devices appear on the market. They recruited users of both the Google Home and the Amazon Echo to answer questions about previously recorded audio from their devices. Both manufacturers make audio recordings accessible to device owners through a web portal, so the study used a browser extension to present these clips to users at random and then have them answer questions about the circumstances surrounding the recordings. The research seeks to determine whether users were aware that the recordings had been made, how sensitive the content was, and what preferences participants have for various data retention and sharing policies.
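
A rough sketch of the clip-selection step is below, assuming clip identifiers have already been gathered from the portal; the clip names and question wording are placeholders, not the extension's actual implementation.

```python
import random

# Hypothetical clip identifiers, standing in for metadata the browser
# extension would read from the manufacturer's web portal.
clips = [f"clip-{i:03d}" for i in range(200)]

# Question themes drawn from the study description: awareness of the
# recording, sensitivity of its content, and retention/sharing preferences.
QUESTIONS = [
    "Were you aware that this recording was made?",
    "How sensitive is the content of this recording?",
    "Which retention and sharing policy would you prefer for recordings like this?",
]

def build_review_session(clips, k, seed=None):
    """Select k clips uniformly at random, without replacement, and pair
    each with the full question battery."""
    rng = random.Random(seed)
    return [(clip, QUESTIONS) for clip in rng.sample(clips, k)]

for clip, questions in build_review_session(clips, 3, seed=7):
    print(clip, "->", questions[0])
```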

In another set of studies, the team examined existing audio corpora and used crowdworkers to identify sensitive conversations, which can then be labeled and used to train a classifier. The goal is to design devices that can predict when they should not be recording or sharing data. The study was deployed to several hundred participants, and the resulting data are under review.
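
The spotlight does not name the model, but a simple text-classification pipeline illustrates the idea; the scikit-learn pipeline, toy utterances, and labels below are assumptions for the sketch, not the team's actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy crowdworker labels: 1 = sensitive conversation, 0 = not sensitive.
# Real labels would come from the annotated audio corpora.
texts = [
    "let me give you my credit card number",
    "the doctor said the test results came back",
    "what time is the game on tonight",
    "can you turn up the thermostat",
]
labels = [1, 1, 0, 0]

# Bag-of-words pipeline: TF-IDF features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(texts, labels)

# At inference time, a device could score an utterance before deciding
# whether recording or sharing it is appropriate.
print(model.predict_proba(["my test results are in"])[0][1])
```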

Additional details on the project can be found here.