Spotlight on Lablet Research #38 - Operationalizing Contextual Integrity

Lablet: International Computer Science Institute
Participating Sub-Lablet: Cornell Tech

The ultimate goal of this project is to design new privacy controls grounded in the theory of contextual integrity, so that those controls can automatically infer contextual norms and handle data sharing and disclosure on a per-use basis.

This project, led by Principal Investigator (PI) Serge Egelman and Co-PI Helen Nissenbaum, centers on mobile device apps and seeks to address privacy as contextual integrity. Under this theory, inappropriate data flows are those that violate contextual information norms; each flow is modeled in terms of a data subject, data sender, data recipient, information type, and transmission principle (the constraints under which the flow occurs). For user-centered design, this suggests that an app should provide notice only when reasonable privacy expectations are likely to be violated. To determine which of these parameters actually matter to users, the project proceeds in two phases: Phase 1 uses factorial vignette studies (interviews and surveys built from randomly generated scenarios with controlled parameters), and Phase 2 uses observational studies in which instrumented phones detect the parameters of real data flows and users' resulting behaviors. With this improved infrastructure, researchers will be able to study privacy behaviors in situ. Long-term plans are to examine new ways of applying the theory of contextual integrity to privacy controls for emergent technologies (e.g., in-home IoT devices) and to construct educational materials based on the research findings for classroom use.
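To make the five-parameter model concrete, the following sketch (a hypothetical illustration rather than the project's actual implementation; the norms, names, and requires_notice helper are invented) represents each data flow as a tuple of data subject, sender, recipient, information type, and transmission principle, and flags a flow for user notice only when it matches no known contextual norm.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataFlow:
    """A data flow described by the five contextual-integrity parameters."""
    subject: str                 # whose information it is
    sender: str                  # who transmits the information
    recipient: str               # who receives it
    info_type: str               # what kind of information flows
    transmission_principle: str  # the constraint under which it flows

# Hypothetical contextual norms: flows matching any of these are expected.
NORMS = {
    DataFlow("user", "fitness app", "physician", "heart rate", "with consent"),
    DataFlow("user", "maps app", "navigation service", "location", "while navigating"),
}

def requires_notice(flow: DataFlow) -> bool:
    """Notify the user only when a flow falls outside the known norms."""
    return flow not in NORMS

# A location flow to an ad network violates the modeled norms, so it would
# trigger a notice; the in-context navigation flow would not.
ad_flow = DataFlow("user", "maps app", "ad network", "location", "for advertising")
print(requires_notice(ad_flow))  # True
```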

The increasing use of smart home devices affects the privacy not only of device owners, but also of individuals who did not choose to deploy them and may not even be aware of them. Some smart home devices and systems, especially those with cameras, can be used for remote surveillance of, for example, domestic employees. Domestic workers represent a special case of bystanders' privacy because of the blending of home, work, and care contexts and because of employer-employee power differentials. To examine the experiences, perspectives, and privacy concerns of domestic workers, the research team conducted a case study of nannies and of parents who employ nannies, and has been analyzing transcripts of 26 interviews with nannies and au pairs about their experiences with smart devices in their employers' homes and their attitudes, expectations, and preferences regarding data collection in smart homes. The goal of this case study is to examine which factors in this combined home/workplace/caregiving context affect employers' and employees' data-sharing choices, and how those choices and attitudes reflect or change power dynamics in their relationships. The study also aims to identify potential points of intervention (technical and social) for better respecting bystanders' privacy preferences. The team is planning a research agenda to integrate the concerns of bystanders into its work with smart home product developers, including experimental interventions to prompt more attention to this issue in design.

The research team conducted a study that collected people's perceptions of passive listening, their privacy preferences for it, their reactions to different modalities of permission requests, and their suggestions for other privacy controls. Based on the results, the researchers created a set of recommendations for how privacy decisions should be presented to users of these and other future in-home data capture devices. The study of passive-listening devices used an interactive app store experience, which provided a unique means of measuring consumer sentiment in a scenario modeled on real life. Using both quantitative and qualitative analysis, the researchers determined people's views on privacy models for always-listening voice assistants; these views ranged from outright rejection of the voice assistant described in the survey to a preference for one model because of its increased privacy protections. The researchers also observed that users want audit mechanisms so they can examine what decisions have previously been made about their privacy. Providing users with feedback and examples of the types of data apps may collect is an effective way to help them detect malicious apps that could cause privacy violations. These techniques are likely applicable to other domains that rely on machine learning and that offer opportunities to decompose larger problems into smaller, self-contained tasks amenable to human verification. To find out how people would react to different kinds of runtime permission requests, the researchers asked participants to hold conversations while receiving ambient suggestions (and plenty of permission requests) from a passive-listening assistant, which the researchers simulated in real time using the Wizard of Oz technique. Most participants seemed excited about passive listening but wanted control over the assistant's actions and their own data. They generally prioritized an interruption-free experience over more fine-grained control of what the device is allowed to record.

The research team has been building infrastructure to test whether ad networks obey user privacy controls, using inferences drawn from ad metadata. They have been collecting bid data from roughly 30 ad networks and analyzing it under different contexts to examine whether privacy controls are functioning as expected.
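As a rough sketch of how such inferences might work (hypothetical; the field names and comparison below are illustrative and not the team's actual pipeline or schema), one can compare bid metadata collected under a baseline context with metadata collected after a user enables a privacy control such as an opt-out signal: if bids remain personalized after opt-out, the control is likely not being honored.

```python
from statistics import mean

# Hypothetical bid records captured under two contexts; field names are
# illustrative, not an actual ad-network schema.
baseline_bids = [{"cpm": 2.4, "personalized": True},
                 {"cpm": 2.1, "personalized": True}]
opt_out_bids = [{"cpm": 1.0, "personalized": True},
                {"cpm": 0.9, "personalized": False}]

def summarize(bids):
    """Summarize bid metadata for one context."""
    return {
        "avg_cpm": mean(b["cpm"] for b in bids),
        "personalized_share": sum(b["personalized"] for b in bids) / len(bids),
    }

base, opted = summarize(baseline_bids), summarize(opt_out_bids)

# If any bids are still personalized after the user opts out, that suggests
# the privacy control is not functioning as expected for this network.
if opted["personalized_share"] > 0:
    print(f"Possible violation: {opted['personalized_share']:.0%} of bids "
          f"personalized after opt-out (baseline {base['personalized_share']:.0%})")
```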

Researchers investigated California Consumer Privacy Act (CCPA) compliance among 160 top-ranked Android app developers from the U.S. Google Play Store. The CCPA requires developers to provide accurate privacy notices and to respond to "right to know" requests by disclosing the personal information they have collected, used, or shared about a consumer for a business or commercial purpose. The researchers found that at least 39% of the apps they studied shared device-specific identifiers and at least 26% shared geolocation information with third parties without disclosing it in response to the researchers' requests.
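A simplified sketch of this kind of compliance check (hypothetical; the package name, data categories, and observed/disclosed sets are invented for illustration) compares the data categories observed flowing to third parties in an app's network traffic against the categories disclosed in the developer's right-to-know response, and flags anything shared but not disclosed.

```python
# Hypothetical comparison of observed third-party data flows against a
# developer's "right to know" disclosure; names and categories are illustrative.
observed_sharing = {
    "com.example.app": {"device_id", "geolocation"},  # seen in network traffic
}
disclosed_sharing = {
    "com.example.app": {"device_id"},                 # listed in the CCPA response
}

def undisclosed(app: str) -> set[str]:
    """Categories shared with third parties but missing from the disclosure."""
    return observed_sharing.get(app, set()) - disclosed_sharing.get(app, set())

for app in observed_sharing:
    missing = undisclosed(app)
    if missing:
        print(f"{app}: shared {sorted(missing)} without disclosing it")
# Output: com.example.app: shared ['geolocation'] without disclosing it
```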
 
