Differential privacy mechanisms have been proposed to guarantee the privacy of individuals in various types of statistical information. Constructing a probabilistic mechanism that satisfies differential privacy requires bounding the impact of any single record on the published statistic, i.e., its sensitivity, but there are situations in which sensitivity is difficult to derive. In this paper, we first summarize the situations in which deriving sensitivity is difficult in general, and then propose a definition, equivalent to the conventional definition of differential privacy, that handles them. Because this definition considers neighboring datasets exactly as the conventional definition does, known differential privacy mechanisms remain applicable. Next, as a concrete example of the difficulty of deriving sensitivity, we focus on the t-test, a basic tool of statistical analysis, and show that a practical differential privacy mechanism can be constructed for it. The proposed definition can be treated in the same way as the conventional one and applies to cases where sensitivity is difficult to derive.
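For reference, the conventional guarantee and the sensitivity the abstract refers to can be stated in the standard form; the notation below (M, D, D', S, f, Δf, ε) is the usual textbook notation, not taken from the paper. A randomized mechanism M satisfies ε-differential privacy if, for all neighboring datasets D, D' (differing in a single record) and all sets S of outputs,
\[
\Pr[M(D) \in S] \le e^{\varepsilon} \Pr[M(D') \in S],
\]
where the global sensitivity of a statistic \(f\),
\[
\Delta f = \max_{\text{neighboring } D, D'} \lVert f(D) - f(D') \rVert_1,
\]
calibrates the noise: for example, the Laplace mechanism \(M(D) = f(D) + \mathrm{Lap}(\Delta f / \varepsilon)\) satisfies ε-differential privacy. When Δf has no useful closed form or finite bound, as for the t-statistic, this standard recipe cannot be applied directly, which is the difficulty the paper addresses.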
Authors: Tomoaki Mimoto, Masayuki Hashimoto, Hiroyuki Yokoyama, Toru Nakamura, Takamasa Isohara, Ryosuke Kojima, Aki Hasegawa, Yasushi Okuno