"Computer Scientists Makes Noisy Data: Can Improve Treatments in Healthcare"

Collecting and analyzing data from large numbers of patients in order to discover patterns is an important aspect of modern healthcare, but such data must be protected to prevent violations of individuals' privacy. Breaches could also damage general trust, resulting in fewer people consenting to participate. Researchers at the University of Copenhagen's Department of Computer Science have developed a method for protecting data sets used to train Machine Learning (ML) models. According to Ph.D. student Joel Daniel Andersson, there have been several cases where data sets were anonymized, yet researchers were still able to recover participants' identities. Because so many other sources of information are available in the public domain, an adversary with sufficient computing power could cross-reference them to infer identities even without names or citizen codes. This article continues to discuss the team's method, which protects privacy while keeping data sets available for developing better treatments.
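The article does not spell out the technique, but "making noisy data" in this context typically refers to differential privacy, where calibrated random noise is added to query results so that no individual record can be inferred. The sketch below illustrates the standard Laplace mechanism for a counting query; the function name, the epsilon value, and the use of a count query are illustrative assumptions, not details from the article.

```python
import random


def noisy_count(true_count: int, epsilon: float) -> float:
    """Release a count with Laplace noise, a common differential-privacy sketch.

    A counting query changes by at most 1 when one person's record is added
    or removed (sensitivity 1), so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this query. Smaller epsilon means more
    noise and stronger privacy.
    """
    scale = 1.0 / epsilon
    # Sample Laplace(0, scale) from two exponential draws.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_count + noise


# Example: publish how many patients share a diagnosis without exposing
# whether any single patient is in the data set.
released = noisy_count(true_count=128, epsilon=1.0)
```

The released value is close to the true count on average, so population-level patterns survive, while the randomness masks any single individual's contribution.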

The University of Copenhagen reports "Computer Scientists Make Noisy Data: Can Improve Treatments in Healthcare"

Submitted by grigby1
