"The Risks Of Attacks That Involve Poisoning Training Data For Machine Learning Models"

Machine Learning (ML) algorithms can leak information contained in their training data through their model parameters and predictions. It is therefore possible for malicious users with access to a model's parameters or predictions to reconstruct and infer sensitive information included in the training dataset, stealing information ranging from demographic data to bank account numbers. A team of researchers from Google, the National University of Singapore, Yale-NUS College, and Oregon State University recently evaluated the risks of attacks that poison ML models in order to reconstruct the sensitive information hidden within their parameters or predictions. Their paper covers the nature of these attacks and how they can evade existing cryptographic privacy tools.

The team focused on ML algorithms trained in a secure multi-party setting, in which data independently provided by different individuals, developers, or other parties is combined to train the model. They showed that a malicious party can significantly increase the leakage of information about the other parties' data by injecting adversarially crafted data into the shared pool of training data. By poisoning the training data, the attacker prompts the training algorithm to memorize the other parties' records, which, in turn, allows the attacker to recover the victims' data through a series of inference attacks. The researchers evaluated the effectiveness and threat level of three types of inference attacks combined with data poisoning: membership inference attacks, which reveal whether a specific record was part of the training set, as well as reconstruction attacks and attribute inference attacks, which enable adversaries to partially reconstruct the training data. This article continues to discuss the study on attacks that involve poisoning training data for ML models.
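To give a concrete sense of one attack in this family, membership inference is often illustrated with a simple loss-threshold test: records the model fits unusually well are guessed to be training members, and poisoning can widen that gap by forcing the model to memorize the targeted records. The sketch below is a minimal, hypothetical NumPy illustration of that idea, not the researchers' implementation; the model outputs, labels, and threshold value are placeholders.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Hypothetical illustration only: the probabilities, labels, and
# threshold are stand-ins, not values from the study.
import numpy as np

def cross_entropy(probs, labels, eps=1e-12):
    """Per-example cross-entropy loss for predicted class probabilities."""
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def membership_inference(probs, labels, threshold=0.5):
    """Guess that an example was a training member if its loss is low.

    probs:  (n, num_classes) predicted probabilities from the target model
    labels: (n,) true class labels for the queried examples
    Returns a boolean array: True = predicted member of the training set.
    """
    losses = cross_entropy(probs, labels)
    return losses < threshold

# Toy usage: confidently fit (low-loss) examples are flagged as likely members.
probs = np.array([[0.95, 0.05],   # confident, correct -> likely member
                  [0.55, 0.45],   # uncertain          -> likely non-member
                  [0.10, 0.90]])  # confident, correct -> likely member
labels = np.array([0, 0, 1])
print(membership_inference(probs, labels))  # [ True False  True]
```

In this simplified picture, poisoning raises the attack's accuracy by making the victim's records stand out: the poisoned model's loss on memorized target records drops well below the threshold, while losses on unseen records stay high.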

News Update UK reports "The Risks Of Attacks That Involve Poisoning Training Data For Machine Learning Models"

 
