"New Tool Can Check for Data Leakage From AI Systems"

Artificial Intelligence (AI) helps companies power many applications, such as marketing analytics, recommendation services, and health services. Although AI offers many benefits, security and privacy researchers have shown that AI models are vulnerable to inference attacks, which allow attackers to extract sensitive information about the original training dataset. In an inference attack, the attacker repeatedly queries the AI service and analyzes patterns in its outputs to infer whether a specific record or type of data was used to train the model. By mounting such attacks, hackers can reconstruct the original dataset used to train the AI service. Assistant Professor Reza Shokri and his team at the National University of Singapore developed an open-source tool called the Machine Learning Privacy Meter (ML Privacy Meter) that organizations can use to determine whether their AI services are vulnerable to inference attacks. This article continues to discuss how inference attacks are performed against AI models and the ML Privacy Meter developed to assess the risk of such attacks.
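
To make the attack idea concrete, the sketch below shows a toy confidence-threshold membership inference test in Python. It illustrates the general principle described above, not the ML Privacy Meter itself; the dataset, the random-forest model, and the 0.9 confidence threshold are arbitrary choices made for this example.

```python
# Minimal sketch of a confidence-threshold membership inference test.
# Illustrative only; this is NOT the ML Privacy Meter API.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Split records into "members" (used for training) and "non-members" (held out).
X, y = load_breast_cancer(return_X_y=True)
X_mem, X_non, y_mem, y_non = train_test_split(X, y, test_size=0.5, random_state=0)

# Target model trained only on the member records.
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_mem, y_mem)

def membership_guess(model, X, threshold=0.9):
    """Guess 'member' when the model is highly confident on a record.

    Overfitted models tend to be more confident on records they were
    trained on, which is the signal a simple membership inference
    attack exploits.
    """
    confidence = model.predict_proba(X).max(axis=1)
    return confidence >= threshold

# Fraction of records flagged as members in each group; a large gap between
# the two rates indicates the model leaks information about its training set.
member_rate = membership_guess(model, X_mem).mean()
non_member_rate = membership_guess(model, X_non).mean()
print(f"flagged as members: train={member_rate:.2f}, holdout={non_member_rate:.2f}")
```

A tool such as the ML Privacy Meter performs a more rigorous version of this kind of analysis, quantifying how much an attacker who can query the model learns about individual training records.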

NUS reports "New Tool Can Check for Data Leakage From AI Systems"
