"Blind Spots in AI Just Might Help Protect Your Privacy"

Significant advancements have been made in machine learning (ML), with the technology helping to detect cancer and predict personal traits. ML has also enabled self-driving cars and highly accurate facial recognition. However, ML models remain vulnerable to attacks in which adversarial examples are used to cause them to make mistakes. Adversarial examples are inputs designed by an attacker to cause an ML model to produce incorrect output, which can threaten user safety in the case of self-driving cars. According to privacy-focused researchers at the Rochester Institute of Technology and Duke University, there is a bright side to adversarial examples: such inputs can also be used to protect data and defend the privacy of users. This article continues to discuss ML applications, the use of adversarial examples to disrupt the success of ML models, Facebook's Cambridge Analytica incident, the never-ending cat-and-mouse game of predicting and protecting private user data, and research surrounding the use of adversarial examples to protect data.
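
To illustrate the underlying idea (not the researchers' actual method), the sketch below shows an FGSM-style perturbation against a toy logistic-regression "trait predictor." All weights, features, and the perturbation budget are made up for illustration; the point is that nudging an input against the model's gradient can push it into a blind spot and suppress an unwanted prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear model that infers a private trait from a 4-feature profile.
w = np.array([0.9, -0.4, 1.3, 0.2])
b = -0.1

def predict(x):
    return sigmoid(np.dot(w, x) + b)

# A made-up user feature vector; the model confidently infers the trait.
x = np.array([1.2, 0.3, 0.8, -0.5])
print("original prediction:", predict(x))   # ~0.86

# For this linear model, the gradient of the score w.r.t. the input is just w.
# Stepping against the sign of that gradient (FGSM-style) perturbs the profile
# only slightly per feature, yet flips the model's inference.
epsilon = 0.8                       # illustrative perturbation budget
x_adv = x - epsilon * np.sign(w)
print("perturbed prediction:", predict(x_adv))   # ~0.39, below the 0.5 threshold
```

In practice the same gradient-sign trick is applied to deep models (where the gradient is computed by backpropagation rather than read off directly), and the privacy-protective variant adds such perturbations to a user's own data before it is exposed to a predictive model.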

Wired reports "Blind Spots in AI Just Might Help Protect Your Privacy"
