"Training Algorithms To Make Fair Decisions Using Private Data"

Researchers at USC Viterbi have created an algorithm that enhances fairness while keeping data secure. Shen Yan, a recent Ph.D. graduate at USC Viterbi's Information Sciences Institute (ISI) and co-author of "FairFed: Enabling Group Fairness in Federated Learning," noted that Artificial Intelligence (AI) systems make decisions based on the data they observe. Decisions drawn from biased data can be skewed, whether they are made by a human or an AI system. Debiasing the source data can reduce the bias of Machine Learning (ML) algorithms, but those sources are not always accessible.

Federated learning is an ML technique for training algorithms across multiple decentralized datasets without exchanging local data samples. Because it does not require direct access to the data, federated learning preserves privacy, making it well suited to sensitive records such as financial or medical data. Motivated by the importance and difficulty of achieving group fairness in this setting, the researchers created FairFed, an algorithm designed to improve group fairness in federated learning. This article continues to discuss the fairness-enhancing algorithm that also keeps data secure.
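To make the federated setup concrete, here is a minimal sketch of federated averaging (FedAvg), the standard aggregation pattern in federated learning: each client trains on its own data and shares only model parameters, never raw samples. The article does not detail FairFed's specific weighting scheme, so the client updates and size-based weights below are purely illustrative assumptions, not the paper's method.

```python
def federated_average(client_weights, client_sizes):
    """Average client model parameters, weighted by local dataset size.

    Only the parameter vectors are exchanged with the server;
    the raw training data stays on each client, which is what
    makes federated learning privacy-friendly.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Hypothetical local updates from three clients (values are made up).
clients = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
sizes = [100, 200, 100]
global_model = federated_average(clients, sizes)
```

A fairness-aware variant such as FairFed would adjust these aggregation weights based on fairness metrics reported by each client, rather than dataset size alone.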

USC Viterbi reports "Training Algorithms To Make Fair Decisions Using Private Data"
