"New AI Technology Protects Privacy in Healthcare Settings"

An interdisciplinary team of researchers from Imperial College London (ICL), the Technical University of Munich (TUM), and the non-profit organization OpenMined developed new technology to protect personal patient data while training healthcare Artificial Intelligence (AI) algorithms. According to the researchers, their privacy-preserving techniques diagnose various types of pneumonia in children more accurately than existing algorithms. The effectiveness of AI algorithms used to support clinicians in diagnosing cancers and other illnesses depends on the quality and quantity of the medical data used to train them. Clinics often share patient data with each other to maximize the data pool, and to protect this data, it typically undergoes anonymization and pseudonymization. However, these safeguards have often proven inadequate for protecting patients' health data. The team developed a unique combination of AI-based diagnostic processes for radiological image data that maintains the privacy of patient data. The researchers applied federated learning, in which the deep learning algorithm is shared instead of the data itself: Machine Learning (ML) models were trained at different hospitals using local data and then returned to the authors, so the data owners never had to share their data and retained control over it. The team also used a technique called secure aggregation to prevent the identification of the institutions where the algorithm was trained. The algorithms were combined in encrypted form and decrypted only after they had been trained with the participating institutions' data. This article continues to discuss the team's privacy-preserving AI method for healthcare settings, as well as the importance of ensuring the privacy and security of healthcare data.
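To make the federated learning and secure aggregation ideas above concrete, here is a minimal sketch in Python. It is not the researchers' actual system: it uses toy NumPy weight vectors in place of real radiology models, an illustrative `local_update` function as a stand-in for each hospital's training step, and simple pairwise additive masks as a simplified stand-in for the cryptographic secure aggregation protocol the article describes. All names and parameters are hypothetical.

```python
# Sketch: federated averaging with pairwise-masked secure aggregation.
# Assumes toy NumPy weight vectors; `local_update` is a hypothetical
# stand-in for one hospital's local training step.
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_weights, local_data):
    # Pretend each site nudges the shared model toward the mean of
    # its own private data; real systems would run gradient steps.
    return global_weights + 0.1 * (local_data.mean(axis=0) - global_weights)

def masked_updates(updates):
    # Secure aggregation via pairwise additive masks: each pair of
    # sites (i, j) agrees on a random mask that site i adds and site j
    # subtracts. Individual updates are hidden from the aggregator,
    # but the masks cancel, so the sum (and mean) is unchanged.
    n = len(updates)
    masked = [u.copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return masked

# Three "hospitals", each holding private local data (random toy features).
global_w = np.zeros(4)
local_data = [rng.normal(loc=k, size=(20, 4)) for k in range(3)]

for _ in range(5):
    # Each site trains locally; raw data never leaves the site.
    updates = [local_update(global_w, d) for d in local_data]
    # The aggregator only sees masked updates, so it cannot attribute
    # any single update to a specific institution.
    global_w = np.mean(masked_updates(updates), axis=0)

print("aggregated model weights:", np.round(global_w, 3))
```

Because every mask is added once and subtracted once, averaging the masked updates yields exactly the average of the true updates, which is what lets the aggregator combine models without ever seeing any single institution's contribution in the clear.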

Imperial College London reports "New AI Technology Protects Privacy in Healthcare Settings"
