Federated learning and differential privacy for medical image analysis

Mohammed Adnan, Shivam Kalra, Jesse C. Cresswell, Graham W. Taylor, Hamid R. Tizhoosh

Research output: Contribution to journal › Article › peer-review

Abstract

The artificial intelligence revolution has been spurred forward by the availability of large-scale datasets. In contrast, the paucity of large-scale medical datasets hinders the application of machine learning in healthcare. The lack of publicly available multi-centric and diverse datasets mainly stems from confidentiality and privacy concerns around sharing medical data. To demonstrate a feasible path forward in medical imaging, we conduct a case study of applying a differentially private federated learning framework for analysis of histopathology images, the largest and perhaps most complex medical images. We study the effects of IID and non-IID distributions along with the number of healthcare providers, i.e., hospitals and clinics, and the individual dataset sizes, using The Cancer Genome Atlas (TCGA) dataset, a public repository, to simulate a distributed environment. We empirically compare the performance of private, distributed training to conventional training and demonstrate that distributed training can achieve similar performance with strong privacy guarantees. We also study the effect of different source domains for histopathology images by evaluating the performance using external validation. Our work indicates that differentially private federated learning is a viable and reliable framework for the collaborative development of machine learning models in medical image analysis.
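The abstract describes a setup in which several healthcare providers jointly train a model without exchanging raw data, with differential privacy applied to the shared updates. Below is a minimal, hypothetical sketch of that general framework (federated averaging with clipped, noised client updates), assuming synthetic logistic-regression clients in place of histopathology data; the client simulation, clipping norm, and noise multiplier are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of differentially private federated averaging (DP-FedAvg).
# All data, model choices, and privacy parameters here are assumptions for
# illustration only.
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=1):
    """One client's local training: logistic regression via gradient descent."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (preds - y) / len(y)
        w -= lr * grad
    return w - weights  # return the update (delta), not the weights

def dp_fedavg_round(weights, clients, clip_norm=1.0, noise_multiplier=1.1):
    """Aggregate clipped, noised client updates (Gaussian mechanism)."""
    deltas = []
    for X, y in clients:
        delta = local_update(weights, X, y)
        # Clip each client's update to bound its sensitivity.
        norm = np.linalg.norm(delta)
        delta = delta * min(1.0, clip_norm / (norm + 1e-12))
        deltas.append(delta)
    avg = np.mean(deltas, axis=0)
    # Add calibrated Gaussian noise to the aggregate for differential privacy.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clients),
                       size=avg.shape)
    return weights + avg + noise

# Simulate a small non-IID distributed environment with synthetic data:
# each "hospital" draws features from a shifted distribution.
d = 10
clients = []
for k in range(4):
    X = rng.normal(loc=k * 0.5, size=(50, d))
    y = (X[:, 0] + 0.1 * rng.normal(size=50) > k * 0.5).astype(float)
    clients.append((X, y))

w = np.zeros(d)
for _ in range(20):
    w = dp_fedavg_round(w, clients)
```

In practice, frameworks used for this kind of study replace the toy local update with deep-network training and track the cumulative privacy budget across rounds; this sketch only shows the aggregation pattern.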

Original language: English (US)
Article number: 1953
Journal: Scientific Reports
Volume: 12
Issue number: 1
DOIs
State: Published - Dec 2022

ASJC Scopus subject areas

  • General
