TY - GEN
T1 - HealthyGAN
T2 - 7th International Workshop on Simulation and Synthesis in Medical Imaging, SASHIMI 2022, held in conjunction with the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention, MICCAI 2022
AU - Rahman Siddiquee, Md Mahfuzur
AU - Shah, Jay
AU - Wu, Teresa
AU - Chong, Catherine
AU - Schwedt, Todd
AU - Li, Baoxin
N1 - Publisher Copyright:
© 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
PY - 2022
Y1 - 2022
N2 - Automated anomaly detection from medical images, such as MRIs and X-rays, can significantly reduce human effort in disease diagnosis. Owing to the complexity of modeling anomalies and the high cost of manual annotation by domain experts (e.g., radiologists), typical techniques in the current medical imaging literature focus on deriving diagnostic models from healthy subjects only, assuming the model will detect images from patients as outliers. However, in many real-world scenarios, unannotated datasets containing a mix of both healthy and diseased individuals are abundant. Therefore, this paper poses the research question of how to improve unsupervised anomaly detection by utilizing (1) an unannotated set of mixed images, in addition to (2) the set of healthy images used in the existing literature. To answer this question, we propose HealthyGAN, a novel one-directional image-to-image translation method that learns to translate images from the mixed dataset to only healthy images. Being one-directional, HealthyGAN relaxes the cycle-consistency requirement of existing unpaired image-to-image translation methods, which is unattainable with mixed unannotated data. Once the translation is learned, we generate a difference map for any given image by subtracting its translated output from the input. Regions with significant responses in the difference map correspond to potential anomalies (if any). HealthyGAN outperforms conventional state-of-the-art methods by significant margins on two publicly available datasets, COVID-19 and NIH ChestX-ray14, and on one institutional dataset collected from Mayo Clinic. The implementation is publicly available at https://github.com/mahfuzmohammad/HealthyGAN.
AB - Automated anomaly detection from medical images, such as MRIs and X-rays, can significantly reduce human effort in disease diagnosis. Owing to the complexity of modeling anomalies and the high cost of manual annotation by domain experts (e.g., radiologists), typical techniques in the current medical imaging literature focus on deriving diagnostic models from healthy subjects only, assuming the model will detect images from patients as outliers. However, in many real-world scenarios, unannotated datasets containing a mix of both healthy and diseased individuals are abundant. Therefore, this paper poses the research question of how to improve unsupervised anomaly detection by utilizing (1) an unannotated set of mixed images, in addition to (2) the set of healthy images used in the existing literature. To answer this question, we propose HealthyGAN, a novel one-directional image-to-image translation method that learns to translate images from the mixed dataset to only healthy images. Being one-directional, HealthyGAN relaxes the cycle-consistency requirement of existing unpaired image-to-image translation methods, which is unattainable with mixed unannotated data. Once the translation is learned, we generate a difference map for any given image by subtracting its translated output from the input. Regions with significant responses in the difference map correspond to potential anomalies (if any). HealthyGAN outperforms conventional state-of-the-art methods by significant margins on two publicly available datasets, COVID-19 and NIH ChestX-ray14, and on one institutional dataset collected from Mayo Clinic. The implementation is publicly available at https://github.com/mahfuzmohammad/HealthyGAN.
KW - Anomaly detection
KW - COVID-19 detection
KW - Image-to-Image translation
KW - Migraine detection
KW - Thoracic disease detection
UR - http://www.scopus.com/inward/record.url?scp=85140490575&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85140490575&partnerID=8YFLogxK
U2 - 10.1007/978-3-031-16980-9_5
DO - 10.1007/978-3-031-16980-9_5
M3 - Conference contribution
AN - SCOPUS:85140490575
SN - 9783031169793
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 43
EP - 54
BT - Simulation and Synthesis in Medical Imaging - 7th International Workshop, SASHIMI 2022, Held in Conjunction with MICCAI 2022, Proceedings
A2 - Zhao, Can
A2 - Svoboda, David
A2 - Wolterink, Jelmer M.
A2 - Escobar, Maria
PB - Springer Science and Business Media Deutschland GmbH
Y2 - 18 September 2022 through 18 September 2022
ER -