TY - JOUR
T1 - Active, continual fine tuning of convolutional neural networks for reducing annotation efforts
AU - Zhou, Zongwei
AU - Shin, Jae Y.
AU - Gurudu, Suryakanth R.
AU - Gotway, Michael B.
AU - Liang, Jianming
N1 - Funding Information:
This research has been supported partially by ASU and Mayo Clinic through a Seed Grant and an Innovation Grant, and partially by the NIH under Award Number R01HL128785. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. We thank S. Tatapudi and A. Pluhar for helping improve the writing of this paper. The content of this paper is covered by patents pending.
Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/7
Y1 - 2021/7
N2 - The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time-consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
AB - The splendid success of convolutional neural networks (CNNs) in computer vision is largely attributable to the availability of massive annotated datasets, such as ImageNet and Places. However, in medical imaging, it is challenging to create such large annotated datasets, as annotating medical images is not only tedious, laborious, and time-consuming, but it also demands costly, specialty-oriented skills, which are not easily accessible. To dramatically reduce annotation cost, this paper presents a novel method to naturally integrate active learning and transfer learning (fine-tuning) into a single framework, which starts directly with a pre-trained CNN to seek “worthy” samples for annotation and gradually enhances the (fine-tuned) CNN via continual fine-tuning. We have evaluated our method using three distinct medical imaging applications, demonstrating that it can reduce annotation efforts by at least half compared with random selection.
KW - Active learning
KW - Annotation cost reduction
KW - Computer-aided diagnosis
KW - Convolutional neural networks
KW - Medical image analysis
KW - Transfer learning
UR - http://www.scopus.com/inward/record.url?scp=85103984392&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85103984392&partnerID=8YFLogxK
U2 - 10.1016/j.media.2021.101997
DO - 10.1016/j.media.2021.101997
M3 - Article
C2 - 33853034
AN - SCOPUS:85103984392
SN - 1361-8415
VL - 71
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 101997
ER -