TY - JOUR
T1 - Current Clinical Applications of Artificial Intelligence in Radiology and Their Best Supporting Evidence
AU - Tariq, Amara
AU - Purkayastha, Saptarshi
AU - Padmanaban, Geetha Priya
AU - Krupinski, Elizabeth
AU - Trivedi, Hari
AU - Banerjee, Imon
AU - Gichoya, Judy Wawira
N1 - Funding Information:
Funding support was received from the National Science Foundation, Division of Electrical, Communication and Cyber Systems (grant 1928481). Dr Purkayastha has received a grant from the National Science Foundation. Dr Trivedi is a consultant to Arterys and a founder of LightBox. All other authors state that they have no conflict of interest related to the material discussed in this article. Drs Tariq, Purkayastha, Padmanaban, Krupinski, Trivedi, Banerjee, and Gichoya are employees.
Publisher Copyright:
© 2020 American College of Radiology
PY - 2020/11
Y1 - 2020/11
N2 - Purpose: Despite tremendous gains from deep learning and the promise of artificial intelligence (AI) in medicine to improve diagnosis and save costs, there exists a large translational gap to implement and use AI products in real-world clinical situations. Adoption of standards such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis, Consolidated Standards of Reporting Trials, and the Checklist for Artificial Intelligence in Medical Imaging is increasing to improve the peer-review process and reporting of AI tools. However, no such standards exist for product-level review. Methods: A review of clinical trials showed a paucity of evidence for radiology AI products; thus, the authors developed a 10-question assessment tool for reviewing AI products with an emphasis on their validation and result dissemination. The assessment tool was applied to commercial and open-source algorithms used for diagnosis to extract evidence on the clinical utility of the tools. Results: There is limited technical information on methodologies for FDA-approved algorithms compared with open-source products, likely because of intellectual property concerns. Furthermore, FDA-approved products use much smaller data sets compared with open-source AI tools, because the terms of use of public data sets are limited to academic and noncommercial entities, which precludes their use in commercial products. Conclusions: Overall, this study reveals a broad spectrum of maturity and clinical use of AI products, but a large gap exists in exploring actual performance of AI tools in clinical practice.
AB - Purpose: Despite tremendous gains from deep learning and the promise of artificial intelligence (AI) in medicine to improve diagnosis and save costs, there exists a large translational gap to implement and use AI products in real-world clinical situations. Adoption of standards such as Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis, Consolidated Standards of Reporting Trials, and the Checklist for Artificial Intelligence in Medical Imaging is increasing to improve the peer-review process and reporting of AI tools. However, no such standards exist for product-level review. Methods: A review of clinical trials showed a paucity of evidence for radiology AI products; thus, the authors developed a 10-question assessment tool for reviewing AI products with an emphasis on their validation and result dissemination. The assessment tool was applied to commercial and open-source algorithms used for diagnosis to extract evidence on the clinical utility of the tools. Results: There is limited technical information on methodologies for FDA-approved algorithms compared with open-source products, likely because of intellectual property concerns. Furthermore, FDA-approved products use much smaller data sets compared with open-source AI tools, because the terms of use of public data sets are limited to academic and noncommercial entities, which precludes their use in commercial products. Conclusions: Overall, this study reveals a broad spectrum of maturity and clinical use of AI products, but a large gap exists in exploring actual performance of AI tools in clinical practice.
KW - AI in clinical practice
KW - open-source AI tools for radiology
KW - proprietary AI tools for radiology
KW - radiology image processing
KW - survey of AI-based diagnostic tools
UR - http://www.scopus.com/inward/record.url?scp=85093107735&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85093107735&partnerID=8YFLogxK
U2 - 10.1016/j.jacr.2020.08.018
DO - 10.1016/j.jacr.2020.08.018
M3 - Article
C2 - 33153541
AN - SCOPUS:85093107735
VL - 17
SP - 1371
EP - 1381
JO - Journal of the American College of Radiology
JF - Journal of the American College of Radiology
SN - 1558-349X
IS - 11
ER -