Optic Disc Classification by Deep Learning versus Expert Neuro-Ophthalmologists

Valérie Biousse, Nancy J. Newman, Raymond P. Najjar, Caroline Vasseneix, Xinxing Xu, Daniel S. Ting, Léonard B. Milea, Jeong Min Hwang, Dong Hyun Kim, Hee Kyung Yang, Steffen Hamann, John J. Chen, Yong Liu, Tien Yin Wong, Dan Milea, Barnabé Rondé-Courbis, Philippe Gohier, Neil Miller, Tanyatuth Padungkiatsagul, Anuchit Poonyathalang, Yanin Suwan, Kavin Vanikieti, Giulia Amore, Piero Barboni, Michele Carbonelli, Valerio Carelli, Chiara La Morgia, Martina Romagnoli, Marie Bénédicte Rougier, Selvakumar Ambika, Swetha Komma, Pedro Fonseca, Miguel Raimundo, Isabelle Karlesand, Wolf Alexander Lagrèze, Nicolae Sanda, Gabriele Thumann, Florent Aptel, Christophe Chiquet, Kaiqun Liu, Hui Yang, Carmen K.M. Chan, Noel C.Y. Chan, Carol Y. Cheung, Thi Ha Chau Tran, James Acheson, Maged S. Habib, Neringa Jurkute, Patrick Yu-Wai-Man, Richard Kho, Jost B. Jonas, Nouran Sabbagh, Catherine Vignal-Clermont, Rabih Hage, Raoul K. Khanna, Tin Aung, Ching Yu Cheng, Ecosse Lamoureux, Jing Liang Loo, Shweta Singhal, Daniel Ting, Sharon Tow, Zhubo Jiang, Clare L. Fraser, Luis J. Mejico, Masoud Aghsaei Fard

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

Objective: To compare the diagnostic performance of an artificial intelligence deep learning system with that of expert neuro-ophthalmologists in classifying optic disc appearance.

Methods: The deep learning system was previously trained and validated on 14,341 ocular fundus photographs from 19 international centers. The performance of the system was evaluated on 800 new fundus photographs (400 normal optic discs, 201 papilledema [disc edema from elevated intracranial pressure], 199 other optic disc abnormalities) and compared with that of 2 expert neuro-ophthalmologists who independently reviewed the same randomly presented images without clinical information. Area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity were calculated.

Results: The system correctly classified 678 of 800 (84.7%) photographs, compared with 675 of 800 (84.4%) for Expert 1 and 641 of 800 (80.1%) for Expert 2. The system yielded areas under the receiver operating characteristic curve of 0.97 (95% confidence interval [CI] = 0.96–0.98), 0.96 (95% CI = 0.94–0.97), and 0.89 (95% CI = 0.87–0.92) for the detection of normal discs, papilledema, and other disc abnormalities, respectively. The accuracy, sensitivity, and specificity of the system's classification of optic discs were similar to or better than those of the 2 experts. Intergrader agreement at the eye level was 0.71 (95% CI = 0.67–0.76) between Expert 1 and Expert 2, 0.72 (95% CI = 0.68–0.76) between the system and Expert 1, and 0.65 (95% CI = 0.61–0.70) between the system and Expert 2.

Interpretation: The performance of this deep learning system at classifying optic disc abnormalities was at least as good as that of 2 expert neuro-ophthalmologists. Future prospective studies are needed to validate this system as a diagnostic aid in relevant clinical settings. ANN NEUROL 2020;88:785–795.
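The headline numbers in the Results section are standard classification metrics. As an illustrative sketch (not code from the study), the overall accuracy and an intergrader agreement statistic such as Cohen's kappa can be computed as follows; the example label lists are made up for demonstration:

```python
from collections import Counter

def accuracy(correct, total):
    """Fraction of photographs classified correctly, e.g. 678 of 800."""
    return correct / total

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two graders over the same eyes.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is agreement expected by chance from each grader's label frequencies.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[k] * counts_b[k] for k in set(counts_a) | set(counts_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# The system's overall accuracy reported in the abstract: 678/800 = 0.8475
print(f"{accuracy(678, 800):.4f}")

# Illustrative agreement between two hypothetical graders (labels are made up)
grader_1 = ["normal", "normal", "papilledema", "other"]
grader_2 = ["normal", "normal", "papilledema", "normal"]
print(cohen_kappa(grader_1, grader_2))
```

Perfect agreement yields kappa = 1, and agreement no better than chance yields kappa = 0; the abstract's intergrader values of 0.65–0.72 fall in the range conventionally described as substantial agreement.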

Original language: English (US)
Pages (from-to): 785-795
Number of pages: 11
Journal: Annals of Neurology
Volume: 88
Issue number: 4
DOIs
State: Published - Oct 1 2020

ASJC Scopus subject areas

  • Neurology
  • Clinical Neurology
