Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain

W. Katherine Tan, Saeed Hassanpour, Patrick J. Heagerty, Sean D. Rundell, Pradeep Suri, Hannu T. Huhdanpaa, Kathryn James, David S. Carrell, Curtis P. Langlotz, Nancy L. Organ, Eric N. Meier, Karen J. Sherman, David F Kallmes, Patrick H Luetmer, Brent Griffith, David R. Nerenz, Jeffrey G. Jarvik

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Rationale and Objectives: To evaluate a natural language processing (NLP) system built with open-source tools for identification of lumbar spine imaging findings related to low back pain on magnetic resonance and x-ray radiology reports from four health systems. Materials and Methods: We used a limited data set (de-identified except for dates) sampled from lumbar spine imaging reports of a prospectively assembled cohort of adults. From N = 178,333 reports, we randomly selected N = 871 to form a reference-standard dataset, consisting of N = 413 x-ray reports and N = 458 MR reports. Using standardized criteria, four spine experts annotated the presence of 26 findings, of which 71 reports were annotated by all four experts and 800 were each annotated by two experts. We calculated inter-rater agreement and finding prevalence from the annotated data. We randomly split the annotated data into development (80%) and testing (20%) sets. We developed an NLP system from both rule-based and machine-learned models. We validated the system using accuracy metrics such as sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results: The multirater annotated dataset achieved inter-rater agreement of Cohen's kappa > 0.60 (substantial agreement) for 25 of 26 findings, with finding prevalence ranging from 3% to 89%. In the testing sample, rule-based and machine-learned predictions had comparable average specificity (0.97 and 0.95, respectively). The machine-learned approach had a higher average sensitivity (0.94, compared to 0.83 for rule-based) and a higher overall AUC (0.98, compared to 0.90 for rule-based). Conclusions: Our NLP system performed well in identifying the 26 lumbar spine findings, as benchmarked by reference-standard annotation by medical experts. Machine-learned models provided substantial gains in sensitivity with a slight loss of specificity and an overall higher AUC.
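
The abstract describes the evaluation design: per-finding inter-rater agreement measured with Cohen's kappa, an 80%/20% development/testing split, and per-finding sensitivity, specificity, and AUC on the held-out test reports. As a minimal illustrative sketch only (not the authors' implementation; the paper reports using open-source tools, but scikit-learn and the toy data below are assumptions), these quantities could be computed for a single finding along the following lines:

```python
# Hypothetical sketch of the abstract's evaluation steps for ONE finding.
# Not the authors' code; scikit-learn and the toy data are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score, confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# --- Inter-rater agreement for one finding (two annotators, same reports) ---
# Toy labels only; not study data.
annotator_a = np.array([1, 0, 1, 1, 0, 0, 1, 0])
annotator_b = np.array([1, 0, 1, 0, 0, 0, 1, 0])
print(f"Cohen's kappa: {cohen_kappa_score(annotator_a, annotator_b):.2f}")

# --- Machine-learned classifier for one finding, evaluated on a 20% test split ---
rng = np.random.default_rng(0)
# Toy corpus standing in for annotated report text; 1 = finding present.
reports = ["disc bulge at L4-L5" if flag else "no significant abnormality"
           for flag in rng.integers(0, 2, size=200)]
labels = np.array([1 if "bulge" in text else 0 for text in reports])

train_text, test_text, y_train, y_test = train_test_split(
    reports, labels, test_size=0.20, stratify=labels, random_state=0)

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(train_text, y_train)

scores = model.predict_proba(test_text)[:, 1]  # P(finding present)
preds = (scores >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_test, preds).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(y_test, scores)
print(f"sensitivity={sensitivity:.2f}  specificity={specificity:.2f}  AUC={auc:.2f}")
```

In the study itself, each of the 26 findings would presumably be treated as its own binary classification task, with the reported average sensitivity, specificity, and AUC taken across findings.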

Original language: English (US)
Journal: Academic Radiology
ISSN: 1076-6332
DOI: 10.1016/j.acra.2018.03.008
State: Accepted/In press - Jan 1 2018

Fingerprint

Natural Language Processing
Low Back Pain
Spine
Area Under Curve
X-Rays
Radiology
ROC Curve
Magnetic Resonance Spectroscopy
Sensitivity and Specificity
Machine Learning
Health
Datasets

Keywords

  • low back pain
  • lumbar spine diagnostic imaging
  • natural language processing

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging

Cite this

Tan, W. K., Hassanpour, S., Heagerty, P. J., Rundell, S. D., Suri, P., Huhdanpaa, H. T., James, K., Carrell, D. S., Langlotz, C. P., Organ, N. L., Meier, E. N., Sherman, K. J., Kallmes, D. F., Luetmer, P. H., Griffith, B., Nerenz, D. R., & Jarvik, J. G. (2018). Comparison of Natural Language Processing Rules-based and Machine-learning Systems to Identify Lumbar Spine Imaging Findings Related to Low Back Pain. Academic Radiology. https://doi.org/10.1016/j.acra.2018.03.008