Syntactic and semantic errors in radiology reports associated with speech recognition software

Michael D. Ringler, Brian C. Goss, Brian J. Bartholmai

Research output: Contribution to journal › Article › peer-review

9 Scopus citations

Abstract

Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 different radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter interpretation of the report. "Immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and error type were compared among individual radiologists, imaging subspecialty, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3,992 (1.9%) were material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). Proportion of errors and fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting results of outside examinations, and procedural studies (all p < .001). Error rate decreased over time (p < .001), which suggests that a quality control program with regular feedback may reduce errors.
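As a sanity check (not part of the published analysis), the headline proportions in the abstract can be recomputed directly from the raw counts it reports:

```python
# Recomputing the abstract's reported proportions from its raw counts.
# These figures come from the abstract itself; the script is illustrative only.
total_reports = 213977        # speech recognition-generated reports reviewed
reports_with_errors = 20759   # reports containing any error
material_errors = 3992        # errors believed to alter report interpretation

error_rate = reports_with_errors / total_reports
material_rate = material_errors / total_reports

print(f"Overall error rate:  {error_rate:.1%}")   # ~9.7%
print(f"Material error rate: {material_rate:.1%}")  # ~1.9%
```

Both rounded values match the percentages stated in the abstract.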

Original language: English (US)
Pages (from-to): 3-13
Number of pages: 11
Journal: Health Informatics Journal
Volume: 23
Issue number: 1
DOIs
State: Published - Mar 1 2017

Keywords

  • PowerScribe
  • quality control
  • radiology report
  • report errors
  • speech recognition

ASJC Scopus subject areas

  • Health Informatics

