Abstract
Speech recognition software can increase the frequency of errors in radiology reports, which may affect patient care. We retrieved 213,977 speech recognition software-generated reports from 147 radiologists and proofread them for errors. Errors were classified as "material" if they were believed to alter the interpretation of the report; "immaterial" errors were subclassified as intrusion/omission or spelling errors. The proportion of errors and the error types were compared among individual radiologists, imaging subspecialties, and time periods. In all, 20,759 reports (9.7%) contained errors, of which 3992 (1.9%) contained material errors. Among immaterial errors, spelling errors were more common than intrusion/omission errors (p < .001). The proportion of errors and the fraction of material errors varied significantly among radiologists and between imaging subspecialties (p < .001). Errors were more common in cross-sectional reports, reports reinterpreting outside examinations, and procedural studies (all p < .001). The error rate decreased over time (p < .001), suggesting that a quality control program with regular feedback may reduce errors.
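The headline rates quoted in the abstract follow directly from the reported counts. A minimal Python sketch (using only the counts given in the abstract) recomputes them as a sanity check:

```python
# Counts reported in the abstract.
total_reports = 213_977       # speech-recognition-generated reports retrieved
reports_with_errors = 20_759  # reports containing at least one error
material_errors = 3_992       # reports with errors that could alter interpretation

# Recompute the quoted proportions.
overall_error_rate = reports_with_errors / total_reports   # reported as 9.7%
material_error_rate = material_errors / total_reports      # reported as 1.9%

print(f"Overall error rate:  {overall_error_rate:.1%}")
print(f"Material error rate: {material_error_rate:.1%}")
```

Both values round to the percentages reported in the abstract (9.7% and 1.9%), so the raw counts and quoted rates are internally consistent.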
Original language | English (US) |
---|---|
Pages (from-to) | 3-13 |
Number of pages | 11 |
Journal | Health Informatics Journal |
Volume | 23 |
Issue number | 1 |
DOIs | |
State | Published - Mar 1 2017 |
Keywords
- PowerScribe
- quality control
- radiology report
- report errors
- speech recognition
ASJC Scopus subject areas
- Health Informatics