Examiner accuracy in cognitive testing in multisite brain-tumor clinical trials: An analysis from the Alliance for Clinical Trials in Oncology

Jane H. Cerhan, S. Keith Anderson, Alissa M. Butts, Alyx B. Porter, Kurt Jaeckle, Evanthia Galanis, Paul D. Brown

Research output: Contribution to journal › Article

Abstract

Background: Cognitive function is an important outcome in brain-tumor clinical trials. Cognitive examiners, many of whom have no prior testing experience, are often needed across multiple sites. To ensure quality, we looked at examiner errors in administering a commonly used cognitive test battery, determined whether the errors were correctable upon central review, and considered whether the same errors would be detected using onsite electronic data entry. Methods: We looked at 500 cognitive exams administered for brain-tumor trials led by the Alliance for Clinical Trials in Oncology (Alliance). Of 2277 tests examined, 32 noncorrectable errors were detected with routine central review (1.4% of tests administered), and the affected tests were thus removed from the database of the respective trial. The invalidation rate for each test was 0.8% for each part of the Hopkins Verbal Learning Test-Revised, 0.8% for Controlled Oral Word Association, 1.8% for Trail Making Test-A, and 2.6% for Trail Making Test-B. It was estimated that, with onsite data entry and no central review, 4.9% of the tests entered would have uncorrected errors and 1.3% of entered tests would be frankly invalid but not removed. Conclusions: Cognitive test results are useful and robust outcome measures for brain-tumor clinical trials. Error rates are extremely low, and almost all are correctable with central review of scoring, which is easy to accomplish. We caution that many errors could be missed if onsite electronic entry is utilized instead of central review, and it would be important to mitigate the risk of invalid scores being entered. ClinicalTrials.gov identifiers: NCT01781468 (Alliance A221101), NCT01372774 (NCCTG N107C), NCT00731731 (NCCTG N0874), and NCT00887146 (NCCTG N0577).
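As a quick arithmetic check (not part of the article), the sketch below simply re-derives the headline 1.4% noncorrectable-error rate from the counts stated in the abstract (32 errors across 2277 tests reviewed); the per-test invalidation rates and the projected onsite-entry rates would follow the same calculation with per-test denominators that are not reported here.

```python
# Minimal sketch: recompute the noncorrectable-error rate reported in the abstract.
# The counts below come directly from the abstract; nothing else is assumed.

def error_rate(errors: int, total: int) -> float:
    """Return the error rate as a percentage, rounded to one decimal place."""
    return round(100 * errors / total, 1)

TESTS_EXAMINED = 2277        # individual tests reviewed across 500 cognitive exams
NONCORRECTABLE_ERRORS = 32   # errors that could not be fixed on central review

print(error_rate(NONCORRECTABLE_ERRORS, TESTS_EXAMINED))  # 1.4 (%), matching the abstract
```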

Original language: English (US)
Article number: npy048
Pages (from-to): 283-288
Number of pages: 6
Journal: Neuro-Oncology Practice
Volume: 6
Issue number: 4
DOIs: 10.1093/nop/npy048
State: Published - Jul 27 2019

Fingerprint

  • Brain Neoplasms
  • Trail Making Test
  • Clinical Trials
  • Verbal Learning
  • Cognition
  • Outcome Assessment (Health Care)
  • Databases

Keywords

  • clinical trials
  • cognitive testing
  • neurocognitive

ASJC Scopus subject areas

  • Medicine (miscellaneous)

Cite this

Examiner accuracy in cognitive testing in multisite brain-tumor clinical trials: An analysis from the Alliance for Clinical Trials in Oncology. / Cerhan, Jane H.; Anderson, S. Keith; Butts, Alissa M.; Porter, Alyx B.; Jaeckle, Kurt; Galanis, Evanthia; Brown, Paul D.

In: Neuro-Oncology Practice, Vol. 6, No. 4, npy048, 27.07.2019, p. 283-288.

Research output: Contribution to journal › Article

@article{682e6f7b18734e188edf968949b4f14d,
title = "Examiner accuracy in cognitive testing in multisite brain-tumor clinical trials: An analysis from the Alliance for Clinical Trials in Oncology",
abstract = "Background: Cognitive function is an important outcome in brain-tumor clinical trials. Cognitive examiners, many of whom have no prior testing experience, are often needed across multiple sites. To ensure quality, we looked at examiner errors in administering a commonly used cognitive test battery, determined whether the errors were correctable upon central review, and considered whether the same errors would be detected using onsite electronic data entry. Methods: We looked at 500 cognitive exams administered for brain-tumor trials led by the Alliance for Clinical Trials in Oncology (Alliance). Of 2277 tests examined, 32 noncorrectable errors were detected with routine central review (1.4{\%} of tests administered), and the affected tests were thus removed from the database of the respective trial. The invalidation rate for each test was 0.8{\%} for each part of the Hopkins Verbal Learning Test-Revised, 0.8{\%} for Controlled Oral Word Association, 1.8{\%} for Trail Making Test-A, and 2.6{\%} for Trail Making Test-B. It was estimated that, with onsite data entry and no central review, 4.9{\%} of the tests entered would have uncorrected errors and 1.3{\%} of entered tests would be frankly invalid but not removed. Conclusions: Cognitive test results are useful and robust outcome measures for brain-tumor clinical trials. Error rates are extremely low, and almost all are correctable with central review of scoring, which is easy to accomplish. We caution that many errors could be missed if onsite electronic entry is utilized instead of central review, and it would be important to mitigate the risk of invalid scores being entered. ClinicalTrials.gov identifiers: NCT01781468 (Alliance A221101), NCT01372774 (NCCTG N107C), NCT00731731 (NCCTG N0874), and NCT00887146 (NCCTG N0577).",
keywords = "clinical trials, cognitive testing, neurocognitive",
author = "Cerhan, {Jane H.} and Anderson, {S. Keith} and Butts, {Alissa M.} and Porter, {Alyx B.} and Kurt Jaeckle and Evanthia Galanis and Brown, {Paul D.}",
year = "2019",
month = "7",
day = "27",
doi = "10.1093/nop/npy048",
language = "English (US)",
volume = "6",
pages = "283--288",
journal = "Neuro-Oncology Practice",
issn = "2054-2577",
publisher = "Oxford University Press",
number = "4",

}

TY - JOUR

T1 - Examiner accuracy in cognitive testing in multisite brain-tumor clinical trials

T2 - An analysis from the Alliance for Clinical Trials in Oncology

AU - Cerhan, Jane H.

AU - Anderson, S. Keith

AU - Butts, Alissa M.

AU - Porter, Alyx B.

AU - Jaeckle, Kurt

AU - Galanis, Evanthia

AU - Brown, Paul D.

PY - 2019/7/27

Y1 - 2019/7/27

N2 - Background: Cognitive function is an important outcome in brain-tumor clinical trials. Cognitive examiners, many of whom have no prior testing experience, are often needed across multiple sites. To ensure quality, we looked at examiner errors in administering a commonly used cognitive test battery, determined whether the errors were correctable upon central review, and considered whether the same errors would be detected using onsite electronic data entry. Methods: We looked at 500 cognitive exams administered for brain-tumor trials led by the Alliance for Clinical Trials in Oncology (Alliance). Of 2277 tests examined, 32 noncorrectable errors were detected with routine central review (1.4% of tests administered), and the affected tests were thus removed from the database of the respective trial. The invalidation rate for each test was 0.8% for each part of the Hopkins Verbal Learning Test-Revised, 0.8% for Controlled Oral Word Association, 1.8% for Trail Making Test-A, and 2.6% for Trail Making Test-B. It was estimated that, with onsite data entry and no central review, 4.9% of the tests entered would have uncorrected errors and 1.3% of entered tests would be frankly invalid but not removed. Conclusions: Cognitive test results are useful and robust outcome measures for brain-tumor clinical trials. Error rates are extremely low, and almost all are correctable with central review of scoring, which is easy to accomplish. We caution that many errors could be missed if onsite electronic entry is utilized instead of central review, and it would be important to mitigate the risk of invalid scores being entered. ClinicalTrials.gov identifiers: NCT01781468 (Alliance A221101), NCT01372774 (NCCTG N107C), NCT00731731 (NCCTG N0874), and NCT00887146 (NCCTG N0577).

AB - Background: Cognitive function is an important outcome in brain-tumor clinical trials. Cognitive examiners, many of whom have no prior testing experience, are often needed across multiple sites. To ensure quality, we looked at examiner errors in administering a commonly used cognitive test battery, determined whether the errors were correctable upon central review, and considered whether the same errors would be detected using onsite electronic data entry. Methods: We looked at 500 cognitive exams administered for brain-tumor trials led by the Alliance for Clinical Trials in Oncology (Alliance). Of 2277 tests examined, 32 noncorrectable errors were detected with routine central review (1.4% of tests administered), and the affected tests were thus removed from the database of the respective trial. The invalidation rate for each test was 0.8% for each part of the Hopkins Verbal Learning Test-Revised, 0.8% for Controlled Oral Word Association, 1.8% for Trail Making Test-A, and 2.6% for Trail Making Test-B. It was estimated that, with onsite data entry and no central review, 4.9% of the tests entered would have uncorrected errors and 1.3% of entered tests would be frankly invalid but not removed. Conclusions: Cognitive test results are useful and robust outcome measures for brain-tumor clinical trials. Error rates are extremely low, and almost all are correctable with central review of scoring, which is easy to accomplish. We caution that many errors could be missed if onsite electronic entry is utilized instead of central review, and it would be important to mitigate the risk of invalid scores being entered. ClinicalTrials.gov identifiers: NCT01781468 (Alliance A221101), NCT01372774 (NCCTG N107C), NCT00731731 (NCCTG N0874), and NCT00887146 (NCCTG N0577).

KW - clinical trials

KW - cognitive testing

KW - neurocognitive

UR - http://www.scopus.com/inward/record.url?scp=85073909008&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85073909008&partnerID=8YFLogxK

U2 - 10.1093/nop/npy048

DO - 10.1093/nop/npy048

M3 - Article

AN - SCOPUS:85073909008

VL - 6

SP - 283

EP - 288

JO - Neuro-Oncology Practice

JF - Neuro-Oncology Practice

SN - 2054-2577

IS - 4

M1 - npy048

ER -