Test-enhanced web-based learning

Optimizing the number of questions (a Randomized Crossover Trial)

David Allan Cook, Warren G. Thompson, Kris G. Thomas

Research output: Contribution to journal › Article

12 Citations (Scopus)

Abstract

PURPOSE: Questions enhance learning in Web-based courses, but preliminary evidence suggests that too many questions may interfere with learning. The authors sought to determine how varying the number of self-assessment questions affects knowledge outcomes in a Web-based course.

METHOD: The authors conducted a randomized crossover trial in one internal medicine and one family medicine residency program between January 2009 and July 2010. Eight Web-based modules on ambulatory medicine topics were developed, with varying numbers of self-assessment questions (0, 1, 5, 10, or 15). Participants completed modules in four different formats each year, with sequence randomly assigned. Participants completed a pretest for half their modules. Outcomes included knowledge, completion time, and module ratings.

RESULTS: One hundred eighty residents provided data. The mean (standard error) percent correct knowledge score was 53.2 (0.8) for pretests and 73.7 (0.5) for posttests. In repeated-measures analysis pooling all data, mean posttest knowledge scores were highest for the 10- and 15-question formats (75.7 [1.1] and 74.4 [1.0], respectively) and lower for the 0-, 1-, and 5-question formats (73.1 [1.3], 72.9 [1.0], and 72.8 [1.5], respectively); P = .04 for differences across all modules. Modules with more questions generally took longer to complete and were rated higher, although differences were small. Residents most often identified 10 questions as ideal. Posttest knowledge scores were higher for modules that included a pretest (75.4 [0.9] versus 72.2 [0.9]; P = .0002).

CONCLUSIONS: Increasing the number of self-assessment questions improves learning up to a plateau, beyond which additional questions do not add value.

Original language: English (US)
Pages (from-to): 169-175
Number of pages: 7
Journal: Academic Medicine
Volume: 89
Issue number: 1
DOI: 10.1097/ACM.0000000000000084
State: Published - Jan 2014

ASJC Scopus subject areas

  • Medicine (all)
  • Education

Cite this

Test-enhanced web-based learning: Optimizing the number of questions (a Randomized Crossover Trial). / Cook, David Allan; Thompson, Warren G.; Thomas, Kris G.

In: Academic Medicine, Vol. 89, No. 1, 01.2014, p. 169-175.

Research output: Contribution to journal › Article

@article{ea7a42fc7c254c31a4cba21c752afa0f,
title = "Test-enhanced web-based learning: Optimizing the number of questions (a Randomized Crossover Trial)",
abstract = "PURPOSE: Questions enhance learning in Web-based courses, but preliminary evidence suggests that too many questions may interfere with learning. The authors sought to determine how varying the number of self-assessment questions affects knowledge outcomes in a Web-based course. METHOD: The authors conducted a randomized crossover trial in one internal medicine and one family medicine residency program between January 2009 and July 2010. Eight Web-based modules on ambulatory medicine topics were developed, with varying numbers of self-assessment questions (0, 1, 5, 10, or 15). Participants completed modules in four different formats each year, with sequence randomly assigned. Participants completed a pretest for half their modules. Outcomes included knowledge, completion time, and module ratings. RESULTS: One hundred eighty residents provided data. The mean (standard error) percent correct knowledge score was 53.2 (0.8) for pretests and 73.7 (0.5) for posttests. In repeated-measures analysis pooling all data, mean posttest knowledge scores were highest for the 10- and 15-question formats (75.7 [1.1] and 74.4 [1.0], respectively) and lower for 0-, 1-, and 5-question formats (73.1 [1.3], 72.9 [1.0], and 72.8 [1.5], respectively); P = .04 for differences across all modules. Modules with more questions generally took longer to complete and were rated higher, although differences were small. Residents most often identified 10 questions as ideal. Posttest knowledge scores were higher for modules that included a pretest (75.4 [0.9] versus 72.2 [0.9]; P = .0002). CONCLUSIONS: Increasing the number of self-assessment questions improves learning until a plateau beyond which additional questions do not add value.",
author = "Cook, {David Allan} and Thompson, {Warren G.} and Thomas, {Kris G.}",
year = "2014",
month = jan,
doi = "10.1097/ACM.0000000000000084",
language = "English (US)",
volume = "89",
pages = "169--175",
journal = "Academic Medicine",
issn = "1040-2446",
publisher = "Lippincott Williams and Wilkins",
number = "1",
}

TY - JOUR

T1 - Test-enhanced web-based learning

T2 - Optimizing the number of questions (a Randomized Crossover Trial)

AU - Cook, David Allan

AU - Thompson, Warren G.

AU - Thomas, Kris G.

PY - 2014/1

Y1 - 2014/1

AB - PURPOSE: Questions enhance learning in Web-based courses, but preliminary evidence suggests that too many questions may interfere with learning. The authors sought to determine how varying the number of self-assessment questions affects knowledge outcomes in a Web-based course. METHOD: The authors conducted a randomized crossover trial in one internal medicine and one family medicine residency program between January 2009 and July 2010. Eight Web-based modules on ambulatory medicine topics were developed, with varying numbers of self-assessment questions (0, 1, 5, 10, or 15). Participants completed modules in four different formats each year, with sequence randomly assigned. Participants completed a pretest for half their modules. Outcomes included knowledge, completion time, and module ratings. RESULTS: One hundred eighty residents provided data. The mean (standard error) percent correct knowledge score was 53.2 (0.8) for pretests and 73.7 (0.5) for posttests. In repeated-measures analysis pooling all data, mean posttest knowledge scores were highest for the 10- and 15-question formats (75.7 [1.1] and 74.4 [1.0], respectively) and lower for 0-, 1-, and 5-question formats (73.1 [1.3], 72.9 [1.0], and 72.8 [1.5], respectively); P = .04 for differences across all modules. Modules with more questions generally took longer to complete and were rated higher, although differences were small. Residents most often identified 10 questions as ideal. Posttest knowledge scores were higher for modules that included a pretest (75.4 [0.9] versus 72.2 [0.9]; P = .0002). CONCLUSIONS: Increasing the number of self-assessment questions improves learning until a plateau beyond which additional questions do not add value.

UR - http://www.scopus.com/inward/record.url?scp=84893681105&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=84893681105&partnerID=8YFLogxK

U2 - 10.1097/ACM.0000000000000084

DO - 10.1097/ACM.0000000000000084

M3 - Article

VL - 89

SP - 169

EP - 175

JO - Academic Medicine

JF - Academic Medicine

SN - 1040-2446

IS - 1

ER -