Got power? A systematic review of sample size adequacy in health professions education research

David Allan Cook, Rose Hatala

Research output: Contribution to journal › Review article

Abstract

Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMDs). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.
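The power and "smallest excludable effect" figures above follow from standard two-sample t-test calculations. As a minimal sketch of how such numbers can be reproduced (assuming a two-sided α = 0.05, equal group sizes, and an independent-groups design, none of which the record itself specifies):

```python
# Sketch of the power and CI-exclusion calculations behind the abstract's
# figures. Assumptions (not stated in the record): two-sided alpha = 0.05,
# equal group sizes, independent-groups t-test.
import numpy as np
from scipy import stats

def two_sample_power(d, n_per_group, alpha=0.05):
    """Power of a two-sided two-sample t-test to detect SMD d."""
    df = 2 * n_per_group - 2
    ncp = d * np.sqrt(n_per_group / 2)  # noncentrality parameter, equal n
    t_crit = stats.t.ppf(1 - alpha / 2, df)
    # P(|T| > t_crit) under the noncentral t distribution
    return (1 - stats.nct.cdf(t_crit, df, ncp)) + stats.nct.cdf(-t_crit, df, ncp)

def smd_confidence_interval(d, n1, n2, alpha=0.05):
    """Approximate CI for an observed SMD (large-sample normal form)."""
    se = np.sqrt(1 / n1 + 1 / n2 + d**2 / (2 * (n1 + n2)))
    z = stats.norm.ppf(1 - alpha / 2)
    return d - z * se, d + z * se

# With roughly 13 participants per group (a median total sample near 25),
# power to detect a small effect (d = 0.2) is far below 80%, and even a
# large effect (d = 0.8) yields only modest power:
print(two_sample_power(0.2, 13))
print(two_sample_power(0.8, 13))

# A null result of, say, d = 0.1 in such a study cannot exclude even a
# large effect: the confidence interval's upper bound exceeds 0.8.
print(smd_confidence_interval(0.1, 13, 13))
```

The per-group size of 13 and the observed d = 0.1 are hypothetical values chosen to illustrate the record's point, not figures taken from the review.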

Original language: English (US)
Pages (from-to): 73-83
Number of pages: 11
Journal: Advances in Health Sciences Education: Theory and Practice
Volume: 20
Issue number: 1
DOI: 10.1007/s10459-014-9509-5
State: Published - Mar 1, 2015


ASJC Scopus subject areas

  • Medicine(all)
  • Education

Cite this

Got power? A systematic review of sample size adequacy in health professions education research. / Cook, David Allan; Hatala, Rose.

In: Advances in health sciences education : theory and practice, Vol. 20, No. 1, 01.03.2015, p. 73-83.


@article{533c423363884a4d84f22d99641f2562,
title = "Got power? A systematic review of sample size adequacy in health professions education research",
abstract = "Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMD's). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3{\%}) had ≥80{\%} power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22{\%}) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43{\%}) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3{\%}) had ≥80{\%} power to detect a small difference and 79 (27{\%}) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3{\%}) excluded a small difference and 91 (71{\%}) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.",
author = "Cook, {David Allan} and Rose Hatala",
year = "2015",
month = "3",
day = "1",
doi = "10.1007/s10459-014-9509-5",
language = "English (US)",
volume = "20",
pages = "73--83",
journal = "Advances in Health Sciences Education",
issn = "1382-4996",
publisher = "Springer Netherlands",
number = "1",
}

TY - JOUR

T1 - Got power? A systematic review of sample size adequacy in health professions education research

AU - Cook, David Allan

AU - Hatala, Rose

PY - 2015/3/1

Y1 - 2015/3/1

N2 - Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMD's). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.

AB - Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011, and included all studies evaluating simulation-based education for health professionals in comparison with no intervention or another simulation intervention. Reviewers working in duplicate abstracted information to calculate standardized mean differences (SMD's). We included 897 original research studies. Among the 627 no-intervention-comparison studies the median sample size was 25. Only two studies (0.3%) had ≥80% power to detect a small difference (SMD > 0.2 standard deviations) and 136 (22%) had power to detect a large difference (SMD > 0.8). 110 no-intervention-comparison studies failed to find a statistically significant difference, but none excluded a small difference and only 47 (43%) excluded a large difference. Among 297 studies comparing alternate simulation approaches the median sample size was 30. Only one study (0.3%) had ≥80% power to detect a small difference and 79 (27%) had power to detect a large difference. Of the 128 studies that did not detect a statistically significant effect, 4 (3%) excluded a small difference and 91 (71%) excluded a large difference. In conclusion, most education research studies are powered only to detect effects of large magnitude. For most studies that do not reach statistical significance, the possibility of large and important differences still exists.

UR - http://www.scopus.com/inward/record.url?scp=85017320789&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85017320789&partnerID=8YFLogxK

U2 - 10.1007/s10459-014-9509-5

DO - 10.1007/s10459-014-9509-5

M3 - Review article

C2 - 24819405

VL - 20

SP - 73

EP - 83

JO - Advances in Health Sciences Education

JF - Advances in Health Sciences Education

SN - 1382-4996

IS - 1

ER -