The magnitude of small-study effects in the Cochrane Database of Systematic Reviews: An empirical study of nearly 30 000 meta-analyses

Lifeng Lin, Linyu Shi, Haitao Chu, Mohammad H Murad

Research output: Contribution to journal › Article

Abstract

Publication bias, more generally termed the small-study effect, is a major threat to the validity of meta-analyses. Most meta-analysts rely on the p values from statistical tests to make a binary decision about the presence or absence of small-study effects. Measures are available to quantify the magnitude of small-study effects, but the current literature lacks clear rules to help evidence users judge whether such effects are minimal or substantial. This article aims to provide rules of thumb for interpreting these measures. We use six measures to evaluate small-study effects in 29 932 meta-analyses from the Cochrane Database of Systematic Reviews: Egger's regression intercept and the skewness under both the fixed-effect and random-effects settings, the proportion of suppressed studies, and the relative change in the estimated overall result due to small-study effects. Cut-offs for different extents of small-study effects are determined from the quantiles of the empirical distributions of these measures. We present these empirical distributions and propose a rough guide for interpreting the measures' magnitudes. The proposed rules of thumb may help evidence users grade the certainty in evidence as impacted by small-study effects.
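The abstract names the six measures without giving formulas. As a rough orientation, the minimal Python sketch below computes two of them under the fixed-effect setting: Egger's regression intercept (the intercept from regressing the standardized effects y/se on the precisions 1/se) and the sample skewness of the standardized deviates around the inverse-variance pooled estimate. The function names and toy data are illustrative assumptions, and the paper's exact skewness definition may differ in detail.

# Minimal sketch (not the authors' code): two of the six measures under
# the fixed-effect setting, given per-study effect estimates y and
# standard errors se. Variable names and toy data are assumptions.
import numpy as np

def egger_intercept(y, se):
    # Egger's regression: OLS of standardized effects y/se on
    # precisions 1/se; an intercept far from 0 signals funnel-plot
    # asymmetry, i.e., small-study effects.
    snd = y / se          # standardized normal deviates
    prec = 1.0 / se       # precisions
    slope, intercept = np.polyfit(prec, snd, deg=1)
    return intercept

def skewness_fixed_effect(y, se):
    # Sample skewness of the standardized deviates (y - mu_hat)/se,
    # where mu_hat is the inverse-variance (fixed-effect) pooled mean;
    # marked skewness suggests suppressed small studies.
    w = 1.0 / se**2
    mu_hat = np.sum(w * y) / np.sum(w)
    d = (y - mu_hat) / se
    d = d - d.mean()
    return np.mean(d**3) / np.mean(d**2) ** 1.5

# Toy data: log odds ratios and standard errors for eight studies
y = np.array([0.10, 0.25, 0.32, 0.05, 0.60, 0.45, 0.80, 0.15])
se = np.array([0.10, 0.15, 0.20, 0.12, 0.35, 0.30, 0.40, 0.18])
print(egger_intercept(y, se), skewness_fixed_effect(y, se))

The remaining measures (the random-effects analogues, the proportion of suppressed studies, and the relative change in the pooled estimate) additionally require a between-study variance estimate and a bias-correction method such as trim-and-fill, so they are omitted from this sketch.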

Original language: English (US)
Journal: BMJ Evidence-Based Medicine
DOIs: 10.1136/bmjebm-2019-111191
State: Published - Jan 1 2019

Keywords

  • epidemiology

ASJC Scopus subject areas

  • Medicine (all)

Cite this

@article{9821a27a71e34b3f954c3d5c70455b61,
title = "The magnitude of small-study effects in the Cochrane Database of Systematic Reviews: An empirical study of nearly 30 000 meta-analyses",
abstract = "Publication bias, more generally termed as small-study effect, is a major threat to the validity of meta-analyses. Most meta-analysts rely on the p values from statistical tests to make a binary decision about the presence or absence of small-study effects. Measures are available to quantify small-study effects' magnitude, but the current literature lacks clear rules to help evidence users in judging whether such effects are minimal or substantial. This article aims to provide rules of thumb for interpreting the measures. We use six measures to evaluate small-study effects in 29 932 meta-analyses from the Cochrane Database of Systematic Reviews. They include Egger's regression intercept and the skewness under both the fixed-effect and random-effects settings, the proportion of suppressed studies, and the relative change of the estimated overall result due to small-study effects. The cut-offs for different extents of small-study effects are determined based on the quantiles in these distributions. We present the empirical distributions of the six measures and propose a rough guide to interpret the measures' magnitude. The proposed rules of thumb may help evidence users grade the certainty in evidence as impacted by small-study effects.",
keywords = "epidemiology",
author = "Lifeng Lin and Linyu Shi and Haitao Chu and Murad, {Mohammad H}",
year = "2019",
month = "1",
day = "1",
doi = "10.1136/bmjebm-2019-111191",
language = "English (US)",
journal = "BMJ Evidence-Based Medicine",
issn = "2515-446X",
publisher = "BMJ Publishing Group",

}
