Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences

Benjamin A. Goldstein, Eric Polley, Farren B.S. Briggs, Mark J. van der Laan, Alan Hubbard

Research output: Contribution to journal › Article

1 Citation (Scopus)

Abstract

Comparing the relative fit of competing models can be used to address many different scientific questions. In classical statistics one can, if appropriate, use likelihood ratio tests and information-based criteria, whereas clinical medicine has tended to rely on comparisons of fit metrics like C-statistics. However, for many data-adaptive modeling procedures such approaches are not suitable. In these cases, statisticians have used cross-validation, which can make inference challenging. In this paper we propose a general approach that focuses on the "conditional" risk difference (conditional on the model fits being fixed) for the improvement in prediction risk. Specifically, we derive a Wald-type test statistic and associated confidence intervals for cross-validated test sets, utilizing the independent validation sets within cross-validation in conjunction with a test for multiple comparisons. We show that this test maintains proper Type I error under the null fit, and can be used as a general test of relative fit for any semiparametric model alternative. We apply the test to a candidate gene study to test for the association of a set of genes in a genetic pathway.

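The abstract outlines the mechanics: hold the cross-validated fits fixed, pool the per-observation loss differences from the independent validation folds, and form a Wald-type statistic from their mean and standard error. The sketch below illustrates that idea in Python. It is not the authors' implementation; the squared-error loss, the scikit-learn estimators, and the helper name cv_risk_difference_test are assumptions made for this example, and a real application comparing many candidate algorithms would also adjust for multiple comparisons, as the paper describes.

import numpy as np
from scipy import stats
from sklearn.model_selection import KFold
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

def cv_risk_difference_test(X, y, model_a, model_b, n_splits=10, alpha=0.05, seed=0):
    """Wald-type test of H0: no difference in conditional prediction risk.

    Illustrative sketch: treats the fold-specific fits as fixed and uses
    the independence of the validation-set observations to justify a
    normal approximation for the mean loss difference.
    """
    diffs = np.empty(len(y))
    kf = KFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in kf.split(X):
        pred_a = model_a.fit(X[train_idx], y[train_idx]).predict(X[test_idx])
        pred_b = model_b.fit(X[train_idx], y[train_idx]).predict(X[test_idx])
        # Per-observation squared-error loss difference on the held-out fold.
        diffs[test_idx] = (y[test_idx] - pred_a) ** 2 - (y[test_idx] - pred_b) ** 2
    est = diffs.mean()                            # cross-validated risk difference
    se = diffs.std(ddof=1) / np.sqrt(len(diffs))  # standard error of the mean
    z = est / se
    p_value = 2 * stats.norm.sf(abs(z))           # two-sided Wald test
    half_width = stats.norm.ppf(1 - alpha / 2) * se
    return est, (est - half_width, est + half_width), p_value

# Example with simulated data: random forest versus a linear model.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
y = X[:, 0] ** 2 + rng.normal(size=500)
est, ci, p = cv_risk_difference_test(
    X, y, RandomForestRegressor(n_estimators=200, random_state=1), LinearRegression()
)
print(f"risk difference: {est:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f}), p = {p:.4f}")

A negative estimate favors model_a under this sign convention; the confidence interval follows directly from the same normal approximation as the test.
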
Original language: English (US)
Pages (from-to): 117-129
Number of pages: 13
Journal: International Journal of Biostatistics
ISSN: 1557-4679
Publisher: Berkeley Electronic Press
Volume: 12
Issue number: 1
DOI: 10.1515/ijb-2015-0014
PubMed ID: 26529567
State: Published - May 1, 2016
Externally published: Yes

Keywords

  • cross-validation
  • machine learning
  • risk prediction
  • semi-parametric models

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Cite this

Goldstein, B. A., Polley, E., Briggs, F. B. S., van der Laan, M. J., & Hubbard, A. (2016). Testing the Relative Performance of Data Adaptive Prediction Algorithms: A Generalized Test of Conditional Risk Differences. International Journal of Biostatistics, 12(1), 117-129. https://doi.org/10.1515/ijb-2015-0014