A new strategy for speeding Markov chain Monte Carlo algorithms

Antonietta Mira, Daniel J. Sargent

Research output: Contribution to journal › Article

7 Citations (Scopus)

Abstract

Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow mixing and long run-times, which are frequently closely related. Mixing over the entire state space can often be aided by careful tuning of the chain's transition kernel. In order to preserve the algorithm's stationary distribution, however, care must be taken when updating a chain's transition kernel based on that same chain's history. In this paper we introduce a technique that allows the transition kernel of the Gibbs sampler to be updated at user-specified intervals, while preserving the chain's stationary distribution. This technique seems to be beneficial both in increasing efficiency of the resulting estimates (via Rao-Blackwellization) and in reducing the run-time. A reinterpretation, in terms of auxiliary samples, of the modified Gibbs sampling scheme introduced here allows its extension to the more general Metropolis-Hastings framework. The strategies we develop are particularly helpful when calculation of the full conditional (for a Gibbs algorithm) or of the proposal distribution (for a Metropolis-Hastings algorithm) is computationally expensive.
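The abstract's caution about updating a kernel from the chain's own history can be illustrated with a generic random-walk Metropolis sketch. This is not the authors' scheme, only a minimal illustration of the underlying issue: the proposal scale is re-tuned at fixed intervals during an initial phase and then frozen, since adapting indefinitely can destroy the stationary distribution. All function and parameter names here are illustrative.

```python
import random
import math

def metropolis_hastings(log_target, x0, n_steps, scale=1.0,
                        tune_every=500, tune_until=2500):
    """Random-walk Metropolis sampler whose proposal scale is re-tuned
    at fixed intervals during an initial adaptation phase only, then
    frozen so the chain's stationary distribution is preserved."""
    x, samples, accepted = x0, [], 0
    for t in range(1, n_steps + 1):
        y = x + random.gauss(0.0, scale)
        # Accept with probability min(1, pi(y)/pi(x)).
        if math.log(random.random()) < log_target(y) - log_target(x):
            x, accepted = y, accepted + 1
        samples.append(x)
        # Naive adaptation for the whole run would use the chain's own
        # history forever; here it stops after `tune_until` steps.
        if t % tune_every == 0 and t <= tune_until:
            rate = accepted / t
            scale *= math.exp(rate - 0.234)  # nudge toward ~23% acceptance

    return samples

# Sample a standard normal and discard a burn-in period.
random.seed(1)
draws = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)[5000:]
mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
```

With the target N(0, 1), the post-burn-in sample mean and variance should land near 0 and 1, respectively; the paper's contribution is precisely a way to keep updating the kernel at user-specified intervals without having to freeze adaptation in this crude way.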

Original language: English (US)
Pages (from-to): 49-60
Number of pages: 12
Journal: Statistical Methods and Applications
Volume: 12
Issue number: 1
DOIs: 10.1007/s10260-003-0052-4
State: Published - 2003

Keywords

  • Asymptotic variance
  • Efficiency
  • Gibbs sampler
  • Metropolis Hastings algorithms
  • Rao-Blackwellization

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

Cite this

A new strategy for speeding Markov chain Monte Carlo algorithms. / Mira, Antonietta; Sargent, Daniel J.

In: Statistical Methods and Applications, Vol. 12, No. 1, 2003, p. 49-60.

@article{d34337c875e44a81b70b125eb58d23b5,
title = "A new strategy for speeding Markov chain Monte Carlo algorithms",
keywords = "Asymptotic variance, Efficiency, Gibbs sampler, Metropolis Hastings algorithms, Rao-Blackwellization",
author = "Mira, Antonietta and Sargent, {Daniel J.}",
year = "2003",
doi = "10.1007/s10260-003-0052-4",
language = "English (US)",
volume = "12",
pages = "49--60",
journal = "Statistical Methods and Applications",
issn = "1618-2510",
publisher = "Physica-Verlag",
number = "1",
}

TY - JOUR

T1 - A new strategy for speeding Markov chain Monte Carlo algorithms

AU - Mira, Antonietta

AU - Sargent, Daniel J.

PY - 2003

Y1 - 2003

KW - Asymptotic variance

KW - Efficiency

KW - Gibbs sampler

KW - Metropolis Hastings algorithms

KW - Rao-Blackwellization

UR - http://www.scopus.com/inward/record.url?scp=57849109202&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=57849109202&partnerID=8YFLogxK

U2 - 10.1007/s10260-003-0052-4

DO - 10.1007/s10260-003-0052-4

M3 - Article

AN - SCOPUS:57849109202

VL - 12

SP - 49

EP - 60

JO - Statistical Methods and Applications

JF - Statistical Methods and Applications

SN - 1618-2510

IS - 1

ER -