A new strategy for speeding Markov chain Monte Carlo algorithms

Antonietta Mira, Daniel J. Sargent

Research output: Contribution to journal › Article › peer-review


Abstract

Markov chain Monte Carlo (MCMC) methods have become popular as a basis for drawing inference from complex statistical models. Two common difficulties with MCMC algorithms are slow mixing and long run-times, which are frequently closely related. Mixing over the entire state space can often be aided by careful tuning of the chain's transition kernel. To preserve the algorithm's stationary distribution, however, care must be taken when updating a chain's transition kernel based on that same chain's history. In this paper we introduce a technique that allows the transition kernel of the Gibbs sampler to be updated at user-specified intervals while preserving the chain's stationary distribution. This technique appears to be beneficial both in increasing the efficiency of the resulting estimates (via Rao-Blackwellization) and in reducing the run-time. A reinterpretation, in terms of auxiliary samples, of the modified Gibbs sampling scheme allows its extension to the more general Metropolis-Hastings framework. The strategies we develop are particularly helpful when calculation of the full conditional (for a Gibbs algorithm) or of the proposal distribution (for a Metropolis-Hastings algorithm) is computationally expensive.
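
The abstract does not spell out the algorithm, but a minimal sketch can illustrate the Rao-Blackwellization idea it refers to in the simplest Gibbs setting. The example below uses an assumed bivariate normal target with correlation rho and compares the naive ergodic average of the x-draws with a Rao-Blackwellized estimator that averages the closed-form conditional means E[X | Y = y]. It is a generic illustration under these assumptions, not the kernel-updating scheme of the paper, and all names and parameter values are chosen for the example.

```python
import numpy as np

# Generic illustration (not the paper's scheme): a Gibbs sampler for a
# bivariate normal target with correlation rho, comparing the naive
# estimator of E[X] with its Rao-Blackwellized counterpart, which
# averages the conditional means E[X | Y = y_t] instead of the raw draws.

rho = 0.9            # assumed correlation of the illustrative target
n_iter = 10_000
rng = np.random.default_rng(0)

x, y = 0.0, 0.0
draws_x, cond_means_x = [], []

for _ in range(n_iter):
    # Full conditional X | Y = y is N(rho * y, 1 - rho**2)
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    # Full conditional Y | X = x is N(rho * x, 1 - rho**2)
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    draws_x.append(x)
    cond_means_x.append(rho * y)   # E[X | Y = y_t], known in closed form here

print("naive estimate of E[X]:            ", np.mean(draws_x))
print("Rao-Blackwellized estimate of E[X]:", np.mean(cond_means_x))
```

In this toy target the Rao-Blackwellized average typically has smaller Monte Carlo variance than the naive one; this is the kind of efficiency gain, obtained by averaging conditional expectations rather than raw draws, that the abstract attributes to the proposed strategy.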

Original language: English (US)
Pages (from-to): 49-60
Number of pages: 12
Journal: Statistical Methods and Applications
Volume: 12
Issue number: 1
DOIs
State: Published - 2003

Keywords

  • Asymptotic variance
  • Efficiency
  • Gibbs sampler
  • Metropolis-Hastings algorithms
  • Rao-Blackwellization

ASJC Scopus subject areas

  • Statistics and Probability
  • Statistics, Probability and Uncertainty

