« Rao-Blackwellization of particle Markov chain Monte Carlo methods using forward filtering backward sampling »

Presented by Jimmy Olsson, Lund University, Sweden.

Thursday, November 8, at 2:00 pm, room C.06, TELECOM SudParis – Evry

Title:

Rao-Blackwellization of particle Markov chain Monte Carlo methods using forward filtering backward sampling

Abstract:

Smoothing in state-space models amounts to computing the conditional distribution of the latent state trajectory given the observations, or expectations of functionals of the state trajectory with respect to this distribution. In recent years there has been increased interest in Monte Carlo-based methods, often involving particle filters, for approximate smoothing in nonlinear and/or non-Gaussian state-space models. One such method is to approximate the filter distributions using a particle filter and then to simulate, using backward kernels, a state trajectory backwards over the set of particles. In this talk we show that by simulating multiple realizations of the particle filter and adding a Metropolis-Hastings step, one obtains a Markov chain Monte Carlo scheme whose stationary distribution is the exact smoothing distribution. This procedure expands upon a similar one recently proposed by Andrieu, Doucet, Holenstein, and Whiteley. We also show that simulating multiple trajectories from each realization of the particle filter can be beneficial from the perspective of variance versus computation time, and we illustrate this idea with two examples.
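To make the forward filtering backward sampling idea mentioned in the abstract concrete, here is a minimal sketch in Python/NumPy. It runs a bootstrap particle filter forward on a hypothetical linear-Gaussian model (chosen purely for illustration; the model, parameters, and variable names are my assumptions, not the speaker's), then draws one trajectory backwards using backward kernels whose weights combine the stored filter weights with the state transition density. The Metropolis-Hastings layer discussed in the talk is not shown here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian state-space model (illustration only):
#   x_t = a * x_{t-1} + N(0, q),   y_t = x_t + N(0, r)
a, q, r = 0.9, 1.0, 0.5
T, N = 50, 200  # number of time steps, number of particles

# Simulate a synthetic data set from the model.
x_true = np.zeros(T)
x_true[0] = rng.normal(0.0, 1.0)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x_true + rng.normal(0.0, np.sqrt(r), size=T)

# --- Forward pass: bootstrap particle filter, storing all particles/weights ---
particles = np.zeros((T, N))
weights = np.zeros((T, N))
particles[0] = rng.normal(0.0, 1.0, size=N)
logw = -0.5 * (y[0] - particles[0]) ** 2 / r
weights[0] = np.exp(logw - logw.max())
weights[0] /= weights[0].sum()
for t in range(1, T):
    idx = rng.choice(N, size=N, p=weights[t - 1])      # multinomial resampling
    particles[t] = a * particles[t - 1, idx] + rng.normal(0.0, np.sqrt(q), size=N)
    logw = -0.5 * (y[t] - particles[t]) ** 2 / r       # Gaussian observation likelihood
    weights[t] = np.exp(logw - logw.max())
    weights[t] /= weights[t].sum()

# --- Backward pass: sample one trajectory from the particle approximation ---
traj = np.zeros(T)
j = rng.choice(N, p=weights[T - 1])
traj[T - 1] = particles[T - 1, j]
for t in range(T - 2, -1, -1):
    # Backward weights: filter weight times transition density to the chosen x_{t+1}.
    logb = np.log(weights[t]) - 0.5 * (traj[t + 1] - a * particles[t]) ** 2 / q
    b = np.exp(logb - logb.max())
    b /= b.sum()
    j = rng.choice(N, p=b)
    traj[t] = particles[t, j]
```

Each backward draw produces a trajectory supported on the forward particles; repeating the backward pass on the same filter realization yields the multiple trajectories whose variance/computation trade-off the talk examines.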