presentation mcb seminar 09032011

DESCRIPTION

Slides on the SMC^2 algorithm, by N. Chopin, P.E. Jacob, O. Papaspiliopoulos. Presentation at the MCB seminar on March 9th, 2011.

TRANSCRIPT
Introduction and State Space Models / Reminder on some Monte Carlo methods / Particle Markov Chain Monte Carlo / SMC2
SMC2: A sequential Monte Carlo algorithm with particle Markov chain Monte Carlo updates
N. CHOPIN (1), P.E. JACOB (2), & O. PAPASPILIOPOULOS (3)

MCB seminar, March 9th, 2011

(1) ENSAE-CREST
(2) CREST & Université Paris Dauphine, funded by AXA Research
(3) Universitat Pompeu Fabra
N. CHOPIN, P.E. JACOB, & O. PAPASPILIOPOULOS SMC2 1/ 72
Outline
1 Introduction and State Space Models
2 Reminder on some Monte Carlo methods
3 Particle Markov Chain Monte Carlo
4 SMC2
State Space Models
Context
In these models:
we observe some data Y_{1:T} = (Y_1, . . . , Y_T),
we suppose that they depend on some hidden states X_{1:T}.
State Space Models
A system of equations
Hidden states: p(x_1 | θ) = µ_θ(x_1), and for t ≥ 1:

p(x_{t+1} | x_{1:t}, θ) = p(x_{t+1} | x_t, θ) = f_θ(x_{t+1} | x_t)

Observations:

p(y_t | y_{1:t−1}, x_{1:t}, θ) = p(y_t | x_t, θ) = g_θ(y_t | x_t)
Parameter: θ ∈ Θ, prior p(θ).
State Space Models
Some interesting distributions
Bayesian inference focuses on:
p(θ | y_{1:T})

Filtering (traditionally) focuses on:

∀t ∈ [1,T], p_θ(x_t | y_{1:t})

Smoothing (traditionally) focuses on:

∀t ∈ [1,T], p_θ(x_t | y_{1:T})
State Space Models
Some interesting distributions [spoiler]
PMCMC methods provide a sample from:
p(θ, x_{1:T} | y_{1:T})

SMC2 provides a sample from:

∀t ∈ [1,T], p(θ, x_{1:t} | y_{1:t})
Examples
Local level model:

y_t = x_t + σ_V ε_t,   ε_t ∼ N(0, 1),
x_{t+1} = x_t + σ_W η_t,   η_t ∼ N(0, 1),
x_0 ∼ N(0, 1)

Here: θ = (σ_V, σ_W). The model is linear and Gaussian.
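For concreteness, the local level model above can be simulated in a few lines. This is our own illustrative sketch (the function name `simulate_local_level` and the parameter values are ours, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_local_level(T, sigma_v, sigma_w):
    """Simulate the local level model:
    x_1 ~ N(0, 1), x_{t+1} = x_t + sigma_w * eta_t, y_t = x_t + sigma_v * eps_t."""
    x = np.empty(T)
    y = np.empty(T)
    x[0] = rng.normal()
    for t in range(T):
        y[t] = x[t] + sigma_v * rng.normal()  # observation equation
        if t + 1 < T:
            x[t + 1] = x[t] + sigma_w * rng.normal()  # state equation
    return x, y

x, y = simulate_local_level(200, sigma_v=0.5, sigma_w=0.2)
```

Because the model is linear and Gaussian, everything below (filtering, likelihood) can be checked against the Kalman filter on this example.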
Examples
Stochastic Volatility (simple):

y_t | x_t ∼ N(0, e^{x_t})
x_t = µ + ρ(x_{t−1} − µ) + σ ε_t
x_0 = µ_0

Here: θ = (µ, ρ, σ), or can include µ_0.
Examples
Population growth model:

y_t = n_t + σ_w ε_t
log n_{t+1} = log n_t + b_0 + b_1 (n_t)^{b_2} + σ_ε η_t
log n_0 = µ_0

Here: θ = (b_0, b_1, b_2, σ_ε, σ_w), or can include µ_0.
Examples
Stochastic Volatility (sophisticated):

y_t = µ + β v_t + v_t^{1/2} ε_t,   t ≥ 1
k ∼ Poi(λ ξ²/ω²),   c_{1:k} iid∼ U(t, t + 1),   e_{1:k} iid∼ Exp(ξ/ω²)
z_{t+1} = e^{−λ} z_t + Σ_{j=1}^{k} e^{−λ(t+1−c_j)} e_j
v_{t+1} = (1/λ) [ z_t − z_{t+1} + Σ_{j=1}^{k} e_j ]
x_{t+1} = (v_{t+1}, z_{t+1})′
Examples
[Figure: two panels against time, (a) the observations and (b) the squared observations.]

Figure: The S&P 500 data from 03/01/2005 to 21/12/2007.
Examples
Athletics records model
g(y_{1:2,t} | µ_t, ξ, σ) = {1 − G(y_{2,t} | µ_t, ξ, σ)} × ∏_{i=1}^{2} [ g(y_{i,t} | µ_t, ξ, σ) / {1 − G(y_{i,t} | µ_t, ξ, σ)} ]

x_t = (µ_t, µ̇_t)′,   x_{t+1} | x_t, ν ∼ N(F x_t, Q),

with

F = ( 1  1 ; 0  1 )   and   Q = ν² ( 1/3  1/2 ; 1/2  1 )

G(y | µ, ξ, σ) = 1 − exp[ −{ 1 − ξ (y − µ)/σ }_{+}^{−1/ξ} ]
Examples
[Figure: times in seconds (y-axis, 480 to 530) against year (x-axis, 1980 to 2010).]

Figure: Best two times of each year, in women's 3000 metres events between 1976 and 2010.
Why are those models challenging?
It’s all about dimensions. . .
p_θ(x_{1:T} | y_{1:T}) = p_θ(y_{1:T} | x_{1:T}) p_θ(x_{1:T}) / p_θ(y_{1:T}) ∝ p_θ(y_{1:T} | x_{1:T}) p_θ(x_{1:T})

. . . even if it's not obvious

p(θ | y_{1:T}) ∝ p(y_{1:T} | θ) p(θ) = [ ∫_{X^T} p(y_{1:T} | x_{1:T}, θ) p(x_{1:T} | θ) dx_{1:T} ] p(θ)
Metropolis-Hastings algorithm
A popular method to sample from a distribution π.
Algorithm 1 Metropolis-Hastings algorithm
1: Set some x^{(1)}
2: for i = 2 to N do
3:   Propose x* ∼ q(· | x^{(i−1)})
4:   Compute the ratio:
       α = min( 1, [π(x*) / π(x^{(i−1)})] × [q(x^{(i−1)} | x*) / q(x* | x^{(i−1)})] )
5:   Set x^{(i)} = x* with probability α, otherwise set x^{(i)} = x^{(i−1)}
6: end for
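The algorithm above can be sketched in a few lines. This is a minimal illustration of ours, targeting a standard normal known only up to a constant; with a symmetric random-walk proposal the q-ratio cancels:

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_hastings(log_pi, x0, n_iter, step=1.0):
    """Random-walk Metropolis-Hastings: symmetric Gaussian proposal,
    so the acceptance ratio reduces to pi(x*) / pi(x^(i-1))."""
    chain = np.empty(n_iter)
    chain[0] = x0
    for i in range(1, n_iter):
        x_star = chain[i - 1] + step * rng.normal()   # propose x* ~ q(.|x^(i-1))
        log_alpha = log_pi(x_star) - log_pi(chain[i - 1])
        if np.log(rng.uniform()) < log_alpha:          # accept with probability alpha
            chain[i] = x_star
        else:
            chain[i] = chain[i - 1]                    # otherwise keep the old point
    return chain

# Target: standard normal, evaluated only up to a multiplicative constant.
chain = metropolis_hastings(lambda x: -0.5 * x**2, x0=0.0, n_iter=20000)
```

Note that `log_pi` only needs to be known up to an additive constant, which is exactly the first requirement discussed on the next slide.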
Metropolis-Hastings algorithm
Requirements

π can be evaluated point-wise, up to a multiplicative constant.
x is low-dimensional, otherwise designing q gets tedious or even impossible.

Back to SSM

p(θ | y_{1:T}) cannot be evaluated point-wise.
p_θ(x_{1:T} | y_{1:T}) and p(x_{1:T}, θ | y_{1:T}) are high-dimensional, and cannot necessarily be evaluated point-wise either.
Gibbs sampling
Suppose the target distribution π is defined on X^d.
Algorithm 2 Gibbs sampling
1: Set some x^{(1)}_{1:d}
2: for i = 2 to N do
3:   for j = 1 to d do
4:     Draw x^{(i)}_j ∼ π(x_j | x^{(i)}_{1:j−1}, x^{(i−1)}_{j+1:d})
5:   end for
6: end for

It breaks a high-dimensional sampling problem into many low-dimensional sampling problems!
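A classic textbook illustration of the scheme above (our own sketch, not from the slides) is the bivariate normal with correlation ρ, whose full conditionals are both Gaussian:

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_bivariate_normal(rho, n_iter):
    """Gibbs sampler for a bivariate normal with unit variances and
    correlation rho: each full conditional is N(rho * other, 1 - rho^2)."""
    x = np.zeros((n_iter, 2))
    sd = np.sqrt(1 - rho**2)
    for i in range(1, n_iter):
        x1 = rng.normal(rho * x[i - 1, 1], sd)  # draw x1 | x2
        x2 = rng.normal(rho * x1, sd)           # draw x2 | x1 (using the fresh x1)
        x[i] = (x1, x2)
    return x

sample = gibbs_bivariate_normal(rho=0.8, n_iter=20000)
```

The same example also shows the limitation stated on the next slide: the larger ρ, the slower the chain mixes, which is exactly the problem with the strongly correlated hidden states of a state space model.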
Gibbs sampling
Requirements

Conditional distributions π(x_j | x_{1:j−1}, x_{j+1:d}) can be sampled from; otherwise, use MH within Gibbs.
The components x_j are not too correlated with one another.

Back to SSM

The hidden states x_{1:T} are typically very correlated with one another.
If the target is p(θ, x_{1:T} | y_{1:T}), θ is also very correlated with x_{1:T}.
Sequential Monte Carlo for filtering
Context

Suppose we are interested in p_θ(x_{1:T} | y_{1:T}), with θ known.
We want to get a sample x^{(i)}_{1:T}, i ∈ [1,N], from it.

General idea

We introduce the following sequence of distributions:

{p_θ(x_{1:t} | y_{1:t}), t ∈ [1,T]}

Sample recursively from p_θ(x_{1:t} | y_{1:t}) to p_θ(x_{1:t+1} | y_{1:t+1}).
Sequential Monte Carlo for filtering
Definition

A particle filter is just a collection of weighted points, called particles.

Particles

Writing (w^{(i)}, x^{(i)})_{i=1}^{N} ∼ π means that the empirical distribution

Σ_{i=1}^{N} w^{(i)} δ_{x^{(i)}}(dx)

converges towards π when N → +∞.
Sequential Monte Carlo for filtering
Importance Sampling

Suppose:

(w_1^{(i)}, x^{(i)})_{i=1}^{N} ∼ π_1

and if we define:

w_2^{(i)} = w_1^{(i)} × π_2(x^{(i)}) / π_1(x^{(i)})

then

(w_2^{(i)}, x^{(i)})_{i=1}^{N} ∼ π_2

under some common-sense assumptions on π_1 and π_2.
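A small numerical illustration of this reweighting step (our own example: π_1 = N(0,1), π_2 = N(1,1), both of our choosing):

```python
import numpy as np

rng = np.random.default_rng(0)

# Equally weighted sample from pi_1 = N(0, 1).
N = 100_000
x = rng.normal(size=N)
w1 = np.full(N, 1.0 / N)

# Reweight towards pi_2 = N(1, 1): w2 proportional to w1 * pi_2(x) / pi_1(x).
# Working with log-ratios avoids overflow; the normalising constants cancel.
log_ratio = -0.5 * (x - 1.0) ** 2 + 0.5 * x**2
w2 = w1 * np.exp(log_ratio)
w2 /= w2.sum()                       # self-normalise the weights

mean_pi2 = np.sum(w2 * x)            # weighted estimate of E_{pi_2}[X] = 1
```

The "common-sense assumptions" amount to π_1 dominating π_2, so that the weight ratio is well defined wherever π_2 puts mass.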
Sequential Monte Carlo for filtering
From one time-step to the other

Suppose

(w_t^{(i)}, x_{1:t}^{(i)})_{i=1}^{N} ∼ p_θ(x_{1:t} | y_{1:t})

We want

(w_{t+1}^{(i)}, x_{1:t+1}^{(i)})_{i=1}^{N} ∼ p_θ(x_{1:t+1} | y_{1:t+1})

Decomposition

p_θ(x_{1:t+1} | y_{1:t+1}) ∝ p_θ(y_{t+1} | x_{t+1}) p_θ(x_{t+1} | x_t) p_θ(x_{1:t} | y_{1:t})
                          ∝ g_θ(y_{t+1} | x_{t+1}) f_θ(x_{t+1} | x_t) p_θ(x_{1:t} | y_{1:t})
Sequential Monte Carlo for filtering
Proposal

Propose x_{t+1}^{(i)} ∼ q_θ(x_{t+1} | x_{1:t} = x_{1:t}^{(i)}, y_{1:t+1}). Then:

( w_t^{(i)}, (x_{1:t}^{(i)}, x_{t+1}^{(i)}) )_{i=1}^{N} ∼ q_θ(x_{t+1} | x_{1:t}, y_{1:t+1}) p_θ(x_{1:t} | y_{1:t})
Sequential Monte Carlo for filtering
Reweighting

w_{t+1}^{(i)} = w_t^{(i)} × g_θ(y_{t+1} | x_{t+1}^{(i)}) f_θ(x_{t+1}^{(i)} | x_t^{(i)}) / q_θ(x_{t+1}^{(i)} | x_{1:t}^{(i)}, y_{1:t+1})

and finally we have

(w_{t+1}^{(i)}, x_{1:t+1}^{(i)})_{i=1}^{N} ∼ p_θ(x_{1:t+1} | y_{1:t+1})
Sequential Monte Carlo for filtering
Resampling

To fight the weight degeneracy we introduce a resampling step.

Notation

Family of probability distributions on {1, . . . , N}^N:

a ∼ r(· | w),   for w ∈ [0, 1]^N such that Σ_{i=1}^{N} w^{(i)} = 1

The variables (a_{t−1}^{(i)})_{i=1}^{N} are the indices of the parents of (x_{1:t}^{(i)})_{i=1}^{N}.
Sequential Monte Carlo for filtering
Algorithm 3 Sequential Monte Carlo algorithm
1: Propose x_1^{(i)} ∼ µ_θ(·)
2: Compute weights w_1^{(i)}
3: for t = 2 to T do
4:   Resample a_{t−1} ∼ r(· | w_{t−1})
5:   Propose x_t^{(i)} ∼ q_θ(· | x_{1:t−1}^{a_{t−1}^{(i)}}, y_{1:t}), let x_{1:t}^{(i)} = (x_{1:t−1}^{a_{t−1}^{(i)}}, x_t^{(i)})
6:   Compute the new weights w_t^{(i)}
7: end for
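Algorithm 3 can be sketched end to end for the local level model of an earlier slide. This is our own minimal bootstrap filter (our choice of proposal: q_θ = f_θ, so the incremental weight reduces to g_θ(y_t | x_t)); it also accumulates the log-likelihood estimate discussed two slides below:

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_filter(y, sigma_v, sigma_w, N=1000):
    """Bootstrap particle filter for the local level model:
    x_1 ~ N(0,1), x_{t+1} = x_t + sigma_w * eta, y_t = x_t + sigma_v * eps.
    Returns an estimate of log p(y_{1:T} | theta)."""
    x = rng.normal(size=N)                      # x_1^(i) ~ mu_theta
    loglik = 0.0
    for yt in y:
        # incremental weights w_t^(i) = g_theta(y_t | x_t^(i)), in log scale
        logw = -0.5 * ((yt - x) / sigma_v) ** 2 - np.log(sigma_v * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        loglik += m + np.log(w.mean())          # log of (1/N) sum_i w_t^(i)
        ancestors = rng.choice(N, size=N, p=w / w.sum())   # a_t ~ r(.|w_t)
        x = x[ancestors] + sigma_w * rng.normal(size=N)    # propagate via f_theta
    return loglik

# Synthetic data drawn from the model itself.
T, sv, sw = 100, 0.5, 0.2
xs = np.cumsum(np.concatenate(([rng.normal()], sw * rng.normal(size=T - 1))))
ys = xs + sv * rng.normal(size=T)
loglik_hat = bootstrap_filter(ys, sv, sw)
```

The log-sum-exp trick (subtracting `m`) keeps the weight computation stable when the incremental weights become small.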
Sequential Monte Carlo for filtering
Figure: Three weighted trajectories x_{1:t} at time t.
Sequential Monte Carlo for filtering
Figure: Three proposed trajectories x_{1:t+1} at time t + 1.
Sequential Monte Carlo for filtering
Figure: Three reweighted trajectories x_{1:t+1} at time t + 1.
Sequential Monte Carlo for filtering
Output

In the end we get particles:

(w_T^{(i)}, x_{1:T}^{(i)})_{i=1}^{N} ∼ p_θ(x_{1:T} | y_{1:T})

Requirements

Proposal kernels q_θ(· | x_{1:t−1}, y_{1:t}) from which we can sample.
Weight functions which we can evaluate point-wise.
These proposal kernels and weight functions must result in properly weighted samples.
Sequential Monte Carlo for filtering
Marginal likelihood

A side effect of the SMC algorithm is that we can approximate the marginal likelihood Z_T:

Z_T = p(y_{1:T} | θ)

with the following unbiased estimate:

Z_T^N = ∏_{t=1}^{T} ( (1/N) Σ_{i=1}^{N} w_t^{(i)} ) →^P Z_T   as N → ∞
Reference
Particle Markov Chain Monte Carlo methods

is an article by Andrieu, Doucet, Holenstein, JRSS B, 2010, 72(3):269–342.

Motivation

Bayesian inference in state space models:

p(θ, x_{1:T} | y_{1:T})
Idealized Metropolis–Hastings for SSM
If only. . .

. . . we could evaluate p(θ | y_{1:T}) ∝ p(θ) p(y_{1:T} | θ) up to a multiplicative constant; then we could run an MH algorithm with acceptance probability:

α(θ^{(i)}, θ*) = min( 1, [p(θ*) p(y_{1:T} | θ*) / (p(θ^{(i)}) p(y_{1:T} | θ^{(i)}))] × [q(θ^{(i)} | θ*) / q(θ* | θ^{(i)})] )
Valid Metropolis–Hastings for SSM ??
Plug in estimates

However, we do have Z_T^N(θ) ≈ p(y_{1:T} | θ) by running an SMC algorithm, and we can try to run an MH algorithm with acceptance probability:

α(θ^{(i)}, θ*) = min( 1, [p(θ*) Z_T^N(θ*) / (p(θ^{(i)}) Z_T^N(θ^{(i)}))] × [q(θ^{(i)} | θ*) / q(θ* | θ^{(i)})] )
The Beauty of Particle MCMC
"Exact approximation"

It turns out that this is a valid MH algorithm that targets exactly p(θ | y_{1:T}), regardless of the number N of particles used in the SMC algorithm that provides the estimates Z_T^N(θ) at each iteration.

State estimation

In fact the PMCMC algorithms provide samples from p(θ, x_{1:T} | y_{1:T}), and not only from the posterior distribution of the parameters.
Particle Metropolis-Hastings
Algorithm 4 Particle Metropolis-Hastings algorithm
1: Set some θ^{(1)}
2: Run an SMC algorithm, keep Z_T^N(θ^{(1)}), draw a trajectory x_{1:T}^{(1)}
3: for i = 2 to I do
4:   Propose θ* ∼ q(· | θ^{(i−1)})
5:   Run an SMC algorithm, keep Z_T^N(θ*), draw a trajectory x*_{1:T}
6:   Compute the ratio:
       α(θ^{(i−1)}, θ*) = min( 1, [p(θ*) Z_T^N(θ*) / (p(θ^{(i−1)}) Z_T^N(θ^{(i−1)}))] × [q(θ^{(i−1)} | θ*) / q(θ* | θ^{(i−1)})] )
7:   Set θ^{(i)} = θ*, x_{1:T}^{(i)} = x*_{1:T} with probability α, otherwise keep the previous values
8: end for
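The marginal (θ-only) version of Algorithm 4 can be sketched compactly. This is a deliberately minimal illustration of ours, not the authors' code: the local level model, θ = log σ_V with σ_W fixed, a N(0,1) prior on θ, and a bootstrap filter supplying the likelihood estimate; the crucial point is that the estimate stays attached to the current state of the chain:

```python
import numpy as np

rng = np.random.default_rng(0)

def pf_loglik(y, sigma_v, sigma_w, N=500):
    """Bootstrap-filter estimate of log p(y_{1:T} | theta) for the
    local level model (unbiased on the natural scale)."""
    x = rng.normal(size=N)
    ll = 0.0
    for yt in y:
        logw = -0.5 * ((yt - x) / sigma_v) ** 2 - np.log(sigma_v * np.sqrt(2 * np.pi))
        m = logw.max()
        w = np.exp(logw - m)
        ll += m + np.log(w.mean())
        x = x[rng.choice(N, size=N, p=w / w.sum())] + sigma_w * rng.normal(size=N)
    return ll

def pmmh(y, n_iter, step=0.2):
    """Particle marginal MH on theta = log sigma_v (sigma_w fixed at 0.2).
    Accepts using the *estimated* likelihood; the estimate Z^N is kept
    together with the current theta, which is what makes the chain exact."""
    theta = 0.0
    ll = pf_loglik(y, np.exp(theta), 0.2)
    log_prior = lambda th: -0.5 * th**2          # N(0,1) prior on log sigma_v
    chain = np.empty(n_iter)
    for i in range(n_iter):
        theta_star = theta + step * rng.normal()
        ll_star = pf_loglik(y, np.exp(theta_star), 0.2)
        log_alpha = ll_star + log_prior(theta_star) - ll - log_prior(theta)
        if np.log(rng.uniform()) < log_alpha:
            theta, ll = theta_star, ll_star      # keep Z^N attached to theta
        chain[i] = theta
    return chain

# Data simulated from the model with sigma_v = 0.5.
T = 50
xs = np.cumsum(np.concatenate(([rng.normal()], 0.2 * rng.normal(size=T - 1))))
ys = xs + 0.5 * rng.normal(size=T)
chain = pmmh(ys, n_iter=200)
```

Re-running `pf_loglik` for the current θ at each iteration would break the validity of the algorithm; the stored estimate must be reused.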
Why does it work?
Variables generated by SMC

∀t ∈ [1,T]   x_t = (x_t^{(1)}, . . . , x_t^{(N)})
∀t ∈ [1,T−1]   a_t = (a_t^{(1)}, . . . , a_t^{(N)})

Joint distribution

ψ_θ(x_1, . . . , x_T, a_1, . . . , a_{T−1}) = ( ∏_{i=1}^{N} q_θ(x_1^{(i)}) ) × ( ∏_{t=2}^{T} r(a_{t−1} | w_{t−1}) ∏_{i=1}^{N} q_θ(x_t^{(i)} | x_{1:t−1}^{a_{t−1}^{(i)}}) )
Why does it work?
Extended proposal distribution
The PMH proposes: a new parameter θ?, a trajectory xk?,?
1:T , andthe rest of the variables generated by the SMC.
qN(θ?, k?, x?1 , . . . x?T , a
?1, . . . a
?T−1)
= q(θ?|θ(i))wk?,?T ψ?(x?1 , . . . x
?T , a
?1, . . . a
?T−1)
Why does it work?
Extended target distribution

π^N(θ, k, x_1, . . . , x_T, a_1, . . . , a_{T−1})
  = [ p(θ, x_{1:T}^{(k)} | y_{1:T}) / N^T ] × ψ_θ(x_1, . . . , x_T, a_1, . . . , a_{T−1}) / [ q_θ(x_1^{b_1^k}) ∏_{t=2}^{T} r(b_{t−1}^k | w_{t−1}) q_θ(x_t^{b_t^k} | x_{1:t−1}^{b_{t−1}^k}) ]

with b_{1:T}^k the index history of particle x_{1:T}^{(k)}.

Valid algorithm

From the explicit form of the extended distributions, showing that PMH is a standard MH algorithm becomes straightforward.
Particle MCMC: conclusion
Remarks
It is exact regardless of N . . .
. . . however a sufficient number N of particles is required to get decent acceptance rates.
SMC methods are considered expensive, but easy to parallelize.
Applies to a broad class of models.
More sophisticated SMC and MCMC methods can be used, and result in more sophisticated Particle MCMC methods.
Our idea. . .
. . . was to use the same, very powerful "extended distribution" framework to build an SMC sampler instead of an MCMC algorithm.

Foreseen benefits

to sample more efficiently from the posterior distribution p(θ | y_{1:T}),
to sample sequentially from p(θ | y_1), p(θ | y_{1:2}), . . . , p(θ | y_{1:T}).

And it turns out, it allows even a bit more.
Idealized SMC sampler for SSM
Algorithm 5 Iterated Batch Importance Sampling
1: Sample from the prior θ^{(m)} ∼ p(·) for m ∈ [1, N_θ]
2: Set ω^{(m)} ← 1
3: for t = 1 to T do
4:   Compute u_t(θ^{(m)}) = p(y_t | y_{1:t−1}, θ^{(m)})
5:   Update ω^{(m)} ← ω^{(m)} × u_t(θ^{(m)})
6:   if some degeneracy criterion is met then
7:     Resample the particles, reset the weights ω^{(m)} ← 1
8:     Move the particles using a Markov kernel leaving the distribution invariant
9:   end if
10: end for
Valid SMC sampler for SSM ??
Plug in estimates

Similarly to PMCMC methods, we want to replace p(y_t | y_{1:t−1}, θ^{(m)}) with an unbiased estimate, and see what happens.

SMC everywhere

We associate N_x x-particles to each of the N_θ θ-particles.
Valid SMC sampler for SSM ??
Marginal likelihood

Remember, a side effect of the SMC algorithm is that we can approximate the incremental likelihood:

(1/N_x) Σ_{i=1}^{N_x} w_t^{(i,m)} ≈ p(y_t | y_{1:t−1}, θ^{(m)})
Move steps
Instead of simple MH kernels, use PMH kernels.
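Putting the two ingredients together, the core of SMC2 can be sketched in a few dozen lines. This is a stripped-down illustration of ours (local level model, θ = σ_V with σ_W fixed, a log-normal prior of our choosing), not the py-smc2 implementation; in particular the resample-move (PMH) step, which the algorithm needs in practice to fight θ-degeneracy, is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def smc2_sketch(y, n_theta=100, n_x=200):
    """Minimal SMC^2 sketch: each theta-particle carries its own bootstrap
    filter; its weight omega^(m) is multiplied at each t by the incremental
    likelihood estimate (1/Nx) sum_i w_t^(i,m). No PMH rejuvenation here."""
    sigma_v = np.exp(rng.normal(size=n_theta))        # theta^(m) ~ prior (log-normal)
    omega = np.ones(n_theta)                          # theta-weights omega^(m)
    x = rng.normal(size=(n_theta, n_x))               # all Nx filters at once
    for yt in y:
        logw = (-0.5 * ((yt - x) / sigma_v[:, None]) ** 2
                - np.log(sigma_v[:, None] * np.sqrt(2 * np.pi)))
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        incr = w.mean(axis=1) * np.exp(logw.max(axis=1))   # (1/Nx) sum_i w_t^(i,m)
        omega *= incr                                      # update theta-weights
        for m in range(n_theta):                           # resample each x-filter
            a = rng.choice(n_x, size=n_x, p=w[m] / w[m].sum())
            x[m] = x[m, a]
        x += 0.2 * rng.normal(size=x.shape)                # propagate (sigma_w = 0.2)
    omega /= omega.sum()
    return sigma_v, omega

# Posterior mean of sigma_v after a few observations (true value 0.5).
T = 30
xs = np.cumsum(np.concatenate(([rng.normal()], 0.2 * rng.normal(size=T - 1))))
ys = xs + 0.5 * rng.normal(size=T)
sigma_v, omega = smc2_sketch(ys)
post_mean = np.sum(omega * sigma_v)
```

A practical implementation would monitor the effective sample size of ω and trigger the resample-move step of Algorithm 5, with the move performed by a PMH kernel as stated above.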
Why does it work?
A simple idea. . .
. . . especially after the PMCMC article.
Still. . .
. . . some work had to be done to justify the validity of the algorithm.

In short, it leads to a standard SMC sampler on a sequence of extended distributions π_t (Proposition 1 of the article).
Why does it work?
Additional notations

h_t^n denotes the index history of x_t^n, that is, h_t^n(t) = n, and h_t^n(s) = a_s^{h_t^n(s+1)} recursively, for s = t − 1, . . . , 1.

x_{1:t}^n denotes a state trajectory finishing in x_t^n, that is: x_{1:t}^n(s) = x_s^{h_t^n(s)}, for s = 1, . . . , t.
Why does it work?
Here is what the distribution π_t looks like:

π_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}) = p(θ | y_{1:t}) × (1/N_x) Σ_{n=1}^{N_x} [ p(x_{1:t}^n | θ, y_{1:t}) / N_x^{t−1} ] ∏_{i=1, i≠h_t^n(1)}^{N_x} q_{1,θ}(x_1^i) × ∏_{s=2}^{t} ∏_{i=1, i≠h_t^n(s)}^{N_x} W_{s−1,θ}^{a_{s−1}^i} q_{s,θ}(x_s^i | x_{s−1}^{a_{s−1}^i})
Why does it work?
PMCMC move steps

These steps are valid because the PMCMC invariant distribution π*_t, defined on

(θ, k, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}),

is such that π_t is the marginal distribution of

(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x})

with respect to π*_t.

(Sections 3.2, 3.3 of the article)
Benefits
Explicit form of the distribution

It allows us to prove the validity of the algorithm, but also:

to get samples from p(θ, x_{1:t} | y_{1:t}),
to validate an automatic calibration of N_x.
Benefits
Drawing trajectories

If for every θ-particle θ^{(m)} one draws an index n*(m) uniformly on {1, . . . , N_x}, then the weighted sample:

(ω^m, θ^m, x_{1:t}^{n*(m),m})_{m ∈ 1:N_θ}

follows p(θ, x_{1:t} | y_{1:t}).

Memory cost

One needs to store the x-trajectories if one wants to make inference about x_{1:t} (smoothing). If the interest is only in parameter inference (θ), filtering (x_t) and prediction (y_{t+1}), there is no need to store the trajectories.
Benefits
Estimating functionals of the states

We have a test function h and want to estimate E[h(θ, x_{1:t}) | y_{1:t}]. Estimator:

(1 / Σ_{m=1}^{N_θ} ω^m) Σ_{m=1}^{N_θ} ω^m h(θ^m, x_{1:t}^{n*(m),m}).

Rao–Blackwellized estimator:

(1 / Σ_{m=1}^{N_θ} ω^m) Σ_{m=1}^{N_θ} ω^m { Σ_{n=1}^{N_x} W_{t,θ^m}^n h(θ^m, x_{1:t}^{n,m}) }.

(Section 3.4 of the article)
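The two estimators above are just weighted averages over mock SMC2 output. In this sketch of ours (all arrays are fabricated stand-ins, not real SMC2 output), we select each trajectory according to its x-weights; with equally weighted x-particles, e.g. right after a resampling step, this reduces to the uniform draw of the previous slide:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mock SMC^2 output: Ntheta theta-particles, each with Nx weighted x-particles.
Ntheta, Nx = 50, 40
omega = rng.uniform(size=Ntheta)                 # theta-weights omega^m
W = rng.uniform(size=(Ntheta, Nx))
W /= W.sum(axis=1, keepdims=True)                # normalised x-weights W^n_{t,theta^m}
h_vals = rng.normal(size=(Ntheta, Nx))           # h(theta^m, x^{n,m}_{1:t})

# Plain estimator: one trajectory n*(m) per theta-particle.
n_star = np.array([rng.choice(Nx, p=W[m]) for m in range(Ntheta)])
est_plain = np.sum(omega * h_vals[np.arange(Ntheta), n_star]) / omega.sum()

# Rao-Blackwellised estimator: average over all x-particles instead,
# removing the extra randomness of the index draws.
est_rb = np.sum(omega * np.sum(W * h_vals, axis=1)) / omega.sum()
```

The Rao–Blackwellized version has lower variance at essentially no extra cost, since the x-weights are already available.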
Benefits
Evidence

The evidence of the data given the model is defined as:

p(y_{1:t}) = ∏_{s=1}^{t} p(y_s | y_{1:s−1})

and it can be used to compare models. SMC2 provides the following estimate:

L_t = (1 / Σ_{m=1}^{N_θ} ω^m) Σ_{m=1}^{N_θ} ω^m p(y_t | y_{1:t−1}, θ^m)

(Section 3.5 of the article)
Benefits
Exchange importance sampling step

Launch a new SMC for each θ-particle, with Ñ_x x-particles. Joint distribution:

π_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}) ψ_{t,θ}(x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x})

Retain the new x-particles and drop the old ones, updating the θ-weights with:

u_t^{exch}(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x}, x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x}) = Z_t(θ, x̃_{1:t}^{1:Ñ_x}, ã_{1:t−1}^{1:Ñ_x}) / Z_t(θ, x_{1:t}^{1:N_x}, a_{1:t−1}^{1:N_x})

(Section 3.6 of the article)
Warning
Plug in estimates

Not every SMC sampler can be turned into an SMC2 algorithm by replacing the exact weights with estimates: these estimates have to be unbiased!
Warning
Example
For instance, if instead of using the sequence of distributions:

{p(θ | y_{1:t})}_{t=1}^{T}

one wants to use the "tempered" sequence:

{p(θ | y_{1:T})^{γ_k}}_{k=1}^{K}

with (γ_k) an increasing sequence from 0 to 1, then one should find unbiased estimates of p(θ | y_{1:T})^{γ_k − γ_{k−1}} to plug into the idealized SMC sampler.
Numerical illustrations
Stochastic Volatility (sophisticated):

y_t = µ + β v_t + v_t^{1/2} ε_t,   t ≥ 1
k ∼ Poi(λ ξ²/ω²),   c_{1:k} iid∼ U(t, t + 1),   e_{1:k} iid∼ Exp(ξ/ω²)
z_{t+1} = e^{−λ} z_t + Σ_{j=1}^{k} e^{−λ(t+1−c_j)} e_j
v_{t+1} = (1/λ) [ z_t − z_{t+1} + Σ_{j=1}^{k} e_j ]
x_{t+1} = (v_{t+1}, z_{t+1})′
Numerical illustrations
[Figure: three panels: (a) squared observations against time, (b) acceptance rates against iterations, (c) N_x against iterations.]

Figure: Squared observations (synthetic data set), acceptance rates, and illustration of the automatic increase of N_x.
Numerical illustrations
[Figure: posterior densities of µ at T = 250, 500, 750 and 1000.]

Figure: Concentration of the posterior distribution for parameter µ.
Numerical illustrations
Multifactor model

y_t = µ + β v_t + v_t^{1/2} ε_t + ρ_1 Σ_{j=1}^{k_1} e_{1,j} + ρ_2 Σ_{j=1}^{k_2} e_{2,j} − ξ (w ρ_1 λ_1 + (1 − w) ρ_2 λ_2)
Numerical illustrations
[Figure: two panels: (a) squared observations against time; (b) evidence, compared to the one-factor model, for the multi-factor models without and with leverage.]

Figure: S&P 500 squared observations, and log-evidence comparison between models (relative to the one-factor model).
Numerical illustrations
Athletics records model
g(y_{1:2,t} | µ_t, ξ, σ) = {1 − G(y_{2,t} | µ_t, ξ, σ)} × ∏_{i=1}^{2} [ g(y_{i,t} | µ_t, ξ, σ) / {1 − G(y_{i,t} | µ_t, ξ, σ)} ]

x_t = (µ_t, µ̇_t)′,   x_{t+1} | x_t, ν ∼ N(F x_t, Q),

with

F = ( 1  1 ; 0  1 )   and   Q = ν² ( 1/3  1/2 ; 1/2  1 )

G(y | µ, ξ, σ) = 1 − exp[ −{ 1 − ξ (y − µ)/σ }_{+}^{−1/ξ} ]
Numerical illustrations
[Figure: times in seconds (y-axis, 480 to 530) against year (x-axis, 1980 to 2010).]

Figure: Best two times of each year, in women's 3000 metres events between 1976 and 2010.
Numerical illustrations
Motivating question
How unlikely is Wang Junxia’s record in 1993?
A smoothing problem

We want to estimate the likelihood of Wang Junxia's record in 1993, given that we observe a better time than the previous world record. We want to use all the observations from 1976 to 2010 to answer the question.
Note
We exclude observations from the year 1993.
Numerical illustrations
Some probabilities of interest

p_t^y = P(y_t ≤ y | y_{1976:2010}) = ∫_Θ ∫_X G(y | µ_t, θ) p(µ_t | y_{1976:2010}, θ) p(θ | y_{1976:2010}) dµ_t dθ

The interest lies in p_{1993}^{486.11}, p_{1993}^{502.62} and p_t^{cond} := p_t^{486.11} / p_t^{502.62}.
Numerical illustrations
[Figure: probability estimates (y-axis, log scale from 10^{−4} to 10^{−1}) against year.]

Figure: Estimates of the probabilities of interest, (top) p_t^{502.62}, (middle) p_t^{cond} and (bottom) p_t^{486.11}, obtained with the SMC2 algorithm. The y-axis is in log scale, and the dotted line indicates the year 1993 which motivated the study.
Conclusion
A powerful framework

The SMC2 framework makes it possible to obtain various quantities of interest, in a quite generic and "black-box" way.
It extends the PMCMC framework introduced by Andrieu, Doucet and Holenstein.
A package is available: http://code.google.com/p/py-smc2/.
Acknowledgments
N. Chopin is supported by the ANR grant ANR-008-BLAN-0218 "BigMC" of the French Ministry of Research.
P.E. Jacob is supported by a PhD fellowship from the AXA Research Fund.
O. Papaspiliopoulos would like to acknowledge financial support by the Spanish government through a "Ramon y Cajal" fellowship and grant MTM2009-09063.

The authors are thankful to Arnaud Doucet (University of British Columbia) and to Gareth W. Peters (University of New South Wales) for useful comments.
Bibliography
SMC2: A sequential Monte Carlo algorithm with particle Markov chain Monte Carlo updates, N. Chopin, P.E. Jacob, O. Papaspiliopoulos, submitted.

Main references:

Particle Markov Chain Monte Carlo methods, C. Andrieu, A. Doucet, R. Holenstein, JRSS B, 2010, 72(3):269–342
The pseudo-marginal approach for efficient Monte Carlo computations, C. Andrieu, G.O. Roberts, Ann. Statist., 2009, 37:697–725
Random weight particle filtering of continuous time processes, P. Fearnhead, O. Papaspiliopoulos, G.O. Roberts, A. Stuart, JRSS B, 2010, 72:497–513
Feynman-Kac Formulae, P. Del Moral, Springer