
Expectation Particle Belief Propagation

Thibaut Lienart, Yee Whye Teh and Arnaud Doucet

Juho Kim

December 1, 2016

Goal

• Approximately infer the marginals of pairwise Markov Random Fields on a continuous state space.

• Improve an existing algorithm, Particle Belief Propagation (PBP) (Ihler and McAllester, AISTATS 2009), by using expectation propagation.

• Attain more accurate and efficient inference results.

Motivation of Particle-based Belief Propagation

• Loopy Belief Propagation (LBP) is a popular choice for inference in general Markov Random Fields.

• When dealing with continuous random variables, computing the messages transmitted by LBP exactly is generally intractable.

• PBP and EPBP approximate the messages by sampling to attain computational tractability.

Background and Notations

• For a pairwise MRF, the distribution over a set of continuous random variables factorizes as:

$$p(x) \propto \prod_{u \in V} \psi_u(x_u) \prod_{(u,v) \in E} \psi_{uv}(x_u, x_v)$$

where $\psi_u$ are the node potentials, $\psi_{uv}$ the edge potentials, and $V$ and $E$ the nodes and edges of the graph.

• The LBP fixed-point update can be written as follows at iteration $t$:

$$m_{uv}^{t+1}(x_v) = \int \psi_{uv}(x_u, x_v)\, \psi_u(x_u) \prod_{w \in \Gamma_u \setminus \{v\}} m_{wu}^{t}(x_u)\, dx_u, \qquad B_u^{t}(x_u) \propto \psi_u(x_u) \prod_{w \in \Gamma_u} m_{wu}^{t}(x_u)$$

where $\Gamma_u$ denotes the set of neighbours of node $u$ and $B_u$ the belief at node $u$.

Particle Belief Propagation (PBP)

• The messages from the LBP update involve integrating over the continuous state $x_u$ (see the update above).

→ This integration is generally intractable.


• Main idea: use importance sampling to update the messages instead of computing the integral exactly.

Importance sampling

• The goal is to estimate $\mathbb{E}_P[f(X)]$, where $f$ is some function and $P$ is the probability density function of $X$.

• Rather than sampling from $P$ directly (plain Monte Carlo integration), we specify a different probability density function $Q$ as the proposal distribution.

• The expectation can be rewritten under $Q$:

$$\mathbb{E}_P[f(X)] = \int f(x)\,\frac{P(x)}{Q(x)}\,Q(x)\,dx = \mathbb{E}_Q\!\left[f(X)\,\frac{P(X)}{Q(X)}\right] \approx \frac{1}{N}\sum_{i=1}^{N} f\!\left(x^{(i)}\right)\frac{P\!\left(x^{(i)}\right)}{Q\!\left(x^{(i)}\right)}, \qquad x^{(i)} \sim Q$$
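To make this concrete, here is a minimal Python sketch of the estimator above (not part of the original slides); the target $P$, proposal $Q$, and test function $f$ are illustrative choices.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N = 100_000

f = lambda x: x ** 2                 # function whose expectation we want
P = stats.norm(loc=0.0, scale=1.0)   # target density (standard normal)
Q = stats.norm(loc=0.0, scale=2.0)   # heavier-tailed proposal density

x = Q.rvs(size=N, random_state=rng)  # sample from Q instead of P
w = P.pdf(x) / Q.pdf(x)              # importance weights P(x)/Q(x)

print(np.mean(f(x) * w))             # ~1.0, the true value of E_P[X^2]
```

A heavier-tailed proposal keeps the weights $P(x)/Q(x)$ bounded and the variance of the estimator under control; this is exactly the concern that makes the choice of $q_u$ in PBP important.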

Back to Particle Belief Propagation (PBP)

• Recall the messages from the LBP update:

$$m_{uv}(x_v) = \int \psi_{uv}(x_u, x_v)\, \psi_u(x_u) \prod_{w \in \Gamma_u \setminus \{v\}} m_{wu}(x_u)\, dx_u$$

• Given a proposal distribution $q_u$ on node $u$ and a set of $N$ particles $\{x_u^{(i)}\}_{i=1}^{N} \sim q_u(x_u)$, this integral can be estimated by importance sampling.


• The PBP messages are written as:

$$\widehat m_{uv}^{PBP}(x_v) = \frac{1}{N} \sum_{i=1}^{N} \frac{\psi_{uv}\!\left(x_u^{(i)}, x_v\right) \psi_u\!\left(x_u^{(i)}\right) \prod_{w \in \Gamma_u \setminus \{v\}} \widehat m_{wu}\!\left(x_u^{(i)}\right)}{q_u\!\left(x_u^{(i)}\right)}$$

• The choice of $q_u$ determines the approximation quality.

• However, the PBP paper does not provide a concrete way to select $q_u$.
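As an illustration (not from the slides), here is a minimal Python sketch of the PBP estimator above for scalar states; all function names are hypothetical, and message estimates are represented as callables.

```python
import numpy as np

def pbp_message(x_v, particles_u, q_u_pdf, psi_u, psi_uv, incoming_msgs):
    """Estimate m_hat_{uv}^{PBP}(x_v) from particles drawn from q_u.

    particles_u   : (N,) array of particles x_u^(i) ~ q_u
    q_u_pdf       : callable, proposal density q_u(x)
    psi_u, psi_uv : callables, node and edge potentials
    incoming_msgs : callables m_hat_{wu} for all w in Gamma_u \ {v}
    """
    prod_msgs = np.ones_like(particles_u)
    for m_hat in incoming_msgs:
        prod_msgs *= m_hat(particles_u)          # product of incoming estimates
    weights = psi_u(particles_u) * prod_msgs / q_u_pdf(particles_u)
    return np.mean(psi_uv(particles_u, x_v) * weights)
```

Evaluating this at each of the $N$ particles of the receiving node gives the quadratic-in-$N$ cost per edge that is typical of particle BP.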

Expectation Particle Belief Propagation (EPBP)

• Addresses the issue of selecting the proposal distributions in PBP:

- The proposal distribution is constructed adaptively, taking into account evidence collected through message passing.

- Exponential family distributions are used as proposals on each node for computational efficiency.

- The parameters of the proposals are chosen adaptively, based on current estimates of the beliefs, via expectation propagation.

• Notations:

- $m_{uv}$: exact (but unavailable) LBP message from $u$ to $v$

- $\widehat m_{uv}$: particle approximation of $m_{uv}$

- $\eta_{uv}$: exponential family approximation of $\widehat m_{uv}$

Expectation Particle Belief Propagation (cont’d)

• The approximate edge-wise belief over $x_u$ and $x_v$ is represented by:

$$\widehat B_{uv}(x_u, x_v) \propto \psi_{uv}(x_u, x_v)\, \widehat M_{uv}(x_u)\, \widehat M_{vu}(x_v)$$

• By drawing $N$ independent samples $\{x_u^{(i)}\}_{i=1}^{N}$ and $\{x_v^{(j)}\}_{j=1}^{N}$ from $q_u$ and $q_v$ respectively, we can approximate the belief,

where $\widehat M_{uv}(x_u) = \psi_u(x_u) \prod_{w \in \Gamma_u \setminus \{v\}} \widehat m_{wu}(x_u)$ collects the node potential and the incoming message estimates.

Expectation Particle Belief Propagation (cont’d)

• By marginalizing onto $x_v$, we obtain the particle approximation to $B_{uv}(x_v)$:

$$\widehat B_{uv}(x_v) \propto \widehat m_{uv}(x_v)\, \widehat M_{vu}(x_v),$$

where $\widehat m_{uv} = \widehat m_{uv}^{PBP}$.

• The marginalized belief is therefore proportional to $\widehat m_{uv}^{PBP}(x_v)\, \psi_v(x_v) \prod_{w \in \Gamma_v \setminus \{u\}} \widehat m_{wv}(x_v)$.

Expectation Particle Belief Propagation (cont’d)

• EPBP uses a tractable exponential family distribution for $q_u$:

$$q_u(x_u) \propto \eta_u^{\circ}(x_u) \prod_{w \in \Gamma_u} \eta_{wu}(x_u)$$

where $\eta_u^{\circ}$ and $\eta_{wu}$ are exponential family approximations of $\psi_u$ and $\widehat m_{wu}$ respectively.

• Using the framework of expectation propagation, that is, minimizing the KL divergence $\mathrm{KL}(\widehat B_{uv} \,\|\, q_u q_v)$ as the closeness measure, we can iteratively find good exponential family approximations.

Expectation Particle Belief Propagation (cont’d)

• Pick one node $w \in \Gamma_u$ and update the related exponential family factor $\eta_{wu}$ by tuning the parameters of the distribution.

• Each factor is tuned in turn against the corresponding cavity distribution $q_u^{\setminus w} \propto q_u / \eta_{wu}$, i.e. the proposal with the factor currently being updated removed.

• The updated factor $\eta_{wu}^{+}$ is the exponential family factor minimizing the following KL divergence:

$$\eta_{wu}^{+} = \arg\min_{\eta}\ \mathrm{KL}\!\left( \widehat m_{wu}\, q_u^{\setminus w} \,\middle\|\, \eta\, q_u^{\setminus w} \right)$$

which amounts to matching the moments of $\eta\, q_u^{\setminus w}$ to those of the tilted distribution $\widehat m_{wu}\, q_u^{\setminus w}$.
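For intuition, here is a minimal sketch of one such update for a one-dimensional Gaussian factor, assuming a Gaussian cavity and using grid-based numerical integration to compute the moments of the tilted distribution; this is an illustration, not the paper's implementation.

```python
import numpy as np

def ep_update_gaussian(m_hat, cavity_mean, cavity_var, grid):
    """Moment-matching update of a 1-D Gaussian factor eta_{wu}.

    m_hat       : callable, particle estimate m_hat_{wu}(x)
    cavity_*    : mean/variance of the Gaussian cavity q_u \ eta_{wu}
    grid        : 1-D grid of points used to compute moments numerically
    """
    cavity = np.exp(-0.5 * (grid - cavity_mean) ** 2 / cavity_var)
    tilted = m_hat(grid) * cavity                      # unnormalized tilted distribution
    tilted /= np.trapz(tilted, grid)                   # normalize on the grid
    mean = np.trapz(grid * tilted, grid)               # matched first moment
    var = np.trapz((grid - mean) ** 2 * tilted, grid)  # matched second moment
    # Divide the matched Gaussian by the cavity in natural parameters
    prec_new = 1.0 / var - 1.0 / cavity_var
    shift_new = mean / var - cavity_mean / cavity_var
    return shift_new / prec_new, 1.0 / prec_new        # mean, variance of eta_{wu}^+
```

In practice the new precision can come out negative when the tilted distribution is wider than the cavity; EP implementations typically skip or damp such updates.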

Experiment 1 – synthetic data


[Figure: comparison of the beliefs on nodes 1, 5, and 9]

Experiment 2 – denoising application

The value assigned to each pixel of the reconstruction is the estimated posterior mean at the corresponding node (one possible setup is sketched after the parameter list below).

• Image size: 50 x 50

• Number of particles: 30

• Number of BP iterations: 5
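For concreteness, here is a small Python sketch of how such a denoising MRF could be set up; the Gaussian node potential, the robust edge potential, and the parameter values are assumptions for illustration, as the slides do not specify the potentials used.

```python
import numpy as np

H, W = 50, 50                  # image size from the slide
sigma_obs, lam = 0.1, 1.0      # assumed observation-noise and smoothing parameters

def psi_node(x, y_obs):
    """Node potential: Gaussian likelihood tying pixel value x to its noisy observation."""
    return np.exp(-0.5 * (x - y_obs) ** 2 / sigma_obs ** 2)

def psi_edge(x_u, x_v):
    """Edge potential: penalizes differences between neighbouring pixels."""
    return np.exp(-lam * np.abs(x_u - x_v))

# 4-connected grid: link each pixel to its right and lower neighbours
edges = [((i, j), (i, j + 1)) for i in range(H) for j in range(W - 1)] \
      + [((i, j), (i + 1, j)) for i in range(H - 1) for j in range(W)]
```

Running EPBP on this graph with 30 particles per node for 5 BP iterations, and taking the estimated mean of each node's belief, yields the reconstructed pixel values.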

[Figure: original image, noisy image, image recovered using EPBP, and image recovered using simple EP]

Summary

• EPBP improves an existing particle-based algorithm, Particle Belief Propagation (PBP), by adaptively tuning the proposal distributions using expectation propagation.

• It infers marginals in general Markov Random Fields more accurately and efficiently than PBP.

Questions?