
Integration and Graphical Models

Derek Hoiem, CS 598, Spring 2009

April 14, 2009

Why?

The goal of vision is to make useful inferences about the scene.

In most cases, this requires integrative reasoning about many types of information.

Example: 3D modeling

Example: object context (from Divvala et al., CVPR 2009)

How?

• Feature passing

• Graphical models

Class Today

• Feature passing

• Graphical models
  – Bayesian networks
  – Markov networks
  – Various inference and learning methods

• Example

Properties of a good mechanism for integration

• Modular: different processes/estimates can be improved independently

• Symbiotic: each estimate improves the others

• Robust: mistakes in one process are not fatal for others that partially rely on it

• Feasible: training and inference are fast and easy

Feature Passing

• Compute features from one estimated scene property to help estimate another

[Diagram: the image feeds an X estimate and a Y estimate; features computed from each estimate are passed to help the other]

Feature passing: example

[Diagram: an object window and the regions above and below it]

Use features computed from “geometric context” confidence images to improve object detection

Hoiem et al. ICCV 2005

Features: average confidence within each window
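A minimal sketch of that feature computation, assuming a per-pixel confidence image and a window given as (top, left, bottom, right); the three-region layout and all names are illustrative, not the paper's exact design:

```python
import numpy as np

def window_context_features(confidence_map, box):
    """Average confidence inside, above, and below a detection window.
    `confidence_map` is an HxW array (e.g. a geometric-context confidence
    image); `box` is (top, left, bottom, right) in pixels."""
    t, l, b, r = box
    h = b - t
    inside = confidence_map[t:b, l:r]
    above  = confidence_map[max(0, t - h):t, l:r]
    below  = confidence_map[b:min(confidence_map.shape[0], b + h), l:r]
    # Fall back to 0 when a region is clipped away at the image border.
    mean = lambda a: float(a.mean()) if a.size else 0.0
    return np.array([mean(inside), mean(above), mean(below)])

# Example: a random "confidence image" and one candidate window
conf = np.random.rand(240, 320)
feats = window_context_features(conf, (100, 80, 160, 140))
```

These three averages would then be appended to the object detector's feature vector.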

Feature Passing

• Pros and cons
  – Simple training and inference
  – Very flexible in modeling interactions
  – Not modular
    • if we get a new method for the first estimates, we may need to retrain
  – Requires iteration to be symbiotic
    • complicates things
  – Robust in expectation, but not for each instance

Probabilistic graphical models

Explicitly model uncertainty and dependency structure

[Diagram: the same four-node graph (a, b, c, d) drawn three ways: directed, undirected, and as a factor graph]

Key concept: Markov blanket

Directed acyclic graph (Bayes net)

Graph 1: a → b, with b → c and b → d

P(a,b,c,d) = P(c|b) P(d|b) P(b|a) P(a)

Graph 2: a → b, c → b, d → b

P(a,b,c,d) = P(b|a,c,d) P(a) P(c) P(d)

Arrow directions matter:
• In graph 1, c is independent of a given b, and d is independent of a given b.
• In graph 2, a, c, and d become dependent when conditioned on b.
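A minimal numeric check of graph 1's factorization and its independence claim, with binary variables; the CPT values are invented for illustration:

```python
# Toy CPTs; conditionals stored as {(child_value, parent_value): prob}
P_a = {0: 0.6, 1: 0.4}
P_b_given_a = {(0, 0): 0.8, (1, 0): 0.2, (0, 1): 0.3, (1, 1): 0.7}
P_c_given_b = {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6}
P_d_given_b = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.2, (1, 1): 0.8}

def joint(a, b, c, d):
    # Graph 1 factorization: P(a,b,c,d) = P(c|b) P(d|b) P(b|a) P(a)
    return P_c_given_b[(c, b)] * P_d_given_b[(d, b)] * P_b_given_a[(b, a)] * P_a[a]

def P_c_given_ab(c, a, b):
    # Marginalize d, then condition; independence means a has no effect
    num = sum(joint(a, b, c, d) for d in (0, 1))
    den = sum(joint(a, b, c2, d) for c2 in (0, 1) for d in (0, 1))
    return num / den

print(P_c_given_ab(1, a=0, b=1), P_c_given_ab(1, a=1, b=1))  # equal: c ⊥ a | b
```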

Directed acyclic graph (Bayes net)

P(a,b,c,d) = P(c|b) P(d|b) P(b|a) P(a)

• Can model causality
• Parameter learning
  – Decomposes: learn each term separately (maximum likelihood)
• Inference
  – Simple exact inference if tree-shaped (belief propagation)

Directed acyclic graph (Bayes net)

P(a,b,c,d) = P(c|b) P(d|a,b) P(b|a) P(a)

• Can model causality
• Parameter learning
  – Decomposes: learn each term separately (maximum likelihood)
• Inference
  – Simple exact inference if tree-shaped (belief propagation)
  – Loops require approximation
    • Loopy BP
    • Tree-reweighted BP
    • Sampling

Directed graph

• Example: places and scenes

[Diagram: Place (office, kitchen, street, etc.) with an arrow to each object indicator among the Objects Present: Car, Person, Toaster, Microwave, Fire Hydrant]

P(place, car, person, toaster, micro, hydrant) = P(place) P(car | place) P(person | place) … P(hydrant | place)
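The factorization makes the posterior over places a product of per-object terms. A toy sketch of that inference by Bayes' rule; all probabilities below are invented, and objects are binary "present" indicators:

```python
P_place = {'office': 0.4, 'kitchen': 0.3, 'street': 0.3}
P_obj_given_place = {    # P(object present | place), invented numbers
    'office':  {'car': 0.01, 'person': 0.7, 'toaster': 0.01, 'microwave': 0.1,  'hydrant': 0.001},
    'kitchen': {'car': 0.01, 'person': 0.5, 'toaster': 0.6,  'microwave': 0.8,  'hydrant': 0.001},
    'street':  {'car': 0.8,  'person': 0.6, 'toaster': 0.01, 'microwave': 0.01, 'hydrant': 0.3},
}

def posterior(observed):
    """P(place | object indicators), using the slide's factorization
    P(place) * prod_o P(o | place)."""
    scores = {}
    for place, prior in P_place.items():
        s = prior
        for obj, present in observed.items():
            p = P_obj_given_place[place][obj]
            s *= p if present else (1 - p)
        scores[place] = s
    Z = sum(scores.values())
    return {k: v / Z for k, v in scores.items()}

print(posterior({'car': True, 'hydrant': True, 'toaster': False}))  # 'street' dominates
```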

Directed graph

• Example: “Putting Objects in Perspective”

Undirected graph (Markov Networks)

• Does not model causality
• Often pairwise
• Parameter learning difficult
• Inference usually approximate

[Diagram: four connected nodes x1, x2, x3, x4]

P(x; θ, data) = (1/Z) ∏_{i=1..4} φ(x_i; θ, data) ∏_{(i,j) ∈ edges} ψ(x_i, x_j; θ, data)

Markov Networks

• Example: “label smoothing” grid

Binary nodes; pairwise potential (cost 0 for equal labels, K for different labels):

          x_j = 0   x_j = 1
x_i = 0      0         K
x_i = 1      K         0
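Putting the last two slides together: a brute-force sketch that evaluates the pairwise model on four binary nodes, using ψ(x_i, x_j) = exp(−K·[x_i ≠ x_j]) as the smoothing potential; the unaries and grid layout are invented:

```python
import itertools, math

K = 1.0                                        # smoothing strength from the 0/K table
edges = [(0, 1), (1, 3), (3, 2), (2, 0)]       # a tiny 2x2 "grid" over x1..x4
unary = [[0.2, 0.8], [0.6, 0.4], [0.7, 0.3], [0.1, 0.9]]   # φ(x_i; data), made up

def unnormalized(x):
    p = 1.0
    for i in range(4):
        p *= unary[i][x[i]]                    # ∏ φ(x_i; data)
    for i, j in edges:
        p *= math.exp(-K * (x[i] != x[j]))     # ∏ ψ(x_i, x_j): pay K when labels differ
    return p

states = list(itertools.product((0, 1), repeat=4))
Z = sum(unnormalized(x) for x in states)       # partition function by enumeration
probs = {x: unnormalized(x) / Z for x in states}
print(max(probs, key=probs.get))               # MAP state; smoothing pulls labels together
```

Enumeration is only feasible for toy graphs; the inference slides below cover the methods used at realistic scale.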

Factor graphs

• A general representation

[Diagram: the Bayes net on a, b, c, d and its equivalent factor graph, with one factor node per conditional probability term]

Factor graphs

• A general representation

[Diagram: the Markov net on a, b, c, d and its equivalent factor graph, with one factor node per potential]

Factor graphs

P(a,b,c,d) = f1(a,b,c) f2(d) f3(a,d)

Write as a factor graph
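One way to hold such a factorization in code is as a list of (variables, function) factors; the factor functions below are placeholders, and only the structure matches the slide's equation:

```python
import itertools

# Each factor is (variable names, function over those variables).
f1 = (('a', 'b', 'c'), lambda a, b, c: 1.0 + a + b * c)
f2 = (('d',),          lambda d: 2.0 - d)
f3 = (('a', 'd'),      lambda a, d: 1.0 if a == d else 0.5)
factors = [f1, f2, f3]

def unnormalized(assign):
    p = 1.0
    for vars_, fn in factors:
        p *= fn(*(assign[v] for v in vars_))
    return p

# Normalize over all binary assignments to get P(a,b,c,d)
states = [dict(zip('abcd', s)) for s in itertools.product((0, 1), repeat=4)]
Z = sum(unnormalized(s) for s in states)
print(unnormalized({'a': 1, 'b': 0, 'c': 1, 'd': 1}) / Z)
```

The factor graph is exactly this list plus the bipartite connections it implies between variables and factors.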

Inference: Belief Propagation

• Very general
• Approximate, except for tree-shaped graphs
  – Generalized variants of BP can have better convergence for graphs with many loops or strong potentials
• Standard packages available (BNT toolbox, my website)
• To learn more:
  – Yedidia, J.S.; Freeman, W.T.; Weiss, Y., "Understanding Belief Propagation and Its Generalizations", Technical Report, 2001: http://www.merl.com/publications/TR2001-022/
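A self-contained sum-product sketch on a three-node chain, where BP is exact; the potentials are invented:

```python
import numpy as np

# Sum-product BP on the chain x1 - x2 - x3 (binary states).
phi = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]  # unaries
psi = np.array([[0.9, 0.1], [0.1, 0.9]])      # shared pairwise potential

# Forward messages m12, m23 and backward messages m32, m21.
m12 = psi.T @ phi[0]                  # m_{1->2}(x2) = sum_{x1} phi1(x1) psi(x1,x2)
m23 = psi.T @ (phi[1] * m12)          # m_{2->3} folds in x2's unary and m12
m32 = psi @ phi[2]                    # m_{3->2}(x2) = sum_{x3} phi3(x3) psi(x2,x3)
m21 = psi @ (phi[1] * m32)            # m_{2->1} folds in x2's unary and m32

def normalize(b):
    return b / b.sum()

beliefs = [normalize(phi[0] * m21),
           normalize(phi[1] * m12 * m32),
           normalize(phi[2] * m23)]
print(beliefs)    # exact marginals P(x_i), since the chain is a tree
```

On a loopy graph the same message updates are simply iterated until (hopefully) they converge, which is loopy BP.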

Inference: Graph Cuts

• Associative: edge potentials penalize different labels
• Associative binary networks can be solved optimally (and quickly) using graph cuts
• Multilabel associative networks can be handled by alpha-expansion or alpha-beta swaps
• To learn more:
  – http://www.cs.cornell.edu/~rdz/graphcuts.html
  – Classic paper: "What Energy Functions Can Be Minimized via Graph Cuts?" (Kolmogorov and Zabih, ECCV '02 / PAMI '04)
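A sketch of the standard s-t construction for an associative binary network, solved here with networkx's generic min-cut rather than a specialized maxflow library; the costs are invented:

```python
import networkx as nx

# Energy: sum_i u[i][x_i] + sum_{(i,j)} w * [x_i != x_j]
u = {1: (0.0, 2.0), 2: (1.5, 0.5), 3: (3.0, 0.0)}   # unary costs (label 0, label 1)
edges, w = [(1, 2), (2, 3)], 1.0

G = nx.DiGraph()
for i, (cost0, cost1) in u.items():
    G.add_edge('s', i, capacity=cost1)   # cut iff i takes label 1
    G.add_edge(i, 't', capacity=cost0)   # cut iff i takes label 0
for i, j in edges:
    G.add_edge(i, j, capacity=w)         # exactly one direction is cut
    G.add_edge(j, i, capacity=w)         # when the labels differ

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
labels = {i: (0 if i in source_side else 1) for i in u}
print(cut_value, labels)   # cut value equals the minimum energy
```

The optimality argument is that every s-t cut's capacity equals the energy of the corresponding labeling, so the minimum cut is the MAP labeling.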

Inference: Sampling (MCMC)

• Metropolis-Hastings algorithm
  – Define transitions and transition probabilities
  – Make sure you can get from any state to any other (ergodicity)
  – Make a proposal and accept if rand(1) < [P(new state) / P(old state)] × [P(backward transition) / P(forward transition)]
    • Note: if P(state) decomposes, this ratio is easy to compute
  – Example: "Image Parsing" by Tu and Zhu uses MCMC to find a good segmentation
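A minimal Metropolis sketch with a symmetric proposal, so the backward/forward ratio is 1 and acceptance reduces to rand(1) < P(new)/P(old); the target is a toy stand-in, not Tu and Zhu's image-parsing moves:

```python
import random, math

def P(x):
    # Unnormalized target over integers (a discretized Gaussian);
    # only ratios matter, so the normalizer never appears.
    return math.exp(-0.5 * (x / 3.0) ** 2)

x, samples = 0, []
for _ in range(10000):
    x_new = x + random.choice((-1, 1))   # symmetric proposal:
    # forward and backward transition probabilities cancel,
    # leaving the acceptance test rand(1) < P(new) / P(old)
    if random.random() < P(x_new) / P(x):
        x = x_new
    samples.append(x)

print(sum(samples) / len(samples))   # ≈ 0 for this symmetric target
```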

Learning parameters: maximize likelihood

• Simply count for Bayes networks with discrete variables

• Run BP and do gradient descent for Markov networks

• Often we do not care about the full likelihood
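The counting case in a few lines: the ML estimates of a discrete Bayes net's CPTs are just normalized co-occurrence counts, shown here on toy data:

```python
from collections import Counter

# ML estimate of P(b | a) from complete discrete data: count and normalize.
data = [(0, 0), (0, 1), (0, 0), (1, 1), (1, 1), (1, 0)]   # (a, b) pairs, toy data

counts_ab = Counter(data)
counts_a = Counter(a for a, _ in data)
P_b_given_a = {(b, a): counts_ab[(a, b)] / counts_a[a]
               for (a, b) in counts_ab}
print(P_b_given_a)   # e.g. P(b=0 | a=0) = 2/3
```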

Learning parameters: maximize objective

• SPSA (simultaneous perturbation stochastic approximation) algorithm:
  – Take two trial steps in a random direction, one forward and one backward
  – Compute the loss (or objective) for each and form a pseudo-gradient
  – Take a step according to the results
  – Refs:
    • Li and Huttenlocher, "Learning for Optical Flow Using Stochastic Optimization", ECCV 2008
    • Various papers by Spall on SPSA
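An SPSA sketch on a stand-in quadratic objective; in the lecture's setting, loss(theta) would be the task loss of the model run with parameters theta:

```python
import numpy as np

def loss(theta):
    # Placeholder objective with known optimum at [1, -2]
    return np.sum((theta - np.array([1.0, -2.0])) ** 2)

theta = np.zeros(2)
a, c = 0.1, 0.05                        # step size and perturbation scale
for k in range(200):
    delta = np.random.choice((-1.0, 1.0), size=theta.shape)  # random direction
    # Two trial evaluations, forward and backward, give a pseudo-gradient
    g = (loss(theta + c * delta) - loss(theta - c * delta)) / (2 * c) * delta
    theta -= a * g                      # step against the pseudo-gradient
print(theta)                            # ≈ [1, -2]
```

The appeal is that each update needs only two loss evaluations, regardless of the number of parameters.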

Learning parameters: structured learning

See also Tsochantaridis et al.: http://jmlr.csail.mit.edu/papers/volume6/tsochantaridis05a/tsochantaridis05a.pdf

Szummer et al. 2008

How to get the structure?

• Set by hand (most common)

• Learn (mostly for Bayes nets)
  – Maximize a score (greedy search)
  – Based on independence tests

• Logistic regression with L1 regularization for finding the Markov blanket (see the sketch below)

For more: www.autonlab.org/tutorials/bayesstruct05.pdf
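A sketch of that last idea: regress each node on all the others with an L1 penalty, and read the surviving nonzero weights as its candidate Markov blanket. The data is synthetic and scikit-learn is assumed available:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic binary data: x2 depends on x0 only; x1 is independent noise.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = rng.integers(0, 2, 2000)
x2 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)    # noisy copy of x0

X = np.column_stack([x0, x1])
clf = LogisticRegression(penalty='l1', C=0.5, solver='liblinear').fit(X, x2)
print(clf.coef_)   # weight on x0 stays large; weight on x1 shrinks toward 0
```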

Graphical Models

• Pros and cons
  – Very powerful if the dependency structure is sparse and known
  – Modular (especially Bayesian networks)
  – Flexible representation (but not as flexible as "feature passing")
  – Many inference methods
  – Recent developments in learning Markov network parameters, but still tricky

Which techniques have I used?

• Almost all of them
  – Feature passing (ICCV 2005, CVPR 2008)
  – Bayesian networks (CVPR 2006)
    • In factor graph form (ICCV 2007)
    • Semi-naïve Bayes (CVPR 2004)
  – Markov networks (ECCV 2008, CVPR 2007, CVPR 2005: HMM)
  – Belief propagation (CVPR 2006, ICCV 2007)
  – Structured learning (ECCV 2008)
  – Graph cuts (CVPR 2008, ECCV 2008)
  – MCMC (IJCV 2007… didn't work well)
  – Learning Bayesian structure (2002–2003, not published)

Example: faces, skin, cloth
