
Learning to Combine Bottom-Up and Top-Down Segmentation

Anat Levin and Yair Weiss

School of CS&Eng,

The Hebrew University of Jerusalem, Israel

Bottom-up segmentation

Bottom-up approaches: use low-level cues to group similar pixels.

• Malik et al, 2000

• Sharon et al, 2001

• Comaniciu and Meer, 2002

• …

Bottom-up segmentation is ill-posed

Many possible segmentations are equally good based on low-level cues alone.

[Figure: segmentation examples; images from Borenstein and Ullman 02]

Top-down segmentation

Class-specific, top-down segmentation:

• Borenstein & Ullman ECCV02

• Winn and Jojic 05

• Leibe et al 04

• Yuille and Hallinan 02

• Liu and Sclaroff 01

• Yu and Shi 03

Combining top-down and bottom-up segmentation

Find a segmentation that:

1. Is similar to the top-down model

2. Aligns with image edges

Previous approaches

• Borenstein et al 04: Combining top-down and bottom-up segmentation.

• Tu et al ICCV03: Image parsing: segmentation, detection, and recognition.

• Kumar et al CVPR05: Obj-Cut.

• Shotton et al ECCV06: TextonBoost.

Previous approaches train the top-down and bottom-up models independently.

Why learn top-down and bottom-up models simultaneously?

• A large number of degrees of freedom in the tentacles' configuration requires a complex, deformable top-down model.

• On the other hand, the rather uniform colors make low-level segmentation easy.

• Learn the top-down and bottom-up models simultaneously.

• At run time, this reduces to energy minimization with binary labels (graph min-cut).

Our approach

Energy model

E(x; I) = Σ_{i,j} w_{ij} |x_i − x_j| + Σ_k λ_k |x^{F_k} − x^{F_k, I}|

The first term scores the segmentation's alignment with image edges; the second scores consistency with the fragments' segmentations (x^{F_k, I} denotes the local labeling that fragment F_k induces where it matches image I). Minimizing this energy over binary labels yields the resulting min-cut segmentation.
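A concrete sketch of this energy and its minimization, at toy scale (all names are illustrative; brute-force enumeration stands in for the graph min-cut solver used at real scale):

```python
from itertools import product

def energy(x, edges, w, fragments, lam):
    """E(x; I) = sum_ij w_ij |x_i - x_j|  +  sum_k lam_k sum_{i in F_k} |x_i - mask_k[i]|."""
    pairwise = sum(w[(i, j)] * abs(x[i] - x[j]) for (i, j) in edges)
    frag = sum(lam[k] * sum(abs(x[i] - m) for i, m in mask.items())
               for k, mask in enumerate(fragments))
    return pairwise + frag

def min_energy_labeling(n, edges, w, fragments, lam):
    """Brute-force argmin over binary labelings; graph min-cut computes this exactly at scale."""
    return min(product([0, 1], repeat=n),
               key=lambda x: energy(x, edges, w, fragments, lam))

# Toy 4-pixel chain: a weak pairwise weight marks a likely image edge between pixels 1 and 2;
# one fragment votes pixels 2,3 as figure, another votes pixels 0,1 as background.
edges = [(0, 1), (1, 2), (2, 3)]
w = {(0, 1): 1.0, (1, 2): 0.1, (2, 3): 1.0}
fragments = [{2: 1, 3: 1}, {0: 0, 1: 0}]
lam = [2.0, 2.0]
x_star = min_energy_labeling(4, edges, w, fragments, lam)
print(x_star)  # (0, 0, 1, 1): the label boundary falls on the weak, edge-crossing link
```

Note how the minimizer satisfies both fragment masks and pays the pairwise cost only on the weakest link, i.e., where the image edge is.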

Learning from segmented class images

Training data: {(I_t, x_t)}_{t=1}^T

Goal: learn fragments for the energy function.

Learning energy functions using conditional random fields

P(x_t | I_t; λ) = e^{−E(x_t; I_t, λ)} / Z(I_t; λ),  where  Z(I_t; λ) = Σ_x e^{−E(x; I_t, λ)}

Theory of CRFs:

• Lafferty et al 2001

• LeCun and Huang 2005

CRFs For vision:

•Kumar and Hebert 2003

•Ren et al 2006

•He et al 2004, 2006

•Quattoni et al 2005

•Torralba et al 04

[Plot: energy E(x) over labelings x, with its minimum at the true segmentation x_t]

Minimize energy of true segmentation

Maximize energy of all other configurations

"It's not enough to succeed. Others must fail." –Gore Vidal

[Plot: probability P(x | I_t; λ) over labelings x, peaked at the true segmentation x_t]

Equivalently: maximize the probability of the true segmentation and reduce the probability of all other configurations.

Differentiating the CRF log-likelihood

With the energy

E(x; I, F, λ, w) = Σ_{i,j} w_{ij} |x_i − x_j| + Σ_k λ_k |x^{F_k} − x^{F_k, I}|

the log-likelihood gradient with respect to λ_k is the expected feature response minus the observed feature response:

∂ log P(x_t | I_t; λ) / ∂λ_k = ⟨ |x^{F_k} − x^{F_k, I_t}| ⟩_{P(x | I_t; λ)} − |x_t^{F_k} − x^{F_k, I_t}|

The log-likelihood is convex with respect to λ.
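On a model small enough for exact enumeration, this gradient identity can be checked directly against finite differences (a toy sketch; the names and the tiny 3-pixel chain are illustrative, and the feature is the fragment-consistency response defined above):

```python
from itertools import product
from math import exp, log

EDGES, W, N = [(0, 1), (1, 2)], 1.0, 3   # tiny 3-pixel chain
MASKS = [{0: 1, 1: 1}]                    # one fragment voting pixels 0,1 as figure

def feature(x, mask):
    """Fragment-consistency response: disagreement with the fragment's mask."""
    return sum(abs(x[i] - m) for i, m in mask.items())

def energy(x, lam):
    return (sum(W * abs(x[i] - x[j]) for i, j in EDGES)
            + sum(l * feature(x, m) for l, m in zip(lam, MASKS)))

def loglik(x_obs, lam):
    """log P(x_obs | I; lam) = -E(x_obs) - log Z, with Z by exact enumeration."""
    Z = sum(exp(-energy(x, lam)) for x in product([0, 1], repeat=N))
    return -energy(x_obs, lam) - log(Z)

def grad(x_obs, lam):
    """d log P / d lam_k = <f_k>_P - f_k(x_obs): expected minus observed response."""
    states = list(product([0, 1], repeat=N))
    wts = [exp(-energy(x, lam)) for x in states]
    Z = sum(wts)
    return [sum(wt * feature(x, m) for x, wt in zip(states, wts)) / Z
            - feature(x_obs, m) for m in MASKS]

x_obs, lam = (1, 1, 0), [0.5]
g = grad(x_obs, lam)
```

Since the observed segmentation agrees with the fragment, the observed response is zero and the gradient is positive: raising λ makes the training labeling more likely.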


Conditional random fields - computational challenges

• Evaluating the CRF cost requires the log partition function log Z(I_t; λ).

• Evaluating the derivatives requires the marginal probabilities P(x_i = r | I_t; λ).

Use approximate estimations:

• Sampling

• Belief propagation and the Bethe free energy

• Used in this work: tree-reweighted belief propagation and the tree-reweighted upper bound (Wainwright et al 03)

Fragments selection

Candidate fragments pool:

Greedy energy design: start from the low-level term

E(x; I) = Σ_{i,j} w_{ij} |x_i − x_j|

and greedily add one fragment consistency term at a time:

+ λ_1 |x^{F_1} − x^{F_1, I}|   + λ_2 |x^{F_2} − x^{F_2, I}|   + λ_3 |x^{F_3} − x^{F_3, I}|   + …

Fragments selection challenges

Straightforward computation of the likelihood improvement is impractical:

2000 fragments × 50 training images × 10 fragment-selection iterations = 1,000,000 inference operations!
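The count can be spelled out; the contrasting figure for the approximate scheme (one inference per training image per iteration) is our reading of the "single inference process" remark, made explicit here as an assumption:

```python
fragments, images, iterations = 2000, 50, 10

# Naive greedy selection: re-run inference for every candidate fragment,
# on every training image, at every selection iteration.
naive_ops = fragments * images * iterations
print(naive_ops)   # 1000000

# With a first-order approximation to the likelihood gain, one inference
# per image per iteration scores all candidates at once (assumed reading).
approx_ops = images * iterations
print(approx_ops)  # 500
```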

Fragments selection

First-order approximation to the log-likelihood gain of a candidate fragment F:

Δℓ(F) ≈ Σ_t ( ⟨ |x^{F} − x^{F, I_t}| ⟩_{current model} − |x_t^{F} − x^{F, I_t}| )

The observed (second) term is small for a fragment with low error on the training set; the expected (first) term is large for a fragment not accounted for by the existing model.

Similar idea in different contexts:

• Zhu et al 1997

• Lafferty et al 2004

• McCallum 2003

• Requires only a single inference process on the previous-iteration energy to evaluate the approximation with respect to all fragments.

• Evaluating the first-order approximation is linear in the fragment size.


Fragments selection - summary

Initialization: low-level term only.

For k = 1:K

• Run TRBP inference using the previous-iteration energy.

• Approximate the likelihood gain of each candidate fragment.

• Add to the energy the fragment with maximal gain.
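A runnable sketch of this loop at toy scale (all names and the candidate-mask representation are illustrative; exact enumeration stands in for TRBP inference):

```python
from itertools import product
from math import exp

EDGES, W, N = [(0, 1), (1, 2)], 0.5, 3   # tiny 3-pixel chain, pairwise term only at start

def feature(x, mask):
    """Fragment-consistency response for a candidate fragment mask."""
    return sum(abs(x[i] - m) for i, m in mask.items())

def energy(x, chosen):
    return (sum(W * abs(x[i] - x[j]) for i, j in EDGES)
            + sum(lam * feature(x, m) for lam, m in chosen))

def expected_feature(mask, chosen):
    """<f_F> under the current model, by exact enumeration (TRBP at real scale)."""
    states = list(product([0, 1], repeat=N))
    wts = [exp(-energy(x, chosen)) for x in states]
    Z = sum(wts)
    return sum(wt * feature(x, mask) for x, wt in zip(states, wts)) / Z

def greedy_select(candidates, data, K, lam0=1.0):
    """Add K fragments, each maximizing the first-order gain
       sum_t ( <f_F>_current - f_F(x_t) )."""
    chosen, pool = [], list(candidates)
    for _ in range(K):
        gain = {i: sum(expected_feature(m, chosen) - feature(x_t, m)
                       for x_t in data)
                for i, m in enumerate(pool)}
        best = max(gain, key=gain.get)
        chosen.append((lam0, pool.pop(best)))
    return [m for _, m in chosen]

# One training pair; of two candidates consistent with x_t, the larger fragment
# (less well explained by the current model) wins.
data = [(1, 1, 0)]
picked = greedy_select([{0: 1, 1: 1}, {2: 0}], data, K=1)
print(picked)  # [{0: 1, 1: 1}]
```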

Training horses model

Training horses model - one fragment

Training horses model - two fragments

Training horses model - three fragments

Results- horses dataset

[Plot: mislabeled pixel percentage vs. number of fragments]

Comparable to previous results (Kumar et al, Borenstein et al.) but with far fewer fragments

Results- artificial octopi

Results- cows dataset (from the TU Darmstadt database)

[Plot: mislabeled pixel percentage vs. number of fragments]

Conclusions

• Simultaneously learning top-down and bottom-up segmentation cues.

• Learning is formulated as estimation in conditional random fields.

• A novel, efficient fragment-selection algorithm.

• The algorithm achieves state-of-the-art performance with a significantly smaller number of fragments.
