cvpr2007 tutorial bag_of_words

Part 1: Bag-of-words models by Li Fei-Fei (Princeton)


TRANSCRIPT

Page 1: Cvpr2007 tutorial bag_of_words

Part 1: Bag-of-words models

by Li Fei-Fei (Princeton)

Page 2:

Related works
• Early “bag of words” models: mostly texture recognition
  – Cula & Dana, 2001; Leung & Malik, 2001; Mori, Belongie & Malik, 2001; Schmid, 2001; Varma & Zisserman, 2002, 2003; Lazebnik, Schmid & Ponce, 2003
• Hierarchical Bayesian models for documents (pLSA, LDA, etc.)
  – Hofmann, 1999; Blei, Ng & Jordan, 2004; Teh, Jordan, Beal & Blei, 2004
• Object categorization
  – Csurka, Bray, Dance & Fan, 2004; Sivic, Russell, Efros, Freeman & Zisserman, 2005; Sudderth, Torralba, Freeman & Willsky, 2005
• Natural scene categorization
  – Vogel & Schiele, 2004; Fei-Fei & Perona, 2005; Bosch, Zisserman & Munoz, 2006

Page 3:

Object → Bag of ‘words’

Page 4:

Analogy to documents

Of all the sensory impressions proceeding to the brain, the visual experiences are the dominant ones. Our perception of the world around us is based essentially on the messages that reach the brain from our eyes. For a long time it was thought that the retinal image was transmitted point by point to visual centers in the brain; the cerebral cortex was a movie screen, so to speak, upon which the image in the eye was projected. Through the discoveries of Hubel and Wiesel we now know that behind the origin of the visual perception in the brain there is a considerably more complicated course of events. By following the visual impulses along their path to the various cell layers of the optical cortex, Hubel and Wiesel have been able to demonstrate that the message about the image falling on the retina undergoes a step-wise analysis in a system of nerve cells stored in columns. In this system each cell has its specific function and is responsible for a specific detail in the pattern of the retinal image.

sensory, brain, visual, perception, retinal, cerebral cortex, eye, cell, optical nerve, image, Hubel, Wiesel

China is forecasting a trade surplus of $90bn (£51bn) to $100bn this year, a threefold increase on 2004's $32bn. The Commerce Ministry said the surplus would be created by a predicted 30% jump in exports to $750bn, compared with an 18% rise in imports to $660bn. The figures are likely to further annoy the US, which has long argued that China's exports are unfairly helped by a deliberately undervalued yuan. Beijing agrees the surplus is too high, but says the yuan is only one factor. Bank of China governor Zhou Xiaochuan said the country also needed to do more to boost domestic demand so more goods stayed within the country. China increased the value of the yuan against the dollar by 2.1% in July and permitted it to trade within a narrow band, but the US wants the yuan to be allowed to trade freely. However, Beijing has made it clear that it will take its time and tread carefully before allowing the yuan to rise further in value.

China, trade, surplus, commerce, exports, imports, US, yuan, bank, domestic, foreign, increase, trade, value
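The analogy can be made concrete in a few lines of Python (a toy sketch of my own, not part of the tutorial): reduce a passage to a histogram of word counts over a chosen vocabulary, discarding word order entirely.

```python
from collections import Counter
import re

def bag_of_words(text, vocabulary):
    """Count how often each vocabulary word occurs in the text.
    Word order is discarded entirely -- the 'bag' in bag-of-words."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(tokens)
    return [counts[word] for word in vocabulary]

vocab = ["sensory", "brain", "visual", "yuan", "trade"]
doc = ("Our perception of the world around us is based essentially "
       "on the messages that reach the brain from our eyes.")
print(bag_of_words(doc, vocab))  # [0, 1, 0, 0, 0]
```

Just as the highlighted keywords above summarize each passage, the two example documents would land in clearly different regions of this count space.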

Page 5:

A clarification: definition of “BoW”
• Looser definition
  – Independent features

Page 6:

A clarification: definition of “BoW”
• Looser definition
  – Independent features
• Stricter definition
  – Independent features
  – Histogram representation

Page 7:

[Pipeline diagram]
learning: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers
recognition: feature detection & representation → image representation → category decision

Page 8:

Representation
1. Feature detection & representation
2. Codewords dictionary
3. Image representation

Page 9:

1. Feature detection and representation

Page 10:

1. Feature detection and representation
• Regular grid
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005

Page 11:

1. Feature detection and representation
• Regular grid
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005
• Interest point detector
  – Csurka et al., 2004
  – Fei-Fei & Perona, 2005
  – Sivic et al., 2005

Page 12:

1. Feature detection and representation
• Regular grid
  – Vogel & Schiele, 2003
  – Fei-Fei & Perona, 2005
• Interest point detector
  – Csurka, Bray, Dance & Fan, 2004
  – Fei-Fei & Perona, 2005
  – Sivic, Russell, Efros, Freeman & Zisserman, 2005
• Other methods
  – Random sampling (Vidal-Naquet & Ullman, 2002)
  – Segmentation-based patches (Barnard, Duygulu, Forsyth, de Freitas, Blei & Jordan, 2003)

Page 13:

1. Feature detection and representation
• Detect patches [Mikolajczyk & Schmid ’02; Matas, Chum, Urban & Pajdla ’02; Sivic & Zisserman ’03]
• Normalize patch
• Compute SIFT descriptor [Lowe ’99]

Slide credit: Josef Sivic

Page 14:

1. Feature detection and representation

Page 15:

2. Codewords dictionary formation

Page 16:

2. Codewords dictionary formation

Vector quantization

Slide credit: Josef Sivic
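Vector quantization is typically done with k-means: the cluster centers become the codewords. A minimal sketch of my own (2-D points stand in for 128-D SIFT descriptors; `build_codebook` is a hypothetical name, not from the tutorial code):

```python
import numpy as np

def build_codebook(descriptors, k, iters=20, seed=0):
    """Plain k-means: the k cluster centers become the codewords."""
    rng = np.random.default_rng(seed)
    centers = descriptors[rng.choice(len(descriptors), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign every descriptor to its nearest center ...
        d2 = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        # ... then move each center to the mean of its assignments
        for j in range(k):
            if (labels == j).any():
                centers[j] = descriptors[labels == j].mean(0)
    return centers

# toy "descriptors": three clumps of 2-D points
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(c, 0.1, (50, 2)) for c in ([0, 0], [3, 3], [0, 3])])
codebook = build_codebook(pts, k=3)
print(codebook.shape)  # (3, 2)
```

Real systems typically use hundreds to thousands of codewords and a faster k-means variant; the idea is unchanged.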

Page 17:

2. Codewords dictionary formation

Fei-Fei et al. 2005

Page 18:

Image patch examples of codewords

Sivic et al. 2005

Page 19:

3. Image representation
[Bar chart: frequency of each codeword in the image]
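The image representation itself is just a normalized histogram: assign every patch descriptor to its nearest codeword and count. A toy sketch of my own with a hand-made 2-D codebook:

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Assign each patch descriptor to its nearest codeword and
    return the normalized frequency histogram over the dictionary."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(1)
    hist = np.bincount(labels, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

# toy 2-D codebook with two codewords; four patch descriptors
codebook = np.array([[0.0, 0.0], [1.0, 1.0]])
descs = np.array([[0.1, 0.0], [0.9, 1.1], [1.0, 0.9], [0.0, 0.2]])
print(bow_histogram(descs, codebook))  # [0.5 0.5]
```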

Page 20:

Representation
1. Feature detection & representation
2. Codewords dictionary
3. Image representation

Page 21:

Learning and Recognition
[Pipeline diagram: codewords dictionary → category models (and/or) classifiers → category decision]

Page 22:

Learning and Recognition (category models and/or classifiers)
1. Generative method: graphical models
2. Discriminative method: SVM

Page 23:

Two generative models
1. Naïve Bayes classifier
   – Csurka, Bray, Dance & Fan, 2004
2. Hierarchical Bayesian text models (pLSA and LDA)
   – Background: Hofmann, 2001; Blei, Ng & Jordan, 2004
   – Object categorization: Sivic et al., 2005; Sudderth et al., 2005
   – Natural scene categorization: Fei-Fei et al., 2005

Page 24:

First, some notation
• w_n: each patch in an image; w_n = [0, 0, …, 1, …, 0, 0]^T
• w: a collection of all N patches in an image; w = [w_1, w_2, …, w_N]
• d_j: the jth image in an image collection
• c: category of the image
• z: theme or topic of the patch

Page 25:

Case #1: the Naïve Bayes model
[Graphical model: c → w, with w in a plate of size N]

Object class decision:
c* = argmax_c p(c|w)

where
p(c|w) ∝ p(c) p(w|c) = p(c) Π_{n=1..N} p(w_n|c)

with p(c) the prior probability of the object classes and p(w|c) the image likelihood given the class.

Csurka et al. 2004
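The decision rule c* = argmax_c p(c) Π_n p(w_n|c) can be sketched directly on codeword count histograms. A minimal implementation of my own with Laplace smoothing (not the Csurka et al. code):

```python
import numpy as np

def train_nb(histograms, labels, n_codewords, alpha=1.0):
    """Estimate p(c) and p(w_n|c) from per-image codeword count
    histograms, with Laplace smoothing (alpha)."""
    classes = np.unique(labels)
    priors, likelihoods = [], []
    for c in classes:
        counts = histograms[labels == c].sum(0)
        priors.append((labels == c).mean())
        likelihoods.append((counts + alpha) / (counts.sum() + alpha * n_codewords))
    return classes, np.array(priors), np.array(likelihoods)

def classify_nb(histogram, classes, priors, likelihoods):
    """c* = argmax_c [ log p(c) + sum_n count_n * log p(w_n|c) ]."""
    log_post = np.log(priors) + histogram @ np.log(likelihoods.T)
    return classes[int(np.argmax(log_post))]

# toy example: class 0 favors codeword 0, class 1 favors codeword 1
H = np.array([[9, 1], [8, 2], [1, 9], [2, 8]], float)
y = np.array([0, 0, 1, 1])
cls, pri, lik = train_nb(H, y, n_codewords=2)
print(classify_nb(np.array([7.0, 3.0]), cls, pri, lik))  # 0
```

Working in log space avoids underflow when the product runs over hundreds of patches.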

Page 26:

Csurka et al. 2004

Page 27:

Csurka et al. 2004

Page 28:

Case #2: Hierarchical Bayesian text models

Probabilistic Latent Semantic Analysis (pLSA) – Hofmann, 2001
[Graphical model: d → z → w; w sits in a plate of size N, inside a plate of size D]

Latent Dirichlet Allocation (LDA) – Blei et al., 2001
[Graphical model: c → z → w; w sits in a plate of size N, inside a plate of size D]

Page 29:

Case #2: Hierarchical Bayesian text models
Probabilistic Latent Semantic Analysis (pLSA)
[Example: discovering the “face” topic – Sivic et al. ICCV 2005]

Page 30:

Case #2: Hierarchical Bayesian text models
Latent Dirichlet Allocation (LDA)
[Example: the “beach” category – Fei-Fei et al. ICCV 2005]

Page 31:

Case #2: the pLSA model
[Graphical model: d → z → w; w in a plate of size N, inside a plate of size D]

Page 32:

Case #2: the pLSA model

p(w_i | d_j) = Σ_{k=1..K} p(w_i | z_k) p(z_k | d_j)

• p(w_i | d_j): observed codeword distributions
• p(w_i | z_k): codeword distributions per theme (topic)
• p(z_k | d_j): theme distributions per image

Slide credit: Josef Sivic

Page 33:

Case #2: Recognition using pLSA

z* = argmax_z p(z|d)

Slide credit: Josef Sivic

Page 34:

Case #2: Learning the pLSA parameters

Maximize the likelihood of the data using EM:
• observed counts n(w_i, d_j) of word i in document j
• M … number of codewords
• N … number of images

Slide credit: Josef Sivic

Page 35:

Demo
• Course website

Page 36:

Task: face detection – no labeling

Page 37:

Demo: feature detection
• Output of crude feature detector:
  – Find edges
  – Draw points randomly from edge set
  – Draw from uniform distribution to get scale
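The three steps of the crude detector can be sketched as follows (a hypothetical reimplementation of my own: the gradient-magnitude threshold stands in for whatever edge detector the demo code actually uses):

```python
import numpy as np

def crude_feature_detector(img, n_points=100, scale_range=(4, 32), seed=0):
    """Threshold gradient magnitude to get an 'edge set', sample patch
    centers at random from it, and draw each patch scale uniformly."""
    rng = np.random.default_rng(seed)
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    edges = np.argwhere(mag > mag.mean() + mag.std())   # crude edge set
    idx = rng.choice(len(edges), size=min(n_points, len(edges)), replace=False)
    centers = edges[idx]                                # (row, col) pairs
    scales = rng.uniform(*scale_range, size=len(centers))
    return centers, scales

# synthetic image: a bright square on a dark background
img = np.zeros((64, 64)); img[16:48, 16:48] = 1.0
centers, scales = crude_feature_detector(img)
print(len(centers))  # 100
```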

Page 38:

Demo: learnt parameters
• Codeword distributions per theme (topic): p(w|z)
• Theme distributions per image: p(z|d)
• Learning the model: do_plsa(‘config_file_1’)
• Evaluate and visualize the model: do_plsa_evaluation(‘config_file_1’)

Page 39:

Demo: recognition examples

Page 40:

Demo: categorization results
• Performance of each theme

Page 41:

Learning and Recognition (category models and/or classifiers)
1. Generative method: graphical models
2. Discriminative method: SVM

Page 42:

Discriminative methods based on ‘bag of words’ representation
[Illustration: zebra vs. non-zebra training examples separated by a decision boundary]

Page 43:

Discriminative methods based on ‘bag of words’ representation
• Grauman & Darrell, 2005, 2006: SVM with pyramid match kernels
• Others: Csurka, Bray, Dance & Fan, 2004; Serre & Poggio, 2005

Page 44:

Summary: Pyramid match kernel
• Optimal partial matching between sets of features

Grauman & Darrell, 2005. Slide credit: Kristen Grauman

Page 45:

Pyramid Match (Grauman & Darrell 2005)

Histogram intersection

Slide credit: Kristen Grauman

Page 46:

Pyramid Match (Grauman & Darrell 2005)
• Histogram intersection
• The difference in histogram intersections across levels counts the number of new pairs matched: (matches at this level) − (matches at previous level)

Slide credit: Kristen Grauman

Page 47:

Pyramid match kernel
• Computed over histogram pyramids; each level i contributes the number of newly matched pairs at that level
• Weights inversely proportional to bin size (a measure of the difficulty of a match at level i)
• Normalize kernel values to avoid favoring large sets

Slide credit: Kristen Grauman
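Putting the pieces together for 1-D point sets (a minimal sketch of my own: the real kernel uses multi-dimensional histograms with random shifts, and normalizes by the self-similarity scores to avoid favoring large sets):

```python
import numpy as np

def histogram_intersection(h1, h2):
    return np.minimum(h1, h2).sum()

def pyramid_match(x, y, n_levels=4, d_max=16.0):
    """Pyramid match over 1-D point sets in [0, d_max): histogram both
    sets at doubling bin widths, count the *new* matches appearing at
    each level, and weight them by 1 / bin_size."""
    score, prev = 0.0, 0.0
    for i in range(n_levels):
        bin_size = 2 ** i
        bins = np.arange(0, d_max + bin_size, bin_size)
        hx, _ = np.histogram(x, bins)
        hy, _ = np.histogram(y, bins)
        matches = histogram_intersection(hx, hy)
        score += (1.0 / bin_size) * (matches - prev)  # new matches only
        prev = matches
    return score

x = np.array([1.0, 5.0, 9.0])
y = np.array([1.2, 6.5, 14.0])
print(pyramid_match(x, y))  # 1.375
```

Close pairs (1.0 vs 1.2) match at the finest level and earn the largest weight; pairs that only co-occur in coarse bins contribute progressively less.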

Page 48:

Example pyramid match – Level 0

Slide credit: Kristen Grauman

Page 49:

Example pyramid match – Level 1

Slide credit: Kristen Grauman

Page 50:

Example pyramid match – Level 2

Slide credit: Kristen Grauman

Page 51:

Example pyramid match
[Plot comparing the pyramid match score to the optimal match]

Slide credit: Kristen Grauman

Page 52:

Summary: Pyramid match kernel
• Optimal partial matching between sets of features
• Each level i contributes the number of new matches at level i, weighted by the difficulty of a match at level i

Slide credit: Kristen Grauman

Page 53:

Object recognition results
• ETH-80 database, 8 object classes (Eichhorn & Chapelle, 2004)
• Features: Harris detector; PCA-SIFT descriptor, d=10

Kernel                                     Recognition rate
Match [Wallraven et al.]                   84%
Bhattacharyya affinity [Kondor & Jebara]   85%
Pyramid match                              84%

Slide credit: Kristen Grauman

Page 54:

Object recognition results
• Caltech-101 database, 101 object classes
• Features: SIFT detector; PCA-SIFT descriptor, d=10
• 30 training images / class
• 43% recognition rate (1% chance performance)
• 0.002 seconds per match

Slide credit: Kristen Grauman

Page 55:

[Pipeline diagram]
learning: feature detection & representation → codewords dictionary → image representation → category models (and/or) classifiers
recognition: feature detection & representation → image representation → category decision

Page 56:

What about spatial info?

Page 57:

What about spatial info?
• Feature level
  – Spatial influence through correlogram features: Savarese, Winn & Criminisi, CVPR 2006

Page 58:

What about spatial info?
• Feature level
• Generative models
  – Sudderth, Torralba, Freeman & Willsky, 2005, 2006
  – Niebles & Fei-Fei, CVPR 2007

Page 59:

What about spatial info?
• Feature level
• Generative models
  – Sudderth, Torralba, Freeman & Willsky, 2005, 2006
  – Niebles & Fei-Fei, CVPR 2007
[Graphical model sketch: part nodes P1–P4 and a background node Bg generate the image's observed words w]

Page 60:

What about spatial info?
• Feature level
• Generative models
• Discriminative methods
  – Lazebnik, Schmid & Ponce, 2006

Page 61:

Invariance issues
• Scale and rotation
  – Implicit
  – Detectors and descriptors

Kadir & Brady, 2003

Page 62:

Invariance issues
• Scale and rotation
• Occlusion
  – Implicit in the models
  – Codeword distribution: small variations
  – (In theory) theme (z) distribution: different occlusion patterns

Page 63:

Invariance issues
• Scale and rotation
• Occlusion
• Translation
  – Encode (relative) location information
    • Sudderth, Torralba, Freeman & Willsky, 2005, 2006
    • Niebles & Fei-Fei, 2007

Page 64:

Invariance issues
• Scale and rotation
• Occlusion
• Translation
• View point (in theory)
  – Codewords: detector and descriptor
  – Theme distributions: different view points

Fergus, Fei-Fei, Perona & Zisserman, 2005

Page 65:

Model properties
• Intuitive
  – Analogy to documents
[Slide reprises the document analogy from Page 4: the Hubel & Wiesel passage reduced to its bag of keywords]

Page 66:

Model properties
• Intuitive
• Generative models
  – Convenient for weakly- or un-supervised, incremental training
  – Prior information
  – Flexibility (e.g. HDP)

[Figures: incremental learning of category models from a dataset – Li, Wang & Fei-Fei, CVPR 2007; classification – Sivic, Russell, Efros, Freeman & Zisserman, 2005]

Page 67:

Model properties
• Intuitive
• Generative models
• Discriminative method
  – Computationally efficient

Grauman et al. CVPR 2005

Page 68:

Model properties
• Intuitive
• Generative models
• Discriminative method
• Learning and recognition relatively fast compared to other methods

Page 69:

Weakness of the model
• No rigorous geometric information of the object components
• It's intuitive to most of us that objects are made of parts, yet the model carries no such information
• Not extensively tested yet for view point invariance and scale invariance
• Segmentation and localization unclear