Clustering Art & Learning the Semantics of Words and Pictures
Manigantan Sethuraman

Page 1:

Clustering Art & Learning the Semantics of Words and Pictures

Manigantan Sethuraman

Page 2:

Key Applications

Auto-annotation: given an image, generate the associated words.

Auto-illustration: given words, generate the associated images.

Sounds familiar, doesn't it?

Page 3:

Key Ideas

Joint probability distribution: the complete sense is conveyed by considering words and images together (see the note below).

Hierarchical model: goes from general to specific, allows shared use of information, and provides a search path.

Clustering: basically grouping. Of images or of regions? Soft clustering (membership is distributed across clusters).
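A one-line restatement of the first point in symbols (a paraphrase, not a formula from the slides): with W the words and B the image regions (blobs) of a document, the model works with the joint distribution rather than the two marginals, since in general

P(W, B) \neq P(W)\, P(B)

and it is exactly this dependence that auto-annotation and auto-illustration exploit.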

Page 4:

Joint Prob. Distr. -> Text Only

Page 5:

Joint Prob. Distr. -> Images Only

Page 6:

Joint Prob. Distr. -> Words & Images

Page 7:

Hierarchical Model

Each node has a probability of generating a word or image item with respect to the document under consideration.

The cluster defines the path.

(Cluster, level) identifies the node, as sketched below.
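A small sketch of this node-addressing idea (an illustration under the assumption of a complete binary tree, not the paper's code): a cluster is a leaf, the cluster fixes a root-to-leaf path, and (cluster, level) identifies one node on that path. Nodes high in the tree are shared by many clusters, which is what allows the shared use of information mentioned earlier.

N_LEVELS = 4                      # depth of the tree (level 0 = root)
N_CLUSTERS = 2 ** (N_LEVELS - 1)  # one cluster per leaf

def node_at(cluster, level):
    """The node (level, index within level) on `cluster`'s path at `level`."""
    return (level, cluster >> (N_LEVELS - 1 - level))

def path(cluster):
    """All nodes on the root-to-leaf path selected by `cluster`."""
    return [node_at(cluster, level) for level in range(N_LEVELS)]

# Clusters 4 and 5 share their first three nodes and differ only at the leaf,
# so general words/regions can sit near the root and specific ones at the leaf.
print(path(4))   # [(0, 0), (1, 1), (2, 2), (3, 4)]
print(path(5))   # [(0, 0), (1, 1), (2, 2), (3, 5)]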

Page 8:

Associated Math

P(c | d) – probability of the cluster given the document.

P(l | c,d) – probability of the level given the cluster and document.

P(i | l,c) – probability of an item given the level and cluster.

P(l | c,d) can be roughly approximated by its average over documents, P(l | c).

Model 1 uses the document-specific value; Model 2 uses the average value. (A sketch of how these factors combine follows.)
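A hedged reconstruction of how these factors combine into a document likelihood, following the hierarchical mixture structure described on the previous slides (the exact expression in the paper may be written slightly differently):

P(d) = \sum_{c} P(c) \prod_{i \in d} \sum_{l} P(i \mid l, c)\, P(l \mid c, d)

Model 1 keeps the document-specific factor P(l | c, d); Model 2 replaces it with the cluster average P(l | c).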

Page 9:

Auto Annotation

Generate words for a given image:
Consider the probability of the image belonging to the current cluster.
Consider the probability of the items in the image being generated by the nodes at the various levels on the path associated with the current cluster.

Work the above out over all clusters.

We are computing the probability that an image emits a proposed word, given the observed segments B:
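The equation itself appeared as a figure on the slide. A hedged reconstruction from the quantities above (w is the proposed word, B the set of observed segments/blobs, and P(l | c) the average level distribution) would take the form

P(w \mid B) \propto \sum_{c} P(c) \left[ \prod_{b \in B} \sum_{l} P(b \mid l, c)\, P(l \mid c) \right] \sum_{l} P(w \mid l, c)\, P(l \mid c)

i.e. weight each cluster by how well it explains the observed segments, then read off how strongly that cluster's path emits the word.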

Page 10:

Auto Illustration

Page 11:

Is E-M Used?

Yes. E-M is used to train the model and estimate the hidden information:

Clustering – the probability of a document d being in cluster c.

Image-word correlation – the probability that item i of document d was generated at level l.

(A toy E-step sketch follows.)
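A minimal, illustrative E-step for the hierarchical mixture above, in the Model 2 setting with the cluster-average level distribution P(l | c). The toy sizes, array names, and random initialization are assumptions for illustration only, not the authors' code; "items" stand for both words and vector-quantized image-region features.

import numpy as np

rng = np.random.default_rng(0)

n_clusters, n_levels, n_items = 4, 3, 20   # toy sizes

# Model parameters (randomly initialized; distributions sum to 1 where needed)
p_c = np.full(n_clusters, 1.0 / n_clusters)                             # P(c)
p_l_given_c = rng.dirichlet(np.ones(n_levels), n_clusters)              # P(l | c)
p_i_given_lc = rng.dirichlet(np.ones(n_items), (n_clusters, n_levels))  # P(i | l, c)

def e_step(doc_items):
    """doc_items: list of item indices (words + quantized blobs) in one document.

    Returns
      resp_c : P(c | d)        -- soft cluster membership ("Clustering")
      resp_l : P(l | i, c, d)  -- per-item level responsibility
                                  ("Image-word correlation")
    """
    # P(i | l, c) * P(l | c) for the document's items,
    # shape (n_clusters, n_levels, len(doc_items))
    per_level = p_i_given_lc[:, :, doc_items] * p_l_given_c[:, :, None]

    # P(i | c) = sum over l of P(i | l, c) P(l | c)
    p_i_given_c = per_level.sum(axis=1)

    # Cluster responsibility: P(c | d) proportional to P(c) * prod_i P(i | c),
    # computed in log space to avoid underflow on long documents.
    log_resp_c = np.log(p_c) + np.log(p_i_given_c).sum(axis=1)
    log_resp_c -= log_resp_c.max()
    resp_c = np.exp(log_resp_c)
    resp_c /= resp_c.sum()

    # Level responsibility per item: P(l | i, c) proportional to P(i | l, c) P(l | c)
    resp_l = per_level / per_level.sum(axis=1, keepdims=True)
    return resp_c, resp_l

doc = [0, 3, 7, 7, 12]            # toy document: item indices
resp_c, resp_l = e_step(doc)
print("P(c | d) =", np.round(resp_c, 3))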

Page 12:

Word Sense Disambiguation

Semantic hierarchies:
Bank -> financial institution -> institution -> organization.
Bank -> slope -> geological formation -> natural object.

A word sense is defined by the path to the root.

Rather than considering the word as an item, consider the word sense as an item.

The six closest words to each occurrence of a word are used to disambiguate its sense: for each word, the sense with the largest number of hypernym (IS_A) senses in common with the neighboring words is chosen (a sketch follows).
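A hedged sketch of that hypernym-overlap rule using NLTK's WordNet interface (requires nltk and the downloaded wordnet corpus). The helper names are illustrative and not from the paper; the code simply scores each candidate sense by how many hypernym ancestors it shares with the senses of the neighboring words.

from nltk.corpus import wordnet as wn

def hypernym_set(synset):
    """A synset together with all of its hypernym (IS_A) ancestors."""
    return {synset} | set(synset.closure(lambda s: s.hypernyms()))

def disambiguate(word, neighbors):
    """Pick the sense of `word` sharing the most hypernyms with its neighbors.

    `neighbors` is the list of (roughly six) words closest to this occurrence.
    """
    # Pool the hypernyms of every sense of every neighboring word.
    context = set()
    for n in neighbors:
        for syn in wn.synsets(n):
            context |= hypernym_set(syn)

    # Score each candidate sense of `word` by its overlap with that pool.
    best, best_overlap = None, -1
    for syn in wn.synsets(word):
        overlap = len(hypernym_set(syn) & context)
        if overlap > best_overlap:
            best, best_overlap = syn, overlap
    return best

# "bank" next to financial vocabulary vs. next to river vocabulary
print(disambiguate("bank", ["money", "loan", "deposit", "cash", "account", "credit"]))
print(disambiguate("bank", ["river", "water", "shore", "stream", "slope", "mud"]))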

Page 13:

Questions & Discussion

Relationship between the object recognition paper and this paper…

Handling noise? Irrelevant descriptions for images.

Dependence on semantically meaningful segmentation…