

COPYRIGHT 2019, THE UNIVERSITY OF MELBOURNE

Information Retrieval: the Vector Space Model

COMP90042 Lecture 2


Overview

• Concepts of the Term-Document Matrix and inverted index

• Vector space measure of query-document similarity

• Efficient search for best documents


The problem of information retrieval

• Given a large document collection and a query string, which of the documents are (most) relevant to the user's information need?

• Information need ≠ query string

• Raises questions:
  * how to define query-document relevance?
  * how can we service queries efficiently?
  * how can we evaluate effectiveness?


What is the information need?

Does the user just want one answer or several?

The engine returns ranked answers with short snippets (what underlies the ranking?)


Engines also suggest similar queries


Often the results include lists of things, or many (diverse) results are desired


Evaluation on test collections

• IR research often uses reusable test collections constructed for reproducible IR evaluation, e.g., the TREC competitions; these comprise:
  * a corpus of documents
  * a set of queries ("topics"), often including a long-form elaboration of the information need
  * relevance judgements ("qrels") for each document and query: a human judgement of whether the document is relevant to the information need in the given query


Example TREC datasets

TREC 5, 1996

Topic
⟨num⟩ Number: 252
⟨title⟩ Topic: Combating Alien Smuggling
⟨desc⟩ Description: What steps are being taken by governmental or even private entities world-wide to stop the smuggling of aliens.
⟨narr⟩ Narrative: To be relevant, a document must describe an effort being made (other than routine border patrols) in any country of the world to prevent the illegal penetration of aliens across borders.

Qrels

Topic Docid Rel

252 AP881226-0140 1

252 AP881227-0083 0

252 CR93E-10038 0

252 CR93E-1004 0

252 CR93E-10211 0

252 CR93E-10529 1

. . .

Runfile

Topic Docid Score

252 CR93H-9548 0.5436

252 CR93H-12789 0.4958

252 CR93H-10580 0.4633

252 CR93H-14389 0.4616

252 AP880828-0030 0.4523

252 CR93H-10986 0.4383

. . .



Document representation

• Assume each document is a bag of words, i.e., discard word order information and just record counts

• The whole collection could be modelled as a list of bags of words
  * but this doesn't allow efficient access, e.g., to find a specific word

• Solution: the term-document matrix
  * rows represent documents
  * columns represent terms, which are typically word types (excluding very high frequency "stop words", and often stemmed)
  * matrix cells might be binary indicators, frequency counts, or some other kind of 'score' attached to a word and a document


Term-document matrix

Example TDM

doc1: Two for tea and tea for two
doc2: Tea for me and tea for you
doc3: You for me and me for you

      two  tea  me  you
doc1    2    2   0    0
doc2    0    2   1    1
doc3    0    0   2    2
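As a sketch, building such a matrix from the toy corpus takes a few lines of Python (the stop-word list and lowercasing are assumed preprocessing choices, not prescribed by the slides):

```python
from collections import Counter

# Toy corpus from the example TDM
docs = {
    "doc1": "Two for tea and tea for two",
    "doc2": "Tea for me and tea for you",
    "doc3": "You for me and me for you",
}
stop_words = {"for", "and"}  # assumed stop-word list

# Per-document term counts, lowercased and with stop words removed
counts = {d: Counter(w for w in text.lower().split() if w not in stop_words)
          for d, text in docs.items()}

# Term-document matrix: one row per document, one column per term
terms = sorted({t for c in counts.values() for t in c})
tdm = {d: [counts[d][t] for t in terms] for d in docs}

print(terms)        # ['me', 'tea', 'two', 'you']
print(tdm["doc1"])  # [0, 2, 2, 0]
```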


Geometrical view of the TDM

• The TDM is not just a useful document representation
  * it also suggests a useful way of modelling documents
  * consider documents as points (vectors) in a multi-dimensional term space

• E.g., matrix representation for points in 3-d space:

Point  x  y  z
p1     2  0  2
p2     2  1  0
p3     0  2  0

[Figure: p1, p2 and p3 plotted in 3-d space. The same holds for higher-dimensional space, too, though it gets tricky to draw.]


Documents in term space

• Each of the |T| terms becomes a dimension

• Documents are points, with the value for each dimension determined by the score s_{d,t}
  * this might be the frequency of term t in document d

Point  tea  me  two
doc1     2   0    2
doc2     2   1    0
doc3     0   2    0

[Figure: doc1, doc2 and doc3 plotted as points in the 3-d term space with axes tea, me and two.]


Measuring similarity with cosine

• To account for length bias, consider documents as vectors
  * and measure the angle between them

Point  tea  two
doc1     2    2
doc2     2    0
doc4     5    7

[Figure: angular distance. doc1, doc2 and doc4 plotted in the 2-d term space (tea, two); doc1 and doc4 lie along almost the same direction.]

• We see that doc1 and doc4 are indeed similar, despite doc4 being much longer


Cosine distance

• Given two vectors, a and b, the cosine between them is

  \cos(a, b) = \frac{a \cdot b}{|a| \times |b|}

• where \cdot is the vector dot product, i.e.,

  a \cdot b = \sum_{i=1}^{|T|} a_i b_i

• and |a| denotes the vector magnitude

  |a| = \sqrt{\sum_{i=1}^{|T|} a_i^2}
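A minimal sketch of the formula over dense term vectors (the helper name is ours, not from the slides):

```python
import math

def cosine(a, b):
    """Cosine of the angle between two equal-length term vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    mag_a = math.sqrt(sum(x * x for x in a))
    mag_b = math.sqrt(sum(x * x for x in b))
    return dot / (mag_a * mag_b)

# doc1 vs doc4 and doc1 vs doc2 from the previous slide (dims: tea, two)
print(cosine([2, 2], [5, 7]))  # ~0.99: doc1 and doc4 point the same way
print(cosine([2, 2], [2, 0]))  # ~0.71: doc1 and doc2 are further apart
```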


Speeding up cosine distance

• All documents pre-normalised to unit length (or normalisation factors pre-computed and stored)

• Cosine then just requires a dot product a · b
  * but these vectors are sparse, with many 0 entries
  * only need to consider entries that are non-zero in both a and b
  * use an inverted index to exploit sparsity, and allow efficient storage and querying

• This is known as the vector space model
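Because only terms that are non-zero in both vectors matter, a sparse {term: weight} representation makes the dot product cheap; a sketch under that assumed representation:

```python
def sparse_dot(a, b):
    """Dot product of two sparse vectors stored as {term: weight} dicts."""
    if len(a) > len(b):  # iterate over the shorter vector
        a, b = b, a
    return sum(w * b[t] for t, w in a.items() if t in b)

doc1 = {"two": 0.93, "tea": 0.34}   # unit-length document vector
query = {"tea": 0.71, "me": 0.71}   # unit-length query vector
print(sparse_dot(doc1, query))      # 0.34 * 0.71 ~= 0.24
```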


TF*IDF

• Scores in the TDM typically combine two elements, tf_{d,t} × idf_t:
  * the term frequency (TF), based on the occurrence count for the term in a document (e.g., raw count, log(1 + c), ...)
  * the inverse document frequency (IDF), a commonly-used measure of a term's discriminative power, based on how rare the word is and therefore how informative an occurrence will be

• If N is the number of documents in the corpus, and df_t is the number of documents that term t appears in, then (under one formulation) the IDF of t is defined as:

  idf_t = \log \frac{N}{df_t}

• Consider the extrema of IDF, rare words vs. stop words: it ranges from

  \log \frac{N}{1} = \log N  for terms unique to one document, down to

  \log \frac{N}{N} = 0  for terms common to all documents
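A quick sketch of the formula on the toy corpus, using log base 2 as the worked example on the next slide does:

```python
import math

N = 3  # documents in the toy corpus
df = {"two": 1, "tea": 2, "me": 2, "you": 2, "for": 3}  # document frequencies

idf = {t: math.log2(N / dft) for t, dft in df.items()}
print(round(idf["two"], 2))  # 1.58: occurs in one doc, highly informative
print(round(idf["tea"], 2))  # 0.58
print(idf["for"])            # 0.0: appears in every doc, no information
```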


TF x IDF example

• Raw TF: see the example TDM below

• IDF
  * "two" occurs in 1 doc (doc1): idf_two = log2(3/1) = 1.58
  * "tea" occurs in 2 docs (doc1, doc2): idf_tea = log2(3/2) = 0.58
  * ...
  * what about a stop word? e.g., "for" is in all docs, so idf_for = log2(3/3) = 0

Example TDM

doc1: Two for tea and tea for two
doc2: Tea for me and tea for you
doc3: You for me and me for you

      two  tea  me  you
doc1    2    2   0    0
doc2    0    2   1    1
doc3    0    0   2    2


TF x IDF example (continued)

doc1: Two for tea and tea for two
doc2: Tea for me and tea for you
doc3: You for me and me for you

TF (with IDF per term):

      two   tea    me   you
doc1     2     2    0     0
doc2     0     2    1     1
doc3     0     0    2     2
idf   1.58  0.58  0.58  0.58

TF x IDF: scale each TF column (term) by its IDF score:

      two   tea    me    you
doc1  3.17  1.16  0     0
doc2  0     1.16  0.58  0.58
doc3  0     0     1.16  1.16
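The column scaling as code, reusing the rounded IDF values above (sparse rows, hypothetical variable names):

```python
# Raw TF rows (sparse) and the IDF per term from the worked example
tf = {
    "doc1": {"two": 2, "tea": 2},
    "doc2": {"tea": 2, "me": 1, "you": 1},
    "doc3": {"me": 2, "you": 2},
}
idf = {"two": 1.58, "tea": 0.58, "me": 0.58, "you": 0.58}

# Scale every TF entry by its term's IDF
tfidf = {d: {t: c * idf[t] for t, c in row.items()} for d, row in tf.items()}
print(tfidf["doc1"])  # {'two': 3.16, 'tea': 1.16}; the slide's 3.17 uses unrounded idf
```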


Query processing in VSM

• Treat the query as a short pseudo-document

• Calculate the similarity between the query pseudo-document and each document in the collection

• Rank documents by decreasing similarity with cosine

• Return the top-k ranked documents to the user (a naive end-to-end sketch follows)
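Putting the four steps together, a sketch that scores every document exhaustively (the inverted-index version later avoids this loop); vectors are assumed to be sparse {term: weight} dicts:

```python
import math

def rank(query_vec, doc_vecs, k=10):
    """Rank documents by cosine similarity to the query pseudo-document."""
    def mag(v):
        return math.sqrt(sum(w * w for w in v.values())) or 1.0  # guard empty docs

    q_mag = mag(query_vec)
    scores = {}
    for doc_id, vec in doc_vecs.items():
        dot = sum(w * vec.get(t, 0.0) for t, w in query_vec.items())
        scores[doc_id] = dot / (q_mag * mag(vec))
    return sorted(scores.items(), key=lambda kv: -kv[1])[:k]

docs = {"doc1": {"two": 2, "tea": 2}, "doc2": {"tea": 2, "me": 1, "you": 1},
        "doc3": {"me": 2, "you": 2}}
print(rank({"tea": 1, "me": 1}, docs))  # doc2 ranks first (raw TF, no IDF here)
```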


Example

Corpus:

      two  tea  me  you
doc1    2    2   0    0
doc2    0    2   1    1
doc3    0    0   2    2

Query: tea me

       two  tea  me  you
query    0    1   1    0


Example: TDM

• Corpus represented as a TF x IDF matrix:

      two   tea    me    you
doc1  3.17  1.16  0     0
doc2  0     1.16  0.58  0.58
doc3  0     0     1.16  1.16

• Normalise TF x IDF
  * divide each row by its vector magnitude, |row|
  * e.g., |doc1| = (3.17^2 + 1.16^2 + 0^2 + 0^2)^0.5, and the normalised matrix has the row doc1/|doc1|

      two   tea    me    you
doc1  0.93  0.34  0     0
doc2  0     0.82  0.41  0.41
doc3  0     0     0.71  0.71
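A sketch of the row normalisation (hypothetical helper name):

```python
import math

def normalise(vec):
    """Scale a sparse {term: weight} vector to unit length."""
    mag = math.sqrt(sum(w * w for w in vec.values()))
    return {t: w / mag for t, w in vec.items()}

doc1 = {"two": 3.17, "tea": 1.16}
print(normalise(doc1))  # roughly {'two': 0.94, 'tea': 0.34}
```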


Example: query similarity

Query as a unit vector:

    two   tea    me  you
q     0  0.71  0.71    0

Normalised TDM:

      two   tea    me    you
doc1  0.93  0.34  0     0
doc2  0     0.82  0.41  0.41
doc3  0     0     0.71  0.71

• Compute the dot product for each document
  * doc1 · q = 0.93 * 0 + 0.34 * 0.71 + 0 * 0.71 + 0 * 0 = 0.24
  * doc2 · q = 0 * 0 + 0.82 * 0.71 + 0.41 * 0.71 + 0.41 * 0 = 0.87
  * doc3 · q = 0 * 0 + 0 * 0.71 + 0.71 * 0.71 + 0.71 * 0 = 0.50

• Rank by cosine score: doc2 > doc3 > doc1


Observations

1. Elements of the vectors which are zero do not contribute to the cosine calculation
   * zeros are common with real data and large vocabularies
   * also true of other scoring functions, e.g., log(1 + TF), TF*IDF, BM25

2. Enumerating all the documents is inefficient

Can we devise a way to find the most similar documents efficiently?


Beyond cosine: BM25

Okapi BM25:

w_t = \log\frac{N - df_t + 0.5}{df_t + 0.5}   (idf)
      \times \frac{(k_1 + 1)\, f_{d,t}}{k_1\left((1 - b) + b\,\frac{L_d}{L_{ave}}\right) + f_{d,t}}   (tf and doc. length)
      \times \frac{(k_3 + 1)\, f_{q,t}}{k_3 + f_{q,t}}   (query tf)

where N is the number of documents, df_t the number of documents containing term t, f_{d,t} and f_{q,t} the term's frequency in document d and in the query, and L_d and L_ave the document length and average document length.

• Parameterised scoring formula: k1, b and k3 need to be tuned (k3 matters only for very long queries)
  * k1 = 1.5 and b = 0.75 are common defaults

• BM25 is highly effective, and the most widely used weighting method in IR

Robertson, Walker et al. (1994, 1998).
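A sketch of the formula above in Python, with the query-tf factor omitted (i.e., k3 = 0, under which that factor equals 1); names follow the slide's symbols:

```python
import math

def bm25_weight(f_dt, df_t, N, L_d, L_ave, k1=1.5, b=0.75):
    """BM25 weight of one term in one document (query-tf factor omitted)."""
    idf = math.log((N - df_t + 0.5) / (df_t + 0.5))
    tf = ((k1 + 1) * f_dt) / (k1 * ((1 - b) + b * L_d / L_ave) + f_dt)
    return idf * tf

def bm25_score(query_terms, doc_tf, df, N, L_ave, **params):
    """Score a document: sum of BM25 weights over query terms it contains."""
    L_d = sum(doc_tf.values())  # document length in tokens
    return sum(bm25_weight(doc_tf[t], df[t], N, L_d, L_ave, **params)
               for t in query_terms if t in doc_tf)
```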


Index

• Imagine we pre-compute the TF*IDF vectors for all documents, and their vector lengths (for normalisation)

• These do not change from query to query, so save time by pre-calculating their values

• But still need to iterate over every document…


Term-wise processing

• When measuring document similarity, only terms occurring in both vectors contribute to the cosine score (remember the dot product)

• So, with the query as a pseudo-document, we need only consider terms that are both in the query and in the document
  * i.e., only documents that contain at least one query term need to be considered (which makes intuitive sense)

• If we can efficiently index the documents in which each term occurs
  * complexity is reduced to O(\sum_t df_t), summing over the query terms
  * note that the most frequent term will dominate (a very good reason to drop stop words!)

• We need an index that supports quickly finding which documents a term occurs in


Inverted index

• The inverted index comprises
  * terms as rows
  * values as lists of (docID, weight) pairs, aka postings lists
  * the weights listed here are normalised TF*IDF values (although in practice one may choose to store raw counts)

two → 1: 0.88
tea → 1: 0.47   2: 0.9
me  → 2: 0.32   3: 0.71
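As a sketch, the same structure as a plain Python dict mapping each term to its postings list:

```python
# Inverted index: term -> postings list of (docID, weight) pairs
index = {
    "two": [(1, 0.88)],
    "tea": [(1, 0.47), (2, 0.9)],
    "me":  [(2, 0.32), (3, 0.71)],
}

# Finding every document a term occurs in is now a single lookup
print(index["tea"])  # [(1, 0.47), (2, 0.9)]
```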


Querying an inverted index

Query processing on the inverted index, assuming normalised TF*IDF weights (w_{t,d} denotes the normalised tf_{d,t} × idf_t value):

set accumulator a_d ← 0 for all documents d   (an inefficient use of space!)
for all terms t in the query do
    load the postings list for t
    for all postings ⟨d, w_{t,d}⟩ in the list do
        a_d ← a_d + w_{t,d}
    end for
end for
sort documents by decreasing a_d
return sorted results to the user
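The same algorithm as runnable Python; keeping the accumulators in a dict creates one only per touched document, which sidesteps the space complaint for sparse collections:

```python
from collections import defaultdict

def query_index(index, query_terms):
    """Accumulator-based query processing over an inverted index."""
    acc = defaultdict(float)                # accumulators a_d, created lazily
    for t in query_terms:
        for doc_id, w in index.get(t, []):  # postings list for t
            acc[doc_id] += w                # a_d <- a_d + w_{t,d}
    return sorted(acc.items(), key=lambda kv: -kv[1])  # decreasing a_d

index = {"tea": [(1, 0.47), (2, 0.9)], "me": [(2, 0.32), (3, 0.71)]}
print(query_index(index, ["tea", "me"]))
# [(2, 1.22), (3, 0.71), (1, 0.47)] -> ranking [2, 3, 1]
```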


Example

Postings:
tea → 1: 0.47   2: 0.9
me  → 2: 0.32   3: 0.71

• a_d = <0, 0, 0>

• term "tea"
  * a_d[1] += 0.47
  * a_d[2] += 0.9
  * end of postings list: a_d = <0.47, 0.9, 0>

• term "me"
  * a_d[2] += 0.32
  * a_d[3] += 0.71
  * end of postings list: a_d = <0.47, 1.22, 0.71>

• sort to produce the doc ranking
  * 2: 1.22; 3: 0.71; 1: 0.47 → [2, 3, 1]


Summary

• Concept of the term-document matrix and inverted index

• The vector space model, TF*IDF similarity and other scoring methods

• Inverted index & querying algorithm

• Reading
  * Chapter 6, "Scoring, term weighting & the vector space model" of Manning, Raghavan, and Schütze, Introduction to Information Retrieval
  * (opt.) Section 11.4.3 "Okapi BM25: A nonbinary model"