CS3245 Information Retrieval
TRANSCRIPT

Introduction to Information Retrieval
Lecture 4: Dictionaries and Tolerant Retrieval
Last Time: Postings lists and choosing terms
- Faster merging of postings lists: skip pointers
- Handling of phrase and proximity queries: biword indexes for phrase queries, positional indexes for phrase/proximity queries
- Steps in choosing terms for the dictionary: text extraction, granularity of indexing, tokenization, stop word removal, normalization, lemmatization and stemming
Today: The dictionary and tolerant retrieval
- Dictionary data structures
- "Tolerant" retrieval: wildcard queries, spelling correction, Soundex
Dictionary data structures for inverted indexes
The dictionary data structure stores the term vocabulary, document frequency, and pointers to each postings list ... in what data structure?
A naïve dictionary
An array of structs:

  char[20]    int         Postings Pointer
  20 bytes    4/8 bytes   4/8 bytes
Quick Q: What’s wrong with using this data structure?
A: Words can only be 20 chars long; wasted space for some words, not enough for others.
How do we store a dictionary in memory efficiently?
Most important: it is slow to access; a linear scan is needed! How do we quickly look up elements at query time?
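To make the fixed-width problem concrete, here is a minimal sketch using Python's stdlib struct module; the 20-byte term field and the integer sizes come from the slide, while the example terms are just illustrative:

```python
import struct

# One fixed-width dictionary entry: a 20-byte term field, a 4-byte
# document frequency, and a 4-byte postings offset (sizes from the slide).
ENTRY = struct.Struct("20s i i")

rec = ENTRY.pack(b"cat", 42, 1024)   # 17 of the 20 term bytes are wasted padding
print(ENTRY.size)                    # 28 bytes per entry, no matter the term

# Terms longer than 20 bytes are silently truncated, corrupting the dictionary:
long_term = b"pneumonoultramicroscopicsilicovolcanoconiosis"
print(ENTRY.unpack(ENTRY.pack(long_term, 1, 0))[0])  # b'pneumonoultramicrosc'
```

And since the array is unsorted, finding a term means scanning entries one by one; the structures below fix the lookup problem.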
Dictionary data structures
Two main choices: hash table or tree. Some IR systems use hash tables, some use trees.
To think about: what issues influence the choice between these two data structures? (Hint: see IIR)
Hash table
Each vocabulary term is hashed to an integer.
Pros: lookup is faster than for a tree: O(1)
Cons:
- No easy way to find minor variants: judgment/judgement
- No prefix search
- If the vocabulary keeps growing, we need to occasionally do the expensive operation of rehashing everything
Not very tolerant!
Tree: binary tree
[Figure: a binary lexicon tree; the root splits a-m / n-z, with leaves a-hu, hy-m, n-sh, si-z]
Tree: B-tree
Definition: every internal node has a number of children in the interval [a, b], where a and b are appropriate natural numbers, e.g., [2, 4].
[Figure: a B-tree whose root's children cover the key ranges a-hu, hy-m, and n-z]
Trees
Simplest: binary tree. More common: B-trees.
Trees require a standard ordering of characters and hence of strings ... but we have one: lexicographical ordering.
Pros: solves the prefix problem (e.g., terms starting with "hyp")
Cons: slower: O(log M) lookup, and this requires a balanced tree; rebalancing binary trees is expensive. B-trees mitigate the rebalancing problem.
WILDCARD QUERIES
Wildcard queries: *
mon*: find all docs containing any word beginning with "mon".
Easy with a binary tree (or B-tree) lexicon: retrieve all words w in the range mon ≤ w < moo.
*mon: find words ending in "mon": we need help! Maintain an additional B-tree for terms written in reverse; then retrieve all (reversed) words w in the range nom ≤ w < non.
Quick Q1: why would someone use this feature?
Quick Q2: from this, how can we enumerate all terms meeting the wildcard query pro*cent?
Intersection, redux
Answer: use the forward B-tree for "pro*" and the backward B-tree for "*cent", then intersect the two term sets.
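Here is a minimal sketch of both B-tree tricks, using sorted Python lists and bisect in place of real B-trees; the tiny lexicon is made up for illustration:

```python
import bisect

lexicon = sorted(["century", "moat", "mon", "monday", "money", "moon",
                  "percent", "procent", "proficient"])
rev_lexicon = sorted(w[::-1] for w in lexicon)  # the "additional B-tree": reversed terms

def prefix_range(sorted_terms, prefix):
    """All terms w in the range prefix <= w < next(prefix), e.g., mon <= w < moo."""
    lo = bisect.bisect_left(sorted_terms, prefix)
    hi = bisect.bisect_left(sorted_terms, prefix[:-1] + chr(ord(prefix[-1]) + 1))
    return sorted_terms[lo:hi]

print(prefix_range(lexicon, "mon"))        # mon*  -> ['mon', 'monday', 'money']
print([w[::-1] for w in prefix_range(rev_lexicon, "tnec")])
                                           # *cent -> ['procent', 'percent']

# pro*cent: forward set for pro* intersected with backward set for *cent
pro = set(prefix_range(lexicon, "pro"))
cent = {w[::-1] for w in prefix_range(rev_lexicon, "tnec")}
print(pro & cent)                          # {'procent'}
```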
Handling general wildcard queries
General wildcard queries have the form X*Y.
Look up X* in a normal B-tree AND *Y in a reverse B-tree, then intersect the two term sets. Expensive!
The solution: transform wildcard queries so that the *'s always occur at the end. This gives rise to the permuterm index.
Permuterm index
For the term hello, index under: hello$, ello$h, llo$he, lo$hel, o$hell, and $hello, where $ is a special end-of-word symbol.
Queries:
  X    → lookup on X$
  X*   → lookup on $X*
  *X   → lookup on X$*
  *X*  → lookup on X*
  X*Y  → lookup on Y$X*
Example: query hel*o has X = hel and Y = o, so look up o$hel*.
Not so quick Q: What about X*Y*Z?
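A minimal sketch of a permuterm lookup in Python, using a plain dict of rotations where a real system would use a B-tree over the rotated vocabulary; the toy lexicon is illustrative:

```python
def rotations(term: str):
    """All rotations of term + '$': the permuterm entries for this term."""
    t = term + "$"
    return [t[i:] + t[:i] for i in range(len(t))]

permuterm = {}                      # rotation -> original term
for term in ["hello", "halo", "hero", "halter"]:
    for rot in rotations(term):
        permuterm[rot] = term

def single_star(query: str):
    """Answer a query with one *: rotate it so the * is at the end (Y$X*)."""
    x, y = query.split("*")
    key = y + "$" + x               # for hel*o this is o$hel
    # A B-tree would do a range scan here; we scan the dict for the sketch.
    return sorted({t for rot, t in permuterm.items() if rot.startswith(key)})

print(single_star("hel*o"))   # lookup o$hel* -> ['hello']
print(single_star("h*"))      # lookup $h*    -> ['halo', 'halter', 'hello', 'hero']
```

The same rotation rule covers all the single-* cases in the table above; X*Y*Z needs an extra post-filtering step.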
Permuterm query processing
Rotate the query so the wildcard is at the end, then use a B-tree lookup as before.
Permuterm problem: the lexicon size blows up, proportional to average word length.
Is there any other solution?
Bigram (k-gram) index
Enumerate all k-grams (sequences of k characters) occurring in any term.
E.g., from the text "April is the cruelest month" we get the 2-grams (bigrams):
  $a, ap, pr, ri, il, l$, $i, is, s$, $t, th, he, e$, $c, cr, ru, ue, el, le, es, st, t$, $m, mo, on, nt, h$
As before, "$" is a special word-boundary symbol.
Maintain a second inverted index from bigrams to dictionary terms that match each bigram.
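A minimal sketch of building the k-gram index in Python (k = 2; the term list is a toy example chosen to match the next two slides):

```python
from collections import defaultdict

def kgrams(term: str, k: int = 2):
    """The k-grams of '$term$', with $ marking the word boundaries."""
    t = "$" + term + "$"
    return {t[i:i + k] for i in range(len(t) - k + 1)}

kgram_index = defaultdict(set)            # bigram -> set of matching terms
for term in ["among", "amortize", "axon", "mace", "madden", "month", "moon"]:
    for gram in kgrams(term):
        kgram_index[gram].add(term)

print(sorted(kgram_index["mo"]))          # ['among', 'amortize', 'month', 'moon']
print(sorted(kgram_index["on"]))          # ['among', 'axon', 'month', 'moon']
```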
Bigram index example
The k-gram index finds terms based on a query consisting of k-grams (here k = 2).
[Figure: example postings in the bigram index
  $m → mace, madden
  mo → among, amortize
  on → among, axon ]
Bigram query processing
The query mon* can now be run as: $m AND mo AND on
This gets the terms that match the AND version of our wildcard query. Oops! We also included moon, a false positive!
We must post-filter these terms against the original query. The surviving enumerated terms are then looked up in the term-document inverted index.
Fast and space efficient (compared to the permuterm index).
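Continuing the sketch above, mon* becomes a Boolean AND over three bigram postings lists, followed by a post-filter (fnmatch is stdlib; moon is the false positive the slide mentions):

```python
from fnmatch import fnmatchcase

# Query mon* as $m AND mo AND on over the k-gram index built above:
candidates = kgram_index["$m"] & kgram_index["mo"] & kgram_index["on"]
print(sorted(candidates))     # ['month', 'moon'] -- moon is a false positive

# Post-filter the enumerated terms against the original wildcard pattern;
# the survivors are then looked up in the term-document inverted index.
survivors = [t for t in sorted(candidates) if fnmatchcase(t, "mon*")]
print(survivors)              # ['month']
```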
Processing wildcard queries
After getting the possible terms, we still need to execute a Boolean query for each possible term. Wildcards can result in expensive query execution (very large disjunctions ...), e.g., pyth* AND prog*
If you encourage laziness, people will respond!
Which web search engines allow wildcard queries?
[Figure: a search box reading "Type your search terms, use '*' if you need to. E.g., Alex* will match Alexander."]
SPELLING CORRECTION
Spellling corektion
Two principal uses:
1. Correcting document(s) being indexed
2. Correcting user queries to retrieve the "right" answers
Two main flavors:
- Isolated word: check each word on its own for misspelling. Will not catch typos resulting in correctly spelled words, e.g., from → form.
- Context-sensitive: look at the surrounding words, e.g., I flew form Heathrow to Narita.
Document correction
Especially needed for OCR'ed documents.
Correction algorithms are tuned for common errors (e.g., rn/m) and can use domain-specific knowledge: OCR can confuse O and D more often than it would confuse O and I, whereas O and I are adjacent on the QWERTY keyboard and so more likely interchanged in typing.
But also: web pages and even printed material have typos.
Goal: a dictionary that contains fewer misspellings. But often we don't change the documents; instead we aim to fix the query-document mapping.
Query misspellings
Our principal focus here. E.g., the query Britiny Speares.
We can:
- Return several suggested alternative queries with the correct spelling: "Did you mean ...?"
- Retrieve documents indexed by the correct spelling
Isolated word correction
Fundamental premise: there is a lexicon from which the correct spellings come. Two basic choices for this:
- A standard lexicon, such as Merriam-Webster's English Dictionary, or a domain-specific lexicon (often hand-maintained)
- The lexicon of the indexed corpus, e.g., all words on the web, all names, acronyms, etc. (including misspellings)
Isolated word correction
Given a lexicon and a character sequence Q, return the words in the lexicon closest to Q.
How do we define "closest"? We'll study several alternatives:
1. Edit distance (Levenshtein distance)
2. Weighted edit distance
3. n-gram overlap
1. Edit distance
Given two strings S1 and S2, the edit distance is the minimum number of operations needed to convert one into the other. It is fundamentally related to the longest common subsequence (LCS) problem you may already know.
Operations are typically character-level: insert, delete, replace, (transposition).
E.g., the edit distance from dof to dog is 1; from cat to act it is 2 (just 1 with transposition); from cat to dog it is 3.
Generally found by dynamic programming.
Dynamic programming
(Not dynamic and not programming!)
- Build up solutions of "simpler" instances from small to large
- Save the results of solutions of "simpler" instances
- Use those solutions to solve larger problems
Useful when a problem can be solved using the solutions of two or more instances that are only slightly simpler than the original instance.
Computing edit distance
Let's diagram this as an array, with S1 (PAT) on the x-axis and S2 (APT) on the y-axis.
Possible moves: insert, delete, match or replace.
Store the edit distance between the prefixes S1(1, i) and S2(1, j) at entry (i, j):

      _  P  A  T
  _   0  1  2  3
  A   1  1  1  2
  P   2  1  2  2
  T   3  2  2  2

E(i, j) = min{ E(i, j-1) + 1,
               E(i-1, j) + 1,
               E(i-1, j-1) + m }
where m = 1 if S1[i] ≠ S2[j], and 0 otherwise.
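A minimal sketch of this recurrence in Python (insert/delete/replace only; transposition is left out, as on the slide):

```python
def edit_distance(s1: str, s2: str) -> int:
    """Levenshtein distance via the dynamic-programming table above."""
    rows, cols = len(s2) + 1, len(s1) + 1
    E = [[0] * cols for _ in range(rows)]
    for j in range(cols):
        E[0][j] = j                          # j insertions build s1[:j]
    for i in range(rows):
        E[i][0] = i                          # i deletions erase s2[:i]
    for i in range(1, rows):
        for j in range(1, cols):
            m = 0 if s1[j - 1] == s2[i - 1] else 1
            E[i][j] = min(E[i][j - 1] + 1,      # insert
                          E[i - 1][j] + 1,      # delete
                          E[i - 1][j - 1] + m)  # match (m = 0) or replace (m = 1)
    return E[rows - 1][cols - 1]

print(edit_distance("PAT", "APT"))  # 2, the bottom-right cell of the table above
print(edit_distance("cat", "dog"))  # 3
```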
Practice your edit distance (CHEEKY vs. CHICKEN):

      _  C  H  I  C  K  E  N
  _   0  1  2  3  4  5  6  7
  C   1  0  1  2  3  4  5  6
  H   2  1  0  1  2  3  4  5
  E   3  2  1  1  2  3  3  4
  E   4  3  2  2  2  3  3  4
  K   5  4  3  3  3  2  3  4
  Y   6  5  4  4  4  3  3  4

Edit distance: 4
(Blanks on slides, you may want to fill in.)
Practice your edit distance (exercise; fill in the remaining cells):

      _  C  H  I  C  K  E  N
  _   0  1  2  3  4  5  6  7
  C   1  0  1  2  3  4  5  6
  H   2  1  0  1  2  3  4  5
  E   3  2  1  1  ?
  E
  K
  Y

(Blanks on slides, you may want to fill in.)
2. Weighted edit distance
As above, but the weight of an operation depends on the character(s) involved.
Meant to capture OCR or keyboard errors. E.g., m is more likely to be mistyped as n than as q; therefore, replacing m by n is a smaller edit distance than replacing m by q. This may be formulated as a probability model.
Requires a weight matrix as input; modify the dynamic programming to handle weights.
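A minimal sketch of the weighted variant; the substitution-cost function here is a made-up stand-in for a real confusion matrix estimated from error data:

```python
def sub_cost(a: str, b: str) -> float:
    """Toy substitution weights: m/n confusions are cheaper than others."""
    if a == b:
        return 0.0
    return 0.5 if {a, b} == {"m", "n"} else 1.0

def weighted_edit_distance(s1: str, s2: str,
                           ins: float = 1.0, dele: float = 1.0) -> float:
    rows, cols = len(s2) + 1, len(s1) + 1
    E = [[0.0] * cols for _ in range(rows)]
    for j in range(1, cols):
        E[0][j] = E[0][j - 1] + ins
    for i in range(1, rows):
        E[i][0] = E[i - 1][0] + dele
    for i in range(1, rows):
        for j in range(1, cols):
            E[i][j] = min(E[i][j - 1] + ins,       # weighted insert
                          E[i - 1][j] + dele,      # weighted delete
                          E[i - 1][j - 1] + sub_cost(s1[j - 1], s2[i - 1]))
    return E[rows - 1][cols - 1]

print(weighted_edit_distance("moon", "noon"))  # 0.5: replacing m by n is cheap
```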
Edit distance to all dictionary terms?
Given a (misspelled) query, do we compute its edit distance to every dictionary term? That is expensive and slow. Alternative?
How do we cut down the set of candidate dictionary terms? One possibility is to use n-gram overlap for this. It can also be used by itself for spelling correction.
3. N-gram overlap
Enumerate all the n-grams in the query string as well as in the lexicon.
Use the n-gram index (recall wildcard search) to retrieve all lexicon terms matching any of the query n-grams.
Threshold by the number of matching n-grams. Variants: weight by keyboard layout, assume the initial letter is correct, etc.
Arocdnicg to rsceearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer are in the rghit pcale. The rset can be a toatl mses and you can sitll raed it wouthit pobelrm. Tihs is buseace the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.
(This story is actually an urban legend; no such study was done at Cambridge.)
Example with trigrams
Suppose the text is november; its trigrams are nov, ove, vem, emb, mbe, ber.
The query is december; its trigrams are dec, ece, cem, emb, mbe, ber.
So 3 trigrams overlap (out of 6 in each term). How can we turn this into a normalized measure of overlap?
One option: the Jaccard coefficient
A commonly used measure of overlap (and a generally useful one, even outside of IR; also known as the "coefficient de communauté"). Let X and Y be two sets; then the Jaccard coefficient is

  J(X, Y) = |X ∩ Y| / |X ∪ Y|

It equals 1 when X and Y have the same elements and 0 when they are disjoint. X and Y don't have to be the same size, and the coefficient is always a number between 0 and 1.
Now threshold to decide if you have a match: e.g., if Jaccard > 0.8, declare a match.
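A minimal sketch computing the Jaccard coefficient over trigram sets for the november/december example from the previous slide:

```python
def trigrams(term: str):
    """Character trigrams of a term (no boundary markers here)."""
    return {term[i:i + 3] for i in range(len(term) - 2)}

def jaccard(x: set, y: set) -> float:
    return len(x & y) / len(x | y)

nov, dec = trigrams("november"), trigrams("december")
print(sorted(nov & dec))    # ['ber', 'emb', 'mbe']: the 3 overlapping trigrams
print(jaccard(nov, dec))    # 3 / 9 = 0.333..., well below a 0.8 threshold
```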
Matching bigrams
Consider the query lord; we wish to identify words matching at least 2 of its 3 bigrams (lo, or, rd).
[Figure: bigram postings
  lo → alone, lore, sloth
  or → border, lore, morbid
  rd → ardent, border, card ]
A standard postings "merge" enumerates the hits. Adapt this to use the Jaccard (or another) measure.
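A minimal sketch of the counting merge over the postings above, keeping terms that match at least 2 of the query's 3 bigrams:

```python
from collections import Counter

bigram_postings = {                 # the postings from the figure above
    "lo": ["alone", "lore", "sloth"],
    "or": ["border", "lore", "morbid"],
    "rd": ["ardent", "border", "card"],
}

hits = Counter(term for plist in bigram_postings.values() for term in plist)
print([term for term, n in hits.items() if n >= 2])   # ['lore', 'border']
```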
Context-sensitive spelling correction
Text: I flew from Heathrow to Narita.
Consider the phrase query "flew form Heathrow". We'd like to respond
  Did you mean "flew from Heathrow"?
because no docs matched the query phrase.
Context-sensitive correction
We need the surrounding context to catch this.
Retrieve dictionary terms close (in weighted edit distance) to each query term, then try all possible resulting phrases with one word "corrected" at a time:
  flew from Heathrow
  fled form Heathrow
  flea form Heathrow
Hit-based spelling correction: suggest the alternative with the most hits (in queries or documents). Here the correct query "flew from Heathrow" has the most hits. The hit-based paradigm is applied in many other places too!
Another approach
Break phrase queries into conjunctions of biwords, and look for biwords that need only one term corrected, e.g., "flew from", "from Heathrow", "flea form".
Enumerate the phrase matches and ... rank them!
General issues in spelling correction
We enumerate multiple possible corrections for "Did you mean?", but we need to decide which to present to the user.
Use heuristics:
- The correction with the most hits
- Query log analysis and tweaking, for especially popular, topical queries
General issues in spelling correction
Alternatively, we can automatically search for all possible corrections in our inverted index and return either all docs (slow) or a single most likely correction.
These alternatives disempower the user, but may save a round of interaction with the user.
Spelling correction is computationally expensive, so avoid running it routinely on every query; run it only on queries that matched few docs.
SOUNDEX
Soundex
A class of heuristics to expand a query into phonetic equivalents. Language specific; used mainly for names. E.g., chebyshev → tchebycheff.
Invented for the U.S. census ... in 1918.
We'll explore this just in the context of English.
To think about: what other languages does it make sense for?
Soundex: typical algorithm
Turn every token to be indexed into a 4-character reduced form, and do the same with query terms. Build and search an index on the reduced forms (when the query calls for a Soundex match).
See Wikipedia's entry: https://en.wikipedia.org/wiki/Soundex
Soundex: typical algorithm
1. Retain the first letter of the word.
2. Change all occurrences of the following letters to '0' (zero): 'A', 'E', 'I', 'O', 'U', 'H', 'W', 'Y'.
3. Change the remaining letters to digits as follows:
   B, F, P, V → 1
   C, G, J, K, Q, S, X, Z → 2
   D, T → 3
   L → 4
   M, N → 5
   R → 6
4. Repeatedly remove one out of each pair of consecutive identical digits.
5. Remove all zeros from the resulting string.
6. Pad the resulting string with trailing zeros and return the first four positions, which will be of the form <uppercase letter> <digit> <digit> <digit>.

E.g., Herman becomes H655.
Will hermann generate the same code?
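A minimal sketch of steps 1-6 in Python, which also answers the question; it assumes plain ASCII alphabetic input:

```python
def soundex(name: str) -> str:
    """4-character Soundex code following steps 1-6 above."""
    name = name.upper()
    code = {**dict.fromkeys("AEIOUHWY", "0"),
            **dict.fromkeys("BFPV", "1"), **dict.fromkeys("CGJKQSXZ", "2"),
            **dict.fromkeys("DT", "3"), "L": "4",
            **dict.fromkeys("MN", "5"), "R": "6"}
    digits = name[0] + "".join(code[c] for c in name[1:])   # steps 1-3
    deduped = digits[0]                                      # step 4
    for c in digits[1:]:
        if c != deduped[-1]:
            deduped += c
    no_zeros = deduped[0] + deduped[1:].replace("0", "")     # step 5
    return (no_zeros + "000")[:4]                            # step 6

print(soundex("Herman"))    # H655
print(soundex("Hermann"))   # H655: yes, the same code
print(soundex("Hsin"), soundex("Xin"))   # H250 X500: completely different
```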
Soundex
Soundex is the classic algorithm, provided by most databases (Oracle, Microsoft, ...).
How useful is Soundex? Not very, for general IR spelling correction. Okay for "high recall" tasks (e.g., Interpol), though biased toward names of certain nationalities.
It fails badly for Chinese names: Xin (Pinyin) and Hsin (Wade-Giles) are mapped to completely different codes.
Now what queries can we process?
We have:
- A positional inverted index with skip pointers
- A wildcard index
- Spelling correction
- Soundex
Queries such as: (SPELL(moriset) /3 toron*to) OR SOUNDEX(chaikofski)
Summary
Data structures for the dictionary: hash tables and trees.
Learning to be tolerant:
1. Wildcards: general trees, permuterm, n-grams (redux)
2. Spelling correction: edit distance, n-grams (re-redux)
3. Phonetic: Soundex
Resources
IIR Chapter 3; MG 4.2.
Efficient spelling retrieval:
- K. Kukich. Techniques for automatically correcting words in text. ACM Computing Surveys 24(4), Dec 1992.
- J. Zobel and P. Dart. Finding approximate matches in large lexicons. Software: Practice and Experience 25(3), March 1995. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.14.3856&rep=rep1&type=pdf
- Mikael Tillenius. Efficient Generation and Ranking of Spelling Error Corrections. Master's thesis, Royal Institute of Technology, Sweden. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.49.1392
Nice, easy reading on spelling correction:
- Peter Norvig. How to write a spelling corrector. http://norvig.com/spell-correct.html (it's in Python!)