
Asynchronous Pattern Matching - Metrics

Amihood Amir

Motivation

In the “old” days: Pattern and text are given in correct sequential order. It is possible that the content is erroneous.

New paradigm: Content is exact, but the order of the pattern symbols may be scrambled.

Why? Transmitted asynchronously? The nature of the application?

Example: Swaps

Tehse knids of typing mistakes are very common

So when searching for the pattern These, we are seeking the symbols of the pattern, but with an order changed by swaps.

Surprisingly, pattern matching with swaps is easier than pattern matching with mismatches (ACHLP:01)

Example: Reversals

AAAGGCCCTTTGAGCCC

AAAGAGTTTCCCGGCCC

Given a DNA substring, a piece of it can detach and reverse.

This process is still computationally hard.

Question: What is the minimum number of reversals necessary to sort a permutation of 1,…,n?

Global Rearrangements?

Berman & Hannenhalli (1996) called this Global Rearrangement as opposed to Local Rearrangement (edit distance). Showed it is NP-hard.

Our Thesis: This is a special case of errors in the address rather than the content.

Example: Transpositions

AAAGGCCCTTTGAGCCC

AATTTGAGGCCCAGCCC

Given a DNA substring, a piece of it can be transposed to another area.

Question: What is the minimum number of transpositions necessary to sort a permutation of 1,…,n?

Complexity?

Bafna & Pevzner (1998), Christie (1998), Hartman (2001): polynomial-time 1.5-approximation algorithms.

It is not known whether the exact transposition distance can be computed efficiently.

This is another special case of errors in the address rather than content .

Example: Block Interchanges

AAAGGCCCTTTGAGCCC

AAGTTTAGGCCCAGCCC

Given a DNA substring, two non-empty subsequences can be interchanged.

Question: What is the minimum number of block interchanges necessary to sort a permutation of 1,…,n?

Christie (1996): O(n²).

Summary

Biology: sorting permutations
Reversals (Berman & Hannenhalli, 1996): NP-hard
Transpositions (Bafna & Pevzner, 1998): ?
Block interchanges (Christie, 1996): O(n²)

Pattern Matching:
Swaps (Amir, Lewenstein & Porat, 2002): O(n log m)

Note: A swap is a simplified block interchange: 1. blocks of size 1, 2. applied only once, 3. adjacent.

Edit operations map

Reversal, Transposition, Block interchange:
1. arbitrary block size  2. not once  3. non-adjacent  4. permutation  5. optimization

Interchange:
1. block of size 1  2. not once  3. non-adjacent  4. permutation  5. optimization

Generalized-swap:
1. block of size 1  2. once  3. non-adjacent  4. repetitions  5. optimization/decision

Swap:
1. block of size 1  2. once  3. adjacent  4. repetitions  5. optimization/decision

Example:

S = abacb → S1 = bbaca (interchange of positions 1 and 5)
S1 = bbaca → S2 = bbaac (interchange of positions 4 and 5)

So S interchange matches F = bbaac, via the intermediate strings S1 = bbaca and S2 = bbaac.

S = abacb → S1 = bbaca → S2 = bcaba (interchanges of positions 1,5 and then 2,4)

Since these two interchanges have no index in common, S generalized-swap matches F = bcaba.

Definitions

Generalized Swap Matching
INPUT: text T[0..n], pattern P[0..m]
OUTPUT: all i s.t. P generalized-swap matches T[i..i+m]

Reminder: Convolution

The convolution of the strings t[1..n] and p[1..m] is the string t*p such that:

$(t*p)[i] = \sum_{k=1}^{m} t[i+k-1] \cdot p[m-k+1]$, for all $1 \le i \le n-m+1$

Fact: The convolution of an n-length text and an m-length pattern can be computed in O(n log m) time using the FFT.
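A minimal sketch of this fact (my own illustration using numpy, not code from the talk): the convolution values are the coefficients of one FFT-based polynomial product. For simplicity the FFT is taken over the whole padded text, which is O(n log n); the stated O(n log m) bound comes from splitting the text into overlapping chunks of length 2m, a refinement omitted here.

```python
import numpy as np

def convolve(t, p):
    """(t*p)[i] = sum_{k=1..m} t[i+k-1] * p[m-k+1], for every full alignment
    of p inside t, computed via one FFT-based polynomial multiplication."""
    n, m = len(t), len(p)
    size = 1
    while size < n + m:               # pad to a power of two for the FFT
        size *= 2
    ft = np.fft.rfft(np.asarray(t, dtype=float), size)
    fp = np.fft.rfft(np.asarray(p, dtype=float), size)
    full = np.fft.irfft(ft * fp, size)           # coefficients of the product polynomial
    return np.rint(full[m - 1:n]).astype(int)    # keep only full-overlap positions

# In pattern matching one usually feeds the *reversed* pattern, so that each
# output entry is the dot product of the pattern with the aligned text window:
print(convolve([1, 2, 3, 4, 5], [1, 0, 2][::-1]))   # [7, 10, 13]
```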

In Pattern Matching, Convolutions:

O(n log m) using FFT

[Figure: the pattern b0 b1 b2 is slid across the text a0 a1 a2 a3 a4; each alignment r0, r1, r2 is one convolution value.]

Problem: O(n log m) only over algebraically closed fields, e.g. ℂ.

Solution: Reduce the problem to (Boolean/integer/real) multiplication. This reduction costs!

Example: Hamming distance.

Counting mismatches is equivalent to counting matches:

A B A B C

A B B B A

Example: Count all “hits” of a 1 in the pattern against a 1 in the text.

[Figure: the text and pattern are written as 0/1 strings (e.g. text 01001, pattern 011) and the pattern is slid across the text; the convolution counts, at each alignment, the positions where a 1 in the pattern meets a 1 in the text.]

For $a \in \Sigma$ define:

$\chi_a(b) = 1$ if $a = b$, and $0$ otherwise;

$\chi_a(s_1 s_2 s_3 \dots s_n) = \chi_a(s_1)\,\chi_a(s_2)\,\chi_a(s_3)\dots\chi_a(s_n)$

Example: $\chi_a(abbaabba) = 10011001$

For $\Sigma = \{a, b, c\}$ do:

$\chi_a(T) * \chi_a(P)^R \;+\; \chi_b(T) * \chi_b(P)^R \;+\; \chi_c(T) * \chi_c(P)^R$

Result: The number of times a in pattern matches a in text + the number of times b in pattern matches b in text + the number of times c in pattern matches c in text.
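As a concrete illustration of this counting scheme (my own sketch, not the paper's code): one 0/1 indicator string per alphabet symbol, one correlation per symbol, and the sum of the correlations counts the matches at every alignment. np.correlate is used for clarity; the O(n log m) version would perform each correlation with the FFT as above.

```python
import numpy as np

def count_matches(text, pattern):
    """count_matches[i] = number of positions j with text[i+j] == pattern[j]."""
    n, m = len(text), len(pattern)
    total = np.zeros(n - m + 1, dtype=int)
    for a in set(pattern):
        chi_t = np.array([1 if c == a else 0 for c in text])     # chi_a(T)
        chi_p = np.array([1 if c == a else 0 for c in pattern])  # chi_a(P)
        total += np.correlate(chi_t, chi_p, mode='valid')        # one convolution per symbol
    return total

# The mismatch-counting example above: text ABABC against pattern ABBBA.
print(count_matches("ABABC", "ABBBA"))   # [3] matches, i.e. 5 - 3 = 2 mismatches
```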

Idea: assign natural numbers to the alphabet symbols, and construct:

T’: replace each number a by the pair a², −a.
P’: replace each number b by the pair b, b².

The convolution of T’ and P’ then gives at every location 2i:

$\sum_{j=0}^{m} h(T'[2i+j], P'[j])$, where $h(a,b) = a^2 b - a b^2 = ab(a-b)$,

a degree-3 multivariate polynomial.

Generalized Swap Matching: a Randomized Algorithm…

Since h(a,a) = 0 and h(a,b) + h(b,a) = ab(a−b) + ba(b−a) = 0,

a generalized-swap match gives the 0 polynomial.

Example (with the coding A = 1, B = 2, C = 3):

Text: ABCBAABBC
Pattern: CCAABABBB

T’: (1,−1), (4,−2), (9,−3), (4,−2), (1,−1), (1,−1), (4,−2), (4,−2), (9,−3)
P’: (3,9), (3,9), (1,1), (1,1), (2,4), (1,1), (2,4), (2,4), (2,4)

Position-wise products: (3,−9), (12,−18), (9,−3), (4,−2), (2,−4), (1,−1), (8,−8), (8,−8), (18,−12), which sum to 0.
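A small sketch reproducing these numbers (the coding A = 1, B = 2, C = 3 is an assumption that matches the displayed pairs): each aligned position contributes h(t,p) = t·p·(t−p), which is exactly the dot product of the pair (t², −t) with (p, p²), and the contributions sum to 0 on a generalized-swap match.

```python
def h(t, p):
    # h(t, p) = t^2 * p + (-t) * p^2 = t * p * (t - p)
    return t * p * (t - p)

code = {'A': 1, 'B': 2, 'C': 3}            # assumed symbol coding
text    = [code[c] for c in "ABCBAABBC"]
pattern = [code[c] for c in "CCAABABBB"]

terms = [h(t, p) for t, p in zip(text, pattern)]
print(terms)        # [-6, -6, 6, 2, -2, 0, 0, 0, 6]
print(sum(terms))   # 0 -> consistent with a generalized-swap match
```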

Problem: It is possible that coincidentally the result will be 0 even if there is no swap match.

Example: for text ace and pattern bdf we get the degree-3 multivariate polynomial

$a^2 b - a b^2 + c^2 d - c d^2 + e^2 f - e f^2$,

which could coincidentally evaluate to 0. We have to make sure that the probability of such a coincidence is quite small.

Generalized Swap Matching: a Randomized Algorithm…

What can we say about the 0’s of the polynomial?

By the Schwartz–Zippel Lemma, the probability of a 0 is at most degree/|domain|.

Conclude:

Theorem: There exists an O(n log m) algorithm that reports all generalized-swap matches and reports false matches with probability at most 1/n.

Generalized Swap Matching: De-randomization?

Can we detect the 0’s and thus de-randomize the algorithm?

Suggestion: take h1,…,hk having no common root.

It won’t work: k would have to be too large!

Generalized Swap Matching: De-randomization?…

Theorem: Ω(m/log m) polynomial functions are required to guarantee that a 0 convolution value implies a 0 polynomial.

Proof: By a linear reduction from word equality.

Given: m-bit words w1, w2 at processors P1, P2.

Construct: T = w1,1,2,…,m and P = 1,2,…,m,w2.

Now, T generalized-swap matches P iff w1 = w2.

Communication complexity: word equality requires exchanging Ω(m) bits.

We get k·log m = Ω(m), so k must be Ω(m/log m).

P1 computes: w1 * (1,2,…,m), a log m-bit result.

P2 computes: (1,2,…,m) * w2.

Interchange Distance Problem

INPUT: text T[0..n], pattern P[0..m]

OUTPUT: The minimum number of interchanges s.t. T[i..i+m] interchange matches P.

Reminder: permutation cycle

The cycles (1 4 3), a 3-cycle, and (2), a 1-cycle, represent the permutation 3241.

Fact: The representation of a permutation as a product of disjoint permutation cycles is unique.

Interchange Distance Problem…

Lemma: Sorting a k-length permutation cycle requires exactly k−1 interchanges.

Proof: By induction on k.

Theorem: The interchange distance of an m-length permutation π is m − c(π), where c(π) is the number of permutation cycles in π.

Result: An O(nm) algorithm to solve the interchange distance problem.
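A minimal sketch of the theorem (my own code, with the permutation given in one-line, 0-indexed form): count the cycles of π and return m − c(π).

```python
def interchange_distance(pi):
    """Minimum number of interchanges needed to sort the permutation pi."""
    m = len(pi)
    seen = [False] * m
    cycles = 0
    for start in range(m):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:        # walk one permutation cycle
                seen[j] = True
                j = pi[j]
    return m - cycles                 # theorem: distance = m - c(pi)

print(interchange_distance([2, 5, 4, 0, 1, 3]))   # 5: a single 6-cycle
print(interchange_distance([0, 1, 2, 3]))         # 0: already sorted
```

Running this on the permutation induced by each text window would give an O(nm) scan along the lines of the result above.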

A connection between sorting by interchanges and generalized-swap matching?

Cases: (1), (2 1), (3 1 2)

Interchange Generation Distance Problem

INPUT: text T[0..n], pattern P[0..m]

OUTPUT: The minimum number of interchange-generations s.t. T[i..i+m] interchange matches P.

Definition: Let S = S1, S2, …, Sk = F, where Sl+1 is derived from Sl via interchange Il. An interchange-generation is a subsequence of I1,…,Ik−1 s.t. the interchanges have no index in common.

Note: Interchanges in a generation may occur in parallel.

Interchange Generation Distance Problem…

Lemma: Let π be a cycle of length k > 2. It is possible to sort π in 2 generations and k−1 interchanges.

Example: (1,2,3,4,5,6,7,8,0)

generation 1: interchanges (1,8), (2,7), (3,6), (4,5) → (8,7,6,5,4,3,2,1,0)
generation 2: interchanges (0,8), (1,7), (2,6), (3,5) → (0,1,2,3,4,5,6,7,8)

Sorting a General Cycle in Two Rounds

Algorithm:

Exchange the contents of locations 2 and n, 3 and n−1, 4 and n−2, …

Then bring every element to its place simultaneously.

EXAMPLE:

Index:       6 3 5 1 4 2
Sorted:      0 1 2 3 4 5
Permutation: 2 5 4 0 1 3
             3 4 5 0 1 2
             0 4 5 3 1 2
             0 1 5 3 4 2
             0 1 2 3 4 5

Why Does it Work?

We want to send 0 to its place, so the last location should hold the number that is where 0 is now.

This number is in location 2.

The same reasoning holds for 1, 2, …

Interchange Generation Distance Problem…

Theorem: Let maxl(π) be the length of the longest permutation cycle in an m-length permutation π. The interchange generation distance of π is exactly:

1. 0, if maxl(π) = 1.

2. 1, if maxl(π) = 2.

3. 2, if maxl(π) > 2.

Note: There is a generalized-swap match iff sorting by interchanges is done in 1 generation.
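A short sketch of the theorem above (again my own code): the interchange generation distance depends only on the length of the longest cycle.

```python
def generation_distance(pi):
    """0, 1 or 2, according to the longest cycle length of the permutation pi."""
    m, seen, longest = len(pi), [False] * len(pi), 0
    for start in range(m):
        if not seen[start]:
            length, j = 0, start
            while not seen[j]:        # measure this cycle's length
                seen[j] = True
                j = pi[j]
                length += 1
            longest = max(longest, length)
    return 0 if longest == 1 else (1 if longest == 2 else 2)

print(generation_distance([0, 1, 2]))                    # 0: only fixed points
print(generation_distance([1, 0, 2]))                    # 1: longest cycle has length 2
print(generation_distance([1, 2, 3, 4, 5, 6, 7, 8, 0]))  # 2: one 9-cycle (the example above)
```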

L1-Distance Problem

INPUT: text T[0..n], pattern P[0..m]

OUTPUT: The L1-distance between T[i..i+m] and P, for all i ∈ [0, n−m].

Where $L_1\text{-distance}(T[i..i+m], P) = \sum_j |j - M_{T^{(i)}}(j)|$ for a pairing $M_{T^{(i)}}$ between P and T[i..i+m] (defined below).

Do we need to try all pairings of same letters?

How do we pair the symbols?

Example:

Text: ABCBAABBC

Pattern: CCAABABBB

L1-Distance Problem

Definition: Let T1[0..n] be a string over Σ and T2[0..n] a permutation of T1.

A pairing between T1 and T2 is a bijection M : {0,…,n} → {0,…,n} such that T1[i] = T2[M(i)] for all i = 0,…,n.

The L1-distance between T1 and T2 under M is

$d^{L_1}_M(T_1,T_2) = \sum_{i=0}^{n} |i - M(i)|$

The L1-distance between T1 and T2 is

$d^{L_1}(T_1,T_2) = \min_M d^{L_1}_M(T_1,T_2)$

L1-Distance Problem

EXAMPLE:

T1 = A B B A B C A C B
         M1 (pairing drawn as arrows on the slide)
T2 = C A C B B B B A A

$d^{L_1}_{M_1}(T_1,T_2) = 8+5+2+4+1+3+5+7+5 = 40$

L1-Distance Problem

EXAMPLE:

T1 = A B B A B C A C B
         M2 (pairing drawn as arrows on the slide)
T2 = C A C B B B B A A

$d^{L_1}_{M_2}(T_1,T_2) = 1+2+2+4+1+5+2+5+2 = 24$

Note that $d^{L_1}_{M_1}(T_1,T_2) = 40$, so the choice of pairing matters.

How do we choose the pairing function?

L1-Distance Problem… Fortunately, we know what an optimal pairing looks like...

Lemma: Let T1, T2 ∈ Σ^m be two strings s.t. T2 is a permutation of T1. Let M be the pairing function that, for all a and k, moves the k-th a in T1 to the location of the k-th a in T2. Then

$d^{L_1}(T_1,T_2) = d^{L_1}_M(T_1,T_2)$

Result: An O(nm) algorithm to solve the L1-distance problem.

Example: An optimal pairing

Text: ABCBAABBC

Pattern: CCAABABBB
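A minimal sketch of the pairing lemma (my own code; it assumes the second string is a permutation of the first): pair the k-th occurrence of each letter in T1 with the k-th occurrence of the same letter in T2 and sum the displacements.

```python
from collections import defaultdict

def l1_rearrangement_distance(t1, t2):
    """Sum of |i - M(i)| under the k-th-occurrence pairing of the lemma."""
    occ = defaultdict(list)
    for j, c in enumerate(t2):            # occurrence lists of T2, left to right
        occ[c].append(j)
    used = defaultdict(int)
    dist = 0
    for i, c in enumerate(t1):            # the k-th c in T1 pairs with the k-th c in T2
        k = used[c]
        used[c] += 1
        dist += abs(i - occ[c][k])
    return dist

print(l1_rearrangement_distance("ABBABCACB", "CACBBBBAA"))  # 24, the M2 total above
print(l1_rearrangement_distance("ABCBAABBC", "CCAABABBB"))  # 20, for this slide's example
```

Applying this to every window of the text gives the O(nm) algorithm of the result above.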

L1-Distance Problem… Proof of pairing lemma:

Note that in M, for a fixed alphabet symbol a, there are no “crossovers”.

We will show that crossovers do not help. In each case below, M is the crossover-free pairing, M’ the pairing with a crossover, and x, y, z are the segment lengths in the slide figure.

Case 1)

Cost in M = x+y+y+z
Cost in M’ = x+y+z+y

L1-Distance Problem… Proof of pairing lemma (continued):

Case 2)

Cost in M = x+z
Cost in M’ = x+y+y+z

L1-Distance Problem… Proof of pairing lemma (continued):

Case 3)

Cost in M = x+z
Cost in M’ = x+y+z+y

In every case the crossover-free pairing M costs no more than M’.

L1-Distance Problem… For a pattern with distinct entries we can do better...

Idea: reuse the computation at position i for position i+1.

Example:

Text: 2312331
Pattern: 123

Dist(1) = |1−3| + |2−1| + |3−2| = 4

Direction relative to pattern:

Left: 2, 3    Match: none    Right: 1

L1-Distance Problem…

Dist(2)=?

Now we move to the next location…

Text: 2312331

Pattern: 123

Direction relative to pattern:

Left (+1 to Dist) Match Right (-1 to Dist)

2,3 1,2

L1-Distance Problem…

Dist(2) = Dist(1) + |Left| − |Right| − leftmost + rightmost
        = 4 + 1 − 1 − 1 + |3−2| = 4   (= |1−2| + |2−3| + |3−1|)

Result: An O(n) algorithm to solve the L1-distance problem for a pattern with distinct entries.

Next location distance computation…

Text: 2312331

Pattern: 123
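A brute-force check of the window distances above (my own illustration, not the O(n) sliding algorithm): for a pattern with distinct entries each symbol has a unique pattern position, so the distance of a window that is a rearrangement of the pattern is just the sum of displacements; other windows are reported as None.

```python
def window_l1_distances(text, pattern):
    m = len(pattern)
    where = {c: j for j, c in enumerate(pattern)}    # distinct entries: unique position
    out = []
    for i in range(len(text) - m + 1):
        window = text[i:i + m]
        if sorted(window) != sorted(pattern):        # not a rearrangement of the pattern
            out.append(None)
            continue
        out.append(sum(abs(j - where[c]) for j, c in enumerate(window)))
    return out

print(window_l1_distances("2312331", "123"))   # [4, 4, 0, None, None]: Dist(1)=4, Dist(2)=4
```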

L2-Distance Problem

Definition: Let T1[0..n] be a string over Σ and T2[0..n] a permutation of T1.

A pairing between T1 and T2 is a bijection M : {0,…,n} → {0,…,n} such that T1[i] = T2[M(i)] for all i = 0,…,n.

The L2-distance between T1 and T2 under M is

$d^{L_2}_M(T_1,T_2) = \sum_{i=0}^{n} (i - M(i))^2$

The L2-distance between T1 and T2 is

$d^{L_2}(T_1,T_2) = \min_M d^{L_2}_M(T_1,T_2)$

L2-Distance Problem
INPUT: text T[0..n], pattern P[0..m]

OUTPUT: The L2-distance between T[i..i+m] and P, for all i ∈ [0, n−m].

Where $L_2\text{-distance}(T^{(i)}, P) = \sum_j |j - M_{T^{(i)}}(j)|^2$.

Do we need to try all pairings of same letters? No, the pairing lemma works here too.

Result: An O(nm) algorithm to solve the L2-distance problem.

We can do better…

L2-Distance Problem…

Idea: Consider T and P of the same length.

For each letter a, let lista(P) and lista(T) be the lists of locations in which a occurs in P and in T.

The L2-distance between T and P is:

$d_{L_2}(T,P) = \sum_{a \in P} \sum_{j} (list_a(P)[j] - list_a(T)[j])^2$

Schematically, fix a letter a:

list(T) = i1, i2, …, ik
list(P) = j1, j2, …, jk

$d_{L_2}(T,P) = \sum_{l=1}^{k} (i_l - j_l)^2$

L2-Distance Problem…

$d_{L_2}(T,P) = \sum_{l=1}^{k} (i_l - j_l)^2 = \sum_{l=1}^{k} (i_l^2 - 2 i_l j_l + j_l^2) = \sum_{l=1}^{k} i_l^2 \;-\; 2\sum_{l=1}^{k} i_l j_l \;+\; \sum_{l=1}^{k} j_l^2$

The first and last sums are easy to compute; the middle sum is a convolution.

L2-Distance Problem… large text

Idea: For each letter a, consider lista(P) and lista(T(i)), the lists of locations in which a occurs in P and in T(i).

The L2-distance between T(i) and P is:

$d_{L_2}(T^{(i)},P) = \sum_{a \in P} \sum_{j} (list_a(P)[j] - list_a(T^{(i)})[j])^2$

Now we want to use lista(T) instead of lista(T(i)), but then the text location indices are no longer fixed…

Schematically, fix a letter a and a text location x:

list(T) = i1, i2, …, ik, …, ir
list(P) = j1, j2, …, jk

For location x, the text indices need to be i2−x, i3−x, …

Convolution formula:

$d_{L_2}(T^{(x)},P) = \sum_{l=1}^{k} ((i_l - x) - j_l)^2$

L2-Distance Problem… large text

$d_{L_2}(T^{(x)},P) = \sum_{l=1}^{k} ((i_l - x) - j_l)^2 = \sum_{l=1}^{k} (i_l - x)^2 - 2\sum_{l=1}^{k} (i_l - x) j_l + \sum_{l=1}^{k} j_l^2$

$= \sum_{l=1}^{k} i_l^2 - 2x\sum_{l=1}^{k} i_l + k x^2 - 2\sum_{l=1}^{k} i_l j_l + 2x\sum_{l=1}^{k} j_l + \sum_{l=1}^{k} j_l^2$

All terms except $\sum_{l=1}^{k} i_l j_l$ are easy to compute; the term $\sum_{l=1}^{k} i_l j_l$ is a convolution.

L2-Distance Problem… large alphabet

Problem: What happens when the number of alphabet symbols is unbounded? How many convolutions?

Let Σ = {a1,…,ak} and let ni be the number of times ai occurs in T, i = 1,…,k. Clearly, n1 + n2 + … + nk = n.

Time: for each symbol ai: O(ni log m).

Total time: $\sum_{i=1}^{k} O(n_i \log m) = O\big((\sum_{i=1}^{k} n_i) \log m\big) = O(n \log m)$

Result: An O(n log m) algorithm to solve the L2-distance problem.
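A small sketch of the only non-trivial ingredient (my own illustration, with made-up occurrence lists): for one letter, all the cross terms $\sum_l i_{l+s}\, j_l$ over every shift s come out of a single cross-correlation of the two occurrence-position lists; the remaining sums are prefix sums. np.correlate is used for clarity; doing the correlation with an FFT gives the O(n_i log m) cost per symbol quoted above.

```python
import numpy as np

list_T = np.array([2, 5, 9, 12, 17, 23])   # hypothetical positions of letter a in the text
list_P = np.array([0, 3, 4])               # hypothetical positions of letter a in the pattern

# cross[s] = sum_l list_T[l+s] * list_P[l], for every admissible shift s
cross = np.correlate(list_T, list_P, mode='valid')

# sanity check against the direct sums
k = len(list_P)
direct = [sum(int(list_T[l + s]) * int(list_P[l]) for l in range(k))
          for s in range(len(list_T) - k + 1)]
print(cross.tolist())   # [51, 75, 104, 143]
print(direct)           # identical values
```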

Open Problems

1. Interchange distance faster than O(nm)?

2. Asynchronous communication – different errors in address bits.

3. Different error measures than interchange/block interchange/transposition/reversals for errors arising from address bit errors.

Note: The techniques employed in asynchronous pattern matching have so far proven to be new and different from those of traditional pattern matching.

The End