
Page 1: Introduction to Streaming Algorithms

Introduction to Streaming Algorithms

Jeff M. Phillips

September 21, 2011

Page 2: Introduction to Streaming Algorithms

Network Router

An Internet router:

- data per day: at least 1 Terabyte
- each packet takes 8 nanoseconds to pass through the router
- a few million packets per second

What statistics can we keep on the data? We want to detect anomalies for security.


Page 3: Introduction to Streaming Algorithms

Telephone Switch

Cell phones connect through switches.

- each message is 1000 Bytes
- 500 million calls / day
- 1 Terabyte per month

Search for characteristics of dropped calls?


Page 4: Introduction to Streaming Algorithms

Ad Auction

Serving ads on the web: Google, Yahoo!, Microsoft

- Yahoo.com viewed 77 trillion times
- 2 million views / hour
- Each page serves ads; which ones?

How to update the ad delivery model?

[Diagram: keyword searches and page views reach the server; ads are served, and ad clicks feed back into the delivery model]

Page 5: Introduction to Streaming Algorithms

Flight Logs on Tape

All airplane logs over Washington, DC

- About 500-1000 flights per day.
- 50 years, totaling about 9 million flights
- Each flight has a trajectory, passenger count, and control dialog

Stored on tape. Can make 1 pass! What statistics can be gathered?


Page 6: Introduction to Streaming Algorithms

Streaming Model

[Diagram: a stream of words, each ∈ [n], of total length m, passing by a CPU with a small memory]

CPU makes "one pass" on the data.

- Ordered set A = ⟨a1, a2, . . . , am⟩
- Each ai ∈ [n], of size log n bits
- Compute f(A), or maintain f(Ai) for Ai = ⟨a1, a2, . . . , ai⟩.
- Space restricted to S = O(poly(log m, log n)).
- Updates take O(poly(S)) time for each ai.
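The model above can be summarized in a short sketch (Python; the helper names `process_stream`, `init`, `update`, `query` are illustrative, not from the slides): a small summary is updated once per item, and f is read off the summary.

```python
def process_stream(stream, init, update, query):
    """One-pass streaming skeleton: the summary built by `init` must
    stay small (O(poly(log m, log n))), `update` folds in one item a_i,
    and `query` extracts f(A_i) from the summary at any point."""
    summary = init()
    for a in stream:
        summary = update(summary, a)
    return query(summary)

# Trivial instance: count the items using O(log m) bits of state.
m = process_stream(range(1000),
                   init=lambda: 0,
                   update=lambda s, a: s + 1,
                   query=lambda s: s)
# m == 1000
```

Every algorithm in this lecture fits this shape; only the summary and the update rule change.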


Page 8: Introduction to Streaming Algorithms

Streaming Model

[Diagram: a stream of words, each ∈ [n], of total length m, passing by a CPU with a small memory]

Space:

- Ideally S = O(log m + log n)
- log n = the size of 1 word
- log m = the bits needed to store the number of words

Updates:

- O(S^2) or O(S^3) can be too much!
- Ideally updates run in O(S)


Page 10: Introduction to Streaming Algorithms

Easy Example: Average

[Diagram: a stream of words, each ∈ [n], of total length m, passing by a CPU with a small memory]

- Each ai is a number in [n]
- f(Ai) = average({a1, . . . , ai})
- Maintain: i and s = ∑_{j=1}^{i} a_j.
- f(Ai) = s/i
- Problem? s is bigger than a word!
- But s needs at most ⌈log s / log n⌉ words (big-int data structure)
- Usually 2 or 3 words is fine
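This slide transcribes directly into code (a Python sketch; Python integers already act as the big-int structure mentioned above, growing a word at a time as s grows):

```python
class RunningAverage:
    """Exact streaming average: store only i and s = sum of items seen.

    s may exceed one machine word, but needs at most about
    log(s)/log(n) words; Python ints grow as needed."""

    def __init__(self):
        self.i = 0   # number of items seen
        self.s = 0   # running sum

    def update(self, a):
        self.i += 1
        self.s += a

    def query(self):
        return self.s / self.i   # f(A_i) = s / i

avg = RunningAverage()
for a in [2, 4, 6, 8]:
    avg.update(a)
# avg.query() == 5.0
```

The state is two numbers regardless of how long the stream runs, which is exactly the point of the model.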


Page 14: Introduction to Streaming Algorithms

Trick 1: Approximation

Return f̂(A) instead of f(A), where

|f̂(A) − f(A)| ≤ ε · f(A).

f̂(A) is a (1 + ε)-approximation of f(A).

[Diagram: the binary representation of s, with the top k = log(1/ε) bits kept and the lower-order bits zeroed]

Example: Average. Maintain:

- (a) the count: i
- (b) the top k = log(1/ε) + 1 bits of s: call them ŝ
- (c) the number of bits in s

Let f̂(A) = ŝ/i. The first bit of s accounts for ≥ (1/2)f(A), the second bit for ≤ (1/4)f(A), and the j-th bit for ≤ (1/2^j)f(A), so the discarded bits cost at most

∑_{j=k+1}^{∞} (1/2^j) f(A) < (1/2^k) f(A) < ε · f(A)

Where did I cheat?
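The bit-truncation step can be sketched as follows (Python; `truncate_top_bits` is a made-up name, and k is taken as ⌈log2(1/ε)⌉ + 1 so that the dropped low-order bits amount to less than ε·s):

```python
import math

def truncate_top_bits(s, eps):
    """Keep only the top k = ceil(log2(1/eps)) + 1 bits of s (plus its
    bit length b).  Since s >= 2^(b-1) and the dropped bits sum to
    less than 2^(b-k), the relative error is < 2^(1-k) <= eps."""
    k = math.ceil(math.log2(1 / eps)) + 1
    b = s.bit_length()
    if b <= k:
        return s                      # s already fits in k bits
    shift = b - k
    return (s >> shift) << shift      # zero out the low-order bits

s = 1_234_567
s_hat = truncate_top_bits(s, eps=0.01)
assert abs(s - s_hat) < 0.01 * s      # holds for any s by the bound above
```

Dividing ŝ by the exact count i then gives the (1 + ε)-approximate average from the slide.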


Page 18: Introduction to Streaming Algorithms

Trick 2: Randomization

Return f̂(A) instead of f(A), where

Pr[ |f̂(A) − f(A)| > ε · f(A) ] ≤ δ.

f̂(A) is a (1 + ε, δ)-approximation of f(A).

Can fix the previous cheat using randomization and the Morris Counter (Morris '78, Flajolet '85).
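A sketch of the Morris counter in Python (assuming its standard formulation: store only x ≈ log2 of the count, so O(log log m) bits, and return 2^x − 1, which is an unbiased estimate; a single counter has large variance, so independent copies are averaged in practice):

```python
import random

def morris_counter(stream_length, rng):
    """Morris approximate counter: on each item, increment x with
    probability 2^(-x).  E[2^x - 1] equals the true count."""
    x = 0
    for _ in range(stream_length):
        if rng.random() < 2.0 ** (-x):
            x += 1
    return 2 ** x - 1

rng = random.Random(7)
# A single counter is noisy; the mean of many copies concentrates.
estimates = [morris_counter(10_000, rng) for _ in range(200)]
mean_est = sum(estimates) / len(estimates)
# mean_est approximates the true count 10000
```

Averaging copies reduces ε; taking a median of such averages then drives the failure probability δ down, giving the (1 + ε, δ)-approximation defined above.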


Page 20: Introduction to Streaming Algorithms

Decreasing Probability of Failure

Investment Company (IC) sends out 100 × 2^k emails:

- 2^{k−1} say Stock A will go up in the next week
- 2^{k−1} say Stock A will go down in the next week

After 1 week, 1/2 of the email receivers got good advice.

The next week, IC sends 2^{k−1} letters, only to those who got good advice.

- 2^{k−2} say Stock B will go up in the next week.
- 2^{k−2} say Stock B will go down in the next week.

After 2 weeks, 1/4 of all receivers have gotten good advice twice.

After k weeks, 100 receivers have gotten good advice every week.

- IC now asks each of them for money to receive future stock tips.
- Don't actually do this!!!

If you are on IC's original email list, with what probability will you receive k good stock tips?

(1/2)^k
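A quick simulation of the scheme (Python sketch; from a recipient's point of view, each week's tip is right with probability 1/2, independently of the other weeks):

```python
import random

def fraction_with_k_good_tips(n_recipients, k, rng):
    """Each recipient's k tips are k independent fair coin flips
    against the market, so P(all k tips good) = (1/2)^k."""
    lucky = sum(all(rng.random() < 0.5 for _ in range(k))
                for _ in range(n_recipients))
    return lucky / n_recipients

rng = random.Random(1)
frac = fraction_with_k_good_tips(100_000, k=5, rng=rng)
# frac is close to (1/2)^5 = 0.03125
```

The lesson for randomized algorithms: with enough trials, someone looks good purely by chance, so a small per-run failure probability must be driven down deliberately (e.g. by taking a median over independent runs).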


Page 26: Introduction to Streaming Algorithms

Markov Inequality

Let X be a random variable (RV).
Let a > 0 be a parameter. Then

Pr[ |X| ≥ a ] ≤ E[|X|] / a.

Page 27: Introduction to Streaming Algorithms

Chebyshev’s Inequality

Let Y be a random variable.
Let b > 0 be a parameter. Then

Pr[ |Y − E[Y]| ≥ b ] ≤ Var[Y] / b^2.
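Both tail bounds are easy to sanity-check empirically (Python sketch; the example RV is an assumption for illustration: X = number of heads in 10 fair coin flips, with E[X] = 5 and Var[X] = 2.5):

```python
import random

rng = random.Random(0)
N = 100_000
# X = number of heads in 10 fair coin flips.
samples = [sum(rng.random() < 0.5 for _ in range(10)) for _ in range(N)]

mean = sum(samples) / N                          # ~5
var = sum((x - mean) ** 2 for x in samples) / N  # ~2.5

a, b = 8, 3
p_tail = sum(x >= a for x in samples) / N              # Pr[|X| >= a]
p_dev = sum(abs(x - mean) >= b for x in samples) / N   # Pr[|X - E[X]| >= b]

assert p_tail <= mean / a       # Markov:    bound ~0.625, actual ~0.055
assert p_dev <= var / b ** 2    # Chebyshev: bound ~0.278, actual ~0.11
```

Both bounds hold with plenty of slack here; Chebyshev is the tighter of the two because it uses the variance, not just the mean.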

Page 28: Introduction to Streaming Algorithms

Chernoff Inequality

Let {X1, X2, . . . , Xr} be independent random variables.
Let ∆i = max{Xi} − min{Xi}.
Let M = ∑_{i=1}^{r} Xi.
Let α > 0 be a parameter. Then

Pr[ |M − ∑_{i=1}^{r} E[Xi]| ≥ α ] ≤ 2 exp( −2α^2 / ∑_i ∆i^2 )

Often ∆ = max_i ∆i and E[Xi] = 0; then

Pr[ |M| ≥ α ] ≤ 2 exp( −2α^2 / (r ∆^2) )
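The special case at the bottom can be checked numerically (Python sketch with assumed parameters: each Xi uniform on {−1, +1}, so E[Xi] = 0 and ∆ = 2):

```python
import math
import random

rng = random.Random(42)
r, alpha, trials = 400, 60, 5_000

# Chernoff bound: Pr[|M| >= alpha] <= 2 exp(-2 alpha^2 / (r Delta^2))
Delta = 2
bound = 2 * math.exp(-2 * alpha ** 2 / (r * Delta ** 2))   # ~0.022

hits = sum(abs(sum(rng.choice((-1, 1)) for _ in range(r))) >= alpha
           for _ in range(trials))
empirical = hits / trials   # M has std sqrt(r) = 20, so this is small
assert empirical <= bound
```

Note how much stronger this is than Chebyshev: the tail probability decays exponentially in α^2 rather than polynomially, which is what makes the median trick over independent runs so effective.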


Page 30: Introduction to Streaming Algorithms

Attribution

These slides borrow from material by Muthu Muthukrishnan:
http://www.cs.mcgill.ca/~denis/notes09.pdf

and Amit Chakrabarti:
http://www.cs.dartmouth.edu/~ac/Teach/CS85-Fall09/