Stochastic Models for Communication Networks — Jean Walrand, University of California, Berkeley


Page 1: Stochastic Models for Communication Networks Jean Walrand University of California, Berkeley

Stochastic Models for Communication Networks

Jean Walrand

University of California, Berkeley

Page 2:

Contents (tentative)

Big Picture: Store-and-Forward; Packets, Transactions, Users

Queuing: Little’s Law and Applications; Stability of Markov Chains; Scheduling

Markov Decision Problems: Discrete Time; DT: LP Formulation; Continuous Time

Transaction-Level Models: Models of TCP; Stability

Random Networks: Connectivity; Results: Percolation

Page 3:

Big Picture: Store-and-Forward

Packets: actual packets, or header descriptors (example: linked lists).

[Figure: line-card datapath — Rx → Serial/Parallel → IN-FIFO → Scheduler → OUT-FIFO → Parallel/Serial → Tx.]

Page 4:

Big Picture: Packets, Transactions, Users

Packets: delay, backlog.
Transactions: bit rate, duration.
Users: rate over time, alternating active and idle periods.

Users get/send files as a Poisson process with a given rate; the transition rates may depend on the perceived QoS.

Page 5:

Queuing: Little’s Law; Stability of Markov Chains

Page 6:

Little’s Law. Roughly:

Average number in system = average arrival rate × average time in system: L = λW.

Example: 1000 packet arrivals per second; each packet spends 0.1 second in the system. Then there are 100 packets in the system, on average.

Notes: The system does not have to be FIFO nor work-conserving; the law applies to any subset of customers; it holds under weak assumptions (stability).
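The sample-path version of the law can be checked numerically. A minimal sketch (not from the slides; the M/M/1 queue with λ = 0.8 and μ = 1 is an arbitrary choice): simulate the queue, then compare the time-average number in system against the measured arrival rate times the average sojourn time.

```python
import random

def mm1_littles_law(lam=0.8, mu=1.0, n_jobs=50_000, seed=1):
    """Simulate an M/M/1 FIFO queue; return (L, lambda_hat * W):
    time-average number in system vs. arrival rate x mean sojourn time."""
    random.seed(seed)
    t_arrival = 0.0
    t_free = 0.0            # time at which the server next becomes free
    total_sojourn = 0.0
    events = []             # (time, +1) arrivals, (time, -1) departures
    for _ in range(n_jobs):
        t_arrival += random.expovariate(lam)
        start = max(t_arrival, t_free)
        t_free = start + random.expovariate(mu)
        total_sojourn += t_free - t_arrival
        events.append((t_arrival, +1))
        events.append((t_free, -1))
    events.sort()
    area, n, t_prev = 0.0, 0, events[0][0]
    for t, d in events:     # integrate the number-in-system process
        area += n * (t - t_prev)
        n += d
        t_prev = t
    horizon = events[-1][0] - events[0][0]
    L = area / horizon                  # average number in system
    lam_hat = n_jobs / horizon          # measured arrival rate
    W = total_sojourn / n_jobs          # average time in system
    return L, lam_hat * W
```

The two quantities agree up to floating-point error, which is exactly the area-under-the-curve argument used for the graphical proof later in the deck.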

Page 7:

Little’s Law… Extension:

Average income per unit time = average arrival rate × average cost per customer: H = λG.

Example: 1000 customer arrivals per second; each customer spends $5.00 on average. The system earns $5,000.00 per second, on average.

Page 8:

Little’s Law… Illustration 1: Utilization

Packets arrive at rate λ (per second); the average transmission time is 1/μ (seconds). The transmitter is busy a fraction λ/μ of the time. Indeed:

System = transmitter. Number in system = 1 if busy, 0 otherwise. W = average time in system = 1/μ. L = average number in system = fraction of time busy. Hence, by L = λW, the fraction of time busy is λ/μ =: ρ, the link utilization.

Page 9:

Little’s Law… Illustration 2: Delay in the M/G/1 queue

Packets arrive at rate λ (per second). Transmission time = S; E(S) = 1/μ; var(S) = σ²; ρ = λ/μ.

Indeed: H = E(queuing delay) = E(Q) [see *], where Q = queuing delay. G = E(SQ + S²/2) = E(S)E(Q) + E(S²)/2. Thus, by H = λG,

E(Q) = λ{E(S)E(Q) + E(S²)/2}.

We solve for E(Q), then add E(S) to get the expected delay.

[Figure: a customer’s instantaneous cost is its residual workload, from arrival time through start of service (waiting Q) to departure time (residual service time S).]
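Solving E(Q) = λ{E(S)E(Q) + E(S²)/2} for E(Q) yields the Pollaczek–Khinchine formula E(Q) = λE(S²)/(2(1 − ρ)). A small illustrative sketch (not from the slides):

```python
def mg1_delay(lam, ES, ES2):
    """Expected total delay (queuing + service) in an M/G/1 queue,
    from E(Q) = lam * (E(S) * E(Q) + E(S^2) / 2) solved for E(Q):
    Pollaczek-Khinchine gives E(Q) = lam * E(S^2) / (2 * (1 - rho))."""
    rho = lam * ES
    if rho >= 1:
        raise ValueError("unstable queue: rho >= 1")
    EQ = lam * ES2 / (2 * (1 - rho))   # expected queuing delay
    return EQ + ES                     # add the service time itself
```

Sanity check: for M/M/1 with λ = 0.5 and μ = 1 (so E(S) = 1, E(S²) = 2), this returns 2, matching the classical delay 1/(μ − λ).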

Page 10:

Little’s Law… Illustration 2: Delay in M/G/1 queue…

* We used the fact that a typical customer sees the typical state of the system. This is true because the arrivals are Poisson. Indeed,

P[queue is in state x at t | arrival in (t, t + ε)] = P(queue is in state x at time t),

because the state at time t is a function of the past of the Poisson process, and the process has independent increments. Note: this is not true if the arrivals are not Poisson.

[Figure: work-to-be-done vs. time for non-Poisson arrivals — depending on when it arrives, a customer may see less or more than the average congestion.]

Page 11:

Little’s Law… Illustration 2: Delay in M/G/1 queue…

M/M/1: σ² = 1/μ²
M/D/1: σ² = 0
σ² ≫ 1 ⇒ the average delay is very large.
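The comparison follows from Pollaczek–Khinchine, since E(S²) = var(S) + 1/μ². A quick numeric check (λ = 0.9 and μ = 1 are arbitrary illustrative values, not from the slides):

```python
def pk_wait(lam, mu, var_s):
    """Pollaczek-Khinchine queuing delay for M/G/1 with E(S) = 1/mu and
    var(S) = var_s, using E(S^2) = var_s + 1/mu**2."""
    rho = lam / mu
    assert rho < 1, "unstable queue"
    es2 = var_s + 1.0 / mu ** 2
    return lam * es2 / (2 * (1 - rho))

lam, mu = 0.9, 1.0
w_mm1 = pk_wait(lam, mu, 1.0 / mu ** 2)   # M/M/1: var(S) = 1/mu^2
w_md1 = pk_wait(lam, mu, 0.0)             # M/D/1: var(S) = 0
```

The M/D/1 wait is exactly half the M/M/1 wait, since its E(S²) term is half as large.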

Page 12:

Little’s Law… Illustration 3: Delay in M/G/1 queue with vacations

Model: the server goes on vacation every time the queue empties. The vacations are i.i.d. and distributed like V. Then

E(delay) = T0 + E(V²)/(2E(V)),

where T0 = average delay without vacations.

Derivation: Assume the server pays at rate U when its residual vacation is U, and each customer pays as in the M/G/1 queue. Then the total expected pay per unit time equals the average waiting time until service:

λE(QS + S²/2) + νE(V²/2) = E(Q),

where ν = rate of vacations. To find ν, note that the server is idle for νE(V)t = (1 − ρ)t seconds out of t ≫ 1 seconds. Hence ν = (1 − ρ)/E(V). Substituting gives the result.
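The decomposition above (no-vacation wait plus the mean residual vacation) can be sketched directly; the test values below are made-up illustrative numbers:

```python
def mg1_vacation_wait(lam, ES, ES2, EV, EV2):
    """Expected queuing delay in an M/G/1 queue with i.i.d. vacations V:
    the no-vacation wait T0 plus the mean residual vacation E(V^2)/(2 E(V))."""
    rho = lam * ES
    assert rho < 1, "unstable queue"
    w0 = lam * ES2 / (2 * (1 - rho))   # T0: wait without vacations
    return w0 + EV2 / (2 * EV)
```

With λ = 0.5, E(S) = 1, E(S²) = 2 and deterministic vacations V = 1 (so E(V) = E(V²) = 1), the wait is 1 + 0.5 = 1.5.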

Page 13:

Little’s Law…

[Figure: bits flow from S through points 1 and 2 to D. Plot of the latest bit seen by time t at point 1 and at point 2; the horizontal gap at height n is the delay of bit n.]

Page 14:

Little’s Law…

[Figure: N arrivals over an interval of length T; X(t) = number in system; T(n) = delay of customer n.]

S = area under X(t) = T(1) + … + T(N) = ∫ X(t) dt.

Hence

(1/T) ∫ X(t) dt = (N/T) · (S/N),

i.e., average occupancy = (average arrival rate) × (average delay).

Page 15:

Stability of Markov Chains

Markov Chain (DT): Assume irreducible. Then all states are null recurrent, positive recurrent (PR), or transient together (certainly PR if finite); there are 0 or 1 invariant distributions π, solving πP = π with Σi π(i) = 1: one if PR, none otherwise. Moreover, for every state i, one has almost surely

(1/n) Σm≤n 1{X(m) = i} → π(i).

Finally, if PR and aperiodic, then P(X(n) = i) → π(i).
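For a finite chain, the convergence P(X(n) = i) → π(i) suggests computing π by power iteration. A small sketch (the 2-state chain is a made-up example, not from the slides):

```python
def invariant_distribution(P, iters=5000):
    """Invariant distribution of a finite irreducible aperiodic chain
    by power iteration: repeatedly apply pi <- pi P."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

P = [[0.5, 0.5],
     [0.2, 0.8]]
pi = invariant_distribution(P)
# balance pi[0] * 0.5 = pi[1] * 0.2 gives pi = (2/7, 5/7)
```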

Page 16:

Stability of Markov Chains…

Pakes’ Lemma: Assume the MC is irreducible and aperiodic on {0, 1, 2, …}. Define the one-step drift

d(i) = E[X(n+1) − X(n) | X(n) = i].

Assume there is some i0 and some a > 0 such that d(i) ≤ −a for all i > i0 (with d(i) finite for all i). Then the MC is PR.

Proof idea: The MC cannot stay away from {0, 1, …, i0}; if it does for k steps, E(Xn) decreases by ka.
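A minimal illustration (the reflected walk and p = 0.3 are made-up, not from the slides): a birth-death chain whose drift is 2p − 1 < 0 away from 0 satisfies the drift condition, and simulation confirms it keeps returning to 0.

```python
import random

def simulate(p=0.3, steps=100_000, seed=0):
    """Reflected walk on {0, 1, ...}: up 1 w.p. p, down 1 (floored at 0)
    w.p. 1 - p.  For i >= 1 the drift is 2p - 1 = -0.4 < 0, so Pakes'
    lemma gives positive recurrence."""
    random.seed(seed)
    x, visits0, total = 0, 0, 0
    for _ in range(steps):
        x = x + 1 if random.random() < p else max(x - 1, 0)
        visits0 += (x == 0)
        total += x
    return visits0 / steps, total / steps

frac0, mean_x = simulate()
# stationary distribution: pi(0) = 1 - p/(1-p) = 4/7 for p = 0.3
```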

Page 17:

Stability of Markov Chains…

Pakes’ Lemma … simple variation: the same conclusion holds under a slightly relaxed drift condition.

Pakes’ Lemma … other variation: the same conclusion holds if there is some finite m such that the m-step drift E[X(n+m) − X(n) | X(n) = i] has the properties indicated above.

Page 18:

Stability of Markov Chains…

Application 1: (Inspired by TCP)

First note that X(n) is irreducible and aperiodic. Also, the original form of Pakes’ Lemma applies.

Page 19:

Stability of Markov Chains…

Application 2: (Inspired by switches)

Virtual Output Buffer switch: queues (1), (2) at input 1 and (3), (4) at input 2; queues (1), (3) feed output 1 and (2), (4) feed output 2.

Think of Bernoulli arrivals and cells of size 1. Note λ(1) + λ(2) < 1 and λ(3) + λ(4) < 1. At each time, the switch serves one matching: cross (X) or parallel (=). Stability also requires λ(1) + λ(3) < 1 and λ(2) + λ(4) < 1. Maximum throughput scheduling: a schedule for which this condition suffices for stability.

Page 20:

Stability of Markov Chains…

Application 2…

Queue lengths: A = 14, B = 11, C = 15, D = 10.

Maximum Weighted Matching: B + C > A + D ⇒ serve (B, C).
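For the 2×2 switch there are only two perfect matchings, so maximum weighted matching reduces to one comparison. A sketch (the q[input][output] layout is an assumed convention, not from the slides):

```python
def max_weight_matching_2x2(q):
    """2x2 VOQ switch: only two perfect matchings exist, 'parallel'
    {(0,0), (1,1)} and 'cross' {(0,1), (1,0)}; pick the heavier one.
    q[i][j] = queue of cells at input i destined to output j."""
    parallel = q[0][0] + q[1][1]   # A + D
    cross = q[0][1] + q[1][0]      # B + C
    return [(0, 0), (1, 1)] if parallel >= cross else [(0, 1), (1, 0)]

# Slide example A=14, B=11, C=15, D=10: B + C = 26 > A + D = 24
match = max_weight_matching_2x2([[14, 11], [15, 10]])   # serve (B, C)
```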

Page 21:

Stability of Markov Chains…

Application 2 (Tassiulas; McKeown et al.)

Maximum Weighted Matching ⇒ maximum throughput.

Proof: V(x) = ||x||² is a Lyapunov function. That is, E[V(X(n+1)) − V(X(n)) | X(n) = x] is finite, and < −ε < 0 for x outside a finite set.

Page 22:

Stability of Markov Chains…

Application 3

Queue lengths: A = 14, B = 8, C = 15, D = 10.

Iterated Longest Queue: serve the longest queue, then the next longest that is compatible, etc. Here: C, then B.
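The greedy rule can be sketched as follows (the q[input][output] layout is an assumed convention; a sketch, not the lecture's code):

```python
def iterated_longest_queue(q):
    """Greedy LQF matching: repeatedly pick the longest remaining queue
    whose input and output ports are both still free.
    q[i][j] = queue of cells at input i destined to output j."""
    cells = sorted(((q[i][j], i, j)
                    for i in range(len(q)) for j in range(len(q[0]))
                    if q[i][j] > 0), reverse=True)
    used_in, used_out, match = set(), set(), []
    for w, i, j in cells:
        if i not in used_in and j not in used_out:
            match.append((i, j))
            used_in.add(i)
            used_out.add(j)
    return match

# Slide example A=14, B=8, C=15, D=10: longest is C at (1,0), then B at (0,1)
match = iterated_longest_queue([[14, 8], [15, 10]])
```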

Page 23:

Stability of Markov Chains…

Application 3 (A. Dimakis)

Iterated Longest Queue ⇒ maximum throughput.

Proof: V(x) = maxi(xi) is a Lyapunov function.

Page 24:

Stability of Markov Chains…

Application 4: Wireless

[Figure: three wireless links L1, L2, L3 within an interference radius, and the corresponding conflict graph on {L1, L2, L3}.]

Page 25:

Stability of Markov Chains…

Classes: k ∈ K. Resources: j ∈ J.

[Figure: conflict graph on links 1, 2, 3 with arrival rates λ1, λ2, λ3.]

Possible matches: {1}, {2, 3}. Maximum Weight Matching: {2, 3}. Longest Queue First (LQF): {1}.

Page 26:

Stability of Markov Chains…

Iterated Longest Queue ⇒ maximum throughput [Dimakis].

As in the previous example: consider fluid limits and use the longest queue size as a Lyapunov function. This carries over to conflict-graph topologies that strictly include trees. Example:

[Figure: conflict graph on links 1, 2, 3 with arrival rates λ1, λ2, λ3.]

Page 27:

Stability of Markov Chains…

[Figure: 6-ring conflict graph on links 1, …, 6.]

Does Iterated Longest Queue achieve maximum throughput here?
The nominal stability condition is λi + λi+1 < 1.
One fluid limit is unstable for λi > 3/7.*
Yet the stochastic system is stable for almost all feasible λ's (A. Dimakis).

* The big match is picked 2/3 of the time in the fluid limit when the queues are equal, which happens 3 steps after a small match and 2 steps after a big match.

Page 28:

Stability of Markov Chains…

Metastability of ILQ in the 8-ring (Antonios Dimakis)

[Figure: 8-ring conflict graph on links 1, …, 8.]

Possible matches: {1, 3, 5, 7}, {2, 4, 6, 8}, {1, 4, 6}, {2, 5, 7}, …

Two distinct phases: one stable, the other unstable.

Page 29:

Scheduling

Key idea: the order of processing matters.

Example: two jobs 1 and 2 with processing times X1 = 1 and X2 = 10.
Order 1, 2: job 1 ends at T1 = 1 and job 2 at T2 = 11, so the sum of waiting times is T1 + T2 = 12.
Order 2, 1: T2 = 10 and T1 = 11, so T1 + T2 = 21.

For a set of jobs, shortest job first (SPT) minimizes the sum of waiting times; for random processing times, shortest expected processing time first (SEPT). The interesting case is when preemption is allowed and new jobs arrive; with preemption, the rule becomes shortest remaining processing time (SRPT). We explore preemption next.
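The two orders and the optimality of shortest-first can be checked by brute force. A small sketch (the 4-job instance is made up for illustration):

```python
from itertools import permutations

def sum_completion_times(order, x):
    """Sum of completion times when jobs run nonpreemptively in 'order'."""
    t = total = 0
    for j in order:
        t += x[j]
        total += t
    return total

assert sum_completion_times([0, 1], [1, 10]) == 12   # 1 + 11, as on the slide
assert sum_completion_times([1, 0], [1, 10]) == 21   # 10 + 11

# Shortest-first is optimal over all orders (brute-force check):
x = [7, 2, 9, 4]
best = min(permutations(range(len(x))),
           key=lambda o: sum_completion_times(o, x))
spt = tuple(sorted(range(len(x)), key=lambda j: x[j]))
```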

Page 30:

Scheduling … Example:

Two jobs 1 and 2; processing times X1 = 1 w.p. 0.9 and 11 w.p. 0.1, and X2 = 2.

Order 1, 2: E(T1 + T2) = E(X1 + X1 + X2) = 6.
Order 2, 1: E(T2 + T1) = E(X2 + X2 + X1) = 6.
Serve 1, switch to 2 if X1 > 1, then complete 1: E(T1 + T2) = 5.2.
[T1 + T2 = 1 + 3 w.p. 0.9 and 3 + 13 w.p. 0.1.]
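The three expectations can be verified directly (using the slide's data):

```python
# Slide data: X1 = 1 w.p. 0.9 and 11 w.p. 0.1 (so E(X1) = 2); X2 = 2.
p, x1s, x1l, x2 = 0.9, 1, 11, 2

# Order 1 then 2: T1 + T2 = X1 + (X1 + X2) = 2*X1 + X2
e_12 = p * (2 * x1s + x2) + (1 - p) * (2 * x1l + x2)
# Order 2 then 1: T2 + T1 = X2 + (X2 + X1) = 2*X2 + X1
e_21 = p * (2 * x2 + x1s) + (1 - p) * (2 * x2 + x1l)
# Serve 1 for one step; if unfinished (w.p. 0.1) run 2, then finish 1:
#   w.p. 0.9: T1 = 1, T2 = 3;  w.p. 0.1: T2 = 3, T1 = 13
e_sw = p * (1 + 3) + (1 - p) * (3 + 13)
```

Both nonpreemptive orders give 6, while the switching policy gives 5.2, matching the slide.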

Page 31:

Scheduling … Other example:

Jobs 1 and 2 have random processing times; one can only interrupt a job once a stage is completed.

[Figure: stage diagrams for X1 and X2, with per-stage durations and branching probabilities p = 0.9 and p = 0.1.]

Question: how should one schedule to minimize E(sum of completion times)?
Intuition: maximize the expected rate of progress.

Page 32:

Scheduling … One more example:

[Figure: a job with stages a → b → c → done; stage durations 3 and 4, with branching probabilities p = 0.65, 0.35, 0.5, 0.2, 0.3.]

Thus, one should stop once one reaches a state with a lower index γ(·). Hence, the highest value of γ(·) is achieved in a single step.

Page 33:

Scheduling …

[Figure: the same stage diagram a → b → c → done, with durations 3 and 4 and probabilities p = 0.65, 0.35, 0.5, 0.2, 0.3.]

Here, the maximum of γ(·) is achieved at state c.

The second-highest value of γ(·) is γ(j) for some j in {a, b}, and it is achieved by the first time the state leaves {j, c}. Hence, this is max{γ(a), γ(b)}.

It follows that the state with the second-highest index is a, and γ(a) = 0.117.

Finally, we obtain the full ranking of the states by their indices.

Page 34:

Scheduling … Another example: jobs with increasing hazard rates.

Consider a set of jobs whose service times X are distributed on {1, 2, …}, with the property that the hazard rate h(·), defined by h(n) = P[X = n | X ≥ n], is nondecreasing. Assume also that one can switch jobs after each step. Then the optimal policy is to serve the jobs exhaustively, one by one.

Proof: We show, by induction on n, that γ(n) is achieved by the completion time of the job. Assume this is true for n > m. Then …

Page 35:

Scheduling …

Claim: It is optimal to process the job with the highest index γ(·) first: the INDEX RULE.

Proof: Interchange argument. Compare the schedule (1), (2), (3), …, (k) with the schedule obtained by interchanging the first two jobs.

Page 36:

Markov Decision Problems

Objective: how to make decisions in the face of uncertainty.
Examples: guessing the next card; serving queues.
Formulations: discrete time; continuous time; linear programming.

Page 37:

Markov Decision Problems: Decision in the face of uncertainty

Should you carry an umbrella? Should you get vaccinated against the flu? Take another card at blackjack? Buy a lottery ticket? Fill up at the next gas station? Guess that the transmitter sent a 1? Take the prelims next time? Stay on for a PhD? Marry my boyfriend? Choose another advisor? Drop out of this silly course? …

Page 38:

Markov Decision Problems

Example (from S. Ross, "Introduction to Stochastic Dynamic Programming"):

A perfectly shuffled deck of 52 cards. Guess once whether the next card is an ace: +$10.00 if correct, $0.00 if not. You can decide when to bet on the next card, and you see the cards as they are turned over.

Optimal policy: any guess is optimal, as long as it is made while there is still an ace in the deck.

Proof: Let V(n, m) be the best probability of guessing correctly when n cards remain, m of them aces (0 ≤ m ≤ n). Then V satisfies the dynamic programming equations (guess now, or watch one card and continue):

V(n, m) = max{ m/n, (m/n) V(n − 1, m − 1) + (1 − m/n) V(n − 1, m) },

with V(n, 0) = 0 and V(n, n) = 1; induction shows V(n, m) = m/n.
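The recursion can be evaluated directly; assuming the standard "guess now or wait one card" form, V(52, 4) works out to 4/52, confirming that the timing of the guess does not matter:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def V(n, m):
    """Best probability of a correct guess with n cards left, m of them
    aces: either guess on the next card (m/n), or watch it and continue."""
    if m == 0:
        return 0.0          # no aces left: any guess loses
    if n == m:
        return 1.0          # all remaining cards are aces
    guess_now = m / n
    wait = (m / n) * V(n - 1, m - 1) + (1 - m / n) * V(n - 1, m)
    return max(guess_now, wait)

# V(52, 4) = 4/52: waiting never helps, so any guess made while an ace
# remains in the deck is optimal.
```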

Page 39:

Markov Decision Problems

DISCRETE TIME:

X(n) = DTMC with transition probabilities P(i, j, a), a in A(i).
Objective: minimize E{c(X(1), a(1)) + … + c(X(N), a(N))}.
Solution: let V(i, n) = min E[c(X(1), a(1)) + … + c(X(n), a(n)) | X(1) = i]. Then

V(i, n) = min{ c(i, a) + Σj P(i, j, a) V(j, n − 1) }, V(·, 0) = 0,

where the minimum is over a in A(i). Moreover, the optimal action a = g(x, n) is the minimizing value.

Example: two states, 1 and 0; from state 1, action a in A(1) = [0, 1] moves to 0 w.p. a and stays in 1 w.p. 1 − a; c(1, a) = 1 + a², c(0) = 0.

V(1, n) = min{ 1 + a² + (1 − a) V(1, n − 1), a in [0, 1] }
V(1, 1) = min{ 1 + a² } = 1, g(1, 1) = 0
V(1, 2) = min{ 1 + a² + (1 − a) · 1 } = 7/4, g(1, 2) = 1/2
V(1, 3) = min{ 1 + a² + (1 − a) · 7/4 } = 127/64, g(1, 3) = 7/8, …
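The numbers above can be reproduced by iterating the recursion; since the cost is quadratic in a, the minimizer is a = V(1, n − 1)/2. A sketch:

```python
def finite_horizon_value(N):
    """Iterate V(1, n) = min over a in [0,1] of 1 + a^2 + (1 - a) V(1, n-1),
    with V(1, 0) = 0.  The quadratic is minimized at a = V(1, n-1) / 2,
    which stays in [0, 1] because V(1, n) < 2."""
    V, policy = 0.0, []
    for _ in range(N):
        a = V / 2                      # argmin of 1 + a^2 + (1 - a) * V
        policy.append(a)
        V = 1 + a * a + (1 - a) * V
    return V, policy

# V(1,1) = 1 (a = 0), V(1,2) = 7/4 (a = 1/2), V(1,3) = 127/64 (a = 7/8)
```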

Page 40:

Markov Decision Problems

DISCRETE TIME: Average cost

X(n) = DTMC with P(i, j, a), a in A(i).
Objective: minimize E{c(X, a(X))}, where X is invariant under P(i, j, a(i)).
Solution: roughly, V(i, n) ≈ nV + h(i), so that

nV + h(i) = min{ c(i, a) + (n − 1)V + Σj P(i, j, a) h(j) }.

Hence,

V + h(i) = min{ c(i, a) + Σj P(i, j, a) h(j) }.

Page 41:

Markov Decision Problems

DISCRETE TIME: Average cost

V + h(i) = min{ c(i, a) + Σj P(i, j, a) h(j) }

Example: two states, 1 and 0; from state 1, action a in A(1) = [0, 1] moves to 0 w.p. a and stays w.p. 1 − a; from 0 the chain returns to 1. c(1, a) = 1 + ar, c(0) = 0, r > 0.

V + h(1) = min{ 1 + ar + (1 − a) h(1) + a h(0), a in [0, 1] }
V + h(0) = 0 + h(1), so V = h(1) − h(0).
V + h(1) = min{ 1 + h(1), 1 + r + h(0) }, with a = 0 or a = 1 accordingly.
h(1) − h(0) = min{ 1, 1 + r + h(0) − h(1) }, which solves to
h(1) − h(0) = min{ 1, (1 + r)/2 } = V,
so a = 1 if r < 1 and a = 0 if r > 1.

Page 42:

Markov Decision Problems DISCRETE TIME: Average Cost - Linear Programming