
Page 1: TCOM 501:  Networking Theory & Fundamentals

1

TCOM 501: Networking Theory & Fundamentals

Lecture 3 & 4
January 23 & 30, 2002
Prof. Yannis A. Korilis

Page 2: TCOM 501:  Networking Theory & Fundamentals

3-2 Topics

Markov Chains
Discrete-Time Markov Chains
Calculating Stationary Distribution
Global Balance Equations
Detailed Balance Equations
Birth-Death Process
Generalized Markov Chains
Continuous-Time Markov Chains

Page 3: TCOM 501:  Networking Theory & Fundamentals

3-3 Markov Chain

Stochastic process that takes values in a countable set
Example: {0,1,2,…,m}, or {0,1,2,…}
Elements represent possible “states”
Chain “jumps” from state to state

Memoryless (Markov) Property: Given the present state, future jumps of the chain are independent of past history

Markov Chains: discrete- or continuous-time

Page 4: TCOM 501:  Networking Theory & Fundamentals

3-4 Discrete-Time Markov Chain

Discrete-time stochastic process {Xn: n = 0,1,2,…}

Takes values in {0,1,2,…}

Memoryless property:
P{X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, …, X_0 = i_0} = P{X_{n+1} = j | X_n = i}

Transition probabilities Pij:
Pij = P{X_{n+1} = j | X_n = i},  Pij ≥ 0,  Σ_j Pij = 1

Transition probability matrix P = [Pij]
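A minimal sketch (not from the original slides) of how these definitions translate to code: a transition matrix stored as a NumPy array, a check of the row-stochastic conditions Pij ≥ 0 and Σ_j Pij = 1, and a simulation in which the next state depends only on the current one. The 3-state matrix is a made-up example.

```python
import numpy as np

# Hypothetical 3-state transition matrix, used only for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
])

assert np.all(P >= 0)                   # P_ij >= 0
assert np.allclose(P.sum(axis=1), 1.0)  # each row sums to 1

def simulate(P, i0, steps, seed=0):
    """Simulate X_0, ..., X_steps; the next state is drawn from row X_n of P,
    so it depends only on the present state (memoryless property)."""
    rng = np.random.default_rng(seed)
    states = [i0]
    for _ in range(steps):
        states.append(int(rng.choice(len(P), p=P[states[-1]])))
    return states

print(simulate(P, i0=0, steps=10))
```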

Page 5: TCOM 501:  Networking Theory & Fundamentals

3-5 Chapman-Kolmogorov Equations

n-step transition probabilities:
P^n_ij = P{X_{n+m} = j | X_m = i},  n, m ≥ 0,  i, j ≥ 0

Chapman-Kolmogorov equations:
P^{n+m}_ij = Σ_{k=0}^∞ P^n_ik P^m_kj,  n, m ≥ 0,  i, j ≥ 0

P^n_ij is element (i, j) in matrix P^n
Recursive computation of state probabilities
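A small numerical check, not part of the slides: since the n-step transition probabilities are the entries of the matrix power P^n, the Chapman-Kolmogorov equations amount to P^{n+m} = P^n P^m. The 3-state matrix below is hypothetical.

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # hypothetical example chain
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

n, m = 3, 5
Pn  = np.linalg.matrix_power(P, n)
Pm  = np.linalg.matrix_power(P, m)
Pnm = np.linalg.matrix_power(P, n + m)

# Chapman-Kolmogorov: P^{n+m}_ij = sum_k P^n_ik P^m_kj
assert np.allclose(Pnm, Pn @ Pm)
print(Pnm[0, 1])   # probability of moving from state 0 to state 1 in n+m steps
```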

Page 6: TCOM 501:  Networking Theory & Fundamentals

3-6 State Probabilities – Stationary Distribution

State probabilities (time-dependent):
π^n_j = P{X_n = j},  π^n = (π^n_0, π^n_1, …)
π^n_j = P{X_n = j} = Σ_i P{X_{n-1} = i} P{X_n = j | X_{n-1} = i} = Σ_i π^{n-1}_i Pij

In matrix form:
π^n = π^{n-1} P = π^{n-2} P^2 = … = π^0 P^n

If the time-dependent distribution converges to a limit π = lim_{n→∞} π^n, then
π = πP
and π is called the stationary distribution
Existence depends on the structure of the Markov chain
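The recursion π^n = π^{n-1} P can be iterated numerically to watch the convergence to the stationary distribution when it exists; a brief sketch (the 3-state matrix is again a made-up example):

```python
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # hypothetical example chain
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

pi = np.array([1.0, 0.0, 0.0])   # pi^0: start in state 0
for _ in range(1000):
    nxt = pi @ P                 # pi^{n+1} = pi^n P
    if np.allclose(nxt, pi, atol=1e-12):
        break
    pi = nxt

print(pi)                        # approximate stationary distribution
print(np.allclose(pi, pi @ P))   # pi = pi P, up to numerical tolerance
```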

Page 7: TCOM 501:  Networking Theory & Fundamentals

3-7 Classification of Markov Chains

Irreducible: States i and j communicate:
∃ n, m ≥ 0 : P^n_ij > 0, P^m_ji > 0
Irreducible Markov chain: all states communicate

Aperiodic: State i is periodic:
∃ d > 1 : P^n_ii > 0 ⇒ n is a multiple of d
Aperiodic Markov chain: none of the states is periodic

[Figures: two example chains on states {0,1,2,3,4} illustrating these definitions]

Page 8: TCOM 501:  Networking Theory & Fundamentals

3-8 Limit Theorems

Theorem 1: Irreducible aperiodic Markov chain. For every state j, the following limit
πj = lim_{n→∞} P{X_n = j | X_0 = i},  i = 0,1,2,…
exists and is independent of the initial state i

Nj(k): number of visits to state j up to time k
πj: frequency with which the process visits state j:
P{ lim_{k→∞} Nj(k)/k = πj | X_0 = i } = 1

Page 9: TCOM 501:  Networking Theory & Fundamentals

3-9

Existence of Stationary Distribution

Theorem 2: Irreducible aperiodic Markov chain. There are two possibilities for the scalars
πj = lim_{n→∞} P{X_n = j | X_0 = i} = lim_{n→∞} P^n_ij

1. πj = 0, for all states j: no stationary distribution
2. πj > 0, for all states j: π = (π0, π1, …) is the unique stationary distribution

Remark: If the number of states is finite, case 2 is the only possibility

Page 10: TCOM 501:  Networking Theory & Fundamentals

3-10 Ergodic Markov Chains

Markov chain with a stationary distribution:
πj > 0,  j = 0,1,2,…
States are positive recurrent: the process returns to state j “infinitely often”
A positive recurrent and aperiodic Markov chain is called ergodic
Ergodic chains have a unique stationary distribution:
πj = lim_{n→∞} P^n_ij
Ergodicity ⇒ Time Averages = Stochastic Averages

Page 11: TCOM 501:  Networking Theory & Fundamentals

3-11

Calculation of Stationary Distribution

A. Finite number of states
Solve explicitly the system of equations:
πj = Σ_{i=0}^m πi Pij,  j = 0,1,…,m
Σ_{i=0}^m πi = 1
Or numerically from P^n, which converges to a matrix with rows equal to π
Suitable for a small number of states

B. Infinite number of states
Cannot apply the previous methods to a problem of infinite dimension
Guess a solution to the recurrence:
πj = Σ_{i=0}^∞ πi Pij,  j = 0,1,…
Σ_{i=0}^∞ πi = 1
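For case A, the finite system π = πP, Σ_i πi = 1 can be solved directly as a linear system; one balance equation is redundant, so it is replaced by the normalization condition. A sketch with a hypothetical 3-state chain:

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum_i pi_i = 1 for a finite chain."""
    m = P.shape[0]
    A = np.vstack([(P.T - np.eye(m))[:-1],  # (pi P - pi)_j = 0 for j = 0..m-2
                   np.ones(m)])             # sum_i pi_i = 1
    b = np.zeros(m)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

P = np.array([[0.5, 0.3, 0.2],              # hypothetical example chain
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
pi = stationary_distribution(P)
print(pi, np.allclose(pi, pi @ P))
```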

Page 12: TCOM 501:  Networking Theory & Fundamentals

3-12 Example: Finite Markov Chain

Absent-minded professor uses two umbrellas when commuting between home and office. If it rains and an umbrella is available at her location, she takes it. If it does not rain, she always forgets to take an umbrella. Let p be the probability of rain each time she commutes. What is the probability that she gets wet on any given day?

Markov chain formulation: state i is the number of umbrellas available at her current location

Transition matrix:
P = [  0     0     1
       0    1-p    p
      1-p    p     0  ]

[Figure: three-state transition diagram (states 0, 1, 2) with the probabilities p and 1-p]

Page 13: TCOM 501:  Networking Theory & Fundamentals

3-13 Example: Finite Markov Chain

P = [  0     0     1
       0    1-p    p
      1-p    p     0  ]

π = πP ⇒
π0 = (1-p) π2
π1 = (1-p) π1 + p π2  ⇒  π1 = π2
π2 = π0 + p π1

Σ_i πi = 1  ⇒  π0 = (1-p)/(3-p),  π1 = π2 = 1/(3-p)

P{gets wet} = p π0 = p(1-p)/(3-p)

Page 14: TCOM 501:  Networking Theory & Fundamentals

3-14 Example: Finite Markov Chain Taking p = 0.1:

Numerically determine limit of Pn

Effectiveness depends on structure of P

P = [  0     0     1
       0    0.9   0.1
      0.9   0.1    0  ]

π = ( (1-p)/(3-p), 1/(3-p), 1/(3-p) ) = (0.310, 0.345, 0.345)

lim_{n→∞} P^n = [ 0.310  0.345  0.345
                  0.310  0.345  0.345
                  0.310  0.345  0.345 ]   (n ≈ 150)
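The numbers above are easy to reproduce; a short illustrative check of the umbrella example with p = 0.1:

```python
import numpy as np

p = 0.1
P = np.array([[0.0,   0.0, 1.0],
              [0.0, 1 - p,   p],
              [1 - p,   p, 0.0]])

Pn = np.linalg.matrix_power(P, 150)
print(np.round(Pn, 3))            # every row is approximately (0.310, 0.345, 0.345)

pi0 = (1 - p) / (3 - p)
print("P{gets wet} =", p * pi0)   # p(1-p)/(3-p), about 0.031
```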

Page 15: TCOM 501:  Networking Theory & Fundamentals

3-15 Global Balance Equations

Markov chain with an infinite number of states
Global Balance Equations (GBE):
πj Σ_{i=0}^∞ Pji = Σ_{i=0}^∞ πi Pij  ⇔  πj Σ_{i≠j} Pji = Σ_{i≠j} πi Pij,  j ≥ 0

πj Pji is the frequency of transitions from j to i:
frequency of transitions out of j = frequency of transitions into j
Intuition: j is visited infinitely often; for each transition out of j there must be a subsequent transition into j with probability 1

Page 16: TCOM 501:  Networking Theory & Fundamentals

3-16 Global Balance Equations

Alternative form of GBE:
Σ_{j∈S} πj Σ_{i∉S} Pji = Σ_{i∉S} πi Σ_{j∈S} Pij,  for every S ⊆ {0,1,2,…}

If a probability distribution satisfies the GBE, then it is the unique stationary distribution of the Markov chain

Finding the stationary distribution:
Guess the distribution from properties of the system
Verify that it satisfies the GBE
Special structure of the Markov chain simplifies the task

Page 17: TCOM 501:  Networking Theory & Fundamentals

3-17 Global Balance Equations – Proof

From πj = Σ_{i=0}^∞ πi Pij and Σ_{i=0}^∞ Pji = 1:
πj Σ_{i=0}^∞ Pji = Σ_{i=0}^∞ πi Pij  ⇔  πj Σ_{i≠j} Pji = Σ_{i≠j} πi Pij

Summing over j ∈ S and splitting each sum over i ∈ S and i ∉ S:
Σ_{j∈S} πj ( Σ_{i∈S} Pji + Σ_{i∉S} Pji ) = Σ_{j∈S} ( Σ_{i∈S} πi Pij + Σ_{i∉S} πi Pij )
The terms with i ∈ S are equal on the two sides (rename the summation indices), so they cancel:
Σ_{j∈S} πj Σ_{i∉S} Pji = Σ_{i∉S} πi Σ_{j∈S} Pij

Page 18: TCOM 501:  Networking Theory & Fundamentals

3-18 Birth-Death Process

One-dimensional Markov chain with transitions only between neighboring states: Pij=0, if |i-j|>1

Detailed Balance Equations (DBE):
πn P_{n,n+1} = π_{n+1} P_{n+1,n},  n = 0,1,…

Proof: GBE with S = {0,1,…,n} (S^c = {n+1, n+2, …}) give:
Σ_{j=0}^n πj Σ_{i=n+1}^∞ Pji = Σ_{i=n+1}^∞ πi Σ_{j=0}^n Pij  ⇒  πn P_{n,n+1} = π_{n+1} P_{n+1,n}

[Figure: birth-death chain on states 0, 1, 2, …, n, n+1, … with transition probabilities P00, P01, P10, P_{n,n}, P_{n,n+1}, P_{n+1,n}]

Page 19: TCOM 501:  Networking Theory & Fundamentals

3-19 Example: Discrete-Time Queue

In a time-slot, one arrival with probability p or zero arrivals with probability 1-p

In a time-slot, the customer in service departs with probability q or stays with probability 1-q

Independent arrivals and service times State: number of customers in system

[Figure: state-transition diagram on states 0, 1, 2, …, n, n+1, … with P00 = 1-p, P01 = p, and for n ≥ 1: P_{n,n+1} = p(1-q), P_{n,n-1} = q(1-p), P_{n,n} = (1-p)(1-q) + pq]

Page 20: TCOM 501:  Networking Theory & Fundamentals

3-20 Example: Discrete-Time Queue

[Same transition diagram as on the previous slide]

DBE between states 0 and 1:
π0 p = π1 q(1-p)  ⇒  π1 = [(p/q)/(1-p)] π0

DBE between states n and n+1, for n ≥ 1:
πn p(1-q) = π_{n+1} q(1-p)  ⇒  π_{n+1} = [p(1-q)/(q(1-p))] πn

Define: ρ ≡ p/q,  α ≡ p(1-q)/(q(1-p))

Then π1 = [ρ/(1-p)] π0 and π_{n+1} = α πn = α^n π1, so
πn = [ρ/(1-p)] α^{n-1} π0,  n ≥ 1

Page 21: TCOM 501:  Networking Theory & Fundamentals

3-21 Example: Discrete-Time Queue

Have determined the distribution as a function of π0:
πn = π0 [ρ/(1-p)] α^{n-1},  n ≥ 1

How do we calculate the normalization constant π0? Probability conservation law:
Σ_{n=0}^∞ πn = 1  ⇒  π0 [ 1 + (ρ/(1-p)) Σ_{n=1}^∞ α^{n-1} ] = 1  ⇒  π0 [ 1 + ρ/((1-p)(1-α)) ] = 1

Noting that 1-α = (q-p)/(q(1-p)):
1 + ρ/((1-p)(1-α)) = 1 + p/(q-p) = q/(q-p) = 1/(1-ρ)

⇒  π0 = 1-ρ  and  πn = (1-ρ) [ρ/(1-p)] α^{n-1},  n ≥ 1
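A quick numerical check of this result (a sketch using the formulas derived above, with arbitrary values p = 0.3, q = 0.5): verify the detailed balance equations and that the probabilities sum to 1.

```python
p, q = 0.3, 0.5
rho = p / q
alpha = p * (1 - q) / (q * (1 - p))

def pi(n):
    """Stationary probabilities of the discrete-time queue, as derived above."""
    if n == 0:
        return 1 - rho
    return (1 - rho) * rho / (1 - p) * alpha ** (n - 1)

# DBE: pi_0 p = pi_1 q(1-p)  and  pi_n p(1-q) = pi_{n+1} q(1-p) for n >= 1
assert abs(pi(0) * p - pi(1) * q * (1 - p)) < 1e-12
assert all(abs(pi(n) * p * (1 - q) - pi(n + 1) * q * (1 - p)) < 1e-12
           for n in range(1, 50))

# Normalization (geometric tail, truncated far enough to be negligible)
print(sum(pi(n) for n in range(2000)))   # ~ 1.0
```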

Page 22: TCOM 501:  Networking Theory & Fundamentals

3-22 Detailed Balance Equations

General case:
πj Pji = πi Pij,  i, j = 0,1,…

DBE imply the GBE
Need not hold for a given Markov chain
Greatly simplify the calculation of the stationary distribution

Methodology:
Assume the DBE hold (have to guess their form)
Solve the system defined by the DBE and Σ_i πi = 1
If the system is inconsistent, then the DBE do not hold
If the system has a solution {πi : i = 0,1,…}, then this is the unique stationary distribution

Page 23: TCOM 501:  Networking Theory & Fundamentals

3-23 Generalized Markov Chains

Markov chain on a set of states {0,1,…} that, whenever it enters state i:
The next state that will be entered is j with probability Pij
Given that the next state entered will be j, the time it spends at state i until the transition occurs is a RV with distribution Fij

{Z(t): t ≥ 0} describing the state the chain is in at time t: Generalized Markov chain, or Semi-Markov process
It does not have the Markov property: the future depends on
the present state, and
the length of time the process has spent in this state

Page 24: TCOM 501:  Networking Theory & Fundamentals

3-24 Generalized Markov Chains

Ti: time the process spends at state i before making a transition – holding time
Probability distribution function of Ti:
Hi(t) = P{Ti ≤ t} = Σ_{j=0}^∞ P{Ti ≤ t | next state j} Pij = Σ_{j=0}^∞ Fij(t) Pij
E[Ti] = ∫_0^∞ t dHi(t)

Tii: time between successive transitions to i

Xn: the nth state visited. {Xn: n = 0,1,…}
is a Markov chain: the embedded Markov chain
with transition probabilities Pij

Semi-Markov process irreducible: if its embedded Markov chain is irreducible

Page 25: TCOM 501:  Networking Theory & Fundamentals

3-25 Limit Theorems

Theorem 3: Irreducible semi-Markov process, E[Tii] < ∞

For any state j, the following limit

exists and is independent of the initial state.

Tj(t): time spent at state j up to time t

pj is equal to the proportion of time spent at state j

pj = lim_{t→∞} P{Z(t) = j | Z(0) = i},  i = 0,1,2,…

pj = E[Tj] / E[Tjj]

P{ lim_{t→∞} Tj(t)/t = pj | Z(0) = i } = 1

Page 26: TCOM 501:  Networking Theory & Fundamentals

3-26 Occupancy Distribution

Theorem 4: Irreducible semi-Markov process; E[Tii] < ∞. Embedded Markov chain ergodic, with stationary distribution π:
πj = Σ_i πi Pij,  j ≥ 0;  Σ_i πi = 1

Occupancy distribution of the semi-Markov process:
pj = πj E[Tj] / Σ_i πi E[Ti],  j = 0,1,…

πj: proportion of transitions into state j
E[Tj]: mean time spent at j
Probability of being at j is proportional to πj E[Tj]
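A small illustration of Theorem 4 (the embedded chain and holding times below are hypothetical, chosen only to show the computation): compute π for the embedded chain, then weight by the mean holding times.

```python
import numpy as np

# Hypothetical embedded chain (no self-transitions) and mean holding times E[Tj].
P  = np.array([[0.0, 0.7, 0.3],
               [0.5, 0.0, 0.5],
               [0.6, 0.4, 0.0]])
ET = np.array([2.0, 0.5, 1.0])

# Stationary distribution of the embedded chain: pi = pi P, sum_i pi_i = 1.
m = P.shape[0]
A = np.vstack([(P.T - np.eye(m))[:-1], np.ones(m)])
b = np.zeros(m); b[-1] = 1.0
pi = np.linalg.solve(A, b)

p = pi * ET / np.dot(pi, ET)   # occupancy distribution p_j = pi_j E[Tj] / sum_i pi_i E[Ti]
print(pi, p)
```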

Page 27: TCOM 501:  Networking Theory & Fundamentals

3-27 Continuous-Time Markov Chains

Continuous-time process {X(t): t ≥ 0} taking values in {0,1,2,…}. Whenever it enters state i:
Time it spends at state i is exponentially distributed with parameter νi
When it leaves state i, it enters state j with probability Pij, where Σ_j Pij = 1

A continuous-time Markov chain is a semi-Markov process with exponential holding times:
Fij(t) = 1 - e^{-νi t},  i, j = 0,1,…
A continuous-time Markov chain has the Markov property

Page 28: TCOM 501:  Networking Theory & Fundamentals

3-28 Continuous-Time Markov Chains

When at state i, the process makes transitions to state j ≠ i with rate:
qij = νi Pij

Total rate of transitions out of state i:
Σ_{j≠i} qij = νi Σ_{j≠i} Pij = νi

Average time spent at state i before making a transition:
E[Ti] = 1/νi

Page 29: TCOM 501:  Networking Theory & Fundamentals

3-29 Occupancy Probability

Irreducible and regular continuous-time Markov chain:
Embedded Markov chain is irreducible
Number of transitions in a finite time interval is finite with probability 1

From Theorem 3: for any state j, the limit
pj = lim_{t→∞} P{X(t) = j | X(0) = i},  i = 0,1,2,…
exists and is independent of the initial state
pj is the steady-state occupancy probability of state j
pj is equal to the proportion of time spent at state j [Why?]

Page 30: TCOM 501:  Networking Theory & Fundamentals

3-30 Global Balance Equations

Two possibilities for the occupancy probabilities:
pj = 0, for all j
pj > 0, for all j, and Σ_j pj = 1

Global Balance Equations:
pj Σ_{i≠j} qji = Σ_{i≠j} pi qij,  j = 0,1,…
Rate of transitions out of j = rate of transitions into j

If a distribution {pj: j = 0,1,…} satisfies the GBE, then it is the unique occupancy distribution of the Markov chain

Alternative form of GBE:
Σ_{j∈S} pj Σ_{i∉S} qji = Σ_{i∉S} pi Σ_{j∈S} qij,  S ⊆ {0,1,…}
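One common way to solve the GBE numerically (a sketch, not from the slides): collect the rates into the matrix Q with Qij = qij for i ≠ j and Qii = -Σ_{j≠i} qij, so the GBE read p Q = 0, and add the normalization Σ_j pj = 1. The 3-state rates below are hypothetical.

```python
import numpy as np

# Hypothetical transition rates q_ij (zero diagonal), for illustration only.
q = np.array([[0.0, 2.0, 1.0],
              [3.0, 0.0, 1.0],
              [1.0, 2.0, 0.0]])

Q = q - np.diag(q.sum(axis=1))          # Q_ii = -sum_{j != i} q_ij

m = Q.shape[0]
A = np.vstack([Q.T[:-1], np.ones(m)])   # p Q = 0 (one equation dropped) + normalization
b = np.zeros(m); b[-1] = 1.0
p = np.linalg.solve(A, b)

print(p)                                # occupancy probabilities
print(np.allclose(p @ Q, 0.0))          # GBE satisfied
```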

Page 31: TCOM 501:  Networking Theory & Fundamentals

3-31 Detailed Balance Equations

Detailed Balance Equations:
pj qji = pi qij,  i, j = 0,1,…

Simplify the calculation of the stationary distribution
Need not hold for any given Markov chain
Examples: birth-death processes and reversible Markov chains

Page 32: TCOM 501:  Networking Theory & Fundamentals

3-32 Birth-Death Process

Transitions only between neighboring states:
q_{i,i+1} = λi,  q_{i,i-1} = μi,  qij = 0 for |i-j| > 1

Detailed Balance Equations:
λn pn = μ_{n+1} p_{n+1},  n = 0,1,…

Proof: GBE with S = {0,1,…,n} (S^c = {n+1, n+2, …}) give:
Σ_{j=0}^n pj Σ_{i=n+1}^∞ qji = Σ_{i=n+1}^∞ pi Σ_{j=0}^n qij  ⇒  λn pn = μ_{n+1} p_{n+1}

[Figure: birth-death chain on states 0, 1, 2, …, n, n+1, … with birth rates λ0, λ1, …, λn and death rates μ1, μ2, …, μ_{n+1}]

Page 33: TCOM 501:  Networking Theory & Fundamentals

3-33 Birth-Death Process

Use DBE to determine state probabilities as a function of p0

Use the probability conservation law to find p0

Using DBE in problems: Prove that DBE hold, or Justify validity (e.g. reversible process), or Assume they hold – have to guess their form – and solve system

pn = (λ_{n-1}/μn) p_{n-1} = (λ_{n-1} λ_{n-2} / (μn μ_{n-1})) p_{n-2} = … = p0 Π_{i=0}^{n-1} (λi/μ_{i+1})

Σ_{n=0}^∞ pn = 1  ⇒  p0 [ 1 + Σ_{n=1}^∞ Π_{i=0}^{n-1} (λi/μ_{i+1}) ] = 1
⇒  p0 = [ 1 + Σ_{n=1}^∞ Π_{i=0}^{n-1} (λi/μ_{i+1}) ]^{-1},  if Σ_{n=1}^∞ Π_{i=0}^{n-1} (λi/μ_{i+1}) < ∞
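The product form above translates directly into code; a sketch that truncates the state space at a finite N for the normalization (rates are passed as functions, and the example rates are arbitrary):

```python
import numpy as np

def birth_death(lam, mu, N=200):
    """p_n = p_0 * prod_{i=0}^{n-1} lam(i)/mu(i+1), normalized over states 0..N."""
    ratios = np.ones(N + 1)
    for n in range(1, N + 1):
        ratios[n] = ratios[n - 1] * lam(n - 1) / mu(n)
    p0 = 1.0 / ratios.sum()
    return p0 * ratios

# Hypothetical constant rates: an M/M/1-like example with lambda = 1, mu = 2.
p = birth_death(lam=lambda n: 1.0, mu=lambda n: 2.0)
print(p[:5], p.sum())   # roughly geometric probabilities; total ~ 1
```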

Page 34: TCOM 501:  Networking Theory & Fundamentals

3-34 M/M/1 Queue

Arrival process: Poisson with rate λ
Service times: iid, exponential with parameter μ
Service times and interarrival times: independent
Single server
Infinite waiting room
N(t): number of customers in system at time t (state)

[Figure: birth-death chain on states 0, 1, 2, …, n, n+1, … with arrival rate λ and service rate μ]

Page 35: TCOM 501:  Networking Theory & Fundamentals

3-35 M/M/1 Queue

Birth-death process → DBE:
λ p_{n-1} = μ pn  ⇒  pn = (λ/μ) p_{n-1} = ρ p_{n-1} = … = ρ^n p0,  ρ ≡ λ/μ

Normalization constant:
Σ_{n=0}^∞ pn = 1  ⇒  p0 [ Σ_{n=0}^∞ ρ^n ] = 1  ⇒  p0 = 1-ρ,  if ρ < 1

Stationary distribution:
pn = (1-ρ) ρ^n,  n = 0,1,…

[Figure: birth-death chain on states 0, 1, 2, …, n, n+1, … with all birth rates λ and all death rates μ]

Page 36: TCOM 501:  Networking Theory & Fundamentals

3-36

The M/M/1 Queue

Average number of customers:
N = Σ_{n=0}^∞ n pn = Σ_{n=0}^∞ n (1-ρ) ρ^n = ρ(1-ρ) Σ_{n=1}^∞ n ρ^{n-1} = ρ(1-ρ) · 1/(1-ρ)^2 = ρ/(1-ρ) = λ/(μ-λ)

Applying Little’s Theorem, we have:
T = N/λ = 1/(μ-λ)

Similarly, the average waiting time and the average number of customers in the queue are given by:
W = T - 1/μ = ρ/(μ-λ)  and  N_Q = λW = ρ^2/(1-ρ)
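The closed-form M/M/1 quantities derived above fit in a few lines; a sketch with arbitrary example rates (stability requires ρ = λ/μ < 1):

```python
def mm1_metrics(lam, mu):
    """N, T, W, N_Q for an M/M/1 queue, using the formulas derived above."""
    assert lam < mu, "stability requires rho = lam/mu < 1"
    rho = lam / mu
    N   = rho / (1 - rho)        # average number in system
    T   = 1 / (mu - lam)         # average time in system (Little: T = N/lam)
    W   = rho / (mu - lam)       # average waiting time (W = T - 1/mu)
    N_Q = rho ** 2 / (1 - rho)   # average number waiting (N_Q = lam * W)
    return N, T, W, N_Q

print(mm1_metrics(lam=3.0, mu=5.0))   # hypothetical rates, for illustration
```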