
Page 1:

Markov chains and processes: motivations

Random walk

One-dimensional walk

You can only move one step right or left every time unit

Two-dimensional walk

[Figure: one-dimensional walk on the number line -3 … 3, starting at 0, moving right with probability p and left with probability q; two-dimensional walk on a grid around a house, with moves in the N, S, E, W directions]

Page 2:

One-dimensional random walk with reflective barriers

Hypothesis

Probability(object moves to the right) = p
Probability(object moves to the left) = q

Rule

An object at position 2 (resp. -2) that takes a step to the right (resp. left) hits the reflective wall and bounces back to 2 (resp. -2)

[Figure: the number line -3 … 3 with reflecting walls at ±3, p to the right, q to the left]
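A minimal simulation sketch of this reflective walk (Python with numpy assumed; p = 0.6 and the function name are illustrative choices, not from the slides):

    import numpy as np

    def reflective_walk(steps, p=0.6, seed=0):
        # 1-D walk on {-2, ..., 2}: right w.p. p, left w.p. q = 1 - p;
        # a step onto +/-3 hits the wall and bounces back to +/-2
        rng = np.random.default_rng(seed)
        pos, path = 0, [0]
        for _ in range(steps):
            pos += 1 if rng.random() < p else -1
            if pos == 3:
                pos = 2      # bounce off the wall at 3
            elif pos == -3:
                pos = -2     # bounce off the wall at -3
            path.append(pos)
        return path

    print(reflective_walk(20))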

Page 3:

Discrete state space and discrete time

Let Xt = r.v. indicating the position of the object at time t

where t takes on multiples of the time unit, i.e., t = 0, 1, 2, 3, …

=> discrete time

And Xt belongs to {-2, -1, 0, 1, 2} => discrete state space

Discrete time + discrete state space + other conditions => Markov chain

Example: random walk

Page 4:

Discrete state space and continuous time

In this case, the state space is discrete, but shifts from one value to another occur continuously.

Example: the number of packets in the output buffer of a router is discrete and changes whenever you have an arrival or a departure.

All the queues studied so far fall under this category.

Discrete state space + continuous time + other conditions => Markov process (Ex: M/M queues)

Page 5:

Random walk: one-step transition probability

Xt = i => Xt+1 = i ± 1. The one-step probability indicates where the object is going to be in one step => Pij(1)

Example:

P[Xt+1 = 1 | Xt = 0] = p
P[Xt+1 = -1 | Xt = 0] = q
P[Xt+1 = 0 | Xt = 1] = q
P[Xt+1 = 2 | Xt = 1] = p

Page 6:

One-step transition matrix

The one-step transition matrix P(1), with rows indexed by the state at time t and columns by the state at time t+1 (both ordered -2, -1, 0, 1, 2):

$$
P^{(1)} = \begin{pmatrix}
q & p & 0 & 0 & 0 \\
q & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & p \\
0 & 0 & 0 & q & p
\end{pmatrix}
$$
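The same matrix built in code, as a sketch (the helper name one_step_matrix is mine; state order -2, -1, 0, 1, 2):

    import numpy as np

    def one_step_matrix(p):
        # one-step transition matrix of the reflective walk on {-2, ..., 2}
        q = 1 - p
        P = np.zeros((5, 5))
        for row, state in enumerate(range(-2, 3)):
            left = max(state - 1, -2)    # a left move from -2 bounces back to -2
            right = min(state + 1, 2)    # a right move from 2 bounces back to 2
            P[row, left + 2] += q        # +2 maps a state to its column index
            P[row, right + 2] += p
        return P

    P = one_step_matrix(0.6)
    print(P.sum(axis=1))    # every row sums to 1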

Page 7:

2-step transition probability

Pij(2) = 2-step transition probability: given that at time t the object is in state i, with what probability does it get to j in exactly 2 steps?

Pij(2) = P[Xt+2 = j | Xt = i]

P[Xt+2 = 2 | Xt = 0] = p²

P[Xt+2 = 2 | Xt = 1] = p²

P[Xt+2 = 0 | Xt = -1] = 0

Next, we will populate the 2-step transition matrix P(2)

Page 8:

2-step transition matrix

Observation: the 2-step transition matrix can be obtained by multiplying two 1-step transition matrices:

$$
P^{(2)} = P^{(1)} \cdot P^{(1)} = \left(P^{(1)}\right)^2 =
\begin{pmatrix}
q^2 + pq & pq & p^2 & 0 & 0 \\
q^2 & 2pq & 0 & p^2 & 0 \\
q^2 & 0 & 2pq & 0 & p^2 \\
0 & q^2 & 0 & 2pq & p^2 \\
0 & 0 & q^2 & pq & p^2 + pq
\end{pmatrix}
$$
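A quick numerical check of this observation, reusing the one_step_matrix sketch from the previous page (p = 0.6, q = 0.4 are illustrative):

    P = one_step_matrix(0.6)
    P2 = P @ P                               # 2-step matrix as a product
    p, q = 0.6, 0.4
    assert np.isclose(P2[2, 4], p**2)        # P[X_{t+2}=2  | X_t=0]  = p^2
    assert np.isclose(P2[1, 1], 2 * p * q)   # P[X_{t+2}=-1 | X_t=-1] = 2pq
    print(P2)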

Page 9:

3-step transition probability

Pij(3) may be derived as follows

$$
P_{ij}^{(3)} = P[X_{t+3}=j \mid X_t=i]
= \sum_{j',\,j''} P[X_{t+3}=j \mid X_{t+2}=j'']\; P[X_{t+2}=j'' \mid X_{t+1}=j']\; P[X_{t+1}=j' \mid X_t=i]
$$

$$
= \sum_{k} P[X_{t+3}=j \mid X_{t+1}=k]\; P[X_{t+1}=k \mid X_t=i]
= \sum_{k} P[X_{t+3}=j \mid X_{t+2}=k]\; P[X_{t+2}=k \mid X_t=i]
$$

Page 10:

3-step transition probability: example

For instance:

$$
P[X_{t+3}=2 \mid X_t=0] = \sum_{k} P[X_{t+3}=2 \mid X_{t+1}=k]\; P[X_{t+1}=k \mid X_t=0]
$$

[Figure: a tree rooted at state 0: with probability p the first step goes to 1, from which the probability of reaching 2 in two more steps is p²; with probability q it goes to -1, from which that probability is 0]

$$
P[X_{t+3}=2 \mid X_t=0] = p \cdot p^2 + q \cdot 0 = p^3
$$

Once you construct P(3) (the 3-step transition matrix):

$$
P^{(3)} = P^{(1)} \cdot P^{(1)} \cdot P^{(1)}, \qquad P^{(3)} = P^{(2)} \cdot P^{(1)}
$$

Page 11:

Chapman-Kolmogorov equation

Let Pij(n) be the n-step transition probability. It depends on the probability of jumping to an intermediate state k in v steps, and then going from k to j in the remaining n - v steps:

$$
P_{ij}^{(n)} = P[X_{t+n}=j \mid X_t=i] = \sum_{k} P_{ik}^{(v)}\, P_{kj}^{(n-v)}
$$

[Figure: i reaches k in v steps, then k reaches j in n - v steps]

In matrix form, the n-step transition matrix P(n) satisfies:

$$
P^{(n)} = P^{(v)} \cdot P^{(n-v)}
$$
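The identity is easy to verify numerically for the random walk (a sketch, reusing one_step_matrix from above):

    # Chapman-Kolmogorov: P^(n) = P^(v) . P^(n-v), e.g. n = 5, v = 2
    P = one_step_matrix(0.6)
    P5 = np.linalg.matrix_power(P, 5)
    P2, P3 = np.linalg.matrix_power(P, 2), np.linalg.matrix_power(P, 3)
    assert np.allclose(P5, P2 @ P3)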

Page 12:

Markov chain: main feature

Markov chain

Discrete state space
Discrete time structure

Assumption: P[Xt+1 = j | Xt = i] = P[Xt+1 = j | X0 = k, X1 = k', …, Xt = i]

In other words, the probability that the object is in position j at time t+1, given that it was in position i at time t, is independent of the entire earlier history.

Page 13:

Markov chain: main objective

Objective: obtain the long-term probabilities, also called equilibrium or stationary probabilities. In the case of the random walk, this is the probability of being at position j in the long run.

Pij(n) becomes less dependent on i when n is very large; it will only depend on the destination state j:

$$
\lim_{n\to\infty} P_{ij}^{(n)} = \pi_j
$$

Page 14:

n-step transition matrix: long run

πj = Prob[system will be in state j in the long run, i.e., after a large number of transitions]

$$
\lim_{n\to\infty} P^{(n)} = \lim_{n\to\infty}
\begin{pmatrix}
p_{11}^{(n)} & p_{12}^{(n)} & \cdots & p_{1m}^{(n)} \\
p_{21}^{(n)} & p_{22}^{(n)} & \cdots & p_{2m}^{(n)} \\
\vdots & \vdots & \ddots & \vdots \\
p_{m1}^{(n)} & p_{m2}^{(n)} & \cdots & p_{mm}^{(n)}
\end{pmatrix}
=
\begin{pmatrix}
\pi_1 & \pi_2 & \cdots & \pi_m \\
\pi_1 & \pi_2 & \cdots & \pi_m \\
\vdots & \vdots & \ddots & \vdots \\
\pi_1 & \pi_2 & \cdots & \pi_m
\end{pmatrix}
$$

All rows of the limiting matrix are identical: in the long run the distribution no longer depends on the starting state.

Page 15:

Random walk: application

Prob[at time 0, the object will be in state i] = ?

[Figure: the number line -3 … 3 with reflecting walls at ±3, p to the right, q to the left]

Collect these initial probabilities into a row vector and propagate it one step with P(1):

$$
\tilde{\pi}^{(0)} = \left(\pi_{-2}^{(0)}, \pi_{-1}^{(0)}, \pi_{0}^{(0)}, \pi_{1}^{(0)}, \pi_{2}^{(0)}\right), \qquad
\tilde{\pi}^{(1)} = \left(\pi_{-2}^{(1)}, \pi_{-1}^{(1)}, \pi_{0}^{(1)}, \pi_{1}^{(1)}, \pi_{2}^{(1)}\right)
$$

$$
\tilde{\pi}^{(1)} = \tilde{\pi}^{(0)} \cdot P^{(1)}
$$

Page 16:

Initial states are equiprobable

If all states are equiprobable at time 0:

$$
\tilde{\pi}^{(0)} = (0.2,\ 0.2,\ 0.2,\ 0.2,\ 0.2)
$$

$$
\tilde{\pi}^{(1)} = \tilde{\pi}^{(0)} \cdot P^{(1)}
= (0.2,\ 0.2,\ 0.2,\ 0.2,\ 0.2)
\begin{pmatrix}
q & p & 0 & 0 & 0 \\
q & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & p \\
0 & 0 & 0 & q & p
\end{pmatrix}
= (0.4q,\ 0.2,\ 0.2,\ 0.2,\ 0.4p)
$$
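The same computation numerically (a sketch; p = 0.6, q = 0.4 are illustrative):

    p, q = 0.6, 0.4
    P = one_step_matrix(p)
    pi0 = np.full(5, 0.2)        # uniform initial distribution
    pi1 = pi0 @ P                # one-step propagation
    assert np.allclose(pi1, [0.4 * q, 0.2, 0.2, 0.2, 0.4 * p])
    print(pi1)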

Page 17:

Object initially at a specific position

If the object is initially at position 0:

$$
\tilde{\pi}^{(0)} = (0, 0, 1, 0, 0)
$$

$$
\tilde{\pi}^{(1)} = \tilde{\pi}^{(0)} \cdot P^{(1)} = (0,\ q,\ 0,\ p,\ 0)
$$

$$
\tilde{\pi}^{(2)} = \tilde{\pi}^{(1)} \cdot P^{(1)}, \quad \ldots, \quad
\tilde{\pi}^{(n)} = \tilde{\pi}^{(n-1)} \cdot P^{(1)} = \tilde{\pi}^{(0)} \cdot \left(P^{(1)}\right)^n
$$

As you move along, you get away from the original vector => the behavior becomes independent of the initial position:

$$
\lim_{n\to\infty} \tilde{\pi}^{(n)} = \tilde{\pi} = (\pi_1, \ldots, \pi_m), \qquad \sum_{i=1}^{m} \pi_i = 1
$$
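Iterating this recursion shows the vector forgetting its starting point (a sketch):

    # start with a point mass at position 0 and propagate repeatedly
    P = one_step_matrix(0.6)
    pi = np.array([0.0, 0.0, 1.0, 0.0, 0.0])
    for _ in range(50):
        pi = pi @ P
    print(pi)    # close to the stationary vector, whatever the start state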

Page 18:

The power method

Assume a Markov chain with m states 1, 2, 3, …, m. Raising the one-step transition matrix to higher and higher powers, every row of P(n) converges to the vector of long-term probabilities:

$$
\lim_{n\to\infty} P^{(n)} = \lim_{n\to\infty}
\begin{pmatrix}
p_{11}^{(n)} & p_{12}^{(n)} & \cdots & p_{1m}^{(n)} \\
p_{21}^{(n)} & p_{22}^{(n)} & \cdots & p_{2m}^{(n)} \\
\vdots & \vdots & \ddots & \vdots \\
p_{m1}^{(n)} & p_{m2}^{(n)} & \cdots & p_{mm}^{(n)}
\end{pmatrix}
=
\begin{pmatrix}
\pi_1 & \pi_2 & \cdots & \pi_m \\
\pi_1 & \pi_2 & \cdots & \pi_m \\
\vdots & \vdots & \ddots & \vdots \\
\pi_1 & \pi_2 & \cdots & \pi_m
\end{pmatrix}
$$
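A direct power-method sketch for the random walk (n = 100 is an arbitrary large power):

    P = one_step_matrix(0.6)
    Pn = np.linalg.matrix_power(P, 100)
    print(Pn)                      # every row is (almost) the same vector
    assert np.allclose(Pn, Pn[0])  # rows agree => limit independent of i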

Page 19:

Long term probabilities: system of equations

Taking the limit of $\tilde{\pi}^{(n+1)} = \tilde{\pi}^{(n)} \cdot P^{(1)}$, the stationary vector satisfies:

$$
\tilde{\pi} = \tilde{\pi} \cdot P^{(1)}, \qquad \tilde{\pi} \cdot e = 1
$$

where $P^{(1)}$ is the one-step transition matrix and $e = (1, 1, \ldots, 1)^T$ is a column vector of ones.

Page 20:

Solving the system of equations

Writing $\tilde{\pi} = \tilde{\pi} \cdot P^{(1)}$ component by component, together with the normalization:

$$
\begin{aligned}
\pi_1 &= \pi_1 p_{11} + \pi_2 p_{21} + \cdots + \pi_m p_{m1} \\
\pi_2 &= \pi_1 p_{12} + \pi_2 p_{22} + \cdots + \pi_m p_{m2} \\
&\ \ \vdots \\
\pi_m &= \pi_1 p_{1m} + \pi_2 p_{2m} + \cdots + \pi_m p_{mm} \\
1 &= \pi_1 + \pi_2 + \cdots + \pi_m
\end{aligned}
$$

So we have m + 1 equations and m unknowns. The balance equations are linearly dependent, so you get rid of one of them while keeping the normalizing equation.

Page 21:

The long term probabilities: solution

Application to the random walk: try to find the long-term probabilities.

Dropping one balance equation and keeping the normalization yields a square linear system A·X = b, so X = A⁻¹·b:

$$
\begin{pmatrix}
1 - p_{11} & -p_{21} & \cdots & -p_{m1} \\
-p_{12} & 1 - p_{22} & \cdots & -p_{m2} \\
\vdots & \vdots & \ddots & \vdots \\
1 & 1 & \cdots & 1
\end{pmatrix}
\begin{pmatrix} \pi_1 \\ \pi_2 \\ \vdots \\ \pi_m \end{pmatrix}
=
\begin{pmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}
$$
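For the random walk this system can be solved directly; a sketch (replacing the last balance equation with the normalization, as above):

    p, q = 0.6, 0.4
    P = one_step_matrix(p)
    m = P.shape[0]
    A = np.eye(m) - P.T              # rows encode pi_j - sum_i pi_i p_ij = 0
    A[-1, :] = 1.0                   # replace one equation by sum_i pi_i = 1
    b = np.zeros(m)
    b[-1] = 1.0
    pi = np.linalg.solve(A, b)
    print(pi)                                    # stationary probabilities
    assert np.allclose(pi[1:] / pi[:-1], p / q)  # geometric: ratio p/q

For this birth-death chain, detailed balance makes the solution geometric in p/q, which the final assertion checks.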

Page 22:

Markov process

Discrete state space
Continuous time structure

Example: M/M/1 queue

Xt = number of customers in the queue at time t, with Xt ∈ {0, 1, …}

pij(s, t) = P[Xt = j | Xs = i]; for a time-homogeneous process this depends only on the elapsed time ζ = t - s => pij(ζ) = P[Xt+ζ = j | Xt = i]

Page 23:

Rate matrix

Stationary probability vector X = (X0, X1, X2, …)

Rate matrix of the M/M/1 queue (arrival rate λ, service rate μ):

$$
Q = \begin{pmatrix}
-\lambda & \lambda & 0 & 0 & \cdots \\
\mu & -(\lambda+\mu) & \lambda & 0 & \cdots \\
0 & \mu & -(\lambda+\mu) & \lambda & \cdots \\
\vdots & \ddots & \ddots & \ddots & \ddots
\end{pmatrix}
$$

Solution:

$$
X \cdot Q = 0, \qquad \sum_i X_i = 1
$$
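A sketch that solves X·Q = 0 on a truncated state space and compares with the known M/M/1 result Xi = (1 - ρ)ρ^i; λ, μ, and the truncation size N are illustrative choices:

    import numpy as np

    lam, mu, N = 1.0, 2.0, 60           # arrival rate, service rate, truncation
    Q = np.zeros((N, N))
    for i in range(N):
        if i > 0:
            Q[i, i - 1] = mu            # departure: i -> i - 1
        if i < N - 1:
            Q[i, i + 1] = lam           # arrival:   i -> i + 1
        Q[i, i] = -Q[i].sum()           # diagonal: each row of Q sums to 0
    # solve X Q = 0 with sum(X) = 1: transpose and replace one equation
    A = Q.T.copy()
    A[-1, :] = 1.0
    b = np.zeros(N)
    b[-1] = 1.0
    X = np.linalg.solve(A, b)
    rho = lam / mu
    print(X[:5])                              # numerical stationary probabilities
    print((1 - rho) * rho ** np.arange(5))    # (1 - rho) rho^i, matches closely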