
IERG 5300 Random Process

Suggested Solution of Assignment 1

1. Questions in Chapter 4 of the textbook (8th edition)

(1) Define 4 states in the Markov chain.

State 0: the last two trials were both successes;
State 1: the last two trials were a success followed by a failure;
State 2: the last two trials were a failure followed by a success;
State 3: the last two trials were both failures.

Then the transition matrix is

$$
P = \begin{pmatrix}
.8 & .2 & 0 & 0 \\
0 & 0 & .5 & .5 \\
.5 & .5 & 0 & 0 \\
0 & 0 & .5 & .5
\end{pmatrix},
$$

where the rows and columns are indexed by the states 0, 1, 2, 3.
Solving the linear system

$$(\pi_0, \pi_1, \pi_2, \pi_3) = (\pi_0, \pi_1, \pi_2, \pi_3)\,P, \qquad \pi_0 + \pi_1 + \pi_2 + \pi_3 = 1,$$

we get (π0, π1, π2, π3) = (5/11, 2/11, 2/11, 2/11). In the long run, the proportion of successful trials is .8π0 + .5(π1 + π2 + π3) = 7/11.

Remarks: In this question we are calculating the long-run proportion of successes. If we want to calculate the limiting probabilities instead, it is necessary to check the ergodicity of the finite Markov chain.
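As a quick numerical check (not part of the original solution), the short NumPy sketch below solves the stationary equations for this chain and recovers both the stationary distribution and the long-run proportion of successes:

```python
import numpy as np

# Transition matrix over states 0 (SS), 1 (SF), 2 (FS), 3 (FF)
P = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5],
              [0.5, 0.5, 0.0, 0.0],
              [0.0, 0.0, 0.5, 0.5]])

# Solve pi P = pi together with sum(pi) = 1 as a least-squares problem:
# stack (P^T - I) pi = 0 with the normalization row of ones.
A = np.vstack([P.T - np.eye(4), np.ones(4)])
b = np.array([0.0, 0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                                  # ~ [5, 2, 2, 2] / 11
print(0.8 * pi[0] + 0.5 * pi[1:].sum())    # ~ 0.6364 = 7/11
```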

(2) P1 : One class: {0, 1, 2}, recurrent.

P2 : One class: {0, 1, 2, 3}, recurrent.

P3 : Three classes: {0, 2}, recurrent; {1}, transient; {3, 4}, recurrent.

P4 : Four classes: {0, 1}, recurrent; {2}, recurrent; {3}, transient; {4}, transient.

(3) Let A be the event that all states have been visited by time T. Conditioning on the initial transition,

$$P(A) = P(A \mid X_0 = 0,\, X_1 = 1)\,p + P(A \mid X_0 = 0,\, X_1 = -1)\,q.$$



$$
P(A) =
\begin{cases}
\dfrac{1 - q/p}{1 - (q/p)^n}\,p + \dfrac{1 - p/q}{1 - (p/q)^n}\,q, & \text{if } p \neq q,\\[3mm]
\dfrac{1}{n}, & \text{if } p = q = \tfrac{1}{2}.
\end{cases}
$$

The two conditional probabilities above follow by noting that each equals the probability, in the gambler's ruin problem, that a gambler who starts with 1 reaches n before going broke, with win probability p for the first term and q for the second.
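For reference (this display is not in the original write-up), the gambler's ruin probability being invoked is

$$
P\{\text{reach } n \text{ before } 0 \mid \text{start at } 1,\ \text{win prob. } p\} =
\begin{cases}
\dfrac{1 - q/p}{1 - (q/p)^n}, & p \neq q,\\[2mm]
\dfrac{1}{n}, & p = q = \tfrac{1}{2},
\end{cases}
$$

and the second conditional probability is the same expression with the roles of p and q interchanged.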

(4) (a)

$$P_{i,i+1} = \left(\frac{m-i}{m}\right)^2, \qquad P_{i,i-1} = \left(\frac{i}{m}\right)^2, \qquad P_{i,i} = \frac{2i(m-i)}{m^2}.$$

(b) Since in the limit the set of m balls in urn 1 is equally likely to be any subset of m balls, it is intuitively clear that

$$\pi_i = \frac{\binom{m}{i}\binom{m}{m-i}}{\binom{2m}{m}} = \frac{\binom{m}{i}^2}{\binom{2m}{m}}.$$

(c) We must verify that, with πi given in (b), πi Pi,i+1 = πi+1 Pi+1,i. It is easy to show that

$$\pi_i P_{i,i+1} = \frac{\binom{m}{i}^2}{\binom{2m}{m}} \left(\frac{m-i}{m}\right)^2
= \frac{(m!)^2\,(m-i)^2}{\binom{2m}{m}\,(i!)^2\,((m-i)!)^2\,m^2}
= \frac{(m!)^2}{\binom{2m}{m}\,m^2\,(i!)^2\,((m-i-1)!)^2},$$

$$\pi_{i+1} P_{i+1,i} = \frac{\binom{m}{i+1}^2}{\binom{2m}{m}} \left(\frac{i+1}{m}\right)^2
= \frac{(m!)^2\,(i+1)^2}{\binom{2m}{m}\,((i+1)!)^2\,((m-i-1)!)^2\,m^2}
= \frac{(m!)^2}{\binom{2m}{m}\,m^2\,(i!)^2\,((m-i-1)!)^2},$$

so the two sides agree. With ∑i πi = 1, the Markov chain is time reversible.
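The algebra can also be checked numerically; the sketch below (a check added here, not part of the original solution, using a small illustrative value of m) builds the transition matrix of part (a), the candidate stationary distribution of part (b), and verifies stationarity and the detailed-balance condition of part (c):

```python
import numpy as np
from math import comb

m = 5                       # urn size; any small value works for the check
n = m + 1                   # states 0, 1, ..., m

# Transition probabilities from part (a)
P = np.zeros((n, n))
for i in range(n):
    if i < m:
        P[i, i + 1] = ((m - i) / m) ** 2
    if i > 0:
        P[i, i - 1] = (i / m) ** 2
    P[i, i] = 2 * i * (m - i) / m ** 2

# Candidate stationary distribution from part (b)
pi = np.array([comb(m, i) ** 2 / comb(2 * m, m) for i in range(n)])

print(np.allclose(pi @ P, pi))                   # True: pi is stationary
print(all(np.isclose(pi[i] * P[i, i + 1],        # True: detailed balance holds,
                     pi[i + 1] * P[i + 1, i])    # so the chain is time reversible
          for i in range(m)))
```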

(5) The rate at which transitions from i to j to k occur is πiPijPjk, whereas the rate in the reverse order is πkPkjPji. So we must show πiPijPjk = πkPkjPji. Now,

πiPijPjk = πjPjiPjk   // time reversible Markov chain: πiPij = πjPji (result on page 60 of lecture notes, part I)
         = πjPjkPji
         = πkPkjPji   // time reversible Markov chain: πjPjk = πkPkj
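As an illustration (a hypothetical example, not from the original solution), the identity can be verified numerically for any time-reversible chain, e.g. a random walk on a weighted undirected graph, for which πi is proportional to the total weight at node i:

```python
import numpy as np

rng = np.random.default_rng(0)

# A time-reversible chain: random walk on a weighted undirected graph.
# With symmetric weights W, P[i, j] = W[i, j] / W[i].sum() and pi_i ∝ W[i].sum().
W = rng.random((5, 5))
W = (W + W.T) / 2                          # symmetrize the weights
P = W / W.sum(axis=1, keepdims=True)
pi = W.sum(axis=1) / W.sum()

# Check pi_i P_ij P_jk == pi_k P_kj P_ji for every triple (i, j, k).
print(all(np.isclose(pi[i] * P[i, j] * P[j, k],
                     pi[k] * P[k, j] * P[j, i])
          for i in range(5) for j in range(5) for k in range(5)))   # True
```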


2. Questions in the lecture notes (Chapter 1)

(1) There are 9 states. For every state S, define eS = E[waiting time until HTHT or THTT appears | current state is S]. In particular, eHTHT = eTHTT = 0. We have the following linear system:
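The system itself did not survive the PDF extraction; the following reconstruction (each state being the longest suffix of the flips so far that is still a prefix of HTHT or THTT, with each flip H or T with probability 1/2) is consistent with the answer quoted below:

e_null = 1 + (e_H + e_T)/2
e_H    = 1 + (e_H + e_HT)/2
e_T    = 1 + (e_TH + e_T)/2
e_HT   = 1 + (e_HTH + e_T)/2
e_TH   = 1 + (e_H + e_THT)/2
e_HTH  = 1 + (e_H + e_HTHT)/2 = 1 + e_H/2
e_THT  = 1 + (e_HTH + e_THTT)/2 = 1 + e_HTH/2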

Solving it gives e_null = 90/7 ≈ 12.86.

(2) Let state i be a recurrent state and j be a transient state.

Assume that j is accessible from i, i.e. P^m_ij > 0 for some m > 0. Consider the two possible cases:

1) i is also accessible from j. Then i communicates with j, so they are in the same class. Because all states in the same class are either all recurrent or all transient, this contradicts the assumption.

2) i is not accessible from j, i.e. P^n_ji = 0 for all n > 0. Since P^m_ij > 0 for some m > 0, there is a positive probability that the Markov chain enters state j from state i after m transitions, in which case it will never return to state i no matter how many further transitions it takes. Therefore state i must be transient in this case, which again contradicts the assumption.

Since both cases lead to a contradiction, the assumption does not hold; therefore a transient state cannot be accessible from a recurrent state.
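To make the conclusion concrete, here is a small hypothetical chain (its matrix is made up for illustration and only mimics the class structure of P3 in question 1(2)) in which the transient state is indeed never reached from the recurrent states:

```python
import numpy as np

# Hypothetical 5-state chain with classes {0, 2} recurrent, {1} transient,
# {3, 4} recurrent; the matrix below is illustrative, not the textbook's P3.
P = np.array([[0.0, 0.0, 1.0, 0.0, 0.0],
              [0.3, 0.4, 0.1, 0.2, 0.0],
              [1.0, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.0, 0.0, 0.0, 0.5, 0.5]])

# From the recurrent states 0 and 2, the n-step probability of being in the
# transient state 1 is 0 for every n: the transient state is not accessible.
Pn = np.linalg.matrix_power(P, 20)
print(Pn[0, 1], Pn[2, 1])   # 0.0 0.0
```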

3. Other Questions

Denote α = P[will ever visit state 1 | X0 = 0]. Conditioning on the first step,

α = P[will ever visit state 1 | X0 = 0, X1 = 1] P01 + P[will ever visit state 1 | X0 = 0, X1 = −1] P0,−1
  = 1 · P01 + P[will ever visit state 1 | X1 = −1] P0,−1


  = (1/2)(1 + α²)

// Here we use: P[will ever visit state 1 | X0 = 0] = P[will ever visit state 0 | X0 = −1] (spatial homogeneity),
// P[will ever visit state 1 | X0 = −1] = P[will ever visit state 0 | X0 = −1] · P[will ever visit state 1 | X0 = 0] = α², and
// P01 = P0,−1 = 1/2.

Solving the equation α = (1 + α²)/2, i.e. (α − 1)² = 0, we get α = 1.

Note:
P{Xt = i for some t > 0 | X0 = i+2}
= ∑_{m=1}^∞ P{Xt = i for some t > 0 | X0 = i+2, 1st occurrence of state i+1 at time m} · P{1st occurrence of state i+1 at time m | X0 = i+2}
= ∑_{m=1}^∞ α · P{1st occurrence of state i+1 at time m | X0 = i+2}
= α ∑_{m=1}^∞ P{1st occurrence of state i+1 at time m | X0 = i+2}
= α · α = α²
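As a sanity check (a simulation sketch added here, assuming the symmetric walk with P01 = P0,−1 = 1/2), the fraction of walks that reach state 1 within a finite step budget creeps toward 1 as the budget grows, consistent with α = 1:

```python
import random

def hits_one_within(max_steps: int) -> bool:
    """Run a symmetric random walk from 0; report whether it reaches +1
    within max_steps steps (a finite-horizon proxy for 'ever visits 1')."""
    x = 0
    for _ in range(max_steps):
        x += 1 if random.random() < 0.5 else -1
        if x == 1:
            return True
    return False

random.seed(0)
trials = 2000
for max_steps in (10, 100, 1000, 10000):
    frac = sum(hits_one_within(max_steps) for _ in range(trials)) / trials
    print(max_steps, frac)   # fraction approaches 1 slowly as the horizon grows
```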

To solve the equation, we get α=1 Note: P{Xt = i for some t > 0 | X1 = i+2} = ∑m=1 P{Xt = i for some t > 0 | X0 = i+2, 1st occurrence of state i+1 at time m} P{1st occurrence of state i+1 at time m | X0 = i+2} = ∑m=1 α P{1st occurrence of state i+1 at time m | X0 = i+2} = α ∑m=1 P{1st occurrence of state i+1 at time m | X0 = i+2} = α2