Modeling and Simulation Markov chain 1 Arwa Ibrahim Ahmed Princess Nora University


TRANSCRIPT

Page 1: Modeling and Simulation Markov chain

Modeling and Simulation

Markov chain

1

Arwa Ibrahim Ahmed

Princess Nora University

Page 2: Modeling and Simulation Markov chain

2

Markov chain

A Markov chain, named after Andrey Markov, is a mathematical system that undergoes transitions from one state to another among a finite or countable number of possible states. It is a random process usually characterized as memoryless: the next state depends only on the current state and not on the sequence of events that preceded it. This specific kind of "memorylessness" is called the Markov property.


Page 3: Modeling and Simulation Markov chain

3

Markov chain

Formally, a Markov chain is a random process with the Markov property. Often, the term "Markov chain" is used to mean a Markov process that has a discrete (finite or countable) state space. Usually a Markov chain is defined for a discrete set of times (i.e., a discrete-time Markov chain), although some authors use the same terminology where "time" can take continuous values.

A discrete-time random process involves a system which is in a certain state at each step, with the state changing randomly between steps. The steps are often thought of as moments in time, but they can equally well refer to physical distance or any other discrete measurement; formally, the steps are the integers or natural numbers, and the random process is a mapping of these to states. The Markov property states that the conditional probability distribution for the system at the next step (and in fact at all future steps) depends only on the current state of the system, and not additionally on the state of the system at previous steps.


Page 4: Modeling and Simulation Markov chain

4

Markov chain

Since the system changes randomly, it is generally impossible to predict with certainty the state of a Markov chain at a given point in the future. However, the statistical properties of the system's future can be predicted. In many applications, it is these statistical properties that are important.

The changes of state of the system are called transitions, and the probabilities associated with various state changes are called transition probabilities. The set of all states and transition probabilities completely characterizes a Markov chain. By convention, we assume all possible states and transitions have been included in the definition of the process, so there is always a next state and the process goes on forever.


Page 5: Modeling and Simulation Markov chain

5

Applications of Markov chains

Information sciences: Markov chains are used throughout information processing.

Claude Shannon's famous 1948 paper A mathematical theory of communication, which in a single step created the field of information theory, opens by introducing the concept of entropy through Markov modeling of the English language.

Queuing theory: Markov chains are the basis for the analytical treatment of queues (queuing theory). Agner Krarup Erlang initiated the subject in 1917. This makes them critical for optimizing the performance of telecommunications networks, where messages must often compete for limited resources.


Page 6: Modeling and Simulation Markov chain

6

Applications of Markov chains

Internet applications: The PageRank of a webpage, as used by Google, is defined by a Markov chain. It is the probability of being at page j in the stationary distribution of a Markov chain on all (known) webpages.
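As a sketch of this idea, the stationary distribution can be approximated by power iteration; the three-page link matrix and the 0.85 damping factor below are illustrative assumptions, not Google's actual data:

```python
# Power iteration for the stationary distribution of a tiny
# hypothetical web graph (illustrative values, not real data).
damping = 0.85          # commonly cited PageRank damping factor
# Row-stochastic matrix: entry [i][j] is the probability of following
# a link from page i to page j.
links = [
    [0.0, 0.5, 0.5],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
]
n = len(links)
rank = [1.0 / n] * n    # start from the uniform distribution
for _ in range(100):
    rank = [
        (1 - damping) / n + damping * sum(rank[i] * links[i][j] for i in range(n))
        for j in range(n)
    ]
print([round(r, 4) for r in rank])  # the ranks sum to (approximately) 1
```

Each iteration mixes a uniform "random jump" with a step of the link-following chain, so the vector stays a probability distribution and converges to the stationary one.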

Economics and finance: Markov chains are used in finance and economics to model a variety of phenomena, including asset prices and market crashes. The first financial model to use a Markov chain was from Prasad et al. in 1974.


Page 7: Modeling and Simulation Markov chain

7

Applications of Markov chains


Social sciences: Markov chains are generally used in describing path-dependent arguments, where current structural configurations condition future outcomes. An example is the reformulation of the idea, originally due to Karl Marx's Das Kapital, tying economic development to the rise of capitalism. In current research, it is common to use a Markov chain to model how, once a country reaches a specific level of economic development, its subsequent outcomes depend on its current configuration.

Games: Markov chains can be used to model many games of chance. The children's games Snakes and Ladders and "Hi Ho! Cherry-O", for example, are represented exactly by Markov chains.

Page 8: Modeling and Simulation Markov chain

8

DISCRETE-TIME FINITE-STATE MARKOV CHAINS

Discrete-time Markov chains: Let X be a discrete random variable, indexed by time t as X(t), that evolves in time as follows. X(t) ∈ X for all t = 0, 1, 2, . . . . State transitions can occur only at the discrete times t = 0, 1, 2, . . . , and at these times the random variable X(t) shifts from its current state x ∈ X to another state, say x′ ∈ X, with fixed probability

p(x, x′) = Pr(X(t + 1) = x′ | X(t) = x) ≥ 0.
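As a minimal sketch of this transition rule, the next state can be sampled from the current state's probability row; the two-state chain and its probabilities below are made up for illustration:

```python
import random

# Hypothetical two-state chain: p[x][x'] = Pr(X(t+1) = x' | X(t) = x).
p = {
    0: [0.9, 0.1],
    1: [0.5, 0.5],
}

def step(x):
    """Sample X(t+1) given X(t) = x using the fixed probabilities p(x, x')."""
    return random.choices([0, 1], weights=p[x])[0]

# Simulate a trajectory X(0), X(1), ..., X(10) starting from X(0) = 0.
random.seed(0)
trajectory = [0]
for _ in range(10):
    trajectory.append(step(trajectory[-1]))
print(trajectory)
```

Note that `step` looks only at the current state, never at the earlier history, which is exactly the Markov property.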


Page 9: Modeling and Simulation Markov chain

9

DISCRETE-TIME FINITE-STATE MARKOV CHAINS:

If |X| < ∞, the stochastic process defined by X(t) is called a discrete-time, finite-state Markov chain.

Without loss of generality, throughout this section we assume that the finite state space is X = {0, 1, 2, . . . , k}, where k is a finite, but perhaps quite large, integer.


Page 10: Modeling and Simulation Markov chain

10

DISCRETE-TIME FINITE-STATE MARKOV CHAINS:

A discrete-time, finite-state Markov chain is completely characterized by the initial state at t = 0, X(0), and the function p(x, x′) defined for all (x, x′) ∈ X × X. When the stochastic process leaves state x, the transition must be either to state x′ = 0 with probability p(x, 0), or to state x′ = 1 with probability p(x, 1), . . . , or to state x′ = k with probability p(x, k), and the sum of these probabilities must be 1. That is,

∑_{x′=0}^{k} p(x, x′) = 1,   x = 0, 1, . . . , k.

Because p(x, x′) is independent of t for all (x, x′), the Markov chain is said to be homogeneous or stationary.

Page 11: Modeling and Simulation Markov chain

11

DISCRETE-TIME FINITE-STATE MARKOV CHAINS:


The state transition probability p(x, x′) represents the probability of a transition from state x to state x′. The corresponding (k + 1) × (k + 1) matrix

P =
| p(0, 0)  p(0, 1)  . . .  p(0, k) |
| p(1, 0)  p(1, 1)  . . .  p(1, k) |
|    .        .     . . .     .    |
| p(k, 0)  p(k, 1)  . . .  p(k, k) |

with elements p(x, x′) is called the state transition matrix.

Page 12: Modeling and Simulation Markov chain

12

DISCRETE-TIME FINITE-STATE MARKOV CHAINS:

The elements of the state transition matrix P are non-negative, and the elements of each row sum to 1.0.

(A matrix with these properties is said to be a stochastic matrix.)
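These two properties can be checked directly in code; a minimal sketch, where the 2 × 2 matrices are made-up examples:

```python
def is_stochastic(matrix, tol=1e-9):
    """Return True if every element is non-negative and each row sums to 1."""
    return all(
        all(x >= 0 for x in row) and abs(sum(row) - 1.0) <= tol
        for row in matrix
    )

# A valid stochastic matrix and an invalid one (illustrative values).
print(is_stochastic([[0.3, 0.7], [1.0, 0.0]]))  # → True
print(is_stochastic([[0.3, 0.6], [1.0, 0.0]]))  # → False (first row sums to 0.9)
```

The tolerance parameter absorbs floating-point rounding when the rows are built from decimal fractions.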


Page 13: Modeling and Simulation Markov chain

13

EXAMPLE: If we know the probability that the child of a lower-class parent becomes middle-class or upper-class, and we know similar information for the child of a middle-class or upper-class parent, what is the probability that the grandchild of a lower-class parent is middle- or upper-class?


Page 14: Modeling and Simulation Markov chain

14

EXAMPLE: In sociology, it is convenient to classify people by income as lower-class, middle-class, and upper-class. Sociologists have found that the strongest determinant of the income class of an individual is the income class of the individual's parents.

For example, if an individual in the lower-income class is said to be in state 1, an individual in the middle-income class is in state 2, and an individual in the upper-income class is in state 3, then the following probabilities of change in income class from one generation to the next might apply.


Page 15: Modeling and Simulation Markov chain

15

EXAMPLE: Table 1 shows that if an individual is in state 1 (the lower-income class), then there is a probability of 0.65 that any offspring will be in the lower-income class, a probability of 0.28 that the offspring will be in the middle-income class, and a probability of 0.07 that the offspring will be in the upper-income class.


Page 16: Modeling and Simulation Markov chain

16

EXAMPLE: Table 1

state    1     2     3
  1    0.65  0.28  0.07
  2    0.15  0.67  0.18
  3    0.12  0.36  0.52


The symbol pij will be used for the probability of a transition from state i to state j in one generation. For example, p23 represents the probability that a person in state 2 will have offspring in state 3; from the table above, p23 = 0.18. Also from the table, p31 = 0.12, p22 = 0.67, and so on.

Page 17: Modeling and Simulation Markov chain

17

EXAMPLE: The information from the table can be written in other forms. Figure 1 is a transition diagram that shows the three states and the probabilities of going from one state to another.


Page 18: Modeling and Simulation Markov chain

18

EXAMPLE:

Figure 1. Transition diagram for states 1, 2, and 3, with arrows labeled by the transition probabilities: from state 1, the probabilities are 0.65 (stay in 1), 0.28 (to 2), and 0.07 (to 3); from state 2, they are 0.15 (to 1), 0.67 (stay in 2), and 0.18 (to 3); from state 3, they are 0.12 (to 1), 0.36 (to 2), and 0.52 (stay in 3).

Page 19: Modeling and Simulation Markov chain

19

EXAMPLE: In a transition matrix, the states are indicated at the side and the top. If P represents the transition matrix for the table above, then

P =
| 0.65  0.28  0.07 |
| 0.15  0.67  0.18 |
| 0.12  0.36  0.52 |


Page 20: Modeling and Simulation Markov chain

20

EXAMPLE: A transition matrix has several features:

1. It is square, since all possible states must be used both as rows and as columns.

2. All entries are between 0 and 1, inclusive; this is because all entries represent probabilities.

3. The sum of the entries in any row must be 1.


Page 21: Modeling and Simulation Markov chain

21

EXAMPLE: The transition matrix P shows the probability of a change in income class from one generation to the next. Now let us investigate the probability of a change in income class over two generations. For example, if a parent is in state 3 (the upper-income class), what is the probability that a grandchild will be in state 2?

To find out, start with a tree diagram, as shown in Figure 2. The various probabilities come from the transition matrix P. The arrows point to the outcomes "grandchild in state 2," and the probability that the grandchild is in state 2 is given by the sum of the probabilities indicated with arrows, or 0.0336 + 0.2412 + 0.1872 = 0.4620.
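The same two-generation probability can be obtained by squaring the transition matrix: the row-3, column-2 entry of P² is exactly the sum computed from the tree diagram. A minimal sketch:

```python
# Transition matrix from the example (rows/columns are states 1, 2, 3).
P = [
    [0.65, 0.28, 0.07],
    [0.15, 0.67, 0.18],
    [0.12, 0.36, 0.52],
]

def matmul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)
    ]

P2 = matmul(P, P)
# Probability that the grandchild of a state-3 parent is in state 2:
print(round(P2[2][1], 4))  # → 0.462
```

More generally, the (i, j) entry of Pⁿ gives the probability of moving from state i to state j in n generations.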


Page 22: Modeling and Simulation Markov chain

22

EXAMPLE:

Figure 2. Tree diagram for two generations starting from state 3. The first-generation branches carry the probabilities 0.12, 0.36, and 0.52 (to states 1, 2, and 3), and multiplying along each two-generation path gives:

(0.12)(0.65) = 0.0780
(0.12)(0.28) = 0.0336
(0.12)(0.07) = 0.0084
(0.36)(0.15) = 0.0540
(0.36)(0.67) = 0.2412
(0.36)(0.18) = 0.0648
(0.52)(0.12) = 0.0624
(0.52)(0.36) = 0.1872
(0.52)(0.52) = 0.2704