PROBABILISTIC REASONING OVER TIME USING HIDDEN MARKOV MODELS Minmin Chen


Page 1: Probabilistic Reasoning Over Time Using Hidden Markov Models

PROBABILISTIC REASONING OVER TIME USING HIDDEN MARKOV MODELS

Minmin Chen

Page 2: Probabilistic Reasoning Over Time Using Hidden Markov Models

CONTENTS

Sections 15.1–15.3 (Russell & Norvig, Artificial Intelligence: A Modern Approach, Chapter 15)

Page 3: Probabilistic Reasoning Over Time Using Hidden Markov Models

TIME AND UNCERTAINTY

Agent: a security guard at a secret underground installation

Observation: does the director arrive carrying an umbrella?

State: is it raining or not?

The sensor is noisy and the world is not fully observable, so the guard must reason about the hidden state as it evolves over time.

Page 4: Probabilistic Reasoning Over Time Using Hidden Markov Models

TIME AND UNCERTAINTY

Observation: measured heart rate, electrocardiogram (ECG), patient's activity

State: atrial fibrillation? tachycardia? bradycardia?

Again the sensors are noisy and the state is not fully observable as it evolves over time.

Page 5: Probabilistic Reasoning Over Time Using Hidden Markov Models

STATES AND OBSERVATIONS

Unobservable state variables: Xt
Observable evidence variables: Et

Example 1: for each day t, Xt = {Rt} (rain) and Et = {Ut} (umbrella): R1, R2, R3, …; U1, U2, U3, …

Example 2: for each recording, Et = {Measured_heart_rate_t, ECG_t, Activity_t} and Xt = {AF_t, Tachycardia_t, Bradycardia_t}

Page 6: Probabilistic Reasoning Over Time Using Hidden Markov Models

ASSUMPTION1: STATIONARY PROCESS

The world changes, but the laws governing the change do not: the transition model P(Xt | Xt-1) and the sensor model P(Et | Xt) remain the same for different t.

Page 7: Probabilistic Reasoning Over Time Using Hidden Markov Models

ASSUMPTION 2: MARKOV PROCESS

The current state depends only on a finite history of previous states.

First-order Markov process: P(Xt | X0:t-1) = P(Xt | Xt-1)

A first-order model is specified by its states, a transition probability matrix, and an initial distribution.

Page 8: Probabilistic Reasoning Over Time Using Hidden Markov Models

ASSUMPTION 3: RESTRICTION TO THE PARENTS OF EVIDENCE

The evidence variable at time t depends only on the current state:

P(Et | X0:t, E1:t-1) = P(Et | Xt)

Page 9: Probabilistic Reasoning Over Time Using Hidden Markov Models

HIDDEN MARKOV MODEL

Hidden state sequence: … Rt-1, Rt, Rt+1 …
Evidence sequence: … Ut-1, Ut, Ut+1 …

Transition model:

Rt-1    P(Rt | Rt-1)
true    0.7
false   0.3

Sensor model:

Rt      P(Ut | Rt)
true    0.9
false   0.2
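The two tables can be encoded directly as arrays; a minimal sketch (the names T, O, and prior are my own, not from the slides):

```python
import numpy as np

# Umbrella HMM from the slides. States are ordered [rain=true, rain=false].
T = np.array([[0.7, 0.3],    # P(R_t | R_{t-1} = true)
              [0.3, 0.7]])   # P(R_t | R_{t-1} = false)

# Sensor model as diagonal observation matrices, one per evidence value:
# O[u] weights each state by P(U_t = u | R_t).
O = {True:  np.diag([0.9, 0.2]),
     False: np.diag([0.1, 0.8])}

prior = np.array([0.5, 0.5])  # initial distribution P(R_0)

# Sanity check: each row of the transition matrix is a distribution.
assert np.allclose(T.sum(axis=1), 1.0)
```

Encoding the sensor model as diagonal matrices makes the filtering and smoothing recursions below plain matrix products.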

Page 10: Probabilistic Reasoning Over Time Using Hidden Markov Models

JOINT DISTRIBUTION OF HMMS

Applying Bayes' rule, the chain rule, and the conditional-independence assumptions above, the joint distribution factors as

P(X0:t, E1:t) = P(X0) ∏_{i=1}^{t} P(Xi | Xi-1) P(Ei | Xi)

Page 11: Probabilistic Reasoning Over Time Using Hidden Markov Models

EXAMPLE

Day:       1     2     3      4     5
Umbrella:  true  true  false  true  true
Rain:      true  true  false  true  true

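The joint probability of this five-day sequence follows directly from the factorization on the previous page. A sketch (the function name joint and the dictionaries are my own):

```python
# Joint probability of one (rain, umbrella) sequence under the umbrella HMM:
# P(r_1:5, u_1:5) = P(r_1) P(u_1|r_1) * prod_i P(r_i|r_{i-1}) P(u_i|r_i)

P_trans = {(True, True): 0.7, (True, False): 0.3,   # P(R_t = col | R_{t-1} = row)
           (False, True): 0.3, (False, False): 0.7}
P_sensor = {(True, True): 0.9, (True, False): 0.1,  # P(U_t = col | R_t = row)
            (False, True): 0.2, (False, False): 0.8}

def joint(rain, umbrella, prior_true=0.5):
    # With a uniform prior, P(R_1) after one transition is still 0.5.
    p = (prior_true if rain[0] else 1 - prior_true) * P_sensor[(rain[0], umbrella[0])]
    for prev, cur, u in zip(rain, rain[1:], umbrella[1:]):
        p *= P_trans[(prev, cur)] * P_sensor[(cur, u)]
    return p

rain     = [True, True, False, True, True]
umbrella = [True, True, False, True, True]
print(round(joint(rain, umbrella), 5))  # 0.01157
```

Even this short sequence illustrates why enumeration is hopeless in general: there are 2^5 rain sequences here, and the count doubles with every added day.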

Page 12: Probabilistic Reasoning Over Time Using Hidden Markov Models

EXAMPLE

Page 13: Probabilistic Reasoning Over Time Using Hidden Markov Models

HOW TRUE ARE THESE ASSUMPTIONS?

It depends on the problem domain. Two ways to overcome violations of the assumptions:

Increase the order of the Markov process model
Increase the set of state variables

Page 14: Probabilistic Reasoning Over Time Using Hidden Markov Models

INFERENCE IN TEMPORAL MODELS

Filtering: posterior distribution over the current state, given all evidence to date: P(Xt | e1:t)

Prediction: posterior distribution over a future state, given all evidence to date: P(Xt+k | e1:t) for k > 0

Smoothing: posterior distribution over a past state, given all evidence to date: P(Xk | e1:t) for 0 ≤ k < t

Most likely explanation: the sequence of states most likely to have generated the observations: argmax_{x1:t} P(x1:t | e1:t)

Page 15: Probabilistic Reasoning Over Time Using Hidden Markov Models

FILTERING & PREDICTION

Prediction combines the transition model with the posterior distribution at time t:

P(Xt+1 | e1:t) = Σ_{xt} P(Xt+1 | xt) P(xt | e1:t)

Filtering updates the prediction with the sensor model:

P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) P(Xt+1 | e1:t)

Page 16: Probabilistic Reasoning Over Time Using Hidden Markov Models

PROOF

Forward algorithm:

P(Xt+1 | e1:t+1) = P(Xt+1 | e1:t, et+1)
  = α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t)                          (Bayes rule)
  = α P(et+1 | Xt+1) P(Xt+1 | e1:t)                                (conditional independence)
  = α P(et+1 | Xt+1) Σ_{xt} P(Xt+1, xt | e1:t)                     (marginal probability)
  = α P(et+1 | Xt+1) Σ_{xt} P(Xt+1 | xt, e1:t) P(xt | e1:t)        (chain rule)
  = α P(et+1 | Xt+1) Σ_{xt} P(Xt+1 | xt) P(xt | e1:t)              (conditional independence)

where α = 1 / P(et+1 | e1:t) is a normalizing constant.
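The recursion just derived fits in a few lines. A sketch, not the book's code (the names T, O, and forward are my own), reproducing the slides' umbrella numbers:

```python
# Forward (filtering) recursion for the umbrella example:
# f_{1:t+1} = alpha * O[e_{t+1}] * (T^T f_{1:t})

import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # transition model
O = {True:  np.array([0.9, 0.2]),        # P(u=true  | R)
     False: np.array([0.1, 0.8])}        # P(u=false | R)

def forward(evidence, prior=np.array([0.5, 0.5])):
    f = prior
    for e in evidence:
        f = O[e] * (T.T @ f)   # predict with the transition model, weight by the sensor model
        f = f / f.sum()        # normalize
    return f

print(forward([True]))        # ≈ [0.818, 0.182]
print(forward([True, True]))  # ≈ [0.883, 0.117]
```

Each step costs only a matrix–vector product, so filtering runs in constant time and space per observation regardless of the sequence length.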

Page 17: Probabilistic Reasoning Over Time Using Hidden Markov Models

INTERPRETATION & EXAMPLE

Filtering the first observation, U1 = true, starting from the prior <0.5, 0.5>:

Predict: P(R1) = <0.5·0.7 + 0.5·0.3, 0.5·0.3 + 0.5·0.7> = <0.5, 0.5>
Update with the sensor model <0.9, 0.2>: <0.9·0.5, 0.2·0.5> = <0.45, 0.10>

Page 18: Probabilistic Reasoning Over Time Using Hidden Markov Models

INTERPRETATION & EXAMPLE

Normalizing <0.45, 0.10> gives P(R1 | u1) = <0.818, 0.182>.

Filtering the second observation, U2 = true:

Predict: P(R2 | u1) = <0.818·0.7 + 0.182·0.3, 0.818·0.3 + 0.182·0.7> = <0.627, 0.373>
Update with the sensor model <0.9, 0.2>: <0.9·0.627, 0.2·0.373> = <0.565, 0.075>

Page 19: Probabilistic Reasoning Over Time Using Hidden Markov Models

INTERPRETATION & EXAMPLE

Normalizing <0.565, 0.075> gives P(R2 | u1, u2) = <0.883, 0.117>.

Page 20: Probabilistic Reasoning Over Time Using Hidden Markov Models

LIKELIHOOD OF EVIDENCE SEQUENCE

The likelihood of the evidence sequence is P(e1:t) = Σ_{xt} P(xt, e1:t).

The forward algorithm computes the unnormalized message f1:t = P(Xt, e1:t), so summing its entries yields the likelihood.

Page 21: Probabilistic Reasoning Over Time Using Hidden Markov Models

SMOOTHING

P(Xk | e1:t) = P(Xk | e1:k, ek+1:t)                      (divide the evidence)
  = α P(Xk | e1:k) P(ek+1:t | Xk, e1:k)                  (Bayes rule)
  = α P(Xk | e1:k) P(ek+1:t | Xk)                        (conditional independence)
  = α f1:k × bk+1:t

Page 22: Probabilistic Reasoning Over Time Using Hidden Markov Models

INTUITION

The backward message at time k combines the sensor model with the backward message at time k+1:

bk+1:t(xk) = Σ_{xk+1} P(ek+1 | xk+1) bk+2:t(xk+1) P(xk+1 | xk)

Page 23: Probabilistic Reasoning Over Time Using Hidden Markov Models

BACKWARD

Backward algorithm:

P(ek+1:t | Xk) = Σ_{xk+1} P(ek+1:t, xk+1 | Xk)                       (marginal probability)
  = Σ_{xk+1} P(ek+1:t | xk+1, Xk) P(xk+1 | Xk)                       (chain rule)
  = Σ_{xk+1} P(ek+1:t | xk+1) P(xk+1 | Xk)                           (conditional independence)
  = Σ_{xk+1} P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)            (conditional independence)

Page 24: Probabilistic Reasoning Over Time Using Hidden Markov Models

INTERPRETATION & EXAMPLE

Smoothing P(R1 | u1, u2) combines the forward message f1:1 = <0.818, 0.182> with the backward message b2:2, starting from b3:2 = <1, 1>:

b2:2(true)  = 0.9·1·0.7 + 0.2·1·0.3 = 0.69
b2:2(false) = 0.9·1·0.3 + 0.2·1·0.7 = 0.41

P(R1 | u1, u2) = α <0.818·0.69, 0.182·0.41> = <0.883, 0.117>
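Putting the two passes together gives the forward–backward algorithm. A sketch under the same model (the names fs, b, and forward_backward are my own), reproducing the <0.883, 0.117> result:

```python
# Forward-backward smoothing for the umbrella example:
# smoothed P(X_k | e_1:t) = alpha * f_{1:k} * b_{k+1:t}

import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])
O = {True: np.array([0.9, 0.2]), False: np.array([0.1, 0.8])}

def forward_backward(evidence, prior=np.array([0.5, 0.5])):
    n = len(evidence)
    fs = [prior]
    for e in evidence:                      # forward pass: store every message
        f = O[e] * (T.T @ fs[-1])
        fs.append(f / f.sum())
    b = np.ones(2)                          # b_{t+1:t} = <1, 1>
    smoothed = [None] * n
    for k in range(n - 1, -1, -1):          # backward pass
        s = fs[k + 1] * b
        smoothed[k] = s / s.sum()
        b = T @ (O[evidence[k]] * b)        # b_{k:t} from b_{k+1:t}
    return smoothed

for v in forward_backward([True, True]):
    print(v.round(3))                       # each step smooths to ≈ [0.883, 0.117]
```

Note that smoothing requires storing all forward messages, so its space cost grows with the sequence length, unlike pure filtering.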

Page 25: Probabilistic Reasoning Over Time Using Hidden Markov Models

FINDING THE MOST LIKELY SEQUENCE

[Diagram: candidate hidden rain sequences aligned under the observed umbrella sequence]

Page 26: Probabilistic Reasoning Over Time Using Hidden Markov Models

FINDING THE MOST LIKELY SEQUENCE

Enumeration:
Enumerate all possible state sequences, compute the joint probability of each, and pick the sequence with the maximum.
Problem: the total number of state sequences grows exponentially with the length of the sequence.

Smoothing:
Calculate the posterior distribution for each time step k; in each step k, pick the state with the maximum posterior; combine these states to form a sequence.
Problem: the most likely state at each individual step need not form the most likely sequence overall.

Page 27: Probabilistic Reasoning Over Time Using Hidden Markov Models

VITERBI ALGORITHM

Umbrella:        true   true   false  true   true

m (rain=true):   .8182  .5155  .0361  .0334  .0210
m (rain=false):  .1818  .0491  .1237  .0173  .0024

Backtracking from the larger final value (.0210) recovers the most likely sequence: true, true, false, true, true.
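The trellis above can be reproduced with the Viterbi recursion m_{1:t+1}(x) = P(e_{t+1} | x) · max_{x_t} P(x | x_t) m_{1:t}(x_t), plus back-pointers for recovering the path. A sketch (the names viterbi, messages, and back are my own):

```python
# Viterbi algorithm for the umbrella example, with backtracking.

import numpy as np

T = np.array([[0.7, 0.3], [0.3, 0.7]])   # rows: previous state [true, false]
O = {True: np.array([0.9, 0.2]), False: np.array([0.1, 0.8])}

def viterbi(evidence, prior=np.array([0.5, 0.5])):
    # The first message is the normalized day-1 filtering result.
    f = O[evidence[0]] * (T.T @ prior)
    m = f / f.sum()
    messages, back = [m], []
    for e in evidence[1:]:
        scores = T * m[:, None]             # scores[i, j] = P(x_j | x_i) * m(x_i)
        back.append(scores.argmax(axis=0))  # best predecessor of each state
        m = O[e] * scores.max(axis=0)
        messages.append(m)
    state = int(np.argmax(m))               # best final state
    path = [state]
    for ptr in reversed(back):              # follow back-pointers to the start
        state = int(ptr[state])
        path.append(state)
    return messages, [s == 0 for s in reversed(path)]  # index 0 means rain=true

msgs, path = viterbi([True, True, False, True, True])
print(path)               # [True, True, False, True, True]
print(msgs[-1].round(4))  # ≈ [0.021, 0.0024]
```

Replacing the sum of the forward algorithm with a max keeps the cost linear in the sequence length, unlike enumerating the exponentially many sequences.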

Page 28: Probabilistic Reasoning Over Time Using Hidden Markov Models

PROOF

max_{x1:t} P(x1:t, Xt+1 | e1:t+1)
  = max_{x1:t} P(x1:t, Xt+1 | e1:t, et+1)                                    (divide the evidence)
  = α max_{x1:t} P(et+1 | x1:t, Xt+1, e1:t) P(x1:t, Xt+1 | e1:t)             (Bayes rule)
  = α P(et+1 | Xt+1) max_{x1:t} P(x1:t, Xt+1 | e1:t)                         (conditional independence)
  = α P(et+1 | Xt+1) max_{xt} [ P(Xt+1 | xt) max_{x1:t-1} P(x1:t-1, xt | e1:t) ]   (chain rule)