Probabilistic Reasoning Over Time Using Hidden Markov Models

Post on 04-Feb-2016


PROBABILISTIC REASONING OVER TIME USING HIDDEN MARKOV MODELS

Minmin Chen

CONTENTS

Sections 15.1–15.3

TIME AND UNCERTAINTY

Agent: security guard at some secret underground installation

Observation: is the director coming in with an umbrella?

State: Rain or not

Noisy sensor

Not fully observable


TIME AND UNCERTAINTY

Observation: measured heart rate, electrocardiogram (ECG), patient's activity

State: atrial fibrillation? Tachycardia? Bradycardia?

Noisy sensor

Not fully observable


STATES AND OBSERVATIONS

Unobservable state variable: Xt

Observable evidence variable: Et

Example 1: for each day t, the state is Rt (rain) and the evidence is Ut (umbrella), giving the sequences R1, R2, R3, … and U1, U2, U3, …

Example 2: for each recording, Et = {Measured_heart_rate_t, ECG_t, Activity_t} and Xt = {AF_t, Tachycardia_t, Bradycardia_t}

ASSUMPTION1: STATIONARY PROCESS

The world changes, but the laws governing it do not: the transition and sensor models remain the same for all t

ASSUMPTION 2: MARKOV PROCESS

The current state depends only on a finite, fixed history of previous states

First-order Markov process: P(Xt | X0:t-1) = P(Xt | Xt-1)

States

Transition Probability Matrix

Initial Distribution

ASSUMPTION 3: RESTRICTION TO THE PARENTS OF EVIDENCE

The evidence variable at time t depends only on the current state: P(Et | X0:t, E0:t-1) = P(Et | Xt)

Rt-1    P(Rt | Rt-1)
true    0.7
false   0.3

HIDDEN MARKOV MODEL

[Figure: Bayes net for the umbrella HMM. Hidden state sequence: … → Rt-1 → Rt → Rt+1 → …; evidence sequence: Ut-1, Ut, Ut+1, each Ut a child of Rt only.]

Rt      P(Ut | Rt)
true    0.9
false   0.2
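The umbrella model above can be written down directly in code. This is an illustrative sketch; the dict-of-dicts representation and the names `prior`, `T`, `O` are my own, not from the slides.

```python
# Umbrella HMM from the slides, as plain dicts (representation is illustrative).
# States: rain (True) or no rain (False); evidence: umbrella seen (True) or not.

prior = {True: 0.5, False: 0.5}                 # P(R0), as used later in the slides

# Transition model P(Rt | Rt-1)
T = {True:  {True: 0.7, False: 0.3},
     False: {True: 0.3, False: 0.7}}

# Sensor model P(Ut | Rt); P(Ut = false | Rt) is 1 minus the table entry
O = {True:  {True: 0.9, False: 0.1},
     False: {True: 0.2, False: 0.8}}
```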

JOINT DISTRIBUTION OF HMMS

By the chain rule, Bayes' rule, and the conditional-independence assumptions above, the joint distribution factorizes as:

P(X0:t, E1:t) = P(X0) Π(i=1..t) P(Xi | Xi-1) P(Ei | Xi)

EXAMPLE

Day:      1     2     3     4     5
Umbrella: true  true  false true  true
Rain:     true  true  false true  true

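The factorized joint makes the probability of the five-day example easy to compute. A hedged sketch, assuming a prior P(R0 = true) = 0.5 and taking R0 = true for illustration (R0 is not given on the slide):

```python
# Joint probability of the 5-day example under the HMM factorization
# P(x0) * prod_t P(xt | xt-1) * P(et | xt).
# Assumption (not on the slide): P(R0 = true) = 0.5 and R0 = true.

T = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

rain     = [True, True, True, False, True, True]   # R0..R5 (R0 assumed)
umbrella = [None, True, True, False, True, True]   # U1..U5 (index 0 unused)

p = 0.5                                            # P(R0 = true), assumed
for t in range(1, 6):
    p *= T[rain[t - 1]][rain[t]] * O[rain[t]][umbrella[t]]
# p = 0.5 * 0.7**3 * 0.3**2 * 0.9**4 * 0.8 ≈ 0.0081
```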


HOW ACCURATE THESE ASSUMPTIONS ARE

Depends on the problem domain

To overcome violations of the assumptions:

Increase the order of the Markov process model

Increase the set of state variables

INFERENCE IN TEMPORAL MODELS

Filtering: posterior distribution over the current state, given all evidence to date: P(Xt | e1:t)

Prediction: posterior distribution over a future state, given all evidence to date: P(Xt+k | e1:t), k > 0

Smoothing: posterior distribution over a past state, given all evidence to date: P(Xk | e1:t), 0 ≤ k < t

Most likely explanation: the sequence of states most likely to have generated the observations: argmax over x1:t of P(x1:t | e1:t)

FILTERING & PREDICTION

Prediction: the transition model advances the posterior distribution at time t:

P(Xt+1 | e1:t) = Σ_xt P(Xt+1 | xt) P(xt | e1:t)

Filtering: the sensor model weights the prediction by the new evidence:

P(Xt+1 | e1:t+1) = α P(et+1 | Xt+1) P(Xt+1 | e1:t)

PROOF

Forward algorithm:

P(Xt+1 | e1:t+1)
= P(Xt+1 | e1:t, et+1)                                    (divide evidence)
= α P(et+1 | Xt+1, e1:t) P(Xt+1 | e1:t)                   (Bayes rule)
= α P(et+1 | Xt+1) P(Xt+1 | e1:t)                         (conditional independence)
= α P(et+1 | Xt+1) Σ_xt P(Xt+1 | xt, e1:t) P(xt | e1:t)   (marginal probability, chain rule)
= α P(et+1 | Xt+1) Σ_xt P(Xt+1 | xt) P(xt | e1:t)         (conditional independence)
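The recursion above fits in a few lines of Python. This is a minimal sketch, not a library implementation; it reproduces the numbers in the worked example that follows (0.818 after day 1, 0.883 after day 2).

```python
# Forward (filtering) recursion:
# f_{1:t+1} = alpha * P(e_{t+1} | X_{t+1}) * sum_x P(X_{t+1} | x) * f_{1:t}(x)

T = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

def forward(f, evidence):
    """One filtering step: predict with the transition model, then update."""
    predicted = {x1: sum(T[x0][x1] * f[x0] for x0 in f) for x1 in f}
    updated = {x1: O[x1][evidence] * predicted[x1] for x1 in f}
    alpha = sum(updated.values())                  # normalizing constant
    return {x1: updated[x1] / alpha for x1 in f}

f1 = forward({True: 0.5, False: 0.5}, True)        # P(R1 | u1)     ≈ (0.818, 0.182)
f2 = forward(f1, True)                             # P(R2 | u1, u2) ≈ (0.883, 0.117)
```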

INTERPRETATION & EXAMPLE

[Figure: one filtering step on the umbrella example. Prior P(R0) = <0.5, 0.5>; the transition model (0.7 / 0.3) predicts P(R1) = <0.5, 0.5>; weighting by the sensor model for U1 = true (0.9 / 0.2) gives the unnormalized message <0.45, 0.1>.]

INTERPRETATION & EXAMPLE

[Figure: filtering continued. Normalizing <0.45, 0.1> gives P(R1 | u1) = <0.818, 0.182>; the transition model predicts P(R2 | u1) = <0.627, 0.373>; weighting by the sensor model for U2 = true gives the unnormalized message <0.565, 0.075>.]

INTERPRETATION & EXAMPLE

[Figure: filtering completed. Normalizing <0.565, 0.075> gives P(R2 | u1, u2) = <0.883, 0.117>.]

LIKELIHOOD OF EVIDENCE SEQUENCE

The likelihood of the evidence sequence is P(e1:t)

The forward algorithm, run without normalization, computes ℓ1:t(Xt) = P(Xt, e1:t); summing out the state gives P(e1:t) = Σ_xt ℓ1:t(xt)
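A sketch of this computation on the two-day umbrella example, running the forward recursion without normalizing (the variable names are illustrative):

```python
# Likelihood via the unnormalized forward message:
# l_{1:t}(Xt) = P(Xt, e_{1:t}), and P(e_{1:t}) = sum_xt l_{1:t}(xt).

T = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

f = {True: 0.5, False: 0.5}                        # prior P(R0)
for e in [True, True]:                             # evidence u1, u2
    f = {x1: O[x1][e] * sum(T[x0][x1] * f[x0] for x0 in f) for x1 in f}

likelihood = sum(f.values())                       # P(u1, u2) ≈ 0.3515
```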

SMOOTHING

P(Xk | e1:t)
= P(Xk | e1:k, ek+1:t)                        (divide evidence)
= α P(Xk | e1:k) P(ek+1:t | Xk, e1:k)         (Bayes rule)
= α P(Xk | e1:k) P(ek+1:t | Xk)               (conditional independence)
= α f1:k × bk+1:t

INTUITION

The backward message at time k combines the sensor model with the backward message at time k+1:

bk+1:t(xk) = Σ_xk+1 P(ek+1 | xk+1) bk+2:t(xk+1) P(xk+1 | xk)

BACKWARD

Backward algorithm:

P(ek+1:t | Xk)
= Σ_xk+1 P(ek+1:t | Xk, xk+1) P(xk+1 | Xk)              (marginal probability)
= Σ_xk+1 P(ek+1:t | xk+1) P(xk+1 | Xk)                  (conditional independence)
= Σ_xk+1 P(ek+1 | xk+1) P(ek+2:t | xk+1) P(xk+1 | Xk)   (chain rule, conditional independence)

INTERPRETATION & EXAMPLE

[Figure: smoothing P(R1 | u1, u2). Forward message f1:1 = <0.818, 0.182>; backward message b2:2 = <0.9·0.7 + 0.2·0.3, 0.9·0.3 + 0.2·0.7> = <0.69, 0.41>; normalizing the elementwise product gives P(R1 | u1, u2) = <0.883, 0.117>.]
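The smoothing computation in the figure can be sketched as follows (a minimal sketch; the forward message for day 1 is taken from the filtering example, and the dict representation is my own):

```python
# Forward-backward smoothing of day 1 in the two-day umbrella example.

T = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

f1 = {True: 0.818, False: 0.182}                   # P(R1 | u1), from filtering

# Backward message b_{2:2}(r1) = sum_r2 P(u2 | r2) * 1 * P(r2 | r1)
b = {r1: sum(O[r2][True] * T[r1][r2] for r2 in T) for r1 in T}
# b ≈ {True: 0.69, False: 0.41}, matching the figure

unnorm = {r: f1[r] * b[r] for r in f1}             # elementwise product f x b
alpha = sum(unnorm.values())
smoothed = {r: unnorm[r] / alpha for r in f1}      # P(R1 | u1, u2) ≈ (0.883, 0.117)
```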

FINDING THE MOST LIKELY SEQUENCE

[Figure: a candidate observation sequence (all true) paired with a candidate state sequence (all true); which state sequence is most likely?]

FINDING THE MOST LIKELY SEQUENCE

Enumeration:

Enumerate all possible state sequences; compute the joint probability of each and keep the sequence with the maximum.

Problem: the total number of state sequences grows exponentially with the length of the sequence.

Smoothing:

Calculate the posterior distribution for each time step k; in each step, pick the state with the maximum posterior; combine these states to form a sequence.

Problem: the states chosen independently at each step need not form the most likely sequence overall (they may even form an improbable path).

VITERBI ALGORITHM

Observations: true true false true true

Message m1:t, the probability of the most likely path ending in each state:

            t=1     t=2     t=3     t=4     t=5
rain=true   .8182   .5155   .0361   .0334   .0210
rain=false  .1818   .0491   .1237   .0173   .0024

The most likely sequence is rain = true, true, false, true, true.
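A compact sketch of the Viterbi recursion and backtracking on this example (illustrative code, not from the slides; the first message is the normalized day-1 filter, as in the table):

```python
# Viterbi on the umbrella observations [true, true, false, true, true].
# m holds the probability of the most likely path ending in each state.

T = {True: {True: 0.7, False: 0.3}, False: {True: 0.3, False: 0.7}}
O = {True: {True: 0.9, False: 0.1}, False: {True: 0.2, False: 0.8}}

def viterbi(evidence):
    prior = {True: 0.5, False: 0.5}
    # First message: filter the first observation (normalized, as in the table)
    m = {x: O[x][evidence[0]] * prior[x] for x in prior}
    z = sum(m.values())
    m = {x: m[x] / z for x in m}                   # ≈ {True: .8182, False: .1818}
    back = []                                      # backpointers, one dict per step
    for e in evidence[1:]:
        best = {x1: max(T, key=lambda x0: T[x0][x1] * m[x0]) for x1 in m}
        m = {x1: O[x1][e] * T[best[x1]][x1] * m[best[x1]] for x1 in m}
        back.append(best)
    # Backtrack from the most probable final state
    x = max(m, key=m.get)
    path = [x]
    for best in reversed(back):
        x = best[x]
        path.append(x)
    return list(reversed(path))

path = viterbi([True, True, False, True, True])
# path == [True, True, False, True, True], matching the bold path in the table
```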

PROOF

max over x1:t of P(x1:t, Xt+1 | e1:t+1)
= α P(et+1 | Xt+1) max_xt [ P(Xt+1 | xt) max over x1:t-1 of P(x1:t-1, xt | e1:t) ]

by divide evidence, Bayes rule, the chain rule, and conditional independence, in the same pattern as the forward derivation.
