
Page 1: Chapter 3: Kalman Filters (bi.snu.ac.kr/Courses/aplc12/KalmanFilter.pdf)

Chapter 3: Kalman Filters

September 2010

Byoung-Tak Zhang

School of Computer Science and Engineering & Cognitive Science and Brain Science Programs

Seoul National University

http://bi.snu.ac.kr/~btzhang/

Fall 2010 Graduate Course on Dynamic Learning

Page 2

References

• Slides of Michael Williams: see below
• Slides of Chris Williams: see another file

Page 3

Introduction to Kalman Filters

Michael Williams, 5 June 2003

Page 4

Overview

• The Problem – Why do we need Kalman Filters?
• What is a Kalman Filter?
• Conceptual Overview
• The Theory of the Kalman Filter
• Simple Example

Page 5

The Problem

• System state cannot be measured directly
• Need to estimate it "optimally" from measurements

[Block diagram: External Controls and System Error Sources drive the System (a black box) whose System State is desired but not known; Measuring Devices, subject to Measurement Error Sources, produce Observed Measurements; an Estimator turns these into the Optimal Estimate of System State.]

Page 6

What is a Kalman Filter?

• Recursive data processing algorithm
• Generates optimal estimates of desired quantities given a set of measurements
• Optimal?
  – For a linear system with white Gaussian errors, the Kalman filter is the "best" estimate based on all previous measurements
  – For a non-linear system, optimality is 'qualified'
• Recursive?
  – Doesn't need to store all previous measurements and reprocess all the data at each time step
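The recursive property can be illustrated with a simpler recursive estimator. A minimal sketch (the names `batch_mean` and `RecursiveMean` are mine, not from the slides) showing that an incremental update matches the batch average without storing any history:

```python
# Hypothetical sketch: a recursive estimator keeps only its current
# estimate and a count, yet matches the batch average over all data.
def batch_mean(measurements):
    # Reprocesses every stored measurement each time (what we avoid).
    return sum(measurements) / len(measurements)

class RecursiveMean:
    def __init__(self):
        self.estimate = 0.0
        self.n = 0

    def update(self, z):
        # Incorporate one new measurement; no history is stored.
        self.n += 1
        self.estimate += (z - self.estimate) / self.n
        return self.estimate

est = RecursiveMean()
for z in [2.0, 4.0, 6.0]:
    est.update(z)
print(est.estimate)  # 4.0, same as batch_mean([2.0, 4.0, 6.0])
```

The Kalman filter works the same way: each step needs only the previous estimate and its covariance, never the raw measurement history.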

Page 7

Conceptual Overview

• Simple example to motivate the workings of the Kalman Filter

• Theoretical justification to come later – for now, just focus on the concept

• Important: Prediction and Correction

Page 8

Conceptual Overview

• Lost on a 1-dimensional line
• Position: y(t)
• Assume Gaussian-distributed measurements

y

Page 9

Conceptual Overview

[Plot: Gaussian density of the measured position over the range 0–100.]

• Sextant measurement at t1: mean = z1 and variance = σz1²
• Optimal estimate of position: ŷ(t1) = z1
• Variance of error in the estimate: σx²(t1) = σz1²
• Boat in the same position at time t2 – predicted position is z1

Page 10

[Plot: densities of the prediction ŷ-(t2) and the measurement z(t2) over the range 0–100.]

Conceptual Overview

• So we have the prediction ŷ-(t2)
• GPS measurement at t2: mean = z2 and variance = σz2²
• Need to correct the prediction using the measurement to get ŷ(t2)
• Closer to the more trusted measurement – linear interpolation?

Page 11

[Plot: densities of the prediction ŷ-(t2), the measurement z(t2), and the corrected optimal estimate ŷ(t2); the corrected density is the narrowest.]

Conceptual Overview

• Corrected mean is the new optimal estimate of position
• New variance is smaller than either of the previous two variances

Page 12

Conceptual Overview

• Lessons so far:
  – Make a prediction based on previous data: ŷ-, σ-
  – Take a measurement: zk, σz
  – Optimal estimate: ŷ = prediction + (Kalman gain) × (measurement - prediction)
  – Variance of estimate = variance of prediction × (1 - Kalman gain)
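The scalar update rule above can be written directly in code. A minimal sketch with illustrative numbers (the function name `correct` and all values are mine):

```python
# Sketch of the scalar correction rule: blend prediction and measurement
# by the Kalman gain, and shrink the variance accordingly.
def correct(pred_mean, pred_var, z, meas_var):
    K = pred_var / (pred_var + meas_var)    # Kalman gain in 1-D
    mean = pred_mean + K * (z - pred_mean)  # estimate = prediction + K*(measurement - prediction)
    var = pred_var * (1.0 - K)              # variance of estimate shrinks
    return mean, var

m, v = correct(pred_mean=10.0, pred_var=4.0, z=12.0, meas_var=4.0)
# Equal variances give K = 0.5: the estimate lands halfway between 10 and 12
print(m, v)  # 11.0 2.0
```

Note that the resulting variance (2.0) is smaller than either the prediction variance or the measurement variance, as the slides observe.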

Page 13

[Plot: density of the estimate ŷ(t2) and the naïve prediction ŷ-(t3), which is simply shifted to the right.]

Conceptual Overview

• At time t3, the boat moves with velocity dy/dt = u
• Naïve approach: shift the probability density to the right to predict
• This would work if we knew the velocity exactly (perfect model)

Page 14

[Plot: ŷ(t2), the naïve prediction, and the actual prediction ŷ-(t3), which is both shifted and spread out.]

Conceptual Overview

• Better to assume an imperfect model by adding Gaussian noise: dy/dt = u + w
• The distribution for the prediction moves and spreads out
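The prediction step above – shift the mean by the modelled motion, spread the variance by the process noise – can be sketched as follows (the function name and all numbers are illustrative, not from the slides):

```python
# Hypothetical 1-D prediction step: the mean moves by u*dt, and the
# variance grows by the process-noise variance (the spread of w).
def predict(mean, var, u, dt, process_var):
    return mean + u * dt, var + process_var

m, v = predict(mean=11.0, var=2.0, u=3.0, dt=1.0, process_var=1.5)
print(m, v)  # 14.0 3.5
```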

Page 15

[Plot: the prediction ŷ-(t3), the measurement z(t3), and the corrected optimal estimate ŷ(t3).]

Conceptual Overview

• Now we take a measurement at t3
• Need to once again correct the prediction
• Same as before

Page 16

Conceptual Overview

• Lessons learnt from the conceptual overview:
  – Initial conditions (ŷk-1 and σk-1)
  – Prediction (ŷ-k, σ-k)
    • Use the initial conditions and a model (e.g. constant velocity) to make the prediction
  – Measurement (zk)
    • Take the measurement
  – Correction (ŷk, σk)
    • Use the measurement to correct the prediction by 'blending' the prediction and the residual – always a case of merging only two Gaussians
    • Optimal estimate with smaller variance

Page 17

Theoretical Basis

• Process to be estimated:

    yk = A yk-1 + B uk + wk-1    (process noise w with covariance Q)
    zk = H yk + vk               (measurement noise v with covariance R)

• Kalman filter:

  Predicted: ŷ-k is the estimate based on measurements at previous time steps

    ŷ-k = A ŷk-1 + B uk
    P-k = A Pk-1 A^T + Q

  Corrected: ŷk has additional information – the measurement at time k

    K = P-k H^T (H P-k H^T + R)^-1
    ŷk = ŷ-k + K (zk - H ŷ-k)
    Pk = (I - KH) P-k

Page 18

Blending Factor

• If we are sure about the measurements:
  – Measurement error covariance R decreases toward zero
  – K increases and weights the residual more heavily than the prediction
• If we are sure about the prediction:
  – Prediction error covariance P-k decreases toward zero
  – K decreases and weights the prediction more heavily than the residual

Page 19

Theoretical Basis

Prediction (Time Update)
  (1) Project the state ahead:             ŷ-k = A ŷk-1 + B uk
  (2) Project the error covariance ahead:  P-k = A Pk-1 A^T + Q

Correction (Measurement Update)
  (1) Compute the Kalman gain:                  K = P-k H^T (H P-k H^T + R)^-1
  (2) Update the estimate with measurement zk:  ŷk = ŷ-k + K (zk - H ŷ-k)
  (3) Update the error covariance:              Pk = (I - KH) P-k
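The predict/correct cycle translates almost line by line into code. A minimal NumPy sketch, using the matrix names A, B, H, Q, R from the text; the constant-velocity example and all numeric values are my own illustration, not from the slides:

```python
import numpy as np

# One full Kalman filter step: time update followed by measurement update.
def kalman_step(y, P, u, z, A, B, H, Q, R):
    # Prediction (time update)
    y_pred = A @ y + B @ u                  # project the state ahead
    P_pred = A @ P @ A.T + Q                # project the error covariance ahead
    # Correction (measurement update)
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)  # Kalman gain
    y_new = y_pred + K @ (z - H @ y_pred)   # blend prediction and residual
    P_new = (np.eye(len(y)) - K @ H) @ P_pred
    return y_new, P_new

# Illustrative 1-D constant-velocity model: state = [position, velocity]
dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.zeros((2, 1))                 # no control input in this toy example
H = np.array([[1.0, 0.0]])           # we observe position only
Q = 0.01 * np.eye(2)
R = np.array([[1.0]])
y = np.array([[0.0], [1.0]])
P = np.eye(2)
y, P = kalman_step(y, P, u=np.zeros((1, 1)), z=np.array([[1.2]]),
                   A=A, B=B, H=H, Q=Q, R=R)
# The posterior position variance P[0, 0] is smaller than the predicted one.
```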

Page 20

Quick Example – Constant Model

[Block diagram repeated from Page 5.]

Page 21

Quick Example – Constant Model

Prediction:
  ŷ-k = ŷk-1
  P-k = Pk-1

Correction:
  K = P-k (P-k + R)^-1
  ŷk = ŷ-k + K (zk - ŷ-k)
  Pk = (1 - K) P-k
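A minimal sketch of the constant-model filter above (A = 1, H = 1, no control input); the function name, measurement values, and R are illustrative, not from the slides:

```python
# Scalar Kalman filter for a constant signal: the prediction simply
# carries the previous estimate forward, then each measurement corrects it.
def constant_model_filter(measurements, R, y0=0.0, P0=1.0):
    y, P = y0, P0
    history = []
    for z in measurements:
        # Prediction: constant model, so estimate and covariance carry over.
        y_pred, P_pred = y, P
        # Correction
        K = P_pred / (P_pred + R)
        y = y_pred + K * (z - y_pred)
        P = (1.0 - K) * P_pred
        history.append((y, P))
    return history

hist = constant_model_filter([-0.37, -0.51, -0.33, -0.45], R=0.1)
# The error covariance P shrinks monotonically at each step.
```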

Page 22

Quick Example – Constant Model

[Plot: noisy measurements and the filtered estimate of the constant state over 100 time steps.]

Page 23

Quick Example – Constant Model

Convergence of Error Covariance – Pk

[Plot: Pk decreasing toward zero over 100 time steps.]

Page 24

Quick Example – Constant Model

A larger value of R – the measurement error covariance – indicates poorer-quality measurements. The filter is then slower to 'believe' the measurements, so convergence is slower.

[Plot: with larger R, the estimate converges to the constant state more slowly.]
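The effect of R on convergence shows up directly in the gain sequence. A sketch comparing two illustrative R values under the constant model (function name and values are mine):

```python
# With the constant model, a larger R yields a smaller gain K at every
# step, so the filter trusts each measurement less and converges slower.
def gain_sequence(R, P0=1.0, steps=5):
    P, gains = P0, []
    for _ in range(steps):
        K = P / (P + R)
        gains.append(K)
        P = (1.0 - K) * P
    return gains

small_R = gain_sequence(R=0.1)
large_R = gain_sequence(R=1.0)
# Every gain is smaller when R is larger: slower 'belief' in measurements.
```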

Page 25

References

1. Kalman, R. E. 1960. "A New Approach to Linear Filtering and Prediction Problems", Transactions of the ASME – Journal of Basic Engineering, pp. 35-45 (March 1960).
2. Maybeck, P. S. 1979. Stochastic Models, Estimation, and Control, Volume 1, Academic Press, Inc.
3. Welch, G. and Bishop, G. 2001. "An Introduction to the Kalman Filter", http://www.cs.unc.edu/~welch/kalman/