# State Estimation and Kalman Filtering
## Sensors and Uncertainty

Real-world sensors are noisy and suffer from missing data (e.g., occlusions, GPS blackouts). Sensor models let us estimate the ground truth, infer unobserved variables, and make forecasts.
## How Does This Relate to Motion?

A robot must:
- Know where it is (localization)
- Model its environment to avoid collisions (mapping, model building)
- Move in order to get a better view of an object (view planning, inspection)
- Predict how other agents move (tracking)
## Intelligent Robot Architecture

Percepts (raw sensor data) feed sensing algorithms (state estimation, mapping, tracking, calibration, object recognition, …), which build models; planning then uses those models to produce actions.
## State Estimation Framework

- Sensor model (aka observation model): maps state to observation. Observations may be partial and noisy.
- Dynamics model (aka transition model): maps state to next state. Transitions may be noisy and control-dependent.
- Probabilistic state estimation: given observations, estimate a probability distribution over the state.

The sensor model relates the state x_t to the observation z_t; the dynamics model relates x_t to x_{t+1}.
## Markov Chain

Given an initial distribution and the dynamics model, we can predict how the state evolves (as a probability distribution) over time:

X0 → X1 → X2 → X3 → …
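As a minimal sketch (a hypothetical 3-state chain; the transition probabilities are made up), propicting how the distribution evolves is just repeated matrix-vector products:

```python
# Propagate a probability distribution over a discrete state space
# through a Markov chain's transition model for several steps.
# (Hypothetical 3-state example; states and probabilities are made up.)

# T[i][j] = P(next state = j | current state = i)
T = [
    [0.8, 0.2, 0.0],
    [0.1, 0.7, 0.2],
    [0.0, 0.3, 0.7],
]

def predict(dist, T):
    """One prediction step: new_j = sum_i dist_i * T[i][j]."""
    n = len(dist)
    return [sum(dist[i] * T[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]  # initial distribution: certain to start in state 0
for t in range(3):
    dist = predict(dist, T)
print(dist)  # uncertainty spreads out over time
```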
## Hidden Markov Model

Use observations to get a better idea of where the robot is at time t:

X0 → X1 → X2 → X3   (hidden state variables)
      ↓    ↓    ↓
      z1   z2   z3   (observed variables)

Predict – observe – predict – observe …
## General Bayesian Filtering

Compute P(x_t | z_t), given the observation z_t and a prior P(x_{t-1}). By Bayes' rule:

P(x_t | z_t) = P(z_t | x_t) P(x_t) / P(z_t)

where the prediction and normalization terms are

P(x_t) = ∫ P(x_t | x_{t-1}) P(x_{t-1}) dx_{t-1}
P(z_t) = ∫ P(z_t | x_t) P(x_t) dx_t

Nasty, hairy integrals.
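On a discrete state space those integrals become sums, which makes the filter easy to sketch (a hypothetical 5-cell corridor; all numbers are illustrative):

```python
# Discrete (histogram) Bayes filter: the integrals become sums over a grid.
# Hypothetical 1D robot on a 5-cell corridor; numbers are made up.

def bayes_filter_step(prior, motion_model, likelihood):
    """One predict-update cycle of the general Bayes filter."""
    n = len(prior)
    # Predict: P(x_t) = sum_{x_{t-1}} P(x_t | x_{t-1}) P(x_{t-1})
    predicted = [sum(motion_model[i][j] * prior[i] for i in range(n))
                 for j in range(n)]
    # Update: P(x_t | z_t) ∝ P(z_t | x_t) P(x_t), then normalize
    unnorm = [likelihood[j] * predicted[j] for j in range(n)]
    total = sum(unnorm)  # this is P(z_t), the normalizing "integral"
    return [p / total for p in unnorm]

# Robot moves one cell right with prob 0.9, stays put with prob 0.1.
move_right = [[0.0] * 5 for _ in range(5)]
for i in range(5):
    move_right[i][min(i + 1, 4)] += 0.9
    move_right[i][i] += 0.1

prior = [0.2] * 5                         # uniform prior
likelihood = [0.1, 0.1, 0.7, 0.05, 0.05]  # sensor says "probably cell 2"
posterior = bayes_filter_step(prior, move_right, likelihood)
```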
## Key Representational Issue

How do we represent, and perform calculations on, probability distributions?
## Kalman Filtering

In a nutshell:
- Efficient filtering in continuous state spaces
- Gaussian transition and observation models
- Ubiquitous for tracking with noisy sensors, e.g., radar, GPS, cameras
## Kalman Filtering

A closed-form solution to the HMM filtering problem, assuming a linear Gaussian process: the transition and observation functions are linear transformations plus multivariate Gaussian noise:

y = A x + ε,   ε ~ N(0, Σ)
## Linear Gaussian Transition Model for a Moving 1D Point

Consider position and velocity x_t, v_t and a time step h. Without noise:

x_{t+1} = x_t + h v_t
v_{t+1} = v_t

With Gaussian noise of standard deviation σ1:

P(x_{t+1} | x_t) ∝ exp(−(x_{t+1} − (x_t + h v_t))² / (2σ1²))

i.e., x_{t+1} ~ N(x_t + h v_t, σ1²)
## Linear Gaussian Transition Model

If the prior on position is Gaussian, then the predicted distribution after one transition with velocity v over time h is also Gaussian:

N(μ, σ²) → N(μ + vh, σ² + σ1²)
## Linear Gaussian Observation Model

Position observation z_t, with Gaussian noise of standard deviation σ2:

z_t ~ N(x_t, σ2²)
## Linear Gaussian Observation Model

If the prior on position is Gaussian, then the posterior after an observation z is also Gaussian (position prior × observation probability → posterior probability):

μ' = (σ2² μ + σ² z) / (σ² + σ2²)
σ'² = σ² σ2² / (σ² + σ2²)
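The scalar transition and observation formulas can be sketched directly on (mean, variance) pairs; the numeric example below is made up:

```python
# 1D Kalman filter on (mean, variance) pairs, following the scalar
# formulas on the slides. sigma1: motion-noise std, sigma2: observation std.

def predict_1d(mu, var, v, h, sigma1):
    """N(mu, var) -> N(mu + v*h, var + sigma1^2) under the motion model."""
    return mu + v * h, var + sigma1 ** 2

def update_1d(mu, var, z, sigma2):
    """Condition on an observation z ~ N(x, sigma2^2)."""
    var2 = sigma2 ** 2
    new_mu = (var2 * mu + var * z) / (var + var2)  # precision-weighted mean
    new_var = var * var2 / (var + var2)            # variance always shrinks
    return new_mu, new_var

# Illustrative numbers: point at x=0 moving at v=1, time step h=1.
mu, var = 0.0, 1.0
mu, var = predict_1d(mu, var, v=1.0, h=1.0, sigma1=0.5)  # -> N(1.0, 1.25)
mu, var = update_1d(mu, var, z=1.4, sigma2=0.5)          # pulled toward z
```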
## Kalman Filter

x_t ~ N(μ_x, Σ_x)
x_{t+1} = F x_t + g + v,   v ~ N(0, Σ_v)   (dynamics noise)
z_{t+1} = H x_{t+1} + w,   w ~ N(0, Σ_w)   (observation noise)
## Two Steps

Maintain μ_t, Σ_t, the parameters of the Gaussian distribution over the state at time t.

- Predict: compute the distribution of x_{t+1} using the dynamics model alone.
- Update: compute x_{t+1} | z_{t+1} with Bayes' rule.
## Multivariate Computations

Linear transformations of Gaussians: if x ~ N(μ, Σ) and y = A x + b, then y ~ N(Aμ + b, A Σ Aᵀ).

Consequence: if x ~ N(μ_x, Σ_x) and y ~ N(μ_y, Σ_y) are independent and z = x + y, then z ~ N(μ_x + μ_y, Σ_x + Σ_y).

Conditional of a Gaussian: if [x1, x2] ~ N([μ1; μ2], [Σ11, Σ12; Σ21, Σ22]), then on observing x2 = z we have

x1 ~ N(μ1 + Σ12 Σ22⁻¹ (z − μ2), Σ11 − Σ12 Σ22⁻¹ Σ21)
## Predict Step

By the linear transformation rule:

x_{t+1} ~ N(F μ_t + g, F Σ_t Fᵀ + Σ_v)

Call this mean and covariance μ' and Σ'.
## Update Step

Given μ', Σ', and a new observation z, compute the final distribution μ_{t+1} and Σ_{t+1} using the conditional-distribution formulas.

## Presentation
## Deriving the Update Rule

(1) Assumption: the joint distribution of x_t and z_t is Gaussian, with unknowns a, B, C:

[x_t; z_t] ~ N([μ'; a], [Σ', B; Bᵀ, C])

(2) Assumption: x_t ~ N(μ', Σ')
(3) Assumption: z_t | x_t ~ N(H x_t, Σ_w)
(4) Conditioning (1) on x_t: z_t | x_t ~ N(a + Bᵀ Σ'⁻¹ (x_t − μ'), C − Bᵀ Σ'⁻¹ B)
(5) Setting the mean of (4) equal to (3): a + Bᵀ Σ'⁻¹ (x_t − μ') = H x_t  ⇒  a = H μ', Bᵀ = H Σ'
(6) Setting the covariance of (4) equal to (3): C − Bᵀ Σ'⁻¹ B = Σ_w  ⇒  C = H Σ' Hᵀ + Σ_w
(7) Conditioning (1) on z_t: x_t | z_t ~ N(μ' + B C⁻¹ (z_t − a), Σ' − B C⁻¹ Bᵀ)
(8, 9) Substituting (5) and (6) into (7) gives the Kalman filter update:

μ_t = μ' + Σ' Hᵀ C⁻¹ (z_t − H μ')
Σ_t = Σ' − Σ' Hᵀ C⁻¹ H Σ'
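A minimal numpy sketch of the full predict/update cycle in the same notation; the constant-velocity model and all numbers below are illustrative:

```python
import numpy as np

# Linear Kalman filter: dynamics x' = F x + g + v, v ~ N(0, Sv);
# observation z = H x + w, w ~ N(0, Sw). Numbers are made up.

def kalman_predict(mu, Sigma, F, g, Sv):
    """mu' = F mu + g,  Sigma' = F Sigma F^T + Sv."""
    return F @ mu + g, F @ Sigma @ F.T + Sv

def kalman_update(mu_p, Sigma_p, z, H, Sw):
    """Condition the predicted Gaussian on observation z."""
    C = H @ Sigma_p @ H.T + Sw            # innovation covariance
    K = Sigma_p @ H.T @ np.linalg.inv(C)  # Kalman gain  Sigma' H^T C^-1
    mu = mu_p + K @ (z - H @ mu_p)
    Sigma = Sigma_p - K @ H @ Sigma_p
    return mu, Sigma

# State [position, velocity], time step h = 1, observe position only.
h = 1.0
F = np.array([[1.0, h], [0.0, 1.0]])
g = np.zeros(2)
Sv = 0.01 * np.eye(2)        # dynamics noise
H = np.array([[1.0, 0.0]])
Sw = np.array([[0.25]])      # observation noise

mu, Sigma = np.zeros(2), np.eye(2)
for z in ([1.1], [2.0], [2.9]):   # noisy position readings
    mu, Sigma = kalman_predict(mu, Sigma, F, g, Sv)
    mu, Sigma = kalman_update(mu, Sigma, np.array(z), H, Sw)
# mu now estimates both position and the never-observed velocity.
```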
## Filtering Variants

- Extended Kalman Filter (EKF)
- Unscented Kalman Filter (UKF)
- Particle filtering
## Extended/Unscented Kalman Filter

Used when the dynamics / sensing model is nonlinear:

x_{t+1} = f(x_t) + v
z_{t+1} = h(x_{t+1}) + w

with Gaussian noise and a Gaussian state estimate as before.

Strategy: propagate means as usual. What about covariances and conditioning?
## Extended Kalman Filter

Approach: linearize about the current estimate,

F = ∂f/∂x (x_t)
H = ∂h/∂x (x_{t+1})

and use these Jacobians (matrices of partial derivatives) to propagate covariance matrices and perform conditioning.
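A sketch of the EKF update for a scalar state with a hypothetical nonlinear observation h(x) = x² (this toy model is not from the slides):

```python
# Extended Kalman filter update for a scalar state with a nonlinear
# observation z = h(x) + w. Linearize: H = dh/dx at the current estimate.
# (Hypothetical toy model h(x) = x^2; all numbers are illustrative.)

def ekf_update_1d(mu, var, z, h, h_prime, w_var):
    H = h_prime(mu)                  # Jacobian (scalar derivative) at mu
    C = H * var * H + w_var          # innovation covariance
    K = var * H / C                  # Kalman gain
    new_mu = mu + K * (z - h(mu))    # innovation uses the nonlinear h
    new_var = var - K * H * var
    return new_mu, new_var

h = lambda x: x ** 2
h_prime = lambda x: 2 * x

mu, var = 3.0, 1.0   # prior: x ~ N(3, 1)
mu, var = ekf_update_1d(mu, var, z=10.0, h=h, h_prime=h_prime, w_var=1.0)
# The estimate moves toward the value whose square matches z = 10.
```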
## Unscented Kalman Filter

Approach: propagate a set of points (sigma points) with the same mean and covariance as the input distribution through the nonlinear function, then reconstruct a Gaussian from the transformed points.
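A sketch of the 1D unscented transform (κ = 2 is a common weight choice for n = 1; the test function is a made-up example):

```python
import math

# Unscented transform (1D): pass sigma points through a nonlinear
# function and reconstruct the mean/variance from the weighted outputs.

def unscented_transform_1d(mu, var, f, kappa=2.0):
    n = 1
    spread = math.sqrt((n + kappa) * var)
    points = [mu, mu + spread, mu - spread]          # sigma points
    weights = [kappa / (n + kappa),
               1 / (2 * (n + kappa)),
               1 / (2 * (n + kappa))]
    ys = [f(p) for p in points]
    mean = sum(w * y for w, y in zip(weights, ys))
    variance = sum(w * (y - mean) ** 2 for w, y in zip(weights, ys))
    return mean, variance

# Hypothetical nonlinear function: f(x) = x^2 applied to N(0, 1).
mean, variance = unscented_transform_1d(0.0, 1.0, lambda x: x ** 2)
```

For this particular example the transform recovers E[x²] = 1 and Var(x²) = 2 for a standard normal input, which a first-order linearization (as in the EKF) would get wrong.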
## Next Class

Particle filter: non-Gaussian distributions; nonlinear observation/transition functions.

Readings: Rekleitis (2004); P.R.M. Chapter 9
## Midterm Project Report Schedule (tentative)

Tuesday:
- Changsi and Yang: Indoor person following
- Roland: Person tracking
- Jiaan and Yubin: Indoor mapping
- You-wei: Autonomous driving

Thursday:
- Adrija: Dynamic collision checking with point clouds
- Damien: Netlogo
- Santhosh and Yohanand: Robot chess
- Jingru and Yajia: Ball collector with robot arm
- Ye: UAV simulation