TRANSCRIPT
Zürich
ETH Master Course: 151-0854-00L
Autonomous Mobile Robots: Localization II
Roland Siegwart, Margarita Chli, Paul Furgale, Marco Hutter, Martin Rufli, Davide Scaramuzza
ACT and SEE
For all x_t do
    bel̄(x_t) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1})    (prediction update / ACT)
    bel(x_t) = η p(z_t | x_t) bel̄(x_t)    (measurement update / SEE)
endfor
Return bel(x_t)
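On a discretized configuration space, the loop above can be sketched in a few lines of Python (a minimal illustration only; the grid size, motion model, and sensor likelihood below are toy assumptions, not from the lecture):

```python
import numpy as np

def bayes_filter_step(belief, motion_model, likelihood):
    """One ACT + SEE cycle of the discrete Bayes filter.

    belief       : bel(x_{t-1}) over N grid cells, shape (N,)
    motion_model : p(x_t | u_t, x_{t-1}), shape (N, N)
    likelihood   : p(z_t | x_t), shape (N,)
    """
    # Prediction update (ACT): sum over all previous cells
    belief_bar = motion_model @ belief
    # Measurement update (SEE): Bayes rule; eta is the normalizer
    posterior = likelihood * belief_bar
    return posterior / posterior.sum()

# Toy example: 5 cells, the robot deterministically moves one cell right
N = 5
motion = np.roll(np.eye(N), 1, axis=0)         # shifts all mass by +1 cell
belief = np.array([1.0, 0.0, 0.0, 0.0, 0.0])   # robot known to be in cell 0
z_lik = np.array([0.1, 0.7, 0.1, 0.05, 0.05])  # sensor favors cell 1
belief = bayes_filter_step(belief, motion, z_lik)
```

After the step, the belief mass concentrates in cell 1, where both the prediction and the measurement agree.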
Map Representation: Continuous Line-Based
a) Architecture map
b) Representation with a set of finite or infinite lines
Map Representation: Exact Cell Decomposition
Exact cell decomposition: polygons
Map Representation: Approximate Cell Decomposition
Fixed cell decomposition: narrow passages disappear
Map Representation: Adaptive Cell Decomposition
Exercise: how do we implement an adaptive cell decomposition algorithm?
Map Representation: Topological Map
node (location)
edge (connectivity)
A topological map represents the environment as a graph with nodes and edges. Nodes correspond to spaces; edges correspond to physical connections between nodes.
Topological maps lack scale and distances, but topological relationships (e.g., left, right) are maintained.
Map Representation: Topological Map
The London Underground map
Probabilistic Map Based Localization
Solution to the probabilistic localization problem
A probabilistic approach to the mobile robot localization problem is a method that computes the probability distribution over the robot configuration during each Action (ACT) and Perception (SEE) step.
The ingredients are:
1. The initial probability distribution p(x_0)
2. The statistical error model of the proprioceptive sensors (e.g., wheel encoders)
3. The statistical error model of the exteroceptive sensors (e.g., laser, sonar, camera)
4. A map of the environment (if the map is not known a priori, the robot needs to build a map of the environment and then localize in it; this is called SLAM, Simultaneous Localization And Mapping)
Illustration of probabilistic map based localization
[Figure: the belief bel(x) over position x, showing the initial probability distribution p(x_0), action updates, and perception updates bel(x_t) = η p(z_t | x_t) bel̄(x_t)]
Probabilistic Map Based Localization: Markov Localization
Markov localization
Markov localization uses a grid representation of the robot configuration space.
For all x_t do
    bel̄(x_t) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1})    (prediction update)
    bel(x_t) = η p(z_t | x_t) bel̄(x_t)    (measurement update)
endfor
Return bel(x_t)
Markov localization
Let us discretize the configuration space into 10 cells.
Suppose that the robot's initial belief is a uniform distribution from cell 0 to cell 3. Observe that all the elements are normalized so that their sum is 1.
Markov localization
Initial belief distribution
Action phase: Let us assume that the robot moves forward with the following statistical model.
This means that there is a 50% probability that the robot moved 2 cells forward and a 50% probability that it moved 3 cells forward. Considering what the probability was before moving, what will the probability be after the motion?
Markov localization: Action update
The solution is given by the convolution (cross correlation) of the two distributions
bel̄(x_t) = p(x_t | u_t) * bel(x_{t-1}) = Σ_{x_{t-1}} p(x_t | u_t, x_{t-1}) bel(x_{t-1})
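In code, this action update is a shift-and-add (discrete convolution) of the belief with the motion distribution. The following Python sketch reproduces the 10-cell example above (cell indexing from the origin is an assumption about the slide's layout):

```python
import numpy as np

# Initial belief: uniform over cells 0..3 of a 10-cell grid
belief = np.zeros(10)
belief[0:4] = 0.25

# Motion model: 50% probability of moving 2 cells, 50% of moving 3 cells
motion = {2: 0.5, 3: 0.5}

# Prediction update: convolve the belief with the motion distribution
belief_bar = np.zeros(10)
for shift, prob in motion.items():
    belief_bar[shift:] += prob * belief[:10 - shift]

# Result: 0.125 at cell 2, 0.25 at cells 3-5, 0.125 at cell 6
```

The belief has both shifted forward and spread out, reflecting the added motion uncertainty.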
Markov localization: Perception update
Let us now assume that the robot uses its onboard range finder and measures the distance from the origin. Assume that the statistical error model of the sensors is:
This plot tells us that the distance of the robot from the origin is equally likely to be 5 or 6 units. What will the final robot belief be after this measurement?
The answer is again given by the Bayes rule:
bel(x_t) = η p(z_t | x_t) bel̄(x_t)
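Continuing the same sketch in Python, the perception update is an element-wise product followed by normalization (the η factor); the numbers follow the slide's example:

```python
import numpy as np

# Predicted belief from the action update on the previous slide
belief_bar = np.array([0, 0, 0.125, 0.25, 0.25, 0.25, 0.125, 0, 0, 0])

# Sensor model: distance from the origin is equally likely 5 or 6 units
likelihood = np.zeros(10)
likelihood[5] = 0.5
likelihood[6] = 0.5

# Bayes rule: multiply prior by likelihood, then normalize
belief = likelihood * belief_bar
belief /= belief.sum()
# Cell 5 ends up twice as likely as cell 6, because the prior favored it
```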
Markov Localization Case Study: Grid Map
Example 2: Museum, laser scans 1, 2, 3, 13, and 21 (courtesy of W. Burgard)
Probabilistic Map Based Localization: Kalman Filter Localization
Kalman filter Localization: Assumptions and properties
Assumptions:
• Linear or linearizable system
• Robot belief, motion model, and measurement model are affected by white Gaussian noise
Outcome:
• Guaranteed to be optimal
• Only μ and Σ are updated during the action and perception updates
Kalman Filter Localization: Illustration
Action (ACT)
Perception (SEE)
Introduction to Kalman filter theory
A Gaussian distribution is represented entirely by its first and second moments, the mean μ and the variance σ², and is indicated by N(μ, σ²)
When the robot configuration is a vector, the distribution is a multivariate Gaussian represented by a mean vector μ and a covariance matrix Σ
Introduction to Kalman filter theory: Applying the theorem of total probability
Let x_1, x_2 be two random variables which are independent and normally distributed
Let y be a function of x1, x2
What will the distribution of y be?
Introduction to Kalman filter theory: Applying the theorem of total probability
The answer is simple if f is linear, e.g., y = a x_1 + b x_2
If x_1, x_2 are independent and normal, the output is also a Gaussian with
    μ_y = a μ_1 + b μ_2,    σ_y² = a² σ_1² + b² σ_2²
If x_1, x_2 are vectors with covariances Σ_1, Σ_2 respectively, then for y = A x_1 + B x_2
    μ_y = A μ_1 + B μ_2,    Σ_y = A Σ_1 A^T + B Σ_2 B^T
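These rules are easy to sanity-check numerically. The Monte Carlo sketch below, with arbitrarily chosen a, b, means, and variances, verifies the scalar case:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 2.0, -1.0
mu1, s1 = 1.0, 0.5   # x1 ~ N(1.0, 0.5^2)
mu2, s2 = 3.0, 1.0   # x2 ~ N(3.0, 1.0^2)

# Sample y = a*x1 + b*x2 for independent x1, x2
x1 = rng.normal(mu1, s1, 200_000)
x2 = rng.normal(mu2, s2, 200_000)
y = a * x1 + b * x2

# Predictions from the linear-Gaussian rule
mu_y = a * mu1 + b * mu2              # -1.0
var_y = a**2 * s1**2 + b**2 * s2**2   # 2.0
```

With 200 000 samples, the empirical mean and variance of y match mu_y and var_y to within sampling noise.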
Introduction to Kalman filter theory: Applying the Bayes rule
Here, we wish to demonstrate that the product of two Gaussian functions is still a Gaussian
Let now q denote the position of the robot.
Let p_1(q) be the robot belief resulting from the Action update, i.e., bel̄(x_t)
Let p_2(q) be the robot belief from the observation, i.e., p(z_t | x_t)
We wish to show that if p_1 and p_2 are Gaussian functions, their product, and hence the posterior bel(x_t) = η p(z_t | x_t) bel̄(x_t), is also a Gaussian
Introduction to Kalman filter theory: Applying the Bayes rule
By formalizing this, we want to show that if we have
    p_1(q) = N(q̂_1, σ_1²),    p_2(q) = N(q̂_2, σ_2²)
then their product is also Gaussian: p_1(q) p_2(q) ∝ N(q̂, σ²)
Additionally, we want to find expressions for the mean value q̂ and variance σ² of the new Gaussian as functions of the mean values and variances of the input variables
Introduction to Kalman filter theory: Applying the Bayes rule
From the product of the two Gaussians p_1(q) and p_2(q), we obtain
    p_1(q) p_2(q) ∝ exp( -(q - q̂_1)² / (2σ_1²) ) exp( -(q - q̂_2)² / (2σ_2²) )
As we can see, the argument of this exponential is quadratic in q, hence the product is a Gaussian. We now need to determine its mean value and variance, which allow us to rewrite this exponential in the form exp( -(q - q̂)² / (2σ²) ).
By rearranging the exponential, we get expressions for q̂ and σ².
Introduction to Kalman filter theory: Applying the Bayes rule
The mean value can be written as
    q̂ = (σ_2² q̂_1 + σ_1² q̂_2) / (σ_1² + σ_2²)
and the variance can be written as
    σ² = (σ_1² σ_2²) / (σ_1² + σ_2²)
Introduction to Kalman filter theory: Applying the Bayes rule
By rearranging the terms, the expressions for the mean value and variance can also be written as
    q̂ = q̂_1 + K (q̂_2 - q̂_1),    σ² = σ_1² - K σ_1²,    where K = σ_1² / (σ_1² + σ_2²) is the Kalman gain
The resulting variance is smaller than the input variances. Thus, the uncertainty of the position estimate has shrunk as a result of the observation.
Even poor measurements will only increase the precision of the estimate. This is a result that we expect based on information theory.
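A quick numeric check of these fusion formulas (example values chosen arbitrarily):

```python
# Two Gaussian estimates of the same position q
q1, var1 = 2.0, 4.0   # e.g. the belief after the action update
q2, var2 = 5.0, 1.0   # e.g. the estimate from the observation

K = var1 / (var1 + var2)      # Kalman gain: 0.8
q_new = q1 + K * (q2 - q1)    # 4.4, pulled toward the more certain estimate
var_new = var1 - K * var1     # 0.8, smaller than both input variances
```

Note that var_new equals var1*var2/(var1+var2), the product-of-Gaussians variance derived above, and the fused mean lies closer to q2 because var2 < var1.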
Introduction to Kalman filter theory: Equations applied to mobile robots

One-Dimensional Case

Action Update (or Prediction Update):
    x̂_t = f(x_{t-1}, u_t)
    σ̂_{x_t}² = (∂f/∂x)² σ_{x_{t-1}}² + (∂f/∂u)² σ_u²

Perception Update (or Measurement Update):
    x_t = x̂_t + ( σ̂_{x_t}² / (σ̂_{x_t}² + σ_{z_t}²) ) (z_t - x̂_t)
    σ_{x_t}² = σ̂_{x_t}² - σ̂_{x_t}⁴ / (σ̂_{x_t}² + σ_{z_t}²)

N-Dimensional Case

Action Update (or Prediction Update):
    x̂_t = f(x_{t-1}, u_t)
    P̂_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T

Perception Update (or Measurement Update):
    x_t = x̂_t + P̂_t (P̂_t + R_t)^-1 (z_t - x̂_t)
    P_t = P̂_t - P̂_t (P̂_t + R_t)^-1 P̂_t

NB:
• The new mean value is closer to the one of the two estimates that has smaller uncertainty
• The new uncertainty is smaller than the two initial uncertainties
Introduction to Kalman filter theory: Equations applied to mobile robots
In the N-dimensional case, the perception update is usually written in terms of the Kalman gain K_t and the innovation ν_t:
    x_t = x̂_t + K_t ν_t
    P_t = P̂_t - K_t Σ_{IN,t} K_t^T
with
    K_t = P̂_t (P̂_t + R_t)^-1
    ν_t = z_t - x̂_t
    Σ_{IN,t} = P̂_t + R_t
NB:
• The new mean value is closer to the one of the two estimates that has smaller uncertainty
• The new uncertainty is smaller than the two initial uncertainties
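As an illustration of the one-dimensional equations, here is a minimal Kalman filter step in Python. The motion model x_t = x_{t-1} + u_t (so ∂f/∂x = ∂f/∂u = 1) and all noise values are toy assumptions, not from the lecture:

```python
def kf_1d_step(x, var_x, u, var_u, z, var_z):
    """One action + perception cycle of a 1-D Kalman filter,
    assuming the motion model x_t = x_{t-1} + u_t."""
    # Action update (prediction): uncertainty grows
    x_hat = x + u
    var_hat = var_x + var_u
    # Perception update (correction): uncertainty shrinks
    K = var_hat / (var_hat + var_z)   # Kalman gain
    x_new = x_hat + K * (z - x_hat)
    var_new = var_hat - K * var_hat
    return x_new, var_new

# Robot at x=0 with variance 1 commands u=1, then measures z=1.2
x, var = kf_1d_step(0.0, 1.0, 1.0, 0.25, 1.2, 0.5)
```

The predicted variance rises from 1.0 to 1.25 during the action step, then drops below the measurement variance after the correction, as the fusion result above predicts.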
Kalman Filter Localization: Markov versus Kalman localization

Markov
PROS:
• Localization starting from any unknown position
• Recovers from ambiguous situations
CONS:
• Updating the probability of all positions within the whole state space at any time requires a discrete representation of the space (grid); the required memory and computational power can become very large if a fine grid is used

Kalman
PROS:
• Tracks the robot and is inherently very precise and efficient
CONS:
• If the uncertainty of the robot becomes too large (e.g., after a collision with an object), the Kalman filter will fail and the position is definitively lost