Source: chrome.ws.dei.polimi.it/images/6/60/robotics_04_2020_localization.pdf
TRANSCRIPT
Robotics
Matteo [email protected]
Artificial Intelligence and Robotics Lab - Politecnico di Milano
Robot Localization – Sensor Models and Bayesian Filtering
2
Sensors
Where am I?
A Simplified Sense-Plan-Act Architecture
Trajectory Planning
Lower Frequency
Trajectory
Trajectory Following
(and Obstacle Avoidance)
Motion Commands
Current Position
Goal Position
Map
High Frequency
Localization
3
Localization with Known Map
Motion Model
(Kinematics)
Sensor Model
4
Disclaimer …
Slides from now on have been heavily “inspired” by the teaching material kindly
provided with: S. Thrun, D. Fox, W. Burgard. “Probabilistic Robotics”. MIT Press, 2005
http://robots.stanford.edu/probabilistic-robotics/
You can refer to the original source for deeper analysis and references on the topic …
5
Range Sensor Models
The sensor model describes P(z|x), i.e., the probability of a measurement z given that
the robot is at position x.
6
Range Sensors
The sensor model describes P(z|x), i.e., the probability of a measurement z given that
the robot is at position x.
In particular, a scan z consists of K measurements, which are assumed independent
given the robot position and the surrounding map:

$$z = \{z_1, z_2, \dots, z_K\}$$

$$P(z \mid x, m) = \prod_{k=1}^{K} P(z_k \mid x, m)$$
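The per-beam independence assumption turns the product above into a sum of log-likelihoods, which is numerically safer than multiplying K small numbers. A minimal sketch; the per-beam model here is a made-up Gaussian around a fixed expected range, just to make it runnable:

```python
import numpy as np

def scan_log_likelihood(z, x, m, beam_log_likelihood):
    """Log-likelihood of a full scan z = {z_1, ..., z_K} at pose x,
    assuming per-beam conditional independence given (x, m)."""
    return sum(beam_log_likelihood(z_k, x, m) for z_k in z)

def toy_beam_log_likelihood(z_k, x, m, sigma=0.1, z_exp=2.0):
    # Hypothetical per-beam model: Gaussian noise around z_exp = 2.0 m
    return -0.5 * ((z_k - z_exp) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

scan = [2.0, 2.05, 1.95]
print(scan_log_likelihood(scan, x=None, m=None,
                          beam_log_likelihood=toy_beam_log_likelihood))
```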
7
Typical Measurement Errors of Range Measurements
The sensor model describes P(z|x), i.e., the probability of a measurement z given that
the robot is at position x.
Measurements can come from:
1. Beams reflected by obstacles
2. Beams reflected by persons or caused by crosstalk
3. Random measurements
4. Max-range measurements
8
Distance perception: Laser Range Finder
Lasers are definitely the more accurate sensors:
• 180 ranges over 180° (up to 360°)
• 1 to 64 planes scanned, 10-75 scans/s
• < 1 cm range resolution
• Max range up to 50-80 m
• Issues with mirrors, glass, and matte black.
Prices of the models shown range from under 1000 € to over 80 000 €.
9
Beam Sensor Model (I)
The laser range finder model describes each single measurement using:

Measurement noise (Gaussian around the expected range $z_{\exp}$, over $[0, z_{\max}]$):
$$P_{\mathrm{hit}}(z \mid x, m) = \frac{1}{\sqrt{2\pi b}}\, e^{-\frac{1}{2}\frac{(z - z_{\exp})^2}{b}}$$

Unexpected obstacles (exponential, cut off at the expected range):
$$P_{\mathrm{unexp}}(z \mid x, m) = \begin{cases} \lambda\, e^{-\lambda z} & \text{if } z < z_{\exp} \\ 0 & \text{otherwise} \end{cases}$$
10
Beam Sensor Model (II)
The laser range finder model describes each single measurement using
Random measurements (uniform over the measurement range):
$$P_{\mathrm{rand}}(z \mid x, m) = \frac{1}{z_{\max}}$$

Max-range measurements (narrow uniform spike of width $z_{\mathrm{small}}$ at $z_{\max}$):
$$P_{\max}(z \mid x, m) = \frac{1}{z_{\mathrm{small}}}$$
11
Beam Sensor Model (III)
The laser range finder model describes each single measurement using
$$P(z \mid x, m) = \begin{pmatrix} \alpha_{\mathrm{hit}} \\ \alpha_{\mathrm{unexp}} \\ \alpha_{\max} \\ \alpha_{\mathrm{rand}} \end{pmatrix}^{\!T} \cdot \begin{pmatrix} P_{\mathrm{hit}}(z \mid x, m) \\ P_{\mathrm{unexp}}(z \mid x, m) \\ P_{\max}(z \mid x, m) \\ P_{\mathrm{rand}}(z \mid x, m) \end{pmatrix}$$
Sonar
Laser
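The four-component mixture above can be sketched directly. All numeric parameters below (mixing weights, variance b, rate λ, z_max, z_small) are hypothetical placeholders; in practice they are learned from calibration data, as the next slides show:

```python
import numpy as np

def p_hit(z, z_exp, b):
    # Gaussian measurement noise around the expected range
    return np.exp(-0.5 * (z - z_exp) ** 2 / b) / np.sqrt(2 * np.pi * b)

def p_unexp(z, z_exp, lam):
    # Exponential density for unexpected obstacles, cut off at z_exp
    return lam * np.exp(-lam * z) if z < z_exp else 0.0

def p_max(z, z_max, z_small):
    # Narrow uniform spike at the maximum range
    return 1.0 / z_small if z_max - z_small <= z <= z_max else 0.0

def p_rand(z, z_max):
    # Uniform density over the whole measurement range
    return 1.0 / z_max if 0 <= z <= z_max else 0.0

def beam_likelihood(z, z_exp, alphas, b=0.01, lam=0.5, z_max=80.0, z_small=0.1):
    a_hit, a_unexp, a_max, a_rand = alphas  # mixing weights, sum to 1
    return (a_hit * p_hit(z, z_exp, b)
            + a_unexp * p_unexp(z, z_exp, lam)
            + a_max * p_max(z, z_max, z_small)
            + a_rand * p_rand(z, z_max))

print(beam_likelihood(3.0, z_exp=3.0, alphas=(0.7, 0.2, 0.05, 0.05)))
```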
12
Acquire some data from the sensor, e.g., when the target is at 300 cm and 400 cm
Then estimate the model parameters via maximum likelihood:
Sensor Model Calibration (Sonar)
Sonar
$P(z \mid z_{\exp})$ for targets at 300 cm and 400 cm
13
Laser
Acquire some data from the sensor, e.g., when the target is at 300 cm and 400 cm
Then estimate the model parameters via maximum likelihood:
Sensor Model Calibration (Laser)
$P(z \mid z_{\exp})$ for targets at 300 cm and 400 cm
14
Discrete Model for Range Sensors
Instead of densities, consider discrete steps along the sensor beam
Sonar
Laser
15
Sensor Model Likelihood
z
P(z|x,m)
16
Scan Sensor Model
The beam sensor model assumes independence between beams and between
physical causes of measurements, and it has some issues:
• Overconfident because of the independence assumptions
• Needs parameters learned from data
• A different model should be learned for different angles w.r.t. obstacles
• Inefficient because it uses ray tracing
The Scan sensor model simplifies with:
• Gaussian distribution with mean at distance to closest obstacle,
• Uniform distribution for random measurements, and
• Small uniform distribution for max range measurements
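The likelihood-field idea can be sketched in a few lines: precompute, for every map cell, the distance to the closest obstacle, then score each scan endpoint with a Gaussian of that distance. This is a minimal brute-force sketch on a tiny hypothetical grid; a real implementation would use a proper distance transform (e.g., scipy.ndimage.distance_transform_edt):

```python
import numpy as np

def distance_field(occupancy, resolution=1.0):
    """Distance from each cell to the closest occupied cell (brute force)."""
    obs = np.argwhere(occupancy)
    field = np.zeros(occupancy.shape)
    for idx in np.ndindex(occupancy.shape):
        field[idx] = np.sqrt(((obs - idx) ** 2).sum(axis=1)).min() * resolution
    return field

def scan_likelihood_field(endpoints, field, sigma=0.5):
    """Likelihood of scan endpoints (grid coordinates) under a Gaussian
    on the distance to the closest obstacle."""
    logp = 0.0
    for i, j in endpoints:
        logp += -0.5 * (field[i, j] / sigma) ** 2
    return np.exp(logp)

occupancy = np.zeros((5, 5), dtype=bool)
occupancy[2, 2] = True                    # a single obstacle cell
field = distance_field(occupancy)
print(scan_likelihood_field([(2, 2), (2, 3)], field))
```

Note how the score depends only on the endpoint positions, not on the beam geometry: this is what makes the model efficient but also what makes it ignore the physical properties of the beams.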
17
Scan Sensor Model Example
Map m
Likelihood field
P(z|x,m)
18
San Jose Tech Museum
Occupancy grid map Likelihood field
19
Scan Matching via Likelihood Field
Extract the likelihood field from a scan and use it to match a different scan:
20
Scan Matching via Likelihood Field
Extract the likelihood field from a scan and use it to match a different scan:
• Highly efficient, uses 2D tables only.
• Smooth with respect to small changes
in robot position
• Allows gradient descent pose optimization
• Ignores physical properties of beams.
In this video we are doing more than simple scan matching …
However, it does not work with sonars …
21
Landmarks
Landmark sensors provide
• Distance (or)
• Bearing (or)
• Distance and bearing
Can be obtained via
• Active beacons (e.g., radio, GPS)
• Passive (e.g., visual, retro-reflective)
The standard approach is triangulation.
22
Landmark Models with Uncertainty
Explicitly modeling uncertainty in sensing is key to robustness:
• Determine parametric model for
noise free measurement
• Analyze sources of noise (e.g., distance and angle)
• Add adequate noise to the parameters
(possibly mixing in densities for noise)
• Learn (and verify) parameters by fitting the model to data
The likelihood of a measurement is given by "probabilistically comparing"
actual measurements against the expected ones.
23
Landmark Detection Model
For landmark $i$ in map $m$, i.e., $m(i)$, the measurement $z = (i, d, \alpha)$ for a robot at
position $(x, y, \theta)$ is given by

$$\hat{d} = \sqrt{(m_x(i) - x)^2 + (m_y(i) - y)^2}$$

$$\hat{\alpha} = \mathrm{atan2}(m_y(i) - y,\ m_x(i) - x) - \theta$$

$$p_{\det} = \mathrm{prob}(\hat{d} - d, \epsilon_d)\,\mathrm{prob}(\hat{\alpha} - \alpha, \epsilon_\alpha)$$

The detection probability might depend on the distance/bearing.
Then we have to take false positives into account too:

$$P(z \mid x, m) = z_{\det}\, p_{\det} + z_{\mathrm{fp}}\,\mathrm{uniform}(z \mid x, m)$$
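The comparison of expected vs. measured distance and bearing can be sketched as below. The noise parameters sigma_d and sigma_a are hypothetical, and for brevity the sketch drops the detection-probability and false-positive terms of the full model:

```python
import math

def landmark_likelihood(z, pose, landmarks, sigma_d=0.3, sigma_a=0.1):
    """Likelihood of a landmark measurement z = (i, d, alpha) for a robot
    at pose (x, y, theta), given known landmark positions."""
    i, d, alpha = z
    x, y, theta = pose
    mx, my = landmarks[i]
    d_hat = math.hypot(mx - x, my - y)                 # expected distance
    a_hat = math.atan2(my - y, mx - x) - theta         # expected bearing
    # wrap the bearing error into (-pi, pi]
    err_a = (a_hat - alpha + math.pi) % (2 * math.pi) - math.pi
    def gauss(e, s):
        return math.exp(-0.5 * (e / s) ** 2) / (s * math.sqrt(2 * math.pi))
    return gauss(d_hat - d, sigma_d) * gauss(err_a, sigma_a)

landmarks = {0: (4.0, 0.0)}   # hypothetical map: one landmark at (4, 0)
print(landmark_likelihood((0, 4.0, 0.0), (0.0, 0.0, 0.0), landmarks))
```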
24
RoboCup Example
𝑃(𝑧1, 𝑧2, 𝑧3|𝑥,𝑚)
𝑃(𝑧1|𝑥,𝑚) 𝑃(𝑧2|𝑥,𝑚) 𝑃(𝑧3|𝑥,𝑚)
25
Localization with Known Map
Motion Model
(Kinematics)
Sensor Model
Bayesian Filtering
26
Dynamic Bayesian Networks and Localization
Filtering: $p(x_{t-1} \mid Z_{1:t-1}, U_{1:t-1}, l_1, \dots, l_N) \;\rightarrow\; p(x_t \mid Z_{1:t}, U_{1:t}, l_1, \dots, l_N)$
1 2
u2
mapL1
z1 z2
L2 L3
z3 z4
L4
3
u2
L5
z6 z7
L6
z5
pose
Motion Model
Sensor Model
27
Bayesian Filtering Framework
We want to compute an estimate of the posterior probability of the robot state $x_t$
from the stream of information about movements and sensor readings:

$$Bel(x_t) = P(x_t \mid u_1, z_1, \dots, u_t, z_t, m), \qquad d_t = \{u_1, z_1, \dots, u_t, z_t\}$$

In particular we assume known:
• The prior probability of the system state $P(x_0)$
• The motion model $P(x' \mid x, u)$
• The sensor model $P(z \mid x, m)$
28
Markov Assumptions
Underlying assumptions behind Bayes filtering:
• Perfect model, no approximation errors
• Static and stationary world
• Independent noise

$$p(x_t \mid x_{1:t-1}, z_{1:t-1}, u_{1:t}) = p(x_t \mid x_{t-1}, u_t)$$

$$p(z_{t+1} \mid x_{0:t+1}, z_{1:t}, u_{1:t+1}) = p(z_{t+1} \mid x_{t+1})$$

The map is known as well; it is omitted here for simplicity …
29
Bayes Filters
z = observation, u = action, x = state, m = map

$$
\begin{aligned}
Bel(x_t) &= P(x_t \mid u_1, z_1, \dots, u_t, z_t, m)\\
&\overset{\text{Bayes}}{=} \eta\,P(z_t \mid x_t, u_1, z_1, \dots, u_t, m)\,P(x_t \mid u_1, z_1, \dots, u_t, m)\\
&\overset{\text{Markov}}{=} \eta\,P(z_t \mid x_t, m)\,P(x_t \mid u_1, z_1, \dots, u_t, m)\\
&\overset{\text{Total prob.}}{=} \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_1, z_1, \dots, u_t, x_{t-1}, m)\,P(x_{t-1} \mid u_1, z_1, \dots, u_t, m)\,dx_{t-1}\\
&\overset{\text{Markov}}{=} \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,P(x_{t-1} \mid u_1, z_1, \dots, z_{t-1}, m)\,dx_{t-1}\\
&= \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1})\,dx_{t-1}
\end{aligned}
$$
30
Bayes Filter Algorithm

$$Bel(x_t \mid m) = \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$

Algorithm Bayes_filter( Bel(x), d ):
  if d is a perceptual data item z then
    for all x do
      Bel'(x) = P(z|x) Bel(x)
    normalize Bel'(x)
  else if d is an action data item u then
    for all x do
      Bel'(x) = ∫ P(x|u,x') Bel(x') dx'
  return Bel'(x)
How to represent
such belief?
Based on such representation:
• Discrete filters
• Kalman filters
• Sigma-point filters
• Particle filters
• ...
31
Piecewise Constant Approximation
$P(z \mid x)$ and $Bel(x_0)$:
$$Bel'(x_0) = P(z \mid x_0)\,Bel(x_0)$$
32
Piecewise Constant Approximation
$Bel(x_0)$ and $P(z \mid x)$:
$$Bel'(x_0) = P(z \mid x_0)\,Bel(x_0)$$
$$Bel(x_1) = \int P(x_1 \mid x_0, u_1)\,Bel'(x_0)\,dx_0$$
$$Bel'(x_1) = P(z \mid x_1)\,Bel(x_1)$$
33
Discrete Bayesian Filter Algorithm
Algorithm Discrete_Bayes_filter( Bel(x), d ):
  η = 0
  if d is a perceptual data item z then
    for all x do
      Bel'(x) = P(z|x) Bel(x)
      η += Bel'(x)
    for all x do
      Bel'(x) = η⁻¹ Bel'(x)
  else if d is an action data item u then
    for all x do
      Bel'(x) = Σ_{x'} P(x|u,x') Bel(x')
  return Bel'(x)
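A minimal runnable version of the discrete filter on a toy 3-state grid. The transition matrix and likelihood vector below are hypothetical numbers chosen only to illustrate one action update followed by one perceptual update:

```python
import numpy as np

def discrete_bayes_filter(bel, kind, P_z_given_x=None, P_trans=None):
    """One step of the discrete Bayes filter over a finite state grid.

    kind="z": perceptual update with likelihood vector P_z_given_x[x] = P(z|x)
    kind="u": action update with transition matrix P_trans[x, x'] = P(x|u, x')
    """
    if kind == "z":
        bel_new = P_z_given_x * bel
        bel_new /= bel_new.sum()          # eta^-1 normalization
    else:
        bel_new = P_trans @ bel           # Bel'(x) = sum_x' P(x|u,x') Bel(x')
    return bel_new

bel = np.array([1 / 3, 1 / 3, 1 / 3])     # uniform prior over 3 cells
P_trans = np.array([[0.1, 0.0, 0.8],      # columns sum to 1: noisy "move"
                    [0.8, 0.1, 0.1],
                    [0.1, 0.9, 0.1]])
bel = discrete_bayes_filter(bel, "u", P_trans=P_trans)
bel = discrete_bayes_filter(bel, "z", P_z_given_x=np.array([0.1, 0.8, 0.1]))
print(bel.round(3))
```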
34
Tips and Tricks
Belief update upon sensory input and normalization iterates over all cells
• When the belief is peaked (e.g., during position tracking), avoid
updating irrelevant parts.
• Do not update entire sub-spaces of the state space and monitor
whether the robot is de-localized or not by considering likelihood
of observations given the active components
To update the belief upon robot motions, assume a bounded Gaussian model to
reduce the update from O(n²) to O(n):
• Update by shifting the data in the grid according to measured motion
• Then convolve the grid using a Gaussian Kernel.
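The shift-then-convolve trick above can be sketched in one dimension; the grid size, displacement, and 3-tap kernel (a truncated Gaussian) are hypothetical:

```python
import numpy as np

def motion_update(bel, shift, kernel):
    """O(n) motion update: shift the grid by the commanded displacement,
    then blur with a truncated Gaussian kernel for motion noise."""
    bel = np.roll(bel, shift)                  # deterministic part of the motion
    bel = np.convolve(bel, kernel, mode="same")
    return bel / bel.sum()                     # renormalize (edges may lose mass)

bel = np.zeros(10)
bel[4] = 1.0                                   # peaked belief at cell 4
kernel = np.array([0.1, 0.8, 0.1])             # bounded Gaussian approximation
bel = motion_update(bel, 2, kernel)            # commanded motion: 2 cells right
print(bel.round(2))
```

The convolution touches each cell a constant number of times (kernel width), which is where the O(n) cost comes from.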
35
Grid Based Localization
36
Bayes Filter Algorithm

Algorithm Bayes_filter( Bel(x), d ):
  if d is a perceptual data item z then
    for all x do
      Bel'(x) = P(z|x) Bel(x)
    normalize Bel'(x)
  else if d is an action data item u then
    for all x do
      Bel'(x) = ∫ P(x|u,x') Bel(x') dx'
  return Bel'(x)
How to represent
such belief?
Based on such representation:
• Discrete filters
• Kalman filters
• Sigma-point filters
• Particle filters
• ...
$$Bel(x_t \mid m) = \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$
37
Bayes Filter Reminder
Prediction:
$$\overline{Bel}(x_t \mid m) = \int p(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$
Correction/Update:
$$Bel(x_t \mid m) = \eta\,p(z_t \mid x_t, m)\,\overline{Bel}(x_t \mid m)$$
Together:
$$Bel(x_t \mid m) = \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$
38
Localization with Known Map
Update
Prediction
Update
Prediction
39
Bayes Filter Reminder
Prediction:
$$\overline{Bel}(x_t \mid m) = \int p(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$
Correction/Update:
$$Bel(x_t \mid m) = \eta\,p(z_t \mid x_t, m)\,\overline{Bel}(x_t \mid m)$$
Can we compute the integrals (η is an integral too) in closed form
for continuous distributions? NO!
Is there any continuous distribution for which this is possible? YES!
40
Gaussian Distribution
Univariate: $p(x) \sim N(\mu, \sigma^2)$:
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

Multivariate: $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\,e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$
41
Properties of Gaussian Distribution
Univariate: $p(x) \sim N(\mu, \sigma^2)$:
$$p(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\,e^{-\frac{1}{2}\frac{(x-\mu)^2}{\sigma^2}}$$

Affine transformation:
$$X \sim N(\mu, \sigma^2),\; Y = aX + b \;\Rightarrow\; Y \sim N(a\mu + b,\; a^2\sigma^2)$$

Product of densities:
$$\left.\begin{aligned} X_1 &\sim N(\mu_1, \sigma_1^2)\\ X_2 &\sim N(\mu_2, \sigma_2^2)\end{aligned}\right\} \Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\frac{\sigma_2^2}{\sigma_1^2+\sigma_2^2}\,\mu_1 + \frac{\sigma_1^2}{\sigma_1^2+\sigma_2^2}\,\mu_2,\;\left(\frac{1}{\sigma_1^2}+\frac{1}{\sigma_2^2}\right)^{-1}\right)$$
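The product rule above can be spot-checked numerically; a small sketch with arbitrary example values:

```python
def gaussian_product(mu1, var1, mu2, var2):
    """Mean and variance of the (renormalized) product of two
    univariate Gaussian densities."""
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = (var2 * mu1 + var1 * mu2) / (var1 + var2)
    return mu, var

# Equal variances: the mean lands halfway, the variance is halved
mu, var = gaussian_product(0.0, 1.0, 4.0, 1.0)
print(mu, var)   # → 2.0 0.5
```

This is exactly the structure of the Kalman correction step: the posterior variance is always smaller than either input variance.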
42
Properties of Gaussian Distribution

Multivariate: $p(\mathbf{x}) \sim N(\boldsymbol{\mu}, \Sigma)$:
$$p(\mathbf{x}) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}\,e^{-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T \Sigma^{-1} (\mathbf{x}-\boldsymbol{\mu})}$$

Affine transformation:
$$X \sim N(\mu, \Sigma),\; Y = AX + B \;\Rightarrow\; Y \sim N(A\mu + B,\; A\Sigma A^T)$$

Product of densities:
$$\left.\begin{aligned} X_1 &\sim N(\mu_1, \Sigma_1)\\ X_2 &\sim N(\mu_2, \Sigma_2)\end{aligned}\right\} \Rightarrow\; p(X_1)\cdot p(X_2) \sim N\!\left(\Sigma_2(\Sigma_1+\Sigma_2)^{-1}\mu_1 + \Sigma_1(\Sigma_1+\Sigma_2)^{-1}\mu_2,\;\left(\Sigma_1^{-1}+\Sigma_2^{-1}\right)^{-1}\right)$$
43
Discrete Time Kalman Filter
$$x_t = A_t\,x_{t-1} + B_t\,u_t + \varepsilon_t$$
$$z_t = C_t\,x_t + \delta_t$$

• $A_t$ (n x n) describes how the state evolves from t-1 to t without controls or noise
• $B_t$ (n x l) describes how the control $u_t$ changes the state from t-1 to t
• $C_t$ (k x n) describes how to map the state $x_t$ to an observation $z_t$
• $\varepsilon_t$, $\delta_t$: random variables representing process and measurement noise, assumed
independent and normally distributed with covariances $R_t$ and $Q_t$ respectively.
t
44
Linear Gaussian Systems
Initial belief is normally distributed:
$$Bel(x_0) = N(x_0;\, \mu_0, \Sigma_0)$$
Dynamics are a linear function of state and control plus additive noise:
$$x_t = A_t\,x_{t-1} + B_t\,u_t + \varepsilon_t \qquad p(x_t \mid u_t, x_{t-1}) = N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$$
Observations are a linear function of state plus additive noise:
$$z_t = C_t\,x_t + \delta_t \qquad p(z_t \mid x_t) = N(z_t;\, C_t x_t,\, Q_t)$$
45
Linear Gaussian System: Prediction
Prediction:
$$\overline{Bel}(x_t) = \int p(x_t \mid u_t, x_{t-1})\,Bel(x_{t-1})\,dx_{t-1}$$
with $p(x_t \mid u_t, x_{t-1}) \sim N(x_t;\, A_t x_{t-1} + B_t u_t,\, R_t)$ and $Bel(x_{t-1}) \sim N(x_{t-1};\, \mu_{t-1}, \Sigma_{t-1})$:
$$\overline{Bel}(x_t) = \eta \int \exp\!\left\{-\tfrac{1}{2}(x_t - A_t x_{t-1} - B_t u_t)^T R_t^{-1} (x_t - A_t x_{t-1} - B_t u_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_{t-1} - \mu_{t-1})^T \Sigma_{t-1}^{-1} (x_{t-1} - \mu_{t-1})\right\} dx_{t-1}$$
$$\overline{Bel}(x_t):\quad \bar{\mu}_t = A_t\,\mu_{t-1} + B_t\,u_t, \qquad \bar{\Sigma}_t = A_t\,\Sigma_{t-1}\,A_t^T + R_t$$
Closed form prediction step
46
Linear Gaussian System: Observation
Update:
$$Bel(x_t) = \eta\,p(z_t \mid x_t)\,\overline{Bel}(x_t)$$
with $p(z_t \mid x_t) \sim N(z_t;\, C_t x_t,\, Q_t)$ and $\overline{Bel}(x_t) \sim N(x_t;\, \bar{\mu}_t, \bar{\Sigma}_t)$:
$$Bel(x_t) = \eta \exp\!\left\{-\tfrac{1}{2}(z_t - C_t x_t)^T Q_t^{-1} (z_t - C_t x_t)\right\} \exp\!\left\{-\tfrac{1}{2}(x_t - \bar{\mu}_t)^T \bar{\Sigma}_t^{-1} (x_t - \bar{\mu}_t)\right\}$$
$$Bel(x_t):\quad \mu_t = \bar{\mu}_t + K_t (z_t - C_t \bar{\mu}_t), \qquad \Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$$
$$\text{with } K_t = \bar{\Sigma}_t\,C_t^T \left(C_t\,\bar{\Sigma}_t\,C_t^T + Q_t\right)^{-1}$$
Closed form update step
47
Kalman Filter Algorithm
Algorithm Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):
Prediction:
  $\bar{\mu}_t = A_t\,\mu_{t-1} + B_t\,u_t$
  $\bar{\Sigma}_t = A_t\,\Sigma_{t-1}\,A_t^T + R_t$
Correction:
  $K_t = \bar{\Sigma}_t\,C_t^T (C_t\,\bar{\Sigma}_t\,C_t^T + Q_t)^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - C_t\,\bar{\mu}_t)$
  $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$
Return μ_t, Σ_t
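The algorithm maps line-by-line onto matrix operations. A minimal sketch on a hypothetical 1D constant-velocity system (all matrices and noise levels below are made-up example values):

```python
import numpy as np

def kalman_filter(mu, Sigma, u, z, A, B, C, R, Q):
    """One prediction + correction step, following the slide's notation."""
    # Prediction
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + R
    # Correction
    K = Sigma_bar @ C.T @ np.linalg.inv(C @ Sigma_bar @ C.T + Q)
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # state: [position, velocity]
B = np.array([[0.0], [0.0]])            # no control input in this sketch
C = np.array([[1.0, 0.0]])              # observe position only
R = 0.01 * np.eye(2)                    # process noise covariance
Q = np.array([[0.25]])                  # measurement noise covariance

mu, Sigma = np.zeros(2), np.eye(2)
mu, Sigma = kalman_filter(mu, Sigma, np.zeros(1), np.array([1.0]),
                          A, B, C, R, Q)
print(mu.round(3))
```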
48
Kalman Filter Algorithm
Algorithm Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):
Prediction:
  $\bar{\mu}_t = A_t\,\mu_{t-1} + B_t\,u_t$
  $\bar{\Sigma}_t = A_t\,\Sigma_{t-1}\,A_t^T + R_t$
Correction:
  $K_t = \bar{\Sigma}_t\,C_t^T (C_t\,\bar{\Sigma}_t\,C_t^T + Q_t)^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - C_t\,\bar{\mu}_t)$
  $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$
Return μ_t, Σ_t

• Polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2)
• Optimal for linear Gaussian systems ☺
• Most robotics systems are nonlinear 
• It represents unimodal distributions
49
How to Deal with Non Linear Dynamic Systems?
Gaussian noise in linear systems:
$$x_t = A_t\,x_{t-1} + B_t\,u_t + \varepsilon_t \qquad z_t = C_t\,x_t + \delta_t$$
50
How to Deal with Non Linear Dynamic Systems?
Gaussian noise in non-linear systems:
$$x_t = g(u_t, x_{t-1}) \qquad z_t = h(x_t)$$
51
Extended Kalman Filter (First order Taylor approximation)
Gaussian noise in non-linear systems:
$$x_t = g(u_t, x_{t-1}) \qquad z_t = h(x_t)$$

Prediction:
$$g(u_t, x_{t-1}) \approx g(u_t, \mu_{t-1}) + \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}\,(x_{t-1} - \mu_{t-1}) = g(u_t, \mu_{t-1}) + G_t\,(x_{t-1} - \mu_{t-1})$$

Correction:
$$h(x_t) \approx h(\bar{\mu}_t) + \frac{\partial h(\bar{\mu}_t)}{\partial x_t}\,(x_t - \bar{\mu}_t) = h(\bar{\mu}_t) + H_t\,(x_t - \bar{\mu}_t)$$
56
EKF Algorithm
Extended_Kalman_filter( μ_{t-1}, Σ_{t-1}, u_t, z_t ):
Prediction:
  $\bar{\mu}_t = g(u_t, \mu_{t-1})$
  $\bar{\Sigma}_t = G_t\,\Sigma_{t-1}\,G_t^T + R_t$
Correction:
  $K_t = \bar{\Sigma}_t\,H_t^T (H_t\,\bar{\Sigma}_t\,H_t^T + Q_t)^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - h(\bar{\mu}_t))$
  $\Sigma_t = (I - K_t H_t)\,\bar{\Sigma}_t$
Return μ_t, Σ_t

with Jacobians
$$G_t = \frac{\partial g(u_t, \mu_{t-1})}{\partial x_{t-1}}, \qquad H_t = \frac{\partial h(\bar{\mu}_t)}{\partial x_t}$$

For comparison, the linear Kalman filter uses:
  $\bar{\mu}_t = A_t\,\mu_{t-1} + B_t\,u_t$
  $\bar{\Sigma}_t = A_t\,\Sigma_{t-1}\,A_t^T + R_t$
  $K_t = \bar{\Sigma}_t\,C_t^T (C_t\,\bar{\Sigma}_t\,C_t^T + Q_t)^{-1}$
  $\mu_t = \bar{\mu}_t + K_t (z_t - C_t\,\bar{\mu}_t)$
  $\Sigma_t = (I - K_t C_t)\,\bar{\Sigma}_t$
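The only change from the linear filter is evaluating g, h and their Jacobians at the current mean. A sketch on a made-up 1D system with a nonlinear (quadratic) measurement function, chosen only to exercise the Jacobian machinery:

```python
import numpy as np

def ekf_step(mu, Sigma, u, z, g, G, h, H, R, Q):
    """One EKF step: Kalman filter with nonlinear g, h and their
    Jacobians G(u, mu), H(mu) evaluated at the current mean."""
    # Prediction
    mu_bar = g(u, mu)
    G_t = G(u, mu)
    Sigma_bar = G_t @ Sigma @ G_t.T + R
    # Correction
    H_t = H(mu_bar)
    K = Sigma_bar @ H_t.T @ np.linalg.inv(H_t @ Sigma_bar @ H_t.T + Q)
    mu_new = mu_bar + K @ (z - h(mu_bar))
    Sigma_new = (np.eye(len(mu)) - K @ H_t) @ Sigma_bar
    return mu_new, Sigma_new

g = lambda u, x: x + u                    # linear motion for simplicity
G = lambda u, x: np.array([[1.0]])
h = lambda x: np.array([x[0] ** 2])       # nonlinear measurement
H = lambda x: np.array([[2.0 * x[0]]])    # its Jacobian dh/dx
mu, Sigma = np.array([1.0]), np.array([[0.5]])
mu, Sigma = ekf_step(mu, Sigma, np.array([0.0]), np.array([1.2]),
                     g, G, h, H, R=np.array([[0.01]]), Q=np.array([[0.1]]))
print(mu.round(3))
```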
57
EKF and Friends
Extended Kalman Filter:
• Polynomial in measurement dimensionality k and state dimensionality n: O(k^2.376 + n^2)
• Not optimal, and can diverge if nonlinearities are large!
• Works surprisingly well even when all assumptions are violated!
• There are possible alternatives like the unscented transform (UKF) ...
EKF UKF
58
Bayes Filter Algorithm

Algorithm Bayes_filter( Bel(x), d ):
  if d is a perceptual data item z then
    for all x do
      Bel'(x) = P(z|x) Bel(x)
    normalize Bel'(x)
  else if d is an action data item u then
    for all x do
      Bel'(x) = ∫ P(x|u,x') Bel(x') dx'
  return Bel'(x)
How to represent
such belief?
Based on such representation:
• Discrete filters
• Kalman filters
• Sigma-point filters
• Particle filters
• ...
$$Bel(x_t \mid m) = \eta\,P(z_t \mid x_t, m)\int P(x_t \mid u_t, x_{t-1}, m)\,Bel(x_{t-1} \mid m)\,dx_{t-1}$$
59
Particle Filters
Represent belief by random samples
Estimation of non-Gaussian, nonlinear processes
• Monte Carlo filter
• Survival of the fittest
• Condensation
• Bootstrap filter
• Particle filter
Filtering: [Rubin, 88], [Gordon et al., 93], [Kitagawa 96]
Computer vision: [Isard and Blake 96, 98]
Dynamic Bayesian Networks: [Kanazawa et al., 95]
60
Importance Resampling
61
Importance Resampling
w = f/g
62
Importance Resampling (with smoothing)
63
Particle Filter Algorithm

$$Bel(x_t) = \eta\,p(z_t \mid x_t)\int p(x_t \mid x_{t-1}, u_{t-1})\,Bel(x_{t-1})\,dx_{t-1}$$

Draw $x^i_{t-1}$ from $Bel(x_{t-1})$, then draw $x^i_t$ from $p(x_t \mid x^i_{t-1}, u_{t-1})$.

Importance factor for $x^i_t$:
$$w^i_t = \frac{\text{target distribution } f}{\text{proposal distribution } g} = \frac{\eta\,p(z_t \mid x_t)\,p(x_t \mid x_{t-1}, u_{t-1})\,Bel(x_{t-1})}{p(x_t \mid x_{t-1}, u_{t-1})\,Bel(x_{t-1})} \propto p(z_t \mid x_t)$$
64
Particle Filter Algorithm

Algorithm particle_filter( S_{t-1}, u_{t-1}, z_t ):
  S_t = ∅, η = 0
  for i = 1 … n do                  // generate new samples
    sample index j(i) from the discrete distribution given by w_{t-1}
    sample $x^i_t$ from $p(x_t \mid x_{t-1}, u_{t-1})$ using $x^{j(i)}_{t-1}$ and $u_{t-1}$
    compute importance weight $w^i_t = p(z_t \mid x^i_t)$
    update normalization factor η += $w^i_t$
    insert ⟨$x^i_t$, $w^i_t$⟩ into S_t
  for i = 1 … n do
    normalize weights $w^i_t = w^i_t / η$
  Return S_t
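The resample-propagate-reweight loop above can be sketched on a hypothetical 1D toy problem. The motion and sensor noise levels, particle count, and fixed measurement are all made-up illustration values:

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(particles, weights, u, z, sample_motion, likelihood):
    """One SIR particle filter step: resample, propagate, reweight.

    sample_motion(x, u) draws from p(x_t | x_{t-1}, u);
    likelihood(z, x) evaluates p(z_t | x_t).
    """
    n = len(particles)
    # Resample index j(i) from the discrete distribution given by the weights
    idx = rng.choice(n, size=n, p=weights)
    # Propagate through the motion model, then compute importance weights
    particles = np.array([sample_motion(particles[j], u) for j in idx])
    weights = np.array([likelihood(z, x) for x in particles])
    weights /= weights.sum()              # normalize by eta
    return particles, weights

sample_motion = lambda x, u: x + u + rng.normal(0, 0.1)
likelihood = lambda z, x: np.exp(-0.5 * ((z - x) / 0.2) ** 2)
particles = rng.uniform(-5, 5, size=200)  # global uncertainty at start
weights = np.full(200, 1 / 200)
for _ in range(5):                        # stationary robot, sensor reads 0
    particles, weights = particle_filter(particles, weights, u=0.0, z=0.0,
                                         sample_motion=sample_motion,
                                         likelihood=likelihood)
print(np.average(particles, weights=weights).round(2))
```

After a few iterations the particle cloud collapses around the measured position, which is the sample-based analogue of the peaked beliefs in the grid-based examples.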
65
Sensor Information: Importance Sampling
$$w = \frac{p(z \mid x)\,\overline{Bel}(x)}{\overline{Bel}(x)} \propto p(z \mid x)$$
$$Bel(x) \propto p(z \mid x)\,\overline{Bel}(x)$$
66
Robot Motion: Prediction
$$\overline{Bel}(x) \leftarrow \int p(x \mid u, x')\,Bel(x')\,dx'$$
67
Sensor Information: Importance Sampling
$$w = \frac{p(z \mid x)\,\overline{Bel}(x)}{\overline{Bel}(x)} \propto p(z \mid x)$$
$$Bel(x) \propto p(z \mid x)\,\overline{Bel}(x)$$
68
Robot Motion: Prediction
$$\overline{Bel}(x) \leftarrow \int p(x \mid u, x')\,Bel(x')\,dx'$$
69
Start
Monte Carlo Localization with Laser
Stochastic motion model
Range sensor model
Laser sensor
Sonar sensor
70
Sample-based Localization (sonar)
71
(slides 72–89: image-only frames of the sample-based localization animation)
RoboCup Example
𝑃(𝑧1|𝑥,𝑚) 𝑃(𝑧2|𝑥,𝑚) 𝑃(𝑧3|𝑥,𝑚)
Weighted samples
After resampling
90
Localization for AIBO robots
91
Project Minerva
92
Using Ceiling Maps for Localization
93
Vision-based Localization
P(z|x)
h(x)
z
94
Under a Light
Measurement z: P(z|x):
95
Next to a Light
Measurement z: P(z|x):
96
Next to a Light
Measurement z: P(z|x):
97
Elsewhere
Measurement z: P(z|x):
98
Global Localization Using Vision