

Camera/Laser/GPS Fusion Method for Vehicle Positioning Under Extended NIS-Based Sensor Validation

Lijun Wei, Cindy Cappelle, and Yassine Ruichek

Abstract—Vehicle localization and autonomous navigation consist of precisely positioning a vehicle on the road by the use of different kinds of sensors. This paper presents a vehicle localization method integrating a stereoscopic system, a laser range finder (LRF) and a global localization sensor (GPS). For more accurate LRF-based vehicle motion estimation, an outlier-rejection iterative closest point (OR-ICP) method is proposed to reduce the matching ambiguities of scan alignment. The fusion approach starts with a sensor selection step that validates the coherence of the observations from the different sensors. The information provided by the validated sensors is then fused with an unscented information filter. To demonstrate its performance, the proposed multisensor localization method is tested with real data and evaluated against RTK-GPS data as ground truth. The fusion approach also facilitates the incorporation of more sensors if needed.

Index Terms—GPS, laser range finder, multisensor fusion, sensor coherence, stereovision, vehicle localization.

I. Vehicle Localization Problem

OVER the past three decades, intelligent vehicle systems have continued to be an important research topic for constructing innovative transportation methods. In order to improve the accuracy of assisted vehicle navigation systems, so as to guarantee driving security and service continuity on the road, it is essential to know the absolute/relative position and orientation of the vehicle at all times. To ensure this function, different types of sensors have been used: dead-reckoning (DR) sensors, such as gyrometers, inertial navigation systems (INS) and wheel encoders (odometers), can estimate the relative motion of the vehicle, while global sensors such as the global positioning system (GPS) [2] can provide the absolute position of the vehicle. Environment perception sensors, such as laser range finders [3] and camera systems [4], have also been used to provide DR information. Camera-based visual odometry [5], [6] can estimate the camera motion from consecutive frames [7]. The advantage of GPS lies in its long-term precision.

Manuscript received August 9, 2012; revised February 6, 2013; accepted February 7, 2013. Date of publication August 15, 2013; date of current version October 7, 2013. The Associate Editor coordinating the review process was Dr. Jesús Ureña.

The authors are with IRTES-SET, UTBM, Belfort Cedex 90010, France (e-mail: [email protected]; [email protected]; [email protected]).

Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.

Digital Object Identifier 10.1109/TIM.2013.2265476

However, as the GPS signals are affected by atmospheric conditions, satellite positions, radio signal noise, etc., the short-term localization accuracy is only a few meters. When a GPS receiver-equipped vehicle is driven along a street with tall buildings around, the GPS signals might be blocked or reflected by objects around the antenna. In the case of GPS reflection, the pseudo-ranges provided by the reflected signals are longer than the real ranges. If these contaminated ranges are used for position estimation, the localization result is erroneous. On the contrary, the vehicle motion estimated by the DR sensors is mostly accurate in the short term, though the vehicle trajectory might drift in the long term due to error accumulation.

Considering the drawbacks and advantages of every sensor, it is better to fuse the information from the different sensors within a probabilistic framework. Based on Bayesian theory, the Kalman filter and its derivations [e.g., EKF, sigma-point KF, and the interacting-multiple-model (IMM) system] [11]–[13] have been the most widely used methods for data fusion, under the assumption that both the process noise and the observation noise obey Gaussian distributions. The current vehicle state is predicted from the previous state and an input vector according to the vehicle process model and the process noise. Then, the observations from the different sensors are used to correct this prediction by taking into account their uncertainties. These observations are supposed to be independent.

The information filter (IF) [10] deals with the information state vector and the information matrix (the inverse of the covariance matrix) associated with Fisher information. As the information is related to the underlying likelihood of the vehicle state, the inverse-covariance form of the IF is able to update the prediction by directly summing the information state vectors and information matrices of multiple observations. The maximum size of the matrix to be inverted in the IF is the dimension of the state vector. When the number of observations increases considerably, the update part of the IF is simpler than the usually used group-sensor approach [8]. For nonlinear systems, the extended IF (EIF) [14]–[16] is used, based on a first-order Taylor-series approximation of the system model. The nonlinear system can be better approximated by embedding an unscented transformation (UIF) [17], [18].

In this paper, the authors propose to localize a vehicle by using an unscented IF to fuse the information from a GPS receiver, an onboard stereoscopic system, and an onboard laser range finder.



Fig. 1. Overview of the proposed UIF-based vehicle localization method.

The proposed approach is described in Fig. 1: a process model, considered as a virtual sensor, is first used to predict the vehicle pose; then, the LRF and the stereoscopic system estimate the vehicle ego-motion from laser scans and images, respectively, and the GPS receiver provides the absolute position of the vehicle. After that, the coherence of the different sensor measurements is validated, and the prediction from the process model and the measurements from the three sensors are integrated only if they pass the extended normalized innovation squared (NIS) checking test. If one of the sensors fails to work, a solution is given by the other sensors.

Based on our previous work [19], we propose the following extensions: 1) instead of using visual odometry for state prediction, a constant-velocity process model is applied (Section III-B); 2) in addition to the uncertainty of landmarks, the uncertainty of the minimization processes is also taken into account in estimating the uncertainty of the LRF and stereoscopic system measurements (Sections III-C and III-D); and 3) when more than one sensor is incorporated into the system, a sensor selection step is applied before combining all the information together, so that the fusion result is not ruined by a failed sensor: the coherence between the different measurements is validated and the faulty sensor observation is removed (Section II-C). The benefit of this fusion approach is that it directly integrates multiple sensor information for correction, supersedes the GPS errors, and reduces the error accumulation from the stereoscopic system and the LRF. Information from other sensors can also be easily incorporated into the system if needed.

The structure of this paper is organized as follows. Section II introduces the theory of the unscented IF and the proposed sensor coherence validation approach; Section III details the UIF-based vehicle localization method fusing multiple sensors; and Section IV presents experimental results with real data acquired by our experimental vehicle. Finally, conclusions and perspectives are presented in Section V.

II. Theory of Unscented IF

The vehicle process model can be written as $X_t = f(X_{t-1}, u_t, \delta t) + \alpha_t$, where $X_t$ denotes the state vector at time instant $t$, $u_t$ is the input vector, $\delta t$ is the time interval, and $\alpha_t = G_t r_t$ is the process noise, where $G_t$ is an $n \times m$ noise matrix ($n$ is the size of the state, $m$ is the size of the noise) and $r_t \sim N(0, R_t)$ is a zero-mean Gaussian variable describing the uncertainty of the process model. A measurement $s_t$ is written as $s_t = h(X_t, t) + q_t$, where $h$ is the measurement model and $q_t \sim N(0, Q_t)$ is the zero-mean Gaussian measurement noise at instant $t$ [20].

The Kalman filter represents the Gaussian belief with the mean and covariance of the state, while the IF represents the Gaussian in the form of Fisher information, with an information matrix and an information state. Both representations are duals of each other, and each can be recovered from the other by matrix inversion [20].

Although the prediction stage is more complex for the IF than for the Kalman filter, the IF has the advantage that the update stage is computationally easier because no gain or innovation covariance matrices need to be calculated. When the number of observations increases considerably, the update part of the IF is simpler than the usually used group-sensor approach [8]. Information filters for linear and nonlinear systems are respectively presented in the following sections.

A. IF for Linear Process Model

1) Information State/Matrix Prediction: With a linear state transition model $X_t = F_t X_{t-1} + U_t u_t + G_t r_t$, the previous information vector $if_{t-1}$ and information matrix $IF_{t-1}$, the current information vector $\widetilde{if}_t$ can be predicted by [8]

$$A_t = (F_t^{-1})^T \, IF_{t-1} \, F_t^{-1}$$
$$E_t = G_t^T A_t G_t + R_t^{-1}$$
$$B_t = A_t G_t E_t^{-1}$$
$$\widetilde{IF}_t = A_t - B_t E_t B_t^T$$
$$\widetilde{if}_t = \left[I - B_t G_t^T\right](F_t^{-1})^T \, if_{t-1} + \widetilde{IF}_t \, U_t u_t \qquad (1)$$

where $F_t$ is the linear process model and is supposed to be invertible, $U_t$ is the input matrix of $u_t$, $\widetilde{if}_t$ is the predicted information state vector and $\widetilde{IF}_t$ is the predicted information matrix.

2) Information State/Matrix Update: For a sensor measurement with a linear measurement model $s_t = H_t X_t + q_t$, its information vector $i_t$ and information matrix $I_t$ at instant $t$ can be obtained by

$$i_t = H_t^T Q_t^{-1} s_t, \qquad I_t = H_t^T Q_t^{-1} H_t \qquad (2)$$

where $H_t$ is the linear observation model matrix and $Q_t$ is the observation uncertainty. The information state vector and information matrix can be obtained by combining the prediction and the contribution from the observation sensor

$$if_t = \widetilde{if}_t + i_t, \qquad IF_t = \widetilde{IF}_t + I_t. \qquad (3)$$

The vehicle state $X_{(t|t)}$ and covariance $P_{(t|t)}$ can be derived from the information vector and information matrix through $X_{(t|t)} = (IF_t)^{-1}\, if_t$ and $P_{(t|t)} = (IF_t)^{-1}$.
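To make the linear information-filter cycle of (1)–(3) concrete, the following NumPy sketch implements the prediction and a single-sensor update. It assumes an invertible transition matrix $F_t$; the function and variable names are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def if_predict(if_prev, IF_prev, F, U, u, G, R):
    """Information prediction of (1) for a linear process model."""
    Fi = np.linalg.inv(F)                      # F_t is assumed invertible
    A = Fi.T @ IF_prev @ Fi
    E = G.T @ A @ G + np.linalg.inv(R)
    B = A @ G @ np.linalg.inv(E)
    IF_pred = A - B @ E @ B.T
    if_pred = (np.eye(F.shape[0]) - B @ G.T) @ Fi.T @ if_prev + IF_pred @ U @ u
    return if_pred, IF_pred

def if_update(if_pred, IF_pred, s, H, Q):
    """Add one sensor's information contribution, as in (2)-(3)."""
    Qi = np.linalg.inv(Q)
    return if_pred + H.T @ Qi @ s, IF_pred + H.T @ Qi @ H

# The state and covariance are recovered by X = IF^{-1} if and P = IF^{-1}.
```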


B. IF for Nonlinear Process Model With Unscented Transform

For a nonlinear system, instead of linearizing the process and observation models, an unscented transform is embedded to approximate the prediction and observation.

1) Information State/Matrix Prediction: The previous state and covariance are augmented by the mean and covariance of the process noise $r_t \sim N(0, R_t)$. Then, the unscented transform chooses a fixed number of sigma points from the original distribution, propagates the sigma points through the nonlinear process model with the noise variable, and estimates the mean and covariance of the current state on the basis of these sigma points and their weights, with

$$m_{t-1} = \begin{bmatrix} X_{(t-1|t-1)} \\ r_t \end{bmatrix}, \qquad P_{t-1} = \begin{bmatrix} P_{(t-1|t-1)} & 0 \\ 0 & R_t \end{bmatrix}. \qquad (4)$$

A set of $(2n+1)$ sigma points $X_{t-1}^{0:2n}$ is generated by

$$X_{t-1}^{(0)} = m_{t-1}$$
$$X_{t-1}^{(i)} = m_{t-1} + \left[\sqrt{(n+\lambda)P_{t-1}}\right]_i, \quad i = 1, \ldots, n$$
$$X_{t-1}^{(i)} = m_{t-1} - \left[\sqrt{(n+\lambda)P_{t-1}}\right]_{i-n}, \quad i = n+1, \ldots, 2n \qquad (5)$$

where $n$ is the size of the state vector, $\lambda$ is the scaling factor defined by $\lambda = \alpha^2(n+\kappa) - n$, and $\alpha$ and $\kappa$ are positive constants controlling the spread of the sigma points. $[\sqrt{(n+\lambda)P_{t-1}}]_i$ is the $i$th column of the matrix square root. The matrix square root of $P_{t-1}$ can be calculated by Cholesky decomposition [11].

The associated weights of the state of the sigma points $w_m^{(i)}$, and the weights of the covariance of the sigma points $w_c^{(i)}$, can be calculated by

$$w_m^{(0)} = \lambda/(n+\lambda), \qquad w_c^{(0)} = \lambda/(n+\lambda) + (1 - \alpha^2 + \beta)$$
$$w_m^{(i)} = w_c^{(i)} = 1/(2(n+\lambda)), \quad i = 1, \ldots, 2n \qquad (6)$$

where $\beta$ is a positive constant related to the distribution of the state vector. Then, the sigma points are propagated to obtain the current sigma point state $X_{(t|t-1)}^{(i)}$ by using the process transition model

$$X_{(t|t-1)}^{(i)} = f(X_{t-1}^{(i)}, u_t, \delta t), \quad i = 0, \ldots, 2n. \qquad (7)$$

The mean and covariance of the prediction $X_{(t|t-1)}^{(i)}$ are

$$X_{(t|t-1)} = \sum_{i=0}^{2n} w_m^{(i)} X_{(t|t-1)}^{(i)}$$
$$P_{(t|t-1)} = \sum_{i=0}^{2n} w_c^{(i)} \left[X_{(t|t-1)}^{(i)} - X_{(t|t-1)}\right]\left[X_{(t|t-1)}^{(i)} - X_{(t|t-1)}\right]^T. \qquad (8)$$

Then, the information state vector and information matrix can be predicted based on the predicted mean and covariance of the state

$$\widetilde{IF}_t = (P_{(t|t-1)})^{-1}, \qquad \widetilde{if}_t = \widetilde{IF}_t \, X_{(t|t-1)}. \qquad (9)$$
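As an illustration of (4)–(9), the sketch below generates the augmented sigma points, propagates them through a generic process function, and converts the predicted moments to information form. For brevity it assumes additive process noise of the same dimension as the state (i.e., $G_t = I$); the names, the default scaling parameters and the noise handling are assumptions, not the authors' code.

```python
import numpy as np

def sigma_points(m, P, alpha=1.0, beta=2.0, kappa=0.0):
    """2n+1 sigma points and their weights, following (5)-(6)."""
    n = m.size
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)          # matrix square root
    X = np.column_stack([m, m[:, None] + S, m[:, None] - S])
    wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
    wc = wm.copy()
    wm[0] = lam / (n + lam)
    wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return X, wm, wc

def uif_predict(x, P, R, f, u, dt):
    """Unscented prediction of the information form, following (4)-(9)."""
    n = x.size
    m_aug = np.concatenate([x, np.zeros(n)])                 # augmented mean (4)
    P_aug = np.block([[P, np.zeros((n, n))],
                      [np.zeros((n, n)), R]])
    X, wm, wc = sigma_points(m_aug, P_aug)
    Y = np.column_stack([f(Xi[:n], u, dt) + Xi[n:] for Xi in X.T])   # (7)
    x_pred = Y @ wm                                          # predicted mean (8)
    d = Y - x_pred[:, None]
    P_pred = (wc * d) @ d.T                                  # predicted covariance (8)
    IF_pred = np.linalg.inv(P_pred)                          # information form (9)
    return x_pred, P_pred, IF_pred @ x_pred, IF_pred
```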

2) Information State/Matrix Update: The predicted state $X_{(t|t-1)}$ and covariance $P_{(t|t-1)}$ can be approximated by a set of sigma points $\{X_{(t)}^{(i)}\}$, $i = 0, \ldots, 2n$, as in (5) and (6). Under the error propagation, the observation vector can be approximated by a set of measurement points $\{Y_t^{(i)}\}$, $i = 0, \ldots, 2n$, through the measurement model $h$

$$Y_t^{(i)} = h(X_{(t)}^{(i)}), \quad i = 0, \ldots, 2n. \qquad (10)$$

Then, the predicted measurement $\hat{s}_t$ is approximated by

$$\hat{s}_t = \sum_{i=0}^{2n} w_m^{(i)} Y_t^{(i)}. \qquad (11)$$

By assuming that the nonlinear measurement equation can be mapped into a function of its statistical estimates such as means and variances [17], the information matrix $I_t$ in (2) can be written as

$$I_t = H_t^T Q_t^{-1} H_t = P_{(t|t-1)}^{-1} P_{(t|t-1)} H_t^T Q_t^{-1} H_t P_{(t|t-1)}^T (P_{(t|t-1)}^T)^{-1}. \qquad (12)$$

By replacing $P_{(t|t-1)} H_t^T$ with $C_t$, the information matrix function is written as

$$I_t = P_{(t|t-1)}^{-1} C_t Q_t^{-1} C_t^T (P_{(t|t-1)}^{-1})^T \qquad (13)$$

where $C_t$ is the cross covariance of the state vector and the measurement vector. It is approximated by

$$C_t = \sum_{i=0}^{2n} w_c^{(i)} \left[X_{(t)}^{(i)} - X_{(t|t-1)}\right]\left[Y_t^{(i)} - \hat{s}_t\right]^T. \qquad (14)$$

Then, the information state vector is written as

$$i_t = H_t^T Q_t^{-1} s_t = H_t^T Q_t^{-1}\left[v_t + H_t X_{(t|t-1)}\right]$$
$$= P_{(t|t-1)}^{-1} P_{(t|t-1)} H_t^T Q_t^{-1}\left[v_t + H_t P_{(t|t-1)}^T (P_{(t|t-1)}^T)^{-1} X_{(t|t-1)}\right]$$
$$= P_{(t|t-1)}^{-1} C_t Q_t^{-1}\left[v_t + C_t^T (P_{(t|t-1)}^T)^{-1} X_{(t|t-1)}\right] \qquad (15)$$

where $s_t$ is the observation and $v_t = s_t - h(X_{(t|t-1)})$ is the innovation vector. A pseudo-measurement matrix $\mathcal{H}_t$ can be defined as in [17]

$$\mathcal{H}_t \equiv (P_{(t|t-1)}^{-1} C_t)^T. \qquad (16)$$

The information contribution $i_t$ in (15) and $I_t$ in (13) can then be respectively expressed as

$$i_t = \mathcal{H}_t^T Q_t^{-1}\left[v_t + \mathcal{H}_t X_{(t|t-1)}\right], \qquad I_t = \mathcal{H}_t^T Q_t^{-1} \mathcal{H}_t. \qquad (17)$$

3) With N Observation Sensors: The information state vector and information matrix can be obtained by a linear combination of the local information contributions from the considered sensors

$$if_t = \widetilde{if}_t + \sum_{sen=1}^{N} i_t^{sen}, \qquad IF_t = \widetilde{IF}_t + \sum_{sen=1}^{N} I_t^{sen} \qquad (18)$$

where $i_t^{sen}$ and $I_t^{sen}$ are the information state and information matrix of the $sen$th sensor, respectively.
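The following sketch combines (10)–(18) into one update routine: for each validated sensor it forms the statistical pseudo-measurement matrix from the cross covariance and adds the corresponding information contribution. It reuses the sigma_points helper from the prediction sketch above; the sensor tuple layout is an assumption for illustration only.

```python
import numpy as np

def uif_update(x_pred, P_pred, if_pred, IF_pred, sensors):
    """Unscented information update for N sensors, following (10)-(18).

    `sensors` is a list of (s, h, Q): measurement vector, measurement
    function and measurement covariance of each validated sensor.
    """
    X, wm, wc = sigma_points(x_pred, P_pred)
    P_inv = np.linalg.inv(P_pred)
    if_t, IF_t = if_pred.copy(), IF_pred.copy()
    for s, h, Q in sensors:
        Y = np.column_stack([h(Xi) for Xi in X.T])   # measurement sigma points (10)
        s_hat = Y @ wm                               # predicted measurement (11)
        dX = X - x_pred[:, None]
        dY = Y - s_hat[:, None]
        C = (wc * dX) @ dY.T                         # cross covariance (14)
        H = (P_inv @ C).T                            # pseudo-measurement matrix (16)
        Qi = np.linalg.inv(Q)
        v = s - s_hat                                # innovation
        if_t = if_t + H.T @ Qi @ (v + H @ x_pred)    # information contributions (17)-(18)
        IF_t = IF_t + H.T @ Qi @ H
    return if_t, IF_t
```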

C. Measurements Coherence Validation

During the multisensor fusion process, the measurements might be contaminated by large disturbances. These erroneous measurements would result in sequences of unreliable data. Accurately measuring the coherence of the different localization methods is important for highly reliable and accurate localization. The coherence of a sensor measurement with the other measurements should be validated to check whether it is unbiased; a biased measurement source should then be removed.


1) Normalized Innovation Squared (NIS): If $x_1, x_2, \ldots, x_k$ are $k$ independent standard normal random variables, the sum of their squares $d_x = \sum_{i=1}^{k} x_i^2$ is distributed as a Chi-squared variable with $k$ degrees of freedom, denoted $d_x \sim \chi^2(k)$. For a $k$-dimensional Gaussian random vector $s$ with mean $\bar{s}$ and covariance $P_s$, $s \sim N(\bar{s}, P_s)$, the variable

$$d_s = (s - \bar{s})^T P_s^{-1} (s - \bar{s}) \qquad (19)$$

can be considered as the squared sum of $k$ Gaussian random variables with mean 0 and variance 1; thus $d_s$ is Chi-squared distributed with $k$ degrees of freedom.

Based on the Chi-squared distribution, the normalized estimation error squared (NEES) test and the NIS test [1] are popular in probabilistic robotics to test the consistency of two statistical distributions. When a sensor measurement is available, as the true state is not known, the innovation between the observation data and the predicted state is supposed to obey a Gaussian distribution with covariance $I_v^t$ and is used to verify the coherence of the measurement with the prediction [21]. Let $vs_t$ denote the difference between the observed measurement $s_t$ and its predicted value $\hat{s}_t$ at time $t$

$$vs_t = s_t - \hat{s}_t. \qquad (20)$$

The covariance matrix of $vs_t$ is

$$I_v^t = H_t P_{(t|t-1)} H_t^T + Q_t \qquad (21)$$

where $P_{(t|t-1)}$ is the covariance of the prediction, $Q_t$ is the covariance of the measurement, and $H_t$ is the measurement model. The Mahalanobis distance $d_M$ between the prediction and the measurement is given by

$$d_M = vs_t^T (I_v^t)^{-1} vs_t. \qquad (22)$$

For an uncorrelated Gaussian process, $d_M$ follows a Chi-squared distribution $d_M \sim \chi^2(m)$, where $m$ is the dimension of the measurement vector (degrees of freedom). A measurement is rejected if $d_M$ lies outside the confidence region defined by the degrees of freedom in the $\chi^2$ table. Conversely, if $d_M$ is smaller than the threshold, the measurement is considered reliable and coherent with the prediction.
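A minimal sketch of the NIS gate of (19)–(22), using the chi-square quantile from SciPy as the confidence threshold; the 95% level and the function names are illustrative choices.

```python
import numpy as np
from scipy.stats import chi2

def nis_gate(s, s_pred, S, prob=0.95):
    """Return True if the measurement s is coherent with its prediction.

    s, s_pred : observed and predicted measurement vectors
    S         : innovation covariance, as in (21)
    """
    v = s - s_pred                              # innovation (20)
    d_M = v @ np.linalg.solve(S, v)             # Mahalanobis distance (22)
    return d_M <= chi2.ppf(prob, df=s.size)     # chi-square gate
```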

2) Extended NIS for Multiple Measurements Validation: When multiple sensors are used (e.g., GPS, odometer, gyrometer, cameras, LRF, etc.), besides the coherence between every sensor measurement and the process prediction, the coherence between the measurements also needs to be validated in case the process model prediction is not correct. Therefore, we propose an extended NIS measurements validation method to verify the coherence of the sensors when they provide observations for the same state element. The prediction from the process model is considered as a virtual sensor observation. It might be rejected if it is not coherent with all the other sensors' observations.

A set of parity relations is calculated for every two sensors, as shown in Fig. 2. This method is under the assumption that only one fault happens at a time. A fault in any one of the sensors causes a unique subset of these relations to increase; thus the faulty sensor can be detected and its measurement rejected [22].

Fig. 2. Extended NIS between multiple sensors.

In the table of Fig. 2, $S_i$ denotes a sensor, $s_t^i$ denotes the sensor measurement and $Q_t^i$ denotes the measurement noise. $d_{ij}$ is the Mahalanobis distance between sensors $S_i$ and $S_j$ for the selected measurement elements at time $t$ [see (23)–(25)]. A "1" in the table means the Mahalanobis distance $d_{ij}$ is correlated with these sensors, while a "0" means $d_{ij}$ is not correlated with them. Let $vs_t^{ij}$ denote the difference between the measurements $s_t^i$ and $s_t^j$ at time instant $t$

$$vs_t^{ij} = f_i s_t^i - f_j s_t^j \qquad (23)$$

where the matrices $f_i$ and $f_j$ are defined by the two sensor measurements to extract their common observation element. For example, if $S_i$ is a gyrometer, it provides an orientation measurement $s_t^{gyro} = [\theta_t]$ at instant $t$; if $S_j$ is an inertial navigation sensor, it provides a measurement $s_t^{ins} = [x_t, y_t, \theta_t]^T$. The two sensors provide redundant orientation measurements, thus the distance $d_{ij}$ is calculated on this orientation element $\theta$, setting $f_i = [1]$ and $f_j = [0\ 0\ 1]$. With $f_i$ and $f_j$, the covariance matrix $I_v^t$ of $vs_t^{ij}$ and the distance $d_{ij}$ are written as

$$I_v^t = f_i Q_t^i f_i^T + f_j Q_t^j f_j^T \qquad (24)$$

$$d_{ij} = (vs_t^{ij})^T (I_v^t)^{-1} (vs_t^{ij}). \qquad (25)$$

If two sensors do not provide any redundant information, no measuring distance exists between them. Then, a parity checking method is used to detect a sensor fault: the change of a specific subset of $\{d_{ij}\}$ should be caused by a unique sensor fault. For example, if the distances $d_{12}, d_{1j}, \ldots,$ and $d_{1N}$ increase simultaneously, the fault corresponds to the first sensor; else if $d_{21}, d_{2j}, \ldots,$ and $d_{2N}$ increase simultaneously, the faulty sensor is the second one, and so on. After the validation process, the information contributions of the sensors that are consistent with each other are integrated together to provide the final estimation.
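The extended-NIS parity check of (23)–(25) can be sketched as below: every sensor pair is tested on its redundant elements, and a single faulty sensor is isolated when all relations involving it fail. For simplicity the sketch assumes a fixed selection matrix per sensor mapping its measurement into a common parameterization, whereas the paper defines the selection matrices per pair; the names are illustrative.

```python
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def parity_fault(sensors, prob=0.95):
    """Isolate a single faulty sensor with pairwise checks (23)-(25).

    `sensors` is a list of (s, Q, f): measurement, measurement covariance
    and selection matrix extracting the redundant elements.  The process
    model prediction can be passed as one more "virtual sensor".
    Returns the index of the faulty sensor, or None if all are consistent.
    """
    n = len(sensors)
    bad = np.zeros((n, n), dtype=bool)
    for i, j in combinations(range(n), 2):
        si, Qi, fi = sensors[i]
        sj, Qj, fj = sensors[j]
        v = fi @ si - fj @ sj                        # (23)
        S = fi @ Qi @ fi.T + fj @ Qj @ fj.T          # (24)
        d = v @ np.linalg.solve(S, v)                # (25)
        bad[i, j] = bad[j, i] = d > chi2.ppf(prob, df=v.size)
    for i in range(n):                               # a unique failing subset -> fault
        if bad[i].sum() == n - 1:
            return i
    return None
```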

III. UIF-Based Multisensor Fusion for Vehicle Localization

In the proposed approach, a loose coupling method is used to combine the estimations from a stereoscopic system, a laser range finder (LRF) system and a GPS receiver using a UIF (Fig. 3). The vehicle state is predicted with the process model. Then, the data delivered by the stereoscopic system, the LRF and the GPS receiver are used as measurements to update the prediction. The prediction and the measurements are integrated only if they pass the extended NIS checking test. If one of the sensors fails to work, a solution is given by the other sensors.

Fig. 3. Proposed UIF-based multisensor fusion method.

Because the frequencies of the different sensors are not the same, two fusion strategies could be chosen.

1) Fusion when at least one sensor observation is available. In this case, the information is updated whenever a measurement is provided. For example, in our system, the stereoscopic system provides information most quickly, so the information data can first be updated with the validated stereovision-based estimation; then, when a GPS measurement is provided, the two sensors are used for the updating process; when the three sensors' observations are available, all the measurements are taken into account.

2) Fusion when all the sensors' observations are available. Since the estimation is updated using all the sensors' observations and uncertainties, the estimations can only be fused when the slowest sensor provides an observation.

In order to have more redundant information and to check the coherence between the different measurements, the second fusion strategy is applied in our work; a small timestamp-association sketch is given below.
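Under the second strategy, every fusion step waits for the slowest sensor, and the other sensors' samples then have to be associated to that instant. The snippet below is one simple way to do this by nearest logged time, as mentioned in Section IV; the tolerance value and the names are assumptions for illustration.

```python
import numpy as np

def associate_by_time(t_slow, t_fast, tol=0.1):
    """For each timestamp of the slowest sensor, pick the nearest sample of a
    faster sensor.  Returns indices into t_fast, or -1 when no sample lies
    within `tol` seconds.  Both timestamp arrays are assumed sorted."""
    idx = np.clip(np.searchsorted(t_fast, t_slow), 1, len(t_fast) - 1)
    nearest = np.where(np.abs(t_slow - t_fast[idx - 1]) <=
                       np.abs(t_slow - t_fast[idx]), idx - 1, idx)
    ok = np.abs(t_fast[nearest] - t_slow) <= tol
    return np.where(ok, nearest, -1)
```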

A. Coordinate Reference Frames of the Multisensor System

Our multisensor navigation system consists of a stereoscopic system, an LRF and a GPS receiver. For these different sensors, the following coordinate systems are defined, as shown in Fig. 4.

1) Camera frame: its origin is at the center of the left camera of the stereoscopic system; the $x_c$ axis points in the opposite direction of the right camera center (parallel to the baseline), the $y_c$ axis points upward, and the $z_c$ axis points along the camera optical axis.

2) LRF frame: its origin is at the center of the LRF, and the plane $\{x_l, z_l\}$ is the laser scanning plane.

3) GPS frame: the GPS reference coordinate system is the World Geodetic System (WGS84). For fusion, GPS data are converted from the WGS84 system to a 2-D local surface (e.g., Lambert II), which is parallel to the local earth surface.

4) Reference navigation frame (working frame): we take the initial vehicle position measured by GPS data as the origin (0, 0, 0) of the reference navigation system, the initial forward orientation as the positive direction $z$, the positive direction $x$ pointing to the left side, and the axis $y$ pointing upward.

Fig. 4. Coordinate reference frames of sensors.

B. Vehicle Information Prediction

As the altitude information provided by a low-cost GPS receiver is not accurate, and the used LRF can provide only 2-D information, vehicle movements on the $\{x, z\}$ plane are considered for fusion in this paper. Let $X_t = [x_t, z_t, \theta_t]$ denote the vehicle state vector at time $t$, where $x_t$, $z_t$, and $\theta_t$ are the vehicle position and orientation in the reference navigation system. With the assumption that the road is flat and the vehicle moves with a constant speed, the previously estimated velocity is used to predict the current state and uncertainty through a nonlinear transition model. This vehicle evolution model is given by $X_t = f(X_{t-1}, \delta t) + \alpha_t$, and can be written as

$$x_t = x_{t-1} + \sqrt{v_{x,t-1}^2 + v_{z,t-1}^2}\,\sin\!\left(\theta_{t-1} + \frac{\omega_{t-1}\delta t}{2}\right)\delta t + \alpha_{1t}$$
$$z_t = z_{t-1} + \sqrt{v_{x,t-1}^2 + v_{z,t-1}^2}\,\cos\!\left(\theta_{t-1} + \frac{\omega_{t-1}\delta t}{2}\right)\delta t + \alpha_{2t}$$
$$\theta_t = \theta_{t-1} + \omega_{t-1}\delta t + \alpha_{3t} \qquad (26)$$

where $v_{x,t-1}$ and $v_{z,t-1}$ are the linear velocities in the $x$ and $z$ directions at time $t-1$, respectively, and $\omega_{t-1}$ is the angular velocity at instant $t-1$, calculated from the previous vehicle headings at times $t-1$ and $t-2$. $\alpha_t = [\alpha_{1t}, \alpha_{2t}, \alpha_{3t}]^T$ is the process noise, which is estimated from the vehicle velocity and the time interval $\delta t$: $(\alpha_{1t})^2 = (v_{x,t-1}\,\delta t)^2$, $(\alpha_{2t})^2 = (v_{z,t-1}\,\delta t)^2$, $(\alpha_{3t})^2 = (\omega_{t-1})^2$. Then, the vehicle state $[x_t, z_t, \theta_t]$ and state covariance are approximated by the unscented transform according to (4)–(8). The information vector $\widetilde{if}_t$ and information matrix $\widetilde{IF}_t$ are predicted according to (9).
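The noise-free part of the constant-velocity model (26) is straightforward to express in code; the sketch below is illustrative and uses hypothetical argument names.

```python
import numpy as np

def process_model(X_prev, v_prev, omega_prev, dt):
    """Constant-velocity evolution model of (26), without the noise terms.

    X_prev     : previous state [x, z, theta]
    v_prev     : previous linear velocities [vx, vz]
    omega_prev : previous angular velocity
    """
    x, z, theta = X_prev
    speed = np.hypot(v_prev[0], v_prev[1])
    heading = theta + 0.5 * omega_prev * dt
    return np.array([x + speed * np.sin(heading) * dt,
                     z + speed * np.cos(heading) * dt,
                     theta + omega_prev * dt])
```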

C. Laser Range Finder Subsystem-Based Observation

In this part, the laser range finder subsystem is introduced. The laser range finder uses laser beams to measure the distance to an object through the time of flight (TOF) of the laser beam. The LRF used in our system is a SICK LMS221 laser range finder. By setting the angular resolution of the LRF to 0.5° and the maximum range to 80 m, the LRF provides 361 range data (an LRF scan) over a 180° arc every 0.2 s. As an LRF directly measures the detected environment in Euclidean space, it is suitable for object detection and vehicle localization.


Fig. 5. LRF scans and alignment results with classic ICP and OR-ICP. (a) Two LRF scans taken at consecutive positions. (b) Transformed data sets by classic ICP and OR-ICP.

1) Laser Range Finder-Based Vehicle Motion Estimation With OR-ICP Scan Alignment: Scan alignment is the process of aligning a current LRF point set with the previous point set, which is considered as the reference model. Iterative closest point (ICP) [23] is one of the most widely used alignment methods for its simplicity. Let $Model = \{q_1, q_2, \ldots, q_{n_m}\}$ denote the reference model point set and $Data = \{p_1, p_2, \ldots, p_{n_d}\}$ denote the observation data set ($n_m$ and $n_d$ are the numbers of points in the Model set and in the Data set, respectively). ICP tries to find the optimal transformation $T = \{\Delta x, \Delta z, \Delta\theta\}$ (relative translation and rotation) between the two point sets by iterative data association with the nearest neighbor algorithm and least squares minimization.

However, LRF scans are often sparse and uncertain in outdoor environments, so the pose estimated by the ICP algorithm might not be reliable. Therefore, we propose to modify the classic ICP by adding an outlier reduction strategy with a dynamic distance error threshold. For every new estimated transformation, the mean and standard deviation $\sigma$ of the errors between the transformed data set and the model set are calculated over the $k$ couples of point correspondences between the previous and current laser scans $\{(p_1, q_1); \ldots; (p_i, q_i); \ldots; (p_k, q_k)\}$, as follows:

$$e_i(p_i, q_i, T) = \| T p_i - q_i \|, \quad i = 1, 2, \ldots, k$$
$$\bar{e} = \frac{1}{k}\sum_{i=1}^{k} e_i$$
$$\sigma = \sqrt{\frac{1}{k}\sum_{i=1}^{k}(e_i - \bar{e})^2}. \qquad (27)$$

Data points that satisfy the following threshold relation are considered to be outliers and are eliminated

$$|e_i - \bar{e}| > 3\sigma. \qquad (28)$$

Then, the ICP process continues with the remaining points. Furthermore, in order to avoid local convergence of the minimization process, the optimal transformation of the previous scan is used as the initial transformation for the current scan alignment. As seen in Fig. 5, the green data set transformed by the outlier-rejection ICP (OR-ICP) fits the model set better than with the classic ICP. A comparison of their estimation results on a long trajectory is shown in Section IV.
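Inside each ICP iteration, the 3σ rejection of (27)–(28) reduces to a few lines; the sketch below assumes 2-D points stored row-wise and a homogeneous transform, which are illustrative conventions rather than the authors' data structures.

```python
import numpy as np

def reject_outliers(P, Q, T):
    """Dynamic 3-sigma rejection of ICP correspondences, as in (27)-(28).

    P, Q : (k, 2) arrays of matched data/model points
    T    : (3, 3) homogeneous 2-D transform of the current ICP estimate
    Returns a boolean mask of the correspondences kept for the next iteration.
    """
    Ph = np.hstack([P, np.ones((len(P), 1))])            # homogeneous data points
    e = np.linalg.norm((Ph @ T.T)[:, :2] - Q, axis=1)    # residuals e_i
    return np.abs(e - e.mean()) <= 3.0 * e.std()         # keep the inliers only
```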

2) Covariance of the OR-ICP Estimation: In our previous work [19], the uncertainty of the LRF-based pose observation was approximated by the covariance of the residual after the OR-ICP minimization together with the uncertainty of the previously updated vehicle state. The influence of the error function with respect to the rotation and translation was not taken into account, thus the estimated uncertainty was underestimated. In this paper, the covariance of the vehicle pose predicted by the OR-ICP method is estimated considering both the uncertainties of the laser points and of the minimization process. If $\hat{T}$ is the solution of a minimization process $J$ with variables $B$, its covariance can be estimated in closed form by [24]

$$\hat{T} = \hat{T}(B) = \arg\min_T J(B, T) \qquad (29)$$

$$\mathrm{cov}(\hat{T}) = \left(\frac{\partial^2 J}{\partial T^2}\right)^{-1} \frac{\partial^2 J}{\partial T\,\partial B}\,\mathrm{cov}(B)\,\left(\frac{\partial^2 J}{\partial T\,\partial B}\right)^T \left(\frac{\partial^2 J}{\partial T^2}\right)^{-1}. \qquad (30)$$

For the OR-ICP minimization, the solution $\hat{T}$ is the relative translation and rotation estimated by the OR-ICP method, written as $\hat{T} = (\Delta x, \Delta z, \Delta\theta)$; $B$ is the point vector composed of the couples of corresponding points in the previous and current laser scans, $B = \{(p_1, q_1); \ldots; (p_k, q_k)\}$, where $p_i = (p_i^x, p_i^z)$ and $q_i = (q_i^x, q_i^z)$. The error function $J$ is written as

$$J(Model, Data, T) = \sum_{i=1}^{k} \| T p_i - q_i \|^2 = \sum_{i=1}^{k}\left\{(m_i^x)^2 + (m_i^z)^2\right\}. \qquad (31)$$

Thus, the partial derivative of $J$ with respect to $T$ at $\hat{T}$ is

$$\frac{\partial J}{\partial T}\Big|_{T=\hat{T}} = 2\left[\sum_{i=1}^{k} m_i^x, \ \sum_{i=1}^{k} m_i^z, \ \sum_{i=1}^{k}\left\{m_i^x n_i^1 + m_i^z n_i^2\right\}\right] \qquad (32)$$

with

$$m_i^x = \cos(\Delta\theta)\,p_i^x - \sin(\Delta\theta)\,p_i^z + \Delta x - q_i^x$$
$$m_i^z = \sin(\Delta\theta)\,p_i^x + \cos(\Delta\theta)\,p_i^z + \Delta z - q_i^z$$
$$n_i^1 = -\sin(\Delta\theta)\,p_i^x - \cos(\Delta\theta)\,p_i^z$$
$$n_i^2 = \cos(\Delta\theta)\,p_i^x - \sin(\Delta\theta)\,p_i^z.$$

Then, the second-order partial derivative is

$$\frac{\partial^2 J}{\partial T^2}\Big|_{T=\hat{T}} = 2\begin{bmatrix} \sum_{i=1}^{k} 1 & 0 & \sum_{i=1}^{k} n_i^1 \\ 0 & \sum_{i=1}^{k} 1 & \sum_{i=1}^{k} n_i^2 \\ \sum_{i=1}^{k} n_i^1 & \sum_{i=1}^{k} n_i^2 & J_{33} \end{bmatrix} \qquad (33)$$

with $J_{33} = \sum_{i=1}^{k}\left\{n_i^1 n_i^1 + m_i^x(-n_i^2) + n_i^2 n_i^2 + m_i^z n_i^1\right\}$. Then, the second-order mixed derivative is a $3 \times 4k$ matrix, written as

$$\frac{\partial^2 J}{\partial T\,\partial B}\Big|_{T=\hat{T}} = 2\begin{bmatrix} J_{TB}(1) & \cdots & J_{TB}(1) \\ J_{TB}(2) & \cdots & J_{TB}(2) \\ J_{TB}(3)_1 & J_{TB}(3)_i & J_{TB}(3)_k \end{bmatrix}_{3\times 4k}, \quad i = 1, 2, \ldots, k \qquad (34)$$

where
$$J_{TB}(1) = [\cos(\Delta\theta),\ -\sin(\Delta\theta),\ -1,\ 0]$$
$$J_{TB}(2) = [\sin(\Delta\theta),\ \cos(\Delta\theta),\ 0,\ -1]$$
$$J_{TB}(3)_i = [\cos(\Delta\theta)\,n_i^1 + m_i^x(-\sin(\Delta\theta)) + \sin(\Delta\theta)\,n_i^2 + m_i^z\cos(\Delta\theta),\ \ -\sin(\Delta\theta)\,n_i^1 + m_i^x(-\cos(\Delta\theta)) + \cos(\Delta\theta)\,n_i^2 + m_i^z(-\sin(\Delta\theta)),\ \ -n_i^1,\ \ -n_i^2].$$

The uncertainty of every laser point can be measured from the range and orientation of the laser beam. As the systematic error of the LMS221 is about 5 cm in cm-mode, the uncertainty of a range $d$ is measured by $\Delta d = d/400$, and the uncertainty of an orientation $\phi$ is measured by $\Delta\phi = 0.001 + \phi/50$. Thus, the uncertainty of a laser point $p_i = (p_i^x, p_i^z) = (d\cos(\phi), d\sin(\phi))$ in Euclidean space can be estimated by a first-order approximation or by a sigma-point transformation. Here, the former is used, written as

$$\mathrm{cov}(p_i^x, p_i^z) = H_p \begin{bmatrix} \Delta d^2 & 0 \\ 0 & \Delta\phi^2 \end{bmatrix} H_p^T \qquad (35)$$

with $H_p = \begin{bmatrix} \cos(\phi) & -d\sin(\phi) \\ \sin(\phi) & d\cos(\phi) \end{bmatrix}$. As all of the laser points are independent from each other, the covariance matrix $\mathrm{cov}(B)$ of the $k$ point correspondences can be obtained as $\mathrm{cov}(B)_{4k\times 4k} = \mathrm{diag}\{\mathrm{cov}(p_1^x, p_1^z); \mathrm{cov}(q_1^x, q_1^z); \ldots; \mathrm{cov}(p_k^x, p_k^z); \mathrm{cov}(q_k^x, q_k^z)\}$.
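The first-order propagation of (35) for a single beam can be sketched as follows, using the range and bearing uncertainty models quoted above for the LMS221; the function name is illustrative and the bearing is assumed to be expressed in radians.

```python
import numpy as np

def laser_point_cov(d, phi):
    """First-order covariance (35) of the 2-D point (d*cos(phi), d*sin(phi))."""
    dd = d / 400.0                 # range uncertainty used in the paper (cm mode)
    dphi = 0.001 + phi / 50.0      # bearing uncertainty used in the paper
    Hp = np.array([[np.cos(phi), -d * np.sin(phi)],
                   [np.sin(phi),  d * np.cos(phi)]])
    return Hp @ np.diag([dd**2, dphi**2]) @ Hp.T
```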

3) LRF Observation: As shown in the previous part, the direct output of the LRF alignment is the relative vehicle motion $T = \{\Delta x, \Delta z, \Delta\theta\} = \{\Delta x_t^l, \Delta z_t^l, \Delta\theta_t^l\}$ and the motion uncertainty $\mathrm{cov}(T)$ at time $t$. The measurement $s_t^l = [x_t^l, z_t^l, \theta_t^l]^T$ and the observation noise $Q_t^l$ provided by the LRF are obtained by the unscented transform approach with the set of variables $\{x_{t-1}, z_{t-1}, \theta_{t-1}, \Delta x_t^l, \Delta z_t^l, \Delta\theta_t^l\}$. This transform chooses a fixed number of sigma points from the original distribution of the variables, propagates the sigma points through the nonlinear function, and updates the LRF observation mean and covariance by

$$x_t^l = x_{t-1} + d_t^l \sin(\theta_{t-1} + \Delta\theta_t^l/2)$$
$$z_t^l = z_{t-1} + d_t^l \cos(\theta_{t-1} + \Delta\theta_t^l/2)$$
$$\theta_t^l = \theta_{t-1} + \Delta\theta_t^l \qquad (36)$$

where $Q_t^l$ is the uncertainty of the LRF-based pose observation after the unscented estimation, and $d_t^l = \sqrt{(\Delta x_t^l)^2 + (\Delta z_t^l)^2}$. The measurement model of the LRF observation is a $3 \times 3$ identity matrix. The information contributions $i_t^l$ and $I_t^l$ of the LRF can be calculated using (2).

D. Stereoscopic Subsystem-Based Observation

1) Stereovision-Based Motion Estimation: Visual odometry (VO) permits estimating the vehicle pose from images. Compared with the monocular method, a stereovision-based visual odometry method is adopted in our system because the baseline between the left and right cameras is already known from stereo calibration, thus the scale of the Euclidean reconstruction is directly provided. As shown in Fig. 6, this method consists of four major steps: 1) extracting features (e.g., Harris, SURF, CenSurE) from the left and right images and finding correspondences between the features; 2) reconstructing the correspondences into 3-D space; 3) tracking the image features from frame to frame; and 4) estimating the motion with a RANSAC-based outlier rejection method. More details can be found in our previous work [25]; a sketch of the reconstruction step is given after the next paragraph.

Fig. 6. Working flow of the stereovision-based visual odometry method.

In this paper, the location uncertainty of a scale-invariant feature (a SURF point in our work) is estimated from the detector response map in the neighborhood of the feature point [27]. Then, the uncertainty of the reconstructed 3-D point is propagated from the 2-D image positions in the left and right images (for the left image $(p_{lx}, p_{ly})$, for the right image $(p_{rx}, p_{ry})$) according to the triangulation model [26]. The uncertainty of the tracked 2-D features is supposed to be proportional to the distance between the reference and tracked image frames (although the 2-D locations of features are not independent from each other when features are tracked from one frame to another, and considering the computation time of independently detecting new features in every image frame, feature tracking is used here and the cross-correlation between features is not considered). Then, the motion uncertainty is estimated with the same closed-form covariance estimation steps as in the ICP process, by replacing the solution $T$ with the solution from the stereoscopic system, and the point vector $B$ with the correspondences of the 3-D points respectively reconstructed from the stereo images at two consecutive instants.
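For the reconstruction step (step 2 of Fig. 6), a rectified stereo pair allows direct triangulation from the disparity; the sketch below shows this standard computation and is not the authors' implementation (parameter names are illustrative, and points are expressed in the left camera frame).

```python
import numpy as np

def triangulate(uv_left, uv_right, f, cx, cy, baseline):
    """Reconstruct 3-D points from rectified stereo correspondences.

    uv_left, uv_right : (k, 2) pixel coordinates of matched features
    f, cx, cy         : focal length and principal point (pixels)
    baseline          : stereo baseline (meters)
    """
    disparity = uv_left[:, 0] - uv_right[:, 0]     # positive for points in front
    Z = f * baseline / disparity                   # depth along the optical axis
    X = (uv_left[:, 0] - cx) * Z / f
    Y = (uv_left[:, 1] - cy) * Z / f
    return np.column_stack([X, Y, Z])
```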

2) Unscented Transform for VO-Based Observation: The output of stereovision-based odometry is the relative vehicle motion $\{\Delta x_t^c, \Delta z_t^c, \Delta\theta_t^c\}$ at instant $t$ with respect to the $(t-1)$th vehicle local frame. The stereovision-based measurement $s_t^c = [x_t^c, z_t^c, \theta_t^c]^T$ of the vehicle state can be obtained as for the LRF by the unscented transform using (36). The measurement model is a $3 \times 3$ identity matrix. The information contributions $i_t^c$ and $I_t^c$ of the stereoscopic sensor can be calculated using (2).

E. GPS Subsystem-Based Observation

GPS provides longitude and latitude information in the Earth frame. This information can be obtained from the GPS standard National Marine Electronics Association (NMEA) sentence GGA. In order to be integrated with the other sensors, the GPS positions are projected from the WGS84 system to the local Cartesian space as $(x_t^g, z_t^g)$. Considering the current sensors installed on our experimental platform, the GPS heading is provided by differentiation of the GPS positions through $\theta_t^g = \arctan\!\left(\frac{x_t^g - x_{t-1}^g}{z_t^g - z_{t-1}^g}\right)$. In future work, the estimation of the GPS heading might be enhanced by using Doppler information or other data sources.

The GPS observation is written in the form $s_t^g = [x_t^g, z_t^g, \theta_t^g]$. The uncertainty $Q_t^g$ of the GPS position is estimated by

$$Q_t^g = \begin{pmatrix} \delta_x^2 & \rho\,\delta_x\,\delta_z \\ \rho\,\delta_x\,\delta_z & \delta_z^2 \end{pmatrix} \qquad (37)$$

where $\delta_x$ and $\delta_z$ are the standard deviations of the longitude error and the latitude error obtained from the GPS NMEA sentence GST, and $\rho$ is the spatial correlation coefficient calculated according to the method in [28]

$$\rho = \begin{cases} \dfrac{\tan(2\varphi)\,(\delta_x^2 - \delta_z^2)}{2\,\delta_x\,\delta_z}, & 0 < \varphi < \dfrac{\pi}{2} \\[2mm] \dfrac{\tan(2\varphi)\,(\delta_z^2 - \delta_x^2)}{2\,\delta_x\,\delta_z}, & -\dfrac{\pi}{2} < \varphi < 0 \end{cases} \qquad (38)$$

where $\varphi$ is the orientation of the semi-major axis of the error ellipse in degrees from true North, obtained from the GPS NMEA sentence GST. The uncertainty of the GPS orientation $\theta_t^g$ can be approximated by the unscented transform with the known noise of $\{x_t^g, z_t^g, x_{t-1}^g, z_{t-1}^g\}$. The information contributions $i_t^g$ and $I_t^g$ of GPS can be calculated using (2).

TABLE I
NIS Relation Between Every Two Sensors

Fig. 7. Experimental vehicle equipped with a stereoscopic system, two LRFs, a GPS receiver, and an RTK-GPS receiver.
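Building the position covariance of (37)–(38) from the NMEA GST fields can be sketched as below; the sign handling of the error-ellipse orientation follows (38), with the angle assumed to lie in (−90°, 90°). The names are illustrative.

```python
import numpy as np

def gps_covariance(sigma_x, sigma_z, phi_deg):
    """GPS horizontal position covariance (37)-(38) from NMEA GST fields.

    sigma_x, sigma_z : longitude/latitude error standard deviations
    phi_deg          : error-ellipse semi-major axis orientation (deg from North)
    """
    phi = np.radians(phi_deg)
    num = sigma_x**2 - sigma_z**2 if phi > 0 else sigma_z**2 - sigma_x**2
    rho = np.tan(2.0 * phi) * num / (2.0 * sigma_x * sigma_z)
    return np.array([[sigma_x**2,              rho * sigma_x * sigma_z],
                     [rho * sigma_x * sigma_z, sigma_z**2]])
```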

F. Validation of Different Sensor Observations

There are two possibilities to monitor the integrity of the different sensor measurements: monitor the integrity before or after the integration of the relative sensor measurements. If the vehicle movements and global poses can be respectively provided by multiple sensors (≥3), we can monitor the integrity of the relative localization sensors before the integration process. In our system, there are two relative pose estimations (LRF and stereoscopic system) and one global pose estimation (GPS receiver), thus we cannot monitor the integrity of the two relative estimations before the integration. Instead, we choose to integrate the sensor increments first, and then monitor the integrity of the three sensor observations after the integration. Parity relations between the different sensor measurements, $\{X_t, s_t^l, s_t^c, s_t^g\}$, are calculated considering the uncertainties of their observations with (23)–(25).

As described in Section II-C.2, in Table I, "1" means the error $d_{ij}$ is correlated with the sensor, while "0" means $d_{ij}$ is not correlated with it. The change of a specific subset of $\{d_{ij}\}$ should be caused by a unique sensor fault; the detected sensor measurement is then rejected and not fused. For example, if $d_{12}$, $d_{13}$, and $d_{14}$ increase simultaneously, the faulty sensor is determined to be the first sensor, corresponding to the process model that is considered as a virtual sensor, as described before.

Fig. 8. (a) LRF scan: the points are in red if they are in the FOV of the left camera, the others are in blue. (b) Corresponding image from the left camera with the transformed LRF points (red) in the FOV of the left camera.

Fig. 9. Landmarks from the stereoscopic system and from the LRF overlaid on the satellite image (Google Earth).

IV. Experimental Results

The proposed method is tested with real data acquired by our experimental GEM vehicle equipped with a stereoscopic Bumblebee XB3 system (16 Hz, image size 1280 × 960), a GPS receiver (10 Hz) and a horizontal SICK LMS221 laser range finder (LRF1 in Fig. 7, 5 Hz). The baseline of the stereoscopic system is 0.24 m, and the cameras' field of view (FOV) is 66°. Every LRF scan provides 361 data points over a 180° arc with a 0.5° angular resolution and a maximum range of 80 m. As the RTK-GPS receiver can achieve up to 1 cm accuracy in the horizontal plane, it is used to produce ground-truth data in order to evaluate the proposed method. The measurements from the different sensors are associated according to their logged time.

A. Sensors Calibration

1) Stereoscopic System: The stereoscopic system is composed of two digital cameras aligned with a fixed baseline. It permits simultaneously capturing two images of the same scene from two different points of view. The cameras are calibrated and the stereo images are rectified using the toolbox in [29].

2) Stereoscopic System—Laser Range Finder: The extrinsic calibration between the 2-D LRF and the stereoscopic system is implemented using the calibration method proposed in [30]. In this method, a chessboard (planar calibration pattern) is placed in front of the vehicle with different poses. The chessboard should be visible to both the cameras and the laser range finder. Then, the chessboard grid points are extracted from the laser scan and also detected in one of the cameras' images. Considering the geometric constraint that the laser points must lie on the calibration plane estimated from one of the two cameras, together with the known intrinsic calibration parameters of the camera, this constraint can be solved to obtain the relative rotation and translation between the camera and the laser range finder. Using these relative parameters, the LRF points that are in the FOV of the stereoscopic system can be transformed into the corresponding camera frame, and then into the image frame by the camera intrinsic parameters, as shown in Fig. 8.

Fig. 10. Comparison of the vehicle trajectories obtained from the LRF, the stereoscopic system and RTK-GPS. (a) Localization results by the LRF using classic ICP and OR-ICP. (b) Localization results by the stereoscopic system using 2-D–3-D visual odometry.

Fig. 11. Comparison of the vehicle yaw angles obtained from the LRF, the stereoscopic system and the RTK-GPS. (a) Comparison of the vehicle yaw angles obtained from the LRF with OR-ICP and from RTK-GPS. (b) Comparison of the vehicle yaw angles obtained from the stereoscopic system and from RTK-GPS.

B. Experimental Results

An experimental data set was captured in March 2011 in Belfort, France. The vehicle was driven in an industrial area with buildings around (Fig. 9). The GPS positions are projected from the WGS84 system to the Extended Lambert II system that covers the experimental area. The landmarks reconstructed by stereovision and the 2-D LRF scans are shown on a satellite image together with the vehicle positions provided by the RTK-GPS receiver.

1) Estimation by LRF Sensor and Stereoscopic System: In Fig. 10(a), the vehicle positions are separately estimated by the classic ICP and the proposed outlier-rejection ICP (OR-ICP). The localization results in this figure show that the OR-ICP approach estimates the vehicle moving distance better than the classic ICP method and increases the localization precision of the LRF subsystem. In Fig. 11(a), the vehicle orientations estimated from the LRF with OR-ICP are compared with the ground truth given by RTK-GPS. It can be seen in this figure that from the starting point to the second turn, the vehicle orientation is almost consistent with the ground truth. Then, the trajectory begins to drift. As the LRF used in this experiment is installed on the bottom front of the vehicle and is close to the ground, when the vehicle moves on a field with a certain slope, it might scan the ground and the consecutive scans cannot be associated.


Fig. 12. Some examples of UIF-based sensor fusion uncertainties. (a) Frame 1: all four estimations are considered to be reliable. (b) Frame 6: GPS is considered as an outlier; the other three sensors are used for the current estimation. (c) Frame 19: the LRF observation is considered as an outlier; the other three sensors are used for the current estimation. (d) Frame 151: the GPS observations jumped to the left side with large errors; they are treated as outliers and not used for estimation.

In Fig. 10(b), the vehicle positions are separately estimated by the stereoscopic system with 2-D visual odometry and 3-D visual odometry after camera calibration and image rectification. The stereovision-based vehicle yaw angles are compared with the ground truth provided by RTK-GPS in Fig. 11. It should be noted that the vehicle localization error of stereovision-based visual odometry is small until the second big turn of the vehicle. When the vehicle encounters sharp turns, where the movement of the vehicle is large and the illumination conditions change quickly, it is difficult to detect enough image features and to apply feature matching. Therefore, in these cases the inaccurate translation and roll angle estimations lead to unrealistic vehicle motion, and the vehicle trajectory gradually drifts with the error accumulation.

2) Estimation by Fusion of the Different Sensors: The UIF-based fusion method is tested with the same dataset. The initial vehicle orientation and velocity are set from the GPS observation. The state covariance matrix is set from the position uncertainty of the first GPS position. The unscented transform is used in several parts of the method and its parameters are chosen empirically. As described in Table I, the parity checking constraint is applied for the information fusion. Some examples of sensor uncertainties at certain instants are shown in Fig. 12(a) to Fig. 12(d).

In Fig. 12(a), all the sensor measurements are accepted as their observation differences are small; the vehicle state is then obtained by integrating the information from the four sensors. In Fig. 12(b), the distances of GPS to all the other sensors increase at the same time, thus GPS is considered to be an outlier and the other three sensors are used for the current estimation. In Fig. 12(c), as the LRF measurement is far away from the other observations and the error distances in the LRF subset $(d_{12}, d_{23}, d_{24})$ increase, the LRF is not used for the state estimation. In Fig. 12(d), the trajectory in yellow is the ground truth provided by RTK-GPS. When GPS fails to provide accurate observations, the fusion results without GPS are more accurate than the GPS observations.

The performance of the proposed UIF-based localization method is compared with the different sensor observations, EIF and UKF (unscented Kalman filter) with multiple updates, using the estimated trajectory length error (the trajectory length is calculated by integrating the movement between consecutive frames) and the mean and standard deviation of the error between the corresponding estimated positions and the ground truth, as shown in Table II.

Fig. 13. Estimated vehicle trajectory and comparison of the fusion-based vehicle position and orientation with the ground truth. (a) Comparison of the vehicle trajectory estimated by EIF, UKF, UIF, and UIF (rejection) based multisensor fusion. (b) Zoom of the estimated trajectory. (c) Vehicle position errors observed by the different sensors (compared with RTK-GPS). (d) Vehicle orientation errors observed by the different sensors (compared with RTK-GPS).

TABLE II
Localization Results With the Different Sensors (in meters); UKF (Unscented Kalman Filter), EIF (Extended IF), UIF (Unscented IF)

Observations                 | RTK-GPS Dist. | Estimated Dist. | Error % | Mean | Std.
LRF (integration)            | 603.73        | 621.67          | 2.97    | 1.74 | 1.52
Stereovision (integration)   | 603.73        | 685.43          | 13.53   | 1.69 | 1.38
GPS                          | 603.73        | 614.78          | 1.83    | 1.08 | 1.90
EIF fusion                   | 603.73        | 662.11          | 9.67    | 3.73 | 6.89
UKF fusion (three updates)   | 603.73        | 603.07          | 0.11    | 1.38 | 1.87
UIF fusion                   | 603.73        | 601.20          | 0.41    | 1.11 | 1.17
UIF fusion (with rejection)  | 603.73        | 603.88          | 0.02    | 1.18 | 1.08

As seen in Table II, neither the LRF-based nor the stereoscopic system-based approach can provide accurate absolute localization results in the long term, due to error accumulation. GPS works well in the long term, with an error of 1.83% over the whole trajectory. However, as shown in Fig. 12(b), the GPS positions jump to the left side with an error of about 10 m. This GPS failure lasts about 70 m, during the period between frame 100 and frame 200 [Fig. 13(a) and (b)]. The position and orientation errors of the different fusion methods are shown in Fig. 13(c) and (d). Without the error rejection step, the information from all four subsystems is integrated together to provide a final estimation. By applying the error rejection step in the fusion process, the GPS failure is detected by the validation test. Nevertheless, without reliable GPS information, the localization system can continue to correctly estimate the vehicle positions by integrating the process model, the stereoscopic system and the LRF. The error rejection step makes the estimated trajectory more accurate.

Then, two experiments are added to compare the proposed method with the EIF and with the standard fusion method UKF (with multiple updates) in order to verify the proposed method.

At first, in order to quantify the performance gain of the unscented approach, the proposed UIF method is compared with the EIF method. The whole trajectory estimated with the extended transformation is compared with the ground truth. As shown in Table II (fifth row) and Fig. 13(a), the unscented transformation provides better localization results in the long term. Nevertheless, if the sampling frequency of the system is high enough, the extended transformation should generally be sufficient.

Secondly, in order to quantify the performance gain of the information formulation over the covariance matrix formulation, a classical UKF with multiple updates is implemented and compared with the results of the UIF. The prediction is first updated with the GPS observation, then the corrected result is updated with the LRF estimation, and finally with the stereo visual odometry. No outlier rejection step was used when comparing the two methods. The results are shown in Table II (sixth row) and Fig. 13(a).

It can be seen that, over the whole trajectory length, there is no big difference between UKF and UIF. However, the performance of the UIF is better than that of the UKF when comparing the standard deviation and mean error. Besides, the information form has the advantage that the update stage is computationally easier because no gain or innovation covariance matrices need to be calculated. Even though this advantage is not significant in our experiment (because we only have three sensor observations), other sensors can easily be added into the same framework in the future.

V. Conclusion and Future Work

The unscented IF was used for vehicle localization by taking advantage of the complementarity and redundancy of multiple sensors. A constant-velocity process model was used to predict the information state vector of the vehicle; then the local contributions from a GPS receiver, a stereoscopic system and an LRF were added after coherence validation. The proposed outlier-rejection ICP (OR-ICP) for LRF alignment and the UIF-based multisensor localization method were tested with real data and evaluated against RTK-GPS data as ground truth. The UIF-based method was also compared with the extended IF to evaluate the performance of the unscented transformation, and with the classic unscented Kalman filter with multiple updates to evaluate the performance of the information formulation. The results showed that the OR-ICP reduces the matching ambiguities of scan alignment; the UIF estimates the vehicle position better than the other two fusion methods; and the fusion of the stereoscopic system and the LRF can continue to localize the vehicle during GPS outages, while GPS measurements permit avoiding the trajectory drift that occurs when only the stereoscopic system or the LRF is used. In future works, sensors such as an IMU or odometry can also be directly integrated thanks to the convenience of the IF. The coherence of the different sensors might be tested in the information space. A tight coupling approach between LRF and image data is also envisaged instead of the loose coupling of their estimated motions.

Acknowledgment

The authors would like to thank the Franche-Comté Regional Council and the State-Regional program for their financial support in the framework of the Ground Vehicle Intelligence project developed in IRTES-SET at UTBM.

References

[1] Y. Bar-Shalom, X. R. Li, and T. Kirubarajan, Estimation With Applications to Tracking and Navigation: Theory, Algorithms and Software. New York, NY, USA: Wiley, 2004.

[2] A. El Rabbany, Introduction to GPS: The Global Positioning System. Norwood, MA, USA: Artech House, 2002.

[3] O. G. Favrot and M. Parent, "Laser scanner based SLAM in real road and traffic environment," in Proc. ICRA09 Workshop Safe Navigation Open Dynamic Environ. Applicat. Autonomous Vehicles, 2009.

[4] K. Konolige, M. Agrawal, and J. Sola, "Large scale visual odometry for rough terrain," in Proc. Int. Symp. Robot. Res., 2007, pp. 201–212.

[5] D. Nister, O. Naroditsky, and J. Bergen, "Visual odometry," in Proc. IEEE Comput. Soc. Conf. Comput. Vision Pattern Recognit., 2004, pp. 652–659.

[6] D. Scaramuzza and F. Fraundorfer, "Visual odometry: Part I—The first 30 years and fundamentals," IEEE Robot. Autom. Mag., vol. 18, no. 4, pp. 80–92, 2011.

[7] R. Jirawimut, S. Prakoonwit, F. Cecelja, and W. Balachandran, "Visual odometer for pedestrian navigation," IEEE Trans. Instrum. Meas., vol. 52, no. 4, pp. 1166–1173, Sep. 2003.

[8] H. Durrant-Whyte, Introduction to Decentralised Data Fusion. Sydney, Australia: Australian Centre for Field Robotics, University of Sydney, 2002.

[9] C. Boucher and J. Noyer, "A hybrid particle approach for GNSS applications with partial GPS outages," IEEE Trans. Instrum. Meas., vol. 59, no. 3, pp. 498–505, Mar. 2010.

[10] B. Siciliano and O. Khatib, Springer Handbook of Robotics. Berlin, Germany: Springer, 2008, pp. 593–594.

[11] S. Julier and J. Uhlmann, "A new extension of the Kalman filter to nonlinear systems," in Proc. Int. Symp. Aerospace/Defense Sens. Simul. Controls, Orlando, FL, USA, 1997, pp. 182–193.

[12] S. Cho and W. Choi, "Robust positioning technique in low-cost DR/GPS for land navigation," IEEE Trans. Instrum. Meas., vol. 55, no. 4, pp. 1132–1142, Aug. 2006.

[13] C. Barrios and Y. Motai, "Improving estimation of vehicle's trajectory using the latest global positioning system with Kalman filtering," IEEE Trans. Instrum. Meas., vol. 60, no. 12, pp. 3747–3755, Dec. 2011.

[14] C. Andrea, R. Alessandro, and M. Agostino, "Distributed information filters for MAV cooperative localization," in Proc. 10th Int. Symp. Distrib. Autonomous Robot. Syst., Lausanne, Switzerland, Nov. 2010, pp. 133–146.

[15] S. Thrun, D. Koller, Z. Ghahramani, H. Durrant-Whyte, and A. Y. Ng, "Simultaneous mapping and localization with sparse extended information filters," Int. J. Robot. Res., vol. 23, nos. 7–8, pp. 693–716, Aug. 2004.

[16] A. Tsalatsanis, K. P. Valavanis, A. Kandel, and A. Yalcin, "Multiple sensor based UGV localization using fuzzy extended Kalman filtering," in Proc. Mediterranean Conf. Control Autom., 2007, pp. 1–8.

[17] D. J. Lee, "Nonlinear estimation and multiple sensor fusion using unscented information filtering," IEEE Signal Process. Lett., vol. 15, pp. 861–864, 2008.

[18] K. Baek and H. Bang, "Vision-based target motion estimation of multiple air vehicles using unscented IF," in Proc. Int. Conf. Control Autom. Syst., 2010, pp. 1567–1571.

[19] L. Wei, C. Cappelle, and Y. Ruichek, "Unscented IF based multi-sensor data fusion using stereo camera, laser range finder and GPS receiver for vehicle localization," in Proc. 14th IEEE Intell. Transp. Syst. Conf., Washington, DC, USA, Oct. 2011.

[20] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics (Intelligent Robotics and Autonomous Agents Series). Cambridge, MA, USA: MIT Press, 2005, p. 75.

[21] M. Jabbour, P. Bonnifait, and V. Cherfaoui, "Management of landmarks in a GIS for an enhanced localisation in urban areas," in Proc. IEEE Intell. Vehicle Symp., Tokyo, Japan, Jun. 2006.

[22] Y. Lu, E. G. Collins, Jr., and M. F. Selekwa, "Parity relation based fault detection, isolation and reconfiguration for autonomous ground vehicle localization sensors," in Proc. 24th Army Sci. Conf., Orlando, FL, USA, Nov.–Dec. 2005.

[23] P. J. Besl and H. D. McKay, "A method for registration of 3-D shapes," IEEE Trans. Pattern Anal. Mach. Intell., vol. 14, no. 2, pp. 239–256, Feb. 1992.

[24] A. Censi, "An accurate closed-form estimate of ICP's covariance," in Proc. IEEE Int. Conf. Robot. Autom., Apr. 2007, pp. 3167–3172.

[25] L. Wei, C. Cappelle, and Y. Ruichek, "Localization of intelligent ground vehicles in outdoor urban environments using stereovision and GPS integration," in Proc. 15th Int. Conf. Adv. Robot., Tallinn, Estonia, Jun. 2011, pp. 192–197.

[26] Y. Cheng, M. Maimone, and L. Matthies, "Visual odometry on the Mars exploration rovers," IEEE Robot. Autom. Mag., vol. 13, no. 2, pp. 54–62, Jun. 2006.

[27] B. Zeisl, P. Georgel, F. Schweiger, E. Steinbach, and N. Navab, "Estimation of location uncertainty for scale invariant feature points," in Proc. Brit. Mach. Vision Conf., London, U.K., Sep. 2009.

[28] M. E. El Najjar and P. Bonnifait, "A road-matching method for precise vehicle localization using belief theory and Kalman filtering," Autonomous Robots, vol. 19, no. 2, pp. 173–191, Sep. 2005.

[29] J. Y. Bouguet, Camera Calibration Toolbox for MATLAB, 2008. [Online]. Available: http://www.vision.caltech.edu/bouguetj/calib-doc/

[30] Q. Zhang and R. Pless, "Extrinsic calibration of a camera and laser range finder (improves camera calibration)," in Proc. IROS, vol. 3, 2004, pp. 2301–2306.

Lijun Wei received the B.Sc. degree in urban planning and natural resource management (geography) from Wuhan University, Wuhan, China, in 2008. From 2008 to 2009, she was in a successive M.Sc.–Ph.D. program in geography at Wuhan University. In 2009, she suspended her studies at Wuhan University and started the Ph.D. program at the University of Technology of Belfort-Montbéliard, France.

Her current research interests include computer vision, GIS applications and multisensor fusion based vehicle localization.

Cindy Cappelle received the M.S. degree in intelligent instrumentation from Polytech'Lille University, Lille, France, in 2005, and the Ph.D. degree in automatic control and computer engineering from Lille University in 2008.

After a post-doctoral position with the INRIA Institute in Nancy, she joined UTBM and the IRTES-SET laboratory in 2009 as an Associate Professor. Within the framework of advanced driver assistance systems and intelligent vehicle development, her current research interests include multisensor vehicle localization and environment perception.

Yassine Ruichek received the Ph.D. degree in control and computer engineering and the Habilitation à Diriger des Recherches (HDR) in physical science from the University of Lille, France, in 1997 and 2005, respectively.

From 1998 to 2001, he was an Assistant Researcher and Teacher. From 2001 to 2007, he was an Associate Professor at the University of Technology of Belfort-Montbéliard, Belfort and Montbéliard, France. Since 2007, he has been a Full Professor and conducts research activities in multisource/sensor fusion for environment perception and localization, with applications in intelligent transportation systems. He participates in several projects in these areas. From 2006 to 2008, he was the Head of the computer science department. Since 2012, he has been the Head of the Systems and Transportation Laboratory of the Research Institute on Transportation, Energy and Society. His current research interests include computer vision, data fusion, and pattern recognition.