Autonomous Rover Navigation on Unknown Terrains
Demonstrations in the Space Museum "Cité de l'Espace" at Toulouse

Simon Lacroix, Anthony Mallet, David Bonnafous, Gérard Bauzil, Sara Fleury, Matthieu Herrb, and Raja Chatila
LAAS/CNRS
7, av. du Colonel Roche
F-31077 Toulouse Cedex 4
France

Abstract

Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we briefly describe some functionalities that are currently being integrated on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the difficulties raised by this integration.

1 Introduction

To foster ambitious exploration missions, future planetary rovers will have to fulfill tasks described at a high abstraction level, such as "reach the top of that hill" or "explore this area". This calls for the ability to navigate for several hundreds of meters, dealing with various and complex situations, without any operator intervention. Such an ability is still quite an open challenge: it requires the integration and control of a wide variety of autonomous processes, ranging from the lowest level servoings to the highest level decisions, considering time and resource constraints.

We are convinced that no simple autonomy concept can lead to the development of robots able to tackle such complex tasks: we believe in the efficiency of deliberative approaches [4], that are able to plan and control a variety of processes.
Following such a paradigm, and according to a general economy of means principle, we want the robot to autonomously adapt its decisions and behavior to the environment and to the task it has to achieve [5]. This requires the development of:
- Various methods to implement each of the perception, decision and action functionalities, adapted to given contexts;
- An architecture that allows for the integration of these methods, in which deliberative and reactive processes can coexist;
- Specific decision-making processes, that dynamically select the appropriate decision, perception and action processes among the ones the robot is endowed with.

In this paper, we present the current state of development of the robot Lama, an experimental platform within which our developments related to autonomous long range navigation are integrated and tested. We especially focus on the necessity to integrate various implementations of each of the main functionalities required by autonomous navigation (i.e. environment modeling, localization, path and trajectory generation). After a brief description of Lama and its equipment, the rest of the paper is split in two parts: the first part briefly presents the main functionalities required by long range navigation that we currently consider (terrain modeling, path and trajectory planning, rover localization), while the second part insists on the problems raised by the integration of these functionalities.

2 The Robot Lama

Lama is a 6-wheel Marsokhod model chassis [10] that has been totally equipped at LAAS¹. The chassis is composed of three pairs of independently driven wheels, mounted on axes that can roll relative to one another, thus giving the robot high obstacle traversability capacities. Lama's length varies depending on the axes configuration, and each motor is driven by a servo-control board.
Lama is equipped with the following sensors:
- Each wheel is equipped with a high resolution optical encoder, allowing fine speed control and odometry;
- Five potentiometers provide the chassis configuration;
- A 2-axis inclinometer provides the robot attitude, while a magnetic fluxgate compass and an optical fiber gyrometer provide the robot orientation and rotational speed;
- A first stereo bench, on top of a pan and tilt unit, is mounted on a mast rigidly tied to the middle axis. This bench is mainly dedicated to goal and landmark tracking;
- A second stereo bench, also supported by a PTU, is mounted upon the front axis. It is mainly dedicated to terrain modeling in front of the robot;
- A differential carrier-phase GPS receiver² is used to qualify the localization algorithms.

All the computing equipment is in a VME rack mounted on the rear axis of the robot. The rack contains four CPUs (two of them PowerPCs), operated by the real-time OS VxWorks. Two CPUs are in charge of the data acquisitions (except the camera images) and of the locomotion and PTU control, whereas all the environment modeling and planning functionalities run on the PowerPCs.

The terrain on which we test the navigation algorithms contains a variety of terrain types, ranging from flat obstacle-free areas to rough areas, including gentle and steep slopes, rocks, gravel, trees and bushes.

Part A: Navigation functionalities

¹ Lama is currently lent to LAAS by Alcatel Space Industries.
² Currently lent to LAAS by CNES.


Page 1: Autonomous Rover Navigation on Unknown Terrains ...homepages.laas.fr/simon/publis/LACROIX-ISER-2000.pdf · The terrain on which we test the navigation algorithms is ap-proximately



Figure 1: The robot Lama on the experimentation site.

3 Environment Modeling

Perceiving and modeling the environment is of course a key capacity for the development of autonomous navigation. Environment models are actually required for several different functionalities: to plan paths, trajectories and perception tasks (section 4), to localize the robot (section 5), and also to servo the execution of trajectories. There is no "universal" terrain model that contains all the necessary information for these various processes. It is more direct and easier to build different representations adapted to their use. The environment model is then multi-layered and heterogeneous, and perception is multi-purpose: several modeling processes coexist in the system, each dedicated to the building of specific representations.

3.1 Qualitative Model

We developed a method that produces a description of the terrain in terms of navigability classes, on the basis of stereovision data [7]. Most of the existing contributions that produce similar terrain models come down to a data segmentation procedure (e.g. [15, 9]), which produces a binary description of the environment, in terms of traversable and non-traversable areas. Our method is a classification procedure that produces a probabilistically labeled polygonal map, close to an occupancy grid representation. It is an identification process, and does not require any threshold determination (a tedious problem with segmentation algorithms).

Our method relies on a specific discretisation of the perceived area, that defines a cell image. The discretisation corresponds to the central projection, on a virtual horizontal ground, of a regular (Cartesian) discretisation in the sensor frame (figure 2). It "respects" the sensor characteristics: the cell resolution decreases with the distance, according to the decrease of the data resolution.

Figure 2: Discretisation of a 3D stereo image. Left: regular Cartesian discretisation in the sensor frame; right: its projection on the ground (the actual discretisation is much finer).
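The effect of this projection can be illustrated with a toy computation (a hypothetical sketch: a sensor at an assumed 2 m height, rays at regular angular steps below the horizontal, flat ground; none of these values come from the paper):

```python
import math

def ground_projection(sensor_height, angles_deg):
    """Project regularly spaced viewing angles (below horizontal)
    onto a flat virtual ground; returns distances from the sensor."""
    return [sensor_height / math.tan(math.radians(a)) for a in angles_deg]

# Regular discretisation in the sensor frame: one ray every 5 degrees.
angles = [10, 15, 20, 25, 30, 35, 40]
dists = ground_projection(2.0, angles)

# Ground cells are larger far from the sensor, matching the decrease
# of the data resolution with distance.
widths = [d1 - d2 for d1, d2 in zip(dists, dists[1:])]
assert all(w1 > w2 for w1, w2 in zip(widths, widths[1:]))
```

The strictly decreasing cell widths are exactly the "respects the sensor characteristics" property described above.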

Features are computed for each cell, and are used to label the cells thanks to a supervised Bayesian classifier: a probability for each cell to correspond to a pre-defined traversability class is estimated. Figure 3 shows a classification result, with two terrain classes considered (flat and obstacle). There are several extensions to the method: the discretisation can be dynamically controlled to allow a finer description, and the classification results can be combined with a terrain physical nature classifier using texture or color attributes. One of its great advantages is that, thanks to the probabilistic description, local maps perceived from different viewpoints can be very easily merged into a global description. The produced terrain model can either be used to generate elementary motions on rather obstacle-clear terrains (section 4.1), or to reason at the path level (section 4.3).


Figure 3: An example of classification result. From left to right: image from stereo, partial probabilities of the cells to be an obstacle (represented as gray levels), and reprojection of the cells in the sensor frame, after the application of a symmetric decision function.

3.2 Digital Elevation Map

Digital elevation maps, i.e. ground elevations computed on a regular Cartesian grid, are a very common way to model rough terrains [11, 2]. Although there have been several contributions to this problem, we think that it has still not been addressed in a very satisfactory way: the main difficulty comes from the uncertainties on the 3D input data, which can be fairly well estimated, but hardly propagated throughout the computations and represented in the grid structure.

However, a quite realistic model can easily be built by computing the mean elevation of the data points on the grid cells, using only the points that are provided with precise coordinates. With our stereovision algorithm for instance, only the 3D points whose depth is below a given bound are used to build the digital elevation map. Provided the robot is localized with a precision of the order of the cell size, data acquired from several view-points can be merged into a global map (figure 4). This model is then used to detect landmarks (section 3.3) and to generate elementary trajectories (section 4.2).

Figure 4: A digital elevation map built by Lama during a 20 meter run, using 50 stereovision pairs.


3.3 Finding Landmarks

An efficient way to localize a rover is to rely on particular elements present in the environment, referred to as landmarks (section 5.3). A landmark should have the following properties: (i) it must be easy to identify, so that landmark association in different images can be performed; (ii) it must be "geometrically rich enough" to allow the refinement of all the parameters of an initial position estimation; (iii) its perceived geometric attributes must be as stable as possible, i.e. independent of viewpoint changes. The first requirement can be relaxed when a sufficiently good initial position estimation is available (which is the case we consider in section 5): matching landmarks extracted from different images is then simply done by comparing their estimated positions. The second property can be bypassed when several landmarks are perceived. Local peaks, such as the summits of obstacles, satisfy the third property, are often numerous in planetary-like environments and can be quite easily extracted: we therefore decided to extract such features as landmarks.

Several authors have presented 3D data segmentation procedures to extract salient objects, and our first attempts were based on a similar principle [3]. However, such techniques are efficient only in simple cases, i.e. in scenes where sparse rocks lie on a very flat terrain, but are rather fragile on rough or highly cluttered terrains for instance. To robustly detect such local peaks, we are currently investigating a technique that relies on the computation of similarity scores between a digital elevation map area and a pre-defined 3D peak-like pattern (a paraboloid for instance), at various scales. First results are encouraging (figure 5), and the detected landmarks could be used to feed a position estimation technique (section 5.3).
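One possible reading of the similarity-score idea, assuming a zero-mean normalized cross-correlation against an inverted-paraboloid template (the paper does not specify the exact score, so both the template and the score are illustrative assumptions):

```python
import math

def paraboloid_template(radius, size):
    """Pre-defined peak-like pattern: an inverted paraboloid sampled on a
    size x size patch (an assumed stand-in for the paper's pattern)."""
    c = (size - 1) / 2.0
    return [[max(0.0, 1.0 - ((i - c) ** 2 + (j - c) ** 2) / radius ** 2)
             for j in range(size)] for i in range(size)]

def similarity(patch, template):
    """Zero-mean normalized cross-correlation between a DEM patch and the
    peak template; score in [-1, 1], 0 for a featureless patch."""
    a = [v for row in patch for v in row]
    b = [v for row in template for v in row]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

tpl = paraboloid_template(2.0, 5)
flat = [[0.3] * 5 for _ in range(5)]
assert similarity(tpl, tpl) > 0.999       # a peak matches the pattern
assert abs(similarity(flat, tpl)) < 1e-9  # flat ground does not
```

Running the score at several template radii gives the "various scales" behavior mentioned above.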

Figure 5: Landmarks (black + signs) detected on the locally built digital elevation maps (left), and reprojected in the camera frame (right). Three meters separate the two image acquisitions, and landmarks are from 3 to 10 meters away.

4 Trajectory generation

Natural terrains being unstructured, specific trajectory generation algorithms have to be developed. A generic trajectory planner able to deal with any situation should take into account all the constraints, such as rover stability, rover body collisions with the ground, and kinematic and even dynamic constraints. The difficulty of the problem calls for highly time-consuming algorithms, which would actually be quite inefficient in situations where much simpler techniques are applicable. We therefore think it is worth endowing the rover with various trajectory generation algorithms, each dedicated to the kind of terrain to traverse. Section 8 describes how they are actively started and controlled.

4.1 On easy terrains

On easy terrains, i.e. rather flat and lightly cluttered, dead-ends are very unlikely to occur. Therefore, the robot can efficiently move just on the basis of a goal to reach³ and of a terrain model that exhibits non-traversable areas, using techniques that evaluate elementary motions [7].

To generate motions in such terrains, we use an algorithm that evaluates circle arcs on the basis of the global qualitative probabilistic model. Every cycle, the algorithm is run on an updated terrain model. It consists in evaluating the interest (in terms of reaching the goal) and the risk (in terms of terrain traversability) of a set of circle arcs (figure 6). The risk of an arc is defined in terms of the probability of encountering an obstacle; arcs whose risk is bigger than a chosen threshold are discarded, and the arc that maximizes the interest/risk ratio is chosen.
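The selection loop can be sketched as follows. The way per-cell obstacle probabilities are combined into an arc risk, and all numbers, are illustrative assumptions; the paper states only that risk derives from the obstacle probability and that the interest/risk ratio is maximized:

```python
def select_arc(arcs, risk_threshold=0.3):
    """Pick the arc maximizing the interest/risk ratio, discarding arcs
    whose risk exceeds the threshold. Each arc carries the obstacle
    probabilities of the cells it crosses and an interest toward the goal."""
    best, best_ratio = None, -1.0
    for arc in arcs:
        # Risk: probability of encountering at least one obstacle cell
        # (independence assumed for this sketch).
        clear = 1.0
        for p in arc["cells"]:
            clear *= (1.0 - p)
        risk = 1.0 - clear
        if risk > risk_threshold:
            continue
        ratio = arc["interest"] / max(risk, 1e-6)
        if ratio > best_ratio:
            best, best_ratio = arc["name"], ratio
    return best

arcs = [
    {"name": "straight", "interest": 1.0, "cells": [0.05, 0.30, 0.40]},
    {"name": "left",     "interest": 0.7, "cells": [0.05, 0.05, 0.05]},
    {"name": "right",    "interest": 0.4, "cells": [0.05, 0.05, 0.05]},
]
assert select_arc(arcs) == "left"  # "straight" is discarded as too risky
```

Running this every cycle on the updated terrain model yields the reactive arc-following behavior described above.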

Figure 6: A set of circle arcs to evaluate on the global probabilistic model (left), and reprojection of the arcs in the current camera view (right).

4.2 On rough terrains

On uneven terrain, the notion of obstacle clearly depends on the capacity of the locomotion system to overcome terrain irregularities, and on specific constraints acting on the placement of the robot over the terrain. These constraints are the stability and collision constraints, plus, if the chassis is articulated, the configuration constraints (figure 7). To evaluate such constraints, the probabilistic qualitative model is no longer sufficient: the digital elevation map is required.


Figure 7: The chassis internal configuration angles checked on the digital elevation map.

We developed a planner [8] that computes motions verifying such constraints by exploring a three-dimensional configuration space on the digital elevation map. This planner builds a graph of discrete configurations that can be reached from the initial position, by applying sequences of discrete controls.
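The graph construction can be sketched with a breadth-first expansion over a toy 2D configuration space (a simplification for illustration: the real planner works in a three-dimensional configuration space, with stability, collision and chassis-configuration checks against the elevation map standing in for the `is_safe` test here):

```python
from collections import deque

def reachable_configurations(start, controls, is_safe, max_depth=3):
    """Breadth-first expansion of the discrete configuration graph:
    from the initial placement, apply discrete controls and keep only
    configurations that satisfy the placement constraints (is_safe)."""
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        cfg, depth = frontier.popleft()
        if depth == max_depth:
            continue
        for ctrl in controls:
            nxt = (cfg[0] + ctrl[0], cfg[1] + ctrl[1])
            if nxt not in seen and is_safe(nxt):
                seen.add(nxt)
                frontier.append((nxt, depth + 1))
    return seen

# Toy constraint: a forbidden band models cells where the placement
# constraints fail on the elevation map.
safe = lambda c: c[0] < 2
cfgs = reachable_configurations((0, 0), [(1, 0), (0, 1)], safe, max_depth=3)
assert (1, 2) in cfgs and (2, 0) not in cfgs
```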

³ Not necessarily the distant global goal; it can be a formerly selected sub-goal (see section 4.3).


It is however quite time-consuming: we therefore evaluate elementary trajectories, in a way very similar to section 4.1. A set of circle arcs is produced, and for each arc, a discrete set of configurations is evaluated. Each arc is then given a cost that integrates the dangerousness of the successive configurations it contains, the arc to execute being the one that maximizes the interest/cost ratio.

Figure 8: A trajectory resulting from the application of the rough terrain local planner (approximately 30 cycles).

4.3 Planning Paths

The two techniques described above only evaluate elementary local trajectories, and are therefore not able to efficiently deal with highly cluttered areas and dead-ends. For that purpose, we use a path planner that reasons on the global qualitative model to find sub-goals and perception tasks [12].

The global qualitative model, which is built upon a bitmap structure, is segmented to produce a region map. This map defines a graph, in which a search algorithm provides an optimal path to reach the global goal. The "optimality" criterion is of crucial importance here: it is a linear combination of time and consumed energy, weighted by the terrain class to cross and the confidence of the terrain labeling. Introducing the labeling confidence in the crossing cost of an arc amounts to implicitly considering the modeling capabilities of the robot: tolerating the crossing of obstacle areas labeled with a low confidence means that the robot is able to easily acquire information on those areas. The returned path is then analyzed to produce a sub-goal to reach: it is the last node of the path that lies in a traversable area.
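A minimal sketch of such a search, using Dijkstra's algorithm and an assumed cost of the form length x class weight / confidence (the paper states only that the criterion linearly combines time and energy, weighted by terrain class and labeling confidence; this particular cost shape is illustrative):

```python
import heapq

def cheapest_path(graph, start, goal):
    """Dijkstra search on the region graph; the arc cost combines the
    terrain class weight and the labeling confidence, so a low-confidence
    'obstacle' region stays crossable, at a high but finite cost."""
    queue, done = [(0.0, start, [start])], set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in done:
            continue
        done.add(node)
        for nxt, length, class_weight, confidence in graph.get(node, []):
            step = length * class_weight / max(confidence, 0.1)
            heapq.heappush(queue, (cost + step, nxt, path + [nxt]))
    return float("inf"), []

# Regions: (neighbor, length, terrain-class weight, labeling confidence).
graph = {
    "A": [("B", 10, 1.0, 0.9), ("C", 4, 5.0, 0.2)],
    "B": [("G", 10, 1.0, 0.9)],
    "C": [("G", 4, 5.0, 0.2)],  # labeled obstacle, but with low confidence
}
cost, path = cheapest_path(graph, "A", "G")
assert path == ["A", "B", "G"]
```

With different weights, the low-confidence route through "C" could win, which is exactly the "go perceive the uncertain area" behavior that figure 9 illustrates.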

Figure 9: A result of the navigation planner on the qualitative model: the result of the analysis of the shortest path found can be interpreted as the answer to the question "what area should be perceived to reach the goal?"

5 Localization

A position estimate is not only necessary to build coherent environment models, it is also required to ensure that the given mission is successfully being achieved, or to servo motions along a defined trajectory. Robot self-localization is actually one of the most important issues to tackle in autonomous navigation.

The various techniques required to compute the robot's position as it navigates range from inertial or odometry data integration to absolute localization with respect to an initial model. One can distinguish various algorithm categories: (i) motion estimation techniques, which integrate data at a very high pace as the robot moves (odometry, inertial navigation, visual motion estimation; sections 5.1 and 5.2); (ii) position refinement techniques, which rely on the matching of landmarks perceived from different positions; and (iii) absolute localization with respect to an initial global model of the environment. All these algorithms are complementary, and provide position estimates with different characteristics: we are convinced that an autonomous rover should be endowed with at least one instance of each category.

5.1 Odometry on Natural Terrain

Odometry on natural terrains is of course much less precise than on a perfectly flat ground, but it can nevertheless bring some useful information. We use 3D odometry with Lama by incorporating the attitude information provided by the 2-axis inclinometer⁴ into the translations measured by the encoders of the central wheels. Due to skid-steering, the angular orientation measured by the odometers is not reliable: the information provided by the integration of the gyrometer data is much better, and does not drift significantly before a few tens of minutes.
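The 3D odometry scheme can be sketched as follows (the axis conventions, z-up, and the slope value are assumptions made for illustration):

```python
import math

def integrate_odometry(pose, d, pitch, yaw):
    """One 3D odometry step: the translation d measured by the central
    wheel encoders is projected using the inclinometer pitch and the
    gyro-integrated heading yaw (z-up convention assumed)."""
    x, y, z = pose
    x += d * math.cos(pitch) * math.cos(yaw)
    y += d * math.cos(pitch) * math.sin(yaw)
    z += d * math.sin(pitch)
    return (x, y, z)

# Climbing a constant slope for 10 one-meter steps, heading straight:
pitch = math.atan(0.1)  # roughly a 10% grade
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, 1.0, pitch, 0.0)
assert abs(pose[2] - 10 * math.sin(pitch)) < 1e-9
```

Plain 2D odometry would report the full 10 m as horizontal displacement; projecting with the attitude recovers the elevation gain, which is why the inclinometer matters on natural terrain.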

To get a quantitative idea of the precision of odometry, we gathered some statistics, using a carrier-phase DGPS as a reference. Figure 10 presents a histogram of the measured translation errors of odometry. What is noticeable is the secondary peak, which appeared during the traverse of a gravel area, where some longitudinal and lateral slippages occurred.


Figure 10: Histogram of odometry errors, measured at regular steps during a 50 meter run with Lama on various kinds of ground.

These preliminary figures show that odometry can hardly be modeled as an estimator with Gaussian uncertainties: gross errors actually occur quite often. We are currently investigating the possibility of analyzing on line a set of proprioceptive data in order to dynamically qualify the odometry, and especially to detect such errors. These data are the 6 wheel encoders, the measured currents, the two attitude parameters and the five chassis configuration parameters.

5.2 Visual motion estimation

We developed an exteroceptive position estimation technique that is able to estimate the 6 parameters of the robot displacements in any kind of environment, provided it is textured enough for pixel-based stereovision to work well: the presence of no particular landmark is required [13]. The technique computes the motion parameters between two stereo frames on

⁴ After the application of a slight smoothing filter on its data.


the basis of a set of 3D point to 3D point matches, established by tracking the corresponding pixels in the image sequence acquired while the robot moves.

The principle of the approach is extremely simple, but we paid a lot of attention to the selection of the pixels to track: in order to avoid wrong correspondences, one must make sure that they can be faithfully tracked, and in order to have a precise estimation of the motion, one must choose pixels whose corresponding 3D points are known with a good accuracy. Pixel selection is done in three steps: an a priori selection is done on the basis of the stereo images; an empirical model of the pixel tracking algorithm is used to discard the dubious pixels during the tracking phase; and finally an outlier rejection is performed when computing an estimate of the displacement between two stereo frames (a posteriori selection).
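For intuition, here is the least-squares rigid-motion step in a planar simplification (the actual technique estimates all 6 displacement parameters from 3D matches and includes the outlier rejection described above, both omitted here; matches are assumed already established):

```python
import math

def planar_motion(pts_a, pts_b):
    """Least-squares rigid motion (rotation + translation) aligning two
    sets of matched points: a 2D Procrustes solution, the planar analogue
    of the 3D point-to-point motion estimation."""
    n = len(pts_a)
    cax = sum(p[0] for p in pts_a) / n; cay = sum(p[1] for p in pts_a) / n
    cbx = sum(p[0] for p in pts_b) / n; cby = sum(p[1] for p in pts_b) / n
    s = c = 0.0
    for (ax, ay), (bx, by) in zip(pts_a, pts_b):
        ax, ay, bx, by = ax - cax, ay - cay, bx - cbx, by - cby
        s += ax * by - ay * bx
        c += ax * bx + ay * by
    theta = math.atan2(s, c)
    tx = cbx - (cax * math.cos(theta) - cay * math.sin(theta))
    ty = cby - (cax * math.sin(theta) + cay * math.cos(theta))
    return theta, tx, ty

# Synthetic displacement: rotate 30 degrees, translate by (1, 2).
th = math.radians(30)
a = [(0.0, 0.0), (2.0, 0.0), (0.0, 1.0), (3.0, 2.0)]
b = [(x * math.cos(th) - y * math.sin(th) + 1.0,
      x * math.sin(th) + y * math.cos(th) + 2.0) for x, y in a]
theta, tx, ty = planar_motion(a, b)
assert abs(theta - th) < 1e-6 and abs(tx - 1.0) < 1e-6
```

With noiseless matches the displacement is recovered exactly; in practice the accuracy hinges on the quality of the tracked pixels, hence the care taken in their selection.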

Figure 11 presents a set of positions visually estimated over a run. Similar precision has been obtained over several experiments, processing several hundreds of images. Work related to this algorithm is still under way, with the goal of further improving the precision of the translation estimates.


Figure 11: Comparison of the positions measured by odometry, the visual motion estimator, and the DGPS reference.

5.3 Landmark Based Localization

We are currently investigating Set Theoretic approaches for landmark-based localization [14]. No statistical assumptions are made on the sensor errors: the only hypothesis is that the errors are bounded in norm. Estimates of the robot and landmark positions are derived in terms of feasible uncertainty sets, defined as regions in which the robot and the landmarks are guaranteed to lie, according to all the available information.

Some simulation results using realistic bounds, based on the landmark detection algorithms presented in section 3.3, are promising. The integration of these algorithms on board Lama is currently under way.
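The feasible-set idea can be sketched with axis-aligned boxes (an assumption made for illustration; the approach itself works with more general norm-bounded uncertainty sets):

```python
def intersect_boxes(boxes):
    """Feasible uncertainty set for the robot position: with errors only
    assumed bounded, each measurement yields a region (here an axis-
    aligned box (xmin, xmax, ymin, ymax)) and the position estimate is
    the intersection of all of them."""
    xmin = max(b[0] for b in boxes); xmax = min(b[1] for b in boxes)
    ymin = max(b[2] for b in boxes); ymax = min(b[3] for b in boxes)
    if xmin > xmax or ymin > ymax:
        return None  # inconsistent measurements
    return (xmin, xmax, ymin, ymax)

# Three landmark observations, each constraining the position to a box:
feasible = intersect_boxes([(0, 4, 0, 4), (1, 5, -1, 3), (2, 6, 1, 5)])
assert feasible == (2, 4, 1, 3)
```

Each new observation can only shrink the set, and the robot is guaranteed to lie inside it as long as the error bounds hold, which is the appeal of the set-theoretic formulation over a statistical one.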

Part B: Integration

6 A general architecture for autonomy

Our research group has been working for several years on the definition and development of a generic software and decisional architecture for autonomous machines. We briefly present here the concepts of this architecture, which allows the integration of both decision-making and reactive capabilities (a detailed presentation can be found in [1]). This architecture has been successfully instantiated in multi-robot cooperation experiments, indoor mobile robotics experiments, and autonomous satellite simulations [6]. The architecture, sketched in figure 12, contains three different layers:
- The functional level includes all the basic robot action and perception capacities. These processing functions and control loops (image processing, obstacle avoidance, motion control, etc.) are encapsulated into controllable communicating modules. A module may read data exported by other modules, and output its own processing results in exported data structures. The organization of the modules is not fixed; their interactions depend on the task being executed and on the environment state. This is an important property that makes it possible to achieve a flexible, reconfigurable robot behavior. Modules fit a standard structure, and are implemented thanks to a development environment, Genom. Note that in order to make this level as hardware-independent as possible, and hence portable from one robot to another, the functional level is interfaced with the sensors and effectors through a logical robot level.
- The Executive controls and coordinates the execution of the functions distributed in the modules according to the task requirements. Its function is to fill the gap between the decision and functional levels, i.e. between the slow rate logical reasoning on symbolic data, and the higher bandwidth computation on numerical data. It is a purely reactive system, with no planning capability. It receives from the decision level the sequences of actions to be executed, and dynamically selects, parameterizes and synchronizes the adequate functions of the functional level.
- The decision level includes the capacities of producing the task plans and supervising their execution, while being at the same time reactive to events from the previous level. This level may be decomposed into two or more layers, based on the same conceptual design, but using different representation abstractions or different algorithmic tools, and having different temporal properties. This choice is mainly application dependent.


Figure 12: LAAS architecture for robot autonomy.

Up to now, all the algorithms are integrated as modules on board Lama, while the executive is simply implemented as scripts written in Tcl.


7 Integration of Concurrent Localization Algorithms

Some particular integration problems are related to the coexistence of several localization algorithms running in parallel on board the robot. To tackle this in a generic and reconfigurable way, we developed a particular module named PoM (position manager), which receives all the position estimates produced by the localization algorithms as inputs, and produces a single consistent position estimate as an output. PoM addresses the following issues:
- Sensor geometrical distribution. The sensors being distributed all over the robot, one must know their relative positions, which can dynamically change. Indeed, we want all the modules to be generic, i.e. to handle data without having to consider the position from which it had been acquired. This is particularly true for video images, since Lama is equipped with two orientable stereo benches: we may want to switch benches whenever required, with minimal effort. For this purpose, we developed a framework named "InSitu" (internal situation, section 7.1) which is part of PoM.
- Localization module asynchronism. The localization algorithms have individual time properties: some produce a position estimate at a regular high frequency, while others require some non-constant computation time. To be able to provide a consistent position estimate at any time, the "time-management" part of PoM has been developed (section 7.2).
- Fusion of the various position estimates. The fusion of various position estimates is a key issue in robot localization. This is done within the "fusion" part of PoM (discussed in section 7.2).

7.1 Internal situation

Some problems arise when the sensors of a robot are geometrically distributed. For instance, the vision-based localization module has to know the orientation of the cameras for each image it processes, whereas the digital elevation map module needs the relative and/or absolute position of the 3D images it is using, in a predefined coordinate frame.

Distributing the geometrical information among the modules is not satisfying: some information is hard-coded and duplicated within the modules, and it complicates the porting of a module to another robot or another sensor. For that purpose, we use InSitu, a centralized geometrical description of the robot. It reads a configuration file upon startup, and provides the necessary frame coordinates to any module while the robot navigates. All the data acquisition modules use this information to tag the data they produce.

The configuration file is the textual description of a geometrical graph (figure 13). The nodes of the graph are frame coordinates that need to be exported. They usually correspond to sensor locations, but can also be a convenient way to split otherwise complex links. This graph is the only robot-specific part of PoM.

The links between frames are either static (rigid) or dynamic (mobile). A static link cannot change during execution and is usually related to some mechanical part of the robot. On the other hand, dynamic links, which depend for instance on the chassis configuration or on a PTU configuration, are updated continuously at a predefined frequency. Updates are made possible by the definition of link-specific functions that get the link parameters from the underlying hardware system. These functions are grouped together in a library associated with the configuration file. Last, the configuration file defines a common frame (also called main frame). To facilitate and homogenize inter-module data transfers, the various modules' data are expressed in this frame.
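The static/dynamic link distinction can be sketched as follows. This is an illustrative sketch, not the actual InSitu configuration format: the class names, the tuple encoding of transforms, and the `ptu_pan_tilt` stand-in for a hardware driver call are all assumptions.

```python
import math

class Link:
    """A transform between two frames of the geometrical graph.
    Dynamic links are refreshed by a user-supplied function that
    reads the underlying hardware; static links never change."""
    def __init__(self, parent, child, transform=None, update_fn=None):
        self.parent, self.child = parent, child
        self.transform = transform   # fixed value, for static links
        self.update_fn = update_fn   # set only for dynamic links

    @property
    def is_dynamic(self):
        return self.update_fn is not None

    def current(self):
        """Return the up-to-date transform for this link."""
        return self.update_fn() if self.is_dynamic else self.transform

def ptu_pan_tilt():
    # Hypothetical stand-in for a driver call returning the current
    # pan-and-tilt unit configuration.
    return ("rot", math.radians(10.0), math.radians(-5.0))

# A fragment of a Lama-like graph: two rigid links, one mobile link.
graph = [
    Link("Robot", "GPS", transform=("trans", 0.0, 0.2, 1.5)),   # rigid
    Link("Robot", "Mast", transform=("trans", 0.4, 0.0, 1.8)),  # rigid
    Link("Mast", "PTU", update_fn=ptu_pan_tilt),                # mobile
]

dynamic = [l for l in graph if l.is_dynamic]
```

In this sketch, the library of link-specific functions mentioned above corresponds to the set of `update_fn` callbacks associated with the mobile links.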

To ease data manipulation and transfer among the modules, InSitu continuously exports all the frame configurations found in the configuration file, with the structure shown in figure 14.

Figure 13: The geometrical graph used on the robot Lama, linking the Robot main frame to the GPS, the mast, the PTUs and the four stereo cameras. Rectangular boxes are frame configurations that are exported in the system. The thick box (Robot in the figure) is the main frame. Solid lines are links (either rigid or mobile) that are declared in the configuration file. Each mobile link has an associated function that gets the link parameters from the underlying hardware, computes the current transformation matrix and sends it back to PoM.

The header of exported data contains three fields: the first field is the current PoM-date, expressed in ticks since boot-time. The next two fields are positions: the Main to Origin transformation is the current absolute position of the main frame relative to an absolute frame. The Origin frame is usually the position of the robot at boot-time, but it can be specified anywhere. The Main to Base transformation is a relative position which cannot be used as such. The sole operation permitted with this frame is the composition with another Main to Base transformation (see section 7.2 for a detailed description). Base is a virtual frame that is maintained and computed by PoM.
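A plausible rendering of this exported structure is sketched below. This is an assumption for illustration, not the actual data structure used on Lama; in particular the field names and the tuple placeholder for transforms are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Transform:
    """Placeholder for a rigid-body transform (e.g. a 4x4 matrix)."""
    matrix: tuple

@dataclass
class PomHeader:
    date_ticks: int            # current PoM-date, in ticks since boot-time
    main_to_origin: Transform  # absolute position of the main frame
    main_to_base: Transform    # relative: may only be composed, never used alone

@dataclass
class InSituExport:
    header: PomHeader
    # Variable number of "Frame to Main" transformations, e.g.
    # "Camera (top, left)" -> Transform, "GPS" -> Transform, ...
    frame_to_main: dict = field(default_factory=dict)
```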

Figure 14: Structure exported by InSitu: Date, Main to Base, Main to Origin, Camera to Main (top, left), Camera to Main (top, right), GPS to Main.

Figure 15: Structure for data tagging: Date, Main to Base, Main to Origin, Frame to Main.

After the header, there is a variable number of transformations, which are the current frame configurations. They are named according to the scheme "Frame to Main" and are the transformation matrices that map 3-D points in the "Frame" frame to 3-D points in the "Main" frame.
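Such a mapping is the standard application of a homogeneous transformation matrix, as in the short sketch below; the particular "Camera to Main" matrix shown is a made-up example, not one of Lama's actual calibrated transforms.

```python
def apply(T, p):
    """Map a 3-D point p through the 4x4 homogeneous transform T
    (row-major list of rows)."""
    v = (p[0], p[1], p[2], 1.0)  # homogeneous coordinates
    return [sum(T[i][k] * v[k] for k in range(4)) for i in range(3)]

# Hypothetical "Camera to Main" transform: a camera frame 0.3 m above
# the main frame, with axes aligned (identity rotation).
camera_to_main = [[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.3],
                  [0.0, 0.0, 0.0, 1.0]]

# A point one meter in front of the camera, expressed in the main frame:
p_main = apply(camera_to_main, (1.0, 0.0, 0.0))  # -> [1.0, 0.0, 0.3]
```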

Every data acquisition module reads the InSitu frame to which it relates, and associates to its data a "tag" structure (figure 15). Thanks to this tagging, data acquisition modules only have to know the name of the frame they are connected to, which can be specified dynamically. Once tagging is done, clients using such


data do not have to care where the data come from, since all the necessary geometrical and time information is contained in the data itself. The tag is propagated along with the data between modules, thus making inter-module data communication very flexible.

7.2 Position Management

In addition to internal frame configurations, PoM collects the estimates of the various position estimators present in the system. It is the place where positional information is centralized and made available to any module that requires it.

Positions computed by the various position estimators are always produced with some delay, which depends on the computation time required to produce a particular position. Thus the robot always has to deal with outdated positions. One of the roles of PoM is to maintain the time consistency between the various position estimates.

Internally, there is one time chart for each position estimator handled by PoM, plus one particular chart for the fused position. The fused position is the result of the fusion of every position estimator (see section 7.2). All charts have the same length and hold every position produced since the beginning of the chart, up to the current date. The length is variable, but is computed so that we always have at least two positions of every motion estimator (figure 16).

PoM periodically polls every position estimator and looks whether a position has been updated. When a new position is found, it is stored in the chart of the corresponding motion estimator, at its corresponding date ("updates" arrows in figure 16). If the chart is not long enough to store a too old date, it is enlarged as needed. Once every motion estimator has been polled, every fused position from the oldest update to the current time is marked for re-estimation.
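The polling and re-estimation bookkeeping can be sketched as follows. This is a minimal sketch under assumed data layouts (sorted per-estimator charts, a `poll` entry point); it is not the actual PoM implementation.

```python
import bisect

class TimeChart:
    """One chart per estimator: timestamped positions covering the
    (variable-length) history window."""
    def __init__(self):
        self.dates = []      # sorted production dates, in ticks
        self.positions = []  # position estimate stored at each date

    def store(self, date, position):
        """Insert an estimate at its production date (which may be
        older than the latest stored date)."""
        i = bisect.bisect(self.dates, date)
        self.dates.insert(i, date)
        self.positions.insert(i, position)

charts = {"odometry": TimeChart(), "gps": TimeChart()}

def poll(updates, now):
    """Store freshly produced estimates, then return the oldest update
    date: every fused position from that date up to `now` must be
    marked for re-estimation."""
    oldest = now
    for name, (date, position) in updates.items():
        charts[name].store(date, position)
        oldest = min(oldest, date)
    return oldest
```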

Figure 16: Time management and internal representation of motion estimators in PoM. The position estimators shown (GPS, odometry, landmarks, fusion) are only examples. Black dots show previously stored positions. Dashed dots show new upcoming positions.

Actually, no data fusion algorithm is currently implemented within PoM: given the individual position estimator characteristics, we indeed consider that a consistency check (fault detection) has to be performed prior to any fusion. Up to now, an estimate selection is performed on the basis of a confidence (a real value between 0.0 and 1.0, 1.0 being the best) that is hard-coded for each position estimator.

Once the current best position is computed, PoM stores it in the fusion time chart (figure 16). The important thing to note is that we cannot simply store the new position as such, because it might be very different from the previous position. Since some modules may use the robot position continuously (such as when servoing on a computed trajectory, for instance), virtual jumps are not permitted. This is why we defined a reference position estimator, and a virtual frame named Base. The fused position computed by PoM is the Robot to Origin position. This one is exported as such and used by modules that require an absolute position. The second exported position, Robot to Base, is the position of the reference estimator and can only be used locally, i.e. to compute a delta position. PoM computes a Base to Origin transformation for every position estimator found in the system.

This transformation is an indicator of the drift of each position estimator.
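A deliberately simplified one-dimensional illustration (an assumption for clarity; the real positions are 3-D transforms, composed rather than subtracted) shows why Robot to Base is only used for deltas: the reference estimator is smooth and jump-free, while Base to Origin absorbs the discontinuous corrections of the fused estimate.

```python
def delta(robot_to_base_t0, robot_to_base_t1):
    """Displacement between two dates of the reference estimator:
    jump-free, hence safe for continuous uses such as servoing."""
    return robot_to_base_t1 - robot_to_base_t0

def base_to_origin(robot_to_origin, robot_to_base):
    """Drift indicator: position of the virtual Base frame in the
    Origin frame, i.e. the correction accumulated by the fusion."""
    return robot_to_origin - robot_to_base

smooth = delta(10.0, 10.5)          # reference estimator advanced 0.5 m
drift = base_to_origin(12.3, 10.5)  # fused estimate sits 1.8 m ahead of it
```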

8 Navigation strategies

The integration of the various algorithms presented in the first part of the paper requires specific decisional abilities, which are currently instantiated as Tcl scripts. The following simple strategy is currently applied: the three environment models (qualitative map, digital map and landmark map) are continuously updated every time new data are gathered, and the two integrated localization algorithms (odometry and visual motion estimate) are also continuously running⁵. The selection of the trajectory generation algorithm is the following: given a global goal to reach, the easy terrain algorithm, which requires negligible CPU time, is applied until no feasible arcs can be found. In such cases, the rough terrain algorithm is applied. It is run until either the easy terrain algorithm succeeds again, or until no feasible arcs are found in the digital map. In the latter case (which can be assimilated to a dead end), the path planning algorithm is run to select a sub-goal to reach. The whole strategy is then applied to reach the sub-goal, and so on.
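The selection logic above can be sketched as follows. The sketch is in Python rather than the Tcl actually used on Lama, and the function and mode names are illustrative assumptions, not the real script interface.

```python
def select_motion_mode(easy_feasible, rough_feasible):
    """Pick the cheapest applicable trajectory generator, falling back
    to path planning (sub-goal selection) in dead ends.

    easy_feasible:  the easy terrain algorithm found feasible arcs
    rough_feasible: the rough terrain algorithm found feasible arcs
                    in the digital map
    """
    if easy_feasible:
        return "easy_terrain"   # negligible CPU time, always tried first
    if rough_feasible:
        return "rough_terrain"  # planned on the digital elevation map
    return "path_planner"       # dead end: select a sub-goal to reach
```

Note that the strategy is re-evaluated continuously, so control returns to the cheap easy terrain mode as soon as it succeeds again.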

9 Conclusion

We insisted on the fact that to efficiently achieve autonomous long range navigation, various algorithms have to be developed for each of the basic navigation functions (environment modeling, localization and motion generation). Such a paradigm eventually leads to a complex integrated system, thus requiring the development of integration tools at both the functional and decisional levels. We are convinced that such tools are the key to implementing efficient autonomy over large time and space ranges.

There are however several open issues. Among these, we believe that the most important one is still localization. In particular, the system must be robust to the extremely large uncertainties on the position estimates that will eventually occur: this requires the development of landmark recognition abilities to tackle the data association problem, and also the development of terrain model structures that can tolerate large distortions. Note that both problems should benefit from the availability of an initial terrain map, such as one provided by an orbiter, whose spatial consistency is ensured. Indeed, the development of algorithms that match locally built terrain models with such an initial map would guarantee bounds on the error of the position estimates.

⁵ Landmark-based localization integration is under way, but it should also be run continuously, while actively controlling the image acquisition.

References

[1] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand. An architecture for autonomy. Special Issue of the International Journal of Robotics Research on Integrated Architectures for Robot Control and Programming, 17(4):315–337, April 1998.

[2] P. Ballard and F. Vacherand. The Manhattan method: A fast Cartesian elevation map reconstruction from range data. In IEEE International Conference on Robotics and Automation, San Diego, Ca. (USA), pages 143–148, 1994.

[3] S. Betge-Brezetz, R. Chatila, and M. Devy. Object-based modelling and localization in natural environments. In IEEE International Conference on Robotics and Automation, Nagoya (Japan), pages 2920–2927, May 1995.

[4] R. Chatila. Deliberation and reactivity in autonomous mobile robots. Robotics and Autonomous Systems, 16(2-4):197–211, 1995.

[5] R. Chatila and S. Lacroix. A case study in machine intelligence: Adaptive autonomous space rovers. In A. Zelinsky, editor, Field and Service Robotics, number XI in Lecture Notes in Control and Information Science. Springer, July 1998.

[6] J. Gout, S. Fleury, and H. Schindler. A new design approach of software architecture for an autonomous observation satellite. In 5th International Symposium on Artificial Intelligence, Robotics and Automation in Space, Noordwijk (The Netherlands), June 1999.

[7] H. Haddad, M. Khatib, S. Lacroix, and R. Chatila. Reactive navigation in outdoor environments using potential fields. In International Conference on Robotics and Automation, Leuven (Belgium), pages 1232–1237, May 1998.

[8] A. Hait, T. Simeon, and M. Taix. Robust motion planning for rough terrain navigation. In IEEE/RSJ International Conference on Intelligent Robots and Systems, Kyongju (Korea), pages 11–16, Oct. 1999.

[9] L. Henriksen and E. Krotkov. Natural terrain hazard detection with a laser rangefinder. In IEEE International Conference on Robotics and Automation, Albuquerque, New Mexico (USA), pages 968–973, April 1997.

[10] A. Kemurdjian, V. Gromov, V. Mishkinyuk, V. Kucherenko, and P. Sologub. Small marsokhod configuration. In IEEE International Conference on Robotics and Automation, Nice (France), pages 165–168, May 1992.

[11] I.S. Kweon and T. Kanade. High-resolution terrain map from multiple sensor data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14(2):278–292, Feb. 1992.

[12] S. Lacroix and R. Chatila. Motion and perception strategies for outdoor mobile robot navigation in unknown environments. In 4th International Symposium on Experimental Robotics, Stanford, California (USA), July 1995.

[13] A. Mallet, S. Lacroix, and L. Gallo. Position estimation in outdoor environments using pixel tracking and stereovision. In IEEE International Conference on Robotics and Automation, San Francisco, Ca (USA), pages 3519–3524, April 2000.

[14] M. Di Marco, A. Garulli, S. Lacroix, and A. Vicino. A set theoretic approach to the simultaneous localization and map building problem. In 39th IEEE Conference on Decision and Control, Sydney (Australia), Dec. 2000.

[15] L. Matthies, A. Kelly, and T. Litwin. Obstacle detection for unmanned ground vehicles: A progress report. In International Symposium of Robotics Research, Munich (Germany), Oct. 1995.