Robotics and Autonomous Systems 54 (2006) 967–981
www.elsevier.com/locate/robot

A sonar approach to obstacle detection for a vision-based autonomous wheelchair

Guillermo Del Castillo a, Steven Skaar a, Antonio Cardenas b,*, Linda Fehr c

a Aerospace and Mechanical Engineering, University of Notre Dame, USA
b CIEP Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, Mexico

c Prosthetics, Orthotics and Orthopedic Rehabilitation/Rehab Engineering, Hines VA Hospital, 5th Ave. & Roosevelt Rd., Hines, IL 60141, USA

Received 1 March 2005; received in revised form 26 May 2006; accepted 29 May 2006
Available online 14 August 2006

Abstract

An advanced prototype Computer Controlled Power Wheelchair Navigation System, or CCPWNS, has been developed to provide autonomy for highly disabled users, whose mix of disabilities makes it difficult or impossible to control their own power chairs in their homes. The working paradigm is "teach and repeat", a mode of control typical of industrial holonomic robots. Ultrasound sensors, which during subsequent autonomous tracking will be used to detect obstacles, are also active during teaching. Based upon post-processed data collected during this teaching event, elaborate trajectories – which may involve multiple direction changes, pivoting and so on, depending upon the requirements of the typically restricted spaces within which the chair must operate – will later be called upon by the disabled rider. An off-line postprocessor assigns an ultrasound profile to the sequence of poses of any taught trajectory. Use of this profile during tracking obviates most of the inherent problems of using ultrasound to avoid obstacles while retaining the ability to near solid objects, such as when passing through a narrow doorway, where required by the environment and trajectory objectives. This article describes a procedure to obtain consistent maps of sonar boundaries during the teaching process, and a preliminary approach to using this information during the tracking phase. The approach is illustrated by results obtained with the CCPWNS prototype.
© 2006 Elsevier B.V. All rights reserved.

Keywords: Control systems; Robotics; Estimation using vision; Wheeled robots; Sonar obstacle detection

1. Introduction

With mobile robots there are three questions that need to be answered in order to have a basic navigation capability: "Where am I?", "Where do I want to go?" and "How do I get there?" [9]. The first question is related to localization. The second and the third are related to path planning. In terms of localization, an initial position can be given to the system and subsequent estimates of the coordinates can be calculated by measuring the movement of the wheels using encoder devices, for example. But with mobile robots, due to the characteristics of the nonholonomic kinematics, there is a need to have feedback

* Corresponding address: CIEP Facultad de Ingeniería, Universidad Autónoma de San Luis Potosí, Av. Dr. Manuel Nava #8, CP 78290, San Luis Potosí, SLP, Mexico.

E-mail addresses: [email protected] (G. Del Castillo), [email protected] (S. Skaar), [email protected] (A. Cardenas).

0921-8890/$ - see front matter © 2006 Elsevier B.V. All rights reserved.
doi:10.1016/j.robot.2006.05.011

information through observations of the environment [6]. These observations can then be incorporated into an estimation algorithm to correct the position of the vehicle.

Sensors are devices that obtain information from the workspace. This information, once it has been properly interpreted, can be used to make inferences about the environment and produce estimates. There are many types of sensors, and each has its own advantages and drawbacks. The most commonly used are active/passive vision, light detection and ranging (LIDAR) and ultrasound range-sensing (sonar).

Autonomous wheelchairs are a subset of autonomous mobile robots that is receiving increasing attention around the globe. This interest resides in the need to give mobility and a certain degree of independence to persons who are handicapped or have been severely disabled. The designs of these wheelchairs include several desirable characteristics: maneuverability, navigation, control, safety; these set them apart from other kinds of autonomous mobile robots [1].


Some mobile systems use a priori pre-planned trajectories based on complex stored maps of the environment [12–14]. An alternative scheme is based on the most common means to control an industrial, holonomic robot: "teach–repeat". This straightforward approach is dependable and is based on the excellent repeatability of the taught poses by a typical holonomic robot. Researchers at the University of Notre Dame Department of Aerospace and Mechanical Engineering extended this simple approach to nonholonomic robots [4].

For this purpose, a fully autonomous wheelchair prototype has been developed at the Dexterity, Vision and Control Laboratory at Notre Dame in collaboration with the Hines VA Hospital in Chicago. The name of the vehicle prototype is the Computer Controlled Power Wheelchair Navigation System, or CCPWNS (Fig. 1). The goal of this project is to give a certain level of autonomy to persons who have lost the use of their legs and lack the necessary coordination to manipulate the joystick or other user-input device of a motorized wheelchair.

The CCPWNS uses passive vision and a prelocated, predefined set of 'cues' to get observations and solve the problem of localization [6,7]. That approach is practical because it gets around the problem of having to interpret very complex visual information about the environment. Since the cue detection only involves simple scanning and pattern recognition, the computational cost is quite low. But as a downside, the system does not get any information about the physical characteristics of the workspace. That is, it does not have the capability to detect obstacles or distinguish walls and furniture, since all the analysis using vision is dedicated to extracting cue information from the images. The teach–repeat paradigm is used to sidestep the path-planning problem, since the trajectory is not calculated in real time and the system simply follows a series of poses taught by a human teacher [6–8]. However, in a realistic workspace, there is always the possibility of changes in the environment. These could be an object blocking the path that was not there previously; or a rearrangement of the furniture in a room after the teaching episode; or a trajectory that crosses a doorway that has to take into account that the door might be closed; or a path that waits for an elevator to arrive, just to name a few examples. The CCPWNS has to be able to deal with such eventualities. All the situations described above have in common the necessity of obtaining information to detect changes in the environment since the path was originally taught. This information could then be used to prevent collisions by coming to rest or changing the course of the vehicle.

The device chosen for the project is ultrasound range-sensing, or sonar. In this article, the justification for this choice is given, an experimental data sample is chosen, and an implementation using the CCPWNS is explained. The case is argued for using ultrasound range-sensing (sonar) to create a model of the environment. It is emphasized that the purpose of the sensing is not to create precise maps of the workspace, but to distinguish whether the generated patterns during the teaching and tracking episodes are similar. The basic concept is to take advantage of the teach–repeat paradigm together with

Fig. 1. Autonomous wheelchair vehicle.

Fig. 2. Simulated ray-trace data.

the reported fact [12] that the artifacts created by sonar are quite consistent.

2. Why use sonar?

Ultrasound range-sensing has been used since the mid-eighties to generate simple range data; it is well known that ultrasound range sensing has two big shortcomings: beam width and specularity [17]. Compare the plots of ray-trace scanner simulated data and actual sonar readings of a tracking episode in Figs. 2 and 3. In both figures, the black circles indicate the sonar readings. The two rectangles indicate the initial and final position of the CCPWNS vehicle. The curve connecting the two polygons is the path traveled. The data of Fig. 2 give an accurate representation of the physical space. But in Fig. 3, note how the wall appears to be detected at different distances depending on the orientation of the sensor.

The effect due to the beam width creates a series of curved surfaces along walls. This is because the distance to the wall remains constant no matter the angle of incidence of the sound cone (or of the sensor with respect to the wall). These curved surfaces were referred to as regions of constant depth (RCDs) in Leonard's work. Kuc, Leonard and Siegel suggested that correct sonar interpretation depends on extracting the RCDs from the raw data. The work of Kuc and Siegel was to characterize the


Fig. 3. Real sonar data.

different surfaces and their matching RCDs [18,9]. They found out that the response of a wall to ultrasound is a series of arcs at the range of the closest return; and that corners and edges behave as RCDs that vary with orientation depending on the angle of the emitting device. These phenomena are a consequence of specular reflection.

The risk of using sonar is that there is no guarantee that objects detected within the cone correspond to an object that is in front of the plane of the transducer while using a simple time-of-flight sensing [15]. The main reason sonar was chosen for this project is the fact that the data provided by this sensor are coarse but consistent. The purpose of the sensing is not to create complete maps, but simply to detect if "something" in the workspace changed from the teaching to the tracking episodes. The matching of the patterns is easier to achieve due to the sparseness of the data, and is less sensitive to noise — unlike the case of LIDAR [16].

To interpret the range data, choosing a proper representation is a most important step. Two possible ways to organize the resulting datasets are grid or geometric representations. Grid representations divide the workspace into cells [22,23]. The cells can have constant or variable size, depending on the application or the degree of desired resolution (higher close to possible objects, lower in long stretches of open space). Each of these discretized units has a value of 'occupied', 'free' or 'unknown'. Central to this approach is the idea of free space, which is a measure of an area where the vehicle can move without risk of hitting an object.
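As an aside, a minimal sketch of the grid representation just described (not the approach the CCPWNS ultimately adopts): cells along a single sonar ray are marked free and the cell holding the return is marked occupied. The 100 mm cell size follows Fig. 4; the function name, workspace size and simple ray-stepping scheme are assumptions made for the example.

```python
import numpy as np

FREE, UNKNOWN, OCCUPIED = 0, 1, 2
CELL_MM = 100.0                      # 100 x 100 mm cells, as in Fig. 4

def update_grid(grid, x0, y0, x1, y1):
    """Mark cells between the sensor (x0, y0) and the sonar return (x1, y1)
    as free, and the cell containing the return as occupied (units: mm)."""
    n = int(np.hypot(x1 - x0, y1 - y0) // CELL_MM) + 1
    for t in np.linspace(0.0, 1.0, n, endpoint=False):
        i = int((x0 + t * (x1 - x0)) // CELL_MM)
        j = int((y0 + t * (y1 - y0)) // CELL_MM)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = FREE
    i, j = int(x1 // CELL_MM), int(y1 // CELL_MM)
    if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
        grid[i, j] = OCCUPIED

grid = np.full((100, 100), UNKNOWN)  # hypothetical 10 x 10 m workspace
update_grid(grid, 500.0, 500.0, 2300.0, 1700.0)
```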

In contrast, geometric representations try to divide the space into a set of geometric primitives, like lines, points and planes. They require the extraction of features from the raw data obtained from the sensors [3]. Figs. 4 and 5 show two types of representation of the same range data. These data were collected from the sensors while physically pushing the vehicle.

The plot in Fig. 4 has a grid of 100 by 100 mm resolution. The two rectangles indicate the initial and final position of the vehicle. Using the sonar readings during the small trajectory shown as a curve that connects a particular juncture on the vehicle in the two positions, range readings can assign values to the grid components. In the particular case of Fig. 4, the

Fig. 4. Grid-based map.

Fig. 5. Geometry based map.

white squares are free space, the darker ones are considered unknown and the darkest ones are considered occupied. In the plot in Fig. 5, the two rectangles represent the initial and final position of the vehicle. The black lines are the representation of the sonar readings. The resulting artifact reflects how the ultrasound reacts to a wall.

Grid-based representations are more general than geometric ones, but they are computationally much more intensive. A larger region mapped or a higher resolution requires exponential increases in the quantity of memory required [9]. It might be argued that the creation of maps that use feature representation also needs considerable computer power, in particular if they are built in real time. But the purpose of this work is not map-building, since the CCPWNS does not use sonar for navigation purposes, but rather obstacle detection only.

A geometric approach was chosen for its relative simplicity. Leonard and Durrant-Whyte propose this fundamental assumption in terms of sonar representation: it is assumed that the actual three-dimensional geometry of the environment is orthogonal to the horizontal plane in which the robot moves. This proposition is known as the 2D assumption [9].


The ultrasound range-sensing hardware of the CCPWNS consists of the sensors, two supporting metal plates, a Stamp module and a serial connection to the main CPU. Eight ultrasound sensors are attached to two steel plates, four on each plate. Fig. 1 shows a photograph of the CCPWNS system with its rigid plates bolted below the footrests of the wheelchair. Each transducer is set at a different angle from the center axis of the vehicle. This arrangement is made for the purpose of increasing the likelihood that at least one sensor will get a reading close to perpendicular to the plane of a possible object or wall.

In the teaching phase, the estimates and the corresponding sonar readings are saved into a diagnostic file after a trajectory is taught. The information of the sonar boundaries or regions is then made available to the tracking program during the repeat phase.

3. Sonar obstacle detection

Many recent efforts have tried to build accurate maps of the environment using complex algorithms that interpret the time-of-flight (TOF) data [19,20] from sonar or LIDAR.

The main difference between the CCPWNS and other wheelchair projects like Rolland [3] or VAHM [5] is that the range sensing produced by the sonar transducers is used only for obstacle detection and not as the main means of navigation.

The basis of the scheme is to collect ultrasound samples during the teaching episode of a certain trajectory. This information, after being compressed into a useful form, can be compared with the range readings detected during tracking.

The approach chosen for the CCPWNS uses the idea of defining regions that bound the sonar readings during the teaching episode. The readings detected during a tracking episode would then be compared with a sonar boundary calculated during the teaching phase. If readings occur outside of this designated boundary, it would signify that the environment is not the same as when the teaching was performed. Then, a warning or an evasive procedure can be executed, or the vehicle may be brought to rest.

It is important to note that the regions defined have to be local to the trajectory. Since a certain point or region of the environment could be crossed many times from different directions, the resulting RCDs would be different also. The sensors create different artifacts when referencing the same structure from the same position at a different angle.

Consider a teaching episode where a human teacher rolls the vehicle physically from position A to position B. The pose estimates and TOF sonar data collected during one such exercise are displayed in Fig. 6. The two rectangles at positions A and B represent the vehicle in the initial and final destinations. The black curve that joins the two rectangles is actually a series of very dense points. These points represent the estimates of the position of the chair during the teaching episode. The polygons represent objects in the room, i.e. tables, walls, etc. Those were hand-measured. Finally, the circles represent TOF readings from the left footrest of the chair, and the triangles represent readings from the right footrest.

Fig. 6. Raw data from teaching episode.

Fig. 7. Segments generated by postprocessor.

Fig. 7 shows the path segments computed from the position estimates using an offline postprocessor program. Lines represent the segments of the trajectory and circles are the end points of each segment of the taught path. The path generator algorithm divides the trajectory into 17 segments, numbered 0–16. For information about the path generator algorithm used by the CCPWNS, and how it divides the taught trajectory into stable line segments, see [10,21].

The algorithm to track a segment is used, in the post-processing of data acquired during the teaching phase, to separate the sonar points by segment. Basically, it simulates a tracking episode using the teaching estimates as the input data, and each time it arrives at the end of a path segment the algorithm assigns it the corresponding sonar readings. The program uses its estimates of the position to calculate its progress along the current segment, and determines at which juncture a new path segment should be uploaded [6,7,21].
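A minimal sketch of this per-segment bookkeeping, assuming the teaching pose estimates and sonar returns arrive as parallel, time-ordered lists and that a hypothetical helper `segment_index_at()` (standing in for the CCPWNS segment-progress logic of [6,7,21]) reports which path segment is active at each estimate:

```python
from collections import defaultdict

def split_readings_by_segment(estimates, readings, segment_index_at):
    """Group sonar returns by the path segment that was active when each
    reading was taken.

    estimates        : list of (x, y, phi) pose estimates from the teaching episode
    readings         : list of sonar returns, parallel to `estimates`
    segment_index_at : callable mapping a pose estimate to the index of the
                       current path segment (stand-in for the path-tracking logic)
    """
    per_segment = defaultdict(list)
    for pose, reading in zip(estimates, readings):
        per_segment[segment_index_at(pose)].append(reading)
    return per_segment
```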

Fig. 8 shows the acquired sonar data for segment 12 of the trajectory shown in Fig. 7. In segment 12, the vehicle mostly pivots counterclockwise between two parallel walls. The two rectangles show the initial and final positions. Plots like this one can be generated for each segment.


Fig. 8. Sonar data for segment 12.

The following step is to detect the points that are closest to the vehicle, and to use those to define a region within which objects detected during tracking will be considered problematic. A subsequent procedure finds the largest range where the data are distributed. In the case of segment 12, the largest range is along the X axis. Then, the algorithm divides the range into small angular segments. The procedure looks for the reading with minimal distance to the estimated position of the vehicle within each of these segments. The result can be observed in Fig. 9 for the segment-12 dataset.

The resulting points can be approximated using a curve based on a least-squares best fit or the joining of consecutive points using line segments. This latter method was chosen as the more conservative, after eliminating the redundant points using an algorithm similar to the one used for the path generation of segments [8,10]. The resultant boundary points per path segment are stored from shortest to largest distance to the origin. This ordering strategy simplifies the boundary-data handling. The number of boundary points varies depending on the complexity of the environment. Finally, two areas can be defined using these line segments, see Fig. 10. The two rectangles indicate, as before, the initial and final position of the vehicle for segment 12.
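A sketch of the fence construction just described, under a few simplifying assumptions: the readings have already been separated by segment, the sweep is performed in fixed angular bins around a single representative vehicle position, and the redundant-point elimination step is omitted.

```python
import math

def build_fence(readings, vehicle_xy, n_bins=36):
    """In each angular bin around the vehicle, keep the sonar return closest
    to the vehicle; the surviving points, ordered by distance to the origin,
    are the vertices of the fence polyline."""
    vx, vy = vehicle_xy
    closest = {}
    for x, y in readings:
        ang = math.atan2(y - vy, x - vx)                   # bearing of the return
        b = int((ang + math.pi) / (2 * math.pi) * n_bins) % n_bins
        d = math.hypot(x - vx, y - vy)
        if b not in closest or d < closest[b][0]:
            closest[b] = (d, (x, y))
    points = [p for _, p in closest.values()]
    # Store the boundary points from shortest to largest distance to the
    # origin, as described above.
    points.sort(key=lambda p: math.hypot(p[0], p[1]))
    return points
```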

The points calculated in the earlier step have been used to generate a boundary. In principle, any point that appears within the darker area during tracking would represent an acceptable reading.

The reason for separating the sonar data using the path segments of the teach–repeat process as a guide is one of simplicity. The data handling is manageable, since the files that contain the definition of the segments can be expanded to also contain the information necessary to define each sonar boundary. There is no need to define another set of conditions to transition between ultrasound regions, since the region change occurs at the same time as the change in path segments.

It might be argued that the RCDs so created do not follow the geometry of the surfaces, and that this kind of separation is artificial. It is true that the ultrasound-created artifacts get cut in an arbitrary way, but the interest of the project is not

Fig. 9. Closest points segment 12.

Fig. 10. Region defined segment 12.

to define these sonar regions precisely – as in a map – but rather to compare whether the local patterns generated during tracking are or are not essentially similar to the ones made during teaching.

The sonar fence is just a boundary to determine whether a reading is inside or outside of a region. It is built out of RCD data but it is not a proper RCD. Since the interest of the project is not map building or characterization of the RCDs, this distortion is not critical. The interest lies in reproducing a similar sonar pattern while tracking in the absence of introduced obstacles. The connecting long lines bound spaces empty of sonar TOF readings. Presumably, these empty spaces remain essentially the same during a tracking episode.

To see how closely the fence matches the sonar readings recorded during a tracking episode, observe the example displayed in Fig. 11. In the figure, the circles represent readings from the left sonar array and the triangles readings from the right array, collected during a tracking episode. The two rectangles are the initial and final trajectory-segment positions of the vehicle. Segment 16 of the trajectory is shown, along with the corresponding left and right fences.


Fig. 11. Tracking sonar data and teaching fences from segment 16.

Fig. 12. Tracking sonar data and teaching fences from segment 10.

Note how most of the points are outside the fences or very close to their boundary. Some points, however, appear on the inside of the fences. This phenomenon happens because the trajectory tracked is not exactly the same as that which was taught. The path postprocessor refines the teaching estimates and converts them into segments [6].

Still, the sonar pattern looks roughly the same, as it should since nothing has changed in the environment. But the approach to be developed has to discern that these discrepancies in the pattern are not due to an obstacle. This topic will be dealt with in detail in Section 5.

Now consider a plot similar to Fig. 11, but using the data from segment 10 of the same tracking episode. Refer to Fig. 12. Note how the 'clump' of data generated by the edge is on the inside of the fence. It is also at a large distance from the fence in comparison with most of the data shown in Figs. 11 and 12. This effect created by the edge is due to the strong sonar reflection. It also means that if the angle of incidence of the emitting device varies slightly, the generated reflection can have a large variation. This behavior is well known and documented [16]. It poses a problem for the system to distinguish between perturbations caused by an obstacle, and a strongly reflected

Fig. 13. Tracking segment 11 with fence 8.

Fig. 14. Tracking segment 4 with fence 0.

reading generated by a sharp edge. These questions will be addressed in Section 5.

One last word regarding the local range of the fences used during tracking. To illustrate the point, Figs. 13 and 14 feature sonar data from tracking episodes of segments of the trajectory shown in Fig. 7, with sonar fences calculated during earlier segments. Consider the data displayed in Fig. 13. In the figure, the circles represent readings from the left sonar array and the triangles readings from the right array, collected during a tracking episode of the trajectory displayed in Fig. 6. The solid polygons represent the walls. The filled rectangle signals the final position at the end of path segment 11. The rectangular outline shows the initial position of the CCPWNS during the teaching phase of segment 8. Finally, the left and right fences of segment 8 are shown as a series of dark and light lines, respectively.

Note how the readings of the left sensor follow the RCD created by the wall, but not its corresponding left fence. This behavior is noticeable because the right fence bounds the left array data. On the other side, the data generated by the right array sensor is not bounded at all. This cluster of points is generated by the strong edge of the wall structure on the right.


A relatively small change in the vehicle's position created a large change in the trajectory's sonar profile, since the system detects two distinct RCDs during the tracking of path segment 11, as opposed to one RCD detected during the teaching phase of path segment 8.

A possible alternative to the local fence approach to the path segments would be to categorize the various RCDs that form the workspace, or to create a list of stable environmental landmarks [19]. But that approach implies having certain a priori knowledge of the workspace and it is more appropriate for map building. Now consider the plot shown in Fig. 14. The figure follows the same conventions as Fig. 13, except that the tracking segment is number 4, and the fences and CCPWNS outline displayed are the ones corresponding to segment 0.

In this case, the RCDs perceived by the vehicle during path segments 0 and 4 are the same, produced by the left structure and the bottom wall. Still, the strong reflection produced by the edge of the lower structure would be perceived as an introduced obstacle, since it deviates noticeably from the left fence. This behavior has been documented extensively, and it is caused by the changes in the angle of incidence of the transducer while detecting sonar returns from a prominent edge [18]. This phenomenon of perceiving a particular structure in different ways depending on the angle of incidence makes it necessary to assign the sonar profiles detected during the teaching phase to roughly the same junctures during the tracking phase, that is, to make the fences local to a suitably contained trajectory section, as per the present discussion.

4. Sonar TOF reading comparison

In this section, a method to compare the sonar data obtained during the tracking event with the corresponding sonar fence calculated off-line is explained. The method receives as input a vector with the values of the TOF readings and the corresponding estimates of the position of the vehicle. Then, it calculates the distance of each reading to the fence, and whether it is inside or outside the allowed range. Based on the values of the distance to the fence, and the 'side' of the reading, a programmable criterion to determine whether an obstacle is creating a perturbation in the original taught pattern can be devised. This criterion can be implemented in a concurrent thread that receives the synchronous sonar information.

This sonar thread receives the TOF readings, and the measured values of $x^{\text{offset}}_{lm}$, $y^{\text{offset}}_{lm}$ and $\phi^{\text{offset}}_{lm}$, which are the coordinates of the position of sensor $lm$ with respect to point $x, y$. Fig. 15 shows a schematic of the wheelchair and the position of the sensor arrays.

In the figure, the rectangle represents the vehicle and the dark square is the midpoint of the axis between the two drive wheels; in the pose shown that is point $x, y$ on the plane of the floor. The circles symbolize the sonar sensors on the left footrest and the triangles the ones on the right. The arrows show the orientation of the transducers and the direction of the sonar emission.

Fig. 15. Vehicle schematic.

Fig. 16. Fence comparison.

The first step in calculating the distance to the fence is locating each sensor's position on the surface of the floor. These coordinates, with respect to the floor's reference frame, are called $x^{s0}_{lm}, y^{s0}_{lm}$ at the point of emission, and $x^{s1}_{lm}, y^{s1}_{lm}$ at the endpoint of the sonar TOF reading, or point of return. The index $l$ indicates the sonar array location, with 0 as the left and 1 as the right side; and $m$ indicates the sonar sensor referenced inside the array.

These coordinates are computed with respect to reference point $x, y, \phi$. The endpoint of the sonar reading will be compared to the segments of the sonar fence, formed by consecutive array points $x^{\text{fence}}_{li}, y^{\text{fence}}_{li}$ and $x^{\text{fence}}_{l,i+1}, y^{\text{fence}}_{l,i+1}$, where the index $i$ has a range of 1 to $I_l - 1$ and $I_l$ is the total number of points of fence $l$. All fence points $x^{\text{fence}}_{li}, y^{\text{fence}}_{li}$ and $x^{\text{fence}}_{l,i+1}, y^{\text{fence}}_{l,i+1}$ are ordered from shortest to largest distance to the origin, as explained in Section 3. Refer to Fig. 16.

In Fig. 16, the dark circle represents the return of the sonar reading, and the lighter circles represent points of the fence. To find the shortest distance of point $x^{s1}_{lm}, y^{s1}_{lm}$ to the indicated fence segment, two vectors, $u^i_{lm}$ and $v^i_{lm}$, are defined.

The displacement vector $v^i_{lm}$ is the vector from point $li$ of the segment to the endpoint of the sonar reading. The vector that joins two consecutive points of the fence, $x^{\text{fence}}_{li}, y^{\text{fence}}_{li}$ and $x^{\text{fence}}_{l,i+1}, y^{\text{fence}}_{l,i+1}$, is denoted as $u^i_{lm}$. Then,

$$v^i_{lm} = \begin{pmatrix} x^{s1}_{lm} - x^{\text{fence}}_{li} & y^{s1}_{lm} - y^{\text{fence}}_{li} \end{pmatrix}^{\mathrm{T}}$$
$$u^i_{lm} = \begin{pmatrix} x^{\text{fence}}_{l,i+1} - x^{\text{fence}}_{li} & y^{\text{fence}}_{l,i+1} - y^{\text{fence}}_{li} \end{pmatrix}^{\mathrm{T}}. \quad (1)$$


Fig. 17. Sonar comparison first case.

Fig. 18. Sonar comparison second case.

The angle between these two vectors can be calculated by

$$\cos\psi^i_{lm} = \frac{v^i_{lm} \cdot u^i_{lm}}{\left|v^i_{lm}\right| \left|u^i_{lm}\right|}. \quad (2)$$

There are three possible cases to consider, however, based on the relative situation of the return position in space and the fence endpoints. In the first case, the angle between $u^i_{lm}$ and $v^i_{lm}$ is larger than 90°. Refer to Fig. 17.

Since the sign of the cosine is determined by the numerator of Eq. (2), the condition for the first case can be rewritten as

$$v^i_{lm} \cdot u^i_{lm} < 0,$$

and $d^i_{lm}$ is defined as

$$d^i_{lm} = \left|v^i_{lm}\right|. \quad (3)$$

In the second case, the sonar return $lm$ has a larger magnitude than the last point of fence element $li$. Refer to Fig. 18.

In this situation, if

$$v^i_{lm} \cdot u^i_{lm} > \left|u^i_{lm}\right|^2,$$

$d^i_{lm}$ would be defined as the distance between the point of sonar return $lm$ and the last point of fence element $li$,

$$d^i_{lm} = \sqrt{\left(x^{s1}_{lm} - x^{\text{fence}}_{l,i+1}\right)^2 + \left(y^{s1}_{lm} - y^{\text{fence}}_{l,i+1}\right)^2}. \quad (4)$$

Finally, in the third case, the sonar return $lm$ has a magnitude between the two points that define fence element $li$, and $d^i_{lm}$ is defined as the distance between the point of sonar return $lm$ and the line segment defined by fence points $x^{\text{fence}}_{li}, y^{\text{fence}}_{li}$ and $x^{\text{fence}}_{l,i+1}, y^{\text{fence}}_{l,i+1}$. This is the case shown in Fig. 16. The distance from the sonar reading to the fence can be calculated as

$$d^i_{lm} = \frac{\left|u^i_{lm} \times v^i_{lm}\right|}{\left|u^i_{lm}\right|}. \quad (5)$$

Fig. 19. Intersection of extended sonar reading and fence.

This distance calculation has to occur for each segment of the fence. In the end, the measure of the distance of the sonar return $lm$ to fence $l$ is

$$d^{\text{fence}}_{lm} = \min\left(d^1_{lm}, d^2_{lm}, \ldots, d^{I_l-1}_{lm}\right). \quad (6)$$

The second comparison is to ascertain on which side of the fence the detected sonar reading is. In order to perform that test, the sonar segment $lm$, defined by points $x^{s0}_{lm}, y^{s0}_{lm}$ and $x^{s1}_{lm}, y^{s1}_{lm}$, is extended until it intersects the extended fence segment $li$, defined by points $x^{\text{fence}}_{li}, y^{\text{fence}}_{li}$ and $x^{\text{fence}}_{l,i+1}, y^{\text{fence}}_{l,i+1}$, at point $x^{\text{inter}}_{li}, y^{\text{inter}}_{li}$. Refer to Fig. 19.

Now, assume that for some $i = K$ the point $x^{\text{inter}}_{lK}, y^{\text{inter}}_{lK}$ is on fence segment $lK$. Then, the following conditions would be met:

$$\min\left(x^{\text{fence}}_{lK}, x^{\text{fence}}_{l,K+1}\right) \le x^{\text{inter}}_{lK} \le \max\left(x^{\text{fence}}_{lK}, x^{\text{fence}}_{l,K+1}\right)$$
$$\min\left(y^{\text{fence}}_{lK}, y^{\text{fence}}_{l,K+1}\right) \le y^{\text{inter}}_{lK} \le \max\left(y^{\text{fence}}_{lK}, y^{\text{fence}}_{l,K+1}\right). \quad (7)$$

This test is performed independently of the distance calculation, since the segment where $d^i_{lm}$ is minimal might not be the same as the one where condition (7) is satisfied. Refer to Fig. 20.

In the figure, note how $d^{\text{fence}}_{lm} = d^{K-1}_{lm}$ but the comparison to determine on which side of the fence the sonar reading falls is performed with segment $lK$. Note that the 'side' test takes into account the point of sonar emission while the distance calculation does not.

For segment $lK$ the distance from the sonar emission point to the point of intersection with the fence is $d^{\text{inter}}_{lK}$, and the difference between the sonar TOF reading $D_{lm}$ and this calculated distance is

$$\Delta D_{lm} = D_{lm} - d^{\text{inter}}_{lK}. \quad (8)$$

If $\Delta D_{lm} < 0$, the sonar reading is inside the fence. When that case arises, the sonar thread checks whether

$$d^{\text{fence}}_{lm} > \text{DIST\_BLOCK\_MAX}. \quad (9)$$

If that condition is met, the disturbance might mean that an object is distorting the sonar pattern. Each time the thread


Fig. 20. Intersection of prolonged sonar reading with fence segment.

Table 1
Statistics of readings with $\Delta D_{lm} < 0$

Segment   Average (%)   Std. dev. (%)
0         10.24         4.40
1         12.47         1.55
2          9.61         4.01
3          0.83         0.80
4          6.70         3.08
5          7.51         2.30
6         12.52         5.47
7          9.91         7.67
8         19.24         3.16
9         27.03         2.49
10         6.76         2.48
11        14.22         3.04
12        34.71         3.92
13        35.48         6.47
14        22.68         4.67
15        22.91         4.91
16         5.46         7.63

detects condition (9), it adds +1 to an internal counter. When a sonar return meets this condition, it is called a blocked reading. There is a counter that keeps track of the left side and one of the right. If either of the counters reaches SAMPLES_STOP counts, the sonar thread assumes that there is an obstacle. At that stage, the system is in a locked state. Note that the possible obstacle might not be blocking the trajectory, since this scheme only detects whether the sonar pattern generated during the tracking episode deviates markedly from the fences calculated during the teaching phase.

Once the system is in a locked state, each unblocked reading will subtract a count from its corresponding side, while each blocked reading adds one. If the counter reaches the value of SAMPLES_RUN, the system unlocks. The system will be in a locked state if either of the counters has a value of SAMPLES_STOP, and will unlock only if both counters have a value of SAMPLES_RUN. The counters are bounded by these two constants and they start with an initialization value of SAMPLES_RESET.

The current setting of these constants is SAMPLES_RUN = −3, SAMPLES_RESET = 0 and SAMPLES_STOP = 2. These can be changed depending on the desired sensitivity of the system, as long as

SAMPLES_RUN < SAMPLES_RESET < SAMPLES_STOP. (10)

What remains to be determined is the tolerance parameter DIST_BLOCK_MAX. This topic will be discussed in the next section. Once the system has the ability to detect whether an obstacle is present or absent, a scheme to take action is needed.
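The locking logic of this section can be summarized in a short sketch: one bounded counter per footrest array, incremented by blocked readings and decremented by unblocked ones. The class name and the exact update policy while the system is unlocked are assumptions drawn from the description above.

```python
SAMPLES_RUN, SAMPLES_RESET, SAMPLES_STOP = -3, 0, 2
DIST_BLOCK_MAX = 230.0    # mm; the tolerance discussed in Section 5

class ObstacleMonitor:
    """Per-side counters that lock the vehicle when blocked readings accumulate."""

    def __init__(self):
        self.counter = {"left": SAMPLES_RESET, "right": SAMPLES_RESET}
        self.locked = False

    def update(self, side, delta_d, d_fence):
        """side is 'left' or 'right'; delta_d is Eq. (8); d_fence is Eq. (6)."""
        # Blocked reading: inside the fence (Delta_D < 0) and farther from the
        # fence than DIST_BLOCK_MAX, Eq. (9).
        blocked = delta_d < 0 and d_fence > DIST_BLOCK_MAX
        step = 1 if blocked else -1
        # Counters stay bounded by SAMPLES_RUN and SAMPLES_STOP.
        self.counter[side] = max(SAMPLES_RUN,
                                 min(SAMPLES_STOP, self.counter[side] + step))
        if any(c >= SAMPLES_STOP for c in self.counter.values()):
            self.locked = True       # obstacle suspected: bring the chair to rest
        elif all(c <= SAMPLES_RUN for c in self.counter.values()):
            self.locked = False      # pattern looks clean again: resume tracking
        return self.locked
```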

5. Sonar testing for fence parameters

In this section, results of testing for the distance tolerance from the fence, with and without obstacles, are presented and discussed. The limitations of having a fixed threshold for DIST_BLOCK_MAX are exposed. Some solutions are proposed with an acknowledgment that, as a work in progress, they could be optimized.

To try to understand how the sonar samples behave during tracking, 15 datasets were produced. These were the result of tracking episodes from pose A to pose B, as defined in Fig. 6, with no additional objects. Then, for each dataset, the percentage of samples that satisfy condition $\Delta D_{lm} < 0$ was calculated per segment. Finally, the average of all the percentages and their standard deviation were calculated by segment. The results are shown in Table 1.
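The per-segment averages and standard deviations of Table 1 amount to a few lines of arithmetic, assuming the percentage of readings with $\Delta D_{lm} < 0$ has already been computed for every dataset and segment:

```python
import numpy as np

def summarize(percents):
    """Per-segment average and standard deviation across datasets (cf. Table 1).

    percents : array-like of shape (n_datasets, n_segments), where entry [d][s]
               is the percentage of readings with Delta_D < 0 in dataset d, segment s.
    """
    a = np.asarray(percents, dtype=float)
    return a.mean(axis=0), a.std(axis=0)

# Example with 15 hypothetical datasets and 17 segments of made-up percentages:
rng = np.random.default_rng(0)
avg, std = summarize(rng.uniform(0, 40, size=(15, 17)))
```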

From Table 1, it is clear that the sample behavior varies greatly. For example, in segment 3, the samples that would be considered inside the fence if DIST_BLOCK_MAX = 0 are less than 9%. But in segments like 13, the samples that comply with $\Delta D_{lm} < 0$ can be as high as 40%. That is the reason the second condition, $d^{\text{fence}}_{lm} >$ DIST_BLOCK_MAX, is needed: to discern which samples with a negative $\Delta D_{lm}$ should still be considered outside the fence. For that purpose, the maximum $d^{\text{fence}}_{lm}$ of all 15 datasets was calculated by segment. The results are shown in Table 2. Take into account that the data shown already comply with the condition $\Delta D_{lm} < 0$.

It is interesting to note that most of the calculated maximum distances fall well below 178 mm. There are two exceptions, however: segments 11 and 13, highlighted in the table. The phenomenon on display is the strong reflection made by the edge in segment 10. That it appears most clearly in segment 11 is because the edge is detected at the boundary of the two tracking segments. Segments 12 and 13 feature bounces off the sharp edge in minor measure, which accounts for their magnitudes of $d^{\text{fence}}_{lm}$ being smaller than in segment 11.

To ascertain whether the values of $d^{\text{fence}}_{lm}$ for each segment vary substantially if an object is introduced, a series of tests was conducted. Diverse objects were placed in several positions along the trajectory; Fig. 21 shows an introduced object in the trajectory of the wheelchair and the sonar pattern detected. The chair was stopped manually in the cases where it was close to


Table 2

Maxima of $d^{\text{fence}}_{lm}$ with $\Delta D_{lm} < 0$ per segment

Fig. 21. Example of a tracking episode with an introduced object.

collision with an object, and the sonar data were recorded to disk. Results are shown in Table 3.

In the table, the first column is simply the dataset index; the second column is the description of the introduced object. The highlights are simply to improve the readability of the table, from one dataset to the next. The can of paint is a cylinder 165 mm in diameter and 197 mm in height; the tripod is a structure with legs of 33 mm diameter; the trashcan can be approximated by an octahedron with a base of 285 × 200 mm; the book has a rectangular base of 180 × 240 mm; the box has base dimensions of 250 × 133 mm; finally, the stool has four legs and can be approximated by an octahedron with a base of 355 × 355 mm. The third column gives the approximate coordinates of the center of the object.

The segments displayed in the fourth column of the table are of interest because they feature significantly larger distances inside the fence than their averaged counterparts in Table 2. Cylindrical and cubic objects greatly disrupt the sonar profile and produce measurements of $d^{\text{fence}}_{lm}$ that are typically larger than 230 mm. Even the legs of a stool or of a tripod change the

Table 3

Maximum $d^{\text{fence}}_{lm}$ readings produced by introduced objects

pattern noticeably. Fig. 22 shows the plot of the ninth segment of dataset 24.

In the figure, the circles represent readings from the left sonar array and the triangles readings from the right array, collected during a tracking episode. The two rectangles are the initial and final positions of the vehicle. Segment 9 of the trajectory is displayed, along with the corresponding left and right fences. The darker rectangle shows the approximate, hand-measured position of the introduced object on the floor. Note how clearly the detected object (a cardboard box) appears, as displayed in Table 3.

Fig. 23 shows the segment 2 trajectory of dataset 26 along with the corresponding left and right fences. Note how clearly and consistently 'something' is disrupting the sonar pattern. That 'something' is a four-legged stool. The stool is displayed as a rectangular outline in the plot; each of the four legs, with


Fig. 22. Object detected in dataset 24, segment 9.

Fig. 23. Object detected in dataset 26, segment 2.

transversal areas of 25 × 25 mm, is represented as a black square in the figure. The sonar is able to detect the obstacle, even if it is not a solid box as in the case of Fig. 22. Using the information of Table 3, the value of DIST_BLOCK_MAX was set at 230 mm. Eleven new trajectories were taught to the system with their corresponding sonar profiles, joining station points 1–4, as defined in Section 4. These paths were tracked in both the presence and absence of obstacles. The results were very similar to the ones shown in Tables 2 and 3. Setting the tolerance at a fixed constant value worked well, since in general, no readings were larger than DIST_BLOCK_MAX in the absence of obstacles, and readings larger than DIST_BLOCK_MAX appeared duly when an obstacle was present.

An exception to this behavior has been found, and it has to do with sharp edges. A small change in the angle of incidence of the emitted wave can create a large change in the position of the sonar return [9,18]. This effect is present in the highlighted cells of Table 2. Note how the maximum readings are very close to the threshold DIST_BLOCK_MAX. What happens is that sometimes during a tracking episode, condition (9) is met several times due to a sharp edge reflection. If the produced

false blocked readings reach SAMPLES_STOP counts, the system will lock. This case is not infrequent, in particular if the chair is facing an edge while turning slowly, or if the vehicle stops in front of such an edge during a direction change. Sharp edges are very common in real home environments, and so a solution is needed.

Making the threshold larger is not a sound option. It would certainly get rid of some of the strong reflections, but it would also make the system lose sensitivity, without guaranteeing that a sharp edge would not still produce false positives. The current solution is to simply disable the troublesome sonar segments. Normally, after the trajectory has been taught, the path is tracked a couple of times. If the vehicle enters a locked state with no changes in the environment, the problematic path segment is disabled. When the fences are disabled in a path segment, the sonar information collected by the transducers is ignored. That is, if an obstacle is present, the vehicle will not be able to detect it and take action. Typically, only one or two segments at most need to be disabled per trajectory.

This is a transitory solution at best. The best solution might be something similar to what was proposed in earlier paragraphs, a variable value of DIST_BLOCK_MAX. This is proposed in the future work section. In the following section, the particular case of crossing the threshold of a door is described.

6. Special case: Crossing a doorway

Doorways are very common in indoor environments, since they serve to connect the diverse rooms in a workspace. They are sometimes quite narrow for a normal-sized wheelchair, and the problem of crossing them safely has demanded special attention from researchers working on autonomous wheelchairs [5,11,2,4]. It is also worth noting that the most widely used method for navigation, artificial potentials, cannot be applied to configurations such as corridors and doorways [5].

Therefore, an illustrative example of crossing a doorway is analyzed in this section. The environment setup is shown in Fig. 24. In the figure, the different polygons represent the hand-measured dimensions of the objects in the workspace. The solid walls are represented by solid black polygons in the diagram. The environment consists of two desks, one bookcase, one wood panel and two doors. One of the doors is open, to let the vehicle through. The door opening for the vehicle to cross is approximately 900 mm, whereas at its widest point the chair measures 610 mm.

A trajectory was taught to the CCPWNS that consisted of crossing the doorway, while capturing the sonar profile. The full teaching sonar dataset, along with the trajectory, is displayed in Fig. 25.

The figure follows the same conventions as Fig. 24, applied to the environment described in this section. The resulting trajectory consists of 13 path segments, numbered from 0 to 12. Note the clustering of sonar readings at the crossing of the doorway. The clutter of structures, which in turn create RCDs and related sonar artifacts, produces this accumulation of


Fig. 24. Doorway environment.

Fig. 25. Full teaching doorway dataset.

returns. An important question is whether the fence approach described in Sections 3–5 is able to negotiate the trajectory without detecting a false positive, that is, an introduced obstacle where there is none. A following question is whether the system is able to detect an introduced obstacle in the described setup.

To address the first question, fifteen tracking episodes of the trajectory shown in Fig. 25, with no introduced obstacles, were executed. A tracking episode was considered successful if the CCPWNS arrived at the final position and orientation without stopping due to a detected obstacle. All fifteen episodes were completed successfully. Fig. 26 shows the tracking data of path segment 6 of one such episode, along with the sonar fences and the sonar profile detected during the maneuver. Note how the sonar data follow closely the fences, even on the strong edges defined by the right door.

To address the second question, a basket with dimensions of 292 × 197 mm was placed in several positions near the door entrance. A run was considered successful if the vehicle stopped before hitting the object. After the removal of the object, the system should resume movement, that is, it should be able to infer that the introduced obstacle has been removed. Five tests were executed, all of them successful.

Fig. 26. Doorway tracking segment 6.

Fig. 27. Doorway tracking segment 6, with obstacle.

Segment 6 of the tracking data, along with the sonar profile and corresponding fences, is shown in Fig. 27.

The figure follows the conventions of Fig. 26 with only the addition of a rectangle that stands for the basket described above.

The disturbance created by the introduction of this object is clearly noticeable in the resulting sonar pattern. Compare the difference in the ultrasound profiles of Figs. 26 and 27. In the case described above, the system was able to detect the introduced object (at path segment 6, as shown in the figure), and, once it was removed, to complete the trajectory successfully.

The system can also detect introduced obstacles that are only partially obstructing the trajectory. Consider the data shown in Fig. 28. The figure follows the conventions of Fig. 27. The same basket as mentioned earlier was placed close to a door, instead of in the middle of the doorway. This is an instance where an obstacle is set close to a rather long RCD, in this case, the flat surface of the opened door. The system was able to detect it in time to stop. But a modification was needed for the parameter DIST_BLOCK_MAX, the maximum distance allowed between a sonar reading and its corresponding fence. A compromise


Fig. 28. Doorway tracking segment 7, with partially blocking obstacle set close to a rather long RCD.

needs to be achieved between a DIST_BLOCK_MAX that is large enough to allow for strong reflections, but is still able to detect the disturbance created by a small object. In the case shown in Fig. 28, the parameter DIST_BLOCK_MAX had to be decreased by 50 mm for segment 7 of the trajectory. If the parameter is decreased further, the CCPWNS starts to detect the door on the right side as an introduced obstacle. This approach of 'fine-tuning' the tolerances should be made automatic, and a preliminary scheme is outlined in Section 7.

Lastly, Fig. 29 shows a case where a partially obstructing object is placed relatively close to a strong reflecting edge. The figure follows the conventions of Fig. 27. In this case, the system was able to detect the partially blocking obstacle relatively close to the edge. For this example, the same reduced sensitivity of Fig. 28 for parameter DIST_BLOCK_MAX in path segment 7 was used. The system was able to stop once the object was introduced, and finish the path once it was removed, but detected many false positives. That is, the sonar reading interpretation was very noisy on the right sensor array, causing the system to stop and start several times. Further testing is needed to fine-tune problematic cases where strong edges are combined with small blocking obstacles.

This illustrative example shows the capability of the CCPWNS to detect obstacles in cluttered, realistic environments, and to complete successfully a path that crosses a narrow door. In the following section, future work and conclusions are discussed.

7. Conclusions and future work

The conditions at the end of Section 4 determine whether the system enters a locked state. In that section, the framework was laid to distinguish whether an object not present at the teaching stage had been introduced into the environment. At the current stage of the CCPWNS, when the system locks, the vehicle stops. When it unlocks, the vehicle resumes its tracking along the path. This naive approach can be quite powerful, and it becomes useful when combined with the ability to reverse a trajectory [10].

Fig. 29. Doorway tracking segment 7, with partially blocking obstacle placed relatively close to a strong reflecting edge.

The theory and tests developed are the groundwork to perfect and develop an eventual strategy for obstacle avoidance. There are several existing systems that have strategies of obstacle avoidance [4,11]. Some of them can even detect moving objects, and plan their trajectories accordingly. The difference, in terms of the parameters of the problem, with the CCPWNS is that these others normally work in big rooms with lots of space to maneuver. For example, the MAid prototype was tested in the central railway station of Ulm and in the exhibition halls of the Hannover Messe '98, the largest industrial world fair [2]. The CCPWNS can move in very cramped environments with little space to maneuver. Consider again the trajectory shown in Fig. 6, path A to B. If an object was placed at any point over the black curve, there would be no alternative route for the vehicle to complete the trajectory, since the walls are simply too close. In a case like that, stopping and retracing the path might be the only option. But maybe, in later versions, the system could compare both approaches, and if the space is large enough, it could try to circumnavigate the blocking object.

Still, the strategy above does an excellent job detecting obstacles. What is interesting about the scheme proposed in this article is that the system does not have a preprogrammed map. Also, it does not need a complicated algorithm to categorize complex RCDs, and the creation of the fences is done offline. Using the simple idea that the sonar patterns are consistent, it achieves a competent level of robustness.

In terms of the possible actions to be taken, there is much room for improvement. For example, in the current setting, the vehicle stops if a disturbance is detected in the sonar pattern. But maybe an obstacle is present that clearly does not interfere with the trajectory. In that case, the program should be able to discern that it can continue, and not simply stop and wait, as it does at present.

Secondly, the current stop-and-start paradigm is useful when the obstacle has independent movement, for example, a person, an animal or an elevator door. Still, another feature that could be added relatively easily is the ability to retrace the path at the request of the user.


Fig. 30. CCPWNS with rider.

Finally, some obstacle circumnavigation routines could be added to the program. When confronted with an obstacle, the sonar thread could decide whether the blocking object is in the way, and if it is, whether there is enough room to go around it. This procedure would be executed while the vehicle is stopped, since an algorithm of this nature is probably quite burdensome to the system. So the vehicle could begin circumnavigation, and if it gets stuck, it could simply reverse the completed portions of the path. In any case, the user would always have the option to abort the circumnavigation routine and reverse the path portion actually taken up to this point of reversal.

In this article, the case has been made for using ultrasound range sensing to generate TOF readings of an environment. A geometric representation was chosen to interpret the data, and the 2D assumption was taken as valid. The RCD ideas originally proposed by Kuc and Siegel were employed to deal with the two main problems of sonar data interpretation: specularity and beam width.

A sonar approach for the CCPWNS was described. This approach takes advantage of the teach–repeat paradigm and the reported fact that the artifacts created by sonar are quite consistent. It was emphasized that the purpose of the sensing is not to create precise maps of the workspace, but to distinguish whether the patterns generated during the teaching and tracking episodes are qualitatively identical. The hardware and software developed to collect the ultrasound data were explained.

The measurements made during the teaching episode of a certain trajectory were used to generate regions of constant depth (RCDs) on an illustrative example. These RCDs were then used to provide boundaries for the tracking data, and to provide guidelines to ascertain whether a blocking object is present.

The method for discerning when a sonar reading might indicate the presence of an obstacle was explained. Test results for the sonar parameters were shown, along with the special case of crossing a doorway. Emphasis was placed on the advantages and limitations of the current approach to sonar obstacle detection. Finally, future work and a list of suggested improvements were given. The system is currently being tested at the Hines VA Hospital in Hines, IL; Fig. 30 shows the CCPWNS with a rider in the spinal cord injury unit of the hospital.

For more information, and video of the system in operation, consult http://www.nd.edu/˜gdelcast/.

Acknowledgments

This work was supported in part by the Office of Research and Development, Rehabilitation Research and Development Service of the Department of Veterans Affairs, the Hines VA Rehabilitation Research & Development Center, the National Council of Science and Technology of Mexico (CONACYT), and the Fulbright–Garcia Robles Foundation.

References

[1] S.P. Tzafestas, Research on autonomous robotic wheelchairs in Europe, IEEE Robotics & Automation Magazine 7 (1) (2001) 4–5.

[2] E. Prassler, J. Sholz, P. Fiorini, A robotic wheelchair for crowded public environments, IEEE Robotics & Automation Magazine 7 (1) (2001) 38–45.

[3] A. Lankenau, T. Rofer, A versatile and safe mobility assistant, IEEE Robotics & Automation Magazine 7 (1) (2001) 29–37.

[4] M. Mazo, Research group of the SIAMO Project, An integral system for assisted mobility, IEEE Robotics & Automation Magazine 7 (1) (2001) 46–56.

[5] G. Bourhis, O. Horn, O. Habert, O. Pruski, An autonomous vehicle for people with motor disabilities, IEEE Robotics & Automation Magazine 7 (1) (2001) 20–28.

[6] S.B. Skaar, J.-D. Yoder, Extending Teach–Repeat to Nonholonomic Robots, Structronic Systems: Smart Structures, Devices & Systems, Part II, in: Series on Stability, Vibration and Control of Systems, Series B, vol. 4, pp. 316–342.

[7] E. Baumgartner, An Autonomous Vision-Based Mobile Robot, Ph.D. Dissertation, University of Notre Dame, 1992.

[8] J.-D. Yoder, Advanced topics for the navigation of an automatically-guided wheelchair system, Ph.D. Dissertation, University of Notre Dame, 1995.

[9] J.J. Leonard, H.F. Durrant-Whyte, Directed Sonar Sensing for Mobile Robot Navigation, Kluwer Academic Publishers, 1992.

[10] B.P. Reichenberger, An autonomous wheelchair for clinical testing, Master’s Thesis, University of Notre Dame, 2000.

[11] Th. Rofer, A. Lankenau, Architecture and applications of the Bremen autonomous wheelchair, in: Proc. Fourth Joint Conference on Information Systems, vol. 1, Association for Intelligent Machinery, 1998, pp. 365–368.

[12] R. Kuc, M.W. Siegel, Physically based simulation model for acoustic sensor robot navigation, IEEE Transactions on Pattern Analysis and Machine Intelligence (1987) 766–778.

[13] X. Lebegue, J.K. Aggarwal, Generation of architectural CAD models using a mobile robot, in: Proceedings IEEE International Conference on Robotics and Automation, 1994, pp. 711–717.

[14] Z. Zhang, O.D. Faugeras, A 3D model builder with a mobile robot, International Journal of Robotics (1992) 269–285.

[15] Polaroid Corporation Commercial Battery Division, Ultrasonic Ranging System, Polaroid Corporation, Cambridge, MA, USA, 1984.

[16] M.D. Adams, Sensor Modelling, Design and Data Processing for Autonomous Navigation, in: World Scientific Series in Robotics and Intelligent Systems, vol. 13, pp. 27–39.

[17] R. Kuc, V.B. Viard, A physically based navigation strategy for sonar-guided vehicles, International Journal of Robotics Research 10 (2) (1991) 75–87.

[18] R. Kuc, M.W. Siegel, Physically based simulation model for acoustic sensor robot navigation, IEEE Transactions on Pattern Analysis and Machine Intelligence (1987) 766–778.

[19] O. Wijk, H.I. Christensen, Triangulation-based fusion of sonar data with application in robot pose tracking, IEEE Transactions on Robotics and Automation 16 (6) (2000) 740–752.


[20] L. Kleeman, R. Kuc, Mobile robot sonar for target localization and classification, International Journal of Robotics Research 14 (4) (1995) 295–318.

[21] G. Del Castillo, S.B. Skaar, L. Fehr, Extending teach and repeat to pivoting wheelchairs, Journal of Systemics, Cybernetics and Informatics 1 (1) (2004).

[22] A. Elfes, Sonar-based real-world mapping and navigation, IEEE Journal of Robotics and Automation (1987) 249–265.

[23] H. Moravec, Sensor fusion in certainty grids for mobile robots, in: Sensor Devices and Systems for Robotics, Springer-Verlag, 1989, pp. 253–276.

Guillermo Del Castillo was born in Grenoble, France in August 1972 and is of Mexican nationality. He received his M.S. and Ph.D. degrees in Mechanical Engineering from the University of Notre Dame, USA, in 2003 and 2004, respectively.

The title of his dissertation was “Autonomous, Vision-Based, Pivoting Wheelchair With Obstacle Detection Capability”, work done with the Computer Controlled Power Wheelchair Navigation System (CCPWNS), an autonomous wheelchair prototype developed at the Dexterity, Vision and Control Laboratory at Notre Dame in collaboration with the Hines VA Hospital in Chicago. The main purpose of the project was to give mobile autonomy to severely disabled patients. The work resulted in US Patent 6,842,692, Computer-controlled Power Wheelchair Navigation System (Inventors: Fehr, Linda; Skaar, Steven B.; Del Castillo, Guillermo), granted on January 11, 2005 by the US Patent & Trademark Office.

His work has been showcased in the Chicago Sun-Times (“A step forward for the wheelchair-bound”, Technology Section, March 26, 2003), Vision Systems Design Magazine (“Ultrasound/vision help handicapped”, Technology Trends, May 2004), The Wall Street Journal (“The Robot That Does A Simple Job Very Well May Be Wave of Future”, Portal Section, June 28, 2004) and The New York Times Magazine (“Dumb Robots Are Better”, December 12, 2004).

Recently, Guillermo Del Castillo joined the Foundation for Research on Information Technologies in Society (IT’IS), a section of the Swiss Federal Institute of Technology (ETH). He is currently working as a postdoctoral researcher in computational electrodynamics, and living in Zurich, Switzerland.

Steven Skaar was born in Syracuse, New York in June 1953. He received his A.B. degree from the College of Arts and Sciences of Cornell University, and M.S. and Ph.D. degrees from the Department of Engineering Science and Mechanics, Virginia Polytechnic Institute and State University, USA, in 1978 and 1982, respectively.

Publications: book: Steven B. Skaar and Carl F. Ruoff (editors): Teleoperation and Robotics in Space, a volume in the AIAA Progress in Aeronautics and Astronautics Series, AIAA, Washington, D.C., September 1994.

Other publications: Skaar, S.B., “Vision-Based Robotics Using Estimation”, a multimedia monograph of ONR-sponsored research, WWW/Mosaic system (on the internet), URL: http://www.nd.edu/NDInfo/Research/sskaar/Home.html

Skaar, S.B. and Gonzalez-Galvan, E., “Versatile and Precise Manipulation Using Vision”, chapter in the volume Teleoperation and Robotics in Space, AIAA, Washington, D.C., pp. 241–279, 1994.

Skaar, S.B. and Yoder, J.D., “Extending Teach–Repeat to Nonholonomic Robots”, chapter in Smart Structures, Devices and Systems, Prentice Hall (to appear).

Patents: L. Fehr, S.B. Skaar, G. Del Castillo, “Computer Controlled Power Wheelchair Navigation System”, US Patent no. 6,842,692, January 11, 2005, US Patent Office.

M. Seelinger, J.D. Yoder, and S.B. Skaar, “Mobile Camera Space Manipulation”, US Patent no. 6,194,860, Feb. 27, 2001, US Patent Office.

S.B. Skaar, E. Gonzalez-Galvan, M. Robinson, M. Seelinger, “Precise and robust vision-based robot control relative to an arbitrary surface”, US Patent #6,304,050.

US Patent #4,833,383, “Camera-Space Manipulation”, issued 5/23/89.

US Patent #5,300,869, “Nonholonomic Camera-Space Manipulation”, issued 4/5/94.

Currently, Steven Skaar is part of the Department of Aerospace and Mechanical Engineering of the University of Notre Dame, USA, as Researcher and Professor.

Antonio Cardenas received the B.S. degree from the University of San Luis Potosi, Mexico, and a master’s degree in computer science from the Monterrey Institute of Technology, Cuernavaca campus, Mexico. He received his M.S. and Ph.D. degrees in Mechanical Engineering from the University of Notre Dame, USA, in 2001 and 2003, respectively. He spent three years between his master’s and Ph.D. studies working for the Institute of Electrical Research in Cuernavaca, and for the University of San Luis Potosi, Mexico. Recently, Antonio Cardenas joined the CIEP of the University of San Luis Potosi, Mexico, as researcher and professor. His current research focuses on vision-guided strategies to control hybrid holonomic/nonholonomic robotic systems based on estimation and navigation. Other research interests include numerical simulation.

Linda Fehr holds a B.S. degree in Computer Science from Lewis University, Lockport, IL, and an M.S. in Electrical Engineering from the University of Illinois Chicago. She has spent the last fifteen years in the Rehabilitation Research and Development Service of the U.S. Department of Veterans Affairs, working primarily on interventions for persons with mobility-limiting conditions.