
Supporting Wilderness Search and Rescue using a Camera-Equipped Mini UAV

Michael A. Goodrich, Bryan S. Morse, Damon Gerhardt, and Joseph L. Cooper
Brigham Young University, Provo, Utah, 84602

Morgan Quigley
Stanford University, Stanford, California, 94305

Julie A. Adams and Curtis Humphrey
Vanderbilt University, Nashville, Tennessee, 37240

Received 18 September 2006; accepted 10 March 2007

Wilderness Search and Rescue (WiSAR) entails searching over large regions in often rugged, remote areas. Because of the large regions and potentially limited mobility of ground searchers, WiSAR is an ideal application for using small (human-packable) unmanned aerial vehicles (UAVs) to provide aerial imagery of the search region. This paper presents a brief analysis of the WiSAR problem with emphasis on practical aspects of visual-based aerial search. As part of this analysis, we present and analyze a generalized contour search algorithm, and relate this search to existing coverage searches. Extending beyond laboratory analysis, lessons from field trials with search and rescue personnel indicated the immediate need to improve two aspects of UAV-enabled search: how video information is presented to searchers and how UAV technology is integrated into existing WiSAR teams. In response to the first need, three computer vision algorithms for improving video display presentation are compared; results indicate that constructing temporally localized image mosaics is more useful than stabilizing video imagery. In response to the second need, a goal-directed task analysis of the WiSAR domain was conducted and combined with field observations to identify operational paradigms and field tactics for coordinating the UAV operator, the payload operator, the mission manager, and ground searchers. © 2008 Wiley Periodicals, Inc.

FIELD REPORT


Journal of Field Robotics 25(1), 89–110 (2008) © 2008 Wiley Periodicals, Inc. Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/rob.20226


1. INTRODUCTION

Wilderness search and rescue (WiSAR) operations include finding and providing assistance to humans who are lost or injured in mountains, deserts, lakes, rivers, or other remote settings. WiSAR is a challenging task that requires individuals with specialized training. These searches, which consume thousands of man-hours and hundreds of thousands of dollars per year in Utah alone (Utah County, 2007), are often very slow because of the large distances and challenging terrain that must be searched. Moreover, search timeliness is critical; for every hour that passes, the search radius must increase by approximately 3 km (assuming a nominal 3 km/h walking pace for the average missing person), and the probability of finding and successfully aiding the victim decreases.

The goal of this work is to develop a camera-equipped mini unmanned aerial vehicle (UAV) that can be used by WiSAR personnel to improve both the probability that missing persons will be found and the speed with which they are found. (Mini UAVs have wingspans in the 2–8 ft range; unless otherwise stated, this paper uses the term "UAV" to mean mini-UAV.) We begin by presenting a brief analysis of the WiSAR problem with an emphasis on practical aspects of search. A key contribution of this analysis is the generalized contour search, which includes the spiral and "lawnmower" searches as special cases. Extending this analysis with lessons from a series of field tests, two important areas were identified that need to be addressed before a UAV can be deployed in practical searches: improved presentation of video information and integration into the WiSAR process. The remainder of the paper addresses these two needs. Throughout the paper, we rely on observations from search trials, input from volunteers and subject matter experts from Utah County Search and Rescue, and experiments with human participants.

2. RELATED LITERATURE AND PREVIOUS WORK

There is a great deal of current research dealing with the human factors of semi-autonomous UAVs (HFUAV, 2004; Cooke, Pringle, Pederson & Connor, 2006). Toward applying human factors analysis to UAV-assisted WiSAR, this paper uses results from a goal-directed task analysis (Endsley, Bolte & Jones, 2003). Cognitive work analysis is a complementary analysis approach that yields different types of information (Vicente, 1999; Cummings, 2003); a cognitive work analysis was also conducted for this application, and the results from this analysis are available in a technical report (Goodrich et al., 2007b).

A key human-factors consideration when using UAVs in WiSAR is the number of humans required to operate the vehicle. Typically, a UAV engaged in a search task requires either two operators or a single operator to fill two roles: a pilot, who "flies" the UAV, and a sensor operator, who interprets the imagery and other sensors (Tao, Tharp, Zhang & Tai, 1999). Lessons from ground robots suggest that it is sometimes useful to include a third person to monitor the behavior of the pilot and sensor operators, to protect these operators, and to facilitate greater situation awareness (Burke & Murphy, 2004; Casper & Murphy, 2003; Drury, Scholtz & Yanco, 2003). Important work has analyzed how many unmanned platforms a single human can manage (Olsen Jr. & Wood, 2004; Hancock, Mouloua, Gilson, Szalma & Oron-Gilad, 2006; Cummings, 2003). Although no certain conclusion comes from this work, it is apparent that the span of human control is limited: for example, it would be difficult to monitor information-rich video streams from multiple UAVs at once, though it is possible to coordinate multiple UAVs a la air traffic control (Miller, Funk, Dorneich & Whitlow, 2002; Mitchell & Cummings, 2005). Although an important issue, using multiple UAVs to perform a WiSAR task is beyond the scope of this paper.

We explore how autonomous information acquisition and enhanced information presentation can potentially simplify the pilot and sensor operator roles. The goal is to support fielded missions in the spirit of Murphy's work (Burke & Murphy, 2004; Casper & Murphy, 2003), but to focus on different hardware and operator interface designs in an effort to complement and extend existing designs. Higher levels of autonomy, which can help reduce the number of humans required to perform a task, include path-generation and path-following algorithms; lower levels include attitude and altitude stabilization (Quigley, Goodrich & Beard, 2004).

In the WiSAR domain, literature related to aerial search is particularly relevant (Koopman, 1980; Bourgault, Furukawa & Durrant-Whyte, 2003). Recent work includes the evaluation of three heuristic algorithms for searching an environment characterized by a probabilistic description of the person's likely location (Hansen, McLain & Goodrich, 2007). Additional literature includes operator interface work for both UAVs and traditional aviation displays (Calhoun, Draper, Abernathy, Patzek & Delgado, 2003; Prinzel, Glaab, Kramer & Arthur, 2004; Alexander & Wickens, 2005; Drury, Richer, Rackliffe & Goodrich, 2006; Wickens, Olmos, Chud & Davenport, 1995). The exact type of interaction between a human and onboard autonomy varies widely across UAV platforms. At one extreme, the Predator UAV, operated by the United States Air Force, essentially recreates a traditional cockpit inside a ground-based control station, complete with stick-and-rudder controls. At the other extreme are architectures employed by research UAVs that follow specific, preprogrammed flight paths for such applications as the precise collection of atmospheric composition data (Goetzendorf-Grabowski, Frydrychewicz, Goraj & Suchodolski, 2006). The interactions represented by these two extremes typify the extreme points of several adjustable autonomy scales (Sheridan & Verplank, 1978; Kaber & Endsley, 2004; Parasuraman, Sheridan & Wickens, 2000). The presented work uses altitude, attitude, and direction control algorithms, as well as the ability to autonomously travel to a series of waypoints. Thus, this work lies between the extremes of teleoperation and supervisory control.

The presented work is limited to fixed-wing UAVs. Rotorcraft UAVs provide the ability to hover and perform vertical take-off and landing, features that have made them attractive in many search domains (Whalley et al., 2003; Murphy, Stover, Pratt & Griffin, 2006). State-of-the-art fixed-wing UAVs allow longer flight times for a given UAV mass, largely because fixed-wing UAVs are more efficient. Many fixed-wing UAVs compensate for the inability to hover by using a gimballed camera that can stay focused on a fixed point on the ground even as the UAV circles. This paper employs a fixed-wing UAV because it is light, affordable, and can stay aloft long enough to cover reasonable search distances.

3. PRACTICAL ASPECTS OF VISUAL-BASED AERIAL SEARCH: FRAMING AND ANALYSIS

This section begins by describing the UAV platform and autonomy used in field trials. We then frame the visual-based aerial search problem from a Bayesian perspective and identify the two obligations of effective search: coverage and detection. Constraints on detectability translate into practical constraints on the height above ground at which the UAV flies. Considerations of coverage lead to a generalized contour search, which includes spiral and lawnmower search patterns as special cases. Lessons from field trials are then presented, indicating two capabilities that were not identified in the analysis: (a) the need to effectively present video information, and (b) the need to coordinate the UAV with ground searchers.

3.1. The Platform

The experimental UAVs used in this work are small and light, with most having wingspans of approximately 42–50 in. and flying weights of approximately 2 lbs (see Figure 1(a)). The airframes are derived from flying-wing designs and are propelled by standard electric motors powered by lithium batteries. It was concluded from preliminary discussions with Utah County Search and Rescue that at least 90 min of flight time was required for a reasonable search, so the BYU MAGICC lab created a custom airframe capable of staying aloft for up to 120 min while supporting an avionics sensor suite, a gimballed camera, and an autopilot.

The standard aircraft sensor suite includes three-axis rate gyroscopes, three-axis accelerometers, static and differential barometric pressure sensors, a global positioning system module, and a video camera on a gimballed mount. The UAV uses a 900 MHz radio transceiver for data communication and an analog 2.4 GHz transmitter for video downlink. The autopilot is built on a small microprocessor and is described in Beard et al., 2005. We adopt the hierarchical control system described there in order to reduce the risks associated with autonomy while still taking advantage of autonomy's benefits. The UAVs are equipped with autopilot algorithms for roll and pitch stabilization, attitude stabilization, altitude control, and waypoint following.

3.2. A Bayesian Framing: Coverage and Detection

Search is the process of removing uncertainty regarding the probable location of a missing person. Uncertainty is removed by identifying such things as the point last seen (PLS) and the direction of travel, by finding signs that a missing person has recently been in an area, or by "covering" an area without finding a sign. Let s denote a cell state in a discretized version of the earth's surface in an area of interest, and let S denote the set of all such states. The search outcome assigns each state to one of the following sets: a sign of the missing person, no sign (nominal), or unexplored. Formally, the classification process, c, is a partitioning of the state space, c : S → {sign, nominal, unexplored}. Although denoted simply as sign, a sign includes labeling information such as a color, time stamp, etc. The classification map, m^c, is the set of classified states: m^c = {c(s) : s ∈ S}.

The classification map is constructed by taking environmental observations, o, and using these observations to construct m^c. The map is a representation of the entire search region, but observations are localized to what is visible by the UAV camera, which is controlled by the pose of the UAV and its camera. We represent this pose using the state variable x_t. Although automated target recognition technologies exist, this analysis is limited to observations made by a human from the UAV's video stream. Such observations can include different types of information at different resolution levels:

• Location: "over there," "something interesting a few frames ago," "by the trail," GPS location (from triangulation), etc.

• Label: color, object name, brightness, "something unusual," etc.

We can represent the relationship between the classification map at time t, a series of observations, o_{t:1} = o_t, o_{t-1}, ..., o_1, and the UAV/camera poses, x_{t:1} = x_t, x_{t-1}, ..., x_1, as p(m_t^c | o_{t:1}; x_{t:1}). Given that UAVs operate outdoors and are equipped with a sensor suite, we can assume that GPS coordinates and pose information are available. In practice, this is not entirely perfect, but it is good enough for this analysis.

The objective of UAV-enabled search is to accumulate observations in order to make the posterior estimate of the classification map as accurate as possible. In terms of creating the posterior classification map given the observations, this problem is known as "mapping with known poses," as described in Thrun, Burgard & Fox, 2005. When one uses Bayes law and makes standard independence assumptions (Thrun et al., 2005; Bourgault et al., 2003; Koopman, 1980), the posterior probability becomes

p(m_t^c | o_{t:1}; x_{t:1}) ∝ p(o_t | m_t^c; x_t) p(m_t^c | o_{t-1:1}; x_{t:1}).   (1)

Therefore, the posterior probability of the classification map given the observations is proportional to the product of the likelihood of making the observation given the map and the probability of a predicted map given previous observations and poses.
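To make Eq. (1) concrete, here is a minimal sketch (our illustration, not the authors' implementation) of the update applied to a discretized grid whose cells hold the probability that a sign is present; the detector rates are assumed values:

```python
import numpy as np

# Sketch of the Eq. (1) update on a discretized map (illustrative only).
# Each cell holds P(sign present); cells outside the camera footprint are untouched.
P_DETECT = 0.9        # assumed P(observe sign | sign present) -- hypothetical value
P_FALSE_ALARM = 0.05  # assumed P(observe sign | no sign)      -- hypothetical value

def bayes_update(prior, observed_sign, footprint_mask):
    """prior: HxW array of P(sign). footprint_mask: bool HxW, True where camera sees."""
    like_sign = P_DETECT if observed_sign else (1.0 - P_DETECT)
    like_empty = P_FALSE_ALARM if observed_sign else (1.0 - P_FALSE_ALARM)
    posterior = prior.copy()
    num = like_sign * prior[footprint_mask]
    den = num + like_empty * (1.0 - prior[footprint_mask])
    posterior[footprint_mask] = num / den
    return posterior

prior = np.full((100, 100), 0.01)     # nearly uniform prior over the grid
mask = np.zeros((100, 100), dtype=bool)
mask[40:48, 40:52] = True             # cells under the current camera footprint
posterior = bayes_update(prior, observed_sign=False, footprint_mask=mask)
```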

Figure 1. Aerial photograph taken of a mock missing person to show the maximal sweep width of the UAV with astandard NTSC video camera.

The likelihood represents the probability that a sign will be detected if it is present (the likelihood encodes the observation process and varies from application to application; examples of specific likelihoods are given in Koopman, 1980, and Bourgault et al., 2003), and the predicted map represents the probability that we will cover the correct area of the world. Thus, the Bayesian framing of the problem allows us to identify the two key elements of effective search: detection and coverage.

There is a tension between these two objectives. This is illustrated by observing, for example, that it is possible to cover ground completely but so quickly that no signs are actually detected by people watching the video. Conversely, it is possible to ensure that almost every sign sensed by the camera is detected but move so slowly that coverage takes an unacceptable amount of time. Finding a workable tradeoff between coverage and detection requires careful analysis and field practice to discover ways to minimize the expected amount of time required to find the missing person.

The remainder of this section presents results that relate to finding a workable tradeoff. Practically speaking, detection constrains the UAV's pose so that observations accurately reflect the presence or absence of signs; the next subsection sets a practical ceiling on the height above ground to ensure an acceptable likelihood of correct detection. Coverage implies that x_t should be chosen to maximize the benefit to the posterior probability; the generalized contour search is an approach to performing efficient search that includes two important search tactics as special cases.

3.3. Detection: Practical Constraints on Height Above Ground

The previous subsection identified the detection probability, p(o_t | m_t^c), as a key factor in search effectiveness. This subsection translates this probability into a practical constraint on how high the UAV can fly. The UAVs for this work use NTSC video cameras that capture approximately 200,000 pixels (NTSC is an analog television standard adopted by the National Television System Committee). Figure 1(b) shows a missing person in a video frame captured by a micro video camera on a UAV. The image was digitized at 640×480 resolution, and the missing person is 40 pixels tall.

The portion of the image containing the missing person is magnified and shown in Figure 1(c). As illustrated, the human form is on the border of recognition; the color of the clothing is primarily what allows the form to be recognized, and if fewer pixels are devoted to the missing person or the clothing color is less detectable, recognition can become extremely difficult. If Figure 1(c) is taken as the fewest pixels that enable recognition of a human form, the minimal resolution of the image must be 5 cm per pixel. This means that each image can cover an area at most 32 m wide by 24 m tall.

We can translate information from this image into a practical constraint for the particular camera and UAV used in the field trials. Given the camera's field of view, the constraint on detecting a human shape implies that the UAV should fly no higher than 60 m above the ground. Flying lower is unsafe, given wind currents. If the requirement to detect a human form is abandoned, then flying higher is possible, but the UAV must still be low enough to detect unusual colors from clothing, man-made objects, etc. Flight tests suggest that colors of human-sized objects are difficult to perceive if the UAV flies higher than 100 m. Thus, the operational maximum height above ground for this work is between 60 m and 100 m.

We can translate the probable size of objects into design guidelines for other UAVs and other cameras. Observe that the practical height above ground from the previous paragraph is insufficient for detecting small signs of the missing person, such as discarded food wrappers. This limitation can be overcome, to some extent, by using a camera with a narrower field of view. Indeed, for a fixed pixel resolution on the ground, the allowable height above ground grows as 1/tan(θ/2), where θ is the camera's field of view. However, using a narrower field of view means that (a) less context is available for interpreting the sign, (b) the number of frames containing a sign is smaller for a given UAV ground speed, (c) camera jitter is amplified, and (d) the spacing between search paths must be smaller, which means that the search will proceed more slowly. Fortunately, the possibility of building temporally localized image mosaics helps address the first three deficiencies, as described in Section 4.
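As a rough check on these numbers, the maximum height for a required ground resolution follows directly from the field-of-view geometry (a sketch under assumed values, not the authors' code; the 30° horizontal FOV is our assumption):

```python
import math

def max_height_m(ground_width_m, fov_deg):
    """Height at which a camera with horizontal FOV `fov_deg` images a swath
    `ground_width_m` wide when pointed straight down (flat-earth assumption)."""
    return ground_width_m / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# A 640-pixel-wide NTSC frame at 5 cm/pixel gives a 32 m swath.
print(max_height_m(32.0, 30.0))  # ~59.7 m, consistent with the ~60 m ceiling above
```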

3.4. Generalized Contour Searches

Optimally covering an area implies two objectives: completeness and efficiency (Koopman, 1980). Completeness means that the search algorithm eventually covers the entire search area. Efficiency means that the algorithm quickly obtains an image containing a sign of the missing person's location; efficiency reflects the utility of finding signs quickly because the probability of rescue decreases over time. The most efficient complete search is one that obeys the operational principle that "information about the classification map accumulates most quickly when the camera is aimed at the area where new signs are most likely to be found."

The key enabling technology that allows us to use this operational principle in practice is the ability to aim a gimballed camera at a target point on the ground while orbiting that point. Fortunately, there are a number of references that discuss how to make a UAV orbit a point (Nelson, Barber, McLain & Beard, 2006). The UAV in this paper uses a simple orbiting routine built from the Hopf bifurcation vector field (Marsden & McCracken, 1976; Quigley, Barber, Griffiths & Goodrich, 2006). By intelligently creating a queue of target points, it is possible to have the camera progressively follow almost any possible ground path (except those where terrain rises faster than the UAV can climb). An effective search is obtained by creating a queue of target points that follow the contours of the likely location of the missing person or relevant signs.
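The orbit controller itself is described in the cited references; the following sketch (our own illustration with made-up gains, not the paper's routine) shows the general shape of a Hopf-style vector field that attracts the vehicle onto a circular orbit of radius R about a target point:

```python
import numpy as np

def orbit_velocity(pos, center, radius, k_radial=0.01, k_tangent=1.0):
    """Desired ground-velocity direction for orbiting `center` at `radius`.
    The radial term pushes toward the limit cycle (Hopf-style field);
    the tangential term circulates around it. Gains are illustrative."""
    p = np.asarray(pos, dtype=float) - np.asarray(center, dtype=float)
    r2 = p @ p
    radial = k_radial * (radius**2 - r2) * p        # outward if inside, inward if outside
    tangent = k_tangent * np.array([-p[1], p[0]])   # counterclockwise circulation
    v = radial + tangent
    return v / np.linalg.norm(v)                    # commanded heading (unit vector)

print(orbit_velocity([120.0, 0.0], [0.0, 0.0], 100.0))  # points inward and around
```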

When the distribution of signs is stationary, unimodal, and symmetric, continuously following contours of the missing person distribution is optimal, meaning that the path is complete and maximally efficient. Although these assumptions are very strict, the analysis of missing person behavior suggests that, in the absence of significant terrain features, the distribution of where a missing person is located is a probability density that peaks at the PLS and appears to monotonically decrease as one moves away from the PLS (Setnicka, 1980; Syrotuck, 2000). One reason for this distribution is an element of randomized movement produced by the psychology of being lost (Hill, 1998), and the fact that some missing persons stop moving after a period of time. Moreover, signs of where the missing person has been, such as discarded clothing, will probably not move, so it may be possible to assume that these clues are stationary.

Under these conditions, the optimal (complete and maximally efficient) search is a spiral that begins at the center of the distribution and spirals outward. This pattern creates a continuous path that follows the contours of the distribution of signs. As the camera follows this pattern, it gathers observations from the region with the highest probability of containing an unobserved sign. Previous work (Quigley, Goodrich, Griffiths, Eldredge & Beard, 2005) presented a series of equations that guide the camera footprint in an appropriate spiral,

r ← r + λ/r,
θ ← θ + γ/r,   (2)

where (r, θ) is the camera target point in polar coordinates relative to the center of the search area, (x0, y0). The parameters λ and γ make it possible to vary the space between the computed target points (the search speed) and the space between the spiral arms to compensate for camera footprint, and thus follow adjacent contours.
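A minimal sketch of Eq. (2) generating spiral target points (our illustration; the parameter values are placeholders chosen only to produce a plausible pattern):

```python
import math

def spiral_targets(x0, y0, lam=40.0, gam=15.0, r0=10.0, r_max=250.0):
    """Generate camera target points along the Eq. (2) spiral.
    lam controls spacing between spiral arms; gam controls spacing along the path."""
    r, theta = r0, 0.0
    points = []
    while r <= r_max:
        points.append((x0 + r * math.cos(theta), y0 + r * math.sin(theta)))
        r += lam / r          # step outward more slowly as the spiral grows
        theta += gam / r      # roughly constant arc length between targets
    return points

waypoints = spiral_targets(0.0, 0.0)
print(len(waypoints), waypoints[:3])
```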

We can formally encode the practice of creating a continuous path that follows the contours of a probability distribution, and then use the resulting algorithm to follow the contours of other distributions or the contours of a terrain map. The resulting algorithm yields a complete and efficient search for two important distributions: for a unimodal and symmetric distribution the algorithm approximates the optimal spiral, and for a uniform distribution over a rectangular area the algorithm gives the optimal lawnmower search. The formal algorithm is as follows (a sketch in code follows the list):

1. Initialize search parameters.
   (a) Specify a starting point for the search and an initial direction of travel.
   (b) Specify the maximum path length before switching to a new contour. This allows the algorithm to stay within a specified search area.
   (c) Specify a desired height above ground.
   (d) Specify whether the UAV will ascend or descend contours.

2. Sample points around the current end of the path and close to the specified direction of travel. Select the neighbor with surface height approximately equal to the starting point. Repeat until the length of the resulting discrete path is within a tolerance of the path length, or until a point on the path is within a tolerance of the beginning point (which occurs when the target points follow a circuit to the start point).

3. Interpolate the discrete points (we currently use bicubic interpolation) and resample this path at uniform distances. Compute a center of the UAV orbit for the first point on the path. As the UAV orbits this point, command new orbit centers and camera focal points that follow the path.

4. When the camera reaches the last point on the path, compute a new path by ascending or descending the surface.
   (a) Compute the distance from the terminating point by projecting the distance from the camera to the ground, computing the size of the camera footprint, and choosing a new starting point that yields a new camera footprint that slightly overlaps the previous footprint.
   (b) Use Step 2 to compute a new contour using the new starting point.
   (c) If the maximum distance between the previous contour path and the newly computed contour path leaves a gap between the camera footprints for the two paths, move the starting point of the new contour path closer to the ending point of the previous contour path and repeat.
   (d) Change the direction of travel and repeat Step 2.

5. Repeat until the entire region is searched, or until the UAV's batteries need to be recharged.
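Here is a compressed sketch of Step 2's contour-following core (our reconstruction, assuming a height-field function `terrain(x, y)` and ignoring the interpolation and footprint-overlap checks of Steps 3 and 4):

```python
import math

def follow_contour(terrain, start, heading_deg, step=10.0, max_len=2000.0, tol=2.0):
    """Greedy contour follower: from `start`, repeatedly pick the candidate point
    whose terrain height best matches the contour height, sampling candidates
    around the current heading. `terrain(x, y)` is an assumed height field."""
    x, y = start
    target_h = terrain(x, y)
    heading = math.radians(heading_deg)
    path, length = [(x, y)], 0.0
    while length < max_len:
        candidates = []
        for dpsi in (-40, -20, 0, 20, 40):           # sample around current heading
            psi = heading + math.radians(dpsi)
            cx, cy = x + step * math.cos(psi), y + step * math.sin(psi)
            candidates.append((abs(terrain(cx, cy) - target_h), cx, cy, psi))
        _, x, y, heading = min(candidates)           # best height match wins
        path.append((x, y))
        length += step
        if length > 3 * step and math.hypot(x - start[0], y - start[1]) < tol:
            break                                    # closed a circuit back to start
    return path

# Example on a synthetic cone-shaped hill centered at the origin:
hill = lambda x, y: 500.0 - 0.1 * math.hypot(x, y)
print(len(follow_contour(hill, (200.0, 0.0), 90.0)))
```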

For a unimodal distribution, a complete search can be obtained by specifying an infinite maximum path length. The resulting path flies concentric circuits over the surface. For a unimodal, symmetric distribution, the resulting circuits are concentric circles that follow the contours of the distribution. To illustrate that such paths are usable in practice, Figure 2(a) shows the optimal spiral path (unimodal symmetric distribution) flown in calm wind conditions.

If the distribution is uniform across a rectangular area and the direction of travel is parallel to a side of the rectangle, the contour search yields the optimal lawnmower search pattern. This is important since it implies that the algorithm yields optimal behavior in two known cases.

Figure 2. (a) An example of a spiral search flown in calm wind conditions by a UAV. The motion of the search point is shown by the heavy (red) line. The generated UAV flight path is shown as the thin (blue) line. The x and y axis units are meters from the launch point. Variants of this algorithm have been flown in over ten flight tests. (b) Pixel density as a function of location for a search pattern consisting of concentric circles. The pixel density is projected onto a map of the flight area.


Several practical limitations affect the usefulness of the contour search. The first limitation arises when the UAV's altitude must change to maintain a consistent height above ground. Since the camera footprint is a projection onto uneven terrain from a possibly oblique angle, not every area is covered with equal pixel density. This issue is aggravated by managing the height above ground, which can cause the camera to pitch up when the UAV climbs and pitch down when the UAV descends. This effect is illustrated in Figure 2(b), which displays the number of pixels per area as intensities on a map from a flight test; higher intensity represents higher pixel density. (In this flight, the UAV flew 70 m above the ground at 13 m/s with a camera field of view of 40° by 30°; it started at an orbit radius of 50 m and flew outward in increments of 25 m to a maximum radius of 250 m.) Note the poor coverage in the middle, caused by only a brief dwell time at the point last seen. Also note that regions near the border of the search are covered at different pixel densities; this is caused both by the UAV pitching and rolling to maintain a consistent height above the ground and by the way the UAV switches from one circular contour to another.

Recall that the lawnmower and spiral searches emerged from the generalized contour search when the UAV followed contours of a uniform and a symmetric unimodal probability distribution, respectively. It is also possible to use the generalized contour algorithm to follow contours of the terrain. Indeed, for very steep hills or rugged terrain, following terrain contours may be more efficient than trying to follow the contours of a probability distribution.

Searching in very steep terrain can be done by aiming the camera out the side of the UAV while the UAV follows a contour of the terrain. This causes the camera footprint to obliquely cover a swath along the side of a hill. Figure 3 shows a series of camera aim points selected using the algorithm. It should be noted that to increase the amount of time that the camera dwells on a particular point, it may be necessary to have the UAV orbit waypoints along the contour path while the camera persistently records video along the path of aim points. This causes the camera to cyclically approach and recede from the aim point, meaning that resolution changes over time. Currently, this algorithm has been used only in simulation, but future plans include testing it in the field.

Figure 3. Four contour paths obtained from data from a terrain database. The planned paths are shown as bright lines. For reference, a known contour map is wrapped onto the terrain map.

A second known limitation of the generalized contour algorithm is that it is incomplete when the missing person is moving, because the camera and the missing person may inadvertently evade one another. To illustrate this, let V_cam denote the ground speed of the camera's footprint (since the speed with which the camera traverses the ground is determined by the ground speed, not the UAV's airspeed, wind can strongly affect the camera traversal speed). Let V_MP denote the speed of the missing person. Monte Carlo simulations depicted in Figure 4 show how sensing rates for the spiral search drop as the ratio of missing person speed to camera speed approaches one.

These results were obtained by setting the UAV's altitude and the camera's field of view to provide a camera footprint of 32 m × 24 m, and by assuming a perfect observer: if a sign was within the field of view, it was observed. Thus, the x-axis represents the ratio of V_MP^max to V_cam. The figure was generated using a spiral search strategy under the following conditions: (a) the missing person's initial location is normally distributed with a standard deviation of 100 m around the point last seen; (b) the missing person moves at a velocity selected from the interval [−V_MP^max, V_MP^max], driven by a random walk with reflecting boundaries at the edges of the interval, and with travel directions driven by an unreflected random walk.

Since Figure 4 uses the ratio of missing person speed to UAV speed, the patterns can be interpreted for many time scales and many possible speeds of the missing person. Data was gathered assuming that the camera covered the ground at a nominal speed of 3 m per simulation time step. Thus, if the time scale is assumed to be one simulation time step per second, then V_cam = 3 m/s, and if the time scale is two simulation time steps per second then V_cam = 6 m/s. For these two time scales, a ratio of 1.0 indicates that the missing person is moving at the same rate as the camera footprint. As a point of calibration, a speed of 3 m/s (or 6.7 mph) is a steady jog, and a speed of 6 m/s is a very fast sprint.

Suppose that we are interested in the detection rate, assuming that the missing person will not move faster than 3 m/s. If the unit of time is taken as one simulation time step per second, 5000 simulation time steps corresponds to approximately 83 min of flying. Additionally, V_cam = 3 m/s, which means that we should look at a ratio of 1.0 in Figure 4 to determine that the UAV has only about a 25% probability of obtaining imagery containing the missing person when the missing person is moving at approximately the same rate.

Again, suppose that we are interested in the detection rate, assuming that the missing person will not move faster than 3 m/s, but assume that the unit of time is two simulation steps per second. This means that the simulation covers approximately 42 min of flying time. Additionally, V_cam = 6 m/s, so we should use a ratio of 2.0 in Figure 4 to conclude that the UAV has only about a 40% probability of obtaining imagery containing the missing person, given that the missing person is moving half as fast as the camera.
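A condensed sketch of such a Monte Carlo run under the stated conditions (our reconstruction, not the authors' simulator; the spiral reuses Eq. (2), and the noise magnitudes and arm-spacing parameter are assumed values):

```python
import math, random

def detection_rate(v_mp_max, v_cam=3.0, trials=200, steps=5000, half_w=16.0, half_h=12.0):
    """Fraction of trials in which the 32 m x 24 m camera footprint ever contains
    the missing person. The footprint center follows an Eq. (2)-style spiral at
    roughly v_cam per step; the person follows the random-walk model in the text."""
    hits = 0
    for _ in range(trials):
        px, py = random.gauss(0, 100), random.gauss(0, 100)   # initial location
        speed, heading = 0.0, random.uniform(0, 2 * math.pi)
        r, theta = 10.0, 0.0
        for _ in range(steps):
            theta += v_cam / r              # ~v_cam meters of arc per time step
            r += 5.0 / r                    # arm-spacing parameter (assumed)
            cx, cy = r * math.cos(theta), r * math.sin(theta)
            if abs(px - cx) <= half_w and abs(py - cy) <= half_h:
                hits += 1
                break
            speed += random.gauss(0, 0.1)   # random walk on speed...
            if speed > v_mp_max:            # ...with reflecting boundaries
                speed = 2 * v_mp_max - speed
            elif speed < -v_mp_max:
                speed = -2 * v_mp_max - speed
            heading += random.gauss(0, 0.1) # unreflected random walk on heading
            px += speed * math.cos(heading)
            py += speed * math.sin(heading)
    return hits / trials

print(detection_rate(v_mp_max=3.0))  # compare with the ~25% read off Figure 4
```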

The conclusion from this analysis is that although the generalized contour search eventually covers a specified search area, it is incomplete in terms of always "seeing" a moving missing person. Indeed, this is true for all searches, though redundant searches and searches by multiple UAVs and ground searchers can mitigate the problem (Setnicka, 1980; Bourgault et al., 2003).

The third known problem with the generalized contour search is that many distributions or areas of terrain have multiple modes. Planning an optimal search path for such distributions is known as the orienteering problem, which is computationally very complex (Fischetti, Gonzalez & Toth, 1998). Although future work should address this problem, we presently require the incident commander to create a plan based on his or her experience given a multimodal search space.

3.5. Lessons from Field Trials

Although the previous analysis is important for identifying practical limits on height above ground and efficient patterns of search, it is essential to conduct field studies to identify critical things not identified in the analysis. A typical field trial involves placing a dummy in the wilderness along with realistic clues (see Figure 5). A scenario, obtained from prior missing person case studies, is presented to the group. The incident leader then constructs a search plan, and the UAV is used to execute this search plan insofar as possible. A series of field search trials was conducted with search and rescue personnel in 2006 and 2007 using the technology described in the previous subsections.

Figure 4. Detection rates as a function of the ratio of missing person speed to camera speed for a fixed search period.

Figure 5. A "dummy" in a field test.

Following the series of field trials, we surveyed participants to identify things that were essential for effective search. From these surveys, a strong consensus emerged on two needs that must be met for practical field deployment: improved video presentation, consisting of stabilized video and image enhancement, and improved integration of the UAV into the WiSAR process. (Survey participants included people with search and rescue experience, students, and faculty; seven participants were surveyed. As shown in the accompanying technical report (Goodrich et al., 2007b), statistically significant results are obtainable from the survey, but since most participants were not search and rescue experts the statistical analysis is not included in this paper.) Sections 4 and 5 address these needs, respectively.

4. IMPROVING VISUAL DETECTION

Lessons from field trials strongly indicate that improving video presentation is necessary for UAV-assisted WiSAR. Because the UAV is so small, the UAV video is plagued with limited spatial and temporal fields of view, distracting jittery motions, disorienting rotations, noisy images, and distorted images. As a result, it is very difficult to maximize the probability of correct detections without incurring a high false alarm rate.

Video should be presented such that the probability of correct detection is maximized and the probability of a false alarm is minimized. Given the nature of this classification task, detection theory indicates that it is very difficult to perfectly accomplish both objectives (Sheridan, 1992); there is an inherent tradeoff. It is tempting to conclude that, because the cost associated with missed detections is the potential loss of a human life, a high number of false alarms can be tolerated. Two observations temper this conclusion. First, a high false alarm rate will negatively influence practical usage of the UAV technology. Second, false alarms trigger additional effort to acquire and analyze information. This effort may involve sending search teams to a location or requiring the UAV to gather additional information regarding a particular location. The cost of these responses is the loss of information that could have been accrued if these resources were employed in other search regions.

Improving video presentation involves many factors. At a minimum, improved presentation includes calibrating the camera, deinterlacing the image, and correcting for camera distortion. We refer to video that has been modified in these ways as "nominal video." This section describes three potential video presentation improvements, followed by a study comparing the three improvements to the nominal video feed. The computer vision algorithms used to produce these improvements find feature correspondences in real time and then use these correspondences to determine frame-to-frame alignment. The algorithm details can be found in Gerhardt, 2007.
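For illustration, a minimal preprocessing chain of this kind might look as follows (a sketch using OpenCV; the intrinsics and distortion coefficients are placeholders, not values from the paper):

```python
import cv2
import numpy as np

# Placeholder intrinsics/distortion -- real values come from camera calibration.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.3, 0.1, 0.0, 0.0, 0.0])  # radial/tangential coefficients

def nominal_frame(frame):
    """Produce 'nominal video': deinterlace (cheaply, by keeping one field and
    rescaling) and undistort a raw interlaced NTSC frame."""
    even_field = frame[::2]
    deinterlaced = cv2.resize(even_field, (frame.shape[1], frame.shape[0]),
                              interpolation=cv2.INTER_LINEAR)
    return cv2.undistort(deinterlaced, K, dist)
```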

4.1. Brief Description of the Presentation Views

In order to improve the presentation and support the analysis of UAV-acquired video, four separate but related presentations were derived: nominal video, the stable view, the temporally localized (TL)-mosaic view, and the stable-mosaic view. Each view presents one or more blended images in a viewframe. The viewframe is delineated by a black bounding rectangle. Figure 6 shows two different views within the viewframe: (a) represents nominal video and stabilized imagery; stabilized imagery allows the image to drift within the viewframe so that image features remain in approximately the same viewframe location. (b) represents a mosaic, that is, a composite image consisting of several other images "stitched" together; the difference between the two mosaicked views is how the viewframe scrolls as imagery moves. Note that the four presentations do not use terrain information to blend or display images, but instead assume a flat earth model.

The first and simplest presentation view is nominal video. The rectangle bounding the video remains constant within the viewframe in the traditional manner of displaying video.

The second presentation attempts to improve nominal video by making image features more stable. The resulting stable view displays images using a smoothed view path independent of any mosaic. Roughly speaking, over time the location and orientation of the image shifts within the viewframe. When a new image is received, features in the new image are compared with features in the previous image. The old image is replaced by the new image, but the new image is presented within the viewframe such that the features between subsequent images are in approximately the same location within the viewframe. The reasoning is that if image features are stable within the viewframe, it is easier for an observer to detect features on the ground. Stabilizing the image hypothetically improves an operator's orientation and balances the removal of content jitter from the original presentation against the presentation jitter of the mosaic view. Unlike many stabilization views, which constrain the viewframe to a subregion of the image, our viewframe is larger than the image (see Figure 6(a)). This larger viewframe is desirable since maximizing the camera footprint increases the search efficiency. A scrolling algorithm that smoothly translates the image allows features to remain relatively stable while still approximately centering the image.
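One simple way to realize such a scrolling policy (our sketch, not the paper's algorithm) is to place each frame at its feature-aligned offset while exponentially pulling the viewframe back toward center:

```python
import numpy as np

class StableView:
    """Sketch of the stable view: frames are drawn at their accumulated
    frame-to-frame offset, and the viewframe recenters with a smoothing gain."""
    def __init__(self, alpha=0.05):
        self.alpha = alpha                  # recentering strength (assumed value)
        self.offset = np.zeros(2)           # image position within the viewframe

    def place(self, frame_to_frame_shift):
        """frame_to_frame_shift: (dx, dy) from feature matching between frames."""
        self.offset += np.asarray(frame_to_frame_shift, dtype=float)
        self.offset *= (1.0 - self.alpha)   # smoothly drift back toward center
        return tuple(self.offset)           # where to draw the new frame

view = StableView()
for shift in [(3, -1), (2, 0), (4, -2)]:    # jittery camera motion estimates
    print(view.place(shift))
```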

The third presentation attempts to improve nominal video by improving the temporal persistence of image features. The resulting TL-mosaic view builds a local mosaic of recent images (and thus the name "temporally localized mosaic"). In contrast to mosaics that seek to construct a unified static image over the entire area, the TL-mosaic constructs a mosaic only over recent video frames. The motivation for this view is to expand both the spatial and temporal viewing windows for the observer by aggregating each frame onto the display canvas. As new images are received, they are added to the collection of previous images within the viewframe by aligning features. This presentation is desirable because features will be stably presented and signs will be persistently displayed in the viewframe. In practice, these image aggregations work well until the images begin to be aggregated onto the canvas outside of the viewing frustum. A simple solution is to translate (or scroll) the viewpoint, when necessary, to follow this image aggregation path and always maintain the current frame within the presentation's viewing frustum. The resulting display expands the effectual viewing frustum and removes video content jitter. However, the viewpoint translations must occur frequently and effectually reintroduce some jitter.

The fourth presentation attempts to retain the benefits of the TL-mosaic and also reduce the jitter caused when the viewing frustum must shift. The resulting stable-mosaic view combines elements from the TL-mosaic and stable views. The difference between the TL-mosaic and stable-mosaic views is the smoothing function that scrolls the mosaic as new images are acquired. Hypothetically, this presentation provides the benefits of both the stabilized view and the TL-mosaic view, with a potential cost of providing less imagery in the viewframe at a given time.

Figure 6. These two images illustrate how the history of a mosaic can increase the positive identification rate. (a) The original view contains a red umbrella, seen in the lower right corner of the view and circled, which is visible in only a couple of frames. (b) The mosaic view contains the same red umbrella, seen in the lower middle of the view and circled, which is visible in hundreds of frames. Note that the umbrella is extremely difficult to see in a black and white image; color is the key discriminator.
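A compressed sketch of the frame-alignment core behind the mosaicked views (our illustration with OpenCV feature matching; the real-time algorithm details are in Gerhardt, 2007, and may differ):

```python
import cv2
import numpy as np

orb = cv2.ORB_create(500)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_to_frame_homography(prev_gray, cur_gray):
    """Estimate the homography aligning the current frame to the previous one
    from ORB feature correspondences (flat-earth assumption, as in the text)."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(cur_gray, None)
    matches = matcher.match(des2, des1)          # current (query) -> previous (train)
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# TL-mosaic idea: compose these homographies over recent frames only, warp each
# frame into a shared canvas, and drop frames older than a few seconds.
```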

4.2. Experiment Description

An evaluation of the different views was performed by 14 naïve and 12 potentially biased volunteer participants. Participant bias was determined based on the participants' familiarity with this work. The naïve participants were recruited from the university student population and compensated $15.00. The biased participants were those affiliated or familiar with this work and were not compensated. One participant reported some color blindness.

Each participant was asked to perform two tasks simultaneously in order to replicate field conditions while maintaining appropriate experimental control. The primary task was to detect and identify pre-described signs in the video display shown on a monitor positioned in front of them and to their left. The video was obtained from previous flights where signs (red umbrellas) were distributed in the environment; in the video, these umbrellas appear as blurry red ellipses. During video collection, the UAV was engaged in common search patterns. The secondary task was to detect and identify pre-described objects of interest on a monitor of the same size positioned to the right of the video display. This secondary task was designed to represent an active visual task such as placing waypoints on maps, using maps to localize the imagery, etc.

The video display presented each participant with a controlled random ordering of 16 different short video clips. Each clip lasted about 1.5 min and was presented using one of the four possible views. The participants were asked to detect and identify as many of the signs as possible during each trial. Clip orders were presented in a balanced, randomized order.

The secondary display provided the participant with a controlled random sequence of uniquely colored spots dependent on the corresponding video clip. Each clip corresponded to its own particular randomized spot generation sequence. The set of spots was regenerated every 2–5 s using probabilities that gave the red (target) spot an 82% chance of being displayed each time the spots regenerated.

Any time a participant believed that a sign was visible, he or she was to freeze the frame by left-clicking anywhere in the video display window. The clicking represented the task of noting an image of interest and localizing the sign. Freezing the frame caused the video display to freeze but did not pause the video. Rather, the video continued to play in the background, which meant that the longer the frame was frozen, the more video content the participant missed. The participant had two options during the freeze-frame state: select the sign by left-clicking on it, or make no selection and cancel the freeze frame by right-clicking anywhere within the video display window. If the participant selected a sign, a red ring would briefly highlight the selected area. Any part of the sign (the red umbrella) within the ring counted as a precise hit. When this action was taken, normal presentation continued. A similar control scheme was employed for the secondary task.

After each trial, the participant answered three post-trial questions shown on the secondary display. These questions were designed to measure participant preferences and confidence. The hit rates for detecting umbrellas in the primary task, the hit rates for detecting spots in the secondary task, and false-positive rates and types were measured during each trial. Hits were classified as occurring within the current frame, within the mosaic history, or within the viewframe but not within the mosaic.

4.3. Results

Three preliminary observations are in order before presenting the main results. First, there is no statistical difference between the objective results for naïve and biased participants, who had a 73% and 72% probability of detecting the red umbrellas, respectively. Thus, the remaining analysis does not distinguish between the two participant types. Second, detection and identification success rates for the secondary display are very high and consistent across all participants and views, at about 94%. This suggests that any influence from the additional cognitive load on the results will be expressed mainly in the differences among the red umbrella hit rates within the video display. Third, one particular video clip was an outlier wherein all participants identified all of the umbrellas regardless of the accompanying view. This clip was removed from the analysis.

The primary results, shown in Table I, were obtained via a multiple-comparison ANOVA with the Tukey–Kramer adjustment. They indicate that providing the participant with an increased viewing frustum and a stabilized view increases the probability that signs are detected. The TL-mosaic view resulted in the largest percentage increase (45.33%) in hit probability over the original view. Also, there is a strong (43%) improvement from the nonmosaicked to the mosaicked views. The pairwise differences between the mosaicked and nonmosaicked views are statistically significant (P < 0.01), but the pairwise differences within each view set are not.

Table I. Hit probability comparison among the different presentation views and between the naïve and biased participants, where μ is the least-squares means estimate and P_D = e^μ / (1 + e^μ), i.e., the probability that the sign will be detected given the corresponding presentation view or participant. The improvement over the lowest probability, P_D^low, which corresponds to the nominal view, was computed as (P_D^view − P_D^low) / P_D^low.

Presentation     μ        P_D       % Improvement
TL-mosaic        1.6610   84.04%    45.33%
Stable-mosaic    1.5486   82.47%    42.62%
Stable           0.3935   59.71%    3.26%
Nominal          0.3156   57.83%    0.00%
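As a quick numerical check of the table's logistic transform (our sketch; the μ values are copied from Table I):

```python
import math

mu = {"TL-mosaic": 1.6610, "Stable-mosaic": 1.5486, "Stable": 0.3935, "Nominal": 0.3156}

def p_detect(m):
    return math.exp(m) / (1.0 + math.exp(m))   # P_D = e^mu / (1 + e^mu)

p_low = p_detect(mu["Nominal"])
for view, m in mu.items():
    p = p_detect(m)
    print(f"{view}: P_D={p:.2%}, improvement={(p - p_low) / p_low:.2%}")
# Reproduces 84.04%/45.33% for TL-mosaic and 57.83%/0.00% for Nominal.
```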

This improvement is largely explained by referring to Figures 6(a) and 6(b). Figure 6(a) demonstrates that the sign (the red umbrella) is visible for only a couple of frames (about 1/15th of a second) in the lower right corner of the nominal view (which would appear very similar to the stable view). However, in the corresponding mosaicked view, seen in Figure 6(b), this red umbrella is visible for a much longer time, possibly over hundreds of frames, or several seconds, before it moves out of the viewing frustum.

One downside of providing a mosaicked view is an increase in the probability of false positives. A false positive occurs when some object in the image or some image artifact (such as discoloration) causes the observer to think that a red umbrella has been observed. False positives were 11% and 18% of total hits for the TL-mosaic and stable-mosaic presentations, respectively, and 5.77% and 8.65% for the stable and nominal presentations, respectively. If some element of the image is likely to trigger a false positive in the video, then causing this element to persist in a mosaic will increase the chance that the observer will notice the element. Fortunately, the experimental results indicate that the probability of correct detection grows much more quickly than the probability of false positives when a mosaic is used.

In summary, participants overwhelmingly preferred the mosaicked views (92%) and found the mosaicked views to be more orienting (88%) and less straining (73%) than the nonmosaicked views. Importantly, this increased persistence potentially allows the UAV to travel more quickly, since the amount of time that a sign must be within the camera's field of view can be small provided that a human has long enough to see the sign.

Many researchers have explored creating image mosaics, including from aerial imagery (Kumar et al., 2001), but few appear to be using mosaics to improve real-time detection. Since such detection is essential for WiSAR, it is necessary to understand how to use spatially and temporally local mosaics to enhance live presentation of video with the goal of supporting an observer watching the video. The experiment results are, to the best of our knowledge, the first to analyze how local image mosaics can improve an observer's ability to detect objects in a video stream.

5. INTEGRATING THE UAV INTO THE WISAR PROCESS

The second need identified in field trials was the lack of an effective process for coordinating among the UAV operator, the incident commander, and ground searchers. It was assumed when preparing for the first field trials that the UAV would fly a complete search path (e.g., a spiral or lawnmower pattern), imagery would be recorded, signs would be identified, the signs would be clustered together, and then ground searchers would go to the location of the signs and find the missing person.

The preliminary trials indicated that practice differed from this model in two ways. First, instead of systematically executing a complete but probably inefficient search, the incident commander wanted to execute a probably efficient (but potentially incomplete) heuristic search. Given the desired search path, the UAV operator translated the path into waypoints and began flying. Second, the inability to efficiently dispatch ground searchers to a potential sign made it necessary to gather images from many angles and altitudes. Often, this process caused the original search plan to be neglected or abandoned, which meant that the search became incomplete and inefficient.

In order to improve the process and identify effective operational paradigms, it was necessary to better understand how real WiSAR is performed. This section reports results of a human factors analysis of the WiSAR domain. We use these results to identify (a) tasks that must be performed to do UAV-enabled WiSAR, and (b) roles and responsibilities for people performing these tasks. This results in three operational paradigms for organizing the UAV, the incident commander/mission manager, and ground searchers. (The director of the overall search operations is known as the incident commander; we use the term mission manager to indicate that the UAV search team might be directed by someone other than the incident commander who reports to the incident commander.)

5.1. Goal-Directed Task Analysis

A goal-directed task analysis (GDTA) (Endsley et al., 2003) is a formal human-factors process for identifying how complex tasks are performed. GDTAs have been used in other first responder domains (Adams, 2005), military domains (Bolstad, Riley, Jones & Endsley, 2002), screening applications (Segall et al., 2006), etc. The analysis results in a formal description of the goals and tasks required to successfully perform the task, with a focus on the information required to support situation awareness. The GDTA can be used to identify how new human-centered technologies can augment existing processes. To encapsulate this, we have performed a GDTA and a partial cognitive work analysis (CWA) (Vicente, 1999) of the WiSAR domain.

The complete GDTA and CWA are provided in an accompanying technical report (Goodrich et al., 2007b); however, the results of these analyses were employed to identify the central information flow of the process, and we use this process model to guide our analysis of human-robot WiSAR teams. This summary was previously presented in a workshop paper (Goodrich et al., 2007a), but we significantly extend that paper, not only identifying tasks specific to UAV-enabled search but also including the four qualitatively different types of search used in WiSAR. As shown in Figure 7, the search task involves gathering evidence, utilizing that information to modify the understanding of the search problem, and then directing further efforts at gathering additional evidence.

The information flow for WiSAR personnel begins with the initial details given by the reporting party. Responders immediately consider the urgency of the call based on the potential danger to the missing person and other factors. Combining prior knowledge and experience with information provided by the reporting party, responders develop a model of high-probability sources of additional evidence. Potential sources of evidence include both geographic locations surrounding the missing person's point last seen (PLS) and information from people familiar with the missing person.

After evaluating the initial sources of evidence, the WiSAR team develops and executes a plan for acquiring additional evidence. In the more challenging situations, the plan must allocate human and UAV search resources to efficiently accumulate evidence from different sources. Such allocation is governed by the probability that useful information will be obtained, by the risks involved in gathering the information, and by the capabilities of the available resources for acquiring information.
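To make the allocation criterion concrete, the following toy sketch (our own illustration; the `Assignment` fields, the risk weight, and the numbers are hypothetical, not the authors' model) ranks candidate resource-to-source assignments by a risk-discounted expected information value:

```python
from dataclasses import dataclass

@dataclass
class Assignment:
    source: str        # candidate evidence source (e.g., a map region)
    resource: str      # "UAV" or "ground team"
    p_info: float      # probability the source yields useful information
    capability: float  # how well this resource can search this source, 0..1
    risk: float        # hazard to the resource/searchers, 0..1

def expected_value(a: Assignment, risk_weight: float = 0.5) -> float:
    """Score an assignment: likely information gain, discounted by risk."""
    return a.p_info * a.capability - risk_weight * a.risk

candidates = [
    Assignment("drainage below PLS", "ground team", 0.4, 0.9, 0.2),
    Assignment("open ridge line", "UAV", 0.3, 0.8, 0.05),
]
# Dispatch resources to the highest-value assignments first.
for a in sorted(candidates, key=expected_value, reverse=True):
    print(f"{a.resource:11s} -> {a.source}: {expected_value(a):.2f}")
```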

Time and additional evidence result in adjustments to the probability model of possible evidence sources. Changes in the model lead to changes in the search plan. All evidence changes the expected utility of searching in different areas. Incident command continually evaluates evidence and redirects available resources in order to maximize the value of the search.

Ideally, the search ends when the WiSAR team locates the missing person (the probability distribution collapses to a single spike). Work then proceeds to rescue or recovery. However, the process may also end if the search continues long enough that the probability of the missing person actually being within the search area falls below a certain threshold, or if dangers or other constraints (e.g., another incident) cause the relative expected value of continuing the search to fall below a threshold (Setnicka, 1980).
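The first stopping condition can be made concrete with the standard search-theory update (in the spirit of Koopman, 1980, though this exact code is ours): after an unsuccessful pass over the search area, Bayes' rule shrinks the probability that the missing person is inside it. A minimal sketch with hypothetical numbers:

```python
def containment_after_miss(p_contain, p_detect):
    """Bayes update of the probability that the missing person is inside
    the search area after one unsuccessful pass, where p_detect is the
    probability of detection given the person is actually there."""
    miss = p_contain * (1.0 - p_detect)
    return miss / (miss + (1.0 - p_contain))

p, threshold = 0.8, 0.15   # hypothetical prior and stopping threshold
for n_pass in range(1, 8):
    p = containment_after_miss(p, p_detect=0.5)
    print(f"after pass {n_pass}: containment probability = {p:.3f}")
    if p < threshold:      # second condition (relative expected value) omitted
        print("below threshold -- consider suspending the search")
        break
```

With these numbers the containment probability falls from 0.8 to about 0.11 after five unsuccessful passes, at which point the loop suggests suspending the search.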

5.2. Task Breakdown for UAV-Enabled Search Execution

Using a UAV to support the WiSAR process alters the process identified in the GDTA; some tasks are fundamentally changed and other tasks are introduced. We begin by describing the tasks that the UAV and its operator(s) must perform in order to follow a plan. As the tasks are presented, we identify the various roles that humans must fill. Figure 8 provides a UAV-enabled WiSAR task breakdown. This breakdown was obtained by combining the GDTA results, observations from multiple flight tests and search trials, and an activity analysis patterned after the frameworks in Norman (2005), Parasuraman et al. (2000), and Sheridan & Verplank (1978).

When a portion of a task is automated, the human's responsibility shifts from performing the task to managing the autonomy (Woods et al., 2004). Four roles emerge when a UAV is used to support the WiSAR process; three are described here, and the fourth (ground searcher) is discussed in Section 5.3. The UAV operator is responsible for guiding the UAV to a series of locations that allow the camera to obtain imagery of potential signs. The sensor operator is responsible for directing, for example, a gimballed camera, and for scanning and analyzing imagery to detect potential missing-person signs. The mission manager is responsible for managing the search progression, with an emphasis on processing information, focusing search efforts, and reprioritizing efforts.

Figure 7. Information flow in the WiSAR domain.

Figure 8. Hierarchical task breakdown of UAV-enabled search.


New responsibilities for the human include deploying, retrieving, and monitoring the UAV's health. These responsibilities are performed by the human filling the UAV operator role. Deploying12 and retrieving must be performed prior to and subsequent to the aerial visual search, but monitoring the UAV's status and health must be performed in parallel with the visual search.

12. The UAV is deployed to an area of interest (AOI) that may be some distance from the launch point.

The purpose of flying the UAV is to acquire information about where the missing person may be located. As illustrated in Figure 8, acquiring information using a UAV requires gathering imagery by flying the UAV and aiming the camera so that it is likely that some sign appears in the video (coverage), evaluating the imagery for a possible sign (detection), and then identifying the evidence's location in order to modify search priorities (localization).

5.2.1. Gathering Imagery

Imagery is acquired by planning a path, flying the UAV, and controlling the camera viewpoint to ensure that imagery of the search area is obtained. The speed and path of the camera's footprint over the ground are the key control variables (Koopman, 1980), and search completeness and efficiency are the key performance measures. The path should maximize the probability of locating a sign in the shortest time. Guiding the UAV and following a search path are incorporated into the UAV operator role.
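As a toy illustration of how the camera footprint induces a coverage path (our own sketch; the fielded planner is not given in this paper), a lawnmower pattern over a rectangular area can be generated as follows:

```python
def lawnmower_waypoints(x0, y0, width, height, footprint):
    """Generate back-and-forth (lawnmower) waypoints covering a
    width x height rectangle anchored at (x0, y0).

    footprint: ground width imaged by the camera on one pass;
    successive passes are spaced one footprint apart (no overlap).
    """
    waypoints = []
    y = y0 + footprint / 2.0           # center the first pass
    heading_east = True
    while y < y0 + height:
        x_start, x_end = (x0, x0 + width) if heading_east else (x0 + width, x0)
        waypoints.append((x_start, y))
        waypoints.append((x_end, y))
        y += footprint                 # step over by one footprint
        heading_east = not heading_east
    return waypoints

# Example: a 500 m x 300 m area with a 60 m camera footprint -> 5 passes.
print(lawnmower_waypoints(0.0, 0.0, 500.0, 300.0, 60.0))
```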

5.2.2. Analyzing Imagery

Imagery can be scanned either in real time or offline. Since the purpose of using the UAV in WiSAR is to allow searchers to find the missing person, analyzing imagery for signs is, in essence, the purpose of flying the UAV. The key performance variable is the probability that a human can detect a sign in an image given a set of image features. Since this probability is strongly influenced by the way information is obtained and presented, the key control variables include how video information is presented, the camera resolution, the UAV's height, and the number of images that contain information from the search area. Managing the camera and analyzing imagery fall under the separate role of the sensor operator.

5.2.3. Localizing Signs

Once a sign has been identified in an image, it is necessary to estimate the sign's location so that searchers can reach it. Estimating the location is often referred to as "georeferencing" the imagery. In practice, camera footprint localization is performed by projecting the camera angle from the UAV's pose onto a flat-earth model (Redding, McLain, Beard & Taylor, 2006). The key control variables are the video and telemetry synchronization, the range of UAV viewing angles of the sign (to support triangulation), and the reliability of the terrain model used in the camera projection model. The key performance variable is the accuracy of the sign's location in the physical environment. Once signs are localized, the person in the mission manager role uses this information to replan using the pattern in Figure 7.
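A minimal sketch of the flat-earth projection step (ours; it assumes the pixel's viewing ray has already been rotated into a local world frame using the UAV pose and gimbal angles, a chain omitted here) reduces localization to a ray-plane intersection:

```python
import numpy as np

def georeference_flat_earth(uav_pos, ray_world):
    """Intersect a camera ray with a flat-earth (z = 0) ground plane.

    uav_pos:   UAV position (x, y, z) in a local frame, z = height above ground.
    ray_world: unit vector from the camera toward the sign, in the same frame
               (obtained by rotating the pixel ray through the gimbal and UAV
               attitude; that rotation is omitted here).
    Returns the (x, y) ground coordinates of the sign estimate.
    """
    uav_pos = np.asarray(uav_pos, dtype=float)
    ray_world = np.asarray(ray_world, dtype=float)
    if ray_world[2] >= 0.0:
        raise ValueError("ray does not point toward the ground")
    t = -uav_pos[2] / ray_world[2]   # scale so the ray reaches z = 0
    hit = uav_pos + t * ray_world
    return float(hit[0]), float(hit[1])

# UAV at 60 m above ground, looking 45 degrees forward and down along +x:
print(georeference_flat_earth((100.0, 200.0, 60.0),
                              (0.7071, 0.0, -0.7071)))  # -> ~(160.0, 200.0)
```

The text's other control variables map directly onto this sketch: telemetry synchronization errors corrupt `uav_pos` and `ray_world`, and replacing the z = 0 plane with a terrain model changes the intersection step.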

5.3. Coordinating the UAV with Field Personnel

The analysis of the UAV-enabled WiSAR process identified three roles: UAV operator, sensor operator, and mission manager. Field tests strongly indicate that a fourth role, ground searcher, is needed for a successful search. Ground searchers confirm or disconfirm signs by, for example, inspecting a brightly colored spot to determine whether it is a manmade object discarded by the missing person. An operational assumption is that seeing a sign from the ground removes more uncertainty than seeing it from the air; in other words, search thoroughness and confidence are very different for aerial and ground resources. Importantly, lessons from other search-related domains (Drury et al., 2003; Burke & Murphy, 2004) also indicate that multiple roles are required. In both the WiSAR domain and other domains, these roles can in principle be filled by one or more people with varying levels of authority, often supported by various autonomy algorithms and user interface technologies.

Two factors determine how ground and aerial resources coordinate: the level of search thoroughness and the search strategy. The level of search thoroughness may be represented as the probability of detecting the missing person or a related sign. It is necessary to specify the level of thoroughness because dedicating too much time and effort to one area prevents searchers from searching other areas within the perimeter. A coarse search technique may be possible if the missing person can hear, see, or call out to searchers. Similarly, a coarse search may be possible if expected cues are easy to detect, such as knowledge that the missing person is wearing a bright orange raincoat. High thoroughness means a higher probability of finding signs, but it also requires more searchers or slower searches. Importantly, repeated searches at a low level of thoroughness can be just as effective as a single search at a high level of thoroughness (Setnicka, 1980); for example, assuming independent passes, two passes that each detect a sign with probability 0.5 yield a combined detection probability of 1 − 0.5² = 0.75. As this applies to UAV-enabled WiSAR, the UAV can provide a rapid search, albeit at a different level of thoroughness than slower ground searchers could achieve.

The GDTA indicated that four qualitatively different types of search strategies13 are used in WiSAR: hasty, constraining, high-probability-region, and exhaustive (see also Setnicka, 1980). Different search strategies suggest different responsibilities and roles for aerial and ground resources. Prior to discussing these different roles and responsibilities, it is useful to describe the four types of searches.

13. Note that we use the phrase "search strategy" to indicate some form of informed method of executing a search. In Setnicka (1980), the term "strategy" is restricted "to the process of establishing a probable search area most likely to contain the subject," and the term "tactics" refers to "explicit methods used to deploy search resources into that area." Thus, in the parlance of Setnicka (1980), our use of the phrase "search strategy" would more accurately be called "search tactics."

A hasty search entails rapidly checking areas and directions that offer the highest probability of detecting the missing person, determining the missing person's direction of travel, or finding some clue regarding the missing person's location. Such searches often use canine and "mantracking" teams to follow the missing person's trail. A constraining search is intended to locate clues that limit the search area. For example, if there is a natural ridge with only a few passages, searchers will inspect the trails through the ridge for signs in order to restrict their search efforts to one side of the ridge. Results from hasty and constraining searches are often used to inform search in high-probability regions. The incident commander can estimate the probability of finding the missing person in the various map sections based upon a combination of experience and intuition, empirical statistics, consensus, and natural barriers (Setnicka, 1980). The incident commander then deploys the search teams with the appropriate skills to examine the areas of highest probability and to report their findings with an assessment of the thoroughness of the search. An exhaustive search is a systematic coverage of an area using appropriate search patterns. Such a search typically indicates that other (usually more effective) search strategies have failed to yield useful information. If the exhaustive search produces new information, the incident commander may revert to a prioritized search.

In field trials, coordination among the UAV operator, the people watching the video, the mission manager, and the ground searchers was often ineffective. This was largely the result of not correctly assigning responsibilities and roles for a specific level of search thoroughness and a specific search strategy. Thus, a necessary step toward effective field practice is to develop a process for coordinating roles.

The remainder of this section identifies operational paradigms for organizing roles; much of this subsection was presented in a workshop paper (Goodrich et al., 2007a), but portions are included and extended herein because they identify key principles and practices for fielded searches. As illustrated in Figure 9, the important distinctions among the three operational paradigms discussed below are (a) the physical locations of the individuals filling the roles and (b) the temporal sequence in which the roles are fulfilled. We now describe the organization of each paradigm and present principles for deciding when a particular paradigm is appropriate.


Figure 9. Spatial and temporal relationships between people filling WiSAR roles: U = UAV operator, S = sensor operator, M = mission manager, and G = ground searchers.


5.3.1. Sequential Operations

• Organization. Under the sequential operations paradigm, the UAV is used to gather information independently, without coordinating in real time with the ground searchers. The mission manager works with the sensor and UAV operators to create a plan. The UAV operator then executes the search plan, and the resulting video and telemetry information is provided to the mission manager and sensor operator. They evaluate the information with the goal of detecting and localizing signs. If a potentially valid sign is found, a ground support searcher is dispatched to evaluate the sign. Information from the ground searchers is then provided to the mission manager, and a new plan is created. The paradigm is called "sequential" because the search proceeds in stages: an aerial search followed by a ground search as needed.

• Principles. A sequential search is appropriate when there is limited ground mobility or when the probability of identifying missing-person locations is spread uniformly across a large area. Such conditions tend to indicate that the UAV will be able to cover more ground at a particular level of thoroughness than could be achieved with a ground search given the number of people available. Sequential search allows the team to gather data using the UAV, cluster signs, and then dispatch a ground search team to the highest-probability locations. Because of the turn-taking nature of this search paradigm, it is most appropriate for exhaustive searches or searches in low-probability regions. Generalized contour searches could appropriately be used to support sequential operations.

5.3.2. Remote-Led Operations

• Organization. During remote-led operations, the mission manager is physically located with the ground team to perform a hasty ground-based search, such as tracking a footprint trail or using a canine team to track a scent trail. The UAV operator flies an orbit that is centered on the ground searchers' location, while the sensor operator controls the camera to gather imagery beyond the ground searchers' field of view. Thus, the UAV effectively extends what can potentially be seen by the ground searchers. This allows the mission manager greater access to potentially relevant information to guide the hasty search.

• Principles. Remote-led operations are appropriate when the mission manager has more awareness of search-relevant information at some location in the field than at the base station. This occurs in a hasty search when there is a cluster of signs that allows the mission manager to rapidly update the model of the missing person's location, such as might occur when tracking the individual. The UAV provides supplementary information that broadens the scope of what the mission manager can include in his or her model, and thus increases the field of search without sacrificing search thoroughness. Generalized contour searches are not appropriate for such searches; instead, the UAV operator should fly orbits centered on the location of the ground teams.

5.3.3. Base-Led Operations

• Organization. During base-led operations, the mission manager is physically located near the UAV operator control unit. As the sensor operator identifies possible signs in the video, the mission manager adjusts his or her model of the missing person's likely locations and instructs the UAV operator to focus flight time on high-probability areas. Ground searchers may track critical points on the UAV's path so that they are close to possible signs identified by the sensor operator. For example, in a spiral probability contour search, a single ground searcher may be positioned in the center of the search spiral, and/or four ground searchers can walk in the four cardinal directions from the spiral center as the UAV spirals out. As another example, in a terrain contour search up the side of a mountain, searchers can be placed near both extremes of the contour line and near the center, and then hike up the mountain as the UAV tracks to higher altitudes. When the UAV records a possible sign, the ground searchers rapidly confirm or disconfirm it. This paradigm may be appropriate for priority searches where the improved thoroughness of a ground searcher is needed to rapidly confirm signs detected from the UAV.

• Principles. Base-led operations are appropriate when the terrain allows ground teams to be sufficiently mobile but there is insufficient information to perform a hasty search. The ground team can position themselves so that they are within a minimal expected distance of a sign when the sensor operator detects one. Feedback from the ground allows the mission manager to adapt the search plan rapidly. Generalized contour searches or ad hoc waypoint lists could appropriately be used to support base-led operations.

6. CONCLUSIONS AND FUTURE WORK

The WiSAR problem can be framed as one in which maps of the likely locations of signs are used to construct a map of the missing person's likely location. The map of signs obtained from the UAV, called the classification map, is combined with other information sources by the incident commander. The classification map provided to the incident commander is a filtered estimate of the likely sign locations given a series of observations; we have restricted observations to visual detections by a human observer from (possibly enhanced) UAV video. The posterior probability of the classification map given all observations is the product of the likelihood that a sign will be correctly detected given the map and the predicted classification map given previous observations. These two elements represent the two obligations of effective search: detection and coverage. In terms of the expected time to find the missing person, there is a tradeoff between detection and coverage: generally speaking, rapid coverage reduces the probability of detection, and vice versa.
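In symbols (our reconstruction of the recursion just described, with m denoting the classification map and z_{1:t} the observation sequence):

```latex
% Posterior classification map = detection likelihood x predicted map
% (normalization omitted).
P(m \mid z_{1:t}) \;\propto\;
    \underbrace{P(z_t \mid m)}_{\text{detection}}\;
    \underbrace{P(m \mid z_{1:t-1})}_{\text{coverage / prediction}}
```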

Practically speaking, the requirement for efficient detection translates into a maximum altitude given the minimum sign size and the camera field of view. Similarly, the predicted map translates into a coverage search path that seeks to obtain as much information about the environment as quickly as possible. A generalized contour search can provide effective and complete search strategies for stationary signs, but fielded search will require input from a mission manager to produce efficient flight paths.
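As a back-of-the-envelope illustration of the altitude constraint (our own sketch with hypothetical numbers; the paper does not give this formula explicitly), requiring a sign of size s to span at least k pixels for a nadir-pointing camera with field of view θ and horizontal resolution R yields h ≤ sR / (2k tan(θ/2)):

```python
import math

def max_search_altitude(sign_size_m, fov_deg, resolution_px, min_pixels):
    """Maximum altitude (m) at which a sign of sign_size_m meters still
    spans at least min_pixels pixels, for a nadir-pointing camera.

    The ground footprint width at altitude h is 2*h*tan(fov/2), so the
    ground sample distance is footprint/resolution; requiring
    sign_size / gsd >= min_pixels and solving for h gives the bound below.
    """
    half_fov = math.radians(fov_deg) / 2.0
    return (sign_size_m * resolution_px) / (2.0 * min_pixels * math.tan(half_fov))

# E.g., a 0.5 m object, 640-pixel-wide video, 45-degree FOV, >= 5 pixels:
print(round(max_search_altitude(0.5, 45.0, 640, 5), 1))  # -> 77.3
```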

A series of field tests was performed under various conditions; these tests revealed information about effective WiSAR that was not apparent from the a priori analysis. In particular, the field tests strongly indicated a need to improve how information is presented and to understand how UAVs can best support the WiSAR process.

Our contribution to the information presentation problem was the development of computer vision algorithms that enhance the stability and temporal locality of video features. This contribution also included a study comparing nominal, stabilized, and mosaicked video presentations. The comparison metric was the operator's ability to detect objects given a secondary task that distracted visual attention. Surprisingly, no benefit was found from stabilizing the video, probably due to algorithm features that allowed the image presentation to drift within the viewframe. By contrast, temporally localized mosaicked images provided a significant benefit, probably because such mosaics expand the effective spatial and temporal viewing windows of the displayed video, creating more persistence and thus allowing more opportunity for detection.

Our contribution to integrating UAVs into the WiSAR process began with a formal goal-directed task analysis of how WiSAR is currently performed. It was apparent from this analysis that the iterative cycle of search planning, search execution, and search replanning is changed by incorporating a UAV. Given the specific goals and information requirements for search planning and execution, we created a WiSAR-specific breakdown of roles and tasks. We drew on lessons from field tests with search and rescue personnel to identify specific paradigms for organizing the UAV operator, the sensor operator, the ground searchers, and the mission manager across different spatial and temporal locations. Different paradigms have advantages for supporting different search strategies at different levels of thoroughness given constraints on terrain and probability; we have identified some principles to help guide which paradigms should be used under field conditions.

There are several areas of ongoing and future work that have the potential to benefit UAV-enabled WiSAR. Longer searches over wider areas may reveal the need for more user-friendly interfaces. Improved interfaces would be necessary for a single person to fulfill both the UAV operator and sensor operator roles. Fulfilling dual roles would also benefit from automated target recognition. Interestingly, the availability of automated target recognition software may lend itself to image enhancement by highlighting areas where signs are detected so that the human can detect them more quickly. Finally, fulfilling dual roles may also be possible if searchers process and evaluate video offline.

Additional hardware and control algorithms may also benefit UAV-enabled WiSAR. Results from preliminary evaluations of a small (flyable) infrared camera suggest that detection at the nominal 60 m height may be possible, but that searches will be most beneficial at night or early in the morning, so that the ambient heat of the day does not mask body heat signatures. Other future work includes using high-resolution cameras, using multiple cameras to present an effectively wider field of view, using narrower field-of-view cameras to provide higher resolution, and using color enhancement of likely signs. Using multiple UAVs to perform a coordinated search or to provide a reliable ad hoc communications network also deserves more research to determine how existing solutions apply to the WiSAR problem domain. Importantly, adding more expensive hardware or managing multiple UAVs will likely make it more important to provide user-friendly interfaces plus autonomous support for managing height above ground.

ACKNOWLEDGMENTS

The authors thank their colleagues Tim McLain, Randy Beard, and many students from the following BYU laboratories: the MAGICC laboratory for building and maintaining the UAVs and for providing the foundational control algorithms, the HCMI laboratory for assistance in field trials and for the generalized contour search algorithm, and the Graphics, Vision, and Image Processing laboratory for implementing the video processing algorithms. The authors would also like to thank Dennis Eggett from the BYU Statistical Consulting Center for his help in designing the experiment and analyzing the data from the information presentation experiment. Finally, the authors would like to thank the reviewers for their helpful comments on strengthening the presentation and focus of the paper. The work was partially supported by the National Science Foundation under Grant No. 0534736, by the Utah Small Unmanned Air Vehicle Center of Excellence, and by a mentored environment grant for undergraduates from Brigham Young University. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation.

REFERENCES

Adams, J.A. (2005). Human-robot interaction design: Understanding user needs and requirements. Proceedings of the Human Factors and Ergonomics Society 49th Annual Meeting, Orlando, FL.

Alexander, A.L., & Wickens, C.D. (2005). Synthetic vision systems: Flightpath tracking, situation awareness, and visual scanning in an integrated hazard display. Proceedings of the 13th International Symposium on Aviation Psychology, Oklahoma City, OK.

Beard, R., Kingston, D., Quigley, M., Snyder, D., Christiansen, R., Johnson, W., McLain, T., & Goodrich, M.A. (2005). Autonomous vehicle technologies for small fixed-wing UAVs. Journal of Aerospace Computing, Information, and Communication, 2, 95–108.

Bolstad, C.A., Riley, J.M., Jones, D.G., & Endsley, M.R. (2002). Using goal directed task analysis with Army brigade officer teams. Proceedings of the Human Factors and Ergonomics Society 46th Annual Meeting, Baltimore, MD.

Bourgault, F., Furukawa, T., & Durrant-Whyte, H.F. (2003). Coordinated decentralized search for a lost target in a Bayesian world. Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems, Las Vegas, NV.

Burke, J.L., & Murphy, R.R. (2004). Human-robot interaction in USAR technical search: Two heads are better than one. Proceedings of the 13th IEEE International Workshop on Robot and Human Interactive Communication (RO-MAN), Kurashiki, Okayama, Japan.

Calhoun, G.L., Draper, M.H., Abernathy, M.F., Patzek, M., & Delgado, F. (2005). Synthetic vision system for improving unmanned aerial vehicle operator situation awareness. In J.G. Verly (Ed.), Enhanced and Synthetic Vision 2005, Proceedings of SPIE Vol. 5802.

Casper, J., & Murphy, R.R. (2003). Human-robot interactions during the robot-assisted urban search and rescue response at the World Trade Center. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 33(3), 367–385.

Cooke, N.J., Pringle, H., Pederson, H., & Connor, O. (Eds.) (2006). Human Factors of Remotely Operated Vehicles. Volume 7 of Advances in Human Performance and Cognitive Engineering Research. Elsevier.

Cummings, M.L. (2003). Designing decision support systems for revolutionary command and control domains. Ph.D. dissertation, University of Virginia.

Drury, J.L., Richer, J., Rackliffe, N., & Goodrich, M.A. (2006). Comparing situation awareness for two unmanned aerial vehicle human interface approaches. Proceedings of the IEEE International Workshop on Safety, Security and Rescue Robotics (SSRR).

Drury, J.L., Scholtz, J., & Yanco, H.A. (2003). Awareness in human-robot interactions. Proceedings of the IEEE International Conference on Systems, Man and Cybernetics.

Endsley, M., Bolte, B., & Jones, D. (2003). Designing for Situation Awareness: An Approach to User-Centered Design. London and New York: Taylor and Francis.

Fischetti, M., Gonzalez, J.J.S., & Toth, P. (1998). Solving the orienteering problem through branch-and-cut. INFORMS Journal on Computing, 10(2), 133–148.

Gerhardt, D.D. (2007). Feature-based unmanned air vehicle video Euclidean stabilization with local mosaics. M.S. thesis, Brigham Young University, Provo, UT.

Goetzendorf-Grabowski, T., Frydrychewicz, A., Goraj, Z., & Suchodolski, S. (2006). MALE UAV design of an increased reliability level. Aircraft Engineering and Aerospace Technology, 78(3), 226–235.

Goodrich, M.A., Cooper, L., Adams, J.A., Humphrey, C., Zeeman, R., & Buss, B.G. (2007a). Using a mini-UAV to support wilderness search and rescue: Practices for human-robot teaming. Proceedings of the IEEE International Workshop on Safety, Security, and Rescue Robotics.

Goodrich, M.A., Quigley, M., Adams, J.A., Morse, B.S., Cooper, J.L., Gerhardt, D., Buss, B.G., & Humphrey, C. (2007b). Camera-equipped mini UAVs for wilderness search support: Tasks, autonomy, and interfaces (Tech. Rep. BYUHCMI TR 2007-1). Provo, UT: Brigham Young University.

Hancock, P.A., Mouloua, M., Gilson, R., Szalma, J., & Oron-Gilad, T. (2006). Is the UAV control ratio the right question? Ergonomics in Design.

Hansen, S., McLain, T., & Goodrich, M. (2007). Probabilistic searching using a small unmanned aerial vehicle. Proceedings of AIAA Infotech@Aerospace.

HFUAV (2004). Manning the unmanned. 2004 Human Factors of UAVs Workshop, Chandler, AZ. Cognitive Engineering Research Institute.

Hill, K.A. (1998). The psychology of lost. In Lost Person Behavior. Ottawa, Canada: National SAR Secretariat.

Kaber, D.B., & Endsley, M.R. (2004). The effects of level of automation and adaptive automation on human performance, situation awareness and workload in a dynamic control task. Theoretical Issues in Ergonomics Science, 5(2), 113–153.

Koopman, B.O. (1980). Search and Screening: General Principles with Historical Applications. Pergamon Press. (Reprinted in its entirety in 1999 by the Military Operations Research Society, Inc.)

Kumar, R., Sawhney, H., Samarasekera, S., Hsu, S., Tao, H., Guo, Y., Hanna, K., Pope, A., Wildes, R., Hirvonen, D., Hansen, M., & Burt, P. (2001). Aerial video surveillance and exploitation. Proceedings of the IEEE, 89(10), 1518–1539.

Marsden, J.E., & McCracken, M. (1976). The Hopf bifurcation and its applications. New York: Springer-Verlag.

Miller, C.A., Funk, H.B., Dorneich, M., & Whitlow, S.D. (2002). A playbook interface for mixed initiative control of multiple unmanned vehicle teams. Proceedings of the 21st Digital Avionics Systems Conference, Vol. 2, 7E4-1–7E4-13.

Mitchell, P.M., & Cummings, M.L. (2005). Management of multiple dynamic human supervisory control tasks. 10th International Command and Control Research and Technology Symposium.

Murphy, R., Stover, S., Pratt, K., & Griffin, C. (2006). Cooperative damage inspection with unmanned surface vehicle and micro unmanned aerial vehicle at Hurricane Wilma. IROS 2006 Video Session.

Nelson, D.R., Barber, D.B., McLain, T.W., & Beard, R. (2006). Vector field path following for small unmanned air vehicles. Proceedings of the American Control Conference.

Norman, D.A. (2005). Human-centered design considered harmful. Interactions, 12(4), 14–19.

Olsen, D.R., Jr., & Wood, S.B. (2004). Fan-out: Measuring human control of multiple robots. Proceedings of Human Factors in Computing Systems.

Parasuraman, R., Sheridan, T.B., & Wickens, C.D. (2000). A model for types and levels of human interaction with automation. IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans, 30(3), 286–297.

Prinzel, L.J., III, Comstock, J.R., Jr., Glaab, L.J., Kramer, L.J., & Arthur, J.J. (2004). The efficacy of head-down and head-up synthetic vision display concepts for retro- and forward-fit of commercial aircraft. The International Journal of Aviation Psychology, 14(1), 53–77.

Quigley, M., Barber, B., Griffiths, S., & Goodrich, M.A. (2006). Towards real-world searching with fixed wing mini-UAVs. Proceedings of the 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems.

Quigley, M., Goodrich, M.A., & Beard, R.W. (2004). Semi-autonomous human-UAV interfaces for fixed-wing mini-UAVs. Proceedings of the International Conference on Intelligent Robots and Systems.

Quigley, M., Goodrich, M.A., Griffiths, S., Eldredge, A., & Beard, R.W. (2005). Target acquisition, localization, and surveillance using a fixed-wing mini-UAV and gimbaled camera. Proceedings of the IEEE International Conference on Robotics and Automation.

Redding, J., McLain, T.W., Beard, R.W., & Taylor, C. (2006). Vision-based target localization from a fixed-wing miniature air vehicle. Proceedings of the American Control Conference.

Segall, N., Green, R.S., & Kaber, D.B. (2006). User, robot and automation evaluations in high-throughput biological screening processes. Proceedings of the ACM SIGCHI/SIGART (IEEE RAS) Conference on Human-Robot Interaction.

Setnicka, T.J. (1980). Wilderness Search and Rescue. Boston, MA: Appalachian Mountain Club.

Sheridan, T.B. (1992). Telerobotics, automation, and human supervisory control. Cambridge, MA: MIT Press.

Sheridan, T.B., & Verplank, W.L. (1978). Human and computer control of undersea teleoperators (Tech. Rep.). MIT Man-Machine Systems Laboratory.

Syrotuck, W.G. (2000). An Introduction to Land Search: Probabilities and Calculations. Mechanicsburg, PA: Barkleigh Productions.

Tao, K.S., Tharp, G.K., Zhang, W., & Tai, A.T. (1999). A multi-agent operator interface for unmanned aerial vehicles. Proceedings of the 18th Digital Avionics Systems Conference, St. Louis, MO.

Thrun, S., Burgard, W., & Fox, D. (2005). Probabilistic robotics. Cambridge, MA: MIT Press.

Utah County (2007). History of Utah County Search and Rescue. (Volunteers are on call "24 hours a day, 365 days a year" and must invest over $2,500 in their own gear.)

Vicente, K.J. (1999). Cognitive Work Analysis: Toward Safe, Productive and Healthy Computer-Based Work. Lawrence Erlbaum Associates.

Whalley, M., Freed, M., Takahashi, M., Christian, D., Patterson-Hine, A., Schulein, G., & Harris, R. (2003). The NASA/Army Autonomous Rotorcraft Project. American Helicopter Society 59th Annual Forum.

Wickens, C.D., Olmos, O., Chudy, A., & Davenport, C. (1995). Aviation display support for situation awareness (Tech. Rep. ARL-97-10/LOGICON-97-2). Aviation Research Lab, University of Illinois at Urbana-Champaign.

Woods, D.D., Tittle, J., Feil, M., & Roesler, A. (2004). Envisioning human-robot coordination in future operations. IEEE Transactions on Systems, Man, and Cybernetics, Part A, 34(6), 749–756.
