
UAV Quadcopter Landing on a Displaced Target
Technical Report #CSSE14-05

Jeremiah Jeffrey*, Andrey Yanev†, Dr. Saad Biaz‡, and Dr. Chase Murray§

*University of Wisconsin - Parkside, Kenosha, Wisconsin
Email: jeremiah [email protected]

†Marshall University, Huntington, West Virginia
Email: andrej [email protected]

‡Auburn University, Auburn, Alabama
Email: [email protected]

§Auburn University, Auburn, Alabama
Email: [email protected]

Abstract

Unmanned Aerial Vehicles (UAVs) are becoming more popular each year, and the applications they can be used for continue to grow as well. One issue faced by UAV quadcopters at the moment is autonomously landing accurately at a home base location that may have changed from its starting position. The solution proposed to this problem is to use a GPS signal that gives the quadcopter the approximate location of the home base during its descent, and then an optical flow sensor to determine the exact location where landing should occur.

Figure 1. The 3DR Robotics IRIS

1. Introduction

In an unmanned aircraft system (UAS) there is more than just the UAV itself. A UAS consists of everything that assists in the operation of the UAV or works with the UAV. This may include the ground control station, satellites used with the GPS, and air traffic control. The UAS used for this research consists of a 3DR IRIS, shown in Figure 1, which is equipped with a GPS, an XBee Pro S2B radio module, and the Pixhawk autopilot system. Additionally, there is a ground control station, and there is a ground vehicle that acts as the UAV quadcopter's home base. The ground control station used is a product of Auburn University's ATTRACT program and is responsible for communicating with the UAV once it is in the air. It receives updates on the UAV's location, altitude, and its next target location through radio telemetry. Lastly, the ground vehicle is equipped with a GPS and a radio module, which allows it to broadcast its location to the ground control station. The ground control station then updates the UAV quadcopter on the location of the ground vehicle. With this UAS in place and the advancement of autonomous technology, the UAV quadcopter is capable of autonomous takeoff, flight, and landing.

The autonomous operation of the UAV can be broken into three phases: takeoff, the task to be completed, and landing. The research presented in this paper deals with the landing phase of the mission, with the only constraint being that the UAV must land at the home base from which it started once it has successfully completed its mission. There are a few options for landing: return to the launch position, land at the current location, or land at a specified location. Landing at the current location may not be the best choice because it may not be safe due to the terrain below the UAV. Landing at the launch position is a viable solution, but one has to assume that the home base the UAV started from has not changed position, which is not optimal. Therefore, the best option is to have the UAV land at a specified location because this allows for more flexibility. The home base that the UAV started from would then be free to change position, and when the UAV's task is complete, the home base could broadcast its location to the UAV. The UAV would then head towards this specified location so that it can land safely at the new position of the home base.

The topic of autonomously landing a UAV at a certain location has become of interest to an increasing number of people. Various approaches have been researched, and one of the most popular to date is vision-based landing. The task seems well suited to vision-based landing, since the landing location is at an unknown position. There are different implementations of vision-based landing, some using sonar and GPS only for a height measurement, while others use an active vision system to find the landing location. In this paper, an approach for autonomously landing a UAV quadcopter using an optical flow sensor will be discussed in further detail.

2. Problem Description

The problem addressed is the autonomous landing of IRIS onto a ground vehicle that acts as the UAV's home base. Once IRIS takes off to complete its mission, the ground vehicle is allowed to move anywhere. After the mission has been completed, IRIS is expected to land on the ground vehicle regardless of whether it is moving or not. In order to achieve this, intermediate goals have been set, such as getting IRIS to successfully communicate with the ground control station, integrating IRIS and the ground vehicle into the ground control station with their own icons, and having IRIS fly a guided mission with waypoints sent to it by the ground control station.

IRIS needs to be able to send and receive commands at any time during its flight, take off to a certain altitude, and successfully visit waypoints in order to complete its mission. After a successful mission, the UAV will initialize the landing process. The IRIS UAV has the capability to land by returning to the launch position, landing at its current location, or landing at a specified location. The goal for this project is to have the UAV successfully land at a specified location, which will be the ground vehicle.

The landing procedure can be described as a two-stage process: the first part is the descent to a certain altitude, and the second part is finding the exact location where the landing should take place. The main goal is to optimize the second part so the UAV quadcopter will be able to land as close as possible to the home base's location at that time. In order to do that, we will incorporate vision-based landing techniques: once the UAV is at a certain altitude, marking the end of its descent phase, it will start looking for a pattern that is associated with the home base. Once the pattern is found, the landing procedure will be finalized, and the UAV will land at the specified location.

The toughest task in the second part of the landing process will be achieving good accuracy when landing at the home base. Since the UAV will look for a pattern, it is important that the descent phase gets the quadcopter as close to the home base as possible so that finding the pattern is possible. Using an optical flow sensor, the process of finding the pattern will be made easier, and the accuracy of hitting the targeted landing position will be higher.

Our research is focused on landing the UAV at a displaced home base, meaning that the UAV will take off from the home base and then carry out its mission. Meanwhile, the home base will move to a different location, and once the UAV quadcopter finishes its mission, it will return to the home base at its new location. In order to successfully accomplish this goal, the home base will need to be equipped with a GPS and an XBee in order to broadcast its location at all times to the ground control station. The ground control station can then send the location of the home base to the UAV quadcopter once its mission is completed, and then the UAV can make its way to that location and start the landing procedure.

3. Literature Review

The rising popularity of UAVs has led to an increased interest in successful autonomous landing. A few papers have been published with different ideas, thoughts, and implementations of strategies to accomplish landing. In this section, we will present the different approaches that have been taken to achieve autonomous landing.

3.1. Fuzzy Control Loop

One approach that has been taken to the autonomous landing process is that of integrating a segmented Fuzzy Logic Controller with a sensor. Since landing an aircraft without a pilot could be considered one of the more dangerous parts of the mission, a number of safety precautions need to be taken into account. In this approach, a sensor is chosen for docking, and the authors chose the Xbox Kinect as the right sensor for their experiment [1]. The control loop starts with a seeking function, then calculates a proper landing position after comparison with waypoints. The corrections needed so the UAV lands in that position are then calculated, and finally the command is issued to the UAV.

In order to manage uncertainty and imprecision, the authors of the paper used a Fuzzy Logic Controller (see Figure 2), which can be implemented successfully to manage complex systems. The advantage of using fuzzy systems is the possibility of using approximation in cases where analytical functions are not very useful or do not exist at all. A supervised Fuzzy Logic Controller system gives the possibility of closely monitoring every step, and even switching the mode given a certain context. The fact that fuzzy logic can be fault tolerant is especially useful in environments where the degree of uncertainty is high, such as when autonomously landing. All constraints and factors are calculated and taken into account by the Fuzzy Logic Controller before making the final decision, which makes this controller a good choice for unknown, dangerous environments. Research on this project has not been finished, and future research topics will focus on vision systems, collision avoidance, and acoustic systems, among others.

Figure 2. Fuzzy Logic Control Agent
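As a generic illustration of the kind of inference such a controller performs, the sketch below implements a tiny fuzzy rule base in Python. It is not the controller from [1]; the membership functions, rules, and output values are invented purely for illustration.

# Minimal fuzzy-inference sketch (illustrative only; the membership
# functions and rules are invented, not those of the controller in [1]).

def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def descent_rate(altitude_m, target_error_m):
    """Fuzzy rules mapping altitude and horizontal error to a descent rate (m/s)."""
    low_alt    = tri(altitude_m, 0.0, 0.0, 3.0)      # "altitude is low"
    high_alt   = tri(altitude_m, 2.0, 10.0, 10.0)    # "altitude is high"
    on_target  = tri(target_error_m, 0.0, 0.0, 1.0)  # "error is small"
    off_target = tri(target_error_m, 0.5, 5.0, 5.0)  # "error is large"

    # Rule strengths (min for AND), each paired with a crisp output.
    rules = [
        (min(low_alt, on_target), 0.2),   # low and on target  -> descend slowly
        (min(high_alt, on_target), 1.0),  # high and on target -> descend quickly
        (off_target, 0.0),                # off target         -> hold altitude
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total > 0 else 0.0

print(descent_rate(altitude_m=6.0, target_error_m=0.3))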

3.2. Vision-based Landing

The most popular approach for landing a UAV safely is by far vision-based landing. In the last few years, researchers have proposed a few different strategies in order to achieve landing. Vision-based landing has been a popular strategy: it has been implemented by itself, with vision-based sensors only, and also in combination with other sensors, such as GPS or sonar. We will discuss vision-based landing only in this section.

In [2], vision-based landing is achieved by measuring height and vertical velocity. A binocular 3D vision system is used, which measures the range without having to look for a predefined landmark. The height measurement is derived from the range map, and the vertical velocity is estimated by fusing the height measurement with the acceleration measurement. A two-stage landing process is used (see Figure 3), which can be broken down into a descending phase and a landing phase. The descending phase is where the UAV's altitude gets below a certain height, and the landing phase is the process in which the UAV descends from the height at which the descending phase finished and lands safely on the ground. The dynamics of the landing phase differ from those of the descending phase, as the UAV flies near the ground where a ground effect exists. As soon as landing is achieved, the motors turn off in order to make the landing as safe as possible. The advantage of using a two-stage landing process is the flexibility each controller has during the separate stages. Since the two stages are distinct, a separate controller is used for each stage, and a coordinator gives control to each controller whenever necessary.

Figure 3. Two-Stage Landing
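One common way to fuse a height measurement with an acceleration measurement into a vertical-velocity estimate is a complementary filter. The sketch below is a generic illustration of that idea, not the estimator from [2]; the gain and sample period are assumed values.

# Complementary-filter sketch: fuse height (e.g. from a range map) with
# vertical acceleration (from an IMU) to estimate vertical velocity.
# The gain and sample period are illustrative assumptions, not values from [2].

def make_velocity_estimator(dt=0.02, gain=0.1):
    state = {"v": 0.0, "h_prev": None}

    def update(height_m, accel_z_mss):
        # Predict: integrate the measured vertical acceleration.
        state["v"] += accel_z_mss * dt
        if state["h_prev"] is not None:
            # Correct: nudge toward the velocity implied by successive heights.
            v_from_height = (height_m - state["h_prev"]) / dt
            state["v"] += gain * (v_from_height - state["v"])
        state["h_prev"] = height_m
        return state["v"]

    return update

estimate = make_velocity_estimator()
for h, a in [(10.0, 0.0), (9.98, -0.5), (9.95, -0.5)]:
    print(estimate(h, a))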

In [3], an onboard monocular vision system is used not only for landing, but for autonomous takeoff and hovering as well. In order to achieve that, six degrees of freedom (DOF) pose estimation is used, which is based on a single image of a landing pad consisting of the letter "H" surrounded by a circle. Each camera image is binarized, and connected components are extracted based on the two-scan labeling algorithm. Each of those connected components is then classified using an artificial neural network. Once all of the connected components are classified by the neural network, false positives are suppressed by disregarding those "H"s that are not surrounded by a circle. The general case that the perspective projection of a circle is an ellipse is then considered, and accurate ellipse fitting is critical for later computations. The effect of lens distortion is then corrected, and the ellipse is obtained by geometric calculation. Roll, pitch, and yaw angles are then calculated from the letter "H". After all calculations are done, comparisons are made until the match between the given image and what the UAV sees is identical, and then the landing process is started. The algorithm used in this paper can process up to 60 frames per second, and the pose estimates have proven to be accurate, landing the UAV successfully over the landing pad. The algorithm can reliably land a completely autonomous UAV on the landing pad even in challenging environments.
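To illustrate only the front end of such a pipeline (binarization followed by connected-component extraction), a minimal OpenCV sketch is shown below. The neural-network classification and ellipse-fitting stages from [3] are omitted, and the threshold choice and minimum area are illustrative assumptions.

# Front end of a landing-pad detector in the spirit of [3]: binarize the frame
# and extract connected components.  The neural-network classification and
# ellipse fitting are omitted; thresholding choices are illustrative.
import cv2
import numpy as np

def candidate_components(frame_bgr, min_area=200):
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Otsu thresholding copes with varying lighting; assumes a bright marker.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
    candidates = []
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            candidates.append((centroids[i], stats[i]))
    return candidates

# Example on a synthetic frame containing one bright blob.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (320, 240), 40, (255, 255, 255), -1)
print(len(candidate_components(frame)))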

3.3. Optical Flow

Optical flow, in a computing sense, is the apparent motion of objects based upon the brightness of different areas or pixels of an object in an image. Given two successive images of an object that have been taken within a short period of time, a computer can detect movement and the direction of the movement by comparing the two images. The computer compares these images by checking whether blocks of pixels have changed brightness or not, and it tries to detect in which direction the pixels have moved.

There are problems with trying to detect movement in this way. First, the images that are being compared must be taken within a very small time frame. If they are not, there could be too many changes, and the computer will be less likely to detect what direction the movement is in or how much movement has occurred. Secondly, since optical flow is a measure of apparent motion, a simple change in lighting would be mistaken by the computer for motion. There could also be issues if the texture of the images is very flat and homogeneous. If the image does not have much texture, it can be hard to determine whether any movement has occurred because all of the pixels in the image look the same. Lastly, the amount of lighting in the environment could be an issue in computing optical flow. If there is not enough light, then it is harder to detect brightness differences, and this will increase the amount of error in calculating optical flow.

Measuring optical flow has many different applications. A common use is in optical computer mice, which take thousands of pictures per second, compare successive images, and tell the computer how much the mouse has moved. Optical flow has also been extensively researched for obstacle avoidance, object tracking, and motion detection, which arguably makes optical flow one of the most important topics in the field of computer vision. One way that vision-based landing could be implemented is through an optical flow sensor. This sensor takes multiple pictures per second and then performs an image processing algorithm, such as the Lucas-Kanade method or the sum of absolute differences method, to determine whether there has been any movement. This sensor has mainly been used to keep a UAV steady in one spot by calculating the optical flow between successive frames and attempting to keep these optical flow measurements as close to zero as possible. In combination with an ultrasonic range sensor, this could be adapted to accurately land a UAV at a location of choice. The ultrasonic range sensor would be used to tell how far the UAV is from the ground, and the optical flow sensor would make sure that it stays above its landing target. This is what would make the PX4Flow sensor a great choice for autonomous landing.
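To make the block-matching idea concrete, the following minimal sketch estimates the shift between two successive grayscale frames using the sum of absolute differences. The block size and search radius are illustrative choices, and this is not the PX4Flow's actual firmware.

# Sum-of-absolute-differences block matching between two successive frames.
# Block size and search radius are illustrative; this is not PX4Flow firmware.
import numpy as np

def sad_flow(prev, curr, block=16, radius=4):
    """Return (dx, dy) minimizing the SAD for the central block of `prev`."""
    h, w = prev.shape
    y0, x0 = (h - block) // 2, (w - block) // 2
    ref = prev[y0:y0 + block, x0:x0 + block].astype(np.int32)
    best, best_dx, best_dy = None, 0, 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = curr[y0 + dy:y0 + dy + block, x0 + dx:x0 + dx + block].astype(np.int32)
            sad = np.abs(ref - cand).sum()
            if best is None or sad < best:
                best, best_dx, best_dy = sad, dx, dy
    return best_dx, best_dy

# Two synthetic 64x64 frames; the second is the first shifted right by 2 pixels.
rng = np.random.default_rng(0)
frame1 = rng.integers(0, 256, (64, 64), dtype=np.uint8)
frame2 = np.roll(frame1, shift=2, axis=1)
print(sad_flow(frame1, frame2))  # expected (2, 0)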

The PX4Flow, shown in Figure 4, combines an optical sensor with an ultrasonic range sensor and can capture up to 250 pictures per second, each with a resolution of 64x64 pixels. Because it uses a light-sensitive CMOS image sensor, it can operate in lower-light environments, and because the optical flow is computed onboard the sensor itself rather than on a ground-based computer, the calculated velocity and position values are available directly to the UAV. This allows for operation both indoors and outdoors, and allows for accurate velocity and position values. In flight tests, a UAV was flown at an altitude of 0.6 m along a square trajectory totaling 28.44 m in length, and it was able to come within 0.25 m of its starting position at the end, which is acceptable for our project application [4].

Figure 4. PX4Flow Sensor
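Since the PX4Flow reports its measurements over MAVLink as OPTICAL_FLOW messages, a ground station or companion script could read them roughly as in the sketch below; the serial port and baud rate are assumptions for illustration.

# Read PX4Flow measurements published as MAVLink OPTICAL_FLOW messages.
# The serial port and baud rate below are assumptions for illustration.
from pymavlink import mavutil

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
conn.wait_heartbeat()

while True:
    msg = conn.recv_match(type='OPTICAL_FLOW', blocking=True, timeout=5)
    if msg is None:
        continue
    # flow_comp_m_x/y are the flow in m/s compensated for angular rate;
    # ground_distance is the ultrasonic range measurement in meters.
    print(msg.flow_comp_m_x, msg.flow_comp_m_y, msg.quality, msg.ground_distance)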

4. Summary of Work

Landing a quadcopter on a displaced target was the goal at the beginning of this research. Auburn University's REU program had so far focused on flying planes autonomously, and IRIS was the first quadcopter to be added to the Auburn Ground Control Station. Laying the foundations of a new project is always exciting, but at the same time challenges and problems have to be faced in order to make progress and achieve success. In this section, there will be a discussion of what tasks were achieved successfully, what problems had to be overcome in order to achieve those tasks, and some of the research that was conducted.

The first task was to swap the radio modules. Since IRIS came with the default 3DR Radio, it needed to be replaced with an XBee radio module, because that is what the Auburn Ground Control Station uses for communication with the UAVs. The original radio module which 3DR Robotics used was connected to the Pixhawk (the autopilot system that IRIS uses) with a 6-position to 6-position cable, but a 6-position to 5-position cable was needed in order to connect our XBee to the Pixhawk. A robust connection between the XBee and the Pixhawk was made by finding a 6-position to 6-position cable that had the 6-position housing on both ends, removing one of the housings, and replacing it with the crimp connector housing that connects into the XBee. Pins were crimped onto the cables and then connected into the crimp connector housing. Upon plugging the wire into the XBee and Pixhawk, it was discovered that a good connection had been made, and IRIS was able to send and receive packets successfully.

Swapping the radio modules succeeded only after discovering that some ideas would not work. No suitable cable existed on the Internet, which meant one had to be created. At first, an extra cable that had been lying around the lab, previously used for connecting the APM autopilot system to the planes, was used to connect the XBee to the Pixhawk by taking the 5-position housing off and replacing it with a 6-position one. After creating this cable, it was discovered that the connection was not very strong, and transmission of data would stop at random times, which meant that a more secure wire was needed.

The next task that was achieved successfully was integrating the quadcopter with the Auburn Ground Control Station in order to have the ability to send messages back and forth through the XBee radio module. The custom Auburn message was added to the list of MAVLink messages, which is how communication between the quadcopter and the ground control station occurs. The custom Auburn message was programmed to be sent by the quadcopter every second, with useful information such as location, distance to the next waypoint, speed, etc., which could then be used by the ground control station for any necessary calculations and statistics. Testing in the field ensured that IRIS was able to both send messages to the ground control station and receive messages from it, which meant that two-way communication was successfully achieved.
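On the ground control station side, receiving such a periodic message with pymavlink might look like the sketch below. The message name AUBURN_TELEMETRY and its fields are hypothetical stand-ins, since the actual Auburn message definition is not given in this report; the message is assumed to be defined in the dialect XML with the Python bindings regenerated, and the connection string is also an assumption.

# Ground-station-side sketch: listen for the quadcopter's 1 Hz telemetry.
# 'AUBURN_TELEMETRY' and its fields are hypothetical stand-ins for the custom
# Auburn MAVLink message; the connection string is also an assumption.
from pymavlink import mavutil

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
conn.wait_heartbeat()

while True:
    msg = conn.recv_match(type='AUBURN_TELEMETRY', blocking=True, timeout=2)
    if msg is None:
        continue  # no telemetry received in the last two seconds
    print(msg.lat, msg.lon, msg.alt, msg.dist_to_next_wp, msg.speed)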

After communication with IRIS was possible, the next task was to fly a mission with a few waypoints. This task was achieved by giving the quadcopter a few guided waypoints, toward which IRIS would fly one at a time. When IRIS came within 30 m of its current waypoint, the waypoint was considered reached, and the quadcopter would then move toward its next waypoint, if there was one. Changes were made to the ArduCopter code IRIS uses to ensure that IRIS does not turn toward its new waypoint but rather flies directly to it, which improved speed and performance. Flying a mission in guided mode also meant that collision avoidance was possible, and the Auburn Ground Control Station would send the quadcopter an avoidance waypoint if a possible collision with another UAV was detected.
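The waypoint-acceptance logic amounts to a simple distance threshold. A minimal sketch of that check is shown below; the 30 m threshold is the one described above, while the helper function is written here for illustration rather than taken from the ArduCopter code.

# Waypoint-acceptance check: a waypoint counts as reached when the vehicle is
# within a distance threshold (30 m in the original setup, later 1 m when
# following the ground vehicle).  Helper written here for illustration.
import math

EARTH_RADIUS_M = 6371000.0

def ground_distance_m(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two lat/lon points in meters."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def waypoint_reached(vehicle_pos, waypoint, threshold_m=30.0):
    return ground_distance_m(*vehicle_pos, *waypoint) <= threshold_m

print(waypoint_reached((32.6099, -85.4808), (32.6100, -85.4808)))  # ~11 m apart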

The Auburn Ground Control Station was originally designed for planes, and IRIS was the first quadcopter incorporated into the system. This meant that a new simulation icon was needed for IRIS, so the difference between quadcopters and planes could be seen easily when watching the ground control station flight paths. A certain range of IDs was assigned to quadcopters, and a custom quadcopter icon was made specifically for differentiating quadcopters and planes in the software. Testing provided satisfactory results, and the new icon is used by all quadcopters in the software, both real and simulated.

Since IRIS behaves differently in the air than a plane does, collision avoidance had to be tested in order to see how the quadcopter would perform when sent a collision avoidance waypoint. It was noticed that initially IRIS was much slower than the simulated planes at avoiding, and oftentimes the plane would have visited its collision avoidance waypoint by the time IRIS started moving toward its own. In part, this issue had to do with the way IRIS was programmed: it would always look at its waypoint, which meant that the quadcopter had to turn toward its waypoint before actually moving there. Additionally, the default acceleration that IRIS used when moving toward horizontal waypoints was set to a slow 1 m/s². Both of those factors had an effect on how slowly IRIS was performing when sent a collision avoidance waypoint, so they were changed. The horizontal acceleration was changed to its maximum of 5 m/s², and the quadcopter no longer turns toward its waypoint. Changing both of those parameters led to IRIS performing much better in collision avoidance tests and moving toward the collision avoidance waypoint at a much higher rate.
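In ArduCopter these behaviors are exposed as tunable parameters, so one way to make such changes from a ground station is to set them over MAVLink, as in the sketch below. The parameter names (WPNAV_ACCEL, given in cm/s², and WP_YAW_BEHAVIOR) and the connection string are assumptions for illustration, not quoted from this report.

# Sketch: raise the horizontal waypoint acceleration and stop the copter from
# yawing toward its waypoint by setting ArduCopter parameters over MAVLink.
# The parameter names (WPNAV_ACCEL in cm/s^2, WP_YAW_BEHAVIOR) and the
# connection string are assumptions for illustration.
from pymavlink import mavutil

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
conn.wait_heartbeat()

def set_param(name, value):
    conn.mav.param_set_send(
        conn.target_system, conn.target_component,
        name.encode('ascii'), float(value),
        mavutil.mavlink.MAV_PARAM_TYPE_REAL32)

set_param('WPNAV_ACCEL', 500)     # 500 cm/s^2 = 5 m/s^2 horizontal acceleration
set_param('WP_YAW_BEHAVIOR', 0)   # 0 = never change yaw toward the waypoint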

Research was conducted on landing using the PX4Flow sensor mentioned earlier in this paper, with the ultimate goal of IRIS landing on top of a ground vehicle. The idea was to have an image of the ground vehicle, and the optical flow sensor would be used to compare the images taken in the air with the ground vehicle image in order to decide when landing should occur. Initial research looked good and the idea seemed feasible; however, the PX4Flow sensor is not compatible with the Pixhawk at this time, which made the task infeasible. A new plan regarding the quadcopter and the ground vehicle was adopted, and it is described in the next paragraph.

Upon completing a mission, the planes would simply circle around the last waypoint. However, this is not optimal, so a decision was made to implement a new feature for IRIS. After the quadcopter finished its mission and hit the last waypoint, it would receive the coordinates of a ground vehicle and fly to it, as well as follow the ground vehicle around if it moved anywhere. A ground vehicle icon was added to the Auburn Ground Control Station flight simulation software, and ground vehicles were assigned a range of IDs as well, making them the third type of vehicle the system supports, after planes and quadcopters (see Figure 5). In this implementation, the coordinates of the ground vehicle are sent to IRIS anytime they have changed, and the waypoint threshold for IRIS was changed from 30 m to 1 m, meaning that it follows the ground vehicle closely from a certain altitude whenever the ground vehicle moves. All of this is easily observed by watching the Ground Control Station. A sketch of this follow behavior is shown below.
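The follow behavior can be summarized as re-sending a guided position target whenever the ground vehicle's reported position changes. The sketch below assumes a pymavlink connection to the quadcopter; the ground-vehicle position feed is a stand-in list (in the real system it arrives over XBee via the ground control station), and the connection string, follow altitude, and 1 m movement threshold are illustrative.

# Sketch: follow the ground vehicle by re-sending a guided position target
# whenever its reported position moves more than a small threshold.
# The ground-vehicle feed below is a stand-in; the connection string, follow
# altitude, and threshold are illustrative assumptions.
import math, time
from pymavlink import mavutil

def ground_distance_m(lat1, lon1, lat2, lon2):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000.0 * math.asin(math.sqrt(a))

def send_guided_target(conn, lat, lon, alt_m):
    # Position-only target in the global frame, altitude relative to home.
    conn.mav.set_position_target_global_int_send(
        0, conn.target_system, conn.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        0b0000111111111000,            # use position fields only
        int(lat * 1e7), int(lon * 1e7), alt_m,
        0, 0, 0, 0, 0, 0, 0, 0)

def follow(conn, vehicle_positions, follow_alt_m=10.0, move_threshold_m=1.0):
    last_sent = None
    for lat, lon in vehicle_positions:      # stand-in for the live XBee feed
        if last_sent is None or ground_distance_m(*last_sent, lat, lon) > move_threshold_m:
            send_guided_target(conn, lat, lon, follow_alt_m)
            last_sent = (lat, lon)
        time.sleep(1.0)

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
conn.wait_heartbeat()
follow(conn, [(32.6099, -85.4808), (32.6100, -85.4807)])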

Figure 5. Quadcopter and Ground Vehicle in the Auburn Ground Control Station Flight Simulator

After ensuring the quadcopter could go to waypoints, research was done on automatic takeoff and landing and on the best way to implement those commands so IRIS could fly a completely autonomous mission. The automatic takeoff and landing implementation in the IRIS software is meant for Auto mode only. Furthermore, the entire mission needs to be loaded onto the Pixhawk, and then once the quadcopter is in the air and the Auto mode switch is flipped, IRIS goes through that mission. However, since Auburn's Ground Control Station puts heavy emphasis on collision avoidance, that scenario would not be optimal. The goal was to pass those commands on the fly: have IRIS take off, then change to guided waypoints, and finally send IRIS the automatic land command, which would bring the quadcopter to a specified location. Tests that were conducted showed that if a mission item regarding automatic takeoff was passed to IRIS, it was not processed as a command, so IRIS did not take off by itself and seemed to ignore the command altogether.

Because of this, the focus of the automatic takeoff and landing research shifted to taking off in Auto mode, changing the mode to Guided, visiting the specified waypoints, and then going back to Auto mode for the landing. However, after looking through the code and realizing how many changes would be needed to make that happen, it was decided that it would take a lot longer than expected. The automatic takeoff and landing commands would have to be modified to work with Guided mode, which would require rewriting a few functions in different files of the ArduCopter code. This task was deemed infeasible for this summer, but the research conducted can be used in the future to successfully implement automatic takeoff and landing commands from the Auburn Ground Control Station to IRIS. Since the task would not be completed, efforts were focused on IRIS following a ground vehicle, which was described above.

A naïve approach to automatic takeoff would be to place a few waypoints at the GPS takeoff location that continuously increase the altitude until it is at the desired level. Giving IRIS a few guided waypoints that continuously rise in altitude would be a workaround for the problem. It is worth noting that the current location needs to be known in order to do that, which is not optimal. Furthermore, that would not work well if the quadcopter has to land at a displaced target. It would be best if the automatic takeoff and landing commands were changed in such a way that the Auburn Ground Control Station could send those commands while the quadcopter is in guided mode.
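A minimal sketch of that workaround is given below, assuming the copter is already armed and in Guided mode and that a pymavlink connection is available; the altitude steps, timing, and connection string are illustrative assumptions.

# Naive guided "takeoff" workaround: step the guided position target upward at
# the current GPS location.  Assumes the copter is already armed and in Guided
# mode; altitude steps, timing, and the connection string are assumptions.
import time
from pymavlink import mavutil

def send_guided_target(conn, lat, lon, alt_m):
    conn.mav.set_position_target_global_int_send(
        0, conn.target_system, conn.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        0b0000111111111000,            # use position fields only
        int(lat * 1e7), int(lon * 1e7), alt_m,
        0, 0, 0, 0, 0, 0, 0, 0)

conn = mavutil.mavlink_connection('/dev/ttyUSB0', baud=57600)
conn.wait_heartbeat()

# The current location must already be known (here read from GLOBAL_POSITION_INT).
pos = conn.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
lat, lon = pos.lat / 1e7, pos.lon / 1e7

for alt in (2.0, 5.0, 10.0):           # climb in stages to the desired altitude
    send_guided_target(conn, lat, lon, alt)
    time.sleep(5.0)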

5. Conclusion and Future Work

In this paper, the problem of autonomously landing a UAV onto a moving vehicle, similar research, and the progress made have been discussed and documented. A method for a UAV to follow a ground vehicle, assuming that there is a strong GPS signal, was developed. This method is a base for beginning the landing process onto the ground vehicle, but there is more room for work and improvement. Future work on this problem could include implementing a precise procedure for landing the UAV quadcopter with an optical flow sensor. Once the UAV quadcopter is above the ground vehicle, the quadcopter could use the optical flow sensor to track and stay steady over a target on the ground vehicle. The research conducted showed that this is a promising idea, but technology issues, such as the fact that the PX4Flow sensor is currently not compatible with the Pixhawk, made this an infeasible task for this project. However, once compatibility is no longer an issue, our research could be used to implement an optical flow sensor solution for autonomous landing.

In addition, research was conducted on automatic takeoff for IRIS. Given the timeframe available, this addition was deemed infeasible, but future work can focus on implementing automatic takeoff in guided mode, which would lead to a more autonomous mission. The research conducted on automatic takeoff could be used as a starting point for future teams interested in implementing this feature.

References

[1] J. Tweedale, "Fuzzy control loop in an autonomous landing system for unmanned air vehicles," in IEEE International Conference on Fuzzy Systems, 2012.

[2] Z. Yu, K. Nonami, et al., "3D vision based landing control of a small scale autonomous helicopter," International Journal of Advanced Robotic Systems, 2007.

[3] S. Yang et al., "An onboard monocular vision system for autonomous takeoff, hovering and landing of a micro aerial vehicle," in International Conference on Unmanned Aircraft Systems, 2012.

[4] D. Honegger, L. Meier, et al., "An open source and open hardware embedded metric optical flow CMOS camera for indoor and outdoor applications," in IEEE International Conference on Robotics and Automation, 2013.