
5HC99 Embedded Visual Control

Design & Implementation of a Quadcopter with

Visual Control

S. Balasubramanian, #0785610

M.G.H. Cox, #0629185

L.J.W. Waeijen, #0631911

Eindhoven University of Technology

Department of Electrical Engineering

September 19, 2012

Introduction

The project described in this report was conducted as part of the course 5HC99 (Embedded Visual Control) in the Electrical Engineering department of Eindhoven University of Technology. The project was carried out in the period February-September 2012.

The course is project-driven and aims at combining and understanding (robotic) control, embedded computation and computer vision. For each discipline, the students have to become familiar with the main theories, mathematical formalisms and practical issues. In regular presentations, the theoretical findings and practical progress are reported to the supervisors and the other students.

The project we describe in this report is an attempt to build a quadcopter1, an active area in UAV (Unmanned Aerial Vehicle) research. These vehicles use multiple sensors and an electronic control system to stabilize and fly the aircraft. With their small size and agile manoeuvrability, quadcopters can be flown indoors as well as outdoors. Important applications of quadcopters include inspecting places (for example conflict or disaster sites) which are unreachable by humans, military applications, cell-tower inspection and aerial photography.

Moreover, the goal of the project is to equip the quadcopter with a camera that it can use to autonomously capture the environment and react to it. In our case, we implement a vision algorithm that runs on the embedded computational platform of the quadcopter and is able to locate and track a ball. This can then be used to automatically let the quadcopter follow the ball.

This report contains theoretical background as well as practical trade-offs and problems that we encountered along the way. It should be possible for the reader to reproduce the project based on this report.

Overall, a good team effort resulted in a fruitful outcome of the project. Our sincere thanks to Professor Corporaal for giving us the opportunity to work on this great project. We extend our gratitude to Mark Wijtvliet, Zhenyu Yi and R.S. Pieters, the course supervisors who helped us during the course of this project.

1A quadrotor, also called a quadrotor helicopter or quadcopter, is a multicopter that is lifted and propelled by four rotors.

Summary

This report describes the design and implementation of an autonomous quadcopter equipped with a camera and a vision algorithm that enable it to follow a red ball. The quadcopter is completely built from scratch.

The quadcopter features two computational platforms: a 700 MHz 32-bit ARM-based Linux board, the BeagleBone, and a 90 MHz 32-bit ARM microcontroller, the NXP MBED LPC1768. The BeagleBone is equipped with WiFi so the quadcopter can be controlled wirelessly. Moreover, a webcam is connected to the BeagleBone to capture the environment. Multiple sensors are used to measure the position and attitude of the quadcopter: an accelerometer, a gyroscope, a magnetometer and an ultrasonic rangefinder. On the mechanical side, the quadcopter is built on a prefabricated fibreglass frame with four brushless motors driving 9 inch propellers.

The control is split into local and global control. Local control is concerned with following a reference attitude and position, whereas the global control generates this reference based on user commands and information from the camera. The local control loop is implemented on the MBED with a loop frequency of 250 Hz. A quaternion-based algorithm is used to combine the data of the sensors into a reliable estimate of the current position. The global control consists of a class that can be used on the BeagleBone to communicate with the local control algorithm on the MBED. We implemented a GUI application that uses this class to allow manual control of the quadcopter. Moreover, the class can be used by the image processing application to control the quadcopter based on information extracted from camera images. All software is implemented in C++.

The image processing application that we developed captures frames from the camera using the OpenCV framework. By subsampling and thresholding the image in color space (the H and S components of the HSV format), a red ball is detected with minimal computational load, making high frame rates possible. For comparing the performance and load of the algorithm, a Hough-transform-based variant was also developed. The output of the detection algorithm is used to track the position and movement of the ball.
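As an illustration of this detection chain, the sketch below (using the OpenCV C++ API) subsamples a frame, thresholds the H and S components, and takes the centroid of the resulting mask. The threshold values and subsampling factor are illustrative assumptions, not the tuned parameters of our implementation.

    #include <opencv2/opencv.hpp>

    // Minimal sketch of the color-space detector: subsample, convert to HSV,
    // threshold on hue/saturation, and locate the ball as the mask centroid.
    // The ranges below are illustrative values for red, not our tuned ones.
    cv::Point2f detectBall(const cv::Mat& frame)
    {
        cv::Mat small, hsv, lowRed, highRed, mask;
        cv::resize(frame, small, cv::Size(), 0.25, 0.25);  // subsample to cut load
        cv::cvtColor(small, hsv, cv::COLOR_BGR2HSV);
        // Red wraps around hue 0, so threshold two ranges and combine them.
        cv::inRange(hsv, cv::Scalar(0, 100, 50),   cv::Scalar(10, 255, 255),  lowRed);
        cv::inRange(hsv, cv::Scalar(170, 100, 50), cv::Scalar(180, 255, 255), highRed);
        mask = lowRed | highRed;
        cv::Moments m = cv::moments(mask, true);
        if (m.m00 < 1.0)
            return cv::Point2f(-1.0f, -1.0f);              // no ball detected
        return cv::Point2f(static_cast<float>(m.m10 / m.m00),
                           static_cast<float>(m.m01 / m.m00));
    }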

The result of the project is a quadcopter that is able to hover and can follow reference signals. The stability of the quadcopter can still be improved, for example by using more accurate motor drivers and further enhancing the control software. The image processing algorithm is able to detect and track a red ball on the embedded computational platform, which enables autonomous flying. Considerable effort was put into improving the stability of the quadcopter and making it hover steadily; however, slight improvements in this area are still required before it can be called completely autonomous. All software required for coupling the image processing algorithm to the quadcopter control is available, though we were only able to test these two components independently; both were shown to function correctly.


Contents

1 Project description and requirements

2 Hardware and platform
2.1 Mechanical construction
2.2 Motors and driving Electronics
2.3 Sensors
2.4 Computational platform
2.5 Supporting Electronics
2.6 List of components

3 Local control software
3.1 Mathematical model of the quadcopter
3.2 Local control overview
3.3 Software mapping and interfacing
3.4 Inertial Measurement Unit
3.5 Controller design
3.6 Software implementation

4 Image acquisition and processing
4.1 Objective
4.2 Requirements and Preliminaries
4.3 Concept of ball detection and tracking
4.4 Pseudocode
4.5 Testing under various scenarios
4.6 Benchmarks
4.7 Pros/cons of the algorithms

5 High level control
5.1 User interface
5.2 Global control

6 Calibration and tuning
6.1 Motor calibration
6.2 Propeller and frame balancing
6.3 Accelerometer calibration
6.4 Gyroscope calibration
6.5 Automated controller tuning

7 Experimental results

8 Conclusions and recommendations

Bibliography

A Who did what

B Source Code Structure


Chapter 1

Project description and requirements

This chapter describes the project and its requirements, in addition to the tasks performed in the fields of "Embedded", "Visual" and "Control".

The goal of the project described in this report is to build a fully functional quadcopter from scratch that is capable of autonomously following or avoiding a ball. From this goal, the following tasks arise:

• Choose and build a mechanical construction, consisting of a frame, motors, a battery pack and propellers;

• Choose the other needed components, like a computational platform for running the control and vision algorithms, inertial sensors, connectivity, a camera and power supply electronics;

• Interface all components and build a circuit board containing all needed electronics;

• Implement the local control, which uses data from the sensors to calculate the correct motor signals in order to move in the desired way;

• Implement the vision algorithm needed to detect a ball and to extract the movement information of the ball;

• Implement a GUI/interface that can be used on, for example, a laptop to control the quadcopter;

• Combine the vision program with the control to enable autonomous flight.

With time being a critical factor, the quality of the outcome depends on the man hours spent on planning, procuring the components, understanding the various concepts in electronics, embedded software, control systems and imaging, and applying them in practice. The time required for programming the processor board and the microcontroller, as well as for debugging and tuning the quadcopter during the various stages, is another factor to keep in mind during the pre-initiation phase of the project.


Quadcopters

A quadcopter is a four-rotor hovering platform. Each rotor is mounted directly to a motor and the motors are mounted at the four corners of the frame. Most commercially available quadcopters use open-source firmware for control. AeroQuad and ArduCopter are popular open-source projects that feature a detailed hardware description and include open-source software based on Arduino. There are many prefabricated and DIY kits available based on these platforms. Another popular quadcopter is the Parrot AR.Drone, which includes a camera and can be controlled from a smartphone or tablet. The AR.Drone uses proprietary firmware, although open-source alternatives are available.

Commercially available quadcopter kits usually cost more than 300 €. We decided to build our own quadcopter because it gives us the freedom to take our own design decisions and possibly reduce the overall costs involved. This decision has made the course more challenging and deepened our understanding in many areas.

The course for which this project is conducted covers three fields: embedded computation, visual processing and control. The sections below briefly discuss what has been performed in each of these fields.

Embedded computation

As will be described later in this report, the quadcopter includes two embedded computational platforms. We implement all control, communication and image processing on these platforms. Especially the implementation of the image processing on an embedded platform with limited computational power requires extra attention. The circuit board containing the computer boards, sensors and supporting electronics is designed and built by hand. All communication between the boards and the sensors is handled by software written by us.

Visual processing

The visual part of this project consists of efficiently capturing frames from a webcam connected to the Linux board and then processing the frames. The processing algorithm tries to detect a ball in the frames and to extract the position and radius of the ball. From this information, it can estimate the direction of the ball, which can be used to control the quadcopter. The most challenging part is to implement this chain at a reasonably high frame rate on a lightweight CPU.

Control

We implement all control algorithms needed for stabilizing and flying the quadcopter from scratch. To be able to do this, we invested a lot of effort in researching techniques for combining sensor data and controlling a quadcopter. In addition, effort went into calibrating the motors and sensors and (automatically) tuning the controller parameters.


Important factors when selecting a frame are that it must be lightweight and rigid, and that it minimizes the vibrations coming from the motors [20].

Figure 2.2: Carbon fiber frame

Given these constraints, there are many feasible frame materials. For example, a frame can be made of carbon fibre, aluminium or plywood. Carbon fibre is by far the most rigid of the three materials but is also very expensive. A good alternative was found in fibreglass, which is also very rigid and lightweight, but much cheaper than carbon fibre and much stronger than plywood. The selected fibreglass frame weighs approximately 300 grams.

There are two common configurations in which a quadcopter can be set up, known as the X and + configurations. The difference between the two lies in how the motors are controlled while steering the quadcopter and in the direction of flight. In the + configuration, each motor is assigned to one direction, whereas in the X configuration a pair of opposite motors works together on changing the direction. Figure 2.3 shows a quadcopter in the + configuration, which is used for the final quadcopter design.

Figure 2.3: Plus configuration of a quadcopter. Notice how the forward direction is defined along the axis of the motors.

When choosing the frame, it is not only important to take the orientation into account; even more important are its size and, coupled to that, its weight. These dimensions determine the required parameters for other components, such as motor power and battery capacity.

The power P required to generate thrust T (i.e. lifting capacity) is, according to [16], given in equation 2.1.


P ∝ √(T³)    (2.1)

Figure 2.4 shows the non-linear relation between battery power and weight (for a given maximum flight duration), plotted as a group of dotted lines, one for each flight duration. P0 indicates the battery power required to lift the motors and structure. Due to this non-linearity, the choice of the battery and the frame must match very closely to stay in the feasible design range.

Taking all these points into consideration, the frame of Figure 2.2 was selected. It is built from fibreglass, as this material is both strong and light for a good price. The frame is about 1 meter wide, which brings all dimensions into a range where a computing platform is easy to lift; this large frame also provides good stability.

Figure 2.4: Weight vs. power relation, courtesy of [16]

As it was feared that the quadcopter might be very unstable in the initial stage of the design, a method was devised to protect it from physical damage. Having chosen a sturdy frame is beneficial, but it cannot protect the fragile propellers when the quadcopter swings into a solid surface. Therefore a lightweight bumper was constructed from a material called Styrodur. Styrodur is a kind of foam, but it is incredibly rigid for its low weight, which makes it ideal for the bumper. The bumper attached to the quadcopter can be seen in Figure 2.5.

Figure 2.5: A picture of the bumper (green foam) attached to the quadcopter


2.2 Motors and driving Electronics

Motors and the electronics to control them are critical components in the design of a quadcopter. Not only should the motors and electronics be able to output enough power to generate the required lift to take off, they should also respond both fast and accurately enough to enable the quadcopter to hover steadily. The selection of the motors and the design of the electronics driving them are described in the two subsections below.

Motor Selection

When selecting the motors for the quadcopter, a number of criteria should be taken into consideration. The most important criteria are:

• Output power vs weight

• Operating speed and voltage

• Bandwidth (related to how fast the motor can change its speed)

• Speed Accuracy

• Durability

• Electronic driver complexity

• Efficiency

• Cost

The first and probably most important decision that has to be made is what type of motor to use. There exist many types of electric motors, powered by either alternating current (AC) or direct current (DC). Since the power source of the quadcopter is DC, it is considered overly complex to convert this to AC, given that the characteristics of DC motors are just as good as those of AC motors. Therefore, the choice is made to use a DC-powered motor. It is important to note that there are many DC variants available on the market. The two predominant ones are the brushed and the brushless types. The interested reader can find an excellent explanation of how both types work on Wikipedia [6].

When comparing these two types of motors, it seems the brushless motor is better suited:

• In general, brushed motors have a lower efficiency of 75-80% compared to 85-90% for brushless motors, which is due to the friction of the brushes.

• A brushless motor can be easily synchronized by its driving electronics, leading to better speed accuracy and greater dynamic bandwidth.

• The brushes (usually made of carbon) of the brushed motor often wear out, giving it a shorter lifetime than a brushless motor.

• A brushed motor generates a lot of electromagnetic noise due to the sparks at the brushes.

• The driving electronics for a brushed motor are extremely simple and cheap, while a brushless motor needs more complicated control electronics.

Looking at the criteria described at the beginning of this section, it turns out that both brushed and brushless motors satisfy the power, speed and weight requirements, but the dynamic behaviour of the brushless motor is far superior to that of the brushed version. The only drawback for …


Figure 2.8: EMF measuring. (a) Theoretical; (b) practical with virtual ground.

The biggest difficulty, however, is not to determine the order in which to power the phases, but to determine the moment to switch on the next phase. As soon as the magnet has aligned itself with the phase that is currently powered, the magnetic field no longer performs any work. At that exact moment, the next phase should be enabled. If this is not done, the motor would not only be slowed down (the inertia of the rotating magnet would be lost because the current phase prevents the magnet from turning any further), but the energy in the coils would be dissipated as heat, eventually destroying the coils or the driving electronics. Thus, it is of vital importance that the next phase is enabled at exactly the right time.

How do the electronics determine this exact point in time? One solution is to add sensors to the motor and measure the exact position of the rotor. Although this solution works, it adds the cost of the sensor to the design, while there actually is a cheaper method. This cheap method relies only on the existing hardware and uses the back electromotive force, or back-EMF, of the terminal that is not powered. As the permanent magnet turns, it also functions as a dynamo, inducing a voltage over the terminal that is not connected. When this voltage spikes, the magnet is positioned exactly in the middle between the old and the new phase. With timing, it can then be calculated when the magnet will arrive at the new phase.

Measuring this back-EMF is unfortunately not trivial. To understand this, let us take a look at Figure 2.8. The back-EMF is induced over the coil that is not powered/connected, but since the motor is wired in star form, the back-EMF should be measured between the centre of the star and the disconnected terminal (2.8a). There are motors that bring out the center tap exactly for this purpose, but the motors that were selected do not have a dedicated terminal for the motor ground. Instead, a virtual ground is created by connecting the terminals with resistors in star shape (2.8b). The spike of the back-EMF can now reliably be detected by a comparator between the disconnected terminal and the virtual ground.

Now that the back-EMF can be determined, it is possible to switch the phases at the moment the magnet requires and thus get the motor turning, though this still does not allow for controlling the speed of the motor. Eventually, when the magnet reaches the position the current phase dictates, the next phase must be activated for the aforementioned reasons (preventing loss of magnet inertia and overheating of the coils). Hence, the time it takes the magnet to travel from one pole pair to the next actually determines when the next phase should be activated, and thus the overall motor speed. To control the speed of the motor, it should therefore be possible to influence the time it takes the magnet to travel from one pole to the next. As the magnet is turned by the forces exerted on it by the coils, it is possible to control this time by controlling the power in the coils.
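Sketched as code, the resulting commutation logic is roughly the following; this is illustrative pseudocode in C++, not the actual driver firmware, and the hardware hooks (timer, comparator wait, phase advance) are hypothetical:

    #include <cstdint>

    // Hypothetical hardware hooks: a microsecond timer, a blocking wait on
    // the back-EMF comparator, and the routine that energizes the next phase.
    uint32_t micros();
    void     waitForBackEmfSpike();
    void     delayMicroseconds(uint32_t us);
    void     advanceToNextPhase();

    // The back-EMF event marks the half-way point between two phases, so the
    // second half of the travel takes roughly as long as the first half did.
    void commutationLoop()
    {
        for (;;) {
            uint32_t tPhaseOn = micros();     // moment the current phase engaged
            waitForBackEmfSpike();            // magnet is now half-way
            uint32_t halfTime = micros() - tPhaseOn;
            delayMicroseconds(halfTime);      // wait out the second half
            advanceToNextPhase();             // switch to the next phase in time
        }
    }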

One way of controlling the power in the coils would be to control the voltage over the phases. When this voltage is lowered, the magnet is attracted less and the speed of the motor drops. When the voltage is increased, so does the strength of the magnetic field, and the motor speeds up.

In power electronics, however, it is very inconvenient to control the driving voltage in a continuous manner. If, for example, a transistor is used to create a voltage drop from the supply voltage, all the power that would have gone into the motor at full speed is instead dissipated as heat in the transistor. In the case of the selected motors, this means around a hundred watts would have to be dissipated, which in practice means the transistor providing the voltage drop would need an enormous amount of cooling to take away all this excess heat.

Instead of controlling the voltage in a continuous manner, it is also possible to use Pulse Width Modulation (PWM) to reduce the effective power that goes into the coils, without having to convert the voltage coming from the battery. PWM switches the voltage over the coil between on and off very fast. The ratio between on-time and off-time determines the average amount of power that is dissipated in the coil. When the on-time is 100%, full power is applied and the motor runs at top speed. Reducing the average on-time to, say, 80% means that in every time unit the voltage is applied to the terminals for 80% of that unit and switched off for 20%, so the motor slows down. How much it slows down depends on the load and various forms of friction, because what is controlled here is the power to the motor and not the number of revolutions per second. This is an important observation: for control techniques it would be much more convenient to control not the power to the motor, but its actual speed, and thus the amount of thrust it generates, which is directly coupled to that.
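On the MBED, for example, this duty-cycle control maps directly onto the mbed SDK's PwmOut API. A minimal sketch, in which the pin and the switching period are assumptions:

    #include "mbed.h"

    PwmOut motor(p21);           // p21 is one of the MBED's PWM-capable pins

    int main()
    {
        motor.period_us(50);     // 20 kHz switching period (assumed value)
        motor.write(0.80f);      // 80% duty cycle: coil powered 80% of each period
        while (true) {
            // motor now runs at a speed determined by load and friction
        }
    }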

After all this theory, it is time to look at a first design for such a controller. Fortunately, this project is not the first to require a driver for a brushless motor, and many cheap drivers are readily available on the market. The schematic of one such driver, by a company named Turnigy, is shown in Figure 2.9.

Looking at this schematic, the major components can be identified quite easily. On the right-hand side are the power FETs that drive the 3 different phases. The FETs are (with some additional electronics) controlled by the I/O pins of the Atmega microcontroller located on the left-hand side of the schematic. At the bottom center, the resistor network can be found that creates the virtual ground and connects all phases to the comparators incorporated in the microcontroller (for detecting the back-EMF).

The Turnigy driver as depicted in Figure 2.9 is targeted at RC-toy applications and does a great job for almost no money. The hardware itself suits all requirements for a driver perfectly, but unfortunately there are some shortcomings:

• First of all, the interfacing with the microcontroller is poor. A protocol called pulse-position modulation is used, which communicates to the microcontroller, through the width of pulses, how much power should go to the motors. This is done because remote controls for RC models transmit this signal, allowing the driver to be connected directly to a receiver. The downside is that this way of communication is very slow and not very precise (the controller has to interpret the timing of the pulses). For the quadcopter it is important that the motor speed can be adjusted both fast and accurately, but in the original set-up the protocol to interface with the microcontroller is the bottleneck in updating, i.e. the motors themselves have a larger bandwidth than can be achieved with this driver.

Figure 2.9: Schematic for a Turnigy 18 Ampere brushless motor driver

• The second shortcoming of the Turnigy controller is that it only allows for controlling the power that goes to the motor, not its speed. As mentioned earlier in this report, setting the PWM duty cycle controls the amount of power that goes to the coils of the motor. But since the microcontroller both controls the PWM and measures when the phases have to change, it is trivial to add a (PID) controller that controls not the power but the speed. When the motor spins too fast, the controller can reduce the PWM duty cycle, and when it spins too slowly, it can increase it. This would be an extremely useful feature for a controller, as it removes the dependency on load and the imbalance between different motors (due to manufacturing variance). None of this is possible, however, and it is even impossible to request the current speed from the controller, as it only supports the pulse-position-modulation input, which is one-way.

It is unfortunate that although this cheap hardware is perfect for the needs of the quadcopter, the firmware is what causes these problems. Luckily, we are not the first group of engineers to run into exactly this problem, and an effort has already been made to write new firmware for the Atmega microcontroller. This firmware enables the I2C interface of the Atmega, allowing the desired motor speed to be updated at well over 300 Hz, which is the approximate maximum bandwidth of the selected motors. In other words, the communication is no longer the bottleneck when it comes to adjusting the motor speed. As I2C is a digital protocol, it is much more accurate than the original PPM interface. Besides improving the communication, the new firmware also allows for speed control of the motors instead of power control.
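From the MBED side, updating a motor's speed then reduces to a short I2C master write. The sketch below uses the mbed I2C class; the slave address and the single-byte 0-255 speed encoding are assumptions about this particular firmware, not documented values:

    #include "mbed.h"

    I2C bus(p9, p10);                      // I2C master on the MBED's first I2C port

    // Send a new speed setpoint to one ESC. Address and byte encoding are
    // assumptions; the point is that one short write replaces pulse timing.
    void setMotorSpeed(int address, uint8_t speed)
    {
        char data = static_cast<char>(speed);
        bus.write(address, &data, 1);
    }

    int main()
    {
        bus.frequency(400000);             // fast-mode I2C, 400 kHz
        setMotorSpeed(0x52, 128);          // hypothetical ESC address, half throttle
    }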


As this new firmware removed all concerns we had with the original Turnigy controllers, it was decided to order these controllers and upgrade them with the new firmware. However, when we received the controllers, it turned out that the factory had by now updated the schematics and used a completely different microcontroller for which no firmware was available.

This meant it was back to the drawing board to come up with a good alternative. The result was an effort to make a copy of the original hardware and use the available firmware to drive it. A small modification was made to use suitable power FETs instead of placing them in parallel, as was done in the Turnigy schematic. Also, the 5 volt power supply (two 7805 voltage regulators in parallel) was omitted, as it is not good engineering practice to put these kinds of devices in parallel. Instead, a completely separate power supply would be built, and the drivers would be powered from this central power supply.

Figure 2.10: Pictures of one of the motor driver prototypes. (a) Top; (b) bottom.

Rebuilding the original hardware seemed like a solid plan, and after a few weeks the first prototype, depicted in Figure 2.10, was ready to be tested. The test did not work out very well, as the motors would not function properly. Moreover, since the firmware was written in assembly, it was very hard to debug and find out what the problem was. After spending considerable time trying to debug the driver, it was decided to write completely new firmware in C and use a different (more readily available) chip.

This meant a redesign of the circuit board, which again consumed quite some time. When the circuit and the firmware were finally finished, it turned out the driver still had major problems. Although the motors would now turn at a decent speed, at some point they would end up in some kind of fault state where the next phase was not activated properly. It seemed very much like the back-EMF sensing was unreliable, causing the firmware to wait for an event that had already occurred. The most likely cause was found to be the circuit board: because it was not expected that the back-EMF signal would be so weak, no effort had been made to separate the signals from the power lines. Most likely, coupling with other, more powerful signals disturbed the back-EMF sensing. To solve this, another circuit board would have to be designed, but by now the drivers had already taken a disproportionate amount of time. Therefore, it was decided to use the original Turnigy controllers after all. Given either more time or more money this would definitely not have been the preferred solution, but under these constraints it was the best thing that could be done.


2.3 Sensors

In this section, we describe all the sensors that are essential for proper control of the quadcopter. To reliably determine the orientation of the quadcopter, an accelerometer, a gyroscope, a magnetometer and a height sensor are combined. The combination of multiple sensors to obtain such an orientation is usually referred to as an IMU (Inertial Measurement Unit), which uses filtering and drift compensation techniques to provide a reliable output. The design motivations for choosing the different sensors for this project are low power, a small package and support of the I2C interface with a high sampling rate. The remainder of this section details how the various sensors were selected.

Accelerometer

An accelerometer measures how fast something is speeding up or slowing down and is used to sense both static (gravity) and dynamic (sudden start/stop) accelerations. Its unit of measurement is either m/s² or g-force (g). Accelerometers are affected by gravity and give an indication of the sensor's orientation with respect to the earth's surface [17].
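As a concrete example, the standard tilt formulas recover roll and pitch from a static 3-axis reading (a sketch; the axis convention is an assumption, and the result only holds when gravity dominates the measurement):

    #include <cmath>

    // Roll and pitch (radians) from one static accelerometer sample.
    // Only valid when the sensor is not accelerating, so gravity dominates.
    void tiltFromAccel(float ax, float ay, float az, float& roll, float& pitch)
    {
        roll  = std::atan2(ay, az);
        pitch = std::atan2(-ax, std::sqrt(ay * ay + az * az));
    }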

To obtain a valid orientation, the acceleration along three orthogonal axes is required. Nowadays, accelerometers that measure in 3 directions come in one package, making them ideal for a quadcopter. Besides measuring three axes, it is convenient if the sensor can be connected to an I2C bus, which led the search for an accelerometer to the BMA020 from Bosch [1]. This sensor is not only cheap, but is also aimed at low power consumption and comes in a small package. Besides that, it is very easy to program and it can filter its measurements on chip, reducing the computational effort for the computing platform.

Figure 2.11: A 3-axis accelerometer

Gyroscope

A gyroscope measures angular velocity, i.e. how fast something is rotating around an axis. The unit of measurement of a gyroscope is either rotations per minute (RPM) or degrees per second [17]. Again, to obtain a valid orientation for the quadcopter, the angular velocities around three orthogonal axes are required.

Gyroscopes come in various precisions and update speeds, but for a quadcopter basically any cheap 3-axis gyroscope will work, according to many projects available online. Apparently the fabrication techniques for 3-axis gyroscopes are so good that there are no 'bad' sensors, meaning a cheap sensor will suffice if no extreme accuracy is needed.

The other main concern when selecting the gyroscope was whether it had an I2C interface, which would allow seamless integration with the rest of the design. One of the most common 3-axis gyroscopes with I2C turned out to be the ITG-3200 [7].


The ITG-3200 is manufactured by InvenSense. It is a high-precision sensor with a 16-bit analog-to-digital converter for digitizing the gyroscope outputs, it is low cost, and it supports fast-mode I2C (400 kHz). Its parameters, such as range, number of axes measured and update speed, can easily be programmed. This made the ITG-3200 an ideal candidate for the quadcopter.

Figure 2.12: Wii-remotes, which contain an ITG-3200 gyroscope

Another benefit of the ITG-3200 is that it is extremely cheap. In fact, this particular sensor is also used in Wii-remote controllers (Figure 2.12) to detect the hand motion of a player. There are many Wii-remote clones available online, and we even managed to find one that was cheaper than a separate ITG-3200. Therefore such a Wii-remote clone was bought and its gyroscope was extracted for use in the quadcopter (Figure 2.13).

Figure 2.13: The 3-axis gyroscope on a Wii-remote PCB. The red outline shows the Wii-remote PCB and the blue circle indicates the actual gyroscope chip.

Magnetometer

By utilizing the accelerometer and gyroscope discussed earlier, a relative orientation can be obtained for the quadcopter. It would, however, be good to have some type of absolute reference, which neither the gyroscope nor the accelerometer can provide (for more details see chapter 3). A good way to obtain such an absolute reference is a magnetometer. A magnetometer measures the geomagnetic field, giving an absolute reference for the rotation of the quadcopter with respect to the magnetic poles.

Again there are many magnetometers available, so one was selected that supports high-speed I2C and is low cost: the MAG3110 [10] from Freescale Semiconductor.


Figure 2.14: A 3-axis magnetometer

Height sensor

To be able to maintain a certain height, a quadcopter should be able to measure its height above the ground. There are many ways this could be achieved, but perhaps one of the easiest is using an ultrasonic range finder (URF).

A URF works by transmitting a short ultrasonic audio beam at a frequency that is inaudible to the human ear. This beam hits a solid object and bounces back to the URF. The time between transmission and echo reception gives an indication of the distance between the URF and the object.
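The distance then follows directly from the round-trip time and the speed of sound; the factor of two accounts for the beam travelling to the object and back (a one-line sketch, assuming air at room temperature):

    // Distance from the ultrasonic round-trip time. Sound travels to the
    // object and back, hence the division by two; 343 m/s assumes ~20 °C air.
    float urfDistanceMeters(float echoTimeSeconds)
    {
        const float speedOfSound = 343.0f;  // m/s
        return speedOfSound * echoTimeSeconds / 2.0f;
    }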

Also this time, we looked for a sensor with an I2C interface and minimal cost. Since the transducers in a URF are quite expensive, a nice trade-off is to use the same transducer for transmission and reception. As the electronics need to switch the transducer to another mode, no returning echoes can be received during the first few milliseconds after a transmission, leading to a minimum distance such sensors can read. This minimum distance is typically around 10-20 cm, which is acceptable for a quadcopter, as flying will always be done above that. Therefore a URF with only one transducer was selected to reduce cost and weight.

The URF that was finally selected is the SRF02 [8]. It supports an I2C interface, operates at 40 kHz, and provides a minimum range of 15 cm and a maximum range of 6 meters.

Figure 2.15: An ultrasonic range finder

Camera

Besides measuring the orientation of the quadcopter correctly, a camera is needed to be able to detect red balls in front of the quadcopter, which is one of the goals of this project. There exist many kinds of cameras and camera interfaces, but for simplicity and cost it was chosen to use an ordinary USB webcam. As some of these USB webcams are officially supported by Video for Linux (V4L), there are no problems with drivers or other complicated interfacing. The webcam that was selected is a Logitech C210 (Figure 2.16). This webcam is both small and supported by V4L, which makes it perfect for the quadcopter.


Figure 2.16: The Logitech C210 USB webcam

2.4 Computational platform

The final design of the quadcopter incorporates two major computational platforms. First, there is the BeagleBone for high-level control and graphical algorithms; second, an MBED microcontroller is used for positioning and local control. Both platforms and some of their specifics are discussed in this section.

BeagleBone

In the first design, only one computational platform was going to be used. This platform would need to meet several demands, of which the most important ones are listed below:

• Small Physical Dimensions (weight and size)

• Sufficient Computational Speed for image processing

• Ability to interface with sensors

• Ability to interface with camera

• Small power consumption

• Real Time behaviour for control

• Low Price

Especially the requirements on weight and size severely limit the feasible options to a handful of development boards. The restricted weight and size mean an embedded system is needed, which heavily constrains the available computational resources. Due to the increased demand for systems with similar needs in mobile devices, a number of systems-on-chip (SoCs) exist that deliver an adequate amount of processing power whilst still being small and light. Unfortunately, most if not all of these SoCs come in packages that are extremely small and thus very hard to solder by hand, making it impossible to use these devices at acceptable cost in a prototype. Therefore, several development boards with these devices already mounted were examined, of which the most important ones are:

BeagleBone:
• 700 MHz ARM Cortex-A8
• 256 MB DDR2 RAM
• I2C, USB, GPIO
• 86.36 mm x 53.34 mm @ 39.68 grams
• $89 USD
• Manual for further details [15]

BeagleBoard XM:
• 1 GHz ARM Cortex-A8
• 800 MHz DSP
• 200 MHz SGX (GPU)
• 512 MB LPDDR RAM
• I2C, USB, GPIO, camera port
• 82.55 mm x 82.55 mm @ 37 grams
• $149 USD
• Manual for further details [14]

PandaBoard ES:
• 1.2 GHz ARM Cortex-A9 dual core
• PowerVR SGX540 (GPU)
• 1 GB low-power DDR2 RAM
• I2C, USB, GPIO, camera port, WiFi, Bluetooth
• 114.3 mm x 101.6 mm @ 81.5 grams
• $182 USD
• Manual for further details [24]

GumStix Overo FireSTORM:
• 1 GHz ARM Cortex-A8 with NEON SIMD
• 800 MHz DSP
• PowerVR SGX530 (GPU)
• WiFi, Bluetooth, connections for expansion boards
• 17 mm x 58 mm @ 42.6 grams (with antennas)
• $219 USD
• Website with further details [19]

In terms of computing power, the PandaBoard ES has the most to offer, but it is also quite large and heavy. The smallest is the GumStix, which for its size still offers an enormous amount of processing power, but it is also quite expensive. Especially since the budget is very constrained, in the end the BeagleBone was chosen for its low price and reasonable size. The BeagleBone also offers a lot of easily accessible I/O pins, which makes interfacing all the sensors trivial. The downside is that the BeagleBone lacks some of the more advanced hardware, such as a DSP or GPU for the image processing. However, getting such additional hardware working in non-standard ways (i.e. not just using the available video codecs etc.) has in many cases already proven to be a challenge. Support for these kinds of embedded systems is usually very limited, and given the time available for this project, it was feared that using the additional hardware such as the GPU and DSP might be infeasible anyway. Therefore, in the end, the BeagleBone was selected as the main computational platform.

The BeagleBone itself came shipped with a Linux distribution called Arch Linux. At first this OS was used, but it proved to be highly unstable. The package manager was a complete mess and could break the whole system without warning. If a package required a newer kernel, the new kernel would be downloaded and installed, but the kernel modules would not be updated, rendering the complete installation useless. Also, a new image was released almost weekly, and almost every time crucial features were either added or removed. One week it was possible to access the analogue-to-digital converters, allowing for battery monitoring on the BeagleBone; the next, this feature would be removed again. Eventually, after lots of reinstalls and trouble with kernel modules to get the WiFi working, it was decided to switch to another distribution. The choice was made to go for the Debian-based Ubuntu distribution, which is known for its reliability and its famous 'it just works' philosophy. The Ubuntu distribution proved to be much more stable and workable. The WiFi also seemed to work a little better with Ubuntu, but unfortunately it was still not reliable.

MBED

As mentioned, in the first design the BeagleBone would be the only computational platform. However, for reasons detailed in section 3.3, it was decided to move the local control away from the BeagleBone and port it to a microcontroller-based system. In short, the main advantage is that on such a system it is much easier to give real-time performance guarantees, leading to a more stable update frequency of the control loop and thus more stable control. This also freed up the BeagleBone to be used mainly for image processing.

When selecting a microcontroller-based platform for the local control, the choice was actually quite easy. Although many quadcopter projects available online use small 8-bit, 20 MHz devices for control, these devices are actually quite outdated and overpriced for the performance they deliver. Especially with the newer 32-bit ARM cores available nowadays, it makes almost no sense to resort to an 8-bit 20 MHz AVR or PIC. Also, the local control implemented in this project is more demanding than that of the average quadcopter, as various techniques are implemented to compensate for sensor mismatches and errors. These require a number of 32-bit matrix multiplications that would strain the old 8-bit 20 MHz devices severely, compromising the update speed of the control loop.

Instead, a very good platform was found in the ARM Cortex-M3 based development board by NXP, called MBED, which is depicted in Figure 2.17. This is a very well supported development board with a large, freely available code base. The MBED has two hardware I2C modules, allowing it to communicate as a master with all the various sensors on one bus and to listen to the BeagleBone as a slave on the other. The MBED also has 6 PWM channels, allowing it to communicate with the motor drivers directly. Besides all of this, even a real-time OS is provided for the MBED, which allows for guarantees on the control loop. This is in contrast to the BeagleBone, where the Linux distribution used was only low-latency and could not guarantee anything, which was noticeable when executing the control loop.

2.5 Supporting Electronics

Apart from the computational platforms, the various sensors and the motor drivers themselves, some additional electronics were needed to connect and power everything. Some of this hardware is trivial and will not be discussed in this report, but some interesting trade-offs have been made as well.

Figure 2.17: NXP's ARM Cortex-M3 platform 'MBED'

The first piece of supporting electronics worth mentioning is the power supply. For powering the motors, an 11-12 volt power source was needed. As the amount of power consumed by the motors is very large (hundreds of watts), it is best to use a source that natively supplies these voltages, instead of up- or down-converting from another kind of source. That the motors need 11-12 volts is of course no accident, as almost all regular lithium-ion batteries used in radio-controlled models supply these voltages. It was therefore obvious to use one of these batteries.

However, by using 11-12 volt batteries, the 5 volt power supply for the sensors, the MBED and the BeagleBone became somewhat of a challenge. Although these devices do not require nearly as much power as the motors, they still consume a reasonable 6 to 7 watts in the worst case. If a standard voltage regulator were used, this would mean a voltage drop of 12 − 5 = 7 volts at a current of 7 W / 5 V ≈ 1.4 A, and up to 2.5 A with some headroom. This means that around 2.5 × 7 = 17.5 watts of power would need to be dissipated in the voltage regulator. Using a simple 7805 (standard 5 volt regulator), this would require active cooling, which is large, heavy and power consuming. Even with very substantial cooling, it would be difficult not to overheat the device. Clearly, either a separate power source was needed, or a DC-DC converter, which converts voltages much more efficiently (up to 98% under certain conditions [5]).

Figure 2.18: Single package DC-DC converter capable of delivering 5 Watt

A separate power source would not have been a very elegant solution, so it was decided to use a DC-DC converter. Building such voltage converters can in practice be quite tricky, because the converter must ensure a stable voltage even under heavily varying loads such as the BeagleBone. After some research, a module was found that contains a complete 5 volt DC-DC regulator in a single 3-pin package, capable of delivering 5 watts (Figure 2.18). To provide the needed 7 watts, two of these regulators were ordered. The electronics were then divided into two groups, each consuming no more than 5 watts. With these DC-DC converters in place, the battery voltage can now easily and efficiently (up to 96% efficiency for these particular regulators) be converted to the required 5 volts.

The second interesting piece of supporting electronics that was designed is the interface between the BeagleBone and the Turnigy motor drivers. As the motor drivers need a PPM (pulse-position modulated) signal, the BeagleBone would have to emulate such an interface in software, as it has no hardware interface for it. It should have been possible to use the PWM hardware of the BeagleBone for this purpose, but due to bad support from the manufacturer, the PWM hardware is still not exposed through the Linux OS. Emulating the PPM interface in software would, however, require a significant number of interrupts, severely occupying the CPU and thus costing performance.

Figure 2.19: AVR bridge between the BeagleBone and the Turnigy controllers

To overcome this PPM interfacing problem, a bridging device was developed that emulates the PPM signals to the motor drivers but communicates with the BeagleBone through an I2C interface. As the BeagleBone has hardware modules for I2C, the processor can just send out a number of bytes to this module and they will be transmitted without further involvement of the CPU.

To make the bridge device, an AVR microcontroller was used (Figure 2.19). This cheap controller has only two hardware timers but needs to produce 4 PPM signals accurately, so it required some elaborate software. In the end, it could provide 4 perfect PPM signals while still being able to communicate with the BeagleBone. This hardware stayed in the design until, for reasons detailed in section 3.3, the local control software was ported to the MBED. As the MBED has 6 hardware PWM channels, the bridge device was no longer needed and it was removed from the final design. Nevertheless, it would have worked perfectly if the local control software had remained on the BeagleBone.

Figure 2.20: Icidu USB-wifi module

A special piece of electronics used in building the quadcopter is the USB-to-WiFi adapter. As the BeagleBone does not have integrated WiFi, it seemed like a good idea to use a USB-to-WiFi adapter to provide a wireless connection to the quadcopter (Figure 2.20). As the BeagleBone also has to interface with the USB camera and has only one USB port, a small inexpensive USB hub was added to the design. Although the USB hub works perfectly, the WiFi adapter experienced many troubles throughout the project.

The main issue with the WiFi adapter appears to be a faulty driver in the Linux repositories. The adapter would sometimes connect properly to its access point, and sometimes not at all. Various configurations were tested in which different adapters were tried, a driver was recompiled from source, and even a distribution switch (from Arch Linux to Ubuntu) did not resolve the inconsistent behaviour of the WiFi adapter. After many hours spent trying to resolve the issues, the current status is that a random number of reboots is required for the wireless to come up and connect to the network. Because of this unwanted behaviour, an effort was made to examine other ways to control the quadcopter. Two 433 MHz modules were interfaced as a test, but after the first experiments it turned out these basic modules lacked even a form of carrier sensing, making it virtually impossible to write a good protocol for reliable communication between them. Also, the bitrate was not high enough to transmit images from the webcam, so this attempt was aborted and the unreliable WiFi remains the only way to control the quadcopter remotely.

2.6 List of components

Table 2.1 is a reference list of the final components that were used.

Description | Supplier | Qty | Price

BeagleBone, ARM Cortex-A8 | beagleboard | 1 | 73.50
X450 Glass Fiber MultiCopter Quad-Rotor Frame | goodluck buy | 1 | 20
1000Kv Brushless Motor | hobby king | 5 | 31.60
9x5 Propellers (Standard and Counter Rotating) | hobby king | 12 | 5.40
TURNIGY Plush 25amp Speed Controller | hobby king | 5 | 48.50
ZIPPY Flightmax 2200mAh 3S1P 25C | hobby king | 2 | 14
Turnigy balancer and Charger 2S-3S | hobby king | 2 | 7
Ultrasonic Range Finder (SRF02) | Antratek | 1 | 12.49
Triple Axis Magnetometer (MAG3110) | Antratek | 1 | 11.20
3-axis acceleration sensor, 3DBS, complete kit | Antratek | 1 | 6.95
863-MC7805ACTG (5V, 1A), semi-linear regulators | beagleboard | 5 | 1.94
Prop adapter (Colet) 3mm | rc addict | 1 | 12.50
Tracopower TSR-2450, Converter DC/DC, SIP | Farnell | 1 | 15.40
Premium MotionPlus for Wii Remote | local purchase | 1 | 6.60
NXP ARM Cortex-M3 platform (MBED) LPC-1768 | local purchase | 1 | 55
Logitech webcam (291063) | local purchase | 1 | 19.95
ICIDU Nano USB adaptor 150N | local purchase | 1 | 13
TOTAL PRICE | - | - | 356

Table 2.1: List of components for quadcopter

Note: This table also includes spare parts that were bought, such as an additional motor, extra propellers and an extra battery.


Chapter 3

Local control software

In order to control the movements of the quadcopter, it should be able to follow a reference position signal. The reference signal is generated by a higher-level control, which can be a manual control interface or a high-level control algorithm that uses visual information from the camera.

This chapter describes the software developed to handle the task of following a reference signal, also known as 'local control'. First, a simple mathematical model of the quadcopter is introduced, followed by an overview of the entire local control loop. The remainder of the chapter describes the implementation details of the functional blocks in the control loop, as well as the trade-offs that were made along the way.

3.1 Mathematical model of the quadcopter

This section defines a simple mathematical model of a quadcopter, which is used to derive a (local) controller. The model is based on the model proposed in [21]. The introduction in this section is quite brief and only defines the basic relations without deriving them; please refer to [21] for the derivations.

Figure 3.1 is taken from [21] and shows the structure of the quadcopter model, the quantities used and the angles.

Figure 3.1: Quadcopter model structure, quantities and angles

The origin of the global axis frame x, y, z is located in the center of mass of the quadcopter. The x axis is defined as always pointing towards the north, and the z axis is defined as pointing towards the zenith. As the right-handed axis frame is used, the y axis always points to the west. The body axis frame xB, yB, zB has its origin in the origin of the global axis frame, but the axes of the body frame always align with the arms of the quadcopter. The attitude of the quadcopter (and thus of the body axis frame relative to the global axis frame) is given by the angles φ, θ and ψ. Roll, rotation around the x axis, is indicated by angle φ. Angle θ corresponds to rotation around the y axis, known as 'pitch', and angle ψ denotes 'yaw': rotation around the z axis. In this way, the total position (linear and angular) of the quadcopter is given by the vector q, defined as:

q = [x, y, z, φ, θ, ψ]⊤ (3.1)

The quadcopter is assumed to be a rigid body which is completely symmetrical around the z-axis in both weight distribution and size. This results in a diagonal inertia matrix:

I = diag(Ixx, Iyy, Izz)

The values of the elements of I are determined by the actual size and weight distribution of the quadcopter.

Each of the four motors, indexed by i ∈ {1, 2, 3, 4}, generates a force fi in the direction of the zB-axis, the z axis of the body axis frame, because they are connected to propellers. ωi denotes the angular speed of rotor i. Because the rotors and the propellers have a certain weight, each motor also generates a torque τMi around the rotor axis:

fi = k ωi²    (3.2)

τMi = b ωi² + IM (dωi/dt)    (3.3)

Here k is a lift constant, b is a drag constant and IM is the inertia moment of a rotor. The total thrust T generated by the propellers is in the direction of the zB-axis, and thus is normal to the earth's surface if the roll and pitch angles are 0. The angular torque τB expresses the torque in the body frame as a function of the rotor velocities; it determines the change in roll, pitch and yaw.

T = Σ(i=1..4) fi = k Σ(i=1..4) ωi²    (3.4)

τB = [τφ, τθ, τψ]⊤ = [lk(−ω2² + ω4²), lk(−ω1² + ω3²), Σ(i=1..4) τMi]⊤    (3.5)

where l denotes the distance from the middle of the quadcopter to the rotors.

As can be seen from (3.4) and (3.5), the rotor speeds ωi determine the total thrust, which corresponds to the height of the quadcopter, and the change in roll, pitch and yaw angle. The task of the local controller is to control these speeds in such a way that the thrust, roll, pitch and yaw converge to the desired values as fast as possible. The actual controller will be defined in section 3.5, but first section 3.2 will give an overview of the total controller structure.
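Equations (3.4) and (3.5) translate directly into code. The sketch below computes the thrust and body torques from the four rotor speeds; the sign pattern for the yaw torque assumes that rotors 1 and 3 spin opposite to rotors 2 and 4, and the rotor-inertia term of (3.3) is neglected:

    // Thrust and body torques from rotor speeds, following (3.4)-(3.5).
    // k: lift constant, b: drag constant, l: arm length. The IM*dw/dt term
    // of (3.3) is neglected, and rotors 1,3 are assumed to spin opposite
    // to rotors 2,4 (hence the alternating signs in the yaw torque).
    struct BodyForces { float T, tauPhi, tauTheta, tauPsi; };

    BodyForces forcesFromRotorSpeeds(const float w[4], float k, float b, float l)
    {
        float w2[4];
        for (int i = 0; i < 4; ++i)
            w2[i] = w[i] * w[i];                 // every term depends on omega squared

        BodyForces out;
        out.T        = k * (w2[0] + w2[1] + w2[2] + w2[3]);    // (3.4)
        out.tauPhi   = l * k * (-w2[1] + w2[3]);               // roll, (3.5)
        out.tauTheta = l * k * (-w2[0] + w2[2]);               // pitch, (3.5)
        out.tauPsi   = b * (w2[0] - w2[1] + w2[2] - w2[3]);    // yaw, (3.5)
        return out;
    }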


… in more detail in section 3.4.

3.3 Software mapping and interfacing

Software mapping

The total control loop depicted in Figure 3.2 should be evaluated at a fixed and preferably high frequency. A high loop frequency results in more accurate data from the sensors and in faster reaction to changes in the actual or desired position. A fixed loop frequency is desired because it simplifies the integration of sensor data and improves the closed-loop stability of the system. If the loop is, for example, not executed for 0.5 seconds, the quadcopter will not be stabilized during that time and will probably fall out of the air.

An important question to address is on which hardware the global control, the local controller and the IMU software should run. In our setup there are two options: the BeagleBone Linux board or the MBED LPC1768 microcontroller. The execution of one cycle of the loop consists of the following parts:

1. Obtain data from all the sensors over an I2C bus;

2. Evaluate one iteration of the IMU algorithm to find p from the sensor data;

3. Get the current reference signal from the global control algorithm and calculate e;

4. Evaluate one iteration of the controller algorithm to calculate u;

5. Translate u to PPM signals to control the ESCs that drive the motors.

Step 5 can only be performed by a system that has 4 hardware PWM channels, so the obvious choice for this step is the MBED microcontroller. Steps 1 through 4 can be executed on either the BeagleBone or the MBED. For convenience, we initially chose to run steps 1 through 4 on the BeagleBone and to communicate the values of u to the MBED via an I2C bus. Unfortunately, because the BeagleBone runs the Linux operating system and not an RTOS, it is difficult to achieve a constant loop evaluation frequency if the frequency is high, especially in combination with high CPU loads caused by other running processes. This results in a low loop frequency: we could only achieve up to 100 Hz reliably. Because of this problem, we ported all of the above steps to the MBED, which runs an RTOS. This resulted in much more constant loop frequencies, and we can reliably achieve a 400 Hz loop frequency.
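As a sketch, one iteration of this loop on the MBED could look as follows; all helper types and functions are hypothetical stand-ins for the actual modules, and the RTOS delay enforces the fixed period (4 ms here, i.e. 250 Hz):

    #include "mbed.h"
    #include "rtos.h"

    // Hypothetical stand-ins for the actual modules; only the loop structure
    // and the fixed-rate scheduling are the point of this sketch.
    struct SensorData { /* raw accelerometer, gyro, magnetometer, URF data */ };
    struct Attitude   { float roll, pitch, yaw, height; };
    struct MotorCmd   { float pwm[4]; };

    SensorData readSensorsOverI2c();               // step 1
    Attitude   imuUpdate(const SensorData& s);     // step 2
    Attitude   referenceError(const Attitude& p);  // step 3: e = r(t) - p
    MotorCmd   controllerStep(const Attitude& e);  // step 4
    void       writeMotorPwm(const MotorCmd& u);   // step 5

    void controlLoop()
    {
        while (true) {
            SensorData s = readSensorsOverI2c();
            Attitude   p = imuUpdate(s);
            Attitude   e = referenceError(p);
            MotorCmd   u = controllerStep(e);
            writeMotorPwm(u);
            Thread::wait(4);                       // 4 ms period gives a 250 Hz loop
        }
    }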

The global control algorithm should be able to use visual information from a camera in order to control the quadcopter, and therefore the only practical choice is to run this algorithm on the BeagleBone. The MBED is much slower compared to the BeagleBone (90 MHz 32-bit ARM vs. 700 MHz 32-bit ARM), so it is not powerful enough to run useful image processing algorithms at decent frame rates. So, the global control software will be implemented on the BeagleBone board. A nice result of this choice is that the local and global control are split over different processors, so heavy image processing algorithms will not influence the loop frequency of the local control loop.

Interfacing between both computation platforms

The choice to implement the entire local control and IMU on the MBED and the global control on the BeagleBone introduces a new problem: how to obtain the reference signal r(t) from the


• Gyroscope. A gyroscope measures angular velocity. To track the orientation of the sensor, the measured angular velocities can be integrated. More details on the gyroscope and its calibration are given in section 6.4.

• Magnetometer. A magnetometer measures the strength of the magnetic field it is in. Because of the earth's magnetic field, these measurements can be used to locate the earth's magnetic north, which allows the quadcopter to determine the absolute direction in which it is heading. We use the MAG3110 triaxial digital magnetometer.

• Ultrasonic range finder. An ultrasonic range finder emits ultrasonic pulses and measures the time it takes for the echo to arrive. This time is a measure of the distance to an object, for example the ground. We use the SRF02 ultrasound range finder kit to measure the distance to the ground.

To be able to find a reliable estimate of the quadcopter orientation, the data from the different sensors has to be fused. The gyroscope can only measure movement relative to its starting orientation; the accelerometer can find the absolute orientation (from the gravity vector), but is noisy and inaccurate. The magnetometer measurements are corrupted by magnetic fields induced by the other electronics and motors on the quadcopter. There are multiple methods to combine the measurements of multiple sensors into a reliable estimate. One of the most reliable methods is the Kalman filter, as described for example in [23]. The Kalman filter estimates the noise on each sensor in order to optimize the reliability of the filtered estimate. However, the reliability of the Kalman filter comes at the cost of a high computational load, the need to model the system and a rather complex implementation.

Direction Cosine Matrix based approach

To overcome the problems of the Kalman filter while still getting a reliable estimate of the orientation, simpler algorithms have been proposed. One of the most used approaches is the so-called DCM (Direction Cosine Matrix) based IMU algorithm as described in [25]. In this algorithm, the transformation from the global axis frame to the body axis frame is captured in a 3x3 matrix, the DCM matrix. In each iteration, the data of each sensor is used to estimate how the body frame has moved relative to the global frame. In the simplest implementation, the estimate of each sensor is weighted and then used to rotate the DCM matrix. Generally, the gyroscope receives a high weight because of its accuracy, and the accelerometer gets a small weight that is just large enough to compensate the drift of the integrated gyroscope signal. The weights can also be chosen dynamically, taking knowledge of the current state of the quadcopter into account, to increase the performance of the filter. A DCM-based algorithm is included in most open-source quadcopter firmwares.

We implemented a DCM-based IMU, roughly based on the methods described in [25]. This IMU works, but it has the disadvantage that it is still quite computationally demanding, because in each iteration the 3x3 matrix has to be rotated and orthonormalized. Also, it is not trivial how to choose the weights for the different sensors.

Quaternion based approach

The problems of the DCM-based approach mainly result from the fact that 9 variables (all elements of the 3x3 DCM matrix) are required to describe just 3 Euler angles: roll, pitch and yaw. (The DCM matrix only captures orientation; the z position is not incorporated in it.) The problem with just storing the Euler angles themselves is that they do not uniquely identify an orientation, because the order in which the angles are applied also determines the resulting orientation. A roll angle of 180° results in the same orientation as a pitch angle of 180° combined with a yaw angle of 180°.

There exists, however, a more efficient method to describe 3D rotations: the quaternion representation. A quaternion consists of a four-element vector and is an extension of the complex numbers [11]. In [22], an orientation filter is introduced that uses the quaternion representation of rotations to store and update the orientation. Calculations are done directly on the quaternion representation, which greatly reduces the number of computations per cycle compared to DCM-based approaches. The roll, pitch and yaw angles can be calculated from the quaternion representation whenever needed. The estimate of the orientation is determined by evaluating a Newton optimization step in each iteration. Another advantage of the quaternion-based algorithm from [22] is that it has only two tuning parameters.

In our IMU implementation, we adopted the quaternion-based orientation filter described in [22] because of its low computational load and good performance. The original article describes two versions of the algorithm: one uses only an accelerometer and gyroscope; the more advanced one can also use magnetometer data to find the absolute orientation. For the details of the algorithm, please refer to the original article. In our software, we can choose whether or not to include the magnetometer, and the IMU will automatically use the correct algorithm. Currently, the magnetometer is not used, for simplicity.
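A minimal sketch of the quaternion-to-Euler conversion mentioned above, for a unit quaternion q = (q0, q1, q2, q3); this is the standard conversion, not code from the project:

    #include <math.h>

    /* Extract roll (phi), pitch (theta) and yaw (psi) from a unit
       quaternion whenever the controller needs the Euler angles. */
    void quatToEuler(const float q[4], float *roll, float *pitch, float *yaw)
    {
        *roll  = atan2f(2.0f * (q[0]*q[1] + q[2]*q[3]),
                        1.0f - 2.0f * (q[1]*q[1] + q[2]*q[2]));
        *pitch = asinf( 2.0f * (q[0]*q[2] - q[3]*q[1]));
        *yaw   = atan2f(2.0f * (q[0]*q[3] + q[1]*q[2]),
                        1.0f - 2.0f * (q[2]*q[2] + q[3]*q[3]));
    }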

3.5 Controller design

Relations (3.4) and (3.5) show that by controlling the speeds of the four motors, we can control the total thrust and the torque in the roll, pitch and yaw directions. To control each of these four quantities, four PID controllers are used, one per quantity. PID controllers were chosen because they have a simple structure, are easy to implement and have proved effective in controlling quadcopters, as witnessed by their use in almost all open-source quadcopter firmware.

If e(t) is the error signal at the input of the PID controller, the output of a general PID controller is given by:

\[ u(t) = K_P\, e(t) + K_I \int_{-\infty}^{t} e(\tau)\, d\tau + K_D\, \dot{e}(t) \]

K_P is known as the proportional gain; it is multiplied with the error signal. K_I is the integral gain and is multiplied with the infinite-time integral of the error signal. K_D, the differential gain, is multiplied with the derivative of the error signal.

In our case, the error signal e(t) has four elements: e_φ(t) (the error signal of the roll), e_θ(t), e_ψ(t) and e_z(t). As in [21], we define the controllers for the total thrust T and the angular torques τ_φ, τ_θ and τ_ψ in the following way:

\[ T = \Big( g + K_{T,P}\, e_z(t) + K_{T,I} \int_{-\infty}^{t} e_z(\tau)\, d\tau + K_{T,D}\, \dot{e}_z(t) \Big) \frac{m}{\cos\phi \cos\theta} \qquad (3.6a) \]

\[ \tau_\phi = K_{\phi,P}\, e_\phi(t) + K_{\phi,I} \int_{-\infty}^{t} e_\phi(\tau)\, d\tau + K_{\phi,D}\, \dot{e}_\phi(t) \qquad (3.6b) \]

\[ \tau_\theta = K_{\theta,P}\, e_\theta(t) + K_{\theta,I} \int_{-\infty}^{t} e_\theta(\tau)\, d\tau + K_{\theta,D}\, \dot{e}_\theta(t) \qquad (3.6c) \]

\[ \tau_\psi = K_{\psi,P}\, e_\psi(t) + K_{\psi,I} \int_{-\infty}^{t} e_\psi(\tau)\, d\tau + K_{\psi,D}\, \dot{e}_\psi(t) \qquad (3.6d) \]


Chapter 4

Image acquisition and processing

4.1 Objective

The aim of performing image processing is to implement an efficient algorithm on the embedded processor to detect, track and avoid a ball.

The on-board processor (BeagleBone) must generate appropriate control signals to move the quadcopter to a new position in order to avoid a ball coming towards it. In this respect, the ball tracking algorithm must robustly detect a ball under different conditions. The ball detection is to be merged with the global control algorithm, which tracks and senses the position and direction of the incoming ball in order to issue control signals to the quadcopter.

Note: The algorithm must output the center, radius and angle (direction of movement) of the detected region.

4.2 Requirements and Preliminaries

The quadcopter must in principle track the ball movement quickly enough to avoid being hit. In this project, to test this principle, we chose a red ball.

To show the concept of ball detection, MATLAB was initially used with a camera on a local PC to detect a red-coloured ball. This was necessary to test a basic working detection algorithm. The algorithm was later implemented on the BeagleBone processor. An external Logitech C210 webcam was used on the local PC; we used the same webcam for ball detection on the aerial vehicle.

The processor (BeagleBone) used in this project supports running Linux, and the Logitech webcam supports V4L (Video4Linux) drivers. This gave us the advantage of using a Logitech webcam supported on Linux. We use the OpenCV library for the image processing algorithm on the BeagleBone. OpenCV is a C library designed for computer vision problems. It provides many useful functions for manipulating and operating on images. OpenCV has bindings to V4L, which theoretically allows for a seamless integration of the webcam with the library functions.

The OpenCV library contains functions for capturing a frame from the webcam, though for reasons unknown this did not work with the combination of the selected platform, OS and camera. OpenCV internally uses V4L to capture frames. A library that extends OpenCV with additional V4L functions was created to capture frames from the webcam in formats OpenCV can work with.

Selecting Color Space

The first decision to be made is choosing the color space to operate on. Three commonly used color spaces are RGB, YUV and HSV.

• RGB is by far the simplest color space to work with. Composed of red, green and blue channels, it makes it easy to separate out (combinations of) colors. However, the RGB format is not very suitable for real-world images, as brightness is part of every channel.

• YUV is the color space we initially chose for our implementation. The chrominance channels (U and V) define the color and Y is the luminance. YUV was initially used in the algorithms; however, it was later found that the U and V channels do not work very well under different levels of brightness and give false or noisy detections.

• The HSV color space, on the other hand, is similar to the other color spaces, but its representation forms a single cone, as shown in Figure 4.1. We later switched to the HSV color space as it gave better stability under varying brightness of the background.

A brief outline of the HSV color space follows [4]:

Hue: This is the basic color in the color wheel, ranging from 0 to 360 degrees, where both 0 and 360 degrees are red. In OpenCV, this value ranges from 0 to 179. Note that red merges at the beginning and end of the wheel circumference.

Saturation: This defines the shade of the hue, i.e. how pure or dull the color is. It ranges from 0 to 100, where 100 is fully saturated and 0 is gray.

Value: This defines how bright the color is, ranging from 0 to 100, where 100 is as bright as possible (white) and 0 is as dark as possible (black). In OpenCV, saturation and value range from 0 to 255. Appropriate conversion needs to be performed for the chosen color ranges to work correctly in OpenCV.

Figure 4.1: HSV color space

It is much easier to detect colored areas using the HSV (hue, saturation, value) channels. The HSV color space has the advantage that a single number (hue) identifies the color, in spite of the different possible shades of the same color, ranging from relatively light to darker shades. More precisely, H and S can be obtained from the UV plane by moving to a polar coordinate system, where hue is the angle. In our tests HSV performed better than YUV: the hue angle in HSV does not change with the brightness of the light source, whereas U and V in YUV do.

Note: All of the images that are shown in the forthcoming sections were captured and tested on

the embedded platform of the quadcopter itself, unless otherwise stated.

4.3 Concept of ball detection and tracking

In the following section, a high-level overview of the global control architecture is given, and the two algorithms implemented for ball detection on the processor are discussed. These algorithms are later compared on their feasibility for detecting moving objects.

Figure 4.2 depicts the high-level description of the global control. It comprises the following two stages:

1. Fast ball detection

2. Path extraction and avoidance

Figure 4.2: High level description of global control

Two algorithms were implemented on the processor to test their accuracy, speed and feasibility. The algorithms are based on binarization and the Hough transform from image processing theory, and they differ in how they have been implemented. For the Hough transform, OpenCV's library provided the essential functions to calculate the region of interest of the ball. For binarization, straightforward techniques were implemented in C to manipulate the image matrices, detect the region of interest and reduce noise. In the binarization implementation, wherever required, we handpicked only lightweight functions from the OpenCV library and wrote the rest as raw code, so as to keep the algorithm fast. The idea of both algorithms remains the same: compute the region of interest of the ball for each frame. A stepwise description of the algorithm as implemented with OpenCV is given below.

Note: The first four steps are common to both algorithms; steps 5 and 6 are performed differently for the two.


Ball detection

Step 1: Capture a frame from the camera: The webcam supports up to 30 fps. Frames are queued in OpenCV's buffer until they are consumed. We kept the algorithms as light as possible so as to consume frames from the buffer quickly and reduce delay. In OpenCV, camera frame capture consists of two main steps: initializing the camera and capturing frames from the buffer one by one. (The frame capture is our own implementation in a new library, due to the incompatibility with OpenCV mentioned in section 4.2.)

openCamera(): Opens the default USB camera on the embedded processor.

grabFrame(.): Grabs a frame from the frame buffer.
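A minimal sketch of the capture loop built on these two wrapper functions; their exact signatures are not given in the report, so the ones below are assumptions:

    #include <opencv/cv.h>

    extern int       openCamera(void);        /* assumed signature */
    extern IplImage *grabFrame(int cam);      /* assumed signature */

    void captureLoop(void)
    {
        int cam = openCamera();               /* open default USB camera */
        for (;;) {                            /* until key q is pressed */
            IplImage *frame = grabFrame(cam); /* one frame from buffer */
            if (frame == NULL)
                continue;                     /* buffer empty, retry */
            /* steps 2-6 of the detection pipeline go here */
        }
    }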

Step 2: Convert to the HSV color format: In OpenCV, incoming frames are stored in BGR (blue, green, red) format, contrary to the RGB format that is common elsewhere. We convert the image to the HSV color format. In OpenCV, the following function call converts the frame to the required color format:

cvCvtColor( img, imgHSV, CV_BGR2HSV )

where the first parameter is the source frame and the second parameter is the destination frame in HSV format.

Step 3: Subsample the image: Subsampling is an important step that governs the trade-off between faster processing and accurate computation. Although this step could be skipped, subsampling results in much faster processing. We define the subsampling rate as 4, 8 or 16, where the dimensions of the captured image are reduced by the given factor. For example, with a frame capture resolution of 640x480 and a subsampling rate of 4, the image size becomes 160x120. Since the computation has to run on an embedded processor, a higher subsampling rate is preferred. However, with a subsampling rate of 8 and above, the ball is not detected accurately at longer distances, as the detected region merges with the noise. We chose a default subsampling rate of 4 for our camera resolution of 640x480, which allows detecting a ball at distances greater than 3 meters.

As an example, a test image is shown to illustrate the effect of different subsampling rates. In Figure 4.3 the first image is processed as-is and the ball region is detected correctly. The second image is subsampled by a factor of 4 and yields a similar result. For the third image, however, with a subsampling rate of 8, the detected ball region is wrongly placed at the rightmost corner of the frame.

Note that the computation time per frame decreases with the subsampling rate (the pixel count drops quadratically with it), yielding a higher frame rate (fps, frames/sec) for higher subsampling rates.
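A minimal sketch of nearest-neighbour subsampling of an IplImage by an integer factor; the report does not show its actual implementation:

    #include <opencv/cv.h>

    IplImage *subsample(const IplImage *src, int factor)
    {
        IplImage *dst = cvCreateImage(
            cvSize(src->width / factor, src->height / factor),
            src->depth, src->nChannels);

        for (int y = 0; y < dst->height; y++) {
            const char *srow = src->imageData + (y * factor) * src->widthStep;
            char *drow = dst->imageData + y * dst->widthStep;
            for (int x = 0; x < dst->width; x++)
                for (int c = 0; c < src->nChannels; c++)
                    drow[x * dst->nChannels + c] =
                        srow[x * factor * src->nChannels + c];
        }
        return dst;
    }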

Step 4: Threshold the image on the color channels: After subsampling, the size of the frame is reduced. We consider the color channels of the image (hue, saturation) and threshold the image on these channels.

The threshold values are determined by the range of values the target color takes in the color space. In our case, the hue of the red ball was found to be in the ranges (0:25) and (160:178). For saturation, the range was found to be (160:255). It is important to note that red is the only color in the HSV color space that starts and ends in itself. Any pixel with values within the given thresholds can be considered red. The red regions


(a) No subsampling (b) Subsampled by 4

(c) Subsampled by 8

Figure 4.3: Effect of different subsampling rates

are assigned the value 1 (white); the remaining regions are regarded as noise and assigned the value 0 (black). We apply the full range to value (V) so as to account for a wide range of background luminance. The chosen ball, after all, was not completely red.

Using OpenCV, this can be done with a combination of the CvScalar and cvInRangeS functions with suitable parameters. OpenCV already uses a fast approach to thresholding, so it was not required to implement our own custom method.

Note: The exact values of these thresholds were determined in C code by looping through all possible values of hue and saturation on test images to locate the desired region with the given color. Although better methods to find thresholds are available, this yielded accurate results.
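A minimal sketch of this thresholding step with the OpenCV C interface, using the hue and saturation ranges quoted above. Because red wraps around the hue axis, two cvInRangeS calls are combined with cvOr; the function name is ours:

    #include <opencv/cv.h>

    /* hsv: 3-channel HSV frame, mask: 8-bit single-channel output */
    void thresholdRed(const IplImage *hsv, IplImage *mask)
    {
        IplImage *low  = cvCreateImage(cvGetSize(hsv), IPL_DEPTH_8U, 1);
        IplImage *high = cvCreateImage(cvGetSize(hsv), IPL_DEPTH_8U, 1);

        /* H in (0:25) and (160:178), S in (160:255), full V range */
        cvInRangeS(hsv, cvScalar(0, 160, 0, 0),
                        cvScalar(25, 255, 255, 0), low);
        cvInRangeS(hsv, cvScalar(160, 160, 0, 0),
                        cvScalar(178, 255, 255, 0), high);
        cvOr(low, high, mask, NULL);

        cvReleaseImage(&low);
        cvReleaseImage(&high);
    }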

Step 5: Filter ROIs from the image: After thresholding the image, it is necessary to filter out the unwanted areas to find the interesting region. Note: filtering is performed differently for the two algorithms. This section gives an overview of the methods used to find the ROI in each algorithm.

• The binarization technique is applied to the thresholded image. Using this technique, we manipulate the 1's and 0's in the image to find the region of interest and suppress noise. OpenCV 2.x provides a fast and efficient C interface to matrix operations, which was frequently used in this algorithm. The steps followed to find the ROI are briefly listed below; a sketch of the projection of step 1 follows after the figure discussion:

1. Sum up all rows and columns, find interesting regions.

2. Create bounding regions from the thresholded image of the previous step. An image may have multiple bounding regions. We are interested in the main ball region; the others eventually need to be disregarded as noise.

3. Filter out peaks smaller than 2% of the image size. This removes small specks of noise in the image.


4. Remove duplicate/overlapping regions by checking the detected regions against each other. This reduces false detections to a certain degree.

5. Of the remaining regions, only those in which at least 60% of the pixels are white are kept, and regions with near-square shapes are preferred. In most cases, the ball region is correctly determined in this step, under different lighting conditions.

6. If more than one bounding region remains at the end of the previous stage, the previous step is applied once more with a stricter condition.

7. If multiple bounding regions still remain, a dictionary is consulted which holds the location and radius of the object from the previous frame (the non-zero locations are stored for the five previous frames and averaged). The most probable region nearest to the previously determined region is chosen, or the largest detected region is returned. In practice, this step was hardly ever necessary.

Figure 4.4 shows four steps of the algorithm. Figure 4.4a shows the thresholded image, 4.4b shows how the sums of rows and columns are computed to find the interesting regions, and 4.4c depicts the main bounding ball region together with some detected noise in the background. Finally, step 7 of the algorithm is applied to obtain a clean detected ball region, as shown in Figure 4.4d.

(a) Thresholded image (b) Sum of rows/columns (Step 1)

(c) Bounding region(s) (Step 3) (d) Final ROI (Step 7)

Figure 4.4: Example of binarisation algorithm

Note that bounding boxes are not actually created at this stage; they are merely shown here to illustrate the regions that are detected. They are created in Step 6.
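A minimal sketch of step 1 of this procedure (the row/column projections); the thresholded image is assumed to be an 8-bit single-channel IplImage with white = 255:

    #include <opencv/cv.h>

    void projectRowsCols(const IplImage *bin, int *projX, int *projY)
    {
        for (int x = 0; x < bin->width; x++)  projX[x] = 0;
        for (int y = 0; y < bin->height; y++) projY[y] = 0;

        for (int y = 0; y < bin->height; y++) {
            const unsigned char *row = (const unsigned char *)
                (bin->imageData + y * bin->widthStep);
            for (int x = 0; x < bin->width; x++) {
                if (row[x]) {   /* white pixel of a candidate region */
                    projX[x]++; /* column sums */
                    projY[y]++; /* row sums */
                }
            }
        }
        /* Peaks in projX/projY mark candidate regions; peaks smaller
           than 2% of the image size are filtered out in step 3. */
    }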

• The other algorithm uses Hough circle detection, based on the Hough transform. Details on the Hough transform can be found in [9], [12]. It computes the gradient of a given image and accumulates each non-zero point from the gradient image into every point that is one radius distance away from it, in order to detect a circular


region. This is a built-in OpenCV feature, which accurately determines circular objects in a grayscale image. The OpenCV Hough function performs Canny edge detection [3] on the input image before the actual Hough transform. It is beneficial to apply smoothing first, to obtain a nice gradient around the edges of the circles. The parameters of the Hough function must be tuned to give good results and avoid multiple detections. The steps followed in this algorithm are listed below; a code sketch follows after the figure:

1. Apply a Gaussian blur to reduce noise in the image and avoid false circle detections. GaussianBlur is an OpenCV function whose parameters can be tuned to provide the required smoothing.

2. Apply morphological operations to the thresholded image. "Morphological operations use something called a structuring element that is smaller than the image. The structuring element slides over the image and transforms it. It either creates new pixels or erases existing ones. Based on the requirement, erosion or dilation can be performed" [13]. We perform dilation followed by erosion so as to remove unwanted noise in the image.

3. After the previous step, the Hough transform is applied to find circles in the image. We supply the grayscale image, and the output is information on the circular region(s) detected. The parameters of this function can be tuned to set the maximum and minimum radius of the detected circles, the distance between different detected regions, etc. The usage of the Hough function can be found in [18].

4. If multiple circles are detected, the circle with the maximum area is taken as the final detected region, on the assumption that the majority of the noise has already been removed and the biggest circular region is the ball.

The Hough circle detected for a frame is shown in Figure 4.5.
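A minimal sketch of this variant using the OpenCV C interface the report refers to (cvSmooth, cvHoughCircles); the parameter values are illustrative assumptions, not the project's tuned values:

    #include <opencv/cv.h>

    CvSeq *detectBallHough(IplImage *thresholded, CvMemStorage *storage)
    {
        /* Step 1: Gaussian blur against noise and false circles. */
        cvSmooth(thresholded, thresholded, CV_GAUSSIAN, 9, 9, 2, 2);

        /* Step 2: dilation followed by erosion to remove noise. */
        cvDilate(thresholded, thresholded, NULL, 2);
        cvErode(thresholded, thresholded, NULL, 2);

        /* Step 3: accumulator resolution, min distance between circles,
           Canny high threshold, accumulator threshold, radius bounds. */
        return cvHoughCircles(thresholded, storage, CV_HOUGH_GRADIENT,
                              2, thresholded->height / 4, 100, 40, 5, 120);
    }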

(a) Original frame (b) Conversion to HSV color

(c) Thresholded image (d) Detected ball region

Figure 4.5: Example of hough detection algorithm

Step 6: Construct the bounding box: Having found the region of interest in the image, a bounding region is constructed around it. This can be a square-shaped region (after binarization) or a circular region (from the Hough detector). This bounding region (obtained for each frame) is passed on to the next stage to find the position and direction of movement of the ball.

Path extraction and avoidance

Step 7: Compute the ball radius: A two-slot buffer is created; each slot holds the position and radius of the detected region in a frame. Figure 4.6 shows the structure of the buffer. A new element is enqueued into the buffer and the processed element is popped from it.

Figure 4.6: A dual buffer to track successive frames

A weighted running average of the radius is computed over the detected regions of successive frames. A higher weight is assigned to the past radius average and a lower weight to the radius of the detected region in the latest frame (the weights sum to 1). Considering the past averages and giving less importance to the present data helps to determine the ball radius with greater accuracy.

Step 8: Compute the ball direction (LR, RL, TB, BT): The direction of ball movement can be left-right, right-left, top-bottom, bottom-top or unknown. To determine the ball direction, a weighted running average of the angle of ball movement is updated with each detected frame. Higher weights are assigned to the past averages, which improves the probability of determining an accurate ball direction. Further, we use the atan2() function from math.h to determine which quadrant (Q) the movement angle is in. The direction of the ball follows from the combination of quadrants the angle falls in:

(−45°, 45°): Right (Q4, Q1)
(45°, 135°): Up (Q1, Q2)
(135°, −135°): Left (Q2, Q3)
(−135°, −45°): Down (Q3, Q4)
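A minimal sketch of this quadrant lookup; the angle convention matches atan2() and the boundaries above, but the exact boundary handling is an assumption:

    #include <math.h>

    typedef enum { DIR_UNKNOWN, DIR_RIGHT, DIR_UP, DIR_LEFT, DIR_DOWN } Direction;

    Direction lookupDirection(float x1, float y1, float x2, float y2)
    {
        /* Movement angle between two buffered ball positions, degrees. */
        float angle = atan2f(y1 - y2, x1 - x2) * (180.0f / 3.14159265f);

        if (angle > -45.0f  && angle <=  45.0f) return DIR_RIGHT;
        if (angle >  45.0f  && angle <= 135.0f) return DIR_UP;
        if (angle > -135.0f && angle <= -45.0f) return DIR_DOWN;
        return DIR_LEFT;  /* remaining range (135, 180] and (-180, -135] */
    }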

Step 9: Ball avoidance: The algorithm continuously tracks the radius and direction of the ball with each passing frame. Once it is determined that the radius of the ball has continuously increased up to the maximum allowable threshold, the quadcopter autonomously moves in a direction perpendicular to the plane of the ball's movement. The maximum radius threshold was found by testing how close the ball could be allowed to come to the quadcopter, given the reaction time of the quadcopter and the frame rate of the algorithm.


Figure 4.7: Radius of the ball grows as it nears, seen by the quadcopter

4.4 Pseudocode

Ball detection

Algorithm 1 describes the procedure to find an object using the binarization technique.

Algorithm 2 describes the procedure used for Hough circle detection.

Path extraction and avoidance

The complete ball avoidance algorithm comprises Algorithms 3, 4 and 5.

Algorithm 1 Binarization

Require: key q, currentFrame, hsvFrame, thresholded, projX, projY, boundingBox, H, S
Ensure: Camera structure is initialized and the first frame is captured from the camera.
1: while key q is not pressed do
2:   if frame ≠ NULL then
3:     currentFrame ← frame
4:     hsvFrame ← convertToHSV(frame)
5:     hsvFrame ← subsample hsvFrame (factor: 4, 8 or 16)
6:     thresholded ← CALL thresholdHSV(hsvFrame, H and S) (threshold values)
7:     CALL findCoRegions(thresholded, projX, projY) (recursive call to form a list of interesting regions)
8:     boundingBox ← CALL createROI(projX, projY) (intersection between given regions, remove false positives, compute mean of region, determine best ROI)
9:     CALL AvoidBALL(boundingBox)


Algorithm 2 Hough detection

Require: key q, currentFrame, hsvFrame, thresholded, hsvMin, hsvMax, SatRange, circles, circle, factor
Ensure: Camera structure is initialized, the first frame is captured from the camera, and hsvMin and hsvMax are set to proper threshold values.
1: while key q is not pressed do
2:   if frame ≠ NULL then
3:     currentFrame ← frame
4:     hsvFrame ← convertToHSV(frame)
5:     hsvFrame ← subsample hsvFrame (factor: 4, 8 or 16)
6:     thresholded ← CALL thresholdHSV(hsvFrame, hsvMin, hsvMax, SatRange) (performs thresholding)
7:     CALL cvSmooth(thresholded, ...) (performs Gaussian smoothing)
8:     CALL ApplyMorph(thresholded, ...) (apply morphological operators to remove noise)
9:     circles ← cvHoughCircles(thresholded, ...) (tune parameters)
10:    circle ← circles (filter best circle)
11:    CALL AvoidBALL(circle)

Algorithm 3 Avoid Ball

Require: totalDrv, totalRad, realRadius, realDirection, found
Ensure: Buffer struct with elements radius and position is initialized.
1: if bufferSize > 2 then
2:   buffer → POP
3: else
4:   buffer ← PUSH (radius, position) from the current frame
5: found ← CALL trackRadius()
6: realDirection ← CALL trackDirection()
7: if found then
8:   CALL issueCommand(realDirection) (move the quadcopter in the required direction)

Algorithm 4 Track radius

Require: prevRad, currRad, totalRad, MAX-Radius, WEIGHT-ACCUMULATED-RAD, WEIGHT-CURRENT
Ensure: MAX-Radius is assigned a value of half the vertical resolution of the camera; the buffer struct is initialized.
1: prevRad ← totalRad
2: currRad ← (buffer[0].radius + buffer[1].radius)/2
3: totalRad ← WEIGHT-ACCUMULATED-RAD ∗ prevRad + WEIGHT-CURRENT ∗ currRad (compute the running mean radius to reduce errors)
4: return totalRad > MAX-Radius


Algorithm 5 Track direction

Require: prevDrv, currDrv, totalDrv, WEIGHT-CURRENT, WEIGHT-ACCUMULATED-DIR, slopeAngle
Ensure: (x1, y1), (x2, y2) position coordinates are available from two positions in the buffer.
1: prevDrv ← totalDrv (assign the previous accumulation)
2: slopeAngle ← atan2((y1 − y2), (x1 − x2)) ∗ 180/π
3: currDrv ← slopeAngle
4: totalDrv ← WEIGHT-ACCUMULATED-DIR ∗ prevDrv + WEIGHT-CURRENT ∗ currDrv (compute the running mean of the slope angle of the ball to reduce errors)
5: return LOOKUP(totalDrv) (computes direction based on quadrant)

4.5 Testing under various scenarios

This section describes some results obtained after rigorous testing of the ball detection algorithm under varied lighting conditions. The tests were performed at different times during the evening. For these tests, the binarization algorithm was used, as it was almost three times faster than its Hough counterpart.

• Figure 4.8 shows the ball movement in four different frames captured by the camera under medium light conditions. The ball radius and detection are recorded for each frame. It can be seen that the radius increases with successive frames and the direction of the incoming ball shifts to the right. The direction of movement may not be intuitive at first, as it takes a few frames to correct any error in a new direction; the running average is continuously computed to reduce this error.

(a) Radius: 12, Dir: Unknown (b) Radius: 20, Dir: Right

(c) Radius: 28, Dir: Right (d) Radius: 48, Dir: Right

Figure 4.8: Test of detection under medium light conditions

Note: An important measure taken here is to ignore frames without any detection; if more than five consecutive frames have no detections, the frame buffer is flushed and all averages are reset. This was found to work very well, as the moving averages are not affected when no ball region is detected.

• Figure 4.9 shows the ball movement in six different frames captured by the camera under cloudy/shadowy light conditions. The ball radius and detection are recorded for each frame. The direction first shifts upwards and then deviates to the right. The change is minute, but the algorithm detects it accurately.

(a) Radius: 16, Dir: Up (b) Radius: 24, Dir: Up

(c) Radius: 40, Dir: Up (d) Radius: 208, Dir: Right

Figure 4.9: Test of detection in shadowy conditions

• Figure 4.10 shows the ball movement in two different frames captured by the camera under bright light conditions. In this set of frames, the user moves the ball away from the camera, so the radius of the ball decreases. We get 'Unknown' as the direction when the angle of ball movement falls exactly on the boundary between the quadrant combinations we consider.

(a) Radius: 32, Dir: Unknown (b) Radius: 24, Dir: Up

Figure 4.10: Test of detection under bright light

More complex scenarios were not recorded; however, binarization proved fast and efficient, yielding accurate ball detections and in some cases better accuracy than the Hough detection algorithm.


4.6 Benchmarks

Table 4.1 shows the benchmark results for the two algorithms. The camera can capture a maximum of 30 frames per second. With the binarization technique, we obtain an average of 22 frames per second. Hough detection results in an average of 7 frames per second, which is too low for fast ball detection and tracking. This could have been even faster if our camera supported raw HSV images (pre-decoded by the hardware of the camera).

The last two columns of Table 4.1 show the detection rates of the two algorithms for different ball speeds. Binarization gives an average detection rate of 86% when the ball moves fast; the remaining 14% are either false detections or no detections. Under the same conditions, Hough circle detection gives deteriorated results (this may be due to the slow frame capture when Hough is used). With slow to medium ball speeds, binarization gave an average success rate of 88%, while Hough performs better with a 95% success rate. The 'fast' case was measured by throwing the ball at the copter from a distance of 3 meters; the slow/medium case was recorded with the user holding the ball by hand and bringing it closer to the copter from the same distance. Each recording lasted a minute, and five recordings were taken in each case.

Binarization offers a far better speed trade-off. It is interesting to note that, because the same thresholding criterion is used, the results are similar in both cases. The Hough transform adopts several advanced techniques for detecting circles in an image, which results in heavier computation. Our algorithm detects the ball in successive frames and adopts several simple and efficient strategies instead. Post-thresholding techniques are applied, as discussed earlier in section 4.3, to find a region with a higher probability of matching the chosen color and shape of the object.

Algorithm           frames per second   fast ball detection (%)   slow/medium ball detection (%)
With Hough circle   7                   74%                       95%
With binarization   22                  86%                       88%

Table 4.1: Comparison of the algorithms adopted for ball detection and tracking

Note: The fps given in the table includes the time for frame capture, ball detection and tracking, giving position (center) and radius information for each detected region in successive frames. The values were averaged over 458 frames in 20.26 seconds (22 fps) and 147 frames in 20.68 seconds (7 fps) for algorithm 1 and algorithm 2, respectively.

Ball tracking in real-time

Figure 4.11 shows a screenshot of real-time ball detection. This was a simulation done using a Python script on the BeagleBone. A pipe (a unidirectional data channel used for inter-process communication) was created between the ball detection process and the Python script on the BeagleBone, sending the radius, position and direction coordinates for each successive frame. The simulation helps to visually track the ball movement in the forward and sideways directions. The size of the circle depicts the size of the ball based on its radius, and a small arrow projected from the center shows the (moving average of the) direction in which the ball moves. The circle on the screen moves along with the directional movement of the ball and grows smaller or larger based on how far from or close to the quadcopter camera the ball is.


Figure 4.11: Real time ball detection, simulation with BeagleBone

Unfortunately, due to the partial instability of the hovering quadcopter, the ball detection could not be field-tested. However, the ball detection tested with a live camera on the quadcopter itself suggests that field testing should be straightforward, as it only requires issuing the already-implemented commands to move the quadcopter in any of the four directions. A link to a video displaying smooth real-time ball tracking computed on board the quadcopter is given in the chapter on experimental results.

4.7 Pros/cons of the algorithms

Tracking the ball has not been an easy task. The main initial challenge has been the changing illumination of the surroundings. Appropriate threshold values were chosen after many rounds of rigorous testing. The color space was changed to HSV for better stability of detection under varied illumination.

Pros: With the aim of keeping the algorithm light and simple, the binarization technique was applied and shown to work in principle. It is fast, at least 20 fps on average, and detects the ball under varied illumination conditions. The algorithm was also tested with different ball speeds and was found to work well for both slow and fast movements of the ball towards the quadcopter. With the Hough transform, the average fps is always less than 10, which is not a feasible solution for an embedded processor.

Detection of any other color is possible by just changing the threshold range.

Cons: With too much noise in the background or similar-colored objects present, both algorithms fail to keep track of the ball alone and start detecting other regions in the background. In such a scenario it is difficult to keep track of the correct position and direction of the desired object, even with the information on the detected region from previous frames. False detections due to similar-shaped or similar-colored objects in the background have to be carefully eliminated, and the algorithm needs to be extended to take this into account. Additionally, more advanced techniques such as back projection [2], which determines how well the pixels of an image fit the distribution of pixels in a histogram model, could be tested for more reliable detection against complex backgrounds. However, devising and applying the current algorithms already took a significant amount of time, leaving little room to try newer techniques.

The bottleneck of the whole detection and tracking process was found to be the detection algorithm itself, not the frame capture. The custom algorithm has therefore been compared to OpenCV's Hough circle detection algorithm to show the trade-off between timing and detection success rate.


Chapter 5

High level control

To control the quadcopter, two different use-cases can be identified. The first one is where only the local control is active and a user controls the movements of the quadcopter remotely. The second case is where the global control algorithm is running on the BeagleBone and controls the direction of the quadcopter. In essence, both give input to the local control software on the MBED, but from different sources.

To allow the MBED to receive this input from the BeagleBone, a protocol was defined over I2C. This logical connection can also be seen in Figure 3.5, where thread 1 on the MBED handles the communication over the I2C bus with the BeagleBone. The defined protocol is basically a form of shared memory. The I2C protocol itself was originally designed to interact with registers and always requires an address. We exploited this by hard-coding addresses and coupling them to variables in a structure. Both the BeagleBone and the MBED create these structures in their memory, and whenever a variable is updated, the update is communicated over I2C.
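A minimal sketch of this shared structure; the field names and register addresses are illustrative assumptions, since the report only states that hard-coded addresses map to variables in a structure mirrored on both processors:

    #include <stdint.h>

    enum {                        /* hard-coded register addresses */
        REG_TARGET_ROLL   = 0x00,
        REG_TARGET_PITCH  = 0x02,
        REG_TARGET_YAW    = 0x04,
        REG_TARGET_HEIGHT = 0x06,
    };

    typedef struct {              /* mirrored on BeagleBone and MBED */
        int16_t target_roll;
        int16_t target_pitch;
        int16_t target_yaw;
        int16_t target_height;
    } SharedState;

    /* On an I2C write to address reg, both sides update their copy,
       keeping the two structures synchronized. */
    void sharedWrite(SharedState *s, uint8_t reg, int16_t value)
    {
        switch (reg) {
        case REG_TARGET_ROLL:   s->target_roll   = value; break;
        case REG_TARGET_PITCH:  s->target_pitch  = value; break;
        case REG_TARGET_YAW:    s->target_yaw    = value; break;
        case REG_TARGET_HEIGHT: s->target_height = value; break;
        }
    }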

This basic communication and synchronization module is common to both user control and automatic control of the quadcopter. On top of this, two software components were created: a user interface and the high-level global control, both of which are discussed in this chapter.

5.1 User interface

To allow easy control of the quadcopter, an effort was made to implement a friendly user interface. A screenshot of the interface at work is given in Figure 5.1.

At the top of the screen, the status of the quadcopter/local control and the battery voltage are displayed. The next row of parameters, under the dotted line, indicates the target and real position of the quadcopter. The target roll, pitch, yaw and height can all be controlled by the user using the keyboard, and the measured values are automatically updated once every second. The dotted box on the screen shows the values of the different PID controllers. A benefit of this control interface is that these values can be adjusted at run time, giving the user the ability to tweak the PID controllers while observing the behaviour. How this run-time tuning of PID values was used to automatically find optimal controller values is discussed in a later section. Furthermore, the bottom part of the screen indicates the different modes that can be selected, most of which exist for testing and calibration purposes.

The control interface software executes on the BeagleBone and can be started through an ssh


Figure 5.1: Screenshot of local control interface

session from any network-connected device. No installation of tools is required on the controlling machine. The downside of running the interface on the BeagleBone is that the screen information has to be sent over the wireless connection, consuming bandwidth. However, the refresh frequency can be controlled, and the screen is updated in an efficient way: only the changed values are transmitted.

5.2 Global control

The goal set for the global control is that it must be able to avoid a red ball.

The detection algorithm described in chapter 4 shows how ball detection is performed in real-time within the bounds of the computational resources available on the quadcopter. Given that the ball detection algorithm not only detects a red ball but also indicates the direction the ball is heading in, it is easy to implement a global control strategy: every time an approaching ball is detected, the quadcopter should move out of its path.

Utilizing the communication class that talks to the local control on the MBED, it is easy to let a global control class steer the quadcopter; hence all components required for the global control are in place.

As discussed in chapter 4, an interface has been created to successfully test the ball movement and show how well detection works on the static quadcopter with a running processor and live camera. However, due to the instability of the copter at the time of writing, we were unable to field-test the ball detection with a flying copter. Since the local control could not keep the quadcopter completely stable, the global control superimposed on top of it would have made the quadcopter even more unstable. Therefore, although tests have validated that detection works well under different conditions, the global control could not be field-tested.


Chapter 6

Calibration and tuning

6.1 Motor calibration

The control algorithm described in section 3.5 calculates an output that is proportional to the desired thrust for each propeller. A problem, however, is that the thrust cannot be controlled directly. The propellers are driven by brushless motors, which are controlled by Electronic Speed Controllers (ESCs). The ESCs are controlled by an analog PPM signal, in which the length of a periodic pulse determines the power delivered to the motor.

To translate the desired thrust (for example expressed in grams) into a PPM pulse width, one has to identify the relation between them. When the relation is known, it can be used to translate the outputs of the controller into actual control signals for the ESCs.

To identify the relation between pulse width and thrust, we measured the transfer function for each of the four combinations of ESC, motor and propeller. It is necessary to calibrate all combinations independently because the ESCs, motors and propellers vary significantly in sensitivity and efficiency. We measured the delivered thrust by mounting the quadcopter in such a way that it can rotate around one axis, and letting the arm opposing the running motor push on a digital scale. This setup is shown in Figure 6.1. The available pulse width range for the PPM signal is quantized into 255 steps, which is more accurate than the sensitivity of the ESCs themselves.

A typical result of the identification is depicted in Figure 6.2. The slightly concave shape of the curve is consistent for all motors. In the figure, a first- and second-order Least Squared Error (LSE) fit are plotted. Because the shape of the curve is consistent for all motors, we chose the second-order fit to translate thrust (the output of the controller) into motor signals. The fit parameters for each motor are stored on the MBED flash storage and are read by the class in the local control software that controls the motors. The result of the calibration is that all motors deliver the same thrust when commanded to, independent of differences in propeller, motor and ESC efficiency.
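A minimal sketch of applying the per-motor fit at run time: the second-order fit gives thrust as a function of pulse width, so the motors class has to solve the quadratic the other way around. Coefficient and function names are assumptions; the actual values are read from the MBED flash storage:

    #include <math.h>

    typedef struct {
        float a, b, c;   /* thrust = a*ppm^2 + b*ppm + c (LSE fit) */
    } MotorFit;

    int thrustToPpm(const MotorFit *fit, float thrust_grams)
    {
        float disc = fit->b * fit->b
                   - 4.0f * fit->a * (fit->c - thrust_grams);
        if (disc < 0.0f)
            return 255;                     /* beyond the fitted range */

        float r1 = (-fit->b + sqrtf(disc)) / (2.0f * fit->a);
        float r2 = (-fit->b - sqrtf(disc)) / (2.0f * fit->a);
        float ppm = (r1 >= 0.0f && r1 <= 255.0f) ? r1 : r2;

        if (ppm < 0.0f)   ppm = 0.0f;       /* clamp to 255-step range */
        if (ppm > 255.0f) ppm = 255.0f;
        return (int)(ppm + 0.5f);
    }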

Fine-tuning by balancing opposing motors

Calibrating the motors greatly helps to compensate differences among the ESCs, motors and propellers. However, even after calibration, if two opposing motors are set to the same thrust level, the quadcopter still tends to fall in a certain direction. This is because the quadcopter, which is basically an inverted pendulum, is very sensitive to net torques, in


Figure 6.1: Setup for measuring the thrust curve for each ESC+motor+propeller combination

Figure 6.2: Typical measured relation between ESC pulse width (range 0-255) and generated thrust in grams


combination with the limited accuracy of the calibration. To fine-tune the calibration, after the thrust calibration we balance both opposing motor pairs to deliver exactly the same torque, by manually increasing or decreasing the motor signals. The results of these balancing tests, which are conducted over the entire range of the motors, are also incorporated in the fit. After this fine-tuning, the calibration is no longer the bottleneck; the sensitivity of the ESCs is now the limiting factor.

Battery voltage compensation

When the available energy in the lithium-polymer battery decreases, the voltage slowly drops. Because the ESCs have no battery voltage compensation, a lower voltage results in a lower thrust at the same ESC input signal. This is not desired, as the controller parameters are tuned for a specific motor behaviour. To be able to compensate for this, we identified the relation between battery voltage and delivered thrust. Because the voltage decrease is relatively small, we found a first-order approximation sufficient to compensate for battery voltage changes. The local control software can compensate for the voltage drop since it constantly monitors the battery voltage using an A/D converter. It is assumed that the battery voltage is always in the range of 9.7 V to 12.0 V. If the battery voltage decreases, the signal going to the ESCs is adjusted using the fitted first-order approximation. This is again implemented in the motors class of the local control software.
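A minimal sketch of one possible form of this compensation; the report only states that a first-order fit over the 9.7 V to 12.0 V range is used, so the reference voltage, gain and exact formula below are assumptions:

    #define V_REF  12.0f   /* assumed voltage at calibration time */
    #define K_COMP 0.9f    /* assumed slope of the first-order fit */

    /* Boost the ESC signal as the measured battery voltage sags. */
    float compensateForVoltage(float esc_signal, float v_batt)
    {
        return esc_signal * (1.0f + K_COMP * (V_REF - v_batt) / V_REF);
    }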

6.2 Propeller and frame balancing

It only makes sense to calibrate the motors when the frame is balanced as well. Otherwise, at zero degrees roll and pitch, the frame would pull down one side harder, so the motor on that side would have to do more work. This 'more work' can only be compensated by the integrating action of the PID controller, because in the design of the controller we assumed a uniform weight distribution. The integrating action is slow, and a large integrating action also makes the whole controller slow and increases overshoot. It is therefore very important to balance the frame as well as possible.

Frame balancing was done by letting the quadcopter rotate freely around one axis while fixing the other at zero degrees. By adding weight in the form of nuts and bolts, the frame was adjusted until it balanced at zero degrees.

Another very important task that needs to be done before calibrating the motors is balancing/centring the propellers. The selected propellers did not fit the motor mounts, so they had to be adjusted to fit. This caused some propellers to become off-centre or imbalanced, such that they vibrated heavily when the motors were spinning. Not only do these vibrations disturb the various position sensors, but an imbalanced propeller also takes more power to produce the same amount of thrust as a balanced one. Therefore the propellers should be balanced before calibrating the motors, as the balance directly influences the amount of thrust they generate for a certain power input.

Balancing the propellers can be done in various ways. One way is to put a low-friction axis through the propeller and make sure the propeller can be positioned in any way without gravity pulling one side down. This method was tried, but the results were not very accurate. In practice, the best method proved to be adding weight to one side of the propeller by means of sticky tape, then spinning the motor at full speed and observing whether the vibrations had decreased


or increased. By iterating this method three or four times, it was possible to balance the propellers in such a way that the motor axis no longer showed any large visible vibrations, which was much better than with the first method.

6.3 Accelerometer calibration

The accelerometer that is part of the quadcopter needs to be calibrated in order to give useful measurements. There are multiple reasons for this:

1. The axis frame of the accelerometer is not aligned with the body axis frame of the quadcopter, so the acceleration vector that the sensor measures has to be rotated in 3D space to correspond to the quadcopter axis frame.

2. Because of manufacturing tolerances and temperature changes, the measurements are scaled by a factor that is not known beforehand.

3. Because of manufacturing tolerances and temperature changes, the measurements may have some offset.

The actual acceleration measured in the quadcopter body axis frame (x-axis pointing to the front motor, y-axis pointing to the left motor, z-axis normal to the propeller plane pointing up), A_{x,calib}, A_{y,calib}, A_{z,calib}, can be expressed in the raw measurements A_x, A_y, A_z as:

\[ A_{calib} = \begin{pmatrix} A_{x,calib} \\ A_{y,calib} \\ A_{z,calib} \end{pmatrix} = \left[ C_{misalign} \right]_{3 \times 3} \begin{pmatrix} 1/S_x & 0 & 0 \\ 0 & 1/S_y & 0 \\ 0 & 0 & 1/S_z \end{pmatrix} \begin{pmatrix} A_x - O_x \\ A_y - O_y \\ A_z - O_z \end{pmatrix} = \begin{pmatrix} C_{11} & C_{12} & C_{13} \\ C_{21} & C_{22} & C_{23} \\ C_{31} & C_{32} & C_{33} \end{pmatrix} \begin{pmatrix} A_x \\ A_y \\ A_z \end{pmatrix} + \begin{pmatrix} C_{offset,x} \\ C_{offset,y} \\ C_{offset,z} \end{pmatrix} \qquad (6.1) \]

where C_misalign is a 3x3 misalignment matrix, S_x, S_y, S_z are the scaling factors per dimension and O_x, O_y, O_z are the offsets. As can be seen from equation (6.1), in order to fully calibrate the sensor we need to find the 9 C values of the calibration matrix as well as the 3 C_offset values.

To be able to find the 12 required calibration values, we mounted the quadcopter in such a way that it could rotate around only one axis at a time, as can be seen in Figure 6.3. Using a water level, we put the quadcopter in many different known positions, for example flat, 90° roll, −90° pitch, etc. Because the orientations are known, we can use the accelerometer data measured in these positions to perform an LSE fit of the calibration values. The calibration values were calculated by a MATLAB script we wrote to perform the fit. The found calibration values are saved to a file on the MBED flash storage, which the accelerometer driver class reads and uses to correct the raw measurements. Experiments showed that this greatly improved the accuracy of our accelerometer readings.
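Applying the calibration at run time is a direct implementation of (6.1); a minimal sketch (struct and function names are ours):

    typedef struct {
        float C[3][3];      /* combined misalignment + scaling matrix */
        float offset[3];    /* C_offset,x .. C_offset,z */
    } AccelCalib;

    /* raw: uncalibrated accelerometer sample, out: body-frame result */
    void calibrateAccel(const AccelCalib *cal,
                        const float raw[3], float out[3])
    {
        for (int i = 0; i < 3; i++) {
            out[i] = cal->offset[i];
            for (int j = 0; j < 3; j++)
                out[i] += cal->C[i][j] * raw[j];
        }
    }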

6.4 Gyroscope calibration

Basically, a gyroscope needs the same calibration as an accelerometer, described in section 6.3.It also suffers from misalignment, scaling and offset. The problem is that the gyroscope measures


Figure 6.3: Setup for rotating the quadcopter around one axis, used to calibrate the inertial sensors and to tune/test the controller.

rotational speed, and it is not easy to measure the rotational speed of the quadcopter with another device in order to calibrate the gyroscope. Therefore, we chose a different approach to find the values C11 to C33 of equation (6.1).

The quadcopter is again mounted in the setup that allows it to rotate around just one axis. We then rotate the quadcopter for a while around each axis. During the rotation, a couple of hundred gyroscope measurements are collected, and each measurement vector is normalised. All measurements from the same rotation are averaged to find the average normalised angular speed. This tells us nothing about the actual speed, but it does give information on the direction of the angular movement. Because we know that the movement was in just one dimension, we can use this to rotate the gyroscope axis frame to the quadcopter body axis frame, using the same LSE fitting method described in section 6.3. By doing this we neglect the scaling of the sensor, assuming that the scaling error is limited. The calibration could be improved by actually measuring the angular speed using another, calibrated gyroscope, because then the scaling can be calibrated as well.

Next to the rotation matrix (C11 to C33), the offset of the gyroscope has to be calibrated. This is especially important since the output of the gyroscope will be integrated, so offset errors accumulate. Because the offset depends on multiple changing factors (wear of the sensor, temperature), the offset of the gyroscope is measured every time the quadcopter is activated. This is easy: if the quadcopter is steady on the ground, the gyroscope output should be 0. By averaging over a lot of measurements, the offset is determined and then used to compensate the output. The determined rotation matrix and offset vector are again saved to the MBED flash memory and used by the gyroscope driver software to correct the measurements.
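A minimal sketch of this start-up offset measurement: average many samples while the quadcopter rests on the ground. readGyro() stands in for the actual driver call and the sample count is an assumption:

    #define N_SAMPLES 1000                   /* assumed sample count */

    extern void readGyro(float sample[3]);   /* hypothetical driver call */

    void calibrateGyroOffset(float offset[3])
    {
        float sum[3] = {0.0f, 0.0f, 0.0f};
        float sample[3];

        for (int i = 0; i < N_SAMPLES; i++) {
            readGyro(sample);
            for (int k = 0; k < 3; k++)
                sum[k] += sample[k];
        }
        for (int k = 0; k < 3; k++)
            offset[k] = sum[k] / N_SAMPLES;  /* subtracted from every
                                                subsequent reading */
    }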


6.5 Automated controller tuning

As no accurate model of the motors and of our quadcopter as a whole was available, it was impractical if not impossible to mathematically determine values for the PID controllers that control the pitch and roll. Models are of course available, but they include many constants, for example for the propeller efficiency, that we cannot measure accurately. Using an accurate model would have meant characterising the motors as well as the frame, the weight distribution and the air resistance. On top of this, a quadcopter is actually a non-linear system, further complicating the theoretical determination of good PID values. Instead of the theoretical approach, we chose to adjust the PID values by observation.

Although not very scientifically founded, this 'observe and adjust' method worked quite well, and it was possible for a human to find working values for the PID controllers such that the quadcopter would stabilize on one axis. For these tests one axis was always fixed, so that only the values of either the roll or the pitch controller would influence the system. This was achieved by inserting an iron rod through the frame axis, which allowed the frame to rotate around this rod. The quadcopter was then suspended from this rod, and it was up to the controller to balance the quadcopter on this axis. A picture of the test setup is given in Figure 6.3.

Tuning the PID values by hand allowed the quadcopter to balance, but it is very hard to tell by eye which values are good in terms of stability and tracking of a reference signal. Therefore a method was devised to tune the PID values automatically in a more accurate and meaningful way. To do this, a reference trajectory was created for the quadcopter to follow, consisting of going to a specified angle and returning to zero degrees again. The trajectory is given in Figure 6.4.

Figure 6.4: Reference trajectory. The Y-axis is the roll/pitch angle and the X-axis is time in seconds


Next, the software would fully automatically load different PID values and attempt to follow the trajectory. While doing this, the measured position was recorded so that it could be analysed later. In this way an exhaustive test was performed over hundreds of possible controller parameter values. From the acquired data, various metrics could be extracted, such as the squared sum of errors with respect to the reference signal and the settling time, which is the time it takes for the real position to get within a given margin of the reference. Some examples of the graphs obtained by this method are given in Figures 6.5, 6.6 and 6.7. A short video of a test run can be found at http://youtu.be/AFiShwGhxYk.
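A minimal sketch of the two metrics named above, computed over one recorded run; the settling margin and the way the settling instant is detected are assumptions:

    #define MARGIN 2.0f      /* assumed settling band, in degrees */

    /* ref/meas: reference and measured angle per sample, dt: sample
       period. Outputs the squared error sum and the settling time. */
    void tuningMetrics(const float ref[], const float meas[], int n,
                       float dt, float *sse, float *settlingTime)
    {
        *sse = 0.0f;
        int lastOutside = 0;             /* last sample outside the band */

        for (int i = 0; i < n; i++) {
            float e = ref[i] - meas[i];
            *sse += e * e;
            if (e > MARGIN || e < -MARGIN)
                lastOutside = i + 1;
        }
        *settlingTime = lastOutside * dt;
    }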

Figure 6.5: Example of a good controller. Blue is the reference trajectory, red the measured values. The green line is the settling time after the step.

As can be seen in the figures, all PID controllers have difficulty following the reference at large angles; this is because of the non-linearity inherent to a quadcopter. When the frame is at an angle, gravity tends to pull it around, so the motor on one side has to compensate very hard. With just PID controllers this could be solved with a large integrating action and a lot of patience at that point, but such an integrating action would make the whole system very slow. Fortunately this is not very important: as long as the controller is good at balancing at zero degrees, where the frame is in balance, and is able to track the reference to some extent at larger pitch/roll angles, the quadcopter can hover and is steerable.

When selecting values for the PID controllers, it is important to look at stability at zero degrees, as this determines the amount of drift when the quadcopter is hovering. Also, the overshoot of the controller should be within acceptable bounds, and preferably the quadcopter should track changes in the reference signal as fast as possible. Judged by these criteria, the controller of Figure 6.5 is very good: the settling time is very short without any serious overshoot, and the steady state looks very stable. This is in contrast to the controller of Figure 6.6, where in steady state around zero degrees the quadcopter oscillates heavily, which would lead to unacceptable drift. Another bad example is given in Figure 6.7: that controller is very slow to follow the reference signal and has tremendous overshoot. It is even unable to maintain a stable error with respect to the reference at high angles, and it takes seconds before it comes near to settling around zero degrees.

Figure 6.6: Example of a bad controller. The steady state shows heavy oscillations, which would lead to heavy drift or even instability in practice.

The aforementioned criteria were used to automatically select the best PID values, which proved to be much better than the values obtained by hand. The method described here was used to tune both the roll and the pitch controller. The yaw and height controllers were tuned by hand, since testing many settings on these controllers is impractical: the quadcopter actually has to fly during those tests.
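Schematically, the automated selection boils down to a sweep like the one below, reusing the Pid and Sample types sketched earlier. flyTrajectory() is a placeholder for the test software that loads the gains, flies the reference trajectory and returns the recorded trace; the candidate lists and the weight between the two metrics are assumptions, not the values actually used.

    #include <vector>

    // Provided by the test software (placeholder declaration).
    std::vector<Sample> flyTrajectory(float kp, float ki, float kd);

    // Exhaustive sweep over candidate gains (placeholder values).
    const std::vector<float> kpCandidates = {0.5f, 1.0f, 2.0f};
    const std::vector<float> kiCandidates = {0.0f, 0.1f, 0.5f};
    const std::vector<float> kdCandidates = {0.0f, 0.2f, 0.8f};
    const float weight = 10.0f;   // trade-off between error sum and settling time

    Pid selectBestController() {
        Pid best{};
        float bestScore = 1e30f;
        for (float kp : kpCandidates)
            for (float ki : kiCandidates)
                for (float kd : kdCandidates) {
                    std::vector<Sample> trace = flyTrajectory(kp, ki, kd);
                    float ts = settlingTime(trace, /*tStep=*/2.0f, /*margin=*/1.0f);
                    if (ts < 0.0f) continue;   // never settled: reject outright
                    float score = squaredErrorSum(trace) + weight * ts;
                    if (score < bestScore) { bestScore = score; best = {kp, ki, kd}; }
                }
        return best;
    }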


Figure 6.7: Another example of a bad controller. The controller fails to track the reference signal and has a large amount of overshoot and a long settling time.


Chapter 7

Experimental results

After the motor calibration and the frame & propeller balancing, the auto-tuning was able to provide decent values for the PID controllers. As a result, balancing on one axis works remarkably well: the final controllers return to the reference very quickly after a disturbance.

Flying itself still proves to be a bit of a challenge. The quadcopter can hover, but it tends to swirl around the room. As the room available for testing is quite narrow, some of the test runs had to be aborted because the quadcopter was heading towards a wall. Nevertheless, on various occasions during testing the quadcopter proved to be reasonably steady. It never becomes unstable or flips over; it is just not level enough to stay in one fixed position.

The ball detection also worked out very nicely in practice. The red ball can be detected at 22 frames per second. The distance between the ball and the camera (computed from the radius of the ball) and the direction in which the ball is travelling are calculated correctly at the same rate.
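For a pinhole camera the distance follows from similar triangles once the real radius of the ball and the focal length are known; a minimal sketch, where both are assumed to come from calibration:

    // Distance estimate from the apparent ball size (pinhole model).
    // focalPx:     focal length in pixels (from camera calibration)
    // ballRadiusM: real ball radius in metres (measured once)
    // radiusPx:    detected circle radius in pixels
    float ballDistance(float focalPx, float ballRadiusM, float radiusPx) {
        return focalPx * ballRadiusM / radiusPx;   // similar triangles
    }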

Video

To demonstrate the final results of the project, a video was created that shows all these features. It can be found at http://youtu.be/Z-ZQBLP9_EM.

There is also a video that summarizes the practical process of building the quadcopter described in this report, available at http://youtu.be/kgxeD-s1KyE.


Chapter 8

Conclusions and recommendations

The goal of the project was to build a quadcopter that, using embedded computing platforms and a camera, can fly autonomously and perform a task based on the camera images.

A quadcopter was constructed and it can actually fly. However, the local control of the quadcopter is not yet optimal, which leads to a lot of drift while hovering. We believe, however, that with faster motor drivers and better-balanced propellers the design should be able to hover much more stably.

Two embedded platforms were mounted on the quadcopter: one for local control and one for image processing and global control. The local control software runs very reliably and consistently, resulting in good local control as far as the software implementation is concerned. The sensors connected to this platform also perform well, which makes it possible to obtain an accurate orientation of the quadcopter.

On the second embedded platform, ball detection and ball trajectory estimation have been successfully implemented. Despite the relatively limited computational resources, the red ball could be detected quickly and its trajectory could be estimated fairly well.

Because the local control is not very stable, it was not possible to implement and test a global control algorithm to navigate the quadcopter away from red balls. Once the local control becomes more stable, this should be straightforward given the correctly functioning red-ball trajectory detection.

Recommendations

Here we list some recommendations, both for future project teams with a similar goal and for teachers/assistants who need to support such a group.

• When building a quadcopter, invest in proper, fast motor drivers that provide some kind of rotation-speed feedback. They may cost a little more, but they will save literally weeks otherwise spent on balancing the frame, compensating for differences between motors and compensating for drops in battery voltage, and they will result in a much more stable quadcopter in the end.

• Make sure that the propellers you buy fit directly on the mounts you buy for the motors. Also pay a little extra to get high-quality propellers that do not need to be balanced with sticky tape. This is good for power consumption and it drastically reduces the vibrations that disturb the gyroscope and accelerometer so much. Also make sure you order spare propellers, as you will break them during testing.

• We would recommend the 3-pin DC-DC converter modules we used to anyone in need of a fast and reliable solution to step the battery voltage down to 5 volts.

• Implement the local control on a microcontroller with a hard real-time OS. It will greatly improve the stability of the execution frequency of the control loop. As all filters and basically the whole control are tuned for a fixed rate, moving to a real-time OS will greatly increase the stability of the local control software (a minimal sketch of such a fixed-rate loop is given after this list). Note that Linux with a low-latency/‘real-time’ kernel is not a good substitute: under certain ‘hangup’ conditions, such as WiFi problems, it will completely ruin the stable execution frequency of the loop.

• When this project is done again, make sure each group has access to a large power supply (12 volts, 10+ amperes), so that tests can be done without having to recharge the battery every 10 to 15 minutes (at the TU/e these kinds of supplies are sometimes lovingly referred to as ‘Dikke Berta’, see Figure 8.1).

Figure 8.1: Large power supply by Delta, informally known as ‘Dikke Berta’.

• Build a good bumper to protect the frame and the fragile propellers. Styrodur (at the time of writing available at the ‘bouwkunde’ faculty) has proven to be a very good material: it is lightweight, incredibly strong for its weight and not very expensive.
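As referenced in the real-time OS recommendation above, the sketch below shows a drift-free fixed-rate control loop, written in host-style C++ with std::chrono for illustration; on the MBED the RTOS's periodic scheduling primitives would be used instead. The 100 Hz rate and the step functions are placeholders.

    #include <chrono>
    #include <thread>

    // Drift-free fixed-rate control loop: the wake-up time is advanced by
    // the period instead of sleeping a fixed amount after the work is done.
    void controlLoop() {
        using clock = std::chrono::steady_clock;
        const auto period = std::chrono::milliseconds(10);   // 100 Hz (example)
        auto next = clock::now();
        while (true) {
            // readSensors(); runFilters(); runPid(); writeMotors();  // placeholders
            next += period;
            std::this_thread::sleep_until(next);
        }
    }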

We would like to conclude with the remark that the decision to build the quadcopter ourselves from scratch has given us the opportunity to learn many things about sensors, embedded development, vision algorithms on limited hardware, control, and the actual construction of a quadcopter. We have all enjoyed this project very much. However, due to the many troubles along the road, it has also taken a whole lot more time than was intended and even more than what we expected (overtime can always be expected when building such an elaborate project). Therefore, if anyone would like to undertake a similar challenge, be aware that it will cost you many hours of spare time.



Appendix A

Who did what

Table A.1 gives an overview of the relative contributions to the parts of the project. Please note that the time spent on each part was not equal, so although the table gives a good overview of which tasks were performed by whom, it gives no real information on the time spent.

Task                                                  Luc   Marco   Shyam
Initial research and selecting components              +      +       +
Mechanical construction                                +      +       +
Investigating and building own ESCs                        ++
Hardware / electronics                                     +  +
Interfacing sensors (drivers)                              +  +
IMU research & implementation                              ++
Local control research & implementation                +     ++       +
GUI/control interface development                          ++
Camera interfacing (dev. of OpenCV library)                ++
Research & development image processing algorithms¹        +  +++
Calibration (sensors, frame, motors, propellers)           +  ++
Controller tuning and testing                          +      +       +
Documentation                                          +      +       +

Table A.1: Overview of the relative contributions of the team members to the tasks (Marco Cox, Luc Waeijen, Shyam Balasubramanian).

¹ Includes working in MATLAB, V4L, OpenCV and developing the global-control ball tracking.


Appendix B

Source Code Structure

The source code for this project is available via the following URL: https://www.dropbox.com/s/vnik49pv9x6rkyy/quadcopter-5hc99-source.zip.

The source archive contains multiple directories; the list below gives a brief overview of the structure.

• /: The root folder contains driver classes and code for the BeagleBone. This code is used when the local control runs on the BeagleBone, and by the calibrator tool. The calibrator tool (make calibrator) can be used to measure and calibrate the sensors.

• /cvCameraInterface: Code for a custom-built OpenCV library for obtaining frames from the webcam and converting between different color spaces.

• /localControlII: The GUI for controlling the quadcopter. This code runs on the BeagleBone and communicates with the code on the MBED. localCtrl.h defines a class that can be used in other applications to control the quadcopter.

• /mbed/: All the software running on the MBED. LOCALCTRL defines the main control class, IMU implements the IMU.

• /mbed/communication: Code for communicating via I2C, and the definition of the communication structure used for communicating with the BeagleBone.

• /mbed/helpers: Helper functions for timing, macros and matrix/vector class definitions.

• /mbed/hw interfaces: Driver classes for all the hardware components that are connected to the MBED (sensors and motors).

• /mbed/mbed-rtos: Real-time operating system for the mbed microcontroller (open source).

• /mbed/MODI2C: Modified (non-blocking) I2C library for mbed (open source).

• /opencv: The image processing algorithm, based on OpenCV.
