Group 12

3D Mapping Drone

Edwin Lounsbery, Brian Vermillion, Matthew McHenry
Senior Design 1 | August 6, 2015
University of Central Florida, Department of EECS


Table of Contents

1 Executive Summary
2 Project Description
2.1 Project Motivation and Goals
2.2 Objectives
2.3 Requirements Specifications
3 Research related to Project Definition
3.1 Existing Similar Projects and Products
3.2 LIDAR vs TOF cameras
3.3 Sensors
3.4 Transmitters/receivers
3.5 Embedded Linux vs Microcontroller
3.6 Drone Design
3.6.1 How many rotors do we need?
3.6.2 Open or enclosed design
3.6.3 Material: carbon fiber vs polycarbonate vs wood
3.7 Power System Design
3.8 Motor systems
3.9 Camera System
4 Standards
4.1 Related Standards
4.2 Design impact of relevant standards
5 Facilities and equipment
6 Realistic Design Constraints
6.1 Economic and Time constraints
6.2 Environmental, Social, and Political constraints
6.3 Ethical, Health, and Safety constraints
6.4 Manufacturability and Sustainability constraints
7 Project Hardware and Software Design Details
7.1 Initial Design Architectures and Related Diagrams
7.2 Drone System
7.2.1 Drone ESC
7.2.2 Drone Frame
7.2.3 Battery and Flight Time
7.2.4 Induction Charging
7.2.5 Flight Controller
7.2.6 Collision sensors
7.2.7 Power Distribution
7.3 Base/Charge Station
7.3.1 Induction Interface
7.3.2 Charge Control and Conversion
7.3.3 Map Display Software
7.3.4 Radio Transmission
7.4 Software Design
7.4.1 Tools and libraries
7.4.2 Low level data management
7.4.3 High Level Data Management
7.4.4 Algorithms
7.4.5 State machine
8 Schematics and Data structures
8.1 Schematics
8.1.1 Electrical
8.1.2 Mechanical
8.2 Data Structures
9 Project Prototype, Construction, and Coding
9.1 Construction
9.1.1 Charging station
9.1.2 Drone
9.2 Hardware
9.2.1 Parts Acquisition
9.2.2 Parts Assembly
9.3 Software
9.3.1 Initial testing of sensors
9.3.2 Basic Drone control
9.3.3 Drone collision avoidance
9.3.4 3D mapping software
9.3.5 Base Station Display
10 Project System Testing
10.1 Testing and Objectives
10.2 Testing Environments
10.3 Power Testing
10.3.1 Power Distribution
10.3.2 Induction Charging (charging rate & operational)
10.3.3 Base Charging Station (conversion)
10.4 Drone System
10.4.1 Payload
10.4.2 Basic Flight
10.4.3 Avoidance
10.4.4 Ceiling Height
10.4.5 Distance Range
10.5 Software Testing
10.5.1 Data input
10.5.2 Data computation
10.6 Overall system testing
11 Bill of Materials
12 Owner's Manual
13 Administrative Content
13.1 Milestone Discussion
13.2 Budget and Finance Discussion
13.3 Consultants, Subcontractors, and suppliers
14 Appendices
14.1 Copyright Permissions and Authorizations
14.2 Datasheets/Table References
14.3 Bibliography
14.4 References


1 Executive Summary

Imagine sitting on a porch at the edge of the Everglades and wanting to explore the area: to find a new fishing or hunting spot, or to photograph wildlife in its natural habitat. Now imagine creating an accurate 3D model of those areas with a flying drone that carries a fully operational camera and software for tracking the copter's coordinates and movements and for processing photos and film. The opportunities are broad: locating fishing spots, studying wildlife, training machines or software, and shooting photography or documentary footage all become easier. Our multi-rotor copter, or drone, offers exactly this. More seriously, and in terms of productivity, a computer-operated, task-specific copter can provide vital and sometimes irreplaceable information for law enforcement, disaster relief, and the military. From fugitive recovery to supporting personnel in completing a task or mission, such a drone can save many lives.

Law enforcement's goal is to protect citizens while defending themselves. Upholding the law requires alertness, intuition, and confidence, qualities that keep people alive and that together amount to the key attribute of any officer: situational awareness. Live video and data from a crime scene or developing situation directly support that awareness. Like a dog for a K9 officer, a maneuverable flying drone can gather information that officers cannot obtain as efficiently or as routinely on their own: warning of ambushes, identifying suspects before a confrontation, capturing thermal and ultrasonic (sound) data, and visualizing a crime scene. Additionally, a drone fitted with a time-of-flight camera can provide a second depth scan and a video record of evidence. Operating such a drone could be as simple as programming a set of boundaries; the resulting flight path then covers the entire area or volume within those boundaries until every coordinate has been visited.

When Hurricane Katrina flooded New Orleans, disaster relief workers did not know where to look for survivors or how to get supplies to them. Multi-rotor drones are not aerodynamically stable on their own; without their on-board computers they could not fly at all. Because flight is computer-controlled, the design can be modified for payload transport, and additional electronic devices are standard equipment on drones. Drones with time-of-flight cameras searching a disaster area can stand in for potentially hundreds of search parties, each with its own programmed search area, although the recorded video may take days to review. Given the specific disaster, an appropriate sensor could detect possible signs of human life, and a "panic" button mounted on the drone, pressed by a survivor, could immediately alert relief administrators.


Military applications include scouting and equipment transport. Scouting the perimeter of a military camp aids resource allocation, defense, and strategy, and drones can also serve as effective stealth observers of enemy activity, positions, and resources. A LIDAR camera can photograph the drone's surroundings through a full 360-degree revolution, and the resulting 3D map allows a human operator to choose routes for the drone. Overall, the flying drone is a modern showcase of technology that brings together power systems, computer systems, and wireless communication.

2 Project Description

 2.1 Project Motivation and Goals  

The project brings together three fundamentals of the Department of Electrical Engineering and Computer Science: power management, computer control and programming, and electrical sources, including induction. These fundamentals work together to operate a flying drone, as described above. The drone's main power supply is a battery, as in many electrical devices and automobiles, and that battery is recharged from an induction charging station; a standard American wall outlet is the original source of all the electrical energy used by the drone. The drone is controlled either by a remote control or by a time-of-flight camera performing three-dimensional capture. The foremost goal of this project is for the time-of-flight camera's processed photographs to direct the drone's flight path.

If accomplished, our main goal amounts to something like artificial intelligence without the intelligence. The drone will not make personal decisions or form new decisions from previous knowledge, but it will acquire knowledge of possible future locations and obstructions that can be used to compute alternate flight paths. Reaching this goal requires programming protocols for an inertial measurement unit, bulk data processing, and rotor control. It also requires every part of the drone to work: the base station must be functional to provide a charging source, the ultrasonic sensors must be connected and interfaced to echo-locate possible obstacles, the depth sensor must receive the proper power for operation, the algorithms that match the drone's actual location to its mapped location must be bug-free, and the biasing of the electronics must be correct. Other goals for the project are off-drone three-dimensional mapping, a specific altitude capability (a minimum goal of fifteen meters), an automatic flight path back to the charging station, and possibly an LED power-level display. Less obvious motivations for this project are incorporating wireless communication and becoming familiar with flying drones and copters; more obvious ones are gaining experience with power distribution, microcontroller implementation, and three-dimensional capture and its uses.

Wireless communication is to wired communication as gold is to copper: both get the same job done, but one seems more admirable. The ability to control a device from a distance, free of any physical link, is itself a motivation, and it is exercised here by the wireless FOV transmitter, the flight controller receiver, the time-of-flight control link, and the charging station. From a practical viewpoint, experience with wireless communication is broadly useful, as smartphones, radio, satellite television, and routers all attest.

The Iris Quadcopter, DJI Phantom, and Inspire 1 are examples of drones already on the market. Helicopter-like drones have attracted a great deal of attention recently, especially for their autonomous interfacing capabilities. Because they can be controlled from personal mobile devices, smartphones, and product remote controllers, and because their software is relatively simple, the average consumer can become skilled enough to purchase and fly them. That entertainment value, combined with the challenge of producing a similar aeronautical vehicle ourselves, motivated the decision to engineer and build this project.

Power supplies every electrical device and is thus part of the motivation for this project. Power is energy delivered per unit time; equivalently, it is voltage multiplied by current, where voltage is the work done per unit charge and current is the charge moved per unit time. Multiplying the two cancels the charge terms and leaves energy per unit time. Everybody likes power, even if it is just the power to control the television from the couch, and power is also associated with history, wealth, and innovation. For this project the concept relates more to NASA, the United States armed forces, and engineering intuition. The Space Shuttle generated on the order of one billion watts to reach orbit from the Earth's surface; a comparable display is the roughly six hundred thousand watts generated by a NASCAR automobile; and a popular technological device, the seventy-five-inch plasma television, draws approximately two hundred forty-four watts. All are significant exercises in producing, managing, and regulating power.

Three-dimensional capture and mapping is proving increasingly useful. The popularity and potential of the Kinect sensor, along with our own curiosity about it, motivated making three-dimensional capture the main part of this project. Three-dimensional photography and map construction is already a topic of interest in Columbia County, Georgia: software purchased by the county makes 1,100 square miles and 2,500 streets accessible by computer via aerial photographs. The software requires two terabytes of storage and generates twenty million file folders, and each part of a photograph is mapped to a pixel with a latitude, longitude, and elevation. The available analysis tools support calculations for traffic-accident research, stormwater runoff, and other infrastructure applications. Three-dimensional mapping is a relatively new technology; maps can be produced directly by 3D cameras or derived from 2D cameras with additional processing.
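The power relationships described above can be sketched numerically. The battery figures below are illustrative assumptions, not measurements from our drone: a three-cell LiPo pack at a nominal 11.1 V delivering 20 A supplies 222 W, and a hypothetical 5000 mAh pack at that draw lasts about fifteen minutes, ignoring losses.

```python
# Sketch of P = V * I and a rough endurance estimate.
# All battery figures are illustrative assumptions, not measured values.

def power_watts(volts: float, amps: float) -> float:
    """Power is voltage multiplied by current: P = V * I."""
    return volts * amps

def flight_time_minutes(capacity_mah: float, volts: float, draw_watts: float) -> float:
    """Estimate endurance as stored energy (Wh) divided by power draw (W)."""
    energy_wh = (capacity_mah / 1000.0) * volts
    return energy_wh / draw_watts * 60.0

p = power_watts(11.1, 20.0)              # 3-cell LiPo at 20 A -> 222 W
t = flight_time_minutes(5000, 11.1, p)   # about 15 minutes, ignoring losses
print(round(p, 1), round(t, 1))
```

In practice losses in the ESCs, motors, and wiring shorten this estimate, which is why the thirty-minute operation-time requirement drives battery sizing later in the design.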

    


2.2 Objectives

Once our group came up with the project, we established that we needed to determine objectives. First, we outlined the project. Its main points are:

- 3D mapping of an environment
- A mobile unit such as a quadcopter
- Induction charging
- Autonomous flight with the option of user control

Our objective for this project is to build an autonomous drone that can operate in environments where GPS sensors may not work or may not be an option. The drone will collect data from an infrared sensor in order to "see" and to create a 3D map of its environment. Because the drone can "see," it will need limited to no input from the team members. By "see" we mean that the drone collects data from its sensors and the onboard computer performs live calculations to avoid crashing and to move to new, unmapped sections of its environment. The drone will map its environment section by section and then return to the charging/base station. An on-board camera will transmit video so that we can watch a live feed from the drone. The 3D mapping feature exists so that anyone entering an unfamiliar setting can gain a visual understanding of the location beforehand. Lastly, the induction charging and autonomy features reduce the amount of user input needed.

2.3 Requirements Specifications

In outlining the project we set some requirement specifications. These specifications constrain the design and implementation of our project and are laid out in Table 2-1.

Table 2-1: Requirements of the Project

Requirement                               Value
Vehicle operation time                    30 minutes
Recharging time (drone)                   2 hours
Wireless charging                         Yes
Rate of charge
Range                                     1 kilometer
Ceiling height                            400 meters
Payload                                   1 kilogram
Mapping resolution                        5 inches / pixel
Autonomous                                Yes
Live feed                                 Yes
Crash avoidance                           Turn away at 0.5 meters
Low power detection (visual)              2 red LEDs
Low power detection (drone return home)   Yes
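As an illustrative sketch, not the team's implementation, the crash-avoidance requirement above reduces to a threshold check on each range reading: if any sensor reports an obstacle closer than 0.5 m, command a turn away from it. The sensor names and steering convention here are assumptions for the example.

```python
# Hypothetical sketch of the 0.5 m crash-avoidance rule.
# Sensor directions and the turn convention are illustrative assumptions.

AVOID_THRESHOLD_M = 0.5

def avoidance_command(ranges_m: dict) -> str:
    """Return a turn command away from the nearest too-close obstacle,
    or None if every reading is at a safe distance."""
    too_close = {d: r for d, r in ranges_m.items() if r < AVOID_THRESHOLD_M}
    if not too_close:
        return None
    nearest = min(too_close, key=too_close.get)
    opposite = {"front": "back", "back": "front",
                "left": "right", "right": "left"}
    return "turn_" + opposite[nearest]

print(avoidance_command({"front": 0.4, "left": 2.0, "right": 1.5, "back": 3.0}))
# -> turn_back
print(avoidance_command({"front": 1.0, "left": 1.0, "right": 1.0, "back": 1.0}))
# -> None
```

A real controller would also damp repeated triggers and blend this reaction with the mapping flight path, but the threshold test is the core of the requirement.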

 

3 Research related to Project Definition

3.1 Existing Similar Projects and Products

Here at the University of Central Florida there are two past projects similar to ours in design and function. One former group called their project "DroneNet: The Quad Chronicles." Their project consisted of a drone and a mobile unit, both operated by remote. They pursued autonomy by purchasing a preconfigured autonomous flight controller running open-source software, and their frame was also premade. Their design did include a home-grown mobile station with a home-grown transmitter for the mobile unit; the transmitter housed a touch screen that exposed the mobile command unit's telemetry and operational functions. Overall, the drone itself contained few group-designed parts, with the rest store-bought and assembled.

The second project found here at the University of Central Florida was "S.C.R.A.M." We could not find any information about it beyond its mobile charging unit, which sits in the Engineering 1 atrium, where we discovered it. A physical inspection showed a solar-powered charging station with induction charging for the quadcopter. The frame of the mobile unit was home-grown, as was the power distribution board, but most of the other electronics were purchased.

Another project we came across was a drone project from MIT that used a drone and a Kinect to do 3D imaging and autonomous flight where GPS is not feasible. That drone uses the Kinect and an onboard computer to perform real-time calculations in order to navigate its environment. There are videos of the drone in action, but limited further information on how they accomplished this, on whether there were other design aspects besides the Kinect 3D mapping, or on whether it ran an autonomous algorithm.

The MIT project is the closest to our own: the main focus of our project is to use the Kinect to map the drone's environment, done autonomously, while the other aspects of our project resemble the two similar projects above.

3.2 LIDAR vs TOF cameras

Creating a 3D map of an area is a difficult problem that many companies have addressed with many different technologies. Because of the complexity of designing and building a reliable instrument for mapping an area, we have decided to buy a sensor. When searching for a device to use, there are many
different technologies and variables to consider. First, there are many different ways to find the distance of an object: sonar, infrared, laser, and computer vision are all practical examples that have been used. Most of these are only one-dimensional measurements, though, which adds a challenge to getting three-dimensional data. What are the ranges of these devices, and how much range do we actually need? Do these devices have a limited field of view (referred to as FOV from here on out)? What kind of data do they offer, and what is the accuracy of that data? And lastly, what kind of mechanical components do they have that could limit them? These are all questions that need to be addressed before choosing a device for our drone.

The technologies most commonly used in range finding are sonar, infrared, laser, and computer vision. Sonar uses high-frequency sound waves to measure distance: it measures the time it takes for a pulse to return, and since the speed of sound is known, the distance can be derived. While sonar is good for a single distance point, it is not as effective at plotting three dimensions because it is slow. Infrared range finders depend on a formula to calculate distance from how much the light intensity has changed when received; the limited range of these devices makes them almost useless for our purposes. The last two are lasers and computer vision. Lasers are a reliable technology for distance measuring, but require precise, and often expensive, timing circuits to measure distance accurately, and the laser must be rotated about two axes to get three-dimensional data. Computer vision alone takes too much computation for most applications, and this is the case for our drone. Recently, lasers have been paired with basic computer vision to get a depth image at very low computational cost: the laser is diffracted into a two-dimensional point map, which a basic CMOS camera observes to determine distance. In this research we will compare two consumer-grade technologies, both under a hundred dollars. The first is a LIDAR module found in an autonomous vacuum, the Neato XV-11. The second is a recent innovation by Microsoft, the depth-sensing camera module called the Kinect, which uses a CMOS and laser combination.

One of the most important factors in choosing a sensor is the optimal range that it operates in. For our drone, we are looking for a sensor with at least a one-meter range. The Kinect sensor has a maximum range of four meters, and the Neato has a maximum range of six meters, so in terms of maximum distance both sensors meet our needs. These maximums were measured in a controlled environment, however, and may not represent the conditions our drone will be in. One environmental hazard that can affect both sensors is sunlight. Sunlight contains strong infrared in addition to the visible light we see, and this infrared drowns out the infrared lasers of our sensors, making measurements beyond the minimum range nearly impossible. The Kinect is known not to work well outdoors, and since the Neato is an indoor vacuum, there have not been many tests of its outdoor use. Because of these
limitations, we have set the constraint that the drone will only operate indoors.

FOV is an important feature to consider when choosing a sensor because it determines what you see at any one point in time. For example, a very small FOV makes mapping the environment much more difficult because you are only collecting a small amount of data at a time. On the other hand, an extremely large FOV requires a computer capable of processing all of that data. The Kinect is essentially a camera that can detect depth, and as such it has a FOV similar to a normal camera's: a rectangular area facing the forward direction, spanning 43° vertically and 57° horizontally at a camera resolution of 640x480. The Neato LIDAR has a completely different field of view. It gathers distance points on a two-dimensional plane by spinning a distance laser; each rotation gathers 90 points, and it rotates about four times per second. Getting a three-dimensional map with the Neato would require rotating it around another axis, which can be accomplished simply by moving the drone around. In terms of FOV, the Kinect seems like the better choice because it already supports three-dimensional mapping.

Another important factor is the amount of detail each sensor provides. As mentioned before, the Kinect has a resolution of 640x480, so on the horizontal plane the Kinect gets 640 depth pixels over 57°, or about 11.2 pixels per degree. The Neato collects 90 points over 360° on the same plane, or 0.25 points per degree. In terms of data points per frame, the Kinect provides the most detail. Points per degree is not the only measurement that should be considered, because a device can compensate by collecting more points per second. The Kinect operates at 30 frames per second, so every second it collects thirty 640x480 depth images. The Neato is rated at 10 Hz, which in practice gives about four revolutions per second and a total of 360 points per second. In terms of raw data, the Kinect provides far more, operating at 30 fps as opposed to the Neato's 10 Hz.
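The per-degree and per-second figures above are simple arithmetic from the quoted resolution, FOV, and update rates; the short sketch below reproduces them (pure Python, no hardware assumed):

```python
# Back-of-the-envelope comparison of the two sensors using the figures
# quoted above (resolution, FOV, and update rates).

kinect_px_per_deg = 640 / 57            # ~11.2 depth pixels per degree
neato_pts_per_deg = 90 / 360            # 0.25 points per degree

kinect_pts_per_sec = 640 * 480 * 30     # 30 fps of 640x480 depth images
neato_pts_per_sec = 90 * 4              # ~4 revolutions/s, 90 points each

print(round(kinect_px_per_deg, 1), neato_pts_per_deg)
print(kinect_pts_per_sec, neato_pts_per_sec)
```

The raw-throughput gap (millions of depth pixels per second versus hundreds of points per second) is what drives the stronger-processor requirement discussed below.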

Drones are typically small and have weight constraints to fly, which limits the sensors we could use in our design. A typical industrial LIDAR is very large and heavy, not something we could put on the drone. The Neato itself is a small device.

Both choices have their merits. While the Neato does not provide the same amount of information as the Kinect, it is still sufficient to build a three-dimensional map of an area, and the simplicity of its data makes it easier to process. Along with that simplicity, the Neato's 360° view provides a massive advantage for navigation and eliminates the additional simpler sensors that would be necessary with the Kinect. If the Kinect is used, a stronger processor will be needed, which adds cost and weight to the design. Some of the weaknesses of
the Neato can be alleviated by adding another sensor on a different plane, at the cost of additional weight and money. Looking at the overall advantages and disadvantages, the devices tend to even out. We decided to use the Kinect in the design because we want to take advantage of the extra data it provides and build a more detailed map.

3.3 Sensors

Alongside the Kinect sensor there are a barometer, ultrasonic sensors, and an inertial measurement unit composed of an accelerometer, a gyroscope, and a magnetometer. These devices measure the physical quantities that are fundamental for flight navigation and stabilization. The measurement sensors meet three specified requirements of the TOF flying drone project:

1. Diverging from the flight path to avoid structural interference.
2. Coordinating direction and orientation for navigation and stabilization.
3. Altitude readings.

Ultrasonic sensors measure distance as a function of velocity and time. Based on the round-trip time of a sound-wave echo, the equation that describes the distance is

Distance = [Time * (Speed of Sound)] ÷ 2 = [Time * (340 m/s)] ÷ 2
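The formula above can be sketched as a one-line helper (the function name is ours, for illustration):

```python
SPEED_OF_SOUND = 340.0  # m/s, the value used in the formula above

def echo_distance_m(echo_time_s: float) -> float:
    """Distance to an obstacle from the round-trip echo time, halved
    because the sound travels out and back."""
    return echo_time_s * SPEED_OF_SOUND / 2

# A 10 ms round trip corresponds to 1.7 m
print(round(echo_distance_m(0.010), 3))
```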

Here Time is the total time for the echo of the sound wave to return to the ultrasonic sensor. Compatibility with the drone flight controller is key, making the HC-SR04 Ultrasonic Distance Sensor Module our prospective choice; its scanning range of four and a half meters with a 15-degree focus makes dodging potential collisions genuinely realistic. For navigation and stabilization, the GY-85 inertial measurement unit (IMU) is our number-one candidate to meet the next specified requirement. Like many inertial measurement units, it is manufactured with an accelerometer (ADXL345), a gyroscope (ITG3205), and a magnetometer (HMC5883L); similar sensors are integrated inside the PixHawk flight controller. The most involved measuring sensor is the accelerometer, which measures proper acceleration, or acceleration with reference to the gravitational field. As a MEMS (microelectromechanical system) device, the movement of a small mass produces an electrical signal proportional to the force exerted on it; since acceleration equals force divided by mass, the data collected from the accelerometer yields the acceleration of the drone. In other words, an accelerometer can be pictured as a cube encapsulating a weightless ball: when the device moves, the ball collides with pressure-sensitive walls that register the collision, and the acceleration is equal and opposite to the collision where the ball contacts the cube. The next device functions similarly. A gyroscope keeps track of mechanics to
measure angles and their rate of change; the data recorded by the gyroscope informs the drone of its directional heading. The other electrical component of the IMU is the magnetometer, also referred to as a compass. The magnetometer generates a directional heading based on the magnetic principle that opposite poles attract and like poles repel; with reference to the Earth's magnetic field, a directional heading can thus be derived. Finally, altitude is an important quantity for pilots and astronauts, but for this flying drone altitude is primarily important for testing. Digital barometers measure atmospheric pressure by converting an electrical signal produced by air pressure into a pressure reading in pascals, which in turn provides an altitude estimate for the altitude test. Transmitting the IMU data effectively supports both automatic and manual control of the drone, while the data from the ultrasonic sensors will keep the drone from colliding with obstacles during automatic control.

3.4 Transmitters/receivers

Communication requires the connection of two devices. Such interfaces include wireless, I2C, and serial. Described and constrained by analog and digital electronics, transceivers provide information and instructions to computers and embedded systems; the location of these transactions is the embedded ecosystem. Pins, or conductors, are set by the computer to an electromagnetic state as either input pins or output pins: input pins receive information and output pins transmit information. Examples of receivers and transmitters that use pins as their interfaces are TTL, CMOS, and the MSP430.
A transmitter/receiver communication system works as an analog-to-digital conversion. Transforming analog signals into digital signals is the work of comparators and modulators. The digital signals are processed and combined into computer data and instructions, which are stored in peripheral registers. When these registers are filled, the computer is notified by an interrupt. To exchange the data, the computer launches and executes a subroutine that transfers the contents of the peripheral registers to another device (for transmission) or to a memory location (for reception). Using a pin for multiple interfaces requires attention to the assigned voltage levels: for instance, TTL is set at 4 and 3 volts, CMOS at 4 volts, and the MSP430 at 3 and 2 volts. Therefore, interfaces that can share the same pin are TTL and CMOS, or TTL and the MSP430; as a result, more pins allow more interfaces with the embedded computer. Transmitters and receivers are an essential part of any remote-controlled or wireless device or vehicle, and they are actively involved with our processor, the Parallela board, and our Arduino-based flight controller.
3.5 Embedded Linux vs Microcontroller

Every drone needs a controller of some type, because a drone has to perform many tasks just to stay in the air. One task is monitoring the world around it; this can be as limited as monitoring its own orientation to stay stable, or as complicated as attaching detection sensors for collision avoidance. Another task is adjusting the motor speeds: if you want the drone to turn left, the rotors need to be adjusted to do that. Basically, the drone has to constantly monitor its environment and adjust itself to those conditions. The drone we are envisioning also has a few other tasks: it will need to gather sensor data from the Kinect, keep track of that data to build a world model, and then use that model to navigate. A traditional drone might have a microcontroller with a radio transceiver so it can be controlled remotely, but the idea with this drone is to have it move freely by itself, and no microcontroller alone will be able to accomplish that task.

There are two different designs we are considering for controlling the drone. The first uses an embedded Linux computer to control all aspects of the drone. The second takes a more classical drone approach with our additional hardware piggybacking on the drone: a standard microcontroller keeps the drone stable and moves it, while an embedded Linux box with the Kinect records data and directs the microcontroller in the right direction. There are many merits to the dual microcontroller-and-embedded-device design. For one, if the embedded computer crashes, that does not mean the drone will crash: the microcontroller will still be active and able to give the drone a safe landing. In addition to that redundancy, it frees up computation power on the embedded device, which will be needed for processing the Kinect's images. The biggest drawback to this design is the additional power consumption of the microcontroller, which is an important factor for the drone's flight time. From a high-level view, the dual design looks like the best choice for our drone.

To proceed, we will need to choose a good microcontroller and a powerful embedded device to accomplish our tasks. There are many choices for each, but to start we will look at the most common ones: the Atmega8 used in Arduinos and the Raspberry Pi embedded Linux platform.

The Atmega8 is an eight-bit microcontroller with eight kilobytes of on-chip memory and 23 available I/O pins, operating at a 16 MHz clock rate. It is very likely that we will be using up to 18 of the I/O pins for sensors alone. The IMU, which contains the gyroscope, accelerometer, and digital compass, will take about six pins on the chip. We are also considering putting two ultrasound sensors on each axis of the drone, which take two pins each and end up using 12 pins. Together that adds up to 18 pins, not including pins for the ESCs (Electronic Speed Controllers for the motors), which are an unknown until we design them. Looking at our design right now, the Atmega8 may not meet our needs. It is possible to get an Atmega8 variant with up to 35 pins, but there is no way to tell yet if it will be powerful enough to run all of the sensors. There are a number
of other good options available from Atmel, as well as other brands. The problem with these more powerful chips is that they do not come in packages we could attach to a breadboard and test. If we decide to use them, we would have to design our own PCB (Printed Circuit Board). When building the final design we may end up making a PCB anyway; this just limits the amount of testing we can do.
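The pin budget above is a straightforward tally; the sketch below reproduces it from the counts in this section (the ESC pins remain an unknown and are excluded):

```python
# Tally of ATmega8 I/O pins claimed by the sensor suite described above.
imu_pins = 6                              # gyroscope + accelerometer + compass
ultrasonic_sensors = 2 * 3                # two sensors on each of three axes
ultrasonic_pins = ultrasonic_sensors * 2  # two pins per sensor
sensor_pins = imu_pins + ultrasonic_pins  # 18 pins for sensors alone

atmega8_io_pins = 23
spare_pins = atmega8_io_pins - sensor_pins  # left over for ESCs and misc.
print(sensor_pins, spare_pins)
```

With only five pins to spare before the ESCs are even counted, the tally makes concrete why the standard Atmega8 may not meet our needs.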

A few years ago the Raspberry Pi Foundation surprised the world with its cheap and small Linux computer. The tiny desktop computer was capable of running most desktop programs, and at a cost of $45 it was accessible for many hobbyist projects. Since then a number of Pi-like devices have been created, most of which are more powerful than the original Pi. The purpose of choosing such a device is to run the Kinect software on it, which will require a decent amount of CPU power: it has to process the data returned from the Kinect, keep track of it, and use that data to navigate the world. Most embedded Linux platforms available today are probably not capable of all of these tasks, but while searching for a viable solution we came across a unique device by a company called Parallela. Parallela makes an embedded Linux board with 18 cores at a cost of only $100. Of those 18 cores, two are ARM Cortex-A9 cores and 16 are general-purpose computation cores. The purpose of the Parallela board is to provide a cheap alternative to a computer cluster so people can learn to program in parallel. With the power of those 16 cores and the FPGA built into the board, it may be possible to accomplish more in software than with a standard Raspberry Pi.

3.6 Drone Design

3.6.1 How many rotors do we need?

When it comes to drones we have many options, and one of them is how many rotors we need. This can be dictated by the end use, equipment, and weight of the drone. Some drones are used for racing while others capture photos. Racing drones need to be small, dexterous, and unstable by nature, while photo-capture drones need to carry heavy camera equipment and fly precisely even in high wind conditions.
A quad-rotor or tri-rotor drone is ideal for maneuverability because its stability is somewhat unbalanced: any fluctuation in one motor's speed causes the drone to move dramatically. This is ideal for racing, where the drone must maneuver through tight corners at a reasonably fast rate and be able to turn in one direction then fly in the complete opposite direction with relative ease. A camera-carrying drone, on the other hand, should be more of a hexa- or octo-rotor for the opposite reason: with six or eight rotors the drone can compensate for fluctuations in one or two motors and still hold its position, which is the stability needed for still photos. The more rotors on the drone, the heavier it becomes, but the better its payload capacity. For someone mounting a single-lens reflex camera on a drone, the payload becomes
a big deal, whereas racers need their drones light and powerful and are concerned with the power-to-weight ratio. An alternative to these two extremes is to use fewer but more powerful motors that allow a bigger payload. Larger motors with a high-level control algorithm and sensors can adjust the drone against small fluctuations while still giving it moderate maneuverability. For our project this seemed the logical choice: we need a drone with both stability and maneuverability, able to capture video and photos as well as fly through tight spaces.

3.6.2 Open or enclosed design

There are a few options when designing a drone: we can make it a skeletal (open) design or an enclosed design, and there are advantages and disadvantages to both styles. With an enclosed design we can shield out elements such as dust and possibly water, which is beneficial because the electronics on the drone are very sensitive and can be damaged by them. But while an enclosed design is intended to protect the electronics, it can also damage them: with no air movement, the electronics can overheat. The alternative is a skeletal design where the electronics receive airflow. The enclosed design is also more difficult to build. The skeletal design can be implemented with less difficulty and provides air movement, but it is susceptible to weather conditions and dust. Fortunately for us the drone will mainly be used indoors, where dust and other conditions will be negligible or non-existent, so we decided on the open design.

3.6.3 Material: carbon fiber vs polycarbonate vs wood

When creating any type of flying vehicle, whether a helicopter, multirotor copter, or plane, the frame is a critical component of the design. Many factors come into play, for instance durability, weight, cost, and the workability of the material. We compared three materials. First, wood seemed a logical choice because of its availability, cost, and workability: we can buy large quantities at any local hardware store at minimal cost and work it into whatever design we like with simple tools. The downside to using wood is that it is brittle and heavy, and the thinner we make the frame, the weaker it becomes.
This affects the integrity of the drone: one hard landing or collision and the frame can break, possibly damaging other parts of the drone. Our second option is polycarbonate, from which we can make a lightweight frame that is durable and stiff. The cost is a little higher than wood, but the disadvantage of polycarbonate is its workability: we have only two options when
working with polycarbonate, which are to either design a mold or use a 3D printer to make the frame. Option one, making a mold, would require other material for the mold itself, increasing the cost, and the process gives only one chance to get the frame right. Option two, using a 3D printer, would require a 3D virtual model and an expensive printer. Our last option is to build the frame out of carbon fiber, the most durable and stiffest material we could use. Availability, workability, and cost are carbon fiber's biggest disadvantages. Only certain companies sell non-impregnated carbon fiber material, and it is by far the most expensive of the three, even considering the manufacturing cost of polycarbonate. Lastly, carbon fiber is very difficult to work with: many other products are needed to mold and impregnate the material, which further increases the cost. Much like polycarbonate, there is only one chance to make the carbon fiber mold correctly, including setting the epoxy and removing the excess; if the excess epoxy is not removed, it increases the overall weight and removes one of carbon fiber's advantages. There is one other option with carbon fiber that reduces the manufacturing effort but limits the design: pre-impregnated carbon fiber sheets and rolls can be purchased. For our project we decided to use pre-impregnated carbon fiber sheets and rolls for their strength and light weight; purchasing these sheets reduces the cost of manufacturing while limiting our design freedom. We will also incorporate small 3D-printed polycarbonate pieces to improve the appearance of the drone as well as hold some of the electronics.

3.7 Power System Design

The power distribution will supply energy to all systems. The battery supplies a constant voltage that is routed by the power distribution. Two main specifications must be met. First, the distribution center has to be like an ancient city's well: each component is supplied from the one main source. Second, the voltage relayed to the rotors has to be converted to pulses for maneuvering the drone.

Specification One – The battery has only one set of terminals, but the drone has more than one component to power. The power distribution therefore receives the voltage potential of the battery and links each component to that supply. This seems like a magician's trick, since one source of voltage enters and that same voltage exits to each electrical component; routing the battery's voltage to each part of the drone can be done with power blocks. Next, the power (P = VI) for each part has to be converted from the battery voltage. For instance, a twelve-volt battery supplying a six-watt (5 volt, 1.2 amp) microprocessor demands a conversion stage.
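The twelve-volt-to-six-watt example above implies a conversion stage; a rough sketch of the arithmetic follows, with an assumed (not measured) converter efficiency:

```python
# Sketch of the battery-to-component conversion arithmetic described above.
# The converter efficiency is an assumed illustrative value.
V_BATTERY = 12.0
V_LOAD, I_LOAD = 5.0, 1.2          # the six-watt microprocessor example
EFFICIENCY = 0.90                  # assumed buck-converter efficiency

p_load = V_LOAD * I_LOAD                        # 6 W delivered to the load
i_battery = p_load / (EFFICIENCY * V_BATTERY)   # current drawn from 12 V rail
print(round(i_battery, 3))
```

Summing such per-component currents over every branch of the distribution gives the total draw on the battery, which directly bounds flight time.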
Specification Two – One path from the battery will lead to what may be thought of as a second power distribution. Accomplishing this will incorporate the 555 timer, particularly in its monostable multivibrator configuration. The monostable 555 timer, also referred to as the one-shot, charges a capacitor with a current set by an external resistor. The key is to trigger the circuit with a negative pulse. Initially, the R-S flip-flop of the 555 timer sets the output voltage to Vcc (also referred to as V+) and the capacitor, ideally starting uncharged, begins to charge. When the capacitor reaches two-thirds of Vcc, the R-S flip-flop sets the output voltage to zero and the timer discharges the capacitor. Once the capacitor has fully discharged, the 555 timer can begin a new cycle and produce another voltage pulse whose width equals the time required to charge the capacitor, t = 1.1*R*C. Figure 3-1 shows a representation of a charge/discharge cycle of the timing capacitor. The 555 timer's pulse is modified with a control voltage source: the control voltage can raise or lower the two-thirds Vcc threshold, effectively changing how long the capacitor charges. This second power distribution supplies the electronic speed controllers (ESCs), which in turn operate the motors that spin the rotors. Therefore, to descend, pulse periods decrease; to climb, pulse periods increase; and for turns, one side of the drone continues receiving power while the other side does not.

Figure 3-1: 555 timer and output voltage. Initially the capacitor is charged, resetting the output to zero immediately. A negative pulse triggers every twenty milliseconds, and the timing capacitor has an eleven-millisecond charging time.

3.8 Motor systems

Motors need power, and for proper use of that power, motors must be controlled and sometimes their output measured. Electronic speed controls sit between the power supply and the motors to achieve this requirement: they act as voltage regulators and pulse-width modulators to supply and control the brushless motors for this flying drone project. Brushless motors are a technology derived from the constraints of brushed motors: the brushes wear, they create electrical noise and sparks, maximum speed is limited, the electromagnet is difficult to cool, and the number of magnetic poles is limited. The pioneering brushed motor design consisted of two permanent magnets of opposite polarity, an electromagnet (an iron core coiled with wire) at the center to rotate, and brushes to change the polarity of the electromagnet. The electromagnet's polarity changes each time it passes an attracting permanent magnet, so the magnet in the forward direction of rotation is always attracting and the other always opposing. Conceptually, the brushless motor is the same, except that the rotating center is made of permanent magnets of opposite polarity, the brushes are replaced by transistors and computers, and the stator is made of electromagnets. Benefits of this design are more precise and efficient speed production, no sparks and less noise, and easier cooling of the electromagnets. The physical effects behind the motor are magnetic attraction and opposition, the correspondence of current and magnetic field, and electromagnets such as the horseshoe electromagnet. The electronic speed control component controls the motor's output through three states: off, half-speed, and full-speed. The state is determined by the voltage pulse provided by the power distribution's monostable 555 timer. For instance, at Vcc equal to twelve volts and a maximum capacitor charge time of two milliseconds, the electronic speed controllers regulate the voltage as applicable and provide power to the motor as follows: none for a 1-millisecond pulse, half for a 1.5-millisecond pulse, and full for a 2-millisecond pulse. Accordingly, the brushless motor's electromagnets attract and oppose the poles of the centered permanent magnets to produce the rotation needed to meet the speed requirement of the flying drone's rotors. A possible brushless motor for the flying drone, the Cobra 2204/58, has twelve stator slots for electromagnets and fourteen magnetic poles at the center.
3.9 Camera System

The integral component for gathering the information to form three-dimensional maps and navigate is Microsoft's Kinect sensor. Released to the public in November 2010, the device has intrigued the computer vision community and serves as a replacement for more expensive traditional three-dimensional cameras. The device operates as a time-of-flight camera that watches a volume until movement is detected; when movement is found, a three-dimensional image is formed and the appropriate Xbox computer system reaction follows. Although that is the Kinect depth sensor's original application, its three-dimensional imaging capability can be repurposed for three-dimensional capture. The main feature of the Kinect is an infrared emitter teamed with a complementary metal oxide semiconductor (CMOS) camera behind an infrared pass filter; together the emitter and camera photograph infrared signals at a range of eight-tenths of a meter to three and a half meters. The relative positions of the IR signals are plotted on a three-dimensional axis. Possible software options for converting the image data into three-dimensional maps from the Kinect are OpenNI, OpenKinect, and the Microsoft SDK. These software development kits process the information similarly, although OpenNI proves to be the best option. In Figure 3-2,


Figure 3-2 shows how OpenNI's software development kit linearizes the depths of the infrared signals, converting each depth to an unsigned integer value representing distance in millimeters.

[Plot: depth sensor value (mm) versus distance (mm); both axes span 0 to 4500 mm.]

Figure 3-2 : Depth sensor value as a function of distance

The three-dimensional map is derived from the depth measurements. Depths are obtained from the infrared emitter's laser, which points directly outward, and the Kinect's depth sensor, which acquires the surroundings. With the laser as a reference, the depth sensor calculates the outward distance to an infrared signal; from the signal aligned with the laser, the other signals are mapped perpendicular to it. The Kinect's angular field of view is fifty-seven degrees horizontally and forty-three degrees vertically. Resolution, the spatial distance between neighboring photographed infrared signals, is approximately three millimeters at a depth of two meters. To process this information, the image data from the Kinect is transmitted to the Parallella microprocessor over a universal serial bus connection. Graphics processors are designed as single instruction, multiple data stream machines, closely related to vector architectures. For the Parallella to process the Kinect data and still have time to navigate with the three-dimensional imaging, the graphics processors must provide a map from which the drone can locate its current position. Figure 3-3 graphs the roughly exponential relationship between resolution and distance; the camera's range cuts off at roughly 4 meters.
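The field-of-view figures above are enough to back-project a depth pixel into a 3-D point under a simple pinhole-camera model. This is a sketch under our own assumptions (the 640x480 depth-image resolution and all names here are ours, not taken from the Kinect SDK):

```python
import math

H_FOV = math.radians(57.0)  # horizontal field of view, from the text
V_FOV = math.radians(43.0)  # vertical field of view, from the text
W, H = 640, 480             # assumed depth-image resolution

def pixel_to_point(u, v, depth_mm):
    """Back-project depth pixel (u, v) to a 3-D point, all units in mm."""
    fx = (W / 2) / math.tan(H_FOV / 2)  # effective focal length in pixels
    fy = (H / 2) / math.tan(V_FOV / 2)
    x = (u - W / 2) * depth_mm / fx     # lateral offset scales with depth
    y = (v - H / 2) * depth_mm / fy
    return (x, y, depth_mm)
```

Applying this to every pixel of a depth frame yields the point cloud from which the three-dimensional map is built.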


[Plot: resolution (mm) versus distance (mm); distance spans 0 to 4500 mm, resolution 0 to 60 mm.]

Figure 3-3: Resolution as a function of distance 

Programming options for graphics processing units are OpenGL, Microsoft DirectX, and NVIDIA CUDA. The need for the massive computation of a graphics processing unit stems from the Kinect uploading a new photograph every thirty-three milliseconds, or thirty times per second. Central processing completes the camera system. The central processing unit is tasked with interfacing with the Arduino flight controller, the inertial measurement unit, and the ultrasonic sensors to track distance traveled and distance to obstructions. Coupled with the three-dimensional plot computed by the graphics processing unit, the central processor can direct the drone's flight path by comparing its actual position against the computed plot.
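The thirty-three millisecond frame interval sets a hard per-frame compute budget; a minimal sketch of that budget check (the names here are illustrative, not from any library):

```python
FRAME_RATE_HZ = 30                      # Kinect depth stream rate noted above
frame_budget_ms = 1000 / FRAME_RATE_HZ  # ~33.3 ms between successive frames

def meets_realtime(processing_ms):
    """True if one frame can be fully processed before the next one arrives."""
    return processing_ms <= frame_budget_ms
```

Any per-frame pipeline (depth linearization, back-projection, map update) must fit inside this budget or frames will be dropped.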

4 Standards 

Just as the United States, founded in 1776, adopted ten constitutional amendments to guide and establish the country, IEEE has its code of ethics to shape and define the electrical and computer engineering community. First among its tenets, IEEE members and engineers commit to the highest ethical and professional conduct by accepting responsibility for decisions that affect the public's safety, health, and welfare. The engineering and manufacturing standards implemented by the IEEE administration and its members likewise protect the public's safety, health, and welfare by setting required levels of accuracy and quality. Together, the IEEE code of ethics and its standards keep the world's technology and industrial products current and functioning.


Code of Ethics
1. To accept responsibility in making decisions consistent with the safety, health, and welfare of the public, and to disclose promptly factors that might endanger the public or the environment;
2. To avoid real or perceived conflicts of interest whenever possible, and to disclose them to affected parties when they do exist;
3. To be honest and realistic in stating claims or estimates based on available data;
4. To reject bribery in all its forms;
5. To improve the understanding of technology, its appropriate application, and potential consequences;
6. To maintain and improve our technical competence and to undertake technological tasks for others only if qualified by training or experience, or after full disclosure of pertinent limitations;
7. To seek, accept, and offer honest criticism of technical work, to acknowledge and correct errors, and to credit properly the contributions of others;
8. To treat fairly all persons regardless of such factors as race, religion, gender, disability, age, or national origin;
9. To avoid injuring others, their property, reputation, or employment by false or malicious action;
10. To assist colleagues and co-workers in their professional development and to support them in following this code of ethics.

4.1 Related Standards 

Standards ordinarily describe expected quality and manufacturability. For instance, the starter relay for a vehicle is built to transmit a signal approximately one hundred thousand times; such an extent of predicted function requires quality standards for parts and design to meet customer satisfaction. Additionally, the relay is built by machines and sometimes by humans, so designs that can be produced, and reproduced, with approved and obtainable materials are necessary for the relay's continued success. The project drone has a flight controller, microprocessor, Xbox Kinect camera, OpenKinect software, induction charging circuit, electronic speed control, and power distribution from a battery source. In planning the project, some of the following standards guide construction while others add to the overall understanding:

1. IEEE/EIA 12207: The Life Cycle Process Framework addresses the framework by which an organization develops fully capable and interactive software that meets specific project and task-level requirements and needs.

2. 295-1969-IEEE Standard for Electronics and Power Transformers pertains to coupled inductors that are supplied by power lines or generators of


sine-wave or polyphase voltage and are applied in electronic equipment. Testing procedures are included.

3. C57.13-2008-IEEE Standard Requirements for Instrument Transformers describes the electrical, dimensional, and mechanical characteristics of inductively coupled voltage and current transformers used with equipment for the distribution, generation, measurement, or transmission of alternating current.

4. C57.154-2012-IEEE Standard for the Design, Testing, and Application of Liquid-Immersed Distribution, Power, and Regulating Transformers Using High-Temperature Insulation Systems and Operating at Elevated Temperatures explains the thermal, ambient, and continuous load conditions and limits for liquid-immersed regulating and distribution transformers, as well as for any transformer incorporating a high-temperature insulation system.

5. 1547-2003-IEEE Standard for Interconnecting Distributed Resources with Electric Power Systems is a benchmark for utility electric power systems that were not designed to accommodate active generation and storage at the distribution level. Major issues and obstacles arise when such systems transition to using and integrating distributed power resources with the grid.

6. 208-1995-IEEE Standard on Video Techniques: Measurement of Resolution of Camera Systems is a list of methods for measuring the resolution of camera systems. The principle is that once the limit of fine detail has been determined, the camera system will not reproduce the original image beyond that limit.

7. 754-2008-IEEE Standard for Floating-Point Arithmetic specifies formats and methods for binary and decimal floating-point arithmetic in computer programming environments, including how exceptions to the default arithmetic are handled and how data is interchanged between formats.

8. 802-2014-IEEE Standard for Local and Metropolitan Area Networks: Overview and Architecture provides an overview of higher layer protocols, address structure, and identifier hierarchy, and includes a method for higher layer protocol identification.

9. 18-2012-IEEE Standard for Shunt Power Capacitors applies to capacitors rated above two hundred sixteen volts that are designed for shunt (parallel) connection to alternating-current transmission and distribution systems operating at a frequency of fifty or sixty hertz.

10. 1-2000-IEEE Recommended Practice-General Principles for Temperature Limits in the Rating of Electrical Equipment and for the Evaluation of Electrical Insulation concerns the thermal endurance of electrical insulating materials and simple combinations of such materials, for establishing the limiting temperatures of a system.

4.2 Design impact of relevant standards   


Standards particularly important for the three-dimensional mapping drone are 1547-2003-IEEE Standard for Interconnecting Distributed Resources with Electric Power Systems; 208-1995-IEEE Standard on Video Techniques: Measurement of Resolution of Camera Systems; and 295-1969-IEEE Standard for Electronics and Power Transformers. 1547-2003 details electric systems and distributed sources, mainly treating the grid as the source. The drone's power distribution isn't fed directly from the grid during flight, but the battery is a distributed resource. The interconnection of energy-storage technologies to electric power systems depends on the type of energy (electricity), the characteristics of the energy (low voltage), the energy capacity (~30 Ah), and the electric power system being served (ESC, flight controller, embedded Linux). The standard's interconnection guidelines state: keep voltage levels as near their specified levels as possible; distributed resources will offset load current and may raise the electric power system's voltage; inductive loads can lower voltage; and capacitive loads can raise it. For a low-voltage electrical power system, one problem to be aware of is that if a distributed load follows a voltage regulator and is at a lower voltage, the regulator may produce an output similar to the distributed load. Other potential problems for electrical power systems with distributed loads, less relevant here, are high-voltage limiting, voltage unbalance, excessive operations, improper regulation of reverse flow, and alternate source feeds. 208-1995 sets out the factors for grading camera resolution. Possible factors are the method of scanning image lines, the circuit's frequency response characteristics, the optic lenses, the imaging device (camera), or the reproducing device (monitor). A standard measurement procedure for resolution is important for comparing and judging picture quality.
Two parameters measured under standard 208-1995 describe the Xbox Kinect image: limiting resolution and horizontal resolution. Limiting resolution is found by taking the reproduced image from the camera and observing the display on the monitor; when individual lines can no longer be distinguished, in other words when the image blurs together, the resolution limit has been met. The three-dimensional mapping is done mostly with the black-and-white imaging of the Kinect depth sensor, which corresponds more closely to horizontal resolution. That measurement is made by connecting an oscilloscope to the monitor and to the signal output of the camera system. The black-to-white video is measured with reference to the magnitude difference between white and black, with respect to aliasing, and the corresponding proportion calculation gives the horizontal resolution. 295-1969 covers testing techniques, possible over-powering problems, and insulation and rectifiers for coupled inductors. Transformers and mutual inductors should be tested at a voltage higher than the design value and for at least seven thousand two hundred cycles. Inrush current is known to occur in these circuit elements: it happens when voltage is applied to the circuit while residual flux remains, so opposition to the current is weak because the magnetic field is already established, and the current through the coupled inductors changes rapidly. Linear rectifier insulation is simple compared to non-linear rectifier insulation, and each creates an internal impedance. That internal impedance can be accounted for by the rectifier's


current output across the coupled inductors dividing the voltage by a factor of two, or more precisely by a regulation percentage relating the pre-coupled-inductor voltage to the post-coupled-inductor voltage.
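The regulation-percentage bookkeeping above reduces to a simple ratio; a sketch under the usual definition of transformer voltage regulation (the function name is ours):

```python
def regulation_percent(v_no_load, v_full_load):
    """Transformer voltage regulation: relative voltage drop from no load
    to full load, expressed as a percentage of the full-load voltage."""
    return (v_no_load - v_full_load) / v_full_load * 100.0
```

For example, a coupled inductor whose output sags from 24 V unloaded to 20 V under load has 20% regulation.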

5 Facilities and equipment  We will use any and all facilities available to us at the University of Central Florida. Quite a few labs are available with various equipment. The senior design lab has electrical testing equipment and computers, giving us access to:

Tektronix AFG 3022 Dual Channel Arbitrary Function Generator, 25 MHz
Tektronix DMM 4050 6 ½ Digit Precision Multimeter
Agilent E3630A Triple Output DC Power Supply
Dell Optiplex 960 Computer

We also have access to software such as Multisim, Matlab, Xilinx, and many C programming applications. Another lab available to us is the machine lab. The equipment in this lab is extensive; it has:

Timemaster Lathe
DoALL Band Saw
Wellsaw Band Saw
Bridgeport Milling Machine
Bridgeport TORQ-CUT 30 CNC Machining Center
Milltronics MB11 CNC Mill
Wellsaw Horizontal Band Saw
Millport Grinder
DoALL Tool and Cutter Grinder
Kent KLS-1540 Gap-Bed Lathe
Bridgeport Series II CNC
Millport Milling Machine
Hunco Milling Machine
Tri-onics Handyman CNC
Millport Lathe
Logan Lathe
Rockwell/Delta Grinder
Starrett Rapid Check 2
Cut Off Saw

Finally, the TI Innovation Lab is also available; it contains a 3D printer as well as a laser cutter.


6 Realistic Design Constraints 6.1 Economic and Time constraints   With any real-world project or application there are economic and time constraints, and our project is no different. An economic constraint is a type of external constraint, some factor in a company's external environment that is usually out of the company's control. According to Study.com, "An economic constraint involves external economic factors that affect a company and is usually outside of its control" (1). Funding is an economic constraint for us. As of right now we do not have financial backing, so the burden of funding the project becomes ours. Designing and building a drone can be exotic and extensive if funding allows it; our group can collectively fund only a small amount, leaving us to figure out how to obtain all the necessary components at minimal cost. Instead of purchasing new parts we are looking at purchasing second-hand or even broken parts that we might be able to salvage in order to reduce costs. Instead of building a molded carbon fiber frame we decided to purchase carbon fiber sheets and rolls to get the properties of carbon fiber at a reduced cost. This leads to a restricted design.

One constraint dealt to anyone, big or small, is time. Even with all the funding in the world, time limits how extravagant our project can be. As students we have many classes to take, and they require time; studying, homework, and labs all reduce the time we can spend on our senior design project. Jobs and family also reduce the time an individual can devote to the project. Within the project itself, time is further reduced by documentation and a set deadline. We also need to allow time for parts to arrive, especially if they are ordered from Asia.
With all of these time constraints, the available time to work on the project is limited to a few hours per person.

6.2 Environmental, Social, and Political constraints       There are many things beyond our control; social, environmental, and political constraints can all affect how we strategize and implement our project. The environmental constraints are summarized in Table 6-2.

Table 6-2: Environmental constraints

Constraint        Affects
People            Inaccurate mapping
Buildings         Radio interference
Weather (wind)    Flight time / ceiling height
Weather (sun)     Interference with infrared sensor

For social constraints we looked around the internet to see what would hinder our product in the social world. Fortunately for us, drones are extremely popular and demand for them is only increasing. The downside is the difficulty of building a good-quality drone at a minimal price: there are drones on the market that an individual can purchase for a few hundred dollars and get a quality product with lots of features. For our drone to stand out among the sea of drones, we need to incorporate features no other drone has, at a moderate price point. The political constraints on our project are ever increasing. From China banning drones to the idea of Amazon having thousands flying overhead to deliver the latest purchases, society is growing wary of drones invading personal space. Politicians are pushing for stricter rules on where a person can fly a drone; not only are there dozens of websites discussing the legality of flying drones, the FAA has released an app users can download to see where they can fly their drone legally.     6.3 Ethical, Health, and Safety constraints

For individuals who don't own flying drones, the laws that limit their use may be of more importance than to owners of UAVs. Nonetheless, an ethical review of unmanned aerial vehicles is of interest and significance. Beyond unusual cases such as quantity limits, strict airspace regulations, and application-specific drones like facial recognition and celebrity photography, ethical discussions focus on citizens' privacy and unapproved drone use. The FAA, the Federal Aviation Administration, is responsible for licensing drones: any drone that will fly above four hundred feet must be certified by the FAA as airworthy, for regulation and safety reasons. Even so, unsanctioned drones are quite popular for such relatively new technology; these drones are typically used for assisting law enforcement. Privacy is the other moral principle with mixed perceptions of what is right and what is wrong. As of today, the Fourth Amendment doesn't protect a citizen's household privacy from drones and their cameras in the airspace above, since any aircraft could fly over the property. However, drone surveillance of citizens isn't admissible except as court evidence, and observing suspicious activity is considered a search by a non-general-public device under readings of the Fourth Amendment. Ethical constraints won't be a concern for this project's assembly and testing, although safety constraints are a factor when testing the drone. As stated, UAVs flying below four hundred feet don't have to be approved by the Federal Aviation Administration, making our drone legal up to four hundred feet with no prior certification. Safety is a concern, though, in areas with many people. The autonomous connection for controlling the drone requires that the transmitted signal be received by the drone. If any interference from the


drone's body or any other obstacle blocks the signal, a potential collision could occur. Although small and lightweight, the drone is still made of material that doesn't normally break on contact, making it potentially hazardous to bystanders. Electricity and computers have not, over the years, shown notable health risks, so the drone has no obvious health constraints.

6.4 Manufacturability and Sustainability constraints   

The manufacturing constraints for drones result mainly from a drone's design. An enclosed drone has heat constraints, and manufacturing will require fans, low-power embedded systems, and possibly an altered assembly. An open drone body has dust and water resistance constraints; manufacturing will require creative techniques for blocking dust, ongoing maintenance, and techniques and technologies that block water or improve water resistance. The sustainability constraints of drones depend on the manufacturing: poor manufacturing leaves a drone susceptible to overheating and to dust and water malfunctions. Another design factor affecting the drone's susceptibility, though not enough to be a manufacturing constraint, is rotor size; larger rotors compensate for declining atmospheric pressure to keep the drone airborne. The drone for this project is an open-body design whose main purpose is indoor three-dimensional mapping, so our concern for manufacturing and sustainability constraints is minimal.

7 Project Hardware and Software Design Details  7.1 Initial Design Architectures and Related Diagrams    

In our initial designs for the drone we came up with an enclosed X style design (Figure 7-1) as well as a skeletal X design (Figure 7-2).


Figure 7-4: Initial sketch of drone frame

Figure 7-5: Initial size estimation of drone with needed components.

After we purchased and disassembled a Kinect to retrieve the sensor, we realized that it has significant weight. We then came up with a longer H pattern with the ultrasonic sensors on the main body, and finally modified that design to move the sensors below the motors, at the extreme outer limit, extending the sensing range to the drone's extremities (Figure 7-2).


7.2 Drone System 7.2.1 Drone ESC     A very critical part of the drone is the ESC, the electronic speed controller. A lot of information must be taken into consideration when designing an ESC. First, the amperage of the motors must be known. Second, will the electronic speed controller also be a battery eliminator circuit? Third, will it be a four-in-one design or a single-motor controller? Fourth, will it give the motor a soft start? We also need to design an electronic speed controller with an ampere rating 20% higher than that of the motor. Before all of this, though, we need to know what an electronic speed controller is. In short, an electronic speed controller is a pulse width modulator that takes a signal from the flight controller and current from the battery and converts the direct-current input into a modulated direct-current output. Our motors are rated at 18 amperes, and with the 20% margin we need to build an ESC rated at 21.6 amperes. We decided to use the WEBENCH program from the Texas Instruments website, which makes building a circuit easy: we entered the parameters we needed and it produced a schematic as well as a PCB layout of our design, along with the bill of materials for the circuit. We will attempt to give this speed controller a soft start in order to reduce wear and possible motor failure from an abrupt power-on sequence; the overshoot of the starting pulse can damage the motor, so we will implement a soft start at the cost of slower reaction time. The ESC will also include a BEC in order to reduce the weight of the drone. Figure 7.3 shows the design given to us by the TI website.


Figure 7-6: ESC circuit design
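The 20% margin arithmetic and the soft-start idea from the ESC discussion above can be sketched as follows (the ramp function is our illustration of gradually widening control pulses, not TI's design):

```python
MOTOR_MAX_A = 18.0   # motor current rating from the text
MARGIN = 0.20        # 20% headroom the design calls for

esc_rating_a = MOTOR_MAX_A * (1 + MARGIN)  # 21.6 A minimum ESC rating

def soft_start_pulses(target_ms, start_ms=1.0, steps=10):
    """Widen the ESC control pulse gradually instead of jumping straight
    to the target width, avoiding the startup overshoot described above."""
    return [start_ms + (target_ms - start_ms) * i / steps
            for i in range(1, steps + 1)]
```

Feeding the ramped pulse widths to the ESC in sequence spreads the spin-up over several control periods, trading reaction time for reduced motor stress.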


7.2.2 Drone Frame    Figure 7-4 gives a basic layout of the components of our drone and a basic design. Our drone frame will consist of carbon fiber tubing, carbon fiber sheets, and 3D-printed parts, along with aluminum brackets and various hardware. The drone will be designed in an "H" pattern and will have multiple layers to the body. These multiple layers reduce the overall length of the drone, and the bottom layer, which houses the battery and induction charger, will be movable in order to balance the drone's weight. The landing gear will be fixed and also made of carbon fiber tubing. We will mount the ultrasonic sensors underneath the motors in order to have sensing ability at the extremities of the drone. There will also be upward-facing and downward-facing ultrasonic sensors so that we have complete three-axis proximity sensing.

Figure 7-7 Drone frame layout

7.2.3 Battery and Flight Time  When picking a battery we first need to determine several other variables and review our specifications as well as our constraints. Our first objective was to calculate the weight of our drone. To do that we had to make some hypotheses about various parts; for instance, we do not know the final weight of the power distribution block or the electronic speed controllers, so we looked at current speed controllers on the market and added 20% weight to compensate for production purposes. We gave our power distribution block a weight comparable to the Parallella board that we are using for computation, because we decided it should be about as large but not as demanding as the Parallella board. We then began picking parts and creating a spreadsheet of the weights and amperage, which can be seen in Table 7-3. Within the spreadsheet


we made an assumption for the weight of the batteries so that we could choose motors that would give us the thrust needed to lift and operate the drone.

Table 7-3: Power and weight requirements

Name                      Notes          Qty  Max draw (mA, each)  Hover draw (mA, each)  Voltage (V)  Weight (g, each)
Parallella board          -              1    2000                 2000                   5            72
Drone frame               -              1    0                    0                      -            500
Flight controller         -              1    800                  800                    7-12         36
Barometer                 powered by FC  1    0.65                 0.012                  1.8-3.6      5
IMU                       powered by FC  1    0                    0                      3-5          2
ESC                       -              4    12000                12000                  6-12         128
Motor                     -              4    10000                4000                   24           141
Ultrasonic sensor         -              6    15                   15                     5            8.5
Kinect                    -              1    1110                 1110                   5            150
Battery                   -              1    0                    0                      -            421
Power distribution board  -              1    0                    0                      -            75
Transmitter               powered by FC  1    600                  600                    7.4-16       22
Receiver                  -              1    1300                 1300                   4.8-6        9.5
Induction charging        -              1    0                    0                      -            40
Wires                     -              1    0                    0                      -            15
Propeller                 -              4    0                    0                      -            20
Total (with quantities)   -              -    93900                69900                  -            2554.5
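As a sanity check, the totals can be recomputed from the per-item values and quantities transcribed from Table 7-3 (the current and weight columns are per item, and the Total row accounts for quantities):

```python
# (name, quantity, max draw mA per item, hover draw mA per item, grams per item)
parts = [
    ("Parallella board", 1, 2000, 2000, 72),
    ("drone frame", 1, 0, 0, 500),
    ("flight controller", 1, 800, 800, 36),
    ("barometer", 1, 0.65, 0.012, 5),
    ("IMU", 1, 0, 0, 2),
    ("ESC", 4, 12000, 12000, 128),
    ("motor", 4, 10000, 4000, 141),
    ("ultrasonic sensor", 6, 15, 15, 8.5),
    ("Kinect", 1, 1110, 1110, 150),
    ("battery", 1, 0, 0, 421),
    ("power distribution board", 1, 0, 0, 75),
    ("transmitter", 1, 600, 600, 22),
    ("receiver", 1, 1300, 1300, 9.5),
    ("induction charging", 1, 0, 0, 40),
    ("wires", 1, 0, 0, 15),
    ("propeller", 4, 0, 0, 20),
]

total_max_ma = sum(q * mx for _, q, mx, _, _ in parts)    # ~93900 mA
total_hover_ma = sum(q * hv for _, q, _, hv, _ in parts)  # ~69900 mA
total_grams = sum(q * g for _, q, _, _, g in parts)       # 2554.5 g
```

The computed sums reproduce the table's Total row, confirming that the listed totals already include quantities.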


Once the motors were chosen, we could calculate the amperage draw of all the components in the system. With the amperage calculated, we found a calculator for estimating flight time and used that information to select the battery. First we chose the battery, using the "C rule" to figure out what discharge capability we needed. In our search we found a 60C, 5.2 Ah lithium polymer battery, which gives a maximum discharge capability on the order of 300 A (60 C × 5.2 Ah ≈ 312 A). This is more than enough battery capability even if our components are not as efficient as we've calculated. We then entered the battery information into the flight time calculator on the FlyingTech website (Figure 7-5), along with our average hover amperage and maximum amperage, to obtain an estimated flight time of thirty minutes. This meets our drone requirement of thirty minutes.

Figure 7-8
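The calculator's estimate follows the standard relation between usable pack charge and average current draw; a sketch using the 5.2 Ah pack and an illustrative 8 A average hover draw (the draw value and the 80% usable fraction are placeholder assumptions, not our measured figures):

```python
def flight_time_min(capacity_ah, avg_draw_a, usable_fraction=0.8):
    """Estimate flight time in minutes: usable pack charge / average draw.

    usable_fraction hedges against fully draining a LiPo pack (assumed 80%).
    """
    return capacity_ah * usable_fraction / avg_draw_a * 60.0

# flight_time_min(5.2, 8.0) gives roughly a half-hour estimate
```

Any online flight-time calculator is performing essentially this division, with its own assumptions about usable capacity.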

7.2.4 Induction Charging    

An induction interface initializes at the base/charge station where the drone docks; recharging is ideal when the inductor on the drone aligns with the inductor on the base station. The interface transfers a twenty-volt sinusoid peak every 8.3 milliseconds, or alternately a fifteen-volt sinusoid peak. Since this is enough voltage to charge a car battery, the supply requirement of the drone is met. The drone battery needs 11.1 to 11.5 volts to recharge. Taking the median, 11.3 volts, the battery consumes and repurposes 56.5 percent of the voltage transferred across the induction interface for twenty-volt peaks, or 75.3 percent for fifteen-volt peaks. The induction interface is amply supplied and allocates sufficient potential for recharging the drone's battery.
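The percentages above follow directly from the voltage ratios (11.3 V is the median of the 11.1-11.5 V recharge range; the function name is ours):

```python
BATTERY_V = 11.3  # median of the 11.1-11.5 V recharge range

def utilization(peak_v):
    """Fraction of the transferred sinusoid peak the battery actually uses."""
    return BATTERY_V / peak_v

# 20 V peak -> 56.5% utilization; 15 V peak -> ~75.3% utilization
```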


7.2.5 Flight Controller      The nerve center of any drone is the flight controller. With so many calculations and sensor readings needed just to maintain level hovering, the flight controller is a vital part of the drone. It takes the signals given by the user and the sensors, runs them through a proportional-integral-derivative algorithm, and then pushes signals to the motors via the electronic speed controllers to speed up or slow down based on the given input. For our drone we decided to use the Pixhawk from 3D Robotics because it has several useful features and is the go-to flight controller for high-end drones. The Pixhawk is a bit expensive at $199.99, but it will be more than enough for our project, and it has autonomous capability that we will attempt to modify and use. We will purchase the flight controller directly from 3D Robotics; because this is the heart of the drone, we would like the customer service option in case there are any issues, and flight controllers purchased from overseas usually lack that support.

7.2.6 Collision sensors   To avoid collisions the drone will be equipped with six ultrasonic sensors, giving us a reading in every direction, including above and below the drone. Four of the sensors will be placed under the motors to provide detection at the outer extremities of the drone. The information and graph were obtained from the Accudiy website. The sensors have a sensing range of 2 centimeters to 4.5 meters with a field of view of fifteen degrees, giving the drone the ability to sense an object across a 1.8 meter field width at 4.5 meters (figure 7.6).


Figure 7-9: Collision sensors field of view

Figure 7-10: A representation of the signal path if an object is detected.

The sensor works by sending out a pulse 10 microseconds wide; the signal bounces off an object and returns. Figure 7-10 shows the signal path. This process modifies the return signal, and the width of that signal determines the distance of the object via the formula:

Distance = (pulse width × speed of sound) / 2

The speed of sound is a constant in this equation, equal to 340 m/s. We will also have the Kinect's depth sensor detecting directly in front of the drone, with a similar signal path. With these sensors we will be able to detect objects on all three axes. There are some blind areas, but they are not much of a concern because of the direction and speed of the drone. The drone has the widest field of vision at the front, where there are two ultrasonic sensors plus the Kinect; the sides and rear still have some coverage but the least field of vision. The drone will be travelling forward


with rotational changes, so the blind spots at the sides and rear of the drone are negligible. We determined that these sensors would work perfectly for our project because they give us the range we need and are relatively low power. We purchased ten of these sensors from a seller on eBay at a price of $1.63 per ultrasonic sensor.
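The echo-timing formula above maps directly to a small helper. This is a sketch of the conversion only; the function name and the sample pulse width are illustrative, not taken from the sensor's documentation:

```python
# Distance from an ultrasonic echo: distance = (pulse width * speed of sound) / 2.
SPEED_OF_SOUND = 340.0  # m/s, the constant used in the text


def echo_to_distance(pulse_width_s: float) -> float:
    """Convert an echo pulse width (seconds) to object distance (meters)."""
    return pulse_width_s * SPEED_OF_SOUND / 2


# A ~26.5 ms round-trip echo corresponds to the sensor's 4.5 m maximum range.
print(echo_to_distance(0.0265))
```

The division by two accounts for the pulse travelling out to the object and back.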

7.2.7 Power Distribution

7.2.7.1 Kinect Power Subsystem

In addition to the power distribution board that controls power to the drone, we need a power system for the Kinect. The Kinect has multiple voltage inputs to power its different components, and to keep the design of the drone power system from becoming too complicated, we decided to break this area into its own subsystem. Out of the box, the Kinect has a wall converter that provides it with 12V of power in addition to the 5V that comes from the USB plug on the computer. This power requirement can be significantly reduced by stripping down the Kinect.

The Kinect is made of three different boards, each with different sensors and power needs. The front-most board has all of the camera components, which is really the only thing we need. The middle one has the chip for processing all of the audio data, and the last one is a USB hub for connecting all of the Kinect devices to the computer. Because we only need the camera board, the audio and USB boards can be removed without any problem. This significantly reduces the amount of power that the Kinect requires. Ideally we would be able to connect the camera board through USB and have it powered that way, but unfortunately it still requires more power than can be drawn from a USB hub. The reason for this is the Peltier device that the Kinect uses to cool the laser, which draws anywhere from 500 to 900mA depending on the cooling needs. Below is the pin-out of the Kinect camera board, which uses a proprietary 14-pin connector. The part number for this connector is Tyco 4-174639-4.

Pins 1, 3, and 5 power the Peltier and draw 500-900 mA at 3.3V
Pins 2, 4, 6, 9, and 12 are the ground pins
Pin 7 requires 1.8 V at 200mA
Pins 8 and 10 are USB connector pins
Pin 11 is the 5V power from USB
Pins 13 and 14 are for debugging and are not needed

The pinouts of this board were discovered by the hackers working on the OpenKinect project, who deserve credit for their findings. A brief initial analysis of the pinouts partially confirms these findings: we measured a resistance of about 10 Ohms across pins 1, 3, and 5, which implies they will need a supply voltage. The work done by the OpenKinect project has been used by many different people and is confirmed to be correct.


With the pin functions known, we can now design a power system for the Kinect. The required input and output voltages are already known. We will have two outputs, one for the 3.3 V and one for the 1.8 V, and let the 5 V at pin 11 be powered by USB as intended. To design the system we used a tool provided by TI called Webench. Because of the limited power available from the battery of the drone, one of the most important parameters for us is the efficiency of the circuit. After using the tool for a while we found a design that suited our needs. At first we were blindly tweaking parameters to see if we could create an optimized build, but then we came across an article by Digikey about voltage regulators stating that the closer the input voltage is to the output voltage, the more efficient a circuit will be (Electronic Products, 2012). Using this idea, we chose 5 volts as the minimum and maximum for the input voltage, and were able to get a circuit that is 97.8% efficient at a cost of only $4.41. While that seems cheap, it is only the cost of the components. The actual cost will be greater because the components are too small to solder ourselves and we will have to pay to have them mounted to a PCB. The basic design is illustrated in figure 7-11 below.

Figure 7-11: Kinect power supply

As figure 7-11 illustrates, the input voltage is 5V and the current required to power the devices is 681 mA. Two parts, the TPS54218 and the LM3671-1.8, provide the proper voltage and current to their pins. Both Supply One and Supply Two have their own circuit designs associated with them, which can be viewed in the schematics section in chapter nine.
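The 681 mA input figure is consistent with the loads described earlier. As a sketch, assuming the worst-case Peltier draw of 900 mA at 3.3 V, the 200 mA at 1.8 V, and the 97.8% converter efficiency from a 5 V input (the worst-case assumption is ours, not stated in the text):

```python
# Cross-check of the 681 mA input current under stated assumptions.
p_load = 3.3 * 0.900 + 1.8 * 0.200  # 3.33 W delivered to the Kinect camera board
i_in = p_load / 0.978 / 5.0         # current drawn from the 5 V input rail
print(f"{i_in * 1000:.0f} mA")      # prints "681 mA"
```

The close agreement suggests the Webench design was sized for the Peltier's maximum draw.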

7.2.7.2 Main power System

The power distribution for the drone supplies four voltage levels from the battery's eleven volts of electric potential. Figure 7-12 presents the main power block diagram feeding four voltage switching regulators: the TPS563200, TPS562200, LM25119, and TPS62140. The first two of these regulators are from the same regulator family and differ only by metal


oxide semiconductor field effect transistor (MOSFET) internal resistances. As in Thevenin/Norton voltage divider circuits, the regulators output the specified required voltage and current to each part of the drone system.

The switching regulators supply eleven components that operate at 4.8 volts, 5 volts, 7 volts, and 8 volts. The current demands are 15 milliamperes, 680 milliamperes, 800 milliamperes, 1.3 amperes, 2 amperes, and 30 amperes. To operate each voltage regulator, its input and output pins have to be configured and supplied; the packages have six, sixteen, and thirty-two pins. These regulation circuits use passive circuit elements for configuration, and the outputs are inductor-capacitor (LC) filters.

Figure 7-12: Main power block diagram

A filter reduces ripple in both the current and voltage waveforms to produce more of a DC output. The filter has a double pole at

f = 1 / (2π √(LC))

This pole has to be placed at a high frequency for the circuit to have stability with certainty. Figure 7-13 shows the frequency response: at low frequency the phase is one hundred eighty degrees, and at the pole the output rolls toward ninety degrees of phase. The pole created is the effect of the low-pass filter.


Figure 7-13: LC frequency response phase and decibels.

This process provides further stabilization, and the ripple of each output waveform is reduced. Values for the output capacitor and output inductor correspond to each other. Additionally, each regulation circuit's inductance and capacitance for its LC output filter depend on the capacitor's effective series resistance and the inductor's ripple, root mean squared, and peak-to-peak currents. The inductor must be rated for currents higher than the calculated ripple, root mean squared, and peak-to-peak currents.
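The double-pole frequency above is easy to evaluate for a concrete branch. This sketch uses the 5 V branch inductor (3.3 µH) with a 22 µF output capacitor, a value inside the 20-68 µF range quoted later for this regulator family; the pairing is our illustrative choice:

```python
import math


def lc_double_pole_hz(L: float, C: float) -> float:
    """Frequency of the LC output filter's double pole: f = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(L * C))


# 3.3 uH with 22 uF places the double pole near 18.7 kHz,
# well below the family's 650 kHz switching frequency.
print(f"{lc_double_pole_hz(3.3e-6, 22e-6) / 1e3:.1f} kHz")
```

Keeping the pole well below the switching frequency is what lets the filter average the switch-node waveform into a nearly DC output.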

The DC power supply is regulated by the TPS562200 and TPS563200 synchronous step-down voltage regulators for the power distribution's five volt and seven volt branches. Features making this integrated circuit family suitable are an input range from 4.5 volts to 17 volts, an output range from 0.76 volts to 7 volts, a switching frequency of 650 kilohertz, and a low shutdown current of less than 10 microamperes. Typical applications are digital television power supplies, high definition Blu-ray disc players, networking home terminals, and digital set top boxes. There are six pins for operation; table 7-4 describes them.

Table 7-4: Pins

Name   Number   Description
GND    1        Ground / source terminal of low-side NFET
SW     2        Switch node from high-side NFET to low-side NFET
VIN    3        Input voltage supply
VFB    4        Feedback voltage for converter
EN     5        Enable device
VBST   6        High-side NFET gate drive bias

Input from the battery at eleven volts enters the circuit through a decoupling capacitor. The voltage regulator has to be activated, so the input voltage line is also connected to the enable pin. This side of the regulator is where


the regulator is grounded. The regulation circuit outputs through an LC filter fed by the voltage from the switch-node pin. The switch node biases the MOSFETs that regulate the voltage based on the feedback input. The five volt output from the switch-node pin to the LC filter depends on the divider resistors. Their resistances come from the datasheet's look-up table of recommended values, which optimize the circuit against noise and amplified feedback errors.

Vout = 0.765 * [1 + (R2/R3)]

For an output voltage of five volts the recommended values are resistor two equal to 54.9 kilohms and resistor three equal to 10 kilohms. For an output voltage of seven volts, resistor two is 82.5 kilohms and resistor three is 10 kilohms. Values for the output inductor and a range for the output capacitor are also found from table 7.11. The inductance for the five volt branch is 3.3 microhenries and for the seven volt branch 4.7 microhenries. The capacitance range is the same for all output voltages, 20 microfarads to 68 microfarads. The efficiency of the TPS56X200 family at the specified currents is better than sixty percent for the Kinect depth sensor and ninety percent for the Parallella microprocessor.
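The divider values above can be checked against the feedback equation Vout = 0.765 × [1 + (R2/R3)]. A minimal sketch:

```python
def tps56x200_vout(r2_ohms: float, r3_ohms: float) -> float:
    """Output voltage from the feedback divider: Vout = 0.765 * (1 + R2/R3)."""
    return 0.765 * (1 + r2_ohms / r3_ohms)


print(round(tps56x200_vout(54.9e3, 10e3), 2))  # 4.96, the five volt branch
print(round(tps56x200_vout(82.5e3, 10e3), 2))  # 7.08, the seven volt branch
```

Both results land within about one percent of the branch targets, confirming the recommended resistor pairs.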


Seven Volt Distribution
Element          Value
Input Capacitor  22 nF
VBST Capacitor   100 nF
Resistor Two     82.5 kohms
Resistor Three   10 kohms
Inductor         4.7 uH
Capacitor        22 nF

Five Volt Distribution
Element          Value
Input Capacitor  22 nF
VBST Capacitor   100 nF
Resistor Two     56.2 kohms
Resistor Three   10 kohms
Inductor         3.3 uH
Capacitor        47 nF


Figure 7-14: Efficiency vs. output current for voltage five volts. Circuit element values.

The DC power supply is regulated by the LM25119 Wide Input Range Dual Synchronous Buck Controller for the power distribution's eight volt branch. This branch emphasizes current regulation up to thirty amperes. Features of the LM25119 are an operating range from 4.5 volts to 42 volts, a 750 kilohertz switching frequency, and emulated peak current mode control that provides cycle-by-cycle current limiting and loop compensation. To operate, the controller requires a programmed current ramp generator, two biased internal regulators, and thirty-two configured pins. Table 7-5 displays the pins of the LM25119.

Table 7-5: Pins

(1) VCC1     Bias supply
(2) LO1      Low-side gate drive output
(3) PGND1    Low-side power ground return
(4) CSG1     External current sense resistor
(5) CS1      Current sense amplifier input
(6) RAMP1    PWM ramp signal
(7) SS1      Set ramp rate
(8) VCCDIS   Disable VCC
(9) FB1      Feedback input
(10) COMP1   Output of internal error amplifier
(11) EN2     Enable channel two
(12) AGND    Analog ground
(13) RT      Oscillator input
(14) RES     Restart timer
(15) COMP2   Output of internal error amplifier
(16) FB2     Feedback input
(17) DEMB    Diode emulation
(18) SS2     Set ramp rate
(19) RAMP2   PWM ramp signal
(20) CS2     Current sense amplifier input
(21) CSG2    External current sense resistor
(22) PGND2   Low-side power ground return
(23) LO2     Low-side gate drive output
(24) VCC2    Bias supply
(25) SW2     Switch node of buck regulator
(26) HO2     High-side gate drive output
(27) HB2     High-side MOSFET supply
(28) UVLO    Under-voltage lockout
(29) VIN     Supply for VCC bias regulators
(30) HB1     High-side MOSFET supply
(31) HO1     High-side gate drive output
(32) SW1     Switch node of buck regulator

Several components make the LM25119 work. A timing resistor calibrates the switching frequency: a 34 kilohm resistor sets it to 152 kilohertz. Output inductors from each switch node form the LC filters; the inductance is calculated from the switching frequency, maximum ripple current, maximum input voltage, and output voltage. The current sense resistor is a function of the maximum output current. The ramp capacitor provides thermal stability and must be less than two nanofarads so that it discharges between cycles; the ramp resistor is a function of the current sense resistor and ramp capacitor. The output capacitor depends on the mode of the controller, diode emulation or full synchronous, and the mode depends on the amount of tolerable ripple current; the capacitor's effective series resistance also affects its value. To limit the input ripple voltage, parallel input capacitors are selected. The two channels operate one hundred eighty degrees out of phase, and the capacitors must be rated to sustain root mean squared currents of at least half the output current. Other components affecting the output are the VCC capacitor, the bootstrap capacitor, the restart capacitor, and the soft-start capacitor, which determines the time for the output voltage to reach its final regulated value. Table 7-6 lists the component values.
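The inductor calculation described above follows the standard buck-converter ripple relation L = Vout × (Vin_max − Vout) / (Vin_max × f_sw × ΔI). As a sketch, the table's 3.3 µH value can be roughly reproduced; the 30% per-channel ripple current is our assumption, not a value from the text:

```python
# Hedged sketch of buck inductor sizing for the 8 V, 30 A branch.
v_in_max = 11.0        # battery voltage
v_out = 8.0            # branch output voltage
f_sw = 152e3           # switching frequency set by the 34 kohm timing resistor
delta_i = 0.30 * 15.0  # assumed ripple: 30% of one 15 A channel

L = v_out * (v_in_max - v_out) / (v_in_max * f_sw * delta_i)
print(f"{L * 1e6:.1f} uH")  # ~3.2 uH, near the 3.3 uH table value
```

A larger allowed ripple current would shrink the required inductance, at the cost of heavier filtering downstream.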

Table 7-6: Capacitor discharge at each one hundred eighty degrees of phase for ideal filtering of voltage ripple. Circuit element values.

Eight Volt, Thirty Ampere Distribution
Element                                   Value
Rt --- Timing Resistor                    34 kohm
L1 & L2 --- Output Inductors              3.3 uH & 3.3 uH
Rsense1 & 2 --- Current Sense Resistors   3 mohm & 3 mohm
Cramp1 & 2 --- Ramp Capacitors            820 pF & 820 pF
Rramp1 & 2 --- Ramp Resistors             88.7 kohm & 88.7 kohm
Cout --- Output Capacitor                 1 mF
Cin1 & 2 --- Parallel Input Capacitors    82 uF & 100 uF
Cvcc1 & 2 --- VCC Capacitors              470 nF & 470 nF
Cboot1 & 2 --- Bootstrap Capacitors       100 nF & 100 nF
Css1 & 2 --- Soft-Start Capacitors        15 nF & 15 nF
Cres --- Restart Capacitor                470 nF

The DC power supply is regulated by the TPS62140 Step-Down Converter for the power distribution's 4.8 volt branch. Features are an input voltage range of 3 volts to 17 volts, an output voltage of 0.9 volts to 6 volts, an output current up to 2 amperes, and short-circuit protection. Applications for the TPS62140 are standard 12 volt rail supplies, supplies from Li-Ion batteries, solid-state disk drives, and embedded systems. Typical operation is at 2.5 megahertz, control begins with the soft-start and enable pins, and sixteen pins configure the regulator. Table 7-7 displays the pin configuration of the TPS62140.

Table 7-7: Pins

(1,2,3) SW     Switch node of converter
(4) PG         Power-good output for active or non-active regulation
(5) FB         Feedback input
(6) AGND       Analog ground
(7) FSW        Switching frequency select
(8) DEF        Output voltage scaling
(9) SS/TR      Soft-start / tracking
(10) AVIN      Control supply
(11,12) PVIN   Power supply
(13) EN        Enable
(14) SOV       Output voltage sense

The output voltage depends on the divider resistors and the reference voltage parameter,

Vout = [(R1/R2) + 1] * Vref

The LC filter is designed from the datasheet's recommended combinations, reflecting the maximum output current of the inductor and regulating circuit.


Four and Four-Fifths (4.8) Volt Distribution
Element                        Value
Cin --- Input Capacitor        10 uF
Css --- Soft-Start Capacitor   560 pF
Rfb1 --- Resistor One          180 kohms
Rfb2 --- Resistor Two          909 kohms
L1 --- Output Inductor         2.2 uH
Cout --- Output Capacitor      22 uF

Figure 7-15: Recommended values for LC output filter, circuit element values, and efficiency vs. input voltage comparison for an output voltage of five volts.

7.3 Base/Charge Station

7.3.1 Induction Interface

The inductor, a circuit component built on magnetic phenomena, is made of conducting wire looped around a magnetic or nonmagnetic core. Technological applications include bandwidth enhancement, delay reduction, impedance matching, frequency selection, distributed amplifiers, and voltage-controlled oscillators. A brief description of circuits helps in understanding an inductor's ability to serve as an interface. Circuit current is, of course, essential for providing electrical systems with energy. Less obvious is the magnetic field affiliated with the current enclosed by a circuital loop. The inductor's unique characteristic of opposing changes in current is derived from this magnetic field, which can be calculated from the Biot-Savart Law or Ampere's Circuital Law.

The main purpose of induction as part of the charging circuit is the wireless capability of mutual inductance. Advantages of wireless charging with coupled inductors are free placement of the drone at the charging station, visual interest, design quality, and a non-protruding interface. Additionally, for an enclosed drone design the interface may be a necessity rather than merely advantageous. When electric force is applied to a circuit, current fills the wires. When that current is guided to an inductor, the conducting loops surrounding the core fill with electrons, creating a magnetic field of opposite direction to the circuit's magnetic field. The reaction is a volume of current prodded against


the current flowing through the conducting wires. The strength of the reaction is measured in volts and known as back electromotive force. Collectively this description defines the parameter inductance. Inductance is also described by the formula for an inductor: voltage equals inductance multiplied by the rate of change of current, v = L (di/dt). An observation of the RL response displays the back electromotive force: the voltage of the inductor begins at its maximum value because the inductor does not initially fill with the circuit's current. Hence, when the voltage is extinguished, the current from the circuit has filled the inductor and reached a steady state, producing no further change of current.

Inductance is a parameter that relates voltage to time-varying current; therefore, when applying mutual inductance, mesh currents are normally the analysis technique, from the intuitive formulation of Kirchhoff's Voltage Law. The perception for analyzing linked inductors is that the magnetic field is made of lines of attraction and opposition that can combine or disperse to create an electric potential. For an inductor in a circuital loop connected to a voltage source, the magnetic field lines associated with the inductor can affect the magnetic field lines of an inductor in another circuital loop. This is how mutual inductors interact, and it is the basis for the induction interface that charges the drone's battery. Michael Faraday's law of induced voltage states that the change of magnetic linkage over some time interval is proportional to the induced voltage. Deriving mutual inductance from Faraday's Law requires the number of conducting coils and the value of the magnetic flux.

For the inductor directly connected to the voltage source, the number of coils multiplied by the magnetic flux encircling both inductors makes a voltage equivalent to inductance multiplied by the change of current. Furthermore, the induced voltage of the coupled interfacing inductor is the number of coils multiplied by the magnetic flux encircling the coupled inductor, which is equivalent to the mutual inductance multiplied by the change of current. More specifically, the sourced inductor's induced voltage equals the number of coils of the inductor squared, multiplied by both inductors' core magnetic permeability and the current rate of change. And the coupled inductor's induced voltage equals the number of coils of both inductors multiplied together, multiplied by the current rate of


Figure 7-16: RL series circuit and voltage measurement across inductor for square wave source.


change and the magnetic permeability of the coupled inductor. When the coupled inductor begins to react to the sourced inductor, a mutual inductance is impressed back on the source inductor. For linear circuits with inductor cores made from non-magnetic material, the mutual inductance of each inductor is equivalent.

When voltage is induced across coupled inductors by magnetic fields that are a function of changing current, energy is stored. The energy storage of the magnetic field under mutual inductance is the transfer of voltage by varying magnetic fields. The stored amount is the inductance multiplied by the square of the inductor current, divided by two, W = (1/2) L i^2. The total magnetic energy for the coupled inductors is the energy supplied to each inductor added together, since they share the transfer, plus the energy stored by their mutual coupling, which can be a gaining or draining amount; if the latter, the mutual storage subtracts from the total energy.
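The energy accounting above can be sketched directly. The coil and current values here are hypothetical, chosen only to illustrate the aiding and opposing cases:

```python
# Total magnetic energy of two coupled inductors:
# W = 0.5*L1*i1^2 + 0.5*L2*i2^2 +/- M*i1*i2 (sign depends on coupling).
def coupled_energy(L1, L2, M, i1, i2, aiding=True):
    """Stored energy in joules for aiding or opposing mutual coupling."""
    mutual = M * i1 * i2
    return 0.5 * L1 * i1**2 + 0.5 * L2 * i2**2 + (mutual if aiding else -mutual)


# Equal 1 mH coils, 0.5 mH mutual inductance, 1 A in each winding:
print(coupled_energy(1e-3, 1e-3, 0.5e-3, 1.0, 1.0))         # aiding: 1.5 mJ
print(coupled_energy(1e-3, 1e-3, 0.5e-3, 1.0, 1.0, False))  # opposing: 0.5 mJ
```

The opposing case shows the "draining" contribution described in the text: the mutual term subtracts from the total stored energy.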

7.3.2 Charge Control and Conversion

The charging station's or dock's source is an American outlet. American outlets supply one hundred twenty volts root mean squared, alternating from positive to negative magnitude; the sinusoidal waveform completes sixty periods every second (60 Hertz). Converting the outlet's waveform to a voltage that recharges the drone's battery is the objective. Two circuits are being considered with this intention. Each circuit follows the same charge control and conversion path; however, the voltage regulation implementation differs. The first circuit regulates voltage with a clipping circuit, the second with a linear operational amplifier voltage regulator. Charge control and conversion begins with a transformer.

The transformer is a type of coupled inductors that calibrates a source to a greater or lesser magnitude. The primary windings are directly connected to the one hundred twenty volt alternating current outlet source, and the secondary windings are the source for the remainder of the charging circuit. This part of the circuit is of particular importance since an ideal voltage source can be obtained from the transformer. Calculating the ideal voltage requires knowledge of a specified output, such as the voltage to recharge the drone's battery, and the transformer ratio. The transformer ratio relates the number of primary windings divided by the number of secondary windings to the outlet voltage divided by the voltage applied to the remainder of the circuit.


A ratio of fifteen to two means the outlet voltage equals seven and a half times the applied voltage. As a result, the applied voltage calculates to sixteen volts. Connected to the applied voltage source from the transformer is the charging-station-to-drone induction interface: coupled inductors with equal windings and no other circuit components affecting the current rate of change, so the interface should be lossless.
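The turns-ratio arithmetic above, restated as code:

```python
# Transformer step-down: Vprimary / Vsecondary = Np / Ns.
v_outlet = 120.0       # RMS outlet voltage
turns_ratio = 15 / 2   # 7.5:1 step-down

v_applied = v_outlet / turns_ratio
print(v_applied)  # 16.0 volts applied to the rest of the charging circuit
```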

The output of the coupled inductors leads to a full-wave rectifier. The full-wave bridge rectifier, the particular type chosen, is constructed of PN junction diodes plus a resistor across which the voltage is output. Its function is to invert the negative portions of the sinusoid, unlike a half-wave rectifier, which outputs zero volts for the negative half of each sinusoid. The composition is four diodes arranged as one series diode pair in parallel with another series diode pair, forming a bridge to the resistor. The input leads from the coupled inductors enter at the gaps between the series diode connections. When voltage is applied, one diode from each series connection has its electrostatic barrier lowered enough to produce drift current. For positive-polarity input voltage, diode two and diode three are forward biased and current enters the resistor at the positive terminal; for negative-polarity input, diode one and diode four are forward biased and current enters the resistor at the negative terminal. The result, the voltage output of the full-wave bridge rectifier, is a repeating positive half-sinusoid two diode voltage drops lower in magnitude than the input. A key factor for the


Figure 7-17: Transformer and converting voltage measurement

Figure 7-18: Transformer and coupled inductors. Coupled inductors output voltage measurement.


circuit is the breakdown and forward bias characteristics of the diode; the 1N4148 PN junction diode is specified for a minimum of seventy-five volts reverse bias and a maximum of one volt forward bias.

Figure 7-19: Full-wave bridge rectifier and output voltage measurement.

The actual circuit is constructed with a full-wave bridge rectifier filter. For the rectifier filter, a capacitor is connected to the output of the diodes in parallel with the resistor. The full-wave rectifier filter has a ripple factor one half that of the half-wave rectifier filter, which proved significant enough in simulation to make the full-wave version the implementation choice. Initially the capacitor is uncharged and the time constant equals zero. The capacitor's voltage increases as the voltage from the diodes increases. After the peak voltage occurs, the capacitor discharges while the diode voltage is blocked from the resistor; the result is the capacitor's voltage across the resistor, representing the output voltage. The time constant is large enough to produce a nearly direct-current voltage beginning at the value produced by the diodes. The diodes return to increasing voltage, which recharges the capacitor and supplies the resistor when it exceeds the capacitor voltage.
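The factor-of-two ripple advantage above follows from the standard capacitor-filter estimates Vr(full-wave) ≈ Vp / (2·f·R·C) and Vr(half-wave) ≈ Vp / (f·R·C). This sketch uses the 16 V transformer output from earlier; the load resistance and filter capacitance are illustrative assumptions:

```python
# Compare full-wave and half-wave rectifier-filter ripple estimates.
f = 60.0               # line frequency, Hz
v_peak = 16.0          # peak from the transformer/induction stage
R, C = 100.0, 1000e-6  # hypothetical load resistance and filter capacitance

ripple_full = v_peak / (2 * f * R * C)
ripple_half = v_peak / (f * R * C)
print(ripple_full, ripple_half)  # full-wave ripple is exactly half the half-wave ripple
```

Whatever R and C are chosen, the ratio stays two to one, which is the simulation result that drove the full-wave choice.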


Figure 7-20: Full-wave bridge rectifier filter. The left simulation is measurements of the capacitor and diode voltages. The right simulation is a measurement of the resistor voltage.

The full-wave bridge rectifier filter concludes the charge control and conversion path common to both circuits. The first circuit clips the output from the rectifier filter to regulate the charging voltage. A formation of a resistor, a PN junction diode, and a Zener diode limits the voltage to the PN junction diode's turn-on voltage summed with the Zener diode's turn-on voltage. The resistance alters the current to a reasonable amount for the PN junction diode, which conducts for positive voltage above its turn-on voltage. Zener diodes activate in reverse bias and are manufactured for, among other applications, regulating circuits, since their turn-on voltage can be fixed at practical values. Conceptually these circuits regulate, or clip, with the Zener diode's turn-on voltage being the main voltage output from the circuit.


Figure 7-21: Induction charging circuit one, final output voltage, voltage of Zener diode, and voltage of PN junction diode.

Added to the final circuit are a resistor and capacitor at the transformer input, plus a resistor to lower the current magnitude for the coupled inductors. The power line source is of relatively high magnitude; the resistor reduces the initial current magnitude by a factor of one hundred, though it does not affect the rate of change, while the capacitor acts as a barrier against the initial burst current and final arcing current from the transformer to prevent a current influx.

The second circuit being considered outputs from the full-wave bridge rectifier filter to a linear operational amplifier voltage regulator. A series connection of two resistors is the independent variable for the linear regulator, which produces an output equal to

Vout = 1.485 × (R1 + R2) / R2

The implementation regulates with the operational amplifier trying to keep the inverting and non-inverting terminals' voltages equal. When the voltage at the collector properly biases the bipolar junction transistor, and the voltage from the non-inverting terminal to the base of the transistor starts conduction, the regulation process can begin. If the voltage at the non-inverting terminal increases, the operational amplifier's output increases, as verified from the output formula: the differential gain multiplied by the non-inverting terminal's voltage minus the inverting terminal's voltage. The operational amplifier's output is connected to the inverting terminal, and this increase returns the circuit to a regulated state. When the inverting terminal's voltage increases, the output of the operational amplifier decreases. Therefore, the circuit provides near-instantaneous regulation and operates as a linear regulator. Electrolytic capacitors preserve low AC impedance paths to ground for filtering alternating current, and the output resistor limits current to a reasonable amount. Insulation

48

Page 54: 3D Mapping Drone - UCF Department of EECS  · Web view3D Mapping Drone. Edwin Lounsbery, Brian Vermillion, Matthew McHenry. ... The Atmega8 is an eight bit microprocessor with eight

for this is provided by the many circuit components, therefore, the resistor and capacitor to input the transformer is the only addition.

Figure 7-22: Induction charging circuit two and voltage output measurement

Of the two circuits, the first choice is the clipping voltage-regulating circuit. This priority was decided based on the biasing and the amount of circuitry that would have to be mounted to the drone for the linear operational-amplifier voltage regulator, the uncertainty of the coupled-inductor interface, and the simulation results.

7.3.3 Map Display Software


Map display software will consist of two different parts: an advanced view of the drone's activity at a specific time, and an overall view of what the drone accomplished. As the drone moves through its environment it records everything it sees, along with all of its position data, into timestamped files on disk. When the run is complete, those files can be reviewed by the mapping software. The final overview will take all of the recorded data and display a three-dimensional view of the entire mapping process. Functions will also be provided to output this data in different formats; for example, if you wanted to 3D print your map, a function will be provided for that. The advanced view will provide a time-based view of the data that can be played back almost like a video, with options to play, fast forward, or jump to a location during the mapping process. This view will also provide a cumulative or current mode: at any point in time the cumulative mode displays everything mapped so far, while the current mode displays only what the drone sees at that moment. The advanced view will help greatly with debugging because it lets us see what the drone did at any point in time and catch mistakes. It was decided that these programs would be written in MATLAB, because MATLAB was designed for exactly this kind of task: taking data points and displaying them. This will be a powerful tool for us. We would also like to design software that can take a live stream from the drone and display what it sees, but this is additional functionality that isn't necessary for the drone to work.

7.3.4 Radio Transmission

For testing and manual override we will be using a Walkera DEVO 10 radio system. This transmitter and receiver combination gives us ten channels, so we can add many features, and the manufacturer states that the product resists interference when operated in close proximity to other radio-controlled devices. We plan to purchase this combination from HobbyPartz.com because they state that their lead time is minimal compared to ordering from China; the Walkera DEVO 10 radio system costs $130.00. We are also going to purchase a second transmitter and receiver for our first-person-view data: the Eachine FPV 5.8G 600mW 32CH wireless transmitter/receiver. We will purchase this from Banggood.com because its price was lower than on many competing websites; the cost for this transmitter and receiver is $45.00. This equipment will give us a first-person view (FPV) from the drone. The voltage and current of both onboard units fall within our power constraints. A transmitter and receiver will be mounted on the drone, and the FPV receiver will be mounted inside the charging station and connected to an LCD screen for viewing.

7.4 Software Design

At the heart of the drone there will be software that controls all of its movements and safely navigates it through the world. Overall, this software is in charge of collecting sensor data, analyzing that data, organizing it in a meaningful way, and then using it to make decisions. All of this has to run on an embedded Linux platform, so there are limitations in RAM and processing power. Each of the tasks the drone has to perform will have its own section of code. The low-level data management is responsible for collecting sensor data and converting it into a meaningful format. The high-level data management will analyze the data, building a graph out of the raw data and filtering out any anomalous points. Algorithms for tasks like finding unexplored areas, plotting flight paths, and preemptive collision detection will use the graph data and will be run by the state machine. The state machine is responsible for making decisions based on the data it gets and for choosing when to use the different algorithms available. The drone can accomplish its purpose by completing only the first and second levels of the software design: by those stages it is already building a 3D map of its environment, and with the assistance of a simple state machine for navigation, it will work well. Completing the third and fourth levels will make the drone stand out from other drones, adding a level of intelligence and independence that is rarely seen. This is the goal of the drone control software: to autonomously navigate the world without colliding with objects, to collect three-dimensional data for later use, and to navigate intelligently so the drone can find its way back, avoid already-mapped objects, and seek out unexplored areas to complete a room map.

7.4.1 Tools and libraries

Before discussing the software that we are going to develop, we will cover the environment, development tools, and libraries that will be in use. One of the most important decisions we had to make was which operating system to develop for, since it affects every other choice in our software design. Windows would have been a better choice for the Kinect, but Linux wins out because there are almost no inexpensive Windows embedded devices on the market that we can use. Because of this, we have to use the open-source Kinect library, OpenKinect; the libraries provided by Microsoft only work on Windows platforms. The language we will be developing in is C++, chosen for its performance benefits over other languages. For our main development IDE we will be using Eclipse, which supports a wide variety of languages besides Java.

7.4.2 Low level data management

The low-level data management software is responsible for collecting the data from the peripheral devices (the Kinect and the IMU), translating that data into something more useful to the drone, and storing it for later use. It is the lowest level of software and works directly with the drone's hardware. In addition to data collection, it is responsible for output data such as drone control commands and radio transmission. Each of these sections will be contained in its own class; the class interactions are demonstrated in the figure below.

7.4.2.1 Data Management

Data management is the class responsible for handling the raw data from the devices connected to the system. As the design stands now, there will be two devices the software needs to interact with: the Kinect and the flight controller. Each device will have a wrapper class around its communication library. These classes provide communication with the devices in the form of callbacks and are responsible for properly initializing, maintaining, and closing the devices. Figure 7-23 below shows the callbacks available, the classes they interact with, and the data they provide.

Figure 7-23 Wrapper classes and their callbacks

When a callback fires, the data it provides is packed into a struct along with the current system time in milliseconds. These structs are then put into a buffer which will be accessed later by the DataProcessing class. There are three kinds of structs: PData, PRXData, and POData.

The PData struct contains point data returned from the Kinect sensor: a pointer to an array of 16-bit unsigned integers, where each array holds all the points for one depth-image frame.

The POData struct contains the drone's current world position as 32-bit integers. The world position is calculated and maintained by the flight controller using the IMU data. In addition to the position, it contains orientation data in the form of two 16-bit integers, giving the direction the drone is facing, in degrees, about two axes. All positions are relative to where the drone first started, (0, 0, 0).

The PRXData struct contains information about when a proximity sensor on the drone triggers. The first byte identifies the sensor on the drone, and the distance to the detected object is stored in a 32-bit integer. These proximity sensors are not part of the Kinect; they are additional ultrasonic sensors that allow the drone to fly safely.

This class doesn't process the data at all; it holds the data buffers and manages the devices. The next class, DataProcessing, is responsible for deriving real-world points from the data. The buffers will expose two access methods: one returns a single PData matched with its POData, and the second returns data in chunks, so a large set of PData with a smaller set of POData. These are used at the discretion of DataProcessing and are illustrated in figures 7-24 and 7-25. The data they return is called a PointPack, a struct containing two arrays, one of PData and one of POData. The function F(x,y) in the figures represents the DataProcessing class.

In this class there is significant danger of memory leaks. It contains statically allocated buffers with pointers to data that was allocated by device libraries written in C. While C++ does have some automatic memory management, it cannot manage memory allocated by these C libraries, so special care must be taken to deallocate every buffer they return. The UML diagram in figure 7-26 below is a rough estimate of what the DataManagement class will look like; as we learn more about how the Kinect system works, it may be tweaked slightly.


Figures 7-24 and 7-25: PointPack access methods (single and chunked)


Figure 7-26 DataManager class diagram

7.4.2.2 Data Processing

By themselves, the POData and the PData are not very useful for creating a 3D map of a room. The PData describes detected points from the drone's point of view and provides no context about where those points are in the world. So if the drone moves, it will get a similar set of points with respect to itself, but won't know where those points are with respect to the points it recorded earlier. That is where the POData comes in: it is the drone's current position in the real world. This data comes from the flight controller, which has a 9-axis IMU for tracking acceleration, orientation, and direction. The idea is to have the flight controller constantly update its world position, relative to the fixed point where the drone started, using the IMU data. That data is then sent to this software through GPIO pins. The DataManagement class buffers this data, and the buffered data is accessed by this class, the DataProcessing class. By combining the drone's current position in the world with the points relative to the drone, the positions of all points in the world can be calculated. That is the sole purpose of the DataProcessing class: get the data from DataManagement and use it to determine real-world point locations.

In figures 7-24 and 7-25 the processing is represented as a simple function, but in reality it will involve multiple functions and multiple steps. That is why a whole class is used: it provides better organization and makes it easier to modify the different algorithms required to process this data. The first stage of the calculations is to convert the Kinect data from the PData struct into something meaningful. According to the OpenKinect wiki, the data returned from the Kinect is a pointer to an array of 11-bit numbers, which can store values between 0 and 2047 (OpenKinect, 2011). This value is the disparity of that pixel. In computer vision, disparity is the distance between corresponding points in two images, and in this case those points are pixels. The Kinect was calibrated with an image full of known distances; it compares the image it gets back from the camera to this known image and obtains the disparity for each pixel. Given the disparity, the Z distance of that pixel from the camera can be found with the formula in figure 7-27.

Figure 7-27: Equations provided by https://www.ptgrey.com/KB/10102

Both the focal length and the baseline are known values: the focal length was calculated to be 580 pixels (Open Source Robotics Foundation, 2012), and the baseline is the distance between the IR laser and the IR camera, 7.5 cm. Even though the disparity data only uses 11 bits, it is stored in a 16-bit number because memory is only byte-addressable, and a 16-bit variable is easier to work with than an odd-sized field. Before the data can be used, the unused upper bits must be masked (ANDed) out to prevent stray bits from affecting the calculations. In addition to the official formula for calculating the Z distance in figure 7-27, there are two other formulas used by the Kinect community. One is a "first order approximation," which is not useful for our purposes because we want as much precision as possible. The second is listed below and is claimed to be accurate to about a third of a centimeter. Without further testing, there is no way to know which is most accurate.

Z = 0.1236 * tan(d / 2842.5 + 1.1863)

The next stage of data processing is to get the x and y coordinates of the pixel. OpenKinect provides simple formulas for getting these positions:

Figure 7-28 X, Y, and Z formulas

x = (i - w/2) * (z + minDistance) * scaleFactor
y = (j - h/2) * (z + minDistance) * scaleFactor
z = z

The formulas in figure 7-28 are ordinary vector calculations; they simply use pixel counts as a measure of distance. The first factor describes the pixel location with respect to the center of the image: i and j are the actual pixel coordinates, while w/2 and h/2 give the center of the image. The second factor adjusts the Z location for the minimum distance the Kinect can view. The last factor, scaleFactor, is necessary because pixels are being used to calculate distance; the scaling factor describes a pixel-to-meter ratio.

For reference, the depth formula from figure 7-27 is Z = fB/d, where Z is the distance along the camera Z axis, f is the focal length of the lens (in pixels), B is the baseline (in metres), and d is the disparity (in pixels).

Figure 7-29 Depth frame relative to drone

After calculating the coordinates of the depth-frame points, the real-world positions must be derived from the position of the drone. It is not as simple as adding and subtracting coordinates: the depth frame is on an axis relative to the drone, while the drone is on an axis relative to the world, so trigonometry is needed to convert each depth frame into real-world points. In addition to the location of the drone, we need the two angles that describe the direction the drone is facing. Figure 7-29 above illustrates the problem in two dimensions: the drone is at position X1, Y1, Z1 while the depth points lie on an axis offset 135 degrees from the drone. To get the axis-1 positions of the depth frames, each axis-2 point must be broken into axis-1 components and summed. The formulas in figure 7-30 show how this would be calculated in just two dimensions:

Figure 7-30: Two-dimensional conversion formulas. A similar set of formulas is used to calculate the points in three dimensions.

Xn = X1 + X2*cos(θ) - Y2*sin(θ)
Yn = Y1 + X2*sin(θ) + Y2*cos(θ)

As you can see, there are a lot of steps in calculating just one point of data. The processor will have to perform these calculations on each pixel of a frame, of which there are 297,600, at 30 frames per second. Normally, calculations like these would be performed by a GPU, but most embedded devices lack powerful GPUs, and the data is not in a format a GPU understands. Our solution is to use an embedded device with many cores. By distributing the processing across the cores, the drone should be able to handle all the data from the Kinect; if it is still too much to process, we can reduce the frame rate or the resolution to compensate. After being processed, the data will be stored and managed by the Data Storage class. Figure 7-31 shows the data processing class.

Figure 7-31: Data processing class

7.4.2.3 Data Storage

Data storage is arguably one of the most important classes in the drone software. It is responsible for the efficient management of the data returned and processed from the Kinect. Its main tasks are storing the data in a sparse matrix, checking for redundant data and deallocating it, saving the data to a database on disk, and retransmitting data to the base station.

The most important part of the DataStorage class is the design of the sparse matrix that holds all of the points. A normal three-dimensional array would not work for our application because of the large area being mapped. Although we record points in space, we are really capturing the surface area of the objects around us, which means most of the points in space contain nothing, and all of that space in a three-dimensional array would be wasted. Conserving memory is especially important because embedded devices have far less of it than desktop computers. We store points at centimeter accuracy, so allocating memory for a 20 m × 20 m × 20 m room would require 8 billion points. Each point contains, at minimum, X, Y, and Z as 32-bit integers of 4 bytes each; in total this works out to 96 gigabytes of data, most of it empty. To solve this problem, we will use a sparse matrix, which saves space by storing only the cells with something in them. Looking at that same 20 m × 20 m × 20 m room with a sparse matrix, the amount of data is proportional to the surface area: the surface area of the room works out to 24 million square centimeters, which can be stored in 288 megabytes, a dramatically smaller amount. There are several ways to implement a sparse matrix, but the best one for our purposes is a hash table, which gives O(1) access to the points with only the overhead of a simple hash.

Another function of this class is checking for redundant data, since not storing duplicates is a good way to conserve memory. When the Kinect is running it delivers about 30 frames per second, and because the drone does not move very fast, the difference between consecutive frames is almost imperceptible. This means many of the data points stored last frame will arrive again. Here the hash table is again advantageous: when a point is hashed and inserted, the table checks whether an entry already exists and, if so, discards the new duplicate. This is natural behavior for hash tables and works well for our purposes.

At a regular interval, data from the hash table must be stored to disk, because RAM is volatile: if the drone shuts down when finished, all the data it collected would disappear. This presents a problem, though. Writing to disk is significantly slower than writing to RAM, and if the disk write runs in the main thread it blocks all other operations, preventing the drone from functioning properly. There are two ways to solve this. The first, and simplest, is to save all of the data to disk once the drone completes its task. The second is to have a thread that periodically checks for new data in the hash table and saves it to disk. This option is ideal because if we do run out of RAM, we can export some of the data to disk and free memory. Starting off, though, we will implement the first choice to keep things simple, and implement the second only if it proves necessary.

The last piece of this class is the retransmission of data to the base station. It is part of our design to have a base station where we can stream and view the data. This works similarly to the second option for saving data to disk: a thread periodically checks for new data and hands it off to the Comms class, which handles the actual communication with the base station. Figure 7-32 shows a flowchart of the data storage class. This part of the design is not necessary for the drone to function; it is an optional feature we would like to have.


Figure 7-32: Data Storage class diagram

7.4.3 High Level Data Management

7.4.3.1 The Idea

Once the data returned from the Kinect has been processed, we are left with a large number of world points that are not very useful by themselves; many are not useful even after the redundant points have been filtered. It is the task of the high-level data management class to take the data from the newData hash table and insert it into dataStorage after further filtering, building connections between the points as it goes. Independent data points are of limited use because they have no relation to the points around them; for algorithms to make use of the data, they must be able to look at a point and tell whether there are points around it. This means we have to build a graph out of the data received. Each vertex of the graph will have edges pointing to the points to its left, right, top, and bottom. Considering the nature of the data the Kinect returns, which is points on a surface, what we are actually collecting is surface area expressed as three-dimensional points. If we express this data as a two-dimensional graph of points, it becomes much easier to use. It is the job of this class to build this graph from the received points, but several problems must be overcome to do this successfully.

7.4.3.2 The Problem

One such problem is the presence of random data points in the data returned by the Kinect. There is no such thing as a perfect sensor, so depending on the reliability of the instrument, there are bound to be some anomalous data points. If nothing is done to filter them, they may affect the navigation of the drone and produce oddly shaped graphs. For example, if the drone is looking at a wall and detects a random point about a meter out from it, a graph built from this data might show a spike or some other formation that obviously doesn't exist in the real world, as illustrated below in figure 7-33. If such a spike exists in the graph, it becomes another obstacle the drone has to navigate around, even though it doesn't exist in the real world.

Figure 7-33: Example of a spike in a 2D graph that may interfere with drone operations.

Another problem is building the graph itself. It is easy to say that each point will be connected to the points above, below, left, and right of it, but how does it find those points? Even looking at a flat wall is not straightforward, because concepts such as above and below are completely relative to the viewer. What if, for some odd reason, the drone is upside down, or the camera was mounted upside down? This changes the drone's view and may affect the performance of any algorithm based on these relative directions. And so far we have only considered flat walls; the world also contains corners, curves, and various other shapes. The point to the "right" on a curve is not actually to the right: it is more like right and forward, depending on the way the curve faces. In addition, walls may appear layered. An extremely accurate device would pick up a wall as a single layer of points, but that may not hold as the drone moves around and views the world from different angles; a wall could then appear double or triple layered. In this worst case, a point being added to the graph may be surrounded by points, and the algorithm somehow has to determine the correct right, left, up, and down neighbors, which are completely arbitrary.


7.4.3.3 The Solution

The solution to these problems is to assign a weight value to each node, determined by how many times the point has been detected by the Kinect. In theory the Kinect delivers 30 frames per second, so it should see a large number of points identical to those in the previous frame. Every time a point is detected but already exists, a counter is incremented and the new point is discarded. Only points whose weight exceeds a certain threshold are added to the graph, which filters out the random points and extra layers that might corrupt the graph. The weights can also be used to recalibrate the drone as it flies: when it re-enters an area it has mapped before and starts detecting points near already-known points, it can use them to recalibrate its world position. This weight system is an extremely efficient solution, costing only one integer per point. At a regular interval, the system will garbage-collect points that never meet the minimum weight threshold, which aids memory conservation on our limited system. There are two ways the algorithm can be run: periodically, or through a system that tracks new nodes. I believe the tracking system would be more beneficial, because a periodic sweep of the graph becomes costly as the graph grows; even done intelligently, at worst it might have to check every unassigned data point to see if it has reached the threshold. The tracking method is more complicated to implement, but its benefits may outweigh that cost: every time a point's weight is incremented, check whether it meets the minimum threshold, and if it does, trigger a callback that adds it to the graph.

While this addresses erroneous and offset data, the algorithm still needs a few more rules for conditions like corners. The best approach is not to look specifically for corners and other special cases, but to create a generic method that handles all conditions equally: look for any points around a node and determine its connections based on those points. Keep in mind that the data is just points in space; the computer has no notion of up, down, left, and right. The algorithm therefore examines all eight points around a node, and any that are already part of the graph are added to its list of edges while it adds itself to theirs. The node never looks outside this area for connections, so it could end up with no connections at all, or with all of them; if the graph builds correctly, no node should have more than four connections. This algorithm works because it is not looking for anything specific: it simply finds the surrounding points it knows are real and connects with them. Errors may still appear in the graph as different objects are mapped, but hopefully those cases will help improve the algorithm rather than break it.


7.4.4 Algorithms

This section presents the algorithms we would like to implement to greatly enhance the drone's functionality. All of these algorithms are difficult to design, and since the actual design is likely to change as we build it, we discuss the ideas at a more abstract level. It is also important to note that these algorithms are not necessary for the drone to function; they are extra features that will complement it nicely. It is quite possible the drone will not have the computational power to spare for them, in which case we will drop them or try to improve them.

7.4.4.1 Finding unexplored areas and map completion

The goal of this mapping project is to provide a complete map of an area, not just enough pieces of one to get a good look at it. We want an accurate representation of the area, which means minimizing gaps and other regions of missing data in the graph. The drone cannot simply wander around aimlessly, or it will never find all the little nooks and crannies of a room; it needs to be able to detect areas that have not been mapped yet. That is where the algorithm for finding unexplored areas comes in.

To illustrate the idea behind the algorithm, we can compare the graph to a painting in progress. For the sake of this illustration, assume the painter works from one edge of the canvas to the other (in reality, a painter applies layers until the painting is done). As the painter covers the canvas with various colors, we can tell where he has not painted yet by looking at the white areas of the canvas. More specifically, we can tell an area has not been painted because of a clear edge where the paint stops and the canvas starts. We can do something similar with the graph by looking at its unconnected edges. If a point in the graph is not completely filled, meaning it does not have at least four edges connecting it to other nodes, then there is a point next to it that has not been mapped and the drone needs to take a look.

While this is a good way of finding points that have not been mapped, we do not want the drone hunting down every single unmapped point; it should be looking for larger areas to map so that it maps more efficiently. The first step is to keep track of all of the edge nodes. It would be wasteful to search the entire graph every time we look for new points, so as edge nodes are added to the graph, pointers to them will be kept until they are no longer edge nodes. This provides quick and easy access to the edge nodes at a small cost in memory. The second part of the strategy is to divide the world into larger quadrants, say 2x2x2 cubic areas, which will be the targets of the drone's search. To determine which areas need to be mapped, a queue system will keep track of the areas with the highest priority. Priority will be given to the


areas that have the highest number of uncompleted points and the smallest number of completed points. This will ensure that the drone is always exploring the areas that it has not seen before.
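As a sketch of this priority rule, one simple formulation scores each quadrant by its uncompleted points minus its completed points, so heavily unmapped quadrants sort first. The `Quadrant` type and the weighting are assumptions for illustration, not the final design.

```c
#include <stdlib.h>

/* Hypothetical per-quadrant bookkeeping for the 2x2x2 areas. */
typedef struct {
    int uncompleted;    /* edge nodes still missing connections */
    int completed;      /* fully connected nodes in the quadrant */
} Quadrant;

/* Higher score = explore sooner. The plain difference is an
 * assumed weighting. */
int quadrant_priority(const Quadrant *q)
{
    return q->uncompleted - q->completed;
}

/* qsort comparator ordering quadrants highest priority first. */
int cmp_quadrants(const void *a, const void *b)
{
    return quadrant_priority((const Quadrant *)b)
         - quadrant_priority((const Quadrant *)a);
}
```

In the real system the queue would be updated incrementally as points are inserted, rather than re-sorted from scratch.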

For now, we will have the drone seek out every unmapped point, starting with the large areas and then going down to the single points that it missed. It is possible that the drone will not be capable of mapping every point. If that is the case, then a certain percentage of the missed areas will have to be completed before the drone accepts that it is finished. There are two ways the drone might be able to tell when it is done. The first is when it runs out of unmapped points, at which point the map is guaranteed to be complete. As stated earlier, though, there may be erroneous data points, and our algorithms may not filter all of them; in that case the drone will never run out of unmapped points and will keep circling an area it cannot finish. Another approach is to look at the graph itself. The graph should be fully connected when complete, meaning that any node you look at should be part of a cycle in every direction. If some points are not connected, this test will fail. This test is more costly in terms of CPU time but reduces the memory cost. It still suffers from the same problem as the first: there may always be lingering points that cannot be mapped. The solution is simply to accept that a certain percentage of the map will remain unmapped.
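The acceptance criterion (accept a certain percentage of the map remaining unmapped) could look like the following sketch; the simple ratio test and the tolerance value are assumptions.

```c
#include <stdbool.h>

/* Declare the map finished once the fraction of still-unmapped
 * points drops below a tolerance (e.g. 0.05 = 5%, an assumed
 * figure to be tuned during testing). */
bool map_is_complete(long mapped, long unmapped, double tolerance)
{
    long total = mapped + unmapped;
    if (total == 0)
        return false;           /* nothing has been seen yet */
    return (double)unmapped / (double)total <= tolerance;
}
```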

7.4.4.2 Drone navigation

The drone navigation software is a complicated set of programs that must work together to safely fly the drone to some destination, whether that is a GPS waypoint or an area of the world the drone is seeking out to map. At the lowest level, the flight controller will be looking for objects to avoid with the ultrasound sensors. At the highest level, the drone state machine will be deciding where to go next and which navigation cues should take priority. It is important that all of these parts work together with the state machine to produce reliable results.

As stated before, one of the lowest layers of hardware, the flight controller, will have navigation software to prevent the drone from running into objects. The flight controller will have six ultrasonic sensors, one pointing out along each axis direction, to detect nearby objects. There are a couple of reasons why we chose to add this extra detection on top of the Kinect. First, the Kinect's field of view is not very large. It will only be able to see things right in front of it, which is acceptable since we plan to only move the drone forward. However, the drone may drift sideways due to instability or wind, and in these cases the ultrasound sensors will provide an early warning about an area the drone cannot see. Another good reason is a weakness of the Kinect sensor that makes it nearly blind in direct sunlight. Ultrasound sensors are not affected by sunlight, so the drone will still be able to navigate without crashing into things if it finds itself in direct sunlight. Because this software is important for the safety of the drone and is already running on the flight controller, it takes the highest precedence over all other navigation commands. When it overrides the state machine's commands


due to safety reasons, it will signal the state machine that it is doing so until it is done, so the state machine can adjust its decisions about what to do.

Besides this low-level software, the drone will have two different algorithms for navigation. The first is real-time detection of objects in the drone's path using the Kinect camera. The second is an algorithm for plotting a path between two points that avoids all known objects. Both algorithms will run on a regular basis, with the second used less frequently than the first. Because we will be receiving data from the Kinect at, hopefully, 30 frames per second, detecting objects in the path of the drone needs to be a fast and simple algorithm. Right now our algorithm is very simple, but because of that it is more likely to work. The check will be done when the points are being inserted into the newData hash table. It will first check the Z distance from the drone to see how close the point is in front of the drone. If it falls within the minimum safe distance, it will then check the X and Y coordinates to see whether the point aligns with the drone, meaning that if the drone continues to move forward it will hit that object. If these tests detect a potential obstacle, they will flag the state machine so it can make decisions based on the new information. It is likely that the first iteration of this software will be fairly naive and will just rotate the drone until the object is out of its path. The decisions made on this information will be covered in more detail in the state machine section.
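A sketch of this per-point test might look like the following. The coordinate convention (drone-relative, Z forward) and the parameter names are assumptions, not the final interface.

```c
#include <stdbool.h>

/* Run as each Kinect point is inserted: is this point inside the
 * corridor the drone will fly through? min_safe_z is the minimum
 * safe distance; half_w/half_h are the drone's half-width and
 * half-height (all assumed parameters to be tuned). */
bool point_in_flight_path(double x, double y, double z,
                          double min_safe_z,
                          double half_w, double half_h)
{
    if (z <= 0.0 || z > min_safe_z)
        return false;           /* behind us, or far enough away */
    /* Within the drone's frontal cross-section? */
    return x >= -half_w && x <= half_w &&
           y >= -half_h && y <= half_h;
}
```

A true result would set the state machine's Kinect flag described below.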

Plotting a path between two points will be used infrequently because the algorithm is likely to be very costly in terms of CPU time. The most important time it may be used is when the drone needs to return to the base station. In this case, it will plot the shortest known path back to where it started while avoiding objects. To do this it will start by plotting an imaginary line between itself and its target destination. Then it will slowly increment over all of the points on this line, checking each one for a detected object. It is important to note that this line does not actually exist in software; the computer just has a start point and a destination point, and increments along the imaginary line between them to detect objects already mapped. Also, the line will be more like a rectangular corridor than a line, because the drone needs to check all points that fall within its width and height so that it does not clip any objects. When it does detect an object, it will shift the line out of that object's path with some room to spare, then recheck the path between the start and this new point to make sure the change did not put a new object in the way. When that point is confirmed to be good, it will begin detecting a path to the finish again, this time starting at the new point. Unfortunately, this algorithm will not be able to make very good use of the graph we have built, because it has to check all of the points in the path, and most of those will be empty points that are not in the graph. This also means that each of the points will need to be hashed to check whether it is in the graph, with the occasional collision. These things drive up the CPU cost of the algorithm, so we will need to use it sparingly. Right now we plan on using it to find the way home, and occasionally to plot a route to an area that needs mapping. Further research and design might reveal a


better way of doing this, but for now this isn’t the highest priority and this algorithm should meet our needs as it is.

7.4.4.3 Filtering dynamic objects

Filtering dynamic objects is really more of an idea than an actual algorithm right now. It is something that will be saved for the very end, to explore if we have finished everything else. The title deserves a brief introduction because it may not be intuitive: the term dynamic objects has nothing to do with the program, but rather refers to objects in the world that move; they are dynamic, as opposed to static objects like a wall. The reason an algorithm like this might be necessary is that dynamic objects can leave ghosts in the 3D image if they move during the mapping process. Imagine the drone is flying around mapping a room, and in the corner of its vision it detects part of a person, which gets recorded into the graph. As it goes on mapping, it constantly searches for areas that it has not mapped, and eventually it will come back to where that person was standing. If that person is not there anymore, the drone will have an incomplete piece of its map and will not be able to finish. This is why it may be necessary to filter out moving objects. Ghosts also pose problems for the navigation software: if there were a lot of people around during mapping, the map will contain a fair number of ghosts, the drone will have to take extra effort to navigate around them, and it may not even be able to plot a path. Starting off, we will use the drone in more remote areas, both to minimize the risk to people and so we do not have to worry about this problem.

At the moment there is only one way we might be able to do this. The strategy is to keep a copy of the last frame and compare it to the new frame being received. In a situation where the drone is completely stationary, you could compare the two frames, see the areas where the pixels have changed, and associate those points with the moving object. You would then omit the points with the detected change from the processing queue. This only works if the drone is stationary; to do it while the drone is moving, you would have to calculate the expected amount that the objects in the frame should move based on the accelerometer data. Using this data, you can look for objects that have moved more or less than expected and filter them out. Another complication is that objects at different distances appear to move by different amounts with respect to the drone. For example, something one foot away might appear to move 50 pixels while something 20 feet away might only appear to move 5 pixels. This throws another wrench into the design of the algorithm. At this point the algorithm has become extremely costly, because it is doing calculations on every point in a frame, just like the calculation of the world points. We will explore this algorithm, and alternatives, if we have time, but for now we will deal with the ghosts as they appear.
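For the stationary-drone case described above, the frame comparison might be sketched like this. The 16-bit depth format, the frame size, and the threshold are assumptions for illustration.

```c
#include <stdint.h>
#include <stdlib.h>

/* Mark pixels whose depth changed by more than `threshold` between
 * the previous and current frame, so they can be skipped when
 * inserting points into the map. Returns how many pixels to omit. */
int flag_moving_pixels(const uint16_t *prev, const uint16_t *cur,
                       uint8_t *moving, int npixels, int threshold)
{
    int count = 0;
    for (int i = 0; i < npixels; i++) {
        int diff = (int)cur[i] - (int)prev[i];
        moving[i] = (abs(diff) > threshold);
        count += moving[i];
    }
    return count;
}
```

The moving-drone case would first warp `prev` by the motion predicted from the accelerometer before differencing, which is where most of the cost comes from.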


7.4.5 State machine

The state machine is the brain of the drone. It is responsible for making all of the decisions for the drone and dealing with any problems that it encounters as it flies and maps. The diagram is illustrated below in figures 7-33 and 7-34, and each step of the graph will be explained in detail.

7.4.5.1 System Checks

System checks will be the first thing that the drone does when it turns on. It is important to make sure that everything is functioning before attempting to fly autonomously. There are a number of things that the drone needs to check before flying. First is making sure that the flight controller is operational, in addition to all of its peripherals. This includes the ultrasound sensors and the GPS module. The flight controller needs to check each of the ultrasound sensors to make sure it is not returning incorrect data. Also, the GPS module has to wait a few seconds to connect to the satellites. Once the flight controller is checked and confirmed working, the startup process will boot the Kinect and test that it is working as well. The drone will not fly if it fails these tests. The last and most important check it will perform is a battery check. The drone should have a battery that is at least half full before taking off. This will prevent it from running out of charge in the middle of mapping. Once all of these checks pass, the drone moves on to the next stage.

7.4.5.2 Initialize Map

Since the algorithms used for mapping rely on previous data to build on, special care will be taken to start the map off correctly. This map initialization will take place before the drone starts flying, so the images received should all come from the same point of view. The drone will take the images from this viewpoint and build a starter map to build off of. The drone's starting point will be considered (0,0,0) of the world while it is mapping. Once an initial map is built, it will move on to the next stage of the state machine.

7.4.5.3 Take off and hover

Before doing anything else, the drone will take off and hover in place for a few seconds before starting its mapping task. This is to ensure that the drone's flight systems are working. It is almost impossible to check whether the motors are working without actually using them, which is why this stage is used to test the motors; otherwise, we would have added this check to the system checks. If nothing significant happens during its hover, the drone will move on to the next stage of the state machine. During the hover it will be looking for things like sudden dips, imbalanced rotors, failure to hold position, and other flight anomalies that may


indicate that the drone is not functioning properly. If a problem is detected, it will attempt to land as soon as possible.

7.4.5.4 Running

Running is the main loop of the drone state machine. In this state the drone is flying around mapping points, mostly at random. When it hits an area that it has mapped before, it will stop randomly searching and look for an area that is unmapped. While it is flying around, it is also checking all of its sensors to make sure it does not run into something, and it will take evasive action if it does detect something. This loop is also responsible for executing the periodic tasks that the drone needs to perform. It will do this until it has completed mapping or receives a command to return home.

7.4.5.5 Check local flags

At this time there are only two local flags that the state machine sets and handles on its own: the plot path flag and the auto home flag. When either flag is triggered, the navigation algorithm for plotting a path through the world is activated. While the route is being calculated, the drone will hold stationary and wait for more commands. The algorithm should return a set of waypoints to travel to, where the paths between these waypoints are all safe places for the drone to fly. When the drone is finished calculating the path, it executes the waypoints in the navigation state machine, which simply flies the drone directly to each waypoint it receives, in order. Whether or not the drone continues flying depends on which flag was set. If the plot path flag was set, the drone will reset its flags once it reaches its location and then resume the running stage of the state machine. If it was the auto home flag, it will not resume the running state; instead it will land on the landing pad it just navigated to, and then shut down.

7.4.5.6 Check ultrasonic flags

In this stage of the state diagram the state machine checks the status of the ultrasonic sensors. When an event occurs with one of the sensors, a flag is set to let the state machine know. The ultrasonic sensors detect objects in the world that are out of the drone's field of view, so they act as a second warning system. Also, because the sensors work differently, the ultrasonic sensors do not share some of the weaknesses the Kinect has. When an object is detected too close to the drone, the flight controller takes control of the drone from the state machine and sets a flag to indicate that this is happening. While this is happening, the state machine will idle and wait for control to return. These are important safety measures for the drone.
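The flight controller's override response could be as simple as a lookup from the triggered sensor to an evasive command: move directly away from whichever sensor reported an object inside the safety distance. The sensor numbering and command names below are assumptions for illustration.

```c
/* Hypothetical labels for the six ultrasonic sensors, one per
 * axis direction. */
typedef enum {
    US_FRONT, US_BACK, US_LEFT, US_RIGHT, US_UP, US_DOWN
} UsSensor;

/* Hypothetical evasive commands the flight controller can issue. */
typedef enum {
    CMD_HOLD, CMD_MOVE_BACK, CMD_MOVE_FORWARD,
    CMD_MOVE_RIGHT, CMD_MOVE_LEFT, CMD_DESCEND, CMD_ASCEND
} EvasiveCmd;

/* Choose the evasive action: back away from the triggered sensor. */
EvasiveCmd evasive_action(UsSensor triggered)
{
    switch (triggered) {
    case US_FRONT: return CMD_MOVE_BACK;
    case US_BACK:  return CMD_MOVE_FORWARD;
    case US_LEFT:  return CMD_MOVE_RIGHT;
    case US_RIGHT: return CMD_MOVE_LEFT;
    case US_UP:    return CMD_DESCEND;
    case US_DOWN:  return CMD_ASCEND;
    default:       return CMD_HOLD;
    }
}
```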


7.4.5.7 Check Kinect flags

Right now the Kinect supports two different flags. The first is similar in concept to the ultrasonic flags, in the sense that an event happens when an object is detected. The Kinect passively checks each of its frames for objects within its path, and when it finds something it sets a flag to let the state machine know. To handle the problem, the state machine will request the position of the object that the Kinect detected while also halting the drone's movements. Based on the position of the object, the drone will rotate or move itself slightly until it is out of danger, and afterwards it will continue. The other function of the flags is to detect errors within the Kinect. If the Kinect were to suddenly stop working or start malfunctioning, it could cause a lot of problems for the drone. A background program will monitor the Kinect for errors and set this flag when appropriate. With the Kinect no longer working, the drone cannot map anymore, so it will attempt to safely return to its base station.

7.4.5.8 Check for GPS waypoint

GPS waypoints are a feature we recently decided to add for extra versatility. The drone can get these waypoints in two different ways: the waypoints can be preprogrammed into the drone before takeoff, or received from the base station. These waypoints are queued in the system and used periodically. Basically, the drone will choose the topmost waypoint, navigate to that point, and spend a certain amount of time mapping that area. Since new waypoints can be added during flight, the periodic switching of waypoints makes this a necessary step in the state machine.

7.4.5.9 Check insertions per second

Insertions per second is currently our best way of detecting whether an area has already been mapped or is simply empty. The idea is that if the area was mapped previously, or the area has nothing in it, then there will be a very low number of insertions into the map every second. By keeping track of this number in the background and checking it periodically from the state machine, we can get a good idea of whether the drone needs to move to another area. If such an event occurs, it triggers the algorithm used to find unmapped areas. Once the algorithm completes its search, the drone will move to that location and continue mapping. If the insertions per second return to normal during that time, the drone will just continue in a normal manner.
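A minimal sketch of this check, assuming a shared counter bumped on every map insertion and a threshold chosen by experiment:

```c
#include <stdbool.h>

/* Counter bumped by the map-insertion code. */
static long insert_count = 0;

void note_insertion(void) { insert_count++; }

/* Sampled by the state machine once per interval. Returns true when
 * the rate is low enough to suggest the area is already mapped (or
 * empty) and the drone should search for an unmapped region. The
 * threshold is an assumed tuning parameter. */
bool area_looks_mapped(double interval_s, long threshold_per_s)
{
    long rate = (long)(insert_count / interval_s);
    insert_count = 0;           /* start the next interval */
    return rate < threshold_per_s;
}
```

In the real system the counter would need to be updated atomically, since the insertion code and the state machine run in different threads.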

7.4.5.10 Check base station commands


The drone will be in constant communication with the base station while it is flying. Periodically it may receive a command from the station, which gets queued by the system and sets the flag. Right now the drone supports only one command: return home. When this command is received, the drone stops everything it is doing and executes its home navigation algorithm, which takes the drone's current position and calculates a safe route to the base station. Once the path is set, the drone flies back to the base station and lands.

7.4.5.11 Check for periodic events

Periodic events are functions that need to be performed on a regular schedule. Some of these tasks are very important but can be costly in terms of CPU time; others are not very important and can run infrequently. The timing of these events can be adjusted to best suit their needs. When one of these tasks runs, it spawns a new thread separate from the state machine. This ensures that the state machine is not hanging on one process. Events that run in the background are represented by dotted arrows in the state machine graph. Right now we have three different periodic events. The first is the process that handles backing up the map to disk; disk writing is very slow, so to prevent hang-ups this runs infrequently. The second is the map completion algorithm. This could be very costly in CPU resources, and for most of the time it is running the map will not be complete, so this task will also run very infrequently. The last is the map cleaning algorithm. It could be a little costly, but running it also reclaims RAM. Saving RAM is very important for the drone to be able to keep mapping, so this one runs more frequently and with a higher priority.
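The thread-spawning pattern for periodic events might look like the following POSIX sketch. The task body is a placeholder, not our actual backup routine.

```c
#include <pthread.h>

/* Placeholder task body standing in for, e.g., the map backup. */
static void *backup_map_task(void *arg)
{
    (void)arg;
    /* ... write the map to disk here ... */
    return NULL;
}

/* Fire-and-forget: run the task in its own detached thread so the
 * state machine never blocks on it. Returns 0 on success, like
 * pthread_create. */
int spawn_periodic_task(void *(*task)(void *))
{
    pthread_t tid;
    int rc = pthread_create(&tid, NULL, task, NULL);
    if (rc == 0)
        pthread_detach(tid);    /* the state machine never joins it */
    return rc;
}
```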

7.4.5.12 Navigation

Navigation is a special area represented in the state graph as a box with more states inside. It handles navigation to a specific point in the world based on plotted or GPS waypoints. Navigation will be a more complicated process than demonstrated in the graph, but for simplicity it is represented by the states in its box. When one of the navigation algorithms is run, it generates a set of points for the drone to fly through safely; at minimum there will be two points. The navigation state machine takes these points and flies to each of them one at a time, in order. To get to each point, the state machine has to actively control the drone's movements and adjust to its environment to ensure that the path is followed. While this is running, the main state machine will be on hold, because this process demands all of the drone's attention to navigate the points safely.


Figure 7-33: Orange indicates an important stage and blue indicates an important algorithm running. Dotted lines indicate processes that are started by the state machine but run in their own threads.


Figure 7-34: Orange indicates an important stage and blue indicates an important algorithm running. Dotted lines indicate processes that are started by the state machine but run in their own threads.


8 Schematics and Data Structures

8.1 Schematics

8.1.1 Electrical

Overall block diagram for the power distribution system


Five Volt Distribution --- TPS563200 Voltage Regulation Schematic


Seven Volt Distribution --- TPS562200 Voltage Regulation Schematic

Four and four-fifths Volt Distribution --- TPS62140 Voltage Regulation Schematic


Eight Volt Distribution --- LM25119 Voltage Regulation Schematic

Three Point Three Volt Distribution --- TPS54218 Voltage Regulation Schematic


One Point Eight Volt Distribution --- LM3671 Voltage Regulation Schematic

8.2 Data Structures 


typedef struct {
    uint16_t point;
    int32_t  time;
} PData;

typedef struct {
    uint8_t sensor_num;
    int32_t distance;
    int32_t time;
} PRXData;

typedef struct {
    int32_t point;
    int32_t time;
} POData;

typedef struct {
    PData  *point;
    int     num_p;
    POData *location;
    int     num_l;
} PointPack;

PointPack *getBufferData(int num_PD, int num_POD);
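One plausible shape for getBufferData is sketched below, restating the structures so the sketch is self-contained. The single-allocation layout (pack header and both arrays in one malloc block, released with one free) is an assumption for illustration, not the actual implementation.

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct { uint16_t point; int32_t time; } PData;
typedef struct { int32_t  point; int32_t time; } POData;

typedef struct {
    PData  *point;      /* array of num_p point samples */
    int     num_p;
    POData *location;   /* array of num_l location samples */
    int     num_l;
} PointPack;

/* Allocate the pack and its two arrays in a single block so one
 * free() releases everything. */
PointPack *getBufferData(int num_PD, int num_POD)
{
    PointPack *pp = malloc(sizeof *pp
                           + (size_t)num_PD  * sizeof(PData)
                           + (size_t)num_POD * sizeof(POData));
    if (!pp)
        return NULL;
    pp->point    = (PData *)(pp + 1);       /* arrays follow header */
    pp->num_p    = num_PD;
    pp->location = (POData *)(pp->point + num_PD);
    pp->num_l    = num_POD;
    return pp;
}
```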


9 Project Prototype, Construction, and Coding

9.1 Construction

9.1.1 Charging station

In the construction phase of the charging station, Matt will take the induction design and assemble the power conversion and induction platform. With the drone completed, we can take our basic charging design and adapt it to complement the drone's induction charging and landing gear. We will build a frame to give the charging station support and establish the critical points where the station and the drone will meet.

Once the framing is established, we will cover it with felt fabric attached with staples. Next, we will apply epoxy to the felt in order to set it into a solid form. We will then add layers of fiberglass and sand the fiberglass to obtain the desired shape and rigidity. Once the charging station is in its desired form, we will add paint and other small cosmetic touches. Finally, we will install all of the electronic components and test to verify that everything works without complications.

9.1.2 Drone 

Once all of the parts are acquired, we will begin construction of the drone, with our design as the starting point. We will begin with part location and the general layout of the design. Next, we will adjust the main body and/or the extent of the motor/rotor placement to give the Kinect sensor unrestricted visibility. The motor assembly will then be put together, and we will set all the main mounting points and drill all the necessary holes so that the basic drone can be assembled. Next, we will place each component on the drone, then drill and mount it. Once all of the boards, ESCs, sensors, and transmitters and receivers are mounted, we will lay out the mounting array for the lower level, which will house the battery and induction charging. This array of mounting holes will give us the opportunity to move the lower level for weight-balancing purposes. Next, we will mount the upper-level section, which will house the upward-facing ultrasonic sensor and the antennas for the transmitter and receiver. After our prefabrication is set up, we will pull the drone apart in order to give it some design elements. For this we will use the laser cutter or the CNC lathe for precision cutting, which requires putting our design into a program like AutoCAD and uploading it to the machine. We will also need to use the 3D printer to construct the ultrasonic sensor cases; this also requires an electronic drawing that can be uploaded to the printer. Finally, we will assemble the drone completely and test its balance. We can then move the battery mount to adjust the balance of the drone.


9.2 Hardware

9.2.1 Parts Acquisition

The parts acquisition will depend on whether we receive funding. If we receive funding, we should have the opportunity to purchase the vast majority of parts at one time. Some parts will be coming from China or elsewhere overseas and will need extra time for shipping. We will purchase these parts first, and then purchase the parts that we can start programming, assembling, and testing. Another consideration is any parts that need to be sent out to a subcontractor for assembly, such as the ESC and the power distribution PCB. These components will have a lead time and can hold up the final assembly and testing of several subsystems. If we do not have funding, our main objective is to purchase parts in a similar order, but the cost of each part will be taken into consideration as well. We will attempt to purchase the parts with the greatest lead time first, and then the parts that require the greatest amount of time to assemble or program.

9.2.2 Parts Assembly 

The drone and charging station will have several components that require subassembly. The charging subsystem on the charging station will need to be assembled and connected to the wireless charging pad, as well as the LCD screen and receiver. The Kinect will need to be wired up to the Parallela board and receive power from the distribution board. All of the electronics and sensors need to be connected to the flight controller and to the power distribution board. Most of the part assembly will take place in the senior design lab, the innovation lab, the manufacturing lab, and at home or in a garage. The assembly of the charging station will need some curing time and an area where we can work on it over time. Other parts will need to be assembled in the senior design or innovation lab, so that we can apply power to test the quality of the electrical connections and to test as we proceed with the build.

9.3 Software

9.3.1 Initial testing of sensors

The drone will have a multitude of sensors that need to be tested before being incorporated into the project. These sensors include the Kinect, the ultrasound sensors, and any others we might add as needed. Testing the ultrasound sensors is fairly straightforward: we will attach them to an Arduino Uno, individually at first, and test them to make sure they are working properly. While there is a lot of documentation on the internet about how to use them, this will also serve as a chance to learn how to use them to their fullest potential. After the sensors pass our initial check, we will test all six of them hooked up together. In our design we described the different configurations we

77

Page 83: 3D Mapping Drone - UCF Department of EECS  · Web view3D Mapping Drone. Edwin Lounsbery, Brian Vermillion, Matthew McHenry. ... The Atmega8 is an eight bit microprocessor with eight

could use to hook up the sensors, and through our prototyping we will determine which one fits our needs best.
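As part of this initial bench check, the raw echo pulse width can be converted to a distance on the host and compared against a tape measure. A minimal sketch of that conversion, assuming an HC-SR04-style sensor (the exact module is still open) and a speed of sound of 343 m/s at room temperature:

```python
def echo_to_cm(pulse_us, speed_of_sound_m_s=343.0):
    """Convert an ultrasonic echo pulse width (microseconds) to distance (cm).

    The echo pulse covers the round trip out to the obstacle and back, so the
    one-way distance is half of what sound travels in that time.
    """
    cm_per_us = speed_of_sound_m_s * 100.0 / 1_000_000.0  # 0.0343 cm per us
    return pulse_us * cm_per_us / 2.0

# A 1000 us echo corresponds to about 17 cm.
print(round(echo_to_cm(1000), 2))
```

Running this against known distances gives a quick sanity check that a sensor's timing output is trustworthy before it goes anywhere near the drone.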

Testing of the Kinect will be much more extensive. To start, we will simply plug it into a computer and use it as intended, with the Microsoft SDK. This will confirm that the unit is functioning properly. To use the Kinect on our drone, we have to strip it of most of its components and use only its camera board. The next stage of prototyping is to build a makeshift power board for the Kinect cameras using the voltage sources in the electronics lab, and hook the board up to a computer to make sure it still functions. Fortunately the camera board still supports USB, so that part of the testing will be easy. After we verify the camera board is working properly, we can move on to the other stages of prototyping with the sensors.

9.3.2 Basic Drone control  

Writing the code for drone control should be fairly straightforward, given that we will purchase a flight controller that handles balancing the drone for us. Our initial test version of the drone will simply be radio controlled, so all we have to code are simple commands such as moving forward, moving backward, and turning. Since this part of the drone is simple, we probably will not change much of the code later on. The mapping software can use these same commands, just through a serial connection instead of radio communication from a remote control. Once the radio controller functions properly, we can move on to other stages of prototyping.
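The command layer itself can be kept to a few bytes per message. The sketch below is only an illustration of that idea: the opcode values, one-byte magnitude, and checksum framing are our own placeholders, not the actual protocol of whichever flight controller we end up purchasing.

```python
# Placeholder opcodes for the simple movement commands described above.
COMMANDS = {"forward": 0x01, "backward": 0x02, "left": 0x03,
            "right": 0x04, "ascend": 0x05, "descend": 0x06}

def encode_command(name, magnitude):
    """Frame a movement command as three bytes: opcode, magnitude, checksum."""
    if name not in COMMANDS:
        raise ValueError("unknown command: %s" % name)
    if not 0 <= magnitude <= 255:
        raise ValueError("magnitude must fit in one byte")
    opcode = COMMANDS[name]
    checksum = (opcode + magnitude) & 0xFF  # simple additive checksum
    return bytes([opcode, magnitude, checksum])
```

Because the encoder does not care where a command came from, the same function can be driven by the radio-control handler during manual flight and later by the mapping software's serial link, which is exactly the reuse described above.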

9.3.3 Drone collision avoidance             

Collision avoidance can be a tricky problem, so to avoid accidents with the drone that could be costly, we are going to build the collision avoidance system separately and integrate it into the drone later. The basic idea is to place six ultrasound sensors along all of the axes to detect objects in the drone's path. Before this testing begins, the sensors have to pass the initial testing described in section 9.3.1. The initial device will be an Arduino Uno, or a more powerful board if needed, with all of the sensors attached. We will then write software that reads the sensors and displays some simple output when they detect something in their field of view. The next stage is to prototype the decision making in software. Ultrasound sensors are actually fairly slow, so it is important that the drone make fast decisions on the data to avoid collisions. The decision software will initially be something simple, like a case statement that decides what to do based on which ultrasound sensor is triggered, ordered by importance. Initial testing will use a stationary sensor platform: we will move objects within the range of detection and check whether the response is appropriate for the situation. The responses will simulate what the drone should do in each case. From there, we will continue improving the detection software until we feel comfortable putting it on the drone. If it works perfectly, great; otherwise we will keep improving it for the safety of the drone and the people around it.
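The priority-ordered case logic described above can be prototyped on the bench before any of it runs near a propeller. In this sketch the sensor names, priority order, threshold, and responses are all assumptions to be revisited once the sensor placement is final:

```python
# Most urgent first: a reading below the threshold on an earlier sensor wins.
PRIORITY = ["down", "front", "back", "left", "right", "up"]

RESPONSES = {"down": "climb", "front": "stop_and_reverse",
             "back": "stop_and_advance", "left": "veer_right",
             "right": "veer_left", "up": "descend"}

def decide(readings_cm, threshold_cm=50.0):
    """Return one evasive response for the highest-priority triggered sensor."""
    for sensor in PRIORITY:
        if readings_cm.get(sensor, float("inf")) < threshold_cm:
            return RESPONSES[sensor]
    return "continue"

# Front and left both see obstacles; front outranks left.
print(decide({"front": 30.0, "left": 20.0}))
```

A single pass over a fixed priority list keeps the decision time constant, which matters given how slowly the ultrasound sensors update.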

9.3.4 3D mapping software

The 3D mapping software will undergo many different stages of prototyping to ensure proper functionality and to simplify the development process. The end goal is to have the software running on an embedded Linux box on the drone, but it would only hinder development to start there. We will begin by developing the software on a normal desktop computer running Linux, because it will have an IDE and GUI that will not be available on the embedded device. This will also reveal the limitations of our software: if it will not run on a desktop computer, it definitely will not work on the embedded device. Finding these bottlenecks and eliminating them will be the main focus of the desktop prototype. Once the basic mapping software is working on the desktop, the software project will split into two parts. The first and most important part is to get the basic mapping software onto the drone.

Getting the mapping software to work on the embedded device will be a little tricky because of the different architectures involved. First, there might be driver issues with the Kinect that need to be resolved. Once the Kinect can be used, it is time to compile and run our software on the device. Some issues we may encounter are compilation errors due to the different processor architecture, different bottlenecks because of the limited resources available, and many unforeseen bugs. It may require us to significantly reduce the amount of data that the Kinect returns. Once this is complete, it will undergo functionality testing, and at that point we should have a basic working product. We can then make optimizations and improvements to the software. The embedded Linux device we will purchase has 16 general-purpose computation cores that are not being used at this stage. We would like to modify the software to use these cores to process the Kinect depth frames; with them, we should be able to increase the frame rate the Kinect operates at and have more data to use in the mapping.
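The division of a depth frame across those cores can be worked out on the desktop first. This host-side sketch shows only the row partitioning for a 640x480 frame; the actual offload would go through the Epiphany SDK, which is not modeled here:

```python
def row_slices(height, workers):
    """Split `height` image rows into `workers` contiguous (start, stop) slices."""
    base, extra = divmod(height, workers)
    slices, start = [], 0
    for i in range(workers):
        stop = start + base + (1 if i < extra else 0)  # spread remainder rows
        slices.append((start, stop))
        start = stop
    return slices

# 480 rows across 16 cores -> sixteen 30-row slices.
print(row_slices(480, 16)[0])   # (0, 30)
```

Contiguous row slices keep each core's memory accesses sequential, which is the cheapest partitioning to start with before trying anything fancier.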

While we work on getting the basic software to operate on the drone, we will continue expanding the desktop version to incorporate more complicated algorithms that use the map data and control the drone. We are building them on the desktop first for the same reasons stated before. With software like this it is often easy to build and test an idea, only to learn afterward that it was a little naïve and needs to be rethought and improved. This writing and testing cycle will probably be repeated a few times before the software ever reaches the drone. After reaching a satisfactory level of functionality, it will be moved to the drone, where all of the pieces will be put together and tested. The prototyping and testing stages of the drone software will be extensive.


9.3.5 Base Station Display

The first prototype of the base station display will be a very simple test. We will build the base station with the LCD screen and receiver to view the feed. This part of the design will be the same as in the finished product because it is so simple. Once that is built, we will use a webcam and transmitter to stream a test video to the base station. That way we know the base station is working, and we can focus on getting the drone side to work. The next stage of prototyping will involve transmitting the camera feed from the Kinect to the base station. This will be done before everything is incorporated onto the drone.

10 Project System Testing

10.1 Testing and Objectives

Our testing objective is a fully functioning drone that meets all of the specifications we are requiring of it. To accomplish that goal, testing in several environments and retesting the components is critical. Once the drone is completely built it will be difficult to pinpoint an issue if one arises; to avoid this confusion we will test and document all of our results and any particular problems. This should limit issues during specification testing and give us the ability to localize a problem with little effort. Our goal is to ensure that:

- Charging works between the station and the drone
- The Kinect is scanning and creating a 3D image
- The drone can meet its range specifications in distance as well as ceiling height
- Autonomous flight works
- The drone automatically returns to the base station when at a low battery state
- Autonomous crash avoidance works

10.2 Testing Environments   

We will have several testing environments. To begin, we will test individual components in the senior design lab; several of the small components will need the equipment there. We will make sure each individual component is working within its expected range prior to full assembly. Once the drone is built, we will test its flight capabilities in an open field, or request permission to test in a remote area of the University of Central Florida's campus. This will give us the opportunity to gain confidence in maneuvering the drone and to test specifications that are not feasible indoors. Once the Kinect sensor and our algorithm become operational, we will need to verify the capture of 3D imagery in a room prior to installing the sensor on the drone. This will take place in the engineering atrium or the senior design lab; since the Kinect's sensors are sensitive to sunlight, this testing must be done indoors. Lastly, as we build the project, some pieces will need to be tested and retested several times prior to the final tests for each segment. These tests will occur in the Innovation and Manufacturing labs. To ensure the build is going in the right direction we need to test and retest constantly. During the distance testing, some team members will be outside of vocal range, so we will use cellphones to communicate with one another and to safeguard the testing area for anyone who may be affected.

10.3 Power Testing

10.3.1 Power Distribution

Seven distributions make up the power distribution board, and each distribution is tested individually to verify it is functioning correctly. The triple-output power supply in testing environments such as the Smart Laboratory or the Senior Design Laboratory will serve as the DC source, and the laboratories' digital multimeters will be used to measure output voltage and current. Once all the distributions are verified individually, the DC source representing the battery will supply all distributions at the same time to verify that the power distribution board is operational as a whole.
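The per-distribution pass/fail judgment can also be scripted so each bench measurement is recorded consistently. A sketch of that check, with a 5% tolerance that is our assumption rather than a datasheet figure:

```python
def check_output(measured_v, nominal_v, tolerance=0.05):
    """Return 'PASS' if the measured voltage is within tolerance of nominal."""
    return "PASS" if abs(measured_v - nominal_v) <= tolerance * nominal_v else "FAIL"

print(check_output(5.08, 5.0))   # PASS: within 5% of the 5 V rail
print(check_output(4.60, 5.0))   # FAIL: more than 5% low
```

Logging the multimeter readings through one function like this keeps the tables below consistent across the seven distributions and both input voltages.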

The following are tables for individual distribution testing requirements:

Table 10-8: Tests at input voltages of 11.1 volts and 11.5 volts

Five Volt Distribution to TPS563200 Switching Regulator

Destination       | Output Voltage | Output Current | Supply Voltage | Pass/Fail | Supply Voltage | Pass/Fail
Parallella Board  | 5 Volts        | 2000 mA        | 11.1 Volts     |           | 11.5 Volts     |
Kinect Camera     | 5 Volts        | 680 mA         | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |
Ultrasonic Sensor | 5 Volts        | 15 mA          | 11.1 Volts     |           | 11.5 Volts     |


Seven Volt Distribution to TPS562200 Switching Regulator

Destination       | Output Voltage | Output Current | Supply Voltage | Pass/Fail | Supply Voltage | Pass/Fail
Flight Controller | 7 Volts        | 800 mA         | 11.1 Volts     |           | 11.5 Volts     |

Four and Four-Fifths Volt Distribution to TPS62140 Switching Regulator

Destination | Output Voltage | Output Current | Supply Voltage | Pass/Fail | Supply Voltage | Pass/Fail
Receiver    | 4.8 Volts      | 1300 mA        | 11.1 Volts     |           | 11.5 Volts     |

Eight Volt Distribution to LM25119 Switching Regulator

Destination | Output Voltage | Output Current | Supply Voltage | Pass/Fail | Supply Voltage | Pass/Fail
ESC1        | 8 Volts        | 30,000 mA      | 11.1 Volts     |           | 11.5 Volts     |
ESC2        | 8 Volts        | 30,000 mA      | 11.1 Volts     |           | 11.5 Volts     |
ESC3        | 8 Volts        | 30,000 mA      | 11.1 Volts     |           | 11.5 Volts     |
ESC4        | 8 Volts        | 30,000 mA      | 11.1 Volts     |           | 11.5 Volts     |

Table 10-9: Testing the simultaneous supply of all the distributions

   

10.3.2 Induction Charging (charging rate & operational)  

A pair of coupled inductors serves as the drone's charging interface, so verifying that they are functional is critical. The laboratory environments provide a function generator to serve as the AC source and an oscilloscope for AC measurements. Testing the coupled inductors at varying voltages is the first task, to show that the interface is capable of transferring energy. To confirm that the coupled inductors pass energy when isolated from their source, they will then be tested at circuit voltage with the other circuit elements.

1. Each test should run for at least seven thousand two hundred cycles to ensure steady-state characteristics (time = 7200 * (1/60) = 120 seconds).
2. The input source is a sinusoidal wave oscillating at sixty hertz.
3. The output voltage should measure approximately equal to the supply voltage.

Test One: Coupled Inductors

Supply Voltage | Pass/Fail | Output Voltage
10 Volts       |           |
15 Volts       |           |
20 Volts       |           |
25 Volts       |           |

Test Two: Coupled Inductors

Supply Voltage | Isolation                  | Pass/Fail | Output Voltage
15 Volts       | 100 ohms, 10 uF, 10 ohms   |           |
15 Volts       | 100 ohms, 10 uF, rectifier |           |
20 Volts       | 100 ohms, 10 uF, 10 ohms   |           |
20 Volts       | 100 ohms, 10 uF, rectifier |           |

10.3.3 Base Charging Station (conversion)   

Joining the drone is a multi-purpose base station. Its main purpose is to recharge the drone battery that powers the many electronic devices that make the drone controllable and able to fly. That battery also supplies the depth camera that captures the three-dimensional imagery from which three-dimensional maps of the drone's surroundings are built. The base station additionally provides a means to display this imagery: fitted with a receiver and an LCD display, it receives the transmitted images and presents them for the user and audience to view. Finally, the base station is a suitable and logical place to dock the drone for storage and display.

10.4 Drone System

10.4.1 Payload

Payload capacity is important from a marketability standpoint and is also one of our requirements. We will attach weight to the landing gear and test several weights until we reach a limit. Table 10-10 shows the payload weights and the expected result for each payload.

Table 10-10: Payload testing

Payload Weight (grams) | Expected Result | Actual Result (Pass/Fail)
500                    | PASS            |
1000                   | PASS            |
1500                   | PASS            |
2000                   | PASS            |
2500                   | FAIL            |

10.4.2 Basic Flight  

Once the components of the drone have been tested, we will test it for basic flight in a controlled environment. This will give us confidence in basic flight prior to adding all of the sensors and the 3D imaging hardware. Each maneuver will be tested with a pass/fail result.

Table 10-11: Basic flight testing

Direction       | Expected Result | Actual Result (Pass/Fail)
Lift            | PASS            | PASS
Descend         | PASS            | PASS
Left            | PASS            | PASS
Right           | PASS            | PASS
Forward & Right | PASS            | PASS
Forward & Left  | PASS            | PASS
Reverse & Right | PASS            | PASS
Reverse & Left  | PASS            | PASS

10.4.3 Avoidance   

We will be testing the avoidance capabilities of the drone even though they are not in the requirements or specifications. We are testing the drone's ability to avoid objects in order to prevent damage to the drone and to protect persons within the drone's vicinity. In Table 10-12 we list a variety of situations; for each we will gather the distance sensed and the result of the test.

Table 10-12: Avoidance Testing

Test Type                     | Distance (Meters) | Expected Result    | Actual Result
Direct flight towards chair   |                   | Stop @ 0.5 meters  |
Sideways flight towards chair |                   | Stop @ 0.5 meters  |
Passing a person              |                   | Veer around person |
Proximity sensing of a person | 4.5               |                    |

10.4.4 Ceiling Height   

To meet our requirement, the drone will have to reach an altitude of 400 meters, which we will test in a field or a remote area of UCF. As shown in Table 10-13, we will test the drone's altitude in 100 meter increments with a pass or fail result.

Table 10-13: Ceiling height test

Altitude (meters) | Expected Result | Actual Result | Measurement Type
100               | PASS            |               |
200               | PASS            |               |
300               | PASS            |               |
400               | PASS            |               |

10.4.5 Distance Range

Testing the drone's range will be a little delicate. We will need to find an area with an unobstructed one kilometer line of sight where the drone will not interfere with anyone or anything. The second option is an area where interference is minimal and team members are stationed at critical points along the drone's path. The type of testing environment will also determine our maximum range; with an unobstructed line of sight we should achieve the greatest distance. This again will be a pass/fail test repeated until a fail result is reached, at which point we will measure the distance. To accomplish this we can use our cellphones, fitness bands, GPS, or even a measuring wheel.
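If we log a GPS fix at the base station and another at the point where the link fails, the distance between the two can be computed with the standard haversine formula. A sketch, using the common 6,371 km mean Earth radius approximation:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two (lat, lon) points in degrees."""
    r = 6_371_000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# One degree of latitude is roughly 111 km.
print(round(haversine_m(0.0, 0.0, 1.0, 0.0)))
```

Phone GPS fixes carry several meters of error, which is negligible at the 250 m test increments in the table below.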


Table 10-14: Distance range testing

Distance (meters) | Expected Result | Actual Result | Measurement (Actual)
250               | PASS            |               |
500               | PASS            |               |
750               | PASS            |               |
1000              | PASS            |               |
1250              | PASS            |               |
1500              | FAIL            |               |

10.5 Software Testing

One of the most important aspects of our project is to have accurate data. We will test the data input from the hardware and verify the accuracy of this data.

10.5.1 Data input 

Before any other part of our software is built, the validity of the input data needs to be tested. One of the hardest problems to solve when developing software is a program that malfunctions due to bad input data. That is why our first step will be to build the Kinect input software and confirm that the data received is accurate enough to be used. To confirm that the Kinect is functioning properly, it will be tested in a controlled environment: the Kinect facing a flat wall with no other objects in its path. The distance from the wall will be known ahead of time, and the data will be recorded and compared afterward. This process will take place five times, at five different distances. With the data collected, the point distances will be loaded into a program like MATLAB to visualize them and check for errors.
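The flat-wall comparison can be reduced to a single score per frame: the fraction of depth samples that fall within a tolerance of the known wall distance. The 2 cm tolerance below is an assumption to be tuned once we see the Kinect's real noise floor:

```python
def wall_check(depth_samples_m, known_distance_m, tolerance_m=0.02):
    """Return the fraction of depth samples within tolerance of the known distance."""
    if not depth_samples_m:
        return 0.0
    good = sum(1 for d in depth_samples_m if abs(d - known_distance_m) <= tolerance_m)
    return good / len(depth_samples_m)

frame = [0.99, 1.00, 1.01, 1.015, 1.30]   # last sample is an outlier
print(wall_check(frame, 1.0))             # 0.8 -> 80% of samples agree
```

A per-frame score like this makes the pass/fail column in the table below objective instead of a visual judgment in MATLAB alone.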

Table 10-8: Data input testing

Distance (meters) | Expected Result | Actual Result | Measurement (Actual)
0.5               | PASS            |               |
1                 | PASS            |               |
2                 | PASS            |               |
3                 | PASS            |               |
4                 | FAIL            |               |

10.5.2 Data computation 

Accuracy of the computed data is just as important as the accuracy of the data received. To test this, the computation will undergo a process very similar to the data input test, the difference being the additional data from the IMU. This process will simulate what will actually happen on the drone: the frames received will be processed with the IMU data in real time, and the computed positions will be recorded. This will be done in the controlled environment and the results stored on disk. Afterwards, the data will be visualized in MATLAB and the distances compared against the actual distances in the environment.
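The core of that computation is the pose transform: a depth point in the Kinect's frame, rotated by the IMU attitude and translated by the drone's position, should land at a predictable world coordinate. A minimal 2D, yaw-only version of that step, which is our simplification for ground-truth checking rather than the full 3D math:

```python
import math

def to_world(point_xy, drone_xy, yaw_deg):
    """Rotate a sensor-frame point by the drone's yaw, then translate by its position."""
    yaw = math.radians(yaw_deg)
    x, y = point_xy
    wx = x * math.cos(yaw) - y * math.sin(yaw) + drone_xy[0]
    wy = x * math.sin(yaw) + y * math.cos(yaw) + drone_xy[1]
    return (wx, wy)

# A point 1 m ahead of a drone at the origin facing 90 degrees maps to (0, 1).
wx, wy = to_world((1.0, 0.0), (0.0, 0.0), 90.0)
print(round(wx, 6), round(wy, 6))
```

Checking hand-computable cases like this one gives us known-good answers to compare against before the full pipeline runs on real flight data.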

10.6 Overall system testing    

Once all of the subsystems are tested, we will complete the build and perform thorough testing of the project prior to displaying it in front of our peers and faculty. In this phase we will repeat several checks to ensure that we are still within our specifications with the additional weight, and to confirm that everything is working as a whole.

Table 10-15: Overall system testing

Test Type                     | Expected Result | Actual Result | Measurement Type
Autonomous Flight             | PASS            |               |
3D Mapping of Environment     | PASS            |               |
Avoidance Maneuvering         | PASS            |               |
FPV viewing on Charge Station | PASS            |               |

11 Bill of Materials

Table 11-16: Overall parts list

Name                     | Number | Official Part Name/Number | Price
Parallella Board         | 1      | Parallella-16             | 150
Drone frame              | 1      |                           | 150
Flight controller        | 1      | Pixhawk                   | 200
ESC                      | 4      |                           | 80
Motor                    | 4      | EMAX MT4114               | 178
Ultrasonic sensors       | 6      |                           | 9.78
Kinect                   | 1      | Kinect 360                | 20
Battery                  | 1      | We                        | 88.72
Power distribution board | 1      |                           | 0
Transmitter              | 1      |                           | 45
Receiver                 | 1      |                           | 130
Induction charging       | 1      |                           | 0
Wires                    | 1      | Wires                     | 5
Propellers               | 4      | 1555 Drone rotors         | 24

Table 11-17: Kinect TPS54218 Circuit

Part Number        | Component   | Quantity Ordered | Prototype Price | Total
CC0805JRNPO9BN101  | Ccomp2      | 1 | $0.10 | $0.10
ERJ-6ENF5901V      | Rcomp       | 1 | $0.10 | $0.10
ERJ-6ENF1003V      | Renb        | 1 | $0.10 | $0.10
ERJ-6ENF1273V      | Rent        | 1 | $0.10 | $0.10
ERJ-6ENF6653V      | Rt          | 1 | $0.10 | $0.10
ERJ-6ENF3162V      | Rfbt        | 1 | $0.10 | $0.10
C3216X5R0J226K     | Cout        | 1 | $0.48 | $0.48
GRM033R61A272KA01D | Css         | 1 | $0.10 | $0.10
08053C104KAT2A     | Cboot, Cinx | 2 | $0.10 | $0.20
ERJ-6ENF1002V      | Rfbb        | 1 | $0.10 | $0.10
GRM32ER61C226KE20L | Cin         | 1 | $0.68 | $0.68
GRM21BR71E104KA01L | Ccomp       | 1 | $0.10 | $0.10
TPS54218RTER       | U1          | 1 | $3.52 | $3.52
SER2915L-333KL     | L1          | 1 | $0.00 | $0.00
ERJ-6ENF4992V      | Rpg         | 1 | $0.10 | $0.10

Table 11-18: Kinect LM3671LX

Part Number          | Component | Quantity Ordered | Prototype Price | Total
LM3671TLX-1.875/NOPB | U1        | 0 | $0.34 | $0.00
MLP2520S2R2MT0S1     | L1        | 1 | $0.42 | $0.42
GRM188R60J475KE19D   | Cin       | 1 | $0.12 | $0.12
C3216X5R0J226K       | Cout      | 1 | $0.48 | $0.48


Table 11-19: ESC parts list

Part Number        | Component        | Quantity Ordered | Prototype Price | Total
ERJ-6ENF2432V      | Ruv2             | 0  | $0.10 | $0.40
SER2915L-682KL     | L1, L2           | 4  | $0.10 | $0.40
LM25119PSQ/NOPB    | U1               | 4  | $0.10 | $0.40
CSD17305Q5A        | M1, M3           | 4  | $0.10 | $0.40
35SVPF120M         | Cin              | 8  | $0.12 | $0.48
ERJ-6ENF5492V      | Ruv1             | 12 | $0.10 | $0.80
4TPF680MAH         | Cout             | 4  | $0.10 | $0.80
08053C104KAT2A     | Cboot1, Cboot2   | 12 | $0.10 | $0.80
CL21C331JBANFNC    | Ccomp2           | 8  | $2.40 | $19.20
ERJ-6ENF5112V      | Rt               | 4  | $0.11 | $0.44
CC0805KRX7R9BB682  | Ccomp1           | 4  | $0.10 | $0.40
ERJ-6ENF2322V      | Rcomp1           | 4  | $0.65 | $5.20
EMK212B7474KD-T    | Cres             | 4  | $0.10 | $0.40
CC0805KRX7R9BB821  | Cramp1, Cramp2   | 4  | $0.10 | $0.40
CC0805KRX7R9BB153  | Css1, Css2       | 8  | $0.10 | $0.40
GRM155R61A474KE15D | Cvcc1, Cvcc2     | 8  | $0.10 | $0.40
BSC042NE7NS3 G     | M2, M4           | 8  | $0.10 | $0.40
RR1220P-362-D      | Rfb2             | 8  | $0.12 | $0.48
ERJ-6ENF1151V      | Rfb1             | 4  | $0.10 | $0.80
PRL1632-R005-F-T1  | Rsense1, Rsense2 | 4  | $0.10 | $0.80
C0805C104K5RACTU   | Cinx             | 8  | $0.10 | $0.80

Table 11-20: Power distribution circuit parts for LM25119 switch

Part    | Part Number        | Quantity | Price
Cboot1  | C1005X5R1A104K     | 1 | 0.01
Cboot2  | C1005X5R1A104K     | 1 | 0.01
Ccomp1  | CC0805KRX7R9BB222  | 1 | 0.01
Ccomp2  | C2012C0G1H271J     | 1 | 0.01
Cin     | 16SVPF82M          | 3 | 0.35
Cinx    | 08053C104KAT2A     | 1 | 0.01
Cout    | 16SVPF1000M        | 2 | 0.74
Cramp1  | CC0805KRX7R9BB821  | 1 | 0.01
Cramp2  | CC0805KRX7R9BB821  | 1 | 0.01
Cres    | GRM155C80G474KE01D | 1 | 0.01
Css1    | CC0805KRX7R9BB153  | 1 | 0.01
Css2    | CC0805KRX7R9BB153  | 1 | 0.01
Cvcc1   | GRM155R61A474KE15D | 1 | 0.01
Cvcc2   | GRM155R61A474KE15D | 1 | 0.01
D1      | B220-13-F          | 1 | 0.08
D2      | B220-13-F          | 1 | 0.08
L1      | XAL1010-332MEB     | 1 | 1.71
L2      | XAL1010-332MEB     | 1 | 1.71
M1      | CSD16325Q5         | 1 | 0.91
M2      | CSD17577Q3A        | 1 | 0.28
M3      | CSD16325Q5         | 1 | 0.91
M4      | CSD17577Q3A        | 1 | 0.28
Rcomp1  | CRCW040246K4FKED   | 1 | 0.01
Rfb1    | RC0603JR-07910RL   | 1 | 0.01
Rfb2    | RR1220P-822-D      | 1 | 0.01
Rramp1  | CRCW040288K7FKED   | 1 | 0.01
Rramp2  | CRCW040288K7FKED   | 1 | 0.01
Rsense1 | CSNL1206FT3L00     | 1 | 0.19
Rsense2 | CSNL1206FT3L00     | 1 | 0.19
Rt      | CRCW040234K0FKED   | 1 | 0.01
Ruv1    | CRCW040254K9FKED   | 1 | 0.01
Ruv2    | CRCW04028K87FKED   | 1 | 0.01
U1      | LM25119PSQ/NOPB    | 1 | 2.6

Table 11-21: Power distribution circuit parts for TPS563200 switch

Part | Part Number        | Quantity | Price
Cbst | 08053C104KAT2A     | 1 | 0.01
Cin  | GRM32ER61C226KE20L | 1 | 0.16
Cout | GRM31CR61A476KE15L | 1 | 0.21
L1   | VLP8040T-3R3N      | 1 | 0.22
Rfbb | CRCW040210K0FKED   | 1 | 0.01
Rfbt | CRCW040256K2FKED   | 1 | 0.01
U1   | TPS563200DDCR      | 1 | 0.52

Table 11-22: Power distribution circuit parts for TPS562200 switch

Part | Part Number        | Quantity | Price
Cbst | 08053C104KAT2A     | 1 | 0.01
Cin  | GRM32ER61C226KE20L | 1 | 0.16
Cout | GRM32ER61C226KE20L | 1 | 0.16
L1   | MSS5131-472MLB     | 1 | 0.48
Rfbb | CRCW040210K0FKED   | 1 | 0.01
Rfbt | CRCW040282K5FKED   | 1 | 0.01
U1   | TPS562200DDCR      | 1 | 0.47

Table 11-23: Power distribution circuit parts for TPS62140 switch

Part | Part Number        | Quantity | Price
Cin  | GRM219R61E106KA12  | 1 | 0.05
Cout | GRM31CR61A226KE19L | 1 | 0.08
Css  | GRM033R71E561KA01D | 1 | 0.01
L1   | SDR0403-2R2ML      | 1 | 0.18
Rfb1 | RR1220P-184-D      | 1 | 0.01
Rfb2 | CRCW0402909KFKED   | 1 | 0.01
Rpg  | CRCW0402100KFKED   | 1 | 0.01
U1   | TPS62140RGTR       | 1 | 0.95

12 Owner's Manual 

Battery --- Match the drone’s charging unit to the base station’s charging unit. Connect the base station to an outlet to charge the drone’s battery.

Software --- The software for the drone is already provided and ready to use. When the drone is turned on it will wait for user feedback to begin mapping. The start command will be triggered by a switch on the remote.

Mapping --- After the drone has docked and shut down, remove the memory card from the drone and insert it into a computer. Open the data from the memory card in the display software provided, or in MATLAB. Output data will be in a MATLAB-compatible format.

Piloting --- Switch on the drone and the remote controller. Use the D-pad and joystick to fly the drone. Be aware that the drone ascends and descends in fixed increments.


D-Pad Controls

Press and Hold Up --- The drone will ascend.

Press and Hold Down --- The drone will descend.

Press and Hold Right --- The drone will pivot right.

Press and Hold Left --- The drone will pivot left.

Y-stick Controls

Slide Up --- Forward motion.

Slide Down --- Reverse motion.


Slide Right --- Tilt right.

Slide Left --- Tilt left.

13 Administrative Content

13.1 Milestone Discussion

The weekly milestones for the project, originally tracked on monthly calendars, are summarized below. Research is ongoing throughout.

May 2015
- May 28: Established group
- May 29-31: Establishing project details

June 2015
- June 1-4: Write initial document
- June 5: Initial project document due
- June 7: Divide project details
- June 8-19: Research
- June 21-26: Writing table of contents; picking sensors (June 23); testing Kinect sensor (June 25)
- June 30: Table of contents due

July 2015
- July 5: Design discussion and research
- July 12: Alternative designs and research
- July 19-26: Write rough draft
- July 31: Finish rough draft

August 2015
- August 2-4: Revision
- August 5: Print and bind paper
- August 6: Senior Design 1 paper due
- August 9: Order parts and begin coding
- August 16: Break
- August 23: Begin testing subsystems
- August 30: Begin construction

September 2015
- September 6: Continue construction
- September 13: Assembly
- September 20: Initial testing
- September 27: Modifications

October 2015
- October 4: Continue testing
- October 11: Software integration
- October 18: Project demonstration
- October 25: Continue testing

November 2015
- November 1: Documentation
- November 8: Documentation and testing
- November 15: Revision and modifying
- November 22: Prepare paperwork and drone for presentation
- November 29: Present

December 2015
- December 6: Present

13.2 Budget and Finance Discussion    

With any project, whether it’s for personal or if it’s for an employer, setting a budget or discussing financial objectives is a critical part. In our discussion we set a few levels to our project. Our first level is a dream list of parts with little consideration of the final cost. We then looked at all of the parts that we picked and deliberated on which were an absolute necessity for the project and which can be changed out for less expensive parts that will work. This we broke down into two levels with the parts reducing in price respectively. Our last option is to remove different options of the project to reduce cost or to buy used parts instead of new. Once these levels were established we calculated roughly how much will our project cost at a cost of $1500.00 which would breakdown to a cost of $500.00 per person. This seemed a little on the expensive side for the team members but is not out of the realm of conceivable. Once we designed and picked the parts of what we will use and what parts we need to build every little aspect of our project we calculated a price of $1432.21. This final price includes overlooked items such as wires, soldering tools, etc. as well as material such as paint and sand paper. Another option to reduce the financial burden to the team members is to solicit for sponsorship from a group or business. PolyGlass USA, a company that Edwin Lounsbery is currently interning at, might be willing to become a sponsor or partial sponsor to assist the team with purchasing of parts. PolyGlass USA is a building materials company that one of its locations is in Winter Haven Florida. They specialize in coatings for roofs and tar roofing material for underlayment and flat roofs. Their interest in assisting our team is to give back to students and help ease our financial burden. Our final decision on what products we will use will heavily rely on sponsorship. 
If our sponsorship does not solidify, the financial burden will be divided evenly among the group members.

13.3 Consultants, Subcontractors, and Suppliers

We will be using Quality Manufacturing Services in Lake Mary, Florida, for our power distribution board and electronic speed controller boards. They are an ISO 9001 compliant company, which means they have stringent quality control processes and we should receive a good quality product. Our parts suppliers will be vast: we will be ordering parts from eBay, HobbyKing.com, HobbyPartz.com, Banggood.com, and DigiKey.

14 Appendices

14.1 Copyright Permissions and Authorizations

 

14.2 Bibliography

Electronic Products. (2012, May 8). Understanding the advantages and disadvantages of linear regulators. Retrieved from DigiKey: http://www.digikey.com/en/articles/techzone/2012/may/understanding-the-advantages-and-disadvantages-of-linear-regulators

Open Source Robotics Foundation. (2012, December 27). Lens distortion and focal length. Retrieved from ros.org: http://wiki.ros.org/kinect_calibration/technical#Lens_distortion_and_focal_length

OpenKinect. (2011, February 27). Hardware info. Retrieved from openkinect.org: http://openkinect.org/wiki/Hardware_info

14.3 Autobiography

14.3.1 Brian Vermillion

I am an undergraduate student at the University of Central Florida studying computer engineering. I enjoy both hardware and software design, but my main focus is on software design. Within software design, some of my main interests are distributed systems, parallel processing, computer graphics/vision, and artificial intelligence. Each of these topics is very extensive, and at the moment I have only scratched the surface of them. This project incorporates two of the areas I am interested in: parallel processing and some computer vision. I look forward to learning more about these topics through this project.

14.3.2 Matthew McHenry

Hello, my name is Matthew McHenry. I am from the west coast of Florida and am an electrical engineering major. My contributions to the project have mainly been to chapters one through seven. My favorite parts of the project are the wireless charging and the fact that the project can fly. I find the popularity and implementation of three-dimensional mapping intriguing, and the project's full scope, navigating by 3D mapping, may prove to be a new application of the technology.

14.3.3 Edwin Lounsbery

I am studying electrical engineering at the University of Central Florida while holding a full-time job and being a full-time dad. I enjoy overcoming technical challenges and working with my hands; with this project I get to do both and learn some new things along the way. I am also a gearhead, which makes this project a perfect fit. My takeaways will be circuit design, data processing, control theory, and 3D imaging.

14.4 References

http://copter.ardupilot.com/wiki/common-autopilots/common-pixhawk-overview/
http://copter.ardupilot.com/
http://openkinect.org/wiki/Main_Page
http://accudiy.com/
http://dev.ardupilot.com/
http://ardrone2.parrot.com/support-android/
https://www.parallella.org/parallella-models/
http://robotics-worldwide.1046236.n5.nabble.com/robotics-worldwide-news-Looking-for-outdoor-depth-cameras-with-short-range-td5710069.html
http://www.instructables.com/id/Homemade-Infrared-Rangefinder-Similar-to-Sharp-GP/
http://big.cs.bris.ac.uk/projects/mobile-kinect
http://www.hobbypartz.com/66p-125-mt-4114-plus-thread.html
http://www.stefanv.com/electronics/escprimer.html
http://www.4qdtec.com/index.html?Home=+Home
http://www.digikey.com/en/articles/techzone/2012/may/understanding-the-advantages-and-disadvantages-of-linear-regulators
http://wiki.ros.org/kinect_calibration/technical#Lens_distortion_and_focal_length
