Reconnaissance Robot
Senior Project Final Report
Advisor: Frank Hludik
Group Members: Ruben Lalinde
Ben Major
Ben Minerd
Andrew Pavlik
Submit Date: 20 May 2011
Abstract
This report documents the design, construction, and analysis work performed to
build the reconnaissance robot. The minimal requirements the robot needed to meet
were to be wirelessly controlled and able to capture and relay visual information.
Beginning with a brief overview of the problem that reconnaissance robots are
meant to solve, this report explains the critical elements involved in the design of the
robot. Also included are the future improvements that could be made to the robot and the
existing reconnaissance robot implementations on which the design was based, with
particular emphasis placed on the technical design.
Introduction
Robotic products have endless applications. These applications can range from robots
designed to help with house cleaning to autonomous military combat robots designed to
eliminate enemy combatants. No matter what the application, the main goal is to make
performing certain tasks simpler and easier. Not only do robots accomplish this, but they also
can perform tasks that would be needlessly dangerous for humans to perform. One such task
involves reconnaissance during potentially deadly situations such as a hostage crisis. In a
reconnaissance application, a robot would gain visual information so that the end user could
better assess the situation. The situations the robot must deal with necessitate a
particular set of specifications for the robot.
Because a reconnaissance robot will need to gain visual data without compromising
its position, the specifications for the robot will center on minimizing its visual and audio
presence. Minimizing the machine’s visual presence requires that the robot be camouflaged
with its environment. Since there are many unique environments, it either needs to be coated
with a material that constantly changes its appearance to match its environment or have
different sets of outer shells depending on the environment. Moreover, the smaller the
robot's volume, the less chance it will be seen. As for audio presence, the robot
needs to keep the sound level of its internal systems to about 30 dB; this imposes a
trade-off between the sound generated and the speed. The specifications for a reconnaissance
robot center on minimizing its visual and audio presence but depend on other factors as well.
The other design specification factors can be derived from analyzing existing reconnaissance
robots.
To further understand what specifications must be met, existing reconnaissance
robots must be analyzed. Within the field of reconnaissance robotics, the most prevalent
theme centers on military reconnaissance robots. One such example of a military
reconnaissance robot is the Recon Scout XT. The key features of the Recon Scout XT are that
it is light, water resistant, and shock resistant. Furthermore, the robot possesses video
feedback and an optical infrared system. Certainly a reconnaissance robot needs to have some
type of visual system to gather data as well as be reasonably durable.
While the Recon Scout XT offers a simple data gatherer, the iRobot PackBot offers a
more versatile solution. The PackBot possesses many of the same abilities as the Recon Scout
XT, i.e. video feedback and durability; however, the PackBot also has a small manipulator arm,
longer runtime, stair climbing abilities, and faster speed. A drawback of the PackBot,
however, is its large size which does not camouflage well with given environments. Although
each reconnaissance robot design offers its individual advantages, the key specifications are
durability, reliable video feedback system, and mobility.
When put into operation, the key elements that make a reconnaissance robot
successful are its mobility, durability, and the quality of its feedback system. A
reconnaissance robot's deployment environments encompass many types and sizes;
therefore, mobility is essential. Additionally, a robot needs to withstand any intended or
unintended damage when traveling through hostile environments. Although each individual
environment presents its own specific hazards that necessitate specific hazard protection
mechanisms, central to every design is the need to guard the circuitry and motors with a
strong outer frame. This frame provides basic protection in most environments. Fundamental
to a reconnaissance robot's design is the ability to provide an excellent quality video feed of
its targeted deployment environment. In addition to video quality, the camera should be able
to capture video in the dark. Together, mobility, durability, and video quality determine
the effectiveness of the robot.
Each situation in which a robot is deployed determines its required specifications.
For this application, the robot requires a minimal visual and audio presence to gain visual data
without compromising its location. Existing reconnaissance robot designs exemplify
extremes of specifications with the Recon Scout XT focusing on mobility and fast threat
detection assessment and the iRobot PackBot focusing on longevity and durability. These two
robots displayed the three essential elements for any reconnaissance robot: mobility,
durability, and high video quality. Designing a high quality robot will need to encompass all
of these features.
Technical Components
Overview
The technical specifications of the robot design were established by several basic
components necessary for a reconnaissance robot. The basic requirements are easy
maneuverability, a robust body, and an excellent video feed. To meet the maneuverability
requirement of the robot, an easy-to-use GUI was implemented to control the robot. Strong
durable drive motors as well as a strong frame added to both maneuverability and the robot's
physical integrity. For the video feed, a wireless camera equipped with LED and infrared
lighting for night viewing was mounted on the robot. In addition to meeting the goals required
of a general reconnaissance robot, the robot possesses an accelerometer, GPS, and
navigation sensors as well as facial recognition technology. Overall, this robot met and
exceeded the original design goals.
Figure 1 Overview of Robot Hardware
Group Member Contributions
• Ruben Lalinde: Mechanical Framework, Arm Design, Video Communications
• Ben Major: Financing, Component Organization, Facial Recognition, Accelerometer Integration
• Ben Minerd: GUI Design, Computer-Robot Control Integration, Facial Recognition, GPS
• Andrew Pavlik: Battery and Power Circuitry, LED & Infrared Lighting System, Circuit Board Layouts
Mechanical Frame
The size of the robot was a crucial element of its design. One of the key design
specifications of this robot was to be able to climb stairs, which relied heavily on the shape,
weight, and weight distribution of our design. Climbing stairs was not the only key
specification; the design also required a flexible geometry allowing many choices for the
physical layout of the electronics, motors, and other equipment. Additionally, the vital
equipment onboard the robot would need a structurally sound container for protection. For
strength, stability, and maneuverability, an aluminum rectangular box frame was chosen with
a triangular extrusion in the front. The triangular extrusion was added so the robot could
climb over steps. Rather than weld the aluminum frame pieces together, nuts and bolts were
used. Nuts and bolts provided additional flexibility without diminishing the integrity
of the design.
Figure 2 Schematic of Robot Climbing Stairs
Mechanical Arm
There were several ways to approach the arm design but the key feature behind each
design was the need to have a wide viewing range. The approach that we decided to use was
to have a single aluminum bar. Rather than using multiple sections of aluminum bars
connected by joint motors, three servo motors were attached to the top of the single
aluminum arm. With three servo motors on top of the arm and the ability to change the
position of the robot, many view angles could be achieved. The wide-viewing-range
specification also included the ability of the camera to look into windows, which the long
aluminum bar made possible. The details of the actual camera used on
the arm will be explained in the video communications section.
Drive System
Two high torque 12V DC motors and a rubberized track system were used for the
primary drive system. The DC motors are AME 218-series, standard brushed motors that
contain an internal worm gear assembly. The worm gear linkage provides high drive torque,
around 392 in-lbs total, which is about four times the calculated torque required to
drive the system. Testing showed this is sufficient to drive the assembled robot up a 60 degree
incline. The motors are powered using a Dimension Engineering Sabertooth speed
controller which provides up to 40A of drive current in two channels and is serially linked to
the Arduino microcontroller. Each motor is rated for a stall current of about 20A; an active
drive current of about 1A was measured under normal driving operation.
Control Communication
For controlling the robot, we used two 900MHz XBee transceivers and communicated
through the wireless channel using a custom ASCII packet protocol that we developed.
Packets are delimited using start and end characters (‘>’ and ‘<’, respectively), and the data
within a packet consists of commands and parameters separated by special characters (‘=’ and ‘;’). For
example, the string “>Lft=128;Rht=1;Lit=1<” is a packet that contains three commands: the
left drive-motor command with a value of 128 (full-forward), the right drive-motor command
with a value of 1 (full-reverse), and the visible LED command.
The wireless communication is set up using a basic handshake technique, where the
robot transmits the “Bgn" (begin) command once the Arduino microcontroller boots up and
has initialized all of the necessary ports. Once the GUI receives the “Bgn” command, it
begins transmitting “Req” (request) commands at two-second intervals. The robot replies
with a packet that contains the current values for the digital compass and the arm servo angle
upon receiving every “Req” command and also adds the current system and Arduino battery
levels and the robot’s GPS location for every 5th packet (i.e. every ten seconds).
Figure 3 Communication between Robot and Laptop
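As an illustration, the packet format above can be parsed with a short routine like the following. This is a minimal sketch in C++; the function and variable names are ours and the robot's actual firmware may be structured differently.

```cpp
#include <map>
#include <sstream>
#include <string>

// Parse one packet of the form ">Cmd1=val1;Cmd2=val2<" into a
// command-to-value map. Returns an empty map if the '>' / '<'
// framing characters are missing or out of order. (Command names
// such as "Lft" follow the report; parsePacket itself is ours.)
std::map<std::string, int> parsePacket(const std::string& raw) {
    std::map<std::string, int> cmds;
    std::size_t start = raw.find('>');
    std::size_t end   = raw.find('<');
    if (start == std::string::npos || end == std::string::npos || end <= start)
        return cmds;  // malformed frame: ignore the packet

    // Split the body on ';' into "name=value" fields.
    std::string body = raw.substr(start + 1, end - start - 1);
    std::istringstream fields(body);
    std::string field;
    while (std::getline(fields, field, ';')) {
        std::size_t eq = field.find('=');
        if (eq == std::string::npos) continue;  // skip fields without '='
        cmds[field.substr(0, eq)] = std::stoi(field.substr(eq + 1));
    }
    return cmds;
}
```

With the example packet from the text, `parsePacket(">Lft=128;Rht=1;Lit=1<")` yields three entries: Lft mapped to 128, Rht to 1, and Lit to 1.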
Video Communication
To preserve bandwidth in the communication between the robot and laptop, and given
the large bandwidth a video feed would consume, the control and data communications were
separated from the video communication. To accomplish this separation of
communication channels, a commercial wireless-N camera was used in conjunction with a
commercial wireless-N router. The laptop was connected via an Ethernet cable to the
wireless-N router so that the video feed from the robot could be relayed back to the GUI.
The camera was mounted on a stack of three servo motors within the robot. To help ensure
that only authorized users were able to see the video feed from the robot, an encrypted
point-to-point connection between the router and camera was used.
Battery and Power Systems
A single, 11 cell, 12V nominal (13.2V actual) NiMH battery was used for the primary
power system. The NiMH battery supplies current to the primary drive motors as well as the
remaining interface and power supply circuitry located in the robot. The Arduino
microcontroller system is powered from a separate, isolated battery source integrated into the
Arduino system.
The following diagram (Figure 4) shows the interface boards and power supplies
which take their power from the primary battery source. The list which follows describes the
interconnections of the boards shown in the figure.
Figure 4 Interface Boards and Power Supplies
Board Details
• (A) – Interface Board 1 and Power Supply
• Output power:
• 5V, 1A for arm and camera pan, tilt, and balance servos
• 12V for cooling fan interfaced through fan relay
• Relay Control:
• Master Relay: Controls power to other relay boards to allow the
microcontroller to activate interface board 2
• Relay 1: Controls connection between Arduino battery sensor and A/D
input on Arduino. This prevents power from being unnecessarily
drained from battery while off.
• Relay 2: Fan relay, activated during IR usage by microcontroller
• (B) – Interface Board 2 and Power Supply
• Output power:
• 5V, 0.75A for spare servo control
• Relay Control:
• Relay 1: IR LED Connection
• Relay 2: Shorts part of the resistance to activate IR Bright mode
• Relay 3: Center visible flashlight mode
• Relay 4: Main battery sensor connection
• (C) – GPS board and Power supply
• Primary function: GPS receiver, serial output to microcontroller
• Output power: 5V 0.75A for accelerometer and digital compass
• (D) – Digital Compass and Accelerometer
• Receives 5V power from the GPS board.
• (E) – Power Supply and Smaller Resistor Board
• Power Output:
• 9V regulated DC for camera
• 6V unregulated DC
• CH1 for IR LEDs
• CH2 for visible center LED
• Resistor board provides current limiting for the IR LEDs on the flashlight. The center LED
contains a series resistor integrated into the wiring.
LED & Infrared Lighting System
The lighting system is separated into IR lighting and visible lighting. The IR system
consists of 12 IR LEDs powered from a 6V unregulated source (see Figure 4) and each LED
is current limited to about 100mA. The total system draw is about 700mA under the nominal
dim mode and 1.2A during bright mode. Both modes are activated via 3A relays on interface
board 1 which is interfaced to the Arduino system. When both relays are active, the bright
mode is active. When relay 2 is deactivated, extra resistance is added into the system
allowing the normal “dim” mode. The visible lighting system is a single, ultra bright SMD
LED controlled by the Arduino microcontroller on relay 3 through interface board 1. Due to
the power draw required, the IR and visible systems cannot be activated at the same
time; this restriction is enforced in software.
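The relay logic above amounts to a small truth table, sketched below. The enum and struct names are ours; on the robot the outputs would drive physical relay pins from the Arduino rather than fill a struct.

```cpp
// Software interlock for the lighting relays described above: one
// relay connects the IR string, a second shorts out the extra
// resistance for bright mode, and a third switches the visible LED.
enum LightMode { LIGHTS_OFF, IR_DIM, IR_BRIGHT, VISIBLE };

struct RelayState {
    bool irRelay;      // connects the IR LED string
    bool brightRelay;  // shorts the extra resistance (bright mode)
    bool visibleRelay; // center visible LED
};

// Map a requested mode to relay outputs. Because the IR and visible
// systems cannot run simultaneously, only one mode is ever active;
// the mutual exclusion is enforced here in software, not in hardware.
RelayState setLightMode(LightMode mode) {
    RelayState s = {false, false, false};
    switch (mode) {
        case IR_DIM:    s.irRelay = true;                        break;
        case IR_BRIGHT: s.irRelay = true; s.brightRelay = true;  break;
        case VISIBLE:   s.visibleRelay = true;                   break;
        case LIGHTS_OFF:
        default:                                                 break;
    }
    return s;
}
```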
Microcontroller
We chose to use an Arduino microcontroller to control all of the various subsystems of
the robot. We chose the Arduino platform because it provided us with a large set of open
source C++ libraries that allowed us to quickly interface with the various analog inputs,
digital outputs, serial ports, and pulse-width modulation ports. The Arduino worked very well
for us; it provided plenty of inputs and outputs to interface with the multiple subsystems
we incorporated into the robot. We interfaced four servo motors, two brushed DC motors, a
GPS module, a digital compass, a three axis accelerometer, three lighting systems and
wireless and wired serial communication. The biggest issue we ran into when using the
Arduino as a platform was its inability to run multiple threads. We got around this issue by
creating a system that gave each task the microcontroller had to perform a priority. The
priority corresponded to how often that task's function would be run. Each task's function
would perform a single iteration of the task and then return to the main loop. This
allowed the robot to appear, to a human observer, to be doing multiple things at once
(driving forward while panning the camera while moving the arm up or down, for example),
when in fact it was not.
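The priority scheme described above is a form of cooperative scheduling: each task runs one short, non-blocking iteration, and its priority sets how often it is called from the main loop. A minimal sketch follows; the names (Task, schedulerPass, the stand-in task functions) are ours, not the project's source.

```cpp
#include <cstddef>
#include <vector>

// One entry in the task table: a function that performs a single
// short iteration, and how often (every Nth loop pass) it runs.
struct Task {
    void (*run)();    // one non-blocking iteration of the task
    unsigned period;  // run every 'period' passes through the loop
};

unsigned long tick = 0;  // counts passes through the main loop

// One pass of the main loop: call every task whose period divides
// the current tick. On the robot this body would live inside the
// Arduino loop() function.
void schedulerPass(std::vector<Task>& tasks) {
    for (std::size_t i = 0; i < tasks.size(); ++i)
        if (tick % tasks[i].period == 0)
            tasks[i].run();
    ++tick;
}

// Stand-ins for real handlers (e.g. a motor-update step and a
// GPS-parse step), counting how often each one runs.
int driveCalls = 0, gpsCalls = 0;
void driveStep() { ++driveCalls; }
void gpsStep()   { ++gpsCalls; }
```

Because every task returns quickly, a high-priority task such as motor control runs on every pass while a slow sensor task runs only occasionally, which is what makes the robot appear to do several things at once.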
GPS
To be able to locate the robot, we used a Seeed Studio u-blox GPS Bee receiver that
receives the GPS satellite signals and outputs location data in the NMEA format. After
power-on, we configured the GPS Bee to output the location data every five seconds, as
opposed to every second, to minimize time spent parsing the GPS data. The GPS Bee
communicates through one of the Arduino’s four UART/serial ports and transmits the
location data as ASCII strings, which we parse to grab the robot’s latitude, longitude, and
elevation and save them into local variables. On every fifth request from the GUI, the
Arduino transmits the GPS data over the XBee link.
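As an example of the parsing step, a $GPGGA sentence can be reduced to latitude, longitude, and elevation as follows. This is a simplified sketch: it assumes the GGA field layout of NMEA 0183, skips checksum verification, and uses function names of our own choosing.

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Convert an NMEA "ddmm.mmmm" coordinate field to decimal degrees,
// negating for the southern/western hemispheres.
double nmeaToDegrees(const std::string& field, const std::string& hemi) {
    double raw = std::atof(field.c_str());
    int degrees = static_cast<int>(raw / 100);  // leading dd (or ddd)
    double minutes = raw - degrees * 100;       // remaining mm.mmmm
    double deg = degrees + minutes / 60.0;
    if (hemi == "S" || hemi == "W") deg = -deg;
    return deg;
}

// Pull latitude, longitude, and elevation (metres) out of a $GPGGA
// sentence. Field positions follow the NMEA 0183 GGA layout; the
// checksum is not verified in this sketch.
bool parseGGA(const std::string& sentence,
              double& lat, double& lon, double& elev) {
    std::vector<std::string> f;
    std::istringstream ss(sentence);
    std::string tok;
    while (std::getline(ss, tok, ',')) f.push_back(tok);
    if (f.size() < 10 || f[0] != "$GPGGA" || f[2].empty() || f[4].empty())
        return false;  // not a usable GGA fix
    lat  = nmeaToDegrees(f[2], f[3]);
    lon  = nmeaToDegrees(f[4], f[5]);
    elev = std::atof(f[9].c_str());
    return true;
}
```

On the robot, the parsed values would simply be saved into the local latitude, longitude, and elevation variables for later transmission to the GUI.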
Accelerometer
We originally purchased and interfaced a three-axis accelerometer to the project with
the intent of using two of its axes to calculate the tilt angle of the arm. Using trigonometry
and readings from two of the accelerometer's outputs, we were able to calculate the angle from the
horizontal at which the accelerometer was being tilted. We then mounted the accelerometer
onto the base of the arm, so that we could measure the tilt angle of the arm and transmit that
data to two places: the balancing servo of the pan/tilt system for the camera, and the GUI so
that the user would be able to know the position of the arm even if the user could not
physically see the robot. The original motor we were going to use at the base of the arm was a
brushed DC motor with a worm gear. The motor itself had plenty of torque to move the arm,
but the holding torque of the worm gear was not enough to support the weight of the arm
when we wanted to hold the arm in a vertical position. We changed our design to use a high
torque servo motor instead. The Arduino platform provided us with a library that allowed us
to write to a servo motor with an angle, in degrees, between 0 and 180, so we no longer
required the accelerometer to know the tilt angle of the arm. We decided to continue using the
accelerometer on the body of the robot instead, so that the user would be able to know how
steep a hill or set of stairs the robot is currently climbing.
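The trigonometry mentioned above reduces to a single atan2 call: with the sensor at rest, gravity projects onto the two axes in proportion to the tilt, so their ratio gives the angle from the horizontal. A brief sketch (the axis naming and function name are ours):

```cpp
#include <cmath>

// Tilt angle from two accelerometer axes, in degrees. 'a' is the
// reading along the axis that rotates with the tilt and 'b' the
// reading along the perpendicular axis; at rest, gravity splits
// between them so atan2(a, b) recovers the angle from horizontal.
double tiltDegrees(double a, double b) {
    const double PI = 3.14159265358979323846;
    return std::atan2(a, b) * 180.0 / PI;
}
```

For example, equal readings on both axes correspond to a 45 degree tilt, and a reading entirely on one axis corresponds to 0 or 90 degrees.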
Graphical User Interface (GUI)
One of our main initial goals was to be able to control the robot wirelessly from a
laptop. We achieved this by developing a multi-threaded GUI in C++ using an open-source
tool called “Qt” that supplies a large collection of well-written libraries for everything from
simple data structures to complex graphical “widgets.” The GUI displays the robot’s wireless
video stream, compass heading, Arduino and system battery levels, the robot’s GPS location
using an embedded Google Maps HTML web page, and a 2-D representation of the robot that
shows the angle of the robot’s motorized arm.
The GUI we developed makes heavy use of programming abstraction. For example,
the GUI can communicate to the robot via either an XBee transceiver or a serial port, using
“XbeeLink” and “SerialLink” objects. Both of these objects are subclasses of the “Link”
object and are used in the same way by the programmer, but they handle the actual
transmitting and receiving of data differently per the characteristics of their respective
channels. This allows the programmer to easily swap these objects in place of each other
without needing to change any implementation in other GUI objects.
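The abstraction described above can be sketched as a small class hierarchy. This is illustrative only: the method names and stubbed bodies are ours, not the project's exact interface.

```cpp
#include <string>

// Abstract base class: callers program against Link and never need
// to know which transport is underneath.
class Link {
public:
    virtual ~Link() {}
    virtual bool send(const std::string& packet) = 0;
    virtual std::string receive() = 0;
};

// Wired transport: would read and write a serial port.
class SerialLink : public Link {
public:
    bool send(const std::string& packet) override {
        (void)packet;  // stub: real code writes the bytes to the UART
        return true;
    }
    std::string receive() override { return ""; }  // stub
};

// Wireless transport: would frame the data and hand it to the XBee.
class XbeeLink : public Link {
public:
    bool send(const std::string& packet) override {
        (void)packet;  // stub: real code transmits over the radio
        return true;
    }
    std::string receive() override { return ""; }  // stub
};

// GUI code sees only the base class, so the two links can be
// swapped without touching any other GUI object.
bool transmitCommand(Link& link, const std::string& cmd) {
    return link.send(">" + cmd + "<");  // frame per the packet protocol
}
```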
Figure 5 Screenshot of GUI with Facial Recognition Enabled
Facial Recognition
We used a set of open source image processing libraries originally developed by Intel
to perform face detection and recognition. The GUI received a frame from the camera over
the network and gave that frame to the image processing code. The frame was scanned for
face data by comparing each set of pixels to a file containing a cascade of Haar Features. If a
set of pixels passed all of the tests in the Haar Feature cascade file, then that set of pixels was
labeled as being a face and a box was drawn around it. The code then took that face image
and compared it to a database of other faces. If the new face image compared closely to a face
in the database, then the name of the face in the database was pulled out from a list and
displayed in the GUI. Otherwise, if the new face did not compare closely to any face in the
database, it was labeled and displayed as an unknown face.
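The matching step above can be illustrated with a simple nearest-neighbor comparison against the face database. This is a sketch only: the Euclidean metric, the threshold, and all names here are ours, while the project's actual detection and matching used the Intel-originated open source image processing libraries.

```cpp
#include <cmath>
#include <cstddef>
#include <string>
#include <vector>

// One database entry: a name plus a feature vector extracted from
// that person's reference face image.
struct KnownFace {
    std::string name;
    std::vector<double> features;
};

// Euclidean distance between two equal-length feature vectors.
double featureDistance(const std::vector<double>& a,
                       const std::vector<double>& b) {
    double sum = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        sum += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(sum);
}

// Return the name of the closest database face within 'threshold',
// or "unknown" when no entry is close enough -- mirroring the
// known/unknown labeling described in the report.
std::string recognize(const std::vector<double>& face,
                      const std::vector<KnownFace>& db,
                      double threshold) {
    std::string best = "unknown";
    double bestDist = threshold;
    for (std::size_t i = 0; i < db.size(); ++i) {
        double d = featureDistance(face, db[i].features);
        if (d < bestDist) { bestDist = d; best = db[i].name; }
    }
    return best;
}
```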
Figure 6 Pattern Recognition of Facial Features
Possible Future Improvements
The final design was largely satisfactory, with each system performing well. Even so,
there are areas that could be improved. For data and video, longer-range encoded
transceivers could be used. The track tension could be increased with a stronger metal or
rubberized material. The infrared and LED lighting system could be modified to allow for a
broader range and higher luminosity. Rather than relying on NiMH batteries for power,
lithium-ion batteries could be used for greater power density. Another feature to consider
would be a non-lethal weapon system along with laser targeting.