Matthew Gough

Fire Ants
A fire fighting robot team by Matt Gough
EEL 4665 – IMDL
Fall 2015

Instructors: Dr. Arroyo & Dr. Schwartz
TAs: Andy Gray & Jake Easterling
Table of Contents
Fire Detection System (DroneBot)
    Inspiration
    Theory
    Experimental Data
    Future Work
Computer Vision (QueenBot)
    Inspiration
    Theory
    Experimental Data
    Future Work
Code
    Fire Detection Experimentation
    Computer Vision – Ball tracking
    Computer Vision – RGB Range Detection
Fire Detection System (DroneBot)
Inspiration

The fire detection system designed for the DroneBot was inspired by two main sources: previous fire-fighting robots in IMDL and Dr. Arroyo's lecture on Fuzzy Logic. Among the old robots, the most common sensor was the IR flame sensor, as opposed to a noncontact thermometer or a UV flame sensor, and the designs that seemed to work best were those that incorporated multiple sensors for robustness and redundancy. After learning about Fuzzy Logic, I decided this task would be the perfect application for it.
Theory

Fuzzy Logic is especially useful in this situation because of the wide dynamic range of the input data from the flame sensors. Each flame sensor returns an ADC value between 0 and 1023 (the lower the value, the closer the flame). Fuzzy Logic allows these values to be classified as Very Near (VN), Near (N), Far (F), Very Far (VF), or Nothing (NO) for both the left and right sensors.
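As a rough sketch of this fuzzification step, the snippet below maps a single ADC reading to one of the five categories. It is written in Python for readability even though the robot runs this logic on an Arduino, and the threshold values are hypothetical stand-ins for the calibrated cutoffs that would come from the data in the Experimental Data section.

# Illustrative fuzzification of one flame-sensor reading (0-1023).
# NOTE: the ADC cutoffs below are hypothetical, not measured values.
def fuzzify(adc_value):
    if adc_value < 100:
        return "VN"   # Very Near
    elif adc_value < 300:
        return "N"    # Near
    elif adc_value < 600:
        return "F"    # Far
    elif adc_value < 900:
        return "VF"   # Very Far
    return "NO"       # Nothing detected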
Based on a K-map-like table, shown below, the robot determines the appropriate action to take from the left and right sensor readings. For example, if the left flame sensor reads VN and the right flame sensor reads VF, it stands to reason that the fire is on the left side of the robot, so the robot should turn hard to the left in order to correctly orient itself. (A sketch of this lookup in code follows the key below.)
[Block diagram: the left and right flame sensor readings pass through a "fuzzification" stage that produces the motor output.]
Experimental Data

The graph below shows the average ADC values returned by the left and right flame sensors over approximately 10 trials. While testing, I noticed a significant change in the ADC range depending on the size of the flame and the amount of ambient IR light in the room. The maximum range of the flame sensors for a small candle flame is approximately two feet; beyond that, the flame is virtually indistinguishable from the ambient IR. In addition to the left and right flame sensors, a third flame sensor is positioned in the middle, facing directly forward. This sensor returns a digital value signaling when the flame is directly in front of the robot and within the desired range (~3 inches).
                     Right Sensor
    Left Sensor   VN    N     F     VF    NO
    VN            S     L     HL    HL    HL
    N             R     S     L     HL    HL
    F             HR    R     S     L     L
    VF            HR    HR    R     S     L
    NO            HR    HR    R     R     S

Key:
VN = Very Near, N = Near, F = Far, VF = Very Far, NO = Nothing
S = Straight, L = Left, HL = Hard Left, R = Right, HR = Hard Right
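The table above translates directly into a lookup structure. The following minimal sketch (again in Python for readability, reusing the hypothetical fuzzify() function from the Theory section) shows how the two fuzzified readings select a motor command.

# Action table: outer key = left sensor category, inner key = right.
# Entries follow the K-map-style table above.
ACTIONS = {
    "VN": {"VN": "S",  "N": "L",  "F": "HL", "VF": "HL", "NO": "HL"},
    "N":  {"VN": "R",  "N": "S",  "F": "L",  "VF": "HL", "NO": "HL"},
    "F":  {"VN": "HR", "N": "R",  "F": "S",  "VF": "L",  "NO": "L"},
    "VF": {"VN": "HR", "N": "HR", "F": "R",  "VF": "S",  "NO": "L"},
    "NO": {"VN": "HR", "N": "HR", "F": "R",  "VF": "R",  "NO": "S"},
}

def choose_action(left_adc, right_adc):
    # Fuzzify both readings, then look up the motor command.
    return ACTIONS[fuzzify(left_adc)][fuzzify(right_adc)]

# Example: left reads Very Near, right reads Very Far -> "HL" (hard left),
# matching the worked example in the Theory section.
print(choose_action(50, 850))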
Future Work

For this fire detection system to be truly considered a success, I would like to increase its detection range. There are two ways I could achieve this:

1. Use a larger flame source
   • A larger flame makes a significant impact on the possible detection range; the maximum range for a slightly larger candle is almost double that of a tea candle.
   • Tradeoff: a larger flame is harder to extinguish using a computer fan.
2. Improve the searching algorithm
   • A better search algorithm makes the maximum range less important and increases the intelligence of the robot. This would be the optimal solution, since this project is ultimately a proof of concept for a swarm of robots in which intelligent searching shared between multiple robots would be a key goal.
   • Tradeoff: complexity.

The final result will likely be a combination of the two improvements listed above.
[Figure: average ADC value (0–1200) versus distance (3–24 inches) for the left and right flame sensors.]
Computer Vision (QueenBot)
Inspiration

One of the largest problems I faced in designing two cooperative robots was the localization of one robot by the other. The simplest and most cost-effective method I came up with was to use computer vision on the QueenBot to detect an illuminated Ping-Pong ball mounted on the DroneBot. Since so many IMDL students in the past have had success using OpenCV on a Raspberry Pi (or similar microcomputer), I decided to implement it in my project as well.
Theory

The computer vision algorithm implemented thus far can be summarized by the following outline (a sketch of steps 5 and 6 in code follows the list):

1. Capture image
   • Take a single frame from the webcam.
2. Threshold
   • Apply RGB range filtering to eliminate everything that is not the color of the illuminated ball.
   • Apply blurring to reduce noise and smooth contours.
3. Find contours
   • Identify all remaining contours in the image.
4. Draw minimum enclosing circle
   • For each contour, draw the smallest circle that encloses the entire contour.
5. Compare A_contour with A_circle
   • By comparing the area of each contour with the area of its minimum enclosing circle, the algorithm eliminates any contour whose area is not within 10% of the circle's area. Since the ball is round, this rejects just about every other contour, leaving only the Ping-Pong ball.
6. Identify centroid
   • The coordinates of the centroid are passed to the Arduino over serial communication, where they are translated into motor commands that navigate the robot to the ball.
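Steps 5 and 6 are the heart of the filter, so here is a minimal standalone sketch of them. It assumes pyserial is installed; the port name /dev/ttyACM0, the 9600 baud rate, the "x,y" message format, and the report_ball() helper are all illustrative assumptions rather than the project's actual protocol.

import math
import serial
import cv2

# Open the serial link to the Arduino (port and baud rate are assumptions).
arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)

def report_ball(contour):
    # Step 5: accept the contour only if its area is within 10% of the
    # area of its minimum enclosing circle, i.e. it is round like the ball.
    cnt_area = cv2.contourArea(contour)
    (_, radius) = cv2.minEnclosingCircle(contour)
    circle_area = math.pi * radius * radius
    if circle_area == 0 or not (0.9 * circle_area <= cnt_area <= 1.1 * circle_area):
        return False

    # Step 6: compute the centroid from the image moments and send it to
    # the Arduino as a plain "x,y" text line over serial.
    M = cv2.moments(contour)
    cx = int(M["m10"] / M["m00"])
    cy = int(M["m01"] / M["m00"])
    arduino.write("{},{}\n".format(cx, cy).encode("ascii"))
    return True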
Experimental Data
The left image above shows the original frame captured from the PS3 Eye. The right image shows the result after the RGB thresholding has been applied. The main concern with this algorithm is its inability to filter out bright sources of light: in the right image, everything is eliminated except the ball and the bright white region at the top, which comes from a light bulb in the ceiling. Fluorescent bulbs and ambient light are not an issue; only bright, focused light sources cause this problem. However, by comparing the area of each contour with the area of its minimum enclosing circle, the algorithm is able to discern the ball from other, irregular contours in the image. The final result is shown below:
Future Work

To improve this algorithm, I need to improve the initial threshold image. The LEDs in the Ping-Pong ball appear to saturate the camera, making the ball virtually indistinguishable from other bright light sources. Another possible way to avoid false positives would be to add a unique pattern to the ball, such as a uniform stripe or cross; this would confirm that the contour being examined is in fact the unique object being searched for. As it stands, however, the algorithm is quite robust: so far I am able to accurately detect the illuminated ball at distances of up to 5-6 feet.
Code
Fire Detection Experimentation:

The following code was used when testing the flame sensors. Each sensor was aligned with a flame source, its ADC values were measured at various distances, and the values were printed to the serial monitor.

/******************************************************************************/
/********                         CONSTANTS                            ********/
/******************************************************************************/

/* Flame Sensor Wiring:
 *
 * Right:  A0 (Pin 14)
 * Left:   A1 (Pin 15)
 * Middle: A2 (Pin 16)
 */

// Flame Sensor Pins
const int flameSensorLeft = A1;
const int flameSensorRight = A0;
const int flameSensorMid = A2;

void setup() {
  // Initialize flame sensors
  pinMode(flameSensorLeft, INPUT);
  pinMode(flameSensorRight, INPUT);
  pinMode(flameSensorMid, INPUT);

  Serial.begin(9600);
}

void loop() {
  // Read the left sensor through its named pin constant; uncomment the
  // other two lines to test the right or middle sensor instead
  int fDisLeft = analogRead(flameSensorLeft);
  //int fDisRight = analogRead(flameSensorRight);
  //int fDisMid = analogRead(flameSensorMid);

  // Print the ADC value (0-1023) to the serial monitor every 500 ms
  Serial.println(fDisLeft);
  delay(500);
}
Computer Vision – Ball tracking

The following code is used to identify the Ping-Pong ball:

# Fireants Autonomous Robots: Ball tracking (Queenbot)
# Author: Matt Gough
# October 2015

# import necessary packages
# deque: list-like data structure for storing past (x, y) locations of the ball
from collections import deque
import numpy as np
import argparse
import cv2
import sys

# map xrange to range when running under Python 3
PY3 = sys.version_info[0] == 3
if PY3:
    xrange = range

# set up command line arguments for the contrail buffer
# (longer buffer = longer contrail)
ap = argparse.ArgumentParser()
ap.add_argument("-b", "--buffer", type=int, default=10,
                help="max buffer size")
args = vars(ap.parse_args())

# define upper and lower bounds of the ball color
# (use the range-detector script below to determine the bounds)
ballLower = (254, 255, 255)
#ballUpper = (255, 252, 229)
#ballLower = (225, 255, 223)
ballUpper = (255, 255, 255)

# initialize the list of tracked points (contrail)
pts = deque(maxlen=args["buffer"])

# grab a reference to the webcam
camera = cv2.VideoCapture(0)

# begin the ball tracking loop; continue until "q" is pressed
while True:
    # grab the current frame; "grabbed" indicates whether the
    # frame was read successfully (the frame is not resized)
    (grabbed, frame) = camera.read()
    if not grabbed:
        break

    # keep only pixels in the ball's color range, then erode and
    # dilate to remove small blobs of noise
    mask = cv2.inRange(frame, ballLower, ballUpper)
    mask = cv2.erode(mask, None, iterations=2)
    mask = cv2.dilate(mask, None, iterations=2)

    # blur the mask to make the contours more accurate
    blur = cv2.GaussianBlur(mask, (9, 9), 0)

    # find contours in the mask and initialize the current
    # (x, y) center of the ball
    cnts = cv2.findContours(blur.copy(), cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
    center = None

    # only proceed if at least one contour was found
    if len(cnts) > 0:
        for i in cnts:
            cntArea = cv2.contourArea(i)
            ((x, y), radius) = cv2.minEnclosingCircle(i)
            encArea = np.pi * radius * radius

            # keep only contours whose area is within 10% of the area
            # of their minimum enclosing circle (i.e., round contours)
            if encArea > 0 and encArea * 0.9 <= cntArea <= encArea * 1.1:
                M = cv2.moments(i)
                center = (int(M["m10"] / M["m00"]),
                          int(M["m01"] / M["m00"]))

                # draw the enclosing circle and centroid on the frame
                cv2.circle(frame, (int(x), int(y)), int(radius),
                           (0, 255, 255), 2)
                cv2.circle(frame, center, 5, (0, 0, 255), -1)

    # update the points queue
    pts.appendleft(center)

    # loop over the set of tracked points
    for i in xrange(1, len(pts)):
        # if either of the tracked points is None, ignore it
        if pts[i - 1] is None or pts[i] is None:
            continue

        # otherwise, compute the thickness of the line and
        # draw the connecting lines
        thickness = int(np.sqrt(args["buffer"] / float(i + 1)) * 2.5)
        cv2.line(frame, pts[i - 1], pts[i], (0, 0, 255), thickness)

    # show the frame on screen
    cv2.imshow("Frame", frame)
    key = cv2.waitKey(1) & 0xFF

    # if the 'q' key is pressed, stop the loop
    if key == ord("q"):
        break

# clean up the camera and close any open windows
camera.release()
cv2.destroyAllWindows()
Computer Vision – RGB Range Detection

In addition, the following code was used to determine the appropriate RGB range. This code is taken directly from Adrian Rosebrock at pyimagesearch.com.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

# USAGE: You need to specify a filter and "only one" image source
#
# (python) range-detector --filter RGB --image /path/to/image.png
# or
# (python) range-detector --filter HSV --webcam

import cv2
import argparse
from operator import xor


def callback(value):
    pass


def setup_trackbars(range_filter):
    cv2.namedWindow("Trackbars", 0)

    if range_filter == 'RGB':
        v1_min = 'R_MIN'
        v2_min = 'G_MIN'
        v3_min = 'B_MIN'
        v1_max = 'R_MAX'
        v2_max = 'G_MAX'
        v3_max = 'B_MAX'
    else:
        v1_min = 'H_MIN'
        v2_min = 'S_MIN'
        v3_min = 'V_MIN'
        v1_max = 'H_MAX'
        v2_max = 'S_MAX'
        v3_max = 'V_MAX'

    # min trackbars start at 0, max trackbars start at 255
    cv2.createTrackbar(v1_min, "Trackbars", 0, 255, callback)
    cv2.createTrackbar(v2_min, "Trackbars", 0, 255, callback)
    cv2.createTrackbar(v3_min, "Trackbars", 0, 255, callback)
    cv2.createTrackbar(v1_max, "Trackbars", 255, 255, callback)
    cv2.createTrackbar(v2_max, "Trackbars", 255, 255, callback)
    cv2.createTrackbar(v3_max, "Trackbars", 255, 255, callback)


def get_arguments():
    ap = argparse.ArgumentParser()
    ap.add_argument('-f', '--filter', required=True,
                    help='Range filter. RGB or HSV')
    ap.add_argument('-i', '--image', required=False,
                    help='Path to the image')
    ap.add_argument('-w', '--webcam', required=False,
                    help='Use webcam', action='store_true')
    args = vars(ap.parse_args())

    if not xor(bool(args['image']), bool(args['webcam'])):
        ap.error("Please specify only one image source")

    if not args['filter'].upper() in ['RGB', 'HSV']:
        ap.error("Please specify a correct filter.")

    return args


def get_trackbar_values(range_filter):
    if range_filter == 'RGB':
        v1_min = cv2.getTrackbarPos("R_MIN", "Trackbars")
        v2_min = cv2.getTrackbarPos("G_MIN", "Trackbars")
        v3_min = cv2.getTrackbarPos("B_MIN", "Trackbars")
        v1_max = cv2.getTrackbarPos("R_MAX", "Trackbars")
        v2_max = cv2.getTrackbarPos("G_MAX", "Trackbars")
        v3_max = cv2.getTrackbarPos("B_MAX", "Trackbars")
    else:
        v1_min = cv2.getTrackbarPos("H_MIN", "Trackbars")
        v2_min = cv2.getTrackbarPos("S_MIN", "Trackbars")
        v3_min = cv2.getTrackbarPos("V_MIN", "Trackbars")
        v1_max = cv2.getTrackbarPos("H_MAX", "Trackbars")
        v2_max = cv2.getTrackbarPos("S_MAX", "Trackbars")
        v3_max = cv2.getTrackbarPos("V_MAX", "Trackbars")

    return v1_min, v2_min, v3_min, v1_max, v2_max, v3_max


def main():
    args = get_arguments()
    range_filter = args['filter'].upper()

    if args['image']:
        image = cv2.imread(args['image'])

        if range_filter == 'RGB':
            frame_to_thresh = image.copy()
        else:
            frame_to_thresh = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
    else:
        camera = cv2.VideoCapture(0)

    setup_trackbars(range_filter)

    while True:
        if args['webcam']:
            ret, image = camera.read()
            if not ret:
                break

            if range_filter == 'RGB':
                frame_to_thresh = image.copy()
            else:
                frame_to_thresh = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

        v1_min, v2_min, v3_min, v1_max, v2_max, v3_max = get_trackbar_values(range_filter)

        # threshold the frame with the current trackbar values
        thresh = cv2.inRange(frame_to_thresh,
                             (v1_min, v2_min, v3_min),
                             (v1_max, v2_max, v3_max))

        cv2.imshow("Original", image)
        cv2.imshow("Thresh", thresh)

        # exit when the 'q' key is pressed
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break


if __name__ == '__main__':
    main()