OpenCV and Embedded Systems
Jordan Buford, Hui Xu, Yifan Zhu
Contents
● Intro
● Raspberry Pi Camera with OpenCV
● Concrete OpenCV examples
● Raspberry Pi to FPGA Connection
2
OpenCV Intro
● Open source computer vision and machine learning library
● Cross platform
  ○ Windows, Linux, macOS
  ○ Android, iOS
● Language support
  ○ C/C++, Python
● Install at https://opencv.org
● Documentation at https://docs.opencv.org/3.4.3/
3
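On a Raspberry Pi, one common way to get the Python bindings is the prebuilt wheel (an assumption; building from source per opencv.org is the alternative):

```shell
# install prebuilt OpenCV bindings for Python 3
pip3 install opencv-python
# sanity check: print the installed version
python3 -c "import cv2; print(cv2.__version__)"
```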
4
Example: Controlling a Drone
5
Drone → Controller → Pi Running OpenCV → Camera
Raspberry Pi with Camera
6
This is an example of the difference
between the NoIR camera and a normal camera.
The NoIR camera can be used as a
night-vision camera.
https://pimylifeup.com/raspberry-pi-camera-vs-noir-camera/
7
Raspberry Pi Software UI
Open up a terminal and execute the following command:
$ sudo raspi-config
This will bring up a screen that
looks like this:
8
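If you would rather script this step, raspi-config also has a non-interactive mode (here `0` means enable, following raspi-config's convention):

```shell
# enable the camera interface without the menu UI
sudo raspi-config nonint do_camera 0
sudo reboot
# after reboot, confirm the camera is detected
vcgencmd get_camera
```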
An easy example of how to take a photo using picamera

#!/usr/bin/python
import picamera
import time

camera = picamera.PiCamera()
time.sleep(2)  # camera warm-up time
camera.capture('test.jpg')
9
Once we can read from the camera...
● Now we can use OpenCV to do more analysis on our images or video
● For example
  ○ Tracking
  ○ Object detection
10
Example: Tracking

import numpy as np
import cv2 as cv

cap = cv.VideoCapture('slow.flv')
# take first frame of the video
ret, frame = cap.read()
# setup initial location of window
r, h, c, w = 250, 90, 400, 125  # simply hardcoded the values
track_window = (c, r, w, h)
# set up the ROI for tracking
roi = frame[r:r+h, c:c+w]
hsv_roi = cv.cvtColor(roi, cv.COLOR_BGR2HSV)
mask = cv.inRange(hsv_roi, np.array((0., 60., 32.)), np.array((180., 255., 255.)))
roi_hist = cv.calcHist([hsv_roi], [0], mask, [180], [0, 180])
cv.normalize(roi_hist, roi_hist, 0, 255, cv.NORM_MINMAX)
# setup the termination criteria: either 10 iterations or move by at least 1 pt
term_crit = (cv.TERM_CRITERIA_EPS | cv.TERM_CRITERIA_COUNT, 10, 1)
while True:
    ret, frame = cap.read()
    if ret:
        hsv = cv.cvtColor(frame, cv.COLOR_BGR2HSV)
        dst = cv.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
        # apply meanshift to get the new location
        ret, track_window = cv.meanShift(dst, track_window, term_crit)
        # draw the window on the frame
        x, y, w, h = track_window
        img2 = cv.rectangle(frame, (x, y), (x+w, y+h), 255, 2)
        cv.imshow('img2', img2)
        k = cv.waitKey(60) & 0xff
        if k == 27:  # Esc quits
            break
        else:
            cv.imwrite(chr(k) + ".jpg", img2)
    else:
        break
cv.destroyAllWindows()
cap.release()
11
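The meanShift call above works by repeatedly moving the search window to the centroid of the back-projection weights inside it. A toy numpy sketch of that idea (simplified illustration, not the OpenCV implementation):

```python
import numpy as np

def mean_shift_window(weights, window, max_iter=10, eps=1):
    """Shift an (x, y, w, h) window toward the centroid of `weights` inside it."""
    x, y, w, h = window
    for _ in range(max_iter):
        patch = weights[y:y+h, x:x+w]
        total = patch.sum()
        if total == 0:
            break
        ys, xs = np.indices(patch.shape)
        # centroid of the weights inside the current window
        cx = int((xs * patch).sum() / total)
        cy = int((ys * patch).sum() / total)
        dx, dy = cx - w // 2, cy - h // 2
        x, y = x + dx, y + dy
        if abs(dx) < eps and abs(dy) < eps:  # converged: moved less than eps
            break
    return (x, y, w, h)

# toy "back projection": a bright blob centered near row 40, column 30
weights = np.zeros((80, 80))
weights[38:43, 28:33] = 1.0
print(mean_shift_window(weights, (10, 20, 20, 20)))
```

The window slides until the blob's centroid coincides with the window center, which is exactly why a hardcoded initial window (as in the slide) is enough to lock onto the target.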
Example: Face Detection

import numpy as np
import cv2 as cv

# Load classifiers
face_cascade = cv.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv.CascadeClassifier('haarcascade_eye.xml')

# Read image
img = cv.imread('sachin.jpg')
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Detect
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x, y, w, h) in faces:
    cv.rectangle(img, (x, y), (x+w, y+h), (255, 0, 0), 2)
    roi_gray = gray[y:y+h, x:x+w]
    roi_color = img[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray)
    for (ex, ey, ew, eh) in eyes:
        cv.rectangle(roi_color, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)

cv.imshow('img', img)
cv.waitKey(0)
cv.destroyAllWindows()
12
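Haar-cascade detectors owe much of their speed to the integral image, which lets the sum of any pixel rectangle be computed from four lookups. A small numpy illustration of that trick (an aside on the mechanism, not the cv.CascadeClassifier internals):

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over both axes, padded with a zero row and column."""
    ii = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, x, y, w, h):
    """Sum of img[y:y+h, x:x+w] from four integral-image lookups."""
    return ii[y+h, x+w] - ii[y, x+w] - ii[y+h, x] + ii[y, x]

img = np.arange(16).reshape(4, 4)
print(rect_sum(integral_image(img), 1, 1, 2, 2))  # same as img[1:3, 1:3].sum()
```

Because every Haar feature is a difference of rectangle sums, each feature costs a handful of lookups regardless of its size.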
More Detection
● Through machine learning, we are able to train our own classifier
○ https://github.com/mrnugget/opencv-haar-classifier-training
13
Train Your Own Classifier
● Place positive images into a folder and create a list
● Place negative images into a folder and create a list
● Generate samples
● Run training script
14
find ./negative_images -iname "*.jpg" > negatives.txt
find ./positive_images -iname "*.jpg" > positives.txt

perl bin/createsamples.pl positives.txt negatives.txt samples 5000
python ./tools/mergevec.py -v samples/ -o samples.vec

opencv_traincascade -data lbp -vec samples.vec -bg negatives.txt \
  -numStages 20 -minHitRate 0.999 -maxFalseAlarmRate 0.5 \
  -numPos 4000 -numNeg 7000 -w 40 -h 40 -mode ALL \
  -precalcValBufSize 4096 -precalcIdxBufSize 4096 -featureType LBP
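The two `find` commands above just write the image paths into text files; if you prefer to stay in Python, a rough stdlib equivalent might look like this (folder names taken from the commands above):

```python
import os

def write_image_list(folder, out_file, ext=".jpg"):
    """List every image under `folder` (recursively) into `out_file`, one path per line."""
    with open(out_file, "w") as f:
        for root, _dirs, files in os.walk(folder):
            for name in sorted(files):
                if name.lower().endswith(ext):
                    f.write(os.path.join(root, name) + "\n")

write_image_list("./positive_images", "positives.txt")
write_image_list("./negative_images", "negatives.txt")
```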
What We Can Do...
● With our ability to detect and track objects, we are able to build a simple
drone-following program
15
What’s Next
● Generate control signals according to detection results
● Communicate with the FPGA
16
Drone → Controller → Pi Running OpenCV → Camera
Raspberry Pi to FPGA Communication
The Raspberry Pi supports three serial communication protocols via its GPIO headers by default:
● UART
● SPI
● I2C
Also available: USB and Ethernet
17
Raspberry Pi to FPGA Communication
By default, the SPI and I2C peripherals are not activated.
The UART peripherals are connected to the Bluetooth module (if present) and to the console output.
● Pin 15 - UART RX
● Pin 14 - UART TX
18
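As an aside on why the UART needs only those two pins: both ends agree on a baud rate and frame format in advance, so no clock line is required. A toy encoder for the common 8N1 frame (pure Python, illustration only):

```python
def uart_frame_8n1(byte):
    """Line levels for one 8N1 frame: start bit (0), 8 data bits LSB-first, stop bit (1)."""
    data = [(byte >> i) & 1 for i in range(8)]  # LSB is transmitted first
    return [0] + data + [1]

print(uart_frame_8n1(0x55))
```

The receiver samples the line at the agreed baud rate, resynchronizing on each start bit, which is why both sides must be configured identically.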
Bus Access via Software Libraries
Various open-source libraries are available to interface with these buses through GPIO, and include functions to configure and read
Example libraries:
● wiringPi
● spidev
● smbus
● pigpio
int fd, result;
unsigned char buffer[100];

fd = wiringPiSPISetup(CHANNEL, 500000);
cout << "Init result: " << fd << endl;

// clear display
buffer[0] = 0x76;
wiringPiSPIDataRW(CHANNEL, buffer, 1);
19
What can we do with data sent to/from Raspberry Pi?
Give instructions to the Raspberry Pi dependent on the overall system state
- e.g. enable and disable the camera, change the entities tracked, switch cameras
Continuously track environment parameters for OpenCV
- e.g. ambient light sensor data can prescale image brightness
Use OpenCV data to generate interrupts and, for example, control motors
Use camera data to encode a video file
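For instance, the ambient-light idea above can be as simple as applying a gain before the OpenCV pipeline; a rough numpy sketch (the lux values and target are made-up illustration numbers):

```python
import numpy as np

def prescale_brightness(frame, ambient_lux, target_lux=400.0):
    """Scale pixel values up in dim light, down in bright light, clamped to 8 bits."""
    gain = target_lux / max(ambient_lux, 1.0)
    return np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)

frame = np.full((2, 2), 100, dtype=np.uint8)
dim = prescale_brightness(frame, ambient_lux=200.0)     # gain 2.0 -> brighter
bright = prescale_brightness(frame, ambient_lux=800.0)  # gain 0.5 -> darker
```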
20
Why SPI for Communication?
● Full-duplex
  ○ Bidirectional simultaneous communication
  ○ No collisions
● Push-pull transistor design
  ○ Faster rise times
  ○ Increased data throughput
● No clock synchronization issues
  ○ Master drives clock
  ○ Easier implementation
21
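To make the full-duplex point concrete: on every clock cycle the master shifts one bit out on MOSI while the slave shifts one bit out on MISO, so a byte travels each way simultaneously. A toy bit-level model (pure Python, no hardware):

```python
def spi_exchange(master_byte, slave_byte):
    """Mode-0 full-duplex transfer: both shift registers rotate together, MSB first."""
    m, s = master_byte, slave_byte
    for _ in range(8):
        mosi = (m >> 7) & 1           # master drives MOSI with its MSB
        miso = (s >> 7) & 1           # slave drives MISO with its MSB
        m = ((m << 1) & 0xFF) | miso  # on each clock edge both sides shift
        s = ((s << 1) & 0xFF) | mosi
    return m, s                       # master now holds the slave's byte and vice versa

print(spi_exchange(0x76, 0xA5))
```

This is why wiringPiSPIDataRW above takes a single buffer: the bytes you send are overwritten in place by the bytes received during the same clocks.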
Future Design Possibilities
22
Use of Raspberry Pi GPU: Broadcom VideoCore IV
23
https://docs.broadcom.com/docs/12358545
FPGA Video Encoding
● Highly parallelizable
● Various standard codecs
24
Q&A
25