
A Multi-Camera Framework for Raspberry Pi Low-Cost Video Surveillance Systems

Gregorio Cacciari, Thiago Gondin Paulo, Antonio C. Sobieranski, Aldo von Wangenheim

Image Processing and Computer Graphics Lab – LAPIX
National Institute for Digital Convergence – INCoD

Federal University of Santa Catarina – UFSC
Florianopolis, Brazil

Email: {cacciari, thiagopaulo, asobieranski}@incod.ufsc.br, [email protected]

Abstract—The development of surveillance systems has reached a fully digital stage as a consequence of technological advances in hardware and computer vision software. By using commodity hardware such as microcomputers and off-the-shelf web cameras, it is possible to create a foundation for computer vision surveillance applications. In this work, we present a scalable multi-camera framework composed of low-cost Raspberry Pi units, conventional USB cameras and mini-servo engines for enhanced object tracking in a monitored environment. Event modeling also takes place at a centralized intelligence node, providing statistics and querying for detected events. The obtained results show the effectiveness of the proposed platform as a promising solution for low-cost surveillance systems and intelligent monitoring.

Index Terms—Low-cost Surveillance System, Multi-Camera Framework, Event Modelling, Video Processing, Computer Vision.

I. INTRODUCTION

With the evolution of surveillance systems over the past decades, researchers and developers have dedicated plenty of time to creating tools that assist human operators in anomalous event detection [1]. Product availability, standardization of components, and system deployment and conception have played an intricate role in surveillance operations. As a consequence of production optimization, technological innovation and cost reduction, it became easier to spread a large number of monitoring endpoints over wider target areas. This broader coverage encompasses environments such as airports, urban stations, malls, parking lots and stadiums [1].

Newer generations of systems completed a migration from analog to digital components and installations [2]. This allowed systems to perform fully digital, bandwidth-optimized video streaming, efficient storage of compressed video data, and batch processing, as well as archiving for later data queries. It also acts as an enabler for surveillance system designs that span from the sensor level up to the presentation of mixed symbolic and visual information to operator staff [3]. The spread of technology among the general population nowadays allows for broad acceptance and understanding of video capture devices. The wide availability of retail hardware (e.g., portable web cameras) pushed customers toward widespread adoption of such devices in their daily communication over the Internet. This integration made such devices accessible enough to allow the development of products with differentiated services, turning them into common and cheap tools for video surveillance.

Concurrently, combined advancements in hardware and software spawned a range of practical applications of varied categories. Cameras began to offer higher resolutions, better frame rates, embedded automatic aspect adjustments and user-friendly management tools (local or web). Digital image processing algorithms and interpretation methods enabled practical hardware utilization in diverse projects related to automated intelligent image analysis [4].

In modern scenarios, where demand for security presents itself as a growing trend [1], computational techniques converged to leverage the value of visual information. Extracting knowledge from the image series of a video stream became essential in diverse scenarios, such as anomaly detection, entity counting, crowd and flow analysis, face recognition and intent prediction [5][6]. These methods output relevant information by capturing and computing features, characteristics or parameters.

When binding the aforementioned technical concepts together, it is possible to model such acquired knowledge into tangible, discrete events. Computer vision is adopted to automatically detect interesting objects in a video domain directly from dimensions such as color space, geometry or motion energy, thus generating valuable information. A set of captured events, as discrete units, serves as support for higher reasoning modules responsible for knowledge acquisition, decision support or real-time reaction systems. By adding further stages of processing over the acquired data, these can be seen as additional layers of intelligence, paving the way for broader research on such topics [7].

In the reviewed literature there are several approaches regarding surveillance architecture and component design. Among the many projects adopting off-the-shelf endpoint processors, we focused our analysis and review on those that chose the Raspberry Pi as a development platform. Vamsikrishna et al. [8] presented a Raspberry Pi Model B+ camera capable of notifying a user by SMS via a GPRS modem whenever there is any human interference in a monitored area. Menezes et al. [9] also made use of the Model B+, but adopted e-mail messaging as the alarm and notification system whenever an event occurs. Cocorullo et al. [10] used an unspecified Raspberry Pi model, but considered an endpoint data retention system by installing "long-term storage" for camera-captured frames on an SD card. Abaya et al. [11] adopted the Raspberry Pi Model B and developed data retention for local processing; notifications are handled via sound emissions and an e-mail system. Regarding software implementation aspects, SimpleCV, an open source framework for building computer vision applications based on OpenCV, was adopted by both works in [8][9], while OpenCV itself was the preferred choice in [10][11].

In terms of software and intelligent analysis, detecting objects in image sequences involves segmentation between foreground and background elements. For a background to be considered a static element, it should ideally be resilient to environment and ambient light changes. A standard method is discussed by Stauffer and Grimson [12], where it is stated that, after initialization, averaging images over time creates a background approximation similar to the current static scene. In a vehicle detection work using a Raspberry Pi, Suryatali and Dharmadhikari [13] used a Kalman filter algorithm to extract and maintain a background reference image, which was subsequently subtracted to obtain the desired segmentation, with results considered robust and efficient. Monitoring of public spaces has also been considered by Abas et al. [14] with a Raspberry Pi; in that work, foreground extraction is performed via a Mixture of Gaussians (MoG).

TABLE I: Comparison of the state-of-the-art approaches for Raspberry-based surveillance projects.

Feature                 Our Approach    [11]        [9]        [8]
Multi-camera setup      Yes             No          No         No
Raspberry board type    B               B           B+         B+
API                     OpenCV          OpenCV      SimpleCV   SimpleCV
Endpoint retention      Yes             Yes         Yes        No
Retention time          Until removed   Long-term   N/A        N/A
USB camera              Yes             Yes         Yes        Yes
Event modeling          Yes             No          No         No
Fault tolerance         Yes             No          No         No

Although there are several approaches for surveillance tasks, none of them adopts a distinct range of features that would compose a larger and extensible framework, as shown in Table I. To address these limitations, this paper presents a set of combined features for surveillance using similar hardware and software integration. Our work consists in deploying an array of distributed, non-overlapping cameras connected via a TCP/IP network to an intelligence node. The endpoints are capable of detecting motion and tracking objects locally by means of background subtraction and by estimating pixel changes in the scene. Any ordered sequence of frames containing tracked objects is transferred to a central node, which assembles them and creates a summarized file: a discrete event piece of video presented through a friendly interface. In addition, our cameras are based on low-cost hardware, consisting basically of a Raspberry Pi computer and a commodity USB camera. Optionally, we test mini servo engines, which provide support for partial PTZ controls, and endpoint storage, for resilient operation in case of network failure with a retention-and-flush feature. The obtained results show the effectiveness of the proposed platform as a promising solution for low-cost surveillance systems and intelligent monitoring.

Fig. 1: General diagram of physical topology and hardware architecture.

The remainder of this paper is organized as follows: Section II discusses our proposed approach, presenting details of the hardware architecture regarding setup, component choice and topology, as well as a software discussion covering the API and the implementation of background subtraction and object tracking. The results of our experimentation, with details of the testbed environment and of how our framework outputs data and models discrete events, are presented in Section III. Finally, conclusions and further discussion are presented in Section IV, where we give a general overview of our work and expose future work such as anomaly detection.

II. PROPOSED APPROACH

A. Hardware Architecture

Our experimental setup consists of an arrangement of three distinct units: (i) a Raspberry Pi Model B, with a 900 MHz quad-core ARM Cortex-A7 CPU, 1 GB of RAM, 4 USB ports, 40 general-purpose input/output (GPIO) pins, and a 16 GB micro SD card installed in the board's card slot; (ii) a TowerPro micro-servo engine, either model SG90 or MG90S, each with its own characteristics and performance levels. These micro-servo units are responsible for simulating partial PTZ features (pan and tilt) on the commodity cameras installed above them. The movement is controlled by signals sourced from the Raspberry's GPIO ports, yielding a 180-degree horizontal rotation span; and (iii) commodity webcams. One is a Hewlett-Packard (HP) model HD-4110 [15], capable of capturing 30 frames per second at resolutions up to 1280x720. The other is a Philips model SPC1030NC [16], with capture configured as 1280x1024@60fps. These cameras are attached to the mentioned servo engines and move according to the programmed response to detected objects. Both models have similar weights: 113 g (0.25 lb) for the HP and 103 g (0.22 lb) for the Philips. A general diagram of this setup can be seen in Figure 1.

Fig. 2: General diagram of Raspberry internal GPIO connections to mini-servo engines and camera.

In this work, the structural arrangement of devices follows a star topology, with all n endpoints connected directly to a central server, which we call the intelligence node. Connection is made via TCP/IP.
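The paper does not specify the wire protocol between endpoints and the intelligence node, so the following is only an illustrative sketch (in Python for brevity; the actual system is written in C++). It shows one endpoint pushing an event notification to the central node over TCP/IP, using a hypothetical JSON-lines message format with made-up field names:

```python
import json
import socket
import threading

def serve_once(host="127.0.0.1", port=0):
    """Minimal intelligence-node listener: accept one endpoint
    connection and decode a single event notification."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(1)
    received = {}

    def _accept():
        conn, _ = srv.accept()
        with conn:
            line = conn.makefile("rb").readline()
            received.update(json.loads(line))
        srv.close()

    worker = threading.Thread(target=_accept)
    worker.start()
    return srv.getsockname()[1], worker, received

def notify_event(port, event, host="127.0.0.1"):
    """Endpoint side: push one event record to the intelligence node."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall((json.dumps(event) + "\n").encode())

port, worker, inbox = serve_once()
notify_event(port, {"endpoint": "cam-03", "event_id": 17, "frames": 240})
worker.join()
print(inbox["endpoint"])  # cam-03
```

In a star topology each of the n endpoints would open such a connection to the single central server; the node never initiates connections to endpoints.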

B. Software Architecture

The Raspberry Pi units run Raspbian, a Linux distribution optimized for Raspberry Pi hardware. The libraries needed for image processing are provided by an OpenCV [17] installation, version 3.0.0, which allowed us to run our software written in C++. Some additional support software was used as well. ServoBlaster [18] is used as an interface for controlling one or more mini-servo engines through the Raspberry's GPIO ports. We also made use of a collection of libraries contained in the FFMPEG framework for video processing [19]. Streaming from the Raspberry endpoints to the intelligence node is performed with the MJPG-streamer framework [20]. The intelligence node runs Linux Ubuntu 10.04 LTS; however, our system was developed without any restrictions regarding operating system choice.
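ServoBlaster exposes a character device, /dev/servoblaster, that accepts commands such as `0=150`, where the value is the pulse width in steps of 10 µs (150 = 1.5 ms, roughly a servo's center position). A minimal Python sketch of how pan commands could be generated follows; the 0.5–2.5 ms pulse range and the linear angle mapping are our assumptions, not the paper's calibration, and would need tuning against the actual SG90/MG90S servos:

```python
def pan_command(servo, angle_deg, min_pulse=50, max_pulse=250):
    """Build a ServoBlaster command string for a pan angle.

    The pulse value is in 10 us steps; the 50-250 range (0.5-2.5 ms)
    is an assumed span covering 180 degrees and must be calibrated.
    """
    if not 0 <= angle_deg <= 180:
        raise ValueError("angle outside the 180-degree pan span")
    pulse = min_pulse + round((max_pulse - min_pulse) * angle_deg / 180)
    return f"{servo}={pulse}\n"

def pan_to(servo, angle_deg, device="/dev/servoblaster"):
    """Write the command to the ServoBlaster device node (on the Pi)."""
    with open(device, "w") as dev:
        dev.write(pan_command(servo, angle_deg))

print(pan_command(0, 90))  # 0=150 -> center position
```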

The deployed endpoints run two main core functions which serve as the base for the other pipeline sub-processes: tracking (directly related to background subtraction, panning and event modeling) and streaming (supporting the web view operation), as illustrated in Figure 1. As tracking and streaming are distinct and necessary functionalities, we gave each of them dedicated video device access. To provide this access while avoiding further complexity, such as breaking the low-cost model by adding auxiliary hardware, we made use of modprobe, the Linux utility that loads additional kernel modules into the operating system. Virtual video devices were added for the tracker and for the streamer, with the signal feed provided by an FFMPEG-based replication we implemented.
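The paper does not name the kernel module used for the virtual devices. Assuming the commonly used v4l2loopback module, the setup could look roughly like the following; device numbers, module parameters and FFMPEG flags are illustrative, not the paper's actual configuration:

```shell
# Create two virtual video devices: one for the tracker, one for
# the streamer (assumption: v4l2loopback provides them).
sudo modprobe v4l2loopback devices=2 video_nr=10,11

# Replicate the physical camera (/dev/video0) into both virtual
# devices with FFMPEG; pixel format and device paths may need
# adjusting to the camera's supported modes.
ffmpeg -f v4l2 -i /dev/video0 \
       -f v4l2 -pix_fmt yuv420p /dev/video10 \
       -f v4l2 -pix_fmt yuv420p /dev/video11
```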

We used the MJPG-streamer library for the streaming functionality, which allows the configuration of parameters such as frames per second and frame resolution. The generated video format is Motion-JPEG, a sequence in which each frame is independently compressed. Motion-JPEG is a lightweight format because it is not computationally expensive; due to its simplicity, it is also easily handled by any web browser. We developed a web-based frontend interface which allows easier visualization by an operations team. Three sections allow for (a) event visualization, (b) the amount of movement detected, presenting graphics and allowing independent event visualization, and (c) live camera streaming.
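An MJPG-streamer invocation of the kind described above could look as follows; input_uvc.so and output_http.so are MJPG-streamer's stock capture and HTTP plugins, while the device path, frame rate, resolution and port shown here are example values rather than the paper's settings:

```shell
# Serve Motion-JPEG over HTTP from the streamer's video device.
mjpg_streamer \
    -i "input_uvc.so -d /dev/video11 -r 640x480 -f 15" \
    -o "output_http.so -p 8080 -w /usr/share/mjpg-streamer/www"
```

A browser can then show the live feed directly, e.g. via the plugin's `?action=stream` URL, which is what makes Motion-JPEG convenient for a web frontend.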

Endpoint storage provides operating system space as well as a retention area in case of network failure. In default operation, a remotely mounted SSHFS volume is used for direct transfer to the intelligence node, where video is written under OpenCV. In case of failure, our software writes events to the local disk; the retained data becomes available on the local file system and is transferred once the network connection is reestablished.
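The retention-and-flush logic can be sketched as follows (Python for brevity; the real system writes video via OpenCV over SSHFS, and the function names here are hypothetical). A write to the remote mount that fails with an OS error falls back to a local retention directory, which is flushed when connectivity returns:

```python
import os
import shutil

def store_event(name, data, remote_dir, retention_dir):
    """Write an event file to the SSHFS-mounted remote directory,
    falling back to local retention when the mount is unreachable."""
    os.makedirs(retention_dir, exist_ok=True)
    try:
        with open(os.path.join(remote_dir, name), "wb") as out:
            out.write(data)
        return "remote"
    except OSError:  # network failure: mount missing or erroring
        with open(os.path.join(retention_dir, name), "wb") as out:
            out.write(data)
        return "retained"

def flush_retained(remote_dir, retention_dir):
    """Once connectivity returns, move retained events to the node."""
    moved = []
    for name in sorted(os.listdir(retention_dir)):
        shutil.move(os.path.join(retention_dir, name),
                    os.path.join(remote_dir, name))
        moved.append(name)
    return moved
```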

C. Event Modelling

An event consists of the entire time period during which continuous movement is detected by the cameras, and it only exists after a processing pipeline is triggered, as illustrated in Figure 3. This pipeline begins by detecting movement in targeted areas. Each endpoint is able to perform movement detection on one of the video signals sourced by the modprobe/FFMPEG set. After that, post-segmentation processes are executed, followed by tracking methods. Event modelling takes place during the entire flow, and the result is delivered to the surveillance operation as a video file stored in the intelligence node.

Fig. 3: Movement Detection, Tracking and Camera Repositioning Pipeline.

Motion detection begins with image segmentation, which consists of splitting background from foreground areas. It works by subtracting the current frame from the previous one; if the difference exceeds a specific threshold, the pixel is assumed to be part of a moving body, and is otherwise assumed to be background. This algorithm has a low computational cost and stands out for its implementation simplicity, making it light enough to run within the Raspberry Pi's hardware limitations.
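The per-pixel differencing described above can be sketched as follows (plain Python over grayscale grids for clarity; the actual implementation uses OpenCV in C++, and the threshold value here is illustrative):

```python
def foreground_mask(prev, curr, threshold=25):
    """Frame differencing: a pixel is foreground (1) when its absolute
    grayscale change exceeds the threshold, background (0) otherwise.
    Frames are row-major grids of grayscale intensities."""
    return [
        [1 if abs(c - p) > threshold else 0
         for p, c in zip(prev_row, curr_row)]
        for prev_row, curr_row in zip(prev, curr)
    ]

prev = [[10, 10, 10],
        [10, 10, 10]]
curr = [[10, 90, 10],
        [10, 95, 12]]
print(foreground_mask(prev, curr))
# [[0, 1, 0], [0, 1, 0]] -> only the large changes are foreground
```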

Such segmentation methods are generally subject to failure due to shadows cast by tree branches and leaves, or changes in environment lighting, creating a demand for a post-processing noise reduction and removal stage. Pixel anomalies, short illumination bursts and other small events are treated as irrelevant information. As a rule of thumb, brief one-frame changes are ignored: only changes persisting for 2 frames or more are considered events.
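The two-frame persistence rule can be expressed as a logical AND between consecutive foreground masks, so that a pixel counts as motion only when it is foreground in at least two frames in a row. A minimal sketch:

```python
def persistent_changes(masks):
    """Suppress one-frame noise: keep a pixel as motion only when it
    is foreground in two consecutive binary masks."""
    out = []
    for i in range(1, len(masks)):
        out.append([
            [a & b for a, b in zip(row_prev, row_curr)]
            for row_prev, row_curr in zip(masks[i - 1], masks[i])
        ])
    return out

masks = [
    [[0, 1]],   # one-frame illumination burst at pixel 1
    [[1, 0]],
    [[1, 0]],   # pixel 0 persists for two frames -> real motion
]
print(persistent_changes(masks))
# [[[0, 0]], [[1, 0]]] -> the burst is discarded, the persistent pixel kept
```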

Objects isolated by the aforementioned background/foreground segmentation need to be tracked. Object tracking is the process responsible for detecting object trajectories in the temporal dimension. This stage is performed using a region detection algorithm responsible for calculating the minimal rectangle that represents the object's region of interest. An object is considered the same if these regions of interest are close enough when comparing the current and previous frames. When a tracked object is leaving the frame area, our software drives camera panning via the GPIO ports on the Raspberry Pi, commanding the servo engines and making the camera center its view on the tracked object.
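These three steps, the minimal rectangle, the frame-to-frame identity check, and the pan decision, can be sketched as follows. The tolerance and edge margin are illustrative values, not the paper's parameters:

```python
def bounding_box(mask):
    """Minimal rectangle (x0, y0, x1, y1) enclosing the foreground
    pixels of a binary mask, or None when the mask is empty."""
    xs = [x for row in mask for x, v in enumerate(row) if v]
    ys = [y for y, row in enumerate(mask) if any(row)]
    if not xs:
        return None
    return (min(xs), min(ys), max(xs), max(ys))

def same_object(box_a, box_b, tolerance=20):
    """Boxes from consecutive frames are treated as the same object
    when their centers are close enough (illustrative tolerance)."""
    (ax0, ay0, ax1, ay1), (bx0, by0, bx1, by1) = box_a, box_b
    dx = abs((ax0 + ax1) - (bx0 + bx1)) / 2
    dy = abs((ay0 + ay1) - (by0 + by1)) / 2
    return dx <= tolerance and dy <= tolerance

def pan_direction(box, frame_width, margin=0.1):
    """Decide whether the servo should pan when the tracked box
    nears a horizontal frame edge (within `margin` of the width)."""
    x0, _, x1, _ = box
    if x0 < frame_width * margin:
        return "left"
    if x1 > frame_width * (1 - margin):
        return "right"
    return None
```

When `pan_direction` returns a side, the endpoint would issue the corresponding GPIO/servo command until the object is re-centered.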

Fig. 4: Camera behaviour and tracking over time.

The system stores videos on disk only when events are detected by endpoints. For our testing configuration, we set events to end after 30 seconds of inactivity, or at a time cap of 10 minutes if constant motion occurs, with a new event started shortly afterwards. This storage and retrieval strategy was implemented in order to save space and to speed up retrieval of the detected scenes. Events summarized in this fashion avoid large storage utilization.
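The event start/end rules can be modeled as a simple segmentation over motion timestamps; a sketch under the 30-second inactivity and 10-minute cap parameters stated above (timestamps in seconds, hypothetical input representation):

```python
IDLE_TIMEOUT = 30       # seconds of inactivity that close an event
TIME_CAP = 10 * 60      # hard cap; a new event starts afterwards

def segment_events(motion_times):
    """Group ascending motion timestamps into (start, end) events:
    an event ends after 30 s without motion, or is split at the
    10-minute cap when motion is continuous."""
    events = []
    start = last = None
    for t in motion_times:
        if start is None:
            start = last = t
        elif t - last > IDLE_TIMEOUT or t - start >= TIME_CAP:
            events.append((start, last))
            start = last = t
        else:
            last = t
    if start is not None:
        events.append((start, last))
    return events

print(segment_events([0, 10, 20, 100]))
# [(0, 20), (100, 100)] -> the 80 s gap closes the first event
```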

III. EXPERIMENTAL RESULTS

A. Environment sets

Our environment consists of four endpoints spread over a mixed indoor and outdoor area. For the test runs made for this work, each camera is placed in such a way that it captures, streams and detects events in somewhat crowded locations. The Raspberry Pi units are connected via a local network to the intelligence node, which provides an interface for the event and streaming features.

B. Obtained Results

As shown in Figure 4, the camera stands in its default centered position (frames a-b) and, as soon as the system detects movement, object tracking starts. Later in the same sequence, the human target moves to the left of the frame (frame c), thus leaving the optical capture boundaries. Tracking an object that crosses any frame boundary triggers our software to perform automatic repositioning by means of the GPIO ports and servo engines. The camera is panned accordingly, realigning its center position to the updated target position (frames d-f). As soon as the servo engine reaches its rotation limit (on this model, the angle range is 180 degrees), the camera halts panning but continues to capture the stream until the defined recording limits for inactivity or time cap are reached (frames g-h).

Streaming to the intelligence node is also shown in Figure 5: independently of any motion or event-modeling activity, it keeps showing operators the feeds of interest in a consolidated fashion. In the same way, mobile surveillance can be achieved by accessing the intelligence node webpage via smartphone, as shown in Figure 6.


Fig. 5: Intelligence node streaming interface.

Fig. 6: Access to the intelligence node via smartphone, for purposes of mobile surveillance.

Results gathered by the intelligence node are displayed as histograms, measured by the amount of energy detected in each event. As can be seen in Figure 7, the displayed histograms are split into hourly time slices, allowing the querying and visualization of recorded events.
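The hourly bucketing behind such histograms can be sketched as follows; the (timestamp, energy) event representation is a hypothetical simplification of the stored event data, not the paper's actual schema:

```python
from collections import Counter
from datetime import datetime

def hourly_histogram(events):
    """Aggregate per-event motion energy into hourly buckets for a
    daily histogram view. `events` holds (ISO timestamp, energy)
    pairs; the result maps hour-of-day to total energy."""
    buckets = Counter()
    for stamp, energy in events:
        buckets[datetime.fromisoformat(stamp).hour] += energy
    return dict(buckets)

events = [
    ("2016-05-04T09:15:00", 3.0),
    ("2016-05-04T09:40:00", 1.5),
    ("2016-05-04T14:05:00", 2.0),
]
print(hourly_histogram(events))
# {9: 4.5, 14: 2.0} -> two events fall in the 09:00 slice
```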

IV. CONCLUSION

Fig. 7: Daily histograms showing event amount, categorized by hour.

In this paper we presented a low-cost and extensible architecture using off-the-shelf components to develop an efficient monitoring system for video surveillance. Differently from the related work in the literature, our approach makes use of the Raspberry's GPIO ports to create additional features such as reactive control of a mini-servo engine, panning for a commodity camera, and flexible object tracking. Event modeling was also implemented, with motion itself marking an event's start, and rules for idle activity times and time caps marking its end. Detected events are displayed on our page with a graphical categorization over time. Our approach is kept low-cost by using only commodity and entry-level components. Scalability concerns are also covered: endpoints can be deployed to the extent that the network supports them. On the other hand, careful implementation of the algorithms is mandatory, due to the low processing capacity of the Raspberry Pi and the evident need for performance in computer vision applications. More tests are needed to address the ratio between camera weight and servo-engine torque capacity.

Integrating such computer vision applications with low-cost hardware brings challenges in an era of evident demand for the underlying technologies present in many projects. Pattern recognition, machine learning, artificial intelligence and distributed computing play a significant role in the increasingly ubiquitous context of smart cities [21], automatic traffic management [22], safety, and many other applications bringing diverse types of benefit to people.

The next steps of our proposed approach are to implement mechanisms to effectively extract information from the environment by using high-level computer vision algorithms. Features such as face detection and learning, tracking statistics in the temporal and spatial dimensions, and intelligent algorithms for special applications such as anomaly detection will be the targets of this research from now on.

ACKNOWLEDGMENT

We would like to acknowledge the support provided to this work by the Brazilian National Council for Scientific and Technological Development (CNPq). The authors would also like to thank the LAPIX team members at the Federal University of Santa Catarina, Brazil, for their valuable efforts on this project.

REFERENCES

[1] T. D. Räty. Survey on contemporary remote surveillance systems for public safety. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), 40(5):493–515, Sept 2010.


[2] M. Bramberger, A. Doblander, A. Maier, B. Rinner, and H. Schwabach. Distributed embedded smart cameras for surveillance applications. Computer, 39(2):68–75, Feb 2006.

[3] Special issue on video communications, processing, and understanding for third generation surveillance systems. Proceedings of the IEEE, 89(10):1355–1539, Oct 2001.

[4] M. Valera and S. A. Velastin. Intelligent distributed surveillance systems: a review. IEE Proceedings - Vision, Image and Signal Processing, 152(2):192–204, April 2005.

[5] M. Pei, Yunde Jia, and S. C. Zhu. Parsing video events with goal inference and intent prediction. In 2011 International Conference on Computer Vision, pages 487–494, Nov 2011.

[6] Mingtao Pei, Zhangzhang Si, Benjamin Z. Yao, and Song-Chun Zhu. Learning and parsing video events with goal and intent prediction. Computer Vision and Image Understanding, 117(10):1369–1383, 2013.

[7] Tiziana D'Orazio and Cataldo Guaragnella. A survey of automatic event detection in multi-camera third generation surveillance systems. IJPRAI, 29(1), 2015.

[8] P. Vamsikrishna, S. R. Hussain, N. Ramu, P. M. Rao, G. Rohan, and B. D. S. Teja. Advanced Raspberry Pi surveillance (ARS) system. In Communication Technologies (GCCT), 2015 Global Conference on, pages 860–862, April 2015.

[9] V. Menezes, V. Patchava, and M. S. D. Gupta. Surveillance and monitoring system using Raspberry Pi and SimpleCV. In Green Computing and Internet of Things (ICGCIoT), 2015 International Conference on, pages 1276–1278, Oct 2015.

[10] G. Cocorullo, P. Corsonello, F. Frustaci, L. Guachi, and S. Perri. Embedded surveillance system using background subtraction and Raspberry Pi. In 2015 AEIT International Annual Conference (AEIT), pages 1–5, Oct 2015.

[11] W. F. Abaya, J. Basa, M. Sy, A. C. Abad, and E. P. Dadios. Low cost smart security camera with night vision capability using Raspberry Pi and OpenCV. In Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM), 2014 International Conference on, pages 1–6, Nov 2014.

[12] C. Stauffer and W. E. L. Grimson. Adaptive background mixture models for real-time tracking. In Computer Vision and Pattern Recognition, 1999. IEEE Computer Society Conference on, volume 2, 1999.

[13] A. Suryatali and V. B. Dharmadhikari. Computer vision based vehicle detection for toll collection system using embedded Linux. In Circuit, Power and Computing Technologies (ICCPCT), 2015 International Conference on, pages 1–7, March 2015.

[14] K. Abas, C. Porto, and K. Obraczka. Wireless smart camera networks for the surveillance of public spaces. Computer, 47(5):37–44, May 2014.

[15] HP Development Company L.P. HP Webcam HD-4110 Datasheet.

[16] Philips Consumer Electronics B.V. SPC1030NC User Manual.

[17] OpenCV. http://www.opencv.org. Accessed: 2016-05-04.

[18] R. Hirst. ServoBlaster. https://github.com/richardghirst/, commit 96014c804d68677baf4fff25178f9b152c1f31e6, 2015.

[19] FFMPEG. http://ffmpeg.org. Accessed: 2016-05-04.

[20] MJPG-streamer. https://sourceforge.net/projects/mjpg-streamer/. Accessed: 2016-05-04.

[21] Angeliki Kylili and Paris A. Fokaides. European smart cities: The role of zero energy buildings. Sustainable Cities and Society, 15:86–95, 2015.

[22] Sei Ping Lau, Geoff V. Merrett, Alex S. Weddell, and Neil M. White. A traffic-aware street lighting scheme for smart cities using autonomous networked sensors. Computers & Electrical Engineering, 45:192–207, 2015.