Multiple Moving Targets Surveillance Based on a Cooperative Network for Multi-UAV

Jingjing Gu, Tao Su, Qiuhong Wang, Xiaojiang Du, and Mohsen Guizani

AMATEUR DRONE SURVEILLANCE: APPLICATIONS, ARCHITECTURES, ENABLING TECHNOLOGIES, AND PUBLIC SAFETY ISSUES

Jingjing Gu, Tao Su, and Qiuhong Wang are with Nanjing University of Aeronautics and Astronautics; Xiaojiang Du is with Temple University; Mohsen Guizani is with the University of Idaho.

Digital Object Identifier: 10.1109/MCOM.2018.1700422

IEEE Communications Magazine • April 2018, p. 82 • 0163-6804/18/$25.00 © 2018 IEEE

Abstract

With the development of better links, enhanced coverage, comprehensive data resources, and network system stability, the cooperative network formed by wireless sensor networks and unmanned aerial vehicles is envisioned to provide immediate and long-term benefits in military and civilian fields. Previous works mainly focus on how to use UAVs to assist WSNs in sensing and data collection jobs, or on target localization with a single data source in surveillance systems, while the potential of multi-UAV sensor networks has not been fully explored. To this end, we propose a new cooperative network platform and system architecture for multi-UAV surveillance. First, we propose the design concepts of a multi-UAV cooperative resource scheduling and task assignment scheme based on the animal colony perception method. Second, we provide a moving small target recognition technique and a localization and tracking model using the fusion of multiple data sources. In addition, this article discusses the establishment of suitable machine learning algorithms that cope with the complexity of the monitoring area. Finally, we present experiments on recognizing and tracking multiple moving targets monitored by multiple UAVs and sensors.

Introduction

Unmanned aerial vehicles (UAVs), with the advantages of easy deployment and flexible usage, have been widely adopted in many military applications, including air reconnaissance, battlefield surveillance, target localization, tracking, damage assessment, and anti-terrorism arrests. They are also adopted in many civilian applications such as aerial photography, geophysical exploration, disaster monitoring, and coastal anti-smuggling. Recently, applications that use UAVs to collect sensor data have received intensive attention because of wireless sensor networks' (WSNs') advantages such as miniaturization of sensors, lower costs, availability of various types of data (temperature, humidity, angle, voltage, etc.), and fast and flexible deployment [1, 2]. A cooperative network formed by UAVs and sensors can provide support for a variety of studies. This kind of network can expand coverage to a border area, thus effectively broadening the temporal and spatial coverage of monitoring and detection.

In this research and these applications of UAVs, the surveillance of complex environments or targets is a significant application whose core technology is the recognition and tracking of multiple moving targets [3]. It is an interdisciplinary, complex issue that involves multi-sensor information fusion, image processing, artificial intelligence, and control technology, since devices carrying various types of payloads must be organized to carry out the task together. However, in practical applications there are even more challenges in recognizing and locating multiple moving targets. In this article, we mainly address the following.

1. In the field of image processing, there has been a lot of research on target recognition from which we can learn [4]. However, in UAV applications, one of the common problems is the recognition of small moving targets. Due to the long distance, small targets in images often lack concrete shape, size, texture, and other usable features, and it is difficult to detect a target using gray-level information alone. Moreover, if the video image background has heavy noise and clutter, small targets are usually buried in a complex background with a low signal-to-noise ratio (SNR).

2. In practical applications, especially localization and tracking, it is hard for algorithms to meet real-time requirements because UAVs and targets are likely to move at high speed. However, state-of-the-art methods based on machine learning (ML) [5, 6] can make the situation even worse by introducing more complex models to improve accuracy. Therefore, how to balance speed and accuracy is another unsolved challenge in our problem.

3. In real localization and tracking experiments, UAVs and targets moving at high speed lead to unstable communication links and packet losses, which hinder the collection of enough data, such as received signal strength (RSS) and time of arrival (TOA), or yield data with various kinds of bias.

As far as we know, there has been no effective way to solve these problems in UAV applications. Taking these uncertain factors and challenges into account, our main contributions include:

Recognition of a small moving target based on video images. We address the challenge of small targets lacking clear features, even under good lighting conditions, by detecting the moving targets through motion segmentation. The recognition problem is converted into an optimization problem of recovering a sparse matrix and a low-rank matrix, which represent the targets without background and the combination of neighboring frames, respectively.

A cooperative task assignment based on a swarm intelligence optimization algorithm. With full consideration of dynamic changes in a complex environment, multi-objective optimization of the UAVs' cooperative task assignment is performed with a swarm intelligence optimization (SIO) algorithm, which is inspired by the cooperative activities of living groups [7].

An efficient localization algorithm based on multiple data source fusion. To avoid the significant errors or incomplete information of a single data source, and the consequently compromised results, localization and tracking of moving targets are based on multiple data sources. We mainly propose a localization model based on ML and data fusion methods to provide refined localization. To address calculation efficiency, the localization processing is divided into two phases: offline and online. In the offline phase, we build a model that reflects the characteristics and inherent network information of the whole surveillance region. In the online phase, we locate the targets using the model established in the offline phase.

Experiments based on a real deployed multiple-UAV system. The results of experiments on recognizing, localizing, and tracking air and ground moving targets through the cooperative work of UAVs and the sensor network are shown.

Network Environment and System Structure

UAVs are desired to work in human-hostile or unreachable environments for tasks such as performing search and rescue missions at explosion and fire scenes, as well as earthquake and disaster sites, or carrying out reconnaissance and arrest missions in mountains and forests. A UAV cluster system consists of multiple UAVs collaborating to complete tasks and implement functions like task decomposition, cooperative scheduling, and segmented operations. It can coordinate multiple heterogeneous, low-cost UAVs to efficiently complete complex tasks. The data link networks of the UAV cluster have better fault tolerance and a certain self-healing ability to provide usability assurance [3]. The multi-UAV system can perform data analysis and fusion on the outputs of devices with different capabilities and performance, which can not only monitor a wider range, but also improve the accuracy and precision of target recognition, localization, and tracking.

Figure 1 is an envisioned multi-UAV platform diagram based on a sensor network. The moni-toring areas are supervised with ground deployed sensors, and UAVs equipped with different types of sensors, optical cameras, and infrared cameras for a multi-source dataset and distributed decision making [8]. The UAVs form a cluster with flying ad hoc networks (FANETs). The communication of UAV-UAV, UAV-sensor, UAV-ground station, and UAV-command center can be achieved by data links with short-range wireless communica-tion technology (e.g., Bluetooth and Wi-Fi) [3] to perceive the real-time surrounding environment and collect data. The collected data includes not only the basic sensed information from the ground deployed sensors (e.g., signal strength, time, temperature, humidity, and wind speed direction), but also the data collected by UAVs (e.g., images, infrared data, radar data, and laser imaging data). These data are sent to a cloud data center through a suitable wireless network tech-nology, like a fourth generation (4G)-LTE or 5G

Figure 1. Multi-UAV platform based on the sensor network.

Sensor

Scenarios

Anti-terrorism

Pulse interval τ (μs)

(a)

32

CPM

G nu

mbe

r 24

16

8

0

32

24

16

8

0

Unmanned aerialvehicle (UAV)

Raw data

Remote sensing data Signal

Collection

Collaborativescheduling

Processing

Target recognition Localization

Application

Tracking

Multi-source data fusion

Max entangled

Normalized PL1.00.7

0.9

(c)

CPM

G nu

mbe

r

0.8 1.0

τ=200 nsτ=456 ns

1Pulse interval τ (μs)

1.0

0.9

0.8

2 30

0.4

(b)

0.6 0.8 1.00.2

CPMG-6CPMG-12

Servers

Port explosion

PL (a

.u.)

Page 3: Multiple Moving Targets Surveillance Based on a ...static.tongtianta.site/paper_pdf/4e1daeea-9430-11e9-bfa7-00163e08bb86.pdfple-UAV system. The results of the experiments of recognizing,

IEEE Communications Magazine • April 201884

network [9]. The collected data can be processed directly on the relevant UAVs or transmitted to the ground station and the server of a cloud data center for processing when energy resources are insufficient or computation consumption is large. The processed results could be used for tasks like resource scheduling, collaborative path planning, cooperative task planning, and target recognition, localization, and tracking, and form a unified plat-form for decision making.

The system architecture of a multi-UAV surveillance platform is shown in Fig. 2. The architecture can be divided into the service layer, the support middleware layer, and the physical layer.

1. The physical layer mainly involves three sublayers: the cloud data center equipment, the Internet of Things (IoT), and the infrastructure. The cloud data center equipment is mainly used to deal with the collected data, and includes relevant network equipment such as the cluster server, network interconnection equipment, and storage devices. The IoT is mainly used to collect data and provide network connections between the UAVs/sensors and the cloud data centers, including facilities like access units, gateways, routers, sink nodes, and mobile terminals [10, 11]. The UAV platform is in the IoT sublayer and mainly includes the data sensing and acquisition subsystem, control subsystem, power subsystem, and communication subsystem. In the data sensing and acquisition subsystem, information on a monitored area is sensed and collected through various airborne devices, like an inertial navigator, an optical camera, an infrared camera, airborne radar, and a flight control sensor. The control subsystem usually includes flight control, flight status monitoring, take-off and landing control, and equipment management and control. The power subsystem is designed to satisfy the power requirements of UAVs and the network. The communication subsystem includes communication between UAVs and other aircraft or between UAVs and ground systems. The IoT sublayer also contains the sensor subsystem, which refers to a variety of sensors deployed in the monitoring region, including temperature sensors, RSS transmitters and receivers, infrared sensors, laser sensors, and so on. The infrastructure sublayer is the basis of the realization of the overall architecture, including base stations, satellites, ground-based radars, ground stations, and related staff.

2. The support middleware layer provides the interface between the service layer and the physical layer. It mainly provides all types of service support related to data processing, including data management, temporal and spatial alignment, feature extraction, real-time data processing, task assignment, data association, data fusion, and data storage.

3. The top-level service layer is mainly used to provide a variety of services, including target discovery, recognition, localization, tracking, assisted decision making, unified situation generation, situational consistency assessment, and map cloud services.

Multiple Moving Targets Surveillance

Our surveillance system, as shown in Fig. 3, includes recognizing targets, establishing a localization model through multi-source data, and coordinating multiple UAVs to determine which UAV is assigned to perform the localization and tracking task. Moreover, we implement our surveillance module based on ML algorithms, which can optimize performance by using example data or past experience. The detailed processes are discussed in the following subsections.

Moving Target Recognition

Moving target recognition based on UAVs in complex environments has always been a hot topic. A UAV equipped with a camera takes aerial

Figure 2. The system architecture of the multi-UAV cooperative target surveillance platform. [Diagram residue omitted; the figure depicts the service layer (target discovery, recognition, localization, tracking, situation prediction, unified situation generation, situational consistency assessment, map cloud services), the support middleware layer (data management, temporal and spatial alignment, conflict resolution, feature extraction, resource scheduling, task assignment, data fusion, data storage), and the physical layer (cloud data center equipment such as the cluster server, network interconnection equipment, and storage devices; IoT facilities including access units, gateways, routers, sink nodes, mobile terminals, the UAV platform subsystems, and the sensor subsystem; and infrastructure such as base stations, satellites, ground-based radars, ground stations, and related staff).]

video images in the mission execution area and transmits real-time data to the ground station. After receiving the image data, the general recognition processes are:
1. Image preprocessing: Preliminarily screen images to remove low-correlation or low-quality images.
2. Motion segmentation: Split moving objects in images from the background [5].
3. Feature extraction: Extract features of the objects and background.
4. Feature matching: Match features against the stored feature templates to identify the same feature in multiple images.
5. Target recognition: Perform target recognition and confirm the targets that need to be monitored or searched.
It is generally known that in the field of UAV applications, relevant target recognition research mainly includes the following aspects [12].

Motion Segmentation: Effective segmentation of the motion area is important for feature extraction, feature expression and matching, and final recognition, since the subsequent processes mainly take into account the pixels in the image that correspond to the motion area. Typical methods are background subtraction (e.g., mixture of Gaussians and approximate median), optical flow, frame differencing, and so on.

Feature Extraction: The selected features should not only represent the images, but also distinguish different categories of objects. Typical methods include shape-based, motion-based, texture-based, histogram of oriented gradients (HOG)-based, algebraic, and geometric methods.

Feature Matching: Most of these works transform the feature matching problem into a pattern classification problem using supervised ML methods, including support vector machines (SVM), AdaBoost, k-nearest neighbor (k-NN), conditional random fields (CRF), sparse representation classification (SRC), artificial neural networks (ANN), and so on.
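As an illustration of one classical motion-segmentation method named above, the following is a minimal numpy sketch of approximate-median background subtraction: the background estimate drifts one gray level toward each new frame, and pixels far from it are flagged as moving. The threshold and frame sizes are illustrative assumptions, not values from the article.

```python
import numpy as np

def approx_median_step(bg, frame, thresh=25):
    """One update of the approximate-median background model.

    bg, frame: uint8 gray images of equal shape.
    Returns the updated background and a boolean moving-pixel mask.
    """
    diff = frame.astype(int) - bg.astype(int)
    fg = np.abs(diff) > thresh            # pixels far from the background model
    bg = bg.astype(int) + np.sign(diff)   # drift one gray level toward the frame
    return bg.astype(frame.dtype), fg

# Toy usage: a static 4x4 scene in which a bright 2x2 "target" appears.
bg = np.full((4, 4), 100, np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200
bg, fg = approx_median_step(bg, frame)
# fg is True exactly on the 2x2 target region
```

Because the background adapts only one level per frame, slow illumination changes are absorbed while fast-moving objects stay segmented, which is why the method is cheap enough for onboard use.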

Although the above methods have some advantages, like mature algorithms and easy implementation, there are still challenges when applying them to real UAV applications. Because UAVs work in a rapid flight state, collected images are vulnerable to environmental changes, such as light and shade changes, target background changes, and the disturbance of targets with similar shapes, all of which disturb recognition. In addition, in UAV aerial videos or images, targets tend to be small targets that occupy only a small portion of the pixels of the image. These small targets are usually moving, and the background images probably change dynamically as well. Also, there may be large noise in these backgrounds. These factors lead to a small SNR and hinder the extraction and recognition of small targets.

Herein, we aim to solve this problem with the following processes.

Step 1: We first preprocess the distorted original video images with video stabilization technology [5]. Then we obtain the image sequences of the video frame by frame. If a frame is the first frame, we convert it into the initial background; if not, we convert it into a gray image.

Step 2: We extract the moving target from the image sequences (by comparing the current frame with the previous one), and form the target image and background image. We then convert the background image into a background matrix B and the target image into a target matrix T. If the size of each frame in the video is m × n, where m is the frame height and n is the frame width (in our experiment, m = 1080, n = 1920), we obtain B = {bij}m×n and T = {tij}m×n. Here, bij and tij are the pixel values of the jth (j = 1, …, n) column and the ith (i = 1, …, m) row in the background image and target image, respectively. In particular, in the matrix T, if a pixel is a background pixel, tij = 0. It is not difficult to see that T is a sparse matrix when the target is relatively small with respect to the whole image.

Step 3: If the video contains N frames (in our experiment, we do a motion segmentation every 300 frames, i.e., N = 300), we can stretch each

Figure 3. Framework of multi-target surveillance. [Diagram residue omitted; the framework spans data preprocessing (image distortion correction, data compensation) of images, GPS, received signal strengths, and times of arrival; target recognition (target segmentation, feature extraction); cooperative task assignment; and target localization (multi-source data fusion, weight computation, model establishment, modeling and solving).]

frame of the video into a vector sk ∈ RD (k = 1, …, N), where D = m × n and sk consists of all pixels scanned from left to right and from top to bottom in the kth frame. We can get a comprehensive image matrix S = [s1, …, sk, …, sN] ∈ RD×N by combining the vectors of all frames. In the same way, we can get a combined background matrix B̂ = [b̂1, …, b̂N] ∈ RD×N and a combined target matrix T̂ = [t̂1, …, t̂N] ∈ RD×N. B̂ is a low-rank matrix because there are few changes in the background between two continuous frames.

Step 4: We extract the target by recovering the sparse and low-rank matrices, more specifically, by solving the optimization problem

min over B̂, T̂ of ‖B̂‖* + λ‖T̂‖1, subject to S = B̂ + T̂.

Here, ‖·‖* denotes the nuclear norm of a matrix (i.e., the sum of its singular values), ‖·‖1 denotes the 1-norm of a matrix, and λ is a positive weighting parameter. We use the augmented Lagrange multiplier (ALM) [5] method to solve this problem.
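The Step 4 problem is the standard robust principal component analysis (RPCA) decomposition. The sketch below solves it with the closely related alternating proximal (inexact ALM/ADMM) iteration: soft-thresholding for the l1 term and singular value thresholding for the nuclear-norm term. The parameter defaults (λ = 1/√max(m, n), the starting penalty, and its growth factor) are common RPCA choices, not the paper's settings.

```python
import numpy as np

def shrink(X, tau):
    """Soft-thresholding: proximal operator of the l1-norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * shrink(s, tau)) @ Vt

def rpca_alm(S, lam=None, iters=300, tol=1e-7):
    """Recover low-rank B (background) and sparse T (targets), S = B + T."""
    m, n = S.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = 0.25 * m * n / (np.abs(S).sum() + 1e-12)  # common starting penalty
    Y = np.zeros_like(S)                           # Lagrange multiplier
    T = np.zeros_like(S)
    for _ in range(iters):
        B = svt(S - T + Y / mu, 1.0 / mu)          # nuclear-norm step
        T = shrink(S - B + Y / mu, lam / mu)       # l1 step
        R = S - B - T                              # constraint residual
        Y = Y + mu * R
        mu = min(mu * 1.05, 1e7)                   # gradually enforce S = B + T
        if np.linalg.norm(R) <= tol * np.linalg.norm(S):
            break
    return B, T

# Toy usage: a rank-1 "background" with one bright "target" pixel.
S = np.outer(np.ones(5), np.arange(5.0))
S[2, 2] += 10.0
B, T = rpca_alm(S)
```

On the toy input, the smooth rank-1 structure lands in B while the isolated spike is captured by T, mirroring how the combined background matrix absorbs the static scene and the sparse matrix isolates the small moving target.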

Step 5: We solve the feature matching problem through SRC [12]. It has the advantages of low complexity and strong model generalization ability because of the unbiased estimation of the generalization error.

Multi-UAV Cooperative Tasks

After the targets are recognized, the next major task is to localize and track them in the surveillance system. Nowadays, some UAVs have local data storage, computing, and processing capabilities, which enable them to carry out tasks onboard. Thus, we need a more applicable approach in which the station chooses and assigns one specific UAV to perform localization and tracking according to the characteristics of each UAV and the targets [13].

The task assignment should not only satisfy the required constraints and scheduling objectives, but also provide a cooperative task assignment scheme. A good cooperative scheduling scheme can shorten working hours, avoid redundancy and conflict, and reduce energy consumption, as well as improve the utilization of resources and the success rate of cooperative task implementation.

After scheduling the cooperative work, UAVs not only need to know "what tasks" they should perform, but also how to "perform the tasks." Such collaborative resource scheduling of multiple UAVs can be regarded as an optimization problem. We can establish the optimization model based on data collected by both sensor nodes and UAVs, such as distances, speeds, angles, and ranges.

To solve this optimization problem, we adopt swarm intelligence optimization (SIO) algorithms, which have been extensively studied in recent years [7]. Swarm intelligence is derived from the macro-level collaborative intelligent behavior of a gregarious colony. It simulates the functions and behaviors of a biological system that is distributed, decentralized, and self-organized, and that acts autonomously while searching for targets and relaying the information to all swarm members. Due to these characteristics, SIO has been applied to autonomous UAV control. Typical SIO algorithms applicable to UAVs are ant colony optimization (ACO), particle swarm optimization (PSO), artificial fish swarm optimization (AFSO), artificial bee colony (ABC), the genetic algorithm (GA), pigeon-inspired optimization (PIO), and so on [7]. Each of them has its own advantages in different network scenarios. For instance, ACO, AFSO, and PIO have stronger robustness; PSO, ABC, and PIO have faster convergence speed; and GA is applicable to optimization problems with multi-peak functions [7].

In our work, we prefer PIO to build the optimization model; PIO is inspired by the natural homing behavior of pigeons [13]. In the model, virtual pigeons are used to simulate the process of exploring the optimal solution path. The positions and speeds of the pigeons are initialized according to the landmark operator and the map and compass operator. In the multidimensional search space, the position and speed of each pigeon are updated in each iteration. Based on the PIO algorithm, this article calculates the global optimal solution and solves the collaborative optimization problem of task assignment. Taking battery energy constraints into account, the goal of collaborative scheduling is to assign different targets and resources to suitable UAVs to perform the entire task.
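A minimal sketch of the PIO iteration described above follows, with a map-and-compass phase and then a landmark phase. The fitness function, population size, decay factor R, bounds, and iteration counts are illustrative assumptions rather than the authors' task-assignment model.

```python
import numpy as np

def pio(fitness, dim, n=30, t1=60, t2=20, R=0.3, lo=-5.0, hi=5.0, seed=0):
    """Pigeon-inspired optimization (minimization sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros((n, dim))
    best = min(X, key=fitness).copy()
    for t in range(1, t1 + 1):                     # map-and-compass operator
        V = V * np.exp(-R * t) + rng.random((n, dim)) * (best - X)
        X = np.clip(X + V, lo, hi)
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand.copy()
    for _ in range(t2):                            # landmark operator
        X = np.array(sorted(X, key=fitness))[: max(2, len(X) // 2)]
        w = np.array([1.0 / (fitness(x) + 1e-12) for x in X])
        center = (X * w[:, None]).sum(0) / w.sum() # fitness-weighted landmark
        X = np.clip(X + rng.random(X.shape) * (center - X), lo, hi)
        cand = min(X, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand.copy()
    return best

# Toy usage: find the point minimizing the summed squared distance to
# three "target" positions; the optimum is their centroid (1, 1).
targets = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 3.0]])
f = lambda x: float(((x - targets) ** 2).sum())
sol = pio(f, dim=2)
```

In the real task-assignment problem, the fitness would encode distances, speeds, angles, battery-energy constraints, and assignment costs rather than this toy distance objective.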

Target Localization and Tracking Based on Multiple Data Sources

ML-based methods can take advantage of data's inherent information, extract its topological structure, describe nonlinear and noisy patterns, and build a mapping between data and environment. Consequently, during the last decade many published works have focused on localization and tracking with ML algorithms in the wireless network field [6]. However, the more accurately a model reflects the characteristics of the data and the environment, the more prior knowledge is usually required. Furthermore, applying ML-based methods in UAV applications imposes strict requirements on calculation speed and energy consumption. Therefore, in this article we prefer an effective and feasible localization method with two phases:
• Model-estimating phase: The localization model is built from as much collected data and experience information as possible.
• Online localization phase: The locations of targets are estimated using the learned model.
In the first phase, an ML algorithm matched with the data distribution is selected and improved so that it can be applied in the current scenario for better localization performance and stability. Because it digs up and reflects both the topological structure and the characteristics of the whole network region, the model can compensate for the impact of environmental conditions and has strong robustness. In this phase, a mapping between the positions of targets and the acquired data is also built. Thus, in the second phase, we can estimate the position of a target from its related data through the mapping. If the mapping is designed appropriately, it requires only a few linear calculation steps, whose cost is affordable for UAVs, so it can be used for real-time localization and tracking in the UAV network.
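A minimal sketch of this two-phase idea, using a fingerprint-style k-nearest-neighbor mapping as a stand-in for the learned model (the article does not fix a specific algorithm); the log-distance path-loss model, anchor layout, and noise level are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Phase 1 (offline): learn a mapping from sensor readings to positions.
# The "readings" are hypothetical RSSI values from 4 anchors, simulated
# with a log-distance path-loss model.
anchors = np.array([[0, 0], [100, 0], [0, 100], [100, 100]], float)

def rssi(p):
    d = np.linalg.norm(anchors - p, axis=1)
    return -40 - 20 * np.log10(np.maximum(d, 1.0))  # dBm

train_pos = rng.uniform(0, 100, (500, 2))
train_rssi = np.array([rssi(p) for p in train_pos])

# Phase 2 (online): estimate a target position with a few linear steps --
# a k-nearest-neighbor lookup in fingerprint space, cheap enough to run
# onboard a UAV.
def locate(measured, k=5):
    idx = np.argsort(np.linalg.norm(train_rssi - measured, axis=1))[:k]
    return train_pos[idx].mean(axis=0)

true = np.array([37.0, 62.0])
est = locate(rssi(true) + rng.normal(0, 0.5, 4))
print("estimate:", est, "error (m):", np.linalg.norm(est - true))
```

The online step involves only distance computations and an average, which matches the article's requirement that per-query cost stay affordable for UAV hardware.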

Nowadays, some UAVs have local data storage, computing, and processing capabilities, which enable them to carry out tasks onboard. Thus, we need a more applicable approach in which the station chooses and assigns one specific UAV to perform localization and tracking according to the characteristics of each UAV and the targets.



In the multi-UAV spatial cooperative network based on the sensor network, the data sources are diverse. This is not only because the collected information comes from the distributed multi-UAV system, but also because the types of sensors vary. For example, we use GPS to measure the position and velocity of each UAV, gyroscopes and magnetometers to estimate its attitude, and accelerometers to measure its acceleration. The equipment loaded on the UAVs can collect different types of data, such as images, GPS, infrared, and laser signals. It is a great challenge to effectively perform data association, temporal and spatial alignment, and data fusion so that more accurate target localization and tracking results can be achieved.

In this article, we use the transmitted image information, signal data, time information, the GPS and inertial sensors carried by UAVs, and a laser range finder to comprehensively perform effective target localization. Figure 3 shows how the model is built from the different data sources. The detailed processes are as follows.

Images and GPS: We analyze the images, measure the focal length, and use it to calibrate the distorted image. After analyzing the calibrated image, we obtain the coarse-grained coordinate position of the target. Then we use the GPS data and inertial sensors carried on the UAVs to estimate the real-time altitude of each UAV [14], and use the laser range finder to measure the distance between the UAVs and the target.
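The image/GPS geometry can be illustrated with a simplified nadir-camera (downward-looking) projection: a pixel offset from the image center scales by altitude over focal length to a ground offset, which is added to the UAV's position. The focal length, altitude, and pixel values below are hypothetical:

```python
import numpy as np

# Simplified nadir-camera sketch (hypothetical values): the target's
# pixel offset from the image centre is scaled by altitude/focal-length
# to a ground offset, then added to the UAV's GPS-derived position.
f_px = 1200.0                            # focal length in pixels (from calibration)
altitude = 80.0                          # UAV altitude above ground (m), from GPS/IMU
uav_xy = np.array([250.0, 400.0])        # UAV ground position (m, local frame)
pixel_offset = np.array([150.0, -90.0])  # target offset from image centre (px)

ground_offset = pixel_offset * altitude / f_px  # ground-sample scaling
target_xy = uav_xy + ground_offset
print("coarse target position:", target_xy)    # -> [260. 394.]
```

A real pipeline would also rotate the offset by the UAV's attitude (from the gyroscopes and magnetometers) and could replace the altitude term with the laser-range measurement when the terrain is not flat.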

Signal Strengths and Arrival Times: Sensors deployed both on UAVs and on the ground measure the signal strengths and time differences of arrival between targets and sensors. Because of environmental influence, the collected signal values are noisy and may deviate substantially. We adopt a Kalman filter that exploits the error characteristics of the data to refine it, so that more accurate data are available for compensation [15].
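A scalar Kalman filter of the kind mentioned here can be sketched as follows; the constant-signal model and the noise variances Q and R are assumptions chosen for illustration, not values from the article:

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Kalman filter smoothing a noisy RSSI stream:
# state model x_k = x_{k-1} + w (constant signal), measurement z_k = x_k + v.
true_rssi = -70.0
z = true_rssi + rng.normal(0, 4.0, 200)  # noisy measurements (dBm)

Q, R = 0.01, 16.0    # process / measurement noise variances (assumed)
x, P = z[0], 1.0     # initial state and covariance
filtered = []
for zk in z:
    P = P + Q                  # predict: covariance grows by process noise
    K = P / (P + R)            # Kalman gain
    x = x + K * (zk - x)       # update with the measurement residual
    P = (1 - K) * P
    filtered.append(x)

print("raw std: %.2f dB, final filter error: %.2f dB"
      % (z.std(), abs(filtered[-1] - true_rssi)))
```

The same predict/update structure extends to vector states (e.g., fusing gyroscope and magnetometer data for attitude, as in [15]); only the matrices change.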

After processing the data, we assign each data source a weighting factor according to the quality of its collected data. The weights are used to build a weight-based fusion model through multi-source data association. We then locate the targets, and the station assigns tasks to the relevant UAVs according to the current locations of the moving targets. Next, we prepare to collect information on the targets and wake up the sensors along the targets' forward direction. In this way, we can continuously locate and track the moving targets with UAVs.
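One common form of such weight-based fusion is an inverse-variance weighted average of the per-source position estimates; the estimates and variances below are invented for the example and are not the authors' data:

```python
import numpy as np

# Weight-based fusion sketch: each data source yields a position
# estimate with a quality score; the fused position is the normalized
# weighted average, with weights taken as inverse error variances.
estimates = np.array([
    [12.1, 30.4],   # camera + GPS geometry
    [11.5, 29.2],   # RSSI-based estimate
    [12.4, 30.9],   # TDOA-based estimate
])
variances = np.array([1.0, 9.0, 4.0])   # assumed per-source error variances

w = 1.0 / variances
fused = (estimates * w[:, None]).sum(axis=0) / w.sum()
print("fused position:", fused)
```

Inverse-variance weighting naturally gives noisy sources (here the RSSI estimate) less influence, which matches the article's idea of weighting by data quality.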

EXPERIMENT VALIDATION

We use three four-rotor UAVs (DJI-M100, DJI-Innovations Inc.) to build a cooperative multiple-target localization and tracking system with our UAV and sensor network. Each UAV can carry 1.2 kg of equipment, including an airborne computer ("Manifold") with an embedded Linux system, a sensor node, and a camera. Figure 4b shows the UAV used in our experiment. The experimental environment is an approximately 110 m × 90 m open space, and 10 sensor nodes are deployed on the ground. In this scenario, two targets carrying sensors, an air flight target and a ground moving target, need to be recognized and located. We use a small electric truck to simulate the ground moving target and one UAV (DJI-Phantom series, DJI-Innovations Inc.) as the air target, as shown in Fig. 4a.

The control software in the airborne computer mainly contains the flight control module, the target localization and tracking module, and the communication module. The flight control module provides the landing, take-off, and flight trajectory control functions. The target localization and tracking module can locate and track the moving targets using the established model described above. The communication module is mainly used for communication

Figure 4. Experiment setup: a) deployment of the experiment with three surveillance UAVs, two moving targets, and 10 sensors on the ground; b) the UAV used, along with its sensor node and airborne computer.




between the sensor nodes and the PC at the control terminal. With a fully charged battery, the flight time of a UAV is around 30 minutes with a full payload.

Our experiments include two parts:
1. Recognition of moving small targets
2. Localization and tracking of both ground and air moving targets
Figure 5 shows some of the results of the first experiment, where Fig. 5a is an original image of a frame extracted from a 300-frame video. Compared to the whole image, the moving target is very small. There are a number of sensors whose sizes are similar to the target's, and some large trucks that are easy to confuse with the identified target. The key to this experiment is to separate the moving small target from the background image (including large trucks, sensors, houses, trees, grass, ground lines, etc.). Figure 5b is the grayscale image, and Figs. 5c and 5d are the background image and target image obtained by recovering the low-rank matrix and the sparse matrix (a detailed introduction can be found in earlier sections). From the results, we can see that the background in Fig. 5c is completely extracted. In Fig. 5d (the target image), the target object is successfully extracted, and the small target is basically separated from the background. However, some noise remains. The main reason is that the video background changes dynamically; the backgrounds of two consecutive frames sometimes differ noticeably, which introduces certain errors into the experimental results.
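The low-rank/sparse separation behind Figs. 5c and 5d can be imitated on synthetic data with a single rank-1 SVD step plus thresholding (full robust PCA iterates this idea); the frame size, target brightness, and threshold below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy background/target separation: stack video frames as columns of D,
# approximate the static background with a rank-1 SVD term (the low-rank
# part), and take the thresholded residual as the sparse target layer.
h, w, n_frames = 20, 20, 30
background = rng.uniform(0, 1, (h, w))
frames = []
for t in range(n_frames):
    f = background.copy()
    f[5 + t % 10, 5 + t % 10] += 2.0   # one small bright moving target
    frames.append(f.ravel())
D = np.stack(frames, axis=1)           # shape: pixels x frames

U, s, Vt = np.linalg.svd(D, full_matrices=False)
L = s[0] * np.outer(U[:, 0], Vt[0])    # rank-1 background estimate
S = D - L                              # residual layer
S[np.abs(S) < 0.5] = 0                 # keep only strong (target) entries

# The sparse layer should light up only near the moving target.
target_pixels = (np.abs(S) > 0).sum(axis=0)
print("nonzero target pixels per frame:", target_pixels[:5])
```

Because the background is identical across frames it is exactly rank 1, so one SVD recovers it almost perfectly; with a dynamically changing background (as noted above for the real experiment) the residual also contains noise, which is why some clutter survives in Fig. 5d.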

In the second experiment, the adopted sensors use a low-power, low-data-rate ZigBee protocol. At the beginning of the experiment, the UAVs obtain their own position information from their onboard GPS and then fly along a path set in advance. During the flight, the UAVs periodically broadcast their own location information to the entire WSN through the onboard sensor node at 3 Hz. Figure 6 shows the tracking results of our experiments. It illustrates the GPS trajectories planned in advance (denoted by red lines) and their corresponding estimated trajectories (denoted by blue lines). Figure 6a shows the result for the ground target, while Fig. 6b shows that for the air target. From these figures, we can see that the estimated trajectories are roughly consistent with the real values.
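One simple way to quantify this kind of "roughly consistent" agreement is the RMSE between the planned and estimated trajectories; the coordinates below are invented for illustration, not the paper's measured data:

```python
import numpy as np

# Hypothetical tracking-accuracy check: RMSE between a planned GPS
# trajectory and its estimate (sampled waypoints, meters).
gps = np.array([[0, 0], [10, 5], [20, 10], [30, 15]], float)
est = np.array([[0.4, -0.2], [10.5, 4.6], [19.2, 10.8], [30.6, 15.3]], float)

# Mean squared per-waypoint Euclidean error, then square root.
rmse = np.sqrt(((gps - est) ** 2).sum(axis=1).mean())
print("trajectory RMSE: %.2f m" % rmse)
```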

CONCLUSION

The research and application of UAVs are developing dramatically. When equipped with different monitoring and sensing devices, UAVs can integrate with WSNs to form integrated sky-ground cooperative networks. In this article, we describe the scenarios of such a network and design its system architecture. As an important application of this network, we first introduce the multi-UAV cooperative resource scheduling and task planning scheme. For the multi-source data collected by the multi-UAV and sensor network, we propose a target recognition, localization, and tracking method. This article also presents two implemented experiments on moving-target recognition and localization, both on the ground and in the air. The results show that the proposed model and method achieve high accuracy.

ACKNOWLEDGMENTS

This work has been supported in part by the Aeronautical Science Foundation of China under grant no. 2016ZC52030.

REFERENCES

[1] N. X. Du and F. Lin, "Maintaining Differentiated Coverage in Heterogeneous Sensor Networks," EURASIP J. Wireless Commun. and Networking, vol. 5, no. 4, Sept. 2005, pp. 565–72.

[2] X. Du et al., "A Routing-Driven Elliptic Curve Cryptography Based Key Management Scheme for Heterogeneous Sensor Networks," IEEE Trans. Wireless Commun., vol. 8, no. 3, Mar. 2009, pp. 1223–29.

Figure 5. Recognition of a moving small target: a) original image; b) grayscale image; c) background image by recovering low-rank matrix; d) target image by recovering sparse matrix.





[3] A. V. Leonov, "Modeling of Bio-Inspired Algorithms AntHocNet and BeeAdHoc for Flying Ad Hoc Networks (FANETs)," Proc. 13th IEEE Int'l. Sci.-Tech. Conf. Actual Problems Electron. Instrum. Eng., Novosibirsk, Russia, Oct. 2016, pp. 90–99.

[4] S. Xu, K. Doğançay, and H. Hmam, "Distributed Pseudolinear Estimation and UAV Path Optimization for 3D AOA Target Tracking," Signal Processing, vol. 133, Apr. 2017, pp. 64–78.

[5] D. Sabushimike et al., "Low-Rank Matrix Recovery Approach for Clutter Rejection in Real-Time IR-UWB Radar-Based Moving Target Detection," Sensors, vol. 16, no. 9, Sept. 2016, p. 1409.

[6] S. Mahfouz et al., "Non-Parametric and Semi-Parametric RSSI/Distance Modeling for Target Tracking in Wireless Sensor Networks," IEEE Sensors J., vol. 16, no. 9, Sept. 2016, pp. 2115–26.

[7] M. Mavrovouniotis, C. Li, and S. Yang, "A Survey of Swarm Intelligence for Dynamic Optimization: Algorithms and Applications," Swarm Evol. Comp., vol. 22, Apr. 2017, pp. 1–17.

[8] X. Du et al., "Self-Healing Sensor Networks with Distributed Decision Making," Int'l. J. Sensor Networks, vol. 2, no. 5/6, 2007, pp. 289–98.

[9] N. H. Motlagh, M. Bagaa, and T. Taleb, "UAV-Based IoT Platform: A Crowd Surveillance Use Case," IEEE Commun. Mag., vol. 55, no. 2, Feb. 2017, pp. 128–34.

[10] X. Du, M. Rozenblit, and M. Shayman, "Implementation and Performance Analysis of SNMP on a TLS/TCP Base," Proc. 7th IFIP/IEEE Int'l. Symp. Integrated Network Management, Seattle, WA, May 2001, pp. 453–66.

[11] X. Du et al., "An Effective Key Management Scheme for Heterogeneous Sensor Networks," Ad Hoc Networks, vol. 5, no. 1, Jan. 2007, pp. 24–34.

[12] G. Cheng and J. Han, “A Survey on Object Detection in Optical Remote Sensing Images,” ISPRS J. Photogramm. Remote Sens., vol. 117, July 2016, pp. 11–28.

[13] R. Hao, D. Luo, and H. Duan, "Multiple UAVs Mission Assignment Based on Modified Pigeon-Inspired Optimization Algorithm," Proc. IEEE Chin. Guid., Navig. Control Conf., Yantai, China, Aug. 2014, pp. 2692–97.

[14] X. Wang, J. Liu, and Q. Zhou, "Real-Time Multi-Target Localization from Unmanned Aerial Vehicles," Sensors, vol. 17, no. 1, Jan. 2017, p. 33.

[15] H. Chuang, C. Hou, and Y. Chen, “Indoor Intelligent Mobile Robot Localization Using Fuzzy Compensation and Kalman Filter to Fuse the Data of Gyroscope and Magnetometer,” IEEE Trans. Ind. Electron., vol. 62, no. 10, Oct. 2015, pp. 6436–47.

BIOGRAPHIES

Jingjing Gu ([email protected]) received her B.E. and Ph.D. degrees from Nanjing University of Aeronautics and Astronautics (NUAA), China, in 2005 and 2011, respectively. She is currently an associate professor at the Institute of Artificial Intelligence and Pattern Computing, NUAA. Her current research interests include flying ad hoc networking, wireless sensor networks, and data mining in networks.

Tao Su ([email protected]) received his B.S. degree in computer science and technology from Nanjing University of Aeronautics and Astronautics, China, in 2015. He is currently a Master's candidate in the College of Computer Science and Technology at the same university. His research includes wireless sensor networks and machine learning.

Qiuhong Wang ([email protected]) received her B.S. degree in computer science and technology from Nanjing University of Aeronautics and Astronautics in 2016. She is currently a Master's candidate in the College of Computer Science and Technology at the same university. Her research interests include data mining and wireless sensor networks.

Xiaojiang Du [SM] ([email protected]) is a professor in the Department of Computer and Information Sciences at Temple University. He received his M.S. and Ph.D. degrees in electrical engineering from the University of Maryland, College Park, in 2002 and 2003, respectively. His research interests are security, cloud computing, wireless networks, computer networks, and systems. He has published over 230 journal and conference papers.

Mohsen Guizani [F] ([email protected]) received his B.S. (with distinction), M.S., and Ph.D. degrees in electrical engineering, and an M.S. degree in computer engineering, from Syracuse University in 1984, 1986, 1987, and 1990, respectively. He is a professor and chair of the Department of Electrical and Computer Engineering, University of Idaho. His research interests include wireless communications and mobile computing, computer networks, smart grid, cloud computing, and security.

Figure 6. Targets' GPS trajectories and their estimated trajectories: a) result for the ground target; b) result for the air target.
