
XXX-X-XXXX-XXXX-X/XX/$XX.00 ©20XX IEEE

A Survey on Sensor Technologies for Unmanned Ground Vehicles

Qi Liu

School of Mechanical Engineering

Beijing Institute of Technology

Beijing, China

[email protected]

Shihua Yuan

School of Mechanical Engineering

Beijing Institute of Technology

Beijing, China

[email protected]

Zirui Li

School of Mechanical Engineering

Beijing Institute of Technology

Beijing, China

[email protected]

Abstract—Unmanned ground vehicles (UGVs) have huge development potential in both civilian and military fields and have become a research focus in many countries. In addition, high-precision, high-reliability sensors are essential for the efficient operation of UGVs. This paper presents a brief review of sensor technologies for UGVs. First, the characteristics of various sensors are introduced. Then the strengths and weaknesses of different sensors, as well as their application scenarios, are compared. Furthermore, sensor applications in some existing UGVs are summarized. Finally, hotspots in sensor technology are forecast to indicate the direction of future development.

Keywords—unmanned ground vehicles, sensor, application

I. INTRODUCTION

The unmanned ground vehicle (UGV) is a comprehensive intelligent system that integrates functions including environmental perception, localization, navigation, path planning, decision-making and motion control [1]. It depends on various technologies such as sensors, computer science, and data fusion to operate efficiently [2].

In civilian applications, UGV technology is mainly embodied in autonomous driving. The fusion of environmental perception systems and driver models can completely or partially replace the active control of drivers [3], which has great potential for reducing traffic accidents and alleviating traffic congestion [4]. In military applications, UGVs can effectively replace soldiers to complete various tasks on the battlefield, including intelligence acquisition, surveillance and reconnaissance [5]. Thus, UGV technology can greatly promote the construction of intelligent transportation and the development of unmanned combat.

Environmental perception is one of the most important technologies in UGVs, covering both the acquisition of external environmental information and the state estimation of the vehicle itself. Since environmental perception algorithms rely on sensors, a complete UGV needs to be equipped with various sensors to ensure its safe and stable operation. Thus, sensors with high precision and reliability are crucial for completing the scheduled task.

In this paper, our main contribution is to present a brief review of various sensors as well as their applications on existing platforms. The paper is organized as follows: Section II introduces the sensors commonly used on UGVs and compares their strengths and weaknesses; Section III summarizes sensor applications in some existing platforms; Section IV presents some hotspots and development directions for sensor technologies.

II. SENSORS FOR UGV

This section presents representative sensors that are widely used on UGVs, as well as sensors with great application potential.

UGVs employ a wide selection of sensors, which can be divided into exteroceptive sensors and proprioceptive sensors. Exteroceptive sensors are required by the environmental perception system to acquire knowledge of the surroundings, while proprioceptive sensors are used by the state estimation system to monitor the platform status in real time.

A. Exteroceptive Sensors

Exteroceptive sensors including Lidar, radar, ultrasonic, monocular camera, stereo camera, Omni-direction camera, infrared camera and event camera are presented in this section. Furthermore, a detailed comparison of these sensors is given in TABLE I.

1) Lidar

a) Brief Introduction

Lidar is an active sensor that detects the position and speed of targets by emitting laser beams; the direction of the laser beam is continuously changed by a rotating mirror driven by a high-speed electric motor, achieving a 360-degree horizontal detection range. Lidar can be divided into 2D Lidar (only 1 laser beam) and 3D Lidar (usually 4 to 128 laser beams) according to the number of laser beams. Both can construct point cloud maps of the surroundings with high accuracy.
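To make the scanning principle concrete, the following minimal sketch converts raw Lidar returns (range, azimuth, elevation) into a Cartesian point cloud with NumPy; the array layout and the example scan are illustrative assumptions, not a specific sensor's data format.

```python
import numpy as np

def polar_to_cartesian(ranges, azimuths, elevations):
    """Convert Lidar returns (range, azimuth, elevation) to an N x 3 point cloud.

    ranges     -- distances in metres, shape (N,)
    azimuths   -- horizontal beam angles in radians, shape (N,)
    elevations -- vertical beam angles in radians, shape (N,)
    """
    x = ranges * np.cos(elevations) * np.cos(azimuths)
    y = ranges * np.cos(elevations) * np.sin(azimuths)
    z = ranges * np.sin(elevations)
    return np.stack([x, y, z], axis=1)

# Example: one horizontal scan line of a 2D Lidar (elevation = 0)
az = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
cloud = polar_to_cartesian(np.full(360, 10.0), az, np.zeros(360))
print(cloud.shape)  # (360, 3)
```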

b) Application

Lidar is a good choice for SLAM, object detection, object tracking and point cloud matching; some typical Lidar-based algorithms are as follows.

SLAM: Classic Lidar-based SLAM methods include Gmapping [6], Hector [7] and Karto [8] for 2D SLAM, LOAM [9] for 3D SLAM, and Cartographer [10] for both 2D and 3D SLAM. All of the above methods are open sourced in ROS (Robot Operating System). Recent state-of-the-art methods are mainly based on deep learning. Wirges [11] trained a network to construct a multi-layer grid map; Ji [12] proposed the robust CPFG-SLAM, which is suitable for off-road environments; Shen [13] put forward a 3D map optimization method with a fully convolutional neural network and dynamic local NDT, which showed higher efficiency than methods with conventional NDT.

Object detection: Object detection algorithms can be mainly divided into three categories: projection methods, voxel grid methods and point cloud network methods.


TABLE I. INFORMATION OF DIFFERENT EXTEROCEPTIVE SENSORS

Sensor | Affected by Illumination | Affected by Weather | Color and Texture | Depth | Active/Passive | Range | Accuracy | Size | Cost
Lidar | — | √ | — | √ | Active | <200 m | High | Large | High
Radar | — | — | — | √ | Active | <250 m | Medium | Small | Medium
Ultrasonic | — | — | — | √ | Active | <5 m | Low | Small | Low
Monocular Camera | √ | √ | √ | — | Passive | —* | High | Small | Low
Stereo Camera | √ | √ | √ | √ | Passive | <100 m | High | Medium | Low
Omni-direction Camera | √ | √ | √ | — | Passive | —* | High | Small | Low
Infrared Camera | — | √ | — | — | Passive | —* | Low | Small | Low
Event Camera | √ | √ | — | — | Passive | —* | Low | Small | Low

* The range of cameras (except the depth range of the stereo camera) depends on the operating environment, so there is no fixed detection distance.

A number of works on Lidar-based object detection have been carried out; their principles, pros and cons, and related works are shown in TABLE II.

Object tracking: The large field of view and highly precise depth information allow Lidar to track objects over long periods of time.

The most widely used techniques rely on data association, i.e., matching targets at the current moment with the tracked targets of the previous time cycle according to a similarity judgment. Specific methods include the nearest neighbor method [14], multiple hypothesis tracking [15], similarity metrics with point density [16] or Hausdorff distance [17], and the Bernoulli filter [18].
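A minimal sketch of nearest-neighbor data association between consecutive frames is shown below, assuming each detection has been reduced to a 2D centroid; the gating threshold and the one-directional matching are illustrative simplifications, not the exact schemes of the cited works.

```python
import numpy as np
from scipy.spatial.distance import cdist

def nearest_neighbor_association(prev_centroids, curr_centroids, gate=2.0):
    """Match current detections to previous tracks by smallest Euclidean distance.

    Returns a list of (prev_index, curr_index) pairs; detections farther than
    `gate` metres from every existing track are left unmatched (new tracks).
    """
    matches = []
    if len(prev_centroids) == 0 or len(curr_centroids) == 0:
        return matches
    dist = cdist(prev_centroids, curr_centroids)   # pairwise distance matrix
    for j in range(dist.shape[1]):                 # each current detection
        i = int(np.argmin(dist[:, j]))             # closest previous track
        if dist[i, j] < gate:
            matches.append((i, j))
    return matches
```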

Physical models for object tracking have also been investigated by some researchers, including dynamic models [19] and shape models [20].

In addition, the results of object tracking often require noise reduction to achieve smoother trajectories. Filtering methods are commonly used for this purpose, including the Kalman filter [21] and the particle filter [22].
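As a minimal illustration of such filtering, the sketch below runs a constant-velocity Kalman filter over noisy 1D position measurements; the process and measurement noise values are assumptions for demonstration, not tuned parameters from the cited works.

```python
import numpy as np

def kalman_smooth_1d(measurements, dt=0.1, q=0.1, r=0.5):
    """Constant-velocity Kalman filter over noisy 1D position measurements."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition (position, velocity)
    H = np.array([[1.0, 0.0]])                 # only position is measured
    Q = q * np.eye(2)                          # process noise covariance
    R = np.array([[r]])                        # measurement noise covariance
    x = np.array([[measurements[0]], [0.0]])   # initial state
    P = np.eye(2)
    smoothed = []
    for z in measurements:
        x = F @ x                              # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x            # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
        x = x + K @ y                          # update
        P = (np.eye(2) - K @ H) @ P
        smoothed.append(float(x[0, 0]))
    return smoothed
```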

Point cloud matching: Point cloud matching is usually used to find areas of high similarity between online point clouds and high-precision a priori point cloud maps in order to achieve localization.

In general, there are two classic point cloud matching methods: Iterative Closest Point (ICP) [23] and the Normal Distributions Transform (NDT) [24]. The two methods were compared in [25], and NDT proved to be more robust than ICP. Several works have been proposed to improve the performance of these two methods, specifically NDT-MCL [26], IMLP (which optimizes the matching accuracy of ICP) [27] and adaptive NDT (which reduces the noise caused by NDT) [28].
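The following is a minimal 2D ICP sketch that alternates nearest-neighbor matching with an SVD-based rigid alignment; a production implementation would add outlier rejection, convergence tests and a 3D formulation, so this is only a didactic version of the technique.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_2d(source, target, iterations=20):
    """Align `source` (N x 2) to `target` (M x 2); returns rotation R and translation t."""
    R, t = np.eye(2), np.zeros(2)
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)               # nearest target point for each source point
        matched = target[idx]
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)  # cross-covariance of centered pairs
        U, _, Vt = np.linalg.svd(H)
        R_step = Vt.T @ U.T
        if np.linalg.det(R_step) < 0:          # guard against reflections
            Vt[-1, :] *= -1
            R_step = Vt.T @ U.T
        t_step = mu_t - R_step @ mu_s
        src = src @ R_step.T + t_step          # apply the incremental transform
        R, t = R_step @ R, R_step @ t + t_step
    return R, t
```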

c) Pros and Cons

The strengths of Lidar are its long detection range, wide field of view and high data-collection accuracy. In addition, depth information of the target can be obtained from Lidar, and it is not affected by lighting conditions, which means it is able to work both day and night.

However, there are several weaknesses to be taken into account. First, it has an extremely high cost. Moreover, it suffers from sparse point clouds and low vertical resolution at long distances, which may result in false detections. In addition, data collected by Lidar is easily affected by weather such as rain, fog, snow and sandstorms [29]. Last but not least, it performs poorly in detecting dark and specular objects, since they absorb or reflect away most of the laser radiation.

2) Radar

a) Brief Introduction

Radar plays an important role in both civil and military fields. The principle of radar is similar to that of Lidar, but it emits radio waves instead of laser beams to detect the position, distance and velocity of the target. Radar can be divided into various types according to frequency band, of which millimeter wave (MMW) radar is the most commonly used for UGVs.

b) Application

Radar is mainly used for object detection and ADAS; several works are listed as follows.

Object detection: Radar-based object detection algorithms can be mainly divided into two categories: machine learning methods and end-to-end network methods.

Widely used machine learning methods in radar detection are LSTM and random forest. Schumann [30] clustered radar data with DBSCAN and compared the effectiveness of random forest and LSTM methods in vehicle detection. Akita [31] extracted candidate regions based on radar echo intensity and used LSTM to detect and track targets. End-to-end network methods are similar to the point cloud network methods applied to Lidar, including improved PointNet [32] and PointNet++ [33].
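A hedged sketch of the clustering step mentioned above is shown below: grouping raw radar detections with DBSCAN (scikit-learn) before any classifier is applied. The feature choice (position plus radial velocity) and the eps/min_samples values are illustrative assumptions, not the settings used in [30].

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Each radar detection: (x position [m], y position [m], radial velocity [m/s])
detections = np.array([
    [10.2, 1.1, 5.0], [10.5, 1.3, 5.1], [10.3, 0.9, 4.9],   # likely one moving vehicle
    [25.0, -3.2, 0.0], [25.4, -3.0, 0.1],                   # likely a static object
    [40.0, 8.0, -2.0],                                      # isolated return
])

# eps and min_samples are illustrative tuning values
labels = DBSCAN(eps=1.0, min_samples=2).fit_predict(detections)
print(labels)   # e.g. [0 0 0 1 1 -1]; -1 marks points treated as noise
```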

ADAS: Radar-based ADAS work mainly includes blind spot detection [34], lane change assistance [35] and collision warning [36], which can improve vehicle safety and assist drivers with decision-making.

c) Pros and Cons

Radar stands out for its high performance-to-price ratio. Compared with Lidar, radar has a longer detection range with small size and low cost; in addition, it is not easily affected by light and weather conditions. Apart from the above advantages, several weaknesses of radar should be noted: its resolution and accuracy are low, so filters are usually needed for preprocessing [29]; it provides little color and texture information; and it has poor concealment and is easily interfered with by other equipment [37].


TABLE II. RESEARCH ON OBJECT DETECTION USING LIDAR

Method | Principle | Literature | Advantages | Limitations
Projection (Spherical) | Project the point cloud to a spherical coordinate system of azimuth, elevation and distance. | SqueezeSeg [38]; PointSeg [39] | Point cloud becomes denser after transformation | Difficult to achieve sensor fusion
Projection (Plane) | Project the point cloud to the main view to generate depth information. | Faster R-CNN [40]; ConvNet [41] | Convenient for data fusion with camera images | Empty pixels may be produced at distant locations due to sparse point cloud
Projection (Bird-Eye) | Project the point cloud to a top-view image to provide size and position information. | Deep learning: PIXOR [42], YOLO3D [43]; Other: Similarity [44] | Directly provides the location and size of the object | Sparse point cloud at distant locations may cause false detection
Volumetric | Assign the point cloud to a three-dimensional voxel grid at the corresponding position. | Deep learning: VoxelNet [45], SECOND [46], MVX-Net [47]; Other: Particle filter [44] | Original 3D data information can be retained | Empty voxel grids are generated due to the sparse and uneven distribution of the point cloud
PointNet | Directly take the original point cloud as input and achieve end-to-end detection. | PointNet [48]; PointNet++ [49]; PointPillars [50] | Simple and fast; no hard requirement on point cloud pre-processing | Usually a long network training period

TABLE III. RESEARCH ON OBJECT DETECTION USING MONOCULAR CAMERA

Method | Principle | Literature | Advantages | Limitations
Two-stage, HG, Appearance-based | Prior knowledge of the object is required to generate the ROI. | Color [51]; Edge [52]; Texture [53] | High accuracy; various types of features can be extracted | High computing cost; easily affected by environment
Two-stage, HG, Motion-based | Temporal information is extracted to generate the ROI. | Frame difference [54]; Background modeling [55]; Optical flow [56] | Low computing cost; easy to implement | Hard for low-speed and stationary objects; hard for complex scenes
Two-stage, HV, Template-based | Establish a feature template and calculate a similarity measurement. | Hybrid template [57]; Deformable template [58] | Low computing cost; simple algorithm | Difficult to build a comprehensive template library
Two-stage, HV, Learning-based | Train a classifier to verify the target in the candidate region. | SVM; Adaboost; Neural network [48] | High accuracy; robust | High computing cost; hard to train a classifier
One-stage detection | Achieve end-to-end detection through deep learning without the HG process. | SSD [59]; YOLOv3 [60]; MB-Net [61]; EZ-Net [62] | Fast detection speed and good real-time performance | Low accuracy

3) Ultrasonic

a) Brief Introduction and Application

Ultrasonic sensors detect targets by emitting sound waves; the principle is similar to that of Lidar and radar. Ultrasonic sensors are widely used in the maritime field; on UGVs, they can be equipped for short-range target detection [63] and for ADAS functions including automatic parking [64] and hazard warning [65].
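The time-of-flight principle behind this amounts to a one-line computation, sketched below; the assumed speed of sound corresponds to air at roughly room temperature.

```python
def ultrasonic_distance(echo_time_s, speed_of_sound=343.0):
    """Distance to target from the round-trip echo time (halved for one way)."""
    return speed_of_sound * echo_time_s / 2.0

print(ultrasonic_distance(0.0116))  # about 1.99 m
```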

b) Pros and Cons

Ultrasonic sensors are not easily affected by light and weather conditions, and have low cost and small size. However, the detection distance is short and the accuracy is low, which easily introduces noise into the collected data [66]. In addition, the narrow field of view usually causes blind spots during detection.

4) Monocular camera

a) Brief Introduction

The monocular camera stores environmental information in the form of pixels by converting optical signals into electrical signals. The image collected by a monocular camera is essentially the same as the surroundings perceived by the human eye. The monocular camera is one of the most popular sensors in the UGV field and is capable of supporting many kinds of environmental perception tasks.

b) Application

A number of tasks have been addressed using the monocular camera: SLAM, semantic segmentation, object detection, road detection and traffic sign detection.

SLAM: Monocular-camera-based SLAM is commonly used for environmental perception due to its low cost and convenience; however, it cannot construct a true-scale map due to the lack of depth information. Classic monocular SLAM methods include MonoSLAM [67], ORB-SLAM [68], ORB-SLAM2 [69] and DTAM [70]; more recent algorithms include LSD-SLAM [71], SVO [72] and InertialNet [73].
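Several of the feature-based pipelines above (e.g., ORB-SLAM) start from sparse feature extraction and matching between frames; the following minimal OpenCV sketch shows that front-end step only, with placeholder file names and parameters.

```python
import cv2

img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; cross-check keeps mutual best matches
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} matches; best distance {matches[0].distance}")
```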

Semantic segmentation: Semantic segmentation is essential for understanding the driving environment and extracting objects of interest. Conventional algorithms include Conditional Random Fields (CRF) [74] and Markov Random Fields (MRF) [75], while recent methods are mainly based on deep learning to improve segmentation accuracy, specifically FCN [76], PSPNet [77], DeepLab [78], ChiNet [79] and SSeg-LSTM [80].

Object detection: Numerous works have been carried out on object detection using the monocular camera, which can be divided into two-stage detection methods and single-stage detection methods.


TABLE IV. RESEARCH ON OBJECT DETECTION USING STEREO CAMERA

Method | Principle | Literature | Advantages | Limitations
IPM | Transform data from the camera coordinate system to the world coordinate system to build a top-view image without disparity. | [81] | Low computing cost; simple and mature algorithm | Vulnerable to road conditions including off-road and uneven road
Disparity Map | Extract plane features parallel (or perpendicular) to the camera plane, which may contain objects, by generating a disparity map. | V-disparity map [82]; UV-disparity map [83]; Filter [84] | High accuracy; easy to obtain depth information | High computing cost; low resolution for planes with similar shapes

TABLE V. RESEARCH ON ROAD DETECTION USING MONOCULAR CAMERA

Method | Principle | Literature | Advantages | Limitations
Feature Extraction, Appearance-based | Detect the road area by means of road appearance. | Color [85]; Texture [86] | Different types of features can be used | Difficult to detect unstructured roads
Feature Extraction, Boundary-based | Detect the elevation difference between the road and its surroundings to generate the road boundary. | [87] | Suitable for most scenarios | Difficult to detect off-road areas with unclear boundaries
Model Fitting, Parametric | Fit the road model with parametric equations. | Line [88]; Polynomial [89] | Low computing cost; simple model | Low accuracy; poor robustness
Model Fitting, Semi-parametric | Fit the road model with semi-parametric curves without a prior assumption of the road model. | Cubic spline [90]; B-spline [91] | No need to assume a specific road geometry | High computing cost; over-fitting needs to be considered
Model Fitting, Non-parametric | Fit the road model with a curve that only requires continuity. | Hierarchical Bayesian [92]; Dijkstra [93] | Only continuity of the curve is required | High computing cost; complicated model

In the two-stage methods, the first stage, called hypothesis generation (HG), generates regions of interest (ROI) for the object, and the second stage, called hypothesis verification (HV), trains a classifier to identify the candidate regions; single-stage methods rely on neural networks to achieve end-to-end detection. Several related works are shown in TABLE III.

Road detection: Road detection is important for ensuring that the platform drives stably in the safe and accessible area of the road. Algorithms can mainly be divided into two stages: feature extraction and model fitting. Several related works are shown in TABLE V.

Traffic sign detection: Correct identification of traffic signs is essential for the platform's reasonable behavior decisions. In general, traffic sign detection can be divided into two steps: segmentation and verification.

The purpose of segmentation is to obtain the location of the traffic sign; color-based methods using the RGB [94], HSV [95] and HSI [96] color spaces, and shape-based methods using edges [97], HOG [98] and Haar wavelet features [99], are often used for segmentation. The purpose of verification is to identify the specific type of traffic sign from the segmentation result; a series of learning-based methods have been applied, specifically SVM [98], AdaBoost [100] and neural networks [101].
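A minimal sketch of the color-based segmentation step is shown below: thresholding in HSV space with OpenCV to isolate red sign candidates. The hue/saturation ranges, file name and area threshold are rough assumptions (red wraps around the hue axis, hence two masks), not values from the cited works.

```python
import cv2
import numpy as np

img = cv2.imread("road_scene.png")                       # placeholder path
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Red occupies both ends of OpenCV's 0-179 hue range
lower = cv2.inRange(hsv, np.array([0, 100, 80]),   np.array([10, 255, 255]))
upper = cv2.inRange(hsv, np.array([160, 100, 80]), np.array([179, 255, 255]))
mask = cv2.bitwise_or(lower, upper)

# Connected regions of the mask become candidates for the verification stage
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
candidates = [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 200]
```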

c) Pros and Cons

The biggest advantage of the monocular camera is that it can generate high-resolution images with color and texture information from the environment; as a passive sensor, it also has good concealment and will not interfere with other devices. In addition, camera-based perception technology is relatively mature, and the monocular camera is small and low-cost. Nevertheless, several drawbacks should also be considered: the lack of depth information; high sensitivity to illumination and weather conditions; and the large computational cost of processing high-resolution images.

5) Stereo camera

a) Brief Introduction

The imaging principle of the stereo camera is the same as that of the monocular camera, but the stereo camera is equipped with an additional lens at a symmetrical position, or measures time of flight, to generate depth information. Another solution is to place two or more monocular cameras at different positions on the platform to form a stereo vision system; however, this makes camera calibration considerably more difficult.
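Depth recovery from a rectified stereo pair follows the relation Z = f·B/d (focal length times baseline over disparity). The sketch below computes a disparity map with OpenCV's block matcher and converts it to depth; the image paths, focal length and baseline are placeholder calibration values.

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)     # placeholder rectified pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point output

focal_px, baseline_m = 700.0, 0.12                      # assumed calibration values
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = focal_px * baseline_m / disparity[valid]  # Z = f * B / d, in metres
```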

b) Application

Stereo camera is widely used in the following work: SLAM, object detection and road detection.

SLAM: The depth information collected by the stereo camera makes it more suitable than the monocular camera for generating precise 3D maps. Relevant methods include DVO [102], RGBD-SLAM-V2 [103] and Stereo-ORB-SLAM [104].

Object detection: Algorithms developed for the stereo camera are similar to those of the monocular camera. Since the stereo camera is able to collect depth information, it has more choices in the ROI generation step, including inverse perspective mapping (IPM) and the disparity map. Several related works are shown in TABLE IV.

Road detection: Road detection methods using the stereo camera are similar to those of the monocular camera. Inverse perspective mapping (IPM) [105] is usually applied to achieve road detection, since the bird's-eye view constructed from IPM intuitively exposes the geometric information of the road, thereby facilitating the extraction of road features and model fitting.
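A minimal IPM sketch follows: four points on the road plane in the image are mapped to a rectangle in the bird's-eye view through a homography. The pixel coordinates and output size are placeholders that would normally come from camera calibration.

```python
import cv2
import numpy as np

img = cv2.imread("front_view.png")                       # placeholder path

# Four road-plane points in the image (a trapezoid) and their bird's-eye targets
src = np.float32([[420, 450], [860, 450], [1180, 700], [100, 700]])
dst = np.float32([[300, 0], [500, 0], [500, 600], [300, 600]])

H = cv2.getPerspectiveTransform(src, dst)                # 3x3 homography
birdseye = cv2.warpPerspective(img, H, (800, 600))       # top-view image for road detection
```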


TABLE VI. SENSOR APPLICATIONS IN SOME EXISTING UGVS

Name | Time | Country | Exteroceptive Sensors | Proprioceptive Sensors | Platform Performance
CITAVT-IV [106] | 2002 | China | 2 monocular cameras for object and road detection | Odometer and IMU for localization | Highest speed 75.6 km/h on the Changsha Around-city Express
THMR-V [107] | 2002 | China | 1 Lidar and 2 monocular cameras for object and road detection | GPS for localization | Average speed 5–7 km/h in Tsinghua University
Junior [108] | 2007 | America | 5 Lidars for SLAM; 1 monocular camera for road detection; 2 radars for object detection | GPS and IMU for localization | Average speed 20 km/h in the urban environment of the 2007 DARPA Challenge
Boss [109] | 2007 | America | 9 Lidars for SLAM and object detection; 2 radars for object detection; 2 monocular cameras for road detection | Integrated navigation for localization and state estimation | Average speed 22.53 km/h in the urban environment of the 2007 DARPA Challenge
Odin [110] | 2007 | America | 8 Lidars and 2 monocular cameras for object and road detection | GPS and IMU for localization; IMU for state estimation | Average speed 20.92 km/h in the urban environment of the 2007 DARPA Challenge
Talos [111] | 2007 | America | 13 Lidars and 15 radars for object detection; 5 monocular cameras for road detection | GPS and IMU for localization; IMU for state estimation | Urban environment of the 2007 DARPA Challenge (speed not mentioned)
Robotcar [112] | 2013 | U.K. | 3 Lidars, 2 radars, 3 monocular cameras and 1 stereo camera for vehicle, pedestrian and road detection | GPS and IMU for localization; IMU for state estimation | Operated more than 100 times around Oxford city centre (speed not mentioned)
BRAiVE [113] | 2013 | Italy | 5 Lidars for object detection; 1 radar for ADAS; 10 monocular cameras for road and traffic sign detection | GPS and IMU for localization; IMU for state estimation | 13,000 km route from Italy to Shanghai (speed not mentioned)
Bertha [114] | 2014 | Germany | 6 radars for object detection; 2 monocular cameras for traffic sign detection; 1 stereo camera for road detection | GPS for localization | Average speed 30 km/h from Mannheim to Pforzheim (103 km)
Uber | 2015 | America | 1 Lidar, 10 radars and 7 cameras for SLAM, object and road detection | GPS and IMU for localization; IMU for state estimation | Tested in city sections of San Francisco and Pennsylvania
Land Cruiser [115] | 2016 | China | 1 Lidar for SLAM; 4 monocular cameras for object detection | GPS and INS for localization | Tested in off-road environments
Autopilot | 2017 | America | 1 radar and 10 ultrasonic sensors for ADAS; 5 monocular cameras for road detection | GPS for localization | Not mentioned
Apollo [116] | 2018 | China | 1 Lidar, 1 radar, 1 ultrasonic sensor and 6 monocular cameras for ADAS, object detection, road detection and traffic sign detection | GPS and IMU for localization; IMU for state estimation | Good performance in the Chinese Subject Three Examination

c) Pros and Cons

Compared with Lidar, the stereo camera can obtain color information and generate denser point clouds [117]; compared with the monocular camera, the two images taken simultaneously from different viewpoints by the two lenses provide depth information and motion information about the environment.

However, the stereo camera is also susceptible to weather and illumination conditions; in addition, its field of view is narrow, and additional computation is required to process the extra depth information [117].

6) Omni-direction camera

a) Brief Introduction and Application

Compared with the monocular camera, an Omni-direction camera is able to collect a ring-shaped panoramic image centered on the camera. With the development of hardware, it is gradually being used on UGVs for tasks including SLAM [118] and semantic segmentation [119].

b) Pros and Cons

The Omni-direction camera has a large field of view; however, the computational cost is high due to the increased amount of image data collected.

7) Infrared camera

a) Brief Introduction

The infrared camera gathers information about the surroundings by receiving infrared radiation from objects, and complements conventional cameras well. In general, infrared cameras can be divided into active infrared cameras operating in the near-infrared (NIR) region and passive infrared cameras operating in the far-infrared (FIR) region. The NIR camera is sensitive to wavelengths in the range of 0.75–1.4 μm, while the FIR camera is sensitive to wavelengths in the range of 6–15 μm.

b) Application

The infrared camera is popular for object detection (usually vehicles and pedestrians) at night on UGVs; it should be noted that the appropriate type of camera should be selected according to the infrared wavelength range emitted by the detected object. Several works using infrared cameras are as follows.

Some researchers established methods based on appearance or features extracted from the image, specifically edges [120], HOG [121] and Haar features [122]. However, the low resolution of infrared images makes feature extraction difficult; it is advisable to first enhance the contrast of the infrared image so that the subsequent detection achieves better results [122]. Other relevant works were mainly based on the YOLOv3 [123] deep learning network.

Constructing a stereo infrared vision system is another way to solve the detection problem. Mita [124] placed two infrared cameras to form a stereo infrared vision system and achieved vehicle detection under different weather conditions by calculating a disparity map.


c) Pros and Cons

The biggest advantage of the infrared camera is its good performance at night; it is also low-cost and small in size. However, it cannot provide color, texture or depth information, and the resolution of the image is relatively low.

8) Event camera

a) Brief Introduction

The event camera is a newer perception modality applied in the UGV field. Its working principle differs greatly from that of traditional cameras: it records a series of asynchronous signals by measuring the brightness change of each individual pixel at the microsecond level [125].
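To make the event principle concrete, the following sketch simulates the events a single pixel would emit between two intensity samples: an event is generated whenever the log-brightness changes by more than a contrast threshold. The threshold and intensity values are illustrative; real event sensors operate asynchronously in hardware.

```python
import numpy as np

def pixel_events(i_prev, i_curr, threshold=0.2):
    """Return the signed events (+1/-1) a pixel emits between two intensity samples."""
    delta = np.log(i_curr + 1e-6) - np.log(i_prev + 1e-6)  # log-intensity change
    n = int(abs(delta) // threshold)                        # number of threshold crossings
    return [int(np.sign(delta))] * n

print(pixel_events(0.30, 0.80))   # brightness rose sharply -> several +1 events
print(pixel_events(0.50, 0.49))   # change below threshold -> no events
```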

b) Application

The event camera is suitable for highly dynamic UGV application scenarios such as SLAM [126], state estimation [127] and object tracking [128].

c) Pros and Cons

The recorded events are sparse in both space and time, which has great potential to reduce the time required for information transmission and processing. In addition, the microsecond-level time resolution gives it a high dynamic measurement range. However, the resolution of the output data is low.

B. Proprioceptive Sensors

Proprioceptive sensors, including GNSS and IMU, are summarized in this section.

1) GNSS

The Global Navigation Satellite System (GNSS) is the most widely used technology for UGV localization. The current mainstream GNSS systems are GPS, developed by the United States; GALILEO, developed by the European Union; GLONASS, developed by Russia; and the BeiDou System, developed by China. GNSS provides absolute position information for the UGV through a series of satellites around the Earth.

The working principle of GNSS is based on measuring the time of flight of signals transmitted by satellites and received by the receiver. The position coordinates and pseudo-range can be obtained in the form (x, y, z, t), where x, y and z represent the coordinates and t relates to the pseudo-range.

Several factors affect the positioning accuracy of GNSS: errors caused by the satellites, including atomic clock and orbit errors; errors caused by signal transmission, including propagation through the ionosphere and troposphere and the multipath effect; and errors caused by receiver noise.
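The pseudo-range principle can be illustrated with a small least-squares solver: given satellite positions and pseudo-ranges, Gauss-Newton iteration recovers the receiver position and clock bias. The satellite coordinates below are synthetic numbers chosen only to make the example self-contained, not real ephemeris data.

```python
import numpy as np

C = 299792458.0   # speed of light, m/s

def solve_position(sat_pos, pseudoranges, iterations=10):
    """Gauss-Newton estimate of receiver position (m) and clock bias (s) from pseudo-ranges."""
    x = np.zeros(4)                                   # state: [x, y, z, c*dt]
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_pos - x[:3], axis=1)
        residual = pseudoranges - (ranges + x[3])
        # Jacobian: negated unit line-of-sight vectors, plus 1 for the clock term
        J = np.hstack([-(sat_pos - x[:3]) / ranges[:, None], np.ones((len(sat_pos), 1))])
        x += np.linalg.lstsq(J, residual, rcond=None)[0]
    return x[:3], x[3] / C

# Synthetic example: four satellites, receiver at the origin with a 1 ms clock bias
sats = np.array([[2.0e7, 0, 1.0e7], [-1.5e7, 1.0e7, 1.5e7],
                 [0, -2.0e7, 1.0e7], [1.0e7, 1.5e7, 2.0e7]], dtype=float)
true_pos, bias = np.zeros(3), 1e-3
pr = np.linalg.norm(sats - true_pos, axis=1) + C * bias
pos, dt = solve_position(sats, pr)
print(np.round(pos, 2), dt)       # approximately [0, 0, 0] and 1e-3
```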

2) IMU

The Inertial Measurement Unit (IMU) includes an accelerometer, a gyroscope and a magnetometer. It is mainly used to measure the acceleration and angular velocity of the UGV, which allows the running status of the vehicle to be monitored and provides real-time feedback for the low-level control of the platform. The IMU often performs fused positioning with GPS to improve positioning accuracy.
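A minimal sketch of this GPS/IMU fusion idea along a single axis is given below: IMU acceleration drives a high-rate Kalman prediction, and lower-rate GPS position fixes correct the accumulated drift. All rates, noise covariances and the simulated motion are assumptions for illustration only.

```python
import numpy as np

dt = 0.01                                         # IMU period: 100 Hz
F = np.array([[1.0, dt], [0.0, 1.0]])             # state: [position, velocity]
B = np.array([[0.5 * dt * dt], [dt]])             # how acceleration enters the state
H = np.array([[1.0, 0.0]])                        # GPS measures position only
Q, R = 1e-3 * np.eye(2), np.array([[4.0]])        # assumed noise covariances

x, P = np.zeros((2, 1)), np.eye(2)
for k in range(1000):                             # 10 s of simulated driving
    accel = 0.5                                   # constant acceleration from the IMU
    x = F @ x + B * accel                         # predict with the IMU measurement
    P = F @ P @ F.T + Q
    if k % 100 == 0:                              # a GPS fix arrives at 1 Hz
        z = np.array([[0.25 * (k * dt) ** 2]])    # noise-free true position, for illustration
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
        x = x + K @ (z - H @ x)                   # correct drift with the GPS fix
        P = (np.eye(2) - K @ H) @ P
print(float(x[0, 0]), float(x[1, 0]))             # fused position and velocity estimates
```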

III. APPLICATION OF SENSORS IN EXISTING PLATFORMS

This section introduces some existing UGVs. All the platforms are listed in TABLE VI, with a focus on the sensor configuration of each platform as well as its basic information and test performance.

IV. FUTURE OUTLOOK

The main hotspots related to sensor technologies are as follows.

A. Sensor Fusion

Usually, two main problems need to be solved for sensor fusion: which sensors should be fused, and how to assign work among the different sensors. In any case, the advantages of different sensors must complement each other in order to achieve a system with higher accuracy and stronger robustness. However, real-time performance and sensor cost must be weighed, since equipping multiple sensors increases the complexity of the model and algorithms.

B. Unusual Object Detection

In addition to the efficient detection of conventional objects such as vehicles, pedestrians and buildings, objects in some special scenarios, including negative obstacles and objects at very short or very long distances, also need to be correctly identified.

Negative obstacles, including potholes, holes and trenches, can threaten the driving safety of a UGV. Objects at very short distances, such as pedestrians beside the vehicle, are often in the blind spots of sensors, while objects at very long distances may be misidentified because of low resolution. Generally speaking, unusual obstacle detection is a great guarantee for the safety of the platform, and more research needs to be carried out in the future.

C. Application in High-speed Platform

High-speed driving is an important guarantee of a UGV's maneuverability, which is essential for completing tasks efficiently. However, if the sensor algorithms cannot generate information about the surroundings quickly enough, the platform is extremely prone to danger during high-speed driving. Thus, how to effectively improve the real-time performance of sensor technologies, and how to validate the effect through substantial vehicle tests, need to be seriously considered in the future.

V. CONCLUSION

Sensor technologies play an increasingly important role in the development of UGVs. This paper presents a brief review of sensor technologies for UGVs, including a brief introduction of each sensor, related algorithms, strengths and weaknesses, and applications in some existing UGVs. Finally, the hotspots of sensor technologies are forecast to highlight research directions.

In general, there are still many aspects of sensors that need to be improved, which is of great research value. Future work should focus on surveying sensor fusion technologies as well as sensor applications in newly developed platforms.


REFERENCES

[1] R. Bishop, "A survey of intelligent vehicle applications worldwide," in Proceedings of the IEEE Intelligent Vehicles Symposium 2000 (Cat. No. 00TH8511), 2000, pp. 25-30: IEEE.

[2] Y. Xu, R. Wang, and B. Li, "A summary of worldwide intelligent vehicle," Automotive Engineering, 2001.

[3] Z. Li et al., "Transferable Driver Behavior Learning via Distribution Adaption in the Lane Change Scenario*," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 193-200.

[4] L. I. Li, F. Y. Wang, N. N. Zheng, and Y. Zhang, "Research and Developments of Intelligent Driving Behavior Analysis," Acta Automatica Sinica, vol. 33, no. 10, pp. 1014-1022, 2007.

[5] H. Y. Chen and Y. Zhang, "An overview of research on military unmanned ground vehicles," Acta Armamentarii, vol. 35, no. 10, pp. 1696-1706, 2014.

[6] G. Grisetti, C. Stachniss, and W. Burgard, "Improved Techniques for Grid Mapping With Rao-Blackwellized Particle Filters," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 34-46, 2007.

[7] S. Kohlbrecher, O. V. Stryk, J. Meyer, and U. Klingauf, "A flexible and scalable slam system with full 3d motion estimation," in 2011 IEEE International Symposium on Safety, Security, and Rescue Robotics, 2011.

[8] K. Konolige, G. Grisetti, R. Kümmerle, W. Burgard, and R. Vincent, "Efficient sparse pose adjustment for 2D mapping," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2010.

[9] Z. Ji and S. Singh, "LOAM: Lidar Odometry and Mapping in Real-time," in Robotics: Science and Systems Conference, 2014.

[10] W. Hess, D. Kohler, H. Rapp, and D. Andor, "Real-time loop closure in 2D LIDAR SLAM," in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016, pp. 1271-1278.

[11] S. Wirges, C. Stiller, and F. Hartenbach, "Evidential Occupancy Grid Map Augmentation using Deep Learning," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 668-673.

[12] K. Ji et al., "CPFG-SLAM:a Robust Simultaneous Localization and Mapping based on LIDAR in Off-Road Environment," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 650-655.

[13] Z. Shen, Y. Xu, M. Sun, A. Carballo, and Q. Zhou, "3D Map Optimization with Fully Convolutional Neural Network and Dynamic Local NDT," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 4404-4411.

[14] A. Sinha, Z. Ding, T. Kirubarajan, and M. Farooq, "Track Quality Based Multitarget Tracking Approach for Global Nearest-Neighbor Association," IEEE Transactions on Aerospace and Electronic Systems, vol. 48, no. 2, pp. 1179-1191, 2012.

[15] S. Hwang, N. Kim, Y. Choi, S. Lee, and I. S. Kweon, "Fast multiple objects detection and tracking fusing color camera and 3D LIDAR for intelligent vehicles," in 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI), 2016, pp. 234-239.

[16] S. Kraemer, M. E. Bouzouraa, and C. Stiller, "Utilizing LiDAR Intensity in Object Tracking," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 1543-1548.

[17] W. Le, L. Chen, J. Zhou, and W. Zhang, "Automatic moving object tracking algorithm based on the improved hausdorff distance," Electronic Measurement Technology, 2014.

[18] S. Reuter, B. Vo, B. Vo, and K. Dietmayer, "The Labeled Multi-Bernoulli Filter," IEEE Transactions on Signal Processing, vol. 62, no. 12, pp. 3246-3260, 2014.

[19] D. Z. Wang, I. Posner, and P. Newman, "Model-free detection and tracking of dynamic objects with 2D lidar," The International Journal of Robotics Research, vol. 34, no. 7, 2015.

[20] M. He, E. Takeuchi, Y. Ninomiya, and S. Kato, "Precise and Efficient Model-Based Vehicle Tracking Method Using Rao-Blackwellized and Scaling Series Particle Filters," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2016, 2016.

[21] F. Ebert and H. Wuensche, "Dynamic Object Tracking and 3D Surface Estimation using Gaussian Processes and Extended Kalman Filter," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1122-1127.

[22] J. Li, W. Zhan, and M. Tomizuka, "Generic Vehicle Tracking Framework Capable of Handling Occlusions Based on Modified Mixture Particle Filter," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 936-942.

[23] W. Armbruster and R. Richmond, "The application of iterative closest point (ICP) registration to improve 3D terrain mapping estimates using the flash 3D ladar system," Proceedings of Spie the International Society for Optical Engineering, vol. 7684, no. 1, pp. 143-153, 2010.

[24] J. Levinson, M. Montemerlo, and S. Thrun, "Map-Based Precision Vehicle Localization in Urban Environments," in Robotics: Science and Systems III, June 27-30, 2007, Georgia Institute of Technology, Atlanta, Georgia, USA, 2007.

[25] M. Magnusson, A. Nuchter, C. Lorken, A. J. Lilienthal, and J. Hertzberg, "Evaluation of 3D registration reliability and speed - A comparison of ICP and NDT," in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3907-3912.

[26] R. Valencia, J. Saarinen, H. Andreasson, J. Vallvé, J. Andrade-Cetto, and A. J. Lilienthal, "Localization in highly dynamic environments using dual-timescale NDT-MCL," in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 3956-3962.

[27] S. D. Billings, E. M. Boctor, and R. H. Taylor, "Iterative Most-Likely Point Registration (IMLP): A Robust Algorithm for Computing Optimal Shape Alignment," Plos One, vol. 10, 2015.

[28] E. Javanmardi, M. Javanmardi, Y. Gu, and S. Kamijo, "Adaptive Resolution Refinement of NDT Map Based on Localization Error Modeled by Map Factors," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018, pp. 2237-2243.

[29] S. Sivaraman, "Learning, Modeling, and Understanding Vehicle Surround Using Multi-Modal Sensing," 2013.

[30] O. Schumann, C. Wöhler, M. Hahn, and J. Dickmann, "Comparison of random forest and long short-term memory network performances in classification tasks using radar," in 2017 Sensor Data Fusion: Trends, Solutions, Applications (SDF), 2017, pp. 1-6.

[31] T. Akita and S. Mita, "Object Tracking and Classification Using Millimeter-Wave Radar Based on LSTM," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1110-1115.

[32] A. Danzer, T. Griebel, M. Bach, and K. Dietmayer, "2D Car Detection in Radar Data with PointNets," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 61-66.

[33] O. Schumann, M. Hahn, J. Dickmann, and C. Wöhler, "Semantic Segmentation on Radar Point Clouds," in 2018 21st International Conference on Information Fusion (FUSION), 2018, pp. 2179-2186.

[34] G. Liu, M. Zhou, L. Wang, H. Wang, and X. Guo, "A blind spot detection and warning system based on millimeter wave radar for driver assistance," Optik International Journal for Light & Electron Optics, vol. 135, pp. 353-365, 2017.

[35] A. Amodio, G. Panzani, and S. M. Savaresi, "Design of a Lane Change Driver Assistance System, with implementation and testing on motorbike," in 2017 IEEE Intelligent Vehicles Symposium, 2017.

[36] G. Liu, M. Zhou, L. Wang, and H. Wang, "A Radar-Based Door Open Warning Technology for Vehicle Active Safety," in 2016 International Conference on Information System and Artificial Intelligence (ISAI), 2016, pp. 479-484.

[37] X. Sun, Z. Zhou, C. Zhao, and W. Zou, "A compressed sensing radar detection scheme for closing vehicle detection," in 2012 IEEE International Conference on Communications (ICC), 2012, pp. 6371-6375.

[38] B. Wu, A. Wan, X. Yue, and K. Keutzer, "Squeezeseg: Convolutional neural nets with recurrent crf for real-time road-object segmentation from 3d lidar point cloud," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 1887-1893: IEEE.

[39] Y. Wang, T. Shi, P. Yun, L. Tai, and M. Liu, "PointSeg: Real-time semantic segmentation based on 3D lidar point cloud," arXiv preprint, 2018.

[40] K. Banerjee, D. Notz, J. Windelen, S. Gavarraju, and M. He, "Online Camera LiDAR Fusion and Object Detection on Hybrid Data for Autonomous Driving," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1632-1638.

[41] D. Feng, X. Wei, L. Rosenbaum, A. Maki, and K. Dietmayer, "Deep Active Learning for Efficient Training of a LiDAR 3D Object Detector," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 667-674.

[42] B. Yang, W. Luo, and R. Urtasun, "Pixor: Real-time 3d object detection from point clouds," in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2018, pp. 7652-7660.


[43] W. Ali, S. Abdelkarim, M. Zidan, M. Zahran, and A. El Sallab, "Yolo3d: End-to-end real-time 3d oriented object bounding box detection from lidar point cloud," in Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 0-0.

[44] S. Kim, H. Kim, W. Yoo, and K. Huh, "Sensor Fusion Algorithm Design in Detecting Vehicles Using Laser Scanner and Stereo Vision," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 4, pp. 1072-1084, 2016.

[45] Y. Zhou and O. Tuzel, "Voxelnet: End-to-end learning for point cloud based 3d object detection," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4490-4499.

[46] Y. Yan, Y. Mao, and B. Li, "SECOND: Sparsely embedded convolutional detection," Sensors, vol. 18, no. 10, p. 3337, 2018.

[47] V. A. Sindagi, Y. Zhou, and O. Tuzel, "MVX-Net: Multimodal VoxelNet for 3D Object Detection," 2019.

[48] K. Shin, Y. P. Kwon, and M. Tomizuka, "RoarNet: A Robust 3D Object Detection based on RegiOn Approximation Refinement," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 2510-2515.

[49] C. R. Qi, L. Yi, H. Su, and L. J. Guibas, "Pointnet++: Deep hierarchical feature learning on point sets in a metric space," in Advances in neural information processing systems, 2017, pp. 5099-5108.

[50] A. H. Lang, S. Vora, H. Caesar, L. Zhou, J. Yang, and O. Beijbom, "PointPillars: Fast encoders for object detection from point clouds," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 12697-12705.

[51] Y. Zhang, B. Song, X. Du, and M. Guizani, "Vehicle Tracking Using Surveillance With Multimodal Data Fusion," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 7, pp. 2353-2361, 2018.

[52] M. Manana, C. Tu, and P. A. Owolawi, "Preprocessed Faster RCNN for Vehicle Detection," in 2018 International Conference on Intelligent and Innovative Computing Applications (ICONIC), 2018, pp. 1-4.

[53] Z. Qian and H. Shi, "Video Vehicle Detection Based on Texture Analysis," in 2010 Chinese Conference on Pattern Recognition (CCPR), 2010, pp. 1-4.

[54] W. Ji, L. Tang, D. Li, W. Yang, and Q. Liao, "Video-based construction vehicles detection and its application in intelligent monitoring system," CAAI Transactions on Intelligence Technology, vol. 1, no. 2, pp. 162-172, 2016/04/01/ 2016.

[55] C. Pan, Z. Zhu, L. Jiang, M. Wang, and X. Lu, "Adaptive ViBe background model for vehicle detection," in 2017 IEEE 2nd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2017, pp. 1301-1305.

[56] Z. Guo, Z. Zhou, and X. Sun, "Vehicle detection and tracking based on optical field," in 2017 International Conference on Security, Pattern Analysis, and Cybernetics (SPAC), 2017, pp. 626-630.

[57] Y. Li, B. Li, B. Tian, F. Zhu, G. Xiong, and K. Wang, "Vehicle detection based on the deformable hybrid image template," in Proceedings of 2013 IEEE International Conference on Vehicular Electronics and Safety, 2013, pp. 114-118.

[58] J. Wang, S. Zhang, and J. Chen, "Vehicle Detection by Sparse Deformable Template Models," in 2014 IEEE 17th International Conference on Computational Science and Engineering, 2014, pp. 203-206.

[59] W. Liu et al., "Ssd: Single shot multibox detector," in European conference on computer vision, 2016, pp. 21-37: Springer.

[60] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint, 2018.

[61] N. Gählert, M. Mayer, L. Schneider, U. Franke, and J. Denzler, "MB-Net: MergeBoxes for Real-Time 3D Vehicles Detection," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 2117-2124.

[62] L. Chen et al., "Surrounding Vehicle Detection Using an FPGA Panoramic Camera and Deep CNNs," IEEE Transactions on Intelligent Transportation Systems, pp. 1-13, 2019.

[63] A. H. Pech, P. M. Nauth, and R. Michalik, "A new Approach for Pedestrian Detection in Vehicles by Ultrasonic Signal Analysis," in IEEE EUROCON 2019 -18th International Conference on Smart Technologies, 2019, pp. 1-5.

[64] T. Wu, P. Tsai, N. Hu, and J. Chen, "Research and implementation of auto parking system based on ultrasonic sensors," in 2016 International Conference on Advanced Materials for Science and Engineering (ICAMSE), 2016, pp. 643-645.

[65] P. Krishnan, "Design of Collision Detection System for Smart Car Using Li-Fi and Ultrasonic Sensor," IEEE Transactions on Vehicular Technology, vol. 67, no. 12, pp. 11420-11426, 2018.

[66] M. Mizumachi, A. Kaminuma, N. Ono, and S. Ando, "Robust Sensing of Approaching Vehicles Relying on Acoustic Cue," in 2014 International Symposium on Computer, Consumer and Control, 2014, pp. 533-536.

[67] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: Real-Time Single Camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, 2007.

[68] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in 2011 International Conference on Computer Vision, 2011, pp. 2564-2571.

[69] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An Open-Source SLAM System for Monocular, Stereo, and RGB-D Cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255-1262, 2017.

[70] R. A. Newcombe, S. J. Lovegrove, and A. J. Davison, "DTAM: Dense tracking and mapping in real-time," in 2011 International Conference on Computer Vision, 2011, pp. 2320-2327.

[71] J. Engel, S. Thomas, and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," 2014.

[72] C. Forster, Z. Zhang, M. Gassner, M. Werlberger, and D. Scaramuzza, "SVO: Semidirect Visual Odometry for Monocular and Multicamera Systems," IEEE Transactions on Robotics, vol. 33, no. 2, pp. 249-265, 2017.

[73] T. J. Liu, H. Lin, and W. Lin, "InertialNet: Toward Robust SLAM via Visual Inertial Measurement," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1311-1316.

[74] H. Xuming, R. S. Zemel, and M. A. Carreira-Perpinan, "Multiscale conditional random fields for image labeling," in Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2004. CVPR 2004., 2004, vol. 2, pp. II-II.

[75] S. Gould, R. Fulton, and D. Koller, "Decomposing a scene into geometric and semantically consistent regions," in 2009 IEEE 12th International Conference on Computer Vision, 2009, pp. 1-8.

[76] E. Shelhamer, J. Long, and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 39, no. 4, pp. 640-651, 2017.

[77] H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, "Pyramid Scene Parsing Network," 2016.

[78] L. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille, "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 4, pp. 834-848, 2018.

[79] V. John et al., "Sensor Fusion of Intensity and Depth Cues using the ChiNet for Semantic Segmentation of Road Scenes," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 585-590.

[80] A. Syed and B. T. Morris, "SSeg-LSTM: Semantic Scene Segmentation for Trajectory Prediction," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 2504-2509.

[81] Y. Kim and D. Kum, "Deep Learning based Vehicle Position and Orientation Estimation via Inverse Perspective Mapping Image," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 317-323.

[82] V. D. Nguyen, T. T. Nguyen, D. D. Nguyen, and J. W. Jeon, "Toward Real-Time Vehicle Detection Using Stereo Vision and an Evolutionary Algorithm," in 2012 IEEE 75th Vehicular Technology Conference (VTC Spring), 2012, pp. 1-5.

[83] J. Leng, Y. Liu, D. Du, T. Zhang, and P. Quan, "Robust Obstacle Detection and Recognition for Driver Assistance Systems," IEEE Transactions on Intelligent Transportation Systems, pp. 1-12, 2019.

[84] H. Königshof, N. O. Salscheider, and C. Stiller, "Realtime 3D Object Detection for Automated Driving Using Stereo Vision and Semantic Information," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1405-1410.

[85] Q. Wang, J. Gao, and Y. Yuan, "Embedding Structured Contour and Location Prior in Siamesed Fully Convolutional Networks for Road Detection," IEEE Transactions on Intelligent Transportation Systems, vol. 19, no. 1, pp. 230-241, 2018.

[86] J. M. Álvarez, A. M. López, T. Gevers, and F. Lumbreras, "Combining Priors, Appearance, and Context for Road Detection," IEEE Transactions on Intelligent Transportation Systems, vol. 15, no. 3, pp. 1168-1178, 2014.


[87] D. Barnes, W. Maddern, and I. Posner, "Find your own way: Weakly-supervised segmentation of path proposals for urban autonomy," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 203-210.

[88] G. Zhang, N. Zheng, C. Cui, Y. Yuzhen, and Y. Zejian, "An efficient road detection method in noisy urban environment," in 2009 IEEE Intelligent Vehicles Symposium, 2009, pp. 556-561.

[89] H. Wang, Y. Wang, X. Zhao, G. Wang, H. Huang, and J. Zhang, "Lane Detection of Curving Road for Structural Highway With Straight-Curve Model on Vision," IEEE Transactions on Vehicular Technology, vol. 68, no. 6, pp. 5321-5330, 2019.

[90] Z. Kim, "Robust Lane Detection and Tracking in Challenging Scenarios," IEEE Transactions on Intelligent Transportation Systems, vol. 9, no. 1, pp. 16-26, 2008.

[91] P. Burger, B. Naujoks, and H. Wuensche, "Unstructured Road SLAM using Map Predictive Road Tracking," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 1276-1282.

[92] A. V. Nefian and G. R. Bradski, "Detection of Drivable Corridors for Off-Road Autonomous Navigation," in 2006 International Conference on Image Processing, 2006, pp. 3025-3028.

[93] Y. Zhang, J. Yang, J. Ponce, and H. Kong, "Dijkstra Model for Stereo-Vision Based Road Detection: A Non-Parametric Method," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 5986-5993.

[94] H. Gomez-Moreno, S. Maldonado-Bascon, P. Gil-Jimenez, and S. Lafuente-Arroyo, "Goal Evaluation of Segmentation Algorithms for Traffic Sign Recognition," IEEE Transactions on Intelligent Transportation Systems, vol. 11, no. 4, pp. 917-930, 2010.

[95] S. Xu, "Robust traffic sign shape recognition using geometric matching," IET Intelligent Transport Systems, vol. 3, no. 1, pp. 10-18, 2009.

[96] Y. Y. Nguwi and A. Z. Kouzani, "Detection and classification of road signs in natural environments," Neural Computing & Applications, vol. 17, no. 3, pp. 265-289, 2008.

[97] S. Houben, "A single target voting scheme for traffic sign detection," in 2011 IEEE Intelligent Vehicles Symposium (IV), 2011, pp. 124-129.

[98] I. M. Creusen, R. G. J. Wijnhoven, E. Herbschleb, and P. H. N. d. With, "Color exploitation in hog-based traffic sign detection," in 2010 IEEE International Conference on Image Processing, 2010, pp. 2669-2672.

[99] X. Baro, S. Escalera, J. Vitria, O. Pujol, and P. Radeva, "Traffic Sign Recognition Using Evolutionary Adaboost Detection and Forest-ECOC Classification," IEEE Transactions on Intelligent Transportation Systems, vol. 10, no. 1, pp. 113-126, 2009.

[100] T. Chen and S. Lu, "Accurate and Efficient Traffic Sign Detection Using Discriminative AdaBoost and Support Vector Regression," IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 4006-4015, 2016.

[101] Y. Tian, J. Gelernter, X. Wang, J. Li, and Y. Yu, "Traffic Sign Detection Using a Multi-Scale Recurrent Attention Network," IEEE Transactions on Intelligent Transportation Systems, pp. 1-10, 2019.

[102] C. Kerl, J. Sturm, and D. Cremers, "Robust odometry estimation for RGB-D cameras," in 2013 IEEE International Conference on Robotics and Automation, 2013, pp. 3748-3754.

[103] F. Endres, J. Hess, J. Sturm, D. Cremers, and W. Burgard, "3-D Mapping With an RGB-D Camera," IEEE Transactions on Robotics, vol. 30, no. 1, pp. 177-187, 2014.

[104] L. Li, Z. Liu, Ü. Özgüner, J. Lian, Y. Zhou, and Y. Zhao, "Dense 3D Semantic SLAM of traffic environment based on stereo vision," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 965-970.

[105] W. Yang, B. Fang, and Y. Y. Tang, "Fast and Accurate Vanishing Point Detection and Its Application in Inverse Perspective Mapping of Structured Road," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 48, no. 5, pp. 755-766, 2018.

[106] Z. Sun, X. J. An, and H. G. He, "CITAVT-IV: an autonomous land vehicle navigated by machine vision," vol. 24, 2002.

[107] P. F. Zhang, K. Z. He, Z. Z. Ouyang, and J. Y. Zhang, "Multifunctional intelligent outdoor mobile robot testbed: THMR-V," vol. 24, 2002.

[108] M. Montemerlo et al., "Junior: The Stanford Entry in the Urban Challenge," Journal of Field Robotics, vol. 25, pp. 569-597, 2008.

[109] C. Urmson et al., "Autonomous driving in urban environments: Boss and the Urban Challenge," Journal of Field Robotics, vol. 25, pp. 425-466, 2008.

[110] C. Reinholtz et al., "Team VictorTango's entry in the DARPA Urban Challenge," in The DARPA Urban Challenge. Springer, 2009, pp. 125-162.

[111] J. Leonard et al., "A Perception-Driven Autonomous Urban Vehicle," Journal of Field Robotics, vol. 25, pp. 727-774, 2008.

[112] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 year, 1000 km: The Oxford RobotCar dataset," The International Journal of Robotics Research, vol. 36, no. 1, pp. 3-15, 2017.

[113] A. Broggi et al., "Extensive Tests of Autonomous Driving Technologies," IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1403-1415, 2013.

[114] J. Ziegler et al., "Making Bertha Drive—An Autonomous Journey on a Historic Route," IEEE Intelligent Transportation Systems Magazine, vol. 6, no. 2, pp. 8-20, 2014.

[115] Z. Liu et al., "Real-Time 6D Lidar SLAM in Large Scale Natural Terrains for UGV," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 662-667.

[116] Baidu Apollo, https://apollo.auto/, 2018.

[117] E. Arnold, O. Y. Al-Jarrah, M. Dianati, S. Fallah, D. Oxtoby, and A. Mouzakitis, "A Survey on 3D Object Detection Methods for Autonomous Driving Applications," IEEE Transactions on Intelligent Transportation Systems, vol. 20, no. 10, pp. 3782-3795, 2019.

[118] S. Wang, J. Yue, Y. Dong, R. Shen, and X. Zhang, "Real-time Omnidirectional Visual SLAM with Semi-Dense Mapping," in 2018 IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 695-700.

[119] K. Yang et al., "Can we PASS beyond the Field of View? Panoramic Annular Semantic Segmentation for Real-World Surrounding Perception," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 446-453.

[120] J. Gu, H. Xiao, W. He, S. Wang, X. Wang, and K. Yuan, "FPGA based real-time vehicle detection system under complex background," in 2016 IEEE International Conference on Mechatronics and Automation, 2016, pp. 1629-1634.

[121] C. Wen-jing, W. Lu-ping, and Z. Lu-ping, "Vehicle detection algorithm based on SLPP-SHOG in infrared image," Laser & Infrared, vol. 46, no. 08, pp. 1018-1022, 2016.

[122] D. Chen, G. Jin, L. Lu, L. Tan, and W. Wei, "Infrared Image Vehicle Detection Based on Haar-like Feature," in 2018 IEEE 3rd Advanced Information Technology, Electronic and Automation Control Conference (IAEAC), 2018, pp. 662-667.

[123] X. Zhang and X. Zhu, "Vehicle Detection in the Aerial Infrared Images via an Improved Yolov3 Network," in 2019 IEEE 4th International Conference on Signal and Image Processing (ICSIP), 2019, pp. 372-376.

[124] S. Mita, X. Yuquan, K. Ishimaru, and S. Nishino, "Robust 3D Perception for any Environment and any Weather Condition using Thermal Stereo," in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 2569-2574.

[125] G. Gallego et al., "Event-based Vision: A Survey," 2019.

[126] H. Rebecq, T. Horstschaefer, G. Gallego, and D. Scaramuzza, "EVO: A Geometric Approach to Event-Based 6-DOF Parallel Tracking and Mapping in Real Time," IEEE Robotics and Automation Letters, vol. 2, no. 2, pp. 593-600, 2017.

[127] A. I. Maqueda, A. Loquercio, G. Gallego, N. García, and D. Scaramuzza, "Event-Based Vision Meets Deep Learning on Steering Prediction for Self-Driving Cars," in 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 5419-5427.

[128] X. Lagorce, C. Meyer, S. Ieng, D. Filliat, and R. Benosman, "Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 8, pp. 1710-1720, 2015.