
Seong-Woo Kim, Seoul National University. E-mail: [email protected]

Wei Liu and Marcelo H. Ang Jr., Department of Mechanical Engineering, National University of Singapore, Singapore. E-mails: [email protected] and [email protected]

Emilio Frazzoli and Daniela Rus, Massachusetts Institute of Technology, Cambridge, MA, USA. E-mails: [email protected] and [email protected]

The Impact of Cooperative Perception on Decision Making and Planning of Autonomous Vehicles

Digital Object Identifier 10.1109/MITS.2015.2409883. Date of publication: 24 July 2015.

Abstract—In this article, we investigate how cooperative perception affects the decision making and planning of autonomous vehicles on the road. Cooperative perception is the exchange of local sensing information with other vehicles or infrastructure via wireless communications, by which the perception range can be extended considerably, up to the boundary of the connected vehicles. This augmented perception capability can provide oncoming traffic information beyond line-of-sight and field-of-view, which enables better control of both manned and unmanned vehicles. We first present an on-road sensing system that provides see-through, lifted-seat, and satellite views to drivers. Then, we investigate how the extended perception capability can contribute to situation awareness on the road. Finally, we provide methods for safer and smoother autonomous driving using the augmented situation awareness and perception capability. All introduced and proposed concepts are implemented and validated on autonomous vehicles.

I. Introduction

In this article, we investigate the impact of cooperative perception on the decision making and planning of autonomous driving vehicles. To understand the motivation of this research, we start with an intuitive example. Suppose a driver sits on a seat lifted 3 m above the ground, as in Figure 1, from which the driver can make decisions based on a long-term perspective. Instead of physically lifting a driving seat, we can consider alternative approaches such as cooperative perception, a drone, or a periscope. In this article, we investigate cooperative perception as a technical solution to provide such far sight to drivers.

The basic principle of cooperative perception on the road is to share local perception information with other vehicles or infrastructure through wireless communications. Through cooperative perception, a driver can see upcoming traffic situations ahead, which is useful in driving situations such as hidden obstacle avoidance, safe lane changing/overtaking, and smooth braking/acceleration. Moreover, the perception range can be extended beyond line-of-sight or field-of-view, up to the boundary of the connected vehicles. The perception results can be represented in the form of see-through, lifted-seat, or satellite views, which can serve as driving assistance tools.

Another compelling aspect of cooperative perception is that it is a more affordable method than conventional long-range sensors. Furthermore, the benefits of cooperative perception are not limited to human drivers. In emergency situations, such as the sudden appearance of an obstacle in front of a drowsy driver, an autonomous system can automatically intervene in a timely and safe manner. Such far-sight traffic information also allows the detection of traffic congestion ahead in each lane. By utilizing this information, autonomous drivers can make non-myopic lane-changing decisions.

This article presents technical principles and practical solutions to realize the ambitious goals of autonomous driving using cooperative perception. As a first step toward these goals, we present a cooperative perception system through which the intention of other vehicles can be better inferred. Then, we provide decision making and planning methods that use the augmented situation awareness and perception capability to improve the traffic safety and efficiency of autonomous vehicles on the road. The feasibility of this work is validated by our implementation and experimental results using vehicles on the road.

II. Brief History of Cooperative Perception and Cooperative Autonomous Driving

Cooperative perception started with mobile and robotic sensor networks for applications such as environmental surveillance and monitoring [1]. Recently, cooperative perception has been considered one of the essential technologies for safety improvement and traffic flow efficiency on the road [2]–[4]. The trend has been accelerated by the advent of intra-vehicle and inter-vehicle communication technologies [5].

From the perspective of human drivers, the extended spatial information can be directly used for driving assistance as part of Advanced Driving Assistance Systems, for example cooperative collision warning [6] or overtaking assistance. In particular, see-through systems for overtaking assistance [7]–[12] have been researched recently. Their common principle is to broadcast video to the following vehicle that wants to overtake, using video compression codecs and inter-vehicle communication standards such as H.264/Advanced Video Coding [13] and IEEE 802.11p/Wireless Access in Vehicular Environments [14], [15].

The extended perception range is also useful for autonomous driving. In this context, Cooperative Adaptive Cruise Control (CACC) [16], [17] is a good example of cooperative autonomous driving, and one of the most tangible technologies for cooperative driving [18], [19]. As a milestone event, the Grand Cooperative Driving Challenge was held in the Netherlands in 2011 to test the feasibility of cooperative driving with CACC [20]–[25].

As a successor of cruise control, the main concern of CACC is longitudinal control, i.e., acceleration and deceleration. For this reason, CACC supports only partial autonomous driving. Fully autonomous driving also requires lateral control, which calls for a more general solution such as a robotic motion planner. In this article, we introduce our motion planner, which enables fully autonomous driving in terms of both longitudinal and lateral control by utilizing an extended perception range.

Autonomous vehicles controlled by a perception-based motion planner have been actively researched for more than a decade [26]. It has been demonstrated that autonomous ground vehicles can drive on the road while complying with urban traffic laws [27], and on intercontinental routes [28]. However, there are still several aspects to be improved, such as prohibitively expensive sensors. In this sense, cooperative perception is one of the economically viable options to enable affordable perception for autonomous driving.

Fig. 1 Lifted-seat view, where the driver's seat is lifted 3 m above the ground.

The agenda and experimental results of cooperative autonomous driving have been presented in [29]–[31]. In particular, Ref. [29] describes the practice of cooperative autonomous driving with shared perception data. Ref. [32] provides specific methods for multi-lane cooperative autonomous driving involving lane merging, lane splitting, and multi-lane platooning. Security issues of cooperative autonomous driving were investigated in [33].

III. Cooperative Perception on the Road

Let us examine how local and remote information are merged to augment driving situation awareness on the road.

A. System Description

An autonomous vehicle starts to move toward its next destination upon the request of passengers or operators. This requires an autonomous driving system, which consists of several subsystems such as perception, planning, and control. Figure 2 shows the overall architecture of the autonomous driving system using cooperative perception. The left-most boxes show the components for realizing stand-alone autonomous driving. The right dashed box indicates the components for autonomous driving using cooperative perception, whose fundamental role is to support better decision making for stand-alone autonomous driving.

The principle of cooperative perception is to merge local information with remote information from other vehicles or infrastructure. Thanks to the remote information, the ego vehicle can see oncoming traffic situations ahead in a see-through manner. Cooperative perception can contribute to long-term perspective and short-term perspective decisions at the same time. In the system architecture, cooperative perception is used for a long-term perspective driving decision, to obtain a near-future velocity benefit, and for a short-term perspective driving decision, for hidden collision avoidance, rather than being used directly for localization or planning.

In Fig. 2, local sensing information is exchanged with other vehicles or infrastructure via wireless communications. All local and remote sensing information is fused by the cooperative perception module. After the fusion procedure, the fused information can be delivered to 1) the planner for autonomous driving, and 2) following vehicles or infrastructure for the purpose of cooperative perception. Since the main focus of this article is the impact of cooperative perception on autonomous driving, not cooperative perception itself, we omit the detailed mechanisms of cooperative perception. The details of the multi-vehicle map merging methods that we used and proposed can be found in [4].
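To make the fusion step concrete, the following is a minimal sketch of the core coordinate transform behind such map merging, assuming a known 2D relative pose between the vehicles; the full method in [4] also estimates this relative pose and fuses the transformed points into an occupancy grid. The function name and interface here are ours, for illustration only.

```python
import numpy as np

def merge_remote_scan(remote_points, rel_pose):
    """Transform a remote vehicle's 2D laser points into the ego frame.
    remote_points: (N, 2) array in the remote vehicle's frame.
    rel_pose: (x, y, theta), the remote vehicle's pose relative to the ego.
    A minimal sketch; pose estimation and occupancy-grid fusion are omitted."""
    x, y, theta = rel_pose
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])           # rotation: remote frame -> ego frame
    return remote_points @ R.T + np.array([x, y])

# e.g., points from the first leader 20 m ahead, heading aligned with the ego:
ego_frame_points = merge_remote_scan(np.array([[5.0, 0.0], [6.0, 1.0]]),
                                     rel_pose=(20.0, 0.0, 0.0))
```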

B. Experimental Results

Let us look into the results and performance of the map merging methods. We conducted real experiments using three vehicles on the road, as shown in Figure 3, where the left-most and right-most vehicles are the ego vehicle and the second leader, respectively. The experiments were conducted on a campus road at the National University of Singapore (NUS).

Fig. 2 Overall architecture of an autonomous driving vehicle using cooperative perception. The stand-alone components comprise localization (using a destination and an a priori map), local perception, planning, and control; the cooperative autonomous driving components comprise cooperative perception (exchanging data with other vehicles/infrastructure), intention awareness, velocity benefit and collision risk quantification, long-term and short-term perspective decisions, the lane-changing decision, and longitudinal control adjustment.

Fig. 3 Test vehicles equipped with cooperative perception systems, where the left-most, middle, and right-most vehicles are the ego vehicle, the first leader, and the second leader, respectively.


Evaluation data was collected on a one-kilometer-long road, which has a U-turn, up/down slopes, and buildings alongside. To evaluate weather effects, experimental data was collected on sunny and cloudy days, between 12 pm and 5 pm, over several days. Driving is on the left side of the road, and the speed limit is 40 km/h according to the local traffic rules.

Figure 4 shows snapshots of map merging from multiple vehicles driving on the road. In the satellite view, the red, green, and blue dots represent laser scan points detected by the ego vehicle, the first leader, and the second leader, respectively. The trapezoidal images are the merged results from the vision information of forward-looking cameras.

The satellite view of Fig. 4 represents map merging results with range sensor readings and inverse-perspectively mapped vision images. Based on the merged results, we can also provide see-through and lifted-seat views, as shown in the figure. Note that, with the support of cooperative perception, the ego vehicle can 1) see a traffic sign on the road surface in the blind spot of the ego vehicle ("AHEAD" in Fig. 4) and 2) know the speed and distance of preceding vehicles beyond line-of-sight (the second leader in Fig. 4). The cuboids represent simplified vehicle models; the green and blue cuboids represent the first and the second leader, respectively.

Table 1 summarizes the delay measurements for three message profiles, i.e., laser scan data, vision metadata (point clouds), and compressed vision images, via IEEE 802.11g, IEEE 802.11n, 3G HSDPA, and 4G LTE. To obtain this table, we set up two nodes 10 m away from each other in an open public area and measured the communication delay for around one hour for each message profile and radio interface. The measured results will differ at different times and places, because the performance of the wireless technologies we used is easily influenced by uncontrollable factors. However, these experimental results indicate the tendency and overall performance of a number of representative configurations.

In terms of average delay, IEEE 802.11g and IEEE 802.11n outperform 3G HSDPA and 4G LTE. However, their communication performance decreases as the inter-vehicle distance increases, whereas 3G HSDPA and 4G LTE were not affected by the distance between the communicating nodes. Since the target of the proposed system is moving vehicles on the road, the communication performance should be investigated from the perspective of its impact on vehicle control, specifically position errors in this evaluation. In terms of average delay, the position error of one hop is 24 cm at 100 km/h in the case of laser scan data over IEEE 802.11g; the worst-case position error becomes 5 m at 100 km/h. This may be good enough for control-purpose information, depending on the goal or task.

Fig. 4 See-through/lifted-seat/satellite views using cooperative perception (lifted-seat views at 4 m and 30 m), where the cuboids represent simplified vehicle models. The green and blue cuboids represent the first and the second leader, respectively. In the satellite view, red, green, and blue dots represent laser scan points detected by the ego vehicle, the first leader, and the second leader, respectively. All views are processed online.

Table 1. Average communication delay (Avg./Std., ms), measured between two nodes 10 m apart.

  Message profile:        Laser Scan    Vision (Meta)   Vision (Comp)
  Typical size:           752 B         ~6 KB           50–110 KB

  Wi-Fi, IEEE 802.11n     1.4/0.9       4.7/10.6        14.5/6.9
  Wi-Fi, IEEE 802.11g     8.8/16.6      11.4/18.9       1322.4/920.4
  4G LTE                  89.9/21.5     91.8/25.1       1612.3/363.6
  3G HSDPA                189.6/51.3    2143.5/223.8    4440.0/1131.4


However, communication delay becomes significantly more uncertain as the size of the data increases. Depending on the application and purpose, the message profile must be carefully determined.
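As a quick sanity check of the numbers above, the delay-induced position error is simply speed multiplied by delay; a minimal sketch:

```python
def delay_position_error(speed_kmh, delay_ms):
    """Position error (m) induced by communication delay: speed x delay."""
    return (speed_kmh / 3.6) * (delay_ms / 1000.0)

# Average 802.11g delay for laser scan data (Table 1): 8.8 ms at 100 km/h.
avg_err = delay_position_error(100, 8.8)      # ~0.24 m, the 24 cm quoted above
# A worst-case delay near 180 ms reproduces the ~5 m error quoted above.
worst_err = delay_position_error(100, 180)    # ~5 m
```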

IV. Impact of Cooperative Perception on Situation Awareness

One obvious impact of cooperative perception on autonomous driving is better situation awareness. For situation awareness, an important observation about multi-vehicle driving on the road is that each vehicle is correlated with other vehicles. For example, a preceding vehicle strongly affects the motion intention of the following vehicle. In Fig. 3, the ego vehicle is mostly affected by the first leader; likewise, the first leader is affected by the second leader. This tendency becomes more pronounced as traffic density increases.

Given this motion intention correlation, the longer-range information obtained through cooperative perception can be useful for better inference of the motion intention of preceding vehicles. Figure 5 summarizes the schematic diagram of motion intention awareness using cooperative perception. The grey circles represent perception from local or remote sensors. The white circles represent the driver's intention, such as accelerating, decelerating, turning left, or turning right. This intention-awareness problem can be modeled with a Hidden Markov Model (HMM), where the perception and the intention correspond to the observation and the hidden state, respectively.

Formally speaking, we define a driver’s intention set as

H = {Stop, Move, LaneChange}. (1)

According to the speed profile features in Table 2, we also classify observations into four categories:

O = {Accel., Decel., Parking, Turning}. (2)

Then, the intention inference for a single preceding vehicle, shown in Fig. 5(a), can be summarized as an HMM:

λ_H = {H, O, φ, T, E}, (3)

where φ is the initial motion intention distribution, T is the transition probability matrix, and E is the emission matrix.

In Fig. 5(b), the extension to multiple vehicles without intention correlation is straightforward, because each vehicle can be associated with an independent HMM λ_H and the inference processes can proceed independently.

However, the Markov property for each individual vehicle no longer holds exactly when the intention correlation is considered, as shown in Fig. 5(c). Recognizing this issue, the joint intention state Φ_t = {φ_t^{1:N}} and the joint observation O_t = {o_t^{1:N}} over all vehicles can be constructed at each time t, where φ_t^i and o_t^i denote vehicle i's intention and observation, respectively. The forward intention inference problem can then be solved recursively:

P(Φ_{t+1}, O_{0:t+1}) = P(O_{t+1} | Φ_{t+1}) Σ_{Φ_t} P(Φ_{t+1} | Φ_t) P(Φ_t, O_{0:t}). (4)

Exact motion intention inference following Eq. (4) can be intractable; therefore, approximate inference techniques, such as a particle filter, can be employed to account for the enlarged state space and make the inference and model training tractable. The details can be found in our previous work [34].
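The following is a minimal sketch of such a particle-filter forward step for the joint intention state. The transition and emission matrices here are hypothetical placeholders, and the per-vehicle transitions are treated as independent for brevity, whereas the model in [34] couples a follower's transition to its leader's intention.

```python
import numpy as np

N_VEHICLES, N_PARTICLES, N_STATES = 3, 500, 3   # states: 0=Stop, 1=Move, 2=LaneChange
rng = np.random.default_rng(0)

# Hypothetical per-vehicle transition matrix T and emission matrix E
# (rows: hidden intention; emission columns: Accel/Decel/Parking/Turning).
T = np.array([[0.8, 0.15, 0.05],
              [0.1, 0.8,  0.1 ],
              [0.2, 0.3,  0.5 ]])
E = np.array([[0.1, 0.2, 0.6,  0.1 ],
              [0.4, 0.3, 0.05, 0.25],
              [0.3, 0.3, 0.1,  0.3 ]])

def step(particles, weights, obs):
    """One forward-filter step: propagate joint intention particles,
    reweight by the joint observation likelihood, then resample."""
    # Propagate each vehicle's intention (leader-follower coupling omitted).
    for i in range(N_VEHICLES):
        cum = T[particles[:, i]].cumsum(axis=1)
        particles[:, i] = (rng.random((len(particles), 1)) < cum).argmax(axis=1)
    # Weight by the emission likelihood of each vehicle's observation.
    for i in range(N_VEHICLES):
        weights = weights * E[particles[:, i], obs[i]]
    weights /= weights.sum()
    # Resample to avoid particle degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

particles = rng.integers(0, N_STATES, size=(N_PARTICLES, N_VEHICLES))
weights = np.full(N_PARTICLES, 1.0 / N_PARTICLES)
particles, weights = step(particles, weights, obs=[1, 1, 0])   # Decel, Decel, Accel
belief_stop_leader1 = np.mean(particles[:, 0] == 0)   # P(first leader intends Stop)
```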

Before proceeding to the online inference, the parameters of the HMM should be determined by a training process, which was accomplished with the Baum-Welch algorithm [35] using data collected from ten normal manual drives on the road.

To evaluate the improvements achieved by introducing cooperative perception, the overall driving scenario is designed as follows. The second leader smoothly accelerates from a stop. After a while, the second leader suddenly decelerates to a stop. The first leader and the ego vehicle follow the second leader accordingly.

Figure 6 shows the leaders' speed profiles and the intention inference results with and without cooperative perception.

Fig. 5 (a) Standard HMM for a single leader's intention inference, (b) multi-vehicle HMM without intention correlation, and (c) multi-vehicle HMM considering intention correlation.

Table 2. Observation classification of a vehicle based on its speed profile (m/s).

  Parking        V_curr < 0.5
  Turning        V_curr^y / V_curr^x > 1/2, V_curr > 0.5
  Acceleration   V_curr > V_pre, V_curr > 0.5, not Turning
  Deceleration   V_curr < V_pre, V_curr > 0.5, not Turning

  V_curr: current velocity; V_pre: velocity at the previous time step.
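A direct transcription of Table 2 into a classifier might look as follows. Here we read V^x and V^y as the longitudinal and lateral velocity components, and the check ordering and tie-breaking are our own, so treat this as a sketch rather than the authors' exact rule:

```python
import math

def classify_observation(vx, vy, v_prev):
    """Classify the current speed profile into the observation set of Eq. (2)
    using the Table 2 thresholds. vx/vy: longitudinal/lateral velocity (m/s)."""
    v_curr = math.hypot(vx, vy)
    if v_curr < 0.5:
        return "Parking"              # V_curr < 0.5 m/s: essentially stationary
    if abs(vy) > 0.5 * abs(vx):
        return "Turning"              # |V^y / V^x| > 1/2 (safe when vx == 0)
    if v_curr > v_prev:
        return "Accel"                # speeding up, not turning
    return "Decel"                    # slowing down (equal speeds fall here)
```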


In the figure, the black circles, white circles, and black triangles indicate the beliefs of Stop, Move, and LaneChange, respectively, and the white triangles indicate the speed profile. To avoid overlap between the speed and probability curves in Fig. 6, we added 1 m/s to all speed values, which shifts the speed curves to the upper part of the graphs. Fig. 6(a) shows the second leader's speed profile and intention. Fig. 6(b) shows the first leader's speed profile and intention using cooperative perception. Fig. 6(c) shows the first leader's speed profile and intention without cooperative perception.

Instead of exhaustively explaining the evolution of the intention inference results, we highlight an interesting situation where cooperative perception significantly improves the quality of the motion intention inference. Consider t = 12–15 s, when the second leader started to decelerate. Without cooperative perception, the belief that the first leader keeps moving in Fig. 6(c) increases smoothly, because the inference relies only on local observation, and the observation gives increasing speed as input. In contrast, with cooperative perception, the belief in the first leader's Move intention decreases rapidly and is lower than in Fig. 6(c). For example, the belief values of the Stop intention with and without cooperative perception are 0.62 and 0.57 at 12 s, respectively.

As expected, intention awareness using cooperative perception on the road can react earlier to a hidden obstacle or a sudden stop of a hidden preceding vehicle, even when the first leader keeps or increases its current speed. In the following section, we investigate methods that benefit from this augmented situation awareness.

V. Impact of Cooperative Perception on Decision Making and Planning

This section examines how the see-through and situation awareness capabilities can contribute to better decision making and planning for safe and non-myopic autonomous driving.

A. Reactive Early Lane Changing

As a first step, we discuss how the see-through and extended perception capability enables an early lane-changing decision and triggers the corresponding path planning for forward collision avoidance.

1) See-Through Collision Warning Using Cooperative Perception: For early lane-changing decision making, a specific functionality is necessary to warn of a possible forward collision. We discuss how the collision warning module can benefit from cooperative perception.

There have been many forward collision warning algorithms along with risk assessment methods, including time-based and distance-based approaches. In this article, we consider the time-based approach, because it has been successfully tested on the road and used commercially [36]. Time-based methods usually use Time-To-Collision (TTC), which can be formulated as

TTC_{i,j} = d_{i,j} / (v_i − v_j), (5)

Fig. 6 Belief (probability) and speed (m/s) versus time (s): (a) the second leader's speed profile and intention using cooperative perception; (b) the first leader's speed profile and intention using cooperative perception; (c) the first leader's speed profile and intention without cooperative perception.


where TTC_{i,j} is the time-to-collision between vehicles i and j, d_{i,j} is the distance between vehicles i and j, and v_i is the speed of vehicle i. A forward collision warning is activated if TTC_{i,j} falls below a certain threshold time determined by safety requirements. Thanks to the see-through characteristic of cooperative perception, j can be extended beyond the first leader. Figure 7 illustrates the see-through collision warning.
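A minimal sketch of this see-through extension of Eq. (5): iterate the TTC test over all known leaders, including those reported only via cooperative perception. The function and its inputs are hypothetical, for illustration:

```python
def see_through_ttc_warning(ego_speed, leaders, ttc_threshold=15.0):
    """Raise a forward collision warning if any (possibly hidden) leader
    violates the TTC bound of Eq. (5). `leaders` is a list of
    (distance_m, speed_mps) pairs ordered from nearest to farthest;
    entries beyond the first come from cooperative perception."""
    for distance, speed in leaders:
        closing = ego_speed - speed
        if closing <= 0.0:
            continue                  # not closing in; no collision course
        ttc = distance / closing      # Eq. (5): TTC = d / (v_i - v_j)
        if ttc < ttc_threshold:
            return True               # warn: predicted collision within bound
    return False

# e.g., a hidden second leader braking hard 40 m ahead triggers the warning:
warn = see_through_ttc_warning(10.0, [(15.0, 9.5), (40.0, 2.0)])
```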

2) Utility Evaluation: Given this cooperative perception based see-through collision warning mechanism, its utility is elaborated from two perspectives: driving assistance and reactive lane-changing triggering.

The most straightforward application of the collision warning module is driving assistance, where drivers can adaptively change their driving behavior according to the warning signals.

We evaluated the see-through forward collision warning using real cars driven by human drivers on the road. The test scenario is as follows. Three vehicles move forward on a single-lane road, with test drivers driving the ego vehicle. Then, the second leader suddenly stops at a certain position. To avoid collision, the first leader and the ego vehicle stop accordingly. Note that the first leader is not equipped with a collision warning system.

In this experiment, the second leader was moving forward at 2–5 m/s before its sudden braking, and the TTC threshold was set to 15 s. These parameters are somewhat conservative, primarily due to safety concerns. The brake lights of the second leader were deliberately turned off, which makes it difficult for the first leader to notice whether the second leader is decelerating.

Figure 8 shows our driving assistance system installed in our test vehicle. For visualization, we installed a 23-inch display near the steering wheel, as shown in Fig. 8. This system is one example of cooperative perception for driving assistance; one can use a different type of display or install it at a different position according to preference or requirements. Figure 9 shows the screenshot corresponding to Figure 8.

In the satellite view area, the green circle indicates the safety status, where green means that no potential collision is detected. If a potential collision is detected, the indicator turns red along with a sound alarm, which lasts for 2 s per warning activation. The sound alarm enables a human driver to focus on driving, without the distraction of watching the display, until a dangerous situation is detected. In our experiments on the road, test drivers using this system could make non-myopic driving decisions with confidence when changing lanes and overtaking a slow preceding vehicle.
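For illustration, the indicator behavior described above can be captured by a small state machine; `play_alarm` is a hypothetical audio hook, and the exact behavior of our display may differ:

```python
class WarningIndicator:
    """Sketch of the indicator described above: the circle is green when no
    potential collision is detected and red otherwise, and each new warning
    activation triggers a sound alarm lasting 2 s."""
    ALARM_DURATION_S = 2.0

    def __init__(self, play_alarm):
        self.play_alarm = play_alarm   # hypothetical hook: play_alarm(duration_s)
        self.was_warning = False

    def update(self, collision_warning):
        if collision_warning and not self.was_warning:
            # Rising edge: a new warning activation starts the 2 s alarm.
            self.play_alarm(self.ALARM_DURATION_S)
        self.was_warning = collision_warning
        return "red" if collision_warning else "green"
```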

We collected data through eight trials on the road with three human test drivers. Figure 10 summarizes the results. In the experiment, the second leader suddenly decelerates. The see-through collision warning, with its visual and sound alarms, is activated on average 2.1 s later, according to the preset TTC threshold and cooperative perception. The test driver pushes the brake pedal on average 0.8 s after that. The first leader notices the deceleration of the second leader and then decelerates as well. In this experiment, the human test drivers detected the sudden braking of the second leader earlier than the first leader did, stopping as much as 0.33 s earlier on average.

Fig. 7 Comparison of collision warning with and without cooperative perception.

Fig. 8 Driving assistance system using cooperative perception, showing the collision warning indicator, satellite view, and see-through view.

Fig. 9 Screenshot of the proposed cooperative driving assistance system, corresponding to Figure 8.

Fig. 10 Average deceleration timing and warning activation: the second leader decelerates at 0 s, the collision warning activates at 2.1 s, the ego vehicle decelerates at 2.9 s, and the first leader decelerates at 3.3 s.


3) Reactive Lane-Changing Triggering: Building on the driving assistance results, we extend the see-through collision warning to reactive lane-changing triggering in this section. When the collision warning signal is raised, the lane-changing module is reactively triggered for early obstacle avoidance. The corresponding experimental results and comparisons are shown in Figure 11.

In Fig. 11(a), the leader and the ego vehicle are moving forward. Suddenly, an obstacle appears in front of the leader vehicle, and only the leader detects it, due to the lack of cooperative perception and the line-of-sight limitation of the ego vehicle. In Fig. 11(b), the leader and the ego vehicle suddenly stop, and the ego vehicle then performs path replanning to overtake the stopped leader. However, it is difficult to find a feasible overtaking path in the case of Fig. 11(b), because the ego vehicle and the leader end up too close after the sudden stop.

In Fig. 11(c), early lane changing is triggered by the see-through forward collision warning, where RRT* is used for motion planning [37]. Since the inter-vehicle space is sufficient to overtake the leader vehicle in this case, the motion planner can find an overtaking path promptly and the ego vehicle can overtake the leader smoothly. Fig. 11(d) shows the case where both vehicles smoothly avoid the sudden obstacle.

As a result, lane-changing triggering can be activated promptly thanks to the see-through characteristic of cooperative perception.

B. Impact on Decision Making and Planning

We further investigate how a decision making and planning module can benefit from the improved situation awareness in a proactive sense.

As a preliminary, the decision-making policy based on the Stop belief is specified in Table 3. In essence, the vehicle needs to adaptively accelerate and decelerate to follow the front vehicle, unless the front vehicle's Stop belief is high, in which case the Overtake action is triggered.

The driving scenario is similar to that of lane-changing triggering: the first leader and the ego vehicle were moving forward together until the first leader came to a stop to avoid a static vehicle occupying the lane. The autonomous vehicle is expected to reason about the leaders' motion intentions and make a proper decision efficiently. Note that the triggering of obstacle avoidance in this section is raised by the decision-making module, which differs from the purely reactive behavior in Fig. 11.

As shown in Figure 12(c), when the first leader stopped, its Stop intention belief became strong enough that the Overtake action was triggered, indicated by the red marker. The motion planner then searched for feasible trajectories to avoid the front vehicles, as shown in Fig. 12(d). Eventually, the re-planned trajectory was committed and followed by the vehicle, as shown in Fig. 12(e). Two snapshots of the corresponding overtaking process are illustrated in Fig. 13.

For the purpose of quantitative performance evaluation, another experiment following the same scenario but without cooperative perception was conducted.

Fig. 11 Early automated lane changing for early obstacle avoidance. (a) The ego vehicle (red) and the leader vehicle (yellow) are moving forward. (b) A sudden obstacle appears, so both vehicles decelerate accordingly, leaving little space between the two vehicles for a lane change by the ego vehicle. (c) The red vehicle changes lanes earlier through the see-through forward collision warning. (d) Two self-driving vehicles supported by cooperative perception avoid the sudden obstacle smoothly at the same time.

Table 3. Decision-making policy.

  Action        Follow       Decelerate    Overtake
  Stop belief   [0.0, 0.5)   [0.5, 0.75)   [0.75, 1.0]


Without explicitly showing the detailed navigation process, Fig. 14 charts the comparison of the vehicle's behavior with respect to the distance to the front vehicle and the velocity. With cooperative perception, the safety gap increased by 1.198 m and the autonomous vehicle could react 1.43 s earlier.

This experimental result demonstrates that the improved situation awareness from cooperative perception allows more time to overtake the leaders. Once an overtaking decision is made, anytime, risk-aware motion planning is necessary to generate a safe trajectory within the time available to complete the overtaking maneuver.

Let us now move our focus to the impact on planning for autonomous driving. With the extended sensing range and see-through capability, obstacles become known earlier, so the efficiency of motion planning can be improved: the re-planning frequency can be greatly reduced [38], and a smooth trajectory can be generated with fewer samples, as shown in Fig. 11.

However, the extended-range information of cooperative perception raises new issues, such as a longer planning horizon, vehicle motion noise, localization error, and the uncertainty of moving objects far from the ego vehicle. To tackle these issues, we consider trajectory-level planning, specifically risk-aware trajectory generation. The risk-aware planning used in this article was proposed in [39]; here we focus on how to apply the planning method to our target system.
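For intuition, the sketch below shows one simple way to enforce a risk bound on a candidate trajectory: estimate its collision probability by Monte Carlo under Gaussian position uncertainty and reject it if the estimate exceeds the bound. This is a brute-force stand-in; the actual planner in [39] is an incremental sampling-based algorithm, and all names and parameters here are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def trajectory_risk(waypoints, obstacles, pos_sigma, n_samples=1000, clearance=1.5):
    """Monte Carlo estimate of a trajectory's collision probability: the
    fraction of sampled executions (Gaussian position noise, std pos_sigma)
    that come within `clearance` meters of any obstacle."""
    waypoints = np.asarray(waypoints, dtype=float)      # (T, 2)
    obstacles = np.asarray(obstacles, dtype=float)      # (M, 2)
    noise = rng.normal(0.0, pos_sigma, size=(n_samples,) + waypoints.shape)
    samples = waypoints[None] + noise                   # (n_samples, T, 2)
    # Distance from every sampled waypoint to every obstacle.
    d = np.linalg.norm(samples[:, :, None, :] - obstacles[None, None], axis=-1)
    collided = d.min(axis=(1, 2)) < clearance           # any point too close?
    return collided.mean()

def accept(waypoints, obstacles, pos_sigma, risk_bound=0.05):
    """Accept a candidate trajectory only if its estimated risk meets the bound."""
    return trajectory_risk(waypoints, obstacles, pos_sigma) <= risk_bound
```

Raising the risk bound (e.g., from 0.05 to 0.2) admits shorter but riskier trajectories, which is the safety/efficiency trade-off visible in Fig. 15.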

Figure 15 shows risk-aware planning results for three different risk bounds. Figs. 15(a) and (b) show the vision views from the leader and the ego vehicle, respectively. Figs. 15(c) to (e) show the risk-aware planning results with the risk bound set to 0.05, 0.15, and 0.2, respectively, where we can observe a trade-off between safety and efficiency.

VI. Lessons Learned and Remaining Challenges

In this article, we introduced the advantages of cooperative perception and specific methodologies to utilize it for autonomous driving at both the behavioral and trajectory levels. To clearly show the benefits of this system, we presented specific scenarios such as automated collision avoidance and the decision/execution of lane changing/overtaking in both reactive and proactive manners. Such benefits fundamentally come from a see-through sensing capability enabled by remote information delivered via wireless communications. However, we should note that remote information carries more uncertainty than local information, due to communication delay and map merging error, along with security issues.

Considering this, we have designed and developed our autonomous vehicles to make safety-critical decisions relying on local sensing information, whereas remote information supports non-myopic decisions.

Fig. 12 Situation-aware obstacle avoidance process (t = 3.5 s, 4.1 s, and 13.5 s): in (c), the red circle marker indicates that the Overtake signal is triggered and the yellow arrow at the top is the temporal goal; the motion planning process is demonstrated in (d); (e) shows that the first leader is successfully avoided and the Decelerate action is triggered.

Fig. 13 Snapshots of automated obstacle avoidance by situation-aware decision making.

Fig. 14 Autonomous vehicle behavior comparison, with and without cooperative perception: (a) distance to the front vehicle (m) versus time (s); (b) vehicle speed (m/s) versus time (s). With cooperative perception, the gap is larger by 1.198 m and the reaction occurs 1.43 s earlier.


For this design goal, global and relative localization are performed separately in our system. Global localization is used for basic autonomous driving and is performed with local sensing information. Relative localization, on the other hand, is used for merging remote information, which assists driving decisions. This approach is stable and robust to both performance variations and large perturbations in the remote information.

One of the key technologies enabling cooperative autonomous driving is reliable wireless communication. This article focuses on 1) sensing information fusion and 2) planning and control using the fusion result. In this context, we implemented and tested our system using several off-the-shelf wireless technologies, including IEEE 802.11g, IEEE 802.11n, 3G HSDPA, and 4G LTE. In our experiments, IEEE 802.11n works well within at least a 30 m range on urban roads; however, its performance inherently depends on the distance between vehicles. On the other hand, 3G and 4G are independent of the inter-vehicle distance, but their communication delay tends to be more unpredictable according to our experimental results. It is important to choose a communication protocol and physical interface that satisfy the requirements of the target application, e.g., driving assistance or autonomous driving, considering the pros and cons of each technology. Future work includes an extensive evaluation of cooperative perception using IEEE 802.11p [14], [15].

Due to the omni-directional characteristics of typically available wireless devices, it is difficult to know who sent which message before connection establishment. This is categorized as a vehicle identification problem, which is theoretically addressed in [40]. Sharing information with neighboring vehicles also inherently includes a circular reasoning problem [41]. These issues are expected to become more important as the market penetration of vehicular wireless devices grows. Future work includes theoretical study and extensive experimental validation on the road.

Lastly, this article validated autonomous driving using cooperative perception via vehicle-to-vehicle communications. However, cooperative perception can also be approached via vehicle-to-infrastructure communications. The infrastructure-based system is attractive in several respects, particularly from the perspective of traffic administrators such as a government or a corporation. The motivation and technical approaches of infrastructure-based autonomous driving are presented in [42]. Future work includes cooperative autonomous driving using infrastructure-based cooperative perception.

VII. Conclusion

In this article, we investigated the impact of cooperative perception on autonomous driving with our proposed methods. We demonstrated augmented on-road sensing systems using cooperative perception to provide see-through/lifted-seat/satellite views. Based on the extended perception range, we showed that situation awareness can be improved on the road. The augmented perception and situation awareness capability can contribute to better autonomous driving in terms of decision making and planning. All concepts introduced in this article were demonstrated using our autonomous vehicles. Future work includes analytical studies on subproblems that need to be optimized, such as traffic flow improvement, wireless communication for vehicle driving control, decision making/planning using remote sensing information, and multi-agent control on multi-lane roads.

Acknowledgment

This research was supported by the Future Urban Mobility project of the Singapore-MIT Alliance for Research and Technology (SMART) Center, with funding from Singapore's National Research Foundation, and by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science, ICT & Future Planning (MSIP) (No. 2009-0083495). The authors would like to thank Dr. Diluka Moratuwage, Dr. Harold Soh, Dr. Mid-Eum Choi, and Cody Kamin for reading the manuscript and for helpful discussions. We also thank the anonymous reviewers for their constructive comments.

Fig. 15 Risk-aware planning results using three risk bounds: (a) and (b) show the vision views from the leader and the ego vehicle, respectively; (c) to (e) show the risk-aware planning results with the risk bound set to 0.05, 0.15, and 0.2, respectively.



About the Authors

Seong-Woo Kim (M'11) received the B.S. and M.S. degrees in electronics engineering from Korea University, Seoul, Korea, in 2005 and 2007, respectively, and the Ph.D. degree in electrical engineering and computer science from Seoul National University in 2011. He was a postdoctoral associate with the Singapore-MIT Alliance for Research and Technology. In 2014, he joined Seoul National University, where he is currently a Research Assistant Professor.

Dr. Kim received the Best Student Paper Award at the 11th IEEE International Symposium on Consumer Electronics, the Outstanding Student Paper Award at the First IEEE International Conference on Wireless Communication, Vehicular Technology, Information Theory, and Aerospace and Electronic Systems Technology, and was a Best Paper Award Finalist at the 3rd International Conference on Connected Vehicles & Expo.

Wei Liu received the B.S. degree in mechanical engineering from Nanjing University of Aeronautics and Astronautics, China, in 2011, and the M.S. degree in mechatronics from the National University of Singapore, Singapore, in 2012. He is currently working toward the Ph.D. degree in the Department of Mechanical Engineering, National University of Singapore. His research interests include stochastic planning, machine learning, and environment understanding.

Marcelo H. Ang, Jr. received the B.S. degrees (cum laude) in mechanical engineering and industrial management engineering from De La Salle University, Manila, Philippines, in 1981, the M.Sc. degree in mechanical engineering from the University of Hawaii at Manoa, Honolulu, in 1985, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Rochester, Rochester, NY, in 1986 and 1988, respectively. His work experience includes heading the Technical Training Division of Intel's Assembly and Test Facility in the Philippines, research positions at the East West Center in Hawaii and at the Massachusetts Institute of Technology, and a faculty position as an Assistant Professor of Electrical Engineering at the University of Rochester, NY. In 1989, he joined the Department of Mechanical Engineering of the National University of Singapore, where he is currently an Associate Professor. He also holds a joint appointment with the Division of Engineering and Technology Management as Deputy Head. In addition to academic and research activities, he is actively involved in the Singapore Robotic Games as its founding Chairman. He also chaired the Steering Committee of the World Robot Olympiad (2008–2009) and the World Skills Singapore Competition (2005, 2007, 2010). His research interests span the areas of robotics, mechatronics, and applications of intelligent systems methodologies. He teaches at both the graduate and undergraduate levels in the following areas: robotics; creativity and innovation; applied electronics and instrumentation; advanced computing; product design and realization; and special topics in mechatronics. He is also active in consulting work in these areas.

Emilio Frazzoli (SM'07) is a Professor of Aeronautics and Astronautics with the Laboratory for Information and Decision Systems and the Operations Research Center at the Massachusetts Institute of Technology. He received a Laurea degree in aerospace engineering from the University of Rome "Sapienza", Italy, in 1994, and a Ph.D. degree from the Department of Aeronautics and Astronautics of the Massachusetts Institute of Technology in 2001. Before returning to MIT in 2006, he held faculty positions at the University of Illinois, Urbana-Champaign, and at the University of California, Los Angeles. He is currently the Director of the Transportation@MIT initiative and the Lead Principal Investigator of the Future Urban Mobility IRG of the Singapore-MIT Alliance for Research and Technology (SMART). He was the recipient of an NSF CAREER award in 2002. He is an Associate Fellow of the American Institute of Aeronautics and Astronautics and a Senior Member of the Institute of Electrical and Electronics Engineers. Dr. Frazzoli's current research interests focus primarily on autonomous vehicles, mobile robotics, and transportation systems, and in general lie in the area of planning and control for mobile cyber-physical systems.

Daniela Rus (F'10) received the Ph.D. in computer science from Cornell University, Ithaca, NY, in 1992. She is currently the Andrew (1956) and Erna Viterbi Professor with the Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, where she serves as Director of the Computer Science and Artificial Intelligence Laboratory. Her research interests include distributed robotics and mobile computing, and her application focus includes transportation, security, environmental modeling and monitoring, underwater exploration, and agriculture.

Dr. Rus received the National Science Foundation CAREER Award. She is a Class of 2002 MacArthur Fellow and a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI) and of the Institute of Electrical and Electronics Engineers (IEEE).


References

[1] L. Merino, F. Caballero, J. M. de Dios, J. Ferruz, and A. Ollero, "A cooperative perception system for multiple UAVs: Application to automatic detection of forest fires," J. Field Robot., vol. 23, nos. 3–4, pp. 165–184, Apr. 2006.

[2] H. Li and F. Nashashibi, "Cooperative multi-vehicle localization using split covariance intersection filter," IEEE Intell. Transp. Syst. Mag., vol. 5, no. 2, pp. 33–44, 2013.

[3] H. Li, M. Tsukada, F. Nashashibi, and M. Parent, "Multivehicle cooperative local mapping: A methodology based on occupancy grid map merging," IEEE Trans. Intell. Transport. Syst., vol. 15, no. 5, pp. 2089–2100, 2014.

[4] S.-W. Kim, B. Qin, Z. J. Chong, X. Shen, W. Liu, M. H. Ang Jr., E. Frazzoli, and D. Rus, "Multivehicle cooperative driving using cooperative perception: Design and experimental validation," IEEE Trans. Intell. Transport. Syst., vol. 16, no. 2, pp. 663–680, 2014.

[5] T. Nolte, H. Hansson, and L. L. Bello, "Automotive communications—past, current and future," in Proc. 10th IEEE Conf. Emerging Technologies Factory Automation, 2005, vol. 1, pp. 985–992.

[6] H.-S. Tan and J. Huang, "DGPS-based vehicle-to-vehicle cooperative collision warning: Engineering feasibility viewpoints," IEEE Trans. Intell. Transport. Syst., vol. 7, no. 4, pp. 415–428, Dec. 2006.

[7] C. Olaverri-Monreal, P. Gomes, R. Fernandes, F. Vieira, and M. Ferreira, "The see-through system: A VANET-enabled assistant for overtaking maneuvers," in Proc. IEEE Intelligent Vehicles Symp., 2010, pp. 123–128.

[8] P. Gomes, C. Olaverri-Monreal, and M. Ferreira, "Making vehicles transparent through V2V video streaming," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 2, pp. 930–938, 2012.

[9] H. Li and F. Nashashibi, "Multi-vehicle cooperative perception and augmented reality for driver assistance: A possibility to 'see' through front vehicle," in Proc. IEEE 14th Int. Conf. Intelligent Transportation Systems, 2011, pp. 242–247.

[10] A. Vinel, E. Belyaev, K. Egiazarian, and Y. Koucheryavy, "An overtaking assistance system based on joint beaconing and real-time video transmission," IEEE Trans. Veh. Technol., vol. 61, no. 5, pp. 2319–2329, 2012.

[11] E. Belyaev, A. Vinel, K. Egiazarian, and Y. Koucheryavy, "Power control in see-through overtaking assistance system," IEEE Commun. Lett., vol. 17, no. 3, pp. 612–615, Mar. 2013.

[12] E. Belyaev, P. Molchanov, A. Vinel, and Y. Koucheryavy, "The use of automotive radars in video-based overtaking assistance applications," IEEE Trans. Intell. Transport. Syst., vol. 14, pp. 1035–1042, Sept. 2013.

[13] ITU-T Rec. H.264 & ISO/IEC 14496-10 AVC, "Advanced video coding for generic audiovisual services," ITU-T, May 2003.

[14] D. Jiang and L. Delgrossi, "IEEE 802.11p: Towards an international standard for wireless access in vehicular environments," in Proc. IEEE Vehicular Technology Conf., 2008, pp. 2036–2040.

[15] P. Alexander, D. Haley, and A. Grant, "Cooperative intelligent transport systems: 5.9-GHz field trials," Proc. IEEE, vol. 99, no. 7, pp. 1213–1235, 2011.

[16] D. de Bruin, J. Kroon, R. van Klaveren, and M. Nelisse, "Design and test of a cooperative adaptive cruise control system," in Proc. IEEE Intelligent Vehicles Symp., June 2004, pp. 4616–4621.

[17] V. Milanés, S. E. Shladover, J. Spring, C. Nowakowski, H. Kawazoe, and M. Nakamura, "Cooperative adaptive cruise control in real traffic situations," IEEE Trans. Intell. Transport. Syst., vol. 15, no. 1, pp. 296–305, 2013.

[18] D. Caveney and W. B. Dunbar, "Cooperative driving: Beyond V2V as an ADAS sensor," in Proc. IEEE Intelligent Vehicles Symp., June 2012, pp. 529–534.

[19] E. van Nunen, M. Kwakkernaat, J. Ploeg, and B. D. Netten, "Cooperative competition for future mobility," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1018–1025, 2012.

[20] L. Guvenc, I. M. C. Uygan, K. Kahraman, R. Karaahmetoglu, I. Altay, M. Senturk, M. T. Emirler, A. E. H. Karci, B. A. A. Guvenc, E. Altug, M. C. Turan, O. S. Tas, E. Bozkurt, U. Ozguner, K. Redmill, A. Kurt, and B. Efendioglu, "Cooperative adaptive cruise control implementation of team Mekar at the grand cooperative driving challenge," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1062–1074, Sept. 2012.

[21] K. Lidstrom, K. Sjoberg, U. Holmberg, J. Andersson, F. Bergh, M. Bjade, and S. Mak, "A modular CACC system integration and design," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1050–1061, Sept. 2012.

[22] J. Mårtensson, A. Alam, S. Behere, M. Khan, J. Kjellberg, K. Liang, H. Pettersson, and D. Sundman, "The development of a cooperative heavy-duty vehicle for the GCDC 2011: Team Scoop," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1033–1049, Sept. 2012.

[23] M. R. I. Nieuwenhuijze, T. van Keulen, S. Oncu, B. Bonsen, and H. Nijmeijer, "Cooperative driving with a heavy-duty truck in mixed traffic: Experimental results," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1026–1032, Sept. 2012.

[24] A. Geiger, M. Lauer, F. Moosmann, B. Ranft, H. Rapp, C. Stiller, and J. Ziegler, "Team AnnieWAY's entry to the 2011 grand cooperative driving challenge," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 1008–1017, Sept. 2012.

[25] R. Kianfar, B. Augusto, A. Ebadighajari, U. Hakeem, J. Nilsson, A. Raza, R. Tabar, N. V. Irukulapati, C. Englund, P. Falcone, S. Papanastasiou, L. Svensson, and H. Wymeersch, "Design and experimental validation of a cooperative driving system in the grand cooperative driving challenge," IEEE Trans. Intell. Transport. Syst., vol. 13, no. 3, pp. 994–1007, 2012.

[26] J. Leonard, J. How, S. Teller, M. Berger, S. Campbell, G. Fiore, L. Fletcher, E. Frazzoli, A. Huang, S. Karaman, O. Koch, Y. Kuwata, D. Moore, E. Olson, S. Peters, J. Teo, R. Truax, M. Walter, D. Barrett, A. Epstein, K. Maheloni, K. Moyer, T. Jones, R. Buckley, M. Antone, R. Galejs, S. Krishnamurthy, and J. Williams, "A perception-driven autonomous urban vehicle," J. Field Robot., vol. 25, no. 10, pp. 727–774, 2008.

[27] M. Buehler, K. Iagnemma, and S. Singh, The DARPA Urban Challenge: Autonomous Vehicles in City Traffic. Berlin, Heidelberg, Germany: Springer-Verlag, 2009.

[28] A. Broggi, M. Buzzoni, S. Debattisti, P. Grisleri, M. C. Laghi, P. Medici, and P. Versari, "Extensive tests of autonomous driving technologies," IEEE Trans. Intell. Transport. Syst., vol. 14, no. 3, pp. 1403–1415, 2013.

[29] S. Tsugawa, S. Kato, T. Matsui, H. Naganawa, and H. Fujii, "An architecture for cooperative driving of automated vehicles," in Proc. IEEE Intelligent Transportation Systems, 2000, pp. 422–427.

[30] J. Kolodko and L. Vlacic, "Cooperative autonomous driving at the intelligent control systems laboratory," IEEE Intell. Syst., vol. 18, no. 4, pp. 8–11, 2003.

[31] J. Baber, J. Kolodko, T. Noel, M. Parent, and L. Vlacic, "Cooperative autonomous driving: Intelligent vehicles sharing city roads," IEEE Robot. Automat. Mag., vol. 12, no. 1, pp. 44–49, 2005.

[32] S.-W. Kim, G.-P. Gwon, S.-T. Choi, S.-N. Kang, M.-O. Shin, I.-S. Yoo, E.-D. Lee, and S.-W. Seo, "Multiple vehicle driving control for traffic flow efficiency," in Proc. IEEE Intelligent Vehicles Symp., June 2012, pp. 462–468.

[33] S.-W. Kim and S.-W. Seo, "Cooperative unmanned autonomous vehicle control for spatially secure group communications," IEEE J. Select. Areas Commun., vol. 30, pp. 870–882, June 2012.

[34] W. Liu, S.-W. Kim, K. Marczuk, and M. H. Ang, "Vehicle motion intention reasoning using cooperative perception on urban road," in Proc. IEEE 17th Int. Conf. Intelligent Transportation Systems, 2014, pp. 424–430.

[35] L. R. Welch, "Hidden Markov models and the Baum-Welch algorithm," IEEE Inform. Theory Soc. Newsletter, vol. 53, no. 4, pp. 10–13, 2003.

[36] E. Dagan, O. Mano, G. P. Stein, and A. Shashua, "Forward collision warning with a single camera," in Proc. IEEE Intelligent Vehicles Symp., 2004, pp. 37–42.

[37] S. Karaman, M. R. Walter, A. Perez, E. Frazzoli, and S. Teller, "Anytime motion planning using the RRT*," in Proc. IEEE Int. Conf. Robotics Automation, 2011, pp. 1478–1483.

[38] W. Liu, S.-W. Kim, Z. J. Chong, X. Shen, and M. H. Ang Jr., "Motion planning using cooperative perception on urban road," in Proc. IEEE Int. Conf. Cybernetics Intelligent Systems, Robotics, Automation Mechatronics, Nov. 2013, pp. 130–137.

[39] W. Liu and M. H. Ang Jr., "Incremental sampling-based algorithm for risk-aware planning under motion uncertainty," in Proc. IEEE Int. Conf. Robotics Automation, 2014, pp. 2051–2058.

[40] M. X. Punithan and S.-W. Seo, "King's graph-based neighbor-vehicle mapping framework," IEEE Trans. Intell. Transport. Syst., vol. 14, no. 3, pp. 1313–1330, 2013.

[41] A. Howard, M. J. Mataric, and G. S. Sukhatme, "Putting the 'I' in 'Team': An ego-centric approach to cooperative localization," in Proc. IEEE Int. Conf. Robotics Automation, 2003, vol. 1, pp. 868–874.

[42] B. Rebsamen, T. Bandyopadhyay, T. Wongpiromsarn, S.-W. Kim, Z. J. Chong, B. Qin, M. H. Ang Jr., E. Frazzoli, and D. Rus, "Utilizing the infrastructure to assist autonomous vehicles in a mobility-on-demand context," in Proc. IEEE TENCON, Nov. 2012.