Multilayer distributed intelligent control of an autonomous car

Humberto Martínez-Barberá ⇑, David Herrero-Pérez
Department of Information and Communications Engineering, University of Murcia, 30100 Espinardo, Murcia, Spain

Transportation Research Part C 39 (2014) 94–112. © 2013 Elsevier Ltd. All rights reserved. http://dx.doi.org/10.1016/j.trc.2013.12.006

⇑ Corresponding author. E-mail addresses: [email protected] (H. Martínez-Barberá), [email protected] (D. Herrero-Pérez).

Article history: Received 6 March 2012; Received in revised form 10 December 2013; Accepted 10 December 2013.

Keywords: Intelligent vehicles; Robotics development frameworks; Mobile robots

Abstract

This paper shows how the development of an intelligent vehicle application can benefit from using standard mobile robotics elements in general, and a development framework in particular. This framework, ThinkingCap-II, has been successfully used in other robotics applications. It consists of a series of modules and services that have been developed in Java and allows the distribution of these modules over a network. The framework facilitates reusing components and concepts from other developments, which permits increasing the performance of the intelligent vehicle development. This fact is especially useful for small research groups. A two car convoy application has been implemented using this architecture and the development of an autonomous vehicle. Both the ThinkingCap-II and the autonomous vehicle architectures are described in detail. Finally, some experiments are presented. Simulated experiments are used to validate the convoy model, testing the activation of the different behaviors in the decision-making process. Real experiments show the actual working of the developed intelligent vehicle application.


1. Introduction

Vehicle automation for different automatic driving related tasks, e.g. Zohdy and Rakha (2012), and assisted driving, e.g. Jiménez and Naranjo (2011) and Chen et al. (2013), has increased in recent years. Nevertheless, in most cases these systems rely on architectures specifically designed for the sensorial system being used (artificial vision, radar, laser, etc.) or are task oriented, such as lane and car following, e.g. Ossen and Hoogendoorn (2011) and Zheng et al. (2013), obstacle avoidance, e.g. Jiménez and Naranjo (2011), and driver supervision, e.g. Di Stasi et al. (2012), Ahlstrom et al. (2013), and Martín de Diego et al. (2013), to name but a few.

There have been several projects and systems showing autonomous driving. Some early developments in autonomous vehicles are CMU NavLab in Thorpe et al. (1988, 1991), OSU vehicles in Ozguner et al. (1997) and Fu et al. (2008), California PATH in Ioannou (1998), ARGO in Broggi et al. (1999), SAVE in Coda et al. (1997), and OTTO in Franke et al. (1995). Among the recent autonomous vehicle developments, we can mention the vehicles participating in the Grand Cooperative Driving Challenge (GCDC), e.g. Martensson et al. (2012), and vehicles participating in the DARPA Grand Challenge (DGC), e.g. Ozguner et al. (2007), and the DARPA Urban Challenge (DUC), e.g. Campbell et al. (2010), sponsored by the Defense Advanced Research Projects Agency (DARPA) of the US Department of Defense, to cite but a few. Most of them are not general or open. By contrast, in autonomous robotics there are several open frameworks (a compilation and review appears in Oreback and Christensen (2003) and Kramer and Scheutz (2007)) that allow transfer and reusability.

The goal of this paper is to put these two together, by showing how to use a general framework (originally developed for autonomous robotics) in an intelligent vehicle application. This approach has already been tried by other groups, e.g. the work by Institut National de Recherche en Informatique et en Automatique (INRIA) in Laugier (1999). A hybrid architecture (hierarchical-reactive) for mobile robots is proposed and is applied to electric vehicle fleets in urban environments. An interesting component of the architecture is a meta-level of skills known as sensor-based maneuvers, which are general templates that encode high-level expert human knowledge and heuristics about how a specific motion task is to be performed. The control architecture has two main components: the mission monitor and the motion controller. Given a mission description, the mission monitor generates a parameterized motion plan, which is a set of generic Sensor-Based Maneuvers (SBM). These SBMs are selected from an SBM library. The goal of the motion controller is to execute the current SBM of the parameterized motion plan in a reactive way. The current SBM is instantiated according to the current execution context (parameters of the SBM are set using the a priori knowledge or sensed information available). Another example is the architecture for Intelligent Cruise Control (ICC) by Handmann et al. (1999), which is a flexible and modular architecture for ICC subdivided into three different processing steps: the object-related analysis of sensor data (sensor data is processed and interpreted), the behavior-based scene interpretation (relevant information for the behavior is processed), and the behavior planning (the final element, which evaluates which action should be taken to perform the current task).

The approach proposed in this paper consists of a general framework that allows implementing different cognitive or functional architectures; for instance, the two previous architectures could also be implemented with it. The framework includes a reference architecture that is used to build an intelligent vehicle application, which reuses many components from standard mobile robotics applications, for example electronic components and software modules from the industrial mobile robot shown in Martínez-Barberá and Herrero-Pérez (2010b,a).

The autonomous driving examples mentioned above show that most works take an application oriented or vertical approach, either in the hardware or in the software. In addition, many of those investigations have been supported by industrial car makers, whose policies and development agreements do not allow the dissemination of certain privileged information. For these reasons it is difficult to reuse standard mobile robotics techniques and systems for intelligent vehicle applications, especially so for small research groups. The aim of this paper is precisely to present a horizontal approach for intelligent vehicle applications targeted at testing proof-of-concept techniques and applications. The goal is to offer a framework for building an intelligent vehicle application, in which concepts and components from different robotics systems and other intelligent vehicle applications can be ported and reused. Portability and reusability do not usually concern large application oriented projects, but they are key issues for small research groups (or those not directly sponsored by car makers). One typical drawback of such a horizontal approach is performance. While this can be a problem in certain cases, most solutions can be reshaped to use standard architectures, components or techniques.

This paper shows the use of generic hardware and software architectures from mobile robotics for an intelligent vehicle application which is independent of the sensori-motor system used and which allows management of multiple vehicles. The application described in the paper borrows from the CHAUFFEUR project, Borodani (2000), the idea of convoying autonomous vehicles, in this case cars. The paper is organized as follows. Section 2 presents and describes some related work. Section 3 describes the characteristics and design criteria of the software architecture which is later used to control the autonomous car. Section 4 describes the hardware architecture of the developed autonomous car. Section 5 presents the instance of the software architecture for the two car convoy application. Some experiments in both simulated and real environments are presented in Section 6, and finally, the conclusions are discussed in Section 7.

2. Related work

Different research groups in universities and private companies of the automotive sector have been working on the development of intelligent vehicles by adding additional equipment to off-the-shelf vehicles. Among these, the different versions of the Carnegie Mellon University's NavLab automated vehicles are quite well known, and NavLab 5 achieved worldwide coverage with the No Hands Across America (NHAA) demonstration. This vehicle is equipped with a vision system, a laser range scanner for obstacle avoidance and a positioning system based on GPS and odometers. Its architecture, shown in Jochem et al. (1995), is highly oriented to the vision system.

The Ohio State University's OSU autonomous vehicles have also performed similar demonstrations in Ozguner et al. (1997). These vehicles are equipped with a vision system, a radar, a laser range scanner, and inertial units. Their architecture is a three layer hierarchical architecture, in which the lower layer is in charge of actuator control, the intermediate layer is in charge of the longitudinal and lateral control, and the higher layer is in charge of scenario level control. The interconnection of the sensors and actuators with the different control layers is mainly through RS-232 links.

Another good example of automated vehicles is the California PATH (Partners for Advanced Transit and Highways) project. This is a large research framework, which includes many subprojects that directly or indirectly make use of automated vehicles. Many of the new designs are based on the automated highway system control architecture (PATH AHS Architecture) described in Horowitz and Varaiya (2000). In this architecture, traffic is organized into platoons of closely packed vehicles. The design of this architecture consists of five hierarchical layers: Network, Link, Coordination, Regulation, and Physical. The first two layers are roadside control systems. As for the three layers inside the vehicle, the Coordination layer is a supervisory controller that determines which maneuvers to perform, manages inter-vehicle communications, and coordinates the movement of the vehicle with respect to neighboring cars. The Regulation layer is a continuous-time feedback-based controller that implements and executes the maneuvers. The Physical layer contains hardware, sensors, and low-level controllers.


In Europe some similar projects have been carried out, an interesting example being the ARGO vehicle presented in Broggi et al. (1999), which made a journey of some 2,000 km across Italy under automatic steering, the MilleMiglia in Automatico demonstration. The vehicle is equipped with a stereoscopic vision system, detailed in Bertozzi et al. (2000), its architecture is exclusively devoted to image processing using the development framework presented in Bertozzi et al. (2008), and its control system is simple so as to guarantee the robustness of the system. Recently, the research group that developed the ARGO vehicle has carried out VisLab's adventure on the Silk road, which consists of two electric vehicles performing a 13,000 km trip, detailed in Broggi et al. (2010), from Parma (Italy) to Shanghai (China) with no driver. We also have to mention the SAVE vehicle, described in Coda et al. (1997), which exhibits fully automated vehicle actuators and is equipped with a vision system and a laser range scanner.

The research conducted by Daimler-Benz with trucks is also a reference in autonomous vehicles. With their OTTO truck, described in Franke et al. (1995), they studied the idea of having convoys of autonomous trucks. Later, within the European project CHAUFFEUR, described in Borodani (2000), they implemented a two truck convoy, each truck weighing 40 tons. The CHAUFFEUR's IVECO trucks use the CAN bus as a device interconnection network. The trucks are equipped with two different CAN buses, interconnected by a gateway. One bus is devoted to the main elements of the vehicle control (engine, brakes, gearbox, and steering wheel), while the other is used to integrate positioning sensors, vision, wireless links, and control systems. The architecture is composed of two control loops: one for the longitudinal control (in charge of maintaining the distance) and another for the lateral control (in charge of following the leading vehicle reference).

Recently, several automated vehicles have been developed to participate in the GCDC, in the DGC, and in the DUC. We can mention Boss, developed by a team led by Carnegie Mellon University and described in Urmson (2008), Junior, developed by a team led by Stanford University and described in Montemerlo (2008), and Odin, developed by Virginia Tech and presented in Bacha (2008). At a high level, most vehicles in these competitions decomposed the problem into four basic subsystems described in Campbell et al. (2010): sensing, perception, planning, and control. The software integration with the low level was addressed using different approaches because of the challenging problem of the large number of sensor processing, navigation, planning, control, and safety processes that must run simultaneously, which usually requires handling the load on each processor efficiently. In the case of Junior, the subsystems are divided into individual modules, which run asynchronously and communicate with each other via an anonymous publish/subscribe message passing protocol. The software infrastructure used by Boss is a toolbox that provides the basic capabilities required to build a robotic platform. This software infrastructure provides the ability to modify the information flow by allowing components to be configured at run-time, connecting their inputs and outputs using shared memory, a TCP/IP socket, a file, or some other means. These sophisticated autonomous vehicles, which participated in the last edition of the DUC, are compared with the development presented in this work below.

Nowadays, many private companies, often in partnership with research institutions, are developing autonomous vehicles to bring them to market in the coming years. One of these projects is the Google driverless car, where Google collaborates with members of Stanford University to develop a fully functional driverless vehicle. Nissan Motor also plans to build a self-driving car that will be on the market by 2020. On the other hand, other companies only plan to incorporate some autonomous features into their vehicles. Tesla Motors aims to develop an autonomous vehicle that will drive "90 percent" of the miles typically driven by a human driver. General Motors and Ford Motor only plan to integrate some self-driving features into their vehicles, such as self-parking and adaptive cruise control. Unfortunately, the technical details of the technologies used in these developments are not available to the public, but the collaboration with the research centers that developed the prototypes presented above may lead us to think that they are following similar approaches.

3. The ThinkingCap-II framework

ThinkingCap-II (TC-II) is a framework, described in Martínez-Barberá and Herrero-Pérez (2010c), for developing mobile robot applications. It is based on previous work on the ThinkingCap and BGA architectures, described in Saffiotti et al. (1995) and Martínez-Barberá and Gómez-Skarmeta (2002) respectively. The framework consists of a reference cognitive architecture (largely based on ThinkingCap) that serves as a guide for making the functional decomposition of a robotics system, a software architecture (partially based on BGA) that allows a uniform and reusable way of organizing software components for robotics applications, and a communication infrastructure that allows software modules to communicate in a common way, independently of whether they are local or remote.

The TC-II framework has been fully implemented in Java. One of the advantages of using Java is genuine platform independence. Thus, the development of the components is typically done on desktop computers with standard operating systems, while the actual deployment occurs on embedded systems that support Java. Moreover, the Java implementation makes the integration of human interfaces in web-based scenarios a simple process.

Thus, TC-II is a programming framework for high level controllers with support for many mobile robotics tasks like map-building, path-planning, and localization. Besides, it includes services for sharing information among multiple robots. The TC-II framework has been validated on several mobile platforms including laboratory mobile robots, e.g. Martínez-Barberá and Gómez-Skarmeta (2002), industrial Autonomous Guided Vehicles (AGVs), e.g. Herrero-Pérez and Martínez-Barberá (2008a,b, 2010, 2011), and autonomous cars, e.g. Martínez-Barberá and Herrero-Pérez (2010c).


3.1. Functional architecture

The functional architecture is a two layer architecture, shown in Fig. 1(a), for controlling mobile robots: one layer for reactive processes and the other for deliberative processes. The modules group the different functionalities present in typical mobile robotics systems (navigation, perception, control, and planning), in which sensing and acting are a must. An important role is played by a centralized data structure called the Local Perceptual Space (LPS). It is a geometrically consistent robot centric space which maintains a coherent model of the local environment of the robot, and takes into account the a priori information (map) and the currently perceived information (sensors). This architecture has shown good capabilities as an abstract guideline for organizing the software which has to run on a mobile robot. The Virtual Robot module provides an abstract interface to the sensori-motor functionalities of the robot, effectively hiding the hardware components. The other modules are grouped into the two layers as follows.
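To make the role of the LPS concrete, the sketch below shows one plausible Java shape for such a robot-centric store. All class and method names here are our own illustration, not the actual TC-II classes; the key properties it tries to capture are the ones named above: robot-centric coordinates and geometric consistency under robot motion.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Minimal sketch of a Local Perceptual Space (LPS): a robot-centric,
 * geometrically consistent store of local features. Hypothetical API,
 * not the published TC-II implementation.
 */
class LocalPerceptualSpace {
    /** A feature in robot-centric coordinates (metres). */
    record Feature(double x, double y, String kind, long timestampMs) {}

    private final List<Feature> features = new ArrayList<>();

    /** Insert evidence of a new environment feature (from Perception). */
    void add(Feature f) { features.add(f); }

    /** Keep the space consistent as the robot moves: re-express every
     *  stored feature in the new robot frame after a displacement
     *  (dx, dy, dTheta) of the robot itself. */
    void applyMotion(double dx, double dy, double dTheta) {
        double c = Math.cos(-dTheta), s = Math.sin(-dTheta);
        features.replaceAll(f -> new Feature(
                c * (f.x() - dx) - s * (f.y() - dy),
                s * (f.x() - dx) + c * (f.y() - dy),
                f.kind(), f.timestampMs()));
    }

    /** Read-only view for the Controller and Navigation modules. */
    List<Feature> snapshot() { return List.copyOf(features); }
}
```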

3.1.1. The reactive layer

The Perception module receives sensor data from the Virtual Robot and applies a certain set of perceptual routines, including sonar and laser based feature extraction and detection, and intelligent sensor fusion. The perceptual routines also access the LPS as a reference and modify it in the case of evidence of a new environment feature. The Controller module closes the control loop by taking as input the information included in the LPS and generating as output robot control values. It is typically implemented as a library of fuzzy behaviors for navigation, like obstacle avoidance, follow path, and keep velocity. It includes a context-dependent blending mechanism, described in Saffiotti (1997), that combines the recommendations from different behaviors into a trade-off control.

3.1.2. The deliberative layer

The Navigation module is in charge of modeling the environment by using both the LPS and an a priori world model (if available). It typically includes a series of map-building, localization, and path-planning algorithms, like fuzzy grid maps, fuzzy segment maps, topological maps, Kalman filters, and A* and D* planners. Finally, the Planner/Monitor module generates and supervises plans that are needed to solve the robot task. If required, the planner may modify the current set of active or applicable behaviors in the Controller.

3.2. Software architecture

The main goals of the TC-II software architecture design were: multi-robot support, distributed robot control, and flexible and reusable components. To achieve these goals, the TC-II software kernel includes and offers: specification of run-time parameters for the different modules, flexible configuration of the system (in terms of modules, robots, and CPUs), and a predefined components library. The kernel makes extensive use of all the object-oriented features of the Java language.

The kernel defines an abstract model of a TC-II module, which should be followed by all modules of the functional architecture. Depending on the complexity of the system there could be a one to one correspondence or a one to many correspondence. For instance, the Perception can be implemented as a single module or as a collection of submodules, but in either case the modules must stick to the abstract model definition, which has support for single thread and multi-thread execution.

As the modules can be distributed among a set of CPUs, the kernel relies on a centralized communication scheme, where all the communication goes through a blackboard. In addition, this blackboard is based on an event system so that modules do not need to poll the blackboard but wait until the desired type of event has occurred. The communication mechanism is detailed in the next section. The abstract TC-II module includes a port to put data into the blackboard and to receive data from the blackboard (via both polling and events).
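As a rough illustration of that abstract module contract, the following Java sketch shows a module base class with a blackboard port supporting both polling and event delivery. The interface and method names are invented for illustration and are not the published TC-II API; the sketch only mirrors the contract described above.

```java
import java.util.function.Consumer;

/** Hypothetical sketch of the blackboard port of a TC-II module. */
interface BlackboardPort {
    void put(String key, Object item);              // write a tuple
    Object poll(String key);                        // active query
    void subscribe(String key, Consumer<Object> onEvent);  // event delivery
}

/** Hypothetical sketch of the abstract TC-II module contract. */
abstract class AbstractModule implements Runnable {
    protected final BlackboardPort port;
    private volatile boolean running = true;

    protected AbstractModule(BlackboardPort port) {
        this.port = port;
        // Event style: react when a configuration tuple appears,
        // instead of busy-polling the blackboard for it.
        port.subscribe("CONFIG", this::onConfiguration);
    }

    /** Called when a new configuration (RDF/WDF) reaches the blackboard. */
    protected abstract void onConfiguration(Object config);

    /** One processing step; subclasses implement the module behavior. */
    protected abstract void step();

    @Override public void run() {
        while (running) step();   // CONTINUOUS mode; SINGLE would call step() once
    }

    void stop() { running = false; }
}
```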

Fig. 1. The ThinkingCap-II framework: (a) functional architecture and (b) run-time module states.


As TC-II has been used to develop many robotics systems, there exists a collection of algorithms and methods packaged in what is called the standard components library. This is readily available to the system designer and includes methods for sensor fusion, map-building, path-planning, and localization. The designer simply has to instantiate any of these methods from inside the desired module.

In addition to this, the run-time characteristics of the system can be specified and customized by the use of configuration files. The kernel supports two different types of general configuration files, and contains methods to parse and check them. The following configurations are used by all the modules:

– Architecture Definition File (ADF). Specifies which modules are to be run, on which CPUs they will be running, and which type of communication and process synchronization mechanism they will be using. The kernel provides methods to automatically instantiate the corresponding classes at run-time. Thus, the use of the ADF allows for great flexibility when trying different approaches and provides a very convenient method for specifying a distributed system.
– Robot Description File (RDF). Specifies the different parameters related to a given robot, like sensor number, types, position and orientation, and platform kinematics type and parameters. The use of this file permits the customization of different vehicles using the same development framework. This file also facilitates the testing of the development by providing the capability to activate/deactivate different sensors. The kernel provides methods to access the definition of the robot, and also to display it.

In addition to these, the kernel also includes an application specific configuration file, which may or may not be used by the different modules:

– World Description File (WDF). Specifies the a priori knowledge of the robot environment, i.e. a geometrical description, like walls/obstacles, road segments, landmarks, areas, way-points, etc. The WDF is generic enough so that it can be used both in outdoor and indoor applications. The kernel provides methods to access and display this information. In the case of an intelligent vehicle application, the WDF contains all relevant information of the region the vehicles are operating in.

3.2.1. Robot run-time operation

In order to execute a TC-II based robot, a valid ADF and a unique name are needed. The name is used to identify the robot if more than one is present. The same ADF is used for each of the different CPUs of the robot. In each CPU, the kernel parses the ADF and then loads and instantiates all the different modules specified that correspond to that CPU, each with its corresponding parameters. All the modules of a single robot connect to the robot local blackboard, which is run in one of the CPUs (this is user selectable). In a multi-robot scenario, the ADF also contains the address of the global blackboard.

A module can be in any of the different states detailed in Fig. 1(b). When a module is first instantiated it is in the LOADED state. Upon reception of an RDF and a WDF it changes to the INITIALIZED state. Now the module is ready to start its processing. To allow for debugging the system, the module can be run in either of two modes: step by step (SINGLE) or continuous execution (CONTINUOUS). At any time the operator can change the execution status of a single module or of all the modules. The Virtual Robot is a special module because it is in charge of sending the configuration files. Thus, it automatically changes from LOADED to INITIALIZED while it leaves the configuration in the blackboard. By default, all the abstract modules receive an event when a new configuration has been sent to the blackboard.
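A minimal encoding of those run-time states might look as follows. The enum constants mirror the states named above, while the transition guard is an illustrative reading of Fig. 1(b), not a published specification.

```java
/** Run-time states of a TC-II module, as described in Section 3.2.1. */
enum ModuleState {
    LOADED,        // instantiated from the ADF, waiting for configuration
    INITIALIZED,   // RDF and WDF received, ready to start processing
    SINGLE,        // step-by-step execution (debugging)
    CONTINUOUS;    // free-running execution

    /** Illustrative guard: which transitions Fig. 1(b) seems to allow. */
    boolean canMoveTo(ModuleState next) {
        return switch (this) {
            case LOADED      -> next == INITIALIZED;
            case INITIALIZED -> next == SINGLE || next == CONTINUOUS;
            // The operator may switch execution modes at any time.
            case SINGLE      -> next == CONTINUOUS;
            case CONTINUOUS  -> next == SINGLE;
        };
    }
}
```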

3.2.2. Code reusability

One important aspect is code reusability. The kernel allows the developer to write robot independent code through the use of RDFs. For instance, if a developer writes a perception routine that computes some feature depending on the sensor configuration, the RDFs allow for a high level of abstraction and a uniform way of accessing the characteristics of the sensors. Then the system designer or integrator has to provide only the number and location of the sensors to use the routine. The entire standard components library has been written following this approach.

3.2.3. Simulation

Platform independence is guaranteed by the use of the Java language, and it is a very important feature of the TC-II framework. Platform independence allows running and debugging the code on personal computers and testing it on the real platform. The TC-II framework takes advantage of this and includes a Simulator module that can simulate different sensor types, like range sensors (sonar, laser, radar) and positioning sensors (GPS, laser, compass), and platform models, like differential, tricycle and Ackerman drives. The sensor simulation is realistic enough to take into account multi-path reflection, noise, and different firing patterns, while the platform simulation is based on kinematics equations and some pseudo dynamics constraints (i.e. the minimum time to perform a full turn of the steering wheel). The combination allows for testing of the efficiency and performance of the different modules and their algorithms with an acceptable degree of realism. In addition, the simulator can simulate multiple robots, and their sensors not only reflect the environment but also the other robots.
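For reference, a kinematic Ackerman update of the kind such a simulator applies, including a pseudo-dynamic constraint on steering rate like the one mentioned above, could be sketched as follows. The wheelbase and rate-limit values are placeholders, not the parameters of any actual RDF.

```java
/** One simulation step of an Ackerman (bicycle-model) platform.
 *  Illustrative sketch only; parameter values are placeholders. */
class AckermanSimulator {
    double x, y, theta;          // pose in world frame (m, m, rad)
    double steer;                // current steering angle (rad)
    static final double WHEELBASE = 2.3;        // m (placeholder)
    static final double MAX_STEER_RATE = 0.5;   // rad/s (pseudo-dynamics)

    void step(double v, double steerCmd, double dt) {
        // Pseudo-dynamics: the steering cannot jump to the commanded
        // angle; it slews at a bounded rate (hence a minimum time for
        // a full turn of the steering wheel).
        double maxDelta = MAX_STEER_RATE * dt;
        steer += Math.max(-maxDelta, Math.min(maxDelta, steerCmd - steer));

        // Kinematic bicycle equations.
        x     += v * Math.cos(theta) * dt;
        y     += v * Math.sin(theta) * dt;
        theta += v / WHEELBASE * Math.tan(steer) * dt;
    }
}
```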

The simulator is implemented as a Virtual Robot, as shown in Fig. 2(b), that is not attached to any real device, and the same RDFs used by the real robots are used by the simulator to configure the sensor types and models and the platform kinematics and constraints. Thus, switching between a real robot and a simulated one is as simple as changing the class name of the Virtual Robot section in the ADF. In addition, the model of the environment is specified using a WDF (which may be the same as that used by the robots). Fig. 2(b) shows an instance of the simulator running with the whole multi-robot support. One can observe how the simulator or log data of real experiments are executed in Virtual Robot modules, and the communication support, control architecture, and monitors are run as in real experiments.

Fig. 2. (a) ThinkingCap-II multi-robot architecture and (b) the simulator running data from a real experiment.

3.3. Communication support

In a distributed system, as TC-II is, information sharing is a key point. Both the functional and software architectures allow for execution of modules in different processes or machines. To allow for a flexible information exchange mechanism, TC-II relies on a shared blackboard with similar services to those offered by a Linda system, detailed in Gelernter (1985). As in typical blackboard systems, each module reads information from the blackboard, processes it, and then writes the corresponding results. Besides this, it can also work as an event driven blackboard. In this way, each module registers with the blackboard which kind of data it desires to receive. When new data of such a type is available it is sent directly to the module. With this centralized blackboard model the modules can actively query the blackboard or simply wait until new data is available, depending on the specific use of the module.

Shared information is exchanged using tuples, a tuple being a pair <key, item>. Key identifies the kind of data, and item is the actual data. For instance, there are tuple types for different purposes, like sensor data, motion commands, debug information, navigation data and others. The object-oriented capabilities of the Java language are used in this case to define and implement all these tuples from an object-oriented paradigm.

The blackboard of TC-II has been designed to work in distributed scenarios, with different modules running on different machines. It is also possible to connect more than one robot to the system, where robots may need information from other robots for coordination purposes and a global monitor to visualize the location of the different robots. In this case, some kind of robot specific naming and addressing is needed (to avoid a module sending a command to the wrong robot). Thus, each robot maintains its own blackboard and, in addition, there is a global blackboard, which is shared by the different robots.

Shared information is now exchanged using triples, a triple being <id, key, item>. As in the single robot case, key identifies the kind of data, and item is the actual data. In addition, the id uniquely identifies the robot that produced the information. In fact, the current implementation does not distinguish between local and global blackboards. In local blackboards the id field is left empty.
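In Java these exchange units can be modeled directly; the sketch below is one obvious encoding (the record name and factory are ours, not the framework's). It also reflects the implementation note above: a local tuple is just a triple with an empty id.

```java
/** Hypothetical encoding of the blackboard exchange units. */
record Triple(String id, String key, Object item) {
    /** Local (single-robot) tuples leave the robot id empty. */
    static Triple localTuple(String key, Object item) {
        return new Triple("", key, item);
    }
}

/** Example: Perception publishes a corrected position locally, and the
 *  Router re-tags it with the robot id for the global blackboard. */
class TripleExample {
    public static void main(String[] args) {
        Triple local  = Triple.localTuple("POSITION", new double[] {12.4, -3.1, 0.7});
        Triple global = new Triple("satant", local.key(), local.item());
        System.out.println(global);
    }
}
```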

In order to interface each robot instance with the global system, the architecture includes a connection module which is run in each robot, as shown in Fig. 2(a). This module, the Router, is connected to both the local and global blackboards. Such a module is in charge of filtering information sent to and received from the global blackboard, thus acting as a lightweight communication router.

In the example shown in Fig. 2(a), there are two robots in execution using the single robot TC-II architecture and running the modules needed for autonomous navigation. There are also two Router modules that receive tuples from the local blackboard and resend them to the global blackboard if needed. Thus, with this extension, a group of robots can be remotely monitored and/or controlled. As mentioned above, all the implementation has been done in Java, allowing external modules (like a global observer, a remote controller or a scheduler) to be embedded in a web page as a Java applet or as an external Java application.

The blackboard itself is a process. When a module running on the same CPU accesses the blackboard it does so by using shared memory. When the module is running on a different CPU it uses a custom UDP based protocol. This kind of implementation has been favored over other more standard approaches (RMI, CORBA, etc.) because of performance. As the overall control cycle time depends on the latency (in the case of a distributed execution), it is important to have an efficient communication method, which is also as simple as possible.

3.4. Framework support for intelligent vehicles

The aim of autonomous vehicles is to replace all or some of the driving tasks normally performed by human drivers. Such tasks are usually classified following Michon's three level hierarchy, detailed in Michon (1985), which consists of modeling driving using three hierarchically ordered levels: strategic, tactical, and operational. The strategic level is concerned with the selection of routes and the planning of the sub-goals to run such routes. The tactical level is responsible for generating sequential tasks to reach a given sub-goal, including obstacle avoidance, gap acceptance, turning, and overtaking, to name but a few. At the operational level, the driving tasks are responsible for low level guidance of the vehicle, such as vehicle following and lane keeping. Thus, the Michon model of driving provides an effective cognitive model and is consistent with the view that driving is a compromise between achieving goals and addressing ongoing constraints, as mentioned in Patz et al. (2009).

The proposed framework is able to capture all the driving tasks considered by the Michon model of driving. The strategic driving tasks can be incorporated in the deliberative layer of the functional architecture; in particular, the selection of routes can be implemented in the Navigation module, e.g. using Dijkstra, A* or D* through the nodes of a map, while the Planner module can generate and supervise the plans needed to run the selected route. The tactical and operational driving tasks are implemented as behaviors in the Controller module. These behaviors are combined following the context-dependent blending mechanism using the information provided by the Perception module, thus closing the control loop. The combination of behaviors addresses the compromise mentioned above between achieving goals and addressing constraints. The following sections show the implementation of a platoon of vehicles using this framework, which only requires the implementation of tactical and operational driving tasks in an autonomous vehicle following a manned one.

4. The autonomous vehicle

The autonomous vehicle presented in this work, shown in Fig. 3, is called SatAnt. It is based on a COMARTH S1-50 two-seat sport car, which has been heavily modified to allow it to be controlled by a computer based system. The weight of the whole vehicle is 700 kg and the engine provides 90 hp, which gives a high power to weight ratio and allows high accelerations. The modifications include an automatic gearbox, an electronic assisted steering system, electronic speed control, and an electronic braking system. For safety reasons, all electronic systems have been designed in such a way that they allow both manual and automatic control, and at any time the electronic systems can be disengaged. Both the frame and the outer shell have been modified to accommodate the nonstandard equipment. The sensor system includes a Novatel GPS (which provides global positioning data), a Precision Navigation electronic compass (which provides both heading and pitch/roll data), relative encoders attached to the four wheels (which provide vehicle speed), an absolute encoder attached to the steering arm (which provides the steering angle of the wheels), and a Fujitsu 77 GHz radar (for detecting obstacles and the leading car in the experiments presented below).

The hardware architecture of the SatAnt vehicle, shown in Fig. 4, has been organized in two layers, in which low level controllers and electronics (microcontrollers) communicate through a CAN bus operating at 500 kbit/s, and high level electronics (microprocessors) communicate through an Ethernet bus operating at 100 Mbit/s. The design goals for the hardware architecture were modularity and scalability, such that adding a new module does not interfere with the existing ones. Moreover, the same two layer organization is used in other robotics systems (indoor research robots described in Martínez-Barberá and Gómez-Skarmeta (2002) and industrial AGVs described in Martínez-Barberá and Herrero-Pérez (2010b,a)). In fact, most of the electronics has been borrowed from those robotics systems. One advantage of this organization is the use of standard buses, and, in particular, the same standard bus as used in the automotive sector.

Fig. 3. The SatAnt autonomous car: (a) vehicle frame and mock-up of the electronics, (b) SatAnt vehicle, and (c) vehicle dashboard.

Fig. 4. The SatAnt hardware architecture.

The low level microcontrollers are in charge of closed-loop control of the actuators and of sensor data acquisition. These modules receive commands and send results through the CAN bus. The communication at this level is characterized by very small data packets at relatively high update rates. The high level microprocessors are in charge of running all the computationally intensive software, bridging the two layers, and communicating the vehicle with off-board equipment through wireless links. The communication at this level is characterized by the use of large data packets at relatively low update rates.

4.1. Electronic speed control

The electronic speed control system consists of four different elements depicted in Fig. 5(a): the throttle pedal sensor, the servo actuated injector, the servo controller, and the speed controller. The servo controller and the speed controller are connected through the CAN bus. The speed controller is a hierarchical fuzzy controller that uses both the wheel encoders and road steepness information to decide the position of the injection servo in order to maintain a commanded speed. This command is named set-speed. Given the available engine power and the light weight of the whole vehicle, the plant presents high non-linearities. In particular, the control of both acceleration and deceleration are important issues. Thus, there are three basic controllers: one for uphill control, one for downhill control, and another for control on the flat. The selection of the active controller is performed by a high level controller whose input is the pitch/roll data. Depending on the values, a combination of the basic controllers is executed. The output of the pitch/roll sensor used is quite noisy, so a fuzzy IIR filter is applied to the sensor output in order to smooth the signal. For illustration, an in-level controller is shown in Fig. 5(b), with a reduced number of fuzzy sets. The input variables, shown in Fig. 5(c), are the set-point error (e) and the vehicle acceleration (v̇). The output (Δy), shown in Fig. 5(d), is the increment to be applied to the throttle.

The servo controller receives both the throttle pedal sensor signal and the speed controller output and actuates the servo position accordingly. In the case that the throttle is actuated by a human operator, the manual input takes over control in any condition. All the electronics for both the servo control and speed control have been custom designed.
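A minimal sketch of an in-level fuzzy throttle controller of the kind Fig. 5(b) depicts is given below: the inputs are the set-point error e and the acceleration v̇, and the output is the throttle increment Δy. The membership breakpoints, rule base, and output singletons are placeholders for illustration, not the values used on the real vehicle.

```java
/** Sketch of a simple fuzzy throttle controller (cf. Fig. 5(b)):
 *  inputs e (set-point error) and vDot (acceleration), output dy
 *  (throttle increment). All numeric values are placeholders. */
class FuzzyThrottle {
    // Triangular membership centred at c with half-width w.
    static double tri(double x, double c, double w) {
        return Math.max(0.0, 1.0 - Math.abs(x - c) / w);
    }

    /** Returns the throttle increment for one control cycle. */
    static double deltaThrottle(double e, double vDot) {
        // Fuzzify: degree to which the car is too slow / on target /
        // too fast, and already accelerating.
        double slow  = tri(e,  2.0, 2.0);
        double ok    = tri(e,  0.0, 1.0);
        double fast  = tri(e, -2.0, 2.0);
        double accel = tri(vDot, 1.0, 1.0);

        // Rule base (illustrative): open the throttle when slow, close it
        // when fast, open only slightly when slow but already accelerating,
        // and hold it when on target.
        double[] w   = { slow,  fast,  Math.min(slow, accel), ok  };
        double[] out = { +0.10, -0.10, +0.02,                 0.0 };

        // Weighted-average defuzzification over the output singletons.
        double num = 0, den = 0;
        for (int i = 0; i < w.length; i++) { num += w[i] * out[i]; den += w[i]; }
        return den > 0 ? num / den : 0.0;
    }
}
```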

4.2. Electronic braking system

The electronic braking system consists of the pedal actuator, a motor controller, and the braking controller. The pedal actuator is a complex mechanical structure that allows parallel actuation of the pedal by both the human operator and the braking motor. The braking controller is in charge of applying braking patterns by sending signals to the motor controller through the CAN bus. A braking pattern is composed of three parameters: motor pressing time (to control the pressure applied to the brakes), motor standby time (to control how much time the brake is kept pressed), and motor releasing time (to control how fast the brake is released). The braking pattern command is named set-brake. Both the braking controller electronics and the mechanical structure have been custom designed and built.
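The set-brake command is easy to picture as a small value type; the sketch below encodes the three timing parameters named above. Field names and the millisecond unit are our assumptions, not the actual CAN message layout.

```java
/** The set-brake command described above: a braking pattern with three
 *  timing parameters. Names and units (ms) are illustrative assumptions. */
record BrakingPattern(int pressingMs,    // controls pressure applied to the brakes
                      int standbyMs,     // how long the brake is kept pressed
                      int releasingMs) { // how fast the brake is released

    /** Illustrative helper: total duration of one braking pattern. */
    int totalMs() { return pressingMs + standbyMs + releasingMs; }
}
```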

Fig. 5. (a) The electronic speed control system and (b) a simple fuzzy throttle controller with the definition of the membership functions for the (c) input and (d) output variables.

4.3. Electronic steering system

The electronic steering system consists of a Delphi power steering column and a steering controller. The power steering column is directly attached to the steering controller. It includes a switch to engage or disengage the automatic control. The steering controller is a fuzzy controller that uses the steering column absolute position sensor to maintain a given wheel angle. The steering command is named set-steer. The steering controller has also been custom designed.

5. The convoy application

The convoy application has been implemented in a similar way to the one developed in Borodani (2000), where the leading vehicle (which is manned) acts as a guide for the following vehicles (which are unmanned). Such an application is part of the MIMICS project, an early description of which can be found in Martinez-Barbera et al. (2003), which aimed to develop an intelligent platoon of vehicles. For practical reasons the real prototype has been limited to only two cars. The operation of the leading car is quite simple: it uses its sensors to send information to the following car, which then uses both its sensors and the information received to control the actuators. All the information is shared using wireless links.

The forward car is provided with portable equipment that contains the positioning sensors, a small processing unit, and the radio communication link. This portable system can be used in any standard car. The autonomous car, the SatAnt vehicle described above, is far more complex. First, it is automated to allow a computer to fully control it. Next, it includes sufficient processing elements and sensors to perform autonomous tasks. The system is intended to be deployed on special roads with controlled traffic, where good GPS coverage is possible and speeds are not high. For the rest of this section this scenario is assumed.

The two-car convoy application runs on the high-level layer of the hardware architecture of the SatAnt vehicle. It relies on the TC-II architecture, described above, applied to a multi-robot scenario. As such, it consists of four different types of elements:

– Multi-robot server. Given the nature of the communication infrastructure of TC-II, all the information shared by the different elements is exchanged through and stored in a multi-robot Linda space. There should be only one instance of this element.
– Manned vehicle. This is the element that will serve as a guide for all the autonomous vehicles. There should be only one instance of this element. Although it is outside the scope of the project, extending the application to support more than one manned vehicle is quite simple.
– Unmanned vehicle. This is the element that will follow a leading vehicle, whose identifier is set a priori by the system designer, allowing the leading car to be either a manned or an unmanned vehicle. It is possible to have more than one instance of this element.
– Monitor. For monitoring purposes it is possible to receive all the information received at the information server. The monitor element displays this information and allows the operator of the system to change some parameters and to monitor the internal state of the different elements. It is possible to have more than one instance of this element.

A typical instance of the system (and the one used for real world testing) consists of one manned car, one SatAnt unmanned car, and an operator base station. Such an instance is shown in Fig. 6(a). The manned car uses a single CPU portable rack that reads global positioning information, processes it to improve localization and sends it to the global Linda space. The base station is composed of both the Linda server and a monitor attached to it to allow the operator to control the system.

Fig. 6. (a) The two-car convoy application architecture and (b) the behaviors data flow.


The SatAnt car uses two CPUs to process sensing, improve localization, perform control operations, and run the application specific modules, plus a monitor process to let a car operator know how the system is performing. This is needed for safety and debugging reasons while developing and testing the system.

The manned vehicle architecture is quite simple. The Virtual Robot module reads sensor information (GPS position, electronic compass heading and, optionally, pulses from the tachometer) and writes it into the local Linda Space. The Perception module reads the raw sensor data and applies a Kalman filter to fuse them and produce a corrected global position. This position is then written into the Linda Space. The Router sends this corrected position to the multi-robot Linda Space. All these modules run in the same CPU, and as there is no heavy processing, it can be a low-end CPU. The VirtualRobot-Perception-Router cycle runs at 2 Hz on an Intel 486 based board, providing a sufficient update rate for the path following application.

In contrast to the manned vehicle architecture, the autonomous vehicle architecture is more complex. The processing is distributed over two CPUs: one for time-critical modules (the reactive layer) and the other for non-time-critical modules (the deliberative layer and the user interface). In the first CPU, the Virtual Robot reads sensor information (GPS position, electronic compass heading, tachometer pulses from the four wheels, and radar targets) and writes it into the local Linda Space. It also reads control values from the Linda Space and sends them to the corresponding actuators. The Perception module reads the raw positioning data and applies a Kalman filter to fuse them and produce a corrected global position, which is then written into the Linda Space. Moreover, the raw radar targets are stored in a radar buffer, from which an estimate of the time to collision is computed, and false targets are filtered out. This estimate is also written into the Linda Space. The Router sends the position to the multi-robot Linda Space, and also writes the position of the leading car into the local Linda Space. The Controller module reads the positioning data, collision data, and desired path from the Linda Space, executes the different reactive behaviors, and then produces the corresponding control values, which are written into the Linda Space. In the second CPU, the Navigation module reads the position of the leading car from the Linda Space and generates a path to guide the vehicle (connecting the leading car positions) and the desired velocity (averaging the leading car speed over a period). Both the path and the velocity are written into the Linda Space. The Monitor module simply displays the position of the different vehicles and their trajectories on a map, and also allows the operator to take control over an autonomous car and teleoperate it using a joystick. The VirtualRobot-Perception-Controller cycle runs at 10 Hz on a Transmeta Crusoe based board, and at this rate the set-speed, set-steer, and set-brake commands, briefly described above, are passed to the low-level hardware layer.

5.1. Localization

The robustness and performance of the system is largely influenced by the accuracy and integrity of the localization process. In order to allow for increased integrity in the localization process, not only are the positioning sensors (GPS, compass, and odometers) used but also the radar. A localization scheme has been developed which consists of a single Extended Kalman Filter (EKF), operating in global coordinates for the whole platoon. The state of the filter consists of the position, orientation, and linear and angular velocities of each vehicle. This centralized structure allows the radar information to be related directly to the state of both vehicles. To implement the filter, the equations have been decomposed into two modified EKFs (one for each vehicle). Each modified filter runs in a different vehicle, sharing the necessary information through the multi-robot Linda Space. The shared information is the covariance matrix and the individual states of the different vehicles. One advantage of this approach is that if the information from the other vehicle is not received for some time, the local filter is simply updated with its own sensor information only.
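The structure of one of the two decomposed filters can be outlined as below. This is a bare structural sketch under our own reading of the description above (per-vehicle state of position, orientation, and linear/angular velocity; radar relating the two vehicle states; graceful degradation when the other vehicle's data is stale); the actual prediction and update equations are not published here and are elided.

```java
/** Structural sketch of one half of the decomposed platoon EKF. Each
 *  vehicle runs one instance and shares its state and covariance via
 *  the multi-robot Linda Space. The filter math is intentionally
 *  elided; only the data flow described in the text is shown. */
class PlatoonEkf {
    // Per-vehicle state: x, y, heading, linear and angular velocity.
    double[] ownState = new double[5];
    double[][] ownCov = new double[5][5];

    // Last state received from the other vehicle (null while none arrived).
    double[] otherState;

    void predict(double dt) {
        // Propagate own state with the motion model; inflate covariance.
    }

    void updateWithOwnSensors(double[] gpsCompassOdometry) {
        // Standard EKF measurement update using only local sensors.
        // Runs regardless of whether the other vehicle is heard from.
    }

    void updateWithRadar(double measuredRange) {
        // The radar relates the two vehicle states directly: the predicted
        // measurement is the distance between the two positions.
        if (otherState == null) return;   // degrade to own sensors only
        double dx = otherState[0] - ownState[0];
        double dy = otherState[1] - ownState[1];
        double predictedRange = Math.hypot(dx, dy);
        double innovation = measuredRange - predictedRange;
        // ... Kalman gain and state correction elided ...
    }
}
```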

5.2. Radar buffer

One problem in computing the time to collision is detecting when the radar has returned a valid target, avoiding false targets and sensor noise. The radar used in this project is not very robust to noise: it produces false targets quite often and sometimes fails to detect a valid target. As the radar is the only sensor source for detecting obstacles, it is important to correctly filter the information it produces. To accomplish this, the strategy followed is similar to that used in sonar based mobile robots. The radar targets are stored in a buffer in robot centric coordinates. This buffer is then translated and rotated at each cycle to maintain geometrical consistency. When the measures are older than a prefixed amount of time they are deleted from the buffer. The estimate of the time to collision is computed by weighting the targets in the collision area by a factor that depends on how old the measurements are. The target that gives the minimum value is used as the obstacle. By adjusting the factors it is possible to modify the inertia of the buffer, so as to use new targets and to forget old ones.
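The sketch below renders that strategy in Java: targets kept in robot-centric coordinates, re-projected as the vehicle moves, aged out after a fixed time, and age-weighted when estimating the time to collision. The constants, the corridor test, and the exact weighting function are illustrative assumptions, not the tuned values of the real system.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of the radar target buffer described above. Constants and the
 *  age-weighting function are placeholders. */
class RadarBuffer {
    record Target(double x, double y, long stampMs) {}   // robot-centric (m)

    private final Deque<Target> buffer = new ArrayDeque<>();
    static final long MAX_AGE_MS = 2000;     // forget targets older than this
    static final double AGE_PENALTY = 1.5;   // weight growth per second of age

    void add(Target t) { buffer.addLast(t); }

    /** Translate/rotate stored targets each cycle to keep consistency. */
    void applyMotion(double dx, double dy, double dTheta) {
        double c = Math.cos(-dTheta), s = Math.sin(-dTheta);
        Deque<Target> moved = new ArrayDeque<>();
        for (Target t : buffer)
            moved.addLast(new Target(c * (t.x() - dx) - s * (t.y() - dy),
                                     s * (t.x() - dx) + c * (t.y() - dy),
                                     t.stampMs()));
        buffer.clear();
        buffer.addAll(moved);
    }

    /** Age-weighted time-to-collision estimate over the collision area. */
    double timeToCollision(long nowMs, double speed, double corridorHalfWidth) {
        buffer.removeIf(t -> nowMs - t.stampMs() > MAX_AGE_MS);
        double best = Double.POSITIVE_INFINITY;
        for (Target t : buffer) {
            // Only targets ahead of the vehicle and inside the corridor count.
            if (t.x() <= 0 || Math.abs(t.y()) > corridorHalfWidth) continue;
            double ageS = (nowMs - t.stampMs()) / 1000.0;
            // Older measurements are penalized, so fresh targets dominate.
            double ttc = (t.x() / Math.max(speed, 0.1)) * (1.0 + AGE_PENALTY * ageS);
            best = Math.min(best, ttc);   // the minimum value is the obstacle
        }
        return best;
    }
}
```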

5.3. Behaviors

The high-level controller of the vehicle is behavior-based. Similar behavior based controllers are used to maintain the distance to the preceding vehicle, actuating the throttle and the brake of the controlled vehicle, using fuzzy based techniques, e.g. Bauer and Tomizuka (1995, 1996, 2002), because these approaches do not require exact models of the vehicles. The behavior based controller adopted in this work is shown in Fig. 6(b). The behaviors in charge of maintaining the distance, as a time to collide, to the preceding vehicle are combined with a behavior for avoiding collisions. The controller consists of three different behaviors and a behavior blending mechanism. The keep-speed behavior is very simple. It uses and updates the recent history of the leading car velocities, and averages them (typically over a window of 10 s) to produce a set-speed control action. The avoid-collision behavior uses the time-to-collision obtained from the radar buffer and the current velocity to produce set-speed and set-brake control actions using a fuzzy controller. This behavior avoids collisions with either pedestrians or other vehicles. If the radar contact is far enough away the behavior operates by reducing the speed. If the radar contact is below a certain threshold it commands the vehicle to stop. The follow-path behavior uses the desired path obtained from the Navigation module and produces set-speed and set-steer control actions to follow the path. The behavior follows a pure-pursuit scheme. At each control cycle a look-ahead point on the path is selected as the current goal. Then a fuzzy controller, whose inputs are the corresponding lateral and heading errors, produces the corresponding set-speed and set-steer control actions.
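The pure-pursuit step of the follow-path behavior can be sketched as follows: pick a look-ahead point on the path, compute the lateral and heading errors in the vehicle frame, and derive a steering command. In the real system the final mapping from errors to set-steer is a fuzzy controller; here a simple proportional law stands in for it, and the look-ahead distance and gains are placeholders.

```java
/** Sketch of the follow-path behavior's pure-pursuit step. The
 *  proportional law stands in for the fuzzy controller of the real
 *  system; all numeric values are placeholders. */
class PurePursuit {
    static final double LOOK_AHEAD = 8.0;   // metres (placeholder)

    /** path: polyline of world-frame points {x, y}.
     *  Returns a steering command (rad) for the current cycle. */
    static double steerCommand(double[][] path, double x, double y, double theta) {
        // 1. Select the first path point at least LOOK_AHEAD away as the goal.
        double[] goal = path[path.length - 1];
        for (double[] p : path)
            if (Math.hypot(p[0] - x, p[1] - y) >= LOOK_AHEAD) { goal = p; break; }

        // 2. Express the goal as lateral and heading errors in the vehicle frame.
        double dx = goal[0] - x, dy = goal[1] - y;
        double lateralError = -Math.sin(theta) * dx + Math.cos(theta) * dy;
        double headingError = Math.atan2(dy, dx) - theta;

        // 3. Stand-in control law (a fuzzy controller in the actual vehicle).
        return 0.3 * headingError + 0.1 * lateralError;
    }
}
```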

Finally, the blender, which follows the context-dependent-blending schema described in Saffiotti (1997), is in charge ofcombining the outputs of the different behaviors to produce the actual control values sent to the Linda Space. The blender is


Fig. 7. (a) The blender controller and the definition of the membership functions for the (b) input and (c) output variables.


The blender is a fuzzy controller, shown in Fig. 7(a), in which the antecedents of the rules are features of the robot's context (front) and the outputs of the rules are the degrees of activation of the different behaviors (avoid-collision, follow-path, and keep-speed). Thus the outputs of the behaviors are weighted by their degrees of activation at each control cycle, and owing to the nature of the fuzzy controller the transition between different operation modes is smooth. The front variable represents the time to collision with a forward obstacle, and DANGER, CLOSE and FAR, shown in Fig. 7(b), are trapezoidal fuzzy sets that represent an obstacle closer than 1.5 s, between 1.5 and 2.5 s, and further than 2.5 s, respectively. The default clause, shown in Fig. 7(c), specifies the degrees of activation of the behaviors when no rule affecting them fires.
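A minimal Java sketch of this blending is shown below. The trapezoid breakpoints follow the 1.5 s and 2.5 s thresholds stated in the text; the shoulder widths, the rule base, and the method names are illustrative assumptions rather than the actual BG rules of Fig. 7.

/** Minimal sketch of the context-dependent blender described above. */
public class Blender {

    /** Trapezoidal membership: rises over a..b, flat over b..c, falls over c..d. */
    static double trapezoid(double x, double a, double b, double c, double d) {
        if (x <= a || x >= d) return 0.0;
        if (x < b) return (x - a) / (b - a);
        if (x <= c) return 1.0;
        return (d - x) / (d - c);
    }

    /** Degrees of activation of the behaviors given front (time to collision, s). */
    static double[] activations(double front) {
        double f = Math.min(front, 1e6); // clamp "no obstacle" (infinite TTC)
        double danger = trapezoid(f, -1.0, 0.0, 1.0, 1.5); // obstacle < 1.5 s
        double close  = trapezoid(f, 1.0, 1.5, 2.5, 3.0);  // 1.5 s .. 2.5 s
        double far    = trapezoid(f, 2.5, 3.0, 1e8, 1e9);  // obstacle > 2.5 s
        // Assumed rule base: DANGER -> avoid-collision, CLOSE/FAR -> follow-path,
        // FAR -> keep-speed. Overlapping sets give the smooth transitions.
        double avoidCollision = danger;
        double followPath = Math.max(close, far);
        double keepSpeed = far;
        return new double[] { avoidCollision, followPath, keepSpeed };
    }

    /** Weighted fusion of the behaviors' set-speed outputs. */
    static double blendSpeed(double[] act, double[] setSpeeds) {
        double num = 0, den = 0;
        for (int i = 0; i < act.length; i++) {
            num += act[i] * setSpeeds[i];
            den += act[i];
        }
        return den > 0 ? num / den : 0.0;
    }
}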

Both the behaviors and the blender have been implemented using the BG high-level language. In this case, the controller is a Java implementation of a BG interpreter. The BG language provides a simple way of specifying fuzzy behaviors and blenders, and much of the BG code can be reused among different robots with few or no modifications. In this case, the previously defined behaviors have been partially reused from an industrial AGV described in Martínez-Barberá and Herrero-Pérez (2010a,b).

5.4. Comparison with other autonomous vehicles

The development of the SatAnt vehicle operating in the two-car convoy application can be compared with the development of other sophisticated autonomous vehicles to highlight their differences. In particular, the reference developments are some of the robotic vehicles participating in the last edition of the DUC: Boss, presented in Urmson et al. (2008), Junior, described in Montemerlo et al. (2008) and Levinson et al. (2011), and Odin, presented in Bacha et al. (2008), which obtained the best results in the DUC competition operating at average speeds of 22.5, 22, and 21 km/h, respectively. Such vehicles had to navigate autonomously through urban areas performing complex tasks, such as negotiating parking lots.

The software development of these complex systems represented a major effort in human resources, where the teams had to face the problem of organizing and verifying their developments. Such developments had to be coordinated efficiently in order to ensure reliability, a high degree of maintainability, and easy integration of the software. Table 1 summarizes some of the features of the software development of these autonomous vehicles; in particular, the functional architecture, the flexibility of the configuration of the system, the communication support, and the behavioral reasoning for addressing complex situations.

The software development of Boss, presented in Clark et al. (2008) and McNaughton et al. (2012), consists of 300 K lines of code grouped into 14 K modules, written by 20 software developers spending 14 months implementing the code. The functional architecture consists of three layers: mission planning, behavioral executive, and motion planning. The software infrastructure is a toolbox that provides the basic tools required to build a robotic platform, which takes the form of common libraries providing fundamental capabilities, such as communications, interfaces, configuration, and task libraries. The communication library is based on an event-based blackboard, which enables the easy interchange of components during testing and development. The configuration library permits the dynamic configuration of the modules, programmed in ADA, and of the flow of the software system using the Ruby scripting language. The behavioral architecture is based on the concept of identifying a set of driving contexts, which are classified at a high level as road, intersection, and zone. A behavior is then selected depending on the driving context.

The code of Junior consists of 100 K lines of code, mostly ported from its predecessor Stanley, described in Montemerlo et al. (2006), which participated in the DGC. This development is the basis of Shelley, described in Langer et al. (2012), which is the latest autonomous car developed by the Center for Automotive Research at Stanford (CARS). This vehicle is able to drive smoothly to improve the comfort of the vehicle passengers, as described in Kritayakirana and Gerdes (2012). The overall software system consists of about 30 modules executed in parallel. The functional architecture consists of two layers: top-level control and path planner. The architecture pipelines data through a series of layers, which minimizes the data processing latency. Each module communicates with other modules via an anonymous publish/subscribe message passing protocol based on the Inter Process Communication Toolkit (IPC). The global driving mode is maintained in a Finite State Machine (FSM), which returns to normal behavior after the successful execution of a robotic behavior. These behaviors are programmed using the Task Definition Language (TDL) described in Simmons and Apfelbaum (1998): a platform-independent extension of C++ supporting task decomposition, synchronization, and exception handling. The developers found that the flexibility of the software during development was essential in achieving the high level of reliability necessary for long-term autonomous operation.

The development of Odin is the result of a collaborative effort between academia (Virginia Tech) and industry (TORC Technologies), where a team formed by 46 undergraduate students, 8 graduate students, 4 faculty members, 5 full-time TORC employees, and industry partners collaborated in its software development. The development, described in Hurdus et al. (2008) and Currier et al. (2012), was divided into three parts: base vehicle platform, perception, and planning. Each one was then divided into several components for parallel development. This modular approach provided the rapid development time needed to complete the project in 14 months. The software structure employs a hybrid deliberative–reactive paradigm, where perception, planning, and acting occur at several levels and in parallel tasks. The deliberative components are kept at a high level, whereas the more reactive, behavior-based components are used at a low level for direct actuator control. However, deliberative methods can emerge from the low-level motion planning, so the functional architecture shows a deliberative–reactive–deliberative progression. The communications between modules were implemented following the JAUS (Joint Architecture for Unmanned Systems) specification, enabling automated dynamic configuration and making the system highly reconfigurable, modular, expandable, and reusable.


Table 1. Comparison between different implementations of autonomous vehicles.

Autonomous vehicle | Functional architecture                           | Dynamic configuration | Behavioral organization         | Communication support  | Programming language
SatAnt             | Deliberative–reactive (two layers)                | Yes                   | Hierarchical behavioral-fusion  | Event-based blackboard | Java (BG for behaviors)
Boss               | Deliberative (three layers)                       | Yes                   | Hierarchical behavior-selection | Event-based blackboard | ADA
Junior             | Deliberative (two layers)                         | No                    | Finite state machine            | IPC                    | C/C++ (TDL for behaviors)
Odin               | Deliberative–reactive–deliberative (three layers) | Yes                   | Hierarchical behavior-selection | JAUS                   | Mixture of C++ and LabVIEW


The behavior-based paradigm is adopted to perform the driving tasks, which facilitates modularity, incremental testing, and graceful degradation. These behaviors are selected by an arbitration method depending on the current situation. This development made use of the intuitive graphical interface of LabVIEW to quickly and efficiently create custom embedded software.

The SatAnt development consists of 200 K lines of code grouped into more than 10 modules, depending on the application being run. The functional architecture employs a deliberative–reactive paradigm, where perception, planning, and acting occur at several levels and in parallel tasks. The use of the TC-II framework permits the specification of run-time parameters for the modules, the flexible configuration of the system, the use of a predefined components library, and the use of a process synchronization mechanism for communications between modules. The behaviors are programmed using the BG language, described in Martínez-Barberá and Gómez-Skarmeta (2002), which is an extension of the Java language providing a simple way of specifying the fuzzy behaviors and blenders used for producing the corresponding control values. The use of Java for both the low-level controllers and the high-level framework has the advantage of allowing a faster design and development cycle, while the penalty in performance can be quite small if Java-based microcontrollers are used. In addition, all the high-level software can be effectively tested on any platform before the actual deployment of the code. The use of the integrated simulator can drastically reduce the software debugging time. These features allowed for the successful completion of the project with a small development team.

We observe that the reference robotic vehicles incorporate most of the features needed by the SatAnt vehicle in the two-car convoy application: an effective organization of the code, the possibility of run-time configuration, and communication support for synchronizing the tasks running in parallel. The developments differ in the programming language used for the modules and especially for the behaviors. The use of a language designed for the high-level behaviors increases the flexibility of the development and has the advantage of allowing a faster design and development cycle. This feature is only adopted by the developers of Junior, using TDL. On the other hand, the behavioral organization of the reference robotic vehicles is based on hierarchical behaviors, whereas the behavioral organization of SatAnt is based on fused behaviors. A hierarchical behavior organization implies an interdependence between the inputs and outputs of the behaviors connected through the hierarchy, and hence these behaviors must be modified when the behavioral hierarchy changes. A fused behavior organization, by contrast, permits a design of the behaviors that is independent of their inputs and outputs, and thus they can be reused without any modification. The problem with the fused behavior organization arises when the number of behaviors to fuse is high and the blender becomes difficult to adjust. However, this is not the case for the SatAnt vehicle, and the benefits of the fused behavior organization allow a fast design and modification of the behaviors needed in our application.

6. Experiments

Several simulated and real-world experiments have been conducted to show that the proposed architecture works as expected. The test scenario is the campus of the University of Murcia (Spain). It contains a private road that surrounds the campus and whose traffic is restricted to vehicles going to or coming from the different faculties. At rush hours there are many cars, but during normal working hours the traffic is light. This allows testing with real traffic but with sufficient safety. The maximum speed allowed on campus is 50 km/h and the main road is about 3.5 km long. In addition, there are some traffic lights and road bumps to prevent cars from reaching high speeds. Fig. 8(a) shows the automated vehicle SatAnt, seen from the manned car, driving through the campus of the University of Murcia (Spain), while Fig. 8(b) shows the trajectory run in the actual experiment presented below. Fig. 8(c) shows the two-car convoy in a private circuit. The target speed of the convoy is between 30 and 50 km/h.

6.1. Automatic speed control

The goal of this experiment is to show the performance of a closed-loop control using the low-level hardware layer. For this task, the electronic speed control, described in Section 4.1, is provided with a fixed target speed.


Fig. 8. (a) The test scenario at the campus of the University of Murcia and (b) the route run by the two-car convoy. (c) The two-car convoy in a private circuit.

Fig. 9. Automatic speed control experiment (real environment): (a) raw data, (b) data filtering, and (c) speed controller output.


Table 2. Gaussian error model simulated in the positioning sensors.

Sensor                   | Error model
Odometers                | 0.015 m per wheel turn
GPS with WAAS correction | 1.8 m RMS, 3.8 m 95%
Compass                  | 1.0° RMS

Fig. 10. Autonomous convoy experiment (simulated environment): (a) degree of activation and (b) outputs of the behaviors.


Fig. 9(c) shows how this controller performs the task of maintaining a target speed of 30 km/h. The controller applies a fuzzy control action that depends on the steepness of the road, which is measured by a roll and pitch sensor. As the output of this sensor is quite noisy, as shown in Fig. 9(a), the signal is smoothed using a fuzzy IIR filter, as shown in Fig. 9(b). The car starts from rest on a level road. The behavior of the controller is shown in Fig. 9(c). The topmost curve is the actuation over the throttle, the horizontal line is the set point, and the bottom curve is the actual speed. The fuzzy controller first puts the car into motion until the set point is reached, while keeping the acceleration within safe limits. In the final part of the experiment the road gets quite steep and thus the controller releases part of the throttle to maintain the velocity. The average error is about 5 km/h, mainly because of the non-linearity of the plant due to the response of the engine, the abrupt changes in the steepness of the road, and the automatic gearbox.
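The paper does not detail the form of the fuzzy IIR filter; the Java sketch below shows one plausible form under that label, assuming a first-order IIR whose coefficient is scheduled by a fuzzy measure of the innovation, so that small (noisy) changes are smoothed hard while genuine slope changes pass through faster. All constants are assumptions.

/** Minimal sketch of a fuzzy-scheduled first-order IIR smoother. */
public class FuzzyIirFilter {

    private double y;           // filtered output
    private boolean first = true;

    double filter(double x) {
        if (first) { y = x; first = false; return y; }
        double innovation = Math.abs(x - y);
        // Fuzzy degree of "large change": 0 below 0.5 deg, 1 above 2.0 deg (assumed).
        double large = clamp((innovation - 0.5) / 1.5, 0.0, 1.0);
        // Blend between heavy smoothing (a = 0.05) and fast tracking (a = 0.6).
        double a = 0.05 * (1.0 - large) + 0.6 * large;
        y += a * (x - y); // standard first-order IIR update
        return y;
    }

    private static double clamp(double v, double lo, double hi) {
        return Math.max(lo, Math.min(hi, v));
    }
}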

6.2. Autonomous convoy

The goal of this experiment is to show the performance of the whole architecture described in the paper. The experiment has been conducted both in a simulator and with a real car. The set-up of the simulated experiment is as follows. The SatAnt vehicle (manually driven) is used to collect real data in normal traffic conditions, so that the velocity profile is as variable as in a real-world situation. There are three different causes that can alter the desired convoy speed: the car approaches other, slower vehicles; the car approaches a bump; or the car approaches a red light, which implies fully stopping the vehicle. The real data has been fed into the simulator to reproduce the behavior of the human-driven leading car. The autonomous vehicle is simulated using a kinematic model of an Ackerman drive configuration, the radar is simulated using a realistic sonar-based method, and the positioning sensors (odometers, GPS with WAAS, and compass) are simulated using a Gaussian error model, specified in Table 2, with parameters similar to those in real-world conditions.
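A minimal Java sketch of the Gaussian sensor error model of Table 2 is given below. The method names and the way the sigmas are applied (GPS noise per horizontal axis, odometer error accumulated as a random walk over wheel turns) are illustrative assumptions; only the sigma values come from the table.

import java.util.Random;

/** Minimal sketch of the Gaussian sensor error model of Table 2. */
public class SensorNoiseModel {

    private final Random rng = new Random();

    /** GPS with WAAS correction: 1.8 m RMS, applied per horizontal axis (assumed). */
    double[] noisyGps(double x, double y) {
        return new double[] { x + 1.8 * rng.nextGaussian(),
                              y + 1.8 * rng.nextGaussian() };
    }

    /** Compass: 1.0 degree RMS heading error. */
    double noisyHeading(double thetaRad) {
        return thetaRad + Math.toRadians(1.0) * rng.nextGaussian();
    }

    /** Odometer: 0.015 m per wheel turn, accumulated as a random walk (assumed). */
    double noisyDistance(double distance, double wheelCircumference) {
        double turns = distance / wheelCircumference;
        return distance + 0.015 * Math.sqrt(Math.max(turns, 0)) * rng.nextGaussian();
    }
}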

The two-car convoy application is plugged into the simulator and then run. Although a lot of data have been collected, only one interesting situation, shown in Fig. 10, is described. The trace starts when the leading car is taking a long curve to the right. Then it suddenly finds a traffic regulation bump, and thus it drastically reduces its speed before resuming its previous speed.


Fig. 11. Autonomous convoy experiment (real environment): (a) throttle and (b) wheel position commands sent to the low-level controllers.


A little later the leading car approaches a traffic light, and it stops for some seconds while the light is red and until it turns green. Then it slowly accelerates while turning to the left to enter the main road. Fig. 10(a) shows the degrees of activation of the different behaviors and Fig. 10(b) shows the blended output of the different behaviors of the TC-II controller module. The two dangerous situations can easily be detected in the graphs by observing the brake activation. The brake signal corresponds to the order sent to the low-level brake controller, which, depending on the current speed, generates a braking pattern (speed of depression of the brake pedal, depth of the braking action, and time the pedal is left pressed).
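A minimal Java sketch of this braking-pattern generation is given below: given the current speed, the low-level controller selects the pedal depression speed, depth, and hold time. The BrakePattern record, the speed thresholds, and the concrete numbers are illustrative assumptions; the paper only names the three parameters.

/** Minimal sketch of the speed-dependent braking-pattern generation. */
public class BrakePatternGenerator {

    record BrakePattern(double pedalSpeed,   // pedal travel per second (0..1/s)
                        double depth,        // fraction of full braking (0..1)
                        double holdSeconds) {}

    static BrakePattern forSpeed(double speedKmh) {
        if (speedKmh > 40) return new BrakePattern(0.8, 0.9, 2.0); // hard stop
        if (speedKmh > 20) return new BrakePattern(0.5, 0.6, 1.5); // moderate
        return new BrakePattern(0.3, 0.4, 1.0);                    // gentle
    }
}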

The last experiment shows the low-level signals that are produced in the real-car experiment, and the set-up is as follows. The leader car is a standard car equipped with the portable equipment that sends the positioning data, and the following car is the SatAnt vehicle in autonomous mode. The data were collected while the leading car was performing an S-shaped curve. At the end of the curve the road presented a small downward inclination. Fig. 11 shows the actual throttle and wheel position commands that are sent to the low-level controllers, after a fusion process similar to that described in the previous example (Fig. 10(a)). As expected, the subjective performance of the real car is similar to that of the simulator, with two main differences that are due to the dynamic nature of the sports car: the speed profile is jerkier in the real car than in the simulator (because of the important non-linearity of the traction plant), and the actual path of the vehicle is also jerkier than the simulated one (because of the slower turning speed of the servo-actuated steering wheel). In any case, the ultimate goal of the experiment is to test the feasibility of the proposed architecture with a custom modified car. Much better results could be obtained with a better experimental platform: one similar to a production car with a faster servo actuator for the steering wheel.

7. Conclusion

This paper has shown how ThinkingCap-II, a general-purpose behavior-based architecture for mobile robots, can be used within the scope of an intelligent vehicle application. In particular, an intelligent convoy application is described. The leading vehicle is manned and provides a path to the other vehicles. The SatAnt autonomous vehicle has been developed and is used as the following vehicle in a real two-car convoy. The TC-II framework provides a methodology to decompose a robotics system into different modules based on functionality; a software architecture that provides run-time support, dynamic configuration, and a components library; and a communication infrastructure that allows distribution of the different components. This framework has been successfully used in other robotics applications.

Using the previous tools, and directed towards the intelligent vehicle application presented, the SatAnt vehicle has been provided with a robust localization system, a radar filtering and obstacle detection mechanism, and a set of behaviors to drive the vehicle. This kind of application and its components could also be developed with an ad hoc architecture, as in other intelligent vehicle projects, at the expense of a lengthy and costly development. Most of the advantages come from the reuse of standard mobile robotics techniques and components. It is not that other development systems cannot reuse components, but TC-II has been designed with the portability of mobile robotics software in mind. The advantages of the approach presented are most evident for small research and development teams. In the case of the project described, the distribution of the development effort was approximately as follows: 60% automation of the vehicle, 35% algorithm development and testing, and 15% implementation of application-specific software. The low percentage of application-specific software development


is due to the reuse of many mobile robotics components developed for other robotics systems: a custom indoor mobile robot and an industrial one.

The application presented in this work can be considered a proof of concept, and its interest lies in proving the feasibility and adequacy of the framework proposed in the paper for building an intelligent vehicle application. For real navigation, in real traffic conditions with sufficient safety margins, more sensors and modules are needed. For instance, a vision module to detect road lanes, as used in many other projects, is very important in other kinds of scenarios. Incorporating such a system in the framework is straightforward. A new perception submodule would do all the processing, and the follow-path behavior could be modified to take this information into account as well. Because of the high processing needs of vision, the module could be incorporated in its own CPU or specialized hardware, provided that the communication is compatible with TC-II Linda Spaces or an interface module is developed. Thus, in addition to portability, the framework allows for extensibility.

Finally, using Java for both the low-level controllers and the high-level framework has the advantage of allowing a faster design and development cycle, while the penalty in performance can be quite small if Java-based microcontrollers are used. In addition, all the high-level software can be effectively tested on any platform before the actual deployment of the code. The use of the integrated simulator can drastically reduce the software debugging time. These features allowed for the successful completion of the project with a small development team.

Acknowledgments

This work was supported in part by the Spanish Ministry of Science and Innovation under the TIC-2001-0245-C02-01, DPI-2004-07993-C03-02, and DPI-2007-66556-C03-02 CICYT projects, and by the Spanish Ministry of Development under the FIT-1602002-2000-33, FIT-1602000-2001-53, and FIT-160300-2002-82 PROFIT projects. Special thanks to Alessandro Saffiotti for his valuable help and comments.

References

Ahlstrom, C., Nyström, M., Holmqvist, K., Fors, C., Sandberg, D., Anund, A., Kecklund, G., Åkerstedt, T., 2013. Fit-for-duty test for estimation of drivers' sleepiness level: eye movements improve the sleep/wake predictor. Transport. Res. Part C 26, 20–32.

Bacha, A. et al., 2008. Odin: team VictorTango's entry in the DARPA urban challenge. J. Field Rob. 25, 467–492.

Bauer, M., Tomizuka, M., 1995. Fuzzy Logic Traction Controllers and Their Effect on Longitudinal Vehicle Platoon Systems. Technical Report UCB-ITS-PRR-95-14. California PATH Research Report.

Bertozzi, M., Broggi, A., Fascioli, A., 2000. Vision-based intelligent vehicles: state of the art and perspectives. Robot. Auton. Syst. 32, 1–16.

Bertozzi, M., Bombini, L., Broggi, A., Cerri, P., Grisleri, P., Medici, P., Zani, P., 2008. GOLD: a framework for developing intelligent-vehicle vision applications. IEEE Intell. Syst. 23, 69–71.

Borodani, P., 2000. Full automatic control for trucks in a tow-bar system. In: Proc. of Int. Symp. on Adv. Veh. Control, Ann Arbor, MI, USA, pp. 131–138.

Broggi, A., Bertozzi, M., Fascioli, A., Conte, G., 1999. Automatic Vehicle Guidance: The Experience of the ARGO Autonomous Vehicle. World Scientific.

Broggi, A., Bombini, L., Cattani, S., Cerri, P., Fedriga, R., 2010. Sensing requirements for a 13,000 km intercontinental autonomous drive. In: Proc. of IEEE Intell. Veh. Symp., San Diego, CA, USA, pp. 500–505.

Campbell, M., Egerstedt, M., How, J., Murray, R., 2010. Autonomous driving in urban environments: approaches, lessons and challenges. Phil. Trans. Roy. Soc. A 368, 4649–4672.

Chen, S.W., Fang, C.Y., Tien, C.T., 2013. Driving behaviour modelling system based on graph construction. Transport. Res. Part C 26, 314–330.

Clark, M., Salesky, B., Urmson, C., 2008. Measuring software complexity to target risky modules in autonomous vehicle systems. In: Proc. of AUVSI: Unmanned Systems North America, San Diego, CA, USA, pp. 1–11.

Coda, A., Antonello, P., Peters, B., 1997. Technical and human factor aspects of automatic vehicle control in emergency situations. In: Proc. of World Congress on Intell. Transp. Syst., ITS America, Ertico & Vertis, Berlin, Germany.

Currier, P., Hurdus, J., Bacha, A., Faruque, R., Cacciola, S., Bauman, C., King, P., Terwelp, C., Reinholtz, C., Wicks, A., Hong, D., 2012. Experience from the DARPA Urban Challenge. Springer London. Chapter: The VictorTango Architecture for Autonomous Navigation in the DARPA Urban Challenge, pp. 93–131.

Di Stasi, L., Renner, R., Catena, A., Cañas, J., Velichkovsky, B., Pannasch, S., 2012. Towards a driver fatigue test based on the saccadic main sequence: a partial validation by subjective report data. Transport. Res. Part C 21, 122–133.

Franke, U., Bottiger, F., Zomotor, Z., Seeberger, D., 1995. Truck platooning in mixed traffic. In: Proc. of IEEE Symp. on Intell. Veh., Detroit, MI, USA, pp. 1–6.

Fu, L., Yazici, A., Ozguner, U., 2008. Route planning for OSU-ACT autonomous vehicle in DARPA urban challenge. In: Proc. of IEEE Intell. Veh. Symp., Twente, Netherlands, pp. 781–786.

Gelernter, D., 1985. Generative communication in Linda. ACM Trans. Program. Lang. Syst. 7, 80–112.

Handmann, U., Leefken, I., Tzomakas, C., von Seelen, W., 1999. A flexible architecture for intelligent cruise control. In: Proc. of IEEE Int. Conf. on Intell. Transp. Syst., Tokyo, Japan, pp. 958–963.

Herrero-Pérez, D., Martínez-Barberá, H., 2008a. Decentralized coordination of autonomous AGVs in flexible manufacturing systems. In: Proc. of IEEE/RSJ Int. Conf. on Intell. Robots and Systems, Nice, France, pp. 3674–3679.

Herrero-Pérez, D., Martínez-Barberá, H., 2008b. Petri Nets based coordination of flexible autonomous guided vehicles in flexible manufacturing systems. In: Proc. of IEEE Int. Conf. on Emerging Technologies and Factory Automation, Hamburg, Germany, pp. 508–515.

Herrero-Pérez, D., Martínez-Barberá, H., 2010. Modeling distributed transportation systems composed of flexible automated guided vehicles in flexible manufacturing systems. IEEE Trans. Ind. Inf. 6, 166–180.

Herrero-Pérez, D., Martínez-Barberá, H., 2011. Decentralized traffic control for non-holonomic flexible automated guided vehicles in industrial environments. Adv. Rob. 25, 739–763.

Horowitz, R., Varaiya, P., 2000. Control design of an automated highway system. Proc. IEEE 88, 913–925.

Hurdus, J., Bacha, A., Bauman, C., Cacciola, S., Faruque, R., King, P., Terwelp, C., Currier, P., Hong, D., Wicks, A., Reinholtz, C., 2008. The VictorTango architecture for autonomous navigation in the DARPA urban challenge. J. Aerosp. Comput., Inform., Commun. 5, 506–529.

Ioannou, P., 1998. Development and Experimental Evaluation of Autonomous Vehicles for Roadway/Vehicle Cooperative Driving. Technical Report UCB-ITS-PRR-98-09. California Partners for Advanced Transit and Highways (PATH).

Jiménez, F., Naranjo, J., 2011. Improving the obstacle detection and identification algorithms of a laserscanner-based collision avoidance system. Transport. Res. Part C 19, 658–672.

Jochem, T., Pomerleau, D., Kumar, B., Armstrong, J., 1995. PANS: a portable navigation platform. In: Proc. of IEEE Symp. on Intell. Veh., Detroit, MI, USA, pp. 107–112.

Kim, H., Dickerson, J., Kosko, B., 1996. Fuzzy throttle and brake control for platoons of smart cars. Fuzzy Set Syst. 84, 209–234.

Kramer, J., Scheutz, M., 2007. Development environments for autonomous mobile robots: a survey. Auton. Robot 22, 101–132.

Kritayakirana, K., Gerdes, J., 2012. Using the center of percussion to design a steering controller for an autonomous race car. Veh. Syst. Dyn. 50, 33–51.

Langer, D., Funke, J., Theodosis, P., Hindiyeh, R., Kritayakirana, K., Gerdes, J., Mueller-Bessler, B., Huhnke, B., Hernandez, M., Stanek, G., 2012. Up to the limits: autonomous Audi TTS. In: Proc. of IEEE Intell. Veh. Symp., Alcalá de Henares, Spain, pp. 541–547.

Laugier, C. et al., 1999. Sensor-based control architecture for a car-like vehicle. Auton. Robots 6, 165–185.

Lee, G., Kim, S., 2002. A longitudinal control system for a platoon of vehicles using a fuzzy-sliding mode algorithm. Mechatronics 12, 97–118.

Levinson, J., Askeland, J., Becker, J., Dolson, J., Held, D., Kammel, S., Kolter, J., Langer, D., Pink, O., Pratt, V., Sokolsky, M., Stanek, G., Stavens, D., Teichman, A., Werling, M., Thrun, S., 2011. Towards fully autonomous driving: systems and algorithms. In: Proc. of IEEE Intell. Veh. Symp., Baden-Baden, Germany, pp. 5–9.

Martensson, J., Alam, A., Behere, S., Khan, A., Kjellberg, J., Liang, K.Y., Pettersson, H., Sundman, D., 2012. The development of a cooperative heavy-duty vehicle for the GCDC 2011: team scoop. IEEE Trans. Intell. Transport. Syst. 13, 1033–1049.

Martín de Diego, I., Siordia, O., Crespo, R., Conde, C., Cabello, E., 2013. Analysis of hands activity for automatic driving risk detection. Transport. Res. Part C 26, 380–395.

Martínez-Barberá, H., Gómez-Skarmeta, A., 2002. A framework for defining and learning fuzzy behaviours for autonomous mobile robots. Int. J. Intell. Syst. 17, 1–20.

Martínez-Barberá, H., Herrero-Pérez, D., 2010a. Autonomous navigation of an automated guided vehicle in industrial environments. Robot. Cim-Int. Manuf. 26, 296–311.

Martínez-Barberá, H., Herrero-Pérez, D., 2010b. Development of a flexible AGV for flexible manufacturing systems. Ind. Robot 37, 459–468.

Martínez-Barberá, H., Herrero-Pérez, D., 2010c. Programming multirobot applications using the ThinkingCap-II Java framework. Adv. Eng. Inf. 24, 62–75.

Martinez-Barbera, H., Zamora-Izquierdo, M., Toledo-Moreo, R., Ubeda, B., Gomez-Skarmeta, A., 2003. The MIMICS project: an application for intelligent transport systems. In: Proc. of IEEE Intell. Veh. Symp., Columbus, OH, USA, pp. 633–638.

McNaughton, M., Baker, C., Galatali, T., Salesky, B., Urmson, C., Ziglar, J., 2012. Experience from the DARPA Urban Challenge. Springer London. Chapter: Software Infrastructure for a Mobile Robot, pp. 19–43.

Michon, J., 1985. A critical view of driver behaviour models: what do we know, what should we do. NATO ASI Ser. Plenum Press, New York, pp. 485–520.

Montemerlo, M. et al., 2008. Junior: the Stanford entry in the urban challenge. J. Field Rob. 25, 569–597.

Montemerlo, M., Thrun, S., Dahlkamp, H., Stavens, D., Strohband, S., 2006. Winning the DARPA grand challenge with an AI robot. In: Proc. of the AAAI, Boston, MA, USA, pp. 982–987.

Oreback, A., Christensen, H., 2003. Evaluation of architectures for mobile robotics. Auton. Robot 14, 33–49.

Ossen, S., Hoogendoorn, S., 2011. Heterogeneity in car-following behavior: theory and empirics. Transport. Res. Part C 19, 182–195.

Ozguner, U. et al., 1997. The OSU DEMO'97 vehicle. In: Proc. of IEEE Int. Conf. on Intell. Transp. Syst., Boston, MA, USA, pp. 221–226.

Ozguner, U., Stiller, C., Redmill, K., 2007. Systems for safety and autonomous behavior in cars: the DARPA grand challenge experience. Proc. IEEE 95, 397–412.

Patz, B., Papelis, Y., Pillat, R., Stein, G., Harper, D., 2009. The DARPA Urban Challenge. Springer Tracts in Advanced Robotics, vol. 56. Springer-Verlag, Berlin Heidelberg, pp. 305–358, Chapter 3.

Saffiotti, A., 1997. Fuzzy logic in autonomous robotics: behaviour coordination. In: Proc. of IEEE Int. Conf. on Fuzzy Systems, Barcelona, Spain, pp. 573–578.

Saffiotti, A., Konolige, K., Ruspini, E., 1995. A multivalued-logic approach to integrating planning and control. Artif. Intell. 76, 481–526.

Simmons, R., Apfelbaum, D., 1998. A task description language for robot control. In: Proc. of IEEE/RSJ Int. Conf. on Intell. Robots and Systems, Victoria, BC, Canada, pp. 1931–1937.

Thorpe, C., Hebert, M., Shafer, S., 1988. Vision and navigation for the Carnegie-Mellon Navlab. IEEE Trans. Pattern Anal. Mach. Intell. 10, 362–373.

Thorpe, C., Hebert, M., Kanade, T., Shafer, S., 1991. Toward autonomous driving: the CMU Navlab. I. Perception. IEEE Expert 6, 31–42.

Urmson, C. et al., 2008. Autonomous driving in urban environments: Boss and the urban challenge. J. Field Rob. 25, 425–466.

Zheng, J., Suzuki, K., Fujita, M., 2013. Car-following behavior with instantaneous driver-vehicle reaction delay: a neural-network-based methodology. Transport. Res. Part C 36, 339–351.

Zohdy, I., Rakha, H., 2012. Optimizing driverless vehicles at intersections. In: Proc. of ITS World Congress, Vienna, Austria.