

Intelligent Transportation Systems

Using Fuzzy Logic in Automated Vehicle Control

José E. Naranjo, Carlos González, Ricardo García, and Teresa de Pedro, Instituto de Automática Industrial
Miguel A. Sotelo, Universidad de Alcalá de Henares

Automated versions of a mass-produced vehicle use fuzzy logic techniques to both address common challenges and incorporate human procedural knowledge into the vehicle control algorithms.

Until recently, in-vehicle computing has been largely relegated to auxiliary tasks such as regulating cabin temperature, opening doors, and monitoring fuel, oil, and battery-charge levels. Now, however, computers are increasingly assuming driving-related tasks in some commercial models. Among those tasks are

• maintaining a reference velocity or keeping a safe distance from other vehicles,
• improving night vision with infrared cameras, and
• building maps and providing alternative routes.

Still, many traffic situations remain complex and difficult to manage, particularly in urban settings. The driving task belongs to a class of problems that depend on underlying systems for logical reasoning and dealing with uncertainty. So, to move vehicle computers beyond monitoring and into tasks related to environment perception or driving, we must integrate aspects of human intelligence and behavior so that vehicles can manage driving actuators in a way similar to humans.

This is the motivation behind the AUTOPIA program, a set of national research projects in Spain. AUTOPIA has two primary objectives. First, we want to implement automatic driving using real, mass-produced vehicles tested on real roads. Although this objective might be called "utopian" at the moment, it's a great starting point for exploring the future. Our second aim is to develop our automated system using modular components that can be immediately applied in the automotive industry.

AUTOPIA builds on the Instituto de Automática Industrial's extensive experience developing autonomous robots and fuzzy control systems and the Universidad de Alcalá de Henares's knowledge of artificial vision. (The "Intelligent-Vehicle Systems" sidebar discusses other such projects.) We're developing a testbed infrastructure for vehicle driving that includes control-system experimentation, strategies, and sensors. All of our facilities and instruments are available for collaboration with other research groups in this field. So far, we've automated and instrumented two Citroën Berlingo mass-produced vans to carry out our objectives.

Automated-vehicle equipment

Figure 1 shows two mass-produced electric Citroën Berlingo vans, which we've automated using an embedded fuzzy-logic-based control system to control their speed and steering. The system's main sensor inputs are a CCD (charge-coupled device) color camera and a high-precision global positioning system. Through these, the system controls the vehicle-driving actuators: the steering, throttle, and brake pedals. Both vehicles include an onboard PC-based computer; a centimetric, real-time kinematic differential GPS (RTK DGPS); wireless LAN support; two servomotors; and an analog/digital I/O card. We added a vision system in another computer connected to the control computer. Figure 2 shows the control system that we developed to handle all these devices.

The computer drives the vans using two fuzzy-logic-based controllers: the steering (lateral) control and the speed (longitudinal) control. To automate the steering, we installed a DC servomotor in the steering wheel column. The Berlingo has an electronic throttle control, so we shortened the electronic circuit to actuate the throttle using an analog output card. The brake pedal is fully mechanical; we automated it using a pulley and a DC servomotor. We equipped the transmission with an electronic gearbox with forward and reverse selection. We automated this using a digital I/O card that sends the correct gear to the internal vehicle computer.


Intelligent-Vehicle Systems

According to Ernst Dickmanns, it's likely to be at least 20 years before we'll apply intelligent transportation systems to road transportation and achieve the ultimate goal of fully autonomous vehicle driving.1 The STARDUST final report assumes that ITS progress toward this goal will be rooted in adaptive cruise control.2

In 1977, Sadayuki Tsugawa's team in Japan presented the first intelligent vehicle with fully automatic driving.3 Although that system was limited, it demonstrated autonomous vehicles' technical feasibility. Tsugawa's group continues its autonomous vehicle work today, and many other international projects have also been launched, including the following efforts in Europe and the US.

European projects

In Europe, researchers completed autonomous-vehicle developments such as VaMoRs and ARGO as part of the Prometheus program (the Program for European Traffic with Highest Efficiency and Unprecedented Safety).

As part of the VaMoRs project, Ernst Dickmanns' team at the Universität der Bundeswehr developed an automatic-driving system using artificial vision and transputers in the late 1980s. The team installed the system in a van with automated actuators and ran a range of automatic-driving experiments, some of which were on conventional highways at speeds up to 130 kmh. Projects such as VAMP (the advanced platform for visual autonomous road vehicle guidance) are continuing this work today.4

Alberto Broggi’s team developed the ARGO vehicle at ParmaUniversity.5 In 1998, the team drove ARGO 2,000 km along Ital-ian highways in automatic mode using artificial-vision-basedsteering.

In France, projects such as Praxitele and "La route automatisée" focus on driving in urban environments, as do the European Union's CyberCars and CyberCars-2 projects. Another European project, Chauffeur, focuses on truck platoon driving.

US projectsCalifornia’s PATH (Partners for Advanced Transit and High-

ways)6 is a multidisciplinary program launched in 1986. Its ulti-mate goal is to solve the state’s traffic problems by totally orpartially automating vehicles. PATH uses special lanes for auto-matic vehicles, which will circulate autonomously (guided bymagnetic marks) and form platoons.

Carnegie Mellon University’s NAVLAB has automated 11 vehi-cles since 1984 to study and develop autonomous-driving tech-niques. In 1995, NAVLAB #5 carried out the No Hands acrossAmerica experiment, a trip from Washington D.C. to Californiaalong public highways in which artificial vision and a neuralnetwork control system managed the vehicle’s steering wheel.

The University of Arizona’s VISTA (Vehicles with IntelligentSystems for Transport Automation) project started in 1998 toconduct intelligent-vehicle research and develop technologyfor vehicle control.7 Since 2000, the project has been cooperat-ing with the Chinese Academy of Sciences, whose intelligent-

transportation-system activities include research8 and manag-ing the National Field Testing Complex.9 In 2004, DARPA decidedto test automatic-vehicle technology by organizing the DARPA

Grand Challenge. The experiment consisted of a set of activi-ties for autonomous vehicles, with 25 out of 106 groups se-lected to participate.10 Only 14 groups qualified for the final,which was a race of approximately 320 km; the winner madeit only 12 km.11

A second edition was held on 8 October 2005 in the US. TheStanford Racing Team won, with a winning time of 6 hours, 53minutes. In all, five teams completed the Grand Challengecourse, which was 132 miles over desert terrain. DARPA has an-nounced a third Grand Challenge, set for 3 November 2007. In it,participants will attempt an autonomous, 60-mile route in anurban area, obeying traffic laws and merging into moving traffic.

References

1. E. Dickmanns, "The Development of Machine Vision for Road Vehicles in the Last Decade," Proc. IEEE Intelligent Vehicle Symp., vol. 1, IEEE Press, 2002, pp. 268–281.
2. Sustainable Town Development: A Research on Deployment of Sustainable Urban Transport (Stardust), Critical Analysis of ADAS/AVG Options to 2010, Selection of Options to Be Investigated, European Commission Fifth Framework Programme on Energy, Environment, and Sustainable Development, 2001.
3. S. Kato et al., "Vehicle Control Algorithms for Cooperative Driving with Automated Vehicles and Intervehicle Communications," IEEE Trans. Intelligent Transportation Systems, vol. 3, no. 3, 2002, pp. 155–161.
4. U. Franke et al., "Autonomous Driving Goes Downtown," IEEE Intelligent Systems, vol. 13, no. 6, 1998, pp. 40–48.
5. A. Broggi et al., Automatic Vehicle Guidance: The Experience of the ARGO Autonomous Vehicle, World Scientific, 1999.
6. R. Rajamani et al., "Demonstration of Integrated Longitudinal and Lateral Control for the Operation of Automated Vehicles in Platoons," IEEE Trans. Control Systems Technology, vol. 8, no. 4, 2000, pp. 695–708.
7. F.Y. Wang, P.B. Mirchandani, and Z. Wang, "The VISTA Project and Its Applications," IEEE Intelligent Systems, vol. 17, no. 6, 2002, pp. 72–75.
8. N.N. Zheng et al., "Toward Intelligent Driver-Assistance and Safety Warning System," IEEE Intelligent Systems, vol. 19, no. 2, 2004, pp. 8–11.
9. F.Y. Wang et al., "Creating a Digital-Vehicle Proving Ground," IEEE Intelligent Systems, vol. 18, no. 2, 2003, pp. 12–15.
10. Q. Chen, U. Ozguner, and K. Redmill, "Ohio State University at the DARPA Grand Challenge: Developing a Completely Autonomous Vehicle," IEEE Intelligent Systems, vol. 19, no. 5, 2004, pp. 8–11.
11. J. Kumagai, "Sand Trap," IEEE Spectrum, June 2004, pp. 34–40.


We designed our driving area to emulate an urban environment because automatic urban driving is one of ITS's less researched topics.

Guidance system

We modeled the guidance system using fuzzy variables and rules. In addition to the steering wheel and vehicle velocity functionalities, we also consider variables that the system can use in adaptive cruise control (ACC) and overtaking capabilities. Among these variables are the distance to the next bend and the distance to the lead vehicle (that is, any vehicle driving directly in front of the automated vehicle).

Car driving is a special control problem because mathematical models are highly complex and can't be accurately linearized. We use fuzzy logic because it's a well-tested method for dealing with this kind of system, provides good results, and can incorporate human procedural knowledge into control algorithms. Also, fuzzy logic lets us mimic human driving behavior to some extent.

Steering control

The steering control system's objective is to track a trajectory. To model the lateral and angular tracking deviations a human driver perceives, we use two fuzzy variables: Lateral_Error and Angular_Error. These variables represent the difference between the vehicle's current and correct position and its orientation relative to a reference trajectory.

Both variables can take left or right linguistic values. Angular_Error represents the angle between the orientation and vehicle velocity vectors. If this angle is counterclockwise, the Angular_Error value is left. If the angle is clockwise, the Angular_Error value is right. Lateral_Error represents the distance from the vehicle to the reference trajectory. If the vehicle is positioned on the trajectory's left, the Lateral_Error value is left; it's right if the vehicle is on the right.

We compute the variables' instantaneous values using the DGPS data and a digital environment map. The fuzzy output variable is Steering_Wheel; it indicates which direction the system must turn the steering wheel to correct the input errors. Again, the variable has left and right linguistic values. The value is left if the steering wheel must turn counterclockwise and right if it must turn clockwise. We define the fuzzy sets for the left and right values on the interval from –540 to 540 degrees.
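For concreteness, here's a minimal sketch of how the two crisp inputs could be computed from a DGPS fix and a straight map segment. The article doesn't give this computation, so the function, its signature, and the sign conventions (positive means left) are our assumptions:

    import math

    def tracking_errors(pos, heading_deg, seg_start, seg_end):
        # Crisp Lateral_Error (m) and Angular_Error (deg) with respect to a
        # straight reference segment; positive values mean "left".
        sx, sy = seg_start
        ex, ey = seg_end
        px, py = pos
        dx, dy = ex - sx, ey - sy
        norm = math.hypot(dx, dy)
        ux, uy = dx / norm, dy / norm          # unit vector along the path
        # Signed lateral offset: cross product of the path direction and the
        # vector from the segment start to the vehicle.
        lateral = ux * (py - sy) - uy * (px - sx)
        # Signed heading offset, wrapped to (-180, 180].
        bearing = math.degrees(math.atan2(dy, dx))
        angular = (heading_deg - bearing + 180.0) % 360.0 - 180.0
        return lateral, angular

For a vehicle at (2, 1) heading 5 degrees along the segment from (0, 0) to (10, 0), this returns 1.0 m and 5 degrees, both "left" in the sense above.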

As with human behavior, our guidance system works differently when tracking lanes than when turning on sharp bends. When traveling along a straight road, people drive at relatively high speeds while gently turning the steering wheel. In contrast, on sharp bends, they rapidly reduce speed and quickly turn the steering wheel. We emulate such behavior by changing the membership function parameters of the Lateral_Error, Angular_Error, and Steering_Wheel linguistic variables. To represent the human procedural knowledge in the driving task, we need only two fuzzy rules. These rules tell the fuzzy inference engine how to relate the fuzzy input and output variables:

IF Angular_Error left OR Lateral_Error left THEN Steering_Wheel right
IF Angular_Error right OR Lateral_Error right THEN Steering_Wheel left

Although these rules are simple, they generate results that are close to human driving. The rules are the same for all situations, but the definitions of the fuzzy variables' linguistic values change. Figure 3 shows this feature in the membership function definitions for Lateral_Error and Angular_Error. Figures 3a and 3b show the degree of truth for the input error values in straight-path tracking situations. This definition lets the system act quickly when trajectory deviations occur, again in keeping with human behavior.

To prevent accidents, we must limit the maximum turning angle for straight-lane driving. This limitation is also similar to human behavior; we achieve it by defining the output variable membership function as a singleton, confining this turning to 2.5 percent of the total range. Figures 3c and 3d show similar function definitions, but their shapes' gradients are lower. This makes the driving system less reactive on bends, where larger deviations are normal, and ensures that the vehicle adapts to the route smoothly.


Figure 1. The AUTOPIA testbed vehicles. An embedded fuzzy-logic-based control system controls both speed and steering in each Citroën Berlingo.

Figure 2. The AUTOPIA system control structure. The sensorial equipment (tachometer, GPS, wireless LAN, camera) supplies the necessary data to the fuzzy-logic-based guidance system, which decides the optimal control signals to manage the vehicle actuators (steering wheel, throttle, and brake) through DC motors and an analog card.

We can also represent the output using a singleton without turning limitations. We fine-tuned the membership functions experimentally, comparing their behavior with human operations and correcting them accordingly until the system performed acceptably. So, the driving system selects a fuzzy membership function set depending on the situation, which leads to different reactions for each route segment.
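The following sketch shows one way the two steering rules could be evaluated, with fuzzy OR as max and weighted-singleton defuzzification. The breakpoints (0.8 m, 2 degrees) echo Figure 3, and the 2.5 percent straight-mode limit comes from the text, but the exact shapes and scaling are our assumptions, not the published tuning:

    def ramp(x, a, b):
        # Piecewise-linear membership: 0 at x = a, rising to 1 at x = b
        # (a > b gives a falling ramp).
        if a == b:
            return 1.0 if x >= a else 0.0
        return min(1.0, max(0.0, (x - a) / (b - a)))

    def steering_command(lateral_err_m, angular_err_deg, straight_mode=True):
        # Degrees of truth of the left/right linguistic values; positive
        # errors mean "left" here.
        lat_left = ramp(lateral_err_m, 0.0, 0.8)
        lat_right = ramp(lateral_err_m, 0.0, -0.8)
        ang_left = ramp(angular_err_deg, 0.0, 2.0)
        ang_right = ramp(angular_err_deg, 0.0, -2.0)
        # Fuzzy OR as max, one firing strength per rule:
        # IF Angular_Error left OR Lateral_Error left THEN Steering_Wheel right
        fire_right = max(ang_left, lat_left)
        # IF Angular_Error right OR Lateral_Error right THEN Steering_Wheel left
        fire_left = max(ang_right, lat_right)
        # Output singletons near the +-540 degree extremes; in straight-lane
        # mode the turn is confined to 2.5 percent of the total range.
        swing = 540.0 * (0.025 if straight_mode else 1.0)
        total = fire_left + fire_right
        if total == 0.0:
            return 0.0
        # Weighted average of the singletons; positive = counterclockwise.
        return (fire_left * swing - fire_right * swing) / total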

Speed control

To control speed, we use two fuzzy input variables: Speed_Error and Acceleration. To control the accelerator and the brake, we use two fuzzy output variables: Throttle and Brake. The Speed_Error crisp value is the difference between the vehicle's real speed and the user-defined target speed, and the Acceleration crisp value is the speed's variation during a time interval. The throttle actuation range is 2–4 volts, and the brake pedal range is 0–240 degrees of the actuation motor.

The fuzzy rules containing procedural knowledge for throttle control are

IF Speed_Error MORE THAN null THEN Throttle up
IF Speed_Error LESS THAN null THEN Throttle down
IF Acceleration MORE THAN null THEN Throttle up
IF Acceleration LESS THAN null THEN Throttle down

The rules for brake control are

IF Speed_Error MORE THAN nullf THEN Brake down
IF Speed_Error LESS THAN nullf THEN Brake up
IF Acceleration LESS THAN nullf THEN Brake up

where brake/throttle down means depress the pedal and brake/throttle up means release it. The membership functions associated with the fuzzy linguistic labels null and nullf define the degree of nearness to 0 of Speed_Error and Acceleration for the throttle and brake controllers, respectively.

Figures 3e through 3h show the membership functions of null (for the throttle controller) and nullf (for the brake controller) for Speed_Error and Acceleration, respectively. An asymmetry exists in the two variable definitions for two reasons:

• to account for the difference in how accelerating and braking vehicles behave, and
• to coordinate both pedals' actuation to emulate human driving.


Figure 3. The membership function definitions for the fuzzy variables: (a) Lateral_Error straight, (b) Angular_Error straight, (c) Lateral_Error curves, (d) Angular_Error curves, (e) Speed_Error throttle, (f) Acceleration throttle, (g) Speed_Error brake, and (h) Acceleration brake.

Throttle and brake controllers are independent, but they must work cooperatively. Activating the two pedals produces similar outcomes and can

• increase the vehicle's speed toward the target (stepping on the throttle or stepping off the brake on downhill roads),
• maintain speed (stepping on or off either pedal when necessary), and
• reduce the vehicle's speed (stepping off the throttle or stepping on the brake).

ACC+Stop&Go

With ACC, the system can change the vehicle's speed to keep a safe distance from the lead vehicle. As an extreme example, the lead vehicle might come to a complete stop owing to a traffic jam. In this case, the ACC must stop the vehicle using a stop-and-go maneuver; when the road is clear, the ACC reaccelerates the vehicle until it reaches the target speed. Combining ACC with stop-and-go maneuvers increases driving comfort, regulates traffic speed, and breaks up bottlenecks more quickly. Many rear-end collisions happen in stop-and-go situations because of driver distractions.

ACC systems have been on the market since 1995, when Mitsubishi offered the Preview Distance Control system in its Diamante model. Several sensors can provide the vehicle with ACC capability: radar, laser, vision, or a combination thereof. Almost all car manufacturers now offer ACC systems for their vehicles, but these systems have two clear drawbacks. First, they don't work at speeds lower than 40 kmh, so they can't offer stop-and-go maneuvers. Second, they manage only the throttle automatically; consequently, the speed adaptation range is limited.

Our system overcomes these limitations by automating the throttle and brake, which lets the system act across the vehicle's entire speed range. In our case, we selected GPS as the safety-distance sensor. We installed GPS in both vehicles, and they communicate their positions to one another via WLAN.

Keeping a user-defined safety distance from the next vehicle is a speed-dependent function: the higher the speed, the larger the required intervehicle gap. This is the time-headway concept: a time-dependent safety distance maintained between two vehicles. If we set a safety time gap of two seconds, for example, the space gap is 22.2 meters for a vehicle moving at 40 kmh but approximately 55.5 meters at 100 kmh. The time gap setting depends on the vehicle's braking power, the weather, the maximum speed, and so on. For example, Article 54 of the Spanish Highway Code states,

The driver of a vehicle trailing another shall keep a distance such that he or she can stop his or her vehicle without colliding with the leading vehicle should this brake suddenly, taking into account speed, adherence, and braking conditions.
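The gap computation itself is simple arithmetic; a one-line helper reproduces the figures in the text (the function name is ours):

    def headway_gap_m(speed_kmh, time_gap_s=2.0):
        # Speed-dependent safety distance: gap = v * t, with v in m/s.
        return speed_kmh / 3.6 * time_gap_s

    # headway_gap_m(40) -> 22.2 m; headway_gap_m(100) -> about 55.5 m,
    # matching the examples above.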

Figure 4 shows our ACC+Stop&Go controller's performance in one of our automated vehicles. At the experiment's beginning, the trailing vehicle starts moving, speeds up, and eventually stops because the lead vehicle is blocking the way. The lead vehicle then starts moving, gains speed, and brakes again, emulating a congested traffic situation. A few seconds later, the trailing vehicle starts up again, eventually stopping behind the lead vehicle. Figure 5a shows the system using the throttle and brake to continuously control the distance and time headway. Figure 5b shows that the trailing vehicle maintains its distance even if the speed is low and the time headway is no longer significant. When the lead vehicle starts up, the trailing vehicle follows, respecting the time headway (see figure 5c). Finally, figure 5d shows each vehicle's speed profile, which indicates the cars' behavior in relation to pedal pressure.

Overtaking

The system can also manage obstacles or other vehicles in the vehicle's path by calculating when the vehicle should change lanes to overtake the (mobile or immobile) obstacle. First,

• the vehicle must be in the straight-lane driving mode,
• the left lane must be free, and
• there must be room for the overtaking.2

Given this, overtaking occurs as follows:

1. Initially, the vehicle is in straight-lane mode.
2. The driving mode changes to lane-change mode, and the vehicle moves into the left lane.
3. The driving mode changes to straight-lane mode until the vehicle has passed the obstacle or vehicle.
4. The driving mode again changes to lane-change mode, and the vehicle returns to the right lane.
5. When the vehicle is centered in the lane, the driving mode changes back to straight-lane mode, and driving continues as usual.


Figure 4. The adaptive cruise control + Stop&Go controller's performance in an automated vehicle. Keeping a safe distance (a) at 30 kmh (2 s target headway, 16.6 m), (b) during speed reduction (9 kmh, 5 m), and (c) in stop-and-go situations (0 kmh, 2 m).


Figure 5. Performance measures: (a) normalized throttle and brake pedal pressure, (b) headway distance (m), (c) time-headway error (sec.), and (d) each vehicle's speed (kmh).

Figure 6 shows a detailed flowchart of this algorithm. We calculate the time for starting the transition from the first to the second step as a function of the vehicles' relative speed and the overtaking vehicle's length. Figure 7 illustrates the overtaking maneuver. The overtaking vehicle must change lanes at point A + l, where A is the distance at which the vehicle changes lanes and l is the vehicle's length. The dot on the back of each vehicle represents a GPS antenna, located over the rear axle. Vehicles use the GPS receptor and the WLAN link to continuously track their own position and that of other vehicles. The lane change proceeds only if the front of the overtaking vehicle is completely in the left lane upon reaching the rear of the overtaken vehicle in the right lane.

A is speed dependent: A = F(v), where v is the relative speed between the overtaking and overtaken vehicles. A depends on this relative speed because the higher that speed, the larger the lane-change distance must be. In this case, l is 4 meters, a Citroën Berlingo's length.

The system transitions from step 2 to step 3 when the overtaking vehicle's angular and lateral errors are both low. Specifically, Angular_Error must be less than 2 degrees and Lateral_Error less than 0.8 meter. The system transitions to step 4 when the overtaking vehicle's rear end passes the overtaken vehicle's front end and the separation is l (see figure 7b). Finally, the transition to step 5 uses the same criterion as the transition from step 2 to step 3.
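A compact way to express this five-step sequence is a mode state machine. In the sketch below, the thresholds (2 degrees, 0.8 m) and the vehicle length come from the text; because the text leaves F(v) unspecified, the linear form of F and the gap test are our assumptions:

    from enum import Enum, auto

    class Mode(Enum):
        STRAIGHT = auto()      # steps 1 and 5: straight-lane tracking
        CHANGE_LEFT = auto()   # step 2
        PASSING = auto()       # step 3: straight-lane mode in the left lane
        CHANGE_RIGHT = auto()  # step 4

    VEHICLE_LENGTH_M = 4.0     # l: a Citroen Berlingo's length

    def overtake_start_distance(rel_speed_kmh, k_s=2.0):
        # A + l, with A = F(v); a linear F(v) = k_s * v (seconds of travel
        # at the relative speed) is our guess.
        return k_s * rel_speed_kmh / 3.6 + VEHICLE_LENGTH_M

    def next_mode(mode, gap_m, rel_speed_kmh, lateral_err_m, angular_err_deg,
                  rear_has_passed):
        # Lane changes count as settled when Angular_Error < 2 degrees and
        # Lateral_Error < 0.8 m, per the transition tests in the text.
        settled = abs(angular_err_deg) < 2.0 and abs(lateral_err_m) < 0.8
        if (mode is Mode.STRAIGHT and
                0.0 < gap_m <= overtake_start_distance(rel_speed_kmh)):
            return Mode.CHANGE_LEFT    # step 1 -> 2
        if mode is Mode.CHANGE_LEFT and settled:
            return Mode.PASSING        # step 2 -> 3
        if mode is Mode.PASSING and rear_has_passed:
            return Mode.CHANGE_RIGHT   # step 3 -> 4 (separation of l reached)
        if mode is Mode.CHANGE_RIGHT and settled:
            return Mode.STRAIGHT       # step 4 -> 5
        return mode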

Vision-based vehicle detection

To achieve reliable navigation, all autonomous vehicles must master the basic skill of obstacle detection. This vision-based task is complex. Consider, for example, common situations in urban environments, such as missing lane markers, vehicles parked on both sides of the street, or crosswalks. All such situations make it difficult for a system to reliably detect other vehicles, creating hazards for the host vehicle. To address this, we use a monocular color-vision system to give our GPS-based navigator visual reactive capacity.

Search and vehicle detection

We sharply reduce execution time by limiting obstacle detection to a predefined area in which obstacles are more likely to appear. This rectangular area, or region of interest (ROI), covers the image's central section.

To robustly detect and track vehicles along the road, we need two consecutive processing stages. First, the system locates vehicles on the basis of their color and shape properties, using vertical-edge and color-symmetry characteristics. It combines this analysis with temporal constraints for consistency, assuming that vehicles generally have artificial rectangular and symmetrical shapes that make their vertical edges easily distinguishable. Second, the system tracks the detected vehicle using a real-time estimator.


Figure 6. A flowchart of the overtaking algorithm, from ACC+Stop&Go straight-lane tracking through the lane-change, passing, and return-lane states.

Figure 7. The overtaking maneuver (a) starts with a change from the right to the left lane and (b) ends with a change from the left to the right lane. A is the distance at which the vehicle changes lanes, and l is the vehicle's length.

Vertical-edge and symmetry discriminating analysis

After identifying candidate edges representing the target vehicle's limits, the system computes a symmetry map3 of the ROI to enhance the objects with strong color-symmetry characteristics. It computes these characteristics using pixel intensity to measure the match between the two halves of an image region around a vertical axis. It then considers the vertical edges of paired ROIs with high symmetry measures (rejecting uniform areas). It does this only for pairs representing possible vehicle contours, disregarding any combinations that lead to unrealistic vehicle shapes.
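A minimal sketch of such a symmetry measure, assuming a grayscale ROI and a mean-absolute-difference match between mirrored halves (the article doesn't specify the exact metric, so this is illustrative):

    import numpy as np

    def vertical_symmetry(gray, axis_col, half_width):
        # Score in [0, 1] for how well the region around column axis_col of a
        # grayscale image matches its mirror image; uniform areas score low.
        w = min(half_width, axis_col, gray.shape[1] - axis_col - 1)
        if w <= 0:
            return 0.0
        left = gray[:, axis_col - w:axis_col].astype(float)
        right = np.fliplr(gray[:, axis_col + 1:axis_col + 1 + w]).astype(float)
        mismatch = np.abs(left - right).mean() / 255.0   # 0 = perfect mirror
        contrast = min(1.0, left.std() / 128.0)          # rejects uniform areas
        return (1.0 - mismatch) * contrast

Sweeping axis_col across the ROI yields a symmetry map whose peaks mark candidate vehicle axes.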

Temporal consistency

In the real world, using only spatial features to detect obstacles leads to sporadic, incorrect detections due to noise. We therefore use a temporal-validation filter to remove inconsistent objects from the scene.4 That is, the system must detect any spatially interesting object in several consecutive image iterations to consider that object a real vehicle; it discards all other objects.

We use the value t = 0.5 s to ensure that a vehicle appears in a consistent time sequence. A major challenge of temporal-spatial validation is for the system to identify the same vehicle's appearance in two consecutive frames. To this end, our system uses the object's (x, y) position in correlative frames. That is, it can use the position differences to describe the vehicle's evolution in the image plane. At time instant t0, the system annotates each target object's (x, y) position in a dynamic list and starts a time count to track all candidate vehicles' temporal consistency. At time t0 + 1, it repeats the process using the same spatial-validation criterion. We increase the time count only for those objects whose distance from some previous candidate vehicle is less than dv. Otherwise, we reset the time count. A candidate object is validated as a real vehicle when its time count reaches t = 0.5 s.

Given that the vision algorithm's complete execution time is 100 ms, an empirical value dv = 1 m has proven successful in effectively detecting real vehicles in the scene. Figure 8 shows examples of the original and filtered images along with the ROI symmetry map and the detected vehicle's final position.
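Here's one plausible implementation of the validation filter. The constants (100 ms cycle, t = 0.5 s, dv = 1 m) come from the text; the list-based association and the structure names are our assumptions:

    import math

    FRAME_PERIOD_S = 0.1      # complete vision cycle time, per the text
    VALIDATION_TIME_S = 0.5   # t = 0.5 s
    D_V = 1.0                 # association gate dv, in meters

    def update_candidates(candidates, detections):
        # candidates: list of {'pos': (x, y), 'age': seconds} dicts carried
        # over from the previous frame; detections: (x, y) positions found
        # in the current frame. Returns (surviving candidates, positions
        # validated as real vehicles).
        surviving, validated = [], []
        for x, y in detections:
            match = next((c for c in candidates
                          if math.dist((x, y), c['pos']) < D_V), None)
            if match:
                candidates.remove(match)
            # Increase the time count on a match; reset it otherwise.
            age = match['age'] + FRAME_PERIOD_S if match else 0.0
            if age >= VALIDATION_TIME_S:
                validated.append((x, y))   # handed over to the tracking stage
            else:
                surviving.append({'pos': (x, y), 'age': age})
        return surviving, validated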

Vehicle tracking

We track the detected vehicle's position using position measurement and estimation. We use the detected vehicle's ROI image as a template to detect position updates in the next image using a best-fit correlation. We then use the vehicle's (x, y) location in data association for position validation. Basically, we want to determine whether any object in the current frame matches the vehicle being tracked. To do this, we specify a limited search area around the vehicle position, leading to fast, efficient detection. We also establish a minimum correlation value and template size to end the tracking process if the system obtains poor correlations or if the vehicle moves too far away or leaves the scene.

Next, we filter the vehicle position measurements using a recursive least-squares estimator with exponential decay.5 To handle partial occlusions, the system keeps the previously estimated vehicle position for five consecutive iterations in which no position is validated before considering the vehicle track lost. Given a loss, the system stops vehicle tracking and restarts the vehicle detection stage. Figure 9 illustrates our algorithm, showing how the system tracked the lead vehicle in real traffic situations.

Figure 9. Vehicle tracking in two real-world traffic situations.
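For a near-constant-position model, a recursive least-squares estimator with exponential decay reduces to an exponentially weighted average per coordinate. A sketch, with the forgetting factor and the five-frame coasting policy wired in (the factor's value is an assumption):

    class PositionTracker:
        # Recursive estimate of the tracked vehicle's (x, y) with exponential
        # forgetting, coasting through up to five missed frames.
        MAX_MISSES = 5

        def __init__(self, x, y, forgetting=0.7):
            self.est = (x, y)
            self.lam = forgetting   # weight kept by the old estimate
            self.misses = 0

        def update(self, measurement):
            # measurement: (x, y) from the template correlation, or None when
            # the correlation is too poor (for example, a partial occlusion).
            if measurement is None:
                self.misses += 1
                return self.misses <= self.MAX_MISSES   # False: track lost
            self.misses = 0
            self.est = tuple(self.lam * e + (1.0 - self.lam) * m
                             for e, m in zip(self.est, measurement))
            return True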

Adaptive navigation

After detecting the lead vehicle's position, we must ensure safe navigation in ACC mode if the lead vehicle suddenly brakes within the safety-gap limits. This event could easily lead to a crash unless the host vehicle rapidly detects the braking situation and brakes hard. To ensure this, the system must detect the lead vehicle's brake-light activation, which clearly indicates braking.

A vehicle's brake-light position varies depending on its model and manufacturer.


Figure 8. Two vehicle detection examples: (a) the original image, (b) the image with region-of-interest edge enhancement, (c) a vertical symmetry map, and (d) the detected vehicle's position.

So, the system must carry out a detailed search to accurately locate these lights inside the vehicle's ROI. We do have some a priori information to ease the search: brake indicators are typically two red lights symmetrically located near the vehicle's rear left and right sides. Once the system locates these lights, it must detect sudden brake-light activation; it does this by continuously monitoring the lights' luminance. In case of sudden activation, the system raises an alarm to the vehicle navigator to provoke emergency braking.

Figure 10 shows an example of sudden-braking detection. Brake lights are a redundant safety feature: if they're activated, a braking procedure has already started. Fortunately, our system continuously computes the distance to the lead vehicle. If this distance is too short, it automatically stops the vehicle.

Figure 10. Sudden-braking detection. Once the system locates the brake lights, it continuously monitors their luminance to detect brake-light activation.
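The activation test itself can be as simple as thresholding the frame-to-frame luminance jump in both located light regions; the threshold value below is an assumption, not a published parameter:

    LUMINANCE_JUMP = 40.0   # threshold on the mean-luminance step (assumed)

    def brake_lights_activated(prev_lum, curr_lum):
        # prev_lum, curr_lum: mean luminance of the two located brake-light
        # regions in consecutive frames. A sudden simultaneous rise in both
        # regions is read as brake-light activation.
        return all(c - p > LUMINANCE_JUMP for p, c in zip(prev_lum, curr_lum))

On activation, the navigator would raise the emergency-braking alarm the text describes.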

We carried out all of our experiments with real vehicles on real roads, albeit within a private circuit. The results show that the fuzzy controllers closely mimic human driving behavior in driving and route tracking, as well as in more complex, multiple-vehicle maneuvers, such as ACC or overtaking. In the near future, we're planning to run new experiments involving three automatic driving cars in more complex situations, such as intersections or roundabouts.

Fuzzy control’s flexibility let us integratea host of sensorial information to achieve our

results. Also, using vision for vehicle and ob-stacle detection lets the host vehicle react toreal traffic conditions, and has proven a cru-cial complement to the GPS-based naviga-tion system. To improve and further this work,we’re collaborating with other European insti-tutions specializing in autonomous vehicledevelopment under the UE Contract Cyber-Cars-2. Through this collaboration, we planto perform a cooperative driving involvingmore than six vehicles, adding new sensorsfor pedestrian detection, traffic-sign detection,and infrastructure monitoring. We’ll also inte-grate new wireless communication systemsthat include vehicle-to-vehicle, vehicle-to-infrastructure, and in-vehicle informationtransmission. Finally, we’re planning to useGalileo and GPS-2, next-generation GPSsystems that address some existing GPSpositioning problems and improve locationaccuracy.

References

1. M.A. Sotelo et al., "Vehicle Fuzzy Driving Based on DGPS and Vision," Proc. 9th Int'l Fuzzy Systems Assoc. World Congress, Springer, 2001, pp. 1472–1477.
2. R. García et al., "Frontal and Lateral Control for Unmanned Vehicles in Urban Tracks," Proc. IEEE Intelligent Vehicle Symp. (IV 2002), vol. 2, IEEE Press, 2002, pp. 583–588.
3. A. Broggi et al., "The Argo Autonomous Vehicle's Vision and Control Systems," Int'l J. Intelligent Control and Systems, vol. 3, no. 4, 2000, pp. 409–441.
4. M.A. Sotelo, Global Navigation System Applied to the Guidance of a Terrestrial Autonomous Vehicle in Partially Known Outdoor Environments, doctoral dissertation, Univ. of Alcalá, 2001.
5. H. Schneiderman and M. Nashman, "A Discriminating Feature Tracker for Vision-Based Autonomous Driving," IEEE Trans. Robotics and Automation, vol. 10, no. 6, 1994, pp. 769–775.


The Authors

José E. Naranjo is a researcher in the Industrial Computer Science Department at the Instituto de Automática Industrial in Madrid. His research interests include fuzzy logic control and intelligent transportation systems. He received his PhD in computer science from the Polytechnic University of Madrid. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Miguel A. Sotelo is an associate professor in the University of Alcalá's Department of Electronics. His research interests include real-time computer vision and control systems for autonomous and assisted intelligent road vehicles. He received Spain's Best Research Award in Automotive and Vehicle Applications in 2002, the 3M Foundation awards in eSafety in 2003 and 2004, and the University of Alcalá's Best Young Researcher Award in 2004. He received his PhD in electrical engineering from the University of Alcalá. He's a member of the IEEE and the IEEE ITS Society. Contact him at Universidad de Alcalá de Henares, Departamento de Electrónica, Madrid, Spain; [email protected].

Carlos González is a software specialist in automation projects at the Instituto de Automática Industrial. He received his PhD in physics from Madrid University and is an IEEE member. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Ricardo García is a research professor and founder of the Consejo Superior de Investigaciones Científicas' Instituto de Automática Industrial, where he works in intelligent robotics. His AUTOPIA project earned the 2002 Barreiros Research on Automotive Field Prize. He received his PhD in physics from Bilbao University. Contact him at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].

Teresa de Pedro is a researcher at the Consejo Superior de Investigaciones Científicas' Instituto de Automática Industrial, where she works on AI as applied to automation and leads the ISAAC (Integration of Sensors to Active Aided Conduction) project. Her research interests include fuzzy models for unmanned vehicles. She received her PhD in physics from the Universidad Complutense of Madrid. Contact her at Instituto de Automática Industrial (CSIC), Ctra. Campo Real Km. 0,200 La Poveda, Arganda del Rey, Madrid, Spain; [email protected].
