

Université Libre de Bruxelles
Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle

Ant Colony Optimization

Artificial Ants as a Computational Intelligence Technique

Marco Dorigo, Mauro Birattari, and Thomas Stützle

IRIDIA – Technical Report Series

Technical Report No.

TR/IRIDIA/2006-023

September 2006


IRIDIA – Technical Report Series
ISSN 1781-3794

Published by: IRIDIA, Institut de Recherches Interdisciplinaires
et de Développements en Intelligence Artificielle
UNIVERSITÉ LIBRE DE BRUXELLES
Av. F. D. Roosevelt 50, CP 194/6
1050 Bruxelles, Belgium

Technical report number TR/IRIDIA/2006-023

The information provided is the sole responsibility of the authors and does not necessarily reflect the opinion of the members of IRIDIA. The authors take full responsibility for any copyright breaches that may result from publication of this paper in the IRIDIA – Technical Report Series. IRIDIA is not responsible for any use that might be made of data appearing in this publication.



Ant Colony Optimization
Artificial Ants as a Computational Intelligence Technique

Marco Dorigo, Mauro Birattari, and Thomas Stützle

Université Libre de Bruxelles, BELGIUM

Swarm intelligence is a relatively new approach to problem solving that takes inspiration from the social behaviors of insects and of other animals. In particular, ants have inspired a number of methods and techniques among which the most studied and the most successful is the general purpose optimization technique known as ant colony optimization.

Ant colony optimization (ACO) takes inspiration from the foraging behavior of some ant species. These ants deposit pheromone on the ground in order to mark some favorable path that should be followed by other members of the colony. Ant colony optimization exploits a similar mechanism for solving optimization problems.

Since the early nineties, when the first ant colony optimization algorithm was proposed, ACO has attracted the attention of an increasing number of researchers, and a relatively large number of successful applications are now available. Moreover, a substantial corpus of theoretical results is becoming available that provides useful guidelines to researchers and practitioners in further applications of ACO.

The goal of this article is to introduce ant colony optimization and to survey its most notable applications. Section I provides some background information on the foraging behavior of ants, which is the biological source of inspiration of ant colony optimization. Section II describes ant colony optimization and its main variants. Section III surveys the most notable theoretical results concerning ACO, and Section IV illustrates some of its most successful applications. Section V highlights some current hot research topics, and Section VI provides an overview of some other algorithms that, although not directly related to ACO, are nonetheless inspired by the behavior of ants. Section VII concludes the article.

I. BIOLOGICAL INSPIRATION

In the forties and fifties of the twentieth century, the French entomologist Pierre-Paul Grassé [1] observed that some species of termites react to what he called "significant stimuli". He observed that the effects of these reactions can act as new significant stimuli for both the insect that produced them and for the other insects in the colony. Grassé used the term stigmergy [2] to describe this particular type of communication in which the "workers are stimulated by the performance they have achieved".

The two main characteristics of stigmergy that differentiate it from other forms of communication are the following.

• Stigmergy is an indirect, non-symbolic form of communication mediated by the environment: insects exchange information by modifying their environment; and

• Stigmergic information is local: it can only be accessed by those insects that visit the locus in which it was released (or its immediate neighborhood).

Fig. 1. Experimental setup for the double bridge experiment. (a) Branches have equal lengths [3]. (b) Branches have different lengths [4].

Examples of stigmergy can be observed in colonies of ants. In many ant species, ants walking to and from a food source deposit on the ground a substance called pheromone. Other ants perceive the presence of pheromone and tend to follow paths where pheromone concentration is higher. Through this mechanism, ants are able to transport food to their nest in a remarkably effective way.

Along their path between food source and nest, Argentine ants deposit pheromone.

Deneubourg et al. [3] thoroughly investigated the pheromone laying and following behavior of ants. In an experiment known as the "double bridge experiment", the nest of a colony of Argentine ants was connected to a food source by two bridges of equal lengths. In such a setting, ants start to explore the surroundings of the nest and eventually reach the food source. Along their path between food source and nest, Argentine ants deposit pheromone. Initially, each ant randomly chooses one of the two bridges. However, due to random fluctuations, after some time one of the two bridges presents a higher concentration of pheromone than the other and, therefore, attracts more ants. This brings a further amount of pheromone on that bridge, making it more attractive, with the result that after some time the whole colony converges toward the use of the same bridge.¹

This colony-level behavior, based on autocatalysis, that is, on the exploitation of positive feedback, can be exploited by ants to find the shortest path between a food source and their nest.

¹Deneubourg and co-workers repeated the experiment a number of times and observed that each of the two bridges is used in about 50% of the cases.



TABLE I
A NON-EXHAUSTIVE LIST OF SUCCESSFUL ANT COLONY OPTIMIZATION ALGORITHMS (IN CHRONOLOGICAL ORDER).

Algorithm            Authors                  Year   References
Ant System (AS)      Dorigo et al.            1991   [6]–[8]
Elitist AS           Dorigo et al.            1992   [7], [8]
Ant-Q                Gambardella & Dorigo     1995   [9]
Ant Colony System    Dorigo & Gambardella     1996   [10]–[12]
MAX–MIN AS           Stützle & Hoos           1996   [13]–[15]
Rank-based AS        Bullnheimer et al.       1997   [16], [17]
ANTS                 Maniezzo                 1999   [18]
BWAS                 Cordón et al.            2000   [19]
Hyper-cube AS        Blum et al.              2001   [20], [21]

Goss et al. [4] considered a variant of the double bridge experiment in which one bridge is significantly longer than the other. In this case, the stochastic fluctuations in the initial choice of a bridge are much reduced and a second mechanism plays an important role: the ants choosing by chance the short bridge are the first to reach the nest. The short bridge receives, therefore, pheromone earlier than the long one and this fact increases the probability that further ants select it rather than the long one. Goss et al. [4] developed a model of the observed behavior: assuming that at a given moment in time m_1 ants have used the first bridge and m_2 the second one, the probability p_1 for an ant to choose the first bridge is:

p_1 = (m_1 + k)^h / [(m_1 + k)^h + (m_2 + k)^h],    (1)

where parameters k and h are to be fitted to the experimental data—obviously, p_2 = 1 − p_1. Monte Carlo simulations showed a very good fit for k ≈ 20 and h ≈ 2 [5].
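As an illustration, the following is a minimal Python sketch (not from the original report) of a Monte Carlo simulation of this choice model; the values k = 20 and h = 2 are the fitted values quoted above, while the number of ants and of repetitions are arbitrary choices made for the example.

```python
import random

def simulate_colony(n_ants=1000, k=20.0, h=2.0):
    """Simulate Equation (1): each ant chooses bridge 1 with probability
    (m1 + k)^h / ((m1 + k)^h + (m2 + k)^h), where m1 and m2 count the ants
    that have already used bridge 1 and bridge 2, respectively."""
    m1 = m2 = 0
    for _ in range(n_ants):
        p1 = (m1 + k) ** h / ((m1 + k) ** h + (m2 + k) ** h)
        if random.random() < p1:
            m1 += 1
        else:
            m2 += 1
    return m1, m2

if __name__ == "__main__":
    random.seed(0)
    runs = 200
    # With equal-length branches the colony converges to one branch at random;
    # over many repetitions each branch is preferred in roughly 50% of the runs.
    bridge1_wins = sum(m1 > m2 for m1, m2 in (simulate_colony() for _ in range(runs)))
    print(f"bridge 1 preferred in {bridge1_wins}/{runs} runs")
```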

II. THE OPTIMIZATION TECHNIQUE

The model proposed by Goss and co-workers for explaining the foraging behavior of ants was the main source of inspiration for the development of ant colony optimization. In ACO, a number of artificial ants build solutions to an optimization problem at hand and exchange information on the quality of these solutions via a communication scheme that is reminiscent of the one adopted by real ants.

In ACO, a number of artificial ants build solutions to an optimization problem and exchange information on their quality via a communication scheme that is reminiscent of the one adopted by real ants.

Different ant colony optimization algorithms have been proposed. The original ant colony optimization algorithm is known as Ant System [6]–[8] and was proposed in the early nineties. Since then, a number of other ACO algorithms have been introduced. See Table I for a (non-exhaustive) list of successful variants. All ant colony optimization algorithms share the same characteristic idea, which is best illustrated through an example of how they can be applied. Section II-A describes in simple terms how a generic ACO algorithm is applied to the well-known traveling salesman problem, and Section II-B gives a more formal description of ACO.

Fig. 2. An ant in city i chooses the next city to visit via a stochastic mechanism: if j has not been previously visited, it can be selected with a probability that is proportional to the pheromone associated with edge (i, j).

A. ACO for the Traveling Salesman Problem

In the traveling salesman problem, a set of cities is given and the distance between each pair of them is known. The goal is to find the shortest tour that allows each city to be visited once and only once. In more formal terms, the goal is to find a Hamiltonian tour of minimal length on a fully connected graph.

In ant colony optimization, the problem is tackled by simulating a number of artificial ants moving on a graph that encodes the problem itself: each vertex represents a city and each edge represents a connection between two cities. A variable called pheromone is associated with each edge and can be read and modified by ants.

Ant colony optimization is an iterative algorithm. At each iteration, a number of artificial ants are considered. Each of them builds a solution by walking from vertex to vertex on the graph with the constraint of not visiting any vertex that she has already visited in her walk. At each step of the solution construction, an ant selects the following vertex to be visited according to a stochastic mechanism that is biased by the pheromone: when in vertex i, the following vertex is selected stochastically among the previously unvisited ones. In particular, if j has not been previously visited, it can be selected with a probability that is proportional to the pheromone associated with edge (i, j).

At the end of an iteration, on the basis of the quality of the solutions constructed by the ants, the pheromone values are modified in order to bias ants in future iterations to construct solutions similar to the best ones previously constructed.
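To make this concrete, here is a minimal sketch (not part of the original report) of how a single artificial ant could build a tour under these rules; the data structures and the uniform initial pheromone value are illustrative assumptions.

```python
import random

def build_tour(n_cities, pheromone):
    """One ant builds a tour: starting from a random city, it repeatedly moves
    to an unvisited city chosen with probability proportional to the pheromone
    on the connecting edge."""
    start = random.randrange(n_cities)
    tour, visited = [start], {start}
    while len(tour) < n_cities:
        i = tour[-1]
        candidates = [j for j in range(n_cities) if j not in visited]
        weights = [pheromone[i][j] for j in candidates]
        j = random.choices(candidates, weights=weights)[0]
        tour.append(j)
        visited.add(j)
    return tour

# Illustrative use: 5 cities, all pheromone values initialized to 1.0.
n = 5
tau = [[1.0] * n for _ in range(n)]
print(build_tour(n, tau))
```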

B. The Ant Colony Optimization Metaheuristic

Ant colony optimization (ACO) has been formalized into a metaheuristic for combinatorial optimization problems by Dorigo and co-workers [22], [23]. A metaheuristic is a set of algorithmic concepts that can be used to define heuristic methods applicable to a wide set of different problems.



In other words, a metaheuristic is a general-purpose algorithmic framework that can be applied to different optimization problems with relatively few modifications. Examples of metaheuristics include simulated annealing [24], [25], tabu search [26]–[28], iterated local search [29], evolutionary computation [30]–[33], and ant colony optimization [8], [22], [23], [34].

In order to apply ACO to a given combinatorial optimization problem, an adequate model is needed:

A combinatorial optimization problem

A model P = (S, Ω, f) of a combinatorial optimization problem consists of:
• a search space S defined over a finite set of discrete decision variables X_i, i = 1, ..., n;
• a set Ω of constraints among the variables; and
• an objective function f : S → R⁺₀ to be minimized.²

The generic variable X_i takes values in D_i = {v_i^1, ..., v_i^|D_i|}. A feasible solution s ∈ S is a complete assignment of values to variables that satisfies all constraints in Ω. A solution s* ∈ S is called a global optimum if and only if: f(s*) ≤ f(s) ∀s ∈ S.

The model of a combinatorial optimization problem is used to define the pheromone model of ACO. A pheromone value is associated with each possible solution component; that is, with each possible assignment of a value to a variable. Formally, the pheromone value τ_ij is associated with the solution component c_ij, which consists in the assignment X_i = v_i^j. The set of all possible solution components is denoted by C.
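As a small illustration (an assumption of this text, not taken from the report), the pheromone model can be held in a mapping from solution components, i.e. (variable, value) pairs, to pheromone values:

```python
# Hypothetical pheromone model for a problem with variables X_1..X_n, each
# taking values from its domain D_i: one pheromone value tau[(i, j)] per
# solution component c_ij, i.e. per assignment X_i = v_i^j.
domains = {1: ["a", "b"], 2: ["a", "b", "c"]}   # illustrative domains D_i
tau = {(i, j): 1.0                              # uniform initial pheromone
       for i, values in domains.items()
       for j, _ in enumerate(values)}
```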

In ACO, an artificial ant builds a solution by traversing the fully connected construction graph G_C(V, E), where V is a set of vertices and E is a set of edges. This graph can be obtained from the set of solution components C in two ways: components may be represented either by vertices or by edges. Artificial ants move from vertex to vertex along the edges of the graph, incrementally building a partial solution. Additionally, ants deposit a certain amount of pheromone on the components; that is, either on the vertices or on the edges that they traverse. The amount Δτ of pheromone deposited may depend on the quality of the solution found. Subsequent ants use the pheromone information as a guide toward promising regions of the search space.

In the traveling salesman problem, a solution can be represented through a set of n variables, where n is the number of cities. Each of these variables is associated with a city. The variable X_i indicates the city to be visited after city i. Here, solution components are pairs of cities to be visited one after the other, in the given order: the solution component c_ij = (i, j) indicates that the solution under analysis prescribes that city j should be visited immediately after city i. In this case, the construction graph is a graph in which the vertices are the cities of the original traveling salesman problem, and the edges are solution components. As a consequence, ants deposit pheromone on the edges of the construction graph.

It should be noticed that the construction graph could be obtained by representing solution components with vertices on which pheromone is deposited. Although this second way of obtaining a construction graph seems less natural for the traveling salesman problem, it is nonetheless correct. The two ways of defining the construction graph for a four-city traveling salesman problem are represented in Figure 3.

²Any maximization problem can be trivially reduced to a minimization problem: maximizing a given function g is clearly equivalent to minimizing f = −g.

Fig. 3. Example of possible construction graphs for a four-city TSP where components are associated with (a) the edges or with (b) the vertices of the graph.

Algorithm 1 The Ant Colony Optimization Metaheuristic
Set parameters, initialize pheromone trails
while termination condition not met do
    ConstructAntSolutions
    ApplyLocalSearch (optional)
    UpdatePheromones
end while

The ACO metaheuristic is shown in Algorithm 1. After initialization, the metaheuristic iterates over three phases: at each iteration, a number of solutions are constructed by the ants; these solutions are then improved through a local search (this step is optional), and finally the pheromone is updated. The following is a more detailed description of the three phases:

ConstructAntSolutions: A set of m artificial ants constructs solutions from elements of a finite set of available solution components C = {c_ij}, i = 1, ..., n, j = 1, ..., |D_i|. A solution construction starts from an empty partial solution s^p = ∅. At each construction step, the partial solution s^p is extended by adding a feasible solution component from the set N(s^p) ⊆ C, which is defined as the set of components that can be added to the current partial solution s^p without violating any of the constraints in Ω. The process of constructing solutions can be regarded as a walk on the construction graph G_C = (V, E).

The choice of a solution component from N(s^p) is guided by a stochastic mechanism, which is biased by the pheromone associated with each of the elements of N(s^p). The rule for the stochastic choice of solution components varies across different ACO algorithms but, in all of them, it is inspired by the model of the behavior of real ants given in Equation 1.

ApplyLocalSearch: Once solutions have been constructed, and before updating the pheromone, it is common to improve the solutions obtained by the ants through a local search. This phase, which is highly problem-specific, is optional although it is usually included in state-of-the-art ACO algorithms.

UpdatePheromones: The aim of the pheromone update is to increase the pheromone values associated with good or promising solutions, and to decrease those that are associated with bad ones. Usually, this is achieved (i) by decreasing all the pheromone values through pheromone evaporation, and (ii) by increasing the pheromone levels associated with a chosen set of good solutions.
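The three phases map directly onto code. The following is a minimal Python skeleton of Algorithm 1 (an illustration, not the report's reference implementation); the construct, local_search, update, and cost functions are hypothetical placeholders to be supplied for a concrete problem.

```python
def aco(construct, local_search, update, cost, n_ants, n_iterations, tau):
    """Generic ACO loop following Algorithm 1 (sketch).

    construct(tau)         -> build one candidate solution guided by pheromone tau
    local_search(s)        -> optionally improve a solution (may return s unchanged)
    update(tau, solutions) -> evaporate pheromone and reinforce good solutions
    cost(s)                -> objective value to be minimized
    """
    best = None
    for _ in range(n_iterations):
        solutions = [construct(tau) for _ in range(n_ants)]   # ConstructAntSolutions
        solutions = [local_search(s) for s in solutions]      # ApplyLocalSearch (optional)
        update(tau, solutions)                                # UpdatePheromones
        iteration_best = min(solutions, key=cost)
        if best is None or cost(iteration_best) < cost(best):
            best = iteration_best
    return best
```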

The original ant colony optimization algorithm is known as Ant System and was proposed in the early nineties. Since then, several other ACO algorithms have been proposed.

C. Main ACO algorithms

Several ACO algorithms have been proposed in the literature. Here we present the original Ant System, and the two most successful variants: MAX–MIN Ant System and Ant Colony System. In order to illustrate the differences between these three algorithms, we use the traveling salesman problem as a concrete example.

1) Ant System (AS): It is the first ACO algorithm proposed in the literature [6]–[8]. Its main characteristic is that, at each iteration, the pheromone values are updated by all the m ants that have built a solution in the iteration itself. The pheromone τ_ij, associated with the edge joining cities i and j, is updated as follows:

τ_ij ← (1 − ρ) · τ_ij + Σ_{k=1}^{m} Δτ_ij^k,    (2)

where ρ is the evaporation rate, m is the number of ants, and Δτ_ij^k is the quantity of pheromone laid on edge (i, j) by ant k:

Δτ_ij^k = Q/L_k   if ant k used edge (i, j) in its tour,
          0       otherwise,    (3)

where Q is a constant, and L_k is the length of the tour constructed by ant k.

In the construction of a solution, ants select the following city to be visited through a stochastic mechanism. When ant k is in city i and has so far constructed the partial solution s^p, the probability of going to city j is given by:

p_ij^k = (τ_ij^α · η_ij^β) / Σ_{c_il ∈ N(s^p)} (τ_il^α · η_il^β)   if c_ij ∈ N(s^p),
p_ij^k = 0                                                          otherwise,    (4)

where N(s^p) is the set of feasible components; that is, edges (i, l) where l is a city not yet visited by the ant k. The parameters α and β control the relative importance of the pheromone versus the heuristic information η_ij, which is given by:

η_ij = 1/d_ij,    (5)

where d_ij is the distance between cities i and j.
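To make Equations (2)–(5) concrete, the following is a minimal sketch of the AS transition probabilities and pheromone update for a symmetric TSP (an illustration; the parameter values α = 1, β = 2, ρ = 0.5, and Q = 1 are assumptions of this sketch, not prescriptions of the report).

```python
def as_transition_probs(i, unvisited, tau, eta, alpha=1.0, beta=2.0):
    """Equation (4): probability of moving from city i to each unvisited city j,
    with eta[i][j] = 1 / d_ij as in Equation (5)."""
    weights = [tau[i][j] ** alpha * eta[i][j] ** beta for j in unvisited]
    total = sum(weights)
    return [w / total for w in weights]

def as_pheromone_update(tau, tours, lengths, rho=0.5, Q=1.0):
    """Equations (2)-(3): evaporate all trails, then each ant k deposits
    Q / L_k on every edge of its tour."""
    n = len(tau)
    for i in range(n):
        for j in range(n):
            tau[i][j] *= (1.0 - rho)
    for tour, L in zip(tours, lengths):
        for i, j in zip(tour, tour[1:] + tour[:1]):   # closed tour
            tau[i][j] += Q / L
            tau[j][i] += Q / L                        # symmetric instance
```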

2) MAX–MIN Ant System (MMAS): This algorithm [15] is an improvement over the original Ant System. Its characterizing elements are that only the best ant updates the pheromone trails and that the value of the pheromone is bounded. The pheromone update is implemented as follows:

τ_ij ← [(1 − ρ) · τ_ij + Δτ_ij^best]_{τ_min}^{τ_max},    (6)

where τ_max and τ_min are respectively the upper and lower bounds imposed on the pheromone; the operator [x]^a_b is defined as:

[x]^a_b = a  if x > a,
          b  if x < b,
          x  otherwise;    (7)

and Δτ_ij^best is:

Δτ_ij^best = 1/L_best  if (i, j) belongs to the best tour,
             0         otherwise.    (8)

L_best is the length of the tour of the best ant. This may be (subject to the algorithm designer's decision) either the best tour found in the current iteration—iteration-best, L_ib—or the best solution found since the start of the algorithm—best-so-far, L_bs—or a combination of both.

Concerning the lower and upper bounds on the pheromone values, τ_min and τ_max, they are typically obtained empirically and tuned on the specific problem at hand [35]. Nonetheless, some guidelines have been provided for defining τ_min and τ_max on the basis of analytical considerations [15].
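A minimal sketch of the MMAS update of Equations (6)–(8) could look as follows (illustrative only; the evaporation rate and the bounds are assumed values, and whether L_best refers to the iteration-best or the best-so-far tour is left to the caller).

```python
def mmas_pheromone_update(tau, best_tour, best_length,
                          rho=0.02, tau_min=0.01, tau_max=5.0):
    """Equations (6)-(8): only the best ant deposits pheromone, and every
    trail is clamped to the interval [tau_min, tau_max]."""
    n = len(tau)
    best_edges = set(zip(best_tour, best_tour[1:] + best_tour[:1]))
    for i in range(n):
        for j in range(n):
            on_best = (i, j) in best_edges or (j, i) in best_edges
            deposit = 1.0 / best_length if on_best else 0.0
            value = (1.0 - rho) * tau[i][j] + deposit
            tau[i][j] = min(tau_max, max(tau_min, value))   # operator [x]^{tau_max}_{tau_min}
```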

3) Ant Colony System (ACS): The most interesting contribution of ACS [10]–[12] is the introduction of a local pheromone update in addition to the pheromone update performed at the end of the construction process (called offline pheromone update).

The local pheromone update is performed by all the ants after each construction step. Each ant applies it only to the last edge traversed:

τ_ij = (1 − φ) · τ_ij + φ · τ_0,    (9)

where φ ∈ (0, 1] is the pheromone decay coefficient, and τ_0 is the initial value of the pheromone.

The main goal of the local update is to diversify the search performed by subsequent ants during an iteration: by decreasing the pheromone concentration on the traversed edges, ants encourage subsequent ants to choose other edges and, hence, to produce different solutions. This makes it less likely that several ants produce identical solutions during one iteration.

The offline pheromone update, similarly to MMAS, is applied at the end of each iteration by only one ant, which can be either the iteration-best or the best-so-far. However, the update formula is slightly different:

τ_ij ← (1 − ρ) · τ_ij + ρ · Δτ_ij   if (i, j) belongs to the best tour,
τ_ij ← τ_ij                          otherwise.    (10)

As in MMAS, Δτ_ij = 1/L_best, where L_best can be either L_ib or L_bs.



Another important difference between ACS and AS is in the decision rule used by the ants during the construction process. In ACS, the so-called pseudorandom proportional rule is used: the probability for an ant to move from city i to city j depends on a random variable q uniformly distributed over [0, 1], and a parameter q_0; if q ≤ q_0, then j = argmax_{c_il ∈ N(s^p)} {τ_il · η_il^β}, otherwise Equation 4 is used.³
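Putting the local update of Equation (9) and the pseudorandom proportional rule together, an ACS construction step could be sketched as follows (illustrative; the values of q_0, β, φ, and τ_0 are assumptions of this sketch).

```python
import random

def acs_next_city(i, unvisited, tau, eta, beta=2.0, q0=0.9):
    """Pseudorandom proportional rule: with probability q0 pick the best edge
    greedily, otherwise fall back to the probabilistic rule of Equation (4)
    with alpha = 1."""
    if random.random() <= q0:
        return max(unvisited, key=lambda j: tau[i][j] * eta[i][j] ** beta)
    weights = [tau[i][j] * eta[i][j] ** beta for j in unvisited]
    return random.choices(list(unvisited), weights=weights)[0]

def acs_local_update(tau, i, j, phi=0.1, tau0=0.001):
    """Equation (9): applied by each ant to the last edge it traversed."""
    tau[i][j] = (1.0 - phi) * tau[i][j] + phi * tau0
    tau[j][i] = tau[i][j]   # symmetric instance
```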

III. THEORETICAL RESULTS

The initial work on ACO has been driven by experimental work, with the aim of showing that the ideas underlying this technique can lead to successful algorithms. After this initial phase, researchers tried to deepen their understanding of the technique by building theoretical foundations.

Typically, the first question considered when dealing with metaheuristics concerns convergence: will a given ACO algorithm ever find an optimal solution? The first convergence proofs were presented by Gutjahr for an ACO algorithm called graph-based ant system (GBAS). Gutjahr proved convergence with probability 1 − ε to the optimal solution [36], and more in general to any optimal solution [37]. GBAS is a rather peculiar ACO algorithm and the above-mentioned results do not directly extend to other ACO algorithms. In particular, they do not extend to ACO algorithms that are commonly adopted in applications. Nonetheless, for two of the top performing ACO algorithms, ACS and MMAS, convergence has been proved [34], [38]. Unfortunately, all these convergence results do not allow one to predict how quickly optimal solutions can be found. Only recently, Gutjahr presented an analytical framework that allows theoretical predictions about the speed of convergence of specific ACO algorithms to be derived [39].

Other research in ACO theory has focused on establishing formal links of ACO to other techniques for learning and optimization. One research direction focused on the connection between ACO and the fields of optimal control and reinforcement learning [40], while another aimed at examining the connections between ACO and probabilistic learning algorithms such as stochastic gradient ascent (SGA) [41], and the cross-entropy (CE) method [42]. In particular, Zlochin et al. [42] have proposed a unifying framework for so-called model-based search (MBS) algorithms. Among other advantages, this framework allows a better understanding of ACO and will possibly lead to a cross-fertilization among MBS algorithms.

While convergence proofs give insight into some mathematically relevant properties of algorithms, they usually do not provide guidance to practitioners for the implementation of efficient algorithms. More relevant for practical applications are research efforts that aim at a better understanding of the behavior of ACO algorithms. Blum and Dorigo [43], [44] have shown that ACO algorithms in general suffer from first order deception in the same way as genetic algorithms suffer from deception. They further introduced the concept of second order deception, which occurs, for example, in situations where some solution components on average receive updates from more solutions than others with which they compete [45]. The first to study the behavior of ACO algorithms by analyzing the dynamics of the pheromone model were Merkle and Middendorf [46]. They showed that, in idealized permutation problems, constraints on the feasibility of solutions introduce what they called selection bias in the solution construction process.

³The notation argmax_x f(x) stands for the value of x for which f(·) is maximized. If the maximum is attained for more than one value of x, it is a matter of indifference which one is considered.

Fig. 4. The front page of the official Web site of the ant colony metaheuristic: www.aco-metaheuristic.org

For the best-performing ACO algorithms, convergence to optimal solutions has been proved.

IV. APPLICATIONS OF ANT COLONY OPTIMIZATION

In recent years, the interest of the scientific community in ACO has risen sharply. In fact, several successful applications of ACO to a wide range of different discrete optimization problems are now available. The large majority of these applications are to NP-hard problems; that is, to problems for which the best known algorithms that guarantee to identify an optimal solution have exponential time worst case complexity. The use of such algorithms is often infeasible in practice, and ACO algorithms can be useful for quickly finding high-quality solutions. Other popular applications are to dynamic shortest path problems arising in telecommunication networks. The number of successful applications to academic problems has motivated people to adopt ACO for the solution of industrial problems, proving that this computational intelligence technique is also useful in real-world applications.



Resources on Ant Colony Optimization

• Web pages:
  – www.aco-metaheuristic.org: The official Web site of the ant colony metaheuristic.
  – www.metaheuristics.org: Web site of the "Metaheuristics Network" project. This European Union funded project is dedicated to the theoretical analysis and experimental comparison of metaheuristics.
• Books:
  – M. Dorigo and T. Stützle, Ant Colony Optimization. MIT Press, Cambridge, MA, 2004.
  – E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, 1999.
• Scientific journals: Scientific articles on ACO are published in many journals, including "IEEE Transactions on Systems, Man, and Cybernetics", "IEEE Transactions on Evolutionary Computation", "Artificial Life", "INFORMS Journal on Computing", "Journal of Heuristics", "Computers and Operations Research", "Computational Optimization and Applications", and "European Journal of Operational Research". The first issue of the new "Journal of Swarm Intelligence" is forecast for late 2007.
• Conferences:
  – The biannual series of workshops "ANTS – From Ant Colonies to Artificial Ants: A Series of International Workshops on Ant Algorithms" (iridia.ulb.ac.be/~ants), held for the first time in 1998, is the oldest conference in the ACO and swarm intelligence fields.
  – The series of conferences "IEEE Swarm Intelligence" (http://www.computelligence.org/sis/) focuses on swarm intelligence techniques and also ant colony optimization.
  – Articles on ACO are regularly presented at other conferences such as the "IEEE Congress on Evolutionary Computation (CEC)", the "Genetic and Evolutionary Computation Conference (GECCO)", "Parallel Problem Solving from Nature (PPSN)", "INFORMS" meetings, "European Chapter on Combinatorial Optimization (ECCO)" meetings, the "Metaheuristics International Conference (MIC)", and many others.
• Software: Software, distributed under the GNU license, is available at www.aco-metaheuristic.org/aco-code/
• Popular press: ACO is often covered by the popular press. Pointers to popularization articles can be found at www.aco-metaheuristic.org/aco-in-the-press.html
• Mailing list: A moderated mailing list dedicated to the exchange of information related to ACO is accessible at www.aco-metaheuristic.org/mailing-list.html

A. Applications to NP-hard problems

The usual approach to show the usefulness of a new metaheuristic technique is to apply it to a number of different problems and to compare its performance with that of already available techniques. In the case of ACO, this research initially consisted of testing the algorithms on the TSP. Subsequently, other NP-hard problems were also considered. So far, ACO has been tested on probably more than one hundred different NP-hard problems. Many of the tackled problems can be considered as falling into one of the following categories: routing problems as they arise, for example, in the distribution of goods; assignment problems, where a set of items (objects, activities, etc.) has to be assigned to a given number of resources (locations, agents, etc.) subject to some constraints; scheduling problems, which–in the widest sense–are concerned with the allocation of scarce resources to tasks over time; and subset problems, where a solution to a problem is considered to be a selection of a subset of available items. In addition, ACO has been successfully applied to other problems emerging in fields such as machine learning and bioinformatics.

Common to many of these applications is that the best-performing ACO algorithms make intensive use of the optional local search phase of the ACO metaheuristic (see Algorithm 1). This is typically very effective since, on the one hand, the solutions constructed by ants can often be improved by an adequate local search algorithm; on the other hand, generating proper initial solutions for local search algorithms is a difficult task and many experimental results show that the probabilistic, adaptive solution generation process of ant colony optimization is particularly suited to this task.

In Table II, we report some of the most noteworthy applications of ACO algorithms; for a detailed description of these and several other applications, we refer the reader to [34]. The overall result that emerges from these applications is that, for many problems, ACO algorithms produce results that are very close to those of the best-performing algorithms, while on some problems they are the state-of-the-art. These latter problems include the sequential ordering problem, open-shop scheduling problems, some variants of vehicle routing problems, classification problems, and protein–ligand docking.

B. Applications to telecommunication networks

ACO algorithms have proven to be a very effective approach for routing problems in telecommunication networks where the properties of the system, such as the cost of using links or the availability of nodes, vary over time. ACO algorithms were first applied to routing problems in circuit-switched networks (like telephone networks) [69] and then in packet-switched networks (like local area networks or the Internet) [70]. Following the proof of concept provided by Schoonderwoerd et al., ant-inspired strategies for network communication improved to the point of being state-of-the-art in wired networks. A well-known example is AntNet [70]. AntNet has been extensively tested, in simulation, on different networks and under different traffic patterns, proving to be highly adaptive and robust. A comparison with state-of-the-art routing algorithms has shown that, in most of the considered situations, AntNet outperforms its competitors.

Ant-based algorithms have given rise to several other routing algorithms, enhancing performance in a variety of wired network scenarios; see [71], [72] for a survey.



TABLE II
A NON-EXHAUSTIVE LIST OF APPLICATIONS OF ACO ALGORITHMS GROUPED BY PROBLEM TYPE.

Problem type   Problem name               Authors                     Year         References
Routing        Traveling salesman         Dorigo et al.               1991, 1996   [6], [8]
               Traveling salesman         Dorigo & Gambardella        1997         [11]
               Traveling salesman         Stützle & Hoos              1997, 2000   [15], [47]
               Vehicle routing            Gambardella et al.          1999         [48]
               Vehicle routing            Reimann et al.              2004         [49]
               Sequential ordering        Gambardella & Dorigo        2000         [50]
Assignment     Quadratic assignment       Stützle & Hoos              2000         [15]
               Quadratic assignment       Maniezzo                    1999         [18]
               Course timetabling         Socha et al.                2002, 2003   [35], [51]
               Graph coloring             Costa & Hertz               1997         [52]
Scheduling     Project scheduling         Merkle et al.               2002         [53]
               Total weighted tardiness   den Besten et al.           2000         [54]
               Total weighted tardiness   Merkle & Middendorf         2000         [55]
               Open shop                  Blum                        2005         [56]
Subset         Set covering               Lessing et al.              2004         [57]
               l-cardinality trees        Blum & Blesa                2005         [58]
               Multiple knapsack          Leguizamón & Michalewicz    1999         [59]
               Maximum clique             Fenet & Solnon              2003         [60]
Other          Constraint satisfaction    Solnon                      2000, 2002   [61], [62]
               Classification rules       Parpinelli et al.           2002         [63]
               Classification rules       Martens et al.              2006         [64]
               Bayesian networks          de Campos et al.            2002         [65], [66]
               Protein folding            Shmygelska & Hoos           2005         [67]
               Docking                    Korb et al.                 2006         [68]

More recently, an ACO algorithm designed for the challenging class of mobile ad hoc networks was shown to be competitive with state-of-the-art routing algorithms [73], [74], while at the same time offering better scalability.

The good results of ACO algorithms on academic problems have made them appealing for applications in industrial settings.

C. Applications to industrial problems

The success on academic problems has raised the attention of a number of companies that have started to use ACO algorithms for real-world applications. Among the first to exploit algorithms based on the ACO metaheuristic is EuroBios (www.eurobios.com). They have applied ACO to a number of different scheduling problems such as a continuous two-stage flow shop problem with finite reservoirs. The problems modeled included various real-world constraints such as setup times, capacity restrictions, resource compatibilities and maintenance calendars. Another company that has played, and still plays, a very important role in promoting the real-world application of ACO is AntOptima (www.antoptima.com). AntOptima's researchers have developed a set of tools for the solution of vehicle routing problems whose optimization algorithms are based on ACO. Particularly successful products based on these tools are (i) DYVOIL, for the management and optimization of heating oil distribution with a nonhomogeneous fleet of trucks, used for the first time by Pina Petroli in Switzerland, and (ii) AntRoute, for the routing of hundreds of vehicles of companies such as Migros, the main Swiss supermarket chain, or Barilla, the main Italian pasta maker. Still another vehicle routing application was developed by BiosGroup for the French company Air Liquide. Other interesting real-world applications are those by Gravel, Price and Gagné [75], who have applied ACO to an industrial scheduling problem in an aluminum casting center, and by Bautista and Pereira [76], who successfully applied ACO to solve an assembly line balancing problem with a multi-objective function and constraints between tasks for a bike assembly line.

V. CURRENT HOT TOPICS IN ACO

A significant part of research on ACO is still concerned with applications as they have been presented in the previous section. However, increasing attention is and will be given to even more challenging problems that, for example, involve multiple objectives, dynamic modifications of the data, and the stochastic nature of the objective function and of the constraints. Other developments focus on the extension of the applicability of ACO algorithms from discrete to continuous optimization problems and to the study of parallel implementations of ACO algorithms.

A. Dynamic optimization problems

Dynamic problems are characterized by the fact that the search space changes over time. Hence, while searching, the conditions of the search, the definition of the problem instance and, thus, the quality of the solutions already found may change. In such a situation, it is crucial that the algorithm be able to adjust the search direction, following the changes of the problem being solved.

A paradigmatic example is routing in telecommunication networks, an application problem already discussed in the previous section.



For this problem, ACO algorithms belong to the state-of-the-art techniques [70], [74]. ACO algorithms have also been applied to dynamic versions of the TSP, where either the distance between some pairs of cities changes [77]–[79], or cities are dynamically added or removed from the set of cities to be visited. More recently, an ACS algorithm has also been applied to dynamic vehicle routing problems [80], showing good behavior on randomly generated as well as real-world instances.

B. Stochastic optimization problems

In stochastic optimization problems, some variables have a stochastic nature. Apart from the network routing problems, for which the main focus was put on their dynamic character, the probabilistic traveling salesman problem (PTSP) was the first stochastic problem tackled by ACO algorithms. In the PTSP, each city has a given probability of requiring a visit and the goal is to find an a priori tour of minimal expected length over all the cities, with the strategy of visiting a random subset of cities in the same order as they appear in the a priori tour. The first ACO algorithm for this problem was proposed by Bianchi et al. [81]. Further ACO algorithms for the PTSP have been proposed by Branke and Guntsch [82], Gutjahr [83], [84], and Birattari et al. [85].

C. Multi-objective optimization

Multiple objectives can often be handled by ordering or weighting them according to their relative importance. In the two-colony ACS algorithm for the vehicle routing problem with time window constraints [48] and in the MMAS for the bi-objective two-machine permutation flow shop problem [86], the multi-objective optimization problem is handled by ordering the objectives; differently, Doerner et al. [87] apply ACO to a bi-objective transportation problem and combine the objectives in a weighted sum. On the other hand, if preferences or weights cannot be given a priori, the goal is to find a set of non-dominated solutions that are optimal in the Pareto sense. The first ACO algorithm for finding non-dominated solutions was proposed by Iredi et al. [88] for the bi-objective scheduling problem. Other applications include portfolio optimization [89] and the quadratic assignment problem [90].

It is foreseeable that future research on ACO will focus more strongly on rich optimization problems that include stochasticity, dynamic data modifications, and multiple objectives.

D. Parallel implementations

ACO algorithms lend themselves to be parallelized in the data or population domains. In particular, any parallel models used in other population-based algorithms can be easily adapted to ACO. Two main strategies have been followed. In fine-grained parallelization, very few individuals are assigned to single processors and information exchange among the processors is frequent. In coarse-grained approaches, on the contrary, larger subpopulations are assigned to single processors and information exchange is rather rare. Research on parallel ACO algorithms has quickly shown that fine-grained parallelization results in a very significant communication overhead. Therefore, the focus has mostly turned to coarse-grained parallelization schemes, where p colonies run in parallel on p processors [91]–[95].

E. Continuous optimization

Recently, ACO algorithms have been applied to continuous optimization. When an algorithm designed for combinatorial optimization is used to tackle a continuous problem, the simplest approach would be to divide the domain of each variable into a set of intervals. However, when the domain of the variables is large and the required accuracy is high, this approach is not viable. For this reason, ACO algorithms have been developed that are specifically designed for continuous and mixed continuous-discrete variables [96], [97]. Research in this direction is currently ongoing.

VI. OTHER ANT-INSPIRED ALGORITHMS

The source of inspiration of ACO is the path marking behavior that some ant species exhibit when foraging. Nonetheless, this behavior is not the only behavior of ants that has inspired computer scientists. We present here, in a very concise way, some other examples of algorithms that are inspired by ants. The common trait of all these techniques is that they make use of stigmergic variables; that is, variables associated with the environment that hold the information that artificial ants share and exploit. (A more comprehensive discussion of ant algorithms and stigmergy can be found in [98].)

Various algorithmic techniques have been inspired by behaviors of ants. Ant colony optimization is the most successful and best-known among them.

A. Other algorithms inspired by foraging and path marking

Apart from ACO, a few other approaches take inspiration from the path marking behavior of ants. Two algorithms have been proposed for graph exploration: Edge Ant Walk [99] and Vertex Ant Walk [100]. In these algorithms, ants mark with pheromone the edges they visit to coordinate graph exploration. Contrary to ACO, in these algorithms the pheromones direct the ants toward unexplored areas of the search space. In fact, their goal is to cover the graph; that is, to visit all the nodes, without knowing the graph topology. Another example of an algorithm inspired by ants' path marking is a search algorithm for continuous optimization problems that was inspired by the foraging behavior of the Pachycondyla apicalis ants [101].



B. Algorithms inspired by brood sorting

Brood sorting is an activity that can be observed in many ant species (e.g., in Pheidole pallidula ants [102]). These ants compactly cluster their smaller eggs and microlarvae at the center of the nest brood area and the largest larvae at the periphery of the brood cluster. Deneubourg et al. [102] have proposed a model of this phenomenon in which an ant picks up and drops an item according to the number of similar surrounding items. Lumer and Faieta [103] and Kuntz et al. [104] have applied this model to a specific clustering problem, obtaining results that were qualitatively equivalent to those obtained by classical techniques but at a lower computational cost. Recently, Handl et al. [105] described an improved version of Lumer and Faieta's algorithm, and compared its performance to other standard clustering techniques, such as k-means. One of the salient features of this ant-based algorithm is its ability to propose a "natural" number of clusters. For an overview of other developments, we refer to [105].
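For intuition only, here is a tiny sketch of pick-up and drop rules of the kind used in such models; the specific probability functions below, based on the local fraction f of similar items and two constants k1 and k2, follow a commonly cited Deneubourg-style formulation and are an assumption of this text rather than a transcription of [102].

```python
def p_pick(f, k1=0.1):
    """Probability that an unladen ant picks up an item whose neighborhood
    contains a fraction f of similar items (isolated items are picked up often)."""
    return (k1 / (k1 + f)) ** 2

def p_drop(f, k2=0.15):
    """Probability that a laden ant drops its item at a site with similarity
    fraction f (items are dropped preferentially near similar items)."""
    return (f / (k2 + f)) ** 2

# Isolated items tend to be picked up, dense similar neighborhoods attract drops,
# which progressively forms clusters.
print(p_pick(0.05), p_drop(0.8))
```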

C. Algorithms inspired by division of labor

In ant colonies, individual workers tend to specialize on specific tasks in their lifetime [106]. However, ants can adapt their behavior to the circumstances: a soldier ant can become a forager, a nurse ant a guard, and so on. This combination of specialization and flexibility is a desirable feature for multi-agent optimization and control, especially in task or resource allocation problems that require continuous adaptation to changing conditions. Many approaches inspired by division of labor in real ant colonies are based on a threshold model developed by Robinson [106], in which workers with low response thresholds respond to lower levels of stimuli than do workers with high response thresholds. Such a response-threshold model has been applied to the problem of choosing a paint booth for trucks coming out of an assembly line in a truck factory [98], [107]–[110].
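As a rough illustration of the idea (not the exact model of [106] or of the cited applications), a response-threshold rule is often written as a sigmoid of the stimulus s and an agent-specific threshold θ; the functional form and the parameter values below are assumptions of this sketch.

```python
def response_probability(stimulus, threshold, n=2):
    """A common response-threshold form: agents with a low threshold react to
    weak stimuli with high probability, while agents with a high threshold
    only react when the stimulus is strong."""
    return stimulus ** n / (stimulus ** n + threshold ** n)

# A "specialist" (low threshold) vs. a "generalist" (high threshold) for the same stimulus.
print(response_probability(0.4, 0.2), response_probability(0.4, 0.8))
```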

D. Algorithms inspired by cooperative transport

The behavior of ant colonies has also inspired research in robotics, in particular for the design of distributed control algorithms for groups of robots [111]. An example of a task that has been used as a benchmark for ant algorithms applied to distributed robotics problems is cooperative box pushing [112]. Another example of application of ant algorithms is the one to the related problem of pulling an object. This has been achieved [113] within the Swarm-bots project (www.swarm-bots.org), a project dedicated to the study of ant algorithms for autonomous robotics applications.

VII. OUTLOOK AND CONCLUSIONS

As we have discussed, nowadays hundreds of researchers worldwide are applying ACO to classic NP-hard optimization problems, while only a few works concern variations that include dynamic and stochastic aspects as well as multiple objectives. The study of how best to apply ACO to such variations will certainly be one of the major research directions in the near future. A better understanding of the theoretical properties of ACO algorithms is certainly another research direction that will be pursued in the future.

Fifteen years ago, when the first ACO algorithm was introduced, taking inspiration from ants for designing optimization algorithms seemed a crazy idea. The many successful applications presented in this article have changed our perspective: what seemed a far-out idea is now considered one of the most promising approaches to the approximate solution of difficult optimization problems.

REFERENCES

[1] P. P. Grassé, Les Insectes Dans Leur Univers. Paris, France: Éd. du Palais de la Découverte, 1946.

[2] P. P. Grassé, "La reconstruction du nid et les coordinations interindividuelles chez Bellicositermes Natalensis et Cubitermes sp. La théorie de la stigmergie: Essai d'interprétation du comportement des termites constructeurs," Insectes Sociaux, vol. 6, pp. 41–81, 1959.

[3] J.-L. Deneubourg, S. Aron, S. Goss, and J.-M. Pasteels, "The self-organizing exploratory pattern of the Argentine ant," Journal of Insect Behavior, vol. 3, p. 159, 1990.

[4] S. Goss, S. Aron, J.-L. Deneubourg, and J. M. Pasteels, "Self-organized shortcuts in the Argentine ant," Naturwissenschaften, vol. 76, pp. 579–581, 1989.

[5] J. M. Pasteels, J.-L. Deneubourg, and S. Goss, "Self-organization mechanisms in ant societies (I): Trail recruitment to newly discovered food sources," Experientia Supplementum, vol. 54, p. 155, 1987.

[6] M. Dorigo, V. Maniezzo, and A. Colorni, "Positive feedback as a search strategy," Dipartimento di Elettronica, Politecnico di Milano, Italy, Tech. Rep. 91-016, 1991.

[7] M. Dorigo, "Optimization, learning and natural algorithms (in Italian)," Ph.D. dissertation, Dipartimento di Elettronica, Politecnico di Milano, Italy, 1992.

[8] M. Dorigo, V. Maniezzo, and A. Colorni, "Ant System: Optimization by a colony of cooperating agents," IEEE Transactions on Systems, Man, and Cybernetics – Part B, vol. 26, no. 1, pp. 29–41, 1996.

[9] L. M. Gambardella and M. Dorigo, "Ant-Q: A reinforcement learning approach to the traveling salesman problem," in Proceedings of the Twelfth International Conference on Machine Learning (ML-95), A. Prieditis and S. Russell, Eds. Morgan Kaufmann Publishers, 1995, pp. 252–260.

[10] M. Dorigo and L. M. Gambardella, "Ant colonies for the traveling salesman problem," BioSystems, vol. 43, no. 2, pp. 73–81, 1997.

[11] ——, "Ant Colony System: A cooperative learning approach to the traveling salesman problem," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 53–66, 1997.

[12] L. M. Gambardella and M. Dorigo, "Solving symmetric and asymmetric TSPs by ant colonies," in Proceedings of the 1996 IEEE International Conference on Evolutionary Computation (ICEC'96), T. Bäck et al., Eds. IEEE Press, Piscataway, NJ, 1996, pp. 622–627.

[13] T. Stützle and H. H. Hoos, "Improving the Ant System: A detailed report on the MAX–MIN Ant System," FG Intellektik, FB Informatik, TU Darmstadt, Germany, Tech. Rep. AIDA–96–12, Aug. 1996.

[14] T. Stützle, Local Search Algorithms for Combinatorial Problems: Analysis, Improvements, and New Applications, ser. DISKI. Infix, Sankt Augustin, Germany, 1999, vol. 220.

[15] T. Stützle and H. H. Hoos, "MAX–MIN Ant System," Future Generation Computer Systems, vol. 16, no. 8, pp. 889–914, 2000.

[16] B. Bullnheimer, R. F. Hartl, and C. Strauss, "A new rank based version of the Ant System — a computational study," Institute of Management Science, University of Vienna, Tech. Rep., 1997.

[17] ——, "A new rank-based version of the Ant System: A computational study," Central European Journal for Operations Research and Economics, vol. 7, no. 1, pp. 25–38, 1999.

[18] V. Maniezzo, "Exact and approximate nondeterministic tree-search procedures for the quadratic assignment problem," INFORMS Journal on Computing, vol. 11, no. 4, pp. 358–369, 1999.

[19] O. Cordón, I. F. de Viana, F. Herrera, and L. Moreno, "A new ACO model integrating evolutionary computation concepts: The best-worst Ant System," in Abstract Proceedings of ANTS 2000 – From Ant Colonies to Artificial Ants: Second International Workshop on Ant Algorithms, M. Dorigo et al., Eds. IRIDIA, Université Libre de Bruxelles, Belgium, 2000, pp. 22–29.



[20] C. Blum, A. Roli, and M. Dorigo, “HC–ACO: The hyper-cubeframe-work for Ant Colony Optimization,” inProceedings of MIC’2001 –Metaheuristics International Conference, vol. 2, Porto, Portugal, 2001,pp. 399–403.

[21] C. Blum and M. Dorigo, “The hyper-cube framework for antcolonyoptimization,” IEEE Transactions on Systems, Man, and Cybernetics –Part B, vol. 34, no. 2, pp. 1161–1172, 2004.

[22] M. Dorigo and G. Di Caro, “The Ant Colony Optimization meta-heuristic,” inNew Ideas in Optimization, D. Corneet al., Eds. McGrawHill, London, UK, 1999, pp. 11–32.

[23] M. Dorigo, G. Di Caro, and L. M. Gambardella, “Ant algorithms fordiscrete optimization,”Artificial Life, vol. 5, no. 2, pp. 137–172, 1999.

[24] V. Cerny, “A thermodynamical approach to the traveling salesmanproblem,” Journal of Optimization Theory and Applications, vol. 45,no. 1, pp. 41–51, 1985.

[25] S. Kirkpatrick, C. D. Gelatt Jr., and M. P. Vecchi, “Optimization bysimulated annealing,”Science, vol. 220, pp. 671–680, 1983.

[26] F. Glover, “Tabu search – part I,”ORSA Journal on Computing, vol. 1,no. 3, pp. 190–206, 1989.

[27] ——, “Tabu search – part II,”ORSA Journal on Computing, vol. 2,no. 1, pp. 4–32, 1990.

[28] F. Glover and M. Laguna,Tabu Search. Kluwer Academic Publishers,1997.

[29] H. R. Lourenco, O. Martin, and T. Stutzle, “Iterated local search,”in Handbook of Metaheuristics, ser. International Series in OperationsResearch & Management Science, F. Glover and G. Kochenberger, Eds.Kluwer Academic Publishers, 2002, vol. 57, pp. 321–353.

[30] L. J. Fogel, A. J. Owens, and M. J. Walsh,Artificial IntelligenceThrough Simulated Evolution. John Wiley & Sons, 1966.

[31] J. Holland,Adaptation in Natural and Artificial Systems.Ann Arbor:University of Michigan Press, 1975.

[32] I. Rechenberg,Evolutionsstrategie – Optimierung technischer Systemenach Prinzipien der biologischen Information. Fromman Verlag,Freiburg, Germany, 1973.

[33] H.-P. Schwefel,Numerical Optimization of Computer Models. JohnWiley & Sons, 1981.

[34] M. Dorigo and T. Stutzle,Ant Colony Optimization. MIT Press,Cambridge, MA, 2004.

[35] K. Socha, J. Knowles, and M. Sampels, “AMAX -MIN ant systemfor the university timetabling problem,” inAnt Algorithms, 3rd Inter-national Workshop, ANTS 2002, ser. LNCS, M. Dorigoet al., Eds.,vol. 2463. Berlin, Germany: Springer Verlag, 2002, p. 1.

[36] W. J. Gutjahr, “A graph-based ant system and its convergence,”FutureGeneration Computer Systems, vol. 16, no. 9, pp. 873–888, 2000.

[37] ——, “ACO algorithms with guaranteed convergence to theoptimalsolution,” Information Processing Letters, vol. 82, no. 3, pp. 145–153,2002.

[38] T. Stutzle and M. Dorigo, “A short convergence proof for a class ofACO algorithms,” IEEE Transactions on Evolutionary Computation,vol. 6, no. 4, pp. 358–365, 2002.

[39] W. J. Gutjahr, “On the finite-time dynamics of ant colonyoptimization,”Methodology and Computing in Applied Probability, vol. 8, no. 1, pp.105–133, 2006.

[40] M. Birattari, G. Di Caro, and M. Dorigo, “Toward the formal founda-tion of ant programming,” inAnt Algorithms – Proceedings of ANTS2002 – Third International Workshop, ser. LNCS, M. Dorigoet al.,Eds., vol. 2463. Springer Verlag, 2002, pp. 188–201.

[41] N. Meuleau and M. Dorigo, “Ant colony optimization and stochasticgradient descent,”Artificial Life, vol. 8, no. 2, pp. 103–121, 2002.

[42] M. Zlochin, M. Birattari, N. Meuleau, and M. Dorigo, “Model-basedsearch for combinatorial optimization: A critical survey,” Annals ofOperations Research, vol. 131, no. 1–4, pp. 373–395, 2004.

[43] C. Blum, Theoretical and Practical Aspects of Ant Colony Opti-mization, ser. Dissertations in Artificial Intelligence. AkademischeVerlagsgesellschaft Aka GmbH, Berlin, Germany, 2004, vol.282.

[44] C. Blum and M. Dorigo, “Search bias in ant colony optimization:On the role of competition-balanced systems,”IEEE Transactions onEvolutionary Computation, vol. 9, no. 2, pp. 159–174, 2005.

[45] C. Blum, M. Sampels, and M. Zlochin, “On a particularityin model-based search,” inProceedings of the Genetic and Evolutionary Compu-tation Conference (GECCO-2002), W. B. Langdon et al., Ed. MorganKaufmann Publishers, 2002, pp. 35–42.

[46] D. Merkle and M. Middendorf, “Modelling the dynamics ofant colonyoptimization algorithms,”Evolutionary Computation, vol. 10, no. 3, pp.235–262, 2002.

[47] T. Stützle and H. H. Hoos, “The MAX–MIN Ant System and local search for the traveling salesman problem,” in Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC’97), T. Bäck et al., Eds. IEEE Press, Piscataway, NJ, 1997, pp. 309–314.

[48] L. M. Gambardella, E. D. Taillard, and G. Agazzi, “MACS-VRPTW: A multiple ant colony system for vehicle routing problems with time windows,” in New Ideas in Optimization, D. Corne et al., Eds. McGraw Hill, London, UK, 1999, pp. 63–76.

[49] M. Reimann, K. Doerner, and R. F. Hartl, “D-ants: Savings based ants divide and conquer the vehicle routing problems,” Computers & Operations Research, vol. 31, no. 4, pp. 563–591, 2004.

[50] L. M. Gambardella and M. Dorigo, “Ant Colony System hybridized with a new local search for the sequential ordering problem,” INFORMS Journal on Computing, vol. 12, no. 3, pp. 237–255, 2000.

[51] K. Socha, M. Sampels, and M. Manfrin, “Ant algorithms for the university course timetabling problem with regard to the state-of-the-art,” in Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2003, ser. LNCS, G. R. Raidl et al., Eds., vol. 2611. Springer Verlag, 2003, pp. 334–345.

[52] D. Costa and A. Hertz, “Ants can colour graphs,” Journal of the Operational Research Society, vol. 48, pp. 295–305, 1997.

[53] D. Merkle, M. Middendorf, and H. Schmeck, “Ant colony optimization for resource-constrained project scheduling,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 333–346, 2002.

[54] M. L. den Besten, T. Stützle, and M. Dorigo, “Ant colony optimization for the total weighted tardiness problem,” in Proceedings of PPSN-VI, ser. LNCS, M. Schoenauer et al., Eds., vol. 1917. Springer Verlag, 2000, pp. 611–620.

[55] D. Merkle and M. Middendorf, “Ant colony optimization with global pheromone evaluation for scheduling a single machine,” Applied Intelligence, vol. 18, no. 1, pp. 105–111, 2003.

[56] C. Blum, “Beam-ACO—Hybridizing ant colony optimization with beam search: An application to open shop scheduling,” Computers & Operations Research, vol. 32, no. 6, pp. 1565–1591, 2005.

[57] L. Lessing, I. Dumitrescu, and T. Stützle, “A comparison between ACO algorithms for the set covering problem,” in ANTS’2004, Fourth International Workshop on Ant Algorithms and Swarm Intelligence, ser. LNCS, M. Dorigo et al., Eds., vol. 3172. Springer Verlag, 2004, pp. 1–12.

[58] C. Blum and M. J. Blesa, “New metaheuristic approaches for the edge-weighted k-cardinality tree problem,” Computers & Operations Research, vol. 32, no. 6, pp. 1355–1377, 2005.

[59] G. Leguizamón and Z. Michalewicz, “A new version of Ant System for subset problems,” in Proceedings of CEC’99. IEEE Press, Piscataway, NJ, 1999, pp. 1459–1464.

[60] S. Fenet and C. Solnon, “Searching for maximum cliques with ant colony optimization,” in Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2003, ser. LNCS, G. R. Raidl et al., Eds., vol. 2611. Springer Verlag, 2003, pp. 236–245.

[61] C. Solnon, “Solving permutation constraint satisfaction problems with artificial ants,” in Proceedings of ECAI’2000. Amsterdam, The Netherlands: IOS Press, 2000, pp. 118–122.

[62] ——, “Ants can solve constraint satisfaction problems,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 347–357, 2002.

[63] R. S. Parpinelli, H. S. Lopes, and A. A. Freitas, “Data mining with an ant colony optimization algorithm,” IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, pp. 321–332, 2002.

[64] D. Martens, M. D. Backer, R. Haesen, B. Baesens, C. Mues, and J. Vanthienen, “Ant-based approach to the knowledge fusion problem,” in Ant Colony Optimization and Swarm Intelligence, 5th International Workshop, ANTS 2006, ser. LNCS, M. Dorigo et al., Eds., vol. 4150. Springer Verlag, 2006, pp. 84–95.

[65] L. M. de Campos, J. M. Fernández-Luna, J. A. Gámez, and J. M. Puerta, “Ant colony optimization for learning Bayesian networks,” International Journal of Approximate Reasoning, vol. 31, no. 3, pp. 291–311, 2002.

[66] L. M. de Campos, J. A. Gámez, and J. M. Puerta, “Learning Bayesian networks by ant colony optimisation: Searching in the space of orderings,” Mathware and Soft Computing, vol. 9, no. 2–3, pp. 251–268, 2002.

[67] A. Shmygelska and H. H. Hoos, “An ant colony optimisation algorithm for the 2D and 3D hydrophobic polar protein folding problem,” BMC Bioinformatics, vol. 6:30, 2005.

[68] O. Korb, T. Stützle, and T. E. Exner, “Application of ant colony optimization to structure-based drug design,” in Ant Colony Optimization and Swarm Intelligence, 5th International Workshop, ANTS 2006, ser. LNCS, M. Dorigo et al., Eds., vol. 4150. Springer Verlag, 2006, pp. 247–258.

[69] R. Schoonderwoerd, O. Holland, J. Bruten, and L. Rothkrantz, “Ant-based load balancing in telecommunications networks,” Adaptive Behavior, vol. 5, no. 2, pp. 169–207, 1996.

[70] G. Di Caro and M. Dorigo, “AntNet: Distributed stigmergetic control for communications networks,” Journal of Artificial Intelligence Research, vol. 9, pp. 317–365, 1998.

[71] K. M. Sim and W. H. Sun, “Ant colony optimization for routing and load-balancing: Survey and new directions,” IEEE Transactions on Systems, Man, and Cybernetics – Part A: Systems and Humans, 2003.

[72] G. Di Caro, “Ant colony optimization and its application to adaptive routing in telecommunication networks,” Ph.D. dissertation, Université Libre de Bruxelles, Brussels, Belgium, 2004.

[73] F. Ducatelle, G. Di Caro, and L. M. Gambardella, “Using ant agents to combine reactive and proactive strategies for routing in mobile ad hoc networks,” International Journal of Computational Intelligence and Applications, vol. 5, no. 2, pp. 169–184, 2005.

[74] G. Di Caro, F. Ducatelle, and L. M. Gambardella, “AntHocNet: An adaptive nature-inspired algorithm for routing in mobile ad hoc networks,” European Transactions on Telecommunications, vol. 16, no. 5, pp. 443–455, 2005.

[75] M. Gravel, W. L. Price, and C. Gagné, “Scheduling continuous casting of aluminum using a multiple objective ant colony optimization metaheuristic,” European Journal of Operational Research, vol. 143, pp. 218–229, 2002.

[76] J. Bautista and J. Pereira, “Ant algorithms for assembly line balancing,” in Ant Algorithms – Proceedings of ANTS 2002 – Third International Workshop, ser. LNCS, M. Dorigo et al., Eds., vol. 2463. Springer Verlag, 2002, pp. 65–75.

[77] M. Guntsch and M. Middendorf, “Pheromone modification strategies for ant algorithms applied to dynamic TSP,” in Applications of Evolutionary Computing: Proceedings of EvoWorkshops 2001, ser. LNCS, E. J. W. Boers et al., Eds., vol. 2037. Springer Verlag, 2001, pp. 213–222.

[78] ——, “A population based approach for ACO,” in Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2002: EvoCOP, EvoIASP, EvoSTim, ser. LNCS, S. Cagnoni et al., Eds., vol. 2279. Springer Verlag, 2002, pp. 71–80.

[79] C. J. Eyckelhof and M. Snoek, “Ant systems for a dynamic TSP: Ants caught in a traffic jam,” in Ant Algorithms – Proceedings of ANTS 2002 – Third International Workshop, ser. LNCS, M. Dorigo et al., Eds., vol. 2463. Springer Verlag, 2002, pp. 88–99.

[80] R. Montemanni, L. M. Gambardella, A. E. Rizzoli, and A. V. Donati, “Ant colony system for a dynamic vehicle routing problem,” Journal of Combinatorial Optimization, vol. 10, pp. 327–343, 2005.

[81] L. Bianchi, L. M. Gambardella, and M. Dorigo, “An ant colony optimization approach to the probabilistic traveling salesman problem,” in Proceedings of PPSN-VII, ser. LNCS, J. J. Merelo et al., Eds., vol. 2439. Springer Verlag, 2002, pp. 883–892.

[82] J. Branke and M. Guntsch, “New ideas for applying ant colony optimization to the probabilistic TSP,” in Applications of Evolutionary Computing, Proceedings of EvoWorkshops 2003, ser. LNCS, G. R. Raidl et al., Eds., vol. 2611. Springer Verlag, 2003, pp. 165–175.

[83] W. J. Gutjahr, “A converging ACO algorithm for stochastic combinatorial optimization,” in Proc. SAGA 2003, ser. LNCS, A. Albrecht and K. Steinhöfel, Eds., vol. 2827. Berlin, Germany: Springer Verlag, 2003, pp. 10–25.

[84] ——, “S-ACO: An ant based approach to combinatorial optimization under uncertainty,” in Ant Colony Optimization and Swarm Intelligence, 4th International Workshop, ANTS 2004, ser. LNCS, M. Dorigo et al., Eds., vol. 3172. Berlin, Germany: Springer Verlag, 2004, pp. 1–12.

[85] M. Birattari, P. Balaprakash, and M. Dorigo, “ACO/F-Race: Ant colony optimization and racing techniques for combinatorial optimization under uncertainty,” in MIC 2005: The 6th Metaheuristics International Conference, K. F. Doerner et al., Eds. Vienna, Austria: University of Vienna, Department of Business Administration, 2005, pp. 107–112.

[86] V. T’kindt, N. Monmarché, F. Tercinet, and D. Laugt, “An ant colony optimization algorithm to solve a 2-machine bicriteria flowshop scheduling problem,” European Journal of Operational Research, vol. 142, no. 2, pp. 250–257, 2002.

[87] K. Doerner, R. F. Hartl, and M. Reimann, “Competants for problem solving – the case of full truckload transportation,” Central European Journal for Operations Research and Economics, vol. 11, no. 2, pp. 115–141, 2003.

[88] S. Iredi, D. Merkle, and M. Middendorf, “Bi-criterion optimization with multi colony ant algorithms,” in First International Conference on Evolutionary Multi-Criterion Optimization (EMO’01), ser. LNCS, E. Zitzler et al., Eds., vol. 1993. Springer Verlag, 2001, pp. 359–372.

[89] K. F. Doerner, W. J. Gutjahr, R. F. Hartl, C. Strauss, and C. Stummer, “Pareto ant colony optimization in multiobjective project portfolio selection with ILP preprocessing,” European Journal of Operational Research, vol. 171, no. 3, pp. 830–841, 2006.

[90] M. López-Ibáñez, L. Paquete, and T. Stützle, “On the design of ACO for the biobjective quadratic assignment problem,” in Proceedings of ANTS’2004, ser. LNCS, M. Dorigo et al., Eds., vol. 3172. Springer Verlag, 2004, pp. 214–225.

[91] T. Stützle, “Parallelization strategies for ant colony optimization,” in Proceedings of PPSN-V, ser. LNCS, A. E. Eiben et al., Eds., vol. 1498. Springer Verlag, 1998, pp. 722–731.

[92] E.-G. Talbi, O. Roux, C. Fonlupt, and D. Robillard, “Parallel ant colonies for combinatorial optimization problems,” in Parallel and Distributed Processing, 11 IPPS/SPDP’99 Workshops, ser. LNCS, J. Rolim et al., Eds., vol. 1586, 1999, pp. 239–247.

[93] M. Middendorf, F. Reischle, and H. Schmeck, “Multi colony ant algorithms,” Journal of Heuristics, vol. 8, no. 3, pp. 305–320, 2002.

[94] M. Manfrin, M. Birattari, T. Stützle, and M. Dorigo, “Parallel ant colony optimization for the traveling salesman problem,” in Proceedings of ANTS 2006, ser. LNCS, M. Dorigo et al., Eds., vol. 4150. Springer Verlag, 2006, pp. 224–234.

[95] S. Benkner, K. F. Doerner, R. F. Hartl, G. Kiechle, and M. Lucka, “Communication strategies for parallel cooperative ant colony optimization on clusters and grids,” in Complimentary Proceedings of PARA’04 Workshop on State-of-the-Art in Scientific Computing, 2005, pp. 3–12.

[96] K. Socha, “ACO for continuous and mixed-variable optimization,” in Ant Colony Optimization and Swarm Intelligence, 4th International Workshop, ANTS 2004, ser. LNCS, M. Dorigo et al., Eds., vol. 3172. Springer Verlag, 2004, pp. 25–36.

[97] K. Socha and M. Dorigo, “Ant colony optimization for continuous domains,” European Journal of Operational Research, 2006, in press.

[98] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. Oxford University Press, 1999.

[99] I. A. Wagner, M. Lindenbaum, and A. M. Bruckstein, “Smell as a computational resource – a lesson we can learn from the ant,” in Proceedings of the Fourth Israeli Symposium on Theory of Computing and Systems (ISTCS-99), 1996, pp. 219–230.

[100] ——, “Efficient graph search by a smell-oriented vertex process,” Annals of Mathematics and Artificial Intelligence, vol. 24, pp. 211–223, 1998.

[101] N. Monmarché, G. Venturini, and M. Slimane, “On how Pachycondyla apicalis ants suggest a new search algorithm,” Future Generation Computer Systems, vol. 16, no. 8, pp. 937–946, 2000.

[102] J.-L. Deneubourg, S. Goss, N. Franks, A. Sendova-Franks, C. Detrain, and L. Chrétien, “The dynamics of collective sorting: Robot-like ants and ant-like robots,” in Proceedings of the First International Conference on Simulation of Adaptive Behavior: From Animals to Animats, J.-A. Meyer and S. W. Wilson, Eds. MIT Press, Cambridge, MA, 1991, pp. 356–363.

[103] E. Lumer and B. Faieta, “Diversity and adaptation in populations of clustering ants,” in Proceedings of the Third International Conference on Simulation of Adaptive Behavior: From Animals to Animats 3, J.-A. Meyer and S. W. Wilson, Eds. MIT Press, Cambridge, MA, 1994, pp. 501–508.

[104] P. Kuntz, D. Snyers, and P. Layzell, “A stochastic heuristic for visualizing graph clusters in a bi-dimensional space prior to partitioning,” Journal of Heuristics, vol. 5, pp. 327–351, 1999.

[105] J. Handl, J. Knowles, and M. Dorigo, “Ant-based clustering and topographic mapping,” Artificial Life, vol. 12, no. 1, pp. 35–62, 2006.

[106] G. E. Robinson, “Regulation of division of labor in insect societies,” Ann. Rev. Entomol., vol. 37, pp. 637–665, 1992.

[107] M. Campos, E. Bonabeau, G. Theraulaz, and J.-L. Deneubourg, “Dynamic scheduling and division of labor in social insects,” Adaptive Behavior, vol. 8, no. 3, pp. 83–96, 2000.

[108] V. A. Cicirello and S. F. Smith, “Ant colony control for autonomous decentralized shop floor routing,” in Proceedings of the International Symposium on Autonomous Decentralized Systems. IEEE Computer Society Press, 2001, pp. 383–390.

[109] S. Nouyan, “Agent-based approach to dynamic task allocation,” in Ant Algorithms – Proceedings of ANTS 2002 – Third International Workshop, ser. LNCS, M. Dorigo et al., Eds., vol. 2463. Springer Verlag, 2002, pp. 28–39.


[110] S. Nouyan, R. Ghizzioli, M. Birattari, and M. Dorigo, “An insect-based algorithm for the dynamic task allocation problem,” Künstliche Intelligenz, vol. 4/05, pp. 25–31, 2005.

[111] A. Martinoli, M. Yamamoto, and F. Mondada, “On the modelling of bio-inspired collective experiments with real robots,” in Proceedings of the Fourth European Conference on Artificial Life, P. Husbands and I. Harvey, Eds. MIT Press, Cambridge, MA, 1997.

[112] C. R. Kube and H. Zhang, “Collective robotics: From social insects to robots,” Adaptive Behavior, vol. 2, pp. 189–218, 1994.

[113] M. Dorigo, V. Trianni, E. Sahin, R. Groß, T. H. Labella, G. Baldassarre, S. Nolfi, J.-L. Deneubourg, F. Mondada, D. Floreano, and L. M. Gambardella, “Evolving self-organizing behaviors for a Swarm-bot,” Autonomous Robots, vol. 17, no. 2–3, pp. 223–245, 2004.