Hindawi Publishing Corporation, Journal of Optimization, Volume 2013, Article ID 356420, 8 pages
http://dx.doi.org/10.1155/2013/356420
Research Article
Double Flight-Modes Particle Swarm Optimization

Wang Yong,1 Li Jing-yang,1 and Li Chun-lei2

1 College of Information Science and Engineering, Guangxi University for Nationalities, Nanning 530006, China
2 Nanning Power Supply Bureau, Nanning, Guangxi 530031, China

Correspondence should be addressed to Wang Yong; wangygxnn@sina.com
Received 20 February 2013; Revised 29 September 2013; Accepted 30 September 2013

Academic Editor: Jein-Shan Chen

Copyright © 2013 Wang Yong et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Taking inspiration from real birds in flight, we propose a new particle swarm optimization algorithm, which we call double flight-modes particle swarm optimization (DMPSO). In the DMPSO, each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it is searching for food in its search space. There is a King in the swarm of birds, and the King controls each bird's flight behavior in accordance with certain rules at all times. Experiments were conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere. The experimental results show that the DMPSO not only has a marked advantage in global convergence but also can effectively avoid the premature convergence problem, and it performs well on complex, high-dimensional optimization problems.
1 Introduction
Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995 [1], based on the swarm behavior of birds searching for food. Since then, PSO has attracted more and more attention from researchers in the information sciences and has generated much wider interest because of its simplicity of implementation and the limited domain knowledge it requires. However, the original PSO still suffers from premature convergence, a problem that exists in most stochastic optimization algorithms. To improve the performance of the PSO, many scholars have proposed various approaches, such as those listed in [2–22]. The methods presented by these authors can be summed up in two strategies. The first strategy is to add to the group's quantity of information by increasing the population size of the swarm. However, this strategy cannot fundamentally overcome the premature convergence problem, and it will certainly increase the running time of the computation. The second strategy is, without increasing the population size of the swarm, to excavate or increase every particle's latent capacity. Although the approaches in [2–22] can improve the performance of the PSO to some extent, they cannot fundamentally solve the premature convergence problem that exists in the original PSO.
In this paper, we present a new particle swarm optimization, namely, the double flight-modes particle swarm optimization (DMPSO for short), based on the flight characteristics of birds. The rest of this paper is organized as follows. In Section 2 we briefly introduce the original PSO, the PSO-W, and the CLPSO. Section 3 describes the double flight-modes particle swarm optimization. In Section 4 we conduct simulation experiments on some test functions and compare the performance of the DMPSO with that of the original PSO, the PSO-W, and the CLPSO. We give our conclusions in Section 5.
2 Particle Swarm Optimizers
2.1. The Original PSO. Particle swarm optimization (PSO) was developed by Kennedy and Eberhart [1]. PSO emulates swarm behavior, and the individuals are treated as points in the D-dimensional search space. Each individual is called a "particle" and represents a potential solution to a problem. The position and the velocity of the ith particle are represented as $X_i(t) = (x_{i1}(t), \ldots, x_{iD}(t))$ and $V_i(t) = (v_{i1}(t), \ldots, v_{iD}(t))$, respectively. The best previous position (the position yielding the best fitness value) of the ith particle is represented as $Pbest_i(t) = (p_{i1}(t), \ldots, p_{iD}(t))$. The best position discovered by the whole population is represented as $Gbest(t) = (g_{b1}(t), \ldots, g_{bD}(t))$. Then the velocity $V_i(t+1)$ and the position $X_i(t+1)$ of the ith particle are updated according to the following equations [1]:

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t)),$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad j = 1, \ldots, D, \qquad (1)$$

where $c_1$ and $c_2$ are the acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward the Pbest and Gbest positions, respectively, and $r_1$ and $r_2$ are two random numbers in the range [0, 1].
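As a concrete illustration, the update (1) can be sketched in a few lines of Python with NumPy (our own sketch, not the authors' implementation; the array shapes, names, and per-dimension draws of r1 and r2 are assumptions):

```python
import numpy as np

def pso_step(X, V, Pbest, gbest, c1=2.0, c2=2.0, rng=np.random):
    """One iteration of the original PSO update (1) for an (M, D) swarm.

    X, V, Pbest are (M, D) arrays; gbest is a (D,) array."""
    M, D = X.shape
    r1 = rng.rand(M, D)  # r1, r2 ~ U[0, 1], drawn independently
    r2 = rng.rand(M, D)
    V = V + c1 * r1 * (Pbest - X) + c2 * r2 * (gbest - X)
    X = X + V
    return X, V
```

Here r1 and r2 are drawn per particle and per dimension, which is one common reading of (1).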
2.2. Some Variant PSOs. Since PSO was introduced by Kennedy and Eberhart [1], many researchers have worked on improving its performance in various ways, deriving many interesting variants. One variant PSO [2] introduces a parameter called the inertia weight $w$ into the original PSO as follows:

$$v_{ij}(t+1) = w\, v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t)), \qquad (2)$$
$$x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1), \quad j = 1, \ldots, D, \qquad (3)$$

in which the inertia weight $w$ plays the role of balancing the global and local search: a large inertia weight facilitates a global search, while a small inertia weight facilitates a local search. If the inertia weight $w$ in (2) decreases linearly over the course of the search, then the variant PSO [2] is usually denoted PSO-W.
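The linearly decreasing schedule behind PSO-W can be written as a one-line helper (a sketch; the start and end values 0.9 and 0.4 are the common choices, also used in Section 4):

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight for PSO-W: decreases linearly from w_start to w_end
    as the iteration counter t runs from 0 to t_max."""
    return w_start - (w_start - w_end) * (t / t_max)
```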
Another variant PSO [16], called the comprehensive learning particle swarm optimizer (CLPSO), presents a new learning strategy. In the CLPSO, the velocity updating equation is changed to

$$V_i^d \leftarrow w\, V_i^d + c \cdot \mathrm{rand}_i^d \cdot (pbest_{f_i(d)}^d - X_i^d), \qquad (4)$$

in which $f_i = [f_i(1), \ldots, f_i(D)]$ defines which particles' pbests the ith particle should follow; $pbest_{f_i(d)}^d$ can be the corresponding dimension of any particle's pbest (including its own pbest), and the decision depends on a probability $P_c$, referred to as the learning probability, which can take different values for different particles. We first generate a random number for each dimension of the ith particle. If this random number is larger than $P_{c_i}$, then the corresponding dimension will learn from its own pbest; otherwise, it will learn from another particle's pbest.
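The dimension-wise exemplar selection described above can be sketched as follows (our own reading; the "other particle" is picked here by a two-particle tournament on pbest fitness, as in the CLPSO of [16], and the w, c, and Pc values are assumptions):

```python
import numpy as np

def clpso_velocity(i, X, V, Pbest, pbest_fit, Pc, w=0.7, c=1.494, rng=np.random):
    """CLPSO-style velocity update (4) for particle i (minimization).

    For each dimension d: if a U[0, 1] draw exceeds Pc[i], the dimension
    learns from particle i's own pbest; otherwise it learns from another
    particle's pbest, chosen by a two-particle tournament."""
    M, D = X.shape
    exemplar = Pbest[i].copy()
    for d in range(D):
        if rng.rand() <= Pc[i]:  # learn from another particle on this dimension
            a, b = rng.choice(M, size=2, replace=False)
            winner = a if pbest_fit[a] < pbest_fit[b] else b
            exemplar[d] = Pbest[winner, d]
    return w * V[i] + c * rng.rand(D) * (exemplar - X[i])
```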
3 The Double Flight-Modes Particle Swarm Optimization
3.1. The Flight Characteristics of Birds. Through careful observation, we have found the following: (1) most birds have superb flight skills; they can use various flight modes, such as a rotational flight mode and a non-rotational flight mode, to fly in their search space, can avoid the attacks of their natural enemies and various obstacles, and can avoid becoming trapped in a blind alley; and (2) there is usually a King of birds (a flight commander) in most swarms of birds; the King controls or directs every bird's flight mode and flying direction while the swarm is searching for food in the search space. Therefore, we think that if a bird uses only one flight mode all the time, it will be unable to avoid the attacks of its natural enemies and various obstacles and will easily become trapped in a blind alley. If there is no King of birds controlling the flying direction of the swarm, then the swarm will become scattered and disunited. In most cases, a bird with superb flight skills can usually find more food when it is foraging in its search space.
For the sake of simplicity, we use the following idealized rules:

(1) Each bird uses only the rotational flight mode and the non-rotational flight mode to fly while it is searching for food in its search space.

(2) There is a King of birds in each swarm. The King controls or directs every bird's flight behavior in accordance with certain rules, directing each bird's flight mode and flying direction while the swarm is searching for food in its search space.

(3) The flight speed of a bird is related to the distance between the bird and its flying destination: to a certain degree, the farther the bird is from its destination, the faster it flies toward it.
If we idealize the flight characteristics of a swarm of birds according to the previous description, then we can develop a new particle swarm optimization inspired by real birds in flight. In simulations, we naturally use virtual birds (particles).
3.2. The Flight Modes of Birds. Let $X_i(t) = (x_{i1}(t), \ldots, x_{iD}(t))$ and $V_i(t) = (v_{i1}(t), \ldots, v_{iD}(t))$ be the position and the velocity of particle i, respectively; let $Pbest_i(t) = (p_{i1}(t), \ldots, p_{iD}(t))$ be the best previous position (the position yielding the best fitness value) of the ith particle; and let $Gbest(t) = (g_{b1}(t), \ldots, g_{bD}(t))$ be the best position discovered by the whole population.

We first give the concepts of the rotational flight mode and the non-rotational flight mode, respectively.
Figure 1: A diagrammatic sketch of a swarm of birds (Bird 1, Bird 2, ..., Bird k, ..., Bird q) using the rotational flight mode to fly to the Gbest (here, a tree).
Definition 1. Let $X_i(t) = (x_{i1}(t), \ldots, x_{iD}(t))$ be the position of particle i at time instant t, and let $Gbest(t) = (g_1(t), \ldots, g_D(t))$ be the best position discovered by the whole population.

(1) We say that particle i uses the rotational flight mode to fly to the position Gbest(t) if particle i flies to the position Gbest(t) according to the following equation:

$$x_{ij}(t+1) = g_k(t), \quad j = 1, \ldots, D, \qquad (5)$$

where $k$ is a random integer in the set $\{1, \ldots, D\}$.

A diagrammatic sketch of a group of birds using the rotational flight mode to fly to the Gbest is given in Figure 1.
We can foresee that if a group of birds uses the rotational flight mode to fly to the position Gbest(t) at time instant t, then, to some extent, the group will gather around the position Gbest(t) at time instant t + 1.
(2) We say that particle i uses the non-rotational flight mode to fly to the position Gbest(t) if particle i flies to the position Gbest(t) according to the following equations:

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t)),$$
$$x_{ij}(t+1) = x_{ij}(t) + w(t)\, v_{ij}(t+1), \quad j = 1, \ldots, D, \qquad (6)$$

where $w(t)$ is an increasing function of $|g_j(t) - x_{ij}(t)|$ (the distance between the jth component of $X_i(t)$ and the jth component of Gbest(t)), $c_1$ and $c_2$ are the acceleration coefficients, and $r_1$ and $r_2$ are two uniformly distributed random numbers in the range [0, 1]. In the simulations of this paper, we select $w(t) = \mu(1 - \exp(-\rho\,|g_j(t) - x_{ij}(t)|))$ as the increasing function in (6), where $\mu = 0.8$, $\rho = 5$, and $j = 1, \ldots, D$.
Figure 2: A diagrammatic sketch of particle i (with position $X_i(t)$ at time instant t, its search area at time instant t + 1, its $Pbest_i$, and its new position $X_i(t+1)$) using the non-rotational flight mode to fly to the Gbest.
A diagrammatic sketch of particle i using the non-rotational flight mode to fly to the position Gbest is given in Figure 2.
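Under the assumptions above, the two flight modes of Definition 1 can be sketched in Python (a sketch, not the authors' Matlab code; in (5) we draw the index k independently for each dimension, which is one reading of the definition):

```python
import numpy as np

MU, RHO = 0.8, 5.0  # the w(t) constants used in the paper's simulations

def rotational_flight(gbest, rng=np.random):
    """Rotational flight mode (5): every coordinate of the new position is a
    randomly chosen component g_k(t) of Gbest, so the flock lands scattered
    around Gbest."""
    D = gbest.shape[0]
    return gbest[rng.randint(0, D, size=D)]

def nonrotational_flight(x, v, pbest, gbest, c1=2.0, c2=2.0, rng=np.random):
    """Non-rotational flight mode (6): a PSO-like velocity update whose
    position step is scaled per dimension by the increasing function
    w(t) = MU * (1 - exp(-RHO * |g_j(t) - x_ij(t)|)), so a bird far from
    Gbest moves toward it faster."""
    D = x.shape[0]
    r1, r2 = rng.rand(D), rng.rand(D)
    v_new = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    w = MU * (1.0 - np.exp(-RHO * np.abs(gbest - x)))
    return x + w * v_new, v_new
```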
3.3. The Flight-Control Approach of the King. Since the King of birds controls each bird's flight behavior in accordance with certain rules, we regard the King as a flight commander and consider every bird's flight behavior to be controlled by the King. In the following, we set up a flight-command rule for the King, which the King uses to control each bird's flight behavior. We first give the concept of the flight command as follows.
Definition 2. Let $M$ be the population size of the swarm, and let $s_i(t)$ be the rank of particle i in the swarm according to the ascending sort of the fitness values at time instant t. Then the flight command is defined by the following equation:

$$FC_i(t) = \frac{s_i(t) - 1}{M - 1}, \qquad (7)$$

in which, if $s_i(t) = 1$ (so $FC_i(t) = 0$), then the fitness value of particle i is the best one in the swarm at time instant t; meanwhile, if $s_i(t) = M$ (so $FC_i(t) = 1$), then the fitness value of particle i is the worst one in the swarm at time instant t.
The King controls each particle's flight mode according to the following approach.

Step 1. The King first gives an instruction $\delta$ at random, where $\delta$ is a normally distributed random number restricted to the range [0, 1].
Step 2. Each particle chooses its flight mode according to the following rule:

    particle i chooses formula (6) to fly if $FC_i > \delta$;
    particle i chooses formula (5) to fly if $FC_i \le \delta$.    (8)
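The King's rule in (7)-(8) amounts to rank-based mode assignment, which can be sketched as follows (our own sketch; we draw delta uniformly on [0, 1] for simplicity, whereas the text describes a normal draw restricted to that range):

```python
import numpy as np

def assign_flight_modes(fitness, rng=np.random):
    """Flight command (7)-(8): rank particles by fitness (ascending, for
    minimization), set FC_i = (s_i - 1)/(M - 1) in [0, 1], and compare each
    FC_i with the King's random instruction delta.  Returns True where a
    particle should use the non-rotational mode (6), False for (5)."""
    fitness = np.asarray(fitness)
    M = fitness.size
    s = np.empty(M, dtype=int)
    s[np.argsort(fitness)] = np.arange(1, M + 1)  # s_i(t): 1 = best, M = worst
    fc = (s - 1) / (M - 1)                        # FC_i(t)
    delta = rng.rand()                            # the King's instruction
    return fc > delta
```

Note that the best particle always has $FC_i = 0 \le \delta$, so it always takes the rotational move (5) toward Gbest, while poorly ranked particles tend to keep the exploratory PSO-style move (6).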
    Objective function f(X), X = (x_1, ..., x_D)
    Initialize each particle's position X_i and velocity V_i randomly, and assign X_i to Pbest_i at the same time (i = 1, ..., M)
    while (the stop criterion is not satisfied) do
        for i = 1, ..., M do
            calculate the fitness value f(X_i(t)) of particle i
        end for
        Rank the swarm according to the ascending sort of the fitness values and get s_i(t), FC_i(t), Gbest(t)
        Assign each particle's flight mode according to Rule (8); update Pbest(t)
        t = t + 1
    end while
    output Gbest, f(Gbest)

Procedure 1: Double flight-modes particle swarm optimization.
That is to say, if $s_i(t) > \delta(M - 1) + 1$, then particle i will choose the non-rotational flight mode for its next step; otherwise, particle i will choose the rotational flight mode for its next step.

The procedure of the DMPSO can be simply described as in Procedure 1.
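Putting the pieces together, Procedure 1 can be sketched end to end in Python (a minimal, self-contained sketch under the same assumptions as above: per-dimension random k in (5), a uniformly drawn delta, and the w(t) constants mu = 0.8, rho = 5; clipping positions to the bounds is our own addition):

```python
import numpy as np

def dmpso(f, D=10, M=30, iters=200, lb=-5.12, ub=5.12, c1=2.0, c2=2.0, seed=1):
    """Minimize f over [lb, ub]^D with the double flight-modes PSO (sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (M, D))
    V = rng.uniform(-0.1 * (ub - lb), 0.1 * (ub - lb), (M, D))
    Pbest = X.copy()
    pfit = np.array([f(x) for x in X])
    for _ in range(iters):
        fit = np.array([f(x) for x in X])
        improved = fit < pfit
        Pbest[improved], pfit[improved] = X[improved], fit[improved]
        gbest = Pbest[np.argmin(pfit)].copy()
        s = np.empty(M, dtype=int)
        s[np.argsort(fit)] = np.arange(M)   # rank minus 1
        fc = s / (M - 1)                    # FC_i(t) = (s_i - 1)/(M - 1)
        delta = rng.random()                # the King's instruction
        for i in range(M):
            if fc[i] > delta:               # non-rotational flight (6)
                r1, r2 = rng.random(D), rng.random(D)
                V[i] += c1 * r1 * (Pbest[i] - X[i]) + c2 * r2 * (gbest - X[i])
                w = 0.8 * (1.0 - np.exp(-5.0 * np.abs(gbest - X[i])))
                X[i] = np.clip(X[i] + w * V[i], lb, ub)
            else:                           # rotational flight (5)
                X[i] = gbest[rng.integers(0, D, size=D)]
    gbest = Pbest[np.argmin(pfit)]
    return gbest, f(gbest)
```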
4 Validation and Comparison
In order to test the performance of the DMPSO, we have tested it against the original PSO [1], the PSO-W [2], and the CLPSO [16]. For ease of visualization, we have implemented our simulations in Matlab for various test functions.
4.1. Benchmark Functions. For the sake of a fair and reasonable comparison between the DMPSO and the PSO, the PSO-W, and the CLPSO, we have chosen six well-known high-dimensional functions as our optimization test suite. All functions are tested on 50 dimensions. The properties and formulas of these functions are presented as follows.
(a) Schwefel's function:

$$f_1(X) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|, \quad -10 \le x_i \le 10,\ D = 50. \qquad (9)$$

It is a unimodal function and has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$. The complexity of Schwefel's function is due to its deep local optima being far from the global optimum; it will be hard to find the global optimum if many particles fall into one of the deep local optima.
(b) Rastrigin's function:

$$f_2(X) = 10D + \sum_{i=1}^{D} \left[x_i^2 - 10 \cos(2\pi x_i)\right], \quad -5.12 \le x_i \le 5.12,\ D = 50. \qquad (10)$$

The function is a complex multimodal function; the number of its local minima increases exponentially with the problem dimension, and its landscape oscillates sharply up and down. When attempting to solve Rastrigin's function, algorithms may easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results; Rastrigin is therefore viewed as a typical function for testing the global search performance of an algorithm. It has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
(c) Step function:

$$f_3(X) = \sum_{i=1}^{D} \left(\lfloor x_i + 0.5 \rfloor\right)^2, \quad -100 \le x_i \le 100,\ D = 50, \qquad (11)$$

in which $y = \lfloor x \rfloor$ is the floor function (Gauss bracket), satisfying $x - 1 < \lfloor x \rfloor \le x < \lfloor x \rfloor + 1$ for all $x \in \mathbb{R}$. The Step function is a
Table 1: Experimental results (success rate in parentheses).

f     Algorithm   BFV             WFV             Mean            MFEs ± STDEV (success rate)
f_1   DMPSO       0               1.125465e-09    6.434905e-11    7350 ± 1.771972e-10 (56%)
f_1   PSO         9.736494        61.567035       31.111268       15000 ± 11.065242 (0%)
f_1   CLPSO       8.174606e-05    0.001325        5.125738e-21    15000 ± 2.702365e-04 (0%)
f_1   PSO-W       4.429573        1.994109e+02    80.584602       15000 ± 58.997786 (0%)
f_2   DMPSO       0               49.747953       3.979836        11130 ± 13.496281 (64%)
f_2   PSO         1.790843e+02    4.039120e+02    2.847614e+02    15000 ± 49.430124 (0%)
f_2   CLPSO       21.153304       47.516825       31.762105       15000 ± 6.732226 (0%)
f_2   PSO-W       1.416269e+02    2.890130e+02    2.1243064e+02   15000 ± 43.169379 (0%)
f_3   DMPSO       0               0               0               1470 ± 0 (100%)
f_3   PSO         351             11160           3.083180e+03    15000 ± 3.937885e+03 (0%)
f_3   CLPSO       0               122             4.22            10440 ± 17.105894 (48%)
f_3   PSO-W       45              311             1.405400e+02    15000 ± 54.365809 (0%)
f_4   DMPSO       8.881784e-16    3.286260e-14    1.140421e-14    60000 ± 8.672115e-15 (0%)
f_4   PSO         0.079333        14.476725       3.692153        60000 ± 4.976905 (0%)
f_4   CLPSO       1.289395e-10    2.200675        1.082862        60000 ± 0.692050 (0%)
f_4   PSO-W       0.003600        11.916679       0.656150        60000 ± 1.737976 (0%)
f_5   DMPSO       0               1.729170e-69    3.458545e-71    11130 ± 2.420835e-70 (28%)
f_5   PSO         1.236611e-04    26.217801       4.196887        60000 ± 9.707225 (0%)
f_5   CLPSO       2.700980e-38    9.595381e-36    9.748685e-37    60000 ± 1.924460e-36 (0%)
f_5   PSO-W       5.651214e-21    1.681261e-17    2.451834e-18    60000 ± 3.861006e-18 (0%)
f_6   DMPSO       0               3.330669e-16    3.774758e-17    35430 ± 9.049510e-17 (82%)
f_6   PSO         0.04331         1.810140e+02    20.290841       60000 ± 41.921578 (0%)
f_6   CLPSO       1.787459e-14    0.092264        0.014715        60000 ± 0.021761 (0%)
f_6   PSO-W       1.760398e-04    90.751966       1.826534        60000 ± 12.832618 (0%)
discontinuous function and has a global minimum $f_{\min} = 0$ on the domain $\{X \in \mathbb{R}^{50} : -0.5 \le x_i < 0.5\}$.
(d) Ackley's function:

$$f_4(X) = -20 \exp\left(-\frac{1}{5}\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e, \quad -30 \le x_i \le 30,\ D = 50. \qquad (12)$$

The function has one narrow global optimum basin and many minor local optima, with a global optimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
(e) Sphere function:

$$f_5(X) = \sum_{i=1}^{D} x_i^2, \quad -5.12 \le x_i \le 5.12,\ D = 50. \qquad (13)$$

It is a unimodal function, and its global minimum is $f_{\min} = 0$ at $(0, \ldots, 0)$.
(f) Griewank's function:

$$f_6(X) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, \quad -600 \le x_i \le 600,\ D = 50. \qquad (14)$$

The search space is relatively large in this optimization problem. Griewank's function has a $\prod_{i=1}^{D} \cos(x_i/\sqrt{i})$ component causing linkages among variables, thereby making it difficult to reach the global optimum; therefore, it is generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
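For reference, the six benchmarks (9)-(14) can be coded directly from the formulas above (a sketch; the function names are ours):

```python
import numpy as np

def schwefel(x):   # (9): sum of |x_i| plus product of |x_i|
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def rastrigin(x):  # (10): 10D + sum(x_i^2 - 10 cos(2 pi x_i))
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def step(x):       # (11): sum of floor(x_i + 0.5)^2
    return np.sum(np.floor(x + 0.5) ** 2)

def ackley(x):     # (12)
    D = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / D))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20 + np.e)

def sphere(x):     # (13): sum of squares
    return np.sum(x**2)

def griewank(x):   # (14)
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

Each attains its global minimum $f_{\min} = 0$ at the origin (for the Step function, on the whole region $|x_i| < 0.5$).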
4.2. Comparison of Experimental Results and Discussion. There are many ways to carry out a comparison of algorithm performance: we can compare the number of function evaluations (FEs) required for a given accuracy, compare the accuracies reached within a fixed number of function evaluations, and so forth. In our simulations, we use the two
Figure 3: The median convergence characteristics (fitness value, log scale, versus generations) of the PSO, the CLPSO, the PSO-W, and the DMPSO on the 50-D test functions: (a) Schwefel's function, (b) Rastrigin's function, (c) Step function, (d) Ackley's function, (e) Sphere function, and (f) Griewank's function.
ways mentioned previously, and we set up a stopping condition for each algorithm: if a run has found the global optimal solution of the optimization problem or has reached a fixed number of function evaluations, then it stops. We run each algorithm 50 times so that we can carry out reasonable and meaningful analysis.
To ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible for the DMPSO, the PSO, the PSO-W, and the CLPSO. The settings are as follows: the population size is $M = 30$ and the acceleration coefficients are $c_1 = c_2 = 2$ for all simulations. For the test functions $f_1(X)$, $f_2(X)$, and $f_3(X)$, we set the maximum number of iterations to 500 (correspondingly, the maximum number of function evaluations to 15000). For the test functions $f_4(X)$, $f_5(X)$, and $f_6(X)$, we set the maximum number of iterations to 2000 (correspondingly, the maximum number of function evaluations to 60000). In addition, we set the inertia weight $w = 0.628$ for the CLPSO and $w(t) = 0.8(1 - \exp(-5|g_j(t) - x_{ij}(t)|))$ for the DMPSO. For the PSO-W, the linearly decreasing inertia weight $w$ is used, starting at 0.9 and ending at 0.4, with $v_{\max} \le 20\%(u_d - l_d)$.
In our experiments, we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations in the form of mean (MFEs) ± standard deviation (STDEV), together with the success rate of the algorithm in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators reflect not only the optimization ability but also the computing cost. The experimental results are listed in Table 1.
To contrast the convergence characteristics of the four algorithms more easily, Figure 3 presents the convergence characteristics, in terms of the best fitness value of the mean run of each algorithm, for each test function.
Discussion. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that the DMPSO performs much better than the original PSO, the CLPSO, and the PSO-W in terms of global convergence, accuracy, and efficiency. We therefore conclude that, on these benchmarks, the performance of the DMPSO is much better than that of the original PSO, the CLPSO, and the PSO-W.
5 Conclusions

This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight-modes particle swarm optimization (DMPSO). In this optimization algorithm, each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it is foraging for food in the search space; by using these flight skills, each bird (particle) greatly improves its searching efficiency. From the experiments we conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we can conclude that the DMPSO not only has a marked advantage in global convergence but also can, to some extent, effectively avoid the premature convergence problem, and it is a good choice for solving complex, high-dimensional optimization problems, although the DMPSO is not necessarily the best choice for every real-world optimization problem.
Acknowledgments

This work was supported by the Key Programs of the Institutions of Higher Learning, Guangxi, China, under Grant no. 201202ZD032; the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis; the Natural Science Foundation of Guangxi, China, under Grant no. 0832084; and the Natural Science Foundation of China under Grant no. 61074185.
References
[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
[3] K. M. Rasmussen and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[6] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
[9] J. Sun and W. B. Xu, "A global search of quantum-behaved particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, "Parameter selection of quantum-behaved particle swarm optimization," Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, "An improved particle swarm optimization based on self-adaptive escape velocity," Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, "Organizational evolutionary particle swarm optimization for numerical optimization," Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, "A dynamic inertia weight particle swarm optimization algorithm," Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
[15] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, "Research on a novel particle swarm algorithm with dynamic topology," Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, "Design and realization of particle swarm optimization based on pheromone mechanism," Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, "A discrete particle swarm optimization algorithm with fully communicated information," in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393–400, ACM SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, "A particle swarm optimization algorithm with fully communicated information," Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, "Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning," Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (Chinese).
2 Journal of Optimization
$V_i(t) = (v_{i1}(t), \ldots, v_{iD}(t))$, respectively. The best previous position (the position yielding the best fitness value) of the $i$th particle is represented as $Pbest_i(t) = (p_{i1}(t), \ldots, p_{iD}(t))$, and the best position discovered by the whole population is represented as $Gbest(t) = (g_{b1}(t), \ldots, g_{bD}(t))$. The velocity $V_i(t+1)$ and the position $X_i(t+1)$ of the $i$th particle are then updated according to the following equations [1]:

  $v_{ij}(t+1) = v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t))$,
  $x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)$,  $j = 1, \ldots, D$,  (1)

where $c_1$ and $c_2$ are the acceleration coefficients weighting the stochastic acceleration terms that pull each particle toward the $Pbest$ and $Gbest$ positions, respectively, and $r_1$ and $r_2$ are two random numbers in the range $[0, 1]$.
2.2. Some Variant PSOs. Since PSO was introduced by Kennedy and Eberhart [1], many researchers have worked on improving its performance in various ways, deriving many interesting variants. One of the variant PSOs [2] introduces a parameter called the inertia weight $w$ into the original PSO as follows:

  $v_{ij}(t+1) = w v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t))$,  (2)
  $x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)$,  $j = 1, \ldots, D$,  (3)

in which the inertia weight $w$ plays the role of balancing the global and local search: a large inertia weight facilitates a global search, while a small inertia weight facilitates a local search. If the inertia weight $w$ in (2) decreases linearly over the course of the search, the variant PSO of [2] is usually denoted PSO-W.
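The updates (1)–(3) can be sketched in a few lines of NumPy. This is an illustrative vectorized step, not the authors' code; the function and argument names are our own. Setting $w = 1$ recovers the original update (1).

```python
import numpy as np

def pso_w_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, rng=None):
    """One inertia-weight PSO update, Eqs. (2)-(3).

    x, v, pbest: arrays of shape (M, D); gbest: shape (D,).
    Setting w = 1 recovers the original update, Eq. (1).
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # fresh uniform [0, 1) numbers per particle and dimension
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

In practice the step is applied once per iteration for the whole swarm, with `w` either fixed or decreased linearly as in PSO-W.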
Another variant PSO [16], called the comprehensive learning particle swarm optimizer (CLPSO), presents a new learning strategy. In the CLPSO the velocity updating equation is changed to

  $V_i^d \leftarrow w \cdot V_i^d + c \cdot \mathrm{rand}_i^d \cdot (pbest_{f_i(d)}^d - X_i^d)$,  (4)

in which $f_i = [f_i(1), \ldots, f_i(D)]$ defines which particles' $pbest$s the $i$th particle should follow: $pbest_{f_i(d)}^d$ can be the corresponding dimension of any particle's $pbest$ (including its own), and the decision depends on the probability $P_{c_i}$, referred to as the learning probability, which can take different values for different particles. We first generate a random number for each dimension of the $i$th particle. If this random number is larger than $P_{c_i}$, the corresponding dimension will learn from the particle's own $pbest$; otherwise it will learn from another particle's $pbest$.
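The per-dimension exemplar selection described above can be sketched as follows. This is a simplified illustration with our own names; the full CLPSO additionally uses a two-way tournament when picking the other particle.

```python
import numpy as np

def choose_exemplars(i, M, D, Pc_i, rng):
    """Pick the exemplar index f_i(d) for each dimension d of particle i.

    Following the rule above: a uniform random number larger than the
    learning probability Pc_i means dimension d learns from particle i's
    own pbest; otherwise it learns from a randomly chosen particle's pbest.
    (The full CLPSO selects that other particle by a two-way tournament.)
    """
    f = np.empty(D, dtype=int)
    for d in range(D):
        if rng.random() > Pc_i:
            f[d] = i                  # learn from own pbest on this dimension
        else:
            f[d] = rng.integers(M)    # learn from another particle's pbest
    return f

def clpso_velocity(v, x, pbest, f, w, c, rng):
    """Velocity update of Eq. (4): V_i^d <- w*V_i^d + c*rand_i^d*(pbest_{f_i(d)}^d - X_i^d)."""
    r = rng.random(x.shape)           # one random number per dimension
    return w * v + c * r * (pbest[f, np.arange(len(f))] - x)
```

The fancy index `pbest[f, np.arange(D)]` gathers, for each dimension, the corresponding component of the chosen exemplar's `pbest`.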
3. The Double Flight-Modes Particle Swarm Optimization

3.1. The Flight Characteristics of Birds. Through careful observation we have found the following: (1) most birds have superb flight skills; they can use various flight modes, such as a rotational flight mode and a non-rotational flight mode, to fly in their search space, can avoid the attacks of their natural enemies and various obstacles, and can avoid becoming trapped in a blind alley; and (2) there is usually a King (a flight commander) in most swarms of birds; the King controls or directs every bird's flight mode and flying direction while the swarm is searching for food in the search space. We therefore believe that if a bird used only one flight mode all the time, it would be unable to avoid the attacks of its natural enemies and various obstacles and would easily become trapped in a blind alley, and that if no King controlled the flying direction of the swarm, the swarm would become scattered and disunited. In most cases, a bird with superb flight skills can usually find more food when it is foraging in its search space.
For the sake of simplicity, we use the following idealized rules:

(1) Each bird uses only the rotational flight mode and the non-rotational flight mode while it is searching for food in its search space.

(2) There is a King among the swarm of birds. The King controls or directs every bird's flight behavior in accordance with certain rules, and directs each bird's flight mode and flying direction while the swarm is searching for food in its search space.

(3) The flight speed of a bird depends on the distance between the bird and its flying destination: to a certain degree, the farther the distance between the bird and its destination, the faster it flies toward the destination.

If we idealize the flight characteristics of a swarm of birds according to the previous description, we can develop a new particle swarm optimization algorithm inspired by real birds in flight. In simulations we naturally use virtual birds (particles).
3.2. The Flight Modes of Birds. Let $X_i(t) = (x_{i1}(t), \ldots, x_{iD}(t))$ and $V_i(t) = (v_{i1}(t), \ldots, v_{iD}(t))$ be the position and the velocity of particle $i$, respectively; let $Pbest_i(t) = (p_{i1}(t), \ldots, p_{iD}(t))$ be the best previous position (yielding the best fitness value) of the $i$th particle; and let $Gbest(t) = (g_{b1}(t), \ldots, g_{bD}(t))$ be the best position discovered by the whole population.

We first give the conceptions of rotational flight mode and non-rotational flight mode, respectively.
Figure 1: A diagrammatic sketch of a swarm of birds (Bird 1, Bird 2, ..., Bird q) using rotational flight mode to fly to the $Gbest$.
Definition 1. Let $X_i(t) = (x_{i1}(t), \ldots, x_{iD}(t))$ be the position of particle $i$ at the time instant $t$ and $Gbest(t) = (g_1(t), \ldots, g_D(t))$ be the best position discovered by the whole population.

(1) We say that particle $i$ uses rotational flight mode to fly to the position $Gbest(t)$ if particle $i$ flies to $Gbest(t)$ according to the following equation:

  $x_{ij}(t+1) = g_k(t)$,  $j = 1, \ldots, D$,  (5)

where the number $k$ is a random integer in the set $\{1, \ldots, D\}$.

A diagrammatic sketch of a group of birds using rotational flight mode to fly to the $Gbest$ is given in Figure 1.

We can foresee that, if a group of birds uses rotational flight mode to fly to the position $Gbest(t)$ at the time instant $t$, then, to some extent, the group will gather around the position $Gbest(t)$ at the time instant $t+1$.
(2) We say that particle $i$ uses non-rotational flight mode to fly to the position $Gbest(t)$ if particle $i$ flies to $Gbest(t)$ according to the following equations:

  $v_{ij}(t+1) = v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (g_j(t) - x_{ij}(t))$,
  $x_{ij}(t+1) = x_{ij}(t) + w(t) v_{ij}(t+1)$,  $j = 1, \ldots, D$,  (6)

where $w(t)$ is an increasing function of the variable $|g_j(t) - x_{ij}(t)|$ (the distance between the $j$th component of $X_i(t)$ and the $j$th component of $Gbest(t)$), $c_1$ and $c_2$ are the acceleration coefficients, and $r_1$ and $r_2$ are two uniformly distributed random numbers in the range $[0, 1]$.

In the simulations of this paper we select $w(t) = \mu(1 - \exp(-\rho |g_j(t) - x_{ij}(t)|))$ as the increasing function in (6), where $\mu = 0.8$, $\rho = 5$, and $j = 1, \ldots, D$.
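To illustrate, the chosen $w(t)$ grows with the per-dimension distance to $Gbest$ and saturates at $\mu = 0.8$, which realizes idealized rule (3): the farther a bird is from its destination, the faster it flies. A minimal sketch (the function name is ours):

```python
import math

def step_scale(d, mu=0.8, rho=5.0):
    """w = mu * (1 - exp(-rho * d)), where d = |g_j(t) - x_ij(t)|, Eq. (6).

    Increases monotonically with the distance d and saturates at mu,
    so particles far from Gbest take proportionally larger steps."""
    return mu * (1.0 - math.exp(-rho * d))
```

For example, `step_scale(0.0)` is exactly 0, while the scale approaches 0.8 for large distances.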
Figure 2: A diagrammatic sketch of particle $i$ using non-rotational flight mode to fly to the $Gbest$ (the particle moves from $X_i(t)$ at the time instant $t$ to $X_i(t+1)$ within its search area at the time instant $t+1$).
A diagrammatic sketch of particle $i$ using non-rotational flight mode to fly to the position $Gbest$ is given in Figure 2.
3.3. The Flight-Control Approach of the King. Since the King of birds controls each bird's flight behavior in accordance with certain rules, we regard the King as a flight commander and assume that every bird's flight behavior is controlled by the King. In what follows we set up a flight-command rule that the King uses to control each bird's flight behavior. We first give the conception of flight command as follows.
Definition 2. Let $M$ be the population size of the swarm and $s_i(t)$ be the rank of particle $i$ when the swarm is sorted by fitness value in ascending order at the time instant $t$. Then the flight command is defined by the following equation:

  $FC_i(t) = \dfrac{s_i(t) - 1}{M - 1}$,  (7)

in which, if $s_i(t) = 1$ (so that $FC_i(t) = 0$), the fitness value of particle $i$ is the best one in the swarm at the time instant $t$; meanwhile, if $s_i(t) = M$ (so that $FC_i(t) = 1$), the fitness value of particle $i$ is the worst one in the swarm at the time instant $t$.
The King controls each particle's flight mode according to the following approach.

Step 1. The King first gives an instruction $\delta$ at random, where $\delta$ is a random number distributed over the range $[0, 1]$.

Step 2. Each particle chooses its flight mode according to the following rule:

  particle $i$ flies by formula (6) if $FC_i > \delta$;
  particle $i$ flies by formula (5) if $FC_i \le \delta$.  (8)
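The ranking rule (7)–(8) can be sketched as follows; this is our own NumPy illustration, not the authors' code. Note that the best-ranked particle ($FC_i = 0$) always flies rotationally, while poorly ranked particles tend to fly non-rotationally.

```python
import numpy as np

def assign_flight_modes(fitness, rng):
    """King's flight-command rule, Eqs. (7)-(8).

    fitness: array of shape (M,), smaller is better (minimization).
    Returns a boolean array: True -> non-rotational flight, Eq. (6);
    False -> rotational flight toward Gbest, Eq. (5).
    delta, the King's instruction, is drawn uniformly from [0, 1) here.
    """
    M = len(fitness)
    ranks = np.argsort(np.argsort(fitness))   # s_i(t) - 1: 0 = best, M-1 = worst
    fc = ranks / (M - 1)                      # flight command FC_i(t) in [0, 1]
    delta = rng.random()                      # the King's instruction
    return fc > delta                         # True: Eq. (6); False: Eq. (5)
```

The double `argsort` converts raw fitness values into ranks in one pass.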
Objective function: $f(X)$, $X = (x_1, \ldots, x_D)$.
Initialize each particle's position $X_i$ and velocity $V_i$ randomly, and assign $X_i$ to $Pbest_i$ at the same time ($i = 1, \ldots, M$).
while (the stop criterion is not satisfied) do
  for $i = 1, \ldots, M$ do
    calculate the fitness value $f(X_i(t))$ of particle $i$
  end for
  Rank the swarm according to the ascending sort of the fitness values and obtain $s_i(t)$, $FC_i(t)$, and $Gbest(t)$.
  Assign each particle's flight mode according to rule (8).
  Update $Pbest(t)$.
  $t = t + 1$
end while
output $Gbest$, $f(Gbest)$

Procedure 1: Double flight-modes particle swarm optimization.
That is to say, if $s_i(t) > \delta(M - 1) + 1$, then particle $i$ will choose the non-rotational flight mode for the next step; otherwise particle $i$ will choose the rotational flight mode for the next step.
The procedure of the DMPSO can be simply described as in Procedure 1.
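Putting Procedure 1 together, a compact Python sketch might look as follows. This is our own reading, not the authors' Matlab code: in particular, we read Eq. (5) as drawing the random index $k$ independently per dimension, and we add a velocity clamp (borrowing the $V_{\max}$ rule the paper quotes for PSO-W) to keep the sketch numerically stable.

```python
import numpy as np

def dmpso(f, bounds, D, M=30, max_evals=60000, c1=2.0, c2=2.0, seed=0):
    """Sketch of Procedure 1 (DMPSO) for minimizing f over a box.

    f: objective mapping an array of shape (D,) to a scalar.
    bounds: (low, high) box constraints applied to every dimension.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    vmax = 0.2 * (hi - lo)                 # our stabilizing clamp (not in Procedure 1)
    x = rng.uniform(lo, hi, (M, D))
    v = np.zeros((M, D))
    pbest = x.copy()
    pbest_val = np.full(M, np.inf)
    evals = 0
    while evals < max_evals:
        fit = np.array([f(xi) for xi in x])
        evals += M
        improved = fit < pbest_val         # update Pbest(t)
        pbest[improved] = x[improved]
        pbest_val[improved] = fit[improved]
        g = pbest[np.argmin(pbest_val)].copy()   # Gbest(t)
        if pbest_val.min() == 0.0:         # found the global optimum exactly
            break
        ranks = np.argsort(np.argsort(fit))
        fc = ranks / (M - 1)               # flight command, Eq. (7)
        delta = rng.random()               # the King's instruction
        for i in range(M):
            if fc[i] > delta:              # non-rotational flight, Eq. (6)
                r1, r2 = rng.random(D), rng.random(D)
                v[i] = v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
                v[i] = np.clip(v[i], -vmax, vmax)
                w = 0.8 * (1.0 - np.exp(-5.0 * np.abs(g - x[i])))
                x[i] = np.clip(x[i] + w * v[i], lo, hi)
            else:                          # rotational flight, Eq. (5)
                x[i] = g[rng.integers(D, size=D)]   # one reading: k drawn per dimension
    return g, float(pbest_val.min())
```

On the benchmarks of Section 4 this reproduces the overall procedure, though exact numbers depend on the interpretation choices noted above.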
4. Validation and Comparison

In order to test the performance of the DMPSO, we have tested it against the original PSO [1], the PSO-W [2], and the CLPSO [16]. For ease of visualization, we implemented our simulations in Matlab for the various test functions.

4.1. Benchmark Functions. For the sake of a fair and reasonable comparison among the DMPSO, the PSO, the PSO-W, and the CLPSO, we have chosen six well-known high-dimensional functions as our optimization simulation tests. All functions are tested in 50 dimensions. The properties and formulas of these functions are presented as follows.
(a) Schwefel's function:

  $f_1(X) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$,  $-10 \le x_i \le 10$,  $D = 50$.  (9)

It is a unimodal function and has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
(b) Rastrigin's function:

  $f_2(X) = 10D + \sum_{i=1}^{D} [x_i^2 - 10 \cos(2\pi x_i)]$,  $-5.12 \le x_i \le 5.12$,  $D = 50$.  (10)

This is a complex multimodal function: the number of its local minima increases exponentially with the problem dimension, and its landscape jumps up and down among many peaks. When attempting to solve Rastrigin's function, algorithms easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results; Rastrigin is therefore viewed as a typical function for testing the global search performance of an algorithm. It has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
(c) Step function:

  $f_3(X) = \sum_{i=1}^{D} (\lfloor x_i + 0.5 \rfloor)^2$,  $-100 \le x_i \le 100$,  $D = 50$,  (11)

in which $y = \lfloor x \rfloor$ is the floor function (the Gauss bracket), satisfying $x - 1 < \lfloor x \rfloor \le x < \lfloor x \rfloor + 1$ for all $x \in \mathbb{R}$. The Step function is a
Table 1: Experimental results.

f    Algorithm   BFV             WFV             Mean            MFEs ± STDEV (success rate)
f1   DMPSO       0               1.125465e-009   6.434905e-011   7350 ± 1.771972e-010 (56%)
     PSO         9.736494        61.567035       31.111268       15000 ± 11.065242 (0%)
     CLPSO       8.174606e-005   0.001325        5.125738e-021   15000 ± 2.702365e-004 (0%)
     PSO-w       4.429573        1.994109e+002   80.584602       15000 ± 58.997786 (0%)
f2   DMPSO       0               49.747953       3.979836        11130 ± 13.496281 (64%)
     PSO         1.790843e+002   4.039120e+002   2.847614e+002   15000 ± 49.430124 (0%)
     CLPSO       21.153304       47.516825       31.762105       15000 ± 6.732226 (0%)
     PSO-w       1.416269e+002   2.890130e+002   2.1243064e+002  15000 ± 43.169379 (0%)
f3   DMPSO       0               0               0               1470 ± 0 (100%)
     PSO         351             11160           3.083180e+003   15000 ± 3.937885e+003 (0%)
     CLPSO       0               122             4.22            10440 ± 17.105894 (48%)
     PSO-w       45              311             1.405400e+002   15000 ± 54.365809 (0%)
f4   DMPSO       8.881784e-016   3.286260e-014   1.140421e-014   60000 ± 8.672115e-015 (0%)
     PSO         0.079333        14.476725       3.692153        60000 ± 4.976905 (0%)
     CLPSO       1.289395e-010   2.200675        1.082862        60000 ± 0.692050 (0%)
     PSO-w       0.003600        11.916679       0.656150        60000 ± 1.737976 (0%)
f5   DMPSO       0               1.729170e-069   3.458545e-071   11130 ± 2.420835e-070 (28%)
     PSO         1.236611e-004   26.217801       4.196887        60000 ± 9.707225 (0%)
     CLPSO       2.700980e-038   9.595381e-036   9.748685e-037   60000 ± 1.924460e-036 (0%)
     PSO-w       5.651214e-021   1.681261e-017   2.451834e-018   60000 ± 3.861006e-018 (0%)
f6   DMPSO       0               3.330669e-016   3.774758e-017   35430 ± 9.049510e-017 (82%)
     PSO         0.04331         1.810140e+002   20.290841       60000 ± 41.921578 (0%)
     CLPSO       1.787459e-014   0.092264        0.014715        60000 ± 0.021761 (0%)
     PSO-w       1.760398e-004   90.751966       1.826534        60000 ± 12.832618 (0%)
discontinuous function and has a global minimum $f_{\min} = 0$ in the domain $\{X \in \mathbb{R}^{50} \mid -0.5 \le x_i \le 0.5\}$.
(d) Ackley's function:

  $f_4(X) = -20 \exp\left(-\frac{1}{5}\sqrt{\frac{1}{D}\sum_{i=1}^{D} x_i^2}\right) - \exp\left(\frac{1}{D}\sum_{i=1}^{D} \cos(2\pi x_i)\right) + 20 + e$,  $-30 \le x_i \le 30$,  $D = 50$.  (12)

The function has one narrow global optimum basin and many minor local optima; its global optimum is $f_{\min} = 0$ at $(0, \ldots, 0)$.
(e) Sphere function:

  $f_5(X) = \sum_{i=1}^{D} x_i^2$,  $-5.12 \le x_i \le 5.12$,  $D = 50$.  (13)

It is a unimodal function, and its global minimum is $f_{\min} = 0$ at $(0, \ldots, 0)$.
(f) Griewank's function:

  $f_6(X) = \frac{1}{4000}\sum_{i=1}^{D} x_i^2 - \prod_{i=1}^{D} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$,  $-600 \le x_i \le 600$,  $D = 50$.  (14)

The search space of this optimization problem is relatively large. Griewank's function has the $\prod_{i=1}^{D}\cos(x_i/\sqrt{i})$ component, which causes linkages among variables and thereby makes it difficult to reach the global optimum; it is therefore generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum $f_{\min} = 0$ at $(0, \ldots, 0)$.
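For reference, the six benchmarks (9)–(14) can be written compactly as follows. These are our own NumPy transcriptions of the formulas above; each function has $f_{\min} = 0$ at the origin.

```python
import numpy as np

def schwefel(x):    # Eq. (9): sum of |x_i| plus product of |x_i|
    a = np.abs(x)
    return a.sum() + a.prod()

def rastrigin(x):   # Eq. (10)
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def step(x):        # Eq. (11): squared floor of shifted coordinates
    return np.sum(np.floor(x + 0.5) ** 2)

def ackley(x):      # Eq. (12)
    D = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / D))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / D) + 20 + np.e)

def sphere(x):      # Eq. (13)
    return np.sum(x**2)

def griewank(x):    # Eq. (14): the cosine product links the variables
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```

Evaluating any of them at the 50-dimensional zero vector returns (numerically) zero, matching the stated global minima.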
4.2. Comparison of Experimental Results and Discussion. There are many ways to compare algorithm performance: for example, one can compare the number of function evaluations (FEs) required for a given accuracy, or compare the accuracies reached after a fixed number of function evaluations. In our simulations we use the two
Figure 3: The median convergence characteristics of the 50-D test functions (fitness values on a log scale versus generations, for the PSO, CLPSO, PSO-W, and DMPSO): (a) Schwefel's function, (b) Rastrigin's function, (c) Step function, (d) Ackley's function, (e) Sphere function, and (f) Griewank's function.
ways mentioned previously, and we set up a running-stopping condition for each algorithm: a run stops when it has found the global optimal solution of the optimization problem or has reached a fixed number of function evaluations. We run each algorithm 50 times so that we can carry out a reasonable and meaningful analysis.
In order to ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible for the DMPSO, the PSO, the PSO-W, and the CLPSO. They are as follows: the population size is $M = 30$ and the acceleration coefficients are $c_1 = c_2 = 2$ for all simulations. For the test functions $f_1(X)$, $f_2(X)$, and $f_3(X)$, we set the maximum number of iterations to 500 (or, correspondingly, the maximum number of function evaluations to 15000); for the test functions $f_4(X)$, $f_5(X)$, and $f_6(X)$, we set the maximum number of iterations to 2000 (or, correspondingly, the maximum number of function evaluations to 60000). In addition, we set the inertia weight $w = 0.628$ for the CLPSO and set $w(t) = 0.8(1 - \exp(-5|g_j(t) - x_{ij}(t)|))$ for the DMPSO. For the PSO-W, a linearly decreasing inertia weight $w$ is used, which starts at 0.9 and ends at 0.4, with $V_{\max} \le 20\%(u_d - l_d)$.
In our experiments we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations in the form of mean (MFEs) ± standard deviation (STDEV), together with the success rate of the algorithm in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators reflect not only the optimization ability but also the computing cost. The experimental results are listed in Table 1.
To make the convergence characteristics of the four algorithms easier to contrast, Figure 3 presents the convergence characteristics, in terms of the best fitness value of the median run of each algorithm, for each test function.
Discussion. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that the DMPSO performs much better than the original PSO, the CLPSO, and the PSO-W, being clearly superior in terms of global convergence, accuracy, and efficiency. We therefore conclude that, on these benchmarks, the performance of the DMPSO is much better than that of the original PSO, the CLPSO, and the PSO-W.
5. Conclusions
This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight-modes particle swarm optimization (DMPSO). In this algorithm, each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it is foraging for food in the search space; by using these superb flight skills, each bird (particle) greatly improves its search efficiency. From the experiments conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we conclude that the DMPSO not only has a marked advantage in global convergence but also can, to some extent, effectively avoid the premature convergence problem, making it a good choice for solving complex and high-dimensional optimization problems, although the DMPSO is not necessarily the best choice for every real-world optimization problem.
Acknowledgments
This work was supported by the Key Programs of the Institution of Higher Learning, Guangxi, China, under Grant no. 201202ZD032, the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis, the Natural Science Foundation of Guangxi, China, under Grant no. 0832084, and the Natural Science Foundation of China under Grant no. 61074185.
References
[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
[3] K. M. Rasmussen and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[6] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
[9] J. Sun and W. B. Xu, "A global search of quantum-behaved particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, "Parameter selection of quantum-behaved particle swarm optimization," Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, "An improved particle swarm optimization based on self-adaptive escape velocity," Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, "Organizational evolutionary particle swarm optimization for numerical optimization," Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, "A dynamic inertia weight particle swarm optimization algorithm," Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
[15] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, "Research on a novel particle swarm algorithm with dynamic topology," Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, "Design and realization of particle swarm optimization based on pheromone mechanism," Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, "A discrete particle swarm optimization algorithm with fully communicated information," in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393–400, ACM SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, "A particle swarm optimization algorithm with fully communicated information," Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, "Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning," Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (Chinese).
Journal of Optimization 3
Bird 1
Bird 2
Bird 3 Bird 4
Bird k
Bird q
A tree
Gbest
Figure 1 A diagrammatic sketch of a swarmof birds using rotationalflight mode to fly to the 119866best
Definition 1 Let119883119894(119905) = (119909
1198941(119905) 119909
119894119863(119905)) be the position of
particle 119894 at the time instant 119905 and119866best(119905) = (1198921(119905) 119892119863(119905))
be the best position discovered by the whole population
(1) We call that particle 119894 uses rotational flight mode to flyto the position119866best(119905) if particle 119894 flies to the position119866best(119905)according to the following equation
119909119894119895 (119905 + 1) = 119892119896 (119905) 119895 = 1 119863 (5)
where the number 119896 is a random integer in the set 1 119863We can use a diagrammatic sketch to depict that a group
of birds is using rotational flight mode to fly to the 119866best as inFigure 1
We can foresee if a group of birds is using rotational flightmode to fly to the position 119866best(119905) at the time instant 119905 thento some extent the group will gather around the position119866best(119905) at the time instant 119905 + 1
(2) We call that particle 119894 uses non-rotational flight modeto fly to the position 119866best(119905) if particle 119894 flies to the position119866best(119905) according to the following equation
V119894119895 (119905 + 1) = V
119894119895 (119905) + 11988811199031 (119901119894119895 (119905) minus 119909119894119895 (119905))
+ 11988821199032(119892119895 (119905) minus 119909119894119895 (119905))
119909119894119895 (119905 + 1) = 119909119894119895 (119905) + 119908(119905)V119894119895 (119905 + 1) 119895 = 1 119863
(6)
where 119908(119905) is an increasing function about the variable|119892119895(119905) minus 119909
119894119895(119905)| (the distance between the 119895th component of
119883119894(119905) and the 119895th component of 119866best(119905)) 1198881 and 1198882 are the
acceleration coefficients and both 1199031and 1199032are two uniformly
distributed random numbers in the range [0 1]In our simulations of this paper we select 119908(119905) = 120583(1 minus
exp(minus120588|119892119895(119905)minus119909119894119895(119905)|)) as the increasing function in (6) where
120583 = 08 and 120588 = 5 119895 = 1 119863
The search area ofparticle i at the
time instant t + 1
Xi(t + 1)
Xi(t)
Particle i atthe time instant t
Gbest
pbest 119894
Figure 2 A diagrammatic sketch of particle 119894 using non-rotationalflight mode to fly to the 119866best
We can use a diagrammatic sketch to depict that particle119894 is using non-rotational flight mode to fly to the position119866bestas in Figure 2
33 The Flight-Control Approach of the King Since the Kingof birds controls each birdrsquos flight behavior in accordancewithcertain rules therefore we look at the King as a flight com-mander and we think that every birdrsquos flight behavior iscontrolled by the King Following that we will set up a flight-command rule for the King and the King uses this rule tocontrol each birdrsquos flight behavior We first give the con-ception of flight command as follows
Definition 2. Let M be the population size of the swarm and s_i(t) be the rank of particle i when the swarm is sorted in ascending order of fitness value at the time instant t. Then the flight command of particle i is defined by the following equation:

FC_i(t) = (s_i(t) - 1) / (M - 1),   (7)

in which, if s_i(t) = 1 (so FC_i(t) = 0), the fitness value of particle i is the best one in the swarm at the time instant t; if s_i(t) = M (so FC_i(t) = 1), the fitness value of particle i is the worst one in the swarm at the time instant t.
The King controls each particle's flight mode according to the following approach.
Step 1. The King first gives an instruction δ at random, where δ is a random number in the range [0, 1].
Step 2. Each particle chooses its flight mode according to the following rule:

particle i chooses formula (6) to fly if FC_i(t) > δ,
particle i chooses formula (5) to fly if FC_i(t) ≤ δ.   (8)
4 Journal of Optimization
Objective function f(X), X = (x_1, ..., x_D)
Initialize each particle's position X_i and velocity V_i randomly, and assign X_i to Pbest_i at the same time (i = 1, ..., M)
while (the stop criterion is not satisfied) do
  for i = 1, ..., M do
    calculate the fitness value f(X_i(t)) of particle i
  end for
  Rank the swarm in ascending order of fitness value and obtain s_i(t), FC_i(t), Gbest(t)
  Assign each particle's flight mode according to Rule (8)
  Update Pbest(t)
  t = t + 1
end while
output Gbest, f(Gbest)
Procedure 1 Double flight-modes particle swarm optimization
That is to say, if s_i(t) > δ(M - 1) + 1, then particle i will choose the non-rotational flight mode for the next step; otherwise particle i will choose the rotational flight mode for the next step.
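The King's rule can be sketched compactly. The following is a minimal NumPy version (the function name is ours; the paper gives only pseudocode, and we assume fitness is being minimized, so rank 1 is the best particle):

```python
import numpy as np

def assign_flight_modes(fitness, delta):
    """Flight-command rule of the King.

    Ranks particles in ascending order of fitness, computes
    FC_i(t) = (s_i(t) - 1) / (M - 1) from equation (7), and applies rule (8):
    FC_i > delta  -> non-rotational flight, formula (6);
    FC_i <= delta -> rotational flight, formula (5).

    Returns a boolean array: True where the particle flies non-rotationally.
    """
    fitness = np.asarray(fitness, dtype=float)
    M = fitness.size
    # s_i(t): 1-based rank of particle i in ascending order of fitness
    s = np.empty(M, dtype=int)
    s[np.argsort(fitness)] = np.arange(1, M + 1)
    fc = (s - 1) / (M - 1)
    return fc > delta
```

Since FC = 0 for the best particle, the current best always keeps the rotational mode (FC ≤ δ for any δ in [0, 1]), while poorly ranked particles are increasingly likely to be sent non-rotationally toward Gbest.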
The procedure of the DMPSO can be simply described as in Procedure 1.
4. Validation and Comparison
In order to test the performance of the DMPSO, we have tested it against the original PSO [1], the PSO-W [2], and the CLPSO [16]. For ease of visualization, we implemented our simulations in Matlab for the various test functions.
4.1. Benchmark Functions. For the sake of a fair and reasonable comparison between the DMPSO and the PSO, the PSO-W, and the CLPSO, we have chosen six well-known high-dimensional functions as our optimization test problems. All functions are tested in 50 dimensions. The properties and formulas of these functions are as follows.
(a) Schwefel's function:

f_1(X) = Σ_{i=1}^{D} |x_i| + Π_{i=1}^{D} |x_i|,   -10 ≤ x_i ≤ 10, D = 50.   (9)
It is a unimodal function with a global minimum f_min = 0 at (0, ..., 0). The complexity of Schwefel's function is due to its deep local optima being far from the global optimum; it will be hard to find the global optimum if many particles fall into one of the deep local optima.
(b) Rastrigin's function:

f_2(X) = 10D + Σ_{i=1}^{D} [x_i^2 - 10 cos(2π x_i)],   -5.12 ≤ x_i ≤ 5.12, D = 50.   (10)
The function is a complex multimodal function; the number of its local minima grows exponentially with the problem dimension, and its landscape is full of jagged peaks and valleys. When attempting to solve Rastrigin's function, algorithms easily fall into a local optimum, so an algorithm capable of maintaining larger diversity is likely to yield better results. Rastrigin's function is therefore viewed as a typical function for testing the global search performance of an algorithm. It has a global minimum f_min = 0 at (0, ..., 0).
(c) Step function:

f_3(X) = Σ_{i=1}^{D} (⌊x_i + 0.5⌋)^2,   -100 ≤ x_i ≤ 100, D = 50.   (11)
in which y = ⌊x⌋ is the bracket (Gauss) function, with x - 1 < ⌊x⌋ ≤ x < ⌊x⌋ + 1 for all x ∈ R.
Table 1: Experimental results.

f    Algorithm  BFV            WFV            Mean            MFEs ± STDEV (success rate)
f1   DMPSO      0              1.125465e-009  6.434905e-011   7350 ± 1.771972e-010 (56%)
     PSO        9.736494       61.567035      31.111268       15000 ± 11.065242 (0%)
     CLPSO      8.174606e-005  0.001325       5.125738e-021   15000 ± 2.702365e-004 (0%)
     PSO-w      4.429573       1.994109e+002  80.584602       15000 ± 58.997786 (0%)
f2   DMPSO      0              49.747953      3.979836        11130 ± 13.496281 (64%)
     PSO        1.790843e+002  4.039120e+002  2.847614e+002   15000 ± 49.430124 (0%)
     CLPSO      21.153304      47.516825      31.762105       15000 ± 6.732226 (0%)
     PSO-w      1.416269e+002  2.890130e+002  2.1243064e+002  15000 ± 43.169379 (0%)
f3   DMPSO      0              0              0               1470 ± 0 (100%)
     PSO        351            11160          3.083180e+003   15000 ± 3.937885e+003 (0%)
     CLPSO      0              122            4.22            10440 ± 17.105894 (48%)
     PSO-w      45             311            1.405400e+002   15000 ± 54.365809 (0%)
f4   DMPSO      8.881784e-016  3.286260e-014  1.140421e-014   60000 ± 8.672115e-015 (0%)
     PSO        0.079333       14.476725      3.692153        60000 ± 4.976905 (0%)
     CLPSO      1.289395e-010  2.200675       1.082862        60000 ± 0.692050 (0%)
     PSO-w      0.003600       11.916679      0.656150        60000 ± 1.737976 (0%)
f5   DMPSO      0              1.729170e-069  3.458545e-071   11130 ± 2.420835e-070 (28%)
     PSO        1.236611e-004  26.217801      4.196887        60000 ± 9.707225 (0%)
     CLPSO      2.700980e-038  9.595381e-036  9.748685e-037   60000 ± 1.924460e-036 (0%)
     PSO-w      5.651214e-021  1.681261e-017  2.451834e-018   60000 ± 3.861006e-018 (0%)
f6   DMPSO      0              3.330669e-016  3.774758e-017   35430 ± 9.049510e-017 (82%)
     PSO        0.04331        1.810140e+002  20.290841       60000 ± 41.921578 (0%)
     CLPSO      1.787459e-014  0.092264       0.014715        60000 ± 0.021761 (0%)
     PSO-w      1.760398e-004  90.751966      1.826534        60000 ± 12.832618 (0%)
The Step function is a discontinuous function and has a global minimum f_min = 0 in the domain {X ∈ R^50 | -0.5 ≤ x_i ≤ 0.5}.
(d) Ackley's function:

f_4(X) = -20 exp(-0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) - exp((1/D) Σ_{i=1}^{D} cos(2π x_i)) + 20 + e,   -30 ≤ x_i ≤ 30, D = 50.   (12)
The function has one narrow global-optimum basin and many minor local optima, and it has a global minimum f_min = 0 at (0, ..., 0).
(e) Sphere function:

f_5(X) = Σ_{i=1}^{D} x_i^2,   -5.12 ≤ x_i ≤ 5.12, D = 50.   (13)
It is a unimodal function, and its global minimum is f_min = 0 at (0, ..., 0).
(f) Griewank's function:

f_6(X) = (1/4000) Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i/√i) + 1,   -600 ≤ x_i ≤ 600, D = 50.   (14)
The search space of this optimization problem is relatively large. Griewank's function has a Π_{i=1}^{D} cos(x_i/√i) component that causes linkages among the variables, thereby making it difficult to reach the global optimum, so it is generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum f_min = 0 at (0, ..., 0).
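For reference, the six benchmarks (9)–(14) can be sketched in NumPy as follows (the function names are ours; each returns the scalar objective value at a point x):

```python
import numpy as np

# NumPy versions of the six benchmark functions (9)-(14).
# Every function attains its global minimum value 0 at the origin
# (the Step function on the whole cell -0.5 <= x_i <= 0.5).

def schwefel_2_22(x):   # f1: sum |x_i| + prod |x_i|
    return np.sum(np.abs(x)) + np.prod(np.abs(x))

def rastrigin(x):       # f2: 10*D + sum(x_i^2 - 10*cos(2*pi*x_i))
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

def step(x):            # f3: sum floor(x_i + 0.5)^2
    return np.sum(np.floor(x + 0.5) ** 2)

def ackley(x):          # f4
    d = x.size
    return (-20 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / d))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / d) + 20 + np.e)

def sphere(x):          # f5: sum x_i^2
    return np.sum(x**2)

def griewank(x):        # f6
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000 - np.prod(np.cos(x / np.sqrt(i))) + 1
```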
4.2. Comparison of Experimental Results and Discussion. There are many ways to compare algorithm performance: for example, one can compare the number of function evaluations (FEs) needed to reach a given accuracy, or compare the accuracies reached within a fixed number of function evaluations. In our simulations we use the two ways just mentioned.
[Figure 3 consisted of six semilog convergence plots (fitness value on a log scale versus generations) comparing the PSO, CLPSO, PSO-w, and DMPSO; panels (a)–(c) run to 500 generations and panels (d)–(f) run to 2000 generations.]

Figure 3: The median convergence characteristics of the 50D test functions: (a) Schwefel's function, (b) Rastrigin's function, (c) Step function, (d) Ackley's function, (e) Sphere function, and (f) Griewank's function.
We set up a running-stopping condition for each algorithm: if a run has found the global optimal solution of the optimization problem or has reached a fixed number of function evaluations, it stops running. We run each algorithm 50 times so that we can carry out a reasonable and meaningful analysis.
In order to ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible for the DMPSO, the PSO, the PSO-W, and the CLPSO. The settings are as follows: population size M = 30 and acceleration coefficients c1 = c2 = 2 for all simulations. For the test functions f1(X), f2(X), and f3(X) we set the maximum number of iterations to 500 (correspondingly, a maximum of 15000 function evaluations). For the test functions f4(X), f5(X), and f6(X) we set the maximum number of iterations to 2000 (correspondingly, a maximum of 60000 function evaluations). In addition, we set the inertia weight w = 0.628 for the CLPSO and w(t) = 0.8(1 - exp(-5|g_j(t) - x_ij(t)|)) for the DMPSO. For the PSO-W, the linearly decreasing inertia weight w is used, starting at 0.9 and ending at 0.4, with V_max ≤ 0.2(u_d - l_d).
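The linearly decreasing inertia weight used for the PSO-W runs can be written as a one-line helper (a sketch; the function name is ours):

```python
def linear_inertia(t, t_max, w_start=0.9, w_end=0.4):
    """Linearly decreasing inertia weight for PSO-W:
    w goes from w_start at t = 0 down to w_end at t = t_max."""
    return w_start - (w_start - w_end) * t / t_max
```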
In our experiments we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations, in the form of mean (MFEs) ± standard deviation (STDEV), together with the success rate of the algorithm in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators reflect not only the optimization ability but also the computing cost. The experimental results are listed in Table 1.
To make it easier to contrast the convergence characteristics of the four algorithms, Figure 3 presents the convergence characteristics in terms of the best fitness value of the mean run of each algorithm on each test function.
Discussion. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that the DMPSO is much superior to the original PSO, the CLPSO, and the PSO-W in terms of global convergence property, accuracy, and efficiency. We therefore conclude that the performance of the DMPSO is much better than that of the original PSO, the CLPSO, and the PSO-W.
5. Conclusions
This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight-modes particle swarm optimization (DMPSO). In this algorithm each bird (particle) can use both a rotational flight mode and a non-rotational flight mode while it is foraging for food in the search space; by using these flight skills, each bird (particle) greatly improves its searching efficiency. From the experiments we conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we can conclude that the DMPSO not only has a marked advantage in global convergence but also can, to some extent, effectively avoid the premature convergence problem, and it is a good choice for solving complex and high-dimensional optimization problems, although it is not necessarily the best choice for every real-world optimization problem.
Acknowledgments
This work was supported by the Key Programs of the Institution of Higher Learning, Guangxi, China (no. 201202ZD032); the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis; the Natural Science Foundation of Guangxi, China (no. 0832084); and the Natural Science Foundation of China (no. 61074185).
References

[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
[3] K. M. Rasmussen and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[6] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
[9] J. Sun and W. B. Xu, "A global search of quantum-behaved particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, "Parameter selection of quantum-behaved particle swarm optimization," Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, "An improved particle swarm optimization based on self-adaptive escape velocity," Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, "Organizational evolutionary particle swarm optimization for numerical optimization," Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, "A dynamic inertia weight particle swarm optimization algorithm," Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
[15] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, "Research on a novel particle swarm algorithm with dynamic topology," Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, "Design and realization of particle swarm optimization based on pheromone mechanism," Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, "A discrete particle swarm optimization algorithm with fully communicated information," in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393–400, ACM SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, "A particle swarm optimization algorithm with fully communicated information," Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, "Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning," Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (Chinese).
[19] Q Lu S-R Liu and X-N Qiu ldquoDesign and realization ofparticle swarm optimization based on pheromonemechanismrdquoActa Automatica Sinica vol 35 no 11 pp 1410ndash1419 2009
8 Journal of Optimization
[20] Q Lu X-N Qiu and S-R Liu ldquoA discrete particle swarmoptimization algorithm with fully communicated informationrdquoin Proceedings of the Genetic and Evolutionary ComputationConference (GEC rsquo09) pp 393ndash400 ACMSIGEVO New YorkNY USA June 2009
[21] Q Lu and S-R Liu ldquoA particle swarm optimization algorithmwith fully communicated informationrdquo Acta Electronica Sincavol 38 no 3 pp 664ndash667 2010 (Chinese)
[22] Z-Z Shao H-GWang andH Liu ldquoDimensionality reductionsymmetrical PSO algorithm characterized by heuristic detec-tion and self-learningrdquo Computer Science vol 37 no 5 pp 219ndash222 2010 (Chinese)
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Table 1: Experimental results.

f    Algorithm   BFV             WFV             Mean            MFEs ± STDEV (success rate, %)
f1   DMPSO       0               1.125465e-009   6.434905e-011   7350 ± 1.771972e-010 (56)
     PSO         9.736494        61.567035       31.111268       15000 ± 11.065242 (0)
     CLPSO       8.174606e-005   0.001325        5.125738e-021   15000 ± 2.702365e-004 (0)
     PSO-w       4.429573        1.994109e+002   80.584602       15000 ± 58.997786 (0)
f2   DMPSO       0               49.747953       3.979836        11130 ± 13.496281 (64)
     PSO         1.790843e+002   4.039120e+002   2.847614e+002   15000 ± 49.430124 (0)
     CLPSO       21.153304       47.516825       31.762105       15000 ± 6.732226 (0)
     PSO-w       1.416269e+002   2.890130e+002   2.1243064e+002  15000 ± 43.169379 (0)
f3   DMPSO       0               0               0               1470 ± 0 (100)
     PSO         351             11160           3.083180e+003   15000 ± 3.937885e+003 (0)
     CLPSO       0               122             4.22            10440 ± 17.105894 (48)
     PSO-w       45              311             1.405400e+002   15000 ± 54.365809 (0)
f4   DMPSO       8.881784e-016   3.286260e-014   1.140421e-014   60000 ± 8.672115e-015 (0)
     PSO         0.079333        14.476725       3.692153        60000 ± 4.976905 (0)
     CLPSO       1.289395e-010   2.200675        1.082862        60000 ± 0.692050 (0)
     PSO-w       0.003600        11.916679       0.656150        60000 ± 1.737976 (0)
f5   DMPSO       0               1.729170e-069   3.458545e-071   11130 ± 2.420835e-070 (28)
     PSO         1.236611e-004   26.217801       4.196887        60000 ± 9.707225 (0)
     CLPSO       2.700980e-038   9.595381e-036   9.748685e-037   60000 ± 1.924460e-036 (0)
     PSO-w       5.651214e-021   1.681261e-017   2.451834e-018   60000 ± 3.861006e-018 (0)
f6   DMPSO       0               3.330669e-016   3.774758e-017   35430 ± 9.049510e-017 (82)
     PSO         0.04331         1.810140e+002   20.290841       60000 ± 41.921578 (0)
     CLPSO       1.787459e-014   0.092264        0.014715        60000 ± 0.021761 (0)
     PSO-w       1.760398e-004   90.751966       1.826534        60000 ± 12.832618 (0)
discontinuous function and has a global minimum f_min = 0 in the domain {X ∈ R^50 | -0.5 ≤ x_i ≤ 0.5}.
(d) Ackley's function:

    f4(X) = -20 exp[-(1/5) sqrt((1/D) Σ_{i=1}^{D} x_i^2)] - exp[(1/D) Σ_{i=1}^{D} cos(2π x_i)] + 20 + e,
    -30 ≤ x_i ≤ 30, D = 50.  (12)

The function has one narrow global optimum basin and many minor local optima, and it has a global optimum f_min = 0 at (0, ..., 0).
(e) Sphere function:

    f5(X) = Σ_{i=1}^{D} x_i^2,  -5.12 ≤ x_i ≤ 5.12, D = 50.  (13)

It is a unimodal function, and its global minimum is f_min = 0 at (0, ..., 0).
(f) Griewank's function:

    f6(X) = (1/4000) Σ_{i=1}^{D} x_i^2 - Π_{i=1}^{D} cos(x_i / sqrt(i)) + 1,  -600 ≤ x_i ≤ 600, D = 50.  (14)

The search space is relatively large in this optimization problem. Griewank's function has a Π_{i=1}^{D} cos(x_i / sqrt(i)) component that causes linkages among the variables, thereby making it difficult to reach the global optimum; it is therefore generally regarded as a complex multimodal problem that is hard to optimize. The function has a global minimum f_min = 0 at (0, ..., 0).
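For concreteness, the three benchmarks defined in Eqs. (12)–(14) can be written down directly. The sketch below is illustrative only; the function names and the NumPy dependency are our own choices, not part of the paper:

```python
import numpy as np

def ackley(x):
    """Ackley's function f4, Eq. (12): one narrow global basin, many local optima."""
    d = len(x)
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / d))
            - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / d) + 20.0 + np.e)

def sphere(x):
    """Sphere function f5, Eq. (13): unimodal, minimum 0 at the origin."""
    return np.sum(x ** 2)

def griewank(x):
    """Griewank's function f6, Eq. (14): the product term links the variables."""
    d = len(x)
    i = np.arange(1, d + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```

All three evaluate to 0 at the origin, matching the stated global optima.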
4.2. Comparison of Experimental Results and Discussion. There are many ways to compare algorithm performance: for example, we can compare the number of function evaluations (FEs) required to reach a given accuracy, or compare the accuracies reached within a fixed number of function evaluations. In our simulations, we use the two
[Figure 3: The median convergence characteristics of the 50D test functions: (a) Schwefel's function, (b) Rastrigin's function, (c) Step function, (d) Ackley's function, (e) Sphere function, and (f) Griewank's function. Each panel plots fitness values (log scale) against generations for the PSO, CLPSO, PSO-w, and DMPSO.]
ways mentioned previously, and we set up a running-stopping condition for each algorithm: if a run has found the global optimal solution of the optimization problem, or has reached a fixed number of function evaluations, then it stops running. We run each algorithm 50 times so that a reasonable and meaningful analysis can be made.
In order to ensure the comparability of the experimental results, the parameter settings are kept as consistent as possible across the DMPSO, the PSO, the PSO-w, and the CLPSO. They are as follows: the population size M = 30 and the acceleration coefficients c1 = c2 = 2 for all simulations. For the test functions f1(X), f2(X), and f3(X), we set the maximum number of iterations to 500 (correspondingly, the maximum number of function evaluations to 15000). For the test functions f4(X), f5(X), and f6(X), we set the maximum number of iterations to 2000 (correspondingly, the maximum number of function evaluations to 60000). In addition, we set the inertia weight w = 0.628 for the CLPSO and w(t) = 0.8(1 - exp(-5|g_j(t) - x_ij(t)|)) for the DMPSO. For the PSO-w, a linearly decreasing inertia weight w is used, starting at 0.9 and ending at 0.4, with Vmax ≤ 20%(u_d - l_d).
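As a sketch of this experimental setup, the PSO-w baseline together with the running-stopping rule can be implemented as follows. This is a minimal hypothetical implementation of the standard global-best PSO with linearly decreasing inertia weight; the DMPSO's own flight-mode rules are not reproduced here:

```python
import numpy as np

def pso_w(f, dim=50, lower=-30.0, upper=30.0, pop=30,
          max_fes=60_000, target=0.0, tol=1e-10, seed=0):
    """Global-best PSO with linearly decreasing inertia weight (PSO-w).

    Protocol as in the paper: M = 30, c1 = c2 = 2, w from 0.9 down to 0.4,
    Vmax capped at 20% of the search range, and a run that stops on
    reaching the optimum or exhausting the evaluation budget.
    """
    rng = np.random.default_rng(seed)
    c1 = c2 = 2.0
    vmax = 0.2 * (upper - lower)              # velocity clamp from the range
    x = rng.uniform(lower, upper, (pop, dim))
    v = np.zeros((pop, dim))
    pbest, pfit = x.copy(), np.array([f(xi) for xi in x])
    fes = pop                                  # initialization cost
    g = int(np.argmin(pfit))
    gbest, gfit = pbest[g].copy(), pfit[g]
    max_iter = max_fes // pop - 1              # budget minus initialization
    for t in range(max_iter):
        w = 0.9 - 0.5 * t / max_iter           # linear decrease 0.9 -> 0.4
        r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
        v = np.clip(w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x),
                    -vmax, vmax)
        x = np.clip(x + v, lower, upper)
        fit = np.array([f(xi) for xi in x])
        fes += pop
        better = fit < pfit
        pbest[better], pfit[better] = x[better], fit[better]
        g = int(np.argmin(pfit))
        if pfit[g] < gfit:
            gbest, gfit = pbest[g].copy(), pfit[g]
        if abs(gfit - target) <= tol:          # running-stopping condition
            break
    return gfit, fes
```

A run on, say, the Sphere function returns the best fitness found and the number of evaluations consumed, which is how the MFEs column of Table 1 is populated.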
In our experiments, we choose the best fitness value (BFV), the worst fitness value (WFV), the mean value (Mean), and the number of function evaluations expressed as mean (MFEs) ± standard deviation (STDEV), together with the success rate of the algorithm in finding the global optimum, as the evaluation indicators of optimization ability for the four algorithms mentioned previously. These indicators reflect not only the optimization ability but also the computing cost. The experimental results are listed in Table 1.
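These indicators can be computed mechanically from the 50 recorded runs. The sketch below assumes each run is summarized by its final fitness and its evaluation count; since the paper prints STDEV next to MFEs without saying which quantity it summarizes, we take it here, as an assumption, to be the standard deviation of the final fitness values:

```python
import statistics

def summarize(runs, target=0.0, tol=1e-10):
    """Aggregate independent runs into the Table 1 indicators.

    `runs` is a list of (final_fitness, function_evaluations) pairs;
    the indicator names (BFV, WFV, Mean, MFEs, STDEV) follow the paper.
    """
    fits = [f for f, _ in runs]
    fes = [e for _, e in runs]
    return {
        "BFV": min(fits),                # best fitness value
        "WFV": max(fits),                # worst fitness value
        "Mean": statistics.fmean(fits),  # mean final fitness
        "MFEs": statistics.fmean(fes),   # mean function evaluations
        "STDEV": statistics.stdev(fits), # std. dev. of final fitness (assumed)
        "Success%": 100.0 * sum(abs(f - target) <= tol for f in fits) / len(fits),
    }
```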
In order to contrast the convergence characteristics of the four algorithms more easily, Figure 3 presents the convergence characteristics, in terms of the best fitness value of the mean run of each algorithm, for each test function.
Discussion. From the results listed in Table 1 and the convergence curves in Figure 3, we can see that (1) the DMPSO performs much better than the original PSO, the CLPSO, and the PSO-w, and (2) the DMPSO is superior to them in terms of global convergence property, accuracy, and efficiency. We therefore conclude that the performance of the DMPSO is much better than that of the original PSO, the CLPSO, and the PSO-w.
5. Conclusions

This paper presents a novel particle swarm optimization with double flight modes, which we call the double flight-modes particle swarm optimization (DMPSO). In this algorithm, each bird (particle) can use both a rotational flight mode and a nonrotational flight mode while foraging for food in the search space, and these flight skills markedly improve each particle's search efficiency. From the experiments conducted on benchmark functions such as Schwefel, Rastrigin, Ackley, Step, Griewank, and Sphere, we conclude that the DMPSO not only has a marked advantage in global convergence but also avoids the premature convergence problem to some extent, making it a good choice for solving complex and high-dimensional optimization problems, although it is not necessarily the best choice for every real-world optimization problem.
Acknowledgments

This work was supported by the Key Programs of the Institution of Higher Learning, Guangxi, China, under Grant no. 201202ZD032; the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis; the Natural Science Foundation of Guangxi, China, under Grant no. 0832084; and the Natural Science Foundation of China under Grant no. 61074185.
References

[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942–1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945–1950, 1999.
[3] K. M. Rasmussen and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101–106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[6] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671–1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225–239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211–224, 2004.
[9] J. Sun and W. B. Xu, "A global search of quantum-behaved particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 325–331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, "Parameter selection of quantum-behaved particle swarm optimization," Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416–420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, "An improved particle swarm optimization based on self-adaptive escape velocity," Journal of Software, vol. 16, no. 12, pp. 2036–2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, "Organizational evolutionary particle swarm optimization for numerical optimization," Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145–153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, "A dynamic inertia weight particle swarm optimization algorithm," Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698–705, 2008.
[15] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124–129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, "Research on a novel particle swarm algorithm with dynamic topology," Computer Science, vol. 34, no. 3, pp. 205–207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129–142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, "Design and realization of particle swarm optimization based on pheromone mechanism," Acta Automatica Sinica, vol. 35, no. 11, pp. 1410–1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, "A discrete particle swarm optimization algorithm with fully communicated information," in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393–400, ACM SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, "A particle swarm optimization algorithm with fully communicated information," Acta Electronica Sinica, vol. 38, no. 3, pp. 664–667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, "Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning," Computer Science, vol. 37, no. 5, pp. 219–222, 2010 (Chinese).
Submit your manuscripts athttpwwwhindawicom
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical Problems in Engineering
Hindawi Publishing Corporationhttpwwwhindawicom
Differential EquationsInternational Journal of
Volume 2014
Applied MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Probability and StatisticsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Mathematical PhysicsAdvances in
Complex AnalysisJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
OptimizationJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
CombinatoricsHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Operations ResearchAdvances in
Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Function Spaces
Abstract and Applied AnalysisHindawi Publishing Corporationhttpwwwhindawicom Volume 2014
International Journal of Mathematics and Mathematical Sciences
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Algebra
Discrete Dynamics in Nature and Society
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Decision SciencesAdvances in
Discrete MathematicsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom
Volume 2014 Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Stochastic AnalysisInternational Journal of
Journal of Optimization 7
of algorithm in finding the global optima) as the evaluationindicators of optimization ability for the four algorithmsmentioned previously These evaluation indicators cannotonly reflect the optimization ability but also indicate the com-puting cost We have got the experimental results being listedin Table 1
In order to more easily contrast the convergence char-acteristics of the four algorithms Figure 3 presents the con-vergence characteristics in term of the best fitness value of themean run of each algorithm for each test function
Discussions From the results listed in Table 1 and the con-vergence curve simulation diagram in Figure 3 we can seethat (1) the DMPSO performs much better than the originalPSO the CLPSO and the PSO-W and (2) the DMPSO ismuch superior to the original PSO the CLPSO and thePSO-W in terms of global convergence property accuracyand efficiency So we conclude that the performance of theDMPSO is much better than that of the original PSO that ofthe CLPSO and that of the PSO-W
5 Conclusions
This paper presents a novel particle swam optimization withdouble flight modes that we call the double flight modesparticle swam optimization (DMPSO) In the optimizationalgorithm each bird (particle) can use both rotational flightmode andnon-rotational flightmode to flywhile it is foragingfor food in the search space By using its superb flight skillseach bird (particle) has much improved its searching effi-ciency From the experiments we conduct on some bench-mark functions such as Schwefel Rastrigin Ackley StepGriewank and Sphere we can conclude that the DMPSO notonly has marked advantage of global convergence propertybut also can effectively avoid the premature convergenceproblem to some extent and is one of good choices for use tosolve the complex and high-dimensional optimization prob-lems although the DMPSO is not necessarily the best choicefor solving various real-world optimization problems
Acknowledgments
This work was supported by the Key Programs of the Institution of Higher Learning, Guangxi, China, under Grant no. 201202ZD032; the Guangxi Key Laboratory of Hybrid Computation and IC Design Analysis; the Natural Science Foundation of Guangxi, China, under Grant no. 0832084; and the Natural Science Foundation of China under Grant no. 61074185.
References
[1] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Part 1, pp. 1942-1948, Piscataway, NJ, USA, December 1995.
[2] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, vol. 3, pp. 1945-1950, 1999.
[3] K. M. Rasmussen and T. Krink, "Hybrid particle swarm optimization with breeding and subpopulations," in Proceedings of the 3rd Genetic and Evolutionary Computation Conference, San Francisco, Calif, USA, 2001.
[4] Y. H. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 101-106, IEEE, Piscataway, NJ, USA, May 2001.
[5] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.
[6] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, pp. 1671-1676, Honolulu, Hawaii, USA, 2002.
[7] F. van den Bergh and A. P. Engelbrecht, "A cooperative approach to particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 225-239, 2004.
[8] K. E. Parsopoulos and M. N. Vrahatis, "On the computation of all global minimizers through particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 211-224, 2004.
[9] J. Sun and W. B. Xu, "A global search of quantum-behaved particle swarm optimization," in Proceedings of the Congress on Evolutionary Computation, pp. 325-331, IEEE Press, Washington, DC, USA, 2004.
[10] J. Sun, W. Xu, and J. Liu, "Parameter selection of quantum-behaved particle swarm optimization," Lecture Notes in Computer Science, Springer, Berlin, Germany.
[11] Z.-S. Lu and Z.-R. Hou, "Particle swarm optimization with adaptive mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416-420, 2004 (Chinese).
[12] R. He, Y.-J. Wang, Q. Wang, J.-H. Zhou, and C.-Y. Hu, "An improved particle swarm optimization based on self-adaptive escape velocity," Journal of Software, vol. 16, no. 12, pp. 2036-2044, 2005 (Chinese).
[13] L. Cong, Y.-H. Sha, and L.-C. Jiao, "Organizational evolutionary particle swarm optimization for numerical optimization," Pattern Recognition and Artificial Intelligence, vol. 20, no. 2, pp. 145-153, 2007 (Chinese).
[14] B. Jiao, Z. Lian, and X. Gu, "A dynamic inertia weight particle swarm optimization algorithm," Chaos, Solitons and Fractals, vol. 37, no. 3, pp. 698-705, 2008.
[15] J. J. Liang and P. N. Suganthan, "Dynamic multi-swarm particle swarm optimizer," in Proceedings of the IEEE Swarm Intelligence Symposium (SIS '05), pp. 124-129, Pasadena, Calif, USA, June 2005.
[16] J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, 2006.
[17] X. F. Wang, F. Wang, and Y.-H. Qiu, "Research on a novel particle swarm algorithm with dynamic topology," Computer Science, vol. 34, no. 3, pp. 205-207, 2007 (Chinese).
[18] P. S. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.
[19] Q. Lu, S.-R. Liu, and X.-N. Qiu, "Design and realization of particle swarm optimization based on pheromone mechanism," Acta Automatica Sinica, vol. 35, no. 11, pp. 1410-1419, 2009.
[20] Q. Lu, X.-N. Qiu, and S.-R. Liu, "A discrete particle swarm optimization algorithm with fully communicated information," in Proceedings of the Genetic and Evolutionary Computation Conference (GEC '09), pp. 393-400, ACM SIGEVO, New York, NY, USA, June 2009.
[21] Q. Lu and S.-R. Liu, "A particle swarm optimization algorithm with fully communicated information," Acta Electronica Sinica, vol. 38, no. 3, pp. 664-667, 2010 (Chinese).
[22] Z.-Z. Shao, H.-G. Wang, and H. Liu, "Dimensionality reduction symmetrical PSO algorithm characterized by heuristic detection and self-learning," Computer Science, vol. 37, no. 5, pp. 219-222, 2010 (Chinese).