
An Empirical Study on the Settings of Control Coefficients in Particle Swarm Optimization

N. M. Kwok, D. K. Liu, K. C. Tan and Q. P. Ha

Abstract— The effects of randomness of control coefficients in Particle Swarm Optimization (PSO) are investigated through empirical studies. The PSO is viewed as a method to solve a coverage problem in the solution space when the global-best particle is reported as the solution. Randomness of the control coefficients, therefore, plays a crucial role in providing an efficient and effective algorithm. Comparisons of performance are made between uniformly and Gaussian distributed random coefficients in adjusting particle velocities. Alternative strategies are also tested; they include: i) pre-assigned randomness through the iterations, and ii) selective hybrid random adjustment based on the fitness of the particles. Furthermore, the effect of the velocity momentum factor is compared between a constant and a random momentum. Numerical results show that the performances of the proposed variations are comparable to the conventional implementation for simple test functions. However, enhanced performances using the selective and hybrid strategy are observed for complicated functions.

I. INTRODUCTION

The particle swarm optimization (PSO) [1] is an agent-based search algorithm for difficult optimization problems. The algorithm emulates the social behavior of living species, which is noticeable in bird flocks and fish schools [2]. Each agent is treated as a particle that moves across the solution space of the problem to be solved, and the movements of the particles are governed by a set of control coefficients. The PSO, being classified in the family of meta-heuristic algorithms, has been shown to be an attractive alternative to gradient-based optimization routines [3], where the need for differentiable system models can be relaxed. On the other hand, benefits are brought about by hybridizing these approaches, as realized in the context of the differential evolution algorithm [4] and memetic algorithms [5].

Applications of PSO can be found in a wide range of science and engineering practice. For example, in [6], the PSO was applied to the design of a magnetic device called Loney's solenoid. A cantilevered beam was designed using the PSO [7] in the structural application domain. Settings of PID controller parameters were determined for robustness against adverse operating conditions, see [8]. The PSO has also found application in optimizing the economy of electricity distribution [9].

N. M. Kwok∗, D. K. Liu and Q. P. Ha are with the ARC Centre of Excellence in Autonomous Systems (CAS), Faculty of Engineering, University of Technology, Sydney, Broadway, NSW 2007, Australia (∗email: [email protected]). K. C. Tan is with the Dept. of Electrical and Computer Engineering, National University of Singapore, Singapore 119260, Republic of Singapore.

In radio communications, a phased array was designed using the PSO [10] for a specified radiation pattern. The project scheduling problem with resource constraints was tackled by employing the PSO, and promising results were reported in [11]. Furthermore, the PSO is well applicable to multi-objective optimization problems, e.g., [12]. The successes reported in real-world applications are largely attributed to its implementation simplicity and flexibility. However, further improvements in PSO performance may be anticipated by critically setting the control coefficients.

Early developments in PSO focused on the convergence of particles to the optimal solution. Control coefficients are incorporated to adjust the particle positions in the context of a flying velocity. An appropriate choice of coefficients will drive the particles into oscillations around the optimum [13]. The treatment of PSO in complex spaces was discussed in [14], with attention paid to the explosion, stability and convergence characteristics. Further studies were conducted in [15] relating the coefficient settings to convergence. Apart from the research work on PSO convergence behavior, the implementation architecture was recently considered in [16], with investigations into the topology of information exchange among the particles. The evaluation of the objective function was manipulated in [17] to avoid trapping in local optima. In general, these works indicated that control coefficient settings are inter-related and problem dependent.

A variety of proposals aiming at improving PSO performance have also been presented in the literature. Ideas borrowed from genetic algorithms were integrated with PSO in the context of selection [18], breeding and sub-populations [19]. In [20], boundary conditions were imposed when particles had traveled to the limits of the solution space. Velocity adjustments according to the expired number of iterations were proposed in [21], which implicitly boosts convergence. The velocity control coefficient was purposefully selected to guarantee stability, see [22]. An inertia weight was introduced in [23] to adjust the particle movements through a velocity control coefficient. Neighborhood information was used in [24] to guide the particle movements in addition to the global- and self-information gained by a particle. The division-of-labor concept was implemented in [25], where the movements of particles were adapted to environmental changes.

Although the cited variations to the PSO were rationally justifiable, the philosophy adopted was mostly to enhance the convergence capability. It is widely noted that the PSO maintains a global-best candidate solution during its iterations; however, convergence to the vicinity of this temporary candidate may hamper the discovery of a better solution in other sectors of the solution space. In other words, exploration of the solution space is as important as convergence. With regard to the PSO implementation, the control coefficients have followed the early developments by injecting randomness from a uniform distribution, and this has become a convention.

In this work, the use of alternative random distributions, e.g. the Gaussian distribution, will be investigated through empirical studies. This is motivated by the need to compromise between exploring and exploiting the solution space, such that an efficient PSO can be designed. The Gaussian distribution will be applied in randomizing the particle velocity and the global- and self-movements of the particles. Further implementation variations in deterministic and selective schemes will also be investigated.

The rest of the paper is organized as follows. In Section II, the PSO background is briefly reviewed and the motivation of the present work is revealed. Alternative implementation strategies are proposed in Section III with the objective of enhancing the PSO performance by a study on the coverage capability. Experimental settings are described in Section IV and results are presented. A conclusion is drawn in Section V together with directions for further research.

II. PARTICLE SWARM OPTIMIZATION

A. Mathematical Description

The particle swarm optimization algorithm can be viewed as a stochastic search method for solving non-deterministic optimization problems and can be described by the following expressions:

v^i_{k+1} = w v^i_k + c_1 (g_{best,k} − x^i_k) + c_2 (p^i_{best,k} − x^i_k)
x^i_{k+1} = x^i_k + v^i_{k+1},    (1)

where x is the particle position in the solution space, v is the velocity of the particle movement assuming a unity time step, w is the velocity control coefficient, c1, c2 are the gain control coefficients, gbest is the global-best position, pbest is the position of a particular particle where the best fitness has been obtained so far, subscript k is the iteration index and superscript i is the particle index.

Since the pioneering work in PSO [1], the gain control coefficients c1, c2 have conventionally been taken as random numbers sampled from a uniform distribution, i.e.,

c1 ∼ U[0, c1,max], c2 ∼ U[0, c2,max], (2)

where U[·] stands for a uniform distribution, c1,max is the maximum value for c1 and c2,max for c2. Guidelines in determining the gain control coefficients were suggested in [15].
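To make the update concrete, a minimal Python/NumPy sketch of one iteration of Eq. (1), with the gains re-sampled from the uniform distributions of Eq. (2), is given below; the array names and shapes (positions, velocities, p_best, g_best) are illustrative choices of this sketch, not settings prescribed by the paper.

import numpy as np

def pso_step(positions, velocities, p_best, g_best, w=0.7,
             c1_max=2.0, c2_max=2.0, rng=None):
    # One update per Eq. (1); c1, c2 are re-sampled from U[0, c_max] per Eq. (2).
    rng = rng or np.random.default_rng()
    n = positions.shape[0]
    c1 = rng.uniform(0.0, c1_max, size=(n, 1))    # random gain toward the global best
    c2 = rng.uniform(0.0, c2_max, size=(n, 1))    # random gain toward the personal best
    velocities = (w * velocities
                  + c1 * (g_best - positions)
                  + c2 * (p_best - positions))
    positions = positions + velocities
    return positions, velocities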

There have been proposals for determining the velocity control coefficient w, see [23], [21], on the basis of governing the movements of particles towards the global-best position. This coefficient can be kept constant or made adaptive to some criteria, e.g., decreasing in proportion to the number of expired iterations.
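As an example of the adaptive option, a linearly decreasing inertia weight of the kind associated with [23] and [21] could be sketched as follows; the start and end values (0.9 and 0.4) are common choices from the wider PSO literature, not values reported in this paper.

def inertia_weight(k, k_max, w_start=0.9, w_end=0.4):
    # Velocity control coefficient that decreases in proportion to the expired iterations.
    return w_start - (w_start - w_end) * (k / k_max)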

B. Procedural Description

At the start of the algorithm, the particle positions are generated to cover the solution space. Their positions may be deterministically or randomly distributed, and the number of particles is pre-defined. In general, a small number reduces the computational load but at the expense of the extended iterations required to obtain the optimum (which is not known a priori). The velocities v^i_0 can also be set randomly or simply assigned as zeros. A problem-dependent fitness function is evaluated and a fitness value is assigned to each particle. From this set of fitness values, the one with the highest value is taken as the global-best gbest,0 (for a maximization problem). This set of initial fitness values is denoted as the particle-best p^i_{best,0}. The velocity is then calculated using the random gain coefficients, the particle positions are updated and the procedure repeats. Finally, upon the satisfaction of some termination criterion, the global-best particle is reported as the solution to the problem.
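Putting the procedure together with the pso_step sketch above gives a minimal loop of the following form; the fitness signature, the bounds format and the default sizes are placeholders for illustration rather than the paper's exact settings.

def pso(fitness, lower, upper, n_particles=50, n_iter=50, w=0.7, rng=None):
    # Minimal PSO loop for a maximization problem (an illustrative sketch).
    rng = rng or np.random.default_rng()
    dim = len(lower)
    positions = rng.uniform(lower, upper, size=(n_particles, dim))
    velocities = np.zeros_like(positions)                  # v_0 = 0
    p_best = positions.copy()
    p_best_fit = np.array([fitness(p) for p in positions])
    g_best = p_best[np.argmax(p_best_fit)].copy()          # g_best,0

    for _ in range(n_iter):
        positions, velocities = pso_step(positions, velocities,
                                         p_best, g_best, w=w, rng=rng)
        fit = np.array([fitness(p) for p in positions])
        improved = fit > p_best_fit                        # update personal bests
        p_best[improved] = positions[improved]
        p_best_fit[improved] = fit[improved]
        g_best = p_best[np.argmax(p_best_fit)].copy()      # update the global best
    return g_best

# example usage: pso(lambda p: -np.sum(p**2), lower=[-10, -10], upper=[10, 10])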

C. Analysis

An analysis of the behavior of the PSO algorithm can be conducted by adopting developments from control engineering principles [26]. Denote the position error of a particular particle (the particle index i is dropped) as

εk = gbest,k − xk, (3)

the velocity expression becomes

v_{k+1} = w v_k + c_1 ε_k + c_2 p_{best,k} − c_2 g_{best,k} + c_2 ε_k
        = w v_k + (c_1 + c_2) ε_k − c_2 (g_{best,k} − p_{best,k}).    (4)

The particle update becomes

x_{k+1} = x_k + w v_k + (c_1 + c_2) ε_k − c_2 (g_{best,k} − p_{best,k}).    (5)

Substituting the position error and assuming that the global optimum remains constant, i.e., g_{best,k+1} = g_{best,k}, we have

ε_{k+1} = ε_k − w v_k − (c_1 + c_2) ε_k + c_2 (g_{best,k} − p_{best,k}).    (6)

The particle position error and velocity can be written in state-space form as

\begin{bmatrix} ε_{k+1} \\ v_{k+1} \end{bmatrix} =
\begin{bmatrix} 1 − c_1 − c_2 & −w \\ c_1 + c_2 & w \end{bmatrix}
\begin{bmatrix} ε_k \\ v_k \end{bmatrix} +
\begin{bmatrix} c_2 \\ −c_2 \end{bmatrix} (g_{best,k} − p_{best,k}),    (7)

or

z_{k+1} = A z_k + B u_k,    (8)

where z_{k+1} = [ε_{k+1}, v_{k+1}]^T, u_k = g_{best,k} − p_{best,k}, and the matrices A, B are self-explanatory.

It becomes clear that the requirement for convergence implies ε_k → 0 and v_k → 0 as k → ∞. When the best solution is found, g_{best,k} becomes a constant and p_{best,k} will tend to g_{best,k} if the system is stable. The stability of a discrete control system can be ascertained by constraining the magnitudes of the eigenvalues λ_{1,2} of the system matrix A ∈ R^{2×2} to be less than unity, that is,

λ_{1,2} ∈ {λ | λ² − (1 + w − c_1 − c_2) λ + w = 0},   |λ_{1,2}| < 1.    (9)


By choosing the maximal values of the random variables c1 and c2 to be c1,max = 2 and c2,max = 2, and taking the expected values of the uniform distributions, the coefficients become c1 = 1 and c2 = 1. This case corresponds to a total feedback of the discrepancy of the particle positions from the desired solution at g_{best,k}. Writing out the eigenvalues, we have

λ_{1,2} = (1/2) ( w − 1 ± √(1 − 6w + w²) ).    (10)

After some manipulation, it can be shown that w < 1 with c1,max = c2,max = 2 will guarantee stability of the system; hence, the particles will converge to g_{best,k}.
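As a quick numerical check of this stability condition, the eigenvalues of the system matrix A in Eq. (7) can be evaluated directly; the sample values below (w = 0.7, c1 = c2 = 1) are illustrative choices, not settings taken from the experiments.

import numpy as np

def pso_eigenvalues(w, c1, c2):
    # Eigenvalues of the error/velocity system matrix A in Eq. (7).
    A = np.array([[1.0 - c1 - c2, -w],
                  [c1 + c2,        w]])
    return np.linalg.eigvals(A)

lam = pso_eigenvalues(w=0.7, c1=1.0, c2=1.0)
print(lam, np.abs(lam) < 1.0)   # both magnitudes below 1 -> stable, per Eq. (9)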

D. Motivation

The position of the global-best particle is maintained from the start of the algorithm; it is replaced only when a better candidate solution is found. However, the availability of a better solution depends on its position being discovered by a particle, and this is governed by the trajectory the particle follows. The control coefficients, namely w, c1 and c2, therefore play crucial roles in the behavior of the algorithm. Each particle, under the influence of the control coefficients, is steered towards or away from the optimal solution in the context of a stable or unstable control system.

It should be noted here that, since the PSO algorithm implicitly maintains the global-best solution during its iterations, the algorithm is required to explore the solution space; exploiting the vicinity of the current global solution becomes a secondary objective. Alternative strategies aimed at achieving this exploration goal are proposed in the following.

III. ALTERNATIVE STRATEGIES

Variations to the conventional PSO implementation are proposed with the following strategies. A combination of uniform and Gaussian distributed randomness is adopted together with a selection process that results in a test for exploring and exploiting the solution space.

A. Strategy 1: Uniform Distributed Gain Control

This is the conventional PSO implementation and is used as a reference for performance comparison. The control gains c1 and c2 are given by Eq. 2. It should be noted that the controls are renewed (re-sampled from the uniform distribution) during each iteration.

B. Strategy 2: Pre-assigned Uniform Distributed Gain Control

This strategy aims at testing the randomness over the iteration horizon. The uniform distribution is adopted as in Strategy 1; however, the random control coefficients generated at the start of the algorithm are kept unchanged during the iterations, i.e., pre-assigned. This approach determines whether a particular particle is initially made stable or unstable (for searching).

C. Strategy 3: Uniform Distributed Gain Control with Selection

The principle of division of labor is employed in this strategy. Promising particles with a higher than average fitness use the pre-assigned gain control coefficients (see Strategy 2) and are directed towards refining the solution. The below-average particles are re-assigned a uniform random coefficient generated in each iteration and are devoted to searching the remaining solution space. The selection is implemented as

if f^i > f̄:   c^i_{1,k} = c^i_{1,0},   c^i_{2,k} = c^i_{2,0},
otherwise:    c^i_{1,k} ∼ U[0, c_{1,max}],   c^i_{2,k} ∼ U[0, c_{2,max}],    (11)

where f^i is the fitness of the i-th particle and f̄ is the average fitness.
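A sketch of this selective re-assignment is given below; the way the pre-assigned initial gains are stored (c1_init, c2_init) and the array names are assumptions made for illustration.

import numpy as np

def selective_gains(fitness, c1_init, c2_init, c1_max=2.0, c2_max=2.0, rng=None):
    # Eq. (11): above-average particles keep their pre-assigned gains,
    # below-average particles draw fresh uniform random gains each iteration.
    rng = rng or np.random.default_rng()
    above = fitness > fitness.mean()
    c1 = np.where(above, c1_init, rng.uniform(0.0, c1_max, size=fitness.size))
    c2 = np.where(above, c2_init, rng.uniform(0.0, c2_max, size=fitness.size))
    return c1, c2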

D. Strategy 4: Gaussian Distributed Gain Control

This is similar to the conventional approach (Strategy 1) with the exception that the coefficients are sampled from a Gaussian distribution. That is,

c1 ∼ N(0.5, 1), c2 ∼ N(0.5, 1), (12)

where the Gaussian distributions are characterized by a mean of 0.5 and a variance of 1. By this setting of the Gaussian, approximately 30% of the coefficients are negative and the corresponding particles will be bounced away from the optimum and devoted to the search task. Furthermore, approximately another 30% of the particles are intentionally made unstable and deployed in exploring the solution space.
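The quoted proportions can be verified with a quick Monte Carlo draw from Eq. (12); the snippet below is a verification sketch only and is not part of the paper's experiments.

import numpy as np

rng = np.random.default_rng(0)
c = rng.normal(loc=0.5, scale=1.0, size=1_000_000)   # Eq. (12): mean 0.5, variance 1
print((c < 0).mean())   # fraction of negative coefficients, about 0.31
print((c > 1).mean())   # fraction in the symmetric upper tail, also about 0.31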

E. Strategy 5: Pre-assigned Gaussian Distributed Gain Control

An approach similar to Strategy 2 in assigning the gain coefficients is adopted here, with the use of a Gaussian distribution (see Strategy 4) instead of a uniform distribution.

F. Strategy 6: Gaussian Distributed Gain Control with Selection

This is a replica of Strategy 3 with the use of the Gaussian distribution.

G. General Variations

The velocity control coefficient (w in Eq. 1) is kept constant for all the strategies in one set of test runs; in another set of tests it is sampled from a uniform distribution at the start of the iterations. The size of the swarm is also varied: the numbers of particles used in the tests are 50, 100 and 200, respectively.

IV. EXPERIMENTS AND RESULTS

Numerical simulations are conducted to study the behavior of the PSO under various test functions and randomization strategies for the control coefficients. Several test scenarios are formulated and each case is run through a number of simulations in order to collect statistics of the results. Performances are then assessed on the basis of the expected values of the optimization error, and the probabilities of achieving specific performance thresholds are reported.


A. Test Functions

Let the optimization problem at hand be the maximization of functions. Solution surfaces in the form of 2-dimensional landscapes are constructed that range from a simple single peak to complicated multiple peaks and deeps. The values obtained from the landscapes are taken directly as the fitness functions in the PSO algorithm. They are described and illustrated in the following.

1) Single Peak: This test function, constructed from an exponential function (see Fig. 1), contains a single peak over the solution space. That is,

f = exp( −0.5 (x² + y²) / σ² ),   σ = 2,
x ∈ [−10, 10],  y ∈ [−10, 10],  f_max at x = 0, y = 0.    (13)

The support of the peak covers a relatively large portion of the overall landscape. A large support ensures a higher probability that an initial particle falls in the vicinity of the peak. The optimization of a single peak is considered a simple case for the PSO and serves as a benchmark for performance comparison.

Fig. 1. Landscape 1: single peak.

2) Single Peak with Multiple Ridges and Trenches: A single peak is derived from a sine function of the radius r from the origin, to which a small positive number ε is added. The function is

r = √(x² + y²) + ε,   f = sin(2r) / r,
x ∈ [−10, 10],  y ∈ [−10, 10],  f_max at x = 0, y = 0.    (14)

The incorporation of the sine function produces a multitude of ridges and trenches, as depicted in Fig. 2. The peak maintains a relatively higher magnitude compared to the ridges, but its support covers only a small portion of the solution space. The small support hinders the finding of the optimum and the particles may be trapped in the local ridges. The resulting degree of difficulty is thus increased compared to the previous test function.

Fig. 2. Landscape 2: single peak, multiple ridges and trenches.
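For reference, these first two landscapes are straightforward to transcribe into code; the sketch below follows Eqs. (13) and (14), with the value of ε assumed since it is not specified in the paper. The third landscape, introduced next, can be coded analogously from Eq. (15).

import numpy as np

def landscape1(x, y, sigma=2.0):
    # Eq. (13): a single Gaussian-shaped peak centred at the origin.
    return np.exp(-0.5 * (x**2 + y**2) / sigma**2)

def landscape2(x, y, eps=1e-6):
    # Eq. (14): a single peak surrounded by sinusoidal ridges and trenches;
    # eps is the small positive offset, value assumed here.
    r = np.sqrt(x**2 + y**2) + eps
    return np.sin(2.0 * r) / r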

3) Multiple Peaks with Multiple Deeps: This is a complicated landscape consisting of multiple peaks and deeps, constructed from different scales, polarities and shifts of the centers of exponential functions. That is,

f = 3 (1 − x̄²) exp(−(x̄² + (ȳ + 1)²))
    − (1/3) exp(1 − (x̄ + 1)² + ȳ²)
    − 10 (x̄/5 − x̄³ − ȳ³) exp(−(x̄² + ȳ²))
    + 8 exp( −0.5 (x̄² + ȳ²) / 0.33² ),   x̄ = 0.33x,  ȳ = 0.33y,
x ∈ [−10, 10],  y ∈ [−10, 10],  f_max at x = 0, y = 0.    (15)

The landscape is shown in Fig. 3. Wider supports with lower peaks are located around the highest but narrow peak. Deeps are also found around the highest peak. Due to the existence of the other peaks, particles may be attracted to local optima (the lower peaks). Furthermore, the narrow peak may also make it difficult for the initial particles to cover the optimal solution. Therefore, this is a difficult optimization problem.

Fig. 3. Landscape 3: multiple peaks, multiple deeps.


4) Remarks: For the comparison of performances in the sequel, the ranges of the solution spaces are made the same for all the test functions. The fitness values are also normalized such that the maximum fitness is unity. That is,

f^i_k ← f^i_k / max_i(f^i_k),    (16)

where i = 1, ..., N and N is the number of particles.
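In code, the per-iteration normalization of Eq. (16) is a one-liner; a minimal sketch, assuming the fitness values of the current swarm are held in a NumPy array:

import numpy as np

def normalize_fitness(fit):
    # Eq. (16): scale the swarm's fitness values so that the best equals one.
    return fit / np.max(fit)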

B. Performance Assessment

Since the PSO is an agent-based, stochastic algorithm, statistical tests are employed to assess the performance. They include the calculation of the expected value and the determination of a performance index in terms of cumulative probabilities against a confidence threshold.

1) Expected Values: Each test (Tr = 50 test runs in total) goes through a specified number of iterations (Ng = 50), and the terminating global-best particle positions x^j (superscript j = 1, ..., Tr is the test index) are stored. They are used to construct a probability distribution function (pdf), or histogram, containing H = 50 bins spanning the range of the particle positional coordinates, see Fig. 4.


Fig. 4. Typical error probability distribution.

The statistical expected value is calculated from

x̄ = E{x} = Σ_{k=1}^{H} h^{xy}_k x_k,    (17)

where h^{xy}_k is the probability of the particle falling inside the k-th bin, centered at x_k, in the corresponding histogram for the x- and y-coordinate of the landscape. A typical result of the evolution of the global-best particle is shown in Fig. 5, where the convergence to the solution is indicated.

2) Confidence Threshold: During each test run, the normalized fitness of the global-best particle against the overall fitness,

{}^{j}f_k = max_i ( f^i_k ),    (18)

evaluated at the k-th iteration of the j-th test run, is stored. These traces of fitness are averaged, that is,

f̄_k = (1/T_r) Σ_{j=1}^{T_r} {}^{j}f_k.    (19)


Fig. 5. Typical global-best particle position against iterations.


Fig. 6. Typical global-best fitness cumulative distribution.

Thresholds used in this work are the 95% and 99% confidence levels. The corresponding iteration indices satisfying these thresholds are determined such that

k_95 ← arg min_k { f̄_k > 0.95 },   k_99 ← arg min_k { f̄_k > 0.99 }.    (20)

A typical confidence result is sketched in Fig. 6. It shows that at the expiry of the 3rd iteration, the fitness of the best particle obtained so far approaches 95% of the final fitness (see the markers at the bottom-left of the plot). Moreover, at the 5th iteration, the fitness of the best particle reaches 99% of the final value.
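A sketch of how these indices could be extracted from the averaged trace f̄_k, assuming the trace has been scaled so that its highest value is one; a result of None corresponds to the NA entries reported later.

import numpy as np

def confidence_iteration(f_bar, threshold=0.95):
    # Eq. (20): smallest iteration index k with f_bar[k] > threshold;
    # returns None when the threshold is never reached (reported as NA).
    hits = np.flatnonzero(np.asarray(f_bar) > threshold)
    return int(hits[0]) if hits.size else None

# k95 = confidence_iteration(f_bar, 0.95);  k99 = confidence_iteration(f_bar, 0.99)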

C. Test Cases

Test cases are combinations of the following conditions: i) landscape, ii) randomness for velocity adjustment, iii) distribution of the control coefficients, iv) control strategy and v) number of particles. The combinations are listed in Table I.

The landscape and velocity adjustment are combined to form 6 conditions, while the distribution and control form another 6 conditions.


TABLE I

CONDITIONS FOR TEST CASES

Condition         Description
landscape         single peak / ridge & trench / multi-peak & deep
vel. adjustment   constant / pre-assigned uniform
distribution      uniform / Gaussian
control           adaptive / pre-assigned / selective
particles         50 / 100 / 200

For each setting of the number of particles, these conditions give a total of 6 × 6 = 36 test runs. The number of particles is varied between 50, 100 and 200. Implementation strategies are defined according to the presentation in Section III.

D. Results

Test results are consolidated into the error between the optimal and the reported solution (in x-/y-coordinates) for the different PSO strategies, see Table II. Moreover, the number of iterations needed to reach the vicinity of the highest fitness obtained is given in Table III (where NA indicates termination beyond the test iteration limit of 50). The best-performing strategies are marked with a (∗) in the tables.

1) Effectiveness: The effectiveness of the PSO algorithm is reflected by the closeness of the reported solution to the analytical optimum (which is known by the design of the test functions but is not used in the algorithm). As shown in Table II, the conventional implementation (Strategy 1) is not always the best-performing strategy. In the majority of cases the best performer is the selective strategy (Strategy 3), where the allocation of randomness depends on the fitness of the particles. This is expected from the principle of division of labor: fitter particles are used in refining the solution while the others are deployed in searching the unexplored solution space. This results in a more effective use of computational resources.

For the simple test function (Landscape 1), performances are relatively comparable over all strategies. This finding is reasonable because the focus of attraction (the single peak) is unique and there is no local optimum over the solution space. With regard to the test function of moderate complexity (Landscape 2), strategies employing uniformly distributed random control coefficients generally perform better than the cases with a Gaussian distribution. By the characteristics of the Gaussian distribution, only a small portion of the coefficients (those within the range [0, 1], accounting for approximately 40% of the particles) makes the algorithm stable, while the others are devoted to searching. The results from the complicated test function (Landscape 3) indicate that the strategy with selection and a Gaussian distribution performs better. This is because the landscape contains several local peaks in the vicinity of the global optimum, with the consequence that particles are trapped at these local peaks. Fig. 7 shows a typical case where the global-best particle is trapped in a local optimum (solutions in the y-coordinate are away from zero).

The effects attributed to the velocity control coefficient (constant vs. uniform random) are not noticeable. This is because this coefficient governs the time rate of convergence and is not directly involved in the search operation.


Fig. 7. Solution error due to local optima trap.

A larger number of particles naturally leads to a better coverage of the solution space in the initial iterations. Thus, the temporary global-best is available with a higher probability of being the actual optimum.

2) Efficiency: The PSO efficiency is derived from the number of iterations required to reach a specified fraction of the highest fitness obtained in the test runs. In this work, the 95% and 99% thresholds are used as the efficiency indicators. The selective and uniformly distributed control strategy (Strategy 3) is the most promising across the test settings. This result coincides with the observations from the effectiveness results.

For the simple test function (Landscape 1), all strategies perform similarly, with small numbers of iterations needed to reach the fitness threshold. This is because the test function is so simple that the benefits derived from a better strategy are not noticeable. The results from the test function of moderate difficulty (Landscape 2) indicate that strategies with the uniform distribution slightly outperform their counterparts using a Gaussian distribution. However, as the number of particles increases (e.g., 200 particles), the difference in efficiency is marginal. This can be explained by the fact that once the global-best particle is found, subsequent efforts will not further improve the solution; hence, a complete coverage of the solution space is always desirable. For the complicated test function (Landscape 3), strategies using the uniform distribution generally outperform the Gaussian ones, especially when the number of particles is also increased. A typical result from the conventional PSO implementation (Strategy 1) is shown in Fig. 8, where an extended approach towards the highest fitness is indicated (thick line).

As observed from the effectiveness results, the effect of the velocity control is also not noticeable in the context of efficiency. Moreover, a larger number of particles obviously aids in improving the efficiency.

E. Discussion

The overall performance of the proposed strategies, combining effectiveness and efficiency, is difficult to assess. This is due to the stochastic nature of the PSO algorithm: the random numbers generated do not accurately follow the specified distribution, in particular for a small swarm size.



Fig. 8. Degradation in fitness due to local optima trap.

Another ambiguity encountered is that a particular strategy is applicable to a specific class of functions; this is a consequence of the well-known no-free-lunch principle. For the test cases considered in this work, the strategy with selection and uniformly distributed random control coefficients performs better than the others, including the conventional PSO implementation, in the majority of cases.

V. CONCLUSION

This paper has presented an empirical study on the effect of randomness of the control coefficients in the PSO algorithm. The need for alternative randomization strategies was revealed by analyzing the algorithm from a control engineering perspective, which led to the proposal for solution space coverage. Alternative strategies for parameter randomness were tested on functions with different degrees of landscape complexity. The strategies included the use of uniform and Gaussian distributed control coefficients and the incorporation of selection schemes. Effectiveness and efficiency were assessed on the basis of the accuracy of the reported solution and the iterations required to obtain a statistically ascertained level of confidence. Experimental results showed that for simple test functions, all proposed strategies perform satisfactorily, but for complicated functions the selective and uniformly distributed random coefficients, by invoking the principle of division of labor, perform better. Further work is directed towards real-world applications, e.g., model parameter identification, system estimation and controller design.

REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proc. IEEE Intl. Conf. on Neural Networks, Perth, Australia, Nov. 1995, pp. 1942-1948.
[2] T. Ray and K. M. Liew, "Society and civilization: an optimization algorithm based on the simulation of social behavior," IEEE Trans. on Evolutionary Computation, vol. 7, no. 4, Aug. 2003, pp. 386-396.
[3] R. Salomon, "Evolutionary algorithms and gradient search: similarities and differences," IEEE Trans. on Evolutionary Computation, vol. 2, no. 2, Jul. 1998, pp. 45-55.
[4] S. L. Cheng and C. Hwang, "Optimal approximation of linear systems by a differential evolution algorithm," IEEE Trans. on Systems, Man, and Cybernetics - Pt. A: Systems and Humans, vol. 31, no. 6, Nov. 2001, pp. 698-707.
[5] N. Krasnogor and J. Smith, "A tutorial for competent memetic algorithms: model, taxonomy and design issues," IEEE Trans. on Evolutionary Computation, vol. 9, no. 3, Oct. 2005, pp. 474-488.
[6] G. Ciuprina, D. Ioan and I. Munteanu, "Use of intelligent-particle swarm optimization in electromagnetics," IEEE Trans. on Magnetics, vol. 38, no. 2, Mar. 2002, pp. 1037-1040.
[7] G. Venter and J. S. Sobieski, "Particle swarm optimization," Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics and Material Conf., Denver, Colorado, Apr. 2002, AIAA 2002-1235.
[8] Y. Zheng, L. Ma, L. Zhang and J. Qian, "Robust PID controller design using particle swarm optimizer," Proc. 2003 IEEE Intl. Symposium on Intelligent Control, Houston, Texas, Oct. 2003, pp. 974-979.
[9] S. Naka, T. Genji, T. Yura and Y. Fukuyama, "A hybrid particle swarm optimization for distributed state estimation," IEEE Trans. on Power Systems, vol. 18, no. 1, Feb. 2003, pp. 60-68.
[10] D. W. Boeringer and D. H. Werner, "Particle swarm optimization versus genetic algorithms for phased array synthesis," IEEE Trans. on Antennas and Propagation, vol. 52, no. 3, Mar. 2004, pp. 771-779.
[11] H. Zhang, X. Li, H. Li and F. Huang, "Particle swarm optimization-based schemes for resource-constrained project scheduling," Automation in Construction, vol. 14, 2005, pp. 393-404.
[12] S. L. Ho, S. Yang, G. Ni, E. W. C. Lo and H. C. Wong, "A particle swarm optimization-based method for multiobjective design optimizations," IEEE Trans. on Magnetics, vol. 41, no. 5, May 2005, pp. 1756-1759.
[13] E. Ozcan and C. K. Mohan, "Particle swarm optimization: surfing the waves," Proc. 1999 Congress on Evolutionary Computation, Washington, DC, Jul. 1999, pp. 1939-1944.
[14] M. Clerc and J. Kennedy, "The particle swarm - explosion, stability, and convergence in a multidimensional complex space," IEEE Trans. on Evolutionary Computation, vol. 6, no. 1, Feb. 2002, pp. 58-73.
[15] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, 2003, pp. 317-325.
[16] R. Mendes, J. Kennedy and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Trans. on Evolutionary Computation, vol. 8, no. 3, Jun. 2004, pp. 204-210.
[17] K. E. Parsopoulos, V. P. Plagianakos, G. D. Magoulas and M. N. Vrahatis, "Objective function stretching to alleviate convergence to local minima," Nonlinear Analysis, vol. 47, 2001, pp. 3419-3424.
[18] P. J. Angeline, "Using selection to improve particle swarm optimization," Proc. 1998 IEEE Intl. Conf. on Evolutionary Computation, Anchorage, Alaska, May 1998, pp. 84-89.
[19] M. Lovbjerg, T. K. Rasmussen and T. Krink, "Hybrid particle swarm optimiser with breeding and subpopulations," Proc. Genetic and Evolutionary Computation Conf. 2001, San Francisco, California, Jul. 2001, pp. 469-476.
[20] T. Huang and A. Sanagavarapu, "A hybrid boundary condition for robust particle swarm optimization," IEEE Antennas and Wireless Propagation Letters, vol. 4, 2005, pp. 112-117.
[21] H. Fan, "A modification to particle swarm optimization algorithm," Engineering Computations, vol. 19, no. 8, 2002, pp. 970-989.
[22] K. Yasuda, A. Ide and N. Iwasaki, "Adaptive particle swarm optimization," Proc. 2003 IEEE Intl. Conf. on Systems, Man and Cybernetics, Washington, DC, Oct. 2003, pp. 1554-1559.
[23] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," Proc. 1998 IEEE Intl. Conf. on Evolutionary Computation, Anchorage, Alaska, May 1998, pp. 69-73.
[24] J. Xu and Z. Xin, "An extended particle swarm optimizer," Proc. 19th IEEE Intl. Parallel and Distributed Processing Symposium, Apr. 2005, pp. 193-197.
[25] A. Rodriguez and J. A. Reggia, "Extending self-organizing particle systems to problem solving," Artificial Life, vol. 10, 2004, pp. 379-395.
[26] Z. Gajic and M. Lelic, Modern Control Systems Engineering, London: Prentice Hall, 1996.


TABLE II

TEST RESULTS FOR EFFECTIVENESS (EXPECTED ERRORS IN X-/Y-COORDINATES)

Constant velocity coefficient, 50 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1           +0.000/-0.000    -0.000/+0.000    -0.028/+4.792
2           +0.000/+0.000    +0.000/-0.000    -0.022/+4.792
3*          -0.000/+0.000    +0.000/-0.000    -0.023/+3.160
4           -0.015/+0.009    +0.010/+0.001    -0.037/+3.453
5           +0.000/+0.000    -0.000/-0.000    -0.019/+4.792
6           -0.000/-0.000    +0.002/+0.002    -0.043/+4.506

Random velocity coefficient, 50 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1*          -0.000/+0.000    -0.000/+0.000    -0.019/-0.013
2           -0.000/+0.000    -0.000/+0.000    -0.026/+2.679
3           -0.000/-0.000    -0.000/+0.000    -0.022/+0.760
4           +0.006/+0.000    -0.003/+0.001    -0.010/-0.028
5           +0.000/+0.000    -0.004/-0.000    -0.022/+1.826
6           +0.002/+0.003    +0.005/-0.012    +0.047/+1.911

Constant velocity coefficient, 100 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1           -0.000/-0.000    +0.000/-0.000    -0.019/+0.758
2           +0.000/-0.000    +0.000/-0.000    -0.023/+2.583
3*          +0.000/+0.000    +0.000/-0.000    -0.018/+0.085
4           -0.005/+0.001    +0.001/+0.002    -0.031/+2.200
5           +0.000/-0.000    +0.000/-0.000    -0.028/+4.792
6           -0.000/+0.000    +0.000/+0.000    -0.020/+2.298

Random velocity coefficient, 100 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1           +0.000/-0.000    +0.000/-0.000    -0.020/-0.011
2           +0.000/+0.000    -0.000/-0.000    -0.017/+4.120
3*          -0.000/+0.000    +0.000/+0.000    -0.019/-0.010
4           -0.000/-0.001    -0.001/+0.001    +0.005/+2.202
5           +0.000/+0.000    +0.000/+0.000    -0.015/+0.087
6           -0.000/+0.000    -0.003/-0.008    +0.003/+1.737

Constant velocity coefficient, 200 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1*          -0.000/+0.000    +0.000/-0.000    -0.018/-0.010
2*          -0.000/+0.000    +0.000/+0.000    -0.018/-0.010
3           -0.000/-0.000    +0.000/-0.000    -0.020/+4.600
4           -0.002/+0.001    -0.000/-0.000    -0.027/+0.047
5           -0.000/+0.000    +0.000/+0.000    -0.025/-0.010
6           +0.000/-0.000    +0.000/+0.000    -0.017/-0.018

Random velocity coefficient, 200 particles
Strategy    Landscape 1      Landscape 2      Landscape 3
1*          +0.000/-0.000    +0.000/-0.000    -0.018/-0.010
2*          -0.000/-0.000    +0.000/+0.000    -0.018/-0.010
3*          +0.000/-0.000    +0.000/+0.000    -0.018/-0.010
4           +0.000/-0.000    -0.000/+0.000    -0.008/-0.019
5*          +0.000/+0.000    -0.000/-0.000    -0.018/-0.010
6           +0.000/+0.000    +0.000/+0.000    -0.021/+0.001

TABLE III

TEST RESULTS FOR EFFICIENCY (ITERATIONS TO REACH 95% AND 99% OF HIGHEST FITNESS)

Constant velocity coefficient, 50 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1           3 / 5          6 / 10         NA / NA
2           2 / 5          5 / 8          NA / NA
3*          2 / 5          5 / 8          33 / NA
4           3 / 6          11 / 30        NA / NA
5           3 / 5          8 / 11         NA / NA
6           3 / 7          8 / 17         NA / NA

Random velocity coefficient, 50 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1           2 / 4          6 / 9          30 / NA
2           2 / 4          5 / 7          38 / NA
3*          2 / 4          5 / 9          26 / NA
4           3 / 6          9 / 16         NA / NA
5           3 / 6          7 / 13         NA / NA
6           3 / 6          9 / 21         NA / NA

Constant velocity coefficient, 100 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1           2 / 3          4 / 6          20 / NA
2           2 / 3          3 / 6          24 / NA
3*          2 / 3          3 / 6          17 / 50
4           2 / 4          5 / 11         34 / NA
5           2 / 4          4 / 8          NA / NA
6           2 / 4          5 / 10         NA / NA

Random velocity coefficient, 100 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1           2 / 3          4 / 6          16 / 48
2           2 / 3          4 / 5          13 / NA
3*          2 / 3          3 / 6          18 / 46
4           2 / 4          5 / 10         48 / NA
5           2 / 4          5 / 8          35 / NA
6           2 / 4          5 / 11         43 / NA

Constant velocity coefficient, 200 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1           2 / 2          3 / 5          10 / 19
2*          2 / 2          2 / 5          6 / 19
3           2 / 2          2 / 4          9 / 24
4           2 / 3          3 / 7          21 / NA
5           2 / 3          3 / 6          15 / NA
6           2 / 3          3 / 6          28 / NA

Random velocity coefficient, 200 particles
Strategy    Landscape 1    Landscape 2    Landscape 3
1*          2 / 2          3 / 4          7 / 23
2           2 / 2          2 / 4          9 / 23
3           2 / 2          2 / 4          9 / 24
4           2 / 3          3 / 6          23 / NA
5           2 / 3          3 / 5          12 / 40
6           2 / 3          3 / 5          14 / NA
