
Magnetic Particle Swarm Optimization

Paulo S. Prampero
Federal Institute of Science, Technology and Education of São Paulo – Campus Salto, Salto, Brazil
Department of Computer Engineering and Industrial Automation, FEEC – University of Campinas (Unicamp), Campinas, Brazil
[email protected]

Romis Attux
Department of Computer Engineering and Industrial Automation, FEEC – University of Campinas (Unicamp), Campinas, Brazil
[email protected]

Abstract— In this paper, we propose a new particle swarm approach based on the idea of repulsion by a magnetic field. The structure of the method is presented and, using a number of well-known benchmark functions in a 30-dimension search space, its performance is compared to that of well-established algorithms of similar inspiration. The global search potential of the proposal is also analyzed with the aid of a simpler simulation setup.

Index Terms – particle swarm optimization, magnetic particle swarm, global search methods.

I. INTRODUCTION

The problem of function optimization is, particularly when a significant global search potential is required, of the utmost relevance both in theoretical and in practical terms. Among the metaheuristics used to solve complex instances of this problem, the class of particle swarm optimization methods [1] has received a great deal of attention in recent years. In this work, we use elements of electromagnetic theory – more specifically, the effect of repulsion by a magnetic field – to derive a solution that, to the best of our knowledge, is novel: this solution shall be referred to as the magnetic particle swarm algorithm. In order to compare the performance of the new solution to that of methods of similar inspiration, we carried out tests using eight benchmark functions with different features, always assuming a 30-dimension search space. Finally, we illustrate the global search potential of the magnetic particle swarm algorithm using a simpler two-dimensional scenario. The results indicate that the proposal is effective and well suited to dealing with complex optimization tasks.

The work is structured as follows. In section II, we describe the general idea of particle swarm optimization and also discuss some well-established approaches thereby inspired. Section III presents metaheuristics inspired by electromagnetic theory, a most relevant source of inspiration for the proposed algorithm, which is itself analyzed in section IV. Section V brings the obtained results, and section VI closes the work by presenting our conclusions.

II. PARTICLE SWARM OPTIMIZATION

In [1], it is stated that a particle swarm algorithm should include the following behaviors:

• Assess – the particles (individuals) should have the ability to sense the environment and to estimate their own behavior;

• Compare – individuals should use each other as a comparative reference;

• Mimic – individuals should imitate one another, keeping in mind that imitation is central to human social organizations and important for the acquisition and maintenance of mental abilities.

These fundamental behaviors can be found in the approach known as Particle Swarm Optimization (PSO). PSO was developed by Kennedy and Eberhart in 1995 [2], being, in simple terms, an optimization method based on the modeling of collective behavior. The method employs a population of particles – each one corresponding to a potential solution to a given problem – which are capable of moving in the search space and of “recalling” favorable positions visited by themselves and by their neighbors. An original version of the PSO can be implemented along the following lines, if all particles are assumed to be neighbors (i.e. if a global neighborhood pattern is adopted):

1) Initialize a population of particles;

2) Evaluate the desired optimization function at the position given by each particle;

3) Compare the evaluation of each particle with its last best value (the best value obtained so far), called pbest. If the current value is better than pbest, adopt it as the new pbest;


4) Compare the current evaluation with the best value visited by the entire population, called gbest. If the current value is better than gbest, adopt it as the new gbest;

5) Update the velocity (v) and the position (x) of the particle according to equations (1) and (2), respectively:

v ← v + c1 ⊗ [pbest – x] + c2 ⊗ [gbest – x]   (1)

x ← x + v   (2)

In (1), c1 and c2 are random vectors whose elements obey, in general, a uniform distribution, and ⊗ indicates the element-wise product.

6) Return to step 2) until a stopping criterion is met.
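For concreteness, the loop above can be sketched in Python as follows (a toy implementation under common conventions; the function name, the parameter defaults and the U(0, 2) ranges for c1 and c2 are our assumptions, not specified in the paper):

import numpy as np

def global_best_pso(cost, lower, upper, m=30, iters=1000, seed=0):
    # Toy global-neighborhood PSO following steps 1-6 and eqs. (1)-(2).
    rng = np.random.default_rng(seed)
    n = lower.size
    x = rng.uniform(lower, upper, size=(m, n))       # 1) initialize positions
    v = np.zeros((m, n))
    pcost = np.array([cost(p) for p in x])           # 2) evaluate
    pbest = x.copy()
    g = np.argmin(pcost)                             # index of gbest
    for _ in range(iters):
        c1 = rng.uniform(0.0, 2.0, size=(m, n))      # random vectors of eq. (1), assumed U(0, 2)
        c2 = rng.uniform(0.0, 2.0, size=(m, n))
        v = v + c1 * (pbest - x) + c2 * (pbest[g] - x)   # eq. (1)
        x = x + v                                        # eq. (2)
        fx = np.array([cost(p) for p in x])              # 2) evaluate
        improved = fx < pcost                            # 3) update pbest
        pbest[improved], pcost[improved] = x[improved], fx[improved]
        g = np.argmin(pcost)                             # 4) update gbest
    return pbest[g], pcost[g]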

A. PSO: Some Aspects

The methodology of PSO belongs to the broader field of swarm intelligence [3], which can be considered, from the standpoint of bio-inspired optimization, relatively young and certainly very promising in terms of research perspectives. The stochastic character of the approach makes its analysis quite complex in comparison with, for instance, deterministic nonlinear optimization methods. Notwithstanding, as discussed in [4], it is possible to indicate concrete advantages and disadvantages associated with PSO. Some advantages are its relative simplicity (in terms of structure and parameter setting), its efficiency in terms of exploration – as “the most optimistic particle” can transmit information to the other particles – and the straightforward use of real coding to solve real-valued problems. One disadvantage is that global convergence is not guaranteed in practice.

The natural development of the research field of swarm intelligence led to the proposal of alternatives to the standard PSO, some of which will be discussed in the sequel.

B. Variations of Standard PSO

A number of relevant proposals have been made and incorporated into the theoretical corpus that characterizes research on swarm intelligence, of which we shall give some relevant examples. In 1998, Shi and Eberhart [5] proposed the use of an inertia weight ω as a factor to establish a direct dependence between the present and the updated velocities:

v ← ω·v + c1 ⊗ [pbest – x] + c2 ⊗ [gbest – x]   (3)

being c1 and c2, again, random vectors whose elements obey, in general, a uniform distribution. An interesting feature is that larger values of ω tend to enhance global search / exploration, whereas smaller values tend to emphasize local search / exploitation [4].

In 1999, Clerc [7] introduced the PSO with a convergence (constriction) factor. The convergence factor allows the particles to oscillate around the randomly defined regions related to pbest and gbest, which may lead to more efficient exploitation and convergence. The factor and the resulting algorithm are:

$$CF = \frac{2}{\left|2 - \varphi - \sqrt{\varphi^2 - 4\varphi}\right|}, \quad \text{where } \varphi = c_1 + c_2 > 4 \qquad (4)$$

v ← CF·(v + c1 ⊗ [pbest – x] + c2 ⊗ [gbest – x])   (5)

with c1 and c2 defined as in (3). Experimental results show that, compared with the particle swarm optimization algorithm with inertia weights, the convergence of the algorithm defined in (5) is significantly improved.
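For illustration, both variants can be written as a single velocity-update helper in Python (a sketch; the names and the default acceleration bounds phi1 = phi2 = 2.05 are our assumptions):

import numpy as np

def velocity_update(v, x, pbest, gbest, rng, omega=None, phi1=2.05, phi2=2.05):
    # c1 and c2 are random vectors, uniform per element, as in eqs. (1), (3) and (5).
    c1 = rng.uniform(0.0, phi1, size=x.shape)
    c2 = rng.uniform(0.0, phi2, size=x.shape)
    attraction = c1 * (pbest - x) + c2 * (gbest - x)
    if omega is not None:
        return omega * v + attraction                 # inertia weight, eq. (3)
    phi = phi1 + phi2                                 # must satisfy phi > 4, eq. (4)
    cf = 2.0 / abs(2.0 - phi - np.sqrt(phi * phi - 4.0 * phi))
    return cf * (v + attraction)                      # constriction factor, eq. (5)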

Among other variations of the PSO and related proposals, we may cite: the idea of Lu and Hou of introducing auto-adaptation [8]; the operator of differential evolution of Zhang and Xie [9]; the use of Kalman filtering in [10]; the nonlinear dynamical elements exposed in [11] and the use of the concept of natural selection [12]. Finally, it is important to remark that a more systematic view can be found, for instance, in the taxonomy of PSO elaborated by Sedighizadeh and Masehian [13].

III. META-HEURISTICS INSPIRED BY ELECTROMAGNETIC THEORY

This class of algorithms is inspired by the attraction–repulsion mechanisms found in electromagnetic theory [14]. Each sample point (solution) is viewed as a charged particle placed in a certain space, its charge being related to the value of the cost function to be optimized. It is important to keep in mind that this charge determines the intensity of the force acting on the particles.

The algorithm analyzes the particles to calculate a direction of movement for each point, in accordance with the resulting force exerted by other points. In analogy with electromagnetic forces, this resulting force is determined via vector summation of all pertinent forces.

Firstly, the algorithm distributes m points randomly in the feasible domain and calculates the value of the objective function for each point, the best solution being stored [14]. Secondly, a search step, which is defined in accordance with the aforementioned electromagnetic inspiration, is applied to each coordinate of each point. The new solutions are kept in the population only if there is improvement, and the best solution is duly updated.

Let us now consider a more detailed mathematical analysis. The charge of each point i, qi, determines its capacity of attraction or repulsion. This charge is calculated in accordance with the expression:

$$q_i = \exp\left(-n\,\frac{f(x_i) - f(x_{best})}{\sum_{k=1}^{m}\left(f(x_k) - f(x_{best})\right)}\right), \quad \forall i \qquad (6)$$

Thus, points that have better objective values possess higher charges. Notice that, unlike electrical charges, no sign is associated with the charge of an individual point, being the direction of a particular force defined as follows:

$$F_i = \sum_{j \neq i}^{m} \begin{cases} \left(x_j - x_i\right)\,\dfrac{q_i\,q_j}{\left\|x_j - x_i\right\|^2}, & \text{if } f(x_j) < f(x_i) \\[6pt] \left(x_i - x_j\right)\,\dfrac{q_i\,q_j}{\left\|x_j - x_i\right\|^2}, & \text{if } f(x_j) \geq f(x_i) \end{cases}, \quad \forall i \qquad (7)$$

A point with a better (smaller) cost value therefore attracts points with worse values. Consequently, the best solution xbest attracts all the other individuals.

After evaluating the total force vector Fi, the i-th point is moved in the direction determined by the force in accordance with a random step size λ – here, assumed to be uniformly distributed between 0 and 1. Each dimension of the space is limited by the upper bound u_k and the lower bound l_k:

$$x_i^k = \begin{cases} x_i^k + \lambda\,\dfrac{F_i^k}{\left\|F_i\right\|}\left(u_k - x_i^k\right), & \text{if } F_i^k > 0 \\[6pt] x_i^k + \lambda\,\dfrac{F_i^k}{\left\|F_i\right\|}\left(x_i^k - l_k\right), & \text{otherwise} \end{cases}, \quad k = 1, 2, \ldots, n \qquad (8)$$

It is interesting to remark that there are points of similarity between the PSO and electromagnetic approaches, as both of them deal with a collection of solutions subject to mutual interaction. More details concerning the latter can be found in [14][15].

IV. PROPOSED ALGORITHM

The proposed approach, called magnetic particle swarm algorithm, corresponds, in essence, to a method that uses elements of the behavior of dipoles¹ subject to a magnetic field to establish search mechanisms in harmony with key favorable aspects of general-purpose optimizers. We will analyze it in detail, but, before we proceed, it is important to point out that the idea of bringing together these aspects is not in the “naïve” spirit of building a “method of methods” – which would be against the essence of the so-called no free lunch theorems [18] – but of finding an efficient tool to deal with complex multimodal tasks.

¹ We will use the term “particle” while referring to a magnetic dipole, but it is important to stress that we do not wish to establish any analogy with the complex notion of “magnetic charge”.

A. Motivation

If we consider efficient methods for function optimization, it is possible to distinguish a number of relevant features that appear, to some extent, in their structure:

• These algorithms are iterative and possess memory, e.g. [1][16][17];

• They are based on a population of solutions and have some mechanism for controlling it, e.g. [16];

• They employ some definition of a radius of influence of individuals, e.g. [16][17];

• They establish a balance between exploration and exploitation, e.g. [17].

These features, which can be associated with methods like PSO, genetic algorithms, artificial immune systems and ant colony approaches, will form the basis of the proposed algorithm, which is described in the following section.

B. The Proposed Algorithm

The proposed algorithm possesses the interesting compromise of being faithful to the spirit of the aforementioned desirable features while only requiring the a priori setting of two parameters: the number of employed particles and the maximum number of cost evaluations.

The proposed initialization distributes the particles uniformly in the search space. Afterwards, the maximum and minimum radii of each particle's repulsion region are calculated based on the number of particles and the size of the search space. The minimum radius is set throughout this work as one thousandth of the maximum radius. This choice is, naturally, somewhat arbitrary, but experiments revealed that the performance of the algorithm is not particularly sensitive to it.

The repulsion region is important to prevent the particles from becoming excessively close to one another. Nothing happens when there is an intersection only between repulsion regions, but, when the algorithm detects that a particle has invaded another's repulsion region, the worse particle (in terms of the cost function) is removed from it. The size of the repulsion region of each particle is calculated during the algorithm execution. Its size is bounded by the maximum and minimum radii and depends on the values of the cost function: when a particle improves its cost, its repulsion region increases, and it decreases otherwise.
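In implementation terms, this adaptation rule can be captured in a few lines (a sketch with our own names; the 10% growth and 1% decay factors come from the pseudocode in the next subsection):

def update_radius(radius, improved, ubrr, lbrr):
    # Grow the repulsion region by 10% on improvement, shrink it by 1% otherwise,
    # always keeping it between the precomputed bounds of eqs. (9)-(10).
    if improved:
        return min(radius * 1.10, ubrr)
    return max(radius * 0.99, lbrr)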

In the beginning, the particle moves randomly in the search space until it finds a better point: notice that, on the other hand, a move to a worse point is not accepted. When a better solution is found, the particle stores it and the direction along which it has moved: in the next step, the algorithm generates the new point along this line. If the new point thus generated is worse than the current one, two perpendicular points are tried. The best perpendicular point can be adopted if it is better than the current point, and, in this case, the perpendicular line will also be adopted. If no point along the perpendicular line is better than the current one, the particle loses its direction and the new point is randomly generated within the repulsion region. In this case, the repulsion region decreases until the algorithm finds a point better than the current one.

If the particle is unable to find better points, the radius of its region of repulsion is decreased until the minimum value is reached, which accounts for a refinement of the obtained good solutions. As mentioned earlier, if a particle enters another's region, the worse particle will be moved away along the direction of the better particle. The size of this position change should be enough to place the particle outside the repulsion region. However, if the better particle does not have a direction to follow, the worse particle uses its own direction to leave the region, with a step of sufficient size. Finally, if neither particle has an associated direction, the worse particle will be sent towards the best particle. In Alg. 1, we find the main structure of the method.

Algorithm 1: Magnetic Particle Swarm Algorithm

Initialize()
while termination criteria are not satisfied do
    Move particles()
    Verify confronts()
end while
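In Python, the state each particle carries and the main loop of Alg. 1 might be sketched as follows; the Particle fields and the helper signatures are our reading of the text, not the authors' code:

from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class Particle:
    position: np.ndarray              # current point in the search space
    cost: float                       # cost value at the current position
    direction: Optional[np.ndarray]   # last improving direction, or None
    radius: float                     # current repulsion-region radius

def magnetic_particle_swarm(particles, move_fn, confront_fn, max_evals):
    # Sketch of Algorithm 1; move_fn implements Move particles() and returns the
    # number of cost evaluations it used, confront_fn implements Verify confronts().
    evals = 0
    while evals < max_evals:
        evals += move_fn(particles)
        confront_fn(particles)
    return min(particles, key=lambda p: p.cost)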

Let us now consider in more detail each function.

1) Initialize()

This function uniformly distributes the particles in the space of feasible solutions and calculates the upper and lower bounds of the repulsion regions. The upper bound of the repulsion region (ubrr) is:

$$ubrr_k = \frac{u_k - l_k}{\sqrt[n]{m}} \qquad (9)$$

where u_k is the upper bound of the search space in dimension k, l_k is the lower bound of the search space in dimension k, n is the number of dimensions, k = 1, ..., n, and m is the number of particles. The lower bound of the repulsion region (lbrr) is:

$$lbrr_k = 0.001\, ubrr_k \qquad (10)$$
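A sketch of this initialization in Python, under our reading of equations (9)-(10); uniform random placement is used here as one concrete form of "uniformly distributes" (a regular grid would also fit the description):

import numpy as np

def initialize(cost_fn, lower, upper, m, rng):
    # lower, upper: (n,) bound vectors of the search space; m: number of particles.
    n = lower.size
    ubrr = (upper - lower) / (m ** (1.0 / n))            # eq. (9), per dimension
    lbrr = 0.001 * ubrr                                   # eq. (10)
    positions = rng.uniform(lower, upper, size=(m, n))
    costs = np.array([cost_fn(p) for p in positions])
    return positions, costs, ubrr, lbrr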

2) Move particles()

for each particle do
    if (particle has a direction)
        generate one point in this direction
        if (the new point is better)
            store this point
            increase by 10% the repulsion region, limited by ubrr
        else
            generate two perpendicular points
            if (the best of them is better than the current point)
                store the best of them and its direction
            else
                lose direction
            end if
        end if
    else
        generate three points using eqs. (11), (12), (13)
        if (the best generated point is better than the current point)
            store the best generated point and its direction
        else if (never had a direction)
            store the best generated point
        else
            decrease by 1% the repulsion region, limited by lbrr
        end if
    end if
end for

At the beginning, the particle “does not know” where it needs to go: it does not have a direction. Therefore, it walks randomly in the search space. When it finds a point better than the current one, it stores this point and defines a good direction. In the next step, the first point to be tested is one along this line (direction). If the new point is better than the current one, the particle continues walking along this line and the repulsion region is increased. When the particle has a direction, all points are generated inside the particle's repulsion region.

When the line / direction is not good, two points on a perpendicular line will be tested. If the best perpendicular point is better than the current one, the new direction is stored. In problems with more than two dimensions, the perpendicular is generated on a randomly selected plane.

When the particle does not know how to move, it walks randomly according to:

$$Paux1_i^k = P_i^k + \lambda_i^k \, RepRegion_i \qquad (11)$$

$$Paux2_i = P_i + \lambda \, P_i \qquad (12)$$

$$Paux3_i = P_i + \lambda \, RepRegion_i \qquad (13)$$

where k = 1, ..., n and i = 1, ..., m, with n and m as in equation (9). RepRegion_i is the repulsion region of particle i, λ is a random number belonging to [−1, 1], and λ_i^k is drawn for each dimension k.

The difference between equations (11) and (13) is that, in (11), a random number is drawn for each dimension, whereas in (13) the same random number is used for all dimensions.
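In Python, the three candidates of (11)-(13) could be generated as below (a sketch with our own names; note that (12) perturbs the point proportionally to its own coordinates):

import numpy as np

def random_candidates(p, rep_region, rng):
    # p: (n,) current position; rep_region: repulsion-region radius of the particle.
    lam_k = rng.uniform(-1.0, 1.0, size=p.size)   # one random number per dimension
    lam = rng.uniform(-1.0, 1.0)                  # a single random number for all dimensions
    paux1 = p + lam_k * rep_region                # eq. (11)
    paux2 = p + lam * p                           # eq. (12)
    paux3 = p + lam * rep_region                  # eq. (13)
    return paux1, paux2, paux3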

If the best of Paux1, Paux2 and Paux3 is better than the current point, the algorithm accepts it and discovers a new line along which to move; otherwise, the repulsion region is decreased.

3) Verify confronts()

for each particle pair do
    if (distance between the pair is less than the largest repulsion region)
        if (the better particle has a direction)
            move the worse particle in this direction
        else if (the worse particle has a direction)
            move it in this direction
        else
            move the worse particle in the direction of the best particle
        end if
    end if
end for

This function verifies whether a particle has invaded the repulsion region of another. If this is the case, the worse particle is removed from this region. If the better of the two has a direction to move, the worse particle is forced in this direction. If not, but the worse particle has a defined direction, then it follows it, with a step large enough to place it outside the repulsion region. Finally, if neither has a defined direction, the worse particle follows the direction of the best particle of all.
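Reusing the Particle structure sketched in section IV-B, this procedure might read as follows (our interpretation; the step size is chosen so as to just clear the larger repulsion region):

import numpy as np

def verify_confronts(particles, global_best):
    # Resolve pairwise invasions of repulsion regions, as described above.
    for i in range(len(particles)):
        for j in range(i + 1, len(particles)):
            a, b = particles[i], particles[j]
            dist = np.linalg.norm(a.position - b.position)
            reach = max(a.radius, b.radius)
            if dist >= reach:
                continue
            better, worse = (a, b) if a.cost <= b.cost else (b, a)
            if better.direction is not None:
                d = better.direction
            elif worse.direction is not None:
                d = worse.direction
            else:   # neither has a direction: follow the best particle of all
                d = global_best.position - worse.position
            d = d / (np.linalg.norm(d) + 1e-12)
            worse.position = worse.position + d * (reach - dist + 1e-9)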

C. Characteristics of Proposed Algorithm

In the proposed algorithm, the particles detect their evolution and the algorithm generates new points towards promising regions according to the obtained directions. The step of comparison is present when a particle enters the repulsion region of another one, the worse particle being excluded from the region dominated by the other. Imitation happens when a conflict occurs between two particles: the worse particle copies the direction of the particle with the better cost value; in this case, the “weaker” particle mimics the other particle.

The algorithm employs the concept of a radius of action, treated, in the spirit of electromagnetic theory, as a repulsion region. This radius is larger at the beginning and smaller at the end. Search begins with “steps” larger than the final ones, which means that the algorithm favors exploration in the beginning and exploitation in the end.

V. EXPERIMENTS AND RESULTS

In this section, the proposed algorithm will be tested in the context of different optimization problems and compared with representative methods of similar inspiration.

A. Algorithms

In order to establish a fair comparison with methods of similar inspiration, we chose to use as benchmarks the PSO, the PSO with inertia weights, the PSO with convergence factor – all of which were discussed in section II – and the EM, which was described in section III. With respect to the EM, we also included a local search mechanism that tends to improve the performance of the algorithm in terms of precision without a significant increase in the number of function evaluations.

B. Cost Functions

All algorithms were tested in the context of eight optimization problems defined by cost functions with different characteristics, all of which are well known in the literature. In all cases, the search space possesses 30 dimensions, which establishes, in our opinion, a sufficiently challenging scenario for our purposes.

1) Sphere Function

$$f(x) = \sum_{i=1}^{n} x_i^2 \qquad (14)$$

where x ∈ [−100, 100]^30. The global minimum is located at x* = [0, 0, ..., 0].

2) Rosenbrock Function

$$f(x) = \sum_{i=1}^{n-1} \left[ 100\left(x_{i+1} - x_i^2\right)^2 + \left(x_i - 1\right)^2 \right] \qquad (15)$$

where x ∈ [−30, 30]^30 and the global optimum is x* = [1, 1, ..., 1].

3) Griewank Function

$$f(x) = 1 + 0.00025 \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) \qquad (16)$$

where x ∈ [−600, 600]^30. The global optimum is x* = [0, 0, ..., 0].

4) Rastrigin Function

$$f(x) = 10n + \sum_{i=1}^{n} \left[ x_i^2 - 10\cos\left(2\pi x_i\right) \right] \qquad (17)$$

where x ∈ [−5.12, 5.12]^30 and the global optimum is x* = [0, 0, ..., 0].

5) Salomon Function

$$f(x) = 1 - \cos\left(2\pi\sqrt{\sum_{i=1}^{n} x_i^2}\right) + 0.1\sqrt{\sum_{i=1}^{n} x_i^2} \qquad (18)$$

where x ∈ [−100, 100]^30 and the optimum is x* = [0, 0, ..., 0].

6) Schwefel Function

$$f(x) = -\sum_{i=1}^{n} x_i \sin\left(\sqrt{|x_i|}\right) \qquad (19)$$

where x ∈ [−500, 500]^30 and the global optimum is x* = [420.97, ..., 420.97].

7) Levy Function

$$f(x) = \sin^2(\pi y_1) + \sum_{i=1}^{n-1} \left(y_i - 1\right)^2 \left[1 + 10\sin^2\left(\pi y_{i+1}\right)\right] + \left(y_n - 1\right)^2 \left[1 + \sin^2\left(2\pi y_n\right)\right] \qquad (20)$$

where y_i = 1 + (x_i − 1)/4, x ∈ [−10, 10]^30 and the global optimum is x* = [1, 1, ..., 1].

8) F1 Function

$$f(x) = \sum_{i=1}^{n} |x_i - i| \qquad (21)$$

where x ∈ [−100, 100]^30 and the global optimum is x* = [1, 2, ..., n].
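For reference, equations (14)-(21) translate directly to Python (a straightforward transcription; the dimension n is implicit in the length of x, and the /4 scaling in the Levy transform follows the standard definition of that benchmark):

import numpy as np

def sphere(x):                                                     # eq. (14)
    return np.sum(x ** 2)

def rosenbrock(x):                                                 # eq. (15)
    return np.sum(100 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1) ** 2)

def griewank(x):                                                   # eq. (16)
    i = np.arange(1, x.size + 1)
    return 1 + 0.00025 * np.sum(x ** 2) - np.prod(np.cos(x / np.sqrt(i)))

def rastrigin(x):                                                  # eq. (17)
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def salomon(x):                                                    # eq. (18)
    r = np.sqrt(np.sum(x ** 2))
    return 1 - np.cos(2 * np.pi * r) + 0.1 * r

def schwefel(x):                                                   # eq. (19)
    return -np.sum(x * np.sin(np.sqrt(np.abs(x))))

def levy(x):                                                       # eq. (20)
    y = 1 + (x - 1) / 4
    return (np.sin(np.pi * y[0]) ** 2
            + np.sum((y[:-1] - 1) ** 2 * (1 + 10 * np.sin(np.pi * y[1:]) ** 2))
            + (y[-1] - 1) ** 2 * (1 + np.sin(2 * np.pi * y[-1]) ** 2))

def f1(x):                                                         # eq. (21)
    return np.sum(np.abs(x - np.arange(1, x.size + 1)))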

C. Results

In the experiments, all algorithms have their performances compared with that of the proposed magnetic particle swarm algorithm. To allow a common basis for comparison, all algorithms used 30 particles and were allowed to evaluate each cost function 1,000,000 times. We tried to use similar settings for analogous methods and employed several preliminary trials to define all parameters. All algorithms were run 10 times for each case, and the mean and standard deviation of the attained cost were calculated. The results are summarized in Tab. I, the best performance for each problem being marked with an asterisk (*).

TABLE I. MEAN AND STANDARD DEVIATION VALUES OF THE BEST COST VALUE (BEST PERFORMANCE PER PROBLEM MARKED WITH *).

Function   | Global    | Global PSO        | PSO, inertia weights | PSO, convergence factor | EM with local search | EM                | Magnetic Particle Swarm
Sphere     | 0.00      | 0.01 ± 0.01       | 0.00 ± 0.00          | 0.01 ± 0.02             | 0.21 ± 0.02          | 0.21 ± 0.06       | 0.00 ± 0.00 *
Rosenbrock | 0.00      | 29.14 ± 0.22      | 28.83 ± 0.09         | 29.00 ± 0.07            | 44.58 ± 30.95        | 111.14 ± 213.94   | 0.00 ± 0.00 *
Griewank   | 0.00      | 0.02 ± 0.04       | 0.00 ± 0.00          | 0.03 ± 0.06             | 0.37 ± 0.05          | 0.58 ± 0.08       | 0.00 ± 0.00 *
Rastrigin  | 0.00      | 0.02 ± 0.04       | 16.69 ± 26.21        | 0.02 ± 0.04             | 35.03 ± 13.84        | 39.79 ± 14.78     | 0.00 ± 0.00 *
Salomon    | 0.00      | 0.10 ± 0.00       | 0.08 ± 0.02          | 0.09 ± 0.00             | 1.09 ± 0.08          | 1.39 ± 0.12       | 0.00 ± 0.00 *
Schwefel   | -12569.49 | -9729.18 ± 621.07 | -9612.99 ± 519.12    | -9698.40 ± 432.77       | -8970.60 ± 1033.75   | -8071.08 ± 922.09 | -12569.47 ± 0.05 *
Levy       | 0.00      | 0.70 ± 2.13       | 0.49 ± 1.56          | 0.02 ± 0.05             | 0.23 ± 0.20          | 0.18 ± 0.12       | 0.00 ± 0.00 *
F1         | 0.00      | 72.85 ± 11.39     | 51.91 ± 36.12        | 69.39 ± 9.45            | 5.92 ± 2.85 *        | 22.45 ± 6.80      | 7.08 ± 2.59

Note that the proposed Magnetic Particle Swarm Algorithm reaches the best performance for seven out of eight problems, being, in the case of the F1 function, second only to the EM with local search. In order to verify whether there was room for performance improvement, we decided to increase the number of function evaluations to 3,000,000 for the F1 function, with the results shown in Tab. II. In general, all methods showed, as expected, a better performance, with a significant progress on the part of the new method.

TABLE II. MEAN AND STANDARD DEVIATION VALUES OF THE BEST COST VALUE – 3,000,000 FUNCTION EVALUATIONS.

Function | Global | Global PSO    | PSO, inertia weights | PSO, convergence factor | EM with local search | EM           | Magnetic Particle Swarm
F1       | 0.00   | 54.92 ± 10.66 | 33.18 ± 12.71        | 57.67 ± 12.36           | 5.37 ± 3.62          | 18.92 ± 8.98 | 1.48 ± 0.28 *

Finally, to illustrate the performance of the proposed approach in a more intuitive way, we decided to briefly study a simpler two-dimensional case. The chosen multimodal function f(x, y) is:

$$f(x, y) = x \sin(4\pi x) - y \sin(4\pi y + \pi) + 1 \qquad (22)$$

where x, y ∈ [−1, 1].

After 20,000 cost function evaluations, using only four particles, the algorithm found exactly the four global maxima, as shown in Fig. 1. This indicates the multimodal optimization potential of the proposed approach, which had already been revealed by the much harder set of problems previously dealt with.
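For completeness, equation (22) in Python, with a coarse grid check of its maxima; note that −sin(t + π) = sin(t), so f is even in both x and y, which is why the global maxima come in four symmetric copies:

import numpy as np

def f22(x, y):
    # Two-dimensional multimodal function of eq. (22)
    return x * np.sin(4 * np.pi * x) - y * np.sin(4 * np.pi * y + np.pi) + 1

# Coarse grid search over [-1, 1]^2; the four global maxima share one value by symmetry.
g = np.linspace(-1.0, 1.0, 2001)
X, Y = np.meshgrid(g, g)
Z = f22(X, Y)
print(float(Z.max()))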

Figure 1: Result of the application of the new algorithm in a two-dimensional multimodal function optimization problem.

VI. CONCLUSIONS

In this work, a new optimization algorithm, the magnetic particle swarm algorithm, was proposed. The algorithm, which has points of contact with particle swarm and electromagnetism-inspired methods, has mechanisms that are potentially capable of engendering a number of features well known to be relevant from the standpoint of general-purpose global search techniques. The proposal was tested for a number of well-known cost functions in the context of a 30-dimension search space, and its performance was very good in comparison with that of algorithms of similar inspiration. Perspectives for future work include the study of the application of the magnetic particle swarm algorithm in large-scale optimization problems and comparisons with other metaheuristics.

REFERENCES

[1] J. Kennedy, R. C. Eberhart, Y. Shi, "Swarm Intelligence," San Francisco: Morgan Kaufmann / Academic Press, 2001.

[2] J. Kennedy, R. Eberhart, "Particle Swarm Optimization," Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, 1995.

[3] R. Eberhart, Y. Shi, "Particle Swarm Optimization: Developments, Applications and Resources," IEEE, 2001.

[4] Q. Bai, "Analysis of Particle Swarm Optimization Algorithm," Computer and Information Science, vol. 3, no. 1, pp. 180-184, 2010.

[5] Y. Shi, R. C. Eberhart, "A modified particle swarm optimizer," IEEE World Congress on Computational Intelligence, pp. 69-73, 1998.

[6] R. C. Eberhart, Y. Shi, "Comparing inertia weights and constriction factors in Particle Swarm Optimization," Proceedings of the Congress on Evolutionary Computation, pp. 84-88, 2000.

[7] M. Clerc, "The swarm and the queen: towards a deterministic and adaptive particle swarm optimization," Proceedings of the Congress on Evolutionary Computation, Piscataway, NJ: IEEE Service Center, pp. 1951-1957, 1999.

[8] Z. S. Lu, Z. R. Hou, "Particle Swarm Optimization with Adaptive Mutation," Acta Electronica Sinica, vol. 32, no. 3, pp. 416-420, 2004.

[9] W. Zhang, X. Xie, "DPSO: Hybrid particle swarm with differential evolution operator," Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3816-3821, 2003.

[10] C. K. Monson, K. D. Seppi, "The Kalman swarm: A new approach to particle motion in swarm optimization," Proceedings of the Genetic and Evolutionary Computation Conference, pp. 140-150, 2004.

[11] B. R. Chen, X. T. Feng, "Particle swarm optimization with contracted ranges of both search space and velocity," Journal of Northeastern University (Natural Science), vol. 26, no. 5, pp. 488-491, 2005.

[12] J. Tillet, T. M. Rao, F. Sahin, R. Rao, "Darwinian particle swarm optimization," Proceedings of the 2nd Indian International Conference on Artificial Intelligence, pp. 1474-1487, 2005.

[13] D. Sedighizadeh, E. Masehian, "Particle Swarm Optimization Methods, Taxonomy and Applications," International Journal of Computer Theory and Engineering, vol. 1, no. 5, 2009.

[14] S. I. Birbil, S.-C. Fang, "An Electromagnetism-Like Mechanism for Global Optimization," Journal of Global Optimization, vol. 25, pp. 263-283, 2003.

[15] S. I. Birbil, S.-C. Fang, R.-L. Sheu, "On the Convergence of a Population-Based Global Optimization Algorithm," Journal of Global Optimization, vol. 30, pp. 301-318, 2004.

[16] L. N. de Castro, J. Timmis, "An Artificial Immune Network for Multimodal Function Optimization," Proceedings of the IEEE Congress on Evolutionary Computation (CEC'02), vol. 1, pp. 699-704, Hawaii, May 2002.

[17] M. H. Afshar, H. Ketabchi, E. Rasa, "Elitist Continuous Ant Colony Optimization Algorithm: Application to Reservoir Operation Problems," International Journal of Civil Engineering, vol. 4, no. 4, December 2006.

[18] D. H. Wolpert, W. G. Macready, "No Free Lunch Theorems for Optimization," IEEE Transactions on Evolutionary Computation, vol. 1, no. 1, pp. 67-82, 1997.