An Overview of Particle Swarm Optimization
Jagdish Chand Bansal
Mathematics Group
Birla Institute of Technology and Science, Pilani
Email: [email protected], bits-pilani.ac.in
Overview
• Introduction
• An Example
• Some Developments
• Research Issues
Optimization Methods

Deterministic and Probabilistic
Deterministic Method
Merits
• Give exact solutions.
• Do not use any stochastic technique.
• Rely on a thorough search of the feasible domain.

Demerits
• Not robust: can only be applied to a restricted class of problems.
• Often too time-consuming, or sometimes unable to solve real-world problems.
Probabilistic Method

Merits
• Applicable to a wider set of problems, i.e. the function need not be convex, continuous, or explicitly defined.
• Use a stochastic or probabilistic, i.e. random, approach.

Demerits
• Converge to the global optimum only probabilistically.
• Sometimes get stuck at local optima.
Some Existing Probabilistic Methods
• Simulated Annealing (SA)
• Random Search Technique (RST)
• Genetic Algorithm (GA)
• Memetic Algorithm (MA)
• Ant Colony Optimization (ACO)
• Differential Evolution (DE)
• Particle Swarm Optimization (PSO)
Why PSO for Optimization?

• Continuous optimization problems: often non-differentiable, non-convex, highly nonlinear, with many local optima.
• Discrete optimization problems: NP-complete problems, for which no efficient exact algorithm is known for any problem in the class.
• Search speed.
Particle Swarm Optimization: Inspiration

Artificial Life

The term artificial life (A-life) describes research into human-made systems that possess some of the essential properties of life. A-life includes two-fold research:
• How computational techniques can help in studying biological phenomena.
• How biological techniques can help with computational problems.
Inspiration (contd.)

PSO is based on bird flocking, fish schooling, and the swarming theory of A-life.

On fish schooling: "In theory at least, individual members of the school can profit from the discoveries and previous experience of all other members of the school during the search for food" (sociobiologist E. O. Wilson).

This is the basic concept behind PSO.
Inventors

PSO was developed in 1995 by James Kennedy and Russell Eberhart.
PSO uses a population of individuals to search the feasible region of the function space. In this context the population is called a swarm and the individuals are called particles.

Although the PSO algorithm has been shown to perform well, researchers have not yet fully explained how it works.
Each particle modifies its current position and velocity according to the distance between its current position and pbest (its personal best) and the distance between its current position and gbest (the global best).
Update Equations
Velocity Update Equation (rate of change of the particle's position):

v = v + c1·r1·(pbest − current) + c2·r2·(gbest − current)

Here v on the right-hand side is the current velocity and v on the left-hand side is the updated velocity. The factors r1 and r2 are drawn from rand(0, 1), which helps stop the swarm converging too quickly. The acceleration factors c1 and c2 can be used to change the weighting between personal and population experience. The term c1·r1·(pbest − current) is the cognitive component, which draws individuals back toward their previous best positions; the term c2·r2·(gbest − current) is the social component, where individuals compare themselves to others in their group.

Position Update Equation:

current = current + v
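The two update rules can be sketched for a single particle (a minimal sketch; the function name is illustrative, and c1 = c2 = 2 follows the parameter guidance later in the deck):

```python
import random

def update_particle(current, velocity, pbest, gbest, c1=2.0, c2=2.0):
    """One PSO step for a single particle in n dimensions."""
    new_velocity = []
    new_position = []
    for x, v, pb, gb in zip(current, velocity, pbest, gbest):
        r1, r2 = random.random(), random.random()
        # velocity update: old velocity + cognitive + social components
        v_new = v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
        new_velocity.append(v_new)
        # position update: move by the new velocity
        new_position.append(x + v_new)
    return new_position, new_velocity
```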
PSO Parameters
1. The number of particles: typically 20–40. For most problems, 10 particles is enough to get good results.

2. Dimension of particles: determined by the problem to be optimized.

3. Range of particles: also determined by the problem to be optimized; different ranges can be specified for different dimensions.
4. Vmax: the maximum change a particle can make in one iteration, used to help keep the swarm under control. A common choice is to set Vmax to the range of the particle, e.g. if X belongs to [−10, 10], then Vmax = 20. Another approach is Vmax = ⌊(UpBound − LoBound)/5⌋.

5. Learning/acceleration factors: c1 and c2 are usually both equal to 2, although other settings appear in the literature; usually c1 = c2, with values in [0, 4].

6. The stopping criteria: the maximum number of iterations the PSO executes, and the minimum error requirement.
Basic Flow of PSO
1. Initialize the swarm from the solution space.
2. Evaluate the fitness of individual particles.
3. Modify gbest, pbest, and velocity.
4. Move each particle to a new position.
5. Go to step 2 and repeat until convergence or a stopping condition is satisfied.
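The basic flow can be sketched as a complete loop (a minimal sketch minimizing the sphere function; the swarm size, iteration count, and the choice of Vmax equal to the particle range follow the parameter slides, but the helper names are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=100, lo=-10.0, hi=10.0, c1=2.0, c2=2.0):
    vmax = hi - lo  # rule of thumb from the parameter slide: Vmax = range of the particle
    # Step 1: initialize the swarm from the solution space
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_f = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Step 3: modify velocity (clamped to [-vmax, vmax])
                V[i][d] += c1 * r1 * (pbest[i][d] - X[i][d]) + c2 * r2 * (gbest[d] - X[i][d])
                V[i][d] = max(-vmax, min(vmax, V[i][d]))
                # Step 4: move the particle to its new position
                X[i][d] += V[i][d]
            # Step 2: evaluate fitness; update pbest and gbest
            fx = f(X[i])
            if fx < pbest_f[i]:
                pbest_f[i], pbest[i] = fx, X[i][:]
                if fx < gbest_f:
                    gbest, gbest_f = X[i][:], fx
    return gbest, gbest_f

best, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=2)
```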
An Example
A step-by-step walk-through of the PSO procedure.
Two Versions of PSO

• gbest PSO: the global version is faster but may converge to a local optimum on some problems.
• lbest PSO: the local version is somewhat slower but is not as easily trapped in a local optimum.
• One can use the global version to get a quick result and the local version to refine the search.
BINARY PSO
• This version has attracted much less attention than continuous PSO.
• A particle's position is not a real value but either 0 or 1.
• Velocity represents the probability of a bit taking the value 1, not the rate of change of the particle's position as in PSO for continuous optimization.
The particle's position in each dimension is generated stochastically using the sigmoid function:
sigm(x) = 1 / (1 + exp(−x))

[Figure: the sigmoid function plotted for x from −6 to 6, rising from 0 toward 1.]
Velocity and Position Update
v_id = v_id + c1·r1·(p_id − x_id) + c2·r2·(p_gd − x_id)

x_id = 1 if rand() < sigm(v_id), and x_id = 0 otherwise.
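The binary update can be sketched as (a minimal sketch; the function names, per-bit loop, and the velocity clamp `vmax` are illustrative choices):

```python
import math
import random

def sigm(x):
    """Sigmoid: maps a velocity to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def binary_pso_step(x, v, pbest, gbest, c1=2.0, c2=2.0, vmax=4.0):
    """One binary-PSO step: velocity update, then stochastic bit resampling."""
    for d in range(len(x)):
        r1, r2 = random.random(), random.random()
        v[d] += c1 * r1 * (pbest[d] - x[d]) + c2 * r2 * (gbest[d] - x[d])
        v[d] = max(-vmax, min(vmax, v[d]))  # clamping keeps sigm(v) away from 0 and 1
        x[d] = 1 if random.random() < sigm(v[d]) else 0
    return x, v
```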
No Free Lunch Theorem
• In a controversial paper in 1997 (available at AUC library), Wolpert and Macready proved that “averaged over all possible problems or cost functions, the performance of all search algorithms is exactly the same”
• No algorithm is better on average than blind guessing
Important Developments
Almost all modifications vary the velocity update equation in some way.
A Brief Review

• PSO-W: with inertia weight
• PSO-C: with constriction factor
• FIPSO: fully informed PSO
• HPSOM: hybrid PSO with mutation
• MeanPSO: mean PSO
• qPSO: quadratic approximation PSO
Inertia Weight
Shi and Eberhart introduced the inertia weight w into the algorithm (PSO-W). The iterative expressions then become:

v = w·v + c1·r1·(pbest − current) + c2·r2·(gbest − current)
current = current + v

Here w represents the inertia weight, which enhances the exploration ability of the particles.
Why Inertia Weight
When using PSO, it is possible for the magnitude of the velocities to become very large.
Performance can suffer if Vmax is inappropriately set.
To control the growth of velocities, a dynamically adjusted or constant inertia weight was introduced.
Larger w - greater global search ability
Smaller w - greater local search ability.
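A common way to trade off the two is a linearly decreasing schedule, e.g. from w = 0.9 down to w = 0.4 (a sketch; the specific schedule and endpoints are an assumption often used in the literature, not prescribed by this slide):

```python
def inertia_weight(iteration, max_iter, w_start=0.9, w_end=0.4):
    """Linearly decrease w: larger w early (global search), smaller w late (local search)."""
    return w_start - (w_start - w_end) * iteration / max_iter

# inside the velocity update:
# v[d] = inertia_weight(t, T) * v[d] + c1*r1*(pbest[d] - x[d]) + c2*r2*(gbest[d] - x[d])
```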
Constriction Factor
Clerc and Kennedy proposed the constriction factor χ, which is effective in making the algorithm converge (PSO-C):

v = χ·[v + c1·r1·(pbest − current) + c2·r2·(gbest − current)]
current = current + v

where

χ = 2 / |2 − φ − √(φ² − 4φ)|,  with φ = c1 + c2, φ > 4.
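With the commonly used setting φ = 4.1 (c1 = c2 = 2.05), the formula gives χ ≈ 0.7298 (a sketch; the specific φ value is an assumption from the wider literature, not from this slide):

```python
import math

def constriction(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

# velocity update: v = constriction() * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
```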
Fully Informed PSO
A particle is attracted by every other particle in its neighborhood.
v_i = χ·[ v_i + Σ_{n ∈ N_i} c·r·(p_n − current_i) ]

where N_i is the neighborhood of particle i and p_n is the personal best of neighbor n.
Stagnation

v = w·v + c1·r1·(pbest − current) + c2·r2·(gbest − current)
v = χ·[v + c1·r1·(pbest − current) + c2·r2·(gbest − current)]

The PSO algorithm performs well in the early stage but easily becomes premature near a local optimum. In both update rules above, the old velocity is scaled only by the inertia weight or the constriction factor. If the current position of a particle coincides with the global best position and its current velocity is small, the velocity in the next iteration will be even smaller; the particle then becomes trapped in this area, which leads to premature convergence. This phenomenon is known as stagnation.
Hybrid Particle Swarm Optimizer with Mutation (HPSOM)

HPSOM has the potential to escape from a local optimum and search in a new position. The mutation scheme randomly chooses a particle and moves it to a different position in the search area. The operation is as follows:
mut(x_id) = x_id + Δx, if rand() < 0.5
mut(x_id) = x_id − Δx, if rand() > 0.5

where Δx is randomly drawn from [0, 0.1 × (max(range(d)) − min(range(d)))].
This mutation operation is governed by a constant called the probability of mutation.
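The mutation operator can be sketched as (a minimal sketch; the function name, the argument names, and the mutation probability `p_m` are illustrative):

```python
import random

def hpsom_mutate(x, lo, hi, p_m=0.1):
    """Mutate each coordinate with probability p_m, per the HPSOM scheme."""
    scale = 0.1 * (hi - lo)           # delta_x is drawn from [0, 0.1 * range]
    for d in range(len(x)):
        if random.random() < p_m:     # probability of mutation
            dx = random.uniform(0.0, scale)
            # add or subtract delta_x with equal probability
            x[d] = x[d] + dx if random.random() < 0.5 else x[d] - dx
    return x
```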
MeanPSO

[Figure: in standard PSO a particle on the solution line is attracted toward pbest and gbest; in MeanPSO the attraction points are replaced by (gbest + pbest)/2 and (gbest − pbest)/2.]
qPSO: Quadratic Approximation (QA)

R1: the particle with the best fitness value.
R2 and R3: randomly chosen distinct particles.

R* = 0.5 × [ (R2² − R3²)·f(R1) + (R3² − R1²)·f(R2) + (R1² − R2²)·f(R3) ] / [ (R2 − R3)·f(R1) + (R3 − R1)·f(R2) + (R1 − R2)·f(R3) ]

where f(Ri) is the objective function value at Ri, for i = 1, 2, 3. The calculations are done component-wise to obtain R*.
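The QA point can be computed directly; for three samples of f(x) = x², the formula recovers the exact minimizer 0 (a sketch for the one-dimensional case; the component-wise extension applies the same formula per coordinate):

```python
def qa_point(r1, r2, r3, f):
    """Minimum of the quadratic interpolating (r1, f(r1)), (r2, f(r2)), (r3, f(r3))."""
    f1, f2, f3 = f(r1), f(r2), f(r3)
    num = (r2**2 - r3**2) * f1 + (r3**2 - r1**2) * f2 + (r1**2 - r2**2) * f3
    den = (r2 - r3) * f1 + (r3 - r1) * f2 + (r1 - r2) * f3
    return 0.5 * num / den

# qa_point(1.0, 2.0, 3.0, lambda x: x * x)  ->  0.0 (the minimizer of x^2)
```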
The Process of Hybridization
Figure 4.1: Transition from the ith iteration to the (i+1)th iteration. [The swarm s1, s2, …, sm is split by particle index: s1…sp are updated by PSO and sp+1…sm by QA, together forming qPSO; the updated swarm s'1, s'2, …, s'm enters the (i+1)th iteration.]
The percentage of the swarm that is updated by QA is called the Coefficient of Hybridization (CH).
Flowchart of the qPSO Process

1. Start: generate a random swarm; set ITER = 0.
2. Evaluate the objective function value of all particles and determine GBEST.
3. Split the swarm S into sub-swarms S1 and S2.
4. For S1: determine pbest and gbest (= GBEST); update velocities; update positions using PSO.
5. For S2: if R1 (= GBEST), R2, and R3 can be determined such that at least two of them are distinct, update positions using QA; otherwise fall back to the PSO update.
6. Evaluate the objective function value of all particles and determine GBEST; set ITER = ITER + 1.
7. If the stopping criterion is satisfied, report the best particle and end; otherwise return to step 3.
Research Issues
• Hybridization
• Parallel implementation
• New variants: modifications of the velocity update equation; introduction of new operators into PSO
• Discrete particle swarm optimization
• Interaction with biological intelligence
• Convergence analysis
Some Unsolved Issues
• Convergence analysis.
• Dealing with discrete variables.
• Combination of various PSO techniques to deal with complex problems.
• Interaction with biological intelligence.
• Cryptanalysis.