
Research Article
A Novel Angular-Guided Particle Swarm Optimizer for Many-Objective Optimization Problems

Fei Chen,1,2,3 Shuhuan Wu,1,2,3 Fang Liu,1,2,3 Junkai Ji,1,2,3 and Qiuzhen Lin1,2,3

1College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China
2National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, Shenzhen, China
3Guangdong Laboratory of Artificial Intelligence and Digital Economy (SZ), Shenzhen, China

Correspondence should be addressed to Fang Liu; fliu@szu.edu.cn

Received 22 November 2019; Revised 4 March 2020; Accepted 12 March 2020; Published 17 April 2020

Academic Editor: Quanmin Zhu

Copyright © 2020 Fei Chen et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Most multiobjective particle swarm optimizers (MOPSOs) often face the challenges of keeping diversity and achieving convergence when tackling many-objective optimization problems (MaOPs), as they usually use the nondominated sorting method or a decomposition-based method to select the local or global best particles, which is not so effective in high-dimensional objective space. To better solve MaOPs, this paper presents a novel angular-guided particle swarm optimizer (called AGPSO). A novel velocity update strategy is designed in AGPSO, which aims to enhance the search intensity around the particles selected based on their angular distances. Using an external archive, the local best particles are selected from the surrounding particles with the best convergence, while the global best particles are chosen from the top 20% particles with the better convergence among the entire particle swarm. Moreover, an angular-guided archive update strategy is proposed in AGPSO, which maintains a consistent population with balanceable convergence and diversity. To evaluate the performance of AGPSO, the WFG and MaF test suites with 5 to 10 objectives are adopted. The experimental results indicate that AGPSO shows superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four competitive evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) when solving most of the test problems used.

1. Introduction

Many real-world applications often face the problem of optimizing m (often conflicting) objectives [1]. This kind of engineering problem is called a multiobjective optimization problem (MOP) when m is 2 or 3, or a many-objective optimization problem (MaOP) when m > 3. A generalized MOP or MaOP can be modeled as follows:

$$\text{minimize } F(x) = (f_1(x), f_2(x), \ldots, f_m(x)), \quad \text{subject to } x \in \Omega, \tag{1}$$

where x = (x_1, x_2, ..., x_D) is a decision vector in the D-dimensional search space Ω and F(x) defines m objective functions. Assuming that two solutions x and y (x ≠ y) locate in the search space Ω, if f_i(x) ≤ f_i(y) for all i ∈ {1, 2, ..., m}, x is said to dominate y. Then, if no solution can dominate x, it is called a nondominated solution. For MOPs or MaOPs, the aim is to search a set of nondominated solutions called the Pareto-optimal set (PS), with its mapping in the objective space called the Pareto-optimal front (PF).

During the past two decades, a number of nature-inspired computational methodologies have been proposed to solve various kinds of MOPs and MaOPs, such as multiobjective evolutionary algorithms (MOEAs) [2, 3], multiobjective particle swarm optimizers (MOPSOs) [4-6], and multiobjective ant colony optimizers [7]. In particular, MOPSOs have become one of the most outstanding population-based approaches due to their easy implementation and strong search ability [8], and they can also be applied to other kinds of optimization problems, such as multimodal optimization problems [9] and standard image segmentation [10], as well as some real-world engineering problems [11, 12].


Currently, MOPSOs have been validated to be very effective and efficient when solving MOPs, as they can search a set of approximate solutions with balanceable diversity and convergence to cover the entire PFs. Based on the selection of local or global best particles, which can effectively guide the flight of particles in the search space, most MOPSOs can be classified into three main categories. The first category embeds the Pareto-based sorting approach [2] into MOPSO (called Pareto-based MOPSOs), such as OMOPSO [13], SMPSO [14], CMPSO [15], AMOPSO [5], and AGMOPSO [16]. The second category decomposes MOPs into a set of scalar subproblems and optimizes all the subproblems simultaneously using a collaborative search process (called decomposition-based MOPSOs), including dMOPSO [17], D2MOPSO [18], and MMOPSO [19]. The third category employs performance indicators (e.g., HV [20] and R2 [21]) to guide the search process in MOPSOs (called indicator-based MOPSOs), such as IBPSO-LS [22], S-MOPSO [23], NMPSO [24], R2HMOPSO [25], and R2-MOPSO [26].

Although the above MOPSOs are effective for tackling MOPs with the objective number m ≤ 3, their performance will significantly deteriorate when solving MaOPs with m > 3, mainly due to the several challenges brought by "the curse of dimensionality" in MaOPs [27, 28]. Generally, there are three main challenges for MOPSOs to deal with MaOPs.

The first challenge is to provide sufficient selection pressure to approach the true PFs of MaOPs, i.e., the challenge of maintaining convergence. With the increase of objectives in MaOPs, it becomes very difficult for MOPSOs to pay the same attention to optimizing all the objectives, which may lead to an imbalanced evolution such that solutions are very good at solving some objectives but perform poorly on the others [29]. Moreover, most solutions of MaOPs are often nondominated with each other at each generation. Therefore, MOPSOs based on Pareto dominance may lose the selection pressure on addressing MaOPs and show a poor convergence performance [24].

The second challenge is to search a set of solutions that are evenly distributed along the whole PF, i.e., the challenge of diversity maintenance. Since the objective space is enlarged rapidly with the increase of dimensionality in MaOPs, a large number of solutions is required to approximate the whole PF. Some well-known diversity maintenance methods may not work well on MaOPs; e.g., the crowding distance [2] may prefer some dominance-resistant solutions [30], the decomposition approaches require a larger set of well-distributed weight vectors [3], and the performance indicators require an extremely high computational cost [31]. Thus, diversity maintenance in MOPSOs becomes less effective with the increase of objectives in MaOPs [32].

The third challenge is to propose effective velocity update strategies for MOPSOs to guide the particle search in the high-dimensional space [33]. There are a number of research studies that present improved velocity update strategies for MOPs [34]. For example, in SMPSO [14], the velocity of the particles is constrained in order to avoid the cases in which the velocity becomes too high. In AgMOPSO [35], a novel archive-guided velocity update method is used to select the global best and personal best particles from the external archive, which is more effective to guide the swarm. In MaOPSO/2s-pccs [36], a leader group is selected from the nondominated solutions with the better diversity in the external archive according to the parallel cell coordinates, which are selected as the global best solutions to balance the exploitation and exploration of a population. In NMPSO [24], another search direction from the personal best particle pointing to the global best one is provided to make more disturbances in velocity update. However, the abovementioned velocity update strategies are not so effective for MaOPs. With the enlargement of the objective space, it is more difficult to select the suitable personal or global best particles for velocity update.

To overcome the abovementioned challenges, this paper proposes a novel angular-guided MOPSO (called AGPSO). A novel angular-based archive update strategy is designed in AGPSO to well balance convergence and diversity in the external archive. Moreover, a density-based velocity update strategy is proposed to effectively guide the PSO-based search. When compared to existing MOPSOs and some competitive MOEAs for MaOPs, the experimental results on the WFG and MaF test suites with 5 to 10 objectives have shown the superiority of AGPSO in most cases.

To summarize, the main contributions of this paper are clarified as follows:

(1) An angular-guided archive update strategy is proposed in this paper. As the local-best and global-best particles are both selected from the external archive, the elitist solutions with balanceable convergence and diversity should be preserved to effectively guide the search direction to approximate the true PF. In this strategy, N evenly distributed particles (N is the swarm size) are firstly selected by the angular distances to maintain the diversity in different search directions, which helps to alleviate the second challenge above (diversity maintenance). After that, the rest of the particles are respectively associated to the closest particle based on the angular distance, which forms N groups of similar particles. At last, the particle with the best convergence in each particle group is saved in the external archive, trying to alleviate the first challenge above (keeping convergence).

(2) A density-based velocity update strategy is designed in this paper, which can search high-quality particles with fast convergence and good distribution and helps to alleviate the third challenge above in MOPSOs. The local best and global best particles are selected according to the densities of the particles for guiding the PSO-based search. The sparse particles will be guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles are disturbed by the global best particles to put more attention on convergence. In this way, the proposed velocity update strategy is more suitable for solving MaOPs, as experimentally validated in Section 4.5.

The rest of this paper is organized as follows. Section 2 introduces the background of the particle swarm optimizer and some current MOPSOs and MOEAs for MaOPs. The details of AGPSO are given in Section 3. To validate the performance of AGPSO, some experimental studies are provided in Section 4. At last, our conclusions and future work are given in Section 5.

2. Background

2.1. Particle Swarm Optimizer. Particle swarm optimizer is a simple yet efficient swarm intelligence algorithm, which is inspired by the social behaviors of individuals in flocks of birds and schools of fish, as proposed by Kennedy and Eberhart [6] in 1995. Generally, a particle swarm S is stochastically initialized to have N particles according to the corresponding search range of the problem (N is the swarm size). Each particle x_i ∈ S is a candidate solution to the problem, characterized by the velocity v_i(t) and the position x_i(t) at the tth iteration.

Moreover, each particle x_i records its historical position information in S, i.e., the personal best position for x_i (pbest_i) and the global best position for x_i (gbest_i), which are used to guide the particle to fly to the next position x_i(t + 1). Then, the velocity and position of each particle are iteratively updated as follows:

$$v_i(t+1) = w\,v_i(t) + c_1 r_1 (pbest_i - x_i(t)) + c_2 r_2 (gbest_i - x_i(t)), \tag{2}$$

$$x_i(t+1) = x_i(t) + v_i(t+1), \tag{3}$$

where w is an inertia weight to follow the previous velocity, r_1 and r_2 are two random numbers uniformly generated in [0, 1], and c_1 and c_2 are the two learning weight factors for the personal and global best particles [14].
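As an illustration of equations (2) and (3), a minimal sketch of the canonical update is given below. It is not the authors' implementation; the class and method names are chosen here for convenience, and the parameter ranges follow the settings reported later in Table 1 (w in [0.1, 0.5] and c1, c2 in [1.5, 2.5]).

```java
import java.util.Random;

// A minimal sketch of the canonical PSO update in equations (2) and (3).
// Names (Particle, psoUpdate) are illustrative, not from the paper's code.
public class PsoUpdateSketch {
    static final Random RNG = new Random();

    static class Particle {
        double[] position;   // x_i(t)
        double[] velocity;   // v_i(t)
        double[] pbest;      // personal best position
    }

    // Applies equations (2) and (3) to one particle, given the global best position.
    static void psoUpdate(Particle p, double[] gbest, double w, double c1, double c2) {
        double r1 = RNG.nextDouble();   // r1, r2 ~ U[0, 1]
        double r2 = RNG.nextDouble();
        for (int d = 0; d < p.position.length; d++) {
            p.velocity[d] = w * p.velocity[d]
                    + c1 * r1 * (p.pbest[d] - p.position[d])
                    + c2 * r2 * (gbest[d] - p.position[d]);    // equation (2)
            p.position[d] += p.velocity[d];                     // equation (3)
        }
    }
}
```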

2.2. Some Current MOPSOs and MOEAs for MaOPs. Recently, a number of nature-inspired optimization algorithms have been proposed for solving MaOPs, such as MOEAs and MOPSOs. Especially, a number of MOEAs have been proposed, which can be classified into three main kinds: Pareto-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. A lot of approaches have been proposed to enhance their performance for solving MaOPs. For example, the Pareto dominance relation was modified to enhance the convergence pressure for Pareto-based MOEAs, such as ε-dominance [30, 37], θ-dominance, a strengthened dominance relation [38], and a special dominance relation [39]. New association mechanisms for solutions and weight vectors were designed for decomposition-based MOEAs, such as the use of reference vectors in NSGA-II [40], a dynamical decomposition in DDEA [41], and a self-organizing map-based weight vector design in MOEA/D-SOM [42]. A lot of effective and efficient performance indicators were presented for indicator-based MOEAs, such as a unary diversity indicator based on reference vectors [43], an efficient indicator combining the sum of objectives and shift-based density estimation called ISDE+ [44], and an enhanced inverted generational distance (IGD) indicator [45].

In contrast to the abovementioned MOEAs, only a few MOPSOs have been presented to solve MaOPs. The reason may refer to the three challenges faced by MOPSOs, as introduced in Section 1. The main difference between MOPSOs and MOEAs is their evolutionary search; e.g., most MOEAs adopt differential evolution or simulated binary crossover, while most MOPSOs use the flight of particles to run the search. The environmental selection in the external archive of MOPSOs and MOEAs is actually similar. As inspired by the environmental selection in some MOEAs, MOPSOs can be easily extended to solve MaOPs. Recently, some competitive MOPSOs have been proposed to better solve MaOPs. For example, in pccsAMOPSO [5], a parallel cell coordinate system was employed to update the external archive for maintaining the diversity, which was used to select the global best and personal best particles. In its improved version, MaOPSO/2s-pccs [36], a two-stage strategy was further presented to emphasize convergence and diversity, respectively, by using a single-objective optimizer and a many-objective optimizer. In NMPSO [24], a balanceable fitness estimation method was proposed by summarizing different weights of convergence and diversity factors, which aims to offer sufficient selection pressure in the search process. However, this method is very sensitive to the used parameters and requires a high computational cost. In CPSO [29], a coevolutionary PSO with a bottleneck objective learning strategy was designed for solving MaOPs; multiple swarms coevolved in a distributed fashion to maintain diversity for approximating the entire PFs, while a novel bottleneck objective learning strategy was used to accelerate convergence for all objectives. In MaPSO [46], a novel MOPSO based on the acute angle was proposed, in which the leader of particles was selected from its historical particles by using scalar projections, and each particle owned K historical particles' information (K was set to 3 in its experiments). Moreover, the environmental selection in MaPSO was run based on the acute angle of each pair of particles. Although these MOPSOs are effective for solving MaOPs, there is still room for improvement in the design of MOPSOs. Thus, this paper proposes a novel angular-guided MOPSO with an angular-based archive update strategy and a density-based velocity update strategy to alleviate the three abovementioned challenges in Section 1.

3. The Proposed AGPSO Algorithm

In this section, the details of the proposed algorithm are provided. At first, the two main strategies used in AGPSO, i.e., a density-based velocity update strategy and an angular-guided archive update strategy, are respectively introduced. The density-based velocity update strategy considers the density around each particle, which is used to determine whether the local best particle or the global best particle is used to guide the swarm search. In this strategy, the local best particle is selected based on the local angular information, aiming to encourage exploitation in the local region around the current particle, while the global best particle is selected by the convergence performance, which is used to perform exploration in the whole search space. In order to provide elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. At last, the complete framework of AGPSO is also shown in order to clarify its implementation as well as other components.

3.1. A Density-Based Velocity Update Strategy. As introduced in Section 2.1, the original velocity update strategy includes two best positions of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles, as defined in equation (2), which are used to guide the swarm search. Theoretically, the quality of generated particles is dependent on the guiding information from pbest and gbest in velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (2) may lead to too much disturbance for the swarm search, as most of the particles are usually sparse in the high-dimensional objective space. Thus, the quality of particles generated by the traditional velocity formula may not be promising, as the large disturbance in velocity update will lower the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

$$v_i(t+1) = \begin{cases} w\,v_i(t) + c_1 r_1 (lbest_i - x_i(t)), & D_x \ge medSDE, \\ w\,v_i(t) + c_2 r_2 (gbest_i - x_i(t)), & D_x < medSDE, \end{cases} \tag{4}$$

where t is the iteration number; v_i(t) and v_i(t+1) are the tth and the (t+1)th iteration velocities of particle x_i, respectively; w is the inertia weight; c_1 and c_2 are two learning factors; r_1 and r_2 are two uniformly distributed random numbers in [0, 1]; lbest_i and gbest_i are the positions of the local-best and global-best particles for x_i, i = 1, 2, ..., N, respectively; D_x is the shift-based density estimation (SDE) of particle x, as defined in [47]; and medSDE is the median value of all D_x in the swarm. In this way, the sparse particles with D_x ≥ medSDE are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles with D_x < medSDE are disturbed by the global best particles to put more attention on convergence. Please note that lbest_i and gbest_i of x_i(t) are selected from the external archive A, as introduced below.
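A compact sketch of the switch in equation (4) is given below. It assumes a commonly used simplified form of SDE in which a particle's density value D_x is its distance to the nearest "shifted" neighbor in the normalized objective space (a larger value means a sparser region); the exact aggregation in [47] may differ, and all helper names are illustrative rather than taken from the paper's code.

```java
import java.util.Arrays;
import java.util.Random;

// Sketch of the density-based velocity update in equation (4).
// Assumes a simplified SDE: D_x = distance to the nearest shifted neighbor
// in objective space (larger D_x = sparser region). Helper names are illustrative.
public class DensityBasedVelocitySketch {
    static final Random RNG = new Random();

    // Shift-based density estimation for particle i over normalized objectives objs[N][m].
    static double sdeDistance(double[][] objs, int i) {
        double min = Double.POSITIVE_INFINITY;
        for (int j = 0; j < objs.length; j++) {
            if (j == i) continue;
            double sum = 0.0;
            for (int k = 0; k < objs[i].length; k++) {
                double diff = Math.max(0.0, objs[j][k] - objs[i][k]); // shift better objectives of j onto i
                sum += diff * diff;
            }
            min = Math.min(min, Math.sqrt(sum));
        }
        return min;
    }

    // Median of all SDE values in the swarm (medSDE in equation (4)).
    static double medianSde(double[][] objs) {
        double[] d = new double[objs.length];
        for (int i = 0; i < objs.length; i++) d[i] = sdeDistance(objs, i);
        Arrays.sort(d);
        return d[d.length / 2];
    }

    // Equation (4): sparse particles follow lbest, crowded particles follow gbest;
    // the position is then moved by equation (3).
    static void updateVelocity(double[] x, double[] v, double[] lbest, double[] gbest,
                               double sde, double medSDE, double w, double c1, double c2) {
        double r1 = RNG.nextDouble(), r2 = RNG.nextDouble();
        for (int d = 0; d < x.length; d++) {
            if (sde >= medSDE) {
                v[d] = w * v[d] + c1 * r1 * (lbest[d] - x[d]);   // exploit locally
            } else {
                v[d] = w * v[d] + c2 * r2 * (gbest[d] - x[d]);   // push convergence
            }
            x[d] += v[d];                                         // equation (3)
        }
    }
}
```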

For the selection of gbest_i of particle x_i, the focus is on the particles with better convergence in the archive, while a certain perturbation still needs to be ensured; thus, gbest_i is randomly selected from the top 20% particles in the external archive with the better convergence performance values. For each particle x, its convergence performance value is calculated using the following formulation:

$$Con(x) = \sum_{k=1}^{m} f_k'(x), \tag{5}$$

where m is the number of objectives and f_k'(x) returns the kth normalized objective value of x, which is obtained by the following normalization procedure:

$$f_k'(x) = \frac{f_k(x) - z_k^{*}}{z_k^{nadir} - z_k^{*}}, \tag{6}$$

where z_k^* and z_k^{nadir} are the kth objective values of the ideal and nadir points, respectively, k = 1, 2, ..., m.

For the selection of lbest_i, the current particle x_i in the particle swarm (i = 1, ..., N, where N is the swarm size) is firstly associated to the closest particle y in the external archive A by comparing the angular distance of x_i to each particle in A; this particle y is then used to find lbest_i in A. Here, the angular distance [41] between a particle x and another particle y, termed AD(x, y), is computed by

$$AD(x, y) = \| \lambda(x) - \lambda(y) \|_2, \tag{7}$$

where λ(x) is defined as follows:

$$\lambda(x) = \frac{F'(x)}{\sum_{k=1}^{m} f_k'(x)}, \tag{8}$$

where F'(x) = (f_1'(x), f_2'(x), ..., f_m'(x))^T and f_k'(x) is calculated by equation (6). Then, lbest_i is selected from the T angular-distance-based closest neighbors of y as the one with the best convergence value by equation (5). Specially, each particle of A owns its T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of x_i in A is recorded in B(i), where B(i) = {i_1, i_2, ..., i_T} and (x_{i_1}, x_{i_2}, ..., x_{i_T}) are the T closest angular-distance-based solutions to x_i by equation (7).
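The quantities in equations (5)-(8) reduce to simple vector operations, sketched below. The ideal and nadir points are assumed to be estimated elsewhere (e.g., from the current swarm and archive), and the class and method names are ours, not the paper's.

```java
// Sketch of the building blocks in equations (5)-(8): normalization, convergence
// value Con(x), the projected vector lambda(x), and the angular distance AD(x, y).
// Method names are illustrative; zStar/zNadir are assumed to be estimated elsewhere.
public class AngularMeasuresSketch {

    // Equation (6): normalize one objective vector with the ideal and nadir points.
    static double[] normalize(double[] f, double[] zStar, double[] zNadir) {
        double[] fn = new double[f.length];
        for (int k = 0; k < f.length; k++) {
            fn[k] = (f[k] - zStar[k]) / (zNadir[k] - zStar[k]);
        }
        return fn;
    }

    // Equation (5): convergence value = sum of normalized objectives (smaller is better).
    static double con(double[] fNorm) {
        double sum = 0.0;
        for (double v : fNorm) sum += v;
        return sum;
    }

    // Equation (8): project the normalized vector onto the hyperplane sum(f') = 1.
    static double[] lambda(double[] fNorm) {
        double sum = con(fNorm);
        double[] l = new double[fNorm.length];
        for (int k = 0; k < fNorm.length; k++) l[k] = fNorm[k] / sum;
        return l;
    }

    // Equation (7): angular distance = Euclidean distance between the projections.
    static double angularDistance(double[] fNormX, double[] fNormY) {
        double[] lx = lambda(fNormX), ly = lambda(fNormY);
        double sum = 0.0;
        for (int k = 0; k < lx.length; k++) sum += (lx[k] - ly[k]) * (lx[k] - ly[k]);
        return Math.sqrt(sum);
    }
}
```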

Then, for each particle in S, lbest_i and gbest_i are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.

3.2. An Angular-Guided Archive Update Strategy. After performing the abovementioned density-based velocity update for the swarm search, each particle flies to its new position to produce the new particle swarm S. In order to get a population consisting of a fixed number of elitist solutions, which can maintain an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy following the principle of distributivity first and convergence second, and the pseudocode of updating the external archive is given in Algorithm 2.


In Algorithm 2, the inputs are S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is set to ∅ in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time the particle pair (x_h, x_u) with the minimum angular distance in U is found in line 4, which is computed as

$$(x_h, x_u) = \arg\min \{ AD(x_h, x_u) \mid x_h, x_u \in U \}, \tag{9}$$

where AD(x, y) is computed as in equation (7). Then, the particle x* in (x_h, x_u) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of x ∈ (x_h, x_u) to U, termed AD(x, U), is computed as

$$AD(x, U) = \min \{ AD(x, y) \mid y \in U,\ y \notin (x_h, x_u) \}, \quad x \in (x_h, x_u), \tag{10}$$

where AD(x, y) is computed as in equation (7). N subsets S_1, S_2, ..., S_N are obtained in lines 8-11, where particle x_i in U is saved into S_i, i = 1, 2, ..., N. In lines 12-15, each particle x in A is associated with the particle x_t in U that has the minimum angular distance to it, and then this particle x of A is added into the subset S_t, t ∈ {1, 2, ..., N}, in line 14. Then, A is set to ∅ in line 16. Finally, in each subset S_i, i = 1, 2, ..., N, the particle with the best convergence performance computed by equation (5) is added into A in lines 17-20. A is returned in line 21 as the final result.
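The pruning loop in lines 3-6 of Algorithm 2 (equations (9) and (10)) can be sketched as follows. The inputs are assumed to be the projected vectors λ(x) from equation (8), so the angular distance AD reduces to a plain Euclidean distance; this straightforward O(N^2)-per-iteration version is an illustration, not the authors' implementation.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of lines 3-6 of Algorithm 2: repeatedly find the closest pair in U by
// angular distance (equation (9)) and move the pair member that is closer to the
// rest of U (equation (10)) out of U. Inputs are assumed to be the projected
// vectors lambda(x) of equation (8), so AD reduces to a Euclidean distance.
public class AngularPruningSketch {

    static double euclid(double[] a, double[] b) {
        double s = 0.0;
        for (int k = 0; k < a.length; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
        return Math.sqrt(s);
    }

    // u: projected vectors of the union S ∪ A; n: number of particles to prune.
    // Returns the pruned (redundant) particles; the representatives remain in u.
    static List<double[]> prune(List<double[]> u, int n) {
        List<double[]> removed = new ArrayList<>();
        for (int iter = 0; iter < n; iter++) {
            int h = -1, v = -1;
            double best = Double.POSITIVE_INFINITY;
            for (int i = 0; i < u.size(); i++) {           // equation (9): closest pair
                for (int j = i + 1; j < u.size(); j++) {
                    double ad = euclid(u.get(i), u.get(j));
                    if (ad < best) { best = ad; h = i; v = j; }
                }
            }
            double dH = minDistToOthers(u, h, v);           // equation (10)
            double dV = minDistToOthers(u, v, h);
            removed.add(u.remove(dH <= dV ? h : v));        // drop the more redundant one
        }
        return removed;
    }

    static double minDistToOthers(List<double[]> u, int idx, int mate) {
        double min = Double.POSITIVE_INFINITY;
        for (int j = 0; j < u.size(); j++) {
            if (j == idx || j == mate) continue;
            min = Math.min(min, euclid(u.get(idx), u.get(j)));
        }
        return min;
    }
}
```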

In order to facilitate the understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten individuals x1, x2, ..., x10 and their mapping solutions, which map the individuals to the hyperplane f1' + f2' = 1 (for calculating the angular distance) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half the population size), i.e., {x1, x4, x7, x9, x10}, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b), while the remaining particles, i.e., {x2, x3, x5, x6, x8}, are preserved in A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with its minimum-angular-distance particle in Figure 1(b). After that, five subsets S1, S2, ..., S5 are obtained, where S1 preserves x1 and its associated particle in A, i.e., x2. Similarly, S2 preserves x3 and x4; S3 preserves x5, x6, and x7; S4 preserves x8 and x9; and S5 preserves x10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).

3.3. The Complete Algorithm of AGPSO. The above sections have introduced the main components of AGPSO, which include the velocity update strategy and the archive update operator. In addition, an evolutionary search is also performed on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of its complete algorithm is provided in Algorithm 3. The initialization procedure is first activated in lines 1-6 of Algorithm 3. For each particle in S, its position is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated with Algorithm 2 in line 7 and sorted in ascending order by calculating the convergence fitness of each particle using equation (5) in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.

In the main loop, the SDE distance of each particle in S is calculated, and these particles are sorted by the SDE distance to get the median SDE distance of the particles in S, called medSDE, as in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14 with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy to cooperate with the PSO-based search.

(1) sort the external archive A in ascending order based on the convergence value as in equation (5)
(2) for each solution x_i ∈ A, i = 1, 2, ..., N
(3)   identify its T neighborhood indexes as B(i)
(4) end for
(5) for each particle x_i ∈ S, i = 1, 2, ..., N
(6)   associate x_i to the angular-guided closest particle y ∈ A with index j in A
(7)   get the T nearest neighbors in A of y by the neighborhood index B(j)
(8)   sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9)   select the first angular-distance-based neighboring particle of y as lbest_i
(10)  select randomly from the top 20% particles in A as gbest_i
(11)  update the velocity v_i of x_i by equation (4)
(12)  update the position x_i by equation (3)
(13)  evaluate the objective values for x_i
(14) end for
(15) return S

ALGORITHM 1: Density-based velocity update (S, A, medSDE).


Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. Also, the angular-guided archive update strategy of Algorithm 2 is carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles. The iteration counter t is

[Figure 1, panels (a)-(e): the ten obtained solutions x1-x10 and their mapping solutions on the hyperplane f1' + f2' = 1; (a) the union set U, (b) the representative particles kept in U, (c) the remaining particles preserved in A, (d) the subsets S1-S5 after association, and (e) the best-convergence particle selected from each subset.]

Figure 1: An example to illustrate the process of the proposed angular-guided archive update strategy.

(1) combine S and A into a union set U, and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)   find the particle pair (x_h, x_u) in U with the minimum angular distance among all particle pairs in U
(5)   find x* in (x_h, x_u) such that x* has the smaller angular distance to U by equation (10)
(6)   add x* to A, then delete x* from U
(7) end for
(8) for each particle x_i ∈ U
(9)   initialize an empty subset S_i
(10)  add the particle x_i into S_i
(11) end for
(12) for each particle x_i ∈ A
(13)  associate x_i to the particle x_t in U that has the smallest angular distance to x_i
(14)  add x_i into S_t
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)  find the particle x* of S_i that has the best convergence performance computed by equation (5)
(19)  add x* into A
(20) end for
(21) return A

ALGORITHM 2: Angular-guided archive update (S, A, N).


increased by 2 in line 19 because one PSO-based search and one evolutionary search are carried out for each particle in each round of the main loop. The abovementioned operations are repeated until the maximum number of iterations (t_max) is reached. At last, the final particles in A are saved as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance the convergence in high-dimensional optimization.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, which balances the convergence and the diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader position of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems and solves them simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k - 1, where n and m are respectively the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2 × (m - 1) and l is set to 20.

(1) let t = 0, A = ∅, initialize particles S = {x_1, x_2, ..., x_N}, and let T be the value of the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize the position x_i and set v_i = 0 for x_i
(4)   evaluate the objective values of x_i
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < t_max
(11)   calculate the SDE of each particle in S
(12)   sort the particles in S to get medSDE
(13)   S_new = Density-based velocity update (S, A, medSDE)
(14)   A = Angular-guided archive update (S_new, A, N)
(15)   apply the evolutionary search strategy on A to get a new swarm S_new
(16)   evaluate the objectives of the new particles in S_new
(17)   A = Angular-guided archive update (S_new, A, N)
(18)   sort A in ascending order based on equation (5)
(19)   t = t + 2
(20) end while
(21) output A

ALGORITHM 3: The complete algorithm of AGPSO.


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniform and spread distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric calculates the objective space between the prespecified reference point z^r = (z^r_1, ..., z^r_m)^T, which is dominated by all points on the true PF, and the obtained solution set S. The calculation of the HV metric is given as follows:

$$HV(S) = \mathrm{Vol}\left( \bigcup_{x \in S} [f_1(x), z_1^{r}] \times \cdots \times [f_m(x), z_m^{r}] \right), \tag{11}$$

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are firstly normalized, as suggested in [51], by the vector 1.1 × (f_1^max, f_2^max, ..., f_m^max), with f_k^max (k = 1, 2, ..., m) being the maximum value of the kth objective in the true PF. Those solutions that cannot dominate the reference point are discarded (i.e., they are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective in the true PF is adopted as the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger value of HV indicates a better approximation of the true PF.
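Exact HV computation is expensive as the number of objectives grows; purely as an illustration of what equation (11) measures, the sketch below estimates HV by Monte Carlo sampling in the box spanned by the origin and the reference point (1.0, ..., 1.0), counting the fraction of sampled points dominated by at least one normalized solution. This is only an illustrative estimator under our own assumptions, not the exact HV procedure used in the paper's experiments, and the sample objective vectors in main are hypothetical.

```java
import java.util.Random;

// Monte Carlo sketch of the HV metric in equation (11), assuming objectives are
// already normalized so that the reference point is (1.0, ..., 1.0) and all
// objective values are non-negative. Illustrative only; exact HV algorithms
// are normally used in experiments.
public class HypervolumeMonteCarloSketch {

    // objs: normalized objective vectors of the solution set S; samples: number of random points.
    static double estimateHV(double[][] objs, int m, int samples, long seed) {
        Random rng = new Random(seed);
        int hit = 0;
        double[] p = new double[m];
        for (int s = 0; s < samples; s++) {
            for (int k = 0; k < m; k++) p[k] = rng.nextDouble();   // sample in [0, 1]^m
            for (double[] f : objs) {
                boolean dominated = true;
                for (int k = 0; k < m; k++) {
                    if (f[k] > p[k]) { dominated = false; break; } // f must be <= p in every objective
                }
                if (dominated) { hit++; break; }                   // p lies inside the union of boxes
            }
        }
        return (double) hit / samples;   // box volume is 1 for the (1, ..., 1) reference point
    }

    public static void main(String[] args) {
        // Tiny usage example with two normalized biobjective solutions (hypothetical values).
        double[][] objs = { {0.2, 0.8}, {0.6, 0.3} };
        System.out.println("Estimated HV: " + estimateHV(objs, 2, 100_000, 42L));
    }
}
```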

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are required by dMOPSO and MOEA/DD, the population sizes of these two algorithms are set the same as the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. In order to ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. p_c and p_m are the crossover probability and mutation probability, and η_c and η_m are respectively the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, AGPSO), the control parameters c_1, c_2, c_3 are sampled in [1.5, 2.5] and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the number of historical particles maintained by each particle, which is set to 3; the other algorithmic parameters are θ_max = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability to select parent solutions from the T neighborhoods, and n_r is the maximum number of parent solutions that are replaced by each child solution. Regarding VaEA, σ is a condition for determining whether a solution is searching in a similar direction, which is set to π/2(N + 1).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (t_max) was reached. In this paper, t_max is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × t_max, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "-", and "~" indicate that the results of the other competitors are significantly better than, worse than, and similar to the ones of AGPSO according to this statistical test, respectively.

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared to four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "-", and "~" indicate that the results of the other competitors are significantly better than, worse than, and similar to the ones of AGPSO using the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparison results of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems, as it is the best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are the best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second last row of Table 2. The last row of Table 2 gives the comparisons of AGPSO with each PSO on WFG, where each entry of "-/~/+" is the number of test problems on which the corresponding

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/2(N + 1)
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


algorithm performs worse than, similarly to, and better than AGPSO.

It is found that AGPSO shows an absolute advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters are dependent on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs the best with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second last row of Table 3, 12 best results on the 21 test problems are obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not the best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], in which the shape of a set of reference points cannot fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best on the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires that all objectives are optimized

Table 2: Comparison of results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01(4.27e-02) | 5.97e-01(2.89e-02)- | 3.34e-01(2.58e-02)- | 2.98e-01(3.44e-03)- | 3.00e-01(1.01e-03)-
WFG1 | 8 | 8.75e-01(3.77e-02) | 7.00e-01(1.38e-01)- | 2.74e-01(2.45e-02)- | 2.35e-01(6.26e-03)- | 2.50e-01(1.54e-03)-
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.07e-01(1.58e-01)- | 2.53e-01(1.98e-02)- | 2.13e-01(9.18e-03)- | 2.30e-01(1.29e-03)-
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.69e-01(5.44e-03)- | 9.70e-01(4.70e-03)- | 8.94e-01(1.28e-02)- | 8.85e-01(1.31e-02)-
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.83e-01(3.56e-03)+ | 9.84e-01(2.86e-03)+ | 8.92e-01(1.48e-02)- | 8.48e-01(2.52e-02)-
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.91e-01(4.85e-03)~ | 9.90e-01(1.48e-03)- | 9.00e-01(1.76e-02)- | 8.54e-01(2.55e-02)-
WFG3 | 5 | 6.59e-01(5.16e-03) | 6.13e-01(1.94e-02)- | 6.12e-01(1.64e-02)- | 5.89e-01(1.05e-02)- | 5.75e-01(7.32e-03)-
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(1.52e-02)- | 5.96e-01(1.88e-02)- | 4.94e-01(4.58e-02)- | 5.84e-01(1.51e-02)-
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.83e-01(2.73e-02)- | 5.97e-01(2.58e-02)- | 3.56e-01(9.58e-02)- | 5.92e-01(1.52e-02)-
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.64e-01(6.09e-03)- | 7.22e-01(2.96e-02)- | 6.51e-01(1.08e-02)- | 5.09e-01(1.73e-02)-
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.62e-01(1.21e-02)- | 7.98e-01(1.76e-02)- | 3.88e-01(7.86e-02)- | 5.26e-01(2.13e-02)-
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.82e-01(8.54e-03)- | 8.27e-01(1.32e-02)- | 3.15e-01(1.34e-01)- | 5.52e-01(2.19e-02)-
WFG5 | 5 | 7.41e-01(6.12e-03) | 7.35e-01(3.73e-03)- | 6.63e-01(1.18e-02)- | 5.85e-01(1.53e-02)- | 4.30e-01(1.18e-02)-
WFG5 | 8 | 8.61e-01(4.35e-03) | 8.32e-01(6.58e-03)- | 6.91e-01(1.12e-02)- | 1.81e-01(8.43e-02)- | 4.52e-01(1.08e-02)-
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.65e-01(7.92e-03)- | 6.99e-01(1.76e-02)- | 2.43e-02(1.54e-02)- | 4.65e-01(1.24e-02)-
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.24e-01(3.96e-03)- | 7.09e-01(4.50e-03)- | 6.74e-01(7.93e-03)- | 5.43e-01(1.66e-02)-
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.08e-01(5.70e-03)- | 8.03e-01(2.66e-03)- | 4.89e-01(4.87e-02)- | 6.49e-01(8.38e-03)-
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.34e-01(2.03e-03)- | 8.32e-01(1.80e-03)- | 4.66e-01(2.96e-02)- | 7.03e-01(1.37e-02)-
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.96e-01(2.98e-03)+ | 7.89e-01(4.61e-03)~ | 5.39e-01(2.72e-02)- | 4.44e-01(2.06e-02)-
WFG7 | 8 | 9.20e-01(3.39e-03) | 9.04e-01(3.71e-03)- | 8.73e-01(1.40e-02)- | 2.25e-01(3.93e-02)- | 4.73e-01(1.23e-02)-
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.36e-01(6.97e-03)- | 9.17e-01(1.17e-02)- | 2.07e-01(6.36e-02)- | 5.14e-01(2.10e-02)-
WFG8 | 5 | 6.63e-01(3.75e-03) | 6.73e-01(5.41e-03)+ | 6.29e-01(1.34e-02)- | 4.27e-01(1.57e-02)- | 3.72e-01(2.09e-02)-
WFG8 | 8 | 7.85e-01(7.46e-03) | 7.97e-01(3.79e-02)~ | 6.91e-01(2.13e-02)- | 9.55e-02(3.20e-02)- | 4.11e-01(9.29e-03)-
WFG8 | 10 | 8.36e-01(1.30e-02) | 8.62e-01(4.12e-02)+ | 7.33e-01(2.04e-02)- | 7.73e-02(1.27e-02)- | 4.52e-01(1.90e-02)-
WFG9 | 5 | 6.57e-01(3.82e-03) | 7.35e-01(7.32e-03)+ | 6.23e-01(7.01e-03)- | 5.97e-01(1.99e-02)- | 4.46e-01(1.61e-02)-
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.65e-01(6.80e-02)+ | 6.55e-01(1.17e-02)- | 2.73e-01(6.06e-02)- | 4.55e-01(2.39e-02)-
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.47e-01(6.72e-02)- | 6.56e-01(1.30e-02)- | 2.24e-01(6.96e-02)- | 4.76e-01(1.23e-02)-
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | | | 19-/2~/6+ | 25-/1~/1+ | 27-/0~/0+ | 27-/0~/0+


simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 with all objective numbers. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above, and AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the PSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solved it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best with all objective numbers.

In the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional spaces, and the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite fixed set of reference points does not provide enough search pressure for high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. In the sections mentioned above, it is experimentally shown that AGPSO has better performance than the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO algorithms for dealing with MaOPs, and the comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second last row of Table 4, 22 best results on the 27 test problems are obtained by AGPSO, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 out of 27 cases, respectively, and VaEA and θ-DEA are not the best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO shows

Table 3: Comparison of results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.27e-02(5.81e-05)- | 1.15e-02(2.80e-04)- | 1.06e-02(1.26e-04)- | 6.83e-03(5.14e-04)-
MaF1 | 8 | 4.23e-05(1.80e-06) | 3.64e-05(7.99e-07)- | 2.79e-05(2.66e-06)- | 1.30e-05(1.49e-05)- | 1.21e-05(1.88e-06)-
MaF1 | 10 | 6.16e-07(3.49e-08) | 4.75e-07(1.83e-08)- | 3.83e-07(2.83e-08)- | 8.82e-08(8.92e-08)- | 1.46e-07(2.86e-08)-
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.63e-01(8.71e-04)- | 2.38e-01(3.57e-03)- | 2.50e-01(1.55e-03)- | 2.12e-01(4.05e-03)-
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.36e-01(2.32e-03)- | 2.09e-01(3.04e-03)- | 2.02e-01(2.13e-03)- | 1.74e-01(4.46e-03)-
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.23e-01(2.81e-03)- | 2.06e-01(5.67e-03)- | 2.05e-01(2.15e-03)- | 1.88e-01(4.28e-03)-
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.71e-01(1.47e-02)- | 9.26e-01(5.29e-02)- | 9.38e-01(8.27e-03)- | 7.34e-01(4.50e-01)-
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.99e-01(7.50e-04)+ | 9.84e-01(1.48e-02)- | 9.62e-01(1.44e-02)- | 0.00e+00(0.00e+00)-
MaF3 | 10 | 9.85e-01(2.26e-03) | 1.00e+00(1.92e-04)+ | 9.95e-01(6.00e-03)~ | 9.71e-01(9.43e-03)- | 0.00e+00(0.00e+00)-
MaF4 | 5 | 1.29e-01(1.22e-03) | 5.43e-02(3.25e-02)- | 1.01e-01(1.07e-02)- | 4.21e-02(4.98e-03)- | 7.96e-02(3.93e-03)-
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.29e-03(7.77e-04)- | 3.18e-03(6.84e-04)- | 1.24e-04(7.69e-05)- | 1.82e-03(3.39e-04)-
MaF4 | 10 | 5.66e-04(1.89e-05) | 2.10e-04(1.26e-04)- | 2.36e-04(6.20e-05)- | 1.55e-06(8.26e-07)- | 1.12e-04(2.32e-05)-
MaF5 | 5 | 8.00e-01(2.70e-03) | 8.11e-01(1.33e-03)+ | 7.93e-01(4.06e-03)- | 2.69e-01(1.30e-01)- | 6.68e-01(2.99e-02)-
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.35e-01(1.40e-03)+ | 9.29e-01(4.41e-03)- | 2.01e-01(6.84e-02)- | 3.59e-01(1.30e-01)-
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.64e-01(1.61e-03)- | 9.65e-01(1.90e-03)- | 2.40e-01(3.18e-02)- | 1.60e-01(1.00e-01)-
MaF6 | 5 | 1.30e-01(6.22e-05) | 9.37e-02(4.55e-03)- | 1.10e-01(2.78e-03)- | 1.27e-01(3.33e-05)- | 1.30e-01(1.40e-05)+
MaF6 | 8 | 1.06e-01(1.66e-05) | 9.83e-02(1.15e-02)- | 9.50e-02(2.86e-03)- | 1.04e-01(3.24e-05)- | 9.83e-02(1.26e-03)-
MaF6 | 10 | 9.22e-02(1.18e-02) | 9.12e-02(0.00e+00)~ | 9.24e-02(1.53e-03)~ | 1.00e-01(5.94e-06)~ | 8.06e-02(1.42e-02)-
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.29e-01(1.70e-03)+ | 2.95e-01(6.58e-03)+ | 2.26e-01(2.42e-03)- | 2.37e-01(6.73e-03)-
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.89e-01(2.36e-03)+ | 2.26e-01(6.39e-03)+ | 2.88e-02(9.26e-02)- | 6.78e-02(9.38e-02)-
MaF7 | 10 | 1.83e-01(3.18e-02) | 2.64e-01(1.78e-03)+ | 2.05e-01(5.98e-03)+ | 6.37e-03(5.21e-04)- | 5.57e-02(1.25e-01)-
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | | | 14-/0~/7+ | 16-/1~/4+ | 20-/0~/1+ | 20-/0~/1+


excellence with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages; AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the rest of the comparisons, on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE only perform better than AGPSO in 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO is found to present a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1-MaF2, AGPSO is the best among the MOEAs mentioned above at tackling these test problems. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all objective numbers, and MOEA/DD performs poorly on the MaF4 test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE while it is outperformed by θ-DEA with 5, 8, and 10 objectives. For MaF6, AGPSO is the best with 10 objectives and worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angular-guided external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO                 VaEA                   θ-DEA                  MOEA/DD                SPEA2+SDE
WFG1     5    7.87e-01 (4.27e-02)   3.64e-01 (4.02e-02)-   5.73e-01 (4.76e-02)-   4.94e-01 (2.22e-02)-   7.26e-01 (2.27e-02)-
WFG1     8    8.75e-01 (3.77e-02)   5.78e-01 (3.32e-02)-   7.75e-01 (3.51e-02)-   3.48e-01 (9.75e-02)-   8.36e-01 (1.00e-02)-
WFG1     10   9.45e-01 (3.19e-03)   8.10e-01 (3.59e-02)-   8.73e-01 (1.59e-02)-   3.40e-01 (6.56e-02)-   8.55e-01 (8.50e-03)-
WFG2     5    9.76e-01 (5.76e-03)   9.62e-01 (7.36e-03)-   9.63e-01 (6.36e-03)-   9.70e-01 (1.14e-03)-   9.60e-01 (3.83e-03)~
WFG2     8    9.82e-01 (4.12e-03)   9.75e-01 (6.91e-03)-   9.01e-01 (1.65e-01)-   9.42e-01 (2.37e-02)-   9.76e-01 (3.56e-03)-
WFG2     10   9.93e-01 (1.99e-03)   9.86e-01 (4.59e-03)-   9.34e-01 (3.53e-02)-   9.76e-01 (1.82e-02)-   9.80e-01 (3.09e-03)-
WFG3     5    6.59e-01 (5.16e-03)   5.71e-01 (1.73e-02)-   6.34e-01 (6.13e-03)-   5.69e-01 (1.07e-02)-   5.77e-01 (2.54e-02)-
WFG3     8    6.78e-01 (6.20e-03)   5.77e-01 (2.83e-02)-   5.69e-01 (6.73e-02)-   5.05e-01 (1.23e-02)-   5.55e-01 (4.06e-02)-
WFG3     10   6.91e-01 (7.88e-03)   5.63e-01 (3.05e-02)-   6.20e-01 (1.76e-02)-   4.98e-01 (1.55e-02)-   5.42e-01 (3.89e-02)-
WFG4     5    7.74e-01 (5.34e-03)   7.13e-01 (1.06e-02)-   7.61e-01 (4.28e-03)-   7.83e-01 (4.35e-03)+   7.39e-01 (5.10e-03)-
WFG4     8    9.00e-01 (8.05e-03)   8.05e-01 (9.20e-03)-   7.89e-01 (2.07e-02)-   6.81e-01 (2.59e-02)-   7.85e-01 (1.07e-02)-
WFG4     10   9.37e-01 (6.64e-03)   8.43e-01 (9.97e-03)-   8.97e-01 (1.14e-02)-   8.53e-01 (2.69e-02)-   8.20e-01 (8.19e-03)-
WFG5     5    7.41e-01 (6.12e-03)   6.97e-01 (7.19e-03)-   7.34e-01 (3.91e-03)-   7.41e-01 (3.39e-03)~   7.08e-01 (6.05e-03)-
WFG5     8    8.61e-01 (4.35e-03)   7.91e-01 (1.00e-02)-   7.93e-01 (8.22e-03)-   6.66e-01 (2.51e-02)-   7.55e-01 (1.42e-02)-
WFG5     10   8.88e-01 (8.29e-03)   8.25e-01 (4.03e-03)-   8.69e-01 (3.20e-03)-   8.03e-01 (1.80e-02)-   7.98e-01 (9.63e-03)-
WFG6     5    7.50e-01 (8.96e-03)   7.00e-01 (1.76e-02)-   7.42e-01 (1.06e-02)-   7.40e-01 (1.03e-02)-   7.19e-01 (1.01e-02)-
WFG6     8    8.72e-01 (1.26e-02)   8.18e-01 (9.27e-03)-   8.16e-01 (1.16e-02)-   7.23e-01 (2.91e-02)-   7.79e-01 (1.13e-02)-
WFG6     10   9.09e-01 (1.36e-02)   8.49e-01 (1.10e-02)-   8.89e-01 (1.08e-02)-   8.80e-01 (1.29e-02)-   8.17e-01 (1.80e-02)-
WFG7     5    7.90e-01 (2.12e-03)   7.44e-01 (9.10e-03)-   7.90e-01 (2.04e-03)~   7.85e-01 (3.80e-03)-   7.62e-01 (4.15e-03)-
WFG7     8    9.20e-01 (3.39e-03)   8.71e-01 (5.41e-03)-   8.52e-01 (1.16e-02)-   7.44e-01 (3.15e-02)-   8.22e-01 (1.28e-02)-
WFG7     10   9.55e-01 (9.11e-03)   9.08e-01 (5.61e-03)-   9.39e-01 (2.22e-03)-   9.22e-01 (1.50e-02)-   8.66e-01 (6.63e-03)-
WFG8     5    6.63e-01 (3.75e-03)   5.98e-01 (1.27e-02)-   6.66e-01 (5.58e-03)+   6.81e-01 (3.20e-03)+   6.60e-01 (5.95e-03)-
WFG8     8    7.85e-01 (7.46e-03)   6.25e-01 (2.83e-02)-   6.61e-01 (1.85e-02)-   6.72e-01 (2.19e-02)-   7.48e-01 (1.19e-02)-
WFG8     10   8.36e-01 (1.30e-02)   6.73e-01 (3.66e-02)-   8.02e-01 (1.11e-02)-   8.12e-01 (7.59e-03)-   7.88e-01 (9.94e-03)-
WFG9     5    6.57e-01 (3.82e-03)   6.43e-01 (1.42e-02)-   6.71e-01 (4.34e-02)+   6.55e-01 (4.42e-03)-   6.96e-01 (8.52e-03)+
WFG9     8    7.30e-01 (9.96e-03)   7.00e-01 (1.12e-02)-   7.12e-01 (3.63e-02)-   6.30e-01 (3.35e-02)-   7.38e-01 (5.00e-02)+
WFG9     10   7.49e-01 (9.03e-03)   7.20e-01 (2.35e-02)-   7.69e-01 (5.70e-02)+   6.74e-01 (2.56e-02)-   7.75e-01 (6.87e-03)+
Best/all      22/27                 0/27                   0/27                   2/27                   3/27
Worse/similar/better (vs. AGPSO)    27-/0~/0+              23-/1~/3+              24-/1~/2+              23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared to a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is only defeated by VaEA in 7 cases. We compare the performance of our algorithm with VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7) and on problems that share the same PF shape but have different characteristics, such as the difficulty of convergence and the deceptive PF property (WFG1-WFG9). Therefore, the angular-guided-based method embedded into PSOs is effective and improved in tackling MaOPs. The angular-guided-based method is adopted as a distribution indicator to distinguish the similarities of particles, which is transformed from the vector angle. The traditional vector angle performs better with concave PFs but worse with convex or linear PFs, as observed in MOEA/C [52]. The angular-guided-based method improves this situation: it performs better than the traditional vector angle with convex or linear PFs, and it does not behave badly with concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first in the update of its external archive and on convergence second. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances

the search in the areas around each particle and accelerates convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to improve the performance. Therefore, our proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually support the abovementioned discussion, to better visualize the performance, and to show the solution distributions in high-dimensional objective spaces, the final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that the boundary of the PF is not completely covered by AGPSO, but the approximate PF obtained by the proposed AGPSO is more closely attached to the real PF. This also supports our analysis of the environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over the four MOPSOs and four

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO                 VaEA                   θ-DEA                  MOEA/DD                SPEA2+SDE
MaF1     5    1.31e-02 (1.43e-04)   1.11e-02 (2.73e-04)-   5.66e-03 (2.72e-05)-   2.96e-03 (8.34e-05)-   1.29e-02 (1.00e-04)-
MaF1     8    4.23e-05 (1.80e-06)   2.69e-05 (2.04e-06)-   2.93e-05 (2.03e-06)-   4.38e-06 (4.77e-07)-   3.87e-05 (9.83e-07)-
MaF1     10   6.16e-07 (3.49e-08)   3.60e-07 (2.21e-08)-   3.39e-07 (7.07e-08)-   3.45e-08 (2.21e-08)-   5.30e-07 (2.16e-08)-
MaF2     5    2.64e-01 (1.18e-03)   2.35e-01 (4.80e-03)-   2.33e-01 (2.82e-03)-   2.34e-01 (1.48e-03)-   2.62e-01 (1.26e-03)-
MaF2     8    2.46e-01 (2.64e-03)   2.00e-01 (7.59e-03)-   1.71e-01 (1.82e-02)-   1.88e-01 (1.39e-03)-   2.36e-01 (3.06e-03)-
MaF2     10   2.45e-01 (2.08e-03)   2.01e-01 (1.25e-02)-   1.94e-01 (7.96e-03)-   1.91e-01 (3.21e-03)-   2.30e-01 (2.48e-03)-
MaF3     5    9.88e-01 (2.87e-03)   9.97e-01 (1.13e-03)+   9.93e-01 (2.82e-03)+   9.98e-01 (9.54e-05)+   9.92e-01 (2.12e-03)+
MaF3     8    9.95e-01 (2.11e-03)   9.98e-01 (3.58e-04)+   9.92e-01 (3.83e-03)-   1.00e+00 (9.42e-07)+   9.97e-01 (1.66e-03)+
MaF3     10   9.85e-01 (2.26e-03)   9.99e-01 (9.36e-04)+   9.70e-01 (5.68e-03)-   1.00e+00 (3.96e-08)+   9.99e-01 (4.94e-04)+
MaF4     5    1.29e-01 (1.22e-03)   1.17e-01 (3.62e-03)-   7.32e-02 (8.89e-03)-   0.00e+00 (0.00e+00)-   1.07e-01 (5.37e-03)-
MaF4     8    6.02e-03 (9.67e-05)   2.44e-03 (3.28e-04)-   1.32e-03 (8.49e-04)-   0.00e+00 (0.00e+00)-   3.62e-04 (7.80e-05)-
MaF4     10   5.66e-04 (1.89e-05)   1.48e-04 (9.76e-06)-   2.29e-04 (3.57e-05)-   0.00e+00 (0.00e+00)-   4.23e-06 (1.65e-06)-
MaF5     5    8.00e-01 (2.70e-03)   7.92e-01 (1.85e-03)-   8.13e-01 (4.51e-05)+   7.73e-01 (3.13e-03)-   7.84e-01 (4.38e-03)-
MaF5     8    9.32e-01 (1.01e-03)   9.08e-01 (3.80e-03)-   9.26e-01 (9.79e-05)-   8.73e-01 (6.71e-03)-   7.36e-01 (9.62e-02)-
MaF5     10   9.67e-01 (1.32e-03)   9.43e-01 (6.87e-03)-   9.70e-01 (4.61e-05)+   9.28e-01 (8.28e-04)-   7.12e-01 (1.18e-01)-
MaF6     5    1.30e-01 (6.22e-05)   1.30e-01 (7.51e-05)+   1.17e-01 (3.12e-03)-   8.19e-02 (2.96e-02)-   1.29e-01 (1.84e-04)-
MaF6     8    1.06e-01 (1.66e-05)   1.06e-01 (2.20e-05)+   9.18e-02 (1.21e-03)-   2.48e-02 (1.29e-02)-   8.81e-02 (2.26e-04)-
MaF6     10   9.22e-02 (1.18e-02)   7.05e-02 (3.95e-02)-   4.57e-02 (4.66e-02)-   4.83e-02 (5.24e-02)-   2.71e-02 (1.00e-01)-
MaF7     5    2.53e-01 (3.05e-02)   3.03e-01 (3.50e-03)+   2.79e-01 (7.42e-03)+   1.19e-01 (2.76e-02)-   3.31e-01 (1.76e-03)+
MaF7     8    1.94e-01 (2.59e-02)   2.24e-01 (6.17e-03)+   2.20e-01 (5.03e-02)+   7.06e-04 (7.80e-04)-   2.45e-01 (1.73e-02)+
MaF7     10   1.83e-01 (3.18e-02)   1.91e-01 (1.19e-02)~   2.27e-01 (2.01e-02)+   9.05e-05 (4.81e-05)-   6.56e-02 (1.05e-02)-
Best/all      11/21                 2/21                   3/21                   3/21                   2/21
Worse/similar/better (vs. AGPSO)    13-/1~/7+              15-/0~/6+              18-/0~/3+              16-/0~/5+


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. To demonstrate the superiority of the density-based velocity update, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that only embedding the local best into the traditional velocity update method

$$v_d(t) = w\,v_d(t) + c_1 r_{1d}\,(x_d^{pbest} - x_d(t)) + c_2 r_{2d}\,(x_d^{gbest} - x_d(t)) + c_3 r_{3d}\,(x_d^{lbest} - x_d(t)) \qquad (12)$$

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
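For clarity, a minimal Python sketch of the equation (12) variant (AGPSO-II) is given below; the inertia weight, learning factors, and function name are illustrative assumptions rather than the paper's parameter settings.

```python
import numpy as np

def velocity_eq12(v, x, pbest, gbest, lbest, w=0.4, c=(2.0, 2.0, 2.0), rng=None):
    """Velocity update of equation (12), i.e., the AGPSO-II variant: the traditional
    pbest/gbest terms plus an extra lbest term, applied dimension-wise.
    The values of w and c are illustrative, not the paper's settings."""
    rng = np.random.default_rng() if rng is None else rng
    c1, c2, c3 = c
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))  # r_{1d}, r_{2d}, r_{3d} in [0, 1]
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))
```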

Figure 2: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective MaF1 problem, shown as objective value versus objective number: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/DD, and (j) the true PF.


Table 6 shows the comparisons of AGPSO and the two modified AGPSOs with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the

proposed angular-guided update strategies. Regarding the comparison between AGPSO and AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): simply adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search strength in the surrounding region of this particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2. Algorithm 2 requires a time complexity

Figure 3: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective WFG3 problem, shown as objective value versus objective number: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/DD, and (j) the true PF.


Figure 4: The HV trend charts (averaged over 30 runs; HV metric value versus number of generations) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem  Obj  AGPSO                 AGPSO-I                AGPSO-II
WFG1     5    7.91e-01 (3.47e-02)   7.48e-01 (3.41e-02)-   7.51e-01 (3.88e-02)-
WFG1     8    8.85e-01 (2.54e-02)   8.14e-01 (3.94e-02)-   7.96e-01 (3.97e-02)-
WFG1     10   9.47e-01 (4.77e-03)   9.49e-01 (2.59e-03)+   9.45e-01 (3.49e-03)~
WFG2     5    9.76e-01 (5.13e-03)   9.76e-01 (2.81e-03)~   9.78e-01 (5.19e-03)+
WFG2     8    9.87e-01 (2.65e-03)   9.89e-01 (2.21e-03)+   9.88e-01 (1.84e-03)~
WFG2     10   9.92e-01 (3.30e-03)   9.93e-01 (1.38e-03)+   9.92e-01 (1.97e-03)~
WFG3     5    6.59e-01 (5.02e-03)   6.66e-01 (4.17e-03)+   6.57e-01 (5.40e-03)~
WFG3     8    6.79e-01 (5.20e-03)   6.86e-01 (3.41e-03)+   6.79e-01 (4.02e-03)~
WFG3     10   6.92e-01 (7.15e-03)   7.02e-01 (3.08e-03)+   6.93e-01 (4.13e-03)~
WFG4     5    7.76e-01 (5.69e-03)   7.57e-01 (7.87e-03)-   7.68e-01 (5.07e-03)-
WFG4     8    9.00e-01 (1.01e-02)   8.85e-01 (8.62e-03)-   8.94e-01 (8.43e-03)-
WFG4     10   9.39e-01 (7.42e-03)   9.31e-01 (8.35e-03)-   9.30e-01 (8.84e-03)-
WFG5     5    7.42e-01 (4.66e-03)   7.34e-01 (7.45e-03)-   7.39e-01 (7.73e-03)-
WFG5     8    8.59e-01 (4.74e-03)   8.52e-01 (6.05e-03)-   8.58e-01 (5.45e-03)~
WFG5     10   8.91e-01 (5.28e-03)   8.73e-01 (1.22e-02)-   8.78e-01 (2.21e-02)-
WFG6     5    7.53e-01 (7.52e-03)   6.92e-01 (2.21e-03)-   7.52e-01 (3.85e-03)~
WFG6     8    8.69e-01 (1.24e-02)   8.10e-01 (3.63e-03)-   8.83e-01 (5.64e-03)+
WFG6     10   9.12e-01 (1.22e-02)   8.40e-01 (3.57e-03)-   9.23e-01 (7.99e-03)+
WFG7     5    7.90e-01 (2.13e-03)   7.81e-01 (3.32e-03)-   7.85e-01 (2.78e-03)-
WFG7     8    9.21e-01 (2.89e-03)   9.15e-01 (3.11e-03)-   9.13e-01 (3.18e-03)-
WFG7     10   9.55e-01 (9.72e-03)   9.55e-01 (2.26e-03)~   9.44e-01 (1.59e-02)-
WFG8     5    6.61e-01 (5.62e-03)   6.44e-01 (3.76e-03)-   6.54e-01 (5.95e-03)-
WFG8     8    7.87e-01 (7.11e-03)   7.80e-01 (4.75e-03)-   7.81e-01 (5.89e-03)-
WFG8     10   8.41e-01 (5.48e-03)   8.45e-01 (3.13e-03)+   8.31e-01 (1.76e-02)-


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m²N²) to classify the population. In lines 8-11, it needs a time complexity of O(N). The association loop in lines 12-15 also requires a time complexity of O(m²N²). In lines 17-20, it needs a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an effective angular-guided archive update strategy. The novel density-based velocity update strategy uses adjacent information (local best) to explore the regions around particles and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the particles produced by searching the surrounding space of sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms for solving MaOPs. The performance of

AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has a superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs, the improvement of which will be studied in the future.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by the original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/DD [50]. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem  Obj  AGPSO                 AGPSO-I                AGPSO-II
WFG9     5    6.58e-01 (4.54e-03)   6.53e-01 (3.73e-03)-   6.53e-01 (2.99e-03)-
WFG9     8    7.29e-01 (9.55e-03)   7.20e-01 (1.21e-02)-   7.23e-01 (8.94e-03)-
WFG9     10   7.51e-01 (1.21e-02)   7.35e-01 (1.18e-02)-   7.36e-01 (1.04e-02)-
MaF1     5    1.30e-02 (1.29e-04)   1.30e-02 (1.10e-04)~   1.30e-02 (1.04e-04)~
MaF1     8    4.23e-05 (1.45e-06)   4.12e-05 (1.35e-06)-   4.14e-05 (1.78e-06)-
MaF1     10   6.12e-07 (4.40e-08)   6.07e-07 (2.30e-08)~   6.10e-07 (2.83e-08)~
MaF2     5    2.64e-01 (9.97e-04)   2.62e-01 (1.08e-03)-   2.61e-01 (1.10e-03)-
MaF2     8    2.46e-01 (1.64e-03)   2.40e-01 (1.89e-03)-   2.41e-01 (1.66e-03)-
MaF2     10   2.45e-01 (1.84e-03)   2.39e-01 (2.75e-03)-   2.41e-01 (2.19e-03)-
MaF3     5    9.90e-01 (4.96e-03)   9.76e-01 (2.92e-02)-   4.96e-01 (3.82e-01)-
MaF3     8    9.68e-01 (1.95e-03)   9.36e-01 (9.22e-02)-   3.73e-01 (3.08e-01)-
MaF3     10   9.92e-01 (2.55e-03)   9.72e-01 (1.86e-02)-   4.42e-01 (3.77e-01)-
MaF4     5    1.29e-01 (1.33e-03)   1.08e-01 (1.44e-02)-   1.18e-01 (2.08e-03)-
MaF4     8    6.03e-03 (1.88e-04)   3.33e-03 (1.02e-03)-   4.74e-03 (2.12e-04)-
MaF4     10   5.62e-04 (1.47e-05)   2.58e-04 (9.13e-05)-   3.65e-04 (2.55e-05)-
MaF5     5    8.01e-01 (1.81e-03)   8.00e-01 (3.40e-03)~   8.00e-01 (2.06e-03)~
MaF5     8    9.32e-01 (1.97e-03)   9.31e-01 (1.39e-03)-   9.31e-01 (1.58e-03)-
MaF5     10   9.67e-01 (1.09e-03)   9.66e-01 (7.51e-04)-   9.66e-01 (1.18e-03)-
MaF6     5    1.30e-01 (5.82e-05)   1.30e-01 (3.35e-05)-   1.30e-01 (5.60e-05)-
MaF6     8    1.03e-01 (2.78e-05)   4.51e-02 (4.39e-02)-   6.64e-02 (5.81e-02)-
MaF6     10   9.49e-02 (1.27e-02)   4.55e-02 (7.07e-02)-   4.40e-02 (4.92e-02)-
MaF7     5    2.51e-01 (4.40e-02)   3.09e-01 (2.58e-03)+   2.49e-01 (3.73e-02)~
MaF7     8    1.91e-01 (4.11e-02)   2.54e-01 (4.64e-03)+   2.22e-01 (1.68e-02)+
MaF7     10   1.78e-01 (3.53e-02)   2.32e-01 (4.46e-03)+   2.01e-01 (9.95e-03)+
Best/all      34/48                 10/48                  4/48
Worse/similar/better (vs. AGPSO)    33-/5~/10+             31-/12~/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342, the Natural Science Foundation of Guangdong Province under Grant 2017A030313338, the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489, and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster-based PSO with leader updating mechanism and ring topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019, In press.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.
[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019, In press.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, 2019, In press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, 2019, In press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018, In press.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer London, London, UK, 2005.



Currently, MOPSOs have been validated to be very effective and efficient for solving MOPs, as they can search a set of approximate solutions with balanceable diversity and convergence to cover the entire PF. Based on the selection of local or global best particles, which can effectively guide the flight of particles in the search space, most MOPSOs can be classified into three main categories. The first category embeds the Pareto-based sorting approach [2] into MOPSO (called Pareto-based MOPSOs), such as OMOPSO [13], SMPSO [14], CMPSO [15], AMOPSO [5], and AGMOPSO [16]. The second category decomposes MOPs into a set of scalar subproblems and optimizes all the subproblems simultaneously using a collaborative search process (called decomposition-based MOPSOs), including dMOPSO [17], D2MOPSO [18], and MMOPSO [19]. The third category employs performance indicators (e.g., HV [20] and R2 [21]) to guide the search process in MOPSOs (called indicator-based MOPSOs), such as IBPSO-LS [22], S-MOPSO [23], NMPSO [24], R2HMOPSO [25], and R2-MOPSO [26].

Although the above MOPSOs are effective for tackling MOPs with the objective number m ≤ 3, their performance will significantly deteriorate when solving MaOPs with m > 3, mainly due to the several challenges brought by "the curse of dimensionality" in MaOPs [27, 28]. Generally, there are three main challenges for MOPSOs in dealing with MaOPs.

The first challenge is to provide sufficient selection pressure to approach the true PFs of MaOPs, i.e., the challenge of maintaining convergence. With the increase of objectives in MaOPs, it becomes very difficult for MOPSOs to pay the same attention to optimizing all the objectives, which may lead to an imbalanced evolution such that solutions are very good at some objectives but perform poorly on the others [29]. Moreover, most solutions of MaOPs are often nondominated with each other at each generation. Therefore, MOPSOs based on Pareto dominance may lose the selection pressure when addressing MaOPs and show a poor convergence performance [24].

The second challenge is to search a set of solutions that are evenly distributed along the whole PF, i.e., the challenge of diversity maintenance. Since the objective space is enlarged rapidly with the increase of dimensionality in MaOPs, a large number of solutions is required to approximate the whole PF. Some well-known diversity maintenance methods may not work well on MaOPs; e.g., the crowding distance [2] may prefer some dominance-resistant solutions [30], the decomposition approaches require a larger set of well-distributed weight vectors [3], and the performance indicators require an extremely high computational cost [31]. Thus, diversity maintenance in MOPSOs becomes less effective with the increase of objectives in MaOPs [32].

The third challenge is to propose effective velocity update strategies for MOPSOs to guide the particle search in the high-dimensional space [33]. There are a number of research studies presenting improved velocity update strategies for MOPs [34]. For example, in SMPSO [14], the velocity of the particles is constrained in order to avoid the cases in which the velocity becomes too high. In AgMOPSO [35], a novel archive-guided velocity update method is used to select the global best and personal best particles from the external archive, which is more effective in guiding the swarm. In MaOPSO2s-pccs [36], a leader group is selected from the nondominated solutions with the better diversity in the external archive according to the parallel cell coordinates, and these leaders are used as the global best solutions to balance the exploitation and exploration of the population. In NMPSO [24], another search direction from the personal best particle pointing to the global best one is provided to introduce more disturbance into the velocity update. However, the abovementioned velocity update strategies are not so effective for MaOPs: with the enlargement of the objective space, it becomes more difficult to select suitable personal or global best particles for the velocity update.

To overcome the abovementioned challenges, this paper proposes a novel angular-guided MOPSO (called AGPSO). A novel angular-based archive update strategy is designed in AGPSO to well balance convergence and diversity in the external archive. Moreover, a density-based velocity update strategy is proposed to effectively guide the PSO-based search. When compared to existing MOPSOs and some competitive MOEAs for MaOPs, the experimental results on the WFG and MaF test suites with 5 to 10 objectives show the superiority of AGPSO in most cases.

To summarize, the main contributions of this paper are clarified as follows:

(1) An angular-guided archive update strategy is proposed in this paper. As the local-best and global-best particles are both selected from the external archive, elitist solutions with balanceable convergence and diversity should be preserved to effectively guide the search direction toward the true PF. In this strategy, N evenly distributed particles (N is the swarm size) are first selected by the angular distances to maintain the diversity in different search directions, which helps to alleviate the second challenge above (diversity maintenance). After that, the remaining particles are respectively associated with their closest particles based on the angular distance, which forms N groups of similar particles. At last, the particle with the best convergence in each particle group is saved in the external archive, trying to alleviate the first challenge above (keeping convergence).

(2) A density-based velocity update strategy is designed in this paper, which can search high-quality particles with fast convergence and good distribution and helps to alleviate the third challenge above in MOPSOs. The local best and global best particles are selected according to the densities of the particles for guiding the PSO-based search. The sparse particles are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles are disturbed by the global best particles to put more attention on convergence. In this way, the proposed velocity update strategy is more suitable for solving MaOPs, as experimentally validated in Section 4.5.

The rest of this paper is organized as follows. Section 2 introduces the background of the particle swarm optimizer


and some current MOPSOs and MOEAs for MaOPs. The details of AGPSO are given in Section 3. To validate the performance of AGPSO, some experimental studies are provided in Section 4. At last, our conclusions and future work are given in Section 5.

2. Background

2.1. Particle Swarm Optimizer. The particle swarm optimizer is a simple yet efficient swarm intelligence algorithm, which is inspired by the social behaviors of individuals in flocks of birds and schools of fish, as proposed by Kennedy and Eberhart [6] in 1995. Generally, a particle swarm S is stochastically initialized with N particles according to the corresponding search range of the problem (N is the swarm size). Each particle x_i ∈ S is a candidate solution to the problem, characterized by its velocity v_i(t) and position x_i(t) at the tth iteration.

Moreover, each particle x_i records the historical positional information in S, i.e., the personal best position of x_i (pbest_i) and the global best position for x_i (gbest_i), which are used to guide the particle to fly to the next position x_i(t+1). Then, the velocity and position of each particle are iteratively updated as follows:

$$v_i(t+1) = w\,v_i(t) + c_1 r_1\,(pbest_i - x_i(t)) + c_2 r_2\,(gbest_i - x_i(t)), \qquad (2)$$

$$x_i(t+1) = x_i(t) + v_i(t+1), \qquad (3)$$

where w is an inertia weight applied to the previous velocity, r1 and r2 are two random numbers uniformly generated in [0, 1], and c1 and c2 are the two learning weight factors for the personal and global best particles [14].
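As a concrete illustration, the following short Python sketch implements the canonical updates of equations (2) and (3) for a whole swarm at once; the array layout, the parameter values, and the function name are assumptions made for this example, not the paper's settings.

```python
import numpy as np

def pso_update(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0, rng=None):
    """Canonical PSO update of equations (2) and (3).
    x, v, pbest: (N, D) arrays; gbest: (D,) array. Parameter values are illustrative."""
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random((x.shape[0], 1))   # one random number per particle, as in equation (2)
    r2 = rng.random((x.shape[0], 1))
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # equation (2)
    x_new = x + v_new                                               # equation (3)
    return x_new, v_new
```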

2.2. Some Current MOPSOs and MOEAs for MaOPs. Recently, a number of nature-inspired optimization algorithms have been proposed for solving MaOPs, such as MOEAs and MOPSOs. Especially, a number of MOEAs have been proposed, which can be classified into three main kinds, i.e., Pareto-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. A lot of approaches have been proposed to enhance their performance for solving MaOPs. For example, the Pareto dominance relation was modified to enhance the convergence pressure of Pareto-based MOEAs, such as ε-dominance [30, 37], θ-dominance, a strengthened dominance relation [38], and a special dominance relation [39]. New association mechanisms for solutions and weight vectors were designed for decomposition-based MOEAs, such as the use of reference vectors in NSGA-III [40], a dynamical decomposition in DDEA [41], and a self-organizing map-based weight vector design in MOEA/D-SOM [42]. A lot of effective and efficient performance indicators were presented for indicator-based MOEAs, such as a unary diversity indicator based on reference vectors [43], an efficient indicator combining the sum of objectives and shift-based density estimation, called ISDE+ [44], and an enhanced inverted generational distance (IGD) indicator [45].

In contrast to the abovementioned MOEAs, only a few MOPSOs have been presented to solve MaOPs. The reason may refer to the three challenges faced by MOPSOs, as introduced in Section 1. The main difference between MOPSOs and MOEAs is their evolutionary search; e.g., most MOEAs adopt differential evolution or simulated binary crossover, while most MOPSOs use the flight of particles to run the search. The environmental selections in the external archives of MOPSOs and MOEAs are actually similar. Inspired by the environmental selection in some MOEAs, MOPSOs can be easily extended to solve MaOPs. Recently, some competitive MOPSOs have been proposed to better solve MaOPs. For example, in pccsAMOPSO [5], a parallel cell coordinate system is employed to update the external archive for maintaining the diversity, which is used to select the global best and personal best particles. In its improved version MaOPSO2s-pccs [36], a two-stage strategy is further presented to emphasize convergence and diversity, respectively, by using a single-objective optimizer and a many-objective optimizer. In NMPSO [24], a balanceable fitness estimation method is proposed by summing different weights of convergence and diversity factors, which aims to offer sufficient selection pressure in the search process; however, this method is very sensitive to the used parameters and requires a high computational cost. In CPSO [29], a coevolutionary PSO with a bottleneck objective learning strategy is designed for solving MaOPs: multiple swarms coevolve in a distributed fashion to maintain diversity for approximating the entire PF, while a novel bottleneck objective learning strategy is used to accelerate convergence for all objectives. In MaPSO [46], a novel MOPSO based on the acute angle is proposed, in which the leader of each particle is selected from its historical particles by using scalar projections, and each particle owns K historical particles (K was set to 3 in its experiments); moreover, the environmental selection in MaPSO is run based on the acute angle of each pair of particles. Although these MOPSOs are effective for solving MaOPs, there is still room for improvement in the design of MOPSOs. Thus, this paper proposes a novel angular-guided MOPSO with an angular-based archive update strategy and a density-based velocity update strategy to alleviate the three abovementioned challenges in Section 1.

3. The Proposed AGPSO Algorithm

In this section, the details of the proposed algorithm are provided. At first, the two main strategies used in AGPSO, i.e., a density-based velocity update strategy and an angular-guided archive update strategy, are respectively introduced. The density-based velocity update strategy considers the density around each particle, which is used to determine whether the local best particle or the global best particle is used to guide the swarm search. In this strategy, the local best particle is selected based on the local angular information, aiming to encourage exploitation in the local region around the current particle, while the global best particle is selected by the convergence performance, which is used to perform exploration in the whole search


space. In order to provide elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. At last, the complete framework of AGPSO is also shown in order to clarify its implementation as well as the other components.

3.1. A Density-Based Velocity Update Strategy. As introduced in Section 2.1, the original velocity update strategy includes two best positional information sources of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles as defined in equation (2), which are used to guide the swarm search. Theoretically, the quality of the generated particles depends on the guiding information from pbest and gbest in the velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (2) may introduce too much disturbance into the swarm search, as most of the particles are usually sparse in the high-dimensional objective space. Thus, the quality of the particles generated by the traditional velocity formula may not be promising, as the large disturbance in the velocity update lowers the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

$$v_i(t+1) = \begin{cases} w\,v_i(t) + c_1 r_1\,(lbest_i - x_i(t)), & D_x \ge medSDE \\ w\,v_i(t) + c_2 r_2\,(gbest_i - x_i(t)), & D_x < medSDE \end{cases} \qquad (4)$$

where t is the iteration number; v_i(t) and v_i(t+1) are the velocities of particle x_i at the tth and (t+1)th iterations, respectively; w is the inertia weight; c1 and c2 are two learning factors; r1 and r2 are two uniformly distributed random numbers in [0, 1]; lbest_i and gbest_i are the positional information of the local-best and global-best particles for x_i, i = 1, 2, ..., N, respectively; D_x is the shift-based density estimation (SDE) of particle x, as defined in [47]; and medSDE is the median value of all D_x in the swarm. In this way, the sparse particles with D_x ≥ medSDE are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles with D_x < medSDE are disturbed by the global best particles to put more attention on convergence. Please note that lbest_i and gbest_i of x_i(t) are selected from the external archive A, as introduced below.
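The sketch below illustrates how the density mask of equation (4) can be computed; it assumes, for simplicity, that the SDE value D_x of [47] is taken as the distance to the nearest shifted neighbor in the normalized objective space, and the array names are illustrative.

```python
import numpy as np

def sde_density(F):
    """Shift-based density estimation in the spirit of [47]; for illustration, the
    density value of a particle is its distance to the nearest *shifted* neighbor
    in objective space (larger value = sparser region)."""
    n = len(F)
    density = np.empty(n)
    for i in range(n):
        shifted = np.maximum(F, F[i])                 # shift neighbors toward F[i]
        dist = np.linalg.norm(shifted - F[i], axis=1)
        dist[i] = np.inf                              # ignore the particle itself
        density[i] = dist.min()
    return density

# branch selection of equation (4); F_norm is an assumed (N, m) array of
# normalized objective values of the swarm
F_norm = np.random.rand(20, 5)
D = sde_density(F_norm)
med_sde = np.median(D)
lbest_guided = D >= med_sde     # sparse particles: exploit around lbest
gbest_guided = ~lbest_guided    # crowded particles: focus on convergence via gbest
```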

For the selection of gbest_i of particle x_i, the selection focuses on the particles with better convergence in the archive. Meanwhile, we also need to ensure a certain perturbation; thus, we choose randomly from the top 20% in terms of convergence performance. That is, gbest_i of particle x_i is randomly selected from the top 20% particles in the external archive with the better convergence performance values. For each particle x, we calculate its convergence performance value using the following formulation:

$$Con(x) = \sum_{k=1}^{m} f_k'(x), \qquad (5)$$

where m is the number of objectives and f'_k(x) returns the kth normalized objective value of x, which is obtained by the following normalization procedure:

$$f_k'(x) = \frac{f_k(x) - z_k^{*}}{z_k^{nadir} - z_k^{*}}, \qquad (6)$$

where z*_k and z^nadir_k are the kth objective values of the ideal and nadir points, respectively, k = 1, 2, ..., m.

For the selection of lbest_i, the current particle x_i in the particle swarm (i = 1, ..., N, where N is the swarm size) is first associated with the closest particle y in the external archive A by comparing the angular distance of x_i to each particle in A; the particle y is then used to find lbest_i in A. Here, the angular distance [41] between a particle x and another particle y, termed AD(x, y), is computed by

$$AD(x, y) = \| \lambda(x) - \lambda(y) \|_2, \qquad (7)$$

where λ(x) is defined as follows

$$\lambda(x) = \frac{1}{\sum_{k=1}^{m} f_k'(x)}\, F'(x), \qquad (8)$$

where F'(x) = (f'_1(x), f'_2(x), ..., f'_m(x))^T and f'_k(x) is calculated by equation (6). Then, lbest_i is selected from the T angular-distance-based closest neighbors of y as the one with the best convergence value by equation (5). Specifically, each particle of A owns its T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of x_i in A is stored in B(i), where B(i) = {i_1, i_2, ..., i_T} and (x_{i1}, x_{i2}, ..., x_{iT}) are the T angular-distance-based closest solutions to x_i by equation (7).
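A compact Python sketch of equations (5)-(8), operating on a whole population of normalized objective vectors at once, is given below; the function names and the small worked example are illustrative assumptions.

```python
import numpy as np

def normalize(F, z_star, z_nadir):
    """Equation (6): objective-wise normalization with the ideal and nadir points."""
    return (F - z_star) / (z_nadir - z_star)

def convergence(F_norm):
    """Equation (5): Con(x) = sum of normalized objectives (smaller is better)."""
    return F_norm.sum(axis=1)

def direction(F_norm):
    """Equation (8): lambda(x) = F'(x) / sum_k f'_k(x)."""
    return F_norm / F_norm.sum(axis=1, keepdims=True)

def angular_distance_matrix(F_norm):
    """Equation (7): AD(x, y) = ||lambda(x) - lambda(y)||_2 for every pair."""
    lam = direction(F_norm)
    return np.linalg.norm(lam[:, None, :] - lam[None, :, :], axis=2)

# worked example with two normalized bi-objective vectors
F_norm = np.array([[0.2, 0.8],
                   [0.6, 0.6]])
print(convergence(F_norm))              # [1.  1.2]
print(angular_distance_matrix(F_norm))  # AD between directions (0.2, 0.8) and (0.5, 0.5), about 0.424
```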

Then, for each particle in S, lbest_i and gbest_i are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.

3.2. An Angular-Guided Archive Update Strategy. After performing the abovementioned density-based velocity update for the swarm search, each particle flies to its new position to produce the new particle swarm S. In order to obtain a population consisting of a fixed number of elitist solutions, which can maintain an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy following the principle of distributivity first and convergence second, and the pseudocode for updating the external archive is given in Algorithm 2.


In Algorithm 2, the inputs are S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is set to ∅ in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time, the particle pair (x_h, x_u)

with the minimum angular distance in U is found in line 4, which is computed as

$$(x_h, x_u) = \underset{(x_h, x_u)}{\arg\min} \{ AD(x_h, x_u) \mid x_h, x_u \in U \}, \qquad (9)$$

where AD(x, y) is computed as in equation (7). Then, the particle x* in (x_h, x_u) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of a particle x in (x_h, x_u) to U, termed AD(x, U), is computed as

$$AD(x, U) = \min \{ AD(x, y) \mid y \in U,\ y \notin (x_h, x_u) \}, \quad x \in (x_h, x_u), \qquad (10)$$

where AD(x, y) is computed as in equation (7). N subsets S1, S2, ..., SN are obtained in lines 8-11, where particle x_i in U is saved into S_i, i = 1, 2, ..., N. In lines 12-15, each particle x in A is associated with the particle x_t in U that has the minimum angular distance to x, and then this particle x of A is added into the subset S_t, t ∈ {1, 2, ..., N}, in line 14. Then, A is set to empty in line 16. Finally, in each subset S_i, i = 1, 2, ..., N, the particle with the best convergence performance computed by equation (5) is added into A in lines 17-20. The final A is returned in line 21.

To facilitate the understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten individuals x1, x2, ..., x10 and their mapping solutions, which map the individuals to the hyperplane f'_1 + f'_2 = 1 (for calculating the angular distance) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half the population size), i.e., x1, x4, x7, x9, x10, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b). Then, the remaining particles, i.e., x2, x3, x5, x6, x8, are preserved in A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with its minimum angular-distance particle in Figure 1(b). After that, five subsets S1, S2, ..., S5 are obtained, where S1 preserves x1 and its associated particles in A, i.e., x2. Similarly, S2 preserves x3 and x4; S3 preserves x5, x6, and x7; S4 preserves x8 and x9; and S5 preserves x10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).

33 8e Complete Algorithm of AGPSO e above sectionshave introduced the main components of AGPSO whichinclude the velocity update strategy and archive updateoperator In addition the evolutionary search is also pro-posed on the external archive In order to describe theremaining operator and to facilitate the implementation ofAGPSO the pseudocode of its complete algorithm is pro-vided in Algorithm 3 e initialization procedure is firstactivated in lines 1ndash6 of Algorithm 3 For each particle in Sits positional information is randomly generated and itsvelocity is set to 0 in line 3 After that the objectives of eachparticle are evaluated in line 4 e external archive A isupdated with Algorithm 2 in line 7 and sorted in the as-cending order by calculating the convergence fitness of eachparticle using equation (5) in line 8 After that the iterationcounter t is increased by one en AGPSO steps into themain loop which contains the particle search and theevolutionary search until the maximum number of itera-tions is reached

In the main loop the SDE distance of each particle in S iscalculated and these particles are sorted by the SDE distanceto get median SDE distance of particles in S which is calledmedSDE as in lines 11-12 en the PSO-based search isperformed in line 13 with density-based velocity update ofAlgorithm 1 After that the angular-guided archive updatestrategy of Algorithm 2 is executed in line 14 with the inputsS A andN (the sizes of S and A are bothN)en in line 15the evolutionary strategy is applied on A to optimize theswarm leaders providing another search strategy to coop-erate with the PSO-based search
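Because the SDE distance is only referenced here, the following hedged sketch may help. It assumes, following the shift-based density estimation of SPEA2+SDE [47], that the SDE distance of a particle is its distance to the nearest neighbor after every other particle's objectives have been shifted to be no better than its own; the exact variant used inside AGPSO may differ.

```python
import numpy as np

def sde_distance(objs, p):
    """Assumed SDE distance of particle p: distance to the nearest neighbor after
    shifting each other particle's objectives to max(f_i(q), f_i(p))."""
    shifted = np.maximum(objs, objs[p])        # shift all particles with respect to p
    shifted = np.delete(shifted, p, axis=0)    # exclude p itself
    return float(np.min(np.linalg.norm(shifted - objs[p], axis=1)))

def median_sde(objs):
    """medSDE as used in Algorithm 3: the median SDE distance over the swarm."""
    return float(np.median([sde_distance(objs, p) for p in range(len(objs))]))
```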

(1) sort the external archive A in ascending order based on the convergence value of equation (5)
(2) for each solution x_i ∈ A, i ∈ {1, 2, ..., N}
(3)     identify its T-neighborhood index set as B(i)
(4) end for
(5) for each particle x_i ∈ S, i ∈ {1, 2, ..., N}
(6)     associate x_i with the angular-guided closest particle y ∈ A, whose index in A is j
(7)     get the T nearest neighbors of y in A by the neighborhood index set B(j)
(8)     sort the T neighbors of y in ascending order based on the convergence value of equation (5)
(9)     select the first angle-distance-based neighboring particle of y as lbest_i
(10)    select randomly from the top 20% particles in A as gbest_i
(11)    update the velocity v_i of x_i by equation (4)
(12)    update the position x_i by equation (3)
(13)    evaluate the objective values of x_i
(14) end for
(15) return S

Algorithm 1: Density-based velocity update (S, A, medSDE).
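The leader-selection part of Algorithm 1 (lines 1-10) can be sketched in Python as follows. The velocity formula of equation (4), the position update of equation (3), and the convergence fitness of equation (5) are not reproduced in this section, so they are abstracted away; the neighborhood B(i) is assumed to be built from angular distances, and a smaller convergence value is assumed to be better (consistent with the ascending sort).

```python
import random
import numpy as np

def select_leaders(S_objs, A_objs, conv, T, ad=None):
    """Sketch of lines 1-10 of Algorithm 1: pick (lbest, gbest) archive indices
    for each particle. conv() stands in for equation (5) and ad() for equation (7)."""
    ad = ad or (lambda a, b: np.arccos(np.clip(
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12), -1, 1)))
    order = sorted(range(len(A_objs)), key=lambda i: conv(A_objs[i]))   # line 1
    top20 = order[:max(1, len(order) // 5)]                             # top 20% of A
    leaders = []
    for fx in S_objs:
        j = min(range(len(A_objs)), key=lambda i: ad(fx, A_objs[i]))    # line 6
        nbrs = sorted(range(len(A_objs)),
                      key=lambda i: ad(A_objs[j], A_objs[i]))[1:T + 1]  # line 7: B(j)
        lbest = min(nbrs, key=lambda i: conv(A_objs[i]))                # lines 8-9
        gbest = random.choice(top20)                                    # line 10
        leaders.append((lbest, gbest))
    return leaders  # velocity/position updates (eqs. (4) and (3)) would follow
```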


Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. The angular-guided archive update strategy of Algorithm 2 is then carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles.

[Figure 1, panels (a)-(e): the obtained solutions x1-x10 and their mapping solutions on the normalized hyperplane, showing the particles kept in U, the particles moved to A, the association into subsets, and the final selection.]

Figure 1: An example illustrating the process of the proposed angular-guided archive update strategy.

(1) combine S and A into a union set U, and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)     find the particle pair (x_h, x_u) in U with the minimum angular distance among all particle pairs in U
(5)     find x* in (x_h, x_u) such that x* has the smaller angular distance to U, computed by equation (10)
(6)     add x* to A, and then delete x* from U
(7) end for
(8) for each particle x_i ∈ U
(9)     initialize an empty subset S_i
(10)    add the particle x_i into S_i
(11) end for
(12) for each particle x_i ∈ A
(13)    associate x_i with the particle x_t in U that has the smallest angular distance to x_i
(14)    add x_i into S_t
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)    find the particle x* of S_i that has the best convergence performance computed by equation (5)
(19)    add x* into A
(20) end for
(21) return A

Algorithm 2: Angular-guided archive update (S, A, N).
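A compact Python sketch of Algorithm 2 is given below. It assumes the objective vectors have already been normalized by equation (6), abstracts the convergence fitness of equation (5) as conv() with smaller values being better, and reuses an angular-distance function ad() such as the one sketched earlier; it illustrates the described procedure and is not the authors' code.

```python
def angular_guided_archive_update(S, A, N, objs, conv, ad):
    """Sketch of Algorithm 2. S and A are lists of particle indices into objs,
    objs holds normalized objective vectors, conv() is eq. (5), ad() is eq. (7)."""
    U = list(S) + list(A)                          # line 1
    removed = []                                   # crowded particles moved out of U
    for _ in range(N):                             # lines 3-7
        h, u = min(((a, b) for i, a in enumerate(U) for b in U[i + 1:]),
                   key=lambda p: ad(objs[p[0]], objs[p[1]]))            # eq. (9)
        rest = [y for y in U if y not in (h, u)]
        dist_to_U = lambda x: min(ad(objs[x], objs[y]) for y in rest)   # eq. (10)
        star = h if dist_to_U(h) < dist_to_U(u) else u
        removed.append(star)
        U.remove(star)
    subsets = {x: [x] for x in U}                  # lines 8-11
    for x in removed:                              # lines 12-15
        t = min(U, key=lambda y: ad(objs[x], objs[y]))
        subsets[t].append(x)
    # lines 16-21: from each subset keep the particle with the best convergence
    return [min(members, key=lambda x: conv(objs[x])) for members in subsets.values()]
```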


The iteration counter t is increased by 2 in line 19, because one PSO-based search and one evolutionary search are carried out for each particle in each round of the main loop. The abovementioned operations are repeated until the maximum number of iterations (tmax) is reached. At last, the final particles in A are returned as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional optimization.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, considering both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader positional information of particles from their historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained test problems are considered here to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k - 1, where n and m are, respectively, the number of decision variables and the number of objectives; as suggested in [31], k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters; as recommended in [51], k is set to 2 × (m - 1) and l is set to 20. The resulting dimensionalities are illustrated in the snippet below.
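The short snippet below (illustrative only) simply reproduces the arithmetic implied by these settings.

```python
def maf_dimensions(m, k=10):
    """MaF1-MaF7: n = m + k - 1 decision variables, with k = 10 as suggested in [31]."""
    return m + k - 1

def wfg_dimensions(m, l=20):
    """WFG1-WFG9: k = 2*(m-1) position-related plus l = 20 distance-related variables."""
    k = 2 * (m - 1)
    return k + l

for m in (5, 8, 10):
    print(m, maf_dimensions(m), wfg_dimensions(m))   # e.g., m = 5 gives 14 and 28
```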

(1) let t = 0, A = ∅, initialize the particles S = {x_1, x_2, ..., x_N}, and let T be the neighborhood size
(2) for i = 1 to N
(3)     randomly initialize the position x_i and set v_i = 0 for x_i
(4)     evaluate the objective values of x_i
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)    calculate the SDE of each particle in S
(12)    sort the particles in S to get medSDE
(13)    Snew = Density-based velocity update (S, A, medSDE)
(14)    A = Angular-guided archive update (Snew, A, N)
(15)    apply the evolutionary search strategy on A to get a new swarm Snew
(16)    evaluate the objectives of the new particles in Snew
(17)    A = Angular-guided archive update (Snew, A, N)
(18)    sort A in ascending order based on equation (5)
(19)    t = t + 2
(20) end while
(21) output A

Algorithm 3: The complete algorithm of AGPSO.
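Putting the components together, the control flow of Algorithm 3 can be sketched as follows. Every helper referenced here (swarm initialization, SDE distance, the density-based velocity update of Algorithm 1, the archive update of Algorithm 2, the evolutionary search on A, and the convergence fitness of equation (5)) is assumed to be supplied by the caller; the sketch only mirrors the loop structure and iteration counting.

```python
def agpso(problem, N, t_max, init_swarm, evaluate, sde, velocity_update_swarm,
          archive_update, evolutionary_search, conv):
    """Control-flow sketch of Algorithm 3 (all helper callables are assumed)."""
    S = init_swarm(problem, N)                   # lines 1-5: random positions, zero velocities
    A = init_swarm(problem, N)                   # line 6: random initial archive
    A = archive_update(S, A, N)                  # line 7: Algorithm 2
    A.sort(key=conv)                             # line 8: ascending convergence fitness (eq. 5)
    t = 1                                        # line 9
    while t < t_max:                             # line 10
        med_sde = sorted(sde(p, S) for p in S)[len(S) // 2]   # lines 11-12: medSDE
        S_new = velocity_update_swarm(S, A, med_sde)          # line 13: Algorithm 1
        A = archive_update(S_new, A, N)                       # line 14
        S_new = evolutionary_search(A)                        # line 15
        evaluate(S_new)                                       # line 16
        A = archive_update(S_new, A, N)                       # line 17
        A.sort(key=conv)                                      # line 18
        t += 2                                                # line 19
    return A                                     # line 21: approximation of the PF
```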


4.1.3. Performance Metrics. The goals in MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniformity and spread of the distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric measures the volume of the objective space between a prespecified reference point z^r = (z^r_1, ..., z^r_m)^T, which is dominated by all points on the true PF, and the obtained solution set S. The HV metric is calculated as follows:

$$HV(S) = \mathrm{Vol}\Bigl(\bigcup_{x \in S} [f_1(x), z^r_1] \times \cdots \times [f_m(x), z^r_m]\Bigr), \tag{11}$$

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f_1^max(x), f_2^max(x), ..., f_m^max(x)), with f_k^max (k = 1, 2, ..., m) being the maximum value of the kth objective on the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted as the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger HV value indicates a better approximation of the true PF.
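Exact HV computation becomes expensive as m grows; purely as an illustration of what equation (11) measures, the sketch below estimates HV by Monte Carlo sampling inside the unit box obtained after the 1.1 × f^max normalization. This estimator is for illustration only and is not the HV procedure used to produce the reported results.

```python
import numpy as np

def hv_estimate(points, f_max, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the HV in equation (11).
    points: objective vectors to be minimized; f_max: per-objective maxima of the true PF."""
    pts = np.asarray(points) / (1.1 * np.asarray(f_max))   # normalization suggested in [51]
    ref = np.ones(pts.shape[1])                            # reference point (1.0, ..., 1.0)
    pts = pts[np.all(pts < ref, axis=1)]                   # discard points not dominating the reference
    if len(pts) == 0:
        return 0.0
    rng = np.random.default_rng(seed)
    samples = rng.uniform(0.0, 1.0, size=(n_samples, pts.shape[1]))
    covered = np.zeros(n_samples, dtype=bool)
    for p in pts:                                          # a sample counts if some solution dominates it
        covered |= np.all(samples >= p, axis=1)
    return covered.mean()                                  # the sampled box has unit volume
```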

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are employed in dMOPSO and MOEA/DD, the population sizes of these two algorithms are set equal to the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. In order to ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. Here, pc and pm are the crossover and mutation probabilities, and ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c1, c2, and c3 are sampled in [1.5, 2.5] and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, σ is the condition for determining whether a solution is searching in a similar direction, which is set to π/2(N + 1).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "-", and "~" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO under this statistical test, respectively.
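This statistical protocol can be reproduced with standard tooling; for instance, the following sketch uses SciPy's rank-sum test to compare 30 HV values of AGPSO against those of a competitor on one problem instance (the arrays are placeholders, not the reported data).

```python
import numpy as np
from scipy.stats import ranksums

def compare_hv(hv_agpso, hv_other, alpha=0.05):
    """Wilcoxon rank-sum test at the 0.05 level; returns '+', '-' or '~'
    from the competitor's point of view, mirroring the notation of the tables."""
    _, p = ranksums(hv_other, hv_agpso)
    if p >= alpha:
        return "~"                      # no statistically significant difference
    return "+" if np.mean(hv_other) > np.mean(hv_agpso) else "-"

# placeholder samples standing in for 30 independent runs
rng = np.random.default_rng(1)
print(compare_hv(rng.normal(0.78, 0.01, 30), rng.normal(0.60, 0.02, 30)))
```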

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared with four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "-", and "~" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO under the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparisons of AGPSO with the four MOPSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems, as it is best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second-to-last row of Table 2.

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/2(N + 1)
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


The last row of Table 2 gives the one-to-one comparisons of AGPSO with each MOPSO on WFG, where "-/~/+" denotes the numbers of test problems on which the corresponding algorithm performs worse than, similarly to, and better than AGPSO, respectively.

It is found that AGPSO shows a clear advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO performs better than the other MOPSOs except MaPSO; AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO performs best only with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional MOPSOs on WFG. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively; conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with the four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second-to-last row in Table 3, 12 best results out of 21 test cases are obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], which makes the shape of a set of reference points unable to fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, requiring that all objectives be optimized

Table 2: Comparison results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01(4.27e-02) | 5.97e-01(2.89e-02)- | 3.34e-01(2.58e-02)- | 2.98e-01(3.44e-03)- | 3.00e-01(1.01e-03)-
WFG1 | 8 | 8.75e-01(3.77e-02) | 7.00e-01(1.38e-01)- | 2.74e-01(2.45e-02)- | 2.35e-01(6.26e-03)- | 2.50e-01(1.54e-03)-
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.07e-01(1.58e-01)- | 2.53e-01(1.98e-02)- | 2.13e-01(9.18e-03)- | 2.30e-01(1.29e-03)-
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.69e-01(5.44e-03)- | 9.70e-01(4.70e-03)- | 8.94e-01(1.28e-02)- | 8.85e-01(1.31e-02)-
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.83e-01(3.56e-03)+ | 9.84e-01(2.86e-03)+ | 8.92e-01(1.48e-02)- | 8.48e-01(2.52e-02)-
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.91e-01(4.85e-03)~ | 9.90e-01(1.48e-03)- | 9.00e-01(1.76e-02)- | 8.54e-01(2.55e-02)-
WFG3 | 5 | 6.59e-01(5.16e-03) | 6.13e-01(1.94e-02)- | 6.12e-01(1.64e-02)- | 5.89e-01(1.05e-02)- | 5.75e-01(7.32e-03)-
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(1.52e-02)- | 5.96e-01(1.88e-02)- | 4.94e-01(4.58e-02)- | 5.84e-01(1.51e-02)-
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.83e-01(2.73e-02)- | 5.97e-01(2.58e-02)- | 3.56e-01(9.58e-02)- | 5.92e-01(1.52e-02)-
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.64e-01(6.09e-03)- | 7.22e-01(2.96e-02)- | 6.51e-01(1.08e-02)- | 5.09e-01(1.73e-02)-
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.62e-01(1.21e-02)- | 7.98e-01(1.76e-02)- | 3.88e-01(7.86e-02)- | 5.26e-01(2.13e-02)-
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.82e-01(8.54e-03)- | 8.27e-01(1.32e-02)- | 3.15e-01(1.34e-01)- | 5.52e-01(2.19e-02)-
WFG5 | 5 | 7.41e-01(6.12e-03) | 7.35e-01(3.73e-03)- | 6.63e-01(1.18e-02)- | 5.85e-01(1.53e-02)- | 4.30e-01(1.18e-02)-
WFG5 | 8 | 8.61e-01(4.35e-03) | 8.32e-01(6.58e-03)- | 6.91e-01(1.12e-02)- | 1.81e-01(8.43e-02)- | 4.52e-01(1.08e-02)-
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.65e-01(7.92e-03)- | 6.99e-01(1.76e-02)- | 2.43e-02(1.54e-02)- | 4.65e-01(1.24e-02)-
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.24e-01(3.96e-03)- | 7.09e-01(4.50e-03)- | 6.74e-01(7.93e-03)- | 5.43e-01(1.66e-02)-
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.08e-01(5.70e-03)- | 8.03e-01(2.66e-03)- | 4.89e-01(4.87e-02)- | 6.49e-01(8.38e-03)-
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.34e-01(2.03e-03)- | 8.32e-01(1.80e-03)- | 4.66e-01(2.96e-02)- | 7.03e-01(1.37e-02)-
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.96e-01(2.98e-03)+ | 7.89e-01(4.61e-03)~ | 5.39e-01(2.72e-02)- | 4.44e-01(2.06e-02)-
WFG7 | 8 | 9.20e-01(3.39e-03) | 9.04e-01(3.71e-03)- | 8.73e-01(1.40e-02)- | 2.25e-01(3.93e-02)- | 4.73e-01(1.23e-02)-
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.36e-01(6.97e-03)- | 9.17e-01(1.17e-02)- | 2.07e-01(6.36e-02)- | 5.14e-01(2.10e-02)-
WFG8 | 5 | 6.63e-01(3.75e-03) | 6.73e-01(5.41e-03)+ | 6.29e-01(1.34e-02)- | 4.27e-01(1.57e-02)- | 3.72e-01(2.09e-02)-
WFG8 | 8 | 7.85e-01(7.46e-03) | 7.97e-01(3.79e-02)~ | 6.91e-01(2.13e-02)- | 9.55e-02(3.20e-02)- | 4.11e-01(9.29e-03)-
WFG8 | 10 | 8.36e-01(1.30e-02) | 8.62e-01(4.12e-02)+ | 7.33e-01(2.04e-02)- | 7.73e-02(1.27e-02)- | 4.52e-01(1.90e-02)-
WFG9 | 5 | 6.57e-01(3.82e-03) | 7.35e-01(7.32e-03)+ | 6.23e-01(7.01e-03)- | 5.97e-01(1.99e-02)- | 4.46e-01(1.61e-02)-
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.65e-01(6.80e-02)+ | 6.55e-01(1.17e-02)- | 2.73e-01(6.06e-02)- | 4.55e-01(2.39e-02)-
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.47e-01(6.72e-02)- | 6.56e-01(1.30e-02)- | 2.24e-01(6.96e-02)- | 4.76e-01(1.23e-02)-
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | | | 19-/2~/6+ | 25-/1~/1+ | 27-/0~/0+ | 27-/0~/0+


simultaneously in order to reach the true PF, and the performance of AGPSO is the best on MaF2 for all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the compared MOPSOs; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the MOPSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best on it with all numbers of objectives.

In the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide sufficient search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, but AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. The preceding sections have experimentally shown that AGPSO performs better than the four existing MOPSOs on most of the test problems (MaF and WFG). However, there are not many existing PSO algorithms designed for MaOPs, and a comparison with PSO algorithms alone does not fully reflect the merits of AGPSO; thus, we further compare AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row in Table 4, 22 best results out of 27 test cases are obtained by AGPSO, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO shows

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.27e-02(5.81e-05)- | 1.15e-02(2.80e-04)- | 1.06e-02(1.26e-04)- | 6.83e-03(5.14e-04)-
MaF1 | 8 | 4.23e-05(1.80e-06) | 3.64e-05(7.99e-07)- | 2.79e-05(2.66e-06)- | 1.30e-05(1.49e-05)- | 1.21e-05(1.88e-06)-
MaF1 | 10 | 6.16e-07(3.49e-08) | 4.75e-07(1.83e-08)- | 3.83e-07(2.83e-08)- | 8.82e-08(8.92e-08)- | 1.46e-07(2.86e-08)-
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.63e-01(8.71e-04)- | 2.38e-01(3.57e-03)- | 2.50e-01(1.55e-03)- | 2.12e-01(4.05e-03)-
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.36e-01(2.32e-03)- | 2.09e-01(3.04e-03)- | 2.02e-01(2.13e-03)- | 1.74e-01(4.46e-03)-
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.23e-01(2.81e-03)- | 2.06e-01(5.67e-03)- | 2.05e-01(2.15e-03)- | 1.88e-01(4.28e-03)-
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.71e-01(1.47e-02)- | 9.26e-01(5.29e-02)- | 9.38e-01(8.27e-03)- | 7.34e-01(4.50e-01)-
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.99e-01(7.50e-04)+ | 9.84e-01(1.48e-02)- | 9.62e-01(1.44e-02)- | 0.00e+00(0.00e+00)-
MaF3 | 10 | 9.85e-01(2.26e-03) | 1.00e+00(1.92e-04)+ | 9.95e-01(6.00e-03)~ | 9.71e-01(9.43e-03)- | 0.00e+00(0.00e+00)-
MaF4 | 5 | 1.29e-01(1.22e-03) | 5.43e-02(3.25e-02)- | 1.01e-01(1.07e-02)- | 4.21e-02(4.98e-03)- | 7.96e-02(3.93e-03)-
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.29e-03(7.77e-04)- | 3.18e-03(6.84e-04)- | 1.24e-04(7.69e-05)- | 1.82e-03(3.39e-04)-
MaF4 | 10 | 5.66e-04(1.89e-05) | 2.10e-04(1.26e-04)- | 2.36e-04(6.20e-05)- | 1.55e-06(8.26e-07)- | 1.12e-04(2.32e-05)-
MaF5 | 5 | 8.00e-01(2.70e-03) | 8.11e-01(1.33e-03)+ | 7.93e-01(4.06e-03)- | 2.69e-01(1.30e-01)- | 6.68e-01(2.99e-02)-
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.35e-01(1.40e-03)+ | 9.29e-01(4.41e-03)- | 2.01e-01(6.84e-02)- | 3.59e-01(1.30e-01)-
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.64e-01(1.61e-03)- | 9.65e-01(1.90e-03)- | 2.40e-01(3.18e-02)- | 1.60e-01(1.00e-01)-
MaF6 | 5 | 1.30e-01(6.22e-05) | 9.37e-02(4.55e-03)- | 1.10e-01(2.78e-03)- | 1.27e-01(3.33e-05)- | 1.30e-01(1.40e-05)+
MaF6 | 8 | 1.06e-01(1.66e-05) | 9.83e-02(1.15e-02)- | 9.50e-02(2.86e-03)- | 1.04e-01(3.24e-05)- | 9.83e-02(1.26e-03)-
MaF6 | 10 | 9.22e-02(1.18e-02) | 9.12e-02(0.00e+00)~ | 9.24e-02(1.53e-03)~ | 1.00e-01(5.94e-06)~ | 8.06e-02(1.42e-02)-
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.29e-01(1.70e-03)+ | 2.95e-01(6.58e-03)+ | 2.26e-01(2.42e-03)- | 2.37e-01(6.73e-03)-
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.89e-01(2.36e-03)+ | 2.26e-01(6.39e-03)+ | 2.88e-02(9.26e-02)- | 6.78e-02(9.38e-02)-
MaF7 | 10 | 1.83e-01(3.18e-02) | 2.64e-01(1.78e-03)+ | 2.05e-01(5.98e-03)+ | 6.37e-03(5.21e-04)- | 5.57e-02(1.25e-01)-
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | | | 14-/0~/7+ | 16-/1~/4+ | 20-/0~/1+ | 20-/0~/1+


its superiority with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages; AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE have better performance than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 of the 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best at tackling these test problems among the MOEAs mentioned above. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all numbers of objectives, and MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE while being outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives and worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to conclude that AGPSO shows better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e-01(4.27e-02) | 3.64e-01(4.02e-02)- | 5.73e-01(4.76e-02)- | 4.94e-01(2.22e-02)- | 7.26e-01(2.27e-02)-
WFG1 | 8 | 8.75e-01(3.77e-02) | 5.78e-01(3.32e-02)- | 7.75e-01(3.51e-02)- | 3.48e-01(9.75e-02)- | 8.36e-01(1.00e-02)-
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.10e-01(3.59e-02)- | 8.73e-01(1.59e-02)- | 3.40e-01(6.56e-02)- | 8.55e-01(8.50e-03)-
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.62e-01(7.36e-03)- | 9.63e-01(6.36e-03)- | 9.70e-01(1.14e-03)- | 9.60e-01(3.83e-03)~
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.75e-01(6.91e-03)- | 9.01e-01(1.65e-01)- | 9.42e-01(2.37e-02)- | 9.76e-01(3.56e-03)-
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.86e-01(4.59e-03)- | 9.34e-01(3.53e-02)- | 9.76e-01(1.82e-02)- | 9.80e-01(3.09e-03)-
WFG3 | 5 | 6.59e-01(5.16e-03) | 5.71e-01(1.73e-02)- | 6.34e-01(6.13e-03)- | 5.69e-01(1.07e-02)- | 5.77e-01(2.54e-02)-
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(2.83e-02)- | 5.69e-01(6.73e-02)- | 5.05e-01(1.23e-02)- | 5.55e-01(4.06e-02)-
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.63e-01(3.05e-02)- | 6.20e-01(1.76e-02)- | 4.98e-01(1.55e-02)- | 5.42e-01(3.89e-02)-
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.13e-01(1.06e-02)- | 7.61e-01(4.28e-03)- | 7.83e-01(4.35e-03)+ | 7.39e-01(5.10e-03)-
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.05e-01(9.20e-03)- | 7.89e-01(2.07e-02)- | 6.81e-01(2.59e-02)- | 7.85e-01(1.07e-02)-
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.43e-01(9.97e-03)- | 8.97e-01(1.14e-02)- | 8.53e-01(2.69e-02)- | 8.20e-01(8.19e-03)-
WFG5 | 5 | 7.41e-01(6.12e-03) | 6.97e-01(7.19e-03)- | 7.34e-01(3.91e-03)- | 7.41e-01(3.39e-03)~ | 7.08e-01(6.05e-03)-
WFG5 | 8 | 8.61e-01(4.35e-03) | 7.91e-01(1.00e-02)- | 7.93e-01(8.22e-03)- | 6.66e-01(2.51e-02)- | 7.55e-01(1.42e-02)-
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.25e-01(4.03e-03)- | 8.69e-01(3.20e-03)- | 8.03e-01(1.80e-02)- | 7.98e-01(9.63e-03)-
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.00e-01(1.76e-02)- | 7.42e-01(1.06e-02)- | 7.40e-01(1.03e-02)- | 7.19e-01(1.01e-02)-
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.18e-01(9.27e-03)- | 8.16e-01(1.16e-02)- | 7.23e-01(2.91e-02)- | 7.79e-01(1.13e-02)-
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.49e-01(1.10e-02)- | 8.89e-01(1.08e-02)- | 8.80e-01(1.29e-02)- | 8.17e-01(1.80e-02)-
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.44e-01(9.10e-03)- | 7.90e-01(2.04e-03)~ | 7.85e-01(3.80e-03)- | 7.62e-01(4.15e-03)-
WFG7 | 8 | 9.20e-01(3.39e-03) | 8.71e-01(5.41e-03)- | 8.52e-01(1.16e-02)- | 7.44e-01(3.15e-02)- | 8.22e-01(1.28e-02)-
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.08e-01(5.61e-03)- | 9.39e-01(2.22e-03)- | 9.22e-01(1.50e-02)- | 8.66e-01(6.63e-03)-
WFG8 | 5 | 6.63e-01(3.75e-03) | 5.98e-01(1.27e-02)- | 6.66e-01(5.58e-03)+ | 6.81e-01(3.20e-03)+ | 6.60e-01(5.95e-03)-
WFG8 | 8 | 7.85e-01(7.46e-03) | 6.25e-01(2.83e-02)- | 6.61e-01(1.85e-02)- | 6.72e-01(2.19e-02)- | 7.48e-01(1.19e-02)-
WFG8 | 10 | 8.36e-01(1.30e-02) | 6.73e-01(3.66e-02)- | 8.02e-01(1.11e-02)- | 8.12e-01(7.59e-03)- | 7.88e-01(9.94e-03)-
WFG9 | 5 | 6.57e-01(3.82e-03) | 6.43e-01(1.42e-02)- | 6.71e-01(4.34e-02)+ | 6.55e-01(4.42e-03)- | 6.96e-01(8.52e-03)+
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.00e-01(1.12e-02)- | 7.12e-01(3.63e-02)- | 6.30e-01(3.35e-02)- | 7.38e-01(5.00e-02)+
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.20e-01(2.35e-02)- | 7.69e-01(5.70e-02)+ | 6.74e-01(2.56e-02)- | 7.75e-01(6.87e-03)+
Best/all | | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
Worse/similar/better | | | 27-/0~/0+ | 23-/1~/3+ | 24-/1~/2+ | 23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantages of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results in terms of HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is only defeated by VaEA in 7 cases. We compare the performance of our algorithm against VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7) and on problems that share PF shapes but have different characteristics, such as the difficulty of convergence and deceptive PF properties (WFG1-WFG9). Therefore, the angular-guided-based method embedded into PSO is effective in improving the handling of MaOPs. The angular-guided-based method is adopted as a distribution indicator to distinguish the similarity of particles, and it is transformed from the vector angle. The traditional vector angle performs better with concave PFs but worse with convex or linear PFs, as observed for MOEAC [52]; the angular-guided-based method improves this situation, performing better than the traditional vector angle with convex or linear PFs while not behaving badly with concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we consider that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also adds an evolutionary operator on the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually support the abovementioned discussion, to better visualize the performance, and to show the distribution of solutions in high-dimensional objective spaces, some final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that our boundary cannot be found completely, but the approximate PF calculated by the proposed AGPSO adheres more closely to the real PF. This also supports the correctness of our analysis of the environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.11e-02(2.73e-04)- | 5.66e-03(2.72e-05)- | 2.96e-03(8.34e-05)- | 1.29e-02(1.00e-04)-
MaF1 | 8 | 4.23e-05(1.80e-06) | 2.69e-05(2.04e-06)- | 2.93e-05(2.03e-06)- | 4.38e-06(4.77e-07)- | 3.87e-05(9.83e-07)-
MaF1 | 10 | 6.16e-07(3.49e-08) | 3.60e-07(2.21e-08)- | 3.39e-07(7.07e-08)- | 3.45e-08(2.21e-08)- | 5.30e-07(2.16e-08)-
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.35e-01(4.80e-03)- | 2.33e-01(2.82e-03)- | 2.34e-01(1.48e-03)- | 2.62e-01(1.26e-03)-
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.00e-01(7.59e-03)- | 1.71e-01(1.82e-02)- | 1.88e-01(1.39e-03)- | 2.36e-01(3.06e-03)-
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.01e-01(1.25e-02)- | 1.94e-01(7.96e-03)- | 1.91e-01(3.21e-03)- | 2.30e-01(2.48e-03)-
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.97e-01(1.13e-03)+ | 9.93e-01(2.82e-03)+ | 9.98e-01(9.54e-05)+ | 9.92e-01(2.12e-03)+
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.98e-01(3.58e-04)+ | 9.92e-01(3.83e-03)- | 1.00e+00(9.42e-07)+ | 9.97e-01(1.66e-03)+
MaF3 | 10 | 9.85e-01(2.26e-03) | 9.99e-01(9.36e-04)+ | 9.70e-01(5.68e-03)- | 1.00e+00(3.96e-08)+ | 9.99e-01(4.94e-04)+
MaF4 | 5 | 1.29e-01(1.22e-03) | 1.17e-01(3.62e-03)- | 7.32e-02(8.89e-03)- | 0.00e+00(0.00e+00)- | 1.07e-01(5.37e-03)-
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.44e-03(3.28e-04)- | 1.32e-03(8.49e-04)- | 0.00e+00(0.00e+00)- | 3.62e-04(7.80e-05)-
MaF4 | 10 | 5.66e-04(1.89e-05) | 1.48e-04(9.76e-06)- | 2.29e-04(3.57e-05)- | 0.00e+00(0.00e+00)- | 4.23e-06(1.65e-06)-
MaF5 | 5 | 8.00e-01(2.70e-03) | 7.92e-01(1.85e-03)- | 8.13e-01(4.51e-05)+ | 7.73e-01(3.13e-03)- | 7.84e-01(4.38e-03)-
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.08e-01(3.80e-03)- | 9.26e-01(9.79e-05)- | 8.73e-01(6.71e-03)- | 7.36e-01(9.62e-02)-
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.43e-01(6.87e-03)- | 9.70e-01(4.61e-05)+ | 9.28e-01(8.28e-04)- | 7.12e-01(1.18e-01)-
MaF6 | 5 | 1.30e-01(6.22e-05) | 1.30e-01(7.51e-05)+ | 1.17e-01(3.12e-03)- | 8.19e-02(2.96e-02)- | 1.29e-01(1.84e-04)-
MaF6 | 8 | 1.06e-01(1.66e-05) | 1.06e-01(2.20e-05)+ | 9.18e-02(1.21e-03)- | 2.48e-02(1.29e-02)- | 8.81e-02(2.26e-04)-
MaF6 | 10 | 9.22e-02(1.18e-02) | 7.05e-02(3.95e-02)- | 4.57e-02(4.66e-02)- | 4.83e-02(5.24e-02)- | 2.71e-02(1.00e-01)-
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.03e-01(3.50e-03)+ | 2.79e-01(7.42e-03)+ | 1.19e-01(2.76e-02)- | 3.31e-01(1.76e-03)+
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.24e-01(6.17e-03)+ | 2.20e-01(5.03e-02)+ | 7.06e-04(7.80e-04)- | 2.45e-01(1.73e-02)+
MaF7 | 10 | 1.83e-01(3.18e-02) | 1.91e-01(1.19e-02)~ | 2.27e-01(2.01e-02)+ | 9.05e-05(4.81e-05)- | 6.56e-02(1.05e-02)-
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
Worse/similar/better | | | 13-/1~/7+ | 15-/0~/6+ | 18-/0~/3+ | 16-/0~/5+


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. In order to show the superiority of the density-based velocity update, its effect is compared with that of the traditional velocity update formula of [14] (denoted AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method,

$$v^{d}(t) = w\,v^{d}(t) + c_1 r_1^{d}\bigl(x^{d}_{pbest} - x^{d}(t)\bigr) + c_2 r_2^{d}\bigl(x^{d}_{gbest} - x^{d}(t)\bigr) + c_3 r_3^{d}\bigl(x^{d}_{lbest} - x^{d}(t)\bigr), \tag{12}$$

is not sufficient, the effect is also compared with that of equation (12), which is denoted AGPSO-II.
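For completeness, equation (12), i.e., the AGPSO-II comparison variant, can be written out directly as below; the standard PSO position update x(t + 1) = x(t) + v(t + 1) and the parameter ranges of Table 1 are assumed, and this is not the density-based update of equation (4).

```python
import numpy as np

def velocity_eq12(x, v, p_best, g_best, l_best, rng=None):
    """AGPSO-II velocity update (equation (12)): traditional PSO terms plus an lbest term."""
    rng = rng or np.random.default_rng()
    w = rng.uniform(0.1, 0.5)                       # inertia weight, sampled as in Table 1
    c1, c2, c3 = rng.uniform(1.5, 2.5, size=3)      # acceleration coefficients
    r1, r2, r3 = (rng.uniform(0.0, 1.0, size=x.shape) for _ in range(3))
    return (w * v
            + c1 * r1 * (p_best - x)
            + c2 * r2 * (g_best - x)
            + c3 * r3 * (l_best - x))

# assumed standard position update: x_new = x + velocity_eq12(x, v, p_best, g_best, l_best)
```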

[Figure 2, panels (a)-(j): objective value plotted against objective number (1-10) for AGPSO, MaPSO, NMPSO, dMOPSO, θ-DEA, SMPSO, VaEA, SPEA2+SDE, MOEA/DD, and the true PF on the 10-objective MaF1.]

Figure 2: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the MaF1 problem.


Table 6 shows the comparison of AGPSO with two modified AGPSO variants that use different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, 34 best results out of 48 test cases are obtained by AGPSO, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases; the novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of this particle. It is also confirmed that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2. Algorithm 2 requires a time complexity

[Figure 3, panels (a)-(j): objective value plotted against objective number (1-10) for AGPSO, MaPSO, NMPSO, dMOPSO, θ-DEA, SPEA2+SDE, SMPSO, VaEA, MOEA/DD, and the true PF on the 10-objective WFG3.]

Figure 3: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the WFG3 problem.


[Figure 4, panels (a)-(c): HV metric value plotted against the number of generations (0-1000) on WFG3, WFG4, and MaF2; the compared algorithms are AGPSO, VaEA, SPEA2+SDE, and NMPSO.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e-01(3.47e-02) | 7.48e-01(3.41e-02)- | 7.51e-01(3.88e-02)-
WFG1 | 8 | 8.85e-01(2.54e-02) | 8.14e-01(3.94e-02)- | 7.96e-01(3.97e-02)-
WFG1 | 10 | 9.47e-01(4.77e-03) | 9.49e-01(2.59e-03)+ | 9.45e-01(3.49e-03)~
WFG2 | 5 | 9.76e-01(5.13e-03) | 9.76e-01(2.81e-03)~ | 9.78e-01(5.19e-03)+
WFG2 | 8 | 9.87e-01(2.65e-03) | 9.89e-01(2.21e-03)+ | 9.88e-01(1.84e-03)~
WFG2 | 10 | 9.92e-01(3.30e-03) | 9.93e-01(1.38e-03)+ | 9.92e-01(1.97e-03)~
WFG3 | 5 | 6.59e-01(5.02e-03) | 6.66e-01(4.17e-03)+ | 6.57e-01(5.40e-03)~
WFG3 | 8 | 6.79e-01(5.20e-03) | 6.86e-01(3.41e-03)+ | 6.79e-01(4.02e-03)~
WFG3 | 10 | 6.92e-01(7.15e-03) | 7.02e-01(3.08e-03)+ | 6.93e-01(4.13e-03)~
WFG4 | 5 | 7.76e-01(5.69e-03) | 7.57e-01(7.87e-03)- | 7.68e-01(5.07e-03)-
WFG4 | 8 | 9.00e-01(1.01e-02) | 8.85e-01(8.62e-03)- | 8.94e-01(8.43e-03)-
WFG4 | 10 | 9.39e-01(7.42e-03) | 9.31e-01(8.35e-03)- | 9.30e-01(8.84e-03)-
WFG5 | 5 | 7.42e-01(4.66e-03) | 7.34e-01(7.45e-03)- | 7.39e-01(7.73e-03)-
WFG5 | 8 | 8.59e-01(4.74e-03) | 8.52e-01(6.05e-03)- | 8.58e-01(5.45e-03)~
WFG5 | 10 | 8.91e-01(5.28e-03) | 8.73e-01(1.22e-02)- | 8.78e-01(2.21e-02)-
WFG6 | 5 | 7.53e-01(7.52e-03) | 6.92e-01(2.21e-03)- | 7.52e-01(3.85e-03)~
WFG6 | 8 | 8.69e-01(1.24e-02) | 8.10e-01(3.63e-03)- | 8.83e-01(5.64e-03)+
WFG6 | 10 | 9.12e-01(1.22e-02) | 8.40e-01(3.57e-03)- | 9.23e-01(7.99e-03)+
WFG7 | 5 | 7.90e-01(2.13e-03) | 7.81e-01(3.32e-03)- | 7.85e-01(2.78e-03)-
WFG7 | 8 | 9.21e-01(2.89e-03) | 9.15e-01(3.11e-03)- | 9.13e-01(3.18e-03)-
WFG7 | 10 | 9.55e-01(9.72e-03) | 9.55e-01(2.26e-03)~ | 9.44e-01(1.59e-02)-
WFG8 | 5 | 6.61e-01(5.62e-03) | 6.44e-01(3.76e-03)- | 6.54e-01(5.95e-03)-
WFG8 | 8 | 7.87e-01(7.11e-03) | 7.80e-01(4.75e-03)- | 7.81e-01(5.89e-03)-
WFG8 | 10 | 8.41e-01(5.48e-03) | 8.45e-01(3.13e-03)+ | 8.31e-01(1.76e-02)-


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m²N²) to classify the population, and in lines 8-11 it needs a time complexity of O(N). The association loop in lines 12-15 also requires a time complexity of O(m²N²), and lines 17-20 need a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)}, i.e., O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses neighboring information (the local best) to exploit the region around each particle and uses globally optimal information (the global best) to search for better-performing locations globally. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives, and our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than traditional angle-based selection on concave and linear PFs, it sacrifices part of its performance on many-objective optimization problems with convex PFs; this will also be addressed in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or were provided by their original authors; most codes of the compared algorithms, such as VaEA [27], θ-DEA [48], and MOEA/DD [50], can also be found in our source code. The test problems are WFG [23] and MaF [24]: WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG9 | 5 | 6.58e-01(4.54e-03) | 6.53e-01(3.73e-03)- | 6.53e-01(2.99e-03)-
WFG9 | 8 | 7.29e-01(9.55e-03) | 7.20e-01(1.21e-02)- | 7.23e-01(8.94e-03)-
WFG9 | 10 | 7.51e-01(1.21e-02) | 7.35e-01(1.18e-02)- | 7.36e-01(1.04e-02)-
MaF1 | 5 | 1.30e-02(1.29e-04) | 1.30e-02(1.10e-04)~ | 1.30e-02(1.04e-04)~
MaF1 | 8 | 4.23e-05(1.45e-06) | 4.12e-05(1.35e-06)- | 4.14e-05(1.78e-06)-
MaF1 | 10 | 6.12e-07(4.40e-08) | 6.07e-07(2.30e-08)~ | 6.10e-07(2.83e-08)~
MaF2 | 5 | 2.64e-01(9.97e-04) | 2.62e-01(1.08e-03)- | 2.61e-01(1.10e-03)-
MaF2 | 8 | 2.46e-01(1.64e-03) | 2.40e-01(1.89e-03)- | 2.41e-01(1.66e-03)-
MaF2 | 10 | 2.45e-01(1.84e-03) | 2.39e-01(2.75e-03)- | 2.41e-01(2.19e-03)-
MaF3 | 5 | 9.90e-01(4.96e-03) | 9.76e-01(2.92e-02)- | 4.96e-01(3.82e-01)-
MaF3 | 8 | 9.68e-01(1.95e-03) | 9.36e-01(9.22e-02)- | 3.73e-01(3.08e-01)-
MaF3 | 10 | 9.92e-01(2.55e-03) | 9.72e-01(1.86e-02)- | 4.42e-01(3.77e-01)-
MaF4 | 5 | 1.29e-01(1.33e-03) | 1.08e-01(1.44e-02)- | 1.18e-01(2.08e-03)-
MaF4 | 8 | 6.03e-03(1.88e-04) | 3.33e-03(1.02e-03)- | 4.74e-03(2.12e-04)-
MaF4 | 10 | 5.62e-04(1.47e-05) | 2.58e-04(9.13e-05)- | 3.65e-04(2.55e-05)-
MaF5 | 5 | 8.01e-01(1.81e-03) | 8.00e-01(3.40e-03)~ | 8.00e-01(2.06e-03)~
MaF5 | 8 | 9.32e-01(1.97e-03) | 9.31e-01(1.39e-03)- | 9.31e-01(1.58e-03)-
MaF5 | 10 | 9.67e-01(1.09e-03) | 9.66e-01(7.51e-04)- | 9.66e-01(1.18e-03)-
MaF6 | 5 | 1.30e-01(5.82e-05) | 1.30e-01(3.35e-05)- | 1.30e-01(5.60e-05)-
MaF6 | 8 | 1.03e-01(2.78e-05) | 4.51e-02(4.39e-02)- | 6.64e-02(5.81e-02)-
MaF6 | 10 | 9.49e-02(1.27e-02) | 4.55e-02(7.07e-02)- | 4.40e-02(4.92e-02)-
MaF7 | 5 | 2.51e-01(4.40e-02) | 3.09e-01(2.58e-03)+ | 2.49e-01(3.73e-02)~
MaF7 | 8 | 1.91e-01(4.11e-02) | 2.54e-01(4.64e-03)+ | 2.22e-01(1.68e-02)+
MaF7 | 10 | 1.78e-01(3.53e-02) | 2.32e-01(4.46e-03)+ | 2.01e-01(9.95e-03)+
Best/all | | 34/48 | 10/48 | 4/48
Better/similar/worse | | | 33-/5~/10+ | 31-/12~/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.

[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.

[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.

[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.

[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.

[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.

[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.

[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.

[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.

[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.

[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.

[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ϵ-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.

[14] A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.

[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.

[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.

[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.

[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.

[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.

[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.

[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.

[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.

[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.

[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.

[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakushu, Japan, August 2012.

[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.

[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.

[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.

[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.

[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.

[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.

[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.

[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.

[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.

[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.

[39] H Chen Y Tian W Pedrycz G Wu R Wang and L WangldquoHyperplane assisted evolutionary algorithm for many-ob-jective optimization problemsrdquo IEEE Transactions on Cy-bernetics pp 1ndash14 In press 2019

[40] K Deb and H Jain ldquoAn evolutionary many-objective opti-mization algorithm using reference-point-based non-dominated sorting approach Part I solving problems withbox constraintsrdquo IEEE Transactions on Evolutionary Com-putation vol 18 no 4 pp 577ndash601 2014

[41] X He Y Zhou Z Chen and Q Zhang ldquoEvolutionary many-objective optimization based on dynamical decompositionrdquoIEEE Transactions on Evolutionary Computation vol 23no 3 pp 361ndash375 2019

[42] F Gu and Y-M Cheung ldquoSelf-organizing map-based weightdesign for decomposition-based many-objective evolutionary

algorithmrdquo IEEE Transactions on Evolutionary Computationvol 22 no 2 pp 211ndash225 2018

[43] X Cai H Sun and Z Fan ldquoA diversity indicator based onreference vectors for many-objective optimizationrdquo Infor-mation Sciences vol 430-431 pp 467ndash486 2018

[44] T Pamulapati R Mallipeddi and P N Suganthan ldquoISDE+mdashan indicator for multi and many-objective optimiza-tionrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 346ndash352 2019

[45] Y N Sun G G Yen and Z Yi ldquoIGD indicator-basedevolutionary algorithm for many-objective optimizationproblemrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 173ndash187 2019

[46] Y Xiang Y Zhou Z Chen and J Zhang ldquoA many-objectiveparticle swarm optimizer with leaders selected from historicalsolutions by using scalar projectionsrdquo IEEE Transactions onCybernetics pp 1ndash14 In press 2019

[47] M Li S Yang and X Liu ldquoShift-based density estimation forPareto-based algorithms in many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 18no 3 pp 348ndash365 2014

[48] Y Xiang Y R Zhou and M Q Li ldquoA vector angle basedevolutionary algorithm for unconstrained many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 21 no 1 pp 131ndash152 2017

[49] K Li K Deb Q F Zhang and S Kwong ldquoAn evolutionarymany-objective optimization algorithm based on dominanceand decompositionrdquo IEEE Transactions on EvolutionaryComputation vol 19 no 5 pp 694ndash716 2015

[50] S Huband L Barone R while and P Hingston ldquoA scalablemulti-objective test problem tookitrdquo in Proceedings of the 3rdInternational Conference on Evolutionary Multi-CriterionOptimization pp 280ndash295vol 3410 of Lecture Notes inComputer Science Guanajuato Mexico March 2005

[51] Q Lin S Liu K Wong et al ldquoA clustering-based evolu-tionary algorithm for many-objective optimization prob-lemsrdquo IEEE Transactions on Evolutionary Computationvol 23 no 3 pp 391ndash405 In press 2018

[52] K Deb L iele M Laumanns and E Zitzler ldquoScalable testproblems for evolutionary multi-objective optimizationrdquo inEvolutionary Multi-Objective Optimization Ser AdvancedInformation and Knowledge Processing A Abraham L Jainand R Goldberg Eds pp 105ndash145 Springer Lon-donLondon UK 2005



and some current MOPSOs and MOEAs for MaOPs. The details of AGPSO are given in Section 3. To validate the performance of AGPSO, some experimental studies are provided in Section 4. At last, our conclusions and future work are given in Section 5.

2. Background

2.1. Particle Swarm Optimizer. Particle swarm optimizer (PSO) is a simple yet efficient swarm intelligence algorithm inspired by the social behaviors of individuals in flocks of birds and schools of fish, as proposed by Kennedy and Eberhart [6] in 1995. Generally, a particle swarm S is stochastically initialized with N particles according to the corresponding search range of the problem (N is the swarm size). Each particle xi ∈ S is a candidate solution to the problem, characterized by its velocity vi(t) and position xi(t) at the t-th iteration.

Moreover, each particle xi records historical positional information in S, i.e., the personal best position of xi (pbesti) and the global best position for xi (gbesti), which are used to guide the particle to fly to the next position xi(t + 1). Then, the velocity and position of each particle are iteratively updated as follows:

\[
v_i(t+1) = w\,v_i(t) + c_1 r_1\bigl(pbest_i - x_i(t)\bigr) + c_2 r_2\bigl(gbest_i - x_i(t)\bigr), \tag{2}
\]

\[
x_i(t+1) = x_i(t) + v_i(t+1), \tag{3}
\]

where w is an inertia weight on the previous velocity, r1 and r2 are two random numbers uniformly generated in [0, 1], and c1 and c2 are the two learning weight factors for the personal and global best particles, respectively [14].
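As a concrete illustration of equations (2) and (3), a minimal NumPy sketch is given below; the specific values of w, c1, and c2 are placeholders drawn from the parameter ranges reported later in Section 4.2, not values prescribed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_update(x, v, pbest, gbest, w=0.4, c1=2.0, c2=2.0):
    """One velocity/position step of equations (2) and (3) for a single particle."""
    r1, r2 = rng.random(x.size), rng.random(x.size)              # uniform in [0, 1]
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # equation (2)
    x_new = x + v_new                                             # equation (3)
    return x_new, v_new

# toy usage on a 3-dimensional decision vector
x, v = rng.random(3), np.zeros(3)
x, v = pso_update(x, v, pbest=np.full(3, 0.2), gbest=np.full(3, 0.5))
```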

2.2. Some Current MOPSOs and MOEAs for MaOPs. Recently, a number of nature-inspired optimization algorithms have been proposed for solving MaOPs, such as MOEAs and MOPSOs. In particular, many MOEAs have been proposed, which can be classified into three main kinds: Pareto-based MOEAs, indicator-based MOEAs, and decomposition-based MOEAs. Many approaches have been proposed to enhance their performance on MaOPs. For example, the Pareto dominance relation was modified to enhance the convergence pressure of Pareto-based MOEAs, such as ε-dominance [37], θ-dominance [30], a strengthened dominance relation [38], and a special dominance relation [39]. New association mechanisms between solutions and weight vectors were designed for decomposition-based MOEAs, such as the use of reference points in NSGA-III [40], a dynamical decomposition in DDEA [41], and a self-organizing map-based weight vector design in MOEA/D-SOM [42]. Effective and efficient performance indicators were presented for indicator-based MOEAs, such as a unary diversity indicator based on reference vectors [43], an efficient indicator combining the sum of objectives with shift-based density estimation, called ISDE+ [44], and an enhanced inverted generational distance (IGD) indicator [45].

In contrast to the abovementioned MOEAs, only a few MOPSOs have been presented to solve MaOPs. The reason may lie in the three challenges faced by MOPSOs, as introduced in Section 1. The main difference between MOPSOs and MOEAs is their evolutionary search: most MOEAs adopt differential evolution or simulated binary crossover, while most MOPSOs use the flight of particles to run the search. The environmental selection in the external archives of MOPSOs and MOEAs is actually similar. Inspired by the environmental selection of some MOEAs, MOPSOs can be easily extended to solve MaOPs. Recently, some competitive MOPSOs have been proposed to better solve MaOPs. For example, in pccsAMOPSO [5], a parallel cell coordinate system was employed to update the external archive for maintaining diversity, which was used to select the global best and personal best particles. In its improved version MaOPSO2s-pccs [36], a two-stage strategy was further presented to emphasize convergence and diversity, respectively, by using a single-objective optimizer and a many-objective optimizer. In NMPSO [24], a balanceable fitness estimation method was proposed by weighting convergence and diversity factors, which aims to offer sufficient selection pressure in the search process; however, this method is very sensitive to the parameters used and requires a high computational cost. In CPSO [29], a coevolutionary PSO with a bottleneck objective learning strategy was designed for solving MaOPs: multiple swarms coevolve in a distributed fashion to maintain diversity for approximating the entire PF, while a novel bottleneck objective learning strategy is used to accelerate convergence on all objectives. In MaPSO [46], a novel MOPSO based on the acute angle was proposed, in which the leader of each particle is selected from its historical particles by using scalar projections, and each particle owns K historical particles (K was set to 3 in its experiments); moreover, the environmental selection in MaPSO is run based on the acute angle between each pair of particles. Although these MOPSOs are effective for solving MaOPs, there is still room for improvement in the design of MOPSOs. Thus, this paper proposes a novel angular-guided MOPSO with an angular-guided archive update strategy and a density-based velocity update strategy to alleviate the three abovementioned challenges in Section 1.

3. The Proposed AGPSO Algorithm

In this section, the details of the proposed algorithm are provided. At first, the two main strategies used in AGPSO, i.e., a density-based velocity update strategy and an angular-guided archive update strategy, are respectively introduced. The density-based velocity update strategy considers the density around each particle, which is used to determine whether the local best particle or the global best particle guides the swarm search. In this strategy, the local best particle is selected based on local angular information, aiming to encourage exploitation in the local region around the current particle, while the global best particle is selected by convergence performance, which is used to perform exploration in the whole search


space. In order to provide elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. At last, the complete framework of AGPSO is shown in order to clarify its implementation as well as its other components.

3.1. A Density-Based Velocity Update Strategy. As introduced in Section 2.1, the original velocity update strategy includes two best positions of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles, as defined in equation (2), which are used to guide the swarm search. Theoretically, the quality of the generated particles depends on the guiding information from pbest and gbest in the velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (2) may introduce too much disturbance into the swarm search, as most particles are usually sparse in a high-dimensional objective space. Thus, the quality of particles generated by the traditional velocity formula may not be promising, as the large disturbance in the velocity update lowers the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

\[
v_i(t+1) =
\begin{cases}
w\,v_i(t) + c_1 r_1\bigl(lbest_i - x_i(t)\bigr), & D_x \ge medSDE,\\
w\,v_i(t) + c_2 r_2\bigl(gbest_i - x_i(t)\bigr), & D_x < medSDE,
\end{cases}
\tag{4}
\]

where t is the iteration number; vi(t) and vi(t + 1) are the velocities of particle xi at the t-th and (t + 1)-th iterations, respectively; w is the inertia weight; c1 and c2 are two learning factors; r1 and r2 are two uniformly distributed random numbers in [0, 1]; and lbesti and gbesti are the positions of the local-best and global-best particles for xi, i = 1, 2, ..., N, respectively. Dx is the shift-based density estimation (SDE) of particle x, as defined in [47], and medSDE is the median value of all Dx in the swarm. In this way, the sparse particles with Dx ≥ medSDE are guided by the local best particles to encourage exploitation in the local region, as their surrounding particles are very few, while the crowded particles with Dx < medSDE are disturbed by the global best particles to put more attention on convergence. Please note that lbesti and gbesti of xi(t) are selected from the external archive A, as introduced below.
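The sketch below illustrates equation (4) together with one common formulation of the SDE density of [47] (the minimum shifted-distance variant); this exact SDE variant is an assumption of the sketch, as the paper only cites [47] for the definition.

```python
import numpy as np

def sde_distances(F):
    """Shift-based density estimate D_x for every particle (rows of F are objective
    vectors). The minimum shifted-distance variant of SDE [47] is assumed here;
    larger values mean sparser neighborhoods."""
    N = F.shape[0]
    D = np.empty(N)
    for i in range(N):
        shifted = np.maximum(F - F[i], 0.0)   # objectives better than F[i] are shifted onto it
        d = np.sqrt((shifted ** 2).sum(axis=1))
        d[i] = np.inf                         # ignore the distance to itself
        D[i] = d.min()
    return D

def density_based_velocity(x, v, lbest, gbest, d_x, med_sde, w, c1, c2, rng):
    """Equation (4): a sparse particle follows its local best, a crowded one its global best."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    if d_x >= med_sde:                        # sparse region: exploit locally around lbest
        return w * v + c1 * r1 * (lbest - x)
    return w * v + c2 * r2 * (gbest - x)      # crowded region: emphasize convergence via gbest

# toy usage: 10 particles with 5 objectives
rng = np.random.default_rng(1)
F = rng.random((10, 5))
med_sde = np.median(sde_distances(F))         # medSDE used in equation (4)
```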

For the selection of gbesti of particle xi, the focus is on the particles with better convergence in the external archive. Meanwhile, a certain perturbation must also be ensured; thus, gbesti is randomly selected from the top 20% of particles in the external archive with the best convergence values. For each particle x, its convergence value is computed as

\[
\mathrm{Con}(x) = \sum_{k=1}^{m} f_k'(x), \tag{5}
\]

where m is the number of objectives and f_k'(x) is the k-th normalized objective value of x, obtained by the normalization procedure

\[
f_k'(x) = \frac{f_k(x) - z_k^{*}}{z_k^{\mathrm{nadir}} - z_k^{*}}, \tag{6}
\]

where z_k^* and z_k^nadir are the k-th objective values of the ideal and nadir points, respectively, k = 1, 2, ..., m.

For the selection of lbesti, the current particle xi in the swarm (i = 1, ..., N, where N is the swarm size) is first associated with the closest particle y in the external archive A by comparing the angular distance of xi to each particle in A; y is then used to find lbesti in A. Here, the angular distance [41] between a particle x and another particle y, termed AD(x, y), is computed as

\[
AD(x, y) = \bigl\| \lambda(x) - \lambda(y) \bigr\|_2, \tag{7}
\]

where λ(x) is defined as

\[
\lambda(x) = \frac{F'(x)}{\sum_{k=1}^{m} f_k'(x)}, \tag{8}
\]

where F'(x) = (f_1'(x), f_2'(x), ..., f_m'(x))^T and f_k'(x) is calculated by equation (6).

Then, lbesti is selected from the T angular-distance-based closest neighbors of y as the neighbor with the best convergence value by equation (5). Specifically, each particle of A owns T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of xi in A is stored in B(i) = {i1, i2, ..., iT}, where (x_{i1}, x_{i2}, ..., x_{iT}) are the T closest angular-distance-based solutions to xi by equation (7).

Then, for each particle in S, lbesti and gbesti are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.
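To make equations (5)-(8) concrete, the following self-contained Python helpers sketch one possible implementation; estimating the ideal and nadir points from the column-wise minima and maxima of the current set is a simplifying assumption of this sketch, not a detail stated by the paper.

```python
import numpy as np

def normalize(F):
    """Equation (6): normalized objectives; the ideal and nadir points are estimated
    here as the column-wise minima and maxima of F (a simplifying assumption)."""
    z_star, z_nadir = F.min(axis=0), F.max(axis=0)
    return (F - z_star) / np.maximum(z_nadir - z_star, 1e-12)

def convergence(F_norm):
    """Equation (5): Con(x) = sum of the normalized objectives (smaller is better)."""
    return F_norm.sum(axis=1)

def direction(F_norm):
    """Equation (8): lambda(x) = F'(x) / sum_k f_k'(x)."""
    return F_norm / np.maximum(F_norm.sum(axis=1, keepdims=True), 1e-12)

def angular_distance_matrix(F_norm):
    """Equation (7): AD(x, y) = ||lambda(x) - lambda(y)||_2 for every pair of rows."""
    lam = direction(F_norm)
    diff = lam[:, None, :] - lam[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# small usage example: 6 particles with 3 objectives
F = np.random.default_rng(3).random((6, 3))
Fn = normalize(F)
print(convergence(Fn).round(2))
print(angular_distance_matrix(Fn).round(2))
```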

3.2. An Angular-Guided Archive Update Strategy. After performing the abovementioned density-based velocity update for the swarm search, each particle flies to a new position, producing the new particle swarm S. In order to obtain a population consisting of a fixed number of elite solutions that maintains an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy following the principle of distributivity first and convergence second; the pseudocode for updating the external archive is given in Algorithm 2.


In Algorithm 2, the input is S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is set to ∅ in line 1. Then, all particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time the particle pair (x_h, x_u) with the minimum angular distance in U is found in line 4, which is computed as

\[
(x_h, x_u) = \operatorname*{arg\,min}_{(x_h,\, x_u)} \bigl\{ AD(x_h, x_u) \mid x_h, x_u \in U \bigr\}, \tag{9}
\]

where AD(x, y) is computed as in equation (7). Then, the particle x* in (x_h, x_u) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of (x_h, x_u) to U, termed AD(x, U), is computed as

\[
AD(x, U) = \min \bigl\{ AD(x, y) \mid y \in U,\ y \notin (x_h, x_u),\ x \in (x_h, x_u) \bigr\}, \tag{10}
\]

where AD(x, y) is computed as in equation (7). N subsets S1, S2, ..., SN are obtained in lines 8-11, where particle xi in U is saved into Si, i = 1, 2, ..., N. In lines 12-15, each particle x in A is associated with the particle xt in U that has the minimum angular distance to it, and this particle x of A is added into the subset St, t ∈ {1, 2, ..., N}, in line 14. Then, A is set to ∅ in line 16. Finally, in each subset Si, i = 1, 2, ..., N, the particle with the best convergence performance computed by equation (5) is added into A in lines 17-20. A is returned in line 21 as the final result.

In order to facilitate the understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten individuals x1, x2, ..., x10 and their mapping solutions, which map the individuals onto the hyperplane f1' + f2' = 1 (for calculating angular distances) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half the population size), i.e., x1, x4, x7, x9, x10, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b). Then, the remaining particles, i.e., x2, x3, x5, x6, x8, are preserved in A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with the particle in Figure 1(b) having the minimum angular distance. After that, five subsets S1, S2, ..., S5 are obtained, where S1 preserves x1 and its associated particle in A, i.e., x2; similarly, S2 preserves x3 and x4; S3 preserves x5, x6, and x7; S4 preserves x8 and x9; and S5 preserves x10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).
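Under the same assumptions as the earlier helpers, a compact sketch of Algorithm 2 is given below; angular_guided_archive_update is an illustrative name and the min-max normalization is again an assumption, not the authors' exact procedure.

```python
import numpy as np

def angular_guided_archive_update(F_union, N):
    """Sketch of Algorithm 2 operating on the objective matrix of S ∪ A (2N rows).
    Returns the indices of the N particles kept as the new archive."""
    span = np.maximum(F_union.max(0) - F_union.min(0), 1e-12)
    Fn = (F_union - F_union.min(0)) / span                      # equation (6), min-max assumption
    lam = Fn / np.maximum(Fn.sum(1, keepdims=True), 1e-12)      # equation (8)
    con = Fn.sum(1)                                             # equation (5)
    ad = np.sqrt(((lam[:, None] - lam[None, :]) ** 2).sum(-1))  # equation (7)
    np.fill_diagonal(ad, np.inf)

    U, removed = set(range(len(F_union))), []
    for _ in range(N):                      # lines 3-6: peel off N crowded particles
        idx = sorted(U)
        sub = ad[np.ix_(idx, idx)]
        h, u = np.unravel_index(np.argmin(sub), sub.shape)      # equation (9)
        xh, xu = idx[h], idx[u]
        rest = [i for i in idx if i not in (xh, xu)]
        # equation (10): the pair member closer to the rest of U is the more crowded one
        pick = xh if not rest or ad[xh, rest].min() <= ad[xu, rest].min() else xu
        U.remove(pick)
        removed.append(pick)

    keep = sorted(U)                        # lines 8-11: the N well-spread representatives
    subsets = {k: [k] for k in keep}
    for r in removed:                       # lines 12-15: associate removed particles
        subsets[keep[int(np.argmin(ad[r, keep]))]].append(r)
    # lines 16-20: the best-convergence member of each subset forms the new archive
    return [min(members, key=lambda i: con[i]) for members in subsets.values()]

# usage on the ten-particle, biobjective scale of Figure 1 (random data)
new_archive = angular_guided_archive_update(np.random.default_rng(6).random((10, 2)), N=5)
```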

3.3. The Complete Algorithm of AGPSO. The above sections have introduced the main components of AGPSO, which include the velocity update strategy and the archive update operator. In addition, an evolutionary search is also applied on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of the complete algorithm is provided in Algorithm 3. The initialization procedure is first activated in lines 1-6 of Algorithm 3. For each particle in S, its position is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated with Algorithm 2 in line 7 and sorted in ascending order by calculating the convergence fitness of each particle using equation (5) in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.

In the main loop, the SDE distance of each particle in S is calculated, and these particles are sorted by their SDE distances to get the median SDE distance of the particles in S, called medSDE, as in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14 with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy to cooperate with the PSO-based search.

(1) sort the external archive A in ascending order based on the convergence value in equation (5)
(2) for each solution xi ∈ A, i = 1, 2, ..., N
(3)   identify its T neighborhood indexes as B(i)
(4) end for
(5) for each particle xi ∈ S, i = 1, 2, ..., N
(6)   associate xi with the angular-guided closest particle y ∈ A, with index j in A
(7)   get the T nearest neighbors in A of y by the neighborhood index B(j)
(8)   sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9)   select the first of these neighboring particles of y as lbesti
(10)  select gbesti randomly from the top 20% particles in A
(11)  update the velocity vi of xi by equation (4)
(12)  update the position xi by equation (3)
(13)  evaluate the objective values of xi
(14) end for
(15) return S

ALGORITHM 1: Density-based velocity update (S, A, medSDE).
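A hedged Python sketch of the leader-selection steps of Algorithm 1 (lines 5-10) follows; the joint min-max normalization and the name select_leaders are assumptions of this illustration rather than details taken from the paper.

```python
import numpy as np

def select_leaders(F_swarm, F_archive, T=3, top_ratio=0.2, seed=4):
    """Return, for every swarm particle, the archive indices chosen as lbest and gbest
    following lines (5)-(10) of Algorithm 1 (normalization is a simple min-max here)."""
    rng = np.random.default_rng(seed)
    both = np.vstack([F_swarm, F_archive])
    lo = both.min(axis=0)
    span = np.maximum(both.max(axis=0) - lo, 1e-12)
    Sn, An = (F_swarm - lo) / span, (F_archive - lo) / span
    unit = lambda F: F / np.maximum(F.sum(axis=1, keepdims=True), 1e-12)   # equation (8)
    lamS, lamA = unit(Sn), unit(An)
    con = An.sum(axis=1)                                                   # equation (5)
    ad = np.sqrt(((lamA[:, None] - lamA[None, :]) ** 2).sum(-1))           # equation (7)
    np.fill_diagonal(ad, np.inf)
    B = np.argsort(ad, axis=1)[:, :T]                   # T angular neighbors of each member
    top = np.argsort(con)[: max(1, int(top_ratio * len(con)))]   # best-convergence 20%
    lbest, gbest = [], []
    for s in lamS:
        y = int(np.argmin(np.sqrt(((lamA - s) ** 2).sum(-1))))   # closest archive member
        lbest.append(int(B[y][np.argmin(con[B[y]])]))            # best-Con neighbor of y
        gbest.append(int(rng.choice(top)))                       # random top-20% member
    return np.array(lbest), np.array(gbest)

# usage: 8 swarm particles and 8 archive members with 5 objectives
rng = np.random.default_rng(5)
lb, gb = select_leaders(rng.random((8, 5)), rng.random((8, 5)))
```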


Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. The angular-guided archive update strategy of Algorithm 2 is then carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles.

Figure 1: An example to illustrate the process of the proposed angular-guided archive update strategy. Panels (a)-(e) show the obtained solutions x1-x10 and their mapping solutions on the normalized hyperplane, the representative particles kept in U, the remaining particles preserved in A, the association into subsets, and the final selection by convergence.

(1) combine S and A into a union set U and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)   find the particle pair (xh, xu) in U with the minimum angular distance among all particle pairs in U
(5)   find x* in (xh, xu) such that x* has the smaller angular distance to U by equation (10)
(6)   add x* to A, then delete x* from U
(7) end for
(8) for each particle xi ∈ U
(9)   initialize an empty subset Si
(10)  add the particle xi into Si
(11) end for
(12) for each particle xi ∈ A
(13)  associate xi with the particle xt in U that has the smallest angular distance to xi
(14)  add xi into St
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)  find the particle x* of Si that has the best convergence performance computed by equation (5)
(19)  add x* into A
(20) end for
(21) return A

ALGORITHM 2: Angular-guided archive update (S, A, N).


The iteration counter t is increased by 2 in line 19 because one PSO-based search and one evolutionary search are carried out for each particle in each round of the main loop. The abovementioned operations are repeated until the maximum number of iterations (tmax) is reached. At last, the final particles in A are saved as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional objective spaces.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader position of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained test problems are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test suites were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives; as suggested in [31], k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters; as recommended in [51], k is set to 2 × (m − 1) and l is set to 20.
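A small helper makes these dimensionality settings explicit; the reading k = 2 × (m − 1) for WFG is an assumption of this sketch, based on the common convention for that suite.

```python
def decision_variables(m, suite):
    """Number of decision variables n for an m-objective instance under the settings
    above; the WFG position parameter k = 2*(m - 1) is the reading assumed here."""
    if suite == "MaF":
        return m + 10 - 1                 # n = m + k - 1 with k = 10
    if suite == "WFG":
        return 2 * (m - 1) + 20           # k position-related plus l = 20 distance-related
    raise ValueError(f"unknown suite: {suite}")

print([decision_variables(m, "MaF") for m in (5, 8, 10)])   # [14, 17, 19]
print([decision_variables(m, "WFG") for m in (5, 8, 10)])   # [28, 34, 38]
```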

(1) let t = 0, A = ∅, the initial particles S = {x1, x2, ..., xN}, and T be the value of the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize the position xi and set vi = 0 for xi
(4)   evaluate the objective values of xi
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)  calculate the SDE of each particle in S
(12)  sort the particles in S to get medSDE
(13)  Snew = Density-based velocity update (S, A, medSDE)
(14)  A = Angular-guided archive update (Snew, A, N)
(15)  apply the evolutionary search strategy on A to get a new swarm Snew
(16)  evaluate the objectives of the new particles in Snew
(17)  A = Angular-guided archive update (Snew, A, N)
(18)  sort A in ascending order based on equation (5)
(19)  t = t + 2
(20) end while
(21) output A

ALGORITHM 3: The complete algorithm of AGPSO.


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniformity and spread of the distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) metric [20] calculates the volume of the objective space between a prespecified reference point z^r = (z_1^r, ..., z_m^r)^T, which is dominated by all points on the true PF, and the obtained solution set S. The HV metric is computed as

\[
HV(S) = \mathrm{Vol}\left( \bigcup_{x \in S} \bigl[ f_1(x), z_1^r \bigr] \times \cdots \times \bigl[ f_m(x), z_m^r \bigr] \right), \tag{11}
\]

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f_1^max, f_2^max, ..., f_m^max), with f_k^max (k = 1, 2, ..., m) being the maximum value of the k-th objective on the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted as the reference point; after the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger value of HV indicates a better approximation of the true PF.
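For illustration only, the following sketch approximates equation (11) by Monte Carlo sampling; this is a rough estimator, not the exact HV computation used in the experiments.

```python
import numpy as np

def hv_monte_carlo(F, ref, n_samples=200_000, rng=None):
    """Monte Carlo estimate of equation (11): the volume dominated by the solution set F
    (rows are objective vectors, minimization) and bounded by the reference point ref."""
    rng = rng or np.random.default_rng(6)
    F = F[(F < ref).all(axis=1)]                 # discard points that do not dominate z^r
    if len(F) == 0:
        return 0.0
    lower = F.min(axis=0)
    box = np.prod(ref - lower)                   # bounding box enclosing the dominated region
    pts = rng.uniform(lower, ref, size=(n_samples, len(ref)))
    dominated = (F[None, :, :] <= pts[:, None, :]).all(-1).any(1)
    return box * dominated.mean()

# toy check on a 2-objective set with reference point (1, 1)
F = np.array([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]])
print(round(hv_monte_carlo(F, np.array([1.0, 1.0])), 3))
```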

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are required by dMOPSO and MOEA/DD, the population sizes of these two algorithms are set to the number of weight vectors. Following [51], the number of weight vectors is set to 210, 240, and 275 for the test problems with 5, 8, and 10 objectives, respectively. In order to ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. pc and pm are the crossover and mutation probabilities, and ηc and ηm are, respectively, the distribution indexes of simulated binary crossover (SBX) and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, AGPSO), the control parameters c1, c2, c3 are sampled in [1.5, 2.5] and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the number of historical positions maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 2.0. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, σ is a condition for determining whether two solutions search in a similar direction, which is set to π/(2(N + 1)).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (given in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for the 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "−", and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively.
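The snippet below illustrates, with synthetic numbers, how the "+", "−", and "∼" labels can be derived from two 30-run HV samples using SciPy's Wilcoxon rank-sum test, and how MFE follows from N × tmax; the sample values are illustrative only, not the paper's data.

```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(hv_agpso, hv_other, alpha=0.05):
    """Label a competitor '+', '-', or '~' against AGPSO from two samples of 30 HV values."""
    _, p = ranksums(hv_other, hv_agpso)
    if p >= alpha:
        return "~"                                   # no statistically significant difference
    return "+" if np.mean(hv_other) > np.mean(hv_agpso) else "-"

# illustrative (synthetic) 30-run samples and the resulting label
rng = np.random.default_rng(7)
print(compare_runs(rng.normal(0.78, 0.01, 30), rng.normal(0.74, 0.01, 30)))

# maximum function evaluations: MFE = N * t_max, e.g. a 5-objective run
N, t_max = 210, 600
print("MFE =", N * t_max)                            # 126000
```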

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared to four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "−", and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparisons of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems, as it is best on 21 out of 27 cases. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second-to-last row of Table 2. The last row of Table 2 gives the one-to-one comparisons of AGPSO with each PSO on WFG, where "−/∼/+" gives the number of test problems on which the corresponding

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 2.0, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/(2(N + 1))
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


algorithm performs worse than, similarly to, and better than AGPSO, respectively.

It is found that AGPSO shows an absolute advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO; AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs best with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows a superior performance over these traditional PSOs on the WFG problems.

From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively; conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with the four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second-to-last row in Table 3, 12 of the 21 best results are obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], so the shape of a set of reference points cannot fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires all objectives to be optimized

Table 2: Comparison of results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV (mean and standard deviation over 30 runs).

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01 (4.27e-02) | 5.97e-01 (2.89e-02) - | 3.34e-01 (2.58e-02) - | 2.98e-01 (3.44e-03) - | 3.00e-01 (1.01e-03) -
WFG1 | 8 | 8.75e-01 (3.77e-02) | 7.00e-01 (1.38e-01) - | 2.74e-01 (2.45e-02) - | 2.35e-01 (6.26e-03) - | 2.50e-01 (1.54e-03) -
WFG1 | 10 | 9.45e-01 (3.19e-03) | 8.07e-01 (1.58e-01) - | 2.53e-01 (1.98e-02) - | 2.13e-01 (9.18e-03) - | 2.30e-01 (1.29e-03) -
WFG2 | 5 | 9.76e-01 (5.76e-03) | 9.69e-01 (5.44e-03) - | 9.70e-01 (4.70e-03) - | 8.94e-01 (1.28e-02) - | 8.85e-01 (1.31e-02) -
WFG2 | 8 | 9.82e-01 (4.12e-03) | 9.83e-01 (3.56e-03) + | 9.84e-01 (2.86e-03) + | 8.92e-01 (1.48e-02) - | 8.48e-01 (2.52e-02) -
WFG2 | 10 | 9.93e-01 (1.99e-03) | 9.91e-01 (4.85e-03) ~ | 9.90e-01 (1.48e-03) - | 9.00e-01 (1.76e-02) - | 8.54e-01 (2.55e-02) -
WFG3 | 5 | 6.59e-01 (5.16e-03) | 6.13e-01 (1.94e-02) - | 6.12e-01 (1.64e-02) - | 5.89e-01 (1.05e-02) - | 5.75e-01 (7.32e-03) -
WFG3 | 8 | 6.78e-01 (6.20e-03) | 5.77e-01 (1.52e-02) - | 5.96e-01 (1.88e-02) - | 4.94e-01 (4.58e-02) - | 5.84e-01 (1.51e-02) -
WFG3 | 10 | 6.91e-01 (7.88e-03) | 5.83e-01 (2.73e-02) - | 5.97e-01 (2.58e-02) - | 3.56e-01 (9.58e-02) - | 5.92e-01 (1.52e-02) -
WFG4 | 5 | 7.74e-01 (5.34e-03) | 7.64e-01 (6.09e-03) - | 7.22e-01 (2.96e-02) - | 6.51e-01 (1.08e-02) - | 5.09e-01 (1.73e-02) -
WFG4 | 8 | 9.00e-01 (8.05e-03) | 8.62e-01 (1.21e-02) - | 7.98e-01 (1.76e-02) - | 3.88e-01 (7.86e-02) - | 5.26e-01 (2.13e-02) -
WFG4 | 10 | 9.37e-01 (6.64e-03) | 8.82e-01 (8.54e-03) - | 8.27e-01 (1.32e-02) - | 3.15e-01 (1.34e-01) - | 5.52e-01 (2.19e-02) -
WFG5 | 5 | 7.41e-01 (6.12e-03) | 7.35e-01 (3.73e-03) - | 6.63e-01 (1.18e-02) - | 5.85e-01 (1.53e-02) - | 4.30e-01 (1.18e-02) -
WFG5 | 8 | 8.61e-01 (4.35e-03) | 8.32e-01 (6.58e-03) - | 6.91e-01 (1.12e-02) - | 1.81e-01 (8.43e-02) - | 4.52e-01 (1.08e-02) -
WFG5 | 10 | 8.88e-01 (8.29e-03) | 8.65e-01 (7.92e-03) - | 6.99e-01 (1.76e-02) - | 2.43e-02 (1.54e-02) - | 4.65e-01 (1.24e-02) -
WFG6 | 5 | 7.50e-01 (8.96e-03) | 7.24e-01 (3.96e-03) - | 7.09e-01 (4.50e-03) - | 6.74e-01 (7.93e-03) - | 5.43e-01 (1.66e-02) -
WFG6 | 8 | 8.72e-01 (1.26e-02) | 8.08e-01 (5.70e-03) - | 8.03e-01 (2.66e-03) - | 4.89e-01 (4.87e-02) - | 6.49e-01 (8.38e-03) -
WFG6 | 10 | 9.09e-01 (1.36e-02) | 8.34e-01 (2.03e-03) - | 8.32e-01 (1.80e-03) - | 4.66e-01 (2.96e-02) - | 7.03e-01 (1.37e-02) -
WFG7 | 5 | 7.90e-01 (2.12e-03) | 7.96e-01 (2.98e-03) + | 7.89e-01 (4.61e-03) ~ | 5.39e-01 (2.72e-02) - | 4.44e-01 (2.06e-02) -
WFG7 | 8 | 9.20e-01 (3.39e-03) | 9.04e-01 (3.71e-03) - | 8.73e-01 (1.40e-02) - | 2.25e-01 (3.93e-02) - | 4.73e-01 (1.23e-02) -
WFG7 | 10 | 9.55e-01 (9.11e-03) | 9.36e-01 (6.97e-03) - | 9.17e-01 (1.17e-02) - | 2.07e-01 (6.36e-02) - | 5.14e-01 (2.10e-02) -
WFG8 | 5 | 6.63e-01 (3.75e-03) | 6.73e-01 (5.41e-03) + | 6.29e-01 (1.34e-02) - | 4.27e-01 (1.57e-02) - | 3.72e-01 (2.09e-02) -
WFG8 | 8 | 7.85e-01 (7.46e-03) | 7.97e-01 (3.79e-02) ~ | 6.91e-01 (2.13e-02) - | 9.55e-02 (3.20e-02) - | 4.11e-01 (9.29e-03) -
WFG8 | 10 | 8.36e-01 (1.30e-02) | 8.62e-01 (4.12e-02) + | 7.33e-01 (2.04e-02) - | 7.73e-02 (1.27e-02) - | 4.52e-01 (1.90e-02) -
WFG9 | 5 | 6.57e-01 (3.82e-03) | 7.35e-01 (7.32e-03) + | 6.23e-01 (7.01e-03) - | 5.97e-01 (1.99e-02) - | 4.46e-01 (1.61e-02) -
WFG9 | 8 | 7.30e-01 (9.96e-03) | 7.65e-01 (6.80e-02) + | 6.55e-01 (1.17e-02) - | 2.73e-01 (6.06e-02) - | 4.55e-01 (2.39e-02) -
WFG9 | 10 | 7.49e-01 (9.03e-03) | 7.47e-01 (6.72e-02) - | 6.56e-01 (1.30e-02) - | 2.24e-01 (6.96e-02) - | 4.76e-01 (1.23e-02) -
Best/all | - | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | - | - | 19-/2~/6+ | 25-/1~/1+ | 27-/0~/0+ | 27-/0~/0+


simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 for all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the PSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best for all numbers of objectives.

In the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, but AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows a better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. The preceding sections experimentally showed that AGPSO performs better than the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO algorithms for dealing with MaOPs, and the comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV values and the comparison of AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row in Table 4, 22 of the 27 best results are obtained by AGPSO, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO shows

Table 3: Comparison of results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV (mean and standard deviation over 30 runs).

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02 (1.43e-04) | 1.27e-02 (5.81e-05) - | 1.15e-02 (2.80e-04) - | 1.06e-02 (1.26e-04) - | 6.83e-03 (5.14e-04) -
MaF1 | 8 | 4.23e-05 (1.80e-06) | 3.64e-05 (7.99e-07) - | 2.79e-05 (2.66e-06) - | 1.30e-05 (1.49e-05) - | 1.21e-05 (1.88e-06) -
MaF1 | 10 | 6.16e-07 (3.49e-08) | 4.75e-07 (1.83e-08) - | 3.83e-07 (2.83e-08) - | 8.82e-08 (8.92e-08) - | 1.46e-07 (2.86e-08) -
MaF2 | 5 | 2.64e-01 (1.18e-03) | 2.63e-01 (8.71e-04) - | 2.38e-01 (3.57e-03) - | 2.50e-01 (1.55e-03) - | 2.12e-01 (4.05e-03) -
MaF2 | 8 | 2.46e-01 (2.64e-03) | 2.36e-01 (2.32e-03) - | 2.09e-01 (3.04e-03) - | 2.02e-01 (2.13e-03) - | 1.74e-01 (4.46e-03) -
MaF2 | 10 | 2.45e-01 (2.08e-03) | 2.23e-01 (2.81e-03) - | 2.06e-01 (5.67e-03) - | 2.05e-01 (2.15e-03) - | 1.88e-01 (4.28e-03) -
MaF3 | 5 | 9.88e-01 (2.87e-03) | 9.71e-01 (1.47e-02) - | 9.26e-01 (5.29e-02) - | 9.38e-01 (8.27e-03) - | 7.34e-01 (4.50e-01) -
MaF3 | 8 | 9.95e-01 (2.11e-03) | 9.99e-01 (7.50e-04) + | 9.84e-01 (1.48e-02) - | 9.62e-01 (1.44e-02) - | 0.00e+00 (0.00e+00) -
MaF3 | 10 | 9.85e-01 (2.26e-03) | 1.00e+00 (1.92e-04) + | 9.95e-01 (6.00e-03) ~ | 9.71e-01 (9.43e-03) - | 0.00e+00 (0.00e+00) -
MaF4 | 5 | 1.29e-01 (1.22e-03) | 5.43e-02 (3.25e-02) - | 1.01e-01 (1.07e-02) - | 4.21e-02 (4.98e-03) - | 7.96e-02 (3.93e-03) -
MaF4 | 8 | 6.02e-03 (9.67e-05) | 2.29e-03 (7.77e-04) - | 3.18e-03 (6.84e-04) - | 1.24e-04 (7.69e-05) - | 1.82e-03 (3.39e-04) -
MaF4 | 10 | 5.66e-04 (1.89e-05) | 2.10e-04 (1.26e-04) - | 2.36e-04 (6.20e-05) - | 1.55e-06 (8.26e-07) - | 1.12e-04 (2.32e-05) -
MaF5 | 5 | 8.00e-01 (2.70e-03) | 8.11e-01 (1.33e-03) + | 7.93e-01 (4.06e-03) - | 2.69e-01 (1.30e-01) - | 6.68e-01 (2.99e-02) -
MaF5 | 8 | 9.32e-01 (1.01e-03) | 9.35e-01 (1.40e-03) + | 9.29e-01 (4.41e-03) - | 2.01e-01 (6.84e-02) - | 3.59e-01 (1.30e-01) -
MaF5 | 10 | 9.67e-01 (1.32e-03) | 9.64e-01 (1.61e-03) - | 9.65e-01 (1.90e-03) - | 2.40e-01 (3.18e-02) - | 1.60e-01 (1.00e-01) -
MaF6 | 5 | 1.30e-01 (6.22e-05) | 9.37e-02 (4.55e-03) - | 1.10e-01 (2.78e-03) - | 1.27e-01 (3.33e-05) - | 1.30e-01 (1.40e-05) +
MaF6 | 8 | 1.06e-01 (1.66e-05) | 9.83e-02 (1.15e-02) - | 9.50e-02 (2.86e-03) - | 1.04e-01 (3.24e-05) - | 9.83e-02 (1.26e-03) -
MaF6 | 10 | 9.22e-02 (1.18e-02) | 9.12e-02 (0.00e+00) ~ | 9.24e-02 (1.53e-03) ~ | 1.00e-01 (5.94e-06) ~ | 8.06e-02 (1.42e-02) -
MaF7 | 5 | 2.53e-01 (3.05e-02) | 3.29e-01 (1.70e-03) + | 2.95e-01 (6.58e-03) + | 2.26e-01 (2.42e-03) - | 2.37e-01 (6.73e-03) -
MaF7 | 8 | 1.94e-01 (2.59e-02) | 2.89e-01 (2.36e-03) + | 2.26e-01 (6.39e-03) + | 2.88e-02 (9.26e-02) - | 6.78e-02 (9.38e-02) -
MaF7 | 10 | 1.83e-01 (3.18e-02) | 2.64e-01 (1.78e-03) + | 2.05e-01 (5.98e-03) + | 6.37e-03 (5.21e-04) - | 5.57e-02 (1.25e-01) -
Best/all | - | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | - | - | 14-/0~/7+ | 16-/1~/4+ | 20-/0~/1+ | 20-/0~/1+


excellence with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages, and AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons, on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE have better performance than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results of AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best at tackling these test problems among the MOEAs mentioned above. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4, with a concave PF, AGPSO shows the best performance for all numbers of objectives, and MOEA/DD performs poorly on the MaF4 test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE but outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to state that AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity update strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison of results of AGPSO and four current MOEAs on WFG1-WFG9 using HV (mean and standard deviation over 30 runs).

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e-01 (4.27e-02) | 3.64e-01 (4.02e-02) - | 5.73e-01 (4.76e-02) - | 4.94e-01 (2.22e-02) - | 7.26e-01 (2.27e-02) -
WFG1 | 8 | 8.75e-01 (3.77e-02) | 5.78e-01 (3.32e-02) - | 7.75e-01 (3.51e-02) - | 3.48e-01 (9.75e-02) - | 8.36e-01 (1.00e-02) -
WFG1 | 10 | 9.45e-01 (3.19e-03) | 8.10e-01 (3.59e-02) - | 8.73e-01 (1.59e-02) - | 3.40e-01 (6.56e-02) - | 8.55e-01 (8.50e-03) -
WFG2 | 5 | 9.76e-01 (5.76e-03) | 9.62e-01 (7.36e-03) - | 9.63e-01 (6.36e-03) - | 9.70e-01 (1.14e-03) - | 9.60e-01 (3.83e-03) ~
WFG2 | 8 | 9.82e-01 (4.12e-03) | 9.75e-01 (6.91e-03) - | 9.01e-01 (1.65e-01) - | 9.42e-01 (2.37e-02) - | 9.76e-01 (3.56e-03) -
WFG2 | 10 | 9.93e-01 (1.99e-03) | 9.86e-01 (4.59e-03) - | 9.34e-01 (3.53e-02) - | 9.76e-01 (1.82e-02) - | 9.80e-01 (3.09e-03) -
WFG3 | 5 | 6.59e-01 (5.16e-03) | 5.71e-01 (1.73e-02) - | 6.34e-01 (6.13e-03) - | 5.69e-01 (1.07e-02) - | 5.77e-01 (2.54e-02) -
WFG3 | 8 | 6.78e-01 (6.20e-03) | 5.77e-01 (2.83e-02) - | 5.69e-01 (6.73e-02) - | 5.05e-01 (1.23e-02) - | 5.55e-01 (4.06e-02) -
WFG3 | 10 | 6.91e-01 (7.88e-03) | 5.63e-01 (3.05e-02) - | 6.20e-01 (1.76e-02) - | 4.98e-01 (1.55e-02) - | 5.42e-01 (3.89e-02) -
WFG4 | 5 | 7.74e-01 (5.34e-03) | 7.13e-01 (1.06e-02) - | 7.61e-01 (4.28e-03) - | 7.83e-01 (4.35e-03) + | 7.39e-01 (5.10e-03) -
WFG4 | 8 | 9.00e-01 (8.05e-03) | 8.05e-01 (9.20e-03) - | 7.89e-01 (2.07e-02) - | 6.81e-01 (2.59e-02) - | 7.85e-01 (1.07e-02) -
WFG4 | 10 | 9.37e-01 (6.64e-03) | 8.43e-01 (9.97e-03) - | 8.97e-01 (1.14e-02) - | 8.53e-01 (2.69e-02) - | 8.20e-01 (8.19e-03) -
WFG5 | 5 | 7.41e-01 (6.12e-03) | 6.97e-01 (7.19e-03) - | 7.34e-01 (3.91e-03) - | 7.41e-01 (3.39e-03) ~ | 7.08e-01 (6.05e-03) -
WFG5 | 8 | 8.61e-01 (4.35e-03) | 7.91e-01 (1.00e-02) - | 7.93e-01 (8.22e-03) - | 6.66e-01 (2.51e-02) - | 7.55e-01 (1.42e-02) -
WFG5 | 10 | 8.88e-01 (8.29e-03) | 8.25e-01 (4.03e-03) - | 8.69e-01 (3.20e-03) - | 8.03e-01 (1.80e-02) - | 7.98e-01 (9.63e-03) -
WFG6 | 5 | 7.50e-01 (8.96e-03) | 7.00e-01 (1.76e-02) - | 7.42e-01 (1.06e-02) - | 7.40e-01 (1.03e-02) - | 7.19e-01 (1.01e-02) -
WFG6 | 8 | 8.72e-01 (1.26e-02) | 8.18e-01 (9.27e-03) - | 8.16e-01 (1.16e-02) - | 7.23e-01 (2.91e-02) - | 7.79e-01 (1.13e-02) -
WFG6 | 10 | 9.09e-01 (1.36e-02) | 8.49e-01 (1.10e-02) - | 8.89e-01 (1.08e-02) - | 8.80e-01 (1.29e-02) - | 8.17e-01 (1.80e-02) -
WFG7 | 5 | 7.90e-01 (2.12e-03) | 7.44e-01 (9.10e-03) - | 7.90e-01 (2.04e-03) ~ | 7.85e-01 (3.80e-03) - | 7.62e-01 (4.15e-03) -
WFG7 | 8 | 9.20e-01 (3.39e-03) | 8.71e-01 (5.41e-03) - | 8.52e-01 (1.16e-02) - | 7.44e-01 (3.15e-02) - | 8.22e-01 (1.28e-02) -
WFG7 | 10 | 9.55e-01 (9.11e-03) | 9.08e-01 (5.61e-03) - | 9.39e-01 (2.22e-03) - | 9.22e-01 (1.50e-02) - | 8.66e-01 (6.63e-03) -
WFG8 | 5 | 6.63e-01 (3.75e-03) | 5.98e-01 (1.27e-02) - | 6.66e-01 (5.58e-03) + | 6.81e-01 (3.20e-03) + | 6.60e-01 (5.95e-03) -
WFG8 | 8 | 7.85e-01 (7.46e-03) | 6.25e-01 (2.83e-02) - | 6.61e-01 (1.85e-02) - | 6.72e-01 (2.19e-02) - | 7.48e-01 (1.19e-02) -
WFG8 | 10 | 8.36e-01 (1.30e-02) | 6.73e-01 (3.66e-02) - | 8.02e-01 (1.11e-02) - | 8.12e-01 (7.59e-03) - | 7.88e-01 (9.94e-03) -
WFG9 | 5 | 6.57e-01 (3.82e-03) | 6.43e-01 (1.42e-02) - | 6.71e-01 (4.34e-02) + | 6.55e-01 (4.42e-03) - | 6.96e-01 (8.52e-03) +
WFG9 | 8 | 7.30e-01 (9.96e-03) | 7.00e-01 (1.12e-02) - | 7.12e-01 (3.63e-02) - | 6.30e-01 (3.35e-02) - | 7.38e-01 (5.00e-02) +
WFG9 | 10 | 7.49e-01 (9.03e-03) | 7.20e-01 (2.35e-02) - | 7.69e-01 (5.70e-02) + | 6.74e-01 (2.56e-02) - | 7.75e-01 (6.87e-03) +
Best/all | - | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
Worse/similar/better | - | - | 27-/0~/0+ | 23-/1~/3+ | 24-/1~/2+ | 23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared to a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is only defeated by VaEA in 7 cases. We compare our algorithm with VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7), which share PF shapes with other suites but have different characteristics such as the difficulty of convergence, as well as on problems with deceptive properties (WFG1-WFG9). Therefore, the angular-guided method embedded into PSO is effective in improving the ability to tackle MaOPs. The angular-guided method is adopted as a distribution indicator that distinguishes the similarity of particles, transformed from the vector angle. The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as reported for MOEA/C [52]; the angular-guided method improves this situation, as it performs better than the traditional vector angle on convex or linear PFs and does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually show and support the abovementioned dis-cussion results to better visualize the performance and toshow the solutions distribution in high-dimensional objectivespaces some final solution sets with the median HV valuesfrom 30 runs were plotted in Figures 2 and 3 respectively forMaF1 with an inverted PF and for WFG3 with a linear anddegenerate PF In Figure 2 comparedwith the graph of VaEAwe can find that our boundary cannot be found completelybut the approximate PF we calculated through our proposedAGPSO PF is more closely attached to the real PF is alsoproves the correctness of our analysis on environmentalselection ie our environment selection is more favorablefor some linear PFs and concave-shaped PFs withboundaries that are not difficult to finde HV trend chartof WFG3 WFG4 and MaF2 with 10 objectives is shown inFigure 4 As observed from Figure 4 the convergence rateof AGPSO is faster than that of the other four optimizers

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully proved the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.11e-02(2.73e-04)- | 5.66e-03(2.72e-05)- | 2.96e-03(8.34e-05)- | 1.29e-02(1.00e-04)-
MaF1 | 8 | 4.23e-05(1.80e-06) | 2.69e-05(2.04e-06)- | 2.93e-05(2.03e-06)- | 4.38e-06(4.77e-07)- | 3.87e-05(9.83e-07)-
MaF1 | 10 | 6.16e-07(3.49e-08) | 3.60e-07(2.21e-08)- | 3.39e-07(7.07e-08)- | 3.45e-08(2.21e-08)- | 5.30e-07(2.16e-08)-
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.35e-01(4.80e-03)- | 2.33e-01(2.82e-03)- | 2.34e-01(1.48e-03)- | 2.62e-01(1.26e-03)-
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.00e-01(7.59e-03)- | 1.71e-01(1.82e-02)- | 1.88e-01(1.39e-03)- | 2.36e-01(3.06e-03)-
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.01e-01(1.25e-02)- | 1.94e-01(7.96e-03)- | 1.91e-01(3.21e-03)- | 2.30e-01(2.48e-03)-
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.97e-01(1.13e-03)+ | 9.93e-01(2.82e-03)+ | 9.98e-01(9.54e-05)+ | 9.92e-01(2.12e-03)+
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.98e-01(3.58e-04)+ | 9.92e-01(3.83e-03)- | 1.00e+00(9.42e-07)+ | 9.97e-01(1.66e-03)+
MaF3 | 10 | 9.85e-01(2.26e-03) | 9.99e-01(9.36e-04)+ | 9.70e-01(5.68e-03)- | 1.00e+00(3.96e-08)+ | 9.99e-01(4.94e-04)+
MaF4 | 5 | 1.29e-01(1.22e-03) | 1.17e-01(3.62e-03)- | 7.32e-02(8.89e-03)- | 0.00e+00(0.00e+00)- | 1.07e-01(5.37e-03)-
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.44e-03(3.28e-04)- | 1.32e-03(8.49e-04)- | 0.00e+00(0.00e+00)- | 3.62e-04(7.80e-05)-
MaF4 | 10 | 5.66e-04(1.89e-05) | 1.48e-04(9.76e-06)- | 2.29e-04(3.57e-05)- | 0.00e+00(0.00e+00)- | 4.23e-06(1.65e-06)-
MaF5 | 5 | 8.00e-01(2.70e-03) | 7.92e-01(1.85e-03)- | 8.13e-01(4.51e-05)+ | 7.73e-01(3.13e-03)- | 7.84e-01(4.38e-03)-
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.08e-01(3.80e-03)- | 9.26e-01(9.79e-05)- | 8.73e-01(6.71e-03)- | 7.36e-01(9.62e-02)-
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.43e-01(6.87e-03)- | 9.70e-01(4.61e-05)+ | 9.28e-01(8.28e-04)- | 7.12e-01(1.18e-01)-
MaF6 | 5 | 1.30e-01(6.22e-05) | 1.30e-01(7.51e-05)+ | 1.17e-01(3.12e-03)- | 8.19e-02(2.96e-02)- | 1.29e-01(1.84e-04)-
MaF6 | 8 | 1.06e-01(1.66e-05) | 1.06e-01(2.20e-05)+ | 9.18e-02(1.21e-03)- | 2.48e-02(1.29e-02)- | 8.81e-02(2.26e-04)-
MaF6 | 10 | 9.22e-02(1.18e-02) | 7.05e-02(3.95e-02)- | 4.57e-02(4.66e-02)- | 4.83e-02(5.24e-02)- | 2.71e-02(1.00e-01)-
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.03e-01(3.50e-03)+ | 2.79e-01(7.42e-03)+ | 1.19e-01(2.76e-02)- | 3.31e-01(1.76e-03)+
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.24e-01(6.17e-03)+ | 2.20e-01(5.03e-02)+ | 7.06e-04(7.80e-04)- | 2.45e-01(1.73e-02)+
MaF7 | 10 | 1.83e-01(3.18e-02) | 1.91e-01(1.19e-02)~ | 2.27e-01(2.01e-02)+ | 9.05e-05(4.81e-05)- | 6.56e-02(1.05e-02)-
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
Worse/similar/better | | | 13-/1~/7+ | 15-/0~/6+ | 18-/0~/3+ | 16-/0~/5+


In this section, we make a deeper comparison of the velocity update formula. In order to demonstrate the superiority of the density-based velocity update formula, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method, i.e.,

v_d(t+1) = w v_d(t) + c_1 r_{1d} (x_d^{pbest} - x_d(t)) + c_2 r_{2d} (x_d^{gbest} - x_d(t)) + c_3 r_{3d} (x_d^{lbest} - x_d(t)),  (12)

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
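To make the two baselines concrete, the sketch below writes out the traditional update used by AGPSO-I and the three-term update of equation (12) used by AGPSO-II, next to the density-based update of equation (4). This is a minimal Python/NumPy illustration, not the authors' Java implementation; the default values of w and the learning factors are simply picked from the ranges reported in Table 1.

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity_agpso(v, x, lbest, gbest, d_x, med_sde, w=0.3, c1=2.0, c2=2.0):
    """Equation (4): sparse particles (D_x >= medSDE) follow lbest, crowded ones follow gbest."""
    r = rng.random(x.size)
    if d_x >= med_sde:
        return w * v + c1 * r * (lbest - x)
    return w * v + c2 * r * (gbest - x)

def velocity_agpso_i(v, x, pbest, gbest, w=0.3, c1=2.0, c2=2.0):
    """AGPSO-I: the traditional two-term update as in [14]."""
    r1, r2 = rng.random(x.size), rng.random(x.size)
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def velocity_agpso_ii(v, x, pbest, gbest, lbest, w=0.3, c1=2.0, c2=2.0, c3=2.0):
    """AGPSO-II: equation (12), the traditional update plus an lbest term."""
    r1, r2, r3 = rng.random(x.size), rng.random(x.size), rng.random(x.size)
    return (w * v + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x) + c3 * r3 * (lbest - x))
```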

[Figure 2 omitted: parallel-coordinate plots (objective no. vs. objective value) of the final solution sets on the 10-objective MaF1 problem, one panel per method: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/DD, and (j) the true PF.]

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the MaF1 problem.


Table 6 shows the comparisons of AGPSO and two modified AGPSOs with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second last row of Table 6, there are 34 best results in 48 test problems obtained by AGPSO, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy can improve AGPSO and make particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO and AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search strength in the surrounding region of this particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2.

[Figure 3 omitted: parallel-coordinate plots (objective no. vs. objective value) of the final solution sets on the 10-objective WFG3 problem, one panel per method: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/DD, and (j) the true PF.]

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the WFG3 problem.


[Figure 4 omitted: HV trend curves (HV metric value vs. number of generations, averaged over 30 runs) for AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e-01(3.47e-02) | 7.48e-01(3.41e-02)- | 7.51e-01(3.88e-02)-
WFG1 | 8 | 8.85e-01(2.54e-02) | 8.14e-01(3.94e-02)- | 7.96e-01(3.97e-02)-
WFG1 | 10 | 9.47e-01(4.77e-03) | 9.49e-01(2.59e-03)+ | 9.45e-01(3.49e-03)~
WFG2 | 5 | 9.76e-01(5.13e-03) | 9.76e-01(2.81e-03)~ | 9.78e-01(5.19e-03)+
WFG2 | 8 | 9.87e-01(2.65e-03) | 9.89e-01(2.21e-03)+ | 9.88e-01(1.84e-03)~
WFG2 | 10 | 9.92e-01(3.30e-03) | 9.93e-01(1.38e-03)+ | 9.92e-01(1.97e-03)~
WFG3 | 5 | 6.59e-01(5.02e-03) | 6.66e-01(4.17e-03)+ | 6.57e-01(5.40e-03)~
WFG3 | 8 | 6.79e-01(5.20e-03) | 6.86e-01(3.41e-03)+ | 6.79e-01(4.02e-03)~
WFG3 | 10 | 6.92e-01(7.15e-03) | 7.02e-01(3.08e-03)+ | 6.93e-01(4.13e-03)~
WFG4 | 5 | 7.76e-01(5.69e-03) | 7.57e-01(7.87e-03)- | 7.68e-01(5.07e-03)-
WFG4 | 8 | 9.00e-01(1.01e-02) | 8.85e-01(8.62e-03)- | 8.94e-01(8.43e-03)-
WFG4 | 10 | 9.39e-01(7.42e-03) | 9.31e-01(8.35e-03)- | 9.30e-01(8.84e-03)-
WFG5 | 5 | 7.42e-01(4.66e-03) | 7.34e-01(7.45e-03)- | 7.39e-01(7.73e-03)-
WFG5 | 8 | 8.59e-01(4.74e-03) | 8.52e-01(6.05e-03)- | 8.58e-01(5.45e-03)~
WFG5 | 10 | 8.91e-01(5.28e-03) | 8.73e-01(1.22e-02)- | 8.78e-01(2.21e-02)-
WFG6 | 5 | 7.53e-01(7.52e-03) | 6.92e-01(2.21e-03)- | 7.52e-01(3.85e-03)~
WFG6 | 8 | 8.69e-01(1.24e-02) | 8.10e-01(3.63e-03)- | 8.83e-01(5.64e-03)+
WFG6 | 10 | 9.12e-01(1.22e-02) | 8.40e-01(3.57e-03)- | 9.23e-01(7.99e-03)+
WFG7 | 5 | 7.90e-01(2.13e-03) | 7.81e-01(3.32e-03)- | 7.85e-01(2.78e-03)-
WFG7 | 8 | 9.21e-01(2.89e-03) | 9.15e-01(3.11e-03)- | 9.13e-01(3.18e-03)-
WFG7 | 10 | 9.55e-01(9.72e-03) | 9.55e-01(2.26e-03)~ | 9.44e-01(1.59e-02)-
WFG8 | 5 | 6.61e-01(5.62e-03) | 6.44e-01(3.76e-03)- | 6.54e-01(5.95e-03)-
WFG8 | 8 | 7.87e-01(7.11e-03) | 7.80e-01(4.75e-03)- | 7.81e-01(5.89e-03)-
WFG8 | 10 | 8.41e-01(5.48e-03) | 8.45e-01(3.13e-03)+ | 8.31e-01(1.76e-02)-


Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m^2 N^2) to classify the population. In lines 8-11, it needs a time complexity of O(N). The loop for association in lines 12-15 also requires a time complexity of O(m^2 N^2). In lines 17-20, it needs a time complexity of O(N^2). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m^2 N^2), O(N), O(N^2)} = O(m^2 N^2), which is competitive with the time complexities of most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses adjacent information (local best) to explore the region around each particle and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the produced particles by searching the surrounding space of sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs, the improvement of which will be studied in the future.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or obtained from their original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [48], θ-DEA [30], and MOEA/DD [49]. The test problems are WFG [50] and MaF [31]; WFG [50] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [31] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG9 | 5 | 6.58e-01(4.54e-03) | 6.53e-01(3.73e-03)- | 6.53e-01(2.99e-03)-
WFG9 | 8 | 7.29e-01(9.55e-03) | 7.20e-01(1.21e-02)- | 7.23e-01(8.94e-03)-
WFG9 | 10 | 7.51e-01(1.21e-02) | 7.35e-01(1.18e-02)- | 7.36e-01(1.04e-02)-
MaF1 | 5 | 1.30e-02(1.29e-04) | 1.30e-02(1.10e-04)~ | 1.30e-02(1.04e-04)~
MaF1 | 8 | 4.23e-05(1.45e-06) | 4.12e-05(1.35e-06)- | 4.14e-05(1.78e-06)-
MaF1 | 10 | 6.12e-07(4.40e-08) | 6.07e-07(2.30e-08)~ | 6.10e-07(2.83e-08)~
MaF2 | 5 | 2.64e-01(9.97e-04) | 2.62e-01(1.08e-03)- | 2.61e-01(1.10e-03)-
MaF2 | 8 | 2.46e-01(1.64e-03) | 2.40e-01(1.89e-03)- | 2.41e-01(1.66e-03)-
MaF2 | 10 | 2.45e-01(1.84e-03) | 2.39e-01(2.75e-03)- | 2.41e-01(2.19e-03)-
MaF3 | 5 | 9.90e-01(4.96e-03) | 9.76e-01(2.92e-02)- | 4.96e-01(3.82e-01)-
MaF3 | 8 | 9.68e-01(1.95e-03) | 9.36e-01(9.22e-02)- | 3.73e-01(3.08e-01)-
MaF3 | 10 | 9.92e-01(2.55e-03) | 9.72e-01(1.86e-02)- | 4.42e-01(3.77e-01)-
MaF4 | 5 | 1.29e-01(1.33e-03) | 1.08e-01(1.44e-02)- | 1.18e-01(2.08e-03)-
MaF4 | 8 | 6.03e-03(1.88e-04) | 3.33e-03(1.02e-03)- | 4.74e-03(2.12e-04)-
MaF4 | 10 | 5.62e-04(1.47e-05) | 2.58e-04(9.13e-05)- | 3.65e-04(2.55e-05)-
MaF5 | 5 | 8.01e-01(1.81e-03) | 8.00e-01(3.40e-03)~ | 8.00e-01(2.06e-03)~
MaF5 | 8 | 9.32e-01(1.97e-03) | 9.31e-01(1.39e-03)- | 9.31e-01(1.58e-03)-
MaF5 | 10 | 9.67e-01(1.09e-03) | 9.66e-01(7.51e-04)- | 9.66e-01(1.18e-03)-
MaF6 | 5 | 1.30e-01(5.82e-05) | 1.30e-01(3.35e-05)- | 1.30e-01(5.60e-05)-
MaF6 | 8 | 1.03e-01(2.78e-05) | 4.51e-02(4.39e-02)- | 6.64e-02(5.81e-02)-
MaF6 | 10 | 9.49e-02(1.27e-02) | 4.55e-02(7.07e-02)- | 4.40e-02(4.92e-02)-
MaF7 | 5 | 2.51e-01(4.40e-02) | 3.09e-01(2.58e-03)+ | 2.49e-01(3.73e-02)~
MaF7 | 8 | 1.91e-01(4.11e-02) | 2.54e-01(4.64e-03)+ | 2.22e-01(1.68e-02)+
MaF7 | 10 | 1.78e-01(3.53e-02) | 2.32e-01(4.46e-03)+ | 2.01e-01(9.95e-03)+
Best/all | | 34/48 | 10/48 | 4/48
Worse/similar/better | | | 33-/5~/10+ | 31-/12~/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.

[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, in press, 2019.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+ – an indicator for multi and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, in press, 2019.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer London, London, UK, 2005.



In order to provide the elite particles for leading the PSO-based search, the angular-guided archive update strategy is designed to provide an angular search direction for each particle, which is more effective for solving MaOPs. At last, the complete framework of AGPSO is also shown in order to clarify its implementation as well as its other components.

3.1. A Density-Based Velocity Update Strategy. As introduced in Section 2.1, the original velocity update strategy includes two best positions of a particle, i.e., the personal-best (pbest) and global-best (gbest) particles, as defined in equation (2), which are used to guide the swarm search. Theoretically, the quality of the generated particles depends on the guiding information from pbest and gbest in the velocity update. However, the traditional velocity update strategy faces great challenges in tackling MaOPs, as the selection of effective personal-best and global-best particles is difficult. Moreover, the two velocity components used in equation (2) may introduce too much disturbance into the swarm search, as most of the particles are usually sparse in a high-dimensional objective space. Thus, the quality of particles generated by the traditional velocity formula may not be promising, because the large disturbance in the velocity update lowers the search intensity in the local region around the current particle. Some experiments are given in Section 4.5 to study the effectiveness of different velocity update strategies. In this paper, a density-based velocity update strategy is proposed in equation (4), which can better control the search intensity in the local region of particles:

v_i(t+1) = \begin{cases} w\, v_i(t) + c_1 r_1 (lbest_i - x_i(t)), & D_x \ge medSDE \\ w\, v_i(t) + c_2 r_2 (gbest_i - x_i(t)), & D_x < medSDE \end{cases}  (4)

To the selection of gbesti of particle xi the selection ofgbest focuses on the particles with better convergence in thisarchive Meanwhile we also need to ensure a certain per-turbation thus we randomly choose from the top 20 of theconvergence performance e gbesti of particle xi is ran-domly selected from the top 20 particles in the externalarchive with the better convergence performance values For

each particle x we calculate its convergence performancevalues using the following formulation

Con(x) = \sum_{k=1}^{m} f_k'(x),  (5)

where m is the number of objectives and f_k'(x) returns the k-th normalized objective value of x, which is obtained by the normalization procedure as follows:

f_k'(x) = \frac{f_k(x) - z_k^{*}}{z_k^{nadir} - z_k^{*}},  (6)

where z_k^{*} and z_k^{nadir} are the k-th objective values of the ideal and nadir points, respectively, k = 1, 2, ..., m.

For the selection of lbest_i, the current particle x_i in the particle swarm (i = 1, ..., N, where N is the swarm size) is first associated to the closest particle y in the external archive A by comparing the angular distance of x_i to each particle in the external archive; then y is used to find lbest_i in A. Here, the angular distance [41] between a particle x and another particle y, termed AD(x, y), is computed by

AD(x, y) = \| \lambda(x) - \lambda(y) \|_2,  (7)

where λ(x) is defined as follows:

\lambda(x) = \frac{F'(x)}{\sum_{k=1}^{m} f_k'(x)},  (8)

where F'(x) = (f_1'(x), f_2'(x), ..., f_m'(x))^T and f_k'(x) is calculated by equation (6).

Then, lbest_i is selected from the T angular-distance-based closest neighbors of y as the one with the best convergence value computed by equation (5). Specifically, each particle of A owns its T nearest neighbors, which are found by calculating the angular distance of each particle to the other particles of A using equation (7). The neighborhood information of x_i in A is recorded in B(i), where B(i) = {i_1, i_2, ..., i_T} and (x_{i_1}, x_{i_2}, ..., x_{i_T}) are the T closest angular-distance-based solutions to x_i by equation (7).

Then, for each particle in S, lbest_i and gbest_i are obtained to update the velocity by equation (4) and the position by equation (3), respectively. After that, the objective values of the particle are evaluated. To further clarify the selection of the local best and global best particles and the density-based velocity update, the related pseudocode is given in Algorithm 1.
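The sketch below illustrates, in Python/NumPy rather than the authors' Java code, how the quantities used in this section fit together: normalization (equation (6)), the convergence value (equation (5)), pairwise angular distances (equations (7)-(8)), a shift-based density estimate, and the two-branch velocity update of equation (4). The SDE density follows a common "minimum shifted distance" reading of [47], so it should be taken as an assumption rather than the paper's exact formula, and all function names are illustrative.

```python
import numpy as np

def normalize(F, z_star, z_nadir):
    """Equation (6): objective-wise normalization by the ideal and nadir points."""
    return (F - z_star) / (z_nadir - z_star)

def convergence(Fn):
    """Equation (5): Con(x) is the sum of the normalized objectives (smaller is better)."""
    return Fn.sum(axis=1)

def angular_distance_matrix(Fn):
    """Equations (7)-(8): Euclidean distances between objective vectors projected
    onto the unit-sum hyperplane, for all pairs of particles."""
    lam = Fn / Fn.sum(axis=1, keepdims=True)
    diff = lam[:, None, :] - lam[None, :, :]
    return np.linalg.norm(diff, axis=2)

def sde_density(Fn):
    """Shift-based density estimation [47] (one common variant): for each particle,
    the distance to the nearest neighbor after shifting the neighbor's better
    objectives up to the particle's values."""
    N = Fn.shape[0]
    d = np.empty(N)
    for i in range(N):
        shifted = np.maximum(Fn, Fn[i])              # shift every particle w.r.t. i
        dist = np.linalg.norm(shifted - Fn[i], axis=1)
        dist[i] = np.inf
        d[i] = dist.min()
    return d

def density_based_velocity(v, x, lbest, gbest, d_x, med_sde, rng,
                           w=0.3, c1=2.0, c2=2.0):
    """Equation (4): sparse particles follow lbest, crowded particles follow gbest."""
    r = rng.random(x.size)
    if d_x >= med_sde:
        return w * v + c1 * r * (lbest - x)
    return w * v + c2 * r * (gbest - x)
```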

3.2. An Angular-Guided Archive Update Strategy. After performing the abovementioned density-based velocity update for the swarm search, each particle flies to a new position to produce the new particle swarm S. In order to obtain a population consisting of a fixed number of elitist solutions that maintains an excellent balance between convergence and distribution in the external archive, an appropriate selection strategy is required to update A. In this paper, we propose an angular-guided archive update strategy following the principle of distributivity first and convergence second, and the pseudocode of updating the external archive is given in Algorithm 2.


In Algorithm 2, the input is S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is set to ∅ in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time the particle pair (x_h, x_u) with the minimum angular distance in U is found in line 4, which is computed as

with minimum angular-distance of U is found in line 4which is computed as

(x_h, x_u) = \arg\min_{(x_h, x_u)} \{ AD(x_h, x_u) \mid x_h, x_u \in U \},  (9)

where AD(x, y) is computed as in equation (7). Then, the particle x* in (x_h, x_u) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of (x_h, x_u) to U, termed AD(x, U), is computed as

AD(x, U) = \min \{ AD(x, y) \mid y \in U,\; y \notin (x_h, x_u) \}, \quad x \in (x_h, x_u),  (10)

where AD(x, y) is computed as in equation (7). N subsets S_1, S_2, ..., S_N are obtained in lines 8-11, where particle x_i in U is saved into S_i, i = 1, 2, ..., N. In lines 12-15, each particle x in A is associated with the particle x_t in U that has the minimum angular distance to it, and this particle x of A is then added into the subset S_t, t = 1, 2, ..., N, in line 14. Then, A is set to ∅ in line 16. Finally, in each subset S_i, i = 1, 2, ..., N, the particle with the best convergence performance computed by equation (5) is added into A in lines 17-20. A is returned in line 21 as the final result.
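As a complement to this walkthrough, the following self-contained Python/NumPy sketch re-expresses the main steps of Algorithm 2. It is a simplified reading of the strategy rather than the authors' implementation: the raw objective arrays, the ideal and nadir points, and the returned index representation are all assumptions made for the illustration.

```python
import numpy as np

def angular_guided_archive_update(S_obj, A_obj, z_star, z_nadir):
    """Simplified sketch of Algorithm 2. S_obj and A_obj are (N, m) arrays of raw
    objective values; returns indices (into the combined set U) of the N kept particles."""
    U = np.vstack([S_obj, A_obj])                       # line 1: union of S and A
    Fn = (U - z_star) / (z_nadir - z_star)              # line 2: normalization, eq. (6)
    lam = Fn / Fn.sum(axis=1, keepdims=True)            # mapping used by eqs. (7)-(8)
    AD = np.linalg.norm(lam[:, None, :] - lam[None, :, :], axis=2)
    np.fill_diagonal(AD, np.inf)
    con = Fn.sum(axis=1)                                # convergence, eq. (5)

    N = S_obj.shape[0]
    in_U = list(range(2 * N))                           # indices still in U
    removed = []                                        # particles moved out of U (lines 3-7)
    for _ in range(N):
        sub = AD[np.ix_(in_U, in_U)]
        h, u = np.unravel_index(np.argmin(sub), sub.shape)
        pair = [in_U[h], in_U[u]]
        rest = [j for j in in_U if j not in pair]
        # drop the member of the closest pair that is closer to the remaining particles
        drop = min(pair, key=lambda p: AD[p, rest].min())
        removed.append(drop)
        in_U.remove(drop)

    # lines 8-15: associate each removed particle with its closest representative in U
    subsets = {i: [i] for i in in_U}
    for p in removed:
        rep = min(in_U, key=lambda i: AD[p, i])
        subsets[rep].append(p)

    # lines 17-20: keep the best-converging particle of each subset
    return [min(members, key=lambda i: con[i]) for members in subsets.values()]
```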

In order to facilitate understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten individuals x1, x2, ..., x10 and their mapping solutions, which map the individuals onto the hyperplane f_1' + f_2' = 1 (used for calculating the angular distance) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half the population size), i.e., x1, x4, x7, x9, x10, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b). Then, the remaining particles, i.e., x2, x3, x5, x6, x8, are preserved in A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with a minimum angular-distance particle in Figure 1(b). After that, five subsets S_1, S_2, ..., S_5 are obtained, where S_1 preserves x1 and its associated particle in A, i.e., x2. Similarly, S_2 preserves x3 and x4; S_3 preserves x5, x6, and x7; S_4 preserves x8 and x9; and S_5 preserves x10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).

3.3. The Complete Algorithm of AGPSO. The above sections have introduced the main components of AGPSO, which include the velocity update strategy and the archive update operator. In addition, an evolutionary search is also applied on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of its complete algorithm is provided in Algorithm 3. The initialization procedure is first activated in lines 1-6 of Algorithm 3. For each particle in S, its position is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated with Algorithm 2 in line 7 and sorted in ascending order by the convergence fitness of each particle computed using equation (5) in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.

In the main loop, the SDE distance of each particle in S is calculated, and these particles are sorted by their SDE distances to obtain the median SDE distance of the particles in S, called medSDE, as in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14 with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy that cooperates with the PSO-based search.

(1) sort the external archive A in ascending order based on the convergence value as in equation (5)
(2) for each solution x_i ∈ A, i = (1, 2, ..., N)
(3)   identify its T neighborhood indices as B(i)
(4) end for
(5) for each particle x_i ∈ S, i = (1, 2, ..., N)
(6)   associate x_i to the angular-guided closest particle y ∈ A with index j in A
(7)   get the T nearest neighbors in A of y by the neighborhood index B(j)
(8)   sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9)   select the first angular-distance-based neighboring particle of y as lbest_i
(10)  select randomly from the top 20% particles in A as gbest_i
(11)  update the velocity v_i of x_i by equation (4)
(12)  update the position x_i by equation (3)
(13)  evaluate the objective values for x_i
(14) end for
(15) return S

ALGORITHM 1: Density-based velocity update (S, A, medSDE).


Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. Also, the angular-guided archive update strategy of Algorithm 2 is carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles. The iteration counter t is then increased by 2 in line 19, because one PSO-based search and one evolutionary search are carried out in each round of the main loop.

[Figure 1 omitted: panels (a)-(e) show the ten particles x1-x10 and their mapping solutions on the normalized hyperplane (legend: obtained solutions vs. mapping solutions), illustrating the successive steps of the angular-guided archive update described above.]

Figure 1: An example to illustrate the process of the proposed angular-guided archive update strategy.

(1) combine S and A into a union set U and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)   find the particle pair (x_h, x_u) in U with the minimum angular distance among all particle pairs in U, as in equation (9)
(5)   find x* in (x_h, x_u) such that x* has the smaller angular distance to U by equation (10)
(6)   add x* to A, then delete x* from U
(7) end for
(8) for each particle x_i ∈ U
(9)   initialize an empty subset S_i
(10)  add the particle x_i into S_i
(11) end for
(12) for each particle x_i ∈ A
(13)  associate x_i to the particle x_t in U that has the smallest angular distance to x_i
(14)  add x_i into S_t
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)  find the particle x* of S_i that has the best convergence performance computed by equation (5)
(19)  add x* into A
(20) end for
(21) return A

ALGORITHM 2: Angular-guided archive update (S, A, N).


The abovementioned operations are repeated until the maximum number of iterations (tmax) is reached. At last, the final particles in A are saved as the final approximation of the PF.
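For readers who prefer code to pseudocode, the following is a compact structural sketch of the Algorithm 3 main loop in Python. It is an outline only: the `init_swarm`, `evaluate`, `pso_search`, `archive_update`, `evolutionary_search`, `convergence`, and `sde` callables stand for the corresponding components described above and are assumed to be supplied by the caller.

```python
import numpy as np

def agpso_main_loop(init_swarm, evaluate, pso_search, archive_update,
                    evolutionary_search, convergence, sde, t_max):
    """Structural sketch of Algorithm 3; every argument is a caller-supplied function."""
    S = init_swarm()                        # lines 1-5: random positions, zero velocities
    evaluate(S)
    A = archive_update(S, init_swarm())     # lines 6-7: archive built from S and N random particles
    A.sort(key=convergence)                 # line 8: ascending convergence order
    t = 1                                   # line 9
    while t < t_max:                        # line 10
        d = [sde(p, S) for p in S]          # lines 11-12: SDE distances and their median
        med_sde = float(np.median(d))
        S_new = pso_search(S, A, med_sde)   # line 13: Algorithm 1 (velocity/position update)
        A = archive_update(S_new, A)        # line 14: Algorithm 2
        S_new = evolutionary_search(A)      # line 15: evolutionary operator on the archive
        evaluate(S_new)                     # line 16
        A = archive_update(S_new, A)        # line 17
        A.sort(key=convergence)             # line 18
        t += 2                              # line 19: two searches per round of the loop
    return A                                # line 21: final approximation of the PF
```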

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional optimization.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are also briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader position of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2×(m−1) and l is set to 20.
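As a quick illustration of these settings, the snippet below tabulates the resulting number of decision variables for each objective count used in the experiments, assuming the usual WFG convention that n = k + l (the paper does not spell this out, so it is stated here as an assumption).

```python
# Decision-variable counts implied by the benchmark settings above.
for m in (5, 8, 10):
    n_maf = m + 10 - 1            # MaF1-MaF7: n = m + k - 1 with k = 10
    k_wfg = 2 * (m - 1)           # WFG1-WFG9: position-related parameters
    l_wfg = 20                    # WFG1-WFG9: distance-related parameters
    n_wfg = k_wfg + l_wfg         # assumed total: n = k + l
    print(f"m={m}: MaF n={n_maf}, WFG k={k_wfg}, l={l_wfg}, n={n_wfg}")
```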

(1) let t = 0, A = ∅, initialize the particles S = {x_1, x_2, ..., x_N}, and let T be the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize the position x_i and set v_i = 0 for x_i
(4)   evaluate the objective values of x_i
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)   calculate the SDE of each particle in S
(12)   sort the particles in S to get medSDE
(13)   S_new = Density-based velocity update (S, A, medSDE)
(14)   A = Angular-guided archive update (S_new, A, N)
(15)   apply the evolutionary search strategy on A to get a new swarm S_new
(16)   evaluate the objectives of the new particles in S_new
(17)   A = Angular-guided archive update (S_new, A, N)
(18)   sort A in ascending order based on equation (5)
(19)   t = t + 2
(20) end while
(21) output A

ALGORITHM 3: The complete algorithm of AGPSO.


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniformity and spread of the distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric measures the portion of the objective space between a prespecified reference point z^r = (z_1^r, ..., z_m^r)^T, which is dominated by all points on the true PF, and the solution set S we obtain. The calculation of the HV metric is given as follows:

HV(S) = Vol\left( \bigcup_{x \in S} [f_1(x), z_1^r] \times \cdots \times [f_m(x), z_m^r] \right),  (11)

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f_1^max(x), f_2^max(x), ..., f_m^max(x)), with f_k^max (k = 1, 2, ..., m) being the maximum value of the k-th objective in the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective in the true PF is adopted as the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger value of HV indicates a better approximation of the true PF.
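Exact HV computation becomes expensive as the number of objectives grows, so one simple way to reproduce this metric approximately is Monte Carlo sampling inside the box bounded by the reference point. The sketch below follows the normalization described above (divide by 1.1 × the true-PF maxima, reference point at (1, ..., 1)); it is an illustrative estimator, not the exact HV algorithm used in the paper.

```python
import numpy as np

def hv_monte_carlo(F, f_max, n_samples=100_000, seed=1):
    """Monte Carlo estimate of the normalized hypervolume of a solution set.

    F     : (n, m) array of raw objective values (minimization assumed).
    f_max : (m,) array of per-objective maxima on the true PF.
    """
    rng = np.random.default_rng(seed)
    Fn = F / (1.1 * np.asarray(f_max))          # normalization used in the experiments
    ref = np.ones(Fn.shape[1])                  # reference point (1.0, ..., 1.0)
    Fn = Fn[np.all(Fn < ref, axis=1)]           # discard points that do not dominate ref
    if Fn.size == 0:
        return 0.0
    pts = rng.random((n_samples, Fn.shape[1]))  # uniform samples in [0, 1]^m
    # a sample is "covered" if some solution is <= the sample in every objective
    covered = (Fn[None, :, :] <= pts[:, None, :]).all(axis=2).any(axis=1)
    return covered.mean()                       # fraction of the unit box that is dominated
```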

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are required by dMOPSO and MOEA/DD, the population sizes of these two algorithms are set equal to the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. In order to ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. p_c and p_m are the crossover and mutation probabilities, and η_c and η_m are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c_1, c_2, c_3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θ_max = 0.5, C = 20, and w = 0.1. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and n_r is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, σ is a condition for determining whether a solution is searching in a similar direction, which is set to π/2(N + 1).
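For convenience, the settings in Table 1 can be collected into a single configuration object. The sketch below is just a transcription of the table (parameter names are shortened; tuples denote ranges sampled per run, and the AGPSO entry follows the prose above rather than the table itself).

```python
# Transcription of Table 1 (tuples mean "sampled uniformly per run").
PARAMS = {
    "SMPSO":     {"w": (0.1, 0.5), "c1": (1.5, 2.5), "c2": (1.5, 2.5), "pm": "1/n", "eta_m": 20},
    "dMOPSO":    {"w": (0.1, 0.5), "c1": (1.5, 2.5), "c2": (1.5, 2.5)},
    "NMPSO":     {"w": (0.1, 0.5), "c1": (1.5, 2.5), "c2": (1.5, 2.5), "c3": (1.5, 2.5),
                  "pm": "1/n", "eta_m": 20},
    "MaPSO":     {"K": 3, "theta_max": 0.5, "C": 20, "w": 0.1},
    "VaEA":      {"pc": 1.0, "pm": "1/n", "eta_m": 20, "sigma": "pi/2(N+1)"},
    "theta-DEA": {"pc": 1.0, "pm": "1/n", "eta_m": 20, "eta_c": 30},
    "MOEA/DD":   {"pc": 1.0, "pm": "1/n", "eta_m": 20, "eta_c": 30, "T": 20, "delta": 0.9},
    "SPEA2+SDE": {"pc": 1.0, "pm": "1/n", "eta_m": 20, "eta_c": 30},
    "AGPSO":     {"w": (0.1, 0.5), "c1": (1.5, 2.5), "c2": (1.5, 2.5)},  # from the prose above
}
```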

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can easily be determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "-", and "~" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO under this statistical test, respectively.
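The statistical protocol above can be reproduced with standard tooling. The sketch below uses SciPy's rank-sum test on two hypothetical arrays of 30 HV values and maps the outcome to the "+/-/~" notation used in the tables; the example numbers are invented for illustration only.

```python
import numpy as np
from scipy.stats import ranksums

def compare_hv(hv_other, hv_agpso, alpha=0.05):
    """Return '+', '-', or '~' for a competitor versus AGPSO (higher HV is better)."""
    stat, p = ranksums(hv_other, hv_agpso)
    if p >= alpha:
        return "~"                                   # no significant difference
    return "+" if np.mean(hv_other) > np.mean(hv_agpso) else "-"

# Hypothetical example: 30 independent runs per algorithm.
rng = np.random.default_rng(0)
hv_agpso = rng.normal(0.79, 0.004, 30)
hv_other = rng.normal(0.74, 0.009, 30)
print(compare_hv(hv_other, hv_agpso))                # expected '-'
```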

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared to four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "-", and "~" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO under the statistical test, respectively. The best mean result for each problem is highlighted in boldface in the original tables.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV performance comparisons with four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems compared with these PSOs, as it is best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second last row of Table 2. The last row of Table 2 gives the comparison of AGPSO with each PSO on WFG, where each entry of "-/~/+" counts the test problems on which the corresponding

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/2(N + 1)
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


algorithm performs worse than, similarly to, and better than AGPSO, respectively.

It is found that AGPSO shows an absolute advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO; AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs best with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs on the WFG problems.

From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second last row in Table 3, there are 12 best results in 21 test problems obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], which means that the shape of the reference points cannot fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), and AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires that all objectives be optimized simultaneously to reach the true PF.

Table 2: Comparison results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01(4.27e-02) | 5.97e-01(2.89e-02)- | 3.34e-01(2.58e-02)- | 2.98e-01(3.44e-03)- | 3.00e-01(1.01e-03)-
WFG1 | 8 | 8.75e-01(3.77e-02) | 7.00e-01(1.38e-01)- | 2.74e-01(2.45e-02)- | 2.35e-01(6.26e-03)- | 2.50e-01(1.54e-03)-
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.07e-01(1.58e-01)- | 2.53e-01(1.98e-02)- | 2.13e-01(9.18e-03)- | 2.30e-01(1.29e-03)-
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.69e-01(5.44e-03)- | 9.70e-01(4.70e-03)- | 8.94e-01(1.28e-02)- | 8.85e-01(1.31e-02)-
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.83e-01(3.56e-03)+ | 9.84e-01(2.86e-03)+ | 8.92e-01(1.48e-02)- | 8.48e-01(2.52e-02)-
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.91e-01(4.85e-03)~ | 9.90e-01(1.48e-03)- | 9.00e-01(1.76e-02)- | 8.54e-01(2.55e-02)-
WFG3 | 5 | 6.59e-01(5.16e-03) | 6.13e-01(1.94e-02)- | 6.12e-01(1.64e-02)- | 5.89e-01(1.05e-02)- | 5.75e-01(7.32e-03)-
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(1.52e-02)- | 5.96e-01(1.88e-02)- | 4.94e-01(4.58e-02)- | 5.84e-01(1.51e-02)-
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.83e-01(2.73e-02)- | 5.97e-01(2.58e-02)- | 3.56e-01(9.58e-02)- | 5.92e-01(1.52e-02)-
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.64e-01(6.09e-03)- | 7.22e-01(2.96e-02)- | 6.51e-01(1.08e-02)- | 5.09e-01(1.73e-02)-
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.62e-01(1.21e-02)- | 7.98e-01(1.76e-02)- | 3.88e-01(7.86e-02)- | 5.26e-01(2.13e-02)-
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.82e-01(8.54e-03)- | 8.27e-01(1.32e-02)- | 3.15e-01(1.34e-01)- | 5.52e-01(2.19e-02)-
WFG5 | 5 | 7.41e-01(6.12e-03) | 7.35e-01(3.73e-03)- | 6.63e-01(1.18e-02)- | 5.85e-01(1.53e-02)- | 4.30e-01(1.18e-02)-
WFG5 | 8 | 8.61e-01(4.35e-03) | 8.32e-01(6.58e-03)- | 6.91e-01(1.12e-02)- | 1.81e-01(8.43e-02)- | 4.52e-01(1.08e-02)-
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.65e-01(7.92e-03)- | 6.99e-01(1.76e-02)- | 2.43e-02(1.54e-02)- | 4.65e-01(1.24e-02)-
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.24e-01(3.96e-03)- | 7.09e-01(4.50e-03)- | 6.74e-01(7.93e-03)- | 5.43e-01(1.66e-02)-
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.08e-01(5.70e-03)- | 8.03e-01(2.66e-03)- | 4.89e-01(4.87e-02)- | 6.49e-01(8.38e-03)-
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.34e-01(2.03e-03)- | 8.32e-01(1.80e-03)- | 4.66e-01(2.96e-02)- | 7.03e-01(1.37e-02)-
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.96e-01(2.98e-03)+ | 7.89e-01(4.61e-03)~ | 5.39e-01(2.72e-02)- | 4.44e-01(2.06e-02)-
WFG7 | 8 | 9.20e-01(3.39e-03) | 9.04e-01(3.71e-03)- | 8.73e-01(1.40e-02)- | 2.25e-01(3.93e-02)- | 4.73e-01(1.23e-02)-
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.36e-01(6.97e-03)- | 9.17e-01(1.17e-02)- | 2.07e-01(6.36e-02)- | 5.14e-01(2.10e-02)-
WFG8 | 5 | 6.63e-01(3.75e-03) | 6.73e-01(5.41e-03)+ | 6.29e-01(1.34e-02)- | 4.27e-01(1.57e-02)- | 3.72e-01(2.09e-02)-
WFG8 | 8 | 7.85e-01(7.46e-03) | 7.97e-01(3.79e-02)~ | 6.91e-01(2.13e-02)- | 9.55e-02(3.20e-02)- | 4.11e-01(9.29e-03)-
WFG8 | 10 | 8.36e-01(1.30e-02) | 8.62e-01(4.12e-02)+ | 7.33e-01(2.04e-02)- | 7.73e-02(1.27e-02)- | 4.52e-01(1.90e-02)-
WFG9 | 5 | 6.57e-01(3.82e-03) | 7.35e-01(7.32e-03)+ | 6.23e-01(7.01e-03)- | 5.97e-01(1.99e-02)- | 4.46e-01(1.61e-02)-
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.65e-01(6.80e-02)+ | 6.55e-01(1.17e-02)- | 2.73e-01(6.06e-02)- | 4.55e-01(2.39e-02)-
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.47e-01(6.72e-02)- | 6.56e-01(1.30e-02)- | 2.24e-01(6.96e-02)- | 4.76e-01(1.23e-02)-
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | | | 19-/2~/6+ | 25-/1~/1+ | 27-/0~/0+ | 27-/0~/0+


simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 with all objectives. As for MaF3, which has a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above, and AGPSO performs better than NMPSO with 5 objectives but worse with 8 and 10 objectives. MaF4, which contains a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the PSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; NMPSO is slightly better than AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6 with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, which has a disconnected PF, very well, and NMPSO performs best with all objectives.

In the last rows of Table 3, SMPSO and dMOPSO perform poorly on the MaF problems with 5 objectives, and especially on the high-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, but AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Competitive MOEAs. The sections above experimentally showed that AGPSO performs better than four existing MOPSOs on most of the test problems (MaF and WFG). However, there are not many existing PSO algorithms designed for MaOPs, and a comparison against PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second last row of Table 4, AGPSO obtains 22 best results out of the 27 test cases, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and is slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO excels with 8 and 10 objectives.

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO                NMPSO                 MaPSO                 dMOPSO                SMPSO
MaF1     5    1.31e-02(1.43e-04)   1.27e-02(5.81e-05)-   1.15e-02(2.80e-04)-   1.06e-02(1.26e-04)-   6.83e-03(5.14e-04)-
         8    4.23e-05(1.80e-06)   3.64e-05(7.99e-07)-   2.79e-05(2.66e-06)-   1.30e-05(1.49e-05)-   1.21e-05(1.88e-06)-
         10   6.16e-07(3.49e-08)   4.75e-07(1.83e-08)-   3.83e-07(2.83e-08)-   8.82e-08(8.92e-08)-   1.46e-07(2.86e-08)-
MaF2     5    2.64e-01(1.18e-03)   2.63e-01(8.71e-04)-   2.38e-01(3.57e-03)-   2.50e-01(1.55e-03)-   2.12e-01(4.05e-03)-
         8    2.46e-01(2.64e-03)   2.36e-01(2.32e-03)-   2.09e-01(3.04e-03)-   2.02e-01(2.13e-03)-   1.74e-01(4.46e-03)-
         10   2.45e-01(2.08e-03)   2.23e-01(2.81e-03)-   2.06e-01(5.67e-03)-   2.05e-01(2.15e-03)-   1.88e-01(4.28e-03)-
MaF3     5    9.88e-01(2.87e-03)   9.71e-01(1.47e-02)-   9.26e-01(5.29e-02)-   9.38e-01(8.27e-03)-   7.34e-01(4.50e-01)-
         8    9.95e-01(2.11e-03)   9.99e-01(7.50e-04)+   9.84e-01(1.48e-02)-   9.62e-01(1.44e-02)-   0.00e+00(0.00e+00)-
         10   9.85e-01(2.26e-03)   1.00e+00(1.92e-04)+   9.95e-01(6.00e-03)~   9.71e-01(9.43e-03)-   0.00e+00(0.00e+00)-
MaF4     5    1.29e-01(1.22e-03)   5.43e-02(3.25e-02)-   1.01e-01(1.07e-02)-   4.21e-02(4.98e-03)-   7.96e-02(3.93e-03)-
         8    6.02e-03(9.67e-05)   2.29e-03(7.77e-04)-   3.18e-03(6.84e-04)-   1.24e-04(7.69e-05)-   1.82e-03(3.39e-04)-
         10   5.66e-04(1.89e-05)   2.10e-04(1.26e-04)-   2.36e-04(6.20e-05)-   1.55e-06(8.26e-07)-   1.12e-04(2.32e-05)-
MaF5     5    8.00e-01(2.70e-03)   8.11e-01(1.33e-03)+   7.93e-01(4.06e-03)-   2.69e-01(1.30e-01)-   6.68e-01(2.99e-02)-
         8    9.32e-01(1.01e-03)   9.35e-01(1.40e-03)+   9.29e-01(4.41e-03)-   2.01e-01(6.84e-02)-   3.59e-01(1.30e-01)-
         10   9.67e-01(1.32e-03)   9.64e-01(1.61e-03)-   9.65e-01(1.90e-03)-   2.40e-01(3.18e-02)-   1.60e-01(1.00e-01)-
MaF6     5    1.30e-01(6.22e-05)   9.37e-02(4.55e-03)-   1.10e-01(2.78e-03)-   1.27e-01(3.33e-05)-   1.30e-01(1.40e-05)+
         8    1.06e-01(1.66e-05)   9.83e-02(1.15e-02)-   9.50e-02(2.86e-03)-   1.04e-01(3.24e-05)-   9.83e-02(1.26e-03)-
         10   9.22e-02(1.18e-02)   9.12e-02(0.00e+00)~   9.24e-02(1.53e-03)~   1.00e-01(5.94e-06)~   8.06e-02(1.42e-02)-
MaF7     5    2.53e-01(3.05e-02)   3.29e-01(1.70e-03)+   2.95e-01(6.58e-03)+   2.26e-01(2.42e-03)-   2.37e-01(6.73e-03)-
         8    1.94e-01(2.59e-02)   2.89e-01(2.36e-03)+   2.26e-01(6.39e-03)+   2.88e-02(9.26e-02)-   6.78e-02(9.38e-02)-
         10   1.83e-01(3.18e-02)   2.64e-01(1.78e-03)+   2.05e-01(5.98e-03)+   6.37e-03(5.21e-04)-   5.57e-02(1.25e-01)-
Best/all      12/21                7/21                  1/21                  0/21                  1/21
Worse/similar/better              14-/0~/7+             16-/1~/4+             20-/0~/1+             20-/0~/1+


As for WFG9, θ-DEA and SPEA2+SDE have more advantages: AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE have better performance than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second last row of Table 5 shows that AGPSO performs best in 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best in 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best at tackling these test problems among the MOEAs mentioned above. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all objectives, whereas MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank: it is better than VaEA, MOEA/DD, and SPEA2+SDE, while it is outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly: SPEA2+SDE is the best with 5 and 8 objectives, and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than them in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, AGPSO shows better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity update strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO                VaEA                  θ-DEA                 MOEA/DD               SPEA2+SDE
WFG1     5    7.87e-01(4.27e-02)   3.64e-01(4.02e-02)-   5.73e-01(4.76e-02)-   4.94e-01(2.22e-02)-   7.26e-01(2.27e-02)-
         8    8.75e-01(3.77e-02)   5.78e-01(3.32e-02)-   7.75e-01(3.51e-02)-   3.48e-01(9.75e-02)-   8.36e-01(1.00e-02)-
         10   9.45e-01(3.19e-03)   8.10e-01(3.59e-02)-   8.73e-01(1.59e-02)-   3.40e-01(6.56e-02)-   8.55e-01(8.50e-03)-
WFG2     5    9.76e-01(5.76e-03)   9.62e-01(7.36e-03)-   9.63e-01(6.36e-03)-   9.70e-01(1.14e-03)-   9.60e-01(3.83e-03)~
         8    9.82e-01(4.12e-03)   9.75e-01(6.91e-03)-   9.01e-01(1.65e-01)-   9.42e-01(2.37e-02)-   9.76e-01(3.56e-03)-
         10   9.93e-01(1.99e-03)   9.86e-01(4.59e-03)-   9.34e-01(3.53e-02)-   9.76e-01(1.82e-02)-   9.80e-01(3.09e-03)-
WFG3     5    6.59e-01(5.16e-03)   5.71e-01(1.73e-02)-   6.34e-01(6.13e-03)-   5.69e-01(1.07e-02)-   5.77e-01(2.54e-02)-
         8    6.78e-01(6.20e-03)   5.77e-01(2.83e-02)-   5.69e-01(6.73e-02)-   5.05e-01(1.23e-02)-   5.55e-01(4.06e-02)-
         10   6.91e-01(7.88e-03)   5.63e-01(3.05e-02)-   6.20e-01(1.76e-02)-   4.98e-01(1.55e-02)-   5.42e-01(3.89e-02)-
WFG4     5    7.74e-01(5.34e-03)   7.13e-01(1.06e-02)-   7.61e-01(4.28e-03)-   7.83e-01(4.35e-03)+   7.39e-01(5.10e-03)-
         8    9.00e-01(8.05e-03)   8.05e-01(9.20e-03)-   7.89e-01(2.07e-02)-   6.81e-01(2.59e-02)-   7.85e-01(1.07e-02)-
         10   9.37e-01(6.64e-03)   8.43e-01(9.97e-03)-   8.97e-01(1.14e-02)-   8.53e-01(2.69e-02)-   8.20e-01(8.19e-03)-
WFG5     5    7.41e-01(6.12e-03)   6.97e-01(7.19e-03)-   7.34e-01(3.91e-03)-   7.41e-01(3.39e-03)~   7.08e-01(6.05e-03)-
         8    8.61e-01(4.35e-03)   7.91e-01(1.00e-02)-   7.93e-01(8.22e-03)-   6.66e-01(2.51e-02)-   7.55e-01(1.42e-02)-
         10   8.88e-01(8.29e-03)   8.25e-01(4.03e-03)-   8.69e-01(3.20e-03)-   8.03e-01(1.80e-02)-   7.98e-01(9.63e-03)-
WFG6     5    7.50e-01(8.96e-03)   7.00e-01(1.76e-02)-   7.42e-01(1.06e-02)-   7.40e-01(1.03e-02)-   7.19e-01(1.01e-02)-
         8    8.72e-01(1.26e-02)   8.18e-01(9.27e-03)-   8.16e-01(1.16e-02)-   7.23e-01(2.91e-02)-   7.79e-01(1.13e-02)-
         10   9.09e-01(1.36e-02)   8.49e-01(1.10e-02)-   8.89e-01(1.08e-02)-   8.80e-01(1.29e-02)-   8.17e-01(1.80e-02)-
WFG7     5    7.90e-01(2.12e-03)   7.44e-01(9.10e-03)-   7.90e-01(2.04e-03)~   7.85e-01(3.80e-03)-   7.62e-01(4.15e-03)-
         8    9.20e-01(3.39e-03)   8.71e-01(5.41e-03)-   8.52e-01(1.16e-02)-   7.44e-01(3.15e-02)-   8.22e-01(1.28e-02)-
         10   9.55e-01(9.11e-03)   9.08e-01(5.61e-03)-   9.39e-01(2.22e-03)-   9.22e-01(1.50e-02)-   8.66e-01(6.63e-03)-
WFG8     5    6.63e-01(3.75e-03)   5.98e-01(1.27e-02)-   6.66e-01(5.58e-03)+   6.81e-01(3.20e-03)+   6.60e-01(5.95e-03)-
         8    7.85e-01(7.46e-03)   6.25e-01(2.83e-02)-   6.61e-01(1.85e-02)-   6.72e-01(2.19e-02)-   7.48e-01(1.19e-02)-
         10   8.36e-01(1.30e-02)   6.73e-01(3.66e-02)-   8.02e-01(1.11e-02)-   8.12e-01(7.59e-03)-   7.88e-01(9.94e-03)-
WFG9     5    6.57e-01(3.82e-03)   6.43e-01(1.42e-02)-   6.71e-01(4.34e-02)+   6.55e-01(4.42e-03)-   6.96e-01(8.52e-03)+
         8    7.30e-01(9.96e-03)   7.00e-01(1.12e-02)-   7.12e-01(3.63e-02)-   6.30e-01(3.35e-02)-   7.38e-01(5.00e-02)+
         10   7.49e-01(9.03e-03)   7.20e-01(2.35e-02)-   7.69e-01(5.70e-02)+   6.74e-01(2.56e-02)-   7.75e-01(6.87e-03)+
Best/all      22/27                0/27                  0/27                  2/27                  3/27
Worse/similar/better              27-/0~/0+             23-/1~/3+             24-/1~/2+             23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compare the performance of our algorithm against VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7), as well as on problems that share the same PF shape but have different characteristics, such as the difficulty of convergence and the deceptive PF property (WFG1-WFG9). Therefore, the angular-guided method embedded into PSOs is effective in tackling MaOPs. The angular-guided method is adopted as a distribution indicator to distinguish the similarities of particles, and it is transformed from the vector angle. The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as observed for MOEA/C [52]. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs, and it does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we consider that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually support the abovementioned discussion and to show the distribution of solutions in high-dimensional objective spaces, some final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that the boundary of the PF is not found completely by AGPSO, but the approximate PF obtained by the proposed AGPSO is attached more closely to the real PF. This also confirms our analysis of the environmental selection, i.e., it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO                VaEA                  θ-DEA                 MOEA/DD               SPEA2+SDE
MaF1     5    1.31e-02(1.43e-04)   1.11e-02(2.73e-04)-   5.66e-03(2.72e-05)-   2.96e-03(8.34e-05)-   1.29e-02(1.00e-04)-
         8    4.23e-05(1.80e-06)   2.69e-05(2.04e-06)-   2.93e-05(2.03e-06)-   4.38e-06(4.77e-07)-   3.87e-05(9.83e-07)-
         10   6.16e-07(3.49e-08)   3.60e-07(2.21e-08)-   3.39e-07(7.07e-08)-   3.45e-08(2.21e-08)-   5.30e-07(2.16e-08)-
MaF2     5    2.64e-01(1.18e-03)   2.35e-01(4.80e-03)-   2.33e-01(2.82e-03)-   2.34e-01(1.48e-03)-   2.62e-01(1.26e-03)-
         8    2.46e-01(2.64e-03)   2.00e-01(7.59e-03)-   1.71e-01(1.82e-02)-   1.88e-01(1.39e-03)-   2.36e-01(3.06e-03)-
         10   2.45e-01(2.08e-03)   2.01e-01(1.25e-02)-   1.94e-01(7.96e-03)-   1.91e-01(3.21e-03)-   2.30e-01(2.48e-03)-
MaF3     5    9.88e-01(2.87e-03)   9.97e-01(1.13e-03)+   9.93e-01(2.82e-03)+   9.98e-01(9.54e-05)+   9.92e-01(2.12e-03)+
         8    9.95e-01(2.11e-03)   9.98e-01(3.58e-04)+   9.92e-01(3.83e-03)-   1.00e+00(9.42e-07)+   9.97e-01(1.66e-03)+
         10   9.85e-01(2.26e-03)   9.99e-01(9.36e-04)+   9.70e-01(5.68e-03)-   1.00e+00(3.96e-08)+   9.99e-01(4.94e-04)+
MaF4     5    1.29e-01(1.22e-03)   1.17e-01(3.62e-03)-   7.32e-02(8.89e-03)-   0.00e+00(0.00e+00)-   1.07e-01(5.37e-03)-
         8    6.02e-03(9.67e-05)   2.44e-03(3.28e-04)-   1.32e-03(8.49e-04)-   0.00e+00(0.00e+00)-   3.62e-04(7.80e-05)-
         10   5.66e-04(1.89e-05)   1.48e-04(9.76e-06)-   2.29e-04(3.57e-05)-   0.00e+00(0.00e+00)-   4.23e-06(1.65e-06)-
MaF5     5    8.00e-01(2.70e-03)   7.92e-01(1.85e-03)-   8.13e-01(4.51e-05)+   7.73e-01(3.13e-03)-   7.84e-01(4.38e-03)-
         8    9.32e-01(1.01e-03)   9.08e-01(3.80e-03)-   9.26e-01(9.79e-05)-   8.73e-01(6.71e-03)-   7.36e-01(9.62e-02)-
         10   9.67e-01(1.32e-03)   9.43e-01(6.87e-03)-   9.70e-01(4.61e-05)+   9.28e-01(8.28e-04)-   7.12e-01(1.18e-01)-
MaF6     5    1.30e-01(6.22e-05)   1.30e-01(7.51e-05)+   1.17e-01(3.12e-03)-   8.19e-02(2.96e-02)-   1.29e-01(1.84e-04)-
         8    1.06e-01(1.66e-05)   1.06e-01(2.20e-05)+   9.18e-02(1.21e-03)-   2.48e-02(1.29e-02)-   8.81e-02(2.26e-04)-
         10   9.22e-02(1.18e-02)   7.05e-02(3.95e-02)-   4.57e-02(4.66e-02)-   4.83e-02(5.24e-02)-   2.71e-02(1.00e-01)-
MaF7     5    2.53e-01(3.05e-02)   3.03e-01(3.50e-03)+   2.79e-01(7.42e-03)+   1.19e-01(2.76e-02)-   3.31e-01(1.76e-03)+
         8    1.94e-01(2.59e-02)   2.24e-01(6.17e-03)+   2.20e-01(5.03e-02)+   7.06e-04(7.80e-04)-   2.45e-01(1.73e-02)+
         10   1.83e-01(3.18e-02)   1.91e-01(1.19e-02)~   2.27e-01(2.01e-02)+   9.05e-05(4.81e-05)-   6.56e-02(1.05e-02)-
Best/all      11/21                2/21                  3/21                  3/21                  2/21
Worse/similar/better              13-/1~/7+             15-/0~/6+             18-/0~/3+             16-/0~/5+


In this section, we make a deeper comparison of the velocity update formula. To demonstrate the superiority of the density-based velocity update, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method,

v_d(t) = w v_d(t) + c1 r1d (x_d,pbest - x_d(t)) + c2 r2d (x_d,gbest - x_d(t)) + c3 r3d (x_d,lbest - x_d(t)),    (12)

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
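To make the role of equation (12) concrete, a minimal Java sketch of this traditional velocity update with the extra lbest term (the AGPSO-II variant) is given below. The class and method names are illustrative, the leader positions are assumed to be supplied by the caller, and the coefficient ranges follow the parameter settings reported in Section 4.2; this is a sketch of the formula only, not the authors' implementation.

import java.util.Random;

/* Illustrative sketch of the velocity update in equation (12) (the AGPSO-II variant). */
public final class VelocityUpdateSketch {
    private static final Random RNG = new Random();

    /* Updates the velocity v of one particle in place and returns its new position. */
    public static double[] update(double[] x, double[] v,
                                  double[] pbest, double[] gbest, double[] lbest, double w) {
        double c1 = 1.5 + RNG.nextDouble();   // c1, c2, c3 sampled in [1.5, 2.5]
        double c2 = 1.5 + RNG.nextDouble();
        double c3 = 1.5 + RNG.nextDouble();
        double[] xNew = new double[x.length];
        for (int d = 0; d < x.length; d++) {
            double r1 = RNG.nextDouble(), r2 = RNG.nextDouble(), r3 = RNG.nextDouble();
            v[d] = w * v[d]
                 + c1 * r1 * (pbest[d] - x[d])
                 + c2 * r2 * (gbest[d] - x[d])
                 + c3 * r3 * (lbest[d] - x[d]);
            xNew[d] = x[d] + v[d];            // position update, assumed to be x(t+1) = x(t) + v(t+1) as in standard PSO
        }
        return xNew;
    }
}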

[Figure 2 consists of parallel-coordinate plots (objective no. on the horizontal axis, objective value on the vertical axis) of the final solution sets obtained by (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, and (i) MOEA/DD, together with (j) the true PF, on the 10-objective MaF1 problem.]

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the MaF1 problem.


Table 6 shows the comparisons of AGPSO and the two modified AGPSO variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases; the novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search strength in the surrounding region of this particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2, which requires a time complexity

[Figure 3 consists of parallel-coordinate plots (objective no. on the horizontal axis, objective value on the vertical axis) of the final solution sets obtained by (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, and (i) MOEA/DD, together with (j) the true PF, on the 10-objective WFG3 problem.]

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the WFG3 problem.


[Figure 4 plots the HV metric value against the number of generations for AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2, each with 10 objectives.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem  Obj  AGPSO                AGPSO-I               AGPSO-II
WFG1     5    7.91e-01(3.47e-02)   7.48e-01(3.41e-02)-   7.51e-01(3.88e-02)-
         8    8.85e-01(2.54e-02)   8.14e-01(3.94e-02)-   7.96e-01(3.97e-02)-
         10   9.47e-01(4.77e-03)   9.49e-01(2.59e-03)+   9.45e-01(3.49e-03)~
WFG2     5    9.76e-01(5.13e-03)   9.76e-01(2.81e-03)~   9.78e-01(5.19e-03)+
         8    9.87e-01(2.65e-03)   9.89e-01(2.21e-03)+   9.88e-01(1.84e-03)~
         10   9.92e-01(3.30e-03)   9.93e-01(1.38e-03)+   9.92e-01(1.97e-03)~
WFG3     5    6.59e-01(5.02e-03)   6.66e-01(4.17e-03)+   6.57e-01(5.40e-03)~
         8    6.79e-01(5.20e-03)   6.86e-01(3.41e-03)+   6.79e-01(4.02e-03)~
         10   6.92e-01(7.15e-03)   7.02e-01(3.08e-03)+   6.93e-01(4.13e-03)~
WFG4     5    7.76e-01(5.69e-03)   7.57e-01(7.87e-03)-   7.68e-01(5.07e-03)-
         8    9.00e-01(1.01e-02)   8.85e-01(8.62e-03)-   8.94e-01(8.43e-03)-
         10   9.39e-01(7.42e-03)   9.31e-01(8.35e-03)-   9.30e-01(8.84e-03)-
WFG5     5    7.42e-01(4.66e-03)   7.34e-01(7.45e-03)-   7.39e-01(7.73e-03)-
         8    8.59e-01(4.74e-03)   8.52e-01(6.05e-03)-   8.58e-01(5.45e-03)~
         10   8.91e-01(5.28e-03)   8.73e-01(1.22e-02)-   8.78e-01(2.21e-02)-
WFG6     5    7.53e-01(7.52e-03)   6.92e-01(2.21e-03)-   7.52e-01(3.85e-03)~
         8    8.69e-01(1.24e-02)   8.10e-01(3.63e-03)-   8.83e-01(5.64e-03)+
         10   9.12e-01(1.22e-02)   8.40e-01(3.57e-03)-   9.23e-01(7.99e-03)+
WFG7     5    7.90e-01(2.13e-03)   7.81e-01(3.32e-03)-   7.85e-01(2.78e-03)-
         8    9.21e-01(2.89e-03)   9.15e-01(3.11e-03)-   9.13e-01(3.18e-03)-
         10   9.55e-01(9.72e-03)   9.55e-01(2.26e-03)~   9.44e-01(1.59e-02)-
WFG8     5    6.61e-01(5.62e-03)   6.44e-01(3.76e-03)-   6.54e-01(5.95e-03)-
         8    7.87e-01(7.11e-03)   7.80e-01(4.75e-03)-   7.81e-01(5.89e-03)-
         10   8.41e-01(5.48e-03)   8.45e-01(3.13e-03)+   8.31e-01(1.76e-02)-


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, a time complexity of O(m²N²) is required to classify the population. Lines 8-11 need a time complexity of O(N). The association loop in lines 12-15 also requires a time complexity of O(m²N²). Lines 17-20 need a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses adjacent information (local best) to explore the region around each particle and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO shows superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than traditional angle-based selection on concave and linear PFs, it also sacrifices part of its performance on many-objective optimization problems with convex PFs, and this will be improved in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by the original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [48], θ-DEA [30], and MOEA/DD [49]. The test problems are WFG [50] and MaF [31]; WFG [50] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [31] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem  Obj  AGPSO                AGPSO-I               AGPSO-II
WFG9     5    6.58e-01(4.54e-03)   6.53e-01(3.73e-03)-   6.53e-01(2.99e-03)-
         8    7.29e-01(9.55e-03)   7.20e-01(1.21e-02)-   7.23e-01(8.94e-03)-
         10   7.51e-01(1.21e-02)   7.35e-01(1.18e-02)-   7.36e-01(1.04e-02)-
MaF1     5    1.30e-02(1.29e-04)   1.30e-02(1.10e-04)~   1.30e-02(1.04e-04)~
         8    4.23e-05(1.45e-06)   4.12e-05(1.35e-06)-   4.14e-05(1.78e-06)-
         10   6.12e-07(4.40e-08)   6.07e-07(2.30e-08)~   6.10e-07(2.83e-08)~
MaF2     5    2.64e-01(9.97e-04)   2.62e-01(1.08e-03)-   2.61e-01(1.10e-03)-
         8    2.46e-01(1.64e-03)   2.40e-01(1.89e-03)-   2.41e-01(1.66e-03)-
         10   2.45e-01(1.84e-03)   2.39e-01(2.75e-03)-   2.41e-01(2.19e-03)-
MaF3     5    9.90e-01(4.96e-03)   9.76e-01(2.92e-02)-   4.96e-01(3.82e-01)-
         8    9.68e-01(1.95e-03)   9.36e-01(9.22e-02)-   3.73e-01(3.08e-01)-
         10   9.92e-01(2.55e-03)   9.72e-01(1.86e-02)-   4.42e-01(3.77e-01)-
MaF4     5    1.29e-01(1.33e-03)   1.08e-01(1.44e-02)-   1.18e-01(2.08e-03)-
         8    6.03e-03(1.88e-04)   3.33e-03(1.02e-03)-   4.74e-03(2.12e-04)-
         10   5.62e-04(1.47e-05)   2.58e-04(9.13e-05)-   3.65e-04(2.55e-05)-
MaF5     5    8.01e-01(1.81e-03)   8.00e-01(3.40e-03)~   8.00e-01(2.06e-03)~
         8    9.32e-01(1.97e-03)   9.31e-01(1.39e-03)-   9.31e-01(1.58e-03)-
         10   9.67e-01(1.09e-03)   9.66e-01(7.51e-04)-   9.66e-01(1.18e-03)-
MaF6     5    1.30e-01(5.82e-05)   1.30e-01(3.35e-05)-   1.30e-01(5.60e-05)-
         8    1.03e-01(2.78e-05)   4.51e-02(4.39e-02)-   6.64e-02(5.81e-02)-
         10   9.49e-02(1.27e-02)   4.55e-02(7.07e-02)-   4.40e-02(4.92e-02)-
MaF7     5    2.51e-01(4.40e-02)   3.09e-01(2.58e-03)+   2.49e-01(3.73e-02)~
         8    1.91e-01(4.11e-02)   2.54e-01(4.64e-03)+   2.22e-01(1.68e-02)+
         10   1.78e-01(3.53e-02)   2.32e-01(4.46e-03)+   2.01e-01(9.95e-03)+
Best/all      34/48                10/48                 4/48
Better/similar/worse              33-/5~/10+            31-/12~/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1-18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225-255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81-86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019, In press.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49-68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297-309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439-452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505-519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66-73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69-76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47-77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732-744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836-1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180-1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219-2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32-46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710-14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148-3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316-319, Kitakyushu, Japan, August 2012.
[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213-215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16-37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67-81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169-190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29-43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24-35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794-2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446-1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263-282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331-345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577-601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361-375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211-225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467-486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+ - an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346-352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173-187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348-365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131-152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694-716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280-295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391-405, 2018, In press.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multiobjective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105-145, Springer London, London, UK, 2005.



In Algorithm 2, the inputs are S and A, and N is the size of both S and A. First, S and A are combined into a union set U, and A is set to ∅ in line 1. Then, all the particles in U are normalized by equation (6) in line 2. After that, in lines 3-6, a set of particles is found to represent the distribution of all particles. To do this, each time the current particle pair (xh, xu) with the minimum angular distance in U is found in line 4, which is computed as

(xh, xu) = argmin_(xh, xu) { AD(xh, xu) | xh, xu ∈ U },    (9)

where AD(x, y) is computed as in equation (7). Then, the particle x* in (xh, xu) with the minimum angular distance to U is found, added to A, and deleted from U in lines 5-6. Here, the angular distance of x ∈ (xh, xu) to U, termed AD(x, U), is computed as

AD(x, U) = min{ AD(x, y) | y ∈ U, y ∉ (xh, xu), x ∈ (xh, xu) },    (10)

where AD(x, y) is computed as in equation (7). N subsets S1, S2, ..., SN are obtained in lines 8-11, where particle xi in U is saved into Si, i = 1, 2, ..., N. In lines 12-15, each particle x in A is associated with the particle xt in U to which it has the minimum angular distance, and this particle x of A is then added into the subset St, t = 1, 2, ..., N, in line 14. Then, A is set to empty in line 16. Finally, in each subset Si, i = 1, 2, ..., N, the particle with the best convergence performance, computed by equation (5), is added into A in lines 17-20. The archive A is returned in line 21 as the final result.

In order to facilitate the understanding of this angular-guided strategy, a simple example is illustrated in Figure 1. The set U includes ten individuals x1, x2, ..., x10 and their mapped solutions, which map the individuals to the hyperplane f1' + f2' = 1 (for calculating angular distances) in the normalized biobjective space, as shown in Figure 1(a). First, five particles (half the population size), i.e., x1, x4, x7, x9, x10, that represent the distribution of all ten particles are kept in U, as shown in Figure 1(b). Then, the remaining particles, i.e., x2, x3, x5, x6, x8, are preserved in A, as in Figure 1(c). Second, each particle in Figure 1(c) is associated with the particle in Figure 1(b) that has the minimum angular distance to it. After that, five subsets S1, S2, ..., S5 are obtained, where S1 preserves x1 and its associated particles in A, i.e., x2. Similarly, S2 preserves x3 and x4; S3 preserves x5, x6, and x7; S4 preserves x8 and x9; and S5 preserves x10, as shown in Figure 1(d). Finally, in each subset, only the particle with the best convergence is selected, as in Figure 1(e).

3.3. The Complete Algorithm of AGPSO. The above sections have introduced the main components of AGPSO, which include the velocity update strategy and the archive update operator. In addition, an evolutionary search is also applied on the external archive. In order to describe the remaining operators and to facilitate the implementation of AGPSO, the pseudocode of its complete algorithm is provided in Algorithm 3. The initialization procedure is first activated in lines 1-6 of Algorithm 3. For each particle in S, its positional information is randomly generated and its velocity is set to 0 in line 3. After that, the objectives of each particle are evaluated in line 4. The external archive A is updated by Algorithm 2 in line 7 and sorted in ascending order by the convergence fitness of each particle computed with equation (5) in line 8. After that, the iteration counter t is increased by one. Then, AGPSO steps into the main loop, which contains the particle search and the evolutionary search, until the maximum number of iterations is reached.

In the main loop, the SDE distance of each particle in S is calculated, and these particles are sorted by SDE distance to get the median SDE distance of the particles in S, called medSDE, as in lines 11-12. Then, the PSO-based search is performed in line 13 with the density-based velocity update of Algorithm 1. After that, the angular-guided archive update strategy of Algorithm 2 is executed in line 14 with the inputs S, A, and N (the sizes of S and A are both N). Then, in line 15, the evolutionary strategy is applied on A to optimize the swarm leaders, providing another search strategy to cooperate with the PSO-based search.

(1) sort the external archive A in ascending order based on the convergence value as in equation (5)
(2) for each solution xi ∈ A, i = 1, 2, ..., N
(3)   identify its T-neighborhood index as B(i)
(4) end for
(5) for each particle xi ∈ S, i = 1, 2, ..., N
(6)   associate xi with the angular-guided closest particle y ∈ A, with index j in A
(7)   get the T nearest neighbors of y in A by the neighborhood index B(j)
(8)   sort the T neighbors of y in ascending order based on the convergence value by equation (5)
(9)   select the first angle-distance-based neighboring particle of y as lbest_i
(10)  select gbest_i randomly from the top 20% particles in A
(11)  update the velocity vi of xi by equation (4)
(12)  update the position xi by equation (3)
(13)  evaluate the objective values of xi
(14) end for
(15) return S

Algorithm 1: Density-based velocity update (S, A, medSDE).
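As a rough illustration of the leader-selection steps in Algorithm 1 (lines 9 and 10), the sketch below picks lbest as the best-convergence particle among the T neighbors associated with a particle and gbest uniformly at random from the top 20% of the archive sorted by convergence. The Particle class and its convergence field are illustrative placeholders; equation (5) itself is not reproduced here, and a smaller convergence value is assumed to be better.

import java.util.Comparator;
import java.util.List;
import java.util.Random;

/* Illustrative leader selection for the density-based velocity update (Algorithm 1, lines 9-10). */
final class LeaderSelectionSketch {
    private static final Random RNG = new Random();

    /* Placeholder particle with a precomputed convergence fitness (equation (5) is abstracted away). */
    static final class Particle {
        final double convergence;            // assumption: smaller is better
        Particle(double convergence) { this.convergence = convergence; }
    }

    /* lbest: the neighbor with the best (smallest) convergence value among the T neighbors. */
    static Particle selectLbest(List<Particle> tNeighbours) {
        return tNeighbours.stream()
                .min(Comparator.comparingDouble(p -> p.convergence))
                .orElseThrow(IllegalStateException::new);
    }

    /* gbest: drawn uniformly at random from the top 20% of the archive sorted by convergence. */
    static Particle selectGbest(List<Particle> archiveSortedByConvergence) {
        int top = Math.max(1, archiveSortedByConvergence.size() / 5);
        return archiveSortedByConvergence.get(RNG.nextInt(top));
    }
}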


Finally, the objectives of the solutions newly generated by the evolutionary strategy are evaluated in line 16. Also, the angular-guided archive update strategy of Algorithm 2 is carried out again in line 17, and A is sorted in ascending order again in line 18 for selecting the global-best (gbest) particles. The iteration counter t is

[Figure 1 consists of panels (a)-(e): (a) the obtained solutions x1-x10 and their mapped solutions on the normalized hyperplane; (b) the five particles kept in U that represent the distribution; (c) the remaining particles preserved in A; (d) the five subsets formed by associating each particle in A with its minimum-angular-distance particle in U; and (e) the finally selected particles with the best convergence in each subset.]

Figure 1: An example to illustrate the process of the proposed angular-guided archive update strategy.

(1) combine S and A into a union set U and set A = ∅
(2) normalize all particles in U by equation (6)
(3) for i = 1 to N
(4)   find the particle pair (xh, xu) in U with the minimum angular distance among all particle pairs in U
(5)   find x* in (xh, xu) such that x* has the smaller angular distance to U by equation (9)
(6)   add x* to A, then delete x* from U
(7) end for
(8) for each particle xi ∈ U
(9)   initialize an empty subset Si
(10)  add the particle xi into Si
(11) end for
(12) for each particle xi ∈ A
(13)  associate xi with the particle xt in U that has the smallest angular distance to xi
(14)  add xi into St
(15) end for
(16) set A = ∅
(17) for i = 1 to N
(18)  find the particle x* of Si that has the best convergence performance computed by equation (5)
(19)  add x* into A
(20) end for
(21) return A

Algorithm 2: Angular-guided archive update (S, A, N).
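A compact sketch of the pair-removal loop in lines 3-7 of Algorithm 2 is shown below. The cosine-based angle function stands in for the paper's angular distance AD(x, y) of equation (7), which is not reproduced on this page, and objective vectors are assumed to be already normalized; the class and method names are illustrative rather than the authors' implementation.

import java.util.ArrayList;
import java.util.List;

/* Sketch of the crowded-pair removal loop of Algorithm 2 (lines 3-7) on normalized objective vectors. */
final class AngularSelectionSketch {

    /* Vector angle between two objective vectors, used here as a stand-in for AD(x, y). */
    static double angle(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int k = 0; k < a.length; k++) { dot += a[k] * b[k]; na += a[k] * a[k]; nb += b[k] * b[k]; }
        return Math.acos(Math.max(-1.0, Math.min(1.0, dot / (Math.sqrt(na * nb) + 1e-12))));
    }

    /* Smallest angle from u.get(idx) to the rest of U, excluding both members of the current pair. */
    static double minAngleToRest(List<double[]> u, int idx, int pairA, int pairB) {
        double best = Double.POSITIVE_INFINITY;
        for (int j = 0; j < u.size(); j++) {
            if (j == pairA || j == pairB) continue;
            best = Math.min(best, angle(u.get(idx), u.get(j)));
        }
        return best;
    }

    /* Moves n crowded members from U into the returned list, leaving well-spread representatives in U. */
    static List<double[]> removeCrowded(List<double[]> u, int n) {
        List<double[]> removed = new ArrayList<>();
        for (int it = 0; it < n && u.size() > 1; it++) {
            int h = 0, v = 1;
            double smallest = Double.POSITIVE_INFINITY;
            for (int i = 0; i < u.size(); i++) {
                for (int j = i + 1; j < u.size(); j++) {
                    double a = angle(u.get(i), u.get(j));
                    if (a < smallest) { smallest = a; h = i; v = j; }
                }
            }
            // Of the closest pair, drop the member that is also angularly closer to the remaining set.
            int drop = minAngleToRest(u, h, h, v) <= minAngleToRest(u, v, h, v) ? h : v;
            removed.add(u.remove(drop));
        }
        return removed;
    }
}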


increased by 2 in line 19 because one PSO-based search and one evolutionary search are carried out in each iteration of the main loop. The abovementioned operations are repeated until the maximum number of iterations (tmax) is reached. At last, the final particles in A are saved as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in its environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional objective spaces.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are also briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, considering both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader positional information of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k - 1, where n and m are, respectively, the number of decision variables and the number of objectives; as suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters; as recommended in [51], k is set to 2 × (m - 1) and l is set to 20. A small helper illustrating the resulting decision-space sizes is given below.
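As a quick check of these settings, the snippet below computes the number of decision variables for both suites. The class name is illustrative, and the WFG position-parameter count 2 × (m - 1) is the reconstruction adopted above rather than a value quoted verbatim from the paper.

/* Decision-variable counts implied by the benchmark settings described above. */
final class BenchmarkDimensions {
    /* MaF1-MaF7: n = m + k - 1 with k = 10. */
    static int mafVariables(int m) { return m + 10 - 1; }

    /* WFG1-WFG9: n = k + l, assuming k = 2 * (m - 1) position and l = 20 distance parameters. */
    static int wfgVariables(int m) { return 2 * (m - 1) + 20; }

    public static void main(String[] args) {
        for (int m : new int[] {5, 8, 10}) {
            System.out.printf("m=%d: MaF n=%d, WFG n=%d%n", m, mafVariables(m), wfgVariables(m));
        }
    }
}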

(1) let t = 0, A = ∅, initialize the particles S = {x1, x2, ..., xN}, and let T be the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize the position xi and set vi = 0 for xi
(4)   evaluate the objective values of xi
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)   calculate the SDE of each particle in S
(12)   sort the particles in S to get medSDE
(13)   Snew = Density-based velocity update (S, A, medSDE)
(14)   A = Angular-guided archive update (Snew, A, N)
(15)   apply the evolutionary search strategy on A to get a new swarm Snew
(16)   evaluate the objectives of the new particles in Snew
(17)   A = Angular-guided archive update (Snew, A, N)
(18)   sort A in ascending order based on equation (5)
(19)   t = t + 2
(20) end while
(21) output A

Algorithm 3: The complete algorithm of AGPSO.
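A high-level skeleton of this outer loop might look as follows. The Operators interface and the double[][] representation are placeholders standing in for the operations defined in Algorithms 1 and 2, the evolutionary search, and the SDE computation; the sorting of A by equation (5) is assumed to happen inside the archive-update placeholder. This is a structural sketch, not the authors' code.

/* Structural sketch of the AGPSO main loop (Algorithm 3); every operator is abstracted behind an interface. */
final class AgpsoMainLoopSketch {

    interface Operators {
        double medianSde(double[][] swarm);                                                          // lines 11-12
        double[][] densityBasedVelocityUpdate(double[][] swarm, double[][] archive, double medSde);  // line 13
        double[][] angularGuidedArchiveUpdate(double[][] swarm, double[][] archive, int n);          // lines 14 and 17
        double[][] evolutionarySearch(double[][] archive);                                           // lines 15-16
    }

    static double[][] run(double[][] swarm, double[][] archive, int n, int tMax, Operators ops) {
        archive = ops.angularGuidedArchiveUpdate(swarm, archive, n);       // line 7
        int t = 1;                                                         // line 9
        while (t < tMax) {                                                 // line 10
            double medSde = ops.medianSde(swarm);
            swarm = ops.densityBasedVelocityUpdate(swarm, archive, medSde);        // PSO-based search
            archive = ops.angularGuidedArchiveUpdate(swarm, archive, n);
            double[][] offspring = ops.evolutionarySearch(archive);                // evolutionary search on A
            archive = ops.angularGuidedArchiveUpdate(offspring, archive, n);
            t += 2;                                                        // line 19: two searches per loop
        }
        return archive;                                                    // line 21
    }
}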


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniformity and spread of the distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric measures the volume of the objective space between the prespecified reference point z^r = (z^r_1, ..., z^r_m)^T, which is dominated by all points on the true PF, and the solution set S that we obtain. The HV metric is calculated as follows:

HV(S) = Vol( ∪_{x∈S} [f1(x), z^r_1] × ··· × [fm(x), z^r_m] ),    (11)

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f1^max, f2^max, ..., fm^max), with fk^max (k = 1, 2, ..., m) being the maximum value of the kth objective on the true PF. Points that cannot dominate the reference point are discarded (i.e., solutions that cannot dominate the reference point are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted for the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger value of HV indicates a better approximation of the true PF.
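Exact HV computation takes the Lebesgue measure of the dominated region, which is usually done with specialized algorithms; as a rough illustration only, the Monte Carlo estimator below approximates equation (11) under the assumption that objectives are already normalized so that the reference point is (1.0, ..., 1.0). The class name and sampling budget are illustrative, and this is not the HV routine used in the paper.

import java.util.Random;

/* Rough Monte Carlo approximation of the HV metric in equation (11) for minimized, normalized objectives. */
final class HypervolumeSketch {

    /* Fraction of the box [0, 1]^m (spanned by the reference point) dominated by at least one solution. */
    static double estimate(double[][] front, int samples, long seed) {
        int m = front[0].length;
        Random rng = new Random(seed);
        int dominated = 0;
        for (int s = 0; s < samples; s++) {
            double[] p = new double[m];
            for (int k = 0; k < m; k++) p[k] = rng.nextDouble();
            for (double[] f : front) {
                boolean dominates = true;
                for (int k = 0; k < m; k++) if (f[k] > p[k]) { dominates = false; break; }
                if (dominates) { dominated++; break; }
            }
        }
        return (double) dominated / samples;   // the box has volume 1, so the fraction is the HV estimate
    }
}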

42 General Experimental Parameter Settings In this paperfour PSO algorithms (dMOPSO [17] SMPSO [14] NMPSO[24] and MaPSO [46]) and four competitive MOEAs(VaEA θ minus DEAMOEADD and SPEA2+ SDE) were usedfor performance comparison

Because the weight vectors are implied to dMOPSO andMOEADD the population sizes of these two algorithmsare set the same as the number of weight vectors enumber of weight vectors is set to 210 240 and 275following [51] for the test problem with 5 8 and 10 ob-jectives In order to ensure fairness the other algorithmsadopt the populationswarm size the same as to the numberof weight vectors

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. pc and pm are the crossover probability and the mutation probability; ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c1, c2, and c3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability to select parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions that are replaced by each child solution. Regarding VaEA, σ is a condition for determining whether two solutions search in similar directions, which is set to π/2(N + 1).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "−", and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively.
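The "+", "−", and "∼" labels in the following tables can be reproduced along the lines of the sketch below, assuming 30 HV values per algorithm and that a larger HV is better (an illustration with scipy, not the authors' statistical code).

```python
import numpy as np
from scipy.stats import ranksums

def significance_label(hv_other, hv_agpso, alpha=0.05):
    """Return '+', '-', or '~' for a competitor relative to AGPSO."""
    _, p = ranksums(hv_other, hv_agpso)
    if p >= alpha:
        return "~"                      # no statistically significant difference
    return "+" if np.mean(hv_other) > np.mean(hv_agpso) else "-"

# example with synthetic numbers only
rng = np.random.default_rng(1)
print(significance_label(rng.normal(0.60, 0.02, 30), rng.normal(0.78, 0.01, 30)))
```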

All nine algorithms were implemented in JAVA and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared with four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "−", and "∼" indicate that the results of the other competitors are significantly better than, worse than, and similar to those of AGPSO according to the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV performance comparisons of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems with these PSOs, as it is best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second-to-last row of Table 2. The last row of Table 2 gives the comparisons of AGPSO with each PSO on WFG, where the meaning of "−/∼/+" is the number of test problems in which the corresponding

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/2(N + 1)
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


algorithm performs worse than, similarly to, or better than AGPSO, respectively.

It is found that AGPSO shows a clear advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows better performance than the other PSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs best with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second-to-last row of Table 3, there are 12 best results in 21 test problems obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], whose inverted PF shape cannot be fitted well by a set of predefined reference points. The performance of the reference point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires that all objectives are optimized

Table 2: Comparison of results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e−01(4.27e−02) | 5.97e−01(2.89e−02)− | 3.34e−01(2.58e−02)− | 2.98e−01(3.44e−03)− | 3.00e−01(1.01e−03)−
WFG1 | 8 | 8.75e−01(3.77e−02) | 7.00e−01(1.38e−01)− | 2.74e−01(2.45e−02)− | 2.35e−01(6.26e−03)− | 2.50e−01(1.54e−03)−
WFG1 | 10 | 9.45e−01(3.19e−03) | 8.07e−01(1.58e−01)− | 2.53e−01(1.98e−02)− | 2.13e−01(9.18e−03)− | 2.30e−01(1.29e−03)−
WFG2 | 5 | 9.76e−01(5.76e−03) | 9.69e−01(5.44e−03)− | 9.70e−01(4.70e−03)− | 8.94e−01(1.28e−02)− | 8.85e−01(1.31e−02)−
WFG2 | 8 | 9.82e−01(4.12e−03) | 9.83e−01(3.56e−03)+ | 9.84e−01(2.86e−03)+ | 8.92e−01(1.48e−02)− | 8.48e−01(2.52e−02)−
WFG2 | 10 | 9.93e−01(1.99e−03) | 9.91e−01(4.85e−03)∼ | 9.90e−01(1.48e−03)− | 9.00e−01(1.76e−02)− | 8.54e−01(2.55e−02)−
WFG3 | 5 | 6.59e−01(5.16e−03) | 6.13e−01(1.94e−02)− | 6.12e−01(1.64e−02)− | 5.89e−01(1.05e−02)− | 5.75e−01(7.32e−03)−
WFG3 | 8 | 6.78e−01(6.20e−03) | 5.77e−01(1.52e−02)− | 5.96e−01(1.88e−02)− | 4.94e−01(4.58e−02)− | 5.84e−01(1.51e−02)−
WFG3 | 10 | 6.91e−01(7.88e−03) | 5.83e−01(2.73e−02)− | 5.97e−01(2.58e−02)− | 3.56e−01(9.58e−02)− | 5.92e−01(1.52e−02)−
WFG4 | 5 | 7.74e−01(5.34e−03) | 7.64e−01(6.09e−03)− | 7.22e−01(2.96e−02)− | 6.51e−01(1.08e−02)− | 5.09e−01(1.73e−02)−
WFG4 | 8 | 9.00e−01(8.05e−03) | 8.62e−01(1.21e−02)− | 7.98e−01(1.76e−02)− | 3.88e−01(7.86e−02)− | 5.26e−01(2.13e−02)−
WFG4 | 10 | 9.37e−01(6.64e−03) | 8.82e−01(8.54e−03)− | 8.27e−01(1.32e−02)− | 3.15e−01(1.34e−01)− | 5.52e−01(2.19e−02)−
WFG5 | 5 | 7.41e−01(6.12e−03) | 7.35e−01(3.73e−03)− | 6.63e−01(1.18e−02)− | 5.85e−01(1.53e−02)− | 4.30e−01(1.18e−02)−
WFG5 | 8 | 8.61e−01(4.35e−03) | 8.32e−01(6.58e−03)− | 6.91e−01(1.12e−02)− | 1.81e−01(8.43e−02)− | 4.52e−01(1.08e−02)−
WFG5 | 10 | 8.88e−01(8.29e−03) | 8.65e−01(7.92e−03)− | 6.99e−01(1.76e−02)− | 2.43e−02(1.54e−02)− | 4.65e−01(1.24e−02)−
WFG6 | 5 | 7.50e−01(8.96e−03) | 7.24e−01(3.96e−03)− | 7.09e−01(4.50e−03)− | 6.74e−01(7.93e−03)− | 5.43e−01(1.66e−02)−
WFG6 | 8 | 8.72e−01(1.26e−02) | 8.08e−01(5.70e−03)− | 8.03e−01(2.66e−03)− | 4.89e−01(4.87e−02)− | 6.49e−01(8.38e−03)−
WFG6 | 10 | 9.09e−01(1.36e−02) | 8.34e−01(2.03e−03)− | 8.32e−01(1.80e−03)− | 4.66e−01(2.96e−02)− | 7.03e−01(1.37e−02)−
WFG7 | 5 | 7.90e−01(2.12e−03) | 7.96e−01(2.98e−03)+ | 7.89e−01(4.61e−03)∼ | 5.39e−01(2.72e−02)− | 4.44e−01(2.06e−02)−
WFG7 | 8 | 9.20e−01(3.39e−03) | 9.04e−01(3.71e−03)− | 8.73e−01(1.40e−02)− | 2.25e−01(3.93e−02)− | 4.73e−01(1.23e−02)−
WFG7 | 10 | 9.55e−01(9.11e−03) | 9.36e−01(6.97e−03)− | 9.17e−01(1.17e−02)− | 2.07e−01(6.36e−02)− | 5.14e−01(2.10e−02)−
WFG8 | 5 | 6.63e−01(3.75e−03) | 6.73e−01(5.41e−03)+ | 6.29e−01(1.34e−02)− | 4.27e−01(1.57e−02)− | 3.72e−01(2.09e−02)−
WFG8 | 8 | 7.85e−01(7.46e−03) | 7.97e−01(3.79e−02)∼ | 6.91e−01(2.13e−02)− | 9.55e−02(3.20e−02)− | 4.11e−01(9.29e−03)−
WFG8 | 10 | 8.36e−01(1.30e−02) | 8.62e−01(4.12e−02)+ | 7.33e−01(2.04e−02)− | 7.73e−02(1.27e−02)− | 4.52e−01(1.90e−02)−
WFG9 | 5 | 6.57e−01(3.82e−03) | 7.35e−01(7.32e−03)+ | 6.23e−01(7.01e−03)− | 5.97e−01(1.99e−02)− | 4.46e−01(1.61e−02)−
WFG9 | 8 | 7.30e−01(9.96e−03) | 7.65e−01(6.80e−02)+ | 6.55e−01(1.17e−02)− | 2.73e−01(6.06e−02)− | 4.55e−01(2.39e−02)−
WFG9 | 10 | 7.49e−01(9.03e−03) | 7.47e−01(6.72e−02)− | 6.56e−01(1.30e−02)− | 2.24e−01(6.96e−02)− | 4.76e−01(1.23e−02)−
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | | | 19−/2∼/6+ | 25−/1∼/1+ | 27−/0∼/0+ | 27−/0∼/0+


simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 with all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above, and AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the PSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than that of AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best with all numbers of objectives.

As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. In the sections above, it was experimentally shown that AGPSO has better performance than the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO algorithms for dealing with MaOPs, and the comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row of Table 4, there are 22 best results in 27 test problems obtained by AGPSO, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO shows

Table 3: Comparison of results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e−02(1.43e−04) | 1.27e−02(5.81e−05)− | 1.15e−02(2.80e−04)− | 1.06e−02(1.26e−04)− | 6.83e−03(5.14e−04)−
MaF1 | 8 | 4.23e−05(1.80e−06) | 3.64e−05(7.99e−07)− | 2.79e−05(2.66e−06)− | 1.30e−05(1.49e−05)− | 1.21e−05(1.88e−06)−
MaF1 | 10 | 6.16e−07(3.49e−08) | 4.75e−07(1.83e−08)− | 3.83e−07(2.83e−08)− | 8.82e−08(8.92e−08)− | 1.46e−07(2.86e−08)−
MaF2 | 5 | 2.64e−01(1.18e−03) | 2.63e−01(8.71e−04)− | 2.38e−01(3.57e−03)− | 2.50e−01(1.55e−03)− | 2.12e−01(4.05e−03)−
MaF2 | 8 | 2.46e−01(2.64e−03) | 2.36e−01(2.32e−03)− | 2.09e−01(3.04e−03)− | 2.02e−01(2.13e−03)− | 1.74e−01(4.46e−03)−
MaF2 | 10 | 2.45e−01(2.08e−03) | 2.23e−01(2.81e−03)− | 2.06e−01(5.67e−03)− | 2.05e−01(2.15e−03)− | 1.88e−01(4.28e−03)−
MaF3 | 5 | 9.88e−01(2.87e−03) | 9.71e−01(1.47e−02)− | 9.26e−01(5.29e−02)− | 9.38e−01(8.27e−03)− | 7.34e−01(4.50e−01)−
MaF3 | 8 | 9.95e−01(2.11e−03) | 9.99e−01(7.50e−04)+ | 9.84e−01(1.48e−02)− | 9.62e−01(1.44e−02)− | 0.00e+00(0.00e+00)−
MaF3 | 10 | 9.85e−01(2.26e−03) | 1.00e+00(1.92e−04)+ | 9.95e−01(6.00e−03)∼ | 9.71e−01(9.43e−03)− | 0.00e+00(0.00e+00)−
MaF4 | 5 | 1.29e−01(1.22e−03) | 5.43e−02(3.25e−02)− | 1.01e−01(1.07e−02)− | 4.21e−02(4.98e−03)− | 7.96e−02(3.93e−03)−
MaF4 | 8 | 6.02e−03(9.67e−05) | 2.29e−03(7.77e−04)− | 3.18e−03(6.84e−04)− | 1.24e−04(7.69e−05)− | 1.82e−03(3.39e−04)−
MaF4 | 10 | 5.66e−04(1.89e−05) | 2.10e−04(1.26e−04)− | 2.36e−04(6.20e−05)− | 1.55e−06(8.26e−07)− | 1.12e−04(2.32e−05)−
MaF5 | 5 | 8.00e−01(2.70e−03) | 8.11e−01(1.33e−03)+ | 7.93e−01(4.06e−03)− | 2.69e−01(1.30e−01)− | 6.68e−01(2.99e−02)−
MaF5 | 8 | 9.32e−01(1.01e−03) | 9.35e−01(1.40e−03)+ | 9.29e−01(4.41e−03)− | 2.01e−01(6.84e−02)− | 3.59e−01(1.30e−01)−
MaF5 | 10 | 9.67e−01(1.32e−03) | 9.64e−01(1.61e−03)− | 9.65e−01(1.90e−03)− | 2.40e−01(3.18e−02)− | 1.60e−01(1.00e−01)−
MaF6 | 5 | 1.30e−01(6.22e−05) | 9.37e−02(4.55e−03)− | 1.10e−01(2.78e−03)− | 1.27e−01(3.33e−05)− | 1.30e−01(1.40e−05)+
MaF6 | 8 | 1.06e−01(1.66e−05) | 9.83e−02(1.15e−02)− | 9.50e−02(2.86e−03)− | 1.04e−01(3.24e−05)− | 9.83e−02(1.26e−03)−
MaF6 | 10 | 9.22e−02(1.18e−02) | 9.12e−02(0.00e+00)∼ | 9.24e−02(1.53e−03)∼ | 1.00e−01(5.94e−06)∼ | 8.06e−02(1.42e−02)−
MaF7 | 5 | 2.53e−01(3.05e−02) | 3.29e−01(1.70e−03)+ | 2.95e−01(6.58e−03)+ | 2.26e−01(2.42e−03)− | 2.37e−01(6.73e−03)−
MaF7 | 8 | 1.94e−01(2.59e−02) | 2.89e−01(2.36e−03)+ | 2.26e−01(6.39e−03)+ | 2.88e−02(9.26e−02)− | 6.78e−02(9.38e−02)−
MaF7 | 10 | 1.83e−01(3.18e−02) | 2.64e−01(1.78e−03)+ | 2.05e−01(5.98e−03)+ | 6.37e−03(5.21e−04)− | 5.57e−02(1.25e−01)−
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | | | 14−/0∼/7+ | 16−/1∼/4+ | 20−/0∼/1+ | 20−/0∼/1+


its superiority with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages, and AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the rest of the comparisons, on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE perform better than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best among the MOEAs mentioned above at tackling these test problems. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all numbers of objectives, and MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE while being outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives and worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to state that AGPSO shows better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison of results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e−01(4.27e−02) | 3.64e−01(4.02e−02)− | 5.73e−01(4.76e−02)− | 4.94e−01(2.22e−02)− | 7.26e−01(2.27e−02)−
WFG1 | 8 | 8.75e−01(3.77e−02) | 5.78e−01(3.32e−02)− | 7.75e−01(3.51e−02)− | 3.48e−01(9.75e−02)− | 8.36e−01(1.00e−02)−
WFG1 | 10 | 9.45e−01(3.19e−03) | 8.10e−01(3.59e−02)− | 8.73e−01(1.59e−02)− | 3.40e−01(6.56e−02)− | 8.55e−01(8.50e−03)−
WFG2 | 5 | 9.76e−01(5.76e−03) | 9.62e−01(7.36e−03)− | 9.63e−01(6.36e−03)− | 9.70e−01(1.14e−03)− | 9.60e−01(3.83e−03)∼
WFG2 | 8 | 9.82e−01(4.12e−03) | 9.75e−01(6.91e−03)− | 9.01e−01(1.65e−01)− | 9.42e−01(2.37e−02)− | 9.76e−01(3.56e−03)−
WFG2 | 10 | 9.93e−01(1.99e−03) | 9.86e−01(4.59e−03)− | 9.34e−01(3.53e−02)− | 9.76e−01(1.82e−02)− | 9.80e−01(3.09e−03)−
WFG3 | 5 | 6.59e−01(5.16e−03) | 5.71e−01(1.73e−02)− | 6.34e−01(6.13e−03)− | 5.69e−01(1.07e−02)− | 5.77e−01(2.54e−02)−
WFG3 | 8 | 6.78e−01(6.20e−03) | 5.77e−01(2.83e−02)− | 5.69e−01(6.73e−02)− | 5.05e−01(1.23e−02)− | 5.55e−01(4.06e−02)−
WFG3 | 10 | 6.91e−01(7.88e−03) | 5.63e−01(3.05e−02)− | 6.20e−01(1.76e−02)− | 4.98e−01(1.55e−02)− | 5.42e−01(3.89e−02)−
WFG4 | 5 | 7.74e−01(5.34e−03) | 7.13e−01(1.06e−02)− | 7.61e−01(4.28e−03)− | 7.83e−01(4.35e−03)+ | 7.39e−01(5.10e−03)−
WFG4 | 8 | 9.00e−01(8.05e−03) | 8.05e−01(9.20e−03)− | 7.89e−01(2.07e−02)− | 6.81e−01(2.59e−02)− | 7.85e−01(1.07e−02)−
WFG4 | 10 | 9.37e−01(6.64e−03) | 8.43e−01(9.97e−03)− | 8.97e−01(1.14e−02)− | 8.53e−01(2.69e−02)− | 8.20e−01(8.19e−03)−
WFG5 | 5 | 7.41e−01(6.12e−03) | 6.97e−01(7.19e−03)− | 7.34e−01(3.91e−03)− | 7.41e−01(3.39e−03)∼ | 7.08e−01(6.05e−03)−
WFG5 | 8 | 8.61e−01(4.35e−03) | 7.91e−01(1.00e−02)− | 7.93e−01(8.22e−03)− | 6.66e−01(2.51e−02)− | 7.55e−01(1.42e−02)−
WFG5 | 10 | 8.88e−01(8.29e−03) | 8.25e−01(4.03e−03)− | 8.69e−01(3.20e−03)− | 8.03e−01(1.80e−02)− | 7.98e−01(9.63e−03)−
WFG6 | 5 | 7.50e−01(8.96e−03) | 7.00e−01(1.76e−02)− | 7.42e−01(1.06e−02)− | 7.40e−01(1.03e−02)− | 7.19e−01(1.01e−02)−
WFG6 | 8 | 8.72e−01(1.26e−02) | 8.18e−01(9.27e−03)− | 8.16e−01(1.16e−02)− | 7.23e−01(2.91e−02)− | 7.79e−01(1.13e−02)−
WFG6 | 10 | 9.09e−01(1.36e−02) | 8.49e−01(1.10e−02)− | 8.89e−01(1.08e−02)− | 8.80e−01(1.29e−02)− | 8.17e−01(1.80e−02)−
WFG7 | 5 | 7.90e−01(2.12e−03) | 7.44e−01(9.10e−03)− | 7.90e−01(2.04e−03)∼ | 7.85e−01(3.80e−03)− | 7.62e−01(4.15e−03)−
WFG7 | 8 | 9.20e−01(3.39e−03) | 8.71e−01(5.41e−03)− | 8.52e−01(1.16e−02)− | 7.44e−01(3.15e−02)− | 8.22e−01(1.28e−02)−
WFG7 | 10 | 9.55e−01(9.11e−03) | 9.08e−01(5.61e−03)− | 9.39e−01(2.22e−03)− | 9.22e−01(1.50e−02)− | 8.66e−01(6.63e−03)−
WFG8 | 5 | 6.63e−01(3.75e−03) | 5.98e−01(1.27e−02)− | 6.66e−01(5.58e−03)+ | 6.81e−01(3.20e−03)+ | 6.60e−01(5.95e−03)−
WFG8 | 8 | 7.85e−01(7.46e−03) | 6.25e−01(2.83e−02)− | 6.61e−01(1.85e−02)− | 6.72e−01(2.19e−02)− | 7.48e−01(1.19e−02)−
WFG8 | 10 | 8.36e−01(1.30e−02) | 6.73e−01(3.66e−02)− | 8.02e−01(1.11e−02)− | 8.12e−01(7.59e−03)− | 7.88e−01(9.94e−03)−
WFG9 | 5 | 6.57e−01(3.82e−03) | 6.43e−01(1.42e−02)− | 6.71e−01(4.34e−02)+ | 6.55e−01(4.42e−03)− | 6.96e−01(8.52e−03)+
WFG9 | 8 | 7.30e−01(9.96e−03) | 7.00e−01(1.12e−02)− | 7.12e−01(3.63e−02)− | 6.30e−01(3.35e−02)− | 7.38e−01(5.00e−02)+
WFG9 | 10 | 7.49e−01(9.03e−03) | 7.20e−01(2.35e−02)− | 7.69e−01(5.70e−02)+ | 6.74e−01(2.56e−02)− | 7.75e−01(6.87e−03)+
Best/all | | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
Worse/similar/better | | | 27−/0∼/0+ | 23−/1∼/3+ | 24−/1∼/2+ | 23−/1∼/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it was particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compared the environmental selection of our algorithm with that of VaEA on test problems with different PF shapes (MaF1-MaF7) and on problems that share a PF shape but differ in other characteristics, such as the difficulty of convergence and a deceptive PF (WFG1-WFG9). These results indicate that the angular-guided method embedded into PSO is effective in tackling MaOPs. The angular-guided method is adopted as a distribution indicator to distinguish the similarities of particles, and it is derived from the vector angle. The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as observed for MOEA-C [52]. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs, and it does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.
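As background for this discussion, the sketch below shows the basic vector-angle similarity computation in the normalized objective space that angle-based methods build on; the exact angular distance used by AGPSO is defined by equations (6)-(9) in the paper and is not reproduced here.

```python
import numpy as np

def vector_angle(f1, f2, ideal, nadir):
    """Angle between two particles' normalized objective vectors (smaller = more similar)."""
    g1 = (np.asarray(f1, float) - ideal) / (nadir - ideal)
    g2 = (np.asarray(f2, float) - ideal) / (nadir - ideal)
    cos = g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# example with two 5-objective particles
ideal, nadir = np.zeros(5), np.ones(5)
print(vector_angle([0.1, 0.3, 0.2, 0.4, 0.1], [0.2, 0.3, 0.2, 0.5, 0.1], ideal, nadir))
```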

To visually support the abovementioned discussion, to better show the performance, and to illustrate the distribution of solutions in high-dimensional objective spaces, some final solution sets with the median HV values among the 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that the boundary of the PF is not completely covered by AGPSO, but the approximate PF obtained by the proposed AGPSO adheres more closely to the true PF. This also supports our analysis of the environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts on WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four

Table 5: Comparison of results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e−02(1.43e−04) | 1.11e−02(2.73e−04)− | 5.66e−03(2.72e−05)− | 2.96e−03(8.34e−05)− | 1.29e−02(1.00e−04)−
MaF1 | 8 | 4.23e−05(1.80e−06) | 2.69e−05(2.04e−06)− | 2.93e−05(2.03e−06)− | 4.38e−06(4.77e−07)− | 3.87e−05(9.83e−07)−
MaF1 | 10 | 6.16e−07(3.49e−08) | 3.60e−07(2.21e−08)− | 3.39e−07(7.07e−08)− | 3.45e−08(2.21e−08)− | 5.30e−07(2.16e−08)−
MaF2 | 5 | 2.64e−01(1.18e−03) | 2.35e−01(4.80e−03)− | 2.33e−01(2.82e−03)− | 2.34e−01(1.48e−03)− | 2.62e−01(1.26e−03)−
MaF2 | 8 | 2.46e−01(2.64e−03) | 2.00e−01(7.59e−03)− | 1.71e−01(1.82e−02)− | 1.88e−01(1.39e−03)− | 2.36e−01(3.06e−03)−
MaF2 | 10 | 2.45e−01(2.08e−03) | 2.01e−01(1.25e−02)− | 1.94e−01(7.96e−03)− | 1.91e−01(3.21e−03)− | 2.30e−01(2.48e−03)−
MaF3 | 5 | 9.88e−01(2.87e−03) | 9.97e−01(1.13e−03)+ | 9.93e−01(2.82e−03)+ | 9.98e−01(9.54e−05)+ | 9.92e−01(2.12e−03)+
MaF3 | 8 | 9.95e−01(2.11e−03) | 9.98e−01(3.58e−04)+ | 9.92e−01(3.83e−03)− | 1.00e+00(9.42e−07)+ | 9.97e−01(1.66e−03)+
MaF3 | 10 | 9.85e−01(2.26e−03) | 9.99e−01(9.36e−04)+ | 9.70e−01(5.68e−03)− | 1.00e+00(3.96e−08)+ | 9.99e−01(4.94e−04)+
MaF4 | 5 | 1.29e−01(1.22e−03) | 1.17e−01(3.62e−03)− | 7.32e−02(8.89e−03)− | 0.00e+00(0.00e+00)− | 1.07e−01(5.37e−03)−
MaF4 | 8 | 6.02e−03(9.67e−05) | 2.44e−03(3.28e−04)− | 1.32e−03(8.49e−04)− | 0.00e+00(0.00e+00)− | 3.62e−04(7.80e−05)−
MaF4 | 10 | 5.66e−04(1.89e−05) | 1.48e−04(9.76e−06)− | 2.29e−04(3.57e−05)− | 0.00e+00(0.00e+00)− | 4.23e−06(1.65e−06)−
MaF5 | 5 | 8.00e−01(2.70e−03) | 7.92e−01(1.85e−03)− | 8.13e−01(4.51e−05)+ | 7.73e−01(3.13e−03)− | 7.84e−01(4.38e−03)−
MaF5 | 8 | 9.32e−01(1.01e−03) | 9.08e−01(3.80e−03)− | 9.26e−01(9.79e−05)− | 8.73e−01(6.71e−03)− | 7.36e−01(9.62e−02)−
MaF5 | 10 | 9.67e−01(1.32e−03) | 9.43e−01(6.87e−03)− | 9.70e−01(4.61e−05)+ | 9.28e−01(8.28e−04)− | 7.12e−01(1.18e−01)−
MaF6 | 5 | 1.30e−01(6.22e−05) | 1.30e−01(7.51e−05)+ | 1.17e−01(3.12e−03)− | 8.19e−02(2.96e−02)− | 1.29e−01(1.84e−04)−
MaF6 | 8 | 1.06e−01(1.66e−05) | 1.06e−01(2.20e−05)+ | 9.18e−02(1.21e−03)− | 2.48e−02(1.29e−02)− | 8.81e−02(2.26e−04)−
MaF6 | 10 | 9.22e−02(1.18e−02) | 7.05e−02(3.95e−02)− | 4.57e−02(4.66e−02)− | 4.83e−02(5.24e−02)− | 2.71e−02(1.00e−01)−
MaF7 | 5 | 2.53e−01(3.05e−02) | 3.03e−01(3.50e−03)+ | 2.79e−01(7.42e−03)+ | 1.19e−01(2.76e−02)− | 3.31e−01(1.76e−03)+
MaF7 | 8 | 1.94e−01(2.59e−02) | 2.24e−01(6.17e−03)+ | 2.20e−01(5.03e−02)+ | 7.06e−04(7.80e−04)− | 2.45e−01(1.73e−02)+
MaF7 | 10 | 1.83e−01(3.18e−02) | 1.91e−01(1.19e−02)∼ | 2.27e−01(2.01e−02)+ | 9.05e−05(4.81e−05)− | 6.56e−02(1.05e−02)−
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
Worse/similar/better | | | 13−/1∼/7+ | 15−/0∼/6+ | 18−/0∼/3+ | 16−/0∼/5+


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. In order to prove the superiority of the density-based velocity update formula, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that only embedding the local best into the traditional velocity update method,

$$v_d(t) = w_d v_d(t) + c_1 r_{1d}\left(x_d^{\,pbest} - x_d(t)\right) + c_2 r_{2d}\left(x_d^{\,gbest} - x_d(t)\right) + c_3 r_{3d}\left(x_d^{\,lbest} - x_d(t)\right), \tag{12}$$

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
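A minimal sketch of this traditional rule with the additional local-best term (the AGPSO-II variant) is given below; parameter ranges follow Table 1, w is drawn once per update for simplicity, and the density-based rule actually proposed in AGPSO is intentionally not reproduced here.

```python
import numpy as np

def velocity_update_eq12(v, x, pbest, gbest, lbest, rng):
    """Equation (12): traditional velocity update extended with a local-best term."""
    D = len(x)
    w = rng.uniform(0.1, 0.5)                          # inertia weight in [0.1, 0.5]
    c1, c2, c3 = rng.uniform(1.5, 2.5, size=3)         # acceleration coefficients in [1.5, 2.5]
    r1, r2, r3 = rng.uniform(0.0, 1.0, size=(3, D))    # one random number per dimension
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))

rng = np.random.default_rng(0)
v = np.zeros(4)
x, pbest, gbest, lbest = (rng.random(4) for _ in range(4))
print(velocity_update_eq12(v, x, pbest, gbest, lbest, rng))
```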

Figure 2: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective MaF1 problem, plotted as objective value against objective number: (a) AGPSO; (b) MaPSO; (c) NMPSO; (d) dMOPSO; (e) θ-DEA; (f) SMPSO; (g) VaEA; (h) SPEA2+SDE; (i) MOEA/DD; (j) true PF.


Table 6 shows the comparisons of AGPSO and two modified variants of AGPSO with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, there are 34 best results in 48 test problems obtained by AGPSO, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy can improve AGPSO and make particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO and AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed novel velocity update strategy over the variant of the traditional formula defined by equation (12): adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of this particle. It is also confirmed that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis on AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection that is described in Algorithm 2. Algorithm 2 requires a time complexity

Figure 3: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective WFG3 problem, plotted as objective value against objective number: (a) AGPSO; (b) MaPSO; (c) NMPSO; (d) dMOPSO; (e) θ-DEA; (f) SPEA2+SDE; (g) SMPSO; (h) VaEA; (i) MOEA/DD; (j) true PF.


Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives, plotted as HV metric value against the number of generations: (a) WFG3; (b) WFG4; (c) MaF2.

Table 6: Comparison of results of AGPSO with different velocity update strategies using HV.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e−01(3.47e−02) | 7.48e−01(3.41e−02)− | 7.51e−01(3.88e−02)−
WFG1 | 8 | 8.85e−01(2.54e−02) | 8.14e−01(3.94e−02)− | 7.96e−01(3.97e−02)−
WFG1 | 10 | 9.47e−01(4.77e−03) | 9.49e−01(2.59e−03)+ | 9.45e−01(3.49e−03)∼
WFG2 | 5 | 9.76e−01(5.13e−03) | 9.76e−01(2.81e−03)∼ | 9.78e−01(5.19e−03)+
WFG2 | 8 | 9.87e−01(2.65e−03) | 9.89e−01(2.21e−03)+ | 9.88e−01(1.84e−03)∼
WFG2 | 10 | 9.92e−01(3.30e−03) | 9.93e−01(1.38e−03)+ | 9.92e−01(1.97e−03)∼
WFG3 | 5 | 6.59e−01(5.02e−03) | 6.66e−01(4.17e−03)+ | 6.57e−01(5.40e−03)∼
WFG3 | 8 | 6.79e−01(5.20e−03) | 6.86e−01(3.41e−03)+ | 6.79e−01(4.02e−03)∼
WFG3 | 10 | 6.92e−01(7.15e−03) | 7.02e−01(3.08e−03)+ | 6.93e−01(4.13e−03)∼
WFG4 | 5 | 7.76e−01(5.69e−03) | 7.57e−01(7.87e−03)− | 7.68e−01(5.07e−03)−
WFG4 | 8 | 9.00e−01(1.01e−02) | 8.85e−01(8.62e−03)− | 8.94e−01(8.43e−03)−
WFG4 | 10 | 9.39e−01(7.42e−03) | 9.31e−01(8.35e−03)− | 9.30e−01(8.84e−03)−
WFG5 | 5 | 7.42e−01(4.66e−03) | 7.34e−01(7.45e−03)− | 7.39e−01(7.73e−03)−
WFG5 | 8 | 8.59e−01(4.74e−03) | 8.52e−01(6.05e−03)− | 8.58e−01(5.45e−03)∼
WFG5 | 10 | 8.91e−01(5.28e−03) | 8.73e−01(1.22e−02)− | 8.78e−01(2.21e−02)−
WFG6 | 5 | 7.53e−01(7.52e−03) | 6.92e−01(2.21e−03)− | 7.52e−01(3.85e−03)∼
WFG6 | 8 | 8.69e−01(1.24e−02) | 8.10e−01(3.63e−03)− | 8.83e−01(5.64e−03)+
WFG6 | 10 | 9.12e−01(1.22e−02) | 8.40e−01(3.57e−03)− | 9.23e−01(7.99e−03)+
WFG7 | 5 | 7.90e−01(2.13e−03) | 7.81e−01(3.32e−03)− | 7.85e−01(2.78e−03)−
WFG7 | 8 | 9.21e−01(2.89e−03) | 9.15e−01(3.11e−03)− | 9.13e−01(3.18e−03)−
WFG7 | 10 | 9.55e−01(9.72e−03) | 9.55e−01(2.26e−03)∼ | 9.44e−01(1.59e−02)−
WFG8 | 5 | 6.61e−01(5.62e−03) | 6.44e−01(3.76e−03)− | 6.54e−01(5.95e−03)−
WFG8 | 8 | 7.87e−01(7.11e−03) | 7.80e−01(4.75e−03)− | 7.81e−01(5.89e−03)−
WFG8 | 10 | 8.41e−01(5.48e−03) | 8.45e−01(3.13e−03)+ | 8.31e−01(1.76e−02)−


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3–7, it requires a time complexity of O(m²N²) to classify the population. In lines 8–11, it needs a time complexity of O(N). The loop for association also requires a time complexity of O(m²N²), as in lines 12–15. In lines 17–20, it needs a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].
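Written out, the dominant term in this analysis is as follows (a restatement of the bounds above; the numeric figure is only an order-of-magnitude illustration for the largest setting used here, m = 10 and N = 275):

```latex
\max\{\,O(mN),\; O(m^{2}N^{2}),\; O(N),\; O(N^{2})\,\} = O(m^{2}N^{2}),
\qquad m^{2}N^{2} = 10^{2}\times 275^{2} \approx 7.6\times 10^{6}.
```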

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an effective angular-guided archive update strategy. The novel density-based velocity update strategy uses neighboring information (local-best) to explore the regions around particles and uses globally optimal information (global-best) to search globally for better-performing locations. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed novel velocity update strategy and this archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices some performance on many-objective optimization problems with convex PFs, the improvement of which will be studied in the future.

Data Availability

The source code of AGPSO can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or obtained from the original authors, respectively. In addition, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/DD [50]. The test problems are WFG [23] and MaF [24]: WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG9 | 5 | 6.58e−01(4.54e−03) | 6.53e−01(3.73e−03)− | 6.53e−01(2.99e−03)−
WFG9 | 8 | 7.29e−01(9.55e−03) | 7.20e−01(1.21e−02)− | 7.23e−01(8.94e−03)−
WFG9 | 10 | 7.51e−01(1.21e−02) | 7.35e−01(1.18e−02)− | 7.36e−01(1.04e−02)−
MaF1 | 5 | 1.30e−02(1.29e−04) | 1.30e−02(1.10e−04)∼ | 1.30e−02(1.04e−04)∼
MaF1 | 8 | 4.23e−05(1.45e−06) | 4.12e−05(1.35e−06)− | 4.14e−05(1.78e−06)−
MaF1 | 10 | 6.12e−07(4.40e−08) | 6.07e−07(2.30e−08)∼ | 6.10e−07(2.83e−08)∼
MaF2 | 5 | 2.64e−01(9.97e−04) | 2.62e−01(1.08e−03)− | 2.61e−01(1.10e−03)−
MaF2 | 8 | 2.46e−01(1.64e−03) | 2.40e−01(1.89e−03)− | 2.41e−01(1.66e−03)−
MaF2 | 10 | 2.45e−01(1.84e−03) | 2.39e−01(2.75e−03)− | 2.41e−01(2.19e−03)−
MaF3 | 5 | 9.90e−01(4.96e−03) | 9.76e−01(2.92e−02)− | 4.96e−01(3.82e−01)−
MaF3 | 8 | 9.68e−01(1.95e−03) | 9.36e−01(9.22e−02)− | 3.73e−01(3.08e−01)−
MaF3 | 10 | 9.92e−01(2.55e−03) | 9.72e−01(1.86e−02)− | 4.42e−01(3.77e−01)−
MaF4 | 5 | 1.29e−01(1.33e−03) | 1.08e−01(1.44e−02)− | 1.18e−01(2.08e−03)−
MaF4 | 8 | 6.03e−03(1.88e−04) | 3.33e−03(1.02e−03)− | 4.74e−03(2.12e−04)−
MaF4 | 10 | 5.62e−04(1.47e−05) | 2.58e−04(9.13e−05)− | 3.65e−04(2.55e−05)−
MaF5 | 5 | 8.01e−01(1.81e−03) | 8.00e−01(3.40e−03)∼ | 8.00e−01(2.06e−03)∼
MaF5 | 8 | 9.32e−01(1.97e−03) | 9.31e−01(1.39e−03)− | 9.31e−01(1.58e−03)−
MaF5 | 10 | 9.67e−01(1.09e−03) | 9.66e−01(7.51e−04)− | 9.66e−01(1.18e−03)−
MaF6 | 5 | 1.30e−01(5.82e−05) | 1.30e−01(3.35e−05)− | 1.30e−01(5.60e−05)−
MaF6 | 8 | 1.03e−01(2.78e−05) | 4.51e−02(4.39e−02)− | 6.64e−02(5.81e−02)−
MaF6 | 10 | 9.49e−02(1.27e−02) | 4.55e−02(7.07e−02)− | 4.40e−02(4.92e−02)−
MaF7 | 5 | 2.51e−01(4.40e−02) | 3.09e−01(2.58e−03)+ | 2.49e−01(3.73e−02)∼
MaF7 | 8 | 1.91e−01(4.11e−02) | 2.54e−01(4.64e−03)+ | 2.22e−01(1.68e−02)+
MaF7 | 10 | 1.78e−01(3.53e−02) | 2.32e−01(4.46e−03)+ | 2.01e−01(9.95e−03)+
Best/all | | 34/48 | 10/48 | 4/48
Better/similar/worse | | | 33−/5∼/10+ | 31−/12∼/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.

[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.

[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.

[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.

[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.

[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.

[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.

[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.

[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.

[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.

[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.

[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.

[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.

[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.

[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.

[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.

[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.

[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.

[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.

[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.

[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.

[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.

[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.

[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.

[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.

[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.

[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.

[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.

[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.

[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.

[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.

[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.

[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.

[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.

[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.

[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.

[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.

[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.

[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.

[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.

[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.

[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.

[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.

[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.

[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.

[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.

[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.

[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.

[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.

[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018, in press.

[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer, London, UK, 2005.


Page 6: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

Finally the objectives of solutions which are newlygenerated by the evolutionary strategy are evaluated in line16 Also the angular-guided archive update strategy of

Algorithm 2 is carried out again in line 17 and the A issorted in the ascending order again in line 18 for selectingthe global-best (gbest) particles e iteration counter t is

x1x2x3

x4x5x6 x7

x8

x9x10

x2x3

x5x6

x8

x1x2x3

x4x5x6 x7

x8

x9x10

x1

x4

x6

x9x10

Obtained solutions Mapping solutions

(a)

(b)

(c)

(d) (e)

x1

x4

x7x9

x10

Figure 1 An example to illustrate the process of the proposed angular-guided archive update strategy

(1) combine S and A into a union set U and set A empty(2) normalize all particles in U by equation (6)(3) for i 1 to N(4) find the particle pair (xh xu) in U with the minimum angular-distance for all particle pairs in U(5) find xlowast in (xh xu) such that xlowast has the smaller angular-distance to U by equation (9)(6) add xlowast to A then deleted xlowast from U(7) end for(8) for each particle xi isin U

(9) initialize an empty subset Si(10) add the particle xi into Si(11) end for(12) for each particle xi isin A

(13) associate xi to the particle xt in U that has the smallest angular-distance to xi(14) add xi into St(15) end for(16) set A empty(17) for i 1 to N(18) find the particle xlowast of Si that has the best convergence performance computed by equation (5)(19) add xlowast into A(20) end for(21) return A

ALGORITHM 2 Angular-guided archive update (S A N)

6 Complexity

increased by 2 in line 19 because one PSO-based search andone evolutionary strategy are carried out in each particle andin each of the main loop e abovementioned operation isrepeated until the maximum of iterations (tmax) is achievedAt last the final particles in A are saved as the final ap-proximation of PF

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in the environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional objective spaces.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, balancing the convergence and the diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.

(2) MaPSO [46]: this angle-based MOPSO chooses the leader positional information of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2 × (m − 1) and l is set to 20.
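The snippet below (a convenience sketch, not the authors' code) turns these settings into concrete problem dimensions; it assumes the WFG position parameter k = 2 × (m − 1), as stated above.

```python
def wfg_dimensions(m, l=20):
    # WFG1-WFG9: k position-related plus l distance-related variables.
    k = 2 * (m - 1)
    return k + l

def maf_dimensions(m, k=10):
    # MaF1-MaF7: n = m + k - 1 decision variables with k = 10.
    return m + k - 1

for m in (5, 8, 10):
    print(m, wfg_dimensions(m), maf_dimensions(m))
# m = 5  -> WFG: 28 variables, MaF: 14 variables
# m = 8  -> WFG: 34 variables, MaF: 17 variables
# m = 10 -> WFG: 38 variables, MaF: 19 variables
```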

(1) Let t = 0, A = ∅, the initial particles S = {x1, x2, ..., xN}, and T be the value of the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize the position xi and set vi = 0 for xi
(4)   evaluate the objective values of xi
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided archive update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)  calculate the SDE of each particle in S
(12)  sort the particles in S to get the medSDE
(13)  Snew = Density-based velocity update (S, A, medSDE)
(14)  A = Angular-guided archive update (Snew, A, N)
(15)  apply the evolutionary search strategy on A to get a new swarm Snew
(16)  evaluate the objectives of the new particles in Snew
(17)  A = Angular-guided archive update (Snew, A, N)
(18)  sort A in ascending order based on equation (5)
(19)  t = t + 2
(20) end while
(21) output A

ALGORITHM 3: The complete algorithm of AGPSO.


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniform and spread distribution of the solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric calculates the volume of the objective space between the prespecified reference point z^r = (z_1^r, ..., z_m^r)^T, which is dominated by all points on the true PF, and the obtained solution set S. The HV metric is computed as follows:

HV(S) = \mathrm{Vol}\Big( \bigcup_{x \in S} [f_1(x), z_1^r] \times \cdots \times [f_m(x), z_m^r] \Big),   (11)

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f_1^{max}, f_2^{max}, ..., f_m^{max}), with f_k^{max} (k = 1, 2, ..., m) being the maximum value of the kth objective on the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included in the HV computation). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted for the reference point; after the normalization operation, the reference point is therefore set to (1.0, 1.0, ..., 1.0). A larger HV value indicates a better approximation of the true PF.
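As a minimal numerical illustration of equation (11) (our example, not part of the original study), the following sketch computes the exact HV for a two-objective minimization case after normalization, using the standard sweep over solutions sorted by the first objective.

```python
def hv_2d(points, ref=(1.0, 1.0)):
    # Keep solutions that dominate the reference point, drop dominated ones,
    # then sweep over f1 and accumulate the dominated rectangles.
    pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
    nondominated = []
    for f1, f2 in pts:
        if not nondominated or f2 < nondominated[-1][1]:
            nondominated.append((f1, f2))
    hv = 0.0
    for i, (f1, f2) in enumerate(nondominated):
        next_f1 = nondominated[i + 1][0] if i + 1 < len(nondominated) else ref[0]
        hv += (next_f1 - f1) * (ref[1] - f2)
    return hv

print(hv_2d([(0.2, 0.8), (0.5, 0.5), (0.8, 0.2)]))  # 0.37
```

For the 5- to 10-objective problems used here, exact HV computation is considerably more involved; the value 0.37 above is only meant to make the union-of-boxes definition in equation (11) tangible.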

4.2. General Experimental Parameter Settings. In this paper, four PSO-based algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are required by dMOPSO and MOEA/DD, the population sizes of these two algorithms are set to the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. To ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. pc and pm are the crossover and mutation probabilities, and ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c1, c2, and c3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions replaced by each child solution. Regarding VaEA, σ is the threshold used to determine whether two solutions search in similar directions, and it is set to π/(2(N + 1)).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for the 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank-sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "−", and "∼" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively.
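A sketch of this statistical protocol is given below; hv_other and hv_agpso stand for the 30 collected HV values of one competitor and of AGPSO on a single test instance (hypothetical placeholders), and breaking the significant cases by mean HV is our simplification of the sign assignment.

```python
import numpy as np
from scipy.stats import ranksums  # Wilcoxon rank-sum test

def compare(hv_other, hv_agpso, alpha=0.05):
    """Return '+', '-', or '~' for a competitor relative to AGPSO."""
    _, p = ranksums(hv_other, hv_agpso)
    if p >= alpha:
        return '~'                 # no statistically significant difference
    return '+' if np.mean(hv_other) > np.mean(hv_agpso) else '-'

# Maximum function evaluations for a 10-objective problem:
N, t_max = 275, 1000
print(N * t_max)                   # MFE = 275000
```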

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared to four PSO-based algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "−", and "∼" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO according to the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparisons of the four MOPSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems, as it is the best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are the best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second-to-last row of Table 2. The last row of Table 2 gives the one-to-one comparisons of AGPSO with each PSO on the WFG problems.

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/(2(N + 1))
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


In that row, each entry "−/∼/+" is the number of test problems on which the corresponding algorithm performs worse than, similarly to, or better than AGPSO.

It is found that AGPSO shows an absolute advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO performs the best only with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively; conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with the four PSO-based algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second-to-last row in Table 3, AGPSO obtains 12 best results out of the 21 test cases, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not the best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], which means that the distribution of the reference points cannot fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence.

Table 2: Comparison results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01(4.27e-02) | 5.97e-01(2.89e-02)− | 3.34e-01(2.58e-02)− | 2.98e-01(3.44e-03)− | 3.00e-01(1.01e-03)−
WFG1 | 8 | 8.75e-01(3.77e-02) | 7.00e-01(1.38e-01)− | 2.74e-01(2.45e-02)− | 2.35e-01(6.26e-03)− | 2.50e-01(1.54e-03)−
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.07e-01(1.58e-01)− | 2.53e-01(1.98e-02)− | 2.13e-01(9.18e-03)− | 2.30e-01(1.29e-03)−
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.69e-01(5.44e-03)− | 9.70e-01(4.70e-03)− | 8.94e-01(1.28e-02)− | 8.85e-01(1.31e-02)−
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.83e-01(3.56e-03)+ | 9.84e-01(2.86e-03)+ | 8.92e-01(1.48e-02)− | 8.48e-01(2.52e-02)−
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.91e-01(4.85e-03)∼ | 9.90e-01(1.48e-03)− | 9.00e-01(1.76e-02)− | 8.54e-01(2.55e-02)−
WFG3 | 5 | 6.59e-01(5.16e-03) | 6.13e-01(1.94e-02)− | 6.12e-01(1.64e-02)− | 5.89e-01(1.05e-02)− | 5.75e-01(7.32e-03)−
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(1.52e-02)− | 5.96e-01(1.88e-02)− | 4.94e-01(4.58e-02)− | 5.84e-01(1.51e-02)−
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.83e-01(2.73e-02)− | 5.97e-01(2.58e-02)− | 3.56e-01(9.58e-02)− | 5.92e-01(1.52e-02)−
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.64e-01(6.09e-03)− | 7.22e-01(2.96e-02)− | 6.51e-01(1.08e-02)− | 5.09e-01(1.73e-02)−
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.62e-01(1.21e-02)− | 7.98e-01(1.76e-02)− | 3.88e-01(7.86e-02)− | 5.26e-01(2.13e-02)−
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.82e-01(8.54e-03)− | 8.27e-01(1.32e-02)− | 3.15e-01(1.34e-01)− | 5.52e-01(2.19e-02)−
WFG5 | 5 | 7.41e-01(6.12e-03) | 7.35e-01(3.73e-03)− | 6.63e-01(1.18e-02)− | 5.85e-01(1.53e-02)− | 4.30e-01(1.18e-02)−
WFG5 | 8 | 8.61e-01(4.35e-03) | 8.32e-01(6.58e-03)− | 6.91e-01(1.12e-02)− | 1.81e-01(8.43e-02)− | 4.52e-01(1.08e-02)−
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.65e-01(7.92e-03)− | 6.99e-01(1.76e-02)− | 2.43e-02(1.54e-02)− | 4.65e-01(1.24e-02)−
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.24e-01(3.96e-03)− | 7.09e-01(4.50e-03)− | 6.74e-01(7.93e-03)− | 5.43e-01(1.66e-02)−
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.08e-01(5.70e-03)− | 8.03e-01(2.66e-03)− | 4.89e-01(4.87e-02)− | 6.49e-01(8.38e-03)−
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.34e-01(2.03e-03)− | 8.32e-01(1.80e-03)− | 4.66e-01(2.96e-02)− | 7.03e-01(1.37e-02)−
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.96e-01(2.98e-03)+ | 7.89e-01(4.61e-03)∼ | 5.39e-01(2.72e-02)− | 4.44e-01(2.06e-02)−
WFG7 | 8 | 9.20e-01(3.39e-03) | 9.04e-01(3.71e-03)− | 8.73e-01(1.40e-02)− | 2.25e-01(3.93e-02)− | 4.73e-01(1.23e-02)−
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.36e-01(6.97e-03)− | 9.17e-01(1.17e-02)− | 2.07e-01(6.36e-02)− | 5.14e-01(2.10e-02)−
WFG8 | 5 | 6.63e-01(3.75e-03) | 6.73e-01(5.41e-03)+ | 6.29e-01(1.34e-02)− | 4.27e-01(1.57e-02)− | 3.72e-01(2.09e-02)−
WFG8 | 8 | 7.85e-01(7.46e-03) | 7.97e-01(3.79e-02)∼ | 6.91e-01(2.13e-02)− | 9.55e-02(3.20e-02)− | 4.11e-01(9.29e-03)−
WFG8 | 10 | 8.36e-01(1.30e-02) | 8.62e-01(4.12e-02)+ | 7.33e-01(2.04e-02)− | 7.73e-02(1.27e-02)− | 4.52e-01(1.90e-02)−
WFG9 | 5 | 6.57e-01(3.82e-03) | 7.35e-01(7.32e-03)+ | 6.23e-01(7.01e-03)− | 5.97e-01(1.99e-02)− | 4.46e-01(1.61e-02)−
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.65e-01(6.80e-02)+ | 6.55e-01(1.17e-02)− | 2.73e-01(6.06e-02)− | 4.55e-01(2.39e-02)−
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.47e-01(6.72e-02)− | 6.56e-01(1.30e-02)− | 2.24e-01(6.96e-02)− | 4.76e-01(1.23e-02)−
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
−/∼/+ (vs. AGPSO) | | | 19/2/6 | 25/1/1 | 27/0/0 | 27/0/0


It requires that all objectives be optimized simultaneously to reach the true PF, and the performance of AGPSO is the best on MaF2 for all objective numbers. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than any of the PSOs mentioned above in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best for all objective numbers.

In the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional spaces, and the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows a better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. In the sections mentioned above, it is experimentally shown that AGPSO performs better than the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO-based algorithms for dealing with MaOPs, and a comparison against PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compare AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row in Table 4, AGPSO obtains 22 best results out of the 27 test cases, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, and VaEA and θ-DEA are not the best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives.

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.27e-02(5.81e-05)− | 1.15e-02(2.80e-04)− | 1.06e-02(1.26e-04)− | 6.83e-03(5.14e-04)−
MaF1 | 8 | 4.23e-05(1.80e-06) | 3.64e-05(7.99e-07)− | 2.79e-05(2.66e-06)− | 1.30e-05(1.49e-05)− | 1.21e-05(1.88e-06)−
MaF1 | 10 | 6.16e-07(3.49e-08) | 4.75e-07(1.83e-08)− | 3.83e-07(2.83e-08)− | 8.82e-08(8.92e-08)− | 1.46e-07(2.86e-08)−
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.63e-01(8.71e-04)− | 2.38e-01(3.57e-03)− | 2.50e-01(1.55e-03)− | 2.12e-01(4.05e-03)−
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.36e-01(2.32e-03)− | 2.09e-01(3.04e-03)− | 2.02e-01(2.13e-03)− | 1.74e-01(4.46e-03)−
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.23e-01(2.81e-03)− | 2.06e-01(5.67e-03)− | 2.05e-01(2.15e-03)− | 1.88e-01(4.28e-03)−
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.71e-01(1.47e-02)− | 9.26e-01(5.29e-02)− | 9.38e-01(8.27e-03)− | 7.34e-01(4.50e-01)−
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.99e-01(7.50e-04)+ | 9.84e-01(1.48e-02)− | 9.62e-01(1.44e-02)− | 0.00e+00(0.00e+00)−
MaF3 | 10 | 9.85e-01(2.26e-03) | 1.00e+00(1.92e-04)+ | 9.95e-01(6.00e-03)∼ | 9.71e-01(9.43e-03)− | 0.00e+00(0.00e+00)−
MaF4 | 5 | 1.29e-01(1.22e-03) | 5.43e-02(3.25e-02)− | 1.01e-01(1.07e-02)− | 4.21e-02(4.98e-03)− | 7.96e-02(3.93e-03)−
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.29e-03(7.77e-04)− | 3.18e-03(6.84e-04)− | 1.24e-04(7.69e-05)− | 1.82e-03(3.39e-04)−
MaF4 | 10 | 5.66e-04(1.89e-05) | 2.10e-04(1.26e-04)− | 2.36e-04(6.20e-05)− | 1.55e-06(8.26e-07)− | 1.12e-04(2.32e-05)−
MaF5 | 5 | 8.00e-01(2.70e-03) | 8.11e-01(1.33e-03)+ | 7.93e-01(4.06e-03)− | 2.69e-01(1.30e-01)− | 6.68e-01(2.99e-02)−
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.35e-01(1.40e-03)+ | 9.29e-01(4.41e-03)− | 2.01e-01(6.84e-02)− | 3.59e-01(1.30e-01)−
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.64e-01(1.61e-03)− | 9.65e-01(1.90e-03)− | 2.40e-01(3.18e-02)− | 1.60e-01(1.00e-01)−
MaF6 | 5 | 1.30e-01(6.22e-05) | 9.37e-02(4.55e-03)− | 1.10e-01(2.78e-03)− | 1.27e-01(3.33e-05)− | 1.30e-01(1.40e-05)+
MaF6 | 8 | 1.06e-01(1.66e-05) | 9.83e-02(1.15e-02)− | 9.50e-02(2.86e-03)− | 1.04e-01(3.24e-05)− | 9.83e-02(1.26e-03)−
MaF6 | 10 | 9.22e-02(1.18e-02) | 9.12e-02(0.00e+00)∼ | 9.24e-02(1.53e-03)∼ | 1.00e-01(5.94e-06)∼ | 8.06e-02(1.42e-02)−
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.29e-01(1.70e-03)+ | 2.95e-01(6.58e-03)+ | 2.26e-01(2.42e-03)− | 2.37e-01(6.73e-03)−
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.89e-01(2.36e-03)+ | 2.26e-01(6.39e-03)+ | 2.88e-02(9.26e-02)− | 6.78e-02(9.38e-02)−
MaF7 | 10 | 1.83e-01(3.18e-02) | 2.64e-01(1.78e-03)+ | 2.05e-01(5.98e-03)+ | 6.37e-03(5.21e-04)− | 5.57e-02(1.25e-01)−
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
−/∼/+ (vs. AGPSO) | | | 14/0/7 | 16/1/4 | 20/0/1 | 20/0/1


AGPSO, however, excels on WFG8 with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages: AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most test problems, which proves the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE have better performance than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results of AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best among the MOEAs mentioned above. For MaF3, MOEA/DD is the best, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance for all objective numbers, and MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE while being outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives and worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to state that AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e-01(4.27e-02) | 3.64e-01(4.02e-02)− | 5.73e-01(4.76e-02)− | 4.94e-01(2.22e-02)− | 7.26e-01(2.27e-02)−
WFG1 | 8 | 8.75e-01(3.77e-02) | 5.78e-01(3.32e-02)− | 7.75e-01(3.51e-02)− | 3.48e-01(9.75e-02)− | 8.36e-01(1.00e-02)−
WFG1 | 10 | 9.45e-01(3.19e-03) | 8.10e-01(3.59e-02)− | 8.73e-01(1.59e-02)− | 3.40e-01(6.56e-02)− | 8.55e-01(8.50e-03)−
WFG2 | 5 | 9.76e-01(5.76e-03) | 9.62e-01(7.36e-03)− | 9.63e-01(6.36e-03)− | 9.70e-01(1.14e-03)− | 9.60e-01(3.83e-03)∼
WFG2 | 8 | 9.82e-01(4.12e-03) | 9.75e-01(6.91e-03)− | 9.01e-01(1.65e-01)− | 9.42e-01(2.37e-02)− | 9.76e-01(3.56e-03)−
WFG2 | 10 | 9.93e-01(1.99e-03) | 9.86e-01(4.59e-03)− | 9.34e-01(3.53e-02)− | 9.76e-01(1.82e-02)− | 9.80e-01(3.09e-03)−
WFG3 | 5 | 6.59e-01(5.16e-03) | 5.71e-01(1.73e-02)− | 6.34e-01(6.13e-03)− | 5.69e-01(1.07e-02)− | 5.77e-01(2.54e-02)−
WFG3 | 8 | 6.78e-01(6.20e-03) | 5.77e-01(2.83e-02)− | 5.69e-01(6.73e-02)− | 5.05e-01(1.23e-02)− | 5.55e-01(4.06e-02)−
WFG3 | 10 | 6.91e-01(7.88e-03) | 5.63e-01(3.05e-02)− | 6.20e-01(1.76e-02)− | 4.98e-01(1.55e-02)− | 5.42e-01(3.89e-02)−
WFG4 | 5 | 7.74e-01(5.34e-03) | 7.13e-01(1.06e-02)− | 7.61e-01(4.28e-03)− | 7.83e-01(4.35e-03)+ | 7.39e-01(5.10e-03)−
WFG4 | 8 | 9.00e-01(8.05e-03) | 8.05e-01(9.20e-03)− | 7.89e-01(2.07e-02)− | 6.81e-01(2.59e-02)− | 7.85e-01(1.07e-02)−
WFG4 | 10 | 9.37e-01(6.64e-03) | 8.43e-01(9.97e-03)− | 8.97e-01(1.14e-02)− | 8.53e-01(2.69e-02)− | 8.20e-01(8.19e-03)−
WFG5 | 5 | 7.41e-01(6.12e-03) | 6.97e-01(7.19e-03)− | 7.34e-01(3.91e-03)− | 7.41e-01(3.39e-03)∼ | 7.08e-01(6.05e-03)−
WFG5 | 8 | 8.61e-01(4.35e-03) | 7.91e-01(1.00e-02)− | 7.93e-01(8.22e-03)− | 6.66e-01(2.51e-02)− | 7.55e-01(1.42e-02)−
WFG5 | 10 | 8.88e-01(8.29e-03) | 8.25e-01(4.03e-03)− | 8.69e-01(3.20e-03)− | 8.03e-01(1.80e-02)− | 7.98e-01(9.63e-03)−
WFG6 | 5 | 7.50e-01(8.96e-03) | 7.00e-01(1.76e-02)− | 7.42e-01(1.06e-02)− | 7.40e-01(1.03e-02)− | 7.19e-01(1.01e-02)−
WFG6 | 8 | 8.72e-01(1.26e-02) | 8.18e-01(9.27e-03)− | 8.16e-01(1.16e-02)− | 7.23e-01(2.91e-02)− | 7.79e-01(1.13e-02)−
WFG6 | 10 | 9.09e-01(1.36e-02) | 8.49e-01(1.10e-02)− | 8.89e-01(1.08e-02)− | 8.80e-01(1.29e-02)− | 8.17e-01(1.80e-02)−
WFG7 | 5 | 7.90e-01(2.12e-03) | 7.44e-01(9.10e-03)− | 7.90e-01(2.04e-03)∼ | 7.85e-01(3.80e-03)− | 7.62e-01(4.15e-03)−
WFG7 | 8 | 9.20e-01(3.39e-03) | 8.71e-01(5.41e-03)− | 8.52e-01(1.16e-02)− | 7.44e-01(3.15e-02)− | 8.22e-01(1.28e-02)−
WFG7 | 10 | 9.55e-01(9.11e-03) | 9.08e-01(5.61e-03)− | 9.39e-01(2.22e-03)− | 9.22e-01(1.50e-02)− | 8.66e-01(6.63e-03)−
WFG8 | 5 | 6.63e-01(3.75e-03) | 5.98e-01(1.27e-02)− | 6.66e-01(5.58e-03)+ | 6.81e-01(3.20e-03)+ | 6.60e-01(5.95e-03)−
WFG8 | 8 | 7.85e-01(7.46e-03) | 6.25e-01(2.83e-02)− | 6.61e-01(1.85e-02)− | 6.72e-01(2.19e-02)− | 7.48e-01(1.19e-02)−
WFG8 | 10 | 8.36e-01(1.30e-02) | 6.73e-01(3.66e-02)− | 8.02e-01(1.11e-02)− | 8.12e-01(7.59e-03)− | 7.88e-01(9.94e-03)−
WFG9 | 5 | 6.57e-01(3.82e-03) | 6.43e-01(1.42e-02)− | 6.71e-01(4.34e-02)+ | 6.55e-01(4.42e-03)− | 6.96e-01(8.52e-03)+
WFG9 | 8 | 7.30e-01(9.96e-03) | 7.00e-01(1.12e-02)− | 7.12e-01(3.63e-02)− | 6.30e-01(3.35e-02)− | 7.38e-01(5.00e-02)+
WFG9 | 10 | 7.49e-01(9.03e-03) | 7.20e-01(2.35e-02)− | 7.69e-01(5.70e-02)+ | 6.74e-01(2.56e-02)− | 7.75e-01(6.87e-03)+
Best/all | | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
−/∼/+ (vs. AGPSO) | | | 27/0/0 | 23/1/3 | 24/1/2 | 23/1/3


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results in terms of HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to those results, we can conclude that AGPSO shows a superior performance over VaEA on most of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compare the performance of our environmental selection with that of VaEA on test problems with different PF shapes (MaF1-MaF7) and on problems that share the same PF shapes but have different characteristics, such as the difficulty of convergence and deceptive PF properties (WFG1-WFG9). Therefore, the angular-guided method embedded into PSO effectively improves the handling of MaOPs. The angular-guided method is adopted as a distribution indicator to distinguish the similarity of particles and is a transformation of the vector angle. The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as reported for MOEAC [52]; the angular-guided method improves this situation, performing better than the traditional vector angle on convex or linear PFs without behaving badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first in the update of its external archive and on convergence second. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator to the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually support the abovementioned discussion, to better show the performance, and to illustrate the distribution of solutions in high-dimensional objective spaces, some final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF, respectively. In Figure 2, compared with the plot of VaEA, the boundary obtained by AGPSO is not found completely, but the approximate PF computed by the proposed AGPSO is more closely attached to the real PF. This also supports our analysis of the environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e-02(1.43e-04) | 1.11e-02(2.73e-04)− | 5.66e-03(2.72e-05)− | 2.96e-03(8.34e-05)− | 1.29e-02(1.00e-04)−
MaF1 | 8 | 4.23e-05(1.80e-06) | 2.69e-05(2.04e-06)− | 2.93e-05(2.03e-06)− | 4.38e-06(4.77e-07)− | 3.87e-05(9.83e-07)−
MaF1 | 10 | 6.16e-07(3.49e-08) | 3.60e-07(2.21e-08)− | 3.39e-07(7.07e-08)− | 3.45e-08(2.21e-08)− | 5.30e-07(2.16e-08)−
MaF2 | 5 | 2.64e-01(1.18e-03) | 2.35e-01(4.80e-03)− | 2.33e-01(2.82e-03)− | 2.34e-01(1.48e-03)− | 2.62e-01(1.26e-03)−
MaF2 | 8 | 2.46e-01(2.64e-03) | 2.00e-01(7.59e-03)− | 1.71e-01(1.82e-02)− | 1.88e-01(1.39e-03)− | 2.36e-01(3.06e-03)−
MaF2 | 10 | 2.45e-01(2.08e-03) | 2.01e-01(1.25e-02)− | 1.94e-01(7.96e-03)− | 1.91e-01(3.21e-03)− | 2.30e-01(2.48e-03)−
MaF3 | 5 | 9.88e-01(2.87e-03) | 9.97e-01(1.13e-03)+ | 9.93e-01(2.82e-03)+ | 9.98e-01(9.54e-05)+ | 9.92e-01(2.12e-03)+
MaF3 | 8 | 9.95e-01(2.11e-03) | 9.98e-01(3.58e-04)+ | 9.92e-01(3.83e-03)− | 1.00e+00(9.42e-07)+ | 9.97e-01(1.66e-03)+
MaF3 | 10 | 9.85e-01(2.26e-03) | 9.99e-01(9.36e-04)+ | 9.70e-01(5.68e-03)− | 1.00e+00(3.96e-08)+ | 9.99e-01(4.94e-04)+
MaF4 | 5 | 1.29e-01(1.22e-03) | 1.17e-01(3.62e-03)− | 7.32e-02(8.89e-03)− | 0.00e+00(0.00e+00)− | 1.07e-01(5.37e-03)−
MaF4 | 8 | 6.02e-03(9.67e-05) | 2.44e-03(3.28e-04)− | 1.32e-03(8.49e-04)− | 0.00e+00(0.00e+00)− | 3.62e-04(7.80e-05)−
MaF4 | 10 | 5.66e-04(1.89e-05) | 1.48e-04(9.76e-06)− | 2.29e-04(3.57e-05)− | 0.00e+00(0.00e+00)− | 4.23e-06(1.65e-06)−
MaF5 | 5 | 8.00e-01(2.70e-03) | 7.92e-01(1.85e-03)− | 8.13e-01(4.51e-05)+ | 7.73e-01(3.13e-03)− | 7.84e-01(4.38e-03)−
MaF5 | 8 | 9.32e-01(1.01e-03) | 9.08e-01(3.80e-03)− | 9.26e-01(9.79e-05)− | 8.73e-01(6.71e-03)− | 7.36e-01(9.62e-02)−
MaF5 | 10 | 9.67e-01(1.32e-03) | 9.43e-01(6.87e-03)− | 9.70e-01(4.61e-05)+ | 9.28e-01(8.28e-04)− | 7.12e-01(1.18e-01)−
MaF6 | 5 | 1.30e-01(6.22e-05) | 1.30e-01(7.51e-05)+ | 1.17e-01(3.12e-03)− | 8.19e-02(2.96e-02)− | 1.29e-01(1.84e-04)−
MaF6 | 8 | 1.06e-01(1.66e-05) | 1.06e-01(2.20e-05)+ | 9.18e-02(1.21e-03)− | 2.48e-02(1.29e-02)− | 8.81e-02(2.26e-04)−
MaF6 | 10 | 9.22e-02(1.18e-02) | 7.05e-02(3.95e-02)− | 4.57e-02(4.66e-02)− | 4.83e-02(5.24e-02)− | 2.71e-02(1.00e-01)−
MaF7 | 5 | 2.53e-01(3.05e-02) | 3.03e-01(3.50e-03)+ | 2.79e-01(7.42e-03)+ | 1.19e-01(2.76e-02)− | 3.31e-01(1.76e-03)+
MaF7 | 8 | 1.94e-01(2.59e-02) | 2.24e-01(6.17e-03)+ | 2.20e-01(5.03e-02)+ | 7.06e-04(7.80e-04)− | 2.45e-01(1.73e-02)+
MaF7 | 10 | 1.83e-01(3.18e-02) | 1.91e-01(1.19e-02)∼ | 2.27e-01(2.01e-02)+ | 9.05e-05(4.81e-05)− | 6.56e-02(1.05e-02)−
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
−/∼/+ (vs. AGPSO) | | | 13/1/7 | 15/0/6 | 18/0/3 | 16/0/5


These comparisons cover the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we examine the velocity update formula more deeply. To demonstrate the superiority of the density-based velocity update, its effect is compared with that of the traditional velocity update formula of [14] (denoted as AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method,

v^d(t) = w^d v^d(t) + c_1 r_1^d (x_{pbest}^d - x^d(t)) + c_2 r_2^d (x_{gbest}^d - x^d(t)) + c_3 r_3^d (x_{lbest}^d - x^d(t)),   (12)

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
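For clarity, the AGPSO-II update of equation (12) can be sketched per particle as follows; the array shapes and the per-dimension sampling of w are our assumptions, drawn from the parameter ranges in Table 1.

```python
import numpy as np

def velocity_update_eq12(v, x, pbest, gbest, lbest, rng=None):
    # Equation (12): the traditional PSO velocity update extended with a
    # local-best term; all arguments are length-D numpy arrays.
    rng = np.random.default_rng() if rng is None else rng
    D = x.shape[0]
    w = rng.uniform(0.1, 0.5, D)              # inertia weight in [0.1, 0.5]
    c1, c2, c3 = rng.uniform(1.5, 2.5, 3)     # acceleration coefficients in [1.5, 2.5]
    r1, r2, r3 = rng.random(D), rng.random(D), rng.random(D)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))
```

AGPSO-II differs from AGPSO only in using this formula instead of the proposed density-based update, which is exactly what the ablation reported in Table 6 isolates.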

Figure 2: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective MaF1 problem (objective value versus objective number; panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/DD, (j) true PF).


Table 6 shows the comparisons of AGPSO with two modified versions of AGPSO that use different velocity update formulas on WFG1-WFG9 and MaF1-MaF7, using HV. The mean HV values and the standard deviations (in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of the 48 test cases, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases; the novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of that particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis on AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2.

Figure 3: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective WFG3 problem (objective value versus objective number; panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/DD, (j) true PF).


Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives (HV metric value versus number of generations).

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e-01(3.47e-02) | 7.48e-01(3.41e-02)− | 7.51e-01(3.88e-02)−
WFG1 | 8 | 8.85e-01(2.54e-02) | 8.14e-01(3.94e-02)− | 7.96e-01(3.97e-02)−
WFG1 | 10 | 9.47e-01(4.77e-03) | 9.49e-01(2.59e-03)+ | 9.45e-01(3.49e-03)∼
WFG2 | 5 | 9.76e-01(5.13e-03) | 9.76e-01(2.81e-03)∼ | 9.78e-01(5.19e-03)+
WFG2 | 8 | 9.87e-01(2.65e-03) | 9.89e-01(2.21e-03)+ | 9.88e-01(1.84e-03)∼
WFG2 | 10 | 9.92e-01(3.30e-03) | 9.93e-01(1.38e-03)+ | 9.92e-01(1.97e-03)∼
WFG3 | 5 | 6.59e-01(5.02e-03) | 6.66e-01(4.17e-03)+ | 6.57e-01(5.40e-03)∼
WFG3 | 8 | 6.79e-01(5.20e-03) | 6.86e-01(3.41e-03)+ | 6.79e-01(4.02e-03)∼
WFG3 | 10 | 6.92e-01(7.15e-03) | 7.02e-01(3.08e-03)+ | 6.93e-01(4.13e-03)∼
WFG4 | 5 | 7.76e-01(5.69e-03) | 7.57e-01(7.87e-03)− | 7.68e-01(5.07e-03)−
WFG4 | 8 | 9.00e-01(1.01e-02) | 8.85e-01(8.62e-03)− | 8.94e-01(8.43e-03)−
WFG4 | 10 | 9.39e-01(7.42e-03) | 9.31e-01(8.35e-03)− | 9.30e-01(8.84e-03)−
WFG5 | 5 | 7.42e-01(4.66e-03) | 7.34e-01(7.45e-03)− | 7.39e-01(7.73e-03)−
WFG5 | 8 | 8.59e-01(4.74e-03) | 8.52e-01(6.05e-03)− | 8.58e-01(5.45e-03)∼
WFG5 | 10 | 8.91e-01(5.28e-03) | 8.73e-01(1.22e-02)− | 8.78e-01(2.21e-02)−
WFG6 | 5 | 7.53e-01(7.52e-03) | 6.92e-01(2.21e-03)− | 7.52e-01(3.85e-03)∼
WFG6 | 8 | 8.69e-01(1.24e-02) | 8.10e-01(3.63e-03)− | 8.83e-01(5.64e-03)+
WFG6 | 10 | 9.12e-01(1.22e-02) | 8.40e-01(3.57e-03)− | 9.23e-01(7.99e-03)+
WFG7 | 5 | 7.90e-01(2.13e-03) | 7.81e-01(3.32e-03)− | 7.85e-01(2.78e-03)−
WFG7 | 8 | 9.21e-01(2.89e-03) | 9.15e-01(3.11e-03)− | 9.13e-01(3.18e-03)−
WFG7 | 10 | 9.55e-01(9.72e-03) | 9.55e-01(2.26e-03)∼ | 9.44e-01(1.59e-02)−
WFG8 | 5 | 6.61e-01(5.62e-03) | 6.44e-01(3.76e-03)− | 6.54e-01(5.95e-03)−
WFG8 | 8 | 7.87e-01(7.11e-03) | 7.80e-01(4.75e-03)− | 7.81e-01(5.89e-03)−
WFG8 | 10 | 8.41e-01(5.48e-03) | 8.45e-01(3.13e-03)+ | 8.31e-01(1.76e-02)−


Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. Lines 3-7 require a time complexity of O(m^2N^2) to classify the population, and lines 8-11 need O(N). The association loop in lines 12-15 also requires O(m^2N^2), and lines 17-20 need O(N^2). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m^2N^2), O(N), O(N^2)} = O(m^2N^2), which is competitive with the time complexities of most optimizers, e.g., [48].
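As a rough worked figure (ours, not from the paper), the dominant O(m^2N^2) term for the largest experimental setting, m = 10 objectives and N = 275 particles, amounts to

m^2 N^2 = 10^2 \times 275^2 = 7{,}562{,}500 \approx 7.6 \times 10^6

pairwise angular-distance evaluations per generation of the archive update.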

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an effective angular-guided archive update strategy. The novel density-based velocity update strategy uses adjacent information (the local best) to explore the region around each particle and uses globally optimal information (the global best) to search for better-performing locations globally. This strategy improves the quality of the produced particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and the archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, although it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it sacrifices part of the performance on many-objective optimization problems with convex PFs, the improvement of which will also be studied in the future.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or were provided by their original authors. Most codes of the compared algorithms, such as VaEA [48], θ-DEA [30], and MOEA/DD [49], can also be found in our source code. The test problems are WFG [50] and MaF [31]; WFG [50] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [31] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG9 | 5 | 6.58e-01(4.54e-03) | 6.53e-01(3.73e-03)− | 6.53e-01(2.99e-03)−
WFG9 | 8 | 7.29e-01(9.55e-03) | 7.20e-01(1.21e-02)− | 7.23e-01(8.94e-03)−
WFG9 | 10 | 7.51e-01(1.21e-02) | 7.35e-01(1.18e-02)− | 7.36e-01(1.04e-02)−
MaF1 | 5 | 1.30e-02(1.29e-04) | 1.30e-02(1.10e-04)∼ | 1.30e-02(1.04e-04)∼
MaF1 | 8 | 4.23e-05(1.45e-06) | 4.12e-05(1.35e-06)− | 4.14e-05(1.78e-06)−
MaF1 | 10 | 6.12e-07(4.40e-08) | 6.07e-07(2.30e-08)∼ | 6.10e-07(2.83e-08)∼
MaF2 | 5 | 2.64e-01(9.97e-04) | 2.62e-01(1.08e-03)− | 2.61e-01(1.10e-03)−
MaF2 | 8 | 2.46e-01(1.64e-03) | 2.40e-01(1.89e-03)− | 2.41e-01(1.66e-03)−
MaF2 | 10 | 2.45e-01(1.84e-03) | 2.39e-01(2.75e-03)− | 2.41e-01(2.19e-03)−
MaF3 | 5 | 9.90e-01(4.96e-03) | 9.76e-01(2.92e-02)− | 4.96e-01(3.82e-01)−
MaF3 | 8 | 9.68e-01(1.95e-03) | 9.36e-01(9.22e-02)− | 3.73e-01(3.08e-01)−
MaF3 | 10 | 9.92e-01(2.55e-03) | 9.72e-01(1.86e-02)− | 4.42e-01(3.77e-01)−
MaF4 | 5 | 1.29e-01(1.33e-03) | 1.08e-01(1.44e-02)− | 1.18e-01(2.08e-03)−
MaF4 | 8 | 6.03e-03(1.88e-04) | 3.33e-03(1.02e-03)− | 4.74e-03(2.12e-04)−
MaF4 | 10 | 5.62e-04(1.47e-05) | 2.58e-04(9.13e-05)− | 3.65e-04(2.55e-05)−
MaF5 | 5 | 8.01e-01(1.81e-03) | 8.00e-01(3.40e-03)∼ | 8.00e-01(2.06e-03)∼
MaF5 | 8 | 9.32e-01(1.97e-03) | 9.31e-01(1.39e-03)− | 9.31e-01(1.58e-03)−
MaF5 | 10 | 9.67e-01(1.09e-03) | 9.66e-01(7.51e-04)− | 9.66e-01(1.18e-03)−
MaF6 | 5 | 1.30e-01(5.82e-05) | 1.30e-01(3.35e-05)− | 1.30e-01(5.60e-05)−
MaF6 | 8 | 1.03e-01(2.78e-05) | 4.51e-02(4.39e-02)− | 6.64e-02(5.81e-02)−
MaF6 | 10 | 9.49e-02(1.27e-02) | 4.55e-02(7.07e-02)− | 4.40e-02(4.92e-02)−
MaF7 | 5 | 2.51e-01(4.40e-02) | 3.09e-01(2.58e-03)+ | 2.49e-01(3.73e-02)∼
MaF7 | 8 | 1.91e-01(4.11e-02) | 2.54e-01(4.64e-03)+ | 2.22e-01(1.68e-02)+
MaF7 | 10 | 1.78e-01(3.53e-02) | 2.32e-01(4.46e-03)+ | 2.01e-01(9.95e-03)+
Best/all | | 34/48 | 10/48 | 4/48
−/∼/+ (vs. AGPSO) | | | 33/5/10 | 31/12/5


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1-18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225-255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81-86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019, In press.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49-68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297-309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439-452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505-519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66-73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69-76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47-77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732-744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836-1845, Cancún, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180-1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219-2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32-46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710-14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148-3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316-319, Kitakyushu, Japan, August 2012.
[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213-215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16-37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67-81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169-190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29-43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24-35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794-2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446-1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263-282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331-345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577-601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361-375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211-225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467-486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346-352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173-187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348-365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131-152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694-716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280-295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391-405, 2018, In press.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105-145, Springer London, London, UK, 2005.


The iteration counter t is increased by 2 in line 19 because one PSO-based search and one evolutionary search are carried out on each particle in every round of the main loop. The abovementioned operations are repeated until the maximum number of iterations (tmax) is reached. At last, the particles in A are output as the final approximation of the PF.

4. The Experimental Studies

4.1. Related Experimental Information

4.1.1. Involved MOEAs. Four competitive MOEAs are used to evaluate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) VaEA [48]: this algorithm uses the maximum-vector-angle-first principle in its environmental selection to guarantee the wideness and uniformity of the solution set.

(2) θ-DEA [30]: this algorithm uses a new dominance relation to enhance convergence in high-dimensional optimization.

(3) MOEA/DD [49]: this algorithm exploits the merits of both dominance-based and decomposition-based approaches, which balances the convergence and the diversity of the population.

(4) SPEA2+SDE [47]: this algorithm develops a general modification of density estimation in order to make Pareto-based algorithms suitable for many-objective optimization.

Four competitive MOPSOs are also used to validate the performance of our proposed AGPSO. They are briefly introduced as follows:

(1) NMPSO [24]: this algorithm uses a balanceable fitness estimation to offer sufficient selection pressure in the search process, which considers both convergence and diversity with weight vectors.

(2) MaPSO [46]: this algorithm is an angle-based MOPSO; MaPSO chooses the leader positional information of each particle from its historical particles by using scalar projections to guide the particles.

(3) dMOPSO [17]: this algorithm integrates the decomposition method to translate an MOP into a set of single-objective optimization problems, which are solved simultaneously using a collaborative search process by applying PSO directly.

(4) SMPSO [14]: this algorithm proposes a strategy to limit the velocity of the particles.

4.1.2. Benchmark Problems. In our experiments, sixteen various unconstrained MOPs are considered to assess the performance of the proposed AGPSO algorithm. Their features are briefly introduced below.

In this study, the WFG [50] and MaF [31] test problems were used, including WFG1-WFG9 and MaF1-MaF7. For each problem, the number of objectives was varied from 5 to 10, i.e., m ∈ {5, 8, 10}. For MaF1-MaF7, the number of decision variables was set as n = m + k − 1, where n and m are, respectively, the number of decision variables and the number of objectives. As suggested in [31], the value of k was set to 10. Regarding WFG1-WFG9, the decision variables are composed of k position-related parameters and l distance-related parameters. As recommended in [51], k is set to 2 × (m − 1) and l is set to 20.
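As a quick illustration of these settings, the following snippet computes the decision-space dimensions used for each test instance; the function names are ours, not part of the benchmark suites.

```python
# Illustrative helpers (our naming) reproducing the decision-variable
# settings described above for the WFG and MaF instances.
def wfg_dimensions(m, l=20):
    """WFG1-WFG9: k position-related and l distance-related variables."""
    k = 2 * (m - 1)              # as recommended in [51]
    return {"k": k, "l": l, "n": k + l}

def maf_dimensions(m, k=10):
    """MaF1-MaF7: n = m + k - 1 decision variables with k = 10, as in [31]."""
    return {"k": k, "n": m + k - 1}

if __name__ == "__main__":
    for m in (5, 8, 10):
        print(m, wfg_dimensions(m), maf_dimensions(m))
```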

(1) Let t = 0, A = ∅, initialize particles S = {x1, x2, ..., xN}, and let T be the value of the neighborhood size
(2) for i = 1 to N
(3)   randomly initialize position xi and set vi = 0 for xi
(4)   evaluate the objective values of xi
(5) end for
(6) randomly initialize A with N new particles
(7) A = Angular-guided Archive Update (S, A, N)
(8) sort A in ascending order based on equation (5)
(9) t = t + 1
(10) while t < tmax
(11)   calculate the SDE of each particle in S
(12)   sort the particles in S to get the medSDE
(13)   Snew = Density-based Velocity Update (S, A, medSDE)
(14)   A = Angular-guided Archive Update (Snew, A, N)
(15)   apply the evolutionary search strategy on A to get a new swarm Snew
(16)   evaluate the objectives of the new particles in Snew
(17)   A = Angular-guided Archive Update (Snew, A, N)
(18)   sort A in ascending order based on equation (5)
(19)   t = t + 2
(20) end while
(21) output A

ALGORITHM 3: The complete algorithm of AGPSO.
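To make the control flow of Algorithm 3 easier to follow, the sketch below paraphrases its main loop in Python. The strategy callbacks (density-based velocity update, angular-guided archive update, evolutionary search) and the convergence sort key of equation (5) are placeholders for the operators defined earlier in the paper; this is not the authors' Java implementation.

```python
# Python paraphrase of the main loop of Algorithm 3 (a sketch under the
# assumptions stated above; strategy functions are injected as callbacks).
import numpy as np

def agpso(init_swarm, evaluate, compute_sde,
          density_based_velocity_update, angular_guided_archive_update,
          evolutionary_search, convergence_sort_key, N, t_max):
    S = init_swarm(N)                                    # lines 1-5: positions, v = 0
    for p in S:
        evaluate(p)
    A = angular_guided_archive_update(S, init_swarm(N), N)   # lines 6-7
    A.sort(key=convergence_sort_key)                     # line 8: equation (5)
    t = 1                                                # line 9
    while t < t_max:                                     # line 10
        med_sde = float(np.median([compute_sde(p, S) for p in S]))   # lines 11-12
        S = density_based_velocity_update(S, A, med_sde)              # line 13
        A = angular_guided_archive_update(S, A, N)                    # line 14
        S_new = evolutionary_search(A)                                # line 15
        for p in S_new:                                               # line 16
            evaluate(p)
        A = angular_guided_archive_update(S_new, A, N)                # line 17
        A.sort(key=convergence_sort_key)                              # line 18
        t += 2   # line 19: one PSO search + one evolutionary search per loop
    return A                                             # line 21
```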


4.1.3. Performance Metrics. The goals of MaOPs include the minimization of the distance of the solutions to the true PF (i.e., convergence) and the maximization of the uniform and spread distribution of solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric calculates the size of the objective space between the prespecified reference point z^r = (z^r_1, ..., z^r_m)^T, which is dominated by all points of the true PF, and the obtained solution set S. The HV metric is calculated as follows:

HV(S) = Vol( ⋃_{x∈S} [f_1(x), z^r_1] × ··· × [f_m(x), z^r_m] ),    (11)

where Vol(·) denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector 1.1 × (f_1^max(x), f_2^max(x), ..., f_m^max(x)), with f_k^max (k = 1, 2, ..., m) being the maximum value of the kth objective on the true PF. Points that cannot dominate the reference point are discarded (i.e., solutions that cannot dominate the reference point are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted for the reference point. After the normalization operation, the reference point is set to (1.0, 1.0, ..., 1.0). A larger value of HV indicates a better approximation of the true PF.

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because weight vectors are employed in dMOPSO and MOEA/DD, the population sizes of these two algorithms are set the same as the number of weight vectors. The number of weight vectors is set to 210, 240, and 275, following [51], for the test problems with 5, 8, and 10 objectives, respectively. To ensure fairness, the other algorithms adopt a population/swarm size equal to the number of weight vectors.

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. pc and pm are the crossover probability and the mutation probability; ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, AGPSO), the control parameters c1, c2, and c3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability to select parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions that are replaced by each child solution. Regarding VaEA, σ is a condition for determining whether a solution is searching in a similar direction, which is set to π/(2(N + 1)).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (included in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached. In this paper, tmax is set to 600, 700, and 1000 for 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank-sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "−", and "∼" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO according to this statistical test, respectively.
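A minimal sketch of how the "+/−/∼" symbols can be produced from the 30 collected HV values per algorithm and problem is shown below; scipy's ranksums is used as the Wilcoxon rank-sum implementation, and deciding the direction by mean HV is an illustrative choice, not necessarily the authors' exact procedure.

```python
# Sketch of the significance comparison described above (assumed helper,
# not the authors' code): 30 HV values per algorithm, alpha = 0.05.
import numpy as np
from scipy.stats import ranksums

def compare_to_agpso(hv_agpso, hv_other, alpha=0.05):
    """Return '+', '-', or '~' for the competitor relative to AGPSO."""
    _, p_value = ranksums(hv_other, hv_agpso)
    if p_value >= alpha:                      # no statistically significant difference
        return '~'
    return '+' if np.mean(hv_other) > np.mean(hv_agpso) else '-'
```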

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz (processor) and 20 GB (RAM).

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared with four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "−", and "∼" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO according to the statistical test, respectively. The best mean result for each problem is highlighted in boldface.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV performance comparisons of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance in solving the WFG problems against these PSOs, as it is the best on 21 out of 27 cases in the experimental data. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are the best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second last row of Table 2. The last row of Table 2 gives the comparisons of AGPSO with each PSO on WFG, where "−/∼/+" counts the test problems on which the corresponding

Table 1: General experimental parameter settings.

Algorithm    Parameter settings
SMPSO        w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO       w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO        w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO        K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA         pc = 1.0, pm = 1/n, ηm = 20, σ = π/(2(N + 1))
θ-DEA        pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD      pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE    pc = 1.0, pm = 1/n, ηm = 20, ηc = 30


algorithm performs worse than, similarly to, and better than AGPSO, respectively.

It is found that AGPSO shows an absolute advantage in the comparisons with NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO; AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where distance-related parameters depend on position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs the best with 10 objectives and worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs on the WFG problems. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO with the four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second last row in Table 3, there are 12 best results in 21 test problems obtained by AGPSO, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not the best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], which leads to the shape of a set of reference points not being able to fit the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires that all objectives are optimized

Table 2: Comparison of results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO               NMPSO                MaPSO                dMOPSO               SMPSO
WFG1     5    7.87e-01(4.27e-02)  5.97e-01(2.89e-02)-  3.34e-01(2.58e-02)-  2.98e-01(3.44e-03)-  3.00e-01(1.01e-03)-
         8    8.75e-01(3.77e-02)  7.00e-01(1.38e-01)-  2.74e-01(2.45e-02)-  2.35e-01(6.26e-03)-  2.50e-01(1.54e-03)-
         10   9.45e-01(3.19e-03)  8.07e-01(1.58e-01)-  2.53e-01(1.98e-02)-  2.13e-01(9.18e-03)-  2.30e-01(1.29e-03)-
WFG2     5    9.76e-01(5.76e-03)  9.69e-01(5.44e-03)-  9.70e-01(4.70e-03)-  8.94e-01(1.28e-02)-  8.85e-01(1.31e-02)-
         8    9.82e-01(4.12e-03)  9.83e-01(3.56e-03)+  9.84e-01(2.86e-03)+  8.92e-01(1.48e-02)-  8.48e-01(2.52e-02)-
         10   9.93e-01(1.99e-03)  9.91e-01(4.85e-03)~  9.90e-01(1.48e-03)-  9.00e-01(1.76e-02)-  8.54e-01(2.55e-02)-
WFG3     5    6.59e-01(5.16e-03)  6.13e-01(1.94e-02)-  6.12e-01(1.64e-02)-  5.89e-01(1.05e-02)-  5.75e-01(7.32e-03)-
         8    6.78e-01(6.20e-03)  5.77e-01(1.52e-02)-  5.96e-01(1.88e-02)-  4.94e-01(4.58e-02)-  5.84e-01(1.51e-02)-
         10   6.91e-01(7.88e-03)  5.83e-01(2.73e-02)-  5.97e-01(2.58e-02)-  3.56e-01(9.58e-02)-  5.92e-01(1.52e-02)-
WFG4     5    7.74e-01(5.34e-03)  7.64e-01(6.09e-03)-  7.22e-01(2.96e-02)-  6.51e-01(1.08e-02)-  5.09e-01(1.73e-02)-
         8    9.00e-01(8.05e-03)  8.62e-01(1.21e-02)-  7.98e-01(1.76e-02)-  3.88e-01(7.86e-02)-  5.26e-01(2.13e-02)-
         10   9.37e-01(6.64e-03)  8.82e-01(8.54e-03)-  8.27e-01(1.32e-02)-  3.15e-01(1.34e-01)-  5.52e-01(2.19e-02)-
WFG5     5    7.41e-01(6.12e-03)  7.35e-01(3.73e-03)-  6.63e-01(1.18e-02)-  5.85e-01(1.53e-02)-  4.30e-01(1.18e-02)-
         8    8.61e-01(4.35e-03)  8.32e-01(6.58e-03)-  6.91e-01(1.12e-02)-  1.81e-01(8.43e-02)-  4.52e-01(1.08e-02)-
         10   8.88e-01(8.29e-03)  8.65e-01(7.92e-03)-  6.99e-01(1.76e-02)-  2.43e-02(1.54e-02)-  4.65e-01(1.24e-02)-
WFG6     5    7.50e-01(8.96e-03)  7.24e-01(3.96e-03)-  7.09e-01(4.50e-03)-  6.74e-01(7.93e-03)-  5.43e-01(1.66e-02)-
         8    8.72e-01(1.26e-02)  8.08e-01(5.70e-03)-  8.03e-01(2.66e-03)-  4.89e-01(4.87e-02)-  6.49e-01(8.38e-03)-
         10   9.09e-01(1.36e-02)  8.34e-01(2.03e-03)-  8.32e-01(1.80e-03)-  4.66e-01(2.96e-02)-  7.03e-01(1.37e-02)-
WFG7     5    7.90e-01(2.12e-03)  7.96e-01(2.98e-03)+  7.89e-01(4.61e-03)~  5.39e-01(2.72e-02)-  4.44e-01(2.06e-02)-
         8    9.20e-01(3.39e-03)  9.04e-01(3.71e-03)-  8.73e-01(1.40e-02)-  2.25e-01(3.93e-02)-  4.73e-01(1.23e-02)-
         10   9.55e-01(9.11e-03)  9.36e-01(6.97e-03)-  9.17e-01(1.17e-02)-  2.07e-01(6.36e-02)-  5.14e-01(2.10e-02)-
WFG8     5    6.63e-01(3.75e-03)  6.73e-01(5.41e-03)+  6.29e-01(1.34e-02)-  4.27e-01(1.57e-02)-  3.72e-01(2.09e-02)-
         8    7.85e-01(7.46e-03)  7.97e-01(3.79e-02)~  6.91e-01(2.13e-02)-  9.55e-02(3.20e-02)-  4.11e-01(9.29e-03)-
         10   8.36e-01(1.30e-02)  8.62e-01(4.12e-02)+  7.33e-01(2.04e-02)-  7.73e-02(1.27e-02)-  4.52e-01(1.90e-02)-
WFG9     5    6.57e-01(3.82e-03)  7.35e-01(7.32e-03)+  6.23e-01(7.01e-03)-  5.97e-01(1.99e-02)-  4.46e-01(1.61e-02)-
         8    7.30e-01(9.96e-03)  7.65e-01(6.80e-02)+  6.55e-01(1.17e-02)-  2.73e-01(6.06e-02)-  4.55e-01(2.39e-02)-
         10   7.49e-01(9.03e-03)  7.47e-01(6.72e-02)-  6.56e-01(1.30e-02)-  2.24e-01(6.96e-02)-  4.76e-01(1.23e-02)-
Best/all      20/27               6/27                 1/27                 0/27                 0/27
Worse/similar/better              19-/2~/6+            25-/1~/1+            27-/0~/0+            27-/0~/0+


simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 with all objective numbers. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than any of the PSOs mentioned in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best with all objective numbers.

As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the high-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional spaces, and the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows a better performance than NMPSO on the MaF test problems.

4.3.2. Comparison Results with Four Current MOEAs. In the sections mentioned above, it is experimentally proved that AGPSO has better performance compared with the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO algorithms for dealing with MaOPs, and the comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second last row in Table 4, there are 22 best results in 27 test problems obtained by AGPSO, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, and VaEA and θ-DEA are not the best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage with 5 objectives compared with AGPSO, but AGPSO shows

Table 3: Comparison of results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO               NMPSO                MaPSO                dMOPSO               SMPSO
MaF1     5    1.31e-02(1.43e-04)  1.27e-02(5.81e-05)-  1.15e-02(2.80e-04)-  1.06e-02(1.26e-04)-  6.83e-03(5.14e-04)-
         8    4.23e-05(1.80e-06)  3.64e-05(7.99e-07)-  2.79e-05(2.66e-06)-  1.30e-05(1.49e-05)-  1.21e-05(1.88e-06)-
         10   6.16e-07(3.49e-08)  4.75e-07(1.83e-08)-  3.83e-07(2.83e-08)-  8.82e-08(8.92e-08)-  1.46e-07(2.86e-08)-
MaF2     5    2.64e-01(1.18e-03)  2.63e-01(8.71e-04)-  2.38e-01(3.57e-03)-  2.50e-01(1.55e-03)-  2.12e-01(4.05e-03)-
         8    2.46e-01(2.64e-03)  2.36e-01(2.32e-03)-  2.09e-01(3.04e-03)-  2.02e-01(2.13e-03)-  1.74e-01(4.46e-03)-
         10   2.45e-01(2.08e-03)  2.23e-01(2.81e-03)-  2.06e-01(5.67e-03)-  2.05e-01(2.15e-03)-  1.88e-01(4.28e-03)-
MaF3     5    9.88e-01(2.87e-03)  9.71e-01(1.47e-02)-  9.26e-01(5.29e-02)-  9.38e-01(8.27e-03)-  7.34e-01(4.50e-01)-
         8    9.95e-01(2.11e-03)  9.99e-01(7.50e-04)+  9.84e-01(1.48e-02)-  9.62e-01(1.44e-02)-  0.00e+00(0.00e+00)-
         10   9.85e-01(2.26e-03)  1.00e+00(1.92e-04)+  9.95e-01(6.00e-03)~  9.71e-01(9.43e-03)-  0.00e+00(0.00e+00)-
MaF4     5    1.29e-01(1.22e-03)  5.43e-02(3.25e-02)-  1.01e-01(1.07e-02)-  4.21e-02(4.98e-03)-  7.96e-02(3.93e-03)-
         8    6.02e-03(9.67e-05)  2.29e-03(7.77e-04)-  3.18e-03(6.84e-04)-  1.24e-04(7.69e-05)-  1.82e-03(3.39e-04)-
         10   5.66e-04(1.89e-05)  2.10e-04(1.26e-04)-  2.36e-04(6.20e-05)-  1.55e-06(8.26e-07)-  1.12e-04(2.32e-05)-
MaF5     5    8.00e-01(2.70e-03)  8.11e-01(1.33e-03)+  7.93e-01(4.06e-03)-  2.69e-01(1.30e-01)-  6.68e-01(2.99e-02)-
         8    9.32e-01(1.01e-03)  9.35e-01(1.40e-03)+  9.29e-01(4.41e-03)-  2.01e-01(6.84e-02)-  3.59e-01(1.30e-01)-
         10   9.67e-01(1.32e-03)  9.64e-01(1.61e-03)-  9.65e-01(1.90e-03)-  2.40e-01(3.18e-02)-  1.60e-01(1.00e-01)-
MaF6     5    1.30e-01(6.22e-05)  9.37e-02(4.55e-03)-  1.10e-01(2.78e-03)-  1.27e-01(3.33e-05)-  1.30e-01(1.40e-05)+
         8    1.06e-01(1.66e-05)  9.83e-02(1.15e-02)-  9.50e-02(2.86e-03)-  1.04e-01(3.24e-05)-  9.83e-02(1.26e-03)-
         10   9.22e-02(1.18e-02)  9.12e-02(0.00e+00)~  9.24e-02(1.53e-03)~  1.00e-01(5.94e-06)~  8.06e-02(1.42e-02)-
MaF7     5    2.53e-01(3.05e-02)  3.29e-01(1.70e-03)+  2.95e-01(6.58e-03)+  2.26e-01(2.42e-03)-  2.37e-01(6.73e-03)-
         8    1.94e-01(2.59e-02)  2.89e-01(2.36e-03)+  2.26e-01(6.39e-03)+  2.88e-02(9.26e-02)-  6.78e-02(9.38e-02)-
         10   1.83e-01(3.18e-02)  2.64e-01(1.78e-03)+  2.05e-01(5.98e-03)+  6.37e-03(5.21e-04)-  5.57e-02(1.25e-01)-
Best/all      12/21               7/21                 1/21                 0/21                 1/21
Worse/similar/better              14-/0~/7+            16-/1~/4+            20-/0~/1+            20-/0~/1+


excellence with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages; AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the rest of the comparisons, on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which proves the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE only have better performance than AGPSO in 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO is found to present a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1-MaF2, AGPSO is the best at tackling these test problems among the MOEAs mentioned above. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4, with a concave PF, AGPSO shows the best performance with all objective numbers, and MOEA/DD performs poorly on the MaF4 test problem. For MaF5, AGPSO only obtained the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE, while it is outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best at tackling it with 10 objectives, and worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to conclude that AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search of the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison of results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO               VaEA                 θ-DEA                MOEA/DD              SPEA2+SDE
WFG1     5    7.87e-01(4.27e-02)  3.64e-01(4.02e-02)-  5.73e-01(4.76e-02)-  4.94e-01(2.22e-02)-  7.26e-01(2.27e-02)-
         8    8.75e-01(3.77e-02)  5.78e-01(3.32e-02)-  7.75e-01(3.51e-02)-  3.48e-01(9.75e-02)-  8.36e-01(1.00e-02)-
         10   9.45e-01(3.19e-03)  8.10e-01(3.59e-02)-  8.73e-01(1.59e-02)-  3.40e-01(6.56e-02)-  8.55e-01(8.50e-03)-
WFG2     5    9.76e-01(5.76e-03)  9.62e-01(7.36e-03)-  9.63e-01(6.36e-03)-  9.70e-01(1.14e-03)-  9.60e-01(3.83e-03)~
         8    9.82e-01(4.12e-03)  9.75e-01(6.91e-03)-  9.01e-01(1.65e-01)-  9.42e-01(2.37e-02)-  9.76e-01(3.56e-03)-
         10   9.93e-01(1.99e-03)  9.86e-01(4.59e-03)-  9.34e-01(3.53e-02)-  9.76e-01(1.82e-02)-  9.80e-01(3.09e-03)-
WFG3     5    6.59e-01(5.16e-03)  5.71e-01(1.73e-02)-  6.34e-01(6.13e-03)-  5.69e-01(1.07e-02)-  5.77e-01(2.54e-02)-
         8    6.78e-01(6.20e-03)  5.77e-01(2.83e-02)-  5.69e-01(6.73e-02)-  5.05e-01(1.23e-02)-  5.55e-01(4.06e-02)-
         10   6.91e-01(7.88e-03)  5.63e-01(3.05e-02)-  6.20e-01(1.76e-02)-  4.98e-01(1.55e-02)-  5.42e-01(3.89e-02)-
WFG4     5    7.74e-01(5.34e-03)  7.13e-01(1.06e-02)-  7.61e-01(4.28e-03)-  7.83e-01(4.35e-03)+  7.39e-01(5.10e-03)-
         8    9.00e-01(8.05e-03)  8.05e-01(9.20e-03)-  7.89e-01(2.07e-02)-  6.81e-01(2.59e-02)-  7.85e-01(1.07e-02)-
         10   9.37e-01(6.64e-03)  8.43e-01(9.97e-03)-  8.97e-01(1.14e-02)-  8.53e-01(2.69e-02)-  8.20e-01(8.19e-03)-
WFG5     5    7.41e-01(6.12e-03)  6.97e-01(7.19e-03)-  7.34e-01(3.91e-03)-  7.41e-01(3.39e-03)~  7.08e-01(6.05e-03)-
         8    8.61e-01(4.35e-03)  7.91e-01(1.00e-02)-  7.93e-01(8.22e-03)-  6.66e-01(2.51e-02)-  7.55e-01(1.42e-02)-
         10   8.88e-01(8.29e-03)  8.25e-01(4.03e-03)-  8.69e-01(3.20e-03)-  8.03e-01(1.80e-02)-  7.98e-01(9.63e-03)-
WFG6     5    7.50e-01(8.96e-03)  7.00e-01(1.76e-02)-  7.42e-01(1.06e-02)-  7.40e-01(1.03e-02)-  7.19e-01(1.01e-02)-
         8    8.72e-01(1.26e-02)  8.18e-01(9.27e-03)-  8.16e-01(1.16e-02)-  7.23e-01(2.91e-02)-  7.79e-01(1.13e-02)-
         10   9.09e-01(1.36e-02)  8.49e-01(1.10e-02)-  8.89e-01(1.08e-02)-  8.80e-01(1.29e-02)-  8.17e-01(1.80e-02)-
WFG7     5    7.90e-01(2.12e-03)  7.44e-01(9.10e-03)-  7.90e-01(2.04e-03)~  7.85e-01(3.80e-03)-  7.62e-01(4.15e-03)-
         8    9.20e-01(3.39e-03)  8.71e-01(5.41e-03)-  8.52e-01(1.16e-02)-  7.44e-01(3.15e-02)-  8.22e-01(1.28e-02)-
         10   9.55e-01(9.11e-03)  9.08e-01(5.61e-03)-  9.39e-01(2.22e-03)-  9.22e-01(1.50e-02)-  8.66e-01(6.63e-03)-
WFG8     5    6.63e-01(3.75e-03)  5.98e-01(1.27e-02)-  6.66e-01(5.58e-03)+  6.81e-01(3.20e-03)+  6.60e-01(5.95e-03)-
         8    7.85e-01(7.46e-03)  6.25e-01(2.83e-02)-  6.61e-01(1.85e-02)-  6.72e-01(2.19e-02)-  7.48e-01(1.19e-02)-
         10   8.36e-01(1.30e-02)  6.73e-01(3.66e-02)-  8.02e-01(1.11e-02)-  8.12e-01(7.59e-03)-  7.88e-01(9.94e-03)-
WFG9     5    6.57e-01(3.82e-03)  6.43e-01(1.42e-02)-  6.71e-01(4.34e-02)+  6.55e-01(4.42e-03)-  6.96e-01(8.52e-03)+
         8    7.30e-01(9.96e-03)  7.00e-01(1.12e-02)-  7.12e-01(3.63e-02)-  6.30e-01(3.35e-02)-  7.38e-01(5.00e-02)+
         10   7.49e-01(9.03e-03)  7.20e-01(2.35e-02)-  7.69e-01(5.70e-02)+  6.74e-01(2.56e-02)-  7.75e-01(6.87e-03)+
Best/all      22/27               0/27                 0/27                 2/27                 3/27
Worse/similar/better              27-/0~/0+            23-/1~/3+            24-/1~/2+            23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to those results, we can also conclude that AGPSO shows a superior performance over VaEA in most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is only defeated by VaEA in 7 cases. We compare the performance of our algorithm with VaEA's environmental selection on test problems with different shapes of PF (MaF1-MaF7) and on test problems that share the same PF shape but have different characteristics, such as the difficulty of convergence and the deceptive PF property (WFG1-WFG9). Therefore, the angular-guided-based method embedded into PSOs is effectively improved in tackling MaOPs. The angular-guided-based method is adopted as a distributed indicator to distinguish the similarities of particles, which is transformed from the vector angle. The traditional vector angle performs better with concave PFs but worse with convex or linear PFs, as observed based on MOEA/C [52]. Therefore, the angular-guided-based method improves this situation: it performs better than the traditional vector angle with convex or linear PFs, and it does not behave badly with concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first in the update of its external archive and on convergence second. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search of the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs with boundaries that are not difficult to find. In addition, AGPSO also adds an evolutionary operator on the external archive to improve the performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually show and support the abovementioned discussion, to better visualize the performance, and to show the solution distributions in high-dimensional objective spaces, the final solution sets with the median HV values from 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that our boundary cannot be found completely, but the approximate PF calculated by the proposed AGPSO is more closely attached to the real PF. This also proves the correctness of our analysis of the environmental selection, i.e., our environmental selection is more favorable for some linear PFs and for concave-shaped PFs with boundaries that are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully proved the superiority of AGPSO over four MOPSOs and four

Table 5: Comparison of results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO               VaEA                 θ-DEA                MOEA/DD              SPEA2+SDE
MaF1     5    1.31e-02(1.43e-04)  1.11e-02(2.73e-04)-  5.66e-03(2.72e-05)-  2.96e-03(8.34e-05)-  1.29e-02(1.00e-04)-
         8    4.23e-05(1.80e-06)  2.69e-05(2.04e-06)-  2.93e-05(2.03e-06)-  4.38e-06(4.77e-07)-  3.87e-05(9.83e-07)-
         10   6.16e-07(3.49e-08)  3.60e-07(2.21e-08)-  3.39e-07(7.07e-08)-  3.45e-08(2.21e-08)-  5.30e-07(2.16e-08)-
MaF2     5    2.64e-01(1.18e-03)  2.35e-01(4.80e-03)-  2.33e-01(2.82e-03)-  2.34e-01(1.48e-03)-  2.62e-01(1.26e-03)-
         8    2.46e-01(2.64e-03)  2.00e-01(7.59e-03)-  1.71e-01(1.82e-02)-  1.88e-01(1.39e-03)-  2.36e-01(3.06e-03)-
         10   2.45e-01(2.08e-03)  2.01e-01(1.25e-02)-  1.94e-01(7.96e-03)-  1.91e-01(3.21e-03)-  2.30e-01(2.48e-03)-
MaF3     5    9.88e-01(2.87e-03)  9.97e-01(1.13e-03)+  9.93e-01(2.82e-03)+  9.98e-01(9.54e-05)+  9.92e-01(2.12e-03)+
         8    9.95e-01(2.11e-03)  9.98e-01(3.58e-04)+  9.92e-01(3.83e-03)-  1.00e+00(9.42e-07)+  9.97e-01(1.66e-03)+
         10   9.85e-01(2.26e-03)  9.99e-01(9.36e-04)+  9.70e-01(5.68e-03)-  1.00e+00(3.96e-08)+  9.99e-01(4.94e-04)+
MaF4     5    1.29e-01(1.22e-03)  1.17e-01(3.62e-03)-  7.32e-02(8.89e-03)-  0.00e+00(0.00e+00)-  1.07e-01(5.37e-03)-
         8    6.02e-03(9.67e-05)  2.44e-03(3.28e-04)-  1.32e-03(8.49e-04)-  0.00e+00(0.00e+00)-  3.62e-04(7.80e-05)-
         10   5.66e-04(1.89e-05)  1.48e-04(9.76e-06)-  2.29e-04(3.57e-05)-  0.00e+00(0.00e+00)-  4.23e-06(1.65e-06)-
MaF5     5    8.00e-01(2.70e-03)  7.92e-01(1.85e-03)-  8.13e-01(4.51e-05)+  7.73e-01(3.13e-03)-  7.84e-01(4.38e-03)-
         8    9.32e-01(1.01e-03)  9.08e-01(3.80e-03)-  9.26e-01(9.79e-05)-  8.73e-01(6.71e-03)-  7.36e-01(9.62e-02)-
         10   9.67e-01(1.32e-03)  9.43e-01(6.87e-03)-  9.70e-01(4.61e-05)+  9.28e-01(8.28e-04)-  7.12e-01(1.18e-01)-
MaF6     5    1.30e-01(6.22e-05)  1.30e-01(7.51e-05)+  1.17e-01(3.12e-03)-  8.19e-02(2.96e-02)-  1.29e-01(1.84e-04)-
         8    1.06e-01(1.66e-05)  1.06e-01(2.20e-05)+  9.18e-02(1.21e-03)-  2.48e-02(1.29e-02)-  8.81e-02(2.26e-04)-
         10   9.22e-02(1.18e-02)  7.05e-02(3.95e-02)-  4.57e-02(4.66e-02)-  4.83e-02(5.24e-02)-  2.71e-02(1.00e-01)-
MaF7     5    2.53e-01(3.05e-02)  3.03e-01(3.50e-03)+  2.79e-01(7.42e-03)+  1.19e-01(2.76e-02)-  3.31e-01(1.76e-03)+
         8    1.94e-01(2.59e-02)  2.24e-01(6.17e-03)+  2.20e-01(5.03e-02)+  7.06e-04(7.80e-04)-  2.45e-01(1.73e-02)+
         10   1.83e-01(3.18e-02)  1.91e-01(1.19e-02)~  2.27e-01(2.01e-02)+  9.05e-05(4.81e-05)-  6.56e-02(1.05e-02)-
Best/all      11/21               2/21                 3/21                 3/21                 2/21
Worse/similar/better              13-/1~/7+            15-/0~/6+            18-/0~/3+            16-/0~/5+


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. In order to prove the superiority of the density-based velocity update formula, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that only embedding the local best into the traditional velocity update method,

v_d(t) = w·v_d(t) + c_1 r_{1d}(x_d^{pbest} − x_d(t)) + c_2 r_{2d}(x_d^{gbest} − x_d(t)) + c_3 r_{3d}(x_d^{lbest} − x_d(t)),    (12)

is not sufficient, its effect is also compared with that of equation (12), which is denoted as AGPSO-II.
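The sketch below transcribes equation (12) directly (the AGPSO-II variant discussed here); the sampling ranges for w and c1–c3 follow Table 1, while the function name and the array-based interface are illustrative assumptions.

```python
# Direct transcription of equation (12): the traditional velocity update
# extended with a local-best (lbest) term (AGPSO-II variant); sketch only.
import numpy as np

def velocity_update_eq12(v, x, x_pbest, x_gbest, x_lbest, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    v, x = np.asarray(v, float), np.asarray(x, float)
    d = len(x)
    w = rng.uniform(0.1, 0.5)                    # inertia weight, as in Table 1
    c1, c2, c3 = rng.uniform(1.5, 2.5, size=3)   # acceleration coefficients
    r1, r2, r3 = rng.random((3, d))              # per-dimension random numbers
    return (w * v
            + c1 * r1 * (np.asarray(x_pbest, float) - x)
            + c2 * r2 * (np.asarray(x_gbest, float) - x)
            + c3 * r3 * (np.asarray(x_lbest, float) - x))
```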

[Figure 2, panels (a)-(j): objective value versus objective number (1-10) for the final solution sets of (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, and (i) MOEA/DD, together with (j) the true PF, on the 10-objective MaF1 problem.]

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on MaF1 problems.


Table 6 shows the comparisons of AGPSO and the two modified AGPSOs with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second last row of Table 6, there are 34 best results in 48 test problems obtained by AGPSO, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel velocity update strategy based on density is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy can improve AGPSO and enables particle generation to be more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed novel velocity update strategy over the variant of the traditional formula defined by equation (12): adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of each particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection that is described in Algorithm 2. Algorithm 2 requires a time complexity

[Figure 3, panels (a)-(j): objective value versus objective number (1-10) for the final solution sets of (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, and (i) MOEA/DD, together with (j) the true PF, on the 10-objective WFG3 problem.]

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on WFG3 problems.


[Figure 4, panels (a)-(c): HV metric value versus number of generations (0-1000) on the 10-objective (a) WFG3, (b) WFG4, and (c) MaF2 problems; the plotted curves are AGPSO, VaEA, SPEA2+SDE, and NMPSO.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison of results of AGPSO with different velocity update strategies using HV.

Problem  Obj  AGPSO               AGPSO-I              AGPSO-II
WFG1     5    7.91e-01(3.47e-02)  7.48e-01(3.41e-02)-  7.51e-01(3.88e-02)-
         8    8.85e-01(2.54e-02)  8.14e-01(3.94e-02)-  7.96e-01(3.97e-02)-
         10   9.47e-01(4.77e-03)  9.49e-01(2.59e-03)+  9.45e-01(3.49e-03)~
WFG2     5    9.76e-01(5.13e-03)  9.76e-01(2.81e-03)~  9.78e-01(5.19e-03)+
         8    9.87e-01(2.65e-03)  9.89e-01(2.21e-03)+  9.88e-01(1.84e-03)~
         10   9.92e-01(3.30e-03)  9.93e-01(1.38e-03)+  9.92e-01(1.97e-03)~
WFG3     5    6.59e-01(5.02e-03)  6.66e-01(4.17e-03)+  6.57e-01(5.40e-03)~
         8    6.79e-01(5.20e-03)  6.86e-01(3.41e-03)+  6.79e-01(4.02e-03)~
         10   6.92e-01(7.15e-03)  7.02e-01(3.08e-03)+  6.93e-01(4.13e-03)~
WFG4     5    7.76e-01(5.69e-03)  7.57e-01(7.87e-03)-  7.68e-01(5.07e-03)-
         8    9.00e-01(1.01e-02)  8.85e-01(8.62e-03)-  8.94e-01(8.43e-03)-
         10   9.39e-01(7.42e-03)  9.31e-01(8.35e-03)-  9.30e-01(8.84e-03)-
WFG5     5    7.42e-01(4.66e-03)  7.34e-01(7.45e-03)-  7.39e-01(7.73e-03)-
         8    8.59e-01(4.74e-03)  8.52e-01(6.05e-03)-  8.58e-01(5.45e-03)~
         10   8.91e-01(5.28e-03)  8.73e-01(1.22e-02)-  8.78e-01(2.21e-02)-
WFG6     5    7.53e-01(7.52e-03)  6.92e-01(2.21e-03)-  7.52e-01(3.85e-03)~
         8    8.69e-01(1.24e-02)  8.10e-01(3.63e-03)-  8.83e-01(5.64e-03)+
         10   9.12e-01(1.22e-02)  8.40e-01(3.57e-03)-  9.23e-01(7.99e-03)+
WFG7     5    7.90e-01(2.13e-03)  7.81e-01(3.32e-03)-  7.85e-01(2.78e-03)-
         8    9.21e-01(2.89e-03)  9.15e-01(3.11e-03)-  9.13e-01(3.18e-03)-
         10   9.55e-01(9.72e-03)  9.55e-01(2.26e-03)~  9.44e-01(1.59e-02)-
WFG8     5    6.61e-01(5.62e-03)  6.44e-01(3.76e-03)-  6.54e-01(5.95e-03)-
         8    7.87e-01(7.11e-03)  7.80e-01(4.75e-03)-  7.81e-01(5.89e-03)-
         10   8.41e-01(5.48e-03)  8.45e-01(3.13e-03)+  8.31e-01(1.76e-02)-


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3–7, it requires a time complexity of O(m²N²) to classify the population. In lines 8–11, it needs a time complexity of O(N). The loop for association also requires a time complexity of O(m²N²), as in lines 12–15. In lines 17–20, it needs a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an effective angular-guided archive update strategy. The novel density-based velocity update strategy uses adjacent information (the local best) to explore the information around each particle and uses the globally optimal information (the global best) to search globally for better-performing locations. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed novel velocity update strategy and the archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms in solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further studied. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs, the improvement of which will be studied in the future.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by the original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/DD [50]. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem  Obj  AGPSO               AGPSO-I              AGPSO-II
WFG9     5    6.58e-01(4.54e-03)  6.53e-01(3.73e-03)-  6.53e-01(2.99e-03)-
         8    7.29e-01(9.55e-03)  7.20e-01(1.21e-02)-  7.23e-01(8.94e-03)-
         10   7.51e-01(1.21e-02)  7.35e-01(1.18e-02)-  7.36e-01(1.04e-02)-
MaF1     5    1.30e-02(1.29e-04)  1.30e-02(1.10e-04)~  1.30e-02(1.04e-04)~
         8    4.23e-05(1.45e-06)  4.12e-05(1.35e-06)-  4.14e-05(1.78e-06)-
         10   6.12e-07(4.40e-08)  6.07e-07(2.30e-08)~  6.10e-07(2.83e-08)~
MaF2     5    2.64e-01(9.97e-04)  2.62e-01(1.08e-03)-  2.61e-01(1.10e-03)-
         8    2.46e-01(1.64e-03)  2.40e-01(1.89e-03)-  2.41e-01(1.66e-03)-
         10   2.45e-01(1.84e-03)  2.39e-01(2.75e-03)-  2.41e-01(2.19e-03)-
MaF3     5    9.90e-01(4.96e-03)  9.76e-01(2.92e-02)-  4.96e-01(3.82e-01)-
         8    9.68e-01(1.95e-03)  9.36e-01(9.22e-02)-  3.73e-01(3.08e-01)-
         10   9.92e-01(2.55e-03)  9.72e-01(1.86e-02)-  4.42e-01(3.77e-01)-
MaF4     5    1.29e-01(1.33e-03)  1.08e-01(1.44e-02)-  1.18e-01(2.08e-03)-
         8    6.03e-03(1.88e-04)  3.33e-03(1.02e-03)-  4.74e-03(2.12e-04)-
         10   5.62e-04(1.47e-05)  2.58e-04(9.13e-05)-  3.65e-04(2.55e-05)-
MaF5     5    8.01e-01(1.81e-03)  8.00e-01(3.40e-03)~  8.00e-01(2.06e-03)~
         8    9.32e-01(1.97e-03)  9.31e-01(1.39e-03)-  9.31e-01(1.58e-03)-
         10   9.67e-01(1.09e-03)  9.66e-01(7.51e-04)-  9.66e-01(1.18e-03)-
MaF6     5    1.30e-01(5.82e-05)  1.30e-01(3.35e-05)-  1.30e-01(5.60e-05)-
         8    1.03e-01(2.78e-05)  4.51e-02(4.39e-02)-  6.64e-02(5.81e-02)-
         10   9.49e-02(1.27e-02)  4.55e-02(7.07e-02)-  4.40e-02(4.92e-02)-
MaF7     5    2.51e-01(4.40e-02)  3.09e-01(2.58e-03)+  2.49e-01(3.73e-02)~
         8    1.91e-01(4.11e-02)  2.54e-01(4.64e-03)+  2.22e-01(1.68e-02)+
         10   1.78e-01(3.53e-02)  2.32e-01(4.46e-03)+  2.01e-01(9.95e-03)+
Best/all      34/48               10/48                4/48
Worse/similar/better              33-/5~/10+           31-/12~/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chung, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.
[28] F. Lopez-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.

[36] W Hu G G Yen and G Luo ldquoMany-objective particleswarm optimization using two-stage strategy and parallel cellcoordinate systemrdquo IEEE Transactions on Cybernetics vol 47no 6 pp 1446ndash1459 2017

[37] M Laumanns L iele K Deb and E Zitzler ldquoCombiningconvergence and diversity in evolutionary multiobjectiveoptimizationrdquo Evolutionary Computation vol 10 no 3pp 263ndash282 2002

[38] Y Tian R Cheng X Zhang Y Su and Y Jin ldquoAstrengthened dominance relation considering convergenceand diversity for evolutionary many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 23no 2 pp 331ndash345 2019

[39] H Chen Y Tian W Pedrycz G Wu R Wang and L WangldquoHyperplane assisted evolutionary algorithm for many-ob-jective optimization problemsrdquo IEEE Transactions on Cy-bernetics pp 1ndash14 In press 2019

[40] K Deb and H Jain ldquoAn evolutionary many-objective opti-mization algorithm using reference-point-based non-dominated sorting approach Part I solving problems withbox constraintsrdquo IEEE Transactions on Evolutionary Com-putation vol 18 no 4 pp 577ndash601 2014

[41] X He Y Zhou Z Chen and Q Zhang ldquoEvolutionary many-objective optimization based on dynamical decompositionrdquoIEEE Transactions on Evolutionary Computation vol 23no 3 pp 361ndash375 2019

[42] F Gu and Y-M Cheung ldquoSelf-organizing map-based weightdesign for decomposition-based many-objective evolutionary

algorithmrdquo IEEE Transactions on Evolutionary Computationvol 22 no 2 pp 211ndash225 2018

[43] X Cai H Sun and Z Fan ldquoA diversity indicator based onreference vectors for many-objective optimizationrdquo Infor-mation Sciences vol 430-431 pp 467ndash486 2018

[44] T Pamulapati R Mallipeddi and P N Suganthan ldquoISDE+mdashan indicator for multi and many-objective optimiza-tionrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 346ndash352 2019

[45] Y N Sun G G Yen and Z Yi ldquoIGD indicator-basedevolutionary algorithm for many-objective optimizationproblemrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 173ndash187 2019

[46] Y Xiang Y Zhou Z Chen and J Zhang ldquoA many-objectiveparticle swarm optimizer with leaders selected from historicalsolutions by using scalar projectionsrdquo IEEE Transactions onCybernetics pp 1ndash14 In press 2019

[47] M Li S Yang and X Liu ldquoShift-based density estimation forPareto-based algorithms in many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 18no 3 pp 348ndash365 2014

[48] Y Xiang Y R Zhou and M Q Li ldquoA vector angle basedevolutionary algorithm for unconstrained many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 21 no 1 pp 131ndash152 2017

[49] K Li K Deb Q F Zhang and S Kwong ldquoAn evolutionarymany-objective optimization algorithm based on dominanceand decompositionrdquo IEEE Transactions on EvolutionaryComputation vol 19 no 5 pp 694ndash716 2015

[50] S Huband L Barone R while and P Hingston ldquoA scalablemulti-objective test problem tookitrdquo in Proceedings of the 3rdInternational Conference on Evolutionary Multi-CriterionOptimization pp 280ndash295vol 3410 of Lecture Notes inComputer Science Guanajuato Mexico March 2005

[51] Q Lin S Liu K Wong et al ldquoA clustering-based evolu-tionary algorithm for many-objective optimization prob-lemsrdquo IEEE Transactions on Evolutionary Computationvol 23 no 3 pp 391ndash405 In press 2018

[52] K Deb L iele M Laumanns and E Zitzler ldquoScalable testproblems for evolutionary multi-objective optimizationrdquo inEvolutionary Multi-Objective Optimization Ser AdvancedInformation and Knowledge Processing A Abraham L Jainand R Goldberg Eds pp 105ndash145 Springer Lon-donLondon UK 2005


4.1.3. Performance Metrics. The goals in solving MaOPs include minimizing the distance of the obtained solutions to the true PF (i.e., convergence) and maximizing the uniformity and spread of the solutions over the true PF (i.e., diversity).

The hypervolume (HV) [20] metric calculates the volume of the objective space between a prespecified reference point $z^r = (z^r_1, \ldots, z^r_m)^T$, which is dominated by all points on the true PF, and the obtained solution set $S$. The HV metric is computed as follows:

$$\mathrm{HV}(S) = \mathrm{Vol}\left( \bigcup_{x \in S} \left[ f_1(x), z^r_1 \right] \times \cdots \times \left[ f_m(x), z^r_m \right] \right), \qquad (11)$$

where $\mathrm{Vol}(\cdot)$ denotes the Lebesgue measure. When calculating the HV value, the points are first normalized, as suggested in [51], by the vector $1.1 \times (f^{\max}_1(x), f^{\max}_2(x), \ldots, f^{\max}_m(x))$, where $f^{\max}_k$ ($k = 1, 2, \ldots, m$) is the maximum value of the $k$th objective on the true PF. Solutions that cannot dominate the reference point are discarded (i.e., they are not included when computing HV). For each objective, an integer larger than the worst value of the corresponding objective on the true PF is adopted as the reference point. After the normalization operation, the reference point is set to $(1.0, 1.0, \ldots, 1.0)$. A larger HV value indicates a better approximation of the true PF.

4.2. General Experimental Parameter Settings. In this paper, four PSO algorithms (dMOPSO [17], SMPSO [14], NMPSO [24], and MaPSO [46]) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) were used for performance comparison.

Because dMOPSO and MOEA/DD rely on weight vectors, the population sizes of these two algorithms are set equal to the number of weight vectors. Following [51], the number of weight vectors is set to 210, 240, and 275 for the test problems with 5, 8, and 10 objectives, respectively. To ensure fairness, the other algorithms adopt a population (swarm) size equal to the number of weight vectors; a short illustration of how such counts arise is given below.
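The counts 210, 240, and 275 are consistent with the usual simplex-lattice (and two-layer) weight-vector designs of size C(H + m - 1, m - 1); the divisions H used in the sketch below are assumptions chosen for illustration and are not stated in the paper.

// Illustrative computation of simplex-lattice weight-vector counts.
public final class WeightVectorCount {

    // Number of weight vectors of a simplex-lattice design with m objectives
    // and H divisions per objective: C(H + m - 1, m - 1).
    static long lattice(int m, int H) {
        return binomial(H + m - 1, m - 1);
    }

    static long binomial(int n, int k) {
        long result = 1;
        for (int i = 1; i <= k; i++) {
            result = result * (n - k + i) / i;   // exact at every step
        }
        return result;
    }

    public static void main(String[] args) {
        // 5 objectives, assumed H = 6:                 C(10, 4) = 210
        System.out.println("m=5:  " + lattice(5, 6));
        // 8 objectives, assumed two layers H1 = H2 = 3: 120 + 120 = 240
        System.out.println("m=8:  " + (lattice(8, 3) + lattice(8, 3)));
        // 10 objectives, assumed two layers H1 = 3, H2 = 2: 220 + 55 = 275
        System.out.println("m=10: " + (lattice(10, 3) + lattice(10, 2)));
    }
}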

To allow a fair comparison, the related parameters of all the compared algorithms were set as suggested in their references, as summarized in Table 1. Here, pc and pm are the crossover and mutation probabilities, and ηc and ηm are, respectively, the distribution indexes of SBX and polynomial-based mutation. For the PSO-based algorithms mentioned above (SMPSO, dMOPSO, NMPSO, and AGPSO), the control parameters c1, c2, and c3 are sampled in [1.5, 2.5], and the inertia weight w is a random number in [0.1, 0.5]. In MaPSO, K is the size of the history maintained by each particle, which is set to 3; the other algorithmic parameters are θmax = 0.5, w = 0.1, and C = 20. In MOEA/DD, T is the neighborhood size, δ is the probability of selecting parent solutions from the T neighborhoods, and nr is the maximum number of parent solutions that are replaced by each child solution. Regarding VaEA, σ is the condition for determining whether a solution searches in a similar direction, which is set to π/(2(N + 1)).

All the algorithms were run 30 times independently on each test problem. The mean HV values and the standard deviations (shown in brackets after the mean HV results) over the 30 runs were collected for comparison. All the algorithms were terminated when a predefined maximum number of generations (tmax) was reached; in this paper, tmax is set to 600, 700, and 1000 for the 5-, 8-, and 10-objective problems, respectively. For each algorithm, the maximum number of function evaluations (MFE) can be easily determined by MFE = N × tmax, where N is the population size. To obtain a statistically sound conclusion, the Wilcoxon rank sum test was run with a significance level of 0.05, showing the statistically significant differences between the results of AGPSO and the other competitors. In the following experimental results, the symbols "+", "−", and "~" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO under this statistical test, respectively; a minimal sketch of this classification is given below.
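As an illustration of how the "+"/"−"/"~" labels can be derived from the 30 HV values per algorithm, the following hedged sketch uses the Mann-Whitney U test (the unpaired Wilcoxon rank-sum test) from Apache Commons Math 3; the helper names are illustrative, and the paper does not state which statistical routine was actually used.

import org.apache.commons.math3.stat.StatUtils;
import org.apache.commons.math3.stat.inference.MannWhitneyUTest;

// Classifies a competitor against AGPSO from two sets of HV values,
// using the Wilcoxon rank-sum (Mann-Whitney U) test at the given level.
public final class SignificanceLabel {

    static char compare(double[] hvAgpso, double[] hvOther, double alpha) {
        double p = new MannWhitneyUTest().mannWhitneyUTest(hvAgpso, hvOther);
        if (p >= alpha) {
            return '~';                              // no significant difference
        }
        // Significant difference: the sign follows the mean HV (larger is better).
        return StatUtils.mean(hvOther) > StatUtils.mean(hvAgpso) ? '+' : '-';
    }

    public static void main(String[] args) {
        double[] a = {0.90, 0.91, 0.92, 0.89, 0.90};   // toy HV samples, not real data
        double[] b = {0.85, 0.86, 0.84, 0.87, 0.85};
        System.out.println(compare(a, b, 0.05));       // expected: '-'
    }
}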

All nine algorithms were implemented in Java and run on a personal computer with an Intel(R) Core(TM) i7-6700 CPU at 3.40 GHz and 20 GB of RAM.

Table 1: General experimental parameter settings.

Algorithm | Parameter settings
SMPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
dMOPSO | w ∈ [0.1, 0.5], c1, c2 ∈ [1.5, 2.5]
NMPSO | w ∈ [0.1, 0.5], c1, c2, c3 ∈ [1.5, 2.5], pm = 1/n, ηm = 20
MaPSO | K = 3, θmax = 0.5, C = 20, w = 0.1
VaEA | pc = 1.0, pm = 1/n, ηm = 20, σ = π/(2(N + 1))
θ-DEA | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30
MOEA/DD | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30, T = 20, δ = 0.9
SPEA2+SDE | pc = 1.0, pm = 1/n, ηm = 20, ηc = 30

4.3. Comparison with State-of-the-Art Algorithms. In this section, AGPSO is compared to four PSO algorithms (dMOPSO, SMPSO, NMPSO, and MaPSO) and four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on the WFG1-WFG9 and MaF1-MaF7 problems. In the following tables, the symbols "+", "−", and "~" indicate that the results of a competitor are significantly better than, worse than, and similar to those of AGPSO under the statistical test, respectively. The best mean result for each problem is highlighted in boldface in the original tables.

4.3.1. Comparison Results with Four Current MOPSOs

(1) Comparison Results on WFG1-WFG9. Table 2 shows the HV comparisons of the four PSOs on WFG1-WFG9, which clearly demonstrate that AGPSO provides promising performance on the WFG problems, as it is the best on 21 out of 27 cases. In contrast, NMPSO, MaPSO, dMOPSO, and SMPSO are the best on 6, 1, 0, and 0 cases for the HV metric, respectively. These data are summarized in the second last row of Table 2. The last row of Table 2 gives the one-to-one comparisons of AGPSO with each PSO on WFG, where each entry of "−/~/+" is the number of test problems on which the corresponding algorithm performs worse than, similarly to, or better than AGPSO.

It is found that AGPSO shows a clear advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO shows a better performance than the other PSOs except MaPSO; AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs the best with 10 objectives and is worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows superior performance over these traditional PSOs. From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in only 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most problems of WFG1-WFG9.

Table 2: Comparison results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
WFG1 | 5 | 7.87e-01 (4.27e-02) | 5.97e-01 (2.89e-02) - | 3.34e-01 (2.58e-02) - | 2.98e-01 (3.44e-03) - | 3.00e-01 (1.01e-03) -
WFG1 | 8 | 8.75e-01 (3.77e-02) | 7.00e-01 (1.38e-01) - | 2.74e-01 (2.45e-02) - | 2.35e-01 (6.26e-03) - | 2.50e-01 (1.54e-03) -
WFG1 | 10 | 9.45e-01 (3.19e-03) | 8.07e-01 (1.58e-01) - | 2.53e-01 (1.98e-02) - | 2.13e-01 (9.18e-03) - | 2.30e-01 (1.29e-03) -
WFG2 | 5 | 9.76e-01 (5.76e-03) | 9.69e-01 (5.44e-03) - | 9.70e-01 (4.70e-03) - | 8.94e-01 (1.28e-02) - | 8.85e-01 (1.31e-02) -
WFG2 | 8 | 9.82e-01 (4.12e-03) | 9.83e-01 (3.56e-03) + | 9.84e-01 (2.86e-03) + | 8.92e-01 (1.48e-02) - | 8.48e-01 (2.52e-02) -
WFG2 | 10 | 9.93e-01 (1.99e-03) | 9.91e-01 (4.85e-03) ~ | 9.90e-01 (1.48e-03) - | 9.00e-01 (1.76e-02) - | 8.54e-01 (2.55e-02) -
WFG3 | 5 | 6.59e-01 (5.16e-03) | 6.13e-01 (1.94e-02) - | 6.12e-01 (1.64e-02) - | 5.89e-01 (1.05e-02) - | 5.75e-01 (7.32e-03) -
WFG3 | 8 | 6.78e-01 (6.20e-03) | 5.77e-01 (1.52e-02) - | 5.96e-01 (1.88e-02) - | 4.94e-01 (4.58e-02) - | 5.84e-01 (1.51e-02) -
WFG3 | 10 | 6.91e-01 (7.88e-03) | 5.83e-01 (2.73e-02) - | 5.97e-01 (2.58e-02) - | 3.56e-01 (9.58e-02) - | 5.92e-01 (1.52e-02) -
WFG4 | 5 | 7.74e-01 (5.34e-03) | 7.64e-01 (6.09e-03) - | 7.22e-01 (2.96e-02) - | 6.51e-01 (1.08e-02) - | 5.09e-01 (1.73e-02) -
WFG4 | 8 | 9.00e-01 (8.05e-03) | 8.62e-01 (1.21e-02) - | 7.98e-01 (1.76e-02) - | 3.88e-01 (7.86e-02) - | 5.26e-01 (2.13e-02) -
WFG4 | 10 | 9.37e-01 (6.64e-03) | 8.82e-01 (8.54e-03) - | 8.27e-01 (1.32e-02) - | 3.15e-01 (1.34e-01) - | 5.52e-01 (2.19e-02) -
WFG5 | 5 | 7.41e-01 (6.12e-03) | 7.35e-01 (3.73e-03) - | 6.63e-01 (1.18e-02) - | 5.85e-01 (1.53e-02) - | 4.30e-01 (1.18e-02) -
WFG5 | 8 | 8.61e-01 (4.35e-03) | 8.32e-01 (6.58e-03) - | 6.91e-01 (1.12e-02) - | 1.81e-01 (8.43e-02) - | 4.52e-01 (1.08e-02) -
WFG5 | 10 | 8.88e-01 (8.29e-03) | 8.65e-01 (7.92e-03) - | 6.99e-01 (1.76e-02) - | 2.43e-02 (1.54e-02) - | 4.65e-01 (1.24e-02) -
WFG6 | 5 | 7.50e-01 (8.96e-03) | 7.24e-01 (3.96e-03) - | 7.09e-01 (4.50e-03) - | 6.74e-01 (7.93e-03) - | 5.43e-01 (1.66e-02) -
WFG6 | 8 | 8.72e-01 (1.26e-02) | 8.08e-01 (5.70e-03) - | 8.03e-01 (2.66e-03) - | 4.89e-01 (4.87e-02) - | 6.49e-01 (8.38e-03) -
WFG6 | 10 | 9.09e-01 (1.36e-02) | 8.34e-01 (2.03e-03) - | 8.32e-01 (1.80e-03) - | 4.66e-01 (2.96e-02) - | 7.03e-01 (1.37e-02) -
WFG7 | 5 | 7.90e-01 (2.12e-03) | 7.96e-01 (2.98e-03) + | 7.89e-01 (4.61e-03) ~ | 5.39e-01 (2.72e-02) - | 4.44e-01 (2.06e-02) -
WFG7 | 8 | 9.20e-01 (3.39e-03) | 9.04e-01 (3.71e-03) - | 8.73e-01 (1.40e-02) - | 2.25e-01 (3.93e-02) - | 4.73e-01 (1.23e-02) -
WFG7 | 10 | 9.55e-01 (9.11e-03) | 9.36e-01 (6.97e-03) - | 9.17e-01 (1.17e-02) - | 2.07e-01 (6.36e-02) - | 5.14e-01 (2.10e-02) -
WFG8 | 5 | 6.63e-01 (3.75e-03) | 6.73e-01 (5.41e-03) + | 6.29e-01 (1.34e-02) - | 4.27e-01 (1.57e-02) - | 3.72e-01 (2.09e-02) -
WFG8 | 8 | 7.85e-01 (7.46e-03) | 7.97e-01 (3.79e-02) ~ | 6.91e-01 (2.13e-02) - | 9.55e-02 (3.20e-02) - | 4.11e-01 (9.29e-03) -
WFG8 | 10 | 8.36e-01 (1.30e-02) | 8.62e-01 (4.12e-02) + | 7.33e-01 (2.04e-02) - | 7.73e-02 (1.27e-02) - | 4.52e-01 (1.90e-02) -
WFG9 | 5 | 6.57e-01 (3.82e-03) | 7.35e-01 (7.32e-03) + | 6.23e-01 (7.01e-03) - | 5.97e-01 (1.99e-02) - | 4.46e-01 (1.61e-02) -
WFG9 | 8 | 7.30e-01 (9.96e-03) | 7.65e-01 (6.80e-02) + | 6.55e-01 (1.17e-02) - | 2.73e-01 (6.06e-02) - | 4.55e-01 (2.39e-02) -
WFG9 | 10 | 7.49e-01 (9.03e-03) | 7.47e-01 (6.72e-02) - | 6.56e-01 (1.30e-02) - | 2.24e-01 (6.96e-02) - | 4.76e-01 (1.23e-02) -
Best/all | | 20/27 | 6/27 | 1/27 | 0/27 | 0/27
Worse/similar/better | | | 19-/2~/6+ | 25-/1~/1+ | 27-/0~/0+ | 27-/0~/0+

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO and the four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second last row in Table 3, AGPSO obtains 12 best results out of the 21 test cases, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not the best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], which prevents the shape of a set of reference points from fitting the PF shape of MaF1. The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), while AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, which requires that all objectives are optimized simultaneously to reach the true PF; the performance of AGPSO is the best on MaF2 with all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above, and AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the other PSOs listed in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; NMPSO is slightly better than AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, with a disconnected PF, very well, and NMPSO performs best with all numbers of objectives.

As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases.

Therefore, AGPSO shows an overall better performance than NMPSO on the MaF test problems.

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02 (1.43e-04) | 1.27e-02 (5.81e-05) - | 1.15e-02 (2.80e-04) - | 1.06e-02 (1.26e-04) - | 6.83e-03 (5.14e-04) -
MaF1 | 8 | 4.23e-05 (1.80e-06) | 3.64e-05 (7.99e-07) - | 2.79e-05 (2.66e-06) - | 1.30e-05 (1.49e-05) - | 1.21e-05 (1.88e-06) -
MaF1 | 10 | 6.16e-07 (3.49e-08) | 4.75e-07 (1.83e-08) - | 3.83e-07 (2.83e-08) - | 8.82e-08 (8.92e-08) - | 1.46e-07 (2.86e-08) -
MaF2 | 5 | 2.64e-01 (1.18e-03) | 2.63e-01 (8.71e-04) - | 2.38e-01 (3.57e-03) - | 2.50e-01 (1.55e-03) - | 2.12e-01 (4.05e-03) -
MaF2 | 8 | 2.46e-01 (2.64e-03) | 2.36e-01 (2.32e-03) - | 2.09e-01 (3.04e-03) - | 2.02e-01 (2.13e-03) - | 1.74e-01 (4.46e-03) -
MaF2 | 10 | 2.45e-01 (2.08e-03) | 2.23e-01 (2.81e-03) - | 2.06e-01 (5.67e-03) - | 2.05e-01 (2.15e-03) - | 1.88e-01 (4.28e-03) -
MaF3 | 5 | 9.88e-01 (2.87e-03) | 9.71e-01 (1.47e-02) - | 9.26e-01 (5.29e-02) - | 9.38e-01 (8.27e-03) - | 7.34e-01 (4.50e-01) -
MaF3 | 8 | 9.95e-01 (2.11e-03) | 9.99e-01 (7.50e-04) + | 9.84e-01 (1.48e-02) - | 9.62e-01 (1.44e-02) - | 0.00e+00 (0.00e+00) -
MaF3 | 10 | 9.85e-01 (2.26e-03) | 1.00e+00 (1.92e-04) + | 9.95e-01 (6.00e-03) ~ | 9.71e-01 (9.43e-03) - | 0.00e+00 (0.00e+00) -
MaF4 | 5 | 1.29e-01 (1.22e-03) | 5.43e-02 (3.25e-02) - | 1.01e-01 (1.07e-02) - | 4.21e-02 (4.98e-03) - | 7.96e-02 (3.93e-03) -
MaF4 | 8 | 6.02e-03 (9.67e-05) | 2.29e-03 (7.77e-04) - | 3.18e-03 (6.84e-04) - | 1.24e-04 (7.69e-05) - | 1.82e-03 (3.39e-04) -
MaF4 | 10 | 5.66e-04 (1.89e-05) | 2.10e-04 (1.26e-04) - | 2.36e-04 (6.20e-05) - | 1.55e-06 (8.26e-07) - | 1.12e-04 (2.32e-05) -
MaF5 | 5 | 8.00e-01 (2.70e-03) | 8.11e-01 (1.33e-03) + | 7.93e-01 (4.06e-03) - | 2.69e-01 (1.30e-01) - | 6.68e-01 (2.99e-02) -
MaF5 | 8 | 9.32e-01 (1.01e-03) | 9.35e-01 (1.40e-03) + | 9.29e-01 (4.41e-03) - | 2.01e-01 (6.84e-02) - | 3.59e-01 (1.30e-01) -
MaF5 | 10 | 9.67e-01 (1.32e-03) | 9.64e-01 (1.61e-03) - | 9.65e-01 (1.90e-03) - | 2.40e-01 (3.18e-02) - | 1.60e-01 (1.00e-01) -
MaF6 | 5 | 1.30e-01 (6.22e-05) | 9.37e-02 (4.55e-03) - | 1.10e-01 (2.78e-03) - | 1.27e-01 (3.33e-05) - | 1.30e-01 (1.40e-05) +
MaF6 | 8 | 1.06e-01 (1.66e-05) | 9.83e-02 (1.15e-02) - | 9.50e-02 (2.86e-03) - | 1.04e-01 (3.24e-05) - | 9.83e-02 (1.26e-03) -
MaF6 | 10 | 9.22e-02 (1.18e-02) | 9.12e-02 (0.00e+00) ~ | 9.24e-02 (1.53e-03) ~ | 1.00e-01 (5.94e-06) ~ | 8.06e-02 (1.42e-02) -
MaF7 | 5 | 2.53e-01 (3.05e-02) | 3.29e-01 (1.70e-03) + | 2.95e-01 (6.58e-03) + | 2.26e-01 (2.42e-03) - | 2.37e-01 (6.73e-03) -
MaF7 | 8 | 1.94e-01 (2.59e-02) | 2.89e-01 (2.36e-03) + | 2.26e-01 (6.39e-03) + | 2.88e-02 (9.26e-02) - | 6.78e-02 (9.38e-02) -
MaF7 | 10 | 1.83e-01 (3.18e-02) | 2.64e-01 (1.78e-03) + | 2.05e-01 (5.98e-03) + | 6.37e-03 (5.21e-04) - | 5.57e-02 (1.25e-01) -
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | | | 14-/0~/7+ | 16-/1~/4+ | 20-/0~/1+ | 20-/0~/1+

4.3.2. Comparison Results with Four Current MOEAs. In the sections above, it is experimentally shown that AGPSO performs better than the four existing MOPSOs on most of the test problems (MaF and WFG). However, there are not many existing PSO algorithms designed for MaOPs, and a comparison with PSO algorithms alone does not fully reflect the capability of AGPSO; thus, we further compared AGPSO with four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values and the comparison of AGPSO with the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and are better than most MOPSOs at solving MaOPs; however, the experimental results show that they are still worse than AGPSO. As observed from the second last row in Table 4, AGPSO obtains 22 best results out of the 27 test cases, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not the best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO excels with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages; AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE perform better than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results of AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 of the 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1-MaF2, AGPSO is the best at tackling these test problems among the MOEAs mentioned above. For MaF3, MOEA/DD is the best one, and AGPSO has a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all numbers of objectives, while MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank: it is better than VaEA, MOEA/DD, and SPEA2+SDE, while it is outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while it is worse than them in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to conclude that AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity update strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e-01 (4.27e-02) | 3.64e-01 (4.02e-02) - | 5.73e-01 (4.76e-02) - | 4.94e-01 (2.22e-02) - | 7.26e-01 (2.27e-02) -
WFG1 | 8 | 8.75e-01 (3.77e-02) | 5.78e-01 (3.32e-02) - | 7.75e-01 (3.51e-02) - | 3.48e-01 (9.75e-02) - | 8.36e-01 (1.00e-02) -
WFG1 | 10 | 9.45e-01 (3.19e-03) | 8.10e-01 (3.59e-02) - | 8.73e-01 (1.59e-02) - | 3.40e-01 (6.56e-02) - | 8.55e-01 (8.50e-03) -
WFG2 | 5 | 9.76e-01 (5.76e-03) | 9.62e-01 (7.36e-03) - | 9.63e-01 (6.36e-03) - | 9.70e-01 (1.14e-03) - | 9.60e-01 (3.83e-03) ~
WFG2 | 8 | 9.82e-01 (4.12e-03) | 9.75e-01 (6.91e-03) - | 9.01e-01 (1.65e-01) - | 9.42e-01 (2.37e-02) - | 9.76e-01 (3.56e-03) -
WFG2 | 10 | 9.93e-01 (1.99e-03) | 9.86e-01 (4.59e-03) - | 9.34e-01 (3.53e-02) - | 9.76e-01 (1.82e-02) - | 9.80e-01 (3.09e-03) -
WFG3 | 5 | 6.59e-01 (5.16e-03) | 5.71e-01 (1.73e-02) - | 6.34e-01 (6.13e-03) - | 5.69e-01 (1.07e-02) - | 5.77e-01 (2.54e-02) -
WFG3 | 8 | 6.78e-01 (6.20e-03) | 5.77e-01 (2.83e-02) - | 5.69e-01 (6.73e-02) - | 5.05e-01 (1.23e-02) - | 5.55e-01 (4.06e-02) -
WFG3 | 10 | 6.91e-01 (7.88e-03) | 5.63e-01 (3.05e-02) - | 6.20e-01 (1.76e-02) - | 4.98e-01 (1.55e-02) - | 5.42e-01 (3.89e-02) -
WFG4 | 5 | 7.74e-01 (5.34e-03) | 7.13e-01 (1.06e-02) - | 7.61e-01 (4.28e-03) - | 7.83e-01 (4.35e-03) + | 7.39e-01 (5.10e-03) -
WFG4 | 8 | 9.00e-01 (8.05e-03) | 8.05e-01 (9.20e-03) - | 7.89e-01 (2.07e-02) - | 6.81e-01 (2.59e-02) - | 7.85e-01 (1.07e-02) -
WFG4 | 10 | 9.37e-01 (6.64e-03) | 8.43e-01 (9.97e-03) - | 8.97e-01 (1.14e-02) - | 8.53e-01 (2.69e-02) - | 8.20e-01 (8.19e-03) -
WFG5 | 5 | 7.41e-01 (6.12e-03) | 6.97e-01 (7.19e-03) - | 7.34e-01 (3.91e-03) - | 7.41e-01 (3.39e-03) ~ | 7.08e-01 (6.05e-03) -
WFG5 | 8 | 8.61e-01 (4.35e-03) | 7.91e-01 (1.00e-02) - | 7.93e-01 (8.22e-03) - | 6.66e-01 (2.51e-02) - | 7.55e-01 (1.42e-02) -
WFG5 | 10 | 8.88e-01 (8.29e-03) | 8.25e-01 (4.03e-03) - | 8.69e-01 (3.20e-03) - | 8.03e-01 (1.80e-02) - | 7.98e-01 (9.63e-03) -
WFG6 | 5 | 7.50e-01 (8.96e-03) | 7.00e-01 (1.76e-02) - | 7.42e-01 (1.06e-02) - | 7.40e-01 (1.03e-02) - | 7.19e-01 (1.01e-02) -
WFG6 | 8 | 8.72e-01 (1.26e-02) | 8.18e-01 (9.27e-03) - | 8.16e-01 (1.16e-02) - | 7.23e-01 (2.91e-02) - | 7.79e-01 (1.13e-02) -
WFG6 | 10 | 9.09e-01 (1.36e-02) | 8.49e-01 (1.10e-02) - | 8.89e-01 (1.08e-02) - | 8.80e-01 (1.29e-02) - | 8.17e-01 (1.80e-02) -
WFG7 | 5 | 7.90e-01 (2.12e-03) | 7.44e-01 (9.10e-03) - | 7.90e-01 (2.04e-03) ~ | 7.85e-01 (3.80e-03) - | 7.62e-01 (4.15e-03) -
WFG7 | 8 | 9.20e-01 (3.39e-03) | 8.71e-01 (5.41e-03) - | 8.52e-01 (1.16e-02) - | 7.44e-01 (3.15e-02) - | 8.22e-01 (1.28e-02) -
WFG7 | 10 | 9.55e-01 (9.11e-03) | 9.08e-01 (5.61e-03) - | 9.39e-01 (2.22e-03) - | 9.22e-01 (1.50e-02) - | 8.66e-01 (6.63e-03) -
WFG8 | 5 | 6.63e-01 (3.75e-03) | 5.98e-01 (1.27e-02) - | 6.66e-01 (5.58e-03) + | 6.81e-01 (3.20e-03) + | 6.60e-01 (5.95e-03) -
WFG8 | 8 | 7.85e-01 (7.46e-03) | 6.25e-01 (2.83e-02) - | 6.61e-01 (1.85e-02) - | 6.72e-01 (2.19e-02) - | 7.48e-01 (1.19e-02) -
WFG8 | 10 | 8.36e-01 (1.30e-02) | 6.73e-01 (3.66e-02) - | 8.02e-01 (1.11e-02) - | 8.12e-01 (7.59e-03) - | 7.88e-01 (9.94e-03) -
WFG9 | 5 | 6.57e-01 (3.82e-03) | 6.43e-01 (1.42e-02) - | 6.71e-01 (4.34e-02) + | 6.55e-01 (4.42e-03) - | 6.96e-01 (8.52e-03) +
WFG9 | 8 | 7.30e-01 (9.96e-03) | 7.00e-01 (1.12e-02) - | 7.12e-01 (3.63e-02) - | 6.30e-01 (3.35e-02) - | 7.38e-01 (5.00e-02) +
WFG9 | 10 | 7.49e-01 (9.03e-03) | 7.20e-01 (2.35e-02) - | 7.69e-01 (5.70e-02) + | 6.74e-01 (2.56e-02) - | 7.75e-01 (6.87e-03) +
Best/all | | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
Worse/similar/better | | | 27-/0~/0+ | 23-/1~/3+ | 24-/1~/2+ | 23-/1~/3+


4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results in terms of HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to those results, we can conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compare our environmental selection with VaEA's on test problems with different PF shapes (MaF1-MaF7), as well as on problems that share PF shapes but have different characteristics, such as the difficulty of convergence and deceptive properties (WFG1-WFG9). Therefore, the angular-guided method embedded into PSO is an effective improvement for tackling MaOPs. The angular-guided method is adopted as a distribution indicator that distinguishes the similarity of particles, which is obtained by transforming the vector angle (a minimal sketch of this angle computation is given below). The traditional vector angle performs better on concave PFs but worse on convex or linear PFs, as reported for MOEA/C [52]. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs, and it does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we consider that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies the evolutionary operators to the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.

To visually support the abovementioned discussion and to show the distribution of solutions in high-dimensional objective spaces, some final solution sets with the median HV values from 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can find that our boundary is not found completely, but the approximate PF calculated by the proposed AGPSO is attached more closely to the real PF. This also supports the correctness of our analysis on environmental selection, i.e., our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e-02 (1.43e-04) | 1.11e-02 (2.73e-04) - | 5.66e-03 (2.72e-05) - | 2.96e-03 (8.34e-05) - | 1.29e-02 (1.00e-04) -
MaF1 | 8 | 4.23e-05 (1.80e-06) | 2.69e-05 (2.04e-06) - | 2.93e-05 (2.03e-06) - | 4.38e-06 (4.77e-07) - | 3.87e-05 (9.83e-07) -
MaF1 | 10 | 6.16e-07 (3.49e-08) | 3.60e-07 (2.21e-08) - | 3.39e-07 (7.07e-08) - | 3.45e-08 (2.21e-08) - | 5.30e-07 (2.16e-08) -
MaF2 | 5 | 2.64e-01 (1.18e-03) | 2.35e-01 (4.80e-03) - | 2.33e-01 (2.82e-03) - | 2.34e-01 (1.48e-03) - | 2.62e-01 (1.26e-03) -
MaF2 | 8 | 2.46e-01 (2.64e-03) | 2.00e-01 (7.59e-03) - | 1.71e-01 (1.82e-02) - | 1.88e-01 (1.39e-03) - | 2.36e-01 (3.06e-03) -
MaF2 | 10 | 2.45e-01 (2.08e-03) | 2.01e-01 (1.25e-02) - | 1.94e-01 (7.96e-03) - | 1.91e-01 (3.21e-03) - | 2.30e-01 (2.48e-03) -
MaF3 | 5 | 9.88e-01 (2.87e-03) | 9.97e-01 (1.13e-03) + | 9.93e-01 (2.82e-03) + | 9.98e-01 (9.54e-05) + | 9.92e-01 (2.12e-03) +
MaF3 | 8 | 9.95e-01 (2.11e-03) | 9.98e-01 (3.58e-04) + | 9.92e-01 (3.83e-03) - | 1.00e+00 (9.42e-07) + | 9.97e-01 (1.66e-03) +
MaF3 | 10 | 9.85e-01 (2.26e-03) | 9.99e-01 (9.36e-04) + | 9.70e-01 (5.68e-03) - | 1.00e+00 (3.96e-08) + | 9.99e-01 (4.94e-04) +
MaF4 | 5 | 1.29e-01 (1.22e-03) | 1.17e-01 (3.62e-03) - | 7.32e-02 (8.89e-03) - | 0.00e+00 (0.00e+00) - | 1.07e-01 (5.37e-03) -
MaF4 | 8 | 6.02e-03 (9.67e-05) | 2.44e-03 (3.28e-04) - | 1.32e-03 (8.49e-04) - | 0.00e+00 (0.00e+00) - | 3.62e-04 (7.80e-05) -
MaF4 | 10 | 5.66e-04 (1.89e-05) | 1.48e-04 (9.76e-06) - | 2.29e-04 (3.57e-05) - | 0.00e+00 (0.00e+00) - | 4.23e-06 (1.65e-06) -
MaF5 | 5 | 8.00e-01 (2.70e-03) | 7.92e-01 (1.85e-03) - | 8.13e-01 (4.51e-05) + | 7.73e-01 (3.13e-03) - | 7.84e-01 (4.38e-03) -
MaF5 | 8 | 9.32e-01 (1.01e-03) | 9.08e-01 (3.80e-03) - | 9.26e-01 (9.79e-05) - | 8.73e-01 (6.71e-03) - | 7.36e-01 (9.62e-02) -
MaF5 | 10 | 9.67e-01 (1.32e-03) | 9.43e-01 (6.87e-03) - | 9.70e-01 (4.61e-05) + | 9.28e-01 (8.28e-04) - | 7.12e-01 (1.18e-01) -
MaF6 | 5 | 1.30e-01 (6.22e-05) | 1.30e-01 (7.51e-05) + | 1.17e-01 (3.12e-03) - | 8.19e-02 (2.96e-02) - | 1.29e-01 (1.84e-04) -
MaF6 | 8 | 1.06e-01 (1.66e-05) | 1.06e-01 (2.20e-05) + | 9.18e-02 (1.21e-03) - | 2.48e-02 (1.29e-02) - | 8.81e-02 (2.26e-04) -
MaF6 | 10 | 9.22e-02 (1.18e-02) | 7.05e-02 (3.95e-02) - | 4.57e-02 (4.66e-02) - | 4.83e-02 (5.24e-02) - | 2.71e-02 (1.00e-01) -
MaF7 | 5 | 2.53e-01 (3.05e-02) | 3.03e-01 (3.50e-03) + | 2.79e-01 (7.42e-03) + | 1.19e-01 (2.76e-02) - | 3.31e-01 (1.76e-03) +
MaF7 | 8 | 1.94e-01 (2.59e-02) | 2.24e-01 (6.17e-03) + | 2.20e-01 (5.03e-02) + | 7.06e-04 (7.80e-04) - | 2.45e-01 (1.73e-02) +
MaF7 | 10 | 1.83e-01 (3.18e-02) | 1.91e-01 (1.19e-02) ~ | 2.27e-01 (2.01e-02) + | 9.05e-05 (4.81e-05) - | 6.56e-02 (1.05e-02) -
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
Worse/similar/better | | | 13-/1~/7+ | 15-/0~/6+ | 18-/0~/3+ | 16-/0~/5+

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. In order to examine the superiority of the density-based velocity update formula, its effect is compared with the traditional velocity update formula of [14] (the resulting variant is denoted AGPSO-I). Furthermore, to show that only embedding the local best into the traditional velocity update method

$$v_d(t) = w\,v_d(t) + c_1 r_{1d}\left(x^{d}_{\mathrm{pbest}} - x_d(t)\right) + c_2 r_{2d}\left(x^{d}_{\mathrm{gbest}} - x_d(t)\right) + c_3 r_{3d}\left(x^{d}_{\mathrm{lbest}} - x_d(t)\right) \qquad (12)$$

is not sufficient, its effect is also compared with the variant using equation (12), which is denoted as AGPSO-II.
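For illustration, the following Java sketch implements the extended velocity update of equation (12) for one particle; the random coefficients follow the parameter ranges listed in Table 1, while the array-based particle representation and the per-update sampling of w, c1, c2, and c3 are assumptions made for this example and are not taken from the original code.

import java.util.Random;

// Sketch of the velocity update in equation (12): the traditional PSO rule
// extended with a local-best (lbest) term, i.e., the AGPSO-II style variant.
public final class VelocityUpdate {

    private static final Random RND = new Random();

    static double[] update(double[] v, double[] x,
                           double[] pbest, double[] gbest, double[] lbest) {
        int dims = v.length;
        double[] newV = new double[dims];
        // Inertia weight and acceleration coefficients sampled per Table 1.
        double w  = uniform(0.1, 0.5);
        double c1 = uniform(1.5, 2.5);
        double c2 = uniform(1.5, 2.5);
        double c3 = uniform(1.5, 2.5);
        for (int d = 0; d < dims; d++) {
            double r1 = RND.nextDouble(), r2 = RND.nextDouble(), r3 = RND.nextDouble();
            newV[d] = w * v[d]
                    + c1 * r1 * (pbest[d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d])
                    + c3 * r3 * (lbest[d] - x[d]);
        }
        return newV;
    }

    private static double uniform(double lo, double hi) {
        return lo + (hi - lo) * RND.nextDouble();
    }
}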

[Figure 2: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective MaF1 problem, plotted as objective value against objective number. Panels (a)-(j): AGPSO, MaPSO, NMPSO, dMOPSO, θ-DEA, SMPSO, VaEA, SPEA2+SDE, MOEA/DD, and the true PF.]


Table 6 shows the comparisons of AGPSO and the two modified AGPSO variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second last row of Table 6, AGPSO obtains 34 best results out of the 48 test cases, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases; the novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy can improve AGPSO and make particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO and AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of this particle. It also confirms that the proposed formula can produce higher-quality particles.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e-01 (3.47e-02) | 7.48e-01 (3.41e-02) - | 7.51e-01 (3.88e-02) -
WFG1 | 8 | 8.85e-01 (2.54e-02) | 8.14e-01 (3.94e-02) - | 7.96e-01 (3.97e-02) -
WFG1 | 10 | 9.47e-01 (4.77e-03) | 9.49e-01 (2.59e-03) + | 9.45e-01 (3.49e-03) ~
WFG2 | 5 | 9.76e-01 (5.13e-03) | 9.76e-01 (2.81e-03) ~ | 9.78e-01 (5.19e-03) +
WFG2 | 8 | 9.87e-01 (2.65e-03) | 9.89e-01 (2.21e-03) + | 9.88e-01 (1.84e-03) ~
WFG2 | 10 | 9.92e-01 (3.30e-03) | 9.93e-01 (1.38e-03) + | 9.92e-01 (1.97e-03) ~
WFG3 | 5 | 6.59e-01 (5.02e-03) | 6.66e-01 (4.17e-03) + | 6.57e-01 (5.40e-03) ~
WFG3 | 8 | 6.79e-01 (5.20e-03) | 6.86e-01 (3.41e-03) + | 6.79e-01 (4.02e-03) ~
WFG3 | 10 | 6.92e-01 (7.15e-03) | 7.02e-01 (3.08e-03) + | 6.93e-01 (4.13e-03) ~
WFG4 | 5 | 7.76e-01 (5.69e-03) | 7.57e-01 (7.87e-03) - | 7.68e-01 (5.07e-03) -
WFG4 | 8 | 9.00e-01 (1.01e-02) | 8.85e-01 (8.62e-03) - | 8.94e-01 (8.43e-03) -
WFG4 | 10 | 9.39e-01 (7.42e-03) | 9.31e-01 (8.35e-03) - | 9.30e-01 (8.84e-03) -
WFG5 | 5 | 7.42e-01 (4.66e-03) | 7.34e-01 (7.45e-03) - | 7.39e-01 (7.73e-03) -
WFG5 | 8 | 8.59e-01 (4.74e-03) | 8.52e-01 (6.05e-03) - | 8.58e-01 (5.45e-03) ~
WFG5 | 10 | 8.91e-01 (5.28e-03) | 8.73e-01 (1.22e-02) - | 8.78e-01 (2.21e-02) -
WFG6 | 5 | 7.53e-01 (7.52e-03) | 6.92e-01 (2.21e-03) - | 7.52e-01 (3.85e-03) ~
WFG6 | 8 | 8.69e-01 (1.24e-02) | 8.10e-01 (3.63e-03) - | 8.83e-01 (5.64e-03) +
WFG6 | 10 | 9.12e-01 (1.22e-02) | 8.40e-01 (3.57e-03) - | 9.23e-01 (7.99e-03) +
WFG7 | 5 | 7.90e-01 (2.13e-03) | 7.81e-01 (3.32e-03) - | 7.85e-01 (2.78e-03) -
WFG7 | 8 | 9.21e-01 (2.89e-03) | 9.15e-01 (3.11e-03) - | 9.13e-01 (3.18e-03) -
WFG7 | 10 | 9.55e-01 (9.72e-03) | 9.55e-01 (2.26e-03) ~ | 9.44e-01 (1.59e-02) -
WFG8 | 5 | 6.61e-01 (5.62e-03) | 6.44e-01 (3.76e-03) - | 6.54e-01 (5.95e-03) -
WFG8 | 8 | 7.87e-01 (7.11e-03) | 7.80e-01 (4.75e-03) - | 7.81e-01 (5.89e-03) -
WFG8 | 10 | 8.41e-01 (5.48e-03) | 8.45e-01 (3.13e-03) + | 8.31e-01 (1.76e-02) -
WFG9 | 5 | 6.58e-01 (4.54e-03) | 6.53e-01 (3.73e-03) - | 6.53e-01 (2.99e-03) -
WFG9 | 8 | 7.29e-01 (9.55e-03) | 7.20e-01 (1.21e-02) - | 7.23e-01 (8.94e-03) -
WFG9 | 10 | 7.51e-01 (1.21e-02) | 7.35e-01 (1.18e-02) - | 7.36e-01 (1.04e-02) -
MaF1 | 5 | 1.30e-02 (1.29e-04) | 1.30e-02 (1.10e-04) ~ | 1.30e-02 (1.04e-04) ~
MaF1 | 8 | 4.23e-05 (1.45e-06) | 4.12e-05 (1.35e-06) - | 4.14e-05 (1.78e-06) -
MaF1 | 10 | 6.12e-07 (4.40e-08) | 6.07e-07 (2.30e-08) ~ | 6.10e-07 (2.83e-08) ~
MaF2 | 5 | 2.64e-01 (9.97e-04) | 2.62e-01 (1.08e-03) - | 2.61e-01 (1.10e-03) -
MaF2 | 8 | 2.46e-01 (1.64e-03) | 2.40e-01 (1.89e-03) - | 2.41e-01 (1.66e-03) -
MaF2 | 10 | 2.45e-01 (1.84e-03) | 2.39e-01 (2.75e-03) - | 2.41e-01 (2.19e-03) -
MaF3 | 5 | 9.90e-01 (4.96e-03) | 9.76e-01 (2.92e-02) - | 4.96e-01 (3.82e-01) -
MaF3 | 8 | 9.68e-01 (1.95e-03) | 9.36e-01 (9.22e-02) - | 3.73e-01 (3.08e-01) -
MaF3 | 10 | 9.92e-01 (2.55e-03) | 9.72e-01 (1.86e-02) - | 4.42e-01 (3.77e-01) -
MaF4 | 5 | 1.29e-01 (1.33e-03) | 1.08e-01 (1.44e-02) - | 1.18e-01 (2.08e-03) -
MaF4 | 8 | 6.03e-03 (1.88e-04) | 3.33e-03 (1.02e-03) - | 4.74e-03 (2.12e-04) -
MaF4 | 10 | 5.62e-04 (1.47e-05) | 2.58e-04 (9.13e-05) - | 3.65e-04 (2.55e-05) -
MaF5 | 5 | 8.01e-01 (1.81e-03) | 8.00e-01 (3.40e-03) ~ | 8.00e-01 (2.06e-03) ~
MaF5 | 8 | 9.32e-01 (1.97e-03) | 9.31e-01 (1.39e-03) - | 9.31e-01 (1.58e-03) -
MaF5 | 10 | 9.67e-01 (1.09e-03) | 9.66e-01 (7.51e-04) - | 9.66e-01 (1.18e-03) -
MaF6 | 5 | 1.30e-01 (5.82e-05) | 1.30e-01 (3.35e-05) - | 1.30e-01 (5.60e-05) -
MaF6 | 8 | 1.03e-01 (2.78e-05) | 4.51e-02 (4.39e-02) - | 6.64e-02 (5.81e-02) -
MaF6 | 10 | 9.49e-02 (1.27e-02) | 4.55e-02 (7.07e-02) - | 4.40e-02 (4.92e-02) -
MaF7 | 5 | 2.51e-01 (4.40e-02) | 3.09e-01 (2.58e-03) + | 2.49e-01 (3.73e-02) ~
MaF7 | 8 | 1.91e-01 (4.11e-02) | 2.54e-01 (4.64e-03) + | 2.22e-01 (1.68e-02) +
MaF7 | 10 | 1.78e-01 (3.53e-02) | 2.32e-01 (4.46e-03) + | 2.01e-01 (9.95e-03) +
Best/all | | 34/48 | 10/48 | 4/48
Worse/similar/better | | | 33-/5~/10+ | 31-/12~/5+

[Figure 3: The final solution sets achieved by the nine MOEAs/MOPSOs and the true PF on the 10-objective WFG3 problem, plotted as objective value against objective number. Panels (a)-(j): AGPSO, MaPSO, NMPSO, dMOPSO, θ-DEA, SPEA2+SDE, SMPSO, VaEA, MOEA/DD, and the true PF.]

[Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives, plotted as HV metric value against the number of generations.]

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2. Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m^2 N^2) to classify the population. In lines 8-11, it needs a time complexity of O(N). The loop for association also requires a time complexity of O(m^2 N^2), as in lines 12-15. In lines 17-20, it needs a time complexity of O(N^2). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m^2 N^2), O(N), O(N^2)} = O(m^2 N^2), which is competitive with the time complexities of most optimizers, e.g., [48].
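As a quick sanity check of the dominant term, instantiated with the largest experimental setting of Section 4.2 as an assumed example (not a measurement reported in the paper):

\[
\max\{O(mN),\, O(m^2N^2),\, O(N),\, O(N^2)\} = O(m^2N^2),
\qquad
m = 10,\ N = 275 \ \Rightarrow\ m^2N^2 = 100 \times 75{,}625 = 7{,}562{,}500
\]

basic selection operations per generation, up to constant factors.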

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses neighboring information (the local best) to explore the area around each particle and uses globally optimal information (the global best) to search globally for better-performing locations. This strategy improves the quality of the produced particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and the archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs, and this will be improved in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by their original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/DD [50]. The test problems are the WFG [23] and MaF [24] suites; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.


[35] Q Zhu Q Lin W Chen et al ldquoAn external archive-guidedmultiobjective particle swarm optimization algorithmrdquo IEEETransactions on Cybernetics vol 47 no 9 pp 2794ndash28082017

[36] W Hu G G Yen and G Luo ldquoMany-objective particleswarm optimization using two-stage strategy and parallel cellcoordinate systemrdquo IEEE Transactions on Cybernetics vol 47no 6 pp 1446ndash1459 2017

[37] M Laumanns L iele K Deb and E Zitzler ldquoCombiningconvergence and diversity in evolutionary multiobjectiveoptimizationrdquo Evolutionary Computation vol 10 no 3pp 263ndash282 2002

[38] Y Tian R Cheng X Zhang Y Su and Y Jin ldquoAstrengthened dominance relation considering convergenceand diversity for evolutionary many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 23no 2 pp 331ndash345 2019

[39] H Chen Y Tian W Pedrycz G Wu R Wang and L WangldquoHyperplane assisted evolutionary algorithm for many-ob-jective optimization problemsrdquo IEEE Transactions on Cy-bernetics pp 1ndash14 In press 2019

[40] K Deb and H Jain ldquoAn evolutionary many-objective opti-mization algorithm using reference-point-based non-dominated sorting approach Part I solving problems withbox constraintsrdquo IEEE Transactions on Evolutionary Com-putation vol 18 no 4 pp 577ndash601 2014

[41] X He Y Zhou Z Chen and Q Zhang ldquoEvolutionary many-objective optimization based on dynamical decompositionrdquoIEEE Transactions on Evolutionary Computation vol 23no 3 pp 361ndash375 2019

[42] F Gu and Y-M Cheung ldquoSelf-organizing map-based weightdesign for decomposition-based many-objective evolutionary

algorithmrdquo IEEE Transactions on Evolutionary Computationvol 22 no 2 pp 211ndash225 2018

[43] X Cai H Sun and Z Fan ldquoA diversity indicator based onreference vectors for many-objective optimizationrdquo Infor-mation Sciences vol 430-431 pp 467ndash486 2018

[44] T Pamulapati R Mallipeddi and P N Suganthan ldquoISDE+mdashan indicator for multi and many-objective optimiza-tionrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 346ndash352 2019

[45] Y N Sun G G Yen and Z Yi ldquoIGD indicator-basedevolutionary algorithm for many-objective optimizationproblemrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 173ndash187 2019

[46] Y Xiang Y Zhou Z Chen and J Zhang ldquoA many-objectiveparticle swarm optimizer with leaders selected from historicalsolutions by using scalar projectionsrdquo IEEE Transactions onCybernetics pp 1ndash14 In press 2019

[47] M Li S Yang and X Liu ldquoShift-based density estimation forPareto-based algorithms in many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 18no 3 pp 348ndash365 2014

[48] Y Xiang Y R Zhou and M Q Li ldquoA vector angle basedevolutionary algorithm for unconstrained many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 21 no 1 pp 131ndash152 2017

[49] K Li K Deb Q F Zhang and S Kwong ldquoAn evolutionarymany-objective optimization algorithm based on dominanceand decompositionrdquo IEEE Transactions on EvolutionaryComputation vol 19 no 5 pp 694ndash716 2015

[50] S Huband L Barone R while and P Hingston ldquoA scalablemulti-objective test problem tookitrdquo in Proceedings of the 3rdInternational Conference on Evolutionary Multi-CriterionOptimization pp 280ndash295vol 3410 of Lecture Notes inComputer Science Guanajuato Mexico March 2005

[51] Q Lin S Liu K Wong et al ldquoA clustering-based evolu-tionary algorithm for many-objective optimization prob-lemsrdquo IEEE Transactions on Evolutionary Computationvol 23 no 3 pp 391ndash405 In press 2018

[52] K Deb L iele M Laumanns and E Zitzler ldquoScalable testproblems for evolutionary multi-objective optimizationrdquo inEvolutionary Multi-Objective Optimization Ser AdvancedInformation and Knowledge Processing A Abraham L Jainand R Goldberg Eds pp 105ndash145 Springer Lon-donLondon UK 2005

18 Complexity

Page 9: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

In Tables 2-6, the symbols "−", "∼", and "+" indicate that the corresponding algorithm performs worse than, similarly to, and better than AGPSO, respectively.

It is found that AGPSO shows a clear advantage over NMPSO, MaPSO, dMOPSO, and SMPSO on five test problems: WFG1 with a convex and mixed PF, WFG3 with a linear and degenerate PF, and WFG4-WFG6 with concave PFs. Regarding WFG2, which has a disconnected and mixed PF, AGPSO performs better than the other PSOs except MaPSO: AGPSO is the best on WFG2 with 5 and 8 objectives, while MaPSO is the best on WFG2 with 10 objectives. For WFG7, with a concave PF and nonseparability of the position and distance parameters, AGPSO is worse than NMPSO with 5 objectives but better with 8 and 10 objectives. Regarding WFG8, where the distance-related parameters depend on the position-related parameters, AGPSO performs worse with 5 and 10 objectives and similarly with 8 objectives. For WFG9, with a multimodal and deceptive PF, AGPSO only performs best with 10 objectives and is worse with 5 and 8 objectives.

As observed from the one-to-one comparisons in the last row of Table 2, SMPSO and dMOPSO perform poorly in solving the WFG problems, and AGPSO shows a superior performance over these traditional PSOs on the WFG problems.

From the results, AGPSO is better than NMPSO and MaPSO in 19 and 25 out of 27 cases, respectively. Conversely, it is worse than NMPSO and MaPSO in 6 and 1 out of 27 comparisons, respectively. Therefore, it is reasonable to conclude that AGPSO shows a superior performance over NMPSO and MaPSO on most of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 3 provides the mean HV comparison results of AGPSO and four PSO algorithms (NMPSO, MaPSO, dMOPSO, and SMPSO) on MaF1-MaF7 with 5, 8, and 10 objectives. As observed from the second-to-last row in Table 3, AGPSO obtains 12 best results out of 21 test problems, while NMPSO performs best in 7 cases, MaPSO and SMPSO perform best in only 1 case each, and dMOPSO is not the best on any MaF test problem.

MaF1 is a modified inverted DTLZ1 [52], whose PF shape cannot be fitted by the distribution of the reference points. The performance of the reference-point-based PSO (dMOPSO) is therefore worse than that of the PSO algorithms that do not use reference points (AGPSO, NMPSO, and MaPSO), and AGPSO performs best in tackling the MaF1 test problem. MaF2 is obtained from DTLZ2 to increase the difficulty of convergence, as it requires all objectives to be optimized simultaneously to reach the true PF.

Table 2: Comparison results of AGPSO and four current MOPSOs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO  NMPSO  MaPSO  dMOPSO  SMPSO
WFG1   5   7.87e−01(4.27e−02)   5.97e−01(2.89e−02)−   3.34e−01(2.58e−02)−   2.98e−01(3.44e−03)−   3.00e−01(1.01e−03)−
WFG1   8   8.75e−01(3.77e−02)   7.00e−01(1.38e−01)−   2.74e−01(2.45e−02)−   2.35e−01(6.26e−03)−   2.50e−01(1.54e−03)−
WFG1  10   9.45e−01(3.19e−03)   8.07e−01(1.58e−01)−   2.53e−01(1.98e−02)−   2.13e−01(9.18e−03)−   2.30e−01(1.29e−03)−
WFG2   5   9.76e−01(5.76e−03)   9.69e−01(5.44e−03)−   9.70e−01(4.70e−03)−   8.94e−01(1.28e−02)−   8.85e−01(1.31e−02)−
WFG2   8   9.82e−01(4.12e−03)   9.83e−01(3.56e−03)+   9.84e−01(2.86e−03)+   8.92e−01(1.48e−02)−   8.48e−01(2.52e−02)−
WFG2  10   9.93e−01(1.99e−03)   9.91e−01(4.85e−03)∼   9.90e−01(1.48e−03)−   9.00e−01(1.76e−02)−   8.54e−01(2.55e−02)−
WFG3   5   6.59e−01(5.16e−03)   6.13e−01(1.94e−02)−   6.12e−01(1.64e−02)−   5.89e−01(1.05e−02)−   5.75e−01(7.32e−03)−
WFG3   8   6.78e−01(6.20e−03)   5.77e−01(1.52e−02)−   5.96e−01(1.88e−02)−   4.94e−01(4.58e−02)−   5.84e−01(1.51e−02)−
WFG3  10   6.91e−01(7.88e−03)   5.83e−01(2.73e−02)−   5.97e−01(2.58e−02)−   3.56e−01(9.58e−02)−   5.92e−01(1.52e−02)−
WFG4   5   7.74e−01(5.34e−03)   7.64e−01(6.09e−03)−   7.22e−01(2.96e−02)−   6.51e−01(1.08e−02)−   5.09e−01(1.73e−02)−
WFG4   8   9.00e−01(8.05e−03)   8.62e−01(1.21e−02)−   7.98e−01(1.76e−02)−   3.88e−01(7.86e−02)−   5.26e−01(2.13e−02)−
WFG4  10   9.37e−01(6.64e−03)   8.82e−01(8.54e−03)−   8.27e−01(1.32e−02)−   3.15e−01(1.34e−01)−   5.52e−01(2.19e−02)−
WFG5   5   7.41e−01(6.12e−03)   7.35e−01(3.73e−03)−   6.63e−01(1.18e−02)−   5.85e−01(1.53e−02)−   4.30e−01(1.18e−02)−
WFG5   8   8.61e−01(4.35e−03)   8.32e−01(6.58e−03)−   6.91e−01(1.12e−02)−   1.81e−01(8.43e−02)−   4.52e−01(1.08e−02)−
WFG5  10   8.88e−01(8.29e−03)   8.65e−01(7.92e−03)−   6.99e−01(1.76e−02)−   2.43e−02(1.54e−02)−   4.65e−01(1.24e−02)−
WFG6   5   7.50e−01(8.96e−03)   7.24e−01(3.96e−03)−   7.09e−01(4.50e−03)−   6.74e−01(7.93e−03)−   5.43e−01(1.66e−02)−
WFG6   8   8.72e−01(1.26e−02)   8.08e−01(5.70e−03)−   8.03e−01(2.66e−03)−   4.89e−01(4.87e−02)−   6.49e−01(8.38e−03)−
WFG6  10   9.09e−01(1.36e−02)   8.34e−01(2.03e−03)−   8.32e−01(1.80e−03)−   4.66e−01(2.96e−02)−   7.03e−01(1.37e−02)−
WFG7   5   7.90e−01(2.12e−03)   7.96e−01(2.98e−03)+   7.89e−01(4.61e−03)∼   5.39e−01(2.72e−02)−   4.44e−01(2.06e−02)−
WFG7   8   9.20e−01(3.39e−03)   9.04e−01(3.71e−03)−   8.73e−01(1.40e−02)−   2.25e−01(3.93e−02)−   4.73e−01(1.23e−02)−
WFG7  10   9.55e−01(9.11e−03)   9.36e−01(6.97e−03)−   9.17e−01(1.17e−02)−   2.07e−01(6.36e−02)−   5.14e−01(2.10e−02)−
WFG8   5   6.63e−01(3.75e−03)   6.73e−01(5.41e−03)+   6.29e−01(1.34e−02)−   4.27e−01(1.57e−02)−   3.72e−01(2.09e−02)−
WFG8   8   7.85e−01(7.46e−03)   7.97e−01(3.79e−02)∼   6.91e−01(2.13e−02)−   9.55e−02(3.20e−02)−   4.11e−01(9.29e−03)−
WFG8  10   8.36e−01(1.30e−02)   8.62e−01(4.12e−02)+   7.33e−01(2.04e−02)−   7.73e−02(1.27e−02)−   4.52e−01(1.90e−02)−
WFG9   5   6.57e−01(3.82e−03)   7.35e−01(7.32e−03)+   6.23e−01(7.01e−03)−   5.97e−01(1.99e−02)−   4.46e−01(1.61e−02)−
WFG9   8   7.30e−01(9.96e−03)   7.65e−01(6.80e−02)+   6.55e−01(1.17e−02)−   2.73e−01(6.06e−02)−   4.55e−01(2.39e−02)−
WFG9  10   7.49e−01(9.03e−03)   7.47e−01(6.72e−02)−   6.56e−01(1.30e−02)−   2.24e−01(6.96e−02)−   4.76e−01(1.23e−02)−
Best/all        20/27   6/27   1/27   0/27   0/27
Worse/similar/better        19−/2∼/6+   25−/1∼/1+   27−/0∼/0+   27−/0∼/0+


The performance of AGPSO is the best on MaF2 with all numbers of objectives. As for MaF3, with a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, containing a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than all the other PSOs listed in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6, with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, which has a disconnected PF, very well, and NMPSO performs best with all numbers of objectives.

As shown in the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails to provide sufficient selection pressure in high-dimensional objective spaces, while the purely decomposition-based dMOPSO cannot solve MaOPs very well because a finite, fixed set of reference points does not provide enough search coverage for high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 out of 21 cases each, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, but AGPSO is still better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows a better performance than NMPSO on the MaF test problems.
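The loss of selection pressure mentioned above can be illustrated with a small Monte Carlo experiment (an illustration only, not part of the reported study): as the number of objectives m grows, almost all randomly sampled objective vectors become mutually non-dominated, so dominance-based selection can no longer discriminate among particles. A minimal sketch in Python:

```python
import numpy as np

def nondominated_fraction(m, n=200, trials=20, seed=0):
    """Monte Carlo estimate of the fraction of mutually non-dominated points
    among n random objective vectors in [0, 1]^m (minimization).
    Illustrative experiment; the sample sizes here are arbitrary choices."""
    rng = np.random.default_rng(seed)
    fracs = []
    for _ in range(trials):
        F = rng.random((n, m))
        nd = 0
        for i in range(n):
            # point j dominates point i if it is no worse in every objective
            # and strictly better in at least one objective
            dominated = np.any(np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1))
            nd += not dominated
        fracs.append(nd / n)
    return float(np.mean(fracs))
```

For instance, nondominated_fraction(3) is typically a small fraction, whereas nondominated_fraction(10) is close to 1, which is consistent with the degradation of the dominance-based SMPSO reported above.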

4.3.2. Comparison Results with Four Current MOEAs
In the sections above, it has been experimentally shown that AGPSO performs better than the four existing MOPSOs on most test problems (MaF and WFG). However, since there are not many existing PSO algorithms designed for MaOPs, a comparison against PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, AGPSO is further compared with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values of AGPSO and the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four competitive MOEAs are specifically designed for MaOPs and solve MaOPs better than most MOPSOs; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row in Table 4, AGPSO obtains 22 best results out of 27 test problems, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, respectively, and VaEA and θ-DEA are not the best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and is slightly worse than MOEA/DD with 5 objectives. For WFG8, MOEA/DD has a slight advantage over AGPSO with 5 objectives, but AGPSO shows a clear advantage with 8 and 10 objectives.

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO  NMPSO  MaPSO  dMOPSO  SMPSO
MaF1   5   1.31e−02(1.43e−04)   1.27e−02(5.81e−05)−   1.15e−02(2.80e−04)−   1.06e−02(1.26e−04)−   6.83e−03(5.14e−04)−
MaF1   8   4.23e−05(1.80e−06)   3.64e−05(7.99e−07)−   2.79e−05(2.66e−06)−   1.30e−05(1.49e−05)−   1.21e−05(1.88e−06)−
MaF1  10   6.16e−07(3.49e−08)   4.75e−07(1.83e−08)−   3.83e−07(2.83e−08)−   8.82e−08(8.92e−08)−   1.46e−07(2.86e−08)−
MaF2   5   2.64e−01(1.18e−03)   2.63e−01(8.71e−04)−   2.38e−01(3.57e−03)−   2.50e−01(1.55e−03)−   2.12e−01(4.05e−03)−
MaF2   8   2.46e−01(2.64e−03)   2.36e−01(2.32e−03)−   2.09e−01(3.04e−03)−   2.02e−01(2.13e−03)−   1.74e−01(4.46e−03)−
MaF2  10   2.45e−01(2.08e−03)   2.23e−01(2.81e−03)−   2.06e−01(5.67e−03)−   2.05e−01(2.15e−03)−   1.88e−01(4.28e−03)−
MaF3   5   9.88e−01(2.87e−03)   9.71e−01(1.47e−02)−   9.26e−01(5.29e−02)−   9.38e−01(8.27e−03)−   7.34e−01(4.50e−01)−
MaF3   8   9.95e−01(2.11e−03)   9.99e−01(7.50e−04)+   9.84e−01(1.48e−02)−   9.62e−01(1.44e−02)−   0.00e+00(0.00e+00)−
MaF3  10   9.85e−01(2.26e−03)   1.00e+00(1.92e−04)+   9.95e−01(6.00e−03)∼   9.71e−01(9.43e−03)−   0.00e+00(0.00e+00)−
MaF4   5   1.29e−01(1.22e−03)   5.43e−02(3.25e−02)−   1.01e−01(1.07e−02)−   4.21e−02(4.98e−03)−   7.96e−02(3.93e−03)−
MaF4   8   6.02e−03(9.67e−05)   2.29e−03(7.77e−04)−   3.18e−03(6.84e−04)−   1.24e−04(7.69e−05)−   1.82e−03(3.39e−04)−
MaF4  10   5.66e−04(1.89e−05)   2.10e−04(1.26e−04)−   2.36e−04(6.20e−05)−   1.55e−06(8.26e−07)−   1.12e−04(2.32e−05)−
MaF5   5   8.00e−01(2.70e−03)   8.11e−01(1.33e−03)+   7.93e−01(4.06e−03)−   2.69e−01(1.30e−01)−   6.68e−01(2.99e−02)−
MaF5   8   9.32e−01(1.01e−03)   9.35e−01(1.40e−03)+   9.29e−01(4.41e−03)−   2.01e−01(6.84e−02)−   3.59e−01(1.30e−01)−
MaF5  10   9.67e−01(1.32e−03)   9.64e−01(1.61e−03)−   9.65e−01(1.90e−03)−   2.40e−01(3.18e−02)−   1.60e−01(1.00e−01)−
MaF6   5   1.30e−01(6.22e−05)   9.37e−02(4.55e−03)−   1.10e−01(2.78e−03)−   1.27e−01(3.33e−05)−   1.30e−01(1.40e−05)+
MaF6   8   1.06e−01(1.66e−05)   9.83e−02(1.15e−02)−   9.50e−02(2.86e−03)−   1.04e−01(3.24e−05)−   9.83e−02(1.26e−03)−
MaF6  10   9.22e−02(1.18e−02)   9.12e−02(0.00e+00)∼   9.24e−02(1.53e−03)∼   1.00e−01(5.94e−06)∼   8.06e−02(1.42e−02)−
MaF7   5   2.53e−01(3.05e−02)   3.29e−01(1.70e−03)+   2.95e−01(6.58e−03)+   2.26e−01(2.42e−03)−   2.37e−01(6.73e−03)−
MaF7   8   1.94e−01(2.59e−02)   2.89e−01(2.36e−03)+   2.26e−01(6.39e−03)+   2.88e−02(9.26e−02)−   6.78e−02(9.38e−02)−
MaF7  10   1.83e−01(3.18e−02)   2.64e−01(1.78e−03)+   2.05e−01(5.98e−03)+   6.37e−03(5.21e−04)−   5.57e−02(1.25e−01)−
Best/all        12/21   7/21   1/21   0/21   1/21
Worse/similar/better        14−/0∼/7+   16−/1∼/4+   20−/0∼/1+   20−/0∼/1+


As for WFG9, θ-DEA and SPEA2+SDE have more advantages, and AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most of the test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE only perform better than AGPSO in 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents a superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results between AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best among the MOEAs mentioned above at tackling these test problems. For MaF3, MOEA/DD is the best one, and AGPSO shows a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all numbers of objectives, while MOEA/DD performs poorly on the MaF4 test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE but is outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to conclude that AGPSO shows a better performance than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV.

Problem  Obj  AGPSO  VaEA  θ-DEA  MOEA/DD  SPEA2+SDE
WFG1   5   7.87e−01(4.27e−02)   3.64e−01(4.02e−02)−   5.73e−01(4.76e−02)−   4.94e−01(2.22e−02)−   7.26e−01(2.27e−02)−
WFG1   8   8.75e−01(3.77e−02)   5.78e−01(3.32e−02)−   7.75e−01(3.51e−02)−   3.48e−01(9.75e−02)−   8.36e−01(1.00e−02)−
WFG1  10   9.45e−01(3.19e−03)   8.10e−01(3.59e−02)−   8.73e−01(1.59e−02)−   3.40e−01(6.56e−02)−   8.55e−01(8.50e−03)−
WFG2   5   9.76e−01(5.76e−03)   9.62e−01(7.36e−03)−   9.63e−01(6.36e−03)−   9.70e−01(1.14e−03)−   9.60e−01(3.83e−03)∼
WFG2   8   9.82e−01(4.12e−03)   9.75e−01(6.91e−03)−   9.01e−01(1.65e−01)−   9.42e−01(2.37e−02)−   9.76e−01(3.56e−03)−
WFG2  10   9.93e−01(1.99e−03)   9.86e−01(4.59e−03)−   9.34e−01(3.53e−02)−   9.76e−01(1.82e−02)−   9.80e−01(3.09e−03)−
WFG3   5   6.59e−01(5.16e−03)   5.71e−01(1.73e−02)−   6.34e−01(6.13e−03)−   5.69e−01(1.07e−02)−   5.77e−01(2.54e−02)−
WFG3   8   6.78e−01(6.20e−03)   5.77e−01(2.83e−02)−   5.69e−01(6.73e−02)−   5.05e−01(1.23e−02)−   5.55e−01(4.06e−02)−
WFG3  10   6.91e−01(7.88e−03)   5.63e−01(3.05e−02)−   6.20e−01(1.76e−02)−   4.98e−01(1.55e−02)−   5.42e−01(3.89e−02)−
WFG4   5   7.74e−01(5.34e−03)   7.13e−01(1.06e−02)−   7.61e−01(4.28e−03)−   7.83e−01(4.35e−03)+   7.39e−01(5.10e−03)−
WFG4   8   9.00e−01(8.05e−03)   8.05e−01(9.20e−03)−   7.89e−01(2.07e−02)−   6.81e−01(2.59e−02)−   7.85e−01(1.07e−02)−
WFG4  10   9.37e−01(6.64e−03)   8.43e−01(9.97e−03)−   8.97e−01(1.14e−02)−   8.53e−01(2.69e−02)−   8.20e−01(8.19e−03)−
WFG5   5   7.41e−01(6.12e−03)   6.97e−01(7.19e−03)−   7.34e−01(3.91e−03)−   7.41e−01(3.39e−03)∼   7.08e−01(6.05e−03)−
WFG5   8   8.61e−01(4.35e−03)   7.91e−01(1.00e−02)−   7.93e−01(8.22e−03)−   6.66e−01(2.51e−02)−   7.55e−01(1.42e−02)−
WFG5  10   8.88e−01(8.29e−03)   8.25e−01(4.03e−03)−   8.69e−01(3.20e−03)−   8.03e−01(1.80e−02)−   7.98e−01(9.63e−03)−
WFG6   5   7.50e−01(8.96e−03)   7.00e−01(1.76e−02)−   7.42e−01(1.06e−02)−   7.40e−01(1.03e−02)−   7.19e−01(1.01e−02)−
WFG6   8   8.72e−01(1.26e−02)   8.18e−01(9.27e−03)−   8.16e−01(1.16e−02)−   7.23e−01(2.91e−02)−   7.79e−01(1.13e−02)−
WFG6  10   9.09e−01(1.36e−02)   8.49e−01(1.10e−02)−   8.89e−01(1.08e−02)−   8.80e−01(1.29e−02)−   8.17e−01(1.80e−02)−
WFG7   5   7.90e−01(2.12e−03)   7.44e−01(9.10e−03)−   7.90e−01(2.04e−03)∼   7.85e−01(3.80e−03)−   7.62e−01(4.15e−03)−
WFG7   8   9.20e−01(3.39e−03)   8.71e−01(5.41e−03)−   8.52e−01(1.16e−02)−   7.44e−01(3.15e−02)−   8.22e−01(1.28e−02)−
WFG7  10   9.55e−01(9.11e−03)   9.08e−01(5.61e−03)−   9.39e−01(2.22e−03)−   9.22e−01(1.50e−02)−   8.66e−01(6.63e−03)−
WFG8   5   6.63e−01(3.75e−03)   5.98e−01(1.27e−02)−   6.66e−01(5.58e−03)+   6.81e−01(3.20e−03)+   6.60e−01(5.95e−03)−
WFG8   8   7.85e−01(7.46e−03)   6.25e−01(2.83e−02)−   6.61e−01(1.85e−02)−   6.72e−01(2.19e−02)−   7.48e−01(1.19e−02)−
WFG8  10   8.36e−01(1.30e−02)   6.73e−01(3.66e−02)−   8.02e−01(1.11e−02)−   8.12e−01(7.59e−03)−   7.88e−01(9.94e−03)−
WFG9   5   6.57e−01(3.82e−03)   6.43e−01(1.42e−02)−   6.71e−01(4.34e−02)+   6.55e−01(4.42e−03)−   6.96e−01(8.52e−03)+
WFG9   8   7.30e−01(9.96e−03)   7.00e−01(1.12e−02)−   7.12e−01(3.63e−02)−   6.30e−01(3.35e−02)−   7.38e−01(5.00e−02)+
WFG9  10   7.49e−01(9.03e−03)   7.20e−01(2.35e−02)−   7.69e−01(5.70e−02)+   6.74e−01(2.56e−02)−   7.75e−01(6.87e−03)+
Best/all        22/27   0/27   0/27   2/27   3/27
Worse/similar/better        27−/0∼/0+   23−/1∼/3+   24−/1∼/2+   23−/1∼/3+


4.4. Further Discussion and Analysis on AGPSO
To further justify the advantage of AGPSO, it is particularly compared with a recently proposed angle-based evolutionary algorithm, VaEA. Their comparison results with HV on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can conclude that AGPSO shows a superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is only defeated by VaEA in 7 cases. We compare the performance of our algorithm with VaEA's environmental selection on test problems with different PF shapes (MaF1-MaF7), as well as on problems that share the same PF shape but differ in other characteristics, such as the difficulty of convergence and the deceptive PF property (WFG1-WFG9). Therefore, the angular-guided method embedded into PSOs is effective in tackling MaOPs. The angular-guided method is adopted as a distribution indicator, transformed from the vector angle, to distinguish the similarity of particles; a minimal sketch of this angle computation is given after this paragraph. As reported for MOEA/C [52], the traditional vector angle performs better on concave PFs but worse on convex or linear PFs. The angular-guided method improves this situation: it performs better than the traditional vector angle on convex or linear PFs, and it does not behave badly on concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO mainly considers diversity first and convergence second in the update of its external archive. On the other hand, AGPSO designs a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies an evolutionary operator on the external archive to further improve the performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.
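As a rough illustration of how such a vector angle can serve as a similarity (diversity) indicator, the following sketch computes the angle between two objective vectors after a translate-and-scale normalization. The normalization bounds and helper name are assumptions for illustration; the exact angular-guided transformation used by AGPSO is not reproduced here.

```python
import numpy as np

def vector_angle(f_a, f_b, ideal, nadir, eps=1e-12):
    """Angle (radians) between two objective vectors after normalization.

    A small angle means the two particles cover a similar search direction,
    so the angle can be used as a diversity/similarity indicator.
    The bounds (ideal, nadir) are assumed to be estimated from the archive.
    """
    a = (np.asarray(f_a, float) - ideal) / (np.asarray(nadir, float) - ideal + eps)
    b = (np.asarray(f_b, float) - ideal) / (np.asarray(nadir, float) - ideal + eps)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))
```

Archive members separated by a small angle are treated as crowding the same direction, which is how an angle-based indicator can trade off against convergence information during archive maintenance.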

To visually support the abovementioned discussion, to better show the performance, and to illustrate the solution distributions in high-dimensional objective spaces, some final solution sets with the median HV values from 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, the boundary of the PF is not found completely by AGPSO, but the approximate PF obtained by the proposed AGPSO lies more closely on the true PF. This also supports our analysis of the environmental selection, i.e., our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy
The abovementioned comparisons have fully demonstrated the superiority of AGPSO over the four compared MOPSOs and the four competitive MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem  Obj  AGPSO  VaEA  θ-DEA  MOEA/DD  SPEA2+SDE
MaF1   5   1.31e−02(1.43e−04)   1.11e−02(2.73e−04)−   5.66e−03(2.72e−05)−   2.96e−03(8.34e−05)−   1.29e−02(1.00e−04)−
MaF1   8   4.23e−05(1.80e−06)   2.69e−05(2.04e−06)−   2.93e−05(2.03e−06)−   4.38e−06(4.77e−07)−   3.87e−05(9.83e−07)−
MaF1  10   6.16e−07(3.49e−08)   3.60e−07(2.21e−08)−   3.39e−07(7.07e−08)−   3.45e−08(2.21e−08)−   5.30e−07(2.16e−08)−
MaF2   5   2.64e−01(1.18e−03)   2.35e−01(4.80e−03)−   2.33e−01(2.82e−03)−   2.34e−01(1.48e−03)−   2.62e−01(1.26e−03)−
MaF2   8   2.46e−01(2.64e−03)   2.00e−01(7.59e−03)−   1.71e−01(1.82e−02)−   1.88e−01(1.39e−03)−   2.36e−01(3.06e−03)−
MaF2  10   2.45e−01(2.08e−03)   2.01e−01(1.25e−02)−   1.94e−01(7.96e−03)−   1.91e−01(3.21e−03)−   2.30e−01(2.48e−03)−
MaF3   5   9.88e−01(2.87e−03)   9.97e−01(1.13e−03)+   9.93e−01(2.82e−03)+   9.98e−01(9.54e−05)+   9.92e−01(2.12e−03)+
MaF3   8   9.95e−01(2.11e−03)   9.98e−01(3.58e−04)+   9.92e−01(3.83e−03)−   1.00e+00(9.42e−07)+   9.97e−01(1.66e−03)+
MaF3  10   9.85e−01(2.26e−03)   9.99e−01(9.36e−04)+   9.70e−01(5.68e−03)−   1.00e+00(3.96e−08)+   9.99e−01(4.94e−04)+
MaF4   5   1.29e−01(1.22e−03)   1.17e−01(3.62e−03)−   7.32e−02(8.89e−03)−   0.00e+00(0.00e+00)−   1.07e−01(5.37e−03)−
MaF4   8   6.02e−03(9.67e−05)   2.44e−03(3.28e−04)−   1.32e−03(8.49e−04)−   0.00e+00(0.00e+00)−   3.62e−04(7.80e−05)−
MaF4  10   5.66e−04(1.89e−05)   1.48e−04(9.76e−06)−   2.29e−04(3.57e−05)−   0.00e+00(0.00e+00)−   4.23e−06(1.65e−06)−
MaF5   5   8.00e−01(2.70e−03)   7.92e−01(1.85e−03)−   8.13e−01(4.51e−05)+   7.73e−01(3.13e−03)−   7.84e−01(4.38e−03)−
MaF5   8   9.32e−01(1.01e−03)   9.08e−01(3.80e−03)−   9.26e−01(9.79e−05)−   8.73e−01(6.71e−03)−   7.36e−01(9.62e−02)−
MaF5  10   9.67e−01(1.32e−03)   9.43e−01(6.87e−03)−   9.70e−01(4.61e−05)+   9.28e−01(8.28e−04)−   7.12e−01(1.18e−01)−
MaF6   5   1.30e−01(6.22e−05)   1.30e−01(7.51e−05)+   1.17e−01(3.12e−03)−   8.19e−02(2.96e−02)−   1.29e−01(1.84e−04)−
MaF6   8   1.06e−01(1.66e−05)   1.06e−01(2.20e−05)+   9.18e−02(1.21e−03)−   2.48e−02(1.29e−02)−   8.81e−02(2.26e−04)−
MaF6  10   9.22e−02(1.18e−02)   7.05e−02(3.95e−02)−   4.57e−02(4.66e−02)−   4.83e−02(5.24e−02)−   2.71e−02(1.00e−01)−
MaF7   5   2.53e−01(3.05e−02)   3.03e−01(3.50e−03)+   2.79e−01(7.42e−03)+   1.19e−01(2.76e−02)−   3.31e−01(1.76e−03)+
MaF7   8   1.94e−01(2.59e−02)   2.24e−01(6.17e−03)+   2.20e−01(5.03e−02)+   7.06e−04(7.80e−04)−   2.45e−01(1.73e−02)+
MaF7  10   1.83e−01(3.18e−02)   1.91e−01(1.19e−02)∼   2.27e−01(2.01e−02)+   9.05e−05(4.81e−05)−   6.56e−02(1.05e−02)−
Best/all        11/21   2/21   3/21   3/21   2/21
Worse/similar/better        13−/1∼/7+   15−/0∼/6+   18−/0∼/3+   16−/0∼/5+


In this section, we make a deeper comparison of the velocity update formula. In order to prove the superiority of the density-based velocity update, its effect is compared with that of the traditional velocity update formula as in [14] (this variant is denoted as AGPSO-I). Furthermore, to show that only embedding the local best (lbest) into the traditional velocity update method, i.e.,

v_d(t + 1) = w v_d(t) + c_1 r_{1d} (x_d^{pbest} − x_d(t)) + c_2 r_{2d} (x_d^{gbest} − x_d(t)) + c_3 r_{3d} (x_d^{lbest} − x_d(t)),    (12)

is not sufficient, the effect of equation (12) is also compared; this variant is denoted as AGPSO-II (see the sketch below).
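A minimal sketch of the update rule in equation (12) is given below, assuming real-coded particles stored as NumPy arrays; the inertia and acceleration coefficients are illustrative placeholders, not the parameter settings used in the experiments.

```python
import numpy as np

def velocity_update_eq12(v, x, pbest, gbest, lbest,
                         w=0.4, c1=1.5, c2=1.5, c3=1.5, rng=None):
    """Per-dimension velocity update of equation (12) (the AGPSO-II variant):
    the classical pbest/gbest terms plus an extra local-best (lbest) term.
    All inputs are 1-D arrays of the same length (one entry per dimension)."""
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = (rng.random(x.shape) for _ in range(3))  # fresh random factors per term
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))
```

AGPSO-II plugs this update into the same framework as AGPSO, so the differences observed in Table 6 can be attributed to the velocity formula rather than to the archive update strategy.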

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on MaF1 with 10 objectives, with each panel plotting objective value against objective number: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/DD, and (j) true PF.


Table 6 shows the comparisons of AGPSO and two modified AGPSO variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases; the novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the region surrounding that particle. It also confirms that the proposed formula can produce higher-quality particles.

4.6. Computational Complexity Analysis on AGPSO
The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2.

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on WFG3 with 10 objectives, with each panel plotting objective value against objective number: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/DD, and (j) true PF.


Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives, showing the HV metric value versus the number of generations.

Table 6: Comparison results of AGPSO and its variants with different velocity update strategies using HV.

Problem  Obj  AGPSO  AGPSO-I  AGPSO-II
WFG1   5   7.91e−01(3.47e−02)   7.48e−01(3.41e−02)−   7.51e−01(3.88e−02)−
WFG1   8   8.85e−01(2.54e−02)   8.14e−01(3.94e−02)−   7.96e−01(3.97e−02)−
WFG1  10   9.47e−01(4.77e−03)   9.49e−01(2.59e−03)+   9.45e−01(3.49e−03)∼
WFG2   5   9.76e−01(5.13e−03)   9.76e−01(2.81e−03)∼   9.78e−01(5.19e−03)+
WFG2   8   9.87e−01(2.65e−03)   9.89e−01(2.21e−03)+   9.88e−01(1.84e−03)∼
WFG2  10   9.92e−01(3.30e−03)   9.93e−01(1.38e−03)+   9.92e−01(1.97e−03)∼
WFG3   5   6.59e−01(5.02e−03)   6.66e−01(4.17e−03)+   6.57e−01(5.40e−03)∼
WFG3   8   6.79e−01(5.20e−03)   6.86e−01(3.41e−03)+   6.79e−01(4.02e−03)∼
WFG3  10   6.92e−01(7.15e−03)   7.02e−01(3.08e−03)+   6.93e−01(4.13e−03)∼
WFG4   5   7.76e−01(5.69e−03)   7.57e−01(7.87e−03)−   7.68e−01(5.07e−03)−
WFG4   8   9.00e−01(1.01e−02)   8.85e−01(8.62e−03)−   8.94e−01(8.43e−03)−
WFG4  10   9.39e−01(7.42e−03)   9.31e−01(8.35e−03)−   9.30e−01(8.84e−03)−
WFG5   5   7.42e−01(4.66e−03)   7.34e−01(7.45e−03)−   7.39e−01(7.73e−03)−
WFG5   8   8.59e−01(4.74e−03)   8.52e−01(6.05e−03)−   8.58e−01(5.45e−03)∼
WFG5  10   8.91e−01(5.28e−03)   8.73e−01(1.22e−02)−   8.78e−01(2.21e−02)−
WFG6   5   7.53e−01(7.52e−03)   6.92e−01(2.21e−03)−   7.52e−01(3.85e−03)∼
WFG6   8   8.69e−01(1.24e−02)   8.10e−01(3.63e−03)−   8.83e−01(5.64e−03)+
WFG6  10   9.12e−01(1.22e−02)   8.40e−01(3.57e−03)−   9.23e−01(7.99e−03)+
WFG7   5   7.90e−01(2.13e−03)   7.81e−01(3.32e−03)−   7.85e−01(2.78e−03)−
WFG7   8   9.21e−01(2.89e−03)   9.15e−01(3.11e−03)−   9.13e−01(3.18e−03)−
WFG7  10   9.55e−01(9.72e−03)   9.55e−01(2.26e−03)∼   9.44e−01(1.59e−02)−
WFG8   5   6.61e−01(5.62e−03)   6.44e−01(3.76e−03)−   6.54e−01(5.95e−03)−
WFG8   8   7.87e−01(7.11e−03)   7.80e−01(4.75e−03)−   7.81e−01(5.89e−03)−
WFG8  10   8.41e−01(5.48e−03)   8.45e−01(3.13e−03)+   8.31e−01(1.76e−02)−


Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m²N²) to classify the population. In lines 8-11, it needs a time complexity of O(N). The loop for association in lines 12-15 also requires a time complexity of O(m²N²). In lines 17-20, it needs a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].
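The dominant cost comes from all-pairs computations over the union population. The sketch below is an illustration only and does not reproduce the classification and association steps of Algorithm 2; it merely shows the kind of pairwise angle matrix such a selection relies on, whose construction alone already needs on the order of N² inner products of length m per generation.

```python
import numpy as np

def pairwise_angle_matrix(F_norm, eps=1e-12):
    """Pairwise angles (radians) among already-normalized objective vectors.

    F_norm has shape (|U|, m), with |U| on the order of 2N. Building this
    matrix costs one (|U| x m) by (m x |U|) product; the classification and
    association loops of Algorithm 2 (not shown) give the O(m^2 N^2) bound
    stated in the text."""
    norms = np.linalg.norm(F_norm, axis=1, keepdims=True) + eps
    unit = F_norm / norms                      # project onto the unit sphere
    cos = np.clip(unit @ unit.T, -1.0, 1.0)    # cosine of each pairwise angle
    return np.arccos(cos)
```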

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses neighboring information (the local best) to explore the region around each particle and uses globally optimal information (the global best) to search for better-performing locations globally. This strategy improves the quality of the produced particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and the archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has a superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be further investigated. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs; this will also be addressed in future work.

Data Availability

The source code of AGPSO can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or obtained from the original authors, respectively. Most codes of the compared algorithms, such as VaEA [27], θ-DEA [48], and MOEA/DD [50], can also be found in our source code. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem  Obj  AGPSO  AGPSO-I  AGPSO-II
WFG9   5   6.58e−01(4.54e−03)   6.53e−01(3.73e−03)−   6.53e−01(2.99e−03)−
WFG9   8   7.29e−01(9.55e−03)   7.20e−01(1.21e−02)−   7.23e−01(8.94e−03)−
WFG9  10   7.51e−01(1.21e−02)   7.35e−01(1.18e−02)−   7.36e−01(1.04e−02)−
MaF1   5   1.30e−02(1.29e−04)   1.30e−02(1.10e−04)∼   1.30e−02(1.04e−04)∼
MaF1   8   4.23e−05(1.45e−06)   4.12e−05(1.35e−06)−   4.14e−05(1.78e−06)−
MaF1  10   6.12e−07(4.40e−08)   6.07e−07(2.30e−08)∼   6.10e−07(2.83e−08)∼
MaF2   5   2.64e−01(9.97e−04)   2.62e−01(1.08e−03)−   2.61e−01(1.10e−03)−
MaF2   8   2.46e−01(1.64e−03)   2.40e−01(1.89e−03)−   2.41e−01(1.66e−03)−
MaF2  10   2.45e−01(1.84e−03)   2.39e−01(2.75e−03)−   2.41e−01(2.19e−03)−
MaF3   5   9.90e−01(4.96e−03)   9.76e−01(2.92e−02)−   4.96e−01(3.82e−01)−
MaF3   8   9.68e−01(1.95e−03)   9.36e−01(9.22e−02)−   3.73e−01(3.08e−01)−
MaF3  10   9.92e−01(2.55e−03)   9.72e−01(1.86e−02)−   4.42e−01(3.77e−01)−
MaF4   5   1.29e−01(1.33e−03)   1.08e−01(1.44e−02)−   1.18e−01(2.08e−03)−
MaF4   8   6.03e−03(1.88e−04)   3.33e−03(1.02e−03)−   4.74e−03(2.12e−04)−
MaF4  10   5.62e−04(1.47e−05)   2.58e−04(9.13e−05)−   3.65e−04(2.55e−05)−
MaF5   5   8.01e−01(1.81e−03)   8.00e−01(3.40e−03)∼   8.00e−01(2.06e−03)∼
MaF5   8   9.32e−01(1.97e−03)   9.31e−01(1.39e−03)−   9.31e−01(1.58e−03)−
MaF5  10   9.67e−01(1.09e−03)   9.66e−01(7.51e−04)−   9.66e−01(1.18e−03)−
MaF6   5   1.30e−01(5.82e−05)   1.30e−01(3.35e−05)−   1.30e−01(5.60e−05)−
MaF6   8   1.03e−01(2.78e−05)   4.51e−02(4.39e−02)−   6.64e−02(5.81e−02)−
MaF6  10   9.49e−02(1.27e−02)   4.55e−02(7.07e−02)−   4.40e−02(4.92e−02)−
MaF7   5   2.51e−01(4.40e−02)   3.09e−01(2.58e−03)+   2.49e−01(3.73e−02)∼
MaF7   8   1.91e−01(4.11e−02)   2.54e−01(4.64e−03)+   2.22e−01(1.68e−02)+
MaF7  10   1.78e−01(3.53e−02)   2.32e−01(4.46e−03)+   2.01e−01(9.95e−03)+
Best/all        34/48   10/48   4/48
Worse/similar/better        33−/5∼/10+   31−/12∼/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1-18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225-255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81-86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019, In press.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49-68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297-309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439-452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ε-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505-519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66-73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69-76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47-77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732-744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836-1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180-1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219-2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32-46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710-14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148-3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316-319, Kitakyushu, Japan, August 2012.
[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213-215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16-37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67-81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169-190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29-43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24-35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794-2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446-1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263-282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331-345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, Part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577-601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361-375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211-225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467-486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346-352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173-187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1-14, 2019, In press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348-365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131-152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694-716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280-295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391-405, 2018, In press.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105-145, Springer London, London, UK, 2005.

18 Complexity

Page 10: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

simultaneously to reach the true PF. The performance of AGPSO is the best on MaF2 with all objective numbers. As for MaF3, which has a large number of local fronts and a convex PF, AGPSO and NMPSO are the best among the PSOs described above; AGPSO performs better than NMPSO with 5 objectives and worse with 8 and 10 objectives. MaF4, which contains a number of local Pareto-optimal fronts, is modified from DTLZ3 by inverting the PF shape, and AGPSO is better than any other PSO listed in Table 3. Regarding MaF5, modified from DTLZ4 with a badly scaled convex PF, AGPSO, NMPSO, and MaPSO all solve it well; the performance of NMPSO is slightly better than that of AGPSO and MaPSO with 5 and 8 objectives and worse than AGPSO with 10 objectives. On MaF6 with a degenerate PF, AGPSO performs best with 8 objectives, while SMPSO performs best with 5 objectives and MaPSO performs best with 10 objectives. Finally, AGPSO does not solve MaF7, which has a disconnected PF, very well, and NMPSO performs best with all objective numbers.

In the last row of Table 3, SMPSO and dMOPSO perform poorly in solving the MaF problems with 5 objectives, and especially in the higher-dimensional cases with 8 and 10 objectives. The main reason is that the Pareto dominance used by SMPSO fails in high-dimensional objective spaces, and the purely decomposition-based dMOPSO cannot solve MaOPs well because a finite, fixed set of reference points does not provide enough search pressure in high-dimensional spaces. AGPSO is better than dMOPSO and SMPSO in 20 and 21 out of 21 cases, respectively, which shows its superior performance over dMOPSO and SMPSO on the MaF problems. Regarding NMPSO, it is competitive with AGPSO, while AGPSO is better than NMPSO in 14 out of 21 cases. Therefore, AGPSO shows better performance than NMPSO on the MaF test problems.
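The loss of Pareto-based selection pressure mentioned above can be illustrated numerically: among uniformly random objective vectors, the fraction that is mutually non-dominated approaches one as the number of objectives grows, so dominance-based selection can no longer discriminate between solutions. The following small NumPy sketch (ours, for illustration only; it is not part of the paper's experiments) estimates that fraction.

```python
import numpy as np

def nondominated_fraction(n_points=200, n_obj=10, seed=0):
    """Estimate the fraction of mutually non-dominated points among
    uniformly random objective vectors (minimization)."""
    rng = np.random.default_rng(seed)
    f = rng.random((n_points, n_obj))
    nondom = 0
    for i in range(n_points):
        # point i is dominated if some other point is <= in every
        # objective and strictly < in at least one objective
        others = np.delete(f, i, axis=0)
        dominated = np.any(np.all(others <= f[i], axis=1) &
                           np.any(others < f[i], axis=1))
        nondom += not dominated
    return nondom / n_points

for m in (2, 3, 5, 8, 10):
    print(m, nondominated_fraction(n_obj=m))
```

Running this sketch shows the non-dominated fraction climbing rapidly toward 1.0 as the number of objectives increases, which is exactly the situation in which dominance-based MOPSOs such as SMPSO lose convergence pressure.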

4.3.2. Comparison Results with Four Current MOEAs. In the sections above, it was experimentally shown that AGPSO outperforms the four existing MOPSOs on most test problems (MaF and WFG). However, there are not many existing PSO algorithms designed for MaOPs, and a comparison with PSO algorithms alone does not fully reflect the superiority of AGPSO; thus, we further compared AGPSO with four current MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

(1) Comparison Results on WFG1-WFG9. The experimental data are listed in Table 4, which shows the HV metric values of AGPSO and the four competitive MOEAs (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE) on WFG1-WFG9 with 5, 8, and 10 objectives. These four MOEAs are specifically designed for MaOPs and outperform most MOPSOs on such problems; however, the experimental results show that they are still worse than AGPSO. As observed from the second-to-last row of Table 4, AGPSO obtains 22 best results out of 27 test problems, while MOEA/DD and SPEA2+SDE perform best in 2 and 3 of the 27 cases, and VaEA and θ-DEA are not best on any WFG test problem.

Regarding WFG4, AGPSO performs best with 8 and 10 objectives and is slightly worse than MOEA/DD with 5 objectives.

Table 3: Comparison results of AGPSO and four current MOPSOs on MaF1-MaF7 using HV ("-", "~", and "+" indicate that the result is significantly worse than, similar to, or better than that of AGPSO, respectively).

Problem | Obj | AGPSO | NMPSO | MaPSO | dMOPSO | SMPSO
MaF1 | 5 | 1.31e-02 (1.43e-04) | 1.27e-02 (5.81e-05) - | 1.15e-02 (2.80e-04) - | 1.06e-02 (1.26e-04) - | 6.83e-03 (5.14e-04) -
MaF1 | 8 | 4.23e-05 (1.80e-06) | 3.64e-05 (7.99e-07) - | 2.79e-05 (2.66e-06) - | 1.30e-05 (1.49e-05) - | 1.21e-05 (1.88e-06) -
MaF1 | 10 | 6.16e-07 (3.49e-08) | 4.75e-07 (1.83e-08) - | 3.83e-07 (2.83e-08) - | 8.82e-08 (8.92e-08) - | 1.46e-07 (2.86e-08) -
MaF2 | 5 | 2.64e-01 (1.18e-03) | 2.63e-01 (8.71e-04) - | 2.38e-01 (3.57e-03) - | 2.50e-01 (1.55e-03) - | 2.12e-01 (4.05e-03) -
MaF2 | 8 | 2.46e-01 (2.64e-03) | 2.36e-01 (2.32e-03) - | 2.09e-01 (3.04e-03) - | 2.02e-01 (2.13e-03) - | 1.74e-01 (4.46e-03) -
MaF2 | 10 | 2.45e-01 (2.08e-03) | 2.23e-01 (2.81e-03) - | 2.06e-01 (5.67e-03) - | 2.05e-01 (2.15e-03) - | 1.88e-01 (4.28e-03) -
MaF3 | 5 | 9.88e-01 (2.87e-03) | 9.71e-01 (1.47e-02) - | 9.26e-01 (5.29e-02) - | 9.38e-01 (8.27e-03) - | 7.34e-01 (4.50e-01) -
MaF3 | 8 | 9.95e-01 (2.11e-03) | 9.99e-01 (7.50e-04) + | 9.84e-01 (1.48e-02) - | 9.62e-01 (1.44e-02) - | 0.00e+00 (0.00e+00) -
MaF3 | 10 | 9.85e-01 (2.26e-03) | 1.00e+00 (1.92e-04) + | 9.95e-01 (6.00e-03) ~ | 9.71e-01 (9.43e-03) - | 0.00e+00 (0.00e+00) -
MaF4 | 5 | 1.29e-01 (1.22e-03) | 5.43e-02 (3.25e-02) - | 1.01e-01 (1.07e-02) - | 4.21e-02 (4.98e-03) - | 7.96e-02 (3.93e-03) -
MaF4 | 8 | 6.02e-03 (9.67e-05) | 2.29e-03 (7.77e-04) - | 3.18e-03 (6.84e-04) - | 1.24e-04 (7.69e-05) - | 1.82e-03 (3.39e-04) -
MaF4 | 10 | 5.66e-04 (1.89e-05) | 2.10e-04 (1.26e-04) - | 2.36e-04 (6.20e-05) - | 1.55e-06 (8.26e-07) - | 1.12e-04 (2.32e-05) -
MaF5 | 5 | 8.00e-01 (2.70e-03) | 8.11e-01 (1.33e-03) + | 7.93e-01 (4.06e-03) - | 2.69e-01 (1.30e-01) - | 6.68e-01 (2.99e-02) -
MaF5 | 8 | 9.32e-01 (1.01e-03) | 9.35e-01 (1.40e-03) + | 9.29e-01 (4.41e-03) - | 2.01e-01 (6.84e-02) - | 3.59e-01 (1.30e-01) -
MaF5 | 10 | 9.67e-01 (1.32e-03) | 9.64e-01 (1.61e-03) - | 9.65e-01 (1.90e-03) - | 2.40e-01 (3.18e-02) - | 1.60e-01 (1.00e-01) -
MaF6 | 5 | 1.30e-01 (6.22e-05) | 9.37e-02 (4.55e-03) - | 1.10e-01 (2.78e-03) - | 1.27e-01 (3.33e-05) - | 1.30e-01 (1.40e-05) +
MaF6 | 8 | 1.06e-01 (1.66e-05) | 9.83e-02 (1.15e-02) - | 9.50e-02 (2.86e-03) - | 1.04e-01 (3.24e-05) - | 9.83e-02 (1.26e-03) -
MaF6 | 10 | 9.22e-02 (1.18e-02) | 9.12e-02 (0.00e+00) ~ | 9.24e-02 (1.53e-03) ~ | 1.00e-01 (5.94e-06) ~ | 8.06e-02 (1.42e-02) -
MaF7 | 5 | 2.53e-01 (3.05e-02) | 3.29e-01 (1.70e-03) + | 2.95e-01 (6.58e-03) + | 2.26e-01 (2.42e-03) - | 2.37e-01 (6.73e-03) -
MaF7 | 8 | 1.94e-01 (2.59e-02) | 2.89e-01 (2.36e-03) + | 2.26e-01 (6.39e-03) + | 2.88e-02 (9.26e-02) - | 6.78e-02 (9.38e-02) -
MaF7 | 10 | 1.83e-01 (3.18e-02) | 2.64e-01 (1.78e-03) + | 2.05e-01 (5.98e-03) + | 6.37e-03 (5.21e-04) - | 5.57e-02 (1.25e-01) -
Best/all | | 12/21 | 7/21 | 1/21 | 0/21 | 1/21
Worse/similar/better | | | 14-/0~/7+ | 16-/1~/4+ | 20-/0~/1+ | 20-/0~/1+

For WFG8, MOEA/DD has a slight advantage with 5 objectives compared to AGPSO, but AGPSO excels with 8 and 10 objectives. As for WFG9, θ-DEA and SPEA2+SDE have more advantages; AGPSO is slightly worse than θ-DEA and SPEA2+SDE on this test problem but still better than VaEA and MOEA/DD. For the remaining comparisons on WFG1-WFG3 and WFG5-WFG7, AGPSO has the best performance on most test problems, which demonstrates the superiority of the proposed algorithm.

In the last row of Table 4, AGPSO performs better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE in 27, 23, 24, and 23 out of 27 cases, respectively, while θ-DEA, MOEA/DD, and SPEA2+SDE perform better than AGPSO in only 3, 2, and 3 out of 27 cases, respectively. In conclusion, AGPSO presents superior performance over the four competitive MOEAs in most cases of WFG1-WFG9.

(2) Comparison Results on MaF1-MaF7. Table 5 lists the comparative results of AGPSO and the four competitive MOEAs using HV on MaF1-MaF7 with 5, 8, and 10 objectives. The second-to-last row of Table 5 shows that AGPSO performs best on 11 out of 21 cases, while VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE perform best on 2, 3, 3, and 2 out of 21 cases, respectively. As a result, AGPSO has a clear advantage over these MOEAs.

Regarding MaF1 and MaF2, AGPSO is the best among the MOEAs mentioned above at tackling these test problems. For MaF3, MOEA/DD is the best one, and AGPSO shows a median performance among the compared MOEAs. Concerning MaF4 with a concave PF, AGPSO shows the best performance with all objective numbers, whereas MOEA/DD performs poorly on this test problem. For MaF5, AGPSO only obtains the third rank, as it is better than VaEA, MOEA/DD, and SPEA2+SDE while outperformed by θ-DEA with 5 and 10 objectives. For MaF6, AGPSO is the best with 10 objectives but worse than VaEA with 5 and 8 objectives. As for MaF7, AGPSO performs poorly, while SPEA2+SDE is the best with 5 and 8 objectives and θ-DEA is the best with 10 objectives. As observed from the one-to-one comparisons in the last row of Table 5, AGPSO is better than VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE on 13, 15, 18, and 16 out of 21 cases, respectively, while AGPSO is worse than them on 7, 6, 3, and 5 out of 21 cases, respectively.

In conclusion, it is reasonable to state that AGPSO performs better than the four compared MOEAs in most cases of MaF1-MaF7. This superiority of AGPSO is mainly produced by the novel density-based velocity strategy, which enhances the search in the areas around each particle, and by the angle-based external archive update strategy, which effectively enhances the performance of AGPSO on MaOPs.

Table 4: Comparison results of AGPSO and four current MOEAs on WFG1-WFG9 using HV ("-", "~", and "+" indicate that the result is significantly worse than, similar to, or better than that of AGPSO, respectively).

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
WFG1 | 5 | 7.87e-01 (4.27e-02) | 3.64e-01 (4.02e-02) - | 5.73e-01 (4.76e-02) - | 4.94e-01 (2.22e-02) - | 7.26e-01 (2.27e-02) -
WFG1 | 8 | 8.75e-01 (3.77e-02) | 5.78e-01 (3.32e-02) - | 7.75e-01 (3.51e-02) - | 3.48e-01 (9.75e-02) - | 8.36e-01 (1.00e-02) -
WFG1 | 10 | 9.45e-01 (3.19e-03) | 8.10e-01 (3.59e-02) - | 8.73e-01 (1.59e-02) - | 3.40e-01 (6.56e-02) - | 8.55e-01 (8.50e-03) -
WFG2 | 5 | 9.76e-01 (5.76e-03) | 9.62e-01 (7.36e-03) - | 9.63e-01 (6.36e-03) - | 9.70e-01 (1.14e-03) - | 9.60e-01 (3.83e-03) ~
WFG2 | 8 | 9.82e-01 (4.12e-03) | 9.75e-01 (6.91e-03) - | 9.01e-01 (1.65e-01) - | 9.42e-01 (2.37e-02) - | 9.76e-01 (3.56e-03) -
WFG2 | 10 | 9.93e-01 (1.99e-03) | 9.86e-01 (4.59e-03) - | 9.34e-01 (3.53e-02) - | 9.76e-01 (1.82e-02) - | 9.80e-01 (3.09e-03) -
WFG3 | 5 | 6.59e-01 (5.16e-03) | 5.71e-01 (1.73e-02) - | 6.34e-01 (6.13e-03) - | 5.69e-01 (1.07e-02) - | 5.77e-01 (2.54e-02) -
WFG3 | 8 | 6.78e-01 (6.20e-03) | 5.77e-01 (2.83e-02) - | 5.69e-01 (6.73e-02) - | 5.05e-01 (1.23e-02) - | 5.55e-01 (4.06e-02) -
WFG3 | 10 | 6.91e-01 (7.88e-03) | 5.63e-01 (3.05e-02) - | 6.20e-01 (1.76e-02) - | 4.98e-01 (1.55e-02) - | 5.42e-01 (3.89e-02) -
WFG4 | 5 | 7.74e-01 (5.34e-03) | 7.13e-01 (1.06e-02) - | 7.61e-01 (4.28e-03) - | 7.83e-01 (4.35e-03) + | 7.39e-01 (5.10e-03) -
WFG4 | 8 | 9.00e-01 (8.05e-03) | 8.05e-01 (9.20e-03) - | 7.89e-01 (2.07e-02) - | 6.81e-01 (2.59e-02) - | 7.85e-01 (1.07e-02) -
WFG4 | 10 | 9.37e-01 (6.64e-03) | 8.43e-01 (9.97e-03) - | 8.97e-01 (1.14e-02) - | 8.53e-01 (2.69e-02) - | 8.20e-01 (8.19e-03) -
WFG5 | 5 | 7.41e-01 (6.12e-03) | 6.97e-01 (7.19e-03) - | 7.34e-01 (3.91e-03) - | 7.41e-01 (3.39e-03) ~ | 7.08e-01 (6.05e-03) -
WFG5 | 8 | 8.61e-01 (4.35e-03) | 7.91e-01 (1.00e-02) - | 7.93e-01 (8.22e-03) - | 6.66e-01 (2.51e-02) - | 7.55e-01 (1.42e-02) -
WFG5 | 10 | 8.88e-01 (8.29e-03) | 8.25e-01 (4.03e-03) - | 8.69e-01 (3.20e-03) - | 8.03e-01 (1.80e-02) - | 7.98e-01 (9.63e-03) -
WFG6 | 5 | 7.50e-01 (8.96e-03) | 7.00e-01 (1.76e-02) - | 7.42e-01 (1.06e-02) - | 7.40e-01 (1.03e-02) - | 7.19e-01 (1.01e-02) -
WFG6 | 8 | 8.72e-01 (1.26e-02) | 8.18e-01 (9.27e-03) - | 8.16e-01 (1.16e-02) - | 7.23e-01 (2.91e-02) - | 7.79e-01 (1.13e-02) -
WFG6 | 10 | 9.09e-01 (1.36e-02) | 8.49e-01 (1.10e-02) - | 8.89e-01 (1.08e-02) - | 8.80e-01 (1.29e-02) - | 8.17e-01 (1.80e-02) -
WFG7 | 5 | 7.90e-01 (2.12e-03) | 7.44e-01 (9.10e-03) - | 7.90e-01 (2.04e-03) ~ | 7.85e-01 (3.80e-03) - | 7.62e-01 (4.15e-03) -
WFG7 | 8 | 9.20e-01 (3.39e-03) | 8.71e-01 (5.41e-03) - | 8.52e-01 (1.16e-02) - | 7.44e-01 (3.15e-02) - | 8.22e-01 (1.28e-02) -
WFG7 | 10 | 9.55e-01 (9.11e-03) | 9.08e-01 (5.61e-03) - | 9.39e-01 (2.22e-03) - | 9.22e-01 (1.50e-02) - | 8.66e-01 (6.63e-03) -
WFG8 | 5 | 6.63e-01 (3.75e-03) | 5.98e-01 (1.27e-02) - | 6.66e-01 (5.58e-03) + | 6.81e-01 (3.20e-03) + | 6.60e-01 (5.95e-03) -
WFG8 | 8 | 7.85e-01 (7.46e-03) | 6.25e-01 (2.83e-02) - | 6.61e-01 (1.85e-02) - | 6.72e-01 (2.19e-02) - | 7.48e-01 (1.19e-02) -
WFG8 | 10 | 8.36e-01 (1.30e-02) | 6.73e-01 (3.66e-02) - | 8.02e-01 (1.11e-02) - | 8.12e-01 (7.59e-03) - | 7.88e-01 (9.94e-03) -
WFG9 | 5 | 6.57e-01 (3.82e-03) | 6.43e-01 (1.42e-02) - | 6.71e-01 (4.34e-02) + | 6.55e-01 (4.42e-03) - | 6.96e-01 (8.52e-03) +
WFG9 | 8 | 7.30e-01 (9.96e-03) | 7.00e-01 (1.12e-02) - | 7.12e-01 (3.63e-02) - | 6.30e-01 (3.35e-02) - | 7.38e-01 (5.00e-02) +
WFG9 | 10 | 7.49e-01 (9.03e-03) | 7.20e-01 (2.35e-02) - | 7.69e-01 (5.70e-02) + | 6.74e-01 (2.56e-02) - | 7.75e-01 (6.87e-03) +
Best/all | | 22/27 | 0/27 | 0/27 | 2/27 | 3/27
Worse/similar/better | | | 27-/0~/0+ | 23-/1~/3+ | 24-/1~/2+ | 23-/1~/3+

4.4. Further Discussion and Analysis on AGPSO. To further justify the advantage of AGPSO, it was particularly compared to a recently proposed angle-based evolutionary algorithm, VaEA. Their HV comparison results on the MaF and WFG problems with 5 to 10 objectives are already contained in Tables 4 and 5. According to these results, we can also conclude that AGPSO shows superior performance over VaEA on most problems of MaF1-MaF7 and WFG1-WFG9, as AGPSO performs better than VaEA in 40 out of 48 comparisons and is defeated by VaEA in only 7 cases. We compared the environmental selection of our algorithm with that of VaEA on test problems with different PF shapes (MaF1-MaF7) and on problems that share the same PF shape but differ in characteristics such as convergence difficulty and deceptive PF properties (WFG1-WFG9). Therefore, the angular-guided method embedded into PSO is an effective improvement for tackling MaOPs. The angular-guided method is adopted as a distribution indicator, transformed from the vector angle, to distinguish the similarity of particles. The traditional vector angle performs better with concave PFs but worse with convex or linear PFs, as reported for MOEA/C [52]. The angular-guided method improves this situation: it performs better than the traditional vector angle with convex or linear PFs, and it does not behave badly with concave PFs. On the one hand, compared with the angle-based strategy designed in VaEA, AGPSO focuses on diversity first and convergence second in the update of its external archive. On the other hand, AGPSO uses a novel density-based velocity update strategy to produce new particles, which mainly enhances the search in the areas around each particle and speeds up convergence while balancing convergence and diversity concurrently. From these analyses, we think that our environmental selection is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. In addition, AGPSO also applies the evolutionary operator to the external archive to improve performance. Therefore, the proposed AGPSO can reasonably be regarded as an effective PSO for solving MaOPs.
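To make the vector-angle machinery discussed above concrete, the following Python sketch (ours, under our own normalization assumptions, not the authors' exact angular-guided transformation) computes the pairwise angles between normalized objective vectors; a particle whose smallest angle to its neighbours is large occupies a sparse search direction, which is the kind of information such an angle-based indicator exploits.

```python
import numpy as np

def pairwise_angles(F):
    """Angles (radians) between normalized objective vectors.

    F: (N, m) array of objective values; each row is one particle.
    Objectives are translated by the ideal point and scaled by the
    range, then the angle between every pair of vectors is computed.
    """
    ideal = F.min(axis=0)
    nadir = F.max(axis=0)
    Fn = (F - ideal) / np.maximum(nadir - ideal, 1e-12)    # normalization
    norms = np.linalg.norm(Fn, axis=1, keepdims=True)
    U = Fn / np.maximum(norms, 1e-12)                      # unit direction vectors
    cos = np.clip(U @ U.T, -1.0, 1.0)
    return np.arccos(cos)

# Usage: the minimum off-diagonal angle of a row measures how crowded
# that particle's search direction is; larger minima indicate better diversity.
```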

To visually support the above discussion, to better illustrate the performance, and to show the solution distributions in high-dimensional objective spaces, the final solution sets with the median HV values over 30 runs are plotted in Figures 2 and 3, respectively, for MaF1 with an inverted PF and for WFG3 with a linear and degenerate PF. In Figure 2, compared with the plot of VaEA, we can see that the PF boundary is not found completely by AGPSO, but the approximate PF obtained by the proposed AGPSO is attached more closely to the real PF. This also supports our analysis of the environmental selection, i.e., that it is more favorable for some linear PFs and for concave-shaped PFs whose boundaries are not difficult to find. The HV trend charts of WFG3, WFG4, and MaF2 with 10 objectives are shown in Figure 4. As observed from Figure 4, the convergence rate of AGPSO is faster than that of the other compared optimizers.

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives.

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV ("-", "~", and "+" indicate that the result is significantly worse than, similar to, or better than that of AGPSO, respectively).

Problem | Obj | AGPSO | VaEA | θ-DEA | MOEA/DD | SPEA2+SDE
MaF1 | 5 | 1.31e-02 (1.43e-04) | 1.11e-02 (2.73e-04) - | 5.66e-03 (2.72e-05) - | 2.96e-03 (8.34e-05) - | 1.29e-02 (1.00e-04) -
MaF1 | 8 | 4.23e-05 (1.80e-06) | 2.69e-05 (2.04e-06) - | 2.93e-05 (2.03e-06) - | 4.38e-06 (4.77e-07) - | 3.87e-05 (9.83e-07) -
MaF1 | 10 | 6.16e-07 (3.49e-08) | 3.60e-07 (2.21e-08) - | 3.39e-07 (7.07e-08) - | 3.45e-08 (2.21e-08) - | 5.30e-07 (2.16e-08) -
MaF2 | 5 | 2.64e-01 (1.18e-03) | 2.35e-01 (4.80e-03) - | 2.33e-01 (2.82e-03) - | 2.34e-01 (1.48e-03) - | 2.62e-01 (1.26e-03) -
MaF2 | 8 | 2.46e-01 (2.64e-03) | 2.00e-01 (7.59e-03) - | 1.71e-01 (1.82e-02) - | 1.88e-01 (1.39e-03) - | 2.36e-01 (3.06e-03) -
MaF2 | 10 | 2.45e-01 (2.08e-03) | 2.01e-01 (1.25e-02) - | 1.94e-01 (7.96e-03) - | 1.91e-01 (3.21e-03) - | 2.30e-01 (2.48e-03) -
MaF3 | 5 | 9.88e-01 (2.87e-03) | 9.97e-01 (1.13e-03) + | 9.93e-01 (2.82e-03) + | 9.98e-01 (9.54e-05) + | 9.92e-01 (2.12e-03) +
MaF3 | 8 | 9.95e-01 (2.11e-03) | 9.98e-01 (3.58e-04) + | 9.92e-01 (3.83e-03) - | 1.00e+00 (9.42e-07) + | 9.97e-01 (1.66e-03) +
MaF3 | 10 | 9.85e-01 (2.26e-03) | 9.99e-01 (9.36e-04) + | 9.70e-01 (5.68e-03) - | 1.00e+00 (3.96e-08) + | 9.99e-01 (4.94e-04) +
MaF4 | 5 | 1.29e-01 (1.22e-03) | 1.17e-01 (3.62e-03) - | 7.32e-02 (8.89e-03) - | 0.00e+00 (0.00e+00) - | 1.07e-01 (5.37e-03) -
MaF4 | 8 | 6.02e-03 (9.67e-05) | 2.44e-03 (3.28e-04) - | 1.32e-03 (8.49e-04) - | 0.00e+00 (0.00e+00) - | 3.62e-04 (7.80e-05) -
MaF4 | 10 | 5.66e-04 (1.89e-05) | 1.48e-04 (9.76e-06) - | 2.29e-04 (3.57e-05) - | 0.00e+00 (0.00e+00) - | 4.23e-06 (1.65e-06) -
MaF5 | 5 | 8.00e-01 (2.70e-03) | 7.92e-01 (1.85e-03) - | 8.13e-01 (4.51e-05) + | 7.73e-01 (3.13e-03) - | 7.84e-01 (4.38e-03) -
MaF5 | 8 | 9.32e-01 (1.01e-03) | 9.08e-01 (3.80e-03) - | 9.26e-01 (9.79e-05) - | 8.73e-01 (6.71e-03) - | 7.36e-01 (9.62e-02) -
MaF5 | 10 | 9.67e-01 (1.32e-03) | 9.43e-01 (6.87e-03) - | 9.70e-01 (4.61e-05) + | 9.28e-01 (8.28e-04) - | 7.12e-01 (1.18e-01) -
MaF6 | 5 | 1.30e-01 (6.22e-05) | 1.30e-01 (7.51e-05) + | 1.17e-01 (3.12e-03) - | 8.19e-02 (2.96e-02) - | 1.29e-01 (1.84e-04) -
MaF6 | 8 | 1.06e-01 (1.66e-05) | 1.06e-01 (2.20e-05) + | 9.18e-02 (1.21e-03) - | 2.48e-02 (1.29e-02) - | 8.81e-02 (2.26e-04) -
MaF6 | 10 | 9.22e-02 (1.18e-02) | 7.05e-02 (3.95e-02) - | 4.57e-02 (4.66e-02) - | 4.83e-02 (5.24e-02) - | 2.71e-02 (1.00e-01) -
MaF7 | 5 | 2.53e-01 (3.05e-02) | 3.03e-01 (3.50e-03) + | 2.79e-01 (7.42e-03) + | 1.19e-01 (2.76e-02) - | 3.31e-01 (1.76e-03) +
MaF7 | 8 | 1.94e-01 (2.59e-02) | 2.24e-01 (6.17e-03) + | 2.20e-01 (5.03e-02) + | 7.06e-04 (7.80e-04) - | 2.45e-01 (1.73e-02) +
MaF7 | 10 | 1.83e-01 (3.18e-02) | 1.91e-01 (1.19e-02) ~ | 2.27e-01 (2.01e-02) + | 9.05e-05 (4.81e-05) - | 6.56e-02 (1.05e-02) -
Best/all | | 11/21 | 2/21 | 3/21 | 3/21 | 2/21
Worse/similar/better | | | 13-/1~/7+ | 15-/0~/6+ | 18-/0~/3+ | 16-/0~/5+

In this section, we make a deeper comparison of the velocity update formula. To demonstrate the superiority of the density-based velocity update formula, its effect is compared with that of the traditional velocity update formula in [14] (denoted as AGPSO-I). Furthermore, to show that merely embedding the local best (lbest) term into the traditional velocity update method, i.e.,

v_d(t) = w·v_d(t) + c_1·r_1^d·(x_d^pbest − x_d(t)) + c_2·r_2^d·(x_d^gbest − x_d(t)) + c_3·r_3^d·(x_d^lbest − x_d(t)),   (12)

is not sufficient, its effect is also compared with the variant defined by equation (12), which is denoted as AGPSO-II.
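To make the difference between the two variants concrete, the following Python sketch (ours, not the authors' code; the coefficient values are placeholders, not the paper's parameter settings) implements the per-dimension update: calling it without lbest corresponds to the traditional two-term form used as AGPSO-I, while passing lbest adds the third term of equation (12), i.e., the AGPSO-II variant.

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, lbest=None,
                    w=0.4, c1=2.0, c2=2.0, c3=2.0, rng=None):
    """Per-dimension PSO velocity update.

    v, x, pbest, gbest, lbest: 1-D arrays of length D (decision dimensions).
    With lbest=None this is the traditional two-term update (AGPSO-I);
    with lbest given it matches equation (12) (AGPSO-II).
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    new_v = (w * v
             + c1 * rng.random(d) * (pbest - x)
             + c2 * rng.random(d) * (gbest - x))
    if lbest is not None:
        new_v += c3 * rng.random(d) * (lbest - x)
    return new_v
```

The point of the comparison in the text is that even this three-term form does not, by itself, control the search intensity around each particle, which is what the proposed density-based update is designed to do.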

[Figure 2 consists of parallel-coordinate plots (objective no. versus objective value) of the final solution sets on 10-objective MaF1 obtained by (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, and (i) MOEA/DD, together with (j) the true PF.]

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the MaF1 problem.

Table 6 shows the comparisons of AGPSO and the two modified AGPSOs with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy improves AGPSO and makes particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO with AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding only the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of that particle. It further confirms that the proposed formula can produce higher-quality particles.
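The "-/~/+" tallies reported in Tables 3-6 are typically produced by a pairwise statistical test on the per-run HV samples. The sketch below shows one common way to build such a tally; the Wilcoxon rank-sum test and the 0.05 significance level are our assumptions, since the text does not spell out the exact procedure.

```python
from scipy.stats import ranksums

def tally(hv_agpso, hv_other, alpha=0.05):
    """Count on how many problems a competitor is significantly worse (-),
    statistically similar (~), or significantly better (+) than AGPSO.

    hv_agpso, hv_other: dicts mapping problem name -> list of 30 HV values.
    """
    worse = similar = better = 0
    for problem, a in hv_agpso.items():
        b = hv_other[problem]
        _, p = ranksums(a, b)                     # two-sided rank-sum test
        if p >= alpha:
            similar += 1
        elif sum(a) / len(a) > sum(b) / len(b):   # HV: larger is better
            worse += 1
        else:
            better += 1
    return f"{worse}- {similar}~ {better}+"
```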

4.6. Computational Complexity Analysis on AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2.

[Figure 3 consists of parallel-coordinate plots (objective no. versus objective value) of the final solution sets on 10-objective WFG3 obtained by (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, and (i) MOEA/DD, together with (j) the true PF.]

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the WFG3 problem.

[Figure 4 consists of HV-versus-generation curves for AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2, each with 10 objectives.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV ("-", "~", and "+" indicate that the result is significantly worse than, similar to, or better than that of AGPSO, respectively).

Problem | Obj | AGPSO | AGPSO-I | AGPSO-II
WFG1 | 5 | 7.91e-01 (3.47e-02) | 7.48e-01 (3.41e-02) - | 7.51e-01 (3.88e-02) -
WFG1 | 8 | 8.85e-01 (2.54e-02) | 8.14e-01 (3.94e-02) - | 7.96e-01 (3.97e-02) -
WFG1 | 10 | 9.47e-01 (4.77e-03) | 9.49e-01 (2.59e-03) + | 9.45e-01 (3.49e-03) ~
WFG2 | 5 | 9.76e-01 (5.13e-03) | 9.76e-01 (2.81e-03) ~ | 9.78e-01 (5.19e-03) +
WFG2 | 8 | 9.87e-01 (2.65e-03) | 9.89e-01 (2.21e-03) + | 9.88e-01 (1.84e-03) ~
WFG2 | 10 | 9.92e-01 (3.30e-03) | 9.93e-01 (1.38e-03) + | 9.92e-01 (1.97e-03) ~
WFG3 | 5 | 6.59e-01 (5.02e-03) | 6.66e-01 (4.17e-03) + | 6.57e-01 (5.40e-03) ~
WFG3 | 8 | 6.79e-01 (5.20e-03) | 6.86e-01 (3.41e-03) + | 6.79e-01 (4.02e-03) ~
WFG3 | 10 | 6.92e-01 (7.15e-03) | 7.02e-01 (3.08e-03) + | 6.93e-01 (4.13e-03) ~
WFG4 | 5 | 7.76e-01 (5.69e-03) | 7.57e-01 (7.87e-03) - | 7.68e-01 (5.07e-03) -
WFG4 | 8 | 9.00e-01 (1.01e-02) | 8.85e-01 (8.62e-03) - | 8.94e-01 (8.43e-03) -
WFG4 | 10 | 9.39e-01 (7.42e-03) | 9.31e-01 (8.35e-03) - | 9.30e-01 (8.84e-03) -
WFG5 | 5 | 7.42e-01 (4.66e-03) | 7.34e-01 (7.45e-03) - | 7.39e-01 (7.73e-03) -
WFG5 | 8 | 8.59e-01 (4.74e-03) | 8.52e-01 (6.05e-03) - | 8.58e-01 (5.45e-03) ~
WFG5 | 10 | 8.91e-01 (5.28e-03) | 8.73e-01 (1.22e-02) - | 8.78e-01 (2.21e-02) -
WFG6 | 5 | 7.53e-01 (7.52e-03) | 6.92e-01 (2.21e-03) - | 7.52e-01 (3.85e-03) ~
WFG6 | 8 | 8.69e-01 (1.24e-02) | 8.10e-01 (3.63e-03) - | 8.83e-01 (5.64e-03) +
WFG6 | 10 | 9.12e-01 (1.22e-02) | 8.40e-01 (3.57e-03) - | 9.23e-01 (7.99e-03) +
WFG7 | 5 | 7.90e-01 (2.13e-03) | 7.81e-01 (3.32e-03) - | 7.85e-01 (2.78e-03) -
WFG7 | 8 | 9.21e-01 (2.89e-03) | 9.15e-01 (3.11e-03) - | 9.13e-01 (3.18e-03) -
WFG7 | 10 | 9.55e-01 (9.72e-03) | 9.55e-01 (2.26e-03) ~ | 9.44e-01 (1.59e-02) -
WFG8 | 5 | 6.61e-01 (5.62e-03) | 6.44e-01 (3.76e-03) - | 6.54e-01 (5.95e-03) -
WFG8 | 8 | 7.87e-01 (7.11e-03) | 7.80e-01 (4.75e-03) - | 7.81e-01 (5.89e-03) -
WFG8 | 10 | 8.41e-01 (5.48e-03) | 8.45e-01 (3.13e-03) + | 8.31e-01 (1.76e-02) -
WFG9 | 5 | 6.58e-01 (4.54e-03) | 6.53e-01 (3.73e-03) - | 6.53e-01 (2.99e-03) -
WFG9 | 8 | 7.29e-01 (9.55e-03) | 7.20e-01 (1.21e-02) - | 7.23e-01 (8.94e-03) -
WFG9 | 10 | 7.51e-01 (1.21e-02) | 7.35e-01 (1.18e-02) - | 7.36e-01 (1.04e-02) -
MaF1 | 5 | 1.30e-02 (1.29e-04) | 1.30e-02 (1.10e-04) ~ | 1.30e-02 (1.04e-04) ~
MaF1 | 8 | 4.23e-05 (1.45e-06) | 4.12e-05 (1.35e-06) - | 4.14e-05 (1.78e-06) -
MaF1 | 10 | 6.12e-07 (4.40e-08) | 6.07e-07 (2.30e-08) ~ | 6.10e-07 (2.83e-08) ~
MaF2 | 5 | 2.64e-01 (9.97e-04) | 2.62e-01 (1.08e-03) - | 2.61e-01 (1.10e-03) -
MaF2 | 8 | 2.46e-01 (1.64e-03) | 2.40e-01 (1.89e-03) - | 2.41e-01 (1.66e-03) -
MaF2 | 10 | 2.45e-01 (1.84e-03) | 2.39e-01 (2.75e-03) - | 2.41e-01 (2.19e-03) -
MaF3 | 5 | 9.90e-01 (4.96e-03) | 9.76e-01 (2.92e-02) - | 4.96e-01 (3.82e-01) -
MaF3 | 8 | 9.68e-01 (1.95e-03) | 9.36e-01 (9.22e-02) - | 3.73e-01 (3.08e-01) -
MaF3 | 10 | 9.92e-01 (2.55e-03) | 9.72e-01 (1.86e-02) - | 4.42e-01 (3.77e-01) -
MaF4 | 5 | 1.29e-01 (1.33e-03) | 1.08e-01 (1.44e-02) - | 1.18e-01 (2.08e-03) -
MaF4 | 8 | 6.03e-03 (1.88e-04) | 3.33e-03 (1.02e-03) - | 4.74e-03 (2.12e-04) -
MaF4 | 10 | 5.62e-04 (1.47e-05) | 2.58e-04 (9.13e-05) - | 3.65e-04 (2.55e-05) -
MaF5 | 5 | 8.01e-01 (1.81e-03) | 8.00e-01 (3.40e-03) ~ | 8.00e-01 (2.06e-03) ~
MaF5 | 8 | 9.32e-01 (1.97e-03) | 9.31e-01 (1.39e-03) - | 9.31e-01 (1.58e-03) -
MaF5 | 10 | 9.67e-01 (1.09e-03) | 9.66e-01 (7.51e-04) - | 9.66e-01 (1.18e-03) -
MaF6 | 5 | 1.30e-01 (5.82e-05) | 1.30e-01 (3.35e-05) - | 1.30e-01 (5.60e-05) -
MaF6 | 8 | 1.03e-01 (2.78e-05) | 4.51e-02 (4.39e-02) - | 6.64e-02 (5.81e-02) -
MaF6 | 10 | 9.49e-02 (1.27e-02) | 4.55e-02 (7.07e-02) - | 4.40e-02 (4.92e-02) -
MaF7 | 5 | 2.51e-01 (4.40e-02) | 3.09e-01 (2.58e-03) + | 2.49e-01 (3.73e-02) ~
MaF7 | 8 | 1.91e-01 (4.11e-02) | 2.54e-01 (4.64e-03) + | 2.22e-01 (1.68e-02) +
MaF7 | 10 | 1.78e-01 (3.53e-02) | 2.32e-01 (4.46e-03) + | 2.01e-01 (9.95e-03) +
Best/all | | 34/48 | 10/48 | 4/48
Worse/similar/better | | | 33-/5~/10+ | 31-/12~/5+

Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. Lines 3-7 require a time complexity of O(m^2N^2) to classify the population, and lines 8-11 need O(N). The association loop in lines 12-15 also requires O(m^2N^2), and lines 17-20 need O(N^2). In conclusion, the overall worst-case time complexity of one generation of AGPSO is max{O(mN), O(m^2N^2), O(N), O(N^2)} = O(m^2N^2), which is competitive with the time complexities of most optimizers, e.g., [48].
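To relate these bounds to concrete steps, the following structural sketch (our own simplification with hypothetical helper steps, not the paper's Algorithm 2) shows where the normalization, pairwise-angle, and greedy-selection costs arise.

```python
import numpy as np

def environmental_selection_sketch(P, A, N):
    """Structural sketch of an angle-based environmental selection.

    P, A: (N, m) objective matrices of the swarm and the archive.
    Only the cost structure mirrors the complexity analysis above;
    the selection rule itself is a placeholder.
    """
    U = np.vstack((P, A))                                    # union population: O(mN)
    ideal, nadir = U.min(axis=0), U.max(axis=0)
    Un = (U - ideal) / np.maximum(nadir - ideal, 1e-12)      # normalization: O(mN)

    # Pairwise angles between all 2N members dominate the cost: O(m N^2),
    # which is within the looser O(m^2 N^2) bound stated in the text.
    V = Un / np.maximum(np.linalg.norm(Un, axis=1, keepdims=True), 1e-12)
    angles = np.arccos(np.clip(V @ V.T, -1.0, 1.0))

    # Greedy association/selection of N members using the angle matrix: O(N^2).
    selected = [int(np.argmin(Un.sum(axis=1)))]              # best-converged seed
    while len(selected) < N:
        rest = [i for i in range(len(U)) if i not in selected]
        gaps = [angles[i, selected].min() for i in rest]     # diversity gap per candidate
        selected.append(rest[int(np.argmax(gaps))])
    return selected
```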

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The novel density-based velocity update strategy uses neighboring information (local best) to explore the regions around particles and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the particles produced by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and the archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current PSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than traditional angle-based selection on concave and linear PFs, it also sacrifices part of the performance on many-objective optimization problems with convex PFs; this will also be improved in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by their original authors, respectively. Also, implementations of most of the compared algorithms, such as VaEA [48], θ-DEA [30], and MOEA/DD [49], can be found in our source code. The test problems are WFG [50] and MaF [31]; WFG can be found at http://jmetal.sourceforge.net/problems.html, and MaF can be found at https://github.com/BIMK/PlatEMO.


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342, the Natural Science Foundation of Guangdong Province under Grant 2017A030313338, the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489, and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. García-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancún, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.
[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, in press, 2019.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, in press, 2019.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer London, London, UK, 2005.

Page 11: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

excellence in 8 and 10 objectives As for WFG9 θ minus DEA andSPEA2+ SDE have more advantages and AGPSO is worsethan θ minus DEA and SPEA2+SDE slightly on this test problembut still better than VaEA and MOEADD For the rest ofcomparisons on WFG1-WFG3 and WFG5-WFG7 AGPSOhas the best performance on most of test problems whichproves the superiority of the proposed algorithm

In the last row of Table 4 AGPSO performs better thanVaEA θ minus DEA MOEADD and SPEA2+SDE in 27 23 24and 23 out of 27 cases respectively while θ minus DEA MOEADD and SPEA2+SDE only have better performance thanAGPSO in 3 2 and 3 out of 27 cases respectively In con-clusion AGPSO is found to present a superior performanceover four competitiveMOEAs inmost cases forWFG1-WFG9

(2) Comparison Results on MaF1-MaF7 Table 5 lists com-parative results between AGPSO and four competitiveMOEAs using HV on MaF1-MaF7 with 5 8 and 10 ob-jectives e second last row of Table 5 shows that AGPSOperforms best on 11 out of 21 cases while VaEA θ minus DEAMOEADD and SPEA2+ SDE perform best on 2 3 3 2cases out of 21 cases respectively As a result AGPSO has aclear advantage over these MOEAs

Regarding MaF1-MaF2 AGPSO is the best to tacklethese test problems in MOEAs mentioned above For MaF3

MOEADD is the best one and AGPSO has a medianperformance among the compared MOEAs ConcerningMaF4 with concave PF AGPSO shows the best performancewith all objectives and MOEADD performs poorly in theMaF4 test problem For MaF5 AGPSO only obtained the3rd rank as it is better than VaEA MOEADD andSPEA2+ SDE while outperformed by θ minus DEA with 5 and10 objectives and θ minus DEA with 8 objectives For MaF6AGPSO is the best to tackle it with 10 objectives worse thanVaEA with 5 and 8 objectives As for MaF7 AGPSO per-forms poorly while SPEA2+ SDE is the best with 5 and 8objectives and θ minus DEA is the best with 10 objectives Asobserved from the one-to-one comparisons in the last row ofTable 5 AGPSO is better than VaEA θ minus DEA MOEADDand SPEA2+ SDE on 13 15 18 and 16 out of 21 casesrespectively while AGPSO is worse than VaEA θ minus DEAMOEADD and SPEA2 + SDE on 7 6 3 and 5 out of 21cases respectively

In conclusion it is reasonable to conclude that AGPSOshows a better performance over four compared MOEAs inmost cases of MaF1-MaF7 is superiority of AGPSO wasmainly produced by a novel density-based velocity strategywhich enhances the search for areas around each particleand the angle-based external archive update strategy whicheffectively enhances the performance of AGPSO on MaOPs

Table 4 Comparison of results of NMPSO and four current MOEAs on WF1-WFG9 using HV

Problem Obj AGPSO VaEA θ minus DEA MOEADD SPEA2+ SDE

WFG15 787eminus 01(427eminus 02) 364eminus 01(402eminus 02)minus 573eminus 01(476eminus 02)minus 494eminus 01(222eminus 02)minus 726eminus 01(227eminus 02)minus8 875eminus 01(377eminus 02) 578eminus 01(332eminus 02)minus 775eminus 01(351eminus 02)minus 348eminus 01(975eminus 02)minus 836eminus 01(100eminus 02)minus10 945eminus 01(319eminus 03) 810eminus 01(359eminus 02)minus 873eminus 01(159eminus 02)minus 340eminus 01(656eminus 02)minus 855eminus 01(850eminus 03)minus

WFG25 976eminus 01(576eminus 03) 962eminus 01(736eminus 03)minus 963eminus 01(636eminus 03)minus 970eminus 01(114eminus 03)minus 960eminus 01(383eminus 03)sim8 982eminus 01(412eminus 03) 975eminus 01(691eminus 03)minus 901eminus 01(165eminus 01)minus 942eminus 01(237eminus 02)minus 976eminus 01(356eminus 03)minus10 993eminus 01(199eminus 03) 986eminus 01(459eminus 03)minus 934eminus 01(353eminus 02)minus 976eminus 01(182eminus 02)minus 980eminus 01(309eminus 03)minus

WFG35 659eminus 01(516eminus 03) 571eminus 01(173eminus 02)minus 634eminus 01(613eminus 03)minus 569eminus 01(107eminus 02)minus 577eminus 01(254eminus 02)minus8 678eminus 01(620eminus 03) 577eminus 01(283eminus 02)minus 569eminus 01(673eminus 02)minus 505eminus 01(123eminus 02)minus 555eminus 01(406eminus 02)minus10 691eminus 01(788eminus 03) 563eminus 01(305eminus 02)minus 620eminus 01(176eminus 02)minus 498eminus 01(155eminus 02)minus 542eminus 01(389eminus 02)minus

WFG45 774eminus 01(534eminus 03) 713eminus 01(106eminus 02)minus 761eminus 01(428eminus 03)minus 783eminus 01(435eminus 03)+ 739eminus 01(510eminus 03)minus8 900eminus 01(805eminus 03) 805eminus 01(920eminus 03)minus 789eminus 01(207eminus 02)minus 681eminus 01(259eminus 02)minus 785eminus 01(107eminus 02)minus10 937eminus 01(664eminus 03) 843eminus 01(997eminus 03)minus 897eminus 01(114eminus 02)minus 853eminus 01(269eminus 02)minus 820eminus 01(819eminus 03)minus

WFG55 741eminus 01(612eminus 03) 697eminus 01(719eminus 03)minus 734eminus 01(391eminus 03)minus 741eminus 01(339eminus 03)sim 708eminus 01(605eminus 03)minus8 861eminus 01(435eminus 03) 791eminus 01(100eminus 02)minus 793eminus 01(822eminus 03)minus 666eminus 01(251eminus 02)minus 755eminus 01(142eminus 02)minus10 888eminus 01(829eminus 03) 825eminus 01(403eminus 03)minus 869eminus 01(320eminus 03)minus 803eminus 01(180eminus 02)minus 798eminus 01(963eminus 03)minus

WFG65 750eminus 01(896eminus 03) 700eminus 01(176eminus 02)minus 742eminus 01(106eminus 02)minus 740eminus 01(103eminus 02)minus 719eminus 01(101eminus 02)minus8 872eminus 01(126eminus 02) 818eminus 01(927eminus 03)minus 816eminus 01(116eminus 02)minus 723eminus 01(291eminus 02)minus 779eminus 01(113eminus 02)minus10 909eminus 01(136eminus 02) 849eminus 01(110eminus 02)minus 889eminus 01(108eminus 02)minus 880eminus 01(129eminus 02)minus 817eminus 01(180eminus 02)minus

WFG75 790eminus 01(212eminus 03) 744eminus 01(910eminus 03)minus 790eminus 01(204eminus 03)sim 785eminus 01(380eminus 03)minus 762eminus 01(415eminus 03)minus8 920eminus 01(339eminus 03) 871eminus 01(541eminus 03)minus 852eminus 01(116eminus 02)minus 744eminus 01(315eminus 02)minus 822eminus 01(128eminus 02)minus10 955eminus 01(911eminus 03) 908eminus 01(561eminus 03)minus 939eminus 01(222eminus 03)minus 922eminus 01(150eminus 02)minus 866eminus 01(663eminus 03)minus

WFG85 663eminus 01(375eminus 03) 598eminus 01(127eminus 02)minus 666eminus 01(558eminus 03)+ 681eminus 01(320eminus 03)+ 660eminus 01(595eminus 03)minus8 785eminus 01(746eminus 03) 625eminus 01(283eminus 02)minus 661eminus 01(185eminus 02)minus 672eminus 01(219eminus 02)minus 748eminus 01(119eminus 02)minus10 836eminus 01(130eminus 02) 673eminus 01(366eminus 02)minus 802eminus 01(111eminus 02)minus 812eminus 01(759eminus 03)minus 788eminus 01(994eminus 03)minus

WFG95 657eminus 01(382eminus 03) 643eminus 01(142eminus 02)minus 671eminus 01(434eminus 02)+ 655eminus 01(442eminus 03)minus 696eminus 01(852eminus 03)+8 730eminus 01(996eminus 03) 700eminus 01(112eminus 02)minus 712eminus 01(363eminus 02)minus 630eminus 01(335eminus 02)minus 738eminus 01(500eminus 02)+10 749eminus 01(903eminus 03) 720eminus 01(235eminus 02)minus 769eminus 01(570eminus 02)+ 674eminus 01(256eminus 02)minus 775eminus 01(687eminus 03)+

Bestall 2227 027 027 227 327Worse

similarbetter 27minus0sim0+ 23minus1sim3+ 24minus1sim2+ 23minus1sim3+

Complexity 11

44 Further Discussion and Analysis on AGPSO To furtherjustify the advantage of AGPSO it was particularly com-pared to a recently proposed angle-based evolutionary al-gorithm VaEA eir comparison results with HV on MaFand WFG problems on 5 to 10 objectives are already con-tained in Tables 3 and 5 According the results in the Tables 3and 5 we can also conclude that AGPSO showed a superiorperformance over VaEA in most problems of MaF1-MaF7andWFG1-WFG9 as AGPSO performs better than VaEA in40 out of 48 comparisons and is only defeated by VaEA in 7cases We compare the performance of our algorithm withVaEArsquos environment selection on different shapes of PF testproblems (MaF1-MaF7)ey have the same shape of PF buthave some different characteristics such as the difficulty ofconvergence and the deceptive PF property (WFG1-WFG9)erefore the angular-guided-based method embedded intoPSOs is effectively improved in tackling MaOPs e an-gular-guided-based method is adopted as a distributed in-dicator to distinguish the similarities of particles which istransformed by the vector anglee traditional vector angleperforms better with concave but worse with convex orlinear PFs based on MOEAC [52] erefore the angular-guided-based method improves this situation the angular-guided-based method performs better than the traditionalvector angle with convex or linear PFs and it will not behavebadly with concave PFs On the one hand compared withthe angle-based strategy designed in VaEA AGPSO mainlyfocuses on the diversity first in the update of its externalarchive and convergence second On the other handAGPSO designed a novel density-based velocity updatestrategy to produce new particles which mainly enhances

the search for areas around each particle and speeds up theconvergence speed while balancing convergence and di-versity concurrently From these analyses we think that ourenvironment selection is more favorable for some linear PFsand concave-shaped PFs with boundaries that are not dif-ficult to find In addition AGPSO also added the evolu-tionary operator on the external archive to improve theperformance erefore our proposed AGPSO is reasonableto be regarded as an effective PSO for solving MaOPs

To visually show and support the abovementioned dis-cussion results to better visualize the performance and toshow the solutions distribution in high-dimensional objectivespaces some final solution sets with the median HV valuesfrom 30 runs were plotted in Figures 2 and 3 respectively forMaF1 with an inverted PF and for WFG3 with a linear anddegenerate PF In Figure 2 comparedwith the graph of VaEAwe can find that our boundary cannot be found completelybut the approximate PF we calculated through our proposedAGPSO PF is more closely attached to the real PF is alsoproves the correctness of our analysis on environmentalselection ie our environment selection is more favorablefor some linear PFs and concave-shaped PFs withboundaries that are not difficult to finde HV trend chartof WFG3 WFG4 and MaF2 with 10 objectives is shown inFigure 4 As observed from Figure 4 the convergence rateof AGPSO is faster than that of the other four optimizers

4.5. Further Discussion and Analysis on the Velocity Update Strategy. The abovementioned comparisons have fully demonstrated the superiority of AGPSO over four MOPSOs and four

Table 5: Comparison results of AGPSO and four current MOEAs on MaF1-MaF7 using HV.

Problem   Obj   AGPSO                 VaEA                     θ-DEA                    MOEA/D-DD                SPEA2+SDE
MaF1      5     1.31e−02 (1.43e−04)   1.11e−02 (2.73e−04) −    5.66e−03 (2.72e−05) −    2.96e−03 (8.34e−05) −    1.29e−02 (1.00e−04) −
MaF1      8     4.23e−05 (1.80e−06)   2.69e−05 (2.04e−06) −    2.93e−05 (2.03e−06) −    4.38e−06 (4.77e−07) −    3.87e−05 (9.83e−07) −
MaF1      10    6.16e−07 (3.49e−08)   3.60e−07 (2.21e−08) −    3.39e−07 (7.07e−08) −    3.45e−08 (2.21e−08) −    5.30e−07 (2.16e−08) −
MaF2      5     2.64e−01 (1.18e−03)   2.35e−01 (4.80e−03) −    2.33e−01 (2.82e−03) −    2.34e−01 (1.48e−03) −    2.62e−01 (1.26e−03) −
MaF2      8     2.46e−01 (2.64e−03)   2.00e−01 (7.59e−03) −    1.71e−01 (1.82e−02) −    1.88e−01 (1.39e−03) −    2.36e−01 (3.06e−03) −
MaF2      10    2.45e−01 (2.08e−03)   2.01e−01 (1.25e−02) −    1.94e−01 (7.96e−03) −    1.91e−01 (3.21e−03) −    2.30e−01 (2.48e−03) −
MaF3      5     9.88e−01 (2.87e−03)   9.97e−01 (1.13e−03) +    9.93e−01 (2.82e−03) +    9.98e−01 (9.54e−05) +    9.92e−01 (2.12e−03) +
MaF3      8     9.95e−01 (2.11e−03)   9.98e−01 (3.58e−04) +    9.92e−01 (3.83e−03) −    1.00e+00 (9.42e−07) +    9.97e−01 (1.66e−03) +
MaF3      10    9.85e−01 (2.26e−03)   9.99e−01 (9.36e−04) +    9.70e−01 (5.68e−03) −    1.00e+00 (3.96e−08) +    9.99e−01 (4.94e−04) +
MaF4      5     1.29e−01 (1.22e−03)   1.17e−01 (3.62e−03) −    7.32e−02 (8.89e−03) −    0.00e+00 (0.00e+00) −    1.07e−01 (5.37e−03) −
MaF4      8     6.02e−03 (9.67e−05)   2.44e−03 (3.28e−04) −    1.32e−03 (8.49e−04) −    0.00e+00 (0.00e+00) −    3.62e−04 (7.80e−05) −
MaF4      10    5.66e−04 (1.89e−05)   1.48e−04 (9.76e−06) −    2.29e−04 (3.57e−05) −    0.00e+00 (0.00e+00) −    4.23e−06 (1.65e−06) −
MaF5      5     8.00e−01 (2.70e−03)   7.92e−01 (1.85e−03) −    8.13e−01 (4.51e−05) +    7.73e−01 (3.13e−03) −    7.84e−01 (4.38e−03) −
MaF5      8     9.32e−01 (1.01e−03)   9.08e−01 (3.80e−03) −    9.26e−01 (9.79e−05) −    8.73e−01 (6.71e−03) −    7.36e−01 (9.62e−02) −
MaF5      10    9.67e−01 (1.32e−03)   9.43e−01 (6.87e−03) −    9.70e−01 (4.61e−05) +    9.28e−01 (8.28e−04) −    7.12e−01 (1.18e−01) −
MaF6      5     1.30e−01 (6.22e−05)   1.30e−01 (7.51e−05) +    1.17e−01 (3.12e−03) −    8.19e−02 (2.96e−02) −    1.29e−01 (1.84e−04) −
MaF6      8     1.06e−01 (1.66e−05)   1.06e−01 (2.20e−05) +    9.18e−02 (1.21e−03) −    2.48e−02 (1.29e−02) −    8.81e−02 (2.26e−04) −
MaF6      10    9.22e−02 (1.18e−02)   7.05e−02 (3.95e−02) −    4.57e−02 (4.66e−02) −    4.83e−02 (5.24e−02) −    2.71e−02 (1.00e−01) −
MaF7      5     2.53e−01 (3.05e−02)   3.03e−01 (3.50e−03) +    2.79e−01 (7.42e−03) +    1.19e−01 (2.76e−02) −    3.31e−01 (1.76e−03) +
MaF7      8     1.94e−01 (2.59e−02)   2.24e−01 (6.17e−03) +    2.20e−01 (5.03e−02) +    7.06e−04 (7.80e−04) −    2.45e−01 (1.73e−02) +
MaF7      10    1.83e−01 (3.18e−02)   1.91e−01 (1.19e−02) ∼    2.27e−01 (2.01e−02) +    9.05e−05 (4.81e−05) −    6.56e−02 (1.05e−02) −

Best/all        11/21                 2/21                     3/21                     3/21                     2/21

Worse/similar/better                  13−/1∼/7+                15−/0∼/6+                18−/0∼/3+                16−/0∼/5+


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. To demonstrate the superiority of the proposed density-based velocity update formula, its effect is compared with that of the traditional velocity update formula as in [14] (denoted as AGPSO-I). Furthermore, to show that only embedding the local best into the traditional velocity update method, i.e.,

$$
v_d(t) = w_d v_d(t) + c_1 r_{1d}\left(x_d^{pbest} - x_d(t)\right) + c_2 r_{2d}\left(x_d^{gbest} - x_d(t)\right) + c_3 r_{3d}\left(x_d^{lbest} - x_d(t)\right), \qquad (12)
$$

is not sufficient, the effect is also compared with that of equation (12), which is denoted as AGPSO-II.
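To make equation (12) concrete, the short sketch below applies the three-term attraction update to one particle; the inertia weight and acceleration coefficients shown are placeholder values for illustration only, not the parameter settings used in the reported experiments.

```python
import numpy as np

def velocity_update(v, x, pbest, gbest, lbest, w=0.4, c1=1.5, c2=1.5, c3=1.5):
    """Velocity update of equation (12): an inertia term plus three attraction
    terms toward the personal-best, global-best, and local-best positions.

    All position arguments are length-D vectors (one entry per dimension d);
    r1, r2, r3 are drawn uniformly in [0, 1] per dimension. The inertia weight
    w may also be set per dimension, as suggested by the w_d notation.
    """
    D = len(x)
    r1, r2, r3 = np.random.rand(D), np.random.rand(D), np.random.rand(D)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))

# Example for a 4-dimensional decision vector (illustrative values only).
D = 4
x = np.random.rand(D)
v_new = velocity_update(np.zeros(D), x,
                        np.random.rand(D), np.random.rand(D), np.random.rand(D))
x_new = x + v_new  # position update step
```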

[Figure 2: plots of objective value versus objective no. (1-10) of the final solution sets on the 10-objective MaF1; panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/D-DD, (j) true PF.]

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the MaF1 problem.


Table 6 shows the comparisons of AGPSO and two modified variants of AGPSO with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (included in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II performs best in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better in 33 cases, similarly in 5 cases, and worse in 10 cases. The novel density-based velocity update strategy is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. From these comparison data, the novel velocity update strategy can improve AGPSO and make particle generation more efficient under the coordination of the

proposed angular-guided update strategies. Regarding the comparison between AGPSO and AGPSO-II, AGPSO performs better in 31 cases, similarly in 12 cases, and worse in 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): simply adding the local-best (lbest) positional information of a particle to the traditional velocity formula is not enough to control the search strength in the surrounding region of this particle. It also confirms that the proposed formula can produce higher-quality particles.
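Since all of these comparisons are reported in terms of HV, a minimal Monte Carlo estimator of the hypervolume (for minimization, relative to a reference point) is sketched below. It is a generic illustration of the indicator; the sampling box and reference point handling are assumptions rather than the exact HV computation used in the paper's experiments.

```python
import numpy as np

def hv_monte_carlo(front, ref_point, n_samples=100_000, seed=0):
    """Monte Carlo estimate of the hypervolume dominated by `front`
    (minimization) with respect to `ref_point`.

    front:     (N, m) array of nondominated objective vectors.
    ref_point: (m,) array; every front member should be <= ref_point.
    """
    rng = np.random.default_rng(seed)
    front = np.asarray(front, dtype=float)
    ref = np.asarray(ref_point, dtype=float)

    lower = front.min(axis=0)                      # lower corner of sampling box
    samples = rng.uniform(lower, ref, size=(n_samples, len(ref)))

    # A sample counts if it is dominated by at least one member of the front.
    dominated = (front[None, :, :] <= samples[:, None, :]).all(axis=2).any(axis=1)
    box_volume = np.prod(ref - lower)
    return dominated.mean() * box_volume

# Example: three points on a 2-objective front with reference point (1, 1).
print(hv_monte_carlo([[0.2, 0.8], [0.5, 0.5], [0.8, 0.2]], [1.0, 1.0]))
```

Exact HV algorithms are preferable for few objectives, but sampling-based estimates of this kind scale more gracefully to the 5-10 objectives considered here.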

4.6. Computational Complexity Analysis of AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2, which requires a time complexity

[Figure 3: plots of objective value versus objective no. (1-10) of the final solution sets on the 10-objective WFG3; panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/D-DD, (j) true PF.]

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the WFG3 problem.


[Figure 4: HV metric value versus number of generations (0-1000) for AGPSO, VaEA, SPEA2+SDE, and NMPSO; panels: (a) WFG3, (b) WFG4, (c) MaF2.]

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on WFG3, WFG4, and MaF2 with 10 objectives.

Table 6: Comparison results of AGPSO with different velocity update strategies using HV.

Problem   Obj   AGPSO                 AGPSO-I                  AGPSO-II
WFG1      5     7.91e−01 (3.47e−02)   7.48e−01 (3.41e−02) −    7.51e−01 (3.88e−02) −
WFG1      8     8.85e−01 (2.54e−02)   8.14e−01 (3.94e−02) −    7.96e−01 (3.97e−02) −
WFG1      10    9.47e−01 (4.77e−03)   9.49e−01 (2.59e−03) +    9.45e−01 (3.49e−03) ∼
WFG2      5     9.76e−01 (5.13e−03)   9.76e−01 (2.81e−03) ∼    9.78e−01 (5.19e−03) +
WFG2      8     9.87e−01 (2.65e−03)   9.89e−01 (2.21e−03) +    9.88e−01 (1.84e−03) ∼
WFG2      10    9.92e−01 (3.30e−03)   9.93e−01 (1.38e−03) +    9.92e−01 (1.97e−03) ∼
WFG3      5     6.59e−01 (5.02e−03)   6.66e−01 (4.17e−03) +    6.57e−01 (5.40e−03) ∼
WFG3      8     6.79e−01 (5.20e−03)   6.86e−01 (3.41e−03) +    6.79e−01 (4.02e−03) ∼
WFG3      10    6.92e−01 (7.15e−03)   7.02e−01 (3.08e−03) +    6.93e−01 (4.13e−03) ∼
WFG4      5     7.76e−01 (5.69e−03)   7.57e−01 (7.87e−03) −    7.68e−01 (5.07e−03) −
WFG4      8     9.00e−01 (1.01e−02)   8.85e−01 (8.62e−03) −    8.94e−01 (8.43e−03) −
WFG4      10    9.39e−01 (7.42e−03)   9.31e−01 (8.35e−03) −    9.30e−01 (8.84e−03) −
WFG5      5     7.42e−01 (4.66e−03)   7.34e−01 (7.45e−03) −    7.39e−01 (7.73e−03) −
WFG5      8     8.59e−01 (4.74e−03)   8.52e−01 (6.05e−03) −    8.58e−01 (5.45e−03) ∼
WFG5      10    8.91e−01 (5.28e−03)   8.73e−01 (1.22e−02) −    8.78e−01 (2.21e−02) −
WFG6      5     7.53e−01 (7.52e−03)   6.92e−01 (2.21e−03) −    7.52e−01 (3.85e−03) ∼
WFG6      8     8.69e−01 (1.24e−02)   8.10e−01 (3.63e−03) −    8.83e−01 (5.64e−03) +
WFG6      10    9.12e−01 (1.22e−02)   8.40e−01 (3.57e−03) −    9.23e−01 (7.99e−03) +
WFG7      5     7.90e−01 (2.13e−03)   7.81e−01 (3.32e−03) −    7.85e−01 (2.78e−03) −
WFG7      8     9.21e−01 (2.89e−03)   9.15e−01 (3.11e−03) −    9.13e−01 (3.18e−03) −
WFG7      10    9.55e−01 (9.72e−03)   9.55e−01 (2.26e−03) ∼    9.44e−01 (1.59e−02) −
WFG8      5     6.61e−01 (5.62e−03)   6.44e−01 (3.76e−03) −    6.54e−01 (5.95e−03) −
WFG8      8     7.87e−01 (7.11e−03)   7.80e−01 (4.75e−03) −    7.81e−01 (5.89e−03) −
WFG8      10    8.41e−01 (5.48e−03)   8.45e−01 (3.13e−03) +    8.31e−01 (1.76e−02) −


of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m²N²) to classify the population. In lines 8-11, it needs a time complexity of O(N). The loop for association in lines 12-15 also requires a time complexity of O(m²N²). In lines 17-20, it needs a time complexity of O(N²). In conclusion, the overall worst-case time complexity of one generation in AGPSO is max{O(mN), O(m²N²), O(N), O(N²)} = O(m²N²), which is competitive with the time complexities of most optimizers, e.g., [48].

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The novel density-based velocity update strategy uses adjacent information (local best) to explore the area around each particle and uses globally optimal information (global best) to search for better-performing locations globally. This strategy improves the quality of the produced particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms for solving MaOPs. The performance of

AGPSO is assessed by using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/D-DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is not good enough on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it also sacrifices some performance on many-objective optimization problems with convex PFs; this will be improved in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by their original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/D-DD [50]. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem   Obj   AGPSO                 AGPSO-I                  AGPSO-II
WFG9      5     6.58e−01 (4.54e−03)   6.53e−01 (3.73e−03) −    6.53e−01 (2.99e−03) −
WFG9      8     7.29e−01 (9.55e−03)   7.20e−01 (1.21e−02) −    7.23e−01 (8.94e−03) −
WFG9      10    7.51e−01 (1.21e−02)   7.35e−01 (1.18e−02) −    7.36e−01 (1.04e−02) −
MaF1      5     1.30e−02 (1.29e−04)   1.30e−02 (1.10e−04) ∼    1.30e−02 (1.04e−04) ∼
MaF1      8     4.23e−05 (1.45e−06)   4.12e−05 (1.35e−06) −    4.14e−05 (1.78e−06) −
MaF1      10    6.12e−07 (4.40e−08)   6.07e−07 (2.30e−08) ∼    6.10e−07 (2.83e−08) ∼
MaF2      5     2.64e−01 (9.97e−04)   2.62e−01 (1.08e−03) −    2.61e−01 (1.10e−03) −
MaF2      8     2.46e−01 (1.64e−03)   2.40e−01 (1.89e−03) −    2.41e−01 (1.66e−03) −
MaF2      10    2.45e−01 (1.84e−03)   2.39e−01 (2.75e−03) −    2.41e−01 (2.19e−03) −
MaF3      5     9.90e−01 (4.96e−03)   9.76e−01 (2.92e−02) −    4.96e−01 (3.82e−01) −
MaF3      8     9.68e−01 (1.95e−03)   9.36e−01 (9.22e−02) −    3.73e−01 (3.08e−01) −
MaF3      10    9.92e−01 (2.55e−03)   9.72e−01 (1.86e−02) −    4.42e−01 (3.77e−01) −
MaF4      5     1.29e−01 (1.33e−03)   1.08e−01 (1.44e−02) −    1.18e−01 (2.08e−03) −
MaF4      8     6.03e−03 (1.88e−04)   3.33e−03 (1.02e−03) −    4.74e−03 (2.12e−04) −
MaF4      10    5.62e−04 (1.47e−05)   2.58e−04 (9.13e−05) −    3.65e−04 (2.55e−05) −
MaF5      5     8.01e−01 (1.81e−03)   8.00e−01 (3.40e−03) ∼    8.00e−01 (2.06e−03) ∼
MaF5      8     9.32e−01 (1.97e−03)   9.31e−01 (1.39e−03) −    9.31e−01 (1.58e−03) −
MaF5      10    9.67e−01 (1.09e−03)   9.66e−01 (7.51e−04) −    9.66e−01 (1.18e−03) −
MaF6      5     1.30e−01 (5.82e−05)   1.30e−01 (3.35e−05) −    1.30e−01 (5.60e−05) −
MaF6      8     1.03e−01 (2.78e−05)   4.51e−02 (4.39e−02) −    6.64e−02 (5.81e−02) −
MaF6      10    9.49e−02 (1.27e−02)   4.55e−02 (7.07e−02) −    4.40e−02 (4.92e−02) −
MaF7      5     2.51e−01 (4.40e−02)   3.09e−01 (2.58e−03) +    2.49e−01 (3.73e−02) ∼
MaF7      8     1.91e−01 (4.11e−02)   2.54e−01 (4.64e−03) +    2.22e−01 (1.68e−02) +
MaF7      10    1.78e−01 (3.53e−02)   2.32e−01 (4.46e−03) +    2.01e−01 (9.95e−03) +

Best/all        34/48                 10/48                    4/48
Better/similar/worse                  33−/5∼/10+               31−/12∼/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.

[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182-197, 2002.

[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712-731, 2007.

[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256-279, 2004.

[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1-18, 2015.

[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942-1948, Washington, DC, USA, June 1995.

[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225-255, 2008.

[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81-86, Seoul, Republic of Korea, May 2001.

[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019, in press.

[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49-68, 2018.

[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297-309, 2018.

[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439-452, 2019.

[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505-519, Springer, Berlin, Germany, 2005.

[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66-73, Nashville, TN, USA, March 2009.

[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445-463, 2013.

[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067-3079, 2018.

[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69-76, Dublin, Ireland, July 2011.

[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47-77, 2014.

[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732-744, 2015.

[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257-271, 1999.

[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836-1845, Cancun, Mexico, July 2013.

[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180-1184, Shanghai, China, July 2011.

[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219-2232, 2015.

[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32-46, 2018.

[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710-14721, 2018.

[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148-3155, Sendai, Japan, May 2015.

[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316-319, Kitakushu, Japan, August 2012.

[28] F. López-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213-215, Berlin, Germany, April 2016.

[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587-602, 2019.

[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16-37, 2016.

[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67-81, 2017.

[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169-190, 2017.

[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29-43, 2017.

[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24-35, 2018.

[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794-2808, 2017.

[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446-1459, 2017.

[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263-282, 2002.

[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331-345, 2019.

[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1-14, 2019, in press.

[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577-601, 2014.

[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361-375, 2019.

[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211-225, 2018.

[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467-486, 2018.

[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+ – an indicator for multi and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346-352, 2019.

[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173-187, 2019.

[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1-14, 2019, in press.

[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348-365, 2014.

[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131-152, 2017.

[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694-716, 2015.

[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280-295, Guanajuato, Mexico, March 2005.

[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391-405, 2018, in press.

[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105-145, Springer London, London, UK, 2005.


Page 12: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

44 Further Discussion and Analysis on AGPSO To furtherjustify the advantage of AGPSO it was particularly com-pared to a recently proposed angle-based evolutionary al-gorithm VaEA eir comparison results with HV on MaFand WFG problems on 5 to 10 objectives are already con-tained in Tables 3 and 5 According the results in the Tables 3and 5 we can also conclude that AGPSO showed a superiorperformance over VaEA in most problems of MaF1-MaF7andWFG1-WFG9 as AGPSO performs better than VaEA in40 out of 48 comparisons and is only defeated by VaEA in 7cases We compare the performance of our algorithm withVaEArsquos environment selection on different shapes of PF testproblems (MaF1-MaF7)ey have the same shape of PF buthave some different characteristics such as the difficulty ofconvergence and the deceptive PF property (WFG1-WFG9)erefore the angular-guided-based method embedded intoPSOs is effectively improved in tackling MaOPs e an-gular-guided-based method is adopted as a distributed in-dicator to distinguish the similarities of particles which istransformed by the vector anglee traditional vector angleperforms better with concave but worse with convex orlinear PFs based on MOEAC [52] erefore the angular-guided-based method improves this situation the angular-guided-based method performs better than the traditionalvector angle with convex or linear PFs and it will not behavebadly with concave PFs On the one hand compared withthe angle-based strategy designed in VaEA AGPSO mainlyfocuses on the diversity first in the update of its externalarchive and convergence second On the other handAGPSO designed a novel density-based velocity updatestrategy to produce new particles which mainly enhances

the search for areas around each particle and speeds up theconvergence speed while balancing convergence and di-versity concurrently From these analyses we think that ourenvironment selection is more favorable for some linear PFsand concave-shaped PFs with boundaries that are not dif-ficult to find In addition AGPSO also added the evolu-tionary operator on the external archive to improve theperformance erefore our proposed AGPSO is reasonableto be regarded as an effective PSO for solving MaOPs

To visually show and support the abovementioned dis-cussion results to better visualize the performance and toshow the solutions distribution in high-dimensional objectivespaces some final solution sets with the median HV valuesfrom 30 runs were plotted in Figures 2 and 3 respectively forMaF1 with an inverted PF and for WFG3 with a linear anddegenerate PF In Figure 2 comparedwith the graph of VaEAwe can find that our boundary cannot be found completelybut the approximate PF we calculated through our proposedAGPSO PF is more closely attached to the real PF is alsoproves the correctness of our analysis on environmentalselection ie our environment selection is more favorablefor some linear PFs and concave-shaped PFs withboundaries that are not difficult to finde HV trend chartof WFG3 WFG4 and MaF2 with 10 objectives is shown inFigure 4 As observed from Figure 4 the convergence rateof AGPSO is faster than that of the other four optimizers

45 Further Discussion and Analysis on Velocity UpdateStrategy e abovementioned comparisons have fully provedthe superiority of AGPSO over four MOPSOs and four

Table 5 Comparison of results of AGPSO and four current MOEAs on Maf1-Maf7 using HV

Problem Obj AGPSO VaEA θ minus DEA MOEADD SPEA2 + SDE

MaF15 131eminus 02(143eminus 04) 111eminus 02(273eminus 04)minus 566eminus 03(272eminus 05)minus 296eminus 03(834eminus 05)minus 129eminus 02(100eminus 04)minus8 423eminus 05(180eminus 06) 269eminus 05(204eminus 06)minus 293eminus 05(203eminus 06)minus 438eminus 06(477eminus 07)minus 387eminus 05(983eminus 07)minus10 616eminus 07(349eminus 08) 360eminus 07(221eminus 08)minus 339eminus 07(707eminus 08)minus 345eminus 08(221eminus 08)minus 530eminus 07(216eminus 08)minus

MaF25 264eminus 01(118eminus 03) 235eminus 01(480eminus 03)minus 233eminus 01(282eminus 03)minus 234eminus 01(148eminus 03)minus 262eminus 01(126eminus 03)minus8 246eminus 01(264eminus 03) 200eminus 01(759eminus 03)minus 171eminus 01(182eminus 02)minus 188eminus 01(139eminus 03)minus 236eminus 01(306eminus 03)minus10 245eminus 01(208eminus 03) 201eminus 01(125eminus 02)minus 194eminus 01(796eminus 03)minus 191eminus 01(321eminus 03)minus 230eminus 01(248eminus 03)minus

MaF35 988eminus 01(287eminus 03) 997eminus 01(113eminus 03)+ 993eminus 01(282eminus 03)+ 998eminus 01(954eminus 05)+ 992eminus 01(212eminus 03)+8 995eminus 01(211eminus 03) 998eminus 01(358eminus 04)+ 992eminus 01(383eminus 03)minus 100e+ 00(942eminus 07)+ 997eminus 01(166eminus 03)+10 985eminus 01(226eminus 03) 999eminus 01(936eminus 04)+ 970eminus 01(568eminus 03)minus 100e+ 00(396eminus 08)+ 999eminus 01(494eminus 04)+

MaF45 129eminus 01(122eminus 03) 117eminus 01(362eminus 03)minus 732eminus 02(889eminus 03)minus 000e+ 00(000e+ 00)minus 107eminus 01(537eminus 03)minus8 602eminus 03(967eminus 05) 244eminus 03(328eminus 04)minus 132eminus 03(849eminus 04)minus 000e+ 00(000e+ 00)minus 362eminus 04(780eminus 05)minus10 566eminus 04(189eminus 05) 148eminus 04(976eminus 06)minus 229eminus 04(357eminus 05)minus 000e+ 00(000e+ 00)minus 423eminus 06(165eminus 06)minus

MaF55 800eminus 01(270eminus 03) 792eminus 01(185eminus 03)minus 813eminus 01(451eminus 05)+ 773eminus 01(313eminus 03)minus 784eminus 01(438eminus 03)minus8 932eminus 01(101eminus 03) 908eminus 01(380eminus 03)minus 926eminus 01(979eminus 05)minus 873eminus 01(671eminus 03)minus 736eminus 01(962eminus 02)minus10 967eminus 01(132eminus 03) 943eminus 01(687eminus 03)minus 970eminus 01(461eminus 05)+ 928eminus 01(828eminus 04)minus 712eminus 01(118eminus 01)minus

MaF65 130eminus 01(622eminus 05) 130eminus 01(751eminus 05)+ 117eminus 01(312eminus 03)minus 819eminus 02(296eminus 02)minus 129eminus 01(184eminus 04)minus8 106eminus 01(166eminus 05) 106eminus 01(220eminus 05)+ 918eminus 02(121eminus 03)minus 248eminus 02(129eminus 02)minus 881eminus 02(226eminus 04)minus10 922eminus 02(118eminus 02) 705eminus 02(395eminus 02)minus 457eminus 02(466eminus 02)minus 483eminus 02(524eminus 02)minus 271eminus 02(100eminus 01)minus

MaF75 253eminus 01(305eminus 02) 303eminus 01(350eminus 03)+ 279eminus 01(742eminus 03)+ 119eminus 01(276eminus 02)minus 331eminus 01(176eminus 03)+8 194eminus 01(259eminus 02) 224eminus 01(617eminus 03)+ 220eminus 01(503eminus 02)+ 706eminus 04(780eminus 04)minus 245eminus 01(173eminus 02)+10 183eminus 01(318eminus 02) 191eminus 01(119eminus 02)sim 227eminus 01(201eminus 02)+ 905eminus 05(481eminus 05)minus 656eminus 02(105eminus 02)minus

Bestall 1121 221 321 321 221Worse

similarbetter 13minus1sim7+ 15minus0sim6+ 18minus0sim3+ 16minus0sim5+

12 Complexity

MOEAs on the WFG and MaF test problems with 5 8 and 10objectives In this section we make a deeper comparison of thespeed update formula In order to prove the superiority of theformula based on the density velocity update the effect iscompared with the traditional speed update formula as in [14](denoted as AGPSO-I) Furthermore to show that only em-bedding local best into the traditional velocity update method

vd(t) wdvd(t) + c1r1d xdpbest minus xd(t)1113872 1113873

+ c2r2d xdgbest minus xd(t)1113872 1113873 + c3r3d xdlbest minus xd(t)1113872 1113873

(12)

is not sufficient enough and the effect is also compared withequation (12) which is denoted as AGPSO-II

AGPSO-10obj-MaF1

002040608

1

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(a)

MaPSO-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

002040608

1

Obj

ectiv

e val

ue

(b)

NMPSO-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

002040608

1

Obj

ectiv

e val

ue

(c)

DMOPSO-10obj-MaF1

04

06

08

1

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(d)

THETADEA-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

07

08

09

1

11

Obj

ectiv

e val

ue

(e)

SMPSO-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

002040608

1

Obj

ectiv

e val

ue

(f )

VaEA-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

002040608

1

Obj

ectiv

e val

ue

(g)

SPEA2+SDE-10obj-MaF1

2 3 4 5 6 7 8 9 101Objective no

002040608

1

Obj

ectiv

e val

ue

(h)

MOEADD-10obj-MaF1

0

02

04

06

08

1

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(i)

TruePF-10obj-MaF1

0010203040506070809

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(j)

Figure 2 e final solution sets achieved by nine MOEAsMOPSOs and the true PF on MaF1 problems

Complexity 13

Table 6 shows the comparisons of AGPSO and twomodified AGPSOs with different speed update formula onWFG1-WFG9 and MaF1-MaF7 using HV e mean HVvalues and the standard deviations (included in bracket afterthe mean HV results) in 30 runs are collected for comparisonAs in the second last row of Table 6 there are 34 best results in48 test problems obtained by AGPSO while AGPSO-I per-forms best in 10 cases and AGPSO-II performs best only in 4case respectively Regarding the comparison of AGPSO withAGPSO-I AGPSO performs better on 33 cases similarly on 5cases and worse on 10 cases e novel velocity updatestrategy based on density is effective in improving the per-formance of AGPSO on WFG4 WFG5 WFG7 WFG9 andMaF2-MaF6 From these comparison data the novel velocityupdate strategy can improve the AGPSO and enable particlegeneration to be more efficient under the coordination of the

proposed angular-guided update strategies Regarding thecomparison AGPSO and AGPSO-II AGPSO performs betteron 31 cases similarly on 12 cases and worse on 5 cases It alsoverified the superiority of the proposed novel velocity updatestrategy by comparing the variants of the traditional formuladefined by (12) Adding the local-best (lbest) positional in-formation of a particle to the traditional velocity formula isnot enough to control the search strength in the surroundingregion of this particle It also confirmed that the proposedformula can produce higher quality particles

46 Computational Complexity Analysis on AGPSO ecomputational complexity of AGPSO in one generation ismainly dominated by the environmental selection that is de-scribed in Algorithm 2 Algorithm 2 requires a time complexity

AGPSO-10obj-WFG3

0

5

10

15

20

Obj

ectiv

e val

ue

2 4 6 8 100Objective no

(a)

MaPSO-10obj-WFG3

2 4 6 8 100Objective no

05

10152025

Obj

ectiv

e val

ue

(b)

NMPSO-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

20

Obj

ectiv

e val

ue

(c)

DMOPSO-10obj-WFG3

0

5

10

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(d)

THETADEA-10obj-WFG3

2 4 6 8 100Objective no

05

10152025

Obj

ectiv

e val

ue

(e)

SPEA2+SDE-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

Obj

ectiv

e val

ue

(f )

SMPSO-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

05

10152025

Obj

ectiv

e val

ue

(g)

VaEA-10obj-WFG3

2 4 6 8 100Objective no

0

5

10

15

20

Obj

ectiv

e val

ue

(h)

MOEADD-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

20

Obj

ectiv

e val

ue(i)

TruePF-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

Obj

ectiv

e val

ue

(j)

Figure 3 e final solution sets achieved by nine MOEAsMOPSOs and the true PF on WFG3 problems

14 Complexity

04

05

06

07H

V m

etric

val

ueWFG3

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(a)

02

04

06

08

HV

met

ric v

alue

WFG4

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(b)

0 200 400 600 800 1000Number of generations

018

02

022

024

026

HV

met

ric v

alue

MaF2

AGPSOVaEA

SPEA2 + SDENMPSO

(c)

Figure 4 e HV trend charts (averaged over 30 runs) of AGPSO SPEA2+ SDE and NMPSO on WFG3 WFG4 and MaF2 with 10objectives

Table 6 Comparison of results of AGPSO with different velocity using HV

Problem Obj AGPSO AGPSO-I AGPSO-II

WFG15 791eminus 01(347eminus 02) 748eminus 01(341eminus 02)minus 751eminus 01(388eminus 02)minus8 885eminus 01(254eminus 02) 814eminus 01(394eminus 02)minus 796eminus 01(397eminus 02)minus10 947eminus 01(477eminus 03) 949eminus 01(259eminus 03)+ 945eminus 01(349eminus 03)sim

WFG25 976eminus 01(513eminus 03) 976eminus 01(281eminus 03)sim 978eminus 01(519eminus 03)+8 987eminus 01(265eminus 03) 989eminus 01(221eminus 03)+ 988eminus 01(184eminus 03)sim10 992eminus 01(330eminus 03) 993eminus 01(138eminus 03eminus )+ 992eminus 01(197eminus 03)sim

WFG35 659eminus 01(502eminus 03) 666eminus 01(417eminus 03)+ 657eminus 01(540eminus 03)sim8 679eminus 01(520eminus 03) 686eminus 01(341eminus 03)+ 679eminus 01(402eminus 03)sim10 692eminus 01(715eminus 03) 702eminus 01(308eminus 03)+ 693eminus 01(413eminus 03)sim

WFG45 eminus 776eminus 01(569eminus 03) 757eminus 01(787eminus 03)minus 768eminus 01(507eminus 03)minus8 900eminus 01(101eminus 02) 885eminus 01(862eminus 03)minus 894eminus 01(843eminus 03)minus10 939eminus 01(742eminus 03) 931eminus 01(835eminus 03)minus 930eminus 01(884eminus 03)minus

WFG55 742eminus 01(466eminus 03) 734eminus 01(745eminus 03)minus 739eminus 01(773eminus 03)minus8 859eminus 01(474eminus 03) 852eminus 01(605eminus 03)minus 858eminus 01(545eminus 03)sim10 891eminus 01(528eminus 03) 873eminus 01(122eminus 02)minus 878eminus 01(221eminus 02)minus

WFG65 753eminus 01(752eminus 03) 692eminus 01(221eminus 03)minus 752eminus 01(385eminus 03)sim8 869eminus 01(124eminus 02) 810eminus 01(363eminus 03)minus 883eminus 01(564eminus 03)+10 912eminus 01(122eminus 02) 840eminus 01(357eminus 03)minus 923eminus 01(799eminus 03)+

WFG75 790eminus 01(213eminus 03) 781eminus 01(332eminus 03)minus 785eminus 01(278eminus 03)minus8 921eminus 01(289eminus 03) 915eminus 01(311eminus 03)minus 913eminus 01(318eminus 03)minus10 955eminus 01(972eminus 03) 955eminus 01(226eminus 03)sim 944eminus 01(159eminus 02)minus

WFG85 661eminus 01(562eminus 03) 644eminus 01(376eminus 03)minus 654eminus 01(595eminus 03)minus8 787eminus 01(711eminus 03) 780eminus 01(475eminus 03)minus 781eminus 01(589eminus 03)minus10 841eminus 01(548eminus 03) 845eminus 01(313eminus 03)+ 831eminus 01(176eminus 02)minus

Complexity 15

of O(mN) to obtain the union population U in line 1 andnormalize each particle inU in line 2 wherem is the number ofobjectives andN is the swarm size In lines 3ndash7 it requires a timecomplexity ofO (m2N2) to classify the population In lines 8ndash11it needs time complexity of O(N) In the loop for association italso requires a time complexity ofO(m2N2) as in lines 12ndash15 Inlines 17ndash20 it needs a time complexity ofO(N2) In conclusionthe overall worst time complexity of one generation in AGPSOis max O(mN) O(m2N2) O(N)1113864 O(N2) which performscompetitive time complexities with most optimizers eg [48]

5 Conclusions and Future Work

is paper proposes AGPSO which is a novel angular-guidedMOPSO with efficient density-based velocity update strategyand excellent angular-guided-based archive update strategyenovel density-based velocity update strategy uses adjacent in-formation (local-best) to explore information around particlesand uses globally optimal information (global-best) to search forbetter performing locations globally is strategy improves thequality of the particles produced by searching for the sur-rounding space of sparse particles Furthermore the angular-guided archive update strategy provides efficient convergencewhile maintaining good population distribution is combi-nation of proposed novel velocity update strategy and excellentarchive update strategy enables the proposed AGPSO toovercome the problems encountered in the existing state-of-the-art algorithms for solving MaOPs e performance of

AGPSO is assessed by usingWFG andMaF test suites with 5 to10 objectives Our experiments indicate that AGPSO has su-perior performance over four current PSOs (SMPSOdMOPSO NMPSO and MaPSO) and four evolutionary al-gorithms (VaEA θ-DEA MOEAD-DD and SPEA2+SDE)

is density-based velocity update strategy and angular-guided archive update strategy will be further studied in ourfuture work e performance of density velocity updatestrategy is not good enough on some partial problems whichwill be further studied Regarding angular-guided archive up-date strategy while reducing the amount of computation andperforming better than the tradition angle selection on concaveand linear PFs it also sacrifices a part of the performance againstmany objective optimization problems with convex PFs theimprovement of which will be further studied in future

Data Availability

Our source code could be provided by contacting the cor-responding author e source codes of the compared state-of-the-art algorithms can be downloaded from httpjmetalsourceforgenetindexhtml or provided by the original au-thor respectively Also most codes of the compared algo-rithm can be found in our source code such as VaEA [27]θ-DEA [48] and MOEADD [50] Test problems are theWFG [23] and MaF [24] WFG [23] can be found at httpjmetalsourceforgenetproblemshtml and MaF [24] can befound at httpsgithubcomBIMKPlatEMO

Table 6 Continued

Problem Obj AGPSO AGPSO-I AGPSO-II

WFG95 658eminus 01(454eminus 03) 653eminus 01(373eminus 03)minus 653eminus 01(299eminus 03)minus8 729eminus 01(955eminus 03) 720eminus 01(121eminus 02)minus 723eminus 01(894eminus 03)minus10 751eminus 01(121eminus 02) 735eminus 01(118eminus 02)minus 736eminus 01(104eminus 02)minus

MaF15 130eminus 02(129eminus 04) 130eminus 02(110eminus 04)sim 130eminus 02(104eminus 04)sim8 423eminus 05(145eminus 06) 412eminus 05(135eminus 06)minus 414eminus 05(178eminus 06)minus10 612eminus 07(440eminus 08) 607eminus 07(230eminus 08)sim 610eminus 07(283eminus 08)sim

MaF25 264eminus 01(997eminus 04) 262eminus 01(108eminus 03)minus 261eminus 01(110eminus 03)minus8 246eminus 01(164eminus 03) 240eminus 01(189eminus 03)minus 241eminus 01(166eminus 03)minus10 245eminus 01(184eminus 03) 239eminus 01(275eminus 03)minus 241eminus 01(219eminus 03)minus

MaF35 990eminus 01(496eminus 03) 976eminus 01(292eminus 02)minus 496eminus 01(382eminus 01)minus8 968eminus 01(195eminus 03) 936eminus 01(922eminus 02)minus 373eminus 01(308eminus 01)minus10 992eminus 01(255eminus 03) 972eminus 01(186eminus 02)minus 442eminus 01(377eminus 01)minus

MaF45 129eminus 01(133eminus 03) 108eminus 01(144eminus 02)minus 118eminus 01(208eminus 03)minus8 603eminus 03(188eminus 04) 333eminus 03(102eminus 03)minus 474eminus 03(212eminus 04)minus10 562eminus 04(147eminus 05) 258eminus 04(913eminus 05)minus 365eminus 04(255eminus 05)minus

MaF55 801eminus 01(181eminus 03) 800eminus 01(340eminus 03)sim 800eminus 01(206eminus 03)sim8 932eminus 01(197eminus 03) 931eminus 01(139eminus 03)minus 931eminus 01(158eminus 03)minus10 967eminus 01(109eminus 03) 966eminus 01(751eminus 04)minus 966eminus 01(118eminus 03)minus

MaF65 130eminus 01(582eminus 05) 130eminus 01(335eminus 05)minus 130eminus 01(560eminus 05)minus8 103eminus 01(278eminus 05) 451eminus 02(439eminus 02)minus 664eminus 02(581eminus 02)minus10 949eminus 02(127eminus 02) 455eminus 02(707eminus 02)minus 440eminus 02(492eminus 02)minus

MaF75 251eminus 01(440eminus 02) 309eminus 01(258eminus 03)+ 249eminus 01(373eminus 02)sim8 191eminus 01(411eminus 02) 254eminus 01(464eminus 03)+ 222eminus 01(168eminus 02)+10 178eminus 01(353eminus 02) 232eminus 01(446eminus 03)+ 201eminus 01(995eminus 03)+

Bestall 3448 1048 448Bettersimilarworse 33minus5sim10+ 31minus12sim5+

16 Complexity

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported in part by the National NaturalScience Foundation of China under Grants 6187611061872243 and 61702342 Natural Science Foundation ofGuangdong Province under Grant 2017A030313338Guangdong Basic and Applied Basic Research Foundationunder Grant 2020A151501489 and Shenzhen TechnologyPlan under Grants JCYJ20190808164211203 andJCYJ20180305124126741

References

[1] K Deb Multi-Objective Optimization Using EvolutionaryAlgorithms John Wiley amp Sons New York NY USA 2001

[2] K Deb A Pratap S Agarwal and T Meyarivan ldquoA fast andelitist multiobjective genetic algorithm NSGA-IIrdquo IEEETransactions on Evolutionary Computation vol 6 no 2pp 182ndash197 2002

[3] Q F Zhang and H Li ldquoMOEAD a multi-objective evolu-tionary algorithm based on decompositionrdquo IEEE Transac-tions on Evolutionary Computation vol 11 no 6 pp 712ndash7312007

[4] C A C Coello G T Pulido and M S Lechuga ldquoHandlingmultiple objectives with particle swarm optimizationrdquo IEEETransactions on Evolutionary Computation vol 8 no 3pp 256ndash279 2004

[5] W Hu and G G Yen ldquoAdaptive multi-objective particleswarm optimization based on parallel cell coordinate systemrdquoIEEE Transactions on Evolutionary Computation vol 19no 1 pp 1ndash18 2015

[6] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 Washington DC USA June 1995

[7] M Gong L Jiao H Du and L Bo ldquoMulti-objective immunealgorithm with nondominated neighbor-based selectionrdquoEvolutionary Computation vol 16 no 2 pp 225ndash255 2008

[8] R C Eberhart and Y H Shi ldquoParticle swarm optimizationdevelopments applications and resourcesrdquo in Proceedings ofthe 2001 Congress on Evolutionary Computation vol 1pp 81ndash86 Seoul Republic of Korea May 2001

[9] W Zhang G Li W Zhang J Liang and G G Yen ldquoA clusterbased PSO with leader updating mechanism and ring-to-pology for multimodal multi-objective optimizationrdquo Swarmand Evolutionary Computation vol 50 p 100569 In press2019

[10] D Tian and Z Shi ldquoMPSO modified particle swarm opti-mization and its applicationsrdquo Swarm and EvolutionaryComputation vol 41 pp 49ndash68 2018

[11] S Rahimi A Abdollahpouri and P Moradi ldquoA multi-ob-jective particle swarm optimization algorithm for communitydetection in complex networksrdquo Swarm and EvolutionaryComputation vol 39 pp 297ndash309 2018

[12] J Garcıa-Nieto E Lopez-Camacho Garcıa-GodoyM J Nebro A J Nebro and J F Aldana-Montes ldquoMulti-objective ligand-protein docking with particle swarm opti-mizersrdquo Swarm and Evolutionary Computation vol 44pp 439ndash452 2019

[13] M R Sierra and C A Coello Coello ldquoImproving PSO-basedmulti-objective optimization using crowding mutation andisin-dominancerdquo in Evolutionary Multi-Criterion Optimizationvol 3410 of Lecture Notes in Computer Science pp 505ndash519Sringer Berlin Germany 2005

[14] A J Nebro J J Durillo J Garcia-Nieto C A Coello CoelloF Luna and E Alba ldquoSMPSO a new PSO-based meta-heuristic for multi-objective optimizationrdquo in Proceedings ofthe 2009 IEEE Symposium on Computational Intelligence inMilti-Criteria Decision-Making pp 66ndash73 Nashville TNUSA March 2009

[15] Z Zhan J Li J Cao J Zhang H Chuang and Y ShildquoMultiple populations for multiple objectives a coevolu-tionary technique for solving multi-objective optimizationproblemsrdquo IEEE Transactions on Cybernetics vol 43 no 2pp 445ndash463 2013

[16] H Han W Lu L Zhang and J Qiao ldquoAdaptive gradientmultiobjective particle swarm optimizationrdquo IEEE Transac-tions on Cybernetics vol 48 no 11 pp 3067ndash3079 2018

[17] S Zapotecas Martınez and C A Coello Coello ldquoA multi-ob-jective particle swarm optimizer based on decompositionrdquo inProceedings of the 13th Annual Conference on Genetic andEvolutionary Computation pp 69ndash76 Dublin Ireland July 2011

[18] N AI Moubayed A Pertovski and J McCall ldquoD2MOPSOMOPSO based on decomposition and dominance with ar-chiving using crowding distance in objective and solutionspacesrdquo Evolutionary Computation vol 22 no 1 pp 47ndash772014

[19] Q Lin J Li Z Du J Chen and Z Ming ldquoA novel multi-objective particle swarm optimization with multiple searchstrategiesrdquo European Journal of Operational Researchvol 247 no 3 pp 732ndash744 2015

[20] E Zitzler and L iele ldquoMultiobjective evolutionary algo-rithms a comparative case study and the strength Paretoapproachrdquo IEEE Transactions on Evolutionary Computationvol 3 no 4 pp 257ndash271 1999

[21] D H Phan and J Suzuki ldquoR2-ibea R2 indicator basedevolutionary algorithm for multi-objective optimizationrdquo inProceedings of the 2013 IEEE Congress on EvolutionaryComputation pp 1836ndash1845 Cancun Mexico July 2013

[22] S Jia J Zhu B Du and H Yue ldquoIndicator-based particleswarm optimization with local searchrdquo in Proceedings of the2011 Seventh International Conference on Natural Compu-tation vol 2 pp 1180ndash1184 Shanghai China July 2011

[23] X Sun Y Chen Y Liu and D Gong ldquoIndicator-based setevolution particle swarm optimization for many-objectiveproblemsrdquo Soft Computing vol 20 no 6 pp 2219ndash22322015

[24] Q Lin S Liu Q Zhu et al ldquoParticle swarm optimization withA balanceable fitness estimation for many-objective optimi-zation problemsrdquo IEEE Transactions on Evolutionary Com-putation vol 22 no 1 pp 32ndash46 2018

[25] L-X Wei X Li R Fan H Sun and Z-Y Hu ldquoA hybridmultiobjective particle swarm optimization algorithm basedon R2 indicatorrdquo IEEE Access vol 6 pp 14710ndash14721 2018

[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakushu, Japan, August 2012.
[28] F. Lopez-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+: an indicator for multi- and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer London, London, UK, 2005.


MOEAs on the WFG and MaF test problems with 5, 8, and 10 objectives. In this section, we make a deeper comparison of the velocity update formula. To demonstrate the superiority of the proposed density-based velocity update formula, its effect is compared with that of the traditional velocity update formula in [14] (denoted as AGPSO-I). Furthermore, to show that merely embedding the local best into the traditional velocity update method,

v_d(t) = w v_d(t) + c_1 r_{1d} (x_d^{pbest} - x_d(t)) + c_2 r_{2d} (x_d^{gbest} - x_d(t)) + c_3 r_{3d} (x_d^{lbest} - x_d(t)),    (12)

is not sufficient, the effect is also compared with that of equation (12), which is denoted as AGPSO-II.
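To make the two baselines concrete, the short Python sketch below (an illustration only, not the authors' reference implementation) computes the velocity update of equation (12), i.e., the AGPSO-II variant that appends a local-best term to the traditional update of [14]; setting c3 = 0 recovers the traditional update used by AGPSO-I. The inertia weight and acceleration coefficients shown are placeholder values for the example, not the settings used in the experiments.

```python
import numpy as np

def velocity_update_eq12(v, x, pbest, gbest, lbest,
                         w=0.4, c1=2.0, c2=2.0, c3=2.0, rng=None):
    """Velocity update of equation (12): the traditional pbest/gbest terms
    plus an additional local-best (lbest) term; c3 = 0 gives the traditional
    formula (the AGPSO-I baseline)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.size
    # Per-dimension random factors r1, r2, r3 in [0, 1)
    r1, r2, r3 = rng.random(d), rng.random(d), rng.random(d)
    return (w * v
            + c1 * r1 * (pbest - x)
            + c2 * r2 * (gbest - x)
            + c3 * r3 * (lbest - x))

# Toy usage on a 4-dimensional decision vector (arbitrary values).
rng = np.random.default_rng(1)
x, v = rng.random(4), np.zeros(4)
pbest, gbest, lbest = rng.random(4), rng.random(4), rng.random(4)
v_new = velocity_update_eq12(v, x, pbest, gbest, lbest, rng=rng)
x_new = x + v_new  # the position update follows the usual PSO rule
print(v_new, x_new)
```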

Figure 2: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the MaF1 problem with 10 objectives. Panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SMPSO, (g) VaEA, (h) SPEA2+SDE, (i) MOEA/D-DD, (j) true PF; each panel plots objective value against objective number (1 to 10).

Table 6 shows the comparisons of AGPSO and two modified AGPSO variants with different velocity update formulas on WFG1-WFG9 and MaF1-MaF7 using HV. The mean HV values and the standard deviations (in brackets after the mean HV results) over 30 runs are collected for comparison. As shown in the second-to-last row of Table 6, AGPSO obtains 34 best results out of the 48 test problems, while AGPSO-I performs best in 10 cases and AGPSO-II in only 4 cases. Regarding the comparison of AGPSO with AGPSO-I, AGPSO performs better on 33 cases, similarly on 5 cases, and worse on 10 cases. The novel velocity update strategy based on density is effective in improving the performance of AGPSO on WFG4, WFG5, WFG7, WFG9, and MaF2-MaF6. These comparisons indicate that the novel velocity update strategy can improve AGPSO and make particle generation more efficient under the coordination of the proposed angular-guided update strategies. Regarding the comparison of AGPSO and AGPSO-II, AGPSO performs better on 31 cases, similarly on 12 cases, and worse on 5 cases. This also verifies the superiority of the proposed velocity update strategy over the variant of the traditional formula defined by equation (12): adding the local-best (lbest) position information of a particle to the traditional velocity formula is not enough to control the search intensity in the surrounding region of this particle. It further confirms that the proposed formula can produce higher-quality particles.
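As a side note, the "better/similar/worse" summary row of Table 6 is obtained by simply tallying the per-instance comparison symbols; the minimal sketch below illustrates that bookkeeping using the counts reported in the table (it does not recompute the underlying statistics).

```python
from collections import Counter

# Outcomes of AGPSO-I relative to AGPSO on the 48 instances of Table 6:
# '-' = worse than AGPSO, '~' = similar, '+' = better.  The 33/5/10 split
# below is copied from the last row of the table.
agpso_i_symbols = ['-'] * 33 + ['~'] * 5 + ['+'] * 10
tally = Counter(agpso_i_symbols)
print("AGPSO vs AGPSO-I (better/similar/worse):",
      f"{tally['-']}/{tally['~']}/{tally['+']}")  # -> 33/5/10
```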

4.6. Computational Complexity Analysis on AGPSO. The computational complexity of AGPSO in one generation is mainly dominated by the environmental selection described in Algorithm 2.

Figure 3: The final solution sets achieved by nine MOEAs/MOPSOs and the true PF on the WFG3 problem with 10 objectives. Panels: (a) AGPSO, (b) MaPSO, (c) NMPSO, (d) dMOPSO, (e) θ-DEA, (f) SPEA2+SDE, (g) SMPSO, (h) VaEA, (i) MOEA/D-DD, (j) true PF; each panel plots objective value against objective number (1 to 10).

Figure 4: The HV trend charts (averaged over 30 runs) of AGPSO, VaEA, SPEA2+SDE, and NMPSO on (a) WFG3, (b) WFG4, and (c) MaF2 with 10 objectives; each panel plots the HV metric value against the number of generations (0 to 1000).

Table 6: Comparison of results of AGPSO with different velocity update strategies using HV. Entries are mean HV values over 30 runs with standard deviations in brackets; "+", "~", and "-" indicate that a variant performs better than, similarly to, or worse than AGPSO, respectively.

Problem  Obj  AGPSO                 AGPSO-I                 AGPSO-II
WFG1     5    7.91e-01 (3.47e-02)   7.48e-01 (3.41e-02) -   7.51e-01 (3.88e-02) -
         8    8.85e-01 (2.54e-02)   8.14e-01 (3.94e-02) -   7.96e-01 (3.97e-02) -
         10   9.47e-01 (4.77e-03)   9.49e-01 (2.59e-03) +   9.45e-01 (3.49e-03) ~
WFG2     5    9.76e-01 (5.13e-03)   9.76e-01 (2.81e-03) ~   9.78e-01 (5.19e-03) +
         8    9.87e-01 (2.65e-03)   9.89e-01 (2.21e-03) +   9.88e-01 (1.84e-03) ~
         10   9.92e-01 (3.30e-03)   9.93e-01 (1.38e-03) +   9.92e-01 (1.97e-03) ~
WFG3     5    6.59e-01 (5.02e-03)   6.66e-01 (4.17e-03) +   6.57e-01 (5.40e-03) ~
         8    6.79e-01 (5.20e-03)   6.86e-01 (3.41e-03) +   6.79e-01 (4.02e-03) ~
         10   6.92e-01 (7.15e-03)   7.02e-01 (3.08e-03) +   6.93e-01 (4.13e-03) ~
WFG4     5    7.76e-01 (5.69e-03)   7.57e-01 (7.87e-03) -   7.68e-01 (5.07e-03) -
         8    9.00e-01 (1.01e-02)   8.85e-01 (8.62e-03) -   8.94e-01 (8.43e-03) -
         10   9.39e-01 (7.42e-03)   9.31e-01 (8.35e-03) -   9.30e-01 (8.84e-03) -
WFG5     5    7.42e-01 (4.66e-03)   7.34e-01 (7.45e-03) -   7.39e-01 (7.73e-03) -
         8    8.59e-01 (4.74e-03)   8.52e-01 (6.05e-03) -   8.58e-01 (5.45e-03) ~
         10   8.91e-01 (5.28e-03)   8.73e-01 (1.22e-02) -   8.78e-01 (2.21e-02) -
WFG6     5    7.53e-01 (7.52e-03)   6.92e-01 (2.21e-03) -   7.52e-01 (3.85e-03) ~
         8    8.69e-01 (1.24e-02)   8.10e-01 (3.63e-03) -   8.83e-01 (5.64e-03) +
         10   9.12e-01 (1.22e-02)   8.40e-01 (3.57e-03) -   9.23e-01 (7.99e-03) +
WFG7     5    7.90e-01 (2.13e-03)   7.81e-01 (3.32e-03) -   7.85e-01 (2.78e-03) -
         8    9.21e-01 (2.89e-03)   9.15e-01 (3.11e-03) -   9.13e-01 (3.18e-03) -
         10   9.55e-01 (9.72e-03)   9.55e-01 (2.26e-03) ~   9.44e-01 (1.59e-02) -
WFG8     5    6.61e-01 (5.62e-03)   6.44e-01 (3.76e-03) -   6.54e-01 (5.95e-03) -
         8    7.87e-01 (7.11e-03)   7.80e-01 (4.75e-03) -   7.81e-01 (5.89e-03) -
         10   8.41e-01 (5.48e-03)   8.45e-01 (3.13e-03) +   8.31e-01 (1.76e-02) -

Algorithm 2 requires a time complexity of O(mN) to obtain the union population U in line 1 and to normalize each particle in U in line 2, where m is the number of objectives and N is the swarm size. In lines 3-7, it requires a time complexity of O(m^2 N^2) to classify the population. In lines 8-11, it needs a time complexity of O(N). In the loop for association, it also requires a time complexity of O(m^2 N^2), as in lines 12-15. In lines 17-20, it needs a time complexity of O(N^2). In conclusion, the overall worst-case time complexity of one generation in AGPSO is max{O(mN), O(m^2 N^2), O(N), O(N^2)}, which is competitive with the time complexities of most optimizers, e.g., [48].
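Collecting the stage-wise costs just listed, the per-generation cost of Algorithm 2 can be written out as the following worked summary (a restatement of the analysis above; the simplification assumes m >= 1 and N >= 1):

```latex
\begin{align*}
T_{\text{gen}} &= \underbrace{O(mN)}_{\text{lines 1--2}}
              + \underbrace{O(m^{2}N^{2})}_{\text{lines 3--7}}
              + \underbrace{O(N)}_{\text{lines 8--11}}
              + \underbrace{O(m^{2}N^{2})}_{\text{lines 12--15}}
              + \underbrace{O(N^{2})}_{\text{lines 17--20}} \\
              &= \max\{O(mN),\ O(m^{2}N^{2}),\ O(N),\ O(N^{2})\} = O(m^{2}N^{2}).
\end{align*}
```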

5. Conclusions and Future Work

This paper proposes AGPSO, a novel angular-guided MOPSO with an efficient density-based velocity update strategy and an angular-guided archive update strategy. The density-based velocity update strategy uses adjacent information (local best) to exploit the region around each particle and uses globally optimal information (global best) to search globally for better-performing locations. This strategy improves the quality of the generated particles by searching the space surrounding sparse particles. Furthermore, the angular-guided archive update strategy provides efficient convergence while maintaining a good population distribution. The combination of the proposed velocity update strategy and archive update strategy enables AGPSO to overcome the problems encountered by existing state-of-the-art algorithms when solving MaOPs. The performance of AGPSO is assessed using the WFG and MaF test suites with 5 to 10 objectives. Our experiments indicate that AGPSO has superior performance over four current MOPSOs (SMPSO, dMOPSO, NMPSO, and MaPSO) and four evolutionary algorithms (VaEA, θ-DEA, MOEA/D-DD, and SPEA2+SDE).

The density-based velocity update strategy and the angular-guided archive update strategy will be further studied in our future work. The performance of the density-based velocity update strategy is still unsatisfactory on some problems, which will be investigated further. Regarding the angular-guided archive update strategy, while it reduces the amount of computation and performs better than the traditional angle-based selection on concave and linear PFs, it sacrifices some performance on many-objective optimization problems with convex PFs; this will also be addressed in future work.

Data Availability

Our source code can be provided by contacting the corresponding author. The source codes of the compared state-of-the-art algorithms can be downloaded from http://jmetal.sourceforge.net/index.html or provided by the original authors, respectively. Also, most codes of the compared algorithms can be found in our source code, such as VaEA [27], θ-DEA [48], and MOEA/D-DD [50]. The test problems are WFG [23] and MaF [24]; WFG [23] can be found at http://jmetal.sourceforge.net/problems.html, and MaF [24] can be found at https://github.com/BIMK/PlatEMO.

Table 6: Continued.

Problem  Obj  AGPSO                 AGPSO-I                 AGPSO-II
WFG9     5    6.58e-01 (4.54e-03)   6.53e-01 (3.73e-03) -   6.53e-01 (2.99e-03) -
         8    7.29e-01 (9.55e-03)   7.20e-01 (1.21e-02) -   7.23e-01 (8.94e-03) -
         10   7.51e-01 (1.21e-02)   7.35e-01 (1.18e-02) -   7.36e-01 (1.04e-02) -
MaF1     5    1.30e-02 (1.29e-04)   1.30e-02 (1.10e-04) ~   1.30e-02 (1.04e-04) ~
         8    4.23e-05 (1.45e-06)   4.12e-05 (1.35e-06) -   4.14e-05 (1.78e-06) -
         10   6.12e-07 (4.40e-08)   6.07e-07 (2.30e-08) ~   6.10e-07 (2.83e-08) ~
MaF2     5    2.64e-01 (9.97e-04)   2.62e-01 (1.08e-03) -   2.61e-01 (1.10e-03) -
         8    2.46e-01 (1.64e-03)   2.40e-01 (1.89e-03) -   2.41e-01 (1.66e-03) -
         10   2.45e-01 (1.84e-03)   2.39e-01 (2.75e-03) -   2.41e-01 (2.19e-03) -
MaF3     5    9.90e-01 (4.96e-03)   9.76e-01 (2.92e-02) -   4.96e-01 (3.82e-01) -
         8    9.68e-01 (1.95e-03)   9.36e-01 (9.22e-02) -   3.73e-01 (3.08e-01) -
         10   9.92e-01 (2.55e-03)   9.72e-01 (1.86e-02) -   4.42e-01 (3.77e-01) -
MaF4     5    1.29e-01 (1.33e-03)   1.08e-01 (1.44e-02) -   1.18e-01 (2.08e-03) -
         8    6.03e-03 (1.88e-04)   3.33e-03 (1.02e-03) -   4.74e-03 (2.12e-04) -
         10   5.62e-04 (1.47e-05)   2.58e-04 (9.13e-05) -   3.65e-04 (2.55e-05) -
MaF5     5    8.01e-01 (1.81e-03)   8.00e-01 (3.40e-03) ~   8.00e-01 (2.06e-03) ~
         8    9.32e-01 (1.97e-03)   9.31e-01 (1.39e-03) -   9.31e-01 (1.58e-03) -
         10   9.67e-01 (1.09e-03)   9.66e-01 (7.51e-04) -   9.66e-01 (1.18e-03) -
MaF6     5    1.30e-01 (5.82e-05)   1.30e-01 (3.35e-05) -   1.30e-01 (5.60e-05) -
         8    1.03e-01 (2.78e-05)   4.51e-02 (4.39e-02) -   6.64e-02 (5.81e-02) -
         10   9.49e-02 (1.27e-02)   4.55e-02 (7.07e-02) -   4.40e-02 (4.92e-02) -
MaF7     5    2.51e-01 (4.40e-02)   3.09e-01 (2.58e-03) +   2.49e-01 (3.73e-02) ~
         8    1.91e-01 (4.11e-02)   2.54e-01 (4.64e-03) +   2.22e-01 (1.68e-02) +
         10   1.78e-01 (3.53e-02)   2.32e-01 (4.46e-03) +   2.01e-01 (9.95e-03) +

Best/all                34/48                 10/48                   4/48
Better/similar/worse                          33-/5~/10+              31-/12~/5+

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342; the Natural Science Foundation of Guangdong Province under Grant 2017A030313338; the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489; and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.


Page 14: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

Table 6 shows the comparisons of AGPSO and twomodified AGPSOs with different speed update formula onWFG1-WFG9 and MaF1-MaF7 using HV e mean HVvalues and the standard deviations (included in bracket afterthe mean HV results) in 30 runs are collected for comparisonAs in the second last row of Table 6 there are 34 best results in48 test problems obtained by AGPSO while AGPSO-I per-forms best in 10 cases and AGPSO-II performs best only in 4case respectively Regarding the comparison of AGPSO withAGPSO-I AGPSO performs better on 33 cases similarly on 5cases and worse on 10 cases e novel velocity updatestrategy based on density is effective in improving the per-formance of AGPSO on WFG4 WFG5 WFG7 WFG9 andMaF2-MaF6 From these comparison data the novel velocityupdate strategy can improve the AGPSO and enable particlegeneration to be more efficient under the coordination of the

proposed angular-guided update strategies Regarding thecomparison AGPSO and AGPSO-II AGPSO performs betteron 31 cases similarly on 12 cases and worse on 5 cases It alsoverified the superiority of the proposed novel velocity updatestrategy by comparing the variants of the traditional formuladefined by (12) Adding the local-best (lbest) positional in-formation of a particle to the traditional velocity formula isnot enough to control the search strength in the surroundingregion of this particle It also confirmed that the proposedformula can produce higher quality particles

46 Computational Complexity Analysis on AGPSO ecomputational complexity of AGPSO in one generation ismainly dominated by the environmental selection that is de-scribed in Algorithm 2 Algorithm 2 requires a time complexity

AGPSO-10obj-WFG3

0

5

10

15

20

Obj

ectiv

e val

ue

2 4 6 8 100Objective no

(a)

MaPSO-10obj-WFG3

2 4 6 8 100Objective no

05

10152025

Obj

ectiv

e val

ue

(b)

NMPSO-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

20

Obj

ectiv

e val

ue

(c)

DMOPSO-10obj-WFG3

0

5

10

Obj

ectiv

e val

ue

2 3 4 5 6 7 8 9 101Objective no

(d)

THETADEA-10obj-WFG3

2 4 6 8 100Objective no

05

10152025

Obj

ectiv

e val

ue

(e)

SPEA2+SDE-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

Obj

ectiv

e val

ue

(f )

SMPSO-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

05

10152025

Obj

ectiv

e val

ue

(g)

VaEA-10obj-WFG3

2 4 6 8 100Objective no

0

5

10

15

20

Obj

ectiv

e val

ue

(h)

MOEADD-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

20

Obj

ectiv

e val

ue(i)

TruePF-10obj-WFG3

2 3 4 5 6 7 8 9 101Objective no

0

5

10

15

Obj

ectiv

e val

ue

(j)

Figure 3 e final solution sets achieved by nine MOEAsMOPSOs and the true PF on WFG3 problems

14 Complexity

04

05

06

07H

V m

etric

val

ueWFG3

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(a)

02

04

06

08

HV

met

ric v

alue

WFG4

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(b)

0 200 400 600 800 1000Number of generations

018

02

022

024

026

HV

met

ric v

alue

MaF2

AGPSOVaEA

SPEA2 + SDENMPSO

(c)

Figure 4 e HV trend charts (averaged over 30 runs) of AGPSO SPEA2+ SDE and NMPSO on WFG3 WFG4 and MaF2 with 10objectives

Table 6 Comparison of results of AGPSO with different velocity using HV

Problem Obj AGPSO AGPSO-I AGPSO-II

WFG15 791eminus 01(347eminus 02) 748eminus 01(341eminus 02)minus 751eminus 01(388eminus 02)minus8 885eminus 01(254eminus 02) 814eminus 01(394eminus 02)minus 796eminus 01(397eminus 02)minus10 947eminus 01(477eminus 03) 949eminus 01(259eminus 03)+ 945eminus 01(349eminus 03)sim

WFG25 976eminus 01(513eminus 03) 976eminus 01(281eminus 03)sim 978eminus 01(519eminus 03)+8 987eminus 01(265eminus 03) 989eminus 01(221eminus 03)+ 988eminus 01(184eminus 03)sim10 992eminus 01(330eminus 03) 993eminus 01(138eminus 03eminus )+ 992eminus 01(197eminus 03)sim

WFG35 659eminus 01(502eminus 03) 666eminus 01(417eminus 03)+ 657eminus 01(540eminus 03)sim8 679eminus 01(520eminus 03) 686eminus 01(341eminus 03)+ 679eminus 01(402eminus 03)sim10 692eminus 01(715eminus 03) 702eminus 01(308eminus 03)+ 693eminus 01(413eminus 03)sim

WFG45 eminus 776eminus 01(569eminus 03) 757eminus 01(787eminus 03)minus 768eminus 01(507eminus 03)minus8 900eminus 01(101eminus 02) 885eminus 01(862eminus 03)minus 894eminus 01(843eminus 03)minus10 939eminus 01(742eminus 03) 931eminus 01(835eminus 03)minus 930eminus 01(884eminus 03)minus

WFG55 742eminus 01(466eminus 03) 734eminus 01(745eminus 03)minus 739eminus 01(773eminus 03)minus8 859eminus 01(474eminus 03) 852eminus 01(605eminus 03)minus 858eminus 01(545eminus 03)sim10 891eminus 01(528eminus 03) 873eminus 01(122eminus 02)minus 878eminus 01(221eminus 02)minus

WFG65 753eminus 01(752eminus 03) 692eminus 01(221eminus 03)minus 752eminus 01(385eminus 03)sim8 869eminus 01(124eminus 02) 810eminus 01(363eminus 03)minus 883eminus 01(564eminus 03)+10 912eminus 01(122eminus 02) 840eminus 01(357eminus 03)minus 923eminus 01(799eminus 03)+

WFG75 790eminus 01(213eminus 03) 781eminus 01(332eminus 03)minus 785eminus 01(278eminus 03)minus8 921eminus 01(289eminus 03) 915eminus 01(311eminus 03)minus 913eminus 01(318eminus 03)minus10 955eminus 01(972eminus 03) 955eminus 01(226eminus 03)sim 944eminus 01(159eminus 02)minus

WFG85 661eminus 01(562eminus 03) 644eminus 01(376eminus 03)minus 654eminus 01(595eminus 03)minus8 787eminus 01(711eminus 03) 780eminus 01(475eminus 03)minus 781eminus 01(589eminus 03)minus10 841eminus 01(548eminus 03) 845eminus 01(313eminus 03)+ 831eminus 01(176eminus 02)minus

Complexity 15

of O(mN) to obtain the union population U in line 1 andnormalize each particle inU in line 2 wherem is the number ofobjectives andN is the swarm size In lines 3ndash7 it requires a timecomplexity ofO (m2N2) to classify the population In lines 8ndash11it needs time complexity of O(N) In the loop for association italso requires a time complexity ofO(m2N2) as in lines 12ndash15 Inlines 17ndash20 it needs a time complexity ofO(N2) In conclusionthe overall worst time complexity of one generation in AGPSOis max O(mN) O(m2N2) O(N)1113864 O(N2) which performscompetitive time complexities with most optimizers eg [48]

5 Conclusions and Future Work

is paper proposes AGPSO which is a novel angular-guidedMOPSO with efficient density-based velocity update strategyand excellent angular-guided-based archive update strategyenovel density-based velocity update strategy uses adjacent in-formation (local-best) to explore information around particlesand uses globally optimal information (global-best) to search forbetter performing locations globally is strategy improves thequality of the particles produced by searching for the sur-rounding space of sparse particles Furthermore the angular-guided archive update strategy provides efficient convergencewhile maintaining good population distribution is combi-nation of proposed novel velocity update strategy and excellentarchive update strategy enables the proposed AGPSO toovercome the problems encountered in the existing state-of-the-art algorithms for solving MaOPs e performance of

AGPSO is assessed by usingWFG andMaF test suites with 5 to10 objectives Our experiments indicate that AGPSO has su-perior performance over four current PSOs (SMPSOdMOPSO NMPSO and MaPSO) and four evolutionary al-gorithms (VaEA θ-DEA MOEAD-DD and SPEA2+SDE)

is density-based velocity update strategy and angular-guided archive update strategy will be further studied in ourfuture work e performance of density velocity updatestrategy is not good enough on some partial problems whichwill be further studied Regarding angular-guided archive up-date strategy while reducing the amount of computation andperforming better than the tradition angle selection on concaveand linear PFs it also sacrifices a part of the performance againstmany objective optimization problems with convex PFs theimprovement of which will be further studied in future

Data Availability

Our source code could be provided by contacting the cor-responding author e source codes of the compared state-of-the-art algorithms can be downloaded from httpjmetalsourceforgenetindexhtml or provided by the original au-thor respectively Also most codes of the compared algo-rithm can be found in our source code such as VaEA [27]θ-DEA [48] and MOEADD [50] Test problems are theWFG [23] and MaF [24] WFG [23] can be found at httpjmetalsourceforgenetproblemshtml and MaF [24] can befound at httpsgithubcomBIMKPlatEMO

Table 6 Continued

Problem Obj AGPSO AGPSO-I AGPSO-II

WFG95 658eminus 01(454eminus 03) 653eminus 01(373eminus 03)minus 653eminus 01(299eminus 03)minus8 729eminus 01(955eminus 03) 720eminus 01(121eminus 02)minus 723eminus 01(894eminus 03)minus10 751eminus 01(121eminus 02) 735eminus 01(118eminus 02)minus 736eminus 01(104eminus 02)minus

MaF15 130eminus 02(129eminus 04) 130eminus 02(110eminus 04)sim 130eminus 02(104eminus 04)sim8 423eminus 05(145eminus 06) 412eminus 05(135eminus 06)minus 414eminus 05(178eminus 06)minus10 612eminus 07(440eminus 08) 607eminus 07(230eminus 08)sim 610eminus 07(283eminus 08)sim

MaF25 264eminus 01(997eminus 04) 262eminus 01(108eminus 03)minus 261eminus 01(110eminus 03)minus8 246eminus 01(164eminus 03) 240eminus 01(189eminus 03)minus 241eminus 01(166eminus 03)minus10 245eminus 01(184eminus 03) 239eminus 01(275eminus 03)minus 241eminus 01(219eminus 03)minus

MaF35 990eminus 01(496eminus 03) 976eminus 01(292eminus 02)minus 496eminus 01(382eminus 01)minus8 968eminus 01(195eminus 03) 936eminus 01(922eminus 02)minus 373eminus 01(308eminus 01)minus10 992eminus 01(255eminus 03) 972eminus 01(186eminus 02)minus 442eminus 01(377eminus 01)minus

MaF45 129eminus 01(133eminus 03) 108eminus 01(144eminus 02)minus 118eminus 01(208eminus 03)minus8 603eminus 03(188eminus 04) 333eminus 03(102eminus 03)minus 474eminus 03(212eminus 04)minus10 562eminus 04(147eminus 05) 258eminus 04(913eminus 05)minus 365eminus 04(255eminus 05)minus

MaF55 801eminus 01(181eminus 03) 800eminus 01(340eminus 03)sim 800eminus 01(206eminus 03)sim8 932eminus 01(197eminus 03) 931eminus 01(139eminus 03)minus 931eminus 01(158eminus 03)minus10 967eminus 01(109eminus 03) 966eminus 01(751eminus 04)minus 966eminus 01(118eminus 03)minus

MaF65 130eminus 01(582eminus 05) 130eminus 01(335eminus 05)minus 130eminus 01(560eminus 05)minus8 103eminus 01(278eminus 05) 451eminus 02(439eminus 02)minus 664eminus 02(581eminus 02)minus10 949eminus 02(127eminus 02) 455eminus 02(707eminus 02)minus 440eminus 02(492eminus 02)minus

MaF75 251eminus 01(440eminus 02) 309eminus 01(258eminus 03)+ 249eminus 01(373eminus 02)sim8 191eminus 01(411eminus 02) 254eminus 01(464eminus 03)+ 222eminus 01(168eminus 02)+10 178eminus 01(353eminus 02) 232eminus 01(446eminus 03)+ 201eminus 01(995eminus 03)+

Bestall 3448 1048 448Bettersimilarworse 33minus5sim10+ 31minus12sim5+

16 Complexity

Conflicts of Interest

e authors declare that they have no conflicts of interest

Acknowledgments

is work was supported in part by the National NaturalScience Foundation of China under Grants 6187611061872243 and 61702342 Natural Science Foundation ofGuangdong Province under Grant 2017A030313338Guangdong Basic and Applied Basic Research Foundationunder Grant 2020A151501489 and Shenzhen TechnologyPlan under Grants JCYJ20190808164211203 andJCYJ20180305124126741

References

[1] K Deb Multi-Objective Optimization Using EvolutionaryAlgorithms John Wiley amp Sons New York NY USA 2001

[2] K Deb A Pratap S Agarwal and T Meyarivan ldquoA fast andelitist multiobjective genetic algorithm NSGA-IIrdquo IEEETransactions on Evolutionary Computation vol 6 no 2pp 182ndash197 2002

[3] Q F Zhang and H Li ldquoMOEAD a multi-objective evolu-tionary algorithm based on decompositionrdquo IEEE Transac-tions on Evolutionary Computation vol 11 no 6 pp 712ndash7312007

[4] C A C Coello G T Pulido and M S Lechuga ldquoHandlingmultiple objectives with particle swarm optimizationrdquo IEEETransactions on Evolutionary Computation vol 8 no 3pp 256ndash279 2004

[5] W Hu and G G Yen ldquoAdaptive multi-objective particleswarm optimization based on parallel cell coordinate systemrdquoIEEE Transactions on Evolutionary Computation vol 19no 1 pp 1ndash18 2015

[6] J Kennedy and R C Eberhart ldquoParticle swarm optimizationrdquoin Proceedings of the IEEE International Conference on NeuralNetworks pp 1942ndash1948 Washington DC USA June 1995

[7] M Gong L Jiao H Du and L Bo ldquoMulti-objective immunealgorithm with nondominated neighbor-based selectionrdquoEvolutionary Computation vol 16 no 2 pp 225ndash255 2008

[8] R C Eberhart and Y H Shi ldquoParticle swarm optimizationdevelopments applications and resourcesrdquo in Proceedings ofthe 2001 Congress on Evolutionary Computation vol 1pp 81ndash86 Seoul Republic of Korea May 2001

[9] W Zhang G Li W Zhang J Liang and G G Yen ldquoA clusterbased PSO with leader updating mechanism and ring-to-pology for multimodal multi-objective optimizationrdquo Swarmand Evolutionary Computation vol 50 p 100569 In press2019

[10] D Tian and Z Shi ldquoMPSO modified particle swarm opti-mization and its applicationsrdquo Swarm and EvolutionaryComputation vol 41 pp 49ndash68 2018

[11] S Rahimi A Abdollahpouri and P Moradi ldquoA multi-ob-jective particle swarm optimization algorithm for communitydetection in complex networksrdquo Swarm and EvolutionaryComputation vol 39 pp 297ndash309 2018

[12] J Garcıa-Nieto E Lopez-Camacho Garcıa-GodoyM J Nebro A J Nebro and J F Aldana-Montes ldquoMulti-objective ligand-protein docking with particle swarm opti-mizersrdquo Swarm and Evolutionary Computation vol 44pp 439ndash452 2019

[13] M R Sierra and C A Coello Coello ldquoImproving PSO-basedmulti-objective optimization using crowding mutation andisin-dominancerdquo in Evolutionary Multi-Criterion Optimizationvol 3410 of Lecture Notes in Computer Science pp 505ndash519Sringer Berlin Germany 2005

[14] A J Nebro J J Durillo J Garcia-Nieto C A Coello CoelloF Luna and E Alba ldquoSMPSO a new PSO-based meta-heuristic for multi-objective optimizationrdquo in Proceedings ofthe 2009 IEEE Symposium on Computational Intelligence inMilti-Criteria Decision-Making pp 66ndash73 Nashville TNUSA March 2009

[15] Z Zhan J Li J Cao J Zhang H Chuang and Y ShildquoMultiple populations for multiple objectives a coevolu-tionary technique for solving multi-objective optimizationproblemsrdquo IEEE Transactions on Cybernetics vol 43 no 2pp 445ndash463 2013

[16] H Han W Lu L Zhang and J Qiao ldquoAdaptive gradientmultiobjective particle swarm optimizationrdquo IEEE Transac-tions on Cybernetics vol 48 no 11 pp 3067ndash3079 2018

[17] S Zapotecas Martınez and C A Coello Coello ldquoA multi-ob-jective particle swarm optimizer based on decompositionrdquo inProceedings of the 13th Annual Conference on Genetic andEvolutionary Computation pp 69ndash76 Dublin Ireland July 2011

[18] N AI Moubayed A Pertovski and J McCall ldquoD2MOPSOMOPSO based on decomposition and dominance with ar-chiving using crowding distance in objective and solutionspacesrdquo Evolutionary Computation vol 22 no 1 pp 47ndash772014

[19] Q Lin J Li Z Du J Chen and Z Ming ldquoA novel multi-objective particle swarm optimization with multiple searchstrategiesrdquo European Journal of Operational Researchvol 247 no 3 pp 732ndash744 2015

[20] E Zitzler and L iele ldquoMultiobjective evolutionary algo-rithms a comparative case study and the strength Paretoapproachrdquo IEEE Transactions on Evolutionary Computationvol 3 no 4 pp 257ndash271 1999

[21] D H Phan and J Suzuki ldquoR2-ibea R2 indicator basedevolutionary algorithm for multi-objective optimizationrdquo inProceedings of the 2013 IEEE Congress on EvolutionaryComputation pp 1836ndash1845 Cancun Mexico July 2013

[22] S Jia J Zhu B Du and H Yue ldquoIndicator-based particleswarm optimization with local searchrdquo in Proceedings of the2011 Seventh International Conference on Natural Compu-tation vol 2 pp 1180ndash1184 Shanghai China July 2011

[23] X Sun Y Chen Y Liu and D Gong ldquoIndicator-based setevolution particle swarm optimization for many-objectiveproblemsrdquo Soft Computing vol 20 no 6 pp 2219ndash22322015

[24] Q Lin S Liu Q Zhu et al ldquoParticle swarm optimization withA balanceable fitness estimation for many-objective optimi-zation problemsrdquo IEEE Transactions on Evolutionary Com-putation vol 22 no 1 pp 32ndash46 2018

[25] L-X Wei X Li R Fan H Sun and Z-Y Hu ldquoA hybridmultiobjective particle swarm optimization algorithm basedon R2 indicatorrdquo IEEE Access vol 6 pp 14710ndash14721 2018

[26] F Li J C Liu S B Tan and X Yu ldquoR2-MOPSO a multi-objective particle swarm optimizer based on R2-indicator anddecompositionrdquo in Proceedings of the 2015 IEEE Congress onEvolutionary Computation (CEC) pp 3148ndash3155 SendaiJapan May 2015

[27] K Narukawa and T Rodemann ldquoExamining the performanceof evolutionary many-objective optimization algorithms on areal-world applicationrdquo in Proceedings of the Sixth

Complexity 17

International Conference on Genetic and Evolutionary Com-puting pp 316ndash319 Kitakushu Japan August 2012

[28] F Lopez-Pires ldquoMany-objective resource allocation in cloudcomputing datacentersrdquo in Proceedings of the 2016 IEEEInternational Conference on Cloud Engineering Workshop(IC2EW) pp 213ndash215 Berlin Germany April 2016

[29] X F Liu Z H Zhan Y Gao J Zhang S Kwong andJ Zhang ldquoCoevolutionary particle swarm optimization withbottleneck objective learning strategy for many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 23 no 4 pp 587ndash602 In press 2019

[30] Y Yuan H Xu B Wang and X Yao ldquoA new dominancerelation-based evolutionary algorithm for many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 20 no 1 pp 16ndash37 2016

[31] R Cheng M Li Y Tian et al ldquoA benchmark test suite forevolutionary many-objective optimizationrdquo Complex amp In-telligent Systems vol 3 no 1 pp 67ndash81 2017

[32] H Ishibuchi Y Setoguchi H Masuda and Y NojimaldquoPerformance of decomposition-based many-objective algo-rithms strongly depends on Pareto front shapesrdquo IEEETransactions on Evolutionary Computation vol 21 no 2pp 169ndash190 2017

[33] X He Y Zhou and Z Chen ldquoAn evolution path based re-production operator for many-objective optimizationrdquo IEEETransactions on Evolutionary Computation vol 23 no 1pp 29ndash43 2017

[34] N Lynn M Z Ali and P N Suganthan ldquoPopulation to-pologies for particle swarm optimization and differentialevolutionrdquo Swarm and Evolutionary Computation vol 39pp 24ndash35 2018

[35] Q Zhu Q Lin W Chen et al ldquoAn external archive-guidedmultiobjective particle swarm optimization algorithmrdquo IEEETransactions on Cybernetics vol 47 no 9 pp 2794ndash28082017

[36] W Hu G G Yen and G Luo ldquoMany-objective particleswarm optimization using two-stage strategy and parallel cellcoordinate systemrdquo IEEE Transactions on Cybernetics vol 47no 6 pp 1446ndash1459 2017

[37] M Laumanns L iele K Deb and E Zitzler ldquoCombiningconvergence and diversity in evolutionary multiobjectiveoptimizationrdquo Evolutionary Computation vol 10 no 3pp 263ndash282 2002

[38] Y Tian R Cheng X Zhang Y Su and Y Jin ldquoAstrengthened dominance relation considering convergenceand diversity for evolutionary many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 23no 2 pp 331ndash345 2019

[39] H Chen Y Tian W Pedrycz G Wu R Wang and L WangldquoHyperplane assisted evolutionary algorithm for many-ob-jective optimization problemsrdquo IEEE Transactions on Cy-bernetics pp 1ndash14 In press 2019

[40] K Deb and H Jain ldquoAn evolutionary many-objective opti-mization algorithm using reference-point-based non-dominated sorting approach Part I solving problems withbox constraintsrdquo IEEE Transactions on Evolutionary Com-putation vol 18 no 4 pp 577ndash601 2014

[41] X He Y Zhou Z Chen and Q Zhang ldquoEvolutionary many-objective optimization based on dynamical decompositionrdquoIEEE Transactions on Evolutionary Computation vol 23no 3 pp 361ndash375 2019

[42] F Gu and Y-M Cheung ldquoSelf-organizing map-based weightdesign for decomposition-based many-objective evolutionary

algorithmrdquo IEEE Transactions on Evolutionary Computationvol 22 no 2 pp 211ndash225 2018

[43] X Cai H Sun and Z Fan ldquoA diversity indicator based onreference vectors for many-objective optimizationrdquo Infor-mation Sciences vol 430-431 pp 467ndash486 2018

[44] T Pamulapati R Mallipeddi and P N Suganthan ldquoISDE+mdashan indicator for multi and many-objective optimiza-tionrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 346ndash352 2019

[45] Y N Sun G G Yen and Z Yi ldquoIGD indicator-basedevolutionary algorithm for many-objective optimizationproblemrdquo IEEE Transactions on Evolutionary Computationvol 23 no 2 pp 173ndash187 2019

[46] Y Xiang Y Zhou Z Chen and J Zhang ldquoA many-objectiveparticle swarm optimizer with leaders selected from historicalsolutions by using scalar projectionsrdquo IEEE Transactions onCybernetics pp 1ndash14 In press 2019

[47] M Li S Yang and X Liu ldquoShift-based density estimation forPareto-based algorithms in many-objective optimizationrdquoIEEE Transactions on Evolutionary Computation vol 18no 3 pp 348ndash365 2014

[48] Y Xiang Y R Zhou and M Q Li ldquoA vector angle basedevolutionary algorithm for unconstrained many-objectiveoptimizationrdquo IEEE Transactions on Evolutionary Compu-tation vol 21 no 1 pp 131ndash152 2017

[49] K Li K Deb Q F Zhang and S Kwong ldquoAn evolutionarymany-objective optimization algorithm based on dominanceand decompositionrdquo IEEE Transactions on EvolutionaryComputation vol 19 no 5 pp 694ndash716 2015

[50] S Huband L Barone R while and P Hingston ldquoA scalablemulti-objective test problem tookitrdquo in Proceedings of the 3rdInternational Conference on Evolutionary Multi-CriterionOptimization pp 280ndash295vol 3410 of Lecture Notes inComputer Science Guanajuato Mexico March 2005

[51] Q Lin S Liu K Wong et al ldquoA clustering-based evolu-tionary algorithm for many-objective optimization prob-lemsrdquo IEEE Transactions on Evolutionary Computationvol 23 no 3 pp 391ndash405 In press 2018

[52] K Deb L iele M Laumanns and E Zitzler ldquoScalable testproblems for evolutionary multi-objective optimizationrdquo inEvolutionary Multi-Objective Optimization Ser AdvancedInformation and Knowledge Processing A Abraham L Jainand R Goldberg Eds pp 105ndash145 Springer Lon-donLondon UK 2005

18 Complexity

Page 15: ANovelAngular-GuidedParticleSwarmOptimizerfor …downloads.hindawi.com/journals/complexity/2020/6238206.pdfperformance of AGPSO, some experimental studies are provided in Section 4

04

05

06

07H

V m

etric

val

ueWFG3

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(a)

02

04

06

08

HV

met

ric v

alue

WFG4

AGPSOVaEA

SPEA2 + SDENMPSO

200 400 600 800 10000Number of generations

(b)

0 200 400 600 800 1000Number of generations

018

02

022

024

026

HV

met

ric v

alue

MaF2

AGPSOVaEA

SPEA2 + SDENMPSO

(c)

Figure 4 e HV trend charts (averaged over 30 runs) of AGPSO SPEA2+ SDE and NMPSO on WFG3 WFG4 and MaF2 with 10objectives

Table 6 Comparison of results of AGPSO with different velocity using HV

Problem Obj AGPSO AGPSO-I AGPSO-II

WFG15 791eminus 01(347eminus 02) 748eminus 01(341eminus 02)minus 751eminus 01(388eminus 02)minus8 885eminus 01(254eminus 02) 814eminus 01(394eminus 02)minus 796eminus 01(397eminus 02)minus10 947eminus 01(477eminus 03) 949eminus 01(259eminus 03)+ 945eminus 01(349eminus 03)sim

WFG25 976eminus 01(513eminus 03) 976eminus 01(281eminus 03)sim 978eminus 01(519eminus 03)+8 987eminus 01(265eminus 03) 989eminus 01(221eminus 03)+ 988eminus 01(184eminus 03)sim10 992eminus 01(330eminus 03) 993eminus 01(138eminus 03eminus )+ 992eminus 01(197eminus 03)sim

WFG35 659eminus 01(502eminus 03) 666eminus 01(417eminus 03)+ 657eminus 01(540eminus 03)sim8 679eminus 01(520eminus 03) 686eminus 01(341eminus 03)+ 679eminus 01(402eminus 03)sim10 692eminus 01(715eminus 03) 702eminus 01(308eminus 03)+ 693eminus 01(413eminus 03)sim

WFG45 eminus 776eminus 01(569eminus 03) 757eminus 01(787eminus 03)minus 768eminus 01(507eminus 03)minus8 900eminus 01(101eminus 02) 885eminus 01(862eminus 03)minus 894eminus 01(843eminus 03)minus10 939eminus 01(742eminus 03) 931eminus 01(835eminus 03)minus 930eminus 01(884eminus 03)minus

WFG55 742eminus 01(466eminus 03) 734eminus 01(745eminus 03)minus 739eminus 01(773eminus 03)minus8 859eminus 01(474eminus 03) 852eminus 01(605eminus 03)minus 858eminus 01(545eminus 03)sim10 891eminus 01(528eminus 03) 873eminus 01(122eminus 02)minus 878eminus 01(221eminus 02)minus

WFG65 753eminus 01(752eminus 03) 692eminus 01(221eminus 03)minus 752eminus 01(385eminus 03)sim8 869eminus 01(124eminus 02) 810eminus 01(363eminus 03)minus 883eminus 01(564eminus 03)+10 912eminus 01(122eminus 02) 840eminus 01(357eminus 03)minus 923eminus 01(799eminus 03)+

WFG75 790eminus 01(213eminus 03) 781eminus 01(332eminus 03)minus 785eminus 01(278eminus 03)minus8 921eminus 01(289eminus 03) 915eminus 01(311eminus 03)minus 913eminus 01(318eminus 03)minus10 955eminus 01(972eminus 03) 955eminus 01(226eminus 03)sim 944eminus 01(159eminus 02)minus

WFG85 661eminus 01(562eminus 03) 644eminus 01(376eminus 03)minus 654eminus 01(595eminus 03)minus8 787eminus 01(711eminus 03) 780eminus 01(475eminus 03)minus 781eminus 01(589eminus 03)minus10 841eminus 01(548eminus 03) 845eminus 01(313eminus 03)+ 831eminus 01(176eminus 02)minus

Complexity 15

of O(mN) to obtain the union population U in line 1 andnormalize each particle inU in line 2 wherem is the number ofobjectives andN is the swarm size In lines 3ndash7 it requires a timecomplexity ofO (m2N2) to classify the population In lines 8ndash11it needs time complexity of O(N) In the loop for association italso requires a time complexity ofO(m2N2) as in lines 12ndash15 Inlines 17ndash20 it needs a time complexity ofO(N2) In conclusionthe overall worst time complexity of one generation in AGPSOis max O(mN) O(m2N2) O(N)1113864 O(N2) which performscompetitive time complexities with most optimizers eg [48]

5 Conclusions and Future Work

is paper proposes AGPSO which is a novel angular-guidedMOPSO with efficient density-based velocity update strategyand excellent angular-guided-based archive update strategyenovel density-based velocity update strategy uses adjacent in-formation (local-best) to explore information around particlesand uses globally optimal information (global-best) to search forbetter performing locations globally is strategy improves thequality of the particles produced by searching for the sur-rounding space of sparse particles Furthermore the angular-guided archive update strategy provides efficient convergencewhile maintaining good population distribution is combi-nation of proposed novel velocity update strategy and excellentarchive update strategy enables the proposed AGPSO toovercome the problems encountered in the existing state-of-the-art algorithms for solving MaOPs e performance of

AGPSO is assessed by usingWFG andMaF test suites with 5 to10 objectives Our experiments indicate that AGPSO has su-perior performance over four current PSOs (SMPSOdMOPSO NMPSO and MaPSO) and four evolutionary al-gorithms (VaEA θ-DEA MOEAD-DD and SPEA2+SDE)

is density-based velocity update strategy and angular-guided archive update strategy will be further studied in ourfuture work e performance of density velocity updatestrategy is not good enough on some partial problems whichwill be further studied Regarding angular-guided archive up-date strategy while reducing the amount of computation andperforming better than the tradition angle selection on concaveand linear PFs it also sacrifices a part of the performance againstmany objective optimization problems with convex PFs theimprovement of which will be further studied in future

Data Availability

Our source code could be provided by contacting the cor-responding author e source codes of the compared state-of-the-art algorithms can be downloaded from httpjmetalsourceforgenetindexhtml or provided by the original au-thor respectively Also most codes of the compared algo-rithm can be found in our source code such as VaEA [27]θ-DEA [48] and MOEADD [50] Test problems are theWFG [23] and MaF [24] WFG [23] can be found at httpjmetalsourceforgenetproblemshtml and MaF [24] can befound at httpsgithubcomBIMKPlatEMO

Table 6: Continued.

Problem  Obj  AGPSO                 AGPSO-I                  AGPSO-II
WFG9      5   6.58e−01 (4.54e−03)   6.53e−01 (3.73e−03) −    6.53e−01 (2.99e−03) −
          8   7.29e−01 (9.55e−03)   7.20e−01 (1.21e−02) −    7.23e−01 (8.94e−03) −
         10   7.51e−01 (1.21e−02)   7.35e−01 (1.18e−02) −    7.36e−01 (1.04e−02) −
MaF1      5   1.30e−02 (1.29e−04)   1.30e−02 (1.10e−04) ∼    1.30e−02 (1.04e−04) ∼
          8   4.23e−05 (1.45e−06)   4.12e−05 (1.35e−06) −    4.14e−05 (1.78e−06) −
         10   6.12e−07 (4.40e−08)   6.07e−07 (2.30e−08) ∼    6.10e−07 (2.83e−08) ∼
MaF2      5   2.64e−01 (9.97e−04)   2.62e−01 (1.08e−03) −    2.61e−01 (1.10e−03) −
          8   2.46e−01 (1.64e−03)   2.40e−01 (1.89e−03) −    2.41e−01 (1.66e−03) −
         10   2.45e−01 (1.84e−03)   2.39e−01 (2.75e−03) −    2.41e−01 (2.19e−03) −
MaF3      5   9.90e−01 (4.96e−03)   9.76e−01 (2.92e−02) −    4.96e−01 (3.82e−01) −
          8   9.68e−01 (1.95e−03)   9.36e−01 (9.22e−02) −    3.73e−01 (3.08e−01) −
         10   9.92e−01 (2.55e−03)   9.72e−01 (1.86e−02) −    4.42e−01 (3.77e−01) −
MaF4      5   1.29e−01 (1.33e−03)   1.08e−01 (1.44e−02) −    1.18e−01 (2.08e−03) −
          8   6.03e−03 (1.88e−04)   3.33e−03 (1.02e−03) −    4.74e−03 (2.12e−04) −
         10   5.62e−04 (1.47e−05)   2.58e−04 (9.13e−05) −    3.65e−04 (2.55e−05) −
MaF5      5   8.01e−01 (1.81e−03)   8.00e−01 (3.40e−03) ∼    8.00e−01 (2.06e−03) ∼
          8   9.32e−01 (1.97e−03)   9.31e−01 (1.39e−03) −    9.31e−01 (1.58e−03) −
         10   9.67e−01 (1.09e−03)   9.66e−01 (7.51e−04) −    9.66e−01 (1.18e−03) −
MaF6      5   1.30e−01 (5.82e−05)   1.30e−01 (3.35e−05) −    1.30e−01 (5.60e−05) −
          8   1.03e−01 (2.78e−05)   4.51e−02 (4.39e−02) −    6.64e−02 (5.81e−02) −
         10   9.49e−02 (1.27e−02)   4.55e−02 (7.07e−02) −    4.40e−02 (4.92e−02) −
MaF7      5   2.51e−01 (4.40e−02)   3.09e−01 (2.58e−03) +    2.49e−01 (3.73e−02) ∼
          8   1.91e−01 (4.11e−02)   2.54e−01 (4.64e−03) +    2.22e−01 (1.68e−02) +
         10   1.78e−01 (3.53e−02)   2.32e−01 (4.46e−03) +    2.01e−01 (9.95e−03) +
Best/all       34/48                 10/48                    4/48
Better/similar/worse                 33−/5∼/10+               31−/12∼/5+


Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported in part by the National Natural Science Foundation of China under Grants 61876110, 61872243, and 61702342, the Natural Science Foundation of Guangdong Province under Grant 2017A030313338, the Guangdong Basic and Applied Basic Research Foundation under Grant 2020A151501489, and the Shenzhen Technology Plan under Grants JCYJ20190808164211203 and JCYJ20180305124126741.

References

[1] K. Deb, Multi-Objective Optimization Using Evolutionary Algorithms, John Wiley & Sons, New York, NY, USA, 2001.
[2] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.
[3] Q. F. Zhang and H. Li, "MOEA/D: a multi-objective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.
[4] C. A. C. Coello, G. T. Pulido, and M. S. Lechuga, "Handling multiple objectives with particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 256–279, 2004.
[5] W. Hu and G. G. Yen, "Adaptive multi-objective particle swarm optimization based on parallel cell coordinate system," IEEE Transactions on Evolutionary Computation, vol. 19, no. 1, pp. 1–18, 2015.
[6] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, pp. 1942–1948, Washington, DC, USA, June 1995.
[7] M. Gong, L. Jiao, H. Du, and L. Bo, "Multi-objective immune algorithm with nondominated neighbor-based selection," Evolutionary Computation, vol. 16, no. 2, pp. 225–255, 2008.
[8] R. C. Eberhart and Y. H. Shi, "Particle swarm optimization: developments, applications and resources," in Proceedings of the 2001 Congress on Evolutionary Computation, vol. 1, pp. 81–86, Seoul, Republic of Korea, May 2001.
[9] W. Zhang, G. Li, W. Zhang, J. Liang, and G. G. Yen, "A cluster based PSO with leader updating mechanism and ring-topology for multimodal multi-objective optimization," Swarm and Evolutionary Computation, vol. 50, p. 100569, 2019.
[10] D. Tian and Z. Shi, "MPSO: modified particle swarm optimization and its applications," Swarm and Evolutionary Computation, vol. 41, pp. 49–68, 2018.
[11] S. Rahimi, A. Abdollahpouri, and P. Moradi, "A multi-objective particle swarm optimization algorithm for community detection in complex networks," Swarm and Evolutionary Computation, vol. 39, pp. 297–309, 2018.
[12] J. García-Nieto, E. López-Camacho, M. J. García-Godoy, A. J. Nebro, and J. F. Aldana-Montes, "Multi-objective ligand-protein docking with particle swarm optimizers," Swarm and Evolutionary Computation, vol. 44, pp. 439–452, 2019.
[13] M. R. Sierra and C. A. Coello Coello, "Improving PSO-based multi-objective optimization using crowding, mutation and ∈-dominance," in Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 505–519, Springer, Berlin, Germany, 2005.
[14] A. J. Nebro, J. J. Durillo, J. Garcia-Nieto, C. A. Coello Coello, F. Luna, and E. Alba, "SMPSO: a new PSO-based metaheuristic for multi-objective optimization," in Proceedings of the 2009 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making, pp. 66–73, Nashville, TN, USA, March 2009.
[15] Z. Zhan, J. Li, J. Cao, J. Zhang, H. Chuang, and Y. Shi, "Multiple populations for multiple objectives: a coevolutionary technique for solving multi-objective optimization problems," IEEE Transactions on Cybernetics, vol. 43, no. 2, pp. 445–463, 2013.
[16] H. Han, W. Lu, L. Zhang, and J. Qiao, "Adaptive gradient multiobjective particle swarm optimization," IEEE Transactions on Cybernetics, vol. 48, no. 11, pp. 3067–3079, 2018.
[17] S. Zapotecas Martínez and C. A. Coello Coello, "A multi-objective particle swarm optimizer based on decomposition," in Proceedings of the 13th Annual Conference on Genetic and Evolutionary Computation, pp. 69–76, Dublin, Ireland, July 2011.
[18] N. Al Moubayed, A. Petrovski, and J. McCall, "D2MOPSO: MOPSO based on decomposition and dominance with archiving using crowding distance in objective and solution spaces," Evolutionary Computation, vol. 22, no. 1, pp. 47–77, 2014.
[19] Q. Lin, J. Li, Z. Du, J. Chen, and Z. Ming, "A novel multi-objective particle swarm optimization with multiple search strategies," European Journal of Operational Research, vol. 247, no. 3, pp. 732–744, 2015.
[20] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, no. 4, pp. 257–271, 1999.
[21] D. H. Phan and J. Suzuki, "R2-IBEA: R2 indicator based evolutionary algorithm for multi-objective optimization," in Proceedings of the 2013 IEEE Congress on Evolutionary Computation, pp. 1836–1845, Cancun, Mexico, July 2013.
[22] S. Jia, J. Zhu, B. Du, and H. Yue, "Indicator-based particle swarm optimization with local search," in Proceedings of the 2011 Seventh International Conference on Natural Computation, vol. 2, pp. 1180–1184, Shanghai, China, July 2011.
[23] X. Sun, Y. Chen, Y. Liu, and D. Gong, "Indicator-based set evolution particle swarm optimization for many-objective problems," Soft Computing, vol. 20, no. 6, pp. 2219–2232, 2015.
[24] Q. Lin, S. Liu, Q. Zhu et al., "Particle swarm optimization with a balanceable fitness estimation for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 22, no. 1, pp. 32–46, 2018.
[25] L.-X. Wei, X. Li, R. Fan, H. Sun, and Z.-Y. Hu, "A hybrid multiobjective particle swarm optimization algorithm based on R2 indicator," IEEE Access, vol. 6, pp. 14710–14721, 2018.
[26] F. Li, J. C. Liu, S. B. Tan, and X. Yu, "R2-MOPSO: a multi-objective particle swarm optimizer based on R2-indicator and decomposition," in Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC), pp. 3148–3155, Sendai, Japan, May 2015.
[27] K. Narukawa and T. Rodemann, "Examining the performance of evolutionary many-objective optimization algorithms on a real-world application," in Proceedings of the Sixth International Conference on Genetic and Evolutionary Computing, pp. 316–319, Kitakyushu, Japan, August 2012.
[28] F. Lopez-Pires, "Many-objective resource allocation in cloud computing datacenters," in Proceedings of the 2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), pp. 213–215, Berlin, Germany, April 2016.
[29] X. F. Liu, Z. H. Zhan, Y. Gao, J. Zhang, S. Kwong, and J. Zhang, "Coevolutionary particle swarm optimization with bottleneck objective learning strategy for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 4, pp. 587–602, 2019.
[30] Y. Yuan, H. Xu, B. Wang, and X. Yao, "A new dominance relation-based evolutionary algorithm for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 20, no. 1, pp. 16–37, 2016.
[31] R. Cheng, M. Li, Y. Tian et al., "A benchmark test suite for evolutionary many-objective optimization," Complex & Intelligent Systems, vol. 3, no. 1, pp. 67–81, 2017.
[32] H. Ishibuchi, Y. Setoguchi, H. Masuda, and Y. Nojima, "Performance of decomposition-based many-objective algorithms strongly depends on Pareto front shapes," IEEE Transactions on Evolutionary Computation, vol. 21, no. 2, pp. 169–190, 2017.
[33] X. He, Y. Zhou, and Z. Chen, "An evolution path based reproduction operator for many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 1, pp. 29–43, 2017.
[34] N. Lynn, M. Z. Ali, and P. N. Suganthan, "Population topologies for particle swarm optimization and differential evolution," Swarm and Evolutionary Computation, vol. 39, pp. 24–35, 2018.
[35] Q. Zhu, Q. Lin, W. Chen et al., "An external archive-guided multiobjective particle swarm optimization algorithm," IEEE Transactions on Cybernetics, vol. 47, no. 9, pp. 2794–2808, 2017.
[36] W. Hu, G. G. Yen, and G. Luo, "Many-objective particle swarm optimization using two-stage strategy and parallel cell coordinate system," IEEE Transactions on Cybernetics, vol. 47, no. 6, pp. 1446–1459, 2017.
[37] M. Laumanns, L. Thiele, K. Deb, and E. Zitzler, "Combining convergence and diversity in evolutionary multiobjective optimization," Evolutionary Computation, vol. 10, no. 3, pp. 263–282, 2002.
[38] Y. Tian, R. Cheng, X. Zhang, Y. Su, and Y. Jin, "A strengthened dominance relation considering convergence and diversity for evolutionary many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 331–345, 2019.
[39] H. Chen, Y. Tian, W. Pedrycz, G. Wu, R. Wang, and L. Wang, "Hyperplane assisted evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.
[40] K. Deb and H. Jain, "An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints," IEEE Transactions on Evolutionary Computation, vol. 18, no. 4, pp. 577–601, 2014.
[41] X. He, Y. Zhou, Z. Chen, and Q. Zhang, "Evolutionary many-objective optimization based on dynamical decomposition," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 361–375, 2019.
[42] F. Gu and Y.-M. Cheung, "Self-organizing map-based weight design for decomposition-based many-objective evolutionary algorithm," IEEE Transactions on Evolutionary Computation, vol. 22, no. 2, pp. 211–225, 2018.
[43] X. Cai, H. Sun, and Z. Fan, "A diversity indicator based on reference vectors for many-objective optimization," Information Sciences, vol. 430-431, pp. 467–486, 2018.
[44] T. Pamulapati, R. Mallipeddi, and P. N. Suganthan, "ISDE+—an indicator for multi and many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 346–352, 2019.
[45] Y. N. Sun, G. G. Yen, and Z. Yi, "IGD indicator-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 2, pp. 173–187, 2019.
[46] Y. Xiang, Y. Zhou, Z. Chen, and J. Zhang, "A many-objective particle swarm optimizer with leaders selected from historical solutions by using scalar projections," IEEE Transactions on Cybernetics, pp. 1–14, 2019, in press.
[47] M. Li, S. Yang, and X. Liu, "Shift-based density estimation for Pareto-based algorithms in many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 18, no. 3, pp. 348–365, 2014.
[48] Y. Xiang, Y. R. Zhou, and M. Q. Li, "A vector angle based evolutionary algorithm for unconstrained many-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 21, no. 1, pp. 131–152, 2017.
[49] K. Li, K. Deb, Q. F. Zhang, and S. Kwong, "An evolutionary many-objective optimization algorithm based on dominance and decomposition," IEEE Transactions on Evolutionary Computation, vol. 19, no. 5, pp. 694–716, 2015.
[50] S. Huband, L. Barone, R. While, and P. Hingston, "A scalable multi-objective test problem toolkit," in Proceedings of the 3rd International Conference on Evolutionary Multi-Criterion Optimization, vol. 3410 of Lecture Notes in Computer Science, pp. 280–295, Guanajuato, Mexico, March 2005.
[51] Q. Lin, S. Liu, K. Wong et al., "A clustering-based evolutionary algorithm for many-objective optimization problems," IEEE Transactions on Evolutionary Computation, vol. 23, no. 3, pp. 391–405, 2018.
[52] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," in Evolutionary Multi-Objective Optimization, Ser. Advanced Information and Knowledge Processing, A. Abraham, L. Jain, and R. Goldberg, Eds., pp. 105–145, Springer London, London, UK, 2005.


