
Integrating User Preferences and Decomposition Methods for Many-objective Optimization

Asad Mohammadi∗, Mohammad Nabi Omidvar∗, Xiaodong Li∗ and Kalyanmoy Deb†

∗ School of Computer Science and IT

RMIT University, Melbourne, Australia

Emails: {asad.mohammadi, mohammad.omidvar, xiaodong.li}@rmit.edu.au

† Computational Optimization and Innovation Laboratory (COIN)

Michigan State University, Michigan, USA

Email: [email protected]

Abstract—Evolutionary algorithms that rely on dominance ranking often suffer from a low selection pressure problem when dealing with many-objective problems. Decomposition and user-preference based methods can help to alleviate this problem to a great extent. In this paper, a user-preference based evolutionary multi-objective algorithm is proposed that uses decomposition methods for solving many-objective problems. Decomposition techniques that are widely used in multi-objective evolutionary optimization require a set of evenly distributed weight vectors to generate a diverse set of solutions on the Pareto-optimal front. The newly proposed algorithm, R-MEAD2, improves the scalability of its predecessor, R-MEAD, which uses a simplex-lattice design method for generating weight vectors; this makes the population size dependent on the dimensionality of the objective space. R-MEAD2 uses a uniform random number generator to remove the coupling between the dimensionality and the population size. This paper shows that a uniform random number generator is simple and able to generate evenly distributed points in a high-dimensional space. Our comparative study shows that R-MEAD2 outperforms the dominance-based method R-NSGA-II on many-objective problems.

I. INTRODUCTION

Evolutionary multi-objective optimization (EMO) algorithms have been successfully applied to many real-world problems in the past decade [1]. It has been shown that evolutionary multi-objective approaches can find evenly distributed and well-converged solutions on two or three objective problems [2]. However, when dealing with many-objective optimization problems, which involve four or more objectives, the performance of EMO algorithms degrades rapidly [3], [4]. Hence, there is a need for developing EMO methods that can efficiently solve many-objective problems.

A number of challenges exist in solving many-objective optimization problems [3], [5]. Firstly, in a high-dimensional objective space, most of the solutions are non-dominated with respect to each other, even in the initial random population. Thus, there is not adequate selection pressure, making the search process very slow or even completely stagnant when a dominance-based algorithm is used. Secondly, generating solutions to approximate the entire Pareto front becomes computationally expensive [3], [4], [6]. Thirdly, it is difficult to visualize the Pareto-optimal front in large dimensions, which makes it difficult for the decision makers to select their preferred solutions.

Adopting a user-preference based approach can largely alleviate the second problem. In a user-preference based approach, the aim is to find a set of solutions on a smaller region of the Pareto-optimal front which is preferred by the decision maker. This requires less computational resources, since a more focused and guided search is performed instead of approximating the entire Pareto-optimal front, which is of practical value when dealing with many-objective problems. One popular type of user-preference based EMO algorithm is the a priori method, which allows a decision maker to provide the preference information (e.g. a reference point) at the beginning of the search. Reference point based [7], [8], reference direction based [9], and light beam based [10] EMO approaches are a few attempts in this area.

Another approach for better handling many-objective optimization problems is to use decomposition-based methods, which convert a multi-objective problem into a set of single-objective problems. This feature makes them less sensitive to the selection pressure issue, which is the prime source of the degrading performance of dominance-based approaches on many-objective problems. In other words, in dominance-based algorithms, individuals (with many objectives) become non-dominated with respect to each other, hence the selection pressure diminishes rapidly, making it difficult for the population to move towards the Pareto-optimal front [11]. Various techniques for decomposing a multi-objective problem have been developed, including boundary intersection [12], [13], Tchebycheff, and weighted-sum [14]. Some popular EMO algorithms that employ such decomposition methods are MOGLS [15] and MOEA/D [16].

As explained before, utilizing user-preference information and a decomposition-based approach can improve the efficiency of an EMO algorithm in solving many-objective problems. Moreover, in most real-world situations users are not interested in finding the entire Pareto-optimal front, and some form of preference information is usually available, which makes preference-based EMO approaches more applicable in real-world settings. This makes a decomposition-based algorithm that takes the preference information into account a promising approach for tackling many-objective problems. R-MEAD [17] is the first attempt at incorporating user-preference information into a decomposition-based EMO algorithm. R-MEAD uses weighted-sum and Tchebycheff as decomposition methods and an a priori approach to search for preferred regions.


However, it has only been applied to two and three-objective problems [17]. The main difficulty in applying R-MEAD to many-objective optimization lies in initializing the set of weight vectors: as it inherited the simplex-lattice design method from MOEA/D, the number of sample points is governed by the dimensionality of the problem. In this paper, we aim to develop an algorithm called R-MEAD2 which resolves this issue of R-MEAD. In short, R-MEAD2 extends R-MEAD in the following aspects:

1) A new method for initializing the weight vectors is used, which makes R-MEAD2 scalable and applicable to many-objective optimization problems. This method decouples the population size from the number of objectives.

2) A uniform random number generator (RNG) is used, which simplifies the update mechanism of the weight vectors and makes it more efficient.

3) The performance of R-MEAD2 is compared with the state-of-the-art R-NSGA-II [7].

Recently, an algorithm called UMOEA/D [18] was proposed which detaches the population size from the number of objectives. It replaces the simplex-lattice design with the good lattice point (GLP) method [19]. That study shows that UMOEA/D generates more uniformly distributed solutions than MOEA/D with the simplex-lattice design. In Section IV-A we will show that when the number of objectives goes beyond eight, a simple uniform random number generator (RNG), which is commonly available in various programming languages, produces more uniformly distributed points than GLP according to a measure called the centered L2-discrepancy [20]. With fewer than eight objectives, RNG and GLP have similar performance in most cases. R-MEAD2 replaces the simplex-lattice design with a uniform random number generator. Employing RNG is not only simpler and more efficient than GLP, but it also generates more uniformly distributed points when dealing with problems with more than eight objectives. Another improvement of R-MEAD2 over R-MEAD relates to updating the weight vectors. In R-MEAD, once the weight vectors are initialized, their relative distances to each other stay the same, because a single descent direction vector, calculated from the locations of the best and worst weight vectors, is applied to all weight vectors. In R-MEAD2, a simple hill climber is used to generate mutants based on a uniform distribution in order to maintain the uniformity of the weight vectors. In each iteration, a set of new weight vectors is generated from a uniform distribution within a certain region around the best weight vector found so far, and the process is repeated until the weights converge. Finally, R-MEAD2 with two well-known decomposition methods, namely PBI and Tchebycheff, is compared with R-NSGA-II [7] to show the advantage of decomposition-based approaches over dominance-based approaches.

The remainder of this paper is organized as follows. Section II introduces some preliminaries and gives a brief survey of previous studies. The proposed algorithm is described in Section III. Section IV contains the experimental results and the comparison between R-NSGA-II and R-MEAD2. The paper is concluded in Section V.

II. PRELIMINARIES AND RELATED WORKS

A. Basic Definitions

A minimization multi-objective optimization problem with $m$ objectives is defined as:

\[
\text{minimize } \mathbf{F}(\mathbf{x}) = (f_1(\mathbf{x}), \ldots, f_m(\mathbf{x})), \tag{1}
\]

where $\mathbf{F}(\mathbf{x}) \in \mathbb{R}^m$ is the objective vector and $f_i(\mathbf{x})$ is the $i$th objective function. Each decision vector $\mathbf{x} \in \mathbb{R}^n$ is defined as $(x_1, \ldots, x_n)$, where $n$ is the number of dimensions of the decision space. A multi-objective problem with more than three objectives is commonly referred to as a many-objective problem.

The dominance concept can be introduced to multi-objective optimization for the relative comparison of two solutions. In a minimization problem, $\mathbf{x}_1$ dominates $\mathbf{x}_2$, written $\mathbf{x}_1 \prec \mathbf{x}_2$, if:

\[
\forall i: f_i(\mathbf{x}_1) \le f_i(\mathbf{x}_2) \quad \wedge \quad \exists j: f_j(\mathbf{x}_1) < f_j(\mathbf{x}_2),
\]

where $i, j \in \{1, \ldots, m\}$. The Pareto-optimal set ($PS$) is the set of solutions in decision space that are not dominated by any other solution. When $PS$ is plotted in objective space, the resulting non-dominated vectors are collectively known as the Pareto-optimal front ($PF$).

B. Decomposition Approaches

There are several techniques for converting a multi-objective problem into a set of single-objective problems. Here, we discuss the two most widely used approaches: Tchebycheff (Te) [14] and penalty-based boundary intersection (PBI) [16].

1) Tchebycheff Approach: The Tchebycheff method is formulated as follows:

\[
\text{minimize } g^{tch}(\mathbf{x}, \mathbf{w}, \mathbf{z}^\star) = \max_{1 \le i \le m} \{ w_i \, |f_i(\mathbf{x}) - z_i^\star| \}, \tag{2}
\]

where $\mathbf{z}^\star \in \mathbb{R}^m$ is the ideal point and $\mathbf{w} = (w_1, \ldots, w_m)$ is a positive weight vector. For each Pareto-optimal point $\mathbf{x}^\star$ there is at least one weight vector $\mathbf{w}$ such that $\mathbf{x}^\star$ is an optimal solution of Equation (2). The effect of the weight vector is depicted in Figure 1.
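A minimal Python sketch of the scalarization in Equation (2) (the function name is our own):

```python
import numpy as np

def g_tch(f, w, z_star):
    """Tchebycheff value of objective vector f for weight vector w and
    ideal point z* (Eq. 2): the largest weighted distance from z*."""
    return float(np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star))))

# Example: f = (0.4, 0.6), w = (0.5, 0.5), z* = (0, 0) gives max(0.2, 0.3) = 0.3.
print(g_tch([0.4, 0.6], [0.5, 0.5], [0.0, 0.0]))  # 0.3
```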

Fig. 1. Illustration of the Tchebycheff method: the weight vector $\mathbf{w}$ defines a search direction in the $f_1$-$f_2$ plane towards the Pareto-optimal front; a sub-optimal solution is also marked.


Fig. 2. Illustration of the PBI method: in the $f_1$-$f_2$ plane, $L$ is the line through the ideal point $\mathbf{z}^\star$ in the direction of $\mathbf{w}$, $p$ is the projection of $\mathbf{F}(\mathbf{x})$ on $L$, and $d_1$ and $d_2$ are the distances along and perpendicular to $L$, respectively.

2) Penalty-Based Boundary Intersection (PBI) Approach: This decomposition method is an improved version of normal boundary intersection (NBI) [12]. Unlike NBI, which can only handle equality constraints, PBI [16] can handle both equality and inequality constraints. PBI is formulated as follows:

\[
\text{minimize } g^{pbi}(\mathbf{x}, \mathbf{w}, \mathbf{z}^\star) = d_1 + \theta d_2, \tag{3}
\]

where

\[
d_1 = \frac{\left\| (\mathbf{F}(\mathbf{x}) - \mathbf{z}^\star)^T \mathbf{w} \right\|}{\|\mathbf{w}\|}
\quad \text{and} \quad
d_2 = \left\| \mathbf{F}(\mathbf{x}) - \left( \mathbf{z}^\star + d_1 \frac{\mathbf{w}}{\|\mathbf{w}\|} \right) \right\|.
\]

As shown in Figure 2, $L$ is the line passing through $\mathbf{z}^\star$ in the direction of $\mathbf{w}$, and $p$ is the projection of $\mathbf{F}(\mathbf{x})$ on $L$. The penalty parameter $\theta$ forces $\mathbf{F}(\mathbf{x})$ to be as close as possible to $L$ (penalizing $d_2$). In this paper, the value of $\theta$ is set to 5, as suggested in [16].
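The following Python sketch computes $d_1$, $d_2$, and the PBI value of Equation (3); we include the $\mathbf{w}/\|\mathbf{w}\|$ normalization in $d_2$ so that $\mathbf{z}^\star + d_1 \mathbf{w}/\|\mathbf{w}\|$ is exactly the projection point $p$ described above (helper names are ours):

```python
import numpy as np

def g_pbi(f, w, z_star, theta=5.0):
    """PBI value of objective vector f (Eq. 3): d1 is the length of the
    projection of F(x) - z* onto the line L through z* along w, and d2
    is the perpendicular distance of F(x) from L, penalized by theta."""
    f, w, z = (np.asarray(a, dtype=float) for a in (f, w, z_star))
    w_norm = np.linalg.norm(w)
    d1 = abs(np.dot(f - z, w)) / w_norm
    d2 = np.linalg.norm(f - (z + d1 * w / w_norm))
    return d1 + theta * d2
```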

C. User-preference Based EMO Algorithms

This section gives a brief survey of user-preference based EMO algorithms.

The guided multi-objective evolutionary algorithm (G-MOEA) [21], proposed by Branke et al., allows the user to specify linear trade-offs between objectives. For example, in a two-objective problem, the decision maker has to specify how many units of the first objective he/she is willing to trade for one unit of the second objective. G-MOEA then uses this trade-off information to guide the search towards the more desired regions of the Pareto-optimal front. Although G-MOEA is more flexible and intuitive than other approaches, it is not always an easy task for the decision maker to specify the trade-offs between objectives, especially for many-objective problems. Deb [22] applied a biased sharing technique to NSGA [23], where biased Pareto-optimal solutions are generated on a desired region by changing the weights; an objective with a higher priority takes a higher weight value. The main disadvantage of this technique is that it cannot find solutions on a compromise region where all objectives are of similar importance to the decision maker. In another study, Branke and Deb [24] improved the idea of biased sharing and compared its performance with G-MOEA. They proposed a biased crowding distance in NSGA-II which has more flexibility than biased sharing in terms of finding solutions inside the region of interest.

Deb et al. [7] proposed a method that integrates user-preference information into NSGA-II [25]. The new method, called R-NSGA-II, requires the decision maker to provide one or more reference points at the beginning of the search process. In R-NSGA-II, a modified version of the crowding distance operator [25], called the preference distance, is used to favor the solutions which are closer to the reference point(s). To compute the preference distance, the Euclidean distances of all solutions to the reference point(s) are calculated, and the solutions with lower distances are ranked higher. To maintain the diversity of solutions in the desired regions, an extra parameter ε was introduced.

Wickramasinghe and Li [26] integrated reference points and light beams with particle swarm optimization (PSO). The new approach, which is based on a distance metric, changes the positions of particles based on the user-supplied information to find the preferred regions. This distance-metric based method was compared with another user-preference based EMO algorithm (NSPSO) [27], which uses dominance-based comparison, and it was shown that the distance metric approach performed better than NSPSO.

The above approaches use reference points as preference information. However, the preference information can also be given in terms of a reference direction or a light beam. Deb and Kumar [9] applied the reference direction approach to NSGA-II. In this new approach, RD-NSGA-II, the user provides one or more reference directions, and the procedure finds a set of Pareto-optimal solutions corresponding to those reference directions. In another study, Deb and Kumar [10] applied light beam search to NSGA-II; the decision maker provides aspiration, reservation, and preference threshold values for each objective, and the parameter ε is used to control the density of solutions.

Most of the algorithms discussed so far are dominance-based and are not suitable for many-objective optimization. In this paper, we propose a scalable decomposition-based user-preference algorithm and compare its performance with R-NSGA-II, the state-of-the-art user-preference based EMO algorithm.

III. PROPOSED APPROACH

This section describes the R-MEAD2 algorithm, which is a reference point based evolutionary algorithm that uses decomposition methods for solving many-objective optimization problems.

The population size of algorithms such as MOEA/D and R-MEAD that rely on a simplex-lattice design grows dramatically as the number of objectives increases. More precisely, the weight values $w_1, w_2, \ldots, w_m$ are chosen from the set $\{\frac{0}{H}, \frac{1}{H}, \ldots, \frac{H}{H}\}$, where $H$ is a parameter chosen by the user. Therefore, the number of these vectors, and consequently the population size, is given by the binomial coefficient $\binom{H+m-1}{m-1}$. Table I shows how the population size grows with the number of objectives when the simplex-lattice design is used to generate the weight vectors. As can be seen, most of these population sizes are not commonly used in evolutionary algorithms.
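The growth shown in Table I follows directly from the binomial coefficient above; a quick check of the counts (a small Python snippet of our own, using math.comb from the standard library):

```python
from math import comb

def simplex_lattice_size(H, m):
    """Number of weight vectors produced by a simplex-lattice design
    with granularity H in m objectives: C(H + m - 1, m - 1)."""
    return comb(H + m - 1, m - 1)

# Reproduces Table I for H = 10:
print([simplex_lattice_size(10, m) for m in range(4, 11)])
# [286, 1001, 3003, 8008, 19448, 43758, 92378]
```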


Fig. 3. Illustration of how weight vectors are updated in R-MEAD2. Left: at iteration $t$, solutions move along their associated weight vectors ($w_1$, $w_2$, $w_b$) towards the Pareto-optimal front, and the weight vector closest to the reference point is marked as the best weight vector $w_b$. Center: new weight vectors are generated uniformly within a region of edge size $r$ around $w_b$. Right: at iteration $t+1$, the solutions converge within the preferred region in the direction of the reference point.

To remove the coupling between the population size and the number of objectives, UMOEA/D [18] replaced the simplex-lattice design in MOEA/D [16] with the good lattice point (GLP) method for generating the weight vectors. However, as shown in Section IV-A, GLP does not necessarily have better uniformity than a uniform random number generator (RNG) when the number of objectives grows beyond eight. Additionally, using RNG for generating the weight vectors also removes the coupling between the population size and the number of objectives, allowing the user to pick any suitable population size irrespective of the number of objectives.

A. The R-MEAD2 Algorithm

In the proposed decomposition-based user-preference approach, the weight vectors are dynamically updated in order to guide the solutions towards the desired region, so that they converge in the direction of the reference point and, ideally, onto the Pareto-optimal front. To achieve this, each solution is assigned a weight vector that is updated during the course of the optimization.

Figure 3 shows how the weights are updated during the optimization. On the left, the black squares denote the solutions at some iteration $t$. The gray squares represent the solutions after running one iteration of the evolutionary optimizer. The arrows show the update directions, which are determined by the weight vectors associated with the individuals. At this stage, the weight vector associated with the solution closest to the reference point is marked as the best weight vector ($w_b$). Once the best weight vector is identified, a set of new weight vectors is generated using RNG within a region centered around it. This region forms a hypercube in $m$-dimensional space whose size is determined by the parameter $r$, the edge length of the hypercube. The weight vectors $w_1$ and $w_2$ are generated with a uniform distribution around $w_b$, as shown in the box at the center of Figure 3. Next, these weights are assigned back to the solutions obtained in the last iteration. Finally, the solutions are optimized with the newly assigned weights. This process is shown on the right-hand side of Figure 3. As can be seen, the new solutions, marked by '⋆', gradually converge within a confined region in the direction of the reference point.

TABLE I. POPULATION SIZE OF R-MEAD FOR DIFFERENT NUMBERS OF OBJECTIVES (H = 10).

# Objs (m)    4     5     6     7      8      9      10
Pop-size     286  1001  3003  8008  19448  43758  92378

Algorithm 1 shows the details of the proposed method. The main steps are outlined below:

Step 1 - Initialization: The initial population is created and the weight vectors are initialized using the rng and init_weights functions, respectively (lines 1-2). The initial weight vectors are generated using RNG over the entire space of weight vectors in order to increase the probability of finding a good initial weight vector. The weights are also normalized so that the components of each weight vector add up to one.
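A sketch of what this initialization could look like in Python (the function name mirrors the pseudocode's init_weights; the normalization scheme is our reading of the text):

```python
import numpy as np

def init_weights(popsize, m, rng=None):
    """Draw popsize random weight vectors in [0, 1]^m and normalize each
    one so that its components sum to one (Step 1)."""
    rng = rng or np.random.default_rng()
    W = rng.random((popsize, m))
    return W / W.sum(axis=1, keepdims=True)
```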

Step 2 - Main Evolutionary Cycle: The population is evolved and the weight vectors are updated until a stopping criterion is met (lines 3-10).

Step 2.1 - Evolving the Population: The population is evolved to find better solutions. The evolve function first uses the decomposition method specified by the dm variable to convert the multi-objective problem F(x) into a set of single-objective problems, similar to MOEA/D [16]. Then, it applies several genetic operations in order to evolve the population (line 4). Finally, the new population is evaluated using the original objective function F(x) to find a set of solutions (PF) in the objective space (line 5).

Step 2.2 - Updating the Weights: In order to update the weight vectors, the best weight vector, which is associated with the solution having the shortest Euclidean distance to the reference point, has to be found. The euclid_dist function calculates the distances of all solutions to the reference point R. Then the min_ind function finds the index of the solution closest to the reference point, which matches the index of the best weight vector. Following the closest solution helps the solutions converge in the direction of the reference point. Finally, the uniform function uses RNG to generate a set of uniform weight vectors around the best weight vector wb (lines 7-9). The process then repeats from Step 2.1.


Algorithm 1: R-MEAD2

Inputs:
  popsize : the initial population size.
  r       : determines the size of the preferred region.
  m       : number of objectives.
  n       : number of decision variables.
  dm      : the decomposition method (Tchebycheff or PBI).
  R       : the reference point.
  F(x)    : the objective function.

Variables:
  P  : population of individuals (popsize × n).
  W  : matrix of weight vectors (popsize × m).
  PF : solution set (popsize × m).

1.  P ← rng(lbounds, ubounds, popsize, n)
2.  W ← init_weights(popsize, m)
3.  while stop criteria not met do
4.    P ← evolve(P, F(x), W, dm)
5.    PF ← evaluate(P, F(x))
6.    d ← euclid_dist(PF, R)
7.    best ← min_ind(d)
8.    wb ← W[best, :]
9.    W ← uniform(popsize, r, wb)
10. end while
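A sketch of the weight regeneration in line 9, under our own assumptions (sampling uniformly in a hypercube of edge r around wb, then clipping and renormalizing to keep the weights positive and summing to one; the paper does not spell these details out):

```python
import numpy as np

def uniform(popsize, r, w_best, rng=None):
    """Generate popsize new weight vectors uniformly inside a hypercube of
    edge length r centered on the best weight vector w_best (line 9)."""
    rng = rng or np.random.default_rng()
    W = w_best + rng.uniform(-r / 2.0, r / 2.0, size=(popsize, len(w_best)))
    W = np.clip(W, 1e-12, None)              # keep the weights positive
    return W / W.sum(axis=1, keepdims=True)  # re-normalize to sum to one
```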

IV. EXPERIMENTAL RESULTS AND ANALYSIS

This section includes three main experimental settings. Firstly, the uniformity of GLP and RNG is compared to determine which method is more suitable for many-objective problems. Secondly, the convergence behavior of the weight vectors for different reference points is analyzed. Finally, based on the parameter settings in Section IV-B, the performance of R-MEAD2 using Tchebycheff and PBI on the DTLZ1-DTLZ6 benchmark problems is compared with R-NSGA-II. All algorithms are tested on problems with 4 to 10 objectives.

A. RNG vs GLP

In order to compare the performance of RNG and GLP, we use a discrepancy measure called the centered L2-discrepancy (CD2) [20]. Since we are interested in the uniformity of sample points, a lower value of CD2 implies better uniformity. Let $\mathbf{P}$ be an $n \times m$ matrix of sample points, where the sample point $\mathbf{x}_i = (x_{i1}, \ldots, x_{im})$ is the $i$th row vector of $\mathbf{P}$, and $m$ is the dimensionality of the sample points, which in this context matches the number of objectives. The centered L2-discrepancy [20] is defined as follows:

\[
CD_2(\mathbf{P}) = \left[ \left(\frac{13}{12}\right)^m - \frac{2}{n} \sum_{k=1}^{n} \prod_{j=1}^{m} \left( 1 + \frac{1}{2}\left|x_{kj} - \frac{1}{2}\right| - \frac{1}{2}\left|x_{kj} - \frac{1}{2}\right|^2 \right) + \frac{1}{n^2} \sum_{k=1}^{n} \sum_{j=1}^{n} \prod_{i=1}^{m} \left( 1 + \frac{1}{2}\left|x_{ki} - \frac{1}{2}\right| + \frac{1}{2}\left|x_{ji} - \frac{1}{2}\right| - \frac{1}{2}\left|x_{ki} - x_{ji}\right| \right) \right]^{1/2}.
\]
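As a sketch, this measure can be computed in a few lines of vectorized Python (our own implementation of the standard formula above; the rows of P are assumed to lie in [0, 1]^m):

```python
import numpy as np

def cd2(P):
    """Centered L2-discrepancy of an n-by-m sample matrix P.
    Lower values indicate a more uniformly distributed point set."""
    P = np.asarray(P, dtype=float)
    n, m = P.shape
    a = np.abs(P - 0.5)
    term1 = (13.0 / 12.0) ** m
    term2 = (2.0 / n) * np.sum(np.prod(1 + 0.5 * a - 0.5 * a**2, axis=1))
    # cross[k, j] = prod_i (1 + |x_ki - 1/2|/2 + |x_ji - 1/2|/2 - |x_ki - x_ji|/2)
    diff = np.abs(P[:, None, :] - P[None, :, :])
    cross = np.prod(1 + 0.5 * a[:, None, :] + 0.5 * a[None, :, :] - 0.5 * diff, axis=2)
    term3 = np.sum(cross) / n**2
    return float(np.sqrt(term1 - term2 + term3))

# Example: compare the uniformity of two RNG samples in 10 dimensions.
rng = np.random.default_rng(0)
print(cd2(rng.random((100, 10))), cd2(rng.random((1000, 10))))
```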

Table II contains the CD2 values for RNG and GLP for different sample sizes and dimensionalities (numbers of objectives m). As can be seen, for all sample sizes (50-5000), when the number of objectives grows beyond eight, RNG is consistently better than GLP. For typical population sizes such as 100, 250, and 500, which are commonly used for solving many-objective problems, RNG is better when the number of objectives is more than seven. Table II also shows that for a lower number of objectives (fewer than eight) and a relatively small population size, GLP can be beneficial. GLP can be useful when solving problems with expensive objective function evaluations, where a small population size is more practical. It is easy to see that when only a small number of sample points is allowed, a more systematic approach such as GLP might be advantageous. However, as the number of sample points grows, GLP and RNG perform similarly. This behavior can be observed in Table II for sample sizes of 1000 and 5000, where in the majority of cases RNG and GLP have similar CD2 values.

B. Parameter Settings and Performance Metrics

We used a population size of 200 for 4-7 objective problems, 300 for 8-objective problems, and 350 for 9 and 10-objective problems. A single reference point is used for all test problems ($f_i = 0.25$, $i \in \{1, \ldots, m\}$). In R-MEAD2-Te and R-MEAD2-PBI, the parameter r, which determines the size of the preferred region, is set to 2, and in R-NSGA-II the parameter ε is set to 0.002.

In order to compare the performance of R-MEAD2-PBI, R-MEAD2-Te, and R-NSGA-II, we have adopted the inverted generational distance (IGD) [28], which measures both the convergence and the diversity of solutions. The calculation of IGD is based on the average distance between each sample point on the Pareto-optimal front and the closest obtained solution. We used $10^m$ sample points for four, five, six, and seven objective problems and $5^m$ for eight, nine, and ten objective problems, where m is the number of objectives. It should be noted that, because of memory limitations, fewer sample points were generated to approximate the Pareto-optimal front for problems with more than seven objectives. IGD is calculated as follows:

\[
IGD(P^\ast, Q) = \frac{\sum_{v \in P^\ast} d(v, Q)}{|P^\ast|}, \tag{4}
\]

where $Q$ is the set of obtained solutions, $P^\ast$ is the set of sample points on the Pareto-optimal front, and the function $d(v, Q)$ returns the Euclidean distance between a point $v$ and the closest point to it in the set $Q$. Since all algorithms used in this study are user-preference based, we only consider the solutions that fall within a desired region. The desired region is a hypersphere of radius ρ around the sample point on the Pareto-optimal front which is closest to the reference point. For all experiments in this study, the parameter ρ is set to 2.
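A minimal Python sketch of Equation (4) (the function name is ours; scipy's cdist provides the pairwise Euclidean distances):

```python
import numpy as np
from scipy.spatial.distance import cdist

def igd(pf_samples, solutions):
    """Inverted generational distance (Eq. 4): the average Euclidean
    distance from each Pareto-front sample point to its nearest
    obtained solution. Lower is better."""
    D = cdist(np.asarray(pf_samples), np.asarray(solutions))
    return float(D.min(axis=1).mean())
```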

C. Weight Vector Convergence

Figure 4 shows the convergence behavior of the weight vectors for both PBI and Tchebycheff on DTLZ1 using two different reference points. As can be seen, the weight values fluctuate at the beginning of the search, but they gradually stabilize and converge to a fixed value towards the end of a run. Depending on the position of the reference point, the weight vectors converge to different values. For example, when R = (0.25, 0.25), which is near the center of the Pareto-optimal front, both weight values converge to a value close to 0.5. However, when the reference point is biased towards a particular objective (e.g. R = (0.2, 0.5)), the weight values differ, as can be seen in Figures 4(c) and 4(d).


TABLE II. THE AVERAGE CD2 VALUE OVER 25 INDEPENDENT RUNS FOR TWO INITIALIZATION METHODS, A UNIFORM RANDOM NUMBER GENERATOR (RNG) AND GOOD LATTICE POINT (GLP), FOR DIFFERENT NUMBERS OF OBJECTIVES. THOSE LEADING TO A STATISTICALLY SIGNIFICANTLY LOWER CD2 VALUE (AT A SIGNIFICANCE LEVEL OF 5% ACCORDING TO THE MANN-WHITNEY-WILCOXON (MWW) TEST) ARE HIGHLIGHTED IN BOLD.

                                        Sample Sizes
# Objs      50           100          250          500          1000         5000
         RNG   GLP    RNG   GLP    RNG   GLP    RNG   GLP    RNG   GLP    RNG   GLP
 4       2.74  2.71   2.73  2.73   2.75  2.74   2.75  2.75   2.75  2.75   2.75  2.75
 5       2.99  2.94   2.97  2.96   2.98  2.97   2.98  2.98   2.98  2.98   2.98  2.98
 6       3.21  3.19   3.23  3.22   3.24  3.22   3.23  3.23   3.22  3.23   3.23  3.22
 7       3.48  3.47   3.51  3.50   3.50  3.49   3.49  3.50   3.50  3.50   3.50  3.50
 8       3.82  3.76   3.78  3.81   3.78  3.81   3.78  3.81   3.80  3.79   3.79  3.79
 9       4.12  4.14   4.12  4.15   4.11  4.14   4.11  4.14   4.10  4.14   4.11  4.11
10       4.48  4.48   4.47  4.54   4.46  4.50   4.46  4.50   4.45  4.50   4.45  4.45
11       4.91  4.92   4.86  4.94   4.83  4.89   4.82  4.90   4.83  4.89   4.82  4.87
12       5.33  5.43   5.27  5.41   5.24  5.32   5.23  5.32   5.24  5.31   5.22  5.30
13       5.83  6.01   5.73  5.94   5.68  5.69   5.69  5.79   5.67  5.78   5.66  5.75
14       6.46  6.70   6.30  6.58   6.16  6.17   6.16  6.23   6.15  6.29   6.14  6.23
15       6.97  7.50   6.80  7.33   6.73  6.86   6.69  6.71   6.66  6.76   6.64  6.76

Fig. 4. Weight vector convergence behavior on 2-objective DTLZ1 using PBI and Tchebycheff; objective 1 is shown by '+' and objective 2 by 'O'. Each panel plots the weight values against iterations (0-300): (a) R-MEAD2-PBI with R = (0.25, 0.25); (b) R-MEAD2-Te with R = (0.25, 0.25); (c) R-MEAD2-PBI with R = (0.2, 0.5); (d) R-MEAD2-Te with R = (0.2, 0.5).


D. Benchmark Results

In this section, we compare the performance of R-MEAD2 based on the two decomposition methods Tchebycheff and PBI (abbreviated as R-MEAD2-Te and R-MEAD2-PBI, respectively) with R-NSGA-II, which is a dominance-based approach.

To test the significance of the obtained results, we used the non-parametric Kruskal-Wallis one-way ANOVA [29] to detect whether there is any significant difference between the performances of the three algorithms. The null and alternative hypotheses for Kruskal-Wallis are as follows:

$H_0$: all samples come from the same distribution.

$H_a$: at least one sample comes from a different distribution.

In order to rank the algorithms, we used the Mann-Whitney-Wilcoxon (MWW) test with Bonferroni correction, applied only when the null hypothesis of Kruskal-Wallis was rejected at the 95% confidence level. Bonferroni correction is a simple technique for controlling the family-wise error rate [29], which is the accumulation of Type I errors when more than one pair-wise comparison is used to rank a set of results. Under Bonferroni correction, in order to achieve an overall significance level of $\alpha$, the pair-wise tests should be performed at a significance level of $\alpha' = \alpha / h$, where $h$ is the number of pair-wise comparisons, which is three in this study. For our experiments, the significance level of Kruskal-Wallis was set to 5%, and for the pair-wise MWW tests a significance level of 1.67% was used, which results in an overall significance level of approximately 5%. Table III contains the median of 25 independent runs for the three algorithms. The last three columns show the p-values for the three pair-wise MWW tests, and the column labeled 'K-W' contains the p-value for the Kruskal-Wallis ANOVA test.
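For reference, this testing protocol could be scripted as follows (a sketch of our own using scipy.stats; the dictionary layout and function name are assumptions, not from the paper):

```python
from itertools import combinations
from scipy.stats import kruskal, mannwhitneyu

def rank_algorithms(results, alpha=0.05):
    """results maps an algorithm name to its list of IGD values over runs.
    Kruskal-Wallis is applied first; only if its null hypothesis is
    rejected are pairwise MWW tests run at the Bonferroni-corrected
    level alpha / h, where h is the number of pairwise comparisons."""
    _, p_kw = kruskal(*results.values())
    if p_kw >= alpha:
        return p_kw, {}  # no significant difference detected
    pairs = list(combinations(results, 2))
    corrected = alpha / len(pairs)  # alpha' = alpha / h
    verdicts = {}
    for a, b in pairs:
        _, p = mannwhitneyu(results[a], results[b], alternative='two-sided')
        verdicts[(a, b)] = (p, p < corrected)
    return p_kw, verdicts
```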

Table III shows the IGD results for all three algorithms. It can be observed that the difference between the algorithms is significant for all functions and over all numbers of objectives, so in all cases the MWW test is used and the best performing algorithm is shown in bold. If the performances of two algorithms are statistically similar, both entries are shown in bold. The table shows that, in general, a decomposition approach is superior to the dominance-based approach, R-NSGA-II. For the sake of clarity, we have summarized the comparison between R-NSGA-II and both versions of R-MEAD2 in Table IV. Out of a total of 42 experiments, R-MEAD2-Te outperforms R-NSGA-II on 37 functions and ties on 2, and R-MEAD2-PBI outperforms R-NSGA-II on 35 functions and ties on 2. Looking back at Table III, we can see that R-NSGA-II outperforms R-MEAD2-PBI on DTLZ1 when the number of objectives is below 9; however, on DTLZ1 with 9 and 10 objectives, R-NSGA-II is outperformed by R-MEAD2-PBI. Although R-NSGA-II performs better than R-MEAD2-PBI and R-MEAD2-Te for some numbers of objectives on DTLZ1 and DTLZ6, on the other test problems (DTLZ2-DTLZ5) R-NSGA-II is outperformed by both versions of R-MEAD2. It should be noted that IGD takes both convergence and diversity of solutions into account.


TABLE III. IGD VALUES ON THE DTLZ1-DTLZ6 TEST PROBLEMS. THE MEDIANS OF 25 INDEPENDENT RUNS ARE REPORTED, TOGETHER WITH THE KRUSKAL-WALLIS (K-W) p-VALUE AND THE THREE PAIR-WISE MWW p-VALUES.

Func.   # Obj  R-MEAD2-PBI  R-MEAD2-Te   R-NSGA-II    K-W       PBI vs Te  PBI vs R-NSGA-II  Te vs R-NSGA-II
DTLZ1    4     2.7179e-02   2.0047e-02   2.6913e-02   2.37e-12  2.77e-12   4.43e-01          9.73e-11
         5     1.5969e-02   1.3061e-02   1.3489e-02   1.89e-12  2.77e-12   9.73e-11          1.99e-01
         6     1.9221e-03   1.4998e-03   1.5202e-03   8.51e-13  2.77e-12   9.73e-11          2.02e-02
         7     5.7668e-03   4.9279e-03   5.1213e-03   4.82e-13  2.77e-12   9.73e-11          4.50e-03
         8     1.0997e-02   8.7635e-03   1.0280e-02   1.10e-13  2.77e-12   1.05e-04          9.73e-11
         9     8.2376e-03   6.0180e-03   8.7701e-03   4.43e-14  2.77e-12   1.10e-05          9.73e-11
        10     1.3500e-02   1.0895e-02   1.5834e-02   3.74e-16  2.77e-12   9.73e-11          9.73e-11
DTLZ2    4     8.0403e-03   8.6697e-03   2.2126e-02   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         5     4.4710e-03   4.9080e-03   9.1054e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         6     4.8012e-04   5.0287e-04   7.2498e-04   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         7     1.1177e-03   1.4606e-03   1.6713e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         8     1.7828e-03   2.1404e-03   2.7116e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         9     9.8180e-04   1.0688e-03   1.4740e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
        10     1.8515e-03   2.0637e-03   2.5305e-03   3.74e-16  2.77e-12   9.72e-11          9.72e-11
DTLZ3    4     7.1956e-03   9.1197e-03   2.2122e-02   1.04e-15  4.27e-11   7.54e-10          9.73e-11
         5     4.2700e-03   5.8662e-03   9.1130e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         6     1.1976e-03   4.6246e-04   7.2545e-04   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         7     1.2094e-03   1.3654e-03   1.6704e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         8     1.7592e-03   2.5105e-03   2.7037e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         9     9.9098e-04   1.1612e-03   1.4536e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
        10     1.8656e-03   2.4228e-03   2.5326e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
DTLZ4    4     8.2896e-03   1.0301e-02   2.2212e-02   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         5     4.2842e-03   8.5091e-03   9.1201e-03   1.46e-15  2.77e-12   9.73e-11          2.64e-09
         6     4.1579e-04   7.0932e-04   8.3514e-04   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         7     1.0711e-03   1.6623e-03   2.0305e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         8     1.6788e-03   2.6767e-03   2.9353e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         9     1.1728e-03   1.3986e-03   1.5749e-03   3.74e-16  2.77e-12   9.73e-11          9.73e-11
        10     1.7495e-03   2.4007e-03   2.6921e-03   4.40e-16  4.43e-12   1.38e-10          9.73e-11
DTLZ5    4     1.6671e-02   1.6204e-02   1.7121e-02   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         5     5.8838e-03   5.7738e-03   5.9900e-03   9.53e-13  2.77e-12   5.51e-08          5.51e-08
         6     1.8945e-03   1.7910e-03   1.9025e-03   1.46e-15  2.77e-12   2.64e-09          9.73e-11
         7     6.4055e-04   6.2722e-04   6.4189e-04   5.11e-15  2.77e-12   5.51e-08          9.73e-11
         8     9.6134e-04   9.1614e-04   9.6588e-04   3.74e-16  2.77e-12   9.73e-11          9.73e-11
         9     8.1450e-04   7.9596e-04   8.1874e-04   1.46e-15  2.77e-12   2.64e-09          9.73e-11
        10     7.2022e-04   6.7489e-04   7.2718e-04   1.46e-15  2.77e-12   2.64e-09          9.73e-11
DTLZ6    4     1.6829e-02   1.6831e-02   1.7129e-02   2.89e-11  2.77e-12   8.85e-07          8.85e-07
         5     5.8428e-03   5.7819e-03   5.9870e-03   3.42e-09  2.77e-12   4.50e-03          1.10e-05
         6     1.8867e-03   1.9213e-03   1.9077e-03   1.59e-14  2.77e-12   8.85e-07          9.73e-11
         7     6.3685e-04   6.6313e-04   6.4248e-04   4.43e-14  2.77e-12   1.10e-05          9.73e-11
         8     9.5918e-04   9.2020e-04   9.6915e-04   1.46e-15  2.77e-12   2.64e-09          9.73e-11
         9     8.1404e-04   8.2422e-04   8.2033e-04   5.11e-15  2.77e-12   5.51e-08          9.73e-11
        10     6.6082e-04   6.2611e-04   6.7947e-04   1.33e-12  4.43e-12   5.07e-02          9.73e-11

Therefore, we can conclude that decomposition methods generally perform better than R-NSGA-II in terms of both convergence and diversity on most many-objective problems. Finally, by comparing the results of R-MEAD2-PBI and R-MEAD2-Te, we can see that the two decomposition methods have very similar performance, with the PBI version performing slightly better overall: it wins against Tchebycheff on 24 functions and loses on 18.

TABLE IV. R-NSGA-II'S NUMBER OF WINS, LOSSES, AND TIES AGAINST R-MEAD2-Te AND R-MEAD2-PBI.

                 R-MEAD2-Te             R-MEAD2-PBI
             Wins  Losses  Ties     Wins  Losses  Ties
R-NSGA-II      3     37      2        5     35      2

V. CONCLUSIONS

In this paper, a user-preference based evolutionary algorithm for many-objective optimization problems has been proposed. The proposed algorithm, R-MEAD2, has the following advantages: 1) Because it uses a decomposition framework, R-MEAD2 is less susceptible to the selection pressure issue than dominance-based approaches such as R-NSGA-II. The experimental results using the IGD metric showed that both versions of R-MEAD2 (Tchebycheff and PBI) outperform R-NSGA-II on a range of many-objective problems. 2) Unlike in R-MEAD, the population size and the number of objectives are decoupled, so an increase in the number of objectives does not cause the population size to grow. As a result, R-MEAD2 is better suited for solving many-objective problems. 3) R-MEAD2 uses a simple uniform random number generator (RNG) to initialize the weight vectors. The centered L2-discrepancy results showed that RNG can generate more uniform weights than GLP when the number of objectives grows beyond seven with a typical sample size of 100, 250, or 500; for the other sample sizes (50, 1000, and 5000), RNG produces points with lower discrepancy than GLP when the number of objectives goes beyond eight. It should be noted that a uniform set of weight vectors does not necessarily map to a uniform set of solutions in the objective space, especially on highly non-linear and complex Pareto-optimal fronts. This calls for a feedback mechanism that adjusts the weights in order to obtain a set of uniform solutions, which is the topic of our future work. In this paper, the performance of PBI and Tchebycheff appeared to be very similar; however, a definitive conclusion requires further investigation on functions with more complex Pareto-optimal fronts.

REFERENCES

[1] C. A. C. Coello, G. B. Lamont, and D. A. V. Veldhuizen, Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic and Evolutionary Computation). Secaucus, NJ, USA: Springer-Verlag New York, Inc., 2006.

[2] R. C. Purshouse and P. J. Fleming, "Evolutionary many-objective optimisation: An exploratory analysis," in Evolutionary Computation, 2003. CEC '03. The 2003 Congress on, vol. 3. IEEE, 2003, pp. 2066–2073.

[3] H. Ishibuchi, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization: A short review," in Proc. of 2008 IEEE Congress on Evolutionary Computation, 2008, pp. 2419–2426.

[4] ——, "Evolutionary many-objective optimization," in Proc. of 3rd International Workshop on Genetic and Evolving Fuzzy Systems, Mar. 2008, pp. 47–52.

[5] H. Ishibuchi, Y. Sakane, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization by NSGA-II and MOEA/D with large populations," in Systems, Man and Cybernetics, 2009. SMC 2009. IEEE International Conference on. IEEE, 2009, pp. 1758–1763.

[6] K. Deb, Multi-Objective Optimization using Evolutionary Algorithms. Wiley, 2001.

[7] K. Deb, J. Sundar, N. Udaya Bhaskara Rao, and S. Chaudhuri, "Reference point based multi-objective optimization using evolutionary algorithms," International Journal of Computational Intelligence Research, vol. 2, no. 3, pp. 273–286, 2006.

[8] L. Thiele, K. Miettinen, P. J. Korhonen, and J. Molina, "A preference-based evolutionary algorithm for multi-objective optimization," Evolutionary Computation, vol. 17, no. 3, pp. 411–436, 2009.

[9] K. Deb and A. Kumar, "Interactive evolutionary multi-objective optimization and decision-making using reference direction method," in Proceedings of the 9th Annual Conference on Genetic and Evolutionary Computation. ACM, 2007, pp. 781–788.

[10] ——, "Light beam search based multi-objective optimization using evolutionary algorithms," in Evolutionary Computation, 2007. CEC 2007. IEEE Congress on. IEEE, 2007, pp. 2125–2132.

[11] H. Ishibuchi, Y. Sakane, N. Tsukamoto, and Y. Nojima, "Evolutionary many-objective optimization by NSGA-II and MOEA/D with large populations," in Systems, Man and Cybernetics, 2009. SMC 2009. IEEE International Conference on. IEEE, 2009, pp. 1758–1763.

[12] I. Das and J. E. Dennis, "Normal-boundary intersection: A new method for generating the Pareto surface in nonlinear multicriteria optimization problems," SIAM Journal on Optimization, vol. 8, no. 3, pp. 631–657, 1998.

[13] A. Messac, A. Ismail-Yahaya, and C. A. Mattson, "The normalized normal constraint method for generating the Pareto frontier," Structural and Multidisciplinary Optimization, vol. 25, no. 2, pp. 86–98, 2003.

[14] K. Miettinen, Nonlinear Multiobjective Optimization. Norwell, MA: Kluwer, 1999.

[15] H. Ishibuchi and T. Murata, "A multi-objective genetic local search algorithm and its application to flowshop scheduling," IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 28, no. 3, pp. 392–403, 1998.

[16] Q. Zhang and H. Li, "MOEA/D: A multiobjective evolutionary algorithm based on decomposition," IEEE Transactions on Evolutionary Computation, vol. 11, no. 6, pp. 712–731, 2007.

[17] A. Mohammadi, M. N. Omidvar, and X. Li, "Reference point based multi-objective optimization through decomposition," in Evolutionary Computation (CEC), 2012 IEEE Congress on. IEEE, 2012, pp. 1–8.

[18] Y.-Y. Tan, Y.-C. Jiao, H. Li, and X.-K. Wang, "MOEA/D + uniform design: A new version of MOEA/D for optimization problems with many objectives," Computers and Operations Research, vol. 40, no. 6, pp. 1648–1660, 2013.

[19] K. Fang and Y. Wang, Number-Theoretic Methods in Statistics. Chapman and Hall/CRC, 1993.

[20] K.-T. Fang and D. K. Lin, "Uniform experimental designs and their applications in industry," Handbook of Statistics, vol. 22, pp. 131–170, 2003.

[21] J. Branke, T. Kaußler, and H. Schmeck, "Guidance in evolutionary multi-objective optimization," Advances in Engineering Software, vol. 32, no. 6, pp. 499–507, 2001.

[22] K. Deb, "Multi-objective evolutionary algorithms: Introducing bias among Pareto-optimal solutions," in Advances in Evolutionary Computing. Springer, 2003, pp. 263–292.

[23] N. Srinivas and K. Deb, "Muiltiobjective optimization using nondominated sorting in genetic algorithms," Evolutionary Computation, vol. 2, no. 3, pp. 221–248, 1994.

[24] J. Branke and K. Deb, "Integrating user preferences into evolutionary multi-objective optimization," in Knowledge Incorporation in Evolutionary Computation. Springer, 2005, pp. 461–477.

[25] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, no. 2, pp. 182–197, 2002.

[26] U. K. Wickramasinghe and X. Li, "Using a distance metric to guide PSO algorithms for many-objective optimization," in Proceedings of the 11th Annual Conference on Genetic and Evolutionary Computation. ACM, 2009, pp. 667–674.

[27] ——, "Integrating user preferences with particle swarms for multi-objective optimization," in Proceedings of the 10th Annual Conference on Genetic and Evolutionary Computation. ACM, 2008, pp. 745–752.

[28] E. Zitzler, L. Thiele, M. Laumanns, C. M. Fonseca, and V. G. da Fonseca, "Performance assessment of multiobjective optimizers: An analysis and review," IEEE Transactions on Evolutionary Computation, vol. 7, no. 2, pp. 117–132, 2003.

[29] D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press, 2003.
