
T.-D. Wang et al. (Eds.): SEAL 2006, LNCS 4247, pp. 318 – 327, 2006. © Springer-Verlag Berlin Heidelberg 2006

A Hybrid of Differential Evolution and Genetic Algorithm for Constrained Multiobjective

Optimization Problems

Min Zhang, Huantong Geng, Wenjian Luo, Linfeng Huang, and Xufa Wang

Nature Inspired Computation and Applications Laboratory, Department of Computer Science and Technology, University of Science and Technology of China,

230027, Hefei, Anhui, China {zhangmin, lfhuang}@mail.ustc.edu.cn, [email protected],

{wjluo, xfwang}@ustc.edu.cn

Abstract. Two novel schemes of selecting the current best solutions for multiobjective differential evolution are proposed in this paper. Based on the search biases strategy suggested by Runarsson and Yao, a hybrid of multiobjective differential evolution and genetic algorithm with an (N+N) framework for constrained MOPs is given. The hybrid algorithm, adopting each of the two schemes in turn, is then compared with the constrained NSGA-II on 4 benchmark functions constructed by Deb. The experimental results show that the hybrid algorithm has better performance, especially in the distribution of the non-dominated set.

1 Introduction

Genetic Algorithms (GAs) for Multiobjective Optimization Problems (MOPs) were suggested by Rosenberg in his dissertation as early as 1967 [1]. However, it was not until 1985 that the first genetic algorithm for MOPs, namely VEGA, was proposed by Schaffer [2]. Because of the deficiencies of VEGA, Multiobjective Evolutionary Algorithms (MOEAs) have attracted more and more attention. Two generations of Evolutionary Multiobjective Optimization have been identified by Coello Coello [3]. The first generation (1985-1998) emphasizes the simplicity of algorithms; its most representative members are NSGA [4], NPGA [5] and MOGA [6]. The second generation started when elitism became a standard mechanism, first adopted by Zitzler [7] in SPEA. In this generation efficiency is stressed, and the most representative algorithms are SPEA [7], SPEA2 [8], PAES [9] and NSGA-II [10]. Meanwhile, ε-dominance MOEAs [11], particle swarm optimization [12] and differential evolution [13] for MOPs were also proposed in this generation. These MOEAs are mainly for unconstrained MOPs. However, real-world MOPs often come with constraints. To solve such problems, the crucial issue is how to handle the constraints in MOPs, i.e., how to balance the search between the feasible and infeasible regions.

Runarsson and Yao [14] proposed the search biases strategy and introduced differential evolution into their algorithm [15] in order to achieve a good compromise between feasible and infeasible regions in constrained single objective optimization. In this paper, the search biases strategy is introduced to solve constrained MOPs. Firstly, two novel schemes of selecting the current best solutions for multiobjective differential evolution (MODE) in constrained MOPs are proposed. Then a hybrid of MODE and GA with the (N+N) framework for constrained MOPs is given. Finally, the hybrid algorithm is implemented on NSGA-II [10] with each of the two schemes, and is compared with a state-of-the-art MOEA, the constrained NSGA-II (CNSGA-II) [20].

2 Preliminary

2.1 Problem Definition

Definition 1 (Constrained MOP). A general constrained MOP includes a set of n decision variables, a set of m objective functions, and a set of p inequality constraints (equality ones may be approximated by inequalities [14]). The goal of optimization is

min f(x) = {f1(x), f2(x), …, fm(x)}
s.t. gi(x) ≤ 0, i = 1, 2, …, p , (1)

where x is the decision vector in the decision space X, and f(x) is the objective vector in the objective space Y.

In constrained optimization problems, p constraints are usually transformed to a constraint violation function, which is defined in (2).

φ(g(x)) = Σ_{i=1}^{p} wi (max(0, gi(x)))^β , (2)

where the exponent β is usually 1 or 2 and the weights wi > 0 (i = 1, …, p) may be tuned during the search (β = 1 and wi = 1 in this study). The feasible set Xf is defined as the set of decision vectors x that satisfy the p constraints, i.e., φ(g(x)) = 0.
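As a concrete illustration, the violation function of Eq. (2) can be sketched in Python (the paper's own implementation is in Matlab; this translation and the name `violation` are ours):

```python
import numpy as np

def violation(g_values, weights=None, beta=1.0):
    """Constraint violation of Eq. (2): phi(g(x)) = sum_i w_i * max(0, g_i(x))**beta.

    g_values holds the p constraint values g_i(x) (g_i(x) <= 0 means satisfied).
    The defaults w_i = 1 and beta = 1 follow the setting used in the paper.
    """
    g = np.asarray(g_values, dtype=float)
    w = np.ones_like(g) if weights is None else np.asarray(weights, dtype=float)
    return float(np.sum(w * np.maximum(0.0, g) ** beta))

print(violation([-1.0, 0.5, 2.0]))  # 2.5: only the violated constraints contribute
print(violation([-1.0, -2.0]))      # 0.0: a feasible point
```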

Definition 2 (Pareto Dominance "≺"). For any two decision vectors a and b,

a ≺ b ⇔ (∀i, 1 ≤ i ≤ m: fi(a) ≤ fi(b)) ∧ (∃j, 1 ≤ j ≤ m: fj(a) < fj(b)) . (3)

The non-dominated set P in Xf is the Pareto-optimal set (POS) and the set f(P) is the Pareto-optimal front (POF).
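Definition 2 translates directly into code; a minimal Python check (naming ours, minimization assumed throughout):

```python
def dominates(fa, fb):
    """Pareto dominance of Definition 2: a ≺ b iff f(a) is no worse than f(b)
    in every objective and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(fa, fb)) and \
           any(x < y for x, y in zip(fa, fb))

print(dominates((1.0, 2.0), (1.0, 3.0)))  # True: equal in f1, better in f2
print(dominates((1.0, 2.0), (1.0, 2.0)))  # False: no strict improvement
print(dominates((2.0, 1.0), (1.0, 2.0)))  # False: the two are incomparable
```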

2.2 Constraint Pareto Dominance

Among constraint handling methods for MOPs, the most promising one is the Constraint Pareto Dominance proposed by Deb [10] [20], which is defined as follows.

Definition 3 (Constraint Pareto Dominance "≺c"). For any two decision vectors a and b,

a ≺c b ⇔ (φ(g(a)) = 0 ∧ φ(g(b)) = 0 ∧ a ≺ b)
  or (φ(g(a)) = 0 ∧ φ(g(b)) > 0)
  or (φ(g(a)) > 0 ∧ φ(g(b)) > 0 ∧ φ(g(a)) < φ(g(b))) . (4)
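Definition 3 reduces to three mutually exclusive cases; a Python sketch (names ours), reusing the plain dominance of Definition 2 and the violation φ of Eq. (2):

```python
def dominates(fa, fb):
    # plain Pareto dominance of Definition 2 (minimization)
    return all(x <= y for x, y in zip(fa, fb)) and any(x < y for x, y in zip(fa, fb))

def constraint_dominates(fa, fb, phi_a, phi_b):
    """Constraint Pareto dominance of Definition 3: a feasible solution beats an
    infeasible one; two infeasible solutions compare by violation; two feasible
    solutions compare by plain Pareto dominance."""
    a_feasible, b_feasible = phi_a == 0.0, phi_b == 0.0
    if a_feasible and b_feasible:
        return dominates(fa, fb)
    if a_feasible and not b_feasible:
        return True
    if not a_feasible and not b_feasible:
        return phi_a < phi_b
    return False  # a infeasible, b feasible: a cannot dominate

# a feasible point constraint-dominates an infeasible one regardless of objectives
print(constraint_dominates((5.0, 5.0), (0.0, 0.0), 0.0, 0.3))  # True
```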


3 Hybrid of MODE and GA for Constrained MOPs

3.1 Differential Evolution

Storn and Price [16] proposed differential evolution, a simple and efficient adaptive scheme for global optimization over continuous spaces, and gave its two most promising schemes. Differential Evolution (DE) is a population-based evolutionary algorithm with simple mutation and crossover operators to create the next generation. DE has similarities with traditional Evolutionary Algorithms. However, it does not employ binary encoding like a simple GA, and does not utilize a probability density function to self-adapt its parameters like an ES [21].

Runarsson and Yao [14] suggested the search biases in constrained single objective optimization, and proposed the corresponding method as follows. When creating the next generation based on ES (μ, λ), λ individuals are generated according to

x′k ← x1 + γ(xi − xi+1) . (5)

Here x1 denotes the top-ranked individual after stochastic ranking of the whole population, i.e., the current best solution; xi and xi+1 are random samples from the population, and γ is the search-step-length parameter. The authors suggest [19] that (5) can be deduced from the second scheme of the differential evolution method in [16], and (5) has obtained satisfying results for constrained single objective optimization [14].
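In Python, one reading of Eq. (5) looks as follows (the paper does not fix how the indices i and i+1 are drawn beyond "random samples from the population"; picking a random position and its successor is our assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def de_offspring(population, x_best, gamma=0.85):
    """One offspring per Eq. (5): x' = x_best + gamma * (x_i - x_{i+1})."""
    n = len(population)
    i = rng.integers(0, n - 1)  # ensure i + 1 is also a valid index
    return x_best + gamma * (population[i] - population[i + 1])

pop = rng.uniform(-5.0, 5.0, size=(10, 5))  # 10 individuals, 5 decision variables
child = de_offspring(pop, pop[0])
print(child.shape)  # (5,)
```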

3.2 Two Schemes of MODE

According to (5), the λ newly generated individuals are close to x1, the current best solution in the population. However, there is usually more than one solution in the non-dominated set of an MOP. Furthermore, the diversity of the population is crucial for MOPs: it determines whether the pure POF can be found or not. Therefore the diversity of the population and the pureness of the non-dominated set will be seriously affected if (5) is applied to constrained MOPs directly. So how to choose the "current best solutions" in MOPs becomes a key problem.

In order to preserve population diversity, the "current best solutions" should be extended to a solution set that can denote the main search biases in MOPs. A simple way is to select the boundaries of the current non-dominated solutions as the "current best solutions", so that the number of "current best solutions" is no more than the dimension of the objective space. Individuals are generated by sampling the boundaries uniformly, as described in Algorithm 1.

Algorithm 1. B-Scheme (Boundaries as the current best solutions)
1. for k = 1 to αN do
2.   xbest = Uniform_Sample(Boundary(Non_Dominated_Set))
3.   i = rand[1, N]
4.   x′k ← xbest + γ(xi − xi+1)
5. end for

In Algorithm 1, N denotes the size of the population and α is the percentage of individuals generated by the B-Scheme. The αN individuals are close to the boundaries of the current non-dominated set. The boundary search ability is thus improved and the population diversity can be maintained, so the pure POF will be located with larger probability.
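The B-Scheme loop can be sketched in Python as follows (a rough sketch under our assumptions: `boundaries` is a precomputed list of boundary individuals of the current non-dominated set, and the differential indices are drawn as in Eq. (5)):

```python
import numpy as np

rng = np.random.default_rng(1)

def b_scheme(population, boundaries, alpha=0.1, gamma=1.1):
    """Sketch of Algorithm 1 (B-Scheme): alpha*N offspring, each generated
    around a uniformly sampled boundary of the non-dominated set."""
    n = len(population)
    offspring = []
    for _ in range(int(alpha * n)):
        x_best = boundaries[rng.integers(len(boundaries))]  # Uniform_Sample
        i = rng.integers(0, n - 1)
        offspring.append(x_best + gamma * (population[i] - population[i + 1]))
    return np.array(offspring)

pop = rng.uniform(-5.0, 5.0, size=(100, 5))
kids = b_scheme(pop, boundaries=pop[:2])  # pretend the first two are boundaries
print(kids.shape)  # (10, 5): alpha*N = 10 offspring
```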

However, it may still be hard to find the pure POF just by enhancing the boundary search ability when the distribution of the POF is non-continuous, because the boundaries of the non-dominated set can only represent part of the search biases in such situations. To solve this problem, representative individuals should be picked out from the current population, and offspring are generated by differential evolution around the representatives to improve the search ability, so that a better distribution of the non-dominated set can be obtained. In this paper, a fitness function based on constraint Pareto dominance and crowding distance [10] is proposed to pick out the representative individuals, as given in Algorithm 2.

Algorithm 2. Representative Individuals Selection
1. Split the population P according to "≺c" into non-dominated layers, P = F1 ∪ … ∪ Fk
2. Calculate the crowding distances I of P: the feasible individuals' values are calculated by the definition in [10], while the infeasible ones' values are set to 0
3. Calculate the fitness of the individuals in P: fitness(x) = i + 1/(2 + I(x)), x ∈ Fi
4. Sort P by fitness values; the top M are selected as the representatives
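The fitness of step 3 can be written down directly; a small Python illustration (the three-individual population and its crowding distances are hypothetical):

```python
def representative_fitness(fronts, crowding):
    """Fitness of Algorithm 2: fitness(x) = i + 1/(2 + I(x)) for x in layer F_i,
    where I(x) is the crowding distance (0 for infeasible individuals).
    Lower is better: earlier layer first, then larger crowding distance."""
    fitness = {}
    for i, front in enumerate(fronts, start=1):
        for x in front:
            fitness[x] = i + 1.0 / (2.0 + crowding[x])
    return fitness

fronts = [("a", "b"), ("c",)]                       # P = F1 ∪ F2
crowding = {"a": float("inf"), "b": 1.0, "c": 0.0}  # "a" is a boundary individual
fit = representative_fitness(fronts, crowding)
reps = sorted(fit, key=fit.get)[:2]  # top M = 2 representatives
print(reps)  # ['a', 'b']: the boundary individual ranks first (fitness exactly 1.0)
```

Note that the fitness values of layer Fi always fall in the interval [i, i+1), which is what the early-termination argument of Section 3.3 relies on.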

The representative individuals selected by Algorithm 2 are feasible solutions or infeasible ones with small constraint violations. The diversity of these representatives is good because the crowding distance is employed, and the offspring generated by differential evolution are close to them. So the search ability towards the feasible region during evolution is improved and the diversity of the offspring is better. Treating the M representative individuals (RI) as the "current best solutions", the offspring are generated according to Algorithm 3.

Algorithm 3. R-Scheme (Representative Individuals as the current best solutions)
1. for k = 1 to αN do
2.   best = mod(k−1, M) + 1
3.   i = rand[1, N]
4.   x′k ← RIbest + γ(xi − xi+1)
5. end for

According to the definition of crowding distance in [10], the I values of the boundary individuals in each layer are ∞. Therefore, Algorithm 1 and Algorithm 3 are identical when M equals the number of non-dominated set boundaries. But when M is less than the number of boundaries, the representatives are only part of the boundary individuals; they may then fail to denote the main search biases, and diversity may deteriorate. Thus the value of M should be greater than the number of non-dominated set boundaries to make a distinct difference; otherwise the performance of the R-Scheme will be similar to the B-Scheme, or even worse.
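A Python sketch of the R-Scheme loop, mirroring the B-Scheme sketch but cycling round-robin through the M representatives (the 0-based `(k - 1) % m` plays the role of the pseudocode's mod(k−1, M) + 1):

```python
import numpy as np

rng = np.random.default_rng(2)

def r_scheme(population, representatives, alpha=0.1, gamma=0.85):
    """Sketch of Algorithm 3 (R-Scheme): offspring are generated around the
    M representatives, taken in turn as the 'current best solutions'."""
    n, m = len(population), len(representatives)
    offspring = []
    for k in range(1, int(alpha * n) + 1):
        best = (k - 1) % m  # round-robin over the representatives
        i = rng.integers(0, n - 1)
        offspring.append(representatives[best]
                         + gamma * (population[i] - population[i + 1]))
    return np.array(offspring)

pop = rng.uniform(-5.0, 5.0, size=(100, 5))
print(r_scheme(pop, representatives=pop[:5]).shape)  # (10, 5)
```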


3.3 Hybrid Algorithm with the Framework of NSGA-II

The hybrid algorithm (DE-MOEA) proposed in this paper is based on the (N+N) framework of NSGA-II [10]. The N individuals of the next generation are created as follows: αN individuals are generated by MODE and the remaining ones by genetic operators (crossover and mutation) as in NSGA-II. DE-MOEA is described in Algorithm 4.

Algorithm 4. DE-MOEA with (N+N) Framework
1. Initialization: create the initial population P0, t = 0, |P0| = N
2. Evaluate the population Pt
3. Pt+1 = generate_next_pop(Pt)
   3.1. Calculate the fitness of the individuals in Pt according to Algorithm 2
   3.2. Sort Pt by the fitness values and select the top N as population Qt
   3.3. Generate αN individuals with MODE on Qt and put them into Pt+1
   3.4. Generate (1−α)N individuals from Qt with binary tournament selection, crossover and mutation, and put the offspring into Pt+1
   3.5. Pt+1 = Pt+1 ∪ Qt
4. t = t+1; if the termination condition is satisfied then output the non-dominated set, else go to step 2

In Algorithm 4, the MODE in step 3.3 can be implemented by Algorithm 1 or Algorithm 3; the corresponding algorithms are denoted HBGA and HRGA, respectively.

In NSGA-II the splitting terminates when the number of already split individuals is not less than N. It may seem that DE-MOEA needs to split the entire population, but actually it does not. For any two individuals a ∈ Fi, b ∈ Fj (i ≠ j),

i ≤ fitness(a) < i + 1,  j ≤ fitness(b) < j + 1 . (6)

So (7) follows easily:

fitness(a) < fitness(b) ⇔ i < j . (7)

From (7), the terminating condition for splitting the population in DE-MOEA is the same as in NSGA-II, so the time complexity of Algorithm 4 is equal to that of NSGA-II.

4 Experimental Results and Discussions

4.1 Test Functions and Performance Measures

In this section, 4 benchmark functions (CTP1, CTP2, CTP6 and CTP7) are chosen from [20] to evaluate the performance of DE-MOEA; they are described in (8) and (9).

CTP1:  Min. f1(x) = x1,  Min. f2(x) = g(x) exp(−f1(x)/g(x))
s.t.  cj(x) = f2(x) − aj exp(−bj f1(x)) ≥ 0,  j = 1, 2, …, J . (8)

CTP2, CTP6, CTP7:  Min. f1(x) = x1,  Min. f2(x) = g(x)(1 − f1(x)/g(x))
s.t.  c(x) = cos(θ)(f2(x) − e) − sin(θ) f1(x)
        ≥ a |sin( bπ ( sin(θ)(f2(x) − e) + cos(θ) f1(x) )^c )|^d . (9)


where the decision space of each function has 5 dimensions, defined as 0 ≤ x1 ≤ 1, −5 ≤ x2,3,4,5 ≤ 5, and

g(x) = 41 + Σ_{i=2}^{5} (xi² − 10 cos(2π xi)).

For CTP1, J = 2, a1,2 = (0.858, 0.728) and b1,2 = (0.541, 0.295); the parameters chosen for the CTP2, CTP6 and CTP7 functions are listed in Table 1.

Table 1. Parameter Settings in CTP2, CTP6 and CTP7

Function    θ         a     b     c    d    e
CTP2       −0.2π     0.2   10    1    6    1
CTP6        0.1π     40    0.5   1    2   −2
CTP7       −0.05π    40    5     1    6    0
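For reference, g(x) and the CTP1 objectives/constraints of Eq. (8) can be coded as below (Python rather than the authors' Matlab; function names are ours):

```python
import math
import numpy as np

def g(x):
    """g(x) = 41 + sum_{i=2}^{5} (x_i^2 - 10 cos(2*pi*x_i)). x is 5-dimensional;
    indices are 0-based here, so x[1:] covers x_2..x_5. Minimum g = 1 at x_2..x_5 = 0."""
    return 41.0 + float(np.sum(x[1:] ** 2 - 10.0 * np.cos(2.0 * np.pi * x[1:])))

def ctp1(x, a=(0.858, 0.728), b=(0.541, 0.295)):
    """CTP1 objectives of Eq. (8) with its J = 2 constraints (c_j >= 0 is feasible)."""
    f1 = float(x[0])
    f2 = g(x) * math.exp(-f1 / g(x))
    c = [f2 - aj * math.exp(-bj * f1) for aj, bj in zip(a, b)]
    return (f1, f2), c

x = np.array([0.5, 0.0, 0.0, 0.0, 0.0])
(f1, f2), c = ctp1(x)
print(round(g(x), 6), round(f2, 6))  # 1.0 0.606531
```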

Performance measures for MOPs are analyzed and classified by Zitzler [17]. The unary indicator D1R [17] and the binary indicator V(A, B) [18] are selected here, defined as follows.

D1R(X) = Σ_{a∈X*} min{ ||a − b|| : b ∈ X } / |X*| , (10)

where X* denotes a reference set (1,000 uniform samples from the POF in this paper), and X is the non-dominated set found by the algorithm.
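A compact Python implementation of Eq. (10) (naming ours; Euclidean distance in objective space is assumed):

```python
import numpy as np

def d1r(found, reference):
    """D1_R of Eq. (10): average distance from each reference point in X* to its
    nearest point in the set X found by the algorithm (lower is better)."""
    X = np.asarray(found, dtype=float)
    X_ref = np.asarray(reference, dtype=float)
    # pairwise distances: rows = reference points, columns = found points
    dists = np.linalg.norm(X_ref[:, None, :] - X[None, :, :], axis=2)
    return float(dists.min(axis=1).mean())

reference = [[0.0, 1.0], [1.0, 0.0]]
found = [[0.0, 1.0], [1.0, 0.1]]
print(round(d1r(found, reference), 6))  # 0.05: one point matched, the other 0.1 away
```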

V(A, B) is defined as the percentage of the objective space dominated exclusively by A within the smallest hypercube containing both non-dominated sets A and B. As in [18], 50,000 Monte Carlo samples are taken to calculate the values. When calculating the hypercube, solutions with a first objective less than 10⁻⁷ are rejected in order to obtain more distinct differences between the two non-dominated sets with a finite number of samples. The two situations are illustrated in Fig. 1 and Fig. 2, where the two non-dominated sets are obtained from CNSGA-II [20] and HBGA on CTP1 in one run.
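The binary indicator can be estimated by Monte Carlo sampling as described; the following is a sketch under our assumptions (minimization, hypercube spanned by A ∪ B, and omitting the rejection step discussed below):

```python
import numpy as np

rng = np.random.default_rng(3)

def dominated_by_set(point, front):
    """True if some member of `front` Pareto-dominates `point` (minimization)."""
    return any(all(s <= q for s, q in zip(member, point)) and
               any(s < q for s, q in zip(member, point))
               for member in front)

def v_indicator(A, B, n_samples=50_000):
    """Monte Carlo estimate of V(A, B): the fraction of the smallest hypercube
    containing A and B that is dominated by A but not by B."""
    points = np.vstack([A, B])
    lo, hi = points.min(axis=0), points.max(axis=0)
    samples = rng.uniform(lo, hi, size=(n_samples, points.shape[1]))
    hits = sum(1 for s in samples
               if dominated_by_set(s, A) and not dominated_by_set(s, B))
    return hits / n_samples

# A sits at the "good" corner of the cube, so it dominates almost everything in it
print(v_indicator([[0.2, 0.2]], [[0.6, 0.6]], n_samples=2_000))  # ≈ 1.0
```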

[Figures: scatter plots of f2 versus f1 for the CNSGA-II and HBGA non-dominated sets]

Fig. 1. The two non-dominated sets without rejection. Fig. 2. The two non-dominated sets with rejection.

From Fig. 1, it can be seen that each set contains a few solutions with very large f2 values and very small f1 values. This seriously degrades the quality of V(A, B), because the great majority of samples fall in the area dominated by both sets and few samples can capture the slight but significant differences between the two sets. The values of V(A, B) in Fig. 1 are (0.0000%, 0.2180%), from which one would assert that HBGA outperforms CNSGA-II on CTP1 [17]; however, the corresponding values (0.1900%, 11.8260%) in Fig. 2 show that this assertion is far better supported with rejection, because the differences between the two sets are only captured at the appropriate scale of Fig. 2 with 50,000 samples. Meanwhile, the rejection hardly influences the experimental results. The f1 values of all the benchmark functions range from 0 to 1, so the probability of a sample point falling in the rejected area is 10⁻⁷; the mean and standard deviation of the number of samples falling in the rejected area over 50,000 samples are then 0.005 and 0.0707 respectively, which can hardly influence the final results. Therefore, this rejection is employed here in order to obtain distinct results without increasing the number of samples.

4.2 Results and Discussions

To the best of our knowledge, CNSGA-II [20] is the most promising method for constrained MOPs and is selected for comparison with DE-MOEA (HBGA and HRGA). All the algorithms are implemented in Matlab 7.0, and the source code may be obtained from the authors upon request. A real-coded GA (simulated binary crossover "SBX" and polynomial mutation "PM") is adopted in the implementation, and the parameter values are the same as in [20], listed in Table 2.

Table 2. Parameter Settings for CNSGA-II and DE-MOEA

N     α     pc    pm     SBX ηc   PM ηm   FE*
100   10%   0.9   1/n*   20       20      50,000

* n denotes the dimension of the decision space, and FE is the total number of function evaluations.

Table 3. Mean and Standard Deviation of the Unary Indicator D1R

Function    CTP1              CTP2              CTP6              CTP7
HBGA        0.0043 (0.0003)   0.0023 (0.0003)   0.2234 (0.9297)   0.0097 (0.0125)
HRGA        0.0056 (0.0007)   0.0037 (0.0021)   0.1103 (0.6587)   0.0241 (0.0932)
CNSGA-II    0.0970 (0.0570)   0.1889 (0.1236)   0.3525 (1.1969)   0.0610 (0.0847)

Results highlighted in bold are significantly better than CNSGA-II at α = 0.05 by a two-tailed test.

Table 4. Mean and Standard Deviation of the Binary Indicator V(A, B)

CTP1 CTP2 CTP6 CTP7 V(HBGA, CNSGA-II) 4.4609% (0.0349) 12.5761% (0.0892) 5.4083% (0.1757) 2.6680% (0.0467)V(CNSGA-II, HBGA) 0.2617% (0.0007) 0.0497% (0.0002) 3.2613% (0.1345) 0.1681% (0.0024) V(HRGA, CNSGA-II) 4.0455% (0.0346) 12.1040% (0.0881) 5.4424% (0.1766) 2.9616% (0.0506)V(CNSGA-II, HRGA) 0.3684% (0.0014) 0.0961% (0.0006) 1.7138% (0.0965) 0.1814% (0.0021)

V(HBGA, HRGA) 0.4046% (0.0015) 0.2008% (0.0023) 1.7404% (0.0958) 0.4740% (0.0109) V(HRGA, HBGA) 0.2000% (0.0009) 0.0442% (0.0003) 3.5931% (0.1382) 0.7050% (0.0168)

Results highlighted in bold signify significantly better than the other at α=0.05 by a two-tailed test.

The search-step-length parameter γ is set to 1.1 in HBGA and 0.85 in HRGA, and the number M in HRGA is set to 5, i.e., 5% of the population size. Each algorithm performs 100 independent runs on each benchmark function, and the initial populations of all the algorithms in each run are the same for a fair comparison. Table 3 and Table 4 show the means and standard deviations of the D1R and V(A, B) indicators obtained by all the algorithms, with the standard deviations in parentheses.

From Table 3 and Table 4 it can be observed that HBGA and HRGA obtain better results than CNSGA-II on all 4 benchmark functions, and the results are significantly better except on CTP6. It can also be seen that HBGA performs better than HRGA on CTP1 and CTP2, while the latter obtains better results on CTP6.

To illustrate the pureness of the POF found by each algorithm, statistics over the 100 independent runs are listed in Table 5.

Table 5. Pureness Statistics of the POF Found by the Three Algorithms

Benchmark   Algorithm   Runs producing   Runs producing   Runs producing
Function                pure POF         partial POF      local Pareto front
CTP1        HBGA        100              0                0
            HRGA        100              0                0
            CNSGA-II    4                96               0
CTP2        HBGA        100              0                0
            HRGA        100              0                0
            CNSGA-II    10               90               0
CTP6        HBGA        94               2                4
            HRGA        97               1                2
            CNSGA-II    89               4                7
CTP7        HBGA        48               52               0
            HRGA        80               20               0
            CNSGA-II    15               85               0

From Table 5, for CTP1 and CTP2, HBGA and HRGA converge to the pure POF with probability 1 in 100 runs, while the probability for CNSGA-II is not greater than 10%. For CTP6, all three algorithms converge to a partial POF or a local Pareto front with low probability, but the probability is somewhat lower for HBGA and HRGA. For CTP7 it is hard for CNSGA-II to find the pure POF, while the situation is easier for HBGA, and especially for HRGA.

The search ability at the boundaries of the non-dominated set is improved in both HBGA and HRGA, so it is easier for both to locate the pure POF of CTP1, which has a continuous POF. CTP2 and CTP7 have non-continuous POFs, and the intervals in CTP7's POF are much larger. So HBGA can still efficiently find the pure POF of CTP2, but not that of CTP7. HRGA, however, can solve this problem by choosing representatives from the population for differential evolution, and the experimental results on CTP2 and CTP7 give evidence for this explanation. CTP6 has several local Pareto fronts caused by the infeasible holes towards the Pareto-optimal region in objective space. By introducing differential evolution, a better tradeoff between feasible and infeasible regions can be achieved, so the number of runs converging to a local Pareto front is lower than for CNSGA-II. Compared with HBGA, HRGA has better diversity during the search because it chooses more "current best solutions" for differential evolution, and its probability of getting stuck in a local Pareto front is much lower than HBGA's. However, the approximation and diversity of the non-dominated set are two (possibly) conflicting objectives [8], so when the diversity exceeds what is actually required, the approximation deteriorates. This could be why HBGA performs better than HRGA on CTP1 and CTP2.

Based on the above results and analysis, the algorithm DE-MOEA proposed in this paper has superior performance compared with CNSGA-II, especially in the distribution of the non-dominated set.

5 Conclusion

This paper proposes two novel schemes of selecting the current best solutions for MODE. Then, based on the search biases strategy suggested by Runarsson and Yao, a hybrid algorithm of MODE and GA is put forward for constrained MOPs. We implement the hybrid algorithm based on NSGA-II with each of the two MODE schemes, named HBGA and HRGA, and compare them with CNSGA-II on 4 benchmark functions constructed by Deb. Experimental results show that the quality of the non-dominated set obtained by both algorithms is better than that of CNSGA-II on all the benchmark functions. Future work is to apply the hybrid algorithm to more complex problems and applications.

Acknowledgement

This work is partially supported by the National Natural Science Foundation of China (Grant No. 60428202). The authors would like to thank Prof. X. Yao from the University of Birmingham for his helpful comments.

References

1. R.S. Rosenberg: Simulation of genetic populations with biochemical properties. Ph.D. thesis, University of Michigan, Ann Arbor, Michigan, 1967

2. J. David Schaffer: Multiple objective optimization with vector evaluated genetic algorithms. In Genetic Algorithms and their Applications: Proceedings of the First International Conference on Genetic Algorithms, 93–100, Lawrence Erlbaum, 1985

3. C.A. Coello Coello: Evolutionary multi-objective optimization: a historical view of the field. IEEE Computational Intelligence Magazine, 1(1): 28-36, Feb. 2006

4. N. Srinivas, K. Deb: Multiobjective optimization using nondominated sorting in genetic algorithms. Evolutionary Computation, 2(3): 221–248, Fall 1994

5. J. Horn, N. Nafpliotis, and D.E. Goldberg: A niched Pareto genetic algorithm for multiobjective optimization. In Proceedings of the 1st CEC, 1: 82–87, June 1994

6. C.M. Fonseca, P.J. Fleming: Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization. In Proceedings of the Fifth International Conference on Genetic Algorithms, 1993, 416–423

7. E. Zitzler, L. Thiele: Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach. IEEE Trans. Evol. Comput., 3(4): 257–271, Nov. 1999


8. E. Zitzler, M. Laumanns, L. Thiele: SPEA2: Improving the strength Pareto evolutionary algorithm. In EUROGEN 2001. Evolutionary Methods for Design, Optimization and Control with Applications to Industrial Problems, 2002, 95–100

9. J.D. Knowles, D.W. Corne: Approximating the nondominated front using the Pareto archived evolution strategy. Evolutionary Computation, 8(2): 149–172, 2000

10. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput., 6(2): 182–197, Apr. 2002

11. M. Laumanns, L. Thiele, K. Deb, and E. Zitzler: Combining convergence and diversity in evolutionary multi-objective optimization. Evolutionary Computation, 10(3): 263–282, Fall 2002

12. C.A. Coello Coello, G. Toscano Pulido, and M. Salazar Lechuga: Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput., 8(3): 256–279, June 2004

13. T. Robič, B. Filipič: DEMO: Differential Evolution for Multiobjective Optimization. EMO 2005, 520-533

14. T. P. Runarsson, X. Yao: Search Biases in Constrained Evolutionary Optimization. IEEE Trans. Syst. Man Cybern. Part C-Appl. Rev., 35(2): 233-243, May 2005

15. T. P. Runarsson, X. Yao: Stochastic Ranking for Constrained Evolutionary Optimization. IEEE Trans. Evol. Comput., 4(3): 284-294, Sep. 2000

16. R. Storn, K. Price: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Global Optimiz., 11(4): 341–359, Dec. 1997

17. E. Zitzler, L. Thiele, M. Laumanns, C.M. Fonseca, and V. Grunert da Fonseca: Performance Assessment of Multiobjective Optimizers: An Analysis and Review. IEEE Trans. Evol. Comput., 7(2): 117–132, Apr. 2003

18. J.E. Fieldsend, R.M. Everson, and S. Singh: Using unconstrained elite archives for multiobjective optimization. IEEE Trans. Evol. Comput., 7(2): 305-323, June 2003

19. M. Zhang, H.T. Geng, W.J. Luo, L.F. Huang, X.F. Wang: A Novel Search Biases Selection Strategy for Constrained Evolutionary Optimization. CEC 2006, to appear

20. K. Deb, A. Pratap, T. Meyarivan: Constrained Test Problems for Multi-objective Evolutionary Optimization. EMO 2001, 284-298

21. E. Mezura-Montes, J. Velázquez-Reyes, and C.A. Coello Coello: Promising infeasibility and multiple offspring incorporated to differential evolution for constrained optimization. GECCO 2005, 225-232