Chap. 1: Evolutionary Algorithms for Engineering Applications

1 Evolutionary Algorithms for Engineering Applications

ZBIGNIEW MICHALEWICZ*, KALYANMOY DEB**, MARTIN SCHMIDT, and THOMAS STIDSEN

Department of Computer Science, University of North Carolina, Charlotte, NC 28223, USA, and Institute of Computer Science, Polish Academy of Sciences, ul. Ordona 21, 01-237 Warsaw, Poland; e-mail: zbyszek@uncc.edu

Department of Mechanical Engineering, Indian Institute of Technology, Kanpur, Pin 208 016, India; e-mail: deb@iitk.ac.in

Department of Computer Science, Aarhus University, DK-8000 Aarhus C, Denmark; e-mail: marsch@daimi.au.dk

Department of Computer Science, Aarhus University, DK-8000 Aarhus C, Denmark; e-mail: stidsen@daimi.au.dk

1.1 INTRODUCTION

Evolutionary computation techniques have drawn much attention as optimization methods in the last two decades [1, 10, 21, 2]. They are stochastic optimization methods which are conveniently presented using the metaphor of natural evolution: a randomly initialized population of individuals (a set of points of the search space at hand) evolves following a crude parody of the Darwinian principle of the survival of the fittest. New individuals are generated using some variation operators (e.g., simulated genetic operations such as mutation and crossover). The probability of survival of the newly generated solutions depends on their fitness (how well they perform with respect to the optimization problem at hand): the best are kept with a high probability, the worst are rapidly discarded.

*Presently on leave at Aarhus University, Aarhus, Denmark.
**Presently on leave at the University of Dortmund, Germany.
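This evolutionary loop can be sketched in a few lines of code. The sketch below is only a generic illustration of the metaphor (tournament selection, Gaussian mutation, survival of the fittest), not any specific algorithm from the literature; the population size, mutation width, and test function are arbitrary choices.

```python
import random

def evolve(fitness, n, pop_size=30, generations=300, sigma=0.1):
    """Minimal steady-state evolutionary loop: random initialization,
    tournament selection, Gaussian mutation, survival of the fittest."""
    pop = [[random.uniform(-1.0, 1.0) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        # selection: the better of two random individuals becomes the parent
        parent = max(random.sample(pop, 2), key=fitness)
        # variation: mutate the parent with Gaussian noise
        child = [xi + random.gauss(0.0, sigma) for xi in parent]
        # survival: the new solution replaces the worst individual,
        # but only if it is fitter; the worst are rapidly discarded
        worst = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[worst]):
            pop[worst] = child
    return max(pop, key=fitness)

if __name__ == "__main__":
    # maximize -sum(x_i^2); the optimum is the origin
    best = evolve(lambda x: -sum(v * v for v in x), n=3)
    print(best)
```

The loop is "zeroth-order" in the sense discussed below: it touches the objective only through calls to `fitness`.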


From the optimization point of view, one of the main advantages of evolutionary computation techniques is that they do not impose strong mathematical requirements on the optimization problem. They are zeroth-order methods (all they need is an evaluation of the objective function), and they can handle nonlinear problems defined on discrete, continuous, or mixed search spaces, unconstrained or constrained. Moreover, the ergodicity of the evolution operators makes them global in scope (in probability).

Many engineering activities involve unstructured, real-life problems that are hard to model, since they require the inclusion of unusual factors (from accident risk factors to aesthetics). Other engineering problems are complex in nature: job shop scheduling, timetabling, traveling salesman, or facility layout problems are examples of NP-complete problems. In both cases, evolutionary computation techniques represent a potential source of actual breakthroughs. Their ability to provide many near-optimal solutions at the end of an optimization run makes it possible to choose the best solution afterwards, according to criteria that were either inarticulate from the expert or badly modeled. Evolutionary algorithms can be made efficient because they are flexible and relatively easy to hybridize with domain-dependent heuristics. These features of evolutionary computation have already been acknowledged in the field of engineering, and many applications have been reported (see, for example, [6, 11, 34, 43]).

A vast majority of engineering optimization problems are constrained problems. The presence of constraints significantly affects the performance of any optimization algorithm, including evolutionary search methods [20].
This paper focuses on the issue of evaluating constraint-handling methods, as the advantages and disadvantages of the various methods are not well understood. The general way of dealing with constraints, whatever the optimization method, is by penalizing infeasible points. However, there are no guidelines for designing penalty functions. Some suggestions for evolutionary algorithms are given in [37], but they do not generalize. Other techniques that can be used to handle constraints are more or less problem dependent. For instance, knowledge about linear constraints can be incorporated into specific operators [24], or a repair operator can be designed that projects infeasible points onto feasible ones [30].

Section 1.2 provides a general overview of constraint-handling methods for evolutionary computation techniques. The experimental results reported in many papers suggest that making an appropriate a priori choice of an evolutionary method for a nonlinear parameter optimization problem remains an open question. It seems that the most promising approach at this stage of research is experimental, involving the design of a scalable test suite of constrained optimization problems in which many features could be easily tuned. Then it would be possible to evaluate the merits and drawbacks of the available methods, as well as to test new methods efficiently. Section 1.3 discusses further the need for such a scalable test suite, whereas section 1.4 presents a particular test case generator proposed recently [23]. Section 1.5 provides an example of the use


of this test case generator for the evaluation of a particular constraint-handling method. Section 1.6 concludes the paper.

1.2 CONSTRAINT-HANDLING METHODS

The general nonlinear programming (NLP) problem is to find x so as to

optimize f(x), x = (x_1, ..., x_n) ∈ IR^n, (1.1)

where x ∈ F ⊆ S. The objective function f is defined on the search space S ⊆ IR^n, and the set F ⊆ S defines the feasible region. Usually, the search space S is defined as an n-dimensional rectangle in IR^n (domains of variables given by their lower and upper bounds):

l_i ≤ x_i ≤ u_i, 1 ≤ i ≤ n,

whereas the feasible region F ⊆ S is defined by a set of p additional constraints (p ≥ 0):

g_j(x) ≤ 0, for j = 1, ..., q, and h_j(x) = 0, for j = q + 1, ..., p.

At any point x ∈ F, the constraints g_j that satisfy g_j(x) = 0 are called the active constraints at x.

The NLP problem is, in general, intractable: it is impossible to develop a deterministic method for the NLP in the global optimization category which would be better than exhaustive search [12]. This leaves room for evolutionary algorithms, extended by some constraint-handling methods. Indeed, during the last few years several evolutionary algorithms (which aim at complex objective functions, e.g., non-differentiable or discontinuous ones) have been proposed for the NLP; a recent survey paper [28] provides an overview of these algorithms.

During the last few years, several methods were proposed for handling constraints by genetic algorithms for parameter optimization problems. These methods can be grouped into five categories: (1) methods based on preserving the feasibility of solutions, (2) methods based on penalty functions, (3) methods which make a clear distinction between feasible and infeasible solutions, (4) methods based on decoders, and (5) other hybrid methods.
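Under this formulation, the feasibility of a point and its set of active constraints can be checked directly in code. The following is a generic sketch; the function names and the tolerance eps used for the equality constraints are illustrative assumptions, not part of the formulation.

```python
def is_feasible(x, g, h, eps=1e-9):
    """x is feasible iff g_j(x) <= 0 for every inequality constraint and
    h_j(x) = 0 (here: within a tolerance eps) for every equality constraint."""
    return all(gj(x) <= 0.0 for gj in g) and all(abs(hj(x)) <= eps for hj in h)

def active_constraints(x, g, eps=1e-9):
    """Indices j of inequality constraints with g_j(x) = 0: the active ones."""
    return [j for j, gj in enumerate(g) if abs(gj(x)) <= eps]

# example: F = unit disc intersected with the half-plane x_2 <= x_1
g = [lambda x: x[0] ** 2 + x[1] ** 2 - 1.0,  # g_1(x) <= 0
     lambda x: x[1] - x[0]]                  # g_2(x) <= 0
print(is_feasible((1.0, 0.0), g, []))     # True: on the boundary of the disc
print(active_constraints((1.0, 0.0), g))  # [0]: only g_1 is active there
```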
We discuss them briefly in turn.

1.2.1 Methods based on preserving feasibility of solutions

The best example of this approach is the Genocop (GEnetic algorithm for Numerical Optimization of COnstrained Problems) system [24, 25]. The idea behind the system is based on specialized operators which transform feasible individuals into feasible individuals, i.e., operators which are closed on the feasible part F of the search space. The method assumes linear constraints only and a feasible starting point (or feasible initial population). Linear equations are used to eliminate some variables; these are replaced by a linear combination of the remaining variables. Linear inequalities are updated accordingly. A closed set of operators maintains the feasibility of solutions. For example, when a particular component x_i of a solution vector x is mutated, the system determines its current domain dom(x_i) (which is a function of the linear constraints and the remaining values of the solution vector x), and the new value of x_i is taken from this domain (either with a flat probability distribution for uniform mutation, or with other probability distributions for non-uniform and boundary mutations). In any case the offspring solution vector is always feasible. Similarly, arithmetic crossover, a·x + (1 − a)·y, of two feasible solution vectors x and y always yields a feasible solution (for 0 ≤ a ≤ 1) in convex search spaces (the system assumes linear constraints only, which imply convexity of the feasible search space F).

Recent work [27, 39, 40] on systems which search only the boundary between the feasible and infeasible regions of the search space constitutes another example of the approach based on preserving the feasibility of solutions. These systems are based on specialized boundary operators (e.g., sphere crossover, geometrical crossover, etc.): it is a common situation for many constrained optimization problems that some constraints are active at the target global optimum, so that the optimum lies on the boundary of the feasible space.

1.2.2 Methods based on penalty functions

Many evolutionary algorithms incorporate a constraint-handling method based on the concept of (exterior) penalty functions, which penalize infeasible solutions. Usually, the penalty function is based on the distance of a solution from the feasible region F, or on the effort to "repair" the solution, i.e., to force it into F.
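A distance-based penalty can be sketched as follows. The violation measures f_j and the dynamic weight (C·t)^α follow the formulas quoted in this section from [15]; the concrete values C = 0.5 and α = β = 2 are illustrative assumptions only.

```python
def violations(x, g, h):
    """f_j(x) = max(0, g_j(x)) for inequalities and |h_j(x)| for equalities."""
    return [max(0.0, gj(x)) for gj in g] + [abs(hj(x)) for hj in h]

def dynamic_penalty_eval(x, f, g, h, t, C=0.5, alpha=2.0, beta=2.0):
    """eval(x) = f(x) + (C*t)^alpha * sum_j f_j(x)^beta (for minimization):
    the pressure on infeasible solutions grows with the iteration counter t."""
    return f(x) + (C * t) ** alpha * sum(fj ** beta for fj in violations(x, g, h))

# example: minimize f(x) = x^2 subject to x >= 1, i.e. g(x) = 1 - x <= 0
f = lambda x: x[0] ** 2
g = [lambda x: 1.0 - x[0]]
print(dynamic_penalty_eval((0.0,), f, g, [], t=1))   # 0 + (0.5*1)^2 * 1 = 0.25
print(dynamic_penalty_eval((0.0,), f, g, [], t=10))  # 0 + (0.5*10)^2 * 1 = 25.0
print(dynamic_penalty_eval((1.5,), f, g, [], t=10))  # feasible: just f = 2.25
```

The same infeasible point is penalized lightly at the start of the run and heavily near its end.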
The former case is the most popular one; in many methods a set of functions f_j (1 ≤ j ≤ m) is used to construct the penalty, where the function f_j measures the violation of the j-th constraint in the following way:

f_j(x) = max{0, g_j(x)}, if 1 ≤ j ≤ q; |h_j(x)|, if q + 1 ≤ j ≤ m.

However, these methods differ in many important details of how the penalty function is designed and applied to infeasible solutions. For example, a method of static penalties was proposed [14]; it assumes that for every constraint we establish a family of intervals which determine an appropriate penalty coefficient. The method of dynamic penalties was examined [15], where individuals are evaluated (at iteration t) by the formula

eval(x) = f(x) + (C · t)^α · Σ_{j=1}^m f_j^β(x),

where C, α, and β are constants. Another approach (Genocop II), also based on dynamic penalties, was described [22]. In that algorithm only the active constraints are considered at every iteration, and the pressure on infeasible solutions is increased due to the decreasing values of the temperature τ. In [9] a method for solving constraint satisfaction problems that changes the evaluation function based on the performance of an EA run was described: the penalties (weights)


of those constraints which are violated by the best individual after termination are raised, and the new weights are used in the next run. A method based on adaptive penalty functions was developed in [3, 13]: one component of the penalty function takes feedback from the search process. Each individual is evaluated by the formula

eval(x) = f(x) + λ(t) · Σ_{j=1}^m f_j²(x),

where λ(t) is updated every generation t with respect to the current state of the search (based on the last k generations). An adaptive penalty function was also used in [42], where both search-length and constraint-severity feedback is incorporated. It involves the estimation of a near-feasible threshold q_j for each constraint (1 ≤ j ≤ m); such thresholds indicate distances from the feasible region F which are "reasonable" (or, in other words, which determine "interesting" infeasible solutions, i.e., solutions relatively close to the feasible region). A further method (the so-called segregated genetic algorithm) was proposed in [18] as yet another way to handle the problem of the robustness of the penalty level: two different penalized fitness functions with static penalty terms p_1 and p_2 (smaller and larger, respectively) were designed. The main idea is that such an approach maintains, roughly, two subpopulations: the individuals selected on the basis of f_1 will more likely lie in the infeasible region, while the ones selected on the basis of f_2 will probably stay in the feasible region; the overall process is thus allowed to reach the feasible optimum from both sides of the boundary of the feasible region.

1.2.3 Methods based on a search for feasible solutions

There are a few methods which emphasize the distinction between feasible and infeasible solutions in the search space S.
One method, proposed in [41] (called a "behavioral memory" approach), considers the problem constraints in a sequence; a switch from one constraint to another is made upon the arrival of a sufficient number of feasible individuals in the population.

The second method, developed in [34], is based on a classical penalty approach with one notable exception. Each individual is evaluated by the formula

eval(x) = f(x) + r · Σ_{j=1}^m f_j(x) + θ(t, x),

where r is a constant; however, the original component θ(t, x) is an additional iteration-dependent function which influences the evaluations of infeasible solutions. The point is that the method distinguishes between feasible and infeasible individuals by adopting an additional heuristic rule (suggested earlier in [37]): for any feasible individual x and any infeasible individual y, eval(x) < eval(y), i.e., any feasible solution is better than any infeasible one.¹ In a recent study [8], a modification to this approach is implemented with the tournament selection operator and with the following evaluation function:

¹For minimization problems.


eval(x) = f(x), if x is feasible; f_max + Σ_{j=1}^m f_j(x), otherwise,

where f_max is the function value of the worst feasible solution in the population. The main difference between this approach and Powell and Skolnick's approach is that here the objective function value is not considered in evaluating an infeasible solution. Additionally, a niching scheme is introduced to maintain diversity among feasible solutions. Thus, the search initially focuses on finding feasible solutions and later, when an adequate number of feasible solutions has been found, the algorithm finds better feasible solutions by maintaining diversity among the solutions in the feasible region. It is interesting to note that there is no need for the penalty coefficient r here, because feasible solutions are always evaluated to be better than infeasible solutions, and infeasible solutions are compared purely on the basis of their constraint violations. However, normalization of the constraint violations f_j(x) is suggested. On a number of test problems and on an engineering design problem, this approach was better able to find constrained optimum solutions than Powell and Skolnick's approach.

The third method (Genocop III), proposed in [26], is based on the idea of repairing infeasible individuals. Genocop III incorporates the original Genocop system, but also extends it by maintaining two separate populations, where a development in one population influences the evaluations of individuals in the other population. The first population P_s consists of so-called search points from F_l which satisfy the linear constraints of the problem. The feasibility (in the sense of linear constraints) of these points is maintained by the specialized operators. The second population P_r consists of so-called reference points from F; these points are fully feasible, i.e., they satisfy all constraints. Reference points r from P_r, being feasible, are evaluated directly by the objective function (i.e., eval(r) = f(r)).
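The comparison rule of [8] described above is easy to state in code. A minimal sketch for minimization (the function names are illustrative):

```python
def deb_eval(x, f, viol, f_max):
    """eval(x) = f(x) if x is feasible, and f_max + total constraint
    violation otherwise, where f_max is the objective value of the worst
    feasible solution in the current population."""
    total = sum(viol(x))
    return f(x) if total == 0.0 else f_max + total

def tournament_winner(x, y, f, viol, f_max):
    """Binary tournament under the rule: any feasible solution beats any
    infeasible one; no penalty coefficient is needed."""
    return x if deb_eval(x, f, viol, f_max) <= deb_eval(y, f, viol, f_max) else y

f = lambda x: x[0] ** 2
viol = lambda x: [max(0.0, 1.0 - x[0])]  # constraint: x >= 1
f_max = 4.0                              # e.g. the worst feasible value seen so far
# the feasible point (1.5,) beats the infeasible (0.9,), although f(0.9) < f(1.5)
print(tournament_winner((1.5,), (0.9,), f, viol, f_max))  # -> (1.5,)
```

Note that the objective value of an infeasible point never enters its evaluation, exactly as in the method described above.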
On the other hand, search points from P_s are "repaired" for evaluation.

1.2.4 Methods based on decoders

Decoders offer an interesting option for all practitioners of evolutionary techniques. In these techniques a chromosome "gives instructions" on how to build a feasible solution. For example, a sequence of items for the knapsack problem can be interpreted as "take an item if possible"; such an interpretation always leads to a feasible solution.

However, it is important to point out that several factors should be taken into account when using decoders. Each decoder imposes a mapping T between a feasible solution and a decoded solution. It is important that several conditions are satisfied: (1) for each solution s ∈ F there is an encoded solution d; (2) each encoded solution d corresponds to a feasible solution s; and (3) all solutions in F should be represented by the same number of encodings


d.² Additionally, it is reasonable to require that (4) the transformation T is computationally fast and (5) it has a locality feature, in the sense that small changes in the coded solution result in small changes in the solution itself. An interesting study on coding trees in genetic algorithms was reported in [31], where the above conditions were formulated.

However, the use of decoders for continuous domains had not been investigated. Only recently [16, 17] was a new approach for solving constrained numerical optimization problems proposed. This approach incorporates a homomorphous mapping between an n-dimensional cube and the feasible search space. The mapping transforms the constrained problem at hand into an unconstrained one. The method has several advantages over the methods proposed earlier (no additional parameters, no need to evaluate, or penalize, infeasible solutions, ease of approaching a solution located on the edge of the feasible region, no need for special operators, etc.).

1.2.5 Hybrid methods

It is relatively easy to develop hybrid methods which combine evolutionary computation techniques with deterministic procedures for numerical optimization problems. In [45] a combination of an evolutionary algorithm with the direction set method of Hooke-Jeeves is described; this hybrid method was tested on three (unconstrained) test functions. In [29] the authors considered a similar approach, but they experimented with constrained problems. Again, they combined an evolutionary algorithm with some other method, developed in [19]. However, while the method of [45] incorporated the direction set algorithm as a problem-specific operator of its evolutionary technique, in [29] the whole optimization process was divided into two separate phases.

Several other constraint-handling methods also deserve some attention.
For example, some methods use the values of the objective function f and the penalties f_j (j = 1, ..., m) as elements of a vector and apply multi-objective techniques to minimize all components of the vector. For example, in [38], the Vector Evaluated Genetic Algorithm (VEGA) selects 1/(m + 1) of the population based on each of the objectives. Such an approach was incorporated by Parmee and Purchase [33] in the development of techniques for constrained design spaces. On the other hand, in the approach of [43], all members of the population are ranked on the basis of constraint violation. Such a rank r, together with

²However, as observed by Davis [7], the requirement that all solutions in F should be represented by the same number of decodings seems overly strong: there are cases in which this requirement might be suboptimal. For example, suppose we have a decoding and encoding procedure which makes it impossible to represent suboptimal solutions, and which encodes the optimal one: this might be a good thing. (An example would be a graph coloring order-based chromosome, with a decoding procedure that gives each node its first legal color. This representation could not encode solutions where some nodes that could be colored were not colored, but this is a good thing!)


the value of the objective function f, leads to a two-objective optimization problem. This approach gave good performance on the optimization of gas supply networks.

Also, an interesting approach was reported in [32]. The method (described in the context of constraint satisfaction problems) is based on a coevolutionary model, where a population of potential solutions coevolves with a population of constraints: fitter solutions satisfy more constraints, whereas fitter constraints are violated by fewer solutions. There is also some development connected with generalizing the concept of "ant colonies" [5] (which were originally proposed for order-based problems) to numerical domains [4]; first experiments on some test problems gave very good results [48]. It is also possible to incorporate knowledge of the constraints of the problem into the belief space of cultural algorithms [35]; such algorithms provide the possibility of conducting an efficient search of the feasible search space [36].

1.3 A NEED FOR A TEST CASE GENERATOR

It is not clear what characteristics of a constrained problem make it difficult for an evolutionary technique (and, as a matter of fact, for any other optimization technique). Any problem can be characterized by various parameters; these may include the number of linear constraints, the number of nonlinear constraints, the number of equality constraints, the number of active constraints, the ratio ρ = |F|/|S| of the size of the feasible search space to that of the whole search space, and the type of the objective function (the number of variables, the number of local optima, the existence of derivatives, etc.). In [28] eleven test cases for constrained numerical optimization problems were proposed (G1-G11).
These test cases include objective functions of various types (linear, quadratic, cubic, polynomial, nonlinear) with various numbers of variables and different types (linear inequalities, nonlinear equations and inequalities) and numbers of constraints. The ratio ρ between the size of the feasible search space F and the size of the whole search space S varies for these test cases from 0% to almost 100%; the topologies of the feasible search spaces are also quite different. These test cases are summarized in table 1.1. For each test case the number n of variables, the type of the function f, the relative size of the feasible region in the search space given by the ratio ρ, the number of constraints of each category (linear inequalities LI, nonlinear equations NE and inequalities NI), and the number a of active constraints at the optimum (including equality constraints) are listed.

Function   n    Type of f    ρ         LI  NE  NI  a
G1         13   quadratic    0.0111%   9   0   0   6
G2         k    nonlinear    99.8474%  0   0   2   1
G3         k    polynomial   0.0000%   0   1   0   1
G4         5    quadratic    52.1230%  0   0   6   2
G5         4    cubic        0.0000%   2   3   0   3
G6         2    cubic        0.0066%   0   0   2   2
G7         10   quadratic    0.0003%   3   0   5   6
G8         2    nonlinear    0.8560%   0   0   2   0
G9         7    polynomial   0.5121%   0   0   4   2
G10        8    linear       0.0010%   3   0   3   6
G11        2    quadratic    0.0000%   0   1   0   1

Table 1.1 Summary of the eleven test cases. The ratio ρ = |F|/|S| was determined experimentally by generating 1,000,000 random points from S and checking whether they belong to F (for G2 and G3 we assumed k = 50). LI, NE, and NI represent the number of linear inequalities, nonlinear equations, and nonlinear inequalities, respectively.

The results of many tests did not provide meaningful conclusions, as no single parameter (number of linear, nonlinear, or active constraints, the ratio ρ, type of the function, number of variables) proved to be significant as a major measure of the difficulty of a problem. For example, many tested methods approached the optimum quite closely for the test cases G1 and G7 (with ρ = 0.0111% and ρ = 0.0003%, respectively), whereas most of the methods experienced difficulties for the test case G10 (with ρ = 0.0010%). Two quadratic functions (the test cases G1 and G7) with a similar number of constraints (9 and 8, respectively) and an identical number (6) of active constraints at the optimum posed a different challenge to most of these methods. Also, several methods were quite sensitive to the presence of a feasible solution in the initial population. Possibly a more extensive testing of various methods is required.

Not surprisingly, the experimental results of [28] suggested that making an appropriate a priori choice of an evolutionary method for a nonlinear optimization problem remains an open question. It seems that more complex properties of the problem (e.g., the characteristics of the objective function together with the topology of the feasible region) may constitute quite significant measures of the difficulty of a problem. Also, some additional measures of the problem characteristics due to the constraints might be helpful. However, this kind of information is not generally available. In [28] the authors wrote:

"It seems that the most promising approach at this stage of research is experimental, involving the design of a scalable test suite of constrained optimization problems, in which many [...] features could be easily tuned.
Then it should be possible to test new methods with respect to the corpus of all available methods."

Clearly, there is a need for a parameterized test case generator which can be used for analyzing various methods in a systematic way (rather than testing


them on a few selected test cases; moreover, it is not clear whether the addition of a few extra test cases is of any help).

In this paper we propose such a test case generator for constrained parameter optimization techniques. This generator is capable of creating various test cases with different characteristics:

- problems with a different value of ρ, the relative size of the feasible region in the search space;
- problems with different numbers and types of constraints;
- problems with a convex or non-convex objective function, possibly with multiple optima;
- problems with highly non-convex constraints consisting of (possibly) disjoint regions.

All this can be achieved by setting a few parameters which influence different characteristics of the optimization problem. Such a test case generator should be very useful for analyzing and comparing different constraint-handling techniques.

There have been some attempts in the past to propose a test case generator for unconstrained parameter optimization [46, 47]. We are also aware of one attempt to do so for constrained cases; in [44] the author proposes the so-called stepping-stones problem, defined as:

objective: maximize Σ_{i=1}^n (x_i/π + 1),

where −π ≤ x_i ≤ π for i = 1, ..., n and the following constraints are satisfied:

e^{x_i/π} + cos(2x_i) ≤ 1 for i = 1, ..., n.

Note that the objective function is linear and that the feasible region is split into 2^n disjoint parts (called stepping stones). As the number of dimensions n grows, the problem becomes more complex. However, as the stepping-stones problem has only one parameter, it cannot be used to investigate some aspects of a constraint-handling method.

1.4 THE TEST CASE GENERATOR

As explained in the previous section, it is of great importance to have a parameterized generator of test cases for constrained parameter optimization problems. By changing the values of some parameters it would be possible to investigate the merits/drawbacks (efficiency, cost, etc.) of many constraint-handling methods.
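The stepping-stones problem quoted above can be stated in a few lines of code, which also makes the structure of its feasible region easy to inspect:

```python
import math

def stepping_stones_objective(x):
    """maximize sum_i (x_i / pi + 1), with -pi <= x_i <= pi."""
    return sum(xi / math.pi + 1.0 for xi in x)

def stepping_stones_feasible(x):
    """Feasible iff exp(x_i / pi) + cos(2 x_i) <= 1 for every i; per
    dimension this holds on two disjoint intervals, so the feasible
    region splits into 2^n disjoint stepping stones."""
    return all(math.exp(xi / math.pi) + math.cos(2.0 * xi) <= 1.0 for xi in x)

print(stepping_stones_objective([math.pi, math.pi]))  # 4.0: the bound 2n for n = 2
print(stepping_stones_feasible([0.0, 0.0]))           # False: exp(0) + cos(0) = 2 > 1
print(stepping_stones_feasible([math.pi / 2, -math.pi / 2]))  # True: on a stone
```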
Many interesting questions could be addressed:

- how does the efficiency of a constraint-handling method change as a function of the number of disjoint components of the feasible part of the search space?


- how does the efficiency of a constraint-handling method change as a function of the ratio between the sizes of the feasible part and the whole search space?
- what is the relationship between the number of constraints (or the number of dimensions, for example) of a problem and the computational effort required by a method?

and many others. In [23] a parameterized test case generator TCG was proposed:

TCG(n, w, α, β, δ, γ);

the meaning of its six parameters is as follows:

n: the number of variables of the problem
w: a parameter to control the number of optima in the search space
α: a parameter to control the number of constraints (inequalities)
β: a parameter to control the connectedness of the feasible search regions
δ: a parameter to control the ratio of the feasible to the total search space
γ: a parameter to influence the ruggedness of the fitness landscape

The ranges and types of the parameters are: n ≥ 1, integer; w ≥ 1, integer; 0 ≤ α ≤ 1, float; 0 ≤ β ≤ 1, float; 0 ≤ δ ≤ 1, float; 0 ≤ γ ≤ 1, float.

The general idea behind this test case generator was to divide the search space S into a number of disjoint subspaces S_k and to define a unimodal function f_k on every S_k. Thus the objective function G is defined on the search space S = Π_{i=1}^n [0, 1) as follows:

G(x) = f_k(x) iff x ∈ S_k.

The total number of subspaces S_k is equal to w^n, as each dimension of the search space is divided into w segments of equal length (for the exact definition of a subspace S_k, see [23]). This number also indicates the total number of local optima of the function G. For each subspace S_k (k = 0, ..., w^n − 1) a function f_k is defined:

f_k(x_1, ..., x_n) = a_k · (Π_{i=1}^n (u_{k,i} − x_i)(x_i − l_{k,i}))^{1/n}, (1.2)

where l_{k,i} and u_{k,i} are the boundaries of the k-th subspace in the i-th dimension. The constants a_k are defined as follows:

a_k = 4w² (1 − δ²β²)^{k'·[(1−γ) + γ/log₂(nw − n + 1)] − 1}, if δβ > 0;
a_k = 4w² ((γ − 1)·k''/(n(w − 1)) + 1), if δβ = 0, (1.3)

where

k' = log₂(Σ_{i=1}^n q_{i,k} + 1), and k'' = Σ_{i=1}^n q_{i,k},


where (q_{1,k}, ..., q_{n,k}) is the n-dimensional representation of the number k in the w-ary alphabet.³ Additionally, to remove this fixed pattern from the generated test cases, an additional mechanism, a permutation of the subspaces S_k, was used.

The third parameter α of the test case generator is related to the number m of constraints of the problem, as the feasible part of the search space S is defined by means of m double inequalities (0 ≤ m ≤ w^n):

r_1² ≤ c_k(x) ≤ r_2², k = 0, ..., m − 1, (1.4)

where 0 ≤ r_1 ≤ r_2 and each c_k(x) is a quadratic function:

c_k(x) = (x_1 − p_{k,1})² + ... + (x_n − p_{k,n})²,

where p_{k,i} = (l_{k,i} + u_{k,i})/2. These m double inequalities define m feasible parts F_k of the search space:

x ∈ F_k iff r_1² ≤ c_k(x) ≤ r_2²,

and the overall feasible search space F = ∪_{k=0}^{m−1} F_k. Note that the interpretation of the constraints here differs from the one in the standard definition of the NLP problem: here the feasible search space is defined as the union (not the intersection) of all double constraints. In other words, a point x is feasible if and only if there exists an index 0 ≤ k ≤ m − 1 such that the double inequality (1.4) is satisfied.

The parameter 0 ≤ α ≤ 1 determines the number m of constraints as follows:

m = ⌊α(w^n − 1)⌋ + 1. (1.5)

Clearly, α = 0 and α = 1 imply m = 1 and m = w^n, i.e., the minimum and maximum numbers of constraints, respectively.

The parameters β and δ define the radii r_1 and r_2:

r_1 = δβ√n/(2w), r_2 = β√n/(2w). (1.6)

These parameters determine the topology of the feasible part of the search space.

If δβ > 0, the function G has 2^n global maximum points, all in the permuted S_0. For any global solution (x_1, ..., x_n), x_i = 1/(2w) ± δβ/(2w) for all i = 1, 2, ..., n. The function values at these solutions are always equal to one. On the other hand, if δβ = 0, the function G has either one global maximum (if γ < 1) or m maximum points (if γ = 1), one in each of the permuted S_0, ..., S_{m−1}.
If γ < 1, the global solution (x_1, ..., x_n) is always at (x_1, ..., x_n) = (1/(2w), 1/(2w), ..., 1/(2w)). Figure 1.1 displays the final landscape for the test case TCG(2, 4, 1, 1/√2, 0.8, 0).

³For w > 1. If w = 1, the whole search space consists of the single subspace S_0 only. In this case a_0 = 4/(1 − δ²β²) for all 0 ≤ δ, β ≤ 1.
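The constraint side of the generator can be sketched directly from equations (1.4)-(1.6). The sketch below omits the permutation of the subspaces; the digit-extraction helper and the function names are illustrative assumptions, not code from [23].

```python
import math

def tcg_num_constraints(n, w, alpha):
    """m = floor(alpha * (w**n - 1)) + 1, equation (1.5)."""
    return math.floor(alpha * (w ** n - 1)) + 1

def tcg_feasible(x, n, w, alpha, beta, delta):
    """A point is feasible iff it satisfies at least one of the m double
    inequalities r1^2 <= c_k(x) <= r2^2 of equation (1.4), i.e. iff it lies
    in the ring around the centre of one of the first m subspaces
    (the permutation of subspaces is omitted in this sketch)."""
    m = tcg_num_constraints(n, w, alpha)
    r1 = delta * beta * math.sqrt(n) / (2.0 * w)   # equation (1.6)
    r2 = beta * math.sqrt(n) / (2.0 * w)
    for k in range(m):
        digits = [(k // w ** i) % w for i in range(n)]   # k in the w-ary alphabet
        centre = [(d + 0.5) / w for d in digits]         # p_k,i = (l_k,i + u_k,i)/2
        c_k = sum((xi - pi) ** 2 for xi, pi in zip(x, centre))
        if r1 ** 2 <= c_k <= r2 ** 2:
            return True
    return False

print(tcg_num_constraints(2, 4, 1.0))  # 16 = w**n: every subspace is constrained
# with delta = 0 the rings become full balls, so a subspace centre is feasible;
# with delta*beta > 0 the centre falls into the hole of the ring
print(tcg_feasible([0.125, 0.125], 2, 4, 1.0, 1.0, 0.0))  # True
print(tcg_feasible([0.125, 0.125], 2, 4, 1.0, 1.0, 0.8))  # False
```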

Fig. 1.1 Final landscape for the test case TCG(2, 5, 1, 1/√2, 0.8, 0). Contours with function values ranging from −0.2 to 1.0 in steps of 0.2 are drawn at the base.

The interpretation of the six parameters of the test case generator TCG is as follows:

1. Dimensionality: By increasing the parameter n, the dimensionality of the search space can be increased.

2. Multimodality: By increasing the parameter w, the multimodality of the search space can be increased. For the unconstrained function, there are w^n locally maximal solutions, of which one is globally maximal. For the constrained test function with δβ > 0, there are (2w)^n different locally maximal solutions, of which 2^n are globally maximal.

3. Constraints: By increasing the parameter α, the number m of constraints is increased.

4. Connectedness: By reducing the parameter β (from 1 to 1/√n and smaller), the connectedness of the feasible subspaces can be reduced. When β < 1/√n, the feasible subspaces F_k are completely disconnected.

5. Feasibility: By increasing the parameter δ, the proportion of the feasible search space to the complete search space (the ratio ρ) can be reduced. For


δ values closer to one, the feasible search space becomes smaller and smaller. These test functions can be used to test an optimizer's ability to find and maintain feasible solutions.

6. Ruggedness: By increasing the parameter α the function ruggedness can be increased (for βδ > 0). A sufficiently rugged function will test an optimizer's ability to search for the globally constrained maximum solution in the presence of other, almost equally significant, local maxima.

Increasing each of the above parameters except β, and decreasing β, makes the problem more difficult for any optimizer. However, it is difficult to conclude which of these factors most profoundly affects the performance of an optimizer. Thus, it is recommended that the user first test his/her algorithm with the simplest possible combination of the above parameters (small n, small w, small ρ, large β, small δ, and small α). Thereafter, the parameters may be changed in a systematic manner to create more difficult test functions. The most difficult test function is created when large values of the parameters n, w, ρ, δ, and α are used together with a small value of the parameter β.

For full details of the test case generator the reader is referred to [23].

1.5 AN EXAMPLE

To test the usefulness of the TCG, a simple steady-state evolutionary algorithm was developed. We used a constant population size of 100; each individual is a vector x of n floating-point components. Parent selection was performed by binary tournament selection; an offspring replaces the worse individual from a binary tournament.
One of three operators was used in every generation (the selection of an operator was done according to constant probabilities 0.5, 0.15, and 0.35, respectively):

- Gaussian mutation: x ← x + N(0, σ), where N(0, σ) is a vector of independent random Gaussian numbers with a mean of zero and standard deviations σ.⁴

- uniform crossover: z ← (z_1, ..., z_n), where each z_i is either x_i or y_i (with equal probabilities), where x and y are two selected parents.

⁴In all experiments reported in this section, we have used a value of 1/(2√n), which depends only on the dimensionality of the problem.


- heuristic crossover: z = r · (x − y) + x, where r is a random number between 0 and 0.25, and the parent x is not worse than y.

The termination condition was to quit the evolutionary loop if the improvement in the last N = 10,000 generations was smaller than a predefined ε = 0.001.

Now the TCG can be used for investigating the merits of any constraint-handling method. For example, one of the simplest and most popular constraint-handling methods is based on static penalties; let us define a particular static penalty method as follows:

    eval(x) = f(x) + W · v(x),

where f is an objective function, W is a constant (penalty coefficient), and the function v measures the constraint violation. (Note that only one double constraint is taken into account, as the TCG defines the feasible part of the search space as a union of all m double constraints.)

The penalty coefficient W was set to 10, and the value of v for any x is defined by the following procedure:

    find k such that x ∈ S_k
    set C = (−2w)/√n
    if the whole S_k is infeasible then v(x) = −1
    else begin
        calculate the distance Dist between x and the center of the subspace S_k
        if Dist < r_1 then v(x) = C · (r_1 − Dist)
        else if Dist > r_2 then v(x) = C · (Dist − r_2)
        else v(x) = 0
    end

Thus the constraint violation measure v returns −1 if the evaluated point is in an infeasible subspace (i.e., a subspace without a feasible ring); 0 if the evaluated point is feasible; or some number q from the range [−1, 0) if the evaluated point is infeasible but the corresponding subspace is partially feasible. This means that the point x is either inside the smaller ring or outside the larger one. In both cases q is a scaled negative distance of this point to the boundary of the closest ring.
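The procedure above translates directly into code. The following is a minimal Python sketch (names are ours; the lookup of the subspace S_k containing x is abstracted into the arguments center_k and subspace_feasible):

```python
import math

def violation(x, center_k, subspace_feasible, w, r1, r2):
    """Constraint violation v(x) for the static penalty method.

    center_k is the center of the subspace S_k containing x, and
    subspace_feasible is False when S_k carries no feasible ring.
    Returns -1, 0, or a scaled negative distance q with -1 <= q < 0.
    """
    n = len(x)
    C = -2 * w / math.sqrt(n)          # scaling factor: guarantees |q| <= 1
    if not subspace_feasible:
        return -1.0
    dist = math.sqrt(sum((xi - ci) ** 2 for xi, ci in zip(x, center_k)))
    if dist < r1:                      # inside the inner ring
        return C * (r1 - dist)
    if dist > r2:                      # outside the outer ring
        return C * (dist - r2)
    return 0.0                         # feasible point

def eval_penalized(f_value, v_value, W=10.0):
    """eval(x) = f(x) + W * v(x); v <= 0, so the penalty lowers eval."""
    return f_value + W * v_value
```

Since feasible objective values lie in [0, 1] and v ∈ [−1, 0], the choice W = 10 makes any point with a sizeable violation worse than every feasible point; the experiments with growing n reported in this section show that a fixed W eventually stops being sufficient.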
Note that the scaling factor C guarantees that 0 < |q| ≤ 1. Note that the values of the objective function for feasible points of the search space stay in the range [0, 1]; the value at the global optimum is always 1. Thus, both the function and constraint violation values are normalized to [0, 1].

Figures 1.2 and 1.3 display⁵ the results of typical runs of the system on two different test cases. Both test cases were generated for n = 20 (low dimensionality), w = 2 (i.e., the search space S was divided into 2^20 subspaces),

⁵All figures of this section report averages of 10 runs.


ρ = 1 (i.e., there was a feasible ring in every subspace), β = 0.9/√20 = 0.201246 (the feasible search space consisted of disjoint feasible components of relatively large size), and δ = 0.1 (there are small "inner rings", i.e., r_1 is one tenth of r_2). The only difference between these two test cases is in the value of the parameter α, which is either 0.9 (Figure 1.2) or 0.1 (Figure 1.3).


Fig. 1.2 A run of the system for the TCG(20, 2, 1, 0.201246, 0.1, 0.9); generation number versus objective value and constraint violation. The upper broken line represents the value of the objective function f, the lower broken line the constraint violation v, and the continuous line the final value eval.


Fig. 1.3 A run of the system for the TCG(20, 2, 1, 0.201246, 0.1, 0.1); generation number versus objective value and constraint violation. The upper broken line represents the value of the objective function f, the lower broken line the constraint violation v, and the continuous line the final value eval.

These results matched our expectations very well. The algorithm converges to a feasible solution (constraint violation is zero) in all runs. However, note


that the height of the highest peak in the search space (i.e., the global feasible maximum) is 1, and the height of the lowest peak is a function of α:

    (1 − β²δ²)^(3.3923(1−α))

for fixed values of the other parameters (note that the total number of peaks is w^n = 2^20 = 1,048,576). In both cases the algorithm converges to an "average" peak, which is slightly better for α = 0.9 than for α = 0.1. The algorithm failed to find the global peak in all runs.

Figure 1.4 displays the performance of the algorithm for varying values of n (between 20 and 320). This is one of the most interesting (however, also expected) results. It indicates clearly that for a particular value of the penalty coefficient (which was W = 10), the system works well up to n = 140. For higher dimensions, the algorithm sometimes converges to an infeasible solution which (due to the constant penalty factor) is better than a feasible one. For n > 200 the solutions from all 10 runs are infeasible. For different values of W we get different "breaking" points (i.e., values of n starting from which the algorithm converges to an infeasible solution). For example, if W = 1, the algorithm always returns feasible solutions up to n = 54. For n > 54 the algorithm may return an infeasible solution, and for n > 82 all returned solutions are infeasible.
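As a quick numerical check of the lowest-peak expression above (a minimal sketch; the values follow the two test cases of Figures 1.2 and 1.3, where β = 0.9/√20 and δ = 0.1):

```python
import math

def lowest_peak(beta, delta, alpha):
    # Height of the lowest peak: (1 - beta^2*delta^2)^(3.3923*(1 - alpha));
    # the highest peak always has height 1.
    return (1 - beta**2 * delta**2) ** (3.3923 * (1 - alpha))

beta, delta = 0.9 / math.sqrt(20), 0.1   # as in Figures 1.2 and 1.3
print(lowest_peak(beta, delta, 0.9))     # ~0.99986
print(lowest_peak(beta, delta, 0.1))     # ~0.99876
```

For these β and δ the peaks are nearly equal in height for either α, which is why converging to an "average" peak still yields an objective value close to 1.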


Fig. 1.4 The objective values f returned by the system for the TCG(n, 2, 1, 0.201246, 0.1, 0.1) with varying n between 20 and 320. The constraint violation is zero for n < 120 only.

Figure 1.5 displays the performance of the algorithm for varying values of α (between 0.0 and 1.0). These results indicate clearly that the final solution depends on the heights of the peaks in the search space: the algorithm always converges to a "slightly above average" peak. For larger values of α, the average peaks are higher, hence the better final value.

It was also interesting to vary the parameters β and δ. Both changes have a similar effect. With fixed δ = 0.1, an increase of β decreases the performance



Fig. 1.5 The objective values f returned by the system for the TCG(20, 2, 1, 0.201246, 0.1, α) with varying α between 0.0 and 1.0. The constraint violation is zero for all α.

of the system (Figure 1.6). This seems rather counter-intuitive, as the feasible area grows. However, it is important to remember that for each β the fitness landscape is different, as we change the values of the a_k's (equation (1.3)). Note that β varies between 0.011 and 0.224; the latter number equals 1/√20, so the feasible rings remain disjoint.


Fig. 1.6 The objective values f returned by the system for the TCG(20, 2, 1, β, 0.1, 0.1) with varying β between 0.011 and 0.224. The constraint violation is zero for all β. The dotted line represents the height of the lowest peak in the search space.

We get similar results while varying δ. For δ = 0.05 (the inner rings are very small), the system finds an average peak and converges there (for small


values of δ the value of the lowest peak is close to 1, hence the false impression about the quality of the solution in that case). An increase of δ triggers a decrease of the lowest peak: this explains the negative slope of the dotted line (Figure 1.7). For δ = 1 there is a slight constraint violation, as the feasible rings have width zero.


Fig. 1.7 The objective values f returned by the system for the TCG(20, 2, 1, 0.201246, δ, 0.1) with varying δ between 0.05 and 1.0. The constraint violation is zero for all δ. The dotted line represents the height of the lowest peak in the search space.

A few observations can be made on the basis of the above experiments. Clearly, the results of the static penalty method depend on the penalty coefficient W, which should be selected carefully; in the case of the TCG, the value of this coefficient depends on the dimension n of the problem. In the case of one active constraint (which is always the case), the system does not have any difficulties in locating a feasible solution (for the selected n = 20); however, due to a large number of peaks (over one million for the case of n = 20 and w = 2), the method returns a feasible point from an average peak. For further experiments, modifications of the TCG, and observations, see [23].

1.6 SUMMARY

We have discussed how a test case generator TCG(n, w, ρ, β, δ, α) can be used for the evaluation of a constraint-handling technique. As explained in section 1.4, the parameter n controls the dimensionality of the test function, the parameter w controls the modality of the function, the parameter ρ controls the number of constraints in the search space, the parameter β controls the connectedness of the feasible search space, the parameter δ controls the ratio of the feasible to the total search space, and the parameter α controls the ruggedness of the test function.


When the feasible region is disjoint, classical search and optimization methods may have difficulty finding the global optimal solution. Even for evolutionary algorithms, finding the feasible island containing the global optimal solution would be a challenging task. This task can be even more difficult if the fitness landscape is complex (e.g., for large values of α).

We believe that such a constrained test problem generator should serve the purpose of testing a constrained optimizer. In the previous section we have indicated how it can be used to evaluate the merits and drawbacks of one particular constraint-handling method (static penalties). Note that it is possible to analyse the performance of a method further by varying two or more parameters of the TCG (see [23] for a full discussion of these results).

The proposed test case generator is far from perfect. For example, it might be worthwhile to modify it further to parametrize the number of active constraints at the optimum. It seems necessary to introduce the possibility of a gradual increment of peaks (in the current version of the test case generator, w = 1 implies one peak, and w = 2 implies 2^n peaks). Also, the differences between the lowest and the highest values of the peaks in the search space are, in the present model, too small. However, the proposed TCG, even in its current form, is an important tool for analysing any constraint-handling method (for any algorithm, not necessarily an evolutionary algorithm), and it represents an important step in the "right" direction.

Acknowledgments

The second author acknowledges the support provided by the Alexander von Humboldt Foundation, Germany. The research reported in this paper was partially supported by the ESPRIT Project 20288 Cooperation Research in Information Technology (CRIT-2): "Evolutionary Real-time Optimization System for Ecological Power Control".

REFERENCES

1.
Bäck, T., Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, 1995.

2. Bäck, T., Fogel, D.B., and Michalewicz, Z., Handbook of Evolutionary Computation, Oxford University Press, New York, 1996.

3. Bean, J. C. and A. B. Hadj-Alouane (1992). A dual genetic algorithm for bounded integer programs. Technical Report TR 92-53, Department of Industrial and Operations Engineering, The University of Michigan.

4. Bilchev, G. and I. Parmee (1995). Ant colony search vs. genetic algorithms. Technical report, Plymouth Engineering Design Centre, University of Plymouth.


5. Colorni, A., M. Dorigo, and V. Maniezzo (1991). Distributed optimization by ant colonies. In Proceedings of the First European Conference on Artificial Life, Paris. MIT Press/Bradford Book.

6. Dasgupta, D. and Michalewicz, Z. (Editors), Evolutionary Algorithms in Engineering Applications, Springer-Verlag, New York, 1997.

7. Davis, L. (1997). Private communication.

8. Deb, K. (in press). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering.

9. Eiben, A. and Z. Ruttkay (1996). Self-adaptivity for constraint satisfaction: Learning penalty functions. In Proceedings of the 3rd IEEE Conference on Evolutionary Computation, IEEE Press, 1996, pp. 258-261.

10. Fogel, D. B., Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, 1995.

11. Goldberg, D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.

12. Gregory, J. (1995). Nonlinear Programming FAQ, Usenet sci.answers. Available at ftp://rtfm.mit.edu/pub/usenet/sci.answers/nonlinear-programming-faq.

13. Hadj-Alouane, A. B. and J. C. Bean (1992). A genetic algorithm for the multiple-choice integer program. Technical Report TR 92-50, Department of Industrial and Operations Engineering, The University of Michigan.

14. Homaifar, A., S. H.-Y. Lai, and X. Qi (1994). Constrained optimization via genetic algorithms. Simulation 62(4), 242-254.

15. Joines, J. and C. Houck (1994). On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs. In Z. Michalewicz, J. D. Schaffer, H.-P. Schwefel, D. B. Fogel, and H. Kitano (Eds.), Proceedings of the First IEEE International Conference on Evolutionary Computation, pp. 579-584. IEEE Press.

16. Koziel, S. and Z. Michalewicz (1998). A decoder-based evolutionary algorithm for constrained parameter optimization problems. In Proceedings of the 5th Conference on Parallel Problems Solving from Nature. Springer Verlag.

17.
Koziel, S. and Z. Michalewicz (1999). Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization. To appear in Evolutionary Computation.


18. Le Riche, R. G., C. Knopf-Lenoir, and R. T. Haftka (1995). A segregated genetic algorithm for constrained structural optimization. In L. J. Eshelman (Ed.), Proceedings of the 6th International Conference on Genetic Algorithms, pp. 558-565.

19. Maa, C. and M. Shanblatt (1992). A two-phase optimization neural network. IEEE Transactions on Neural Networks 3(6), 1003-1009.

20. Michalewicz, Z. (1995b). Heuristic methods for evolutionary computation techniques. Journal of Heuristics 1(2), 177-206.

21. Michalewicz, Z. (1996). Genetic Algorithms + Data Structures = Evolution Programs. New York: Springer Verlag, 3rd edition.

22. Michalewicz, Z. and N. Attia (1994). Evolutionary optimization of constrained problems. In Proceedings of the 3rd Annual Conference on Evolutionary Programming, pp. 98-108. World Scientific.

23. Michalewicz, Z., K. Deb, M. Schmidt, and T. Stidsen, Test-case Generator for Constrained Parameter Optimization Techniques, submitted for publication.

24. Michalewicz, Z. and C. Z. Janikow (1991). Handling constraints in genetic algorithms. In R. K. Belew and L. B. Booker (Eds.), Proceedings of the 4th International Conference on Genetic Algorithms, pp. 151-157. Morgan Kaufmann.

25. Michalewicz, Z., T. Logan, and S. Swaminathan (1994). Evolutionary operators for continuous convex parameter spaces. In Proceedings of the 3rd Annual Conference on Evolutionary Programming, pp. 84-97. World Scientific.

26. Michalewicz, Z. and G. Nazhiyath (1995). Genocop III: A co-evolutionary algorithm for numerical optimization problems with nonlinear constraints. In D. B. Fogel (Ed.), Proceedings of the Second IEEE International Conference on Evolutionary Computation, pp. 647-651. IEEE Press.

27. Michalewicz, Z., G. Nazhiyath, and M. Michalewicz (1996). A note on usefulness of geometrical crossover for numerical optimization problems. In P. J. Angeline and T. Bäck (Eds.), Proceedings of the 5th Annual Conference on Evolutionary Programming.

28.
Michalewicz, Z. and M. Schoenauer (1996). Evolutionary Computation for Constrained Parameter Optimization Problems. Evolutionary Computation, Vol. 4, No. 1, pp. 1-32.

29. Myung, H., J.-H. Kim, and D. Fogel (1995). Preliminary investigation into a two-stage method of evolutionary optimization on constrained problems. In J. R. McDonnell, R. G. Reynolds, and D. B. Fogel (Eds.), Proceedings


of the 4th Annual Conference on Evolutionary Programming, pp. 449-463. MIT Press.

30. Orvosh, D. and L. Davis (1993). Shall we repair? Genetic algorithms, combinatorial optimization, and feasibility constraints. In S. Forrest (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms, p. 650. Morgan Kaufmann.

31. Palmer, C.C. and A. Kershenbaum (1994). Representing trees in genetic algorithms. In Proceedings of the IEEE International Conference on Evolutionary Computation, 27-29 June 1994, pp. 379-384.

32. Paredis, J. (1994). Co-evolutionary constraint satisfaction. In Y. Davidor, H.-P. Schwefel, and R. Manner (Eds.), Proceedings of the 3rd Conference on Parallel Problems Solving from Nature, pp. 46-55. Springer Verlag.

33. Parmee, I. and G. Purchase (1994). The development of a directed genetic search technique for heavily constrained design spaces. In Proceedings of the Conference on Adaptive Computing in Engineering Design and Control, pp. 97-102. University of Plymouth.

34. Powell, D. and M. M. Skolnick (1993). Using genetic algorithms in engineering design optimization with non-linear constraints. In S. Forrest (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms, pp. 424-430. Morgan Kaufmann.

35. Reynolds, R. (1994). An introduction to cultural algorithms. In Proceedings of the 3rd Annual Conference on Evolutionary Programming, pp. 131-139. World Scientific.

36. Reynolds, R., Z. Michalewicz, and M. Cavaretta (1995). Using cultural algorithms for constraint handling in Genocop. In J. R. McDonnell, R. G. Reynolds, and D. B. Fogel (Eds.), Proceedings of the 4th Annual Conference on Evolutionary Programming, pp. 298-305. MIT Press.

37. Richardson, J. T., M. R. Palmer, G. Liepins, and M. Hilliard (1989). Some guidelines for genetic algorithms with penalty functions. In J. D. Schaffer (Ed.), Proceedings of the 3rd International Conference on Genetic Algorithms, pp. 191-197. Morgan Kaufmann.

38. Schaffer, D. (1985).
Multiple objective optimization with vector evaluated genetic algorithms. In J. J. Grefenstette (Ed.), Proceedings of the 1st International Conference on Genetic Algorithms. Lawrence Erlbaum Associates.

39. Schoenauer, M. and Z. Michalewicz (1996). Evolutionary computation at the edge of feasibility. In W. Ebeling and H.-M. Voigt (Eds.), Proceedings of the 4th Conference on Parallel Problems Solving from Nature. Springer Verlag.


40. Schoenauer, M. and Z. Michalewicz (1997). Boundary Operators for Constrained Parameter Optimization Problems. In T. Bäck (Ed.), Proceedings of the 7th International Conference on Genetic Algorithms, pp. 320-329. Morgan Kaufmann.

41. Schoenauer, M. and S. Xanthakis (1993). Constrained GA optimization. In S. Forrest (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms, pp. 573-580. Morgan Kaufmann.

42. Smith, A. and D. Tate (1993). Genetic optimization using a penalty function. In S. Forrest (Ed.), Proceedings of the 5th International Conference on Genetic Algorithms, pp. 499-503. Morgan Kaufmann.

43. Surry, P., N. Radcliffe, and I. Boyd (1995). A multi-objective approach to constrained optimization of gas supply networks. In T. Fogarty (Ed.), Proceedings of the AISB-95 Workshop on Evolutionary Computing, Volume 993, pp. 166-180. Springer Verlag.

44. van Kemenade, C.H.M. (1998). Recombinative evolutionary search. PhD Thesis, Leiden University, Netherlands, 1998.

45. Waagen, D., P. Diercks, and J. McDonnell (1992). The stochastic direction set algorithm: A hybrid technique for finding function extrema. In D. B. Fogel and W. Atmar (Eds.), Proceedings of the 1st Annual Conference on Evolutionary Programming, pp. 35-42. Evolutionary Programming Society.

46. Whitley, D., K. Mathias, S. Rana, and J. Dzubera (1995). Building better test functions. In L. Eshelman (Ed.), Proceedings of the 6th International Conference on Genetic Algorithms, Morgan Kaufmann, 1995.

47. Whitley, D., K. Mathias, S. Rana, and J. Dzubera (1995). Evaluating evolutionary algorithms. Artificial Intelligence Journal, Vol. 85, August 1996, pp. 245-276.

48. Wodrich, M. and G. Bilchev (1997). Cooperative distributed search: the ant's way. Control & Cybernetics, 26(3).