Chapter 1 - EMO


  • Evolutionary Computation for Multi-objective Optimization

    EE6701 Evolutionary Computation, Chapter 1

    A/Prof Tan Kay Chen
    Department of Electrical and Computer Engineering
    National University of Singapore
    Tel: 6516 2127; Email: [email protected]; Office: E4 08-09
  • Rationale and Motivation

    Some real-world problems have several (possibly conflicting) objectives to be optimized.

    Many of these problems are transformed (or aggregated) into a single-objective problem (SOP) and solved using single-objective optimization techniques.

    Transforming a multi-objective problem (MOP) into an SOP requires a priori knowledge of the relative importance of the different objectives.

    Defining the problem in a multi-objective framework is more general, and it offers the decision maker different trade-offs between the objectives.

    Multi-objective optimization (also known as multi-criteria optimization) aims to find a set of decision vectors that satisfy the given constraints and optimize a number of objective functions.

  • Example

    Two objective functions, f1 and f2, to be minimized are given as

      f1(x1, ..., x8) = 1 - exp( -Σ_{i=1}^{8} (xi - 1/√8)² )
      f2(x1, ..., x8) = 1 - exp( -Σ_{i=1}^{8} (xi + 1/√8)² )

    where -2 ≤ xi ≤ 2. The Pareto optimal front consists of all points on the line defined by

      x1 = x2 = ... = x8, with xi ∈ [-1/√8, 1/√8]

    [Figure: trade-off curve between f1 and f2, with the unfeasible region above the curve]

  • Formal Definitions

    A multi-objective optimization problem (MOP) can be written as

      min  F(x) = [f1(x), f2(x), ..., fM(x)]
      s.t. gj(x) ≤ 0,  j = 1, 2, ..., J
           hk(x) = 0,  k = 1, 2, ..., K
           xi(L) ≤ xi ≤ xi(U),  i = 1, 2, ..., n

    where x = (x1, x2, ..., xn) is the n-dimensional decision variable vector, M is the number of objectives, gj and hk are the inequality and equality constraints, and xi(L) and xi(U) are respectively the lower and upper bounds for each decision variable.

  • Classification of MOPs

    For an MOP, M must be greater than unity. Based on its constraints, an MOP can be classified into one of the following classes:

      Inequality constraints (J)   Equality constraints (K)   Bounds on x          Type of MOP
      0                            0                          none                 Unconstrained MOP
      0                            0                          x(L) ≤ x ≤ x(U)      Bound-constrained MOP
      > 0                          0                          x(L) ≤ x ≤ x(U)      Inequality-constrained MOP
      0                            > 0                        x(L) ≤ x ≤ x(U)      Equality-constrained MOP

    where the lower and upper bounds are constants.

    For the sake of simplicity, we use Ω to denote the feasible decision space, i.e., the region of the decision space which satisfies the constraints.

    Using this convention, an MOP can be rewritten as

      min  F(x) = [f1(x), f2(x), ..., fM(x)],  x ∈ Ω

  • Total-order and Partial-order

    Order theory provides a formal framework for describing statements such as "this is less than that" or "this precedes that".

    A relation ⪯ is a total order on a set S if the following properties hold:

      Reflexivity: a ⪯ a for all a ∈ S
      Anti-symmetry: a ⪯ b and b ⪯ a implies a = b
      Transitivity: a ⪯ b and b ⪯ c implies a ⪯ c
      Comparability: for any a, b ∈ S, either a ⪯ b or b ⪯ a

    For a partially ordered set, the comparability property does not necessarily hold.

    In single-objective optimization, the feasible set is totally ordered according to the objective function f.

    In multi-objective optimization, the feasible set is only partially ordered according to the objective function set F.

  • Dominance Relation

    Due to the partial ordering in an MOP, a multitude of trade-off solutions exist in the objective space. This is one of the major differences between MOP and SOP.

    Definition (Dominance)

    A solution x1 is said to dominate another solution x2 (x1 ≺ x2) if and only if both of the following conditions are true:

      The solution x1 is no worse than x2 in all objectives
      The solution x1 is strictly better than x2 in at least one objective

    Mathematically (for minimization), x1 ≺ x2 if and only if

      fi(x1) ≤ fi(x2) for all i ∈ {1, ..., M}, and
      there exists j ∈ {1, ..., M} such that fj(x1) < fj(x2)
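    As a concrete illustration of this definition, the minimal sketch below implements a minimization dominance check in Python; the function name `dominates` and the list-of-floats representation are illustrative choices, not part of the original slides.

      def dominates(f_a, f_b):
          """Return True if objective vector f_a dominates f_b (minimization).

          f_a and f_b are sequences of objective values of equal length.
          """
          no_worse_everywhere = all(a <= b for a, b in zip(f_a, f_b))
          strictly_better_somewhere = any(a < b for a, b in zip(f_a, f_b))
          return no_worse_everywhere and strictly_better_somewhere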

  • Non-dominated Set

    Given a set of solutions, we can perform all possible pairwise comparisons and find the solutions which are not dominated by any solution in the set.

    Definition - Non-Dominated Set
    Among a set of solutions P, the non-dominated set consists of those solutions that are not dominated by any member of P.

    Definition - Pareto Optimal Set (in decision space)
    The set of feasible decision vectors whose objective vectors are non-dominated in the feasible objective space.

    Definition - Pareto Optimal Front (in objective space)
    The set of non-dominated objective vectors in the feasible objective space, i.e., the image of the Pareto optimal set.
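    Following the definition above, a minimal sketch of non-dominated filtering (reusing the illustrative `dominates` helper from the previous slide) might look as follows.

      def non_dominated(objective_vectors):
          """Return the subset of objective vectors not dominated by any other member."""
          front = []
          for i, f_i in enumerate(objective_vectors):
              if not any(dominates(f_j, f_i)
                         for j, f_j in enumerate(objective_vectors) if j != i):
                  front.append(f_i)
          return front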

  • Classical Approaches

    Generating Pareto optimal solutions plays an important role in multi-objective optimization. The term "vector optimization" is sometimes used to denote the problem of identifying the Pareto optimal set.

    Classical approaches often solve the MOP by scalarization. Scalarization means converting the MOP into a single SOP, or a family of SOPs, with a real-valued objective function.

    Since we are interested in generating Pareto optimal solutions (POS), only a posteriori methods are discussed in this module.

    A Posteriori Methods
      Generate the Pareto optimal set (or a part of it)
      Present it to the decision maker (DM)
      Let the DM select one solution

  • Weighted Sum Method

    This method scalarizes the MOP into an SOP by multiplying each objective with a predefined weight:

      min  F(x) = Σ_{i=1}^{M} wi fi(x)

    where wi is the weight of the i-th objective function. The weight coefficients must be real and positive. It is usual practice to normalize the weights such that Σ_{i=1}^{M} wi = 1.

    To use the weighted sum method as an a posteriori method, a set of weight vectors has to be specified. The correlation and nonlinear effects between the weights and the MOP are not easy to understand.

    The weighted sum method is relatively simple and easy to use. However, evenly distributed weight vectors do not necessarily produce an evenly distributed representation of the Pareto optimal set, and when the MOP is non-convex, some Pareto optimal solutions may fail to be found.
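    A minimal sketch of this scalarization, assuming a list of objective functions and a matching weight vector (both hypothetical names):

      def weighted_sum(objectives, weights):
          """Build a single scalar objective from a list of objective functions.

          objectives: list of callables f_i(x); weights: positive floats summing to 1.
          """
          def scalarized(x):
              return sum(w * f(x) for w, f in zip(weights, objectives))
          return scalarized

    For example, scalarized = weighted_sum([f1, f2], [0.3, 0.7]) would return a single-objective function that can be minimized with any SOP solver.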

  • Multi-objective Evolutionary Algorithms (MOEA)

    Evolutionary Multi-objective Optimization (EMO) is the application of evolutionary algorithms to solve multi-objective optimization problems.

    MOEAs are used to approximate the Pareto-optimal front of multi-objective problems.

    Many MOEAs have been proposed in the literature:
      Multi-objective Genetic Algorithm (MOGA)
      Non-dominated Sorting Genetic Algorithm II (NSGA-II)
      Incrementing Multi-objective Evolutionary Algorithm (IMOEA)
      Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D)

    These algorithms differ mainly in, e.g., the fitness assignment scheme, mating selection scheme, environmental selection scheme, diversity preservation, sharing, and elitism.

  • Common Terms Used in EA

    [Figure: the basic evolutionary cycle. Initial/random designs are coded as chromosomes (e.g., P1: 1 2 0 9 0 2 1 7, P2: 4 0 0 3 0 1 6 1, P3: 0 1 6 4 1 8 0 1), decoded and evaluated on the fitness landscape via simulation (e.g., f(P1) = 5%, f(P2) = 60%, f(P3) = 35%); selection, crossover and mutation then produce a new generation, and the loop repeats until the final optimized designs are obtained.]

  • Multi-objective Evolutionary Algorithms

    [Figure: generic MOEA flow. Initialization of a population x1, x2, ..., xN; fitness evaluation f(x1), f(x2), ..., f(xN); mating selection (how to select parent solutions); recombination/crossover (how to combine parent solutions to create offspring solutions); mutation (perturb the generated offspring solutions); fitness evaluation of the offspring; environmental selection (select the individuals to survive); termination check, looping back to mating selection until the termination condition is met.]
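    The flow above can be summarized in a short, generic loop. The sketch below is an outline only; the operator names (initialize, evaluate, mating_selection, crossover, mutate, environmental_selection) are placeholders for whichever concrete schemes a particular MOEA uses.

      def moea(initialize, evaluate, mating_selection, crossover, mutate,
               environmental_selection, pop_size, max_generations):
          """Generic MOEA outline following the flow in the figure above."""
          population = initialize(pop_size)
          fitness = [evaluate(x) for x in population]
          for _ in range(max_generations):
              parents = mating_selection(population, fitness)          # pairs of parents
              offspring = [mutate(crossover(p1, p2)) for p1, p2 in parents]
              offspring_fitness = [evaluate(x) for x in offspring]
              population, fitness = environmental_selection(
                  population + offspring, fitness + offspring_fitness, pop_size)
          return population, fitness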

  • Concept of Sharing and Elitism

    To achieve a good approximation of the POF, the output of any MOEA should fulfil two characteristics: (1) sufficient proximity to the exact Pareto-optimal front (convergence); (2) a good distribution along the Pareto-optimal front (diversity).

    To achieve the goals of convergence and diversity, the concepts of sharing and elitism are introduced.

    Sharing:
      To avoid the non-dominated solutions clustering on some portions of the Pareto-optimal front, a penalty is applied to clustered solutions.
      Example: solutions a and b (which lie close together on the front) will have a smaller fitness compared to c.

    Elitism:
      The elites (best or non-dominated solutions) of the population should be given the opportunity to be carried over directly to the next generation.

    [Figure: objective space (f1, f2) showing clustered solutions a and b, and an isolated solution c on the front]

  • EMO needs to address several issues
      Fitness assignment, preference
      Good spread, uniform distribution, minimum proximity
      Many objectives, constraints
      Noise, dynamic landscapes, robust optimization

    EA is a powerful tool for solving MO optimization problems
      Population-based, and capable of searching for the global trade-off
      Robust and applicable to a wide range of problems
      Capable of handling discontinuous and multi-dimensional problems

    [Figure: objective space (f1, f2) showing the unfeasible region and the Pareto front / non-dominated solutions (Fonseca and Fleming, 1995)]

  • Pareto Ranking

    Pareto ranking assigns the same smallest cost to all non-dominated individuals, while the dominated individuals are ranked according to how many individuals in the population dominate them.

    The rank of an individual x in a population can be given by rank(x) = 1 + qx, where qx is the number of individuals dominating the individual x in the objective domain.

    [Figure: Pareto optimal ranking in the (f1, f2) plane; non-dominated points have rank 1, while dominated points receive ranks such as 2, 4 and 5 according to how many points dominate them]
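    A direct sketch of this ranking rule, reusing the illustrative `dominates` helper from the dominance-relation slide:

      def pareto_rank(objective_vectors):
          """rank(x) = 1 + number of individuals whose objective vectors dominate x."""
          ranks = []
          for f_i in objective_vectors:
              q = sum(1 for f_j in objective_vectors if dominates(f_j, f_i))
              ranks.append(1 + q)
          return ranks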

  • Goal (MOGA)

    In design optimization, a set of specifications or design requirements is often given a priori.

    Goal:
      Desired value for each objective
      Can be used to specify practical design specifications/requirements
      Individuals that satisfy the goal setting have a lower (better) rank

    Tan, K. C., Khor, E. F., Lee, T. H. and Sathikannan, R., 'An evolutionary algorithm with advanced goal and priority specification for multi-objective optimization', Journal of Artificial Intelligence Research, vol. 18, pp. 183-215, 2003.

  • MOEA minimization with a feasible goal setting
    [Figure: population distributions at Gen = 5 and Gen = 70]

  • Unfeasible goal setting vs. feasible but extreme goal setting
    [Figure: resulting population distributions for the two cases]

  • Logical OR and AND connectives among goals
    [Figure: goal regions formed by combining G1-G4 with logical OR and AND connectives]

  • Priority/Preference

    Assign the relative importance of each objective for practical applications.

    Among strings of equal rank, priority can be used to determine superiority.
    Example: stability in control system design.

    Soft/hard objectives:
      Soft: always considered in the evolution
      Hard: considered only while the goal is not satisfied
    Example: steady-state error and actuator saturation.

  • Dynamic Population

    Compared with SO problems, MO optimization often requires a larger population size in order to cover the trade-off surface.

    The population size can be changed according to the population distribution at each generation.

    Generally, we can start with a small population for the initial search, and subsequently increase or decrease the population size according to the current Pareto front in the evolution process.

    The additional individuals can be obtained via local fine-tuning, generating good individuals to fill up gaps or discontinuities in the current Pareto front.

    Tan, K. C., Lee, T. H. and Khor, E. F., 'Evolutionary algorithm with dynamic population size and local exploration for multiobjective optimization', IEEE Transactions on Evolutionary Computation, vol. 5, no. 6, pp. 565-588, 2001.

  • Local Perturbation
    [Figure: local perturbation generating additional individuals along the current Pareto front]

  • [Figure: population size versus generation, and the resulting population distribution]

  • MO Handling Elements

    Handling elements provide the basic means of finding the non-dominated individuals.

    Min-Max, Sub-Pop and other elements have received less interest compared to Pareto, Weights, Goals and Pref.

    Weights attracted significant attention from 1985 to 2000.

    The popularity of Pareto-based handling continues to grow significantly.

    [Figure: cumulative number of methods per year (1961-2002) for the handling elements Weights, Min-Max, Pareto, Goals, Pref, Gene, Sub-Pop, Fuzzy and Others]

  • Supporting Elements

    Supporting elements play an indirect role in helping the algorithm achieve better performance.

    They were developed more recently than the MO handling elements.

    The Dist (distribution) feature and the Elitism/Archive feature have been incorporated in many algorithms.

    Other features, such as those for noisy and many-objective problems, have been gaining more attention recently.

    [Figure: cumulative number of methods per year (1961-2002) for the supporting elements Dist, Mat, Sub-Reg, Ext-Pop, Elitism and A-Evo]

  • Performance Metrics

    Generational Distance (GD) (Veldhuizen, 1999)
      Measures how far the evolved solution set is from the true Pareto front.

    Spacing (S) (Schott, 1995)
      Measures how evenly the evolved solutions are distributed.

    Maximum Spread (MS) (Zitzler, 1999)
      Measures how well the true Pareto front is covered by the evolved solution set.

    Hyper-Volume Ratio (HVR) (Veldhuizen, 1999)
      Calculates the volume covered by the evolved solutions relative to that of the true Pareto front.

    [Figure: minimization in the (f1, f2) plane showing the Pareto frontier and the evolved non-dominated set]
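    As an illustration, a common way to compute GD is sketched below; the exact normalization used in the slides is not shown, so the plain mean of Euclidean distances is assumed here.

      import math

      def generational_distance(evolved, true_front):
          """Mean Euclidean distance from each evolved objective vector to the
          nearest point of the true Pareto front (both lists of objective vectors)."""
          def dist(a, b):
              return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
          return sum(min(dist(p, q) for q in true_front) for p in evolved) / len(evolved)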

  • Non-dominance Ratio (NR) (Goh and Tan, 2009)

    Compares the quality of the solution sets produced by various algorithms.

    Measures the ratio of non-dominated solutions contributed by a particular solution set to the non-dominated solutions of the union of all solution sets.

    [Figure: two solution sets in the objective space; one contributes 80% (NR = 0.8) and the other 20% (NR = 0.2) of the combined non-dominated set]

  • Inverted Generational Distance (IGD) (M. A. Villalobos-Arias, 2005)

    Measures the proximity as well as the diversity between the obtained Pareto front and the Pareto optimal front.

    The Euclidean distance is measured from each point of the Pareto optimal (reference) set to its nearest obtained solution:

      IGD(P*, P) = ( Σ_{v ∈ P*} d(v, P) ) / |P*|

    where P* is the reference Pareto optimal set, P is the obtained solution set, and d(v, P) is the Euclidean distance from v to its nearest member in P.

    [Figure: Pareto optimal front and evolved solutions, with distances measured from the reference front to the evolved set]
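    A minimal sketch mirroring the GD code above, with the direction of measurement reversed (the plain-mean form is again assumed):

      import math

      def inverted_generational_distance(evolved, true_front):
          """Mean distance from each reference (true front) point to its nearest evolved solution."""
          def dist(a, b):
              return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
          return sum(min(dist(q, p) for p in evolved) for q in true_front) / len(true_front)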

  • Test Problems

    Test problem   Features
    1  ZDT1        Pareto front is convex.
    2  ZDT2        Pareto front is non-convex.
    3  ZDT3        Pareto front consists of several noncontiguous convex parts.
    4  ZDT4        Pareto front is highly multi-modal; there are 21^9 local Pareto fronts.
    5  ZDT6        The Pareto optimal solutions are non-uniformly distributed along the global Pareto front. The density of the solutions is low near the Pareto front and high away from the front.

    [Figure: Pareto fronts of ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6 in the (f1, f2) plane]

  • Test Problems (continued)

    Test problem   Features
    6  FON         Pareto front is non-convex.
    7  KUR         Pareto front consists of several noncontiguous convex parts.
    8  POL         Pareto front and Pareto optimal solutions consist of several noncontiguous convex parts.
    9  TLK         Noisy landscape.
    10 TLK2        Non-stationary environment.

    [Figure: Pareto fronts of FON, KUR, POL, TLK and TLK2]


  • 1. Increasing Dimensionality

    Tan, K. C., Lee, T. H. and Khor, E. F., 'Evolutionary algorithm with dynamic population size and local exploration for multiobjective optimization', IEEE Transactions on Evolutionary Computation, vol. 5, no. 6, pp. 565-588, 2001.

    Liu, D. S., Tan, K. C., Goh, C. K. and Ho, W. K., 'A multiobjective memetic algorithm based on particle swarm optimization', IEEE Transactions on Systems, Man and Cybernetics: Part B (Cybernetics), vol. 37, no. 1, pp. 42-50, 2007.

    2. Expensive Function Evaluations

    Tan, K. C., Tay, A. and Cai, J., 'Design and implementation of a distributed evolutionary computing software', IEEE Transactions on Systems, Man and Cybernetics: Part C, vol. 33, issue 3, pp. 325-338, 2003.

    Tan, K. C., Yang, Y. J. and Goh, C. K., 'A distributed cooperative coevolutionary algorithm for multiobjective optimization', IEEE Transactions on Evolutionary Computation, vol. 10, issue 5, pp. 527-549, 2006.

  • Decompose a complex problem into smaller problems via cooperatively co-evolving subpopulations (a divide-and-conquer strategy)

    Each subpopulation evolves a different decision variable.

    Fitness depends on the collaboration between the subpopulations.

    [Figure: subpopulations 1..m, one per decision variable; an individual from subpopulation i is combined with representatives from the other subpopulations to form a complete solution, which is evaluated, ranked, and used to update the archive.]
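    A minimal sketch of the collaboration step described above, assuming each subpopulation holds candidate values for one decision variable and `evaluate` returns the objective vector of a complete solution (all names are illustrative):

      def evaluate_with_representatives(individual, subpop_index, representatives, evaluate):
          """Form a complete solution by inserting this individual into the vector of
          representatives (one per subpopulation), then evaluate it."""
          solution = list(representatives)
          solution[subpop_index] = individual
          return evaluate(solution)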

  • Cooperation Methods

    The model of cooperation among subpopulations is an important issue in cooperative co-evolution.

    The simplest approach is to select the best individual as the representative. However, this approach tends to perform poorly for problems with high parameter inter-dependency.

    Alternatively, a random individual can be selected as the representative. However, this often results in a slower convergence rate.

    Four different methods have been examined:

    C1: The best individual is selected as representative.
    C2: The best and a random individual are selected as representatives. The better solution resulting from the two cooperations is retained.
    C3: The best two individuals are selected as representatives. The better of the solutions resulting from cooperation with each representative is retained.
    C4: The best two individuals are selected as representatives. Cooperation is performed with either one of the representatives.

  • A Distributed CCEA

    Parallelization strategy:

    Subpopulations are partitioned into a number of groups and assigned to peer computers.

    Indirect cooperation is achieved through the exchange of the archive and representatives between the peers and a central server.

    Peers are synchronized at fixed intervals to ensure better cooperation.

    [Figure: six subpopulations partitioned across three peers (subpopulations 1-2, 3-4 and 5-6), all communicating with a central server]

  • The DCCEA is designed and embedded in a distributed computing framework named Paladin-DEC

    Developed upon the Java 2 Platform, Enterprise Edition (J2EE)

    Exploits the inherent parallelism of evolutionary algorithms

    Incorporates the features of robustness, security, and workload balancing

  • Comparative Results: Generational Distance

    [Figure: generational distance of CCEA, PAES, PESA, NSGA-II, SPEA2 and IMOEA on ZDT4, ZDT6, FON, KUR, DTLZ2 and DTLZ3]

    Observation:
      CCEA escapes from the local optima of ZDT4 and DTLZ3
      CCEA scales well for high-dimensional MO problems
      CCEA performs less well for KUR due to the high parameter inter-dependency

  • Comparative Results: Spacing

    [Figure: spacing of the same algorithms on the same problems]

    Observation:
      CCEA generally shows good performance in spacing
      CCEA handles the non-uniform distribution of ZDT6 well

  • Comparative Results: Maximum Spread

    [Figure: maximum spread of the same algorithms on the same problems]

    Observation:
      CCEA performs competitively compared to the other MOEAs

  • Behaviors of CCEA

    ZDT4:
      Oscillations of x1 help to sample the solutions along the Pareto front uniformly
      Variables x2 to x10 converge to the optimal location gradually

    FON:
      All the variables oscillate and converge at a similar pace
      This behavior is related to the high inter-dependency among the parameters

    [Figure: traces of decision variables against generation number for ZDT4 (x1, and x2-x10) and for FON (x1-x4 and x5-x8)]

  • Effects of the Cooperation

    C1 is the greediest method. It performs well for problems with low parameter inter-dependency, but gives poor results for those with high inter-dependency.

    C4 is the least greedy approach. It performs well for problems with high parameter inter-dependency, but gives poor results for those with low inter-dependency.

    C2 and C3 provide a balance between greedy search and diversity maintenance, which achieves generally good and robust performance.

    Median GD for CCEA
    Problem   C1         C2         C3         C4
    ZDT1      4.13E-04   4.24E-04   4.04E-04   3.14E-01
    ZDT2      4.05E-04   4.81E-04   6.35E-04   5.09E-01
    ZDT3      1.45E-03   1.41E-03   1.27E-03   3.64E-01
    ZDT4      6.98E-05   7.08E-05   1.48E-04   1.26E+00
    ZDT5      1.27E-06   1.27E-06   1.26E-06   2.90E+00
    ZDT6      4.89E-07   5.00E-07   4.86E-07   1.43E+00
    FON       2.35E-02   2.32E-02   2.28E-02   2.13E-02
    KUR       3.55E-02   2.31E-02   2.35E-02   1.76E-02
    TLK       4.80E-01   5.23E-01   4.90E-01   5.34E-01
    DTLZ2     8.57E-04   4.07E-01   2.08E-02   5.44E-01
    DTLZ3     1.82E+00   2.24E+00   1.65E+00   5.82E+02

    Median S for CCEA
    Problem   C1       C2       C3       C4
    ZDT1      0.1757   0.1771   0.1667   0.8240
    ZDT2      0.1613   0.1553   0.1600   0.9328
    ZDT3      0.2815   0.2867   0.2733   0.9952
    ZDT4      0.1333   0.1436   0.1409   1.9624
    ZDT5      0.7114   0.7125   0.7125   1.1676
    ZDT6      0.1754   0.1822   0.1881   0.8174
    FON       0.2687   0.3497   0.5291   0.2676
    KUR       0.7919   0.7145   0.7154   0.6817
    TLK       1.0905   1.0284   1.0135   1.0772
    DTLZ2     0.1245   0.2658   0.1604   0.2929
    DTLZ3     0.6638   0.8103   0.7525   0.7201

  • Configurations

    11 PCs in a LAN:

    PC        Configuration   CPU (MHz) / RAM (MB)
    server    PIV             1600 / 512
    peer 1    PIII            800 / 512
    peer 2    PIII            800 / 512
    peer 3    PIII            800 / 256
    peer 4    PIII            933 / 384
    peer 5    PIII            933 / 128
    peer 6    PIV             1300 / 128
    peer 7    PIV             1300 / 128
    peer 8    PIII            933 / 512
    peer 9    PIII            933 / 512
    peer 10   PIII            933 / 256

    Algorithm settings:

    Populations            Subpopulation size 20; archive size 100
    Chromosome length      30 bits for each variable
    Selection              Tournament selection
    Crossover operator     Uniform crossover
    Crossover rate         0.8
    Mutation operator      Bit-flip mutation
    Mutation rate          2/L, where L is the chromosome length
    Number of evaluations  120,000
    Exchange interval      5 generations
    Sync. interval         10 generations

  • [Figure: generational distance, spacing, maximum spread and hyper-volume ratio results for the distributed runs]

  • Speedup and Runtime

    Observation:
      Effective in reducing simulation runtime without sacrificing performance
      The achievable speedup is more significant for large problems
      The increase in communication cost counteracts the reduction in computation cost as the number of peers increases

    Speedup:
    Number of peers   ZDT1   ZDT2   ZDT3   ZDT4   ZDT6
    2                 1.52   1.70   1.47   1.23   1.01
    3                 2.01   1.99   1.88   1.47   1.11
    4                 2.25   2.21   1.95   1.50   1.14
    5                 2.48   2.69   2.15   1.56   1.14
    6                 2.81   3.03   2.83   1.70   1.29
    7                 2.87   3.32   2.77   1.88   1.25
    8                 3.38   3.27   2.92   1.82   1.26
    9                 3.46   3.36   2.96   1.83   1.26
    10                3.46   3.18   2.79   1.82   1.25

    [Figure: runtime (s) versus number of peers (1-10) for ZDT1, ZDT2, ZDT3, ZDT4 and ZDT6]

  • 3. Noise and Uncertainty

    Goh, C. K. and Tan, K. C., 'An investigation on noisy environments in evolutionary multi-objective optimization', IEEE Transactions on Evolutionary Computation, vol. 11, no. 3, pp. 354-381, 2007.

    Tan, K. C. and Goh, C. K., 'A competitive-cooperative coevolutionary paradigm for dynamic multi-objective optimization', IEEE Transactions on Evolutionary Computation, vol. 13, no. 1, pp. 103-127, 2009.

    4. Estimation of Distribution Algorithms

    Shim, V. A., Tan, K. C., Chia, J. Y. and Mamun, A. Al., 'Multi-objective optimization with estimation of distribution algorithm in noisy environment', Evolutionary Computation (MIT Press), 2012.

    Shim, V. A., Tan, K. C. and Cheong, C. Y., 'A hybrid estimation of distribution algorithm with decomposition for solving the multiobjective multiple traveling salesman problem', IEEE Transactions on Systems, Man, and Cybernetics: Part C, vol. 42, no. 5, pp. 682-691, Sep 2012.

  • Data encountered in practical applications may often be influenced by noise

    Noise may arise from different sources, e.g., sensor measurement errors, incomplete simulation of computational models, etc.

    The problem with noise is that it is not simply a bias or offset added to the objective function: when a system is subject to noise, each evaluation of the same solution may return different objective function values.

    Noise encountered in EMO is often modeled as a random perturbation of the objective functions.

    Performance is greatly influenced by the noise model adopted and the level of noise intensity encountered.

  • Effect of Noise

    Noise may change the way we perceive a solution: a good solution may be perceived as a bad solution and vice versa. The problem is worse for EMO.

    A non-dominated solution in noisy optimization may actually be an inferior or even an infeasible solution.

    [Figure: two (f1, f2) plots; under noise, solutions A and B are perceived at shifted locations A' and B', changing their apparent dominance relationship]

  • Existing approaches for handling noise in SO and MO problems can be classified as follows (Jin and Branke, 2005):

      Explicit averaging (Singh, 2003; Bui et al., 2005; Basseur and Zitzler, 2006)
      Implicit averaging (Buche et al., 2002)
      Selection modification (Hughes, 2001; Teich, 2001)
      Heuristic approaches (Goh and Tan, 2007)

    Design considerations for handling noise in EMO:
      To minimize error in the selection and elitism process
      To improve the efficiency of the noise-handling mechanism
      To handle the final set of solutions in the archive for decision-making

  • Explicit Averaging

    Each solution is evaluated a number of times and the samples are averaged to estimate the expected objective values.

    Increasing the number of samples reduces the degree of uncertainty in the optimization.

    [Figure: (f1, f2) plot showing an original solution and the scatter of its noisy samples]
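    A minimal sketch of explicit averaging, assuming noisy_evaluate(x) returns one noisy objective vector per call (the name and the fixed sample count are illustrative):

      def averaged_objectives(x, noisy_evaluate, samples=10):
          """Estimate the expected objective vector of x by averaging repeated noisy evaluations."""
          evaluations = [noisy_evaluate(x) for _ in range(samples)]
          num_objectives = len(evaluations[0])
          return [sum(e[m] for e in evaluations) / samples for m in range(num_objectives)]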

  • Implicit Averaging

    A large population size is used.

    Because there may be many similar solutions, the influence of noise is compensated as the search revisits the same region repeatedly.

    This approach can be computationally expensive.

    [Figure: decision-space plots (x1, x2) contrasting a small population with a large population]


  • Preliminary Investigation: Parameter Settings

    Basic algorithm:
      Employs a fixed-size population and an archive.
      When the predetermined archive size is reached, a recurrent truncation process based on niche count is used.
      Binary tournament selection is applied to the combined archive and evolving population.

    Additive noise model:

      f_i'(x) = f_i(x) + Normal(0, σ²)

    where σ is represented as a percentage of the maximum of the i-th objective in the true Pareto front.

    Chromosome          Binary coding; 15 bits per decision variable
    Population          Population size 100; archive (secondary population) size 100
    Selection           Binary tournament selection
    Crossover operator  Uniform crossover
    Crossover rate      0.8
    Mutation operator   Bit-flip mutation
    Mutation rate       1/chromosome_length for FON and KUR;
                        1/bit_number_per_variable for ZDT1, ZDT4 and ZDT6
    Ranking scheme      Pareto ranking
    Diversity operator  Niche count with radius 0.01 in the normalized objective space
    Evaluation number   50,000
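    A sketch of this additive noise model, where true_front_max holds the maximum of each objective over the true Pareto front and noise_level is given as a fraction (the names are illustrative):

      import random

      def noisy_objectives(f_values, true_front_max, noise_level):
          """Add zero-mean Gaussian noise to each objective; the standard deviation is
          noise_level times the maximum of that objective over the true Pareto front."""
          return [f + random.gauss(0.0, noise_level * fmax)
                  for f, fmax in zip(f_values, true_front_max)]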

  • Preliminary Investigation: Noise Impact on Convergence

    [Figure: generational distance (GD) versus generation for ZDT1, ZDT4, ZDT6, FON and KUR at noise levels of 0%, 0.2%, 0.5%, 1.0%, 5.0%, 10.0% and 20.0%]

  • Preliminary Investigation: Noise Impact on Diversity

    [Figure: maximum spread (MS) versus generation for the same problems and noise levels]

  • Preliminary Investigation: Noise Impact on Archiving

    Although the MOEA is still able to find better solutions, the noise-enhanced solutions already in the archive keep the newly found non-dominated solutions out, resulting in a reduced number of archived solutions.

    [Figure: final archive size versus noise level for ZDT1, ZDT4, ZDT6, FON and KUR]

  • Preliminary Investigation: Noise Impact on Decision-Making in the Selection Process

    [Figure: decision-error ratio versus generation for the same problems and noise levels]

  • Preliminary Investigation: Trace of the Search Range for an Arbitrarily Selected Parameter

    The baseline MOEA is capable of narrowing down the search range for better evolutionary search in a noise-free environment.

    Under noise, on the other hand, the mean location of the individuals remains relatively unchanged and the range stays concentrated near the optimal region, which could be due to correct decision-making in the selection process.

    [Figure: search range of a selected variable x versus generation for ZDT1 and FON, at 0% and 20% noise levels]

  • Experiential Learning Directed Perturbation (ELDP)

    For better decision-making at early generations, a momentum term is introduced to accelerate movement in the direction of improvement, while restricting movement otherwise. The variation increases in magnitude in the same direction of change:

      Δp_j(t) = p_j(t) - p_j(t-1)

      p_j(t+1) = p_j(t) + Δp_j(t), if the resulting change lies within the allowed range [Δ_min, Δ_max];
                 bit-flip mutation is applied otherwise.

    [Figure: decision-error ratio versus generation, illustrating the decision-making error that ELDP targets]

  • Gene Adaptation Selection Strategy (GASS)

    Since the mean location of the search range remains relatively invariant, GASS adjusts the search range according to the distribution of solutions.

    This has the advantage of concentrating resources on a smaller region; by re-evaluating similar solutions, it also provides some form of implicit averaging.

    Once gene adaptation is activated, two models are used: a convergence model, which contracts the current search region [a_j, b_j] around the bounds (lowbd_j, uppbd_j) and mean (meanbd_j) of the solution distribution, and a divergence model, which expands the region around meanbd_j; both are scaled by a width factor w.

    [Figure: defined and current search regions of variable x_j over the generations for the convergence model and the divergence model, showing the activation of gene adaptation]

  • Possibilistic Archiving

    Based on the concept of possibilistic Pareto dominance, the aim is to store the true non-dominated solutions in the archive:
      Minimize the removal of true non-dominated individuals due to noise
      Provide a chance for individuals degraded by noise to survive

    L is a reflection of the uncertainty level present in the system. Because of this uncertainty, a solution can lie anywhere inside a box of side 2L around its observed objective vector, so any solution in that box may in fact be non-dominated. Possibilistic archiving therefore removes a solution only if another solution necessarily dominates it.

    [Figure: objective space showing an observed solution, its 2L-by-2L uncertainty box, and (in gray) the region that the solution dominates]

  • MOEA-RF

    The three noise-handling features (Experiential Learning Directed Perturbation, the Gene Adaptation Selection Strategy, and Possibilistic Archiving) are embedded in the baseline MOEA to form MOEA-RF.

    [Figure: MOEA-RF flowchart. Initialize the population; evaluation and Pareto ranking; combine the archive and the evolving population; possibilistic archiving; check the stopping criterion (if met, apply possibilistic archiving and return the archive); binary tournament selection; GASS (choose between the convergence and divergence models and adapt genes based on the adopted model); crossover; ELDP; add offspring to the population; repeat.]

  • Performance in a Noisy Environment: ZDT1 with Different Noise Levels

    [Figure: GD, MS and HVR versus noise level (0%, 5%, 10%, 20%) for MOEA-RF, RMOEA, NTSPEA, MOPSEA, SPEA2, NSGA-II and PAES]

    65/96

    Performance in Noisy Environment

    FON with 20% noise

    0 10000 20000 30000 40000 500000

    0.1

    0.2

    0.3

    0.4

    0.5

    Evaluation

    GD

    0 10000 20000 30000 40000 500000

    0.1

    0.2

    0.3

    0.4

    0.5

    0.6

    Evaluation

    MS

    MOEA-RF

    RMOEA

    NTSPEA

    MOPSEA

    SPEA2

    NSGAII

    PAES

    66EE6701 Evolutionary Computation Chapter 1

  • Performance in a Noisy Environment

    Number of non-dominated individuals found for the various problems (20% noise level):

    Problem             MOEA-RF  RMOEA  NTSPEA  MOPSEA  SPEA2  NSGAII  PAES
    ZDT1 1st quartile   28       6      5       7       18     17      4
         Median         31       7.5    6       9       21.5   19.5    4.5
         3rd quartile   32       10     8       10      27     23      5
    ZDT4 1st quartile   29       5      6       9       12     20      5
         Median         33       7      8       11      26.5   26      6
         3rd quartile   41       8      10      13      29     32      9
    ZDT6 1st quartile   82       2      4       3       8      8       2
         Median         85       3      5       4       9      9       4
         3rd quartile   88       5      6       5       11     11      6
    FON  1st quartile   9        1      1       1       6      6       1
         Median         11       2      1.5     2       8.5    8.5     2
         3rd quartile   17       2      2       3       12     12      3
    KUR  1st quartile   25       6      5       8       23     25      7
         Median         27       8      5.5     9       25     27      9
         3rd quartile   30       9      7       11      28     30      10

  • Effects of the Proposed Features

    ELDP and GASS have the advantage of overcoming local optimality for ZDT4 (which has 21^9 local Pareto fronts).

    The population distribution converges faster when ELDP is incorporated.

    [Figure: population distributions at generations (a) 0, (b) 10, (c) 60, (d) 200 and (e) 350 on ZDT4, for the baseline MOEA, the baseline MOEA with ELDP, and the baseline MOEA with GASS]

  • Effects of ELDP and GASS on NSGA-II and SPEA2

    [Figure: generational distance, spacing, maximum spread and hypervolume ratio of NSGA-II, NSGA-II-RF, SPEA2 and SPEA2-RF on ZDT4 with 0% noise and with 20% noise]

  • Performance in a Noisy Environment: ZDT6 with 20% Noise

    ZDT6 has a non-uniformly distributed and discontinuous trade-off.

    Most algorithms are unable to find the global trade-off. MOEA-RF gives consistently good performance with relatively small variance in the performance metrics.

    [Figure: generational distance, spacing, maximum spread and hypervolume ratio of MOEA-RF, RMOEA, NTSPEA, MOPSEA, SPEA2, NSGA-II and PAES]

  • Summary

    The behavior of EMO in noisy environments has been examined.

    Three noise-handling features are proposed:
      Two heuristics, experiential learning directed perturbation (ELDP) and the gene adaptation selection strategy (GASS), are applied to exploit MOEA behaviors for better performance in noisy environments
      A possibilistic archiving model reduces the impact of noise on the archive

    The proposed features also improve the performance of general MOEAs, e.g., SPEA2 and NSGA-II, enabling them to exhibit competitive or better performance in noisy environments.

  • Hard Disk Drive Servo Specifications

    The control input should not exceed 2 volts due to physical constraints on the actual actuator.

    The overshoot and undershoot of the step response should be kept below 5%, as the R/W head can start to read or write within 5% of the target.

    The 5% settling time of the step response should be less than 2 milliseconds.

    Excellent steady-state accuracy.

    Robustness in terms of disturbance rejection and plant uncertainty.

    Tan, K. C., Sathikannan, R., Tan, W. W. and Loh, A. P., 'Evolutionary design and implementation of a hard disk drive servo control system', Soft Computing, vol. 11, no. 2, pp. 131-139, 2007.

  • HDD Model

    [Figure: measured and identified frequency responses (magnitude in dB and phase in degrees versus frequency in rad/sec)]

    Identified plant model:

      Gv(s) = (4.4e10 s + 4.87e15) / ( s² (s² + 1.45e3 s + 1.1e8) )

    Discrete-time model (sampling frequency of 4 kHz):

      x(k+1) = [1  1.664; 0  1] x(k) + [1.384; 1.664] u

    Controllers in discrete form:

      Kp = Kf (z - f_f1) / (z - f_f2)
      Ks = Kb (z - f_b1) / (z - f_b2)

  • Time-Domain and Frequency-Domain Design Specifications

    Customer specification                               Objective               Goal
    1. Stability (closed-loop poles)                     Nr(eig(A_clp) > 0)      0
    Frequency domain:
    2. Closed-loop sensitivity / disturbance rejection   max |S(jw)|             1
    3. Plant uncertainty                                 max |T(jw)|             1
    4. Actuator saturation                               max(u)                  2 volts
    Time domain:
    5. Rise time                                         T_rise                  2 ms
    6. Overshoot                                         O_shoot                 0.05
    7. Settling time (5%)                                T_settling              2 ms
    8. Steady-state error                                SS_error                0

    [Figure: closed-loop block diagram with a feedforward controller, a position feedback controller and the plant, each in discrete state-space form x(n+1) = Ax(n) + Bu(n), y(n) = Cx(n) + Du(n)]

    Controller transfer functions:

      Kp = 0.038597 (z - 0.63841) / (z - 0.39488)
      Ks = 0.212 (z - 0.783) / (z - 0.014001)

  • Layout of MOEA Toolbox

    [Figure: the MOEA toolbox GUI, showing the master window, quick setup, population handling, simulation objectives, remote control and graphical results]

  • Design Trade-off Graph and Progress Ratio

    [Figure: design trade-off graph and progress ratio plots produced by the toolbox]

    Tan, K. C., Lee, T. H., Khoo, D. and Khor, E. F., 'A multi-objective evolutionary algorithm toolbox for computer-aided multi-objective optimization', IEEE Transactions on Systems, Man and Cybernetics: Part B (Cybernetics), vol. 31, no. 4, pp. 537-556, 2001.

  • Servo Output Response and Disturbance Rejection

    [Figure: servo step output response and disturbance rejection plots for the evolved design]


  • Real-time Implementation

    [Figure: real-time implementation results of the evolved servo controller]

  • The vehicle routing problem (VRP) involves routing a set of identical vehicles with limited capacity from a central depot to a set of geographically dispersed customers to satisfy their demands

    In the VRP with stochastic demand (VRPSD), customer demands are stochastic, while other parameters such as the vehicle capacity, customers and depot are known a priori.

    Tan, K. C., Cheong, C. Y. and Goh, C. K., 'Solving multiobjective vehicle routing problem with stochastic demand via evolutionary computation', European Journal of Operational Research, vol. 177, pp. 813-839, 2007.

  • Constraints in VRPSD

    Capacity constraint:
      Each customer has a demand, and the vehicle capacity cannot be exceeded.
      Treated as a hard constraint, i.e., a route failure occurs when this constraint is violated.

    Time constraint:
      The service and travel time of each route should not exceed the length of a driver's workday, e.g., 8 hours.
      Treated as a soft constraint in the form of driver remuneration: $10 for each of the first 8 hours of work and $20 for each additional hour.
      This penalizes exceedingly long routes, which may not be feasible to implement in practice.

  • Stochastic Demands

    The demand of each customer is a random variable whose distribution is known; a normal distribution is usually used.

    The actual demand is not known in advance and is revealed only when the vehicle arrives at the customer's location.

    Examples of VRPSD:
      Beer and soft drink distribution
      Replenishing ATMs with cash
      Trash collection

  • Difficulty in Solving VRPSD

    Route failure:
      A vehicle may find that it is unable to satisfy a customer's demand upon arrival due to the capacity constraint.

    Recourse policy:
      The vehicle returns to the depot to restock and then continues delivery according to the originally planned route.

    Main obstacle:
      The actual cost of a solution cannot be known before it is actually implemented, i.e., the main obstacle in solving the VRPSD is to find a suitable objective function.

  • Route Simulation Method (RSM)

    RSM is applied to evaluate the cost of the routes for a particular realization of the customers' demands.

    N sets of demands are generated based on the known distributions of the customers' demands, and an averaging technique is used to obtain the expected cost of the solution.

    Example (vehicle capacity: 15):

    Customer   Generated demand
    1          5
    2          6
    3          2
    4          13
    5          9
    6          5

    [Figure: a depot and six customers with one realization of the generated demands]
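    A minimal sketch of the averaging step, assuming route_cost(routes, demands) computes the recourse cost of a routing plan for one demand realization and sample_demands() draws one set of demands from the known distributions (both hypothetical helpers):

      def expected_cost(routes, sample_demands, route_cost, n_sets=100):
          """Estimate the expected cost of a routing plan by averaging over N demand realizations."""
          return sum(route_cost(routes, sample_demands()) for _ in range(n_sets)) / n_sets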

  • 8/13/2019 Chapter 1 - EMO

    84/96

Multiple objectives: travel distance, driver remuneration, and number of vehicles.

Variable-length chromosome: encodes the number of routes/vehicles and the order of customers served by each vehicle; every chromosome can have a different number of routes. Each route contains a sequence of customers, and a chromosome encodes a complete routing solution.

Example chromosome: 0 2 5 7 0 0 1 3 4 8 9 0 0 6 10 0, where the 0s act as route delimiters (depot visits), so the example encodes three routes serving customers (2, 5, 7), (1, 3, 4, 8, 9) and (6, 10).
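A sketch of how such a variable-length chromosome could be decoded into routes, assuming 0 marks a depot visit so that each route is the sequence of customers between consecutive 0s (an interpretation of the example above, not code from the cited paper):

```python
def decode_chromosome(chromosome):
    """Split a flat chromosome into routes, treating 0 as the depot marker."""
    routes, current = [], []
    for gene in chromosome:
        if gene == 0:
            if current:            # close the route in progress
                routes.append(current)
                current = []
        else:
            current.append(gene)
    if current:                    # trailing route without a closing 0
        routes.append(current)
    return routes

chromosome = [0, 2, 5, 7, 0, 0, 1, 3, 4, 8, 9, 0, 0, 6, 10, 0]
print(decode_chromosome(chromosome))   # [[2, 5, 7], [1, 3, 4, 8, 9], [6, 10]]
```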


Route-exchange Crossover

Allows good sequences of routes to be shared between chromosomes; any infeasibility after the exchange can be eliminated easily. A random shuffling operator is applied to increase the diversity of the chromosomes so that the large VRPSD search space can be explored.
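The sketch below shows one plausible form of such a crossover: a route from one parent is copied into the other, and the duplicated customers are then removed from the original routes to restore feasibility. The choice of which route to donate and how duplicates are pruned are assumptions for illustration, not the exact operator of the cited paper:

```python
import random

def route_exchange(parent_a, parent_b):
    """Copy one randomly chosen route from parent_b into parent_a, then
    remove the duplicated customers from parent_a's original routes so
    that every customer is still served exactly once.
    Parents are lists of routes; each route is a list of customer ids."""
    donor = list(random.choice(parent_b))
    donated = set(donor)
    repaired = []
    for route in parent_a:
        pruned = [c for c in route if c not in donated]
        if pruned:                     # drop routes that became empty
            repaired.append(pruned)
    repaired.append(donor)             # add the donated route
    return repaired

a = [[2, 5, 7], [1, 3, 4, 8, 9], [6, 10]]
b = [[5, 1, 2], [3, 6], [4, 7, 8, 9, 10]]
print(route_exchange(a, b))
```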


Multi-mode Mutation

Three mutation modes are used: partial swap, merge shortest route, and split longest route.

[Flowchart: the chromosome selected for mutation is dispatched by random draws rand[0,1) against the elastic rate and the squeeze rate to either split the longest route, merge the shortest route, or perform a partial swap; a random shuffle step is also applied.]
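A sketch of how the mode selection above might be wired up: draws against an elastic rate and a squeeze rate decide which mode fires. The split/merge/swap bodies and the placement of the random shuffle are deliberately simple stand-ins, assumed for illustration rather than taken from the cited paper:

```python
import random

def multi_mode_mutation(routes, elastic_rate=0.3, squeeze_rate=0.3):
    """Apply one of three mutation modes to a routing solution
    (a list of non-empty routes of customer ids)."""
    routes = [list(r) for r in routes]
    if random.random() < elastic_rate and any(len(r) > 1 for r in routes):
        # Split longest route: cut the longest route into two shorter ones.
        longest = max(routes, key=len)
        routes.remove(longest)
        cut = len(longest) // 2
        routes += [longest[:cut], longest[cut:]]
    elif random.random() < squeeze_rate and len(routes) > 1:
        # Merge shortest route: append the shortest route to another route.
        shortest = min(routes, key=len)
        routes.remove(shortest)
        random.choice(routes).extend(shortest)
    else:
        # Partial swap: exchange two customers between two routes.
        r1, r2 = random.sample(routes, 2) if len(routes) > 1 else (routes[0], routes[0])
        i, j = random.randrange(len(r1)), random.randrange(len(r2))
        r1[i], r2[j] = r2[j], r1[i]
    random.shuffle(routes)              # random shuffle of the route order
    return routes
```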


Shortest Path (SP) search: exploits the structure that route failures are more likely to occur towards the end of a route.

Which Directional (WD) search: exploits the structure that the cost of a route differs between its two directions, and rebuilds the route in the opposite direction.

For both methods, the new route is compared with the original one and the better route is retained.
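A minimal sketch of the WD idea: evaluate a route in both directions under a given cost function and keep the cheaper orientation. The cost function is passed in and is assumed to capture the recourse trips that make the two directions differ; the toy cost below is purely illustrative:

```python
def which_direction(route, cost_fn):
    """Keep the better of a route and its reverse. cost_fn(route) should
    return the (expected or sampled) cost of serving the route in the
    given order; with recourse trips the two directions generally differ."""
    reversed_route = list(reversed(route))
    return route if cost_fn(route) <= cost_fn(reversed_route) else reversed_route

# Usage sketch with a toy cost that penalizes large demands served late.
demands = {1: 5, 2: 6, 3: 2, 4: 13}
toy_cost = lambda r: sum(pos * demands[c] for pos, c in enumerate(r, start=1))
print(which_direction([1, 2, 3, 4], toy_cost))   # returns [4, 3, 2, 1]
```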

[Flowchart of the overall algorithm: Start → build customer database → population initialization → generation loop {Route Simulation Method and Pareto ranking → tournament selection → route-exchange crossover → multi-mode mutation → elitism → local search exploitation (if performed) → archive update}, repeated until the stopping criterion is met → End.]

While the evolutionary operators focus on global exploration, the local search contributes to the intensification (local exploitation) of the optimization results.


Performance of Local Search Operators

[Figure: average travel distance and average driver remuneration against generation, plotted over all solutions and over non-dominated solutions, for the local search settings WD/SP, SP/WD, SP, WD, RAN and NLS.]


Comparison of Different Optimization Criteria

[Figure: multiobjective performance, measured as travel distance × driver remuneration (up to about 7×10^6), over all solutions and over non-dominated solutions, for the optimization criteria MO, DORV, DODV, DODR, SOD, SOR and SOV.]


    Search Space of MO


Robust VRPSD solutions

Performance for different values of N, measured as the increase from the expected cost of a solution to the actual cost of implementing it, in terms of travel distance and driver remuneration.

[Figure: increase in travel distance and increase in driver remuneration over four test demand sets for N = 1, 3, 5, 10, 30, 50 and 70.]


Robust VRPSD solutions (Travel Distance)

[Figure: increase in travel distance over four test demand sets for GEG, GEM, AEM and DET (Dror and Trudeau).]


Robust VRPSD solutions (Driver Remuneration)

[Figure: increase in driver remuneration over four test demand sets for GEG, GEM, AEM and DET.]


Robust VRPSD solutions

       Expected  Increase in  Actual    Expected      Increase in   Actual        Multiplicative
       Travel    Travel       Travel    Driver        Driver        Driver        Aggregate
       Distance  Distance     Distance  Remuneration  Remuneration  Remuneration  (x 10^6)
GEG    1086.59    33.85       1120.44    985.95        32.87        1018.82       1.142
GEM    1120.52    16.54       1137.06    990.33        18.64        1008.97       1.147
AEM    1002.08   122.35       1124.43    947.04       125.04        1072.08       1.205
DET     970.17   213.40       1183.57    909.59       217.35        1126.94       1.334

Values are averaged over the non-dominated solutions at the termination of the simulation (test demand set 1).
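The multiplicative aggregate is consistent with the product of the actual travel distance and the actual driver remuneration; for GEG, for instance, 1120.44 × 1018.82 ≈ 1.142 × 10^6, and the same check holds for the other three rows.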


Summary

Significance of the RSM: the RSM was implemented using demand sets randomly generated from the customers' demand distributions. In an actual implementation, there may be no need to generate the demand sets randomly if the company keeps past demand records of its customers; such records can be used to provide the demand sets for the RSM to operate on, which is particularly important when the customers' demand distributions are not known.

Multiple objectives were considered in solving the VRPSD.
A new way of assessing the quality of solutions (the RSM) was introduced.
The solutions found by the MOEA are robust to the stochastic nature of the problem.


Goh, C. K. and Tan, K. C., Evolutionary Multi-objective Optimization in Uncertain Environments: Issues and Algorithms, Springer-Verlag, 2009.

Tan, K. C., Khor, E. F. and Lee, T. H., Multiobjective Evolutionary Algorithms and Applications, Springer-Verlag, United Kingdom, 2005.
