    Practical Multiobjective Optimisation

    by Chris Lucas

    "Now, as there are many actions, arts, and sciences, their ends also are many; ...

    other arts fall under yet others - in all of these the ends of the master arts are to be

    preferred to all the subordinate ends; for it is for the sake of the former that the

    latter are pursued."

    Aristotle, Nicomachean Ethics, 350 BCE, Book I,1

    "The great decisions of human life have as a rule far more to do with the instincts

    and other mysterious unconscious factors than with conscious will and well-

    meaning reasonableness. The shoe that fits one person pinches another; there is no

    recipe for living that suits all cases. Each of us carries his own life-form - an

    indeterminable form which cannot be superseded by any other."

    Carl Gustav Jung, Modern Man in Search of a Soul, 1933, p. 69

    Introduction

    In the world around us it is rare for any problem to concern only a single value or

    objective. Generally multiple objectives or parameters have to be met or optimised

    before any 'master' or 'holistic' solution is considered adequate. This introduction

    looks at some of the issues involved when we try to do this, and outlines the technique

of Evolutionary Multiobjective Optimisation (EMOO) that can be used to solve Multiobjective Optimisation Problems (MOP) or Multiobjective Combinatorial

    Optimisation (MOCO) problems by using forms of genetic algorithms called

    Multiobjective Evolutionary Algorithms (MOEA). This process can become very

    technical and theoretical, with much of the information on the subject contained

within offline PhD thesis documents and papers, but here we will also look at

    simplified practical ways of employing this technique within our daily lives, without

    computer assistance. This introduction is more difficult than many of our others, so

    we recommend that readers new to this subject first read through (in sequence) all our

prior introductions or the glossary, in order to gain familiarity with the many technical

    terms used herein.

    One of the most important findings when we pursue these studies is that normally

    there is no single solution to such problems, no Utopian 'answer' or optimum in the

    sense traditionally expected. Instead there is a large family of alternative solutions,

    different balances of the various objectives that all have the same global fitness

    (sometimes so many as to be in practice infinite). This diversity of solutions precludes


isolated 'right' or 'wrong' decisions; we must instead make our decision based upon the

    full dynamic context of the situation, in other words complex systems force complex

    choices with complex implications. We can regard the various objectives as resources,

    so agent populations naturally evolve to minimise competition for these, which leads

    to the formation of sets of specialist solutions or niches, a type of division of labour or

speciation, a symbiosis or synergy generating maximum efficiency in the wider global context. This result implies that multiple values, rather than single ones, drive real

    world ecologies and societies and that these techniques can equally be applied in those

    fields. It also implies a multi-level form of selection which operates on populations as

    well as individuals.

    The General Idea

In normal genetic algorithms (GA) we take a population of genomes (individuals)

randomly scattered across state space and evaluate the fitness of the results. The best

are then retained (selection) and a new population created (reproduction), incorporating mutation and crossover operations to gain a different set of possibilities

    (variation). Over many generations the population will search state space and

    hopefully converge on the best solution, the global optimum. In multiobjective genetic

    algorithms we do much the same, except that in this case we are trying to optimise not

for one fitness parameter but against a collection of them. To achieve this we must

    generate an understanding of the overall fitness of the set of objectives, so that we can

    compare solutions, and there are many ways of doing this. In traditional

    multiobjective optimisation it is usual to simply aggregate together (add in some way)

    all the various objectives to form a single (scalar) fitness function, which can then be

    treated by classical techniques such as simple GAs, multiple objective linear

    programming (MOLP), multiple attribute utility theory (MAUT), random search,

    simulated annealing etc.
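
To make the aggregation idea concrete, here is a minimal Python sketch of weighted-sum scalarisation. The car-style objectives, weights and value ranges are invented purely for illustration; a plain random search then optimises the single scalar fitness, just as any of the classical techniques mentioned above could.

```python
import random

# Minimal sketch of weighted-sum scalarisation (illustrative objectives and weights only).

def objectives(candidate):
    """Return the objective vector for a candidate (speed, cost, comfort)."""
    speed, cost, comfort = candidate
    return (speed, -cost, comfort)        # negate cost so every objective is 'maximise'

def scalar_fitness(candidate, weights=(0.5, 0.3, 0.2)):
    """Aggregate the objective vector into a single number via a weighted sum."""
    return sum(w * f for w, f in zip(weights, objectives(candidate)))

def random_search(iterations=1000):
    """Optimise the scalarised fitness by plain random sampling."""
    best, best_fit = None, float("-inf")
    for _ in range(iterations):
        candidate = (random.uniform(0, 200),   # speed
                     random.uniform(5, 50),    # cost
                     random.uniform(0, 10))    # comfort
        fit = scalar_fitness(candidate)
        if fit > best_fit:
            best, best_fit = candidate, fit
    return best, best_fit

print(random_search())
```

The answer found depends entirely on the chosen weights, which is exactly the difficulty discussed next.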

    A problem that arises however is how to normalise, prioritise and weight the

    contributions of the various objectives in arriving at a suitable measure, e.g. when

    choosing a car how do we compare incommensurable values like size and colour ? Is

    speed more important than cost and by how much ? How can we quantify

such fuzzy objectives as beauty or comfort ? Additionally values can interact or

    conflict with each other, increasing one can reduce others in turn and this can happen

in nonlinear ways. This mapping stage is in itself problematical in that the set of

    solutions produced is highly dependent upon the value sharing function used and how

    the weights are assigned. Only if this is adequate will the solutions also be adequate

    and it is very easy to assign prejudicial or inappropriate sharing factors which can lead

    to apparently quantitatively exact solutions that are sub-optimal or misleading in

    practice (e.g. traditional 'objective independence' assumptions). Thus a Multiobjective

    Optimisation Problem requires solving two problems, firstly to establish a statement


    of the problem in a suitable mathematical form called the 'objective function' (i.e. a

    definition of the design space) and secondly to somehow search this space to locate

    acceptable solutions (ideally global optima) in terms of the 'decision variables'.

    Compromise Solutions

    Where multiple objectives or decision variables are treated it is usual that only some

    values of each are feasible in practice (e.g. a chair cannot be made too small to fit a

    person). These constraints (not smaller than x, not bigger than y, equal to z) reduce the

    extent of the design space to be searched in accordance with our goals, i.e. what we

wish to achieve overall. In cases where we impose constraints that are too strict, however, it is

    possible to exclude all the solutions i.e. our problem then has no degrees of freedom

    (it is over-constrained), so we must try to balance design freedom with ideal

    preferences. In most situations, where values interact, it is quite impossible to

    optimise all the objectives at the same time, so we must adopt various trade-offs or

compromises, e.g. between cost and performance. Many equally good solutions are possible in these cases, e.g. do we want a cheap, low performance car or an expensive,

    high performance one, or something in-between ? What is the best compromise for

    such decisions always depends upon the environmental context in which it is to be

    used, i.e. our wider lifestyle or worldview.

    When we have a set of solutions such that we can't improve any objective further

    without at the same time worsening another then we have what is called the 'Pareto-

    optimal set' (of resultant fitnesses) or 'Pareto front' (of objective vectors). In such a

    case all the other lesser solutions are said to be 'dominated' by these better ones and

    we can discard them, e.g. the set of objective values 5,3,4 dominates the set 4,3,4 (the

    former improves the first objective without reducing the others), the set 4,8,2 doesn't

    dominate 4,7,3 however, but is an alternative trade off between the last two

    objectives. The set of these 'can't do better' trade-offs (the non-dominated set) contains

    all the acceptable solutions, different combinations of the objectives or 'niches' and we

    then need to select one or more of these for practical use by more intuitive means -

    their fitnesses are exactly the same, i.e. the 'Decision Maker' (DM) needs to prioritise

    the criteria or establish preferences (incorporate subjective values) appropriate to the

    full contextual situation. This can be done prior to optimisation (giving a scalar fitness

    function), after (choosing from the full Pareto-optimal set), or interactively (gradual

    convergence on an acceptable solution). The first is appropriate given static (hard)

    preferences and changing objectives, the second for static objectives and changing

    (soft) preferences, and the third where both are dynamically changing (coevolution).
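
The dominance test just described translates directly into code. The sketch below assumes all objectives are to be maximised and reuses the numerical examples above to check dominance and to filter a list of objective vectors down to its non-dominated (Pareto) set.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximised):
    a is at least as good in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def non_dominated(vectors):
    """Return the subset of vectors not dominated by any other (the Pareto set)."""
    return [v for v in vectors
            if not any(dominates(other, v) for other in vectors if other != v)]

# The examples from the text:
print(dominates((5, 3, 4), (4, 3, 4)))   # True  - improves one objective, worsens none
print(dominates((4, 8, 2), (4, 7, 3)))   # False - a trade-off, neither dominates the other
print(non_dominated([(5, 3, 4), (4, 3, 4), (4, 8, 2), (4, 7, 3)]))
# -> [(5, 3, 4), (4, 8, 2), (4, 7, 3)]
```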

    Conventional Approaches


    Before we look at GA inspired approaches, let us mention a selection of the more

    conventional Operational Research methods of obtaining solutions or approaching the

    Pareto front (many other techniques also exist). These mostly focus on the first stage

    of ranking the objectives, i.e. trying to reduce the design space to a more easily

    managed mathematical form (since most such problems are far too complex to

    enumerate and evaluate all the possible combinations in any reasonable time).

Stochastic - very general, but inefficient (e.g. random walk, simulated annealing, Monte Carlo & tabu)

Linear Programming - fast, but restricted to linearised situations only

Gradient Based/Hill Climbing - nonlinear, applicable to smooth (differentiable) functions

Simplex Based - nonlinear for discontinuous functions

Sequential Optimisation - ranks objectives by preference and optimises them in order (lexicographic)

Weighting Objectives - creating a single scalar vector function to optimise, multiple runs needed

Constraint - optimises preferred objective with others treated as constraints (a sketch follows this list)

Global Criterion - minimises the distance to an ideal vector

Goal Programming - minimises deviation from target constraints

Game Theory - searches for Nash equilibria

Multiattribute Utility Theory (MAUT) - maximises preferences or fitnesses

Surrogate Worth Trade-Off - quantifies and minimises compromises

Q-Analysis - uses topology maths, multicriteria polyhedral dynamics (MCPD)

Dynamic Compromise Programming - uses state transition functions, parameters change over time
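
As a worked illustration of the 'Constraint' entry above, here is a minimal sketch in the epsilon-constraint style: one preferred objective is optimised while the other is held within a bound, and infeasible points are simply discarded. The two toy objectives, the bound values and the random-sampling search are all invented for illustration.

```python
import random

# Illustrative two-objective toy problem on x in [0, 1]:
# maximise performance f1(x) = x, while cost f2(x) = x**2 must stay below a bound.

def f1(x):              # preferred objective, to be maximised
    return x

def f2(x):              # secondary objective, treated as a constraint f2(x) <= eps
    return x ** 2

def epsilon_constraint(eps, samples=10_000):
    """Maximise f1 subject to f2 <= eps, here by simple random sampling."""
    best_x, best_f1 = None, float("-inf")
    for _ in range(samples):
        x = random.random()
        if f2(x) <= eps and f1(x) > best_f1:    # discard infeasible points
            best_x, best_f1 = x, f1(x)
    return best_x, best_f1

# Sweeping the bound traces out trade-off points one at a time, over multiple runs.
for eps in (0.1, 0.25, 0.5, 1.0):
    print(eps, epsilon_constraint(eps))
```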

    Comparing GA and MOEA Approaches

    In standard GAs we take the specifications of a number of parts or attributes (the

    genotype bits) and combine them to create a function (the phenotype), and it is the

    fitness of this function that we wish to optimise. In these situations we are not

    bothered about which parts are used or not used, as long as the required function is

met. Technically the 'phenotype' function is an emergent property of the

    environmentally situated 'genotype' but most GAs simply relate the two

    mathematically. However generating just a single fitness function in such a linear

    style, e.g. of the form F = xA + yB + zC (which is then maximised or minimised), can

    lead us to difficulties. If we take for example A = air, B = food and C = warmth, then

    5A + 0B + 0C seems to achieve a fitter organism overall than say 1A + 1B + 1C, yet a

    creature cannot live without all three needs (objectives), so this would be an erroneous

    solution. Thus maximising either a single objective or a simple sum of them (where

    inter-dependencies exist) can lead to invalid (unstable) solutions. The problem is that


    the apparent overall fitness (5 here) is unsustainable, the population size would decay

dynamically P(t+n) → 0 (i.e. the creatures would soon starve or freeze to death). We

    need instead to converge on stable equilibria (which are often short term ones in open,

    soft preference, dynamical systems). To deal with such problems we can add sets of

    constraints, based upon our needs and the available resources (e.g. B not less than w,

    A not greater than v), thus preventing certain of our objectives from escaping sensible

    bounds (i.e. we discard 'solutions' that arise violating these constraints).

    Usually, in MOP, function fitness is made a sum of the objective fitnesses, but if

    instead we use the product of those fitnesses then we can avoid this problem, e.g.

    5x0x0 = 0 (unfit), 1x1x1 = 1 (some fitness). This gives us the idea that interactions

    between objectives imply competitive (negative-sum) or cooperative (positive-sum)

    behaviours and that finding the optimum overall fitness in these cases will require

    cooperation (at least in a weak sense) and not competitive exclusion. The notion of a

    population of cooperative niches (diverse Pareto front solutions) is more naturally

    modelled by MOEAs. In these, instead of selecting and breeding just the best overall

    solution we wish to maintain a set of solutions based upon the values of all the

    separate objectives. These are only replaced by better solutions if they become

    dominated in the Pareto sense. In this way we avoid the situation where solutions that

    improve one variable at the expense of another come to dominate the population. This

    approach naturally allows niches to persist and in the end delivers a set of alternative

    optimised compromises from which the decision maker can make a choice, or

    alternatively allow a diversity of 'lifestyles' to coexist. For example, in a village we

    should have a balance between the various trades - farmers, carpenters, blacksmiths,

shopkeepers, doctors, innkeepers, etc. None of these occupations are 'better' than the others, and they can all only exist within the wider community, within a population-

    dependent mutual support arrangement, i.e. if we say 'doctor' is more 'profitable' then

    we should maximise that occupation - but that 'profitability' collapses if there are no

    patients, only doctors - all the fitnesses are mutually dependent upon the whole, the set

    of niches is collectively of higher fitness than any isolated occupation (this is the

    meaning of synergy and is a form of frequency-dependent selection).
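
The air/food/warmth argument can be checked in a few lines: an additive aggregation scores the degenerate 'all air' allocation above the balanced one, while a multiplicative aggregation (one crude way of expressing the interdependency) does not. This is only a toy illustration of the point, not a general recipe.

```python
# The air/food/warmth example: interdependent objectives where every
# objective must be non-zero for the solution to be viable at all.

def additive_fitness(values, weights=(1, 1, 1)):
    return sum(w * v for w, v in zip(weights, values))

def multiplicative_fitness(values):
    product = 1
    for v in values:
        product *= v
    return product

degenerate = (5, 0, 0)   # all 'air', no food or warmth
balanced = (1, 1, 1)

print(additive_fitness(degenerate), additive_fitness(balanced))              # 5 vs 3 - misleading
print(multiplicative_fitness(degenerate), multiplicative_fitness(balanced))  # 0 vs 1 - viable only if all needs are met
```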

    Evolutionary Multiobjective Approaches

In traditional single-objective (scalar) GA approaches, to fully search state space, we can vary the weights or treat most of the objectives as varying constraints and

    optimise just the main objective, employing multiple runs to generate Pareto-optimal

    solutions sequentially. For MOEAs we wish instead to generate all the Pareto-optimal

    solutions in a single run, for efficiency. A number of approaches are possible, often

    combined (we won't try to fully explain these here):


Switching Objectives - optimises (selects) each objective separately to explore trade-offs

Parameter Variation - changes the weighting during each selection phase to explore state space

Pareto Selection - uses the dominance concept for selection to eliminate inferior solutions

Fitness Sharing - degrades the fitness of all competing members of a niche, encouraging diversity

Restricted Mating - only allows close individuals to mate, avoiding losing good building blocks

Overspecification - adds genetic redundancy (ploidy), allowing for discontinuous environments

Reinitialisation - generates occasional new random immigrants to introduce novelty

Min-Max Optimisation - uses deviations from goal objectives to keep focused on the ideal solution

    One of the most useful techniques in practice is 'elitism' where we preserve the best

    individuals from each generation, but in multiobjective cases this can lock the system

    on a local optimum (premature convergence). We need in this case to decide how

    many solutions to include in the elite set and whether to keep them within the

    population (evolving) or as an external reference set used only in selection.
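
One common way of combining elitism with Pareto selection is an external archive of non-dominated solutions, updated every generation. The sketch below shows only that archive-update step, assuming maximisation and using a deliberately crude size limit; a full MOEA wraps something like this inside selection, crossover and mutation.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def update_archive(archive, cand_fitness, cand_solution, max_size=20):
    """Insert a candidate into an elitist archive of (fitness_vector, solution)
    pairs, keeping only mutually non-dominated members."""
    # Reject the candidate if an archived member already dominates it.
    if any(dominates(f, cand_fitness) for f, _ in archive):
        return archive
    # Otherwise drop every archived member the candidate dominates, then add it.
    archive = [(f, s) for f, s in archive if not dominates(cand_fitness, f)]
    archive.append((cand_fitness, cand_solution))
    # Crude truncation; real algorithms use crowding or clustering here to keep a good spread.
    return archive[:max_size]

# Example, feeding in the vectors from the Pareto discussion earlier:
archive = []
for fit in [(4, 3, 4), (5, 3, 4), (4, 8, 2), (4, 7, 3)]:
    archive = update_archive(archive, fit, None)
print([f for f, _ in archive])    # -> [(5, 3, 4), (4, 8, 2), (4, 7, 3)]
```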

    Problem Areas

    Several problems occur when we try to search the space of possible solutions. First we

    wish to cover all of the search space (i.e. find all the good solutions) and not get stuck

    on local optima, secondly we wish to approach the Pareto front as closely as possible

    (ensuring optimal solutions), thirdly we wish to ensure a good spread of objective

    values (i.e. a diverse choice of niches) and fourthly we wish to achieve convergence in

    a reasonable time. These depend in turn upon how we choose to define the fitness or

    utility function and carry out the search, which can be problematical (again we won't

    go into the details, just give some outlines):

    Fitness function (decision variable space) related problems include:

Context Dependency - fitness is affected by population size, environment, history and synergetic effects

Poor Information - we can't specify preferences or weighting adequately, or objectives are incomparable

Intransitivity - outranking leads to conflicting chains of preferences, e.g. A > B > C > A


Dynamic Preferences - objective weighting changes over time or is cyclic (non-stationarity)

Nonlinear Interdependencies - the ranking changes depending upon the values of other objectives

Multiple Decision Makers - aggregating individual preferences cannot give a valid group preference

Fitness Scaling - normalisation method affects selection pressure and is problem dependent

    Search (objective function space) related problems include:

Crossover & Mutation Rates - we do not know which values to choose or how these affect results

Population Size - we do not know an appropriate number of individuals to employ for each case

Deception - local solutions (building blocks) prove not to be part of the Pareto-optimal solutions

Fitness Sharing Factor - what value should we use to avoid genetic drift (clustering on the front); a fitness-sharing sketch follows this list

Front Fragmentation - isolated and fragmentary optima can be missed if the front is discontinuous

Concave Fronts - many approaches have difficulty using non-convex solution spaces (convex means that on a line interpolated between two solutions all points are also solutions, non-convex or concave means that some points on such a line generate invalid solutions)

Convergence Time - sharing and similar front covering techniques can slow the runs by generating many poor (dominated) solutions - this can sometimes be improved by using mating restrictions (akin to Parallel Genetic Algorithms) and elitism
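
As promised in the list above, here is a minimal sketch of the fitness-sharing idea: each individual's raw fitness is divided by a niche count, so crowded regions of the space are penalised and isolated niches are protected. The triangular sharing function, the distance measure and the one-dimensional population are invented for illustration, and choosing the sharing radius sigma is precisely the open question noted above.

```python
def niche_count(individual, population, distance, sigma=1.0):
    """Sum of the sharing function over the population: 1 - d/sigma when d < sigma, else 0."""
    total = 0.0
    for other in population:
        d = distance(individual, other)
        if d < sigma:
            total += 1.0 - d / sigma
    return total     # always >= 1, since each individual shares fully with itself

def shared_fitness(raw_fitness, individual, population, distance, sigma=1.0):
    """Degrade raw fitness in proportion to how crowded the individual's niche is."""
    return raw_fitness / niche_count(individual, population, distance, sigma)

# Three individuals clustered near 0.0 and one isolated at 5.0 (1-D 'genomes', equal raw fitness).
population = [0.0, 0.1, 0.2, 5.0]
distance = lambda a, b: abs(a - b)
for x in population:
    print(x, round(shared_fitness(1.0, x, population, distance), 2))
# The isolated individual keeps fitness 1.0; the clustered ones drop to roughly 0.36-0.37.
```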

    Once we have a set of solutions our problems are not over. We then have no

    information as to how these solutions arose, so cannot say how critical they are, i.e. if

    we vary one of the parameters slightly does the solution persist ? Adopting a 'perfect'

solution that turns into a disaster if some parameter varies by say 1% would be unwise!

Attempts to clarify this, i.e. provide some 'sensitivity assessment', require us to

maintain and analyse a history of the search trajectories, e.g. by using such

    techniques as case-based reasoning (CBR). These analyses can also help improve

    search performance by generating a better understanding of the structure of design

    space. An example of just how complicated nonlinear optimisation can be is

    illustrated by the following graph of how the fitness value of just one objective (out of

    seven) changes when plotted against variations in two of the other variables within the

    system. The narrowness of the fitness peaks reflects a sensitivity to initial conditions


    and the number of such peaks emphasises the difficulties in finding a representative

spread of good solutions, i.e. objective nonlinear interactions generate a rugged

'fitness landscape'.
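
Returning to the sensitivity question, one crude but practical check is simply to nudge each decision variable of a chosen solution by a small percentage and measure how far the objective vector moves. The sketch below does this under invented assumptions (a 1% relative step, a 5% tolerance and a user-supplied objective-vector function), reporting the largest relative change seen over all perturbations.

```python
def sensitivity_check(solution, objective_vector_fn, rel_step=0.01, tolerance=0.05):
    """Perturb each decision variable by +/- rel_step (relative) and return
    (worst_relative_change, is_robust) over all objectives and perturbations."""
    base = objective_vector_fn(solution)
    worst = 0.0
    for i, value in enumerate(solution):
        for sign in (+1, -1):
            perturbed = list(solution)
            perturbed[i] = value * (1 + sign * rel_step)
            for b, p in zip(base, objective_vector_fn(perturbed)):
                if b != 0:
                    worst = max(worst, abs(p - b) / abs(b))
    return worst, worst <= tolerance

# Toy usage: one objective with a very sharp peak at (1, 1) - illustrative only.
peaked = lambda x: (1.0 / (1e-6 + (x[0] - 1.0) ** 2 + (x[1] - 1.0) ** 2),)
print(sensitivity_check((1.0, 1.0), peaked))   # huge relative change -> not robust
print(sensitivity_check((2.0, 2.0), peaked))   # smooth region -> robust
```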

    Diversity, Creativity and Stability

We noted at the start that multiple objectives or values imply a possible diversity of specialist niches and that part of the function of state space searches is to locate and

    exploit these possibilities. This is a form of creativity, where we become aware of

    ways of operating that go beyond individualistic success criteria and embody a form

    of group selection, a 'building block' approach to cooperative social fitness

exemplified by the 'credit assignment' mode of learning classifier systems (LCS). This

    relates to changing competing objectives into supporting ones, e.g. 'work' and 'fun' are

    often opposed, but by re-organising our approach (escaping the 'rules') work can also

    be fun - the two become mutually supporting. In order for these higher-level attractor

    niches to persist we must have a selection pressure opposing homogeneity (the lowest

common denominator), i.e. keeping partial solutions (specialist functions) intact and restoring them if perturbed. This generally means that fitness is defined in terms of the

    whole, i.e. the mix of partial solutions present, similar to our MOEA focus on

    preserving all objectives. Thus as any partial solution grows in the fixed GA

    population, the others upon which its fitness depends must shrink, compensating and

    self-stabilising the whole - which has been called in Reinforcement Learning (RL) and

    LCS literature a strong cooperation mode or 'symbiotic evolution', which relates to the


    multiple environments of ecological GAs. This balance reflects what has been called

    the 'edge of chaos', the upward causation of the part fitness functions being countered

    by the downward causation of the fitness of the whole population. An alternative way

    of implementing this is to select using different criteria for each specialist population,

    i.e. we use differential selection to both create and maintain niche diversity. One way

of doing this is to subdivide state space so that each sub-population is optimised in a different environment and thus finds alternative optima or niches.

    The idea of competition for multimode resources can be illustrated by taking the three

    economic resources of Money, Time and Ideas (thought). These all interrelate, i.e.

    generating money needs time and ideas, generating ideas needs time and perhaps

    money, generating time needs ideas and money. A simple dynamic model for this

    would be:

M(t+1) = a * M(t) + b * I(t) + c * T(t)

I(t+1) = d * I(t) + e * T(t) + f * M(t)

T(t+1) = g * T(t) + h * M(t) + j * I(t)

    Models of this form are notoriously nonlinear, the cross

    coupling of the variables often leads to chaotic behaviour,

solutions that take the form of strange attractors (example illustrated) where the trade-

offs partition state space into semi-stable niches. This is a result of feedback, the

    system interconnections form causal loops which generate semi-stable equilibria,

    areas of balance in state space, i.e. dynamic compromises. In such systems the

stability of such analyses is highly suspect and we would be advised to take great care,

    employing risk analysis methods as far as we possibly can. Little work has yet taken

    place in understanding MOP in these unstable dynamical attractor terms.
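
The Money/Time/Ideas model above is easy to iterate numerically. The coefficient values below are invented purely for illustration; depending on how the cross-coupling terms are chosen the trajectory can settle towards an equilibrium, oscillate, or run away, which is the kind of dynamic compromise behaviour being described.

```python
def step(M, I, T, coeffs):
    """One iteration of the Money (M), Ideas (I), Time (T) model from the text."""
    a, b, c, d, e, f, g, h, j = coeffs
    M_next = a * M + b * I + c * T
    I_next = d * I + e * T + f * M
    T_next = g * T + h * M + j * I
    return M_next, I_next, T_next

# Illustrative coefficients only: strong self-persistence with small mixed-sign couplings.
coeffs = (0.90, 0.10, 0.05,     # a, b, c
          0.80, 0.15, 0.05,     # d, e, f
          0.70, -0.05, 0.20)    # g, h, j

state = (1.0, 1.0, 1.0)
for _ in range(50):
    state = step(*state, coeffs)
print(tuple(round(v, 3) for v in state))
```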

    State Of The Art

    A few of the more advanced MOEA approaches used recently can be listed (there are

    many variations):

Schaffer's Vector Evaluated Genetic Algorithm (VEGA) - switching objective type

Hajela & Lin's Weighting-based Genetic Algorithm (HLGA) - parameter variation type

Fonseca & Fleming's Multiobjective Genetic Algorithm (MOGA) - Pareto-based goal interaction

Horn, Nafpliotis & Goldberg's Niched Pareto Genetic Algorithm (NPGA) - cooperative sharing


Srinivas & Deb's Nondominated Sorting Genetic Algorithm (NSGA) - fitness sharing Pareto

Coello's Min-Max Optimisation (CMMO) - ideal non-Pareto feasibility vetting

Zitzler & Thiele's Strength Pareto Evolutionary Algorithm (SPEA) - elitist external niching type

Knowles & Corne's Pareto Archived Evolution Strategy (PAES) - single parent, local search

    A sister field to MOEAs, Multiple Criteria Decision Making (MCDM), also has many

    variants - outranking, value & utility based, group negotiation, and multiple objective

    programming. In addition to approaches using firm values we can also allow fuzziness

    in the objective values and weights. This reduces the need for accurate function

    definitions, and accounts for uncertainty in real world values and for incomplete

    information. A fuzzification of MCDM also allows interdependence of objectives,

    something few of the other MCDM techniques support well. Most of this class of

    techniques don't employ GA (evolving population) techniques, i.e. they concentrate

    on mapping from objectives to fitness function, rather than on searching state space,

so a combination of FMCDM with MOEAs may well prove advantageous and initial

    work on such combinations is now starting to get underway. At the time of writing

    improvements to the techniques listed above, e.g. NSGA-II, SPEA2 and PESA-II, and

for fast operation Coello & Pulido's micro-GA, seem to be the current state of the art

    for MOP. For MOCO problems, progress has been made in combining recombination

    with local search techniques in the form of Memetic algorithms - Jaszkiewicz's

    Random Directions Multiobjective Genetic Local Search (RD-MOGLS) or Pareto

    Simulated Annealing (PSA) and Knowles & Corne's M-PAES are the current state-of-

the-art here, although another new technique, the ALife-related Ant Colony

Optimisation (ACO), a form of Swarm Intelligence, also shows promise.

    Issues of Accuracy

    Given that in 'real life' complex systems we generally do not have accurate

    measurements of our variables and that we are more interested in dynamics (the

    unknown future) rather than statics (the known present) then what use are MOEAs ?

    Applications have already been found in many engineering and organisational

    situations, where science has been able to generate stable functions linking the

    objectives. In these (usually) highly artificial systems we have a great deal of control

    over the aspects included and excluded, we specify both the parts and their

    interactions. What about more natural complex systems, like ecologies and societies,

    where these aspects are not generally under our control ? Systems that themselves

    evolve bring to the fore the soft preferences mode of DM choice, in other words we

    are likely (in such open systems) only to be able to guess at both the set of objectives


    and the interaction of their values. Experience counts for a great deal here, the larger

    the number of similar situations (historical cases) we have seen then the better our

    choices are likely to be. In nonstationary (evolving) EA systems genome diversity is

    essential, if crossover is to track the moving optima (assuming low mutation rates),

    since a converged population generates only identical genomes with crossover.

Techniques to deal with dynamic optima are still rare, but Wineberg's Shifting Balance Genetic Algorithm (SBGA) is worth a mention, as are approaches using

    Cooperative Coevolutionary Genetic Algorithms (CCGA), also called Coevolutionary

    Computation (Potter & De Jong) or symbiotic CGAs - all are ways of implementing

    continuous partial testing or life-time fitness evaluation (LTFE). New approaches,

    based upon compositional evolution, add in symbiotic and synergistic effects, e.g.

    Watson's Symbiogenetic Evolutionary Adaptation Method (SEAM).

    But even if we make good choices, those are only likely to be viable for a short time,

    we cannot assume that an optimisation exercise will give a set of solutions that can be

    applied indiscriminately to future situations. Thus a mathematical analysis can only be

    a guide, not a guarantee. As we widen the scope of our multiobjective systems to enter

    the human and cultural areas (the most difficult cases) our problems change from

    being 'well-defined' to being 'ill-defined', we no longer have firm data from which to

    plan which option to choose. This is the area of coevolutionary systems, where the

optima or attractors are transient, they do not persist long enough for us to fully

    optimise any solution. The best that we can do here is to find solutions that are an

    improvement, that go in the right direction, and continue to try to find better ones for

    as long as the problem is recognisable, i.e. we attempt to track the moving situation.

    Thus we need a technique that better fits these situations, a less rigid approach that

nevertheless can give us better information, more informed options, than current

    single objective evaluations (e.g. 'Arab' versus 'Jew'). We need a dynamic

    multiobjective mode of day-to-day reasoning, a conscious equivalent to what is

    sometimes called 'intuition' or 'wisdom', i.e. our subconscious method of parallel

    constraint satisfaction. Here however we confront the well known limitation of

    conscious thought, the 7 item limit to the complexity we can keep in short-term

    memory, the problem of potential information overload. How can we overcome this ?

    An Informal Approach

    Whilst the EMOO techniques outlined are good in more formal situations (where

    adequate resources, expertise and time are available) they clearly cannot easily be

    applied to our day-to-day requirement to make decisions within casual multivalued

    situations. We require a cut-down version of this trial and error (heuristic) technique.

    We need to do four things:

    1. Identify the objectives involved (the set of values that interest us)


2. Determine how they interact (the fitness mapping or interconnections)

3. Generate viable alternatives (the population of possible choices or niches)

4. Identify the best compromise (the Pareto-optimum) in the current context

    This more informal approach doesn't look to perform exhaustive mathematical testing,

    just tries to correctly analyse a complex dynamical situation in order to

identify any good way forward and to highlight interdependencies which may

    otherwise be overlooked (i.e. the connectivity of the complex system). In this sense it

    is a single solution at a time form of MOO, close to the minimal EMOO technique of

    (1+1)-PAES, i.e. we try to move from our current situation to a better one following a

    gradual improvement strategy (or walk) through local state space. This process

    merges standard GA and OR techniques, and is close to hybrid genetic algorithms

    (HGA), also called genetic local search (GLS) or memetic algorithms (MA). These

    produce one suggested solution at a time (often creatively combining or 'crossing-

    over' two earlier solutions), which the DM then accepts or rejects (or puts to one side

    as a possible if nothing better shows up later). In other words, for each solution

proposed we compare the effects on all the other objectives and reject solutions that

make other aspects of our situation (our Quality of Life or utility function) worse, only

    accepting Pareto dominating solutions.
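
This 'one solution at a time' procedure can be written down as a (1+1)-style local search with Pareto acceptance: propose a small change to the current situation and keep it only if it leaves no objective worse off while improving at least one. The toy objective vector and neighbourhood move below are placeholders, and dominates() is the same helper as in the Pareto section; this is a sketch of the idea, not of PAES or MOGLS themselves.

```python
import random

def dominates(a, b):
    """True if objective vector a dominates b (all objectives maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def local_pareto_search(start, objective_vector_fn, propose_neighbour, steps=2000):
    """(1+1)-style walk: accept a proposed change only if it Pareto-dominates the current one."""
    current, current_f = start, objective_vector_fn(start)
    for _ in range(steps):
        candidate = propose_neighbour(current)
        candidate_f = objective_vector_fn(candidate)
        if dominates(candidate_f, current_f):
            current, current_f = candidate, candidate_f
    return current, current_f

# Toy placeholders: two objectives, both 'get close to a target value' (purely illustrative).
objective_vector_fn = lambda x: (-abs(x[0] - 1.0), -abs(x[1] - 2.0))
propose_neighbour = lambda x: tuple(v + random.uniform(-0.1, 0.1) for v in x)
print(local_pareto_search((0.0, 0.0), objective_vector_fn, propose_neighbour))
```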

    It seems clear here that rather than one (7 item limited) solution generator/Decision

    Maker, we should attempt to gain a population of them (equivalent to one super-DM

    with a set of alternative preferences), each contributing diverse solutions (emulating

    the MOEA population). This technique is an informal dynamic Pareto version of

    MOGLS. Thus alternative preferences can then be compared and a compromise

    chosen by consensus (if a single solution is mandatory), or better still we can allow a

    diversity of equally good solutions to persist (ideally one non-dominated solution

    would then match the niche preferences of each DM). This distributed decision

    making (bottom-up) contrasts with the centralised (top-down) decision making

    common to our institutions - which appear sub-optimal by comparison, since they fail

    to search state space adequately and often impose poor local decisions on everyone.

Thus complex systems techniques echo the growing transdisciplinary approach of getting all interested parties involved, rather than relying on 'experts' focused on

    single 'context free' objectives. This 'multiple-autonomous agent' strategy does seem

    to overcome the limitation of information overload, by distributing the problem

    processing amongst many concurrent DMs.

    Experimental Lessons


    There are five axes (or dimensions) of interest when

    considering the optimisation of complex systems. First there is

    the axis of complexity, the number of objectives or decision

    variables included (e.g. air, food, water, warmth). Second there

    is the axis of variability, the values or space that each objective

can take (e.g. density, volume, temperature). These two taken together define the static objective function. Thirdly there is the

    axis of time, the dynamics of the system, how the objectives

    interact and change, their trajectories through state space (e.g. consumption, cooling).

    These first three axes are commonly used in science, but the final two are almost

ignored. The fourth is the fractal structure axis, the multi-level nature of emergence in

    both time and space (e.g. individual, society, ecology) and finally there is the axis of

    innovation, the novelty of evolutionary creativity (state space expansion). In almost

    all forms of science we treat only two or three of these axes at a time, we reduce the

    system to a subset of its real complexity. We may consider only static systems -

    ignoring time and evolution; or single aspects - ignoring most objectives;

    or specialisms - ignoring other levels; or classification - ignoring variability;

or stochastic evolution - ignoring predictability. Each of these simplifications has

its uses, but if we wish to understand complex systems as a whole then we must

    treat all of these aspects at once. Difficult as this seems, by living in the real world we

must do this daily, for better or for worse, we approximate (as best we can) based on

    the full complex context of our world. So we can ask, do our formal scientific studies

    help us in any way to do this better ?

    The many experiments that have taken place over the years in MOEAs have yielded a

    few pointers that can help our informal treatments, as well as helping to direct formal

    model building.

Retain Current Optima - don't replace what works, until a proven better solution is found (elitism)

Retain Diversity Archive - alternatives may be needed later for different/changing environments

Local Search - try gradual improvements (by 'hill climbing') from current solutions

Relative Fitness - derive relative fitness differences rather than absolute fitnesses for efficiency

Fuzzy Preferences - rank alternatives linguistically, e.g. better, much worse, rather than quantitatively

Pareto Dominance - only replace dominated solutions (don't degrade any objectives)


Innovative Randomness - try new approaches and new ideas, introduce novelty, new synergic niches

Check Sensitivity - ensure solutions are stable under the expected perturbations

Interdependencies - understand which other partial solutions (specialisms) affect the fitness of each

Dynamic Tracking - ensure solutions can coevolve with any environmental changes

    Conclusion

    The area of Multiobjective Optimisation Problems is extremely complex and

    mathematically difficult, with many under-researched areas and outstanding problems.

    There is a considerable way to go still before we have an adequate technical

    understanding of all these issues (for an indication of current progress and details of

all the algorithms mentioned herein see these references: http://www.lania.mx/~ccoello/EMOO/EMOObib.html). Yet as we gain more knowledge of complex systems, in all walks of life and areas of research, we can see

    the need to understand how objectives coevolve dynamically, and to determine the

    attractor structure (alternative optima) of such multidimensional state spaces. Outside

    the mathematical arena, our human systems relate strongly to such nonlinear

    interrelating values. Here we have the further complication of multiple levels, e.g. the

    environment, human physiology, psychology and sociology, where objectives often

have interlevel effects as well as intralevel interdependencies (e.g. socially produced

    pesticides intended for environmental purposes poison our physiology, leading to

    psychological problems and social negative feedback). This escalation in complexity

    is currently beyond our abilities to model computationally in any detail.

    As our artificial systems integrate more fully with our lives and cultures, and start to

    adapt to our needs, we must be wary of reducing the complexity of our models in

order to obtain quantitative 'solutions'. A solution to the wrong problem can be worse

    than none at all, especially if we then impose it upon the 'real world' as if it were the

    'truth'. We must be clear in specifying the assumptions we have made in formulating

    the model, since in complex systems we can never be certain that these are valid

    simplifications. Many real world situations potentially need many more objectives

    (values) to be taken into account than those few treated in much of the MOO literature

    (using as few as two objectives is very common). Attempts to employ EMOO

    techniques outside the realm of closed engineering systems (i.e. in optimising human

    and ecological issues) seem conspicuous by their absence, yet that is where the most

    difficult optimisation problems lie and where (presumably) we should concentrate our

    efforts in applying our most advanced techniques. Yet despite several orders of

    magnitude improvement in our computational ability over recent decades, the

    evaluation of complex optimisation situations is still a major limiting factor in


    decision making at a conscious level, leaving us to rely on poorly understood

    'intuition' as a fallback. An informal conscious approach is suggested that can provide

an intermediate, incremental improvement step until this evaluation formalism

    becomes more possible.