Week 6 - Heuristics Part I+II

Solution Techniques

Upload: emile-cornelissen

Post on 04-Jan-2016


Page 1: Week 6 - Heuristics Part I+II

Solution Techniques

Page 2: Week 6 - Heuristics Part I+II

Introduction

Additional resource: MIT OpenCourseWare

Page 3: Week 6 - Heuristics Part I+II

Logistics Processes

Transport

• External: procurement, distribution

• Internal: between production sites, storage locations

• Mode of transport (rail, vessel, barge, truck)

Handling

• Loading and unloading, sorting, stocking and releasing

• Combining transport modes

• Interface between internal and external transport

Consignment

• assemble items into orders

Storage

Packaging

Page 4: Week 6 - Heuristics Part I+II

Designing Transport Networks

Determine the number, location, and function of the network's nodes.

Facility location problem

p-median problem

Page 5: Week 6 - Heuristics Part I+II

Designing Transport Networks

Facility location problem

- Single-commodity, single-echelon continuous location problems. Example: the Weiszfeld heuristic.

- Single-commodity, single-echelon discrete location problems. Examples: the capacitated plant location (CPL) problem, the simple plant location (SPL) model, the p-median model.

As a rule, capacitated problems are harder than uncapacitated ones.

A Lagrangian heuristic for the capacitated plant location problem: to evaluate whether a heuristic solution is a tight upper bound (UB) on the optimal solution value, it is useful to determine a lower bound (LB) on the optimal solution value. The ratio (UB − LB)/LB is then an overestimate of the relative deviation of the heuristic solution value from the optimum.
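The Weiszfeld heuristic mentioned above is short enough to sketch directly. A minimal Python illustration for the single-facility (1-median) continuous problem, assuming planar Euclidean distances; the function name, starting point, and tolerance are illustrative choices, not part of the slide:

```python
import math

def weiszfeld(points, weights, iters=1000, tol=1e-9):
    """Iteratively approximate the weighted geometric median (1-median)."""
    wsum = sum(weights)
    # Start from the weighted centroid.
    x = sum(w * px for (px, _), w in zip(points, weights)) / wsum
    y = sum(w * py for (_, py), w in zip(points, weights)) / wsum
    for _ in range(iters):
        num_x = num_y = den = 0.0
        for (px, py), w in zip(points, weights):
            d = math.hypot(x - px, y - py)
            if d < tol:
                return (px, py)  # iterate (almost) coincides with a demand point
            num_x += w * px / d
            num_y += w * py / d
            den += w / d
        nx, ny = num_x / den, num_y / den
        if math.hypot(nx - x, ny - y) < tol:
            break
        x, y = nx, ny
    return (x, y)
```

The early return when the iterate lands on a demand point only avoids division by zero; a full treatment handles that case more carefully.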

Page 6: Week 6 - Heuristics Part I+II

Planning Transport Paths and Modes

Transportation problem

Minimum cost flow problem

Multicommodity flow problems

(Figure: a minimum cost flow network with source s, b(s) = 5, sink t, b(t) = −5, and intermediate nodes v1, v2, v3. Each arc is labeled [u, c] with its capacity u and cost c, together with its flow f.)

Page 7: Week 6 - Heuristics Part I+II

Vehicle Use and Tour Planning

Travelling salesman problem

Vehicle routing problem

Pick-up-and-delivery problem

Dial-a-ride problem:

Page 8: Week 6 - Heuristics Part I+II

Vehicle Use and Tour Planning

The Dial-a-Ride Problem (DARP) consists of designing vehicle routes and schedules for n users who specify pickup and delivery requests between origins and destinations. Very often the same user will have two requests during the same day: an outbound request from home to a destination (e.g., a hospital), and an inbound request for the return trip. In the standard version, transport is supplied by a fleet of m identical vehicles based at the same depot. The aim is to plan a set of minimum cost vehicle routes capable of accommodating as many requests as possible, under a set of constraints. The most common example arises in door-to-door transportation services for elderly or disabled people. From a modeling point of view, the DARP generalizes a number of vehicle routing problems such as the Pickup and Delivery Vehicle Routing Problem (PDVRP) and the Vehicle Routing Problem with Time Windows (VRPTW). What makes the DARP different from most such routing problems is the human perspective. Cordeau, J. F., & Laporte, G. (2007). The dial-a-ride problem: models and algorithms. Annals of Operations Research, 153(1), 29–46.

Pickup and delivery problems constitute an important class of vehicle routing problems in which objects or people have to be collected and distributed. Berbeglia, G., Cordeau, J. F., Gribkovskaia, I., & Laporte, G. (2007). Static pickup and delivery problems: a classification scheme and survey. Top, 15(1), 1-31.

Page 9: Week 6 - Heuristics Part I+II

Heuristics

Page 10: Week 6 - Heuristics Part I+II

Heuristics

1 Local Search and Metaheuristics

• Local Search

• Variable Neighborhood Search

• Evolutionary Algorithms

2 Capacitated Vehicle Routing

• Construction Heuristics

• Two-Phase Heuristics

• Improvement Heuristics

• Evolutionary Algorithm

3 Vehicle Routing Problem with Time Windows (VRPTW)

• Construction Heuristics

• Improvement Heuristics

• Metaheuristics

4 Pickup and Delivery Problems (PDP)

• Tabu Search

Page 11: Week 6 - Heuristics Part I+II

Why Do We Need Heuristics?

It is often very hard to solve practical optimization problems. Exact optimization methods can require a huge amount of time and memory. It might even be very hard to find a feasible solution.

Heuristics are often able to determine good solutions for difficult problems.

What are the drawbacks?

• No optimality guarantee.

• No performance guarantee.

Page 12: Week 6 - Heuristics Part I+II

Heuristics

One usually distinguishes between:

Construction heuristics: usually problem specific, fast, and simple.

• Build a solution from scratch (starting with nothing).

• Often called "greedy heuristics": each step looks good, but there is no look-ahead.

Improvement heuristics: run on top of construction heuristics (they require some initial solution) and try to improve the solution stepwise with small changes, possibly based on problem-independent principles (neighborhood search): start with a solution, then try to improve it, usually by making small changes to the current solution.

Page 13: Week 6 - Heuristics Part I+II

Metaheuristics

Improvement heuristics usually result in local optima.

Hard problems often have several local optima that are not

globally optimal.

Page 14: Week 6 - Heuristics Part I+II

Metaheuristics

Metaheuristics are methods able to escape local optima.

Some Metaheuristics:

• Simulated Annealing

• Variable Neighbourhood Search

• Very Large Neighbourhood Search

• Tabu Search

• Ant Colony Optimization

• Genetic Algorithms

Page 15: Week 6 - Heuristics Part I+II

Simple Local Search

Local Search

begin
  x ← initial solution;
  repeat
    choose an x′ ∈ N(x);
    if f(x′) ≤ f(x) then x ← x′;
  until termination criterion met;
end

N(x): neighborhood of x

(Assuming a minimization problem.)
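The local search scheme above can be sketched in Python. A minimal, illustrative version using the random-neighbor step function and an iteration budget as termination criterion; the toy problem and all names are assumptions for illustration only:

```python
import random

def local_search(x0, f, neighbors, max_iters=10_000, seed=0):
    """Generic local search: accept a random neighbor if it is no worse."""
    rng = random.Random(seed)
    x = x0
    for _ in range(max_iters):          # termination criterion: iteration budget
        xp = rng.choice(neighbors(x))   # choose an x' in N(x)
        if f(xp) <= f(x):               # accept if not worse (minimization)
            x = xp
    return x

# Toy problem: minimize f(x) = (x - 7)^2 over the integers,
# with neighborhood N(x) = {x - 1, x + 1}.
best = local_search(0, lambda x: (x - 7) ** 2, lambda x: [x - 1, x + 1])
```

On this toy problem every move away from 7 is rejected, so the search walks to the (here also global) optimum x = 7 and stays there.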

Page 16: Week 6 - Heuristics Part I+II

Components of Local Search

Definition of solution representation

Definition of objective function f (x )

Initial solution

Neighborhood structure, i.e. what are neighboring solutions?

Step function, i.e. in what order is the neighborhood searched?

Termination criterion

Page 17: Week 6 - Heuristics Part I+II

Neighborhood Structure

Definition (Neighborhood Structure) A neighborhood structure is a function N : S → 2S , assigning

to every solution x ∈ S a set of neighbors N(x ) ⊆ S.

N(x ) is also called neighborhood (of x ).

Neighborhoods are usually defined implicitly by possible changes

(moves).

Tradeoff between neighborhood size and required search effort.

Page 18: Week 6 - Heuristics Part I+II

Step Function (Selection of x′)

Random neighbor: randomly select a neighbor from N(x).

Next improvement: search N(x) in a fixed order and use the first improving solution.

Best improvement: search N(x) completely and use the best neighboring solution.

This choice can (and will) have an influence on the results; unfortunately, no strategy dominates the others.

Page 19: Week 6 - Heuristics Part I+II

Variable Neighbourhood Search

(Hansen and Mladenović, 1997)

This method is based on the following properties:

1. A local optimum for one neighborhood structure is not necessarily a local optimum for another one.

2. A global optimum is optimal for all neighborhood structures.

3. For many problems, local optima are often close to each other.

Page 20: Week 6 - Heuristics Part I+II

Variable Neighbourhood Descent (VND)

Variable Neighbourhood Descent(x)

begin
  l ← 1;
  repeat
    find an x′ ∈ N_l(x) with f(x′) ≤ f(x″) for all x″ ∈ N_l(x);
    if f(x′) < f(x) then
      x ← x′;
      l ← 1;
    else
      l ← l + 1;
  until l > l_max;
  return x;
end
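The VND scheme above might be sketched in Python as follows; the toy problem and the two step-size neighborhoods are illustrative assumptions:

```python
def vnd(x, f, neighborhoods):
    """Variable Neighbourhood Descent over an ordered list of neighborhood
    functions N_1, ..., N_lmax (each maps x to a list of neighbors)."""
    l = 0
    while l < len(neighborhoods):
        xp = min(neighborhoods[l](x), key=f)  # best solution in N_l(x)
        if f(xp) < f(x):
            x = xp
            l = 0      # improvement found: restart with the first neighborhood
        else:
            l += 1     # no improvement: move on to the next neighborhood
    return x

# Toy problem: minimize f(x) = (x - 5)^2 with step-1 and step-3 neighborhoods.
best = vnd(0, lambda x: (x - 5) ** 2,
           [lambda x: [x - 1, x + 1], lambda x: [x - 3, x + 3]])
```

The search stops only when x is locally optimal for every neighborhood structure in the list, which is exactly property 2 above.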

Page 21: Week 6 - Heuristics Part I+II

Traveling Salesperson Problem (TSP)

• Find the shortest distance tour passing through each node of the network exactly once.

• cij = distance from i to j.

Page 22: Week 6 - Heuristics Part I+II

Try to find the best tour.

Page 23: Week 6 - Heuristics Part I+II

On solving hard problems

How did you select your tour?

– it relied on visualization

– perhaps you took a holistic view

How can we develop computer heuristics for

solving hard problems?

Page 24: Week 6 - Heuristics Part I+II

The TSP is a hard problem

NP-hard (its decision version is NP-complete)

Have you seen NP-hardness or NP-completeness before?

1. Yes.

2. No.

Page 25: Week 6 - Heuristics Part I+II

The TSP is a hard problem

There is no known polynomial time algorithm: we cannot bound the running time as less than n^k for any fixed integer k (say, k = 15).

If there were a polynomial time algorithm, there

would be a polynomial time algorithm for every

NP-complete problem.

Question: what does one do with a hard

problem?

Page 26: Week 6 - Heuristics Part I+II

An easy construction heuristic:

Nearest unvisited neighbor
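This construction rule translates directly into code. A Python sketch assuming planar Euclidean distances; the function and variable names are illustrative:

```python
import math

def nearest_neighbor_tour(coords, start=0):
    """Construct a TSP tour by repeatedly visiting the nearest unvisited city."""
    dist = lambda i, j: math.dist(coords[i], coords[j])
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist(tour[-1], j))  # greedy step
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

Each step looks good locally, but there is no look-ahead: the last city visited may end up far from the start.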

Page 27: Week 6 - Heuristics Part I+II

Can we do better in construction?

Exercise: try to develop a heuristic in which we add one city at a time, but the next city can be added anywhere in the tour (not just at the beginning or the end).

– Below is the beginning part of a tour.

Page 28: Week 6 - Heuristics Part I+II

Courtesy of Stephan Mertens.

Page 29: Week 6 - Heuristics Part I+II

Which do you believe will give shorter length tours:

1. The nearest neighbor heuristic

2. An insertion based heuristic

Page 30: Week 6 - Heuristics Part I+II

Facility location problems.

Choose K facilities so as to minimize total

distance from customers to their closest facility.

• example with three facilities

Page 31: Week 6 - Heuristics Part I+II

Exercise: try developing a good

solution where there are 2 facilities

Page 32: Week 6 - Heuristics Part I+II

Exercise: Develop a construction heuristic

for the facility location problem

Data: locations in a city.

cij = distance from i to j

Page 33: Week 6 - Heuristics Part I+II

Improvement heuristics

Improvement heuristics start with a feasible

solution and look for an improved solution that

can be found by making a very small number of

changes.

Two TSP tours are called 2-adjacent if one can be

obtained from the other by deleting two edges

and adding two edges.

Page 34: Week 6 - Heuristics Part I+II

2-opt neighborhood search

A TSP tour T is called 2-optimal if there is no 2-adjacent tour to T with lower cost than T.

2-opt heuristic. Look for a 2-adjacent tour with

lower cost than the current tour. If one is found,

then it replaces the current tour. This continues

until there is a 2-optimal tour.
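The 2-opt heuristic described above might be sketched in Python as follows. This illustrative version uses the first-improvement step function; a 2-exchange is applied by reversing the tour segment between the two deleted edges:

```python
def two_opt(tour, dist):
    """2-opt: repeatedly apply improving 2-exchanges until the tour is
    2-optimal. `dist(i, j)` gives the distance between cities i and j."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # the two tour-closing edges: reversal changes nothing
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                # Replace edges (a,b) and (c,e) by (a,c) and (b,e):
                if dist(a, c) + dist(b, e) < dist(a, b) + dist(c, e):
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```

Starting from a tour whose edges cross, the reversal uncrosses them, exactly as in the 2-exchange figures that follow.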

Page 35: Week 6 - Heuristics Part I+II

An improvement heuristic: 2 exchanges.

Look for an improvement obtained by deleting

two edges and adding two edges.

(Figure: ten numbered nodes in the plane.)

Tour T: 1 → 3 → 2 → 4 → 5 → 6 → 9 → 10 → 8 → 7 → 1

Page 36: Week 6 - Heuristics Part I+II

After the two exchange

(Figure: the tours before and after the exchange.)

Tour T1: 1 → 3 → 2 → 4 → 5 → 6 → 9 → 10 → 8 → 7 → 1

Tour T2: 1 → 3 → 2 → 4 → 7 → 8 → 10 → 9 → 6 → 5 → 1

Deleting arcs (1,7) and (4, 5) flips the subpath from node 7 to node 5.

Page 37: Week 6 - Heuristics Part I+II

After the two exchange

(Figure: the tours before and after the exchange.)

Tour T2: 1 → 3 → 2 → 4 → 7 → 8 → 10 → 9 → 6 → 5 → 1

Tour T3: 1 → 2 → 3 → 4 → 7 → 8 → 10 → 9 → 6 → 5 → 1

Deleting arcs (1,3) and (2, 4) flips the subpath from 3 to 2.

Page 38: Week 6 - Heuristics Part I+II

After the final improving 2-exchange

(Figure: the tours before and after the exchange.)

Tour T3: 1 → 2 → 3 → 4 → 7 → 8 → 10 → 9 → 6 → 5 → 1

Tour T4: 1 → 2 → 3 → 4 → 7 → 10 → 8 → 9 → 6 → 5 → 1

Deleting arcs (7,8) and (10, 9) flips the subpath from 8 to 10.

Page 39: Week 6 - Heuristics Part I+II

2-exchange heuristic (also called 2-opt)

Page 40: Week 6 - Heuristics Part I+II

3-opt neighborhood

Two TSP tours are called 3-adjacent if one can be obtained from the other by deleting three edges and adding three edges.

A TSP tour T is called 3-optimal if there is no 3-adjacent tour to T with lower cost than T.

3-opt heuristic. Look for a 3-adjacent tour with lower cost than the current tour. If one is found, then it replaces the current tour. This continues until there is a 3-optimal tour.

Page 41: Week 6 - Heuristics Part I+II

On Improvement Heuristics

Improvement heuristics are based on searching a "neighborhood". Let N(T) be the neighborhood of tour T. In this case, N(T) consists of all tours that can be obtained from T by deleting two arcs and inserting two arcs.

Improvement heuristic:

  start with tour T;
  if there is a tour T′ ∈ N(T) with c(T′) < c(T), then replace T by T′ and repeat;
  otherwise, quit with a locally optimal solution.

Page 42: Week 6 - Heuristics Part I+II

How good are improvement heuristics?

(Figure: error bound vs. normalized running time in seconds for several TSP heuristics, including 2-opt, 3-opt, and FI = farthest insertion.)

The nearest neighbor algorithm will often keep its tours within 25% of the Held-Karp lower bound. Running the 2-opt heuristic will often result in a tour with a length less than 5% above the Held-Karp bound. The improvements of a 3-opt heuristic will usually give a tour about 3% above the Held-Karp bound. Implementers had to be very clever to achieve these running times.

Selecting an approximation algorithm for the TSP involves several choices: do we need a solution with less than 1% excess over the Held-Karp bound, or do we settle for 4%? The difference in running time can be substantial.

Page 43: Week 6 - Heuristics Part I+II

Facility location problems.

Class exercise. Suppose we want to solve a

facility location problem with 3 facilities. Design a

neighborhood search heuristic.

Page 44: Week 6 - Heuristics Part I+II

Using Randomization

An important idea in algorithm development:

randomization

Randomization in neighborhood improvement: a

way of using the same approach multiple times

and getting different answers. (Then choose the

best).

Simulated Annealing: randomization in order to

have an approach that is more likely to converge

to a good solution

Page 45: Week 6 - Heuristics Part I+II

On the use of randomization

Remark: 2-exchanges will behave differently

depending on the starting solution.

Randomization based heuristic:

Start with a random tour

use the 2-exchange neighborhood until obtaining

a local optimum.

Page 46: Week 6 - Heuristics Part I+II

One difficulty: random tours are terrible.

Page 47: Week 6 - Heuristics Part I+II

2-opt heuristic starting from a random

solution.

Page 48: Week 6 - Heuristics Part I+II

Another use of randomization

Replace the nearest neighbor rule with the following: at each iteration, visit the closest, second closest, or third closest unvisited neighbor, choosing each with probability 1/3.

This generates a random tour that is "pretty good" and may be a better starting point than a totally random tour.
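This randomized construction rule might be sketched as follows (a Python illustration assuming planar Euclidean distances; the parameter k = 3 reproduces the 1/3 probabilities above):

```python
import math
import random

def randomized_nn_tour(coords, k=3, seed=None):
    """Nearest-neighbor construction, but at each step pick uniformly among
    the k closest unvisited cities (k = 3 gives each 1/3 probability)."""
    rng = random.Random(seed)
    dist = lambda i, j: math.dist(coords[i], coords[j])
    start = rng.randrange(len(coords))
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        # The k closest unvisited cities (fewer near the end of the tour).
        candidates = sorted(unvisited, key=lambda j: dist(tour[-1], j))[:k]
        tour.append(rng.choice(candidates))
        unvisited.remove(tour[-1])
    return tour
```

Running this several times with different seeds and keeping the best result is exactly the "same approach multiple times, different answers" idea from the previous slide.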

Page 49: Week 6 - Heuristics Part I+II

Other approaches to heuristics

The metaphor based approach to the design of

heuristics

– simulated annealing

– genetic algorithms

– neural networks

– ant swarming

That is, look for something that seems to work

well in nature, and then try to simplify it so that it

is practical and helps solve optimization

problems.

Page 50: Week 6 - Heuristics Part I+II

Evolutionary Algorithms

Computer-based problem solving systems using computable

models of natural, evolutionary processes as key elements

Charles Darwin, 1859: On the Origin of Species

Natural inheritance with changes (mutations)

Selection (German: Auslese)

Gregor Johann Mendel, 1822–1884:

Mendelian laws (inheritance)

Page 51: Week 6 - Heuristics Part I+II

Basic structure of EAs

Evolutionary Algorithms

begin
  P ← set of initial solutions;
  Evaluate(P);
  repeat
    Q ← SelectSolutionsForVariation(P);
    Q ← GenerateNewSolutionsByVariation(Q);
    Evaluate(Q);
    P ← SelectSurvivingSolutions(P, Q);
  until termination condition;
end
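The basic EA structure above might look as follows in Python, instantiated for the toy OneMax problem (maximize the number of 1-bits in a bit string). Tournament selection, one-point crossover, bit-flip mutation, and elitist survivor selection are illustrative choices, not prescribed by the slide:

```python
import random

def evolutionary_algorithm(fitness, length, pop_size=20, generations=100, seed=0):
    """EA skeleton: evaluate, select for variation, vary, select survivors."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection for variation: binary tournaments (maximization).
        parents = [max(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        # Variation: one-point crossover followed by bit-flip mutation.
        offspring = []
        for p1, p2 in zip(parents[::2], parents[1::2]):
            cut = rng.randrange(1, length)
            for child in (p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]):
                child = [b ^ (rng.random() < 1 / length) for b in child]
                offspring.append(child)
        # Survivor selection: keep the best of parents and offspring (elitist).
        pop = sorted(pop + offspring, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

# OneMax: maximize the number of 1-bits in a string of 20 bits.
best = evolutionary_algorithm(sum, 20)
```

Note that chance enters in all three steps named on the next slide: initialisation, variation, and selection.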

Page 52: Week 6 - Heuristics Part I+II

Evolutionary Algorithms

Instead of a single solution, a whole population of candidate solutions is considered.

Chance plays an essential role in the main algorithmic steps:

initialisation, variation and selection.

The solution representation is a central aspect of EAs.

The initialisation is usually done by randomized construction

heuristics.

Variation operators are recombination and mutation.

Page 53: Week 6 - Heuristics Part I+II

Evolutionary Algorithms: Representation

Potential solutions have to be represented by a data structure.

Genotype: encoded form of a solution (=chromosome)

Phenotype: decoded form of a solution;

the genotype can be understood as the specification of the

phenotype.

The genotype can be a binary string or a permutation.

In the case of direct representation the genotype corresponds

exactly to the phenotype.

The phenotype has to be computed from the genotype using a

special decoding algorithm in the case of indirect representation.
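One common indirect representation (not named on the slide, added here as an illustration) is random keys: the genotype is a vector of floats, and the decoding algorithm sorts positions by key value to obtain a permutation phenotype. A minimal sketch:

```python
def decode_random_keys(genotype):
    """Decode a random-key genotype (a list of floats) into a permutation
    phenotype: position i precedes position j if its key is smaller."""
    return sorted(range(len(genotype)), key=lambda i: genotype[i])

# The smallest key (0.1 at position 1) comes first in the permutation.
perm = decode_random_keys([0.4, 0.1, 0.9, 0.3])  # -> [1, 3, 0, 2]
```

This indirection lets standard real-valued variation operators produce valid permutations, e.g. for tour-based problems.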

Page 54: Week 6 - Heuristics Part I+II

Evolutionary Algorithms: Selection

Usually parents are randomly selected from one generation to the next.

Better individuals are selected with higher probability than worse

ones.

Selection drives the EA towards better solutions.

Fitness proportional selection (Roulette Wheel)

Rank selection

Tournament selection

Page 55: Week 6 - Heuristics Part I+II

Evolutionary Algorithms: Variation Operators

Recombination: generates new individuals from selected

parents.

A new individual should possibly be built up from attributes

appearing in the parental chromosomes (heredity).

Mutation: small random changes

Permits introducing new or lost genetic material into the population.

One-point crossover (cut after position 3):

Parent 1: 1 1 1 1 1 1 1 1 → Child 1: 1 1 1 0 0 0 0 0

Parent 2: 0 0 0 0 0 0 0 0 → Child 2: 0 0 0 1 1 1 1 1
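The bit strings above illustrate one-point crossover of the parents 11111111 and 00000000 at cut position 3. As a sketch:

```python
def one_point_crossover(p1, p2, cut):
    """Exchange the tails of two equal-length parent bit strings after `cut`."""
    return p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]

# Reproduce the slide's example: cut after position 3.
c1, c2 = one_point_crossover("11111111", "00000000", 3)
# c1 == "11100000", c2 == "00011111"
```

In an EA the cut position is drawn at random for each recombination.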

Page 56: Week 6 - Heuristics Part I+II

Hybrid Algorithms

If there exists problem-specific knowledge, it might be useful to combine an EA with local optimization methods.

EA: global view, discovers "hills"

Local search: hill climbing, finds the "tops of hills"

Exact methods can also be combined with metaheuristics.

Page 57: Week 6 - Heuristics Part I+II

More details

VU Heuristic Optimization Techniques

M. Gendreau and J.-Y. Potvin (Eds.), Handbook of Metaheuristics. International Series in Operations Research & Management Science, Vol. 146. Springer, 2nd edition, 2010.