

Chapter II

Linear Programming

1. Introduction

2. Simplex Method

3. Duality Theory

4. Optimality Conditions

5. Applications (QP & SLP)

6. Sensitivity Analysis

7. Interior Point Methods


1 Introduction

Linear programming became important during World War II :

=⇒ used to solve logistics problems for the military.

Linear Programming (LP) was the first widely used form of optimization in the process industries and is still the dominant form today.

Linear Programming has been used to :

→ schedule production,
→ choose feedstocks,
→ determine new product feasibility,
→ handle constraints for process controllers, . . .

There is still a considerable amount of research taking place in this area today.


All Linear Programs can be written in the form :

min_{x_i}  c_1 x_1 + c_2 x_2 + c_3 x_3 + . . . + c_n x_n

subject to :

a_11 x_1 + a_12 x_2 + a_13 x_3 + . . . + a_1n x_n ≤ b_1
a_21 x_1 + a_22 x_2 + a_23 x_3 + . . . + a_2n x_n ≤ b_2
. . .
a_m1 x_1 + a_m2 x_2 + a_m3 x_3 + . . . + a_mn x_n ≤ b_m

0 ≤ x_i ≤ x_{l,i},   b_i ≥ 0

=⇒ note that all of the functions in this optimization problem are linear in the variables x_i.


In mathematical short-hand this problem can be re-written :

min_x  c^T x

subject to :

Ax ≤ b
0 ≤ x ≤ x_l
b ≥ 0

Note that :

1. the objective function is linear

2. the constraints are linear

3. the variables are defined as non-negative

4. all elements of b are non-negative.


Example

A market gardener has a small plot of land she wishes to use to plant cabbages and tomatoes. From past experience she knows that her garden will yield about 1.5 tons of cabbage per acre or 2 tons of tomatoes per acre. Her experience also tells her that cabbages require 20 lbs of fertilizer per acre, whereas tomatoes require 60 lbs of fertilizer per acre. She expects that her efforts will produce $600/ton of cabbages and $750/ton of tomatoes.

Unfortunately, the truck she uses to take her produce to market is getting old, so she doesn't think that it will transport any more than 3 tons of produce to market at harvest time. She has also found that she only has 60 lbs of fertilizer.

Question : What combination of cabbages and tomatoes should she plant to maximize her profits ?


The optimization problem is (cabbages earn 1.5 tons/acre × $600/ton = $900/acre ; tomatoes earn 2 tons/acre × $750/ton = $1500/acre) :

max_{c,t}  900c + 1500t

subject to :

1.5c + 2t ≤ 3      (trucking limit, tons)
20c + 60t ≤ 60     (fertilizer limit, lbs)
c, t ≥ 0

In the general matrix LP form :

min_x  −[900  1500] x

subject to :

[ 1.5    2 ]        [  3 ]
[  20   60 ] x  ≤   [ 60 ],    x ≥ 0

where x = [acres cabbages  acres tomatoes]^T.


We can solve this problem graphically :

We know that for well-posed LP problems, the solution lies at a vertex of the feasible region. So all we have to do is try all of the constraint intersection points.


Check each vertex :

c      t      P(x)
0      0         0
2      0      1800
0      1      1500
6/5    3/5    1980

So our market gardener should plant 1.2 acres in cabbages and 0.6 acres in tomatoes. She will then make $1980 in profit.

This graphical procedure is adequate when the optimization problem is simple. For optimization problems which include more variables and constraints this approach becomes impractical.
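For larger problems we hand the same LP to a solver. As an illustration, here is a minimal sketch assuming SciPy is available ; its linprog routine minimizes, so the profit coefficients are negated :

    # market gardener LP solved numerically (illustrative sketch)
    from scipy.optimize import linprog

    c = [-900, -1500]                  # $/acre profits, negated for minimization
    A_ub = [[1.5, 2.0],                # trucking: 1.5c + 2t <= 3
            [20.0, 60.0]]              # fertilizer: 20c + 60t <= 60
    b_ub = [3.0, 60.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 2,
                  method="highs")
    print(res.x, -res.fun)             # -> [1.2 0.6] 1980.0

This reproduces the vertex found graphically : 1.2 acres of cabbages, 0.6 acres of tomatoes, $1980 profit.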


Degeneracy in Linear Programming

Our example was well-posed, which means there was a unique optimum. It is possible to formulate LP problems which are ill-posed or degenerate. There are three main types of degeneracy which must be avoided in Linear Programming :

1. unbounded solutions,
   −→ not enough independent constraints.

2. no feasible region,
   −→ too many or conflicting constraints.

3. non-unique solution,
   −→ too many solutions,
   −→ dependence between profit function and some constraints.


1. Unbounded Solution :

As we move up either the x_1-axis or the constraint, the objective function improves without limit.

In industrial-scale problems with many variables and constraints, this situation can arise when :

−→ there are too few constraints.
−→ there is linear dependence among the constraints.


2. No Feasible Region :

Any value of x violates some set of constraints.

This situation occurs most often in industrial-scale LP problems when too many constraints are specified and some of them conflict.


3. Non-Unique Solution :

The profit contour is parallel to a constraint.


2 Simplex Method

Our market gardener example had the form :

min_x  −[900  1500] x

subject to :

[ 1.5    2 ]        [  3 ]
[  20   60 ] x  ≤   [ 60 ]

where x = [acres cabbages  acres tomatoes]^T.

We need a more systematic approach to solving these problems, particularly when there are many variables and constraints.

−→ SIMPLEX method (Dantzig).
−→ always move to a vertex which improves the value of the objective function.


SIMPLEX Algorithm

1. Convert problem into standard form with :
   → positive right-hand sides,
   → lower bounds of zero.

2. Introduce slack/surplus variables :
   → change all inequality constraints to equality constraints.

3. Define an initial feasible basis :
   → choose a starting set of variable values which satisfy all of the constraints.

4. Determine a new basis which improves the objective function :
   → select a new vertex with a better value of P(x).

5. Transform the equations :
   → perform row reduction on the equation set.

6. Repeat steps 4 & 5 until no more improvement in the objective function is possible. (A compact implementation sketch of these steps follows below.)
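The steps translate almost line-for-line into code. The following is our own compact sketch in Python/numpy, for problems already in the form min c^T x subject to Ax ≤ b, x ≥ 0 with b ≥ 0, so that the slacks give the initial feasible basis :

    import numpy as np

    def simplex(c, A, b):
        """Tableau SIMPLEX sketch: min c.x  s.t.  A x <= b, x >= 0, b >= 0."""
        m, n = A.shape
        # steps 2 & 3: append slack variables; the slacks form the initial basis
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
        T[-1, :n] = c                        # objective (reduced-cost) row
        basis = list(range(n, n + m))
        while True:
            # step 4: bring in a variable with a negative weight
            j = int(np.argmin(T[-1, :-1]))
            if T[-1, j] >= -1e-12:
                break                        # step 6: no negative weights remain
            ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                      for i in range(m)]     # ratio test b_i / a_ij
            i = int(np.argmin(ratios))
            if not np.isfinite(ratios[i]):
                raise ValueError("problem is unbounded")
            # step 5: row reduction makes the pivot column a unit vector
            T[i] /= T[i, j]
            for k in range(m + 1):
                if k != i:
                    T[k] -= T[k, j] * T[i]
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], -T[-1, -1]             # corner stores -(objective value)

    x, obj = simplex(np.array([-900.0, -1500.0]),
                     np.array([[1.5, 2.0], [20.0, 60.0]]),
                     np.array([3.0, 60.0]))
    print(x, -obj)                           # -> [1.2 0.6] 1980.0

On the market gardener data this reproduces the hand iteration worked out below : tomatoes enter the basis first, then cabbages.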


The problem was written :

max_{c,t}  900c + 1500t

subject to :

1.5c + 2t ≤ 3
20c + 60t ≤ 60

1. In standard form, the problem is

min_x  −[900  1500] x

subject to :

[ 1.5    2 ]        [  3 ]
[  20   60 ] x  ≤   [ 60 ]

where x = [c  t]^T.


2. Introduce slack variables (or convert all inequality constraints to equality constraints) :

min_{c,t}  −900c − 1500t

subject to :

1.5c + 2t + s_1 = 3
20c + 60t + s_2 = 60
c, t, s_1, s_2 ≥ 0

or in matrix form :

min_x  −[900  1500  0  0] x

subject to :

[ 1.5    2   1   0 ]        [  3 ]
[  20   60   0   1 ] x  =   [ 60 ]

where x = [c  t  s_1  s_2]^T ≥ 0.


3. Define an initial basis and set up tableau :

→ choose the slacks as initial basic variables,

→ this is equivalent to starting at the origin.

the initial tableau is :

basis      c        t      s_1    s_2       b
 s_1      1.5       2       1      0        3
 s_2      20       60       0      1       60
  P      −900    −1500      0      0        0


4. Determine the new basis :

→ examine the objective function coefficients and choose a variable with a negative weight. (You want to decrease the objective function because you are minimizing. Usually we will choose the most negative weight, but this is not necessary.)

→ this variable will be brought into the basis.

→ divide each element of b by the corresponding constraint coefficient of the new basic variable.

→ the variable which will be removed from the basis is in the pivot row (given by the smallest positive ratio of b_i/a_ij).


5. Transform the constraint equations :

→ perform row reduction on the constraint equations to make the pivot element 1 and all other elements of the pivot column 0.

i)   new row #2 = row #2 / 60
ii)  new row #1 = row #1 − 2 × (new row #2)
iii) new row #3 = row #3 + 1500 × (new row #2)
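These row operations are easy to verify numerically. A quick sketch with numpy, starting from the initial tableau above (our own check, not part of the original notes) :

    import numpy as np

    T = np.array([[   1.5,     2.0, 1.0, 0.0,  3.0],    # s1 (trucking) row
                  [  20.0,    60.0, 0.0, 1.0, 60.0],    # s2 (fertilizer) row
                  [-900.0, -1500.0, 0.0, 0.0,  0.0]])   # objective row

    T[1] = T[1] / 60.0            # i)   make the pivot element equal to 1
    T[0] = T[0] - 2.0 * T[1]      # ii)  clear t from the trucking row
    T[2] = T[2] + 1500.0 * T[1]   # iii) clear t from the objective row
    print(T[2])                   # -> [-400. 0. 0. 25. 1500.]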


6. Repeat steps 4 & 5 until no improvement is possible.


the new tableau is :

basis     c      t      s_1      s_2        b
  c       1      0      1.2    −0.04      6/5
  t       0      1     −0.4     0.03      3/5
  P       0      0      480        9     1980

no further improvements are possible, since there are no more negative coefficients in the bottom row of the tableau.


This is the final tableau :

– the optimal basis contains both cabbages and tomatoes (c, t),
– the optimal acreage is 6/5 acres of cabbages and 3/5 acres of tomatoes,
– the maximum profit is $1980.


Recall that we could graph the market garden problem.

We can track how the SIMPLEX algorithm moved through the feasible region. The SIMPLEX algorithm started at the origin. Then it moved along the tomato axis (this was equivalent to introducing tomatoes into the basis) to the fertilizer constraint. The algorithm then moved up the fertilizer constraint until it found the intersection with the trucking constraint, which is the optimal solution.


Some other considerations

1. Negative lower bounds on variables :

– the SIMPLEX method requires all variables to have non-negative lower bounds (i.e. x ≥ 0).

– problems can have variables for which negative values are physically meaningful (e.g. intermediate cross-flows between parallel process units).

– introduce artificial variables.

If we have bounds on the variable x_1 such that a ≤ x_1 ≤ b, then we introduce two new artificial variables (e.g. x_5, x_6) where :

x_1 = x_6 − x_5

and we can require x_5 ≥ 0 and x_6 ≥ 0 while satisfying the bounds on x_1.


2. Mixed constraints :

– until this point we have always been able to formulate our constraint sets using only ≤ constraints (with positive right-hand sides).

– how can we handle constraint sets that also include ≥ constraints (with positive right-hand sides) ?

– introduce surplus variables.

Suppose we have the constraint :

[a_i1 . . . a_in] x ≥ b_i

Then we can introduce a surplus variable s_i to form an equality constraint by setting :

a_i1 x_1 + . . . + a_in x_n − s_i = b_i


We have a problem if we start with an initial feasible basis such that x = 0 (i.e. starting at the origin). Then :

s_i = −b_i

which violates the non-negativity constraint on all variables (including slack / surplus variables).

We need a way of finding a feasible starting point for these situations.

3. Origin not in the feasible region

The SIMPLEX algorithm requires a feasible starting point. We have been starting at the origin, but in this case the origin is not in the feasible region. There are several different approaches :

−→ Big M method (Murty, 1983).


Big M Method

Consider the minimization problem with mixed constraints :

min_x  x_1 + 2x_2

subject to :

3x_1 + 4x_2 ≥ 5
x_1 + 2x_3 = 3
x_1 + x_2 ≤ 4

1. As well as the usual slack / surplus variables, introduce a separate artificial variable a_i into each equality and ≥ constraint :

3x_1 + 4x_2 − s_1 + a_1 = 5
x_1 + 2x_3 + a_2 = 3
x_1 + x_2 + s_2 = 4


2. Augment the objective function by penalizing non-zero values of the artificial variables :

P(x) = c^T x + Σ_{i=1}^{m} M a_i

Choose a large value for M (say 10 times the magnitude of the largest coefficient in the objective function). For the example minimization problem :

P(x) = x_1 + 2x_2 + 20a_1 + 20a_2


3. Set up the tableau as usual. The initial basis will include all of the artificial variables and the required slack variables :

basis     x_1    x_2    x_3    s_1    s_2    a_1    a_2      b
 a_1       3      4      0     −1      0      1      0       5
 a_2       1      0      2      0      0      0      1       3
 s_2       1      1      0      0      1      0      0       4
  P        1      2      0      0      0     20     20       0


4. Phase 1 : modify the objective row in the tableau to reduce the coefficients of the artificial variables to zero. Here this means subtracting 20 × (row a_1) and 20 × (row a_2) from the objective row.


5. Phase 2 : Solve the problem as before, starting from the final tableau from Phase 1.


The final tableau yields the optimal solution x∗ = (5/3, 0, 2/3), with P(x∗) = 5/3.
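As a cross-check, the same mixed-constraint problem can be handed directly to an off-the-shelf solver (a sketch using scipy.optimize.linprog, which accepts ≥, = and ≤ constraints itself, so no artificial variables are needed) :

    from scipy.optimize import linprog

    c = [1, 2, 0]                      # objective: x1 + 2*x2 (x3 has zero cost)
    A_ub = [[-3, -4, 0],               # 3*x1 + 4*x2 >= 5, rewritten as <=
            [ 1,  1, 0]]               # x1 + x2 <= 4
    b_ub = [-5, 4]
    A_eq = [[1, 0, 2]]                 # x1 + 2*x3 = 3
    b_eq = [3]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3, method="highs")
    print(res.x, res.fun)              # -> [1.667 0. 0.667] 1.667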


Simplex Algorithm (Matrix Form)

To develop the matrix form of the Linear Programming problem :

min_x  c^T x

subject to :

Ax ≤ b
x ≥ 0

we introduce the slack variables and partition x, A and c as follows :

x = [ x_B ]
    [ −−− ]         A = [ B | N ]
    [ x_N ]


c = [ c_B ]
    [ −−− ]
    [ c_N ]

Then, the Linear Programming problem becomes :

min_x  c_B^T x_B + c_N^T x_N

subject to :

B x_B + N x_N = b
x_B, x_N ≥ 0

Feasible values of the basic variables (x_B) can be defined in terms of the values for the non-basic variables (x_N) :

x_B = B^{-1}[b − N x_N]


The value of the objective function is given by :

P(x) = c_B^T B^{-1}[b − N x_N] + c_N^T x_N

or

P(x) = c_B^T B^{-1} b + [c_N^T − c_B^T B^{-1} N] x_N

Then, the tableau we used to solve these problems can be represented as :

          x_B^T        x_N^T                            b
x_B         I         B^{-1}N                        B^{-1}b
            0     −[c_N^T − c_B^T B^{-1}N]       c_B^T B^{-1}b


The Simplex Algorithm is :

1. form the B and N matrices. Calculate B^{-1}.

2. calculate the shadow prices of the non-basic variables (x_N) :

−[c_N^T − c_B^T B^{-1}N]

3. calculate B^{-1}N and B^{-1}b.

4. find the pivot element by performing the ratio test using the column corresponding to the most negative shadow price.

5. the pivot column corresponds to the new basic variable and the pivot row corresponds to the new non-basic variable. Modify the B and N matrices accordingly. Calculate B^{-1}.

6. repeat steps 2 through 5 until there are no negative shadow prices remaining. (A direct numpy sketch of this loop follows below.)
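A direct numpy transliteration of steps 1 through 6 (our own sketch ; the entering column is chosen by the most negative reduced cost, matching the worked tableau example, and B^{-1} is recomputed on every pass, which is exactly the inefficiency discussed next) :

    import numpy as np

    def matrix_simplex(c, A, b):
        """Matrix-form SIMPLEX sketch: min c.x  s.t.  A x <= b, x >= 0, b >= 0."""
        m, n = A.shape
        A = np.hstack([A, np.eye(m)])          # add slack variables
        c = np.concatenate([c, np.zeros(m)])
        basic = list(range(n, n + m))          # initial basis: the slacks
        while True:
            nonbasic = [j for j in range(n + m) if j not in basic]
            B, N = A[:, basic], A[:, nonbasic]
            Binv = np.linalg.inv(B)                       # step 1
            r = c[nonbasic] - c[basic] @ Binv @ N         # step 2
            if np.all(r >= -1e-12):
                break                                     # step 6
            q = nonbasic[int(np.argmin(r))]               # entering variable
            d, xB = Binv @ A[:, q], Binv @ b              # step 3
            ratios = np.full(m, np.inf)                   # step 4: ratio test
            pos = d > 1e-12
            ratios[pos] = xB[pos] / d[pos]
            if not np.isfinite(ratios.min()):
                raise ValueError("problem is unbounded")
            basic[int(np.argmin(ratios))] = q             # step 5: swap basis
        x = np.zeros(n + m)
        x[basic] = np.linalg.inv(A[:, basic]) @ b
        return x[:n], c @ x

    x, P = matrix_simplex(np.array([-900.0, -1500.0]),
                          np.array([[1.5, 2.0], [20.0, 60.0]]),
                          np.array([3.0, 60.0]))
    print(x, P)                                # -> [1.2 0.6] -1980.0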


This method is computationally inefficient because you must calculate a complete inverse of the matrix B at each iteration of the SIMPLEX algorithm. There are several variations on the Revised SIMPLEX algorithm which attempt to minimize the computations and memory requirements of the method. Examples of these can be found in :

– Chvatal
– Edgar & Himmelblau (references in 7.7)
– Fletcher
– Gill, Murray and Wright


3 Duality & Linear Programming

Many of the traditional advanced topics in LP analysis and research are based on the so-called method of Lagrange. This approach to constrained optimization :

→ originated from the study of orbital motion.
→ has proven extremely useful in the development of :
   – optimality conditions for constrained problems,
   – duality theory,
   – optimization algorithms,
   – sensitivity analysis.


Consider the general Linear Programming problem :

min_x  c^T x

subject to :

Ax ≥ b

This problem is often called the Primal problem to indicate that it is defined in terms of the Primal variables (x).

You form the Lagrangian of this optimization problem as follows :

L(x, λ) = c^T x − λ^T [Ax − b]

Notice that the Lagrangian is a scalar function of two sets of variables : the Primal variables x, and the Dual variables (or Lagrange multipliers) λ. Up until now we have been calling the Lagrange multipliers λ the shadow prices.


We can develop the Dual problem by first re-writing the Lagrangian as :

L^T(x, λ) = x^T c − [x^T A^T − b^T] λ

which can be re-arranged to yield :

L^T(λ, x) = b^T λ − x^T [A^T λ − c]

This Lagrangian looks very similar to the previous one, except that the Lagrange multipliers λ and problem variables x have switched places. In fact this re-arranged form is the Lagrangian for the maximization problem :

max_λ  b^T λ

subject to :

A^T λ ≤ c
λ ≥ 0

This formulation is often called the Dual problem to indicate that it is defined in terms of the Dual variables λ.


It is worth noting that in our market gardener example, if we calculate the optimum value of the objective function using the Primal variables x∗ :

P(x∗)|_primal = c^T x∗ = 900 (6/5) + 1500 (3/5) = 1980

and using the Dual variables λ∗ :

P(λ∗)|_dual = b^T λ∗ = 3 (480) + 60 (9) = 1980

we get the same result. There is a simple explanation for this that we will examine later.


Besides being of theoretical interest, the Dual problem formulation can have practical advantages in terms of ease of solution. Consider that the Primal problem was formulated as :

min_x  c^T x

subject to :

Ax ≥ b

As we saw earlier, problems with inequality constraints of this form often present some difficulty in determining an initial feasible starting point. They usually require a Phase 1 starting procedure such as the Big M method.


The Dual problem had the form :

max_λ  b^T λ

subject to :

A^T λ ≤ c
λ ≥ 0

Problems such as these are usually easy to start since λ = 0 is a feasible point. Thus no Phase 1 starting procedure is required.

As a result you may want to consider solving the Dual problem when the origin is not a feasible starting point for the Primal problem.
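A quick numeric illustration of the Primal/Dual pair on the market gardener data (a sketch with scipy.optimize.linprog ; the ≥ constraints of the Dual are negated into ≤ form for the solver) :

    from scipy.optimize import linprog

    # Primal: max 900c + 1500t  s.t.  1.5c + 2t <= 3, 20c + 60t <= 60
    primal = linprog([-900, -1500], A_ub=[[1.5, 2], [20, 60]], b_ub=[3, 60],
                     bounds=[(0, None)] * 2, method="highs")

    # Dual: min 3*l1 + 60*l2  s.t.  1.5*l1 + 20*l2 >= 900, 2*l1 + 60*l2 >= 1500
    dual = linprog([3, 60], A_ub=[[-1.5, -20], [-2, -60]], b_ub=[-900, -1500],
                   bounds=[(0, None)] * 2, method="highs")

    print(-primal.fun, dual.fun)   # -> 1980.0 1980.0 : equal optimal values
    print(dual.x)                  # -> [480. 9.], the shadow prices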


4 Optimality Conditions

Consider the linear programming problem :

min_x  c^T x

subject to :

[ M ]        [ b_M ]    ← active
[−−−] x  ≥   [ −−− ]
[ N ]        [ b_N ]    ← inactive

– if equality constraints are present, they can be used to eliminate some of the elements of x, thereby reducing the dimensionality of the optimization problem.

– rows of the coefficient matrix (M) are linearly independent.


In our market gardener example, both the trucking and fertilizer constraints are active at the optimum.

Form the Lagrangian :

L(x, λ) = c^T x − λ_M^T [Mx − b_M] − λ_N^T [Nx − b_N]

At the optimum, we know that the Lagrange multipliers (shadow prices) for the inactive inequality constraints are zero (i.e. λ_N = 0). Also, since at the optimum the active inequality constraints are exactly satisfied :

Mx∗ − b_M = 0


Then, notice that at the optimum :

L(x∗, λ∗) = P(x∗) = c^T x∗

At the optimum, we have seen that the shadow prices for the active inequality constraints must all be non-negative (i.e. λ_M ≥ 0). These multiplier values told us how much the optimum value of the objective function would increase if the associated constraint was moved into the feasible region, for a minimization problem.

Finally, the optimum (x∗, λ∗) must be a stationary point of the Lagrangian :

∇_x L(x∗, λ∗) = ∇_x P(x∗) − (λ∗_M)^T ∇_x g_M(x∗) = c^T − (λ∗_M)^T M = 0

∇_λ L(x∗, λ∗) = (g_M(x∗))^T = (Mx∗ − b_M)^T = 0


Thus, necessary and sufficient conditions for an optimum of a Linear Programming problem are :

– the rows of the active set matrix (M) must be linearly independent,

– the active set constraints are exactly satisfied at the optimum point x∗ :

Mx∗ − b_M = 0

– the Lagrange multipliers for the inequality constraints are :

λ∗_N = 0  and  λ∗_M ≥ 0

this is sometimes expressed as :

(λ∗_M)^T [Mx∗ − b_M] + (λ∗_N)^T [Nx∗ − b_N] = 0

– the optimum point (x∗, λ∗) is a stationary point of the Lagrangian :

∇_x L(x∗, λ∗) = 0


5 Applications

Quadratic Programming Using LP

Consider an optimization problem of the form :

min_x  (1/2) x^T H x + c^T x

subject to :

Ax ≥ b
x ≥ 0

where H is a symmetric, positive definite matrix.

This is a Quadratic Programming (QP) problem. Problems such as these are encountered in areas such as parameter estimation (regression), process control and so forth.


Recall that the Lagrangian for this problem would be formed as :

L(x, λ, m) = (1/2) x^T H x + c^T x − λ^T (Ax − b) − m^T x

where λ and m are the Lagrange multipliers for the constraints.

The optimality conditions for this problem are :

∇_x L(x, λ, m) = x^T H + c^T − λ^T A − m^T = 0

Ax − b ≥ 0

x, λ, m ≥ 0

λ^T (Ax − b) + m^T x = 0


It can be shown that the solution to the QP problem is also a solution of the following LP problem :

min_x  (1/2) [c^T x + b^T (Ax − b)]

subject to :

Hx + c − A^T λ − m = 0

λ^T (Ax − b) + m^T x = 0

Ax − b ≥ 0

x, λ, m ≥ 0

This LP problem may prove easier to solve than the original QP problem.
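As an illustration of the optimality conditions above, the sketch below solves a toy QP (all data hypothetical) with a general-purpose solver and then checks stationarity and complementarity ; the multiplier value λ = 0.5 is obtained by hand from the stationarity condition :

    import numpy as np
    from scipy.optimize import minimize

    H = np.array([[2.0, 0.0], [0.0, 2.0]])     # symmetric, positive definite
    c = np.array([-2.0, -5.0])
    A = np.array([[-1.0, -1.0]])               # x1 + x2 <= 3, written as Ax >= b
    b = np.array([-3.0])

    qp = lambda x: 0.5 * x @ H @ x + c @ x
    res = minimize(qp, np.zeros(2),
                   constraints=[{"type": "ineq", "fun": lambda x: A @ x - b}],
                   bounds=[(0, None)] * 2)
    x = res.x                                  # -> approx [0.75 2.25]

    lam = np.array([0.5])                      # multiplier of the active constraint
    m = np.zeros(2)                            # x > 0, so these multipliers vanish
    print(H @ x + c - A.T @ lam - m)           # stationarity    -> approx [0 0]
    print(lam @ (A @ x - b) + m @ x)           # complementarity -> approx 0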


Successive Linear Programming

Consider the general nonlinear optimization problem :

min_x  P(x)

subject to :

h(x) = 0
g(x) ≥ 0

If we linearize the objective function and all of the constraints about some point x_k, the original nonlinear problem may be approximated (locally) by :

min_x  P(x_k) + ∇_x P|_{x_k} (x − x_k)

subject to :

h(x_k) + ∇_x h|_{x_k} (x − x_k) = 0

g(x_k) + ∇_x g|_{x_k} (x − x_k) ≥ 0


Since x_k is known, the problem is linear in the variables x and can be solved using LP techniques.

The solution to the approximate problem is x∗ and will often not be a feasible point of the original nonlinear problem (i.e. the point x∗ may not satisfy the original nonlinear constraints). There are a variety of techniques to correct the solution x∗ to ensure feasibility.

A possibility would be to search in the direction of x∗, starting from x_k, to find a feasible point :

x_{k+1} = x_k + α (x∗ − x_k)

We will discuss such ideas later with nonlinear programming techniques.


The SLP algorithm is :

1. linearize the nonlinear problem about the current point x_k,

2. solve the approximate LP,

3. correct the solution (x∗) to ensure feasibility of the new point x_{k+1} (i.e. find a point x_{k+1} in the direction of x∗ which satisfies the original nonlinear constraints),

4. repeat steps 1 through 3 until convergence.

The SLP technique has several disadvantages, of which slow convergence and infeasible intermediate solutions are the most important. However, the ease of implementation of the method can often compensate for the method's shortcomings. (A toy implementation sketch follows below.)
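A toy SLP sketch (the data and the shrinking step bound are our own illustration) : minimize (x_1 − 1)^2 + (x_2 − 2)^2 subject to x_1 + x_2 ≤ 1, x ≥ 0. Each pass linearizes the objective about x_k, solves an LP for the step inside a small box, and keeps the step only if the true objective improves :

    import numpy as np
    from scipy.optimize import linprog

    P = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    grad = lambda x: np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

    x, radius = np.array([0.0, 0.0]), 1.0       # feasible start, LP step bound
    for _ in range(40):
        # LP in the step d = x - x_k: keep x + d feasible and inside the box
        bounds = [(max(-radius, -xi), radius) for xi in x]
        lp = linprog(grad(x), A_ub=[[1.0, 1.0]], b_ub=[1.0 - x.sum()],
                     bounds=bounds, method="highs")
        if lp.success and P(x + lp.x) < P(x) - 1e-12:
            x = x + lp.x                        # accept the improving step
        else:
            radius *= 0.5                       # otherwise shrink the step bound
    print(x)                                    # -> [0. 1.], the constrained optimum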


6 Sensitivity Analysis

Usually there is some uncertainty associated with the values used in an optimization problem. After we have solved an optimization problem, we are often interested in knowing how the optimal solution will change as specific problem parameters change. This is called by a variety of names, including :

– sensitivity analysis,
– post-optimality analysis,
– parametric programming.

In our Linear Programming problems we have made use of pricing information, but we know such prices can fluctuate. Similarly the demand / availability information we have used is also subject to some fluctuation. Consider that in our market gardener problem, we might be interested in determining how the optimal solution changes when :

– the price of vegetables changes,
– the amount of available fertilizer changes,
– we buy a new truck.


In the market gardener example, we had :

– Changes in the pricing information (c^T) affect the slope of the objective function contours.

– Changes to the right-hand sides of the inequality constraints (b) translate the constraints without affecting their slope.

– Changes to the coefficient matrix of the constraints (A) affect the slopes of the corresponding constraints.


For the general Linear Programming problem :

min_x  c^T x

subject to :

Ax ≥ b

we had the Lagrangian :

L(x, λ) = P(x) − λ^T g(x) = c^T x − λ^T [Ax − b]

or in terms of the active and inactive inequality constraints :

L(x, λ) = c^T x − λ_M^T [Mx − b_M] − λ_N^T [Nx − b_N]

Recall that at the optimum :

∇_x L(x∗, λ∗) = c^T − (λ∗_M)^T M = 0

∇_λ L(x∗, λ∗) = [Mx∗ − b_M] = 0

(λ∗_N)^T = 0


Notice that the first two sets of equations are linear in the variables of interest (x∗, λ∗_M) and they can be solved to yield :

λ∗_M = (M^T)^{-1} c   and   x∗ = M^{-1} b_M

Then, as we discussed previously, it follows that :

P(x∗) = c^T x∗ = c^T M^{-1} b_M = (λ∗_M)^T b_M = b_M^T λ∗_M

We now have expressions for the complete solution of our Linear Programming problem in terms of the problem parameters.
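Checking these expressions on the market gardener problem (a numpy sketch ; the two constraints, both active at the optimum, are negated into the ≥ convention of this section) :

    import numpy as np

    M = np.array([[-1.5, -2.0], [-20.0, -60.0]])   # active set, M x >= b_M
    b_M = np.array([-3.0, -60.0])
    c = np.array([-900.0, -1500.0])                # minimize c.x

    lam = np.linalg.solve(M.T, c)     # lambda*_M = (M^T)^-1 c -> [480. 9.]
    x = np.linalg.solve(M, b_M)       # x* = M^-1 b_M          -> [1.2 0.6]
    print(lam, x, c @ x, b_M @ lam)   # both objective forms give -1980.0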

In this course we will only consider two types of variation in our nominal optimization problem :

1. changes in the right-hand side of constraints (b). Such changes occur with variation in the supply / demand of materials, product quality requirements and so forth.

2. pricing (c) changes. The economics used in optimization are rarely known exactly.


For the more difficult problem of determining the effects of uncertainty in the coefficient matrix (A) on the optimization results, see :

Gal, T., Postoptimal Analysis, Parametric Programming and Related Topics, McGraw-Hill, 1979.

Forbes, J.F., and T.E. Marlin, Model Accuracy for Economic Optimizing Controllers : The Bias Update Case, Ind. Eng. Chem. Res., 33, pp. 1919-29, 1994.


For small (differential) changes in b_M, which do not change the active constraint set :

∇_b P(x∗) = ∇_b [(λ∗)^T b] = (λ∗)^T

∇_b x∗ = ∇_b [M^{-1} b_M] = [ M^{-1} ; 0 ]

∇_b λ∗ = ∇_b [ (M^T)^{-1} c ; 0 ] = 0

Note that the Lagrange multipliers are not a function of b_M as long as the active constraint set does not change.


For small (differential) changes in c, which do not change the active constraint set :

∇_c P(x∗) = ∇_c [c^T x∗] = (x∗)^T

∇_c x∗ = ∇_c [M^{-1} b_M] = 0

∇_c λ∗ = ∇_c [ (M^T)^{-1} c ; 0 ] = [ (M^T)^{-1} ; 0 ]


Note that x∗ is not an explicit function of the vector c and will not change so long as the active set remains fixed. If a change in c causes an element of λ to become negative, then x∗ will jump to a new vertex.

Also it is worth noting that the optimal values P(x∗), x∗ and λ∗ are all affected by changes in the elements of the constraint coefficient matrix (M). This is beyond the scope of the course and the interested student can find some detail in the references previously provided.

There is a simple geometrical interpretation for most of these results. Consider the 2-variable Linear Programming problem :

min_{x_1, x_2}  c_1 x_1 + c_2 x_2

subject to :

[ g_1(x) ]   [ a_11 x_1 + a_12 x_2 ]
[ g_2(x) ] = [ a_21 x_1 + a_22 x_2 ] = Ax ≥ b
[   ...  ]   [         ...         ]


Note that the set of inequality constraints can be expressed :

         [ M ]     [ ∇_x g_1 ]
    Ax = [−−−] x = [ ∇_x g_2 ] x ≥ b
         [ N ]     [   ...   ]

This two-variable optimization problem has an optimum at the intersection of the g_1 and g_2 constraints, and can be depicted graphically.


Summary

– The solution of a well-posed LP is uniquely determined by the intersection of the active constraints.

– Most LPs are solved by the SIMPLEX method.

– Commercial computer codes use the Revised SIMPLEX algorithm (most efficient).

– Optimization studies include :

  * solution to the nominal optimization problem,

  * a sensitivity study to determine how uncertainty can affect the solution.


7 Interior Point Methods

Recall that the SIMPLEX method solved the LP problem by walking around the boundary of the feasible region. The algorithm moved from vertex to vertex on the boundary.

Interior point methods move through the feasible region toward the optimum. This is accomplished by :

1. Converting the LP problem into an unconstrained problem using Barrier Functions for the constraints at each new point (iterate),

2. Solving the optimality conditions for the resulting unconstrained problem,

3. Repeating until converged.

This is a very active research area and some exciting developments have occurred in the last two decades.
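A minimal log-barrier sketch on the market gardener LP (our own illustration : each outer pass minimizes c^T x − μ Σ log(slack) with a generic unconstrained solver, then shrinks μ, so the iterates approach the optimal vertex through the interior of the feasible region) :

    import numpy as np
    from scipy.optimize import minimize

    c = np.array([-900.0, -1500.0])
    A = np.array([[1.5, 2.0], [20.0, 60.0]])
    b = np.array([3.0, 60.0])

    def barrier(x, mu):
        s = b - A @ x                       # slacks of Ax <= b
        if np.any(s <= 0.0) or np.any(x <= 0.0):
            return np.inf                   # outside the strict interior
        return c @ x - mu * (np.log(s).sum() + np.log(x).sum())

    x = np.array([0.5, 0.5])                # strictly interior starting point
    for mu in [100.0, 10.0, 1.0, 0.1, 0.01]:
        x = minimize(lambda z: barrier(z, mu), x, method="Nelder-Mead").x
    print(x)                                # -> approaches [1.2 0.6]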
