4.1 lecture_1e


Page 1: 4.1 Lecture_1e

Lecture 1e:

Mathematical Theory of LPP

Jeff Chak-Fu WONG

Department of Mathematics

Chinese University of Hong Kong

jwong@math.cuhk.edu.hk

MAT581SS

Mathematics for Logistics

Produced by Jeff Chak-Fu WONG 1

Page 2: 4.1 Lecture_1e

TABLE OF CONTENTS

1. Introduction

2. Some Basic Definitions

3. Some Elementary Results for LPP

4. The Simplex Algorithm: Main Theorems

TABLE OF CONTENTS 2

Page 3: 4.1 Lecture_1e

INTRODUCTION

INTRODUCTION 3

Page 4: 4.1 Lecture_1e

• While describing the simplex method in the last lecture, we

have used many results which were guessed purely from the

geometry of the linear programming problem.

• The basic aim of this lecture is to prove all these results mathematically, so as to complete the discussion of the simplex method from a theoretical point of view as well.

INTRODUCTION 4

Page 5: 4.1 Lecture_1e

SOME BASIC DEFINITIONS

In what follows we introduce some basic definitions on convex sets

and related concepts, which are to be used in the following

lectures.

SOME BASIC DEFINITIONS 5

Page 6: 4.1 Lecture_1e

Definition 1 (Convex Set)

• Let S be a subset of Rn, i.e., S ⊆ Rn.

• The set S is called a convex set if for every λ with 0 ≤ λ ≤ 1,

x, u ∈ S ⇒ λx + (1 − λ)u ∈ S.

SOME BASIC DEFINITIONS 6

Page 7: 4.1 Lecture_1e

Thus a set S ⊆ Rn is convex if for any two points x,u in S, the whole

line segment joining x and u is in the set S.

Figure 1:

In Figure 1, the first and third sets are convex in R2, but the second set (the shaded portion) is not convex.

SOME BASIC DEFINITIONS 7

Page 8: 4.1 Lecture_1e

Remark 1

• By convention, an empty set and a single point set are always

considered to be convex.

• Also the intersection of arbitrary many convex sets is always a convex

set but the union may not be so.

SOME BASIC DEFINITIONS 8

Page 9: 4.1 Lecture_1e

Example 1 Is the following set

{(x, y) ∈ R2 | 3x − 4y > 5 or 4x + 3y < 7}

convex?

SOME BASIC DEFINITIONS 9

Page 10: 4.1 Lecture_1e

Solution:

No. (4, 1) and (0, 1) are in the set, but the point (2, 1), which lies on

the line segment joining (4, 1) and (0, 1), is not in the set.

Figure 2: (a), (b)

The given set is the union of two convex sets, but the union itself is not convex.
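A small Python check (an illustrative sketch, not part of the original slides) confirms this counterexample numerically:

```python
# An illustrative check of Example 1 (plain Python, no external packages assumed).
def in_set(p):
    """Membership test for {(x, y) : 3x - 4y > 5 or 4x + 3y < 7}."""
    x, y = p
    return (3 * x - 4 * y > 5) or (4 * x + 3 * y < 7)

u, v = (4, 1), (0, 1)
mid = tuple(0.5 * a + 0.5 * b for a, b in zip(u, v))   # (2.0, 1.0), the midpoint of the segment

print(in_set(u), in_set(v))   # True True  -> both endpoints lie in the set
print(in_set(mid))            # False      -> a point of the segment is outside, so the set is not convex
```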

SOME BASIC DEFINITIONS 10

Page 11: 4.1 Lecture_1e

Definition 2 (Convex Hull)

• Let S ⊆ Rn.

• Then the smallest convex set containing the given set S is called the

convex hull of S and is denoted by Conv(S).

SOME BASIC DEFINITIONS 11

Page 12: 4.1 Lecture_1e

It is obvious that if S is a convex set then Conv(S) = S.

Figure 3: a set S and its convex hull Conv(S).

In Figure 3, a nonconvex set and its convex hull are depicted in R2.

SOME BASIC DEFINITIONS 12

Page 13: 4.1 Lecture_1e

Definition 3 (Extreme Point/Corner Point)

• Let S ⊆ Rn be a convex set.

• A point x∗ of S is called an extreme point or a corner point of S if there do not exist x, u ∈ S (x ≠ u) and 0 < λ < 1 such that

x∗ = λx + (1 − λ)u.

SOME BASIC DEFINITIONS 13

Page 14: 4.1 Lecture_1e

Thus a point x∗ is an extreme point of S if it does not lie in the interior of the line segment joining any two distinct points of S.

We may check that for the set

S1 = {(x1, x2) | x1 ≥ 0, x2 ≥ 0, x1 + x2 ≤ 1},

the extreme points are (0, 0), (1, 0), and (0, 1); whereas for the set

S2 = {(x1, x2) | x1² + x2² ≤ 1},

every point on the circle x1² + x2² = 1 is an extreme point.

Figure 4: the triangle S1 with corner points (0, 0), (1, 0), (0, 1), and the unit disc S2.

SOME BASIC DEFINITIONS 14

Page 15: 4.1 Lecture_1e

Definition 4 (Hyperplane)

• Let p ∈ Rn and d ∈ R.

• Then the set H defined as

H = {x ∈ Rn| pTx = d}

is called a hyperplane.

SOME BASIC DEFINITIONS 15

Page 16: 4.1 Lecture_1e

Thus a hyperplane H in Rn is the natural extension of a line in R2 or a plane in R3.

Figure 5: a hyperplane with normal vector p passing through the point x0.

Suppose x0 ∈ H. Then pT x0 = d (p ≠ 0), and for any x ∈ H, pT (x − x0) = 0. Thus, p is normal to the vector x − x0.

Remark 2 Every hyperplane is a convex set.

SOME BASIC DEFINITIONS 16

Page 17: 4.1 Lecture_1e

Definition 5 (Closed Half Spaces)

• Let H = {x ∈ Rn : pT x = d} be a hyperplane in Rn.

• Then the sets

H1 = {x ∈ Rn| pTx ≤ d}

and

H2 = {x ∈ Rn| pTx ≥ d}

are called the closed half spaces generated by the hyperplane H .

We can check that H1 and H2 are convex sets in Rn.

SOME BASIC DEFINITIONS 17

Page 18: 4.1 Lecture_1e

Definition 6 (Supporting Hyperplane)

• Let S ⊆ Rn be a closed convex set.

• Let u be a boundary point of S. Then a hyperplane H is called a supporting hyperplane at u if it passes through u and the whole of S is contained in one of the closed half spaces generated by H.

(a) A convex set S (in green), (b) a supporting hyperplane of S (the dashed line), (c) the half-space delimited by the hyperplane which contains S (in light blue).

SOME BASIC DEFINITIONS 18

Page 19: 4.1 Lecture_1e

Remark 3 A supporting hyperplane to the set S at a boundary point u is

{x | pT x = pT u},

where p ≠ 0 and pT x ≤ pT u for all x ∈ S.

SOME BASIC DEFINITIONS 19

Page 20: 4.1 Lecture_1e

Definition 7 (Edge)

• Let S ⊆ Rn be a convex set, and x,u ∈ S with x ≠ u.

• Then the line segment joining x,u is called an edge of the convex set

S if it is the intersection of S with a supporting hyperplane.

SOME BASIC DEFINITIONS 20

Page 21: 4.1 Lecture_1e

Definition 8 (Adjacent Extreme Points)

• Let S ⊆ Rn be a convex set.

• Then two extreme points x and u of S are called adjacent extreme

points if they are joined by an edge.

SOME BASIC DEFINITIONS 21

Page 22: 4.1 Lecture_1e

Definition 9 (Convex Combination)

• Let x,u ∈ Rn.

• Then the combination λx + (1− λ)u, 0 ≤ λ ≤ 1 is called the convex

combination of x and u.

• In general, let x1,x2, · · · ,xr be r points in Rn.

• Then the combination λ1x1 + λ2x2 + · · · + λrxr with λk ≥ 0 (k = 1, 2, · · · , r) and λ1 + λ2 + · · · + λr = 1 is called a convex combination of the r points x1, x2, · · · , xr.

SOME BASIC DEFINITIONS 22

Page 23: 4.1 Lecture_1e

Remark 4 Note the difference between a linear combination and a convex combination.

• In a linear combination

α1x1 + α2x2 + · · · + αrxr,

the coefficients α1, α2, · · · , αr ∈ R are arbitrary.

• But in a convex combination the coefficients are non-negative and their sum equals one, i.e.,

α1 + α2 + · · · + αr = 1 with αk ≥ 0 (k = 1, 2, · · · , r).

SOME BASIC DEFINITIONS 23

Page 24: 4.1 Lecture_1e

Example 2 Consider the points

x1 = (1, −4, −6)T, x2 = (0.1, −4, −0.2)T.

Then −0.5x1 + 0.4x2 is a linear combination of x1 and x2. It can be computed as

(−0.46, 0.4, 2.92)T = −0.5 (1, −4, −6)T + 0.4 (0.1, −4, −0.2)T,

where α1 = −0.5 and α2 = 0.4.

SOME BASIC DEFINITIONS 24

Page 25: 4.1 Lecture_1e

Example 3 Consider the points

x1 = (0, 0)T, x2 = (1, 0)T, x3 = (0, 1)T, x4 = (1, 1)T.

If x = (1/2, 1/2)T, then x can be expressed as a convex combination of {xi} in the following ways:

x = 0 x1 + (1/2) x2 + (1/2) x3 + 0 x4

= (1/2) x1 + 0 x2 + 0 x3 + (1/2) x4

= (1/4) x1 + (1/4) x2 + (1/4) x3 + (1/4) x4

and so forth.
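The computations in Examples 2 and 3 can be checked in a few lines; the snippet below is an illustrative sketch (assuming NumPy), not part of the original slides.

```python
# Verifying the linear combination of Example 2 and the convex combinations of Example 3.
import numpy as np

# Example 2: a linear combination with coefficients alpha1 = -0.5, alpha2 = 0.4.
x1 = np.array([1.0, -4.0, -6.0])
x2 = np.array([0.1, -4.0, -0.2])
print(-0.5 * x1 + 0.4 * x2)          # [-0.46  0.4   2.92]

# Example 3: three different convex combinations that all give (1/2, 1/2).
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
for lam in ([0, 0.5, 0.5, 0], [0.5, 0, 0, 0.5], [0.25, 0.25, 0.25, 0.25]):
    lam = np.array(lam)
    assert np.all(lam >= 0) and np.isclose(lam.sum(), 1.0)   # weights are convex
    print(lam @ pts)                 # [0.5 0.5] in every case
```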

SOME BASIC DEFINITIONS 25

Page 26: 4.1 Lecture_1e

Definition 10 (Convex set spanned by a set)

• Let S ⊆ Rn.

• Then the set CSpan(S) given by

CSpan(S) = { λ1x1 + λ2x2 + · · · + λkxk | λ1 + λ2 + · · · + λk = 1, k finite (arbitrary), λr ≥ 0, and xr ∈ S, ∀r }

is called the convex set spanned by S.

Thus CSpan(S) is the set of all convex combinations of arbitrary, but finitely many, elements of S.

SOME BASIC DEFINITIONS 26

Page 27: 4.1 Lecture_1e

Definition 11 (Polyhedron/Polytope)

• Let S ⊆ Rn.

• Then S is called a polyhedron if it is the intersection of a finite number of closed half spaces, i.e.,

S = { x ∈ Rn | pTi x ≤ di (i = 1, · · · , r) }.

SOME BASIC DEFINITIONS 27

Page 28: 4.1 Lecture_1e

• If a polyhedron is also bounded then it is called a polytope.

• A polytope is thus a closed, bounded, convex set having finitely many extreme points, its bounding surfaces being hyperplanes.

SOME BASIC DEFINITIONS 28

Page 29: 4.1 Lecture_1e

• The set R2+ given by

R2+ = {(x1, x2) ∈ R2 : x1 ≥ 0, x2 ≥ 0}

is a polyhedron in R2 but not a polytope, while the set

S = {(x1, x2) ∈ R2 : x1 + x2 ≤ 1, x1 ≥ 0, x2 ≥ 0}

is a polytope in R2.

Figure 6:

SOME BASIC DEFINITIONS 29

Page 30: 4.1 Lecture_1e

Definition 12 (Simplex in Rn)

• Let x1,x2, · · · ,xn,xn+1 be (n + 1) points in Rn.

• Then the convex set spanned by these (n + 1) points is called an n-simplex in Rn.

SOME BASIC DEFINITIONS 30

Page 31: 4.1 Lecture_1e

A 2-simplex in R2 is a solid triangle and a 3-simplex in R3 is a solid tetrahedron (the points inside the triangle/tetrahedron are also included).

Figure 7:

SOME BASIC DEFINITIONS 31

Page 32: 4.1 Lecture_1e

It can also be checked that every point of the polytope can be

expressed as a convex combination of its extreme points.

Therefore,

if S is a polytope with extreme points x1, x2, · · · , xk, then for any x ∈ S there exist scalars α1, α2, · · · , αk such that

αr ≥ 0 (r = 1, · · · , k), α1 + α2 + · · · + αk = 1

and

x = α1x1 + α2x2 + · · · + αkxk.
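For the polytope S1 = {(x1, x2) | x1 ≥ 0, x2 ≥ 0, x1 + x2 ≤ 1} considered earlier, such a representation can be written down explicitly; the snippet below is an illustrative sketch (assuming NumPy), not part of the slides.

```python
# A point of the triangle S1 written as a convex combination of its extreme points.
import numpy as np

corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

def weights(x):
    """Weights (a0, a1, a2) with x = a0*(0,0) + a1*(1,0) + a2*(0,1)."""
    a1, a2 = x
    return np.array([1.0 - a1 - a2, a1, a2])   # non-negative whenever x lies in S1

x = np.array([0.3, 0.5])
alpha = weights(x)
print(alpha, alpha.sum(), alpha @ corners)     # [0.2 0.3 0.5]  1.0  [0.3 0.5]
```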

SOME BASIC DEFINITIONS 32

Page 33: 4.1 Lecture_1e

SOME ELEMENTARY RESULTS FOR LPP

SOME ELEMENTARY RESULTS FOR LPP 33

Page 34: 4.1 Lecture_1e

• Consider the LPP

Max z = c1x1 + c2x2 + · · · + cnxn

subject to

a11x1 + a12x2 + · · · + a1nxn (≤, =, ≥) b1

a21x1 + a22x2 + · · · + a2nxn (≤, =, ≥) b2

...

am1x1 + am2x2 + · · · + amnxn (≤, =, ≥) bm

with x1 ≥ 0, · · · , xn ≥ 0, (1)

where, as explained in the last lecture, only one inequality sign holds in each constraint, though different constraints may have different inequality signs.

• Denote by S, the feasible region of LPP (1).

SOME ELEMENTARY RESULTS FOR LPP 34

Page 35: 4.1 Lecture_1e

Theorem 1 The feasible region S is a convex subset of Rn.

Proof.

• Let Si denote the set of all points X ∈ Rn for which the ith

constraint of LPP (1) holds (i = 1, 2, · · · , m, m + 1, · · · , m + n).

• Then each Si is either

– a hyperplane

or

– one of the closed half spaces,

and hence a convex set.

• But S = ∩iSi is the intersection of finitely many convex sets; consequently it is a convex set. □

SOME ELEMENTARY RESULTS FOR LPP 35

Page 36: 4.1 Lecture_1e

Theorem 2 Let LPP (1) have an optimal solution X∗. Then X∗ cannot be in the interior of the feasible region S.

SOME ELEMENTARY RESULTS FOR LPP 36

Page 37: 4.1 Lecture_1e

Theorem 3 If the given LPP has an optimal solution then at least one

corner point of S is optimal.

Proof.

• Although the proof holds in much more generality, we shall give the

proof for the case when S is a polytope.

• In this case, it is guaranteed that the given LPP has an optimal solution, so we only have to show that

the optimal value is certainly attained at an extreme point.

SOME ELEMENTARY RESULTS FOR LPP 37

Page 38: 4.1 Lecture_1e

• – Let X∗ be an optimal point of the given LPP.

– If X∗ is an extreme point then the result holds obviously.

• – So we consider the case when X∗ is not an extreme point.

– Then, since S is a polytope

∗ it has finitely many corner points, say X1,X2, · · · ,Xk, and

∗ any point of S can be expressed as a convex combination of the

corner points of S.

– In particular this holds for X∗ as well, i.e. there exist scalars α∗1, α∗2, · · · , α∗k such that

X∗ = α∗1X1 + α∗2X2 + · · · + α∗kXk, α∗1 + α∗2 + · · · + α∗k = 1, α∗r ≥ 0, ∀r. (2)

SOME ELEMENTARY RESULTS FOR LPP 38

Page 39: 4.1 Lecture_1e

– Therefore,

CTX∗ = CT(α∗1X1 + α∗2X2 + · · · + α∗kXk) = α∗1(CTX1) + α∗2(CTX2) + · · · + α∗k(CTXk),

i.e. CTX∗ is the weighted arithmetic mean (with weights α∗r) of the k scalars CTXr.

– Hence by the property of the arithmetic mean,

CTX∗ = α∗1(CTX1) + α∗2(CTX2) + · · · + α∗k(CTXk) ≤ max(CTX1, · · · , CTXk) = CTXp for some 1 ≤ p ≤ k. (3)

– But X∗ is optimal, so CTX∗ ≥ CTX, ∀X ∈ S.

– In particular

CTX∗ ≥ CTXp. (4)

– Equations (3) and (4) give CTX∗ = CTXp, which proves the result.

SOME ELEMENTARY RESULTS FOR LPP 39

Page 40: 4.1 Lecture_1e

– This is because

∗ if X∗ is not an extreme point

∗ then there exists an extreme point Xp which is also optimal.

– Thus at least one extreme point is certainly optimal. □

SOME ELEMENTARY RESULTS FOR LPP 40

Page 41: 4.1 Lecture_1e

Theorem 4 Every local optimal point of LPP (1) is also a global optimal

point.

SOME ELEMENTARY RESULTS FOR LPP 41

Page 42: 4.1 Lecture_1e

Theorem 5 The set of all optimal solutions of a LPP is a convex set.

Proof.

• – If

∗ the LPP has no optimal solution, or

∗ the LPP has only one optimal solution,

– then the result is true by convention.

• Let us assume that the given LPP has at least two optimal

solutions.

– Let V denote the set of all optimal solutions of the given LPP,

i.e.

V = {X ∈ S| X is optimal},

where S is the feasible region of the given LPP.

– We shall prove that V is a convex set.

SOME ELEMENTARY RESULTS FOR LPP 42

Page 43: 4.1 Lecture_1e

• – Let X1 ∈ V and X2 ∈ V .

– Let X = λX1 + (1 − λ)X2, 0 ≤ λ ≤ 1.

– Then X ∈ S because X1 ∈ S, X2 ∈ S and S is a convex set.

– Also

∗ CTX1 ≥ CTY for all Y ∈ S

and

∗ CTX2 ≥ CTY for all Y ∈ S.

Hence, for every Y ∈ S,

CTX = CT(λX1 + (1 − λ)X2) = λ(CTX1) + (1 − λ)(CTX2) ≥ λ(CTY) + (1 − λ)(CTY) = CTY.

– Hence X ∈ V , which gives that V is a convex set.

SOME ELEMENTARY RESULTS FOR LPP 43

Page 44: 4.1 Lecture_1e

• As a consequence of Theorem 5, we get that if an LPP has more than one optimal solution, then it has infinitely many optimal solutions. □

SOME ELEMENTARY RESULTS FOR LPP 44

Page 45: 4.1 Lecture_1e

Remark 5 Though the above results have been proved for the LPP (1)

which is in the maximization form, these results hold for LPP’s in the

minimization form as well.

SOME ELEMENTARY RESULTS FOR LPP 45

Page 46: 4.1 Lecture_1e

In view of the above discussion, we observe that because of the linear structure of LPP's, the properties given below are guaranteed. Further, it is only because of these properties that we succeeded in developing a method like the simplex method to solve LPP's. These properties are

(P1) The feasible region of an LPP is always a convex set. In fact it is a polyhedron/polytope.

(P2) If the given LPP has an optimal solution then at least one

corner point (extreme point) of the feasible region is optimal.

(P3) For LPP’s, every local max point is a global max point. Also

every local min point is a global min point.

SOME ELEMENTARY RESULTS FOR LPP 46


Page 49: 4.1 Lecture_1e

Thus properties (P1), (P2) and (P3) are very basic to the algorithmic study of LPP's, and the main reason for having them is the presence of the structure of linearity.

SOME ELEMENTARY RESULTS FOR LPP 49

Page 50: 4.1 Lecture_1e

THE SIMPLEX ALGORITHM: MAIN THEOREMS

THE SIMPLEX ALGORITHM: MAIN THEOREMS 50

Page 51: 4.1 Lecture_1e

Let us consider the LPP in the standard form, i.e.

max z = CTX

subject to

AX = b and X ≥ 0, (5)

with (i) b ≥ 0 and (ii) rank A = m (< n).

• Here X ∈ Rn,C ∈ Rn,b ∈ Rm and A = [aij ] is an (m× n) matrix.

• Also

S = {X ∈ Rn| AX = b, X ≥ 0}

is the feasible region of LPP (5).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 51

Page 52: 4.1 Lecture_1e

• Recall our discussion of the simplex algorithm in the last lecture

and note that the most basic concept introduced there is the

concept of the basic feasible solution XB for the given basis

matrix B.

• Write

A = [ B  R ]

and accordingly partition the vector X as

X = [XB, XR]T.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 52

Page 53: 4.1 Lecture_1e

• Then the system

AX = b

can be written as

[ B  R ] [XB, XR]T = b,

i.e. BXB + RXR = b,

i.e. XB = B−1b − B−1RXR, (6)

where the inverse B−1 of B exists.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 53

Page 54: 4.1 Lecture_1e

• – Equation (6), i.e.,

XB = B−1b− B−1RXR

is the well known result which states that if rankA = m(< n) then

the system AX = b will have infinitely many solutions depending

upon (n−m) parameters XR.

– If we choose XR = 0 then (6) gives XB = B−1b, and this solution, namely (XB = B−1b, XR = 0), is what we have called the basic solution for the basis matrix B.

– Further, if XB ≥ 0 then we call the solution the basic feasible solution for the basis matrix B.

– Also recall other notations introduced in the last lecture, namely CB, yj = B−1aj (j = 1, · · · , n), z(XB) = CTBXB and zj = CTByj (j = 1, · · · , n).

We now have the following main theorems, which have been used

in the last lecture for the development of the simplex algorithm.
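As a quick illustration of these notations, the following NumPy sketch (an assumed setup, not part of the slides) computes XB = B−1b, yj = B−1aj and the reduced costs zj − cj for a chosen basis; the data used are those of Example 4 later in this lecture.

```python
# A minimal sketch of the quantities X_B = B^{-1} b, y_j = B^{-1} a_j, z_j = C_B^T y_j.
import numpy as np

def basis_quantities(A, b, C, basic_idx):
    """Return X_B, the columns y_j, and the reduced costs z_j - c_j for a basis."""
    B = A[:, basic_idx]                   # basis matrix built from the chosen columns
    B_inv = np.linalg.inv(B)
    X_B = B_inv @ b                       # basic solution; feasible when X_B >= 0
    Y = B_inv @ A                         # y_j = B^{-1} a_j, stored column by column
    C_B = C[basic_idx]
    z_minus_c = C_B @ Y - C               # z_j - c_j for every column j
    return X_B, Y, z_minus_c

# Data of Example 4 below, with the slack columns x3, x4 forming the starting basis.
A = np.array([[1.0, -1.0, 1.0, 0.0],
              [-3.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 4.0])
C = np.array([2.0, 3.0, 0.0, 0.0])
print(basis_quantities(A, b, C, [2, 3]))  # X_B = [2, 4], z_j - c_j = [-2, -3, 0, 0]
```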

THE SIMPLEX ALGORITHM: MAIN THEOREMS 54

Page 55: 4.1 Lecture_1e

Theorem 6 Every extreme point of the set S is a b.f.s to the system of

equations AX = b, X ≥ 0 and conversely every b.f.s of the above system

is an extreme point of the set S.

Proof.

1. Let X be a b.f.s of the system AX = b, X ≥ 0.

We wish to prove that X is an extreme point of the set S.

As X is a b.f.s., it should be a b.f.s for a given basis matrix B and

therefore we can write X = (XB = B−1b, XR = 0).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 55

Page 56: 4.1 Lecture_1e

We argue by contradiction:

• If possible, let X not be an extreme point of S, where

S = {X ∈ Rn| AX = b, X ≥ 0}.

• Then, by the definition of an extreme point, this implies that

there exist u ∈ S, v ∈ S, u ≠ v, and 0 < λ < 1 such that

X = λu + (1− λ)v.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 56

Page 57: 4.1 Lecture_1e

• Now partitioning u and v as per the partition of X, we have

u = [uB, uR]T and v = [vB, vR]T.

• Then X = λu + (1 − λ)v gives

[XB, 0]T = λ[uB, uR]T + (1 − λ)[vB, vR]T.

That is,

XB = λuB + (1 − λ)vB (7)

0 = λuR + (1 − λ)vR (8)

• But Au = b, u ≥ 0; Av = b, v ≥ 0.

• Also 0 < λ < 1, and hence (8) gives uR = 0 = vR.

• This together with Au = b and Av = b gives uB = B−1b and vB = B−1b.

• Thus u = v, which contradicts that u and v are distinct.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 57

Page 58: 4.1 Lecture_1e

• Therefore X is an extreme point of S.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 58

Page 59: 4.1 Lecture_1e

2. Next let

X∗ = [x1, x2, · · · , xn]T ∈ Rn

be an extreme point of S.

We shall prove that X∗ is a b.f.s to the system AX = b, X ≥ 0.

• – For this, it is enough to show that columns aj of A

corresponding to nonzero components of X∗ are linearly

independent.

– Let k be the number of non-zero components of X∗.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 59

Page 60: 4.1 Lecture_1e

– Then without any loss of generality we can assume that

X∗ = [x∗1, x∗2, · · · , x∗k, 0, 0, · · · , 0]T,

and then show that the columns a1, a2, · · · , ak are linearly independent.

– Here we should note that k ≤ m, as rank(A) = m.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 60

Page 61: 4.1 Lecture_1e

– For k = m, the linear independence of a1, a2, · · · , ak will give

that X∗ is a non-degenerate b.f.s.

– However for k < m, the columns a1, a2, · · · , ak will not form a

basis even if they are linearly independent.

– But then we can always augment (m− k) more columns such

that columns a1, a2, · · · , ak together with these (m− k)

columns are linearly independent and hence form a basis

(we may recollect that in a finite dimensional vector space

every set of linearly independent vectors can be extended to

form a basis).

– Now the components of X∗ corresponding to these (m − k) augmented columns are basic variables at the zero level, and therefore X∗ becomes a degenerate b.f.s.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 61

Page 62: 4.1 Lecture_1e

• – Now we proceed to prove that the columns a1, a2, · · · , ak are linearly independent.

– If possible, let these be linearly dependent, so that there exist scalars λi (not all zero) such that

∑_{i=1}^{k} λi ai = 0. (9)

– Also AX∗ = b, X∗ ≥ 0.

– Therefore

∑_{i=1}^{k} x∗i ai = b, x∗i ≥ 0 (i = 1, · · · , k). (10)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 62

Page 63: 4.1 Lecture_1e

– Now define

η = min_i { x∗i / |λi| : λi ≠ 0 }

and choose ε > 0 such that 0 < ε < η.

– Multiplying (9) by ε gives

∑_{i=1}^{k} ελi ai = 0. (11)

– Adding (11) to and subtracting (11) from (10), we obtain

∑_{i=1}^{k} (x∗i + ελi) ai = b, (x∗i + ελi) > 0 (i = 1, · · · , k), (12)

and

∑_{i=1}^{k} (x∗i − ελi) ai = b, (x∗i − ελi) > 0 (i = 1, · · · , k). (13)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 63

Page 64: 4.1 Lecture_1e

– Since 0 < ε < η, we indeed have (x∗i + ελi) > 0 and (x∗i − ελi) > 0 for i = 1, · · · , k.

– Now take

∗ Λ = [λ1, λ2, · · · , λk, 0, 0, · · · , 0]T ∈ Rn;

∗ X1 = X∗ + εΛ;

∗ X2 = X∗ − εΛ.

– Then X1 ≥ 0 and X2 ≥ 0.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 64

Page 65: 4.1 Lecture_1e

– Also, because of (9) and (10), i.e.,

∑_{i=1}^{k} λi ai = 0 and ∑_{i=1}^{k} x∗i ai = b, x∗i ≥ 0 (i = 1, · · · , k),

we have

AX1 = b, AX2 = b.

– Therefore X1 ∈ S and X2 ∈ S, where

S = {X ∈ Rn| AX = b, X ≥ 0}

is the feasible region of LPP (5).

– But

X∗ = (X1 + X2)/2,

which contradicts that X∗ is an extreme point of S.

– Hence the columns a1, a2, · · · , ak are linearly independent, as desired.

– This proves that X∗ is a b.f.s of the system AX = b, X ≥ 0. □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 65

Page 66: 4.1 Lecture_1e

Let us now recall the simplex tableau

                  y1    y2   · · ·   yj    · · ·   yn
      XB          x1    x2   · · ·   xj    · · ·   xn
      xB1                            y1j
      xB2                            y2j
      ...                            ...
      xBr                            yrj
      ...                            ...
      xBm                            ymj
     z(XB)                        (zj − cj)

and the associated results which have been used in the stepwise description of the simplex method.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 66

Page 67: 4.1 Lecture_1e

We shall prove these results now.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 67

Page 68: 4.1 Lecture_1e

Theorem 7 If some zj − cj < 0 and for that j some yij > 0, then there exists a new b.f.s. X̄B such that z(X̄B) ≥ z(XB).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 68

Page 69: 4.1 Lecture_1e

Proof. Let the hypotheses of the theorem hold for the current b.f.s. XB corresponding to the basis matrix

B = [b1 b2 · · · br−1 br br+1 · · · bm−1 bm].

Let B be changed to B̄, where

B̄ = [b1 b2 · · · br−1 aj br+1 · · · bm−1 bm],

i.e., from B, the column br has been taken out and in its place the column aj (a column of A) has been entered.

Page 70: 4.1 Lecture_1e

So far we have not put any conditions on j and r, but obviously there must be some conditions on j (entered) and r (taken out) such that B̄ gives a b.f.s. X̄B with

z(X̄B) ≥ z(XB).

Then to prove the theorem, we have to show that the conditions on j and r chosen here can be met under the hypotheses of the theorem.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 70

Page 71: 4.1 Lecture_1e

Now the first thing we require is that

the columns of B̄ are linearly independent, because only then will X̄B be a basic solution.

Next we wish that

every component x̄Bi of X̄B is non-negative, so that it is a b.f.s.; and finally this X̄B should be such that

z(X̄B) ≥ z(XB).

We now consider all of these one by one.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 71

Page 72: 4.1 Lecture_1e

Since aj is a non-basic column and the columns of B form a basis for Rm, we have

aj = y1jb1 + · · · + yrjbr + · · · + ymjbm. (14)

Then if br is replaced by aj to get the new matrix B̄, and we wish the columns of B̄ to be linearly independent, then by the replacement theorem of vector spaces we must have yrj ≠ 0.

(• Let V be a vector space of dimension n with basis v1, v2, · · · , vn.

• For u ∈ V , we have u = α1v1 + α2v2 + · · · + αrvr + · · · + αnvn.

• The Replacement Theorem states that if αr ≠ 0 then (v1, · · · , vr−1, u, vr+1, · · · , vn) is also a basis of V .)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 72

Page 73: 4.1 Lecture_1e

So the first condition on r (taken out) and j (entered) is that yrj ≠ 0; this gives that the columns of B̄ are linearly independent, and hence B̄ is a basis matrix which gives the basic solution X̄B.

Now we attempt to find X̄B explicitly. For this we note that XB = B−1b, i.e. BXB = b, or in terms of components,

xB1b1 + xB2b2 + · · · + xBrbr + · · · + xBmbm = ∑_{i=1}^{m} xBi bi = b, (15)

i.e.

∑_{i=1, i≠r}^{m} xBi bi + xBr br = b.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 73

Page 74: 4.1 Lecture_1e

But as yrj ≠ 0, (14), i.e.,

aj = y1jb1 + · · · + yrjbr + · · · + ymjbm,

gives

br = (1/yrj) ( aj − ∑_{i=1, i≠r}^{m} yij bi ), (16)

which on substitution in (15), i.e.,

∑_{i=1, i≠r}^{m} xBi bi + xBr br = b,

gives

∑_{i=1, i≠r}^{m} xBi bi + (xBr/yrj) ( aj − ∑_{i=1, i≠r}^{m} yij bi ) = b. (17)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 74


Page 76: 4.1 Lecture_1e

i.e.

∑_{i=1, i≠r}^{m} ( xBi − (xBr yij)/yrj ) bi + (xBr/yrj) aj = b. (20)

Therefore if we define X̄B by

x̄Bi = xBi − (xBr yij)/yrj (i ≠ r), x̄Br = xBr/yrj (i = r), (21)

then (20) becomes B̄X̄B = b, i.e. X̄B = (B̄)−1 b is the basic solution whose components are given in (21).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 76

Page 77: 4.1 Lecture_1e

Having obtained the basic solution X̄B for the new basis matrix B̄ (for which we needed the condition yrj ≠ 0 on r (taken out) and j (entered)), we now have to put additional conditions on r and j so that X̄B becomes basic feasible.

For this we need all components of X̄B to be non-negative. Now looking at equation (21), i.e.,

x̄Bi = xBi − (xBr yij)/yrj (i ≠ r), x̄Br = xBr/yrj (i = r),

we note that for x̄Br to be non-negative, we need yrj > 0.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 77



Page 80: 4.1 Lecture_1e

Consider

x̄Bi = xBi − (xBr yij)/yrj (i ≠ r), x̄Br = xBr/yrj (i = r).

For i ≠ r, the component x̄Bi is certainly non-negative for those i for which yij ≤ 0. So we only have to worry about those i for which yij > 0. Thus we want that for yij > 0 as well

xBi − (xBr/yrj) yij ≥ 0,

i.e.

xBi/yij − xBr/yrj ≥ 0,

i.e.

xBr/yrj = min_i { xBi/yij : yij > 0 }. (23)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 80

Page 81: 4.1 Lecture_1e

The relation (23) is called the minimum ratio criterion, i.e.,

xBr/yrj = min_i { xBi/yij : yij > 0 },

and it guarantees that, irrespective of j, as long as for that j some yij > 0, if we choose r (taken out) as per (23) then the next solution X̄B is certainly basic feasible.
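The ratio test amounts to a few lines of code; the sketch below (illustrative names, assuming NumPy) returns the leaving row r, using the first tableau of Example 4 below as a check.

```python
# A sketch of the minimum ratio criterion (23); variable names are illustrative.
import numpy as np

def min_ratio_row(X_B, y_j):
    """Return the leaving row r = argmin { x_Bi / y_ij : y_ij > 0 }, or None if no y_ij > 0."""
    positive = y_j > 0
    if not np.any(positive):
        return None                          # Theorem 9 case: unbounded objective
    ratios = np.full_like(y_j, np.inf, dtype=float)
    ratios[positive] = X_B[positive] / y_j[positive]
    return int(np.argmin(ratios))

# Data from the first tableau of Example 4 below, entering column j = 2 (i.e. y2):
print(min_ratio_row(np.array([2.0, 4.0]), np.array([-1.0, 1.0])))   # 1 -> the x4 row leaves
```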

THE SIMPLEX ALGORITHM: MAIN THEOREMS 81

Page 82: 4.1 Lecture_1e

Next we wish to put a condition on j (entered) (note that the condition on r (taken out) has already been fixed as per equation (23)) so that

z(X̄B) ≥ z(XB).

For this we note that

z(XB) = CTBXB = ∑_{i=1}^{m} cBi xBi

and

z(X̄B) = ∑_{i=1}^{m} c̄Bi x̄Bi.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 82


Page 86: 4.1 Lecture_1e

z(X̄B) = ∑_{i=1}^{m} c̄Bi x̄Bi

= ∑_{i=1, i≠r}^{m} c̄Bi x̄Bi + c̄Br x̄Br

= ∑_{i=1, i≠r}^{m} cBi ( xBi − (xBr/yrj) yij ) + cj (xBr/yrj)

= ∑_{i=1}^{m} cBi ( xBi − (xBr/yrj) yij ) + cj (xBr/yrj) (27)

• In (27), c̄Br = cj as the entering column is aj, and for that the corresponding basic variable is xj whose coefficient in the objective function is cj.

• For i ≠ r, the basic columns have not changed and so c̄Bi = cBi. Also, including i = r in the summation amounts to adding a 'zero' value in (27) and hence there is no change.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 86


Page 88: 4.1 Lecture_1e

z(X̄B) = ∑_{i=1}^{m} cBi xBi − ∑_{i=1}^{m} cBi (xBr/yrj) yij + cj (xBr/yrj)

= ∑_{i=1}^{m} cBi xBi − (xBr/yrj) ( ∑_{i=1}^{m} cBi yij − cj )

= z(XB) − (xBr/yrj)(zj − cj),

i.e.

z(X̄B) = z(XB) − (xBr/yrj)(zj − cj). (29)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 88

Page 89: 4.1 Lecture_1e

Now if we choose our j (entered) such that (zj − cj) < 0, then from (29), i.e.,

z(X̄B) = z(XB) − (xBr/yrj) (zj − cj), with (zj − cj) < 0 and xBr/yrj ≥ 0,

we have z(X̄B) ≥ z(XB).

Thus we must have some j for which (zj − cj) < 0 and, for that j, some yij > 0. Then we can choose r as in (23), i.e.,

xBr/yrj = min_i { xBi/yij : yij > 0 },

and get a new b.f.s. X̄B with z(X̄B) ≥ z(XB).

This proves the theorem. □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 89

Page 90: 4.1 Lecture_1e

Theorem 8 If all (zj − cj) ≥ 0 then the current b.f.s. XB is optimal to the

given LPP.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 90

Page 91: 4.1 Lecture_1e

Proof.

• Let X ∈ S be an arbitrary point.

• We have to show that, under the condition (zj − cj) ≥ 0, ∀j, we have z(XB) ≥ z(X), where z(XB) = CTBXB and z(X) = CTX.

• For this we start with zj − cj ≥ 0, ∀j, and as xj ≥ 0, ∀j, we have zjxj ≥ cjxj, ∀j.

• Now summing over j, we get

∑_{j=1}^{n} zj xj ≥ ∑_{j=1}^{n} cj xj,

i.e.

∑_{j=1}^{n} ( ∑_{i=1}^{m} cBi yij ) xj ≥ CTX.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 91


Page 93: 4.1 Lecture_1e

• But on the LHS these are finite sums, so we can interchange the order of summation and get

∑_{i=1}^{m} cBi ( ∑_{j=1}^{n} yij xj ) ≥ CTX. (30)

• From (30) we observe that the theorem will be proved if we can show that

xBi = ∑_{j=1}^{n} yij xj, (31)

because then the LHS of (30) will be CTBXB, which equals z(XB).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 93

Page 94: 4.1 Lecture_1e

• Now to prove (31) we note that AX = b, and hence

XB = B−1b = B−1AX,

i.e. XB = B−1 [a1 a2 · · · aj · · · an] X,

i.e. XB = [B−1a1 B−1a2 · · · B−1aj · · · B−1an] X,

i.e. XB = [y1 y2 · · · yj · · · yn] X. (32)

• But (32) is a vector equality and therefore we have to equate componentwise, i.e.,

THE SIMPLEX ALGORITHM: MAIN THEOREMS 94

Page 95: 4.1 Lecture_1e

[xB1, xB2, · · · , xBi, · · · , xBm]T = [yij] [x1, x2, · · · , xj, · · · , xn]T,

where [yij] is the m × n matrix with (i, j) entry yij.

• Equating the ith components of the two sides,

xBi = yi1x1 + yi2x2 + · · · + yinxn.

• Therefore xBi = ∑_{j=1}^{n} yij xj, as desired. □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 95

Page 96: 4.1 Lecture_1e

Theorem 9 If some (zj − cj) < 0 and for that j all yij ≤ 0, then the given LPP has an unbounded solution.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 96

Page 97: 4.1 Lecture_1e

Proof.

• Let us choose that j for which the hypotheses of the theorem

hold, i.e. (zj − cj) < 0 and yij ≤ 0 for all i = 1, · · · , m.

• Now for the current b.f.s. XB we have BXB = b, or

∑_{i=1}^{m} xBi bi = b. (33)

• If we choose θ ∈ R arbitrarily, then (33) can be rewritten as

∑_{i=1}^{m} xBi bi + θaj − θaj = b. (34)

• But aj is a non-basic column and therefore

aj = ∑_{i=1}^{m} yij bi. (35)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 97

Page 98: 4.1 Lecture_1e

• Substituting for aj from (35) into (34), we get

∑_{i=1}^{m} xBi bi + θaj − θ ( ∑_{i=1}^{m} yij bi ) = b,

i.e.

∑_{i=1}^{m} (xBi − θyij) bi + θaj = b. (36)

• Now define a vector X as

X = [x1, · · · , xm, xm+1, 0, 0, · · · , 0]T,

where

xi = xBi − θyij (i = 1, · · · , m) and xm+1 = θ. (37)

THE SIMPLEX ALGORITHM: MAIN THEOREMS 98

Page 99: 4.1 Lecture_1e

∑_{i=1}^{m} (xBi − θ yij) bi + θ aj = b, where yij ≤ 0 for all i.

• Then (36) gives

AX = b.

If we further choose θ > 0, then X ≥ 0, as yij ≤ 0, ∀i.

• Therefore, using the condition yij ≤ 0, ∀i, and knowing the current b.f.s. XB, we have been able to construct a feasible solution X as described above.

• Here we may note that X is feasible but NOT basic feasible, because in (36) there are m + 1 columns, namely b1, b2, · · · , bm and also aj.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 99

Page 100: 4.1 Lecture_1e

• So far we have not used the condition (zj − cj) < 0. For this, let us find the value of the objective function at X, i.e.

z(X) = ∑_{i=1}^{m} cBi xi + cj xm+1

= ∑_{i=1}^{m} cBi (xBi − θyij) + θcj

= ∑_{i=1}^{m} cBi xBi − θ ( ∑_{i=1}^{m} cBi yij − cj )

= ∑_{i=1}^{m} cBi xBi − θ (zj − cj)

= z(XB) − θ (zj − cj),

THE SIMPLEX ALGORITHM: MAIN THEOREMS 100

Page 101: 4.1 Lecture_1e

i.e.,

z(X) = z(XB) − θ (zj − cj), with (zj − cj) < 0 and θ > 0. (38)

• Since (zj − cj) < 0 and θ > 0 is arbitrary, equation (38) tells us that the objective function value z(X) can be made arbitrarily large by choosing θ > 0 arbitrarily large, i.e., z(X) → ∞.

This proves that the given LPP has an unbounded solution. □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 101

Page 102: 4.1 Lecture_1e

One important point to be noted in the above proof is that it

• not only shows that the given LPP has an unbounded solution,

• but also constructs the feasible point X at which any arbitrarily large chosen value z(X) of the objective function is attained.

The following example is illustrative in this context.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 102

Page 103: 4.1 Lecture_1e

Example 4 Use the simplex method to verify that the following LPP has an unbounded solution:

max z = 2x1 + 3x2

subject to

x1 − x2 ≤ 2

−3x1 + x2 ≤ 4

and x1, x2 ≥ 0.

Hence obtain a feasible solution for which the objective function

takes the value 496.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 103

Page 104: 4.1 Lecture_1e

Solution

Introducing the slack variables x3 and x4 and solving the problem as usual by the simplex method, we get the following:

X = [x1, x2, x3, x4]T, b = [2, 4]T, C = [c1, c2, c3, c4]T = [2, 3, 0, 0]T,

and

A = [ 1  −1  1  0 ; −3  1  0  1 ],

with rank A = 2 (< 4).

THE SIMPLEX ALGORITHM: MAIN THEOREMS 104

Page 105: 4.1 Lecture_1e

Now if we take B = [ 1  0 ; 0  1 ] as the starting basis matrix, then B = B−1 = I2.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 105

Page 106: 4.1 Lecture_1e

XB = [xB1, xB2]T = [x3, x4]T = B−1b = I2 [2, 4]T = [2, 4]T

y1 = [y11, y21]T = B−1a1 = [1, −3]T

y2 = [y12, y22]T = B−1a2 = [−1, 1]T

y3 = [y13, y23]T = B−1a3 = [1, 0]T

y4 = [y14, y24]T = B−1a4 = [0, 1]T

CB = [cB1, cB2]T = [c3, c4]T = [0, 0]T

THE SIMPLEX ALGORITHM: MAIN THEOREMS 106

Page 107: 4.1 Lecture_1e

z(XB) = CTBXB = 0,

z1 = CTBy1 = 0, z2 = CTBy2 = 0, z3 = CTBy3 = 0, z4 = CTBy4 = 0,

z1 − c1 = 0 − 2 = −2, z2 − c2 = 0 − 3 = −3, z3 − c3 = 0 − 0 = 0, z4 − c4 = 0 − 0 = 0,

and we get the following tableaus.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 107

Page 108: 4.1 Lecture_1e

cj :                    (2    3    0    0)
                         y1   y2   y3   y4
CB    XB                 x1   x2   x3   x4
0     x3       2          1   -1    1    0
0     x4       4         -3    1    0    1
      zj − cj  z(XB) = 0  -2   -3    0    0

cj :                    (2    3    0    0)
                         y1   y2 ↓ y3   y4
CB    XB                 x1   x2   x3   x4
0     x3       2          1   -1    1    0
0     x4       4         -3    1    0    1
      zj − cj  z(XB) = 0  -2   -3    0    0

THE SIMPLEX ALGORITHM: MAIN THEOREMS 108

Page 109: 4.1 Lecture_1e

cj :                    (2    3    0    0)
                         y1   y2 ↓ y3   y4   Min. Ratio
CB    XB                 x1   x2   x3   x4
0     x3       2          1   -1    1    0   −−
0     x4 ←     4         -3    1    0    1   4/1 = 4
      zj − cj  z(XB) = 0  -2   -3    0    0

cj :                     (2    3    0    0)
                          y1   y2   y3   y4
CB    XB                  x1   x2   x3   x4
0     x3       6          -2    0    1    1        R1 + R2
3     x2       4          -3    1    0    1
      zj − cj  z(XB) = 12 -11    0    0    3        R3 + 3 × R2

Now Theorem 9 is applicable, which confirms that the given LPP has

THE SIMPLEX ALGORITHM: MAIN THEOREMS 109

Page 110: 4.1 Lecture_1e

unbounded solution.

Next we have to find a feasible solution X for which the objective function attains the value 496.

In terms of our notations, z(X) = 496, z(XB) = 12 and (z1 − c1) = −11. Therefore, using the relation (38), i.e.,

z(X) = z(XB) − θ(zj − cj),

we get

496 = 12 − θ(−11),

i.e.

θ = 484/11 = 44.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 110

Page 111: 4.1 Lecture_1e

Now we use (37), i.e.,

xi = xBi − θyij (i = 1, · · · , m) and xm+1 = θ,

to construct X. For this we note that here x3 and x2 are the basic variables and j = 1. Hence

X = [x∗1, x∗2, x∗3, x∗4]T with

x∗1 = θ = 44,

x∗2 = 4 − 44(−3) = 136,

x∗3 = 6 − 44(−2) = 94,

x∗4 = 0,

is the desired feasible solution.
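A quick numerical check (an illustrative NumPy sketch, not part of the slides) confirms that this X is feasible and attains the value 496.

```python
# Verifying the feasible point constructed above for Example 4.
import numpy as np

A = np.array([[1.0, -1.0, 1.0, 0.0],
              [-3.0, 1.0, 0.0, 1.0]])
b = np.array([2.0, 4.0])
C = np.array([2.0, 3.0, 0.0, 0.0])

theta = 44.0
X = np.array([theta, 4 - theta * (-3), 6 - theta * (-2), 0.0])   # [44, 136, 94, 0]

print(np.allclose(A @ X, b), np.all(X >= 0))   # True True  -> X is feasible
print(C @ X)                                   # 496.0      -> the required objective value
```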

THE SIMPLEX ALGORITHM: MAIN THEOREMS 111

Page 112: 4.1 Lecture_1e

Corollary 1 If all (zj − cj) ≥ 0 and there exists a j such that yij ≤ 0 for all i = 1, · · · , m, then the given LPP has an unbounded feasible region but a bounded optimal solution.

Proof.

• As (zj − cj) ≥ 0 for all j, by Theorem 8 the current solution XB is optimal, so the given LPP has a bounded optimal solution.

• That the feasible region is unbounded follows from the construction of X in the proof of Theorem 9.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 112

Page 113: 4.1 Lecture_1e

Theorem 10 If in the optimal simplex tableau, for some non-basic

variable xj , zj − cj = 0 and for that some yij > 0, then the given LPP has

infinitely many optimal solutions.

Proof.

• We already have one optimal solution, namely the current one.

• Therefore the theorem will be proved if we can produce one more optimal solution and then use the fact that the set of all optimal solutions of a given LPP is a convex set.

• For this, choose the j for which the hypothesis of the theorem holds.

• As some yij > 0, we can certainly enter xj and do one more iteration, choosing the variable to leave as per the minimum ratio criterion.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 113

Page 114: 4.1 Lecture_1e

• This will give another basic feasible solution X̄B, which will be optimal because z(X̄B) = z(XB).

• This follows from the fact that

z(X̄B) = z(XB) − (xBr/yrj)(zj − cj),

and (zj − cj) = 0.

• This proves the theorem. □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 114

Page 115: 4.1 Lecture_1e

Theorem 11 Let

B = [b1 b2 · · · br−1 br br+1 · · · bm−1 bm]

be the basis matrix of the current b.f.s. Let the column to enter, aj, and the column to leave, br, be determined as per the criteria of Theorem 7. Then for the basis matrix

B̄ = [b1 b2 · · · br−1 aj br+1 · · · bm−1 bm],

the new b.f.s. X̄B, the new objective function value z(X̄B), the new vectors ȳk and the new values of (z̄k − c̄k) are given by

(i) x̄Bi = xBi − (xBr yij)/yrj (i ≠ r), x̄Br = xBr/yrj (i = r);

(ii) z(X̄B) = z(XB) − (xBr/yrj)(zj − cj);

(iii) ȳik = yik − (yij yrk)/yrj (i ≠ r), ȳrk = yrk/yrj (i = r);

(iv) (z̄k − c̄k) = (zk − ck) − (yrk/yrj)(zj − cj).
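These four relations are exactly the usual pivoting rules. The NumPy sketch below (illustrative, with assumed variable names) applies (i)-(iv) to the first tableau of Example 4 and reproduces the second tableau shown earlier.

```python
# A sketch of the pivoting relations (i)-(iv) of Theorem 11.
import numpy as np

def pivot(X_B, Y, z_minus_c, z_val, r, j):
    """Update the tableau data when column a_j enters and row r leaves."""
    X_B, Y, z_minus_c = X_B.copy(), Y.copy(), z_minus_c.copy()
    y_rj = Y[r, j]                                    # pivot element, must be > 0
    z_new = z_val - (X_B[r] / y_rj) * z_minus_c[j]    # (ii) new objective value
    X_new = X_B - (X_B[r] / y_rj) * Y[:, j]           # (i) new basic solution ...
    X_new[r] = X_B[r] / y_rj                          # ... with the pivot row handled separately
    Y_new = Y - np.outer(Y[:, j] / y_rj, Y[r, :])     # (iii) new y-columns
    Y_new[r, :] = Y[r, :] / y_rj
    zc_new = z_minus_c - (Y[r, :] / y_rj) * z_minus_c[j]   # (iv) new reduced costs
    return X_new, Y_new, zc_new, z_new

# One pivot on the first tableau of Example 4 (enter x2, i.e. j = 1; leave row r = 1, i.e. x4):
X_B = np.array([2.0, 4.0])
Y = np.array([[1.0, -1.0, 1.0, 0.0], [-3.0, 1.0, 0.0, 1.0]])
zc = np.array([-2.0, -3.0, 0.0, 0.0])
print(pivot(X_B, Y, zc, 0.0, r=1, j=1))   # X_B = [6, 4], z = 12, z_j - c_j = [-11, 0, 0, 3]
```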

THE SIMPLEX ALGORITHM: MAIN THEOREMS 115

Page 116: 4.1 Lecture_1e

Proof.

Here we must note that proving the above four relations is equivalent to saying that pivoting is justified.

We have already proved (as part of the proof of Theorem 7) the first

two relations and therefore we shall prove relations (iii) and (iv) only.

THE SIMPLEX ALGORITHM: MAIN THEOREMS 116

Page 117: 4.1 Lecture_1e

Let us go back to the proof of Theorem 7 and observe that

br = (1/yrj) ( aj − ∑_{i=1, i≠r}^{m} yij bi ).

But

ak = B yk = ∑_{i=1, i≠r}^{m} yik bi + yrk br,

i.e.

ak = ∑_{i=1, i≠r}^{m} yik bi + (yrk/yrj) ( aj − ∑_{i=1, i≠r}^{m} yij bi ),

THE SIMPLEX ALGORITHM: MAIN THEOREMS 117


Page 119: 4.1 Lecture_1e

i.e.

ak = ∑_{i=1, i≠r}^{m} ( yik − (yrk yij)/yrj ) bi + (yrk/yrj) aj

= ∑_{i=1, i≠r}^{m} ȳik bi + ȳrk aj

= B̄ ȳk,

where

ȳik = yik − (yij yrk)/yrj (i ≠ r), ȳrk = yrk/yrj (i = r),

which proves relation (iii). □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 119

Page 120: 4.1 Lecture_1e

Next, let us consider the expression for (z̄k − c̄k). By definition,

(z̄k − c̄k) = ∑_{i=1}^{m} c̄Bi ȳik − ck

= ∑_{i=1, i≠r}^{m} cBi ȳik + cj ȳrk − ck

= ∑_{i=1, i≠r}^{m} cBi ( yik − (yrk yij)/yrj ) + cj (yrk/yrj) − ck

= ∑_{i=1}^{m} cBi ( yik − (yrk yij)/yrj ) + cj (yrk/yrj) − ck

= ( ∑_{i=1}^{m} cBi yik − ck ) − (yrk/yrj) ( ∑_{i=1}^{m} cBi yij − cj ),

i.e. (z̄k − c̄k) = (zk − ck) − (yrk/yrj)(zj − cj), which proves result (iv). □

THE SIMPLEX ALGORITHM: MAIN THEOREMS 120
