Inequalities for Stochastic Linear Programming Problems, by Albert Madansky. Presented by Kevin Byrnes.


TRANSCRIPT

Page 1: Inequalities for Stochastic Linear Programming Problems By Albert Madansky Presented by Kevin Byrnes

Inequalities for Stochastic Linear Programming Problems
By Albert Madansky
Presented by Kevin Byrnes

Page 2:

Outline

Introduction
Definition of Terms
Convexity and Continuity
Putting it all together: Jensen's Inequality
Conditions for Equality
An Application
Improving Bounds
Critique

'An Application' © Kevin Byrnes, 2006

Page 3:

Introduction

In this presentation, we shall consider stochastic linear programs with recourse, i.e. problems of the form:

(i) Minimize c^T x + f^T y
Subject to: Ax + By = b
x, y >= 0

where the distribution of b is assumed known, but the specific value is not.
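To make (i) concrete, here is a minimal, hypothetical one-dimensional instance (the cost numbers are illustrative and not from Madansky's paper): take A = [1] and B = [1, -1], so the constraint reads x + y1 - y2 = b. For a fixed x and a realized b, the optimal recourse sets y1 to the shortage and y2 to the surplus, giving C(b, x) in closed form:

```python
# Hypothetical 1-D instance of (i): minimize c*x + f1*y1 + f2*y2
# subject to x + y1 - y2 = b, x, y1, y2 >= 0.
# For fixed x and realized b, the optimal recourse is y1 = max(b-x, 0)
# (shortage) and y2 = max(x-b, 0) (surplus).
c, f1, f2 = 1.0, 3.0, 0.5  # illustrative cost data, not the paper's

def C(b, x):
    """C(b, x) = c*x + (minimal recourse cost f^T y for this b and x)."""
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

print(C(6.0, 4.0))  # shortage of 2 units -> 10.0
print(C(2.0, 4.0))  # surplus of 2 units -> 5.0
```

This toy C(b, x) is reused in the later numerical checks.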

Page 4:

Introduction

We may interpret (i) in two different ways. First, that we wish to solve the problem all in one stage, and thus wish to find min_x (c^T x + E[min_y f^T y]).

Page 5:

Introduction

We may interpret (i) in two different ways. Second, that we wish to generate realizations of b, and solve a sequence of deterministic linear programs. In this case, we are really solving: min_x (c^T x + min_{y|x} f^T y).

Page 6:

Introduction

We would generally expect that:

min_x (c^T x + E[min_y f^T y]) >= E[min_x (c^T x + min_{y|x} f^T y)]

And, indeed, this will be the case. Madansky's paper investigates the circumstances under which equality holds above (i.e. when the 'Here and Now' objective function value is equal to the expected value of the 'Wait and See' approach).
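This expected relationship can be spot-checked numerically on a small hypothetical instance (a one-dimensional recourse cost with a two-point distribution for b; all numbers are illustrative), with a grid search standing in for an LP solver:

```python
# Illustrative 1-D recourse cost: c per unit of x, f1 per unit of
# shortage, f2 per unit of surplus.
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

dist = [(2.0, 0.5), (6.0, 0.5)]        # hypothetical two-point b
xs = [i / 100.0 for i in range(1001)]  # grid of candidate x in [0, 10]

# 'Here and now': commit to one x before b is revealed.
here_and_now = min(sum(p * C(b, x) for b, p in dist) for x in xs)

# 'Wait and see': observe b, then optimize x, then average.
wait_and_see = sum(p * min(C(b, x) for x in xs) for b, p in dist)

print(here_and_now, wait_and_see)  # -> 7.0 4.0
assert here_and_now >= wait_and_see
```

The gap (7.0 versus 4.0) is the price of committing to x before b is known.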

Page 7:

Introduction

Finally, a related, but computationally simpler problem is to solve:

(ii) Minimize c^T x + f^T y
Subject to: Ax + By = E[b]
x, y >= 0

which is simply an LP with the random vector b replaced by its mean. We shall shortly see that the value of (ii) is, in fact, a useful lower bound for both interpretations of (i).
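On the same kind of illustrative instance (restated here so the snippet runs on its own), the value of the mean-value problem (ii) can be computed and compared against both interpretations of (i):

```python
# Illustrative 1-D recourse cost and two-point distribution for b.
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

dist = [(2.0, 0.5), (6.0, 0.5)]
xs = [i / 100.0 for i in range(1001)]
mean_b = sum(p * b for b, p in dist)  # E[b] = 4.0

# Value of (ii): the deterministic problem with b replaced by E[b].
lower = min(C(mean_b, x) for x in xs)

here_and_now = min(sum(p * C(b, x) for b, p in dist) for x in xs)
wait_and_see = sum(p * min(C(b, x) for x in xs) for b, p in dist)

# (ii) lower-bounds both interpretations of (i): 4.0 <= 4.0 <= 7.0
assert lower <= wait_and_see <= here_and_now
```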

Page 8:

Definition of Terms

Let our stochastic LP be given by (i), and let our 'approximate LP' be given by (ii).

Let min_y c^T x + f^T y such that (x, y) is (i)-feasible be denoted C(b, x)

(i.e. C is a function of the random variable realization and our decision variable).

Page 9:

Definition of Terms

Then we wish to find min_x E[C(b,x)] and E[min_x C(b,x)] for (i), and min_x C(E[b],x) for (ii).

Page 10:

Convexity and Continuity

Claim: min_x C(b,x) is convex in b.

Proof: Let b = t b1 + (1-t) b2, and let x(b_i) be a vector that minimizes C(b_i, x) subject to our constraints. Let x_bar = t x(b1) + (1-t) x(b2); clearly x_bar is then also (i)-feasible.

Page 11:

Convexity and Continuity

Now: b = A x_bar = t A x(b1) + (1-t) A x(b2) = t b1 + (1-t) b2

Thus: C(b, x_bar) = t C(b1, x(b1)) + (1-t) C(b2, x(b2))

Since min_x C(b,x) <= C(b, x_bar), we have that:

min_x C(b,x) <= t min_x C(b1,x) + (1-t) min_x C(b2,x), as desired.
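The convexity claim can be spot-checked numerically on the illustrative 1-D instance used throughout (grid search stands in for the LP solve; all numbers are hypothetical):

```python
# g(b) = min_x C(b, x) should satisfy
# g(t*b1 + (1-t)*b2) <= t*g(b1) + (1-t)*g(b2).
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

def g(b):
    return min(C(b, x) for x in [i / 100.0 for i in range(1001)])

b1, b2 = 2.0, 6.0
for t in [0.0, 0.25, 0.5, 0.75, 1.0]:
    b = t * b1 + (1 - t) * b2
    assert g(b) <= t * g(b1) + (1 - t) * g(b2) + 1e-9  # convexity in b
```

In this particular instance g is actually linear in b, so the convexity inequality holds with equality.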

Page 12:

Convexity and Continuity

So min_x C(b,x) is a convex function in b.

Page 13:

Convexity and Continuity

So min_x C(b,x) is a convex function in b. Now we claim that min_x C(b,x) is also continuous.

Proof: Clearly the feasible region for which there exists an x minimizing C(b,x) is convex in b (it's a polyhedron).

Page 14:

Convexity and Continuity

And we've just proven that min_x C(b,x) is a convex function. From elementary analysis it follows that min_x C(b,x) is continuous at every point in the interior of its domain. Getting continuity at the boundary is a routine exercise in limits.

Page 15:

Convexity and Continuity

Let b be a boundary point of our domain. Consider a sequence b_i -> b such that x(b_i) is defined as before. Now:

lim_{i->inf} min_x C(b_i,x) = lim_{i->inf} C(b_i, x(b_i)) >= min_x C(lim_{i->inf} b_i, x) = C(b, x(b))

Page 16:

Convexity and Continuity

By convexity of min_x C(b,x) we have:

lim_{i->inf} min_x C(b_i,x) <= min_x C(b,x)

Hence lim_{i->inf} min_x C(b_i,x) = min_x C(b,x), thus proving continuity.

(Note: if there were no boundary b, then we could have stopped with continuity in the interior.)

Page 17:

Putting It All Together: Jensen's Theorem

We've just shown that min_x C(b,x) is a convex, continuous function on a convex domain. Recall that Jensen's Theorem states that for any convex function J, and any real-valued function f, we have that:

<J o f> >= J(<f>), where <.> denotes 'average of'.
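A quick numerical sketch of Jensen's inequality, with an arbitrary convex J and a hypothetical two-point distribution:

```python
# Jensen: E[J(b)] >= J(E[b]) for convex J.
def J(t):
    return t * t  # any convex function works here

dist = [(2.0, 0.5), (6.0, 0.5)]     # illustrative two-point distribution
mean = sum(p * b for b, p in dist)  # E[b] = 4.0

assert sum(p * J(b) for b, p in dist) >= J(mean)  # 20.0 >= 16.0
```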

Page 18:

Putting It All Together: Jensen's Theorem

In particular, we have that:

E[min_x C(b,x)] >= min_x C(E[b],x)

Now observe that:

1) E[C(b, x(E[b]))] >= min_x E[C(b,x)]

since the expected cost of the fixed decision x(E[b]) is certainly at least as large as the minimum of the expectation of C(b,x) over x.

Page 19:

Putting It All Together: Jensen's Theorem

2) min_x E[C(b,x)] >= E[min_x C(b,x)]

To see 2), let x' be the value of x that minimizes E[C(b,x)], and let x(b) be defined as before. Then:

min_x E[C(b,x)] = E[C(b,x')], and
E[min_x C(b,x)] = E[C(b, x(b))]

Page 20:

Putting It All Together: Jensen's Theorem

Since:

C(b,x') >= C(b, x(b)) for every b,

we have that:

E[C(b,x')] >= E[C(b, x(b))]

Page 21:

Putting It All Together: Jensen's Theorem

Putting all of our inequalities together, we have:

E[C(b, x(E[b]))] >= min_x E[C(b,x)] >= E[min_x C(b,x)] >= min_x C(E[b],x)

With these inequalities, we shall demonstrate a necessary condition for

min_x E[C(b,x)] = E[min_x C(b,x)]

by achieving 'tightness'.
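The whole four-term chain can be verified numerically on the illustrative 1-D instance (restated so the snippet runs on its own; grid search stands in for the LP solves):

```python
# Chain: E[C(b, x(E[b]))] >= min_x E[C(b,x)]
#        >= E[min_x C(b,x)] >= min_x C(E[b],x)
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

dist = [(2.0, 0.5), (6.0, 0.5)]
xs = [i / 100.0 for i in range(1001)]
mean_b = sum(p * b for b, p in dist)

x_mean = min(xs, key=lambda x: C(mean_b, x))    # x(E[b])
upper = sum(p * C(b, x_mean) for b, p in dist)  # E[C(b, x(E[b]))]
here_and_now = min(sum(p * C(b, x) for b, p in dist) for x in xs)
wait_and_see = sum(p * min(C(b, x) for x in xs) for b, p in dist)
lower = min(C(mean_b, x) for x in xs)           # min_x C(E[b], x)

assert upper >= here_and_now >= wait_and_see >= lower
print(upper, here_and_now, wait_and_see, lower)  # -> 7.5 7.0 4.0 4.0
```

Note the last inequality is tight here, which foreshadows the equality condition on the next slides.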

Page 22:

Conditions for Equality

First consider the well-known (cf. Savage, The Foundations of Statistics, 1954, p. 265) result that when the probability measure of the set of b's is sigma-additive (i.e. the measure of the union of countably many disjoint parts of the domain equals the sum of their measures), or the set of b's is finite with probability one, then:

Page 23:

Conditions for Equality

E[min_x C(b,x)] = min_x C(E[b],x)

if and only if min_x C(b,x) is a linear function of b.

Page 24:

Conditions for Equality

A simple condition for equality is that C(b,x) be a linear function of b. To see why, note that in this case:

min_x E[C(b,x)] = min_x C(E[b],x)

(compare with Charnes, Cooper and Thompson's result)

Now by our earlier derived inequalities, we must have:

min_x E[C(b,x)] = E[min_x C(b,x)] = min_x C(E[b],x)
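The two equality conditions are distinct, and the illustrative 1-D instance separates them: min_x C(b,x) = c*b there is linear in b, so the if-and-only-if condition above holds and E[min_x C(b,x)] = min_x C(E[b],x); but C(b,x) itself is not linear in b, and min_x E[C(b,x)] stays strictly larger:

```python
# Illustrative 1-D instance: min_x C(b,x) is linear in b, C(b,x) is not.
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

dist = [(2.0, 0.5), (6.0, 0.5)]
xs = [i / 100.0 for i in range(1001)]
mean_b = sum(p * b for b, p in dist)

# min_x C(b,x) = c*b (attained at x = b), linear in b, so equality holds:
wait_and_see = sum(p * min(C(b, x) for x in xs) for b, p in dist)
lower = min(C(mean_b, x) for x in xs)
assert abs(wait_and_see - lower) < 1e-9  # both equal 4.0

# But C(b,x) is piecewise linear (not linear) in b, so the stronger
# three-way equality fails: min_x E[C(b,x)] is strictly larger.
here_and_now = min(sum(p * C(b, x) for b, p in dist) for x in xs)
assert here_and_now > wait_and_see       # 7.0 > 4.0
```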

Page 25:

An Application

Now let us consider an application of the results we've proven thus far. In recent assignments, we've been asked to minimize E[C(b,x)] over x, where b has some known distribution. In particular, consider the case where b is an environmental quality level.

Page 26:

An Application

Let b1 be a random vector with a given distribution, and let b2 be another, independent random vector with a given distribution. Let the random vector b be defined as:

b = t b1 + (1-t) b2, for a fixed value of t in [0,1].

Page 27:

An Application

By convexity, we know that for known realizations of b1 and b2, we have that:

min_x C(b,x) <= t min_x C(b1,x) + (1-t) min_x C(b2,x)

Taking the expectation over b1, b2 of both sides and using independence and linearity, we get:

E[min_x C(b,x)] <= t E[min_x C(b1,x)] + (1-t) E[min_x C(b2,x)]

Page 28:

An Application

Now, let b' be a random vector with the same distribution as b (but b' is not necessarily equal to b). Does this convexity of expectation also hold for b'?

Under the assumption that (i) is feasible and bounded for any realization of b or b', we see that the optimal values of these problems, min_x C(b,x) and min_x C(b',x), must be equal to those of their duals by Strong Duality.

Page 29:

An Application

In particular, we have that:

D(i) = max b^T z
Subject to: A^T z <= c
B^T z <= f

D(i') = max b'^T z
Subject to: A^T z <= c
B^T z <= f

Page 30:

An Application

Notice that the feasible region for both of these problems is the same; let us call it F. Then max_z b^T z subject to z in F is the random variable g(b), and max_z b'^T z subject to z in F is the random variable g(b').

Clearly g(b) and g(b') have the same distribution, and thus the same mean.

Page 31:

An Application

Thus:

E[min_x C(b',x)] = E[g(b')] = E[g(b)] = E[min_x C(b,x)]

And so the inequality holds for any b' with the same distribution as b. In particular, if the b_i were normal, we could have generated a random vector with the (easily computable) distribution of their convex combination, to have this hold.

Page 32:

An Application

Now, under Madansky's equality criterion, that C(b,x) be a linear function of b (which it obviously is, via the dual formulation's objective b^T z), we have that E[min_x C(b,x)] = min_x E[C(b,x)], and so the optimal objective function value that we observed should have been 'convex in distribution'.

Page 33:

Improving Bounds

From the string of inequalities we previously determined, we have found upper and lower bounds on min_x E[C(b,x)], the quantity that we wish to minimize for (i). These bounds are:

L = min_x C(E[b],x) and U = E[C(b, x(E[b]))]

Page 34:

Improving Bounds

L can be found by simply solving the deterministic approximate LP. U can be found by simply taking an expectation, since x(E[b]) is a fixed quantity. A natural question to ask is: if we add more structure to the problem, do we get tighter, easily computable bounds?

Page 35:

Improving Bounds

Perhaps not, but as was seen in 'An Application', E[min_x C(b,x)] is also a naturally arising quantity, which shares the same bounds.

Claim: If b is defined on a bounded m-dimensional rectangle I^m, and the b_j are independent, then:

Page 36:

Improving Bounds

E[min_x C(b,x)] <= Sum over the 2^m vertices b_v of I^m of [ Prod_{j=1}^{m} p_j(v) ] min_x C(b_v, x) = H*(E[b])

where, writing [a_j, c_j] for the range of coordinate j, the weight p_j(v) equals (c_j - E[b_j])/(c_j - a_j) when the j-th coordinate of b_v is a_j, and (E[b_j] - a_j)/(c_j - a_j) when it is c_j (the Edmundson-Madansky bound).
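A one-dimensional (m = 1) sketch of this bound, on a hypothetical capacitated variant of the illustrative instance (restricting x <= 4 makes g(b) = min_x C(b,x) convex but no longer linear in b, so the Jensen lower bound and the vertex upper bound actually bracket a strictly intermediate value):

```python
# Hypothetical capacitated instance: x limited to [0, 4].
c, f1, f2 = 1.0, 3.0, 0.5

def C(b, x):
    return c * x + f1 * max(b - x, 0.0) + f2 * max(x - b, 0.0)

def g(b):  # min_x C(b, x) over the capped grid
    return min(C(b, x) for x in [i / 100.0 for i in range(401)])

# b supported on the interval [a, d] = [2, 6]; three-point
# distribution with E[b] = 4.
a, d = 2.0, 6.0
dist = [(2.0, 0.25), (4.0, 0.5), (6.0, 0.25)]
mean_b = sum(p * b for b, p in dist)

jensen_lower = g(mean_b)                # min_x C(E[b], x)
exact = sum(p * g(b) for b, p in dist)  # E[min_x C(b, x)]
# m = 1 case of H*(E[b]): weights (d - E[b])/(d - a) and
# (E[b] - a)/(d - a) on the interval's two endpoints.
upper = (d - mean_b) / (d - a) * g(a) + (mean_b - a) / (d - a) * g(d)

assert jensen_lower <= exact <= upper
print(jensen_lower, exact, upper)  # -> 4.0 5.0 6.0
```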

Page 37:

Improving Bounds

What can we do if b is a vector distributed normally? If the elements of b, the b_i, are independent, then we may replace each of their distributions with an approximation.

In particular, we can construct smooth, compactly supported approximation functions that match the distribution of the b_i on all but a set of arbitrarily small measure.

Page 38:

Improving Bounds

These approximations are not necessarily bad things, because they may, in some sense, be more accurate functions than our normal distributions (especially when a random vector is not permitted to assume negative values).

For details, consult the proof of Urysohn's Lemma.

Page 39:

Critique

From Madansky's paper, we have:

1) Characterized the function min_x C(b,x)
2) Derived easily computable bounds for (i) and (ii)
3) Used these bounds to determine when equality holds between E[min_x C(b,x)] and min_x E[C(b,x)]
4) Determined tighter bounds on E[min_x C(b,x)] under assumptions on the distribution of the b_i.

Page 40:

Critique

We did not:

1) Further develop any of our intermediate results.
2) Develop applications for using these equality conditions.