
REPORT DOCUMENTATION PAGE (DD Form 1473)

Title (and Subtitle): An Algorithm for Linearly Constrained Nonlinear Programming Problems

Author(s): Mokhtar S. Bazaraa and Jamie J. Goode

Contract or Grant Number: F49620-79-C-0120

Performing Organization Name and Address: Georgia Institute of Technology, School of Industrial and Systems Engineering, Atlanta, GA 30332

Controlling Office Name and Address: Air Force Office of Scientific Research/NM, Bolling AFB, Washington, DC 20332

Security Classification (of this report): UNCLASSIFIED

Distribution Statement (of this Report): Approved for public release; distribution unlimited.

Abstract: In this paper an algorithm for solving a linearly constrained nonlinear programming problem is developed. Given a feasible point, a correction vector is computed by solving a least distance programming problem over a polyhedral cone defined in terms of the gradients of the "almost" binding constraints. Mukai's approximate scheme for computing step sizes is generalized to handle the constraints. This scheme provides an estimate for the step size based on a quadratic approximation of the function. This estimate is used in conjunction with an Armijo line search to calculate a new point. It is shown that each accumulation point is a Kuhn-Tucker point to a slight perturbation of the original problem. Furthermore, under suitable second order optimality conditions, it is shown that eventually only one trial is needed to compute the step size.


AN ALGORITHM FOR LINEARLY CONSTRAINED
NONLINEAR PROGRAMMING PROBLEMS

by

Mokhtar S. Bazaraa

and

Jamie J. Goode

AIR FORCE OFFICE OF SCIENTIFIC RESEARCH (AFSC)
NOTICE OF TRANSMITTAL:
This technical report has been reviewed and is approved for public release IAW AFR 190-12 (7b). Distribution is unlimited.
Technical Information Officer

This document has been approved for public release and sale; its distribution is unlimited.


AN ALGORITHM FOR LINEARLY CONSTRAINED NONLINEAR PROGRAMMING PROBLEMS

Mokhtar S. Bazaraa and Jamie J. Goode

In this paper an algorithm for solving a linearly constrained nonlinear programming problem is developed. Given a feasible point, a correction vector is computed by solving a least distance programming problem over a polyhedral cone defined in terms of the gradients of the "almost" binding constraints. Mukai's approximate scheme for computing step sizes is generalized to handle the constraints. This scheme provides an estimate for the step size based on a quadratic approximation of the function. This estimate is used in conjunction with an Armijo line search to calculate a new point. It is shown that each accumulation point is a Kuhn-Tucker point to a slight perturbation of the original problem. Furthermore, under suitable second order optimality conditions, it is shown that eventually only one trial is needed to compute the step size.


+School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia. This author's work is supported under USAFOSR contract number F49620-79-C-0120.

School of Mathematics, Georgia Institute of Technology, Atlanta, Georgia.



1. Introduction

This paper addresses the following linearly constrained nonlinear pro-

gramming problem:

P: minimize f(x)

subject to Ax ≤ b

where f is a twice continuously differentiable function on R^n, and A is an m×n matrix whose jth row is denoted by a_j^t, and where a superscript t denotes

the transpose operation.

There are several approaches for solving this problem. The first one

relies on partitioning the variables into basic, nonbasic, and superbasic

variables. The values of the superbasic and basic variables are modified

while the nonbasic variables are fixed at their current values. Examples of

methods in this class are the convex simplex method of Zangwill [18], the reduced gradient method of Wolfe [17], the method of Murtagh and Saunders [12], and the variable reduction method of McCormick [8].

Another class of methods is the extension of quasi-Newton algorithms from

unconstrained to constrained optimization. Here, at any iteration, a set of

active restrictions is identified, and then a modified Newton procedure is

used to minimize the objective function on the manifold defined by these active

constraints. See for example Goldfarb [6], and Gill and Murray [5].

Other approaches for solving problems with linear constraints are the

gradient projection method and the method of feasible directions. The former

computes a direction by projecting the negative gradient on the space ortho-

gonal to the gradients of a subset of the binding constraints while the latter

method determines a search direction by solving a linear programming problem.


For a review of these methods the reader may refer to Rosen [14], Zoutendijk

[19], Frank and Wolfe [4], and Topkis and Veinott [15].

In this paper, an algorithm for solving problem P is proposed. At each

iteration a correction vector is computed by finding the minimum distance

from a given point to a polyhedral cone defined in terms of the gradients

of the "almost" binding constraints. An approximate line search procedure

which extends those of Armijo [1] and Mukai [10, 11] for unconstrained optimization is developed for determining the step size. First, an estimate of

the step size based on a quadratic approximation to the objective function is

computed, and then adjusted if necessary.

In Section 2, we outline the algorithm. In Section 3, we show that

accumulation points of the algorithm are Kuhn-Tucker points to a slight per-

turbation of the original problem. Finally, in Section 4, assuming that the

algorithm converges, and under suitable second order sufficiency optimality

conditions, we show that the step size estimates which are based on the quad-

ratic approximation are acceptable, so that only one functional evaluation is

eventually needed for performing the line search.

2. Statement of the Algorithm

Consider the following algorithm for solving Problem P.

Step 0

Choose values for the parameters ε, z, δ_0, and c. Select a point x_0 such that Ax_0 ≤ b. Let i = 0 and go to Step 1.

Step 1

Let w_i be the optimal solution to Problem D(x_i) given below:


D(x_i):  minimize   ∇f(x_i)^t w + (1/2) z w^t w

         subject to  a_j^t w ≤ 0  for j ∈ I(x_i)

where

I(x_i) = {j: a_j^t x_i > b_j - ε}     (2.1)

If w_i = 0, stop. Else, go to Step 2.

Step 2

Let

I+(w_i) = {j: a_j^t w_i > 0}     (2.2)

and let

β_i = min { 1, min { (b_j - a_j^t x_i) / (a_j^t w_i) : j ∈ I+(w_i) } }     (2.3)

Let

d_i = β_i w_i     (2.4)

and go to Step 3.

Step 3

If

f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) > ε² δ_i ||d_i||²     (2.5)

let

λ_i = - ε² ∇f(x_i)^t d_i / [ f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) ]     (2.6)


and let δ_{i+1} = δ_i, and go to Step 4. Otherwise, let λ_i = 1, δ_{i+1} = δ_i / 2, and go to Step 4.

Step 4

Let

α_i = min {1, λ_i}     (2.7)

and compute the smallest nonnegative integer k satisfying

f(x_i + (1/2)^k α_i d_i) - f(x_i) ≤ (1/2)^k α_i c ∇f(x_i)^t d_i     (2.8)

Let k_i = k, x_{i+1} = x_i + (1/2)^{k_i} α_i d_i, i = i + 1, and go to Step 1.
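The following minimal sketch (not the authors' code) illustrates how Steps 0 through 4 fit together. It assumes a general-purpose solver, scipy.optimize.minimize, for the direction-finding subproblem D(x_i) in place of a specialized least distance routine, and the names lcnlp, solve_D, eps, z, delta0, and c are illustrative stand-ins for the quantities ε, z, δ_0, and c chosen in Step 0.

    import numpy as np
    from scipy.optimize import minimize

    def solve_D(grad, A_I, z):
        # Step 1 subproblem D(x_i): minimize grad'w + (z/2) w'w  subject to  A_I w <= 0.
        n = grad.size
        obj = lambda w: grad @ w + 0.5 * z * (w @ w)
        jac = lambda w: grad + z * w
        cons = []
        if A_I.shape[0] > 0:
            cons = [{'type': 'ineq', 'fun': lambda w: -(A_I @ w), 'jac': lambda w: -A_I}]
        return minimize(obj, np.zeros(n), jac=jac, constraints=cons).x

    def lcnlp(f, grad_f, A, b, x0, eps=1e-2, z=1.0, delta0=1.0, c=0.25,
              max_iter=200, tol=1e-8):
        x, delta = np.asarray(x0, dtype=float), delta0          # Step 0
        for _ in range(max_iter):
            binding = A @ x > b - eps                           # eps-binding set (2.1)
            w = solve_D(grad_f(x), A[binding], z)               # Step 1
            if np.linalg.norm(w) <= tol:                        # eps-KT point: stop
                return x
            Aw, gap = A @ w, b - A @ x
            pos = Aw > 0                                        # I+(w_i), (2.2)
            beta = min(1.0, np.min(gap[pos] / Aw[pos])) if pos.any() else 1.0   # (2.3)
            d = beta * w                                        # (2.4)
            gd = grad_f(x) @ d
            curv = f(x + eps * d) + f(x - eps * d) - 2.0 * f(x) # Step 3: test (2.5)
            if curv > eps**2 * delta * (d @ d):
                lam = -eps**2 * gd / curv                       # quadratic estimate (2.6)
            else:
                lam, delta = 1.0, delta / 2.0
            alpha = min(1.0, lam)                               # (2.7)
            k = 0                                               # Step 4: Armijo test (2.8)
            while k < 50 and f(x + 0.5**k * alpha * d) - f(x) > 0.5**k * alpha * c * gd:
                k += 1
            x = x + 0.5**k * alpha * d
        return x

The sketch is only meant to make the flow of the steps concrete; the analysis in Sections 3 and 4 does not depend on how D(x_i) is solved or on these particular parameter values.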

The following remarks are helpful in interpreting the above algorithm.

1. A direction w_i is determined by solving Problem D(x_i). This problem finds the point in the convex polyhedral cone {w: a_j^t w ≤ 0 for j ∈ I(x_i)} which is closest to the vector -(1/z)∇f(x_i). Methods of least distance programming, as in the works of Bazaraa and Goode [2] and Wolfe [16], can be

used for solving this problem. Special methods that take advantage of the

structure of the cone constraints may prove quite useful in this regard.
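The least distance interpretation in the remark above can be seen by completing the square in the objective of D(x_i):

\[
\nabla f(x_i)^t w + \tfrac{1}{2} z\, w^t w
  \;=\; \frac{z}{2}\,\Bigl\| w + \frac{1}{z}\,\nabla f(x_i) \Bigr\|^2 \;-\; \frac{1}{2z}\,\|\nabla f(x_i)\|^2 ,
\]

so that minimizing the objective of D(x_i) over the cone \(\{w : a_j^t w \le 0,\ j \in I(x_i)\}\) amounts to finding the point of that cone nearest to \(-\tfrac{1}{z}\nabla f(x_i)\).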

2. The restrictions enforced in Problem D(x_i) are the ε-binding constraints at x_i, that is, those satisfying b_j - ε < a_j^t x_i ≤ b_j. If w_i = 0, then the algorithm is terminated with x_i. In this case, from the Kuhn-Tucker conditions for Problem D(x_i), there exist u_j for j ∈ I(x_i) such that:


∇f(x_i) + Σ_{j∈I(x_i)} u_j a_j = 0

u_j ≥ 0  for j ∈ I(x_i)

These conditions imply that xi is a Kuhn-Tucker point for the following

problem:

minimize  f(x)

subject to  a_j^t x ≤ a_j^t x_i  for j ∈ I(x_i)

            a_j^t x ≤ b_j  for j ∉ I(x_i)

Noting that b_j - ε < a_j^t x_i ≤ b_j for j ∈ I(x_i), if ε is sufficiently small, it is clear that when the algorithm terminates at x_i, then x_i is a Kuhn-Tucker solution to a slightly perturbed version of Problem P. The following definition will thus be useful.

Definition 2.1

Let x* be a feasible point to Problem P. If the optimal solution to Problem D(x*) is equal to zero, then x* is called an ε-KT solution to Problem P.

3. If x_i + w_i is feasible to Problem P, then the search vector d_i is taken as w_i. Otherwise, d_i is taken to be the vector of maximum length along w_i which maintains feasibility of x_i + d_i.


4. Steps 3 and 4 of the algorithm compute the step size taken along the vector d_i in order to form x_{i+1}. As proposed by Mukai [10, 11], first an estimate λ_i of the step size is calculated. When appropriate, λ_i is computed by utilizing a quadratic approximation of the function f at x_i; otherwise λ_i is taken equal to 1. In order to ensure feasibility to Problem P, the first trial step size α_i, used in conjunction with the Armijo line search [1], is the minimum of λ_i and 1. As will be shown in Section 4, under suitable assumptions, for large i, test (2.5) passes, k_i = 0, and α_i = λ_i < 1. This confirms the efficiency of the line search scheme, where eventually only one trial is needed to compute the step size.
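A short way to see where the estimate (2.6) comes from: along the ray x_i + λd_i, Step 3 implicitly fits the one-dimensional quadratic model below, in which the symbol κ_i is introduced only for this illustration as the central difference estimate of the curvature of f along d_i obtained from the two extra function evaluations:

\[
f(x_i + \lambda d_i) \;\approx\; q(\lambda) \;=\; f(x_i) + \lambda\,\nabla f(x_i)^t d_i + \tfrac{1}{2}\,\lambda^2 \kappa_i ,
\qquad
\kappa_i \;=\; \frac{f(x_i + \varepsilon d_i) + f(x_i - \varepsilon d_i) - 2 f(x_i)}{\varepsilon^2}.
\]

When test (2.5) holds, \(\kappa_i > 0\) and \(q\) is minimized at

\[
\lambda_i \;=\; -\,\frac{\nabla f(x_i)^t d_i}{\kappa_i}
        \;=\; \frac{-\,\varepsilon^2\,\nabla f(x_i)^t d_i}{f(x_i + \varepsilon d_i) + f(x_i - \varepsilon d_i) - 2 f(x_i)} ,
\]

which is precisely (2.6).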

3. Accumulation Points of the Algorithm

Theorem 3.1 shows that each accumulation point of the proposed algorithm is an ε-KT point. In order to prove this theorem, Lemmas 3.1 and 3.2 are needed. These two lemmas extend similar results of Mukai [10] for unconstrained problems.

In order to facilitate the development in this section, the following notation is used. Let w(x) be the optimal solution to Problem D(x), and let β(x) be as given in (2.3) with x_i replaced by x. Finally, let d(x) = β(x) w(x), and let λ(x) and α(x) = min{1, λ(x)} denote the step size quantities of Steps 3 and 4 computed at x.

Lemma 3.1

Suppose that x* is not an ε-KT point for Problem P. Then, there exist scalars ρ > 0 and s > 0 so that ρ ≤ α(x) ≤ 1 for each x with ||x - x*|| ≤ s.

Proof

There exists s_1 > 0 so that I(x) = I(x*) for all ||x - x*|| ≤ s_1. Thus, the feasible region for Problem D(x) is equal to that of Problem D(x*) for all x satisfying ||x - x*|| ≤ s_1. By continuous differentiability of f, it then follows that w(·) is continuous in x at x*; see for example Daniel [3]. Particularly,


there exists a number s_2 > 0 such that I+(w(x)) ⊆ I+(w(x*)) if ||x - x*|| ≤ s_2. This, together with the continuity of w(·) and the formula for computing β(·), implies that β(·) is continuous in x at x*. Hence, d(·) is also continuous. Since x* is not an ε-KT point, then w(x*) ≠ 0. Furthermore, b_j - a_j^t x* ≥ ε if a_j^t w(x*) > 0, which implies that β(x*) > 0. Therefore d(x*) ≠ 0. By continuity of β(·) and d(·) at x*, there exist scalars q > 0 and s > 0 so that:

||d(x)||² ≥ (1/2) ||d(x*)||²   if ||x - x*|| ≤ s     (3.1)

f(x + ε d(x)) + f(x - ε d(x)) - 2 f(x) ≤ q   if ||x - x*|| ≤ s     (3.2)

Now, let x be such that ||x - x*|| ≤ s. Since w(x) solves Problem D(x), then ∇f(x)^t w(x) ≤ -(1/2) z w(x)^t w(x). This, in turn, implies that -∇f(x)^t d(x) ≥ (1/2) z ||d(x)||², and from (3.1) we get:

-∇f(x)^t d(x) ≥ (z/4) ||d(x*)||²     (3.3)

If test (2.5) passes, then from (3.2) and (3.3) the following lower bound on λ(x) is at hand:

λ(x) = -ε² ∇f(x)^t d(x) / [ f(x + ε d(x)) + f(x - ε d(x)) - 2 f(x) ] ≥ ε² z ||d(x*)||² / (4q)

If test (2.5) fails, then λ(x) = 1, and hence in either case λ(x) ≥ min {1, ε² z ||d(x*)||² / (4q)}. Since α(x) = min {1, λ(x)}, the desired result follows.

Lemma 3.2

If x* is not an ε-KT point for Problem P, then there exist a number s > 0 and an integer m so that k(x) ≤ m if ||x - x*|| ≤ s, where k(x) is the Armijo integer


given by (2.8) with x_i and α_i replaced by x and α(x), respectively.

Proof

As in the proof of Lemma 3.1 and by continuous differentiability of f, there exist scalars s, h, and γ > 0 so that for ||x - x*|| ≤ s the following hold:

∇f(x)^t d(x) ≤ -γ     (3.4)

|∇f(x + g d(x))^t d(x) - ∇f(x)^t d(x)| ≤ (1 - c) γ  for each g ∈ [0, h]     (3.5)

Now let m be the smallest nonnegative integer so that (1/2)^m ≤ h, and let x be such that ||x - x*|| ≤ s. Then there exists θ ∈ [0, 1] such that:

f(x + (1/2)^m α(x) d(x)) - f(x) - (1/2)^m α(x) c ∇f(x)^t d(x)
  = (1/2)^m α(x) ∇f(x + θ (1/2)^m α(x) d(x))^t d(x) - (1/2)^m α(x) c ∇f(x)^t d(x)
  = (1/2)^m α(x) [ ∇f(x + θ (1/2)^m α(x) d(x))^t d(x) - ∇f(x)^t d(x) ] + (1/2)^m α(x) (1 - c) ∇f(x)^t d(x)     (3.6)

Since θ (1/2)^m α(x) ≤ h, (3.4) and (3.5) imply that the right hand side of (3.6) is ≤ 0, which in turn shows that k(x) ≤ m, and the proof is complete.

Theorem 3.1

Either the algorithm terminates with an ε-KT point for Problem P, or else it generates an infinite sequence {x_i} of which any accumulation point is an ε-KT point for Problem P.


Proof

Clearly the algorithm stops at x_i only if x_i is an ε-KT point. Now, suppose that the algorithm generates the infinite sequence {x_i}. Suppose that x* is an accumulation point, so that x_i → x* for i in some infinite set K of positive integers. Since f(x_i) is decreasing monotonically and since f(x_i) → f(x*) for i ∈ K, then f(x_i) → f(x*). Suppose, by contradiction to the desired conclusion, that x* is not an ε-KT point. From Lemmas 3.1 and 3.2, there exist positive numbers ρ and γ and an integer m so that α_i ≥ ρ, ∇f(x_i)^t d_i ≤ -γ, and k_i ≤ m for large i in K. Therefore,

f(x_{i+1}) - f(x_i) ≤ (1/2)^{k_i} α_i c ∇f(x_i)^t d_i ≤ -(1/2)^m ρ c γ

for large i in K. This implies that f(x_i) → -∞, contradicting the fact that f(x_i) → f(x*). This completes the proof.

4. Eventual Acceptance of the Step Size Estimate

In the previous section, we showed that an accumulation point x* of the sequence {x_i} generated by the algorithm is a KT point to the perturbed problem P' given below:

P':  minimize  f(x)

     subject to  a_j^t x ≤ a_j^t x*  for j ∈ I(x*)

                 a_j^t x ≤ b_j  for j ∉ I(x*)

Here, we assume that the whole sequence {x_i} converges to a point x* which satisfies suitable second order sufficiency conditions. Under this assumption, we show that test (2.5) is eventually passed. Furthermore, we show


that λ_i < 1 and that k_i = 0 for i large enough.

The second order condition is given in Definition 4.1. It is well-known that an x* satisfying this condition is a strong local minimum for problem P'. That is, there exists a number γ > 0 so that f(x*) < f(x) if x ≠ x* is feasible to problem P' and ||x - x*|| ≤ γ; see for example McCormick [9] and Han and Mangasarian [7].

Definition 4.1

Let x* be such that Ax* ≤ b and let I(x*) = {j: a_j^t x* > b_j - ε}. x* is said to satisfy the second order sufficiency optimality conditions for problem P' if there exist scalars u_j ≥ 0 for j ∈ I(x*) and γ > 0 so that:

∇f(x*) + Σ_{j∈I(x*)} u_j a_j = 0

∇f(x*)^t d ≤ 0, a_j^t d ≤ 0 for j ∈ I(x*), and ||d|| = 1  imply  d^t H(x*) d ≥ γ     (4.1)

where H(·) denotes the Hessian matrix of f.

Theorem 4.1 shows that test (2.5) will eventually be passed so that

λ_i is given by (2.6). The following two intermediate results are needed to

prove this theorem.

Lemma 4.1

If Cd ≤ 0 and ||d|| = 1 imply that d^t H d ≥ γ > 0, then there is a number θ > 0 so that Cd ≤ θ1 (componentwise, where 1 is the vector of ones) and ||d|| = 1 imply that d^t H d ≥ γ/2.

Proof

Suppose by contradiction that for each integer k there is a vector d_k such

that


||d_k|| = 1,  C d_k ≤ (1/k) 1,  and  d_k^t H d_k < γ/2     (4.2)

Since the sequence {d_k} is bounded, it has an accumulation point d. From (4.2), ||d|| = 1, Cd ≤ 0, and d^t H d ≤ γ/2, which contradicts the assumption of the lemma.

Lemma 4.2

If either {x_i} converges or the set {x: Ax ≤ b, f(x) ≤ f(x_0)} is bounded, then d_i → 0.

Proof

Since 0 < β_i ≤ 1 and d_i = β_i w_i, it suffices to prove that w_i → 0. Suppose there exist an infinite set of positive integers K and a number e > 0 so that

||w_i|| ≥ e  for i ∈ K

Clearly, under either of the assumptions of the lemma, there exist an infinite set K' ⊆ K and a point x* so that x_i → x* for i ∈ K'. By Theorem 3.1, x* is an ε-KT point for Problem P. Thus, w* = 0 is the unique optimal solution to Problem D(x*). But for large i ∈ K', I(x*) ⊆ I(x_i), and by continuity of the solutions to D(·) we must have ||w_i|| ≤ e/2 for large i in K'. This contradicts ||w_i|| ≥ e for i ∈ K, and the proof is complete.

Throughout the remainder of this section, the following notation will

be used for any scalar γ:


H_i^γ = 2 ∫_0^1 (1 - y) H(x_i + y γ d_i) dy     (4.3)

We can integrate by parts to obtain

f(x_i + γ d_i) = f(x_i) + γ ∇f(x_i)^t d_i + (1/2) γ² d_i^t H_i^γ d_i     (4.4)

For further details, the reader may refer to Polak [13, p. 293].
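Spelled out, with H(·) the Hessian of f, the integration by parts behind (4.4) reads:

\[
\begin{aligned}
f(x_i + \gamma d_i) - f(x_i)
 &= \int_0^1 \gamma\,\nabla f(x_i + y\gamma d_i)^t d_i \; dy \\
 &= \Bigl[\, -(1-y)\,\gamma\,\nabla f(x_i + y\gamma d_i)^t d_i \,\Bigr]_{y=0}^{y=1}
    \;+\; \int_0^1 (1-y)\,\gamma^2\, d_i^t H(x_i + y\gamma d_i)\, d_i \; dy \\
 &= \gamma\,\nabla f(x_i)^t d_i \;+\; \tfrac{1}{2}\,\gamma^2\, d_i^t H_i^{\gamma} d_i ,
\end{aligned}
\]

where the second line integrates by parts with \(v = -(1-y)\) and the last line uses the definition (4.3) of \(H_i^{\gamma}\).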

Theorem 4.1

Let {x_i} be a sequence generated by the algorithm. Suppose that x_i → x* and x* satisfies the second order optimality conditions for problem P'. Then there exists an integer m so that test (2.5) passes for all i ≥ m.

Proof

From (4.3) and (4.4) we get:

f(x_i + ε d_i) = f(x_i) + ε ∇f(x_i)^t d_i + (1/2) ε² d_i^t H_i^ε d_i

f(x_i - ε d_i) = f(x_i) - ε ∇f(x_i)^t d_i + (1/2) ε² d_i^t H_i^{-ε} d_i

Adding, we obtain:

f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) = (1/2) ε² d_i^t (H_i^ε + H_i^{-ε}) d_i     (4.5)

Now for j ∈ I(x*), a_j^t x* > b_j - ε. Since x_i → x*, then for i large enough, a_j^t x_i > b_j - ε, so that j ∈ I(x_i). By Step 1 of the algorithm, a_j^t w_i ≤ 0, and so a_j^t d_i ≤ 0 for i large enough and j ∈ I(x*). Likewise, from Step 1 of the


algorithm, ∇f(x_i)^t w_i ≤ 0 and hence ∇f(x_i)^t d_i / ||d_i|| ≤ 0. Since x_i → x*, then for any number e > 0, ∇f(x*)^t d_i / ||d_i|| ≤ e for i large enough. Thus, Lemma 4.1 and the second order conditions imply that

d_i^t H(x*) d_i ≥ (γ/2) ||d_i||²  for large i     (4.6)

Now note that

||H_i^ε - H(x*)|| = || 2 ∫_0^1 (1 - y) [H(x_i + y ε d_i) - H(x*)] dy || ≤ 2 ∫_0^1 (1 - y) ||H(x_i + y ε d_i) - H(x*)|| dy     (4.7)

Since x_i → x*, then by Lemma 4.2, d_i → 0. Particularly, for i large enough, ||H(x_i + y ε d_i) - H(x*)|| ≤ γ/4 for all y ∈ [0, 1]. From (4.7), ||H_i^ε - H(x*)|| ≤ γ/4. This together with (4.6) yields:

d_i^t H_i^ε d_i = d_i^t H(x*) d_i + d_i^t (H_i^ε - H(x*)) d_i ≥ (γ/4) ||d_i||²  for large i     (4.8)

Similarly,

d_i^t H_i^{-ε} d_i ≥ (γ/4) ||d_i||²  for large i     (4.9)

From (4.5), (4.8), and (4.9) it immediately follows that


f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) ≥ (ε² γ / 4) ||d_i||²  for large i     (4.10)

From (4.10), if test (2.5) fails for a large i, we must have:

ε² δ_i ||d_i||² ≥ f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) ≥ (ε² γ / 4) ||d_i||²,

that is, δ_i ≥ γ/4. If the conclusion of the theorem does not hold, then test (2.5) fails infinitely often and then δ_i → 0. This contradicts δ_i ≥ γ/4 for large i, and the proof is complete.

Theorem 4.2

Let {x_i} be a sequence generated by the algorithm. Suppose that x_i → x* and that x* satisfies the second order optimality conditions for Problem P'. Then there exists an integer m so that f(x_i + α_i d_i) - f(x_i) ≤ c α_i ∇f(x_i)^t d_i for all i ≥ m, that is, k_i = 0 for all i ≥ m.

Proof

By Theorem 4.1, test (2.5) passes for large i so that λ_i is given by

λ_i = -ε² ∇f(x_i)^t d_i / [ f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) ] = -∇f(x_i)^t d_i / [ (1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i ]     (4.11)

If λ_i ≤ 1, so that α_i = λ_i, then from (4.4) and (4.11) we get:

f(x_i + λ_i d_i) - f(x_i) - c λ_i ∇f(x_i)^t d_i = (1 - c) λ_i ∇f(x_i)^t d_i + (1/2) λ_i² d_i^t H_i^{λ_i} d_i
  = (1/2) λ_i² d_i^t [ H_i^{λ_i} - (1/2)(H_i^ε + H_i^{-ε}) ] d_i - (1/2 - c)(1/2) λ_i² d_i^t (H_i^ε + H_i^{-ε}) d_i     (4.12)


Since x_i → x*, then by Lemma 4.2, d_i → 0. Thus H_i^{λ_i}, H_i^ε, and H_i^{-ε} all converge to H(x*), and the first term in (4.12) will be less than (1/2 - c)(γ/8) λ_i² ||d_i||² for i large enough. As in the proof of Theorem 4.1, (1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i ≥ (γ/4) ||d_i||² for large i, so the second term is at most -(1/2 - c)(γ/4) λ_i² ||d_i||². Substituting in (4.12), the desired result holds.

Now suppose that λ_i > 1, so that α_i = 1. Then

f(x_i + α_i d_i) - f(x_i) - c α_i ∇f(x_i)^t d_i = (1 - c) ∇f(x_i)^t d_i + (1/2) d_i^t H_i^1 d_i     (4.13)

Since λ_i > 1, then from (4.11) we must have

∇f(x_i)^t d_i < -(1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i

Substituting in (4.13) we get:

f(x_i + α_i d_i) - f(x_i) - c α_i ∇f(x_i)^t d_i < (1/2) d_i^t [ H_i^1 - (1/2)(H_i^ε + H_i^{-ε}) ] d_i - (1/2 - c)(1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i     (4.14)

That the right hand side of (4.14) is ≤ 0 for large i follows in exactly the same manner in which we proved that (4.12) is ≤ 0. This completes the proof.

Finally, we state certain conditions in Theorem 4.3 below which guarantee

that λ_i < 1, so that α_i = λ_i, for i large enough.

Theorem 4.3

Let {x_i} be a sequence generated by the algorithm. Suppose that x_i → x* and that x* satisfies the second order optimality conditions for Problem P'. If z < γ/4, then there is an integer m so that λ_i < 1 for all i > m, that is, α_i = λ_i for all i > m.


Proof

By Theorem 4.1 there is an integer m so that for i > m we have:

λ_i = -ε² ∇f(x_i)^t d_i / [ f(x_i + ε d_i) + f(x_i - ε d_i) - 2 f(x_i) ] = -∇f(x_i)^t d_i / [ (1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i ]     (4.15)

As in the proof of Theorem 4.1,

(1/2) d_i^t (H_i^ε + H_i^{-ε}) d_i ≥ (γ/4) ||d_i||²  for i large enough     (4.16)

Since w_i solves Problem D(x_i), there exist scalars u_j ≥ 0 for j ∈ I(x_i) such that

∇f(x_i) + z w_i + Σ_{j∈I(x_i)} u_j a_j = 0     (4.17)

u_j a_j^t w_i = 0  for j ∈ I(x_i)     (4.18)

From (4.17) and (4.18) it follows that ∇f(x_i)^t w_i = -z ||w_i||². But by Theorem 3.1, x* is an ε-KT point and hence the optimal solution w* to Problem D(x*) is w* = 0. Since x_i → x*, by continuity of the optimal solution to Problem D(·), and since b_j - a_j^t x_i ≥ ε for each j ∈ I+(w_i), it follows from (2.3) that β_i = 1 for large i. Thus d_i = w_i, so that

∇f(x_i)^t d_i = -z ||d_i||²  for large i     (4.19)

Substituting (4.19) and (4.16) in (4.15), it is clear that λ_i < 1 for i large enough, and the proof is complete.


REFERENCES

1. L. Armijo, "Minimization of Functions Having Continuous Partial Derivatives," Pacific Journal of Mathematics, Volume 16, pp. 1-3, 1966.

2. M. S. Bazaraa and J. J. Goode, "An Algorithm for Finding the Shortest Element of a Polyhedral Set with Application to Lagrangian Duality," Journal of Mathematical Analysis and Applications, Volume 65, pp. 278-288, 1978.

3. J. W. Daniel, "Stability of the Solution of Definite Quadratic Programs," Mathematical Programming, Volume 5, pp. 41-53, 1973.

4. M. Frank and P. Wolfe, "An Algorithm for Quadratic Programming," Naval Research Logistics Quarterly, Volume 3, pp. 95-110, 1956.

5. P. E. Gill and W. Murray, "Newton-type Methods for Unconstrained and Linearly Constrained Optimization," Mathematical Programming, Volume 7, pp. 311-350, 1974.

6. D. Goldfarb, "Extension of Davidon's Variable Metric Method to Maximization Under Linear Inequality and Equality Constraints," SIAM Journal on Applied Mathematics, Volume 17, pp. 739-764, 1969.

7. S. P. Han and O. L. Mangasarian, "Exact Penalty Functions in Nonlinear Programming," Mathematical Programming, Volume 17, pp. 251-269, 1979.

8. G. P. McCormick, "The Variable-Reduction Method for Nonlinear Programming," Management Science, Volume 17, pp. 146-160, 1970.

9. G. P. McCormick, "Second Order Conditions for Constrained Minima," SIAM Journal on Applied Mathematics, Volume 15, pp. 641-652, 1967.

10. H. Mukai, "Readily Implementable Conjugate Gradient Methods," Mathematical Programming, Volume 17, pp. 298-319, 1979.

11. H. Mukai, "A Scheme for Determining Step Sizes for Unconstrained Optimization Methods," IEEE Transactions on Automatic Control, Volume AC-23, pp. 987-995, 1978.

12. B. A. Murtagh and M. A. Saunders, "Large Scale Linearly Constrained Optimization," Mathematical Programming, Volume 14, pp. 41-72, 1978.

13. E. Polak, Computational Methods in Optimization: A Unified Approach, Academic Press, New York, 1971.

14. J. B. Rosen, "The Gradient Projection Method for Nonlinear Programming, Part I, Linear Constraints," SIAM Journal on Applied Mathematics, Volume 9, pp. 514-553, 1961.

15. D. M. Topkis and A. F. Veinott, "On the Convergence of Some Feasible Direction Algorithms for Nonlinear Programming," SIAM Journal on Control, Volume 5, pp. 263-279, 1967.


16. P. Wolfe, "Algorithm for a Least-Distance Programming Problem," Mathematical Programming Study 1, pp. 190-205, 1974.

17. P. Wolfe, "Methods of Nonlinear Programming," in Nonlinear Programming, edited by J. Abadie, North-Holland Publishing Company, Amsterdam, 1967.

18. W. I. Zangwill, "The Convex Simplex Method," Management Science, Volume 14, pp. 221-283, 1967.

19. G. Zoutendijk, Methods of Feasible Directions, Elsevier, Amsterdam, 1960.
