
Optimization in Engineering Design
Georgia Institute of Technology, Systems Realization Laboratory


Page 1: What you can do for one variable, you can do for many (in principle)

Page 2: Method of Steepest Descent

The method of steepest descent (also known as the gradient method) is the simplest example of a gradient-based method for minimizing a function of several variables.

Its core is the following recursion formula:

x_{k+1} = x_k ± α_k ∇F(x_k)   (- for minimization, + for maximization)

where
x_k, x_{k+1} = values of the variables at iterations k and k+1
F(x) = objective function to be minimized (or maximized)
∇F = gradient of the objective function, which defines the direction of travel
α_k = the size of the step in the direction of travel

Advantage: Simple

Disadvantage: Seldom converges reliably.


Remember: Direction d_k = S^(k) = -∇F(x^(k))

Refer to Section 3.5 for Algorithm and Stopping Criteria
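As a concrete illustration, here is a minimal sketch of the recursion above in Python, assuming a fixed step size α and a gradient-norm stopping test (the test function, step size, and tolerance are illustrative choices, not from the slides):

```python
import numpy as np

def steepest_descent(grad_F, x0, alpha=0.05, tol=1e-6, max_iter=1000):
    """Minimize F via the recursion x_{k+1} = x_k - alpha * grad_F(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_F(x)
        if np.linalg.norm(g) < tol:   # stopping criterion on the gradient norm
            break
        x = x - alpha * g             # step in the negative gradient direction
    return x

# Illustrative example: F(x) = x1^2 + 10*x2^2, so grad F = [2*x1, 20*x2]
x_min = steepest_descent(lambda x: np.array([2*x[0], 20*x[1]]), x0=[5.0, 1.0])
print(x_min)   # close to the minimizer [0, 0]
```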

Page 3: Newton's Method (multi-variable case)

How do we extend Newton's method to the multivariable case?

x_{k+1} = x_k - y'(x_k)/y''(x_k)   Is this correct? No. Why?

Start again with the Taylor expansion (the remainder is dropped; significance?):

y(x) ≈ y(x_k) + ∇y(x_k)^T (x - x_k) + ½ (x - x_k)^T H(x_k) (x - x_k)

Note that H is the Hessian, containing the second-order derivatives.

x_{k+1} = x_k - ∇y(x_k) H(x_k)   Is this correct? Not yet. Why?

Newton's method for finding an extreme point is

x_{k+1} = x_k - H^{-1}(x_k) ∇y(x_k)

Like the steepest descent method, Newton's method searches in the negative gradient direction (see Sec. 1.4). Don't confuse H^{-1} with α.
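A minimal sketch of the multivariable Newton recursion above, assuming the gradient and Hessian are available as functions (the example quadratic is illustrative); the Newton step is obtained by solving H(x_k) p = ∇y(x_k) rather than forming the inverse explicitly:

```python
import numpy as np

def newton_method(grad_y, hess_y, x0, tol=1e-8, max_iter=50):
    """Minimize y via the recursion x_{k+1} = x_k - H^{-1}(x_k) * grad_y(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_y(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess_y(x), g)   # Newton step: solve H p = grad instead of inverting H
        x = x - p
    return x

# Illustrative example: y(x) = x1^2 + 3*x2^2 + x1*x2 (quadratic, so one step suffices)
grad_y = lambda x: np.array([2*x[0] + x[1], x[0] + 6*x[1]])
hess_y = lambda x: np.array([[2.0, 1.0], [1.0, 6.0]])
print(newton_method(grad_y, hess_y, x0=[4.0, -2.0]))   # converges to [0, 0]
```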

Page 4: Properties of Newton's Method

Good properties (fast convergence) if started near the solution.

However, it needs modifications if started far away from the solution.

Also, the (inverse) Hessian is expensive to calculate.

To overcome this, several modifications are often made.

• One of them is to add a search parameter α in front of the Hessian (similar to steepest descent). This is often referred to as the modified Newton's method.

• Other modifications focus on enhancing the properties of the combination of first- and second-order gradient information.

• Quasi-Newton methods build up curvature information by observing the behavior of the objective function and its first-order gradient. This information is used to generate an approximation of the Hessian (a sketch of one such scheme follows below).
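As a sketch of the quasi-Newton idea, the following uses the BFGS update (one common choice; the slides do not name a specific scheme) to build an approximation of the inverse Hessian from observed changes in x and in the gradient. The backtracking line search and test function are illustrative assumptions:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-6, max_iter=100):
    """Quasi-Newton minimization: approximate the inverse Hessian from observed
    changes in x and in the gradient (BFGS update), instead of computing H."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    Hinv = np.eye(n)                       # initial inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -Hinv @ g                      # quasi-Newton search direction
        alpha = 1.0                        # crude backtracking (Armijo) line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        s, yk = x_new - x, g_new - g       # observed curvature information
        if s @ yk > 1e-12:                 # keep the update well defined
            rho = 1.0 / (s @ yk)
            I = np.eye(n)
            Hinv = (I - rho * np.outer(s, yk)) @ Hinv @ (I - rho * np.outer(yk, s)) \
                   + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Illustrative example
f = lambda x: (x[0] - 1)**2 + 10 * (x[1] + 2)**2
g = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] + 2)])
print(bfgs(f, g, x0=[0.0, 0.0]))   # approaches [1, -2]
```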

Page 5: Conjugate Directions Method

Conjugate direction methods can be regarded as somewhat in between steepest descent and Newton's method, having the positive features of both of them.

Motivation: Desire to accelerate slow convergence of steepest descent, but avoid expensive evaluation, storage, and inversion of Hessian.

Application: Conjugate direction methods are invariably developed and analyzed for the quadratic problem:

Minimize: y(x) = ½ x^T Q x - b^T x

Note: The condition for optimality is ∇y = Qx - b = 0, or Qx = b (a linear equation).

Note: The textbook uses "A" instead of "Q".
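A small numerical check of the optimality condition above (Q, b, and the starting data are illustrative):

```python
import numpy as np

# y(x) = 0.5 * x^T Q x - b^T x, with gradient  grad y = Q x - b
Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])      # any symmetric positive definite Q works
b = np.array([1.0, 2.0])

grad_y = lambda x: Q @ x - b

x_star = np.linalg.solve(Q, b)   # optimality condition: Q x = b
print(grad_y(x_star))            # ~ [0, 0]: the gradient vanishes at the solution
```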

Page 6: Basic Principle

Definition: Given a symmetric matrix Q, two vectors d_1 and d_2 are said to be Q-orthogonal, or Q-conjugate (with respect to Q), if d_1^T Q d_2 = 0.

Note that orthogonal vectors (d_1^T d_2 = 0) are a special case of conjugate vectors.

So, since the vectors d_i are independent, the solution to the n×n quadratic problem can be rewritten as

x* = α_0 d_0 + ... + α_{n-1} d_{n-1}

Multiplying by Q and taking the scalar product with d_i, you can express α_i in terms of d, Q, and either x* or b:

α_i = (d_i^T Q x*) / (d_i^T Q d_i) = (d_i^T b) / (d_i^T Q d_i)

Note that A is used instead of Q in your textbook.
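The following sketch illustrates the basic principle: it builds a set of Q-conjugate directions by Gram-Schmidt with the Q inner product and recovers x* from the expansion above. Q and b are illustrative, and this is not the conjugate gradient algorithm itself, which generates the directions as it goes along:

```python
import numpy as np

Q = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # illustrative symmetric positive definite Q
b = np.array([1.0, -2.0, 3.0])

# Build Q-conjugate directions by Gram-Schmidt using the Q inner product.
d = []
for e in np.eye(3):
    v = e.copy()
    for dj in d:
        v -= (dj @ Q @ e) / (dj @ Q @ dj) * dj   # remove the Q-components along earlier d_j
    d.append(v)

# Check pairwise Q-conjugacy: d_i^T Q d_j = 0 for i != j
print(all(abs(d[i] @ Q @ d[j]) < 1e-12 for i in range(3) for j in range(i)))

# Expand the solution: x* = sum_i alpha_i d_i with alpha_i = d_i^T b / (d_i^T Q d_i)
x_star = sum(((di @ b) / (di @ Q @ di)) * di for di in d)
print(np.allclose(Q @ x_star, b))   # True: x* solves Q x = b
```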

Page 7: Conjugate Gradient Method

The conjugate gradient method is the conjugate direction method that is obtained by selecting the successive direction vectors as a conjugate version of the successive gradients obtained as the method progresses.

You generate the conjugate directions as you go along.

d_k = -g_k + Σ_{i=0}^{k-1} β_i d_i,   or   d_{k+1} = -g_{k+1} + β_k d_k   (search direction at iteration k)

Three advantages:

1) Gradient is always nonzero and linearly independent of all previous direction vectors.

2) Simple formula to determine the new direction. Only slightly more complicated than steepest descent.

3) Process makes good progress because it is based on gradients.

Page 8: "Pure" Conjugate Gradient Method (Quadratic Case)

0 - Starting at any x_0, define d_0 = -g_0 = b - Q x_0, where g_k is the column vector of gradients of the objective function at the point x_k.

1 - Using d_k, calculate the new point x_{k+1} = x_k + α_k d_k, where

α_k = -(g_k^T d_k) / (d_k^T Q d_k)

2 - Calculate the new conjugate gradient direction d_{k+1} according to: d_{k+1} = -g_{k+1} + β_k d_k, where

β_k = (g_{k+1}^T Q d_k) / (d_k^T Q d_k)

This is slightly different from the formulation in your textbook.

Note that α_k is calculated analytically; no line search is needed in the quadratic case.
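A direct transcription of steps 0-2 into Python for the quadratic case (Q, b, and x_0 are illustrative):

```python
import numpy as np

def conjugate_gradient(Q, b, x0, tol=1e-10):
    """'Pure' conjugate gradient for minimizing 0.5*x^T Q x - b^T x (Q symmetric positive definite)."""
    x = np.asarray(x0, dtype=float)
    g = Q @ x - b                 # gradient at x_0
    d = -g                        # step 0: d_0 = -g_0 = b - Q x_0
    for _ in range(len(b)):       # at most n iterations in exact arithmetic
        if np.linalg.norm(g) < tol:
            break
        Qd = Q @ d
        alpha = -(g @ d) / (d @ Qd)      # step 1: exact step length, no line search
        x = x + alpha * d
        g_new = Q @ x - b
        beta = (g_new @ Qd) / (d @ Qd)   # step 2: conjugacy coefficient
        d = -g_new + beta * d
        g = g_new
    return x

Q = np.array([[4.0, 1.0], [1.0, 3.0]])   # illustrative data
b = np.array([1.0, 2.0])
print(conjugate_gradient(Q, b, x0=[2.0, 1.0]))   # agrees with np.linalg.solve(Q, b)
```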

Page 9: Non-Quadratic Conjugate Gradient Methods

• For non-quadratic cases, you have the problem that you do not know Q, and you would have to make an approximation.

• One approach is to substitute the Hessian H(x_k) for Q.
– The problem is that the Hessian has to be evaluated at each point.

• Other approaches avoid Q completely by using line searches.
– Examples: the Fletcher-Reeves and Polak-Ribière methods

• Differences between the methods:
– α_k is found through a line search
– different formulas for calculating β_k than in the "pure" conjugate gradient algorithm

Page 10: Polak-Ribière & Fletcher-Reeves Method for Minimizing f(x)

0 - Starting at any x_0, define d_0 = -g_0, where g is the column vector of gradients of the objective function at the point x.

1 - Using d_k, find the new point x_{k+1} = x_k + α_k d_k, where α_k is found using a line search that minimizes f(x_k + α_k d_k).

2 - Calculate the new conjugate gradient direction d_{k+1} according to: d_{k+1} = -g_{k+1} + β_k d_k,

where β_k can vary depending on which (update) formula you use.

Fletcher-Reeves:   β_k = (g_{k+1}^T g_{k+1}) / (g_k^T g_k)

Polak-Ribière:   β_k = ((g_{k+1} - g_k)^T g_{k+1}) / (g_k^T g_k)

Note: g_{k+1} is the gradient of the objective function at the point x_{k+1}.
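A sketch of the non-quadratic algorithm with both β_k update formulas; a simple backtracking line search stands in for the exact minimization of f(x_k + α_k d_k), and the test function and restart safeguard are illustrative assumptions:

```python
import numpy as np

def nonlinear_cg(f, grad, x0, beta_rule="FR", tol=1e-6, max_iter=500):
    """Non-quadratic conjugate gradient with a backtracking line search.
    beta_rule: 'FR' (Fletcher-Reeves) or 'PR' (Polak-Ribiere)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0   # backtracking stands in for minimizing f(x + alpha*d) exactly
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * d
        g_new = grad(x_new)
        if beta_rule == "FR":
            beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves
        else:
            beta = ((g_new - g) @ g_new) / (g @ g)    # Polak-Ribiere
        d = -g_new + beta * d
        if d @ g_new >= 0:        # safeguard: restart with steepest descent if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative test function (not from the slides)
f = lambda x: (x[0] - 3)**2 + (x[1] + 1)**4
grad = lambda x: np.array([2 * (x[0] - 3), 4 * (x[1] + 1)**3])
print(nonlinear_cg(f, grad, x0=[0.0, 0.0], beta_rule="PR"))   # approaches [3, -1]
```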

Page 11: Fletcher-Reeves Method for Minimizing f(x)

0 - Starting at any x_0, define d_0 = -g_0, where g is the column vector of gradients of the objective function at the point x.

1 - Using d_k, find the new point x_{k+1} = x_k + α_k d_k, where α_k is found using a line search that minimizes f(x_k + α_k d_k).

2 - Calculate the new conjugate gradient direction d_{k+1} according to: d_{k+1} = -g_{k+1} + β_k d_k, where

β_k = (g_{k+1}^T g_{k+1}) / (g_k^T g_k)

See also Example 3.9 (page 73) in your textbook

Page 12: Conjugate Gradient Method Advantages

See 'em in action! For animations of all of the preceding search techniques, check out:

http://www.esm.vt.edu/~zgurdal/COURSES/4084/4084-Docs/Animation.html

The simple formulae for updating the direction vector are attractive.

The method is slightly more complicated than steepest descent, but converges faster.