Lecture 24: Numerical Methods, Part 4
• Project questions
• Project problem added: 3-position linkage synthesis (see Norton, 2nd Ed., Ch. 5)
• Homework
• Review
• Conjugate Gradient algorithm
• Summary
• Excel: user-defined function
ME483/582 Optimal Design: Steepest Descent, Example 10.52
Iteration    1        2        3        4         GRG
x1           1        2        2.5      3
x2           1        0.5      1.5      1.25
c1          -4       -1       -2       -0.5
c2           2       -2        1       -1
||c||        4.5      2.2      2.2      1.1
d1           4        1        2        0.5
d2          -2        2       -1        1
α            0.25     0.5      0.25     0.5
xnew1        2        2.5      3        3.25      4
xnew2        0.5      1.5      1.25     1.75      2
f(x)        -5.5     -6.75    -7.375   -7.6875   -8
f(x) = x1^2 + 2 x2^2 - 4 x1 - 2 x1 x2

c(x) = ∇f = [ 2 x1 - 2 x2 - 4,  4 x2 - 2 x1 ]

x(0) = (1, 1),  x* = (4, 2),  f(x*) = -8
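Since f is quadratic with constant Hessian H, the exact line-search step has the closed form α = (cᵀc)/(dᵀHd). The sketch below (Python with NumPy assumed; not from the slides) reproduces the four steepest-descent iterations tabulated above:

```python
import numpy as np

# Example 10.52 objective: f(x) = x1^2 + 2*x2^2 - 4*x1 - 2*x1*x2
def f(x):
    return x[0]**2 + 2*x[1]**2 - 4*x[0] - 2*x[0]*x[1]

def grad(x):
    return np.array([2*x[0] - 2*x[1] - 4, 4*x[1] - 2*x[0]])

H = np.array([[2.0, -2.0], [-2.0, 4.0]])   # constant Hessian of the quadratic

x = np.array([1.0, 1.0])                   # starting point from the table
for k in range(4):
    c = grad(x)
    d = -c                                 # steepest-descent direction
    alpha = (c @ c) / (d @ H @ d)          # exact step size for a quadratic
    x = x + alpha * d
    print(k + 1, x, f(x))
```

Running this gives the same α sequence (0.25, 0.5, 0.25, 0.5) and ends at x = (3.25, 1.75), f = -7.6875, matching the iteration-4 column.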
Prob 10.57
Iteration    1           2           3           4           GRG
x1           1           0.382979    0.259181    0.205541
x2           1          -0.23404    -0.2252     -0.15257
x3           1           0.074468    0.145209    0.04226
c1           4           0.297872    0.067963    0.105934
c2           8          -0.02128    -0.09202    -0.11469
c3           6          -0.17021     0.130438   -0.13611
||c||       10.8         0.3         0.2         0.2
d1          -4          -0.29787    -0.06796    -0.10593
d2          -8           0.021277    0.092018    0.114693
d3          -6           0.170213   -0.13044     0.136107
α            0.154255    0.415605    0.789258    0.263192
xnew1        0.382979    0.259181    0.205541    0.17766     0
xnew2       -0.23404    -0.2252     -0.15257    -0.12239     0
xnew3        0.074468    0.145209    0.04226     0.078083    0
f(x)         0.053191    0.028639    0.016761    0.011115    0
f(x) = x1^2 + 2 x2^2 + 2 x3^2 + 2 x1 x2 + 2 x2 x3

c(x) = ∇f = [ 2 x1 + 2 x2,  2 x1 + 4 x2 + 2 x3,  2 x2 + 4 x3 ]

x(0) = (1, 1, 1),  x* = (0, 0, 0),  f(x*) = 0
Prob 10.61
Iteration    1           2           3           4           GRG
x1           1           3.072336    2.935287    2.902186
x2           2           1.770152    0.339758    0.530258
x3           3           2.069488    1.428795    1.013562
x4           4           1.979566    2.296792    2.302835
c1       -1118          22.93872     9.487391    3.811218
c2         124         239.4148    -54.60108    34.59244
c3         502         107.2371    119.0139     13.93831
c4        1090         -53.0963    -1.732012     4.280717
||c||     1644.8       268.6       131.3        37.7
d1        1118         -22.9387    -9.487391   -3.811218
d2        -124        -239.415     54.60108   -34.59244
d3        -502        -107.237   -119.0139    -13.93831
d4       -1090          53.09632    1.732012   -4.280717
α            0.00185     0.00597     0.00349     0.00603
xnew1        3.0723      2.9353      2.9022      2.8792      2.51E-10
xnew2        1.7702      0.3398      0.5303      0.3218      0
xnew3        2.0695      1.4288      1.0136      0.9296      0.001241
xnew4        1.9796      2.2968      2.3028      2.2770      0.001241
f(x)       259.8004     45.8318     20.3837     16.0938      0.0000
f(x) = (x1 - 10 x2)^2 + 5 (x3 - x4)^2 + (x2 - 2 x3)^4 + 10 (x1 - x4)^4

x(0) = (1, 2, 3, 4)

c(x) = ∇f =
[   2 (x1 - 10 x2) + 40 (x1 - x4)^3
  -20 (x1 - 10 x2) +  4 (x2 - 2 x3)^3
   10 (x3 - x4)    -  8 (x2 - 2 x3)^3
  -10 (x3 - x4)    - 40 (x1 - x4)^3  ]

x* = (0, 0, 0, 0),  f(x*) = 0
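These gradient expressions can be verified against the iteration-1 column of the table, c = (-1118, 124, 502, 1090) at x(0) = (1, 2, 3, 4). A sketch (NumPy assumed; not from the slides), with a central-difference cross-check of the analytic gradient:

```python
import numpy as np

# Prob 10.61 objective (Powell-type function)
def f(x):
    x1, x2, x3, x4 = x
    return (x1 - 10*x2)**2 + 5*(x3 - x4)**2 + (x2 - 2*x3)**4 + 10*(x1 - x4)**4

def grad(x):
    x1, x2, x3, x4 = x
    return np.array([
          2*(x1 - 10*x2) + 40*(x1 - x4)**3,
        -20*(x1 - 10*x2) +  4*(x2 - 2*x3)**3,
         10*(x3 - x4)    -  8*(x2 - 2*x3)**3,
        -10*(x3 - x4)    - 40*(x1 - x4)**3,
    ])

x0 = np.array([1.0, 2.0, 3.0, 4.0])
c0 = grad(x0)                      # -> [-1118, 124, 502, 1090]

# central-difference check of the analytic gradient
h = 1e-6
fd = np.array([(f(x0 + h*e) - f(x0 - h*e)) / (2*h) for e in np.eye(4)])
```

Both `c0` and the finite-difference estimate `fd` agree with the tabulated iteration-1 gradient.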
Fractional Reduction (FR = I_new / I_old)

Number of function evaluations N required for a given fractional reduction of the interval of uncertainty:

• Equal Interval (aka "exhaustive"):
    FR = I_new / I_old = 2 / (N + 1),  so  N = 2/FR - 1

• Alternate Equal Interval:
    FR = I_new / I_old = (2/3)^(N/2),  so  N = 2 ln(FR) / ln(2/3)

• Golden Section:
    FR = I_new / I_old = (0.618)^(N-1),  so  N = 1 + ln(FR) / ln(0.618)

Add these formulas to your notes for the next test!
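As a quick check of the formulas, a small helper (hypothetical name `evals_needed`; Python's math module assumed) computes N for a required reduction:

```python
import math

def evals_needed(fr: float) -> dict:
    """Function evaluations N needed to reach a fractional reduction FR = I_new/I_old."""
    return {
        "equal_interval":  2.0 / fr - 1.0,                            # FR = 2/(N+1)
        "alternate_equal": 2.0 * math.log(fr) / math.log(2.0 / 3.0),  # FR = (2/3)^(N/2)
        "golden_section":  1.0 + math.log(fr) / math.log(0.618),      # FR = 0.618^(N-1)
    }

n = evals_needed(0.001)   # reduce the interval to 0.1% of its original size
# golden section needs about 15.4 -> round up to 16 evaluations,
# versus roughly 34 for alternate equal interval and 1999 for exhaustive
```

This makes the efficiency ranking concrete: Golden Section reaches the same FR with far fewer evaluations than either equal-interval scheme.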
How does it work?
Steepest Descent Algorithm

"Modified" Steepest-Descent Algorithm

Step 0. Select convergence (stopping) parameters: ε ≥ 0 (on ||c||, the gradient) and δ ≥ 0 (on Δf(x), the change in f; the "Excel convergence" setting); set the iteration counter k = 0.
Step 1. Estimate a starting point x(0).
Step 2. Calculate c(k) = ∇f(x(k)).
Step 3. Calculate ||c(k)||; if ||c(k)|| ≤ ε, stop; otherwise continue.
Step 4. Set d(k) = -c(k).
Step 5. Find the optimal step size α* along d(k).
Step 6. Update the design: x(k+1) = x(k) + α* d(k); calculate f(x(k+1)); set k = k + 1.
Step 7. If k > kmax or |Δf| ≤ δ, stop; otherwise go to Step 2.
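The steps above can be sketched directly in code. The sketch below runs it on the quadratic from Prob 10.57, with the exact quadratic step α = (cᵀc)/(dᵀHd) standing in for the Step 5 line search; the ε, δ, and kmax values are illustrative, not from the slides:

```python
import numpy as np

# Prob 10.57 objective as 0.5 x^T H x, i.e. f = x1^2 + 2x2^2 + 2x3^2 + 2x1x2 + 2x2x3
H = np.array([[2.0, 2.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 2.0, 4.0]])

def f(x):    return 0.5 * x @ H @ x
def grad(x): return H @ x

def steepest_descent(x, eps=1e-6, delta=1e-12, kmax=200):
    k, f_old = 0, f(x)                            # Steps 0-1
    while True:
        c = grad(x)                               # Step 2
        if np.linalg.norm(c) <= eps:              # Step 3: gradient criterion
            return x, k
        d = -c                                    # Step 4
        alpha = (c @ c) / (d @ H @ d)             # Step 5: exact step (quadratic f)
        x = x + alpha * d                         # Step 6
        f_new, k = f(x), k + 1
        if k > kmax or abs(f_old - f_new) <= delta:   # Step 7: Δf / kmax criteria
            return x, k
        f_old = f_new

x_star, iters = steepest_descent(np.array([1.0, 1.0, 1.0]))
```

The first iterations match the Prob 10.57 table (α ≈ 0.154, f ≈ 0.0532, 0.0286, ...), and the run illustrates the slow tail-end convergence discussed on the next slides: many iterations are spent creeping toward x* = (0, 0, 0).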
Cauchy’s Method
Engineering Optimization, Ravindran, Ragsdell, Reklaitis
Algorithms
• Algorithms include stopping criteria (||c||, Δf)
• Steepest descent algorithm:
    Convergence is assured
    However, lots of function evaluations (in the line search)
    Each iteration is independent of previous moves (i.e., totally "local")
    Successive iterations slow down and may stall
• Need a non-stalling algorithm?
Conjugate Gradient
Steepest Descent:
    d(k) = -c(k)                         (search direction)
    x(k+1) = x(k) + α(k) d(k)            (step size)

Conjugate Gradient:
    d(k) = -c(k) + β(k) d(k-1)           (search direction)
    β(k) = ( ||c(k)|| / ||c(k-1)|| )^2
    x(k+1) = x(k) + α(k) d(k)            (step size)
http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf
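This β update (the Fletcher-Reeves form) can be checked on the Prob 10.57 quadratic; the sketch below (NumPy assumed, exact quadratic step standing in for the line search) converges in n = 3 iterations for the 3-variable problem, as the theory promises:

```python
import numpy as np

# Prob 10.57 quadratic again: f = 0.5 x^T H x
H = np.array([[2.0, 2.0, 0.0],
              [2.0, 4.0, 2.0],
              [0.0, 2.0, 4.0]])

def conjugate_gradient(x, n_iter):
    c = H @ x                              # gradient of 0.5 x^T H x
    d = -c                                 # first direction: steepest descent
    for _ in range(n_iter):
        alpha = (c @ c) / (d @ H @ d)      # exact line search (quadratic)
        x = x + alpha * d
        c_new = H @ x
        beta = (c_new @ c_new) / (c @ c)   # beta = (||c_k|| / ||c_{k-1}||)^2
        d = -c_new + beta * d              # deflect the new gradient direction
        c = c_new
    return x

x = conjugate_gradient(np.array([1.0, 1.0, 1.0]), 3)
```

After 3 iterations x is essentially at x* = (0, 0, 0), whereas plain steepest descent is still at f ≈ 0.017 (compare the two f(x) rows in the example table).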
Conjugate Gradient Example
Iteration    1           2            3            4           GRG
x1           1           0.3829787    0.257143    -2.9E-10
x2           1          -0.234043    -0.22857      5.09E-11
x3           1           0.0744681    0.1428571    1.956E-10
c1           4           0.297872     0.057143    -4.8E-10
c2           8          -0.02128     -0.11429      1.58E-11
c3           6          -0.17021      0.114286     8.84E-10
||c||       10.7703      0.3437       0.1714       0.0000
β                        0.0010       0.2487       0.0000
d1          -4          -0.30195     -0.13224      4.77E-10
d2          -8           0.013128     0.117551    -1.6E-11
d3          -6           0.164101    -0.07347     -8.8E-10
α            0.154255    0.416749     1.944444     0.263192
xnew1        0.382979    0.257143    -2.9E-10     -1.6E-10    0
xnew2       -0.23404    -0.22857      5.09E-11     4.67E-11   0
xnew3        0.074468    0.142857     1.96E-10    -3.7E-11    0
f(x)         0.053191    0.028571     1.56E-19     1.52E-20   0
Steepest Descent f(x): 0.053191  0.028639  0.016761  0.011115
“Deflected” Steepest Descent
[Figure] A comparison of the convergence of gradient descent with optimal step size (in green) and the conjugate direction method (in red) for minimizing a quadratic function associated with a given linear system. Conjugate gradient, assuming exact arithmetic, converges in at most n steps, where n is the size of the system matrix (here n = 2). (Wikipedia)
http://en.wikipedia.org/wiki/Conjugate_gradient_method
Higher Order Methods
Modified Newton:
    d(k) = -[H(k)]^(-1) c(k)             (search direction)
    x(k+1) = x(k) + α(k) d(k)            (step size)

Marquardt's compromise:
    d(k) = -[H(k) + λ(k) I]^(-1) c(k)    (search direction)
    x(k+1) = x(k) + d(k)                 (note: no α)
    λ(k) is adjusted by a fraction or multiple, e.g. ×0.5 or ×2
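On a quadratic such as Example 10.52, a single full Newton step (α = 1) lands exactly on the minimum, since d = -H⁻¹c solves the first-order conditions in one shot. A minimal sketch, assuming NumPy:

```python
import numpy as np

# Example 10.52: f(x) = x1^2 + 2*x2^2 - 4*x1 - 2*x1*x2
H = np.array([[2.0, -2.0], [-2.0, 4.0]])                  # Hessian (constant)
def grad(x):
    return np.array([2*x[0] - 2*x[1] - 4, 4*x[1] - 2*x[0]])

x = np.array([1.0, 1.0])
d = np.linalg.solve(H, -grad(x))    # d = -H^{-1} c (solve, don't invert)
x_new = x + d                       # full Newton step, alpha = 1
# one step reaches x* = (4, 2), where the gradient vanishes
```

Compare with the steepest-descent table for the same problem, which needs many iterations; the price is forming and factoring H (and, for non-quadratic f, H may not be positive definite, which is what Marquardt's λ I term addresses).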
Summary
• Golden Section is very efficient
• Algorithms include stopping criteria (||c||, Δf)
• Steepest descent algorithm may stall
• Conjugate Gradient:
    Convergence in n iterations (n = number of design variables)
    Lots of function evaluations (in the line search)
    May need a restart after n+1 iterations