Gradient (steepest descent) method application on the Griewank function
TRANSCRIPT
Gradient Method Application
on the Griewank Function
Talk by Imane HAFNAOUI
University of M’Hamed Bouguara - IGEE
OUTLINE
• Introduction
• Griewank Function
• Gradient (steepest descent) Method
• Simulation and Results
• Improvements
• Conclusion
INTRODUCTION
Optimization is the task of finding the "best available" values of some objective function over a defined domain.
Optimization problems can be found everywhere:
• Increasing market profit
• Operations research (decision science)
• Minimizing losses in power grids
Optimization methods and algorithms have been developed to solve these problems.
• The Particle Swarm Optimization algorithm (PSO), presented by Frans van den Bergh in his work.
• Self-adaptive Differential Evolution (SaDE), introduced by Qin et al. in their publication.
• etc.
GRIEWANK FUNCTION
Test functions are special functions, well known in the literature, that are used as benchmarks. They come in different classes, each designed for a specific purpose.
One of the well-known functions is the Griewank function.
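For reference, the Griewank function of n variables is commonly written as

\[
f(\mathbf{x}) = 1 + \frac{1}{4000}\sum_{i=1}^{n} x_i^{2} - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right)
\]

Its global minimum is f(0) = 0 at the origin, and the cosine product creates many regularly spaced local minima around it.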
GRIEWANK FUNCTION
[Figure: the Griewank function depending on two variables.]
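As an illustration, a minimal NumPy sketch of this function (the function name and vector layout are my own, not from the talk):

import numpy as np

def griewank(x):
    """Evaluate the Griewank function at a point x (1-D array)."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return 1.0 + np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i)))

print(griewank([0.0, 0.0]))  # 0.0 -- the global minimum
print(griewank([1.0, 1.0]))  # about 0.59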
GRADIENT METHOD
Steepest descent iteratively performs line searches in the local downhill gradient direction.
Steps (a code sketch follows the list):
1. Evaluate the gradient vector ∇f(x_k).
2. Compute the search direction s_k = -∇f(x_k).
3. Construct the next point x_{k+1} = x_k + α_k s_k.
4. Perform the termination test for minimization (e.g. stop when ||s_k|| is small).
5. Repeat the process.
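A minimal sketch of these steps in Python, assuming a fixed step size alpha and termination on the norm of the gradient (the talk does not spell out its step-size rule, and the helper names are my own); griewank_grad is the analytic gradient of the function defined earlier:

def griewank_grad(x):
    """Analytic gradient of the Griewank function at x."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    c = np.cos(x / np.sqrt(i))
    # Derivative of the cosine product w.r.t. each coordinate
    # (assumes no cosine factor is exactly zero).
    trig = (np.prod(c) / c) * np.sin(x / np.sqrt(i)) / np.sqrt(i)
    return x / 2000.0 + trig

def steepest_descent(x0, alpha, tol=1e-4, max_iter=10000):
    """Fixed-step steepest descent: x_{k+1} = x_k - alpha * grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = griewank_grad(x)
        if np.linalg.norm(g) < tol:  # termination test
            return x, k
        x = x - alpha * g            # step along s_k = -grad f(x_k)
    return x, max_iter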
[Figure: the change in the function value after each iteration; only 4 iterations are needed to reach a solution.]
SIMULATION AND RESULTS
Results with starting point x0 = (1, 1):
k   x_k                 x_{k+1}             s_k                 ||s_k||
0   (1, 1)              (0.1677, 0.6767)    (-0.6402, -0.2487)  0.6868
1   (0.1677, 0.6767)    (-0.0250, 0.2589)   (0.1483, 0.3214)    0.3539
2   (-0.0250, 0.2589)   (0.0070, 0.0915)    (-0.0246, 0.1288)   0.1312
3   (0.0070, 0.0915)    (-0.0021, 0.0320)   (0.0070, 0.0457)    0.0463
4   (-0.0021, 0.0320)   (0.0006, 0.0112)    (-0.0021, 0.0160)   0.0161
5   (0.0006, 0.0112)    (-0.0002, 0.0039)   (0.0006, 0.0056)    0.0056
6   (-0.0002, 0.0039)   (0.0001, 0.0014)    (-0.0002, 0.0020)   0.0020
7   (0.0001, 0.0014)    (-0.0000, 0.0005)   (0.0001, 0.0007)    0.0007
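The tabulated steps are consistent with the sketch above run with a fixed step size of about 1.3 (deduced from the ratio between successive steps; the talk does not state the value):

x, k = steepest_descent([1.0, 1.0], alpha=1.3)
print(x, k)  # approaches the global minimum at the origin within a few iterations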
SIMULATION AND RESULTS
Results with differing starting points:
x(0)           x(k)                     k
(3, -15)       (3.1400, -13.3159)       7
(45, 23)       (47.1003, 22.1928)       8
(-120, 245)    (-122.4599, 244.1132)    9
(-300, -400)   (-301.4350, -399.4496)   7
(567, 234)     (568.3355, 235.2251)     12

In each case the method converges within a few iterations, but only to the local minimum nearest the starting point, not to the global minimum at the origin.
IMPROVEMENTS
We need to ensure that the algorithm always converges to the global optimum regardless of the starting point.
The idea is to keep the algorithm running and searching for the smallest of the local minima (the global minimum) even after it reaches a local minimum.
To achieve this, the gradient method is applied once more, but this time to the "governing" function inside the Griewank function.
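A sketch of one plausible reading of this idea, assuming the "governing" function is the quadratic term g(x) = sum(x_i^2)/4000 that shapes the overall bowl of the Griewank function (the helper name global_descent and the jump step size beta are my own):

def global_descent(x0, alpha, beta=1000.0, max_outer=50):
    """Alternate steepest descent on the Griewank function with gradient
    steps on its governing quadratic term g(x) = sum(x**2) / 4000,
    whose only minimum is the global one at the origin.
    beta is a hypothetical step size for the jump phase."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        x, _ = steepest_descent(x, alpha)  # settle into the nearest local minimum
        if np.linalg.norm(x) < 1.0:        # the nearest non-global minima lie
            return x                       # several units out, so this is the origin
        x = x - beta * (x / 2000.0)        # jump: gradient step on g(x);
                                           # beta=1000 halves the distance to the origin
    return x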
RESULTS
Results with differing starting points and differing step sizes:
x(0)          Step size   x(1)                  x(2)                 x(3)                 x(4)
(7, 25)       0.25        (6.280, 26.630)       (3.1400, 13.3153)    (3.1400, 4.4385)     (0.00e-4, 0.41e-4)
(-80, -120)   0.35        (-78.500, -155.345)   (-25.120, -44.384)   (-6.280, -17.753)    (0.00e-4, 0.41e-4)
(-100, 120)   0.45        (-97.340, 119.837)    (-12.560, 8.8769)    (-3.1400, 4.4384)    (0.19e-6, 0.57e-4)
(534, -120)   0.50        (530.659, -119.833)   (6.2800, -0.0001)    (-0.0e-4, 0.48e-4)   -
Table representing the results and the jumps over the local minima to reach the global optimum.
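For illustration, the first row of this table would correspond to a call like the following to the sketch above (again under my reading of the method):

x = global_descent([7.0, 25.0], alpha=0.25)
print(x)  # expected to land near the global minimum at the origin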
RESULTS
[Figure: contour of the Griewank function demonstrating the jumps that the algorithm performs each time it reaches a local minimum. Starting point x0 = (120, -120).]
CONCLUSION
• The gradient method is good at detecting the local minimum closest to the starting point.
• The additions to the steepest descent method proved successful in locating the global minimum of the Griewank function regardless of how far away the starting point is chosen.
• These extensions cannot be assured to give the same results when applied to other test functions, or when the number of variables is increased.
THANK YOU