Optimization
TRANSCRIPT
What is Optimization?
Optimization is an iterative process by which a desired solution
(max/min) of a problem is found while satisfying all of its
constraints or bound conditions.
An optimization problem may be linear or non-linear.
Non-linear optimization is accomplished by numerical 'search
methods'.
Search methods are applied iteratively until a solution is reached.
The search procedure is termed an algorithm.
Figure 2: The optimum solution is found while satisfying the
constraints (the derivative must be zero at the optimum).
Linear problems are solved by the Simplex or graphical methods.
The solution of a linear problem always lies on the boundary of the
feasible region.
The solution of a non-linear problem may lie within the feasible
region or on its boundary.
Figure 3: Solution of a linear problem.
Figure 4: Three-dimensional solution of a non-linear problem.
What is Optimization? (Cont.)
Constraints
• Inequality
• Equality
Fundamentals of Non-Linear Optimization
Single Objective function f(x)
• Maximization
• Minimization
Design variables: xi, i = 0, 1, 2, 3, …
Figure 5: Example of design variables and
constraints used in non-linear optimization.
Maximize X1 + 1.5 X2
Subject to:
X1 + X2 ≤ 150
0.25 X1 + 0.5 X2 ≤ 50
X1 ≥ 50
X2 ≥ 25
X1 ≥ 0, X2 ≥ 0
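This small LP can be checked numerically. A minimal sketch, assuming the Optimization Toolbox routine linprog (covered in Part II) is available; the maximization is rewritten as minimizing the negative objective, and the ≥ constraints are negated into ≤ form:

% Sketch: solve the LP above with linprog.
% Maximize X1 + 1.5*X2  ==  minimize -(X1 + 1.5*X2)
f  = [-1; -1.5];          % negated objective coefficients
A  = [ 1     1  ;         %   X1 +    X2 <= 150
       0.25  0.5;         % 0.25X1 + 0.5X2 <= 50
      -1     0  ;         %   X1 >= 50  ->  -X1 <= -50
       0    -1 ];         %   X2 >= 25  ->  -X2 <= -25
b  = [150; 50; -50; -25];
lb = [0; 0];              % X1, X2 >= 0
[x, fval] = linprog(f, A, b, [], [], lb);
% Expect x near [100; 50] with objective -fval = 175 (a vertex of the
% feasible region, consistent with the linear-problem remark above).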
Optimal points
• Local minimum/maximum points: a solution x* is a local optimum
if no other x in its neighborhood yields a better objective value than x*.
• Global minimum/maximum points: a solution x** is a global optimum
if no other x in the entire search space yields a better objective value than x**.
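In symbols, for minimization (maximization is analogous with the inequalities reversed):

$$x^* \text{ is a local minimum if } f(x^*) \le f(x) \text{ for all } x \text{ with } \lVert x - x^* \rVert < \varepsilon$$

$$x^{**} \text{ is a global minimum if } f(x^{**}) \le f(x) \text{ for all feasible } x$$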
Figure 6: Global versus local optimization.
Figure 7: A local optimum is also the global optimum if the function is convex.
Fundamentals of Non-Linear Optimization (Cont.)
A function f is convex if, for any point Xa on the segment between X1 and
X2, f(Xa) lies below the corresponding point on the chord joining f(X1)
and f(X2).
Convexity condition – the Hessian (second-order derivative) matrix of the
function f must be positive semi-definite (eigenvalues positive or zero).
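In symbols, the chord condition for convexity reads:

$$f\big(\lambda X_1 + (1 - \lambda) X_2\big) \le \lambda f(X_1) + (1 - \lambda) f(X_2), \qquad 0 \le \lambda \le 1$$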
Fundamentals of Non-Linear Optimization (Cont.)
Figure 8: Convex and nonconvex sets.
Figure 9: Convex function.
Mathematical Background
The slope or gradient of the objective function f represents the
direction in which the function decreases or increases most rapidly.
The derivative (slope) of f:

$$\frac{df}{dx} = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} = \lim_{\Delta x \to 0} \frac{f(x + \Delta x) - f(x)}{\Delta x}$$

Taylor series expansion:

$$f(x_p + \Delta x) = f(x_p) + \left.\frac{df}{dx}\right|_{x_p} \Delta x + \frac{1}{2!} \left.\frac{d^2 f}{dx^2}\right|_{x_p} (\Delta x)^2 + \cdots$$

Jacobian – matrix of the gradients of f and g with respect to several variables:

$$J = \begin{bmatrix} \dfrac{\partial f}{\partial x} & \dfrac{\partial f}{\partial y} & \dfrac{\partial f}{\partial z} \\[1ex] \dfrac{\partial g}{\partial x} & \dfrac{\partial g}{\partial y} & \dfrac{\partial g}{\partial z} \end{bmatrix}$$

Hessian – matrix of second derivatives of f with respect to several variables:
$$H = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x \, \partial y} \\[1ex] \dfrac{\partial^2 f}{\partial y \, \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{bmatrix}$$

Second order condition (SOC)
• Eigenvalues of H(X*) are all positive.
• Determinants of all the leading principal minors of H(X*) are positive.
First order condition (FOC)

$$\nabla f(X^*) = 0$$
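As a quick worked illustration (a simple quadratic chosen here for brevity, not taken from the slides):

$$f(x, y) = x^2 + y^2, \qquad \nabla f = \begin{bmatrix} 2x \\ 2y \end{bmatrix} = 0 \;\Rightarrow\; (x^*, y^*) = (0, 0)$$

$$H = \begin{bmatrix} 2 & 0 \\ 0 & 2 \end{bmatrix} \text{ has eigenvalues } 2, 2 > 0, \text{ so } (0,0) \text{ satisfies both FOC and SOC (a minimum).}$$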
Mathematical Background (Cont.)
Optimization Algorithm
Deterministic – specific rules (using gradient and Hessian information)
are used to move from one iteration to the next.
Stochastic – probabilistic rules are used for the subsequent iteration.
Optimal Design – engineering design based on an optimization
algorithm.
Lagrangian method – the sum of the objective function and a linear
combination of the constraints.
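For example, with inequality constraints g_j(x) ≤ 0 the Lagrangian takes the form:

$$L(\vec{x}, \lambda) = f(\vec{x}) + \sum_j \lambda_j \, g_j(\vec{x})$$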
Multivariable techniques (these make use of single-variable
techniques, especially the Golden Section search).
Optimization Methods
Deterministic
• Direct Search – Use Objective function values to locate minimum
• Gradient Based – first or second order of objective function.
• For a maximization problem, the minimization objective function is
used with a negative sign, i.e. minimize –f(x).
Single Variable
• Newton–Raphson – a gradient-based technique (FOC).
• Golden Section search – a step-size-reducing iterative method (see the
sketch after Figure 10).
• Unconstrained Optimization
a.) Powell's method – fits a quadratic (degree-2) polynomial to the
objective function; non-gradient based.
b.) Gradient based – Steepest Descent (FOC) or Least Square minimum
(LMS).
c.) Hessian based – Conjugate Gradient (FOC) and BFGS (SOC).
• Constrained Optimization
a.) Indirect approach – transform the problem into an unconstrained
one.
b.) Exterior Penalty Function (EPF) and Augmented Lagrange
Multiplier methods.
c.) Direct methods – Sequential Linear Programming (SLP), SQP, and
the Generalized Reduced Gradient (GRG) method.
Figure 10: Descent Gradient or LMS
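As referenced in the Golden Section bullet above, here is a minimal sketch of a golden-section search. This is an illustrative helper function, not a toolbox routine; the bracket [a, b] is assumed to contain the minimum.

% Golden-section search: shrink the bracket [a,b] by the golden ratio
% each iteration, reusing one interior function evaluation.
function xmin = goldensection(f, a, b, tol)
  tau = (sqrt(5) - 1)/2;                 % golden ratio constant, ~0.618
  x1 = b - tau*(b - a);  f1 = f(x1);
  x2 = a + tau*(b - a);  f2 = f(x2);
  while (b - a) > tol
    if f1 < f2                           % minimum lies in [a, x2]
      b = x2;  x2 = x1;  f2 = f1;
      x1 = b - tau*(b - a);  f1 = f(x1);
    else                                 % minimum lies in [x1, b]
      a = x1;  x1 = x2;  f1 = f2;
      x2 = a + tau*(b - a);  f2 = f(x2);
    end
  end
  xmin = (a + b)/2;
end
% Usage: goldensection(@(x) (x - 2)^2, 0, 5, 1e-6) should return ~2.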
Optimization Methods …Constrained
Global Optimization – Stochastic techniques
• Simulated Annealing (SA) – based on the minimum-energy
principle of cooling a metal's crystalline structure.
• Genetic Algorithm (GA) – based on the survival-of-the-fittest
principle of evolutionary theory.
Optimization Methods (Cont.)
Optimization Methods (Example)
Multivariable gradient-based optimization:
J is the cost function to be minimized in two dimensions.
The contours of the J paraboloid shrink as it decreases.
function retval = Example6_1(x)
% example 6.1
retval = 3 + (x(1) - 1.5*x(2))^2 + (x(2) - 2)^2;
>> SteepestDescent('Example6_1', [0.5 0.5], 20,
0.0001, 0, 1, 20)
where:
[0.5 0.5] - initial guess
20 - number of iterations
0.0001 - golden-search tolerance
0 - initial step size
1 - step interval
20 - number of scanning steps
>> ans
2.7585 1.8960
Figure 11: Multivariable Gradient based optimization
Figure 12: Steepest Descent
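Note that SteepestDescent above is a course-supplied routine rather than a built-in MATLAB function. The exact minimum of Example6_1 is x = [3 2] with J = 3, so the 20-iteration answer is still converging. A quick cross-check with a built-in routine:

% Cross-check with MATLAB's built-in Nelder-Mead direct search.
[x, J] = fminsearch(@Example6_1, [0.5 0.5])   % expect x near [3 2], J near 3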
MATLAB Optimization Toolbox
PART II
Presentation Outline
Introduction
– Function Optimization
– Optimization Toolbox
– Routines / Algorithms available
Minimization Problems
– Unconstrained
– Constrained Example
The Algorithm Description
Multiobjective Optimization
– Optimal PID Control Example
Function Optimization
Optimization concerns the minimization or maximization of
functions
Standard Optimization Problem:
$$\min_{\vec{x}} f(\vec{x})$$

Subject to:

$$g_j(\vec{x}) \le 0 \quad \text{(inequality constraints)}$$

$$h_i(\vec{x}) = 0 \quad \text{(equality constraints)}$$

$$x_k^L \le x_k \le x_k^U \quad \text{(side constraints)}$$
f(x) is the objective function, which measures and evaluates the performance of a
system. In a standard problem, we are minimizing the function. For
maximization, it is equivalent to minimizing the negative of the objective
function.
x is a column vector of design variables, which can
affect the performance of the system.
Function Optimization (Cont.)
$$g_j(\vec{x}) \le 0 \quad \text{(inequality constraints)}$$

$$h_i(\vec{x}) = 0 \quad \text{(equality constraints)}$$

$$x_k^L \le x_k \le x_k^U \quad \text{(side constraints)}$$
Most algorithms require the inequality constraints in "less than or equal to" form!!!
Constraints – limitations on the design space. They can be linear or
nonlinear, explicit or implicit functions.
Optimization Toolbox
The Optimization Toolbox is a collection of functions that extend the
capability of MATLAB. The toolbox includes routines for:
• Unconstrained optimization
• Constrained nonlinear optimization, including goal attainment
problems, minimax problems, and semi-infinite minimization
problems
• Quadratic and linear programming
• Nonlinear least squares and curve fitting
• Solving nonlinear systems of equations
• Constrained linear least squares
• Specialized algorithms for large-scale problems
Minimization Algorithm
Minimization Algorithm (Cont.)
Equation Solving Algorithms
Least-Squares Algorithms
Implementing Opt. Toolbox
Most of these optimization routines require the definition of an M-file
containing the function f to be minimized.
Maximization is achieved by supplying the routines with –f.
Optimization options passed to the routines change optimization
parameters.
Default optimization parameters can be changed through an
options structure.
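As a small illustration of the –f device (the function g and its values here are illustrative, not from the slides; fminunc is assumed available):

% Maximize g(x) = 5 - (x-1)^2 by minimizing its negative.
negg = @(x) -(5 - (x - 1)^2);              % supply the routine with -g
opts = optimset('LargeScale','off');
[xmax, negval] = fminunc(negg, 0, opts);   % expect xmax near 1
gmax = -negval;                            % recover the maximum, near 5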
Unconstrained Minimization
Consider the problem of finding a set of values [x1 x2]T that
solves
$$\min_{\vec{x}} f(\vec{x}) = e^{x_1} \left( 4x_1^2 + 2x_2^2 + 4x_1 x_2 + 2x_2 + 1 \right)$$

where $\vec{x} = [x_1 \;\; x_2]^T$.
Steps:
• Create an M-file that returns the function value (Objective Function). Call it objfun.m
• Then, invoke the unconstrained minimization routine. Use fminunc
Step 1 – Obj. Function
function f = objfun(x)
f=exp(x(1))*(4*x(1)^2+2*x(2)^2+4*x(1)*x(2)+2*x(2)+1);
This M-file returns the value of the objective function at $\vec{x} = [x_1 \;\; x_2]^T$.
Step 2 – Invoke Routine
x0 = [-1,1];                              % starting with a guess
options = optimset('LargeScale','off');   % optimization parameters settings
% output arguments on the left, input arguments on the right:
[xmin,feval,exitflag,output] = fminunc('objfun',x0,options);
xmin =
    0.5000   -1.0000
feval =
  1.3028e-010
exitflag =
     1
output =
       iterations: 7
        funcCount: 40
         stepsize: 1
    firstorderopt: 8.1998e-004
        algorithm: 'medium-scale: Quasi-Newton line search'
xmin – minimum point of the design variables.
feval – objective function value at the minimum.
exitflag – tells whether the algorithm converged; if exitflag > 0, a local minimum was found.
output – some other information about the run.
Results
[xmin,feval,exitflag,output,grad,hessian]=
fminunc(fun,x0,options,P1,P2,…)
fun : returns the objective function value.
x0 : the initial guess; must be a vector whose size equals the
number of design variables.
options : sets some of the optimization parameters
(more in a few slides).
P1,P2,… : pass additional parameters to the objective function.
More on fminunc – Input
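Note that the P1,P2,… mechanism for passing extra parameters belongs to older MATLAB releases; a common alternative (sketched here with an illustrative parameter a) is to capture parameters in an anonymous function:

% Pass a parameter 'a' to the objective through an anonymous function.
a = 4;                                   % illustrative parameter value
f = @(x) exp(x(1))*(a*x(1)^2 + 2*x(2)^2 + 4*x(1)*x(2) + 2*x(2) + 1);
[xmin, feval] = fminunc(f, [-1, 1], optimset('LargeScale','off'));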
[xmin,feval,exitflag,output,grad,hessian]=
fminunc(fun,x0,options,P1,P2,…)
xmin : vector of the minimum (optimal) point; its size equals the
number of design variables.
feval : the objective function value at the optimal point.
exitflag : a value showing whether the optimization routine
terminated successfully (converged if > 0).
output : a structure giving more details about the optimization.
grad : the gradient value at the optimal point.
hessian : the Hessian value at the optimal point.
More on fminunc – Output
Options = optimset('param1',value1, 'param2',value2, …)
Options Setting – optimset
The routines in the Optimization Toolbox have a set of default
optimization parameters.
However, the toolbox allows you to alter some of those
parameters, for example: the tolerance, the step size, the gradient
or Hessian values, the maximum number of iterations, etc.
There is also a list of features available, for example: displaying
the values at each iteration, comparing the user-supplied gradient or
Hessian, etc.
You can also choose the algorithm you wish to use.
Options = optimset('param1',value1, 'param2',value2, …)

For example:
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Here LargeScale is the parameter (param1), on/off are its values
(value1), and the default is the one shown in { }.
Options Setting (Cont.)
Type help optimset in the command window and a list of the available
options settings will be displayed.
How to read it? For example:
LargeScale - Use large-scale algorithm if possible [ {on} | off ]
Since the default is on, if we would like to turn it off, we just type:
Options = optimset('LargeScale', 'off')
and pass Options to the input of fminunc.
Options Setting (Cont.)
Display - Level of display [ off | iter | notify | final ]
MaxIter - Maximum number of iterations allowed [ positive integer ]
TolCon - Termination tolerance on the constraint violation [ positive scalar ]
TolFun - Termination tolerance on the function value [ positive scalar ]
TolX - Termination tolerance on X [ positive scalar ]
Highly recommended to use!!!
Useful Option Settings
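A combined sketch of these settings (the specific values are illustrative):

% Illustrative: show progress per iteration, raise the iteration cap,
% and tighten the termination tolerances.
options = optimset('Display','iter', 'MaxIter',200, ...
                   'TolFun',1e-8, 'TolX',1e-8);
[xmin, feval] = fminunc('objfun', [-1, 1], options);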
fminunc and fminsearch
fminunc uses algorithms that exploit gradient and Hessian information.
Two modes:
• Large-scale: interior-reflective Newton
• Medium-scale: quasi-Newton (BFGS)
It is not preferred for solving highly discontinuous functions.
This function may only give local solutions.
fminsearch is generally less efficient than fminunc for
problems of order greater than two. However, when the problem is highly discontinuous, fminsearch may be more robust.
It is a direct search method that does not use numerical or analytic gradients, unlike fminunc.
This function may only give local solutions.
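For comparison, the earlier objfun can be handed to either routine; for this smooth problem both should find the same local minimum (a sketch):

% Gradient-based quasi-Newton vs. direct (Nelder-Mead) search on objfun.
[x1, f1] = fminunc('objfun', [-1, 1], optimset('LargeScale','off'));
[x2, f2] = fminsearch('objfun', [-1, 1]);   % no gradients used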
[xmin,feval,exitflag,output,lambda,grad,hessian] =
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)

lambda : vector of Lagrange multipliers at the optimal point.
Constrained Minimization
$$\min_{\vec{x}} f(\vec{x}) = -x_1 x_2 x_3$$

Subject to:

$$2x_1^2 + x_2 \le 0$$
$$-x_1 - 2x_2 - 2x_3 \le 0$$
$$x_1 + 2x_2 + 2x_3 \le 72$$
$$0 \le x_1, x_2, x_3 \le 30$$

In matrix form for fmincon:

$$A = \begin{bmatrix} -1 & -2 & -2 \\ 1 & 2 & 2 \end{bmatrix}, \quad B = \begin{bmatrix} 0 \\ 72 \end{bmatrix}, \quad LB = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}, \quad UB = \begin{bmatrix} 30 \\ 30 \\ 30 \end{bmatrix}$$
function f = myfun(x)
f = -x(1)*x(2)*x(3);   % minimize -x1*x2*x3, i.e. maximize x1*x2*x3
Example
For the nonlinear constraint $2x_1^2 + x_2 \le 0$, create a function called
nonlcon which returns the two constraint vectors [C, Ceq]:

function [C,Ceq] = nonlcon(x)
C = 2*x(1)^2 + x(2);   % nonlinear inequality, C <= 0
Ceq = [];              % remember to return a null matrix if the
                       % constraint does not apply
Example (Cont.)
x0 = [10;10;10];          % initial guess (3 design variables)
A = [-1 -2 -2; 1 2 2];
B = [0 72]';
LB = [0 0 0]';
UB = [30 30 30]';
[x,feval] = fmincon(@myfun,x0,A,B,[],[],LB,UB,@nonlcon)
CAREFUL!!! The arguments must follow the fixed sequence
fmincon(fun,x0,A,B,Aeq,Beq,LB,UB,NONLCON,options,P1,P2,…)
with A, B, LB, and UB as defined above.
Example (Cont.)
Warning: Large-scale (trust region) method does not currently solve this type of problem, switching to medium-scale (line search).
> In D:\Programs\MATLAB6p1\toolbox\optim\fmincon.m at line 213
In D:\usr\CHINTANG\OptToolbox\min_con.m at line 6
Optimization terminated successfully:
Magnitude of directional derivative in search direction less than 2*options.TolFun and maximum constraint violation is less than options.TolCon
Active Constraints:
2
9
x =
0.00050378663220
0.00000000000000
30.00000000000000
feval =
-4.657237250542452e-035
The constraints are numbered following the sequence A, B, Aeq, Beq, LB, UB, C, Ceq:

Const. 1: $-x_1 - 2x_2 - 2x_3 \le 0$ (row 1 of A)
Const. 2: $x_1 + 2x_2 + 2x_3 \le 72$ (row 2 of A)
Const. 3: $0 \le x_1$ (LB)
Const. 4: $0 \le x_2$ (LB)
Const. 5: $0 \le x_3$ (LB)
Const. 6: $x_1 \le 30$ (UB)
Const. 7: $x_2 \le 30$ (UB)
Const. 8: $x_3 \le 30$ (UB)
Const. 9: $2x_1^2 + x_2 \le 0$ (nonlinear constraint C)
Example (Cont.)