Optimization using the NAG Library
The Numerical Algorithms Group, Ltd.
Wilkinson House, Jordan Hill Road
Oxford OX2 8DR, United Kingdom
1. Introduction
This note describes some aspects of optimization, with an emphasis on the way in
which the NAG Library [1] can be used in the solution of problems in this area. The
following section contains a brief discussion (§2.1) of mathematical optimization,
which then focusses attention (§2.2) on the way problems in this area can be
categorized. This is important because it is necessary to use a solver which is
appropriate for the type of problem at hand; we illustrate this point in §2.3 with a
simple example. Section 3 describes some of the functionality of the NAG Library
optimization solvers; we draw a distinction between so-called local solvers (§3.1) and
global solvers (§3.2), before highlighting, in §3.3, the routines which have been added
to the Library in its latest release. We conclude §3 by looking at some related
problems which can be solved using routines from other chapters of the Library. Our
attention turns to applications of optimization in Section 4, which involves taking a
detailed look at a specific example from financial analysis in §4.1, before briefly
surveying (§4.2) a few other applications in finance, manufacturing and business
analytics. A final section (§5) contains the conclusions of this note.
2. Background
2.1 What is optimization?
Optimization is a wide-ranging and technically detailed subject, to which several
authoritative texts – for example, [2] [3] – have been devoted. It can be defined as
“the selection of a best element (with regard to some criteria) from some set of
available alternatives” [4]. Mathematically, it consists of locating the maximum or
minimum of a function of one or more variables (called the objective function):
$$ \min_{x \in \mathbb{R}^n} f(x) \qquad (1) $$
and, since maximizing $f(x)$ is equivalent to minimizing $-f(x)$, we can
without loss of generality consider minimization only from here on. A distinction may
be drawn between finding a local minimum – where, for all $x$ near $x^*$, $f(x^*) \le
f(x)$ – and a global minimum, where this condition is fulfilled for all $x$. While many
algorithms have been developed for local optimization, the development of reliable
methods for global optimization has proved harder – we return to this point below
in §3.2.
Methods for optimization often require information related to the derivatives of the
objective function, in order to assist with the search for the minimum. More
specifically, the elements of the gradient vector $g$ and the Hessian matrix $H$ are
$$ g_i(x) = \frac{\partial f}{\partial x_i}, \qquad H_{ij}(x) = \frac{\partial^2 f}{\partial x_i \, \partial x_j} \qquad (2) $$
A simple illustration of this point can be found below, in the description of the so-called
gradient descent method, which uses $g$ explicitly – see equation (10).
2.2 Constraints and categorization
The optimization problem may be supplemented by restrictions on the values of the
solution, which are usually specified by a set of constraint functions, for example:
$$ l_i \le x_i \le u_i \qquad (3) $$
which are bound constraints, or
$$ a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \le b \qquad (4) $$
which is an example of a linear constraint, or
$$ c(x) \le 0 \qquad (5) $$
which is a non-linear constraint. The three types of constraint can be expressed
generally as
$$ l \;\le\; \left\{ \begin{array}{c} x \\ A x \\ c(x) \end{array} \right\} \;\le\; u \qquad (6) $$
in which $l$ and $u$ are vectors of lower and upper bound values, $A$ is a constant matrix,
and $c(x)$ is a vector of nonlinear constraint functions.
The optimization problem is categorized by the form of the constraints, and also by
the properties of the objective function. More specifically, a distinction is made
between objective functions that are linear, quadratic and nonlinear. Another
categorization may be made if (1) can be written as
$$ f(x) = \sum_{i=1}^{m} r_i^2(x) \qquad (7) $$
in which case the optimization is referred to as a least squares problem (which may be
further classified as nonlinear or linear, depending on the form of the functions $r_i$).
Some specific combinations of the properties of the objective function and the
constraints receive special designation; thus, linear constraints together with a linear $f$
is referred to as a linear programming (LP) problem; linear constraints plus a
quadratic $f$ makes a quadratic programming (QP) problem, while nonlinear
constraints with a nonlinear $f$ is called a nonlinear programming (NLP) problem.
These designations are important because a single, all-purpose optimization method
does not currently exist; instead, different methods have been developed which are
suitable for particular forms of the problem. Using a method that is inappropriate for
the problem at hand can be cumbersome and/or inefficient. We illustrate this point in
the following subsection using a concrete example.
2.3 An example problem
Figure 1 shows two attempts to find the minimum of the so-called Rosenbrock
function [5]:
Figure 1. Finding the minimum of Rosenbrock's function in
MATLAB® using the method of steepest descents (top) and a method
from the NAG Library (bottom)
$$ f(x, y) = (1 - x)^2 + 100\,(y - x^2)^2 \qquad (8) $$
This is a function which was designed as a performance test for optimization
algorithms. It has a global minimum at
$$ (x, y) = (1, 1) \qquad (9) $$
which is within a long, narrow, parabolic-shaped flat valley. The two methods used in
Figure 1 to minimize (8) are called gradient (or steepest) descent [6], and a sequential
quadratic programming (SQP) method [7], as implemented in the NAG Library [8].
The former is easier to understand (and program) than the latter: specifically, at each
point, the method takes a step in the direction of the negative gradient at that point:
$$ x^{(k+1)} = x^{(k)} - \gamma^{(k)} \, \nabla f\big(x^{(k)}\big) \qquad (10) $$
where $\gamma^{(k)}$, the step size at the $k$-th step, is proportional to $|\nabla f(x^{(k)})|$, the modulus of
the gradient at that step. Figure 1 shows that, whilst this method is able to find the
location of the valley in a few big steps, its progress along the valley takes many more
small steps, because of the link between the size of the step and the gradient. The
method terminates after 200 steps in this example, with the current iterate still a
long way from the minimum. By contrast, the SQP method converges to the
minimum in 55 steps. (The number of steps taken by an optimization method should
be as small as possible for efficiency’s sake, because it is proportional to the number
of times the objective function is evaluated, which can be computationally expensive.)
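The gradient descent iteration (10) is simple enough to sketch in a few lines of code. The following is an illustrative sketch, not the implementation behind Figure 1: for simplicity it uses a fixed step size rather than one proportional to the gradient modulus, and the starting point and iteration count are chosen purely for demonstration.

```python
# Gradient descent on the Rosenbrock function (8); an illustrative sketch,
# not the implementation used for Figure 1. The fixed step size is a
# simplification of the gradient-proportional step described in the text.

def rosenbrock(x, y):
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

def rosenbrock_grad(x, y):
    # Analytic partial derivatives of (8)
    dfdx = -2 * (1 - x) - 400 * x * (y - x ** 2)
    dfdy = 200 * (y - x ** 2)
    return dfdx, dfdy

def gradient_descent(x, y, step=1e-3, iterations=200):
    for _ in range(iterations):
        gx, gy = rosenbrock_grad(x, y)
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = gradient_descent(-1.0, 1.0)
# After 200 steps the iterate is still well away from the minimum at (1, 1),
# illustrating the slow progress along the valley floor described above.
```

Even though each step reduces the objective, the small steps dictated by the narrow valley leave the iterate far from the solution, in line with the behaviour shown in the top panel of Figure 1.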
We recall that many optimization methods require information about the objective
function’s derivatives. The NAG implementation of the SQP method has an option
for these to be specified explicitly (in the same way that the form of the objective
function is described); otherwise, it approximates the derivatives using finite
difference methods. This clearly requires many more evaluations of the objective
function – to be specific, in this example, it increases the number of steps required for
convergence to 280.
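The forward-difference approximation underlying such a fallback can be sketched as follows. This is a generic illustration of the technique, not NAG's internal scheme, and the step size `h` is a typical but arbitrary choice.

```python
# Forward-difference approximation of the gradient (2). A generic sketch of
# the technique, not NAG's internal scheme; each component of the gradient
# costs one extra evaluation of the objective function.

def fd_gradient(f, x, h=1e-7):
    fx = f(x)
    grad = []
    for i in range(len(x)):
        xh = list(x)
        xh[i] += h                    # perturb one coordinate at a time
        grad.append((f(xh) - fx) / h)
    return grad

def rosenbrock(v):
    x, y = v
    return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

g = fd_gradient(rosenbrock, [-1.2, 1.0])
# g closely matches the analytic gradient (-215.6, -88.0) at this point.
```

The extra function evaluation per gradient component is exactly why the finite-difference variant needs so many more evaluations than the run with explicit derivatives.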
Finally in this section, we note that the Rosenbrock function (8) can be written in the
form of (7), which turns its minimization into a non-linear least squares problem:
$$ f(x, y) = r_1^2(x, y) + r_2^2(x, y), \qquad r_1(x, y) = 10\,(y - x^2), \quad r_2(x, y) = 1 - x \qquad (11) $$
Switching to a quasi-Newton method [9] which is specially designed for this type of
problem – implemented in the NAG Library at [10] – results in a convergence to the
minimum in 19 steps (which, since this method requires specification of derivatives,
is to be compared with the 55 steps required by the more general SQP method when
derivatives are provided).
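The structure that such a solver exploits can be illustrated with a Gauss-Newton iteration on (11) – a simplified relative of the quasi-Newton least-squares method cited above, sketched here for illustration only, not a reproduction of the NAG routine.

```python
# Gauss-Newton on the least-squares form (11) of the Rosenbrock function;
# an illustrative sketch of the least-squares approach, not the NAG
# quasi-Newton routine referenced in the text.

def residuals(x, y):
    # r1 and r2 from equation (11): f = r1**2 + r2**2
    return 10.0 * (y - x * x), 1.0 - x

def jacobian(x, y):
    # Rows are the derivatives of r1 and r2 with respect to (x, y)
    return [[-20.0 * x, 10.0], [-1.0, 0.0]]

def gauss_newton(x, y, iterations=10):
    for _ in range(iterations):
        r1, r2 = residuals(x, y)
        (a, b), (c, d) = jacobian(x, y)
        det = a * d - b * c          # here det = 10, so J is always invertible
        # Gauss-Newton step: solve J * delta = -r (exact, since J is square)
        dx = (-r1 * d + r2 * b) / det
        dy = (-r2 * a + r1 * c) / det
        x, y = x + dx, y + dy
    return x, y

x, y = gauss_newton(-1.2, 1.0)
# Converges to the minimum (1, 1) in a handful of iterations.
```

Because the method works with the residuals and their Jacobian rather than the raw objective, it captures much of the curvature of (8) cheaply, which is why specialised least-squares solvers outperform general-purpose ones on problems of this form.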
3. Optimization and the NAG Library
In this section we look at the range of functionality and applicability that the NAG
Library offers for optimization problems. We consider both local (§3.1) and global
(§3.2) optimization and, whilst we do not for the most part discuss individual routines
in detail, describe in §3.3 the enhancements in optimization functionality which have
been included [11] in the most recent release (Mark 23) of the NAG Library. Finally
in this section, we mention (§3.4) a few problems which are related to optimization
that can be solved using functions in other chapters of the Library.
We note in passing that the Library is a collection of numerical routines which can be
accessed from a variety of computing environments and languages [12] including
MATLAB and Microsoft Excel (see Figures 1-3 for examples). In addition, some
work has recently been done [13] to build interfaces between NAG solvers and
AMPL, a specialized language which is used for describing optimization problems.
The interface to two solvers [14] [15] is described in detail (and is available for
download from the NAG website), which provides a starting point for the
development of AMPL interfaces to other routines, and the construction of
mechanisms for calling NAG routines from other optimization environments.
3.1 Local optimization
The NAG Library contains a number of routines for the local optimization of linear,
quadratic or nonlinear objective functions; of objective functions that are sums of
squares of linear or nonlinear functions; subject to bounded, linear, sparse linear,
nonlinear or no constraints. The documentation of the local optimization chapter [16]
describes how to choose the most appropriate routine for the problem at hand with the
help of a decision tree which incorporates questions such as:
- What kind of constraints does the problem have?
  o None
  o Bound
  o Linear
    (If so, is the linear constraints matrix sparse?)
  o Nonlinear
- What kind of form does the objective function have?
  o Linear
  o Quadratic
  o Sum of squares
  o Nonlinear
- Does it have one variable?
  o If not, does it have more than ten?
- Are first derivatives available?
  o If so, are second derivatives available?
- Is computational cost critical?
- Is storage critical?
- Are you an experienced user?
The final question is related to the fact that many of the optimization methods are
implemented in two routines: namely, a comprehensive form and an easy-to-use form.
The latter are more appropriate for less experienced users, since they include only
those parameters which are essential to the definition of the problem (as opposed to
parameters relevant to the solution method). The comprehensive routines use
additional parameters which allow the user to improve the method’s efficiency by
‘tuning’ it to a particular problem.
Figure 2. Globally optimizing a function with more than one minimum (displayed as
a surface at top) using a method from the NAG Library (bottom)
3.2 Global optimization
The Library contains implementations of three methods for the solution of global
optimization problems. As previously (§3.1), the documentation for this chapter [17]
provides the user with suggestions about the choice of the most appropriate routine for
the problem at hand.
nag_glopt_bnd_mcs_solve [18] uses a so-called multilevel coordinate
search [19] which recursively splits the search space (i.e. that within which the
minimum is being sought) into smaller subspaces in a non-uniform fashion.
The problem may be unconstrained, or subject to bound constraints. The
routine does not require derivative information, but the objective function
must be continuous in the neighbourhood of the minimum, otherwise the
method is not guaranteed to converge. The use of this routine is illustrated in
Figure 2, the upper part of which shows a demo function having more than one
minimum1. The workings of the method are shown in the bottom part of the
figure, which displays the function as a contour plot, along with the search
subspaces and objective function evaluation points chosen by the method; it
can be seen that – for this example – the method spends most of its time in and
around the two minima, before correctly identifying the global minimum as
the lower of the two.
nag_glopt_nlp_pso [20] uses a stochastic method based on particle
swarm optimization [21] to search for the global minimum of the objective
function. The problem may be unconstrained, or subject to bound, or linear, or
nonlinear constraints (there also exists a simpler variant [22] of the routine
which only handles bound – or no – constraints). The routine does not require
derivative information, although this could be used by the optional
accompanying local optimizers. A particular feature of this routine is that it
can exploit parallel hardware, since it allows multiple threads to advance
subiterations of the algorithm in an asynchronous fashion.
nag_glopt_nlp_multistart_sqp [23] finds the global minimum of an
arbitrary smooth function subject to constraints (which may include simple
bounds on the variables, linear constraints and smooth nonlinear constraints)
1 More specifically, this is MATLAB’s peaks function.
by generating a number of different starting points and performing a local
search from each using SQP. This method can also exploit parallel hardware,
in the same fashion as the particle swarm method.
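The multistart strategy itself can be sketched in a few lines. In the illustration below, plain gradient descent stands in for SQP, and the one-dimensional test function is made up (it has a local minimum near x ≈ 1.1 and a global minimum near x ≈ -1.3); this is a sketch of the idea, not of the NAG routine.

```python
# A minimal sketch of the multistart strategy behind
# nag_glopt_nlp_multistart_sqp: run a local search from several starting
# points and keep the best result. Plain gradient descent stands in for
# SQP, and the test function is made up.
import random

def f(x):
    # One local minimum near x = 1.1 and one global minimum near x = -1.3
    return x ** 4 - 3 * x ** 2 + x

def df(x):
    return 4 * x ** 3 - 6 * x + 1

def local_descent(x, step=0.01, iterations=500):
    # Stand-in local solver; the NAG routine performs an SQP search here
    for _ in range(iterations):
        x -= step * df(x)
    return x

def multistart(n_starts=20, seed=0):
    rng = random.Random(seed)
    results = [local_descent(rng.uniform(-2.0, 2.0)) for _ in range(n_starts)]
    return min(results, key=f)

best = multistart()
# Some local searches finish at the local minimum near x = 1.1, but the
# best result over all starts is the global minimum near x = -1.3.
```

Since the local searches are independent, they can run concurrently, which is the property the NAG routine exploits on parallel hardware.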
3.3 Recent additions to NAG Library optimization functionality
Two of the global optimization routines described in §3.2 (specifically,
nag_glopt_nlp_pso and nag_glopt_nlp_multistart_sqp) are new in
the most recent release of the NAG Library. In addition, the following local
optimization routines were also added to the Library at that release:
nag_opt_bounds_qa_no_deriv [24] is an easy-to-use routine that uses
methods of quadratic approximation to find a local minimum of an objective
function $F(x)$, subject to bound constraints on the $x_i$. Derivatives of $F$ are not
required. The routine is intended for functions that are continuous and that
have continuous first and second derivatives (although it will usually work
even if the derivatives have occasional discontinuities). The routine uses the
BOBYQA (Bound Optimization BY Quadratic Approximation) algorithm [25]
whose efficiency is preserved for large problem sizes2. For example, it solves
the problem of distributing 50 points on a sphere to have maximal pairwise
separation (starting from equally spaced points on the equator) using 4633
evaluations of $F$. This is to be compared with, for example, the NAG routine
[26] which uses the so-called Nelder-Mead simplex solver [27] and takes
16757 evaluations.
nag_opt_nlp_revcomm [28] is designed to locally minimize an arbitrary
smooth function subject to constraints (which may include simple bounds,
linear constraints and smooth nonlinear constraints) using an SQP method.
The user should supply as many first derivatives as possible; any unspecified
derivatives are approximated by finite differences. It may also be used for
unconstrained, bound-constrained and linearly constrained optimization. It
2 i.e., large $n$ in equation (1).
uses a similar algorithm to nag_opt_nlp [15], but has an alternative
interface3.
3.4 Solving optimization-related problems using other NAG routines
In addition to the standard types of optimization problem which the routines from the
two optimization chapters [16] [17] can be used to solve, there are a few related
problems that can be solved using functions in other chapters of the NAG Library:
nag_ip_bb [29] solves various types of integer programming problem.
Briefly, this is an optimization problem of the form of (1) – possibly
incorporating constraints (6) – with the restriction that some (or all) of the
elements of the solution vector take integer values. The routine uses a so-
called branch and bound method, which divides the problem up into sub-
problems, each of which is solved internally using a QP solver from the local
optimization chapter [30].
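The branch-and-bound strategy itself is easy to sketch. The toy example below solves a 0/1 knapsack problem, bounding each subproblem with its fractional relaxation rather than the QP subproblems used by nag_ip_bb; it illustrates the divide-and-bound idea only.

```python
# A minimal branch-and-bound sketch in the spirit of nag_ip_bb: split the
# problem on one variable at a time and bound each subproblem with a
# relaxation. The toy problem is a 0/1 knapsack, and the bound is the
# fractional relaxation rather than the QP subproblems used by the NAG
# routine.

def fractional_bound(values, weights, capacity, fixed):
    """Upper bound: honour the fixed choices, fill the rest fractionally."""
    total, remaining, free = 0.0, capacity, []
    for i, choice in enumerate(fixed):
        if choice == 1:
            total += values[i]
            remaining -= weights[i]
        elif choice is None:
            free.append(i)
    if remaining < 0:
        return float("-inf")                      # infeasible subproblem
    for i in sorted(free, key=lambda i: values[i] / weights[i], reverse=True):
        take = min(1.0, remaining / weights[i])   # fractional amount of item i
        total += take * values[i]
        remaining -= take * weights[i]
    return total

def branch_and_bound(values, weights, capacity):
    best_value, best_choice = 0.0, [0] * len(values)
    stack = [[None] * len(values)]                # None = not yet decided
    while stack:
        fixed = stack.pop()
        if fractional_bound(values, weights, capacity, fixed) <= best_value:
            continue                              # prune: cannot beat incumbent
        if None not in fixed:                     # all-integer and better
            best_value = fractional_bound(values, weights, capacity, fixed)
            best_choice = fixed
            continue
        i = fixed.index(None)                     # branch on the next variable
        for choice in (1, 0):
            stack.append(fixed[:i] + [choice] + fixed[i + 1:])
    return best_value, best_choice

value, chosen = branch_and_bound([60, 100, 120], [10, 20, 30], 50)
# value == 220.0, chosen == [0, 1, 1]: take the second and third items.
```

The pruning step is what makes the method practical: any subproblem whose relaxed bound cannot beat the best integer solution found so far is discarded without further branching.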
There are several routines elsewhere in the Library [31] which solve linear
least squares problems – i.e., the minimization of
$$ f(x) = \sum_{i=1}^{m} r_i^2(x) \qquad (12) $$
where the residual $r_i$ is defined as
$$ r_i(x) = b_i - \sum_{j=1}^{n} a_{ij} x_j \qquad (13) $$
In a similar fashion, the minimization of other metrics – for example the so-called
$\ell_1$ norm [32],
$$ f(x) = \sum_{i=1}^{m} |r_i(x)| \qquad (14) $$
or the $\ell_\infty$ norm,
3 Specifically, it uses so-called reverse communication [51], as opposed to direct communication, which
requires that user-supplied procedures – such as that which calculates $f(x)$ – be included in the
function argument list.
$$ f(x) = \max_{1 \le i \le m} |r_i(x)| \qquad (15) $$
can be solved4 using routines from the curve and surface fitting chapter of the
Library.
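To make the different metrics concrete, the sketch below evaluates the objectives (12), (14) and (15) for a made-up residual vector; note how the single large residual dominates the sum-of-squares measure far more than the $\ell_1$ measure.

```python
# Evaluating the three objective functions for a given residual vector:
# the sum of squares (12), the l1 norm (14) and the l-infinity norm (15).
# The residuals are made-up illustrative data.

def l2_objective(r):
    return sum(ri * ri for ri in r)       # equation (12)

def l1_objective(r):
    return sum(abs(ri) for ri in r)       # equation (14)

def linf_objective(r):
    return max(abs(ri) for ri in r)       # equation (15)

residuals = [0.5, -0.2, 3.0, 0.1]         # note the outlying residual 3.0

print(l2_objective(residuals))            # about 9.3 (dominated by 3.0**2)
print(l1_objective(residuals))            # about 3.8
print(linf_objective(residuals))          # 3.0
```

This sensitivity to outliers is one reason for choosing an $\ell_1$ fit over a least-squares fit when the data contain occasional gross errors.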
4. Applications
A complete review of applications for optimization [4] is beyond the scope of this
note; instead we first present (§4.1) a brief description of a specific, simple,
application taken from finance – namely, portfolio optimization – before discussing
(§4.2) a few other example applications of interest.
We note in passing that use of the NAG Library for portfolio optimization has
previously been discussed elsewhere [33]. Our account in §4.1 is less comprehensive,
but takes more recent developments – including the Library interfaces with MATLAB
and Microsoft Excel, and the treatment of the so-called nearest correlation matrix [34]
– into account. The interested reader is referred to [33] for a more detailed and
authoritative account of this subject.
4.1 Portfolio optimization
Portfolio optimization addresses the question of diversification – that is, how to
combine different assets in a portfolio which achieves maximum return with minimal
risk. This problem was initially addressed by Markowitz [35]; we outline its
treatment here, although more refined models are often used by professional investors
because the Markowitz model has certain practical drawbacks. The basic theory is,
however, still relevant, and is taught in postgraduate finance courses at universities.
We assume that the portfolio contains $n$ assets, each of which has a return $R_i$. Then,
the return from the portfolio is
$$ R(x) = \sum_{i=1}^{n} x_i R_i \qquad (16) $$
where $x_i$ is the proportion of the portfolio invested in asset $i$. We further assume that
each $R_i$ is normally distributed with mean $\mu_i$ and variance $\sigma_i^2$; the vector of returns
4 nag_lone_fit [53] for the $\ell_1$ norm, or nag_linf_fit [52] for the $\ell_\infty$ norm
then has the $N(\mu, \Sigma)$ distribution, with $\mu$ the vector of means and $\Sigma$ the covariance
matrix. These can be, for example, calculated from historical data:
$$ \mu_i = \frac{1}{T} \sum_{t=1}^{T} R_i(t), \qquad \Sigma_{ij} = \frac{1}{T-1} \sum_{t=1}^{T} \big(R_i(t) - \mu_i\big)\big(R_j(t) - \mu_j\big) \qquad (17) $$
for a collection of $T$ observations of $R_i$ at times $t$ (we note in passing that the NAG
Library contains a routine [36] for calculating $\mu$ and $\Sigma$, as well as other statistical
data).
The expectation and variance of the portfolio return are
$$ E\big(R(x)\big) = \sum_{i=1}^{n} x_i \mu_i, \qquad \mathrm{Var}\big(R(x)\big) = \sum_{i=1}^{n} \sum_{j=1}^{n} x_i \Sigma_{ij} x_j \qquad (18) $$
The portfolio optimization problem is then to determine the $x$ which results in a
specific return $r^*$, with minimum risk. We define the latter as the variance of the
return – see (18). The objective function is then $\mathrm{Var}\big(R(x)\big)$ – i.e., the optimization
problem can then be stated as
$$ \min_{x} \; \sum_{i=1}^{n} \sum_{j=1}^{n} x_i \Sigma_{ij} x_j \qquad (19) $$
with the constraint
$$ \sum_{i=1}^{n} x_i \mu_i = r^* \qquad (20) $$
This problem can be solved using a QP method [37], as implemented in the NAG
Library at [38]. The problem may incorporate additional constraints – for example, that the
whole portfolio be fully invested:
$$ \sum_{i=1}^{n} x_i = 1 \qquad (21) $$
In addition, the composition of the portfolio may be constrained across financial
sectors. For example, the requirement that up to 20% be invested in commodities, and
at least 10% in emerging markets may be specified as
$$ \sum_{i=1}^{n} c_i x_i \le 0.2, \qquad \sum_{i=1}^{n} e_i x_i \ge 0.1 \qquad (22) $$
where each element of the coefficient vectors $c$ and $e$ is either 0 or 1, depending on
the classification of asset $i$. Other types of linear constraints might capture the
selection of assets based on geography or stock momentum, while simple bound
constraints of the form of (3) can be used to allow or prohibit shorting5.
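For a problem this small, the equality-constrained version of (19)-(21) can even be solved by hand, by writing out the first-order (KKT) conditions as a single linear system. The sketch below does this for three hypothetical assets with entirely made-up data; it allows short selling (no bound constraints) and is purely illustrative – a real application would use a QP solver such as [38].

```python
# Solving the equality-constrained Markowitz problem (19)-(21) for three
# hypothetical assets by assembling the KKT conditions into one linear
# system. All data are made up; short selling is allowed (no bound
# constraints).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

mu = [0.10, 0.05, 0.12]                  # hypothetical mean returns
Sigma = [[0.040, 0.006, 0.010],          # hypothetical covariance matrix
         [0.006, 0.010, 0.004],
         [0.010, 0.004, 0.090]]
target = 0.08                            # desired portfolio return r*

# KKT system: minimize x' Sigma x subject to mu' x = target and sum(x) = 1
n = 3
K = [[2 * Sigma[i][j] for j in range(n)] + [mu[i], 1.0] for i in range(n)]
K.append(mu + [0.0, 0.0])                # constraint (20)
K.append([1.0] * n + [0.0, 0.0])         # constraint (21)
weights = solve(K, [0.0] * n + [target, 1.0])[:n]
# weights satisfy (20) and (21) and minimize the variance (19)
```

As soon as inequality constraints such as (22) or bounds on the $x_i$ are added, the problem stops being a simple linear system, and a proper QP solver becomes necessary.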
Figure 3 is a screenshot from an application [39] implemented in Excel which allows
the user to enter or edit the historical returns for a portfolio of eleven stocks, before
calculating $\mu$ and $\Sigma$ from (17). These are then used in (19) and (20), which are solved
using a NAG method6 [38]. The application calculates the portfolio $x$ which provides
a return $r^*$ (entered by the user), at minimum risk. It also calculates the so-called
efficient frontier [40], which shows the highest expected return for each level of risk.
5 A company with a licence to short a position sells stock it doesn't own, in the expectation that, when
the time comes to deliver on the sale, it will be able to buy the stock at a lower price than it received
initially, thereby making a profit on the transaction.
6 More specifically, the method takes the form of a routine within the NAG Library (implemented as a
Dynamic Link Library, or DLL), which we call from within a Visual Basic for Applications (VBA)
function attached to the spreadsheet. More information about the technical details of how this works
can be found at [50].
As is well-known [35], this simplified portfolio model suffers from a number of
drawbacks that impede its application to real-world problems. For example, the
assumption that the returns are normally distributed is not borne out in practice, nor is
it the case that correlations between assets do not change in time7. It also assumes
that over-performance (a return greater than $r^*$) is to be eschewed in the same way as
under-performance; such an equal weighting clearly does not correspond to the
desires of real investors. In addition, the costs of each transaction – and other
refinements such as taxes and broker fees – are ignored in this model, which also
assumes that all assets can be divided into parcels of any size (which, in reality, is not
7 For example, during a financial crisis, all assets tend to become positively correlated, because they all
move in the same direction (i.e. down).
Figure 3. Portfolio optimization using a NAG method in a Microsoft Excel
spreadsheet, showing the so-called efficient frontier (see text).
the case). We must emphasise that in practice quantitative analysts use models that are
more sophisticated than that described here – see, for example, the work described in
[41] – which in turn require the use of a more general optimizer from the NAG
Library.
Before leaving this subject, we mention a related area where NAG methods are
proving helpful. The treatment above – and also that of more sophisticated models –
assumes that the covariance matrix $\Sigma$ is positive semi-definite – i.e., that the quantity
$x^T \Sigma x$ is non-negative for all $x$ (this is part of the definition of a covariance matrix,
and is a requirement for the optimizer to work). In practice, this is often not the case,
as $\Sigma$ is calculated from historical data [see (17)], which might be incomplete, thus
rendering $\Sigma$ inappropriate for use here.
In other models $\Sigma$ is a correlation matrix (a covariance matrix with unit diagonal). A
method which determines $\hat{\Sigma}$, the nearest correlation matrix to $\Sigma$, by minimizing their
Frobenius-norm distance has recently been developed [42] [43], and has been
implemented in the NAG Library [34].
4.2 Other applications
Other examples of applications which use NAG routines for optimization include
index tracking [44]. This problem – which is related to portfolio optimization (§4.1) –
aims to replicate the movements of the index of a particular financial market,
regardless of changes in market conditions, and can also be tackled using optimization
methods. In fact, the same NAG method [38] that was used for the portfolio
optimization problem can also be used here.
Optimization is also heavily used in the calibration of derivative pricing models [45],
in which values for the parameters of the model are calculated by determining the best
match between the results of the model and historical prices (known as the calibration
instruments). This can be expressed as the minimization of, for example, the chi-squared
metric:
$$ \chi^2(\theta) = \sum_{i=1}^{N} w_i \big( P_i - P(i; \theta) \big)^2 \qquad (23) $$
i.e., the weighted sum, over the $N$ instruments in the calibration set, of the squared
difference between the market price $P_i$ of instrument $i$ and its price $P(i; \theta)$ as predicted by the
model for some set of parameters $\theta$. The weight $w_i$ is used to reflect the confidence
with which the market price for instrument $i$ is known. As noted above (§3.4), it is also possible
to use other metrics – such as the (weighted) $\ell_1$ and $\ell_\infty$ norms – here. The
optimization problem is then to determine the $\theta$ which minimizes the metric.
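For a model that is linear in a single parameter, the minimizer of (23) can be written in closed form, which makes for a compact illustration; the model, data and weights below are entirely made up.

```python
# Minimizing the chi-squared metric (23) for a toy model that is linear in
# one parameter theta: P(i; theta) = theta * s_i. All data are made up.

prices = [1.2, 2.1, 2.9, 4.2]   # market prices P_i of the calibration instruments
s = [1.0, 2.0, 3.0, 4.0]        # per-instrument model coefficients (hypothetical)
w = [1.0, 1.0, 0.5, 0.25]       # weights w_i reflecting confidence in each price

def chi_squared(theta):
    # Equation (23) for this model
    return sum(wi * (pi - theta * si) ** 2 for wi, si, pi in zip(w, s, prices))

# Setting the derivative of chi_squared to zero gives the closed-form minimizer:
theta_opt = (sum(wi * si * pi for wi, si, pi in zip(w, s, prices))
             / sum(wi * si * si for wi, si in zip(w, s)))
```

Realistic pricing models are nonlinear in their parameters, so no such closed form exists and the minimization of (23) must be carried out by an iterative optimization routine of the kind described in §3.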
Other users [46] have optimized the performance of power stations by developing
mathematical models of engineering processes, and using NAG optimization routines
to solve them. The same routines have also found application in the solution of shape
optimization problems in manufacturing and engineering [47].
In the field of business analytics, NAG optimization routines have been used [48] in
the development of applications to help customers with – for example – fitting to a
pricing model used in the planning of promotions, and maximizing product income by
combining its specification, distribution channel and promotional tactics.
Other applications include the use of optimization for parameter estimation in the
fitting of statistical models to observed data [49].
5. Conclusions
In this note, we have examined some characteristics of optimization problems,
including their categorization, and the importance of using an appropriate solver for
the problem at hand. In this context, we have described some of the solvers which
can be found in the optimization chapters of the NAG Library, together with other
NAG routines that can be applied to the solution of optimization problems. We have
also briefly reviewed a few example applications which have incorporated the use of
NAG solvers.
Finally, it should perhaps be mentioned that, along with the routines that have been
mentioned in this note, the NAG Library [1] contains user-callable routines for
treatment of a wide variety of numerical and statistical problems, including – for
example – linear algebra, statistical analysis, quadrature, correlation and regression
analysis and random number generation.
6. Acknowledgements
We thank our colleagues David Sayers, Martyn Byng, Jan Fiala, Lawrence
Mulholland and Mick Pont for helpful technical discussions whilst preparing this
note.
7. Bibliography
1. Numerical Algorithms Group. NAG numerical components. [Online]
http://www.nag.com/numeric/numerical_libraries.asp.
2. Gill, P.E. and Murray, W., [ed.]. Numerical Methods for Constrained
Optimization. London : Academic Press, 1974.
3. Fletcher, R. Practical Methods of Optimization. 2nd edn. Chichester : Wiley, 1987.
4. Wikipedia. Mathematical optimization. [Online]
http://en.wikipedia.org/wiki/Mathematical_optimization.
5. —. Rosenbrock function. [Online]
http://en.wikipedia.org/wiki/Rosenbrock_function.
6. —. Gradient descent. [Online] http://en.wikipedia.org/wiki/Gradient_descent.
7. Gill, P.E., Murray, W. and Saunders, M.A. SNOPT: An SQP Algorithm for
Large-scale Constrained Optimization. SIAM J. Optim., Vol. 12, pp. 979-1006, 2002.
8. Numerical Algorithms Group. nag_opt_nlp_solve (e04wd) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04wdc.pdf.
9. Gill, P.E. and Murray, W. Algorithms for the solution of the nonlinear
least-squares problem. SIAM J. Numer. Anal., Vol. 15, pp. 977-992, 1978.
10. Numerical Algorithms Group. nag_opt_lsq_deriv (e04gbc) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04gbc.pdf.
11. —. NAG C Library News, Mark 23. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/GENINT/news.pdf.
12. —. Languages and Environments. [Online]
http://www.nag.co.uk/languages-and-environments.
13. Fiala, Jan. Nonlinear Optimization Made Easier with the AMPL Modelling
Language and NAG Solvers. [Online]
http://www.nag.co.uk/NAGNews/NAGNews_Issue95.asp#Article1.
14. Numerical Algorithms Group. nag_opt_nlp_sparse (e04ugc) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04ugc.pdf.
15. —. nag_opt_nlp (e04ucc) function document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04ucc.pdf.
16. —. NAG Chapter Introduction: e04 - Minimizing or Maximizing a Function.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04intro.pdf.
17. —. NAG Chapter Introduction: e05 - Global Optimization of a Function. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/html/E05/e05intro.html.
18. —. nag_glopt_bnd_mcs_solve (e05jbc) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E05/e05jbc.pdf.
19. Huyer, W. and Neumaier, A. Global optimization by multi-level coordinate
search. Journal of Global Optimization, Vol. 14, pp. 331-355, 1999.
20. Numerical Algorithms Group. nag_glopt_nlp_pso (e05sbc) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E05/e05sbc.pdf.
21. Vaz, A.I. and Vicente, L.N. A Particle Swarm Pattern Search Method for Bound
Constrained Global Optimization. Journal of Global Optimization, Vol. 39, No. 2,
pp. 197-219, 2007.
22. Numerical Algorithms Group. nag_glopt_bnd_pso (e05sa) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E05/e05sac.pdf.
23. —. nag_glopt_nlp_multistart_sqp (e05ucc) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/html/E05/e05ucc.html.
24. —. nag_opt_bounds_qa_no_deriv (e04jcc) function document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04jcc.pdf.
25. Powell, M.J.D. The BOBYQA algorithm for bound constrained optimization
without derivatives. University of Cambridge. 2009. Report DAMTP 2009/NA06.
26. Numerical Algorithms Group. nag_opt_simplex_easy (e04cbc) function
document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04cbc.pdf.
27. Nelder, J.A. and Mead, R. A simplex method for function minimization.
Comput. J., Vol. 7, pp. 308-313, 1965.
28. Numerical Algorithms Group. nag_opt_nlp_revcomm (e04ufc) function
document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04ufc.pdf.
29. —. nag_ip_bb (h02bbc) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/H/h02bbc.pdf.
30. —. nag_opt_qp (e04nfc) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04nfc.pdf.
31. —. NAG chapter introduction - f08: Least Squares and Eigenvalue Problems.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/F08/f08intro.pdf.
32. Wikipedia. Norm (mathematics). [Online]
http://en.wikipedia.org/wiki/Norm_(mathematics).
33. Fernando, K.V. Practical Portfolio Optimization. [Online]
http://www.nag.co.uk/doc/techrep/index.html#np3484.
34. Numerical Algorithms Group. nag_nearest_correlation (g02aac) routine
document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/G02/g02aac.pdf.
35. Wikipedia. Modern Portfolio Theory. [Online]
http://en.wikipedia.org/wiki/Portfolio_theory.
36. Numerical Algorithms Group. nag_corr_cov (g02bxc) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/G02/g02bxc.pdf.
37. Gill, P.E., et al. Users' guide for LSSOL (Version 1.0), Report SOL 86-1.
Department of Operations Research, Stanford University, 1986.
38. Numerical Algorithms Group. nag_opt_lin_lsq (e04nc) routine document.
[Online] http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E04/e04ncc.pdf.
39. —. Demo Excel spreadsheet using NAG method to solve portfolio optimization
problem. [Online]
http://www.nag.co.uk/numeric/NAGExcelExamples/Portf_Opt_Simple_demo_FLDLL.xls.
40. Wikipedia. Efficient Frontier. [Online]
http://en.wikipedia.org/wiki/Efficient_frontier.
41. Numerical Algorithms Group. Morningstar use NAG Library routines to assist
portfolio construction and optimization. [Online]
http://www.nag.co.uk/Market/articles/morningstar.pdf.
42. Qi, H. and Sun, D. A quadratically convergent Newton method for computing
the nearest correlation matrix. SIAM J. Matrix Anal. Appl., Vol. 28, No. 2,
pp. 360-385, 2006.
43. Borsdorf, R. and Higham, N.J. A preconditioned Newton algorithm for the
nearest correlation matrix. IMA Journal of Numerical Analysis, Vol. 30, No. 1,
pp. 94-107, 2010.
44. Wikipedia. Index fund. [Online] http://en.wikipedia.org/wiki/Index_fund.
45. —. Derivative pricing. [Online]
http://en.wikipedia.org/wiki/Mathematical_finance.
46. Numerical Algorithms Group. PowerGen optimises power plant performance
using NAG's Algorithms. [Online]
http://www.nag.co.uk/Market/articles/NLCS213.asp.
47. Stebel, J. Test of numerical minimization package for the shape optimization of a
paper making machine header. [Online]
http://www.nag.co.uk/IndustryArticles/janstebel.pdf.
48. Walton, Jeremy, Fiala, Jan and Kubat, Ken. Case Study: Optimization for a
client with large-scale constrained problems. [Online]
http://www.nag.co.uk/Market/optimization_large-scale_constrained_problems.pdf.
49. Morgan, Geoff. The Use of NAG Optimisation Routines for Parameter
Estimation. [Online]
http://www.nag.co.uk/IndustryArticles/OptimisationParameterEstimation.pdf.
50. Krzysztofik, Marcin and Walton, Jeremy. Using the NAG Library to calculate
financial option prices in Excel. [Online]
http://www.nag.co.uk/IndustryArticles/NAGOptionPricingExcel.pdf.
51. Krzysztofik, Marcin. Reverse Communication in the NAG Library explained.
[Online] http://www.nag.co.uk/NAGNews/NAGNews_Issue95.asp#Article2.
52. Numerical Algorithms Group. nag_linf_fit (e02gcc) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E02/e02gcc.pdf.
53. —. nag_lone_fit (e02gac) routine document. [Online]
http://www.nag.co.uk/numeric/CL/nagdoc_cl23/pdf/E02/e02gac.pdf.