
To cite this article: B. N. Rajani, B. S. N. Murty & P. J. Reddy (1992) Modified random-to-pattern search algorithm (MRPS) for global optimization, International Journal of Computer Mathematics, 45:3-4, 193-205, DOI: 10.1080/00207169208804129

To link to this article: http://dx.doi.org/10.1080/00207169208804129


Intern. J. Computer Math., Vol. 45, pp. 193-205
Reprints available directly from the publisher
Photocopying permitted by license only

© 1992 Gordon and Breach Science Publishers S.A.
Printed in the United Kingdom

MODIFIED RANDOM-TO-PATTERN SEARCH ALGORITHM (MRPS) FOR GLOBAL OPTIMIZATION

B. N. RAJANI, B. S. N. MURTY and P. J. REDDY*

Computer Centre, Indian Institute of Chemical Technology, Hyderabad 500 007

(Received 23 April 1991; in final form 31 October 1991)

An improved computer code, MRPS, for solving global optimization problems is presented. In this code, the salient features of the random-to-pattern search algorithm proposed by Heydweiller et al. [6] and the controlled random search algorithm (CRS2) of Price [13] are suitably incorporated. The performance of MRPS was tested with nine constrained problems reported in the literature. The results obtained clearly indicate the efficiency and reliability of the proposed code in achieving the global optimum.

KEY WORDS: Nonlinear programming, global optimization, random search techniques, pattern search, computational algorithm.

C.R. CATEGORIES: G.1.6 optimization.

1 INTRODUCTION

For the last two decades considerable work has been carried out on the development of optimization techniques and their application to various fields of engineering. The focus has mainly been on mathematical models of a highly nonlinear nature, some of which are additionally complicated by multimodality and/or discontinuities in their formulations. Several studies have addressed such problems using methods that require the evaluation of gradients of functions as well as derivative-free algorithms. Even though most gradient methods offer local optimum solutions, they involve difficulties, namely finding a feasible initial point, evaluating the Jacobian and higher-order approximations to matrices, and searching for an optimal step size for the movement of independent variables of different magnitudes. In order to overcome the difficulties faced in the gradient methods, considerable attention has been focused on direct search, or derivative-free, methods. Among the direct search methods, the simplex method of Nelder & Mead [12] and its various modifications [17] have been found to be very useful in a variety of problems such as data-fitting applications, but their usage is limited when handling problems of high dimensionality and implicit nonlinear constraints. A literature survey reveals that methods based on the random search approach are considered simpler, more reliable and more convenient to apply.

In the recent past the applicability of random search methods has been further explored in obtaining global solutions to nonlinear programming problems.


* To whom correspondence should be addressed.


Most of these methods select a new point by a random approach from various probability densities centred around the best feasible point. In these algorithms the search is broken down into cycles, and after each cycle there is a reduction in the space searched. Thus successive cycles concentrate the trials around the current best point, and the effective range of each variable is reduced. Luus & Jaakola [8] were the first to present such a random search algorithm, LJ, using uniform random sampling and search-region contraction. Further, Wang & Luus [1, 2] and Luus & Brenek [9] modified the LJ algorithm by incorporating into it a pseudo one-dimensional search to enable the search to leave a local optimum and move towards a better optimum. In the recent paper of Salcedo et al. [18], a modification of LJ, called SGA, was proposed, which employs a variable, parameter-dependent compression vector for contraction of the search region. A similar type of approach was suggested by Martin & Gaddy [10], who considered the previous history of the search in order to adjust the search region. Heuckroth et al. [5] recommended the use of a skew distribution of the sampling points in their algorithm HGG, and for some problems they found that convergence to the vicinity of the optimum could be obtained faster than with the uniform distribution employed by LJ. But the computational effort of HGG exceeds that of LJ once the search moves closer to the optimum. Efforts were also made to generate new points by random selection from a normal distribution. Goulcher and Casares Long [4] worked in this direction by adjusting the step length (the standard deviation of the distribution) around the probable sample values of the independent variables. Most of these methods require extra computational effort to improve the solution after attaining 1% of the true optimum.

The efficacy of random search methods is further improved by incorporating pattern search procedures into them to achieve faster convergence in the vicinity of the optimum. Heydweiller & Fan [6] successfully developed a modified flexible tolerance method by incorporating a random search followed by the simplex method. Another significant contribution based on a similar concept to obtain the global optimum was proposed by Price [13], who described it as controlled random search, CRS2. This procedure was found to be very effective in finding the global solution to problems involving nonlinear functions subject to bounds on the independent variables. Price [13, 14] dealt with nonlinear equality and inequality constraints via a penalty function approach. Even though CRS2 is very powerful, it suffers from the problems associated with the selection of penalty weights and their updating during iterative cycles. Further work on CRS2 was done by Mohan & Shanker [11], who handled the constraints without resorting to the penalty function approach. In the present work, an effort is made to develop an improved computer code by considering the salient features of the flexible tolerance approach and various versions of CRS2.

2 PROBLEM DEFINITION

The problem of interest in this paper is to find the global solution, the n-vector x, that minimizes the scalar function

f(x), x = (x_1, x_2, ..., x_n)   (1)


subject to the equality constraints

C_i(x) = 0, i = 1, 2, ..., m_e   (2)

and the inequality constraints

C_i(x) ≥ 0, i = m_e + 1, ..., m   (3)

and the region of computability is defined by

L_i ≤ x_i ≤ U_i, i = 1, 2, ..., n   (4)

where L_i and U_i are the specified lower and upper bounds respectively. This paper presents a computational algorithm, MRPS, in Section 3 to solve the

constrained optimization problems defined above. The details of the computer program MRPS are given in Section 4. In Section 5 the efficacy of MRPS is discussed on the basis of the results obtained from nine test problems taken from the literature. Conclusions based on our experience are presented in Section 6.
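Although the published code is written in FORTRAN 77 (Section 4), the problem class (1)-(4) maps naturally onto a small programming interface. The following Python sketch is our illustration only, not part of the original code; the constraint tolerance tol is an assumption, since the paper handles equalities by explicit elimination rather than by a tolerance (see Section 5).

    import numpy as np

    class NLPProblem:
        """min f(x) s.t. C_i(x) = 0 (i <= m_e), C_i(x) >= 0 (i > m_e), L <= x <= U."""
        def __init__(self, f, eq, ineq, lower, upper):
            self.f = f                    # objective, callable on an n-vector
            self.eq = eq                  # equality constraints, Eqn. (2)
            self.ineq = ineq              # inequality constraints, Eqn. (3)
            self.lower = np.asarray(lower, dtype=float)
            self.upper = np.asarray(upper, dtype=float)

        def feasible(self, x, tol=1e-6):
            """Feasibility test used in both phases (Eqns. (2)-(4))."""
            if np.any(x < self.lower) or np.any(x > self.upper):
                return False              # outside the region of computability
            if any(abs(c(x)) > tol for c in self.eq):
                return False              # equality violated beyond tolerance
            return all(c(x) >= -tol for c in self.ineq)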

3 COMPUTATIONAL ALGORITHM

The computational algorithm consists of two phases. In Phase-I a global scan of the search region is performed in order to generate feasible points using the random search approach. The systematic reduction of the search region adopted in the random search procedure also enables us to locate a suboptimal point near the true global optimum. In Phase-II this feasible set of points (including the suboptimal point) is further improved in order to achieve the global optimum.

Phase-I

1. Expand the initial search domain (the specified bounds) of each independent variable by one per cent in order to include the end points in the search, giving expanded bounds (L_i^e, U_i^e).

2. Consider the expanded bounds (L_i^e, U_i^e) as the active bounds for the first iteration.

3. Set the iteration counter ITR = 0 and a second counter IFAIL = 0 for recording failures during the search. Let NFP be the counter for the number of feasible points. Initially NFP is set to a value between 0 and 3, depending on how many of the three points L_i, U_i and (L_i + U_i)/2 satisfy Eqns. (2) and (3).


4. Generate N (say 100) points randomly distributed in the search domain of the active bounds (L^a, U^a) using the formula

x = L^a + Z(U^a − L^a)   (5)

where Z is a diagonal matrix consisting of uniform random numbers between 0 and 1.

5. Constraint satisfaction or feasibility test: test the feasibility of all N points using Eqns. (2) and (3). Increment the counter NFP whenever a feasible point is encountered.

6. Set NTEST = max(5n, 3NS), where NS (= 15) is the suggested initial sample size. If NFP < NTEST, go to step 4; else store the NFP points in increasing order of their function values in a two-dimensional array X_ij (i = 1, 2, ..., NFP; j = 1, 2, ..., n) and the corresponding function values in an array F_i, and go to step 7.

7. Select the first NS points out of the NFP for future computation. Identify the function value at the first point as F_1 and that at the last point, the highest, as F_NS.

8. Determine the weighted mean and weighted variance of each independent variable over the sample population (NS) as follows:

x̄_j = Σ_k w_k X_kj / Σ_k w_k,   D_j = Σ_k w_k (X_kj − x̄_j)^2 / Σ_k w_k,   k = 1, 2, ..., NS

where k is the rank and the weighting factor w_k = NS − k + 1. (These quantities drive the contraction of the search region; see the code sketch after this list.)

9. Compute the active bounds of the search domain for the subsequent iteration (ITR = ITR + 1) from the weighted means and variances obtained in step 8.

10. Generate NTEST points randomly in the new active search domain using Eqn. (5). Out of these NTEST points, find the feasible set of points which satisfy the constraints (Eqns. (2) and (3)). If at least one feasible point is found then go to step 12; else set IFAIL = IFAIL + 1 and go to step 11.

11. If IFAIL > MAXFAIL then go to step 15; otherwise go to step 10.

12. Calculate the function value at each of these feasible points and test it against F_NS. If the function value is less than F_NS, replace F_NS with the current function value along with the corresponding point in the sample, and re-rank the sample (X_ij, i = 1, 2, ..., NS; j = 1, 2, ..., n). The checking against the current F_NS and re-ranking is repeated until all the feasible points generated at step 10 are exhausted; then proceed to step 13. If no better point is found to replace the initial value of F_NS, then set IFAIL = IFAIL + 1; if IFAIL > MAXFAIL then go to step 15, otherwise go to step 10.


Table 1 Selection of NS in relation to the change in D_j. [The printed entries are not legible in this scan; the table lists, for each stage of the sample sequence m = 1, ..., M, the threshold on DD = max_j (D_j) below which the sample size is reduced to a new value of NS, together with the corresponding relative weight of the best point of the sample.]

13. Check the possibility of a further reduction in the sample size NS, fixing the new value of NS according to the value of DD as given in Table 1.

14. Convergence criteria:

Criterion i: If NS is equal to 1 then go to step 15; else test criterion ii.

Criterion ii: If NS was reduced in step 13 then test criterion iii; else test for a slow rate of convergence by comparing the improvement in the function values over the iteration with a user-supplied tolerance δ1. If this criterion is satisfied then go to step 15; else test criterion iii.

Criterion iii: If the number of iterations exceeds MAXIT1 then go to step 15; else reset IFAIL = 0 and go to step 8.

15. Select the NP (= 5n) points available in X_ij (i = 1, ..., NS, ..., NP; j = 1, ..., n) and the corresponding function values (F_1, ..., F_NS, ..., F_NP) and enter Phase-II.
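For readers tracing the steps above, the following Python sketch paraphrases Phase-I using the NLPProblem interface shown in Section 2. It is our reading of steps 4-15, not the authors' FORTRAN 77 code: the exact contraction rule of step 9 and the NS-reduction thresholds of Table 1 are not legible in the source, so the versions used here (bounds of x̄_j ± 3√D_j, and halving NS once DD becomes small) are assumptions for illustration only.

    import numpy as np

    def phase1(problem, ns=15, max_fail=1000, max_it1=25, seed=0):
        """Illustrative paraphrase of MRPS Phase-I (steps 4-15); not the published code."""
        rng = np.random.default_rng(seed)
        n = len(problem.lower)
        lo, hi = problem.lower.copy(), problem.upper.copy()   # active bounds
        ntest = max(5 * n, 3 * ns)
        pts = []                                   # steps 4-6: collect NTEST feasible points
        while len(pts) < ntest:
            x = lo + rng.random(n) * (hi - lo)     # Eqn. (5), one point at a time
            if problem.feasible(x):
                pts.append(x)
        sample = sorted(pts, key=problem.f)[:ns]   # step 7: keep the best NS points
        for _ in range(max_it1):                   # criterion iii bounds the loop
            w = np.arange(ns, 0, -1, dtype=float)  # rank weights w_k = NS - k + 1
            X = np.array(sample)
            mean = w @ X / w.sum()                 # weighted mean, step 8
            var = w @ (X - mean) ** 2 / w.sum()    # weighted variance D_j, step 8
            # Step 9: contract the active bounds around the weighted mean.
            # ASSUMED width of 3*sqrt(D_j); the printed rule is not legible.
            lo = np.maximum(problem.lower, mean - 3.0 * np.sqrt(var))
            hi = np.minimum(problem.upper, mean + 3.0 * np.sqrt(var))
            fails = 0
            while True:                            # steps 10-12
                trials = lo + rng.random((ntest, n)) * (hi - lo)
                feas = [x for x in trials if problem.feasible(x)]
                better = False
                for x in feas:
                    if problem.f(x) < problem.f(sample[-1]):   # beats current worst F_NS
                        sample[-1] = x
                        sample.sort(key=problem.f)             # re-rank the sample
                        better = True
                if better:
                    break
                fails += 1                         # steps 11-12: record the failure
                if fails > max_fail:
                    return sample                  # give up contracting (step 15)
            # Step 13: reduce NS as DD = max_j D_j shrinks (Table 1 thresholds ASSUMED).
            if ns > 1 and np.max(var) < 1e-4:
                ns = max(1, ns // 2)
                sample = sample[:ns]
            if ns == 1:                            # convergence criterion i
                break
        return sample   # in the full MRPS, the NP = 5n best points enter Phase-II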

Phase-II

1. Choose randomly n distinct points out of the NP points, say r_2, r_3, ..., r_{n+1}, excluding the points having the least and greatest function values. Store the point having the least function value (F_1) in r_1.


2. Determine the weighted centroid, c_w, of the points r_1, r_2, ..., r_n.

3. Compute the trial point p from c_w and r_{n+1} (Eqn. (9)). If p satisfies Eqns. (2), (3) and (4) then go to step 4; else return to step 1.

4. Calculate f(p). If f(p) < F_NP then set ITR = ITR + 1, replace X_NP,j with p and F_NP with f(p), and go to step 5. Else go to step 1.

5. Sort these NP points again in ascending order of their objective function values. If |F_NP − F_1| / |F_NP| < ε or ITR > MAXIT2, terminate the procedure and accept the first point X_1j and its function value F_1 as the global minimum. Otherwise return to step 1.
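Phase-II is a CRS2-style pattern move. Because neither the centroid formula nor Eqn. (9) is legible in the scan, the sketch below assumes the standard CRS2 reflection p = 2c_w − r_{n+1} with rank-proportional centroid weights; both are assumptions, labelled as such in the comments.

    import numpy as np

    def phase2(problem, points, eps=1e-4, max_it2=1000, seed=1, max_tries=200000):
        """Illustrative paraphrase of MRPS Phase-II; the move is assumed CRS2-like."""
        rng = np.random.default_rng(seed)
        pts = sorted(points, key=problem.f)        # ascending function values
        np_pts, n = len(pts), len(pts[0])
        itr = tries = 0
        while itr <= max_it2 and tries < max_tries:
            tries += 1
            # Step 1: r_1 is the current best; r_2..r_{n+1} are drawn from the
            # interior of the sample (best and worst points excluded).
            idx = rng.choice(np.arange(1, np_pts - 1), size=n, replace=False)
            r = [pts[0]] + [pts[i] for i in idx]   # r_1, ..., r_{n+1}
            # Step 2: weighted centroid of r_1..r_n (rank weights ASSUMED).
            w = np.arange(n, 0, -1, dtype=float)
            cw = w @ np.array(r[:n]) / w.sum()
            # Step 3: trial point; p = 2*c_w - r_{n+1} is the usual CRS2 form (ASSUMED).
            p = 2.0 * cw - r[n]
            if not problem.feasible(p):
                continue                           # back to step 1
            # Steps 4-5: accept p if it beats the worst point, then re-sort.
            if problem.f(p) < problem.f(pts[-1]):
                itr += 1
                pts[-1] = p
                pts.sort(key=problem.f)
                f1, fnp = problem.f(pts[0]), problem.f(pts[-1])
                if abs(fnp - f1) / max(abs(fnp), 1e-30) < eps:
                    break                          # sample has collapsed onto the optimum
        return pts[0], problem.f(pts[0])

In MRPS the input points would be the NP = 5n best feasible points retained at the end of Phase-I.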

4 MRPS COMPUTER PROGRAM

The computer code MRPS has been programmed in FORTRAN 77 and implemented on a Norsk Data 530 super-mini computer. This system has a 32-bit architecture and a throughput of 0.6 MIPS, and runs the multiuser, multitasking and multiprogramming operating system SINTRAN III. The program was also ported onto an IBM-compatible PC AT-80386 with an 80387 co-processor and a throughput of 5 MIPS. The program MRPS requires 35 KB of memory. The pseudo-random number generator used in the algorithm is based on the power residue method [15] and is applicable to 32-bit machines. All the problems were run using 20 different seeds. The average execution times over all 20 seeds, run on the AT-80386, are presented in the last column of Table 3. The user-specified constants were MAXIT1 = 25, δ1 = 0.05, MAXIT2 = 1000 and MAXFAIL = 1000, together with the convergence tolerance ε.
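The power residue (multiplicative congruential) method cited from Shannon [15] is straightforward to reproduce. The paper does not print its multiplier or modulus, so the classical 32-bit Lewis-Goodman-Miller constants below are an assumption.

    class PowerResidueRNG:
        """Multiplicative congruential ('power residue') generator for 32-bit machines.
        The constants 7**5 and 2**31 - 1 are the classical choice, assumed here."""
        MULTIPLIER = 16807            # 7**5
        MODULUS = 2**31 - 1           # the Mersenne prime 2^31 - 1

        def __init__(self, seed=12345):
            self.state = seed % self.MODULUS or 1   # state must lie in 1..MODULUS-1

        def uniform(self):
            """Return the next pseudo-random number in (0, 1)."""
            self.state = (self.MULTIPLIER * self.state) % self.MODULUS
            return self.state / self.MODULUS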

5 RESULTS AND DISCUSSION

The computational algorithm MRPS has been tested with nine problems involving equality and inequality constraints. A detailed description of the test problems is given in the Appendix. The characteristics of each problem, viz. the number of bounds and the numbers of equality and inequality constraints (linear/nonlinear), are outlined in Table 2. They fairly represent the problems encountered in practice. The progress of computation in MRPS is reported in terms of objective function values at 1%, 0.1% and 0.01% of the global optimum (Table 3). Since the program MRPS involves a stochastic process, the numbers of iterations and function evaluations given at 1%, 0.1% and 0.01% of the global optimum are the average values over all twenty seeds used. Besides testing


Table 2 Details of optimization problems

Problem no.   Title of the problem                          Type   No. of variables
PR1           Goldstein & Price function                    Min    2
PR2           Becker's problem                              Min    2
PR3           Rosen and Suzuki mathematical problem         Min    4
PR4           A highly non-linear problem I                 Max    2
PR5           A highly non-linear problem II                Min    6
PR6           A highly non-unimodal mathematical problem    Max    6
PR7           Optimal fuel allocation in power plants       Min    4
PR8           Alkylation process optimization               Max    10
PR9           CSTR train                                    Min    10

[The remaining columns of the printed table (numbers of independent variables, bounds, linear/nonlinear equality and inequality constraints, and references, together with footnotes identifying the independent variables of some problems) are not legible in this scan.]


Table 3 Progress of MRPS: iterations, function evaluations and objective function values at 1%, 0.1% and 0.01% of the global optimum for problems PR1-PR9, with the average execution times on the AT-80386 in the last column. The values in parentheses are the true global solutions. [The numerical entries are not legible in this scan.]


mathematical problems PR1-PR6, the practical applicability of MRPS has also been explored by solving three chemical engineering problems, PR7-PR9, stated in the literature. The computational times of all these problems at the global solution were found to be of the order of 4-10 sec, except for problems PR5, PR6 and PR9, whose execution times were around 20 sec. Most of the problems converged to 0.01% of the global optimum in a reasonable number of iterations (ITR) and function evaluations (NFE), and these are comparable to the results obtained by others.

The convergence behaviour of Phase-I of the algorithm depends on the sample size (NS) selected. A small value of NS may make the search procedure converge rapidly, but it may not converge to the global optimum for multimodal problems. Conversely, a large value of NS converges so slowly that the procedure is inefficient even for unimodal problems. Several trial runs were made to arrive at the initial value of NS. An optimum sample size of 15 was found to be most appropriate, confirming the suggestion made by Heydweiller et al. [6]. Depending on the reduction in the search region (step 13), the convergence of Phase-I can be achieved quickly by progressively reducing the NS value. This increases the relative weight of the best point of the sample, as is evident from the last column of Table 1. For problems PR1, PR2 and PR4, convergence to 1% of the global optimum (through criterion i) was obtained by reducing the sample size to 1. The best point obtained at the end of Phase-I was further refined in Phase-II without much computational effort. In the other problems, PR3 and PR7-PR9, Phase-I was terminated when criterion ii was met. Termination of Phase-I through criterion ii followed by Phase-II is justified in such cases, because Phase-II avoids further random selection of feasible points and allows a faster movement towards the global optimum based on the pattern moves. In the case of problems PR5 and PR6, Phase-I was terminated through criterion iii and considerable progress towards the global optimum was made in Phase-II. Incorporation of the weighted centroid, c_w, in Eqn. (9) of Phase-II has been found to generate a favourable direction towards the global optimum, thus improving the overall performance of the algorithm MRPS.

As long as the equality constraints are expressible as one (dependent) variable in terms of the other (independent) variables, the algorithm MRPS can be applied safely. Presenting the equality constraints in this explicit form is not only amenable to solution but also reduces the dimensionality of the problem. From Table 2 it is evident that the dimensionality of problems PR8 and PR9 is drastically reduced by representing the equality constraints in explicit form. In the case of problem PR9, two stationary points have been reported in the literature. At one point the objective function value is −0.548972, which corresponds to the global solution; the other point, with objective function value −0.508523, corresponds to a shallow local minimum. The performance of MRPS is affected by the shallow stationary point, to which it always converged when four of the variables were taken as dependent and the remainder as independent, as stated in Ref. [3]. When the constraints were instead expressed in explicit form with a different choice of four dependent variables (as in the Appendix), MRPS was able to locate the exact global solution −0.548972. If explicit handling of the equality constraints is not possible, then one should resort to the penalty function approach as suggested by Price [13].
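The effect of explicit elimination is easiest to see on problem PR7 (Appendix), whose single equality x_2 = 50 − x_1 removes one variable from the search. The sketch below is our illustration of that device, with coefficients copied from the Appendix; the function names are ours.

    # PR7 with its equality handled explicitly: the search runs over (x1, x3, x4)
    # only, and x2 is always computed, never sampled.
    def pr7_objective(v):
        x1, x3, x4 = v
        x2 = 50.0 - x1                                  # dependent variable eliminated
        f1 = 1.4609 + 0.15186 * x1 + 0.00145 * x1**2
        g1 = 0.8008 + 0.2031 * x2 + 0.000916 * x2**2
        return x3 * f1 + x4 * g1

    def pr7_inequality(v):                              # must be >= 0 for feasibility
        x1, x3, x4 = v
        x2 = 50.0 - x1
        f2 = 1.5742 + 0.1631 * x1 + 0.001358 * x1**2
        g2 = 0.7266 + 0.2256 * x2 + 0.000778 * x2**2
        return 10.0 - ((1.0 - x3) * f2 + (1.0 - x4) * g2)

The bound 14 ≤ x_2 ≤ 25 then becomes an extra pair of inequalities on x_1 rather than a sampled dimension.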


6 CONCLUSIONS

It is observed from the results that MRPS is efficient and reliable in achieving the global solution to various constrained problems reported in the literature. The algorithm converged to the solution in a reasonable number of iterations and function evaluations, comparable with the published results. This code can be used effectively for the solution of optimization problems where initial estimates of the variables are not known. In Phase-I, instead of applying a constant reduction factor to all variables, the compression of the search region of each unknown variable is dealt with independently by considering the weighted mean and variance of the ordered sample. The switching strategy adopted in MRPS from the random search to Phase-II not only helped in reducing the computational effort but also provided the global solution with minimal execution times. Further experience with MRPS indicates that proper attention should be given to the handling of implicit equality constraints, as was done in the case of problem PR9, before applying any random search technique.

The authors are grateful to Dr. A. A. Khan, Head, Chemical Engineering Division, IICT, and Dr. A. V. Rama Rao, Director, IICT, for encouragement and for providing the necessary facilities to carry out the work.

References

[1] Bi-Chong Wang and Rein Luus, Optimization of non-unimodal systems, International Journal for Numerical Methods in Engineering 11 (1977), 1235.
[2] Bi-Chong Wang and Rein Luus, Reliability of optimization procedures for obtaining global optimum, AIChE Journal 24(4) (1978), 619.
[3] Gade Pandu Rangaiah, Studies in constrained optimization of chemical process problems, Computers & Chem. Engng. 9(4) (1985), 395.
[4] R. Goulcher and J. J. Casares Long, The solution of steady-state chemical engineering optimisation problems using a random-search algorithm, Computers & Chem. Engng. 2 (1978), 33.
[5] M. W. Heuckroth, J. L. Gaddy and L. D. Gaines, An examination of the adaptive random search technique, AIChE Journal 22(4) (1976), 744.
[6] J. C. Heydweiller and L. T. Fan, A random-to-pattern-search procedure for global minimization of constrained problems, AIChE Annual Meeting, San Francisco, 1979.
[7] D. M. Himmelblau, Applied Nonlinear Programming, McGraw-Hill, New York, 1972.
[8] Rein Luus and T. H. I. Jaakola, Optimization by direct search and systematic reduction of the size of search region, AIChE Journal 19(4) (1973), 760.
[9] Rein Luus and Paul Brenek, Incorporation of gradient into random search optimization, Chem. Eng. Technol. 12 (1989), 309.
[10] D. L. Martin and J. L. Gaddy, Process optimization with the adaptive randomly directed search, AIChE Symp. Ser. 78 (1982), 99.
[11] C. Mohan and K. Shanker, Computational algorithms based on random search for solving global optimization problems, Intern. J. Computer Math. 33 (1990), 115.
[12] J. A. Nelder and R. Mead, A simplex method for function minimization, Computer Journal 7(4) (1965), 308.
[13] W. L. Price, Global optimization by controlled random search, JOTA 40(3) (1983), 333.
[14] W. L. Price, Global optimization algorithms for a CAD workstation, JOTA 55(1) (1987), 133.
[15] R. E. Shannon, Systems Simulation: The Art and Science, Prentice-Hall, Englewood Cliffs, 1975.
[16] J. B. Rosen and S. Suzuki, Construction of nonlinear programming test problems, Communications of the ACM 8 (1965), 113.
[17] M. W. Routh, P. A. Swartz and M. B. Denton, Performance of the super modified simplex, Analytical Chemistry 49 (1977), 1422.
[18] R. Salcedo, M. J. Goncalves and S. Feyo de Azevedo, An improved random-search algorithm for non-linear optimization, Computers & Chem. Engng. 14(10) (1990), 1111.


APPENDIX

Problem PR1: Min f(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)] × [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)]
s.t. −2 ≤ x_1 ≤ 2; −2 ≤ x_2 ≤ 2
x* = (0, −1); f(x*) = 3
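Since the appendix formulas have been reconstructed from a noisy scan, a quick numerical check against the known optimum is a useful safeguard; for PR1 the following sketch confirms f(0, −1) = 3.

    def goldstein_price(x1, x2):
        """PR1: the Goldstein & Price test function as printed above."""
        a = 1 + (x1 + x2 + 1)**2 * (19 - 14*x1 + 3*x1**2 - 14*x2 + 6*x1*x2 + 3*x2**2)
        b = 30 + (2*x1 - 3*x2)**2 * (18 - 32*x1 + 12*x1**2 + 48*x2 - 36*x1*x2 + 27*x2**2)
        return a * b

    assert goldstein_price(0.0, -1.0) == 3.0    # f(x*) = 3 at x* = (0, -1)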

Problem PR2: Min f(x) = exp(α_1 |x_1 − a|) + sin(β_1(x_1 − a) − 1.57) + exp(α_2 |x_2 − b|) + sin(β_2(x_2 − b) − 1.57)

where α_1 = α_2 = 0.05; β_1 = β_2 = 1.0; a = 3 and b = 5
s.t. 0 ≤ x_1 ≤ 20; 0 ≤ x_2 ≤ 20
x* = (3, 5); f(x*) = 0

Problem PR3: Min f(x) = x_1^2 + x_2^2 + 2x_3^2 + x_4^2 − 5x_1 − 5x_2 − 21x_3 + 7x_4
s.t. x_1^2 + x_2^2 + x_3^2 + x_4^2 + x_1 − x_2 + x_3 − x_4 − 8 ≤ 0
x_1^2 + 2x_2^2 + x_3^2 + 2x_4^2 − x_1 − x_4 − 10 ≤ 0
2x_1^2 + x_2^2 + x_3^2 + 2x_1 − x_2 − x_4 − 5 ≤ 0
−2.5 ≤ x_i ≤ 2.5, i = 1, 2, 3, 4

x* = (0, 1, 2, −1); f(x*) = −44

Problem PR4: Max f(x) = 75.196 − 3.8112x_1 + 0.12694x_1^2 − 2.0567 × 10^-3 x_1^3 + 1.0345 × 10^-5 x_1^4 − 6.8306x_2 + 0.030234x_1x_2 − 1.28134 × 10^-3 x_1^2 x_2 + 3.5256 × 10^-5 x_1^3 x_2 − 2.266 × 10^-7 x_1^4 x_2 − 5.2375 × 10^-6 x_1^2 x_2^2 − 6.3 × 10^-8 x_1^3 x_2^2 + 7 × 10^-10 x_1^3 x_2^3 + 3.4054 × 10^-4 x_1 x_2^2 − 1.6638 × 10^-6 x_1 x_2^3 − 2.8673 exp(0.0005x_1x_2)

s.t. x_1x_2 − 700 ≥ 0; x_2 − 0.008x_1^2 ≥ 0; (x_2 − 50)^2 − 5(x_1 − 55) ≥ 0
0 ≤ x_1 ≤ 75; 0 ≤ x_2 ≤ 65

x* = (13.5501, 51.6602); f(x*) = 7.8042

Problem PR5: Min f(x) = 4.3x_1 + 31.8x_2 + 63.3x_3 + 15.8x_4 + 68.5x_5 + 4.7x_6
s.t. 17.1x_1 + 38.2x_2 + 204.2x_3 + 212.3x_4 + 623.4x_5 + 1495.5x_6 − 169x_1x_3 − 3580x_3x_5 − 3810x_4x_5 − 18500x_4x_6 − 24300x_5x_6 − 4.97 ≥ 0
17.9x_1 + 36.8x_2 + 113.9x_3 + 169.7x_4 + 337.8x_5 + 1385.2x_6 − 139x_1x_3 − 2450x_4x_5 − 16600x_4x_6 − 17200x_5x_6 + 1.88 ≥ 0
−273x_2 − 70x_4 − 819x_5 + 26000x_4x_5 + 29.08 ≥ 0
159.9x_1 − 311x_2 + 587x_4 + 391x_5 + 2198x_6 − 14000x_1x_6 + 78.02 ≥ 0
0 ≤ x_1 ≤ 0.31; 0 ≤ x_2 ≤ 0.046; 0 ≤ x_3 ≤ 0.068
0 ≤ x_4 ≤ 0.042; 0 ≤ x_5 ≤ 0.028; 0 ≤ x_6 ≤ 0.0134

x* = (0, 0, 0, 0, 0, 0.003323); f(x*) = 0.01562


Problem PR6: Max f(x) = 25(x_1 − 2)^2 + (x_2 − 2)^2 + (x_3 − 1)^2 + (x_4 − 4)^2 + (x_5 − 1)^2 + (x_6 − 4)^2

s.t. 2 ≤ x_1 + x_2 ≤ 6; −x_1 + x_2 ≤ 2; x_1 − 3x_2 ≤ 2
(x_3 − 3)^2 + x_4 ≥ 4; (x_5 − 3)^2 + x_6 ≥ 4; x_1 ≥ 0; x_2 ≥ 0
0 ≤ x_1 ≤ 6; 0 ≤ x_2 ≤ 5; 1 ≤ x_3 ≤ 5; 0 ≤ x_4 ≤ 6; 1 ≤ x_5 ≤ 5; 0 ≤ x_6 ≤ 10

There are eighteen local maxima, of which the global maximum is at x* = (5, 1, 5, 0, 5, 10); f(x*) = 310

Problem PR7: Min f(x) = x_3 f_1 + x_4 g_1
where f_1 = 1.4609 + 0.15186x_1 + 0.00145x_1^2
g_1 = 0.8008 + 0.2031x_2 + 0.000916x_2^2
s.t. x_2 = 50 − x_1
(1 − x_3)f_2 + (1 − x_4)g_2 ≤ 10.0; 14 ≤ x_2 ≤ 25
where f_2 = 1.5742 + 0.1631x_1 + 0.001358x_1^2
g_2 = 0.7266 + 0.2256x_2 + 0.000778x_2^2
18 ≤ x_1 ≤ 30; 0 ≤ x_3 ≤ 1; 0 ≤ x_4 ≤ 1

x* = (30.0, 20.0, 0.00, 0.58); f(x*) = 3.052

Problem PR8: Max f(x) = 0.063x_4x_7 − 5.04x_1 − 0.035x_2 − 10x_3 − 3.36x_5

s.t. x_4 = x_1(1.12 + 0.13167x_8 − 0.006667x_8^2); x_5 = 1.22x_4 − x_1
x_2 = x_1x_8 − x_5; x_10 = −133 + 3x_7
x_6 = 89 + [x_7 − (86.35 + 1.098x_8 − 0.038x_8^2)]/0.325
x_9 = 35.82 − 0.222x_10; x_3 = 0.001x_4x_6x_9/(98 − x_6)
0.01 ≤ x_1 ≤ 2000; 0.01 ≤ x_2 ≤ 16000; 0.01 ≤ x_3 ≤ 120; 0.01 ≤ x_4 ≤ 5000; 0.01 ≤ x_5 ≤ 2000; 85 ≤ x_6 ≤ 93; 90 ≤ x_7 ≤ 95; 3 ≤ x_8 ≤ 12; 1.2 ≤ x_9 ≤ 4; 145 ≤ x_10 ≤ 162

x* = (1728.4, 16000, 98.4, 3056.0, 2000.0, 90.6, 94.2, 10.4, 2.6, 149.6); f(x*) = 1162.0
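PR8's seven equalities chain naturally, so that only three variables need to be sampled and the rest computed, which is the dimensionality reduction noted in Section 5. The sketch below shows one natural elimination order consistent with the equations above; the choice of (x_1, x_7, x_8) as independent variables is our assumption for illustration.

    def pr8_dependents(x1, x7, x8):
        """Chain PR8's equality constraints explicitly; elimination order assumed."""
        x4 = x1 * (1.12 + 0.13167 * x8 - 0.006667 * x8**2)
        x5 = 1.22 * x4 - x1
        x2 = x1 * x8 - x5
        x10 = -133.0 + 3.0 * x7
        x6 = 89.0 + (x7 - (86.35 + 1.098 * x8 - 0.038 * x8**2)) / 0.325
        x9 = 35.82 - 0.222 * x10
        x3 = 0.001 * x4 * x6 * x9 / (98.0 - x6)
        return x2, x3, x4, x5, x6, x9, x10

    def pr8_objective(x1, x7, x8):
        x2, x3, x4, x5, _, _, _ = pr8_dependents(x1, x7, x8)
        return 0.063 * x4 * x7 - 5.04 * x1 - 0.035 * x2 - 10.0 * x3 - 3.36 * x5

The bounds on the seven dependent variables then act as inequality constraints on the three sampled ones.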

Problem PR9:

Min f(x) = [the CSTR-train objective; the printed expression is not legible in this scan. See Ref. [3] for the formulation.]

0 ≤ x_i ≤ 1.0 for i = 3, 4, 7, 8
0 ≤ x_i ≤ 2100 for i = 1, 5, 9; 0 ≤ x_i ≤ 4.934 for i = 2, 6, 10


The problem has two stationary points, of which the global optimum is at

x* = (1.3802, 0.12327, 0.39211, 0.48072, 2.37916, 0.33424, 0.09393, 0.64312, 2100.0, 4.934)

f(x*) = −0.548972

The other stationary point, x^s, is at

x^s = (3.7944, 0.2087, 0.1790, 0.5569, 2100.0, 4.934, 0.00001436, 0.02236, 2100.0, 4.934)

f(x^s) = −0.508523

In Table 1, M = ⌊log_2(NS + 1)⌋, where NS is the value initially defined in step 6 of Phase-I; the notation ⌊s⌋ denotes the greatest integer less than or equal to s.
