
Struct Multidisc Optim (2009) 38:103–115
DOI 10.1007/s00158-008-0286-8

RESEARCH PAPER

Metamodel-based collaborative optimization framework

Parviz M. Zadeh · Vassili V. Toropov · Alastair S. Wood

Received: 26 July 2007 / Revised: 20 May 2008 / Accepted: 8 June 2008 / Published online: 22 August 2008
© Springer-Verlag 2008

Abstract This paper focuses on metamodel-based collaborative optimization (CO). The objective is to improve the computational efficiency of CO in order to handle multidisciplinary design optimization problems utilising high fidelity models. To address these issues, two levels of metamodel building techniques are proposed: metamodels in the disciplinary optimization are based on multi-fidelity modelling (the interaction of low and high fidelity models) and for the system level optimization a combination of a global metamodel based on the moving least squares method and a trust region strategy is introduced. The proposed method is demonstrated on a continuous fiber-reinforced composite beam test problem. Results show that the methods introduced in this paper provide an effective way of improving the computational efficiency of CO based on high fidelity simulation models.

P. M. Zadeh · A. S. Wood
School of Engineering, Design and Technology,
University of Bradford, Bradford, BD7 1DP, UK

P. M. Zadeh
e-mail: [email protected]

A. S. Wood
e-mail: [email protected]

Present Address:
P. M. Zadeh
Engineering Optimization Research Group,
University of Tehran, Tehran, Iran

V. V. Toropov (B)
Schools of Civil and Mechanical Engineering,
University of Leeds, Leeds, West Yorkshire, LS2 9JT, UK
e-mail: [email protected]

Keywords Multidisciplinary design optimization · Collaborative optimization · Metamodel · Moving least squares method · Multi-fidelity modelling

1 Introduction

Most real world design problems are complex and multidisciplinary in nature. For example, both aircraft and automotive design are characterized by multidisciplinary interactions in which the participating disciplines are intrinsically coupled to each other. Over the past several years there has been significant progress in the application of multidisciplinary design optimization (MDO) to solving such complex design problems. One of the most important problems associated with MDO is the high computational cost of analysis in individual disciplines. The interdisciplinary couplings inherent in MDO cause increased computational effort beyond that encountered in a single discipline optimization. The analysis codes for each discipline must interact with each other for the purpose of system optimization. These issues result in a very high overall computational cost that can become prohibitive. Several MDO approaches have been proposed that include multiple-discipline feasible (MDF; Kodiyalam and Sobieszczanski-Sobieski 2001), all-at-once (AAO) and individual discipline feasible (IDF; Cramer et al. 1992), collaborative optimization (CO; Kroo et al. 1994), bi-level integrated synthesis (BLISS; Sobieszczanski-Sobieski et al. 1998), concurrent subspace optimization



(CSSO; Sobieszczanski-Sobieski 1988) and analytical target cascading (ATC; Kim et al. 2003).

The key concept in the CO approach is the decomposition of the design problem into two levels, namely disciplinary and system levels. The system level optimizer is used to minimize the system level objective while satisfying consistency requirements among the disciplines by enforcing equality constraints at the system level that coordinate the interdisciplinary couplings (Kroo et al. 1994). CO is designed in such a way that it supports disciplinary autonomy while maintaining interdisciplinary compatibility, thus providing added design flexibility. These features make CO well suited for use in a practical multidisciplinary design environment (Kroo 1995; Braun et al. 1996; Sobieski and Kroo 1996).

Despite these advantages, the CO methodology has not become a mainstream design optimization tool in industry due to the high computational costs involved, a feature common to all MDO approaches. In addition, an important difficulty specifically associated with CO is its slow system level optimization convergence rate. This relates to the fact that CO ensures interdisciplinary compatibility by means of system level equality constraints and attempts to minimize the disagreement between disciplines by sending targets to individual disciplines that their optimization runs are required to meet. The discipline level optima can be non-smooth and noisy functions of the system level variables. This, combined with the use of equality constraints at the system level to represent disciplinary feasible regions, introduces computational difficulties (Alexandrov and Lewis 2000; Alexandrov et al. 1999). The implications of these issues are that derivative-based optimization techniques cannot be used for the system level optimization, whereas more robust optimization techniques such as genetic algorithms are prohibitively expensive for use in CO. These issues pose significant barriers to real-life applications of CO based on high fidelity simulation models.

It has become increasingly important to utilize the most accurate simulation models in the design process to ensure a realistic design, hence high fidelity models (e.g. finite element, computational fluid dynamics, etc.) are now used extensively. However, the considerable computational expense associated with such models makes it difficult to use them in optimization, where achieving a solution may require hundreds or even thousands of simulation calls. Because of this, in the field of design optimization in general, and MDO in particular, the use of approximations became an effective tool for reducing the computational effort. The basic approach is to replace the computationally expensive simulation model by an approximate one, which is then used in optimization runs. Such an approximate model is often referred to as a metamodel (“model of a model”). A survey on the use of metamodels in structural optimization has been carried out by Barthelemy and Haftka (1993) and in MDO by Sobieszczanski-Sobieski and Haftka (1997) and Simpson et al. (2004). The main benefits of using metamodels in MDO include (Giunta et al. 1995): (1) reduction of the number of expensive high fidelity simulations during optimization; (2) smoothing numerical noise; (3) enabling separation of the simulation code from the optimization routines and easing the integration of codes from various disciplines; (4) making possible the utilization of parallel computer architecture. While the use of metamodels has now become an integral part of an optimization process, allowing a significant reduction of computational costs, it does have some limitations. The cost of providing high fidelity data for building the global metamodel increases rapidly with the number of design variables. Toropov and Markine (1996) suggested the use of low fidelity models (e.g. simplified numerical models) as the basis for high quality metamodel building.

There have been several metamodel building approaches based on low fidelity models. Mason et al. (1994) used a coarse 2D finite element model as a low fidelity model to predict failure stresses, with corrections calculated using a full 3D finite element model. Venkataraman et al. (1988) demonstrated the effectiveness of correcting inexpensive analysis based on low fidelity models by results from more expensive and accurate models in the design of shell structures for buckling. Vitali et al. (1998) used a coarse low fidelity finite element model to predict the stress intensity factor, and corrected it with high fidelity model results based on a detailed finite element model for optimizing a blade-stiffened composite panel. These approaches have been developed for single discipline problems. In the field of MDO, there have been several approaches to applying metamodels to large multidisciplinary optimization problems. The variable complexity modelling proposed by Giunta (1997) simultaneously utilizes low and high fidelity models. The low fidelity predictions were corrected using a scaling factor obtained from the high fidelity model. Toropov and Markine (1996) and Rodriguez et al. (1997) introduced a trust region approach for low fidelity approximate model management for constrained approximate optimization problems to manage convergence. Alexandrov et al. (1998) proposed the trust region management framework to manage the solution of MDO by alternating the use of high and low fidelity models during the optimization process.



In this study two levels of metamodel building techniques are introduced in order to significantly reduce the computational effort while handling high fidelity discipline level simulation models in CO. Metamodels in the disciplinary optimization are based on multi-fidelity modelling and for the system level optimization global metamodels are introduced using the moving least squares method (MLSM) combined with a trust region strategy. The multi-fidelity modelling consists of computationally efficient simplified numerical models (low fidelity) and expensive detailed (high fidelity) models. The main advantage of using simplified numerical models is that they reflect the most prominent features of the original model whilst remaining computationally inexpensive (they can be used repeatedly in the optimization process).

2 Collaborative optimization

CO is a bi-level optimization framework, with discipline-specific optimization runs that are free to obtain local designs, and a system level optimizer to coordinate this process while minimizing the overall design objective (Kroo 1995; Braun et al. 1996). Using the terminology defined by Alexandrov and Lewis (2000), and, for brevity, assuming that there are only two individual disciplines in an MDO problem, each discipline is based on a disciplinary analysis, which takes as its input a set of local design variables $l_i$, a set of system level design variables $s$ (also referred to as shared variables) and some other parameters. The total outputs from a discipline $i$, including all data passed to the other discipline as parameters and quantities passed to design constraints and objectives, are denoted by $q_i$. For example, $q_1 = \{q_1^p, q_1^g\}$, where $q_1^p$ are parameters passed to the discipline 2 (e.g. aerodynamic loads), and $q_1^g$ are the responses essential to the discipline 1 (e.g. drag or lift) that serve in the definition of the objective and constraints of discipline 1.

The system level optimization problem for a two-discipline case can be stated as (Alexandrov and Lewis 2000):

Minimize: $f(s, t_1, t_2)$   (1)

subject to: $C(s, t_1, t_2) = 0$   (2)

where $C = \{C_1, \ldots, C_N\}$ are the $N$ interdisciplinary consistency (or compatibility) constraints, and $t_1$ and $t_2$ are the interdisciplinary coupling variables for disciplines 1 and 2, respectively, serving as system level targets for the disciplinary outputs $q_1$ and $q_2$. Note that the output $q_i$ of the discipline $i$ can be an input for the discipline $j$.

The system level optimization provides target values $t_1$, $t_2$ and required values of the shared (system level) variables to disciplines 1 and 2. The discipline level optimizer seeks to match these values $(s, t_1, t_2)$ passed down by the system level. The variables $\psi_1$ and $\psi_2$ are introduced to relax the coupling between the disciplines introduced by the shared design variables $s$. Thus the variables $\psi_i$ are used as local copies (for discipline $i$) of the shared variables $s$.

For discipline 1, $\psi_1(s, t_1, t_2)$ and $\bar{l}_1(s, t_1, t_2)$ are components of the solution of the following minimization problem in $\psi_1$ and $l_1$:

Minimize: $\|\psi_1 - s\|^2 + \|q_1^g(\psi_1, l_1, q_2^{ps}) - t_1\|^2$   (3)

subject to: $g_1(\psi_1, l_1, q_1^g(\psi_1, l_1, q_2^{ps})) \le g_1^*$   (4)

where $(s, t_1, t_2)$ are target values passed to the disciplinary optimizer by the system level as parameters (fixed values), $q_i^{ps}$ are parameters that are frozen in the discipline $i$, $g_1$ are disciplinary constraint functions and $g_1^*$ are limits on them.

Similarly, the discipline 2 optimization problem can be expressed as:

Minimize: $\|\psi_2 - s\|^2 + \|q_2^g(\psi_2, l_2, q_1^{ps}) - t_2\|^2$   (5)

subject to: $g_2(\psi_2, l_2, q_2^g(\psi_2, l_2, q_1^{ps})) \le g_2^*$   (6)

The minimum values of the compatibility functions (3) and (5) will then serve as the constraints (2) in the system level optimization problem.
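To make the bi-level coordination concrete, the following minimal Python sketch (not from the paper) shows how the system level equality constraints in (2) would be assembled: each constraint value is simply the optimum of the corresponding disciplinary discrepancy minimization (3)–(4) or (5)–(6), evaluated at the targets handed down by the system level. The callables solve_discipline1 and solve_discipline2 are assumed placeholders for the disciplinary optimization runs.

```python
def compatibility_constraints(s, t1, t2, solve_discipline1, solve_discipline2):
    """Assemble the system level equality constraints C(s, t1, t2) of eq. (2).

    solve_discipline1(s, t1) is assumed to run the minimization (3) subject to (4)
    and return its optimum value; solve_discipline2(s, t2) does the same for (5)-(6).
    A zero value means the discipline can meet its targets exactly.
    """
    C1 = solve_discipline1(s, t1)  # minimum of the discipline 1 discrepancy function
    C2 = solve_discipline2(s, t2)  # minimum of the discipline 2 discrepancy function
    return [C1, C2]
```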

3 Metamodel-based CO framework

The organization of the optimization process and the main components of the metamodel-based CO framework are shown in Fig. 1 and are described in the sections below.

3.1 Disciplinary optimization

Constraints $g_i$ at the discipline level in CO correspond to functions describing the behaviour of a typical engineering system as related to a particular discipline $i$. The construction of metamodels at the discipline level is based on well-established global metamodelling concepts. In this paper, the multi-fidelity modelling concept (Toropov and Markine 1996; Zadeh and Toropov 2002) is used. Here, the multi-fidelity modelling concept is implemented to simultaneously utilize computational models of different levels of fidelity in a CO process to facilitate the solution of an MDO problem with high fidelity models, see the schematic illustration in Fig. 1.

Fig. 1 Metamodel-based collaborative optimization framework

It consists of expensive detailed (high fidelity) models and computationally efficient simplified numerical (low fidelity) models. The low fidelity models are tuned using a small number of high fidelity model runs and then used in place of the expensive high fidelity models in the optimization process. The tuned low fidelity model approaches the same level of accuracy as the high fidelity model while remaining inexpensive enough to be used repeatedly in the optimization process.

3.1.1 Metamodels based on low fidelity numerical models

The quality of a metamodel has a profound effect on the computational cost and convergence characteristics of metamodel-based optimization. It was shown by Box and Draper (1987) that a mechanistic model (one that is built upon some prior knowledge of the system) can often provide a higher quality metamodel than a purely empirical (e.g. polynomial) model. An example of an application to material parameter identification can be seen in Toropov and van der Giessen (1993). A more general way of constructing high quality metamodels, adopted here, is based on the use of a lower fidelity numerical model (Toropov and Markine 1996). This can be achieved either by simplifying the analysis model (e.g. coarser finite element mesh) or simplifying the modelling concept (e.g. simpler geometry and boundary conditions, use of a 2D instead of a 3D model, etc.). Such a model should reflect the most prominent physical features of the system under consideration and at the same time remain computationally inexpensive. The low fidelity numerical model can then be used to build the metamodel as follows:

$\tilde{F}(x, a) \equiv \tilde{F}(f(x), a)$   (7)

where $x$ is the vector of all design variables in the discipline being considered, including the local copies $\psi$ of the system level variables and the local variables $l$, see (3), (4) and (5), (6); $f(x)$ is the function representing a response obtained by the low fidelity model. The tuning parameters $a$ are used for minimizing the discrepancy between the high fidelity and the low fidelity models. In building metamodels $\tilde{F}(x, a)$ it is necessary to select an appropriate structure of the metamodel, i.e. to define it as a function of the design variables $x$ and the tuning parameters $a$. The efficiency of the optimization process depends strongly on the accuracy of such a metamodel. Zadeh and Toropov (2002) examined several types of metamodel functions on a test problem. The selection of the best metamodel function was based on the error of the corrected low fidelity model as compared to the high fidelity model at points of a design of experiments (DoE) referred to as a verification DoE. The following types of metamodel functions have been suggested by Toropov and Markine (1996).

Type 1 Linear and Multiplicative Metamodel with Two Tuning Parameters

The simplest form of the metamodel function is a linear or intrinsically linear function with two tuning parameters, which can be represented as follows:

Type 1 linear: $\tilde{F}(x, a) = a_0 + a_1 f(x)$   (8)

Type 1 multiplicative: $\tilde{F}(x, a) = a_0 f(x)^{a_1}$   (9)

where $a = [a_0\ a_1]^T$.

Type 2 Correction Functions

The type 2 metamodels are based on an explicit correction function $C(x, a)$ that depends on the design variables and tuning parameters, such as:

a multiplicative function:

$\tilde{F}(x, a) = f(x)\, C(x, a)$   (10)

where $C(x, a) = a_0 \prod_{l=1}^{N} x_l^{a_l}$;

or a linear function:

$\tilde{F}(x, a) = f(x) + C(x, a)$   (11)

where $C(x, a)$ can be a low order polynomial in $x$ with coefficients $a$, e.g. a linear, quadratic or cubic function.

Type 3 Use of model inputs as tuning parameters

A low fidelity model, $f(x)$, which is used as a basis for building a metamodel, depends not only on the design variables $x$ but also on other parameters, e.g. geometrical and material properties, etc. Some of these parameters can be considered as tuning parameters $a$ so that a metamodel function can be represented in the following form:

$\tilde{F}(x, a) = f(x, a).$   (12)

Once the metamodel type is chosen, the tuning parameters $a$ included in the metamodel are obtained by minimizing the sum of squares of the errors over the $P$ sampling points at which both the high fidelity and the low fidelity models have been run:

Minimize: $G(a) = \sum_{p=1}^{P} w_p \left( F_p - \tilde{F}(x_p, a) \right)^2$   (13)

where $x_p$ is the $p$th sampling point, $w_p$ is a weight that determines the relative contribution of the information at the $p$th point and $F_p$ is the corresponding value of the response from the high fidelity model; for more details see Toropov (2001).
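As an illustration of (8) and (13), the following minimal Python sketch (not part of the original paper) fits the two tuning parameters of a type 1 linear metamodel by weighted least squares; the function and variable names are illustrative only.

```python
import numpy as np

def fit_type1_linear(f_low, F_high, weights=None):
    """Fit a0, a1 in F~(x, a) = a0 + a1*f(x), eq. (8), by minimizing
    the weighted sum of squared errors of eq. (13) over the sampling points."""
    f_low = np.asarray(f_low, dtype=float)    # low fidelity responses f(x_p)
    F_high = np.asarray(F_high, dtype=float)  # high fidelity responses F_p
    w = np.ones_like(f_low) if weights is None else np.asarray(weights, dtype=float)
    A = np.column_stack([np.ones_like(f_low), f_low])
    sw = np.sqrt(w)
    a, *_ = np.linalg.lstsq(A * sw[:, None], F_high * sw, rcond=None)
    return a  # a = [a0, a1]

def tuned_low_fidelity(f_value, a):
    """Corrected (tuned) low fidelity response used in place of the high fidelity model."""
    return a[0] + a[1] * f_value
```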

The linear and multiplicative functions of types 1 and 2 have been successfully used for a variety of design optimization problems (Toropov 2001). The advantage of these metamodels is that a relatively small number of tuning parameters $a$ is required. For higher order metamodels of type 2, such as full quadratic and cubic polynomials, the error measure reduces but the metamodels can become prone to overfitting. In addition, these metamodels would require a significantly larger number of designs to be evaluated. The type 3 metamodel, similarly to mechanistic metamodels, is based on a deeper understanding of the process being modelled, which can be useful (Toropov 2001) but is problem-dependent.

3.1.2 Design of experiments

The selection of sampling points in the design variable space, where the response must be evaluated, is commonly referred to as Design of Experiments (DoE). DoE is an essential part of the metamodel building process, and allows the designer to build metamodels more efficiently by employing a set of points at which to evaluate the response function(s). The planning of the DoE (i.e. the choice of points in the design variable space) can have a considerable effect on the accuracy and the efficiency of the metamodel building. There are many schemes available in the literature for generating plans of experiments (Simpson et al. 1997). In this study the scheme suggested by Audze and Eglais (1977) is adopted. It is a space-filling DoE that is based on a Latin hypercube design.

The Latin hypercube (LH) approach (McKay et al. 1979) is independent of the mathematical model of a problem. It can be considered as an N-dimensional extension of the traditional Latin square design (Montgomery 1997). In each level of every design variable only one point is placed. In this approach the number of design points to be analysed can be defined without an explicit dependence on the number of design variables. There are two variations of the LH approach, random sampling and the optimal LH method. The optimal LH method is based on optimizing the uniformity of the distribution of the design points. A number of methods are available for obtaining an optimal LH, e.g. Bates et al. (2004). Audze and Eglais (1977) introduced an optimality criterion using an analogy with the potential energy of the system of points in the design space for obtaining a uniform distribution of design points; that criterion is used in this work.
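A minimal Python sketch of the Audze–Eglais idea is given below (not from the paper). The uniformity criterion is assumed here in its commonly used form, the "potential energy" $U = \sum_{i<j} 1/L_{ij}^2$ of the point set, and an optimal LH minimizes it. The paper relies on a permutation genetic algorithm (Bates et al. 2004) for this minimization; the naive random-restart search below is only for illustration.

```python
import numpy as np

def audze_eglais_energy(points):
    """Audze-Eglais uniformity criterion: sum of 1/L_ij^2 over all point pairs,
    by analogy with the potential energy of a system of repelling points."""
    P = np.asarray(points, dtype=float)
    diff = P[:, None, :] - P[None, :, :]
    d2 = np.sum(diff ** 2, axis=-1)
    i, j = np.triu_indices(len(P), k=1)
    return np.sum(1.0 / d2[i, j])

def random_latin_hypercube(n_points, n_vars, rng):
    """A random Latin hypercube on [0, 1]^n_vars: one point per level of each variable."""
    levels = np.column_stack([rng.permutation(n_points) for _ in range(n_vars)])
    return (levels + 0.5) / n_points

# Crude optimal-LH search: keep the candidate design with the lowest potential energy.
rng = np.random.default_rng(0)
best_doe = min((random_latin_hypercube(50, 3, rng) for _ in range(500)),
               key=audze_eglais_energy)
```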

3.2 System level optimization

Constraints at the system level are the equality constraints (discrepancy functions $C = 0$) in (2), which are much more computationally expensive than the discipline level constraints. Values of the system level constraints are obtained by solving the disciplinary optimization problems and correspond to a measure of disagreement between the targets given to a discipline by the system level optimizer. Hence, they are non-smooth at the transition from a plane of zero values (discipline compliance with given targets) to a region of non-zero values (discipline non-compliance). This feature can cause slow convergence of the CO system level optimization. Also, these characteristics of the system level optimization in CO make it more demanding in terms of the choice of a metamodelling technique.

Figure 2a and b show a 50-point uniform Latin hypercube plan corresponding to the system level design variables for disciplines 1 and 2 for the test problem used in this study (see Section 4 below). In these figures, larger dots indicate points at which the corresponding disciplinary optimizer returned non-zero values of the objective function. The remaining points correspond to zero values of the disciplinary optimization runs.

An initial study was carried out using a cubic polynomial response surface to build a metamodel for the system level optimization. Figure 3 shows the discipline 1 objective function (a constraint at the system level) represented by a cubic polynomial response surface with negative values removed by forcing them to zero.

The cubic metamodel captures the basic behavior of the function, but can exhibit an unrealistic non-zero domain (the “hump” in Fig. 3) in the region of the zero-level plateau. It is therefore necessary to employ a metamodelling strategy, suitable for the characteristics of the discrepancy function, for more accurate and efficient modelling within a CO framework. Jang et al. (2005) used kriging-based metamodels to approximate the system level constraints and Zadeh et al. (2004) used the moving least squares method (MLSM) for the same purpose. In this study the use of MLSM is demonstrated on a test example.

3.2.1 Moving least squares method

The moving least squares method (Wang and Yang 2000; Choi et al. 2001) is a method for metamodel building in design optimization. It can be thought of as a weighted least squares method with weight functions that vary with respect to the location at which the metamodel is evaluated. In the space of all system level variables $z$, which includes $s$, $t_1$, $t_2$ in (1), (2), the weight associated with a particular sampling point $z_i$ decays as the prediction point $z$ moves away from $z_i$, so the metamodel obtained by the least squares fit is termed a moving least squares metamodel of the original function $F(z)$. Since the weights $w_i$ are functions of $z$, the polynomial basis function coefficients are also dependent on $z$. This means that it is not possible to obtain an analytical form of the function $\tilde{F}(z)$, but its evaluation is computationally inexpensive. It is possible to control the “closeness of fit” of the metamodel to the sampling data set by changing a parameter in a weight decay function $w_i(r)$, where $r$ is the distance from the $i$th sampling point. Such a parameter defines the rate of weight decay or the radius of a sphere beyond which the weight is assumed to be zero (the sphere of influence of a sampling point $z_i$).

Fig. 2 Characteristics of the system level equality constraints shown by the 50 point uniform Latin hypercube DoE for the test problem. Larger circular dots correspond to violated constraints and remaining dots indicate that constraints are satisfied

Fig. 3 Cubic metamodel, discipline 1 objective function (with negative values removed)

There are several types of weight decay function that can be used to control the closeness of fit. Some of these are described by Toropov et al. (2005). The most frequently used function is the Gaussian function, expressed by $w_i = \exp(-\theta r_i^2)$, where $\theta$ defines the closeness of fit. The case $\theta = 0$ is equivalent to conventional least squares regression. When $\theta$ is large it is possible to obtain a very close fit through the sampling points (approaching interpolation).
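The short Python sketch below (not from the paper) illustrates an MLSM prediction with the Gaussian weight decay described above. A quadratic basis without cross terms is assumed here for brevity, whereas the study below uses a full quadratic base polynomial; names and defaults are illustrative.

```python
import numpy as np

def mlsm_predict(z, Z, F, theta=10.0):
    """Moving least squares prediction at point z from sampling points Z (P x n)
    and responses F (P,), with Gaussian weights w_i = exp(-theta * r_i^2).
    The basis coefficients are re-fitted at every prediction point, so no single
    analytical expression for the metamodel exists."""
    z, Z, F = np.asarray(z, float), np.asarray(Z, float), np.asarray(F, float)

    def basis(x):  # quadratic basis without cross terms (a simplification)
        return np.concatenate(([1.0], x, x ** 2))

    r2 = np.sum((Z - z) ** 2, axis=1)
    w = np.exp(-theta * r2)                 # weights "move" with the prediction point
    B = np.array([basis(x) for x in Z])
    lhs = B.T @ (B * w[:, None])            # weighted normal equations
    rhs = B.T @ (w * F)
    coeff = np.linalg.lstsq(lhs, rhs, rcond=None)[0]
    return basis(z) @ coeff
```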

4 Numerical example

This section presents a demonstration of the proposed metamodel-based CO framework for solving MDO problems with high fidelity models used in individual disciplines on an intentionally simple problem. It attempts to mimic some of the essential features of real-life problems, such as multidisciplinarity and the potential for the use of high and low fidelity FE models in the disciplinary simulation, without involving a massive computing effort. This test problem deals with the weight minimization of a continuous fiber-reinforced composite cantilever beam subject to a parabolic distributed load $p(\zeta)$ as shown in Fig. 4. The objective function (to be minimized) is the weight of the beam. The design constraints include a limitation on the maximum deflection at the free end of the beam $\delta_{max}$, a constraint on the maximum bending stress in the beam $\sigma_{max}$ and the geometric requirement that the depth of the beam $h$ does not exceed ten times the width $w$ to avoid torsional lateral buckling, as well as the side constraints on the design variables. The design data for this problem are listed in Table 1.

Fig. 4 Beam subjected to a parabolic distributed load $p(\zeta)$

The maximum stress and deflection of the beam can be calculated as follows:

$\sigma_{max} = \dfrac{M_{max} h}{2I} = \dfrac{3}{2}\,\dfrac{p_0 L^2}{w h^2} = \dfrac{p_0 L^2 h}{8I}$   (14)

$\delta_{max} = \dfrac{19 p_0 L^4}{360 E I}$   (15)

where the parabolic load $p(\zeta)$ over the beam is defined as:

$p(\zeta) = p_0 \left( 1 - \dfrac{\zeta^2}{L^2} \right)$   (16)

and the second moment of area $I = \dfrac{w h^3}{12}$. Based on the rule of mixtures for a continuous fiber-reinforced composite material with a fiber volume fraction $v_f$ and a matrix volume fraction $v_m$, the following relationship must be satisfied:

$v_f + v_m = 1$   (17)

Table 1 Design data

Description (notation)                              Unit     Value
Parameter in the parabolic distributed load p0      N/mm     1.0
Length of the beam L                                mm       1,000
Elastic modulus, graphite fiber Ef                  N/mm2    2.3 × 10^5
Elastic modulus, epoxy resin Em                     N/mm2    3.45 × 10^3
Weight density, graphite fiber ρf                   N/mm3    1.72 × 10^-5
Weight density, epoxy resin ρm                      N/mm3    1.2 × 10^-5
Stress limit σ*                                     N/mm2    166.667
Displacement limit δ*                               mm       12.9387



The longitudinal (fiber direction) Young's modulus $E$ and the composite weight density $\rho$, in terms of the fiber volume fraction using the rule of mixtures, can be expressed as:

$E = E_f v_f + E_m (1 - v_f)$   (18)

$\rho = \rho_f v_f + \rho_m (1 - v_f)$   (19)

where $E_f$ and $E_m$ are the elastic moduli for graphite fiber and epoxy resin, respectively, and $\rho_f$ and $\rho_m$ are the weight density values of the graphite fiber and epoxy resin, respectively.

The fiber volume fraction can vary from zero (no fiber) to the maximum value $v_f^{max}$. The maximum possible fiber volume fraction, using the packing geometry provided in Fig. 5, is calculated as follows:

$v_f^{max} = \dfrac{A_{fiber}}{A_{total}} = \dfrac{3\pi R^2}{6\sqrt{3}R^2} = \dfrac{\pi}{2\sqrt{3}} = 0.9069.$   (20)

4.1 Conventional optimization problem formulation

The objective is to minimize the weight of the composite cantilever beam. The design variables are shown in Table 2. The function to be minimized is the weight of the beam:

Objective: weight $= A L \rho$

where $A = \dfrac{12 I}{h^2}$ (mm²), $L = 1,000$ mm and $\rho = \rho_m + v_f(\rho_f - \rho_m)$. The constraints are:

$g_1(I, h) = \dfrac{\sigma_{max}}{\sigma^*} = \dfrac{p_0 L^2 h}{8 I \sigma^*} \le 1$   (21)

$g_2(I, v_f) = \dfrac{\delta_{max}}{\delta^*} = \dfrac{19 p_0 L^4}{360 E I \delta^*} = \dfrac{19 p_0 L^4}{360 I \left[ E_m + v_f (E_f - E_m) \right] \delta^*} \le 1$   (22)

Fig. 5 Geometry of fiber packing in a unit volume

Table 2 Design variables

Description (notation)        Unit    Lower limit      Upper limit
Second moment of area I       mm4     0.333 × 10^4     20.833 × 10^4
Height of the beam h          mm      20               50
Fiber volume fraction vf      –       0.4              0.9069

$g_3(I, h) = \dfrac{h}{10 w} = \dfrac{h^4}{120 I} \le 1.$   (23)

The inequalities (21)–(23) represent, respectively, the constraints on the maximum stress, the maximum deflection at the free end of the beam and the geometric requirement that the depth of the beam must not exceed ten times the width of the composite beam. The limiting values of stress and displacement are shown in Table 1, and they correspond to the values for the baseline design. The side constraints on the design variables are given by the lower and upper limits, see Table 2.

Conventional (all-at-once) optimization is carried out using a sequential quadratic programming (SQP) algorithm. Results are shown in Table 3.
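For illustration, a minimal Python sketch of this all-at-once formulation is given below (not from the paper); it uses SciPy's SLSQP as a stand-in for the SQP algorithm and the data of Tables 1 and 2. Variable names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Design data from Table 1
p0, L = 1.0, 1000.0
Ef, Em = 2.3e5, 3.45e3
rho_f, rho_m = 1.72e-5, 1.2e-5
sigma_lim, delta_lim = 166.667, 12.9387

def weight(x):
    """Objective: beam weight A*L*rho with A = 12*I/h^2 and rho from eq. (19)."""
    I, h, vf = x
    return 12.0 * I / h**2 * L * (rho_m + vf * (rho_f - rho_m))

def constraints(x):
    """Constraints (21)-(23), returned so that feasible designs give values >= 0."""
    I, h, vf = x
    E = Em + vf * (Ef - Em)
    g1 = p0 * L**2 * h / (8.0 * I * sigma_lim)            # stress
    g2 = 19.0 * p0 * L**4 / (360.0 * E * I * delta_lim)   # tip deflection
    g3 = h**4 / (120.0 * I)                               # h <= 10*w
    return np.array([1.0 - g1, 1.0 - g2, 1.0 - g3])

x0 = np.array([2.25e4, 30.0, 0.785])                       # baseline design (Table 3)
bounds = [(0.333e4, 20.833e4), (20.0, 50.0), (0.4, 0.9069)]
res = minimize(weight, x0, method="SLSQP", bounds=bounds,
               constraints={"type": "ineq", "fun": constraints})
# Expected to approach I ≈ 3.36e4 mm^4, h ≈ 44.8 mm, vf ≈ 0.52, weight ≈ 2.95 N (Table 3)
```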

4.2 Collaborative optimization formulation

The composite beam test problem is now posed to suit the collaborative optimization framework. The test problem is decomposed into two disciplines (the stress-constrained and the deflection-constrained problems) and a system level optimizer to coordinate the overall optimization procedure. The design variables shown in Table 3 are grouped into the two disciplines 1 and 2, as shown in Table 4.

Table 3 Results of optimization (all-at-once) using SQP

Design variables   Unit   Baseline design   Optimum
I = x1             mm4    2.25 × 10^4       3.361 × 10^4
h = x2             mm     30                44.814
vf = x3            –      0.785             0.5205
Objective F(x)     N      4.8246            2.9535
C1(x)              –      1.0               1.0
C2(x)              –      1.0               1.0
C3(x)              –      0.3               1.0

Table 4 Design variables and constraints of the test problem using CO

Discipline levels   Design variables   Constraints
Discipline 1        ψ1,1, ψ1,2         g1,1 = g1, g1,2 = g3
Discipline 2        ψ2,1, ψ2,2         g2,1 = g2
System level        s1, s2, s3         C1, C2

The system level design variables s1, s2 and s3 represent the second moment of area I, the depth of the beam h and the fiber volume fraction vf, respectively. In discipline 1 the variable ψ1,1 is a local copy of s1 and ψ1,2 is that of s2, and in discipline 2 the variable ψ2,1 is a local copy of s1 and ψ2,2 is that of s3. An optimization run at the discipline level finds optimum values of the design variables which satisfy the discipline's own constraints and minimize the discrepancy from the target values generated by the system level optimizer. At the system level the constraints are:

$C_1(s) = 0, \quad C_2(s) = 0$   (24)

where $C_1(s)$ and $C_2(s)$ are the discrepancies between the actual and target values returned from discipline 1 and discipline 2, respectively. The system level optimization must satisfy these consistency requirements among disciplines 1 and 2 by enforcing the equality constraints (24).

4.3 Discipline level optimization

This section demonstrates the multi-fidelity modelling strategy in the discipline level optimization using the test problem, focusing on the use of a tuned low fidelity model in place of a high fidelity model in the optimization process. The models used here involve two levels of fidelity: a coarse FE model consisting of 2 beam elements as a low fidelity model and a fine-meshed FE model consisting of 100 beam elements as a high fidelity model. In order to make this simple example mimic an important feature of real-life problems, the commercial FE software ANSYS was used. The tuned low fidelity model is used in optimization. The discrepancy function to be minimized in the first discipline is

$(s_1 - \psi_{1,1})^2 + (s_2 - \psi_{1,2})^2.$   (25)

The constraints in discipline 1 are:

$g_{1,1} = \dfrac{p_0 L^2 \psi_{1,2}}{8 \psi_{1,1} \sigma^*} \le 1$   (26)

$g_{1,2} = \dfrac{\psi_{1,2}^4}{120\, \psi_{1,1}} \le 1$   (27)

The discrepancy function of discipline 2 can be expressed as:

Minimize: $(s_1 - \psi_{2,1})^2 + (s_3 - \psi_{2,2})^2$   (28)

The second discipline has one constraint on the maximum deflection of the beam:

$g_{2,1} = \dfrac{19 p_0 L^4}{360\, \psi_{2,1} \left[ E_m + \psi_{2,2} (E_f - E_m) \right] \delta^*} \le 1.$   (29)
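A minimal Python sketch of the two disciplinary subproblems (25)–(29) follows (not from the paper). It uses the analytical constraint expressions directly, whereas in the paper the disciplinary constraints are evaluated through tuned low fidelity ANSYS beam models; function names, bounds and the SLSQP solver are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

p0, L = 1.0, 1000.0
Ef, Em = 2.3e5, 3.45e3
sigma_lim, delta_lim = 166.667, 12.9387

def discipline1(s1, s2):
    """Discipline 1, eqs. (25)-(27): match the targets (s1, s2) under the stress
    and geometry constraints. Returns the minimum discrepancy, i.e. the value
    of C1 seen by the system level."""
    obj = lambda psi: (s1 - psi[0])**2 + (s2 - psi[1])**2
    cons = [
        {"type": "ineq", "fun": lambda psi: 1.0 - p0 * L**2 * psi[1] / (8.0 * psi[0] * sigma_lim)},
        {"type": "ineq", "fun": lambda psi: 1.0 - psi[1]**4 / (120.0 * psi[0])},
    ]
    bounds = [(0.333e4, 20.833e4), (20.0, 50.0)]
    return minimize(obj, np.array([s1, s2]), method="SLSQP",
                    bounds=bounds, constraints=cons).fun

def discipline2(s1, s3):
    """Discipline 2, eqs. (28)-(29): match (s1, s3) under the tip deflection constraint (C2)."""
    obj = lambda psi: (s1 - psi[0])**2 + (s3 - psi[1])**2
    cons = [{"type": "ineq",
             "fun": lambda psi: 1.0 - 19.0 * p0 * L**4
                    / (360.0 * psi[0] * (Em + psi[1] * (Ef - Em)) * delta_lim)}]
    bounds = [(0.333e4, 20.833e4), (0.4, 0.9069)]
    return minimize(obj, np.array([s1, s3]), method="SLSQP",
                    bounds=bounds, constraints=cons).fun
```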

4.4 Implementation of multi-fidelity modelling methodology in CO

The main steps in the multi-fidelity modelling methodology, applied to the creation of a metamodel utilised in the discipline level optimization in CO, are described below, following Zadeh and Toropov (2002).

Step 1 Choice of a design of experiments. In this step, the scheme suggested by Audze and Eglais (1977) is used to uniformly plan P points in the design variable space. In order to find what number of points is sufficient for the construction of high quality metamodels, five separate designs of experiments of ten, five, four, three and two points were studied on the above test problem.

Step 2 Run low and high fidelity simulation models. Response values are computed at the selected DoE points using a low fidelity (two beam elements) and a high fidelity (100 beam elements) FE model.

Step 3 Choice of metamodel function. In this step, the original high fidelity simulation model is replaced by the tuned low fidelity model and used as a metamodel. Six types of metamodel (linear and multiplicative of types 1 and 2, quadratic and cubic of type 2) were examined on the above test problem. The selection of the metamodel type was based on the root mean square (RMS) error and the maximum deviation error of the tuned low fidelity model as compared to the high fidelity model at points of another DoE, referred to as a verification DoE. The five-point plan was selected as an appropriate one for building metamodels for the constraints in both disciplines. This was checked against the verification plan. Based on the results, the type 1 linear metamodel was selected for use in the collaborative optimization framework.

Step 4 Tuning the low fidelity model. Since the approximation of the original high fidelity model $F(x)$ by the simplified numerical model $f(x)$ is not exact, the discrepancy between them (7) is to be minimized using the tuning parameters $a$. This procedure was used in the metamodel type selection in the previous step.

Step 5 Run optimization using the corrected low fidelity model. In this step, the tuned low fidelity model is used in place of the original high fidelity model in the optimization process. In the composite beam test problem the discrepancy between the results obtained using the high fidelity model and the metamodel is negligible.

4.5 System level optimization using MLSM

This section focuses on the system level optimization in the test problem, which is described by:

Minimize: $f = \dfrac{12 s_1}{s_2^2}\, L \left[ \rho_m + s_3 (\rho_f - \rho_m) \right]$   (30)

subject to: $C_1(s) = 0, \quad C_2(s) = 0$   (31)

$0.333 \times 10^4 \le s_1 \le 20.833 \times 10^4, \quad 20.0 \le s_2 \le 50.0, \quad 0.4 \le s_3 \le 0.9069$   (32)

where $s_1$, $s_2$ and $s_3$ are the system level design variables; $C_1(s)$ and $C_2(s)$ are the system level equality compatibility constraints. The values of the functions $C_1(s)$ and $C_2(s)$ are produced as a result of the disciplinary optimization runs and can be expensive to evaluate, hence they are replaced by inexpensive metamodels. The main steps are outlined below.

Step 1 Choice of a design of experiments. The selection of points in the design variable space is based on an Audze–Eglais uniform Latin hypercube, shown in Fig. 2a and b for disciplines 1 and 2, respectively.

Step 2 Compute the values of the compatibility constraints C1 and C2. The values of C1 and C2 are computed by the disciplinary optimization runs where the targets correspond to the DoE points established in Step 1.

Step 3 Construct a metamodel. Compatibility constraint and, if needed, objective function values are used to build global metamodels for all disciplines.

In the moving least squares method there are several parameters, such as the order of the base polynomial (e.g. linear, quadratic, etc.), the size of the domain of influence and the type of the weight decay function, that can be used to control the quality of a metamodel. In this study the Gaussian function, $w_i = \exp(-\theta r_i^2)$, is used as the weight decay function and a quadratic polynomial is used as the base function with $\theta = 10$. Zadeh et al. (2005) investigated on the same test problem the effects of the order of the base polynomial, considering linear, quadratic and cubic ones, and also of the value of $\theta$. They concluded that the settings mentioned above produced the best quality of the MLSM metamodel. It should be mentioned that these conclusions are not universal, e.g. for a case of a larger number (say, over 10) of design variables even a quadratic base function could lead to an excessive amount of data needed to build a metamodel. Zadeh et al. (2005) demonstrated that a good quality metamodel can also be obtained using a linear base polynomial; this can be a reasonable choice for such a case of a larger number of design variables. As related to the best value of the parameter $\theta$, several measures of the metamodel quality can be used in an optimization process in which $\theta$ is treated as a variable. This can be done either on the same data (e.g. using a "leave-K-out" approach, see, e.g. Meckesheimer et al. 2002) or on data points from a separate design of experiments, see Wang and Shan (2007) and Narayanan et al. (2007). Figure 6 shows the global metamodel obtained for the constraint C1 related to discipline 1.

Step 4 Solve the system level optimization problem. Metamodels constructed in Step 3 are used in the system level optimization. The process treats response values below 2 × 10−4 (due to the metamodel error, which is corrected during the optimization process) as zero. A genetic algorithm (GA) was used to solve the system level optimization in this example.

Fig. 6 Global metamodel built by MLSM for constraint C1 (discipline 1)

Step 5 Trust region strategy: After solving the optimization problem with global metamodels, construct a new sub-region (trust region) in the design space, add new DoE points and return to Step 2. This step focuses on the localised search for an optimum solution. The new sub-region is centred on the design point obtained in Step 4 and is resized to a smaller part (here, half of the range of each design variable) of the original design variable domain. The new DoE points are generated in such a way as to ensure a homogeneous distribution of the points inside a search sub-region. The following strategy was adopted: the initial DoE had 50 uniformly spaced points (see Fig. 2a and b) and each subsequent trust region had 20 new DoE points added and used together with existing points from the previous iterations. The new DoE points were added randomly whilst controlling the minimum distance ε from the already planted points. In the test problem above the value of ε is chosen as 5% of the diagonal of the current trust region. If the distance from a new point to any of the existing points is smaller than ε then such a candidate point is abandoned and another random point is chosen. This is repeated until all the new points have been allocated. The optimization problem is solved and checked for convergence of the solution (stop if convergence is obtained, otherwise the trust region allocation process is continued until the optimum solution is reached). The acceptance criteria for convergence include constraint violation and objective function improvement. The result of CO based on the multi-fidelity modelling is shown in Table 5.

Table 5 Results of metamodel-based collaborative optimization and comparison with the all-at-once method

Iteration        Plan points        System level                                        Discipline 1                Discipline 2
                 Disc. 1  Disc. 2   s1 (×10^4)  s2      s3     C1   C2   Objective f    Objective  g1,1   g1,2     Objective  g2,1
                                                                                                    (g1)   (g3)                (g2)
Global range     50       50        6.771       42.889  0.401  0    0    6.221          0          1.53   1.58     0          1.36
Trust region 1   35       30        2.620       36.577  0.402  0    0    3.312          0          1.0    1.45     0.07       1.0
Trust region 2   39       28        3.314       43.350  0.497  0    0    3.085          0          1.02   1.11     0          1.0
Trust region 3   25       24        3.359       44.819  0.524  0    0    2.955          0          1.0    1.0      0          1.0
All-at-once      –        –         3.361       44.814  0.521  –    –    2.954          –          1.0    1.0      –          1.0

Table 6 Evaluation of predictive capabilities of metamodels constructed during CO runs for the system level (discipline 1)

Iterations         RMSE     R-square   RMAE     RAAE
Global metamodel   0.5355   0.9021     1.3235   0.1402
Trust region 1     0.2895   0.9267     0.7582   0.1590
Trust region 2     0.3207   0.9914     0.2255   0.0711
Trust region 3     0.0903   0.9980     0.1173   0.0323
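A minimal Python sketch of the trust region DoE augmentation described in Step 5 is shown below (not from the paper): the sub-region is centred on the current optimum, halved in size, and new random points are accepted only if they are farther than ε (5% of the region diagonal) from all existing points. Names and the retry limit are illustrative.

```python
import numpy as np

def shrink_trust_region(center, lower, upper, factor=0.5):
    """Centre the next sub-region on the current optimum and halve each variable's range."""
    center, lower, upper = (np.asarray(v, float) for v in (center, lower, upper))
    half = factor * (upper - lower) / 2.0
    return np.clip(center - half, lower, upper), np.clip(center + half, lower, upper)

def add_points_with_spacing(existing, n_new, lower, upper, eps_frac=0.05,
                            rng=None, max_tries=10000):
    """Add n_new random DoE points inside [lower, upper], rejecting any candidate
    closer than eps = eps_frac * (region diagonal) to an already planted point."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    rng = rng or np.random.default_rng()
    eps = eps_frac * np.linalg.norm(upper - lower)
    pts = [np.asarray(p, float) for p in existing]
    added, tries = 0, 0
    while added < n_new and tries < max_tries:
        tries += 1
        cand = lower + rng.random(len(lower)) * (upper - lower)
        if all(np.linalg.norm(cand - p) >= eps for p in pts):
            pts.append(cand)
            added += 1
    return np.array(pts)
```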

5 Optimization algorithms used in CO

In the discipline level optimization runs various optimization techniques can be used (e.g. SQP) in conjunction with metamodels. At the system level, however, the use of equality constraints to represent the disciplinary feasible regions introduces numerical and computational difficulties that hinder the application of gradient-based optimization algorithms. In this study, three different optimization algorithms, SQP, Nelder–Mead (Nelder and Mead 1965) and a GA, were compared for the system level optimization on the above test problem. Due to the special features of the system level optimization discussed above, both the SQP and Nelder–Mead algorithms failed to provide a converged solution and, consequently, a more robust optimization algorithm (GA) was used for solving the CO system level optimization. SQP was successfully used for solving the optimization problems in disciplines 1 and 2.
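For illustration only, the sketch below shows one way the system level problem (30)–(32) could be driven by a robust population-based optimizer once the compatibility constraints are available as cheap metamodels. The paper used a GA; here SciPy's differential evolution stands in for it, the equality constraints are handled by a simple penalty, and the 2 × 10−4 zero threshold of Step 4 is applied. C1_hat and C2_hat are assumed metamodel callables.

```python
import numpy as np
from scipy.optimize import differential_evolution

rho_f, rho_m, L = 1.72e-5, 1.2e-5, 1000.0
bounds = [(0.333e4, 20.833e4), (20.0, 50.0), (0.4, 0.9069)]  # eq. (32)

def penalized_system_objective(s, C1_hat, C2_hat, penalty=1.0e3, tol=2.0e-4):
    """System level objective (30) plus a penalty on the metamodel-predicted
    compatibility constraints (31); predictions below tol are treated as zero."""
    f = 12.0 * s[0] / s[1]**2 * L * (rho_m + s[2] * (rho_f - rho_m))
    c1 = max(C1_hat(s) - tol, 0.0)
    c2 = max(C2_hat(s) - tol, 0.0)
    return f + penalty * (c1 + c2)

# Usage (with C1_hat, C2_hat built, e.g., by the MLSM sketch above):
# result = differential_evolution(penalized_system_objective, bounds,
#                                 args=(C1_hat, C2_hat), seed=0)
```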

6 Evaluation of predictive capabilities of the metamodels

The construction of highly accurate metamodels is a requirement for system level optimization within a CO framework and it is therefore important to evaluate the predictive capabilities of such models. An accuracy estimation using several statistical criteria was used in this work. These include the root mean square error (RMSE), R-square, relative average absolute error (RAAE) and relative maximum absolute error (RMAE) over DoE points. The larger the R-square and the smaller the RMSE and RAAE values, the more accurate the metamodel. Note that the parameters of the MLSM-based metamodels were fixed in all runs.

Table 7 Evaluation of predictive capabilities of metamodels constructed during CO runs for the system level (discipline 2)

Iterations         RMSE     R-square   RMAE     RAAE
Global metamodel   0.1924   0.7984     2.1062   0.1801
Trust region 1     0.0147   0.9300     0.6985   0.1716
Trust region 2     0.1591   0.9789     0.3612   0.1215
Trust region 3     0.0571   0.9909     0.2858   0.0611

In Tables 6 and 7, the R-square values for disciplines 1 and 2 increase from 0.9021 to 0.998 and from 0.7984 to 0.9909, respectively. This indicates that as the trust region shrinks, the accuracy of the metamodel increases.
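The sketch below (not from the paper) computes these accuracy measures for a metamodel over a set of verification points. The exact normalisation of RMAE and RAAE by the standard deviation of the true responses is an assumption based on common usage; the paper does not spell out the formulas.

```python
import numpy as np

def metamodel_accuracy(y_true, y_pred):
    """RMSE, R-square, relative maximum absolute error (RMAE) and
    relative average absolute error (RAAE) over the DoE points."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    std = np.std(y_true)
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - np.mean(y_true)) ** 2)
    return {"RMSE": rmse, "R-square": r2,
            "RMAE": np.max(np.abs(err)) / std,
            "RAAE": np.mean(np.abs(err)) / std}
```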

7 Conclusions

This paper described a metamodel-based CO approach to multidisciplinary optimization with high fidelity simulation models. The obtained results (Table 5) show a high degree of accuracy in the discipline level optimization using multi-fidelity modelling. It has been observed that a tuned low fidelity model can have virtually the same accuracy as the high fidelity model. The discrepancy of the optimum solutions using the high fidelity and the tuned low fidelity models is negligible in the test example.

Construction of metamodels poses significant difficulties because of the peculiar characteristics of the system level constraints. The results obtained here show that the MLSM can be used effectively for the construction of metamodels at the system level to capture the behavior of highly non-linear surfaces (in particular the transition from a plateau of zero values to non-zero values on the equality constraint surfaces constructed at the system level). MLSM predicted the response with sufficient accuracy and required only four iterations to reach an optimum solution. Good agreement was found between the results obtained using conventional all-at-once optimization and metamodel-based collaborative optimization using high fidelity models for the test problem (Table 5). In terms of computing time, it was observed that with the implementation of metamodels in CO the time was reduced from around 10 h (when no metamodels were used at either the discipline level or the system level) to several minutes on a typical Windows PC. This is an encouraging result for future applications to complex MDO problems with high fidelity models (e.g. for vehicle crash simulation, aerospace design, etc.).

References

Alexandrov NM, Lewis RM (2000) Analytical and computational properties of distributed approaches to MDO. In: AIAA-2000-4718, Proceedings of the 8th AIAA/NASA/USAF/ISSMO symposium on multidisciplinary analysis and optimization. Long Beach, CA

Alexandrov N, Dennis JE, Lewis RM, Torczon V (1998) A trust region framework for managing the use of approximation models in optimization. Struct Multidisc Optim 15:16–23

Alexandrov NM, Lewis RM, Gumbert CR, Green LL, Newman PA (1999) Optimization with variable-fidelity models applied to wing design. Technical report no. 99-49. ICASE, Institute for Computer Applications in Science and Engineering, NASA Langley Research Center, Hampton, VA

Audze P, Eglais V (1977) New approach for planning out of experiments. Problems of Dynamics and Strengths 35:104–107 (in Russian)

Barthelemy J-FM, Haftka RT (1993) Approximation concepts for optimum structural design—a review. Struct Optim 5(3):129–144

Bates SJ, Sienz J, Toropov VV (2004) Formulation of the Audze–Eglais uniform Latin hypercube design of experiments using a permutation genetic algorithm. In: Proceedings of the 45th AIAA/ASME/ASE/AHS/ASC structural dynamics and materials conference. Palm Springs, CA, 19–22 April 2004

Box GEP, Draper NR (1987) Empirical model-building and response surfaces. Wiley, New York

Braun RD, Gage P, Kroo IM, Sobieski I (1996) Implementation and performance issues in collaborative optimization. In: Proceedings of the 6th AIAA/NASA/USAF/ISSMO symposium on multidisciplinary analysis and optimization. Bellevue, WA

Choi KK, Youn BD, Yang R-J (2001) Moving least square method for reliability-based design optimization. In: Proceedings of the 4th world congress of structural and multidisciplinary optimization. Dalian

Cramer EJ, Frank PD, Shubin GR, Dennis JE, Lewis RM (1992) On alternative problem formulation for multidisciplinary optimization. In: Proceedings of the 4th AIAA/USAF/NASA/OAI symposium on multidisciplinary analysis and optimization. Cleveland, OH, AIAA 92-4752

Giunta A (1997) Aircraft multidisciplinary optimization using design of experiments theory and response surface modelling methods. Ph.D. thesis, Department of Aerospace and Ocean Engineering, Virginia Tech, Blacksburg, VA

Giunta AA, Balabanov V, Burgee S, Grossman B, Haftka RT, Mason WH, Watson LT (1995) Variable-complexity multidisciplinary design optimization using parallel computers. In: Alturi SN, Yagawa G, Cruse TA (eds) Computational mechanics '95—theory and applications. Springer, Berlin, pp 489–494

Jang BS, Yang YS, Jung HS, Yeun YS (2005) Managing approximation models in collaborative optimization. Struct Multidisc Optim 30:11–26

Kim HM, Rideout DG, Papalambros PY, Stein JL (2003) Analytical target cascading in automotive vehicle design. J Mech Des 125:481–489

Kodiyalam S, Sobieszczanski-Sobieski J (2001) Multidisciplinary design optimization: some formal methods, framework requirements and application to vehicle design. Int J Vehicle Design 25(special issue):3–32

Kroo I (1995) Decomposition and collaborative optimization for large scale aerospace design. In: Proceedings of the ICAS/LaRC/SIAM workshop on multidisciplinary optimization. Hampton, VA

Kroo I, Altus S, Braun R, Gage P, Sobieski I (1994) Multidisciplinary optimization method for aircraft preliminary design. In: AIAA-94-4333-CP, Proceedings of the 5th AIAA/NASA/USAF/ISSMO symposium on multidisciplinary analysis and optimization. Panama City, FL, pp 697–707, 7–9 September

Mason WH, Haftka RT, Johnson ER (1994) Analysis and design of composite channel frames. In: Proceedings of the 5th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, vol 2. AIAA Paper 94-4346-CP, Panama City, Florida, pp 1023–1040, September 7–9

McKay MD, Beckman RJ, Conover WJ (1979) A comparison of three methods for selecting values of input variables in the analysis of output from a computer code. Technometrics 21(2):239–245

Meckesheimer M, Booker AJ, Barton RR, Simpson TW (2002) Computationally inexpensive metamodel assessment strategies. AIAA J 40(10):2053–2060

Montgomery DC (1997) Design and analysis of experiments. Wiley, New York

Narayanan A, Toropov VV, Wood AS, Campean IF (2007) Simultaneous model building and validation with uniform designs of experiments. Eng Optim 39(5):497–512

Nelder JA, Mead R (1965) A simplex method for function minimization. Comput J 7:308–313

Rodriguez JF, Renaud JE, Watson LT (1997) Convergence of trust region augmented Lagrangian methods using variable fidelity approximation data. In: Proceedings of the 2nd world congress of structural and multidisciplinary optimization. Zakopane, Poland, 26–30 May

Simpson TW, Peplinski J, Mitchell TJ, Allen JK (1997) On the use of statistics in design and the implications for deterministic computer experiments. In: Proceedings of the ASME, DETC97/DTM-3881, design theory and methodology—DTM'97. Sacramento, CA

Simpson TW, Booker AJ, Ghosh D, Giunta AA, Koch PN, Yang R-J (2004) Approximation methods in multidisciplinary analysis and optimization: a panel discussion. Struct Optim 27:302–313

Sobieski IP, Kroo IM (1996) Aircraft design using collaborative optimization. In: Proceedings of the aerospace sciences meeting. AIAA Paper 96-0715, Reno, NV, January

Sobieszczanski-Sobieski J (1988) Optimization by decomposition: a step from hierarchic to non-hierarchic systems. In: Proceedings of the 2nd NASA/Air Force symposium on recent advances in multidisciplinary analysis and optimization. Hampton, VA, NASA-CP-3031

Sobieszczanski-Sobieski J, Haftka RT (1997) Multidisciplinary aerospace design optimization: survey of recent developments. Struct Optim 14(1):1–23

Sobieszczanski-Sobieski J, Agte J, Sandusky R Jr (1998) Bi-level integrated system synthesis (BLISS). In: Proceedings of the 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization (St. Louis, MO). AIAA 98-4916

Toropov VV (2001) Modelling and approximation strategies in optimization—global and mid-range metamodels, response surface methods, genetic programming, and low/high fidelity models. In: Blachut J, Eschenauer HA (eds) Emerging methods for multidisciplinary optimization, CISM courses and lectures, no. 425, International Centre for Mechanical Sciences. Springer, Berlin, pp 205–256

Toropov VV, Markine VL (1996) The use of simplified numerical models as mid-range approximations. In: Proceedings of the 6th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization, pp 952–958

Toropov VV, van der Giessen E (1993) Parameter identification for nonlinear constitutive models: finite element simulation–optimization–nontrivial experiments. In: Pederson P (ed) Optimal design with advanced materials—the Frithiof Niordson volume. Proceedings of IUTAM symposium. Elsevier, Amsterdam, pp 113–130

Toropov VV, Schramm U, Sahai A, Jones RD, Zeguer T (2005) Design optimization and stochastic analysis based on the moving least squares method. In: Proceedings of the 6th world congress of structural and multidisciplinary optimization. Rio de Janeiro, Brazil, 30 May–3 June

Venkataraman S, Haftka RT, Johnson TF (1988) Design of shell structures for buckling using correction response surface approximations. In: Proceedings of the 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization. St. Louis, MO, AIAA 98-4855, September 2–4

Vitali R, Haftka RT, Sankar BV (1998) Correction response surface approximations for stress intensity factors of a composite stiffened plate. In: Proceedings of the 39th AIAA/ASME/ASCE/AHS/ASC structural dynamics and materials conference. Long Beach, CA, AIAA-98-2047, pp 2917–2922, 20–23 April

Wang GG, Shan S (2007) Review of metamodeling techniques in support of engineering design optimization. J Mech Des Trans ASME 129(4):370–380

Wang BP, Yang RJ (2000) Structural optimization using moving least squares approximation. In: Proceedings of the 24th national mechanics conference. CYCU, Chungli, Taiwan, 9–10 Dec 2000

Zadeh PM, Toropov VV (2002) Multi-fidelity multidisciplinary design optimization based on collaborative optimization framework. In: AIAA 2002-5504, Proceedings of the 9th AIAA/ISSMO symposium on multidisciplinary analysis and optimization. Atlanta, GA

Zadeh PM, Toropov VV, Wood AS (2004) Use of global approximations in the collaborative optimization framework. In: AIAA 2004-4654, Proceedings of the 10th AIAA/ISSMO multidisciplinary analysis and optimization conference. Albany, NY

Zadeh PM, Toropov VV, Wood AS (2005) Use of MLSM in collaborative optimization. In: Paper 6411, Proceedings of the 6th world congress of structural and multidisciplinary optimization. Rio de Janeiro, Brazil, 30 May–3 June