One-Step Problems of Optimization of Chemical Engineering Systems under Soft Constraints

G. M. Ostrovskii^a, N. N. Ziyatdinov^b, T. V. Lapteva^b, and D. D. Pervukhin^b

Presented by Academician Yu.D. Tret’yakov October 3, 2008
Received October 3, 2008

DOI: 10.1134/S0012500809030033

^a Karpov Institute of Physical Chemistry, ul. Vorontsovo pole 10, Moscow, 103064 Russia
^b Kazan State Technological University, ul. Karla Marksa 68, Kazan, 420015 Russia

ISSN 0012-5008, Doklady Chemistry, 2009, Vol. 425, Part 1, pp. 64–67. © Pleiades Publishing, Ltd., 2009. Original Russian Text © G.M. Ostrovskii, N.N. Ziyatdinov, T.V. Lapteva, D.D. Pervukhin, 2009, published in Doklady Akademii Nauk, 2009, Vol. 425, No. 1, pp. 63–66.


Computer modeling has become an integral part of the design of chemical engineering systems. Its purpose is to determine the optimal design of a chemical engineering system in terms of an economic criterion while guaranteeing that the design requirements, namely, various safety, environmental, productivity, etc., constraints, are satisfied. This problem is solved under uncertainty in the initial physicochemical, technological, and economic information; therefore, chemical engineering systems are designed using inexact mathematical models. Moreover, during the operation of a chemical engineering system, its internal characteristics and external conditions often change. Thus, it is necessary to solve the problem of designing a flexible chemical engineering system, one that guarantees both the optimal value of an operation criterion and high performance (satisfaction of all design constraints) throughout the lifetime of the system, despite the use of inexact mathematical models and changes in internal and external factors. Constraints may be hard or soft. Constraints are called hard if they must necessarily be satisfied throughout the lifetime of the system at any values of the uncertain parameters; violation of such constraints may cause emergencies, harm the environment, etc. Soft constraints need be satisfied only with a given probability or on average. Methods for solving problems under hard constraints are well known [1, 2]. Here, we propose a method for solving a one-step optimization problem under soft constraints.

Let us consider an approximate method for solving a one-step optimization problem in which the objective function is the mathematical expectation of the initial optimization criterion f(x, θ) and each constraint must be met with a certain probability:

f^* = \min_{x \in X} E_\theta[f(x, \theta)],   (1)

\Pr\{g_j(x, \theta) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, m,   (2)

where x is the vector of search variables, X is the feasible domain of x, θ is the vector of uncertain parameters (θ ∈ T), α_j is a given probability that the jth constraint is satisfied, E_θ[f(x, θ)] is the mathematical expectation of the function f(x, θ), and Pr{g_j(x, θ) ≤ 0} is the probability that the jth constraint is satisfied when the random vector θ takes values in the uncertainty region T = {θ_i : θ_i^L ≤ θ_i ≤ θ_i^U, i = 1, 2, …, n}.
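Both quantities in (1) and (2) are integrals over the distribution of θ. A minimal Monte Carlo sketch makes them concrete; the functions f and g and the distribution parameters below are hypothetical illustrations, not taken from this work:

```python
import random

# Hypothetical one-dimensional instance of problem (1)-(2):
#   f(x, theta) = (x - theta)^2,   g(x, theta) = theta*x - 1,
# with theta ~ N(MU, SIGMA^2). Both E[f] and Pr{g <= 0} are integrals
# over the distribution of theta; here they are estimated by sampling.

MU, SIGMA = 1.0, 0.2        # assumed moments of the uncertain parameter
N_SAMPLES = 200_000

def f(x, theta):
    return (x - theta) ** 2

def g(x, theta):
    return theta * x - 1.0

def estimate(x):
    """Sample-average estimates of E[f(x, theta)] and Pr{g(x, theta) <= 0}."""
    random.seed(0)
    thetas = [random.gauss(MU, SIGMA) for _ in range(N_SAMPLES)]
    e_f = sum(f(x, t) for t in thetas) / N_SAMPLES
    pr_g = sum(1 for t in thetas if g(x, t) <= 0.0) / N_SAMPLES
    return e_f, pr_g

e_f, pr_g = estimate(x=0.8)
# Exact values for this toy case: E[f] = (0.8 - MU)^2 + SIGMA^2 = 0.08 and
# Pr{g <= 0} = Phi((1/0.8 - MU)/SIGMA) = Phi(1.25), about 0.894.
```

The cost is one model evaluation per sample per constraint, which is what motivates replacing the probabilistic constraints by deterministic equivalents.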

The main difficulty in solving problems of type (1) is the necessity of calculating the multidimensional integrals E[f(x, θ)] and Pr{g_j(x, θ) ≤ 0}. Using quadrature formulas [3] for this purpose involves very laborious procedures. Therefore, a special quadrature formula was obtained [4], which significantly reduces calculation time for the case where the parameters θ are normally distributed random quantities. Monte Carlo methods for calculating the multidimensional integrals E[f(x, θ)] were also analyzed [5]. However, they also require rather laborious calculations. In this context, we propose an approach based on the transformation of probabilistic constraints into deterministic ones. We assume that all the uncertain parameters are independent, normally distributed random quantities. Let E[θ_i] and V[θ_i] be the mathematical expectation and the variance (σ_i)^2 (σ_i = \sqrt{V[θ_i]}) of the random quantity θ_i.

Let us initially discuss the case where g_j(x, θ) is a linear function of the parameters θ_i and the variables x_i:

g_j(x, \theta) = \theta_1 x_1 + \ldots + \theta_n x_n - \theta_{n+1}, \quad \theta_i \ge 0, \quad i = 1, 2, \ldots, n + 1.   (3)

The mathematical expectation E[y] and the variance V[y] of the random quantity

y = \theta_1 x_1 + \ldots + \theta_n x_n - \theta_{n+1}   (4)



are

E[y] = \sum_{i=1}^{n} E[\theta_i] x_i - E[\theta_{n+1}], \qquad V[y] = \sum_{i=1}^{n} V[\theta_i] x_i^2 + V[\theta_{n+1}],   (5)

respectively.

The variable y is normally distributed since all the θ_i are independent and normally distributed. Consider the random variable

\eta(x, \theta) \equiv \frac{\sum_{i=1}^{n} \theta_i x_i - \theta_{n+1} - \left(\sum_{i=1}^{n} E[\theta_i] x_i - E[\theta_{n+1}]\right)}{\sqrt{V[y]}}.

The random quantity η(x, θ) has a normalized normal distribution Φ(η) with zero mean and unit variance [6]. If g_j(x, θ) is linear function (4), then probabilistic constraint (2) is equivalent to the deterministic constraint [6]

\Phi(B(x)) \ge \alpha,   (6)

where

B(x) = -\frac{\sum_{i=1}^{n} E[\theta_i] x_i - E[\theta_{n+1}]}{\sqrt{V[y]}}.

This result is easy to generalize to the case where

g_j(x, \theta) = \theta_1 h_{j1}(x) + \ldots + \theta_p h_{jp}(x) - \theta_{p+1}.   (7)

In this case,

B_j(x) = -\frac{\sum_{i=1}^{p} E[\theta_i] h_{ji}(x) - E[\theta_{p+1}]}{\sqrt{V[y_j]}},   (8)

where V[y_j] is calculated by formula (5) with x_i replaced by h_{ji}(x).

Let us now analyze the general case where the function g_j(x, θ) is nonlinear and convex in θ. Expand the function g_j(x, θ) in a Taylor series in the vicinity of a certain point θ^{(l)} and retain only the first-order terms:

\bar{g}_j(x, \theta, \theta^{(l)}) = g_j(x, \theta^{(l)}) + \sum_{i=1}^{p} \frac{\partial g_j(x, \theta^{(l)})}{\partial \theta_i}\left(\theta_i - \theta_i^{(l)}\right).   (9)

Replace the functions g_j(x, θ) in constraints (2) by their approximations (9). Then, problem (1) takes the form

\bar{f} = \min_{x} E[f(x, \theta)],   (10)

\Pr\{\bar{g}_j(x, \theta, \theta^{(l,j)}) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, m.   (11)
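The linear-case equivalence (6) is easy to verify numerically: the closed-form value Φ(B(x)) must agree with a direct sampling estimate of Pr{g(x, θ) ≤ 0}. A sketch under assumed (hypothetical) means and variances, using the standard-library normal distribution:

```python
import random
from statistics import NormalDist

# Linear case (3)-(6): for g(x, theta) = theta_1*x_1 + ... + theta_n*x_n
# - theta_{n+1} with independent normal theta_i, the chance constraint
# Pr{g <= 0} >= alpha reduces to the deterministic test Phi(B(x)) >= alpha.
# All numbers below are illustrative assumptions.

E_THETA = [1.0, 2.0, 5.0]      # E[theta_1], E[theta_2], E[theta_{n+1}]
V_THETA = [0.04, 0.09, 0.25]   # the corresponding variances

def B(x):
    """B(x) = -E[y]/sqrt(V[y]) with E[y] and V[y] taken from formula (5)."""
    n = len(x)
    e_y = sum(E_THETA[i] * x[i] for i in range(n)) - E_THETA[n]
    v_y = sum(V_THETA[i] * x[i] ** 2 for i in range(n)) + V_THETA[n]
    return -e_y / v_y ** 0.5

x = [1.0, 1.5]
prob_exact = NormalDist().cdf(B(x))   # Pr{g(x, theta) <= 0} by formula (6)

# Cross-check by sampling y = theta_1*x_1 + theta_2*x_2 - theta_3 directly.
random.seed(1)
N = 200_000
hits = 0
for _ in range(N):
    y = sum(random.gauss(E_THETA[i], V_THETA[i] ** 0.5) * x[i] for i in range(2))
    y -= random.gauss(E_THETA[2], V_THETA[2] ** 0.5)
    hits += y <= 0.0
prob_mc = hits / N
```

With the numbers above, Φ(B(x)) is about 0.92, so the constraint would be accepted for any required probability α up to that value; one evaluation of Φ replaces the sampling loop entirely.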

Introduce the region

\Omega_j(\theta^{(l,j)}) = \left\{\theta : g_j(x, \theta^{(l,j)}) + \sum_{i=1}^{p} \frac{\partial g_j(x, \theta^{(l,j)})}{\partial \theta_i}\left(\theta_i - \theta_i^{(l,j)}\right) \le 0, \ \theta \in T\right\}.

Functions convex in θ meet the condition

g_j(x, \theta) \ge g_j(x, \theta^{(l,j)}) + \sum_{i=1}^{p} \frac{\partial g_j(x, \theta^{(l,j)})}{\partial \theta_i}\left(\theta_i - \theta_i^{(l,j)}\right), \quad \forall \theta \in T,   (12)

at any x. Hence,

\Omega_j \subset \Omega_j(\theta^{(l,j)}), \quad j = 1, 2, \ldots, m,   (13)

where Ω_j = {θ : g_j(x, θ) ≤ 0, θ ∈ T}. Consequently,

\Pr\{\theta \in \Omega_j(\theta^{(l,j)})\} \ge \Pr\{\theta \in \Omega_j\}, \quad j = 1, 2, \ldots, m.   (14)

Inequalities (14) imply the inequality \bar{f} \le f^*; therefore, the solution of problem (10) is a lower-bound estimate of the solution of problem (1) if the functions g_j(x, θ) are convex in θ. Transform the probabilistic constraints in problem (10) into deterministic constraints. The functions \bar{g}_j(x, θ, θ^{(l,j)}) in constraints (11) have form (7), where

h_{ji}(x, \theta^{(l,j)}) = \frac{\partial g_j(x, \theta^{(l,j)})}{\partial \theta_i}, \qquad \theta_{p+1}(x) = \sum_{i=1}^{p} \frac{\partial g_j}{\partial \theta_i}\,\theta_i^{(l,j)} - g_j(x, \theta^{(l,j)}).   (15)

In this case, θ_{p+1} is not a random quantity. Hence, using expressions (11), (6), (7), and (10), the probabilistic constraints in problem (10) can be transformed into deterministic ones. After this, problem (10) takes the form

\bar{f} = \min_{x} E[f(x, \theta)],   (16)

\left(\sum_{i=1}^{p}\left(\theta_i^{(l,j)} - E[\theta_i]\right)\frac{\partial g_j}{\partial \theta_i} - g_j(x, \theta^{(l,j)})\right)\left(\sum_{i=1}^{p} V[\theta_i]\left(\frac{\partial g_j}{\partial \theta_i}\right)^{2}\right)^{-1/2} \ge \Phi^{-1}(\alpha_j), \quad j = 1, 2, \ldots, m.   (17)

Let us consider the calculation of the mathematical expectation

E[f(x, \theta)] = \int_T f(x, \theta)\rho(\theta)\, d\theta,   (18)


where ρ(θ) is the probability distribution density. The mathematical expectation E[f(x, θ)] can be approximately calculated by the successive approximation method based on partitioning the region T into subregions T_l. Let the region T at the kth iteration be partitioned into p_k subregions T_l. In each of the subregions T_l, expand the function f(x, θ) in a Taylor series in the vicinity of a certain point θ^{(l)} and retain only the first-order terms:

\bar{f}(x, \theta, \theta^{(l)}) = f(x, \theta^{(l)}) + \sum_{i=1}^{p} \frac{\partial f(x, \theta^{(l)})}{\partial \theta_i}\left(\theta_i - \theta_i^{(l)}\right).   (19)

Let E_ap[f(x, θ); T_l] be the mathematical expectation of the function \bar{f}(x, θ, θ^{(l)}) in the subregion T_l. This mathematical expectation is a certain approximation of E[f(x, θ); T_l]. We have

E_{ap}[f(x, \theta); T_l] = a_l f(x, \theta^{(l)}) + \sum_{i=1}^{p} \frac{\partial f(x, \theta^{(l)})}{\partial \theta_i}\left(E[\theta_i; T_l] - a_l \theta_i^{(l)}\right),   (20)

where

a_l = \int_{T_l} \rho(\theta)\, d\theta, \qquad E[\theta_i; T_l] = \int_{T_l} \theta_i \rho(\theta)\, d\theta.   (21)

It is clear that a_l = 1 at T_l = T. If the function f(x, θ) is convex in θ at each x, then we obtain the inequality

f(x, \theta) \ge f(x, \theta^{(l)}) + \sum_{i=1}^{p} \frac{\partial f(x, \theta^{(l)})}{\partial \theta_i}\left(\theta_i - \theta_i^{(l)}\right), \quad \forall \theta \in T_l.   (22)

Hence, we have

E[f(x, \theta); T_l] \ge E_{ap}[f(x, \theta); T_l].   (23)

Thus, the approximation E_ap[f(x, θ); T_l] is a lower-bound estimate of E[f(x, θ); T_l] at any point θ^{(l)}.

Let us refine the approximation using the procedure of partitioning into subregions. Initially, partition the region T into p_k subregions T_1, T_2, …, T_{p_k}, where k is the number of an iteration. We have

E[f(x, \theta); T] = \int_{T_1} f(x, \theta)\rho(\theta)\, d\theta + \ldots + \int_{T_{p_k}} f(x, \theta)\rho(\theta)\, d\theta = \sum_{l=1}^{p_k} E[f(x, \theta); T_l].   (24)
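The lower-bound property (23) and the refinement scheme (24) can be illustrated in one dimension. In the sketch below, f(θ) = θ² is a hypothetical convex integrand, θ is a standard normal variable restricted to T = [−3, 3], and the quantities a_l and E[θ_i; T_l] of formula (21) are evaluated in closed form; refining the partition tightens the lower bound toward the exact expectation:

```python
import math

# One-dimensional illustration of (20)-(24): for a convex f (here f = theta^2,
# an illustrative choice) and standard normal rho on T = [-3, 3], the
# piecewise-linear approximation E_ap is a lower bound on E[f; T].

def Phi(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def f(theta):
    return theta ** 2

def df(theta):
    return 2.0 * theta

def e_ap(n_sub, lo=-3.0, hi=3.0):
    """E_ap[f; T] summed over n_sub subintervals, linearizing at centers."""
    h = (hi - lo) / n_sub
    total = 0.0
    for l in range(n_sub):
        a, b = lo + l * h, lo + (l + 1) * h
        a_l = Phi(b) - Phi(a)          # formula (21): probability mass of T_l
        e_theta_l = phi(a) - phi(b)    # formula (21): integral of theta*rho over T_l
        t_l = 0.5 * (a + b)            # linearization point theta^(l)
        total += a_l * f(t_l) + df(t_l) * (e_theta_l - a_l * t_l)  # formula (20)
    return total

# Exact E[f; T] for f = theta^2: integral of theta^2 * rho over [-3, 3].
e_exact = (Phi(3.0) - Phi(-3.0)) - (3.0 * phi(3.0) - (-3.0) * phi(-3.0))

bounds = [e_ap(n) for n in (1, 2, 4, 8, 16)]
# Every element of bounds stays below e_exact, and the gap shrinks as the
# partition is refined.
```

With a single subregion the tangent at θ = 0 gives the trivial bound 0; sixteen subregions already approach the exact value of about 0.971.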

Using approximation (20), we obtain

E_{ap}[f(x, \theta); T] = \sum_{l=1}^{p_k}\left(a_l f(x, \theta^{(l)}) + \sum_{i=1}^{p}\frac{\partial f(x, \theta^{(l)})}{\partial \theta_i}\left(E[\theta_i; T_l] - a_l \theta_i^{(l)}\right)\right).   (25)

It is easy to show that the equality a_1 + a_2 + … + a_{p_k} = 1 is valid.

The two basic features of the successive approximation algorithm are, first, the successive addition of points θ^{(k,j)}, j = 1, 2, …, m (k is the number of an iteration), at which the functions \bar{g}_j(x, θ, θ^{(l,j)}), j = 0, 1, …, m, are formed, and, second, the partition of the region T into subregions T_l.

The addition of points θ^{(k,j)} is used for improving the approximation of constraints (2). For each constraint, its own sequence of points is formed. On this basis, constraints (17) are constructed and added to the constraints formed at previous iterations. The partition of the region T into subregions T_l is used for improving the approximation of the multidimensional integral E[f(x, θ)]. At the kth iteration, to the jth constraint, the set S^{(k,j)} = {θ^{(1,j)}, θ^{(2,j)}, …, θ^{(N_{jk},j)}} of N_{jk} points θ^{(l,j)} corresponds. Let the region T at the kth iteration be partitioned into p_k subregions T_l. Consider such an iterative procedure that the following conditions are met as k → ∞:

N_{jk} → ∞ and the points θ^{(l,j)} uniformly cover the region T;

for all the points θ^{(l,j)}, the condition Pr{\bar{g}_j(x, θ, θ^{(l,j)}) ≤ 0} ≥ α_j, j = 1, 2, …, m, l = 1, 2, …, N_{jk}, is valid.

At each iteration, the following problem is solved:

f^{(k)} = \min_{x} E_{ap}[f(x, \theta)],   (26)

\Pr\{\bar{g}_j(x, \theta, \theta^{(l,j)}) \le 0\} \ge \alpha_j, \quad j = 1, 2, \ldots, m, \quad l = 1, 2, \ldots, N_{jk},   (27)

where N_{jk} is the number of the points θ^{(l,j)} that were accumulated at the previous, (k – 1)th, iteration. It was shown that probabilistic constraint (27) can be transformed into deterministic constraint (17). Consequently, problem (26) can be transformed into the deterministic problem

f^{(k)} = \min_{x} E_{ap}[f(x, \theta)]   (28)

subject to the deterministic constraints of form (17) written at all the points θ^{(l,j)}, l = 1, 2, …, N_{jk}, j = 1, 2, …, m,


where E_ap[f(x, θ)] is given by formula (25). Let x^{(k)} be the solution of this problem. Since inclusions (13) are valid, the following inequalities also hold:

\Pr\{\theta \in \Omega_j(\theta^{(l,j)})\} \ge \Pr\{\theta \in \Omega_j\}, \quad l = 1, 2, \ldots, N_{jk}, \quad j = 1, 2, \ldots, m.

Thus, the feasible domain of problem (26) is wider than the feasible domain of problem (1). Consequently, we have f^{(k)} ≤ f^*.

At each iteration, the following three main tasks are performed: solving problem (28); choosing the points θ^{(k+1,j)}, j = 1, 2, …, m, that should be added; and selecting a subregion to partition for improving approximation (25) of the objective function of problem (1).

Let us consider the second task. The points θ^{(k+1,j)}, j = 1, 2, …, m, are chosen so that, in the vicinity of the surface g_j(x, θ) = 0, the approximation of each function g_j(x, θ), j = 0, 1, …, m, by the function \bar{g}_j(x, θ, θ^{(l,j)}) is improved. For this purpose, the region T is partitioned into subregions T^{(l,j)}. To each function g_j(x, θ), its own partition corresponds. In each subregion T^{(l,j)}, a certain point is chosen as the point θ^{(l,j)}. Let the region T at the kth iteration have already been partitioned into N_{k,j} subregions T^{(l,j)}, l = 1, 2, …, N_{k,j}, for each function g_j(x, θ), j = 0, 1, …, m. At the kth iteration, partition is applied to the subregion in which the approximation of the function g_j(x, θ) by the function \bar{g}_j(x, θ, θ^{(l,j)}) in the vicinity of the surface g_j(x, θ) = 0 is the worst. The quality of approximation in the subregion T^{(l,j)} is estimated in terms of the quantity σ_{lj} obtained by solving the following problem:

\sigma_{lj} = \max_{\theta^{(l,j)},\, \theta} \|\theta - \theta^{(l,j)}\|^2,   (29)

g_j(x^{(k)}, \theta^{(l,j)}) = 0, \quad j = 1, 2, \ldots, m,   (30)

\bar{g}_j(x^{(k)}, \theta, \theta^{(l,j)}) = 0, \quad j = 1, 2, \ldots, m,   (31)

\theta^{(l,j)} \in T^{(l,j)}, \quad \theta \in T^{(l,j)},   (32)

where x^{(k)} is the solution of problem (28).
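The deterministic constraint (17) evaluated at a linearization point can itself be checked by sampling: Φ of the left-hand side of (17) equals the probability that linearization (9) is nonpositive. A sketch with a hypothetical convex g and illustrative moments:

```python
import random
from statistics import NormalDist

# Sketch of deterministic constraint (17): for a g convex in theta, the
# chance constraint on the linearization g_bar from (9) has a closed form.
# The function g and all numbers below are hypothetical illustrations.

E_T = [1.0, 2.0]      # E[theta_1], E[theta_2]
V_T = [0.04, 0.09]    # V[theta_1], V[theta_2]

def g(x, t):
    return t[0] ** 2 * x - t[1]       # convex in theta for x >= 0

def dg(x, t):
    return [2.0 * t[0] * x, -1.0]     # partial derivatives w.r.t. theta

def lhs_17(x, t_l):
    """Left-hand side of constraint (17) at linearization point t_l."""
    d = dg(x, t_l)
    num = sum((t_l[i] - E_T[i]) * d[i] for i in range(2)) - g(x, t_l)
    den = sum(V_T[i] * d[i] ** 2 for i in range(2)) ** 0.5
    return num / den

x, t_l = 1.0, [1.0, 2.0]
prob_lin = NormalDist().cdf(lhs_17(x, t_l))  # Pr{g_bar(x, theta, t_l) <= 0}

# Cross-check: sample the linearized constraint g_bar of (9) directly.
random.seed(2)
d = dg(x, t_l)
g0 = g(x, t_l)
N = 200_000
hits = 0
for _ in range(N):
    t = [random.gauss(E_T[i], V_T[i] ** 0.5) for i in range(2)]
    g_bar = g0 + sum(d[i] * (t[i] - t_l[i]) for i in range(2))
    hits += g_bar <= 0.0
prob_mc = hits / N
```

Both routes give the same probability (here Φ(2), about 0.977), so constraint (17) can be imposed at each accumulated point θ^{(l,j)} without any sampling inside the optimization loop.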

REFERENCES

1. Biegler, L.T., Grossmann, I.E., and Westerberg, A.W., Systematic Methods of Chemical Process Design, Upper Saddle River (N.J.): Prentice Hall, 1999.

2. Ostrovskii, G.M. and Volin, Yu.M., Dokl. Chem., 2001, vol. 376, nos. 1–3, pp. 38–41 [Dokl. Akad. Nauk, 2001, vol. 376, no. 2, pp. 215–218].

3. Bakhvalov, N.S., Zhidkov, N.P., and Kobel'kov, G.M., Chislennye metody (Numerical Methods), Moscow: Laboratoriya bazovykh znanii, 2000.

4. Bernardo, F.P., Pistikopoulos, E.N., and Saraiva, P.M., Ind. Eng. Chem. Res., 1999, vol. 38, p. 3056.

5. Diwaker, U.M. and Kalagnanam, J.R., AIChE J., 1997, vol. 43, pp. 440–447.

6. Kall, P. and Wallace, S.W., Stochastic Programming, Chichester: Wiley, 1994.