Dynamic Neural Network Control (DNNC): A Non-Conventional Neural Network Model

Post on 12-Feb-2016


Dynamic Neural Network Control (DNNC): A Non-Conventional Neural Network Model

Masoud Nikravesh

EECS Department, CS Division

BISC Program

University of California

Berkeley, California

Abstract:

In this study, a Dynamic Neural Network Control (DNNC) methodology for model identification and control of nonlinear processes is presented. The methodology combines several techniques: the DNNC network structure, neuro-statistical techniques (neural networks coupled with non-parametric statistics such as ACE, Alternative Conditional Expectation), a model-based control strategy, and stability analysis based on Lyapunov theory. The DNNC model is used because it is much easier to update and adapt the network on-line. In addition, this technique in conjunction with the Levenberg-Marquardt algorithm provides a more robust approach to network training and optimization. The ACE technique is used to scale the network's input-output data and can also be used to find the input structure of the network. The result from Lyapunov theory is used to find the optimal neural network structure. In addition, a special neural network structure is used to ensure the stability of the network for long-term prediction. In this model, the current information from the input layer is presented to a pseudo hidden layer. The model minimizes not only the conventional error in the output layer but also the filtered value of the output. This technique trades off the accuracy of the actual and filtered predictions, which results in a stable long-term prediction from the network model. Even though it is clear that DNNC will perform better than PID control, it is useful to compare PID with DNNC to illustrate the extreme range of nonlinearity of the processes used in this study. The integration of DNNC with shortest-prediction-horizon nonlinear model-predictive control is a strong candidate for control of highly nonlinear processes, including biochemical reactors.

References:

1. M. Nikravesh, A. E. Farell, T. G. Stanford, "Control of nonisothermal CSTR with time varying parameters via dynamic neural network control (DNNC)", Chemical Engineering Journal, vol. 76, 2000, pp. 1-16.

2. M. Nikravesh, "Artificial neural networks for nonlinear control of industrial processes", in Nonlinear Model Based Process Control, edited by Ridvan Berber and Costas Kravaris, NATO Advanced Science Institute Series, vol. 353, Kluwer Academic Publishers, 1998, pp. 831-870.

3. S. Valluri, M. Soroush, and M. Nikravesh, "Shortest-prediction-horizon nonlinear model-predictive control", Chemical Engineering Science, vol. 53, no. 2, pp. 273-292, 1998.


Dynamic Neural Network Control (DNNC)

1. Introduction

2. Theory

3. Applications and Results

4. Conclusions

5. Future Works

[Figure: DNNC network structure. Inputs u(k), u(k-1), ..., u(k-M+1), ..., u(k-N+1) and ym(k) feed hidden nodes through weights W11, W12, ..., W1M, ..., W1N, W1(N+1) with biases B1, B2, ..., BM, ..., BN; the output y(k+1) is formed through weight W2.]

IMC

[Figure: Internal Model Control (IMC) structure. Controller Q and plant P with the internal model P in parallel; setpoint ysp, filtered error e', control u, output y, disturbance d, filter input w.]

Modified IMC, Zheng et al. (1994), to address integral windup.

[Figure: modified IMC structure with controller blocks Q1 and Q2, plant P, internal model P, and signals ysp, e', u, y, d, w1, w2, w.]

Non-linear model state-feedback control structure, x = f(x) + g(x)u.

[Figure: state-feedback structure with controller blocks Q'1 and Q'2, block h(x(t-)), plant P, and signals ysp, e', u, y, d, w'1, w'2, w.]

On-Line Adaptation

[Figure: on-line adaptation scheme. Setpoint trajectory, filter, controller, and process P, with the model in the feedback path; shown for pressure control (manipulated variable q) and for CSTR effluent concentration CA (manipulated variable qc).]

Model Predictive Control

y(k+j) = Σ_{i=1..j} a_i Δu(k-i+j) + y*(k+j) + d(k+j)

y*(k+j) = y°(k+j) + Σ_{i=j+1..N-1} a_i Δu(k-i+j)

y°(k+j) = a_N u(k-N+j)

d(k) = y_m(k) - y(k)

[Figure: unit step response y vs. time, showing the step-response coefficients a_i.]

where
k = discrete time
y(k) = model output
Δu(k) = change in the input (manipulated variable), defined as u(k) - u(k-1)
d(k) = unmodelled disturbance effects on the output
a_i = unit step-response coefficients
N = number of time intervals needed to describe the process dynamics (note: a_i = a_N for i ≥ N)
y_m(k) = current feedback measurement
y*(k+j) = predicted output at k+j due to input moves up to k.

In the absence of any additional information, it is assumed that d(k+j) = d(k).
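As an illustrative sketch of the prediction equations above (the step-response coefficients, input moves, and disturbance value below are made up, not taken from the study):

```python
import numpy as np

def dmc_predict(a, du, u, d):
    """One-step prediction y(k+1) from a unit step-response model.

    a  : step-response coefficients [a_1, ..., a_N]
    du : past input moves [Δu(k), Δu(k-1), ..., Δu(k-N+2)]  (length N-1)
    u  : the oldest retained input u(k-N+1)
    d  : disturbance estimate d(k+1) = y_m(k) - y(k)
    """
    assert len(du) == len(a) - 1
    return float(np.dot(a[:-1], du) + a[-1] * u + d)

# Made-up first-order-like step response: a_i = 1 - 0.5**i, N = 5
a = 1.0 - 0.5 ** np.arange(1, 6)
du = [0.2, 0.0, 0.0, 0.0]          # one recent input move
y1 = dmc_predict(a, du, u=1.0, d=0.05)
```

At steady state (all moves zero) the prediction collapses to a_N·u + d, i.e. the process gain times the held input plus the disturbance estimate.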

Stacking the predictions over a horizon of P samples gives the matrix form

[y(k+1)  y(k+2)  ...  y(k+P)]ᵀ - d(k) 1 = U a    (13)

where a = [a_1 a_2 ... a_N]ᵀ, 1 is a vector of ones, and U is the P×N matrix whose j-th row collects the input moves [Δu(k-1+j) Δu(k-2+j) ... Δu(k-N+1+j) u(k-N+j)]. The deviation from the initial response is

Δy(k+j) = y(k+j) - y°(k+j).

The Backpropagation Neural Network

For a single-layer network with P input-output patterns,

z(i) = x_1(i) w_1 + x_2(i) w_2 + ... + x_N(i) w_N + θ,   for i = 1, ..., P,

or in matrix form, stacking the patterns into the P×N matrix X,

z = X w + θ 1,   y = F_1(z),

where F_1 is the transfer function. Absorbing the bias into the weight vector,

z = X̄ w̄,   w̄ = [θ | wᵀ]ᵀ,   X̄ = [1 | X].

Comparing DMC with the neural network, the following analogy may be drawn:

y ↔ z,   U ↔ X,   a ↔ w,   d ↔ θ,

so the step-response identification y - d 1 = U a is exactly the training of a linear single-layer network whose weights are the step-response coefficients and whose bias is the disturbance.
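The analogy can be checked numerically: fitting the step-response coefficients by least squares is the same computation as training the linear single-layer network with the disturbance as its bias. The coefficients and input moves below are synthetic, not from the study.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 4, 50                                # coefficients, data points
a_true = np.array([0.5, 0.75, 0.875, 0.9375])
d_true = 0.1                                # disturbance plays the role of the bias

U = rng.normal(size=(P, N))                 # matrix of input moves (made up)
y = U @ a_true + d_true                     # y - d 1 = U a

Xbar = np.hstack([np.ones((P, 1)), U])      # X̄ = [1 | X], bias column of ones
wbar, *_ = np.linalg.lstsq(Xbar, y, rcond=None)
# wbar[0] recovers d, wbar[1:] recovers a -- the "network weights"
```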

The DNNC Process Model

For a one-step-ahead prediction,

y(k+1) = y(k) + a_1(k) Δu(k) + d(k+1),   e(k+1) = a_1(k) Δu(k),

and over the full step-response horizon

y(k+1) = a_1(k) Δu(k) + a_2(k) Δu(k-1) + a_3(k) Δu(k-2) + ... + a_{N-1}(k) Δu(k-N+2) + a_N(k) u(k-N+1) + d(k+1),

with the disturbance

d(k+1) = y_m(k+1) - y(k+1)

estimated at time k from the most recent measurement, d(k+1) ≈ d(k) = y_m(k) - y(k).

The DNNC Process Model

y(k+1) = a_1(k) Δu(k) + a_2(k) Δu(k-1) + ... + a_{N-1}(k) Δu(k-N+2) + a_N(k) u(k-N+1) + y_m(k) + d(k+1)
       = a(k)ᵀ uy(k) + d(k+1),

or, in general functional form,

y(k+1) = g(uy(k), a(k)),

where uy(k) collects the past input moves, the oldest retained input, and the current measurement y_m(k).

State-Space Representation of DNNC

With a single hidden node the model is

y(k+1) = w_2 σ(w_1ᵀ uy + B_1) + B_2

uy = [Δu(k) Δu(k-1) ... Δu(k-N+2) u(k-N+1) y_m(k)]ᵀ

w_1 = [w_{1,1} w_{1,2} ... w_{1,N} w_{1,N+1}]ᵀ

with the hyperbolic-tangent activation σ(z) = (e^z - e^{-z})/(e^z + e^{-z}). Written in terms of the inputs themselves rather than the input moves,

y(k+1) = w_2 σ(w̃_1ᵀ uy + B_1) + B_2

uy = [u(k) u(k-1) ... u(k-N+2) u(k-N+1) y(k)]ᵀ

w̃_1 = [w_{1,1} (w_{1,2} - w_{1,1}) ... (w_{1,N} - w_{1,N-1}) w_{1,N+1}]ᵀ.
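A minimal sketch of the one-hidden-node model above; the weights and inputs are made-up values for illustration (in the study they are identified by training), with N = 3:

```python
import numpy as np

def dnnc_model(w1, B1, w2, B2, uy):
    """One-hidden-node DNNC model: y(k+1) = w2 * tanh(w1 . uy + B1) + B2."""
    return w2 * np.tanh(np.dot(w1, uy) + B1) + B2

# Made-up weights for N = 3: uy = [u(k), u(k-1), u(k-2), y(k)]
w1 = np.array([0.6, 0.2, 0.1, 0.4])
B1, w2, B2 = 0.0, 1.5, 0.2

uy = np.array([0.5, 0.4, 0.3, 0.1])
y_next = dnnc_model(w1, B1, w2, B2, uy)
```

Because tanh saturates, the single hidden node bounds the one-step prediction to the band B2 ± |w2|, which is part of what keeps long-term prediction stable.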

x(k+1) = f(x(k)) + g(x(k), u(k)),
y(k+1) = h(x(k), u(k)),

with the state vector

x(k) = [x_1(k) x_2(k) ... x_{N-1}(k) x_N(k)]ᵀ = [u(k-1) u(k-2) ... u(k-N+1) y(k)]ᵀ,

so that

x_1(k+1) = u(k)
x_2(k+1) = x_1(k)
...
x_{N-1}(k+1) = x_{N-2}(k)
x_N(k+1) = h(x(k), u(k)).

State-Space Representation of DNNC

f(x(k)) = [0 x_1(k) x_2(k) ... x_{N-2}(k) 0]ᵀ
g(x(k), u(k)) = [u(k) 0 0 ... 0 h(x(k), u(k))]ᵀ
h(x(k), u(k)) = w_2 σ(w̃_1ᵀ uy + B_1) + B_2.

Stability of the DNNC Process Model

The Jacobian of the state map has the companion structure

J = | 0 0 ... 0 0 |
    | 1 0 ... 0 0 |
    | 0 1 ... 0 0 |
    | ...         |
    | 0 0 ... 1 0 |

with last row [∂y(k+1)/∂x_1  ∂y(k+1)/∂x_2  ...  ∂y(k+1)/∂x_N], where

∂y(k+1)/∂x_j = w_2 [1 - σ²(w̃_1ᵀ uy + B_1)] (w_{1,j+1} - w_{1,j}),   j = 1, ..., N-1,
∂y(k+1)/∂x_N = ∂y(k+1)/∂y(k) = w_2 [1 - σ²(w̃_1ᵀ uy + B_1)] w_{1,N+1},

using σ' = 1 - σ². The model is stable for long-term prediction when the eigenvalues of J remain inside the unit circle.
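The eigenvalue condition can be sketched numerically by building the companion-form Jacobian at an operating point; the weights and operating point below are illustrative, not from the study:

```python
import numpy as np

def jacobian(w1t, B1, w2, uy):
    """Companion-form Jacobian of the DNNC state map at operating point uy.

    w1t : weights [w~_1,1, ..., w~_1,N+1] in the u-form of the model
    uy  : [u(k), u(k-1), ..., u(k-N+1), y(k)]
    """
    N = len(w1t) - 1
    s = np.tanh(np.dot(w1t, uy) + B1)
    dh = w2 * (1.0 - s**2) * w1t[1:]       # partials of h w.r.t. x_1..x_N
    J = np.zeros((N, N))
    J[1:N-1, 0:N-2] = np.eye(N - 2)        # shift rows: x_{j+1}(k+1) = x_j(k)
    J[-1, :] = dh                          # last row: partials of h
    return J

w1t = np.array([0.6, -0.4, 0.1, 0.3])      # made-up N = 3 model
uy = np.array([0.2, 0.1, 0.0, 0.5])
lam = np.linalg.eigvals(jacobian(w1t, 0.0, 1.2, uy))
stable = np.all(np.abs(lam) < 1.0)         # inside the unit circle?
```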

DNNC Controller

Because the model has a single hidden node, it can be inverted exactly for the current control move:

u(k) = [σ⁻¹((y_set(k+1) - B_2)/w_2) - B_1 - w_{1C}ᵀ uy_C] / w̃_{1,1},

where w_{1C} collects the remaining entries of w̃_1 and

uy_C = [u(k-1) u(k-2) ... u(k-N+2) u(k-N+1) y(k)]ᵀ.

The target corrects the setpoint for the disturbance estimate,

y_set(k+1) = y_sp(k+1) - d(k+1),   d(k+1) = y_m(k) - y(k),

and the inverse of the hyperbolic tangent is σ⁻¹(z) = 0.5 ln[(1+z)/(1-z)].
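A sketch of this exact inversion, using arctanh as σ⁻¹ (np.arctanh(z) equals 0.5·ln[(1+z)/(1-z)]); the weights are the same made-up values used above, not the study's:

```python
import numpy as np

def dnnc_control(w1t, B1, w2, B2, uyC, y_set):
    """Invert the one-hidden-node model for the current input u(k).

    w1t : [w~_1,1, ..., w~_1,N+1]; uyC : [u(k-1), ..., u(k-N+1), y(k)]
    """
    z = (y_set - B2) / w2                  # must lie in (-1, 1) for arctanh
    return (np.arctanh(z) - B1 - np.dot(w1t[1:], uyC)) / w1t[0]

w1t = np.array([0.6, -0.4, 0.1, 0.3])      # made-up N = 3 model
B1, w2, B2 = 0.0, 1.2, 0.0
uyC = np.array([0.1, 0.0, 0.5])

u = dnnc_control(w1t, B1, w2, B2, uyC, y_set=0.3)
# feeding u back through the model reproduces the target exactly
y_check = w2 * np.tanh(w1t[0] * u + np.dot(w1t[1:], uyC) + B1) + B2
```

Note the reachability constraint the saturation imposes: targets outside B2 ± |w2| have no inverse.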

Stability of the DNNC Process Model

Substituting the controller into the state equations gives the closed-loop system

x(k+1) = R x(k) + b(k, y_set),

where the first row of R contains the controller gains -w̃_{1,2}/w̃_{1,1}, -w̃_{1,3}/w̃_{1,1}, ..., -w̃_{1,N+1}/w̃_{1,1}, the middle rows form the delay-shift structure

| 1 0 ... 0 0 |
| 0 1 ... 0 0 |
| ...         |
| 0 0 ... 1 0 |

the last row comes from the model map h, and b(k, y_set) carries the setpoint-dependent term [σ⁻¹((y_set(k+1) - B_2)/w_2) - B_1]/w̃_{1,1}. The closed-loop Jacobian J inherits the same companion structure, and the closed loop is stable when its eigenvalues lie inside the unit circle.

Extension of the DNNC Model to the MIMO Case in IMC Framework

y(k+1) = w_2 F(W_1 uy(k) + B_1) + B_2

y(k+1) = [y^(1)(k+1) y^(2)(k+1) ... y^(j)(k+1)]ᵀ
uy = [uy^(1)(k) uy^(2)(k) ... uy^(j)(k)]ᵀ
uy^(j) = [u^(j)(k) u^(j)(k-1) ... u^(j)(k-N+1) y_m^(j)(k)]ᵀ

W_1 = | w_{1,1} w_{1,2} ... w_{1,j} |
      | w_{2,1} w_{2,2} ... w_{2,j} |
      | ...                         |
      | w_{j,1} w_{j,2} ... w_{j,j} |

with each block w_{i,j} = [w_{i,1} w_{i,2} ... w_{i,N} w_{i,N+1}]ᵀ.

Neuro-Statistical Model

The network weights are updated with a Levenberg-Marquardt step,

ΔW = (Jᵀ J + μ I)⁻¹ Jᵀ e,

generalized with a weighting matrix V constructed from the error statistics,

ΔW = (Jᵀ V J + μ I)⁻¹ Jᵀ V e,   V_{ij} = (1/m) Σ_{k=1..m} e_i(k) e_j(k),

which reduces to the standard update for V = I. The weights are then updated as W(k+1) = W(k) + ΔW(k).
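A minimal numerical sketch of this update; J, e, and μ below are made up (in the study J is the Jacobian of the network errors with respect to the weights):

```python
import numpy as np

def lm_step(J, e, mu, V=None):
    """ΔW = (Jᵀ V J + μI)⁻¹ Jᵀ V e; V = I gives the standard LM update."""
    if V is None:
        V = np.eye(len(e))
    A = J.T @ V @ J + mu * np.eye(J.shape[1])
    return np.linalg.solve(A, J.T @ V @ e)

rng = np.random.default_rng(1)
J = rng.normal(size=(20, 3))    # error Jacobian: 20 samples, 3 weights (made up)
e = rng.normal(size=20)         # current error vector (made up)
dW = lm_step(J, e, mu=0.1)
```

The damping term μI keeps the normal-equations matrix well conditioned, which is what makes the method more robust than plain Gauss-Newton for network training.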

[Figure: distributions of the minimum, mean, and maximum predictions, with Mean ± Std(Mean), Min + Std(Min), and Max - Std(Max) bands, and with the corresponding ± 2·Std bands.]

[Figure: neural network prediction vs. actual (0.0-1.0) and vs. data points (0-20), each panel showing Actual, Mean, Mean+Std, Mean-Std, Upper, and Lower curves.]

[Figure: Actual, Upper, Lower, Mean, and MPV predictions over data points 0-20.]

[Figure: neural network model for wellhead pressure. Inputs F(k), ..., F(k-4) and P(k), ..., P(k-4) predict P(k+1); the DNNC structure uses a nonlinear transfer function in the hidden layer and a linear transfer function at the output, with delay elements Z(-1), ..., Z(-5) in the variant predicting Pc(k+1). The network carries weights W11, ..., W1N, W1(N+1), biases B1, ..., BN, and output weight W2.]

[Figure: wellhead pressure, psi (720-820) vs. sampling time (0-750): actual data vs. neural network prediction for the model with inputs F(k), ..., F(k-4) and P(k), ..., P(k-4); a second panel (720-760 psi) overlays the actual data (solid line) and the network prediction (thin line).]

[Figure: typical multi-layer DNNC process model. Inputs u(k), ..., u(k-M+1), ..., u(k-N+1) and ym(k); hidden weights W11, ..., W1N, W1(N+1) with biases B1, ..., BN; output y(k+1) through W2; shown with delay elements Z(-1), ..., Z(-5) for the wellhead pressure application (P(k+1), Pc(k+1)).]

Alternative Conditional Expectation

ACE finds non-parametric transformations Φ(X) of the inputs and θ(Y) of the output that maximize the correlation between the transformed variables θ(Y) and Φ(X).
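The ACE idea can be sketched with a toy alternating loop that uses binned conditional means as the smoother; this illustrates the principle only and is not the implementation used in the study:

```python
import numpy as np

def ace_1d(x, y, bins=10, iters=20):
    """Toy ACE: alternate phi(x) = E[theta(y)|x] and theta(y) = E[phi(x)|y],
    normalizing theta to unit variance, with binned means as the smoother."""
    def cond_mean(z, target, bins):
        edges = np.quantile(z, np.linspace(0, 1, bins + 1)[1:-1])
        idx = np.digitize(z, edges)
        out = np.empty_like(target)
        for b in np.unique(idx):
            out[idx == b] = target[idx == b].mean()
        return out

    theta = (y - y.mean()) / y.std()
    for _ in range(iters):
        phi = cond_mean(x, theta, bins)      # phi(x) = E[theta | x]
        theta = cond_mean(y, phi, bins)      # theta(y) = E[phi | y]
        theta = (theta - theta.mean()) / theta.std()
    return theta, phi

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 500)
y = x**2 + 0.05 * rng.normal(size=500)       # nonlinear, near-zero linear corr
theta, phi = ace_1d(x, y)
r = np.corrcoef(theta, phi)[0, 1]            # correlation after transformation
```

Even though x and y are almost uncorrelated linearly, the transformed pair (θ(y), Φ(x)) becomes strongly correlated, which is what makes ACE useful for scaling the network's input-output data.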

[Figure: ACE example. Panels show z vs. x, Phi(x) vs. x, z vs. Phi(x), z vs. y, Phi(y) vs. y, and z vs. Phi(y), with neural network predictions compared for a conventional fit (200 epochs, 10 hidden nodes) against a fit on ACE-transformed data (5 epochs, 1 hidden node), plus predicted vs. actual values.]

[Figure: network with ACE preprocessing. Input-output data and previous information enter the input layer; the ACE transformation (Phi) scales the data before the hidden and output layers produce the prediction; predicted vs. actual values are shown alongside.]


[Figure: DNNC loop for the CSTR. Setpoint trajectory, filter, controller, and process; controlled variable CA, manipulated variable qc; the model output feeds back through the filter.]

dCA/dt = (q/V)(CAf - CA) - k0 c(t) exp(-E/RT) CA

dT/dt = (q/V)(Tf - T) + (-ΔH/(ρ Cp)) k0 c(t) exp(-E/RT) CA + (ρc Cpc/(ρ Cp V)) qc [1 - exp(-h(t)A/(qc ρc Cpc))] (Tcf - T)

h(t): fouling coefficient
c(t): deactivation coefficient
CA: effluent concentration, the controlled variable
qc: coolant flow rate, the manipulated variable
q: feed flow rate, the disturbance
CAf: feed concentration
Tf: feed temperature
Tcf: coolant inlet temperature

The heat-transfer coefficient decays linearly, h(t) = (1 - σ_h t) h0, and the catalyst deactivates exponentially, c(t) = exp(-σ_c t), so the effective rate constant is k c(t) = k0 exp(-E/RT(t)) c(t), with σ_h and σ_c the fouling and deactivation rates.
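The two balances above can be sketched as an ODE right-hand side for simulation; the parameter names and numeric values below are illustrative placeholders, not the values used in the study:

```python
import numpy as np

def cstr_rhs(t, s, qc, p):
    """Right-hand side of the nonisothermal CSTR balances; s = [CA, T]."""
    CA, T = s
    k = p['k0'] * np.exp(-p['E_R'] / T) * np.exp(-p['sig_c'] * t)  # c(t) decay
    h = (1.0 - p['sig_h'] * t) * p['h0']                            # fouling
    heat_xfer = (p['rc_cpc'] / (p['r_cp'] * p['V'])) * qc * \
        (1.0 - np.exp(-h * p['A'] / (qc * p['rc_cpc']))) * (p['Tcf'] - T)
    dCA = p['q'] / p['V'] * (p['CAf'] - CA) - k * CA
    dT = p['q'] / p['V'] * (p['Tf'] - T) + p['dH_rcp'] * k * CA + heat_xfer
    return np.array([dCA, dT])

# Illustrative parameters (dH_rcp stands for -ΔH/(ρ Cp); all values made up)
p = dict(k0=7.2e10, E_R=1e4, sig_c=0.0, sig_h=0.0, h0=7e5, A=1.0,
         rc_cpc=1e3, r_cp=1e3, V=100.0, q=100.0, CAf=1.0,
         Tf=350.0, Tcf=350.0, dH_rcp=200.0)
ds = cstr_rhs(0.0, np.array([0.08, 440.0]), qc=100.0, p=p)
```

A standard ODE integrator can then march this right-hand side forward, with qc as the manipulated input driven by the DNNC controller.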

Process Model

[Figure: H-Injector: injection rate, BCW/day (110-140) vs. sampling time (0-3500), actual vs. network prediction. LR-Injector: injection rate, BCW/day (25-65) vs. sampling time (0-3500), actual vs. network prediction.]

Conclusions

• The DNNC strategy differs from previous neural network controllers in that the network structure is very simple, with a limited number of nodes in the input and hidden layers.

• As a result of this simplicity, DNNC is easier to design and implement than other control strategies such as conventional and hybrid neural networks.

• In addition to offering a better initialization of the network weights, the inverse of the process model is exact and involves no approximation error.

• DNNC's ability to model nonlinear process behavior does not appear to suffer as a result of its simplicity.

• For the nonlinear, time-varying case, the performance of DNNC was compared to a PID control strategy.

• Compared with PID control, DNNC showed significant improvement, with a faster response toward the setpoint in the servo problem.

• The DNNC strategy is also able to reject unmodeled disturbances more effectively.

• DNNC showed excellent performance in controlling complex processes in regions where the PID controller failed.

• The DNNC control strategy has been shown to be robust enough to perform well over a wide range of operating conditions.


Future Works

• Integration of DNNC with shortest-prediction-horizon nonlinear model-predictive control.

• Removing the assumptions regarding the uncertainty in the inputs and outputs.

• Use of fuzzy logic techniques.
