
Stochastic Analysis and Control of Real-time Systems with Random Time Delays ⋆

Johan Nilsson, Bo Bernhardsson, and Björn Wittenmark
Department of Automatic Control
Lund Institute of Technology
Box 118, S-221 00 Lund, Sweden
Email: [email protected]

Abstract

This paper discusses modeling and analysis of real-time systems subject to random time delays in the communication network. A new method for analysis of different control schemes is presented. The method is used to evaluate different suggested schemes from the literature. A new scheme, using so called timestamps, for handling the random time delays is then developed and successfully compared with previous schemes. The new scheme is based on stochastic control theory, and a separation property is shown to hold for the optimal controller.

Key words: Delay compensation, distributed computer control systems, real-time systems, stochastic control, stochastic parameters, timing jitter.

1 Introduction

Many real-time systems are implemented as distributed control systems, where the control loops are closed over a communication network or a field bus. There will inevitably be time delays in the communication net. As long as the sampling periods are long compared with these delays, there is no need to consider the influence of the delays. As the demands on the control system increase, it will become more and more important to take the delays into account in the analysis and the design of the control system. While inaccuracies, disturbances, etc., have been extensively studied in the control literature, the timing problems in

⋆ An earlier version of this paper, [11], was presented at the 13th International Federation of Automatic Control World Congress, San Francisco, June 30 to July 5, 1996.

Preprint submitted to Elsevier Preprint, 13 June 1997


real-time systems have only recently attracted attention, and in the communication literature the feedback control aspect has not been treated to any larger extent. This is thus an area where much can be gained by combining ideas from the fields of control, real-time systems, and communication networks.

Several problem formulations have been suggested by previous authors. A general setup would involve a distributed control structure, where communication between different control system nodes is achieved over a communication network. Often a centralized controller is used, and the actuators and sensors communicate with the controller(s) via a bus. The effect of communication delays is discussed, for instance, in [13], [15], and [9]. For an introduction to the area with remarks on open problems, see [16].

We will analyze a simple structure with just one controller and one process, connected as in Figure 1. A number of previous authors have suggested such a control scheme with slightly different setups. One can consider several cases, depending on how the actuator, sensor, and controller nodes are synchronized. The scheme we choose to analyze has a time-driven sensor node sampled at a constant rate and event-driven controller and actuator nodes. This means that a transmitted signal is used as soon as it arrives at the controller or actuator node.

There are essentially three kinds of computer delays in such a system:

- Communication delay between the sensor and the controller, $\tau^{sc}$
- Computational delay in the controller, $\tau^{c}$
- Communication delay between the controller and the actuator, $\tau^{ca}$

The control delay for the control system, in principle, equals the sum of these delays. In this paper we will look at the influence of the delays $\tau^{sc}$ and $\tau^{ca}$. The effect of $\tau^{c}$ can be embedded in $\tau^{ca}$.

In the article we will compare with a controller structure where buffers are used at the controller node and at the actuator node. This is a scheme suggested by [10]. In [9] a model of the time delays based on a Markov chain is used. Necessary and sufficient conditions for zero-state mean-square exponential stability are found for this system. A setup with time-driven sensor and controller nodes is studied in [14]. A suboptimal LQG-like controller is also derived in that paper. Another setup with time-driven sensor and controller nodes is studied in [4]. In the concluding example we will also compare with the standard LQG-controller that neglects the effects of the time delays.

The essential problem in the analysis is that the time delays vary in a random fashion. In this paper we will formulate and solve a problem in which the performance of such systems can be analyzed. The methodology is based on a stochastic description of the variations of the delays. The influence of


the stochastic variation of the time delays can be computed under different assumptions. Formulas suited for numerical calculations are derived in the paper. The main assumption in this paper is that the time delays are statistically mutually independent. This restriction is discussed in the paper and will be the topic of future research.

A new, improved scheme is also presented. It is based on optimal stochastic control. It is assumed that the probability distributions of the time delays are known and that the control and measurement signals are supplemented with so called "timestamps", the time when the signal was generated. This means that the controller can obtain information about the lengths of previous time delays. The control law satisfies a separation property when the time delays are uncorrelated. A suboptimal scheme is also suggested. This scheme requires less computation but seems to give results close to optimal.

The theoretical results are verified using Monte Carlo simulations. The numerical results support the theoretical claims and show the advantage of exact computation of the influence of random time delays.

Fig. 1. Distributed digital control system with induced delays. (The figure shows sensor, controller, and actuator nodes connected through a network to the process, with delays $\tau^{sc}_k$ and $\tau^{ca}_k$ and sampling period $h$.)

2 Problem Formulation

We will make the following assumptions about the control system:

- The output of the process is sampled periodically without any scheduling disturbances. The sampling period is $h$.
- The communication delays $\tau^{sc}$ and $\tau^{ca}$ are randomly varying. All time delays are independent over the full horizon, and their probability distributions are known a priori.
- The control signal is applied to the process as soon as the data arrives at the actuator node.


- The total time delay is always less than one sampling period, i.e. $\tau^{sc} + \tau^{ca} < h$.
- The lengths of the past time delays are known to the controller. One way to achieve this is by marking every signal that is transferred in the system with a timestamp.

Fig. 2. Timing of signals in the control system. The first diagram illustrates the process output and the sampling instants, the second diagram illustrates the signal into the controller node, the third diagram illustrates the signal into the actuator node, and the fourth diagram illustrates the process input; compare with Figure 1.

The timing of signals in the control system is illustrated in Figure 2. The process to be controlled is assumed to be of the form

$$\dot{x} = Ax + Bu + Gv,$$

where $x$ is the process state, $u$ the input, and $v$ white noise with unit incremental variance. Discretizing the process at the sampling instants $kh$, taking into account the effect of the time delays $\tau^{sc}_k$ and $\tau^{ca}_k$ (see Figure 2), gives the discrete-time model

$$x_{k+1} = \Phi x_k + \Gamma_0(\tau^{sc}_k, \tau^{ca}_k)\, u_k + \Gamma_1(\tau^{sc}_k, \tau^{ca}_k)\, u_{k-1} + v_k \quad (1)$$
$$y_k = C x_k + w_k, \quad (2)$$

where

$$\Phi = e^{Ah}, \qquad
\Gamma_0(\tau^{sc}_k, \tau^{ca}_k) = \int_0^{h - \tau^{sc}_k - \tau^{ca}_k} e^{As}\,ds\; B, \qquad
\Gamma_1(\tau^{sc}_k, \tau^{ca}_k) = \int_{h - \tau^{sc}_k - \tau^{ca}_k}^{h} e^{As}\,ds\; B.$$
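The matrices $\Phi$, $\Gamma_0$, and $\Gamma_1$ can be evaluated numerically with the standard augmented-matrix-exponential trick. The following Python sketch is our own illustration (function name and the use of SciPy are assumptions, not from the paper), for a given total delay $\tau = \tau^{sc}_k + \tau^{ca}_k < h$:

```python
import numpy as np
from scipy.linalg import expm

def discretize_with_delay(A, B, h, tau):
    """Compute Phi, Gamma0, Gamma1 of the delayed discrete-time model (1).

    tau = tau_sc + tau_ca is the total delay in the current sample (tau < h).
    """
    n, m = B.shape

    def gamma(t):
        # expm([[A, B], [0, 0]] * t) carries int_0^t e^{As} ds B
        # in its top-right block.
        M = np.zeros((n + m, n + m))
        M[:n, :n] = A
        M[:n, n:] = B
        return expm(M * t)[:n, n:]

    Phi = expm(A * h)
    Gamma0 = gamma(h - tau)             # u_k acts during the last h - tau
    Gamma1 = gamma(h) - gamma(h - tau)  # u_{k-1} acts during the first tau
    return Phi, Gamma0, Gamma1
```

Note that $\Gamma_0 + \Gamma_1 = \int_0^h e^{As} ds\, B$ for every delay, which gives a simple sanity check of an implementation.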


Here $v_k$ and $w_k$ are uncorrelated white noise with zero mean and covariance matrices $R_1$ and $R_2$, respectively. Denote by $\mathcal{Y}_k$ the information available to the controller when the control signal $u_k$ is calculated. This has the structure

$$\mathcal{Y}_k = \left\{ y_k, y_{k-1}, \ldots,\; \tau^{sc}_k, \tau^{sc}_{k-1}, \ldots,\; \tau^{ca}_{k-1}, \tau^{ca}_{k-2}, \ldots,\; u_{k-1}, u_{k-2}, \ldots \right\}.$$

Notice that the sensor-to-controller delays $\tau^{sc}$ at time $k$ and older are available, and that the controller-to-actuator delays $\tau^{ca}$ are assumed available up to time $k-1$. We assume that the control signal is a function of all information available when it is calculated, i.e. $u_k = f(\mathcal{Y}_k)$.

3 Evaluation of Schemes

It turns out that with the proposed control schemes the closed-loop system can be written as

$$z_{k+1} = A(\tau_k) z_k + B(\tau_k) e_k,$$

where $\{\tau_k\}$ is a random process independent of the noise process $\{e_k\}$, $z_k$ is a state vector for the closed-loop system, and $e_k$ is a vector with independent white noise components with zero mean and unit variance. For example, $z_k$ can be a vector containing $x_k$, $u_{k-1}$, and the controller state. Similarly, $\tau_k$ can be a vector consisting of $\tau^{sc}_k$ and $\tau^{ca}_k$. Often $z_k$ has the property that $x_k$ and $u_k$ can be obtained by a linear transformation of $z_k$, such that

$$\begin{bmatrix} x_k \\ u_k \end{bmatrix} = Q_s z_k.$$

One way to compare the performance of different control schemes subjected to random communication delays is to set up a cost function to be minimized by the controller. For the LQG case it is of interest to evaluate the cost function

$$E_{\tau,e} \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix}
= E_{\tau,e}\, z_k^T Q_s^T Q Q_s z_k
= \operatorname{tr}\left\{ Q_s^T Q Q_s T_k \right\}, \quad (3)$$

where $T_k = E_{\tau,e}\, z_k z_k^T$. To evaluate the quantity $T_k$, which is independent of $Q$ and $Q_s$, we use

$$T_{k+1} = E_{\tau,e} [A_k z_k + B_k e_k][A_k z_k + B_k e_k]^T
= E_{\tau} \left[ B_k B_k^T + E_e\, A_k z_k z_k^T A_k^T \right].$$
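This covariance recursion is convenient to iterate in vectorized (Kronecker) form, which also yields the stochastic stability test used later in the paper. The following Python sketch is our own illustration, not code from the paper: the closed-loop matrices are supplied as callables, and the delay expectations are approximated by Monte Carlo sampling.

```python
import numpy as np

def stationary_cost(A_of_tau, B_of_tau, tau_samples, Q, Qs, n_iter=1000):
    """Iterate the vectorized covariance recursion to its stationary value
    T_inf, then evaluate the quadratic cost tr{Qs' Q Qs T_inf}.

    A_of_tau, B_of_tau: callables tau -> closed-loop matrices A(tau), B(tau).
    tau_samples: samples of the delay vector tau_k, used to approximate
                 the expectations E[A ⊗ A] and E[vec(B B')].
    """
    n = A_of_tau(tau_samples[0]).shape[0]
    AkronA = np.mean([np.kron(A_of_tau(t), A_of_tau(t))
                      for t in tau_samples], axis=0)
    vecBB = np.mean([(B_of_tau(t) @ B_of_tau(t).T).reshape(-1, order='F')
                     for t in tau_samples], axis=0)

    # Second-moment stability: spectral radius of E[A ⊗ A] must be < 1.
    if np.max(np.abs(np.linalg.eigvals(AkronA))) >= 1.0:
        return np.inf

    vecT = np.zeros(n * n)
    for _ in range(n_iter):
        vecT = vecBB + AkronA @ vecT   # vec(T_{k+1}) = vec(E B B') + E[A⊗A] vec(T_k)
    T_inf = vecT.reshape(n, n, order='F')
    return np.trace(Qs.T @ Q @ Qs @ T_inf)
```

For a scalar closed loop $z_{k+1} = a z_k + e_k$ with $|a| < 1$, the routine returns the stationary variance $1/(1 - a^2)$, matching the classical result.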


Using Kronecker products this can be written as

$$\operatorname{vec}(T_{k+1}) = \operatorname{vec}(E_{\tau_k} B_k B_k^T) + E_{\tau_k}(A_k \otimes A_k) \operatorname{vec}(T_k). \quad (4)$$

The short forms $A_k = A(\tau_k)$ and $B_k = B(\tau_k)$ are used to simplify reading. We have here used that $z_k$ and $e_k$ are independent and that $z_k$ is independent of $\tau_k$. This is crucial for the applied technique to work and indirectly requires that $\tau_k$ and $\tau_{k-1}$ are independent. From the above calculations a direct algorithm follows for calculating the stationary cost function in the LQG case.

Algorithm 1 (Stationary cost function)

(1) Iterate (4) forward in time to get the stationary value $T_\infty$ of $T_k$.
(2) Calculate the stationary cost function from (3).

From (4) it is seen that stochastic stability, in the sense of $E\, z_k z_k^T < \infty$, of the closed-loop system is determined by the stability of the matrix

$$E_{\tau_k}\left( A(\tau_k) \otimes A(\tau_k) \right). \quad (5)$$

This stability condition appeared in [8] in a slightly different setting. For a discussion of the connection between second-moment stability and other stability concepts such as mean-square stability, stochastic stability, and exponential mean-square stability, see [7]. We have now derived formulas for evaluating cost functions from the equations for the corresponding closed-loop system. In the case of a quadratic cost function, an algorithm for evaluating the mean cost has been found. A condition for testing stochastic stability followed from the calculations.

4 Optimal Stochastic Control

In this section we solve the control problem set up by the cost function

$$J_N = x_N^T Q_N x_N + E \sum_{k=0}^{N-1} \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix}, \quad (6)$$

where $Q$ is symmetric with the structure

$$Q = \begin{bmatrix} Q_{11} & Q_{12} \\ Q_{12}^T & Q_{22} \end{bmatrix}. \quad (7)$$


Here $Q$ is positive semi-definite and $Q_{22}$ is positive definite. The solution of this problem follows by the same technique as for the standard LQG problem. We have the following result:

Theorem 1 (Optimal state feedback) Given the plant (1), with noise-free measurement of the state vector $x_k$, i.e. $y_k = x_k$, the control law that minimizes the cost function (6) is given by

$$u_k^* = -L(\tau^{sc}_k) \begin{bmatrix} x_k \\ u_{k-1}^* \end{bmatrix}, \quad (8)$$

where

$$L(\tau^{sc}_k) = (Q_{22} + \tilde{S}^{22}_{k+1})^{-1} \begin{bmatrix} Q_{12}^T + \tilde{S}^{21}_{k+1} & \tilde{S}^{23}_{k+1} \end{bmatrix}$$
$$\tilde{S}_{k+1}(\tau^{sc}_k) = E_{\tau^{ca}_k} \left[ G^T(\tau^{sc}_k, \tau^{ca}_k)\, S_{k+1}\, G(\tau^{sc}_k, \tau^{ca}_k) \,\middle|\, \tau^{sc}_k \right]$$
$$G(\tau^{sc}_k, \tau^{ca}_k) = \begin{bmatrix} \Phi & \Gamma_0(\tau^{sc}_k, \tau^{ca}_k) & \Gamma_1(\tau^{sc}_k, \tau^{ca}_k) \\ 0 & I & 0 \end{bmatrix}$$
$$S_k = E_{\tau^{sc}_k} \left\{ F_1^T(\tau^{sc}_k)\, Q\, F_1(\tau^{sc}_k) + F_2^T(\tau^{sc}_k)\, \tilde{S}_{k+1}(\tau^{sc}_k)\, F_2(\tau^{sc}_k) \right\}$$
$$F_1(\tau^{sc}_k) = \begin{bmatrix} I & 0 \\ -L(\tau^{sc}_k) \end{bmatrix}, \qquad
F_2(\tau^{sc}_k) = \begin{bmatrix} I & 0 \\ -L(\tau^{sc}_k) \\ 0 & I \end{bmatrix}, \qquad
S_N = \begin{bmatrix} Q_N & 0 \\ 0 & 0 \end{bmatrix}.$$

$\tilde{S}^{ij}_k$ is block $(i,j)$ of the symmetric matrix $\tilde{S}_k(\tau^{sc})$, and $Q_{ij}$ is block $(i,j)$ of $Q$.

PROOF. Introduce a new state variable $z_k = \begin{bmatrix} x_k \\ u_{k-1} \end{bmatrix}$. Using dynamic programming with $S_k$ the cost-to-go at time $k$, and with $\gamma_k$ the part of the cost


function that cannot be affected by control, gives

$$z_k^T S_k z_k + \gamma_k
= \min_{u_k} E_{\tau^{sc}_k, \tau^{ca}_k, v_k} \left\{ \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix} + z_{k+1}^T S_{k+1} z_{k+1} \right\} + \gamma_{k+1}$$
$$= E_{\tau^{sc}_k} \min_{u_k} E_{\tau^{ca}_k, v_k} \left\{ \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix} + z_{k+1}^T S_{k+1} z_{k+1} \,\middle|\, \tau^{sc}_k \right\} + \gamma_{k+1}$$
$$= E_{\tau^{sc}_k} \min_{u_k} \left\{ \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix} + \begin{bmatrix} x_k \\ u_k \\ u_{k-1} \end{bmatrix}^T \tilde{S}_{k+1}(\tau^{sc}_k) \begin{bmatrix} x_k \\ u_k \\ u_{k-1} \end{bmatrix} \right\} + \gamma_{k+1} + \operatorname{tr}\, S^{11}_{k+1} R_1.$$

The second equality follows from the fact that $\tau^{sc}_k$ is known when $u_k$ is determined. The third equality follows from the independence of $\begin{bmatrix} x_k \\ u_k \end{bmatrix}$ and $\tau^{ca}_k$, and from the definition of $\tilde{S}_{k+1}(\tau^{sc}_k)$. The resulting expression is a quadratic form in $u_k$. Minimizing it with respect to $u_k$ gives the optimal control law (8). From the assumption that $Q$ is symmetric it follows that $S_k$ and $\tilde{S}_k$ are symmetric. □

Theorem 1 states that the optimal controller with full state information is a linear $\tau^{sc}_k$-dependent feedback from the state and the previous control signal, i.e.

$$u_k = -L(\tau^{sc}_k, S_{k+1}) \begin{bmatrix} x_k \\ u_{k-1} \end{bmatrix}.$$

The iteration from $S_{k+1}$ to $S_k$ is a stochastic Riccati equation evolving backwards in time. Each step of this iteration contains expectation calculations with respect to the unknown $\tau^{sc}_k$ and $\tau^{ca}_k$. Under reasonable assumptions a stationary value $S_\infty$ of $S_k$ can be found by iterating the stochastic Riccati equation. In practice, a table of $L(\tau^{sc}_k, S_\infty)$ can then be calculated to get a control law of the form

$$u_k = -L(\tau^{sc}_k) \begin{bmatrix} x_k \\ u_{k-1} \end{bmatrix},$$

where $L(\tau^{sc}_k)$ is interpolated from the tabulated values of $L(\tau^{sc}_k, S_\infty)$ in real time.
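The table-plus-interpolation step can be sketched as follows. Here `solve_stationary_gain` is a hypothetical stand-in for iterating the stochastic Riccati equation to $S_\infty$ and extracting $L(\tau^{sc}, S_\infty)$; the elementwise linear interpolation is our own choice of approximation:

```python
import numpy as np

def build_gain_table(solve_stationary_gain, tau_grid):
    """Offline: tabulate L(tau_sc, S_inf) on a grid of sensor-to-controller
    delays. Returns an array of shape (len(tau_grid), rows, cols)."""
    return np.array([solve_stationary_gain(t) for t in tau_grid])

def gain_at(tau_sc, tau_grid, table):
    """Online: interpolate the feedback gain at the measured delay,
    elementwise over the gain matrix entries."""
    L = np.empty(table.shape[1:])
    for idx in np.ndindex(*table.shape[1:]):
        L[idx] = np.interp(tau_sc, tau_grid, table[(slice(None),) + idx])
    return L
```

At run time, only the cheap `gain_at` lookup is needed; all expectation calculations are confined to the offline tabulation.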


In many cases the assumption of full state information does not hold. This can be solved by constructing a state estimate from the available data. In our setup there is the problem of the random time delays, which enter in a nonlinear fashion. The fact that the old time delays up to time $k-1$ are known at time $k$, however, allows the standard time-varying Kalman filter of the process state to be optimal.

Theorem 2 (Optimal state estimate) Given the plant (1)-(2), the estimator

$$\hat{x}_{k|k} = \hat{x}_{k|k-1} + K_k (y_k - C \hat{x}_{k|k-1}) \quad (9)$$

with

$$\hat{x}_{k+1|k} = \Phi \hat{x}_{k|k-1} + \Gamma_0(\tau^{sc}_k, \tau^{ca}_k)\, u_k + \Gamma_1(\tau^{sc}_k, \tau^{ca}_k)\, u_{k-1} + \bar{K}_k (y_k - C \hat{x}_{k|k-1})$$
$$\hat{x}_{0|-1} = E(x_0)$$
$$P_{k+1} = \Phi P_k \Phi^T + R_1 - \Phi P_k C^T [C P_k C^T + R_2]^{-1} C P_k \Phi^T$$
$$P_0 = R_0 = E(x_0 x_0^T)$$
$$\bar{K}_k = \Phi P_k C^T [C P_k C^T + R_2]^{-1}$$
$$K_k = P_k C^T [C P_k C^T + R_2]^{-1}$$

minimizes the error variance $E\{[x_k - \hat{x}_k]^T [x_k - \hat{x}_k] \mid \mathcal{Y}_k\}$. Note that the filter gains $K_k$ and $\bar{K}_k$ do not depend on $\tau^{sc}$ and $\tau^{ca}$. Moreover, the estimation error is Gaussian with zero mean and covariance $P_{k|k} = P_k - P_k C^T [C P_k C^T + R_2]^{-1} C P_k$.

PROOF. Note that the random matrices in the process (1), $\Gamma_0(\tau^{sc}_k, \tau^{ca}_k)$ and $\Gamma_1(\tau^{sc}_k, \tau^{ca}_k)$, are known when the estimate $\hat{x}_{k+1|k}$ is calculated. This simply follows from the assumption that old time delays are known when we make the estimate. By this we know how the control signal enters $x_{k+1}$, and the optimality of the estimator can be proved in the same way as for the standard Kalman filter for time-varying linear systems, see [1]. See also [5]. □

The following theorem justifies the use of the estimated state in the optimal controller.

Theorem 3 (Separation property) Given the plant (1)-(2), with $\mathcal{Y}_k$ known when the control signal is calculated, the controller that minimizes the cost function (6) is given by

$$u_k^* = -L(\tau^{sc}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1}^* \end{bmatrix} \quad (10)$$


with

$$L(\tau^{sc}_k) = (Q_{22} + \tilde{S}^{22}_{k+1})^{-1} \begin{bmatrix} Q_{12}^T + \tilde{S}^{21}_{k+1} & \tilde{S}^{23}_{k+1} \end{bmatrix}, \quad (11)$$

where $\tilde{S}_k$ is calculated as in Theorem 1, and $\hat{x}_{k|k}$ is the minimum variance estimate from Theorem 2.

To prove Theorem 3 we will need some lemmas. The first lemma is from [2].

Lemma 1 Let $E[\cdot \mid y]$ denote the conditional mean given $y$. Assume that the function $f(y, u) = E[l(x, y, u) \mid y]$ has a unique minimum with respect to $u \in U$ for all $y \in Y$. Let $u^0(y)$ denote the value of $u$ for which the minimum is achieved. Then

$$\min_{u(y)} E\, l(x, y, u) = E\, l(x, y, u^0(y)) = E_y \left\{ \min_u E[l(x, y, u) \mid y] \right\}, \quad (12)$$

where $E_y$ denotes the mean value with respect to the distribution of $y$.

PROOF. This is Lemma 3.2 in Chapter 8 of [2]. □

Lemma 2 With the notation in (9) and under the conditions of Theorem 2, the following holds:

$$E_{\tau^{ca}_k, v_k, w_{k+1}} \left\{ \begin{bmatrix} \hat{x}_{k+1|k+1} \\ u_k \end{bmatrix}^T S_{k+1} \begin{bmatrix} \hat{x}_{k+1|k+1} \\ u_k \end{bmatrix} \,\middle|\, \mathcal{Y}_k \right\}
= \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}^T \tilde{S}_{k+1}(\tau^{sc}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}
+ \operatorname{tr}(R_1 C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C)
+ \operatorname{tr}(R_2 K_{k+1}^T S^{11}_{k+1} K_{k+1})
+ \operatorname{tr}(P_{k|k} \Phi^T C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C \Phi),$$

where $S^{11}_k$ is block $(1,1)$ of the matrix $S_k$.

PROOF. The calculations are similar to those in Theorem 1. In Theorem 2 the state estimate recursion is written as a recursion in $\hat{x}_{k|k-1}$. This can, by use of the equations in Theorem 2, be rewritten as a recursion in $\hat{x}_{k|k}$.


$$\hat{x}_{k+1|k+1} = (I - K_{k+1} C)\, \hat{x}_{k+1|k} + K_{k+1} y_{k+1}$$
$$= (I - K_{k+1} C) \left\{ \Phi \hat{x}_{k|k} + \Gamma_0(\tau^{sc}_k, \tau^{ca}_k)\, u_k + \Gamma_1(\tau^{sc}_k, \tau^{ca}_k)\, u_{k-1} \right\}
+ K_{k+1} \left\{ C \left( \Phi x_k + \Gamma_0(\tau^{sc}_k, \tau^{ca}_k)\, u_k + \Gamma_1(\tau^{sc}_k, \tau^{ca}_k)\, u_{k-1} + v_k \right) + w_{k+1} \right\}. \quad (13)$$

By introducing the estimation error $\tilde{x}_k = x_k - \hat{x}_{k|k}$, which we know is orthogonal to $\hat{x}_{k|k}$ from Theorem 2, (13) can be written as

$$\hat{x}_{k+1|k+1} = \Phi \hat{x}_{k|k} + \Gamma_0(\tau^{sc}_k, \tau^{ca}_k)\, u_k + \Gamma_1(\tau^{sc}_k, \tau^{ca}_k)\, u_{k-1}
+ K_{k+1} C \Phi\, \tilde{x}_k + K_{k+1} C v_k + K_{k+1} w_{k+1}. \quad (14)$$

From this it follows that

$$\begin{bmatrix} \hat{x}_{k+1|k+1} \\ u_k \end{bmatrix}
= G(\tau^{sc}_k, \tau^{ca}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}
+ H \begin{bmatrix} \tilde{x}_k \\ v_k \\ w_{k+1} \end{bmatrix}, \quad (15)$$

where

$$G(\tau^{sc}_k, \tau^{ca}_k) = \begin{bmatrix} \Phi & \Gamma_0(\tau^{sc}_k, \tau^{ca}_k) & \Gamma_1(\tau^{sc}_k, \tau^{ca}_k) \\ 0 & I & 0 \end{bmatrix} \quad (16)$$
$$H = \begin{bmatrix} K_{k+1} C \Phi & K_{k+1} C & K_{k+1} \\ 0 & 0 & 0 \end{bmatrix}. \quad (17)$$


The equality can now be written as

$$E_{\tau^{ca}_k, v_k, w_{k+1}} \left\{ \begin{bmatrix} \hat{x}_{k+1|k+1} \\ u_k \end{bmatrix}^T S_{k+1} \begin{bmatrix} \hat{x}_{k+1|k+1} \\ u_k \end{bmatrix} \,\middle|\, \mathcal{Y}_k \right\}$$
$$= \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}^T E_{\tau^{ca}_k} \left[ G^T(\tau^{sc}_k, \tau^{ca}_k)\, S_{k+1}\, G(\tau^{sc}_k, \tau^{ca}_k) \,\middle|\, \tau^{sc}_k \right] \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}
+ E_{v_k, w_{k+1}} \left\{ \begin{bmatrix} \tilde{x}_k \\ v_k \\ w_{k+1} \end{bmatrix}^T H^T S_{k+1} H \begin{bmatrix} \tilde{x}_k \\ v_k \\ w_{k+1} \end{bmatrix} \,\middle|\, \mathcal{Y}_k \right\}$$
$$= \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}^T \tilde{S}_{k+1}(\tau^{sc}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}
+ \operatorname{tr}(P_{k|k} \Phi^T C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C \Phi)
+ \operatorname{tr}(R_2 K_{k+1}^T S^{11}_{k+1} K_{k+1})
+ \operatorname{tr}(R_1 C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C), \quad (18)$$

where

$$\tilde{S}_{k+1}(\tau^{sc}_k) = E_{\tau^{ca}_k} \left[ G^T(\tau^{sc}_k, \tau^{ca}_k)\, S_{k+1}\, G(\tau^{sc}_k, \tau^{ca}_k) \,\middle|\, \tau^{sc}_k \right]. \quad (19)$$

The first part of the first equality follows from the fact that $\hat{x}_{k|k}$, $u_k$, and $u_{k-1}$ are independent of $\tilde{x}_k$, $\tau^{ca}_k$, $v_k$, and $w_{k+1}$. The second part of the first equality follows from the fact that $H$ is independent of $\tau^{ca}_k$. The second equality follows from the fact that $\tilde{x}_k$ is independent of $v_k$ and $w_{k+1}$. □

Proof of Theorem 3. By repeated use of Lemma 1 we can obtain a dynamic programming recursion for the future loss $W$. Since knowledge of $\hat{x}_{k|k}$ and $P_k$ is a sufficient statistic for the conditional distribution of $x_k$ given $\mathcal{Y}_k$, and since


$\tau^{sc}_k$ is known at time $k$, we obtain the functional equation

$$W(\hat{x}_{k|k}, P_{k|k}, k)
= E_{\tau^{sc}_k} \min_{u_k} E \left\{ \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix} + W(\hat{x}_{k+1|k+1}, P_{k+1|k+1}, k+1) \,\middle|\, \mathcal{Y}_k \right\}$$
$$= E_{\tau^{sc}_k} \min_{u_k} E \left\{ \begin{bmatrix} x_k \\ u_k \end{bmatrix}^T Q \begin{bmatrix} x_k \\ u_k \end{bmatrix} + W(\hat{x}_{k+1|k+1}, P_{k+1|k+1}, k+1) \,\middle|\, \hat{x}_{k|k}, P_{k|k}, \tau^{sc}_k \right\}. \quad (20)$$

The initial condition for the functional equation (20) is

$$W(\hat{x}_{N|N}, P_{N|N}, N) = E \left\{ x_N^T Q_N x_N \mid \hat{x}_{N|N}, P_{N|N} \right\}. \quad (21)$$

In (20), $E_{\tau^{sc}_k}$ is brought outside the minimization using Lemma 1, i.e. $\tau^{sc}_k$ is known when we calculate the control signal. We will now show that the functional equation (20) has a solution which is a quadratic form

$$W(\hat{x}_{k|k}, P_{k|k}, k) = \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1} \end{bmatrix}^T S_k \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1} \end{bmatrix} + s_k, \quad (22)$$

and that the functional is minimized by the controller of Theorem 1 with $x_k$ replaced by $\hat{x}_{k|k}$. Using Theorem 2 we can rewrite the initial condition (21) as

$$W(\hat{x}_{N|N}, P_{N|N}, N) = \hat{x}_{N|N}^T Q_N \hat{x}_{N|N} + \operatorname{tr}(Q_N P_{N|N}), \quad (23)$$

which clearly is of the quadratic form (22). Proceeding by induction, we assume that (22) holds for $k+1$ and then show that it also holds for $k$. We have that

$$W(\hat{x}_{k|k}, P_{k|k}, k) = E_{\tau^{sc}_k} \min_{u_k} \left\{ \begin{bmatrix} \hat{x}_{k|k} \\ u_k \end{bmatrix}^T Q \begin{bmatrix} \hat{x}_{k|k} \\ u_k \end{bmatrix} + \operatorname{tr}(P_{k|k} Q_{11})
+ \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}^T \tilde{S}_{k+1}(\tau^{sc}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_k \\ u_{k-1} \end{bmatrix}
+ \operatorname{tr}(R_1 C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C)
+ \operatorname{tr}(R_2 K_{k+1}^T S^{11}_{k+1} K_{k+1})
+ \operatorname{tr}(P_{k|k} \Phi^T C^T K_{k+1}^T S^{11}_{k+1} K_{k+1} C \Phi) + s_{k+1} \right\}, \quad (24)$$

where we have used Lemma 2 to rewrite $E\, W(\hat{x}_{k+1|k+1}, P_{k+1|k+1}, k+1)$. Comparing (24) with the quadratic form in the proof of Theorem 1, we see that it


is minimized by the control law

$$u_k^* = -(Q_{22} + \tilde{S}^{22}_{k+1})^{-1} \begin{bmatrix} Q_{12}^T + \tilde{S}^{21}_{k+1} & \tilde{S}^{23}_{k+1} \end{bmatrix} \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1}^* \end{bmatrix}, \quad (25)$$

where $\tilde{S}_{k+1}$ is as stated in Theorem 1. Using the optimal control in (24) and applying $E_{\tau^{sc}_k}$, which can be moved inside $[\hat{x}_{k|k}^T \; u_{k-1}^T]$, we find that $W(\hat{x}_{k|k}, P_{k|k}, k)$ is of the quadratic form (22). The induction is thus completed, and the criterion is minimized by the controller stated in the theorem. □

5 A Suboptimal Scheme

A drawback of the optimal scheme is the complicated state feedback matrix $L(\tau^{sc}_k)$. An alternative to the optimal controller is the suboptimal controller

$$u_k = -L \begin{bmatrix} \Phi^p_k & \Gamma^p_k \end{bmatrix} \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1} \end{bmatrix}, \quad (26)$$

where

$$\Phi^p_k = e^{A(\tau^{sc}_k + E\,\tau^{ca}_k)}, \qquad
\Gamma^p_k = \int_0^{\tau^{sc}_k + E\,\tau^{ca}_k} e^{As}\,ds\; B,$$

and $L$ is the optimal state feedback vector in the delay-free setup. Here $E\,\tau^{ca}_k$ is the mean value of $\tau^{ca}_k$. The operation $\Phi^p_k \hat{x}_{k|k} + \Gamma^p_k u_{k-1}$ can be seen as a prediction from the state estimate at time $kh$ to a state estimate at the time when the control signal is applied at the actuator. This controller requires fewer computations than the optimal controller of Section 4. In Section 6 this controller is compared with the optimal controller; in the examples its performance is very close to the optimal.

6 Example

Consider the following plant; both plant and design specifications are taken from [6]:

$$\frac{dx}{dt} = \begin{bmatrix} 0 & 1 \\ -3 & -4 \end{bmatrix} x + \begin{bmatrix} 0 \\ 1 \end{bmatrix} u + \begin{bmatrix} 35 \\ -61 \end{bmatrix} v \quad (27)$$
$$y = \begin{bmatrix} 2 & 1 \end{bmatrix} x + e,$$


where $E[v(t)] = E[e(t)] = 0$ and $E[v(t_1)v(t_2)] = E[e(t_1)e(t_2)] = \delta(t_1 - t_2)$. The control objective is to minimize the cost function

$$J = E \lim_{T \to \infty} \frac{1}{T} \int_0^T \left( x^T H^T H x + u^2 \right) dt,$$

where $H = 4\sqrt{5} \begin{bmatrix} \sqrt{35} & 1 \end{bmatrix}$. The sampling period for the controller is chosen as $h = 0.05$. This is in accordance with the rule of thumb given in [3]. The time delays $\tau^{sc}_k$ and $\tau^{ca}_k$ are assumed to be uniformly distributed on the interval $[0, \alpha h/2]$, where $0 \leq \alpha \leq 1$.

The stationary cost function will be evaluated and compared for five different schemes:

- an LQG-controller neglecting the time delays
- an LQG-controller designed for the mean delay
- the scheme with buffers proposed in [10]
- the optimal controller derived in Section 4
- the suboptimal controller in Section 5

The first design is done without taking any time delays into account. The process and the cost function are sampled to get discrete-time equivalents, and the standard LQG-controller is calculated. This gives the design

$$L = \begin{bmatrix} 38.911 \\ 8.094 \end{bmatrix}^T, \qquad
\bar{K} = \begin{bmatrix} 2.690 \\ -4.484 \end{bmatrix}, \qquad
K = \begin{bmatrix} 2.927 \\ -5.012 \end{bmatrix}.$$

This choice of $L$, $K$, and $\bar{K}$ gives the following closed-loop poles:

$$\operatorname{sp}(\Phi - \Gamma L) = \{ 0.700 \pm 0.0702 i \}$$
$$\operatorname{sp}(\Phi - \bar{K} C) = \{ 0.743,\; 0.173 \}.$$

Even if these look reasonable, a Nyquist plot of the loop transfer function reveals a small phase margin, $\varphi_m = 10.9^\circ$. The small phase margin indicates that there could be problems handling unmodeled time delays. Numerical evaluation of (5) gives the stability limit $\alpha_{crit} = 0.425$ for the controller neglecting the time delays.

The scheme in [10] eliminates the randomness of the time delays by introducing timed buffers. This will, however, introduce extra time delay in the loop. The design for this scheme is done in the same way as in the standard LQG problem.

The fourth scheme we will compare with is the optimal controller described in Section 4. Notice that the optimal state estimator gains $K$ and $\bar{K}$ will be


the same for the optimal controller as if the time delays were neglected. The feedback from the estimated state will have the form

$$u_k = -L(\tau^{sc}_k) \begin{bmatrix} \hat{x}_{k|k} \\ u_{k-1} \end{bmatrix}.$$

The suboptimal controller (26) uses $L$, $K$, and $\bar{K}$ from the standard LQG-controller.

The stationary cost function has been evaluated for the five schemes with Algorithm 1. For comparison, the stationary cost has also been evaluated by Monte Carlo simulation, calculating the mean cost over $2 \cdot 10^4$ simulated samples. The results agree very well, see Figure 3. From Figure 3 it is seen that the controller neglecting the time delays fails to stabilize the process for $\alpha > \alpha_{crit}$. The optimal controller and the proposed suboptimal scheme outperform the scheme proposed in [10]. Note that for this example the cost is only slightly higher with the suboptimal controller than with the optimal controller; the difference cannot be seen in the exactly calculated curves in Figure 3.

7 Conclusions

In this paper we have described a method to analyze the performance of different schemes that compensate for randomly varying time delays. A test of stochastic stability of the closed-loop system has been presented. An LQG-optimal controller has been found for the setup with time-driven sampling, event-driven controller, and event-driven actuator. The optimal controller has successfully been compared with a proposed suboptimal controller and some controllers proposed in the literature. Future work will include studies of:

- Optimal schemes when the time delays are correlated from sample to sample. One way to model this is by letting the distributions of the network delays be governed by an underlying Markov chain. An algorithm for evaluation of a given control law and stability results for this setup are presented in [12].
- Control strategies when the control delay can be larger than the sampling interval. A problem that occurs in this case is that there is then no guarantee that the samples arrive at the controller and the actuator in the order they were sent. A related problem is when samples may be lost in the communication, so called vacant sampling.
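The Monte Carlo check of Algorithm 1 used in Section 6 can be sketched generically for the closed-loop model of Section 3. The helper callables below are our own illustration, not code from the paper: the closed loop $z_{k+1} = A(\tau_k) z_k + B(\tau_k) e_k$ is simulated with sampled delays, and the per-step cost (3) is averaged.

```python
import numpy as np

def monte_carlo_cost(A_of_tau, B_of_tau, sample_tau, Q, Qs,
                     n_steps=20000, seed=0):
    """Monte-Carlo estimate of the stationary cost (3): simulate the
    closed loop z_{k+1} = A(tau_k) z_k + B(tau_k) e_k with i.i.d. delays
    drawn by sample_tau(rng) and average [x_k; u_k]' Q [x_k; u_k]."""
    rng = np.random.default_rng(seed)
    n = A_of_tau(sample_tau(rng)).shape[0]
    m = B_of_tau(sample_tau(rng)).shape[1]
    z = np.zeros(n)
    total = 0.0
    for _ in range(n_steps):
        tau = sample_tau(rng)
        xu = Qs @ z                      # recover [x_k; u_k] from z_k
        total += xu @ Q @ xu             # per-step quadratic cost
        z = A_of_tau(tau) @ z + B_of_tau(tau) @ rng.standard_normal(m)
    return total / n_steps
```

For a stable closed loop the estimate converges to the exact stationary cost computed by Algorithm 1; the agreement of the two values is the kind of cross-check reported in Figure 3.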


[Figure 3: loss (scaled by $10^4$) versus $\alpha$, with legend entries: design neglecting time delays, scheme by Luck-Ray, optimal controller, suboptimal controller, mean delay controller.]

Fig. 3. Exact calculated performance (solid lines) of the four schemes, and simulated performance (dashed lines), for system (27) as a function of the amount of stochastics in the time delays. The time delays are uniformly distributed on $[0, \alpha h/2]$. The small difference between the suboptimal controller and the optimal controller is noticeable. For $\alpha > 0.425$ the controller neglecting the time delays fails to stabilize the process.

References

[1] B. D. O. Anderson and J. B. Moore. Optimal Filtering. Prentice-Hall, Englewood Cliffs, N.J., 1979.

[2] Karl Johan Åström. Introduction to Stochastic Control Theory. Academic Press, New York, 1970. Translated into Russian, Japanese and Chinese.

[3] Karl Johan Åström and Björn Wittenmark. Computer-Controlled Systems: Theory and Design. Prentice-Hall, Englewood Cliffs, New Jersey, second edition, 1990.

[4] H. Chan and Ü. Özgüner. Closed-loop control of systems over a communications network with queues. Int. J. Control, 62(3):493-510, 1995.

[5] H.-F. Chen, P. R. Kumar, and J. H. van Schuppen. On Kalman filtering for conditionally Gaussian systems with random matrices. Systems & Control Letters, pages 397-404, 1989.

[6] J. C. Doyle and G. Stein. Robustness with observers. IEEE Trans. Automat. Contr., AC-24(4):607-611, 1979.


[7] Y. Ji, H. J. Chizeck, X. Feng, and K. A. Loparo. Stability and control of discrete-time jump linear systems. Control Theory and Advanced Technology, 7(2):247-270, 1991.

[8] R. E. Kalman. Control of randomly varying linear dynamical systems. Proceedings of Symposia in Applied Mathematics, 13:287-298, 1962.

[9] R. Krtolica, Ü. Özgüner, H. Chan, H. Göktas, J. Winkelman, and M. Liubakka. Stability of linear feedback systems with random communication delays. Int. J. Control, 59(4):925-953, 1994.

[10] R. Luck and A. Ray. An observer-based compensator for distributed delays. Automatica, 26(5):903-908, 1990.

[11] J. Nilsson, B. Bernhardsson, and B. Wittenmark. Stochastic analysis and control of real-time systems with random time delays. In Proceedings of the 13th International Federation of Automatic Control World Congress, San Francisco, pages 267-272, 1996.

[12] Johan Nilsson. Analysis and Design of Real-Time Systems with Random Delays. PhD thesis, 1996.

[13] A. Ray. Introduction to networking for integrated control systems. IEEE Control Systems Magazine, pages 76-79, January 1989.

[14] A. Ray. Output feedback control under randomly varying distributed delays. Journal of Guidance, Control, and Dynamics, 17(4):701-711, 1994.

[15] K. G. Shin and H. Kim. Hard deadlines in real-time systems. In IFAC Symposium on Algorithms and Architectures for Real-Time Control, pages 9-14, Seoul, Korea, 1992.

[16] B. Wittenmark, J. Nilsson, and M. Törngren. Timing problems in real-time control systems. In Preprints American Control Conference, pages 2000-2004, Seattle, WA, June 1995.