Stochastic Stability of Jump Discrete-Time Linear Systems With Markov Chain in a General Borel Space





represents a worst case scenario for the robustness analysis and the difference between the two schemes is usually smaller in more realistic situations.

VII. CONCLUSIONS

In this technical note, a novel FF-CG scheme is proposed which does not make use of any measurement of the state to govern the set-point manipulations. The main idea under its development was to limit the set-point variations in order to always maintain the state trajectory "not too far" from the region of the steady-state admissible equilibria. The properties of the proposed algorithm have been carefully analyzed and the improvements and differences with earlier FF-CG approaches pointed out. Performance and robustness analyses and comparisons with classical CG and previously proposed FF-CG solutions have been provided and discussed in the final example.


Stochastic Stability of Jump Discrete-Time Linear Systems With Markov Chain in a General Borel Space

O. L. V. Costa, Senior Member, IEEE, and D. Z. Figueiredo

Abstract—Necessary and sufficient conditions for stochastic stability (SS) of discrete-time linear systems subject to Markov jumps in the parameters are considered, assuming that the Markov chain takes values in a general Borel space . It is shown that SS is equivalent to the spectral radius of a bounded linear operator in a Banach space being less than 1, or to the existence of a solution of a Lyapunov type equation. These results generalize several previous results in the literature, which considered only the case of the Markov chain taking values in a finite or infinite countable space.

Index Terms—General Borel space, Lyapunov equation, Markov jump linear systems, stochastic stability.

I. INTRODUCTION

There has been lately an intensive interest in dynamic linear systems which are subject to abrupt changes in their structures. Among these models, one that has received a great deal of attention is the so-called linear systems with Markov jump parameters (MJLS). Regarding the mean square stability theory for MJLS, we can mention, for instance, [3], [9], [10], [13], [14], [17], [19], [20], as a sample of works on this subject, which has by now a fairly complete body of results (see, for instance, [6] for further references on this topic). In these works the Markov chain was assumed to take values in a finite state space. For the case in which the state space of the Markov chain is countably infinite it is shown in [4] that mean square and stochastic stability ( -stability) are no longer equivalent. Regarding other issues such as almost sure stability, robust stability, stabilizability and detectability, the readers are referred, for instance, to [1], [7], [8], [18], [22] and [24]. In particular, [16] considered MJLS with the Markov chain taking values in a general Borel space, and the main goal was to derive conditions for uniform exponential almost sure stability (UEAS-stability). Assuming that the Markov chain is a positive Harris chain, the authors showed in Theorem 4.3 that UEAS is equivalent to a contractivity condition being satisfied. In this paper we deal with a different stability criterion, the so-called stochastic stability (SS). We consider a discrete-time MJLS with the jumps being modeled by a time-homogeneous Markov chain taking values in a general Borel space and with transition probability kernel having a density with respect to a σ-finite measure on . No positive Harris assumptions are required, and the necessary and sufficient conditions are based on a Lyapunov type equation. As far as the authors are aware, this is the first time that the SS of MJLS with the Markov chain taking values in a general Borel space is considered in the literature.

The paper is organized as follows. In Section II we define the notation and some basic concepts. The problem statement is presented in Section III and some auxiliary results are addressed in Section IV. In Section V we present the necessary and sufficient conditions for SS of discrete-time MJLS as well as some easy-to-check sufficient conditions

Manuscript received December 17, 2012; revised April 08, 2013 and May 23, 2013; accepted June 04, 2013. Date of publication June 20, 2013; date of current version December 19, 2013. This work was supported in part by the Brazilian National Research Council (CNPq), under Grant 301067/09-0, and by the USP project MaCLinC. Recommended by Associate Editor M. L. Corradini.

The authors are with the Departamento de Engenharia de Telecomunicações e Controle, Escola Politécnica da Universidade de São Paulo, CEP 05508-900, São Paulo, Brazil (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/TAC.2013.2270031

0018-9286 © 2013 IEEE



and an example. The paper is concluded with some final comments in Section VI.

II. NOTATION AND PRELIMINARIES

For and complex Banach spaces we set for the Banach space of all bounded linear operators of into , with the uniform induced norm represented by . For simplicity we shall set . For we denote by the spectral radius of . As usual, will mean that the operator is positive semi-definite (positive definite, respectively). In particular, we shall denote by ( , respectively) the -dimensional complex (real) Euclidean spaces and by the normed bounded linear space of all complex matrices, with and . In this case, the superscripts , , and will denote complex conjugate, transpose and conjugate transpose, respectively. We denote by , , the eigenvalues of a matrix . Either the uniform induced norm in or the standard Euclidean norm in is represented by . Recall that in this case for any , (see [2], page 443). The interval will be denoted by , and the set of positive integers by .

Remark 2.1: For any there exists a Cartesian self-adjoint decomposition (cf. [21], page 376) , , 2, 3, 4, such that , for , 2, 3, 4, and . Indeed, we can write where , . Since and are self-adjoint (that is, , ), they can be decomposed into positive and negative parts (see [21], p. 464), so that there exist , , , 2, 3, 4, such that , .

We denote by the trace operator, which is a linear functional , satisfying the properties and for any , with , , . We denote by the operator that creates a column vector from a matrix by stacking the column vectors of below one another. For , we have that and therefore . We also notice that for any , .

We recall that is a Borel space if it is a Borel subset of a complete and separable metric space, and its Borel σ-algebra is denoted by . For Borel spaces, the family of all stochastic kernels on given is denoted by . Let be a Borel space and a σ-finite measure on . For , positive integers, we say that if is measurable and , and, similarly, that if is measurable and . It is easy to see that and (the Banach space of measurable functions such that , see [21], page 220) are uniformly homeomorphic, and thus is a Banach space. Similarly and (the Banach space of measurable functions such that ) are uniformly homeomorphic and, thus, is also a Banach space. For simplicity we will write , , and , , and almost for . Finally, we say that if for each .
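Much of the inline mathematics in this transcript did not survive extraction. The reconstruction sketches inserted in the sections below therefore use the following standing notation, which is an assumption made for readability rather than a restoration of the authors' original symbols:

\[
S:\ \text{Borel space in which the chain } \{\theta(k)\} \text{ takes values},\qquad
\mu:\ \sigma\text{-finite measure on } \mathcal{B}(S),\qquad
g(s,t):\ \text{transition density with respect to } \mu,
\]
\[
A(\cdot):\ \text{matrix-valued measurable map on } S,\qquad
r_\sigma(\mathcal{T}):\ \text{spectral radius of a bounded linear operator } \mathcal{T},\qquad
\|\cdot\|:\ \text{Euclidean or induced norm}.
\]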

III. PROBLEM FORMULATION

Let be a Borel space and a σ-finite measure on . On a probabilistic space ( , ) we consider a time-homogeneous Markov chain taking values in and with transition probability kernel having a density with respect to , so that for any ,

(1)

Consider the following discrete-time Markov jump linear system

(2)

with , , and having distribution , for some probability measure on . Denote by the marginal of on , i.e., , for all . We assume that: A1) ; A2) ; A3) , that is, is absolutely continuous with respect to .
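The bodies of the displayed equations (1)–(7) are blank in this transcript. Under the notation assumed at the end of Section II, the two objects described by the surrounding text, namely the densitized transition kernel of (1) and the jump linear dynamics of (2), would read roughly as follows; this is a hedged reconstruction from the prose, not a verbatim restoration of the paper's formulas:

\[
\mathbb{G}(s,B)=\int_{B} g(s,t)\,\mu(dt),\qquad s\in S,\ B\in\mathcal{B}(S)\qquad \text{(cf. (1))}
\]
\[
x(k+1)=A(\theta(k))\,x(k),\qquad k=0,1,2,\ldots\qquad \text{(cf. (2))}
\]

with the initial pair (x(0), θ(0)) required to satisfy Assumptions A1)–A3), the last of which states that the marginal distribution of θ(0) is absolutely continuous with respect to μ.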

From assumption A3) and the Radon-Nikodym theorem, there exists a measurable function from into such that for all ,

(3)

From Proposition D.8, page 184, in [11], there exists a stochastic kernel such that for all and ,

(4)

Combining (3) and (4) we get that for all and ,

(5)

From (5) and Assumption A2) we can conclude that

(6)

Define where for each ,

(7)

Notice that and thus.

We define next the stability concepts that we consider in the following sections. Note that, as in [3], it will be convenient first to consider complex initial conditions . Following the same ideas as in [5], we show in Theorem 5.3 that the results also hold for real initial conditions .

Definition 3.1: System (2) is stochastically stable (SS) if for any initial condition satisfying Assumptions A2) and A3). We say that system (2) is SS for real initial state variables if for any initial condition satisfying Assumptions A2) and A3) and .
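The summability condition that defines SS was stripped out of Definition 3.1. In the form standard for MJLS, and consistent with the abstract, the requirement would be (again a reconstruction under the assumed notation):

\[
\sum_{k=0}^{\infty}\mathbb{E}\left[\|x(k)\|^{2}\right]<\infty
\]

for every initial condition satisfying Assumptions A2) and A3), with the "real initial state" variant imposing the same bound when the initial state is restricted to be real.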

Remark 3.2: As shown in the example of Remark 6 of [4], page 2079, for the case in which the Markov chain takes values in an infinite dimensional space, the concepts of mean square stability and SS are not equivalent.

Example 3.3: This example is inspired by the solar thermal receiver

model proposed by Sworder and Rogers in [23], and also discussed in [6], Chapter 8. In this model the system dynamics is heavily dependent on the instantaneous insolation. For simplicity we consider here



TABLE I: PARAMETERS FOR THE SOLAR THERMAL RECEIVER MODEL

just two atmospheric conditions: 1) sunny and 2) cloudy. The thermal receiver is described by the scalar MJLS , where is a Markov chain taking values in the set

, and is characterized as follows. For , we have that where is uniformly distributed in the interval , and with probability , and with probability . Clearly we have ,

, , , 2. For , we set . Therefore we have that in each atmospheric condition (sunny, represented by , or cloudy, represented by ) the dynamics of the system, given by , can take any value according to a uniform distribution in the continuous interval . In Table I we present the parameters considered in this example. It follows that (1, ) corresponds to only stable modes, while (2, ) corresponds to some unstable modes. In Section V we will establish a relation between and in order to guarantee SS.
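Since the entries of Table I and the symbols of Example 3.3 were not recovered, the Python sketch below only illustrates the kind of scalar two-mode MJLS the example describes: a sunny/cloudy mode chain with a gain drawn uniformly from a mode-dependent interval. All numerical values and names (a1, a2, p12, p21, paths, horizon) are hypothetical placeholders, and the Monte Carlo sum is merely a sanity check of the SS quantity, not the paper's analytical test.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters; the actual values of Table I were lost in extraction.
a1, a2 = 0.9, 1.1        # upper ends of the uniform gain interval in mode 1 (sunny) and mode 2 (cloudy)
p12, p21 = 0.1, 0.3      # switching probabilities: sunny -> cloudy and cloudy -> sunny
paths, horizon = 2000, 200

total = 0.0
for _ in range(paths):
    mode, x = 1, 1.0                                     # start sunny, real scalar initial state x(0) = 1
    for _ in range(horizon):
        total += x * x                                   # accumulate ||x(k)||^2 along the sample path
        a = rng.uniform(0.0, a1 if mode == 1 else a2)    # gain uniform in the current mode's interval
        x = a * x                                        # scalar jump linear dynamics x(k+1) = A(theta(k)) x(k)
        if mode == 1 and rng.random() < p12:             # two-state chain for the atmospheric condition
            mode = 2
        elif mode == 2 and rng.random() < p21:
            mode = 1

print("Monte Carlo estimate of sum_k E[x(k)^2]:", total / paths)

With the placeholder values above the estimate essentially stops growing once the horizon is large, which is the behaviour that the sufficient condition of Section V is meant to certify analytically.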

IV. AUXILIARY RESULTS

In this section we present several auxiliary results and the main operators that will be needed to characterize the Lyapunov type equation and the SS results for system (2). We start by defining the following linear operators and : for , , and , define

(8)

(9)
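The defining formulas (8) and (9) are blank in the transcript. For MJLS whose transition kernel has a density g(s,t) with respect to μ, the pair of positive linear operators used in this kind of analysis typically takes the following form (a reconstruction under the assumed notation; the authors' exact symbols and domains may differ):

\[
\mathcal{T}(Q)(t)=\int_{S} A(s)\,Q(s)\,A(s)^{*}\,g(s,t)\,\mu(ds),
\qquad
\mathcal{V}(P)(s)=A(s)^{*}\!\left(\int_{S} P(t)\,g(s,t)\,\mu(dt)\right)\!A(s).
\]

With this choice both operators map positive semi-definite matrix-valued functions to positive semi-definite ones, and the adjoint identity claimed in Proposition 4.2 follows from Fubini's theorem, in line with the proof outlines below.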

Notice that and are positive operators, that is, and whenever and . We have the following proposition.

Proposition 4.1: We have that: i) with , and ii) with .

Proof: For any we have from Fubini's theorem, (8), and

-almost everywhere on , that

(10)

since . From (10) we also get that, showing i). Considering now any and we

have from (9) and -almost everywhere on that

(11)

recalling again that . From (11) we conclude that and thus that , showing ii).

We define next the bounded bilinear operator on with values in as follows. For , ,

(12)

Recalling that -almost everywhere on , we get that , which shows that indeed is a bounded bilinear operator. We recall that the adjoint of is defined by the relation (see [12], page 208) for every , . We have the following proposition.

Proposition 4.2: .

Proof: For every , we have from Fubini's theorem, (8), (9) and (12) that

completing the proof. We conclude this section with the following auxiliary result.

Proposition 4.3: There is a density with respect to such that

for each and . Proof: See [25].

V. STOCHASTIC STABILITY

In this section we present the main results of the paper. We start by defining for , , and

(13)

where was defined in (7). In what follows we set as the σ-field generated by . Our first result establishes a link between and the operator (see (8)).

Proposition 5.1: We have that for each ,

, and for all ,

(14)

Proof: Let us show (14) by induction on . For we have that and from (2) and (3)–(7) that

showing the result for . Suppose the result holds for . We have from (2) and Fubini's theorem that

(15)

Since, from Proposition 4.1, , it follows from the induction hypothesis and (15) that and

. Define next . We have the following result.



Proposition 5.2: For every ,

(16)

Proof: For we have from (3)–(7) that

From Fubini’s theorem, (2), (13) and Proposition 5.1 we get that

and the result follows. Next we present the main result of the paper.

Theorem 5.3: The following assertions are equivalent:
I) System (2) is SS.
II) System (2) is SS for real initial state variables .
III) .
IV) Given any there exists such that -almost everywhere on

(17)

V) There exists and such that -almost everywhere on

(18)
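The inequalities (17) and (18) referenced in conditions IV) and V) did not survive extraction. Read together with the abstract and with the operators sketched in Section IV, the characterization would take approximately the following shape; this is a hedged reconstruction, not the theorem's literal statement:

\[
\text{III)}\quad r_\sigma(\mathcal{T})<1;
\]
\[
\text{IV)}\quad \text{for every bounded } Q(\cdot)\geq \epsilon I \text{ there is a bounded } P(\cdot)\geq 0 \text{ such that } P(s)-\mathcal{V}(P)(s)=Q(s)\ \ \mu\text{-a.e. on } S;
\]
\[
\text{V)}\quad \text{there exist a bounded } P(\cdot)\geq 0 \text{ and } \epsilon>0 \text{ such that } P(s)-\mathcal{V}(P)(s)\geq \epsilon I\ \ \mu\text{-a.e. on } S.
\]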

Proof: iii) ⇒ i): From Lemma 1 in [15] we have that implies that there exists , , such that for every . From (14) we have that , and from (16),

. Therefore

(19)

and from (19) we clearly have that for any initial condition satisfying Assumptions A2) and A3) so that, according to Definition 3.1, system (2) is SS.

i) ⇒ ii): This is immediate. ii) ⇒ iii): Let us show first that for arbitrary ,

. Set , . Clearly we have that , , and . From Proposition 4.3 we can find a density with respect to such that for every and . Consider a null mean random vector with covariance matrix and define the initial condition for system (2) as follows: has density with respect to

on , and . From this definition we have that , and, recalling that

, that

(20)

so that from (20), . Therefore Assumptions A2) and A3) are satisfied with . Moreover, . From the fact that , (14), (16) and the SS (Definition 3.1), we have that

(21)

Consider now an arbitrary . From Remark 2.1, we can write , with . From this and (21) we get that

(22)

Since (22) holds for every we get from Lemma 1 in [15] that . iv) ⇒ v): This is immediate. v) ⇒ i): Consider any and set . We have from (18) that

(23)

and that -almost everywhere in . From we have that -almost everywhere in , where

. Combining these results and from (18) we have, after noting that , that

(24)

-almost everywhere in . From (24) we get that , and that -almost everywhere in . Therefore

(25)

Combining (23) and (25) we get that. Iterating this equation and making , where

is as in (7), we get that . Notice from (14), (16), and (24) again, that

. We conclude that

showing, according to Definition 3.1, that system (2) is SS.



iii) ⇒ iv): As before we recall that, from Lemma 1 in [15], implies that there exists , ,

such that for every . Consider any . Define, for , as follows:

. It is easy to see that and that for every . Thus from the monotonicity result for positive semi-definite matrices (see Lemma 3.1 in [26]) we have that there exists

such that as , -almost everywhere in , with . Moreover, by construction, and, from the Lebesgue convergence theorem,

. Thus -almost everywhere in we get that

so that (17) holds, completing the proof.

We present next some sufficient conditions for the existence of a solution of the Lyapunov-like equation (18).

Corollary 5.4: If there exist and such that

, , and -almost everywhere on

(26)

then system (2) is SS.

Proof: Set , so that . We have

that -almost everywhere on , so that (18) is satisfied, and

the result follows from Theorem 5.3.

Corollary 5.5: If there exist and such that

, , and -almost everywhere on

(27)

then system (2) is SS.

Proof: Since

we conclude that (27) implies (26). We now return to Example 3.3.

Example 5.6: For define , with ,

and set , , 2. From Corollary 5.5 and (27) we get that, if and

then the system in Example 3.3 is SS. Assuming that , , 2, we obtain that the above conditions turn out to be equivalent to

From this we can eliminate and obtain a condition directly as a function of and , which is . Using the data in Table I we have that a sufficient condition for SS of the system in Example 3.3 is that .

VI. FINAL COMMENTS

In this paper we have derived necessary and sufficient conditions for the stochastic stability of discrete-time Markov jump linear systems, with the Markov chain taking values in a general Borel space. In Theorem 5.3 we have shown that SS is equivalent to the spectral radius of an operator being less than one, or to the existence of a solution of a Lyapunov-like equation. Some sufficient conditions are also presented in Corollaries 5.4 and 5.5. These results generalize previous ones when restricted to the finite or infinite countable cases (see, for instance, [3], [4] and [6]).

ACKNOWLEDGMENT

The authors are grateful to the anonymous referees for their suggestions, which have greatly improved the presentation of the paper.

REFERENCES
[1] P. Bolzern, P. Colaneri, and G. De Nicolao, "On almost sure stability of discrete-time Markov jump linear systems," in Proc. 43rd IEEE CDC, Bahamas, 2004, pp. 3204–3208.
[2] F. M. Callier and C. A. Desoer, Linear System Theory. Berlin: Springer-Verlag, 1991.
[3] O. L. V. Costa and M. D. Fragoso, "Stability results for discrete-time linear systems with Markovian jumping parameters," J. Mathemat. Anal. Appl., vol. 179, pp. 154–178, 1993.
[4] O. L. V. Costa and M. D. Fragoso, "Discrete-time LQ-optimal control problems for infinite Markov jump parameter systems," IEEE Trans. Automat. Control, vol. 40, pp. 2076–2088, 1995.
[5] O. L. V. Costa and M. D. Fragoso, "Comments on stochastic stability of jump linear systems," IEEE Trans. Automat. Control, vol. 49, pp. 1414–1416, 2004.
[6] O. L. V. Costa, M. D. Fragoso, and R. P. Marques, Discrete-Time Markov Jump Linear Systems. Berlin: Springer, 2005, Probability and Its Applications.
[7] F. Dufour and P. Bertrand, "Stabilizing control law for hybrid models," IEEE Trans. Automat. Control, vol. 39, pp. 2354–2357, 1994.
[8] Y. Fang, "A new general sufficient condition for almost sure stability of jump linear systems," IEEE Trans. Automat. Control, vol. 42, pp. 378–382, 1997.
[9] Y. Fang and K. A. Loparo, "Stochastic stability of jump linear systems," IEEE Trans. Automat. Control, vol. 47, no. 7, pp. 1204–1208, 2002.
[10] X. Feng, K. A. Loparo, Y. Ji, and H. J. Chizeck, "Stochastic stability properties of jump linear systems," IEEE Trans. Automat. Control, vol. 37, pp. 38–53, 1992.
[11] O. Hernández-Lerma and J. B. Lasserre, Discrete-Time Markov Control Processes: Basic Optimality Criteria. New York: Springer-Verlag, 1996, vol. 30, Applications of Mathematics.
[12] O. Hernández-Lerma and J. B. Lasserre, Further Topics on Discrete-Time Markov Control Processes. New York: Springer-Verlag, 1999, vol. 42, Applications of Mathematics.
[13] Y. Ji and H. J. Chizeck, "Controllability, observability and discrete-time Markovian jump linear quadratic control," Int. J. Control, vol. 48, pp. 481–498, 1988.
[14] Y. Ji, H. J. Chizeck, X. Feng, and K. A. Loparo, "Stability and control of discrete-time jump linear systems," Control Theory Adv. Technol., vol. 7, pp. 247–270, 1991.
[15] C. S. Kubrusly, "Mean square stability for discrete bounded linear systems in Hilbert spaces," SIAM J. Control Optim., vol. 23, pp. 19–29, 1985.
[16] C. Li, M. Z. Q. Chen, J. Lam, and X. Mao, "On exponential almost sure stability of random jump systems," IEEE Trans. Automat. Control, vol. 57, no. 12, pp. 3064–3077, 2012.
[17] Q. Ling and H. Deng, "A new proof to the necessity of a second moment stability condition of discrete-time Markov jump linear systems with real states," J. Appl. Mathemat., vol. 2012, pp. 1–10, 2012.
[18] M. Mariton, "Almost sure and moments stability of jump linear systems," Syst. Control Lett., vol. 30, pp. 1145–1147, 1985.
[19] M. Mariton, Jump Linear Systems in Automatic Control. New York: Marcel Dekker, 1990.
[20] T. Morozan, "Stabilization of some stochastic discrete-time control systems," Stochastic Analysis and Appl., vol. 1, pp. 89–116, 1983.
[21] A. W. Naylor and G. R. Sell, Linear Operator Theory in Engineering and Science, 2nd ed. Berlin: Springer-Verlag, 1982.
[22] M. A. Rami and L. El Ghaoui, "Robust stabilization of jump linear systems using linear matrix inequalities," in Proc. IFAC Symp. Robust Control Design, Rio de Janeiro, 1994, pp. 148–151.
[23] D. D. Sworder and R. O. Rogers, "An LQG solution to a control problem with solar thermal receiver," IEEE Trans. Automat. Control, vol. 28, pp. 971–978, 1983.
[24] M. G. Todorov and M. D. Fragoso, "On the robust stability, stabilization, and stability radii of continuous-time infinite Markov jump linear systems," SIAM J. Control Optim., vol. 49, pp. 1171–1196, 2011.
[25] Wikipedia, "σ-Finite Measure" [Online]. Available: http://en.wikipedia.org/wiki/Sigma-finite_measure, 2013.
[26] W. M. Wonham, "On a matrix Riccati equation of stochastic control," SIAM J. Control, vol. 6, pp. 681–697, 1968.