
Systems & Control: Foundations & Applications

Series Editor
Tamer Başar, University of Illinois at Urbana-Champaign, Urbana, IL, USA

Editorial Board
Karl Johan Åström, Lund University of Technology, Lund, Sweden
Han-Fu Chen, Academia Sinica, Beijing, China
Bill Helton, University of California, San Diego, CA, USA
Alberto Isidori, University of Rome, Rome, Italy; Washington University, St. Louis, MO, USA
Miroslav Krstic, University of California, San Diego, CA, USA
Alexander Kurzhanski, University of California, Berkeley, CA, USA; Russian Academy of Sciences, Moscow, Russia
H. Vincent Poor, Princeton University, Princeton, NJ, USA
Mete Soner, Koç University, Istanbul, Turkey

For further volumes: www.springer.com/series/4895


Vladimir G. Boltyanski · Alexander S. Poznyak

The Robust Maximum Principle

Theory and Applications


Vladimir G. Boltyanski
CIMAT
Jalisco S/N, Col. Valenciana
36240 Guanajuato, Mexico
[email protected]

Alexander S. Poznyak
Automatic Control Department
CINVESTAV-IPN, AP-14-740
Av. IPN-2508, Col. San Pedro Zacatenco
07000 México, D.F., Mexico
[email protected]

ISBN 978-0-8176-8151-7
e-ISBN 978-0-8176-8152-4
DOI 10.1007/978-0-8176-8152-4
Springer New York Dordrecht Heidelberg London

Library of Congress Control Number: 2011941921

Mathematics Subject Classification (2010): 49Ixx, 49Kxx, 49Lxx, 93Axx, 93Bxx, 93Cxx, 93Lxx, 60Gxx

© Springer Science+Business Media, LLC 2012

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.

The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights.

Printed on acid-free paper

Springer is part of Springer Science+Business Media (www.birkhauser-science.com)


To Erica and Tatyana with love.


Preface

Optimal Control is a rapidly expanding field developed during the last half-century to analyze the optimal behavior of a constrained process that evolves in time according to prescribed laws. Its applications now embrace a variety of new disciplines such as economics and production planning. The main feature of Classical Optimal Control Theory (OCT) is that the mathematical technique, especially designed for the analysis and synthesis of an optimal control of dynamic models, is based on the assumption that a designer (or an analyst) possesses complete information on the considered model as well as on the environment where this controlled model has to evolve.

There exist two principal approaches to solving optimal control problems in the presence of complete information on the dynamic models considered:

– the first is the Maximum Principle (MP) of L. Pontryagin (Boltyanski et al. 1956);

– the second is the Dynamic Programming Method (DPM) of R. Bellman (Bellman 1957).

The Maximum Principle is a basic instrument for deriving a set of necessary conditions that should be satisfied by any optimal solution (see also Boltyanski 1975, 1978; Dubovitski and Milyutin 1971; Sussman 1987a, 1987b, 1987c). Thus, to solve a static optimization problem in a finite-dimensional space, one should obtain the so-called zero-derivative condition (in the case of unconstrained optimization) or the Kuhn–Tucker conditions (in the case of constrained optimization). These conditions become sufficient under certain convexity assumptions on the objective as well as on the constraint functions. Optimal control problems, on the other hand, may be regarded as optimization problems in the corresponding infinite-dimensional (Hilbert or, in general, Banach) spaces. The Maximum Principle is really a milestone of modern optimal control theory. It states that any dynamic system, closed by an optimal control strategy or, simply, by an optimal control, is a Hamiltonian system (of doubled dimension) described by a system of forward-backward ordinary differential equations; in addition, an optimal control maximizes a function called the Hamiltonian. Its mathematical importance derives from the following


fact: the maximization of the Hamiltonian with respect to a control variable given in a finite-dimensional space looks, and really is, much easier than the original optimization problem formulated in an infinite-dimensional space. The key idea of the original version of the Maximum Principle comes from classical variational calculus. To derive the main MP formulation, one first needs to perturb an optimal control slightly, using the so-called needle-shaped (spike) variations, and, second, to consider the first-order term in a Taylor expansion with respect to this perturbation. Letting the perturbations go to zero, some variational inequalities may be obtained. The final result then follows directly from duality. The same formulation can be arrived at based on more general concepts related to a geometric representation and separability theorems in Banach space. This approach is called the Tent Method. It is the key mathematical apparatus used in this book.
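The conditions just described can be summarized schematically as follows (a standard Mayer-form statement; the notation here is illustrative, and the precise formulation with all assumptions is given in Chap. 2):

```latex
% Schematic Maximum Principle for the Mayer problem (notation illustrative):
%   minimize J = h_0(x(T)) subject to \dot{x} = f(x,u,t), x(0) = x_0, u(t) \in U.
\begin{aligned}
H(x,\psi,u,t) &:= \psi^{\top} f(x,u,t)
  && \text{(Hamiltonian)}\\
\dot{x}^{*}(t) = \frac{\partial H}{\partial \psi}, \quad
\dot{\psi}(t) &= -\frac{\partial H}{\partial x}
  && \text{(forward--backward pair)}\\
u^{*}(t) &\in \arg\max_{u \in U} H\bigl(x^{*}(t),\psi(t),u,t\bigr)
  && \text{(maximality condition)}\\
\psi(T) &= -\frac{\partial h_{0}}{\partial x}\bigl(x^{*}(T)\bigr)
  && \text{(transversality condition)}
\end{aligned}
```

The doubled dimension mentioned above is visible here: the state x and the adjoint ψ together satisfy a 2n-dimensional Hamiltonian system, with x integrated forward and ψ backward in time.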

The Dynamic Programming Method (DPM) is another powerful approach to solving optimal control problems. It provides sufficient conditions for testing whether a control is optimal or not. The basic idea of this approach consists of considering a family of optimal control problems with different initial conditions (times and states) and obtaining some relationships among them via the so-called Hamilton–Jacobi–Bellman (HJB) equation, which is a nonlinear first-order partial differential equation. If this HJB equation is solvable (analytically or even numerically), then the optimal control can be obtained by maximization (or minimization) of the corresponding generalized Hamiltonian. Such optimal controllers turn out to be given by a nonlinear feedback depending on the plant nonlinearities as well as on the solution of the corresponding HJB equation. This approach actually provides solutions to the entire family of optimization problems and, in particular, to the original problem. Such a technique is called “Invariant Embedding.” The major drawback of the classical HJB method is that it requires the partial differential equation to admit a smooth enough solution. Unfortunately, this is not the case even in some very simple situations. To overcome this problem, the so-called viscosity solutions have been introduced (Crandall and Lions 1983). These are a sort of nonsmooth solutions whose key feature is to replace the conventional derivatives by set-valued super/subdifferentials, maintaining the uniqueness of the solutions under very mild conditions. This approach not only saves the DPM as a mathematical method, but also makes it a powerful tool for tackling optimal control. In this book we briefly touch on this approach and also discuss the gap between the necessary (MP) and sufficient (DPM) conditions, applying these considerations to some particular problems.
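In the same illustrative notation (with a running cost ℓ and terminal cost h₀; the rigorous treatment, including the verification theorem, appears in Chap. 3), the HJB equation for the value function V(t, x) reads:

```latex
% Hamilton–Jacobi–Bellman equation for V(t,x) (schematic; notation illustrative)
\begin{aligned}
-\frac{\partial V}{\partial t}(t,x)
  &= \min_{u \in U}\Bigl[\ell(x,u,t)
     + \Bigl(\frac{\partial V}{\partial x}(t,x)\Bigr)^{\!\top} f(x,u,t)\Bigr],
  \qquad V(T,x) = h_{0}(x),\\
u^{*}(t,x)
  &\in \arg\min_{u \in U}\Bigl[\ell(x,u,t)
     + \Bigl(\frac{\partial V}{\partial x}(t,x)\Bigr)^{\!\top} f(x,u,t)\Bigr].
\end{aligned}
```

Note that u* is obtained here as a feedback in (t, x), in contrast to the open-loop control produced by the Maximum Principle.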

When we do not have complete information on a dynamic model to be controlled, the main problem entails designing an acceptable control that remains “close to the optimal one,” that is, one having a low sensitivity with respect to an unknown (unpredictable) parameter or input belonging to a given possible set. In other words, the desired control should be robust with respect to the unknown factors. In the presence of any sort of uncertainty (parametric type, unmodeled dynamics, or external perturbations), the main approach to obtaining a solution suitable for a


class of given models is to formulate a corresponding Min-Max control problem, where maximization is taken over a set of uncertainties and minimization is taken over control actions within a given set. Min-Max controller design for different classes of nonlinear systems has been a hot topic of research over the last two decades.
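Schematically, with 𝒜 denoting the uncertainty set and U_adm the class of admissible controls (both symbols illustrative rather than the book's own notation), the worst-case design problem reads:

```latex
% Generic Min-Max (worst-case) optimal control problem (notation illustrative)
J^{*} \;=\; \min_{u(\cdot)\,\in\,U_{\mathrm{adm}}}\;
            \max_{\alpha\,\in\,\mathcal{A}}\;
            J\bigl(u(\cdot);\alpha\bigr)
```

so that the chosen control minimizes the cost incurred under the least favorable admissible realization of the unknown parameter α.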

One of the important components of Min-Max Control Theory is the game-theoretic approach (Başar and Bernhard 1991). In terms of game theory, control and model uncertainty are strategies employed by opposing players in a game: the control is chosen to minimize a cost function, and the uncertainty is chosen to maximize it. To the best of our knowledge, the earliest publications in this direction were the papers of Dorato and Drenick (1966) and Krasovskii (1969, in Russian). Subsequently, in the book by Kurjanskii (1977), the Lagrange Multiplier Approach was applied to problems of control and observation under incomplete information, formulated as corresponding Min-Max problems.

Starting from the pioneering work of Zames (1981), which dealt with frequency-domain methods to minimize the norm of the transfer function between the disturbance inputs and the performance output, the minimax controller design is formulated as an H∞-optimization problem. As was shown in Başar and Bernhard (1991), this specific problem can be successfully solved in the time domain, leading to a rapprochement with dynamic game theory and the establishment of a relationship with risk-sensitive quadratic stochastic control (Doyle et al. 1989; Glover and Doyle 1988; Limebeer et al. 1989; Khargonekar 1991). The paper by Limebeer et al. (1989) presented a control design method for continuous-time plants whose uncertain parameters in the output matrix are only known to lie within an ellipsoidal set. An algorithm for Min-Max control, which at every iteration approximately minimizes the defined Hamiltonian, is presented in Pytlak (1990). In the publication by Didinsky and Başar (1994), using the “cost-to-come” method, the authors showed that the original problem with incomplete information can be converted into an equivalent full-information Min-Max control problem of a higher dimension, which can be solved using the Dynamic Programming Approach. Min-Max control of a class of dynamic systems with mixed uncertainties was investigated in Başar (1994), where a continuous deterministic uncertainty affecting the system dynamics and discrete stochastic uncertainties leading to jumps in the system structure at random times were studied; the solution involves a finite-dimensional compensator using two finite sets of partial differential equations. A robust controller for linear time-varying systems given by a stochastic differential equation was studied in Poznyak and Taksar (1996); the solution was based on a stochastic Lyapunov-like analysis with a martingale technique implementation.

Another class of problems, dealing with discrete-time models of a deterministic and/or stochastic nature and their corresponding solutions, was discussed in Didinsky and Başar (1991), Blom and Everdij (1993), and Bernhard (1994). A comprehensive survey of various parameter-space methods for robust control design can be found in Siljak (1989).

In this book we present a new version of the Maximum Principle, recently developed, in particular, for the construction of optimal control strategies for the class


of uncertain systems given by a system of ordinary differential equations with unknown parameters belonging to a given set (finite or compact) which corresponds to different scenarios of the possible dynamics. Such problems, dealing with finite uncertainty sets, are very common, for example, in Reliability Theory, where some of the sensors or actuators may fail, leading to a complete change in the structure of the system to be controlled (each of the possible structures can be associated with one of the fixed parameter values). The problem under consideration belongs to the class of optimization problems of the Min-Max type. The proof is based on the Tent Method (Boltyanski 1975, 1987), which is discussed in the following text. We show that in the general case the original problem can be converted into the analysis of nonsolid convex cones, for which the Dubovitski–Milyutin method (Dubovitski and Milyutin 1965) for deriving the corresponding necessary conditions of optimality is inapplicable, whereas the Tent Method remains operative.

This book is intended for experts, scientists, and researchers in the field of Control Theory. However, it may also be of interest to scholars who want to use the results of Control Theory in complex cases, in engineering, and in management science. It will also be useful for students who pursue Ph.D.-level or advanced graduate-level courses, and it may serve for training and research purposes.

The present book is both a refinement and an extension of the authors’ earlier publications and consists of four complementary parts.

Part I: Topics of Classical Optimal Control.
Part II: The Tent Method.
Part III: Robust Maximum Principle for Deterministic Systems.
Part IV: Robust Maximum Principle for Stochastic Systems.

Part I presents a review of Classical Optimal Control Theory and includes two main topics: the Maximum Principle and Dynamic Programming. Two important subproblems, Linear Quadratic Optimal Control and Time Optimization, are considered in more detail. This part of the book can be read independently and may be recommended (with more examples added) for a postgraduate course in Optimal Control Theory, as well as for self-study by wide groups of electrical and mechanical engineers.

Part II introduces the reader to the Tent Method, which is, in fact, the basic mathematical tool for the rigorous proof and justification of one of the main results of Optimal Control Theory. The Tent Method is shown to be a general tool for solving extremal problems, profoundly justifying the so-called Separation Principle. It was first developed in finite-dimensional spaces, using topology to justify some results in variational calculus. A short historical remark on the Tent Method is made, and the idea of the proof of the Maximum Principle is explained, paying special attention to the necessary topological tools. The finite-dimensional version of the Tent Method allows one to establish the Maximum Principle and a generalization of the Kuhn–Tucker Theorem in Euclidean spaces. In this part, we also present a version of the Tent Method in Banach spaces and demonstrate its application to a


generalization of the Kuhn–Tucker Theorem and the Lagrange Principle for infinite-dimensional spaces.

This part is much more advanced than the others and is accessible only to readers with a strong background in mathematics, particularly in topology. Those who find it difficult to follow the topological (homology) arguments can omit the proofs of the basic theorems and try to understand only their principal statements.

Part III is the central part of this book. It presents a robust version of the Maximum Principle dealing with the construction of Min-Max control strategies for the class of uncertain systems described by an ordinary differential equation with unknown parameters from a given compact set. A finite collection of parameters corresponds to different scenarios of possible dynamics. The proof is based on the Tent Method described in the previous part of the book. The Min-Max Linear Quadratic (LQ) Control Problem is considered in detail. It is shown that the design of the Min-Max optimal controller in this case may be reduced to a finite-dimensional optimization problem given on the corresponding simplex set containing the weight parameters to be found. The robust LQ optimal control may be interpreted as a mixture (with optimal weights) of the controls which are optimal for each fixed parameter value. Robust time optimality is also considered (as a particular case of the Lagrange problem). Usually, the Robust Maximum Principle appears only as a necessary condition for robust optimality, but the specific character of the linear time-optimization problem permits us to obtain more profound results. In particular, in this case the Robust Maximum Principle appears as both a necessary and a sufficient condition. Moreover, for linear robust time optimality, it is possible to establish some additional results: the existence and uniqueness of robust controls, piecewise constant robust controls for a polyhedral resource set, and a Feldbaum-type estimate for the number of intervals of constancy (or “switchings”). All these aspects are studied in detail in this part of the book. Dynamic Programming for Min-Max problems is also derived. A comparison of the optimal controllers designed by the Maximum Principle and by Dynamic Programming for LQ problems is carried out. Applications of the results obtained to Multimodel Sliding Mode Control and Multimodel Differential Games are also presented.
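The mixture structure of the robust LQ controller mentioned above can be sketched as follows (schematic; the symbols λ, S_N, and u^(α) are illustrative, with the precise construction given in Part III):

```latex
% Robust LQ control as an optimally weighted mixture over N scenarios (schematic)
\begin{aligned}
u^{\mathrm{rob}}(t,x)
  &= \sum_{\alpha=1}^{N} \lambda^{*}_{\alpha}\, u^{(\alpha)}(t,x),
  \qquad
  \lambda^{*} \in \arg\min_{\lambda \in S_{N}} J(\lambda),\\
S_{N}
  &= \Bigl\{\lambda \in \mathbb{R}^{N} :
      \lambda_{\alpha} \ge 0,\ \textstyle\sum_{\alpha=1}^{N} \lambda_{\alpha} = 1\Bigr\},
\end{aligned}
```

where u^(α) denotes the LQ-optimal feedback for the α-th fixed parameter value and J(λ) the resulting finite-dimensional cost over the simplex S_N.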

Part IV deals with the Robust Maximum Principle for Stochastic Systems described by stochastic differential equations (with the Itô integral implementation) and subject to terminal constraints. The main goal of this part is to illustrate the possibilities of the MP approach for a class of Min-Max control problems for uncertain systems given by a system of linear stochastic differential equations with controlled drift and diffusion terms and unknown parameters within a given finite and, in general, compact uncertainty set, supplemented by a given measure. If in the deterministic case the adjoint equations are backward ordinary differential equations and represent, in some sense, the same forward equation but in reverse time, then in the stochastic case such an interpretation is not valid, because any time reversal may destroy the nonanticipative character of the stochastic solutions; that is, any obtained robust control should be independent of the future. The proof of the


Robust Maximum Principle is also based on the use of the Tent Method, but with a special technique specific to stochastic calculus. The Hamiltonian function used for these constructions is equal to the Lebesgue integral, over the given uncertainty set, of the standard stochastic Hamiltonians corresponding to a fixed value of the uncertain parameter. Two illustrative examples, dealing with production planning and reinsurance-dividend management, conclude this part.

Most of the material given in this book has been tested in class at the Steklov Mathematical Institute (Moscow, 1962–1980), the Institute of Control Sciences (Moscow, 1978–1993), the Mathematical Investigation Center of Mexico (CIMAT, Guanajuato, 1995–2006), and the Center of Investigation and Advanced Education of IPN (CINVESTAV, Mexico, 1993–2009). Some studies, dealing with multimodel sliding-mode control and multimodel differential games, present the main results of Ph.D. theses defended by our students during the last few years.

Acknowledgments  The authors wish to express their appreciation to those who participated in collaboration and profound discussions concerning Optimal Control in general, and the Robust Maximum Principle in particular: V. Alexandrov (Moscow State University), T. Duncan (the University of Kansas at Lawrence), B. Polyak (Institute of Control Science, Russian Academy of Sciences), L. Fridman (National Autonomous University of Mexico), Y. Shtessel (Alabama State University at Huntsville), V. Utkin (Ohio State University), and V. Azhmyakov (CINVESTAV, Mexico), among others. It is a pleasure to recognize the impact of F. Bejerano and M. Jimenez (our former Ph.D. students) on the results presented and to thank them for their enthusiasm, encouragement, and suggestions. We feel deep gratitude to our colleagues and Master of Science students over the years, without whose evident collaboration, comments, enjoyment, and expressed appreciation the current work would not have been undertaken. We wish to thank our wives, Erica and Tatyana, for their patience and deep understanding throughout the years. We also wish to acknowledge the editors of Birkhäuser for being so cooperative during the production process.

Vladimir G. Boltyanski
Alexander S. Poznyak

Mexico, D.F., Mexico


Contents

1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Part I Topics of Classical Optimal Control

2 The Maximum Principle . . . . . . . . . . . . . . . . . . . . . . . . . 92.1 Optimal Control Problem . . . . . . . . . . . . . . . . . . . . . . 9

2.1.1 Controlled Plant, Cost Functionals, and Terminal Set . . . . 92.1.2 Feasible and Admissible Control . . . . . . . . . . . . . . 112.1.3 Setting of the Problem in the General Bolza Form . . . . . 112.1.4 Representation in the Mayer Form . . . . . . . . . . . . . 12

2.2 Maximum Principle Formulation . . . . . . . . . . . . . . . . . . 122.2.1 Needle-Shaped Variations and Variational Equation . . . . 122.2.2 Adjoint Variables and MP Formulation for Cost

Functionals with a Fixed Horizon . . . . . . . . . . . . . . 152.2.3 The Regular Case . . . . . . . . . . . . . . . . . . . . . . 182.2.4 Hamiltonian Form and Constancy Property . . . . . . . . . 192.2.5 Variable Horizon Optimal Control Problem and Zero

Property . . . . . . . . . . . . . . . . . . . . . . . . . . . 212.2.6 Joint Optimal Control and Parametric Optimization Problem 232.2.7 Sufficient Conditions of Optimality . . . . . . . . . . . . . 25

2.3 Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282.3.1 Linear ODE and Liouville’s Theorem . . . . . . . . . . . . 282.3.2 Bihari Lemma . . . . . . . . . . . . . . . . . . . . . . . . 312.3.3 Gronwall Lemma . . . . . . . . . . . . . . . . . . . . . . 342.3.4 The Lagrange Principle in Finite-Dimensional Spaces . . . 34

3 Dynamic Programming . . . . . . . . . . . . . . . . . . . . . . . . . 453.1 Bellman’s Principle of Optimality . . . . . . . . . . . . . . . . . . 45

3.1.1 Formulation of the Principle . . . . . . . . . . . . . . . . . 453.1.2 Sufficient Conditions for BPO . . . . . . . . . . . . . . . . 46

3.2 Invariant Embedding and Dynamic Programming . . . . . . . . . 483.2.1 System Description and Basic Assumptions . . . . . . . . 48

xiii

Page 13: Systems & Control: Foundations & Applications978-0-8176-8152-4/1.pdf · Systems & Control: Foundations & Applications Series Editor Tamer Ba¸ sar, University of Illinois at Urbana-Champaign,

xiv Contents

3.2.2 The Dynamic Programming Equation in the Integral Form . 493.2.3 The Hamilton–Jacobi–Bellman First-Order Partial

Differential Equation and the Verification Theorem . . . . . 503.3 HJB Smooth Solution Based on First Integrals . . . . . . . . . . . 54

3.3.1 First Integrals . . . . . . . . . . . . . . . . . . . . . . . . 543.3.2 Structured Hamiltonians . . . . . . . . . . . . . . . . . . . 553.3.3 HJB Solution Based on First Integrals . . . . . . . . . . . . 57

3.4 The Deterministic Feynman–Kac Formula: the General SmoothCase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.5 The Viscosity Solutions Concept: Nonsmooth Case . . . . . . . . 633.5.1 Vanishing Viscosity . . . . . . . . . . . . . . . . . . . . . 643.5.2 Definition of a Viscosity Solution . . . . . . . . . . . . . . 643.5.3 Existence of a Viscosity Solution . . . . . . . . . . . . . . 65

3.6 Time-Averaged Cost Optimal Control . . . . . . . . . . . . . . . . 663.6.1 Time-Averaged Cost Stationary Optimization: Problem

Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . 663.6.2 HJB Equation and the Verification Rule . . . . . . . . . . . 663.6.3 Affine Dynamics with a Quadratic Cost . . . . . . . . . . . 67

4 Linear Quadratic Optimal Control . . . . . . . . . . . . . . . . . . . 714.1 Formulation of the Problem . . . . . . . . . . . . . . . . . . . . . 71

4.1.1 Nonstationary Linear Systems . . . . . . . . . . . . . . . . 714.1.2 Linear Quadratic Problem . . . . . . . . . . . . . . . . . . 72

4.2 Maximum Principle for the DLQ Problem . . . . . . . . . . . . . 734.2.1 Formulation of the MP . . . . . . . . . . . . . . . . . . . . 734.2.2 Sufficiency Condition . . . . . . . . . . . . . . . . . . . . 74

4.3 The Riccati Differential Equation and Feedback Optimal Control . 754.3.1 The Riccati Differential Equation . . . . . . . . . . . . . . 754.3.2 Linear Feedback Control . . . . . . . . . . . . . . . . . . 754.3.3 Analysis of the Differential Riccati Equation and the

Uniqueness of Its Solution . . . . . . . . . . . . . . . . . . 784.4 Stationary Systems on the Infinite Horizon . . . . . . . . . . . . . 82

4.4.1 Stationary Systems and the Infinite Horizon Cost Function . 824.4.2 Controllability, Stabilizability, Observability, and

Detectability . . . . . . . . . . . . . . . . . . . . . . . . . 824.4.3 Sylvester and Lyapunov Matrix Equations . . . . . . . . . 924.4.4 Direct Method . . . . . . . . . . . . . . . . . . . . . . . . 994.4.5 DPM Approach . . . . . . . . . . . . . . . . . . . . . . . 101

4.5 Matrix Riccati Equation and the Existence of Its Solution . . . . . 1034.5.1 Hamiltonian Matrix . . . . . . . . . . . . . . . . . . . . . 1044.5.2 All Solutions of the Algebraic Riccati Equation . . . . . . 1054.5.3 Hermitian and Symmetric Solutions . . . . . . . . . . . . . 1094.5.4 Nonnegative and Positive-Definite Solutions . . . . . . . . 116

4.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

Page 14: Systems & Control: Foundations & Applications978-0-8176-8152-4/1.pdf · Systems & Control: Foundations & Applications Series Editor Tamer Ba¸ sar, University of Illinois at Urbana-Champaign,

Contents xv

5 Time-Optimization Problem . . . . . . . . . . . . . . . . . . . . . . . 1195.1 Nonlinear Time Optimization . . . . . . . . . . . . . . . . . . . . 119

5.1.1 Representation of the Cost Function . . . . . . . . . . . . . 1195.1.2 Hamiltonian Representation and Optimal Control . . . . . 119

5.2 Linear Time Optimization . . . . . . . . . . . . . . . . . . . . . . 1205.2.1 Structure of Optimal Control . . . . . . . . . . . . . . . . 1205.2.2 Theorem on n-Intervals for Stationary Linear Systems . . . 121

5.3 Solution of the Simplest Time-Optimization Problem . . . . . . . 1235.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Part II The Tent Method

6 The Tent Method in Finite-Dimensional Spaces . . . . . . . . . . . . 1316.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

6.1.1 On the Theory of Extremal Problems . . . . . . . . . . . . 1316.1.2 On the Tent Method . . . . . . . . . . . . . . . . . . . . . 132

6.2 The Classical Lagrange Problem and Its Generalization . . . . . . 1326.2.1 A Conditional Extremum . . . . . . . . . . . . . . . . . . 1326.2.2 Abstract Extremal and Intersection Problems . . . . . . . . 1346.2.3 Interpretation of the Mayer Problem . . . . . . . . . . . . 134

6.3 Basic Ideas of the Tent Method . . . . . . . . . . . . . . . . . . . 1356.3.1 Tent and Support Cone . . . . . . . . . . . . . . . . . . . 1356.3.2 Separable Convex Cones . . . . . . . . . . . . . . . . . . 1366.3.3 How the Basic Theorems May Be Proven . . . . . . . . . . 1386.3.4 The Main Topological Lemma . . . . . . . . . . . . . . . 139

6.4 The Maximum Principle by the Tent Method . . . . . . . . . . . . 1426.5 Brief Historical Remark . . . . . . . . . . . . . . . . . . . . . . . 1466.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

7 Extremal Problems in Banach Spaces . . . . . . . . . . . . . . . . . . 1497.1 An Abstract Extremal Problem . . . . . . . . . . . . . . . . . . . 149

7.1.1 Formulation of the Problem . . . . . . . . . . . . . . . . . 1497.1.2 The Intersection Theorem . . . . . . . . . . . . . . . . . . 149

7.2 Some Definitions Related to Banach Spaces . . . . . . . . . . . . 1507.2.1 Planes and Convex Bodies . . . . . . . . . . . . . . . . . . 1517.2.2 Smooth Manifolds . . . . . . . . . . . . . . . . . . . . . . 1517.2.3 Curved Half-spaces . . . . . . . . . . . . . . . . . . . . . 153

7.3 Tents in Banach Spaces . . . . . . . . . . . . . . . . . . . . . . . 1537.3.1 Definition of a Tent . . . . . . . . . . . . . . . . . . . . . 1537.3.2 Maximal Tent . . . . . . . . . . . . . . . . . . . . . . . . 154

7.4 Subspaces in the General Position . . . . . . . . . . . . . . . . . . 1587.4.1 Main Definition . . . . . . . . . . . . . . . . . . . . . . . 1587.4.2 General Position for Subspaces in Banach Space . . . . . . 160

7.5 Separability of a System of Convex Cones . . . . . . . . . . . . . 1687.5.1 Necessary Conditions for Separability . . . . . . . . . . . 1687.5.2 Criterion for Separability . . . . . . . . . . . . . . . . . . 170

Page 15: Systems & Control: Foundations & Applications978-0-8176-8152-4/1.pdf · Systems & Control: Foundations & Applications Series Editor Tamer Ba¸ sar, University of Illinois at Urbana-Champaign,

xvi Contents

7.5.3 Separability in Hilbert Space . . . . . . . . . . . . . . . . 1757.6 Main Theorems on Tents . . . . . . . . . . . . . . . . . . . . . . 176

7.6.1 Some Fundamental Theorems on Tents . . . . . . . . . . . 1767.6.2 Solution of the Abstract Intersection Problem . . . . . . . . 182

7.7 Analog of the Kuhn–Tucker Theorem for Banach Spaces . . . . . 1837.7.1 Main Theorem . . . . . . . . . . . . . . . . . . . . . . . . 1837.7.2 Regular Case . . . . . . . . . . . . . . . . . . . . . . . . . 186

Part III Robust Maximum Principle for Deterministic Systems

8 Finite Collection of Dynamic Systems . . . 191
8.1 System Description and Basic Definitions . . . 191
8.1.1 Controlled Plant . . . 191
8.1.2 Admissible Control . . . 192
8.2 Statement of the Problem . . . 193
8.2.1 Terminal Conditions . . . 193
8.2.2 Minimum Cost Function . . . 193
8.2.3 Robust Optimal Control . . . 194
8.3 Robust Maximum Principle . . . 194
8.3.1 The Required Formalism . . . 194
8.3.2 Robust Maximum Principle . . . 197
8.4 Proof . . . 198
8.4.1 Active Elements . . . 198
8.4.2 Controllability Region . . . 198
8.4.3 The Set Ω0 of Forbidden Variations . . . 199
8.4.4 Intersection Problem . . . 199
8.4.5 Needle-Shaped Variations . . . 200
8.4.6 Proof of the Maximality Condition . . . 201
8.4.7 Proof of the Complementary Slackness Condition . . . 202
8.4.8 Proof of the Transversality Condition . . . 204
8.4.9 Proof of the Nontriviality Condition . . . 204
8.5 Illustrative Examples . . . 205
8.5.1 Single-Dimensional Plant . . . 205
8.5.2 Numerical Examples . . . 207
8.6 Conclusions . . . 211

9 Multimodel Bolza and LQ Problem . . . 213
9.1 Introduction . . . 213
9.2 Min-Max Control Problem in the Bolza Form . . . 214
9.2.1 System Description . . . 214
9.2.2 Feasible and Admissible Control . . . 214
9.2.3 Cost Function and Min-Max Control Problem . . . 215
9.2.4 Representation of the Mayer Form . . . 216
9.3 Robust Maximum Principle . . . 218
9.4 Min-Max Linear Quadratic Multimodel Control . . . 219
9.4.1 Formulation of the Problem . . . 219


9.4.2 Hamiltonian Form and Parametrization of Robust Optimal Controllers . . . 220
9.4.3 Extended Form of the Closed-Loop System . . . 221
9.4.4 Robust Optimal Control . . . 222
9.4.5 Robust Optimal Control for Linear Stationary Systems with Infinite Horizon . . . 225
9.4.6 Numerical Examples . . . 226
9.5 Conclusions . . . 228

10 Linear Multimodel Time Optimization . . . 229
10.1 Problem Statement . . . 229
10.2 Main Results . . . 230
10.2.1 Main Theorem . . . 232
10.2.2 Existence Theorem . . . 233
10.2.3 Uniqueness . . . 233
10.2.4 Polytope Resource Set . . . 234
10.2.5 The Generalized Feldbaum's n-Interval Theorem . . . 234
10.3 Proofs . . . 235
10.3.1 Proof of Theorem 10.1 . . . 236
10.3.2 Proof of Theorem 10.2 . . . 237
10.3.3 Proof of Theorem 10.3 . . . 237
10.3.4 Proof of Theorem 10.4 . . . 238
10.3.5 Proof of Theorem 10.5 . . . 239
10.4 Examples . . . 240
10.4.1 Example 1 . . . 240
10.4.2 Example 2 . . . 242
10.4.3 Example 3 . . . 244
10.4.4 Example 4 . . . 245
10.4.5 Example 5 . . . 248
10.4.6 Example 6 . . . 249
10.5 Conclusions . . . 251

11 A Measurable Space as Uncertainty Set . . . 253
11.1 Problem Setting . . . 253
11.1.1 Plant with Unknown Parameter . . . 253
11.1.2 Terminal Set and Admissible Control . . . 254
11.1.3 Maximum Cost Function . . . 255
11.1.4 Robust Optimal Control . . . 255
11.2 The Formalism . . . 256
11.3 The Main Theorem . . . 258
11.4 Proof of the Main Result . . . 259
11.4.1 Application of the Tent Method . . . 259
11.4.2 Needle-Shaped Variations and Proof of the Maximality Condition . . . 260
11.4.3 Proof of Complementary Slackness Property . . . 262
11.4.4 Transversality Condition Proof . . . 263


11.4.5 Nontriviality Condition Proof . . . 263
11.5 Some Special Cases . . . 263
11.5.1 Comment on Possible Variable Horizon Extension . . . 263
11.5.2 The Case of Absolutely Continuous Measures . . . 264
11.5.3 Uniform-Density Case . . . 265
11.5.4 Finite Uncertainty Set . . . 265
11.5.5 May the Complementary Slackness Inequalities Be Replaced by the Equalities? . . . 266
11.6 Conclusions . . . 266

12 Dynamic Programming for Robust Optimization . . . 269
12.1 Problem Formulation and Preliminary Results . . . 269
12.2 Robust Version of the Hamilton–Jacobi–Bellman Equation . . . 273
12.3 Dynamic Programming Approach to Multimodel LQ-Type Problems . . . 281
12.4 Conclusions . . . 283

13 Min-Max Sliding-Mode Control . . . 285
13.1 Introduction . . . 285
13.1.1 Brief Description of Sliding-Mode Control . . . 285
13.1.2 Basic Assumptions and Restrictions . . . 286
13.1.3 Main Contribution of this Chapter . . . 287
13.1.4 Structure of the Chapter . . . 287
13.2 Description of the System and Problem Setting . . . 288
13.2.1 Plant Model . . . 288
13.2.2 Control Strategy . . . 288
13.2.3 Performance Index and Formulation of the Problem . . . 289
13.3 Extended Model and Transformation to Regular Form . . . 290
13.4 Min-Max Sliding Surface . . . 292
13.5 Sliding-Mode Control Function Design . . . 294
13.6 Minimal-Time Reaching Phase Control . . . 295
13.7 Successive Approximation of Initial Sliding Hyperplane and Joint Optimal Control . . . 298
13.8 Illustrative Examples . . . 299
13.9 Conclusions . . . 304

14 Multimodel Differential Games . . . 307
14.1 On Differential Games . . . 307
14.1.1 What Are Dynamic Games? . . . 307
14.1.2 Short Review on LQ Games . . . 308
14.1.3 Motivation of the Multimodel—Case Study . . . 310
14.2 Multimodel Differential Game . . . 311
14.2.1 Multimodel Game Descriptions . . . 311
14.2.2 Robust Nash Equilibrium . . . 312
14.2.3 Representation of the Mayer Form . . . 312
14.3 Robust Nash Equilibrium for LQ Differential Games . . . 316
14.3.1 Formulation of the Problem . . . 316


14.3.2 Hamiltonian Equations for Players and Parametrized Strategies . . . 317
14.3.3 Extended Form for the LQ Game . . . 319
14.4 Numerical Procedure for Adjustment of the Equilibrium Weights . . . 324
14.4.1 Some Convexity Properties of the Cost Functional as a Function of the Weights . . . 324
14.4.2 Numerical Procedure . . . 327
14.5 Numerical Example . . . 330
14.6 Prey–Predator Differential Game . . . 333
14.6.1 Multimodel Dynamics . . . 333
14.6.2 Prey–Predator Differential Game . . . 334
14.6.3 Individual Aims . . . 335
14.6.4 Missile Guidance . . . 335
14.7 Conclusions . . . 339

Part IV Robust Maximum Principle for Stochastic Systems

15 Multiplant Robust Control . . . 343
15.1 Introduction . . . 343
15.1.1 A Short Review of Min-Max Stochastic Control . . . 343
15.1.2 Purpose of the Chapter . . . 345
15.2 Stochastic Uncertain System . . . 346
15.3 A Terminal Condition and a Feasible and Admissible Control . . . 348
15.4 Robust Optimal Stochastic Control Problem Setting . . . 348
15.5 Robust Maximum Principle for Min-Max Stochastic Control . . . 349
15.6 Proof of RSMP . . . 352
15.6.1 Proof of Properties 1–3 . . . 352
15.6.2 Proof of Property 4 (Maximality Condition) . . . 358
15.7 Some Important Comments . . . 364
15.8 Illustrative Examples . . . 365
15.8.1 Min-Max Production Planning . . . 365
15.8.2 Min-Max Reinsurance-Dividend Management . . . 371
15.9 Conclusions . . . 375

16 LQ-Stochastic Multimodel Control . . . 377
16.1 Min-Max LQ Control Problem Setting . . . 377
16.1.1 Stochastic Uncertain Linear System . . . 377
16.1.2 Feasible and Admissible Control . . . 378
16.1.3 Robust Optimal Stochastic Control Problem Setting . . . 379
16.2 Robust Maximum Principle for Min-Max LQ-Stochastic Control . . . 380
16.2.1 The Presentation of the Problem in the Mayer Form . . . 380
16.2.2 First- and Second-Order Adjoint Equations . . . 380
16.2.3 Hamiltonian Form . . . 381
16.2.4 Basic Theorem on Robust Stochastic Optimal Control . . . 382
16.2.5 Normalized Form for the Adjoint Equations . . . 384
16.2.6 Extended Form for the Closed-Loop System . . . 386


16.3 Riccati Equation and Robust Optimal Control . . . 388
16.3.1 Robust Stochastic Optimal Control for Linear Stationary Systems with Infinite Horizon . . . 391
16.3.2 Numerical Examples . . . 394
16.4 Conclusions . . . 396

17 A Compact Uncertainty Set . . . 397
17.1 Problem Setting . . . 397
17.1.1 Stochastic Uncertain System . . . 397
17.1.2 A Terminal Condition, a Feasible and Admissible Control . . . 399
17.1.3 Maximum Cost Function and Robust Optimal Control . . . 400
17.2 Robust Stochastic Maximum Principle . . . 401
17.2.1 First- and Second-Order Adjoint Processes . . . 401
17.2.2 Main Result on GRSMP . . . 403
17.2.3 Proof of Theorem 1 (GRSMP) . . . 405
17.3 Discussion . . . 417
17.3.1 The Hamiltonian Structure . . . 417
17.3.2 GRSMP for a Control-Independent Diffusion Term . . . 417
17.3.3 The Case of Complete Information . . . 418
17.3.4 Deterministic Systems . . . 418
17.3.5 Comment on Possible Variable Horizon Extension . . . 419
17.3.6 The Case of Absolutely Continuous Measures for the Uncertainty Set . . . 420
17.3.7 Case of Uniform Density . . . 420
17.3.8 Finite Uncertainty Set . . . 421
17.4 Conclusions . . . 421

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 423

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429


List of Figures

Fig. 1.1 Min-Max optimized function . . . 2
Fig. 2.1 The illustration of the Separation Principle . . . 36
Fig. 3.1 The illustration of Bellman's Principle of Optimality . . . 46
Fig. 4.1 Eigenvalues of H . . . 104
Fig. 5.1 The synthesis of the time-optimal control . . . 127
Fig. 6.1 Mayer optimization problem . . . 135
Fig. 6.2 Tent K . . . 136
Fig. 6.3 The support cone at the point x₁ . . . 136
Fig. 6.4 Separability of cones . . . 136
Fig. 6.5 Illustration to the Topological Lemma . . . 141
Fig. 6.6 A needle-shaped variation . . . 143
Fig. 8.1 A family of trajectories and a terminal set M . . . 193
Fig. 9.1 The function J(λ) on the complete interval λ ∈ [0, 1] . . . 227
Fig. 9.2 The finite horizon performance index (the zoom-in view) as a function of λ . . . 227
Fig. 9.3 The behavior of the trajectories corresponding to the robust optimal control . . . 228
Fig. 9.4 The infinite horizon performance index as a function of λ . . . 228
Fig. 10.1 Trajectories of the system . . . 244
Fig. 10.2 The illustration to the generalized Feldbaum's n-interval theorem . . . 249
Fig. 13.1 F(λ₁) dependence . . . 300
Fig. 13.2 Trajectory behavior x(t) . . . 301
Fig. 13.3 Control action u(t) . . . 302
Fig. 13.4 Sliding surface σ(x, t) for x₁² = x₂² = 0 . . . 303
Fig. 13.5 Sliding surface σ(x, t) for x₁¹ = x₂¹ = 0 . . . 303
Fig. 13.6 Trajectory behavior x(t) . . . 304
Fig. 13.7 Control action u(t) . . . 305
Fig. 13.8 Sliding surface σ(x, t) for x₁¹ = x₂² = x₁³ = x₂³ = 0 . . . 305
Fig. 13.9 Sliding surface σ(x, t) for x₁¹ = x₁² = x₁³ = x₂³ = 0 . . . 306
Fig. 14.1 The cost function of the first player . . . 331
Fig. 14.2 The cost function of the second player . . . 332


Fig. 14.3 Missile collision geometry . . . 336
Fig. 14.4 Missile pursuit evasion . . . 336
Fig. 14.5 The three different cases of the Evader's trajectories . . . 338
Fig. 14.6 The control actions for Pursuit (u₂) and Evader (u₁) guidance . . . 338
Fig. 15.1 The function ϕ(x) . . . 368
Fig. 15.2 Z(x) function . . . 368
Fig. 15.3 The inverse x(Z) mapping . . . 368