Linear Estimation - Saba Web Page
saba.kntu.ac.ir/eecd/taghirad/E books/TOC/Linear estimation.pdf


  • LINEAR ESTIMATION

  • PRENTICE HALL INFORMATION AND SYSTEM SCIENCES SERIES

    Thomas Kailath, Editor

    ÅSTRÖM & WITTENMARK                  Computer-Controlled Systems: Theory and Design, 2/E
    BHATTACHARYYA, CHAPELLAT & KEEL      Robust Control: The Parametric Approach
    FRIEDLAND                            Advanced Control System Design
    GARDNER                              Statistical Spectral Analysis: A Nonprobabilistic Theory
    GREWAL & ANDREWS                     Kalman Filtering: Theory and Practice
    HAYKIN                               Adaptive Filter Theory, 3/E
    HAYKIN, ED.                          Blind Deconvolution
    JAIN & CHELLAPPA                     Fundamentals of Digital Image Processing, 2/E
    KAILATH                              Linear Systems
    KUNG                                 VLSI Array Processors
    KUNG, WHITEHOUSE & KAILATH, EDS.     VLSI and Modern Signal Processing
    KWAKERNAAK & SIVAN                   Signals and Systems
    LJUNG                                System Identification: Theory for the User
    LJUNG & GLAD                         Modeling of Dynamic Systems
    MACOVSKI                             Medical Imaging Systems
    MOSCA                                Stochastic and Predictive Adaptive Control
    NARENDRA & ANNASWAMY                 Stable Adaptive Systems
    NEKOOGAR & MORIARTY                  Digital Control Using Digital Signal Processing
    PORAT                                Digital Processing of Random Signals: Theory & Methods
    RUGH                                 Linear System Theory, 2/E
    SOLIMAN & SRINATH                    Continuous and Discrete-Time Signals and Systems, 2/E
    SOLO & KONG                          Adaptive Signal Processing Algorithms: Stability & Performance
    SRINATH, RAJASEKARAN & VISWANATHAN   Introduction to Statistical Signal Processing with Applications
    WELLS                                Applied Coding and Information Theory for Engineers
    WILLIAMS                             Designing Digital Filters

  • To our parents and families

  • LINEAR ESTIMATION

    T. Kailath              A. H. Sayed    B. Hassibi
    Stanford University     UCLA           Bell Laboratories

    PRENTICE HALL

    Upper Saddle River, New Jersey 07458

  • Library of Congress Cataloging-in-Publication Data

    Kailath, Thomas.
      Linear estimation / T. Kailath, A.H. Sayed, B. Hassibi.
        p. cm.
      Includes bibliographical references.
      ISBN 0-13-022464-2 (case)
      1. Estimation theory. 2. Least squares. I. Sayed, Ali H. II. Hassibi, Babak. III. Title.
      QA276.8 .K33 1999    519.5'4 dc21    99-44703

    Publisher: Tom Robbins
    Associate editor: Alice Dworkin
    Production editor: Audri Anna Bazlen
    Vice-president and editorial director: Marcia Horton
    Executive managing editor: Vince O'Brien
    Managing editor: David A. George
    Vice-president of production and manufacturing: David W. Riccardi
    Art director: Jayne Conte
    Cover design: Bruce Kenselaar
    Manufacturing buyer: Pat Brown
    Marketing manager: Danny Hoyt
    Composition: PreTEX

    © 2000 by Prentice Hall
    Prentice-Hall, Inc.
    Upper Saddle River, New Jersey 07458

    All rights reserved. No part of this book may be reproduced, in any form or by any means, without permission in writing from the publisher.

    The author and publisher of this book have used their best efforts in preparing this book. These efforts include the development, research, and testing of the theories and programs to determine their effectiveness. The author and publisher make no warranty of any kind, expressed or implied, with regard to these programs or the documentation contained in this book. The author and publisher shall not be liable in any event for incidental or consequential damages in connection with, or arising out of, the furnishing, performance, or use of these programs.

    Printed in the United States of America

    10 9 8 7 6 5 4 3 2 1

    ISBN 0-13-022464-2

    Prentice-Hall International (UK) Limited, London
    Prentice-Hall of Australia Pty. Limited, Sydney
    Prentice-Hall Canada Inc., Toronto
    Prentice-Hall Hispanoamericana, S.A., Mexico
    Prentice-Hall of India Private Limited, New Delhi
    Prentice-Hall of Japan, Inc., Tokyo
    Pearson Education Asia Pte. Ltd., Singapore
    Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

  • Contents

    Preface xix

    Symbols xxiii

    1 OVERVIEW 1
      1.1 The Asymptotic Observer 2
      1.2 The Optimum Transient Observer 4
          1.2.1 The Mean-Square-Error Criterion 6
          1.2.2 Minimization via Completion of Squares 7
          1.2.3 The Optimum Transient Observer 9
          1.2.4 The Kalman Filter 10
      1.3 Coming Attractions
          1.3.1 Smoothed Estimators 12
          1.3.2 Extensions to Time-Variant Models 12
          1.3.3 Fast Algorithms for Time-Invariant Systems 13
          1.3.4 Numerical Issues 14
          1.3.5 Array Algorithms 14
          1.3.6 Other Topics 15
      1.4 The Innovations Process
          1.4.1 Whiteness of the Innovations Process 17
          1.4.2 Innovations Representations 18
          1.4.3 Canonical Covariance Factorization 19
          1.4.4 Exploiting State-Space Structure for Matrix Problems 20
      1.5 Steady-State Behavior
          1.5.1 Appropriate Solutions of the DARE 22
          1.5.2 Wiener Filters 23
          1.5.3 Convergence Results 26
      1.6 Several Related Problems
          1.6.1 Adaptive RLS Filtering 27
          1.6.2 Linear Quadratic Control 28
          1.6.3 H∞ Estimation 29
          1.6.4 H∞ Adaptive Filtering 32
          1.6.5 H∞ Control 34
          1.6.6 Linear Algebra and Matrix Theory 35



      1.7 Complements
      Problems

  ► 2 DETERMINISTIC LEAST-SQUARES PROBLEMS
      2.1 The Deterministic Least-Squares Criterion
      2.2 The Classical Solutions
          2.2.1 The Normal Equations 42
          2.2.2 Weighted Least-Squares Problems 44
          2.2.3 Statistical Assumptions on the Noise 44
      2.3 A Geometric Formulation: The Orthogonality Condition
          2.3.1 The Projection Theorem in Inner Product Spaces 47
          2.3.2 Geometric Insights 48
          2.3.3 Projection Matrices 49
          2.3.4 An Application: Order-Recursive Least-Squares 49
      2.4 Regularized Least-Squares Problems
      2.5 An Array Algorithm: The QR Method
      2.6 Updating Least-Squares Solutions: RLS Algorithms
          2.6.1 The RLS Algorithm 55
          2.6.2 An Array Algorithm for RLS 58
      2.7 Downdating Least-Squares Solutions
      2.8 Some Variations of Least-Squares Problems
          2.8.1 The Total Least-Squares Criterion 62
          2.8.2 Criteria with Bounds on Data Uncertainties 63
      2.9 Complements
      Problems
      2.A On Systems of Linear Equations

  ► 3 STOCHASTIC LEAST-SQUARES PROBLEMS
      3.1 The Problem of Stochastic Estimation
      3.2 Linear Least-Mean-Squares Estimators
          3.2.1 The Fundamental Equations 80
          3.2.2 Stochastic Interpretation of Triangular Factorization 82
          3.2.3 Singular Data Covariance Matrices 84
          3.2.4 Nonzero-Mean Values and Centering 84
          3.2.5 Estimators for Complex-Valued Random Variables 85
      3.3 A Geometric Formulation
          3.3.1 The Orthogonality Condition 89
          3.3.2 Examples 93
      3.4 Linear Models
          3.4.1 Information Forms When Rx > 0 and Rv > 0 95
          3.4.2 The Gauss-Markov Theorem 96
          3.4.3 Combining Estimators 98
      3.5 Equivalence to Deterministic Least-Squares
      3.6 Complements
      Problems
      3.A Least-Mean-Squares Estimation
      3.B Gaussian Random Variables
      3.C Optimal Estimation for Gaussian Variables

  ► 4 THE INNOVATIONS PROCESS 118
      4.1 Estimation of Stochastic Processes 119
          4.1.1 The Fixed-Interval Smoothing Problem 120
          4.1.2 The Causal Filtering Problem 121
          4.1.3 The Wiener-Hopf Technique 122
          4.1.4 A Note on Terminology: Vectors and Gramians 124
      4.2 The Innovations Process 125
          4.2.1 A Geometric Approach 125
          4.2.2 An Algebraic Approach 127
          4.2.3 The Modified Gram-Schmidt Procedure 129
          4.2.4 Estimation Given the Innovations Process 130
          4.2.5 The Filtering Problem via the Innovations Approach 131
          4.2.6 Computational Issues 132
      4.3 Innovations Approach to Deterministic Least-Squares Problems
      4.4 The Exponentially Correlated Process
          4.4.1 Triangular Factorization of Ry 134
          4.4.2 Finding L^-1 and the Innovations 136
          4.4.3 Innovations via the Gram-Schmidt Procedures 137
      4.5 Complements
      Problems
      4.A Linear Spaces, Modules, and Gramians 147

  ► 5 STATE-SPACE MODELS 152
      5.1 The Exponentially Correlated Process 152
          5.1.1 Finite-Interval Problems; Initial Conditions for Stationarity 154
          5.1.2 Innovations from the Process Model 155
      5.2 Going Beyond the Stationary Case 155
          5.2.1 Stationary Processes 156
          5.2.2 Nonstationary Processes 157
      5.3 Higher-Order Processes and State-Space Models 157
          5.3.1 Autoregressive Processes 158
          5.3.2 Handling Initial Conditions 158
          5.3.3 State-Space Descriptions 159
          5.3.4 The Standard State-Space Model 160
          5.3.5 Examples of Other State-Space Models 163
      5.4 Wide-Sense Markov Processes
          5.4.1 Forwards Markovian Models 165
          5.4.2 Backwards Markovian Models 166
          5.4.3 Backwards Models from Forwards Models 169
          5.4.4 Markovian Representations and the Standard Model 171
      5.5 Complements
      Problems
      5.A Some Global Formulas

  ► 6 INNOVATIONS FOR STATIONARY PROCESSES
      6.1 Innovations via Spectral Factorization
          6.1.1 Stationary Processes 184
          6.1.2 Generating Functions and z-Spectra 186
      6.2 Signals and Systems
          6.2.1 The z-Transform 189
          6.2.2 Linear Time-Invariant Systems 191
          6.2.3 Causal, Anticausal, and Minimum-Phase Systems 192
      6.3 Stationary Random Processes
          6.3.1 Properties of the z-Spectrum 194
          6.3.2 Linear Operations on Stationary Stochastic Processes 195
      6.4 Canonical Spectral Factorization
      6.5 Scalar Rational z-Spectra
      6.6 Vector-Valued Stationary Processes
      6.7 Complements
      Problems
      6.A Continuous-Time Systems and Processes

  ► 7 WIENER THEORY FOR SCALAR PROCESSES
      7.1 Continuous-Time Wiener Smoothing
          7.1.1 The Geometric Formulation 223
          7.1.2 Solution via Fourier Transforms 224
          7.1.3 The Minimum Mean-Square Error 225
          7.1.4 Filtering Signals out of Noisy Measurements 226
          7.1.5 Comparison with the Ideal Filter 226
      7.2 The Continuous-Time Wiener-Hopf Equation
      7.3 Discrete-Time Problems
          7.3.1 The Discrete-Time Wiener Smoother 228
          7.3.2 The Discrete-Time Wiener-Hopf Equation 230
      7.4 The Discrete-Time Wiener-Hopf Technique
      7.5 Causal Parts via Partial Fractions
      7.6 Important Special Cases and Examples
          7.6.1 Pure Prediction 237
          7.6.2 Additive White Noise 240
      7.7 Innovations Approach to the Wiener Filter
          7.7.1 The Pure Prediction Problem 245
          7.7.2 Additive White-Noise Problems 246
      7.8 Vector Processes
      7.9 Extensions of Wiener Filtering
      7.10 Complements
      Problems
      7.A The Continuous-Time Wiener-Hopf Technique

  ► 8 RECURSIVE WIENER FILTERS
      8.1 Time-Invariant State-Space Models
          8.1.1 Covariance Functions for Time-Invariant Models 266
          8.1.2 The Special Case of Stationary Processes 267
          8.1.3 Expressions for the z-Spectrum 268
      8.2 An Equivalence Class for Input Gramians
      8.3 Canonical Spectral Factorization
          8.3.1 Unit-Circle Controllability Condition 272
          8.3.2 An Inertia Property 274
          8.3.3 Algebraic Riccati Equations and Spectral Factorization 275
          8.3.4 Appropriate Solutions of the DARE 276
          8.3.5 Canonical Spectral Factorization and Innovations Models 277
          8.3.6 A Digression: A Criterion for Positivity 279
      8.4 Recursive Estimation Given State-Space Models
          8.4.1 Recursive Predictors 280
          8.4.2 Recursive State Predictors 281
          8.4.3 Recursive Smoothed Estimators 282
      8.5 Factorization Given Covariance Data: Recursive Wiener Filters
      8.6 Extension to Time-Variant Models
      8.7 The Appendices
      8.8 Complements
      Problems
      8.A The Popov Function
      8.B System Theory Approach to Rational Spectral Factorization
      8.C The KYP and Related Lemmas
      8.D Vector Spectral Factorization in Continuous Time

  ► 9 THE KALMAN FILTER
      9.1 The Standard State-Space Model
      9.2 The Kalman Filter Recursions for the Innovations
          9.2.1 Recursions for the Innovations 312
          9.2.2 Re,i and Kp,i in Terms of Pi 314
          9.2.3 Recursion for Pi 316
          9.2.4 The Kalman Filter Recursions for the Innovations 317
          9.2.5 Innovations Models for the Output Process 318
      9.3 Recursions for Predicted and Filtered State Estimators
          9.3.1 The Predicted Estimators 319
          9.3.2 Schmidt's Modification: Measurement and Time Updates 319
          9.3.3 Recursions for Filtered Estimators 322
          9.3.4 An Alternative Innovations Model 323
      9.4 Triangular Factorizations of Ry and Ry^-1
      9.5 An Important Special Assumption: Ri > 0
          9.5.1 Simplifications for Correlated Noise Processes 325
          9.5.2 Measurement Updates in Information Form 327
          9.5.3 Existence of Pi^-1 329
          9.5.4 Sequential Processing 329
          9.5.5 Time Updates in Information Form (Qi > 0) 331
          9.5.6 A Recursion for Pi^-1 332
          9.5.7 Summary of Results under Invertibility Conditions 332
      9.6 Covariance-Based Filters
      9.7 Approximate Nonlinear Filtering
          9.7.1 A Linearized Kalman Filter 338
          9.7.2 Schmidt Extended Kalman Filter (EKF) 339
          9.7.3 The Iterated Schmidt EKF 341
          9.7.4 Performance of the Approximate Filters 341
          9.7.5 Other Schemes 342
      9.8 Backwards Kalman Recursions
          9.8.1 Backwards Markovian Representations of {yi} 342
          9.8.2 Recursions for the Backwards Innovations Process 343
          9.8.3 The Filtered Version of the Backwards Kalman Recursions 344
          9.8.4 UDU* Factorization of Ry 345
      9.9 Complements
      Problems
      9.A Factorization of Ry using the MGS Procedure
      9.B Factorization via Gramian Equivalence Classes

  ► 10 SMOOTHED ESTIMATORS
      10.1 General Smoothing Formulas
      10.2 Exploiting State-Space Structure
          10.2.1 The Bryson-Frazier (BF) Formulas 373
          10.2.2 Stochastic Interpretation of the Adjoint Variable 375
      10.3 The Rauch-Tung-Striebel (RTS) Recursions
          10.3.1 First Form of RTS Recursions 376
          10.3.2 The Smoothing Errors are Backwards Markov 377
          10.3.3 The Original Rauch-Tung-Striebel (RTS) Formulas 378
      10.4 Two-Filter Formulas
          10.4.1 General Two-Filter Formulas 380
          10.4.2 The Mayne and Fraser-Potter Formulas 381
          10.4.3 Combined Estimators Derivation 383
      10.5 The Hamiltonian Equations (Ri > 0)
      10.6 Variational Origin of Hamiltonian Equations
      10.7 Applications of Equivalence
          10.7.1 The Equivalent Stochastic Problem 389
          10.7.2 Solving the Stochastic Problem 390
          10.7.3 Solving the Deterministic Problem 390
          10.7.4 An Alternative Direct Solution 391
          10.7.5 MAP Estimation and a Deterministic Interpretation for the Kalman Filter 393
          10.7.6 The Deterministic Approach of Whittle 394
      10.8 Complements 397
      Problems 397

  ► 11 FAST ALGORITHMS 406
      11.1 The Fast (CKMS) Recursions
      11.2 Two Important Cases
          11.2.1 Zero Initial Conditions 413
          11.2.2 Stationary Processes 413
      11.3 Structured Time-Variant Systems
      11.4 CKMS Recursions Given Covariance Data
      11.5 Relation to Displacement Rank
      11.6 Complements
      Problems

  ► 12 ARRAY ALGORITHMS 427
      12.1 Review and Notations 428
          12.1.1 Notation 429
          12.1.2 Normalizations 430
          12.1.3 A Demonstration of Round-Off Error Effects 431
      12.2 Potter's Explicit Algorithm for Scalar Measurement Update 432
      12.3 Several Array Algorithms 433
          12.3.1 A Standing Assumption 433
          12.3.2 Time Updates 434
          12.3.3 Measurement Updates 435
          12.3.4 Predicted Estimators 437
          12.3.5 Filtered Estimators 438
          12.3.6 Estimator Update 438
          12.3.7 Operation Counts and Condensed Forms 440
      12.4 Numerical Examples
          12.4.1 Triangularization via Givens Rotations 440
          12.4.2 Triangularization via Householder Transformations 442
          12.4.3 Triangularization via Square-Root-Free Rotations 443
      12.5 Derivations of the Array Algorithms
          12.5.1 The Time-Update Algorithm 445
          12.5.2 The Measurement-Update Algorithm 445
          12.5.3 Algorithm for the State Predictors 446
      12.6 A Geometric Derivation of the Arrays
          12.6.1 Predicted Form of the Arrays 448
          12.6.2 Measurement Updates 451
          12.6.3 Time Updates 452
      12.7 Paige's Form of the Array Algorithm
      12.8 Array Algorithms for the Information Forms
          12.8.1 Information Array for the Measurement Update 453
          12.8.2 Information Array for the Time Update 454
          12.8.3 Alternative Derivation via Inversion of Covariance Forms 455
          12.8.4 Derivation via Dualities When Ri > 0 and Qi > 0 456
          12.8.5 The General Information Filter Form 457
          12.8.6 A Geometric Derivation of the Information Filter Form 459
      12.9 Array Algorithms for Smoothing
          12.9.1 Bryson-Frazier Formulas in Array Form 460
          12.9.2 Rauch-Tung-Striebel Formulas in Array Form 462
          12.9.3 Two-Filter (or Mayne-Fraser) Array Formulas 462
      12.10 Complements
      Problems
      12.A The UD Algorithm
      12.B The Use of Schur and Condensed Forms
      12.C Paige's Array Algorithm

  ► 13 FAST ARRAY ALGORITHMS
      13.1 A Special Case: P0 = 0
          13.1.1 Unitary Equivalence and an Alternative Derivation 483
      13.2 A General Fast Array Algorithm
      13.3 From Explicit Equations to Array Algorithms
      13.4 Structured Time-Variant Systems
      13.5 Complements
      Problems
      13.A Combining Displacement and State-Space Structures

  ► 14 ASYMPTOTIC BEHAVIOR
      14.1 Introduction
          14.1.1 Time-Invariant State-Space Models 499
          14.1.2 Convergence for Indefinite Initial Conditions 500
          14.1.3 Convergence for Unstable F 502
          14.1.4 Why Study Models with Unstable F? 502
      14.2 Solutions of the DARE
      14.3 Summary of Results
      14.4 Riccati Solutions for Different Initial Conditions
      14.5 Convergence Results
          14.5.1 A Sufficiency Result 513
          14.5.2 Simplified Convergence Conditions 525
          14.5.3 The Dual DARE and Stabilizability 530
      14.6 The Case of Stable Systems
      14.7 The Case of S ≠ 0
      14.8 Exponential Convergence of the Fast Recursions
      14.9 Complements
      Problems

  ► 15 DUALITY AND EQUIVALENCE IN ESTIMATION AND CONTROL 555
      15.1 Dual Bases 555
          15.1.1 Algebraic Specification 557
          15.1.2 Geometric Specification 557
          15.1.3 Some Reasons for Introducing Dual Bases 558
          15.1.4 Estimators via the Dual Basis 559
      15.2 Application to Linear Models
          15.2.1 Linear Models and Dual Bases 560
          15.2.2 Application to the Measurement Update Problem 562
          15.2.3 Application to State-Space Models 563
      15.3 Duality and Equivalence Relationships
          15.3.1 Equivalent Stochastic and Deterministic Problems 565
          15.3.2 Dual Stochastic and Deterministic Problems 566
          15.3.3 Summary of Duality and Equivalence Results 567
          15.3.4 A Deterministic Optimization Problem via Duality 569
          15.3.5 Application to Linear Quadratic Tracking 573
          15.3.6 Application to Linear Quadratic Regulation 576
      15.4 Duality under Causality Constraints
          15.4.1 Causal Estimation 577
          15.4.2 Anticausal Dual Problem 579
          15.4.3 Anticausal Estimation and Causal Duality 580
          15.4.4 Application to Stochastic Quadratic Control 581
      15.5 Measurement Constraints and a Separation Principle
          15.5.1 A Separation Principle with Causal Dependence on Data 586
          15.5.2 A Separation Principle with Anticausal Dependence on Data 591
          15.5.3 Application to Measurement Feedback Control 592
      15.6 Duality in the Frequency Domain
          15.6.1 Duality without Constraints 594
          15.6.2 Duality with Causality Constraints 595
          15.6.3 Application to the Infinite-Horizon LQR Problem 597
      15.7 Complementary State-Space Models
          15.7.1 The Standard State-Space Model 599
          15.7.2 Backwards Complementary Models 600
          15.7.3 Direct Derivation of the Hamiltonian Equations 603
          15.7.4 Forwards Complementary Models 604
          15.7.5 The Mixed Complementary Model 608
          15.7.6 An Application to Smoothing 608
      15.8 Complements
      Problems

  ► 16 CONTINUOUS-TIME STATE-SPACE ESTIMATION
      16.1 Continuous-Time Models
          16.1.1 Standard Continuous-Time Models 618
          16.1.2 Discrete-Time Approximations 618
          16.1.3 An Application: State-Variance Recursions 621
      16.2 The Kalman Filter Equations Given State-Space and Covariance Models
      16.3 Some Examples
      16.4 Direct Solution Using the Innovations Process
          16.4.1 The Innovations Process 630
          16.4.2 The Innovations Approach 633
      16.5 Smoothed Estimators
      16.6 Fast Algorithms for Time-Invariant Models
      16.7 Asymptotic Behavior
          16.7.1 Positive-Semidefinite Solutions of the CARE 642
          16.7.2 Convergence Results 642
          16.7.3 The Dual CARE 646
          16.7.4 Exponential Convergence of the Fast Filtering Equations 646
      16.8 The Steady-State Filter
      16.9 Complements
      Problems
      16.A Backwards Markovian Models
          16.A.1 Backwards Models via Time Reversal 672
          16.A.2 The Backwards-Time Kalman Filters 674
          16.A.3 Application to Smoothing Problems 674

  ► 17 A SCATTERING THEORY APPROACH
      17.1 A Generalized Transmission-Line Model
          17.1.1 Identifying the Macroscopic Scattering Operators 680
          17.1.2 Identifying the Signals 683
      17.2 Backward Evolution
      17.3 The Star Product
          17.3.1 Evolution Equations 688
          17.3.2 General Initial Conditions 690
          17.3.3 Chain Scattering or Transmission Matrices 692
      17.4 Various Riccati Formulas
          17.4.1 Incorporating Boundary Conditions 693
          17.4.2 Partitioned Formulas 695
          17.4.3 General Changes in the Boundary Conditions 696
          17.4.4 Smoothing as an Extended Filtering Problem 698
      17.5 Homogeneous Media: Time-Invariant Models
          17.5.1 A Doubling Algorithm 702
          17.5.2 Generalized Stokes Identities 703
      17.6 Discrete-Time Scattering Formulation 706
          17.6.1 Some Features of Discrete-Time Scattering 708
          17.6.2 The Scattering Parameters 709
          17.6.3 The Kalman Filter and Related Identities 712
          17.6.4 General Change of Initial Conditions 713
          17.6.5 Backward Evolution 715
          17.6.6 Homogeneous Media 716
      17.7 Further Work
      17.8 Complements
      Problems
      17.A A Complementary State-Space Model

  ► A USEFUL MATRIX RESULTS 725
      A.1 Some Matrix Identities
      A.2 Kronecker Products
      A.3 The Reduced and Full QR Decompositions
      A.4 The Singular Value Decomposition and Applications
      A.5 Basis Rotations
      A.6 Complex Gradients and Hessians
      A.7 Further Reading

  ► B UNITARY AND J-UNITARY TRANSFORMATIONS 743
      B.1 Householder Transformations
      B.2 Circular or Givens Rotations
      B.3 Fast Givens Transformations
      B.4 J-Unitary Householder Transformations
      B.5 Hyperbolic Givens Rotations
      B.6 Some Alternative Implementations

  ► C SOME SYSTEM THEORY CONCEPTS 759
      C.1 Linear State-Space Models
      C.2 State-Transition Matrices
      C.3 Controllability and Stabilizability
      C.4 Observability and Detectability
      C.5 Minimal Realizations

  ► D LYAPUNOV EQUATIONS 766
      D.1 Discrete-Time Lyapunov Equations
      D.2 Continuous-Time Lyapunov Equations
      D.3 Internal Stability


  ► E ALGEBRAIC RICCATI EQUATIONS
      E.1 Overview of the DARE
      E.2 A Linear Matrix Inequality
      E.3 Existence of Solutions to the DARE
      E.4 Properties of the Maximal Solution
      E.5 Main Result
      E.6 Further Remarks
      E.7 The Invariant Subspace Method
      E.8 The Dual DARE
      E.9 The CARE
      E.10 Complements

  ► F DISPLACEMENT STRUCTURE
      F.1 Motivation
      F.2 Two Fundamental Properties
      F.3 A Generalized Schur Algorithm
      F.4 The Classical Schur Algorithm
      F.5 Combining Displacement and State-Space Structures

  • Preface

    The problem of estimating the values of a random (or stochastic) process given observations of a related random process is encountered in many areas of science and engineering, e.g., communications, control, signal processing, geophysics, econometrics, and statistics. Although the topic has a rich history, and its formative stages can be attributed to illustrious investigators such as Laplace, Gauss, Legendre, and others, the current high interest in such problems began with the work of H. Wold, A. N. Kolmogorov, and N. Wiener in the late 1930s and early 1940s. N. Wiener in particular stressed the importance of modeling not just "noise" but also "signals" as random processes. His thought-provoking, originally classified, 1942 report, released for open publication in 1949 and now available in paperback form under the title Time Series Analysis, is still very worthwhile background reading.
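    As a minimal numerical illustration of this estimation problem (a sketch, not taken from the book): for zero-mean variables, the optimal linear least-mean-squares estimate of a signal x from a related observation y is x̂ = Rxy Ry⁻¹ y, built from second-order statistics alone. The signal/noise model below, including the 0.5 noise gain, is an arbitrary assumption chosen for the example.

    ```python
    import numpy as np

    # Sketch (not from the book): linear least-mean-squares estimation of a
    # zero-mean scalar signal x from a noisy observation y, using only the
    # second-order statistics Rxy = E[x y] and Ry = E[y^2].
    rng = np.random.default_rng(0)

    n = 200_000
    x = rng.standard_normal(n)           # signal, variance 1
    v = rng.standard_normal(n)           # noise, independent of the signal
    y = x + 0.5 * v                      # noisy observation (0.5 gain is arbitrary)

    R_xy = np.mean(x * y)                # sample estimate of E[x y]  (theory: 1)
    R_y = np.mean(y * y)                 # sample estimate of E[y^2]  (theory: 1.25)
    K = R_xy / R_y                       # optimal linear gain        (theory: 0.8)

    x_hat = K * y                        # linear least-mean-squares estimate
    mse_est = np.mean((x - x_hat) ** 2)  # achieved MSE               (theory: 0.2)
    mse_raw = np.mean((x - y) ** 2)      # MSE of using y directly    (theory: 0.25)

    print(K, mse_est, mse_raw)
    ```

    The gain K shrinks the observation toward zero in proportion to how noisy it is; the same orthogonality reasoning, carried out recursively on state-space models, is what leads to the Kalman filter studied in the book.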

    As with all deep subjects, the extensions of these results have been very far-reaching as well. A particularly important development arose from the incorporation into the theory of multichannel state-space models. Though there were various earlier partial intimations and explorations, especially in the work of R. L. Stratonovich in the former Soviet Union, the chief credit for the explosion of activity in this direction goes to R. E. Kalman, who also made important related contributions to linear systems, optimal control, passive systems, stability theory, and network synthesis.

    In fact, least-squares estimation is one of those happy subjects that is interesting not only in the richness and scope of its results, but also because of its mutually beneficial connections with a host of other (often apparently very different) subjects. Thus, beyond those already named, we may mention connections with radiative transfer and scattering theory, linear algebra, matrix and operator theory, orthogonal polynomials, moment problems, inverse scattering problems, interpolation theory, decoding of Reed-Solomon and BCH codes, polynomial factorization and root distribution problems, digital filtering, spectral analysis, signal detection, martingale theory, the so-called H∞ theories of estimation and control, least-squares and adaptive filtering problems, and many others. We can surely apply to it the lines written by William Shakespeare about another (beautiful) subject:

    "Age cannot wither her, nor custom stale, Her infinite variety."


  • Though we were originally tempted to cover a wider range, many reasons led us to focus this volume largely on estimation problems for finite-dimensional linear systems with state-space models, covering most aspects of an area now generally known as Wiener and Kalman filtering theory. Three distinctive features of our treatment are the pervasive use of a geometric point of view, the emphasis on the numerically favored square-root/array forms of many algorithms, and the emphasis on equivalence and duality concepts for the solution of several related problems in adaptive filtering, estimation, and control. These features are generally absent in most prior treatments, ostensibly on the grounds that they are too abstract and complicated. It is our hope that these misconceptions will be dispelled by the presentation herein, and that the fundamental simplicity and power of these ideas will be more widely recognized and exploited.

    The material presented in this book can be broadly categorized into the following topics:

    Introduction and Foundations
    Chapter 1: Overview
    Chapter 2: Deterministic Least-Squares Problems
    Chapter 3: Stochastic Least-Squares Problems
    Chapter 4: The Innovations Process
    Chapter 5: State-Space Models

    Estimation of Stationary Processes
    Chapter 6: Innovations for Stationary Processes
    Chapter 7: Wiener Theory for Scalar Processes
    Chapter 8: Recursive Wiener Filters

    Estimation of Nonstationary Processes
    Chapter 9: The Kalman Filter
    Chapter 10: Smoothed Estimators

    Fast and Array Algorithms
    Chapter 11: Fast Algorithms
    Chapter 12: Array Algorithms
    Chapter 13: Fast Array Algorithms

    Continuous-Time Estimation
    Chapter 16: Continuous-Time State-Space Estimation

    Advanced Topics
    Chapter 14: Asymptotic Behavior
    Chapter 15: Duality and Equivalence in Estimation and Control
    Chapter 17: A Scattering Theory Approach


    Being intended for a graduate-level course, the book assumes familiarity with basic concepts from matrix theory, linear algebra, linear system theory, and random processes. Four appendices at the end of the book provide the reader with background material in all these areas.

    There is ample material in this book for the instructor to fashion a course to his or her needs and tastes. The authors have used portions of this book as the basis for one-quarter first-year graduate-level courses at Stanford University, the University of California at Los Angeles, and the University of California at Santa Barbara; the students were expected to have had some exposure to discrete-time and state-space theory. A typical course would start with Secs. 1.1-1.2 as an overview (perhaps omitting the matrix derivations), with the rest of Ch. 1 left for a quick reading (and re-reading from time to time), most of Chs. 2 and 3 (focusing on the geometric approach) on the basic deterministic and stochastic least-squares problems, Ch. 4 on the innovations process, Secs. 6.4-6.5 and 7.3-7.7 on scalar Wiener filtering, Secs. 9.1-9.3, 9.5, and 9.7 on Kalman filtering, Secs. 10.1-10.2 as an introduction to smoothing, Secs. 12.1-12.5 and 13.1-13.4 on array algorithms, and Secs. 16.1-16.4 and 16.6 on continuous-time problems.

    More advanced students and researchers would pursue selections of material from Sec. 2.5, Chs. 8, 11, 14, 15, and 17, and Apps. E and F. These cover, among other topics, least-squares problems with uncertain data, the problem of canonical spectral factorization, convergence of the Kalman filter, the algebraic Riccati equation, duality, backwards-time and complementary models, scattering, etc. Those wishing to go on to the more recent H∞ theory can find a treatment closely related to the philosophy of the current book (cf. Sec. 1.6) in the research monograph of Hassibi, Sayed, and Kailath (1999).

    A feature of the book is a collection of nearly 300 problems, several of which complement the text and present additional results and insights. However, there is little discussion of real applications or of the error and sensitivity analyses required for them. The main issue in applications is constructing an appropriate model, or actually a set of models, which are further analyzed and then refined by using the results and algorithms presented in this book. Developing good models and analyzing them effectively requires not only a good appreciation of the actual application, but also a good understanding of the theory, at both an analytical and intuitive level. It is the latter that we have tried to achieve here; examples of successful applications have to be sought in the literature, and some references are provided to this end.

    Acknowledgments

    The development of this textbook has spanned many years. So the material, as well as its presentation, has benefited greatly from the inputs of the many bright students who have worked with us on these topics: J. Omura, P. Frost, T. Duncan, R. Geesey, D. Duttweiler, H. Aasnaes, M. Gevers, H. Weinert, A. Segall, M. Morf, B. Dickinson, G. Sidhu, B. Friedlander, A. Vieira, S. Y. Kung, B. Levy, G. Verghese, D. Lee, J. Delosme, B. Porat, H. Lev-Ari, J. Cioffi, A. Bruckstein, T. Citron, Y. Bresler, R. Roy, J. Chun, D. Slock, D. Pal, G. Xu, R. Ackner, Y. Cho, P. Park, T. Boros, A. Erdogan, U. Forsell, B. Halder, H. Hindi, V. Nascimento, T. Pare, R. Merched, and our young friend Amir Ghazanfarian (in memoriam) from whom we had so much more to learn.

  • We are of course also deeply indebted to the many researchers and authors in this beautiful field. Partial acknowledgment is evident through the citations and references; while the list of the latter is quite long, we apologize for omissions and inadequacies arising from the limitations of our knowledge and our energy. Nevertheless, we would be remiss not to explicitly mention the inspiration and pleasure we have gained from studying the papers and books of N. Wiener, R. E. Kalman, and P. Whittle.

    Major support for the many years of research that led to this book was provided by the Mathematics Divisions of the Air Force Office of Scientific Research and the Army Research Office, by the Joint Services Electronics Program, by the Defense Advanced Research Projects Agency, and by the National Science Foundation. Finally, we would like to thank Bernard Goodwin and Tom Robbins, as well as the staff of Prentice Hall, for their patience and other contributions to this project.

    T. K., Stanford

    A. H. S., Westwood

    B. H., Murray Hill