
Detection, Estimation, and Modulation Theory

Part I: Detection, Estimation, and Filtering Theory

Second Edition


Detection, Estimation, and Modulation Theory

Part I: Detection, Estimation, and Filtering Theory

Second Edition

HARRY L. VAN TREES
KRISTINE L. BELL

with ZHI TIAN


Copyright © 2013 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. You should consult with a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services or for technical support, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002.

Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic formats. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Van Trees, Harry L.
Detection estimation and modulation theory. – Second edition / Harry L. Van Trees, Kristine L. Bell, Zhi Tian.
pages cm

Includes bibliographical references and index.
ISBN 978-0-470-54296-5 (cloth)

1. Signal theory (Telecommunication) 2. Modulation (Electronics) 3. Estimation theory. I. Bell, Kristine L. II. Tian, Zhi, 1972- III. Title.

TK5102.5.V3 2013
621.382’2–dc23

2012036672

Printed in the United States of America

10 9 8 7 6 5 4 3 2 1


To my mentors at Massachusetts Institute of Technology, Professors Yuk Wing Lee, Norbert Wiener, Wilbur Davenport, and Amar Bose; to my colleagues at M.I.T., Arthur Baggeroer, Lewis Collins, Estil Hoversten, and Donald Snyder, whose critiques and contributions greatly enhanced the first edition; and to my wife, Diane Enright Van Trees, who has patiently accommodated the time that I have spent over the years on DEMT.

Harry Van Trees

To Harry Van Trees, whose knowledge and guidance have shaped my professional development in immeasurable ways for nearly 30 years. It has been an honor to collaborate on this second edition of a truly classic textbook. And to my husband Jamie, my daughters Julie and Lisa, and parents Richard and Jean LaCroix, who are my foundation.

Kristine L. Bell


Contents

Preface xv

Preface to the First Edition xix

1 Introduction 1

1.1 Introduction 1
1.2 Topical Outline 1
1.3 Possible Approaches 11
1.4 Organization 14

2 Classical Detection Theory 17

2.1 Introduction 17
2.2 Simple Binary Hypothesis Tests 20
2.2.1 Decision Criteria 20
2.2.2 Performance: Receiver Operating Characteristic 35
2.3 M Hypotheses 51
2.4 Performance Bounds and Approximations 63
2.5 Monte Carlo Simulation 80
2.5.1 Monte Carlo Simulation Techniques 80
2.5.2 Importance Sampling 86
2.5.2.1 Simulation of PF 87
2.5.2.2 Simulation of PM 91
2.5.2.3 Independent Observations 94
2.5.2.4 Simulation of the ROC 94
2.5.2.5 Examples 96
2.5.2.6 Iterative Importance Sampling 106
2.5.3 Summary 108
2.6 Summary 109
2.7 Problems 110

3 General Gaussian Detection 125

3.1 Detection of Gaussian Random Vectors 126
3.1.1 Real Gaussian Random Vectors 126
3.1.2 Circular Complex Gaussian Random Vectors 127


3.1.3 General Gaussian Detection 132
3.1.3.1 Real Gaussian Vectors 132
3.1.3.2 Circular Complex Gaussian Vectors 136
3.1.3.3 Summary 137
3.2 Equal Covariance Matrices 138
3.2.1 Independent Components with Equal Variance 142
3.2.2 Independent Components with Unequal Variances 146
3.2.3 General Case: Eigendecomposition 147
3.2.4 Optimum Signal Design 156
3.2.5 Interference Matrix: Estimator–Subtractor 160
3.2.6 Low-Rank Models 165
3.2.7 Summary 173
3.3 Equal Mean Vectors 174
3.3.1 Diagonal Covariance Matrix on H0: Equal Variance 175
3.3.1.1 Independent, Identically Distributed Signal Components 177
3.3.1.2 Independent Signal Components: Unequal Variances 178
3.3.1.3 Correlated Signal Components 179
3.3.1.4 Low-Rank Signal Model 184
3.3.1.5 Symmetric Hypotheses, Uncorrelated Noise 186
3.3.2 Nondiagonal Covariance Matrix on H0 191
3.3.2.1 Signal on H1 Only 191
3.3.2.2 Signal on Both Hypotheses 195
3.3.3 Summary 196
3.4 General Gaussian 197
3.4.1 Real Gaussian Model 197
3.4.2 Circular Complex Gaussian Model 198
3.4.3 Single Quadratic Form 201
3.4.4 Summary 208
3.5 M Hypotheses 209
3.6 Summary 213
3.7 Problems 215

4 Classical Parameter Estimation 230

4.1 Introduction 230
4.2 Scalar Parameter Estimation 232
4.2.1 Random Parameters: Bayes Estimation 232
4.2.2 Nonrandom Parameter Estimation 246
4.2.3 Bayesian Bounds 261
4.2.3.1 Lower Bound on the MSE 261
4.2.3.2 Asymptotic Behavior 265
4.2.4 Case Study 268
4.2.5 Exponential Family 279
4.2.5.1 Nonrandom Parameters 279
4.2.5.2 Random Parameters 287
4.2.6 Summary of Scalar Parameter Estimation 292


4.3 Multiple Parameter Estimation 293
4.3.1 Estimation Procedures 293
4.3.1.1 Random Parameters 293
4.3.1.2 Nonrandom Parameters 296
4.3.2 Measures of Error 296
4.3.2.1 Nonrandom Parameters 296
4.3.2.2 Random Parameters 299
4.3.3 Bounds on Estimation Error 299
4.3.3.1 Nonrandom Parameters 299
4.3.3.2 Random Parameters 316
4.3.4 Exponential Family 321
4.3.4.1 Nonrandom Parameters 321
4.3.4.2 Random Parameters 324
4.3.5 Nuisance Parameters 325
4.3.5.1 Nonrandom Parameters 325
4.3.5.2 Random Parameters 326
4.3.5.3 Hybrid Parameters 328
4.3.6 Hybrid Parameters 328
4.3.6.1 Joint ML and MAP Estimation 329
4.3.6.2 Nuisance Parameters 331
4.3.7 Summary of Multiple Parameter Estimation 331
4.4 Global Bayesian Bounds 332
4.4.1 Covariance Inequality Bounds 333
4.4.1.1 Covariance Inequality 333
4.4.1.2 Bayesian Bounds 334
4.4.1.3 Scalar Parameters 334
4.4.1.4 Vector Parameters 340
4.4.1.5 Combined Bayesian Bounds 341
4.4.1.6 Functions of the Parameter Vector 342
4.4.1.7 Summary of Covariance Inequality Bounds 344
4.4.2 Method of Interval Estimation 345
4.4.3 Summary of Global Bayesian Bounds 348
4.5 Composite Hypotheses 348
4.5.1 Introduction 348
4.5.2 Random Parameters 350
4.5.3 Nonrandom Parameters 352
4.5.4 Simulation 372
4.5.5 Summary of Composite Hypotheses 375
4.6 Summary 375
4.7 Problems 377

5 General Gaussian Estimation 400

5.1 Introduction 400
5.2 Nonrandom Parameters 401
5.2.1 General Gaussian Estimation Model 401
5.2.2 Maximum Likelihood Estimation 407
5.2.3 Cramer–Rao Bound 409


5.2.4 Fisher Linear Gaussian Model 412
5.2.4.1 Introduction 412
5.2.4.2 White Noise 418
5.2.4.3 Low-Rank Interference 424
5.2.5 Separable Models for Mean Parameters 429
5.2.6 Covariance Matrix Parameters 442
5.2.6.1 White Noise 443
5.2.6.2 Colored Noise 444
5.2.6.3 Rank One Signal Matrix Plus White Noise 445
5.2.6.4 Rank One Signal Matrix Plus Colored Noise 450
5.2.7 Linear Gaussian Mean and Covariance Matrix Parameters 450
5.2.7.1 White Noise 450
5.2.7.2 Colored Noise 451
5.2.7.3 General Covariance Matrix 452
5.2.8 Computational Algorithms 452
5.2.8.1 Introduction 452
5.2.8.2 Gradient Techniques 453
5.2.8.3 Alternating Projection Algorithm 457
5.2.8.4 Expectation–Maximization Algorithm 461
5.2.8.5 Summary 469
5.2.9 Equivalent Estimation Algorithms 469
5.2.9.1 Least Squares 470
5.2.9.2 Minimum Variance Distortionless Response 470
5.2.9.3 Summary 472
5.2.10 Sensitivity, Mismatch, and Diagonal Loading 473
5.2.10.1 Sensitivity and Array Perturbations 474
5.2.10.2 Diagonal Loading 477
5.2.11 Summary 481
5.3 Random Parameters 483
5.3.1 Model, MAP Estimation, and the BCRB 483
5.3.2 Bayesian Linear Gaussian Model 487
5.3.3 Summary 494
5.4 Sequential Estimation 495
5.4.1 Sequential Bayes Estimation 495
5.4.2 Recursive Maximum Likelihood 504
5.4.3 Summary 506
5.5 Summary 507
5.6 Problems 510

6 Representation of Random Processes 519

6.1 Introduction 519
6.2 Orthonormal Expansions: Deterministic Signals 520
6.3 Random Process Characterization 528
6.3.1 Random Processes: Conventional Characterizations 528
6.3.2 Series Representation of Sample Functions of Random Processes 532
6.3.3 Gaussian Processes 536


6.4 Homogeneous Integral Equations and Eigenfunctions 540
6.4.1 Rational Spectra 540
6.4.2 Bandlimited Spectra 545
6.4.3 Nonstationary Processes 548
6.4.4 White Noise Processes 550
6.4.5 Low Rank Kernels 552
6.4.6 The Optimum Linear Filter 553
6.4.7 Properties of Eigenfunctions and Eigenvalues 559
6.4.7.1 Monotonic Property 559
6.4.7.2 Asymptotic Behavior Properties 560
6.5 Vector Random Processes 564
6.6 Summary 568
6.7 Problems 569

7 Detection of Signals–Estimation of Signal Parameters 584

7.1 Introduction 584
7.1.1 Models 584
7.1.1.1 Detection 584
7.1.1.2 Estimation 587
7.1.2 Format 589
7.2 Detection and Estimation in White Gaussian Noise 591
7.2.1 Detection of Signals in Additive White Gaussian Noise 591
7.2.1.1 Simple Binary Detection 591
7.2.1.2 General Binary Detection in White Gaussian Noise 597
7.2.1.3 M-ary Detection in White Gaussian Noise 601
7.2.1.4 Sensitivity 611
7.2.2 Linear Estimation 614
7.2.3 Nonlinear Estimation 616
7.2.4 Summary of Known Signals in White Gaussian Noise 628
7.2.4.1 Detection 628
7.2.4.2 Estimation 628
7.3 Detection and Estimation in Nonwhite Gaussian Noise 629
7.3.1 “Whitening” Approach 632
7.3.1.1 Structures 632
7.3.1.2 Construction of Qn(t, u) and g(t) 635
7.3.1.3 Summary 639
7.3.2 A Direct Derivation Using the Karhunen-Loeve Expansion 639
7.3.3 A Direct Derivation with a Sufficient Statistic 641
7.3.4 Detection Performance 643
7.3.4.1 Performance: Simple Binary Detection Problem 643
7.3.4.2 Optimum Signal Design: Coincident Intervals 644
7.3.4.3 Singularity 645
7.3.4.4 General Binary Receivers 647
7.3.5 Estimation 648
7.3.6 Solution Techniques for Integral Equations 650
7.3.6.1 Infinite Observation Interval: Stationary Noise 650


7.3.6.2 Finite Observation Interval: Rational Spectra 654
7.3.6.3 Finite Observation Time: Separable Kernels 662
7.3.7 Sensitivity, Mismatch, and Diagonal Loading 667
7.3.7.1 Sensitivity 667
7.3.7.2 Mismatch and Diagonal Loading 673
7.3.8 Known Linear Channels 673
7.3.8.1 Summary 675
7.4 Signals with Unwanted Parameters: The Composite Hypothesis Problem 675
7.4.1 Random Phase Angles 677
7.4.2 Random Amplitude and Phase 694
7.4.3 Other Target Models 706
7.4.4 Nonrandom Parameters 709
7.4.4.1 Summary 711
7.5 Multiple Channels 712
7.5.1 Vector Karhunen–Loeve 712
7.5.1.1 Application 714
7.6 Multiple Parameter Estimation 716
7.6.1 Known Signal in Additive White Gaussian Noise 717
7.6.2 Separable Models 718
7.6.3 Summary 720
7.7 Summary 721
7.8 Problems 722

8 Estimation of Continuous-Time Random Processes 771

8.1 Optimum Linear Processors 771
8.2 Realizable Linear Filters: Stationary Processes, Infinite Past: Wiener Filters 787
8.2.1 Solution of Wiener–Hopf Equation 788
8.2.2 Errors in Optimum Systems 798
8.2.3 Unrealizable Filters 801
8.2.4 Closed-Form Error Expressions 803
8.3 Gaussian–Markov Processes: Kalman Filter 807
8.3.1 Differential Equation Representation of Linear Systems and Random Process Generation 808
8.3.2 Kalman Filter 825
8.3.3 Realizable Whitening Filter 839
8.3.4 Generalizations 841
8.3.5 Implementation Issues 842
8.4 Bayesian Estimation of Non-Gaussian Models 842
8.4.1 The Extended Kalman Filter 843
8.4.1.1 Linear AWGN Process and Observations 844
8.4.1.2 Linear AWGN Process, Nonlinear AWGN Observations 845
8.4.1.3 Nonlinear AWGN Process and Observations 848
8.4.1.4 General Nonlinear Process and Observations 849
8.4.2 Bayesian Cramer–Rao Bounds: Continuous-Time 849
8.4.3 Summary 852


8.5 Summary 852
8.6 Problems 855

9 Estimation of Discrete-Time Random Processes 880

9.1 Introduction 880
9.2 Discrete-Time Wiener Filtering 882
9.2.1 Model 882
9.2.2 Random Process Models 883
9.2.3 Optimum FIR Filters 894
9.2.4 Unrealizable IIR Wiener Filters 900
9.2.5 Realizable IIR Wiener Filters 904
9.2.6 Summary: Discrete-Time Wiener Filter 918
9.3 Discrete-Time Kalman Filter 919
9.3.1 Random Process Models 920
9.3.2 Kalman Filter 926
9.3.2.1 Derivation 927
9.3.2.2 Reduced Dimension Implementations 934
9.3.2.3 Applications 939
9.3.2.4 Estimation in Nonwhite Noise 954
9.3.2.5 Sequential Processing of Estimators 955
9.3.2.6 Square-Root Filters 958
9.3.2.7 Divergence 962
9.3.2.8 Sensitivity and Model Mismatch 966
9.3.2.9 Summary: Kalman Filters 972
9.3.3 Kalman Predictors 973
9.3.3.1 Fixed-Lead Prediction 974
9.3.3.2 Fixed-Point Prediction 975
9.3.3.3 Fixed-Interval Prediction 977
9.3.3.4 Summary: Kalman Predictors 977
9.3.4 Kalman Smoothing 978
9.3.4.1 Fixed-Interval Smoothing 978
9.3.4.2 Fixed-Lag Smoothing 979
9.3.4.3 Summary: Kalman Smoothing 982
9.3.5 Bayesian Estimation of Nonlinear Models 982
9.3.5.1 General Nonlinear Model: MMSE and MAP Estimation 983
9.3.5.2 Extended Kalman Filter 985
9.3.5.3 Recursive Bayesian Cramer–Rao Bounds 987
9.3.5.4 Applications 992
9.3.5.5 Joint State and Parameter Estimation 1005
9.3.5.6 Continuous-Time Processes and Discrete-Time Observations 1009
9.3.5.7 Summary 1013
9.3.6 Summary: Kalman Filters 1013
9.4 Summary 1016
9.5 Problems 1016


10 Detection of Gaussian Signals 1030

10.1 Introduction 1030
10.2 Detection of Continuous-Time Gaussian Processes 1030
10.2.1 Sampling 1032
10.2.2 Optimum Continuous-Time Receivers 1034
10.2.3 Performance of Optimum Receivers 1046
10.2.4 State-Variable Realization 1049
10.2.5 Stationary Process-Long Observation Time (SPLOT) Receiver 1051
10.2.6 Low-Rank Kernels 1061
10.2.7 Summary 1066
10.3 Detection of Discrete-Time Gaussian Processes 1067
10.3.1 Second Moment Characterization 1067
10.3.1.1 Known Means and Covariance Matrices 1067
10.3.1.2 Means and Covariance Matrices with Unknown Parameters 1068
10.3.2 State Variable Characterization 1070
10.3.3 Summary 1076
10.4 Summary 1076
10.5 Problems 1077

11 Epilogue 1084

11.1 Classical Detection and Estimation Theory 1084
11.1.1 Classical Detection Theory 1084
11.1.2 General Gaussian Detection 1086
11.1.3 Classical Parameter Estimation 1088
11.1.4 General Gaussian Estimation 1089
11.2 Representation of Random Processes 1093
11.3 Detection of Signals and Estimation of Signal Parameters 1095
11.4 Linear Estimation of Random Processes 1098
11.5 Observations 1105
11.5.1 Models and Mismatch 1105
11.5.2 Bayes vis-a-vis Fisher 1105
11.5.3 Bayesian and Fisher Bounds 1105
11.5.4 Eigenspace 1106
11.5.5 Whitening 1106
11.5.6 The Gaussian Model 1106

11.6 Conclusion 1106

Appendix A: Probability Distributions and Mathematical Functions 1107

Appendix B: Example Index 1119

References 1125

Index 1145


Preface

We have included the preface to the first edition in order to provide a context for the original work. For readers who are not familiar with Part I of Detection, Estimation, and Modulation Theory (DEMT), it may be useful to read it first.

In 1968, Part I of DEMT was published. It turned out to be a reasonably successful book that has been widely used by several generations of engineers. There were 28 printings, but the last printing was in 1996. Parts II and III were published in 1971 and focused on specific application areas such as analog modulation, Gaussian signals and noise, and the radar–sonar problem. Part II had a short life span due to the shift from analog modulation to digital modulation. Part III is still widely used as a reference and as a supplementary text. In 2002, the fourth volume in the sequence, Optimum Array Processing, was published. In conjunction with the publication of Optimum Array Processing, paperback versions of Parts I, II, and III were published. In 2007, in order to expand on the performance bounds that played an important role in Parts I–IV, Dr. Kristine Bell and I edited a book, Bayesian Bounds for Parameter Estimation and Nonlinear Filtering/Tracking.

In the 44 years since the publication of Part I, there have been a large number of changes:

1. The basic detection and estimation theory has remained the same but numerous new results have been obtained.

2. The exponential growth in computational capability has enabled us to implement algorithms that were only of theoretical interest in 1968. The results from detection and estimation theory were applied in operational systems.

3. Simulation became more widely used in system design and analysis, research, and teaching.

4. Matlab became an essential tool.

If I had stayed at MIT and continued working in this area, then presumably a new edition would have come out every 10 years and evolved along with the field and this might be the fifth edition.

A few comments on my career may help explain the long delay between editions. In 1972, MIT loaned me to the Defense Communications Agency in Washington, DC, where I spent 3 years as the Chief Scientist and the Associate Director of Technology. At the end of the tour, I decided, for personal reasons, to stay in Washington, DC. I spent the next 3 years as an Assistant Vice President at Comsat where my group did the advanced planning for the INTELSAT satellites. In 1978, I became the Chief Scientist of the United States Air Force. In 1979, Dr. Gerald Dinneen, the former director of Lincoln Laboratory, was serving as Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (C3I). He asked me to become his Principal Deputy and I spent 2 years in that position. In 1981, I joined M/A-COM Linkabit. This is the company that Irwin Jacobs and Andrew Viterbi had started in 1969 and sold to M/A-COM in 1979. I started an Eastern Operation that grew to about 200 people in 3 years. After Irwin and Andy left M/A-COM and started Qualcomm, I was responsible for the government operations in San Diego as well as Washington, DC. In 1988, M/A-COM sold the division and at that point I decided to return to the academic world.

I joined George Mason University (GMU) in September 1988. One of my priorities was to restart my research in detection and estimation theory and finish the book on Optimum Array Processing. However, I found that I needed to build up a research center in order to attract young research-oriented faculty and doctoral students. One of my first students was Dr. Bell, who had worked for me at M/A-COM. She joined the doctoral program in 1990, graduated in 1995, and joined the GMU faculty in the Statistics Department. The process of growing a research center took about 6 years. The Center for Excellence in C3I has been very successful and has generated over 30 million dollars in research funding during its existence. During this growth phase, I spent some time on my research but a concentrated effort was not possible.

After I retired from teaching and serving as Director of the C3I Center in 2005, I could devote full time to consulting and writing. After the publication of Bayesian Bounds in 2007, Dr. Bell and I started work on the second edition of Part I. There were a number of factors that had to be considered:

1. The first edition was written during a period that is sometimes referred to as the “Golden Age of Communications Theory.” Norbert Wiener, Claude Shannon, and Y. W. Lee were on the MIT faculty and a number of the future leaders in the field were graduate students. Detection and estimation theory was an exciting new research area. It has evolved into a mature discipline that is applied in a number of areas.

2. The audience for the book has changed. The first edition was designed for my course at MIT in which the audience was 40–50 graduate students, many of whom planned to do research in the area. This allowed me to leave out the implementation details and incorporate new derivations in the problems (the best example was the derivation of the discrete-time Kalman filter as a problem in Chapter 2). To make the second edition more readable to a larger audience, we have expanded the explanations in many areas.

3. There have been a large number of new results. We have tried to select the ones that are most suitable for an introductory textbook.

4. The first edition emphasized closed-form analytic solutions wherever possible. The second edition retains that focus but incorporates iterative solutions, simulations, and extensive use of Matlab.

Some of the specific new features in the second edition include:

1. Chapter 2 in the first edition has been expanded into Chapters 2–5 in the second edition. The new Chapter 2 develops classical detection theory (Sections 2.1–2.3 and 2.7 in the first edition) and adds a section on importance sampling as a logical extension of the tilted densities in the performance bounds section. Chapter 3, “Gaussian Detection,” is a significant expansion of Section 2.6 in the first edition and derives a number of explicit results that will be used later in the book. Chapter 4, “Classical Parameter Estimation,” is a significant expansion of Sections 2.4 and 2.5 in the first edition. It introduces several new topics and includes a detailed development of global Bayesian bounds based on the introductory material in Bayesian Bounds. Chapter 5, “General Gaussian Estimation,” is new material. It introduces the Fisher linear Gaussian model and the Bayesian linear Gaussian model. It discusses computational algorithms, equivalent estimation algorithms (ML, least squares, MVDR), sensitivity and mismatch, and introduces sequential estimation.

2. Chapters 6, 7, and 8 in the second edition correspond to Chapters 3, 4, and 6 in the first edition. There are minor changes but the basic material is the same.

3. Chapter 9, “Linear Estimation of Discrete-Time Random Processes,” is a new chapter. It develops the discrete-time Wiener filter and the discrete-time Kalman filter. In addition to developing the various algorithms, it discusses the various problems that may arise in the numerical implementation of the algorithms and techniques for avoiding these problems as well as reducing the computational complexity.

4. Chapter 10, “Detection of Gaussian Signals,” treats both continuous-time and discrete-time processes. The discussion of continuous-time processes is taken from Chapters 2 and 4 of DEMT, Part III. The discussion of discrete-time processing is divided into block processing and sequential processing. For block processing, we provide tables to show where in Chapters 3–5 we have already solved the problem. For sequential processing, we show how the detection statistics can be generated from the outputs of an FIR Wiener filter or a discrete-time Kalman filter.

For readers familiar with the first edition of DEMT, Part I or other detection theory or estimation theory texts, it may be useful to scan Chapter 11, “Epilogue,” to see a summary of the material covered in the second edition.

The addition of a significant amount of material on filtering and the deletion of the chapter on modulation theory motivated the addition of a subtitle for Part I, Detection, Estimation, and Filtering Theory.

From the standpoint of specific background, little advanced material is required. A thorough knowledge of elementary probability theory and random processes is assumed. In particular, the reader needs to have worked with second-moment characterizations of random processes and Gaussian random processes. The reader should have worked with matrices and be comfortable in eigenspace. In later chapters, experience with state variable representations is useful. Our teaching experience with a wide variety of audiences shows that many students understand the basic results in detection and estimation theory but have trouble implementing them because of a weak background in random processes and/or matrix theory. The level of mathematical rigor is moderate, although in most sections the results could be rigorously proved by simply being more careful in our derivations. We have adopted this approach in order not to obscure the important ideas with a lot of detail and to make the material readable for the kind of engineering audience that will find it useful. Fortunately, in almost all cases, we can verify that our answers are intuitively logical. It is worthwhile to observe that the ability to check our answers intuitively would be necessary even if our derivations were rigorous, because our ultimate objective is to obtain answers that correspond to some physical system of interest. It is easy to find physical problems in which a plausible mathematical model and correct mathematics lead to an unrealistic answer for the original problem.


We need to reemphasize the necessity for the reader to solve problems to understand the material fully. Throughout the course of the book, we emphasize the development of the ability to work problems. At the end of each chapter are problems that range from routine manipulations to significant extensions of the material in the text. Only by working a fair number is it possible to appreciate the significance and generality of the results. A solution manual is available (email: [email protected]). It contains solutions to about 25% of the problems in the text. In addition, it contains the Matlab scripts for most of the figures that are new in the second edition.

The actual authorship of the book has evolved as we worked our way through the manuscript. Originally Dr. Bell and I were to be coauthors of the entire book. After Dr. Bell left GMU in 2009 to join Metron, we agreed that she would complete her responsibilities for the first five chapters and I would continue to develop the remaining six chapters. I was the only author of Chapters 6–8, 10, and 11. However, in order to complete Chapter 9, I recruited Dr. Zhi Tian, a former doctoral student at GMU and currently a Professor of ECE at Michigan Technological University, to be a coauthor of the chapter. It is important to recognize the contributions of these coauthors. They brought excellent analytical and mathematical skills to the project and an ability to work with Matlab, which was essential to the completion of the book. In addition, Dr. Bell also developed the two appendices and did a careful proofreading of the entire book. Their contribution is gratefully acknowledged.

The actual production of the draft manuscript was challenging because the first edition was published in the pre-LaTeX era. Some financial support was provided by Norma Corrales and Fred Rainbow of AFCEA and Prof. Mark Pullen, the current director of the C4I Center at GMU. The manuscript was put into LaTeX by three graduate students: Seyed Rizi, Awais Khawar, and Khalid Al-Muhanna. They devoted an enormous amount of time to repeated drafts even though the material was not in their research area. Seyed Rizi oversaw the entire process and deserves special recognition for his dedication to the project. Vibhu Dubey and his staff at Thomson Digital did an excellent job of typesetting the final manuscript.

Harry L. Van Trees


Preface to the First Edition

The area of detection and estimation theory that we shall study in this book represents a combination of the classical techniques of statistical inference and the random process characterization of communication, radar, sonar, and other modern data processing systems. The two major areas of statistical inference are decision theory and estimation theory. In the first case we observe an output that has a random character and decide which of two possible causes produced it. This type of problem was studied in the middle of the eighteenth century by Thomas Bayes [1]. In the estimation theory case the output is related to the value of some parameter of interest, and we try to estimate the value of this parameter. Work in this area was published by Legendre [2] and Gauss [3] in the early nineteenth century. Significant contributions to the classical theory that we use as background were developed by Fisher [4] and Neyman and Pearson [5] more than 30 years ago. In 1941 and 1942 Kolmogoroff [6] and Wiener [7] applied statistical techniques to the solution of the optimum linear filtering problem. Since that time the application of statistical techniques to the synthesis and analysis of all types of systems has grown rapidly. The application of these techniques and the resulting implications are the subject of this book.

This book and the subsequent volume, Detection, Estimation, and Modulation Theory, Part II, are based on notes prepared for a course entitled “Detection, Estimation, and Modulation Theory,” which is taught as a second-level graduate course at M.I.T. My original interest in the material grew out of my research activities in the area of analog modulation theory. A preliminary version of the material that deals with modulation theory was used as a text for a summer course presented at M.I.T. in 1964. It turned out that our viewpoint on modulation theory could best be understood by an audience with a clear understanding of modern detection and estimation theory. At that time there was no suitable text available to cover the material of interest and emphasize the points that I felt were important, so I started writing notes. It was clear that in order to present the material to graduate students in a reasonable amount of time it would be necessary to develop a unified presentation of the three topics: detection, estimation, and modulation theory, and exploit the fundamental ideas that connected them. As the development proceeded, it grew in size until the material that was originally intended to be background for modulation theory occupies the entire contents of this book. The original material on modulation theory starts at the beginning of the second book. Collectively, the two books provide a unified coverage of the three topics and their application to many important physical problems.

For the last three years I have presented successively revised versions of the material in my course. The audience consists typically of 40 to 50 students who have completed a graduate course in random processes which covered most of the material in Davenport and Root [8]. In general, they have a good understanding of random process theory and a fair amount of practice with the routine manipulation required to solve problems. In addition, many of them are interested in doing research in this general area or closely related areas. This interest provides a great deal of motivation which I exploit by requiring them to develop many of the important ideas as problems. It is for this audience that the book is primarily intended. The appendix contains a detailed outline of the course.

On the other hand, many practicing engineers deal with systems that have been or should have been designed and analyzed with the techniques developed in this book. I have attempted to make the book useful to them. An earlier version was used successfully as a text for an in-plant course for graduate engineers.

From the standpoint of specific background little advanced material is required. A knowledge of elementary probability theory and second moment characterization of random processes is assumed. Some familiarity with matrix theory and linear algebra is helpful but certainly not necessary. The level of mathematical rigor is low, although in most sections the results could be rigorously proved by simply being more careful in our derivations. We have adopted this approach in order not to obscure the important ideas with a lot of detail and to make the material readable for the kind of engineering audience that will find it useful. Fortunately, in almost all cases we can verify that our answers are intuitively logical. It is worthwhile to observe that this ability to check our answers intuitively would be necessary even if our derivations were rigorous, because our ultimate objective is to obtain an answer that corresponds to some physical system of interest. It is easy to find physical problems in which a plausible mathematical model and correct mathematics lead to an unrealistic answer for the original problem.

We have several idiosyncrasies that it might be appropriate to mention. In general, we look at a problem in a fair amount of detail. Many times we look at the same problem in several different ways in order to gain a better understanding of the meaning of the result. Teaching students a number of ways of doing things helps them to be more flexible in their approach to new problems. A second feature is the necessity for the reader to solve problems to understand the material fully. Throughout the course and the book we emphasize the development of an ability to work problems. At the end of each chapter are problems that range from routine manipulations to significant extensions of the material in the text. In many cases they are equivalent to journal articles currently being published. Only by working a fair number of them is it possible to appreciate the significance and generality of the results. Solutions for an individual problem will be supplied on request, and a book containing solutions to about one third of the problems is available to faculty members teaching the course. We are continually generating new problems in conjunction with the course and will send them to anyone who is using the book as a course text. A third issue is the abundance of block diagrams, outlines, and pictures. The diagrams are included because most engineers (including myself) are more at home with these items than with the corresponding equations.

One problem always encountered is the amount of notation needed to cover the large range of subjects. We have tried to choose the notation in a logical manner and to make it mnemonic. All the notation is summarized in the glossary at the end of the book. We have tried to make our list of references as complete as possible and to acknowledge any ideas due to other people.

A number of people have contributed in many ways and it is a pleasure to acknowledge them. Professors W. B. Davenport and W. M. Siebert have provided continual encouragement and technical comments on the various chapters. Professors Estil Hoversten and Donald Snyder of the M.I.T. faculty and Lewis Collins, Arthur Baggeroer, and Michael Austin, three of my doctoral students, have carefully read and criticized the various chapters. Their suggestions have improved the manuscript appreciably. In addition, Baggeroer and Collins contributed a number of the problems in the various chapters and Baggeroer did the programming necessary for many of the graphical results. Lt. David Wright read and criticized Chapter 2. L. A. Frasco and H. D. Goldfein, two of my teaching assistants, worked all of the problems in the book. Dr. Howard Yudkin of Lincoln Laboratory read the entire manuscript and offered a number of important criticisms. In addition, various graduate students taking the course have made suggestions which have been incorporated. Most of the final draft was typed by Miss Aina Sils. Her patience with the innumerable changes is sincerely appreciated. Several other secretaries, including Mrs. Jarmila Hrbek, Mrs. Joan Bauer, and Miss Camille Tortorici, typed sections of the various drafts.

As pointed out earlier, the books are an outgrowth of my research interests. This research is a continuing effort, and I shall be glad to send our current work to people working in this area on a regular reciprocal basis. My early work in modulation theory was supported by Lincoln Laboratory as a summer employee and consultant in groups directed by Dr. Herbert Sherman and Dr. Barney Reiffen. My research at M.I.T. was partly supported by the Joint Services and the National Aeronautics and Space Administration under the auspices of the Research Laboratory of Electronics. This support is gratefully acknowledged.

Harry L. Van Trees
Cambridge, Massachusetts
October, 1967

REFERENCES

[1] Thomas Bayes, “An Essay Towards Solving a Problem in the Doctrine of Chances,” Phil. Trans., 53, 370–418 (1764).

[2] A. M. Legendre, Nouvelles Méthodes pour la Détermination des Orbites des Comètes, Paris, 1806.

[3] K. F. Gauss, Theory of Motion of the Heavenly Bodies Moving About the Sun in Conic Sections, reprinted by Dover, New York, 1963.

[4] R. A. Fisher, “Theory of Statistical Estimation,” Proc. Cambridge Philos. Soc., 22, 700 (1925).

[5] J. Neyman and E. S. Pearson, “On the Problem of the Most Efficient Tests of Statistical Hypotheses,” Phil. Trans. Roy. Soc. London, A 231, 289 (1933).

[6] A. Kolmogoroff, “Interpolation und Extrapolation von Stationären Zufälligen Folgen,” Bull. Acad. Sci. USSR, Ser. Math. 5, 1941.

[7] N. Wiener, Extrapolation, Interpolation, and Smoothing of Stationary Time Series, Tech. Press of M.I.T. and Wiley, New York, 1949 (originally published as a classified report in 1942).

[8] W. B. Davenport and W. L. Root, Random Signals and Noise, McGraw-Hill, New York, 1958.


1 Introduction

1.1 INTRODUCTION

This book is the second edition of Part I of the four-volume series on “Detection, Estimation, and Modulation Theory.” It is a significant expansion of the original Part I [Van68, Van01a]. It includes many of the important results that have occurred in the 44 years since the first edition was published. It expands upon many of the original areas from the first edition and introduces a large number of new areas. In addition, some of the material from the original Part III is moved into this edition.

In this book, we shall study three areas of statistical theory, which we have labeled detection theory, estimation theory, and filtering theory. The goal is to develop these theories in a common mathematical framework and demonstrate how they can be used to solve a wealth of practical problems in many diverse physical situations.

In this chapter, we present three outlines of the material. The first is a topical outline in which we develop a qualitative understanding of the three areas by examining some typical problems of interest. The second is a logical outline in which we explore the various methods of attacking the problems. The third is a chronological outline in which we explain the structure of the book.

1.2 TOPICAL OUTLINE

An easy way to explain what is meant by detection theory is to examine several physical situations that lead to detection theory problems.

A simple digital communication system is shown in Figure 1.1. The source puts out a binary digit every T seconds. Our objective is to transmit this sequence of digits to some other location. The channel available for transmitting the sequence depends on the particular situation. Typically, it could be a telephone line, a radio link, or an acoustical channel. For purposes of illustration, we shall consider a radio link. In order to transmit the information, we must put it into a form suitable for propagating over the channel. A straightforward method would be to build a device that generates a sine wave,

s1(t) = sin ω1t, (1.1)



Figure 1.1: Digital communication system (source → transmitter → channel → received waveform r(t)).

for T seconds if the source generated a “one” in the preceding interval, and a sine wave of a different frequency,

s0(t) = sin ω0t, (1.2)

for T seconds if the source generated a “zero” in the preceding interval. The frequencies are chosen so that the signals s0(t) and s1(t) will propagate over the particular radio link of concern. The output of the device is fed into an antenna and transmitted over the channel. Typical source and transmitted signal sequences are shown in Figure 1.2.

In the simplest kind of channel the signal sequence arrives at the receiving antenna attenuated but essentially undistorted. To process the received signal, we pass it through the antenna and some stages of rf amplification, in the course of which a thermal noise n(t) is added to the message sequence. Thus in any T-second interval, we have available a waveform r(t) in which

r(t) = s1(t) + n(t), 0 ≤ t ≤ T, (1.3)

if s1(t) was transmitted and

r(t) = s0(t) + n(t), 0 ≤ t ≤ T, (1.4)

if s0(t) was transmitted. We are now faced with the problem of deciding which of the two possible signals was transmitted. We label the device that does this a decision device. It is simply a processor that observes r(t) and guesses whether s1(t) or s0(t) was sent according to some set of rules. This is equivalent to guessing what the source output was in the preceding interval. We refer to designing and evaluating the processor as a detection theory problem. In this particular case, the only possible source of error in making a decision is the additive noise. If it were not present, the input would be completely known and we could make decisions without errors. We denote this type of problem as the known signal in noise problem. It corresponds to the lowest level (i.e., simplest) of the detection problems of interest.
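As a concrete illustration (ours, not part of the original text), the Matlab-style sketch below simulates one T-second interval of (1.3) in discrete time and decides between the two known signals by correlating r(t) with each of them; this anticipates the correlation receivers derived in Chapter 7. The sampling rate, tone frequencies, and noise level are arbitrary choices made only for this example.

% Illustrative sketch: deciding between two known tones in additive
% white Gaussian noise. All parameter values are arbitrary.
T  = 1e-3;  fs = 1e6;  t = (0:1/fs:T-1/fs)';   % one T-second interval
f1 = 100e3; f0 = 80e3;                         % assumed tone frequencies
s1 = sin(2*pi*f1*t);  s0 = sin(2*pi*f0*t);     % the two known signals
sigma = 2;                                     % noise standard deviation
r = s1 + sigma*randn(size(t));                 % received waveform; a "one" was sent
% Correlate r(t) with each candidate; for equal-energy, equally likely
% signals, choosing the larger correlation is the minimum-distance rule.
L1 = sum(r.*s1);
L0 = sum(r.*s0);
decision = (L1 > L0);                          % 1 -> decide "one", 0 -> decide "zero"

Running such a sketch repeatedly at different noise levels gives an empirical error rate, which is exactly the kind of performance question taken up in Chapter 2.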

Figure 1.2: Typical sequences (source output and the corresponding transmitted sequence of sin ω1t and sin ω0t bursts).


Figure 1.3: Sequence with phase shifts (sin(ω1t + θ1), sin(ω0t + θ0), sin(ω1t + θ1′)).

An example of the next level of detection problem is shown in Figure 1.3. The oscillators used to generate s1(t) and s0(t) in the preceding example have a phase drift. Therefore in a particular T-second interval, the received signal corresponding to a “one” is

r(t) = sin(ω1t + θ1) + n(t), 0 ≤ t ≤ T, (1.5)

and the received signal corresponding to a “zero” is

r(t) = sin(ω0t + θ0) + n(t), 0 ≤ t ≤ T, (1.6)

where θ0 and θ1 are unknown constant phase angles. Thus, even in the absence of noise the input waveform is not completely known. In a practical system the receiver may include auxiliary equipment to measure the oscillator phase. If the phase varies slowly enough, we shall see that essentially perfect measurement is possible. If this is true, the problem is the same as above. However, if the measurement is not perfect, we must incorporate the signal uncertainty in our model.

A corresponding problem arises in the radar and sonar areas. A conventional radar transmits a pulse at some frequency ωc with a rectangular envelope:

st(t) = sin ωct, 0 ≤ t ≤ T. (1.7)

If a target is present, the pulse is reflected. Even the simplest target will introduce an attenuation and phase shift in the transmitted signal. Thus, the signal available for processing in the interval of interest is

r(t) = Vr sin[ωc(t − τ) + θr] + n(t), τ ≤ t ≤ τ + T,
     = n(t), 0 ≤ t < τ, τ + T < t < ∞, (1.8)

if a target is present and

r(t) = n(t), 0 ≤ t < ∞, (1.9)

if a target is absent. We see that in the absence of noise the signal still contains three unknown quantities: Vr, the amplitude, θr, the phase, and τ, the round-trip travel time to the target.

These two examples represent the second level of detection problems. We classify them as signal with unknown parameters in noise problems.
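Before moving on, it may help to see how an unknown phase is commonly handled. The sketch below (ours, with arbitrary parameter values) correlates r(t) in (1.5) with both quadrature components at each candidate frequency and decides on the larger envelope, so the answer does not depend on θ1 or θ0; receivers of this noncoherent type are developed in Chapter 7.

% Illustrative sketch: noncoherent (envelope) decision when the phase
% angles are unknown. All parameter values are arbitrary.
T = 1e-3; fs = 1e6; t = (0:1/fs:T-1/fs)';
f1 = 100e3; f0 = 80e3; sigma = 2;
theta1 = 2*pi*rand;                            % unknown phase on a "one"
r = sin(2*pi*f1*t + theta1) + sigma*randn(size(t));
% Envelope of the correlation at each candidate frequency (I and Q parts).
E1 = hypot(sum(r.*cos(2*pi*f1*t)), sum(r.*sin(2*pi*f1*t)));
E0 = hypot(sum(r.*cos(2*pi*f0*t)), sum(r.*sin(2*pi*f0*t)));
decision = (E1 > E0);                          % choose the larger envelope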

Detection problems of a third level appear in several areas. In a passive sonar detection system, the receiver listens for noise generated by enemy vessels. The engines, propellers, and other elements in the vessel generate acoustical signals that travel through the ocean to the hydrophones in the detection system. This composite signal can best be characterized as a sample function from a random process. In addition, the hydrophone generates self-noise and picks up sea noise. Thus, a suitable model for the detection problem might be

r(t) = s�(t) + n(t) (1.10)

if the target is present and

r(t) = n(t) (1.11)

if it is not. In the absence of noise, the signal is a sample function from a random process (indicated by the subscript �).

In the communications field, a large number of systems employ channels in which randomness is inherent. Typical systems are tropospheric scatter links, orbiting dipole links, and chaff systems. A common technique is to transmit one of two signals separated in frequency. (We denote these frequencies as ω1 and ω0.) The resulting received signal is

r(t) = s�1 (t) + n(t) (1.12)

if s1(t) was transmitted and

r(t) = s�0 (t) + n(t) (1.13)

if s0(t) was transmitted. Here, s�1 (t) is a sample function from a random process centered at ω1 and s�0 (t) is a sample function from a random process centered at ω0. These examples are characterized by the lack of any deterministic signal component. Any decision procedure that we design will have to be based on the difference in the statistical properties of the two random processes from which s�0 (t) and s�1 (t) are obtained. This is the third level of detection problem and is referred to as a random signal in noise problem.

In our examination of representative examples, we have seen that detection theory problems are characterized by the fact that we must decide which of several alternatives is true. There were only two alternatives in the examples cited; therefore, we refer to them as binary detection problems. Later we will encounter problems in which there are M alternatives available (the M-ary detection problem). Our hierarchy of detection problems is presented graphically in Table 1.1.

Table 1.1: Detection theory hierarchy

Level 1. Known signals in noise
  1. Synchronous digital communication
  2. Pattern recognition problems

Level 2. Signals with unknown parameters in noise
  1. Conventional pulsed radar or sonar, target detection
  2. Target classification (orientation of target unknown)
  3. Digital communication systems without phase reference
  4. Digital communication over slowly fading channels

Level 3. Random signals in noise
  1. Digital communication over scatter link, orbiting dipole channel, or chaff link
  2. Passive sonar
  3. Seismic detection system
  4. Radio astronomy (detection of noise sources)

Page 29: Detection, Estimation, and · 7 Detection of Signals–Estimation of Signal Parameters 584 7.1 Introduction 584 7.1.1 Models 584 7.1.1.1 Detection 584 7.1.1.2 Estimation 587 7.1.2

Introduction 5

Figure 1.4: (a) Sampling an analog source; (b) pulse-amplitude modulation; (c) pulse-frequency modulation; (d) waveform reconstruction.

There is a parallel set of problems in the estimation theory area. A simple example is given in Figure 1.4, in which the source puts out an analog message a(t) (Figure 1.4a). To transmit the message we first sample it every T seconds. Then, every T seconds we transmit a signal that contains a parameter that is uniquely related to the last sample value. In Figure 1.4b, the signal is a sinusoid whose amplitude depends on the last sample. Thus, if the sample at time nT is An, the signal in the interval [nT, (n + 1)T] is

s(t, An) = An sin ωct, nT ≤ t ≤ (n + 1)T. (1.14)

Page 30: Detection, Estimation, and · 7 Detection of Signals–Estimation of Signal Parameters 584 7.1 Introduction 584 7.1.1 Models 584 7.1.1.1 Detection 584 7.1.1.2 Estimation 587 7.1.2

6 Detection, Estimation, and Modulation Theory

A system of this type is called a pulse amplitude modulation (PAM) system. In Figure 1.4c, the signal is a sinusoid whose frequency in the interval differs from the reference frequency ωc by an amount proportional to the preceding sample value,

s(t, An) = sin(ωct + Ant), nT ≤ t ≤ (n + 1)T. (1.15)

A system of this type is called a pulse-frequency modulation (PFM) system. Once again there is additive noise. The received waveform, given that An was the sample value, is

r(t) = s(t, An) + n(t), nT ≤ t ≤ (n + 1)T. (1.16)

During each interval, the receiver tries to estimate An. We denote these estimates as Ân. Over a period of time we obtain a sequence of estimates, as shown in Figure 1.4d, which is passed into a device whose output is an estimate of the original message a(t). If a(t) is a band-limited signal, the device is just an ideal low-pass filter. For other cases, it is more involved.
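For the PAM signal of (1.14) observed in white noise as in (1.16), a natural estimate of An is the correlation of r(t) with the known carrier, normalized by the carrier energy; this is the flavor of the linear estimation problem treated in Chapter 7. The Matlab-style sketch below is our illustration, with arbitrary parameter values.

% Illustrative sketch: estimating the PAM amplitude A_n of (1.14) over one
% interval by correlating with the known carrier. All values are arbitrary.
T = 1e-3; fs = 1e6; t = (0:1/fs:T-1/fs)';
fc = 100e3; An = 0.7; sigma = 1;               % true amplitude and noise level
c = sin(2*pi*fc*t);                            % known carrier waveform
r = An*c + sigma*randn(size(t));               % received interval, per (1.16)
An_hat = sum(r.*c)/sum(c.^2);                  % correlator estimate of A_n

The estimate is unbiased, and averaging its squared error over many noise realizations gives the mean-square error that later chapters bound and minimize.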

If, however, the parameters in this example were known and the noise were absent, the received signal would be completely known. We refer to problems in this category as known signal in noise problems. If we assume that the mapping from An to s(t, An) in the transmitter has an inverse, we see that if the noise were not present we could determine An unambiguously. (Clearly, if we were allowed to design the transmitter, we should always choose a mapping with an inverse.) The known signal in noise problem is the first level of the estimation problem hierarchy.

Returning to the area of radar, we consider a somewhat different problem. We assume that we know a target is present but do not know its range or velocity. Then the received signal is

r(t) = Vr sin[(ωc + ωd)(t − τ) + θr] + n(t), τ ≤ t ≤ τ + T,
     = n(t), 0 ≤ t < τ, τ + T < t < ∞, (1.17)

where ωd denotes a Doppler shift caused by the target’s motion. We want to estimate τ and ωd. Now, even if the noise were absent and τ and ωd were known, the signal would still contain the unknown parameters Vr and θr. This is a typical second-level estimation problem. As in detection theory, we refer to problems in this category as signal with unknown parameters in noise problems.
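To make this second-level estimation model concrete, a common way to estimate the delay τ in (1.17) is to correlate the received waveform with a replica of the pulse at every candidate delay and pick the delay at which the envelope of the correlation peaks, so that the unknown phase θr drops out. The Matlab-style sketch below is our illustration; it sets ωd = 0 and uses arbitrary parameter values.

% Illustrative sketch: delay estimation from the peak of the envelope of the
% correlation with a replica pulse (omega_d = 0; all values arbitrary).
fs = 1e6; T = 1e-4; t = (0:1/fs:T-1/fs)';      % pulse duration T
fc = 100e3; tau_true = 2.5e-4;                 % carrier frequency, true delay
Vr = 0.8; theta = 1.1; sigma = 0.5;            % unknown amplitude/phase, noise level
tt = (0:1/fs:1e-3-1/fs)';                      % full observation interval
r = sigma*randn(size(tt));                     % noise-only background
idx = round(tau_true*fs) + (1:numel(t));
r(idx) = r(idx) + Vr*sin(2*pi*fc*t + theta);   % add the delayed, phase-shifted echo
taus = (0:(numel(tt)-numel(t)))/fs;            % candidate delays, one per sample
env = zeros(size(taus));
for k = 1:numel(taus)                          % quadrature correlation at each lag
    seg = r((k-1)+(1:numel(t)));
    env(k) = hypot(sum(seg.*cos(2*pi*fc*t)), sum(seg.*sin(2*pi*fc*t)));
end
[~, kmax] = max(env);
tau_hat = taus(kmax);                          % estimated round-trip delay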

At the third level, the signal component is a random process whose statistical characteristics contain parameters we want to estimate. The received signal is of the form

r(t) = s�(t, A) + n(t), (1.18)

where s�(t, A) is a sample function from a random process. In a simple case it might be a stationary process with the narrow-band spectrum shown in Figure 1.5. The shape of the spectrum is known but the center frequency is not. The receiver must observe r(t) and, using the statistical properties of s�(t, A) and n(t), estimate the value of A. This particular example could arise in either radio astronomy or passive sonar. The general class of problem in which the signal containing the parameters is a sample function from a random process is referred to as the random signal in noise problem. The hierarchy of estimation theory problems is shown in Table 1.2.