

Digital Spectral Analysis

To my children, Guillaume, Aurélien, Anastasia, Virginia, Enrica

Digital Spectral Analysis

parametric, non-parametric and advanced methods

Edited by Francis Castanié

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

www.iste.co.uk — www.wiley.com

© ISTE Ltd 2011. The rights of Francis Castanié to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data

Digital spectral analysis: parametric, non-parametric and advanced methods / edited by Francis Castanié.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-277-0
1. Spectral theory (Mathematics) 2. Signal processing--Digital techniques--Mathematics. 3. Spectrum analysis. I. Castanié, Francis.
QA280.D543 2011
621.3822--dc23

2011012246

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-277-0
Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne

Table of Contents

Preface xiii

PART 1. TOOLS AND SPECTRAL ANALYSIS 1

Chapter 1. Fundamentals 3
Francis CASTANIÉ

1.1. Classes of signals 3
1.1.1. Deterministic signals 3
1.1.2. Random signals 6
1.2. Representations of signals 9
1.2.1. Representations of deterministic signals 9
1.2.2. Representations of random signals 13
1.3. Spectral analysis: position of the problem 20
1.4. Bibliography 21

Chapter 2. Digital Signal Processing 23
Éric LE CARPENTIER

2.1. Introduction 23
2.2. Transform properties 24
2.2.1. Some useful functions and series 24
2.2.2. Fourier transform 29
2.2.3. Fundamental properties 33
2.2.4. Convolution sum 34
2.2.5. Energy conservation (Parseval's theorem) 36
2.2.6. Other properties 38
2.2.7. Examples 39
2.2.8. Sampling 41
2.2.9. Practical calculation: FFT 46
2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI

3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on ĉxx(m) 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ

4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121


PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER

5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ

6.1. Which kind of parametric models? 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN

7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER

8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER

9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ

10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY

11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER

12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI

13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can, with good reason, wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: the projection of signals on the set of special periodic functions, which include complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more – and no less – relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution); hearing, which no doubt is at the historical origin of the perceptual concept of frequency – Pythagoras and the "Music of the Spheres"; and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means of implementing the proposed methods of analysis.

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers were optical analyzers (spectrographs); then, at the end of this archaeological phase – which is generally associated with the name of Isaac Newton – spectral analysis was organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But, beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]) – the so-called time series – which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, that led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibilities of obtaining "super resolutions" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task – the mere quantitative assessment of the frequency resolution or of spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods – they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's methods and their variants, and then the estimators based on the concepts of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals – a subject with great potential, which is tackled in greater depth in another book of the IC2 series [HLA 05] – and to space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term a deterministic signal, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

Chapter written by Francis CASTANIÉ.

Deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities familiar to physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], with continuous or discrete time:

E = \int_{\mathbb{R}} |x(t)|^2 \,\mathrm{d}t < \infty \quad [1.1]

E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \quad [1.2]

We recognize the membership of x(t) in standard function spaces (noted L2 or l2), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \,\mathrm{d}t < \infty \quad [1.3]

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \quad [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A\, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0
x(t) = 0, \quad t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
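As a quick numerical check (a Python/NumPy sketch of ours, not taken from the book, with illustrative parameter values), the energy integral of equation [1.1] can be approximated for this damped oscillation; for f much larger than a, the closed form is approximately A²/4a:

```python
import numpy as np

# Sketch: numerical energy of x(t) = A exp(-a t) cos(2*pi*f0*t + theta), t >= 0.
# Parameter values are illustrative assumptions, not taken from the book.
A, a, f0, theta = 1.0, 1.0, 50.0, 0.3

dt = 1e-4
t = np.arange(0.0, 10.0, dt)              # 10 s suffices: exp(-2*a*10) ~ 2e-9
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f0 * t + theta)

# Riemann-sum approximation of E = integral of |x(t)|^2 dt  (equation [1.1])
E = np.sum(x**2) * dt
print(E)                                   # close to A**2 / (4 * a) = 0.25
```

The finite value of E confirms that the signal belongs to the finite energy class, in contrast to the permanent signals discussed next.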

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = \sum_{i=1}^{4} x_i(t - t_i)

where the four components x_i(t) start at staggered times (t_i). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not cancel at infinity. For example:

x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad A(t) < \infty \quad [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.
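A signal of the form [1.5] can be sketched numerically (our own toy construction, with assumed modulation parameters; it is only similar in spirit to Figure 1.2) to check that its average power, in the sense of equation [1.3], stays finite:

```python
import numpy as np

# Sketch: a frequency-modulated sine curve x(t) = A(t) sin(psi(t)),
# here with A(t) = 1 and a sinusoidally modulated instantaneous phase.
# fs, f0, fm, beta are illustrative assumptions, not values from the book.
fs, f0, fm, beta = 1000.0, 50.0, 2.0, 5.0
t = np.arange(0.0, 2.0, 1.0 / fs)

psi = 2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t)  # phase psi(t)
x = np.sin(psi)

# Average power over the observation window (discrete analog of [1.3]):
# for a unit-amplitude sine curve this is ~1/2, and remains so however
# long the window -- a permanent, finite power signal
P = np.mean(x**2)
```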


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation xj(t) (jth outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations xj(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes, which could well demonstrate the properties sought; here we must "go back", as far as possible, to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data {xj(t), j = 1, …, K, t ∈ [t0, t1]}, we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes x1(t), x2(t), x3(t) of an actual random signal

The laws are sets of probabilities:

P\left[x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n\right] = L\left(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n\right) \quad [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + \mathrm{d}x_i[:

P[\ldots] = p(x_1, \ldots, x_n; t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n \quad [1.7]

and the distribution functions, with D_i = \left]-\infty, x_i\right]:

P[\ldots] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8]


The set of these laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by:

E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; p(x_1, \ldots, x_n; t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n

= \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; \mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \quad [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple \left(x(t_1), \ldots, x(t_n)\right), defined by:

\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E\left[\exp\left(j\, \mathbf{u}^T \mathbf{x}\right)\right], \quad \mathbf{x} = \left(x(t_1), \ldots, x(t_n)\right)^T \quad [1.10]

We see that the laws and the moments depend on the considered points t_1, …, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if:

E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0)\right]

The interval of t0 for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t0 ∈ [0, T1];
– asymptotic stationarity, if this is verified only when t0 → ∞; etc.

We will refer to [LAC 00] for a detailed study of these properties


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis; we then speak of cyclostationarity. For example, if:

E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T)\right], \quad \forall t_1, \ldots, t_n

the signal will be called cyclostationary for the moment of order M (see [GAR 89])
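A simple ensemble experiment makes the distinction concrete. The following sketch (a toy construction of ours, not from the book) builds an amplitude-modulated noise whose second-order moment E[x²(t)] = cos²(2πt/T) recurs with period T, i.e. a signal cyclostationary for the moment of order 2, and estimates that moment over many realizations:

```python
import numpy as np

# Toy cyclostationary signal: x(t, w) = cos(2*pi*t/T) * n(t, w),
# with n(t, w) unit-variance white noise over the realizations w.
rng = np.random.default_rng(0)
T, n_samples, n_realizations = 100, 400, 5000

t = np.arange(n_samples)
realizations = (np.cos(2 * np.pi * t / T)
                * rng.standard_normal((n_realizations, n_samples)))

# Ensemble estimate of the moment of order 2 at each time instant:
# it is ~cos^2(2*pi*t/T), maximal at t = 0, T, 2T, ... and near zero
# at t = T/4 -- periodic in t, hence not stationary but cyclostationary
m2 = np.mean(realizations**2, axis=0)
```

Note that the estimate is taken across realizations, not along time: this is precisely the "go back to the laws from several outcomes" viewpoint of section 1.1.2.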

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by forming the information present in the signal with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

R(u_1, \ldots, u_m) = H\left[\int_{\mathbb{R}} \cdots \int_{\mathbb{R}} G\left[x(t_1), \ldots, x(t_n)\right] \cdot \psi(t_1, \ldots, t_n; u_1, \ldots, u_m)\, \mathrm{d}t_1 \cdots \mathrm{d}t_n\right] \quad [1.11]

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(t, f) = e^{-j2πft} has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations such that m ≥ n. We will expect it to be inversible (i.e. that there exists an exact inverse transformation), and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with ψ(t, f) = e^{-j2πft}. We may once again recall its definition here:

\hat{x}(f) = \mathrm{FT}\left[x(t)\right] = \int_{\mathbb{R}} x(t)\, e^{-j 2 \pi f t}\, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \quad [1.12]

and its discrete version, for a signal vector \mathbf{x} = \left(x(0), \ldots, x(N-1)\right)^T of length N:

\hat{x}(l) = \mathrm{DFT}\left[x(k)\right] = \sum_{k=0}^{N-1} x(k)\, e^{-j 2 \pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and e^{-j2πft}, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
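This inner-product reading of equation [1.13] can be made explicit in a few lines (a NumPy sketch of ours, not the book's code; the test signal is an assumed example):

```python
import numpy as np

# The DFT of equation [1.13] written as N inner products of the signal
# with the basis functions exp(-j*2*pi*k*l/N)
N = 64
k = np.arange(N)
x = np.cos(2 * np.pi * 5 * k / N)          # a sinusoid sitting on DFT bin l = 5

X = np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N)) for l in range(N)])

# The inner product is largest where x "resembles" the basis function:
# |X| peaks at l = 5 (and, for this real signal, at the mirror bin N - 5)
peak = int(np.argmax(np.abs(X[: N // 2])))
```

The direct sum reproduces `np.fft.fft(x)` exactly; the FFT of section 2.2.9 is simply a fast algorithm for this same set of inner products.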

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use functions such as the Hadamard–Walsh or Haar functions (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^{-j2πft}, leading to an expression of the type:

R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-j 2 \pi f t}\, \mathrm{d}t \quad [1.14]
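A discrete-time sketch of equation [1.14] (our own illustration, with an assumed Gaussian window and test signal, not code from the book) shows how the sliding window localizes frequency content in time:

```python
import numpy as np

# R(tau, f) = integral of x(t) w(t - tau) exp(-j*2*pi*f*t) dt, approximated
# by a sum, with a Gaussian window w centred on tau (a short-time Fourier
# transform). Signal: two tones occupying different halves of the window.
fs = 1000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
x = np.where(t < 0.5,
             np.sin(2 * np.pi * 50 * t),
             np.sin(2 * np.pi * 200 * t))

def R(tau, f, width=0.05):
    w = np.exp(-0.5 * ((t - tau) / width) ** 2)   # window centred on tau
    return np.sum(x * w * np.exp(-2j * np.pi * f * t)) / fs

freqs = np.arange(0.0, 500.0, 5.0)
early = np.abs([R(0.25, f) for f in freqs])       # peak expected near 50 Hz
late = np.abs([R(0.75, f) for f in freqs])        # peak expected near 200 Hz
```

Unlike the global Fourier transform of [1.12], which would show both tones with no time information, |R(τ, f)| separates them along the τ axis.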


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, a) = \psi_M\!\left(\frac{t - \tau}{a}\right), which gives:

R(\tau, a) = \int x(t)\, \psi_M\!\left(\frac{t - \tau}{a}\right) \mathrm{d}t \quad [1.15]

where ψM(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\!\left(t + \frac{\tau}{2}\right) x^{*}\!\left(t - \frac{\tau}{2}\right) e^{-j 2 \pi f \tau}\, \mathrm{d}\tau \quad [1.16]

which is inversible if at least one point of x(t) is known.

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it "spectral"), we are led to consider the following quantity:

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 2: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Digital Spectral Analysis

To my children: Guillaume, Aurélien, Anastasia, Virginia, Enrica

Digital Spectral Analysis

parametric, non-parametric and advanced methods

Edited by Francis Castanié

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd: 27-37 St George's Road, London SW19 4EU, UK (www.iste.co.uk)
John Wiley & Sons, Inc.: 111 River Street, Hoboken, NJ 07030, USA (www.wiley.com)

© ISTE Ltd 2011. The rights of Francis Castanié to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data
Digital spectral analysis: parametric, non-parametric and advanced methods / edited by Francis Castanié.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-84821-277-0
1. Spectral theory (Mathematics) 2. Signal processing--Digital techniques--Mathematics 3. Spectrum analysis. I. Castanié, Francis.
QA280.D543 2011
621.3822--dc23

2011012246

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library.
ISBN 978-1-84821-277-0
Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.

Table of Contents

Preface

PART 1. TOOLS AND SPECTRAL ANALYSIS

Chapter 1. Fundamentals
Francis CASTANIÉ
1.1. Classes of signals
1.1.1. Deterministic signals
1.1.2. Random signals
1.2. Representations of signals
1.2.1. Representations of deterministic signals
1.2.2. Representations of random signals
1.3. Spectral analysis: position of the problem
1.4. Bibliography

Chapter 2. Digital Signal Processing
Éric LE CARPENTIER
2.1. Introduction
2.2. Transform properties
2.2.1. Some useful functions and series
2.2.2. Fourier transform
2.2.3. Fundamental properties
2.2.4. Convolution sum
2.2.5. Energy conservation (Parseval's theorem)
2.2.6. Other properties
2.2.7. Examples
2.2.8. Sampling
2.2.9. Practical calculation: FFT
2.3. Windows
2.4. Examples of application
2.4.1. LTI systems identification
2.4.2. Monitoring spectral lines
2.4.3. Spectral analysis of the coefficient of tide fluctuation
2.5. Bibliography

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis
Olivier BESSON and André FERRARI
3.1. Introduction
3.1.1. Problem statement
3.1.2. Cramér-Rao lower bound
3.1.3. Sequence of estimators
3.1.4. Maximum likelihood estimation
3.2. Covariance-based estimation
3.2.1. Estimation of the autocorrelation functions of time series
3.2.2. Analysis of estimators based on ĉ_xx(m)
3.2.3. The case of multi-dimensional observations
3.3. Performance assessment of some spectral estimators
3.3.1. Periodogram analysis
3.3.2. Estimation of AR model parameters
3.3.3. Estimation of a noisy cisoid by MUSIC
3.3.4. Conclusion
3.4. Bibliography

Chapter 4. Time-Series Models
Francis CASTANIÉ
4.1. Introduction
4.2. Linear models
4.2.1. Stationary linear models
4.2.2. Properties
4.2.3. Non-stationary linear models
4.3. Exponential models
4.3.1. Deterministic model
4.3.2. Noisy deterministic model
4.3.3. Models of random stationary signals
4.4. Nonlinear models
4.5. Bibliography

PART 2. NON-PARAMETRIC METHODS

Chapter 5. Non-Parametric Methods
Éric LE CARPENTIER
5.1. Introduction
5.2. Estimation of the power spectral density
5.2.1. Filter bank method
5.2.2. Periodogram method
5.2.3. Periodogram variants
5.3. Generalization to higher-order spectra
5.4. Bibliography

PART 3. PARAMETRIC METHODS

Chapter 6. Spectral Analysis by Parametric Modeling
Corinne MAILHES and Francis CASTANIÉ
6.1. Which kind of parametric models?
6.2. AR modeling
6.2.1. AR modeling as a spectral estimator
6.2.2. Estimation of AR parameters
6.3. ARMA modeling
6.3.1. ARMA modeling as a spectral estimator
6.3.2. Estimation of ARMA parameters
6.4. Prony modeling
6.4.1. Prony model as a spectral estimator
6.4.2. Estimation of Prony parameters
6.5. Order selection criteria
6.6. Examples of spectral analysis using parametric modeling
6.7. Bibliography

Chapter 7. Minimum Variance
Nadine MARTIN
7.1. Principle of the MV method
7.2. Properties of the MV estimator
7.2.1. Expressions of the MV filter
7.2.2. Probability density of the MV estimator
7.2.3. Frequency resolution of the MV estimator
7.3. Link with the Fourier estimators
7.4. Link with a maximum likelihood estimator
7.5. Lagunas methods: normalized MV and generalized MV
7.5.1. Principle of the normalized MV
7.5.2. Spectral refinement of the NMV estimator
7.5.3. Convergence of the NMV estimator
7.5.4. Generalized MV estimator
7.6. A new estimator: the CAPNORM estimator
7.7. Bibliography

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces
Sylvie MARCOS and Rémy BOYER
8.1. Model, concept of subspace, definition of high resolution
8.1.1. Model of signals
8.1.2. Concept of subspaces
8.1.3. Definition of high resolution
8.1.4. Connection with spatial analysis or array processing
8.2. MUSIC
8.2.1. Principle of the MUSIC method
8.2.2. Pseudo-spectral version of MUSIC
8.2.3. Polynomial version of MUSIC
8.3. Determination criteria of the number of complex sine waves
8.3.1. AIC criterion (Akaike information criterion)
8.3.2. Minimum description length criterion
8.4. The MinNorm method
8.5. "Linear" subspace methods
8.5.1. The linear methods
8.5.2. The propagator method
8.6. The ESPRIT method
8.7. Illustration of the subspace-based methods performance
8.8. Adaptive research of subspaces
8.9. Integrating a priori known frequencies into the MUSIC criterion
8.9.1. A spectral approach: the Prior-MUSIC algorithm
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm
8.10. Bibliography

PART 4. ADVANCED CONCEPTS

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds
Rémy BOYER
9.1. Introduction
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model
9.3. CRB for the multidimensional harmonic model
9.3.1. Deterministic CRB
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P
9.3.3. Asymptotic CRB for a constant amount of data
9.4. Modified CRB for the multidimensional harmonic model
9.4.1. Definition of the modified CRB for T = 1
9.4.2. Modified CRB in the case of a large number of samples
9.4.3. Generalization to time-varying initial phase
9.4.4. Analysis of the MACRB
9.4.5. Numerical illustrations
9.5. Conclusion
9.6. Appendices
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays
9.7. Bibliography

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals
Corinne MAILHES and Francis CASTANIÉ
10.1. Evolutive spectra
10.1.1. Definition of the "evolutive spectrum"
10.1.2. Evolutive spectrum properties
10.2. Non-parametric spectral estimation
10.3. Parametric spectral estimation
10.3.1. Local stationary postulate
10.3.2. Elimination of a stationary condition
10.3.3. Application to spectral analysis
10.4. Bibliography

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals
Arnaud RIVOIRA and Gilles FLEURY
11.1. Applicative context
11.2. Theoretical framework
11.2.1. Signal model
11.2.2. Sampling process
11.3. Generation of a randomly sampled stochastic process
11.3.1. Objective
11.3.2. Generation of the sampling times
11.3.3. CARMA processes
11.4. Spectral analysis using undated samples
11.4.1. Non-parametric estimation
11.4.2. Parametric estimation
11.5. Spectral analysis using dated samples
11.5.1. Masry's work
11.5.2. Slotting technique
11.5.3. IRINCORREL: a consistent spectral estimator
11.6. Perspectives
11.7. Bibliography

Chapter 12. Space-Time Adaptive Processing
Laurent SAVY and François LE CHEVALIER
12.1. STAP, spectral analysis and radar signal processing
12.1.1. Generalities
12.1.2. Radar configuration with side antenna
12.1.3. Radar configuration with forward antenna
12.2. Space-time processing as a spectral estimation problem
12.2.1. Overview of space-time processing
12.2.2. Processing in additive white noise
12.2.3. Processing in additive colored noise
12.3. STAP architectures
12.3.1. Generalities and notations
12.3.2. Pre-Doppler STAP
12.3.3. Post-Doppler STAP
12.4. Relative advantages of pre-Doppler and post-Doppler STAP
12.4.1. Estimation of adaptive filtering coefficients
12.4.2. Exclusion of target signal from training data
12.4.3. Clutter heterogeneity
12.4.4. Computing load
12.4.5. Angular position estimation
12.4.6. Anti-jamming and STAP compatibility
12.5. Conclusion
12.6. Bibliography
12.7. Glossary

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids
David BONACCI
13.1. Particle filtering
13.1.1. Generalities
13.1.2. Strategies for efficient particle filtering
13.2. Application to spectral analysis
13.2.1. Observation model
13.2.2. State vector
13.2.3. State evolution model
13.2.4. Simulation results
13.3. Bibliography

List of Authors

Index

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as the power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions, which include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"), and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.

It is not essential to devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical devices (spectrographs); then, at the end of this archaeological phase (which is generally associated with the name of Isaac Newton), spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, which led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility to obtain "super resolutions" and/or to process signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or of spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader may obviously skip it. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's method and its variants, and finally the estimators based on the concept of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the IC2 series [HLA 05], and to space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-Frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal processing tool is designed to be adapted to one or more signal classes, and presents degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis, too, does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time signal x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

Deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

E = \int_{\mathbb{R}} |x(t)|^2 \, dt < \infty \quad [1.1]

E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \quad [1.2]

We recognize here the membership of x(t) in the standard function spaces (denoted L^2 or l^2), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{T/2} |x(t)|^2 \, dt < \infty \quad [1.3]

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{N} |x(k)|^2 < \infty \quad [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: finite energy signals will be, in practice, "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A \, e^{-at} \cos(2\pi f t + \theta), \quad t \geq 0
x(t) = 0, \quad t < 0

(This type of damped oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation.)
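The finite-energy property of this damped oscillation can be checked numerically. The following sketch (our own illustration; the parameter values and the sampling step are arbitrary assumptions, not taken from the book) approximates the integral in equation [1.1] and shows that it converges: doubling the integration horizon leaves the energy essentially unchanged.

```python
import numpy as np

# Numerical check: x(t) = A * exp(-a*t) * cos(2*pi*f*t + theta), t >= 0,
# has finite energy E = integral of |x(t)|^2 dt, unlike a pure sine.
A, a, f, theta = 1.0, 2.0, 5.0, 0.3
dt = 1e-4
t = np.arange(0.0, 10.0, dt)          # 10 s is much larger than 1/a
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)

E = np.sum(np.abs(x) ** 2) * dt       # Riemann approximation of [1.1]

# Doubling the horizon changes the energy by a negligible amount:
# the integral converges, so x(t) belongs to the finite-energy class.
t2 = np.arange(0.0, 20.0, dt)
x2 = A * np.exp(-a * t2) * np.cos(2 * np.pi * f * t2 + theta)
E2 = np.sum(np.abs(x2) ** 2) * dt
print(E, E2)                          # nearly identical values
```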

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = \sum_{i=1}^{4} x_i(t - t_i)

where the four components x_i(t) start at staggered times (t_i). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will be, in practice, permanent signals, i.e. signals that do not vanish at infinity. For example:

x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad |A(t)| < \infty \quad [1.5]

An example is given in Figure 1.2. There is obviously a start and an end, but the mathematical model cannot take this unknown data into account, and it is relevant to represent the signal by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.

Figure 1.2. Frequency-modulated sine curve
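By contrast with the previous example, a frequency-modulated sine curve is a finite power signal: its energy diverges with the observation horizon, but its average power converges. A minimal sketch (our own illustration; the modulation parameters are arbitrary assumptions) estimates the average power of equation [1.4]'s continuous counterpart over growing horizons.

```python
import numpy as np

# For a constant envelope A(t) = A, the average power of the FM sine
# x(t) = A * sin(2*pi*f0*t + beta*sin(2*pi*fm*t)) tends to A**2 / 2.
A, f0, fm, beta = 1.0, 50.0, 2.0, 5.0
dt = 1e-4

def avg_power(T):
    """Approximate P = (1/T) * integral over [0, T) of |x(t)|^2 dt."""
    t = np.arange(0.0, T, dt)
    x = A * np.sin(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))
    return np.mean(np.abs(x) ** 2)

# The power estimate stabilizes as the horizon T grows, while the
# energy E = P * T diverges: the signal is finite-power, not finite-energy.
P10, P100 = avg_power(10.0), avg_power(100.0)
print(P10, P100)   # both close to A**2 / 2 = 0.5
```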

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by the set of all possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here we must "go back", as far as possible, to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal
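The "family likeness" between outcomes can be imitated with a toy model (our own assumption, not the book's radar data): a random-phase sine in additive noise. Each realization differs sample by sample, yet shared ensemble statistics nearly coincide across draws.

```python
import numpy as np

# Three outcomes x_j(k) of one random signal: a random-phase sine in
# additive Gaussian noise. Each draw differs sample by sample, yet all
# obey the same probability law, hence the "family likeness".
rng = np.random.default_rng(0)
N, f = 1000, 0.05   # samples per outcome, normalized frequency

def outcome():
    phase = rng.uniform(0.0, 2.0 * np.pi)   # redrawn for each realization
    noise = rng.normal(0.0, 0.3, N)
    return np.sin(2 * np.pi * f * np.arange(N) + phase) + noise

x1, x2, x3 = outcome(), outcome(), outcome()

# Sample-wise, the outcomes are unrelated ...
print(np.max(np.abs(x1 - x2)))
# ... but a shared statistic (here the mean power, E[x^2] = 0.5 + 0.09)
# is nearly identical across realizations.
print(np.mean(x1**2), np.mean(x2**2), np.mean(x3**2))
```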

The laws are probabilities of the form:

P[x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n] = L(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n) \quad [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + dx_i[:

P[\cdot] = p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, dx_1 \ldots dx_n \quad [1.7]

and the distribution functions, with D_i = ]-\infty, x_i]:

P[\cdot] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8]

The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by:

E[x^{k_1}(t_1) \, x^{k_2}(t_2) \ldots x^{k_n}(t_n)] = \int_{\mathbb{R}} \ldots \int_{\mathbb{R}} x_1^{k_1} \ldots x_n^{k_n} \, p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, dx_1 \ldots dx_n
= \int_{\mathbb{R}} \ldots \int_{\mathbb{R}} x_1^{k_1} \ldots x_n^{k_n} \, dF(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \quad [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), \ldots, x(t_n)), defined by:

\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E[\exp(j \mathbf{u}^T \mathbf{x})], \quad \text{with } \mathbf{u} = (u_1 \ldots u_n)^T, \; \mathbf{x} = (x(t_1) \ldots x(t_n))^T \quad [1.10]

We see that the laws and the moments depend on the considered points t_1, …, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if one of the characteristics of a signal is invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if:

E[x^{k_1}(t_1) \ldots x^{k_n}(t_n)] = E[x^{k_1}(t_1 + t_0) \ldots x^{k_n}(t_n + t_0)]

The interval of t_0 for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t_0 ∈ [T_0, T_1];
– asymptotic stationarity, if this is verified only when t_0 → ∞; etc.

We will refer to [LAC 00] for a detailed study of these properties.
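A short simulation can make this distinction concrete (an illustrative sketch; the nonstationary model, a noise with a linearly growing envelope, is our own arbitrary choice): moments estimated across an ensemble of realizations are time-invariant for a stationary process and time-dependent otherwise.

```python
import numpy as np

# Estimate a second-order moment at two distinct times t1, t2 over many
# independent realizations. For the stationary process the moment does
# not depend on the time origin; for the nonstationary one it drifts.
rng = np.random.default_rng(1)
K, N = 20000, 100                      # realizations, samples per outcome
t1, t2 = 10, 90

stat = rng.normal(0.0, 1.0, (K, N))               # i.i.d.: stationary
ramp = np.arange(N) / N
nonstat = stat * (1.0 + 2.0 * ramp)               # variance grows with time

# Order-2 moments estimated across the ensemble (axis 0 = realizations):
m2_stat = stat[:, t1].var(), stat[:, t2].var()
m2_nonstat = nonstat[:, t1].var(), nonstat[:, t2].var()
print(m2_stat)      # both close to 1: invariant under time translation
print(m2_nonstat)   # clearly different at t1 and t2
```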

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if:

E[x^{k_1}(t_1) \ldots x^{k_n}(t_n)] = E[x^{k_1}(t_1 + T) \ldots x^{k_n}(t_n + T)], \quad \forall (t_1, \ldots, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).
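As a hedged illustration (our own example, not the book's), noise modulated by a periodic envelope is cyclostationary for the moment of order 2: the ensemble moment E[x^2(k)] recurs with the modulation period while varying strongly within one period.

```python
import numpy as np

# A cyclostationary signal: white noise modulated by a periodic envelope.
# Its second-order moment E[x^2(k)] is not constant, but recurs with the
# modulation period T, as in the definition above.
rng = np.random.default_rng(2)
K, N, T = 20000, 64, 16                # realizations, length, cycle period
w = rng.normal(0.0, 1.0, (K, N))
env = 1.0 + 0.8 * np.cos(2 * np.pi * np.arange(N) / T)
x = env * w

m2 = (x ** 2).mean(axis=0)             # ensemble estimate of E[x^2(k)]
# The moment at time k matches the moment at k + T (cyclostationarity) ...
print(np.max(np.abs(m2[:N - T] - m2[T:])))    # small
# ... but it is far from constant along the time axis.
print(m2.max() / m2.min())                    # about (1.8 / 0.2)**2 = 81
```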

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by formings of the information present in the signal whose sole initial motive is to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information concerning it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

R(u_1, \ldots, u_m) = H\left[ \int \ldots \int G[x(t_1), \ldots, x(t_n)] \cdot \psi(u_1, \ldots, u_m; t_1, \ldots, t_n) \, dt_1 \ldots dt_n \right] \quad [1.11]

where the operators H[\cdot] and G[\cdot] are not necessarily linear. The kernel \psi(\cdot) has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations with m not necessarily equal to n. We will expect it to be invertible (i.e. there exists an exact inverse transformation), and to highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[\cdot], G[\cdot] and \psi(\cdot) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time-frequency representations, timescale representations, and quadratic time-frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and \psi(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with \psi(t, f) = e^{-2j\pi f t}. We may once again recall its definition here:

\tilde{x}(f) = \mathrm{FT}[x(t)] = \int_{\mathbb{R}} x(t) \, e^{-2j\pi f t} \, dt, \quad \text{with } f \in \mathbb{R} \quad [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0) \ldots x(N-1))^T of length N:

\tilde{x}(l) = \mathrm{DFT}[x(k)] = \sum_{k=0}^{N-1} x(k) \, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and e^{-2j\pi f t}, we immediately see that the choice of this function set \psi(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set \psi(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard-Walsh or Haar (see [AHM 75]).

Linear time-frequency representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, f) = w(t - \tau) \, e^{-2j\pi f t}, leading to an expression of the type:

R(\tau, f) = \int x(t) \, w(t - \tau) \, e^{-2j\pi f t} \, dt \quad [1.14]
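A minimal sketch of this representation (our own choices: a Hann window, and arbitrary window length and hop size) computes R(τ, f) as the DFT of successive windowed slices, and localizes two tones occupying disjoint time intervals.

```python
import numpy as np

def stft(x, L, hop):
    """R[tau, l]: DFT of the Hann-windowed slice starting at sample tau*hop."""
    w = np.hanning(L)
    starts = range(0, len(x) - L + 1, hop)
    return np.array([np.fft.fft(w * x[s:s + L]) for s in starts])

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# Two tones occupying disjoint time intervals: 100 Hz, then 250 Hz.
x = np.where(t < 0.5, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 250 * t))

R = stft(x, L=128, hop=64)
f_bin = fs / 128                       # frequency spacing of the DFT bins
early = np.argmax(np.abs(R[1, :64]))   # dominant bin in an early slice
late = np.argmax(np.abs(R[-2, :64]))   # dominant bin in a late slice
print(early * f_bin, late * f_bin)     # near 100 Hz, then near 250 Hz
```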


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, a) = \psi_M\!\left(\frac{t - \tau}{a}\right), which gives:

R(\tau, a) = \int x(t) \, \psi_M\!\left(\frac{t - \tau}{a}\right) dt \quad [1.15]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, in which only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the distribution of energy over frequency (we will call it "spectral"), we are led to consider the following quantity:

$S_x(f) = \left| \mathrm{FT}\left( x(t) \right) \right|^2 = |x(f)|^2 = \mathrm{ESD}\left( x(t) \right)$   [1.17]


for continuous time signals and

$S_x(l) = \left| \mathrm{DFT}\left( x(k) \right) \right|^2 = |x(l)|^2 = \mathrm{ESD}\left( x(k) \right)$   [1.18]

for discrete time signals
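As a quick numerical sketch (illustrative NumPy code; the signal and sizes below are arbitrary choices, not from the book), the discrete ESD of equation [1.18] is simply the squared modulus of the DFT:

```python
import numpy as np

# Discrete energy spectral density (ESD): S_x(l) = |DFT(x)(l)|^2.
def esd(x):
    return np.abs(np.fft.fft(x)) ** 2

N = 64
k = np.arange(N)
x = np.cos(2 * np.pi * 8 * k / N)   # a cosine falling exactly on bin l = 8
S = esd(x)
# The energy concentrates in the conjugate bins l = 8 and l = N - 8 = 56,
# each holding (N/2)^2; the phase of x is lost in S.
```

Replacing x by any phase-shifted version of the same cosine leaves S unchanged, which illustrates that the ESD is only a partial representation.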

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

$E = \int_{\mathbb{R}} |x(t)|^2\, dt = \int_{\mathbb{R}} |x(f)|^2\, df$   [1.19]

and $|x(f)|^2$ therefore represents the energy distribution over f, just as $|x(t)|^2$ represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

$E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |x(l)|^2$

but the interpretation is the same
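This discrete form of Parseval's theorem is easy to check numerically; the sketch below (illustrative NumPy code, not from the book; the length and random seed are arbitrary) compares both sums on a random signal:

```python
import numpy as np

# Discrete Parseval: sum_k |x(k)|^2 = (1/N) * sum_l |X(l)|^2, with X = DFT(x).
rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)        # energy computed over time
E_freq = np.sum(np.abs(X) ** 2) / N    # same energy, computed over frequency bins
```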

The quantities $S_x(f)$ or $S_x(l)$ defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

$\mathrm{IFT}\left( S_x(f) \right) = \int_{\mathbb{R}} x(t)\, x^*(t - \tau)\, dt = \gamma_{xx}(\tau)$   [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

$\mathrm{ESD}\left( x(t) \right) = S_x(f) = \mathrm{FT}\left( \gamma_{xx}(\tau) \right)$   [1.21]
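The pair [1.20]-[1.21] also has a direct discrete counterpart: the inverse DFT of the ESD equals the circular autocorrelation. A sketch with NumPy (the sizes and seed are arbitrary, illustrative choices):

```python
import numpy as np

# Inverse DFT of the ESD vs. direct circular autocorrelation
# gamma_xx(tau) = sum_t x(t) x(t - tau), indices taken modulo N (real x).
rng = np.random.default_rng(1)
N = 128
x = rng.standard_normal(N)

S = np.abs(np.fft.fft(x)) ** 2          # discrete ESD
gamma_from_esd = np.fft.ifft(S).real    # inverse transform of the ESD

gamma_direct = np.array(
    [np.sum(x * np.roll(x, tau)) for tau in range(N)]
)
# Both computations give the same autocorrelation sequence.
```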

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the


To my children: Guillaume, Aurélien, Anastasia, Virginia, Enrica

Digital Spectral Analysis

parametric, non-parametric and advanced methods

Edited by Francis Castanié

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK (www.iste.co.uk)
John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA (www.wiley.com)

© ISTE Ltd 2011. The rights of Francis Castanié to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data: Digital spectral analysis: parametric, non-parametric and advanced methods / edited by Francis Castanié. p. cm. Includes bibliographical references and index. ISBN 978-1-84821-277-0. 1. Spectral theory (Mathematics) 2. Signal processing--Digital techniques--Mathematics 3. Spectrum analysis. I. Castanié, Francis. QA280.D543 2011 621.382'2--dc23

2011012246

British Library Cataloguing-in-Publication Data: A CIP record for this book is available from the British Library. ISBN 978-1-84821-277-0. Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.

Table of Contents

Preface xiii

PART 1. TOOLS AND SPECTRAL ANALYSIS 1

Chapter 1. Fundamentals 3
Francis CASTANIÉ

1.1. Classes of signals 3
1.1.1. Deterministic signals 3
1.1.2. Random signals 6
1.2. Representations of signals 9
1.2.1. Representations of deterministic signals 9
1.2.2. Representations of random signals 13
1.3. Spectral analysis: position of the problem 20
1.4. Bibliography 21

Chapter 2. Digital Signal Processing 23
Éric LE CARPENTIER

2.1. Introduction 23
2.2. Transform properties 24
2.2.1. Some useful functions and series 24
2.2.2. Fourier transform 29
2.2.3. Fundamental properties 33
2.2.4. Convolution sum 34
2.2.5. Energy conservation (Parseval's theorem) 36
2.2.6. Other properties 38
2.2.7. Examples 39
2.2.8. Sampling 41
2.2.9. Practical calculation, FFT 46
2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI

3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on $\hat{c}_{xx}(m)$ 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ

4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121

PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER

5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ

6.1. Which kind of parametric models? 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN

7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER

8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER

9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ

10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY

11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER

12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI

13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions, which include complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"), and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened as spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase, which is generally associated with the name of Isaac Newton, spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods from the mid-1970s which led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility of obtaining "super resolutions" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task; the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification of "non-parametric" must not be seen as a sign of belittling these methods; they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, Capon's methods and their variants, and then the estimators based on the concepts of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the IC2 series [HLA 05], covers space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-Frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis, too, does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term a deterministic signal, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler, definition resulting from the theory of random signals is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], with continuous or discrete time:

$E = \int_{\mathbb{R}} |x(t)|^2\, dt < \infty$   [1.1]

$E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty$   [1.2]

We recognize the membership of x(t) in standard function spaces (noted $L^2$ or $l^2$), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify

$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2\, dt < \infty$   [1.3]

$P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty$   [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will be, in practice, "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

$x(t) = A\, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0$
$x(t) = 0, \quad t < 0$


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
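A numerical check (an illustrative NumPy sketch; the parameter values A = 1, a = 2, f = 5 and the integration horizon are arbitrary assumptions, not from the book) shows that this damped waveform has finite energy, whereas a permanent sine only has finite average power:

```python
import numpy as np

# Energy of x(t) = A exp(-a t) cos(2 pi f t + theta) for t >= 0 (0 for t < 0).
A, a, f, theta = 1.0, 2.0, 5.0, 0.0
t = np.linspace(0.0, 10.0, 200_001)       # exp(-a t) is negligible by t = 10
dt = t[1] - t[0]
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)
E = np.sum(x ** 2) * dt                   # converges; close to A^2/(4a) = 1/8
                                          # here, since f >> a

# A permanent sine has infinite energy but finite average power ~ A^2 / 2.
s = np.sin(2 * np.pi * f * t)
P = np.sum(s ** 2) * dt / t[-1]
```

Extending the integration horizon leaves E essentially unchanged, while the energy of s grows without bound: only its average power P converges.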

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

$x(t) = \sum_{i=1}^{4} x_i(t - t_i)$

where the four components $x_i(t)$ start at staggered times $t_i$. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will be, in practice, permanent signals, i.e. not canceling at infinity. For example:

$x(t) = A(t) \sin\left(\psi(t)\right), \quad t \in \mathbb{R}, \quad |A(t)| < \infty$   [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.

Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation $x_j(t)$ (jth outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes $x(t, \omega)$. The information of the signal is no longer contained in one of the realizations $x_j(t)$, but in the probability laws that govern $x(t, \omega)$. Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of $x(t, \omega)$ from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data $x_j(t),\ j = 1, \ldots, K,\ t \in [t_0, t_1]$, we can go back to the laws of $x(t, \omega)$ is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.


Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets:

$P\left[x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n\right] = L\left(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n\right)$   [1.6]

whose most well-known examples are the probability densities, in which $D_i = [x_i, x_i + dx_i[$:

$P[\cdot] = p\left(x_1, \ldots, x_n; t_1, \ldots, t_n\right) dx_1 \cdots dx_n$   [1.7]

and the distribution functions, with $D_i = \left]-\infty, x_i\right]$:

$P[\cdot] = F\left(x_1, \ldots, x_n; t_1, \ldots, t_n\right)$   [1.8]


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by:

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = \int_{\mathbb{R}^n} x_1^{k_1} \cdots x_n^{k_n}\, p\left(x_1, \ldots, x_n; t_1, \ldots, t_n\right) dx_1 \cdots dx_n$
$= \int_{\mathbb{R}^n} x_1^{k_1} \cdots x_n^{k_n}\, dF\left(x_1, \ldots, x_n; t_1, \ldots, t_n\right), \quad \text{with } M = \sum_i k_i$   [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple $\left(x(t_1), \ldots, x(t_n)\right)$, defined by:

$\phi\left(u_1, \ldots, u_n\right) = \phi(\mathbf{u}) = E\left[\exp\left(j\, \mathbf{u}^T \mathbf{x}\right)\right], \quad \mathbf{u} = \left(u_1, \ldots, u_n\right)^T, \quad \mathbf{x} = \left(x(t_1), \ldots, x(t_n)\right)^T$   [1.10]

We see that the laws and the moments have a dependence on the considered points $t_1, \ldots, t_n$ of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if

$p\left(x_1, \ldots, x_n; t_1, \ldots, t_n\right) = p\left(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0\right)$

and of stationarity for the moment of order M if

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0)\right]$

The interval of $t_0$ for which this type of equality is verified yields various stationarity classes:

– local stationarity, if this is true only for $t_0 \in [T_0, T_1]$;

– asymptotic stationarity, if this is verified only when $t_0 \to \infty$; etc.

We will refer to [LAC 00] for a detailed study of these properties


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T)\right], \quad \forall t_1, \ldots, t_n$

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by forming the information present in the signal with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

$R\left(u_1, \ldots, u_m\right) = H\left[\int \cdots \int G\left[x(t_1), \ldots, x(t_n)\right] \cdot \psi\left(u_1, \ldots, u_m; t_1, \ldots, t_n\right) dt_1 \cdots dt_n\right]$   [1.11]

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(·), of which the Fourier kernel $\psi(t, f) = e^{-2j\pi f t}$ is the best-known example, has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations with m ≠ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation), and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the best known, with $\psi(t, f) = e^{-2j\pi f t}$. We may once again recall its definition here:

$\hat{x}(f) = \mathrm{FT}[x(t)] = \int_{\mathbb{R}} x(t)\, e^{-2j\pi f t}\, \mathrm{d}t, \quad \text{with } f \in \mathbb{R}$   [1.12]

and its discrete version, for a signal vector $\mathbf{x} = (x(0), \ldots, x(N-1))^T$ of length N:

$\hat{x}(l) = \mathrm{DFT}[x(k)] = \sum_{k=0}^{N-1} x(k)\, e^{-2j\pi kl/N}, \quad \text{with } l \in [0, N-1]$   [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of similarity between x(t) and e^(−2jπft), we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).
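As a concrete check of definition [1.13], the sketch below (an illustrative NumPy example, not taken from the book) evaluates the DFT kernel directly and compares the result with a library FFT; a complex sine curve is matched almost exclusively by the kernel at a single index l.

```python
import numpy as np

def dft(x):
    """Direct evaluation of [1.13]: x_hat(l) = sum_k x(k) exp(-2j*pi*k*l/N)."""
    N = len(x)
    k = np.arange(N)
    # The outer product k*l builds the full N x N Fourier kernel psi(k, l).
    kernel = np.exp(-2j * np.pi * np.outer(k, k) / N)
    return kernel @ x

# A complex sine curve at cyclic index 3: the inner product is large only at l = 3.
N = 32
x = np.exp(2j * np.pi * 3 * np.arange(N) / N)
X = dft(x)
assert np.allclose(X, np.fft.fft(x))   # agrees with the FFT
assert np.argmax(np.abs(X)) == 3       # energy concentrated at l = 3
```

The direct kernel evaluation costs O(N²) operations; the FFT computes the same quantity in O(N log N), which is why it is used in practice.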

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^(−2jπft), leading to an expression of the type:

$R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-2j\pi f t}\, \mathrm{d}t$   [1.14]
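Equation [1.14] is a sliding-window (short-time) Fourier transform. A minimal discrete sketch follows; the Hanning window, the sampling rate and the test signal are purely illustrative assumptions:

```python
import numpy as np

def stft(x, w, hop):
    """Discrete version of [1.14]: for each window position tau (here a frame
    start index), Fourier-transform the product x(t) w(t - tau)."""
    L = len(w)
    starts = range(0, len(x) - L + 1, hop)
    frames = np.stack([x[s:s + L] * w for s in starts])
    return np.fft.rfft(frames, axis=1)  # one spectrum per window position

# Hypothetical test signal: a 100 Hz tone followed by a 300 Hz tone (fs = 1 kHz).
fs = 1000.0
t = np.arange(2000) / fs
x = np.where(t < 1.0, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 300 * t))

R = stft(x, np.hanning(256), hop=128)
f = np.fft.rfftfreq(256, 1 / fs)
# The dominant frequency moves from ~100 Hz to ~300 Hz between first and last frames.
assert abs(f[np.argmax(np.abs(R[0]))] - 100) < 10
assert abs(f[np.argmax(np.abs(R[-1]))] - 300) < 10
```

The window w(·) trades frequency resolution against time localization, which is the central compromise of this representation.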


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, a) = ψ_M((t − τ)/a), which gives:

$R(\tau, a) = \int x(t)\, \psi_M\!\left(\frac{t - \tau}{a}\right) \mathrm{d}t$   [1.15]

where ψ_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and dilated by the factors τ and a, respectively.
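A direct numerical evaluation of [1.15] at a single point (τ, a) can be sketched as follows. The Morlet-like parent wavelet and its center frequency f0 are assumptions made here for illustration, and the analyzing wavelet is conjugated, as is conventional for complex wavelets:

```python
import numpy as np

def morlet(t, f0=0.85):
    """A Morlet-like parent wavelet psi_M (an assumption; the text leaves psi_M open)."""
    return np.exp(2j * np.pi * f0 * t) * np.exp(-t**2 / 2)

def cwt_point(x, t, tau, a):
    """Riemann-sum evaluation of [1.15]: R(tau, a) = int x(t) psi_M((t - tau)/a) dt."""
    dt = t[1] - t[0]
    return np.sum(x * np.conj(morlet((t - tau) / a))) * dt

t = np.arange(-5, 5, 0.01)
x = np.cos(2 * np.pi * 2.0 * t)       # a 2 Hz sine curve
# The response is largest when the dilated wavelet oscillates near 2 Hz, i.e. a ~ f0/2.
scales = [0.1, 0.425, 2.0]
resp = [abs(cwt_point(x, t, 0.0, a)) for a in scales]
assert np.argmax(resp) == 1
```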

Quadratic timendashfrequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

$R(t, f) = \int x\left(t + \frac{\tau}{2}\right) x^{*}\left(t - \frac{\tau}{2}\right) e^{-2j\pi f \tau}\, \mathrm{d}\tau$   [1.16]

which is invertible if at least one point of x(t) is known.
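A discrete approximation of one time slice of [1.16] can be sketched as below; the conjugate on the second factor is the one appearing in the usual Wigner–Ville definition, and the τ/2 offsets make a cyclic frequency appear at twice its index:

```python
import numpy as np

def wigner_ville_slice(x, n, nfft):
    """One time slice of [1.16], discretized: Fourier transform over the lag m
    of x[n + m] * conj(x[n - m]) (a common discrete approximation)."""
    N = len(x)
    M = min(n, N - 1 - n)                       # largest usable symmetric lag
    m = np.arange(-M, M + 1)
    r = x[n + m] * np.conj(x[n - m])
    # Zero-pad, then rotate so that lag m = 0 sits at index 0 before the FFT.
    return np.fft.fft(np.roll(np.pad(r, (0, nfft - len(r))), -M), nfft)

N = 256
k0 = 20
x = np.exp(2j * np.pi * k0 * np.arange(N) / N)  # complex sine curve at index k0
W = wigner_ville_slice(x, N // 2, N)
assert np.argmax(np.abs(W)) == 2 * k0           # localized at twice the cyclic index
```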

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information content of the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency distribution of the energy (we will call it "spectral"), we are led to consider the following quantity:

$S_x(f) = \left|\mathrm{FT}[x(t)]\right|^2 = |\hat{x}(f)|^2 = \mathrm{ESD}[x(t)]$   [1.17]


for continuous time signals and

$S_x(l) = \left|\mathrm{DFT}[x(k)]\right|^2 = |\hat{x}(l)|^2 = \mathrm{ESD}[x(k)]$   [1.18]

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

$E = \int_{\mathbb{R}} |x(t)|^2\, \mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2\, \mathrm{d}f$   [1.19]

and $|\hat{x}(f)|^2$ therefore represents the energy distribution over f, just as $|x(t)|^2$ represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

$E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2$

but the interpretation is the same
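The discrete form of Parseval's theorem is easy to verify numerically; the sketch below (illustrative, using NumPy's unnormalized FFT convention) checks the two sums on a random vector:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
N = len(x)
X = np.fft.fft(x)                        # unnormalized DFT, as in [1.13]

E_time = np.sum(np.abs(x) ** 2)          # sum_k |x(k)|^2
E_freq = np.sum(np.abs(X) ** 2) / N      # (1/N) sum_l |x_hat(l)|^2
assert np.isclose(E_time, E_freq)
```

The 1/N factor depends on the normalization chosen for the DFT; with a unitary convention it disappears.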

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

$\mathrm{IFT}\left[S_x(f)\right] = \int_{\mathbb{R}} x(t)\, x^{*}(t - \tau)\, \mathrm{d}t = \gamma_{xx}(\tau)$   [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been defined axiomatically as follows:

$S_x(f) = \mathrm{ESD}[x(t)] = \mathrm{FT}\left[\gamma_{xx}(\tau)\right]$   [1.21]
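On a finite discrete grid, the analog of [1.20] and [1.21] holds with circular indexing: the DFT of the circular autocorrelation equals the ESD. A small numerical check (an illustrative sketch; the circular convention is an assumption of the discretization, not of the book):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(32)
X = np.fft.fft(x)
esd = np.abs(X) ** 2                     # S_x(l), as in [1.18]

# Circular autocorrelation gamma_xx(m) = sum_k x(k) conj(x(k - m)), indices mod N.
N = len(x)
gamma = np.array([np.sum(x * np.conj(np.roll(x, m))) for m in range(N)])

# Discrete counterpart of [1.21]: the ESD is the Fourier transform of gamma_xx.
assert np.allclose(np.fft.fft(gamma), esd)
```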

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the


Digital Spectral Analysis

parametric non-parametric and advanced methods

Edited by Francis Castanieacute

First published 2011 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study or criticism or review as permitted under the Copyright Designs and Patents Act 1988 this publication may only be reproduced stored or transmitted in any form or by any means with the prior permission in writing of the publishers or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address

ISTE Ltd, 27-37 St George's Road, London SW19 4EU, UK – John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, USA

wwwistecouk wwwwileycom

© ISTE Ltd 2011. The rights of Francis Castanié to be identified as the author of this work have been asserted by him in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data: Digital spectral analysis: parametric, non-parametric and advanced methods / edited by Francis Castanié. p. cm. Includes bibliographical references and index. ISBN 978-1-84821-277-0. 1. Spectral theory (Mathematics) 2. Signal processing--Digital techniques--Mathematics 3. Spectrum analysis. I. Castanié, Francis. QA280.D543 2011. 621.3822--dc23

2011012246

British Library Cataloguing-in-Publication Data: A CIP record for this book is available from the British Library. ISBN 978-1-84821-277-0. Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.

Table of Contents

Preface xiii

PART 1. TOOLS AND SPECTRAL ANALYSIS 1

Chapter 1. Fundamentals 3
Francis CASTANIÉ
1.1. Classes of signals 3
1.1.1. Deterministic signals 3
1.1.2. Random signals 6
1.2. Representations of signals 9
1.2.1. Representations of deterministic signals 9
1.2.2. Representations of random signals 13
1.3. Spectral analysis: position of the problem 20
1.4. Bibliography 21

Chapter 2. Digital Signal Processing 23
Éric LE CARPENTIER
2.1. Introduction 23
2.2. Transform properties 24
2.2.1. Some useful functions and series 24
2.2.2. Fourier transform 29
2.2.3. Fundamental properties 33
2.2.4. Convolution sum 34
2.2.5. Energy conservation (Parseval's theorem) 36
2.2.6. Other properties 38
2.2.7. Examples 39
2.2.8. Sampling 41
2.2.9. Practical calculation, FFT 46
2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI
3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on ĉ_xx(m) 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ
4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121

PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER
5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ
6.1. Which kind of parametric models 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN
7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER
8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER
9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ
10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY
11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER
12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI
13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can, with good reason, wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions, which include complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more – and no less – relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency – Pythagoras and the "Music of the Spheres" – and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widespread technical means to implement the proposed methods of analysis.

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s onwards; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical devices (spectrographs); then, at the end of this archaeological phase – which is generally associated with the name of Isaac Newton – spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Then, a simultaneous increase in the power of processing tools and of algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, which led the community of signal processing to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility to obtain "super resolutions" and/or to process signals of very short duration. We will see that characterizing these estimators with the same criteria as the estimators from the analog world is not an easy task – the mere quantitative assessment of the frequency resolution or of spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification of "non-parametric" must not be seen as a sign of belittling these methods – they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's method and its variants, and finally the estimators based on the concept of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the same series [HLA 05], continues with space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes and presents a degraded or even deceptive performance if applied outside this group of classes Spectral analysis too does not escape this problem and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term a deterministic signal, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t) strongly linked to some quantities known by physicists

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

$E = \int_{\mathbb{R}} |x(t)|^2\, \mathrm{d}t < \infty$   [1.1]

$E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty$   [1.2]

We recognize the membership of x(t) to standard function spaces (noted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance in general), represents the energy E of the signal.

Signals of finite average power verify

$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2\, \mathrm{d}t < \infty$   [1.3]

$P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty$   [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

$x(t) = A\, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0$
$x(t) = 0, \quad t < 0$


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
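The finite-energy claim for this damped oscillation is easy to check numerically. The parameter values below are arbitrary, and the comparison with A²/(4a) is only approximate (it becomes exact in the limit f ≫ a):

```python
import numpy as np

A, a, f, theta = 1.0, 2.0, 5.0, 0.3     # arbitrary illustration values

def energy(T, dt=1e-5):
    """Riemann-sum estimate of int_0^T |x(t)|^2 dt for the damped oscillation."""
    t = np.arange(0.0, T, dt)
    x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)
    return np.sum(x ** 2) * dt

# The integral saturates as T grows: the tail e^{-2at} contributes almost nothing.
assert abs(energy(5.0) - energy(10.0)) < 1e-8
# The total energy is close to A^2 / (4a), the value reached when f >> a.
assert abs(energy(10.0) - A**2 / (4 * a)) < 0.01
```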

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

$x(t) = \sum_{i=1}^{4} x_i(t - t_i)$

where the four components xi(t) start at staggered times (ti) Its shape even though complex is supposed to reflect a perfectly reproducible physical experiment

Finite power signals will be, in practice, permanent signals, i.e. signals that do not vanish at infinity. For example:

$x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad A(t) < \infty$   [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.
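The finite average power of such a signal can be illustrated numerically: with a constant amplitude A, the estimate of [1.3] over a long window tends to A²/2, whatever the (hypothetical) phase law ψ(t) chosen below:

```python
import numpy as np

A = 2.0
t = np.arange(0.0, 1000.0, 1e-3)                  # long observation window
psi = 2 * np.pi * (10 * t + 2 * np.sin(0.5 * t))  # hypothetical frequency-modulated phase
x = A * np.sin(psi)

P = np.mean(x ** 2)      # discrete estimate of (1/T) int |x(t)|^2 dt
assert abs(P - A**2 / 2) < 0.01
```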


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic in the sense that they do not take into account any inaccuracy or irreproducibility even if partial To model the uncertainty on the signals several approaches are possible today but the one that was historically adopted at the origin of the signal theory is a probabilistic approach

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t0, t1], we can go back to the laws of x(t, ω) is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are sets of probabilities of the form:

$P\left[x(t_1) \in D_1,\, x(t_2) \in D_2,\, \ldots,\, x(t_n) \in D_n\right] = L(D_1, \ldots, D_n;\, t_1, \ldots, t_n)$   [1.6]

whose most well-known examples are the probability densities, in which $D_i = [x_i, x_i + \mathrm{d}x_i[$:

$P[\cdot] = p(x_1, \ldots, x_n;\, t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n$   [1.7]

and the distribution functions, with $D_i = \left]-\infty, x_i\right]$:

$P[\cdot] = F(x_1, \ldots, x_n;\, t_1, \ldots, t_n)$   [1.8]


The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; p(x_1, \ldots, x_n;\, t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n, \quad \text{with } M = \sum_i k_i$   [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple $(x(t_1), \ldots, x(t_n))$, defined by:

$\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E\left[\exp\left(j\, \mathbf{u}^T \mathbf{x}\right)\right], \quad \text{with } \mathbf{u} = (u_1, \ldots, u_n)^T,\ \mathbf{x} = (x(t_1), \ldots, x(t_n))^T$   [1.10]
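For n = 1, [1.10] can be illustrated by Monte Carlo: for a zero-mean, unit-variance Gaussian outcome (an assumption made here for illustration), the empirical mean of exp(jux) approaches the known characteristic function exp(−u²/2):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)       # outcomes of a standard Gaussian variable

u = 1.3
phi_hat = np.mean(np.exp(1j * u * x))    # empirical estimate of E[exp(j u x)]
assert abs(phi_hat - np.exp(-u**2 / 2)) < 0.01
```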

We see that the laws and the moments depend on the considered points t1, …, tn of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if one of the characteristics of a signal is invariant by translation, in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

$p(x_1, \ldots, x_n;\, t_1, \ldots, t_n) = p(x_1, \ldots, x_n;\, t_1 + t_0, \ldots, t_n + t_0)$

and of stationarity for the moment of order M if

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0)\right]$

The interval of t0 for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for $t_0 \in [T_0, T_1]$;

– asymptotic stationarity, if this is verified only when $t_0 \to \infty$; etc.

We will refer to [LAC 00] for a detailed study of these properties


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis; we then speak of cyclostationarity. For example, if:

$E\left[x^{k_1}(t_1) \cdots x^{k_n}(t_n)\right] = E\left[x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T)\right], \quad \forall\, t_1, \ldots, t_n$

the signal will be called cyclostationary for the moment of order M (see [GAR 89])
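A constructed discrete example (amplitude-modulated white noise, assumed here purely for illustration) shows such a periodically recurring second-order moment, estimated over an ensemble of outcomes:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 20                                   # modulation period, in samples
K = 100_000                              # number of outcomes in the ensemble
n = rng.standard_normal((K, 2 * T))      # white Gaussian noise, K outcomes
k = np.arange(2 * T)
x = np.cos(2 * np.pi * k / T) * n        # x(k) = cos(2*pi*k/T) * n(k)

m2 = np.mean(x ** 2, axis=0)             # ensemble estimate of E[x^2(k)]
# The order-2 moment is cos^2(2*pi*k/T): time-varying, but recurring with period T.
assert np.allclose(m2, np.cos(2 * np.pi * k / T) ** 2, atol=0.02)
assert np.allclose(m2[:T], m2[T:], atol=0.03)
```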

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H[·] and G[·] are not necessarily linear. The kernel $\psi(t, f) = e^{-2j\pi f t}$ has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations with $m \neq n$. We will expect it to be invertible (i.e. there exists an exact inverse transformation) and to highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with $\psi(t, f) = e^{-2j\pi f t}$. We may once again recall its definition here:

$$\hat{x}(f) = \mathrm{FT}\left( x(t) \right) = \int_{\mathbb{R}} x(t)\, e^{-2j\pi f t}\, \mathrm{d}t \quad \text{with } f \in \mathbb{R} \qquad [1.12]$$

and its discrete version, for a signal vector $\mathbf{x} = \left( x(0), \ldots, x(N-1) \right)^T$ of length N:

$$\hat{x}(l) = \mathrm{DFT}\left( x(k) \right) = \sum_{k=0}^{N-1} x(k)\, e^{-2j\pi \frac{kl}{N}} \quad \text{with } l \in [0, N-1] \qquad [1.13]$$

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of similarity between x(t) and $e^{-2j\pi f t}$, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
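To make this inner-product reading concrete, here is a minimal sketch (ours, not from the text) computing the DFT of equation [1.13] directly as projections onto the complex sine curves $e^{-2j\pi kl/N}$; a signal made of a single complex exponential is "similar" to exactly one basis function:

```python
import cmath

# The DFT coefficient x_hat(l) as the inner product of the signal with the
# complex sine curve exp(-2j*pi*k*l/N), written out term by term.
def dft(x):
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

# A signal consisting of one complex exponential at bin 3 projects onto that
# basis function only: the similarity measure peaks at l = 3.
N = 16
x = [cmath.exp(2j * cmath.pi * 3 * k / N) for k in range(N)]
mags = [abs(c) for c in dft(x)]
print(mags.index(max(mags)))   # bin 3 carries all the energy
```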

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and $\psi(t; \tau, f) = w(t - \tau)\, e^{-2j\pi f t}$, leading to an expression of the type

$$R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-2j\pi f t}\, \mathrm{d}t \qquad [1.14]$$
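In discrete time, [1.14] becomes a short-time Fourier transform: window, slide, project. The sketch below is our illustration (rectangular window, hypothetical helper names); the dominant bin tracks a frequency jump halfway through the signal:

```python
import cmath

def stft_bin(x, w, tau, l):
    # R(tau, l): DFT bin l of the slice of x seen through window w at position tau
    N = len(w)
    return sum(x[tau + n] * w[n] * cmath.exp(-2j * cmath.pi * n * l / N)
               for n in range(N))

# Signal whose frequency jumps from bin 2 to bin 6 at mid-course.
N = 32
x = [cmath.exp(2j * cmath.pi * 2 * n / N) for n in range(64)] + \
    [cmath.exp(2j * cmath.pi * 6 * n / N) for n in range(64)]
w = [1.0] * N                      # rectangular window, for simplicity

early = max(range(N), key=lambda l: abs(stft_bin(x, w, 0, l)))
late  = max(range(N), key=lambda l: abs(stft_bin(x, w, 96, l)))
print(early, late)                 # dominant bin before and after the jump
```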


Timescale (or wavelet) representations

This is the special case where $H[x] = G[x] = x$ and $\psi(t; \tau, a) = \psi_M\left( \frac{t - \tau}{a} \right)$,

which gives

$$R(\tau, a) = \int x(t)\, \psi_M\left( \frac{t - \tau}{a} \right) \mathrm{d}t \qquad [1.15]$$

where $\psi_M(t)$ is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.
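A minimal discrete sketch of [1.15], assuming a Haar parent wavelet (our choice; the text does not fix $\psi_M$): the coefficient $R(\tau, a)$ is largest when the translated, dilated wavelet matches a feature of x(t), here a step edge.

```python
def haar(u):
    # Parent wavelet: +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere
    if 0.0 <= u < 0.5:
        return 1.0
    if 0.5 <= u < 1.0:
        return -1.0
    return 0.0

def wavelet_coeff(x, tau, a):
    # Discretized R(tau, a) = sum_t x(t) * psi_M((t - tau) / a)
    return sum(x[t] * haar((t - tau) / a) for t in range(len(x)))

# A step edge at t = 50: the coefficient magnitude peaks when the dilated
# wavelet (support of length a = 8) straddles the edge.
x = [0.0] * 50 + [1.0] * 50
scores = {tau: abs(wavelet_coeff(x, tau, 8.0)) for tau in range(0, 93)}
best = max(scores, key=scores.get)
print(best)   # the translation that best aligns the wavelet with the edge
```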

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation

$$R(t, f) = \int x\left( t + \frac{\tau}{2} \right) x^*\left( t - \frac{\tau}{2} \right) e^{-2j\pi f \tau}\, \mathrm{d}\tau \qquad [1.16]$$

which is invertible if at least one point of x(t) is known.
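A discrete sketch of [1.16] (ours; on an integer grid the symmetric product $x(t + \tau/2)\, x^*(t - \tau/2)$ is evaluated as $x(t+k)\, x^*(t-k)$, and the frequency axis is scanned over a finite grid): for a single complex sine curve the representation peaks at the signal frequency.

```python
import cmath

def wigner_ville(x, t, f, K):
    # Correlate the symmetric product x(t+k) x*(t-k) against exp(-2j*pi*f*(2k))
    return sum(x[t + k] * (x[t - k]).conjugate() * cmath.exp(-4j * cmath.pi * f * k)
               for k in range(-K, K + 1))

f0 = 0.1                                   # normalized frequency of the test signal
x = [cmath.exp(2j * cmath.pi * f0 * n) for n in range(128)]
t, K = 64, 30
grid = [(f / 200, abs(wigner_ville(x, t, f / 200, K))) for f in range(100)]
best_f = max(grid, key=lambda p: p[1])[0]
print(best_f)                              # peak located at f0
```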

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ_M(·) can be found in [OPP 75] and [NIK 93], from among the abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency distribution of energy (we will call it "spectral"), we are led to consider the following quantity:

$$S_x(f) = \left| \mathrm{FT}\left( x(t) \right) \right|^2 = \left| \hat{x}(f) \right|^2 = \mathrm{ESD}\left( x(t) \right) \qquad [1.17]$$


for continuous time signals and

$$S_x(l) = \left| \mathrm{DFT}\left( x(k) \right) \right|^2 = \left| \hat{x}(l) \right|^2 = \mathrm{ESD}\left( x(k) \right) \qquad [1.18]$$

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states

$$E = \int_{\mathbb{R}} \left| x(t) \right|^2 \mathrm{d}t = \int_{\mathbb{R}} \left| \hat{x}(f) \right|^2 \mathrm{d}f \qquad [1.19]$$

and $\left| \hat{x}(f) \right|^2$ therefore represents the energy distribution in f, just as $\left| x(t) \right|^2$ represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

$$E = \sum_{k=0}^{N-1} \left| x(k) \right|^2 = \frac{1}{N} \sum_{l=0}^{N-1} \left| \hat{x}(l) \right|^2$$

but the interpretation is the same.
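A quick numerical check of this discrete Parseval relation, with the DFT written out as in [1.13] (the signal values below are arbitrary):

```python
import cmath

def dft(x):
    # Plain DFT, term by term, as in equation [1.13]
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

x = [0.7, -1.2, 0.0, 2.5, -0.3, 1.1, 0.4, -0.9]
N = len(x)
E_time = sum(abs(v) ** 2 for v in x)
E_freq = sum(abs(c) ** 2 for c in dft(x)) / N
print(E_time, E_freq)   # equal up to rounding
```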

The quantities Sx(f) or Sx(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by

$$\mathrm{IFT}\left( S_x(f) \right) = \int_{\mathbb{R}} x(t)\, x^*(t - \tau)\, \mathrm{d}t = \gamma_{xx}(\tau) \qquad [1.20]$$

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

$$S_x(f) = \mathrm{ESD}\left( x(t) \right) = \mathrm{FT}\left( \gamma_{xx}(\tau) \right) \qquad [1.21]$$
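In discrete time this pair of relations can be verified directly: the ESD $|\hat{x}(l)|^2$ equals the DFT of the autocorrelation of the signal (circular here, an artifact of the finite, periodic transform). The sketch and its names are ours:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

x = [1.0, 2.0, 0.0, -1.0, 0.5, 0.0, -0.5, 1.5]
N = len(x)
esd = [abs(c) ** 2 for c in dft(x)]

# Circular autocorrelation gamma_xx(m) = sum_k x(k) * x((k - m) mod N)
gamma = [sum(x[k] * x[(k - m) % N] for k in range(N)) for m in range(N)]
esd_via_gamma = [c.real for c in dft(gamma)]
print(max(abs(a - b) for a, b in zip(esd, esd_via_gamma)))   # ~0
```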

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the


Table of Contents

Preface xiii

PART 1. TOOLS AND SPECTRAL ANALYSIS 1

Chapter 1. Fundamentals 3
Francis CASTANIÉ
1.1. Classes of signals 3
1.1.1. Deterministic signals 3
1.1.2. Random signals 6
1.2. Representations of signals 9
1.2.1. Representations of deterministic signals 9
1.2.2. Representations of random signals 13
1.3. Spectral analysis: position of the problem 20
1.4. Bibliography 21

Chapter 2. Digital Signal Processing 23
Éric LE CARPENTIER
2.1. Introduction 23
2.2. Transform properties 24
2.2.1. Some useful functions and series 24
2.2.2. Fourier transform 29
2.2.3. Fundamental properties 33
2.2.4. Convolution sum 34
2.2.5. Energy conservation (Parseval's theorem) 36
2.2.6. Other properties 38
2.2.7. Examples 39
2.2.8. Sampling 41
2.2.9. Practical calculation, FFT 46
2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI
3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on $\hat{c}_{xx}(m)$ 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ
4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121

PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER
5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ
6.1. Which kind of parametric models 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN
7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER
8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER
9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ
10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY
11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER
12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI
13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence: the constant quest for the frequency contents of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely of complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities such as the power or energy of a signal over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions which include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis; they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description; it probably has its origins in the fact that the concept of frequency is above all a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"), and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widespread technical means of implementing the proposed methods of analysis.

It is not essential to devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we do devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers were optical analyzers (spectrographs); then, at the end of this archaeological phase (generally associated with the name of Isaac Newton), spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Then a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, successfully applied by G. Yule (1927) to the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s onwards, which led the signal processing community to consider parametric modeling as a tool for spectral analysis in its own right, with its own characteristics, including the possibilities of obtaining "super resolutions" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as the estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or of spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on time-series models, then Capon's method and its variants, and finally the estimators based on the concept of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the IC2 series [HLA 05], and to space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ

May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal processing tool is designed to be adapted to one or more signal classes, and presents degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term a deterministic signal, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition resulting from the theory of random signals is based on the exactly predictive nature of x(t), t ≥ t0, once it is known for t < t0 (singular term of the Wold decomposition; see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

Deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

$$E = \int_{\mathbb{R}} \left| x(t) \right|^2 \mathrm{d}t < \infty \qquad [1.1]$$

$$E = \sum_{k=-\infty}^{+\infty} \left| x(k) \right|^2 < \infty \qquad [1.2]$$

We recognize the membership of x(t) in the standard function spaces (noted L2 or l2), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

$$P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} \left| x(t) \right|^2 \mathrm{d}t < \infty \qquad [1.3]$$

$$P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} \left| x(k) \right|^2 < \infty \qquad [1.4]$$

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.
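The contrast between the two classes is easy to see numerically (our sketch; the waveforms and parameter values are arbitrary): the energy of a damped pulse converges as the observation window grows, while a permanent sine accumulates energy linearly yet keeps a finite average power.

```python
import math

dt = 0.001   # integration step for the Riemann sums

def energy(x, T):
    # Approximate integral of |x(t)|^2 over [0, T]
    return sum(abs(x(k * dt)) ** 2 * dt for k in range(int(T / dt)))

pulse = lambda t: math.exp(-2.0 * t) * math.cos(2 * math.pi * 5 * t)  # finite energy
sine = lambda t: math.sin(2 * math.pi * 5 * t)                        # finite power

print(energy(pulse, 10), energy(pulse, 20))   # converges: E < infinity
print(energy(sine, 10), energy(sine, 20))     # keeps growing linearly with T
print(energy(sine, 20) / 20)                  # but the average power stays near 1/2
```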

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

$$x(t) = A e^{-at} \cos\left( 2\pi f t + \theta \right), \quad t \geq 0; \qquad x(t) = 0, \quad t < 0$$


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation.)

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

$$x(t) = \sum_{i=1}^{4} x_i\left( t - t_i \right)$$

where the four components xi(t) start at staggered times (ti). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not cancel at infinity. For example:

$$x(t) = A(t) \sin\left( \psi(t) \right), \quad t \in \mathbb{R}, \quad \left| A(t) \right| < \infty \qquad [1.5]$$

An example is given in Figure 1.2. There is obviously a start and an end, but the mathematical model cannot take this unknown data into account, and it is relevant to represent the signal by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.
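A signal of type [1.5] is easy to synthesize (our sketch; the carrier, deviation and modulation-rate values are ours, not those behind Figure 1.2):

```python
import math

# Frequency-modulated sine curve: x(t) = A sin(psi(t)), with instantaneous
# frequency f(t) = (1/2*pi) d(psi)/dt = f0 + df*cos(2*pi*fm*t).
f0, df, fm = 50.0, 5.0, 1.0   # carrier, deviation, modulation rate (Hz)
A = 1.0

def psi(t):
    return 2 * math.pi * f0 * t + (df / fm) * math.sin(2 * math.pi * fm * t)

# Two seconds sampled at 10 kHz: bounded amplitude, hence a finite power signal.
x = [A * math.sin(psi(k * 0.0001)) for k in range(20000)]
print(max(abs(v) for v in x))   # never exceeds A
```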


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation xj(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations xj(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes, which could well demonstrate the properties sought; here we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data $x_j(t),\ j = 1, \ldots, K,\ t \in [t_0, t_1]$, we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are sets of probabilities

$$P\left[ x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \right] = L\left( D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n \right) \qquad [1.6]$$

whose most well-known examples are the probability densities, in which $D_i = [x_i, x_i + \mathrm{d}x_i[$:

$$P[\,\cdot\,] = p\left( x_1, \ldots, x_n; t_1, \ldots, t_n \right) \mathrm{d}x_1 \cdots \mathrm{d}x_n \qquad [1.7]$$

and the distribution functions, with $D_i = \left] -\infty, x_i \right]$:

$$P[\,\cdot\,] = F\left( x_1, \ldots, x_n; t_1, \ldots, t_n \right) \qquad [1.8]$$


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by

$$E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; p\left( x_1, \ldots, x_n; t_1, \ldots, t_n \right) \mathrm{d}x_1 \cdots \mathrm{d}x_n$$
$$= \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; \mathrm{d}F\left( x_1, \ldots, x_n; t_1, \ldots, t_n \right), \quad \text{with } M = \sum_i k_i \qquad [1.9]$$

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple $\left( x(t_1), \ldots, x(t_n) \right)$, defined by

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments depend on the considered points t1, …, tn of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if

$$p\left( x_1, \ldots, x_n; t_1, \ldots, t_n \right) = p\left( x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0 \right)$$

and of stationarity for the moment of order M if

$$E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0) \right]$$

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. re-formings of the information present in the signal, with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

\[
R_m(u_1, \ldots, u_m) = H\left[ \int_{\mathbb{R}^n} \cdots \int G\left[ x(t_1), \ldots, x(t_n) \right] \cdot \psi(t_1, \ldots, t_n; u_1, \ldots, u_m) \, dt_1 \cdots dt_n \right] \quad [1.11]
\]

where the operators H[·] and G[·] are not necessarily linear. The kernel \(\psi(t, f) = e^{-2j\pi f t}\) has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions and, in general, we will use representations with m ≥ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with \(\psi(t, f) = e^{-2j\pi f t}\). We may once again recall its definition here:

\[
\hat{x}(f) = \mathrm{FT}\left( x(t) \right) = \int_{\mathbb{R}} x(t) \, e^{-2j\pi f t} \, dt, \quad \text{with } f \in \mathbb{R} \quad [1.12]
\]

and its discrete version, for a signal vector \(\mathbf{x} = \left( x(0), \ldots, x(N-1) \right)^T\) of length N:

\[
\hat{x}(l) = \mathrm{DFT}\left( x(k) \right) = \sum_{k=0}^{N-1} x(k) \, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13]
\]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and \(e^{-2j\pi f t}\), we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
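This inner-product reading of the DFT can be checked directly. The following sketch (NumPy; signal length and frequency are illustrative) evaluates equation [1.13] as explicit inner products against the complex exponential family, and confirms the result against a library FFT:

```python
import numpy as np

N = 64
k = np.arange(N)
x = np.exp(2j * np.pi * 5 * k / N)          # complex sine curve at bin l = 5

# DFT of equation [1.13], written explicitly as inner products between x
# and the family e^{-2j.pi.kl/N}, l = 0..N-1
X = np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N)) for l in range(N)])

print(np.argmax(np.abs(X)))                 # -> 5: the matching exponential
print(np.allclose(X, np.fft.fft(x)))        # -> True: agrees with the FFT
```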

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use functions such as Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and \(\psi(t; \tau, f) = w(t - \tau) \, e^{-2j\pi f t}\), leading to an expression of the type

\[
R(\tau, f) = \int x(t) \, w(t - \tau) \, e^{-2j\pi f t} \, dt \quad [1.14]
\]
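A discrete sketch of this windowed transform might look as follows (NumPy; the Hanning window, window length and hop size are arbitrary choices of this sketch, not prescribed by the text). Sliding the window localizes the analysis, so the dominant frequency bin tracks each component in time:

```python
import numpy as np

# Discrete counterpart of equation [1.14]: slide a window w over x and take
# a DFT at each position. Window length and hop are illustrative.
def linear_tf(x, n_win=64, hop=16):
    w = np.hanning(n_win)
    frames = [x[i:i + n_win] * w for i in range(0, len(x) - n_win + 1, hop)]
    return np.array([np.fft.fft(f) for f in frames])   # rows: tau, cols: f

fs = 1000
t = np.arange(2048) / fs
x = np.concatenate([np.sin(2*np.pi*50*t[:1024]), np.sin(2*np.pi*200*t[1024:])])

R = linear_tf(x)
# The dominant bin in the first and last frames tracks the 50 Hz then the
# 200 Hz component: the representation localizes in time and frequency.
half = R.shape[1] // 2
print(np.argmax(np.abs(R[0, :half])) * fs / 64)    # near 50 Hz
print(np.argmax(np.abs(R[-1, :half])) * fs / 64)   # near 200 Hz
```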


Timescale (or wavelet) representations

This is the special case where \(H[x] = G[x] = x\) and \(\psi(t; \tau, a) = \psi_M\!\left( \frac{t - \tau}{a} \right)\), which gives

\[
R(\tau, a) = \int x(t) \, \psi_M\!\left( \frac{t - \tau}{a} \right) dt \quad [1.15]
\]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively
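As an illustration (NumPy; the Morlet-like parent wavelet, its normalization by \(1/\sqrt{a}\) and the scale grid are assumptions of this sketch, not taken from the text), correlating a sine curve with translated and dilated copies of the parent wavelet singles out the matching scale:

```python
import numpy as np

# Discrete sketch of equation [1.15]: correlate x with translated and
# dilated copies of a parent wavelet (here a Morlet-like waveform).
def parent(t):
    return np.cos(5 * t) * np.exp(-t**2 / 2)

def timescale(x, dt, scales):
    t = (np.arange(len(x)) - len(x) // 2) * dt
    rows = []
    for a in scales:
        psi = parent(t / a) / np.sqrt(a)       # psi_M((t - tau)/a), all tau at once
        rows.append(np.convolve(x, psi[::-1], mode="same"))
    return np.array(rows)                      # rows: scale a, cols: tau

dt = 0.01
t = np.arange(1024) * dt
x = np.sin(2 * np.pi * 8 * t)                  # 8 Hz sine curve

R = timescale(x, dt, scales=[0.05, 0.1, 0.2, 0.4])
# The Morlet-like wavelet at scale a oscillates near 5/(2.pi.a) Hz, so the
# best match for 8 Hz is a ~ 0.1 (scale index 1).
print(np.argmax(np.abs(R).max(axis=1)))
```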

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

\[
R(t, f) = \int x\left( t + \frac{\tau}{2} \right) x^*\left( t - \frac{\tau}{2} \right) e^{-2j\pi f \tau} \, d\tau \quad [1.16]
\]

which is invertible if at least one point of x(t) is known.
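A naive discrete version of equation [1.16] can be sketched as follows (NumPy; the O(N³) double loop is for clarity only, and the signal is taken complex to avoid the cross-terms a real signal would introduce):

```python
import numpy as np

# Naive discrete Wigner-Ville sketch of equation [1.16]:
# W(n, f) = sum_tau x(n + tau) x*(n - tau) e^{-4j.pi.f.tau}
def wigner_ville(x):
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)
        tau = np.arange(-tau_max, tau_max + 1)
        kernel = x[n + tau] * np.conj(x[n - tau])
        for l in range(N):
            W[n, l] = np.real(np.sum(kernel * np.exp(-2j * np.pi * l * tau / N)))
    return W   # frequency bin l corresponds to f = l / (2N) in cycles/sample

N = 128
n = np.arange(N)
x = np.exp(2j * np.pi * 0.125 * n)      # complex sine at 0.125 cycles/sample

W = wigner_ville(x)
print(np.argmax(W[N // 2]))             # peaks at l = 2.N.f = 32
```

The energy of the constant-frequency sine concentrates on a single line of the time–frequency plane, as expected of this quadratic representation.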

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it "spectral"), we are led to consider the following quantity:

\[
S_x(f) = \left| \mathrm{FT}\left( x(t) \right) \right|^2 = \left| \hat{x}(f) \right|^2 = \mathrm{ESD}\left( x(t) \right) \quad [1.17]
\]


for continuous time signals and

\[
S_x(l) = \left| \mathrm{DFT}\left( x(k) \right) \right|^2 = \left| \hat{x}(l) \right|^2 = \mathrm{ESD}\left( x(k) \right) \quad [1.18]
\]

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

\[
E = \int_{\mathbb{R}} \left| x(t) \right|^2 dt = \int_{\mathbb{R}} \left| \hat{x}(f) \right|^2 df \quad [1.19]
\]

and \(\left| \hat{x}(f) \right|^2\) represents the energy distribution in f, just as \(\left| x(t) \right|^2\) represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

\[
E = \sum_{k=0}^{N-1} \left| x(k) \right|^2 = \frac{1}{N} \sum_{l=0}^{N-1} \left| \hat{x}(l) \right|^2
\]

but the interpretation is the same
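The discrete Parseval relation and the ESD of equation [1.18] are easy to verify numerically (NumPy sketch; signal length and content are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = rng.standard_normal(N)              # any finite energy sequence

X = np.fft.fft(x)
esd = np.abs(X) ** 2                    # ESD of equation [1.18]

E_time = np.sum(np.abs(x) ** 2)         # energy on the time axis
E_freq = np.sum(esd) / N                # same energy, distributed over l

print(np.isclose(E_time, E_freq))       # -> True (discrete Parseval)
```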

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\[
\mathrm{IFT}\left( S_x(f) \right) = \int_{\mathbb{R}} x(t) \, x^*(t - \tau) \, dt = \gamma_{xx}(\tau) \quad [1.20]
\]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been defined axiomatically as follows:

\[
\mathrm{ESD}\left( x(t) \right) = S_x(f) = \mathrm{FT}\left( \gamma_{xx}(\tau) \right) \quad [1.21]
\]
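In discrete time, the same relation links the circular autocorrelation to the ESD; a small NumPy check (illustrative sizes):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 128
x = rng.standard_normal(N)

# Circular autocorrelation gamma_xx(m) = sum_t x(t) x*(t - m), computed
# directly; its DFT equals the ESD |X(l)|^2 of [1.18], the discrete
# counterpart of equation [1.21].
gamma = np.array([np.sum(x * np.conj(np.roll(x, m))) for m in range(N)])

esd = np.abs(np.fft.fft(x)) ** 2
print(np.allclose(np.fft.fft(gamma).real, esd))   # -> True
```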

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the


Table of Contents

Preface xiii

PART 1. TOOLS AND SPECTRAL ANALYSIS 1

Chapter 1. Fundamentals 3
Francis CASTANIÉ
1.1. Classes of signals 3
1.1.1. Deterministic signals 3
1.1.2. Random signals 6
1.2. Representations of signals 9
1.2.1. Representations of deterministic signals 9
1.2.2. Representations of random signals 13
1.3. Spectral analysis: position of the problem 20
1.4. Bibliography 21

Chapter 2. Digital Signal Processing 23
Éric LE CARPENTIER
2.1. Introduction 23
2.2. Transform properties 24
2.2.1. Some useful functions and series 24
2.2.2. Fourier transform 29
2.2.3. Fundamental properties 33
2.2.4. Convolution sum 34
2.2.5. Energy conservation (Parseval's theorem) 36
2.2.6. Other properties 38
2.2.7. Examples 39
2.2.8. Sampling 41
2.2.9. Practical calculation: FFT 46
2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI
3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on ĉ_xx(m) 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ
4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121

PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER
5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ
6.1. Which kind of parametric models? 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN
7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER
8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER
9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ
10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY
11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER
12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI
13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377
Index 379

Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence, the constant quest for the frequency contents of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th Century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can, with good reason, wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions, which include complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more – and no less – relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency – Pythagoras and the "Music of the Spheres" – and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of the frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement that is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of the analysis.

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition on the cultural level, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase – which is generally associated with the name of Isaac Newton – spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the present availability of sufficiently powerful digital methods, from the mid-1970s, which led the community of signal processing to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibilities to obtain "super resolutions" and/or to process signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task – the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form part of the second part. The privative appearing in the qualification of "non-parametric" must not be seen as a sign of belittling these methods – they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, Capon's method and its variants, and then the estimators based on the concepts of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to parametric spectral analysis of non-stationary signals (a subject with great potential, which is tackled in greater depth in another book of the IC2 series [HLA 05]) and of space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis, too, does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term a deterministic signal, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t_0, from the moment that it is known for t < t_0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t) strongly linked to some quantities known by physicists

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

\[
E = \int_{\mathbb{R}} \left| x(t) \right|^2 dt < \infty \quad [1.1]
\]

\[
E = \sum_{k=-\infty}^{+\infty} \left| x(k) \right|^2 < \infty \quad [1.2]
\]

We recognize the membership of x(t) to standard function spaces (noted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify

\[
P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} \left| x(t) \right|^2 dt < \infty \quad [1.3]
\]

\[
P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} \left| x(k) \right|^2 < \infty \quad [1.4]
\]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will be, in practice, "pulse-shaped" or "transient" signals such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

\[
x(t) = A \, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0
\]
\[
x(t) = 0, \quad t < 0
\]


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

\[
x(t) = \sum_{i=1}^{4} x_i(t - t_i)
\]

where the four components xi(t) start at staggered times (ti) Its shape even though complex is supposed to reflect a perfectly reproducible physical experiment

Finite power signals will be, in practice, permanent signals, i.e. not canceling at infinity. For example:

\[
x(t) = A(t) \sin\left( \psi(t) \right), \quad t \in \mathbb{R}, \quad \text{with } \left| A(t) \right| < \infty \quad [1.5]
\]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.
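The two integral properties can be illustrated numerically (NumPy sketch; all parameter values below are arbitrary): a damped oscillation of the form given above has finite energy, while a unit sine curve has unbounded energy but a finite average power:

```python
import numpy as np

dt = 1e-4
t = np.arange(0, 10, dt)

# Finite energy signal: damped oscillation (zero for t < 0, so start at 0)
A, a, f0, theta = 1.0, 2.0, 5.0, 0.0
x1 = A * np.exp(-a * t) * np.cos(2 * np.pi * f0 * t + theta)
E = np.sum(x1**2) * dt                  # approximates equation [1.1]

# Finite power signal: unit sine curve; its energy grows without bound,
# but the time average of equation [1.3] converges (to 1/2 here)
x2 = np.sin(2 * np.pi * f0 * t)
P = np.sum(x2**2) * dt / t[-1]

print(round(E, 3))    # close to A^2/(4a) ~ 0.125
print(round(P, 3))    # -> 0.5
```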


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic in the sense that they do not take into account any inaccuracy or irreproducibility even if partial To model the uncertainty on the signals several approaches are possible today but the one that was historically adopted at the origin of the signal theory is a probabilistic approach

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets

\[
P\left[ x(t_1) \in D_1, \, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \right] = L(D_1, \ldots, D_n; t_1, \ldots, t_n) \quad [1.6]
\]

whose most well-known examples are the probability densities, in which \(D_i = [x_i, x_i + dx_i[\):

\[
P[\cdot] = p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, dx_1 \cdots dx_n \quad [1.7]
\]

and the distribution functions, with \(D_i = \, ]-\infty, x_i]\):

\[
P[\cdot] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8]
\]

8 Digital Spectral Analysis

The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

( ) ( )( ) ( )

( )

1 1

1

1 1 1 11

1 11

p d d

d

with

n nn

nn

kk k kn n n n n

k kn n n

ii

E x t x t x x x x t t x x

x x F x x t t

M k

real

real

=

=

=

int int

int intsum

hellip hellip hellip hellip

hellip hellip hellip [19]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple ( ) ( ) 1 1 n nx t x thellip

defined by

( ) ( ) ( )( )1

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments have a dependence on the considered points titn of the time axis The separation of random signals into classes very often refers to the nature of this dependence if a signal has one of its characteristics invariant by translation or in other words independent of the time origin we will call the signal stationary for this particular characteristic We will thus speak of stationarity in law if

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

R_m(u_1, …, u_m) = H[ ∫_{ℝⁿ} G[x(t_1), …, x(t_n)] · ψ(t_1, …, t_n; u_1, …, u_m) dt_1 … dt_n ]   [1.11]

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(t, f) = e^{−2jπft} has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will have representations with m ≥ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with ψ(t, f) = e^{−2jπft}. We may once again recall its definition here:

x̂(f) = FT(x(t)) = ∫_ℝ x(t) e^{−2jπft} dt, with f ∈ ℝ   [1.12]

and its discrete version, for a signal vector x = (x(0), …, x(N−1))^T of length N:

x̂(l) = DFT(x(k)) = Σ_{k=0}^{N−1} x(k) e^{−2jπkl/N}, with l ∈ [0, N−1]   [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of similarity between x(t) and e^{−2jπft}, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
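As a minimal numerical sketch of equation [1.13] (the function names and parameter values here are ours, chosen purely for illustration), the DFT can be written as the projection of the signal vector on the set of complex exponential kernels; a complex sine curve then concentrates on a single coefficient:

```python
import numpy as np

# Direct computation of the DFT of equation [1.13]:
# x_hat(l) = sum_k x(k) * exp(-2j*pi*k*l/N)
def dft(x):
    N = len(x)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(k, k) / N)   # kernel matrix psi(k, l)
    return W @ x

# A complex sine curve at bin 3 projects onto a single basis function
N = 16
x = np.exp(2j * np.pi * 3 * np.arange(N) / N)
X = dft(x)
print(np.argmax(np.abs(X)))   # → 3
```

The result agrees with the fast implementation `np.fft.fft`, which computes the same sum in O(N log N).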

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use functions such as Hadamard–Walsh or Haar (see [AHM 75]).
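The same projection mechanism with a binary-valued basis can be sketched as follows, assuming `scipy` is available (the example signal is ours): the rows of a Hadamard matrix form an orthogonal set of ±1 functions, and a signal matching one of them concentrates on a single coefficient, exactly as a complex sine curve does on the Fourier basis.

```python
import numpy as np
from scipy.linalg import hadamard

# Hadamard-Walsh analysis: the rows of H are orthogonal ±1 sequences,
# with H @ H.T = N * I (order N a power of 2).
N = 8
H = hadamard(N)

# A signal equal to one of the ±1 basis functions projects onto a
# single coefficient.
x = H[5].astype(float)
coeffs = H @ x / N           # inner products with the basis rows
print(coeffs)                # → 1.0 at index 5, 0.0 elsewhere
```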

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^{−2jπft}, leading to an expression of the type:

R(τ, f) = ∫ x(t) w(t − τ) e^{−2jπft} dt   [1.14]
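A discrete counterpart of equation [1.14] can be sketched by sliding a window over the signal and Fourier-transforming each segment (this is the usual short-time Fourier transform; the window choice, frame lengths and test signal below are our own illustrative assumptions):

```python
import numpy as np

def stft(x, win_len, hop, fs):
    """Discrete counterpart of equation [1.14]: slide a window w over the
    signal and Fourier-transform each windowed segment."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w
              for i in range(0, len(x) - win_len + 1, hop)]
    R = np.array([np.fft.rfft(f) for f in frames])     # (time, frequency)
    freqs = np.fft.rfftfreq(win_len, 1 / fs)
    return R, freqs

# Signal whose frequency jumps from 50 Hz to 120 Hz halfway (fs = 1 kHz):
# the first frames peak near 50 Hz, the last ones near 120 Hz.
fs = 1000
t = np.arange(2 * fs) / fs
x = np.where(t < 1.0, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 120 * t))
R, freqs = stft(x, win_len=256, hop=128, fs=fs)
print(freqs[np.argmax(np.abs(R[0]))], freqs[np.argmax(np.abs(R[-1]))])
```

Unlike the plain Fourier transform, the window w localizes the analysis around τ, which is what makes the frequency jump visible.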


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, a) = ψ_M((t − τ)/a), which gives:

R(τ, a) = ∫ x(t) ψ_M((t − τ)/a) dt   [1.15]

where ψ_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a respectively.
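Equation [1.15] can be discretized directly; in the sketch below the Mexican-hat function plays the role of the parent wavelet ψ_M, and the signal and grid are our own illustrative choices. The response is maximal when the translated wavelet is aligned with the matching feature in x(t):

```python
import numpy as np

def mexican_hat(t):
    # A classical parent wavelet, used here purely for illustration
    return (1 - t**2) * np.exp(-t**2 / 2)

def wavelet_coeff(x, t, tau, a):
    """Direct discretization of equation [1.15]:
    R(tau, a) ≈ sum of x(t) * psi_M((t - tau)/a) * dt."""
    dt = t[1] - t[0]
    return np.sum(x * mexican_hat((t - tau) / a)) * dt

# Signal containing the parent wavelet itself, centered at t = 5: the
# response peaks when the analysis wavelet is aligned with it.
t = np.linspace(0.0, 10.0, 2001)
x = mexican_hat(t - 5.0)
taus = np.linspace(0.0, 10.0, 101)
R = [wavelet_coeff(x, t, tau, a=1.0) for tau in taus]
print(taus[np.argmax(R)])   # → 5.0
```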

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^{−2jπfτ} dτ   [1.16]

which is invertible if at least one point of x(t) is known.
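One possible discretization of equation [1.16] (a sketch under our own assumptions: τ/2 is taken on the sampling grid, so τ advances in steps of 2·dt, and the signal is an illustrative complex tone) forms the instantaneous correlation in τ at a fixed time index and Fourier-transforms it:

```python
import numpy as np

def wigner_ville_slice(x, n, dt):
    """Sketch of equation [1.16] at time index n: form
    x(t + tau/2) * conj(x(t - tau/2)) on the sampling grid and
    Fourier-transform along tau (which advances in steps of 2*dt)."""
    L = min(n, len(x) - 1 - n)            # largest usable half-lag
    k = np.arange(-L, L + 1)
    r = x[n + k] * np.conj(x[n - k])      # instantaneous correlation in tau
    M = 4 * L + 4                         # zero-padded FFT length
    W = np.fft.fft(r, M)
    freqs = np.fft.fftfreq(M, d=2 * dt)   # tau step is 2*dt
    return freqs, np.abs(W)

# For a pure complex tone at f0, the distribution concentrates at f0
# whatever the chosen time index n.
fs, f0 = 1000, 50
t = np.arange(512) / fs
x = np.exp(2j * np.pi * f0 * t)
freqs, W = wigner_ville_slice(x, n=256, dt=1/fs)
print(freqs[np.argmax(W)])   # → close to 50 Hz
```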

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency distribution of the energy (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = |FT(x(t))|² = |x̂(f)|² = ESD(x(t))   [1.17]

12 Digital Spectral Analysis

for continuous time signals and

S_x(l) = |DFT(x(k))|² = |x̂(l)|² = ESD(x(k))   [1.18]

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

E = ∫_ℝ |x(t)|² dt = ∫_ℝ |x̂(f)|² df   [1.19]

and |x̂(f)|² represents the energy distribution in f, just as |x(t)|² represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = Σ_{k=0}^{N−1} |x(k)|² = (1/N) Σ_{l=0}^{N−1} |x̂(l)|²

but the interpretation is the same
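The discrete Parseval relation above can be checked numerically in a few lines (the test signal is an arbitrary random vector of our choosing):

```python
import numpy as np

# Numerical check of the discrete Parseval relation:
# sum_k |x(k)|^2 = (1/N) * sum_l |x_hat(l)|^2
rng = np.random.default_rng(0)
N = 64
x = rng.standard_normal(N)
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / N
print(np.isclose(E_time, E_freq))   # → True
```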

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

IFT(S_x(f)) = ∫_ℝ x(t) x*(t − τ) dt = γ_xx(τ)   [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could axiomatically have been defined as follows:

S_x(f) = ESD(x(t)) = FT(γ_xx(τ))   [1.21]
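The discrete, circular counterpart of equations [1.20] and [1.21] can be verified directly (the random test vector is ours): the inverse DFT of the ESD |x̂(l)|² equals the circular autocorrelation of x.

```python
import numpy as np

# Discrete (circular) counterpart of equations [1.20]-[1.21]: the inverse
# DFT of the ESD |x_hat(l)|^2 is the circular autocorrelation of x.
rng = np.random.default_rng(1)
N = 32
x = rng.standard_normal(N)

esd = np.abs(np.fft.fft(x)) ** 2
gamma_from_esd = np.fft.ifft(esd).real     # IFT of the ESD

# Circular autocorrelation computed directly from its definition
gamma_direct = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)])
print(np.allclose(gamma_from_esd, gamma_direct))   # → True
```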

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the



2.3. Windows 49
2.4. Examples of application 57
2.4.1. LTI systems identification 57
2.4.2. Monitoring spectral lines 61
2.4.3. Spectral analysis of the coefficient of tide fluctuation 62
2.5. Bibliography 64

Chapter 3. Introduction to Estimation Theory with Application in Spectral Analysis 67
Olivier BESSON and André FERRARI

3.1. Introduction 67
3.1.1. Problem statement 68
3.1.2. Cramér-Rao lower bound 69
3.1.3. Sequence of estimators 74
3.1.4. Maximum likelihood estimation 77
3.2. Covariance-based estimation 86
3.2.1. Estimation of the autocorrelation functions of time series 87
3.2.2. Analysis of estimators based on ĉ_xx(m) 91
3.2.3. The case of multi-dimensional observations 93
3.3. Performance assessment of some spectral estimators 95
3.3.1. Periodogram analysis 95
3.3.2. Estimation of AR model parameters 97
3.3.3. Estimation of a noisy cisoid by MUSIC 101
3.3.4. Conclusion 102
3.4. Bibliography 102

Chapter 4. Time-Series Models 105
Francis CASTANIÉ

4.1. Introduction 105
4.2. Linear models 107
4.2.1. Stationary linear models 108
4.2.2. Properties 110
4.2.3. Non-stationary linear models 114
4.3. Exponential models 117
4.3.1. Deterministic model 117
4.3.2. Noisy deterministic model 119
4.3.3. Models of random stationary signals 119
4.4. Nonlinear models 120
4.5. Bibliography 121


PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER

5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ

6.1. Which kind of parametric models 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN

7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192


7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER

8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER

9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255


9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ

10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY

11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302


11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER

12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI

13.1. Particle filtering 361


13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as the power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a space frequency concept related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: the projection of signals on a set of special periodic functions, which include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"), and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement that is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition concerning the cultural level, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s: these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase (which is generally associated with the name of Isaac Newton), spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods from the mid-1970s which led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibilities to obtain "super-resolution" and/or to process signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, Capon's method and its variants, and then the estimators based on the concept of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the IC2 series [HLA 05], and of space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-Frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t_0, from the moment that it is known for t < t_0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t),

Chapter written by Francis CASTANIÉ.

4 Digital Spectral Analysis

as it induces a specific strategy for the processing tools: since all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

The deterministic signals are usually separated into classes representing integral properties of x(t) strongly linked to some quantities known by physicists

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

E = ∫_ℝ |x(t)|² dt < ∞   [1.1]

E = Σ_{k=−∞}^{+∞} |x(k)|² < ∞   [1.2]

We recognize the membership of x(t) in standard function spaces (noted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify

P = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} |x(t)|² dt < ∞   [1.3]

P = lim_{N→∞} 1/(2N+1) Σ_{k=−N}^{+N} |x(k)|² < ∞   [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A e^{−at} cos(2πft + θ),  t ≥ 0
     = 0,  t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
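The finite energy of this damped oscillation can be checked numerically. With θ = 0 the energy integral has the elementary closed form E = A²(1/(4a) + a/(4(a² + 4π²f²))); the parameter values below are our own illustrative choices.

```python
import numpy as np

# Numerical check that the damped oscillation above has finite energy.
# With theta = 0 the energy integral has the closed form
#   E = A**2 * (1/(4a) + a / (4*(a**2 + 4*pi**2*f**2)))
A, a, f = 1.0, 1.0, 2.0
dt = 1e-5
t = np.arange(0.0, 20.0, dt)       # e^{-2at} is negligible beyond t = 20
y = (A * np.exp(-a * t) * np.cos(2 * np.pi * f * t)) ** 2

E_num = np.sum((y[:-1] + y[1:]) / 2) * dt      # trapezoidal rule
E_exact = A**2 * (1 / (4 * a) + a / (4 * (a**2 + 4 * np.pi**2 * f**2)))
print(abs(E_num - E_exact) < 1e-6)   # → True
```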

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

x(t) = Σ_{i=1}^{4} x_i(t − t_i)

where the four components x_i(t) start at staggered times t_i. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. not canceling at infinity. For example:

x(t) = A(t) sin(ψ(t)),  t ∈ ℝ,  A(t) < ∞   [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.
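For such a signal of constant amplitude A, the limit of equation [1.3] is A²/2, whatever the phase law ψ(t). A minimal numerical sketch (our own illustrative values, with a linear phase law):

```python
import numpy as np

# The signal of equation [1.5] has finite average power: for a sine curve
# of constant amplitude A, the limit of equation [1.3] is A**2 / 2.
A, f = 2.0, 5.0
t = np.arange(0.0, 1000.0, 1e-3)          # long observation window
x = A * np.sin(2 * np.pi * f * t)

P = np.mean(x ** 2)                       # (1/T) * integral of |x|^2
print(P)   # → 2.0, i.e. A**2 / 2
```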


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of the signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes, which could well demonstrate the properties sought; here we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets of the form:

P[x(t_1) ∈ D_1, x(t_2) ∈ D_2, …, x(t_n) ∈ D_n] = L(D_1, D_2, …, D_n; t_1, t_2, …, t_n)   [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + dx_i[:

P[…] = p(x_1, …, x_n; t_1, …, t_n) dx_1 … dx_n   [1.7]

and the distribution functions, with D_i = ]−∞, x_i]:

P[…] = F(x_1, …, x_n; t_1, …, t_n)   [1.8]


The set of these laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by:

E[x^{k_1}(t_1) ⋯ x^{k_n}(t_n)] = ∫_ℝ ⋯ ∫_ℝ x_1^{k_1} ⋯ x_n^{k_n} p(x_1, …, x_n; t_1, …, t_n) dx_1 … dx_n
                               = ∫_ℝ ⋯ ∫_ℝ x_1^{k_1} ⋯ x_n^{k_n} dF(x_1, …, x_n; t_1, …, t_n)

with M = Σ_i k_i   [1.9]
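As a numerical illustration of the moments defined above (the signal model, a zero-mean Gaussian drawn at a fixed time t, is our own choice), the moments can be estimated empirically by averaging over many outcomes; for a Gaussian law, E[x²] = σ² and E[x⁴] = 3σ⁴:

```python
import numpy as np

# Empirical moments of a Gaussian signal at a fixed time t: for a
# zero-mean Gaussian law, E[x^2] = sigma^2 and E[x^4] = 3 * sigma^4
# (values specific to the Gaussian; the model here is illustrative).
rng = np.random.default_rng(42)
sigma = 2.0
x = rng.normal(0.0, sigma, size=1_000_000)    # many outcomes at a fixed t

m2 = np.mean(x ** 2)                          # moment of order M = 2
m4 = np.mean(x ** 4)                          # moment of order M = 4
print(m2, m4)                                 # close to 4.0 and 48.0
```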

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), …, x(t_n)), defined by:

φ_x(u) = φ(u_1, …, u_n) = E[exp(j u^T x)]   [1.10]

with u = (u_1, …, u_n)^T and x = (x(t_1), …, x(t_n))^T.


The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 8: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Table of Contents

PART 2. NON-PARAMETRIC METHODS 123

Chapter 5. Non-Parametric Methods 125
Éric LE CARPENTIER
5.1. Introduction 125
5.2. Estimation of the power spectral density 130
5.2.1. Filter bank method 130
5.2.2. Periodogram method 132
5.2.3. Periodogram variants 136
5.3. Generalization to higher-order spectra 141
5.4. Bibliography 142

PART 3. PARAMETRIC METHODS 143

Chapter 6. Spectral Analysis by Parametric Modeling 145
Corinne MAILHES and Francis CASTANIÉ
6.1. Which kind of parametric models? 145
6.2. AR modeling 146
6.2.1. AR modeling as a spectral estimator 146
6.2.2. Estimation of AR parameters 146
6.3. ARMA modeling 154
6.3.1. ARMA modeling as a spectral estimator 154
6.3.2. Estimation of ARMA parameters 154
6.4. Prony modeling 156
6.4.1. Prony model as a spectral estimator 156
6.4.2. Estimation of Prony parameters 156
6.5. Order selection criteria 158
6.6. Examples of spectral analysis using parametric modeling 162
6.7. Bibliography 166

Chapter 7. Minimum Variance 169
Nadine MARTIN
7.1. Principle of the MV method 174
7.2. Properties of the MV estimator 177
7.2.1. Expressions of the MV filter 177
7.2.2. Probability density of the MV estimator 182
7.2.3. Frequency resolution of the MV estimator 186
7.3. Link with the Fourier estimators 188
7.4. Link with a maximum likelihood estimator 190
7.5. Lagunas methods: normalized MV and generalized MV 192
7.5.1. Principle of the normalized MV 192
7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER
8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER
9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ
10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY
11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER
12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI
13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely of the complex exponentials: they form the only function set able to cross what we today call "linear invariant systems" without leaving the set, i.e. the output of such a system remains in the same class as the input. The distribution over frequency of qualifying quantities such as the power or energy of a signal, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a spatial frequency concept related to geometrical length units, etc.

We may with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transform: the projection of signals onto the set of special periodic functions that includes the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.) while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not based only on this reasonable (but dry) mathematical description; it probably has its origins in the fact that the concept of frequency is, in fact, a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution), and hearing, which is no doubt at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"), and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means of implementing the proposed methods of analysis.

It is not essential to devote ourselves to a tedious hermeneutics in order to understand spectral analysis through the countless books that have dealt with the subject (the historical analysis of this field given in [MAR 87] can be consulted with interest). If we do devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical devices (spectrographs); then, at the end of this archaeological phase (which is generally associated with the name of Isaac Newton), spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and are easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete-time signals. Second, a simultaneous increase in the power of processing tools and of algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) to the determination of periodicities in the number of sunspots; but it is actually the availability of sufficiently powerful digital methods from the mid-1970s onward which led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility of obtaining "super-resolution" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as the estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or of spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader may obviously skip it. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative prefix appearing in the qualifier "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's method and its variants, and finally the estimators based on the concept of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening onto the parametric spectral analysis of non-stationary signals, a subject with great potential which is tackled in greater depth in another book of the IC2 series (see [HLA 05]), covers space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-Frequency Analysis: Concepts and Methods, ISTE Ltd, London, and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less well adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous-time signal x(t) (or discrete-time signal x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, once it is known for t < t0 (the singular term of the Wold decomposition; see e.g. Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties [1.1] and [1.2], with continuous or discrete time:

E = ∫_ℝ |x(t)|² dt < ∞   [1.1]

E = Σ_{k=−∞}^{+∞} |x(k)|² < ∞   [1.2]

We recognize here the membership of x(t) to standard function spaces (denoted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance in general), represents the energy E of the signal.

Signals of finite average power verify:

P = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} |x(t)|² dt < ∞   [1.3]

P = lim_{N→∞} (1/(2N+1)) Σ_{k=−N}^{+N} |x(k)|² < ∞   [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A e^{−at} cos(2πft + θ),  t ≥ 0
x(t) = 0,  t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
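As a quick numerical illustration (ours, not the book's), the energy integral [1.1] and the power limit [1.3] can be approximated by Riemann sums for a sampled version of the damped oscillation above; all parameter values below are arbitrary choices for the demo.

```python
import numpy as np

# Sampled version of x(t) = A * exp(-a*t) * cos(2*pi*f*t + theta), t >= 0
A, a, f, theta = 1.0, 0.5, 5.0, 0.0
dt = 1e-4
t = np.arange(0.0, 40.0, dt)          # long enough for exp(-a*t) to die out
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)

# Finite energy: Riemann-sum approximation of E = integral |x(t)|^2 dt  [1.1]
E = np.sum(np.abs(x) ** 2) * dt       # close to the analytic value ~0.5 here

# Average power over the observation window, as in [1.3]: it shrinks as the
# window grows, since the signal is transient rather than permanent.
P = E / t[-1]
print(E, P)
```

The same two sums, applied to a permanent signal such as the frequency-modulated sine curve of equation [1.5], would show the opposite behavior: the energy sum diverges with the window length while the power estimate converges.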


Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = Σ_{i=1}^{4} x_i(t − t_i)

where the four components x_i(t) start at staggered times t_i. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not vanish at infinity. For example:

x(t) = A(t) sin(ψ(t)),  t ∈ ℝ,  A(t) < ∞   [1.5]

An example is given in Figure 1.2. There is obviously a start and an end, but the mathematical model cannot take this unknown data into account, and it is relevant to represent the signal by an equation of the type [1.5]. This type of signal, modulated in both amplitude and angle, is fundamental in telecommunications.



Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is the probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by the set of all possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: in this approach, it is not relevant to look for representations of the signal in the form of transformations of one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.


Figure 1.3. Example of three outcomes of an actual random signal

The laws are sets of probabilities:

P[x(t_1) ∈ D_1, x(t_2) ∈ D_2, …, x(t_n) ∈ D_n] = L(D_1, D_2, …, D_n; t_1, t_2, …, t_n)   [1.6]

whose best-known examples are the probability densities, in which D_i = [x_i, x_i + dx_i[:

P[⋅] = p(x_1, …, x_n; t_1, …, t_n) dx_1 ⋯ dx_n   [1.7]

and the distribution functions, with D_i = ]−∞, x_i]:

P[⋅] = F(x_1, …, x_n; t_1, …, t_n)   [1.8]


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by:

E[x(t_1)^{k_1} ⋯ x(t_n)^{k_n}] = ∫_ℝ ⋯ ∫_ℝ x_1^{k_1} ⋯ x_n^{k_n} p(x_1, …, x_n; t_1, …, t_n) dx_1 ⋯ dx_n
                              = ∫_ℝ ⋯ ∫_ℝ x_1^{k_1} ⋯ x_n^{k_n} dF(x_1, …, x_n; t_1, …, t_n)   [1.9]

with M = Σ_i k_i.

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), …, x(t_n)), defined by:

φ(u) = φ(u_1, …, u_n) = E[exp(j uᵀ x)]   [1.10]

with u = (u_1 ⋯ u_n)ᵀ and x = (x(t_1) ⋯ x(t_n))ᵀ.

We see that the laws and the moments depend on the considered points t_1, …, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, …, x_n; t_1, …, t_n) = p(x_1, …, x_n; t_1 + t_0, …, t_n + t_0)

and of stationarity for the moment of order M if:

E[x(t_1)^{k_1} ⋯ x(t_n)^{k_n}] = E[x(t_1 + t_0)^{k_1} ⋯ x(t_n + t_0)^{k_n}]

The interval of t_0 over which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t_0 ∈ [T_0, T_1];
– asymptotic stationarity, if this is verified only when t_0 → ∞; etc.

We will refer to [LAC 00] for a detailed study of these properties.


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if

E[x(t_1)^{k_1} ⋯ x(t_n)^{k_n}] = E[x(t_1 + T)^{k_1} ⋯ x(t_n + T)^{k_n}],  ∀(t_1, …, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).
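To make the ensemble viewpoint concrete, here is a small numerical experiment of our own (not from the book): the moments are estimated by averaging across outcomes x_j(t) at each fixed instant t, not along the time axis, and for a stationary example they show no dependence on t. The sine-with-random-phase model and all parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# K outcomes of x(t, omega) = cos(2*pi*f0*t + phi(omega)), phi uniform on [0, 2*pi):
# a classic example of a signal stationary for the first two moments.
K, N, f0 = 20000, 64, 0.05
t = np.arange(N)
phi = rng.uniform(0.0, 2.0 * np.pi, size=(K, 1))   # one random phase per outcome
x = np.cos(2 * np.pi * f0 * t + phi)               # shape (K, N): K realizations

# Ensemble estimates of the order-1 and order-2 moments at each instant t
m1 = x.mean(axis=0)           # estimates E[x(t)]   -> 0 for all t
m2 = (x ** 2).mean(axis=0)    # estimates E[x(t)^2] -> 1/2, independent of t
print(np.max(np.abs(m1)), m2.min(), m2.max())
```

Replacing the uniform phase by a fixed one makes E[x(t)] oscillate with t: the ensemble mean then depends on the time origin and the signal is no longer stationary for that moment (it is cyclostationary).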

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by re-forming the information present in the signal, with no initial motive other than to highlight certain of its characteristics. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information concerning it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

R(u_1, …, u_m) = H[ ∫_ℝ ⋯ ∫_ℝ G[x(t_1), …, x(t_n)] ⋅ ψ(t_1, …, t_n; u_1, …, u_m) dt_1 ⋯ dt_n ]   [1.11]

where the operators H[⋅] and G[⋅] are not necessarily linear. The kernel ψ(⋅) has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations such that m ≥ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[⋅], G[⋅] and ψ(⋅) governs the choice of what will be highlighted in x(t). In the case of discrete-time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The unrivaled Fourier transform is the best known of these, with ψ(t, f) = e^{−j2πft}. We may once again recall its definition here:

x̃(f) = FT(x(t)) = ∫_ℝ x(t) e^{−j2πft} dt,  with f ∈ ℝ   [1.12]

and its discrete version, for a signal vector x = (x(0), …, x(N−1))ᵀ of length N:

x̃(l) = DFT(x(k)) = Σ_{k=0}^{N−1} x(k) e^{−j2πkl/N},  with l ∈ [0, N−1]   [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of similarity between x(t) and e^{−j2πft}, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
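Equation [1.13] transcribes almost literally into code. The sketch below is our illustration (the function name dft is ours); it checks the direct sum against numpy's optimized FFT and shows that a pure complex sine curve is indeed "highlighted" as a single dominant coefficient.

```python
import numpy as np

def dft(x):
    """Direct transcription of [1.13]: x~(l) = sum_k x(k) e^{-j 2 pi k l / N}."""
    N = len(x)
    k = np.arange(N)
    l = k.reshape(-1, 1)                  # one row of the sum per output bin l
    return np.sum(x * np.exp(-2j * np.pi * k * l / N), axis=1)

# A pure complex exponential at bin 3: the inner product with e^{-j2*pi*kl/N}
# is non-zero only where the signal "resembles" the basis function.
N = 16
x = np.exp(2j * np.pi * 3 * np.arange(N) / N)
X = dft(x)
print(np.argmax(np.abs(X)))               # bin 3 dominates
assert np.allclose(X, np.fft.fft(x))      # matches the optimized FFT
```

The O(N²) double loop hidden in the broadcast is exactly the cost the FFT reduces to O(N log N); the two computations agree to rounding error.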

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The best-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^{−j2πft}, leading to an expression of the type:

R(τ, f) = ∫ x(t) w(t − τ) e^{−j2πft} dt   [1.14]
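Sampled on a grid of (τ, f) values, [1.14] becomes a sequence of DFTs of windowed slices (a short-time Fourier transform). The sketch below is our own illustration, with a rectangular window w and arbitrary parameters; it localizes a frequency jump that a single global DFT would smear.

```python
import numpy as np

def stft(x, win_len, hop):
    """R(tau, f) of [1.14] on a grid: DFT of successive windowed slices.

    Rectangular window w; each row corresponds to one window position tau."""
    starts = range(0, len(x) - win_len + 1, hop)
    return np.array([np.fft.fft(x[s:s + win_len]) for s in starts])

# Complex sine curve whose frequency jumps halfway: 0.10 then 0.25 cycles/sample
n = np.arange(512)
x = np.where(n < 256, np.exp(2j * np.pi * 0.10 * n), np.exp(2j * np.pi * 0.25 * n))

R = stft(x, win_len=64, hop=64)          # 8 non-overlapping windows of 64 samples
peaks = np.argmax(np.abs(R), axis=1)     # dominant frequency bin in each window
print(peaks)                             # bin ~6 (0.10*64) then bin 16 (0.25*64)
```

The choice of window length trades time localization against frequency resolution, which is precisely the limitation that motivates the timescale representations below.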


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, a) = ψ_M((t − τ)/a), which gives:

R(τ, a) = ∫ x(t) ψ_M((t − τ)/a) dt   [1.15]

where ψ_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.
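As a sketch of [1.15] (ours, not the book's), we take a Mexican-hat function as parent wavelet ψ_M and discretize the integral by a Riemann sum; the 1/√a normalization often added in wavelet analysis is omitted to stay with the bare form of [1.15].

```python
import numpy as np

def psi_M(t):
    """Mexican-hat parent wavelet (our choice of psi_M for this demo)."""
    return (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def wavelet_coeff(x, t, tau, a):
    """R(tau, a) = integral x(t) psi_M((t - tau)/a) dt, as a Riemann sum [1.15]."""
    dt = t[1] - t[0]
    return np.sum(x * psi_M((t - tau) / a)) * dt

t = np.linspace(-10.0, 10.0, 4001)
x = psi_M(t - 2.0)        # the signal is the wavelet itself, translated to tau = 2

# Scanning tau at fixed scale a = 1: the response is maximal where the
# translated wavelet matches the signal, i.e. around tau = 2.
taus = np.linspace(-5.0, 5.0, 101)
R = np.array([wavelet_coeff(x, t, tau, 1.0) for tau in taus])
print(taus[np.argmax(R)])                 # close to 2.0
```

Scanning a instead of τ would, in the same way, pick out the dilation at which the parent wavelet best matches a feature of the signal.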

Quadratic time–frequency representations

In this case, G[⋅] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^{−j2πfτ} dτ   [1.16]

which is invertible if at least one point of x(t) is known.
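A discrete sketch of one time slice of [1.16] (our transcription, with illustrative parameters): the products x(t + τ/2) x*(t − τ/2) become x(n + k) x*(n − k) at integer lags k, so the kernel is effectively e^{−j4πfk} and a tone at normalized frequency f0 shows up at position 2·f0 on the transform axis.

```python
import numpy as np

def wigner_ville_slice(x, n, K):
    """One time slice of [1.16]: FFT over k of r(k) = x(n+k) * conj(x(n-k))."""
    k = np.arange(-K, K)
    r = x[n + k] * np.conj(x[n - k])      # instantaneous autocorrelation at time n
    return np.fft.fft(r)                  # transform axis carries 2*f, step 1/(2K)

# Complex exponential at f0 = 0.125 cycles/sample
f0, N = 0.125, 256
x = np.exp(2j * np.pi * f0 * np.arange(N))

W = wigner_ville_slice(x, n=N // 2, K=64)
peak = np.argmax(np.abs(W))
print(peak / 128)                         # normalized peak position = 2*f0 = 0.25
```

Being quadratic, the representation also produces cross terms between components; this interference is one of the central topics of the detailed study cited below.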

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ(⋅) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, in which only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 9: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

viii Digital Spectral Analysis

7.5.2. Spectral refinement of the NMV estimator 194
7.5.3. Convergence of the NMV estimator 195
7.5.4. Generalized MV estimator 198
7.6. A new estimator: the CAPNORM estimator 200
7.7. Bibliography 204

Chapter 8. Subspace-Based Estimators and Application to Partially Known Signal Subspaces 207
Sylvie MARCOS and Rémy BOYER

8.1. Model, concept of subspace, definition of high resolution 207
8.1.1. Model of signals 207
8.1.2. Concept of subspaces 208
8.1.3. Definition of high resolution 209
8.1.4. Connection with spatial analysis or array processing 210
8.2. MUSIC 211
8.2.1. Principle of the MUSIC method 211
8.2.2. Pseudo-spectral version of MUSIC 213
8.2.3. Polynomial version of MUSIC 214
8.3. Determination criteria of the number of complex sine waves 216
8.3.1. AIC criterion (Akaike information criterion) 216
8.3.2. Minimum description length criterion 216
8.4. The MinNorm method 217
8.5. "Linear" subspace methods 219
8.5.1. The linear methods 219
8.5.2. The propagator method 219
8.6. The ESPRIT method 223
8.7. Illustration of the subspace-based methods performance 226
8.8. Adaptive research of subspaces 229
8.9. Integrating a priori known frequencies into the MUSIC criterion 233
8.9.1. A spectral approach: the Prior-MUSIC algorithm 233
8.9.2. A polynomial-based rooting approach: the projected companion matrix-MUSIC algorithm 237
8.10. Bibliography 243

PART 4. ADVANCED CONCEPTS 251

Chapter 9. Multidimensional Harmonic Retrieval: Exact, Asymptotic and Modified Cramér-Rao Bounds 253
Rémy BOYER

9.1. Introduction 253
9.2. CanDecomp/Parafac decomposition of the multidimensional harmonic model 255
9.3. CRB for the multidimensional harmonic model 257
9.3.1. Deterministic CRB 257
9.3.2. Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258
9.3.3. Asymptotic CRB for a constant amount of data 261
9.4. Modified CRB for the multidimensional harmonic model 266
9.4.1. Definition of the modified CRB for T = 1 266
9.4.2. Modified CRB in the case of a large number of samples 267
9.4.3. Generalization to time-varying initial phase 268
9.4.4. Analysis of the MACRB 269
9.4.5. Numerical illustrations 271
9.5. Conclusion 272
9.6. Appendices 273
9.6.1. Appendix A: ACRB for the MD-harmonic model of dimension P 273
9.6.2. Appendix B: Exact CRB for the first-order harmonic model of dimension P 276
9.6.3. Appendix C: ACRB for the MD-harmonic model with multiple snapshots 278
9.6.4. Appendix D: Application to the bistatic MIMO radar with widely separated arrays 279
9.7. Bibliography 284

Chapter 10. Introduction to Spectral Analysis of Non-Stationary Random Signals 287
Corinne MAILHES and Francis CASTANIÉ

10.1. Evolutive spectra 288
10.1.1. Definition of the "evolutive spectrum" 288
10.1.2. Evolutive spectrum properties 289
10.2. Non-parametric spectral estimation 290
10.3. Parametric spectral estimation 291
10.3.1. Local stationary postulate 292
10.3.2. Elimination of a stationary condition 293
10.3.3. Application to spectral analysis 296
10.4. Bibliography 297

Chapter 11. Spectral Analysis of Non-uniformly Sampled Signals 301
Arnaud RIVOIRA and Gilles FLEURY

11.1. Applicative context 301
11.2. Theoretical framework 302
11.2.1. Signal model 302
11.2.2. Sampling process 302
11.3. Generation of a randomly sampled stochastic process 302
11.3.1. Objective 302
11.3.2. Generation of the sampling times 302
11.3.3. CARMA processes 303
11.4. Spectral analysis using undated samples 305
11.4.1. Non-parametric estimation 305
11.4.2. Parametric estimation 307
11.5. Spectral analysis using dated samples 309
11.5.1. Masry's work 309
11.5.2. Slotting technique 310
11.5.3. IRINCORREL: a consistent spectral estimator 311
11.6. Perspectives 314
11.7. Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER

12.1. STAP, spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324
12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330
12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344
12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358
12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI

13.1. Particle filtering 361
13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365
13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373
13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence: the constant quest of the frequency contents of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th Century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a concept of space frequency, related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: projection of signals on the set of special periodic functions, which include complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution); hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"); and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.

It is not essential to devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase (which is generally associated with the name of Isaac Newton), spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods from the mid-1970s which led the community of signal processing to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibilities to obtain "super resolutions" and/or to process signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form part of the second part. The privative appearing in the qualification of "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's method and its variants, and finally the estimators based on the concepts of subspaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals (a subject with great potential, which is tackled in greater depth in another book of the IC2 series [HLA 05]) and to space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ

May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded or even deceptive performance if applied outside this group of classes. Spectral analysis, too, does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to some quantities known by physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], with continuous or discrete time:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t < \infty \quad [1.1]

E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \quad [1.2]

We recognize here the membership of x(t) in the standard function spaces (noted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = \lim_{T \to +\infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \, \mathrm{d}t < \infty \quad [1.3]

P = \lim_{N \to +\infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \quad [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A \, \mathrm{e}^{-at} \cos(2\pi f t + \theta), \quad t \ge 0

x(t) = 0, \quad t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation.)
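As a numerical aside (not from the book), the energy integral [1.1] of such a damped oscillation can be approximated by a Riemann sum; the parameter values below are arbitrary choices for the sketch. For f much larger than a, the energy is close to A²/(4a).

```python
import numpy as np

# Damped oscillation: x(t) = A exp(-a t) cos(2*pi*f*t + theta) for t >= 0, else 0
A, a, f, theta = 1.0, 2.0, 50.0, 0.3
dt = 1e-5
t = np.arange(0.0, 5.0, dt)        # by t = 5, exp(-a t) has decayed to ~4.5e-5
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)

# Riemann-sum approximation of E = integral of |x(t)|^2 dt  (equation [1.1])
E = np.sum(x**2) * dt
print(E)                           # close to A**2 / (4*a) = 0.125 here
```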

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = \sum_{i=1}^{4} x_i(t - t_i)

where the four components x_i(t) start at staggered times (t_i). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not cancel at infinity. For example:

x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad |A(t)| < \infty \quad [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.


Figure 1.2. Frequency-modulated sine curve
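The average power [1.3] of a frequency-modulated sine curve of this kind can be estimated over a long but finite window. This numerical sketch is my addition, with invented parameters; the estimate settles near A²/2.

```python
import numpy as np

# Permanent signal x(t) = A sin(psi(t)) with a slowly modulated phase (cf. [1.5])
A, f0, fm, beta = 1.0, 50.0, 0.5, 3.0
dt = 1e-4
T = 200.0                          # long observation window (seconds)
t = np.arange(0.0, T, dt)
x = A * np.sin(2 * np.pi * f0 * t + beta * np.sin(2 * np.pi * fm * t))

# Finite-horizon estimate of P = lim (1/T) integral of |x(t)|^2 dt  (equation [1.3])
P = np.sum(x**2) * dt / T
print(P)                           # close to A**2 / 2
```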

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even if partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of the signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, \ldots, K, t \in [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets:

P\left[ x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \right] = L(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n) \quad [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + \mathrm{d}x_i[:

P[\ldots] = p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n \quad [1.7]

and the distribution functions, with D_i = \left]-\infty, x_i\right]:

P[\ldots] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8]


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, \mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \quad [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), \ldots, x(t_n)), defined by:

\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E\left[ \exp\left( j \mathbf{u}^T \mathbf{x} \right) \right], \quad \text{with } \mathbf{u} = (u_1, \ldots, u_n)^T, \; \mathbf{x} = (x(t_1), \ldots, x(t_n))^T \quad [1.10]
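As a hedged numerical sketch (my addition, not from the book): for a single zero-mean, unit-variance Gaussian sample x(t_1), the characteristic function [1.10] has the closed form φ(u) = exp(−u²/2), which a Monte Carlo average over outcomes reproduces.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)          # 200,000 outcomes of one sample x(t1) ~ N(0, 1)

u = 1.2
phi_mc = np.mean(np.exp(1j * u * x))      # empirical E[exp(j u x)], cf. equation [1.10]
phi_th = np.exp(-u**2 / 2)                # closed form for the standard Gaussian
print(abs(phi_mc - phi_th))               # small Monte Carlo error
```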

We see that the laws and the moments have a dependence on the considered points t_1, \ldots, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0) \right]

The interval t_0 for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t_0 \in [T_0, T_1];

– asymptotic stationarity, if this is verified only when t_0 \to \infty, etc.

We will refer to [LAC 00] for a detailed study of these properties
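Stationarity of a moment can be checked empirically across outcomes. The sketch below (not from the book; the AR(1) model and the sizes are invented for the illustration) estimates the order-2 moment E[x(t) x(t+τ)] of a stationary first-order autoregressive process at two different time origins and finds the same value, as translation invariance requires.

```python
import numpy as np

rng = np.random.default_rng(1)
K, N, rho = 20_000, 200, 0.9            # K outcomes, N samples each, AR(1) coefficient

# Draw K outcomes of a stationary AR(1) process: x[k] = rho * x[k-1] + w[k]
x = np.empty((K, N))
x[:, 0] = rng.standard_normal(K) / np.sqrt(1 - rho**2)   # stationary initial law
for k in range(1, N):
    x[:, k] = rho * x[:, k - 1] + rng.standard_normal(K)

# Ensemble estimates of E[x(t) x(t + tau)] at two different time origins
tau = 3
m_a = np.mean(x[:, 50] * x[:, 50 + tau])
m_b = np.mean(x[:, 120] * x[:, 120 + tau])
print(m_a, m_b)    # both close to rho**tau / (1 - rho**2): invariant by translation
```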


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T) \right], \quad \forall (t_1, \ldots, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. there are many ways of forming the information present in the signal, with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

R(u_1, \ldots, u_m) = H\left[ \int \cdots \int G\left[ x(t_1), \ldots, x(t_n) \right] \cdot \psi(u_1, \ldots, u_m; t_1, \ldots, t_n) \, \mathrm{d}t_1 \cdots \mathrm{d}t_n \right] \quad [1.11]

where the operators H[\cdot] and G[\cdot] are not necessarily linear. The kernel \psi(\cdot) (for example, the Fourier kernel \psi(t, f) = \mathrm{e}^{-2j\pi f t}) has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general representations with m \ge n will be used. We will expect it to be inversible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics: the choice of H[\cdot], G[\cdot] and \psi(\cdot) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and \psi(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with \psi(t, f) = \mathrm{e}^{-2j\pi f t}. We may once again recall its definition here:

\tilde{x}(f) = \mathrm{FT}(x(t)) = \int_{\mathbb{R}} x(t) \, \mathrm{e}^{-2j\pi f t} \, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \quad [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0), \ldots, x(N-1))^T of length N:

\tilde{x}(l) = \mathrm{DFT}(x(k)) = \sum_{k=0}^{N-1} x(k) \, \mathrm{e}^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13]
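Equation [1.13] transcribes directly into code. The naive implementation below (a sketch, my addition) agrees with numpy's FFT, which computes the same sum efficiently.

```python
import numpy as np

def dft(x):
    """Literal transcription of [1.13]: x_tilde(l) = sum_k x(k) exp(-2j*pi*k*l/N)."""
    N = len(x)
    k = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N)) for l in range(N)])

x = np.cos(2 * np.pi * 3 * np.arange(32) / 32)   # 3 cycles over N = 32 samples
X = dft(x)
print(np.allclose(X, np.fft.fft(x)))             # True: same transform
print(int(np.argmax(np.abs(X[:16]))))            # 3: the sine curve is highlighted at l = 3
```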

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and \mathrm{e}^{-2j\pi f t}, we immediately see that the choice of this function set \psi(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set \psi(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use functions such as Hadamard–Walsh or Haar functions (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, f) = w(t - \tau) \, \mathrm{e}^{-2j\pi f t}, leading to an expression of the type:

R(\tau, f) = \int x(t) \, w(t - \tau) \, \mathrm{e}^{-2j\pi f t} \, \mathrm{d}t \quad [1.14]
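Discretized, [1.14] is the short-time Fourier transform: window the signal around each position τ, then take a DFT. The sketch below is my addition; the window length, hop and test signal are arbitrary choices.

```python
import numpy as np

def stft(x, win_len=64, hop=16):
    """Discrete version of [1.14]: one windowed spectrum per position tau."""
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w for i in range(0, len(x) - win_len + 1, hop)]
    return np.fft.fft(frames, axis=-1)

# Test signal: 0.10 cycles/sample in the first half, 0.30 in the second half
n = np.arange(512)
x = np.where(n < 256, np.cos(2 * np.pi * 0.10 * n), np.cos(2 * np.pi * 0.30 * n))

R = stft(x)
early = np.argmax(np.abs(R[2, :32]))     # dominant bin in a frame near the start
late = np.argmax(np.abs(R[-3, :32]))     # dominant bin in a frame near the end
print(early / 64, late / 64)             # about 0.10, then about 0.30
```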


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, a) = \psi_M\left( \frac{t - \tau}{a} \right), which gives:

R(\tau, a) = \int x(t) \, \psi_M\left( \frac{t - \tau}{a} \right) \mathrm{d}t \quad [1.15]

where \psi_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors \tau and a, respectively.
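A minimal numerical sketch of [1.15] (my addition, with an invented test signal): correlating x(t) with translated and dilated copies of a Mexican-hat parent wavelet, the maximum of |R(τ, a)| locates both the position and the scale of a transient.

```python
import numpy as np

def mexican_hat(t):
    """A common parent wavelet psi_M(t): second derivative of a Gaussian, up to sign/scale."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

t = np.arange(512, dtype=float)
x = mexican_hat((t - 200.0) / 10.0)       # transient centered at t = 200, scale 10

# R(tau, a) = sum_t x(t) psi_M((t - tau) / a)  -- discrete version of [1.15]
taus = np.arange(0, 512, 2)
scales = np.array([4.0, 10.0, 25.0])
R = np.array([[np.sum(x * mexican_hat((t - tau) / a)) for tau in taus] for a in scales])

i_a, i_tau = np.unravel_index(np.argmax(np.abs(R)), R.shape)
print(scales[i_a], taus[i_tau])           # best match at a = 10, tau = 200
```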

Quadratic time–frequency representations

In this case, G[\cdot] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\left( t + \frac{\tau}{2} \right) x^{*}\left( t - \frac{\tau}{2} \right) \mathrm{e}^{-2j\pi f \tau} \, \mathrm{d}\tau \quad [1.16]

which is inversible if at least one point of x(t) is known.
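A crude discrete sketch of [1.16] at one instant (my addition; the sizes and test frequency are invented): form the lag products x(n+m)x*(n−m) and take a DFT over the lag m. Because of the τ/2 arguments, the frequency axis comes out scaled by 2.

```python
import numpy as np

f0, M, n = 0.125, 64, 256            # tone frequency, lag window, analysis instant
k = np.arange(512)
x = np.exp(2j * np.pi * f0 * k)      # complex exponential at f0 cycles/sample

# Discrete kernel of [1.16] at time n: r(m) = x(n + m) * conj(x(n - m))
m = np.arange(M)
r = x[n + m] * np.conj(x[n - m])

W = np.fft.fft(r)                    # DFT over the lag variable
peak = int(np.argmax(np.abs(W)))
print(peak / (2 * M))                # recovered frequency: peak / (2M) = f0
```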

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05]).

Representations with various, more complex combinations of H[x], G[x] and \psi(\cdot) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = \left| \mathrm{FT}(x(t)) \right|^2 = |\tilde{x}(f)|^2 = \mathrm{ESD}(x(t)) \quad [1.17]


for continuous time signals, and:

S_x(l) = \left| \mathrm{DFT}(x(k)) \right|^2 = |\tilde{x}(l)|^2 = \mathrm{ESD}(x(k)) \quad [1.18]

for discrete time signals.

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t = \int_{\mathbb{R}} |\tilde{x}(f)|^2 \, \mathrm{d}f \quad [1.19]

and |\tilde{x}(f)|^2 represents the energy distribution in f, just as |x(t)|^2 represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\tilde{x}(l)|^2

but the interpretation is the same
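The two discrete sums can be compared directly; this check is my addition, with an arbitrary random signal.

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.standard_normal(128)             # arbitrary length-N signal
X = np.fft.fft(x)                        # the DFT of equation [1.13]

E_time = np.sum(np.abs(x)**2)            # sum over k of |x(k)|^2
E_freq = np.sum(np.abs(X)**2) / len(x)   # (1/N) * sum over l of |x_tilde(l)|^2
print(np.isclose(E_time, E_freq))        # True: the discrete Parseval identity
```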

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\mathrm{IFT}(S_x(f)) = \int_{\mathbb{R}} x(t) \, x^{*}(t - \tau) \, \mathrm{d}t = \gamma_{xx}(\tau) \quad [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have axiomatically been defined as follows:

\mathrm{ESD}(x(t)) = S_x(f) = \mathrm{FT}(\gamma_{xx}(\tau)) \quad [1.21]
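In discrete time, with circular shifts, [1.20] and [1.21] can be verified numerically (a sketch, my addition): the DFT of the circular autocorrelation equals |DFT(x)|², i.e. the ESD.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_normal(64)
X = np.fft.fft(x)

# Circular autocorrelation gamma_xx(tau) = sum_t x(t) x*(t - tau)  (cf. [1.20])
gamma = np.array([np.sum(x * np.conj(np.roll(x, tau))) for tau in range(len(x))])

# Equation [1.21] in discrete form: ESD = DFT(gamma) = |DFT(x)|^2
S_from_gamma = np.fft.fft(gamma)
S_direct = np.abs(X)**2
print(np.allclose(S_from_gamma, S_direct))   # True
```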

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the

Page 10: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Table of Contents ix

93 CRB for the multidimensional harmonic model 257 931 Deterministic CRB 257 932 Deterministic asymptotic CRB for the M-order harmonic model of dimension P 258 933 Asymptotic CRB for a constant amount of data 261

94 Modified CRB for the multidimensional harmonic model 266 941 Definition of the modified CRB for T = 1 266 942 Modified CRB in the case of a large number of samples 267 943 Generalization to time-varying initial phase 268 944 Analysis of the MACRB 269 945 Numerical illustrations 271

95 Conclusion 272 96 Appendices 273

961 Appendix A ACRB for the MD-harmonic model of dimension P 273 962 Appendix B Exact CRB for the first-order harmonic model of dimension P 276 963 Appendix C ACRB for the MD-harmonic model with multiple snapshots 278 964 Appendix D Application to the bistatic MIMO radar with widely separated arrays 279

97 Bibliography 284

Chapter 10 Introduction to Spectral Analysis of Non-Stationary Random Signals 287 Corinne MAILHES and Francis CASTANIEacute

101 Evolutive spectra 288 1011 Definition of the ldquoevolutive spectrumrdquo 288 1012 Evolutive spectrum properties 289

102 Non-parametric spectral estimation 290 103 Parametric spectral estimation 291

1031 Local stationary postulate 292 1032 Elimination of a stationary condition 293 1033 Application to spectral analysis 296

104 Bibliography 297

Chapter 11 Spectral Analysis of Non-uniformly Sampled Signals 301 Arnaud RIVOIRA and Gilles FLEURY

111 Applicative context 301 112 Theoretical framework 302

1121 Signal model 302 1122 Sampling process 302

x Digital Spectral Analysis

113 Generation of a randomly sampled stochastic process 302 1131 Objective 302 1132 Generation of the sampling times 302 1133 CARMA processes 303

114 Spectral analysis using undated samples 305 1141 Non-parametric estimation 305 1142 Parametric estimation 307

115 Spectral analysis using dated samples 309 1151 Masryrsquos work 309 1152 Slotting technique 310 1153 IRINCORREL A consistent spectral estimator 311

116 Perspectives 314 117 Bibliography 315

Chapter 12. Space–Time Adaptive Processing 317
Laurent SAVY and François LE CHEVALIER

12.1. STAP spectral analysis and radar signal processing 319
12.1.1. Generalities 319
12.1.2. Radar configuration with side antenna 320
12.1.3. Radar configuration with forward antenna 324

12.2. Space–time processing as a spectral estimation problem 327
12.2.1. Overview of space–time processing 327
12.2.2. Processing in additive white noise 328
12.2.3. Processing in additive colored noise 330

12.3. STAP architectures 334
12.3.1. Generalities and notations 334
12.3.2. Pre-Doppler STAP 335
12.3.3. Post-Doppler STAP 344

12.4. Relative advantages of pre-Doppler and post-Doppler STAP 354
12.4.1. Estimation of adaptive filtering coefficients 354
12.4.2. Exclusion of target signal from training data 356
12.4.3. Clutter heterogeneity 356
12.4.4. Computing load 357
12.4.5. Angular position estimation 357
12.4.6. Anti-jamming and STAP compatibility 358

12.5. Conclusion 358
12.6. Bibliography 359
12.7. Glossary 360

Chapter 13. Particle Filtering and Tracking of Varying Sinusoids 361
David BONACCI

13.1. Particle filtering 361


13.1.1. Generalities 362
13.1.2. Strategies for efficient particle filtering 365

13.2. Application to spectral analysis 370
13.2.1. Observation model 370
13.2.2. State vector 371
13.2.3. State evolution model 372
13.2.4. Simulation results 373

13.3. Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is that of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely of complex exponentials: they form the only function set able to cross what we today call "linear invariant systems" without leaving this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as the power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a spatial frequency concept related to geometrical length units, etc.

We can, with good reason, wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: the projection of signals onto a set of special periodic functions, which include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: spectral representations have henceforth no more – and no less – relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description: it probably has its origins in the fact that frequency is in fact a perception delivered through various human "sensors": the system of vision, which perceives two concepts of frequency (a time-dependent one, through colored perception, and a spatial one, via the optical concepts of separating power or resolution), and hearing, which is no doubt at the historical origin of the perceptual concept of frequency – Pythagoras and the "Music of the Spheres" – and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widespread technical means of implementing the proposed methods of analysis.

It is not essential to devote ourselves to a tedious hermeneutics in order to understand spectral analysis through the countless books that have dealt with the subject (the historical analysis of this field given in [MAR 87] can be consulted with interest). If we do devote ourselves to this exercise in erudition, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s onwards; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers were optical analyzers (spectrographs); then, at the end of this archaeological phase – which is generally associated with the name of Isaac Newton – spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of continuous-time signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and are easy to assess in a typical linear filtering context.

The change to digital processing tools was first made by transposing the earlier analog approaches, adapting the time axis to discrete-time signals. Second, a simultaneous increase in the power of processing tools and of algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]) – the so-called time series – which was successfully applied by G. Yule (1927) to the determination of periodicities in the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, that led the signal processing community to consider parametric modeling as a tool for spectral analysis in its own right, with its own characteristics, including the possibility of obtaining "super resolutions" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as the estimators from the analog world is not an easy task: the mere quantitative assessment of the frequency resolution or of the spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip it. Next, digital signal processing, estimation theory and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods – they are the most widely used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on time series models, then Capon's method and its variants, and finally the estimators based on sub-space concepts.

The fourth and final part is devoted to advanced concepts. It provides an opening onto the parametric spectral analysis of non-stationary signals, a subject with great potential that is tackled in greater depth in another book of the IC2 series [HLA 05], covers space–time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence : Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-Frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem: the various tools and methods for spectral analysis will be more or less well adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. such that any new experiment for the generation of a continuous-time signal x(t) (or a discrete-time signal x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy for the processing tools: since all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

Chapter written by Francis CASTANIÉ.

Deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], for continuous or discrete time:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t < \infty \quad [1.1]

E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \quad [1.2]

We recognize here the membership of x(t) in standard function spaces (noted L2 or l2), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \, \mathrm{d}t < \infty \quad [1.3]

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \quad [1.4]

If we accept the idea that the sums of equation [1.1] or [1.2] represent "energies", those of equation [1.3] or [1.4] then represent powers.
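As a quick numerical illustration of these two classes (a Python sketch using numpy; the signals and parameter values are illustrative choices, not taken from the text): a decaying pulse has finite energy in the sense of [1.2], while a sine has unbounded energy but a finite average power in the sense of [1.4].

```python
import numpy as np

def energy(x):
    """Energy of a discrete-time signal, as in [1.2]: sum of |x(k)|^2."""
    return np.sum(np.abs(x) ** 2)

def mean_power(x):
    """Average power over the observed window, the finite-N version of [1.4]."""
    return np.mean(np.abs(x) ** 2)

k = np.arange(10_000)

# A decaying pulse: finite energy, and its average power tends to 0 as N grows.
pulse = np.exp(-0.01 * k)

# A sine: its energy grows without bound, but its average power converges (here to 1/2).
sine = np.sin(2 * np.pi * 0.05 * k)

E_pulse = energy(pulse)      # close to the geometric-series limit 1 / (1 - e^{-0.02})
P_sine = mean_power(sine)    # close to 1/2
```

The pulse's energy has essentially reached its limit after a few hundred samples, whereas for the sine only the time-averaged quantity stabilizes.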

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A \, e^{-at} \cos(2 \pi f t + \theta), \quad t \geq 0
x(t) = 0, \quad t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
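To make the finiteness of this energy concrete, here is a small numerical check (Python with numpy; the parameter values A, a, f, θ are arbitrary choices for the demonstration): the running energy integral stops growing once the envelope e^{-at} has died out.

```python
import numpy as np

# Numerical check that the damped oscillation x(t) = A exp(-a t) cos(2π f t + θ),
# t >= 0 (and x(t) = 0 for t < 0), has finite energy: the running integral of
# x(t)^2 stops growing once the envelope has decayed.
A, a, f, theta = 1.0, 2.0, 5.0, 0.3

def energy_up_to(T, dt=1e-4):
    """Riemann-sum approximation of the energy integral over [0, T]."""
    t = np.arange(0.0, T, dt)
    x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)
    return np.sum(x ** 2) * dt

E_short = energy_up_to(5.0)    # the squared envelope exp(-2at)^2 is negligible by t = 5
E_long = energy_up_to(50.0)    # integrating 10 times longer adds almost nothing
```

A permanent signal such as a pure sine would fail this check: its running integral grows linearly with T.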


Figure 1.1. Electromagnetic interference signal and its decomposition

In the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = \sum_{i=1}^{4} x_i(t - t_i)

where the four components x_i(t) start at staggered times (t_i). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not vanish at infinity. For example:

x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad A(t) < \infty \quad [1.5]

An example is given in Figure 1.2. There is obviously a start and an end, but the mathematical model cannot take this unknown data into account, and it is relevant to represent the signal by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is the probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (the j-th outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations of one of the outcomes that could well demonstrate the properties sought; here, we must "go back", as far as possible, to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (the Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that help account for this "family likeness", i.e. through the membership of the signal to a well-defined probability space.


Figure 1.3. Example of three outcomes of an actual random signal

The laws are sets of probabilities:

P\left[ x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \right] = L(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n) \quad [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + \mathrm{d}x_i[:

P[\cdot] = p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n \quad [1.7]

and the distribution functions, with D_i = ]-\infty, x_i]:

P[\cdot] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8]


The set of these laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, \mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \quad [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), \ldots, x(t_n)), defined by:

\phi(u_1, \ldots, u_n) = E\left[ \exp\left( j \mathbf{u}^T \mathbf{x} \right) \right], \quad \text{with } \mathbf{u} = (u_1, \ldots, u_n)^T, \ \mathbf{x} = (x(t_1), \ldots, x(t_n))^T \quad [1.10]

We see that the laws and the moments depend on the considered points t_1, \ldots, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if one of the characteristics of a signal is invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0) \right]

The interval of t_0 over which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t_0 \in [T_0, T_1];

– asymptotic stationarity, if this is verified only when t_0 \to \infty; etc.

We will refer to [LAC 00] for a detailed study of these properties


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T) \right], \quad \forall (t_1, \ldots, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).
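A numerical illustration of this definition (Python with numpy; the modulated-noise model and all parameter values are hypothetical choices made for the demo): an amplitude-modulated white noise is not stationary, but its second moment E[x(k)^2] recurs with the modulation period T, which is exactly the cyclostationarity property above. Note that the moment is estimated by averaging across outcomes, as the probabilistic description requires, not along time.

```python
import numpy as np

# x(k) = (1 + cos(2π k / T)) n(k), with n(k) white, zero-mean, unit-variance:
# E[x(k)^2] = (1 + cos(2π k / T))^2 is periodic in k with period T.
rng = np.random.default_rng(0)
T, n_samples, n_outcomes = 50, 200, 20000

k = np.arange(n_samples)
envelope = 1.0 + np.cos(2 * np.pi * k / T)
x = envelope * rng.standard_normal((n_outcomes, n_samples))  # one outcome per row

m2 = (x ** 2).mean(axis=0)   # ensemble estimate of E[x(k)^2]

# m2 is far from constant (the signal is not stationary), yet m2(k) and
# m2(k + T) agree to within estimation noise (cyclostationary second moment).
```

With a single outcome, the same time-averaged statistic would mix all phases of the cycle together; the ensemble average is what reveals the periodic moment.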

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by forming the information present in the signal in a manner whose only initial motive is to highlight certain characteristics of the signal. These representations are said to be complete if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information concerning it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations of the signal, such as:

R(u_1, \ldots, u_m) = H\left[ \int \cdots \int G\left[ x(t_1), \ldots, x(t_n) \right] \cdot \psi(t_1, \ldots, t_n; u_1, \ldots, u_m) \, \mathrm{d}t_1 \cdots \mathrm{d}t_n \right] \quad [1.11]

where the operators H[\cdot] and G[\cdot] are not necessarily linear. The kernel \psi(t, f) = e^{-j 2 \pi f t} has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations such that m ≥ n. We will expect it to be inversible (i.e. there exists an exact inverse transformation), and to highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[\cdot], G[\cdot] and \psi(\cdot) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and \psi(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with \psi(t, f) = e^{-j 2 \pi f t}. We may once again recall its definition here:

\hat{x}(f) = \mathrm{FT}[x(t)] = \int_{\mathbb{R}} x(t) \, e^{-j 2 \pi f t} \, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \quad [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0), \ldots, x(N-1))^T of length N:

\hat{x}(l) = \mathrm{DFT}[x(k)] = \sum_{k=0}^{N-1} x(k) \, e^{-j 2 \pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13]
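The DFT sum of [1.13] can be implemented literally in a few lines (a Python/numpy sketch for illustration; numpy's FFT uses the same convention, which lets us cross-check the result):

```python
import numpy as np

# Direct implementation of the DFT sum in [1.13], checked against numpy's FFT.
# O(N^2), meant only to mirror the formula, not to be efficient.
def dft(x):
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # W[l, k] = exp(-2jπ kl / N)
    return W @ x

# A cosine completing 3 full cycles over N = 16 samples concentrates its
# energy in bins l = 3 and l = N - 3, each with magnitude N/2 = 8.
x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
X = dft(x)
```

The pair of symmetric bins reflects the decomposition of the real cosine into two complex exponentials, the very functions the Fourier basis is built to highlight.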

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likeness between x(t) and e^{-j 2 \pi f t}, we immediately see that the choice of this function set \psi(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set \psi(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as those of Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, f) = w(t - \tau) \, e^{-j 2 \pi f t}, leading to an expression of the type:

R(\tau, f) = \int x(t) \, w(t - \tau) \, e^{-j 2 \pi f t} \, \mathrm{d}t \quad [1.14]
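As a discrete sketch of [1.14] (Python with numpy; the Hann window, its length and the hop size are illustrative assumptions), sliding the window w along the signal and taking a local Fourier transform of each slice yields a short-time Fourier transform:

```python
import numpy as np

# Discrete sketch of the linear time-frequency representation [1.14]: slide a
# window w over the signal and take the Fourier transform of each windowed slice.
def stft(x, win_len=64, hop=16):
    w = np.hanning(win_len)                # the window w(t - τ) of [1.14]
    starts = range(0, len(x) - win_len + 1, hop)
    # One row per window position τ, one column per frequency bin f.
    return np.array([np.fft.rfft(w * x[s:s + win_len]) for s in starts])

# Test signal: 5 cycles per window in the first half, 15 in the second.
n = np.arange(512)
x = np.where(n < 256,
             np.sin(2 * np.pi * 5 * n / 64),
             np.sin(2 * np.pi * 15 * n / 64))
R = stft(x)
# Early rows peak at bin 5 and late rows at bin 15: unlike the global DFT,
# this representation localizes the frequency content in time.
```

The window length sets the trade-off between time localization and frequency resolution, a theme developed at length in [HLA 05].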


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi_{\tau, a}(t) = \psi_M\!\left( \frac{t - \tau}{a} \right), which gives:

R(\tau, a) = \int x(t) \, \psi_M\!\left( \frac{t - \tau}{a} \right) \mathrm{d}t \quad [1.15]

where \psi_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors \tau and a, respectively.
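A minimal numerical sketch of [1.15] (Python with numpy; the Morlet-like parent wavelet and all parameter values are hypothetical choices, not prescribed by the text): the response |R(0, a)| is largest at the scale a that dilates the wavelet's unit oscillation frequency onto the signal's frequency.

```python
import numpy as np

# Morlet-like parent wavelet: a Gaussian-windowed complex exponential with
# unit oscillation frequency (an illustrative choice of psi_M).
def parent_wavelet(t):
    return np.exp(-t ** 2 / 2.0) * np.exp(2j * np.pi * t)

def wavelet_coeff(x, t, tau, a):
    """R(tau, a) = integral of x(t) psi_M((t - tau) / a) dt, via a Riemann sum."""
    dt = t[1] - t[0]
    return np.sum(x * parent_wavelet((t - tau) / a)) * dt

# A 2 Hz sinusoid responds most strongly at the scale that brings the
# wavelet's oscillation to 2 Hz, i.e. a = 1/2.
t = np.linspace(-10.0, 10.0, 4001)
x = np.cos(2 * np.pi * 2.0 * t)
scales = [0.25, 0.5, 1.0, 2.0]
responses = [abs(wavelet_coeff(x, t, 0.0, a)) for a in scales]
```

Scanning τ as well would produce the full timescale map; here only the dependence on a is probed.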

Quadratic time–frequency representations

In this case, G[\cdot] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\!\left( t + \frac{\tau}{2} \right) x^{*}\!\left( t - \frac{\tau}{2} \right) e^{-j 2 \pi f \tau} \, \mathrm{d}\tau \quad [1.16]

which is inversible if at least one point of x(t) is known.

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and \psi_M(\cdot) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to energy or power are considered.

The energy or power spectral density

If we are interested in the distribution of energy over frequency (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = \left| \mathrm{FT}[x(t)] \right|^2 = |\hat{x}(f)|^2 = \mathrm{ESD}[x(t)] \quad [1.17]


for continuous-time signals, and:

S_x(l) = \left| \mathrm{DFT}[x(k)] \right|^2 = |\hat{x}(l)|^2 = \mathrm{ESD}[x(k)] \quad [1.18]

for discrete-time signals.

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for the interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2 \, \mathrm{d}f \quad [1.19]

and |\hat{x}(f)|^2 represents the energy distribution in f, just as |x(t)|^2 represents its distribution over t.

For discrete-time signals, Parseval's theorem is expressed a little differently:

E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2

but the interpretation is the same.
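This discrete Parseval relation is easy to verify numerically (a Python sketch with numpy, whose unnormalized DFT convention matches [1.13]):

```python
import numpy as np

# Numerical check of the discrete Parseval relation: with numpy's unnormalized
# DFT, the time-domain energy equals the frequency-domain energy divided by N.
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
# E_time and E_freq agree to machine precision.
```

The 1/N factor is purely a matter of DFT normalization convention; with a unitary DFT it would disappear.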

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\mathrm{IFT}[S_x(f)] = \int_{\mathbb{R}} x(t) \, x^{*}(t - \tau) \, \mathrm{d}t = \gamma_{xx}(\tau) \quad [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

S_x(f) = \mathrm{ESD}[x(t)] = \mathrm{FT}[\gamma_{xx}(\tau)] \quad [1.21]
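The discrete analogue of [1.20]–[1.21] can be checked numerically (a Python/numpy sketch; for a finite-length signal and the DFT, the relevant autocorrelation is the circular one, an assumption of this demo rather than something stated in the text):

```python
import numpy as np

# Discrete analogue of [1.20]-[1.21]: the ESD is the DFT of the signal's
# circular autocorrelation, computed both ways on a random signal.
rng = np.random.default_rng(2)
x = rng.standard_normal(128)
N = len(x)

esd = np.abs(np.fft.fft(x)) ** 2                 # |DFT(x)|^2, as in [1.18]

# Circular autocorrelation: gamma(m) = sum_k x(k) x((k + m) mod N)
gamma = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)])

esd_from_gamma = np.fft.fft(gamma).real          # imaginary part ~ 0 (gamma is symmetric)
```

That the two results coincide shows concretely which information the ESD keeps (the autocorrelation) and which it discards (the spectral phase).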

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the

Page 11: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

x Digital Spectral Analysis

113 Generation of a randomly sampled stochastic process 302 1131 Objective 302 1132 Generation of the sampling times 302 1133 CARMA processes 303

114 Spectral analysis using undated samples 305 1141 Non-parametric estimation 305 1142 Parametric estimation 307

115 Spectral analysis using dated samples 309 1151 Masryrsquos work 309 1152 Slotting technique 310 1153 IRINCORREL A consistent spectral estimator 311

116 Perspectives 314 117 Bibliography 315

Chapter 12 SpacendashTime Adaptive Processing 317 Laurent SAVY and Franccedilois LE CHEVALIER

121 STAP spectral analysis and radar signal processing 319 1211 Generalities 319 1212 Radar configuration with side antenna 320 1213 Radar configuration with forward antenna 324

122 Spacendashtime processing as a spectral estimation problem 327 1221 Overview of spacendashtime processing 327 1222 Processing in additive white noise 328 1223 Processing in additive colored noise 330

123 STAP architectures 334 1231 Generalities and notations 334 1232 Pre-Doppler STAP 335 1233 Post-Doppler STAP 344

124 Relative advantages of pre-Doppler and post-Doppler STAP 354 1241 Estimation of adaptive filtering coefficients 354 1242 Exclusion of target signal from training data 356 1243 Clutter heterogeneity 356 1244 Computing load 357 1245 Angular position estimation 357 1246 Anti-jamming and STAP compatibility 358

125 Conclusion 358 126 Bibliography 359 127 Glossary 360

Chapter 13 Particle Filtering and Tracking of Varying Sinusoids 361 David BONACCI

131 Particle filtering 361

Table of Contents xi

1311 Generalities 362 1312 Strategies for efficient particle filtering 365

132 Application to spectral analysis 370 1321 Observation model 370 1322 State vector 371 1323 State evolution model 372 1324 Simulation results 373

133 Bibliography 375

List of Authors 377

Index 379

Preface

The oldest concept dealt with in signal processing is the concept of frequency and its operational consequence the constant quest of frequency contents of a signal This concept is obviously linked to the fascination for sine functions deriving from the 19th Century discovery of the ldquomagicrdquo property of the sine functions or more precisely complex exponentials it is the only function set able to cross what we call today ldquolinear invariant systemsrdquo without going out of this set ie the output of such a system remains in the same class as the input The distribution of qualifying quantities such as power or energy of a signal over frequency the so-called spectrum is probably one of the most studied topics in signal processing The concept of frequency itself is not limited to time-related functions but is much more general in particular image processing deals with space frequency concept related to geometrical length units etc

We can with good reason wonder on the pertinence of the importance given to spectral approaches From a fundamental viewpoint they relate to the Fourier transformation projection of signals on the set of special periodic functions which include complex exponential functions By generalizing this concept the basis of signal vector space can be made much wider (Hadamard Walsh etc) while maintaining the essential characteristics of the Fourier basis In fact the projection operator induced by the spectral representations measures the ldquosimilarityrdquo between the projected quantity and a particular basis they have henceforth no more ndash and no less ndash relevance than this

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description but probably has its origins in the fact that the concept of frequency is in fact a perception through various human ldquosensorsrdquo the system of vision which perceives two concepts of frequency (time-dependent for colored perception and spatial via optical concepts of separating

xiv Digital Spectral Analysis

power or resolution) and hearing which no doubt is at the historical origin of the perceptual concept of frequency ndash Pythagoras and the ldquoMusic of the Spheresrdquo and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of the frequency sensitivity)

Whatever the reasons may be spectral descriptors are the most commonly used in signal processing realizing this the measurement of these descriptors is therefore a major issue This is the reason for the existence of this book dedicated to this measurement that is classically christened as spectral analysis The term digital refers to todayrsquos most widely spread technical means to implement the proposed methods of the analysis

It is not essential that we must devote ourselves to a tedious hermeneutic to understand spectral analysis through countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]) If we devote ourselves to this exercise in erudition concerning the cultural level we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s these approaches have the specific nature of being controlled by the availability of technical tools that allow the analysis to be performed It must kept in mind that the first spectral analyzers used optical analyzers (spectrographs) then at the end of this archaeological phase ndash which is generally associated with the name of Isaac Newton ndash spectral analysis got organized around analog electronic technologies We will not be surprised indeed that the theoretical tools were centered on concepts of selective filtering and sustained by the theory of time-continuous signals (see [BEN 71]) At this time the criteria qualifying the spectral analysis methods were formulated frequency resolution or separating power variance of estimators etc They are still in use today and easy to assess in a typical linear filtering context

The change to digital processing tools was first done by transposition of earlier analog approaches adapting the time axis to discrete time signals Second a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods But beyond the mere availability of more comfortable digital tools the existence of such methods freed the imagination by allowing the use of descriptors derived from the domain of parametric modeling This has its origin in a field of statistics known as analysis of chronological series (see [BOX 70]) the so-called time series which was successfully applied by G Yule (1927) for the determination of periodicities of the number of sun spots but it is actually the present availability of sufficiently powerful digital methods from the mid-1970s which led the community of signal processing to consider parametric modeling as a tool for spectral analysis with its own characteristics including the possibilities to obtain ldquosuper resolutionsrdquo andor to process signals of very short duration We will see that characterizing these

Preface xv

estimators with the same criteria as estimators from the analog world is not an easy task ndash the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem

Part 1 brings together the processing tools that contribute to spectral analysis Chapter 1 lists the basics of the signal theory needed to read the following parts of the book the informed reader could obviously skip this Next digital signal processing the theory of estimation and parametric modeling of time series are presented

The ldquoclassicalrdquo methods known nowadays as non-parametric methods form part of the second part The privative appearing in the qualification of ldquonon-parametricrdquo must not be seen as a sign of belittling these methods ndash they are the most used in industrial spectral analysis

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, Capon's method and its variants, and then the estimators based on the concepts of sub-spaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals – a subject with great potential, tackled in greater depth in another book of the IC2 series [HLA 05] – and to space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that calls deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous-time signal x(t) (or discrete-time signal x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

Chapter written by Francis CASTANIÉ.

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to some quantities known to physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], with continuous or discrete time:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t < \infty \qquad [1.1]

E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \qquad [1.2]

We recognize here the membership of x(t) in standard function spaces (noted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \, \mathrm{d}t < \infty \qquad [1.3]

P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \qquad [1.4]

If we accept the idea that the sums of equations [1.1] or [1.2] represent "energies", those of equations [1.3] or [1.4] then represent powers.
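As a numerical aside (an illustrative addition, not from the original text, with arbitrary parameter values), the discrete-time sums [1.2] and [1.4] are easy to evaluate; the Python sketch below contrasts a "pulse-shaped" finite energy signal with a permanent finite power one:

```python
import math

def energy(x):
    # Equation [1.2]: sum of |x(k)|^2 over the available samples.
    return sum(abs(v) ** 2 for v in x)

def average_power(x):
    # Finite-window estimate of equation [1.4]: energy divided by duration.
    return energy(x) / len(x)

a, f = 0.05, 0.1  # illustrative damping and frequency values

# "Pulse-shaped" signal: damped cosine, zero before k = 0.
pulse = [math.exp(-a * k) * math.cos(2 * math.pi * f * k) for k in range(2000)]

# "Permanent" signal: pure sinusoid, which never dies out.
sine = [math.cos(2 * math.pi * f * k) for k in range(2000)]
```

Enlarging the observation window leaves `energy(pulse)` unchanged to machine precision, whereas the sinusoid's energy grows without bound; only its average power stabilizes (here at 1/2).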

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A \, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0; \qquad x(t) = 0, \quad t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form:

x(t) = \sum_{i=1}^{4} x_i(t - t_i)

where the four components x_i(t) start at staggered times t_i. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. signals that do not cancel at infinity. For example:

x(t) = A(t) \sin(\psi(t)), \quad t \in \mathbb{R}, \quad |A(t)| < \infty \qquad [1.5]

An example is given in Figure 1.2. There is obviously a start and an end, but the mathematical model cannot take this unknown data into account, and it is relevant to represent the signal by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.

Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (the j-th outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations of one of the outcomes, however well these might demonstrate the properties sought; here we must "go back", as far as possible, to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t_0, t_1], we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).
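To make this "going back" concrete, here is a small Python simulation (purely illustrative, not from the book; the signal model and all values are assumptions): K outcomes of a hypothetical random signal are drawn, and the first-order moment E[x(t)] is estimated by averaging across realizations at each fixed t:

```python
import math
import random

random.seed(0)

# Hypothetical random signal: a deterministic component observed through
# additive Gaussian noise -- each call draws one outcome x_j(t), t = 0..N-1.
N, K = 200, 500
def draw_outcome():
    return [math.sin(2 * math.pi * 0.02 * t) + random.gauss(0.0, 0.5)
            for t in range(N)]

outcomes = [draw_outcome() for _ in range(K)]

# Estimating one law characteristic of x(t, omega): its first-order moment
# E[x(t)], by averaging across the K realizations at each instant t.
mean_est = [sum(x[t] for x in outcomes) / K for t in range(N)]
```

No single outcome resembles the underlying sine, yet the ensemble average recovers it, which is exactly the sense in which the information lives in the laws of x(t, ω) rather than in any one realization.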

Figure 1.3 shows some outcomes of the same physical signal (the Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are sets of probabilities:

P\left[ x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \right] = L(D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n) \qquad [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + \mathrm{d}x_i[:

P[\cdot] = p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n \qquad [1.7]

and the distribution functions, with D_i = \left] -\infty, x_i \right]:

P[\cdot] = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \qquad [1.8]


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, p(x_1, \ldots, x_n; t_1, \ldots, t_n) \, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n} \, \mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \qquad [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), \ldots, x(t_n)), defined by:

\Phi(u_1, \ldots, u_n) = \Phi(\mathbf{u}) = E\left[ \exp\left( j \, \mathbf{u}^T \mathbf{x} \right) \right], \quad \text{with } \mathbf{u} = (u_1, \ldots, u_n)^T, \ \mathbf{x} = (x(t_1), \ldots, x(t_n))^T \qquad [1.10]

We see that the laws and the moments depend on the considered points t_1, …, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0) \right]

The interval of t_0 for which this type of equality is verified yields various stationarity classes:

– local stationarity, if this is true only for t_0 \in [T_0, T_1];

– asymptotic stationarity, if this is verified only when t_0 \to \infty; etc.

We will refer to [LAC 00] for a detailed study of these properties
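As a rough numerical illustration (an addition to the text, and one that additionally assumes ergodicity, so that time averages over a single outcome can stand in for ensemble moments), the Python sketch below compares the order-1 moment of a stationary signal over two distant windows with that of a signal whose mean drifts:

```python
import random

random.seed(1)

# One outcome of each signal, T samples long.
T = 20000
stationary = [random.gauss(0.0, 1.0) for _ in range(T)]           # i.i.d. noise
nonstat = [0.001 * t + random.gauss(0.0, 1.0) for t in range(T)]  # drifting mean

def window_mean(x, start, length):
    # Empirical order-1 moment over one observation window.
    return sum(x[start:start + length]) / length

# Crude translation-invariance check: compare the empirical mean over two
# windows separated by a large shift t0.
m1, m2 = window_mean(stationary, 0, 5000), window_mean(stationary, 15000, 5000)
d1, d2 = window_mean(nonstat, 0, 5000), window_mean(nonstat, 15000, 5000)
```

For the stationary outcome, the two window means agree to within statistical fluctuation; for the drifting one, the shift of the time origin changes the estimated moment drastically.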


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if:

E\left[ x^{k_1}(t_1) \cdots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T) \right], \quad \forall (t_1, \ldots, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by formattings of the information present in the signal whose sole initial motive is to highlight certain characteristics of the signal. These representations are said to be complete if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information concerning it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations of the signal, such as:

R(u_1, \ldots, u_m) = H\!\left[ \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} G\left[ x(t_1), \ldots, x(t_n) \right] \cdot \psi(u_1, \ldots, u_m; t_1, \ldots, t_n) \, \mathrm{d}t_1 \cdots \mathrm{d}t_n \right] \qquad [1.11]

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(·) – for example ψ(t, f) = e^{−2jπft} – has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions and, in general, representations with m ≥ n will be used. We will expect it to be invertible (i.e. that an exact inverse transformation exists) and to highlight certain properties of the signal better than the function x(t) itself. It must also be understood that demonstrating one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations, and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with ψ(t, f) = e^{−2jπft}. We may once again recall its definition here:

\hat{x}(f) = \mathrm{FT}[x(t)] = \int_{\mathbb{R}} x(t) \, e^{-2j\pi f t} \, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \qquad [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0), \ldots, x(N-1))^T of length N:

\hat{x}(l) = \mathrm{DFT}_N[x(k)] = \sum_{k=0}^{N-1} x(k) \, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \qquad [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of the likeness between x(t) and e^{−2jπft}, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
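This inner-product reading of [1.13] can be checked directly; the naive Python implementation below (an illustrative addition, with arbitrary N) applies the definition to a complex sine that coincides with one basis function, so only that one projection is non-zero:

```python
import cmath

def dft(x):
    # Direct form of equation [1.13]: inner products of x with e^{-2j*pi*kl/N}.
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

# A complex sine at bin 3 "resembles" exactly one basis function, so the
# inner product is large for l = 3 and numerically zero elsewhere.
N = 16
x = [cmath.exp(2j * cmath.pi * 3 * k / N) for k in range(N)]
X = dft(x)
```

The projection at l = 3 equals N (the full "likeness"), while all the other inner products vanish up to rounding error.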

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^{−2jπft}, leading to an expression of the type:

R(\tau, f) = \int x(t) \, w(t - \tau) \, e^{-2j\pi f t} \, \mathrm{d}t \qquad [1.14]
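A discrete sketch of [1.14] in Python (an illustrative addition; the Hann window and the test signal are arbitrary choices, not from the book) shows how the sliding window w(t − τ) localizes the frequency analysis:

```python
import cmath
import math

def linear_tf(x, tau, f, L=64):
    # Discrete analog of [1.14] at one (tau, f) point, with an illustrative
    # Hann window w of length 2L+1 centered on tau (w vanishes at +-L).
    acc = 0.0 + 0.0j
    for t in range(max(0, tau - L), min(len(x), tau + L + 1)):
        w = 0.5 * (1.0 + math.cos(math.pi * (t - tau) / L))
        acc += x[t] * w * cmath.exp(-2j * cmath.pi * f * t)
    return acc

# Test signal whose frequency jumps from 0.05 to 0.20 halfway through.
x = [math.cos(2 * math.pi * (0.05 if t < 500 else 0.20) * t) for t in range(1000)]
```

Around τ = 250 the modulus peaks near f = 0.05, and around τ = 750 near f = 0.20: a time–frequency localization that the global Fourier transform of [1.12] cannot provide.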


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, a) = \psi_M\!\left( \frac{t - \tau}{a} \right), which gives:

R(\tau, a) = \int x(t) \, \psi_M\!\left( \frac{t - \tau}{a} \right) \mathrm{d}t \qquad [1.15]

where ψ_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.
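As an illustrative Python sketch (not from the book; the "Mexican hat" parent wavelet and the 1/√a normalization are common conventions assumed here so that coefficients are comparable across scales), a discrete form of [1.15] indeed responds most strongly when the dilation a matches the width of the analyzed feature:

```python
import math

def mexican_hat(t):
    # An example parent (mother) wavelet: (1 - t^2) e^{-t^2 / 2}, zero mean.
    return (1.0 - t * t) * math.exp(-t * t / 2.0)

def wavelet_coeff(x, tau, a):
    # Discrete sketch of [1.15]: correlate x with the parent wavelet
    # translated by tau and dilated by a; 1/sqrt(a) normalization assumed.
    s = sum(x[t] * mexican_hat((t - tau) / a) for t in range(len(x)))
    return s / math.sqrt(a)

# A narrow Gaussian pulse and a broad one, both centered at t = 100.
narrow = [math.exp(-((t - 100) / 3.0) ** 2) for t in range(200)]
broad  = [math.exp(-((t - 100) / 30.0) ** 2) for t in range(200)]
```

The narrow pulse is matched best by a small scale and the broad pulse by a large one, which is the timescale analog of the frequency matching of [1.12].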

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[·] is the identity. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\!\left( t + \frac{\tau}{2} \right) x^{*}\!\left( t - \frac{\tau}{2} \right) e^{-2j\pi f \tau} \, \mathrm{d}\tau \qquad [1.16]

which is invertible if at least one point of x(t) is known.
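A discrete sketch of [1.16] in Python (an illustrative addition; the doubled exponent and the conjugate on the second factor follow the usual discrete Wigner–Ville conventions assumed here) concentrates the energy of a complex exponential at its frequency:

```python
import cmath

def wigner_ville_slice(x, n, nfreq):
    # Discrete analog of [1.16] at time index n:
    #   W(n, k) = sum_m x(n+m) x*(n-m) e^{-4j pi k m / nfreq}
    # The factor 4 (instead of 2) comes from the tau/2 arguments in [1.16].
    N = len(x)
    M = min(n, N - 1 - n)  # largest symmetric lag available at index n
    out = []
    for k in range(nfreq):
        acc = 0j
        for m in range(-M, M + 1):
            acc += (x[n + m] * x[n - m].conjugate()
                    * cmath.exp(-4j * cmath.pi * k * m / nfreq))
        out.append(abs(acc))
    return out

N = 64
f0 = 8 / N  # complex exponential at normalized frequency 8/64
x = [cmath.exp(2j * cmath.pi * f0 * k) for k in range(N)]
W = wigner_ville_slice(x, N // 2, N)  # one time slice of the distribution
```

On the first half of the frequency axis, |W| peaks exactly at bin k = f0·N = 8; an alias of equal height appears in the second half, a known feature of the discrete Wigner–Ville distribution.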

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the distribution of energy over frequency (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = \mathrm{ESD}[x(t)] = |\hat{x}(f)|^2 = \left| \mathrm{FT}[x(t)] \right|^2 \qquad [1.17]


for continuous time signals and

S_x(l) = \mathrm{ESD}[x(k)] = |\hat{x}(l)|^2 = \left| \mathrm{DFT}[x(k)] \right|^2 \qquad [1.18]

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost; we will call them energy spectral densities (ESD). But the main reason for the interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2 \, \mathrm{d}f \qquad [1.19]

and |\hat{x}(f)|^2 represents the energy distribution over f, just as |x(t)|^2 represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2

but the interpretation is the same
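The discrete form of Parseval's theorem is easy to verify numerically; the Python sketch below (an illustrative addition, with an arbitrary test vector) checks the identity, including the 1/N factor, to machine precision:

```python
import cmath

def dft(x):
    # Direct DFT of equation [1.13].
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

x = [1.0, -2.0, 0.5, 3.0, 0.0, -1.5, 2.0, 1.0]  # arbitrary test signal
X = dft(x)

E_time = sum(abs(v) ** 2 for v in x)            # left-hand side
E_freq = sum(abs(v) ** 2 for v in X) / len(x)   # right-hand side, note the 1/N
```

Both sums give the same energy E, which is what justifies reading |x̂(l)|² as an energy distribution over the frequency bins.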

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\mathrm{IFT}\left[ S_x(f) \right](\tau) = \int_{\mathbb{R}} x(t) \, x^{*}(t - \tau) \, \mathrm{d}t = \gamma_{xx}(\tau) \qquad [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

S_x(f) = \mathrm{ESD}[x(t)] = \mathrm{FT}\left[ \gamma_{xx}(\tau) \right] \qquad [1.21]
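In the discrete, circular case this definition can be verified directly; the Python sketch below (an illustrative addition, with indices taken modulo N as the DFT implies) confirms that the DFT of the autocorrelation reproduces the ESD of [1.18]:

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * k * l / N) for k in range(N))
            for l in range(N)]

def autocorr(x):
    # Circular analog of [1.20]: gamma(m) = sum_k x(k) x*(k - m), indices mod N.
    N = len(x)
    return [sum(x[k] * x[(k - m) % N].conjugate() for k in range(N))
            for m in range(N)]

x = [0.5, 1.0, -1.0, 2.0, 0.0, -0.5, 1.5, -2.0]  # arbitrary test signal
esd = [abs(v) ** 2 for v in dft(x)]   # [1.18]-style ESD
esd_via_gamma = dft(autocorr(x))      # [1.21]: DFT of the autocorrelation
```

The two frequency-domain vectors coincide bin by bin, which is the discrete counterpart of the equivalence between [1.17] and [1.21].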

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the



Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence: the constant quest for the frequency contents of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without going out of this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as the power or energy of a signal, over frequency – the so-called spectrum – is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a concept of space frequency related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: the projection of signals on the set of special periodic functions, which include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more – and no less – relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution), and hearing, which no doubt is at the historical origin of the perceptual concept of frequency – Pythagoras and the "Music of the Spheres" – and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.


121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1 H [x] = G [x] = x and ψ (t u) is a set of orthogonal functions The undethronable Fourier transform is the most well known with ( ) _ 2 j ftt f e πψ = We may once again recall its definition here

( ) ( )( ) ( ) 2FT e d with j ftx f x t x t t fπminusreal

= = isinrealint [112]

and its discrete version for a signal vector ( ) ( )0 1 Tx x N= minusx hellip of length N

( ) ( )( ) ( ) [ ]1 2

0DFT with 0 1

klN

N j

kx l x k x k e l N

πminus minus

=

= = isin minussum [113]

If we bear in mind that the inner product appearing in equation [112] or [113] is a measure of likelihood between x(t) and _ 2 j ftπe we immediately see that the choice of this function set ( )t fψ is not innocent we are looking to highlight complex sine curves in x(t)

If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 13: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Preface

The oldest concept dealt with in signal processing is the concept of frequency, and its operational consequence: the constant quest for the frequency content of a signal. This concept is obviously linked to the fascination for sine functions, deriving from the 19th century discovery of the "magic" property of the sine functions, or more precisely complex exponentials: it is the only function set able to cross what we call today "linear invariant systems" without leaving this set, i.e. the output of such a system remains in the same class as the input. The distribution of qualifying quantities, such as the power or energy of a signal, over frequency, the so-called spectrum, is probably one of the most studied topics in signal processing. The concept of frequency itself is not limited to time-related functions, but is much more general: in particular, image processing deals with a concept of spatial frequency related to geometrical length units, etc.

We can with good reason wonder about the pertinence of the importance given to spectral approaches. From a fundamental viewpoint, they relate to the Fourier transformation: the projection of signals onto the set of special periodic functions that include the complex exponential functions. By generalizing this concept, the basis of the signal vector space can be made much wider (Hadamard, Walsh, etc.), while maintaining the essential characteristics of the Fourier basis. In fact, the projection operator induced by the spectral representations measures the "similarity" between the projected quantity and a particular basis: they have henceforth no more, and no less, relevance than this.

The predominance of this approach in signal processing is not only based on this reasonable (but dry) mathematical description, but probably has its origins in the fact that the concept of frequency is in fact a perception through various human "sensors": the system of vision, which perceives two concepts of frequency (time-dependent for colored perception, and spatial via the optical concepts of separating power or resolution); hearing, which no doubt is at the historical origin of the perceptual concept of frequency (Pythagoras and the "Music of the Spheres"); and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of frequency sensitivity).

Whatever the reasons may be, spectral descriptors are the most commonly used in signal processing; realizing this, the measurement of these descriptors is therefore a major issue. This is the reason for the existence of this book, dedicated to this measurement, which is classically christened spectral analysis. The term digital refers to today's most widely spread technical means to implement the proposed methods of analysis.

It is not essential that we devote ourselves to a tedious hermeneutics to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we do devote ourselves to this exercise in erudition, at least at the cultural level, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s. These approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase, which is generally associated with the name of Isaac Newton, spectral analysis became organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and are easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]), the so-called time series, which was successfully applied by G. Yule (1927) for the determination of the periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, which led the community of signal processing to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility of obtaining "super resolution" and/or of processing signals of very short duration. We will see that characterizing these estimators with the same criteria as estimators from the analog world is not an easy task; the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip it. Next, digital signal processing, the theory of estimation and the parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification of "non-parametric" must not be seen as a sign of belittling these methods: they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, then Capon's methods and their variants, and finally the estimators based on the concepts of sub-spaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to the parametric spectral analysis of non-stationary signals (a subject with great potential, tackled in greater depth in another book of the IC2 series [HLA 05]) and to space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ, May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London, and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents degraded, or even deceptive, performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time signal x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t0, from the moment that it is known for t < t0 (singular term of the Wold decomposition; see e.g. Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

Chapter written by Francis CASTANIÉ.

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to quantities known to physicists.

Finite energy signals verify the integral properties of equations [1.1] and [1.2], with continuous or discrete time:

$$E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t < \infty \qquad [1.1]$$

$$E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \qquad [1.2]$$

We recognize the membership of x(t) to standard function spaces (noted as L2 or l2) as well as the fact that this integral to within some dimensional constant (an impedance in general) represents the energy E of the signal

Signals of finite average power verify

$$P = \lim_{T \to +\infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \, \mathrm{d}t < \infty \qquad [1.3]$$

$$P = \lim_{N \to +\infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \qquad [1.4]$$

If we accept the idea that the sums of equation [1.1] or [1.2] represent "energies", those of equation [1.3] or [1.4] then represent powers.
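As a quick numerical illustration of this distinction (a NumPy sketch with signals chosen for the purpose, not taken from the text), a damped pulse has a convergent energy sum in the sense of [1.2], while a permanent sinusoid has an energy that grows with the observation length but an average power that converges in the sense of [1.4]:

```python
import numpy as np

k = np.arange(100_000)

# Finite energy signal [1.2]: a damped pulse, |x(k)| -> 0 fast enough for the sum to converge.
pulse = np.exp(-0.01 * k)
energy = np.sum(np.abs(pulse) ** 2)        # converges to 1 / (1 - e^(-0.02))

# Finite power signal [1.4]: a permanent sinusoid; its energy grows without bound
# with the observation length, but its average power converges to A^2 / 2.
A = 2.0
sine = A * np.cos(2 * np.pi * 0.05 * k)
power = np.mean(np.abs(sine) ** 2)         # -> A^2 / 2
```

Doubling the observation length leaves `power` essentially unchanged, while the truncated energy of `sine` doubles: this is exactly the morphological difference between the two classes.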

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: finite energy signals will in practice be "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

$$x(t) = A\, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0; \qquad x(t) = 0, \quad t < 0$$


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

$$x(t) = \sum_{i=1}^{4} x_i(t - t_i)$$

where the four components $x_i(t)$ start at staggered times $t_i$. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will in practice be permanent signals, i.e. not canceling at infinity. For example:

$$x(t) = A(t) \sin\big(\psi(t)\big), \quad t \in \mathbb{R}, \qquad |A(t)| < \infty \qquad [1.5]$$

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of the signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation $x_j(t)$ (the j-th outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes $x(t, \omega)$. The information of the signal is no longer contained in one of the realizations $x_j(t)$, but in the probability laws that govern $x(t, \omega)$. Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations on one of the outcomes, which could well demonstrate the properties sought; here we must "go back" as far as possible to the probability laws of $x(t, \omega)$ from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data $x_j(t)$, $j = 1, \ldots, K$, $t \in [t_0, t_1]$, we can go back to the laws of $x(t, \omega)$ is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal
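Outcomes with such a "family likeness" are easy to mimic numerically. In the hedged NumPy sketch below (the frequency, noise level and seed are arbitrary choices, not values measured from the figure), three realizations of the same noisy tone differ sample by sample, yet all carry the same relevant information, here the dominant frequency:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024
k = np.arange(N)
f0 = 40 / N        # normalized "Doppler" frequency, common to all outcomes

# Three outcomes x_j(t): same underlying information, different phase and noise.
outcomes = [
    np.cos(2 * np.pi * f0 * k + rng.uniform(0, 2 * np.pi))
    + 0.3 * rng.standard_normal(N)
    for _ in range(3)
]

# The dominant DFT bin (DC excluded) is the same for every realization.
peaks = [int(np.argmax(np.abs(np.fft.rfft(x))[1:])) + 1 for x in outcomes]
```

No two realizations are numerically identical, but the estimated frequency, the information a probabilistic description aims to recover, is reproducible.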

The laws are probability sets

$$P\big[x(t_1) \in D_1,\, x(t_2) \in D_2, \ldots, x(t_n) \in D_n\big] = L(D_1, D_2, \ldots, D_n;\, t_1, t_2, \ldots, t_n) \qquad [1.6]$$

whose most well-known examples are the probability densities, for which $D_i = [x_i, x_i + \mathrm{d}x_i[$:

$$P[\,\cdot\,] = p(x_1, \ldots, x_n;\, t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n \qquad [1.7]$$

and the distribution functions, with $D_i = \left]-\infty, x_i\right]$:

$$P[\,\cdot\,] = F(x_1, \ldots, x_n;\, t_1, \ldots, t_n) \qquad [1.8]$$


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by

$$E\big\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\big\} = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; p(x_1, \ldots, x_n;\, t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n$$

$$= \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \cdots x_n^{k_n}\; \mathrm{d}F(x_1, \ldots, x_n;\, t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \qquad [1.9]$$

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple $\big(x(t_1), \ldots, x(t_n)\big)$, defined by

$$\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E\big\{\exp(j\, \mathbf{u}^T \mathbf{x})\big\}, \quad \mathbf{u} = (u_1, \ldots, u_n)^T, \;\; \mathbf{x} = \big(x(t_1), \ldots, x(t_n)\big)^T \qquad [1.10]$$

We see that the laws and the moments depend on the considered points $t_1, \ldots, t_n$ of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if

$$p(x_1, \ldots, x_n;\, t_1, \ldots, t_n) = p(x_1, \ldots, x_n;\, t_1 + t_0, \ldots, t_n + t_0)$$

and of stationarity for the moment of order M if

$$E\big\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\big\} = E\big\{x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0)\big\}$$

The set of values $t_0$ for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for $t_0 \in [T_0, T_1]$;

– asymptotic stationarity, if this is verified only when $t_0 \to \infty$, etc.

We will refer to [LAC 00] for a detailed study of these properties
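As a small ensemble illustration of stationarity for the moment of order 2 (a NumPy sketch; the sinusoid and its parameters are chosen for the example, not taken from the text), a sinusoid with a uniform random phase has an ensemble moment $E\{x(t)\,x(t+\tau)\}$ that depends only on the lag τ, not on the time origin t:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50_000, 64                  # M outcomes of length N
A, f, tau = 1.0, 0.05, 4
t = np.arange(N)

# x_j(t) = A cos(2*pi*f*t + theta_j), with theta_j uniform over [0, 2*pi)
theta = rng.uniform(0, 2 * np.pi, size=(M, 1))
x = A * np.cos(2 * np.pi * f * t + theta)

# Ensemble estimates of E{x(t) x(t + tau)} at two different time origins.
r_at_3 = np.mean(x[:, 3] * x[:, 3 + tau])
r_at_20 = np.mean(x[:, 20] * x[:, 20 + tau])

# Theoretical value, independent of the time origin: (A^2 / 2) cos(2*pi*f*tau)
expected = (A ** 2 / 2) * np.cos(2 * np.pi * f * tau)
```

Both estimates agree with the same τ-only expression, up to the statistical fluctuation of the finite ensemble.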


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if

$$E\big\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\big\} = E\big\{x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T)\big\}, \quad \forall (t_1, \ldots, t_n)$$

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by forming the information present in the signal with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that all the information that concerns it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

$$R(u_1, \ldots, u_m) = H\left[\int \cdots \int G\big[x(t_1), \ldots, x(t_n)\big] \cdot \psi(t_1, \ldots, t_n;\, u_1, \ldots, u_m)\, \mathrm{d}t_1 \cdots \mathrm{d}t_n\right] \qquad [1.11]$$

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(·), for instance $\psi(t, f) = e^{-2j\pi f t}$, has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations with m ≥ n. We will expect it to be inversible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time-frequency representations, timescale representations and quadratic time-frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with $\psi(t, f) = e^{-2j\pi f t}$. We may once again recall its definition here:

$$x(f) = \mathrm{FT}\big(x(t)\big) = \int_{\mathbb{R}} x(t)\, e^{-2j\pi f t}\, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \qquad [1.12]$$

and its discrete version, for a signal vector $\mathbf{x} = \big(x(0), \ldots, x(N-1)\big)^T$ of length N:

$$x(l) = \mathrm{DFT}\big(x(k)\big) = \sum_{k=0}^{N-1} x(k)\, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \qquad [1.13]$$

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and $e^{-2j\pi f t}$, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
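As a small numerical confirmation (NumPy's `fft` computes precisely the sum of equation [1.13]; the length and frequency below are arbitrary), the DFT of a cosine at bin frequency 3/N concentrates all of its energy on the conjugate pair of bins l = 3 and l = N - 3:

```python
import numpy as np

N = 32
k = np.arange(N)
x = np.cos(2 * np.pi * 3 * k / N)      # three cycles over the N samples

X = np.fft.fft(x)                      # the sum of equation [1.13]
mag = np.abs(X)
# All the energy sits on bins 3 and N - 3, each of magnitude N / 2;
# every other bin is zero up to rounding error.
```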

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use functions such as the Hadamard-Walsh or Haar functions (see [AHM 75]).
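As a sketch of this idea (a hypothetical example; the Sylvester recursion below is one standard way to build a Hadamard matrix), projecting a ±1 square-wave-like signal, here one of the Walsh functions itself, onto the Hadamard basis concentrates its energy on a single coefficient, exactly as the Fourier basis does for a complex sine curve:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Hadamard matrix of order n (a power of 2), built by the Sylvester recursion."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

N = 8
H = hadamard(N)          # rows are mutually orthogonal +/-1 sequences

x = H[5]                 # a +/-1 "square-wave" signal: one Walsh function

coeffs = H @ x           # projection on the basis: a single non-zero coefficient
```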

Linear time-frequency representations

This is the special case where H[x] = G[x] = x and $\psi(t, f;\, \tau) = w(t - \tau)\, e^{-2j\pi f t}$, leading to an expression of the type

$$R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-2j\pi f t}\, \mathrm{d}t \qquad [1.14]$$
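In discrete time, [1.14] amounts to a windowed DFT computed at successive positions τ of the window w. The sketch below (rectangular window, frequencies chosen so that the peaks fall on exact bins; all values are illustrative) localizes a frequency hop that a global DFT would mix together:

```python
import numpy as np

N, L = 256, 64                          # signal length, analysis window length
k = np.arange(N)

# Frequency hops from 8/L to 16/L at mid-signal.
x = np.where(k < N // 2,
             np.exp(2j * np.pi * (8 / L) * k),
             np.exp(2j * np.pi * (16 / L) * k))

def stft_slice(x, tau, L):
    """R(tau, f) of [1.14] with a rectangular window starting at sample tau."""
    return np.fft.fft(x[tau:tau + L])

early = int(np.argmax(np.abs(stft_slice(x, 32, L))))    # window inside the first half
late = int(np.argmax(np.abs(stft_slice(x, 160, L))))    # window inside the second half
```

Each slice reveals the frequency active around its own time position, which is exactly what the window w(t - τ) brings compared with the plain Fourier transform.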


Timescale (or wavelet) representations

This is the special case where $H[x] = G[x] = x$ and $\psi(t;\, \tau, a) = \psi_M\!\left(\frac{t - \tau}{a}\right)$, which gives

$$R(\tau, a) = \int x(t)\, \psi_M\!\left(\frac{t - \tau}{a}\right) \mathrm{d}t \qquad [1.15]$$

where $\psi_M(t)$ is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a respectively.

Quadratic time-frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner-Ville transformation:

$$R(t, f) = \int x\!\left(t + \frac{\tau}{2}\right) x^*\!\left(t - \frac{\tau}{2}\right) e^{-2j\pi f \tau}\, \mathrm{d}\tau \qquad [1.16]$$

which is inversible if at least one point of x(t) is known
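A crude discrete sketch of [1.16] (edge effects are ignored by evaluating at an interior instant, and the frequency is chosen so the peak falls on an exact bin; all values are illustrative): for a complex exponential at normalized frequency f0, the product $x(t + \tau)\, x^*(t - \tau)$ oscillates at 2·f0 in the lag variable, so the FFT over the lag produces a sharp line whatever the instant t:

```python
import numpy as np

f0 = 0.125                              # normalized frequency of the test signal
n_total, M = 256, 64                    # signal length, number of lags
x = np.exp(2j * np.pi * f0 * np.arange(n_total))

n = 128                                 # interior instant t (far from the edges)
m = np.arange(M)                        # lag variable
r = x[n + m] * np.conj(x[n - m])        # = exp(j * 2*pi * (2*f0) * m) here

W = np.abs(np.fft.fft(r))               # FFT over the lag: a line at bin 2*f0*M
peak = int(np.argmax(W))
```

The doubled frequency axis (the peak sits at bin 2·f0·M) is an artifact of using x(t + τ)x*(t - τ) instead of the half-lags of [1.16]; a full implementation rescales the axis accordingly.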

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of the energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the energy distribution over frequency (we will call it "spectral"), we are led to consider the following quantity:

$$S_x(f) = \big|\mathrm{FT}\big(x(t)\big)\big|^2 = |x(f)|^2 = \mathrm{ESD}\big(x(t)\big) \qquad [1.17]$$


for continuous time signals and

$$S_x(l) = \big|\mathrm{DFT}\big(x(k)\big)\big|^2 = |x(l)|^2 = \mathrm{ESD}\big(x(k)\big) \qquad [1.18]$$

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

$$E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t = \int_{\mathbb{R}} |x(f)|^2 \, \mathrm{d}f \qquad [1.19]$$

and $|x(f)|^2$ represents the distribution of the energy over f, just as $|x(t)|^2$ represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

$$E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |x(l)|^2$$

but the interpretation is the same
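Both discrete sums can be checked in a few lines (a NumPy sketch with an arbitrary random test signal):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 128
x = rng.standard_normal(N)

X = np.fft.fft(x)

E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / N     # note the 1 / N factor of the discrete form
```

The 1/N factor is tied to the unnormalized DFT convention of [1.13]; FFT routines that normalize differently move this factor around.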

The quantities $S_x(f)$ or $S_x(l)$ defined above have an inverse Fourier transform, which can easily be shown to be expressed by

$$\mathrm{IFT}\big(S_x(f)\big) = \int_{\mathbb{R}} x(t)\, x^*(t - \tau)\, \mathrm{d}t = \gamma_{xx}(\tau) \qquad [1.20]$$

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could thus have been axiomatically defined as follows:

$$\mathrm{ESD}\big(x(t)\big) = S_x(f) = \mathrm{FT}\big(\gamma_{xx}(\tau)\big) \qquad [1.21]$$
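In discrete time, with circular indexing, the counterpart of [1.20] and [1.21] is exact: the DFT of the circular autocorrelation of x(k) reproduces $|x(l)|^2$ to rounding error (a NumPy sketch with an arbitrary complex test signal):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X = np.fft.fft(x)
esd = np.abs(X) ** 2                    # S_x(l) of equation [1.18]

# Circular autocorrelation gamma_xx(tau) = sum_t x(t) x*(t - tau), indices mod N
gamma = np.array([np.sum(x * np.conj(np.roll(x, tau))) for tau in range(N)])

# Discrete analog of [1.21]: the DFT of gamma equals the ESD.
```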

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 14: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

xiv Digital Spectral Analysis

power or resolution) and hearing which no doubt is at the historical origin of the perceptual concept of frequency ndash Pythagoras and the ldquoMusic of the Spheresrdquo and probably other proprioceptive sensors (all those who suffer from seasickness have a direct physical experience of the frequency sensitivity)

Whatever the reasons may be spectral descriptors are the most commonly used in signal processing realizing this the measurement of these descriptors is therefore a major issue This is the reason for the existence of this book dedicated to this measurement that is classically christened as spectral analysis The term digital refers to todayrsquos most widely spread technical means to implement the proposed methods of the analysis

It is not essential that we devote ourselves to a tedious hermeneutic to understand spectral analysis through the countless books that have dealt with the subject (we will consult with interest the historical analysis of this field given in [MAR 87]). If we devote ourselves to this exercise in erudition on the cultural level, we realize that the theoretical approaches of current spectral analysis were structured from the late 1950s; these approaches have the specific nature of being controlled by the availability of the technical tools that allow the analysis to be performed. It must be kept in mind that the first spectral analyzers used optical analyzers (spectrographs); then, at the end of this archaeological phase – which is generally associated with the name of Isaac Newton – spectral analysis got organized around analog electronic technologies. We will not be surprised, indeed, that the theoretical tools were centered on concepts of selective filtering, and sustained by the theory of time-continuous signals (see [BEN 71]). At this time, the criteria qualifying the spectral analysis methods were formulated: frequency resolution or separating power, variance of estimators, etc. They are still in use today, and easy to assess in a typical linear filtering context.

The change to digital processing tools was first done by transposition of the earlier analog approaches, adapting the time axis to discrete time signals. Second, a simultaneous increase in the power of processing tools and of algorithms opened the field up to more and more intensive digital methods. But beyond the mere availability of more comfortable digital tools, the existence of such methods freed the imagination, by allowing the use of descriptors derived from the domain of parametric modeling. This has its origin in a field of statistics known as the analysis of chronological series (see [BOX 70]) – the so-called time series – which was successfully applied by G. Yule (1927) for the determination of periodicities of the number of sun spots; but it is actually the availability of sufficiently powerful digital methods, from the mid-1970s, which led the signal processing community to consider parametric modeling as a tool for spectral analysis with its own characteristics, including the possibility to obtain "super resolutions" and/or to process signals of very short duration. We will see that characterizing these


estimators with the same criteria as estimators from the analog world is not an easy task – the mere quantitative assessment of the frequency resolution or spectral variances becomes a complicated problem.

Part 1 brings together the processing tools that contribute to spectral analysis. Chapter 1 lists the basics of the signal theory needed to read the following parts of the book; the informed reader could obviously skip this. Next, digital signal processing, the theory of estimation, and parametric modeling of time series are presented.

The "classical" methods, known nowadays as non-parametric methods, form the second part. The privative appearing in the qualification "non-parametric" must not be seen as a sign of belittling these methods – they are the most used in industrial spectral analysis.

Part 3 obviously deals with parametric methods, studying first the methods based on models of time series, Capon's method and its variants, and then the estimators based on the concepts of sub-spaces.

The fourth and final part is devoted to advanced concepts. It provides an opening to parametric spectral analysis of non-stationary signals – a subject with great potential, which is tackled in greater depth in another book of the IC2 series [HLA 05] – and of space-time processing, and proposes an inroad into the most recent particle filtering-based methods.

Francis CASTANIÉ
May 2011

Bibliography

[BEN 71] BENDAT J.S., PIERSOL A.G., Random Data: Analysis and Measurement Procedures, Wiley-Interscience, 1971.

[BOX 70] BOX G., JENKINS G., Time Series Analysis: Forecasting and Control, Holden-Day, San Francisco, 1970.

[HLA 05] HLAWATSCH F., AUGER F., Temps-fréquence: Concepts et Outils, Hermès Science, Paris, 2005.

[HLA 08] HLAWATSCH F., AUGER F. (eds), Time-frequency Analysis: Concepts and Methods, ISTE Ltd, London and John Wiley & Sons, New York, 2008.

[MAR 87] MARPLE S., Digital Spectral Analysis with Applications, Prentice Hall, Englewood Cliffs, NJ, 1987.

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem, and the various tools and methods for spectral analysis will be more or less adapted, depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal. Another, subtler definition, resulting from the theory of random signals, is based on the exactly predictive nature of x(t), t ≥ t₀, from the moment that it is known for t < t₀ (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis CASTANIÉ.)

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to some quantities known by physicists.

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

E = ∫_ℝ |x(t)|² dt < ∞   [1.1]

E = Σ_{k=−∞}^{+∞} |x(k)|² < ∞   [1.2]

We recognize the membership of x(t) in the standard function spaces (denoted L² or l²), as well as the fact that this integral, to within some dimensional constant (an impedance, in general), represents the energy E of the signal.

Signals of finite average power verify:

P = lim_{T→∞} (1/T) ∫_{−T/2}^{+T/2} |x(t)|² dt < ∞   [1.3]

P = lim_{N→∞} (1/(2N + 1)) Σ_{k=−N}^{+N} |x(k)|² < ∞   [1.4]

If we accept the idea that the sums of equation [1.1] or [1.2] represent "energies", those of equation [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will be, in practice, "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

x(t) = A e^(−at) cos(2πft + θ),  t ≥ 0
x(t) = 0,  t < 0


(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation.)
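Purely as an illustration (not from the book), the convergence of the energy integral [1.1] for this damped cosine can be checked numerically; the following Python/NumPy sketch uses arbitrary assumed values for A, a, f and θ:

```python
import numpy as np

# Assumed, arbitrary parameters of x(t) = A e^{-a t} cos(2*pi*f*t + theta), t >= 0.
A, a, f, theta = 1.0, 0.5, 5.0, 0.3

def energy(T, fs=10_000.0):
    """Riemann-sum approximation of E = integral_0^T |x(t)|^2 dt."""
    t = np.arange(0.0, T, 1.0 / fs)
    x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t + theta)
    return np.sum(np.abs(x) ** 2) / fs

# Doubling the horizon barely changes the result: the integral [1.1] converges.
print(energy(50.0), energy(100.0))
```

Doubling the integration horizon leaves the estimate essentially unchanged, since the e^(−2at) envelope makes the tail negligible.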

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

x(t) = Σ_{i=1}^{4} x_i(t − t_i)

where the four components x_i(t) start at staggered times (t_i). Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will be, in practice, permanent signals, i.e. not canceling at infinity. For example:

x(t) = A(t) sin(ψ(t)),  t ∈ ℝ,  |A(t)| < ∞   [1.5]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.


Figure 1.2. Frequency-modulated sine curve
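Again purely as an illustration, the time average [1.3] of a permanent signal of the type [1.5] can be seen to converge; a Python/NumPy sketch with an arbitrarily assumed instantaneous phase ψ(t):

```python
import numpy as np

# Assumed example of [1.5]: unit amplitude, slowly frequency-modulated phase psi(t).
A = 1.0

def avg_power(T, fs=10_000.0):
    """(1/T) * integral_0^T |x(t)|^2 dt, approximated by a sample mean."""
    t = np.arange(0.0, T, 1.0 / fs)
    x = A * np.sin(2 * np.pi * 10.0 * t + 2.0 * np.sin(0.5 * t))
    return np.mean(np.abs(x) ** 2)

print(avg_power(200.0))  # settles near A**2 / 2
```

For a unit-amplitude sine, however it is frequency-modulated, the average power settles near A²/2 as the averaging window grows.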

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even if partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (jth outcome, or "realization") is one of the results of the random event, but it is no longer the signal: the latter is now constituted by all the possible outcomes x(t, ω). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, ω). Understanding this fact is essential: it is not relevant, in this approach, to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought; here, we must "go back" as far as possible to the probability laws of x(t, ω) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data x_j(t), j = 1, …, K, t ∈ [t₀, t₁], we can go back to the laws of x(t, ω) is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of the same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of the outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets:

P[x(t₁) ∈ D₁, x(t₂) ∈ D₂, …, x(t_n) ∈ D_n] = L(D₁, D₂, …, D_n; t₁, t₂, …, t_n)   [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + dx_i[:

P[…] = p(x₁, …, x_n; t₁, …, t_n) dx₁ … dx_n   [1.7]

and the distribution functions, with D_i = ]−∞, x_i]:

P[…] = F(x₁, …, x_n; t₁, …, t_n)   [1.8]


The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when they exist) by:

E[x^{k₁}(t₁) … x^{k_n}(t_n)] = ∫_ℝ … ∫_ℝ x₁^{k₁} … x_n^{k_n} p(x₁, …, x_n; t₁, …, t_n) dx₁ … dx_n
                             = ∫_ℝ … ∫_ℝ x₁^{k₁} … x_n^{k_n} dF(x₁, …, x_n; t₁, …, t_n),
with M = Σ_i k_i   [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t₁), …, x(t_n)), defined by:

φ(u₁, …, u_n) = φ(u) = E[exp(j uᵀx)],  with u = (u₁, …, u_n)ᵀ and x = (x(t₁), …, x(t_n))ᵀ   [1.10]

We see that the laws and the moments depend on the considered points t₁, …, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if:

p(x₁, …, x_n; t₁, …, t_n) = p(x₁, …, x_n; t₁ + t₀, …, t_n + t₀)

and of stationarity for the moment of order M if:

E[x^{k₁}(t₁) … x^{k_n}(t_n)] = E[x^{k₁}(t₁ + t₀) … x^{k_n}(t_n + t₀)]

The interval of t₀ for which this type of equality is verified yields various stationarity classes:

– local stationarity, if this is true only for t₀ ∈ [T₀, T₁];

– asymptotic stationarity, if this is verified only when t₀ → ∞; etc.

We will refer to [LAC 00] for a detailed study of these properties


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis; we then speak of cyclostationarity. For example, if:

E[x^{k₁}(t₁) … x^{k_n}(t_n)] = E[x^{k₁}(t₁ + T) … x^{k_n}(t_n + T)],  ∀(t₁, …, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by formings of the information present in the signal whose only initial motive is to highlight certain characteristics of the signal. These representations may be complete, if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

R(u₁, …, u_m) = H[ ∫_ℝ … ∫_ℝ G[x(t₁), …, x(t_n)] · ψ(t₁, …, t_n; u₁, …, u_m) dt₁ … dt_n ]   [1.11]

where the operators H[·] and G[·] are not necessarily linear. The kernel ψ(t, f) = e^(−2jπft) has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations such that m ≥ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation), and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations, and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with ψ(t, f) = e^(−2jπft). We may once again recall its definition here:

x̃(f) = FT(x(t)) = ∫_ℝ x(t) e^(−2jπft) dt,  with f ∈ ℝ   [1.12]

and its discrete version for a signal vector x = (x(0), …, x(N − 1))ᵀ of length N:

x̃(l) = DFT(x(k)) = Σ_{k=0}^{N−1} x(k) e^(−2jπkl/N),  with l ∈ [0, N − 1]   [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and e^(−2jπft), we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).
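This remark can be made concrete: each DFT coefficient of [1.13] is literally an inner product between the signal vector and a sampled complex sine curve. A minimal Python/NumPy sketch (the test signal and the size N = 64 are assumptions for the example):

```python
import numpy as np

# The DFT [1.13] as explicit inner products with exp(-2j*pi*k*l/N);
# a sinusoid placed exactly on bin 5 produces a peak at l = 5.
N = 64
k = np.arange(N)
x = np.cos(2 * np.pi * 5 * k / N)

X = np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N)) for l in range(N)])

print(np.allclose(X, np.fft.fft(x)))       # matches the standard FFT
print(np.argmax(np.abs(X[: N // 2])))      # -> 5
```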

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use functions such as those of Hadamard–Walsh or Haar (see [AHM 75]).

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^(−2jπft), leading to an expression of the type:

R(τ, f) = ∫ x(t) w(t − τ) e^(−2jπft) dt   [1.14]
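A minimal numerical sketch of [1.14] (not from the book), with an assumed rectangular window w of length L, shows how the windowing localizes a component in time:

```python
import numpy as np

# One short-time Fourier coefficient in the spirit of [1.14]:
# DFT bin l of the length-L rectangular window of x starting at sample tau.
def stft_bin(x, tau, l, L=64):
    seg = x[tau : tau + L]
    n = np.arange(L)
    return np.sum(seg * np.exp(-2j * np.pi * l * n / L))

# A sine occupying only the first half of the record.
x = np.concatenate([np.cos(2 * np.pi * 8 * np.arange(64) / 64), np.zeros(64)])
print(abs(stft_bin(x, 0, 8)), abs(stft_bin(x, 64, 8)))  # large, then ~0
```

The first window (which contains the sine) gives a large coefficient at its bin; the second window gives essentially zero, i.e. the representation localizes the component in time.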

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, a) = ψ_M((t − τ)/a), which gives:

R(τ, a) = ∫ x(t) ψ_M((t − τ)/a) dt   [1.15]

where ψ_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.
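As a hedged illustration of [1.15], one point R(τ, a) can be computed by a Riemann sum; the Haar wavelet is an assumed choice of parent wavelet ψ_M:

```python
import numpy as np

# Haar parent wavelet (assumed choice): +1 on [0, 0.5), -1 on [0.5, 1), 0 elsewhere.
def psi_M(t):
    return np.where((t >= 0) & (t < 0.5), 1.0, 0.0) - np.where((t >= 0.5) & (t < 1), 1.0, 0.0)

def R(x_fn, tau, a, dt=1e-3):
    """Riemann-sum evaluation of R(tau, a) = integral x(t) psi_M((t - tau)/a) dt."""
    t = np.arange(-5.0, 5.0, dt)
    return np.sum(x_fn(t) * psi_M((t - tau) / a)) * dt

# If the signal is the wavelet itself, the match is strong at (tau, a) = (0, 1)
# and exactly zero at tau = 2 (disjoint supports).
print(R(psi_M, 0.0, 1.0), R(psi_M, 2.0, 1.0))
```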

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = ∫ x(t + τ/2) x*(t − τ/2) e^(−2jπfτ) dτ   [1.16]

which is invertible if at least one point of x(t) is known.
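A crude numerical sketch of [1.16] (an assumed discretization over even lags, so that t ± τ/2 stays on the sampling grid; practical implementations rather use the analytic signal):

```python
import numpy as np

# One Wigner-Ville point: lag sum x(n + m) x*(n - m) e^{-4j*pi*f*m}, tau = 2*m.
def wv(x, n, f):
    N = len(x)
    acc = 0.0 + 0.0j
    for m in range(-N // 2, N // 2):
        if 0 <= n + m < N and 0 <= n - m < N:
            acc += x[n + m] * np.conj(x[n - m]) * np.exp(-4j * np.pi * f * m)
    return acc.real

k = np.arange(128)
x = np.exp(2j * np.pi * 0.25 * k)   # complex sine at normalized frequency 0.25
print(wv(x, 64, 0.25), wv(x, 64, 0.1))  # energy concentrates at f = 0.25
```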

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ_M(·) can be found in [OPP 75] and [NIK 93], from among the abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = |FT(x(t))|² = |x̃(f)|² = ESD(x(t))   [1.17]


for continuous time signals and

S_x(l) = |DFT(x(k))|² = |x̃(l)|² = ESD(x(k))   [1.18]

for discrete time signals.

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

E = ∫_ℝ |x(t)|² dt = ∫_ℝ |x̃(f)|² df   [1.19]

and |x̃(f)|² represents the energy distribution in f, just as |x(t)|² represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = Σ_{k=0}^{N−1} |x(k)|² = (1/N) Σ_{l=0}^{N−1} |x̃(l)|²

but the interpretation is the same.
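The discrete Parseval relation above is easy to check numerically; a Python/NumPy sketch on an arbitrary test vector:

```python
import numpy as np

# Numerical check of sum |x(k)|^2 = (1/N) * sum |x~(l)|^2 on a random vector.
rng = np.random.default_rng(0)
x = rng.standard_normal(256)
X = np.fft.fft(x)
E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)
print(np.isclose(E_time, E_freq))  # -> True
```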

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

IFT(S_x(f)) = ∫_ℝ x(t) x*(t − τ) dt = γ_xx(τ)   [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

S_x(f) = ESD(x(t)) = FT(γ_xx(τ))   [1.21]
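In discrete time, the relation [1.20]–[1.21] between the ESD and the autocorrelation becomes circular, and can be checked directly; a Python/NumPy sketch on an arbitrary test vector:

```python
import numpy as np

# Discrete analogue of [1.20]-[1.21]: the ESD |x~(l)|^2 equals the DFT of the
# circular autocorrelation gamma_xx(m) = sum_k x(k) x*((k - m) mod N).
rng = np.random.default_rng(1)
x = rng.standard_normal(128)
esd = np.abs(np.fft.fft(x)) ** 2

gamma = np.array([np.sum(x * np.conj(np.roll(x, m))) for m in range(len(x))])
print(np.allclose(np.fft.fft(gamma), esd))  # -> True
```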

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the


If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 16: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

PART 1

Tools and Spectral Analysis

Chapter 1

Fundamentals

1.1. Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes, and presents a degraded or even deceptive performance if applied outside this group of classes. Spectral analysis does not escape this problem: the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied.

We see that the choice of classifying properties is fundamental, because the definition of the classes itself will affect the design of the processing tools.

Traditionally, the first classifying property is the deterministic or non-deterministic nature of the signal.

1.1.1. Deterministic signals

The definitions of determinism are varied, but the simplest is the one that consists of calling deterministic any signal that is reproducible in the mathematical sense of the term, i.e. any new experiment for the generation of a continuous time signal x(t) (or discrete time signal x(k)) produces a mathematically identical signal. Another, subtler, definition resulting from the theory of random signals is based on the exactly predictive nature of x(t), $t \ge t_0$, from the moment that it is known for $t < t_0$ (singular term of the Wold decomposition; e.g. see Chapter 4 and [LAC 00]). Here we discuss only the definition based on the reproducibility of x(t), as it induces a specific strategy on the processing tools: as all the information of the signal is contained in the function itself, any bijective transformation of x(t) will also contain all this information. Representations may thus be imagined which, without loss of information, will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself.

(Chapter written by Francis Castanié.)

The deterministic signals are usually separated into classes representing integral properties of x(t), strongly linked to some quantities known by physicists.

Finite energy signals verify the integral properties in equations [1.1] and [1.2], with continuous or discrete time:

\[ E = \int_{\mathbb{R}} |x(t)|^2 \,\mathrm{d}t < \infty \quad [1.1] \]

\[ E = \sum_{k=-\infty}^{+\infty} |x(k)|^2 < \infty \quad [1.2] \]

We recognize the membership of x(t) to standard function spaces (noted $L^2$ or $l^2$), as well as the fact that this integral, to within some dimensional constant (an impedance in general), represents the energy E of the signal.

Signals of finite average power verify

\[ P = \lim_{T \to \infty} \frac{1}{T} \int_{-T/2}^{+T/2} |x(t)|^2 \,\mathrm{d}t < \infty \quad [1.3] \]

\[ P = \lim_{N \to \infty} \frac{1}{2N+1} \sum_{k=-N}^{+N} |x(k)|^2 < \infty \quad [1.4] \]

If we accept the idea that the sums of equation [1.1] or [1.2] represent "energies", those of equation [1.3] or [1.4] then represent powers.

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different: the finite energy signals will be in practice "pulse-shaped" or "transient" signals, such that |x(t)| → 0 for |t| → ∞. This asymptotic behavior is not at all necessary to ensure the convergence of the sums, and yet all practical finite energy signals verify it. For example, the signal below is of finite energy:

\[ x(t) = A\, e^{-at} \cos(2\pi f t + \theta), \quad t \ge 0; \qquad x(t) = 0, \quad t < 0 \]

(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)
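As a quick numerical illustration (a Python/NumPy sketch, not part of the original text), the finite-energy property of this damped oscillation can be checked by discretizing the integral [1.1]; for θ = 0, writing cos² as (1 + cos)/2 gives the closed form E = A²/(4a) + A²a/(4(a² + ω²)) with ω = 2πf, which the discrete sum should approach:

```python
import numpy as np

# Damped oscillation x(t) = A exp(-a t) cos(2*pi*f*t), t >= 0 (theta = 0).
A, a, f = 1.0, 1.0, 5.0
dt = 1e-4
t = np.arange(0.0, 10.0, dt)            # 10 time constants: the tail is negligible
x = A * np.exp(-a * t) * np.cos(2 * np.pi * f * t)

# Riemann-sum approximation of E = integral of |x(t)|^2 dt  (equation [1.1])
E_num = np.sum(x**2) * dt

# Closed form for theta = 0: E = A^2/(4a) + A^2*a / (4*(a^2 + w^2)), w = 2*pi*f
w = 2 * np.pi * f
E_exact = A**2 / (4 * a) + A**2 * a / (4 * (a**2 + w**2))

print(E_num, E_exact)
```

The sum converges because of the exponential decay; a finite-power signal such as a pure sine would make `E_num` grow without bound as the upper time limit increases.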

Figure 1.1. Electromagnetic interference signal and its decomposition

On the more complex example of Figure 1.1, we see a finite energy signal of the form

\[ x(t) = \sum_{i=1}^{4} x_i(t - t_i) \]

where the four components $x_i(t)$ start at staggered times $t_i$. Its shape, even though complex, is supposed to reflect a perfectly reproducible physical experiment.

Finite power signals will be in practice permanent signals, i.e. not canceling at infinity. For example:

\[ x(t) = A(t) \sin\big( \psi(t) \big), \quad t \in \mathbb{R}, \quad A(t) < \infty \quad [1.5] \]

An example is given in Figure 1.2. It is clear that there is obviously a start and an end, but its mathematical model cannot take this unknown data into account, and it is relevant to represent it by an equation of the type [1.5]. This type of signal, modulated in amplitude and angle, is fundamental in telecommunications.

Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even if partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation $x_j(t)$ (jth outcome or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes $x(t, \omega)$. The information of the signal is no longer contained in one of the realizations $x_j(t)$, but in the probability laws that govern $x(t, \omega)$. Understanding this fact is essential: it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes, which could well demonstrate the properties sought; here we must "go back" as far as possible to the probability laws of $x(t, \omega)$ from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How, from the data $\{ x_j(t),\ j = 1, \ldots, K,\ t \in [t_0, t_1] \}$, we can go back to the laws of $x(t, \omega)$ is the subject of estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets

\[ P\big[ x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n \big] = L\big( D_1, D_2, \ldots, D_n; t_1, t_2, \ldots, t_n \big) \quad [1.6] \]

whose most well-known examples are the probability densities, in which $D_i = [x_i, x_i + \mathrm{d}x_i[$:

\[ P = p(x_1, \ldots, x_n; t_1, \ldots, t_n)\, \mathrm{d}x_1 \ldots \mathrm{d}x_n \quad [1.7] \]

and the distribution functions, with $D_i = \left] -\infty, x_i \right]$:

\[ P = F(x_1, \ldots, x_n; t_1, \ldots, t_n) \quad [1.8] \]

The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by:

\[ E\left[ x^{k_1}(t_1) \ldots x^{k_n}(t_n) \right] = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \ldots x_n^{k_n}\; p(x_1, \ldots, x_n; t_1, \ldots, t_n) \,\mathrm{d}x_1 \ldots \mathrm{d}x_n = \int_{\mathbb{R}} \cdots \int_{\mathbb{R}} x_1^{k_1} \ldots x_n^{k_n} \,\mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i \quad [1.9] \]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple $\big( x(t_1), \ldots, x(t_n) \big)$, defined by:

\[ \phi_x(u_1, \ldots, u_n) = \phi_x(\mathbf{u}) = E\left[ \exp\left( j\, \mathbf{u}^T \mathbf{x} \right) \right], \quad \mathbf{u} = (u_1, \ldots, u_n)^T, \quad \mathbf{x} = \big( x(t_1), \ldots, x(t_n) \big)^T \quad [1.10] \]

We see that the laws and the moments have a dependence on the considered points $t_1, \ldots, t_n$ of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, or in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if

\[ p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0) \]

and of stationarity for the moment of order M if

\[ E\left[ x^{k_1}(t_1) \ldots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + t_0) \ldots x^{k_n}(t_n + t_0) \right] \]

The interval of values of $t_0$ for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for $t_0 \in [T_0, T_1]$;

– asymptotic stationarity, if this is verified only when $t_0 \to \infty$, etc.

We will refer to [LAC 00] for a detailed study of these properties.


Another type of time dependence is defined when the considered characteristics recur periodically on the time axis: we then speak of cyclostationarity. For example, if

\[ E\left[ x^{k_1}(t_1) \ldots x^{k_n}(t_n) \right] = E\left[ x^{k_1}(t_1 + T) \ldots x^{k_n}(t_n + T) \right], \quad \forall\, t_1, \ldots, t_n \]

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).
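A toy simulation can make this tangible. The model below is an illustrative assumption, not taken from the text: $x(k) = s(k)\cos(2\pi k/T)$ with $s(k)$ white and of unit variance, whose second-order moment $E[x^2(k)] = \cos^2(2\pi k/T)$ is not constant but recurs with period T, so the signal is cyclostationary (and not stationary) for this moment:

```python
import numpy as np

# Toy cyclostationary model (illustrative assumption):
# x(k) = s(k) * cos(2*pi*k/T), s(k) white with unit variance,
# so E[x(k)^2] = cos(2*pi*k/T)^2 recurs with period T.
rng = np.random.default_rng(0)
T, n_samples, n_outcomes = 16, 64, 50_000

k = np.arange(n_samples)
s = rng.standard_normal((n_outcomes, n_samples))   # one row per outcome
x = s * np.cos(2 * np.pi * k / T)

m2 = np.mean(x**2, axis=0)     # ensemble estimate of E[x(k)^2]

# The moment varies with k (no stationarity for this moment) ...
assert m2.max() - m2.min() > 0.5
# ... but it recurs with period T (cyclostationarity of order 2).
assert np.allclose(m2[:n_samples - T], m2[T:], atol=0.05)
```

The ensemble average over many outcomes stands in for the expectation operator E[·]; the tolerance absorbs the residual estimation noise.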

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. by forming the information present in the signal in a manner that has no initial motive other than to highlight certain characteristics of the signal. These representations may be complete if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as:

\[ R(u_1, \ldots, u_m) = H\left[ \int \cdots \int G\big[ x(t_1), \ldots, x(t_n) \big] \cdot \psi(t_1, \ldots, t_n; u_1, \ldots, u_m) \,\mathrm{d}t_1 \ldots \mathrm{d}t_n \right] \quad [1.11] \]

where the operators H[·] and G[·] are not necessarily linear. The kernel $\psi(t, f) = e^{-2j\pi f t}$ has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations such that m ≠ n. We will expect it to be invertible (i.e. there exists an exact inverse transformation), and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations, and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with $\psi(t, f) = e^{-2j\pi f t}$. We may once again recall its definition here:

\[ \hat{x}(f) = \mathrm{FT}\big( x(t) \big) = \int_{\mathbb{R}} x(t)\, e^{-2j\pi f t} \,\mathrm{d}t, \quad \text{with } f \in \mathbb{R} \quad [1.12] \]

and its discrete version, for a signal vector $\mathbf{x} = \big( x(0), \ldots, x(N-1) \big)^T$ of length N:

\[ \hat{x}(l) = \mathrm{DFT}\big( x(k) \big) = \sum_{k=0}^{N-1} x(k)\, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \quad [1.13] \]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of likelihood between x(t) and $e^{-2j\pi f t}$, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look to find components with binary or ternary values. We will then use functions such as the Hadamard–Walsh or Haar functions (see [AHM 75]).
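The inner-product reading of equation [1.13] can be transcribed directly. The sketch below (Python/NumPy, added here for illustration) computes each DFT coefficient as the inner product of the signal with a complex exponential, and cross-checks against the fast library routine:

```python
import numpy as np

def dft(x):
    """Direct implementation of equation [1.13]: each output bin l is the
    inner product of the signal with the exponential exp(-2j*pi*k*l/N)."""
    N = len(x)
    k = np.arange(N)
    l = k.reshape(-1, 1)          # one row per output frequency bin
    return np.sum(x * np.exp(-2j * np.pi * k * l / N), axis=1)

# Cross-check against the fast (FFT) implementation on a random signal.
rng = np.random.default_rng(1)
x = rng.standard_normal(32)
print(np.allclose(dft(x), np.fft.fft(x)))   # True
```

The O(N²) double loop is hidden in the broadcasting; the FFT computes the same quantity in O(N log N).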

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and $\psi(t; \tau, f) = w(t - \tau)\, e^{-2j\pi f t}$, leading to an expression of the type:

\[ R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-2j\pi f t} \,\mathrm{d}t \quad [1.14] \]
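A discretized sketch of this representation (Python/NumPy; the Hann window, hop size and test frequency are arbitrary illustrative choices, not from the text): for each window position τ, the windowed segment is correlated with the complex exponentials, i.e. a DFT of the windowed segment is taken. A pure tone then concentrates its energy in the matching frequency bin:

```python
import numpy as np

def linear_tf(x, window, hop):
    """Discretized form of equation [1.14]: DFT of each windowed segment.
    Returns an array indexed by (window position, frequency bin)."""
    L = len(window)
    starts = range(0, len(x) - L + 1, hop)
    return np.array([np.fft.fft(x[s:s + L] * window) for s in starts])

fs, f0, L = 1000.0, 125.0, 64          # sampling rate, tone frequency, window length
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * f0 * t)
R = linear_tf(x, np.hanning(L), hop=32)

# The tone at f0 falls exactly on bin f0*L/fs = 8 of each window's spectrum.
peak_bin = int(np.argmax(np.abs(R[5, :L // 2])))
print(peak_bin)   # 8
```

Shortening the window improves time localization at the cost of frequency resolution, which is the basic trade-off of this family of representations.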


Timescale (or wavelet) representations

This is the special case where $H[x] = G[x] = x$ and $\psi(t; \tau, a) = \psi_M\!\left( \frac{t - \tau}{a} \right)$, which gives:

\[ R(\tau, a) = \int x(t)\, \psi_M\!\left( \frac{t - \tau}{a} \right) \mathrm{d}t \quad [1.15] \]

where $\psi_M(t)$ is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors τ and a, respectively.
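A minimal sketch of [1.15] at a single scale (Python/NumPy; the "Mexican hat" parent wavelet and all parameter values are illustrative assumptions, not from the text). By the Cauchy–Schwarz inequality, when the analyzed signal is itself the parent wavelet shifted to τ₀ at scale a₀, the response at scale a₀ must peak at τ = τ₀:

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' parent wavelet (illustrative choice of psi_M)."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def wavelet_response(x, t, taus, a):
    """Discretized equation [1.15] for one scale a:
    R(tau, a) = integral of x(t) * psi_M((t - tau)/a) dt."""
    dt = t[1] - t[0]
    return np.array([np.sum(x * mexican_hat((t - tau) / a)) * dt
                     for tau in taus])

t = np.arange(-10, 10, 0.01)
tau0, a0 = 2.0, 1.5
x = mexican_hat((t - tau0) / a0)       # signal = shifted, dilated parent wavelet

taus = np.arange(-5, 5, 0.05)
R = wavelet_response(x, t, taus, a0)
tau_hat = taus[np.argmax(R)]
print(tau_hat)   # close to tau0 = 2.0
```

Sweeping `a` as well as `tau` would produce the full timescale plane of the representation.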

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

\[ R(t, f) = \int x\!\left( t + \frac{\tau}{2} \right) x^*\!\left( t - \frac{\tau}{2} \right) e^{-2j\pi f \tau} \,\mathrm{d}\tau \quad [1.16] \]

which is invertible if at least one point of x(t) is known.

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency distribution of the energy (we will call it "spectral"), we are led to consider the following quantity:

\[ S_x(f) = \left| \mathrm{FT}\big( x(t) \big) \right|^2 = |\hat{x}(f)|^2 = \mathrm{ESD}\big( x(t) \big) \quad [1.17] \]


for continuous time signals and

\[ S_x(l) = \left| \mathrm{DFT}\big( x(k) \big) \right|^2 = |\hat{x}(l)|^2 = \mathrm{ESD}\big( x(k) \big) \quad [1.18] \]

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

\[ E = \int_{\mathbb{R}} |x(t)|^2 \,\mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2 \,\mathrm{d}f \quad [1.19] \]

and $|\hat{x}(f)|^2$ represents the energy distribution in f, just as $|x(t)|^2$ represents its distribution over t.

For discrete time signals Parsevalrsquos theorem is expressed a little differently

\[ E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2 \]

but the interpretation is the same
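This discrete form can be verified in a couple of lines (a Python/NumPy sketch added for illustration). Note that `np.fft.fft` uses the same unnormalized convention as equation [1.13], which is why the 1/N factor appears on the frequency side:

```python
import numpy as np

# Discrete Parseval check: sum |x(k)|^2 == (1/N) * sum |x_hat(l)|^2.
rng = np.random.default_rng(2)
N = 256
x = rng.standard_normal(N)

E_time = np.sum(np.abs(x)**2)
E_freq = np.sum(np.abs(np.fft.fft(x))**2) / N
print(E_time, E_freq)   # equal up to floating-point rounding
```

With a unitary DFT convention (scaling by 1/√N on both directions), the two sums would match without the extra factor.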

The quantities $S_x(f)$ or $S_x(l)$ defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\[ \mathrm{IFT}\big( S_x(f) \big) = \int_{\mathbb{R}} x(t)\, x^*(t - \tau) \,\mathrm{d}t = \gamma_{xx}(\tau) \quad [1.20] \]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could thus have been axiomatically defined as follows:

\[ \mathrm{ESD}\big( x(t) \big) = S_x(f) = \mathrm{FT}\big( \gamma_{xx}(\tau) \big) \quad [1.21] \]
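The discrete, finite-length counterpart of this relation can be checked numerically (a Python/NumPy sketch, with the caveat that the DFT imposes circular shifts): the DFT of the circular autocorrelation $\gamma(m) = \sum_k x(k)\, x^*\big( (k - m) \bmod N \big)$ equals $|\hat{x}(l)|^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 128
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Circular autocorrelation: np.roll(x, m)[k] = x[(k - m) mod N].
gamma = np.array([np.sum(x * np.conj(np.roll(x, m))) for m in range(N)])

# Its DFT is the energy spectral density |x_hat(l)|^2  (discrete [1.21]).
esd = np.abs(np.fft.fft(x))**2
print(np.allclose(np.fft.fft(gamma), esd))   # True
```

In practice one computes the autocorrelation the other way round, as `np.fft.ifft(esd)`, since the FFT route is O(N log N) instead of O(N²).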

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the

Page 17: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Chapter 1

Fundamentals

11 Classes of signals

Every signal-processing tool is designed to be adapted to one or more signal classes and presents a degraded or even deceptive performance if applied outside this group of classes Spectral analysis too does not escape this problem and the various tools and methods for spectral analysis will be more or less adapted depending on the class of signals to which they are applied

We see that the choice of classifying properties is fundamental because the definition of classes itself will affect the design of processing tools

Traditionally the first classifying property is the deterministic or non-deterministic nature of the signal

111 Deterministic signals

The definitions of determinism are varied but the simplest is the one that consists of calling any signal that is reproducible in the mathematical sense of the term as a deterministic signal ie any new experiment for the generation of a continuous time signal x(t) (or discrete time x(k)) produces a mathematically identical signal Another subtler definition resulting from the theory of random signals is based on the exactly predictive nature of x(t) t ge t0 from the moment that it is known for t lt t0 (singular term of the Wold decomposition eg see Chapter 4 and [LAC 00]) Here we discuss only the definition based on the reproducibility of x(t) Chapter written by Francis CASTANIEacute

4 Digital Spectral Analysis

as it induces a specific strategy on the processing tools as all information of the signal is contained in the function itself any bijective transformation of x(t) will also contain all this information Representations may thus be imagined which without loss of information will demonstrate the characteristics of the signal better than the direct representation of the function x(t) itself

The deterministic signals are usually separated into classes representing integral properties of x(t) strongly linked to some quantities known by physicists

Finite energy signals verify the integral properties in equations [11] and [12] with continuous or discrete time

( )2d

RE x t t= lt infinint [11]

( ) 2+infin

=minusinfin

= lt infinsumk

E x k [12]

We recognize the membership of x(t) to standard function spaces (noted as L2 or l2) as well as the fact that this integral to within some dimensional constant (an impedance in general) represents the energy E of the signal

Signals of finite average power verify

( )2 2

2

1lim dT

TTP x t t

T+

minusrarrinfin= lt infinint [13]

( ) 21lim2 1

N

N k NP x k

N

+

rarrinfin=minus

= lt infin+ sum [14]

If we accept the idea that the sums of equation [11] or [12] represent ldquoenergiesrdquo those of equation [13] or [14] then represent powers

It is clear that these integral properties correspond to mathematical characteristics whose morphological behavior along the time axis is very different the finite energy signals will be in practice ldquopulse-shapedrdquo or ldquotransientrdquo signals such that |x(t)| rarr 0 for |t| rarr infin This asymptotic behavior is not at all necessary to ensure the convergence of the sums and yet all practical finite energy signals verify it For example the signal below is of finite energy

( ) ( )e cos 2 00 0

atx t A ft tt

π θminus= + ge

= lt

Fundamentals 5

(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)

1

1

2

3

4

50 2000 0 2000

0

ndash1

Figure 11 Electromagnetic interference signal and its decomposition

On the more complex example of Figure 11 we see a finite energy signal of the form

( ) ( )4

1i i

ix t x t t

=

= minussum

where the four components xi(t) start at staggered times (ti) Its shape even though complex is supposed to reflect a perfectly reproducible physical experiment

Finite power signals will be in practice permanent signals ie not canceling at infinity For example

( ) ( ) ( )( )( )

sin x t A t t t

A t

ψ= isinreal

lt infin [15]

An example is given in Figure 12 It is clear that there is obviously a start and an end but its mathematical model cannot take this unknown data into account and it is relevant to represent it by an equation of the type [15] This type of signal modulated in amplitude and angle is fundamental in telecommunications

6 Digital Spectral Analysis

1

05

0

ndash05

ndash1

2400 2500 2600 2700 2800 2900

Figure 12 Frequency-modulated sine curve

112 Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic in the sense that they do not take into account any inaccuracy or irreproducibility even if partial To model the uncertainty on the signals several approaches are possible today but the one that was historically adopted at the origin of the signal theory is a probabilistic approach

In this approach the signal is regarded as resulting from a drawing in a probability space The observation xj (t) (jth outcome or ldquorealizationrdquo) is one of the results of the random event but is no longer the signal the latter is now constituted by all the possible outcomes x(t ω) The information of the signal is no longer contained in one of the realizations xj (t) but in the probability laws that govern x(t ω) Understanding this fact is essential it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought here we must ldquogo backrdquo as far as possible to the probability laws of x(t ω) from the observation of one (or a finite number) of the outcomes generally over a finite time interval How from the data ( ) [ ] 0 11 jx t j K t t t= isinhellip we can go back to the laws of

x(t ω) is the subject of the estimation theory (see Chapter 3)

Figure 13 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car) Every realization contains the same relevant information same vehicle speed same pavement surface same trajectory etc However there is no reproducibility of outcomes and there are fairly large morphological differences between them At the very most we cannot avoid noticing that they have a ldquofamily likenessrdquo The probabilistic description is nothing

Fundamentals 7

but one of the approaches that helps account for this ldquofamily likenessrdquo ie by the membership of the signal to a well-defined probability space

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

2

1

0

ndash1

2

1

0

ndash1

2

1

0

ndash1

ndash2

ndash2

x 1(t

)x 2

(t)

x 3(t

)

Figure 13 Example of three outcomes of an actual random signal

The laws are probability sets

( ) ( ) ( ) ( )1 1 2 2 1 2 1 2 n n n nP x t D x t D x t D L D D D t t t⎡ ⎤isin isin isin =⎣ ⎦hellip hellip hellip [16]

whose most well-known examples are the probability densities in which [ [ d i i i iD x x x= +

[ ] ( )1 1 1 d dn n nP p x x t t x x=hellip hellip hellip hellip [17]

and the distribution functions with ] ]i iD x= minus infin

[ ] ( )1 1 n nP F x x t t=hellip hellip hellip [18]

8 Digital Spectral Analysis

The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

( ) ( )( ) ( )

( )

1 1

1

1 1 1 11

1 11

p d d

d

with

n nn

nn

kk k kn n n n n

k kn n n

ii

E x t x t x x x x t t x x

x x F x x t t

M k

real

real

=

=

=

int int

int intsum

hellip hellip hellip hellip

hellip hellip hellip [19]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple ( ) ( ) 1 1 n nx t x thellip

defined by

( ) ( ) ( )( )1

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments have a dependence on the considered points titn of the time axis The separation of random signals into classes very often refers to the nature of this dependence if a signal has one of its characteristics invariant by translation or in other words independent of the time origin we will call the signal stationary for this particular characteristic We will thus speak of stationarity in law if

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H[·] and G[·] are not necessarily linear. The kernel $\psi(\cdot)$ — for example $\psi(t, f) = e^{-j2\pi ft}$ — plays a fundamental role in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions, and in general we will use representations with $m \ge n$. We will expect it to be invertible (i.e. there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself. It must also be understood that highlighting one characteristic of the signal may considerably mask other characteristics: the choice of H[·], G[·] and ψ(·) governs what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.


The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with $\psi(t, f) = e^{-j2\pi ft}$. We may once again recall its definition here:

$$\hat{x}(f) = \mathrm{FT}\left(x(t)\right) = \int_{\mathbb{R}} x(t)\,e^{-j2\pi ft}\,\mathrm{d}t,\quad \text{with } f\in\mathbb{R} \quad [1.12]$$

and its discrete version, for a signal vector $\mathbf{x} = \left(x(0),\dots,x(N-1)\right)^T$ of length N:

$$\hat{x}(l) = \mathrm{DFT}\left(x(k)\right) = \sum_{k=0}^{N-1} x(k)\,e^{-j2\pi\frac{kl}{N}},\quad \text{with } l\in[0,\,N-1] \quad [1.13]$$

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of resemblance between x(t) and $e^{j2\pi ft}$, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).
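As a minimal numerical sketch of equation [1.13] (not from the book; the signal, its length N and the test bin are arbitrary choices, and NumPy is assumed), the DFT of a complex sine curve lying exactly on bin $l_0$ concentrates all its energy there — the "resemblance" reading of the inner product:

```python
import numpy as np

N = 64
l0 = 5                                   # frequency bin of the test sine curve (arbitrary choice)
k = np.arange(N)
x = np.exp(2j * np.pi * l0 * k / N)      # complex sine curve: one of the analysis functions

# Direct implementation of equation [1.13]: xhat(l) = sum_k x(k) e^{-j 2 pi k l / N}
xhat = np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N)) for l in range(N)])

# All the energy falls in bin l0, and the direct sum matches the library FFT.
assert np.argmax(np.abs(xhat)) == l0
assert np.allclose(xhat, np.fft.fft(x))
```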

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and $\psi(t;\,\tau, f) = w(t-\tau)\,e^{-j2\pi ft}$, leading to an expression of the type:

$$R(\tau, f) = \int x(t)\,w(t-\tau)\,e^{-j2\pi ft}\,\mathrm{d}t \quad [1.14]$$
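A hedged sketch of one point of this sliding-window representation (the Gaussian window, the two-burst test signal and every parameter value are illustrative choices, not prescribed by the text): the representation localizes a component jointly in time and frequency.

```python
import numpy as np

def stft_point(x, fs, tau, f, width):
    """One point R(tau, f) of equation [1.14], with a Gaussian window w
    centered at tau (all names and parameter values are illustrative)."""
    t = np.arange(len(x)) / fs
    w = np.exp(-0.5 * ((t - tau) / width) ** 2)
    return np.sum(x * w * np.exp(-2j * np.pi * f * t)) / fs

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
# Two sine bursts at different times: 50 Hz in the first half, 200 Hz in the second.
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 200 * t))

# Around tau = 0.25 s the window sees only the 50 Hz component.
early = abs(stft_point(x, fs, 0.25, 50, 0.05))
late = abs(stft_point(x, fs, 0.25, 200, 0.05))
assert early > 10 * late
```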


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and $\psi(t;\,\tau, a) = \psi_M\!\left(\frac{t-\tau}{a}\right)$, which gives:

$$R(\tau, a) = \int x(t)\,\psi_M\!\left(\frac{t-\tau}{a}\right)\mathrm{d}t \quad [1.15]$$

where ψM(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and dilated by the factors τ and a respectively.
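A numerical sketch of equation [1.15], using the "Mexican hat" (second derivative of a Gaussian) as parent wavelet — this particular wavelet, the plain unnormalized integral, and all numerical values are illustrative choices:

```python
import numpy as np

def mexican_hat(t):
    """'Mexican hat' parent wavelet (second derivative of a Gaussian)."""
    return (1 - t**2) * np.exp(-t**2 / 2)

def cwt_point(x, ts, tau, a):
    """One point R(tau, a) of equation [1.15]; normalization conventions
    vary across texts, this sketch uses the plain integral of the definition."""
    dt = ts[1] - ts[0]
    return np.sum(x * mexican_hat((ts - tau) / a)) * dt

ts = np.linspace(-10, 10, 4001)
x = mexican_hat(ts / 2.0)        # the parent wavelet itself, dilated by a = 2

# The response is strongest at the translation/dilation that matches the signal.
match = abs(cwt_point(x, ts, 0.0, 2.0))
off_scale = abs(cwt_point(x, ts, 0.0, 0.3))
off_time = abs(cwt_point(x, ts, 6.0, 2.0))
assert match > off_scale and match > off_time
```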

Quadratic time–frequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

$$R(t, f) = \int x\!\left(t+\frac{\tau}{2}\right)x^{*}\!\left(t-\frac{\tau}{2}\right)e^{-j2\pi f\tau}\,\mathrm{d}\tau \quad [1.16]$$

which is invertible if at least one point of x(t) is known.
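A minimal discrete sketch of equation [1.16] (the sampling grid and the truncation to the half-lags available around sample n are implementation choices, not part of the definition): for a pure complex sine curve, the distribution concentrates at its frequency.

```python
import numpy as np

def wigner_point(x, fs, n, f):
    """One point R(t, f) of equation [1.16] for a sampled signal, using the
    discrete lags available around sample n (illustrative sketch)."""
    N = len(x)
    m = min(n, N - 1 - n)                 # largest usable half-lag
    lags = np.arange(-m, m + 1)
    prod = x[n + lags] * np.conj(x[n - lags])
    tau = 2 * lags / fs                   # tau/2 corresponds to lags/fs
    return np.sum(prod * np.exp(-2j * np.pi * f * tau)) / fs

fs = 1000.0
t = np.arange(1024) / fs
f0 = 100.0
x = np.exp(2j * np.pi * f0 * t)          # complex sine curve at 100 Hz

# The Wigner-Ville distribution of a pure sine curve peaks at f = f0.
n = 512
on = abs(wigner_point(x, fs, n, f0))
off = abs(wigner_point(x, fs, n, 160.0))
assert on > 10 * off
```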

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in all the information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the distribution of the energy along the frequency axis (we will call it "spectral"), we are led to consider the following quantity:

$$S_x(f) = \left|\mathrm{FT}\left(x(t)\right)\right|^2 = \left|\hat{x}(f)\right|^2 = \mathrm{ESD}\left(x(t)\right) \quad [1.17]$$


for continuous time signals and

$$S_x(l) = \left|\mathrm{DFT}\left(x(k)\right)\right|^2 = \left|\hat{x}(l)\right|^2 = \mathrm{ESD}\left(x(k)\right) \quad [1.18]$$

for discrete time signals

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states:

$$E = \int_{\mathbb{R}} |x(t)|^2\,\mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2\,\mathrm{d}f \quad [1.19]$$

and $|\hat{x}(f)|^2$ represents the energy distribution over f, just as $|x(t)|^2$ represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

$$E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N}\sum_{l=0}^{N-1} |\hat{x}(l)|^2$$

but the interpretation is the same
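The discrete Parseval identity is easy to check numerically; a minimal sketch (the random test signal and its length are arbitrary choices, NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
x = rng.standard_normal(N)               # arbitrary real test signal

xhat = np.fft.fft(x)                     # DFT as in equation [1.13]

# Discrete Parseval: sum |x(k)|^2 = (1/N) * sum |xhat(l)|^2
time_energy = np.sum(np.abs(x) ** 2)
freq_energy = np.sum(np.abs(xhat) ** 2) / N
assert np.isclose(time_energy, freq_energy)
```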

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

$$\mathrm{IFT}\left(S_x(f)\right) = \int_{\mathbb{R}} x(t)\,x^{*}(t-\tau)\,\mathrm{d}t = \gamma_{xx}(\tau) \quad [1.20]$$

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could thus have been defined axiomatically as follows:

$$\mathrm{ESD}\left(x(t)\right) = S_x(f) = \mathrm{FT}\left(\gamma_{xx}(\tau)\right) \quad [1.21]$$
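The link between the ESD and the autocorrelation has an exact discrete counterpart, sketched below (not from the book; note that the DFT version yields the circular autocorrelation, and the test signal is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 128
x = rng.standard_normal(N)

# ESD as in equation [1.18]: squared modulus of the DFT.
esd = np.abs(np.fft.fft(x)) ** 2

# Its inverse DFT is the (circular) autocorrelation of x, the discrete
# counterpart of equations [1.20]-[1.21].
gamma = np.fft.ifft(esd).real

# Check against a direct computation of the circular autocorrelation.
gamma_direct = np.array([np.sum(x * np.roll(x, -m)) for m in range(N)])
assert np.allclose(gamma, gamma_direct)
```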

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1 H [x] = G [x] = x and ψ (t u) is a set of orthogonal functions The undethronable Fourier transform is the most well known with ( ) _ 2 j ftt f e πψ = We may once again recall its definition here

( ) ( )( ) ( ) 2FT e d with j ftx f x t x t t fπminusreal

= = isinrealint [112]

and its discrete version for a signal vector ( ) ( )0 1 Tx x N= minusx hellip of length N

( ) ( )( ) ( ) [ ]1 2

0DFT with 0 1

klN

N j

kx l x k x k e l N

πminus minus

=

= = isin minussum [113]

If we bear in mind that the inner product appearing in equation [112] or [113] is a measure of likelihood between x(t) and _ 2 j ftπe we immediately see that the choice of this function set ( )t fψ is not innocent we are looking to highlight complex sine curves in x(t)

If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 19: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Fundamentals 5

(This type of damped exponential oscillatory waveform is a fundamental signal in the analysis of linear systems that are invariant by translation)

1

1

2

3

4

50 2000 0 2000

0

ndash1

Figure 11 Electromagnetic interference signal and its decomposition

On the more complex example of Figure 11 we see a finite energy signal of the form

( ) ( )4

1i i

ix t x t t

=

= minussum

where the four components xi(t) start at staggered times (ti) Its shape even though complex is supposed to reflect a perfectly reproducible physical experiment

Finite power signals will be in practice permanent signals ie not canceling at infinity For example

( ) ( ) ( )( )( )

sin x t A t t t

A t

ψ= isinreal

lt infin [15]

An example is given in Figure 12 It is clear that there is obviously a start and an end but its mathematical model cannot take this unknown data into account and it is relevant to represent it by an equation of the type [15] This type of signal modulated in amplitude and angle is fundamental in telecommunications

6 Digital Spectral Analysis

1

05

0

ndash05

ndash1

2400 2500 2600 2700 2800 2900

Figure 12 Frequency-modulated sine curve

112 Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic in the sense that they do not take into account any inaccuracy or irreproducibility even if partial To model the uncertainty on the signals several approaches are possible today but the one that was historically adopted at the origin of the signal theory is a probabilistic approach

In this approach the signal is regarded as resulting from a drawing in a probability space The observation xj (t) (jth outcome or ldquorealizationrdquo) is one of the results of the random event but is no longer the signal the latter is now constituted by all the possible outcomes x(t ω) The information of the signal is no longer contained in one of the realizations xj (t) but in the probability laws that govern x(t ω) Understanding this fact is essential it is not relevant in this approach to look for representations of the signal in the form of transformations on one of the outcomes that could well demonstrate the properties sought here we must ldquogo backrdquo as far as possible to the probability laws of x(t ω) from the observation of one (or a finite number) of the outcomes generally over a finite time interval How from the data ( ) [ ] 0 11 jx t j K t t t= isinhellip we can go back to the laws of

x(t ω) is the subject of the estimation theory (see Chapter 3)

Figure 13 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car) Every realization contains the same relevant information same vehicle speed same pavement surface same trajectory etc However there is no reproducibility of outcomes and there are fairly large morphological differences between them At the very most we cannot avoid noticing that they have a ldquofamily likenessrdquo The probabilistic description is nothing

Fundamentals 7

but one of the approaches that helps account for this ldquofamily likenessrdquo ie by the membership of the signal to a well-defined probability space

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

2

1

0

ndash1

2

1

0

ndash1

2

1

0

ndash1

ndash2

ndash2

x 1(t

)x 2

(t)

x 3(t

)

Figure 13 Example of three outcomes of an actual random signal

The laws are probability sets

( ) ( ) ( ) ( )1 1 2 2 1 2 1 2 n n n nP x t D x t D x t D L D D D t t t⎡ ⎤isin isin isin =⎣ ⎦hellip hellip hellip [16]

whose most well-known examples are the probability densities in which [ [ d i i i iD x x x= +

[ ] ( )1 1 1 d dn n nP p x x t t x x=hellip hellip hellip hellip [17]

and the distribution functions with ] ]i iD x= minus infin

[ ] ( )1 1 n nP F x x t t=hellip hellip hellip [18]

8 Digital Spectral Analysis

The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

( ) ( )( ) ( )

( )

1 1

1

1 1 1 11

1 11

p d d

d

with

n nn

nn

kk k kn n n n n

k kn n n

ii

E x t x t x x x x t t x x

x x F x x t t

M k

real

real

=

=

=

int int

int intsum

hellip hellip hellip hellip

hellip hellip hellip [19]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple ( ) ( ) 1 1 n nx t x thellip

defined by

( ) ( ) ( )( )1

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments have a dependence on the considered points titn of the time axis The separation of random signals into classes very often refers to the nature of this dependence if a signal has one of its characteristics invariant by translation or in other words independent of the time origin we will call the signal stationary for this particular characteristic We will thus speak of stationarity in law if

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1 H [x] = G [x] = x and ψ (t u) is a set of orthogonal functions The undethronable Fourier transform is the most well known with ( ) _ 2 j ftt f e πψ = We may once again recall its definition here

( ) ( )( ) ( ) 2FT e d with j ftx f x t x t t fπminusreal

= = isinrealint [112]

and its discrete version for a signal vector ( ) ( )0 1 Tx x N= minusx hellip of length N

( ) ( )( ) ( ) [ ]1 2

0DFT with 0 1

klN

N j

kx l x k x k e l N

πminus minus

=

= = isin minussum [113]

If we bear in mind that the inner product appearing in equation [112] or [113] is a measure of likelihood between x(t) and _ 2 j ftπe we immediately see that the choice of this function set ( )t fψ is not innocent we are looking to highlight complex sine curves in x(t)

If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])


Figure 1.2. Frequency-modulated sine curve

1.1.2. Random signals

The deterministic models of signals described in the previous section are absolutely unrealistic, in the sense that they do not take into account any inaccuracy or irreproducibility, even partial. To model the uncertainty on the signals, several approaches are possible today, but the one that was historically adopted at the origin of signal theory is a probabilistic approach.

In this approach, the signal is regarded as resulting from a drawing in a probability space. The observation x_j(t) (the j-th outcome, or "realization") is one of the results of the random event, but is no longer the signal: the latter is now constituted by all the possible outcomes x(t, \omega). The information of the signal is no longer contained in one of the realizations x_j(t), but in the probability laws that govern x(t, \omega). Understanding this fact is essential: in this approach, it is not relevant to look for representations of the signal in the form of transformations on one of the outcomes, however well these might demonstrate the properties sought; here, we must "go back", as far as possible, to the probability laws of x(t, \omega) from the observation of one (or a finite number) of the outcomes, generally over a finite time interval. How we can go back from the data x_j(t), j = 1, \ldots, K, t \in [t_0, t_1], to the laws of x(t, \omega) is the subject of the estimation theory (see Chapter 3).

Figure 1.3 shows some outcomes of a same physical signal (Doppler echo signal from a 38 GHz traffic radar carried by a motor car). Every realization contains the same relevant information: same vehicle speed, same pavement surface, same trajectory, etc. However, there is no reproducibility of outcomes, and there are fairly large morphological differences between them. At the very most, we cannot avoid noticing that they have a "family likeness". The probabilistic description is nothing but one of the approaches that helps account for this "family likeness", i.e. by the membership of the signal to a well-defined probability space.

Figure 1.3. Example of three outcomes of an actual random signal

The laws are probability sets

P[x(t_1) \in D_1, x(t_2) \in D_2, \ldots, x(t_n) \in D_n] = L(D_1, \ldots, D_n; t_1, \ldots, t_n)   [1.6]

whose most well-known examples are the probability densities, in which D_i = [x_i, x_i + dx_i[:

P[\cdot] = p(x_1, \ldots, x_n; t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n   [1.7]

and the distribution functions, with D_i = ]-\infty, x_i]:

P[\cdot] = F(x_1, \ldots, x_n; t_1, \ldots, t_n)   [1.8]

The set of these three laws fully characterizes the random signal, but a partial characterization can be obtained via the moments of order M of the signal, defined (when these exist) by

E\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\} = \int_{\mathbb{R}^n} x_1^{k_1} \cdots x_n^{k_n}\, p(x_1, \ldots, x_n; t_1, \ldots, t_n)\, \mathrm{d}x_1 \cdots \mathrm{d}x_n = \int_{\mathbb{R}^n} x_1^{k_1} \cdots x_n^{k_n}\, \mathrm{d}F(x_1, \ldots, x_n; t_1, \ldots, t_n), \quad \text{with } M = \sum_i k_i   [1.9]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple (x(t_1), \ldots, x(t_n)),

defined by

\phi(u_1, \ldots, u_n) = \phi(\mathbf{u}) = E\{\exp(j\, \mathbf{u}^T \mathbf{x})\}, \quad \text{with } \mathbf{u} = (u_1, \ldots, u_n)^T, \ \mathbf{x} = (x(t_1), \ldots, x(t_n))^T   [1.10]

We see that the laws and the moments depend on the considered points t_1, \ldots, t_n of the time axis. The separation of random signals into classes very often refers to the nature of this dependence: if a signal has one of its characteristics invariant by translation, in other words independent of the time origin, we will call the signal stationary for this particular characteristic. We will thus speak of stationarity in law if

p(x_1, \ldots, x_n; t_1, \ldots, t_n) = p(x_1, \ldots, x_n; t_1 + t_0, \ldots, t_n + t_0)

and of stationarity for the moment of order M if

E\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\} = E\{x^{k_1}(t_1 + t_0) \cdots x^{k_n}(t_n + t_0)\}

The set of values of t_0 for which this type of equality is verified defines various stationarity classes:

– local stationarity, if this is true only for t_0 \in [T_0, T_1];

– asymptotic stationarity, if this is verified only when t_0 \to \infty; etc.

We will refer to [LAC 00] for a detailed study of these properties.
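The role of ensemble (rather than time) averaging in these definitions can be illustrated numerically: estimating the order-2 moment across many outcomes at two instants distinguishes a stationary process from a non-stationary one. A minimal sketch in Python (NumPy assumed; the synthetic processes, sample sizes and thresholds are illustrative choices, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(42)
K, N = 5000, 200
# K outcomes x_j(t) of two synthetic processes: white noise (stationary)
# and the same noise with a linearly growing variance (non-stationary)
stationary = rng.standard_normal((K, N))
nonstationary = stationary * np.sqrt(np.linspace(0.5, 2.0, N))

def m2(x, t):
    """Order-2 moment E{x(t)^2} estimated across the ensemble at instant t."""
    return np.mean(x[:, t] ** 2)

# Invariant under a shift of the time origin for the stationary process only
assert abs(m2(stationary, 10) - m2(stationary, 190)) < 0.1
assert m2(nonstationary, 190) - m2(nonstationary, 10) > 1.0
```

Only ensemble averages appear here; replacing them by time averages on a single outcome is legitimate only under the additional hypothesis of ergodicity.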

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis; we then speak of cyclostationarity. For example, if

E\{x^{k_1}(t_1) \cdots x^{k_n}(t_n)\} = E\{x^{k_1}(t_1 + T) \cdots x^{k_n}(t_n + T)\}, \quad \forall (t_1, \ldots, t_n)

the signal will be called cyclostationary for the moment of order M (see [GAR 89]).

1.2. Representations of signals

The various classes of signals defined above can be represented in many ways, i.e. the information present in the signal can be formed in many ways, with no initial motive other than to highlight certain characteristics of the signal. These representations may be complete if they carry all the information of the signal, and partial otherwise.

1.2.1. Representations of deterministic signals

1.2.1.1. Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself, it is clear that any bijective transformation applied to x(t) will preserve this information.

From a general viewpoint, such a transformation may be written with the help of integral transformations on the signal, such as

R(u_1, \ldots, u_m) = H\left[ \int \cdots \int G[x(t_1), \ldots, x(t_n)] \cdot \psi(t_1, \ldots, t_n; u_1, \ldots, u_m)\, \mathrm{d}t_1 \cdots \mathrm{d}t_n \right]   [1.11]

where the operators H[\cdot] and G[\cdot] are not necessarily linear. The kernel \psi(t, f) = e^{-2j\pi f t} has a fundamental role to play in the characteristics that we wish to display. This transformation will not necessarily retain the dimensions (in general m \neq n). We will expect it to be invertible (i.e. there exists an exact inverse transformation) and to highlight certain properties of the signal better than the function x(t) itself. It must also be understood that the demonstration of one characteristic of the signal may considerably mask other characteristics: the choice of H[\cdot], G[\cdot] and \psi(\cdot) governs the choice of what will be highlighted in x(t). In the case of discrete time signals, discrete expressions of the same general form exist.

The very general formula of equation [1.11] fortunately reduces to very simple elements in most cases.

Here are some of the most widespread examples: orthogonal transformations, linear time–frequency representations, timescale representations and quadratic time–frequency representations.

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x and \psi(t, u) is a set of orthogonal functions. The undethronable Fourier transform is the most well known, with \psi(t, f) = e^{-2j\pi f t}. We may once again recall its definition here:

\hat{x}(f) = \mathrm{FT}(x(t)) = \int_{\mathbb{R}} x(t)\, e^{-2j\pi f t}\, \mathrm{d}t, \quad \text{with } f \in \mathbb{R}   [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0), \ldots, x(N-1))^T of length N:

\hat{x}(l) = \mathrm{DFT}(x(k)) = \sum_{k=0}^{N-1} x(k)\, e^{-2j\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1]   [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of the resemblance between x(t) and e^{2j\pi f t}, we immediately see that the choice of this function set \psi(t, f) is not innocent: we are looking to highlight complex sine curves in x(t).

If we are looking to demonstrate the presence of other components in the signal, we will select another set \psi(t, u). The most well-known examples are those where we look for components with binary or ternary values; we will then use function sets such as Hadamard–Walsh or Haar (see [AHM 75]).
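The projection onto complex sine curves in [1.13] can be made explicit by evaluating the sum directly; a sketch in Python (NumPy assumed; the `dft` helper is illustrative, and the FFT computes the same quantity much faster):

```python
import numpy as np

def dft(x):
    """Direct evaluation of [1.13]: x_hat(l) = sum_k x(k) exp(-2j*pi*k*l/N)."""
    N = len(x)
    k = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(k, k) / N)   # N x N matrix of basis functions
    return W @ x

N = 16
x = np.exp(2j * np.pi * 3 * np.arange(N) / N)      # complex sine curve at bin 3
X = dft(x)
assert np.argmax(np.abs(X)) == 3                   # matched by exactly one basis function
assert np.allclose(X, np.fft.fft(x))               # agrees with the FFT
```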

Linear time–frequency representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, f) = w(t - \tau)\, e^{-2j\pi f t}, leading to an expression of the type

R(\tau, f) = \int x(t)\, w(t - \tau)\, e^{-2j\pi f t}\, \mathrm{d}t   [1.14]
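Discretized, [1.14] amounts to sliding the window w along the signal and taking the DFT of each windowed slice; a sketch in Python (NumPy assumed; the window, hop size and two-tone test signal are illustrative choices):

```python
import numpy as np

def stft(x, w, hop):
    """Sampled version of [1.14]: R(tau, f) for tau on a grid of hops."""
    L = len(w)
    frames = [x[i:i + L] * w for i in range(0, len(x) - L + 1, hop)]
    return np.array([np.fft.fft(frame) for frame in frames])

fs = 1000                                    # sampling frequency, Hz
t = np.arange(2000) / fs
# 100 Hz sine curve during the first second, 250 Hz during the second
x = np.where(t < 1, np.sin(2 * np.pi * 100 * t), np.sin(2 * np.pi * 250 * t))
R = stft(x, np.hanning(200), hop=100)        # bin spacing: fs / 200 = 5 Hz
assert np.argmax(np.abs(R[0, :100])) == 20   # first frame peaks at 20 * 5 = 100 Hz
assert np.argmax(np.abs(R[-1, :100])) == 50  # last frame peaks at 50 * 5 = 250 Hz
```

Unlike the global Fourier transform, each row of R localizes the spectral content around one position of the window.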


Timescale (or wavelet) representations

This is the special case where H[x] = x, G[x] = x and \psi(t; \tau, a) = \psi_M\left(\frac{t - \tau}{a}\right), which gives

R(\tau, a) = \int x(t)\, \psi_M\left(\frac{t - \tau}{a}\right) \mathrm{d}t   [1.15]

where \psi_M(t) is the parent wavelet. The aim of this representation is to look for the parent wavelet in x(t), translated and expanded by the factors \tau and a, respectively.
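A direct discretization of [1.15] illustrates this search: below, the signal is itself a translated (\tau = 2) and dilated (a = 1.5) copy of a Morlet-like parent wavelet (an illustrative choice of \psi_M, not one imposed by the text), and the representation peaks at those factors. A sketch in Python, NumPy assumed:

```python
import numpy as np

def parent_wavelet(t):
    """Morlet-like parent wavelet: an oscillation under a Gaussian envelope."""
    return np.cos(5 * t) * np.exp(-t ** 2 / 2)

def timescale(x, t, taus, scales):
    """Discretized [1.15]: R(tau, a) = integral of x(t) psi_M((t - tau) / a) dt."""
    dt = t[1] - t[0]
    return np.array([[np.sum(x * parent_wavelet((t - tau) / a)) * dt
                      for tau in taus] for a in scales])

t = np.linspace(-10, 10, 2001)
x = parent_wavelet((t - 2.0) / 1.5)        # hidden shift tau = 2, scale a = 1.5
taus = np.linspace(0, 4, 41)
scales = [0.5, 1.0, 1.5, 2.0]
R = timescale(x, t, taus, scales)
a_idx, tau_idx = np.unravel_index(np.argmax(R), R.shape)
assert scales[a_idx] == 1.5                # scale recovered
assert abs(taus[tau_idx] - 2.0) < 0.11     # shift recovered
```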

Quadratic time–frequency representations

In this case, G[\cdot] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\left(t + \frac{\tau}{2}\right) x^*\left(t - \frac{\tau}{2}\right) e^{-2j\pi f \tau}\, \mathrm{d}\tau   [1.16]

which is invertible if at least one point of x(t) is known.
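A naive discrete sketch of [1.16] (Python, NumPy assumed): the product x(t + \tau/2) x^*(t - \tau/2) is built around every instant with integer half-lags, then Fourier-transformed over the lag. Note that this integer sampling of \tau/2 makes a sine curve at bin l_0 show up at bin 2 l_0 on the standard DFT grid, a well-known artifact of this discretization:

```python
import numpy as np

def wigner_ville(x):
    """Naive discrete sketch of [1.16]: DFT over the half-lag kernel
    x(n + m) x*(n - m), built separately around every instant n."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)          # largest symmetric lag staying in range
        lags = np.arange(-m_max, m_max + 1)
        kernel = np.zeros(N, dtype=complex)
        kernel[lags % N] = x[n + lags] * np.conj(x[n - lags])
        W[n] = np.fft.fft(kernel).real     # Hermitian kernel -> real spectrum
    return W

N = 64
x = np.exp(2j * np.pi * 10 * np.arange(N) / N)   # complex sine curve at bin 10
W = wigner_ville(x)
# Energy concentrates at bin 20 = 2 * 10 because tau/2 was sampled by integer lags
assert np.argmax(W[N // 2]) == 20
```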

The detailed study of these three types of representations is the central theme of another book in the same series (see [HLA 05]).

Representations with various more complex combinations of H[x], G[x] and \psi_M(\cdot) can be found in [OPP 75] and [NIK 93], from among an abundant literature.

1.2.1.2. Partial representations

In many applications, we are not interested in the entire information present in the signal, but only in a part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the frequency distribution of the energy (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = |\mathrm{FT}(x(t))|^2 = |\hat{x}(f)|^2 = \mathrm{ESD}(x(t))   [1.17]


for continuous time signals and

S_x(l) = |\mathrm{DFT}(x(k))|^2 = |\hat{x}(l)|^2 = \mathrm{ESD}(x(k))   [1.18]

for discrete time signals.

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost. We will call them energy spectral densities (ESD). But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here, the frequency). In fact, Parseval's theorem states

E = \int_{\mathbb{R}} |x(t)|^2\, \mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2\, \mathrm{d}f   [1.19]

and |\hat{x}(f)|^2 represents the energy distribution in f, just as |x(t)|^2 represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2

but the interpretation is the same.
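The discrete Parseval relation, including its 1/N factor, is easy to verify numerically; NumPy's unnormalized FFT follows the same convention as [1.13]:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256) + 1j * rng.standard_normal(256)
X = np.fft.fft(x)
E_time = np.sum(np.abs(x) ** 2)
E_freq = np.sum(np.abs(X) ** 2) / len(x)   # the 1/N of the discrete theorem
assert np.isclose(E_time, E_freq)
```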

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform which can easily be shown to be expressed by

\mathrm{IFT}(S_x(f))(\tau) = \int_{\mathbb{R}} x(t)\, x^*(t - \tau)\, \mathrm{d}t = \gamma_{xx}(\tau)   [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been axiomatically defined as follows:

\mathrm{ESD}(x(t)) = S_x(f) = \mathrm{FT}(\gamma_{xx}(\tau))   [1.21]
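The discrete counterpart of [1.20] and [1.21] can be checked numerically: the inverse DFT of the ESD is the circular autocorrelation of the sequence. A sketch in Python, NumPy assumed (circularity is the price of the finite DFT setting):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(128)
S = np.abs(np.fft.fft(x)) ** 2            # ESD, as in [1.18]
gamma = np.fft.ifft(S).real               # inverse transform of the ESD, cf. [1.20]
# Circular autocorrelation computed directly: gamma(tau) = sum_t x(t) x(t - tau)
direct = np.sum(x * np.roll(x, 5))
assert np.isclose(gamma[5], direct)
assert np.isclose(gamma[0], np.sum(x ** 2))   # zero lag gives the energy
```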

When we are interested in finite power signals, their Fourier transform exists in terms of distributions. This generally prohibits the definition of the square of the

Page 21: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Fundamentals 7

but one of the approaches that helps account for this ldquofamily likenessrdquo ie by the membership of the signal to a well-defined probability space

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

0 100 200 300 400 500 600 700 800 900 1000

2

1

0

ndash1

2

1

0

ndash1

2

1

0

ndash1

ndash2

ndash2

x 1(t

)x 2

(t)

x 3(t

)

Figure 13 Example of three outcomes of an actual random signal

The laws are probability sets

( ) ( ) ( ) ( )1 1 2 2 1 2 1 2 n n n nP x t D x t D x t D L D D D t t t⎡ ⎤isin isin isin =⎣ ⎦hellip hellip hellip [16]

whose most well-known examples are the probability densities in which [ [ d i i i iD x x x= +

[ ] ( )1 1 1 d dn n nP p x x t t x x=hellip hellip hellip hellip [17]

and the distribution functions with ] ]i iD x= minus infin

[ ] ( )1 1 n nP F x x t t=hellip hellip hellip [18]

8 Digital Spectral Analysis

The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

( ) ( )( ) ( )

( )

1 1

1

1 1 1 11

1 11

p d d

d

with

n nn

nn

kk k kn n n n n

k kn n n

ii

E x t x t x x x x t t x x

x x F x x t t

M k

real

real

=

=

=

int int

int intsum

hellip hellip hellip hellip

hellip hellip hellip [19]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple ( ) ( ) 1 1 n nx t x thellip

defined by

( ) ( ) ( )( )1

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments have a dependence on the considered points titn of the time axis The separation of random signals into classes very often refers to the nature of this dependence if a signal has one of its characteristics invariant by translation or in other words independent of the time origin we will call the signal stationary for this particular characteristic We will thus speak of stationarity in law if

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1 H [x] = G [x] = x and ψ (t u) is a set of orthogonal functions The undethronable Fourier transform is the most well known with ( ) _ 2 j ftt f e πψ = We may once again recall its definition here

( ) ( )( ) ( ) 2FT e d with j ftx f x t x t t fπminusreal

= = isinrealint [112]

and its discrete version for a signal vector ( ) ( )0 1 Tx x N= minusx hellip of length N

( ) ( )( ) ( ) [ ]1 2

0DFT with 0 1

klN

N j

kx l x k x k e l N

πminus minus

=

= = isin minussum [113]

If we bear in mind that the inner product appearing in equation [112] or [113] is a measure of likelihood between x(t) and _ 2 j ftπe we immediately see that the choice of this function set ( )t fψ is not innocent we are looking to highlight complex sine curves in x(t)

If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 22: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

8 Digital Spectral Analysis

The set of these three laws fully characterizes the random signal but a partial characterization can be obtained via the moments of order M of the signal defined (when these exist) by

( ) ( )( ) ( )

( )

1 1

1

1 1 1 11

1 11

p d d

d

with

n nn

nn

kk k kn n n n n

k kn n n

ii

E x t x t x x x x t t x x

x x F x x t t

M k

real

real

=

=

=

int int

int intsum

hellip hellip hellip hellip

hellip hellip hellip [19]

It can be shown that these moments are linked in a simple way to the Taylor series expansion of the characteristic function of the n-tuple ( ) ( ) 1 1 n nx t x thellip

defined by

( ) ( ) ( )( )1

1 1

exp

Tn

T Tn n

u u E j

u u x x

φ φ=

= =

hellip

hellip hellip

u u x

u x [110]

We see that the laws and the moments have a dependence on the considered points titn of the time axis The separation of random signals into classes very often refers to the nature of this dependence if a signal has one of its characteristics invariant by translation or in other words independent of the time origin we will call the signal stationary for this particular characteristic We will thus speak of stationarity in law if

( ) ( )1 1 1 1 0 0 n n n np x x t t p x x t t t t= + +hellip hellip hellip hellip

and of stationarity for the moment of order M if

( ) ( )( ) ( ) ( )( )111 1 0 0

n nk k kkn nE x t x t E x t t x t t= + +hellip hellip

The interval t0 for which this type of equality is verified holds various stationarity classes

ndash local stationarity if this is true only for [ ]0 0 1 isint T T

ndash asymptotic stationarity if this is verified only when 0 t rarr infin etc

We will refer to [LAC 00] for a detailed study of these properties

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1 H [x] = G [x] = x and ψ (t u) is a set of orthogonal functions The undethronable Fourier transform is the most well known with ( ) _ 2 j ftt f e πψ = We may once again recall its definition here

( ) ( )( ) ( ) 2FT e d with j ftx f x t x t t fπminusreal

= = isinrealint [112]

and its discrete version for a signal vector ( ) ( )0 1 Tx x N= minusx hellip of length N

( ) ( )( ) ( ) [ ]1 2

0DFT with 0 1

klN

N j

kx l x k x k e l N

πminus minus

=

= = isin minussum [113]

If we bear in mind that the inner product appearing in equation [112] or [113] is a measure of likelihood between x(t) and _ 2 j ftπe we immediately see that the choice of this function set ( )t fψ is not innocent we are looking to highlight complex sine curves in x(t)

If we are looking to demonstrate the presence of other components in the signal we will select another set ( ) t uψ The most well-known examples are those where we look to find components with binary or ternary values We will then use functions such as HadamardndashWalsh or Haar (see [AHM 75])

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ( ) ( ) 2 e j ftt f w t πψ τ τ minus= minus leading to an expression of the type

( ) ( ) ( ) 2π e dj ftR t x t w t tτ τ minus= minusint [114]

Fundamentals 11

Timescale (or wavelet) representations

This is the special case where [ ] [ ] ( ) ( )and tM aH x G x x t a τψ τ ψ minus= = =

which gives

( ) ( ) dMtR a x t t

aττ ψ minus⎛ ⎞= ⎜ ⎟

⎝ ⎠int [115]

where ψM (t) is the parent wavelet The aim of this representation is to look for the parent wavelet in x(t) translated and expanded by the factors τ and a respectively

Quadratic timendashfrequency representations

In this case G [] is a quadratic operator and H [] = x The most important example is the WignerndashVille transformation

( ) 2π e d2 2

j fR t f x t x t ττ τ τminus⎛ ⎞ ⎛ ⎞= + minus⎜ ⎟ ⎜ ⎟⎝ ⎠ ⎝ ⎠int [116]

which is inversible if at least one point of x(t) is known

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H [x] G [x] and ψM () can be found in [OPP 75] and [NIK 93] from among abundant literature

1212 Partial representations

In many applications we are not interested in the entire information present in the signal but only in a part of it The most common example is that of energy representations where only the quantities linked to the energy or power are considered

The energy or power spectral density

If we are interested in the frequency energy distribution (we will call it ldquospectralrdquo) we are led to consider the following quantity

( ) ( )( ) ( ) ( )( )2 2FT ESDxS f x t x f x t= = [117]

12 Digital Spectral Analysis

for continuous time signals and

( ) ( )( ) ( ) ( )( )2 2DFT ESDxS l x k x l x k= = [118]

for discrete time signals

These representations do not allow the signal to be reconstructed as the phase of the transform is lost We will call them energy spectral densities (ESD) But the main reason for interest in these partial representations is that they show the energy distribution along a non-time axis (here the frequency) In fact Parsevalrsquos theorem states

( ) ( )2 2d d

R RE x t t x f f= =int int [119]

and ( ) 2x f represents the energy distribution in f just as |x(t)|2 therefore

represents its distribution over t

For discrete time signals Parsevalrsquos theorem is expressed a little differently

( ) ( )1 1

2 2

0 0

1N N

k lE x k x l

N

minus minus

= =

= =sum sum

but the interpretation is the same

The quantities Sx( f ) or Sx(l) defined above have an inverse Fourier transform which can easily be shown as being expressed by

( )( ) ( ) ( ) ( )IFT dx xxRS f x t x t tτ γ τ= lowast minusint [120]

This function is known as the autocorrellation function of x(t) We see that the ESD of x(t) could have axiomatically been defined as follows

( ) ( )( ) ( )( )ESD FTx xxS f x t γ τ= [121]

When we are interested in finite power signals their Fourier transform exists in terms of distributions This generally prohibits the definition of the square of the

Page 23: Digital Spectral Analysis - download.e-bookshelf.de · Digital Spectral Analysis parametric, non-parametric and advanced methods Edited by Francis Castanié

Fundamentals 9

Another type of time dependence is defined when the considered characteristics recur periodically on the time axis we then speak of cyclostationarity For example if

( ) ( )( ) ( ) ( )( ) ( )1 11 1 1 n nk kk k

n n nE x t x t E x t T x t T t t= + + forallhellip hellip hellip

the signal will be called cyclostationary for the moment of order M (see [GAR 89])

12 Representations of signals

The various classes of signals defined above can be represented in many ways ie forming of the information present in the signal which does not have any initial motive other than to highlight certain characteristics of the signal These representations may be complete if they carry all the information of the signal and partial otherwise

121 Representations of deterministic signals

1211 Complete representations

As the postulate of the deterministic signal is that any information that affects it is contained in the function x(t) itself it is clear that any bijective transformation applied to x(t) will preserve this information

From a general viewpoint such a transformation may be written with the help of integral transformations on the signal such as

( ) ( ) ( ) ( )1 1 1 1 1 d dm n n m nR u u H G x t x t t t u u t tψ⎡ ⎤= sdot⎡ ⎤⎣ ⎦⎢ ⎥⎣ ⎦int inthellip hellip hellip hellip hellip hellip [111]

where the operators H [] and G [] are not necessarily linear The kernel ( ) _ 2 j ftt f e πψ = has a fundamental role to play in the characteristics that we

wish to display This transformation will not necessarily retain the dimensions and in general we will use representations such as gem n We will expect it to be inversible (ie there exists an exact inverse transformation) and that it helps highlight certain properties of the signal better than the function x(t) itself It must also be understood that the demonstration of a characteristic of the signal may considerably mask other characteristics the choice of H [] G [] and ψ () governs the choice of what will be highlighted in x(t) In the case of discrete time signals discrete expressions of the same general form exist

10 Digital Spectral Analysis

The very general formula of equation [111] fortunately reduces to very simple elements in most cases

Here are some of the most widespread examples orthogonal transformations linear timendashfrequency representations timescale representations and quadratic timendashfrequency representations

Orthogonal transformations

This is the special case where m = n = 1, H[x] = G[x] = x, and ψ(t, u) is a set of orthogonal functions. The irreplaceable Fourier transform is the best known, with ψ(t, f) = e^{-j2πft}. We may recall its definition here:

\mathrm{FT}(x(t)) = \hat{x}(f) = \int_{\mathbb{R}} x(t) \, e^{-j 2\pi f t} \, \mathrm{d}t, \quad \text{with } f \in \mathbb{R} \qquad [1.12]

and its discrete version, for a signal vector \mathbf{x} = (x(0), \ldots, x(N-1))^T of length N:

\mathrm{DFT}(x(k)) = \hat{x}(l) = \sum_{k=0}^{N-1} x(k) \, e^{-j 2\pi \frac{kl}{N}}, \quad \text{with } l \in [0, N-1] \qquad [1.13]

If we bear in mind that the inner product appearing in equation [1.12] or [1.13] is a measure of the resemblance between x(t) and e^{-j2πft}, we immediately see that the choice of this function set ψ(t, f) is not innocent: we are looking to highlight complex sinusoids in x(t).
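As a minimal numerical sketch (assuming NumPy; the helper name `dft` is ours, purely illustrative), the discrete transform above really is a bank of inner products with complex sinusoids, so a pure sinusoid is "highlighted" in a single bin:

```python
import numpy as np

def dft(x):
    """Naive DFT: x_hat(l) = sum_k x(k) exp(-j*2*pi*k*l/N), one inner
    product per output bin against a complex sinusoid of frequency l/N."""
    N = len(x)
    k = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * l / N))
                     for l in range(N)])

N = 16
# A pure complex sinusoid at bin 3: all its energy lands in x_hat[3].
x = np.exp(2j * np.pi * 3 * np.arange(N) / N)
x_hat = dft(x)
print(np.argmax(np.abs(x_hat)))  # 3
```

The same result is obtained with `np.fft.fft(x)`, which uses the identical sign convention but an O(N log N) algorithm.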

If we are looking to demonstrate the presence of other components in the signal, we will select another set ψ(t, u). The best-known examples are those where we look for components with binary or ternary values; we then use functions such as the Hadamard–Walsh or Haar functions (see [AHM 75]).

Linear timendashfrequency representations

This is the special case where H[x] = G[x] = x and ψ(t; τ, f) = w(t − τ) e^{-j2πft}, leading to an expression of the type:

R(\tau, f) = \int x(t) \, w(t - \tau) \, e^{-j 2\pi f t} \, \mathrm{d}t \qquad [1.14]
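A short sketch of this windowed inner product (NumPy assumed; the rectangular window and the helper name `stft_point` are illustrative choices, not prescribed by the text): a frequency component is only detected when the window covers the time interval where it lives.

```python
import numpy as np

def stft_point(x, t_axis, tau, f, width):
    """One sample of the windowed transform R(tau, f): correlate x with a
    complex sinusoid at frequency f through a rectangular window w(t - tau)."""
    w = (np.abs(t_axis - tau) <= width / 2).astype(float)
    dt = t_axis[1] - t_axis[0]
    return np.sum(x * w * np.exp(-2j * np.pi * f * t_axis)) * dt

t = np.arange(0.0, 2.0, 1e-3)
# 10 Hz during the first second, 25 Hz during the second one.
x = np.where(t < 1.0, np.cos(2 * np.pi * 10 * t), np.cos(2 * np.pi * 25 * t))

early = abs(stft_point(x, t, tau=0.5, f=10.0, width=0.5))
late = abs(stft_point(x, t, tau=1.5, f=10.0, width=0.5))
print(early > 5 * late)  # the 10 Hz component is localized in the first second
```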


Timescale (or wavelet) representations

This is the special case where H[x] = G[x] = x and \psi(t; \tau, a) = \psi_M\!\left(\frac{t - \tau}{a}\right), which gives:

R(\tau, a) = \int x(t) \, \psi_M\!\left(\frac{t - \tau}{a}\right) \mathrm{d}t \qquad [1.15]

where ψ_M(t) is the mother wavelet. The aim of this representation is to look for the mother wavelet in x(t), translated and dilated by the factors τ and a respectively.

Quadratic timendashfrequency representations

In this case, G[·] is a quadratic operator and H[x] = x. The most important example is the Wigner–Ville transformation:

R(t, f) = \int x\!\left(t + \frac{\tau}{2}\right) x^{*}\!\left(t - \frac{\tau}{2}\right) e^{-j 2\pi f \tau} \, \mathrm{d}\tau \qquad [1.16]

which is invertible if at least one point of x(t) is known.
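A discrete sketch of this quadratic representation (NumPy assumed; this naive O(N²) implementation is ours, not the book's). With the integer-lag convention used below, the product x[n + τ] x*[n − τ] spans a lag of 2τ, so a tone at bin k shows up at bin 2k:

```python
import numpy as np

def wigner_ville(x):
    """Naive discrete Wigner-Ville sketch: for each time n, FFT over the
    lag index of the quadratic kernel x[n + tau] * conj(x[n - tau])."""
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        tau_max = min(n, N - 1 - n)          # lags that stay inside the signal
        kernel = np.zeros(N, dtype=complex)
        for tau in range(-tau_max, tau_max + 1):
            kernel[tau % N] = x[n + tau] * np.conj(x[n - tau])
        W[n] = np.real(np.fft.fft(kernel))
    return W

N = 64
n = np.arange(N)
x = np.exp(2j * np.pi * 8 * n / N)   # analytic sinusoid at bin 8
W = wigner_ville(x)
print(np.argmax(W[N // 2]))          # energy concentrates at bin 2 * 8 = 16
```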

The detailed study of these three types of representations is the central theme of another book of the same series (see [HLA 05])

Representations with various more complex combinations of H[x], G[x] and ψ(·) can be found in [OPP 75] and [NIK 93], among an abundant literature.

1212 Partial representations

In many applications, we are not interested in all the information present in the signal, but only in part of it. The most common example is that of energy representations, where only the quantities linked to the energy or power are considered.

The energy or power spectral density

If we are interested in the distribution of the energy along the frequency axis (we will call it "spectral"), we are led to consider the following quantity:

S_x(f) = \left| \mathrm{FT}(x(t)) \right|^2 = \left| \hat{x}(f) \right|^2 = \mathrm{ESD}(x(t)) \qquad [1.17]


for continuous time signals and

S_x(l) = \left| \mathrm{DFT}(x(k)) \right|^2 = \left| \hat{x}(l) \right|^2 = \mathrm{ESD}(x(k)) \qquad [1.18]

for discrete time signals
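A quick numerical illustration (NumPy assumed): the squared DFT magnitude discards the phase of the transform, so, for example, a circular delay changes the signal but leaves its ESD unchanged:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(128)

x_hat = np.fft.fft(x)
esd = np.abs(x_hat) ** 2          # S_x(l) = |DFT(x)|^2

# A circular delay multiplies the DFT by a pure phase factor,
# which the squared magnitude cannot see.
x_delayed = np.roll(x, 10)
esd_delayed = np.abs(np.fft.fft(x_delayed)) ** 2
print(np.allclose(esd, esd_delayed))  # True: the phase information is lost
```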

These representations do not allow the signal to be reconstructed, as the phase of the transform is lost; we will call them energy spectral densities (ESD). The main reason for the interest in these partial representations is that they show the energy distribution along a non-temporal axis (here, the frequency). In fact, Parseval's theorem states:

E = \int_{\mathbb{R}} |x(t)|^2 \, \mathrm{d}t = \int_{\mathbb{R}} |\hat{x}(f)|^2 \, \mathrm{d}f \qquad [1.19]

and |\hat{x}(f)|^2 represents the energy distribution over f, just as |x(t)|^2 represents its distribution over t.

For discrete time signals, Parseval's theorem is expressed a little differently:

E = \sum_{k=0}^{N-1} |x(k)|^2 = \frac{1}{N} \sum_{l=0}^{N-1} |\hat{x}(l)|^2

but the interpretation is the same
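The discrete form of Parseval's theorem is easy to check numerically (NumPy assumed; note the 1/N factor on the frequency side, which matches the unnormalized DFT convention used above):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)

E_time = np.sum(np.abs(x) ** 2)                   # energy summed over time
E_freq = np.sum(np.abs(np.fft.fft(x)) ** 2) / N   # same energy, over frequency
print(np.isclose(E_time, E_freq))  # True
```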

The quantities S_x(f) or S_x(l) defined above have an inverse Fourier transform, which can easily be shown to be expressed by:

\mathrm{IFT}(S_x(f)) = \int_{\mathbb{R}} x(t) \, x^{*}(t - \tau) \, \mathrm{d}t = \gamma_{xx}(\tau) \qquad [1.20]

This function is known as the autocorrelation function of x(t). We see that the ESD of x(t) could have been defined axiomatically as follows:

\mathrm{ESD}(x(t)) = S_x(f) = \mathrm{FT}(\gamma_{xx}(\tau)) \qquad [1.21]
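This relation between the ESD and the autocorrelation can be checked numerically (NumPy assumed; with the DFT the autocorrelation involved is the circular one): the inverse transform of the ESD coincides with the directly computed autocorrelation.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = rng.standard_normal(N)

esd = np.abs(np.fft.fft(x)) ** 2
gamma_from_esd = np.real(np.fft.ifft(esd))   # IFT(S_x), left side of the relation

# Direct circular autocorrelation: gamma(tau) = sum_t x(t) x*(t - tau)
gamma_direct = np.array([np.sum(x * np.conj(np.roll(x, tau)))
                         for tau in range(N)])
print(np.allclose(gamma_from_esd, np.real(gamma_direct)))  # True
```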

When we are interested in finite power signals, their Fourier transform exists in the sense of distributions. This generally prohibits the definition of the square of the
