THE LEAST SQUARES SPECTRUM, ITS INVERSE TRANSFORM AND AUTOCORRELATION FUNCTION: THEORY AND SOME APPLICATIONS IN GEODESY

Michael Ruthven Craymer

A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy, Graduate Department of Civil Engineering, University of Toronto

© Copyright by Michael Ruthven Craymer, 1998





National Library of Canada / Bibliothèque nationale du Canada

Acquisitions and Bibliographic Services / Acquisitions et services bibliographiques

395 Wellington Street, Ottawa ON K1A 0N4, Canada

The author has granted a non-exclusive licence allowing the National Library of Canada to reproduce, loan, distribute or sell copies of this thesis in microform, paper or electronic formats.

The author retains ownership of the copyright in this thesis. Neither the thesis nor substantial extracts from it may be printed or otherwise reproduced without the author's permission.



THE LEAST SQUARES SPECTRUM, ITS INVERSE TRANSFORM AND

AUTOCORRELATION FUNCTION:

THEORY AND SOME APPLICATIONS IN GEODESY

Doctor of Philosophy, 1998

Michael Ruthven Craymer

Department of Civil Engineering, University of Toronto

To realize the full potential of increasingly more accurate measurements, scientists are now faced with the task of modelling ever smaller effects on their observations to improve their results. The problem, however, is that there is often little understanding of the cause and effect relation between these so-called systematic effects and the measurements. Spectra and autocorrelation functions can be used to help diagnose and improve the modelling of these systematic effects in measurements. However, standard techniques for computing spectra and autocorrelation functions require the data to be evenly spaced, which is often not satisfied in practice.

The approach taken here is to develop a general technique for determining autocorrelation functions for data which are unevenly spaced. This is an indirect method whereby the systematic effects, represented by the residuals from an incomplete a priori deterministic model, are transformed into a power spectrum and then into an autocorrelation function. To accommodate unevenly spaced data, a general least squares transform and its inverse are developed. The inverse transform is used to obtain the autocorrelation function from the least squares spectrum originally developed by Vaníček [1971]. This formulation can accommodate unequally spaced data, random observation errors, arbitrary frequency selection, arbitrarily weighted and correlated observations, as well as the presence of any a priori deterministic model. The conventional Fourier transform and spectrum are shown to be just special cases of this more general least squares formulation. It is also shown how the individual spectral components in the least squares spectrum and inverse transform can be estimated either independently of or simultaneously with each other.

The advantages and limitations of the least squares transforms and spectra are illustrated through tests with simulated data. The technique of using autocorrelation functions to model systematic effects is also illustrated with two real applications: one based on the precise measurement of the extension of a baseline spanning the San Andreas fault in California, and another based on the measurement of ellipsoidal heights using a GPS receiver under the influence of the effects of Selective Availability. These tests show that the use of fully populated weight matrices generally results in an increase in the value of the standard deviations of the estimated model parameters, thereby providing more realistic estimates of the uncertainties. On the other hand, the effect of correlations among the observations on the least squares estimates of model parameters was found not to be very significant.


ACKNOWLEDGMENTS

To Mary, Sarah, Lisa and Samuel.

This work is dedicated to my family, Mary, Sarah, Lisa and Samuel. It simply would not have been possible without their sacrifice and unfailing support and understanding for so many years. I owe them a huge debt of gratitude.

I am also deeply indebted to my supervisor, Professor Petr Vaníček, for all his guidance, advice, generosity and persevering support. His tireless and meticulous efforts in reviewing my manuscripts and patience in dealing with my stubbornness are gratefully appreciated. He is truly the quintessential supervisor. I could not have been more fortunate to have him as my mentor.

I also thank the members of my Examination Committee, especially my Internal Appraiser, Professor Ferko Csillag (Geography), and my External Examiner, Professor Douglas E. Smylie (Earth and Atmospheric Science, York University). Their constructive comments and recommendations, together with those from the other members of my Examination Committee, are greatly appreciated.

The GPS data used in my tests were kindly provided by William Prescott of the U.S. Geological Survey in Menlo Park. I especially thank John Langbein, also of the U.S. Geological Survey in Menlo Park, for supplying the EDM data and for generously taking the time to discuss some of the results of my analyses.

I express my sincere gratitude to my employer, the Geodetic Survey Division of Geomatics Canada, and in particular Norman Beck and Lloyd Nabe, for giving me time and support to complete this work.


Portions of this research were also funded by various Natural Sciences and Engineering Research Council of Canada Operating Grants held by Prof. Petr Vaníček during the years 1986 to 1990.

Finally, I thank the Department of Civil Engineering for giving me the opportunity to finish my dissertation after so many years. I especially thank Professor Robert Gunn for his help in this regard.


TABLE OF CONTENTS

Abstract ......... ii
Acknowledgments ......... iv
List of Tables ......... ix
List of Figures ......... x

Chapter 1. Introduction ......... 1

Chapter 2. Basic Concepts of Stochastic Processes ......... 6
2.1 Types of Processes ......... 6
2.2 Deterministic and Random Processes ......... 7
2.3 Stationarity and Ergodicity ......... 7
2.4 Statistical Moments ......... 10
2.5 Covariance and Correlation Functions ......... 11
2.6 Decomposition of the Observable ......... 15

Chapter 3. The Fourier Transform and Spectrum ......... 17
3.1 Fourier Series and Integrals ......... 17
3.2 Fourier Transform ......... 21
3.3 Fourier Spectrum ......... 25
3.4 Convolution and Correlation ......... 33
3.5 Fast Fourier Transform ......... 36
3.6 Other Transforms ......... 38

Chapter 4. The Least Squares Transform ......... 40
4.1 Introduction ......... 40
4.2 Matrix Form of Fourier Transform ......... 41
4.3 Least Squares Transform ......... 46
4.4 Weighted Least Squares Transform ......... 49
4.5 Effect of Deterministic Model ......... 53
4.6 Vector Space Interpretation ......... 57
4.7 Applications ......... 61

Chapter 5. The Least Squares Spectrum ......... 63
5.1 Introduction ......... 63
5.2 Matrix Form of Fourier Spectrum ......... 64
5.3 Least Squares Spectrum ......... 65
5.4 Weighted Least Squares Spectrum ......... 67
5.5 Effect of Deterministic Model ......... 71
5.6 Statistical Tests ......... 74
5.7 Estimation Algorithms ......... 79

Chapter 6. Stochastic Modelling of Observation Errors ......... 81
6.1 Introduction ......... 81
6.2 Direct Autocovariance Function Estimation ......... 82
6.3 Autocovariance Function Estimation via the Spectrum ......... 83
6.4 Iteratively Reweighted Least Squares Estimation ......... 84

Chapter 7. Numerical Tests ......... 86
7.1 Introduction ......... 86
7.2 Effect of Random Observation Errors ......... 87

Chapter 8. Some Applications in Geodesy ......... 125
8.1 Introduction ......... 125
8.2 EDM Deformation Measurements ......... 126
8.3 GPS Point Positioning ......... 142

Chapter 9. Conclusions and Recommendations ......... 156

References ......... 159


LIST OF TABLES

Table 8.1: Least squares estimates of linear trend and datum offsets ......... 129
Table 8.2: Least squares estimates of linear trend and datum offsets, including additional datum offset (#5a) ......... 134
Table 8.3: Least squares estimates of linear trend and datum offsets, including additional offset (#5a) and using estimated full observation covariance matrix based on computed ACF ......... 140
Table 8.4: Summary of estimated linear trends with and without extra offset and correlations ......... 141
Table 8.5: Unweighted and weighted hourly means and their standard deviations (Std) of GPS height measurements over a 24 hour period ......... 148
Table 8.6: Twenty of the largest peaks in least squares spectrum in Figure 8.16 ......... 152
Table 8.7: Weighted hourly means of GPS height measurements and their standard deviations (Std) over a 24 hour period using correlations from ACF based on 24 hours of data ......... 153


LIST OF FIGURES

Figure 2.1: A single random process consisting of an ensemble of 4 sample records (A, B, C, D) ......... 8
Figure 3.1: Autocorrelation functions (ACF) and power spectral density functions (SDF) for some special functions ......... 27
Figure 4.1: Commutative diagram for the direct and inverse least squares transform, where F denotes the direct transform and F⁻¹ the inverse transform ......... 60
Figure 6.1: Iteratively reweighted least squares estimation process ......... 85
Figure 7.1: Periodic time series of 100 equally spaced points and period 10 (frequency 0.1 Hz) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3) ......... 89
Figure 7.2: Least squares spectra of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3) ......... 90
Figure 7.3: Direct estimation of unbiased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3) ......... 91
Figure 7.4: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors ......... 92
Figure 7.5: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with random observation errors (standard deviation 1/3) ......... 93
Figure 7.6: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with random observation errors (standard deviation 2/3) ......... 94
Figure 7.7: Periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3) ......... 97
Figure 7.8: Unweighted and weighted LS spectra (both independent and simultaneous estimation) for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3) ......... 98
Figure 7.9: Direct and unweighted indirect (via unweighted inverse transform of unweighted LS spectrum) estimates of biased autocorrelation function for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3) ......... 99
Figure 7.10: Weighted indirect estimates of biased autocorrelation function via weighted inverse LS transform of both independent and simultaneously estimated LS spectra for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3) ......... 100
Figure 7.11: Direct and unweighted indirect (via unweighted inverse transform of unweighted LS spectrum) estimates of biased autocorrelation function for time series of 100 equally spaced points with correlated random observation errors only (standard deviation 2/3) ......... 101
Figure 7.12: Weighted indirect estimates of biased autocorrelation function via weighted inverse LS transform of both independent and simultaneously estimated LS spectra for time series of 100 equally spaced points with correlated random observation errors only (standard deviation 2/3) ......... 102
Figure 7.13: Periodic time series of different lengths of randomly spaced points (uniformly distributed) with period 10 (frequency 0.1) and no random observation errors ......... 105
Figure 7.14a: LS spectra (independently estimated frequency components) up to different maximum frequencies for periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 106
Figure 7.14b: LS spectra (independently estimated frequency components) for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 107
Figure 7.15: Indirect estimates (via unweighted inverse LS transform of unweighted LS spectrum) of biased autocorrelation functions for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 108
Figure 7.16: Direct estimates (via interval averaging) of biased autocorrelation functions for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 110
Figure 7.17: LS spectra for different sets of simultaneously estimated frequencies for periodic data series of 100 unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 113
Figure 7.18: Indirectly estimated LS autocorrelation functions via the LS spectrum using different sets of simultaneously estimated frequencies for periodic data series of 100 unequally spaced points with period 10 (frequency 0.1) and no random observation errors ......... 114
Figure 7.19: Periodic time series of randomly spaced points with frequencies 0.1 and 0.25 and no random observation errors (top), and independent estimation of the LS spectrum (bottom) ......... 115
Figure 7.20: Indirectly estimated ACF via the inverse LS transform of the independent LS spectrum using all frequencies (top) and of the simultaneous LS spectrum using only the two significant spectral peaks at 0.1 and 0.25 Hz (bottom) ......... 116
Figure 7.21: Quadratic trend time series with periodic component (frequency 0.01 Hz) and no random errors (top); LS spectrum of residuals from quadratic trend model (middle); LS spectrum accounting for effects of quadratic model (bottom) ......... 118
Figure 7.22: Evenly sampled 100 point random walk time series (standard deviation 1) (top) and its corresponding LS spectrum ......... 121
Figure 7.23: Direct (top) and indirect (bottom) autocorrelation functions for 100 point random walk data series ......... 122
Figure 7.24: Unevenly sampled 100 point random walk time series (top) and its corresponding LS spectrum ......... 123
Figure 7.25: Indirect estimate of autocorrelation via the independently estimated LS spectrum for the unevenly sampled 100 point random walk time series ......... 124
Figure 8.1: Location of the Pearblossom network in California used to measure crustal deformation with a two-colour EDM instrument and location of the Holcomb-Lepage baseline spanning the San Andreas fault running through this network [after Langbein and Johnson, 1997, Figure 1] ......... 128
Figure 8.2: Changes in length of Holcomb-Lepage baseline. Different observation groups are denoted by different symbol colour/type combinations ......... 128
Figure 8.3: Comparison of residual baseline length changes after removal of estimated distance offsets for each observation group and a common linear trend. Different observation groups are denoted by different symbol colour/type combinations ......... 130
Figure 8.4: Histograms of lengths of point tuples ("Nyquist periods") corresponding to possible Nyquist frequencies. Bottom plot gives a more detailed histogram at 1 day ......... 131
Figure 8.5: Weighted least squares spectra (independently estimated) of baseline length residuals from the deterministic model in Table 8.1 ......... 132
Figure 8.6: Changes in length of Holcomb to Lepage baseline with additional datum offset in observation group from 1984 to mid-1992 ......... 133
Figure 8.7: Comparison of residual baseline length changes after removal of estimated datum offsets, including additional offset, for each observation group and a common linear trend for all groups ......... 134
Figure 8.8: Weighted least squares spectra of baseline length residuals from the deterministic model with additional distance offset ......... 135
Figure 8.9: Semi-log (top) and log (bottom) plots of weighted least squares spectra of baseline length residuals from the deterministic model with additional datum offset ......... 137
Figure 8.10: Indirect ACF, and enlargement at short lags, estimated from zero-padded time series of Holcomb-Lepage length changes with additional datum offset ......... 139
Figure 8.11: Variations in derived horizontal (top) and vertical (bottom) GPS positions over 24 hours at station Chabot ......... 144
Figure 8.12: Variations in recorded horizontal (top) and vertical (bottom) GPS positions for the first hour at station Chabot ......... 145
Figure 8.13: Independently estimated least squares spectrum of GPS height measurements for the first hour (data zero-padded) ......... 147
Figure 8.14: Indirect estimate of the biased autocorrelation function via the inverse least squares transform of the least squares spectrum for the first hour ......... 147
Figure 8.15: Unweighted (top) and weighted (bottom) hourly means of GPS height measurements over a 24 hour period ......... 149
Figure 8.16: Least squares spectrum (independently estimated) for entire 24 hour data set ......... 151
Figure 8.17: Autocorrelation function for entire 24 hour data set ......... 152
Figure 8.18: Weighted (top) hourly means of GPS height measurements over a 24 hour period using correlations obtained from ACF based on 24 hours of data, and difference with equally weighted means without correlations (bottom) ......... 154


Chapter 1 Introduction

Recent advances in technology have produced extremely precise and accurate measuring systems that are affected by even the smallest effects that were once much too small to be noticed. In the past these effects were considered to be random noise to be averaged out. To realize the full potential of their measurements, scientists are now faced with the task of modelling these small effects in order to improve their predictions. The problem, however, is that there is often little understanding of the cause and effect relation between these so-called systematic effects and the measured observables.

There are basically two approaches to describing or modelling the measured observations. Deterministic models are used to explicitly describe the behaviour of the observations in terms of a mathematical model of the physical process. These deterministic models consist of constants and parameters to be estimated. Often, however, there is little understanding of the physical processes underlying the behaviour of the measurements. In the other approach, stochastic models treat the measurements, or what remains after removing a deterministic part, as unpredictable random (i.e., stochastic) quantities. Stochastic models describe the dependencies between the data and the incomplete deterministic model in terms of mathematical correlations. These correlations can be represented by filters, polynomials, correlation functions and spectral density functions. Because deterministic modelling is usually the preferred approach, correlations are often used to help diagnose and improve the deterministic model. In cases where this is not possible, the correlations, if carefully constructed, can be used to help describe the residual systematic effects within the deterministic model.
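As an illustration of this decomposition, the following sketch (hypothetical Python code; the linear trend model, the period-10 sinusoid standing in for a systematic effect, and all numerical values are illustrative choices, not taken from this work) fits an incomplete deterministic model by least squares and leaves the systematic effect in the residuals:

```python
import numpy as np

# Simulate an observable as: deterministic part + systematic effect + noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 100.0, 200)

trend = 0.05 * t                      # deterministic model: linear trend
signal = np.sin(2 * np.pi * 0.1 * t)  # unmodelled systematic effect
noise = rng.normal(0.0, 0.3, t.size)  # random observation errors
y = trend + signal + noise            # measured observable

# Fit only the (incomplete) deterministic model by least squares;
# the residuals then contain the systematic effect plus noise.
A = np.column_stack([np.ones_like(t), t])
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
residuals = y - A @ x_hat

print(residuals.std())
```

The residuals retain the unmodelled periodic signal, which is what correlation functions and spectral density functions are then used to describe.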


The least squares estimation technique is primarily used for fitting deterministic models to the measurements. However, it is also able to accommodate stochastic models through the use of a fully populated covariance matrix for the observations. There are different methods of determining the variances and covariances that form the observation covariance matrix. The most direct method involves determining an autocovariance function that describes the behaviour of various systematic effects. The problem with this, and the main motivation for this work, is that traditional techniques for computing autocorrelation functions require the data to be evenly spaced. This may not be the case, especially when looking for correlations with some (physically meaningful) parameters given as numerical functions. In practice such functions are often known only for values of the argument that are unevenly spaced.
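For reference, the way a fully populated observation covariance matrix C enters a least squares fit can be sketched as follows (hypothetical Python code; the exponential covariance model and all numbers are illustrative, not from this work). The estimate is x̂ = (AᵀC⁻¹A)⁻¹AᵀC⁻¹y, and the inverse normal matrix gives the parameter covariance:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0.0, 10.0, 50)
A = np.column_stack([np.ones_like(t), t])   # model: intercept + trend

# Exponentially decaying correlations between observations, as an
# autocovariance function might provide (illustrative choice).
sigma, corr_len = 0.5, 2.0
C = sigma**2 * np.exp(-np.abs(t[:, None] - t[None, :]) / corr_len)

# Observations with correlated errors drawn from C.
y = A @ np.array([1.0, 0.3]) + rng.multivariate_normal(np.zeros(t.size), C)

Cinv = np.linalg.inv(C)
N = A.T @ Cinv @ A                          # normal matrix
x_hat = np.linalg.solve(N, A.T @ Cinv @ y)  # weighted least squares estimate
cov_x = np.linalg.inv(N)                    # parameter covariance
print(x_hat, np.sqrt(np.diag(cov_x)))
```

With correlations present, the parameter standard deviations from cov_x are generally larger, and more realistic, than those from an uncorrelated (diagonal) weight matrix, which is the behaviour reported in the abstract.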

The usual way of handling unevenly spaced data is to interpolate or approximate the original series to get an evenly spaced one. Because this approach tends to model the lower frequency content in the data, the low frequency behaviour of the measurements must be known. Moreover, the high frequency components can be lost by the smoothing effect of the interpolation or approximation.
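The interpolation workaround can be illustrated with a small sketch (hypothetical Python code with illustrative values; linear interpolation via numpy's np.interp stands in for whatever interpolation scheme might be chosen):

```python
import numpy as np

# Resample unevenly spaced data onto an even grid by linear interpolation,
# the usual workaround described above. Values are illustrative.
rng = np.random.default_rng(1)
t_uneven = np.sort(rng.uniform(0.0, 100.0, 60))   # sparse, uneven sampling
y_uneven = np.sin(2 * np.pi * 0.1 * t_uneven)     # period-10 signal

t_even = np.arange(0.0, 100.0, 1.0)               # even 1-unit grid
y_even = np.interp(t_even, t_uneven, y_uneven)    # linear interpolation

# Compare with the true signal sampled on the same even grid; the chords
# drawn between sparse samples typically attenuate the peaks.
y_true = np.sin(2 * np.pi * 0.1 * t_even)
print(y_even.var(), y_true.var())
```

Any such resampling implicitly assumes the low frequency behaviour is captured by the sample points, and the interpolation typically smooths away the high frequency content, which is the drawback noted above.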

The approach taken here is to develop a more general technique for determining autocorrelation functions for data which are unevenly spaced with respect to quantities describing the systematic effects. As will be seen later, there are two basic approaches to estimating autocorrelation functions. The most direct is to compute the autocorrelation function directly from the data. In this case, however, there is no satisfactory method of handling unevenly spaced points. There are methods based on averaging over larger, evenly spaced intervals or bins, but using these results in a loss of resolution.
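The direct, bin-averaged estimate mentioned above can be sketched as follows (hypothetical Python code; the bin width, maximum lag and test series are illustrative choices). Products of data pairs are averaged over evenly spaced lag bins, which is why the resolution is limited to the bin width:

```python
import numpy as np

def binned_acf(t, y, bin_width, max_lag):
    """Direct ACF estimate for uneven data via averaging over lag bins."""
    y = y - y.mean()
    i, j = np.triu_indices(t.size)           # all pairs (including i == j)
    lags = np.abs(t[j] - t[i])
    prods = y[i] * y[j]
    edges = np.arange(0.0, max_lag + bin_width, bin_width)
    acf = np.full(edges.size - 1, np.nan)
    for k in range(edges.size - 1):
        in_bin = (lags >= edges[k]) & (lags < edges[k + 1])
        if in_bin.any():
            acf[k] = prods[in_bin].mean()    # average products in this lag bin
    return acf / acf[0]                      # normalize by the zero-lag bin

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 150))    # unevenly spaced sampling
y = np.sin(2 * np.pi * 0.1 * t)              # period-10 signal
acf = binned_acf(t, y, bin_width=1.0, max_lag=20.0)
print(acf[0])
```

Widening the bins gives each estimate more pairs to average over but coarsens the lag resolution, which is the trade-off described in the text.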

The alternative approach is to estimate the autocorrelation function indirectly by first representing the systematic effects in terms of a power spectrum and then transforming this into an autocorrelation function. This is the approach taken here. Again the problem is that most techniques for computing the power spectrum require evenly spaced data, as do those for transforming the power spectrum to the autocorrelation function. The aim here is to find a more general technique that does not require evenly spaced data. To this end, a general least squares transform is developed.
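A minimal sketch of such a least squares approach applied to unevenly spaced data is given below (hypothetical Python code; the function name, the trial-frequency grid and the test series are our own illustrative choices). At each trial frequency a cosine and sine pair is fitted by least squares, and the spectral value is taken as the fraction of the series' variance explained by that fit, in the spirit of the least squares spectrum developed later in this work:

```python
import numpy as np

def ls_spectrum(t, y, freqs):
    """Least squares spectrum sketch: variance fraction explained per frequency."""
    y = y - y.mean()                       # remove mean (simplest a priori model)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        c, *_ = np.linalg.lstsq(A, y, rcond=None)  # fit cos/sin pair
        fit = A @ c
        power[i] = (fit @ fit) / (y @ y)   # fraction of variance explained
    return power

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, 120))  # unevenly spaced sampling
y = np.sin(2 * np.pi * 0.1 * t) + rng.normal(0, 0.3, t.size)

freqs = np.linspace(0.01, 0.5, 200)
power = ls_spectrum(t, y, freqs)
print(freqs[np.argmax(power)])             # peak near the true frequency 0.1
```

Because each frequency is estimated by an explicit least squares fit, no even spacing is required, and the same machinery extends to weighted observations and a priori deterministic models, as developed in later chapters.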

Other methods are also available for determining the variances and covariances of the observations. The most popular of these are the methods of analysis of variance and variance-covariance component estimation. The "analysis of variance" (ANOVA) method (also called factor analysis in statistics) can be found in most standard texts on statistics. Geodetic applications of the technique are described in detail by Kelly [1991] and in a series of articles by Wassef [1959; 1974; 1976]. Essentially the aim of the method is to divide the measurements into separate groups (factors which contribute to the overall variation in the data) and to estimate the variance components for each. The difficulty in applying the method is defining a scheme of dividing the observations into separate groups which characterize some behaviour of the systematic effect being modelled. Often, the factors describing the systematic effect cannot be so discretely defined; rather they are often of a continuous nature that precludes lumping them together into separate and distinct groups.

Variance-covariance component estimation, on the other hand, is based on modelling deterministically the residual variation in the measurements. The variances and covariances are expressed in terms of linear models relating these components to various factors describing the systematic effect. The coefficients (variance and covariance components) in the variance-covariance model are estimated together with the parameters in a least squares solution. The technique is described in detail in Rao and Kleffe [1988] and has been applied to many geodetic problems (see, e.g., Grafarend et al. [1980], Grafarend [1984], Chen et al. [1990]). It can be shown that the analysis of variance method is just a special case of this more general approach [Chrzanowski et al., 1994]. The problem with applying the method is that the estimation of the variance-covariance model coefficients usually needs to be iterated, which can result in biased estimates of the variances and covariances [Rao and Kleffe, 1988]. This can lead to negative variances, which is unacceptable.

The approach taken here is to model any residual systematic effects remaining after removing a deterministic model, using autocorrelation functions derived from a power spectral density function of the residuals. This idea was first proposed by Vaníček and Craymer [1983a; 1983b] and further developed by Craymer [1984]. To accommodate unevenly spaced data, the least squares spectrum, developed by Vaníček [1969a], was used and converted to an autocorrelation function using the inverse Fourier transform. However, the inverse Fourier transform is not completely compatible with the more general least squares spectrum. Consequently, a more general least squares transform and its inverse are developed here which are completely compatible with the least squares spectrum and can provide correct autocorrelation functions for data that are unevenly spaced. Although applied only to geodetic problems here, this technique should have wide application in many areas of science where one needs to model or analyze measured data.

Before describing the technique, a review of the basic concepts of stochastic processes and the conventional Fourier transform and spectrum is given. This is followed by the development of a new "least squares" transform and its inverse, and the reformulation of the least squares spectrum, originally developed by Vaníček [1969a; 1971], in terms of this new transform. It is then shown how an autocorrelation function can be derived from the least squares spectrum using the inverse least squares transform, and how this can be used in a procedure for stochastically modelling residual systematic effects. These developments are followed by tests with simulated data to examine numerically some of the limitations of the technique. It is also applied to a couple of examples in geodesy: the modelling of residual systematic effects in electronic distance measurement (EDM) data and point positioning data from the Global Positioning System (GPS). Finally, conclusions and recommendations for further investigations are given.


Throughout the sequel the following notation is used:

variables/observables: italic
vectors: lower case, boldface letters
matrices/operators: upper case, boldface letters
functions: upper or lower case letters, no boldface


Chapter 2 Basic Concepts of Stochastic Processes

2.1 Types of Processes

A process can be considered to be any kind of physical phenomenon that varies in some way. We examine such processes by taking measurements of them; i.e., by describing their physical behaviour in terms of numerical quantities that can then be analysed mathematically. These processes are most commonly represented as series of measurements (observations), often taken with respect to time (time series) or space (spatial processes). When regarded more generally as series with respect to any other argument, these processes are referred to here as simply data series.

Processes φ(t) are usually thought of as one-dimensional; that is, varying with respect to a one-dimensional argument (t) such as time. However, a process may also be multidimensional; i.e., a function φ(t) of a vector of arguments (t); e.g., processes φ(x) which are functions of three-dimensional position (x) in space or four-dimensional position in space-time. One may also encounter multiple processes φ(t) of multiple arguments (t).

Processes can be classified as either continuous or discrete. Examples of continuous processes are the crustal motions of land masses due to tectonic deformations or the motions of satellites in orbit about the Earth. On the other hand, the accumulated errors from point to point in a geodetic network would be classified as a discrete process (in space). Generally, one is only able to obtain discrete samples of continuous processes, primarily due to the nature of data acquisition systems.


2.2 Deterministic and Random Processes

Processes can also be classified as deterministic and random. What is random? "Everything and nothing" according to I k [1983, pp. 405-406]. There is no one test for determining whether a process is either random or deterministic. The definitions most often used are only subjective or heuristic and a matter of philosophical debate. One person may consider a process to be random noise to be filtered out while another may consider the same random process to be a deterministic signal to be modelled.

The most straightforward definition is that deterministic implies predictability while random implies unpredictability. Thus what is considered deterministic and what is considered random depends on what one wishes to model. The deterministic part is what is being predicted or estimated exactly, while the random or stochastic part is that which one can only predict or estimate with some degree of uncertainty. In the last century, instruments had rather limited precision and much of the variability in a process was considered to be random or stochastic, so that one could only predict with a great deal of uncertainty. More recently, however, new measuring techniques have become more precise, so that it is now possible to attempt to model ever smaller variations in the data in an effort to improve the prediction power of the deterministic model.

2.3 Stationarity and Ergodicity

Different realizations of a random process will, in general, not be identical. A single realization of a process is called a sample record. The collection or ensemble of all sample records is called a random or stochastic process (see Figure 2.1). In a random process, all sample records are different, while in a deterministic process all samples are identical.


Figure 2.1: A single random process consisting of an ensemble of 4 sample records (A, B, C, D). There are 100 values of the argument ranging from 1 to 100. Ensemble or sample averages are taken over the four different sample records for each value (e.g., t or t+τ) of the argument; i.e., there are 100 sample averages. Argument averages are taken over the arguments for each sample record; i.e., there are 4 argument averages.


Random or stochastic processes can be classified as being either stationary or non-stationary. A process is stationary if the statistical properties of the process, defined over the ensemble, are independent of the argument(s) (usually time or space). That is, the statistical moments over all realizations (e.g., ensemble or sample averages) are the same for all values of the argument. A non-stationary process is one for which this property is not satisfied. Such processes require special techniques to model their behaviour (see, e.g., Bendat and Piersol [1971] and Priestley [1981]).

In practice, different degrees of stationarity exist. If the complete statistical description of the process (i.e., all possible statistical moments) is independent of the argument, the process is said to be completely stationary. If only the first few moments are independent of the argument, the process is considered to be weakly stationary. Processes with a Gaussian probability distribution are completely described by only the first two moments. In this case, stationarity in only the first two moments infers complete stationarity.

Stationarity can be further classified on the basis of ergodicity. A process is ergodic if the statistical properties taken over the argument (e.g., time averages) are identical to the statistical properties taken over different realizations (e.g., ensemble or sample averages). The assumption of ergodicity allows for a considerable reduction in the number of observations and computations required to determine the statistical properties of a random process. For the sake of simplicity, convenience and, most importantly, costs, most random processes are assumed to be ergodic in practice, even though there may be evidence to the contrary.

When dealing with multidimensional (i.e., multi-argument) spatial processes φ(x) whose arguments (x) define location and orientation in space, stationarity is often considered in terms of homogeneity and isotropy. A process is homogeneous if it is invariant with respect to its location in space and isotropic if it is invariant with respect to its orientation [Grafarend, 1976].


Throughout this work all processes are assumed to be stationary and ergodic. Any nonstationarity and nonergodicity is assumed to be explicitly modelled deterministically and is assumed to disappear when the model is selected properly.

2.4 Statistical Moments

The properties of a random process can be described by the statistical moments of its probability density function. For a single continuous random process (or variable) φ(t), at a particular argument t (hereafter called time for convenience), the kth-order moment is given by

E[φ^k(t)] = ∫ φ^k(t) P(φ(t)) dφ(t) ,

where E[·] is the mathematical expectation operator and P(φ(t)) is the probability density function of the random variable at time t. The integration is performed over all sample records at time t. This implies that φ^k(t), t ∈ (-∞, ∞), must be integrable.

Generally, only the first two moments are useful in practice. The first moment or mean value is the simplest and most common measure of a random process. It provides a measure of the central tendency in the data series. For random processes with discrete sample records φ_i(t), i = 1,...,n, the mean μ(t) at argument t is defined by

μ(t) = (1/n) Σ_{i=1}^{n} φ_i(t) ,

where n is the total number of sample records (infinite in the limit).


The second-order moment is a measure of the variation in the random process and is defined by

E[φ²(t)] = ∫ φ²(t) P(φ(t)) dφ(t) .

The second-order central moment is a measure of the variation about the mean and is also called the variance σ(t)². The discrete form of the variance can be written as

σ(t)² = (1/n) Σ_{i=1}^{n} (φ_i(t) - μ(t))² .

2.5 Covariance and Correlation Functions

Covariance and correlation functions are generic terms for the more general second-order moments which provide a measure of the linear dependence between observations at different values of the argument t. Autocovariance and autocorrelation functions represent the linear dependence within a single random process. Cross-covariance and cross-correlation functions represent the linear dependence between a pair of different random processes.

The autocovariance function C(t,t') is defined by

C(t,t') = E[(φ(t) - μ(t)) (φ(t') - μ(t'))] . (2.5)

When the times are the same (i.e., t = t'), eqn. (2.5) reduces to the variance σ(t)². The cross-covariance function C_φγ(t,t') between two random processes φ(t) and γ(t) is defined similarly as

C_φγ(t,t') = E[(φ(t) - μ_φ(t)) (γ(t') - μ_γ(t'))] .

The autocorrelation function R(t,t') is defined as the normalized autocovariance function; i.e.,

R(t,t') = C(t,t') / (σ(t) σ(t')) .

Similarly, the cross-correlation function R_φγ(t,t') is the normalized cross-covariance function:

R_φγ(t,t') = C_φγ(t,t') / (σ_φ(t) σ_γ(t')) .

The autocorrelation function is limited to the range

-1 ≤ R(t,t') ≤ 1

for all t and t'. When the times t and t' are the same, the autocorrelation function is equal to one. The same holds for the cross-correlation function.


If the random process is stationary, the moments are independent of the value of the argument. Thus, in the above definitions, the expressions are dependent only on the time difference or lag, τ = t' - t. The moments then reduce to the following forms:

C(τ) = E[(φ(t) - μ) (φ(t+τ) - μ)] ,

R(τ) = C(τ) / σ² .

Similar expressions to eqns. (2.11) and (2.12) can be written for the cross-covariance and cross-correlation functions.

The following two properties of these functions are consequences of the assumption of stationarity:

1. The auto/cross-covariance and auto/cross-correlation functions are even functions of τ; i.e., C(-τ) = C(τ) and R(-τ) = R(τ).

2. At lag τ = 0 the autocovariance function is positive and the autocorrelation function is equal to one; i.e., C(0) = σ² > 0 and R(0) = 1.


Ergodicity is probably the most important and often used assumption in practical data analysis applications, even when the process is known to be non-ergodic or even non-stationary. This is done to simplify the data acquisition and handling procedures. Stochastic processes are ergodic if their sample moments (e.g., mean, autocovariance, etc.) can be determined from averaging over the argument (e.g., time) instead of averaging over the sample records (see Figure 2.1). For the mean and autocovariance function,

μ = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} φ(t) dt ,

C(τ) = lim_{T→∞} (1/T) ∫_{-T/2}^{T/2} (φ(t) - μ) (φ(t+τ) - μ) dt .

The discrete forms of these expressions (for discrete random processes) are given by

μ = (1/n) Σ_{i=1}^{n} φ(t_i) ,

C(τ_k) = (1/(n-k)) Σ_{i=1}^{n-k} (φ(t_i) - μ) (φ(t_{i+k}) - μ) , (2.20)

where lag τ_k = k Δt and Δt is the sampling interval. This expression gives an unbiased estimate of the autocovariance function. Although unbiasedness is desirable, this function is not positive definite. Constructing covariance matrices from it leads to singular matrices. It also exhibits so-called "wild" behaviour at large lags. For these reasons, the biased estimate is recommended by Bendat and Piersol [1971, pp. 312-314] and Priestley [1981, pp. 323-324], where the denominator n-k in eqn. (2.20) is replaced by the constant n. This results in a function that tapers off as the lag increases. An example of this is given in the numerical simulations in Chapter 7.

Similar expressions can also be written for the cross-covariance functions. Note that the integrations and summations are performed over the argument t rather than over the sample records; i.e., under the assumption of ergodicity the moments can be computed from a single sample record.
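The biased and unbiased single-record estimators discussed above can be sketched in a few lines of Python. This is an illustrative implementation only (the function names are mine, not the thesis notation); it follows eqn. (2.20) for the unbiased form and replaces the denominator n-k by n for the biased form.

```python
import numpy as np

def autocovariance(x, max_lag, biased=True):
    """Sample autocovariance of a single record under the ergodicity
    assumption: averages over the argument replace ensemble averages.
    biased=True divides by n (tapers with lag, positive semi-definite);
    biased=False divides by n - k (unbiased but can behave wildly)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xm = x - x.mean()
    c = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        s = np.dot(xm[:n - k], xm[k:])
        c[k] = s / n if biased else s / (n - k)
    return c

def autocorrelation(x, max_lag, biased=True):
    """Autocovariance normalized by the variance, so R(0) = 1."""
    c = autocovariance(x, max_lag, biased)
    return c / c[0]
```

Note that the biased estimate at lag 0 equals the sample variance, which is why the normalized function is exactly one at zero lag.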

2.6 Decomposition of the Observable

In the real world, processes cannot be modelled as purely deterministic or stochastic. Instead, one is faced with a mixture of both. Clearly, there are many factors which prevent us from modelling in a purely deterministic way. Most are due to either measurement errors or systems that are simply too complex to be modelled entirely deterministically. According to Priestley [1981, p. 14], "almost all quantitative phenomena occurring in science should be treated as random processes as opposed to deterministic functions."

The expected value of a random process may be computed from some deterministic model describing the expected behaviour of the series. However, this model will probably not describe the series exactly, as mentioned above. A stochastic model may then be used to account for the resulting lack of fit. It is therefore convenient to decompose the observable φ(t) into a deterministic or trend component μ(t) and a random or stochastic component e(t); i.e.,

φ(t) = μ(t) + e(t) , ∀ t ∈ (-∞, ∞).

The random component e(t) may also be decomposed into two components:

e(t) = s(t) + r(t) ,

where s(t) is a statistically dependent (correlated) component and r(t) is a statistically independent (uncorrelated) component. The observable may then be represented in the form

φ(t) = μ(t) + s(t) + r(t) .

The statistically dependent component is often due to effects neglected or incompletely accounted for in the deterministic model defined by μ(t). Both random components are assumed to have a zero mean. This is enforced when the trend component is estimated by least squares. However, due to the statistical dependence, there is a correlation among the s(t) components. This statistically dependent component can be thought of as the residual deterministic part remaining after removing the postulated deterministic model. Thus, this component is often referred to as a systematic error or systematic effect.
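The three-way decomposition of the observable can be illustrated with a small simulation. This is only a sketch under my own assumptions: a linear drift stands in for the trend μ(t), a first-order autoregressive recursion stands in for the correlated component s(t) (the thesis itself models s(t) via autocorrelation functions, not an AR model), and white noise stands in for r(t).

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
t = np.arange(n, dtype=float)

# Deterministic (trend) component mu(t): a hypothetical linear drift.
mu = 0.05 * t + 2.0

# Statistically dependent component s(t): an AR(1) recursion, chosen
# here only as a simple way to produce a correlated zero-mean residual.
s = np.zeros(n)
for i in range(1, n):
    s[i] = 0.8 * s[i - 1] + rng.standard_normal()

# Statistically independent component r(t): zero-mean white noise.
r = rng.standard_normal(n)

# The observable is the sum of the three components.
phi = mu + s + r
```

Plotting the sample autocorrelation of s versus r would show the correlated component decaying slowly with lag while the white noise drops immediately to near zero.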


Chapter 3 The Fourier Transform and Spectrum

3.1 Fourier Series and Integrals

It is well known in mathematics that any continuous, periodic function can be represented by an infinite series of trigonometric functions, called a Fourier series. If φ(t) is a function of period T, it can then be expressed in the form

φ(t) = a_0/2 + Σ_{i=1}^{∞} (a_i cos 2πf_i t + b_i sin 2πf_i t) , (3.1)

where a_i and b_i are the Fourier coefficients corresponding to frequency f_i. The frequency f_i can also be expressed in terms of the natural or fundamental frequency f_0 as f_i = i f_0, where f_0 = 1/T. Note that if angular frequencies (ω) are to be used, ω_i (in radians per unit of t) should be substituted for 2πf_i.

Using the fact that the cosine and sine functions form an orthogonal basis over the interval (-T/2, T/2), the Fourier coefficients for all i = 0,...,∞ are given by [Priestley, 1981, p. 194]

a_i = (2/T) ∫_{-T/2}^{T/2} φ(t) cos 2πf_i t dt , (3.2)

b_i = (2/T) ∫_{-T/2}^{T/2} φ(t) sin 2πf_i t dt . (3.3)

If φ(t) is an even function, i.e., φ(t) = φ(-t), the b_i coefficients are zero. When φ(t) is an odd function, i.e., φ(t) = -φ(-t), the a_i coefficients are zero. Note that writing the constant term in eqn. (3.1) as a_0/2 rather than as a_0 makes the expressions (eqns. (3.2) and (3.3)) for the coefficients valid even for i = 0.

For non-periodic functions, there is no such Fourier series representation. However, according to Priestley [1981, pp. 198-200], a new periodic function may be defined which is the same as the non-periodic one over a finite interval, say, (-T/2, T/2) but repeats itself and is thus periodic outside this interval. This new function will have a period T and can now be represented as a Fourier series. By letting T→∞, the discrete set of frequencies in the Fourier series becomes a continuous set of frequencies; i.e., an integral. The non-periodic function can then be represented by the so-called Fourier integral, which has the form [Priestley, 1981, pp. 198-199]

φ(t) = ∫_{0}^{∞} (a(f) cos 2πft + b(f) sin 2πft) df , (3.4)

where the Fourier coefficients over the continuous range of frequencies are defined by

a(f) = 2 ∫_{-∞}^{∞} φ(t) cos 2πft dt , ∀ f ,

b(f) = 2 ∫_{-∞}^{∞} φ(t) sin 2πft dt , ∀ f .

This representation of a non-periodic function in terms of a continuous set of frequencies holds only when the function is absolutely integrable over the infinite interval (-∞, ∞) [Priestley, 1981, p. 200]; i.e.,

∫_{-∞}^{∞} |φ(t)| dt < ∞ .

This happens when φ(t) decays to zero as t goes to infinity.

So far only periodic and non-periodic deterministic functions have been considered. However, in practice one usually deals with random or stochastic functions (processes) where the application of the above representations is not so apparent. Clearly, stochastic functions may not necessarily be periodic and thus they cannot be represented by Fourier series. Furthermore, stochastic functions are not absolutely integrable since, by the definition of stationarity, they do not decay to zero at infinity. It would then appear that we cannot represent them as Fourier integrals either. Nevertheless, according to Priestley [1981, p. 207] it is possible to circumvent this problem by simply truncating the stochastic process at, say, -T/2 and T/2 as done for non-periodic functions. Outside this interval the function is defined to be zero, thereby satisfying the absolutely integrable condition. As long as the stochastic function is continuous, it can be represented by the Fourier integral as in eqn. (3.4) but with coefficients defined by finite Fourier integrals using integration limits (-T/2, T/2) instead of (-∞, ∞); i.e. [Priestley, 1981, p. 207],

a(f) = 2 ∫_{-T/2}^{T/2} φ(t) cos 2πft dt ,

b(f) = 2 ∫_{-T/2}^{T/2} φ(t) sin 2πft dt .

Unfortunately, we cannot take the limit T→∞ as before since, by the property of stationarity, the above integrals would not be finite.


Although all of the expressions for the Fourier series and integrals were given in terms of trigonometric functions, it is more common to use complex notation for a more compact representation of the series and integrals. Assigning the cosine term to the real component and the sine term to the imaginary component, each trigonometric term can be replaced by a complex exponential function using Euler's formula [Bronshtein and Semendyayev, 1985, p. 474]

e^{jθ} = cos θ + j sin θ ,

where j = √-1 is the imaginary unit.

Using this notation, the Fourier series in eqn. (3.1) can be re-written as [Priestley, 1981, p. 199]

φ(t) = Σ_{k=-∞}^{∞} F_k e^{j2πf_k t} , (3.11)

where

F_k = (a_k - j b_k)/2 , F_{-k} = (a_k + j b_k)/2 , k ≥ 0 . (3.10)

Substituting for a_k and b_k, using eqns. (3.2), (3.3) and (3.10),

F_k = (1/T) ∫_{-T/2}^{T/2} φ(t) e^{-j2πf_k t} dt .

Putting this in the Fourier series in the continuous form of eqn. (3.11) and letting T→∞ gives the so-called Fourier integral over a continuous range of frequencies; i.e.,

φ(t) = ∫_{-∞}^{∞} F(f) e^{j2πft} df , (3.14)

where

F(f) = ∫_{-∞}^{∞} φ(t) e^{-j2πft} dt for non-periodic functions, (3.15)

F(f) = ∫_{-T/2}^{T/2} φ(t) e^{-j2πft} dt for stochastic functions.

3.2 Fourier Transform

Given the Fourier integral representation of a non-periodic or stochastic function, the transformation from φ(t) to F(f) in eqn. (3.15) is called the (direct) Fourier transform, or the finite Fourier transform if dealing with stochastic functions. The transformation from F(f) to φ(t) in eqn. (3.14) is called the inverse Fourier transform. φ(t) and F(f) are referred to as a Fourier transform pair, denoted by φ(t) ↔ F(f). Note that the complex conjugate form is used in the direct transform and not in the inverse transform. In some texts (e.g., Press et al. [1986]), the conjugate form is used in the inverse transform and not in the direct transform.

In practice one rarely deals with continuous stochastic processes of infinite length but rather with actual discrete processes or discretely sampled data from continuous processes of finite length. Although such discrete samples are often evenly spaced in time (or any other argument), this may not always be the case. Nevertheless, the application of traditional Fourier transform techniques requires the processes to be discretely and evenly sampled. This is because the trigonometric functions are not orthogonal over an unevenly spaced domain.

For a discretely and evenly sampled stochastic process or data series {φ(t_i), i = 0,1,...,n-1}, the discrete Fourier transform is obtained by approximating the Fourier integral in eqn. (3.15) with a summation; i.e.,

F(f_k) ≈ Δt Σ_{i=0}^{n-1} φ(t_i) e^{-j2πf_k t_i} , (3.16)

where n is the number of "observations" (samples), Δt is the sampling interval and f_k is one of the frequencies belonging to the set of frequencies estimable from the discrete process (see below). Note also that the summation index extends from 0 to n-1 (instead of 1 to n) following the usual convention. If T = nΔt is the length of the data series, the discrete set of frequencies is given by

f_k = k f_0 = k/(nΔt) , k = -n/2, ..., n/2 ,

where f_0 = 1/T = 1/(nΔt) is the fundamental frequency. To make matters simpler, n is assumed to be even (the data series is truncated to an even number of points). This set of integer multiples of the fundamental frequency will be simply called "Fourier" frequencies here because they are always used in the Fourier transform and Fourier spectrum.

By convention, it is only the bare summation in eqn. (3.16) (without the Δt in front) that is commonly referred to as the discrete Fourier transform, denoted by F_k for frequency f_k. The discrete Fourier transform (DFT) is then defined by

F_k = Σ_{i=0}^{n-1} φ(t_i) e^{-j2πki/n} . (3.17)

The inverse discrete Fourier transform is obtained similarly by approximating the integral in eqn. (3.14) with a summation and substituting for the discrete Fourier transform. This gives

φ(t_i) = (1/n) Σ_{k=0}^{n-1} F_k e^{j2πki/n} . (3.18)
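The direct and inverse discrete transforms can be written out as literal summations, which makes the sign and normalization conventions explicit. A minimal sketch (function names are mine; NumPy's FFT, which uses the same conventions, serves only as a cross-check):

```python
import numpy as np

def dft(phi):
    """Direct discrete Fourier transform: F_k = sum_i phi_i e^{-j 2 pi k i / n},
    written out as a plain summation (no FFT speed-up)."""
    phi = np.asarray(phi, dtype=complex)
    n = len(phi)
    i = np.arange(n)
    return np.array([np.sum(phi * np.exp(-2j * np.pi * k * i / n))
                     for k in range(n)])

def idft(F):
    """Inverse discrete transform; note the opposite sign in the
    exponent and the 1/n normalization."""
    F = np.asarray(F, dtype=complex)
    n = len(F)
    k = np.arange(n)
    return np.array([np.sum(F * np.exp(2j * np.pi * k * m / n))
                     for m in range(n)]) / n
```

Applying `dft` followed by `idft` recovers the original series, and for a real-valued series the coefficients satisfy the conjugate symmetry F_{n-k} = F_k*.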

The discrete sampling of a stochastic process has an important consequence known as the aliasing effect, whereby some high frequency information will be lost or, more precisely, hidden (aliased) in the lower frequencies. This can be seen by examining the exponential term in eqns. (3.17) and (3.18) as a function of the discretely sampled process φ(t_i), i = -∞,...,∞, where t_i = iΔt and Δt is the sampling interval. Re-writing the exponential function as

e^{-j2πft_i} = cos 2πft_i - j sin 2πft_i , (3.19)

the effect of discrete sampling on each sine and cosine term can be seen. For example, substituting iΔt for t_i in the cosine term gives

cos 2πft_i = cos 2πf iΔt .

The same can be written for a new frequency f+Δf,

cos 2π(f+Δf)t_i = cos(2πf iΔt + 2πΔf iΔt) .

These two cosine terms are equivalent only if 2πΔf iΔt is an integer multiple of 2π; i.e., when Δf is an integer multiple of 1/Δt. All sampled cosines whose frequencies differ by a multiple of 1/Δt therefore have the same values; the highest unambiguous frequency, f_N = 1/(2Δt), is called the Nyquist or critical frequency. The same holds for the sine terms. All frequencies outside of the Nyquist frequency range (-f_N, f_N) will be aliased to (i.e., moved to and superimposed on) frequencies in this range. If possible, Δt should be chosen small enough to avoid aliasing.

However, this requires a knowledge of the upper frequency limit of the information contained in the process being sampled or, at least, knowledge that only negligible information exists beyond the Nyquist frequency and our willingness to neglect this information.
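The aliasing argument above can be demonstrated numerically: two cosines whose frequencies differ by exactly 1/Δt produce identical samples. The sampling interval and frequencies below are my own illustrative choices.

```python
import numpy as np

dt = 0.1                       # sampling interval (assumed for illustration)
f_nyquist = 1.0 / (2.0 * dt)   # Nyquist frequency f_N = 1/(2 dt) = 5
t = np.arange(32) * dt         # evenly sampled times t_i = i dt

f = 2.0                  # a frequency inside the Nyquist range
f_alias = f + 1.0 / dt   # = 12, well outside the Nyquist range

# The two sampled cosines are indistinguishable: the high frequency
# is aliased onto (superimposed on) the low one.
low = np.cos(2 * np.pi * f * t)
high = np.cos(2 * np.pi * f_alias * t)
```

Any spectrum estimated from these samples would attribute all of the power of the 12-cycle component to the 2-cycle frequency.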

There are some special properties of Fourier transforms that are of particular importance. These are summarized as follows (* indicates the complex conjugate operator):

φ(t) is real:       F(-f) = F(f)* ,
φ(t) is imaginary:  F(-f) = -F(f)* ,
φ(t) is even:       F(-f) = F(f)   (i.e., F(f) is even) ,
φ(t) is odd:        F(-f) = -F(f)  (i.e., F(f) is odd) .

Note that when dealing with real, even functions, the series of trigonometric terms of cosines and sines reduces to a series of only cosine terms; i.e., by eqn. (3.19) the sine terms are all zero. In this case the Fourier transform reduces to the so-called cosine transform.

The following are some other properties of the Fourier transform (from Press et al. [1992, p. 491]). Recall that φ(t) ↔ F(f) indicates that φ(t) and F(f) are a Fourier transform pair.


3.3 Fourier Spectrum

The representation of functions in terms of Fourier series has a special physical interpretation in terms of power (cf. Priestley [1981, pp. 194-195] and Press et al. [1992, p. 492]). Consider an absolutely integrable non-periodic function φ(t). The total "power" of φ(t) is customarily defined by

Total power = ∫_{-∞}^{∞} φ(t)² dt .

Substituting the inverse Fourier transform in eqn. (3.14) for one of the φ(t) gives

Total power = ∫_{-∞}^{∞} φ(t) [ ∫_{-∞}^{∞} F(f) e^{j2πft} df ] dt .

Interchanging the order of the integrals and substituting for the direct Fourier transform,

Total power = ∫_{-∞}^{∞} F(f) F*(f) df ,

where F*(f) denotes the complex conjugate of F(f). The total power can therefore be expressed either in terms of the integral of the original function or its Fourier transform; i.e.,

Total power = ∫_{-∞}^{∞} φ(t)² dt = ∫_{-∞}^{∞} |F(f)|² df . (3.34)

This is known as Parseval's relation [Jenkins and Watts, 1969, p. 25; Priestley, 1981, p. 201] or Parseval's theorem [Press et al., 1992, p. 492]. Note that the total power is equal to n times the variance σ².

It can be seen from eqn. (3.34) that the total power is divided among a continuous set of frequencies in the representative Fourier integral. Each term F(f)F*(f) df represents the contribution to the total power in φ(t) produced by the components with frequencies in the interval (f, f+df). The so-called power spectral density s(f) for frequency f is thus defined by

s(f) = F(f) F*(f) = |F(f)|² .

The plot of s(f) versus frequency f is also called the power spectrum, or simply the spectrum. Theoretical power spectral density functions for some special functions are illustrated in Figure 3.1.


Figure 3.1: Autocorrelation functions (ACF) and power spectral density functions (SDF) for some special functions (constant, sine wave and exponential). [Plots omitted.]


For periodic functions, the total power over the entire interval (-∞, ∞) is infinite [Priestley, 1981, pp. 195, 205]. Although it is only needed to describe the power over the finite interval (-T/2, T/2) in order to characterize it for the entire infinite interval, it is usually more convenient to use the total power per unit of time over the finite interval. This is obtained by dividing the total power by the period T; i.e.,

Total power per unit of time (-T/2, T/2) = Total power (-T/2, T/2) / T .

The total power over (-T/2, T/2) is divided among the infinite set of discrete frequencies in the representative Fourier series. The contribution s(f_k) to the total power per unit of time of each "Fourier" frequency f_k = k f_0 is called the spectral value for frequency f_k.

Similarly, for stationary stochastic functions (random processes), the total power is also infinite by the definition of stationarity (i.e., a steady state process from t = -∞ to ∞ requires infinite energy or power). Using again the truncation approach, stochastic processes can also be represented by finite Fourier integrals in the finite interval (-T/2, T/2). The total power in this finite interval will then be finite.


For both non-periodic and stochastic functions over a finite interval (-T/2, T/2), it is generally more convenient to also use the power per unit of time. As for periodic functions, the power per unit of time is obtained by dividing the total power over the finite interval by the length of the interval; i.e.,

Total power per unit of time (-T/2, T/2) = (1/T) Total power (-T/2, T/2) = ∫_{-∞}^{∞} s(f) df .

Here s(f) represents the power spectral density function. For a process of finite length T it is defined by

s(f) = |F(f)|² / T .

The spectrum defined above is a function of both positive and negative frequencies and is called a "two-sided" spectral density function. However, one does not usually distinguish between positive and negative frequencies. Moreover, when φ(t) is real, the Fourier transform is an even function; i.e., F(f) = F(-f). It is therefore customary to express the spectrum as a function of only positive frequencies. Such a spectrum is called a "one-sided" spectral density function. Because the total power in the process must remain the same, the spectral values for the one-sided spectrum are defined as

s_1(f) = s(f) + s(-f) , f ≥ 0 .

For real φ(t), F(f) = F(-f) and

s_1(f) = 2 s(f) , f > 0 .

Hereafter, the one-sided spectral density function will be used since only real φ(t) will be considered.

It is also convenient to normalize the spectral values so that they express the percentage of the total power or variation in the process contributed by each frequency. The normalized spectral values s*(f) are obtained by dividing the spectral values by the total power; i.e.,

s*(f) = s(f) / Total power . (3.42)

A couple of important properties of power spectra are obtained from the properties of Fourier transforms. One of the most important is the invariance of the spectrum to time shifting. Given a process φ(t) shifted by t_0, the new Fourier transform is

F'(f) = e^{-j2πft_0} F(f) . (3.43)

The spectrum s'(f) for this process is then given by

s'(f) = F'(f) F'*(f) = F(f) F*(f) = s(f) ,

which is identical to the spectrum of the original series. Note that the constant exponential term in eqn. (3.43) cancels with its complex conjugate.
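The shift invariance of the spectrum is easy to verify numerically. In this sketch a circular shift of a discrete series stands in for the continuous time shift (an assumption: for the DFT, shifting is exact only in the circular sense); the phases of the coefficients change, but the power values do not.

```python
import numpy as np

x = np.array([3.0, 1.0, -2.0, 0.5, 4.0, -1.0, 2.0, 0.0])

# A time shift multiplies each Fourier coefficient by a unit-modulus
# phase factor e^{-j 2 pi f t0}; the power |F(f)|^2 is unchanged.
shifted = np.roll(x, 3)

power = np.abs(np.fft.fft(x)) ** 2
power_shifted = np.abs(np.fft.fft(shifted)) ** 2
```

This is why the spectrum carries no information about the absolute placement of a signal in time, only about its frequency content.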

The spectrum is not invariant with respect to time scaling, however. Intuitively, expanding time effectively results in shrinking frequencies, and vice versa. The relation between two spectra with different time scales can be obtained from eqn. (3.28). Given a function φ(at) which is scaled in time by a factor a, the new Fourier transform F'(f) is, by eqn. (3.28),

F'(f) = (1/|a|) F(f/a) .

This results in both a scaling of the frequencies as well as of the Fourier transform and spectrum.

For discretely sampled, infinite length processes, the Fourier transform and spectral values are defined only for the discrete set of "Fourier" frequencies f_k = k f_0, k = -n/2,...,n/2 (see the discussion of the discrete Fourier transform). The discrete form of Parseval's relation for the total power in a process is obtained in the same way as for the Fourier integral except that the discrete Fourier transform is used instead. Following the same substitution and reordering of summations as in eqn. (3.33) gives [Press et al., 1986, p. 390]

Σ_{i=0}^{n-1} φ(t_i)² = (1/n) Σ_{k=0}^{n-1} |F_k|² .


The individual spectral values s(f_k) for the power spectral density function are then given by

s(f_k) = |F_k|² / n .

The normalized spectral values are obtained by dividing by the total power as in eqn. (3.42). For the discrete case, this gives

s*(f_k) = |F_k|² / (n Σ_{i=0}^{n-1} φ(t_i)²) .

Realizing that the variance σ² is the total power divided by n (σ² = (1/n) Σ_{i=0}^{n-1} φ(t_i)²), the normalized spectral values can also be written as

s*(f_k) = |F_k|² / (n² σ²) .
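The discrete Parseval relation and the normalization can be checked numerically. A minimal sketch assuming the Press et al. convention, in which the time-domain power Σφ_i² equals (1/n)Σ|F_k|², with each term |F_k|²/n taken as that frequency's contribution:

```python
import numpy as np

phi = np.array([1.0, -0.5, 2.0, 0.25, -1.5, 0.75])
n = len(phi)
F = np.fft.fft(phi)

# Discrete Parseval relation: the same total power either way.
total_power_time = np.sum(phi ** 2)
total_power_freq = np.sum(np.abs(F) ** 2) / n

# Normalized spectral values: each frequency's share of the total
# power, which by construction sums to one.
s_norm = (np.abs(F) ** 2 / n) / total_power_time
```

Because the normalized values sum to one, they can be read directly as the fraction of the series' variation carried by each frequency.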

Sample estimates of the spectrum can be obtained by evaluating the discrete Fourier transform for frequencies fk, k = 0, ..., n/2, and computing the spectral values using eqns. (3.48) or (3.49). It is important to note for later that this is equivalent to (i) evaluating the Fourier coefficients ak and bk for the discrete frequencies fk using least squares estimation and (ii) computing the (amplitude) spectrum from (ak² + bk²). For real-valued functions, only positive frequencies need be considered because the negative frequency part of the spectrum is the mirror image of the positive part. However, the negative frequencies will be aliased as positive ones and, combined with the (identical) positive ones, will result in spectral values twice those computed using eqn. (3.48), except for the zero frequency. This gives the one-sided spectrum rather than a two-sided spectrum. The spectrum computed in this manner is generally referred to as the periodogram [Priestley, 1981, p. 394; Press et al., 1986, p. 421] and forms the basis of the least squares spectrum.
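The equivalence noted in points (i) and (ii) can be verified numerically. In the following sketch (NumPy assumed; the amplitudes 2.0 and 1.5 and the frequency index k = 3 are arbitrary choices), the coefficients ak and bk at one Fourier frequency are estimated by least squares and compared with the DFT of the same evenly spaced series:

```python
import numpy as np

n = 64
t = np.arange(n)
f = 2.0 * np.cos(2 * np.pi * 3 * t / n) + 1.5 * np.sin(2 * np.pi * 3 * t / n)

# (i) Least squares estimate of a_k, b_k at the Fourier frequency k = 3
A = np.column_stack([np.cos(2 * np.pi * 3 * t / n),
                     np.sin(2 * np.pi * 3 * t / n)])
ak, bk = np.linalg.lstsq(A, f, rcond=None)[0]

# (ii) The same quantities from the DFT: F_k = sum_t f(t) e^{-j2πkt/n}
Fk = np.fft.fft(f)[3]
print(ak, 2 * Fk.real / n)    # both ~2.0
print(bk, -2 * Fk.imag / n)   # both ~1.5
```

For evenly spaced data and Fourier frequencies the trigonometric columns are orthogonal, so the two routes give identical coefficients.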

3.4 Convolution and Correlation

Another application of Fourier transforms is in the concept of convolution and correlation. Given two functions f(t) and g(t) and their Fourier transforms F(f) and G(f), we can combine these two functions together in what is called a convolution. For the continuous case the convolution of f(t) and g(t), denoted f(t)*g(t), is defined by [Bronshtein and Semendyayev, 1985, p. 582]

where τ is thought of as an argument (time) difference or lag. The convolution theorem then states that the Fourier transform of the convolution of two functions is equal to the product of the Fourier transforms of the individual functions [ibid., p. 582]; i.e.,


where the symbol ↔ again signifies that the functions on either side are Fourier transform pairs. The Fourier transform is used to go from left to right while the inverse transform is used from right to left.

For discretely and evenly sampled processes f(ti) and g(ti), i = -n/2, ..., n/2, the discrete convolution is defined by

where the lags are evenly spaced. The discrete version of the convolution theorem is then

for frequencies fk, k = 0, ..., n-1.

Closely related to the convolution theorem in eqn. (3.51) is the correlation theorem. It can be shown that the product of a Fourier transform with the complex conjugate of another Fourier transform can be reduced to the form [Priestley, 1981, p. 211]

where K is called the kernel:


In the context of spectral analysis, the kernel K(τ) represents the cross-covariance function. Multiplying the Fourier transform by its own complex conjugate gives the autocovariance function C(τ) (cf. Section 2.5) as the kernel; i.e.,

Realizing that this multiplication gives the spectral value for frequency f, the covariance and the spectrum function can be expressed as a Fourier transform pair; i.e.,

This is known as the Wiener-Khinchin theorem. Furthermore, the normalized spectrum r(f) is the transform pair of the autocorrelation function R(τ) (cf. Section 2.5) so that
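The Wiener-Khinchin relation lends itself to a direct numerical check: the inverse FFT of the power spectrum should reproduce the circular autocovariance computed by direct summation. A minimal sketch, assuming NumPy and an arbitrary zero-mean random series:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 128
f = rng.standard_normal(n)
f -= f.mean()

# Autocovariance via the spectrum: C(tau) = IFFT(|F|^2) / n
spec = np.abs(np.fft.fft(f)) ** 2
C_indirect = np.fft.ifft(spec).real / n

# Direct (circular) autocovariance for comparison
C_direct = np.array([np.dot(f, np.roll(f, -k)) for k in range(n)]) / n

print(np.allclose(C_indirect, C_direct))  # True
```

Note that this is the circular autocovariance; the wrap-around issue it implies is discussed below.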

When computing the convolution of two functions, care must be exercised to avoid so-called "end effects" or "wrap around effects" caused by assuming the functions to be periodic. For example, when convolving a function with itself (i.e., autocorrelation), data from the end of the series are effectively wrapped around to the beginning of the series, thereby forming a periodic function with period T. This can have adverse effects but can be prevented by simply "padding" the data series with enough zeros to avoid any overlap of original data. To estimate all possible frequencies up to the Nyquist frequency (defined in Section 3.2), a data series of n points must be padded with n zeros to completely avoid any wrap around effect. There is a trade off when doing this, however; the more zeros appended to the series, the greater the errors in the sample estimates of the Fourier transforms. See Press et al. [1992, p. 533] for more information on end effects.
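The wrap-around effect, and its removal by padding with n zeros, can be demonstrated in a few lines (NumPy assumed; the 4-point series is an arbitrary example):

```python
import numpy as np

f = np.array([4.0, 3.0, 2.0, 1.0])
n = len(f)

# Circular autocorrelation: the end of the series wraps around to the start
r_circular = np.fft.ifft(np.abs(np.fft.fft(f)) ** 2).real

# Padding with n zeros removes the overlap of original data
fp = np.concatenate([f, np.zeros(n)])
r_padded = np.fft.ifft(np.abs(np.fft.fft(fp)) ** 2).real[:n]

# Direct linear autocorrelation sums for lags 0..n-1
r_direct = np.array([np.dot(f[:n - k], f[k:]) for k in range(n)])

print(r_circular)                        # wrap-around contaminates lags > 0
print(np.allclose(r_padded, r_direct))   # True
```

With the padded series, the FFT-based result matches the direct linear sums exactly; without padding, every non-zero lag picks up spurious end-to-start products.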

These indirect expressions in terms of the spectrum are often used as the basis for the efficient computation of autocovariance and autocorrelation functions using the FFT. They will also be used as the basis for developing autocovariance functions for unevenly spaced data to provide objective a priori estimates of covariances and weights that account for residual systematic effects in least squares modelling. However, it must be added that this indirect procedure gives the biased estimate of the autocovariance and autocorrelation functions [Bendat and Piersol, 1971, pp. 312-314; Priestley, 1981, pp. 323-324].

3.5 Fast Fourier Transform

Any discussion of the Fourier transform would not be complete without mentioning the so-called Fast Fourier Transform (FFT). Although the term is often used synonymously with the Fourier transform itself, it is really only a numerical algorithm used to compute the discrete Fourier transform (DFT) in an extremely efficient manner. The algorithm, popularized by Cooley and Tukey [1965], revolutionized the way in which the DFT had been used. Up to that time the DFT was restricted to only small data sets. With the advent of the FFT algorithm, however, it was quickly employed in a multitude of applications.

The basic idea behind the FFT is a bisection and recombination process. First the data are repeatedly bisected into pairs of points by recursively dividing the data into odd and even numbered points. The Fourier transforms are then computed for each of these pairs of points and subsequently recombined to form the Fourier transform of the entire data series. Because the Fourier transform of a pair of data points is a trivial and very fast computation (no multiplications are needed), the algorithm results in a dramatic increase in computational efficiency, especially for large data sets. The number of (complex) multiplications involved in the direct evaluation of the discrete Fourier transform is of the order of n², whereas the number of such operations in the FFT algorithm (in the recombination of the individual transforms) is only of the order of n log₂n [Press et al., 1992]. This general strategy was first used by Gauss to reduce the computational effort in determining planetary orbits and has also been derived by as many as a dozen others since (see Brigham [1974] and Bracewell [1989] for more information).
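The bisection-and-recombination idea can be sketched compactly. The following recursive radix-2 implementation (an illustration only, far less optimized than the algorithms described in the references above) splits the series into even- and odd-numbered points exactly as described:

```python
import numpy as np

def fft_radix2(f):
    """Recursive radix-2 FFT; the length of f must be a power of 2."""
    n = len(f)
    if n == 1:
        return np.asarray(f, dtype=complex)
    # Bisect into even- and odd-numbered points and transform each half
    even = fft_radix2(f[0::2])
    odd = fft_radix2(f[1::2])
    # Recombine the half-transforms with the "twiddle" factors e^{-j2πk/n}
    w = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([even + w * odd, even - w * odd])

x = np.random.default_rng(0).standard_normal(16)
print(np.allclose(fft_radix2(x), np.fft.fft(x)))  # True
```

Each level of recursion halves the transform length, giving the O(n log₂n) operation count cited above.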

The main limitation of both the discrete Fourier transform and its FFT algorithm is that the data must be equally spaced. The expressions for the Fourier coefficients, and thus the Fourier transform, are valid only for equally spaced data. Moreover, the FFT algorithm uses certain properties of the sine and cosine functions for evenly spaced data to reduce the number of terms that need to be evaluated. For the investigation of systematic effects, which can be functions of many different kinds of arguments that are usually very irregularly spaced, this precludes the use of the FFT, at least in the computation of the discrete Fourier transform. A similar problem also arises when there are large gaps in an otherwise equally spaced data series.

To circumvent the problem of unevenly spaced or "gappy" data, interpolation schemes are sometimes used where the original data are interpolated to give an evenly spaced series. This then allows one to use traditional techniques such as the FFT. However, the accuracy with which the interpolating function represents the original data series depends on the form of the interpolating function, the smoothness of the original data series and the presence of large gaps in the data. This presents a dilemma: in order to properly interpolate the data we must have a good knowledge of their behaviour, but the lack of this knowledge is usually the reason for computing FFTs in the first place. Another problem is that, in the presence of large gaps, interpolation often produces disastrous results.

A second limitation of the FFT is that the number of data points to be transformed must be a power of 2 for the FFT to be most efficient. Alternate and mixed radix formulations of the FFT also exist, but they are much less efficient. The conventional method of dealing with a number of points that is not a power of two is to again "pad" the data series with enough zeros to obtain the required number of points for the FFT. This clearly inflates the number of points to process, thereby increasing not only processing time but also storage requirements. It is most inefficient when dealing with large data sets. In these cases, one usually takes only the first power-of-two number of points and omits the rest. More importantly, zero padding also increases the error in the FFT with respect to the continuous Fourier transform.

One more limitation of the FFT is that it is restricted to only the set of "Fourier" frequencies. If frequencies other than these standard ones are present, a phenomenon known as spectral leakage can degrade the results. To compensate for this, so-called window functions are employed to reduce this leakage by convolving a tapered Gaussian-like function with the data series in the Fourier transform. For more on window functions see, e.g., Priestley [1981, Chapter 7] and Press et al. [1992, Chapter 13.4].

3.6 Other Transforms

The preceding developments have been based on the use of Fourier (trigonometric) series to approximate functions and stochastic processes. The advantage of using Fourier series is that the periodic terms are usually easier to interpret physically. Nevertheless, other approximation or basis functions can be used.

One popular alternative approximation function is the so-called "cas" function, which forms the basis of the Hartley transform [Hartley, 1942; Bracewell, 1986]. This function is defined as

and is used in place of e^(-j2πft) = cos 2πft - j sin 2πft in the usual Fourier expressions. Note that the difference between the two is that the Fourier expressions separate the cosine and sine terms while the Hartley expressions combine them.
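A short numerical sketch (NumPy assumed; this is a direct matrix evaluation, not Bracewell's fast algorithm) shows how the cas kernel combines the real and imaginary Fourier parts:

```python
import numpy as np

def dht(f):
    """Discrete Hartley transform using cas(x) = cos(x) + sin(x)."""
    n = len(f)
    k = np.arange(n)
    arg = 2 * np.pi * np.outer(k, k) / n
    return (np.cos(arg) + np.sin(arg)) @ f

x = np.random.default_rng(2).standard_normal(8)
H = dht(x)
F = np.fft.fft(x)

# The Hartley transform combines the Fourier cosine and sine terms:
# H[k] = Re(F[k]) - Im(F[k])
print(np.allclose(H, F.real - F.imag))  # True
```

Because H[k] is a simple combination of the real and imaginary parts of F[k], either transform can be recovered from the other, as noted below.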

In spite of the different functions used in the Fourier and Hartley transforms, they are similar in shape. In fact, the Fourier transform can be deduced from the Hartley transform, although this is considered unnecessary because either transform provides a pair of numbers at each frequency that represents the oscillation of the series in amplitude and phase [Bracewell, 1989]. Moreover, the amplitude and phase spectra obtained from either transform are identical, although they are derived in a slightly different manner [ibid.].

As for the Fourier transform, Bracewell [1986] has also developed a fast Hartley transform in much the same way as the FFT. The advantage is that the fast Hartley transform has been shown to be twice as fast as the FFT and uses half as much computer memory [O'Neill, 1989].


Chapter 4 The Least Squares Transform

4.1 Introduction

A significant limitation of the traditional techniques for the estimation of autocorrelation functions, either directly or indirectly via the inverse of the Fourier spectrum, is that they always require the data to be equally spaced in the argument. Although the data might be evenly spaced with respect to some basic sampling parameter such as time, they will generally not be evenly spaced with respect to other parameters that may better characterize the behaviour of any systematic effects to be modelled by correlation functions. Some typical parameters that might be used to model such systematic effects in geodetic problems include spatial distance, satellite elevation angle, atmospheric temperature, temperature gradient, pressure, etc.; cf. Vaníček and Craymer [1983a,b], Craymer [1984; 1985], Vaníček et al. [1985], and Craymer and Vaníček [1986]. Clearly it would be very difficult to obtain a data series evenly spaced in even some of these randomly fluctuating parameters.

Other reasons for seeking alternative techniques are concerned with the limitations of the discrete Fourier transform and FFT described in the preceding chapter. These include the use of only the set of standard "Fourier" frequencies (integer multiples of the fundamental frequency), and the requirement of 2ⁿ data points for the FFT algorithm. In addition, a deterministic model is often estimated and removed from the data prior to any spectral analysis. Traditional spectral techniques do not consider any interaction or linear dependence (correlation) between the a priori deterministic model and the implied periodic components modelled in the spectrum and in the correlation function. Moreover, the data cannot be weighted in the Fourier transform computation in accordance with their assumed probability density function. Thus, some observations with relatively large random errors will be treated the same as other observations that may be many times more precise.

The aim here is to formulate a more general transform that is capable of handling such unevenly spaced arguments. The transform is based on the least squares spectrum computation developed by Vaníček [1969a; 1971] and is referred to here as the least squares transform and its inverse. Note that this least squares approach is developed here for real-valued data and, consequently, positive frequencies. It cannot cope with complex data or negative frequencies, which are useful in distinguishing between prograde and retrograde motions.

4.2 Matrix Form of Fourier Transform

The basic form of the least squares transform can be derived by first expressing the discrete Fourier transform (DFT) in terms of matrices of complex exponential functions. Rewriting eqn. (3.18) in matrix form gives (the superscript "c" denotes a complex matrix)

where


Note that the transpose in eqn. (4.1) is the complex conjugate transpose for complex matrices (see Golub and Van Loan [1983, p. 9]); i.e.,

This matrix form of the discrete Fourier transform can be written for each of the discrete "Fourier" frequencies in eqn. (3.17).

Combining all frequencies together gives the simultaneous transform for all the standard Fourier frequencies; i.e.,

where

The transpose in eqn. (4.5) again indicates the complex conjugate transpose where


Note that Ack in eqn. (4.1) is the k-th column of Ac corresponding to the specific frequency fk.

The inverse discrete Fourier transform expresses each observation f(ti) in terms of the Fourier transforms Fk for all of the discrete "Fourier" frequencies fk = k/(nΔt), k = -n/2, ..., n/2. This can also be written in matrix form as for the direct transform. Rewriting eqn. (3.19) in matrix notation gives

where

Combining all observations together gives the simultaneous inverse transform; i.e.,

where Ac is defined as in eqn. (4.7) and Fc is defined by eqn. (4.3). Note that the design matrix is not transposed in the inverse transform and a factor of 1/n is included as in the complex form. Expanding this in terms of the Fourier transforms for the individual "Fourier" frequencies gives
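The complex matrix form of the DFT and its inverse can be checked numerically. In this sketch (NumPy assumed; the names Ac and Fc simply mirror the notation of the text), the k-th column of the design matrix holds the complex exponentials for frequency fk:

```python
import numpy as np

n = 8
t = np.arange(n)
k = np.arange(n)

# Complex design matrix: Ac[i, k] = exp(+j 2π k t_i / n)
Ac = np.exp(2j * np.pi * np.outer(t, k) / n)

f = np.random.default_rng(3).standard_normal(n)

# Direct transform: conjugate transpose of the design matrix
Fc = Ac.conj().T @ f
print(np.allclose(Fc, np.fft.fft(f)))  # True

# Inverse transform: no transpose, with the factor 1/n
f_back = (Ac @ Fc) / n
print(np.allclose(f_back.real, f))  # True
```

The conjugate transpose in the direct transform and the 1/n factor in the inverse are exactly the conventions described above.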


Before developing a more general least squares form of the above transform, it is necessary to replace these complex expressions with their real-valued trigonometric forms. It will be shown later that this is because, for unequally spaced data, the real and imaginary components can, in general, no longer be treated independently of each other. Using Euler's formula (eqn. (3.10)), the discrete Fourier transform in eqn. (3.18) becomes

and the inverse discrete Fourier transform is

Note that the sine term is zero for the zero frequency component (k=0) in the above expressions. Realizing that the real (cosine) and imaginary (sine) terms are two separate quantities that are independent of each other, the complex expression can be rewritten as two separate real expressions for each term. That is, for the real term,

and for the imaginary term,


The discrete Fourier transform in eqn. (4.1) can now be expressed in real matrix notation, with separate columns in the design matrix A for the real (cosine) and imaginary (sine) terms. The transform is then given by eqn. (4.1), where Fck and Ack are replaced with Fk and Ak, respectively, which are defined as

Note that for zero frequency (k=0), Im(F0)=0 and all the sine terms in A0 are also zero, so that

The simultaneous direct and inverse Fourier transforms for all the "Fourier" frequencies are then given by eqns. (4.5) and (4.11), respectively, with Fc and Ac replaced by, respectively,

Note that there are n observations and only n-1 coefficients to solve for.

4.3 Least Squares Transform

A more general least squares transform (LST) can be obtained from the above matrix form of the discrete Fourier transform (DFT) by realizing that the DFT and its inverse are equivalent to least squares interpolation or approximation using trigonometric functions (i.e., Fourier series) as the basis functions (see, e.g., Vaníček and Krakiwsky [1986, Chapter 12] for a detailed exposition of least squares theory). Specifically, a vector of observations ℓ can be approximated in terms of a Fourier series by eqn. (3.1), which can be written in matrix notation as

where


is the vector of Fourier coefficients to be estimated and A represents the basis (trigonometric) functions as defined in eqn. (4.22). Note that for f0=0, there is no imaginary term and thus no b0 coefficient. The Fourier coefficients x can be estimated by solving for them using the least squares minimization criterion (cf. Vaníček and Krakiwsky [1986, pp. 204-207]). The solution is given by

where N = Aᵀ A is the normal equation coefficient matrix.

Note that in the above equation Aᵀℓ is the matrix form of the (simultaneous) discrete Fourier transform in eqn. (4.5). Thus, the least squares transform for all frequencies simultaneously is given by eqn. (4.5) and the transform for each frequency fk by

where Ak is that part of A corresponding to only frequency fk.

The estimated Fourier coefficients in eqn. (4.25) can then be written as

Substituting this in eqn. (4.23) gives the estimated observations

ℓ̂ = A N⁻¹ F ,


which represents the simultaneous inverse least squares transform for all frequencies. The individual observations f̂(ti) are then given by

where Ai represents the i-th row of A corresponding to time ti.
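Since the least squares transform is simply a trigonometric regression, it is easy to sketch for unevenly spaced data (NumPy assumed; the sample times, frequencies and coefficients are arbitrary choices, and the frequencies need not be "Fourier" frequencies):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.sort(rng.uniform(0.0, 10.0, 50))           # unevenly spaced times
f = 1.0 * np.cos(2 * np.pi * 0.8 * t) - 0.6 * np.sin(2 * np.pi * 0.8 * t)

freqs = [0.8, 1.7]                                # arbitrary trial frequencies
cols = []
for fk in freqs:
    cols += [np.cos(2 * np.pi * fk * t), np.sin(2 * np.pi * fk * t)]
A = np.column_stack(cols)

# Least squares transform: x_hat = N^{-1} A^T f with N = A^T A
N = A.T @ A                                       # not diagonal for uneven spacing
x_hat = np.linalg.solve(N, A.T @ f)
print(np.round(x_hat, 3))   # ≈ [1.0, -0.6, 0.0, 0.0]

# Inverse transform reproduces the fitted series
f_hat = A @ x_hat
print(np.allclose(f_hat, f))  # True
```

Note that N carries off-diagonal elements here, so the coefficients cannot be read off frequency by frequency as in the evenly spaced Fourier case.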

The conventional Fourier transforms are just a special case of these more general least squares definitions for equally weighted and equally spaced data. Although the direct least squares and Fourier transforms are equivalent by definition, the equivalence of the inverse transforms is not easy to see from the matrix expressions. This equivalence can be shown by examining the elements of N⁻¹. Realizing that the Fourier expressions are valid only for equally spaced data and the discrete set of "Fourier" frequencies, it can be shown that the columns of A form an orthogonal basis under these assumptions. The elements of N (summations of trigonometric products) reduce to, for example for the sine terms,

    Σ_{i=0}^{n-1} sin(2πfk ti) sin(2πfl ti) = 0    for k = l = 0 or n/2,
                                             = n/2  for k = l ≠ 0 or n/2,
                                             = 0    for k ≠ l.

Substituting these in N⁻¹ in eqn. (4.28) and expanding in terms of the Fourier transforms for the individual frequencies gives


The difference between this and the inverse Fourier transform in eqn. (4.11) is the use of n/2 in place of n for non-zero frequencies (n is assumed to be even, otherwise n/2 is truncated down to the nearest integer). This is because for real data the transform for negative frequencies is identical to that for positive frequencies. The columns of A corresponding to these frequencies would be identical, thus making N singular when simultaneously estimating all frequencies. Including only the positive frequencies will implicitly account for the identical response for both negative and positive frequencies, thereby effectively doubling the least squares transform with respect to the Fourier transform (i.e., it gives a transform which results in a one-sided spectrum as derived in the next chapter). Note that the Nyquist frequency (at k = n/2) is also excluded from the summation since it is aliased with the zero frequency.

It is important to realize that for unequally spaced data the inverse least squares transform in eqn. (4.28) cannot in general be expressed as a summation of independent contributions from individual frequencies. This is because N in general contains off-diagonal elements between frequencies and even between the sine and cosine components for the same frequency; i.e., these Fourier components are mathematically correlated with each other (i.e., they are no longer orthogonal or linearly independent).

4.4 Weighted Least Squares Transform

The above developments have implicitly assumed the observations to be equally weighted. A more general form of the least squares transforms can be derived by weighting the observations using their associated covariance matrix Cℓ. This also allows one to model any known correlations among the observations. The general expressions for a weighted least squares interpolation or approximation are given by (cf. Vaníček and Krakiwsky [1986, pp. 204-207])

where N = Aᵀ P A is the normal equation coefficient matrix, u = Aᵀ P ℓ is the normal equation constant vector and P = Cℓ⁻¹ is the weight matrix of the observations.

Following the same development as for the unweighted (i.e., equally weighted) least squares transforms, the more general weighted least squares transform for all frequencies simultaneously is given by (cf. eqn. (4.26))

and the transform for each individual frequency fk by (cf. eqn. (4.1))

Using this in the least squares estimation of the Fourier coefficients in eqn. (4.34) and then substituting into eqn. (4.35) gives the inverse least squares transform (cf. eqn. (4.28))

The individual observations f̂(ti) are then


Although the symbolic form of these expressions is identical to those for the unweighted inverse transform in eqns. (4.26) and (4.27), N and F are defined differently (they include the weight matrix P). Note that the inverse transform is essentially just a least squares approximation of ℓ in terms of a Fourier series.
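A numerical sketch of the weighted transform (NumPy assumed; the noise model and weights are arbitrary illustrative choices) shows that only the normal equations change, with P = Cℓ⁻¹ down-weighting the noisier observations:

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.sort(rng.uniform(0.0, 10.0, 60))
sigma = np.where(np.arange(60) % 3 == 0, 1.0, 0.1)   # some points 10x noisier
ell = np.cos(2 * np.pi * 0.5 * t) + sigma * rng.standard_normal(60)

A = np.column_stack([np.cos(2 * np.pi * 0.5 * t),
                     np.sin(2 * np.pi * 0.5 * t)])
P = np.diag(1.0 / sigma**2)                          # weight matrix P = C^{-1}

# Weighted normal equations and least squares estimate
N = A.T @ P @ A
u = A.T @ P @ ell
x_hat = np.linalg.solve(N, u)

# By-product: covariance matrix of the estimated coefficients
C_x = np.linalg.inv(N)
print(x_hat)                   # close to [1, 0]
print(np.sqrt(np.diag(C_x)))   # standard deviations of the coefficients
```

The inverse of N, needed anyway for the solution, directly supplies the coefficient covariances discussed at the end of this section.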

As stated at the end of Section 4.3, it is not possible in general to separately estimate the individual Fourier transform values for different frequencies because of the possible existence of mathematical correlations (non-orthogonality) among the Fourier components (trigonometric functions) due to unequal data spacing or correlations among the observations. If, however, the observations are equally spaced, equally weighted and uncorrelated (i.e., P = I), and the set of "Fourier" frequencies is used, the normal equation matrix becomes diagonal (i.e., N = diag(n, n/2, n/2, ...)) and the direct and inverse least squares transforms become identical to eqns. (4.26) and (4.33), respectively, and are thus equivalent to the standard Fourier ones. The Fourier transform is thus just a special case of the least squares transform.

An attractive feature of the least squares transform is that the covariance matrices for the Fourier coefficients and for the inverse least squares transform are provided by least squares theory as by-products of inverting the normal equation matrix N (cf. Vaníček and Krakiwsky [1986, pp. 209-210]). The covariance matrix for the estimated Fourier coefficients x̂ is given by

while that for the inverse transform (interpolated/approximated observations) is

Cℓ̂ = A Cx̂ Aᵀ .


It is recalled that only frequencies up to, but not including, the Nyquist frequency should be included in the Fourier series in order to avoid singularities in N due to the aliasing effect. In addition, if the data are equally spaced, only the set of standard "Fourier" frequencies should be used (see Section 3.2). Moreover, if the data are real, only the positive Fourier frequencies should be included (see the property in eqn. (3.23)). This then allows for a total of n-1 terms (n/2 cosines and n/2-1 sines) to be estimated from n observations, which gives a nearly unique solution for the Fourier coefficients and enables the observations to be reproduced exactly using the inverse transform.

In addition to accepting unequally spaced data, another advantage of the least squares transforms is that they are not restricted to only the set of standard Fourier frequencies fk = k/(nΔt) = k/T. Any set of frequencies in the range (0, fN) can be used in the expressions. However, only a maximum of n/2 frequencies (n Fourier coefficients) can be estimated simultaneously from only n observations. Moreover, some serious repercussions can also arise if the selected frequencies result in some of the Fourier components (trigonometric functions) becoming nearly linearly dependent on each other, thereby producing an ill-conditioned or near-singular N. To avoid such ill-conditioning it becomes necessary to either select a different set of frequencies to be estimated (e.g., equally spaced frequencies) or simply neglect the correlations in N (i.e., the off-diagonal blocks) and estimate the inverse least squares transform separately for the individual frequencies using eqn. (4.39).

Another problem in dealing with unequally spaced data is that the Nyquist frequency is not well defined, if at all. It was thought that, because a single cycle of a periodic function can be defined with only 3 points, the smallest time interval of a triplet of adjacent points would represent the smallest period which can be estimated. Care would also be needed to ensure that no pair of points in the triplet is so close together that the triplet is essentially only a pair of points for all practical purposes. In practice, however, this triplet interval does not appear to define a Nyquist frequency. As will be shown in the numerical tests of Chapter 7, spectra computed for frequencies well beyond this implied Nyquist frequency do not exhibit the expected mirror image about any Nyquist frequency.

4.5 Effect of Deterministic Model

So far it has been assumed that the original data are stationary and can be modelled completely by a Fourier series. In general this is hardly ever the case. It is more common to first remove the non-stationarity by modelling some known a priori deterministic trends using, e.g., least squares fitting, and to analyse the residual (stochastic) series using the above techniques. The problem, however, is that there may be linear dependence between the deterministic model and the periodic components in the Fourier series (the stochastic model) which may significantly affect the Fourier transform and spectrum.

To account for such effects, it is necessary to reformulate the preceding developments to accommodate both the deterministic model as well as the stochastic model (periodic Fourier series components) in the estimation of a least squares transform. Partitioning A and x,

the data series (observation) vector ℓ is modelled in terms of both deterministic and stochastic (Fourier series) components as


For the deterministic model, AD is the design matrix and xD is the parameter vector to be estimated, and for the stochastic (Fourier series) model, AS is the matrix of cosine and sine basis functions as defined in eqn. (4.22) and xS is the vector of Fourier coefficients to be estimated as defined in eqn. (4.24). The aim is to account for the effect of estimating xD in the estimation of xS.

The weighted least squares estimates of the combined parameter vector x̂ and the approximated observation vector ℓ̂ are given by eqns. (4.34) and (4.35), where the matrices are defined as above. Substituting the above partitioned forms of A and x into these expressions gives

where

Although, for stochastic modelling, we are really only interested in x̂S, it is necessary to account for any effect of the deterministic model on the estimation of x̂S by x̂D. This is obtained by making use of some well-known matrix identities in the evaluation of x̂S. Specifically, the inversion of the normal equation matrix N can be written as [Vaníček and Krakiwsky, 1986, p. 28]

where

Substituting into eqn. (4.45) gives, for x̂S,

where the so-called "reduced" normal equation matrix and constant vector are


Defining the "reduced" weight matrix P*, which accounts for the effect of the deterministic model, by

the normal equations in eqn. (4.58) can be written in the same general form as those without the deterministic model; i.e.,

where

The simultaneous least squares transform F* (for all frequencies simultaneously) which accounts for the deterministic model is then defined in the same manner as in eqn. (4.36):

F* = ASᵀ P* ℓ .

The transform for each individual frequency fk is then (cf. eqn. (4.37))


Similarly, the inverse transform for all observations is defined by eqn. (4.38), using the reduced forms of N and F, as

and for individual observations f̂(ti) by

The expressions for independently estimated frequency components are simply obtained by ignoring the off-diagonal terms between different frequencies in N* and P*.

When there is no deterministic model, AD = 0, P* = P and the above expressions reduce to the same form as in the previous section. Note that the weighted inverse transform is essentially just a weighted least squares approximation of ℓ in terms of the a priori deterministic model and the individual periodic (Fourier series) components.
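The reduction can be sketched numerically. Assuming the reduced weight matrix has the standard projector form P* = P - P AD (ADᵀ P AD)⁻¹ ADᵀ P (an assumption on our part, though consistent with P* = P when AD = 0, as stated above), estimating one periodic component in the presence of a linear trend gives the same result as the joint solution (NumPy assumed; all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 80
t = np.sort(rng.uniform(0.0, 20.0, n))
ell = 0.3 + 0.1 * t + 2.0 * np.sin(2 * np.pi * 0.4 * t)   # trend + periodic signal

AD = np.column_stack([np.ones(n), t])                     # deterministic: linear trend
AS = np.column_stack([np.cos(2 * np.pi * 0.4 * t),
                      np.sin(2 * np.pi * 0.4 * t)])       # stochastic: one frequency
P = np.eye(n)

# Reduced weight matrix accounting for the deterministic model (assumed form)
Pstar = P - P @ AD @ np.linalg.solve(AD.T @ P @ AD, AD.T @ P)

# Reduced normal equations and least squares transform
Nstar = AS.T @ Pstar @ AS
Fstar = AS.T @ Pstar @ ell
xS = np.linalg.solve(Nstar, Fstar)
print(np.round(xS, 6))   # ≈ [0, 2]: the trend no longer biases the estimate

# Cross-check against the joint solution for [x_D, x_S]
A = np.hstack([AD, AS])
x_joint = np.linalg.solve(A.T @ P @ A, A.T @ P @ ell)
print(np.allclose(xS, x_joint[2:]))  # True
```

The reduced solution reproduces the stochastic part of the joint solution without explicitly carrying the deterministic parameters through the spectral estimation.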

4.6 Vector Space Interpretation

The least squares transform can be more elegantly interpreted using the concept of Hilbert spaces and commutative diagrams, i.e., using the language of functional analysis. The fundamental component of functional analysis is the space in which we want to work. The elements in a space can be real numbers, complex numbers, vectors, or matrices, as well as functions of these. Here we consider the more restrictive case of vector spaces consisting of sets of vectors which can be visualized as positions in the space. A brief review of functional analysis as it applies to the geometrical interpretation of the least squares transform is given. For more on functional analysis see, e.g., Kreyszig [1978].

There are various classifications of spaces. The most general type of space is the metric space, in which the concept of a distance (or metric) ρ(x,y) between two elements x and y in the space is defined. A normed space is a metric space in which a norm ||x|| may be induced as the distance from the null element. The norm ||x|| of a single element x is just its length ρ(x,0). A Hilbert space is a normed space in which a scalar (or inner) product ⟨x,y⟩ for a pair of elements x and y may be induced by the relations

There are many ways of defining a scalar product For vector spaces of f i t e

dimension the most common is the simple linear combination of vector elements; Le., for

vectors x and y ,

For compact vector spaces the analogous form of the scalar product is

A more general definition of the discrete scalar p d u c t , and the one used here. is the norm

defineci by


where P, the weight matrix for the vector space, is generally the inverse of the covariance matrix of the vector elements. This corresponds to a generalization of Euclidean space with metric tensor I into a Riemannian space with metric tensor P. Note that for compact spaces, the vectors and matrices will also be compact and contain continuous functions.

An interpretation of basic least squares theory in terms of functional analysis is given by Vaníček [1986]. The theory is interpreted using commutative diagrams which describe the various transformations between probabilistic (Hilbert) spaces. The same diagram can be used to interpret the least squares transform. In this diagram φ is the observation vector belonging to the observation space Φ, C_φ is the observation covariance matrix (not necessarily diagonal) defining the scalar product (and norm and distance) in this space, x is the parameter vector of Fourier coefficients to be estimated belonging to the parameter space X, and A is the design matrix transforming the parameters to the observations, which contains the sine and cosine functions (basis functions).

The commutative diagram is set up by first defining the transformation (i.e., the observation equations) φ = A x from the parameter space X to the observation space Φ. The weight matrices P_x and P_φ define the transformations to the dual parameter space X* and dual observation space Φ*, respectively. The transformation from the dual observation space Φ* to the dual parameter space X* is defined by Aᵀ. Assuming the design matrix A and covariance matrix C_φ are known, the remaining transformations can be obtained from the commutative diagram using the following steps.

1. P_φ = C_φ⁻¹
2. P_x = Aᵀ P_φ A ,  C_x = P_x⁻¹
3. A⁻ = C_x Aᵀ P_φ = (Aᵀ P_φ A)⁻¹ Aᵀ P_φ
4. F = Aᵀ P_φ
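These four steps can also be traced numerically. The following Python sketch is illustrative only: the 4×2 design matrix and the diagonal observation covariance matrix are arbitrary choices, not values from the thesis. It builds P_φ, P_x, C_x, the pseudo-inverse A⁻ and the transform operator F = Aᵀ P_φ, and checks that A⁻ is a left inverse of A.

```python
def matmul(X, Y):
    # plain triple-loop matrix product for small lists-of-lists matrices
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def inv2(M):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Arbitrary 4x2 design matrix A and diagonal observation covariance C_phi
A = [[1.0, 0.0], [1.0, 1.0], [1.0, 2.0], [1.0, 3.0]]
C_phi = [[0.5 if i == j else 0.0 for j in range(4)] for i in range(4)]

# Step 1: P_phi = C_phi^-1 (diagonal here, so invert elementwise)
P_phi = [[1.0 / C_phi[i][i] if i == j else 0.0 for j in range(4)]
         for i in range(4)]
# Step 2: P_x = A^T P_phi A  and  C_x = P_x^-1
P_x = matmul(matmul(transpose(A), P_phi), A)
C_x = inv2(P_x)
# Step 3: pseudo-inverse A^- = C_x A^T P_phi
A_minus = matmul(matmul(C_x, transpose(A)), P_phi)
# Step 4: transform operator F = A^T P_phi
F_op = matmul(transpose(A), P_phi)

# A^- is a left inverse of A, so A^- A should be the 2x2 identity
I2 = matmul(A_minus, A)
```

The check A⁻A = I is what makes the diagram commute: going from X to Φ by A and back by A⁻ recovers the parameters.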


These steps are illustrated in Figure 4.1. Here, F is defined slightly differently than in the preceding developments. It represents the transform operator that acts on the observations, and not the entire transform itself as defined in Section 4.3. Similarly, F⁻¹ is the inverse operator.

It can be seen from the commutative diagram that the least squares Fourier transform F is a transformation from the observation space Φ to the dual parameter space X* via the dual observation space Φ*. The inverse least squares Fourier transform F⁻¹ is then derived by proceeding from the dual parameter space X* to the observation space Φ via the parameter space X.

Figure 4.1: Commutative diagram for the direct and inverse least squares transform, where F denotes the direct transform and F⁻¹ the inverse transform.


The design matrix A contains the trigonometric functions defining the Fourier series representation of the observations. The individual sine and cosine terms (columns of A) form a basis for the observation space. For the standard Fourier transform, the data are equally spaced, equally weighted and uncorrelated, so that the columns of A form an orthogonal basis. The normal equation matrix N = Aᵀ P A then becomes a diagonal matrix, as does the covariance matrix of the parameters. In the more general least squares transform, the basis functions are not necessarily orthogonal, although, in practice, this is usually nearly the case even with unequally spaced data.
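The orthogonality of the equally spaced case is easy to verify numerically. The sketch below (Python; the sample size and the particular Fourier frequencies are arbitrary choices for illustration) builds the cosine and sine columns of A for equally spaced, equally weighted data and shows that N = AᵀA is diagonal with diagonal elements n/2.

```python
import math

n = 16
t = [j / n for j in range(n)]          # equally spaced sampling over one unit
freqs = [1, 2, 3]                      # a few Fourier frequencies (cycles per unit)

# Design matrix with one cosine and one sine column per frequency
A = [[f(2 * math.pi * k * ti) for k in freqs for f in (math.cos, math.sin)]
     for ti in t]

m = len(A[0])
# Normal equation matrix N = A^T A (unit weights, P = I)
N = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(m)]
     for p in range(m)]

# Off-diagonal terms vanish; each diagonal term equals n/2 = 8
off_diag_max = max(abs(N[p][q]) for p in range(m) for q in range(m) if p != q)
diag = [N[p][p] for p in range(m)]
```

With unequal spacing the same construction produces nonzero off-diagonal terms, which is exactly why the general transform requires the full N⁻¹.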

4.7 Applications

The above least squares transform can be applied in the same manner as the traditional Fourier one, with the added advantage that it can be used not only for equally spaced data series, but also for unequally spaced series and for any arbitrary set of frequencies. One of the most important applications (to be discussed in the next chapter) is the determination of the power spectral density for unequally spaced data that also accounts for a deterministic model. In this case there is no need to determine a frequency response function for the deterministic model in order to remove its effect from the spectrum of the model residuals. The correct spectrum is obtained directly when the deterministic model is accounted for in the formulation of the spectrum.

Another important application of the least squares transform is the indirect estimation of autocovariance/autocorrelation functions using the correlation theorem (see Chapter 6). Instead of transforming the effect of all the spectral values, a smoother autocovariance function can be obtained by using only the significant spectral values. Because these significant spectral components are not likely to be evenly spaced, it is necessary to use the inverse least squares transform to convert them into an autocorrelation function.


The inverse least squares transform can also be used in data series approximation and interpolation problems. In these applications the direct Fourier transform is used to estimate Fourier series coefficients, which are then used in the inverse transform to approximate or interpolate the original series. The degree of smoothing of the original series can be increased by including only frequencies corresponding to highly significant Fourier coefficients (or spectral peaks).
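A minimal sketch of this smoothing and interpolation use is given below (Python; the sampling times, frequencies and amplitudes are invented for illustration, and the significant frequency is assumed known here rather than found by a significance test). The direct transform estimates the two Fourier coefficients at the significant frequency from unequally spaced data, and the inverse transform then evaluates the smoothed series at an arbitrary new time.

```python
import math

# Unequally spaced samples of a sinusoid plus a small high-frequency ripple
t = [0.0, 0.13, 0.31, 0.42, 0.58, 0.66, 0.79, 0.93]
f_sig = 1.0                                  # the significant frequency (assumed known)
phi = [math.sin(2 * math.pi * f_sig * ti) + 0.05 * math.sin(2 * math.pi * 7 * ti)
       for ti in t]

# Direct transform at f_sig: solve the 2x2 normal equations for (a, b)
c = [math.cos(2 * math.pi * f_sig * ti) for ti in t]
s = [math.sin(2 * math.pi * f_sig * ti) for ti in t]
Ncc = sum(ci * ci for ci in c); Nss = sum(si * si for si in s)
Ncs = sum(ci * si for ci, si in zip(c, s))
Fc = sum(ci * yi for ci, yi in zip(c, phi))
Fs = sum(si * yi for si, yi in zip(s, phi))
det = Ncc * Nss - Ncs * Ncs
a = (Nss * Fc - Ncs * Fs) / det
b = (Ncc * Fs - Ncs * Fc) / det

# Inverse transform: evaluate the smoothed series at an arbitrary new time
def smooth(ti):
    return a * math.cos(2 * math.pi * f_sig * ti) + b * math.sin(2 * math.pi * f_sig * ti)

approx = smooth(0.25)   # near sin(2*pi*0.25) = 1, with the ripple smoothed out
```

Because only the significant frequency is carried into the inverse transform, the high-frequency ripple is suppressed in the interpolated value.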


Chapter 5 The Least Squares Spectrum

5.1 Introduction

As discussed in the previous chapters, traditional methods of determining power spectral density and autocorrelation functions are significantly limited in their application because they always require the data to be equally spaced in the argument. Other reasons for seeking alternative techniques concern the limitations of the discrete Fourier transform and FFT commonly used to generate spectra as well as autocorrelation functions (transformed from the spectrum). These include the use of only the set of "Fourier" frequencies (integer multiples of the fundamental frequency), and the requirement of 2ⁿ data points (for the FFT algorithm). In addition, the traditional techniques do not consider any interaction (correlation) between the deterministic model and the implied periodic components modelled in the spectrum. Moreover, the data cannot be weighted in the transform computation in accordance with their assumed probability density function. Thus, some observations with relatively large random errors will be weighted the same as other observations that may be many times more precise.

Traditional methods of computing power spectral density functions from unequally spaced data have often been based on interpolation or approximation. That is, the original unequally spaced data series was interpolated or approximated to an equally spaced series to which the standard Fourier techniques could then be applied. The problem, however, is that this approach really creates a new data series that depends on the smoothness of the original series, the presence of data gaps and the subjective choice of the interpolating or approximating function. The interpolation also tends to smooth out any high frequency components of the original data series.

To overcome these limitations and difficulties, Vaníček [1969a] developed a method of spectrum computation based on least squares estimation. This method was further developed in Vaníček [1971], Steeves [1981] and Wells et al. [1985] and forms the basis of other similar techniques in slightly different forms promoted by various authors since (e.g., Rochester et al. [1974], Lomb [1976], Ferraz-Mello [1981], Scargle [1982], Horne and Baliunas [1986]). In this chapter, the same basic least squares spectrum is reformulated in terms of the newly developed least squares transform. A new "simultaneous" spectral estimation procedure, somewhat similar to that used by Rochester et al. [1974], is also developed.

5.2 Matrix Form of Fourier Spectrum

Before giving the expressions for the least squares spectrum, the Fourier spectrum is first expressed in matrix form. This is done by simply using the matrix expressions for the Fourier transform (eqns. (4.9) and (4.21)) in the definition of total power (eqn. (3.47)) and the individual Fourier spectral estimates (eqn. (3.48)). Parseval's relation in eqn. (3.47) can then be written in matrix notation as

Total power = φᵀ φ = (1/n) Fᵀ F ,

where


The individual spectral components for the two-sided power spectral density function (eqn. (3.48)) are then given by

The one-sided spectral density function is twice the two-sided function and is defined by

S(f_k) = (1/n) F_kᵀ F_k = (1/n) |F(f_k)|²   for k = 0, n/2
S(f_k) = (2/n) F_kᵀ F_k = (2/n) |F(f_k)|²   for k = 1, ..., n/2−1   (5.3)

5.3 Least Squares Spectrum

The least squares spectrum was originally developed by Vaníček [1969a; 1972] (see also Steeves [1981] and Wells et al. [1985]). The expressions for this form of the least squares spectrum (referred to here as the "conventional" form) can be developed in terms of the (unweighted) least squares transform. First, the total power is given by

Total power = φᵀ φ .

Substituting for the inverse least squares transform in eqn. (4.28) results in

Total power = φᵀ φ = Fᵀ N⁻¹ F .


Note that, generally, the total power cannot be expressed as a sum of individual contributions from the different frequency components. As with the inverse least squares transform, the problem is that with unequally spaced data, N is not a diagonal matrix because the Fourier components (trig functions) are not orthogonal to (linearly independent of) each other. As explained above, this problem is avoided by simply examining one frequency at a time, independently (out of context) of the others. This is equivalent to ignoring the linear dependence between different frequency components in N and amounts to defining the spectrum as the independent contribution of each frequency component to the total power.

Following this approach, the spectral component s(f_k) (for the one-sided least squares power spectral density function) is defined by

where N_k is the k-th diagonal block of N corresponding to frequency f_k. The normalized spectral values s̄(f_k) are then

The normalized spectrum represents the percentage of variation in the original data series independently explained by each spectral component. In its basic philosophy, this corresponds to the R² statistic in regression analysis [Draper and Smith, 1981, p. 33].

One of the most significant advantages of the least squares spectrum, other than handling unequally spaced data, is the ability to estimate spectral components for any real (arbitrary) frequency, not just the set of "Fourier" frequencies. The expressions in eqns. (5.8) and (5.9) essentially provide continuous estimates for any set of frequencies. The usual procedure is to take a set of equally spaced frequencies between zero and the estimated Nyquist or maximum frequency (note that the Nyquist frequency is undefined for unevenly spaced data, as discussed in Section 4.4). The precise frequency location of significant peaks can then be determined by "zooming" in on that frequency area of the spectrum. This allows one to locate the frequencies of significant peaks to any resolution, within the limits of the data sampling.
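The out-of-context estimation and the "zooming" procedure can be sketched as follows (Python; the sampling times, tone frequency and trial grids are invented for illustration, and the sketch is unweighted with no deterministic model). Each trial frequency is evaluated independently through its own 2×2 block, s(f) = F_kᵀ N_k⁻¹ F_k, so the frequency grid is completely arbitrary.

```python
import math

def ls_spectral_value(t, y, f):
    """Out-of-context spectral value s(f) = F_k^T N_k^-1 F_k for one trial
    frequency (unit weights, no deterministic model)."""
    w = 2 * math.pi * f
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    Ncc = sum(x * x for x in c); Nss = sum(x * x for x in s)
    Ncs = sum(u * v for u, v in zip(c, s))
    Fc = sum(u * v for u, v in zip(c, y))
    Fs = sum(u * v for u, v in zip(s, y))
    det = Ncc * Nss - Ncs ** 2
    # quadratic form F_k^T N_k^-1 F_k written out for the 2x2 block
    return (Nss * Fc * Fc - 2 * Ncs * Fc * Fs + Ncc * Fs * Fs) / det

# Unequally spaced series with a single tone at 3.0 cycles/unit
t = [0.01, 0.08, 0.23, 0.29, 0.41, 0.47, 0.55, 0.62, 0.71, 0.83, 0.88, 0.97]
y = [math.cos(2 * math.pi * 3.0 * ti + 0.4) for ti in t]

# Coarse scan over arbitrary (non-Fourier) frequencies, then "zoom in"
coarse = [0.5 * k for k in range(1, 12)]              # 0.5 ... 5.5
best = max(coarse, key=lambda f: ls_spectral_value(t, y, f))
fine = [best - 0.25 + 0.01 * k for k in range(51)]    # finer grid around the peak
peak = max(fine, key=lambda f: ls_spectral_value(t, y, f))
```

Because the spectral value at the true frequency equals the full explained power of the series, the fine grid localizes the peak to its own step size, and nothing prevents repeating the refinement further.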

5.4 Weighted Least Squares Spectrum

The more general weighted least squares power spectrum is obtained in a similar way, except that the general (weighted) least squares transforms are used in the above developments. In this more general situation of an observation weight matrix, the total power is defined by the weighted sum of squares as

Total power = φᵀ P φ , (5.10)

where P is the inverse of the observation covariance matrix. Substituting for φ using the weighted least squares inverse transform in eqn. (4.38) and noting that Aᵀ P A = N gives

Total power = φᵀ P φ = Fᵀ N⁻¹ F . (5.11)

Vaníček [1969a] defines the spectrum as the independent frequency contributions to this total power (cf. eqns. (5.8) and (5.9)). That is, each frequency component is estimated independently, or out of context, of the others. Steeves [1981] extends this approach by incorporating the weight matrix (P) of the observations. The independent estimate of each spectral component is then obtained using the weighted least squares transform from eqn. (4.36) in the spectral estimates given by eqns. (5.8) and (5.9), where the weighted normal equation matrix N_k for the k-th spectral component is defined by

This type of spectral estimation is referred to here as "independent" or "out-of-context" spectral estimation.

An alternative approach to least squares spectral estimation can be developed in which all spectral components are estimated simultaneously, i.e., in the context of the others being present. This approach takes into account the non-orthogonality (mathematical correlations) between the spectral components. It is effectively equivalent to the geometrical projection of the total multidimensional quadratic form representing the total power onto the subspace for each individual spectral component. This is analogous to the way in which quadratic forms and confidence regions are defined for station coordinates in geodetic networks. This estimation method is developed by first realizing that in eqn. (5.11) for the total power the inverse of the normal equation matrix N⁻¹ is equivalent to the covariance matrix C_x̂ for the simultaneously estimated Fourier coefficients x̂ (cf. eqn. (4.40)). The total power can then be written as

Total power = Fᵀ C_x̂ F . (5.13)

Substituting for the weighted least squares transform in eqn. (4.36), the total power can be expressed in terms of the estimated Fourier coefficients:

Total power = x̂ᵀ C_x̂⁻¹ x̂ .


The weighted least squares spectrum is defined as the contribution s(f_k) of the individual frequency components (Fourier coefficients) to the total power. That is, the quadratic form of the estimated Fourier coefficients for individual frequencies is

where C_x̂k is the k-th diagonal block of the covariance matrix C_x̂. Substituting back in the weighted least squares transform in eqn. (4.37) for individual frequencies gives the weighted least squares spectral values

which account for any non-orthogonality (mathematical correlations) among the different spectral components. Note that C_x̂k is not the same as N_k⁻¹ in the expression for the independently estimated (out-of-context) least squares spectrum. Using the k-th diagonal block from C_x̂ is the same as extracting the k-th diagonal block from N⁻¹ instead of from N as in the conventional expressions (cf. Steeves [1981]). Thus, eqn. (5.16) may also be written as

The normalized spectral value s̄(f_k) for frequency f_k is obtained by dividing by the total power, i.e.,


This type of spectral estimation is referred to here as "simultaneous" or "in-context" spectral estimation.

All linear dependence (mathematical correlation) among the frequency components is accounted for in this simultaneous estimate of the weighted least squares spectrum. When the correlations between the frequency components are ignored, N becomes a block diagonal matrix of normal equation matrices N_k for each individual frequency f_k, and C_x̂k = N_k⁻¹. The expressions given here are then equivalent to those in Steeves [1981] for the independent estimation of spectral components where no deterministic model is considered. When the data are also equally weighted, these expressions are identical to those in Vaníček [1969a; 1972]. When the data are equally spaced and the set of "Fourier" frequencies is used, N⁻¹ = diag(2/n), and the weighted least squares spectral values are then equivalent to the standard one-sided Fourier ones given by eqn. (3.41).
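The collapse of the simultaneous estimate to the independent one in the equally spaced, equally weighted case can be verified numerically. In the Python sketch below (illustrative only; the sample size, frequencies and amplitudes are invented), the full normal equations are solved at once and compared with separate 2×2 solves per frequency; because N is block diagonal here, the two coincide.

```python
import math

def solve(M, v):
    """Solve M x = v by Gaussian elimination with partial pivoting (small systems)."""
    n = len(v)
    a = [row[:] + [v[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            fac = a[r][col] / a[col][col]
            for k in range(col, n + 1):
                a[r][k] -= fac * a[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (a[i][n] - sum(a[i][j] * x[j] for j in range(i + 1, n))) / a[i][i]
    return x

n = 16
t = [j / n for j in range(n)]                     # equally spaced sampling
y = [1.5 * math.cos(2 * math.pi * 2 * ti) + 0.7 * math.sin(2 * math.pi * 5 * ti)
     for ti in t]
freqs = [2, 5]

# Design matrix: one cosine and one sine column per frequency
A = [[f(2 * math.pi * k * ti) for k in freqs for f in (math.cos, math.sin)]
     for ti in t]
m = len(A[0])
N = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(m)] for p in range(m)]
F = [sum(A[i][p] * y[i] for i in range(n)) for p in range(m)]

# Simultaneous (in-context) estimate: solve the full normal equations at once
x_sim = solve(N, F)

# Independent (out-of-context) estimate: solve each 2x2 frequency block alone
x_ind = []
for k in range(0, m, 2):
    Nk = [[N[k][k], N[k][k + 1]], [N[k + 1][k], N[k + 1][k + 1]]]
    x_ind += solve(Nk, [F[k], F[k + 1]])

# For equally spaced data at Fourier frequencies, N is block diagonal,
# so the two estimates coincide (here roughly [1.5, 0, 0, 0.7])
max_diff = max(abs(p - q) for p, q in zip(x_sim, x_ind))
```

With unequally spaced times the off-diagonal blocks of N are nonzero and the two estimates would differ, which is the point of the in-context formulation.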

Vaníček [1969a; 1972] also includes some simplifying trigonometric identities that make the evaluation of the elements in N⁻¹ more efficient for equally spaced data (see also Wells et al. [1985]). These have been omitted from the developments here for the sake of simplicity, although any routine application should include these optimizations to reduce the required computational effort.

This approach is also similar to that used by Rochester et al. [1974] in that correlations between different frequencies are accounted for. However, the correlations among the coefficients for the same frequency are implicitly ignored in their expressions because of the use of complex notation. The real (cosine) and imaginary (sine) terms for the same frequency are treated independently. Only when the data are equally spaced is their approach equivalent to the preceding ones.

The same comments on the Fourier transform regarding frequencies greater than the Nyquist frequency also apply here for the simultaneous estimate of the fully weighted least squares spectrum. Singularities in N should be avoided by using only frequencies up to the Nyquist frequency. Frequencies that are too closely spaced can also cause ill-conditioning problems in the simultaneous estimation of different spectral values.

Finally, it should be emphasized that these definitions of the least squares spectrum do not satisfy Parseval's relation. That is, the sum of these spectral values does not equal the total power in eqn. (5.7). Because of the correlation among the frequencies, there is no equivalent of Parseval's relation for unequally spaced data.

5.5 Effect of Deterministic Mode1

In the developments thus far, the mathematical correlations (linear dependence) between the spectral components and any deterministic model have been ignored, as they are in the traditional Fourier method. One of the most significant contributions of Vaníček [1969a; 1972] was the incorporation of the effect of any a priori deterministic model in the determination of the spectral values. An important consequence (advantage) of this is that it alleviates the need to determine frequency response functions for the deterministic model. In the context of spectrum estimation, frequency response functions are used to account for the effect of the deterministic model on the spectrum. Here, the deterministic effects are modelled explicitly in the formulation of the expressions for the estimation of the spectral components.

The effect of the deterministic model on the spectrum is obtained in the same way as for the inverse least squares transform in the previous chapter. The spectrum is defined as the contribution of each frequency component to the total power. This can be expressed in terms of the quadratic form of the estimated Fourier coefficients x̂ as in eqn. (5.15). However, to account for the effects of the deterministic model, the quadratic form must be based on estimates from the combined deterministic and "spectral" model as explained in Section 4.5. That is, the spectral component s(f_k) for frequency f_k is given by


where the matrix components for frequency f_k are, from eqn. (4.62),

x̂_k = N_k*⁻¹ F_k* = N_k*⁻¹ A_Fkᵀ P* φ ,

and N*⁻¹ and P* are defined in eqns. (4.64) and (4.61), respectively. Note that these expressions are formally identical to those without a deterministic model, except that the "reduced" weight matrix P* in eqn. (4.61) is used in place of P. The effect of the deterministic model is therefore completely contained within P*.

Following the same substitution procedure as in the previous section, the least squares estimates of the spectral values can be written in terms of the weighted least squares transform F* in eqn. (4.67) as (cf. Vaníček [1971, eqn. (2.4)])

The normalized spectrum is defined as before to be the percentage of the variation in the data explained by each spectral component. In the presence of an a priori deterministic model, this represents the variance explained by each spectral component which is not accounted for by the deterministic model. The part that is not explained by the deterministic model is just the residuals r_D from the deterministic model alone. That is, using the notation of Section 4.5,


where

and N_D and x̂_D are defined by eqns. (4.47) and (4.51), respectively. Expanding r_D and rearranging gives

r_D = (I − A_D N_D⁻¹ A_Dᵀ P) φ .

Substituting this in the quadratic form of r_D and simplifying results in

r_Dᵀ P r_D = φᵀ (P − P A_D N_D⁻¹ A_Dᵀ P) φ = φᵀ P* φ , (5.26)

where P* is the "reduced" weight matrix accounting for the deterministic model. Dividing the spectral values in eqn. (5.22) by (5.26), the normalized spectrum that accounts for the deterministic model is

The consideration of which frequencies to include in the weighted least squares spectrum must be done very carefully when accounting for the effects of a deterministic model. This is especially important if periodic trends are present in the deterministic model. In that case, the spectral value for the same frequency is undefined in the least squares spectrum, because that variation has effectively been accounted for in the deterministic model; i.e., the periodic component in the deterministic model and the same component in the spectral model will be perfectly linearly dependent. Evaluating spectral components for the same frequencies as the periodic trends will result in a singular normal equation matrix N_k*. Present algorithms for the least squares spectrum (e.g., Wells et al. [1985]) check for this situation by inspecting the determinant of N_k*; a zero or near-zero value indicates a singularity and thus an undefined spectral value.

Ignoring correlations between spectral components is generally acceptable within the context of improving the deterministic model. In this case the objective is to iteratively search for only the largest spectral component in a residual data series from a deterministic model. Any significant spectral values can then be incorporated into the deterministic model, either explicitly as a periodic trend or implicitly as part of a more complex model of the underlying physical processes. In this way the method effectively accounts for the correlations among only the most significant spectral components that are iteratively included in the deterministic model.
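The iterative procedure just described can be sketched schematically (Python; the sampling times, tone frequencies, amplitudes and the fixed two iterations are invented for illustration, and a real application would stop according to the significance tests of Section 5.6 rather than after a fixed count). At each pass, the largest out-of-context spectral peak of the residual series is found, the corresponding sinusoid is fitted and absorbed into the model, and the search repeats on the new residuals.

```python
import math

def fit_sinusoid(t, r, f):
    """Least squares fit of a*cos + b*sin at frequency f via the 2x2 normal
    equations; returns (a, b, explained_power) with explained_power = F^T N^-1 F."""
    w = 2 * math.pi * f
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    ncc = sum(x * x for x in c); nss = sum(x * x for x in s)
    ncs = sum(u * v for u, v in zip(c, s))
    fc = sum(u * v for u, v in zip(c, r))
    fs = sum(u * v for u, v in zip(s, r))
    det = ncc * nss - ncs * ncs
    a = (nss * fc - ncs * fs) / det
    b = (ncc * fs - ncs * fc) / det
    return a, b, a * fc + b * fs          # x_hat^T F equals F^T N^-1 F

t = [0.03, 0.09, 0.17, 0.22, 0.31, 0.38, 0.44, 0.51,
     0.58, 0.63, 0.72, 0.79, 0.84, 0.90, 0.94, 0.99]
y = [2.0 * math.cos(2 * math.pi * 2.0 * ti) + 0.8 * math.sin(2 * math.pi * 5.0 * ti)
     for ti in t]

grid = [0.25 * k for k in range(2, 33)]   # trial frequencies 0.5 ... 8.0
residual = y[:]
found = []
for _ in range(2):                        # extract the two largest components
    fbest = max(grid, key=lambda f: fit_sinusoid(t, residual, f)[2])
    a, b, _ = fit_sinusoid(t, residual, fbest)
    w = 2 * math.pi * fbest
    residual = [ri - a * math.cos(w * ti) - b * math.sin(w * ti)
                for ri, ti in zip(residual, t)]
    found.append(fbest)
```

Each extracted component becomes part of the (now periodic) deterministic model, so the correlations with the remaining spectrum are handled implicitly through the residual series.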

5.6 Statistical Tests

Another great advantage of the least squares spectrum is that the significance of the least squares spectral values can be tested statistically in a rigorous manner. The following statistical tests are based on Steeves [1981].

It is well known in statistics that a quadratic form has a chi-square distribution with degrees of freedom equal to the rank of the weight matrix. Expressing the estimated spectral values in terms of the quadratic form of the estimated Fourier coefficients x̂_k in eqn. (5.15), this quantity then has a chi-square distribution χ²(u; 1−α) with u = 2 degrees of freedom (representing the rank of the covariance matrix C_x̂k for the two Fourier coefficients for frequency f_k) [Vaníček and Krakiwsky, 1986]. A statistical test of the null hypothesis H₀: s(f_k) = 0 can then be made using the decision function


s(f_k) ≤ χ²(2; 1−α) : accept H₀: s(f_k) = 0
s(f_k) > χ²(2; 1−α) : reject H₀

where α is the significance level of the test (usually 5%).

If the scale (i.e., the a priori variance factor σ₀²) of C_x̂ is unknown, the estimated value can be obtained from

where ν = n − 2 is the degrees of freedom (two degrees of freedom are lost to the estimation of the two Fourier coefficients). This estimated variance factor is used to scale the covariance matrix C_x̂k; the resulting test statistic then has a Fisher distribution F(ν, u; 1−α) with ν = n − 2 and u = 2 degrees of freedom. A statistical test of the null hypothesis H₀: s(f_k) = 0 can then be made using the decision function

s(f_k) ≤ F(ν, 2; 1−α) : accept H₀: s(f_k) = 0
s(f_k) > F(ν, 2; 1−α) : reject H₀

The distribution of the normalized spectral values is obtained by first rewriting the quadratic form φᵀ P φ in terms of the residuals r̂ and estimated observations φ̂ from the spectral model. Realizing that

the quadratic form of the residuals can be expressed as

r̂ᵀ P r̂ = φᵀ (P − P A N⁻¹ Aᵀ P) φ = φᵀ P* φ .


Noting that P = C_φ⁻¹ and rearranging,

Thus, the quadratic form of the observations is

which represents the total power. The quadratic forms on the right side of eqn. (5.34) are well known (see, e.g., Vaníček and Krakiwsky [1986]). The quadratic form x̂_kᵀ C_x̂k⁻¹ x̂_k of the estimated Fourier coefficients has a chi-square distribution with 2 degrees of freedom (the number of Fourier coefficients for frequency f_k). The quadratic form r̂ᵀ P r̂ of the residuals has a chi-square distribution with ν = n − u degrees of freedom, where u is the total number of Fourier coefficients being simultaneously estimated (if the spectral values are being estimated independently, then u = 2).

Using eqns. (5.15) and (5.34) in the expression for the normalized spectral value in eqn. (5.18) and rearranging gives

where the ratio of two quadratic forms in the denominator has the following Fisher distribution


where "~" means "is distributed as", and ν and 2 are the degrees of freedom of the numerator and denominator, respectively. Note the use of the α probability level instead of 1−α. This is because of the inverse relation between this F statistic and the spectral value (for which we want the 1−α probability level). Given the distribution of the ratio of the quadratic forms in eqn. (5.35), the distribution of the normalized spectral value is then (cf. Steeves [1981, eqn. (3.19)])

A statistical test of the null hypothesis H₀: s̄(f_k) = 0 can then be made using the decision function

The above Fisher distribution can be simplified further using the inverse relation for the Fisher distribution [Freund, 1971],

When the first degree of freedom is two, this can be approximated by [Steeves, 1981],


This results in a statistical test of the null hypothesis H₀: s̄(f_k) = 0 using the decision function

The statistical tests for spectral values that account for the presence of any deterministic model are exactly the same as above, except that the "reduced" observation weight matrix P* is used in place of the actual weight matrix P in the computation of the quadratic forms.

The above tests are the so-called "out-of-context" tests, which test the individual spectral components out of context of the others being estimated (see Vaníček and Krakiwsky [1986, pp. 229-231]). They are identical to those in Steeves [1981] and apply to the independent estimation of the spectral components, but not to the estimation of all the spectral values simultaneously. In that case the "in-context" test should be used, which takes into consideration the estimation of the other spectral components. Two approaches can be used in this regard. The simplest one is to use the simultaneous confidence region for all m frequency components being estimated. This gives the same test as in eqn. (5.41) except that 2m degrees of freedom are used in place of 2. However, this approach usually results in too pessimistic (large) a limit to be of any real value. A better approach is to use the relation between the simultaneous probability α for the joint test of all spectral components together and the "local" probability α₀ for the test of each spectral component separately. Following Miller [1966], the relation is given to first-order approximation by α₀ = α/m. The in-context test is then obtained by using α₀ in place of α in the above tests. Note that Press and Rybicki [1989] and Press et al. [1992, p. 570] also use the in-context test based on simultaneous probability. However, they incorrectly apply it to the testing of the independently estimated spectral components, where the correlations among the different frequency components are ignored. The in-context test should only be used for the simultaneous estimates of the spectral values, where the correlations among all the frequencies used are accounted for.
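The first-order relation α₀ = α/m can be checked numerically. The sketch below (Python; m and α are invented example values) uses the 2-degrees-of-freedom closed form P(χ²₂ > x) = exp(−x/2) for the critical values, and verifies that m independent tests at the local level α₀ give a joint false-rejection probability close to, and slightly below, the desired α; the independence assumed in this check is itself only an approximation, as the text notes.

```python
import math

def local_alpha(alpha, m):
    # First-order relation between the joint level and the per-frequency level
    return alpha / m

m = 40                          # number of simultaneously estimated frequencies
alpha = 0.05                    # desired joint significance level
a0 = local_alpha(alpha, m)      # 0.00125
# For m independent tests at level a0, the joint false-rejection probability is
joint = 1.0 - (1.0 - a0) ** m   # close to (and slightly below) alpha
# The per-frequency chi-square (2 dof) critical value grows accordingly
crit_local = -2.0 * math.log(a0)
crit_single = -2.0 * math.log(alpha)
```

The raised local critical value is what protects the joint test from spurious peaks when many frequencies are screened at once.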

5.7 Estimation Algorithms

As stated at the beginning of this chapter, there have been a variety of papers since Vaníček [1969a, 1971] describing the same least squares spectrum (independently estimated spectral components) in slightly different forms; e.g., Lomb [1976], Ferraz-Mello [1981], Scargle [1982], Horne and Baliunas [1986]. It can be shown, however, that under the same assumptions all of these are identical to Vaníček's more general approach. The differences are only the use of slightly different normalization methods and different numerical methods for solving the normal equations.
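This equivalence can be checked numerically for the simplest case of unit weights and no a priori model. The Python sketch below (the sampling times, frequencies and amplitudes are invented for illustration) computes the explained power FᵀN⁻¹F of a direct two-parameter least squares fit at one trial frequency, and a Lomb-style time-shifted form in which the shift τ is chosen to diagonalize the 2×2 normal equations; apart from the conventional factor of ½ in Lomb's normalization, the two agree to machine precision.

```python
import math

def direct_ls_power(t, y, f):
    """Explained power F^T N^-1 F of a single-sinusoid least squares fit."""
    w = 2 * math.pi * f
    c = [math.cos(w * ti) for ti in t]
    s = [math.sin(w * ti) for ti in t]
    Ncc = sum(x * x for x in c); Nss = sum(x * x for x in s)
    Ncs = sum(u * v for u, v in zip(c, s))
    Fc = sum(u * v for u, v in zip(c, y))
    Fs = sum(u * v for u, v in zip(s, y))
    det = Ncc * Nss - Ncs ** 2
    return (Nss * Fc * Fc - 2 * Ncs * Fc * Fs + Ncc * Fs * Fs) / det

def lomb_power(t, y, f):
    """Time-shifted (Lomb-style) form: tau diagonalizes the 2x2 normal equations."""
    w = 2 * math.pi * f
    tau = math.atan2(sum(math.sin(2 * w * ti) for ti in t),
                     sum(math.cos(2 * w * ti) for ti in t)) / (2 * w)
    c = [math.cos(w * (ti - tau)) for ti in t]
    s = [math.sin(w * (ti - tau)) for ti in t]
    return 0.5 * (sum(ci * yi for ci, yi in zip(c, y)) ** 2 / sum(ci * ci for ci in c)
                  + sum(si * yi for si, yi in zip(s, y)) ** 2 / sum(si * si for si in s))

t = [0.02, 0.11, 0.19, 0.33, 0.46, 0.52, 0.67, 0.74, 0.86, 0.95]
y = [math.sin(2 * math.pi * 2.4 * ti) + 0.3 * math.cos(2 * math.pi * 4.1 * ti)
     for ti in t]
f = 2.4
equal = abs(2 * lomb_power(t, y, f) - direct_ls_power(t, y, f)) < 1e-9
```

The agreement holds at every frequency because the time shift is just a reparametrization of the same two-dimensional column space; only the normalization differs between authors.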

In Vaníček [1969a], the direct inversion of the 2×2 normal equation matrix is optimized by using an analytical expression. In addition to being the fastest algorithm, it also accounts for the presence of a priori deterministic models and includes various trigonometric identities for greater efficiency, especially for equally spaced data. Compared to the FFT, however, the least squares transform and spectrum are computationally much slower. Unfortunately, a direct comparison of computational speed could not be made because of the software used. All tests were performed using the MATLAB software, which has a built-in (compiled) FFT function optimized for speed, whereas the least squares spectrum algorithm was implemented as an external (interpreted) function. Because external functions execute much more slowly than built-in functions, no fair comparison between the FFT and least squares algorithms could be made in MATLAB. Nevertheless, when confronted with unevenly spaced data, the least squares method is the only correct approach to use.

Lomb [1976] and Scargle [1982] solve the normal equations using an orthogonalization (diagonalization) procedure based on time shifting (a different time shift is needed for each frequency). This approach is slower than the direct analytical solution of Vaníček. It also does not account for the presence of any a priori models, except for a mean. Ferraz-Mello [1981] uses Gram-Schmidt orthogonalization to diagonalize the normal equations. Again, this procedure is slower than direct analytical inversion and does not account for the presence of any a priori deterministic models.

Recently, Press and Rybicki [1989] have developed a novel approach to the fast computation of a least squares spectrum. It is based on the concept of "extirpolation" and the use of the FFT. Basically, extirpolation gives an equally spaced data series that, when interpolated to the original times, gives back exactly the original data series. This is also called reverse interpolation. The FFT is used to evaluate the evenly spaced (extirpolated) sine and cosine summations in the time-shifting algorithm of Lomb [1975]. The original extirpolation algorithm used two complex FFTs. The more efficient algorithm uses the same trigonometric identities used by Vaníček [1969a] to reduce the computations to only one FFT. The biggest disadvantage of this method is that it is limited to only the set of "Fourier" frequencies due to the use of the FFT. It is thus not possible to "zoom in" on significant peaks to better resolve the frequency. The FFT also requires 2^n data points, which necessitates zero-padding the data series. As for the other algorithms, the presence of a priori deterministic models cannot be accounted for. Finally, the extirpolation accuracy depends on the "oversampling factor" used in the extirpolation to generate many more data points than the original data series. Greater oversampling of the extirpolated series provides better accuracy but results in more computations. In spite of the above limitations, this algorithm works very well and very fast (on the order of n log n, instead of n²).


Chapter 6 Stochastic Modelling of Observation Errors

6.1 Introduction

The weighted least squares estimation model allows for the stochastic modelling of residual errors through the use of a fully populated covariance matrix. This can be used to account for those systematic effects that have not been modelled explicitly (deterministically) in the design matrix for the least squares model. The problem with using fully populated covariance matrices in this manner is the difficulty in determining the covariances or correlations among the observations in an objective way.

There are a few methods that can be used to determine the variances and covariances, each with their own advantages and drawbacks. Among the most popular of these are the methods of analysis of variance and variance-covariance component estimation. The "analysis of variance" (ANOVA) method (also called factor analysis in statistics) can be found in most standard texts on statistics. Geodetic applications of the technique are described in detail by Kelly [1991] and in a series of articles by Wassef [1959; 1974; 1976]. Essentially the aim of the method is to divide the measurements into separate groups (factors which contribute to the overall variation in the data) and to estimate the variance components for each. The difficulty in applying the method is in defining a scheme of dividing the observations into separate groups which characterize some behaviour of the systematic effect being modelled. Often, the factors describing the systematic effect cannot be so discretely defined; rather, they are often of a continuous nature that precludes lumping them together into separate and distinct groups.


Variance-covariance component estimation, on the other hand, is based on modelling deterministically the residual variation in the measurements. The variances and covariances are expressed in terms of linear models relating these components to various factors describing the systematic effect. The coefficients (variance and covariance components) in the variance-covariance model are estimated together with the parameters in a least squares solution. The technique is described in detail in Rao and Kleffe [1988] and has been applied to many geodetic problems (see, e.g., Grafarend et al. [1980], Grafarend [1984], Chen et al. [1990]). It can be shown that the analysis of variance method is just a special case of this more general approach [Chrzanowski et al., 1994]. The problem with applying the method is that the estimation of the variance-covariance model coefficients usually needs to be iterated, which can result in biased estimates of the variances and covariances [Rao and Kleffe, 1988]. This can lead to negative variances, which is unacceptable.

The approach taken here is to model any residual systematic effects remaining after accounting for a deterministic model, using autocorrelation (ACF) or autocovariance (ACVF) functions derived from a power spectral density function of the residuals. This idea was first proposed for geodetic applications by Vaníček and Craymer [1983a; 1983b] and further developed by Craymer [1984]. To accommodate unevenly spaced data, a general least squares transform is developed to determine the normalized power spectrum. The inverse transform is then used to convert this to an ACF, which is converted to an ACVF.

6.2 Direct Autocovariance Function Estimation

The autocovariance function of an equally spaced data series x(t_i) can be estimated directly using the expressions given in Chapter 2. This gives the sample autocovariance function


C(τ_m) = [1 / (n - m)] Σ_{i=1}^{n-m} x(t_i) x(t_{i+m}),   (6.1)

where m = τ_m/Δt is the so-called lag number and Δt is the data series spacing. Note that, as in eqn. (2.20), the summation is divided by n - m rather than by n, in order to provide an unbiased estimate of C(τ_m). The biased estimate is obtained by dividing by n.
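In code, eqn. (6.1) amounts to a single loop over lags. The following Python sketch (illustrative only; the function and argument names are assumptions) returns either the unbiased or the biased sample ACVF of a mean-removed series:

```python
import numpy as np

def autocovariance(x, unbiased=True):
    """Sample autocovariance C(tau_m) of an equally spaced series.

    The lag-m sum of products is divided by (n - m) for the unbiased
    estimate, or by n for the biased (positive definite) one."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    return np.array([(x[:n - m] @ x[m:]) / ((n - m) if unbiased else n)
                     for m in range(n)])
```

At zero lag the biased estimate equals the sample variance, and the two forms differ only by the divisor n - m versus n.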

For unequally spaced data which are relatively homogeneously distributed, an averaging procedure can be used. In this approach the unevenly spaced lags are divided into equally spaced lag intervals or bins, similar to the way in which histograms are constructed. All lags within a lag interval are summed together in (6.1) to give an average autocovariance for the lag interval. This method gives a smoothed estimate of the autocovariance function. The problem is that if the data have large gaps, the lag intervals may need to be relatively large, resulting in degraded resolution. See Vaníček and Craymer [1983a; 1983b] and Craymer [1984] for more details of this technique.
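The interval-averaging idea can be sketched as follows (Python; the function and parameter names are hypothetical, and the binning scheme shown is one simple choice among several):

```python
import numpy as np

def binned_autocovariance(t, x, bin_width, max_lag):
    """Averaged autocovariance for unevenly spaced data: all products
    x_i * x_j whose lag t_j - t_i falls within the same lag bin are
    averaged, giving a smoothed ACVF estimate."""
    t = np.asarray(t, float)
    x = np.asarray(x, float) - np.mean(x)
    i, j = np.triu_indices(len(x))          # all ordered pairs, incl. zero lag
    lags = t[j] - t[i]
    prods = x[i] * x[j]
    edges = np.arange(0.0, max_lag + bin_width, bin_width)
    centers = 0.5 * (edges[:-1] + edges[1:])
    sums, _ = np.histogram(lags, bins=edges, weights=prods)
    counts, _ = np.histogram(lags, bins=edges)
    cov = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
    return centers, cov
```

Empty bins are returned as NaN; with large data gaps, widening the bins trades lag resolution for a usable average, exactly the limitation noted above.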

6.3 Autocovariance Function Estimation via the Spectrum

The autocovariance function for an evenly spaced data series can be most conveniently derived from the power spectral density function using the Fourier transform. As discussed in Section 3.4, the autocovariance function can be expressed as the Fourier transform pair with the spectrum, and the autocorrelation function R(τ) as the transform pair with the normalized spectrum. These expressions in terms of the spectrum are often used as the basis for the efficient computation of autocovariance and autocorrelation functions of evenly spaced data using the FFT. They will also be used as the basis for developing autocovariance functions for unevenly spaced data to provide objective a priori estimates of


covariances and weights that account for residual systematic effects in least squares modelling.

As mentioned in Section 3.4, care must be exercised to avoid any "wrap around" or "end" effects when computing the autocovariance or autocorrelation function from the spectrum. This is most easily achieved by simply padding the data series with zeros out to double the length of the original series. Furthermore, this indirect estimation via the spectrum provides the biased estimate of the autocovariance/autocorrelation function. As recommended by Bendat and Piersol [1971, pp. 312-314] and Priestley [1981, pp. 323-324], this should be used in preference to the unbiased estimate because the biased one is a positive definite function which generates a positive definite covariance matrix. The unbiased ACF and ACVF are not positive definite and result in singular covariance matrices that are not suitable for generating weight matrices for least squares models.
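For an evenly spaced series, the indirect route through the spectrum can be sketched with NumPy's FFT (an illustrative stand-in for the least squares transform needed for unevenly spaced data; the function name is an assumption):

```python
import numpy as np

def acf_via_spectrum(x):
    """Biased autocorrelation via the power spectrum.

    Zero-padding to twice the series length avoids wrap-around effects;
    dividing the lag sums by n gives the biased, positive definite
    estimate recommended for building weight matrices."""
    x = np.asarray(x, float) - np.mean(x)
    n = len(x)
    X = np.fft.rfft(x, 2 * n)                    # pad with n zeros
    acvf = np.fft.irfft(X * np.conj(X))[:n] / n  # biased ACVF via inverse FFT
    return acvf / acvf[0]                        # normalize to correlation
```

The result is identical (to rounding error) to the direct biased estimate, but is obtained in order n log n operations instead of n².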

6.4 Iteratively Reweighted Least Squares Estimation

The covariance matrix generated from the autocovariance function is used to stochastically model the residual errors in the deterministic least squares model. The basic idea is to begin with some a priori estimate of the covariance matrix, usually a diagonal matrix of known variances. A least squares solution is obtained for the deterministic model, and the observation residuals provide an estimate of the random observation errors. The autocorrelation function is determined for these residuals in order to obtain a more realistic estimate of the correlations among the random observation errors. This autocorrelation function is then used together with the a priori variances to generate a new covariance matrix for the observations, which is included in a new least squares solution for the deterministic model and a new estimate of the residual observation errors. Another autocorrelation function is then computed and the whole estimation process is repeated (iterated) until the solution for the deterministic model and covariance matrix converge to a


stable form. This is referred to as iteratively reweighted least squares estimation and is identical to the iterated MINQUE technique except that a deterministic model is used there to model the variances and covariances (see Rao and Kleffe [1988]). The procedure is illustrated schematically in Figure 6.1.

[Flowchart boxes: "Initial Covariance Matrix from A Priori Variances", "Weighted Least Squares Solution for Deterministic Model", "New Covariance Matrix using ACF and A Priori Variances", connected in an iterative loop.]

Figure 6.1: Iteratively reweighted least squares estimation process.
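The iteration of Figure 6.1 can be sketched as follows (Python; an illustrative reduction of the procedure with hypothetical names, not the implementation used in this work):

```python
import numpy as np

def reweighted_ls(A, y, sigma, n_iter=3):
    """Iteratively reweighted least squares.

    Starts from a diagonal covariance of a priori variances, solves the
    weighted LS problem, forms the biased ACF of the residuals, rebuilds
    a full covariance matrix from it, and repeats."""
    n = len(y)
    lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    C = np.diag(sigma**2)                       # a priori diagonal covariance
    for _ in range(n_iter):
        P = np.linalg.inv(C)                    # weight matrix
        x = np.linalg.solve(A.T @ P @ A, A.T @ P @ y)
        v = y - A @ x                           # residuals
        acvf = np.array([(v[:n - m] @ v[m:]) / n for m in range(n)])
        r = acvf / acvf[0]                      # biased ACF (positive definite)
        C = r[lag] * np.outer(sigma, sigma)     # new full covariance matrix
    return x, C
```

In practice the loop would be terminated by a convergence test on the parameters and covariance matrix rather than a fixed iteration count.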


Chapter 7 Numerical Tests

7.1 Introduction

In this chapter, various numerical tests of the least squares transform and spectrum are given under a variety of different situations. Throughout, the following terminology and notation is used:

"Fourier" frequencies: Set of integer multiples of the fundamental frequency
LST: Least squares transform
ILST: Inverse least squares transform
LSS: Least squares spectrum
Independent LSS/ILST: Independent estimation of the LSS or ILST frequency components
Simultaneous LSS/ILST: Simultaneous estimation of the LSS or ILST frequency components
Unweighted LSS/ILST: Estimation of LSS or ILST using equally weighted observations (no weight matrix P used)
Weighted LSS/ILST: Estimation of LSS or ILST using weighted observations (weight matrix P used)
ACF: Autocorrelation function
Indirect ACF: Indirect estimation of the autocorrelation function via the ILST or the LSS


The tests presented here are based on simulated data using a pseudo-random number generator for normally distributed observation errors and uniformly distributed, unequally spaced times. Unless otherwise stated, these tests use a deterministic model consisting of a periodic trend with period 10 (frequency 0.1 Hz). All computations were performed using the MATLAB numerical and graphical software system.

Tests were performed to ascertain the effects of the following on the LSS and indirect estimation of the ACF:

random observation errors

correlations among observations

random sampling (unequally spaced data)

frequency selection

deterministic model

non-stationary random errors (random walk)

The effects on the LSS and ACF were determined by comparing the results to the known theoretical form for both functions.

7.2 Effect of Random Observation Errors

To study the effect of random observation errors, three data series of 100 equally spaced points were used. Each was composed of a periodic trend of amplitude 1 and period 10, i.e., frequency 0.1 Hz. The first series contained no observation errors. The second series contained normally distributed random errors with a standard deviation of 1/3. The third data series contained normally distributed random errors with a standard deviation of 2/3. The three data series are plotted in Figure 7.1.

The least squares spectra (for "Fourier" frequencies) of the three data series are given in Figure 7.2. Both the independently and simultaneously estimated spectral values


will be identical in these tests because the data are equally spaced, equally weighted and the set of "Fourier" frequencies is used. The effect of random observation errors on the LS spectrum is to reduce the magnitude of the largest spectral peak, which in all cases is correctly located at the frequency of the periodic trend. The larger the random error, the greater the reduction in the spectral value for the significant peak. The magnitude of the reduction in the peaks is equivalent to the inverse of the square of the signal to noise ratio (ratio of amplitude of periodic signal to standard deviation of noise).

The direct estimates of the autocorrelation functions for the three data series are given in Figure 7.3. These are unbiased estimates and were estimated using eqns. (2.20) and (2.12). The ACFs all exhibit the expected cosine form. However, the functions all display correlations larger than one at large lags, typical of the unbiased form. As explained in Section 3.4, this so-called "wild" behaviour is the main reason the unbiased estimate is not used.

The biased estimates of the autocorrelation functions are given in Figures 7.4 to 7.6 for the three data series, respectively. Both the direct estimate and the indirect estimate via the inverse LS transform of the LS spectrum are given, as well as the difference between the two. The indirect estimates were derived following the procedure described in Section 6.3, where zero-padding is used to avoid any "wrap around" effects (see Section 3.4). As expected, all three ACFs exhibit the correct sinusoidal shape and tapering characteristic of the biased estimate. However, there is a reduction in the magnitude of the correlation as the random error increases. Although the differences between the direct and indirect estimates get larger in direct proportion to the magnitude of the random error, they are negligible for all three data series.


[Plots: "Periodic Trend (f=0.1) + Random Error (s=0.3333)" and "Periodic Trend (f=0.1) + Random Error (s=0.6667)".]

Figure 7.1: Periodic time series of 100 equally spaced points and period 10 (frequency 0.1 Hz) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3).


[Plots: "Normalized LS Spectrum - Random Error = 0.3333" and "Normalized LS Spectrum - Random Error = 0.6667"; x-axis: Frequency (Hz), 0 to 0.5.]

Figure 7.2: Least squares spectra of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3). The horizontal line indicates the 95% confidence limit for statistically significant spectral peaks.


[Plot: "Direct ACF (Unbiased) - Random Error = 0"; x-axis: Time Lag, 0 to 100.]

Figure 7.3: Direct estimation of unbiased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors and with normally distributed random errors (standard deviations 1/3 and 2/3).


[Plots: "Direct ACF (Biased) - Random Error = 0", "Indirect ACF (Biased) - Random Error = 0", "Indirect-Direct ACF (Biased) - Random Error = 0"; x-axis: Time Lag, 0 to 100.]

Figure 7.4: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with no observation errors.


[Plots: "Direct ACF (Biased) - Random Error = 0.3333", "Indirect ACF (Biased) - Random Error = 0.3333", "Indirect-Direct ACF (Biased) - Random Error = 0.3333"; x-axis: Time Lag, 0 to 100.]

Figure 7.5: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with random observation errors (standard deviation 1/3).


[Plots: "Direct ACF (Biased) - Random Error = 0.6667", "Indirect ACF (Biased) - Random Error = 0.6667", "Indirect-Direct ACF (Biased) - Random Error = 0.6667"; x-axis: Time Lag, 0 to 100.]

Figure 7.6: Comparison of direct and indirect (via LS spectrum) estimation of biased autocorrelation functions of time series of 100 equally spaced points and period 10 (frequency 0.1) with random observation errors (standard deviation 2/3).


7.3 Effect of Correlated Random Errors

To test the effect of correlations among the random observation errors, it is necessary to generate a correlated set of errors ε. This can be accomplished by finding a transformation L of a set of uncorrelated random errors η with diagonal covariance matrix Cη which, by the law of propagation of errors, gives a set of correlated random errors ε with the desired covariance matrix Cε; i.e., for identically normally distributed random errors (Cη = I),

Cε = L Cη Lᵀ = L Lᵀ.

The above decomposition (factorization) of a matrix into another matrix times the transpose of itself is known as Cholesky decomposition, where L is a lower triangular matrix called the Cholesky triangle or square root [Dahlquist and Björck, 1974, p. 158; Golub and Van Loan, 1983, p. 88; Press et al., 1992, p. 89]. Using the Cholesky triangle, the transformed set of correlated random errors can then be obtained from

ε = Lη.

In the following tests, the periodic data from the previous section are used with a standard deviation of 2/3. A fully populated covariance matrix for the observations was constructed from an a priori autocorrelation function with Δt = t_{i+1} - t_i = 1. A plot of the time series and correlation function is given in Figure 7.7 using a standard deviation of 2/3.
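The construction can be sketched as follows (Python; the damped-cosine a priori autocorrelation function and the seed are assumptions chosen for illustration, not the function used in these tests):

```python
import numpy as np

rng = np.random.default_rng(42)                # assumed seed
n = 100
sigma = 2.0 / 3.0

# Hypothetical a priori ACF for illustration: a damped cosine.
lag = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
R = np.exp(-0.05 * lag) * np.cos(2.0 * np.pi * 0.1 * lag)
C = sigma**2 * R                               # desired covariance, C = L L^T

L = np.linalg.cholesky(C)                      # lower triangular Cholesky triangle
eta = rng.standard_normal(n)                   # uncorrelated N(0,1) errors, C_eta = I
e = L @ eta                                    # correlated errors with covariance C
```

Any positive definite target covariance can be used here; the Cholesky factorization fails only when the chosen ACF does not generate a positive definite matrix.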


Three different types of least squares spectrum were computed for this data series: (1) the unweighted independent estimate, (2) the weighted independent estimate, and (3) the weighted simultaneous estimate. The different spectra all provide good results, each clearly identifying the periodic component correctly at frequency 0.1 (see Figure 7.8). Although the unweighted independent LS spectrum displays slightly larger noise at the lower frequencies than the other spectra, the noise is well within the 95% confidence interval. The weighted LS spectra provide almost identical results, although the peak at frequency 0.1 is slightly larger. These results verify the claim by Steeves [1981] that correlations among the observations have little effect on the resulting spectra.

The direct and indirect (via the unweighted inverse LS transform of the unweighted LS spectrum) estimates of the autocorrelation function are given in Figure 7.9. The two ACFs are identical and agree well with the expected form for the periodic data set (see Figure 7.6), although those here display slightly larger correlations at lower frequencies due to the a priori correlation function. The weighted indirect ACFs are shown in Figure 7.10. Both exhibit the correct shape for the periodic signal, but that based on the independently estimated spectrum gives larger correlations than for the unweighted estimates. On the other hand, the ACF based on the simultaneously estimated spectrum displays much smaller correlations and thus gives the poorest estimate of the ACF.

Another check on the estimation of the autocorrelation functions was performed by computing the ACFs only for the correlated errors (the periodic signal was not included). The ACFs should agree closely with the a priori one used in constructing the correlated errors (see bottom plot of Figure 7.7). Figure 7.11 shows both the direct and indirect (via the unweighted inverse LS transform of the unweighted LS spectrum) estimates of the biased autocorrelation function. Both are identical and agree well with the theoretical correlation function in Figure 7.7. The departures from the true ACF are due to the limitations of the random number generator. The indirect weighted estimates via the inverse weighted LS transform of both the independently and simultaneously estimated LS


spectra are given in Figure 7.12. All these ACFs display the same shape, except for the weighted simultaneous estimate, which has slightly larger correlations.

[Plots: periodic time series, "Correlated Random Errors", and "Autocorrelation Function of Random Errors"; x-axes: Time and Time Lag, 0 to 100.]

Figure 7.7: Periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3).


[Plots: "Unweighted Independent LS Spectrum (Normalized)", "Weighted Independent LS Spectrum (Normalized)", "Weighted Simultaneous LS Spectrum (Normalized)"; x-axis: Frequency (Hz), 0 to 0.5.]

Figure 7.8: Unweighted and weighted LS spectra (both independent and simultaneous estimation) for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3).


[Plots: "Direct Unweighted ACF (Biased)" and "Indirect ACF (Biased) via Unweighted Independent LSS"; x-axis: Time Lag, 0 to 100.]

Figure 7.9: Direct and unweighted indirect (via unweighted inverse transform of unweighted LS spectrum) estimates of biased autocorrelation function for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3).


[Plots: "Indirect ACF (Biased) via Weighted Independent LSS" and "Indirect ACF (Biased) via Weighted Simultaneous LSS"; x-axis: Time Lag, 0 to 100.]

Figure 7.10: Weighted indirect estimates of biased autocorrelation function via weighted inverse LS transform of both independently and simultaneously estimated LS spectra for periodic time series of 100 equally spaced points with period 10 (frequency 0.1) and correlated random observation errors (standard deviation 2/3).


[Plots: "Direct Unweighted ACF (Biased)" and "Indirect ACF (Biased) via Unweighted Independent LSS"; x-axis: Time Lag, 0 to 100.]

Figure 7.11: Direct and unweighted indirect (via unweighted inverse transform of unweighted LS spectrum) estimates of biased autocorrelation function for time series of 100 equally spaced points with correlated random observation errors only (standard deviation 2/3).


[Plots: "Indirect ACF (Biased) via Weighted Independent LSS" and "Indirect ACF (Biased) via Weighted Simultaneous LSS"; x-axis: Time Lag, 0 to 100.]

Figure 7.12: Weighted indirect estimates of biased autocorrelation function via weighted inverse LS transform of both independently and simultaneously estimated LS spectra for time series of 100 equally spaced points with correlated random observation errors only (standard deviation 2/3).


7.4 Effect of Random Sampling

Random observation sampling results in an unequally spaced data series, in which case the conventional Fourier expressions are no longer valid. This is the primary reason for using the least squares transform and spectrum. To test the effect of random sampling on the LS transform and spectrum, unequally spaced periodic data series were constructed. Different lengths of data series were used to examine the effect of the finiteness and sparseness of the data. The unequally spaced time arguments were created using a pseudo-random number generator with a uniform distribution (see Press et al. [1991] for an explanation of the uniform distribution). Three unequally spaced (errorless) data sets with a periodic trend of period 10 (frequency 0.1 Hz) were generated with 100, 60 and 20 points (see Figure 7.13).

The spectra were computed independently for integer multiples of the fundamental frequency (0.01 Hz), up to frequency 0.5 Hz. Because the Nyquist frequency is undefined for random data spacing, the spectra were computed only up to an arbitrarily selected frequency of 0.5 Hz. The absence of a Nyquist frequency is illustrated in Figure 7.14a, which gives the spectra of the data series up to maximum frequencies of 0.5, 6 and 25 Hz. There is no evidence of a mirror image in these spectra that would indicate the presence of a possible Nyquist frequency. Also, because of the large correlations between the frequency components, it is not possible to estimate the simultaneous inverse LS transform due to ill-conditioning. This will be investigated further in the next section.

The spectra for the three data series are given in Figure 7.14b. The effect of unequal sampling on the independent LS spectrum is negligible. The spectral component at frequency 0.1 is correctly located with a normalized spectral value of 1. The correct location of the spectral peak is also unaffected by the finiteness or sparseness of the data series. Even with only 20 points the LS spectrum is practically unchanged, except for greater noise in the spectrum and a larger 95% confidence level.
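This robustness to random sampling is easy to reproduce. The sketch below (Python; the seed and variable names are assumptions, and mean removal is omitted because the series is errorless) draws 100 uniformly distributed sampling times and verifies that the independently estimated spectrum of an errorless sinusoid still peaks at 0.1 Hz with a normalized value of 1:

```python
import numpy as np

rng = np.random.default_rng(1)                 # assumed seed
t = np.sort(rng.uniform(0.0, 100.0, 100))      # unequally spaced times
x = np.sin(2 * np.pi * 0.1 * t)                # errorless periodic trend

freqs = np.arange(0.01, 0.51, 0.01)            # integer multiples of 0.01 Hz
spec = np.empty(len(freqs))
for k, f in enumerate(freqs):
    w = 2.0 * np.pi * f
    A = np.column_stack((np.cos(w * t), np.sin(w * t)))
    c = np.linalg.solve(A.T @ A, A.T @ x)      # independent 2x2 LS fit
    spec[k] = (x @ A @ c) / (x @ x)            # normalized spectral value
```

Because the sinusoid lies exactly in the column space of the design matrix at the true frequency, the normalized value there is 1 for any sampling pattern.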


The indirect (biased) estimates of the autocorrelation function via the independent LS spectrum are given in Figure 7.15 for the three data series. Zero-padding was used prior to computing the spectrum to which the inverse LS transform was applied. All ACFs display the correct shape and tapering for the periodic signal in the data series. The effect of the random sampling is to reduce the magnitude of maximum correlation for non-zero lags (compare top plot in Figure 7.15 with Figure 7.4). The maximum correlation is about half of the theoretical ±1 value for all plots; i.e., the magnitude does not change as a function of the finiteness or sparseness of the data. The correct shape of the theoretical ACF is also preserved even with only 20 points.

For comparison, Figure 7.16 gives direct estimates of the autocorrelation functions computed for the same unequally spaced data series using the interval averaging method described by Vaníček and Craymer [1983a; 1983b] and Craymer [1984]. All ACFs display the same periodic component as the indirect estimates (overlay Figure 7.16 with Figure 7.15). However, the direct ACF for the 100 point series clearly does not follow the expected tapered shape (compare with Figure 7.4). Instead, the correlations at both small and large time lags are significantly attenuated, while correlations at the middle lags are equal to one. It appears more like a modulated unbiased ACF. The other ACFs agree well with the indirect estimates; they are closer in magnitude to the theoretical ACF (compare with Figure 7.4).


[Plots: "Unequally Spaced (Uniformly Distributed) Series" with 100, 60 and 20 points; x-axis: Time, 0 to 100.]

Figure 7.13: Periodic time series of different lengths of randomly spaced points (uniformly distributed) with period 10 (frequency 0.1) and no random observation errors.


[Plots: "Unweighted Independent LS Spectrum (Normalized) - 100 Points", shown up to different maximum frequencies; x-axis: Frequency (Hz).]

Figure 7.14a: LS spectra (independently estimated frequency components) up to different maximum frequencies for periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


[Plots: "Unweighted Independent LS Spectrum (Normalized)" for 100, 60 and 20 points; x-axis: Frequency (Hz), 0 to 0.5.]

Figure 7.14b: LS spectra (independently estimated frequency components) for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


[Plots: "Indirect ACF (Biased) via Unweighted Independent LSS" for 100, 60 and 20 points; x-axis: Time Lag, 0 to 100.]

Figure 7.15: Indirect estimates (via unweighted inverse LS transform of unweighted LS spectrum) of biased autocorrelation functions for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


[Plots: "Direct ACF (Biased)" for 100, 60 and 20 points; x-axis: Time Lag, 0 to 100.]

Figure 7.16: Direct estimates (via interval averaging) of biased autocorrelation functions for different lengths of periodic data series of unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


7.5 Effect of Frequency Selection

The effect of different selections of frequencies for the simultaneous LS spectrum was also examined. Note that frequency selection only affects the simultaneous estimation of the spectral components. It has no effect on the independently estimated LS spectrum, where each spectral component is treated out-of-context of the others (no correlations arise) and any set of frequencies may be used to correctly locate the significant spectral peaks in a data series, within the limitations of the sampling theorem (see Section 5.3). This effectively provides a continuous spectrum, although spectral leakage may affect the result. The significant spectral components can then be used in the indirect estimation of the ACF via the simultaneously estimated LS transform or in an improved deterministic model.

On the other hand, the selection of frequencies is of critical importance for the simultaneously estimated LS spectrum. In this case the correlations among the spectral components must be carefully considered; otherwise, ill-conditioning in the normal equations for the simultaneous solution of all spectral components can produce completely wrong results. For example, consider the same data series used in the previous section (top plot in Figure 7.13), containing 100 unequally spaced (uniformly distributed) points with a periodic trend of period 10 (frequency 0.1 Hz) and no random errors. Using the entire set of 50 "Fourier" frequencies in the simultaneous LS spectrum results in an ill-conditioned solution. The resulting spectrum fails to detect the periodic trend at frequency 0.1 Hz even with no random errors present (see top plot in Figure 7.17).

The correlations among the frequencies can be reduced, and the ill-conditioning in the spectral transform removed, by decreasing the frequency sampling to only every other frequency; i.e., 25 of the original set of 50 frequencies. Although the periodic component is now visible in the simultaneous LS spectrum, it is still relatively small and only just statistically significant (see middle plot in Figure 7.17). This is improved further by taking


every 5th frequency so that only 10 of the original 50 frequencies are used. The spectral peak at 0.1 Hz is now highly significant.

The same behaviour is also displayed by the indirect estimate of the autocorrelation function. Note, however, that the original data series needs to be zero-padded to avoid "wrap around" effects in the ACF. This doubling of the series length results in a fundamental frequency that is half of that for the original series and twice as many frequencies. This results in even more severe ill-conditioning and a completely erroneous ACF where correlations are much greater than 1 (see top plot in Figure 7.18). Decreasing the frequency sampling to only 50 frequencies improves the ACF, but there are still some correlations greater than 1 (see middle plot of Figure 7.18). The situation is improved when only 10 frequencies are used. The ACF has the correct cosine form and the maximum correlations are only slightly larger than 1 (they could be truncated to 1 in practice).

The problem with decreasing the frequency sampling is that some peaks may be missed. Clearly, great care must be exercised when selecting the frequencies to use with the simultaneous estimation of the LS spectrum and the inverse LS transform. Note that by reducing the number of simultaneously estimated frequencies, one is approaching the method of independent estimation of the spectral components (the extreme or limiting case of reducing the number of frequencies).
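The ill-conditioning described above can be illustrated numerically with a rough sketch (not the thesis software; the random times and frequency sets below are synthetic stand-ins for the series of Section 7.4). Comparing the condition numbers of the normal-equation matrices built from cosine/sine pairs at all 50 "Fourier" frequencies versus every 5th frequency shows why decimating the frequency set stabilizes the simultaneous solution:

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 unequally spaced (uniformly distributed) times over one fundamental
# period of 100 time units -- a synthetic stand-in for the series of Section 7.4
t = np.sort(rng.uniform(0.0, 100.0, 100))

def design(t, freqs):
    """Design matrix of cosine/sine pairs at the given trial frequencies."""
    cols = []
    for f in freqs:
        w = 2.0 * np.pi * f * t
        cols.extend([np.cos(w), np.sin(w)])
    return np.column_stack(cols)

fund = 1.0 / 100.0                      # fundamental frequency 1/T
all_freqs = fund * np.arange(1, 51)     # full set of 50 "Fourier" frequencies
few_freqs = all_freqs[4::5]             # every 5th frequency -> 10 frequencies

# Condition numbers of the normal-equation matrices A^T A
A_all, A_few = design(t, all_freqs), design(t, few_freqs)
cond_all = np.linalg.cond(A_all.T @ A_all)
cond_few = np.linalg.cond(A_few.T @ A_few)
# cond_few is far smaller: the decimated set is much better conditioned
```

Because the decimated normal matrix is a principal submatrix of the full one, its condition number can never exceed that of the full set; on unevenly spaced data the improvement is typically dramatic.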

A better approach may be to instead search for and use only statistically significant spectral components from the independent estimation of the LS spectrum. These frequencies can then be used in a simultaneous estimation of the LS spectrum and in the simultaneous inverse LS transform for the indirect ACF. The results following this procedure are illustrated in Figures 7.19 and 7.20 for a randomly sampled data series with two periodic components (frequencies 0.1 and 0.25 Hz) and no random errors. The independent estimation of the LS spectrum correctly identifies the two periodic components as shown in Figure 7.19. Using only these significant periodic components in the


simultaneous estimation of the spectrum and the subsequent simultaneous inverse transform gives an indirect ACF that agrees with the theoretical form of the unbiased, rather than the biased, ACF, as shown in Figure 7.20. On the other hand, the ACF derived from the inverse transform of the entire independently estimated LS spectrum provides the expected biased form of the ACF. It appears that reducing the number of frequencies in the inverse transform gives an ACF that more closely agrees with the unbiased estimate. The biased ACF can be obtained by simply using n in place of the divisor (n-k) in the expression for the unbiased ACF in eqn. (2.20).
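The divisor switch between the biased and unbiased estimates can be sketched for the simple equally spaced case (an illustrative helper, not the thesis implementation of eqn. (2.20)):

```python
import numpy as np

def direct_acf(y, max_lag, biased=True):
    """Direct ACF estimate for an equally spaced, zero-mean series:
    R(k) = (1/d) * sum_i y[i]*y[i+k], with divisor d = n (biased)
    or d = n - k (unbiased); normalized so that the correlation at lag 0 is 1."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    r = np.empty(max_lag + 1)
    for k in range(max_lag + 1):
        s = np.dot(y[:n - k], y[k:])
        r[k] = s / (n if biased else n - k)
    return r / r[0] if r[0] != 0 else r

t = np.arange(100)
y = np.cos(2 * np.pi * 0.1 * t)        # period-10 series, as in Section 7.4
rb = direct_acf(y, 50, biased=True)
ru = direct_acf(y, 50, biased=False)
# The biased estimate equals the unbiased one scaled by (n-k)/n, so it
# decays toward zero at large lags while the unbiased one retains the
# full cosine amplitude.
```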


[Plots: Weighted Simultaneous LS Spectrum (Normalized) - 50, 25 and 10 Frequencies; x-axis: Frequency (Hz), 0 to 0.5]

Figure 7.17: LS spectra for different sets of simultaneously estimated frequencies for periodic data series of 100 unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


[Plots: Indirect ACF (Biased) via Unweighted Simultaneous LSS for different frequency sets; x-axis: Time Lag, 0 to 100]

Figure 7.18: Indirectly estimated LS autocorrelation functions via the LS spectrum using different sets of simultaneously estimated frequencies for periodic data series of 100 unequally spaced points with period 10 (frequency 0.1) and no random observation errors.


[Plots: Unequally Spaced (Uniformly Distributed) Series - 100 Points (x-axis: Time, 0 to 100); Independent LS Spectrum (Normalized) - 50 Frequencies (x-axis: Frequency (Hz))]

Figure 7.19: Periodic time series of randomly spaced points with frequencies 0.1 and 0.25 Hz and no random observation errors (top), and independent estimation of the LS spectrum (bottom).


[Plots: Indirect ACF via Independent LSS - 100 Frequencies; Indirect ACF via Simultaneous LSS - 2 Frequencies; x-axis: Time Lag, 0 to 100]

Figure 7.20: Indirectly estimated ACF via the inverse LS transform of the independent LS spectrum using all frequencies (top) and of the simultaneous LS spectrum using only the two significant spectral peaks at 0.1 and 0.25 Hz (bottom).


7.6 Effect of Deterministic Model

The effect of the deterministic model on the LS spectrum and indirectly estimated autocorrelation function is to absorb any spectral components that are highly correlated with the deterministic model. These spectral components are usually at the lower frequencies, unless some high frequency periodic trends are included in the deterministic model. The deterministic model is accommodated by accounting for its effect within the estimation of the LS spectrum and inverse LS transform following the approach described in Chapters 4 and 5.

To test the effect of a deterministic linear trend model, a 100 point equally spaced data series consisting of a quadratic trend (1 + 0.02 t + 0.00005 t^2) and a periodic residual trend of frequency 0.01 Hz was generated with no random errors (see top plot in Figure 7.21). The quadratic trend will tend to alias as a long period trend, which may result in erroneous estimates of the spectrum of the residuals if the correlations with the quadratic model are not accounted for. This is evident in the middle plot of Figure 7.21, where the LS spectrum displays a peak at 0.02 Hz while the actual periodic signal should be at 0.01 Hz. There is also some spectral leakage into the neighbouring frequencies at 0.01 and 0.03 Hz. Accounting for the correlations with the deterministic model results in a spectrum that correctly identifies the 0.01 Hz peak and eliminates the spectral leakage (bottom plot in Figure 7.21).
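The idea of estimating each spectral pair simultaneously with the deterministic model can be sketched as follows. This is an illustrative reconstruction, not the thesis code: the series length and trial frequencies are assumed, and the spectral value is taken simply as the fraction of the quadratic-model residual variance explained by the cos/sin pair when the quadratic is re-estimated alongside it.

```python
import numpy as np

# Synthetic series in the spirit of Figure 7.21: quadratic trend plus a
# sinusoid at 0.01 Hz, no noise (length here is illustrative, not 100 points)
t = np.arange(0.0, 500.0)
y = 1 + 0.02 * t + 0.00005 * t**2 + np.sin(2 * np.pi * 0.01 * t)

quad = np.column_stack([np.ones_like(t), t, t**2])

def ls_power(t, y, f, base):
    """Variance fraction explained by a cos/sin pair at frequency f when
    the deterministic base model is estimated simultaneously with it."""
    A = np.column_stack([base, np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
    res_full = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    res_base = y - base @ np.linalg.lstsq(base, y, rcond=None)[0]
    return 1.0 - np.sum(res_full**2) / np.sum(res_base**2)

freqs = np.arange(1, 50) * (1.0 / len(t))   # trial frequencies
spec = np.array([ls_power(t, y, f, quad) for f in freqs])
# The peak lands at 0.01 Hz because the quadratic is re-fit at each trial
# frequency rather than removed once beforehand.
```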


[Plots: Independent LS Spectrum (Normalized) - Quadratic Trend Residuals; Independent LS Spectrum (Normalized) w/ Quadratic Trend; x-axis: Frequency (Hz), 0 to 0.45]

Figure 7.21: Quadratic trend time series with periodic component (frequency 0.01 Hz) and no random errors (top); LS spectrum of residuals from quadratic trend model (middle); LS spectrum accounting for effects of quadratic model (bottom).


7.7 Effect of Non-Stationary Random Errors (Random Walk)

Another kind of correlated error is non-stationary random error. One example of this is the simple random walk model, where the error e_i at time t_i is the accumulation of a white noise process [Papoulis, 1965]; i.e.,

e_i = w_1 + w_2 + ... + w_i,

where the w_j are normally distributed random variables with zero mean. One such equally spaced random walk data series with a unit standard deviation is displayed in Figure 7.22 (top plot). This 100 point data series is actually an evenly sampled subset (every fifth point) of a much larger 500 point random walk data series using a white noise process with unit standard deviation. The theoretical spectrum for such a process is inversely proportional to the square of the frequency [Zhang et al., 1997]. The computed LS spectrum is given in the middle and bottom plots of Figure 7.22. The bottom plot uses a log scale for both axes and exhibits a linear trend with a slope of about -2, corresponding to the expected 1/f^2 relation for a random walk model. The direct and indirect autocorrelation functions are given in Figure 7.23. The indirect estimate via the LS spectrum (zero-padding is used) agrees well with the direct estimate. The differences between them, shown in the bottom plot of Figure 7.23, increase in direct proportion to the lag. The indirect ACF departs from the direct ACF by about 0.5 at the highest lag.
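For the evenly spaced case, the expected 1/f^2 behaviour is easy to reproduce with an ordinary FFT periodogram. The sketch below uses an arbitrary seed and series length and is only a minimal illustration, not the thesis computation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2048
walk = np.cumsum(rng.normal(0.0, 1.0, n))    # accumulate unit white noise

# One-sided FFT periodogram (evenly spaced sampling, unit interval)
power = np.abs(np.fft.rfft(walk - walk.mean()))**2 / n
freqs = np.fft.rfftfreq(n, d=1.0)
power, freqs = power[1:], freqs[1:]          # drop the zero frequency

# Slope of log-power versus log-frequency; typically close to -2
# for a random walk, matching the 1/f^2 relation
slope = np.polyfit(np.log(freqs), np.log(power), 1)[0]
```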

To test the effect of the data sampling, an unevenly spaced random walk data series was generated by randomly sampling the same 500 point random walk series used above (see Figure 7.24). (A uniform random number generator was again used to generate the random selection of 100 points; see Section 7.4.) The LS spectrum is given in the bottom two plots. The effect of the random sampling is to flatten out the spectrum at the higher


frequencies. The inverse square frequency relation only holds at the lower frequencies. This behaviour was also found by Zhang et al. [1997]. The indirect estimate of the autocorrelation function via the independent LS spectrum (with zero-padding) is also significantly affected by the random sampling (see Figure 7.25). It now drops off much more rapidly in comparison to the direct estimate in Figure 7.23.
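The independently estimated LS spectrum itself is simple to sketch for the unevenly sampled case: fit a cos/sin pair at one trial frequency at a time and record the fraction of the series' quadratic norm it explains. The data, seed, and helper names below are assumptions for illustration, not the thesis code or data:

```python
import numpy as np

rng = np.random.default_rng(42)
# 500 point random walk, then keep a random 100-point subset (uneven sampling)
full = np.cumsum(rng.normal(size=500))
idx = np.sort(rng.choice(500, size=100, replace=False))
t = idx.astype(float)
y = full[idx] - full[idx].mean()

def independent_ls_spectrum(t, y, freqs):
    """Independently estimated (one frequency at a time) LS spectrum:
    fraction of the series' quadratic norm explained by each cos/sin pair."""
    out = []
    for f in freqs:
        A = np.column_stack([np.cos(2*np.pi*f*t), np.sin(2*np.pi*f*t)])
        c = np.linalg.lstsq(A, y, rcond=None)[0]
        res = y - A @ c
        out.append(1.0 - np.dot(res, res) / np.dot(y, y))
    return np.array(out)

freqs = np.arange(1, 51) / 500.0   # multiples of an assumed fundamental frequency
spec = independent_ls_spectrum(t, y, freqs)
# Power remains concentrated at the lowest frequencies, as expected for a
# random walk, while the high-frequency end is comparatively flat.
```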


[Plots: Random Walk (Equally Spaced Times) - 100 Points; Independent LS Spectrum (Normalized) - Random Walk (linear and log frequency axes)]

Figure 7.22: Evenly sampled 100 point random walk time series (standard deviation 1) (top) and its corresponding LS spectrum.


[Plots: Direct ACF (Biased) - Random Walk; Indirect ACF (Biased) via LS Spectrum - Random Walk; Indirect-Direct ACF (Biased) - Random Walk; x-axis: Time Lag, 0 to 450]

Figure 7.23: Direct (top) and indirect (bottom) autocorrelation functions for 100 point random walk data series.


[Plots: Random Walk (Unequally, Uniformly Sampled Times) - 100 Points; Independent LS Spectrum (Normalized) - Random Walk (linear and log frequency axes)]

Figure 7.24: Unevenly sampled 100 point random walk time series (top) and its corresponding LS spectrum.


Figure 7.25: Indirect estimate of autocorrelation via the independently estimated LS spectrum for the unevenly sampled 100 point random walk time series.


Chapter 8 Some Applications in Geodesy

8.1 Introduction

There have been many applications of time series analysis in geodesy to the study of tide gauge data, gravity data and geodynamics. In particular, the method of least squares spectral analysis has been applied to studies of the Earth-pole wobble by Vaníček [1969b] and Rochester et al. [1974]. However, there have been few applications of time series analysis techniques to other kinds of geodetic data. The few studies employing these techniques have been mostly applied to levelling data (see, e.g., Vaníček and Craymer [1983a, 1983b], Craymer [1984], Vaníček et al. [1985], Craymer [1985], Craymer and Vaníček [1985, 1986, 1990]). More recently, time series analysis techniques have also been applied to electronic distance measurement (EDM) data by Langbein et al. [1990] and Langbein and Johnson [1997], and to Global Positioning System (GPS) data by El-Rabbany [1994], King et al. [1995] and Zhang et al. [1997]. In El-Rabbany [1994], only standard Fourier (and FFT) methods in the equally spaced time dimension are considered. The study by King et al. [1995] also assumed equally spaced time arguments. Only the recent work of Langbein and Johnson [1997] and Zhang et al. [1997] has considered unequally spaced data. In particular, Zhang et al. [1997] have used the periodogram as defined by Scargle [1982], which can be shown to be a special case of Vaníček's original method (see Section 5.7). Estimation of covariance and correlation functions for stochastic modelling of errors, however, was still based on traditional methods assuming equally spaced data.


The studies by Craymer et al. have applied time series techniques more generally to arguments that are not necessarily equally spaced in order to search for systematic errors that depend on these quantities. All these studies have used the unweighted form of the independently estimated least squares spectrum to search for systematic errors in precise levelling. Here, the weighted form of the least squares approach to spectrum and autocovariance function estimation is applied to the stochastic modelling of errors using two real examples: estimation of the deformation of an EDM baseline across the San Andreas fault using the same data as in Langbein and Johnson [1997], and GPS single point positioning using pseudo-range observations (the typical positioning data used by most handheld GPS receivers).

8.2 EDM Deformation Measurements

Electronic distance measurement (EDM) is the most precise distance measuring technique at close to moderate ranges (about 1 km). The most accurate EDM instruments, such as the Kern ME5000, can routinely obtain submillimeter repeatability. The most accurate EDM instrument is based on dual frequency ("two-colour") lasers (see Slater and Huggett [1976]). The two measuring frequencies allow one to more directly determine and correct for the refraction effect (which is a function of the frequency of the laser). For this reason, two-colour EDM instruments are often used in southern California by Earth scientists to monitor the crustal deformation around the San Andreas fault (see, e.g., Savage and Lisowski [1995]).

Here the least squares spectral analysis technique is applied to the same data used by Langbein and Johnson [1997] to search for possible systematic signals in their two-colour EDM data. Traditional spectral techniques were used by Langbein and Johnson for this purpose. Because the observations are at irregular time intervals, some necessary approximations, specifically interpolation, had to be made to estimate their spectra. No


such approximations are needed for the least squares technique, making this an ideal application of the method.

The data used in this analysis are part of the Pearblossom network near Palmdale in southern California and were provided by J. Langbein (personal communication, 21 February 1997) of the U.S. Geological Survey, Menlo Park, CA. The network is radial in design, where all distances (baselines) are measured from Holcomb to twelve surrounding monuments at distances from 3 to 8 km (see Figure 8.1). Only the Holcomb-Lepage baseline with a nominal distance of 6130 m was used in this analysis. Initially, the baseline measurements at Pearblossom were made several times per week for 4 years (1980-1984). Since about 1987 they have been reduced to about once every 3 or 4 months, although each baseline is measured twice during each network re-observation. In addition, different instruments and monuments have been used over the years and there have been a number of earthquakes. Consequently, the data have been reduced to changes in baseline length from the nominal value and grouped into sets sharing common EDM instrumentation and monuments between earthquakes. The time series of the Lepage baseline measurements is given in Figure 8.2. Note the different offsets between each data group and the consistent linear trend (expansion of the baseline) for all groups. The different datum offsets represent biases in the measured differences due to the different instrument/monument combinations or the occurrence of earthquakes. It was also noted that several observations were repeated within a couple of hours of each other (two within 15 minutes!). To avoid excessively large temporal correlations under these circumstances, only the second (repeat) observations were used.

The different biases between measurement groups necessitate accounting for a separate datum offset for each. Likewise, the consistent trend for all groups necessitates modelling a common linear trend for all groups. Least squares estimates of these model parameters are given in Table 8.1, where the datum offsets are all referenced to the first measurement epoch. The 1.72 ± 0.07 mm/year linear trend (extension of the baseline)


[Map: Pearblossom network]

Figure 8.1: Location of the Pearblossom network in California used to measure crustal deformation with a two-colour EDM instrument and location of the Holcomb-Lepage baseline spanning the San Andreas fault running through this network [after Langbein and Johnson, 1997, Figure 1].

[Plot: Holcomb-Lepage Baseline Length Changes; x-axis: Time, 1980 to 1998]

Figure 8.2: Changes in length of Holcomb-Lepage baseline. Different observation groups are denoted by different symbol colour/type combinations.


Table 8.1: Least squares estimates of linear trend and datum offsets.

                       Estimate    Std    t Statistic
Offset #1 (mm)           -2.3      0.1       22.4
Offset #2 (mm)           -3.2      0.2       17.0
Offset #3 (mm)           -4.2      0.2       17.5
Offset #4 (mm)           -4.8      0.5        9.5
Offset #5 (mm)          -15.4      0.4       35.0
Offset #6 (mm)          -20.1      0.7       27.6
Offset #7 (mm)           -7.1      0.4       20.3
Offset #8 (mm)          -10.5      0.5       19.5
Linear Trend (mm/yr)      1.72     0.05      34.4

agrees well with the 1.67 value determined by Langbein and Johnson [1997]. In the least squares solution, the data were weighted using standard deviations provided by J. Langbein (personal communication, 21 February 1997). All estimated model parameters were statistically significant at any reasonable significance level and were removed from the data, leaving the residual series in Figure 8.3. It is this data series that is used in the following spectral analysis.
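The deterministic model behind Table 8.1 (one datum offset per observation group plus a common linear trend, with observations weighted by their standard deviations) can be sketched on synthetic data. All numbers, group boundaries, and names below are illustrative stand-ins, not the Pearblossom data or the thesis software:

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic stand-in: 3 observation groups, each with its own datum offset,
# sharing one linear trend (values are illustrative only)
t = np.sort(rng.uniform(0.0, 18.0, 120))            # years since first epoch
group = np.digitize(t, [6.0, 12.0])                 # group index 0, 1 or 2
offsets_true = np.array([-2.3, -4.8, -7.1])         # mm
trend_true = 1.72                                   # mm/year
sigma = 0.5 + 0.5 * rng.random(120)                 # per-observation std (mm)
y = offsets_true[group] + trend_true * t + rng.normal(0.0, sigma)

# Design matrix: one indicator column per group, plus the common trend column
A = np.column_stack([(group == g).astype(float) for g in range(3)] + [t])

# Weighted least squares: scale each row (and observation) by 1/sigma
x, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
est_offsets, est_trend = x[:3], x[3]
```

The recovered offsets and trend should match the simulated values to within their formal uncertainties; in the thesis the same structure is solved with the real weights supplied with the data.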

Before performing a spectral analysis, appropriate frequencies (i.e., frequency spacing and range) must be chosen. The total length of the data series defines the smallest frequency spacing that can be resolved without spectral "leakage" from adjacent peaks. The frequency interval (Δf) is defined by


[Plot: Residual baseline length changes; x-axis: Time, 1980 to 1998]

Figure 8.3: Comparison of residual baseline length changes after removal of estimated distance offsets for each observation group and a common linear trend. Different observation groups are denoted by different symbol colour/type combinations.

Δf = f_0 = 1/T_0, where T_0 = (t_max - t_min) is the fundamental period and f_0 is the fundamental frequency (see Section 3.2, eqn. (3.17)). The largest frequency that can be determined by the data series is defined by the Nyquist frequency f_N. It corresponds to the time interval over a triplet of adjacent points, the minimum number of points for the unambiguous determination of a periodic component.

The Nyquist frequency is not clearly defined for unevenly spaced data. For evenly spaced data, the Nyquist period is simply twice the time interval between any pair of adjacent points (i.e., twice the sampling interval Δt), and the Nyquist frequency is then defined as f_N = 1/(2Δt) (cf. Section 3.2). This represents the largest frequency (smallest period) the data series is capable of reliably estimating without aliasing effects. For unevenly spaced data series, the distribution of possible triplets of points can vary significantly and thus there is no well-defined Nyquist frequency. In theory, the highest frequency (that can be estimated from a data series) will correspond to the smallest point triplet interval. This interval corresponds to the smallest period (maximum frequency) that can possibly be determined


from the data series. However, in practice, the spectra generally exhibit no mirror image about this or any other frequency when the data are unevenly and randomly spaced. The exception is when dealing with data that are regularly spaced as multiples of some common interval, or evenly spaced except for gaps.
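The triplet-based candidate Nyquist frequency described above can be computed directly. The observation times below are hypothetical; the point is only that each span of three adjacent points supplies one candidate "Nyquist period":

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.sort(rng.uniform(0.0, 100.0, 50))   # hypothetical uneven observation times

# "Nyquist periods": spans of all triplets of adjacent points. Three adjacent
# points are the minimum needed to pin down a periodic component, so each
# span t[i+2] - t[i] is one candidate Nyquist period.
triplet_spans = t[2:] - t[:-2]

nyquist_period = triplet_spans.min()       # smallest triplet interval
nyquist_freq = 1.0 / nyquist_period        # highest usable frequency, in theory
```

A histogram of `triplet_spans` is the analogue of Figure 8.4 for this synthetic series.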

For the baseline length residuals in Figure 8.3, the variation in possible Nyquist frequencies is illustrated in Figure 8.4 in terms of histograms of the lengths (time intervals) of all possible point triplets ("Nyquist periods"). The smallest triplet interval is about 1 day, corresponding to a Nyquist frequency of 1 cy/day. This is because the measurements were collected on a regular daily basis in the beginning. In the following analyses, spectra are therefore estimated at integer multiples of the fundamental frequency up to a Nyquist frequency of 1 cy/day.

[Plots: Histogram of Possible Nyquist Periods; x-axis: Period of Point Triplets (days), 0 to 20 (top) and 0 to 1.8 (bottom)]

Figure 8.4: Histograms of lengths of point triplets ("Nyquist periods") corresponding to possible Nyquist frequencies. Bottom plot gives a more detailed histogram at 1 day.


In the estimation of the weighted least squares spectrum of the baseline length residuals, any linear dependence (mathematical correlation) with the estimated deterministic model (distance offsets and linear trend) is taken into account as described in Sections 4.5 and 5.5. The spectrum is plotted in Figure 8.5 with respect to period instead of frequency for easier interpretation. There are clear significant spectral components at periods of 2 and 8 years, in addition to several peaks at periods shorter than a year. The lower plot in Figure 8.5 enlarges the short period range and shows significant spectral components at periods of about 1, 100, 150 and 200 days.

[Plots: Weighted LS Spectrum of Residuals; x-axis: Period (years), 2 to 16 (top) and Period (days), 0 to 500 (bottom)]

Figure 8.5: Weighted least squares spectra (independently estimated) of baseline length residuals from the deterministic model in Table 8.1. The horizontal line is the 95% confidence interval for detecting significant spectral values.


The 8 year period is interesting because it is also visible in the residuals between about 1984 and 1996 (see Figure 8.3). It was thought that this might be due to a possible additional datum offset at about 1988.7 in the data group between 1984.2 and 1992.5 (see Figure 8.2). Apparently, the instrumentation had been taken down and set up again at this time, but it was thought that this was done accurately so as not to produce any additional bias in the distance measurements (J. Langbein, personal communication, 21 March 1997). To check for the significance of such a bias, an additional datum offset was estimated at 1988.7. This resulted in replacing the 1984.2-1992.5 group (with datum offset #5) with two new groups: 1984.2-1988.7 with datum offset #5, and 1988.7-1992.5 with new datum offset #5a. Figure 8.6 shows these two new groups together with the time series of length changes. The least squares estimates of the model with the additional offset (#5a) are given in Table 8.2 and the residual series after removing the model is given in Figure 8.7. It was found that the datum offsets #5 and #5a for the two new groups were statistically different from each other at any reasonable significance level (t statistic = 7.0) and both biases were therefore modelled in the following analyses.

[Plot: Holcomb-Lepage Baseline Length Changes; x-axis: Time]

Figure 8.6: Changes in length of Holcomb to Lepage baseline with additional datum offset in observation group from 1984 to mid-1992. Different observation groups are indicated by different symbol colour/type combinations.


Table 8.2: Least squares estimates of linear trend and datum offsets, including additional datum offset (#5a).

                       Estimate    Std    t Statistic
Offset #1 (mm)           -2.0      0.1       19.3
Offset #2 (mm)           -2.8      0.2       15.1
Offset #3 (mm)           -3.6      0.2       15.5
Offset #4 (mm)           -4.2      0.5        8.7
Offset #5 (mm)          -15.3      0.4       36.9
Offset #5a (mm)         -12.6      0.6       21.9
Offset #6 (mm)          -17.1      0.8       21.3
Offset #7 (mm)           -6.2      0.4       17.4
Offset #8 (mm)           -9.2      0.5       17.0
Linear Trend (mm/yr)      1.51     0.06      26.9

[Plot: Baseline Length Residuals (Weighted Solution)]

Figure 8.7: Comparison of residual baseline length changes after removal of estimated datum offsets, including additional offset, for each observation group and a common linear trend for all groups. Different observation groups are denoted by different symbol colour/type combinations.


The weighted least squares spectrum for the residuals after removing the estimated deterministic model with the additional datum offset is given in Figure 8.8. The most obvious difference from the previous spectrum is that the peak at 8 years has now been significantly reduced by the introduction of the additional datum offset in the model. However, there still remains a large peak at about 1000 days (2.5 years) that accounts for 15% of the noise in the residual data series. One possible explanation for such an interannual behaviour may be an El Niño warming effect, which has periods of between 2 and 4 years during this time period. The warming effect is generally

[Plot: Weighted LS Spectrum of Residuals; x-axis: Period (years), 0 to 16]

Figure 8.8: Weighted least squares spectra of baseline length residuals from the deterministic model with additional distance offset. The horizontal line is the 95% confidence interval for detecting significant spectral values.


accompanied by more frequent and severe wet weather, which could cause monument motion due to higher levels of ground water. In addition, the "piling up" of warmer waters in the eastern Pacific could also possibly lead to additional crustal loading on the western seaboard of North America. The other significant peaks are at short periods and are more clearly identified in the lower plot of Figure 8.8. The largest peaks in this frequency range are at about 150 and 210 days. Curiously, these peaks are symmetrical (±30 days) about a small central peak with a semi-annual period (180 days). According to Vaníček [1969b], this corresponds to a possible modulation of a semi-annual period by a 30 day period. The semi-annual period may be related to weather. For example, it is well known that southern California generally has a wet spring and fall and a dry summer and winter, which could conceivably cause a semi-annual period in the presence of ground water, thus possibly contributing to a semi-annual behaviour of the motions of the geodetic monuments. The 30 day period may be related to lunar tidal effects. Other peaks evident in the spectrum are at periods of about 110 days and 1 day. The diurnal period is believed to be a consequence of the usual diurnal behaviour of many systematic effects related to atmospheric conditions, such as atmospheric refraction and heating (expansion) of the ground and monuments. The other notable feature of the spectrum is the absence of an annual period. In fact, the spectral value for this period is almost exactly zero, indicating that such a period had already been removed from the data. This was denied by Langbein (personal communication, 21 March 1997), however.

Langbein and Johnson [1997] also argue for the presence of a random walk signal in the residual data series. Their spectrum for the Holcomb-Lepage baseline was computed by first interpolating the unevenly spaced measurement series to an evenly spaced one by averaging the data spanning 15-35 days either side of the missing point. White noise was also added to their interpolated value. The power spectrum was then computed using the FFT technique and plotted against the log of the frequency (see Langbein and Johnson [1997, Figure 3]). Their plots display a clear trend proportional to 1/f^2 as expected for a

Page 153: the least squares spectrum, its inverse transform and - T-Space

Figure 8.9: Semi-log (top) and log (bottom) plots of weighted least squares spectra of
baseline length residuals from the deterministic model with additional datum offset. The
straight line represents a -0.60 linear trend at low frequencies (f < 4x10^-2).

random walk process (see Section 7.7). For comparison, the weighted least squares
spectrum is displayed in Figure 8.9 (top plot) using the same semi-log frequency plot. No
clear 1/f^2 trend is apparent in this spectrum. The spectrum is also displayed in Figure 8.9
(bottom plot) using a full log plot, where the presence of random walk noise should
produce a negative linear trend at low frequencies, as discussed in Section 7.7. A small
negative trend (-0.60 ± 0.08) is visible in the least squares spectrum at frequencies below 4


x 10^-2 cy/day, which becomes even smaller at higher frequencies. However, this linear
trend is proportional to 1/f^0.6, rather than the 1/f^2 characteristic of a random walk process.
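The slope test applied above amounts to a straight-line fit of log power against log frequency over the low-frequency band. A minimal sketch of such a test (illustrative only; the function name and cutoff are assumptions, and the weighted estimator used in this chapter is not reproduced here):

```python
import numpy as np

def spectral_slope(freq, power, fmax=4e-2):
    """Slope of log10(power) vs log10(freq) below fmax.

    A random walk (1/f^2) process gives a slope near -2; the
    Holcomb-Lepage residual spectrum showed only about -0.6.
    """
    mask = (freq > 0) & (freq < fmax) & (power > 0)
    slope, _ = np.polyfit(np.log10(freq[mask]), np.log10(power[mask]), 1)
    return slope

# sanity check on a synthetic 1/f^2 spectrum
f = np.logspace(-3, -1, 100)
print(round(spectral_slope(f, 1.0 / f**2), 3))  # prints -2.0
```

On a synthetic 1/f^2 spectrum the fit recovers a slope of -2 exactly; a fitted slope well above -2, as found here, argues against a pure random walk.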

The autocorrelation function for the observations was indirectly estimated from the
inverse least squares transform of the independently estimated, weighted least squares
spectrum, following the iterative procedure outlined in Section 6.4. The a priori standard
deviations of the data were used to generate a priori observation weights. The data series
was also zero-padded prior to computing the spectrum to avoid any wrap-around effects in
the autocorrelation function, as described in Section 3.4. The main difficulty encountered
was with the large number of possible time lags for which the autocorrelation needed to be
computed. For unevenly and randomly spaced data, there are in general as many different
lags as there are combinations of observation pairs. For the Holcomb-Lepage distance
measurements, there are 361 observations for which there are 65,341 unique possible time
lags (the number of off-diagonal elements in the observation covariance matrix). It was
therefore impractical to compute the autocorrelation function for all lags at once. Instead,
the ACF was computed separately for the lags corresponding to each row of the
observation covariance matrix. Only the autocorrelations for the upper triangular part of
each row needed to be computed. The entire correlation matrix R for the observations was
assembled in this way and the full covariance matrix C was obtained using the a priori
standard deviations of the observations (which were also used in the computation of the
weighted spectrum); i.e.,

C = S R S ,

where S is a diagonal matrix of the a priori standard deviations. The autocorrelation
function is plotted in Figure 8.10 together with an enlargement at short lags. Although
there is a periodic behaviour in the enlarged plot, the magnitudes of the correlations are small
even for short lags. No explanation was found for the small correlation "spikes".
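The row-by-row assembly described above ultimately evaluates the estimated ACF at every observation-pair time lag and rescales by the a priori standard deviations. A compact sketch of the same construction (memory-naive, forming all lags at once; the acf argument is a hypothetical stand-in for the estimated autocorrelation function):

```python
import numpy as np

def full_covariance(t, sigma, acf):
    """Build C = S R S from an ACF evaluated at pairwise time lags.

    t     : observation epochs (uneven spacing allowed)
    sigma : a priori standard deviations of the observations
    acf   : function mapping a time lag to a correlation coefficient
    """
    lags = np.abs(t[:, None] - t[None, :])  # every pairwise lag
    R = acf(lags)                           # correlation matrix
    np.fill_diagonal(R, 1.0)                # unit correlation at zero lag
    S = np.diag(sigma)
    return S @ R @ S

# toy example with an exponentially decaying ACF
t = np.array([0.0, 1.0, 3.0])
sigma = np.array([2.0, 2.0, 2.0])
C = full_covariance(t, sigma, lambda tau: np.exp(-tau))
print(C[0, 0])  # prints 4.0, the a priori variance
```

For the 361-observation series, forming all lags at once is exactly what was avoided by working one row of the covariance matrix at a time; the logic is otherwise the same.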


Figure 8.10: Indirect ACF, and enlargement at short lags, estimated from zero-padded
time series of Holcomb-Lepage length changes with additional datum offset.

The deterministic model of the datum offsets and linear trend was re-solved using
the new full covariance matrix. The solution is given in Table 8.3 with the additional
datum offset (#5a) at 1988.7 included. Because of the small correlations, there is little
difference in the estimated offsets and trend between this solution and that based on only a
diagonal covariance matrix (Table 8.2); all are statistically compatible. However, in most
cases the estimated standard deviations of the offsets and trend are larger when the full
covariance matrix is used, indicating greater uncertainty in the estimated parameters.


Table 8.3: Least squares estimates of linear trend and datum
offsets, including additional offset (#5a) and using estimated full
observation covariance matrix based on computed ACF.

                        Estimate    Std    t Statistic
Offset #1 (mm)            -1.7      0.2        7.8
Offset #2 (mm)            -3.6      0.2       16.2
Offset #3 (mm)            -4.6      0.2       20.8
Offset #4 (mm)            -4.1      1.0        3.9
Offset #5 (mm)           -14.0      0.3       51.3
Offset #5a (mm)          -11.9      1.0       12.3
Offset #6 (mm)           -16.2      1.3       12.4
Offset #7 (mm)            -5.8      0.6        9.3
Offset #8 (mm)            -8.9      1.2        7.4
Linear Trend (mm/yr)      1.44      0.09      16.3

Specifically, the standard deviation for the linear trend is increased from 0.06 to 0.09
mm/yr. This is thought to be caused by a slight reduction in the overall redundancy due to
the linear dependence (mathematical correlations) among the observations. There were also
some significant differences in the correlations between the estimated parameters. For
example, the correlation between offsets #5 and #5a was reduced from 0.75 to 0.44. This
caused the difference between the two offsets to become less statistically significant (t
statistic reduced from 7.0 to 2.4). Nevertheless, the difference is still statistically
significant at the 95% confidence level, leading us to still consider the possibility that the
additional datum offset is real.


Table 8.4: Summary of estimated linear trends (mm/yr) with and without
extra offset and correlations.

                        Linear Trend ± Standard Deviation
                        Without Corr.        With Corr.
Without extra offset    1.72 ± 0.05          1.61 ± 0.07
                        1.67 [Langbein & Johnson]
With extra offset       1.51 ± 0.06          1.44 ± 0.09

Finally, the estimated linear trends (baseline expansion) are summarized in Table
8.4. The main difference with the estimate from Langbein and Johnson [1997] is due to
the use of the additional datum offset #5a. When the offset is not used, the estimated trend
with or without the observation correlations is not significantly different from Langbein and
Johnson's. The differences are well within the 95% confidence intervals. When the extra
offset is used, the linear trends are reduced by about 0.2 mm/yr with or without the use of
correlations. These are significantly different at the 95% confidence level. The standard
deviation of the linear trend is only slightly increased by the additional offset.

The use of observation correlations derived from the estimated autocorrelation
function also reduces the magnitude of the linear trends both with and without the extra
offset. However, the reduction is only about 0.1 mm/yr in both cases and is not
statistically significant at the 95% confidence level. The correlations also increase the
estimated formal standard deviations of the linear trends by about 50%, even though the
magnitude of the autocorrelation is relatively small. This increase is thought to be due to an
implied reduction in the total redundancy (the existence of correlations means there are
effectively fewer truly independent observations).

Finally, it is noted that the estimated linear trend (1.44 ± 0.09 mm/yr) with the extra
offset and correlations agrees better with the linear trend (1.46 mm/yr) estimated by


Langbein and Johnson for the baseline from station Holcomb to station Bird, which is in
the same general vicinity and direction as station Lepage (see Figure 8.1). The baselines to
these two stations should therefore behave similarly in terms of their motion relative to
Holcomb. The apparent agreement therefore supports the existence of an extra datum
offset in the measurements to Lepage.

8.3 GPS Point Positioning

The use of the Global Positioning System (GPS) has grown greatly in recent years,
largely owing to the wide availability of small, low cost receivers. For an in-depth
explanation of the concepts involved in GPS, see Wells et al. [1986] or Dana [1997]. In its
most basic mode of operation, referred to as the Standard Positioning Service, users can
obtain their position to an accuracy of only about 100 metres horizontally and about 150
metres vertically. In this mode, GPS receivers make use of the so-called C/A code pseudo-
range (measured satellite-to-receiver distance), which is obtained by timing the satellite-to-
receiver travel time of the basic C/A (coarse acquisition) code that is superimposed on the
L1 carrier frequency. The satellite-to-receiver ranges are used to solve for the receiver's
position in what is essentially known as a 3-dimensional resection problem in surveying.
This mode of positioning is called "point positioning" to distinguish it from other, more
accurate, methods based on relative or differential positioning between receivers.
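The resection mentioned above is, in least squares terms, a small Gauss-Newton problem: linearize the range equations about an approximate position and iterate corrections to the receiver coordinates and clock bias. A stripped-down sketch (the coordinates and ranges are invented, and all real-world error sources, including S/A, are ignored):

```python
import numpy as np

def point_position(sats, pseudoranges, x0, iters=10):
    """Gauss-Newton point positioning from pseudo-ranges.

    sats         : (n, 3) satellite coordinates
    pseudoranges : (n,) observed pseudo-ranges (range + clock bias)
    x0           : initial guess [x, y, z, clock_bias]
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        rho = np.linalg.norm(sats - x[:3], axis=1)   # geometric ranges
        resid = pseudoranges - (rho + x[3])          # observed - computed
        # design matrix: negative unit receiver-to-satellite
        # directions plus a column of ones for the clock bias
        A = np.hstack([-(sats - x[:3]) / rho[:, None],
                       np.ones((len(sats), 1))])
        dx, *_ = np.linalg.lstsq(A, resid, rcond=None)
        x = x + dx
    return x

# recover an invented receiver position from five noiseless ranges
truth = np.array([1.0, 2.0, 3.0, 0.5])
sats = np.array([[100.0, 0, 0], [0, 100.0, 0], [0, 0, 100.0],
                 [-100.0, -100.0, 0], [50.0, 50.0, 50.0]])
pr = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
print(np.round(point_position(sats, pr, [0, 0, 0, 0]), 3))
```

With at least four well-distributed satellites the normal equations are non-singular and the iteration recovers position and clock bias together, which is why four satellites is the usual minimum for point positioning.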

Although the pseudo-range observable is capable of providing point positioning
accuracies of about 10 to 30 metres, the US Department of Defense intentionally degrades
the observable to the 100 m level for security reasons. This degradation is called Selective
Availability (S/A) and involves the introduction of systematic errors in the form of a
mathematical algorithm (called "dithering") into the broadcast satellite ephemeris and clock.
To date only clock dithering has apparently been applied. This error propagates directly
into the signal travel time from which the pseudo-range observable is derived. However,


because the S/A error is fairly systematic, there exist very large autocorrelations in the
pseudo-range data and thus also in the estimated positions derived from them. Here, only
the vertical position is examined, with the aim of investigating the degree of autocorrelation
in the data and the effect of using the autocorrelation function to weight the point positions
when using time averaging to reduce the effects of S/A and improve the accuracy. The
analysis of the two horizontal components can be done in an analogous fashion.

The data used in this study were provided by W. Prescott of the US Geological
Survey (personal communication, 19 May 1994). They were obtained from a geodetic
quality Ashtech L-XII GPS receiver and included, for each measurement epoch, the receive
time, computed WGS-84 Cartesian coordinates of the receiver's antenna and computed
receiver clock bias. The time series of instantaneous point positions refers to station
Chabot in the south part of Oakland, California. The point positions were recorded every
30 seconds for a total of 24 hours on April 6, 1994. Plots of the variation in the horizontal
and vertical position estimates over this 24 hour period are given in Figure 8.11, and for
only the first hour in Figure 8.12. The high degree of autocorrelation at short time intervals
is readily apparent from the very systematic way in which the positions slowly vary.

As already stated, the most common method of reducing the effects of S/A is to
average the point positions over time. Generally, users average their positions over
intervals as short as 5 minutes and at most about an hour. Here, one hour averaging is
used to examine the effectiveness of this in reducing the effects of S/A. This provides for
24 independent hourly means.

For each hour, the least squares spectrum is computed. Any linear dependence
between the estimated mean and the spectral components is accounted for as described in
Section 5.5. The hour-long subsets are also zero-padded to avoid any wrap-around effects
in the derived autocorrelation functions (see Section 3.4). The systematic nature of S/A is
revealed as statistically significant peaks in the spectra, mainly at lower frequencies. The
independently estimated least squares spectrum for the first hour of the height series is given in


Figure 8.11: Variations in derived horizontal (top) and vertical (bottom) GPS positions
over 24 hours at station Chabot. Variation is with respect to mean position.


Figure 8.12: Variations in recorded horizontal (top) and vertical (bottom) GPS positions
for the first hour at station Chabot. Variations are with respect to 24 hour mean position.


Figure 8.13, where the S/A effects are clearly evident. The peak around 0.1 cy/min (period
= 10 min) is typical also of the other hourly data sets. The spectra are sufficiently different
from hour to hour, however, so that it is not possible to predict the S/A algorithm with only
one hour of data.
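Each hourly spectrum above is an "out of context" least squares spectrum: every trial frequency is fitted on its own by a cosine/sine pair and scored by the fraction of variance it explains. A minimal unweighted sketch of that estimator (illustrative names; the version used here additionally applies the observation weights and accounts for the removed mean):

```python
import numpy as np

def ls_spectrum(t, y, freqs):
    """Out-of-context least squares spectrum of an unevenly
    spaced series: for each trial frequency, fit a cosine/sine
    pair and report the fraction of variance it explains.
    """
    y = y - y.mean()
    spec = []
    for f in freqs:
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        yhat = A @ coef
        spec.append(yhat @ yhat / (y @ y))   # explained variance fraction
    return np.array(spec)

# a 10-minute period hidden in unevenly sampled noise produces
# a peak near 0.1 cy/min, as in Figure 8.13
rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 60, 200))          # minutes
y = np.sin(2 * np.pi * 0.1 * t) + 0.3 * rng.standard_normal(200)
freqs = np.linspace(0.01, 0.5, 50)
print(freqs[np.argmax(ls_spectrum(t, y, freqs))])
```

In the weighted form, the plain dot products above are replaced by inner products weighted with the inverse observation covariance matrix.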

The inverse least squares transform of the zero-padded spectrum was used to obtain
the (biased) autocorrelation function (ACF) for each hour. The biased estimate of the ACF
for the first hour is given in Figure 8.14. There are strong, periodic correlations present
which are typical also of the other hourly subsets. For each hourly subset the estimated
autocorrelation function is used to populate a full correlation matrix for the observations.
These are then used as the a priori weight matrix in the re-estimation of the hourly means;
referred to here as the "weighted" means. Note, however, that the ACF used in the
weighted mean was computed by first removing the unweighted mean (using the identity
matrix for the a priori weight matrix). Consequently, the weighted mean was re-estimated
by iterating the computation of the ACF using the weighted mean from the previous
iteration, as described in Section 6.4 and illustrated in Figure 6.1. The standard deviations
of the means were also scaled by the square root of the estimated variance factor of the
height residuals from each hourly mean solution.
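The weighted mean itself is the generalized least squares estimate of a constant model under the full covariance matrix, with its formal variance following from the same expression. A sketch of that single step (the iteration over re-estimated ACFs described above is omitted, and the toy covariance is an assumption for illustration):

```python
import numpy as np

def weighted_mean(y, C):
    """Generalized least squares mean of a series with full
    covariance matrix C (the BLUE of the model y = mu + e).
    Returns the mean and its formal standard deviation.
    """
    n = len(y)
    Cinv_one = np.linalg.solve(C, np.ones(n))
    w = Cinv_one / Cinv_one.sum()        # GLS weights, summing to 1
    mu = w @ y
    std = 1.0 / np.sqrt(Cinv_one.sum())
    return mu, std

# with positive correlations the formal std of the mean greatly
# exceeds the uncorrelated value sigma/sqrt(n)
n, sigma, rho = 60, 4.0, 0.5
C = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))
mu, std = weighted_mean(np.full(n, 225.0), C)
print(round(std, 2), std > sigma / np.sqrt(n))  # prints 2.85 True
```

This is the mechanism behind the larger, more realistic formal standard deviations reported below: correlated observations carry less independent information than the same number of uncorrelated ones.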

The unweighted means (equally weighted observations with no correlations) and
weighted means (with correlations based on the estimated ACF) were obtained in the above
manner for each hourly subset of data. The results are summarized in Table 8.5 and plotted
in Figure 8.15. It is evident from these results that there is a systematic variation in the
hourly means. The average of the formal standard deviations is only about 4 m whereas
the RMS of the hourly means is about 13 m. This clearly indicates that the formal standard
deviations of the estimated means are too optimistic. The use of observation correlations
derived from the ACFs estimated from each hour of data does not improve the results. The
average formal standard deviation is still much smaller than the RMS.


Figure 8.13: Independently estimated least squares spectrum of GPS height
measurements for the first hour (data zero-padded).

Figure 8.14: Indirect estimate of the biased autocorrelation function via the inverse least
squares transform of the least squares spectrum for the first hour.


Table 8.5: Unweighted and weighted hourly means and their standard deviations (Std) of
GPS height measurements over a 24 hour period. "Average", "Range" and "RMS" denote
the arithmetic average (unweighted), range (max - min) and root mean square (about the
"Average"), respectively, of these hourly values and are provided for illustration only.

              Unweighted            Weighted
              Mean      Std         Mean      Std
Hours 1-23    [values illegible in the source scan]
Hour 24       244.776    -          241.927   9.138
Average       225.173   4.186       226.861   5.152
Range          47.519   3.874        52.466   8.455
RMS            12.576    -           13.691    -


Figure 8.15: Unweighted (top) and weighted (bottom) hourly means of GPS height
measurements over a 24 hour period. The ACF-weighted means are based on iterative
estimation of the ACF. Error bars represent 95% confidence intervals.


From these results it is clear that the one hour averaging does not eliminate all the
effects of S/A. To gain a better understanding of the nature of S/A for this data set, the
least squares spectrum was computed for the entire 24 hour data set. The spectrum is
plotted in Figure 8.16 and the largest peaks are tabulated in Table 8.6. It is clear from these
results that there are many significant periodic components at periods longer than one hour.
In fact, the largest peak has a period of about 1.5 hrs. There is also a peak at 24 hrs, which
is the fundamental period of the data series. This could indicate a 24 hour period (possibly
diurnal atmospheric effects) or spectral leakage from periodic components beyond 24
hours. Without a longer data series, it is not possible to distinguish between these two
possibilities. Regardless, it is evident that the hourly means are not capable of "averaging
out" all of the systematic effects. Longer averaging is clearly needed.

The hourly means were recomputed using the autocorrelation function from the
entire 24 hour data set. The autocorrelation function is plotted in Figure 8.17 and the new
weighted hourly means tabulated in Table 8.7 and plotted in Figure 8.18. The lower plot in
Figure 8.18 shows the differences between these means and the unweighted ones. All
differences are within the 95% confidence interval. Although the estimated means still
exhibit the same systematic trend as for the unweighted ones, the formal standard
deviations are much larger. The average of these formal standard deviations (12 m) is very
nearly the same as the RMS of the means (13 m). This indicates that the formal standard
deviations are now much more realistic, accounting for the uncertainty due to the systematic
effects still present in the results.

As stated in the previous section, this increase in the formal standard deviations is
thought to be caused by a reduction in the overall redundancy due to correlations among the
individual position estimates. The observations can no longer be considered independent
of each other. The degrees of freedom is thus somewhat less than the number of
observations minus the number of unknowns. The larger standard deviations represent a


Figure 8.16: Least squares spectrum (independently estimated) for the entire 24 hour data
set. The lower plot is an enlargement of the short period range. The horizontal line is the
95% confidence interval for detecting significant spectral values.


Table 8.6: Twenty of the largest peaks in the least squares spectrum in Figure 8.16.

Period (min)    Spectral Value
    84.7            0.046
    62.6            0.020
   160.0            0.016
    27.7            0.016
    12.3            0.016
  1440.0            0.016
    41.1            0.015
    36.0            0.014
    72.0            0.013
    16.0            0.013
    13.7            0.013
    32.0            0.011
    15.2            0.011
   144.0            0.011
    19.2            0.011
    22.9            0.011
    25.3            0.011
    17.3            0.011
    16.2            0.011
    28.8            0.010

Figure 8.17: Autocorrelation function for the entire 24 hour data set.


Table 8.7: Weighted hourly means of GPS height measurements and their standard
deviations (Std) over a 24 hour period using correlations from ACF based on 24 hours of
data. "Average", "Range" and "RMS" denote the arithmetic average (unweighted), range
(max - min) and root mean square (about the "Average"), respectively, of these hourly
values and are provided for illustration only.

              Mean       Std
Hours 1-23    [values illegible in the source scan]
Hour 24       249.421    21.283
Average       228.094    11.877
Range          45.917    15.769
RMS            13.289      -


Figure 8.18: Weighted (top) hourly means of GPS height measurements over a 24 hour
period using correlations obtained from ACF based on 24 hours of data, and difference
with equally weighted means without correlations (bottom). Error bars represent 95%
confidence intervals. Note different vertical scale in top plot from that used in Figure 8.15.


more realistic estimate of precision than that derived from the unweighted means without
any correlations among the individual height observations.
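The notion of "effectively fewer truly independent observations" can be quantified through the variance of the simple mean, Var(mean) = 1'C1/n^2: the implied effective sample size is the average observation variance divided by n times... more directly, the ratio of the average diagonal of C to n*Var(mean). A sketch (using an illustrative AR(1)-type correlation, not the estimated GPS ACF):

```python
import numpy as np

def effective_n(C):
    """Effective number of independent observations implied by a
    covariance matrix C, via Var(mean) = 1' C 1 / n^2 and
    n_eff = mean(diag(C)) / Var(mean).
    """
    n = C.shape[0]
    var_mean = np.ones(n) @ C @ np.ones(n) / n**2
    return C.diagonal().mean() / var_mean

# 120 unit-variance observations with strong serial correlation
# behave like far fewer independent ones
n, rho = 120, 0.8
lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
C = rho ** lags
print(effective_n(C) < n / 4)  # prints True
```

For uncorrelated data this reduces to n_eff = n; strong positive correlations shrink it sharply, which is exactly why the formal standard deviations grow once the full covariance matrix is used.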

It should be realized that the S/A algorithm is very complex and a closely guarded
secret. It cannot be assumed the effect would repeat from day to day so as to enable its
prediction. Furthermore, the systematic signals evident in the spectrum and autocorrelation
function may also be due to other effects, such as tropospheric and ionospheric delays and
signal multipath. Consequently, the systematic positioning errors can be expected to vary
spatially so that, in general, an autocorrelation function computed for one point would not
be applicable to any other point. Further investigations of spectra for longer time periods,
at different times of the year and at different locations would be needed to make more
general conclusions about these effects.


Chapter 9 Conclusions and Recommendations

A general least squares (LS) transform and its inverse have been developed which
can accommodate unequally spaced data, random observation times, arbitrary frequencies,
arbitrarily weighted and correlated observations, as well as the effect of any a priori
deterministic model on the spectral components. The LS spectrum has also been
reformulated in terms of this LS transform and its inverse. The conventional Fourier
transform and derived spectrum have been shown to be just a special case of the more
general LS transform and spectrum.
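The special-case claim is easy to check numerically: for an evenly spaced series evaluated at one of the Fourier frequencies, the independently estimated LS power coincides, up to a normalization chosen here for the comparison, with the FFT periodogram ordinate. A minimal check (an illustration, not the derivation given in the earlier chapters):

```python
import numpy as np

# Evenly spaced series; pick a non-degenerate Fourier frequency k/N.
N = 64
rng = np.random.default_rng(0)
y = rng.standard_normal(N)
y -= y.mean()
t = np.arange(N)

k = 3
f = k / N
# independent (out-of-context) LS fit of a cosine/sine pair at f
A = np.column_stack([np.cos(2 * np.pi * f * t),
                     np.sin(2 * np.pi * f * t)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
ls_power = (N / 4) * (coef @ coef)       # LS power at frequency f

# classical periodogram ordinate at the same frequency
fft_power = np.abs(np.fft.rfft(y)[k])**2 / N
print(np.isclose(ls_power, fft_power))   # prints True
```

The agreement rests on the cosine and sine columns being exactly orthogonal at Fourier frequencies of an evenly spaced series; at arbitrary frequencies, or for uneven spacing, that orthogonality is lost and only the LS formulation remains valid.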

The LS transform avoids many of the limitations associated with the traditional FFT
algorithms. These include the requirements that the data be equally spaced and have 2^n data
points (some FFT algorithms exist for handling other than 2^n data points, but these begin to
lose the computational efficiency of the FFT). It is also not limited to providing
information only about the "Fourier" frequencies (i.e., the integer multiples of the
fundamental frequency). On the other hand, the LS transform is much more
computationally demanding than the FFT.

The spectral components in the LS spectrum and inverse LS transform can be
estimated either independently (out of context) of each other or simultaneously (in context)
with each other. However, the simultaneous estimation is very sensitive, in terms of ill-
conditioning, to the choice of frequencies while the independent estimation is not.
Independent estimation is thus a more robust estimator in general and was also found to
provide a more reliable estimate of the autocorrelation function. Nevertheless, it would be
possible to use the independent estimation for the identification of significant spectral


components, and to model only these components in the simultaneous estimation of the
inverse LS transform (for LS approximation) or in the LS spectrum and the indirectly
derived autocorrelation function. This would result in a more optimal estimation of the
inverse transform and the spectrum, where the correlations among the estimated spectral
components are taken into account. Care must be taken, however, when selecting these
components to avoid any ill-conditioning in the simultaneous estimation due to
multicollinearity among the components (see Draper and Smith [1981, p. 258] for a
discussion of multicollinearity). In such cases, only the independently estimated spectral
components can be used. It is recommended to further investigate the advantages and
disadvantages of simultaneously estimating the spectral components.

The most significant application of the LS transform is in the rigorous determination
of autocorrelation functions for unevenly spaced data series. Autocorrelation functions can
be indirectly derived from the inverse LS transform of the LS spectrum. These can then be
used to construct fully populated a priori weight matrices for more complete modelling of
systematic effects in least squares estimation problems. The tests with real data show that
the use of fully populated weight matrices generally results in an increase in the magnitude
of the standard deviations of the estimated model parameters. This provides more realistic
estimates of the uncertainties and is thought to be caused by an implicit reduction in
redundancy due to the correlations among the observations.

The effect of correlations among the observations on the estimation of the LS
spectrum was found not to be very significant, as claimed by Steeves [1981], even when
the correlations are relatively large. This is an important consideration because accounting
for the correlations requires the use of a fully populated observation covariance matrix in
the weighted form of the LS expressions derived here, which imposes a great
computational burden. Ignoring the correlations was found to have little effect on the
results (estimated spectra and model parameters) but it significantly increases the
computational efficiency. On the other hand, as stated above, the use of a priori


correlations among the observations generally gives more realistic estimates of the
uncertainties associated with the estimated model parameters, whereas ignoring them
generally results in overly optimistic estimates of the standard deviations.

Although the algorithm for the LS transform and spectrum is relatively slow
compared to the FFT, the computational efficiency can be greatly improved by using the
trigonometric identities employed by Vaníček [1971]. It is strongly recommended to use
these in any practical use of the LS transform and spectrum. It is also recommended to
investigate the use of "windowing" techniques widely used in traditional Fourier spectra to
reduce the effects of aliasing (see, e.g., Press et al. [1992, pp. 545-551]).

Finally, the direct and inverse LS transforms have been developed here only for
one-dimensional problems. Thus, it is strongly recommended to investigate the possibility
of extending the LS transform to multiple dimensions in order to allow for the estimation of
multidimensional spectra and autocorrelation functions. This is especially important in
geodesy, where most problems are of a two- or three-dimensional nature and where
observations are affected by a wide variety of (multidimensional) systematic errors. In
these cases, a multidimensional form of the LS transforms should ideally be used.


References

Bendat, J.S. and A.G. Piersol (1971). Random Data: Analysis and Measurement
Procedures. John Wiley and Sons, Inc., New York.

Bracewell, R.N. (1989). The Fourier transform. Scientific American, Vol. 262, No. 6,
June, pp. 86-95.

Bracewell, R.N. (1986). The Hartley Transform. Oxford University Press, London.

Castle, R.O., T.D. Gilmore, R.K. Mark, B.W. Brown, Jr. and R.C. Wilson (1983). An
examination of the southern California field test for the systematic accumulation of
the optical refraction error in geodetic leveling. Geophysical Research Letters, Vol.
10, No. 11, pp. 1081-1084.

Chen, Y.C. (1983). Analysis of deformation surveys - A generalized approach. Technical
Report No. 94, Department of Geodesy and Geomatics Engineering, University of
New Brunswick, Fredericton, N.B.

Chen, Y.C., A. Chrzanowski and M. Kavouras (1990). Assessment of observations
using minimum norm quadratic unbiased estimation (MINQUE). CISM
Journal, Vol. 44, No. 4, pp. 39-46.

Chrzanowski, A., M. Caissy, J. Grodecki and J. Secord (1994). Software development and
training for geometrical deformation analysis. Contract Report 95-001, Geodetic
Survey Division, Geomatics Canada, Ottawa.

Cooley, J.W. and J.W. Tukey (1965). An algorithm for the machine calculation of
complex Fourier series. Mathematics of Computation, Vol. 19, pp. 297-301.

Craymer, M.R. (1984). Data series analysis and systematic effects in levelling. M.A.Sc.
Thesis, Dept. of Civil Engineering (Survey Science), University of Toronto,
Toronto; Technical Report No. 6, Survey Science, University of Toronto (Erindale
Campus), Mississauga, Ontario.

Craymer, M.R. and P. Vaníček (1985). An investigation of systematic errors in Canadian
levelling lines. Proc. Third International Symposium on the North American
Vertical Datum, Rockville, MD, April 21-26, pp. 441-447.

Craymer, M.R. and P. Vaníček (1986). Further analysis of the 1981 Southern California
field test for levelling refraction. Journal of Geophysical Research, Vol. 91, No.
B9, pp. 9045-9055.

Craymer, M.R. and P. Vaníček (1990). A comparison of various algorithms for the
spectral analysis of unevenly spaced data series. Presented at the Joint Annual
Meeting of the Canadian Institute of Surveying and Mapping and Canadian
Geophysical Union, Ottawa, May 22-25.

Craymer, M.R., P. Vaníček and R.O. Castle (1995). Estimation of rod scale errors in
geodetic leveling. Journal of Geophysical Research, Vol. 100, No. B8, pp.
15129-15145.

Dahlquist, G. and A. Björck (1974). Numerical Methods. Prentice-Hall, Inc., Englewood
Cliffs, NJ.

Dana, P.H. (1997). Global Positioning System Overview. The Geographer's Craft
Project, Department of Geography, University of Texas at Austin. World Wide
Web address <http://www.utexas.edu/depts/grg/gcraft/notes/gps/gps.html>

Deeming, T.J. (1975). Fourier analysis with unequally-spaced data. Astrophysics and
Space Science, Vol. 36, pp. 137-158.

Draper, N.R. and H. Smith (1981). Applied Regression Analysis. Second Edition. John
Wiley & Sons, New York.

El-Rabbany, A. and A. Kleusberg (1992). Physical correlations in GPS differential
positioning. Proc. Sixth International Symposium on Satellite Positioning,
Columbus, OH, March 17-20.


El-Rabbany, AE-S. (1994). The Effkct of Physical Comlations on the Ambiguity

ResoIution and Acciiracy Estimation in GPS Differential Positioning. Technical

Rtport No. 170, Department of Geodesy and Geomatics Engineering, University

of New Brunswick, Fredericton, N.B.

Fesraz-Mello, S. (1981). Estimation of paiods from unequdy spaced observations. The

Asnommicul JournclZ, Vol. 86, No. 4, pp. 619-624.

Freund, J E (1 97 1). MuthematicaZ Statistics. Second Edition. Prentice-Hall, Inc .,

Englewood W s , NJ.

Golub, G.H. and C.F. Van Loan (1983). M d Cmpuratiom. The John Hopkins

University Press, Baltimore, MD.

Grafarend, E.W. (1976). Geodetic applications of stochastic processes. Physics of the

Earth und PImerary Interiors, Vol. 12, pp. 15 1 - 179.

Grafarend, E.W. (1984). Variance-covariance component estimation of Helmert type in the Gauss-Helmert model. ZfV, Vol. 109, pp. 34-44.

Grafarend, E.W., A. Kleusberg and B. Schaffrin (1980). An introduction to the variance-covariance component estimation of Helmert type. ZfV, Vol. 105, pp. 161-180.

Hartley, R.V.L. (1942). A more symmetrical Fourier analysis applied to transmission problems. Proceedings of the IRE, Vol. 30, p. 144.

Holdahl, S.R. (1981). A model of temperature stratification for the correction of leveling refraction. Bulletin Géodésique, Vol. 55, No. 3, pp. 231-249.

Horne, J.H. and S.L. Baliunas (1986). A prescription for period analysis of unevenly sampled time series. The Astrophysical Journal, Vol. 302, pp. 757-763.

Kac, M. (1983). What is random? American Scientist, Vol. 71, pp. 405-406.

Kelly, K.M. (1991). Weight estimation in geodetic levelling using variance components derived from analysis of variance. M.A.Sc. Thesis, Dept. of Civil Engineering (Survey Science), University of Toronto, Toronto.


Kreyszig, E. (1978). Introductory Functional Analysis with Applications. John Wiley and Sons, New York.

Langbein, J. and H. Johnson (1997). Correlated errors in geodetic time series: Implications for time-dependent deformation. Journal of Geophysical Research, Vol. 102, No. B1, pp. 591-603.

Langbein, J.O., R.O. Burford and L.E. Slater (1990). Variations in Fault Slip and Strain Accumulation at Parkfield, California: Initial Results Using Two-Color Geodimeter Measurements, 1984-1988. Journal of Geophysical Research, Vol. 95, No. B3, pp. 2533-2552.

Lomb, N.R. (1976). Least-squares frequency analysis of unequally spaced data. Astrophysics and Space Science, Vol. 39, pp. 447-462.

Lucht, H. (1972). Korrelation im Präzisionsnivellement. Wissenschaftliche Arbeiten der Lehrstühle für Geodäsie, Photogrammetrie und Kartographie, Nr. 48, Technische Universität Hannover.

Lucht, H. (1983). Neighbourhood-correlation among observations in levelling networks. In H. Pelzer and W. Niemeier (Editors), Precise Levelling: Contributions to the Workshop on Precise Levelling, Dümmler Verlag, Bonn, pp. 315-326.

Maul, G.A. and A. Yanaway (1978). Deep sea tides determination from GEOS-3. NASA Contract Report 141435, National Aeronautics and Space Administration, Wallops Flight Center, Wallops Island, VA.

Miller, R.G. (1966). Simultaneous Statistical Inference. McGraw-Hill, New York.

Moritz, H. Covariance functions in least-squares collocation. Report No. 240, Department of Geodetic Science, The Ohio State University, Columbus, OH.

O'Neill, M.A. (1988). Faster than fast Fourier. Byte, Vol. 13, No. 4, pp. 293-300.

Papoulis, A. (1965). Probability, Random Variables, and Stochastic Processes. 2nd Edition. McGraw-Hill Book Company, New York.


Press, W.H. and G.B. Rybicki (1989). Fast algorithm for spectral analysis of unevenly sampled data. The Astrophysical Journal, Vol. 338, pp. 277-280.

Press, W.H. and S.A. Teukolsky (1988). Search algorithm for weak periodic signals in unevenly spaced data. Computers in Physics, Vol. 2, No. 6, pp. 77-82.

Press, W.H., B.P. Flannery, S.A. Teukolsky and W.T. Vetterling (1986). Numerical Recipes: The Art of Scientific Computing. Cambridge University Press, Cambridge.

Press, W.H., S.A. Teukolsky, W.T. Vetterling and B.P. Flannery (1992). Numerical Recipes in FORTRAN: The Art of Scientific Computing, Second Edition. Cambridge University Press, Cambridge.

Priestley, M.B. (1981). Spectral Analysis and Time Series. Academic Press, London.

Rao, C.R. and J. Kleffe (1988). Estimation of Variance Components and Applications. North-Holland, New York.

Rochester, M.G., O.G. Jensen and D.E. Smylie (1974). A Search for the Earth's 'Nearly Diurnal Free Wobble'. Geophysical Journal of the Royal Astronomical Society, Vol. 38, pp. 349-363.

Savage, J.C. and M. Lisowski (1995). Geodetic monitoring of the southern San Andreas fault, California, 1980-1991. Journal of Geophysical Research, Vol. 100, pp. 8185-8192.

Scargle, J.D. (1982). Studies in astronomical time series analysis II: Statistical aspects of spectral analysis of unevenly spaced data. The Astrophysical Journal, Vol. 263, pp. 835-853.

Schwarz, K.P., M. Sideris, E.G. Anderson, P. Stoliker and S.M. Nakiboglu (1986). A weight estimation scheme for the NAVD 88 readjustment. Contract Report 86-02, Geodetic Survey Division, Geomatics Canada, Ottawa.


Seeber, G. (1973). Data evaluation by covariance analysis, exercised on photographic satellite observations. Proc. Symposium on Earth's Gravitational Field and Secular Variations in Position, pp. 454-462.

Sjöberg, L.E. (1983). Unbiased estimation of variance-covariance components in condition adjustments with unknowns - A MINQUE approach. ZfV, Vol. 108, pp. 382-387.

Slater, L.E. and G.R. Huggett (1976). A multiwavelength distance-measuring instrument for geophysical experiments. Journal of Geophysical Research, Vol. 81, pp. 6299-6304.

Steeves, R.R. (1981). A statistical test for significant peaks in the least squares spectrum. In Collected Papers, Geodetic Survey, Surveys and Mapping Branch, Energy, Mines and Resources Canada, Ottawa, Ontario.

Stein, R.S. and W. Thatcher (1982). Field test for refraction in leveling. Eos Transactions, American Geophysical Union, Vol. 63, No. 45, p. 1106.

Stein, R.S., C.T. Whalen, S.R. Holdahl, W.E. Strange and W. Thatcher. Saugus-Palmdale, California, field test for refraction in historical levelling surveys. Journal of Geophysical Research, Vol. 91, No. B9, pp. 9056-90??.

Taylor, J. and S. Hamilton (1972). Some tests of the Vaníček method of spectral analysis. Astrophysics and Space Science, Vol. 17, pp. 357-367.

Vaníček, P. (1969a). Approximate spectral analysis by least-squares fit. Astrophysics and Space Science, Vol. 4, pp. 387-391.

Vaníček, P. (1969b). New Analysis of the Earth-Pole Wobble. Studia Geophysica et Geodaetica, Vol. 13, pp. 225-230.

Vaníček, P. (1971). Further development and properties of the spectral analysis by least-squares. Astrophysics and Space Science, Vol. 12, pp. 10-33.

Vaníček, P. (1979). Tensor structure and the least squares. Bulletin Géodésique, Vol. 53, pp. 221-225.


Vaníček, P. (1986). Diagrammatic approach to adjustment calculus. Geodezja, Vol. 86, No. 999, pp. 29-39.

Vaníček, P. and M.R. Craymer (1983a). Autocorrelation functions as a diagnostic tool in levelling. In H. Pelzer and W. Niemeier (Editors), Precise Levelling: Contributions to the Workshop on Precise Levelling, Dümmler Verlag, Bonn, pp. 327-341.

Vaníček, P. and M.R. Craymer (1983b). Autocorrelation functions in the search for systematic errors in levelling. Manuscripta Geodaetica, Vol. 8, pp. 321-341.

Vaníček, P., G.H. Carrera and M.R. Craymer (1985). Corrections for systematic errors in the Canadian levelling network. Technical Report No. 10, Survey Science, University of Toronto (Erindale Campus), Mississauga, Ontario.

Vaníček, P. and E.W. Grafarend (1980). On the weight estimation in levelling. NOAA Technical Report NOS 86 NGS 17, U.S. Department of Commerce, National Oceanic and Atmospheric Administration, National Ocean Survey, Rockville, MD.

Wassef, A.M. (1959). Note on the application of mathematical statistics to the analysis of levelling errors. Bulletin Géodésique, Vol. 52, pp. 19-25.

Wassef, A.M. (1974). On the search for reliable criteria of the accuracy of precise levelling based on statistical considerations of the discrepancies. Bulletin Géodésique, Vol. 112, pp. 149-163.

Wassef, A.M. (1976). Propagation of errors in precise levelling and its bearing on the assessment of recent crustal movements. Bulletin Géodésique, Vol. 53, pp. 53-60.

Wells, D.E., N. Beck, D. Delikaraoglou, A. Kleusberg, E.J. Krakiwsky, G. Lachapelle, R.B. Langley, M. Nakiboglu, K.-P. Schwarz, J.M. Tranquilla and P. Vaníček (1987). Guide to GPS Positioning. Canadian GPS Associates, Fredericton, N.B.

Wells, D.E., P. Vaníček and S. Pagiatakis (1985). Least squares spectral analysis revisited. Technical Report No. 84, Department of Geodesy and Geomatics Engineering, University of New Brunswick, Fredericton, N.B.


Whalen, C.T. and W.E. Strange (1981). The 1981 Saugus to Palmdale, California leveling refraction test. NOAA Technical Report NOS 98 NGS 27, National Geodetic Information Center, Rockville, MD.

Zhang, J., Y. Bock, H. Johnson, P. Fang, J. Genrich, S. Williams, S. Wdowinski and J. Behr (1997). Southern California Permanent GPS Geodetic Array: Error Analysis of Daily Position Estimates and Site Velocities. Journal of Geophysical Research, in press.
