REPORT DOCUMENTATION PAGE

Report Number: 4
Title: Simple and Efficient Estimation of Parameters of Non-Gaussian Autoregressive Processes
Type of Report & Period Covered: August 1984 to July 1986
Authors: Steven Kay and Debasis Sengupta
Contract or Grant Number: N00014-84-K-0527
Performing Organization: Electrical Engineering Department, University of Rhode Island, Kingston, Rhode Island 02881
Controlling Office: Office of Naval Research, Department of the Navy, Arlington, Virginia 22217
Report Date: August 1986
Number of Pages: 42
Security Classification: Unclassified
Distribution Statement: Approved for public release. Distribution unlimited.
Key Words: Parameter Estimation; Autoregressive Processes; Non-Gaussian Processes; Maximum Likelihood Estimator; Weighted Least Squares; Efficiency; Robustness
Abstract: A new technique for the estimation of autoregressive filter parameters of a non-Gaussian autoregressive process is proposed. The probability density function of the driving noise is assumed to be known. The new technique is a two-stage procedure motivated by maximum likelihood estimation. It is computationally much simpler than the maximum likelihood estimator and does not suffer from convergence problems. Computer simulations indicate that unlike the least squares or linear prediction estimators, the proposed estimator is nearly efficient, even for moderately sized data records. By a slight modification the proposed estimator can also be used in the case when the parameters of the driving noise probability density function are not known.


Report Number 4

Simple and Efficient Estimation of Parameters of Non-Gaussian Autoregressive Processes

STEVEN KAY AND DEBASIS SENGUPTA

Department of Electrical Engineering
University of Rhode Island
Kingston, Rhode Island 02881

    August 1986

    Prepared for

    OFFICE OF NAVAL RESEARCH Statistics and Probability Branch

Arlington, Virginia 22217 under Contract N00014-84-K-0527
S.M. Kay, Principal Investigator

    Approved for public release; distribution unlimited

Abstract

A new technique for the estimation of autoregressive filter parameters of a non-Gaussian autoregressive process is proposed. The probability density function of the driving noise is assumed to be known. The new technique is a two-stage procedure motivated by maximum likelihood estimation. It is computationally much simpler than the maximum likelihood estimator and does not suffer from convergence problems. Computer simulations indicate that unlike the least squares or linear prediction estimators, the proposed estimator is nearly efficient, even for moderately sized data records. By a slight modification the proposed estimator can also be used in the case when the parameters of the driving noise probability density function are not known.

I. Introduction

Estimation of the parameters of autoregressive (AR) processes has been widely addressed [Box and Jenkins 1970], [Kay 1986]. These processes are modeled by an all-pole filter excited by a white Gaussian process, also referred to as the driving noise. The class of AR processes driven by white non-Gaussian noise has not received much attention, although they are capable of representing a wide range of physical processes [Sengupta and Kay 1986]. Previous research has shown that it may be possible to estimate some of the parameters characterizing a non-Gaussian AR process more precisely than those of a Gaussian AR process with the same power spectral density (PSD). Specifically, the Cramer-Rao bound (CR bound) for the variances of the estimators of the AR filter parameters [Martin 1982] is lower in the non-Gaussian case than in the Gaussian case [Pakula 1986], [Sengupta 1986]. Yet utilization of this theoretical result has been limited. The method of maximum likelihood, which is the most widely considered approach for AR parameter estimation, is usually associated with computational complexity and convergence problems. In the case of non-Gaussian PDF's, maximization of the likelihood function leads to a set of highly non-linear equations [Sengupta and Kay 1986], in contrast to the Gaussian case where the equations become linear after a few simplifying assumptions. It is the solution of these non-linear equations which results in the computational complexity of the maximum likelihood estimator (MLE). Iterative techniques which are often used to solve these equations suffer from convergence problems for short data records.

Several non-Gaussian PDF's have been proposed to model the driving noise. Zero-mean symmetric PDF's having tails heavier than the Gaussian tail are of particular interest because they may model a nominally Gaussian background with occasional impulses. Such noise processes are encountered in low-frequency atmospheric communications [Bernstein 1974], sonar and radar problems, and so on. Examples of heavy-tailed PDF's are the mixed-Gaussian PDF, the Johnson family and Middleton's class A and B PDF families. All of them lead to a complicated likelihood function which is difficult to maximize.

This paper suggests a method to estimate the AR filter parameters of a non-Gaussian process in a computationally simple way. Essentially it is a two-stage procedure based on an approximation of the MLE. The resulting estimator is asymptotically efficient in the sense that its variance approaches the CR bound for large data records. The mixed-Gaussian PDF is used to illustrate the approach and to demonstrate some of the finer aspects.

The paper is organized as follows. Section II gives an interpretation of the MLE which forms the basis for the development of the new estimator. Several special cases are discussed to illustrate the central argument. Section III suggests an approximation to the MLE based on this interpretation. Section IV actually implements such an estimator for the case of a mixed-Gaussian distribution. Section V discusses the case when some of the parameters of the noise PDF are unknown, while Section VI reports the results of computer simulations. Section VII summarizes the main results.

II. An Interpretation of the MLE

Consider N observations of an AR(p) process

$$x_n = -\sum_{j=1}^{p} a_j x_{n-j} + u_n, \qquad n = 1, 2, \cdots, N \qquad (1)$$

where the driving noise $u_n$ has the PDF $f(u_n; \theta)$ dependent on the parameter vector $\theta$. $f$ is assumed to be an even function and hence zero mean. It is also assumed that $f$ has tails heavier than a Gaussian PDF $g$ having equal variance, i.e., there is a number $U$ such that

$$f(u_n) > g(u_n) \quad \text{for } |u_n| > U \qquad (2)$$

subject to the constraint

$$\int_{-\infty}^{+\infty} u_n^2\, f(u_n)\, du_n = \int_{-\infty}^{+\infty} u_n^2\, g(u_n)\, du_n$$
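As a concrete illustration, the recursion (1) driven by heavy-tailed noise can be simulated directly. The sketch below uses mixed-Gaussian driving noise (the example PDF taken up in Section IV); the parameter values and the zero initial conditions are illustrative assumptions, not part of the model above.

```python
import random

def mixed_gaussian_noise(n, eps=0.1, rho=100.0, sigma_b=1.0, seed=0):
    # Background N(0, sigma_b^2) w.p. 1 - eps, interference N(0, rho*sigma_b^2) w.p. eps
    rng = random.Random(seed)
    sigma_i = sigma_b * rho ** 0.5
    return [rng.gauss(0.0, sigma_i if rng.random() < eps else sigma_b)
            for _ in range(n)]

def ar_process(a, u):
    # x_n = -sum_{j=1}^p a_j x_{n-j} + u_n, per (1); zero initial conditions assumed
    p, x = len(a), []
    for n, un in enumerate(u):
        x.append(un - sum(a[j] * x[n - 1 - j] for j in range(min(p, n))))
    return x

u = mixed_gaussian_noise(1000)
x = ar_process([-0.5], u)   # AR(1) with a_1 = -0.5
```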

The log likelihood function for the AR filter parameters is given by the joint PDF of $\{x_1, x_2, \cdots, x_N\}$ when the $x_i$'s are replaced by their observed values. This is difficult to evaluate. It is a common practice to replace the exact likelihood function by the conditional likelihood of $\{x_{p+1}, x_{p+2}, \cdots, x_N\}$ given $\{x_1, x_2, \cdots, x_p\}$ for the purpose of maximization over the parameters. It can be easily shown [Box and Jenkins 1970] that the conditional log likelihood function is given by

$$L = \sum_{n=p+1}^{N} \ln f(u_n; \theta) \qquad (3)$$

where

$$u_n = \sum_{j=0}^{p} a_j x_{n-j}$$

and $a_0 = 1$. Differentiating with respect to $a_j$,

$$\frac{\partial L}{\partial a_j} = \sum_{n=p+1}^{N} \frac{f'(u_n; \theta)}{f(u_n; \theta)} \frac{\partial u_n}{\partial a_j} = \sum_{n=p+1}^{N} x_{n-j}\, \frac{f'(u_n; \theta)}{f(u_n; \theta)} = -\sum_{n=p+1}^{N} x_{n-j}\, u_n\, \Gamma(u_n) \qquad (4)$$

where

$$\Gamma(u_n) = -\frac{f'(u_n; \theta)}{u_n\, f(u_n; \theta)} \qquad (5)$$

The MLE of $a = [a_1\; a_2\; \cdots\; a_p]$ is found by solving

$$\sum_{n=p+1}^{N} x_{n-j}\, u_n\, \Gamma(u_n) = 0, \qquad j = 1, 2, \cdots, p \qquad (6)$$

provided $\theta$ is either known or replaced by its MLE $\hat{\theta}$ in order to calculate $\Gamma(u_n)$ from (5). From this point onwards $\theta$ will be assumed to be known. It will be shown in section V that for some PDF's the method to be described can be implemented with $\theta$ replaced by a reasonable estimate. Note that since $f$ is assumed to be a symmetric PDF, $f'$ is an odd function of $u_n$. Therefore $f'/u_n$ is even in $u_n$ and $\Gamma(u_n)$ given by (5) is an even function. $\Gamma(u_n)$ is assumed to exist over the domain of $f(u_n; \theta)$. (6) can also be written as

$$\sum_{i=0}^{p} a_i \sum_{n=p+1}^{N} x_{n-i}\, x_{n-j}\, \Gamma(u_n) = 0, \qquad j = 1, 2, \cdots, p \qquad (7)$$

For the special case of a Gaussian PDF with variance $\sigma^2$, $f'/f = -u_n/\sigma^2$. Therefore $\Gamma(u_n) = 1/\sigma^2$ and (7) reduces to

$$\sum_{i=0}^{p} a_i \sum_{n=p+1}^{N} x_{n-i}\, x_{n-j} = 0, \qquad j = 1, 2, \cdots, p \qquad (8)$$

which can be recognized as the covariance method of linear prediction, known to be approximately the MLE in the Gaussian case. It is the solution of a linear least squares (LS) problem [Box and Jenkins 1970], [Kay 1986]

$$\min_{a} \sum_{n=p+1}^{N} \Big( x_n + \sum_{j=1}^{p} a_j x_{n-j} \Big)^2 \qquad (9)$$

(7) resembles the solution of a LS problem except for the weighting factors $\Gamma(u_n)$. Note that the inherent dependence of $u_n$ (and hence $\Gamma(u_n)$) on $a$, the AR filter parameters, makes it a non-linear problem. If, however, the argument of $\Gamma$ is calculated using a fixed (and hopefully an approximate) value of $a$, (7) becomes the solution of the following weighted LS problem

$$\min_{a} \sum_{n=p+1}^{N} \Gamma(\hat{u}_n) \Big( x_n + \sum_{j=1}^{p} a_j x_{n-j} \Big)^2 \qquad (10)$$

$\hat{u}_n$, a quantity expected to be close to $u_n$, is defined by (cf. (1))

$$\hat{u}_n = \sum_{j=0}^{p} \hat{a}_j x_{n-j}, \qquad n = p+1, p+2, \cdots, N \qquad (11)$$

where some fixed approximate values $\hat{a}_j$ of the AR filter parameters are used; $\hat{a}_0$ is defined to be unity. (10) is minimized by (7) with $\Gamma(u_n)$ replaced by $\Gamma(\hat{u}_n)$, which reduces to a set of linear equations whose unique solution can be expected to be close to the MLE of $a$. The resulting estimator should be much better than the unweighted LS estimator (resulting from (9)) because it retains the general shape of $\Gamma(u_n)$ by approximating it with $\Gamma(\hat{u}_n)$. The sequence $\{\hat{u}_n \mid p+1 \le n \le N\}$ can be generated by passing $\{x_{p+1}, x_{p+2}, \cdots, x_N\}$ through a moving average (MA) filter as per (11), with coefficients obtained from a preliminary stage of least squares estimation (e.g., by the covariance method). In other words, $\hat{u}_n$ becomes an estimate of the nth sample of the driving noise based on an LS estimate of the filter parameters such as the covariance method. This approach leads to an approximate MLE which is described in the next section.

It is of interest to know how the terms of (10) are actually weighted. Three symmetric PDF's which are commonly used to model heavy-tailed non-Gaussian processes [Czarnecki and Thomas 1982], [Middleton 1977], [Johnson and Kotz 1971] are now considered. The plots of the weighting function $\Gamma(u)$ for these PDF's provide insight into the structure of the MLE.

Mixed-Gaussian Model: The mixed-Gaussian PDF has received considerable attention in situations where the underlying random process is characterized by the presence of occasional impulses in an otherwise Gaussian process. The PDF is given by

$$f(u) = (1-\epsilon)\, E_B(u) + \epsilon\, E_I(u), \qquad 0 < \epsilon < 1 \qquad (12)$$

where $E_B$ and $E_I$ are zero-mean Gaussian PDF's with variances $\sigma_B^2$ and $\sigma_I^2$ ($\sigma_I^2 > \sigma_B^2$), representing the background and interference populations considered here. (12) is explicitly written as

$$f(u) = \frac{1-\epsilon}{\sqrt{2\pi}\,\sigma_B}\, e^{-u^2/2\sigma_B^2} + \frac{\epsilon}{\sqrt{2\pi}\,\sigma_I}\, e^{-u^2/2\sigma_I^2} \qquad (13)$$

For this PDF $\Gamma(u)$ can be shown to be

$$\Gamma(u) = \frac{1}{\sigma_B^2}\; \frac{(1-\epsilon)\, e^{-\bar{u}^2/2} + \dfrac{\epsilon}{\rho^{3/2}}\, e^{-\bar{u}^2/2\rho}}{(1-\epsilon)\, e^{-\bar{u}^2/2} + \dfrac{\epsilon}{\sqrt{\rho}}\, e^{-\bar{u}^2/2\rho}} \qquad (14)$$

where $\rho = \sigma_I^2/\sigma_B^2$ and $\bar{u} = u/\sigma_B$. Figure 1(a) plots $\Gamma(u)/\Gamma(0)$ vs. $\bar{u}$ (= $u$ normalized by $\sigma_B$) for $\rho = 100$. Curves for different values of $\epsilon$ are overlaid. Figure 1(b) plots $\Gamma(u)/\Gamma(0)$ vs. $\bar{u}$ for $\epsilon = 0.1$ and different values of $\rho$. The curves show that $\Gamma(u)$ acts as a limiter. The squared errors in (10) with large values of $\hat{u}_n$ are suppressed. This makes intuitive sense because large values of $u_n$ (spikes at the input of the AR filter) would otherwise dominate the sum of the squares and consequently the information contained in the rest of the terms would be lost. Quantitatively, from (14) with $\rho \gg 1$, $\Gamma(0) \approx 1/\sigma_B^2$ and $\Gamma(u) \to 1/\sigma_I^2$ as $\bar{u} \to \infty$, i.e., very small and very large terms in (10) are scaled in the inverse ratio of background and interference noise powers, respectively. This is in accordance with analogous results in optimal weighted least squares theory [Sorenson 1980].
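The limiter behaviour of (14) is easy to check numerically. A short sketch, with $\rho = 100$ and $\epsilon = 0.1$ as in Figure 1 (these values, like $\sigma_B = 1$, are just the running example):

```python
import math

def gamma_mixed(u, eps=0.1, rho=100.0, sigma_b=1.0):
    # Weighting function (14): Gamma(u) = -f'(u) / (u f(u)) for mixed-Gaussian f
    ub = u / sigma_b
    eb = math.exp(-ub * ub / 2.0)           # background exponential
    ei = math.exp(-ub * ub / (2.0 * rho))   # interference exponential
    num = (1 - eps) * eb + eps / rho ** 1.5 * ei
    den = (1 - eps) * eb + eps / rho ** 0.5 * ei
    return num / den / sigma_b ** 2

# Gamma(0) is close to 1/sigma_B^2 = 1, and Gamma(u) -> 1/sigma_I^2 = 0.01 for large u
print(gamma_mixed(0.0), gamma_mixed(20.0))
```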

Middleton's Class A Model: Another physically motivated model to represent nominally Gaussian noise with an impulsive component is Middleton's class A PDF given by the infinite sum

$$f(u) = e^{-A} \sum_{m=0}^{\infty} \frac{A^m}{m!}\, E_m(u) \qquad (15)$$

where $0 < A < 1$ and $E_m(u)$ is a zero-mean Gaussian PDF with variance $\sigma_m^2$ given by

$$\sigma_m^2 = \sigma^2\, \frac{m/A + B}{1 + B} \qquad (16)$$

with $B$ a constant. The Gaussian component corresponding to $m = 0$ has the least variance and can be thought of as the background process. The weights of the higher order terms can be controlled by the constant $A$. A small value of $A$ will diminish the contribution of the contaminating high-order terms. The constant $B$ can be used to adjust the variance of these components. The overall variance of the Middleton's class A PDF is $\sigma^2$. $\Gamma(u)$ can be written in this case as

$$\Gamma(u) = \frac{\displaystyle\sum_{m=0}^{\infty} \frac{A^m}{\sigma_m^2\, m!}\, E_m(u)}{\displaystyle\sum_{m=0}^{\infty} \frac{A^m}{m!}\, E_m(u)} \qquad (17)$$

Figures 2(a) and 2(b) plot $\Gamma(u)/\Gamma(0)$ vs. $u$ for this PDF for different values of $A$ and $B$. $\sigma^2$ is assumed to be unity. These curves also are observed to be similar to a limiter curve. A larger value of $A$ implies an increased presence of high-variance components. Therefore a smaller threshold is necessary above which the squared terms of (10) need to be down-weighted in order to preserve the information in the remaining terms. This is reflected in Figure 2(a), which clearly exhibits a smaller threshold for a higher $A$. Also, a larger value of $B$ implies less difference in the variances of the Gaussian terms corresponding to $m = 0$ and $m > 0$, i.e., a smaller deviation from Gaussianity. A smaller value of $B$ indicates more non-Gaussianity and hence a smaller threshold is necessary, which is confirmed by Figure 2(b).
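The ratio (17) can be evaluated by truncating the infinite sums; the truncation level and the parameter values in the sketch below are assumptions for illustration only.

```python
import math

def gamma_class_a(u, A=0.1, B=0.1, sigma2=1.0, M=30):
    # Gamma(u) from (15)-(17); the common factor e^{-A} cancels in the ratio,
    # and the infinite sums are truncated at M terms
    num = den = 0.0
    for m in range(M + 1):
        var_m = sigma2 * (m / A + B) / (1.0 + B)     # component variance (16)
        coeff = A ** m / math.factorial(m)           # Poisson-type weight
        e_m = math.exp(-u * u / (2 * var_m)) / math.sqrt(2 * math.pi * var_m)
        num += coeff * e_m / var_m
        den += coeff * e_m
    return num / den

# limiter behaviour: the weight falls off for large |u|
print(gamma_class_a(0.0) > gamma_class_a(5.0) > 0.0)
```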

Johnson Family: The Johnson family of PDF's is one of the heavy-tailed families obtained by applying a transformation to a Gaussian random variable. If $v$ is a Gaussian random variable with mean zero and variance one, then the transformed random variable

$$u = t \sinh\!\left(\frac{v}{\delta}\right)$$

has a PDF belonging to the Johnson family given by

$$f(u) = \frac{\delta}{\sqrt{2\pi}\,\sqrt{t^2 + u^2}}\; e^{-\frac{1}{2}\left(\delta \sinh^{-1}(u/t)\right)^2} \qquad (18)$$

$t$ is chosen (as a function of $\delta$ and $\sigma$) so that the density has variance $\sigma^2$. The parameter $\delta$ can be used to control the heaviness of the tail. A smaller value of $\delta$ implies a heavier tail. The PDF approaches a Gaussian one as $\delta \to \infty$. $\Gamma(u)$ can be written in this case as

$$\Gamma(u) = \frac{1}{t^2 + u^2} + \frac{\delta^2 \sinh^{-1}(u/t)}{u\,\sqrt{t^2 + u^2}} \qquad (19)$$

Figure 3 plots $\Gamma(u)/\Gamma(0)$ vs. $u/\sigma$ for different values of $\delta$. A larger value of $\delta$ indicates less deviation from Gaussianity, confirmed by a gradual decrease of the curve from the value at $u = 0$. Smaller values of $\delta$ correspond to a sharper transition and a smaller threshold.
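Since (19) is just $-f'(u)/(u f(u))$ applied to the density (18), it can be cross-checked against a numerical derivative; the parameter values below are arbitrary illustrative choices.

```python
import math

def f_johnson(u, delta=2.0, t=1.0):
    # Johnson density (18): u = t*sinh(v/delta) with v ~ N(0,1)
    return (delta / (math.sqrt(2 * math.pi) * math.sqrt(t * t + u * u))
            * math.exp(-0.5 * (delta * math.asinh(u / t)) ** 2))

def gamma_johnson(u, delta=2.0, t=1.0):
    # Closed form (19) for Gamma(u) = -f'(u) / (u f(u))
    return (1.0 / (t * t + u * u)
            + delta * delta * math.asinh(u / t) / (u * math.sqrt(t * t + u * u)))

# central-difference check of (19) against (18)
u, h = 1.5, 1e-5
numeric = -(f_johnson(u + h) - f_johnson(u - h)) / (2 * h) / (u * f_johnson(u))
print(abs(numeric - gamma_johnson(u)) < 1e-6)
```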

All these illustrations show that the MLE given as a solution of (7) actually downweights the larger squared terms and may therefore be well approximated by an appropriate weighted LS estimator. The following section elaborates on this point. It should also be noted that $\Gamma(u)$ is positive in all the above cases. This is true for any symmetric PDF which is a monotonically decreasing function of $u$ for positive values of $u$, as may be verified from (5). Most of the common PDF's have this property.

III. An approximation to the MLE

It was suggested in the previous section that the problem of solving the set of highly non-linear equations (7) can be replaced by solving a set of linear equations if the weighting function $\Gamma(u_n)$ is known or can be estimated for each $n$. Specifically, this suggests a two-stage procedure. The first stage involves computation of the unweighted LS estimates of the unknown filter parameters. These crude estimates can be used to estimate $u_n$ as per (11) and hence $\Gamma(u_n)$ required for the second stage of weighted LS estimation. This procedure would eliminate convergence problems and much of the computational complexity associated with the MLE. Yet it is apparent from (14), (17) and (19) that even if $u_n$ were known, computation of the weighting function might be difficult for many heavy-tailed non-Gaussian PDF's. Typically the weighting function involves computation of transcendental functions such as exponentials which may be computationally burdensome. The problem would be simplified considerably if $\Gamma$ could be approximated by a simple function whose characteristics depended on the PDF parameters. A possible approximation of the weighting curves shown in Figures 1-3 is the Butterworth "filter"

$$\hat{\Gamma}(u) = \frac{K_1}{1 + \left(\dfrac{u}{u_c}\right)^{2\beta}} + K_2 \qquad (20)$$

where $u_c$ denotes the '3 dB cutoff', $\beta$ is the order of approximation (not necessarily an integer), and $K_1 > 0$ and $K_2 > 0$ can be used to match $\hat{\Gamma}$ with $\Gamma$ for $u = 0$ and $u \to \infty$. All these parameters can be used to produce an accurate approximation of the usually complicated function $\Gamma$. The second stage of LS can therefore be simplified by minimizing (10) with $\Gamma$ replaced by $\hat{\Gamma}$. This yields

$$\sum_{i=1}^{p} a_i \left( \sum_{n=p+1}^{N} x_{n-i}\, x_{n-j}\, \hat{\Gamma}(\hat{u}_n) \right) = -\sum_{n=p+1}^{N} x_n\, x_{n-j}\, \hat{\Gamma}(\hat{u}_n), \qquad j = 1, 2, \cdots, p$$

which in matrix form is

$$\mathbf{X} \mathbf{a} = \mathbf{y}, \qquad [\mathbf{X}]_{ij} = \sum_{n=p+1}^{N} x_{n-i}\, x_{n-j}\, \hat{\Gamma}(\hat{u}_n), \qquad [\mathbf{y}]_j = -\sum_{n=p+1}^{N} x_n\, x_{n-j}\, \hat{\Gamma}(\hat{u}_n) \qquad (21)$$

where $\mathbf{a} = [a_1\; a_2\; \cdots\; a_p]^T$ and $i, j = 1, 2, \cdots, p$.
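When the weights keep the system (21) positive definite, it can be solved with a Cholesky factorization. A pure-Python sketch on a small hand-made example (not data from the estimator):

```python
import math

def cholesky_solve(X, y):
    # Solve X a = y for symmetric positive-definite X: factor X = L L^T,
    # then forward substitution (L z = y) and back substitution (L^T a = z)
    p = len(X)
    L = [[0.0] * p for _ in range(p)]
    for i in range(p):
        for j in range(i + 1):
            s = X[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(s) if i == j else s / L[j][j]
    z = [0.0] * p
    for i in range(p):
        z[i] = (y[i] - sum(L[i][k] * z[k] for k in range(i))) / L[i][i]
    a = [0.0] * p
    for i in reversed(range(p)):
        a[i] = (z[i] - sum(L[k][i] * a[k] for k in range(i + 1, p))) / L[i][i]
    return a

print(cholesky_solve([[4.0, 2.0], [2.0, 3.0]], [8.0, 7.0]))
```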

$\mathbf{X}$ is a symmetric $p \times p$ matrix which is positive semidefinite. To show this, assume $\mathbf{b} = [b_1\; b_2\; \cdots\; b_p]^T$ is a vector of real numbers. Then

$$\mathbf{b}^T \mathbf{X} \mathbf{b} = \sum_{i=1}^{p} \sum_{j=1}^{p} b_i b_j \sum_{n=p+1}^{N} x_{n-i}\, x_{n-j}\, \hat{\Gamma}(\hat{u}_n) = \sum_{n=p+1}^{N} \hat{\Gamma}(\hat{u}_n) \left( \sum_{i=1}^{p} b_i x_{n-i} \right)^2 \ge 0$$

since the weights $\hat{\Gamma}(\hat{u}_n)$ are always positive (see (20)). (21) can be solved in $O(p^3)$ operations. A Cholesky decomposition can be used to reduce computation. This is in contrast to the unweighted least squares, for which it is possible to estimate the parameters in $O(p^2)$ operations [Morf et al 1977]. In summary, the proposed technique is

STEP I: Use the covariance method ((21) with $\hat{\Gamma}(\hat{u}_n) = 1$) to obtain initial estimates of $a$.

STEP II: Generate the sequence $\hat{u}_n$ by passing $\{x_{p+1}, x_{p+2}, \cdots, x_N\}$ through the MA filter whose coefficients are as estimated in step I, as given by (11).

STEP III: Select the curve $\hat{\Gamma}$ (20) by choosing appropriate values of $u_c$, $\beta$, $K_1$ and $K_2$ from the known values of the PDF parameters ($\theta$).

STEP IV: Solve for $a$ from (21).

Step III will be different for different non-Gaussian PDF's. The following section addresses this part of the problem for the specific case of a mixed-Gaussian distribution.
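The four steps above can be sketched end to end in pure Python. The Butterworth parameter values below are the ones quoted in Section IV for $\sigma_B^2 = 1$, $\rho = 100$, $\epsilon = 0.1$; a plain Gaussian-elimination solver stands in for the Cholesky route, and the AR(1) usage example at the bottom is an illustrative assumption, not one of the paper's simulated processes.

```python
import random

def solve(X, y):
    # Gaussian elimination with partial pivoting for the small system (21)
    p = len(X)
    A = [row[:] + [yi] for row, yi in zip(X, y)]
    for i in range(p):
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        for r in range(i + 1, p):
            m = A[r][i] / A[i][i]
            for c in range(i, p + 1):
                A[r][c] -= m * A[i][c]
    a = [0.0] * p
    for i in reversed(range(p)):
        a[i] = (A[i][p] - sum(A[i][c] * a[c] for c in range(i + 1, p))) / A[i][i]
    return a

def normal_eqs(x, p, w):
    # Entries of (21): sums over n = p+1..N (0-based here: n = p..N-1)
    N = len(x)
    X = [[sum(w[n] * x[n - 1 - i] * x[n - 1 - j] for n in range(p, N))
          for j in range(p)] for i in range(p)]
    y = [-sum(w[n] * x[n] * x[n - 1 - j] for n in range(p, N)) for j in range(p)]
    return X, y

def gamma_hat(u, uc=3.0224, beta=9.422, K1=0.979, K2=0.01):
    # Butterworth-style weighting curve (20)
    return K1 / (1.0 + abs(u / uc) ** (2 * beta)) + K2

def weighted_ls(x, p):
    X, y = normal_eqs(x, p, [1.0] * len(x))      # STEP I: covariance method
    a0 = solve(X, y)
    uhat = [0.0] * len(x)                        # STEP II: residuals via (11)
    for n in range(p, len(x)):
        uhat[n] = x[n] + sum(a0[j] * x[n - 1 - j] for j in range(p))
    w = [gamma_hat(u) for u in uhat]             # STEP III: weights from (20)
    X, y = normal_eqs(x, p, w)                   # STEP IV: weighted LS (21)
    return solve(X, y)

# AR(1) with a_1 = -0.5 driven by mixed-Gaussian noise (eps = 0.1, rho = 100)
rng = random.Random(1)
u = [rng.gauss(0.0, 10.0 if rng.random() < 0.1 else 1.0) for _ in range(3000)]
x = []
for n, un in enumerate(u):
    x.append(un + (0.5 * x[n - 1] if n else 0.0))
print(weighted_ls(x, 1))
```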

IV. Weighted LS for Mixed-Gaussian PDF

Performance of the weighted LS estimator described in the previous section is expected to be dependent on how well the curve $\hat{\Gamma}$ can approximate the true weighting curve $\Gamma$. Therefore the parameters of $\hat{\Gamma}$, namely $u_c$, $\beta$, $K_1$ and $K_2$, should be chosen properly for every set of values of the parameters of the PDF, i.e., $\theta$. In the mixed-Gaussian case the parameters $\epsilon$ and $\rho$ determine the shape of $\Gamma$ (see Figures 1(a) and 1(b)). $K_1$ and $K_2$ can be found as a function of these two parameters by matching the values of $\Gamma$ and $\hat{\Gamma}$ for $u = 0$ and $u \to \infty$. It follows from (14), assuming $\sigma_B^2 = 1$, that

$$\Gamma(0) = \frac{(1-\epsilon) + \dfrac{\epsilon}{\rho^{3/2}}}{(1-\epsilon) + \dfrac{\epsilon}{\sqrt{\rho}}} \qquad \text{and} \qquad \Gamma(\infty) = \frac{1}{\rho}$$

Also, $\hat{\Gamma}(0) = K_1 + K_2$ and $\hat{\Gamma}(\infty) = K_2$. Therefore

$$K_1 = \frac{(1-\epsilon) + \dfrac{\epsilon}{\rho^{3/2}}}{(1-\epsilon) + \dfrac{\epsilon}{\sqrt{\rho}}} - \frac{1}{\rho} \qquad \text{and} \qquad K_2 = \frac{1}{\rho} \qquad (22)$$
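The closed forms (22) are cheap to evaluate; for the values used later in this section ($\epsilon = 0.1$, $\rho = 100$) they reproduce the quoted $K_1 = 0.979$ and $K_2 = 0.01$:

```python
def k1_k2(eps, rho):
    # Match (22): K2 = Gamma(infinity) = 1/rho, K1 = Gamma(0) - K2 (sigma_B^2 = 1)
    g0 = ((1 - eps) + eps / rho ** 1.5) / ((1 - eps) + eps / rho ** 0.5)
    return g0 - 1.0 / rho, 1.0 / rho

K1, K2 = k1_k2(0.1, 100.0)
print(round(K1, 3), K2)   # 0.979 0.01
```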

Fortunately, $K_1$ and $K_2$ are obtained as explicit closed-form functions of $\epsilon$ and $\rho$. $\sigma_B^2$ has been assumed to be unity without loss of generality, since it will only change $u_c$ by a scale factor, and furthermore the weighting curve need only be determined to within a scale factor for use in (21). $u_c$ can be chosen to match $\hat{\Gamma}$ and $\Gamma$ at $u = u_c$. It is found by solving

$$\Gamma(u_c) = \frac{K_1}{2} + K_2 \qquad (23)$$

where $\Gamma$ is defined by (14). $\Gamma$ being a complicated function, (23) can only be solved by a search algorithm. Figure 4 plots $u_c$, as obtained by solving (23), vs. $\epsilon$ for different values of $\rho$. The curves can be explained by interpreting the mixed-Gaussian PDF as one arising from a nominally Gaussian noise contaminated by a high variance Gaussian process. A small value of $\epsilon$ implies little contamination by the high variance population, and therefore the limiter can allow for reasonably large values of $u_n$; hence a large $u_c$ results. On the other hand, as $\epsilon$ increases, increased interference from the high variance population is compensated for by making the threshold $u_c$ smaller, so as not to allow large values of $u_n$ to suppress the information contained in the other terms. Figure 5 plots $u_c$ vs. $\rho$ for different values of $\epsilon$. It shows that the threshold is minimum for $\rho \approx 10$. If $\rho$ is large, the contaminating population would introduce very large spikes and therefore a high threshold would suffice to suppress them. For a smaller ratio of $\sigma_I^2$ to $\sigma_B^2$ a smaller threshold is necessary. When $\sigma_I^2$ and $\sigma_B^2$ are of the same order ($\rho < 10$), it becomes difficult to distinguish between contributions from the two populations and therefore most of the terms should be equally weighted, which is accomplished by causing the threshold to be large, as can be verified from Figure 5. An interesting special case is $\rho = 1$, when the mixed-Gaussian PDF degenerates to a Gaussian PDF ($\Gamma(u_n) = 1/\sigma_B^2$ for all $u_n$).
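The search for $u_c$ in (23) can be a simple bisection, since $\Gamma$ in (14) is monotonically decreasing for $\rho > 1$. For $\epsilon = 0.1$, $\rho = 100$ (and $\sigma_B = 1$, assumed throughout this sketch) it lands near the value $u_c \approx 3.02$ quoted below:

```python
import math

def gamma_mixed(u, eps, rho):
    # Weighting function (14) with sigma_B = 1
    eb, ei = math.exp(-u * u / 2.0), math.exp(-u * u / (2.0 * rho))
    return (((1 - eps) * eb + eps / rho ** 1.5 * ei)
            / ((1 - eps) * eb + eps / rho ** 0.5 * ei))

def find_uc(eps, rho, lo=0.0, hi=20.0, iters=60):
    # Bisection on (23): Gamma(u_c) = K1/2 + K2, with K1, K2 from (22)
    K2 = 1.0 / rho
    K1 = gamma_mixed(0.0, eps, rho) - K2
    target = K1 / 2.0 + K2
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if gamma_mixed(mid, eps, rho) > target:
            lo = mid          # Gamma still above target: move right
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(round(find_uc(0.1, 100.0), 2))
```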

The order $\beta$ is chosen by evaluating $\Gamma(u)$ and $\hat{\Gamma}(u)$ at a number of equally spaced points. The sum of $(\Gamma(u) - \hat{\Gamma}(u))^2$ evaluated at these points is then minimized with respect to $\beta$. Figure 6 plots $\beta$ vs. $\epsilon$ for different values of $\rho$. It shows that a sharp cutoff (i.e., a high value of $\beta$) is necessary only when the contaminating process has high power and it appears very rarely (large $\rho$ and small $\epsilon$).

A typical approximation of $\Gamma(u)$ by $\hat{\Gamma}(u)$ has been plotted in Figure 7. Both functions are plotted vs. $u$ in the same scale. $\sigma_B^2 = 1$, $\rho = 100$ and $\epsilon = 0.1$ were assumed. The corresponding threshold $u_c$ was 3.0224 and the most suitable $\beta$ was 9.422. $K_1 = 0.979$ and $K_2 = 0.01$ were obtained from (22). The approximation appears to be reasonably accurate. In general, accuracy of the approximation will depend on the values of $\rho$ and $\epsilon$.

    V. The case of unknown PDF parameters

It was assumed for the weighted LS estimator described in section III that $\theta$, the vector of noise PDF parameters, was known. This was necessary in order to determine the weighting function to be used in (21). If it is partially or completely unknown, it has to be estimated. This will undoubtedly degrade the performance of the estimator. It will be shown in the next section that when the PDF parameters are known, the performance of the proposed weighted LS estimator nearly attains the CR bound. Alternatively, the performance is as good as the MLE. Hence for the purpose of discussion the weighted LS estimator can be considered to be asymptotically efficient when the PDF parameters are known so that $\Gamma$ is known. It is thus of interest to determine the sensitivity of this performance to changes in $\Gamma$ due to estimation errors in the unknown PDF parameters. One way of quantifying this is to determine the efficiency of the corresponding AR filter parameter estimator as the PDF parameters vary from the true values assumed for $\Gamma$. The problem is now examined from the viewpoint of robust M-estimators [Martin 1979], [Martin and Yohai 1984].

The original set of non-linear equations (6) to be solved for the MLE (which are approximated by a set of linear equations in the weighted LS method) can be written as

$$\sum_{n=p+1}^{N} x_{n-j}\, \phi\!\left( \sum_{i=0}^{p} a_i x_{n-i} \right) = 0, \qquad j = 1, 2, \cdots, p \qquad (24)$$

where $\phi(u) = u\, \Gamma(u)$. This is an M-estimator with score function $\phi$, and its asymptotic efficiency is [Martin 1979]

$$\mathrm{eff}(\phi, f) = \frac{\left( \int \phi'(u)\, f(u)\, du \right)^2}{I_f \int \phi^2(u)\, f(u)\, du}, \qquad I_f = \int \left( \frac{f'(u)}{f(u)} \right)^2 f(u)\, du \qquad (26)$$

The asymptotic efficiency given by (26) depends on how well the function $\phi$ matches the optimal one for a given PDF $f$. It attains the upper bound of unity if and only if $\phi = -f'/f$, or alternatively, (24) represents the MLE equations.

For a Gaussian M-estimator (which assumes the underlying PDF to be Gaussian with zero mean and variance $\sigma^2$ for the purpose of choosing $\phi$), $\phi = u_n/\sigma^2$. Therefore the estimator reduces to a LS estimator. In this case, assuming the true PDF to be $f$, (26) implies

$$\mathrm{eff}(\phi, f) = \frac{1}{\sigma^2 I_f} \qquad (28)$$

It is known [Sengupta and Kay 1986] that for all symmetric PDF's $\sigma^2 I_f \ge 1$, with the equality holding only for the Gaussian PDF. Therefore for all symmetric non-Gaussian PDF's the efficiency of the LS estimator is less than unity. In fact the LS estimator is known to be severely lacking in efficiency robustness, as verified by Figures 8, 9, 10 and 11, which plot the asymptotic efficiency of the LS estimator given by (28) for the three non-Gaussian PDF's described in Section II. A small deviation from Gaussianity is observed to produce a large drop in asymptotic efficiency. For example, in the case of a mixed-Gaussian PDF, it can be observed from Figure 8 that $\rho = 100$ and $\epsilon = 0.1$ results in a drop by a factor of 10 in the asymptotic efficiency from the value at $\epsilon = 0$ (which corresponds to a Gaussian PDF). Figure 9, which plots the asymptotic efficiency of the LS estimator for a mixed-Gaussian process vs. $\rho$ for different values of $\epsilon$, shows that the estimator loses efficiency for moderately large values of $\rho$, even when $\epsilon$ is reasonably small. In the cases of Middleton's class A and Johnson's families, Gaussianity corresponds to large values of $B$ and $\delta$, respectively. In both cases the asymptotic efficiency of the LS estimator drops substantially when these parameters are smaller (see Figures 10 and 11).
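The factor-of-10 efficiency loss for $\rho = 100$, $\epsilon = 0.1$ can be reproduced by evaluating (28) with a simple trapezoidal quadrature ($\sigma_B^2 = 1$; the grid limits and step count are assumptions of this sketch):

```python
import math

def ls_efficiency(eps, rho, half_width=60.0, steps=120000):
    # Efficiency (28): 1 / (sigma^2 I_f), I_f = int (f'/f)^2 f du, sigma_B^2 = 1
    sigma2 = (1 - eps) + eps * rho
    c = 1.0 / math.sqrt(2.0 * math.pi)
    si = math.sqrt(rho)
    def f(u):
        return ((1 - eps) * c * math.exp(-u * u / 2.0)
                + eps * c / si * math.exp(-u * u / (2.0 * rho)))
    def fp(u):
        return (-(1 - eps) * c * u * math.exp(-u * u / 2.0)
                - eps * c / si * (u / rho) * math.exp(-u * u / (2.0 * rho)))
    h = 2.0 * half_width / steps
    I = 0.0
    for k in range(steps + 1):
        u = -half_width + k * h
        fu = f(u)
        if fu > 0.0:
            wt = 0.5 if k in (0, steps) else 1.0   # trapezoid end weights
            I += wt * (fp(u) ** 2 / fu) * h
    return 1.0 / (sigma2 * I)

print(round(ls_efficiency(0.1, 100.0), 3))
```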

It is expected that a wiser choice of $\phi$ (or $\Gamma$) in (24) would result in a better M-estimator. If the PDF parameters ($\theta$) were known, the choice $\Gamma = -f'/(uf)$, or a suitable approximation $\hat{\Gamma}$ thereof, would have been optimal. Since these parameters are unknown, a selection of $\hat{\Gamma}$ which is quite appropriate for one value of $\theta$ may not be suitable for other values of it. If such a mismatch does not reduce the asymptotic efficiency substantially, the corresponding M-estimator would be considered insensitive to small variations in the PDF parameters. This will be examined by plotting the asymptotic efficiency of a nominal M-estimator as the true PDF parameter values vary from the values for which the chosen estimator is optimal. The weighted LS estimator proposed in section III may be viewed as an M-estimator where $u_n$ (the argument of $\Gamma$) is replaced by a preliminary estimate $\hat{u}_n$. Therefore the asymptotic performance (in terms of efficiency) of the M-estimators, which will now be described, should be a good indication of the sensitivity of the weighted LS estimator to changes in $\hat{\Gamma}$ due to estimation errors in unknown PDF parameters.

Figure 12 plots the asymptotic efficiency of a typical M-estimator for different mixed-Gaussian PDF's. A fixed limiter curve with $u_c = 3$, $\beta = 10$, $K_1 = 0.98$ and $K_2 = 0.01$ is used. In this case

$$\phi(u_n) = u_n\, \hat{\Gamma}(u_n) = \frac{0.98\, u_n}{1 + \left( \dfrac{u_n}{3} \right)^{20}} + 0.01\, u_n \qquad (29)$$

$\sigma_B^2$ is assumed to be unity and the asymptotic efficiency (calculated from (26) and (29) by numerical integration) is plotted vs. $\epsilon$ for different values of $\rho$. The curve corresponding to $\rho = 100$ exhibits a maximum (efficiency $\approx 1$) at $\epsilon = 0.1$, demonstrating that a mixed-Gaussian PDF with $\epsilon = 0.1$ and $\rho = 100$ is most suitable for this M-estimator. In fact, these values of $\epsilon$ and $\rho$ were actually used to determine $\hat{\Gamma}$ as described in Section IV and resulted in (29). For values of $\epsilon$ from 0 to 0.5 its asymptotic performance is reasonably good. The asymptotic efficiency is 0.98 at $\epsilon = 0$ (i.e., the PDF is Gaussian) and 0.90 at $\epsilon = 0.5$. Therefore for a truly Gaussian PDF this estimator will be 98% as efficient as a LS estimator (which has efficiency of one for a Gaussian PDF). This can be thought of as a 2% premium [Martin 1979] for a 90% protection or coverage against up to 50% outliers. Figure 13 plots the asymptotic efficiency of the same estimator vs.


$\rho$ for different values of $\epsilon$. It shows that although this particular choice of $\phi$ is matched to $\rho = 100$, it does not lose efficiency as fast as the LS estimator as $\delta$ becomes smaller (i.e. the PDF becomes more non-Gaussian). These are instances of the M-estimator being insensitive to small variations in some of the PDF parameters. It is not clear how well these results apply to the weighted LS estimator, and hence its improvement over the LS estimator may be somewhat less than what the curves show. The central argument is that the proposed estimator improves the efficiency robustness of the LS estimator by weighting the squared terms, and it also reduces computation over the M-estimator by approximating the argument of $\hat{\Gamma}$. Its ability to handle the case of unknown PDF parameters makes it more attractive in practice than an M-estimator which requires the solution of non-linear equations. The following section presents the results of computer simulations which justify the approximations made in deriving the weighted LS estimator.

VI. Simulation of the performance of the weighted LS estimator

Two typical AR(4) processes [Kay 1986] were chosen for computer simulations. The parameters are given in Table A. Process I is broadband while process II is narrowband. The underlying PDF is assumed to be mixed-Gaussian with $\sigma_B^2 = 1$ and $\rho = 100$. The mixture parameter is $\epsilon = 0.1$. The AR process was generated by passing a white mixed-Gaussian process through a filter, allowing sufficient time for the transients to decay. The white process was generated by randomly selecting from two mutually independent white Gaussian processes with PDF's $E_B$ and $E_I$ (having variances $\sigma_B^2$ and $\sigma_I^2 = \rho \sigma_B^2$, respectively) on the basis of a series of Bernoulli trials with probability of success $\epsilon$. Thus a random variable could be expected to come from the background population for a $(1-\epsilon)$ fraction of the time and from the contaminating population for an $\epsilon$ fraction of the time. In accordance with the discussion in the previous section, one of the PDF parameters, namely $\epsilon$, was assumed to be unknown. $\epsilon$ is linearly related to the overall variance $\sigma^2$

  • of the PDF,

    cT2=a|[(l-e) + ep)] (30a)

    I.e.,

    ,2 C = ^-1 p-1 .1 - (306)

    It was suggested in the previous section that a crude estimator of e can be used to

    select the proper weighting curve. In this case the driving noise power, i.e., a^ was

    estimated along with a in the first step of unweighted LS estimation using covariance

    method (see (8)) and c was calculated from this estimate using (30b).
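The data generation and the crude ε estimate of (30b) can be sketched as follows. This is a minimal illustration under the stated simulation settings; the function names are ours, not the paper's.

```python
import numpy as np

def mixed_gaussian_noise(N, var_b=1.0, p=100.0, eps=0.1, rng=None):
    """White mixed-Gaussian noise: each sample is drawn from the background
    population N(0, var_b) with probability 1 - eps and from the
    contaminating population N(0, p*var_b) with probability eps,
    selected by independent Bernoulli trials."""
    rng = np.random.default_rng(rng)
    contaminated = rng.random(N) < eps
    sigma = np.where(contaminated, np.sqrt(p * var_b), np.sqrt(var_b))
    return sigma * rng.standard_normal(N)

def ar_filter(a, u, settle=500):
    """Drive the AR filter 1/(1 + a1 z^-1 + ... + ap z^-p) with white
    noise u, discarding the first `settle` output samples so that the
    transients decay."""
    m = len(a)
    x = np.zeros(len(u))
    for n in range(len(u)):
        for k in range(1, min(m, n) + 1):
            x[n] -= a[k - 1] * x[n - k]
        x[n] += u[n]
    return x[settle:]

def estimate_eps(var_hat, var_b=1.0, p=100.0):
    """Crude mixture-parameter estimate from the overall noise variance,
    via (30b): eps = (var/var_b - 1)/(p - 1)."""
    return (var_hat / var_b - 1.0) / (p - 1.0)
```

Since σ² = σ_B²[(1 - ε) + εp] is linear in ε, a consistent estimate of the overall noise power immediately yields a consistent (if crude) estimate of ε.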

Table B shows the sample means and sample variances of the AR filter parameter estimators obtained by the exact evaluation of the approximate MLE. The MLE is found by the four-dimensional optimization (over the four AR filter parameters) of the conditional likelihood function as reported in [Sengupta and Kay 1986]. A Newton-Raphson iterative procedure was used for this purpose, with initial conditions obtained from a preliminary stage of least squares estimation, namely, the Forward-Backward method [Kay 1986]. The value of ε used in (6) was as obtained from σ² in the first stage of LS estimation. 1000 data points were used and the result is based on 500 experiments.
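The MLE procedure just described can be sketched as follows. This is an illustration, not the paper's implementation: the gradient and Hessian are formed here by finite differences rather than analytically, and the helper names are ours.

```python
import numpy as np

def mg_logpdf(u, var_b=1.0, p=100.0, eps=0.1):
    """Log of the mixed-Gaussian density (1-eps)N(0,var_b) + eps N(0,p*var_b)."""
    g = lambda v: np.exp(-u**2 / (2 * v)) / np.sqrt(2 * np.pi * v)
    return np.log((1 - eps) * g(var_b) + eps * g(p * var_b))

def cond_loglike(a, x):
    """Conditional log-likelihood of the AR parameters: sum of log f of
    the prediction errors u[n] = x[n] + a1 x[n-1] + ... + ap x[n-p]."""
    m = len(a)
    H = np.column_stack([x[m - k:len(x) - k] for k in range(1, m + 1)])
    u = x[m:] + H @ a
    return np.sum(mg_logpdf(u))

def newton_raphson_mle(a0, x, steps=20, h=1e-4):
    """Maximize cond_loglike by Newton-Raphson iteration, seeded with the
    LS estimate a0; derivatives via central finite differences."""
    a = np.array(a0, float)
    m = len(a)
    I = np.eye(m)
    f = lambda v: cond_loglike(v, x)
    for _ in range(steps):
        grad = np.array([(f(a + h*I[i]) - f(a - h*I[i])) / (2*h)
                         for i in range(m)])
        hess = np.array([[(f(a + h*I[i] + h*I[j]) - f(a + h*I[i] - h*I[j])
                           - f(a - h*I[i] + h*I[j]) + f(a - h*I[i] - h*I[j]))
                          / (4*h*h) for j in range(m)] for i in range(m)])
        step = np.linalg.solve(hess, grad)
        a = a - step              # Newton step toward the stationary point
        if np.max(np.abs(step)) < 1e-8:
            break
    return a
```

Each iteration requires re-evaluating the non-Gaussian likelihood and solving a p x p linear system, which is the computational burden (and convergence risk) that the weighted LS estimator avoids.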

The results, as summarized in Table B, can be compared to the performance of the Forward-Backward estimator (see Table C) and the CR bound. The MLE achieves the CR bound while the variances of the Forward-Backward estimators are larger by a factor of 10. This agrees with the theoretical prediction of the previous section, as the asymptotic efficiency of an LS estimator, given in this case by (28), is 0.106 ≈ 1/10. Table D reports the performance of the weighted LS estimator. The loss of performance is only marginal, as is verified by comparing it to the performance of the exact MLE.

Evaluation of the exact MLE is not only computationally intensive, but it also suffers from convergence problems. For 1000 data points some of the experiments failed to converge. For short data records (e.g., N < 250) it almost never converges. However, the weighted LS estimator does not suffer from this problem. The performance of the unweighted and weighted LS estimators for N = 100 is summarized in Tables E and F, respectively, along with the CR bound. The unweighted LS (Forward-Backward) estimator continues to be off from the CR bound by a factor of 10 for Process I. The offset increases to a factor of 15 for Process II. The weighted LS suffers from a slight degradation of performance: it is off from the CR bound by a factor of 2 for Process I and by a factor of 2.5 for Process II. This is probably due to the increased inaccuracy of the simplifying approximations for shorter data records. It still exhibits improvement over the Forward-Backward estimator. It is also seen to have less bias as compared to the Forward-Backward estimator in all cases.
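The table entries used in these comparisons (sample mean, squared bias, and sample variance over repeated experiments) can be reproduced from Monte Carlo output with a routine along these lines (a sketch; the function name is ours):

```python
import numpy as np

def tabulate(estimates, true_vals):
    """Summarize Monte Carlo runs the way Tables B-F do: for each AR
    parameter, report the sample mean, the squared bias (sample mean
    minus true value, squared), and the unbiased sample variance.

    estimates: array of shape (num_experiments, num_params)."""
    est = np.asarray(estimates, float)
    mean = est.mean(axis=0)
    bias_sq = (mean - np.asarray(true_vals, float)) ** 2
    var = est.var(axis=0, ddof=1)
    return mean, bias_sq, var
```

The sample variances are then compared directly against the Cramer-Rao bound column, as done in the text.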

VII. Conclusions

The weighted LS estimator proposed in this paper yields accurate estimates of the parameters of an AR process excited by non-Gaussian white noise. The method utilizes the partial information available about the noise PDF (principally the form of the PDF to within a set of unknown parameters) and should thereby outperform the so-called robust estimators. It also reduces computation by avoiding the solution of the non-linear equations required by the MLE or a robust estimator. Computer simulations have justified the assumptions made in determining the estimator. The new technique does not suffer from convergence problems, and exhibits only a slight departure in performance from the CR bound for short data records. The weighted LS estimator for the AR filter parameters can be used in conjunction with other estimation techniques directed towards assessment of unknown PDF parameters. In some situations it may tolerate a reasonable inaccuracy in the estimation of these parameters and yet produce an accurate estimate of the AR filter parameters.

References

[1] S.M. Kay, Modern Spectral Estimation: Theory and Application, Chapters 5-7, Prentice-Hall, 1986, to be published.

[2] D. Sengupta and S.M. Kay, "Efficient Estimation of Parameters for Non-Gaussian Autoregressive Processes," submitted to IEEE Trans. on Acoustics, Speech and Signal Processing, 1986.

[3] R.D. Martin, "The Cramer-Rao Bound and Robust M-Estimates for Time Series Autoregressions," Biometrika, Vol. 69, No. 2, pp. 437-442, 1982.

[4] D. Sengupta, "Estimation and Detection for Non-Gaussian Processes Using Autoregressive and Other Models," M.S. Thesis, Univ. of Rhode Island, 1986.

[5] S.L. Bernstein et al., "Long-range Communications at Extremely Low Frequencies," Proc. of the IEEE, Vol. 62, pp. 292-312, March 1974.

[6] G.E.P. Box and G.M. Jenkins, Time Series Analysis: Forecasting and Control, Chapter 7, San Francisco: Holden-Day, 1970.

[7] S.V. Czarnecki and J.B. Thomas, "Nearly Optimal Detection of Signals in Non-Gaussian Noise," ONR Report #14, Feb. 1984.

[8] D. Middleton, "Canonically Optimum Threshold Detection," IEEE Trans. on Info. Theory, Vol. IT-12, No. 2, pp. 230-243, April 1966.

[9] N.L. Johnson and S. Kotz, Distributions in Statistics: Continuous Univariate Distributions, New York: John Wiley, 1971.

[10] R.D. Martin, "Robust Estimation for Time Series Autoregressions," in Robustness in Statistics, R.L. Launer and G.N. Wilkinson, Eds., New York: Academic, 1979.

[11] R.D. Martin and V.J. Yohai, "Robustness in Time Series and Estimating ARMA Models," Univ. of Washington Technical Report #50, June 1984.

[12] P.J. Huber, Robust Statistics, Chapter 4, New York: John Wiley, 1981.

[13] T.W. Anderson, The Statistical Analysis of Time Series, Chapter 5, New York: John Wiley, 1971.

[14] H.W. Sorenson, Parameter Estimation: Principles and Problems, Chapter 2, New York: Marcel Dekker, 1980.

[15] L. Pakula, private communication, 1986.

Table A: Parameters of the AR processes used for simulation

    Process    a1       a2      a3       a4      poles
       I      -1.352   1.338   -0.662   0.240   0.7exp[j2π(0.12)], 0.7exp[j2π(0.21)]
       II     -2.760   3.809   -2.654   0.924   0.98exp[j2π(0.11)], 0.98exp[j2π(0.14)]

Table B: Performance of the MLE, N = 1000

                 True      Sample                     Sample           Cramer-Rao
                 value     mean      Bias^2           variance         bound
    Process I
      a1        -1.352    -1.3527    4.900 x 10^-7    1.0221 x 10^-4   1.0491 x 10^-4
      a2         1.338     1.3391    1.210 x 10^-6    2.4601 x 10^-4   2.5961 x 10^-4
      a3        -0.662    -0.6630    1.000 x 10^-6    2.4251 x 10^-4   2.5961 x 10^-4
      a4         0.240     0.2404    1.600 x 10^-7    1.1035 x 10^-4   1.0491 x 10^-4
    Process II
      a1        -2.760    -2.7597    9.000 x 10^-8    1.7445 x 10^-5   1.6278 x 10^-5
      a2         3.809     3.8081    8.100 x 10^-7    9.0097 x 10^-5   8.0163 x 10^-5
      a3        -2.654    -2.6530    1.000 x 10^-6    9.1303 x 10^-5   8.0163 x 10^-5
      a4         0.924     0.9236    1.600 x 10^-7    1.8496 x 10^-5   1.6278 x 10^-5

[Figure 1(a): The weighting curve for the Mixed-Gaussian PDF for different ε's; r(u)/r(0) vs. u/σ_B; p = 100.]

Table C: Performance of the Forward-Backward Estimator, N = 1000

                 True      Sample                     Sample           Cramer-Rao
                 value     mean      Bias^2           variance         bound
    Process I
      a1        -1.352    -1.3482    1.444 x 10^-5    1.0197 x 10^-3   1.0491 x 10^-4
      a2         1.338     1.3326    2.916 x 10^-5    2.3822 x 10^-3   2.5961 x 10^-4
      a3        -0.662    -0.6591    8.410 x 10^-6    2.3531 x 10^-3   2.5961 x 10^-4
      a4         0.240     0.2382    3.240 x 10^-6    9.6246 x 10^-4   1.0491 x 10^-4
    Process II
      a1        -2.760    -2.7567    1.089 x 10^-5    1.6569 x 10^-4   1.6278 x 10^-5
      a2         3.809     3.8001    7.921 x 10^-5    8.3418 x 10^-4   8.0163 x 10^-5
      a3        -2.654    -2.6447    8.649 x 10^-5    8.4388 x 10^-4   8.0163 x 10^-5
      a4         0.924     0.9197    1.849 x 10^-5    1.7083 x 10^-4   1.6278 x 10^-5

Table D: Performance of the Weighted LS Estimator, N = 1000

                 True      Sample                     Sample           Cramer-Rao
                 value     mean      Bias^2           variance         bound
    Process I
      a1        -1.352    -1.3525    2.500 x 10^-7    1.1169 x 10^-4   1.0491 x 10^-4
      a2         1.338     1.3388    6.400 x 10^-7    2.6466 x 10^-4   2.5961 x 10^-4
      a3        -0.662    -0.6628    6.400 x 10^-7    2.6389 x 10^-4   2.5961 x 10^-4
      a4         0.240     0.2402    4.000 x 10^-8    1.1049 x 10^-4   1.0491 x 10^-4
    Process II
      a1        -2.760    -2.7595    2.500 x 10^-7    1.9003 x 10^-5   1.6278 x 10^-5
      a2         3.809     3.8075    2.250 x 10^-6    9.8729 x 10^-5   8.0163 x 10^-5
      a3        -2.654    -2.6524    2.560 x 10^-6    1.0043 x 10^-4   8.0163 x 10^-5
      a4         0.924     0.9233    4.900 x 10^-7    2.0422 x 10^-5   1.6278 x 10^-5

Table E: Performance of the Forward-Backward Estimator, N = 100

                 True      Sample                     Sample           Cramer-Rao
                 value     mean      Bias^2           variance         bound
    Process I
      a1        -1.352    -1.3328    3.686 x 10^-4    9.7580 x 10^-3   1.0491 x 10^-3
      a2         1.338     1.3095    8.123 x 10^-4    2.1536 x 10^-2   2.5961 x 10^-3
      a3        -0.662    -0.6383    5.617 x 10^-4    2.0595 x 10^-2   2.5961 x 10^-3
      a4         0.240     0.2324    5.776 x 10^-5    8.5016 x 10^-3   1.0491 x 10^-3
    Process II
      a1        -2.760    -2.7372    5.198 x 10^-4    2.4807 x 10^-3   1.6278 x 10^-4
      a2         3.809     3.7494    3.552 x 10^-3    1.2929 x 10^-2   8.0163 x 10^-4
      a3        -2.654    -2.5934    3.672 x 10^-3    1.2943 x 10^-2   8.0163 x 10^-4
      a4         0.924     0.8974    7.076 x 10^-4    2.5026 x 10^-3   1.6278 x 10^-4

Table F: Performance of the Weighted LS Estimator, N = 100

                 True      Sample                     Sample           Cramer-Rao
                 value     mean      Bias^2           variance         bound
    Process I
      a1        -1.352    -1.3476    1.936 x 10^-5    1.9844 x 10^-3   1.0491 x 10^-3
      a2         1.338     1.3293    7.569 x 10^-5    5.2554 x 10^-3   2.5961 x 10^-3
      a3        -0.662    -0.6536    7.056 x 10^-5    5.2139 x 10^-3   2.5961 x 10^-3
      a4         0.240     0.2366    1.156 x 10^-5    2.0403 x 10^-3   1.0491 x 10^-3
    Process II
      a1        -2.760    -2.7543    3.249 x 10^-5    3.6109 x 10^-4   1.6278 x 10^-4
      a2         3.809     3.7936    2.372 x 10^-4    1.9891 x 10^-3   8.0163 x 10^-4
      a3        -2.654    -2.6380    2.560 x 10^-4    2.0666 x 10^-3   8.0163 x 10^-4
      a4         0.924     0.9168    5.184 x 10^-5    4.3322 x 10^-4   1.6278 x 10^-4

[Figure 1(b): The weighting curve for the Mixed-Gaussian PDF for different p's; r(u)/r(0) vs. u/σ_B; ε = 0.1; curves labeled p = 1000 and p = 10000.]

[Figure 2(a): The weighting curve for Middleton's class A PDF for different A's; B = 1.0; curves labeled A = 0.005, 0.01, 0.1.]

[Figure 2(b): The weighting curve for Middleton's class A PDF for different B's.]

[Figure 3: The weighting curve for Johnson's PDF for different t's.]

[Figure 5: Dependence of threshold on p in the Mixed-Gaussian case.]

[Figure 6: Dependence of β on ε and p in the Mixed-Gaussian case; curves labeled p = 1000 and p = 2000.]

[Figure 7: A typical approximation for the weighting curve in the Mixed-Gaussian case.]

[Figure 8: Asymptotic efficiency of LS estimator vs. ε for the Mixed-Gaussian process; curves include p = 10000.]

[Figure 9: Asymptotic efficiency of LS estimator vs. p for the Mixed-Gaussian process.]

[Figure 10: Asymptotic efficiency of LS estimator vs. B for Middleton's class A process; curves labeled A = 0.005, 0.01, 0.05, 0.1.]

[Figure 11: Asymptotic efficiency of LS estimator vs. t for Johnson's process.]

[Figure 12: Asymptotic efficiency of M-estimator vs. ε for the Mixed-Gaussian process; curves include p = 1000.]

[Figure 15: Asymptotic efficiency of M-estimator vs. t for Johnson's process.]

[Figure 14: Asymptotic efficiency of M-estimator vs. B for Middleton's class A process; curves labeled A = 0.005, 0.01, 0.05, 0.1.]

  • Simple and Efficient Estimation of Parameters

    of Non-Gaussian Autoregressive Processes

    STEVEN KAY AND DEBASIS SENGUPTA Department of Electrical Engineering

    University of Rhode Island

    Kingston, Rhode Island 02881

    Abstract

    A new technique for the estimation of autoregressive filter parameters of a non-

    Gaussian autoregressive process is proposed. The probability density function of

    the driving noise is assumed to be known. The new technique is a two-stage pro-

    cedure motivated by maximum likelihood estimation. It is computationally much

    simpler than the maximum likelihood estimator and does not suffer from conver-

    gence problems. Computer simulations indicate that unlike the least squares or

    linear prediction estimators, the proposed estimator is nearly efficient, even for

    moderately sized data records. By a slight modification the proposed estimator

    can also be used in the case when the parameters of the driving noise probability

    density function are not known.

    This work was supported by the Office of Naval Research under contract No.

    N00014-84-K-0527.

[Figure 13: Asymptotic efficiency of M-estimator vs. p for the Mixed-Gaussian process; curves include ε = 0.5.]


[Figure 4: Dependence of threshold on ε in the Mixed-Gaussian case; curves labeled p = 20, 50, 100, 200, 500, 1000, 2000.]


OFFICE OF NAVAL RESEARCH STATISTICS AND PROBABILITY PROGRAM

BASIC DISTRIBUTION LIST FOR UNCLASSIFIED TECHNICAL REPORTS

FEBRUARY 1982