
Fusion under Unknown Correlation - Covariance Intersection as a Special Case

Lingji Chen, Pablo O. Arambel and Raman K. Mehra
Scientific Systems Company Inc.
500 West Cummings Park, Suite 3000, Woburn, MA 01801

    chen,pablo,[email protected]

Abstract: This paper addresses the problem of fusing several random variables (RVs) with unknown correlations. A family of upper bounds on the resulting covariance matrix is given, and is shown to contain the upper bound offered by the Covariance Intersection (CI) Algorithm proposed by Julier and Uhlmann. For trace minimization, the optimal one in this family is better than the one obtained by CI except in some cases where they are equal. It is further proved that the best pair of combination gains that minimizes the above optimal-trace-in-the-family coincides with the one associated with the best value of $\omega$ in CI. Thus the CI Algorithm provides a convenient one-dimensional parameterization for the optimal solution in the $n^2$-dimensional space. The results are also extended to multiple RVs and partial estimates.

Keywords: Fusion of random variables, unknown correlation, Covariance Intersection, consistent estimation, Kalman Filter

1 Introduction

The need to fuse several random variables (RVs) into one RV often arises in fusion and estimation problems.

Updating the current estimate with a new observation is one such example. Usually the target RV is defined as a linear combination of the source RVs. In the case of two uncorrelated RVs, for any given pair of combination gains, the covariance matrix of the target RV is readily available, and the pair that minimizes the trace of this covariance matrix (so that the estimate is optimal in the least-squares sense) is given by the Kalman Gain as defined in a Kalman Filter. When the correlation is unknown, assuming it to be zero may still produce satisfactory results for many applications. However, there are also many situations in which the assumption of independence may lead to serious problems. For example, in a distributed network, when a node $A$ receives a piece of information from a node $B$, the topology of the network may be such that $B$ is passing along the information it originally received from $A$. Thus, if $A$ were to fuse the new information from $B$ with the old one it has using the Kalman Gain under the assumption of independence, then the covariance matrix (as an indicator of the uncertainty about this information) would be reduced, when in fact it should remain at the same level.

In their seminal papers [1, 2], Julier and Uhlmann proposed the Covariance Intersection (CI) Algorithm to deal with this problem. The objective is to obtain a consistent estimate of the covariance matrix when two random variables are linearly combined. By consistent we mean that the estimated covariance is always an upper bound (in the positive definite sense; see Section 2) of the true covariance, even when the correlation among the original RVs is unknown. Thus in the example above, after node $A$ combines the information, the covariance matrix will remain approximately the same, rather than being incorrectly reduced. Judiciously combined with the Kalman Filter and prior knowledge about the systems, the CI Algorithm has found wide applications [3, 4, 5, 6, 7, 8, 9, 10].

Nevertheless, there are questions that are not answered by the CI Algorithm. When the variables being combined are $n$-dimensional vectors, the combination gain is a matrix with $n^2$ elements, and thus can be chosen from an $n^2$-dimensional space. But the CI Algorithm parameterizes only a one-dimensional curve in this space. In order to get a complete picture, we pose two separate problems in this paper. The first problem is to obtain a consistent estimate of the covariance matrix when fixed combination gains are used. We solve this problem by presenting a family of such estimates. The member in this family having the minimal trace can be determined analytically, and is referred to as the family-optimal estimate.


The second problem is to find the best pair of gains that minimizes the trace of the above family-optimal estimate. In general this is an optimization problem in an $n^2$-dimensional space of combination gains. However, we prove that the globally optimal solution is actually given by the CI Algorithm, even though it conducts the search only along a one-dimensional curve.

The paper is organized as follows. First a statement of the problem is given. Following this, the CI Algorithm is reviewed. Section 4 presents the main results of this paper. The results are generalized to partial estimates and multiple variables in Section 5. Finally some conclusions are drawn. Part of the results presented in this paper will also appear in [11].

2 Problem Statement

To highlight the essence of the results, no dynamics is considered in this paper, and the problem is simply stated as combining two estimates of the mean value of a random variable when the correlation between the estimates is unknown. The basic notations in [1] are followed here, but for simplicity no distinction is made between a random variable and its observation. More specifically, let $x$ be the mean value of some random variable to be estimated. Two sources of information are available: estimate $a$ and estimate $b$. Define their estimation errors as

$\tilde{a} = a - x, \qquad \tilde{b} = b - x,$

and assume that

$E[\tilde{a}] = 0, \quad E[\tilde{a}\tilde{a}^T] = \bar{P}_{aa}, \qquad E[\tilde{b}] = 0, \quad E[\tilde{b}\tilde{b}^T] = \bar{P}_{bb}.$

The true values of $\bar{P}_{aa}$ and $\bar{P}_{bb}$ may not be known, but consistent estimates $P_{aa}$ and $P_{bb}$ are known:

$P_{aa} \ge \bar{P}_{aa}, \qquad P_{bb} \ge \bar{P}_{bb}. \qquad (1)$

Here the inequality is in the sense of matrix positive definiteness, i.e., $A \ge B$ if and only if $A - B$ is positive semidefinite. The correlation between the two estimates,

$E[\tilde{a}\tilde{b}^T] = \bar{P}_{ab},$

is also unknown. Our objective is to construct a linear, unbiased estimator that combines $a$ and $b$:

$c = K_1 a + K_2 b. \qquad (2)$

Define

$\tilde{c} = c - x.$

It follows that $E[\tilde{c}] = 0$ if and only if

$K_1 + K_2 = I. \qquad (3)$

The covariance $E[\tilde{c}\tilde{c}^T] = \bar{P}_{cc}$ may not be known, but we want to find a consistent estimate $P_{cc}$ of it:

$P_{cc} \ge \bar{P}_{cc}. \qquad (4)$

Note that

$\bar{P}_{cc} = K_1\bar{P}_{aa}K_1^T + K_2\bar{P}_{bb}K_2^T + K_1\bar{P}_{ab}K_2^T + K_2\bar{P}_{ba}K_1^T. \qquad (5)$

We formulate the following two problems:

Problem 1: Determine a consistent estimate (upper bound) $P_{cc}$ for $\bar{P}_{cc}$ in (5) for any given pair of $K_1$ and $K_2$.

Problem 2: Find the pair of $K_1$ and $K_2$ such that the upper bound $P_{cc}$ is optimal in some sense, e.g., minimal trace or determinant.


If $\bar{P}_{ab} = 0$, then for any given $K_1$ and $K_2$, the estimate

$P_{cc} = K_1P_{aa}K_1^T + K_2P_{bb}K_2^T$

will be consistent (i.e., $P_{cc} \ge \bar{P}_{cc}$) as a direct consequence of (1). The trace of the above $P_{cc}$ is minimized by

$P_{cc} = \left(P_{aa}^{-1} + P_{bb}^{-1}\right)^{-1},$

$K_1 = P_{cc}P_{aa}^{-1} = P_{bb}\left(P_{aa} + P_{bb}\right)^{-1},$

$K_2 = P_{cc}P_{bb}^{-1} = P_{aa}\left(P_{aa} + P_{bb}\right)^{-1}.$

This corresponds to the derivation of the Kalman Gain in a Kalman Filter.
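To make this concrete, the following minimal sketch (ours, not from the paper; NumPy-based, with made-up example matrices) computes the trace-optimal gains and fused covariance for two uncorrelated estimates:

```python
# Minimal sketch (not from the paper): trace-optimal fusion of two
# estimates assuming zero cross-covariance, P_cc = (Paa^-1 + Pbb^-1)^-1.
import numpy as np

def fuse_uncorrelated(a, Paa, b, Pbb):
    Paa_inv, Pbb_inv = np.linalg.inv(Paa), np.linalg.inv(Pbb)
    Pcc = np.linalg.inv(Paa_inv + Pbb_inv)
    K1 = Pcc @ Paa_inv   # equals Pbb (Paa + Pbb)^-1, the Kalman-style gain
    K2 = Pcc @ Pbb_inv   # equals Paa (Paa + Pbb)^-1; note K1 + K2 = I
    return K1 @ a + K2 @ b, Pcc

# Illustrative values only.
Paa = np.array([[2.0, 0.4], [0.4, 1.0]])
Pbb = np.array([[1.0, -0.2], [-0.2, 3.0]])
c, Pcc = fuse_uncorrelated(np.array([1.0, 0.2]), Paa, np.array([0.7, 0.1]), Pbb)
```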

3 The Covariance Intersection Algorithm

If $\bar{P}_{ab} \ne 0$ but is known, then $\bar{P}_{cc}$ can be given by

$\bar{P}_{cc} = \begin{bmatrix} K_1 & K_2 \end{bmatrix} \begin{bmatrix} \bar{P}_{aa} & \bar{P}_{ab} \\ \bar{P}_{ab}^T & \bar{P}_{bb} \end{bmatrix} \begin{bmatrix} K_1^T \\ K_2^T \end{bmatrix}.$

The best choice of $K_1$ and $K_2$ that minimizes the trace of $\bar{P}_{cc}$ can be obtained by solving the following constrained optimization problem:

$\min_{K} \operatorname{tr}\!\left(K\bar{P}K^T\right) \quad \text{subject to} \quad K\begin{bmatrix} I \\ I \end{bmatrix} = I,$

where

$K = \begin{bmatrix} K_1 & K_2 \end{bmatrix}, \qquad \bar{P} = \begin{bmatrix} \bar{P}_{aa} & \bar{P}_{ab} \\ \bar{P}_{ab}^T & \bar{P}_{bb} \end{bmatrix}. \qquad (6)$

The optimal solution for $K_1$ and $K_2$ yields a $\bar{P}_{cc}$ of the following form:

$\bar{P}_{cc}^{-1} = \begin{bmatrix} I & I \end{bmatrix}\bar{P}^{-1}\begin{bmatrix} I \\ I \end{bmatrix} = \bar{P}_{aa}^{-1} + \left(\bar{P}_{aa}^{-1}\bar{P}_{ab} - I\right)\left(\bar{P}_{bb} - \bar{P}_{ab}^T\bar{P}_{aa}^{-1}\bar{P}_{ab}\right)^{-1}\left(\bar{P}_{ab}^T\bar{P}_{aa}^{-1} - I\right). \qquad (7)$
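For readers who want to check (7) numerically, here is a small sketch (ours, not from the paper) that evaluates both forms of $\bar{P}_{cc}^{-1}$ for a known cross-covariance; the matrices below are illustrative assumptions only:

```python
# Minimal sketch (not from the paper): the two expressions in Eq. (7) agree.
import numpy as np

Paa = np.array([[2.0, 0.3], [0.3, 1.0]])   # illustrative values only
Pbb = np.array([[1.5, -0.2], [-0.2, 2.0]])
Pab = np.array([[0.3, 0.1], [0.0, 0.2]])

P = np.block([[Paa, Pab], [Pab.T, Pbb]])   # joint covariance of Eq. (6)
I = np.eye(2)
U = np.vstack([I, I])
Pcc_inv = U.T @ np.linalg.inv(P) @ U       # first form in Eq. (7)

S = Pbb - Pab.T @ np.linalg.inv(Paa) @ Pab # Schur complement of Paa in P
A = np.linalg.inv(Paa) @ Pab - I
Pcc_inv_2 = np.linalg.inv(Paa) + A @ np.linalg.inv(S) @ A.T
assert np.allclose(Pcc_inv, Pcc_inv_2)     # second form in Eq. (7)
```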

Covariance Ellipses, as defined below, are a convenient way of visualizing the relative size of covariance matrices. For a positive definite matrix $P$, we define

$\mathcal{E}(P, r) = \left\{x : x^T P^{-1} x < r\right\}. \qquad (8)$

A Covariance Ellipse at level $r$ is the boundary of $\mathcal{E}(P, r)$. (We will omit $r$ in the following discussions.) Thus, if $P_1 \le P_2$, then $\mathcal{E}_1 \subseteq \mathcal{E}_2$. Now we show the following:

1. For a given $\bar{P}_{ab}$, and hence the optimal $\bar{P}_{cc}$ in (7), we have $\mathcal{E}_{cc} \subseteq \mathcal{E}_{aa} \cap \mathcal{E}_{bb}$.

Proof: Since $\bar{P}$ in (6) is positive definite, by definition the second term in (7) is positive definite. Thus $\bar{P}_{cc}^{-1} \ge \bar{P}_{aa}^{-1}$, which implies $\bar{P}_{cc} \le \bar{P}_{aa}$, i.e., $\mathcal{E}_{cc} \subseteq \mathcal{E}_{aa}$. Similarly, $\mathcal{E}_{cc} \subseteq \mathcal{E}_{bb}$.

2. For any point $x \in \mathcal{E}_{aa} \cap \mathcal{E}_{bb}$, there exists a correlation matrix $\bar{P}_{ab}$ such that (i) $\bar{P}$ as given in (6) is positive definite, and (ii) $x \in \mathcal{E}_{cc}$, where $\bar{P}_{cc}$ is given by (7).

Proof: First we assume that $x^T\bar{P}_{aa}^{-1}x > x^T\bar{P}_{bb}^{-1}x > 0$. Define

$\rho = \frac{x^T\bar{P}_{bb}^{-1}x}{x^T\bar{P}_{aa}^{-1}x}.$


[Plot: covariance ellipses for $P_{aa}$, $P_{bb}$, and the $P_{cc}$ obtained with known $P_{ab}$.]

Figure 1: Covariance with known $\bar{P}_{ab}$ always lies inside the intersection

It follows that $0 < \rho < 1$. Since the vectors $\sqrt{\rho}\,\bar{P}_{aa}^{-1/2}x$ and $\bar{P}_{bb}^{-1/2}x$ have the same length, one can be rotated to the other by a unitary matrix $U$, i.e.,

$\sqrt{\rho}\,\bar{P}_{aa}^{-1/2}x = U\,\bar{P}_{bb}^{-1/2}x, \qquad UU^T = I.$

The matrix

$\bar{P}_{ab} = \sqrt{\rho}\,\bar{P}_{aa}^{1/2}\,U\,\bar{P}_{bb}^{1/2}$

satisfies our requirements:

(i) $\bar{P}$ as given in (6) is positive definite: $\bar{P}_{aa} > 0$, and $\bar{P}_{bb} - \bar{P}_{ab}^T\bar{P}_{aa}^{-1}\bar{P}_{ab} = (1-\rho)\bar{P}_{bb} > 0$.

(ii) For $\bar{P}_{cc}$ as given in (7), we have

$\left(\bar{P}_{ab}^T\bar{P}_{aa}^{-1} - I\right)x = \sqrt{\rho}\,\bar{P}_{bb}^{1/2}U^T\bar{P}_{aa}^{-1/2}x - x = \bar{P}_{bb}^{1/2}\bar{P}_{bb}^{-1/2}x - x = 0,$

so that

$x^T\bar{P}_{cc}^{-1}x = x^T\bar{P}_{aa}^{-1}x + x^T\left(\bar{P}_{aa}^{-1}\bar{P}_{ab} - I\right)\left(\bar{P}_{bb} - \bar{P}_{ab}^T\bar{P}_{aa}^{-1}\bar{P}_{ab}\right)^{-1}\left(\bar{P}_{ab}^T\bar{P}_{aa}^{-1} - I\right)x = x^T\bar{P}_{aa}^{-1}x,$

and therefore $x \in \mathcal{E}_{cc}$.

Other cases can be similarly proved by symmetry or by a continuity argument (since we use strict inequality in (8)).
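The ellipse definition (8) and the containment property $P_1 \le P_2 \Rightarrow \mathcal{E}_1 \subseteq \mathcal{E}_2$ can be sanity-checked numerically; the following is a minimal sketch (ours, not from the paper), with arbitrary illustrative matrices:

```python
# Minimal sketch (not from the paper): membership test for the covariance
# ellipse E(P, r) = {x : x^T P^-1 x < r} of Eq. (8), and a spot check that
# P1 <= P2 implies E(P1) is contained in E(P2). Values are illustrative.
import numpy as np

def in_ellipse(x, P, r=1.0):
    return float(x @ np.linalg.inv(P) @ x) < r

P1 = np.array([[1.0, 0.2], [0.2, 0.5]])
P2 = P1 + np.diag([0.5, 0.1])   # P2 - P1 >= 0, so E(P1) lies inside E(P2)
for x in np.random.default_rng(1).normal(size=(1000, 2)):
    assert (not in_ellipse(x, P1)) or in_ellipse(x, P2)
```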

Based on the above observation, when $\bar{P}_{ab}$ is not known, a consistent estimate $P_{cc}$ of $\bar{P}_{cc}$ should be such that $\mathcal{E}_{cc} \supseteq \mathcal{E}_{aa} \cap \mathcal{E}_{bb}$, or loosely speaking, $P_{cc}$ should include the intersection of $P_{aa}$ and $P_{bb}$. This motivated the Covariance Intersection Algorithm [1]:

$P_{cc}^{-1} = \omega P_{aa}^{-1} + (1-\omega)P_{bb}^{-1}, \qquad (9)$

$K_1 = \omega P_{cc}P_{aa}^{-1}, \qquad K_2 = (1-\omega)P_{cc}P_{bb}^{-1}, \qquad (10)$

where $\omega \in [0, 1]$ is a parameter.

An illustration of the above discussion on intersection is shown in Figures 1 and 2. In Figure 1, three different known $\bar{P}_{ab}$ are chosen, and the corresponding optimal covariance matrices $\bar{P}_{cc}$ are obtained, whose covariance ellipses at level 1 are shown in solid lines, while those for $P_{aa}$ and $P_{bb}$ are shown in dotted lines. In Figure 2, three different values of $\omega$ are chosen, and the corresponding covariance matrices are shown in the same fashion.

The CI Algorithm requires $\omega$ to be optimized, for example by minimizing the trace or the determinant of $P_{cc}$. Since the CI Algorithm computes the gains $K_1$ and $K_2$ on the fly, it does not provide a complete solution to Problem 1, where the gains are fixed a priori. For Problem 2, we show in Section 4.2 that the CI Algorithm does provide the globally optimal solution, even though it searches only along a one-dimensional curve as shown by Equation (10), while the gains in the general case can be chosen from $\mathbb{R}^{n \times 2n}$.
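A minimal sketch (ours, not from the paper) of the CI update (9)-(10), with $\omega$ chosen by a simple grid search over $[0, 1]$ to minimize the trace (any scalar criterion could be substituted):

```python
# Minimal sketch (not from the paper) of the CI Algorithm, Eqs. (9)-(10).
import numpy as np

def ci_fuse(a, Paa, b, Pbb, grid=1001):
    """CI fusion; omega found by grid search minimizing trace(Pcc)."""
    Ia, Ib = np.linalg.inv(Paa), np.linalg.inv(Pbb)
    ws = np.linspace(0.0, 1.0, grid)
    traces = [np.trace(np.linalg.inv(w * Ia + (1 - w) * Ib)) for w in ws]
    w = float(ws[int(np.argmin(traces))])
    Pcc = np.linalg.inv(w * Ia + (1 - w) * Ib)   # Eq. (9)
    K1 = w * Pcc @ Ia                            # Eq. (10)
    K2 = (1 - w) * Pcc @ Ib                      # K1 + K2 = I by construction
    return K1 @ a + K2 @ b, Pcc, w
```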


[Plot: covariance ellipses for $P_{aa}$, $P_{bb}$, and $P_{cc}$ parameterized by $\omega$.]

Figure 2: A family of covariance matrices containing the intersection.

4 A Family of Solutions

4.1 Problem 1

In order to obtain an upper bound for

$\bar{P}_{cc} = K_1\bar{P}_{aa}K_1^T + K_2\bar{P}_{bb}K_2^T + K_1\bar{P}_{ab}K_2^T + K_2\bar{P}_{ba}K_1^T$

when the correlation $\bar{P}_{ab}$ is unknown, the following inequality is utilized:

$XY^T + YX^T \le \gamma\,XX^T + \gamma^{-1}\,YY^T, \qquad (11)$

where $\gamma > 0$ is a scalar parameter. Applying (11) with $X = K_1\tilde{a}$ and $Y = K_2\tilde{b}$ and taking expectations, it follows that

$K_1\bar{P}_{ab}K_2^T + K_2\bar{P}_{ba}K_1^T \le \gamma\,K_1\bar{P}_{aa}K_1^T + \gamma^{-1}\,K_2\bar{P}_{bb}K_2^T. \qquad (12)$

Therefore, from (1) and (12), a consistent estimate $P_{cc}$ of $\bar{P}_{cc}$ is characterized by the family

$P_{cc} = (1+\gamma)\,K_1P_{aa}K_1^T + (1+\gamma^{-1})\,K_2P_{bb}K_2^T, \qquad \gamma > 0. \qquad (13)$

Values of $\gamma$ can be chosen to optimize various performance criteria, and we will refer to this type of optimality as family-optimal. To minimize the trace of $P_{cc}$, note that

$\operatorname{tr}(P_{cc}) = \operatorname{tr}(K_1P_{aa}K_1^T) + \operatorname{tr}(K_2P_{bb}K_2^T) + \gamma\operatorname{tr}(K_1P_{aa}K_1^T) + \gamma^{-1}\operatorname{tr}(K_2P_{bb}K_2^T)$

$\ge \operatorname{tr}(K_1P_{aa}K_1^T) + \operatorname{tr}(K_2P_{bb}K_2^T) + 2\sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)\,\operatorname{tr}(K_2P_{bb}K_2^T)}$

$= \left(\sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)} + \sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)}\right)^2, \qquad (14)$

where equality holds when

$\gamma = \sqrt{\frac{\operatorname{tr}(K_2P_{bb}K_2^T)}{\operatorname{tr}(K_1P_{aa}K_1^T)}}. \qquad (15)$

Therefore we have the following.


Theorem 1 (fixed gains) For any given $K_1$ and $K_2$, a family of consistent estimates of the covariance matrix is given by (13). When $\gamma$ is chosen by (15), the trace is minimized and its value is given by the last equality in (14).

Note: Theorem 1 holds for any pair of $K_1$ and $K_2$, i.e., it does not require the equality constraint $K_1 + K_2 = I$. This constraint is imposed simply because we are interested in an unbiased estimator; it is not necessary for fusing two RVs in general.

It is worth pointing out that for a given value of $\omega$, and the corresponding gains $K_1$ and $K_2$ in (10), the bound (9) is contained in the above family (13). More specifically, the $P_{cc}$ given by (9) satisfies equation (13) for a particular choice of $\gamma$: if

$K_1 = \omega P_{cc}P_{aa}^{-1}, \qquad K_2 = (1-\omega)P_{cc}P_{bb}^{-1},$

and

$\gamma = \frac{1-\omega}{\omega},$

equation (13) becomes

$P_{cc} = \frac{1}{\omega}\,\omega^2\,P_{cc}P_{aa}^{-1}P_{aa}P_{aa}^{-1}P_{cc} + \frac{1}{1-\omega}\,(1-\omega)^2\,P_{cc}P_{bb}^{-1}P_{bb}P_{bb}^{-1}P_{cc} = P_{cc}\left(\omega P_{aa}^{-1} + (1-\omega)P_{bb}^{-1}\right)P_{cc},$

which is satisfied by

$P_{cc}^{-1} = \omega P_{aa}^{-1} + (1-\omega)P_{bb}^{-1},$

which is (9).
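Theorem 1 translates directly into a few lines of code. The sketch below (ours, not from the paper) computes the family-optimal bound (13) with $\gamma$ from (15) for fixed gains, assuming $K_1 \ne 0$ and $K_2 \ne 0$ so that (15) is well defined:

```python
# Minimal sketch (not from the paper) of Theorem 1.
import numpy as np

def family_optimal_bound(K1, Paa, K2, Pbb):
    """Family (13) with gamma from (15); assumes K1 != 0 and K2 != 0."""
    T1 = np.trace(K1 @ Paa @ K1.T)
    T2 = np.trace(K2 @ Pbb @ K2.T)
    gamma = np.sqrt(T2 / T1)                      # Eq. (15)
    Pcc = (1 + gamma) * (K1 @ Paa @ K1.T) \
        + (1 + 1 / gamma) * (K2 @ Pbb @ K2.T)     # Eq. (13)
    # Its trace equals (sqrt(T1) + sqrt(T2))**2, the last equality in (14).
    return Pcc
```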

4.2 Problem 2

Our solution to Problem 2 posed in Section 2 was motivated by the following observation. For any given $\omega$, a pair of combination gains $K_1$ and $K_2$ is determined by (10). We have just shown that a consistent estimate of the resulting covariance matrix is given by the family (13), of which the matrix given in (9) is a special case. In terms of trace, the best in the family (13) is given by choosing $\gamma$ as in (15), and this should yield a trace less than or equal to that of (9). How does the comparison look when $\omega$ ranges over $[0, 1]$? As an example, let $P_{aa}$ and $P_{bb}$ be a fixed pair of $2 \times 2$ covariance matrices. Figure 3 shows the trace of $P_{cc}$ as a function of $\omega$, with the solid line given by (9) in the CI Algorithm and the dashed line given by the family-optimal choice (14). As can be seen, the trace offered by the best $\omega$ matches the family-optimal one for that particular pair of $K_1$ and $K_2$. We next present an even stronger result: it matches the best family-optimal one for all pairs of $K_1$ and $K_2$.

Note that a general solution to Problem 2 takes the form

$(K_1^*, K_2^*) = \arg\min_{K_1, K_2} J\big(P_{cc}(K_1, K_2)\big), \qquad K_1 + K_2 = I,$

where $J$ represents a performance criterion such as trace or determinant. In the case of trace minimization, we show that the above optimal solution is actually given by the CI Algorithm (with trace minimization).

Theorem 2 (family-optimal gains) There exists $\omega \in [0, 1]$ such that

$J = \sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)} + \sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)}, \qquad K_1 + K_2 = I, \qquad (16)$

is minimized by

$P_{cc} = \left(\omega P_{aa}^{-1} + (1-\omega)P_{bb}^{-1}\right)^{-1}, \qquad K_1 = \omega P_{cc}P_{aa}^{-1}, \qquad K_2 = (1-\omega)P_{cc}P_{bb}^{-1}.$


[Plot: $\operatorname{tr}(P_{cc})$ and $\operatorname{tr}(P_{cc}^*)$ versus the parameter $\omega \in [0, 1]$; legend: Covariance Intersection Algorithm, New Algorithm.]

Figure 3: Trace as a function of $\omega$: solid CI, dashed family-optimal

Proof: For the case when $J$ in (16) is minimized by either $K_1 = 0$ or $K_2 = 0$, $\omega$ can be chosen as $0$ or $1$. In the following we will assume that $K_1 \ne 0$ and $K_2 \ne 0$. Define the following Lagrange function:

$L = \sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)} + \sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)} + \operatorname{sum}\big(\Lambda \circ (K_1 + K_2 - I)\big),$

where $\operatorname{sum}(\cdot)$ is the summation of all the elements of a matrix, the operator $\circ$ denotes the elementwise product of two matrices, and $\Lambda$ is a matrix of Lagrange multipliers. Using the identity

$\frac{\partial}{\partial K}\operatorname{tr}\!\left(KPK^T\right) = K\left(P + P^T\right),$

the stationary points are given by the following equations:

$\frac{\partial L}{\partial K_1} = \frac{K_1P_{aa}}{\sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)}} + \Lambda = 0, \qquad (17)$

$\frac{\partial L}{\partial K_2} = \frac{K_2P_{bb}}{\sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)}} + \Lambda = 0, \qquad (18)$

$K_1 + K_2 - I = 0. \qquad (19)$

Let

$\alpha = \sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)}, \qquad \beta = \sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)}.$

From (17) we have

$K_1 = -\alpha\,\Lambda P_{aa}^{-1}.$

Similarly,

$K_2 = -\beta\,\Lambda P_{bb}^{-1}.$

Substituting into (19) we have

$-\Lambda = \left(\alpha P_{aa}^{-1} + \beta P_{bb}^{-1}\right)^{-1}. \qquad (20)$


Note that $\Lambda^T = \Lambda$. Now the definition of $\alpha$ and $\beta$ yields

$\operatorname{tr}\!\left(\alpha^2\,\Lambda P_{aa}^{-1}P_{aa}P_{aa}^{-1}\Lambda^T\right) = \alpha^2,$

or

$\operatorname{tr}\!\left(\Lambda P_{aa}^{-1}\Lambda^T\right) = 1. \qquad (21)$

Similarly,

$\operatorname{tr}\!\left(\Lambda P_{bb}^{-1}\Lambda^T\right) = 1. \qquad (22)$

Equations (20), (21), and (22) lead to polynomial equations of order $n$ in the variables $\alpha$ and $\beta$, where $n$ is the dimension of the problem. Our objective here is to parameterize the solutions using a one-dimensional variable. Recall that the minimum trace $J^2$ (where $J$ is defined in the theorem) is achieved by the choice of $\gamma$ in (15) in the family (13). The parameter $\gamma$ now becomes

$\gamma = \sqrt{\frac{\operatorname{tr}(K_2P_{bb}K_2^T)}{\operatorname{tr}(K_1P_{aa}K_1^T)}} = \frac{\beta\sqrt{\operatorname{tr}(\Lambda P_{bb}^{-1}\Lambda^T)}}{\alpha\sqrt{\operatorname{tr}(\Lambda P_{aa}^{-1}\Lambda^T)}} = \frac{\beta}{\alpha}.$

Thus the family-optimal covariance matrix is

$P_{cc} = \left(1+\frac{\beta}{\alpha}\right)K_1P_{aa}K_1^T + \left(1+\frac{\alpha}{\beta}\right)K_2P_{bb}K_2^T$

$= \left(1+\frac{\beta}{\alpha}\right)\alpha^2\,\Lambda P_{aa}^{-1}\Lambda^T + \left(1+\frac{\alpha}{\beta}\right)\beta^2\,\Lambda P_{bb}^{-1}\Lambda^T$

$= (\alpha+\beta)\,\Lambda\left(\alpha P_{aa}^{-1} + \beta P_{bb}^{-1}\right)\Lambda^T$

$= \left(\frac{\alpha}{\alpha+\beta}\,P_{aa}^{-1} + \frac{\beta}{\alpha+\beta}\,P_{bb}^{-1}\right)^{-1},$

where the last step uses (20) and the symmetry of $\Lambda$, and the gains are

$K_1 = \frac{\alpha}{\alpha+\beta}\,P_{cc}P_{aa}^{-1}, \qquad K_2 = \frac{\beta}{\alpha+\beta}\,P_{cc}P_{bb}^{-1}.$

The theorem is proved by setting

$\omega = \frac{\alpha}{\alpha+\beta}.$

According to the theorem, the $n^2$-dimensional optimization problem can be reduced to a one-dimensional one.
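The claim of Theorem 2 can be probed numerically. The sketch below (ours, not from the paper; matrices and sampling are illustrative assumptions) compares the best family-optimal trace along the CI curve against randomly sampled gains $K_1$ with $K_2 = I - K_1$; by the theorem, random search should never do better:

```python
# Minimal sketch (not from the paper) illustrating Theorem 2 numerically.
import numpy as np

rng = np.random.default_rng(0)
Paa = np.array([[2.0, 0.5], [0.5, 1.0]])    # illustrative values only
Pbb = np.array([[1.5, -0.3], [-0.3, 2.5]])

def family_optimal_trace(K1):
    """Minimal trace of the family (13) for gains K1 and K2 = I - K1."""
    K2 = np.eye(2) - K1
    T1 = np.trace(K1 @ Paa @ K1.T)
    T2 = np.trace(K2 @ Pbb @ K2.T)
    return (np.sqrt(T1) + np.sqrt(T2)) ** 2   # last equality in (14)

def ci_gain(w):
    """K1 on the CI curve, Eq. (10)."""
    Pcc = np.linalg.inv(w * np.linalg.inv(Paa) + (1 - w) * np.linalg.inv(Pbb))
    return w * Pcc @ np.linalg.inv(Paa)

ci_best = min(family_optimal_trace(ci_gain(w)) for w in np.linspace(0, 1, 101))
rand_best = min(family_optimal_trace(rng.normal(size=(2, 2)))
                for _ in range(20000))
print(ci_best, rand_best)   # expect rand_best >= ci_best, up to sampling noise
```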

5 Generalizations

In this section we will generalize the previous results to the case where partial estimates are to be fused, and to the case with multiple RVs.

5.1 Partial Estimates

The Kalman Filter usually takes the form of combining a state estimate $a$ with a measurement $b$. The results presented so far can be generalized to the case with partial observations in a straightforward fashion. If

$E[a] = M_a x, \qquad E[b] = M_b x,$

then a linear unbiased estimator can be defined as¹

$c = K_1 a + K_2 b, \qquad K_1 M_a + K_2 M_b = I.$

The family of consistent estimates of the covariance matrix is again given by

$P_{cc} = (1+\gamma)\,K_1P_{aa}K_1^T + (1+\gamma^{-1})\,K_2P_{bb}K_2^T, \qquad \gamma > 0. \qquad (23)$

¹ For $K_1M_a + K_2M_b = I$ to have a solution, $[M_a^T \ M_b^T]$ has to have full row rank.


When

$\gamma = \sqrt{\frac{\operatorname{tr}(K_2P_{bb}K_2^T)}{\operatorname{tr}(K_1P_{aa}K_1^T)}},$

$P_{cc}$ has the minimal trace

$\left(\sqrt{\operatorname{tr}(K_1P_{aa}K_1^T)} + \sqrt{\operatorname{tr}(K_2P_{bb}K_2^T)}\right)^2. \qquad (24)$

With a constraint on $M_a$ and $M_b$,

$M_a^TP_{aa}^{-1}M_a + M_b^TP_{bb}^{-1}M_b > 0,$

the CI Algorithm can be generalized [3]:

$P_{cc}^{-1} = \omega\,M_a^TP_{aa}^{-1}M_a + (1-\omega)\,M_b^TP_{bb}^{-1}M_b, \qquad (25)$

$K_1 = \omega\,P_{cc}M_a^TP_{aa}^{-1}, \qquad K_2 = (1-\omega)\,P_{cc}M_b^TP_{bb}^{-1}, \qquad (26)$

where $\omega \in [0, 1]$. It can be shown in a similar fashion that for any given $\omega$, and hence $K_1$ and $K_2$, the family (23) contains (25) for the particular choice $\gamma = (1-\omega)\,\omega^{-1}$. It can also be proved, along the same lines as in Theorem 2, that the trace (24) is minimized by $K_1$ and $K_2$ that can be parameterized by the generalized CI Algorithm as in (25) and (26).
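A minimal sketch (ours, not from the paper) of the generalized CI step (25)-(26) for partial estimates; the observation matrices $M_a$, $M_b$ and a fixed $\omega$ are supplied by the caller:

```python
# Minimal sketch (not from the paper) of generalized CI, Eqs. (25)-(26).
import numpy as np

def ci_fuse_partial(a, Paa, Ma, b, Pbb, Mb, w):
    """Generalized CI for one fixed omega = w in [0, 1]."""
    Ja = Ma.T @ np.linalg.inv(Paa) @ Ma
    Jb = Mb.T @ np.linalg.inv(Pbb) @ Mb
    Pcc = np.linalg.inv(w * Ja + (1 - w) * Jb)   # Eq. (25)
    K1 = w * Pcc @ Ma.T @ np.linalg.inv(Paa)     # Eq. (26)
    K2 = (1 - w) * Pcc @ Mb.T @ np.linalg.inv(Pbb)
    # K1 @ Ma + K2 @ Mb equals the identity by construction (unbiasedness).
    return K1 @ a + K2 @ b, Pcc
```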

5.2 Multiple RVs

In this section we will extend the results for two RVs to multiple RVs. Notations will be changed from $a$ and $b$ to $a_1, a_2, \dots$, and from $c$ to $a$. To illustrate the generalization, we will describe in detail the case of three RVs; the case of more RVs follows in a straightforward fashion.

Let $x$ be the mean value to be estimated. Define the estimation errors

$\tilde{a}_i = a_i - x, \qquad i = 1, 2, 3.$

Assume that the estimates are unbiased,

$E[\tilde{a}_i] = 0,$

and that upper bounds on the covariance matrices are known:

$E[\tilde{a}_i\tilde{a}_i^T] = \bar{P}_i \le P_i.$

Assume that the correlations

$E[\tilde{a}_i\tilde{a}_j^T] = \bar{P}_{ij}, \qquad i \ne j,$

are unknown. Our objective is to obtain a linear unbiased estimate

$a = K_1 a_1 + K_2 a_2 + K_3 a_3,$

where

$K_1 + K_2 + K_3 = I,$

and an upper bound $P$ on its covariance matrix,

$E[\tilde{a}\tilde{a}^T] = \bar{P} \le P,$

where $\tilde{a} = a - x$. For fixed gains $K_1$, $K_2$ and $K_3$, there is more than one way to obtain the upper bound $P$. We will present a family of such bounds based on our results discussed earlier. First we define

$a' = K_1 a_1 + K_2 a_2;$

then $a$ can be re-written as

$a = a' + K_3 a_3.$


Thus $P$ can be given by the family

$P = (1+\gamma_2)\,P' + (1+\gamma_2^{-1})\,K_3P_3K_3^T, \qquad \gamma_2 > 0,$

where the covariance bound $P'$ for the estimation error of $a'$ can be chosen from the family

$P' = (1+\gamma_1)\,K_1P_1K_1^T + (1+\gamma_1^{-1})\,K_2P_2K_2^T, \qquad \gamma_1 > 0.$

Combining the above two, we have

$P = (1+\gamma_2)(1+\gamma_1)\,K_1P_1K_1^T + (1+\gamma_2)(1+\gamma_1^{-1})\,K_2P_2K_2^T + (1+\gamma_2^{-1})\,K_3P_3K_3^T, \qquad \gamma_1, \gamma_2 > 0. \qquad (27)$

Next we will find the minimum trace in this family. Define

$T_i = \operatorname{tr}(K_iP_iK_i^T), \qquad i = 1, 2, 3.$

Then

$\operatorname{tr}(P) = (1+\gamma_2)\left[(1+\gamma_1)T_1 + (1+\gamma_1^{-1})T_2\right] + (1+\gamma_2^{-1})T_3.$

Setting

$\frac{\partial\operatorname{tr}(P)}{\partial\gamma_1} = 0, \qquad \frac{\partial\operatorname{tr}(P)}{\partial\gamma_2} = 0,$

we have

$\gamma_1 = \frac{\sqrt{T_2}}{\sqrt{T_1}}, \qquad \gamma_2 = \frac{\sqrt{T_3}}{\sqrt{T_1}+\sqrt{T_2}}, \qquad (28)$

and

$\operatorname{tr}(P) = \left(\sqrt{T_1}+\sqrt{T_2}+\sqrt{T_3}\right)^2. \qquad (29)$

Thus we have shown that for given $K_1$, $K_2$ and $K_3$, a family of upper bounds (27) is available, and by choosing the parameters as in (28), the member with the minimum trace (29) can be selected. To find the $K_1$, $K_2$ and $K_3$ that minimize this family-optimal trace, we proceed as follows. Define the Lagrange function

$L = \sqrt{T_1}+\sqrt{T_2}+\sqrt{T_3} + \operatorname{sum}\big(\Lambda \circ (K_1+K_2+K_3-I)\big)$

as in Section 4. Setting the partial derivatives of $L$ with respect to $K_i$ to zero yields

$\frac{K_iP_i}{\sqrt{T_i}} + \Lambda = 0, \qquad \text{or} \qquad K_i = -\sqrt{T_i}\,\Lambda P_i^{-1}. \qquad (30)$

The constraint $K_1+K_2+K_3 = I$ gives

$-\Lambda = \left(\sqrt{T_1}\,P_1^{-1} + \sqrt{T_2}\,P_2^{-1} + \sqrt{T_3}\,P_3^{-1}\right)^{-1}.$

Recall that the family-optimal trace is given by the parameters chosen as in (28) and the covariance matrix is given by (27). With $K_i$ chosen as in (30), we have


$P = \left(1+\frac{\sqrt{T_3}}{\sqrt{T_1}+\sqrt{T_2}}\right)\left[\left(1+\frac{\sqrt{T_2}}{\sqrt{T_1}}\right)T_1\,\Lambda P_1^{-1}\Lambda^T + \left(1+\frac{\sqrt{T_1}}{\sqrt{T_2}}\right)T_2\,\Lambda P_2^{-1}\Lambda^T\right] + \left(1+\frac{\sqrt{T_1}+\sqrt{T_2}}{\sqrt{T_3}}\right)T_3\,\Lambda P_3^{-1}\Lambda^T$

$= \left(\sqrt{T_1}+\sqrt{T_2}+\sqrt{T_3}\right)\Lambda\left(\sqrt{T_1}\,P_1^{-1}+\sqrt{T_2}\,P_2^{-1}+\sqrt{T_3}\,P_3^{-1}\right)\Lambda^T$

$= \left(\omega_1P_1^{-1}+\omega_2P_2^{-1}+\omega_3P_3^{-1}\right)^{-1},$

and

$K_1 = \omega_1\,P\,P_1^{-1}, \qquad K_2 = \omega_2\,P\,P_2^{-1}, \qquad K_3 = \omega_3\,P\,P_3^{-1},$

where

$\omega_i = \frac{\sqrt{T_i}}{\sqrt{T_1}+\sqrt{T_2}+\sqrt{T_3}} \in [0, 1], \qquad \omega_1+\omega_2+\omega_3 = 1.$

Thus the optimization can be conducted in $\mathbb{R}^2$ rather than in $\mathbb{R}^{2n^2}$.
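A minimal sketch (ours, not from the paper) of this three-RV parameterization, searching $(\omega_1, \omega_2)$ over the two-dimensional simplex on a grid of arbitrary resolution:

```python
# Minimal sketch (not from the paper): three-RV fusion with
# P^-1 = w1 P1^-1 + w2 P2^-1 + w3 P3^-1 and Ki = wi P Pi^-1.
import numpy as np
from itertools import product

def ci_fuse_three(estimates, covariances, steps=50):
    invs = [np.linalg.inv(P) for P in covariances]
    best = None
    for i, j in product(range(steps + 1), repeat=2):
        w1, w2 = i / steps, j / steps
        w3 = 1.0 - w1 - w2
        if w3 < 0.0:
            continue                      # stay on the simplex
        P = np.linalg.inv(w1 * invs[0] + w2 * invs[1] + w3 * invs[2])
        if best is None or np.trace(P) < best[0]:
            best = (np.trace(P), (w1, w2, w3), P)
    _, ws, P = best
    gains = [w * P @ Pi_inv for w, Pi_inv in zip(ws, invs)]
    fused = sum(K @ a for K, a in zip(gains, estimates))
    return fused, P
```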

6 Conclusions

The Covariance Intersection Algorithm is reexamined in this paper, in the general framework of obtaining a consistent estimate of the covariance matrix when combining two random variables with unknown correlation. For the case when the gains are fixed, a family of consistent estimates is given. For the case when optimal gains are to be found in order to minimize the trace of the estimated covariance, it is proved that the solution is given by the CI Algorithm, which conducts the search on a one-dimensional curve rather than in the whole parameter space, so the optimization problem can be solved very efficiently. The results have also been extended to the case of combining more than two variables, and to the case of partial observations where only $Mx$ is available, $x$ being the quantity of interest. It is straightforward to extend the results to the case with dynamical equations. It is the authors' belief that with the newly gained understanding, the Covariance Intersection Algorithm will find more applications in the areas of distributed filtering, estimation, and data fusion.

7 Acknowledgment

This work was supported by NASA-JPL under contract NAS 3-01021.

References

[1] Simon J. Julier and Jeffrey K. Uhlmann. Non-divergent estimation algorithm in the presence of unknown correlations. In Proceedings of the American Control Conference, volume 4, pages 2369-2373, Piscataway, NJ, USA, 1997. IEEE.

[2] Simon Julier and Jeffrey Uhlmann. General decentralized data fusion with Covariance Intersection (CI). In D. Hall and J. Llinas, editors, Handbook of Multisensor Data Fusion, chapter 12, pages 12-1 to 12-25. CRC Press, 2001.

[3] Pablo O. Arambel, Constantino Rago, and Raman K. Mehra. Covariance intersection algorithm for distributed spacecraft state estimation. In Proceedings of the 2001 American Control Conference, volume 6, pages 4398-4403, 2001.

[4] X. Xu and S. Negahdaripour. Application of extended covariance intersection principle for mosaic-based optical positioning and navigation of underwater vehicles. In Proceedings of the IEEE International Conference on Robotics and Automation, volume 3, pages 2759-2766, 2001.

[5] E. T. Baumgartner, H. Aghazarian, A. Trebi-Ollennu, T. L. Huntsberger, and M. S. Garrett. State estimation and vehicle localization for the FIDO rover. In Proceedings of SPIE - The International Society for Optical Engineering, volume 4196, pages 329-336, Bellingham, WA, USA, 2000. Society of Photo-Optical Instrumentation Engineers.


[6] David Nicholson and Rob Deaves. Decentralized track fusion in dynamic networks. In Proceedings of SPIE - The International Society for Optical Engineering, volume 4048, pages 452-460, Bellingham, WA, USA, 2000. Society of Photo-Optical Instrumentation Engineers.

[7] Simon J. Julier and Jeffrey K. Uhlmann. Real time distributed map building in large environments. In Proceedings of SPIE - The International Society for Optical Engineering, volume 4196, pages 317-328, Bellingham, WA, USA, 2000. Society of Photo-Optical Instrumentation Engineers.

[8] Jeffrey Uhlmann, Simon Julier, Behzad Kamgar-Parsi, Marco Lanzagorta, and Haw-Jye Shyu. NASA Mars rover: a testbed for evaluating applications of covariance intersection. In Proceedings of SPIE - The International Society for Optical Engineering, volume 3693, pages 140-149, 1999.

[9] Ronald Mahler. Optimal/robust distributed data fusion: a unified approach. In Proceedings of SPIE - The International Society for Optical Engineering, volume 4052, pages 128-138, Bellingham, WA, USA, 2000. Society of Photo-Optical Instrumentation Engineers.

[10] C. Y. Chong and Shozo Mori. Convex combination and covariance intersection algorithms in distributed fusion. In Proceedings of Fusion 2001, 2001.

[11] Lingji Chen, Pablo O. Arambel, and Raman K. Mehra. Estimation under unknown correlation: Covariance Intersection revisited. IEEE Transactions on Automatic Control, 2002. To appear.