
Method of Least Squares

The LS filtering method is a deterministic method. The performance criterion is the sum of squared errors produced by the filter over a finite set of (training) data.

• The method is related to linear regression.
• The optimization procedure results in an LS best fit for the filter over the samples in the optimization (training) set.

Consider the linear transversal filter. Given:

• M: the number of taps in the filter
• {x(i)}: the input sequence
• {d(i)}: the desired output sequence

for i = 1, 2, …, N, the goal of the LS method is to set the tap weights such that the sum of squared errors

$$\varepsilon(\mathbf{w}) = \sum_{i=M}^{N} |e(i)|^2$$

is minimized.

Let $\mathbf{w} = [w_0, w_1, \ldots, w_{M-1}]^T$ be the weight vector and

$$\mathbf{x}(i) = [x(i), x(i-1), \ldots, x(i-M+1)]^T$$

be the observation vector for M ≤ i ≤ N. Then the error at time i is

$$e(i) = d(i) - \mathbf{w}^H \mathbf{x}(i)$$

The full set of error values can be compiled into a vector. Define

$$\boldsymbol{\varepsilon}^H = [e(M), e(M+1), \ldots, e(N)]$$

Then $\boldsymbol{\varepsilon}$ is the $(N-M+1) \times 1$ error vector.

Similarly, the $(N-M+1) \times 1$ desired vector d can be defined as

$$\mathbf{d}^H = [d(M), d(M+1), \ldots, d(N)]$$

If the filter output is denoted as $\hat{d}(i)$, then combining the filter output values in a vector yields

$$\hat{\mathbf{d}}^H = [\hat{d}(M), \hat{d}(M+1), \ldots, \hat{d}(N)] = [\mathbf{w}^H\mathbf{x}(M), \mathbf{w}^H\mathbf{x}(M+1), \ldots, \mathbf{w}^H\mathbf{x}(N)] = \mathbf{w}^H[\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)] = \mathbf{w}^H\mathbf{A}^H$$

where

$$\mathbf{A}^H = [\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)]$$

is the observation data matrix.

Expanding the data matrix,

$$\mathbf{A}^H = [\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)] =
\begin{bmatrix}
x(M) & x(M+1) & \cdots & x(N) \\
x(M-1) & x(M) & \cdots & x(N-1) \\
\vdots & \vdots & \ddots & \vdots \\
x(1) & x(2) & \cdots & x(N-M+1)
\end{bmatrix}$$

We see that $\mathbf{A}^H$ is an $M \times (N-M+1)$ rectangular Toeplitz matrix.

Combining all of the above, we have

filter output vector: $\hat{\mathbf{d}}^H = \mathbf{w}^H\mathbf{A}^H$

desired output vector: $\mathbf{d}^H$

error vector: $\boldsymbol{\varepsilon}^H = \mathbf{d}^H - \hat{\mathbf{d}}^H = \mathbf{d}^H - \mathbf{w}^H\mathbf{A}^H$

The sum of the squared estimate errors can now be written as

$$\varepsilon(\mathbf{w}) = \sum_{i=M}^{N}|e(i)|^2 = \boldsymbol{\varepsilon}^H\boldsymbol{\varepsilon} = (\mathbf{d}^H - \mathbf{w}^H\mathbf{A}^H)(\mathbf{d} - \mathbf{A}\mathbf{w}) = \mathbf{d}^H\mathbf{d} - \mathbf{d}^H\mathbf{A}\mathbf{w} - \mathbf{w}^H\mathbf{A}^H\mathbf{d} + \mathbf{w}^H\mathbf{A}^H\mathbf{A}\mathbf{w}$$

Minimizing with respect to w,

$$\frac{\partial \varepsilon(\mathbf{w})}{\partial \mathbf{w}} = -2\mathbf{A}^H\mathbf{d} + 2\mathbf{A}^H\mathbf{A}\mathbf{w}$$

and setting this to zero gives the optimal LS weights $\hat{\mathbf{w}}$:

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d} \qquad \text{(deterministic normal equation)}$$

While A is not generally square, and thus not invertible, $\mathbf{A}^H\mathbf{A}$ is square and generally invertible. Thus

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d} \;\Rightarrow\; \hat{\mathbf{w}} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d}$$

Note that the deterministic normal equation can be rearranged as

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} - \mathbf{A}^H\mathbf{d} = \mathbf{0} \;\Rightarrow\; \mathbf{A}^H(\mathbf{d} - \mathbf{A}\hat{\mathbf{w}}) = \mathbf{0}, \quad\text{or, using } \boldsymbol{\varepsilon}_{\min} = \mathbf{d} - \mathbf{A}\hat{\mathbf{w}}, \quad \mathbf{A}^H\boldsymbol{\varepsilon}_{\min} = \mathbf{0}$$

Thus the LS orthogonality principle states that the estimate error $\boldsymbol{\varepsilon}_{\min}$ is orthogonal to the row vectors of the data matrix $\mathbf{A}^H$.
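The normal equation and the orthogonality principle are easy to verify numerically. A minimal numpy sketch, assuming real-valued data (so the Hermitian transpose reduces to the ordinary transpose) and arbitrary placeholder training sequences:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200                          # filter taps, training samples x(1)..x(N)
x = rng.standard_normal(N)             # x(i) -> x[i-1] (1-indexed sequence)
d = rng.standard_normal(N)             # stand-in desired sequence

# Rows of A are x(i)^T = [x(i), ..., x(i-M+1)] for i = M..N; A^H -> A.T for real data
A = np.array([x[i - M:i][::-1] for i in range(M, N + 1)])   # (N-M+1) x M
dv = d[M - 1:N]                        # desired vector [d(M), ..., d(N)]

w_hat = np.linalg.solve(A.T @ A, A.T @ dv)   # normal equation (A^H A) w = A^H d
eps_min = dv - A @ w_hat                     # minimum error vector
print(np.allclose(A.T @ eps_min, 0))         # orthogonality principle: A^H eps_min = 0
```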

Or, in expanded form,

$$\mathbf{A}^H\boldsymbol{\varepsilon}_{\min} =
\begin{bmatrix}
x(M) & x(M+1) & \cdots & x(N) \\
x(M-1) & x(M) & \cdots & x(N-1) \\
\vdots & \vdots & \ddots & \vdots \\
x(1) & x(2) & \cdots & x(N-M+1)
\end{bmatrix}
\begin{bmatrix} e_{\min}(M) \\ e_{\min}(M+1) \\ \vdots \\ e_{\min}(N) \end{bmatrix}
= \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$$

We can also derive the minimum sum of squared errors, which results when $\hat{\mathbf{w}}$ is used:

$$e_{\min} = \boldsymbol{\varepsilon}_{\min}^H\boldsymbol{\varepsilon}_{\min} = (\mathbf{d}^H - \hat{\mathbf{w}}^H\mathbf{A}^H)(\mathbf{d} - \mathbf{A}\hat{\mathbf{w}}) = \mathbf{d}^H\mathbf{d} - \mathbf{d}^H\mathbf{A}\hat{\mathbf{w}} - \hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{d} + \hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}}$$

Utilizing the normal equations, we have

$$\hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{d} = \hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}}$$

Thus,

$$e_{\min} = \mathbf{d}^H\mathbf{d} - \mathbf{d}^H\mathbf{A}\hat{\mathbf{w}} \underbrace{-\,\hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{d} + \hat{\mathbf{w}}^H\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}}}_{=\,0} = \mathbf{d}^H\mathbf{d} - \mathbf{d}^H\mathbf{A}\hat{\mathbf{w}}$$

or, using $\hat{\mathbf{w}} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d}$,

$$e_{\min} = \mathbf{d}^H\mathbf{d} - \mathbf{d}^H\mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d}$$

where $\mathbf{d}^H\mathbf{d} = \sum_{i=M}^{N}|d(i)|^2$ is the energy of the desired response.

Consider again the deterministic normal equation

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d}$$

Note that

$$\boldsymbol{\Phi} = \mathbf{A}^H\mathbf{A} = [\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)]\begin{bmatrix}\mathbf{x}^H(M)\\ \mathbf{x}^H(M+1)\\ \vdots\\ \mathbf{x}^H(N)\end{bmatrix} = \sum_{i=M}^{N}\mathbf{x}(i)\mathbf{x}^H(i)$$

is the time-averaged correlation matrix, of size M × M.

From $\boldsymbol{\Phi} = \sum_{i=M}^{N}\mathbf{x}(i)\mathbf{x}^H(i)$ we see that:

1) $\boldsymbol{\Phi}$ is Hermitian.

2) $\boldsymbol{\Phi}$ is nonnegative definite, since for any a,

$$\mathbf{a}^H\boldsymbol{\Phi}\mathbf{a} = \sum_{i=M}^{N}\mathbf{a}^H\mathbf{x}(i)\mathbf{x}^H(i)\mathbf{a} = \sum_{i=M}^{N}[\mathbf{a}^H\mathbf{x}(i)][\mathbf{x}^H(i)\mathbf{a}] = \sum_{i=M}^{N}|\mathbf{a}^H\mathbf{x}(i)|^2 \ge 0$$

3) From 1) and 2) we can derive that the eigenvalues of $\boldsymbol{\Phi}$ are real and nonnegative.

The deterministic normal equation, $\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d}$, also gives

$$\mathbf{z} = \mathbf{A}^H\mathbf{d} = [\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)]\begin{bmatrix} d^*(M)\\ d^*(M+1)\\ \vdots\\ d^*(N)\end{bmatrix} = \sum_{i=M}^{N}\mathbf{x}(i)d^*(i)$$

which is the time-averaged cross-correlation vector, of size M × 1.

Thus the deterministic normal equation can be expressed as

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d} \;\Longleftrightarrow\; \boldsymbol{\Phi}\hat{\mathbf{w}} = \mathbf{z}$$

and, since $\boldsymbol{\Phi}$ is usually positive definite (it is always positive semi-definite),

$$\hat{\mathbf{w}} = \boldsymbol{\Phi}^{-1}\mathbf{z}$$

Also, $e_{\min}$ can now be expressed as

$$e_{\min} = \underbrace{\mathbf{d}^H\mathbf{d}}_{E_d} - \mathbf{d}^H\mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d} = E_d - \mathbf{z}^H\boldsymbol{\Phi}^{-1}\mathbf{z} = E_d - \mathbf{z}^H\hat{\mathbf{w}}$$

where $E_d = \mathbf{d}^H\mathbf{d}$ is the energy of the desired signal.
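The sum forms of $\boldsymbol{\Phi}$ and $\mathbf{z}$, and the resulting $e_{\min}$ expression, can be checked the same way; a sketch under the same assumptions as before (real data, placeholder sequences):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200
x, d = rng.standard_normal(N), rng.standard_normal(N)
obs = lambda i: x[i - M:i][::-1]       # observation vector x(i), 1-indexed

Phi = sum(np.outer(obs(i), obs(i)) for i in range(M, N + 1))   # time-averaged correlation
z = sum(obs(i) * d[i - 1] for i in range(M, N + 1))            # cross-correlation vector

w_hat = np.linalg.solve(Phi, z)        # Phi w = z
E_d = np.sum(d[M - 1:N] ** 2)          # energy of the desired response
e_min = E_d - z @ w_hat                # E_d - z^H w_hat

A = np.array([obs(i) for i in range(M, N + 1)])
print(np.isclose(e_min, np.sum((d[M - 1:N] - A @ w_hat) ** 2)))   # matches ||eps_min||^2
```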

Consider again the orthogonality principle, $\mathbf{A}^H\boldsymbol{\varepsilon}_{\min} = \mathbf{0}$. For $\hat{\mathbf{d}}$, the estimate of d utilizing $\hat{\mathbf{w}}$, i.e. $\hat{\mathbf{d}}^H = \hat{\mathbf{w}}^H\mathbf{A}^H$, we have

$$\mathbf{A}^H\boldsymbol{\varepsilon}_{\min} = \mathbf{0} \;\Rightarrow\; \hat{\mathbf{w}}^H\mathbf{A}^H\boldsymbol{\varepsilon}_{\min} = 0 \;\Rightarrow\; \hat{\mathbf{d}}^H\boldsymbol{\varepsilon}_{\min} = 0$$

Thus, the minimum estimation error vector, $\boldsymbol{\varepsilon}_{\min}$, is orthogonal to the data matrix $\mathbf{A}^H$ and to the LS estimate $\hat{\mathbf{d}}$.

Analysis of the LS solution

Suppose that the true underlying system is linear,

$$d(i) = \sum_{k=0}^{M-1} w_{0,k}^*\, x(i-k) + e_0(i) = \mathbf{w}_0^H\mathbf{x}(i) + e_0(i)$$

where $e_0(i)$ is the unobservable measurement error, with

$$E\{e_0(i)\} = 0$$

and

$$E\{e_0(i)e_0^*(k)\} = \begin{cases}\sigma^2 & i = k\\ 0 & i \ne k\end{cases}$$

That is, $e_0(i)$ is white with zero mean and variance $\sigma^2$.

Expressing the desired signal in vector form,

$$\mathbf{d} = \mathbf{A}\mathbf{w}_0 + \boldsymbol{\varepsilon}_0$$

where $\boldsymbol{\varepsilon}_0^H = [e_0(M), e_0(M+1), \ldots, e_0(N)]$.

Recall that

$$\hat{\mathbf{w}} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d}$$

Using $\mathbf{d} = \mathbf{A}\mathbf{w}_0 + \boldsymbol{\varepsilon}_0$ in the above,

$$\hat{\mathbf{w}} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H(\mathbf{A}\mathbf{w}_0 + \boldsymbol{\varepsilon}_0) = \mathbf{w}_0 + (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\boldsymbol{\varepsilon}_0$$

Since A is fixed, taking the expectation yields

$$E\{\hat{\mathbf{w}}\} = \mathbf{w}_0 + (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H E\{\boldsymbol{\varepsilon}_0\} = \mathbf{w}_0$$

• The LS estimate $\hat{\mathbf{w}}$ is unbiased.

Consider next the covariance of $\hat{\mathbf{w}}$. Note that

$$\hat{\mathbf{w}} = \mathbf{w}_0 + (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\boldsymbol{\varepsilon}_0 \;\Rightarrow\; \hat{\mathbf{w}} - \mathbf{w}_0 = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\boldsymbol{\varepsilon}_0$$

and

$$\operatorname{cov}[\hat{\mathbf{w}}] = E\{(\hat{\mathbf{w}}-\mathbf{w}_0)(\hat{\mathbf{w}}-\mathbf{w}_0)^H\} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H \underbrace{E\{\boldsymbol{\varepsilon}_0\boldsymbol{\varepsilon}_0^H\}}_{\sigma^2\mathbf{I}} \mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1} = \sigma^2(\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1} = \sigma^2(\mathbf{A}^H\mathbf{A})^{-1}$$

• The covariance of $\hat{\mathbf{w}}$ is proportional to the variance of the measurement noise and to the inverse of the time-averaged correlation matrix.
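Both properties can be observed in a small Monte Carlo experiment: hold A fixed, redraw only the measurement noise, and compare the empirical mean and covariance of $\hat{\mathbf{w}}$ against $\mathbf{w}_0$ and $\sigma^2(\mathbf{A}^H\mathbf{A})^{-1}$. A sketch; the particular $\mathbf{w}_0$, $\sigma$, and dimensions are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, sigma, trials = 3, 50, 0.1, 5000
x = rng.standard_normal(N)                         # fixed input => fixed A
A = np.array([x[i - M:i][::-1] for i in range(M, N + 1)])
w0 = np.array([0.5, -0.2, 0.1])                    # true underlying system (arbitrary)

# redraw the measurement error e0 and re-estimate w each trial
ests = np.array([np.linalg.solve(A.T @ A,
                                 A.T @ (A @ w0 + sigma * rng.standard_normal(len(A))))
                 for _ in range(trials)])

print(np.abs(ests.mean(axis=0) - w0).max())        # ~0: the LS estimate is unbiased
print(np.abs(np.cov(ests.T) - sigma**2 * np.linalg.inv(A.T @ A)).max())  # ~0: cov = sigma^2 (A^H A)^{-1}
```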

Now we will show that the LS estimate $\hat{\mathbf{w}}$ is the best linear unbiased estimate. Consider any linear unbiased estimate $\tilde{\mathbf{w}}$, which we can write as

$$\tilde{\mathbf{w}} = \mathbf{B}\mathbf{d}$$

where B is an $M \times (N-M+1)$ matrix. Substituting $\mathbf{d} = \mathbf{A}\mathbf{w}_0 + \boldsymbol{\varepsilon}_0$ in the above,

$$\tilde{\mathbf{w}} = \mathbf{B}\mathbf{A}\mathbf{w}_0 + \mathbf{B}\boldsymbol{\varepsilon}_0$$

Taking the expectation,

$$E\{\tilde{\mathbf{w}}\} = \mathbf{B}\mathbf{A}\mathbf{w}_0 \;\Rightarrow\; \mathbf{B}\mathbf{A} = \mathbf{I}$$

for $\tilde{\mathbf{w}}$ to be unbiased. Thus $\tilde{\mathbf{w}} = \mathbf{w}_0 + \mathbf{B}\boldsymbol{\varepsilon}_0$,

or

$$\tilde{\mathbf{w}} - \mathbf{w}_0 = \mathbf{B}\boldsymbol{\varepsilon}_0$$

and

$$\operatorname{cov}[\tilde{\mathbf{w}}] = E\{(\tilde{\mathbf{w}}-\mathbf{w}_0)(\tilde{\mathbf{w}}-\mathbf{w}_0)^H\} = \mathbf{B}E\{\boldsymbol{\varepsilon}_0\boldsymbol{\varepsilon}_0^H\}\mathbf{B}^H = \sigma^2\mathbf{B}\mathbf{B}^H$$

Now define

$$\boldsymbol{\Psi} = \mathbf{B} - (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H$$

which yields

$$\begin{aligned}
\boldsymbol{\Psi}\boldsymbol{\Psi}^H &= [\mathbf{B} - (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H][\mathbf{B}^H - \mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}]\\
&= \mathbf{B}\mathbf{B}^H - \underbrace{\mathbf{B}\mathbf{A}}_{\mathbf{I}}(\mathbf{A}^H\mathbf{A})^{-1} - (\mathbf{A}^H\mathbf{A})^{-1}\underbrace{\mathbf{A}^H\mathbf{B}^H}_{\mathbf{I}} + (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{A}(\mathbf{A}^H\mathbf{A})^{-1}\\
&= \mathbf{B}\mathbf{B}^H - (\mathbf{A}^H\mathbf{A})^{-1}
\end{aligned}$$

Note that the diagonal elements of $\boldsymbol{\Psi}\boldsymbol{\Psi}^H$ must be ≥ 0. Thus $\boldsymbol{\Psi}\boldsymbol{\Psi}^H = \mathbf{B}\mathbf{B}^H - (\mathbf{A}^H\mathbf{A})^{-1}$ implies

$$\operatorname{diag}[\mathbf{B}\mathbf{B}^H] \ge \operatorname{diag}[(\mathbf{A}^H\mathbf{A})^{-1}]$$

or, multiplying by $\sigma^2$,

$$\sigma^2\operatorname{diag}[\mathbf{B}\mathbf{B}^H] \ge \sigma^2\operatorname{diag}[(\mathbf{A}^H\mathbf{A})^{-1}]$$

But $\operatorname{cov}[\hat{\mathbf{w}}] = \sigma^2(\mathbf{A}^H\mathbf{A})^{-1}$ and $\operatorname{cov}[\tilde{\mathbf{w}}] = \sigma^2\mathbf{B}\mathbf{B}^H$. Thus the above states

$$\operatorname{variance}[\tilde{w}_i] \ge \operatorname{variance}[\hat{w}_i], \qquad i = 1, 2, \ldots, M$$

Thus the weights in $\hat{\mathbf{w}}$ have lower variance than those of any other linear unbiased estimate.

• The LS estimate $\hat{\mathbf{w}}$ is the Best Linear Unbiased Estimate (BLUE).
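The diagonal argument can be checked directly for a concrete competitor. Any B with BA = I is linear and unbiased; a weighted LS estimator with an arbitrary weighting C (an assumption made purely to construct a rival, not part of the notes) is one such choice, and its per-weight variances are never smaller than those of plain LS:

```python
import numpy as np

rng = np.random.default_rng(6)
M, N, sigma = 3, 40, 0.5
A = rng.standard_normal((N, M))
C = np.diag(rng.uniform(0.2, 5.0, N))            # arbitrary SPD weighting != I

B_ls  = np.linalg.inv(A.T @ A) @ A.T             # LS: B = (A^H A)^{-1} A^H
B_alt = np.linalg.inv(A.T @ C @ A) @ A.T @ C     # weighted LS: also satisfies B A = I
print(np.allclose(B_alt @ A, np.eye(M)))         # unbiasedness condition B A = I

var_ls  = sigma**2 * np.diag(B_ls @ B_ls.T)      # = sigma^2 diag[(A^H A)^{-1}]
var_alt = sigma**2 * np.diag(B_alt @ B_alt.T)    # = sigma^2 diag[B B^H]
print(np.all(var_alt >= var_ls - 1e-12))         # True: LS weights have the lowest variance
```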

Recursive Least Squares (RLS) Estimation

In the LS approach, we chose the M-element weight vector w to minimize

$$\varepsilon(\mathbf{w}) = \sum_{i=M}^{N}|e(i)|^2$$

where $e(i) = d(i) - \mathbf{w}^H\mathbf{x}(i)$. The observation vectors can be written as

$$\mathbf{A}^H = [\mathbf{x}(M), \mathbf{x}(M+1), \ldots, \mathbf{x}(N)] = \begin{bmatrix}
x(M) & x(M+1) & \cdots & x(N)\\
x(M-1) & x(M) & \cdots & x(N-1)\\
\vdots & \vdots & \ddots & \vdots\\
x(1) & x(2) & \cdots & x(N-M+1)
\end{bmatrix}$$

and, given the desired output

$$\mathbf{d}^H = [d(M), d(M+1), \ldots, d(N)]$$

the LS solution is defined by

$$\mathbf{A}^H\mathbf{A}\hat{\mathbf{w}} = \mathbf{A}^H\mathbf{d} \quad\text{or}\quad \hat{\mathbf{w}} = (\mathbf{A}^H\mathbf{A})^{-1}\mathbf{A}^H\mathbf{d} = \boldsymbol{\Phi}^{-1}\mathbf{z}$$

where

$$\boldsymbol{\Phi} = \sum_{i=M}^{N}\mathbf{x}(i)\mathbf{x}^H(i) \quad\text{and}\quad \mathbf{z} = \sum_{i=M}^{N}\mathbf{x}(i)d^*(i)$$

We now wish to form a recursive update for the weights such that we do not have to recompute $(\mathbf{A}^H\mathbf{A})^{-1}$ for each new observation sample:

• $(\mathbf{A}^H\mathbf{A})$ is M × M
• its inversion requires O(M³) multiplications and additions
• we can use the matrix inversion lemma to reduce the number of computations
• the weights are now a function of n

Let the observation sequence be x(1), x(2), …, x(n), where we assume x(l) = 0 for l ≤ 0. Then the error is defined as

$$\varepsilon(n) = \sum_{i=1}^{n}\beta(n,i)|e(i)|^2$$

where

$$\begin{aligned}
e(i) &= d(i) - \mathbf{w}^H(n)\mathbf{x}(i)\\
\mathbf{x}(i) &= [x(i), x(i-1), \ldots, x(i-M+1)]^T\\
\mathbf{w}(n) &= [w_0(n), w_1(n), \ldots, w_{M-1}(n)]^T
\end{aligned}$$

and β(n, i) ∈ (0, 1] is a forgetting factor, used when the underlying statistics are nonstationary.

A commonly used forgetting factor is the exponential forgetting factor

$$\beta(n, i) = \lambda^{n-i}, \qquad i = 1, 2, \ldots, n$$

where λ ∈ (0, 1]. Thus,

$$\varepsilon(n) = \sum_{i=1}^{n}\lambda^{n-i}|e(i)|^2$$

The LS solution to this problem is given by the normal equation

$$\boldsymbol{\Phi}(n)\hat{\mathbf{w}}(n) = \mathbf{z}(n)$$

where now

$$\boldsymbol{\Phi}(n) = \sum_{i=1}^{n}\lambda^{n-i}\mathbf{x}(i)\mathbf{x}^H(i) \quad\text{and}\quad \mathbf{z}(n) = \sum_{i=1}^{n}\lambda^{n-i}\mathbf{x}(i)d^*(i)$$

The deterministic normal equation components can be updated recursively:

$$\boldsymbol{\Phi}(n) = \sum_{i=1}^{n}\lambda^{n-i}\mathbf{x}(i)\mathbf{x}^H(i) = \lambda\underbrace{\Big[\sum_{i=1}^{n-1}\lambda^{n-1-i}\mathbf{x}(i)\mathbf{x}^H(i)\Big]}_{\boldsymbol{\Phi}(n-1)} + \mathbf{x}(n)\mathbf{x}^H(n) = \lambda\boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\mathbf{x}^H(n)$$

Similarly,

$$\mathbf{z}(n) = \sum_{i=1}^{n}\lambda^{n-i}\mathbf{x}(i)d^*(i) = \lambda\underbrace{\Big[\sum_{i=1}^{n-1}\lambda^{n-1-i}\mathbf{x}(i)d^*(i)\Big]}_{\mathbf{z}(n-1)} + \mathbf{x}(n)d^*(n) = \lambda\mathbf{z}(n-1) + \mathbf{x}(n)d^*(n)$$
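These two rank-one recursions are what make RLS cheap. A quick sketch confirming that they reproduce the exponentially weighted sums (arbitrary data; x(l) = 0 for l ≤ 0):

```python
import numpy as np

rng = np.random.default_rng(2)
M, n_max, lam = 3, 40, 0.98
x = rng.standard_normal(n_max)
d = rng.standard_normal(n_max)
obs = lambda n: np.array([x[n - 1 - k] if n - 1 - k >= 0 else 0.0 for k in range(M)])

Phi, z = np.zeros((M, M)), np.zeros(M)
for n in range(1, n_max + 1):                    # rank-one recursions
    Phi = lam * Phi + np.outer(obs(n), obs(n))
    z = lam * z + obs(n) * d[n - 1]

# direct exponentially weighted sums for comparison
Phi_direct = sum(lam**(n_max - i) * np.outer(obs(i), obs(i)) for i in range(1, n_max + 1))
z_direct = sum(lam**(n_max - i) * obs(i) * d[i - 1] for i in range(1, n_max + 1))
print(np.allclose(Phi, Phi_direct), np.allclose(z, z_direct))   # True True
```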

In order to obtain the LS solution, we need the inverse of $\boldsymbol{\Phi}(n)$. The matrix inversion lemma states: if

$$\mathbf{A} = \mathbf{B}^{-1} + \mathbf{C}\mathbf{D}^{-1}\mathbf{C}^H$$

where A and B are M × M, C is M × L, D is L × L, and A, B, and D are positive definite (nonsingular), then

$$\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}[\mathbf{D} + \mathbf{C}^H\mathbf{B}\mathbf{C}]^{-1}\mathbf{C}^H\mathbf{B}$$

For

$$\boldsymbol{\Phi}(n) = \lambda\boldsymbol{\Phi}(n-1) + \mathbf{x}(n)\mathbf{x}^H(n)$$

we identify

$$\mathbf{A} = \boldsymbol{\Phi}(n)\;(M \times M), \quad \mathbf{B}^{-1} = \lambda\boldsymbol{\Phi}(n-1)\;(M \times M), \quad \mathbf{C} = \mathbf{x}(n)\;(M \times 1), \quad \mathbf{D} = 1\;(1 \times 1)$$

With these identifications, note that

$$[\mathbf{D} + \mathbf{C}^H\mathbf{B}\mathbf{C}]^{-1} = [1 + \lambda^{-1}\mathbf{x}^H(n)\boldsymbol{\Phi}^{-1}(n-1)\mathbf{x}(n)]^{-1}$$

is a scalar. Thus $\mathbf{A}^{-1} = \mathbf{B} - \mathbf{B}\mathbf{C}[\mathbf{D} + \mathbf{C}^H\mathbf{B}\mathbf{C}]^{-1}\mathbf{C}^H\mathbf{B}$ yields

$$\boldsymbol{\Phi}^{-1}(n) = \lambda^{-1}\boldsymbol{\Phi}^{-1}(n-1) - \frac{\lambda^{-2}\boldsymbol{\Phi}^{-1}(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\boldsymbol{\Phi}^{-1}(n-1)}{1 + \lambda^{-1}\mathbf{x}^H(n)\boldsymbol{\Phi}^{-1}(n-1)\mathbf{x}(n)}$$

Let $\mathbf{P}(n) = \boldsymbol{\Phi}^{-1}(n)$ and define the gain vector

$$\mathbf{k}(n) = \frac{\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \lambda^{-1}\mathbf{x}^H(n)\mathbf{P}(n-1)\mathbf{x}(n)}$$

Then

$$\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)$$
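The lemma-based update of P(n) can be verified against a direct matrix inverse; a sketch with a randomly generated positive definite Φ(n−1) and real data (so H becomes T):

```python
import numpy as np

rng = np.random.default_rng(3)
M, lam = 4, 0.99
Q = np.eye(M) + 0.1 * rng.standard_normal((M, M))
Phi_prev = Q @ Q.T                               # a positive definite Phi(n-1)
xn = rng.standard_normal(M)                      # new observation vector x(n)

P_prev = np.linalg.inv(Phi_prev)                 # P(n-1)
k = (P_prev @ xn / lam) / (1 + xn @ P_prev @ xn / lam)   # gain vector k(n)
P = P_prev / lam - np.outer(k, xn @ P_prev) / lam        # lemma-based update of P(n)

Phi = lam * Phi_prev + np.outer(xn, xn)          # Phi(n) = lam Phi(n-1) + x x^H
print(np.allclose(P, np.linalg.inv(Phi)))        # True: no explicit inversion needed
print(np.allclose(k, P @ xn))                    # True: k(n) = P(n) x(n), as derived next
```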

The gain vector can be simplified further. Multiplying through by the denominator,

$$\mathbf{k}(n)[1 + \lambda^{-1}\mathbf{x}^H(n)\mathbf{P}(n-1)\mathbf{x}(n)] = \lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)$$

so

$$\mathbf{k}(n) = \lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)\mathbf{x}(n) = \underbrace{[\lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)]}_{\mathbf{P}(n)}\mathbf{x}(n)$$

Therefore

$$\mathbf{k}(n) = \mathbf{P}(n)\mathbf{x}(n) = \boldsymbol{\Phi}^{-1}(n)\mathbf{x}(n)$$

We must now derive an update for the tap weight vector. Recall

$$\hat{\mathbf{w}}(n) = \boldsymbol{\Phi}^{-1}(n)\mathbf{z}(n) = \mathbf{P}(n)\mathbf{z}(n)$$

Using $\mathbf{z}(n) = \lambda\mathbf{z}(n-1) + \mathbf{x}(n)d^*(n)$, we have

$$\hat{\mathbf{w}}(n) = \lambda\mathbf{P}(n)\mathbf{z}(n-1) + \mathbf{P}(n)\mathbf{x}(n)d^*(n)$$

Using the update

$$\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)$$

in the first P(n) term of $\hat{\mathbf{w}}(n) = \lambda\mathbf{P}(n)\mathbf{z}(n-1) + \mathbf{P}(n)\mathbf{x}(n)d^*(n)$ gives

$$\begin{aligned}
\hat{\mathbf{w}}(n) &= \lambda[\lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)]\mathbf{z}(n-1) + \mathbf{P}(n)\mathbf{x}(n)d^*(n)\\
&= \underbrace{\mathbf{P}(n-1)\mathbf{z}(n-1)}_{\hat{\mathbf{w}}(n-1)} - \mathbf{k}(n)\mathbf{x}^H(n)\underbrace{\mathbf{P}(n-1)\mathbf{z}(n-1)}_{\hat{\mathbf{w}}(n-1)} + \underbrace{\mathbf{P}(n)\mathbf{x}(n)}_{\mathbf{k}(n)}d^*(n)\\
&= \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)[d^*(n) - \mathbf{x}^H(n)\hat{\mathbf{w}}(n-1)]\\
&= \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\alpha^*(n)
\end{aligned}$$

where $\alpha(n) = d(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n)$.

Notice the difference between e(n) and α(n):

$$\begin{aligned}
e(n) &= d(n) - \hat{\mathbf{w}}^H(n)\mathbf{x}(n) && \text{(a posteriori error)}\\
\alpha(n) &= d(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n) && \text{(a priori error)}
\end{aligned}$$

Summary of the RLS algorithm:

1) Given a new sample x(n), update the gain vector

$$\mathbf{k}(n) = \frac{\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \lambda^{-1}\mathbf{x}^H(n)\mathbf{P}(n-1)\mathbf{x}(n)}$$

2) Update the innovation

$$\alpha(n) = d(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n)$$

3) Update the tap weight vector

$$\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\alpha^*(n)$$

4) Update the inverse correlation matrix

$$\mathbf{P}(n) = \lambda^{-1}\mathbf{P}(n-1) - \lambda^{-1}\mathbf{k}(n)\mathbf{x}^H(n)\mathbf{P}(n-1)$$

Initial conditions: $\hat{\mathbf{w}}(0) = \mathbf{0}$ and $\boldsymbol{\Phi}(0) = \delta\mathbf{I}$, where δ is a small positive constant, $\delta \approx 0.01\sigma_x^2$.
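The four steps translate almost line for line into code. A minimal real-valued sketch (the FIR identification test at the end, including its system and noise level, is an arbitrary example rather than part of the notes):

```python
import numpy as np

def rls(x, d, M, lam=0.99, delta=0.01):
    """Exponentially weighted RLS; returns the tap-weight history.
    Assumes real data and x(l) = 0 for l <= 0."""
    w = np.zeros(M)                       # w_hat(0) = 0
    P = np.eye(M) / delta                 # P(0) = inv(Phi(0)) = inv(delta I)
    W = []
    for n in range(len(x)):
        xv = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])  # x(n)
        pi = P @ xv / lam
        k = pi / (1.0 + xv @ pi)          # 1) gain vector k(n)
        alpha = d[n] - w @ xv             # 2) innovation (a priori error)
        w = w + k * alpha                 # 3) weight update (alpha* = alpha for real data)
        P = (P - np.outer(k, xv @ P)) / lam   # 4) inverse-correlation update
        W.append(w.copy())
    return np.array(W)

# example: identify an FIR system from noisy observations
rng = np.random.default_rng(4)
x = rng.standard_normal(500)
w_true = np.array([1.0, -0.5, 0.25])
d = np.convolve(x, w_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
print(rls(x, d, M=3, lam=1.0)[-1])        # ~ [1.0, -0.5, 0.25]
```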

Comparison between RLS and LMS algorithm terms:

| Entity | RLS | LMS |
|---|---|---|
| Error | $\alpha(n) = d(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n)$ (a priori error) | $e(n) = d(n) - \hat{\mathbf{w}}^H(n)\mathbf{x}(n)$ (a posteriori error) |
| Weight update | $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\alpha^*(n)$ | $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu\mathbf{x}(n)e^*(n)$ |
| Gain of error update | $\dfrac{\lambda^{-1}\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \lambda^{-1}\mathbf{x}^H(n)\mathbf{P}(n-1)\mathbf{x}(n)}$ | $\mu\mathbf{x}(n)$ |

Consider the complexity (number of additions and multiplications) for the LMS, LS, and RLS algorithms.

• Assume the data is real and the filter is of size M.

For the LMS algorithm,

1) $\hat{d}(n) = \mathbf{w}^T(n)\mathbf{x}(n)$
2) $e(n) = d(n) - \hat{d}(n)$
3) $\mathbf{w}(n+1) = \mathbf{w}(n) + \mu e(n)\mathbf{x}(n)$

Complexity:

| Stage | O× | O+ |
|---|---|---|
| 1) | M | M − 1 |
| 2) | 0 | 1 |
| 3) | M + 1 | M |
| Total per iteration | 2M + 1 | 2M |
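For reference, the three LMS stages tabulated above in the same style as the earlier RLS sketch (the step size μ is an arbitrary choice):

```python
import numpy as np

def lms(x, d, M, mu=0.01):
    """LMS: the three stages tabulated above, for real data."""
    w = np.zeros(M)
    for n in range(len(x)):
        xv = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        d_hat = w @ xv                    # 1) filter output: M multiplies, M-1 adds
        e = d[n] - d_hat                  # 2) error: 1 add
        w = w + mu * e * xv               # 3) update: M+1 multiplies, M adds
    return w
```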

The LS algorithm is given by $\boldsymbol{\Phi}(n)\hat{\mathbf{w}}(n) = \mathbf{z}(n)$. For each new sample we have

1) $\boldsymbol{\Phi}(n+1) = \boldsymbol{\Phi}(n) + \mathbf{x}(n+1)\mathbf{x}^T(n+1)$
2) $\mathbf{z}(n+1) = \mathbf{z}(n) + \mathbf{x}(n+1)d(n+1)$
3) $\hat{\mathbf{w}}(n+1) = \boldsymbol{\Phi}^{-1}(n+1)\mathbf{z}(n+1)$

Complexity:

| Stage | O× | O+ |
|---|---|---|
| 1) | M² | M² |
| 2) | M | M |
| 3) | M³ + M² | M³ + M(M − 1) |
| Total per iteration | M³ + 2M² + M | M³ + 2M² |

The RLS algorithm (assume λ = 1) is given by:

1) $\mathbf{k}(n) = \dfrac{\mathbf{P}(n-1)\mathbf{x}(n)}{1 + \mathbf{x}^T(n)\mathbf{P}(n-1)\mathbf{x}(n)}$
2) $\alpha(n) = d(n) - \hat{\mathbf{w}}^T(n-1)\mathbf{x}(n)$
3) $\hat{\mathbf{w}}(n) = \hat{\mathbf{w}}(n-1) + \mathbf{k}(n)\alpha(n)$
4) $\mathbf{P}(n) = \mathbf{P}(n-1) - \mathbf{k}(n)\mathbf{x}^T(n)\mathbf{P}(n-1)$

Note the repeated operation $\mathbf{x}^T(n)\mathbf{P}(n-1)$: it appears in the denominator of 1) and again in 4), and is computed once and reused.

Complexity:

| Stage | O× | O+ |
|---|---|---|
| 1) numerator | M² | M(M − 1) |
| 1) denominator | M² + M | M(M − 1) + M |
| 1) division | M | — |
| 2) | M | M |
| 3) | M | M |
| 4) | M² | M² |
| Total per iteration | 3M² + 4M | 3M² + M |

Analysis of the RLS algorithm

Assume again that the desired signal is formed by the regression model

$$d(n) = \mathbf{w}_0^H\mathbf{x}(n) + e_0(n)$$

where $e_0(n)$ is white with variance $\sigma^2$. Assume λ = 1 and n ≥ M; then

$$\hat{\mathbf{w}}(n) = \boldsymbol{\Phi}^{-1}(n)\mathbf{z}(n)$$

where

$$\boldsymbol{\Phi}(n) = \sum_{i=1}^{n}\mathbf{x}(i)\mathbf{x}^H(i) \quad\text{and}\quad \mathbf{z}(n) = \sum_{i=1}^{n}\mathbf{x}(i)d^*(i)$$

Substituting in for d(i) from above,

$$\mathbf{z}(n) = \sum_{i=1}^{n}\mathbf{x}(i)[\mathbf{x}^H(i)\mathbf{w}_0 + e_0^*(i)] = \sum_{i=1}^{n}\mathbf{x}(i)\mathbf{x}^H(i)\mathbf{w}_0 + \sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i) = \boldsymbol{\Phi}(n)\mathbf{w}_0 + \sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i)$$

Thus

$$\hat{\mathbf{w}}(n) = \boldsymbol{\Phi}^{-1}(n)\mathbf{z}(n) = \boldsymbol{\Phi}^{-1}(n)\Big[\boldsymbol{\Phi}(n)\mathbf{w}_0 + \sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i)\Big] = \mathbf{w}_0 + \boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i)$$

Note that $E\{A\} = E\{E\{A \mid B\}\}$. Thus

$$E\{\hat{\mathbf{w}}(n)\} = E\Big\{E\Big\{\mathbf{w}_0 + \boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i) \,\Big|\, \mathbf{x}(i),\, i = 1, 2, \ldots, n\Big\}\Big\} = \mathbf{w}_0 + E\Big\{\boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\mathbf{x}(i)\,E\{e_0^*(i)\}\Big\} = \mathbf{w}_0$$

since $e_0(i)$ is independent of all observations, and, given the x(i) terms, $\boldsymbol{\Phi}(n)$ is uniquely determined.

• The RLS algorithm is convergent in the mean for n ≥ M.
• How does this compare to the LMS algorithm?

Next, consider the convergence in the mean square. Using

$$\hat{\mathbf{w}}(n) = \mathbf{w}_0 + \boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i)$$

we have the weight error vector

$$\boldsymbol{\varepsilon}(n) = \hat{\mathbf{w}}(n) - \mathbf{w}_0 = \boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\mathbf{x}(i)e_0^*(i)$$

and the weight error correlation matrix is

$$\mathbf{K}(n) = E\{\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)\} = E\Big\{\boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\sum_{j=1}^{n}e_0^*(i)e_0(j)\,\mathbf{x}(i)\mathbf{x}^H(j)\,\boldsymbol{\Phi}^{-1}(n)\Big\}$$

Using $E\{A\} = E\{E\{A \mid B\}\}$ again,

$$\mathbf{K}(n) = E\Big\{\boldsymbol{\Phi}^{-1}(n)\sum_{i=1}^{n}\sum_{j=1}^{n}\underbrace{E\{e_0^*(i)e_0(j)\}}_{\sigma^2\delta(i-j)}\,\mathbf{x}(i)\mathbf{x}^H(j)\,\boldsymbol{\Phi}^{-1}(n)\Big\} = \sigma^2 E\Big\{\boldsymbol{\Phi}^{-1}(n)\Big[\sum_{i=1}^{n}\mathbf{x}(i)\mathbf{x}^H(i)\Big]\boldsymbol{\Phi}^{-1}(n)\Big\}$$

$$\mathbf{K}(n) = \sigma^2 E\{\boldsymbol{\Phi}^{-1}(n)\boldsymbol{\Phi}(n)\boldsymbol{\Phi}^{-1}(n)\} = \sigma^2 E\{\boldsymbol{\Phi}^{-1}(n)\}$$

The matrix $\boldsymbol{\Phi}(n)$ has a Wishart distribution, and the expectation of $\boldsymbol{\Phi}^{-1}(n)$ is

$$E\{\boldsymbol{\Phi}^{-1}(n)\} = \frac{1}{n - M - 1}\mathbf{R}^{-1}, \qquad n > M + 1$$

Thus

$$\mathbf{K}(n) = \frac{\sigma^2}{n - M - 1}\mathbf{R}^{-1}, \qquad n > M + 1$$

and, using the trace,

$$E\{\|\boldsymbol{\varepsilon}(n)\|^2\} = E\{\boldsymbol{\varepsilon}^H(n)\boldsymbol{\varepsilon}(n)\} = E\{\operatorname{trace}[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)]\} = \operatorname{trace}\{E[\boldsymbol{\varepsilon}(n)\boldsymbol{\varepsilon}^H(n)]\} = \operatorname{trace}[\mathbf{K}(n)] = \frac{\sigma^2}{n - M - 1}\operatorname{trace}[\mathbf{R}^{-1}] = \frac{\sigma^2}{n - M - 1}\sum_{i=1}^{M}\frac{1}{\lambda_i}, \qquad n > M + 1$$

• The weight vector MSE is initially proportional to $1/\lambda_{\min}$.
• The weight vector converges linearly in the mean-squared sense.

Now consider the learning (error) curve of the RLS algorithm. Recall the a priori estimation error

$$\alpha(n) = d(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n) = \mathbf{w}_0^H\mathbf{x}(n) + e_0(n) - \hat{\mathbf{w}}^H(n-1)\mathbf{x}(n) = e_0(n) - \boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)$$

Now consider the MSE of α(n):

$$\begin{aligned}
J'(n) = E\{|\alpha(n)|^2\} &= E\{[e_0(n) - \boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)][e_0^*(n) - \mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)]\}\\
&= E\{|e_0(n)|^2\} - E\{\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)e_0^*(n)\} - E\{e_0(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)\} + E\{\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)\}
\end{aligned}$$

Consider each term:

$$E\{|e_0(n)|^2\} = \sigma^2$$

Invoking the independence theorem, $\boldsymbol{\varepsilon}(n-1)$ is independent of x(n) and $e_0(n)$. Thus,

$$E\{\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)e_0^*(n)\} = E\{\boldsymbol{\varepsilon}^H(n-1)\}E\{\mathbf{x}(n)e_0^*(n)\}$$

which is 0 by the orthogonality principle. Thus

$$E\{\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)e_0^*(n)\} = E\{e_0(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)\} = 0$$

Now consider the last term:

$$E\{\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)\} = E\{\operatorname{trace}[\boldsymbol{\varepsilon}^H(n-1)\mathbf{x}(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)]\} = E\{\operatorname{trace}[\mathbf{x}(n)\mathbf{x}^H(n)\boldsymbol{\varepsilon}(n-1)\boldsymbol{\varepsilon}^H(n-1)]\}$$

and, by the independence theorem,

$$= \operatorname{trace}[E\{\mathbf{x}(n)\mathbf{x}^H(n)\}E\{\boldsymbol{\varepsilon}(n-1)\boldsymbol{\varepsilon}^H(n-1)\}] = \operatorname{trace}[\mathbf{R}\mathbf{K}(n-1)]$$

Putting the pieces together,

$$J'(n) = \sigma^2 + \operatorname{trace}[\mathbf{R}\mathbf{K}(n-1)]$$

or, since

$$\mathbf{K}(n-1) = \frac{\sigma^2}{n - M - 2}\mathbf{R}^{-1}$$

we have

$$J'(n) = \sigma^2 + \frac{M\sigma^2}{n - M - 2}, \qquad n > M + 2$$

• The ensemble-average learning curve of the RLS converges in about 2M iterations, which is typically an order of magnitude faster than the LMS.
• $\lim_{n\to\infty} J'(n) = \sigma^2$; thus there is no excess MSE.
• Convergence of the RLS algorithm is independent of the eigenvalues of $\boldsymbol{\Phi}(n)$.
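The J′(n) expression can be compared against an ensemble-averaged simulation of the regression model above. A sketch; M, σ, the horizon, and the number of runs are arbitrary, and P(0) is set large to approximate exact initialization:

```python
import numpy as np

rng = np.random.default_rng(7)
M, N, sigma, runs = 5, 100, 0.1, 500
w0 = rng.standard_normal(M)

J = np.zeros(N)
for _ in range(runs):                    # ensemble average of |alpha(n)|^2
    x = rng.standard_normal(N)
    w, P = np.zeros(M), 1e4 * np.eye(M)  # lambda = 1
    for n in range(N):
        xv = np.array([x[n - k] if n - k >= 0 else 0.0 for k in range(M)])
        d = w0 @ xv + sigma * rng.standard_normal()   # regression model
        pi = P @ xv
        k = pi / (1.0 + xv @ pi)
        alpha = d - w @ xv
        J[n] += alpha**2 / runs
        w += k * alpha
        P -= np.outer(k, xv @ P)

for n in (2 * M, 4 * M, N - 1):          # measured vs. theory sigma^2 (1 + M/(n - M - 2))
    print(n, J[n], sigma**2 * (1 + M / (n - M - 2)))
```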

Example: Consider again the channel equalization problem, where

$$h_n = \begin{cases} \dfrac{1}{2}\Big[1 + \cos\Big(\dfrac{2\pi}{W}(n-1)\Big)\Big] & n = 1, 2, 3 \\ 0 & \text{otherwise} \end{cases}$$

As before, an 11-tap filter is used. The SNR is fixed at 30 dB, and W is varied to control the eigenvalue spread.
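A sketch of this experiment. Several details here are assumptions rather than taken from the notes: the value W = 3.1, random ±1 transmitted symbols, a decision delay of 7 samples, λ = 1, and the P(0) initialization:

```python
import numpy as np

rng = np.random.default_rng(5)
Wp, M, N, snr_db, delay = 3.1, 11, 500, 30, 7      # W, taps, symbols, SNR, decision delay

h = np.array([0.5 * (1 + np.cos(2 * np.pi * (n - 1) / Wp)) for n in (1, 2, 3)])
a = rng.choice([-1.0, 1.0], size=N)                # transmitted +/-1 symbols (assumed)
u = np.convolve(a, h)[:N]                          # channel output
u += np.sqrt(u.var() / 10 ** (snr_db / 10)) * rng.standard_normal(N)   # 30 dB SNR

w, P, err = np.zeros(M), 100.0 * np.eye(M), []     # RLS equalizer, lambda = 1
for n in range(delay, N):
    xv = np.array([u[n - k] if n - k >= 0 else 0.0 for k in range(M)])
    pi = P @ xv
    k = pi / (1.0 + xv @ pi)
    alpha = a[n - delay] - w @ xv                  # desired output: delayed symbol
    w += k * alpha
    P -= np.outer(k, xv @ P)
    err.append(alpha ** 2)
print(np.mean(err[:2 * M]), np.mean(err[-100:]))   # squared a priori error: early vs. converged
```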

• The RLS algorithm converges in about 20 iterations (twice the number of filter taps).
• The convergence is insensitive to the eigenvalue spread.

• The RLS algorithm converges faster than the LMS algorithm.
• The RLS algorithm has lower steady-state error than the LMS algorithm.