

Page 1: 25. Multi Stage Wiener Filter

636 IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 52, NO. 3, MARCH 2004

Split Wiener Filtering With Application in Adaptive Systems

Leonardo S. Resende, Member, IEEE, João Marcos T. Romano, Member, IEEE, and Maurice G. Bellanger, Fellow, IEEE

Abstract—This paper proposes a new structure for split transversal filtering and introduces the optimum split Wiener filter. The approach consists of combining the idea of split filtering with a linearly constrained optimization scheme. Furthermore, a continued split procedure, which leads to a multisplit filter structure, is considered. It is shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power normalized, time-varying step-size least mean square (LMS) algorithm, which exploits the nature of the transformed input correlation matrix, is proposed for updating the adaptive filter coefficients. The multisplit approach is extended to linear-phase adaptive filtering and linear prediction. The optimum symmetric and antisymmetric linear-phase Wiener filters are presented. Simulation results enable us to evaluate the performance of the multisplit LMS algorithm.

Index Terms—Adaptive filtering, linear-phase filtering, linear prediction, linearly constrained filtering, split filtering, Wiener filtering.

I. INTRODUCTION

NONRECURSIVE systems have been frequently used in digital signal processing, mainly in adaptive filtering. Such finite impulse response (FIR) filters have the desirable properties of guaranteed stability and absence of limit cycles. However, in some applications, the filter order must be large (e.g., noise and echo cancellation and channel equalization, to name a few in the communication field) in order to obtain an acceptable performance. Consequently, an excessive number of multiplication operations is required, and the implementation of the filter becomes unfeasible, even for the most powerful digital signal processors. The problem grows worse in adaptive filtering. Besides the computational complexity, the convergence rate and the tracking capability of the algorithms also deteriorate with an increasing number of coefficients to be updated.

Owing to its simplicity and robustness, the least mean square (LMS) algorithm is one of the most widely used algorithms for adaptive signal processing. Unfortunately, its performance

Manuscript received March 21, 2002; revised May 19, 2003. The associate editor coordinating the review of this paper and approving it for publication was Dr. Naofal M. W. Al-Dhahir.

L. S. Resende is with the Electrical Engineering Department, Federal University of Santa Catarina, 88040-900, Florianópolis-SC, Brazil (e-mail: [email protected]).

J. M. T. Romano is with the Communication Department, State University of Campinas, 13083-970, Campinas-SP, Brazil (e-mail: [email protected]).

M. G. Bellanger is with the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers, 75141, Paris, France (e-mail: [email protected]).

Digital Object Identifier 10.1109/TSP.2003.822351

in terms of convergence rate and tracking capability depends on the eigenvalue spread of the input signal correlation matrix [1]–[3]. Transform domain LMS algorithms, like the discrete cosine transform (DCT) and the discrete Fourier transform (DFT), have been employed to solve this problem at the expense of a high computational complexity [2], [4]. In general, the approach consists of using an orthogonal transform together with power normalization to speed up the convergence of the LMS algorithm. Very interesting, efficient, and different approaches have also been proposed in the literature [5], [6], but they still present the same tradeoff between performance and complexity.

Another alternative to overcome the aforementioned drawbacks of nonrecursive adaptive systems is the split processing technique. The fundamental principles were introduced when Delsarte and Genin proposed a split Levinson algorithm for real Toeplitz matrices in [7]. By identifying the redundancy in computing the symmetric and antisymmetric parts of the predictors, they reduced the number of multiplication operations of the standard Levinson algorithm by about one half. Subsequently, the same authors extended the technique to classical algorithms in linear prediction theory, such as the Schur, the lattice, and the normalized lattice algorithms [8].

A split LMS adaptive filter for autoregressive (AR) modeling (linear prediction) was proposed in [9] and generalized to a so-called unified approach [10], [11] by the introduction of continuous splitting and the corresponding application to a general transversal filtering problem. Actually, an appropriate formulation of the split filtering problem has yet to be provided, and such a formulation would bring us more insight into this versatile digital signal processing technique, whose structure exhibits high modularity, parallelism, or concurrency. This is the purpose of the present paper.

By using an original and elegant joint approach combining split transversal filtering and linearly constrained optimization, a new structure for the split transversal filter is proposed. The optimum split Wiener filter and the optimum symmetric and antisymmetric linear-phase Wiener filters are introduced. The approach consists of imposing the symmetry and the antisymmetry conditions on the impulse responses of two filters connected in parallel by means of an appropriate set of linear constraints implemented with the so-called generalized sidelobe canceller structure.

Furthermore, a continued splitting process is applied to the proposed approach, giving rise to a multisplit filtering structure. We show that such multisplit processing does not reduce the eigenvalue spread, but it does improve the diagonalization factor of the input signal correlation matrix. The interpretations of the splitting transform as a linearly constrained processing are then

1053-587X/04$20.00 © 2004 IEEE


Fig. 1. Split adaptive transversal filtering.

considered in adaptive filtering, and a power normalized and time-varying step-size LMS algorithm is suggested for updating the parameters of the proposed scheme. We also extend such an approach to linear-phase adaptive filtering and linear prediction. Finally, simulation results obtained with the multisplit algorithm are presented and compared with the standard LMS, DCT-LMS, and recursive least squares (RLS) algorithms.

II. SPLIT TRANSVERSAL FILTERING

Let us start with the following statement: any finite sequence (e.g., the impulse response of a transversal filter) can be expressed as the sum of a symmetric sequence and an antisymmetric sequence. The symmetric (antisymmetric) sequence is given by the sum (difference) of the original sequence and its backward version, with the resulting sequence divided by two.
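As a quick numerical illustration of this statement (the values below are ours, purely illustrative):

```python
import numpy as np

# Any finite sequence equals the sum of a symmetric and an antisymmetric
# sequence, obtained from the sequence and its backward (reversed) version.
w = np.array([1.0, 4.0, 2.0, -3.0, 5.0])   # arbitrary example sequence
w_rev = w[::-1]                            # backward version
w_sym = (w + w_rev) / 2                    # symmetric part
w_asym = (w - w_rev) / 2                   # antisymmetric part
```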

Specifically, in matrix notation, let

w = [w_0  w_1  ...  w_{N-1}]^T    (1)

denote the N-by-1 tap-weight vector of a transversal filter [see Fig. 1(a)]. With w_s and w_a denoting the vectors of the symmetric and antisymmetric parts of w, then

w = w_s + w_a    (2)

where

w_s = (1/2)(w + J w)    (3)

w_a = (1/2)(w - J w)    (4)

and J is the N-by-N reflection matrix (or exchange matrix), which has unit elements along the cross diagonal and zeros elsewhere. Thus, if w = [w_0  w_1  ...  w_{N-1}]^T, then J w = [w_{N-1}  ...  w_1  w_0]^T. The symmetry and antisymmetry conditions of w_s and w_a are, respectively, described by

J w_s = w_s    (5)

and

J w_a = -w_a    (6)

which can be easily verified from (3) and (4).

Fig. 2. Generalized sidelobe canceller.

Now, consider the classical scheme of a transversal filter, as shown in Fig. 1, in which the tap-weight vector w has been split into its symmetric and antisymmetric parts. The input signal x(n) and the desired response d(n) are modeled as wide-sense stationary discrete-time stochastic processes of zero mean. Without loss of generality, all the parameters have been assumed to be real valued.

III. SPLIT FILTERING AS A LINEARLY CONSTRAINED FILTERING PROBLEM

The principle of linearly constrained transversal optimal filtering is to minimize the power of the estimation error e(n), subject to a set of linear equations defined by

C^T w = f    (7)

where C is the N-by-K constraint matrix, and f is a K-element response vector. An alternative implementation is represented in block diagram form in Fig. 2. This structure is called the generalized sidelobe canceller (GSC) [12], [14]. Essentially, it consists of changing a constrained minimization problem into an unconstrained one. The columns of the N-by-(N - K) matrix B represent a basis for the orthogonal complement of the subspace spanned by the columns of C (C^T B = 0). The matrix B is termed the signal blocking matrix. The (N - K)-element vector w_B represents an unconstrained filter, and the coefficient vector w_q = C(C^T C)^{-1} f represents a filter that satisfies the constraints (C^T w_q = f).
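A minimal numerical sketch of the GSC decomposition may help; the constraint matrix, response vector, and variable names below are illustrative choices, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 3
C = rng.standard_normal((N, K))      # constraint matrix (full column rank a.s.)
f = rng.standard_normal(K)           # response vector

# Quiescent component: satisfies the constraints C^T w_q = f.
w_q = C @ np.linalg.solve(C.T @ C, f)

# Signal blocking matrix B: its columns span the orthogonal complement of
# the column space of C (taken here from the full SVD of C).
U = np.linalg.svd(C, full_matrices=True)[0]
B = U[:, K:]                         # N-by-(N-K), satisfies C^T B = 0

# Any unconstrained choice of w_B leaves the constraints satisfied.
w_B = rng.standard_normal(N - K)
w = w_q - B @ w_B
```

Minimizing the output power over w_B is then an unconstrained problem, which is the essence of the GSC.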

The splitting of w into its symmetric and antisymmetric parts (see Fig. 1) can be interpreted as a linearly constrained optimization problem. Let us define matrices C_s and C_a, as well as vectors f_s and f_a, as (with semicolons separating block rows)

C_s = [I_M; 0_M^T; -J_M]    (8)

C_a = [I_M, 0_M; 0_M^T, 1; J_M, 0_M]    (9)

and f_s = f_a = 0, for odd N (N = 2M + 1), or

C_s = [I_M; -J_M]    (10)

C_a = [I_M; J_M]    (11)

and f_s = f_a = 0, for even N (N = 2M).

Fig. 3. GSC implementation of the split filter.

Then, consider imposing the constraints defined by

C_s^T w_1 = f_s    (12)

and

C_a^T w_2 = f_a    (13)

on the two parallel filters w_1 and w_2. This establishes the symmetry and antisymmetry properties of w_1 and w_2, respectively.

Notice that (12), with f_s = 0, requires that w_1 be orthogonal to the subspace spanned by the columns of C_s. Likewise, w_2 is orthogonal to the subspace spanned by the columns of C_a for f_a = 0.

Using the GSC structure and these constraints on the respective branches of w_1 and w_2 in Fig. 1(b) leads to the block diagram shown in Fig. 3(a) (N even). However, since f_s = 0 and f_a = 0, the quiescent vectors C_s(C_s^T C_s)^{-1} f_s and C_a(C_a^T C_a)^{-1} f_a vanish, eliminating the two corresponding branches.

Moreover, it is easy to verify that C_s^T C_a = 0 and C_a^T C_s = 0. Thus, C_a is a possible choice of signal blocking matrix to span the subspace that is the orthogonal complement of the subspace spanned by the columns of C_s. This property can also be verified by the fact that C_s forces w_1 to be symmetric through (12), whereas C_a would force it to be antisymmetric.
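The orthogonality argument can be checked numerically. The block forms of C_s and C_a below are one consistent reconstruction for even N (an assumption on our part, chosen so that C_s^T w = 0 enforces symmetry and C_a^T w = 0 enforces antisymmetry):

```python
import numpy as np

M = 4
N = 2 * M
I = np.eye(M)
J = np.fliplr(np.eye(M))             # M-by-M exchange (reflection) matrix

Cs = np.vstack([I, -J])              # C_s^T w = 0  <=>  w symmetric
Ca = np.vstack([I,  J])              # C_a^T w = 0  <=>  w antisymmetric

w_sym = np.array([1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0])  # symmetric vector
```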

Considering the above properties, Fig. 3(a) can be simplified to the block diagram shown in Fig. 3(b).

It is interesting to observe that the vectors in the lower branches of Fig. 3(b) are merely composed of the first M coefficients of w_s and w_a. This can be easily verified by noting that the premultiplication of w_s by C_a^T yields twice the vector of its first M coefficients, as does the premultiplication of w_a by C_s^T.

The estimation error is then given by

e(n) = d(n) - (w_s + w_a)^T x(n)    (14)

where

x(n) = [x(n)  x(n-1)  ...  x(n-N+1)]^T    (15)

denotes the N-by-1 tap-input vector. In the mean-squared-error sense, the vectors w_s and w_a are chosen to minimize the following cost function:

J(w_s, w_a) = σ_d² - 2 w_s^T p - 2 w_a^T p + w_s^T R w_s + w_a^T R w_a + 2 w_s^T R w_a    (16)

where σ_d² is the variance of the desired response d(n), R is the N-by-N correlation matrix of x(n), and p is the N-by-1 cross-correlation vector between x(n) and d(n).

Appealing to the symmetric and square Toeplitz properties of the correlation matrix R, it can easily be shown that J R J = R. A matrix with this property is said to be centrosymmetric [15] and, in the case of R, can be partitioned into the form

R = [ R_1    R_2^T  ]
    [ R_2   J R_1 J ]    (17)

for even N (N = 2M), where R_1 and R_2 are M-by-M correlation matrices of x(n), and J R_2 J = R_2^T. When N is odd (N = 2M + 1), it can be partitioned as

R = [ R_1      J r     R_2^T  ]
    [ (J r)^T  σ_x²    r^T    ]
    [ R_2      r      J R_1 J ]    (18)

where r is a M-by-1 correlation vector of x(n), and σ_x² is a scalar denoting


Fig. 4. Multisplit adaptive filtering.

the variance of the input signal x(n). Then, it is easy to show by direct substitution of (10), (11), and (17) (or (8), (9), and (18) for N odd) that the last term in (16) is equal to zero. In other words, the outputs of the symmetric and antisymmetric branches are statistically uncorrelated, and consequently, the symmetric and antisymmetric parts can be optimized separately. Thus, the optimum solutions are given by

w_s,o = (1/2) R^{-1} (p + J p)    (19)

and

w_a,o = (1/2) R^{-1} (p - J p)    (20)

and the scheme of Fig. 3(b) corresponds to the optimum split Wiener filter:

w_o = R^{-1} p = w_s,o + w_a,o    (21)

where

J w_s,o = w_s,o    (22)

and

J w_a,o = -w_a,o    (23)

The proof of (21) is given in the Appendix.

Notice that (22) and (19) define the true optimum linear-phase Wiener filter, having both constant group delay and constant phase delay (symmetric impulse response). On the other hand, (23) and (20) define a second type of optimum generalized linear-phase Wiener filter (affine phase filter), having only the group delay constant (antisymmetric impulse response).
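The decoupling that makes the separate optimizations possible rests on the centrosymmetry of R; a numerical check with toy autocorrelation values:

```python
import numpy as np

N = 6
r = np.array([1.0, 0.6, 0.3, 0.1, 0.05, 0.01])   # toy autocorrelation lags
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
J = np.fliplr(np.eye(N))                         # exchange matrix

w = np.array([0.9, -0.4, 2.0, 0.7, -1.3, 0.2])   # arbitrary tap-weight vector
w_s = (w + J @ w) / 2                            # symmetric part
w_a = (w - J @ w) / 2                            # antisymmetric part
cross = w_s @ R @ w_a                            # cross term of the cost function
```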

IV. MULTISPLIT AND LINEAR TRANSFORMS

For ease of presentation, let N = 2^P, where P is an integer greater than one. Now, if each branch in Fig. 3(b) is considered separately, the two transversal filters can also be split into their symmetric and antisymmetric parts. By proceeding continuously with this process and splitting the resulting filters, we arrive, after successive steps of splitting operations, at the multisplit scheme shown in Fig. 4. C_s and C_a are matrices such as in (10) and (11), respectively, and the resulting single coefficients, for i = 0, 1, ..., N - 1, are the parameters of the resulting zero-order filters.

The above multisplit scheme can be viewed as a linear transformation of x(n), which is denoted by

x_T(n) = T x(n)    (24)

where T is given by the cascade of splitting stages

(25)

and

(26)

It can be verified by direct substitution that, for N = 2^P, T is a matrix of +1's and -1's, in which the inner product of any two distinct columns is zero. In fact, T is a nonsingular matrix, and T^T T = N I. In other words, the columns of T are mutually orthogonal.

The correlation matrix of x_T(n) is given by

R_T = E[x_T(n) x_T^T(n)] = T R T^T    (27)

Since T^T = N T^{-1}, the matrix on the right side of the above equation equals N T R T^{-1}; that is, apart from the scale factor N, it is obtained from R by means of a similarity transformation [16]. Therefore, based on (27), we can stress that χ(R_T) = χ(R), and consequently, the linear transformation does not affect the eigenvalue spread of the input data correlation matrix R.

Now, an interesting point to mention is that the columns of T can be permuted without affecting its properties. This amounts to a rearrangement of the single parameters in Fig. 4 in different sequences. Then, there are N! possible permutations. The remarkable result is that one of them turns T into the N-order Hadamard matrix, so that the multisplit scheme can be represented in the compact form shown in Fig. 5.

The Hadamard matrix of order 2N can be constructed from H_N as follows:

H_2N = [ H_N   H_N ]
       [ H_N  -H_N ]    (28)

Starting with H_1 = [1], this gives H_2, H_4, H_8, and Hadamard matrices of all orders that are powers of two. An alternative way of describing (28) is

H_{2^k} = H_2 ⊗ H_{2^(k-1)},  for k = 1, 2, ...    (29)

where ⊗ denotes the Kronecker product of matrices, and

H_2 = [ 1   1 ]
      [ 1  -1 ]    (30)
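The Sylvester recursion above is easy to reproduce; the sketch below builds H_8 and checks the +1/-1 entries, the column orthogonality, and the invariance of the eigenvalue spread discussed earlier (toy correlation values of our choosing):

```python
import numpy as np

# Sylvester construction: H_{2N} = [[H_N, H_N], [H_N, -H_N]],
# equivalently H_{2^k} = H_2 (Kronecker) H_{2^(k-1)}, starting from H_1 = [1].
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.array([[1.0]])
for _ in range(3):                   # builds H_2, H_4, H_8
    H = np.kron(H2, H)
N = H.shape[0]

# Eigenvalue spread of a (toy) correlation matrix is unchanged, since
# H R H^T = N * (H/sqrt(N)) R (H/sqrt(N))^T with H/sqrt(N) orthogonal.
r = np.array([1.0, 0.5, 0.25, 0.12, 0.06, 0.03, 0.01, 0.005])
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
R_T = H @ R @ H.T
```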


Fig. 5. Hadamard transform of the input x(n).

Fig. 6. Flow graph of the butterfly computation for M x(n).

Another very interesting linear transform is obtained by making

(31)

and

(32)

Using (25), this results in a linear transformation of x(n) with the flow graph depicted in Fig. 6. Similarly, denoting this linear transform by M, the multisplit scheme is also represented by Fig. 5 by substituting M for H, with

(33)

At this stage, it is worth pointing out that the aforementioned linear transforms do not convert the vector x(n) into a corresponding input vector of uncorrelated variables. Therefore, the single parameters in Figs. 4 and 5 cannot be optimized separately by the mean-squared error criterion.

Consider the linear transform T. Premultiplying and postmultiplying R in (17) by T and T^T, we get (34), shown at the bottom of the page. This enables us to verify that the correlation matrix of x_T(n) is block diagonal and that the multisplit operation improves the diagonalization of R. This improvement can be measured by a diagonalization factor defined in [11] by

(35)

With the H and M transforms, the zeros in the transformed correlation matrix change position, but they never occur on the main diagonal. Hence, T, H, and M are not orthogonal transforms that decorrelate the input signal. In other words, (34) is not a unitary similarity transformation. As an exception, when N = 2,

(36)

and

(37)

which means that (34) becomes a unitary similarity transformation.
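The improvement in diagonalization (without full decorrelation) can be visualized with a simple proxy metric; the fraction-of-energy-on-the-diagonal measure below is our own stand-in, not necessarily the factor defined in [11]:

```python
import numpy as np

def diag_fraction(A):
    # Proxy diagonalization measure: share of squared entries on the diagonal.
    return np.sum(np.diag(A) ** 2) / np.sum(A ** 2)

# Hadamard matrix, N = 8 (Sylvester construction).
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.array([[1.0]])
for _ in range(3):
    H = np.kron(H2, H)
N = H.shape[0]

r = np.array([1.0, 0.7, 0.45, 0.3, 0.2, 0.12, 0.07, 0.04])  # toy lags
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
R_T = (H @ R @ H.T) / N          # normalized transformed correlation matrix
```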

As a matter of fact, (34) reveals that the multisplit operation makes uncorrelated the symmetric and antisymmetric parts of the transformed tap-input vector, which are given by

(38)

(34)


Finally, the optimum coefficients in the scheme of Fig. 5 can be obtained by minimizing the mean-squared error, which results in

w_H,o = R_H^{-1} p_H    (39)

where R_H = H R H^T is the correlation matrix of the transformed input H x(n), and p_H = H p. From (39), we also have that

w_o = H^T w_H,o    (40)

In what follows, we describe the application of the constrained-optimization interpretation to the generation of new multisplit adaptive filtering structures and algorithms.

V. MULTISPLIT ADAPTIVE FILTERING

In the adaptive context, we can exploit the aforementioned properties of the multisplit transform in order to propose a power normalized and time-varying step-size LMS algorithm for updating the single parameters independently.

Let us start with N = 2. Since, in this case, the two transformed input signals are uncorrelated, a least-squares version of the Newton method can be applied to update the two single parameters as follows:

(41)

where

(42)

and

(43)

Equation (43) is an estimate of the eigenvalues of the correlation matrix of x_T(n), which defines the step sizes used to adapt the individual weights in (41). To account for adaptive filtering operation in a nonstationary environment, a forgetting factor 0 < λ ≤ 1 is included in this recursive computation of the eigenvalues. The case λ = 1 applies to a wide-sense stationary environment. Notice that, with an appropriate choice of step size, (41) corresponds to the RLS algorithm applied to each of the two parameters independently.

For N > 2, despite the residual correlation among the variables inside the symmetric and antisymmetric parts of the transformed input vector, the same strategy in (41) can be used, since the diagonalization factor of R has been improved by the multisplit transforms. In other words, the multisplit orthogonal transform together with power normalization can be used to improve the convergence rate of the LMS algorithm. In this case, based on (27), convergence of the single parameters is assured by

0 < μ(n) < 2/λ_max

where λ_max is the largest eigenvalue of R_T. Table I presents a summary of the proposed algorithm for multisplit adaptive filtering.
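Since the recursions (41)–(43) and Table I are not fully legible here, the following is only a sketch of a power-normalized, time-varying step-size LMS in the Hadamard domain, consistent with the description in the text (all parameter values and variable names are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hadamard matrix of order N = 8 (Sylvester construction).
H = np.array([[1.0]])
for _ in range(3):
    H = np.kron(np.array([[1.0, 1.0], [1.0, -1.0]]), H)
N = H.shape[0]

w_true = rng.standard_normal(N)      # unknown system to identify (toy setup)

mu, lam = 0.05, 0.99                 # step size and forgetting factor (assumed)
w = np.zeros(N)                      # adaptive weights in the transform domain
p = np.full(N, float(N))             # per-component power (eigenvalue) estimates

x_buf = np.zeros(N)                  # tap-delay line
errs = []
for n in range(3000):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal() # white input sample
    d = w_true @ x_buf               # desired response
    u = H @ x_buf                    # transformed tap-input vector
    e = d - w @ u                    # estimation error
    p = lam * p + (1 - lam) * u**2   # recursive power estimate per component
    w = w + mu * e * u / p           # power-normalized LMS update
    errs.append(e**2)

early = float(np.mean(errs[:200]))
late = float(np.mean(errs[-200:]))
```

The per-component normalization by p plays the role of the time-varying step size: strongly excited transform-domain components take proportionally smaller steps.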

It is important to stress that the use of H in Table I is conditioned on N = 2^P. Otherwise, the linear transform matrix is not composed only of +1's and -1's, requiring a number of multiplication operations proportional to N². Notwithstanding, the number of filter coefficients can usually be set to the next power of two to take advantage of the implementation simplicity and reduce the computational burden. The butterfly computation in Fig. 6 requires only addition operations per iteration. In other words, no multiplication operation is demanded.

Finally, the procedure can be extended to complex parameters by applying the split processing to the real and imaginary parts separately.

VI. LINEAR PHASE ADAPTIVE FILTERING

In applications that require an adaptive filter with a linear-phase response, the symmetry constraint on the impulse response has been used. In that case, the utilization of the DFT-LMS, DCT-LMS, and RLS techniques, to name a few, would not solve the problem directly, requiring a symmetry constraint to be imposed. With the multisplit technique, such applications fit perfectly, due to the fact that the symmetric or antisymmetric condition on the impulse response of the filter is already guaranteed. In fact, we need to consider just one branch in Fig. 3(b).

For the symmetric impulse-response constraint, the input samples for updating the single parameters are given by the sums x(n - i) + x(n - N + 1 + i), and for the antisymmetric impulse-response constraint, by the differences x(n - i) - x(n - N + 1 + i). However, there is another interesting point concerning linear-phase adaptive filtering.

Consider again the structure of the split filter in Fig. 3(b). The least-squares (LS) criterion can be used in either the symmetric or the antisymmetric part to obtain the linear-phase filters. For example, the optimum solution for the filter with symmetric impulse response is given by

(44)

where

(45)

and

(46)

In the adaptive context, the RLS algorithm can then be directly applied to update the coefficients. On the other hand, it is worth


TABLE I. MULTISPLIT LMS (MS-LMS) ALGORITHM

pointing out that fast LS algorithms, which exploit the time-shift relationship of the input data vector, cannot be applied, due to the fact that the transformed input vector does not satisfy this property [3].

A final observation is required here. The direct application of the LS criterion to the scheme of Fig. 3(b) in order to obtain the optimum Wiener filter by computing the symmetric and antisymmetric parts independently corresponds to an approximate LS solution (quasi-optimal). Since the LS-estimated correlation matrix is not centrosymmetric, the last term in (16) becomes nonzero, and consequently, the two parts cannot be computed independently.

VII. LINEAR PREDICTION

All the split and multisplit transversal filtering theory developed above can be applied to linear prediction by making the desired response an advanced or delayed version of the input signal. In the case of linear-phase prediction, the appropriate structure of the prediction-error filter is presented in Fig. 7. In this figure, both forward and backward prediction have been considered. For forward prediction, the sign of x(n) in the sum block is positive, whereas the sign of the sample at the other end of the delay line depends on the symmetry or antisymmetry condition. The opposite combination corresponds to backward prediction.

VIII. SIMULATION RESULTS

To evaluate the performance of the MS-LMS algorithm in adaptive filtering, the same equalization system as in [2, Ch. 5] is considered (see Fig. 8). The channel input is a binary sequence

Fig. 7. Linear phase prediction-error filter.

Fig. 8. Adaptive equalizer for simulation.

, and the impulse response of the channel is described by the raised cosine

h(n) = (1/2)[1 + cos(2π(n - 2)/W)],  n = 1, 2, 3
h(n) = 0,  otherwise    (47)


Fig. 9. Learning curves: Adaptive equalization. (a) χ(R) = 6.0782. (b) χ(R) = 46.8216.

where W controls the eigenvalue spread χ(R) of the correlation matrix of the tap inputs in the equalizer, with χ(R) = 6.0782 for W = 2.9 and χ(R) = 46.8216 for W = 3.5. The sequence v(n) is an additive white noise, with variance σ_v², that corrupts the channel output, and the equalizer has 11 coefficients.
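For reproducibility, the channel and the resulting eigenvalue spreads can be sketched as follows (the raised-cosine form and the noise variance of 0.001 are assumptions taken from the classical example in [2, Ch. 5]):

```python
import numpy as np

def eigen_spread(W, num_taps=11, noise_var=0.001):
    # Raised-cosine channel h(n), n = 1, 2, 3 (assumed from [2, Ch. 5]).
    n = np.array([1, 2, 3])
    h = 0.5 * (1 + np.cos(2 * np.pi * (n - 2) / W))
    # Autocorrelation of the equalizer input: channel taps plus white noise.
    r = np.zeros(num_taps)
    for k in range(3):
        r[k] = np.sum(h[: h.size - k] * h[k:])
    r[0] += noise_var
    R = np.array([[r[abs(i - j)] for j in range(num_taps)]
                  for i in range(num_taps)])
    lam = np.linalg.eigvalsh(R)
    return lam.max() / lam.min()
```

Under these assumptions, eigen_spread(2.9) and eigen_spread(3.5) should land near the spreads quoted for Fig. 9 (about 6.08 and 46.8).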

Fig. 9 shows a comparison of the ensemble-averaged error performances of the DCT-LMS, MS-LMS, standard LMS, and RLS algorithms for χ(R) = 6.0782 and χ(R) = 46.8216 (100 independent trials). The good performance of the MS-LMS algorithm can be observed in terms of convergence rate when compared with the standard LMS algorithm. On the other hand, we can verify that the MS-LMS algorithm is somewhat sensitive to variations in the eigenvalue spread, so that the DCT-LMS exhibits a better performance in the second case. This shows clearly that the multisplit preprocessing does not orthogonalize the input data vector, but it improves the diagonalization of the input signal correlation matrix, which is taken into account in the power normalization used by the MS-LMS algorithm. Nevertheless, as far as computational burden is concerned, it is remarkably simple compared with the DCT transform. Such an aspect, together with the convergence improvement, is decisive in justifying its application.

Finally, the good performance of the linear-phase RLS (LP-RLS) algorithm is also illustrated in this equalization example, where the considered channel has the linear-phase property. It is clear that the LP-RLS technique surpasses all the other approaches in terms of convergence rate, even the standard RLS, because the symmetry of the response is built into the adaptive filtering structure.

IX. CONCLUSION

In the present work, it has been shown that the split transversal filtering problem can be formulated and solved by using linearly constrained optimization and can be implemented by means of a parallel GSC structure. Then, the optimum split Wiener filter can be introduced together with its symmetric and antisymmetric linear-phase parts. Furthermore, a continued split procedure, which leads to a multisplit filter structure, has been considered. It has also been shown that the multisplit transform is not an input whitening transformation. Instead, it increases the diagonalization factor of the input signal correlation matrix without affecting its eigenvalue spread. A power normalized, time-varying step-size LMS algorithm, which exploits the nature of the transformed input correlation matrix, has been proposed for updating the adaptive filter coefficients. The novel approach is generic, since it can be used for any value of the filter order, for complex and real parameters, and can be extended to linear-phase adaptive filtering and linear prediction. For a power-of-two filter order, such a proposition corresponds to a Hadamard transform-domain filter, whose structure exhibits high modularity, parallelism, or concurrency, which is well suited to very large-scale integration (VLSI) implementation. Finally, simulation results confirm that the split processing structures provide a powerful and interesting tool for adaptive filtering.

APPENDIX

PROOF OF (21)

The purpose of this Appendix is to show that the optimum Wiener filter can be split into symmetric and antisymmetric impulse-response filters connected in parallel according to Fig. 3(b) and (21). Using (19)–(23), we need to prove that

w_s,o + w_a,o = R^{-1} p    (48)

or, equivalently,

(49)

The left side of (49) can be rewritten as

(50)


where T = [C_s  C_a], and it has been taken into account that J R J = R. The square matrix T denotes the similarity transformation that splits the Wiener filter once into symmetric and antisymmetric parts. In fact, the subspaces spanned by the columns of C_s and C_a are complementary, and T is an orthogonal transform with T^T T = 2I. From (50), we have

(51)

which proves (21).
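The identity can also be verified numerically (toy values; symbols follow the notation used in Section III):

```python
import numpy as np

# Check: the symmetric/antisymmetric parts of the Wiener solution
# w_o = R^{-1} p coincide with (1/2) R^{-1} (p + J p) and (1/2) R^{-1} (p - J p),
# because J R J = R implies that J commutes with R^{-1}.
N = 6
r = np.array([1.0, 0.55, 0.3, 0.15, 0.07, 0.02])  # toy autocorrelation lags
R = np.array([[r[abs(i - j)] for j in range(N)] for i in range(N)])
J = np.fliplr(np.eye(N))
p = np.array([0.8, -0.2, 0.5, 0.1, -0.4, 0.3])    # toy cross-correlation

w_o = np.linalg.solve(R, p)
w_s = 0.5 * np.linalg.solve(R, p + J @ p)
w_a = 0.5 * np.linalg.solve(R, p - J @ p)
```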

ACKNOWLEDGMENT

The authors wish to thank Prof. J. C. M. Bermudez and R. D. Souza, from the Federal University of Santa Catarina, and the anonymous reviewers for their suggestions, which have improved the presentation of the material in this paper.

REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.

[2] S. Haykin, Adaptive Filter Theory, 4th ed. Englewood Cliffs, NJ: Prentice-Hall, 2002.

[3] M. G. Bellanger, Adaptive Digital Filters and Signal Analysis, 2nd ed. New York: Marcel Dekker, 2001.

[4] F. Beaufays, "Transform-domain adaptive filters: An analytical approach," IEEE Trans. Signal Processing, vol. 43, pp. 422–431, Feb. 1995.

[5] J. S. Goldstein, I. S. Reed, and L. L. Scharf, "A multistage representation of the Wiener filter based on orthogonal projections," IEEE Trans. Inform. Theory, vol. 44, pp. 2943–2959, Nov. 1998.

[6] P. Strobach, "Low-rank adaptive filters," IEEE Trans. Signal Processing, vol. 44, pp. 2932–2947, Dec. 1996.

[7] P. Delsarte and Y. V. Genin, "The split Levinson algorithm," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 470–478, June 1986.

[8] ——, "On the splitting of classical algorithms in linear prediction theory," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, pp. 645–653, May 1987.

[9] K. C. Ho and P. C. Ching, "Performance analysis of a split-path LMS adaptive filter for AR modeling," IEEE Trans. Signal Processing, vol. 40, pp. 1375–1382, June 1992.

[10] P. C. Ching and K. F. Wan, "A unified approach to split structure adaptive filtering," in Proc. IEEE ISCAS, Detroit, MI, May 1995.

[11] K. F. Wan and P. C. Ching, "Multilevel split-path adaptive filtering and its unification with discrete Walsh transform adaptation," IEEE Trans. Circuits Syst. II, vol. 44, pp. 147–151, Feb. 1997.

[12] L. J. Griffiths and C. W. Jim, "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. Antennas Propagat., vol. AP-30, pp. 27–34, Jan. 1982.

[13] B. D. Van Veen and K. M. Buckley, "Beamforming: A versatile approach to spatial filtering," IEEE Acoust., Speech, Signal Processing Mag., vol. 5, pp. 4–24, Apr. 1988.

[14] S. Haykin and A. Steinhardt, Eds., Adaptive Radar Detection and Estimation. New York: Wiley, 1992.

[15] S. L. Marple, Jr., Digital Spectral Analysis with Applications. Englewood Cliffs, NJ: Prentice-Hall, 1988.

[16] B. Noble and J. W. Daniel, Applied Linear Algebra, 3rd ed. Englewood Cliffs, NJ: Prentice-Hall, 1988.

Leonardo S. Resende (M'96) received the electrical engineering degree from the Pontifical Catholic University of Minas Gerais (PUC-MG), Belo Horizonte, Brazil, in 1988, and the M.S. and Ph.D. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1991 and 1996, respectively.

From October 1992 to September 1993, he worked toward the doctoral degree at the Laboratoire d'Electronique et Communication, Conservatoire National des Arts et Métiers (CNAM), Paris, France.

Since 1996, he has been with the Electrical Engineering Department, Federal University of Santa Catarina (UFSC), Florianópolis, Brazil, where he is an Associate Professor. His research interests are in constrained and unconstrained digital signal processing and adaptive filtering.

João Marcos T. Romano (M'90) was born in Rio de Janeiro, Brazil, in 1960. He received the B.S. and M.S. degrees in electrical engineering from the State University of Campinas (UNICAMP), Campinas, Brazil, in 1981 and 1984, respectively. In 1987, he received the Ph.D. degree from the University of Paris-XI, Paris, France.

In 1988, he joined the Communications Department of the Faculty of Electrical and Computer Engineering, UNICAMP, where he is now Professor. He served as an Invited Professor with the University René Descartes, Paris, during the winter of 1999 and in the Communications and Electronic Laboratory at CNAM/Paris during the winter of 2002. He is responsible for the Signal Processing for Communications Laboratory, and his research interests concern adaptive and intelligent signal processing and its applications in telecommunications problems such as channel equalization and smart antennas. Since 1988, he has been a recipient of the Research Fellowship of CNPq-Brazil.

Prof. Romano is a member of the IEEE Electronics and Signal Processing Technical Committee. Since April 2000, he has been the President of the Brazilian Communications Society (SBrT), a sister society of ComSoc-IEEE, and since April 2003, he has been the Vice-Director of the Faculty of Electrical and Computer Engineering at UNICAMP.

Maurice G. Bellanger (F'84) graduated from the Ecole Nationale Supérieure des Télécommunications (ENST), Paris, France, in 1965 and received the doctorate degree from the University of Paris in 1981.

He joined T.R.T. (Philips Communications in France), Paris, in 1967 and, since then, has worked on digital signal processing and its applications in telecommunications. From 1974 to 1983, he was head of the telephone transmission department of the company, which developed speech, audio, video, and data terminals, as well as multiplexing and line equipment for digital communication networks. Then, he was deputy scientific director of the company and, from 1988 to 1991, its scientific director. In 1991, he joined the Conservatoire National des Arts et Métiers (CNAM), Paris, a public education and research institute, where he is a professor of electronics and head of the Electronics and Communications research team.

Dr. Bellanger has published 100 papers, has been granted 16 patents, and is the author of two textbooks: Theory and Practice of Digital Signal Processing (New York: Wiley, 3rd ed., 2000) and Adaptive Filtering and Signal Analysis (New York: Marcel Dekker, 1st ed., 1987; 2nd ed., 2001). He was elected a Fellow of the IEEE for contributions to the theory of digital filtering and its applications to communication systems. He is a former Associate Editor of the IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING and was the Technical Program Chairman of ICASSP, Paris, in 1982. He was the President of EURASIP, the European Association for Signal Processing, from 1987 to 1992. He is a member of the French Academy of Technology.