
Page 1: Adaptive Algorithms Seminar - Final

A Seminar Report on

ADAPTIVE FILTER ALGORITHMS

Prepared by : Patel Rutul J.

Roll No : 33

Semester : 8th Semester

Class : B.E. 4th (Electronics & Communication Engineering)

Year : 2010-2011

Guided by : Prof. Naresh Patel

Department of Electronics and Communication Engineering

2010-11

Sarvajanik College of Engineering & Technology

Dr. R.K. Desai Road, Athwalines,

Surat-395001

Sarvajanik Education Society

Sarvajanik College of Engineering & Technology, Surat

Department of Electronics and Communication Engineering

CERTIFICATE

This is to certify that the seminar report entitled “Adaptive Filter Algorithms” is prepared and presented by Patel Rutul J. (Roll No. 33) of B.E. IV, Sem VII, Electronics & Communication Engineering Department, during the year 2010-11. His work is satisfactory.

Signature of Guide

Head of the Department,
Electronics & Communication Engineering

Signature of Jury Members


Acknowledgement

I take this opportunity to express my sincere thanks and deep sense of gratitude to my guide, Prof. Naresh Patel, for imparting valuable guidance to me during the preparation of this seminar. He helped me by resolving many of my doubts and suggesting many references.

I would also like to offer my gratitude to the faculty members of the Electronics & Communication Department, who helped me with valuable suggestions and encouragement, which not only helped me in preparing this presentation but also gave me better insight into the field. Lastly, I express my deep sense of gratitude toward my colleagues who directly or indirectly helped me while preparing this seminar.


ABSTRACT

Nowadays there are many cases where the noise is unknown and variable. Such noise cannot be suppressed by fixed filters such as notch filters. To suppress this kind of noise, an adaptive filter is used. ‘Adaptive’ means having the tendency to adapt to different situations. The use of an adaptive filter offers a highly attractive solution to the problem, as it provides a significant improvement in performance over the use of a fixed filter designed by conventional methods.

This seminar covers various techniques and algorithms for noise cancellation, such as LMS, NLMS and RLS.


INDEX

1. INTRODUCTION.............................................................................................................................7

2. ADAPTIVE FILTER...........................................................................................................................8

2.1. BLOCK DIAGRAM OF ADAPTIVE FILTER:.................................................................................8

2.2. Wiener filters:........................................................................................................................9

2.3. Mean square error:................................................................................................................9

2.4. Adaptive Filter Algorithm:....................................................................................................10

2.4.1. Least Mean square (LMS):............................................................................................10

2.4.2. Normalized Least Mean Square (NLMS):......................................................................11

2.4.3. Variable Step size Least Mean Square (VSLMS):..........................................................11

2.4.4. Variable Step size Normalized Least Mean Square (VSNLMS):.....................................12

2.4.5. Recursive Least Square (RLS).......................................................................................13

3. Application...................................................................................................................................15

3.1. Interference Cancellation....................................................................................................15

3.2. Acoustic Echo Cancellation (AEC).........................................................................................15

3.3. Modelling.............................................................................................................................16

Conclusion...........................................................................................................................................18

Bibliography/References.....................................................................................................................19


List Of Figures

Figure 2.1 Block diagram of adaptive filter [1] ................................................................ 8

Figure 2.2 The block diagram of RLS filter [1] ............................................................... 13

Figure 3.1 Interference Cancellation [2] ........................................................................... 15

Figure 3.2 Acoustic echo cancellations [3] ....................................................................... 16

Figure 3.3 Modelling [2] ................................................................................................... 17

Figure 3.4 System Identification [4] ................................................................................. 17


1. INTRODUCTION

It may be worth trying to understand the meaning of the terms ‘Adaptive’ and ‘filter’ in a very general sense. The adjective ‘Adaptive’ can be understood by considering a system which is trying to adjust itself so as to respond to some phenomenon taking place in its surroundings. In other words, the system tries to adjust its parameters with the aim of meeting some well-defined goal or target which depends upon the state of the system as well as its surroundings. This is what ‘Adaptation’ means.

Moreover, there is a need to have a set of steps or certain procedure by which this process of ‘Adaptation’ is carried out. And finally, the ‘system’ that carries out and undergoes the process of ‘Adaptation’ is called by the more technical, yet general enough, name ‘filter’.

The subject of adaptive filters constitutes an important part of statistical signal processing. Whenever there is a requirement to process signals that result from operation in an environment of unknown statistics, or one that is inherently non-stationary, the use of an adaptive filter offers a highly attractive solution to the problem, as it provides a significant improvement in performance over the use of a fixed filter designed by conventional methods. Furthermore, the use of adaptive filters provides new signal processing capabilities that would not be possible otherwise. We thus find that adaptive filters have been successfully applied in such diverse fields as communications, control, radar, sonar and biomedical engineering, among others [1].

The term estimator or filter is commonly used to refer to a system that is designed to extract information about a prescribed quantity of interest from noisy data.

Clearly, depending upon the time required to meet the final target of the adaptation process, which we call convergence time, and the complexity/resources that are available to carry out the adaptation, we can have a variety of adaptation algorithms and filter structures.

From this point of view, we may go through the adaptive algorithms like LMS, NLMS, VSLMS, VSNLMS, RLS.


2. ADAPTIVE FILTER

An adaptive filter is a filter that self-adjusts its transfer function according to an optimizing algorithm. Because of the complexity of the optimizing algorithms, most adaptive filters are digital filters that perform digital signal processing and adapt their performance based on the input signal. By way of contrast, a non-adaptive filter has static filter coefficients (which collectively form the transfer function).

For some applications, adaptive coefficients are required since some parameters of the desired processing operation (for instance, the properties of some noise signal) are not known in advance. In these situations it is common to employ an adaptive filter, which uses feedback to refine the values of the filter coefficients and hence its frequency response.

Generally speaking, the adapting process involves the use of a cost function, which is a criterion for optimum performance of the filter (for example, minimizing the noise component of the input), to feed an algorithm, which determines how to modify the filter coefficients to minimize the cost on the next iteration.

As the power of digital signal processors has increased, adaptive filters have become much more common and are now routinely used in devices such as mobile phones and other communication devices, camcorders and digital cameras, and medical monitoring equipment.

2.1. BLOCK DIAGRAM OF ADAPTIVE FILTER:

The block diagram, shown in the following figure, serves as a foundation for particular adaptive filter realisations, such as Least Mean Squares (LMS) and Recursive Least Squares (RLS). The idea behind the block diagram is that a variable filter extracts an estimate of the desired signal.

Figure 2.1 diagram of adaptive filter[1]

To start the discussion of the block diagram we make the following assumptions: the input signal is the sum of a desired signal d(n) and interfering noise v(n):


x(n) = d(n) + v(n) (2.1)

The variable filter has a Finite Impulse Response (FIR) structure. For such structures the impulse response is equal to the filter coefficients. The coefficients for a filter of order p are defined as

wn = [wn(0), wn(1), ..., wn(p)]T (2.2)

The error signal or cost function is the difference between the desired and the estimated signal

e(n) = d(n) − d̂(n) (2.3)

The variable filter estimates the desired signal by convolving the input signal with the impulse response. In vector notation this is expressed as

d̂(n) = wnT x(n) (2.4)

where x(n) = [x(n), x(n-1), ..., x(n-p)]T is the input signal vector. Moreover, the variable filter updates the filter coefficients at every time instant:

wn+1 = wn + Δwn

where Δwn is a correction factor for the filter coefficients. The adaptive algorithm generates this correction factor based on the input and error signals. LMS and RLS define two different coefficient update algorithms [1].
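The structure above can be sketched in a few lines of code. This is an illustrative sketch only (the tap weights and signal samples below are made up): it shows how the estimate d̂(n) and the error e(n) are formed, while the correction Δwn is supplied by the algorithms of Section 2.4.

```python
import numpy as np

def filter_output(w, x_vec):
    """d^(n) = w^T x(n): the variable filter's estimate of the desired signal."""
    return float(np.dot(w, x_vec))

def error_signal(d, d_hat):
    """e(n) = d(n) - d^(n), eq. (2.3): the signal fed to the adaptive algorithm."""
    return d - d_hat

# One time instant for an order-2 (3-tap) filter; all values are made up.
w = np.array([0.5, 0.25, 0.1])        # w_n = [w_n(0), w_n(1), w_n(2)]^T
x_vec = np.array([1.0, 2.0, 3.0])     # x(n) = [x(n), x(n-1), x(n-2)]^T
d = 1.5                               # desired-signal sample d(n)
d_hat = filter_output(w, x_vec)       # 0.5 + 0.5 + 0.3 = 1.3
e = error_signal(d, d_hat)            # close to 0.2
```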

2.2.Wiener filters:

Wiener filters are a special class of transversal FIR filters which build upon the mean square error (MSE) cost function to arrive at an optimal filter tap-weight vector which brings the MSE to a minimum. They will be used in the derivation of adaptive filtering algorithms.

Consider the output of the transversal FIR filter as given below, for a filter tap weight vector, w(n), and input vector, x(n).

y(n) = Σ_{i=0}^{N-1} wi(n) x(n-i) = wT(n) x(n) (2.5)

The mean square error cost function ξ(n) can be expressed in terms of the cross-correlation vector between the desired and input signals, p(n) = E[x(n) d(n)], and the autocorrelation matrix of the input signal, R(n) = E[x(n) xT(n)]. When applied to FIR filtering, this cost function is an N-dimensional quadratic function. The minimum value of ξ(n) can be found by calculating its gradient vector with respect to the filter tap weights and equating it to zero, which yields the Wiener solution w_o = R−1p. The Least Mean Square algorithm of adaptive filtering attempts to find this optimal Wiener solution using estimates based on instantaneous values.
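As a small numerical sketch of the Wiener solution under these definitions, the snippet below estimates R and p by sample averages and solves R w = p. The 3-tap plant w_true, the sample count, and the use of sample averages in place of expectations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: a known 3-tap "plant" generates d(n); R and p are estimated
# by sample averages, then the Wiener solution solves R w = p.
w_true = np.array([0.4, -0.2, 0.1])
N, order = 5000, 3

x = rng.standard_normal(N)
# Rows of X are the input vectors x(n) = [x(n), x(n-1), x(n-2)]
X = np.array([x[n - order + 1:n + 1][::-1] for n in range(order - 1, N)])
d = X @ w_true                          # noiseless desired signal

R = X.T @ X / len(X)                    # sample estimate of R = E[x x^T]
p_vec = X.T @ d / len(X)                # sample estimate of p = E[x d]
w_opt = np.linalg.solve(R, p_vec)       # Wiener solution w_o = R^-1 p
```

With a noiseless desired signal the recovered w_opt matches the plant coefficients essentially exactly.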

2.3.Mean square error:

In this type of adaptive filter a stochastic gradient approach is used. It uses a transversal filter as the structural basis for implementing the linear adaptive filter. For the case of stationary inputs, the cost function, also called the index of performance, is defined as the mean square error: the mean square value of the difference between the desired


response and the transversal filter output. This cost function is precisely a second-order function of the tap weights in the transversal filter.

2.4. Adaptive Filter Algorithms:

1. Least Mean Square (LMS)
2. Normalized Least Mean Square (NLMS)
3. Variable Step size Least Mean Square (VSLMS)
4. Variable Step size Normalized Least Mean Square (VSNLMS)
5. Recursive Least Square (RLS)

2.4.1. Least Mean square (LMS):

The method of least squares is a standard approach to the approximate solution of overdetermined systems, i.e. sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in solving every single equation.

Least mean squares (LMS) algorithms are a class of adaptive filter used to mimic a desired filter by finding the filter coefficients that produce the least mean square of the error signal (the difference between the desired and the actual signal).

This algorithm adjusts the coefficients w(n) of a filter in order to reduce the mean square error between the desired signal and the output of the filter. It belongs to the class of adaptive filters known as stochastic gradient-based algorithms: in order to converge on the optimal Wiener solution, the algorithm uses the gradient vector of the filter tap weights. The algorithm is also widely used due to its computational simplicity.

The equation below is LMS algorithm for updating the tap weights of the adaptive filter for each iteration.

w(n+1) = w(n) + μ e(n) x*(n) (2.6)

where x(n) is the input vector of time-delayed input values and w(n) is the weight vector at time n.

The ‘step-size’ parameter μ introduced in the above equation controls how far we move along the error-function surface at each update step. μ certainly has to be chosen μ > 0 (otherwise we would move the coefficient vector in a direction towards larger squared error). Also, μ should not be too large, since the LMS algorithm uses a local approximation of p and R in the computation of the gradient of the cost function, and thus the cost function at each time instant may differ from an accurate global cost function. Furthermore, too large a step size causes the LMS algorithm to become unstable, i.e., the coefficients do not converge to fixed values but oscillate. Closer analysis reveals that the upper bound on μ for stable behavior of the LMS algorithm depends on the largest eigenvalue λmax of the tap-input autocorrelation matrix R, and thus on the input signal. For stable adaptation behavior the step size has to satisfy 0 < μ < 2/λmax [1].
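A minimal LMS sketch following eq. (2.6) is shown below (real-valued signals, so x*(n) = x(n)). The 2-tap system w_true, the step size, and the signal length are illustrative assumptions, not values from the report.

```python
import numpy as np

def lms(x, d, p_order, mu):
    """Adapt a p_order-tap FIR filter with w(n+1) = w(n) + mu e(n) x(n)."""
    w = np.zeros(p_order)
    e_hist = np.zeros(len(x))
    for n in range(p_order - 1, len(x)):
        x_vec = x[n - p_order + 1:n + 1][::-1]   # [x(n), ..., x(n-p_order+1)]
        e = d[n] - w @ x_vec                     # error e(n)
        w = w + mu * e * x_vec                   # eq. (2.6), real-valued case
        e_hist[n] = e
    return w, e_hist

rng = np.random.default_rng(1)
w_true = np.array([0.3, -0.1])                   # hypothetical system to identify
x = rng.standard_normal(4000)
d = np.convolve(x, w_true)[:len(x)]              # desired signal (noiseless)
w, e_hist = lms(x, d, p_order=2, mu=0.05)
```

With unit-variance input, mu = 0.05 is well inside the stability bound discussed above, and the weights converge to w_true.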


2.4.2. Normalized Least Mean Square (NLMS):

The main drawback of the "pure" LMS algorithm is that it is sensitive to the scaling of its input x(n). This makes it very hard (if not impossible) to choose a learning rate μ that guarantees stability of the algorithm. The Normalised Least Mean Squares (NLMS) filter is a variant of the LMS algorithm that solves this problem by normalising with the power of the input: the step size is varied at each iteration according to the inverse of the instantaneous energy of the input vector x(n) [5].

The recursion for the NLMS algorithm is stated as:

w(n+1) = w(n) + {1/(xT(n) x(n))} e(n) x(n) (2.8)

The NLMS algorithm can be summarised as follows.

Parameters: p = filter order, μ = step size
Initialization: w(0) = 0
Computation: for n = 0, 1, 2, ...

x(n) = [x(n), x(n-1), ..., x(n-p+1)]T (2.9)

The error signal is given by

e(n) = d(n) − wT(n) x(n) (2.10)

μ(n) = 1/(xT(n) x(n)) (2.11)

and the tap weights are updated according to (2.8).
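The recursion above can be sketched as follows. The global step size mu and the small eps guarding against division by zero are conventional additions, not part of eq. (2.8); the test system and signal scale are illustrative.

```python
import numpy as np

def nlms(x, d, p_order, mu=1.0, eps=1e-8):
    """NLMS sketch per eq. (2.8); mu and eps are conventional extras."""
    w = np.zeros(p_order)
    for n in range(p_order - 1, len(x)):
        x_vec = x[n - p_order + 1:n + 1][::-1]   # [x(n), ..., x(n-p+1)]
        e = d[n] - w @ x_vec                     # eq. (2.10)
        # Step normalised by the instantaneous input energy, eq. (2.11)
        w = w + (mu / (eps + x_vec @ x_vec)) * e * x_vec
    return w

rng = np.random.default_rng(2)
w_true = np.array([0.5, 0.2, -0.3])
x = 10.0 * rng.standard_normal(3000)      # deliberately large input scale
d = np.convolve(x, w_true)[:len(x)]
w = nlms(x, d, p_order=3)
```

Note that the input is scaled by 10: plain LMS with a fixed μ tuned for unit-variance input would diverge here, while NLMS remains stable because the step is normalised per iteration.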

2.4.3. Variable Step size Least Mean Square (VSLMS):

Both the LMS and the NLMS algorithms have a fixed step size value for every tap weight in each iteration. In the Variable Step Size Least Mean Square (VSLMS) algorithm the step size for each iteration is expressed as a vector, μ(n). Each element of the vector μ(n) is a different step size value corresponding to an element of the filter tap weight vector, w(n).

The VSLMS algorithm is executed by following these steps for each iteration. Here ρ is a small positive constant optionally used to control the effect of the gradient terms on the update procedure; in the later implementation it is set to ρ = 1.

The output of the adaptive filter is calculated as [4]

y(n) = Σ_{i=0}^{N-1} wi(n) x(n-i) = wT(n) x(n) (2.12)

The error signal is calculated as the difference between the desired output and the filter output.

e(n)=d(n)-y(n) (2.13)


The gradient, step size and filter tap weight vectors are updated using the following equations in preparation for the next iteration.

For i = 0, 1, ..., N-1:

gi(n) = e(n) x(n-i), i.e. g(n) = e(n) x(n) (2.14)

μi(n) = μi(n-1) + ρ gi(n) gi(n-1)

wi(n+1) = wi(n) + 2 μi(n) gi(n)
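A sketch of the VSLMS loop, under stated assumptions: the per-tap step-size recursion and the 2 μi(n) gi(n) weight update follow the same forms given in the VSNLMS section; the bounds mu_min/mu_max are assumed constants; and ρ is taken smaller than 1 here (an assumption, differing from the ρ = 1 choice above) to keep the per-tap step sizes from swinging between the bounds on this toy problem.

```python
import numpy as np

def vslms(x, d, p_order, rho=1e-2, mu0=0.05, mu_min=1e-4, mu_max=0.05):
    """VSLMS sketch: one step size per tap, adapted by gradient products."""
    w = np.zeros(p_order)
    mu = np.full(p_order, mu0)                   # per-tap step-size vector mu(n)
    g_prev = np.zeros(p_order)
    for n in range(p_order - 1, len(x)):
        x_vec = x[n - p_order + 1:n + 1][::-1]
        e = d[n] - w @ x_vec                     # eq. (2.13)
        g = e * x_vec                            # g_i(n) = e(n) x(n-i), eq. (2.14)
        mu = np.clip(mu + rho * g * g_prev, mu_min, mu_max)
        w = w + 2.0 * mu * g                     # w_i(n+1) = w_i(n) + 2 mu_i(n) g_i(n)
        g_prev = g
    return w

rng = np.random.default_rng(3)
w_true = np.array([0.4, -0.25])                  # hypothetical unknown 2-tap system
x = rng.standard_normal(4000)
d = np.convolve(x, w_true)[:len(x)]
w = vslms(x, d, p_order=2)
```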

2.4.4. Variable Step size Normalized Least Mean Square (VSNLMS):

The VSLMS algorithm still has the same drawback as the standard LMS algorithm: to guarantee stability of the algorithm, statistical knowledge of the input signal is required prior to the algorithm's commencement. Recall that the major benefit of the NLMS algorithm is that it is designed to avoid this requirement by calculating an appropriate step size based upon the instantaneous energy of the input signal vector.

It is a natural progression to incorporate this step size calculation into the variable step size algorithm, in order to increase the stability of the filter without prior knowledge of the input signal statistics. In the VSNLMS algorithm the upper bound available to each element of the step size vector, μ(n), is calculated for each iteration. As with the NLMS algorithm, the step size value is inversely proportional to the instantaneous input signal energy.

The output of the adaptive filter is calculated as [4]

y(n) = Σ_{i=0}^{N-1} wi(n) x(n-i) = wT(n) x(n) (2.15)

The error signal is calculated as the difference between the desired output and the filter output.

e(n)=d(n)-y(n) (2.16)

The gradient, step size and filter tap weight vectors are updated using the following equations in preparation for the next iteration.

For i = 0, 1, ..., N-1:

gi(n) = e(n) x(n-i), i.e. g(n) = e(n) x(n) (2.17)

μi(n) = μi(n-1) + ρ gi(n) gi(n-1) (2.18)

μmax(n) = 1/(2 xT(n) x(n)) (2.19)

If μi(n) > μmax(n), set μi(n) = μmax(n); if μi(n) < μmin, set μi(n) = μmin.


wi(n+1) = wi(n) + 2 μi(n) gi(n)

Here ρ is an optional constant, the same as in the VSLMS algorithm. With ρ = 1, each iteration of the VSNLMS algorithm requires 5N+1 multiplication operations [4], [5].
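The steps (2.17)-(2.19) can be sketched as below. The values of rho, the initial step size, the lower bound mu_min, and the small guard added to the denominator of (2.19) are illustrative assumptions.

```python
import numpy as np

def vsnlms(x, d, p_order, rho=1e-3, mu0=0.1, mu_min=1e-4):
    """VSNLMS sketch following eqs. (2.17)-(2.19): per-tap step sizes with an
    upper bound mu_max(n) = 1 / (2 x^T(n) x(n)) recomputed each iteration."""
    w = np.zeros(p_order)
    mu = np.full(p_order, mu0)
    g_prev = np.zeros(p_order)
    for n in range(p_order - 1, len(x)):
        x_vec = x[n - p_order + 1:n + 1][::-1]
        e = d[n] - w @ x_vec
        g = e * x_vec                                    # eq. (2.17)
        mu = mu + rho * g * g_prev                       # eq. (2.18)
        mu_max = 1.0 / (2.0 * (x_vec @ x_vec) + 1e-12)   # eq. (2.19), guard added
        mu = np.clip(mu, mu_min, mu_max)                 # enforce the bounds
        w = w + 2.0 * mu * g                             # w_i(n+1) = w_i(n) + 2 mu_i(n) g_i(n)
        g_prev = g
    return w

rng = np.random.default_rng(4)
w_true = np.array([0.6, 0.1, -0.2])
x = rng.standard_normal(5000)
d = np.convolve(x, w_true)[:len(x)]
w = vsnlms(x, d, p_order=3)
```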

2.4.5. Recursive Least Square (RLS)

The RLS (recursive least squares) algorithm is another algorithm for determining the coefficients of an adaptive filter. In contrast to the LMS algorithm, the RLS algorithm uses information from all past input samples (and not only from the current tap-input samples) to estimate the (inverse of the) autocorrelation matrix of the input vector. To decrease the influence of input samples from the far past, a weighting factor for the influence of each sample is used.

The RLS adaptive filter is an algorithm which recursively finds the filter coefficients that minimize a weighted linear least squares cost function relating to the input signals. This is in contrast to other algorithms, such as least mean squares (LMS), that aim to reduce the mean square error. In the derivation of the RLS, the input signals are considered deterministic, while for the LMS and similar algorithms they are considered stochastic. Compared to most of its competitors, the RLS exhibits extremely fast convergence.

This adaptive algorithm is chosen on the basis of the following factors:

• Computational complexity
• Speed of convergence
• Minimum error at convergence
• Numerical stability
• Robustness

Figure 2.2 The block diagram of RLS filter [1]

The RLS algorithm for a p-th order RLS filter can be summarized as follows.

Parameters: p = filter order, λ = forgetting factor, δ = value used to initialize P(0)
Initialization: w(0) = 0, P(0) = δ-1I, where I is the (p+1)-by-(p+1) identity matrix


Choosing λ: the smaller λ is, the smaller the contribution of previous samples. This makes the filter more sensitive to recent samples, which means more fluctuations in the filter coefficients. The λ = 1 case is referred to as the growing-window RLS algorithm [1].

Computation: for n = 0, 1, 2, ...

x(n) = [x(n), x(n-1), ..., x(n-p)]T

a(n) = d(n) − wT(n-1) x(n) (2.20)

g(n) = P(n-1) x(n) {λ + xT(n) P(n-1) x(n)}-1 (2.21)

P(n) = λ-1P(n-1) − g(n) xT(n) λ-1P(n-1) (2.22)

w(n) = w(n-1) + a(n) g(n) (2.23)
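The recursion (2.20)-(2.23) can be sketched as below. The values of lam and delta, and the use of p_order taps rather than p+1, are illustrative assumptions; delta is chosen small so that P(0) = δ-1I is large, a conventional choice.

```python
import numpy as np

def rls(x, d, p_order, lam=0.99, delta=0.01):
    """RLS sketch following eqs. (2.20)-(2.23)."""
    w = np.zeros(p_order)
    P = (1.0 / delta) * np.eye(p_order)            # P(0) = delta^-1 I
    for n in range(p_order - 1, len(x)):
        x_vec = x[n - p_order + 1:n + 1][::-1]     # [x(n), ..., x(n-p_order+1)]
        a = d[n] - w @ x_vec                       # a priori error, eq. (2.20)
        g = P @ x_vec / (lam + x_vec @ P @ x_vec)  # gain vector, eq. (2.21)
        P = (P - np.outer(g, x_vec) @ P) / lam     # eq. (2.22)
        w = w + a * g                              # eq. (2.23)
    return w

rng = np.random.default_rng(5)
w_true = np.array([0.2, -0.4, 0.1])                # hypothetical unknown system
x = rng.standard_normal(2000)
d = np.convolve(x, w_true)[:len(x)]
w = rls(x, d, p_order=3)
```

On this noiseless toy problem the RLS weights lock onto w_true within a few tens of samples, illustrating the fast convergence noted above.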


3. Application

3.1. Interference Cancellation

Figure 3.1 Interference Cancellation [2]

Interference cancellation refers to situations where it is required to cancel an interfering signal or noise from a given signal which is a mixture of the desired signal and the interference. The principle of interference cancellation is to obtain an estimate of the interfering signal and subtract it from the corrupted signal. The feasibility of this idea relies on the availability of a reference source from which the interfering signal originates.

Figure 3.1 shows the concept of interference cancellation in its simplest form. There are two inputs to the canceller: ‘primary’ and ‘reference’. The primary input is the corrupted signal, i.e. the desired signal plus interference. The reference input, on the other hand, originates from the interference source only. The adaptive filter is adjusted to produce a replica of the interference signal present in the primary input; subtracting this replica yields an output that is cleared of the interference, hence the name interference cancellation.
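A hypothetical end-to-end demo of this arrangement, using the LMS update from Section 2.4.1 as the adaptive algorithm: the sinusoidal desired signal, the 2-tap interference path, and all parameter values are made-up assumptions. The canceller's error output is the cleaned signal.

```python
import numpy as np

rng = np.random.default_rng(6)
N = 8000
t = np.arange(N)
s = np.sin(2 * np.pi * 0.01 * t)                 # desired signal (assumed sinusoid)
ref = rng.standard_normal(N)                     # reference input: interference source
path = np.array([0.7, 0.3])                      # assumed path from source to primary
primary = s + np.convolve(ref, path)[:N]         # primary input: signal + interference

mu, p_order = 0.01, 2
w = np.zeros(p_order)
out = np.zeros(N)
for n in range(p_order - 1, N):
    r_vec = ref[n - p_order + 1:n + 1][::-1]     # [ref(n), ref(n-1)]
    y = w @ r_vec                                # replica of the interference
    e = primary[n] - y                           # canceller output = cleaned signal
    w = w + mu * e * r_vec                       # LMS update driven by the reference
    out[n] = e
```

Because the desired signal is uncorrelated with the reference, the filter converges to the interference path and the residual in out approaches s, even though s itself appears in the "error" driving the adaptation.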

3.2. Acoustic Echo Cancellation (AEC)

To handle the acoustic echo problem in teleconference systems, one can use voice switches and directional microphones, but these methods place physical restrictions on the speaker. The more common and more effective method is implementing Acoustic Echo Cancellation (AEC) to remove the echo. AEC greatly enhances the quality of the audio signal in hands-free communication systems. With its assistance, conferences run more smoothly and naturally and keep the participants more comfortable.

Several echo cancellation algorithms are used for this purpose. All of them process the signals following the basic steps below:

1. Estimate the characteristics of the echo path of the room.
2. Create a replica of the echo signal.
3. Subtract the echo from the microphone signal (which includes the near-end and echo signals) to obtain the desired signal.


Figure 3.2 Acoustic echo cancellations [3]

An adaptive filter is well suited to producing this replica because the echo path is usually unknown and time-varying. Figure 3.2 illustrates the three steps of AEC using an adaptive filter.

In Figure 3.2, AEC using an adaptive filter follows the three basic steps above:

1. Estimate the characteristics of the echo path h(n) of the room: ĥ(n)
2. Create a replica of the echo signal: ŷ(n)
3. Subtract the echo from the microphone signal (near-end plus echo) to obtain the desired signal: clear signal = d(n) − ŷ(n)

In modern digital communication systems such as the Public Switched Telephone Network (PSTN), Voice over IP (VoIP), Voice over Packet (VoP) and cellular networks, AEC is very important and necessary because it improves the quality of service delivered by communication service providers.

3.3. Modelling

Figure 3.3 depicts the problem of modelling in the context of adaptive filters. The aim is to estimate the parameters of a model, W(z), of a plant, G(z). On the basis of some a priori knowledge of the plant G(z), a transfer function W(z) with a certain number of adjustable parameters is selected first. The parameters of W(z) are then chosen through an adaptive filtering algorithm such that the difference between the plant output, d(n), and the adaptive filter output, y(n), is minimized.


Figure 3.3 Modelling [2]

An application of modelling which may readily be thought of is system identification (Figure 3.4). In the figure, the unknown system is placed in parallel with the adaptive filter. This layout represents just one of many possible structures. The shaded area contains the adaptive filter system.

Figure 3.4 System Identification [4]

Clearly, when e(k) is very small, the adaptive filter response is close to the response of the unknown system. In this case the same input feeds both the adaptive filter and the unknown system. If, for example, the unknown system is a modem, the input often represents white noise and is part of the sound you hear from your modem when you log in to your Internet service provider.
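A system-identification sketch of Figure 3.4: white noise drives both the unknown system and the adaptive filter, and once e(n) is small the adapted weights approximate the unknown impulse response. The 4-tap "unknown" system and the use of an NLMS-style update are illustrative choices, not specified by the figure.

```python
import numpy as np

rng = np.random.default_rng(7)
h_unknown = np.array([0.9, -0.5, 0.2, 0.05])     # hypothetical unknown system
N, p_order = 5000, 4

x = rng.standard_normal(N)                       # white-noise input feeds both paths
d = np.convolve(x, h_unknown)[:N]                # unknown-system output

w = np.zeros(p_order)
for n in range(p_order - 1, N):
    x_vec = x[n - p_order + 1:n + 1][::-1]
    e = d[n] - w @ x_vec                         # e(n): small once the model fits
    w = w + e * x_vec / (1e-8 + x_vec @ x_vec)   # NLMS-style update (illustrative)
```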


Conclusion

An unknown and variable noisy signal cannot be suppressed by an ordinary fixed filter. For that purpose, different adaptive algorithms are used to suppress the noise and recover the desired output. Different adaptive algorithms are used in many applications such as unknown system identification, acoustic echo cancellation, modelling and interference cancellation.

The adaptation portion is kept in parallel with the system or the desired signal to be determined. Only the algorithm in the adaptation portion, such as LMS, NLMS or RLS, is changed according to the requirement.


Bibliography/References

[1]. Simon Haykin and Thomas Kailath, Communications Research Laboratory, McMaster University, Hamilton, Ontario, Canada, “Adaptive Filter Theory”, Fourth Edition, Pearson Education.

[2]. B. Farhang-Boroujeny, National University of Singapore, “Adaptive Filters: Theory and Applications”.

[3]. PDF on AEC

[4]. Wikipedia source
