Adaptive Equalization using RLS (ENEE634)


  • 8/12/2019 AdaptiveEqualizationRLS_ENEE634

    1/8


    Introduction:

In this project, we extend the method of least squares to a recursive algorithm for adapting a transversal filter. Given the LS solution at any time instant n-1, we find the solution at time n recursively, using the past solution and the newly arrived data. This algorithm is known as the Recursive Least Squares (RLS) algorithm. We show that the convergence rate of the RLS algorithm is faster than that of the LMS algorithm by comparing the learning curves of the two algorithms for a specified channel response.

    Problem Formulation:

Suppose we want to transmit a digital message, which can be a sequence of bits corresponding to voltage levels in a modulation technique, through a noisy communication channel with impulse response h(n). We can simplify the channel model by assuming that the channel noise is AWGN (Additive White Gaussian Noise) in nature and is independent of the transmitted signal. The received signal u(n) at the demodulator can then be given by the following equation,

u(n) = \sum_{k=0}^{L-1} h(k) d(n-k) + v(n)

where d(n) is the transmitted digital message, v(n) is the noise, and L is the length of the FIR approximation of the channel distortion filter. Now, our aim is to determine an estimate \hat{d}(n) such that the error between d(n) and \hat{d}(n) is minimized. We use an L-tap transversal filter to determine \hat{d}(n) from u(n). This system is shown in figure 1. The basic structure of an L-tap adaptive filter is shown in figure 2; it involves updating the tap vector based on current and past data. Several algorithms have been proposed in the literature to solve this error-minimization problem. In this project, we use the Recursive Least Squares (RLS) algorithm.
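As a concrete sketch of this channel model, the following Python/NumPy snippet generates a received signal u(n) by convolving a random ±1 message with a short FIR channel and adding independent AWGN. The channel taps, message length, and 20 dB SNR used here are illustrative assumptions, not prescriptions from the report:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-tap FIR channel h(n); any short impulse response works here.
h = np.array([0.3, 0.9, 0.3])
L = 500                               # number of transmitted symbols

# Binary message d(n) in {-1, +1}.
d = rng.choice([-1.0, 1.0], size=L)

# Channel output before noise: convolution of d(n) with h(n).
x = np.convolve(d, h)[:L]

# AWGN v(n) scaled for a target SNR of 20 dB, independent of d(n).
snr_db = 20.0
noise_var = np.mean(x**2) / 10**(snr_db / 10)
v = rng.normal(0.0, np.sqrt(noise_var), size=L)

u = x + v                             # received signal u(n) at the demodulator
```

The equalizer's task is then to recover d(n), up to a delay, from u(n) alone.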

Figure 1: A simple communication transmission model (transmitted message, channel response C(z), additive white noise v(n), received signal u(n), adaptive equalizer filter H(z)).


    The basic structure of an adaptive equalizer is given below,

    Figure 2: A simple model for Adaptive Equalizer.

Solving any least-squares problem with a recursive algorithm involves initialization of the algorithm. Then we use the information contained in new data samples to update the old estimates. In this way, the length of the observable data keeps changing, so we define a weighted cost function to minimize,

\mathcal{E}(n) = \sum_{i=1}^{n} \beta(n, i) \, |e(i)|^2

where \mathcal{E}(n) is the cost function, \beta(n, i) is the weighting term, and e(i) is the error at any instant i between the desired response d(i) and the output y(i) produced by the transversal filter whose input is the tap-input vector u(i). The relation between these quantities is given by the following equation,

e(i) = d(i) - y(i) = d(i) - \hat{w}^H(n) u(i)

where u(i) = [u(i), u(i-1), \ldots, u(i-L+1)]^T is the tap-input vector at time i and \hat{w}(n) is the tap-weight vector at time n. The weighting factor should be less than but close to unity; this factor corresponds to the memory of the system. Generally, we use an exponential weighting factor given by,

\beta(n, i) = \lambda^{n-i}, \qquad 0 < \lambda \le 1

\lambda = 1 indicates infinite memory, or the ability of the system to remember all the past estimates. The solution of the least-squares problem at any time instant n can be given by solving the following normal equation for \hat{w}(n),

\Phi(n) \, \hat{w}(n) = z(n)
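As a quick numerical illustration (real-valued data, so Hermitian transposes become ordinary transposes; the dimensions and the synthetic linear model below are illustrative assumptions), the normal equation \Phi(n)\hat{w}(n) = z(n), with \Phi(n) = \sum_i \lambda^{n-i} u(i)u^T(i) + \delta\lambda^n I and z(n) = \sum_i \lambda^{n-i} u(i)d(i), can be solved directly, and its solution minimizes the exponentially weighted cost:

```python
import numpy as np

rng = np.random.default_rng(4)
M, n, lam, delta = 3, 80, 0.95, 1e-4

U = rng.normal(size=(n, M))                  # row i is the tap-input vector u(i)
w_true = np.array([1.0, -0.5, 0.25])         # synthetic "true" filter
d = U @ w_true + 0.01 * rng.normal(size=n)   # desired response with small noise

wts = lam ** (n - 1 - np.arange(n))          # weights lam^(n-i), newest sample = 1
Phi = (U * wts[:, None]).T @ U + delta * lam**n * np.eye(M)
z = (U * wts[:, None]).T @ d
w_hat = np.linalg.solve(Phi, z)              # solve Phi(n) w = z(n)

def cost(w):
    """Exponentially weighted squared-error cost E(n)."""
    e = d - U @ w
    return np.sum(wts * e**2)
```

Any perturbation of w_hat should increase cost(w), since Phi is positive definite; this is the batch solution that the recursion below reproduces sample by sample.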


where \Phi(n) and z(n) are defined as follows,

\Phi(n) = \sum_{i=1}^{n} \lambda^{n-i} u(i) u^H(i) + \delta \lambda^n I

z(n) = \sum_{i=1}^{n} \lambda^{n-i} u(i) d^*(i)

Here \delta is a regularizing term. To implement recursion in this problem, we can use the following relation between \Phi(n) and \Phi(n-1),

\Phi(n) = \lambda \Phi(n-1) + u(n) u^H(n)

The similar recursive equation relating z(n) and z(n-1) is given by,

z(n) = \lambda z(n-1) + u(n) d^*(n)
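These recursions, \Phi(n) = \lambda\Phi(n-1) + u(n)u^T(n) and z(n) = \lambda z(n-1) + u(n)d(n), can be checked numerically against the batch definitions. The sketch below (real-valued data, arbitrary illustrative dimensions) starts the recursion from \Phi(0) = \delta I so that the regularizing term is carried along automatically:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, lam, delta = 4, 50, 0.98, 0.005

U = rng.normal(size=(N + 1, M))   # U[i] plays the role of u(i), i = 1..N
d = rng.normal(size=N + 1)        # desired response d(i)

def phi_batch(n):
    """Phi(n) = sum_i lam^(n-i) u(i) u(i)^T + delta lam^n I."""
    P = delta * lam**n * np.eye(M)
    for i in range(1, n + 1):
        P += lam**(n - i) * np.outer(U[i], U[i])
    return P

def z_batch(n):
    """z(n) = sum_i lam^(n-i) u(i) d(i)."""
    return sum(lam**(n - i) * U[i] * d[i] for i in range(1, n + 1))

# Recursive updates starting from Phi(0) = delta*I, z(0) = 0
Phi, z = delta * np.eye(M), np.zeros(M)
for n in range(1, N + 1):
    Phi = lam * Phi + np.outer(U[n], U[n])
    z = lam * z + U[n] * d[n]
```

After N steps, the recursively built Phi and z coincide with the batch sums exactly.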

To compute \Phi^{-1}(n) from this recursion, we use the matrix inversion lemma, which states that for two positive-definite M-by-M matrices A and B related by

A = B^{-1} + C D^{-1} C^H

where D is a positive-definite N-by-N matrix and C is an M-by-N matrix, the inverse of matrix A is given by the following equation,

A^{-1} = B - B C \left( D + C^H B C \right)^{-1} C^H B
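A quick numerical check of the lemma, using real-valued matrices of arbitrary illustrative sizes (so the Hermitian transpose reduces to an ordinary transpose):

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 5, 3

# Random positive-definite B (M x M) and D (N x N), arbitrary C (M x N).
Bh = rng.normal(size=(M, M)); B = Bh @ Bh.T + M * np.eye(M)
Dh = rng.normal(size=(N, N)); D = Dh @ Dh.T + N * np.eye(N)
C = rng.normal(size=(M, N))

# A = B^{-1} + C D^{-1} C^T
A = np.linalg.inv(B) + C @ np.linalg.inv(D) @ C.T

# Lemma: A^{-1} = B - B C (D + C^T B C)^{-1} C^T B
A_inv = B - B @ C @ np.linalg.inv(D + C.T @ B @ C) @ C.T @ B
```

The payoff in RLS is that the N-by-N inverse on the right-hand side is only a scalar division when the update is rank one (N = 1).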

Using this lemma, with P(n) = \Phi^{-1}(n), and going through all the recursive calculations, we get the RLS algorithm, which is summarized as follows.

Initialize the algorithm by setting

\hat{w}(0) = 0, \qquad P(0) = \delta^{-1} I

where \delta is a small positive constant for high SNR and a large positive constant for low SNR.

For each time instant n = 1, 2, \ldots, compute

\pi(n) = P(n-1) u(n)

k(n) = \frac{\pi(n)}{\lambda + u^H(n) \pi(n)}

\xi(n) = d(n) - \hat{w}^H(n-1) u(n)

\hat{w}(n) = \hat{w}(n-1) + k(n) \xi^*(n)

P(n) = \lambda^{-1} P(n-1) - \lambda^{-1} k(n) u^H(n) P(n-1)


We can note from the above summary of the algorithm that computation of the gain vector k(n) proceeds in two stages:

First, an intermediate quantity, denoted \pi(n), is computed.

Then, \pi(n) is used to compute k(n).
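The whole recursion (initialization, the two-stage gain computation, the a-priori error, and the weight and P updates) can be collected into a short function. This is a sketch for real-valued data, so conjugates and Hermitian transposes reduce to plain transposes; the function name and interface are our own, not from the report:

```python
import numpy as np

def rls(u, d, M, lam=1.0, delta=0.005):
    """RLS adaptation of an M-tap transversal filter (real-valued data).

    Returns the final tap-weight vector and the a-priori errors xi(n).
    """
    w = np.zeros(M)               # w_hat(0) = 0
    P = np.eye(M) / delta         # P(0) = delta^{-1} I
    xi = np.zeros(len(u))
    for n in range(len(u)):
        # Tap-input vector u(n) = [u(n), u(n-1), ..., u(n-M+1)]^T
        un = np.zeros(M)
        k0 = min(M, n + 1)
        un[:k0] = u[n::-1][:k0]
        pi = P @ un                       # stage 1: pi(n) = P(n-1) u(n)
        k = pi / (lam + un @ pi)          # stage 2: gain vector k(n)
        xi[n] = d[n] - w @ un             # a-priori error xi(n)
        w = w + k * xi[n]                 # tap-weight update
        P = (P - np.outer(k, un @ P)) / lam
    return w, xi
```

For noise-free data generated by a known short FIR filter, this recursion should drive the a-priori error essentially to zero once more than M samples have been seen.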

Experimental results:

In this section, we present the experimental results obtained by simulating the RLS algorithm in MATLAB.

A random signal with amplitude 1 is transmitted through three realizations of the channel. The impulse responses of the three realizations, denoted h1, h2, and h3 respectively, follow the raised-cosine channel model

h_i(n) = \frac{1}{2}\left[1 + \cos\!\left(\frac{2\pi (n-1)}{W_i}\right)\right], \qquad n = 0, 1, 2

with a different parameter W_i for each realization, so that the three channels have different eigenvalue spreads.

White noise is added to the output of the channel filter such that the resulting SNR is 20 dB. The value of \delta is taken as 0.005. A 21-tap FIR filter is used for channel equalization, and the initial values of this filter are set to zero for all taps.

    Choice of Optimum Delay:

The cascade of the channel impulse response and the tapped-equalizer impulse response tends to introduce an input-output delay. For channel 1, the impulse response is symmetric around n = 1, so this channel introduces a group delay of 1 unit. Since the input is real data, we expect the tapped equalizer to be symmetric around n = 10 to give linear phase. Hence, the total delay introduced at the output is 11 units. This delay has been used in our simulations for the learning curves.
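Putting the pieces together, one run of the experiment can be sketched in Python/NumPy as follows. The raised-cosine channel shape, its parameter W, and the choice \lambda = 1 are assumptions (the report does not print them here); the 21 taps, 11-sample delay, 20 dB SNR, and \delta = 0.005 are taken from the text:

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed 3-tap raised-cosine channel, symmetric around n = 1 (W illustrative).
W = 3.1
h = 0.5 * (1 + np.cos(2 * np.pi * (np.arange(3) - 1) / W))

N, M, delay, delta = 600, 21, 11, 0.005
d = rng.choice([-1.0, 1.0], size=N)            # +/-1 message, amplitude 1
x = np.convolve(d, h)[:N]
noise_var = np.mean(x**2) / 10**(20 / 10)      # 20 dB SNR at channel output
u = x + rng.normal(0.0, np.sqrt(noise_var), N)

w = np.zeros(M)                                # 21-tap equalizer, zero-initialized
P = np.eye(M) / delta                          # P(0) = delta^{-1} I
err = np.zeros(N)
for n in range(delay, N):                      # desired response is d(n - delay)
    un = np.zeros(M)
    k0 = min(M, n + 1)
    un[:k0] = u[n::-1][:k0]
    pi = P @ un
    k = pi / (1.0 + un @ pi)                   # lambda = 1 (assumed)
    err[n] = d[n - delay] - w @ un
    w = w + k * err[n]
    P = P - np.outer(k, un @ P)
```

Averaging err**2 over many independent runs of this loop produces the ensemble learning curves discussed next; the squared error starts near 1 (zero-initialized taps) and settles to its noise-limited floor.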

    Learning Curves:

Learning curves for all three channel responses are shown in figure 3. From these curves, we observe that the number of iterations required for convergence is approximately 30-40. This is a very small number of iterations compared with the LMS algorithm, for which the number of iterations was of the order of thousands. The result can be attributed to the fact that the RLS algorithm finds the LS solution at every iteration; hence, the convergence is faster. A comparison of the learning curves for the LMS and RLS algorithms is shown in figure 5 for the three impulse responses.


Figure 4: Comparison of learning curves for the three impulse responses using the RLS algorithm.

Figure 5: Comparison of the RLS algorithm with the LMS algorithm for the three impulse responses.
