Adaptive Noise Cancellation using LabVIEW

T. J. Moir
Institute of Information and Mathematical Sciences, Massey University, Albany Campus, Auckland, New Zealand
[email protected]

Summary

It is shown how the least-mean-squares (LMS) algorithm can be implemented using the graphical language LabVIEW. The LMS algorithm underpins the vast majority of current adaptive signal-processing algorithms, including noise-cancellation, beamforming, deconvolution, equalization and so on. It will be shown that LabVIEW is a convenient and powerful method of implementing such algorithms.



1. Introduction

LabVIEW (a trademark of National Instruments) has been in existence since the mid-1980s [1] and is normally associated with virtual instrumentation. Based on the graphical language 'g', it takes a block-diagram approach to programming. LabVIEW is a full programming language which is compiled when the 'run' button is pressed. It is perhaps a little harder to learn than approaches such as MATLAB, but as will be shown, the effort is well worth the end result. There is a wealth of information available on how to use LabVIEW to control a whole range of instrumentation, but far less on the basics of signal processing. Perhaps the main exception is the excellent LabVIEW Signal Processing reference [2]. The basics of sampling, signal generation, filters, matrix manipulation and FFTs are covered therein, with numerous activities and real-world applications.

The author was unable to source any information on using LabVIEW for adaptive signal processing with the least-mean-squares (LMS) algorithm. As the LMS algorithm is the most fundamental of all algorithms used in this area, it was decided to write a 'g' program which would perform this function. Applications which use LMS include, to name a few: system identification, adaptive beamforming (for radar applications), adaptive filtering for hearing aids or speech signals in general, time-delay estimation, active noise cancellation, adaptive equalization for communication channels and some areas of adaptive control.

This article is not meant as an in-depth study of adaptive signal processing, but rather illustrates how LabVIEW can be used as a scientific tool when implementing such algorithms.

2. The LMS algorithm for system identification

Consider the block diagram shown in Figure 1 below. Although the LMS algorithm can be applied to all of the applications mentioned in the previous section, perhaps the easiest setting in which to understand and test any such algorithm is system identification. The basic objective is to find the transfer function of an unknown system. Usually the system is driven by a white-noise source. The output of the unknown system is labeled the primary input to the algorithm, whilst the white-noise source itself is labeled the reference. The coefficients of another filter within the LMS algorithm are then adjusted according to an error (shown on the diagram). When this error (in fact its average squared value) is at a minimum, the LMS algorithm is considered to have converged and the coefficients of the filter within the LMS algorithm then match those of the unknown system. For the simple example shown it is probably best thought of as 'path balancing', i.e. if the unknown system were placed in the reference path instead, the LMS algorithm would have to converge to the inverse of the unknown system.


Figure 1. LMS algorithm used for system identification.

The LMS inputs are described mathematically at time sample $k$ as $s_k$ (primary input) and $u_k$ (reference input). The unknown system is labeled $H_1(z^{-1})$, which for convenience is a finite impulse response (FIR) filter of order $n$ with $u_k$ as its input and $s_k$ as its output. The unknown system need not be FIR, but it is easier to check the results if it is. Suppose $H_1(z^{-1})$ has the z-transfer function

$$H_1(z^{-1}) = b_0 + b_1 z^{-1} + b_2 z^{-2} + \ldots + b_n z^{-n}$$

where the coefficients $b$ of the filter are unknown and are to be estimated.

The LMS algorithm is given by [3]:

Update the error

$$e_k = s_k - X_k^T W_{k-1} \qquad (1)$$

Update the weight vector estimate $W_k$

$$W_k = W_{k-1} + 2\mu X_k e_k \qquad (2)$$

The weight vector is a column vector

$$W_k = \begin{bmatrix} w_0 \\ w_1 \\ \vdots \\ w_n \end{bmatrix}$$

and $X_k$ is the column vector of regressors (past values of the input) given by

$$X_k = \begin{bmatrix} u_k \\ u_{k-1} \\ \vdots \\ u_{k-n} \end{bmatrix}$$

Here $n$ is the system order and there are $n + 1$ weights. The transpose notation (T) in equation (1) therefore makes $X$ a row vector. For the purposes of programming, both $W$ and $X$ are arrays of length $n + 1$.

This simple recursion lends itself well to real-time applications and has relatively good tracking ability. The system input $u_k$ (reference signal) for this application is assumed to have no dc component and a variance (average power) $\sigma_u^2$. If a white-noise signal is not available then a signal rich in harmonics can be used (for example a square wave). The step size $\mu$ must be chosen carefully: too large and the algorithm will go unstable, too small and convergence will be slow. It is well known [3] that the condition for convergence is

$$\mu < \frac{1}{(n+1)\,\sigma_u^2}$$

i.e. the step size must be less than the reciprocal of the number of weights times the reference noise power. In practice, often a tenth (or sometimes a third) of the theoretical maximum is used. For speech signals the variance will vary with time, and clearly under such conditions $\mu$ must vary too, or be fixed at the worst-case (smallest) possible value. If the LMS algorithm converges then each weight will in this case correspond to a coefficient of the unknown system; for example $w_0 = b_0$, $w_1 = b_1$ and so on.
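For readers more used to text-based languages, equations (1) and (2) can be written as a few lines of Python. This is purely an illustration; the function name lms_step and the use of NumPy are choices made here and are not part of the article's LabVIEW implementation.

import numpy as np

def lms_step(W, X, s_k, mu):
    # W : previous weight estimate W_{k-1}, length n + 1
    # X : regressor vector [u_k, u_{k-1}, ..., u_{k-n}], length n + 1
    e_k = s_k - X @ W                  # equation (1): error
    W_new = W + 2.0 * mu * X * e_k     # equation (2): weight update
    return W_new, e_k

# Convergence requires mu < 1 / ((n + 1) * var_u); in practice a tenth
# of this bound is commonly used, e.g. mu = 0.1 / ((n + 1) * var_u).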


3. The LabVIEW LMS virtual instrument

This section covers the graphical code for implementing the basic LMS algorithm. It is best to start with a simple example, and for this an FIR filter of third order (i.e. four weights) was used, driven by white Gaussian noise. Figure 2 illustrates how this is achieved in the main programming diagram. The LMS algorithm is implemented for convenience as a sub .vi (virtual instrument file). It is contained within a While loop which runs endlessly. The FIR system parameters were chosen arbitrarily, as there are no stability issues. The simplest way to simulate a third-order FIR system is to use a formula node. The registers on the main While loop are required to store past values of the input (i.e. the regressors). The input and output of the LMS algorithm are both scalar quantities. As the library function which generates Gaussian white noise fundamentally produces an array, it is necessary to make this array of length one and follow it with an index array block pointing to the zeroth element, i.e. the first point. Otherwise the noise generator output will remain of type array and the connection to the LMS sub .vi will not be possible. In Figure 2, sk is the system output and uk the system input, with uk1, uk2 and uk3 all past samples of uk.
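The same loop can be sketched in a text-based language. The short Python program below is an illustration only (the FIR coefficients, iteration count and random-number seed are hypothetical choices, not values taken from the original diagram); it mirrors the structure of Figure 2, where a white Gaussian noise sample is generated, passed through a third-order FIR 'unknown system', and fed to the LMS update each time around the loop.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical third-order FIR "unknown system" (four coefficients),
# standing in for the formula node in Figure 2.
b = np.array([0.5, -0.3, 0.2, 0.1])
n_weights = len(b)                     # n + 1 = 4 weights

mu = 0.01                              # step size, as set on the front panel
W = np.zeros(n_weights)                # weight estimates
X = np.zeros(n_weights)                # regressors uk, uk1, uk2, uk3

for k in range(5000):                  # stands in for the endless While loop
    u_k = rng.standard_normal()        # one point of white Gaussian noise
    X = np.roll(X, 1)                  # shuffle past samples down
    X[0] = u_k
    s_k = b @ X                        # unknown-system output (primary input)
    e_k = s_k - X @ W                  # equation (1)
    W = W + 2.0 * mu * X * e_k         # equation (2)

print(W)                               # should approach b for this simple example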


Figure 2. The basic LMS virtual instrument.

Looking inside the LMS sub itself reveals the diagram shown in Figure 3 below.

Figure 3. The LMS algorithm sub .vi


All inputs are shown on the left and outputs on the right. One of the tasks performed is to shuffle the past samples held in the vector X. For example, with four weights it is necessary to perform at each iteration of the main While loop

X[3] = X[2]
X[2] = X[1]
X[1] = X[0]

and finally X[0] = the new reference-input sample $u_k$.

This is relatively easy to perform in sequential code, but in graphical code it is a little more complicated; one solution is shown in Figure 4 below.

Figure 4. Shuffling the regressor vector X.

The diagram in Figure 4 makes use of the library functions replace array subset and index array. The index array function selects the desired element of the X array and replace array subset puts the shuffled value into the array. External to the For loop is another replace array subset which places the new reference-input point into the array at index value zero (X[0] = $u_k$).
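In a text-based language the same shuffle takes a few lines. The Python sketch below is for illustration only and simply mirrors the element-by-element replacement performed by the For loop in Figure 4.

def shuffle_regressors(X, u_new):
    # Shift past reference samples down one place and insert the new one,
    # mirroring the index array / replace array subset loop of Figure 4.
    for i in range(len(X) - 1, 0, -1):   # X[3]=X[2], X[2]=X[1], X[1]=X[0]
        X[i] = X[i - 1]
    X[0] = u_new                         # newest reference sample
    return X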

Equation (1) involves the term $X_k^T W_{k-1}$, where X and W are arrays of equal dimension. This is just the sum of the products of corresponding X and W array elements. In sequential pseudo-code this would traditionally be implemented as something like

sum = 0
for i = 0 to n
    sum = sum + X[i] * W[i]
end for

In graphical code this is implemented using the diagram shown in Figure 5.


Figure 5. Computing $X_k^T W_{k-1}$.

The arrays W and X are both shown, whilst the broken line at the far right is the product $X_k^T W_{k-1}$. All that is required here is the library function index array to select the individual points in each array; they are then multiplied and the running sum is stored in the register. Note that the register is initialized to zero each time the loop is executed.

The rest of the LMS algorithm is fairly straightforward, as there are only two equations in total. Equation (1) only requires the primary input minus the previously calculated $X_k^T W_{k-1}$ to give the scalar error $e_k$. Equation (2) involves straight multiplication of scalars and one vector to give $2\mu X_k e_k$. The step length $\mu$ is made an external input to the LMS sub .vi; in the main panel it is set as a control input. Of course, more sophisticated automatic calculation of the step length can be performed, but this can easily be added later.
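One simple form of such an automatic calculation, sketched below under the assumption that the convergence bound of Section 2 is applied to a running estimate of the reference power, is a normalised step size. This is not part of the original virtual instrument; the function and parameter names are hypothetical.

def adapt_step_size(var_u_hat, u_k, n_weights, alpha=0.1, beta=0.99):
    # Track the reference power with a leaky average and set the step size
    # to a fraction alpha of the theoretical maximum 1 / ((n + 1) * var_u).
    var_u_hat = beta * var_u_hat + (1.0 - beta) * u_k ** 2
    mu = alpha / (n_weights * var_u_hat + 1e-12)
    return mu, var_u_hat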


The final stage is to implement the vector recursion $W_k = W_{k-1} + 2\mu X_k e_k$, where $2\mu X_k e_k$ has already been calculated. This is done by using a register to store the current values of the vector $W_k$ and recovering them at the next iteration of the main While loop, whereby they become $W_{k-1}$. Equation (2) then only needs a summation to complete the recursion. The register is external to the LMS sub .vi, i.e. in the main While loop as shown in Figure 2. The regressor vector X must also be stored in a register so as not to lose the values it contains.
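A text-based analogue of these shift registers is simply persistent state carried between iterations. The small Python class below is an illustration only (the class and method names are not part of the original design); it keeps W and X internally so that each call plays the role of one pass through the sub .vi.

import numpy as np

class LMSFilter:
    # State carried between iterations, analogous to the shift registers
    # that hold W and X in the main While loop of Figure 2.

    def __init__(self, n_weights, mu):
        self.mu = mu
        self.W = np.zeros(n_weights)   # becomes W_{k-1} on the next call
        self.X = np.zeros(n_weights)   # regressor vector

    def step(self, s_k, u_k):
        self.X = np.roll(self.X, 1)
        self.X[0] = u_k                                  # newest reference sample
        e_k = s_k - self.X @ self.W                      # equation (1)
        self.W = self.W + 2.0 * self.mu * self.X * e_k   # equation (2)
        return e_k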

The front panel of the LMS virtual instrument is quite basic and is shown in Figure 6.

Figure 6. Front panel of the LMS virtual instrument.

It can be seen that the step size $\mu$ is selected by a control, which is set to around 0.01. The four weights are shown for the third-order FIR system and, as expected, take precise values for such a simple example. The error is also displayed, and the number of weights is a further control variable.


4. Testing the LMS virtual instrument on real data

The generic LMS algorithm has two inputs, a primary and a reference. The easiest way to get real data into the algorithm is to make use of the sound card; for higher-bandwidth problems a special data-acquisition card can be used. Whilst it would be possible to read the two signals 'live' from sound sources, it was thought better initially to read pre-recorded signals from a stereo .wav file. The stereo .wav file had one channel as signal plus noise and the other channel as 'noise alone'. Of course, with real recordings there is no such thing as 'noise alone'; the diagram in Figure 7 shows a better model of what is happening.

Figure 7. Model of the adaptive filter for recordings of real signals.

It can be seen that, although the signal is assumed to be a perfect measurement, the noise is in fact measured via two separate acoustic transfer functions $H_1(z^{-1})$ and $H_2(z^{-1})$ which alter the magnitude and phase of the recordings at each microphone. Even this is a little simplistic, as it does not allow for room reverberation or the unfortunate possibility that the signal (speech) may be strong enough to reach the reference microphone. Note that for this more general case we are trying to model neither $H_1$ nor $H_2$, nor their inverses. In fact the LMS algorithm minimizes the mean-squared error and hence the result is an adaptive Wiener filter; for more details see reference [3].

Figure 8. Layout of the room.

The actual room used was approximately 3.5 meters square and the noise source was approximately 2.5 meters from the signal source. The noise source was an ordinary radio and the speech source was a person speaking a sentence in English. The resulting .wav file was saved directly to disk using an audio package; LabVIEW could have been used for this purpose too. The layout is shown in Figure 8. There were other furnishings in the room not shown in the diagram, for example a bookcase, two desks and a filing cabinet. The microphones required a pre-amplifier to boost the signal strengths to acceptable levels. A number of recordings were made for later analysis. A typical signal plus noise and the LabVIEW LMS estimate for recorded data are shown in Figure 9.

Figure 9. Signal plus noise (top) and LabVIEW LMS estimate (bottom).

The number of weights used was 512 for this example and the sampling frequency was 22050 Hz. An eight-bit word length was used, although it would have been just as straightforward to use 16 bits. As the data (speech) is a signal whose average power changes with time, it is not easy to measure the signal-to-noise ratio (SNR), since it too will vary with time. SNR is defined as

$$\mathrm{SNR} = 10\log_{10}\left[\frac{P_s}{P_n}\right]$$

where $P_s$ and $P_n$ are respectively the power in the signal and the power in the noise. In general the power $P$ (or variance when there is no dc) is easily calculated for $m$ samples according to

$$P = \frac{1}{m}\sum_{i=1}^{m} x_i^2$$

where $x_i$ is the $i$-th sample of the signal. The problem with our real-world example is that the true signal is unknown (unmeasurable) and hence its power cannot be calculated on its own.

The usual way to calculate SNR for such problems is to measure the power of the noise when the speech signal is absent, and then to measure the power of the signal plus noise (assuming the noise has not changed its characteristics). Subtracting the two measures (in dB) gives the quantity

$$10\log_{10}\left[\frac{P_s + P_n}{P_n}\right] = 10\log_{10}\left[1 + \frac{P_s}{P_n}\right]$$

i.e. 1 + SNR expressed in dB, from which the SNR can then be calculated. Three areas of 'noise alone' in the waveform were used, and hence three measurements of SNR were obtained and averaged. This method is sometimes known as segmented SNR measurement. It assumes, of course, that the noise power does not change throughout the whole measurement, which is in reality a false assumption with non-stationary data. However, it is a good approximation and the only method available other than simulation results. The power of the noise alone and the power of the signal plus noise were measured manually using commonly available audio-editing software (Cool Edit). It would be possible to do this automatically in LabVIEW, but a rather sophisticated speech-activity detector would then have to be developed, which is a current research topic in itself.
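As a rough illustration of this bookkeeping (not part of the original LabVIEW work), the Python snippet below computes the segmented SNR estimate described above, given hand-selected noise-only regions; the function names and the choice to reuse one signal-plus-noise measurement per segment are assumptions made here.

import numpy as np

def power(x):
    # Average power (variance when there is no dc) of a block of samples.
    x = np.asarray(x, dtype=float)
    return np.mean(x ** 2)

def segmented_snr_db(signal_plus_noise, noise_only_segments):
    # Estimate SNR in dB from a signal-plus-noise recording and a list of
    # hand-selected noise-only segments, averaging over the segments.
    snrs = []
    p_sn = power(signal_plus_noise)
    for seg in noise_only_segments:
        p_n = power(seg)
        snr_lin = p_sn / p_n - 1.0          # (Ps + Pn)/Pn = 1 + SNR (linear)
        snrs.append(10.0 * np.log10(max(snr_lin, 1e-12)))
    return float(np.mean(snrs))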

For simulated data, that is, when transfer functions are used to model the acoustic paths $H_1(z^{-1})$ and $H_2(z^{-1})$, it is not uncommon to see large improvements in SNR, anything from 6 dB upwards. Similarly, in anechoic-chamber tests the results are quite impressive. However, in real-world applications such as this there are many factors affecting performance, the most notable being reverberation in the room. This means that instead of there being just one noise source there are in fact many, depending on how reverberant the room is. Bearing this in mind, it was found that on average the improvement in SNR given by the LMS algorithm in this environment was a little over 1.5 dB. There are many approaches in the literature on how to improve on this, some of which involve multiple-sensor arrays. This too is possible with LabVIEW, but is beyond the scope of this article.

5. Conclusions

The LMS algorithm has been implemented using the graphical programming language 'g' (LabVIEW). It has been shown that aspects of adaptive signal processing can be implemented quite elegantly using this method, which is a powerful alternative to other programming approaches. Adaptive array signal processing could be investigated in a similar way and would give improved results. Furthermore, this article has explored only a few applications of the LMS algorithm from a vast literature of applications.

6. References

[1] Johnson, G. W., LabVIEW Graphical Programming, McGraw-Hill, NY, 1994.
[2] Chugani, M. L., Samant, A. R. and Cerna, M., LabVIEW Signal Processing, National Instruments Series, Prentice Hall, NJ, 1998.
[3] Haykin, S., Adaptive Filter Theory, Prentice Hall, Englewood Cliffs, NJ, 1986.


Appendix. Reading stereo .wav files

Reading mono .wav files is covered extensively in the LabVIEW books and help facilities, but much less coverage is given to stereo .wav files and how to read them. The diagram below shows how this can be achieved.


Figure A1. Reading a .wav file using LabVIEW.

The reading is done within a sequence structure (the 'film strip'), as the file has to be read before it is processed. (The rest of the LMS processing must also be performed within a sequence structure.) The read wav file library sub .vi gives out a 2D integer array which must be split into two separate channels. This is done using the index array library call as shown, with 0 and 1 pointing to each channel (column of the 2D array). For convenience the two separate 8-bit integer arrays are then passed through a case structure which will swap them over at the push of a button on the front panel, just in case the two channels were saved the wrong way around in the file. Not shown is the next step, where 128 must be subtracted from each input and the result scaled down to a suitable level for processing. The value 128 arises because with 8 bits there are 256 levels and dc sits midway, at 128. A scaling factor of 0.01 then gives a dynamic range of +/- 1.28 for each channel. This scaling is not critical provided that later, when writing the processed speech to a .wav file, the inverse is used to get back to integer values and a suitable dynamic range.
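For comparison, the same steps can be sketched in Python using the standard wave module. This is an illustration only, not part of the LabVIEW work; the function name and the assignment of channels to primary and reference are assumptions, and the scaling follows the 8-bit, offset-128, 0.01-factor convention described above.

import wave
import numpy as np

def read_stereo_wav_8bit(path, swap_channels=False):
    # Read an 8-bit stereo .wav file and return two scaled float channels.
    with wave.open(path, "rb") as wf:
        assert wf.getnchannels() == 2 and wf.getsampwidth() == 1
        raw = wf.readframes(wf.getnframes())
    data = np.frombuffer(raw, dtype=np.uint8).reshape(-1, 2)   # columns = channels
    left, right = data[:, 0], data[:, 1]
    if swap_channels:                     # analogue of the front-panel swap button
        left, right = right, left
    scale = 0.01
    primary = (left.astype(float) - 128.0) * scale     # signal plus noise
    reference = (right.astype(float) - 128.0) * scale   # noise alone
    return primary, reference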