Thesis: Development of a Flexible, Programmable Analysis System for Mathematical Operations on FFT...
Master Thesis presented to
Prof. Dr. Jens Lüssem
Prof. Dr. Helmut Dispert
2013
Faculty of Computer Sciences and Electrical Engineering
Kiel University of Applied Sciences
Muhammad Usman Khalil (918227)
Abstract
For the measurement of the sound emission of test objects, background noise and irrelevant sound signals
need to be eliminated. These signals are recorded as noise during the measurement process. In signal
processing, many algorithms and techniques have been implemented and discussed. This paper presents
a generalized process for detecting sound frequencies and provides a way to apply different analyses to
them. It also proposes a general software implementation that considers various noise detection and
removal techniques. These algorithms make use of FFT outputs and produce results that can be displayed
by various third-party software components.
Contents
List of Figures................................................................................................................................................8
1. Introduction............................................................................................................................................9
1.1 Motivation......................................................................................................................................9
1.2 Company......................................................................................................................................10
1.3 Brüel & Kjaer...............................................................................................................................10
1.3.1 Brüel & Kjaer (The Complete Measuring Chain)........................................................................10
1.4 Goal..............................................................................................................................................11
2. Pulse Time Data Recorder....................................................................................................................13
2.1 Channel Monitor..........................................................................................................................13
2.2 Export...........................................................................................................................................13
2.3 Study of Data Acquisition Components of Pulse Time Data Recorder...............................................14
2.3.1 DataRecorder.exe.........................................................................................................................15
2.3.2 DataRecorder.exe.........................................................................................................................16
2.3.3 DataRecorderUI.dll......................................................................................................................16
2.3.4 DataRecorderModel-Interfaces.dll...............................................................................................16
2.3.5 DataRecorderModel.dll................................................................................................................16
2.3.6 DataRecorderUIModel-Interfaces.dll..........................................................................................16
2.3.7 DataRecorderUIModel.dll...........................................................................................................17
2.3.8 DataRecorder-Application.dll......................................................................................................17
2.3.9 ChannelMonitor.dll......................................................................................................................17
2.3.10 ChannelTableUI.dll......................................................................................................................17
2.3.11 RecordingView.dll.......................................................................................................................17
2.3.12 RecordingController.dll...............................................................................................................17
2.3.13 PTIFileConsumer.dll....................................................................................................................17
2.3.14 BK.Plateform.DataRecorderProgrammingInterface.dll...............................................................18
2.3.15 Localization.dll............................................................................................................................18
2.3.16 BK.Pulse.Frontend.FrontendFactory...........................................................................................18
2.3.17 BK.Pulse.FrontendDrivers.LanXiFrontend.................................................................................18
2.3.18 GcxConverter...............................................................................................................................18
3. Fast Fourier Transform.........................................................................................................................20
3.1 Frequency Domain Analysis........................................................................................................20
3.2 Fourier Transform........................................................................................................................20
3.2.1 Introduction..................................................................................................................................21
3.2.2 Definition (Forward Transform)...................................................................................22
3.2.3 Definition (Reverse Transform)...................................................................................................22
3.3 Discrete Fourier Transform..........................................................................................................23
3.4 Sampling Rate..............................................................................................................................24
4. Structural Overview of “LOFAR” Modules.........................................................................................26
4.1 Data Source..................................................................................................................................26
4.1.1 Online Data Source Acquisition..................................................................................................26
4.1.1.1 Brüel Kjaer Devices.....................................................................................................................26
4.1.1.2 Microphone Input.........................................................................................................................27
4.1.2 Offline Data Source Acquisition..................................................................................................27
4.2 General Data Interfaces................................................................................................................28
4.3 Block Separator............................................................................................................................29
4.3.1 FFT Windowing...........................................................................................................................30
4.3.1.1 The "Hanning" window...............................................................................................................31
4.3.2 Overlap Processing......................................................................................................................32
4.4 FFT Analysis................................................................................................................................35
5. Types of Noise & Noise Removal Techniques....................................................................................37
5.1 Overview of Noises......................................................................................................................37
5.1.1 White Noise.................................................................................................................................37
5.1.2 Crosstalk......................................................................................................................................37
5.2 Noise Removal Techniques – Filter Design................................................................................38
5.2.1 Fixed Filters.................................................................................................................................38
5.2.2 Adaptive Filters............................................................................................................................38
5.3 Wiener Filter................................................................................................................................40
6. “LOFAR” Design and IT Architecture.................................................................................................42
6.1 Time Signal Processing for Noise Removal................................................................................42
6.1.1 Time Signal Windowing..............................................................................................................42
6.1.2 FFT Analysis................................................................................................................................42
6.1.3 Noise Removal Algorithm...........................................................................................................42
6.1.4 Reverse FFT (IFFT).....................................................................................................................43
6.1.5 Waterfall Display.........................................................................................................................43
6.2 System Flow Chart.......................................................................................................................45
6.3 Use Case Model...........................................................................................................................46
6.4 Use Case Description...................................................................................................................46
6.4.1 Actor............................................................................................................................................46
6.4.2 Description- BrowseForSignals...................................................................................................46
6.4.2.1 Pre conditions...............................................................................................................................47
6.4.2.2 Post conditions.............................................................................................................................47
6.4.3 Description- BrowseForNoise.....................................................................................................47
6.4.3.1 Pre conditions...............................................................................................................................47
6.4.3.2 Post conditions.............................................................................................................................47
6.4.4 Description - PlayFilteredSignal..................................................................................................47
6.4.4.1 Pre conditions...............................................................................................................................47
6.4.4.2 Post conditions.............................................................................................................................47
6.5 High Level Component Architecture (with Voice Recording and data recording devices as input)..........48
6.6 Class Level Architecture (with Voice Recording and data recording devices as input).............49
6.7 Multithreaded- High Level Software Activity.............................................................................50
7. “LOFAR” Development.......................................................................................................................51
7.1 Skeleton of “LOFAR” with WPF................................................................................................52
7.2 Original Signal rendered on “OrignalSignal” Panel....................................................................53
7.4 Filtered Signal rendered on “FilteredSignal” Panel.....................................................................55
8. Conclusion and Scope for Future Research..........................................................................................56
8.1 Conclusion of the work................................................................................................................56
8.2 Scope of further research.............................................................................................................56
9. Bibliography.........................................................................................................................................57
10. Appendix..............................................................................................................................................59
10.1 C# Source Code: WienerFilter.....................................................................................................59
10.2 C# Source Code: Noise Addition.................................................................................................60
10.3 C# Source Code Time Data Recording from File........................................................................61
List of Figures
Figure 1.1 - Proposed LOFAR High Level Architecture.............................................................................10
Figure 2.1 – Pulse Time Data Recorder Type 7708 (Kjaer)........................................................................13
Figure 3.1 - Frequency Domain (S. Allen Broughton, 2011)......................................................................20
Figure 3.2 Waveform of Sin function with its FFT (LDS-Group, 2013)...................................................22
Figure 4.1 - Pulse Front end Devices (http://www.bruelkjaer.de/)..............................................................26
Figure 4.2 Periodic Sine Wave (LDS-Group, 2013)..................................................................................29
Figure 4.3 FFT (LDS-Group, 2013)............................................................................................29
Figure 4.4 a-Periodic Sine Wave (LDS-Group, 2013)................................................................................29
Figure 4.5 FFT (LDS-Group, 2013)............................................................................................................29
Figure 6.1 Time Signal Processing for Noise Removal (Google Images)...................................................43
Figure 6.2 System Noise Removal Flow.....................................................................................................44
Figure 6.3 Use Case Model..........................................................................................................................45
Figure 6.4 System Component Architecture................................................................................................47
Figure 6.5 System Design Class Architecture.............................................................................................48
Figure 6.6 System Multithreaded Architecture of Noise Removal Process................................................49
Figure 7.1 Skeleton of LOFAR with WPF application................................................................................51
Figure 7.2 Original Signal............................................................................................................................52
Figure 7.3 Filtered signal after Wiener filtering..........................................................................53
Figure 7.4 Filtered signal after Wiener filtering..........................................................................54
1. Introduction
The topic of this thesis is the development of the software component “LOFAR” for the
measurement of real-world sound signals. These sound signals are recorded and passed
through different mathematical operations. The focus of this dissertation is to analyze and
develop a monitoring procedure for detecting background noises, disturbing sources and
unexpected distortions. Test objects are used to emit sound at different frequencies and
amplitudes, and the recordings contain many background and irrelevant signals. These test sounds are
processed and analyzed, and mathematical operations (FFT) are applied to eradicate the
background signals. The results are shown on a waterfall display. This software component
is intended for use in the development of a complete software system focused on the analysis of sound
signals produced by different front-end devices, based on a distributed client/server architecture and
tested on different machines in different networking environments.
1.1 Motivation
After getting hands-on experience with the hydro-acoustic signals produced by Brüel & Kjaer devices, I
was determined to see these signals used in a software application that monitors them and processes
them after recording through some interface. Brüel & Kjaer had written complete software containing
all the features needed to interact with the incoming signals (captured by Brüel & Kjaer front-end
devices equipped with multiple channels) and to visualize them (e.g. spectrum and FFT visualization).
This software contained several hundred other components, designed and developed to expose the
different kinds of functionality included in the software. One of the most important functions was
capturing the data stream live and in real time. I focused on exactly this, and isolated the components
for observing the live data stream so that this stream could be reused. For this I formed a complete
architectural flow of the concept (see the figure following the Goal section) and was in a position to
implement it on a specific platform and technology. This idea gives insight into software development
in the world of hydro-acoustics and oscillations. Moreover, this dissertation was based on research into
the feasibility of isolating the live data stream, as discussed, from software that includes several other
components; the isolated stream forms the basis for the other software layers that follow.
1.2 Company
HGL Dynamics GmbH is an engineering company located in Berlin. Its products include multi-channel
data acquisition systems for dynamic measurements, single-point lasers, scanning laser micrometers
and structural software for dynamic analysis. With this complete range of equipment, they offer
vibration measurement and analysis.
1.3 Brüel & Kjaer
Brüel & Kjær Sound and Vibration Measurement A/S provides integrated solutions for the measurement
and analysis of sound and vibration. As a world leader in sound and vibration measurement technology,
they apply their core competencies to help industry and government agencies solve their sound and
vibration problems, so that these customers can focus on their main objective: efficiency in industry and
government. Brüel & Kjaer products focus mainly on the acquisition of time data, and the company
provides software platforms for performing different kinds of analysis. Brüel & Kjaer provides data
recorder software (e.g. Pulse Time Data Recorder) that makes it possible to analyze signals recorded
via front-end devices (e.g. IDAe, LanXI).
1.3.1 Brüel & Kjaer (The Complete Measuring Chain)
Brüel & Kjær provides advanced technology solutions and high-quality products. The complete measuring
chain, from production to a complete running system, is covered.
1. Products: The Brüel & Kjaer product line includes all of the components and tools needed
for the measurement and analysis of sound and vibration.
2. Systems: Brüel & Kjaer products are intelligently designed to fit together as a system. This
makes it easier to build optimized systems that meet specific customer needs.
4 Retrieved on 29 May 2013 from http://hgl-dynamics.de/
3. Solutions: In certain cases, Brüel & Kjaer not only provides the systems but even operates them
and delivers analytical results, which means that customers can focus on their core business and do
not have to worry about the operation and maintenance of their equipment.
4. Services: Brüel & Kjaer offers a range of services for their products and systems, including
installation, calibration, training, maintenance, upgrading, repair and rental.
5. Quality control: All products, systems and solutions are examined thoroughly, often in harsh
environments. As an ISO 9001 certified company, customers can be assured that the products and
services meet extremely high standards and regulations.
1.4 Goal
After data acquisition from these devices, further mathematical analysis and operations (FFT) are
applied; the processed, transformed, noise-free signals are then available for visualization in any
third-party component (e.g. Arction LightningChart). This leads to the development of complete
software that allows real-time LOFAR analysis of incoming hydrophone signals. It could be finalized
as a client/server architecture or a distributed client/server paradigm: one server can serve multiple
clients at a time and can receive live or offline data streams in parallel.
Figure 1.1 - Proposed LOFAR High Level Architecture
The content is organized into separate sections. Data is treated as a continuous stream passed along an
interface that behaves as a general interface. This interface communicates with all kinds of front-end
devices recording data from the outside world; its details are explained in the separate section “General
Data Interfaces”. Afterwards, there is a dedicated discussion of dividing the stream into blocks, an
initial requirement for the FFT module. The “Block Separator” section describes how the data is divided
into blocks of size 2^n that are then forwarded to the FFT module. The next section covers the FFT and
its constituents, including a subsection on discrete analysis of the data.
A complete chapter describes the Wiener process and its role in the removal of white noise. Finally, the
result is displayed on a spectrum that shows the resultant frequencies, which are supposed to be
noiseless.
5 Norbert Wiener was an American mathematician.
6 The Wiener process, as used here, is the process of removing noise from the continuous data.
7 White noise is a randomly generated signal that disturbs all frequencies uniformly and creates an overall noisy impact on the signal under test.
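The block separation described above, dividing the continuous stream into blocks of size 2^n before they are forwarded to the FFT module, can be sketched as follows. This is an illustrative Python sketch (the thesis implementation itself is in C#); the function and parameter names are my own, not taken from the LOFAR sources.

```python
def split_into_blocks(stream, exponent=10):
    """Split a continuous sample stream into blocks of size 2**exponent,
    as required by a radix-2 FFT.  Trailing samples that do not fill a
    complete block are discarded here; a real implementation would keep
    them for the next read."""
    block_size = 2 ** exponent          # FFT input length must be 2^n
    n_blocks = len(stream) // block_size
    return [stream[i * block_size:(i + 1) * block_size]
            for i in range(n_blocks)]

samples = list(range(2500))                        # stand-in for a recorded time signal
blocks = split_into_blocks(samples, exponent=10)   # blocks of 1024 samples each
```

Each returned block can then be handed to the FFT module independently.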
2. Pulse Time Data Recorder
Pulse Time Data Recorder is a complete software package for acquiring real-time data from
front-end devices. It allows different types of analysis (e.g. FFT analysis, order analysis and
other real-time analyses) and, more importantly, it enables the user to move on to Pulse
Labshop to perform deep mechanical operations on signals recorded from the front-end devices.
It gathers data from the field and the outside world and provides post-processing on that data. It is a
multi-channel data acquisition tool that gathers data and applies post-processing; it offers spectrum
analysis, a history of the recorded data, tachometer analysis, channel monitoring, export of data in a
variety of formats and many other valuable operations and features.
Brief summaries of a few of the mentioned features are given here.
2.1 Channel Monitor
There is a dedicated panel on the interface of Pulse Time Data Recorder that shows the channel data
upon selection of a particular signal from the table of channels. Channel Monitor allows the axes to be
adjusted as required. Channel Monitor contains a GCX display that renders the signal of a particular
channel of the connected front-end device.
2.2 Export
Here, the user can export the incoming signal data of a specific channel, selected from the Channel
Table, to different file formats (e.g. the PTI file format).
Figure 2.1 – Pulse Time Data Recorder Type 7708 (Kjaer)
2.3 Study of Data Acquisition Components of Pulse Time Data Recorder
Pulse Time Data Recorder has several hundred components focusing on different parts of the software.
These components implement the basic and core functionalities of Pulse Time Data Recorder, such as
Interface, Framework implementation, Licensing, Signal Analysis, Frontend setup, FFT Analysis,
Frontend Drivers and many other software components included with Pulse. The study of the Pulse
recording components was carried out to extract and focus on the existing software components for
acquiring real-time live data from front-end devices. These software components are mainly designed
for acquiring signals from front ends such as IDAe and LanXI devices. Pulse Time Data Recorder
makes use of internal drivers to communicate with these devices.
Figure 2.2 Pulse Time Data Recorder Flow (component diagram: DataRecorderUI, BusinessObject, ApplicationLogic, Frontend-Drivers (LanXi, IDAe), Frontends (LanXI, IDAe))
2.3.1 DataRecorder.exe
Pulse Time Data Recorder is equipped with different kinds of analyzers. These analyzers are set up
during the application setup phase of the software. They execute on their own specific execution
cycles, asynchronously, and produce specific results (processed signals) for the engineer and user.
These analyzers connect to the front ends in different asynchronous execution contexts.
Some of these analyzers are:
Recording Analyzer
PTI File Analyzer
Tacho Analyzer
Audiometer Analyzer
Odometer Analyzer
Speedometer Analyzer
Channel Analyzer
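The asynchronous analyzer scheme described above can be sketched roughly as follows. This is an illustrative Python sketch, with threads and queues standing in for Pulse's execution contexts; the class names and the toy "analysis" (a block mean) are hypothetical, not taken from the Pulse sources.

```python
import threading
import queue

class Analyzer:
    """Minimal stand-in for a Pulse analyzer: consumes signal blocks from
    its own queue on a dedicated thread, independently of the others."""
    def __init__(self, name):
        self.name = name
        self.inbox = queue.Queue()
        self.results = []
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def _run(self):
        while True:
            block = self.inbox.get()
            if block is None:                # sentinel: shut down
                return
            self.results.append(sum(block) / len(block))  # toy "analysis"

analyzers = [Analyzer("Recording"), Analyzer("Tacho")]
for block in ([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]):
    for a in analyzers:                      # fan each block out to every analyzer
        a.inbox.put(block)
for a in analyzers:
    a.inbox.put(None)
    a.thread.join()
```

Each analyzer runs on its own execution cycle and sees the same incoming blocks, mirroring how the real analyzers connect to the front ends in separate asynchronous contexts.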
2.3.2 DataRecorder.exe
It launches the main program and defines the interface of the whole program structure. It hosts the
application components, starts consuming signals immediately from the front-end devices if they are
connected, and displays the whole UI framework of the software with all its UI components. It can host
all client libraries written solely for communicating with front-end devices and their drivers, selected
according to the type of front-end device. After successfully communicating with the device, it hosts
the library that starts scheduling the consumption of data from the front-end devices.
2.3.3 DataRecorderUI.dll
It defines the most frequently used UI components of the application, containing the UI structure of the
controls and menu options of PULSE Time Data Recorder, such as Audio, Conditions and Tachometer
under the “Record” option. It contains all the UI interfaces defining the main user interface.
2.3.4 DataRecorderModel-Interfaces.dll
It defines all programmatic interfaces of the whole framework. These interfaces define and expose the
overall functionality of the framework (e.g. Data Recording, Analyzing, Analyzer Audio Monitoring,
Analyzer Monitoring, Data Recorder Signaling and Data Recorder Triggering).
2.3.5 DataRecorderModel.dll
It implements all the programmatic interfaces stated in section 2.3.4. The functionality that describes
the overall implementation of the framework is implemented here (e.g. Data Recording, Audio
Monitoring, Analyzer Monitoring, Data Recorder Signaling and Data Recorder Triggering).
2.3.6 DataRecorderUIModel-Interfaces.dll
It defines the programmatic interfaces for describing UI components and UI controls (presentation) of the
framework.
2.3.7 DataRecorderUIModel.dll
It implements the programmatic interfaces stated above. This exposes the UI of the whole framework
and allows user interaction with the framework (e.g. audio enabling/disabling, Conditions interaction,
Tachometer and other areas under the “Record” option).
2.3.8 DataRecorder-Application.dll
It sets up all available add-ins and the “Help” feature, and starts up the application by initializing and
loading the project file, if any (i.e. a .tdr file). It also adds the services of a few of its components
(e.g. Calibration, Transport Controller and Tachometer).
2.3.9 ChannelMonitor.dll
This DLL describes the presentation of the Channel Monitor component and its overall interaction with
other UI components (e.g. GCX). It hosts the GCX control and performs the necessary initializations.
2.3.10 ChannelTableUI.dll
It describes the functionality of the main Grid where signal recording parameters and other properties are
shown.
2.3.11 RecordingView.dll
It contains the UI control of the “Recording” feature and performs some UI configuration regarding the
Channel Interface and other related components.
2.3.12 RecordingController.dll
It contains the business objects and classes used for calculating and recording signals from Front-End
devices.
2.3.13 PTIFileConsumer.dll
It contains the PTI file writing logic: it consumes the data (signals) and dumps the data to a file. It
consumes the data in the form of asynchronous callbacks and then prepares the data to be written to the
file in a specific format, namely the PTI file format. It first prepares the header information of the file
and then dumps the data in blocks of specific sizes. The header contains information about the frequency
that was set up while recording the live data stream from the front ends, the number of channels, and
other information.
It has two functionalities: writing AC (analog) signals and DC (digital) signals to disk.
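The header-then-blocks layout described above can be sketched as below. This is not the actual PTI format, whose field layout is not given here; the field choices (sample rate, channel count, block size) and all names are illustrative assumptions only.

```python
import struct
import io

def write_recording(f, sample_rate, n_channels, blocks):
    """Write a simple header (sample rate, channel count, block size)
    followed by fixed-size blocks of float32 samples.
    Illustrative layout only -- NOT the real PTI format."""
    block_size = len(blocks[0])
    # little-endian: one float64 followed by two uint32 fields
    f.write(struct.pack("<dII", float(sample_rate), n_channels, block_size))
    for block in blocks:
        f.write(struct.pack("<%df" % block_size, *block))

buf = io.BytesIO()
write_recording(buf, 65536, 2, [[0.0, 0.5, -0.5, 1.0], [1.0, 0.0, 0.0, -1.0]])
```

A reader would first unpack the fixed-size header, then read the sample blocks in the sizes the header announces, which is the same two-phase pattern the PTI consumer follows.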
2.3.14 BK.Plateform.DataRecorderProgrammingInterface.dll
It is composed of factory classes and programmatic interfaces, to be implemented by others, for
connecting to front-end devices.
2.3.15 Localization.dll
It is composed of the functionality and business objects that define language localization and related
information used inside the framework.
2.3.16 BK.Pulse.Frontend.FrontendFactory
This contains the implementation of factory classes that create the objects of the specific front end
configured during the configuration phase of Pulse Time Data Recorder. In this way, only the
configured front end sends notifications and data to its consumer.
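The factory idea described above can be sketched as follows. The class names here are illustrative stand-ins, not the actual BK.Pulse classes, and the Python sketch only mirrors the pattern (the thesis code is C#).

```python
class LanXiFrontend:
    def start(self):
        return "LanXI streaming"

class IdaEFrontend:
    def start(self):
        return "IDAe streaming"

class FrontendFactory:
    """Creates only the front-end object that was selected at configuration
    time, so only that front end delivers notifications and data."""
    _registry = {"LanXI": LanXiFrontend, "IDAe": IdaEFrontend}

    @classmethod
    def create(cls, configured_type):
        try:
            return cls._registry[configured_type]()
        except KeyError:
            raise ValueError("unknown front end: %s" % configured_type)

frontend = FrontendFactory.create("LanXI")   # only this device type is instantiated
```

Because the factory instantiates a single configured type, the rest of the application never needs to know which concrete front-end class is behind the interface.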
2.3.17 BK.Pulse.FrontendDrivers.LanXiFrontend
It contains all the functionality for receiving data from a specific front-end device (e.g. LanXI, IDAe).
The recorded signals are received in the form of a stream and kept in software buffers.
2.3.18 GcxConverter
8 Analog signals.
9 Digital signals.
10 BK: Brüel & Kjaer.
This component receives data change notifications in the normal manner, like the others. It prepares the
data in a certain format (i.e. the BKData format) and makes it available to the GCX graphical component
for rendering. This conversion is necessary so that the data can be easily rendered and displayed on the
GCX component.
11 BKData is the structure that holds the data information specific to a channel recording data from a front-end device.
12 GCX is a graphical component that renders the data for a specific channel of a front-end device.
3. Fast Fourier Transform
Before going into the details of the Fourier transformation, the frequency spectrum gives an overview of the frequency domain and its applications. The "frequency spectrum" of a time-domain signal is a representation of that signal in the frequency domain. The frequency spectrum can be generated via a Fourier transform of the signal, and the resulting values are usually presented as amplitude and phase, both plotted versus frequency.
Any signal that can be represented as an amplitude that varies with time has a frequency spectrum. In this way, several other concepts such as visible light (color), musical notes, radio/TV channels, and the regular rotation of the earth can be included and well described. When these physical phenomena are represented in the form of a frequency spectrum, certain physical descriptions of their internal processes become much simpler. Often, the frequency spectrum clearly shows harmonics, visible as distinct spikes or lines, that provide insight into the mechanisms that generate the entire signal. This is fairly intuitive, because these spikes and lines contain all the information needed to reverse the transform and regenerate the entire signal.
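As a small illustration of the idea above (a sketch using NumPy, which is not part of the "LOFAR" component; the sampling rate and tone frequencies are arbitrary), a signal built from two harmonics produces exactly two distinct spikes in its frequency spectrum:

```python
import numpy as np

# Hypothetical signal: 50 Hz and 120 Hz components, sampled at 1 kHz.
fs = 1000                       # sampling rate in Hz (illustrative choice)
t = np.arange(0, 1, 1 / fs)     # one second of time samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)            # one-sided FFT of a real signal
freqs = np.fft.rfftfreq(len(signal), 1 / fs)
amplitude = np.abs(spectrum) * 2 / len(signal)   # scale to peak amplitude

# The two strongest spectral lines sit at the generating frequencies.
peaks = freqs[np.argsort(amplitude)[-2:]]
print(sorted(peaks.tolist()))   # → [50.0, 120.0]
```

The spikes carry enough information (amplitude and, via the complex spectrum, phase) to regenerate the signal, which is exactly the reversibility claimed above.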
3.1 Frequency Domain Analysis
"Spectrum analysis", also referred to as frequency-domain analysis or spectral density estimation, is the technical process of decomposing a complex signal into simpler parts. As described above, many physical processes are best described as a sum of many individual frequency components. Any process that quantifies the various amounts (e.g. amplitudes, powers, intensities, or phases) versus frequency can be called "spectrum analysis".
Spectrum analysis can be performed on the entire signal. Alternatively, a signal can be broken into short
segments (sometimes called frames), and spectrum analysis may be applied to these individual
segments. Periodic functions (such as sin (t)) are particularly well-suited for this sub-division. General
mathematical techniques for analyzing non-periodic functions fall into the category of Fourier analysis.
3.2 Fourier Transform
The Fourier transform, named after Joseph Fourier, is a transformation of functions that has many applications in engineering. By it, a function of time f(t) can be transformed into a new function F whose argument is frequency, with hertz as its unit. This function is known as the Fourier transform or frequency spectrum. The transformation is reversible as well: the new function F can be transformed back into the original function of time f(t). In this way one event can be represented both in a time-domain representation and in a frequency-domain representation. In general the time-domain function f(t) is real-valued, and its transform is complex-valued, where one complex number describes both the amplitude and the phase of the corresponding frequency component.
For example, for a periodic function (continuous or sinusoidal), the Fourier transform can be simplified to the calculation of discrete samples (i.e. amplitudes), and these amplitudes behave as Fourier series coefficients. For computer processing or storage, it is possible to re-create the original signal from these discrete samples. This idea is known as the "discrete-time Fourier transform".
3.2.1 Introduction
In the study of Fourier series, periodic and complicated functions can be written mathematically as a summation of simple sine and cosine waves. The Fourier transform is basically an extension of this idea that results when the period of the represented function is lengthened and allowed to approach infinity.
To recover the amplitudes of each sine and cosine wave in a Fourier series, it is useful to use Euler's formula. According to it, e^(2πiθ) = cos(2πθ) + i sin(2πθ), so it is possible to write the Fourier series in terms of the basic waves e^(2πiθ). In this way many formulas are simplified, and it provides a compact definition of the Fourier series. Rewriting the sine and cosine waves as complex exponentials makes the Fourier coefficients complex-valued as well. The interpretation of such a complex number is that it gives the amplitude (size) of the wave and the phase (angle) of the wave. A simple visualization of the frequency domain can be seen in the following figure.
Figure 3.3 - Frequency Domain (S. Allen Broughton, 2011)
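The complex-exponential rewriting described above can be made explicit. As a sketch, for a function f with period T, the Fourier series and its (complex) coefficients c_n take the form:

```latex
f(t) = \sum_{n=-\infty}^{\infty} c_n \, e^{2\pi i n t / T},
\qquad
c_n = \frac{1}{T} \int_{0}^{T} f(t)\, e^{-2\pi i n t / T}\, dt .
```

Each coefficient c_n is a complex number whose magnitude is the amplitude of the n-th wave and whose angle is its phase, exactly as stated above.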
3.2.2 Definition (Forward Transform)
There are several common conventions for defining the Fourier transform F of an integrable function f: R → C. Here, for a continuous function of one variable f(t), the Fourier transform F(f) is defined by the following equation (Brigham, 1988).
F(f) = ∫_{−∞}^{∞} f(t) e^{−2πift} dt
When the independent variable t represents time (with SI unit of seconds), the transform variable f represents frequency (in hertz).
3.2.3 Definition (Reverse Transform)
Under suitable conditions, f(t) is determined from F by the inverse transform shown in the following equation (Brigham, 1988).
f(t) = ∫_{−∞}^{∞} F(f) e^{2πift} df
The statement that f(t) can be reconstructed from F is known as the Fourier inversion theorem, and was first introduced in Fourier's Analytical Theory of Heat (Fourier 1822, p. 525; Fourier & Freeman 1878, p. 408), although what would be considered a proof by modern standards was not given until much later. The functions f and F are often referred to as a Fourier integral pair or Fourier transform pair.
After transforming real values, complex numbers result. They can be thought of in two different ways: (real, imaginary) or (magnitude, phase). This is described in the following equations (Brigham, 1988).
|X| / N = √(Re(X)² + Im(X)²) / N
arg(X) = arctan2(Im(X), Re(X))
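The two equations above can be written out directly. The following is a small sketch (the function name and the example values are illustrative, not part of the thesis software) converting one complex FFT bin into the (magnitude, phase) view:

```python
import cmath

def magnitude_and_phase(X, N):
    """Return the normalized magnitude and the phase of a complex FFT bin,
    following |X|/N = sqrt(Re(X)^2 + Im(X)^2)/N and arg(X) = arctan2(Im, Re)."""
    magnitude = ((X.real ** 2 + X.imag ** 2) ** 0.5) / N
    phase = cmath.phase(X)   # equivalent to arctan2(Im(X), Re(X))
    return magnitude, phase

# Example bin X = 3 + 4i from a length-10 transform (hypothetical values).
mag, ph = magnitude_and_phase(complex(3, 4), N=10)
print(round(mag, 3), round(ph, 3))   # → 0.5 0.927
```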
For the fast Fourier transform, time data is digitized and sampled into blocks of data. These blocks of data are also called a time window or time record. One FFT block represents the time data points as a frequency composition (Brigham, 1988). For example, the following figure13 shows a 10 Hz sine waveform and its
13 Retrieved on 14th June, 2013 from http://www.physik.uniwuerzburg.de/~praktiku/Anleitung/Fremde/ANO14.pdf
respective FFT. It is clear that the sine wave is represented by one discrete tone after the FFT, with a height of 1.0 at 10 Hz.
Figure 3.4 Waveform of Sin function with its FFT (LDS-Group, 2013)
The fast Fourier transform is always applied to time-domain signals and is mainly used to convert them into the frequency domain while preserving all details of the time signals (i.e. phase and amplitude). Moreover, the FFT algorithm only applies when the signal contains a number of elements equal to 2^n. Hence, the transformation of time-domain data into another domain is the essential work performed by the FFT. It is, in effect, a mathematical method for transforming a function of time into a function of frequency, and is therefore very useful for the analysis of time-dependent phenomena. These results are then used to apply other mathematical operators. The results contain real and imaginary values that hold the information about the phase and amplitude of the original signals.
In the results of the FFT (i.e. amplitude and phase), the amplitude is encoded as the magnitude of the complex number (√(x² + y²)) while the phase is encoded as the angle (arctan2(y, x)).
Here, one real number from the result of the FFT cannot give the phase and amplitude information together. If we throw away the phase information, the signal can easily be distorted upon recreation by the iFFT (inverse fast Fourier transform, explained in the 6th chapter), an operation that is always required for transferring the information back to the time domain.
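The claim above can be checked numerically. The following sketch (NumPy, illustrative; the block length and random data are arbitrary) shows that the iFFT reconstructs the signal exactly when the full complex spectrum is kept, but not when the phase is discarded:

```python
import numpy as np

# A random real block of length 2^n, as the FFT requires.
rng = np.random.default_rng(0)
x = rng.standard_normal(1024)

X = np.fft.fft(x)
roundtrip = np.fft.ifft(X).real               # full complex spectrum kept
magnitude_only = np.fft.ifft(np.abs(X)).real  # phase thrown away

print(np.allclose(roundtrip, x))       # → True: exact reconstruction
print(np.allclose(magnitude_only, x))  # → False: phase loss distorts it
```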
3.3 Discrete Fourier Transform
By definition, the discrete Fourier transform is the process of converting finitely many sample values into a series of coefficients of complex sinusoids. These coefficients are ordered with respect to their frequencies. In this way, the sample values (sampled functions, in fact) are converted from the time domain to the frequency domain. (Smith S. W., 1999)
The DFT is the most important discrete transform, used to perform Fourier analysis in many practical
applications. In digital signal processing, the function is any quantity or signal that varies over time, such
as the pressure of a sound wave, radio signal, or daily temperature readings, sampled over a finite time
interval (often defined by a window function). In image processing, the samples can be the values
of pixels along a row or column of a raster image. The DFT is also used to efficiently solve partial
differential equations, and to perform other operations such as convolutions or multiplying large integers.
(Smith S. W., 1997-1998)
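The DFT described above can be computed directly from its defining sum, X[k] = Σ_n x[n] e^(−2πikn/N). A minimal sketch (NumPy, illustrative; the input vector is arbitrary) confirms that the direct computation agrees with the fast (FFT) implementation:

```python
import numpy as np

def dft(x):
    """Direct DFT: multiply by the N x N matrix of e^{-2πi kn/N} terms."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

x = np.array([1.0, 2.0, 0.0, -1.0])          # hypothetical sample block
print(np.allclose(dft(x), np.fft.fft(x)))    # → True
```

The direct form costs O(N²) operations, which is why practical implementations use the O(N log N) FFT instead, as the text notes.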
Since the DFT deals with a finite amount of data, it can be implemented in computers by numerical algorithms or even dedicated hardware. These implementations usually employ efficient fast Fourier transform (FFT) algorithms, so much so that the terms "FFT" and "DFT" are often used interchangeably. The terminology is further blurred by the (now rare) synonym "finite Fourier transform" for the DFT, which apparently predates the term "fast Fourier transform" but has the same initialism.
3.4 Sampling Rate
"The sampling rate, or sampling frequency, defines the number of samples per unit of time (usually seconds) taken from a continuous signal to make a discrete signal."14 We can freely choose the sampling frequency. An arbitrary signal may contain various frequency components that can be detected.
The highest frequency that can possibly be detected is always half of the sampling frequency. It is called the Nyquist or folding frequency. For a signal sampled at the rate fs, due to the limitation imposed by the Shannon sampling theorem, components of the spectrum can only be extracted for frequencies -fs/2 <= f <= fs/2, i.e. between the negative and positive Nyquist frequency. Below is the figure15 representing discrete samples of a continuous signal.
14 Sampling rate definition: retrieved on 22 July, 2013 from http://developer.nokia.com/Community/Wiki/Sound_pattern_matching_using_Fast_Fourier_Transform_in_Windows_Phone
15 Sampling rate figure: retrieved on 22 July, 2013 from http://developer.nokia.com/Community/Wiki/Sound_pattern_matching_using_Fast_Fourier_Transform_in_Windows_Phone
Figure 3.4 sampling continuous signals into discrete samples (Galazzo)
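The Nyquist limit described above can be demonstrated numerically. In the following sketch (NumPy, illustrative; the rates and the 70 Hz tone are arbitrary choices), a component above fs/2 aliases and shows up at a lower frequency after sampling:

```python
import numpy as np

fs = 100                               # sampling rate in Hz (hypothetical)
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 70 * t)         # 70 Hz tone, above fs/2 = 50 Hz

# The strongest detected frequency is the alias fs - 70 = 30 Hz,
# not the true 70 Hz, because 70 Hz exceeds the Nyquist frequency.
spectrum = np.abs(np.fft.rfft(x))
detected = np.fft.rfftfreq(len(x), 1 / fs)[np.argmax(spectrum)]
print(detected)    # → 30.0
```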
4. Structural Overview of “LOFAR”
Modules
The concept that was mentioned as blocks after "Goal" has to be implemented in such a way that everything is modular and can easily be plugged in and re-used in any format. The modules must be tested and are supposed to run in the form of services. Separate modules have to be written that communicate with each other independently of platform and technology.
These tentative modules are listed below.
4.1 Data Source
There are different kinds of data sources available for providing data streams. For test data, it is convenient to generate even random values of finite length, preferably of size 2^n. In this way, data can be simulated for its consumers, giving the impression of an online data provision. In the following, we study the ways data can be produced and transferred to its consumers.
4.1.1 Online Data Source Acquisition
4.1.1.1 Brüel Kjaer Devices
For online data source acquisition, there are different kinds of devices that provide multiple-channel data to their consumers16. These data sources are real-time and extremely fast in terms of data recording. They possess different numbers of channels for recording simultaneous data through their interfaces and provide data to a consumer or software application, which may process it by applying different mathematical operations. See the figure17 below.
16 Consumers of the data can be any software component or data buffers where the data is recorded, to be consumed later by other software modules.
17 Retrieved on 23rd July, 2013, Brüel & Kjaer front end devices from http://www.bruelkjaer.de/Products/frontends/lanxi.aspx
Figure 4.5 - Pulse Front end Devices (http://www.bruelkjaer.de/)
Brüel & Kjær PULSE data acquisition software acquires data from PULSE data acquisition hardware devices; this also includes any PULSE LabShop real-time analysis applications together with the PULSE Time Data Recorder.
These software applications automatically synchronize and compensate for small differences and errors in both amplitude and phase between different serial numbers, types, and generations of hardware.
Currently I have worked with PULSE IDAe and LanXI based devices, each with a fixed number of channels. Through these channel-based modules, the Pulse Time Data Recorder, or the components inside it that consume live data, consumes data in real time.
4.1.1.2 Microphone Input
In the context of the software component "LOFAR", sound signals can be recorded via a microphone and, after conversion into WAV format, transmitted to the consuming components. Voices can be recorded from an ordinary microphone input. These inputs are converted to binary and can be stored on disk. Further, that binary file can be read and a wave file (e.g. MP3 format) can be generated, which then serves as the input for reading all audio data for further processing. With any recording API, one can record voice into a stream and analyze it in an application component. Thus, microphone input is another candidate for providing online data to the "LOFAR" software component.
4.1.2 Offline Data Source Acquisition
For offline data acquisition, either the data is already recorded in the file system, or there is a constant stream that provides the data in offline mode. This can include PTI files and CSV data streams. This gives the user the freedom to feed data into the software component "LOFAR" as required. From the user interface18 aspect, the user is free to select a specific amount of data (in the form of blocks). For this, the user interface would allow the user to select and highlight part of the audio signal data for further usage. In this way there is direct control over the data, unlike with a live data stream.
For the construction of these data files, the first part of the information is the header, which describes the type of signal (AC, DC) and the frequency set up by the user on the frontend. Moreover, the number of channels and other information can be found there. After the header information, the data is dumped onto the file in binary format, and the file is then provided to the software component. PTI files are expected to grow over time while live data is provided, so these files always use blocks of a specific size to describe the binary data dumped onto them.
In the context of this software component, due to the unavailability of devices and a live data stream, I construct the data temporarily and dump it into files in some format. In this way, I can generate random data for a specific time period within a specific range. This data is based entirely on real values and is stored in data structures of size 2^n.
Another possibility is to use an already recorded wave file that has a constant amount of data. For this, a software interface is written that reads a wave file (e.g. "Whislter.wav") and provides audio samples to the next component in the chain for further processing.
4.2 General Data Interfaces
We have discussed the sources of data. These can be manipulated and made available in various ways, and they can be chosen exclusively from the proposed user interface of the "LOFAR" component. So, there are different kinds of data sources that either record data signals from the outside world or provide an already generated data stream to other modules in the chain for further analysis: frontend devices, offline data sources (e.g. PTI, CSV files), and other online and offline data sources. There is a requirement to build a generic component that interfaces with all these sources in a general manner and allows them to connect generically to the "LOFAR" software component. This interface behaves in a totally generic manner.
18 Software user interface (GUI)
There is a software component "Factory"19 that creates the objects of these data sources (depending on the choice of the user) and allows these "General Data Interfaces" to communicate with the other modules of the software component. A criterion is always passed to the factory component, and on the basis of that, the factory creates the object of the specific data source. This interface must have the capability to interact with online or offline data streaming sources efficiently and transparently with respect to the chains following it. Specific methods that take or record data of a specific block length transfer the data from the source to the other modules of the "LOFAR" software component.
4.3 Block Separator
The FFT computation requires that data is provided to it in the form of finite, contiguous blocks. The FFT operator only operates on a finite amount of data and outputs the frequency spectrum for the specified time-domain signal. So, the time-domain data is processed and analyzed block by block.
The FFT computation assumes that the data is periodic within each data window, i.e. that the same signal repeats over and over again. Another type of signal is the transient signal, which always starts at zero level at the beginning of the time window, rises to some maximum level, and decreases back to zero before reaching the end of the window. When the FFT is computed for a non-periodic signal, the resulting spectrum of frequencies contains a certain amount of leakage. Due to this leakage, the energy of the signal is smeared out over a frequency range, whereas it should be concentrated in a narrow frequency range.
This "leakage" effect always arises with a finite amount of data; the main reason for leakage after the FFT is the instantaneous change and discontinuity in the time-domain signal at the record boundaries.
The following figures20 illustrate the concept of leakage. The first (top) figure shows a 10 Hz sine wave with amplitude 1.0 that is fully periodic along the time axis. Next to it is the resulting FFT, which has a narrow peak at 10 Hz on the frequency axis.
19 It is a software component that always creates the objects of other components within the software. It always uses an interface as a reference and makes the communication between components possible.
20 Retrieved on 14th June, 2013 from http://www.physik.uni-wuerzburg.de/~praktiku/Anleitung/Fremde/ANO14.pdf
Figure 4.6 Periodic Sine Wave (LDS-Group, 2013)
Figure 4.7 FFT (LDS-Group, 2013)
Because of the periodic nature of the signal, the resulting FFT is a discrete narrow peak with a height of 1.0. When, on the other hand, the signal is non-periodic, the result is leakage in the FFT (as shown in the following figure21). Here, as expected, the amplitude is a little less than 1.0, unlike in the previous case. The energy is more dispersed, and this dispersed shape of the FFT makes it more difficult to identify the frequency content of the input signal.
Figure 4.8 A-periodic Sine Wave (LDS-Group, 2013)
Figure 4.9 FFT (LDS-Group, 2013)
4.3.1 FFT Windowing
This leakage effect can be minimized by "windowing". Here, we multiply the original time-domain signal by a window function22 that forces the signal to be zero at the start and at the end of the time record. In this way, when multiple time records are placed end to end, the signal is forced to be periodic. One problem is solved, but the signal in each of the time records is no longer a sinusoid.
21 Retrieved on 14th June, 2013, Hann window: http://www.physik.uni-wuerzburg.de/~praktiku/Anleitung/Fremde/ANO14.pdf
22 Retrieved on 31st July, 2013 from http://blog.prosig.com/2009/07/20/data-windows-what-why-and-when/
Figure 4.3.1 Hanning Window (Mathey, 20)
4.3.1.1 The "Hanning" window
The following figure23 illustrates the effect of windowing the original signal (i.e. the time-domain signal after windowing). It shows that the signal is no longer a sinusoid, but it is now periodic. Here, the "Hann" function (shown above) is applied.
Figure 4.3.1.1 Sinoside signal multiplied by window (Mathey, 20)
After applying this window, the leakage is reduced. Consider the 10 Hz example discussed above: the resulting spectrum shows a sharp, narrow peak with an amplitude of 1.0. It does not have exactly the shape of the original (periodic) sine wave's spectrum, but the amplitude and frequency error has been corrected.
23 Periodic signal: retrieved on 1st August, 2013 from http://blog.prosig.com/2009/07/20/data-windows-what-why-and-when/
Figure 4.3.1.2 Time domain of Sinoside and windowed effect (LDS-Group, 2013)
Applying the "Hanning" window to the time-domain signals results in a periodic effect.
Figure 4.3.1.3 Before and after applying windowing (LDS-Group, 2013)
So, in conclusion, the window function greatly reduces the overall impact of leakage, giving a better understanding and representation of the frequency spectrum of the data.
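The leakage reduction described above can be verified numerically. In the following sketch (NumPy, illustrative; the 10.5 Hz tone is chosen deliberately so that it is not periodic in the record), a Hann window lowers the energy smeared far away from the true peak compared with no window:

```python
import numpy as np

fs, N = 1000, 1000
t = np.arange(N) / fs
x = np.sin(2 * np.pi * 10.5 * t)   # 10.5 Hz: non-integer cycles in 1 s

no_window = np.abs(np.fft.rfft(x))              # rectangular (no window)
hann = np.abs(np.fft.rfft(x * np.hanning(N)))   # Hann-windowed

# Leakage: spectral magnitude far away from the 10.5 Hz peak.
far = np.arange(len(no_window)) > 30
print(hann[far].sum() < no_window[far].sum())   # → True
```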
4.3.2 Overlap Processing
With windowing, data can be missed, and this can have a significant impact at the beginning or the end of each record: the data is reduced and/or set to zero over one half of the time record. So, it needs to be ensured that events happening in these reduced-amplitude areas are still captured. A processing technique called "overlap" processing exists for this. By applying this technique, events occurring at or near the beginning and end of the time records are properly weighted. The following figure represents records processed "end-to-end", i.e. with 0% overlap.
Figure 4.3.2.1 0% Overlap Processing (Mathey, 20)
Here, the time-domain data is aligned in a way that, due to windowing, the low-amplitude regions carry less information about events. The following shows 50% overlap, where the information is always retained.
Figure 4.3.2.2 50% Overlap Processing (Mathey, 20)
Typically, a 67% overlap (following figure) is considered sufficient to properly weight the events and information near the beginning/end of the time record.
Figure 4.3.2.3 67% Overlap Processing (Mathey, 20)
However, 75% overlap is somewhat better, as demonstrated in the next figure. Today, with the high processing capabilities of computers, there is little reason not to utilize overlap processing.
Figure 4.3.2.4 75% Overlap Processing (Mathey, 20)
The continuous data stream must be partitioned so that it becomes the input to the FFT blocks. As a prerequisite, the data must be in blocks of size 2^n, and for that it is necessary to separate the data accordingly. These blocks form the necessary initialization and entry point for the FFT block and provide a specific amount of data to the FFT module. As input, these blocks are then transformed into other specific data after applying the FFT operations and conversions.
This block separation needs to be done in two modes (i.e. offline and online). In offline mode, there is a fixed time stamp; the block position is initialized, and from that position a specific amount of data (i.e. possibly 2^n samples) is read and stored into memory. For online data, a timestamp needs to be inserted on each block as it is read at some specific instant. For online blocks, the data must be synchronized as well.
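The block separation with overlap described in this section can be sketched as follows (a minimal illustration in Python; the function name, block size, and overlap fraction are hypothetical, not the "LOFAR" implementation):

```python
def split_blocks(samples, block_size=8, overlap=0.5):
    """Split a stream into fixed-size blocks; consecutive blocks share
    an `overlap` fraction of their samples (0.0 = end-to-end processing)."""
    step = int(block_size * (1 - overlap))
    blocks = []
    for start in range(0, len(samples) - block_size + 1, step):
        blocks.append(samples[start:start + block_size])
    return blocks

data = list(range(20))                                  # stand-in stream
blocks = split_blocks(data, block_size=8, overlap=0.5)  # step of 4 samples
print(len(blocks))       # → 4 blocks, starting at samples 0, 4, 8, 12
print(blocks[1])         # → [4, 5, 6, 7, 8, 9, 10, 11]
```

In a real online mode, each produced block would additionally carry the timestamp mentioned above.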
4.4 FFT Analysis
As already mentioned, the recorded time signals are transformed into the frequency domain, and all frequencies become visible after the FFT is applied. The FFT operation gives its output in the form of complex numbers with real and imaginary parts. The real portion can be interpreted as the amplitudes of the component cosine waves, and the imaginary portion as the amplitudes of the component sine waves. The following diagram (Smith S. W., 1999) represents the FFT output, with the time domain serving as input for the real Fourier transform and for the complex Fourier transform.
Figure 4.4 DFT Output (Smith S. W., 1999)
Therefore, the FFT or DFT is functionally a change of basis. The basis into which the FFT transforms the original signal is a set of sine and cosine waves. For that basis to express all possible inputs, it must be able to represent phase and amplitude at the same time; hence each component is represented as a complex number. The amplitudes of the cosine waves are contained in Re X[k], while the amplitudes of the sine waves are contained in Im X[k]. When plotting a spectrum, the positive frequencies are plotted on the right-hand side and the negative frequencies on the left-hand side of the spectrum.
From the above discussion it is clear that the FFT returns a set of complex numbers, with a few exceptions. First, the spectral components at f = 0 and f = fs/2 (the Nyquist frequency) are both real. The number of FFT elements is equal to the size of the time sample. The second half of these complex numbers specifies the negative frequencies and contains the complex conjugates of the first half (the positive frequencies), and therefore does not hold any new information.
"The FFT returns k complex numbers with spectral components as follows:
a. The first item is for f = 0. Since it has to be real, its imaginary component is 0.
b. The next k/2 items are for all positive frequencies in increasing order: Δf, 2Δf, 3Δf, … (k/2−1)Δf, (k/2)Δf = fs/2. The element for the Nyquist frequency is real and is shared with the one for f = −fs/2, since they are equal.
c. The next k/2−1 elements contain the negative frequency components. They are also placed in mathematically increasing order from the most negative frequency f = −(k/2−1)Δf up to f = −Δf. In other words, they are sorted from maximum negative to minimum negative frequency. In this sense they appear to be in reverse order of frequency." (Sek)
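The ordering and symmetry properties listed above can be checked directly. The following sketch (NumPy, illustrative; the random input is arbitrary) verifies that for a real input the f = 0 and Nyquist bins are real and the negative-frequency half consists of complex conjugates of the positive half:

```python
import numpy as np

x = np.random.default_rng(1).standard_normal(16)   # real input, k = 16
X = np.fft.fft(x)
k = len(X)

# X[0] (f = 0) and X[k/2] (Nyquist) carry no imaginary part.
print(abs(X[0].imag) < 1e-9)        # → True
print(abs(X[k // 2].imag) < 1e-9)   # → True

# The negative-frequency half mirrors the positive half as conjugates,
# so it holds no new information.
print(np.allclose(X[k // 2 + 1:], np.conj(X[1:k // 2][::-1])))  # → True
```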
5. Types of Noise & Noise Removal Techniques
As we discussed in the first chapter of this report, the objective of the whole process is to get rid of background or irrelevant noise signals that are detected during the recording of the noise of test objects. For this, the time signals are supposed to pass through different modules in such a way that any noise removal technique can work; this is the prerequisite that must be fulfilled before any noise removal technique can be applied.
Before going into the details of noise and its removal, let us first elaborate on a few types of noise.
5.1 Overview of Noises
5.1.1 White Noise
White noise is a process having a flat power spectral density. It is noise whose power spectrum is constant and independent of the incoming frequencies. White noise is so named because of its similarity to white light, which contains an equal amount of all colors. At high frequencies, white noise turns pinkish. Practically, the signal power in a constant bandwidth centered at a frequency fb does not change when fb is varied. In reality, therefore, noise is never truly white; it is relatively constant up to a certain frequency known as the cut-off frequency. (Singh J., 2010)
By definition, white noise is a process whose variables are randomly generated and uncorrelated, with zero mean and finite variance σ². (About.com Economics)
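The two properties above (uncorrelated zero-mean samples; flat power spectrum) can be illustrated numerically. A sketch (NumPy, not part of the thesis software; the block sizes are arbitrary) averages the power spectrum over many white-noise realizations and checks that no frequency band dominates:

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal((200, 1024))     # 200 white-noise realizations

# Average power per frequency bin (excluding the DC and Nyquist bins).
power = np.abs(np.fft.rfft(noise, axis=1)) ** 2
mean_power = power[:, 1:-1].mean(axis=0)

# Flatness: the strongest bin is not far above the overall mean level.
flatness = mean_power.max() / mean_power.mean()
print(flatness < 1.5)    # → True: the spectrum is roughly flat
```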
5.1.2 Crosstalk
Crosstalk is unwanted interference and coupling between signal paths. There are three reasons behind the generation of crosstalk: electrical coupling in the transmission media, poor control of the frequency response (e.g. bad filters or inappropriate design), and excessive signal levels. Analog transmission is always distorted by crosstalk. (Freeman)
5.2 Noise Removal Techniques – Filter Design
Designing a filter specific to a noise is always a challenging task. These filters suppress the noise and provide a way to reduce the irrelevant signals within the desired signal intended for the user.
To estimate a signal disturbed by noise, the signal is passed through a filter that is supposed to reduce or remove the noise while leaving the original signal unchanged (Singh A., 2001). This can be seen in the following figure24.
Figure 5.1 Basic Filter (Singh A. , 2001)
The design of these types of filters was introduced by the pioneering work of Wiener and was later extended and enhanced by other scientists. A filter can be either "fixed" or "adaptive".
5.2.1 Fixed Filters
The design of fixed filters needs prior information about the real signal and the noise inside it, i.e. if we know the signal and the noise before processing, we can design a filter that passes the frequencies contained in the incoming signal and discards the frequency band occupied by the noise.
5.2.2 Adaptive Filters
Adaptive filters, on the other hand, adjust their response and filter out the correlated signal in the input. They need very little or no a priori information about the signal and noise characteristics. If the signal is narrowband and the noise broadband, which is usually the case, or vice versa, no a priori knowledge is required; otherwise they require a signal (desired response) that is somehow correlated with the signal to be estimated. Moreover, adaptive filters have the capability of adaptively tracking the signal under non-stationary conditions.
24 Basic filter: retrieved on 15th July, 2013 from http://www.cs.cmu.edu/~aarti/pubs/ANC.pdf
Noise removal is another form of optimal filtering that involves producing an estimate of the noise by filtering the reference input and then subtracting this noise estimate from the primary (original) input containing both signal and noise.
Figure 5.2 Basic Adaptive Filter Design (Singh A. , 2001)
Here, a reference input is included that contains a correlated estimate of the noise to be removed and cancelled.
There is always a high risk involved in subtracting noise from the input signal: a distorted noise estimate can distort the original signal as well, and if the removal is not done properly, the noise can even increase. This all depends on the output of the filter for the reference noise input. The reference input and the noise input are correlated with each other. If we know all the characteristics and behaviors of the noise-emitting channels and have an idea of the reference inputs, we can make a close estimate of the noise from the reference input by designing a fixed filter. However, due to the unpredictability and unreliability of the transmission paths, filtering and subtraction are controlled by adaptation. Therefore, an adaptive filtering process needs to be developed so that its response can be adjusted in order to minimize the error signal. (Singh A., 2001)
5.3 Wiener Filter
In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target signal. It is designed to minimize the difference between the filtered output and some desired signal, which is why it is often called a least mean square error technique: the Wiener filter minimizes the mean square error between the estimated random process (the filtered process) and the desired process. (Singh J., 2010)
The Wiener filter tries to give an optimal solution in the sense that it maximizes the ratio of the actual signal to the noise signal.
Two key conditions must hold for the Wiener filter to be the optimal filter, and they determine when the Wiener filter should be used:
1. Both the noise and the desired signal are stationary processes.
2. The desired signal, or at least its ideal characteristics, is known.
When both conditions hold, the Wiener filter can be stated to be the optimal filter. (Schmidt, 2011)
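In formal terms (a standard textbook formulation, added here for clarity), the filter coefficients $h_i$ are chosen to minimize the mean square error

```latex
J(h) = E\!\left[\left(S(n) - \sum_{i=0}^{p} h_i\, Y(n-i)\right)^{2}\right],
```

and setting the derivatives $\partial J / \partial h_k$ to zero yields the Wiener-Hopf normal equations

```latex
\sum_{i=0}^{p} h_i\, r_{YY}(k-i) = r_{SY}(k), \qquad k = 0, \dots, p,
```

where $r_{YY}$ is the autocorrelation of the noisy input $Y$ and $r_{SY}$ its cross-correlation with the desired signal $S$.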
In the following figure (Singh J., 2010) we can observe how the Wiener filter is expected to remove noise from the corrupted signal Y(t) and produce an output close to S(t).
Figure 5.3 Basic Wiener Filter (Singh J., 2010)
The Wiener filter thus accepts the contaminated signal and estimates a linear filter L(.) whose output comes as close as possible to the desired signal S(t). The goal here is to remove noise from a signal. Note that this filter is not adaptive, as it assumes that the input provided to it is stationary. The Wiener filter estimates the error between the desired signal and the input signal contaminated with noise; this error is squared, and optimal filtering is achieved by minimizing the error equation (e). In the following figure (Singh J., 2010), Y(n) is the input signal mixed with white noise; it is forwarded to the Wiener filter block, which produces a filtered output that is subtracted from the desired response S(n). The result is the estimation error e(n).
Figure 5.4 Complete Design of Wiener Filter (Singh J. , 2010)
A short code snippet for the Wiener filter, which works in the time domain for noise removal, is given in the appendix at the end of this document.
6. “LOFAR” Design and IT Architecture
The purpose of this chapter is to communicate the design, main flow and overall architecture of the software component. Starting with the main flow chart describing the overall process of background noise removal, the other UML abstractions are then discussed and the architecture-level solution is presented.
The high-level software architecture, the important use cases and the class-level architecture are presented in this section.
6.1 Time Signal Processing for Noise Removal
Figure 6.1 demonstrates the process of passing time signals through several processing layers to remove background noise. There are, tentatively, five processing layers.
6.1.1 Time Signal Windowing
Here, a live data stream or a pre-recorded (offline) data stream is processed with the windowing technique. Data of a specific block size is processed, and an appropriate windowing function is applied to make the signal periodic; the output is then sent as input to the FFT analysis.
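As an illustration of this windowing step, the following sketch splits a stream into blocks and applies a Hann window. The block size of 1024 is an assumption for the example, not a value mandated by this design.

```python
import numpy as np

BLOCK_SIZE = 1024  # assumed block size for the example

def window_blocks(samples, block_size=BLOCK_SIZE):
    """Split a sample stream into fixed-size blocks and apply a Hann
    window to each, so the block endpoints taper to zero and each block
    becomes quasi-periodic before it is handed to the FFT."""
    n_blocks = len(samples) // block_size
    win = np.hanning(block_size)
    return [samples[i * block_size:(i + 1) * block_size] * win
            for i in range(n_blocks)]

blocks = window_blocks(np.ones(4096))
```

Each returned block can be fed directly to the FFT layer; the tapered endpoints suppress the spectral leakage that a truncated, non-periodic block would cause.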
6.1.2 FFT Analysis
The output of each window is then passed, block by block, to the FFT. Here, time domain signals are transformed into the frequency domain, and complex-valued data is output for further processing.
6.1.3 Noise Removal Algorithm
This layer contains the core logic of the process. A specific noise removal technique is applied here: the Wiener filter can be used for passive noise cancellation on the frequencies, or another passive noise cancellation technique can be substituted. Finally, the processed frequencies are passed to the next layer, which transforms the data back to the time domain.
6.1.4 Reverse FFT (IFFT)
All frequencies output by the previous layer are processed and the data is converted back to the time domain. This step is essential to recover the stream as a signal for display on the user interface.
6.1.5 Waterfall Display
The waterfall display shows the spectrum of frequencies, or the time domain signal filtered from background noise, as the output of the whole process.
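The five layers above can be chained as in the following sketch. It is illustrative only: it uses simple spectral subtraction in place of the full noise removal algorithm, omits overlap-add between blocks, and all signal parameters are invented.

```python
import numpy as np

def process_block(block, noise_floor):
    """One pass of a single block through the five layers:
    window -> FFT -> suppress noise bins -> inverse FFT -> waterfall row."""
    win = np.hanning(len(block))
    spec = np.fft.rfft(block * win)             # layers 1-2: window + FFT
    mag = np.abs(spec)
    # Layer 3 (stand-in): spectral subtraction of an assumed noise floor.
    gain = np.maximum(mag - noise_floor, 0.0) / np.maximum(mag, 1e-12)
    spec *= gain
    cleaned = np.fft.irfft(spec, n=len(block))  # layer 4: inverse FFT
    row = 20 * np.log10(np.abs(spec) + 1e-12)   # layer 5: dB row for waterfall
    return cleaned, row

# Demo block: a 440 Hz tone with a little white noise at fs = 8000 Hz.
fs, n = 8000, 1024
t = np.arange(n) / fs
block = np.sin(2 * np.pi * 440 * t) + 0.05 * np.random.default_rng(1).normal(size=n)
cleaned, row = process_block(block, noise_floor=5.0)
```

Stacking successive `row` vectors over time produces exactly the waterfall spectrum described in layer five.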
Figure 6.1 Time Signal Processing for Noise Removal (Google Images)
6.2 System Flow Chart
Figure 6.2 System Noise Removal Flow
6.3 Use Case Model
Actor: User
Use cases: BrowseForSignals, BrowseForNoiseSignal, PlayFilteredSignal
Here, the noise removal process is applied and the time data signal is processed as follows:
1. Windowing
2. FFT Processing
3. Noise Removal Processing
4. Inverse FFT
5. Waterfall Spectrum
Figure 6.3 Use Case Model
6.4 Use Case Description
6.4.1 Actor
This software component will be part of another software system that hosts all other signal processing components. The actor here is an engineer or user who interacts with the system and performs different operations in order to accomplish signal processing tasks.
6.4.2 Description- BrowseForSignals
The system offers the user the ability to browse, via this component's user interface, for time data signals to be provided to it. This can be a live data stream or any audio wave file that serves the purpose.
6.4.2.1 Pre conditions
1. Devices are setup for live streaming and system is running perfectly.
2. User’s computer has sufficient free memory available to launch system.
6.4.2.2 Post conditions
The system successfully loads the data stream and makes the data available for subsequent modules to process.
6.4.3 Description - BrowseForNoiseSignal
The system offers the user the ability to browse, via this component's user interface, for noise signals to be provided to it. This is a wave file containing a noise signal, which is later blended as a background item with the original time signal.
6.4.3.1 Pre conditions
1. System is in stable state and wave file or noise signal is in the disk.
2. User’s computer has sufficient free memory available to launch system.
6.4.3.2 Post conditions
The system successfully loads the wave file, converts the binary data into signals, and makes the data available for subsequent modules to process.
6.4.4 Description - PlayFilteredSignal
The system offers the user the ability to play the filtered signal. Here the system demonstrates that the background noise that was previously mixed into the original time signal no longer exists within it.
6.4.4.1 Pre conditions
1. The system has already added noise to the original signal, and the noisy signal serves as input to the "FilterNoise" operation.
2. The user's computer has sufficient free memory available to launch the system.
6.4.4.2 Post conditions
The time data signal, recorded offline or online, has no noise in it, and the system is in a stable state after the operation has executed.
6.5 High Level Component Architecture (with Voice Recording and data recording devices as input)
The architecture comprises the following «component» elements: VoiceRecorder, DataStream, LanXI, IDAe, General Data Interface (exposing IGeneralInterface), Block Separator, FFT Analysis, NoiseCanceller, and LightningChart Waterfall Spectrum.
The voice recorder provides the facility to record the voice, generate a data stream from it, and pass the stream to the core of the software (i.e. Block Separator, FFT and Noise Cancellation). The data recording devices (LanXI, IDAe) record signals from the outside world and pass them to the core; these are real-time data recording devices with multiple channels for recording time signals. The NoiseCanceller component applies the noise cancellation technique itself, which could be either Wiener filtering or any other passive or active noise cancellation technique.
Figure 6.4 System Component Architecture
6.6 Class Level Architecture (with Voice Recording and data recording devices as input)
IGeneralDataInterface «interface»: + GetDataStream(ref float[])
LANXIInput: + GetDataStream(ref float[])
IDAeInput: + GetDataStream(ref float[])
MicroPhoneInput: + GetDataStream(ref float[])
FFTAnalysis: + Complex[] PerformFFT()
IFFT: + float[] TransformDataToTimeDomain(Complex[])
INoiseCancaller «interface»: + RemoveNoise(ref float[])
WienerFilterCancallation: + RemoveNoise(ref float[])
KalmanFilterCancellation: + RemoveNoise(ref float[])
MathBlocksCancellation: + RemoveNoise(float[])
FFTProgramableAnalysis: + GetData(), + GetNoise(), + GetNoiseSupressedData()
WaterFallDisplay: + DisplayOnWaterFaillSpectrum()
Figure 6.5 System Design Class Architecture
6.7 Multithreaded- High Level Software Activity
Figure 6.6 System Multithreaded Architecture of Noise Removal Process
7. “LOFAR” Development
The theme of this chapter is to present the basic idea of implementing a noise removal system. "Visual Studio 2010" is used as the environment for building this component. The following environment and frameworks were considered for its development:
1. Microsoft's "Visual Studio"25 is used to design and develop the component.
2. Microsoft's WPF (Windows Presentation Foundation) is used to build the component's interface. The panels of the main window are partitioned and populated with the relevant, specific parts of the logic.
3. Arction's26 "Lightning Charts" framework is used to demonstrate signals and the analyses applied to them.
The main user interface is structured into three panels demonstrating the different stages of the noise removal process.
Signal Panel: renders the time domain signal of the given file. For testing purposes, the discrete time domain samples are collected from a text file and rendered as the "original signal".
Noise + Signal Panel: contains the basic signal contaminated with the noise signal. The noise signal is mixed with the original signal and the result is rendered. This functionality is triggered when the user clicks the "AddNoise" button. Again, the noise is collected as discrete samples from a text file.
Filtered Signal: this panel finally contains the approximation of the original signal and is populated when the user clicks the "FilterNoise" button. The noise is filtered out of the signal using a noise filtering algorithm; the software can include filtering techniques other than the Wiener filter.
25 Retrieved from Wikipedia on 24th August 2012: "Microsoft Visual Studio is an integrated development environment (IDE) from Microsoft".
26 The company provides leading chart components that can easily be integrated with the Microsoft framework. URL: arction.com
7.1 Skeleton of “LOFAR” with WPF
Figure 7.11 Skeleton of LOFAR with WPF application
7.2 Original Signal rendered on "OrignalSignal" Panel
The following signal has a sampling frequency of 1000 Hz, so the chart renders 1000 points per second and the x-axis spans a duration of 1 second. The chart is prepared before it is fed with signals: it is given the minimum and maximum values of the data to be rendered on the Lightning Charts. After this preparation, the data is rendered on the chart as a signal series. See the Appendix for the code.
Figure 7.12 Original Signal
7.3 Signal with noise rendered on "OrignalSignal + Noise" Panel
The original signal is mixed with some background noise; the noise is added to and alters the original signal's data. The result has the same length over the x-axis as the original time domain signal. See the code in the Appendix for details of how the noise is added arithmetically.
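The arithmetic in question is plain sample-wise addition; a minimal sketch with made-up sample values:

```python
import numpy as np

signal = np.array([0.0, 0.5, 1.0, 0.5, 0.0])    # discrete original samples
noise = np.array([0.1, -0.2, 0.05, 0.0, 0.1])   # noise samples read from file

# The contaminated signal is simply the element-wise sum, so both arrays
# must cover the same duration (same length and sampling frequency).
noisy = signal + noise
```

This is why the noise file is truncated or copied to the original signal's length before mixing.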
Figure 7.13 Original signal mixed with background noise
7.4 Filtered Signal rendered on “FilteredSignal” Panel
This panel shows the result after the filtering technique (i.e. the Wiener filter) has been applied. The output signal is not an exact match of the original signal, but an approximation to it. The Wiener filter algorithm is applied to the noisy signal and the output is rendered on this panel. See the Appendix for the detailed code of the Wiener filter.
Figure 7.14 Filtered signal after Wiener filtering
8. Conclusion and Scope for Future Research
8.1 Conclusion of the work
Noise and background signals, along with their sources, must be detectable. The removal of noise always involves approximation. This thesis presented the pre-processing and a complete software flow with different development proposals. A step-by-step, procedural and modular technique was discussed for digital signal processing. Time domain signals are always converted to discrete samples before they are analyzed with the FFT; the FFT takes the input stream and transforms the time signals into frequency-based data for further analysis. Different noise removal techniques were discussed, together with an architecture planned to accommodate several techniques side by side. These approaches provide a way to visualize audio and signal processing. Finally, a short trial implementation was presented for the visualization of time domain signals contaminated with noise. These filters can also be written in the frequency domain for better results.
8.2 Scope of further research
As part of the current research, the noise removal process has been implemented with limited functionality and in a constrained way. There is scope for further research to improve the performance and the process by considering the following:
1. In noise suppression, the filter can be applied after the time domain signals have been windowed, and the discussed algorithms can be implemented in the frequency domain.
2. Other kinds of noise can be added and the algorithms customized accordingly.
3. Math blocks and mathematical operations can be used to suppress noise frequencies more efficiently.
4. Mathematical blocks can be implemented in a modular manner and arranged flexibly, and the output of filtered frequencies can be used to improve the performance of the system by taking the noise type into account.
9. Bibliography
About.com Economics. (n.d.). Retrieved from http://economics.about.com/od/economicsglossary/g/whitenoise.htm
Brigham, E. O. (1988). The fast Fourier transform and its applications. Prentice Hall.
FFT Window Function. (n.d.). Retrieved 06 15, 2013, from http://www.bores.com/courses/advanced/windows/files/windows.pdf
Freeman, R. (n.d.). SearchEnterpriseWAN. Retrieved from http://searchenterprisewan.techtarget.com/feature/Types-of-noise-and-how-they-affect-the-network
Galazzo, S. (n.d.). Sound pattern matching using Fast Fourier Transform in Windows Phone. Retrieved 08 28, 2013, from Nokia: http://developer.nokia.com/Community/Wiki/Sound_pattern_matching_using_Fast_Fourier_Transform_in_Windows_Phone
Google Images. (n.d.). Retrieved 08 28, 2013, from http://patentimages.storage.googleapis.com/EP0624866A2/imgaf001.png
http://www.bruelkjaer.de/. (n.d.). Retrieved 05 29, 2013, from http://www.bruelkjaer.de/Products/analysis-software/data-acquisition/frontend-drivers/pulse-frontend-driver-3099-A.aspx?tab=overview
Brüel & Kjaer. (n.d.). Time Data Recording - Art 7708. Retrieved 08 28, 2013, from http://www.bruelkjaer.de/: http://www.bruelkjaer.de/Products/analysis-software/data-acquisition/data-recording/time-data-recording-7708.aspx?tab=descriptions
LDS-Group. (2013, 08 28). Understanding FFT Windows. Wuerzburg, Byern, Germany.
Mathey, J. (2009, July 20). Prosig Noise & Vibration Blog. Retrieved 08 28, 2013, from DATA WINDOWS: WHAT, WHY AND WHEN: http://blog.prosig.com/2009/07/20/data-windows-what-why-and-when/
Broughton, S. A., & Bryan, K. (2011). Discrete Fourier Analysis and Wavelets: Applications to Signal and Image Processing. John Wiley & Sons, Inc.
Schmidt, S. (2011, 11 01). Kalman Filters, Stochastic signals and processes Lec. 9. Retrieved 08 20, 2013, from http://person.hst.aau.dk/sschmidt/ST/STb_Lec9.pdf
Sek, D. M. (n.d.). FREQUENCY ANALYSIS FAST FOURIER TRANSFORM, FREQUENCY SPECTRUM. Frequency Analysis - Fast Fourier Transform ( FFT ) , 12.
Singh, A. (2001, February). Adaptive Noise Cancellation. 5. India.
Singh, J. (2010). Adaptive Noise Cancellation in Sinusoidal Signal using Wiener Filter. Patiala.
Smith, S. W. (1999). The Scientist and Engineer's Guide to Digital Signal Processing (Second ed.). San Diego, Calif.: California Technical Publishing.
10. Appendix
10.1 C# Source Code: WienerFilter
public double[] doWienerFilter()
{
    double[] rxx = new double[30];   // autocorrelation of the noisy input
    double[] rdx = new double[30];   // correlation estimated from the clean signal
    double[] h = new double[30];     // filter coefficients

    // NOTE: the original listing filled h with an uninitialised sentinel
    // value; the coefficients are expected to be obtained by solving the
    // normal equations rxx * h = rdx (e.g. via Levinson-Durbin recursion).
    int p = 8;
    for (int i = 0; i <= p; i++)
    {
        rdx[i] = 0.0;
        for (int j = 0; j < (SampleSize - i); j++)
        {
            rdx[i] += m_Signal[j] * m_Signal[j + i];
        }
        rdx[i] = rdx[i] / SampleSize;
    }

    // Unit-variance additive white noise raises the zero-lag autocorrelation by 1.
    rxx[0] = rdx[0] + 1.0;
    for (int i = 1; i <= p; i++)
    {
        rxx[i] = rdx[i];
    }

    return WienerFilter(rxx, rdx, p, h);
}

public double[] WienerFilter(double[] a_rxx, double[] a_rdx, int p, double[] a_h)
{
    // Apply the causal FIR filter a_h to the noisy signal.
    for (int k = 0; k < SampleSize; k++)
    {
        m_WienerSignal[k] = 0.0;
        for (int i = 0; i <= p; i++)
        {
            if ((k - i) >= 0)
            {
                m_WienerSignal[k] += a_h[i] * m_SignalWithNoise[k - i];
            }
        }
    }
    return m_WienerSignal;
}
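For comparison, the following Python sketch performs the same computation end to end, but actually solves the Wiener-Hopf normal equations for the coefficients h, which the C# listing leaves unspecified. All names and test signals here are illustrative, not part of the thesis implementation.

```python
import numpy as np

def wiener_fir(noisy, clean_ref, p=8):
    """Estimate order-p FIR Wiener coefficients from the noisy input and a
    reference of the desired signal, then filter the noisy input."""
    n = len(noisy)
    # Autocorrelation of the noisy input at lags 0..p
    ryy = np.array([np.dot(noisy[:n - i], noisy[i:]) / n for i in range(p + 1)])
    # Cross-correlation r_sy(k) = E[s(n) y(n-k)] at lags 0..p
    rsy = np.array([np.dot(clean_ref[i:], noisy[:n - i]) / n for i in range(p + 1)])
    # Wiener-Hopf normal equations: Toeplitz(ryy) h = rsy
    R = np.array([[ryy[abs(i - j)] for j in range(p + 1)] for i in range(p + 1)])
    h = np.linalg.solve(R, rsy)
    # Causal FIR filtering, as in the C# WienerFilter routine.
    return np.convolve(noisy, h)[:n]

rng = np.random.default_rng(2)
t = np.arange(2000)
clean = np.sin(2 * np.pi * t / 40)
noisy = clean + 0.5 * rng.normal(size=t.size)
filtered = wiener_fir(noisy, clean)
```

Because the identity filter is one of the candidates the normal equations choose among, the solved filter can only lower the mean square error relative to the raw noisy signal.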
10.2 C# Source Code: Noise Addition
private void btnAddNoise_Click(object sender, RoutedEventArgs e)
{
    m_WienerSignalData = new double[m_SignalData.Length];
    double[] NoiseSamples = new double[m_SignalData.Length];

    Microsoft.Win32.OpenFileDialog openDialog = new Microsoft.Win32.OpenFileDialog();
    openDialog.Filter = "(*.txt)|*.txt";
    if (openDialog.ShowDialog() == true)
    {
        string strFileName = openDialog.FileName;
        double[] NoiseSamplesFromFile = ReadDataFromFile(strFileName);
        Array.Copy(NoiseSamplesFromFile, NoiseSamples, m_SignalData.Length);

        // Contaminate the original signal by sample-wise addition of the noise.
        for (int index = 0; index < m_SignalData.Length; index++)
        {
            m_WienerSignalData[index] = m_SignalData[index] + NoiseSamplesFromFile[index];
        }

        PrepareSigCharts(OrigNoiseSig, m_WienerSignalData.Min(), m_WienerSignalData.Max());
        FillWaveFormChart(m_WienerSignalData, m_SamplingFrequency, OrigNoiseSig,
                          m_WienerSignalData.Min(), m_WienerSignalData.Max());
    }
}
10.3 C# Source Code Time Data Recording from File
private void btnBrowseSig_Click(object sender, RoutedEventArgs e)
{
    double[] aSignalData = null;
    Microsoft.Win32.OpenFileDialog openDialog = new Microsoft.Win32.OpenFileDialog();
    openDialog.Filter = "(*.txt)|*.txt";
    int dSamplingFrequency = 1000;

    if (openDialog.ShowDialog() == true)
    {
        // Read discrete time domain samples from the chosen text file.
        aSignalData = ReadDataFromFile(openDialog.FileName);
        PrepareSigCharts(SpectrumAnalyzerSig, aSignalData.Min(), aSignalData.Max());
        FillWaveFormChart(aSignalData, dSamplingFrequency, SpectrumAnalyzerSig,
                          aSignalData.Min(), aSignalData.Max());
        m_SignalData = aSignalData;
        m_SamplingFrequency = dSamplingFrequency;
    }
    else
    {
        activeStream = null;
        return;
    }
}