Low Memory Wavelet Transform for Wireless Sensor Networks

Govt. Model Engineering College
Department of Electronics Engineering



Chapter 1

INTRODUCTION

1.1 Wireless Sensor Networks

Wireless Sensor Networks are networks of wirelessly interconnected devices that

are able to ubiquitously retrieve content such as video and audio streams, still images,

and scalar sensor data from the environment. The sensors are spatially distributed to

monitor physical or environmental conditions and to cooperatively pass their data

through the network to a main location. Most deployed wireless sensor networks

measure scalar physical phenomena like temperature, pressure, humidity, or location

of objects. The wireless sensor network is built of nodes from a few to several

hundreds or even thousands, where each node is connected to one (or sometimes

several) sensors. Sensor networks consist of nodes equipped with sensors, a

processing unit, a short-range communication unit with low data rates, and a battery.

The position of sensor nodes need not be engineered or pre-determined. This allows

random deployment in inaccessible terrains or disaster relief operations. On the other

hand, this also means that sensor network protocols and algorithms must possess self-

organizing capabilities. Another unique feature of sensor networks is the cooperative

effort of sensor nodes. Sensor nodes are fitted with an on-board processor.

Instead of sending the raw data to the nodes responsible for the fusion, sensor nodes

use their processing abilities to locally carry out simple computations and transmit

only the required and partially processed data. The above described features ensure a

wide range of applications for sensor networks. Some of the application areas are

health, military, and security. Sensor networks can also be used to detect foreign

chemical agents in the air and the water. They can help to identify the type,

concentration, and location of pollutants. In essence, sensor networks will provide the

end user with intelligence and a better understanding of the environment. In future,

wireless sensor networks will be an integral part of our lives, more so than the

present-day personal computers.


The sensor nodes are usually scattered in a sensor field as shown in Fig. 1.1. Each of

these scattered sensor nodes has the capabilities to collect data and route data back to

the sink and the end users. Data are routed back to the end user by a multihop

infrastructureless architecture through the sink. The sink may communicate with the

task manager node via the Internet or a satellite. The nodes have to be low-cost and energy

efficient, to allow these systems to economically monitor the environment. For these

reasons, the processing unit of the nodes is typically a low-complexity 16-bit

microcontroller, which has limited processing power and random access memory

(RAM). A sensor node can be equipped with a small camera to track an object of

interest or to monitor the environment, thus forming a camera sensor network. These

imaging-oriented applications, however, exceed the resources of a typical low-cost

sensor node. Even if it is possible to store an image on a low-complexity node using

cheap, fast, and large flash memory, the image transmission over the network can

consume too much bandwidth and energy. Therefore, image processing techniques are

required either (i) to compress the image data, e.g., with wavelet based techniques that

exploit the similarities in transformed versions of the image, or (ii) to extract the

interesting features from the images and transmit only image meta data.

Figure 1.1: Sensor nodes scattered in a sensor field [4]


1.1.1 Sensor Nodes

A sensor node is the fundamental unit of a sensor network. It is made up of four basic

components as shown in Fig. 1.2: a sensing unit, a processing unit, a transceiver unit

and a power unit. They may also have application dependent additional components

such as a location finding system, a power generator and a mobilizer. Sensing units

are usually composed of two subunits: sensors and analog to digital converters

(ADCs). The analog signals produced by the sensors based on the observed

phenomenon are converted to digital signals by the ADC, and then fed into the

processing unit. The processing unit, which is generally associated with a small

storage unit, manages the procedures that make the sensor node collaborate with the

other nodes to carry out the assigned sensing tasks. A transceiver unit connects the

node to the network. One of the most important components of a sensor node is the

power unit. Power units may be supported by a power scavenging unit such as solar

cells. There are also other subunits, which are application dependent. Most of the

sensor network routing techniques and sensing tasks require the knowledge of

location with high accuracy. Thus, it is common that a sensor node has a location

finding system. A mobilizer may sometimes be needed to move sensor nodes when it

is required to carry out the assigned tasks.

Figure 1.2: Components of sensor node [4]


All of these subunits may need to fit into a matchbox-sized module. The required size

may be even smaller than a cubic centimeter, and the node may need to be light enough to remain

suspended in the air. Apart from the size, there are also some other stringent

constraints for sensor nodes. These nodes must

• consume extremely low power

• operate in high volumetric densities

• have low production cost and be dispensable

• be autonomous and operate unattended

• be adaptive to the environment.

1.1.2 Sensor Network Applications

Sensor networks may consist of many different types of sensors such as seismic, low

sampling rate magnetic, thermal, visual, infrared, acoustic and radar, which are able to

monitor a wide variety of ambient conditions, including the following:

temperature, humidity, vehicular movement, lighting conditions, pressure, soil

makeup, noise levels, the presence or absence of certain kinds of objects, mechanical

stress levels on attached objects, and the current characteristics such as speed,

direction, and size of an object.

Sensor nodes can be used for continuous sensing, event detection, event ID, location

sensing, and local control of actuators. The concept of micro-sensing and wireless

connection of these nodes promises many new application areas. The applications can

be categorized into military, environment, health, home and other commercial areas.

It is possible to expand this classification with more categories such as space

exploration, chemical processing and disaster relief.


Chapter 2

IMAGE COMPRESSION

2.1. Wavelet based Image Compression

For image-based applications the resource constraints are particularly severe because

representing visual data requires a large amount of information, which in turn leads to

high data rate (as opposed to low data rate sensing like temperature or pressure).

Typically, images are compressed in order to save communication energy. A common

characteristic of most images is that the neighbouring pixels are correlated and

therefore contain redundant information. Image compression aims at reducing the

number of bits needed to represent an image by removing the spatial and spectral

redundancies as much as possible. More recently, the wavelet transform has gained

widespread acceptance in signal processing in general and in image compression

research in particular. In many applications, wavelet-based schemes (also referred to

as sub-band coding) outperform other coding schemes. Wavelet-based coding is more

robust under transmission and decoding errors, and also facilitates progressive

transmission of images. Because of their inherent multi-resolution nature, wavelet

coding schemes are especially suitable for applications where scalability and tolerable

degradation are important.

Figure 2.1. Wavelet-based image compression: (a) encoder, (b) decoder [6]


Wavelet transform coding first transforms the image from its spatial domain

representation to a different type of representation using wavelet transform and then

codes the transformed values (coefficients). A typical wavelet based image

compression system is shown in Fig. 2.1.

2.2. One Dimensional Wavelet transform

Figure 2.2. 1-D wavelet transform procedure [7]

The fundamental concept behind wavelet transform is to split up the frequency band

of a signal and then to code each sub-band using a coder and bit rate accurately

matched to the statistics of the band. There are several ways wavelet transforms can

decompose a signal into various sub-bands. These include uniform decomposition,

octave-band decomposition, and adaptive or wavelet-packet decomposition. Out of

these, octave-band decomposition is the most widely used. This is a non-uniform band

splitting method that decomposes the lower frequency part into narrower bands and

the high-pass output at each level is left without any further decomposition. The

octave-band decomposition procedure can be described as follows. A Low Pass Filter

(LPF) and a High Pass Filter (HPF) are chosen, such that they exactly halve the

frequency range between themselves. First, the LPF is applied for each row of data,

thereby getting the low frequency components of the row. When viewed in the

frequency domain, the output data contains frequencies only in the first half of the

original frequency range, since the LPF is a half-band filter. Therefore, the data can be

sub-sampled by two according to Shannon's sampling theorem, so that the output data

contains only half the original number of samples. Then, the HPF is applied for the

same row of data, and similarly the high pass components are separated. The low pass

components are placed at the left and the high pass components are placed at the right


side of output row as in Fig. 2.2. This procedure is done for all rows, which is termed

as 1-D wavelet transform.

Figure. 2.3. 1-D wavelet transforms [6]
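To make the row procedure concrete, the sketch below splits one row into a low-pass left half and a high-pass right half with subsampling by two. A simple 2-tap averaging/differencing pair stands in for the half-band LPF/HPF pair described above; the function and variable names are illustrative and not taken from any particular implementation.

```c
#include <stdio.h>

/* One-level 1-D split of a single row: low-pass outputs go to the left
 * half of dst, high-pass outputs to the right half, each subsampled by 2.
 * A 2-tap averaging/differencing pair stands in for the half-band filters. */
void row_transform(const double *src, double *dst, int n)
{
    for (int k = 0; k < n / 2; k++) {
        dst[k]         = (src[2 * k] + src[2 * k + 1]) / 2.0; /* low pass  */
        dst[n / 2 + k] =  src[2 * k] - src[2 * k + 1];        /* high pass */
    }
}

int main(void)
{
    double row[8] = {10, 12, 14, 14, 20, 24, 20, 10};
    double out[8];

    row_transform(row, out, 8);
    for (int k = 0; k < 8; k++)
        printf("%6.2f%s", out[k], k == 3 ? " | " : " ");
    printf("\n"); /* left: 4 approximations, right: 4 details */
    return 0;
}
```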

2.3. Two Dimensional Wavelet transform

An image is a two-dimensional signal. The 2-D wavelet transform comes in two different

forms: one consists of two 1-D transforms, and the other is a true 2-D transform. The

first type is called separable and the second non-separable. For the one-level wavelet

transform described above, all wavelet coefficients of one input line are computed and

stored in a second line of the same dimension.

Figure 2.4. 2-D wavelet transform procedure [7]


For the two-dimensional transform, we perform a one-level transform on all rows of

an image, as illustrated in Figure 2.4. Specifically, for each row the approximations

and details are computed by low pass and high pass filtering. This results in two

matrices L and H, each with half of the original image dimension. These matrices are

similarly low pass and high pass filtered to result in the four sub-matrices LL, LH,

HL, HH, which are called sub-bands as shown in Fig 2.5a): LL is the all-low pass

sub-band (coarse approximation image), HL is the vertical sub-band, LH is the

horizontal sub-band, and HH is the diagonal sub-band. This can be done up to any

level, thereby resulting in a pyramidal decomposition as depicted in Fig 2.5b), Fig

2.5c). After the wavelet transform, bit allocation, quantization and entropy coding will

be applied to get the final compressed image.

Figure 2.5.a) One-level image wavelet transform [6]

Figure 2.5.b) Two-level image wavelet transform[6]
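The following sketch applies the separable procedure of Fig. 2.4 to a small matrix: every row is split into approximations and details, and then every column of the intermediate result is split again, leaving the four sub-bands in the four quadrants. The 2-tap averaging/differencing pair again stands in for the actual wavelet filters, and all names are illustrative.

```c
#include <stdio.h>

#define N 4  /* image dimension (even) */

/* Filter one row (stride 1) or one column (stride N) of length n into
 * approximations (first half) and details (second half). */
static void split(const double *in, double *out, int n, int stride)
{
    for (int k = 0; k < n / 2; k++) {
        double a = in[2 * k * stride], b = in[(2 * k + 1) * stride];
        out[k * stride]           = (a + b) / 2.0; /* low pass  */
        out[(n / 2 + k) * stride] =  a - b;        /* high pass */
    }
}

int main(void)
{
    double img[N][N] = {{ 80,  82, 90, 88},
                        { 84,  86, 92, 94},
                        {100,  98, 60, 62},
                        {102, 104, 64, 66}};
    double tmp[N][N], out[N][N];

    for (int r = 0; r < N; r++)                      /* horizontal pass: rows -> L | H */
        split(img[r], tmp[r], N, 1);
    for (int c = 0; c < N; c++)                      /* vertical pass: columns of L|H  */
        split((double *)tmp + c, (double *)out + c, N, N);

    /* out now holds the four sub-bands in its quadrants: the coarse
     * approximation in the top-left quadrant and the three detail
     * sub-bands in the remaining quadrants. */
    for (int r = 0; r < N; r++) {
        for (int c = 0; c < N; c++)
            printf("%7.2f ", out[r][c]);
        printf("\n");
    }
    return 0;
}
```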


The four sub-images are created by breaking the original image into 4 pixel blocks, 2

pixels to a side. If the original image has size 2^n × 2^n, we should have 2^(2n−2) such blocks.

Now for each block, the top-right pixel will go directly into the top-right sub-image.

The bottom-left pixel goes directly into the bottom-left sub-image.

Figure 2.5.c) Three-level image wavelet transform [1]

And the bottom-right pixel goes into the bottom-right sub-image. So these three sub-

images will look like a coarse version of the original, containing 1/4 of the original

pixels. The top-left pixel of each block does not go directly into the top-left sub-

image. Rather, all 4 pixels of the block are averaged and placed into the top-left sub-

image. So the top-left sub-image is effectively a scaled down version of the original

image at 1/4 the original size. But it does not contain any of the original pixels itself

(unless by chance). This process can be repeated now for the top-left image to obtain

the second level wavelet transform. The LL sub-image is broken down into four new

sub-images in the same way, as shown in Fig. 2.6, producing a top-left sub-image of

the previous top-left sub-image that is now a scaled down image 1/16 the original

size. This process is repeated (as shown in Fig 2.7) until the top-left sub-image is

only 1 pixel, with its colour corresponding to the average of all pixels in the original

image. The newly encoded image lends itself to progressive transmission, as the top-

left sub-images can be transmitted first, yielding useful approximations, followed by

the corresponding sub-images which contain original pixels to be used for

reconstructing the next finer top-left sub-image. One pixel is grabbed from each of the

top-right, bottom-left, and bottom-right sub-images of the encoded image to


reconstruct the original image. Then the top-left original pixel is generated by taking

the average value from the top-left sub-image, multiplying it by 4, and subtracting out

each of the other three original pixel values, which leaves the original top-left pixel value. This is

a lossless encoding.
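The bookkeeping for one 2×2 block can be sketched as follows, under the assumption that the block average is stored exactly (here as a floating-point value): the top-left sub-image stores the average, the other three sub-images store their pixels unchanged, and the decoder recovers the top-left pixel from four times the average minus the three stored pixels. All names and values are illustrative.

```c
#include <stdio.h>

int main(void)
{
    /* One 2x2 block of the original image. */
    double tl = 100, tr = 104, bl = 96, br = 90;

    /* Encode: the top-left sub-image stores the block average, the other
     * three sub-images store the remaining original pixels unchanged. */
    double avg = (tl + tr + bl + br) / 4.0;
    double enc_tr = tr, enc_bl = bl, enc_br = br;

    /* Decode: multiply the stored average by 4 and subtract the three
     * stored originals, which leaves the original top-left pixel. */
    double rec_tl = 4.0 * avg - enc_tr - enc_bl - enc_br;

    printf("average stored   : %.2f\n", avg);
    printf("reconstructed TL : %.2f (original was %.2f)\n", rec_tl, tl);
    return 0;
}
```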

Figure 2.6. Two-level image wavelet transform of an image. [1]


Figure 2.7.a) Original image [7]

Figure 2.7.b) One level image wavelet transform of the image. [7]


Chapter 3

WAVELET TRANSFORMS

3.1 Wavelet Transform using Convolution

A wavelet transform can be computed by convolution filter operations. Convolution

is an elementary signal processing operation, where a flipped filter vector is shifted

step-wise over a signal vector while for each position the scalar product between the

overlapping values is computed. The boundaries of a signal can be linearly (zero-

padding), circularly, or symmetrically extended. For the wavelet transform we use

here a symmetrical extension, as it is reasonable to obtain a smooth signal change. If

h = [h0, h1, h2] denotes a filter vector of length L = 3 and s = [s0, s1, s2, s3] denotes a

signal vector of length N = 4, the symmetrical convolution is given by conv(s, h). The

vector h is shifted over the signal vector s and for each position the scalar product of

the overlapping values is computed, giving a result vector of length N + L − 1.
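A sketch of such a symmetric convolution is given below: the flipped filter is shifted over the signal, out-of-range samples are taken from the mirrored signal, and the result has N + L − 1 values. The particular mirroring rule (reflection without repeating the edge sample) and the example filter are assumptions of this sketch.

```c
#include <stdio.h>

/* Mirror an index into [0, n-1] without repeating the edge sample
 * (whole-sample symmetric extension), e.g. -1 -> 1 and n -> n-2. */
static int mirror(int i, int n)
{
    while (i < 0 || i >= n) {
        if (i < 0)  i = -i;
        if (i >= n) i = 2 * n - 2 - i;
    }
    return i;
}

/* Full convolution of signal s (length n) with filter h (length len),
 * using the symmetric extension of s; the result has n + len - 1 values. */
static void conv_sym(const double *s, int n, const double *h, int len, double *y)
{
    for (int k = 0; k < n + len - 1; k++) {
        double acc = 0.0;
        for (int j = 0; j < len; j++)      /* flipped filter: h[j] meets s[k-j] */
            acc += h[j] * s[mirror(k - j, n)];
        y[k] = acc;
    }
}

int main(void)
{
    double s[4] = {1, 2, 3, 4};          /* signal, N = 4 */
    double h[3] = {0.25, 0.5, 0.25};     /* filter, L = 3 */
    double y[6];                         /* N + L - 1 = 6 */

    conv_sym(s, 4, h, 3, y);
    for (int k = 0; k < 6; k++)
        printf("%.3f ", y[k]);
    printf("\n");
    return 0;
}
```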

Figure 3.1. One-level wavelet transform [1]


Figure 3.2. Three-level wavelet transform [1]

A one-dimensional wavelet transform is typically performed by separately applying

two different filters, namely a low pass and a high pass filter to the original signal, as

illustrated in Figure 3.1. The low pass and high pass filter are also referred to as

scaling filter and wavelet filter, respectively; we employ the term wavelet filter for

both filter types. The filtered signals are sampled down by leaving out each second

value such that the odd indexed values (1, 3, 5, . . .) are kept from the low pass

filtering (i.e., the first value is kept, the second one discarded, and so on) and the even

indexed values (2, 4, 6, . . .) are kept from the high pass filtering (i.e., the first value is

discarded, the second one is kept, and so on). The thus obtained values are the wavelet

coefficients and are referred to as approximations and details. As observed in Fig. 3.1,

the aggregate number of approximation and detail coefficients equals the signal

dimension. To reconstruct the original signal the coefficients have to be sampled up

by inserting zeros between each second value. Upsampling of the vector s = [1, 2, 3,

4] thus results in [1, 0, 2, 0, 3, 0, 4, 0]. The up-sampled versions of the

approximations and details are filtered by the synthesis low pass and high pass filters.

(The analysis and synthesis wavelet filters are employed for the forward and

backward transform, respectively.) Both filtered arrays of values are summed up.

Such a wavelet transform can be performed multiple times to achieve a multi-level

transform, as illustrated in Figure 3.2. This scheme, where only the approximations

are further processed, is called the pyramidal algorithm. The thresholding operation

deletes very small coefficients and is a general data compression technique. These

small coefficients are expected to not contain meaningful information.
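The thresholding step can be sketched in a few lines: every coefficient whose magnitude falls below a chosen threshold is set to zero. The threshold value and the names used here are purely illustrative.

```c
#include <stdio.h>
#include <math.h>

/* Set all coefficients with magnitude below `thresh` to zero. */
static int threshold(double *coeff, int n, double thresh)
{
    int zeroed = 0;
    for (int i = 0; i < n; i++)
        if (fabs(coeff[i]) < thresh) {
            coeff[i] = 0.0;
            zeroed++;
        }
    return zeroed;
}

int main(void)
{
    double c[8] = {41.2, -0.3, 0.05, 12.7, -0.8, 0.0, 3.4, -0.02};
    int removed = threshold(c, 8, 1.0);  /* illustrative threshold */

    printf("%d coefficients zeroed:\n", removed);
    for (int i = 0; i < 8; i++)
        printf("%.2f ", c[i]);
    printf("\n");
    return 0;
}
```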


3.1.1. Haar Wavelet

Figure. 3.3. Haar wavelet [7]

The Haar wavelet is the simplest wavelet; it was proposed by Alfred Haar. It is not a

continuous wavelet and therefore cannot be differentiated. However, this property makes it

useful for analyzing signals with sudden transitions; for example, the Haar wavelet can be

used to detect sudden failures of machines. The Haar wavelet mother function is

given as

ψ(t) = 1 for 0 ≤ t < 1/2,  ψ(t) = −1 for 1/2 ≤ t < 1,  ψ(t) = 0 otherwise.    ......(3.1)

Consider applying the Haar wavelet filter to a signal s. The Haar filter coefficients for the low

pass are given as l = [0.5, 0.5], and for the high pass as h = [1, −1]. The detail

coefficients for the first level are computed by shifting the flipped version of the high-pass

filter over the signal, resulting in d = conv(s, h); downsampling then yields the details.

The approximation coefficients for the first level are computed by shifting the flipped

version of the low-pass filter over the signal, resulting in a = conv(s, l); downsampling

then yields the approximations. All the other coefficients are computed similarly. Since

each second convolution value is discarded, the filter coefficients can be shifted by two

sample steps instead of one step. The coefficients for the next level are obtained by

convolving the approximation coefficients obtained in the previous level with the low-pass

and high-pass filter coefficients. Thus, the approximations of a given level form the input for the next


level. The details are not further processed and are copied to the next transform levels.

The result vector contains all the detail coefficients of the previous levels, which are

not further processed. Generally, instead of the filters l = [0.5, 0.5] and h = [1, −1]

which we used for ease of illustration, the normalized filter coefficients l = [1/√2,

1/√2] and h = [1/√2, -1/√2] are used. The normalized coefficients make the transform

orthonormal; thus, conserving the signal energy.
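The first-level Haar analysis described above can be sketched as follows, using the normalized coefficients 1/√2 and moving the filters by two samples per output so that the discarded convolution values are never computed. The example signal and the sign convention of the details are illustrative choices.

```c
#include <stdio.h>
#include <math.h>

/* One level of the Haar analysis with the normalized filters
 * l = [1/sqrt(2), 1/sqrt(2)] and h = [1/sqrt(2), -1/sqrt(2)].
 * The filters are moved by two samples per output value, so the
 * convolution results that would be discarded by downsampling are
 * never computed. (The sign convention of the details depends on
 * the chosen filter alignment.) */
static void haar_level(const double *s, int n, double *approx, double *detail)
{
    const double r = 1.0 / sqrt(2.0);
    for (int k = 0; k < n / 2; k++) {
        approx[k] = r * (s[2 * k] + s[2 * k + 1]);
        detail[k] = r * (s[2 * k] - s[2 * k + 1]);
    }
}

int main(void)
{
    double s[8] = {4, 6, 10, 12, 8, 6, 5, 5};
    double a1[4], d1[4], a2[2], d2[2];

    haar_level(s, 8, a1, d1);   /* level 1: 4 approximations, 4 details      */
    haar_level(a1, 4, a2, d2);  /* level 2 works on the level-1 approximations */

    printf("level-1 details       : ");
    for (int k = 0; k < 4; k++) printf("%6.3f ", d1[k]);
    printf("\nlevel-2 approximations: ");
    for (int k = 0; k < 2; k++) printf("%6.3f ", a2[k]);
    printf("\nlevel-2 details       : ");
    for (int k = 0; k < 2; k++) printf("%6.3f ", d2[k]);
    printf("\n");
    return 0;
}
```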

3.1.2. Daubechies 9/7 Wavelet

A wavelet transform with the filter coefficients in Table 3.1 is computed as in the case

of the Haar-wavelet by low-pass and high-pass filtering of the input signal. The input

signal can be a single line s = [s0, s1, . . . , sN−1] of an image with the dimension N.

The two resulting signals are then down-sampled, that is, each second value is

discarded to form the approximations and details, which are two sets of N/2 wavelet

coefficients.

 j   analysis lowpass l_j   analysis highpass h_j   synthesis lowpass   synthesis highpass
-4        0.037828                                                           0.037828
-3       -0.023849                0.064539               -0.064539           0.023849
-2       -0.110624               -0.040689               -0.040689          -0.110624
-1        0.377403               -0.418092                0.418092          -0.377403
 0        0.852699                0.788486                0.788486           0.852699
 1        0.377403               -0.418092                0.418092          -0.377403
 2       -0.110624               -0.040689               -0.040689          -0.110624
 3       -0.023849                0.064539               -0.064539           0.023849
 4        0.037828                                                           0.037828

Table 3.1. Filter coefficients of the Daubechies 9/7 wavelet [1]

More specifically, the approximations ai are computed as


a_i = convp(s, l, i) = Σ_{j=−4}^{4} l_j · s_{i+j}    .....(3.2)

where the notation convp(s, l, i) is introduced to denote the wavelet coefficient at

position i obtained by convolving signal s with filter l. Note that the filter coefficients

lj are the analysis lowpass filter coefficients given in Table 3.1. Analogously, using

the highpass analysis filter coefficients hj in Table 3.1 we obtain the details di as

d_i = convp(s, h, i) = Σ_{j=−3}^{3} h_j · s_{i+j}    .....(3.3)

Note that due to the symmetry of the lowpass and the highpass filters, the sign of the

filter index j in the subscript of the signal s is arbitrary. Downsampling to [a0, a2, . .. ,

aN−2] gives N/2 approximations and downsampling to [d1, d3, . . . , dN−1] gives N/2

details. The down-sampling can be incorporated into the convolution by only

computing the coefficients remaining after the downsampling. In particular, to obtain

the approximations the center of the lowpass filter shifts over the even signal samples

s0, s2, . . . , sN−2, and to obtain the details the center of the highpass filter moves over

the odd signal samples s1, s3, . . . , sN−1. Intuitively, by aligning the center of the

lowpass filter with the even signal samples and the center of the highpass filter with

the odd samples, each filter “takes in” a different half of the input signal. Taken

together, both filters “take in” the complete input signal.
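A sketch of Eqs. (3.2) and (3.3) with the analysis coefficients of Table 3.1 is given below: the centre of the 9-tap low-pass filter is aligned with the even samples and the centre of the 7-tap high-pass filter with the odd samples, so that only the N/2 approximations and N/2 details are computed. The symmetric boundary handling and all names are assumptions of this sketch.

```c
#include <stdio.h>

/* Analysis filter coefficients of the Daubechies 9/7 wavelet (Table 3.1),
 * indexed j = -4..4 for the low pass and j = -3..3 for the high pass. */
static const double LP[9] = { 0.037828, -0.023849, -0.110624, 0.377403,
                              0.852699,  0.377403, -0.110624, -0.023849,
                              0.037828 };
static const double HP[7] = { 0.064539, -0.040689, -0.418092, 0.788486,
                             -0.418092, -0.040689,  0.064539 };

/* Whole-sample symmetric extension of an index into [0, n-1]. */
static int mirror(int i, int n)
{
    while (i < 0 || i >= n) {
        if (i < 0)  i = -i;
        if (i >= n) i = 2 * n - 2 - i;
    }
    return i;
}

/* conv_p(s, f, i): wavelet coefficient at position i, cf. Eqs. (3.2)/(3.3). */
static double conv_p(const double *s, int n, const double *f, int half, int i)
{
    double acc = 0.0;
    for (int j = -half; j <= half; j++)
        acc += f[j + half] * s[mirror(i + j, n)];
    return acc;
}

int main(void)
{
    double s[8] = {10, 12, 20, 24, 24, 20, 12, 10};
    int n = 8;

    /* Low pass centred on the even samples, high pass centred on the odd
     * samples, which directly yields n/2 approximations and n/2 details. */
    for (int k = 0; k < n / 2; k++)
        printf("a[%d] = %8.4f   d[%d] = %8.4f\n",
               k, conv_p(s, n, LP, 4, 2 * k),
               k, conv_p(s, n, HP, 3, 2 * k + 1));
    return 0;
}
```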

3.2. Wavelet Transform with the Lifting Scheme

With the lifting scheme, the wavelet transform is computed in place, which reduces the

memory requirement. The lifting scheme also reduces the computing time of the wavelet

filter.

3.2.1. Lifting Scheme for the Haar-Wavelet

The forward transform can be performed using the so-called lifting scheme, as illustrated

in Fig. 3.4. The lifting scheme first conducts a lazy wavelet transform, that is, it


separates the signal in even and odd samples. Then, the detail coefficient is predicted

using its left neighbour sample.

Figure 3.4. Basic lifting scheme [1]

The even sample predicts the odd coefficient. Note that the sample indices start with

zero, e.g., s = [s0, s1, s2, . . .]; thus, the even set is given by se = [s0, s2, s4, . . .] and

the odd set is given by so = [s1, s3, s5, . . .]. The update stage ensures that the coarser

approximation signal has the same average as the original signal. For inverse

transform, the stages are reversed.

Figure 3.5. Inverse Transform using lifting scheme [1]
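For the unnormalized Haar filters l = [0.5, 0.5] and h = [1, −1], the split/predict/update stages of Fig. 3.4 and their reversal in Fig. 3.5 can be sketched as below: the detail is the difference between the odd sample and its even neighbour, and the update keeps the pair average. All names are illustrative.

```c
#include <stdio.h>

#define N 8  /* even signal length */

/* Forward Haar lifting (unnormalized): split, predict, update.
 * approx[k] ends up as the pair average, detail[k] as the difference. */
static void haar_lift(const double *s, double *approx, double *detail)
{
    for (int k = 0; k < N / 2; k++) {
        double even = s[2 * k], odd = s[2 * k + 1];
        detail[k] = odd - even;              /* predict: odd from its even neighbour */
        approx[k] = even + detail[k] / 2.0;  /* update: keep the original average    */
    }
}

/* Inverse: undo the update, then the prediction, then merge. */
static void haar_unlift(const double *approx, const double *detail, double *s)
{
    for (int k = 0; k < N / 2; k++) {
        double even = approx[k] - detail[k] / 2.0;
        double odd  = detail[k] + even;
        s[2 * k]     = even;
        s[2 * k + 1] = odd;
    }
}

int main(void)
{
    double s[N] = {3, 5, 4, 8, 10, 10, 7, 3};
    double a[N / 2], d[N / 2], rec[N];

    haar_lift(s, a, d);
    haar_unlift(a, d, rec);
    for (int k = 0; k < N; k++)
        printf("%.1f -> %.1f\n", s[k], rec[k]);  /* exact reconstruction */
    return 0;
}
```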

3.2.2. Linear Interpolation Wavelet

The predict and update filters both have order one, as they use either one past or one

future sample. If correlations over more than two samples shall be addressed, as it is,

for instance, needed for image compression, higher filter orders can be employed. The

linear wavelet transform uses filters of order two. To explain the linear wavelet

transform, let s2k, k = 0, 1, . . ., denote the even samples, and s2k+1, k = 0, 1, . . .,

denote the odd samples. The detail coefficient dk is then computed as


d_k = s_{2k+1} − (s_{2k} + s_{2k+2}) / 2    .....(3.4)

This is a type of prediction where the detail gives the difference to the linear

approximation using the left and the right sample. The approximation coefficient ak is

given as

a_k = s_{2k} + (d_{k−1} + d_k) / 4    .....(3.5)
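A sketch of the linear (order-two) lifting steps of Eqs. (3.4) and (3.5) follows. The mirroring used at the signal boundaries is one common convention and an assumption of this sketch, as are the signal values and names.

```c
#include <stdio.h>

#define N 8  /* even signal length */

/* Linear (order-two) lifting: the detail is the difference to the linear
 * interpolation of the two neighbouring even samples (Eq. 3.4), and the
 * approximation is corrected with the two neighbouring details (Eq. 3.5).
 * Mirroring at the boundaries is one common convention, assumed here. */
static void linear_lift(const double *s, double *a, double *d)
{
    for (int k = 0; k < N / 2; k++) {
        double right = (2 * k + 2 < N) ? s[2 * k + 2] : s[2 * k];  /* mirror */
        d[k] = s[2 * k + 1] - 0.5 * (s[2 * k] + right);
    }
    for (int k = 0; k < N / 2; k++) {
        double left = (k > 0) ? d[k - 1] : d[0];                   /* mirror */
        a[k] = s[2 * k] + 0.25 * (left + d[k]);
    }
}

int main(void)
{
    double s[N] = {2, 4, 6, 8, 10, 9, 8, 7};  /* nearly linear ramp */
    double a[N / 2], d[N / 2];

    linear_lift(s, a, d);
    printf("details       : ");
    for (int k = 0; k < N / 2; k++) printf("%6.2f ", d[k]);
    printf("\napproximations: ");
    for (int k = 0; k < N / 2; k++) printf("%6.2f ", a[k]);
    printf("\n");
    return 0;
}
```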

3.2.3. Lifting Scheme for the 9/7-Wavelet

From the coefficients in Table 3.1, the parameters α, β, γ, δ, and ζ can be derived

through a factoring algorithm, giving

α = −1.5861343420693648

β = −0.0529801185718856

γ = 0.8829110755411875

δ = 0.4435068520511142

ζ = 1.1496043988602418

These parameters form the filters that are applied in the so-called update and predict

steps:

Pα = [α, α]

Uβ = [β, β]

Pγ = [γ, γ]

Uδ = [δ, δ]

Figure 3.6. Lifting scheme structure for Daubechies 9/7 wavelet [2]


Figure 3.7. Inverse wavelet transform for the 9/7 wavelet using lifting scheme [2]

First the original signal is separated into even and odd values. Then follow four

update and predict steps where the parameters α and γ are convolved with the odd

part and parameters β and δ are convolved with the even part of the signal.
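The lifting structure of Figs. 3.6 and 3.7 can be sketched as below: two predict steps (with α and γ) modify the odd samples, two update steps (with β and δ) modify the even samples, and ζ scales the result; the inverse reverses the stages. The scaling convention (which half is multiplied by ζ and which is divided) and the symmetric boundary handling are assumptions of this sketch; perfect reconstruction holds in any case because the inverse mirrors the forward steps exactly.

```c
#include <stdio.h>

#define N 8  /* even signal length */

/* Lifting parameters for the Daubechies 9/7 wavelet (see text). */
static const double coef_alpha = -1.5861343420693648;
static const double coef_beta  = -0.0529801185718856;
static const double coef_gamma =  0.8829110755411875;
static const double coef_delta =  0.4435068520511142;
static const double coef_zeta  =  1.1496043988602418;

static int mirror(int i) { return i < 0 ? -i : (i >= N ? 2 * N - 2 - i : i); }

/* One lifting step: add coef * (left + right neighbour) to every sample
 * whose index has the given parity (1 = predict on odds, 0 = update on evens). */
static void lift(double *x, int parity, double coef)
{
    for (int i = parity; i < N; i += 2)
        x[i] += coef * (x[mirror(i - 1)] + x[mirror(i + 1)]);
}

static void cdf97_forward(double *x)
{
    lift(x, 1, coef_alpha);  lift(x, 0, coef_beta);   /* first predict / update pair  */
    lift(x, 1, coef_gamma);  lift(x, 0, coef_delta);  /* second predict / update pair */
    for (int i = 0; i < N; i++)                       /* scaling (one common convention) */
        x[i] = (i % 2 == 0) ? x[i] * coef_zeta : x[i] / coef_zeta;
}

static void cdf97_inverse(double *x)
{
    for (int i = 0; i < N; i++)
        x[i] = (i % 2 == 0) ? x[i] / coef_zeta : x[i] * coef_zeta;
    lift(x, 0, -coef_delta); lift(x, 1, -coef_gamma); /* undo steps in reverse order */
    lift(x, 0, -coef_beta);  lift(x, 1, -coef_alpha);
}

int main(void)
{
    double x[N] = {10, 12, 20, 24, 24, 20, 12, 10}, orig[N];
    for (int i = 0; i < N; i++) orig[i] = x[i];

    cdf97_forward(x);   /* even slots: approximations, odd slots: details */
    cdf97_inverse(x);
    for (int i = 0; i < N; i++)
        printf("%6.2f -> %6.2f\n", orig[i], x[i]);  /* equal up to rounding */
    return 0;
}
```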


Chapter 4

WAVELET TRANSFORM FOR FIXED POINT

ARITHMETIC

Fixed-point numbers differ from floating-point numbers in that the decimal point is

fixed. A number is represented as a sequence of bits, i.e., a binary word, in a personal

computer or microcomputer. A binary word with M = 16 bits can be given as

b_15 b_14 b_13 b_12 b_11 b_10 . . . b_1 b_0. If this word is interpreted as an unsigned integer, its

numerical value is computed as

A = Σ_{i=0}^{M−1} b_i · 2^i    .....(4.1)

Using fixed-point numbers, it is computed as

A = Σ_{i=0}^{M−1} b_i · 2^(i−n) = 2^(−n) · Σ_{i=0}^{M−1} b_i · 2^i,    .....(4.2)

where n denotes the number of fractional bits.

Q-format:

We use the Q-format to denote the fixed-point number format. The Q-format assumes

the two’s complement representation; therefore, an M-bit binary word always has M−1

bits for the absolute numerical value. The Qm.n format denotes that m bits are used to

designate the two’s complement integer portion of the number, not including the MSB

bit and that n bits are used to designate the two’s complement fractional portion of the

number, that is, the number of bits to the right of the radix point. Thus, a Qm.n

number requires M = m + n + 1 bits. The range of a Qm.n number is [−2^m, 2^m − 2^(−n)]

and the resolution is 2^(−n). The value m in Qm.n is optional; if m is omitted, it

is assumed to be zero. The special case of arithmetic with m = 0 is commonly referred

to as fractional arithmetic and operates in the range [−1, 1]. Here we employ

the general fixed-point arithmetic since our computations require larger number

ranges.
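The Qm.n representation can be sketched as follows: a real value is stored as a 16-bit two's complement integer after scaling by 2^n, and recovered by dividing by 2^n. The chosen format and example value are illustrative.

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

/* Convert a real value to a Qm.n integer (M = m + n + 1 = 16 bits)
 * by scaling with 2^n and rounding; convert back by scaling with 2^(-n). */
static int16_t to_q(double value, int n)  { return (int16_t)lround(value * (1 << n)); }
static double  from_q(int16_t q, int n)   { return (double)q / (double)(1 << n); }

int main(void)
{
    int n = 15;                 /* Q0.15: 15 fractional bits              */
    double value = 0.852699;    /* e.g. the centre 9/7 low-pass coefficient */

    int16_t q = to_q(value, n);
    printf("stored integer : %d\n", q);
    printf("recovered value: %.6f (resolution 2^-%d = %.6f)\n",
           from_q(q, n), n, 1.0 / (1 << n));
    return 0;
}
```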


4.1. Basic Operations in Fixed-Point Arithmetic

1) Changing the Exponent (Virtual Shift)

For some operations the operands are required to have the same exponent. To change the

exponent exp(a) of a number a to the exponent exp(b), write

a · 2^(−exp(a)) = (a · 2^(exp(b)−exp(a))) · 2^(−exp(b)).    .....(4.3)

Thus, if exp(b) ≥ exp(a) we shift the bits of a to the left by exp(b) − exp(a) positions.

Whereas, if exp(b) < exp(a), we shift the bits of a to the right by exp(a) − exp(b)

positions.

2) Addition and Subtraction

For addition and subtraction the two numbers a and b have to be converted to the

exponent of the result number c.

C = a · 2^(−exp(c)) + b · 2^(−exp(c)) = (a + b) · 2^(−exp(c)).    .....(4.4)

If the exponents of a and b are equal, the two numbers can simply be added. In case of

an overflow, the sum of two binary numbers requires one more integer bit in the

result, that is, if the numbers are in the form Qa.b, the result requires the form

Q(a+1).b.

3) Multiplication

The product of two binary words a and b can be computed with an integer

multiplication as

A · B = (a · 2^(−exp(a))) · (b · 2^(−exp(b))) = (a · b) · 2^(−(exp(a)+exp(b))).    .....(4.5)

The exponent of the result is the sum of the input exponents. If the numbers are in the

form Qa.b and Qc.d, the result is in the form Q(a + c).(b + d) for unsigned numbers,

or in the form Q(a+c+1).(b+d) for signed numbers, whereby (a+c) is the maximum

number of integer bits and (b+d) the required number of fractional bits (in order to

not lose precision).

4) Division:

We consider a division of A by B, where the exponents of A, B, and the result C are

equal, i.e., exp(a) = exp(b) = exp(c). The result can thus be evaluated as


C = A / B = (a · 2^(−exp(a))) / (b · 2^(−exp(b))) = a / b; represented with the exponent exp(c), the stored integer result is c = (a · 2^(exp(c))) / b.    ....(4.6)
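The exponent bookkeeping of items 1) to 4) can be sketched as follows: before an addition both operands are brought to the result exponent by a virtual shift, and a multiplication adds the exponents, after which the double-width product is shifted to the desired output format. The struct, the formats, and the example values are illustrative assumptions of this sketch.

```c
#include <stdio.h>
#include <stdint.h>

/* A fixed-point number: integer mantissa plus exponent (number of
 * fractional bits), so its value is mant * 2^(-exp). */
typedef struct { int32_t mant; int exp; } fixed_t;

/* Virtual shift: re-express a with the exponent new_exp (Eq. 4.3). */
static fixed_t align(fixed_t a, int new_exp)
{
    fixed_t r = { 0, new_exp };
    r.mant = (new_exp >= a.exp) ? a.mant << (new_exp - a.exp)
                                : a.mant >> (a.exp - new_exp);
    return r;
}

/* Addition: operands share the result exponent, mantissas are added (Eq. 4.4). */
static fixed_t fx_add(fixed_t a, fixed_t b, int out_exp)
{
    fixed_t r = { align(a, out_exp).mant + align(b, out_exp).mant, out_exp };
    return r;
}

/* Multiplication: mantissas multiply, exponents add (Eq. 4.5);
 * the wide product is then shifted to the requested output exponent. */
static fixed_t fx_mul(fixed_t a, fixed_t b, int out_exp)
{
    int64_t prod = (int64_t)a.mant * (int64_t)b.mant;  /* exponent a.exp + b.exp */
    fixed_t r = { (int32_t)(prod >> (a.exp + b.exp - out_exp)), out_exp };
    return r;
}

static double value(fixed_t x) { return (double)x.mant / (double)(1 << x.exp); }

int main(void)
{
    fixed_t a = { 27941, 15 };  /* ~0.8527 in Q0.15 */
    fixed_t b = { 12288, 10 };  /* 12.0    in Q5.10 */

    printf("a + b = %f\n", value(fx_add(a, b, 10)));  /* ~12.85 */
    printf("a * b = %f\n", value(fx_mul(a, b, 10)));  /* ~10.23 */
    return 0;
}
```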

4.2. Selecting Appropriate Fixed-Point Q-Format

To compute the filter operation with fixed-point arithmetic, the filter coefficients first

have to be converted to the Q-format. We determine the appropriate Q-format for the

filter coefficients by comparing their number range, which is [−0.42, 0.85], with the

Q-format number ranges in Table 4.1. Clearly, a larger number range implies coarser

resolution, i.e., lower precision. Therefore, the Q-format with the smallest number

range (finest resolution) that accommodates the number range of the filter

coefficients should be selected; here, this is the Q0.15 format. (Selecting a format with a

larger number range, e.g., the Q1.14 would unnecessarily reduce precision.) Table 4.1

gives the integer coefficients that result from multiplying the real filter coefficients by

2^15. These integer filter coefficients are employed for the fixed-point filter functions.

An appropriate Q-format must also be determined for the result coefficients of the wavelet

transform, i.e., the approximations and details. Generally, one can first obtain an

estimate by calculating bounds on the number range of the result coefficients.

Through pre-computing the coefficients on a personal computer for a large set of

representative sample images one can next check whether a Q format with a smaller

number range and correspondingly higher precision can be used.


Qm.n     range (min)    range (max)          resolution
Q0.15        -1         0.999969482421875    0.000031
Q1.14        -2         1.99993896484375     0.000061
Q2.13        -4         3.99987792968750     0.000122
Q3.12        -8         7.99975585937500     0.000244
Q4.11       -16         15.9995117187500     0.000488
Q5.10       -32         31.999023437500      0.000977
Q6.9        -64         63.9980468750000     0.001953
Q7.8       -128         127.996093750000     0.003906
Q8.7       -256         255.992187500000     0.007812
Q9.6       -512         511.984375000000     0.015625
Q10.5     -1024         1023.96875000000     0.031250
Q11.4     -2048         2047.93750000000     0.062500
Q12.3     -4096         4095.87500000000     0.125000
Q13.2     -8192         8191.75000000000     0.250000
Q14.1    -16384         16383.5000000000     0.500000
Q15.0    -32768         32767                1.000000

Table 4.1. Number range for the Texas Instruments Qm.n format [1]
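The conversion described in this section can be sketched as below: each real filter coefficient (all of which lie within [−0.42, 0.85]) is multiplied by 2^15 and rounded to a 16-bit integer for use by the fixed-point filter routines; the analysis low-pass coefficients of Table 3.1 serve as the example.

```c
#include <stdio.h>
#include <stdint.h>
#include <math.h>

int main(void)
{
    /* Analysis low-pass coefficients of the 9/7 wavelet, j = -4..4 (Table 3.1). */
    const double lp[9] = { 0.037828, -0.023849, -0.110624, 0.377403, 0.852699,
                           0.377403, -0.110624, -0.023849, 0.037828 };
    int16_t lp_q15[9];

    for (int j = 0; j < 9; j++) {
        lp_q15[j] = (int16_t)lround(lp[j] * 32768.0);   /* multiply by 2^15 */
        printf("j = %+d : %9.6f -> %6d\n", j - 4, lp[j], lp_q15[j]);
    }
    return 0;
}
```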


Chapter 5

FRACTIONAL WAVELET FILTER

The memory requirements of existing image wavelet transform techniques exceed the

limited resources on low-complexity micro-controllers. Implementations of the

discrete two-dimensional image wavelet transform on a personal computer (PC)

generally keep the entire source and/or destination image in memory and separately

apply horizontal and vertical filters. But this approach is typically not possible on

resource-limited platforms. The general approach to build a multimedia sensor node

has been to connect a second platform with a more capable processor to the sensor

node. This increases the cost of the network. In order to overcome these limitations,

the fractional wavelet filter, a novel computational method for the image wavelet

transform, is introduced. The fractional wavelet transform (FRWT) is a generalization of

the classical wavelet transform (WT). This transform is proposed in order to rectify

the limitations of the WT and the fractional Fourier transform (FRFT). The FRWT not only

inherits the advantages of multi-resolution analysis of the WT, but also has the

capability of signal representations in the fractional domain which is similar to the

FRFT. Compared with the classical WT, the FRWT can offer signal representations

in the time-fractional-frequency plane.

The fractional wavelet filter requires only random access memory (RAM) for three

image lines. In particular, less than 1.5 Kbyte of RAM is required to transform an

image with 256 x 256 8-bit pixels using only 16-bit integer calculations. Thus, the

fractional wavelet filter works well within the limitations of typical low-cost sensor

nodes. The fractional wavelet filter achieves this performance through a novel step-

wise computation of the vertical wavelet coefficients. Connect a small camera directly

to the micro-controller of the sensor node and perform the wavelet transform with the

micro-controller, to build a multimedia sensor node. The wavelet transform is

computed at an individual node, unlike the distributed image compression approach

(which strives to minimize energy consumption by distributing the compression

computations across the nodes between a camera node and the sink node). The data


on the MMC-card can only be accessed in blocks of 512 Bytes; thus, sample by

sample access, as easily executed with RAM memory on PCs, is not feasible. Even if

it is possible to access a smaller number of samples of a block, the read/write time

would significantly slow down the algorithm, as the time to load a few samples is the

same as for a complete block. The fractional filter takes this restriction into account

by reading complete horizontal lines of the image data. Throughout this section we

consider the forward (analysis) wavelet transform. We explain two versions of the

fractional wavelet filter, namely a floating-point version and a fixed point version.

Figure 5.1. Illustration of the fractional image wavelet transform [1]

For the first transform level, the algorithm reads the image samples line by line from

the MMC-card while it writes the sub-bands line by line to a different destination on

the MMC-card (SD-card). For the next transform level the LL sub-band contains the

input data. Note that the input samples for the first level are of the type unsigned char

(8 bit), whereas the input for the higher level is either of type float (floating point

filter) or INT16 (fixed-point filter) format. The filter does not work in place and for


each level a new destination matrix is allocated on the MMC card. However, as the

MMC-card has plenty of memory, this allocation strategy does not affect the sensor’s

resources.

5.1. Floating-Point Filter

The floating point wavelet filter computes the wavelet transform with a high precision

using 32 bit floating point variables for the wavelet and filter coefficients as well as

for the intermediate operations. Thus, the images can be reconstructed essentially

without loss of information. For the considered image of dimension N×N, the wavelet

filter uses three buffers of dimension N, one buffer for the current input line and two

buffers for two destination lines, as illustrated in the bottom part of Fig. 5.1. One

destination (output) line forms a row of the LL/HL sub-bands and the other

destination line forms a row of the LH/HH sub-bands.

The fractional wavelet filter approach shifts an input (vertical filter) area across the

image in the vertical direction. The input area extends over the full horizontal width

of the image. The vertical height of the input area is equal to the number of wavelet

filter coefficients; for the considered Daubechies 9/7 filter, see Table 3.1, the input

area has a height of nine lines to accommodate the nine low-pass filter coefficients.

The filter computes the horizontal wavelet coefficients on the fly. The vertical

wavelet coefficients for each sub-band are computed iteratively through a set of

fractional sub-band wavelet coefficients (fractions) ll(i, j, k), lh(i, j, k), hl(i, j, k), and

hh(i, j, k). These fractions are later summed over the vertical filter index j to obtain

the final wavelet coefficients. More specifically, the indices i, j, and k have the

following meanings:

• i, with i = 0, 1, . . . , N/2−1, gives the vertical position of the input (vertical filter) area as 2i. For each vertical filter area, i.e., each value of i, nine input lines are read for low-pass filtering with nine filter coefficients. The filter moves up by two lines to achieve implicit vertical downsampling; thus, a total of (N/2) · 9 lines have to be read. Note that there have to be N/2 sets of final results, each set consisting of an LL, an HL, an LH, and an HH sub-band row.


• j, with j = −4, −3, . . . , +4, specifies the current input line as l = 2i + j, that is, j specifies the current line within the nine lines of the current input area. From the perspective of the filtering, j specifies the vertical wavelet filter coefficient.

• k, with k = 0, 1, . . . , N/2−1, gives the horizontal position of the centre of the wavelet filter as 2k for the horizontal low-pass filtering and 2k+1 for the horizontal high-pass filtering. That is, k specifies the current position of the horizontal fractional filter within the current input area. A structural sketch of these three nested loops is given below.
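The following is a structural sketch of these loops, under several simplifying assumptions: the image is held in memory and a hypothetical read_line() stands in for the block-wise MMC read, the computation uses floating point rather than the fixed-point Q-format, and the boundaries are handled by simple symmetric mirroring. It shows how the LL/HL and LH/HH destination lines are built up from fractions while only one input line and two destination lines are buffered.

```c
#include <stdio.h>
#include <string.h>

#define N 8  /* image dimension (even); kept tiny so the sketch is runnable */

/* Analysis filters of the 9/7 wavelet (Table 3.1): 9-tap low pass (j = -4..4)
 * and 7-tap high pass (m = -3..3). */
static const double LP[9] = { 0.037828, -0.023849, -0.110624, 0.377403, 0.852699,
                              0.377403, -0.110624, -0.023849, 0.037828 };
static const double HP[7] = { 0.064539, -0.040689, -0.418092, 0.788486,
                             -0.418092, -0.040689,  0.064539 };

static unsigned char image[N][N];  /* stands in for the source image on the MMC card */

static int mirror(int i) { return i < 0 ? -i : (i >= N ? 2 * N - 2 - i : i); }

/* Stand-in for the line-wise MMC read: copy one image line into the input buffer. */
static void read_line(int l, double *buf)
{
    for (int c = 0; c < N; c++) buf[c] = (double)image[mirror(l)][c];
}

/* Horizontal low/high-pass coefficient of the buffered line, computed on the fly. */
static double hlow(const double *b, int k)
{
    double acc = 0.0;
    for (int m = -4; m <= 4; m++) acc += LP[m + 4] * b[mirror(2 * k + m)];
    return acc;
}
static double hhigh(const double *b, int k)
{
    double acc = 0.0;
    for (int m = -3; m <= 3; m++) acc += HP[m + 3] * b[mirror(2 * k + 1 + m)];
    return acc;
}

int main(void)
{
    double buf[N], row_a[N], row_d[N];  /* one input line + two destination lines */

    for (int r = 0; r < N; r++)         /* some test content */
        for (int c = 0; c < N; c++)
            image[r][c] = (unsigned char)(10 * r + c);

    for (int i = 0; i < N / 2; i++) {               /* vertical output position 2i      */
        memset(row_a, 0, sizeof row_a);             /* fractions of the LL | HL row     */
        memset(row_d, 0, sizeof row_d);             /* fractions of the LH | HH row     */
        for (int j = -4; j <= 4; j++) {             /* the nine lines of the input area */
            read_line(2 * i + j, buf);
            for (int k = 0; k < N / 2; k++) {       /* horizontal position              */
                double lo = hlow(buf, k), hi = hhigh(buf, k);
                row_a[k]         += LP[j + 4] * lo; /* LL fraction */
                row_a[N / 2 + k] += LP[j + 4] * hi; /* HL fraction */
                if (j >= -2) {                      /* vertical high pass is centred on line 2i+1 */
                    row_d[k]         += HP[j - 1 + 3] * lo; /* LH fraction */
                    row_d[N / 2 + k] += HP[j - 1 + 3] * hi; /* HH fraction */
                }
            }
        }
        /* row_a and row_d would now be written line by line to the MMC destination. */
        printf("row %d (LL|HL): ", i);
        for (int k = 0; k < N; k++) printf("%8.2f ", row_a[k]);
        printf("\n");
    }
    return 0;
}
```

As in the text, each of the N/2 output positions re-reads nine input lines, so (N/2) · 9 line reads are performed in total; the real implementation additionally works in the Q0.15 fixed-point format and writes the destination lines back to the MMC card.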

5.2. Fixed-Point Filter

The fractional fixed-point filter can be realized by first transforming the real wavelet

coefficients to the Q0.15 format. The second requirement is that for all add and

multiply operations the exponent has to be taken into account. For an add operation,

both operands must have the same exponent. A multiply operation requires a result

exponent given by the sum of the input exponents. Both operations can be supported

by an exponent change function that corresponds to a left or right bit-shift. Aside from

these special considerations, the fixed-point filter works similarly as the floating-point

filter.

The computing time for the fractional wavelet filter can be reduced by incorporating the lifting scheme. The lifting scheme cannot be applied to the vertical filtering technique of the fractional filter, nor to the horizontal transform of the first level, because the input lines of the first level have to use an integer buffer array to avoid exceeding the memory. The lifting scheme computes the one-dimensional transform in place, that is, a single buffer holds both the input and the output values, and this buffer must be large enough to contain the final wavelet coefficients. The lifting scheme can therefore be applied to the higher levels of the horizontal transform, as the input lines for these levels use variables with the larger data format (and not only 8-bit unsigned char as for the first-level input). More specifically, the lifting scheme is employed for the computation of the convolution.


Chapter 6

CONCLUSION

The image wavelet transform is highly useful in wireless sensor networks for pre-

processing the image data gathered by the single nodes for compression or feature

extraction. However, traditionally the wavelet transform has not been employed on

the typical low-complexity sensor nodes, as the required computations and memory

exceeded the sensor node resources. This report presented techniques that allow

images to be transformed on in-network sensors with very small random access memory

(RAM). One-dimensional and two-dimensional wavelet transforms were explained.

Then, the computation of the wavelet transform with fixed-point arithmetic on

microcontrollers was described. The fractional wavelet filter, which requires less than

1.5 Kbyte of RAM, was used to transform a grayscale 256 × 256 image. Thus,

the image wavelet transform is feasible for sensor networks formed from low-

complexity sensor nodes. The low-memory transform techniques are not lossless.

However, the performance evaluation illustrates that the loss is typically not visible.

Combining the low-memory wavelet transform techniques with a low-memory image

wavelet coder achieved image compression competitive with state-of-the-art

JPEG2000 compression.