International Journal of Advanced Research in Computer Engineering & Technology (IJARCET)
Volume 4 Issue 8, August 2015
ISSN: 2278 – 1323 All Rights Reserved © 2015 IJARCET 3419
Lossy and lossless compression using
combinational methods
Ms. C. S. Sree Thayanandeswari, M.E., MISTE, Assistant Professor, Department of ECE, PET Engineering College, Vallioor.
J. Jeya Christy Bindhu Sheeba, II year M.E. (C.S.), Department of ECE, PET Engineering College, Vallioor.
Abstract—Image compression is the process of reducing the amount of data required to represent an image. It is used in fields such as broadcast TV, remote sensing, and medical imaging. Many common file formats are surveyed and experimental results of various lossy and lossless compression algorithms are given. In the proposed method, images of different types are compressed using both lossy and lossless methods. Here, the lossy compression is done by fractal decomposition and the lossless compression by the LZW algorithm. LZW is a dictionary-based algorithm which is simple and can be used in hardware applications. Fractal compression represents the image in a contractive form; despite its lossy nature it can also be used for lossless compression. A general comparison is made by analysing parameters such as Peak Signal to Noise Ratio (PSNR), Mean Square Error (MSE), Image Fidelity (IF), and Absolute Difference (AD) for the different types of images.
Index Terms—Image compression, LZW, fractal decomposition, mean square error.
1. INTRODUCTION
In today's digitized world, computers and their applications play a mandatory role in every field, and many fields make wide use of audio, image, and digital video processing. Handling large amounts of data (images, videos) requires a large amount of storage space and a huge bandwidth for transmission. A good solution to this problem is image compression, which reduces the redundant information and thereby the storage required.
In this paper, the LZW algorithm is used to produce compressed images without affecting image quality. This is achieved by reducing the total number of bits needed to represent each pixel of an image, which in turn minimizes the memory space needed to store images and allows transmission in little time. There are two types of image compression: lossy and lossless. Depending on the application and the required degree of compression, either type can be chosen. Lossless compression is used where an exact replica of the original image must be reproduced. Lossy compression accepts some loss of data compared to the original image; its advantage is that it offers higher compression ratios than lossless compression.
Fig. 1. Block diagram of the image compression system.
A common characteristic of images is that nearby pixels are correlated and therefore carry redundant information. The first task is to find a less redundant representation of the image. The two major elements of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the source signal. Irrelevancy reduction omits the parts of the signal that are not perceived by the receiver, i.e., the human visual system.
(Fig. 1 pipeline: original image → encoder → channel → decoder → recreated image.)
2. BLOCK DIAGRAM
Fig. 2. Block diagram of the proposed system.
The block diagram starts with the input image, which is first compressed by the LZW algorithm. To be compressed by LZW, it must first be transformed into a binary image: the grey-scale pixel values are converted from decimal to binary. The binary data compressed by LZW is then divided into blocks of 7 bits each, since only 7 bits are needed to represent a byte here; this step is termed decoding by BCH. Thus the compressed image is obtained.
The reverse process is then carried out to delete the extra added bits, and the result is decompressed to recover the binary image. To obtain the original image, the binary image is transformed back into a grey-scale image.
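The decimal-to-binary pixel conversion described above can be sketched with NumPy, assuming 8-bit grey-scale pixels; the sample pixel values here are arbitrary illustrations:

```python
import numpy as np

# A tiny 2x2 8-bit grey-scale image (values are arbitrary examples).
gray = np.array([[200, 13], [0, 255]], dtype=np.uint8)

# Each pixel's decimal value becomes 8 binary digits (MSB first).
bits = np.unpackbits(gray.flatten())

# The inverse step recovers the original grey-scale values exactly.
restored = np.packbits(bits).reshape(gray.shape)
```

For example, the pixel value 200 expands to the bit pattern 11001000, and packing the bits back reproduces the original array.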
3. PROPOSED METHOD
The proposed method uses a compression methodology combining two lossless techniques, LZW along with Huffman coding and the Discrete Cosine Transform (DCT), with a lossy algorithm, fractal compression. The fractal compression algorithm removes some information from the input image, so the output of the fractal method is not fully clear, and the DCT algorithm produces a somewhat blurred output. The LZW algorithm produces a result that is the same as the original image, and in this respect it is superior to the other compression techniques.
3.1 LZW ALGORITHM
The LZW algorithm is named after the scientists Lempel, Ziv and Welch. It is a simple dictionary-based algorithm used for lossless compression of images. Dictionary-based algorithms organize the data in the form of a dictionary: the algorithm scans the file and arranges the data into sequences of strings that occur repeatedly. The LZW algorithm then replaces repeated text with references. If a string is new, it is added to the dictionary; the strings are saved in the dictionary and references are emitted wherever the data repeats. Each word in the dictionary has a particular code, and repeated words are replaced with that code; the code length is constant. The LZW algorithm is used where the file has many repeated strings. It is a computationally fast algorithm and is very effective, since decompression does not require the string table to be transmitted. The principle is to build the dictionary by substituting codes for patterns in the input image, and the algorithm can be applied to different image formats to remove the repeated strings. The BCH algorithm used along with the LZW algorithm serves to detect and correct errors. The size of an image file compressed by LZW along with BCH increases because it handles monochrome images.
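The dictionary-building scheme described above can be sketched as follows. This is the generic textbook LZW algorithm over a byte string, not the authors' exact implementation; variable-width code packing is omitted and codes are kept as plain integers:

```python
def lzw_compress(data: bytes) -> list[int]:
    """Replace repeated strings with dictionary codes."""
    # Seed the dictionary with all single-byte strings (codes 0-255).
    table = {bytes([i]): i for i in range(256)}
    current, out = b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:          # known string: keep growing it
            current = candidate
        else:                           # new string: emit code, extend dict
            out.append(table[current])
            table[candidate] = len(table)
            current = bytes([byte])
    if current:
        out.append(table[current])
    return out

def lzw_decompress(codes: list[int]) -> bytes:
    """Rebuild the dictionary on the fly; no table is transmitted."""
    table = {i: bytes([i]) for i in range(256)}
    prev = table[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        # Special case: the code refers to the entry currently being built.
        entry = table[code] if code in table else prev + prev[:1]
        out += entry
        table[len(table)] = prev + entry[:1]
        prev = entry
    return bytes(out)
```

Because the decoder reconstructs the same dictionary from the code stream itself, highly repetitive inputs compress well while the round trip stays exact.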
(Fig. 2 pipeline: input image → compress by LZW → decode by BCH → compressed image; compressed image → encode by BCH → decompress by LZW → original image.)
3.2 DISCRETE COSINE TRANSFORM
The DFT has good computational efficiency, but it is harder to design and has poor energy compaction. Energy compaction is the capacity to concentrate the energy of the spatial samples into few coefficients in the frequency domain, and it is very important for image compression. Since the DCT itself does not discard any bits and does not introduce any distortion, its coefficients can be quantized, and it can also be used in lossless compression.
The DCT works well in separating the image into components of differing frequencies, so that it can be compressed without losing the major information. The edges and borders in images compressed by DCT are clearly visible without blur or distortion. In DCT processing, the image is first broken into 8×8 blocks of pixels. Then, from top to bottom and left to right, the DCT is applied to each block, and the blocks of coefficients are compressed by the process of quantization. The compressed array representing the image is stored in less space than the original image. The original image is recovered by decompression using the Inverse Discrete Cosine Transform (IDCT); the DCT and IDCT are symmetric in nature.
Before applying the DCT, the pixel values are considered on a black-to-white scale ranging from 0 to 255: pure black pixels are denoted by 0 and pure white pixels by 255. This is why the image appears black and white or grey. An image contains thousands of 8×8 blocks, and the compression is done block by block; in this way every block is compressed and the resultant image is obtained.
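The 8×8 block transform and quantization steps above can be sketched with an orthonormal DCT-II matrix in NumPy. This is a hand-rolled illustration, not the authors' MATLAB code, and the uniform quantization step of 16 is an arbitrary choice:

```python
import numpy as np

N = 8
# Orthonormal DCT-II basis matrix: row k = frequency, column n = sample.
k = np.arange(N).reshape(-1, 1)
n = np.arange(N).reshape(1, -1)
C = np.sqrt(2.0 / N) * np.cos(np.pi * (2 * n + 1) * k / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

def dct2(block):
    return C @ block @ C.T        # 2-D DCT of one 8x8 block

def idct2(coeffs):
    return C.T @ coeffs @ C       # inverse transform (C is orthogonal)

# One 8x8 block of pixel values; quantize coefficients with step 16.
block = np.arange(64, dtype=float).reshape(8, 8)
q = np.round(dct2(block) / 16) * 16   # the lossy quantization step
restored = idct2(q)
```

Without quantization the transform round-trips exactly; with quantization the reconstruction error stays bounded by the quantization step, which is the trade-off the text describes.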
4 FRACTAL DECOMPOSITION ALGORITHM
Fractal image compression is based on an Iterated Function System (IFS). The method has a source image and a destination image: the source image is known as the attractor, and the destination image is the output or recreated image. At first the image is partitioned into small parts known as blocks, which should not overlap one another. Each destination block is mapped to another block which is assembled after the removal of repeated bits. The transforming operator is known as a contractive function: it transforms the compressed image while the visual effect does not change. The fixed point is reached when the transformation has been applied to the N points of the image, which can be done by elementary transformations. The basic operation needed to compress the image is this contractive transformation, and dividing and contracting the image by such a transformation is named fractal transformation or fractal decomposition. It is advantageous since it depicts the image in a contractive form. Fractal compression is a recent lossy compression method based on the use of fractals, exploiting the resemblance between different parts of an image.
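The block-matching idea behind IFS coding can be sketched as a toy example: each range block is approximated by a contractive affine map s·D + o of a downsampled domain block, and decoding iterates those maps from an arbitrary start image. This is a generic illustration, not the authors' method; the block size of 4 and the contractivity clamp |s| ≤ 0.9 are arbitrary choices:

```python
import numpy as np

R = 4  # range-block size; domain blocks are 2R x 2R, averaged down to R x R

def domain_pool(img):
    """All 2Rx2R domain blocks, downsampled by 2x2 averaging."""
    pool = []
    for i in range(0, img.shape[0] - 2 * R + 1, R):
        for j in range(0, img.shape[1] - 2 * R + 1, R):
            d = img[i:i + 2 * R, j:j + 2 * R]
            pool.append(d.reshape(R, 2, R, 2).mean(axis=(1, 3)))
    return pool

def encode(img):
    """For each range block, pick (domain, scale s, offset o) minimizing error."""
    pool, code = domain_pool(img), []
    for i in range(0, img.shape[0], R):
        for j in range(0, img.shape[1], R):
            r, best = img[i:i + R, j:j + R], None
            for k, d in enumerate(pool):
                var = ((d - d.mean()) ** 2).sum()
                # Least-squares affine fit of the domain onto the range.
                s = 0.0 if var == 0 else ((d - d.mean()) * (r - r.mean())).sum() / var
                s = float(np.clip(s, -0.9, 0.9))   # enforce contractivity
                o = r.mean() - s * d.mean()
                err = ((s * d + o - r) ** 2).sum()
                if best is None or err < best[0]:
                    best = (err, k, s, o)
            code.append(best[1:])
    return code

def decode(code, shape, n_iter=12):
    """Iterate the maps from a blank image; contraction drives convergence."""
    img = np.zeros(shape)
    for _ in range(n_iter):
        pool, out, idx = domain_pool(img), np.empty(shape), 0
        for i in range(0, shape[0], R):
            for j in range(0, shape[1], R):
                k, s, o = code[idx]; idx += 1
                out[i:i + R, j:j + R] = s * pool[k] + o
        img = out
    return img
```

Because every map is contractive, repeated application converges to the attractor regardless of the starting image, which is why only the map parameters need to be stored.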
5 PERFORMANCE CRITERIA FOR IMAGE COMPRESSION
SNR:
The standard quantity for measuring image quality is the signal-to-noise ratio, the ratio of the power of the signal to the power of the noise in the signal. In decibels,
SNR(dB) = 10 log10(σx² / MSE)
where σx² is the variance of the original image signal.
PSNR:
The most common measure for representing the quality of the reconstructed image is the peak signal-to-noise ratio, defined as the ratio of the maximum possible signal power to the power of the corrupting noise:
PSNR(dB) = 10 log10(255² / MSE)
where 255 is the peak value of the image signal.
MSE:
The mean square error is the average of the squared differences between the original and reconstructed images; it measures the distortion in the retrieved image. For an x×y image,
MSE = (1/xy) Σ_{m=0}^{x−1} Σ_{n=0}^{y−1} [X(m,n) − Y(m,n)]²
where X is the original image and Y the reconstructed image.
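The three criteria above can be computed directly as a sketch, with X the original and Y the reconstructed image as 2-D float arrays:

```python
import numpy as np

def mse(X, Y):
    """Average squared difference between original and reconstruction."""
    return np.mean((X - Y) ** 2)

def psnr_db(X, Y, peak=255.0):
    """Peak signal power over noise power, in decibels."""
    return 10 * np.log10(peak ** 2 / mse(X, Y))

def snr_db(X, Y):
    """Signal variance over noise power, in decibels."""
    return 10 * np.log10(np.var(X) / mse(X, Y))

X = np.array([[255.0, 0.0], [0.0, 255.0]])
Y = X.copy(); Y[0, 1] = 10.0     # corrupt one pixel by 10
print(mse(X, Y))                 # 25.0
print(round(psnr_db(X, Y), 2))   # 34.15
```

Corrupting one of four pixels by 10 gives MSE = 10²/4 = 25, and PSNR = 10 log10(255²/25) ≈ 34.15 dB, illustrating how a single pixel error propagates into the metrics.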
6 RESULTS AND DISCUSSION
The performance comparison between lossy and lossless compression is done using MATLAB. The lossy compression uses the fractal decomposition method, and the lossless compression uses two algorithms, DCT and LZW.
In this paper, input images of different formats are taken and loaded into the system for compression. At first
the loaded input image is displayed. This image then proceeds to the lossy compression stage, which produces the image compressed by the fractal decomposition method: the loaded input image is converted to grey-scale values, the binary values are obtained from them, and the binary values are converted into the compressed image. In the next step, the image compressed by the fractal decomposition method is compressed by the lossless DCT algorithm, which provides a better result than the fractal compression method. The image compressed by the DCT algorithm is then compressed by the LZW algorithm, a lossless method, which provides a better result than the DCT algorithm.
This combinational method combines fractal decomposition as the lossy method with DCT and LZW for the lossless compression. In this paper, image types such as bmp, tif, png and jpg formats are used; these images are of black-and-white type, and the given coloured images are processed in grey-scale form only.
Fig. 3. Result of compressed image of bmp type (panels: input image; fractal method; DCT; LZW).
Fig. 4. Result of compressed image of tif type (panels: input image; fractal method; DCT; LZW).
Fig. 5. Result of compressed image of png type (panels: input image; fractal method; DCT; LZW).
Fig. 6. Result of compressed image of jpeg type (panels: input image; fractal method; DCT; LZW).
Table 1: Summarized result

Parameter                    | Lossy (fractal decomposition) | Lossless (LZW)
Average absolute difference  | 0.2198                        | 0.0105
Image fidelity               | 0.1851                        | 0.0004
SNR                          | 7.3152                        | 3.1696
PSNR                         | 9.8407                        | 5.7365
MSE                          | -0.0717                       | -0.0001
Table 2: Comparison of different image types

Image type | Image name | PSNR   | SNR    | MSE
bmp        | bird       | 5.73   | 3.16   | 0.0001
tif        | women      | -20.23 | -25.39 | 0.10
png        | balloon    | -22.64 | -22.91 | 0.18
jpeg       | penguin    | 9.87   | 7.35   | 0.07
7 CONCLUSION
Compression is a topic of much significance and can be used in many applications. This paper presents lossy and lossless image compression on different file formats of images. Many different methods have been assessed in terms of the amount of compression they offer, the effectiveness of the method used, and the sensitivity to error. The effectiveness of a method and its sensitivity to error are largely independent of the characteristics of the source set, while the level of compression attained depends greatly on the source file: higher data redundancy helps to achieve a more compressed image. The proposed method has the advantage that the LZW algorithm, combined with the fractal decomposition method, is known for clarity and speed. The major goal is to reduce the computational time and minimize the space occupied.
The tests were carried out on different image sets and their results were assessed by clarity and then by bits per pixel. The evaluation shows that the proposed method is an improvement over other conventional methods.
8 FUTURE WORKS
Future work aims at achieving a better compression ratio using various new techniques. The proposed method handles various image types but does not extend to videos. New algorithms can be merged to reduce the computational time incurred by the creation of the dictionary in the LZW method. The dataset used in this paper is restricted; applying the new algorithms to a larger dataset could be a theme for future research. The algorithm can also be elaborated for the compression of colour images, and the work can be extended to video compression: the data in a video is a three-dimensional collection of coloured pixels with temporal as well as spatial redundancy.
Sree Thayanandeswari received the B.E. degree in Electronics and Communication from Anna University, Chennai, in 2007 and the M.E. degree from Anna University, Chennai, in 2013. She is currently working as an Assistant Professor in the Department of Electronics and Communication, PET Engineering College, Vallioor. Her research areas include digital image processing.
Jeya Christy Bindhu Sheeba received the B.E. degree in Electronics and Communication from Anna University, Chennai, in 2014. She is currently pursuing her M.E. at PET Engineering College.