Chapter - 5 : IMAGE COMPRESSION
IP Help Line : 9 9 8 7 0 3 0 8 8 1 www.guideforengineers.com
(Basic)
Q(1) Explain the different types of redundancies that exist in an image. (8M May06 Comp) [8M, MAY 07, ETRX]
A common characteristic of most images is that neighbouring pixels are correlated and therefore carry redundant information. The foremost task, then, is to find a less correlated representation of the image. The two fundamental components of compression are redundancy reduction and irrelevancy reduction. Redundancy reduction aims at removing duplication from the signal source (image/video). Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the Human Visual System (HVS).
Types of redundancies are as follows :
• Spatial redundancy, or correlation between neighbouring pixel values.
• Spectral redundancy, or correlation between different colour planes or spectral bands.
• Temporal redundancy, or correlation between adjacent frames in a sequence of images (in video applications).
Image compression research aims at reducing the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible.
In digital image compression, three basic data redundancies can be identified and exploited : Coding redundancy, Inter-pixel redundancy and Psychovisual redundancy.
1) Coding Redundancy. If the gray levels of an image are coded in a way that uses more code symbols than absolutely necessary to represent each gray level, the resulting image is said to contain coding redundancy.
2) Inter-pixel Redundancy. The value of any given pixel can be predicted from the values of its neighbours; that is, neighbouring pixels are highly correlated, so the information carried by an individual pixel is relatively small. To reduce inter-pixel redundancy, the differences between adjacent pixels can be used to represent an image.
3) Psychovisual Redundancy. Psychovisual redundancies exist because human perception does not involve quantitative analysis of every pixel or luminance value in the image. Their elimination is possible only because the information itself is not essential for normal visual processing.
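The coding-redundancy idea above can be made concrete: the gap between a fixed-length code and the source entropy is the redundancy a variable-length code can remove. A minimal Python sketch (the toy pixel values are assumed for illustration):

```python
import math
from collections import Counter

def coding_redundancy(pixels, fixed_bits=8):
    """Gap between fixed-length coding and the source entropy.

    The gap (fixed_bits - entropy) is the coding redundancy that a
    variable-length code such as a Huffman code can remove.
    """
    n = len(pixels)
    counts = Counter(pixels)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return fixed_bits - entropy

# A toy 4-level image stored with 8 bits/pixel: most pixels share one
# value, so the entropy is far below 8 bits and the redundancy is large.
pixels = [0] * 12 + [1] * 2 + [2] + [3]
print(round(coding_redundancy(pixels), 3))
```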
Q(2) Describe the general compression system model . (6M May06 IT)
An image compression system reduces the number of bits needed to represent an image by removing the spatial and spectral redundancies as much as possible. In digital image compression, three basic data redundancies can be identified and exploited: coding redundancy, inter-pixel redundancy and psychovisual redundancy.

[Fig.: General compression system model — Input image → Preprocessing → (lossy path) Quantization → Entropy encoder → Encoded image; the lossless encoding path omits the quantizer.]
Source Encoder (or Linear Transformer) :
Over the years, a variety of linear transforms have been developed, including the Discrete Fourier Transform (DFT), Discrete Cosine Transform (DCT) [1], Discrete Wavelet Transform (DWT) and many more, each with its own advantages and disadvantages.

Quantizer :
A quantizer reduces the number of bits needed to store the transformed coefficients by reducing the precision of those values. Since this is a many-to-one mapping, it is a lossy process and is the main source of compression in an encoder. Quantization can be performed on each individual coefficient, which is known as Scalar Quantization (SQ), or on a group of coefficients together, which is known as Vector Quantization (VQ). Both uniform and non-uniform quantizers can be used depending on the problem at hand.

Entropy Encoder :
An entropy encoder further compresses the quantized values losslessly to give better overall compression. It uses a model to accurately determine the probabilities for each quantized value and produces an appropriate code based on these probabilities so that the resultant output code stream will be smaller than the input stream. The most commonly used entropy encoders are the Huffman encoder and the arithmetic encoder, although for applications requiring fast execution, simple run-length encoding (RLE) has proven very effective. It is important to note that a properly designed quantizer and entropy encoder are absolutely necessary along with optimum signal transformation to get the best possible compression.
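The quantizer stage described above can be sketched as a uniform scalar quantizer; the step size and sample coefficients below are assumed for illustration:

```python
def uniform_quantize(coeffs, step):
    """Uniform scalar quantization: a many-to-one mapping (lossy)."""
    return [round(c / step) for c in coeffs]

def dequantize(indices, step):
    """Reconstruction: the precision lost in quantization is not recovered."""
    return [q * step for q in indices]

coeffs = [312.6, -41.2, 5.8, -0.9, 0.4]   # assumed transform coefficients
q = uniform_quantize(coeffs, step=10)
print(q)                   # small coefficients collapse to 0
print(dequantize(q, 10))   # close to, but not equal to, the originals
```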
Q(3) Write short notes on: Psychovisual redundancy. (10M Dec06 IT) (5M May05 Comp)
The eye does not respond with equal sensitivity to all visual information. Certain information has less relative importance than other information in normal visual processing. This information is said to be psychovisually redundant; it can be eliminated without significantly impairing the quality of image perception.
The psychovisual redundancies exist because human perception does not involve quantitative analysis of every pixel or luminance value in the image. In general, an observer searches for distinguishing features such as edges or textural regions and mentally combines them into recognizable groupings. The brain then correlates these groupings with prior knowledge in order to complete the image interpretation process.
Psychovisual redundancies are associated with real visual information. Their elimination is possible only because the information itself is not essential for normal visual processing. Since the elimination of psychovisually redundant data results in a loss of quantitative information, it is commonly referred to as quantization.
Q(4) Classify Image Compression Techniques. (5M Dec06 Comp)
Image compression techniques are broadly classified as lossless and lossy. In lossless compression the reconstructed image is identical to the original. In lossy compression techniques the reconstructed image contains a loss of information, which in turn produces distortion in the image; the resulting distortion may or may not be visually apparent. Compromising the accuracy of the reconstructed image in this way gives a very high compression ratio.
Q(5) Explain Objective And Subjective Fidelity Criteria (10M Dec 04 Comp)
The criteria for assessing the quality of an image are 1) Objective Fidelity Criteria and 2) Subjective Fidelity Criteria.
1) Objective Fidelity Criteria :
a) Mean-Square Error [MSE] :
Let f(x,y) represent an input image and let f̂(x,y) denote an estimate or approximation of f(x,y). For any value of x and y, the error e(x,y) between f(x,y) and f̂(x,y) can be defined as

e(x,y) = f(x,y) − f̂(x,y)

so that the total error between the two images, of size M × N, is

Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x,y) − f̂(x,y)]

The root-mean-square error between f(x,y) and f̂(x,y) is defined as

e_rms = { (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x,y) − f̂(x,y)]² }^{1/2} = √MSE
b) Signal-to-Noise Ratio [SNR] :

SNR = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f̂(x,y)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x,y) − f̂(x,y)]²

c) Peak Signal-to-Noise Ratio [PSNR] :

PSNR = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} (255)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x,y) − f̂(x,y)]²
     = MN (255)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} [f(x,y) − f̂(x,y)]²
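The objective criteria above translate directly into code. A minimal sketch — the dB form of PSNR used here is the common logarithmic variant of the ratio above, and the 2 × 2 images are assumed for illustration:

```python
import math

def mse(f, g):
    """Mean-square error between two equal-size images (nested lists)."""
    M, N = len(f), len(f[0])
    return sum((f[x][y] - g[x][y]) ** 2
               for x in range(M) for y in range(N)) / (M * N)

def rmse(f, g):
    """Root-mean-square error = sqrt(MSE)."""
    return math.sqrt(mse(f, g))

def psnr(f, g, peak=255):
    """Peak signal-to-noise ratio, in dB (logarithmic form)."""
    return 10 * math.log10(peak ** 2 / mse(f, g))

f = [[100, 102], [98, 101]]   # original
g = [[101, 102], [97, 100]]   # reconstruction with small errors
print(mse(f, g), round(psnr(f, g), 2))
```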
2) Subjective Fidelity Criteria :
Images are viewed by human beings, so measuring image quality by the subjective evaluations of human observers is more appropriate. This can be accomplished by showing a "typical" decompressed image to an appropriate cross-section of viewers and averaging their evaluations. The evaluations may be made using an absolute rating scale or by means of side-by-side comparisons of f(x,y) and f̂(x,y). Side-by-side comparisons can be done with the following scales.
1) {1, 2, 3, 4, 5, 6} to represent subjective evaluations such as {Excellent, Fine, Passable, Marginal, Inferior, Unusable} respectively.
2) {−3, −2, −1, 0, 1, 2, 3} to represent subjective evaluations such as {much worse, worse, slightly worse, the same, slightly better, better, much better} respectively.
These evaluations are said to be based on subjective fidelity criteria.
Value  Rating     Description
1      Excellent  An image of extremely high quality, as good as you could desire.
2      Fine       An image of high quality, providing enjoyable viewing. Interference is not objectionable.
3      Passable   An image of acceptable quality. Interference is not objectionable.
4      Marginal   An image of poor quality; you wish you could improve it. Interference is somewhat objectionable.
5      Inferior   A very poor image, but you could watch it. Objectionable interference is definitely present.
6      Unusable   An image so bad that you could not watch it.
Q(6) Explain Arithmetic Coding (6M Dec 04 Comp)
In arithmetic coding, a one-to-one correspondence between source symbols and code words does not exist. Instead, an entire sequence of symbols is assigned a single code word. The code word itself defines an interval of real numbers between 0 and 1. As the number of symbols in the message increases, the interval used to represent it becomes smaller and the number of information bits required to represent the interval becomes larger. Each symbol of the message reduces the size of the interval in accordance with its probability of occurrence.
Example : --
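The interval-narrowing process described above can be sketched as follows; the three-symbol alphabet and its probabilities are assumed for illustration:

```python
def arithmetic_interval(message, probs):
    """Narrow [0, 1) symbol by symbol; any number in the final interval
    identifies the whole message (encoder sketch, no bit output)."""
    # cumulative probability range per symbol
    cum, lohi = 0.0, {}
    for s, p in probs.items():
        lohi[s] = (cum, cum + p)
        cum += p
    low, high = 0.0, 1.0
    for s in message:
        span = high - low
        lo, hi = lohi[s]
        # each symbol shrinks the interval in proportion to its probability
        low, high = low + span * lo, low + span * hi
    return low, high

probs = {'a': 0.5, 'b': 0.3, 'c': 0.2}   # assumed toy alphabet
low, high = arithmetic_interval("ab", probs)
print(low, high)   # the interval shrinks with every symbol
```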
Q(7) Classify with reasons the following data compression techniques into lossy and lossless schemes. [8M, MAY 07, ETRX]
i] Run length coding — lossless, because the original pixel sequence can be reconstructed exactly from the (value, run length) pairs.
ii] DCT compression — lossy, because quantization of the transform coefficients discards information that cannot be recovered.
Q(8) Given below is a table of eight symbols and their frequencies of occurrence.

Symbol     S0    S1    S2    S3    S4    S5    S6    S7
Frequency  0.25  0.15  0.06  0.08  0.21  0.14  0.07  0.04
(a) Give Huffman code for each eight symbol. (16M Dec.04 I.T)
(b) Evaluate minimum number of average bits of sequence per symbol.
(c) What is coding efficiency for the code you have obtained in part (a)?
Q(9) Generate the Huffman code for the given image source. Calculate the entropy of the source, the average length of the code generated and the compression ratio achieved; compare it to standard binary encoding. (12M May06 Etrx)

Symbol       A1    A2    A3   A4   A5    A6    A7    A8
Probability  0.06  0.02  0.2  0.6  0.04  0.01  0.03  0.04

Q(10) Find the Huffman code for the following six symbols. The symbols and their probabilities are given in tabular form: (10M Dec06 IT)

Symbol       A1   A2   A3    A4   A5    A6
Probability  0.1  0.4  0.06  0.1  0.04  0.3

Q(11) Short Note: Huffman coding and Run length coding. (10M Dec05 IT)

Q(12) How many unique Huffman codes are there for a three-symbol source? Construct these codes. [4M, MAY 07, Comp]

Solution :

Consider three symbols s1, s2, s3 with probabilities p1, p2 and p3 respectively.
Case 1: When P1 > P2 > P3
Case 2: When P1 > P3 > P2
Case 3: When P2 > P1 > P3
Case 4: When P2 > P3 > P1
Case 1 (P1 > P2 > P3): the two least probable symbols S2 and S3 are merged into P23.

SYMBOL  CODE
S1      1
S2      00
S3      01

Case 2 (P1 > P3 > P2): S3 and S2 are merged.

SYMBOL  CODE
S1      1
S3      00
S2      01

Case 3 (P2 > P1 > P3): S1 and S3 are merged.

SYMBOL  CODE
S2      1
S1      00
S3      01

Case 4 (P2 > P3 > P1): S3 and S1 are merged.

SYMBOL  CODE
S2      1
S3      00
S1      01
Case 5: When P3 > P1 > P2
Case 6: When P3 > P2 > P1
Q(13) For a given source A = {a1, a2, a3, a4} the following codes were developed. Check for each of them whether it is uniquely decodable or not. Also state which is the most optimum compared to the others, and why.

SYMBOL  PROBABILITY  CODE-1  CODE-2  CODE-3  CODE-4
a1      0.5          00      0       0       11
a2      0.25         01      1       10      10
a3      0.125        10      00      110     001
a4      0.125        11      11      111     1011
Solution :
Code-2 and Code-4 are NOT uniquely decodable codes.
In Code-1, no codeword is a prefix of another codeword; therefore Code-1 is a uniquely decodable code.
In Code-2, the codeword for symbol a4 is 11. The single-bit prefix of code(a4) is '1', which is itself a valid codeword (for a2); therefore Code-2 is NOT a uniquely decodable code.
In Code-3, the single-bit prefix '1' of a2, a3 and a4 is not a valid codeword, and the two-bit prefix '11' of code(a3) and code(a4) is not a valid codeword; therefore Code-3 is a uniquely decodable (prefix) code.
Case 5 (P3 > P1 > P2): the two least probable symbols S1 and S2 are merged into P12.

SYMBOL  CODE
S3      1
S1      00
S2      01

Case 6 (P3 > P2 > P1): S2 and S1 are merged.

SYMBOL  CODE
S3      1
S2      00
S1      01
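The case-by-case construction above generalizes: a Huffman code is built by repeatedly merging the two least probable nodes. A minimal sketch (the three-symbol probabilities are assumed for illustration):

```python
import heapq

def huffman(probs):
    """Build a Huffman code by repeatedly merging the two least
    probable nodes, as in the cases above."""
    # heap entries: (probability, tiebreak counter, {symbol: partial code})
    heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        # prepend one bit to every symbol in each merged branch
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, count, merged))
        count += 1
    return heap[0][2]

codes = huffman({"s1": 0.5, "s2": 0.3, "s3": 0.2})
print(codes)   # the most probable symbol gets the 1-bit codeword
```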
In Code-4, the single-bit prefix is not a valid codeword, and the two-bit prefix of code(a3) is '00' (not a valid codeword); but the two-bit prefix of code(a4) is '10', which is a valid codeword (for a2). Therefore Code-4 is NOT a uniquely decodable code.
To find the optimum code, find the average length.

(1) For Code-1 :

L_avg = Σ_{k=1}^{N} P_k L_k
      = P1 L1 + P2 L2 + P3 L3 + P4 L4
      = (0.5)(2) + (0.25)(2) + (0.125)(2) + (0.125)(2)

L_avg = 2 bits/symbol
(2) For Code-3 :

L_avg = Σ_{k=1}^{N} P_k L_k
      = P1 L1 + P2 L2 + P3 L3 + P4 L4
      = (0.5)(1) + (0.25)(2) + (0.125)(3) + (0.125)(3)

L_avg = 1.75 bits/symbol
Code-3 has the minimum average length (1.75 bits/symbol against 2 bits/symbol for Code-1), and this equals the source entropy. So the efficiency of Code-3 is greater than the efficiency of Code-1, and Code-3 is the optimum code.
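The unique-decodability and average-length arguments above can be checked mechanically. This sketch tests the prefix condition (sufficient, though not necessary, for unique decodability) and the average lengths of Code-1 and Code-3:

```python
def is_prefix_free(code):
    """Prefix-free => uniquely (instantaneously) decodable."""
    words = list(code.values())
    return not any(a != b and b.startswith(a) for a in words for b in words)

def avg_length(code, probs):
    """L_avg = sum of P_k * L_k over all symbols."""
    return sum(probs[s] * len(code[s]) for s in code)

probs = {"a1": 0.5, "a2": 0.25, "a3": 0.125, "a4": 0.125}
code1 = {"a1": "00", "a2": "01", "a3": "10", "a4": "11"}
code3 = {"a1": "0", "a2": "10", "a3": "110", "a4": "111"}
print(is_prefix_free(code1), avg_length(code1, probs))
print(is_prefix_free(code3), avg_length(code3, probs))
```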
Q(14) Can variable length coding procedures be used to compress a histogram equalized image with gray levels? Explain. (10M May06 Comp)

Q(15) What is the Constant Area Coding Technique? (5M Dec06 Comp)
The image is divided into blocks of size m × n pixels, which are classified as all white, all black or mixed intensity. The most frequently occurring category is assigned the 1-bit code word '0', and the other two categories are assigned the 2-bit codes '10' and '11'. A mixed-intensity block is coded using its 2-bit code word as a prefix, followed by the m × n bit pattern of the block.
Compression is achieved because the m × n bits that would otherwise be needed are replaced by a 1-bit or 2-bit code word. When predominantly white text documents are being compressed, a slightly simpler approach is to code the solid white areas as '0' and all other blocks (including solid black blocks) by a '1' followed by the bit pattern of the block. This approach is called white block skipping (WBS). As few black areas are expected, they are grouped with the mixed-intensity regions.
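The WBS variant described above can be sketched as follows; the 2 × 2 block size and the sample blocks are assumed for illustration:

```python
def wbs_encode(blocks):
    """White block skipping: an all-white block -> '0';
    anything else -> '1' followed by the block's own bit pattern."""
    out = []
    for block in blocks:
        bits = "".join(str(b) for b in block)
        out.append("0" if "1" not in bits else "1" + bits)
    return "".join(out)

# four assumed 2x2 blocks, flattened (0 = white, 1 = black)
blocks = [[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [1, 1, 1, 1]]
encoded = wbs_encode(blocks)
print(encoded, len(encoded))   # fewer than the 16 input bits when white dominates
```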
Q(16) Explain Bit plane Coding. (5M May05 Comp)(6M Dec 04 Comp)
The bit-plane technique is an error-free compression that exploits the image's inter-pixel redundancies. It is based on the concept of decomposing a multilevel image (monochrome or colour) into a series of binary images and compressing each binary image.
Bit Plane Decomposition :-
The gray levels of an m-bit gray-scale image can be represented in the form of the base-2 polynomial

a_{m−1} 2^{m−1} + … + a_1 2^1 + a_0 2^0

By separating the m coefficients into m one-bit planes, the image gets decomposed into bit planes. The zero-order bit plane is generated by collecting the a_0 bit of each pixel.

In general, each bit plane is numbered from 0 to (m − 1) and is constructed by setting its pixels equal to the values of the appropriate bit from each pixel.
The disadvantage of this approach is that small changes in gray level can have a significant impact on the complexity of the bit planes. If a pixel of intensity 127 (0111 1111) is adjacent to a pixel of intensity 128 (1000 0000), every bit plane will contain a corresponding 0-to-1 (or 1-to-0) transition. This problem can be solved by representing the image by an m-bit Gray code.
The m-bit Gray code g_{m−1} … g2 g1 g0 can be computed from

g_i = a_i ⊕ a_{i+1},  0 ≤ i ≤ m − 2
g_{m−1} = a_{m−1}

where ⊕ denotes exclusive OR (modulo-2 addition). This code has the unique property that successive code words differ in only one bit position. Thus small changes in gray level are less likely to affect all m bit planes.
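The mapping above has the compact closed form g = a ⊕ (a >> 1); a quick check on the 127/128 example:

```python
def to_gray(n):
    """m-bit Gray code: g_i = a_i XOR a_{i+1}, MSB unchanged."""
    return n ^ (n >> 1)

for v in (127, 128):
    # adjacent binary values map to Gray codes differing in one bit
    print(v, format(to_gray(v), "08b"))
```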
When gray levels 127 and 128 are adjacent, the Gray code of 127 is 0100 0000 and that of 128 is 1100 0000. That means only the seventh (most significant) bit plane will contain a 0-to-1 transition.

Q(17) Explain the B2 Code :-
The B2 code is a variable-length code. Each code word is made up of continuation bits, denoted C, and information bits, which are natural binary numbers. The purpose of the continuation bit is to separate the individual code words. In the B2 code, two information bits are used per continuation bit, hence the name B2.
Example :- Consider six symbols with probabilities as given below.

Symbol  Probability  B2 Code
S1      0.4          C00
S2      0.3          C01
S3      0.1          C10
S4      0.1          C11
S5      0.06         C00C00
S6      0.04         C00C01

Message : S1 S2 S3 S5 S1 S6
B2 code : C00 C01 C10 C00C00 C00 C00C01

With the continuation bit alternating between successive code words (C = 1, 0, 1, 0, …), the transmitted bit stream is

100 001 110 000000 100 000001
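The alternating-continuation-bit scheme can be sketched as follows (the convention of starting with C = 1 is assumed, matching the example above):

```python
def b2_encode(message, codes):
    """B2 code: alternate the continuation bit C between successive
    codewords so the decoder can find word boundaries."""
    out, c = [], "1"                  # start with C = 1 (assumed convention)
    for s in message:
        word = codes[s]               # e.g. "C00" or "C00C00"
        out.append(word.replace("C", c))
        c = "0" if c == "1" else "1"  # toggle C for the next codeword
    return " ".join(out)

codes = {"S1": "C00", "S2": "C01", "S3": "C10", "S5": "C00C00", "S6": "C00C01"}
print(b2_encode(["S1", "S2", "S3", "S5", "S1", "S6"], codes))
```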
To find the Average Length :

L_avg = Σ_{k=1}^{N} P_k L_k
      = P1 L1 + P2 L2 + … + P6 L6
      = (0.4)(3) + (0.3)(3) + (0.1)(3) + (0.1)(3) + (0.06)(6) + (0.04)(6)

L_avg = 3.3 bits/symbol
Q(18) Explain Shift codes.
Algorithm to find shift code word.
i) Arrange the source symbols so that their probabilities are monotonically decreasing.
ii) Divide the total number of symbol into symbol blocks of equal size.
iii) Add special shift up and / or shift down symbols to identify each block.
Each time a shift-up or shift-down symbol is recognized at the decoder, it moves one block up or down with respect to a predefined reference block.
Example of Shift code is Binary Shift Code.
Consider 21 source symbols. Arrange them so that their probabilities are monotonically decreasing and divide them into three blocks of seven symbols. The individual symbols (a1 through a7) of the upper block can be considered as the reference block; code the symbols in the reference block with the binary codes 000 through 110.

The 8th code word is obtained as the shift-up control symbol 111 followed by 000; the 9th as 111 followed by 001, and so on.

This procedure continues for each subsequent symbol: the code word for each symbol is the control symbol (repeated once per block shifted) followed by the binary code of the corresponding symbol in the reference block.
Q(19) What do you mean by Variable Length Coding technique? Explain at least two Techniques. (10M Dec06 Comp)
Examples of Variable Length Coding are : Huffman Code, B2 Code, Shift Code.
(Lossy)
Q(20) Explain Improved Gray Scale (IGS) Quantization :-
The IGS method recognizes the eye's inherent sensitivity to edges and breaks up false contours by adding to each pixel a pseudo-random number. The pseudo-random number is generated from the low-order bits of neighbouring pixels. Because low-order bits are fairly random, this amounts to adding a level of randomness, which depends on the local characteristics of the image, to the artificial edges normally associated with false contouring.
IGS Quantization Procedure :-

Pixel  Gray Level  Sum        IGS Code
—      —           0000 0000  N/A
i      0110 1100   0110 1100  0110
i+1    1000 1011   1001 0111  1001
i+2    1000 0111   1000 1110  1000
i+3    1111 0100   1111 0100  1111
The Sum is initially set to zero. Then

Sum = current 8-bit gray level + four least significant bits of the previously generated sum.

If the four most significant bits of the current gray level are 1111, then 0000 is added instead of the four least significant bits of the previous sum. The four most significant bits of the resulting sum are used as the coded pixel value.
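The IGS procedure above can be sketched in Python; the 8-bit pixel values are taken from the table above:

```python
def igs_quantize(gray_levels, bits=8, keep=4):
    """IGS: add the previous sum's low-order bits before truncating to
    the top `keep` bits (here, 8-bit pixels -> 4-bit codes)."""
    low_mask = (1 << (bits - keep)) - 1          # low-order bits of the sum
    high_mask = ((1 << keep) - 1) << (bits - keep)
    s, codes = 0, []
    for g in gray_levels:
        # if the top bits are already all ones, add nothing (avoids overflow)
        add = 0 if (g & high_mask) == high_mask else (s & low_mask)
        s = g + add
        codes.append(s >> (bits - keep))          # keep the top bits
    return codes

pixels = [0b01101100, 0b10001011, 0b10000111, 0b11110100]
print([format(c, "04b") for c in igs_quantize(pixels)])
```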
Q(21) Consider an 8-pixel line of gray-scale data {12, 12, 13, 13, 10, 13, 57, 54}, which has been uniformly quantized with 6-bit accuracy. Construct its 3-bit IGS code. (8M May06 Etrx)
Solution : Here the sum is the current 6-bit gray level plus the three least significant bits of the previous sum (000 is added instead if the three most significant bits of the current value are 111), and the three most significant bits of each sum form the IGS code.

Pixel  Pixel in 6-bit binary  Sum      IGS Code
—      —                      000 000  —
12     001 100                001 100  001
12     001 100                010 000  010
13     001 101                001 101  001
13     001 101                010 010  010
10     001 010                001 100  001
13     001 101                010 001  010
57     111 001                111 001  111
54     110 110                110 111  110
Q(22) DPCM/DCT encoder for video compression. (5M May06 I.T)

Q(23) Write Short Notes on:
(a) Compression using LZW method (10M May06 Comp)(7M May06 Etrx)
(b) Delta Modulation and differential pulse code modulation (DPCM). (6M May06 Comp)

Q(24) TRUE OR FALSE AND JUSTIFY
(a) All image compression techniques are invertible. (4M May07 IT) (3M May06 Etrx)

False. Not all image compression techniques are invertible. Lossless compression techniques such as run-length encoding, Huffman coding, arithmetic coding, CAC coding, the B2 code and DPCM are invertible. Lossy compression techniques such as JPEG and IGS quantization are not invertible.
(b) Run-length coding is lossless coding but may not always give data compression. [4M, MAY 07, ETRX]

True. Run-length coding is a lossless compression technique. A run-length code consists of a sequence of tokens, each token consisting of two words: (1) a pixel value and (2) the number of consecutive occurrences of that pixel value. Run-length encoding (RLE) is an effective compression technique when pixel values repeat consecutively; if, over the whole image, pixel values are not repeated consecutively a large number of times, RLE cannot give compression.
For Ex-1 (a 5 × 5 input image, 25 bytes):
The RLE-coded image is 0,6, 8,3, 0,2, 8,3, 0,11. Size: 10 bytes, against an input image of 25 bytes — the image is compressed.

For Ex-2 (a 5 × 5 input image, 25 bytes):
The RLE-coded image is 0,5, 1,1, 8,3, 2,2, 8,3, 0,2, 1,1, 2,2, 1,1, 5,1, 0,1, 2,1, 1,1, 4,1. Size of the RLE-coded image: 28 bytes, against an input image of 25 bytes.

Hence in Ex-2 the size of the RLE-coded image is larger than the size of the input image. That means RLE does not always give compression.
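The run-length encoder behind both examples can be sketched as follows (value first, then run length, as above):

```python
def rle_encode(pixels):
    """Flatten an image row into (value, run length) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [v for run in runs for v in run]

# Ex-1 image: long runs of equal values, 25 pixels in total
img = [0] * 6 + [8] * 3 + [0] * 2 + [8] * 3 + [0] * 11
code = rle_encode(img)
print(code, len(code))   # 10 values against 25 -> compression
```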
Q(25) Write Short Notes on Transform Coding
In transform coding a reversible, linear transform is used to map the image into a set of transform coefficients which are then quantized and coded. For most images, a significant number of coefficients have small magnitudes and can be coarsely quantized (or discarded entirely) with little image distortion.
The following figure shows the general block diagram of a transform coding system.

[Fig.: Input image (N × N) → Forward transform → Quantizer → Symbol encoder → Compressed image;
Compressed image → Symbol decoder → Inverse transform → Decompressed image]
Ex-1 input image (5 × 5): 0 0 0 0 0 0 8 8 8 0 0 8 8 8 0 0 0 0 0 0 0 0 0 0 0
Ex-2 input image (5 × 5): 0 0 0 0 0 1 8 8 8 2 2 8 8 8 0 0 1 2 2 1 5 0 2 1 4
An N × N input image is first subdivided into subimages of size n × n, which are then transformed to generate n × n subimage transform arrays; the decoder implements the inverse sequence of steps.
The goal of the transformation process is to decorrelate the pixels of each subimage, or to pack as much information as possible into the smallest number of coefficients.

Transform coding systems based on the DCT, WT or FT can be applied effectively. The choice of a particular transform in a given application depends on the amount of reconstruction error that can be tolerated and the computational resources available.
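The energy-packing behaviour described above can be illustrated with a direct 2-D DCT of a small smooth block; the 4 × 4 block is assumed for illustration (a real coder would use a fast 8 × 8 DCT):

```python
import math

def dct2(block):
    """2-D DCT-II of an n x n block (direct formula; fine for small n)."""
    n = len(block)
    def a(k):
        return math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = a(u) * a(v) * s
    return out

# a smooth 4x4 block: the energy packs into the low-frequency corner,
# so most coefficients are near zero and can be coarsely quantized
block = [[10, 11, 12, 13]] * 4
coeffs = dct2(block)
near_zero = sum(abs(c) < 1e-6 for row in coeffs for c in row)
print(round(coeffs[0][0], 2), near_zero)   # one large DC term, many near-zero terms
```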
Q(26) Explain the basic principle of transform coding for image compression and illustrate the same with the help of the DFT and DCT. (10M Dec05 IT)

Q(27) Write short Note on : JPEG Compression
JPEG ENCODER ( Sequential Baseline System)
i) In the sequential baseline system, an input image is first divided into pixel blocks of size 8 x 8 which are processed left to right, top to bottom.
ii) Each 8 × 8 block is level shifted by subtracting the quantity 2^(n−1), where 2^n is the number of gray levels (i.e. for n = 8, 2^n = 256).
iii) The 2-D DCT of the block is then computed and quantized:

Ĉ(u,v) = Round [ C(u,v) / Q(u,v) ]

where Q(u,v) is the quantization factor.
iv) In the DCT sequential baseline system, input and output data precision is limited to 8 bits and quantized DCT values are restricted to 11 bits. The quantized DCT coefficients are reordered using the zigzag pattern to form a 1-D sequence of quantized coefficients. Zigzag reordering arranges the coefficients in order of increasing spatial frequency, producing long runs of zeros.
v) Non-zero AC coefficients are coded using a variable-length code that defines the coefficient's value and the number of preceding zeros.
vi) DC coefficients are difference coded relative to the DC coefficient of the previous sub-image.

The output consists of a sequence of tokens, emitted until the block is complete.
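The zig-zag ordering in step iv) can be generated from the diagonal index; a sketch for a 3 × 3 block (JPEG itself uses 8 × 8):

```python
def zigzag_indices(n):
    """Zig-zag scan order for an n x n block, as used before entropy coding.

    Cells are visited diagonal by diagonal (constant u + v); odd
    diagonals run top-right to bottom-left, even ones the other way.
    """
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda t: (t[0] + t[1],
                                 t[0] if (t[0] + t[1]) % 2 else t[1]))

print(zigzag_indices(3))   # starts at the DC term (0, 0)
```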
[Fig.: JPEG encoder — Input image data (8 × 8 blocks) → FDCT → Quantizer (quantization table specification) → Zig-zag ordering → Entropy encoding (entropy table specification) → Compressed image]
1) Run length : the number of consecutive zeros that precede the current element in the DCT output matrix.
2) Bit count : the number of bits to follow in the amplitude field.
3) Amplitude : the amplitude of the DCT coefficient.
JPEG DECODER :
i) To decompress the image, the decoder first recreates the normalized transform coefficients.

ii) After denormalization, the DCT coefficients are computed: C(u,v) = Ĉ(u,v) Q(u,v).

iii) The reconstructed sub-image is obtained by taking the inverse DCT of the denormalized coefficients and then level shifting each pixel by 2^(n−1).

iv) The difference between the original and reconstructed image is the result of the lossy nature of the compression and decompression process.

v) The root-mean-square error of the overall compression and reconstruction process is approximately 5.9.
[Fig.: JPEG decoder — Compressed image → Entropy decoding (entropy table specification) → Zig-zag re-ordering → Dequantizer (quantization table specification) → IDCT → Reconstructed image data]
Q(28) Write short note on :- Image Compression Standards. (5M Dec 04 Comp)

Image Compression Standards :- The Joint Photographic Experts Group (JPEG) has developed an international standard for general-purpose colour still image compression. The Moving Picture Experts Group (MPEG) has developed standards for full-motion video image sequences, with application to digital video distribution and High Definition Television (HDTV).
JPEG Still Image Compression :- The JPEG standard defines three different coding systems:
1. A lossy baseline coding system, which is based on sequential DCT-based compression.
2. An extended coding system, for greater compression, higher precision, or progressive reconstruction applications.
3. A lossless independent coding system based on sequential predictive compression techniques.
Q(29) Write short note on :- Video Compression Standards. (5M Dec 04 Comp)

MPEG Full-Motion Video Compression :-
Video compression standards fall into two groups: 1. video teleconferencing standards (for example, H.263 targets low bit rates of 10–30 kbps, and H.320 supports ISDN bandwidth); 2. multimedia standards, defined by the MPEG family of coding systems.
(1) The MPEG-1 standard is an entertainment-quality video compression standard for the storage and retrieval of compressed imagery on digital media such as compact disc read-only memory (CD-ROM).

(2) The MPEG-2 standard is written to allow higher bit rates and higher-quality encoding; it supports video transfer rates between 5 and 10 Mbps, a range suitable for cable TV distribution and narrow-channel satellite broadcasting.

(3) The MPEG-4 standard was developed for small-frame full-motion compression with slow refresh rates. Both the MPEG standards and the H.261 standard extend the DCT-based compression approach.