ITC Important Questions (Information Theory and Coding)


UNIT – I

2 MARKS

1. Define a discrete memoryless channel. APRIL/MAY 2004
2. State the channel capacity theorem. NOV/DEC 2008, APRIL/MAY 2004
3. Draw the Huffman code tree and find the code for the given data: AAAABBCDAB
4. What is prefix coding? Give one example. APRIL/MAY 2008
5. What is the channel coding theorem? (Or) What is the condition to achieve error-free communication over the channel?
6. Define entropy.
7. Write down the properties of entropy.
8. What is source encoding? Answer: Source encoding is the process of generating an efficient representation of the output of a discrete source.
9. Define a variable-length code. Give an example.
10. Define the code efficiency of the source encoder.
11. State the source coding theorem. APRIL/MAY 2008
12. Draw the transition probability diagram of a binary symmetric channel.
13. A source emits any one of four possible symbols during each signaling interval. The symbols occur with probabilities P0 = 0.4, P1 = 0.3, P2 = 0.2, P3 = 0.1. Find the amount of information gained by observing each symbol.
14. What are the Nyquist rate and the Nyquist interval? NOV/DEC 2006
15. Write down the expression for the entropy of a binary memoryless source. NOV/DEC 2006, MAY/JUNE 2006
19. If X represents the outcome of a single roll of a fair die, what is the entropy of X? NOV/DEC 2007
20. A code is composed of dots and dashes. Assume that a dash is three times longer than a dot and has one third the probability of occurrence. Calculate the average information in the dot-dash code. NOV/DEC 2007
21. Calculate the entropy H(X) for a discrete memoryless source X which has four symbols x1, x2, x3 and x4 with probabilities p(x1) = 0.4, p(x2) = 0.3, p(x3) = 0.2 and p(x4) = 0.1. MAY/JUNE 2007
22. Consider an additive white Gaussian noise channel with 4-kHz bandwidth and noise power spectral density η/2 = 10^-2 W/Hz. The signal power required at the receiver is 0.1 mW. Calculate the capacity of this channel. MAY/JUNE 2007
23. What is a transition matrix? Give its significance. NOV/DEC 2008
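Question 3 above asks for the Huffman tree of the string AAAABBCDAB, and questions 13 and 21 reduce to evaluating H(X) = -Σ p·log2(p). A minimal sketch of both calculations, assuming binary Huffman coding; it derives only the codeword lengths (the actual bit labels depend on the 0/1 convention chosen at each merge), using symbol counts taken from the string:

```python
import heapq
from collections import Counter
from math import log2

def huffman_lengths(weights):
    """Huffman codeword length per symbol; every merge adds one bit
    to every symbol sitting below the merged node."""
    heap = [(w, i, [i]) for i, w in enumerate(weights)]
    heapq.heapify(heap)
    lengths = [0] * len(weights)
    next_id = len(weights)
    while len(heap) > 1:
        w1, _, s1 = heapq.heappop(heap)
        w2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:
            lengths[s] += 1
        heapq.heappush(heap, (w1 + w2, next_id, s1 + s2))
        next_id += 1
    return lengths

counts = Counter("AAAABBCDAB")             # A:5, B:3, C:1, D:1
syms = sorted(counts)                      # ['A', 'B', 'C', 'D']
weights = [counts[s] for s in syms]
lengths = huffman_lengths(weights)
total = sum(weights)
avg_len = sum(w * l for w, l in zip(weights, lengths)) / total
entropy = -sum(w / total * log2(w / total) for w in weights)
```

The source coding theorem shows up as the check entropy <= avg_len; efficiency is the ratio entropy / avg_len.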

8 or 16 Marks:

1. A discrete memoryless source has an alphabet of five symbols whose probabilities of occurrence are as described here:

Symbols:     X1   X2   X3   X4   X5
Probability: 0.2  0.2  0.1  0.1  0.4

Use Huffman encoding for the symbols and find the average codeword length. Also prove that it satisfies the source coding theorem. MAY/JUNE 2006
2. Apply the Shannon-Fano encoding procedure to the following ensemble.

X = { x1, x2, x3, x4, x5, x6, x7, x8, x9 }
P = { 0.49, 0.14, 0.14, 0.07, 0.07, 0.04, 0.02, 0.02, 0.01 }

Find the efficiency, average codeword length L, and redundancy.
3. A discrete memoryless source has an alphabet of seven symbols whose probabilities of occurrence are as described below:


Symbol:      S0    S1    S2      S3      S4     S5     S6
Probability: 0.25  0.25  0.0625  0.0625  0.125  0.125  0.125

a) Compute the Huffman code for this source, moving a "combined" symbol as high as possible.
b) Calculate the coding efficiency and redundancy. APRIL/MAY 2004
4. A voice-grade channel of a telephone network has a bandwidth of 3.4 kHz. Calculate
a) the information capacity of the telephone channel for a signal-to-noise ratio of 30 dB, and
b) the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 9.6 kb/s. NOV/DEC 2003
5. Find the capacity of a binary symmetric channel in bits/sec, when the probability of error is 0.1 and the symbol rate is 1000 symbols/sec.
6. Develop a Huffman code for a source which emits 6 equiprobable symbols.

X = { X1, X2, X3, X4, X5, X6 }
P = { 1/6, 1/6, 1/6, 1/6, 1/6, 1/6 }

Find the efficiency, H(S), and the average codeword length.
7. Consider two sources S1 and S2 which emit messages x1, x2, x3 and y1, y2, y3 with joint probability P(X,Y) as shown in matrix form:

         Y1    Y2    Y3
X1      3/40  1/40  1/40
X2      1/20  3/20  1/20
X3      1/8   1/8   3/8

Calculate the entropies H(X), H(Y), H(X/Y), H(Y/X) and H(X,Y). MAY/JUNE 2007
8. A discrete memoryless source X has five symbols x1, x2, x3, x4 and x5 with probabilities p(x1) = 0.4, p(x2) = 0.19, p(x3) = 0.16, p(x4) = 0.15 and p(x5) = 0.1.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the code.
(ii) Repeat for the Huffman code and compare the results. MAY/JUNE 2007
9. A statistical encoding algorithm is being considered for the transmission of a number of long text files over a public network. Analysis of the file contents has shown that each file comprises only the six different characters M, F, Y, N, O and L, each of which occurs with a relative frequency of occurrence of 0.25, 0.25, 0.125, 0.125, 0.125 and 0.125 respectively. Use the Huffman algorithm to encode these characters and find the following:
(i) the code word for each of the characters,
(ii) the average number of bits per codeword,
(iii) the entropy of the source. NOV/DEC 2006
10. State and explain the source coding theorem. NOV/DEC 2006, MAY/JUNE 2007
11. State and explain the channel coding theorem. NOV/DEC 2006, NOV/DEC 2008
12. An analog signal having 4-kHz bandwidth is sampled at 1.25 times the Nyquist rate, and each sample is quantized into one of 256 equally likely levels. Assume that the successive samples are statistically independent.
(i) What is the information rate of this source?
(ii) Can the output of the source be transmitted without error over an AWGN channel with a bandwidth of 10 kHz and an S/N ratio of 20 dB?
(iii) Find the bandwidth required for an AWGN channel for error-free transmission of the output of this source if the S/N ratio is 25 dB. NOV/DEC 2007
13. A discrete memoryless source X has five equally likely symbols.
(i) Construct a Shannon-Fano code for X, and calculate the efficiency of the code.
(ii) Repeat for the Huffman code and compare the results. NOV/DEC 2007


14. Encode the following messages with their respective probabilities using the basic Huffman algorithm:

Message:     M1   M2   M3   M4    M5    M6    M7    M8
Probability: 1/2  1/8  1/8  1/16  1/16  1/16  1/32  1/32

Also calculate the efficiency of the coding and comment on the result. APRIL/MAY 2008
15. Alphanumeric data are entered into a computer from a remote terminal through a voice-grade telephone channel. The channel has a bandwidth of 3.4 kHz and an output signal-to-noise ratio of 20 dB. The terminal has a total of 128 symbols. Assume that the symbols are equiprobable and that successive transmissions are statistically independent. Calculate the information capacity of the channel, and the maximum symbol rate for which error-free transmission over the channel is possible. APRIL/MAY 2008
16. Find the capacity of a binary symmetric channel in bits/sec, when the probability of error is 0.1 and the symbol rate is 1000 symbols/sec. NOV/DEC 2005
17. Apply the Huffman encoding procedure to the following message ensemble and determine the average length of the encoded message. Also determine the coding efficiency. Use coding alphabet D = 4. There are 10 symbols.
[X] = [x1, x2, x3, ..., x10]
P[X] = [0.18, 0.17, 0.16, 0.15, 0.10, 0.08, 0.05, 0.05, 0.04, 0.02] NOV/DEC 2005
18. A discrete memoryless source has an alphabet of five symbols with the probabilities for its output as given here:
[X] = [X1, X2, X3, X4, X5]
P[X] = [0.45, 0.15, 0.15, 0.10, 0.15]
Compute two different Huffman codes for this source. For these two codes find
(i) the average codeword length,
(ii) the efficiency and redundancy. APRIL/MAY 2008
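The telephone-channel problems above (questions 4 and 15) are applications of the Shannon-Hartley law C = B·log2(1 + S/N), with the signal-to-noise ratio first converted from dB to a linear power ratio. A minimal sketch of the question 4(a) numbers:

```python
from math import log2

def capacity(bandwidth_hz, snr_db):
    """Shannon-Hartley information capacity in bits/sec."""
    snr = 10 ** (snr_db / 10)      # dB -> linear power ratio
    return bandwidth_hz * log2(1 + snr)

# Question 4(a): B = 3.4 kHz, S/N = 30 dB -> roughly 33.9 kb/s
c = capacity(3400, 30)
```

Part (b) inverts the same formula: given the required rate, solve for the S/N ratio and convert back to dB.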

UNIT – II

2 MARKS

1. What is ADPCM?
2. What will happen if speech is coded at low bit rates?
3. Compare and contrast DPCM and ADPCM.
4. Explain adaptive subband coding.
5. Give the principle behind DPCM. NOV/DEC 2008
6. What is granular noise? NOV/DEC 2008
7. Write the condition required to avoid slope overload distortion in delta modulation. MAY/JUNE 2007
8. Why is subband coding preferred for speech coding? NOV/DEC 2007
9. What do you mean by slope overload distortion in delta modulation? NOV/DEC 2007
10. What is quantization noise, and on which parameter does it depend? APRIL/MAY 2008
11. Differentiate between a vocoder and a waveform coder. APRIL/MAY 2008
12. Draw the block diagram of a differential pulse code modulator. NOV/DEC 2005
13. Explain slope overloading. NOV/DEC 2005

8 or 16 Marks:


1. With a block diagram, explain the operation of a basic ADPCM system. Also explain how a basic ADPCM scheme obtains improved performance over a DPCM scheme. NOV/DEC 2007
2. Explain the working principle of delta modulation. State the drawbacks of delta modulation and suggest solutions. NOV/DEC 2007
3. Explain adaptive quantization and prediction with backward estimation in an ADPCM system, with block diagrams. APRIL/MAY 2008
4. Explain the delta modulation (DM) system with block diagrams. What is slope overload error? State the condition to avoid slope overload errors. How are granular noise and slope overload error minimized in ADM systems? APRIL/MAY 2008
5. Consider a sinusoidal signal m(t) = A cos(ω_m t) applied to a delta modulator with step size Δ. Show that slope overload noise will occur if A > Δ/(ω_m T_s), where T_s is the sampling period. NOV/DEC 2003
6. A delta modulator is designed to operate at 3 times the Nyquist rate for a signal with 3-kHz bandwidth. The quantization step size is 250 mV. Determine the maximum amplitude of a 1-kHz input sinusoid for which the DM does not exhibit slope overload noise.
7. With a neat block diagram, explain pulse code modulation (PCM).
8. Compare PCM, DM, ADM, DPCM and ADPCM.
9. Write notes on: (i) adaptive subband coding NOV/DEC 2008, (ii) adaptive delta modulation.
10. Write notes on: (i) LPC NOV/DEC 2008, (ii) adaptive differential pulse code modulation.

11. With a neat sketch and supporting mathematical expressions, briefly explain the working principle of differential pulse code modulation. MAY/JUNE 2007
12. Briefly describe the two schemes available for coding speech signals at low bit rates, namely adaptive differential pulse code modulation and adaptive subband coding. MAY/JUNE 2007
13. Explain a PCM system to digitize a speech signal. What are A-law and µ-law? APRIL/MAY 2008
14. A PCM system uses a uniform quantizer followed by a 7-bit binary encoder. The bit rate of the system is equal to 50 × 10^6 bits/sec.
(i) What is the maximum message bandwidth for which the system operates satisfactorily?
(ii) Determine the output signal-to-quantization-noise ratio when a full-load sinusoidal modulating wave of frequency 1 MHz is applied to the input. APRIL/MAY 2008
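Questions 5 and 6 above rest on the slope-overload condition: a delta modulator tracks the input only if the step per sample exceeds the maximum input slope, i.e. A ≤ Δ·f_s/(2π·f_m) for a sinusoid. A minimal sketch of the question 6 numbers (the sampling rate of 3× the Nyquist rate of a 3-kHz signal is read directly from the problem statement):

```python
from math import pi

def max_amplitude(step_v, fs_hz, fm_hz):
    """Largest sinusoid amplitude a DM tracks without slope overload.

    Condition: peak input slope A*2*pi*fm must not exceed step*fs.
    """
    return step_v * fs_hz / (2 * pi * fm_hz)

fs = 3 * (2 * 3000)                      # 3x Nyquist rate of a 3-kHz signal = 18 kHz
a_max = max_amplitude(0.250, fs, 1000)   # 250-mV step, 1-kHz sinusoid -> ~0.716 V
```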

UNIT – III

2 MARKS

1. What is a generator polynomial? Give some standard generator polynomials.
2. What are convolutional codes? How are they different from block codes?
3. What are the code tree, code trellis and state diagram for convolutional encoders?
4. Define Hamming distance and Hamming weight. MAY/JUNE 2007
5. Show that C = {000, 001, 101} is not a linear code. MAY/JUNE 2007
6. What is meant by systematic and non-systematic codes?
7. What is meant by linear block codes?
8. What is meant by cyclic codes?


9. Define syndrome.
10. What are the properties of linear block codes?
11. What are the properties of cyclic codes?
12. What are the properties of the syndrome?
13. When is a code said to be linear? NOV/DEC 2006
14. What is the essence of Huffman coding? NOV/DEC 2006
15. State two properties of the syndrome (used in linear block codes). NOV/DEC 2007
16. What do you mean by code rate and constraint length in a convolutional code? NOV/DEC 2007
17. Define syndrome in error correction coding. APRIL/MAY 2008, NOV/DEC 2008
18. Why are cyclic codes extremely well suited for error detection? APRIL/MAY 2008
19. Give the error-correcting capability of a linear block code. NOV/DEC 2008

8 MARKS

1. a) Define a linear block code. (2)
b) How is the parity check matrix found? (4)
c) Give the syndrome decoding algorithm. (4)
d) Design a linear block code with dmin ≥ 3 for some block length n = 2^m - 1. (6)
e) Consider a Hamming code C which is determined by the parity check matrix

H =

i) Show that the two vectors C1 = (0010011) and C2 = (0001111) are codewords of C, and calculate the Hamming distance between them.
ii) Assume that a codeword c was transmitted and that a vector r = c + e is received. Show that the syndrome s = r·H^T depends only on the error vector e.
iii) Calculate the syndromes for all possible error vectors e with Hamming weight ≤ 1 and list them in a table. How can this be used to correct a single bit error in an arbitrary position?
iv) What are the length n and the dimension k of the code? Why can the minimum Hamming distance dmin not be larger than three?
2. The generator matrix for a (6,3) block code is given below. Find all the codewords of this code.

G =

3. Consider the (7,4) code defined by the generator polynomial g(x) = 1 + x + x^3. The codeword 0111001 is sent over a noisy channel, producing the received word 0101001 that has a single error. Determine the syndrome polynomial s(x) and the error polynomial e(x).
4. For a (6,3) systematic linear block code, the three parity check bits c4, c5, c6 are formed from the following equations: MAY/JUNE 2007
c4 = d1 + d3
c5 = d1 + d2 + d3
c6 = d1 + d2
i) Write down the generator matrix.
ii) Construct all possible codewords.
iii) Suppose that the received word is 01011. Decode this received word by finding the location of the error and the transmitted data bits.


5. Consider a (7,4) cyclic code with generator polynomial g(x) = 1 + x + x^3. Let the data be d = (1010). Find the corresponding systematic codeword. MAY/JUNE 2007
6. Consider the (7,4) Hamming code defined by the generator polynomial g(x) = 1 + x + x^3. The codeword 1000101 is sent over a noisy channel, producing the received word 0000101 that has a single error. Determine the syndrome polynomial s(x) for this received word. Find its corresponding message vector m and express m as a polynomial m(x). NOV/DEC 2006
7. How is the syndrome calculated in cyclic codes? NOV/DEC 2006
8. The SEC (7,4) Hamming code can be converted into a double-error-detecting, single-error-correcting (8,4) code by using an extra parity check. Construct the generator matrix for the code, and also construct the encoder and decoder for the code. NOV/DEC 2007
9. A convolutional encoder has a single shift register with 2 stages, 3 modulo-2 adders and an output multiplexer. The generator sequences of the encoder are g1 = (1,0,1), g2 = (1,1,0) and g3 = (1,1,1). Draw (i) the block diagram of the encoder and (ii) the state diagram, and also explain the working principle of the encoder. NOV/DEC 2007
10. Construct a convolutional encoder for the following specifications: rate efficiency = 1/2, constraint length = 4. The connections from the shift registers to the modulo-2 adders are described by the following equations: g1(x) = 1 + x, g2(x) = x. Determine the output codeword for the input message 1110. APRIL/MAY 2008
11. Explain cyclic codes with their generator polynomial and parity check polynomial. NOV/DEC 2008
12. Consider the (7,4) Hamming code with
P = 1 1 0
    0 1 1
    1 1 1
    1 0 1
Determine the codeword for the message 0010. Suppose the codeword 1100010 is received. Determine whether the received word is correct; if it is in error, correct the error. NOV/DEC 2008
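Several of the problems above turn on the fact that, for a linear code, the syndrome s = r·H^T depends only on the error pattern, and that for a Hamming code whose parity-check columns are ordered as binary 1..7 the syndrome reads out the error position directly. A minimal sketch using that standard column ordering (an assumption; it is not necessarily the matrix intended in any particular question above):

```python
# Parity check matrix of the (7,4) Hamming code: column j is binary j (1..7)
H = [[(j >> k) & 1 for j in range(1, 8)] for k in (2, 1, 0)]

def syndrome(r):
    """s = r . H^T over GF(2); all-zero iff r is a codeword."""
    return [sum(h * b for h, b in zip(row, r)) % 2 for row in H]

c = [1, 1, 1, 0, 0, 0, 0]              # a codeword: columns 1, 2, 3 of H XOR to zero
r = c[:]
r[4] ^= 1                              # single error at position 5 (1-indexed)
s = syndrome(r)
err_pos = s[0] * 4 + s[1] * 2 + s[2]   # syndrome is the binary error position
r[err_pos - 1] ^= 1                    # flip the located bit to correct
```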

UNIT – IV

2 MARKS

1. Differentiate between lossless and lossy compression techniques and give one example of each.
2. Compare static coding and dynamic coding.
3. Compare arithmetic coding and Huffman coding.
4. What is run-length coding?
5. What is the GIF interlaced mode?
6. What is the JPEG standard?
7. Give some examples of lossy and lossless compression algorithms. NOV/DEC 2006
8. What is the major advantage of adaptive Huffman coding over static Huffman coding? NOV/DEC 2007, MAY/JUNE 2007
9. Write the formula for the quantization used in JPEG compression. NOV/DEC 2007
10. List the three tokens available at the output of the entropy encoder in the JPEG algorithm. MAY/JUNE 2007
11. Distinguish between the global color table and the local color table in GIF. MAY/JUNE 2007


12. How is dynamic Huffman coding different from basic Huffman coding? APRIL/MAY 2008
13. Why is the Graphics Interchange Format used extensively on the internet? APRIL/MAY 2008
14. Define statistical encoding. NOV/DEC 2008
15. What is differential encoding? NOV/DEC 2008

8 MARKS

1. Explain the principles of arithmetic coding. APRIL/MAY 2008
2. Explain static coding with an example.
3. Explain dynamic coding with an example.
4. Explain in detail GIF and TIFF.
5. With a neat block diagram, explain in detail the JPEG encoder/decoder.
6. Discuss the various stages in the JPEG standard.
7. With the aid of an example, describe how arithmetic coding can be used for text compression. NOV/DEC 2006
8. With the aid of a block diagram, explain how digitized pictures are compressed. NOV/DEC 2006
9. With suitable examples, briefly explain static Huffman coding and dynamic coding, and compare them. NOV/DEC 2007
10. With the aid of a diagram, describe the interlaced mode of operation of GIF, and also describe the principles of TIFF. NOV/DEC 2007
11. Briefly describe the procedures followed in the two text compression algorithms given below: (i) dynamic Huffman coding, (ii) arithmetic coding. MAY/JUNE 2007
12. With suitable block diagrams, briefly explain the JPEG encoder and JPEG decoder. MAY/JUNE 2007
13. Consider the transmission of a message comprising a string of characters with probabilities e = 0.3, n = 0.3, t = 0.2, w = 0.1, z = 0.1. Use the arithmetic coding technique to encode this string. APRIL/MAY 2008
14. List the different types of lossless and lossy data compression techniques. APRIL/MAY 2008
15. Why are lossy compression techniques used for speech, audio and video? Justify your answer with numeric calculations. APRIL/MAY 2008
16. Explain dynamic Huffman coding. Illustrate it for the message "This is". NOV/DEC 2008
17. Write notes on the following: (i) GIF, (ii) digitized documents. NOV/DEC 2008
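Question 13 above asks for arithmetic coding over the alphabet e, n, t, w, z. The core of the technique is interval narrowing: each symbol shrinks the current interval to its own sub-range of [0, 1), and any number inside the final interval encodes the whole message. A minimal sketch using the question's probabilities (the message "went" is an arbitrary illustrative choice, not taken from the question):

```python
probs = {'e': 0.3, 'n': 0.3, 't': 0.2, 'w': 0.1, 'z': 0.1}

# Assign each symbol a cumulative sub-range of [0, 1)
ranges, lo = {}, 0.0
for sym, p in probs.items():
    ranges[sym] = (lo, lo + p)
    lo += p

def encode(message):
    """Return the final [low, high) interval for the message."""
    low, high = 0.0, 1.0
    for sym in message:
        span = high - low
        s_lo, s_hi = ranges[sym]
        low, high = low + span * s_lo, low + span * s_hi
    return low, high

low, high = encode("went")   # narrows to roughly [0.8144, 0.8162)
```

A practical coder would emit a binary fraction from the interval and renormalize to avoid precision loss; this sketch shows only the interval arithmetic the question examines.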

UNIT – V

2 MARKS

1. What is LPC?
2. What is CELP?
3. Define pitch, period and loudness.
4. What is perceptual coding?
5. What is MPEG?
6. How does CELP provide better quality than LPC in speech coding? APRIL/MAY 2008
7. Distinguish between the global color table and the local color table in GIF. MAY/JUNE 2007
8. Mention two basic properties of linear prediction. MAY/JUNE 2007
9. List the three features which determine the perception of a signal by the ear. NOV/DEC 2007
10. Define the terms "group of pictures" and "prediction span" with respect to video compression. NOV/DEC 2007

8 MARKS

1. With a block schematic diagram, explain an LPC coder and decoder. APRIL/MAY 2008
2. Compare the H.261 and MPEG-1 standards. APRIL/MAY 2008
3. With a suitable block diagram, briefly explain the implementation schematic of H.261. Also briefly explain the macroblock and frame/picture encoding formats of H.261. MAY/JUNE 2007
4. In connection with perceptual coding, briefly describe the following concepts: (i) frequency masking, (ii) temporal masking. MAY/JUNE 2007
5. With a neat block diagram, explain the Dolby AC-1 and Dolby AC-2 audio coders.
6. State the intended applications of MPEG-1, MPEG-2 and MPEG-4. APRIL/MAY 2008
7. Explain the principles of LPC. Draw the schematic of an LPC encoder and decoder, and identify and explain the perception parameters and associated vocal tract excitation parameters. NOV/DEC 2007
8. State and explain the encoding procedure used with (i) the motion vector and (ii) P and B frames. Draw the necessary sketches. NOV/DEC 2007
9. Explain the encoding procedure of I, P and B frames in video compression using the necessary diagrams. NOV/DEC 2008
10. Explain the principles of video encoding based on the H.261 standard. NOV/DEC 2008
