Quantization

Page 1: Quantization

Waveform Coding

Compression: factors to be considered –

BW / storage efficiency

Signal distortion

Computational complexity (for real-time usage)

Two categories: Lossless and Lossy

Redundancies

Statistical; e.g. Huffman coding

Inter-sample; e.g. DPCM

Psycho-acoustic / psycho-visual; e.g. sub-band coding

Page 2: Quantization

Speech coding

Classes of coders:

Waveform coders

Time-domain waveform coders

Frequency-domain waveform coders

Source coders – vocoders: follow a speech production model, encoding excitation parameters.

Perceptual coders – Make use of hearing properties.

Page 3: Quantization

Coding methods

Non-uniform quantization (companding) – A-law, µ-law.

Adaptive-quantizer PCM – vary the quantizer step in proportion to short-time average speech amplitude.

DPCM

Sub-band coding

Transform coding

Vector quantization

Noise-shaping – hiding quantization noise at times / frequencies of high speech energy.

Page 4: Quantization

Noise shaping in frequency

Page 5: Quantization

Quantization

All digital speech coders use a form of PCM to convert an analog signal to a digital representation.

No. of quant. levels = L; generally taken in the form of $2^N$ – why?

Sampling rate Fs – min. sampling rate is Nyquist rate

Bits per sample = R (for fixed-rate coding)

In general,

Bit-rate $= F_s \cdot R$, where $R = \log_2 L$ (equivalently, $L = 2^R$)
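As a worked instance of this relation (standard telephone-quality PCM, stated here as a quick check):

$F_s = 8\ \text{kHz}, \quad L = 256 = 2^8 \Rightarrow R = 8\ \text{bits/sample}, \quad \text{bit-rate} = F_s \cdot R = 64\ \text{kbit/s}$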

Page 6: Quantization

Quantization

Quantization provides the first level of compression, with some loss – discuss.

Input samples: $x \in X$, $x_{\min} \le x \le x_{\max}$

Quantizer output: $Q : X \to Y$; $Y = \{ y_1, y_2, \ldots, y_L \}$

Choice of R (or L) is a compromise between coding rate (compression) and quality – discuss.

The $y_k$'s are called reconstruction levels – why?

The boundary value between two intervals is called a decision level – why?

Page 7: Quantization

Quantizer types

Mid-rise – even no. of levels; so L can be of the form $2^N$

Mid-tread – odd no. of levels; at best L can be of the form $2^N - 1$

preferred in the case of speech – why? (see the sketch after this slide)

Uniform quantizer –

equal length quantization intervals

Reconstruction level = mid-point of the interval

Non-uniform quantizer
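To make the mid-rise / mid-tread distinction concrete, here is a minimal Python sketch (not from the slides; `xm` and `R` are illustrative parameters). It also suggests the answer to the "why" above: a mid-tread quantizer has an exact zero level, so low-level idle noise in speech pauses is reproduced as silence.

```python
import numpy as np

def midrise(x, xm, R):
    """Mid-rise: L = 2**R levels; zero is a decision level, not an output."""
    L = 2 ** R
    delta = 2 * xm / L
    k = np.clip(np.floor(x / delta), -L // 2, L // 2 - 1)
    return (k + 0.5) * delta                 # reconstruction = interval mid-point

def midtread(x, xm, R):
    """Mid-tread: L = 2**R - 1 levels; zero is an output level."""
    L = 2 ** R - 1
    delta = 2 * xm / L
    k = np.clip(np.round(x / delta), -(L - 1) // 2, (L - 1) // 2)
    return k * delta

x = np.array([-0.01, 0.0, 0.01, 0.5])
print(midrise(x, xm=1.0, R=3))    # small inputs map to +/-0.125, never 0
print(midtread(x, xm=1.0, R=3))   # small inputs map to exactly 0
```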

Page 8: Quantization

Quantization noise

Assumed to be

stationary white noise,

uncorrelated with the input signal,

each error sample uniformly distributed in the range $[-\Delta/2, \Delta/2]$, where $\Delta$ is the quantization step.

Components of quantization noise:

Granular noise

Overload noise

Page 9: Quantization

Quantization noise

Quantization noise calculation

For uniform quantizer:

Quantizer designed for $-x_m$ to $+x_m$

Step size: $\Delta = \dfrac{2 x_m}{L} = \dfrac{2 x_m}{2^R}$

Noise power:

$\sigma_Q^2 = \sum_{k=1}^{L} \int_{x_k}^{x_{k+1}} (x - y_k)^2\, p_X(x)\, dx$

With the uniform-error model above,

$\sigma_Q^2 = \int_{-\Delta/2}^{\Delta/2} q^2\, p(q)\, dq = \dfrac{\Delta^2}{12} = \dfrac{x_m^2}{3 L^2} = \dfrac{x_m^2}{3}\, 2^{-2R}$
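A quick numerical check of this result (a sketch, not from the slides): quantize a full-range uniform signal and compare the measured noise power with $\Delta^2/12$, and the SNR with the 6.02 dB-per-bit rule.

```python
import numpy as np

rng = np.random.default_rng(0)
xm, R = 1.0, 8
L = 2 ** R
delta = 2 * xm / L

x = rng.uniform(-xm, xm, 100_000)            # full-range input, no overload
k = np.clip(np.floor(x / delta), -L // 2, L // 2 - 1)
q = (k + 0.5) * delta - x                    # granular quantization error

print(np.var(q), delta ** 2 / 12)            # both ~ 5.1e-6
print(10 * np.log10(np.var(x) / np.var(q)))  # ~ 48.2 dB = 6.02 * R
```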

Page 10: Quantization

Non-uniform quantizer

Lloyd-Max quantizer: pdf-optimized

Reconstruction level = centroid of the interval

Decision level = mid-point between two reconstruction levels.

Design by iterative solution.

Reconstruction levels (centroids):

$y_k = \dfrac{\int_{x_k}^{x_{k+1}} x\, p_X(x)\, dx}{\int_{x_k}^{x_{k+1}} p_X(x)\, dx}$

Decision levels (mid-points):

$x_k = \dfrac{y_{k-1} + y_k}{2}$
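A sketch of the iterative design (the Lloyd algorithm run on training samples rather than an explicit pdf; a minimal illustration, assuming a Gaussian source):

```python
import numpy as np

def lloyd_max(samples, L, iters=100):
    """Alternate the two conditions above on training data (Lloyd algorithm)."""
    y = np.quantile(samples, (np.arange(L) + 0.5) / L)   # initial levels
    for _ in range(iters):
        xb = (y[:-1] + y[1:]) / 2                        # decision levels
        idx = np.searchsorted(xb, samples)               # assign to intervals
        y = np.array([samples[idx == k].mean() if np.any(idx == k) else y[k]
                      for k in range(L)])                # centroid update
    return y, xb

rng = np.random.default_rng(1)
y, xb = lloyd_max(rng.standard_normal(50_000), L=4)
print(y)   # ~ [-1.51, -0.45, 0.45, 1.51] for a unit-variance Gaussian
```

The levels converge to approximately ±0.45 and ±1.51, the classic optimum 2-bit quantizer for a unit-variance Gaussian source.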

Page 11: Quantization

Log-companding

Companding = Compressing + expanding.

Basically we need to determine a suitable compressing function c(x).

For constant SNR quantization with large L, we need log-companding.

But such a function is undefined for x = 0, so quasi-log companding is used.

Companding chain: $x \rightarrow$ Compressor $c(x) \rightarrow$ Uniform quantizer $Q[c(x)] \rightarrow$ Expander $c^{-1}(\cdot) \rightarrow Q[x]$

Page 12: Quantization

A-law quantizer

Mid-rise type.

Chosen value of K: $K = \dfrac{x_m}{1 + \ln A}$

Compression function (logarithmic for large values of x):

$c(x) = x_m\, \dfrac{1 + \ln(A |x| / x_m)}{1 + \ln A}\, \mathrm{sgn}(x), \qquad \dfrac{1}{A} \le \dfrac{|x|}{x_m} \le 1$

Hence, SNR $= \dfrac{3 L^2}{(1 + \ln A)^2}$

Standard value of A = 87.56

Page 13: Quantization

A-law quantizer

Compression function (linear for small values of x):

$c(x) = \dfrac{A |x|}{1 + \ln A}\, \mathrm{sgn}(x), \qquad 0 \le \dfrac{|x|}{x_m} \le \dfrac{1}{A}$

Companding gain (slope at the origin):

$\left. \dfrac{d\, c(x)}{dx} \right|_{x=0} = \dfrac{A}{1 + \ln A}$

Step-size:

$\Delta_{\min} = \dfrac{2 x_m (1 + \ln A)}{L A}, \qquad \Delta_{\max} = \dfrac{2 x_m (1 + \ln A)}{L}$

Step-size ratio = max. step-size / min. step-size = A
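A minimal sketch of the A-law compressor / expander pair as reconstructed above (illustrative only; `alaw_c` and `alaw_c_inv` are hypothetical helper names):

```python
import numpy as np

A, xm = 87.56, 1.0                   # standard A, assumed x_m

def alaw_c(x):
    """A-law compressor: linear below |x|/xm = 1/A, logarithmic above."""
    ax = np.abs(x) / xm
    out = np.where(ax < 1 / A,
                   A * ax / (1 + np.log(A)),
                   # max() guard avoids log(0) in the unselected branch
                   (1 + np.log(np.maximum(A * ax, 1e-300))) / (1 + np.log(A)))
    return xm * out * np.sign(x)

def alaw_c_inv(y):
    """Expander: inverse of alaw_c."""
    ay = np.abs(y) / xm
    out = np.where(ay < 1 / (1 + np.log(A)),
                   ay * (1 + np.log(A)) / A,
                   np.exp(ay * (1 + np.log(A)) - 1) / A)
    return xm * out * np.sign(y)

x = np.linspace(-1, 1, 41)
print(np.allclose(alaw_c_inv(alaw_c(x)), x))      # round-trip: True
print(alaw_c(1e-9) / 1e-9)                        # slope at origin ~ 16
```

The printed slope at the origin is the companding gain $A/(1+\ln A) \approx 16$ derived above.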

Page 14: Quantization

μ-law quantizer

Mid-tread type.

Chosen value of K: $K = \dfrac{x_m}{\ln(1 + \mu)}$

Compression function (logarithmic for large values of x):

$c(x) = x_m\, \dfrac{\ln(1 + \mu |x| / x_m)}{\ln(1 + \mu)}\, \mathrm{sgn}(x), \qquad 0 \le \dfrac{|x|}{x_m} \le 1$

Hence, SNR $= \dfrac{3 L^2}{[\ln(1 + \mu)]^2}$

Standard value of μ = 255

Page 15: Quantization

μ-law quantizer

Compression function (linear for small values of x):

$c(x) \approx \dfrac{\mu |x|}{\ln(1 + \mu)}\, \mathrm{sgn}(x) \qquad \text{for } \dfrac{\mu |x|}{x_m} \ll 1$

Companding gain (slope at the origin):

$\left. \dfrac{d\, c(x)}{dx} \right|_{x=0} = \dfrac{\mu}{\ln(1 + \mu)}$

Step-size:

$\Delta_{\min} = \dfrac{2 x_m \ln(1 + \mu)}{L \mu}, \qquad \Delta_{\max} \approx \dfrac{2 x_m \ln(1 + \mu)}{L}$

Step-size ratio = max. step-size / min. step-size ≈ μ
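To see the near-constant SNR numerically, a sketch (not from the slides; μ = 255, R = 8 assumed) that compands with μ-law, quantizes uniformly on the c(x) scale, and expands:

```python
import numpy as np

mu, xm, R = 255.0, 1.0, 8           # assumed parameters
L = 2 ** R

def mulaw_c(x):
    """mu-law compressor from the slide above."""
    return xm * np.log1p(mu * np.abs(x) / xm) / np.log1p(mu) * np.sign(x)

def mulaw_c_inv(y):
    """Expander: inverse of mulaw_c."""
    return xm / mu * np.expm1(np.abs(y) / xm * np.log1p(mu)) * np.sign(y)

def quantize(x):
    """Compress, quantize uniformly on the c(x) scale, expand."""
    delta = 2 * xm / L
    k = np.clip(np.floor(mulaw_c(x) / delta), -L // 2, L // 2 - 1)
    return mulaw_c_inv((k + 0.5) * delta)

rng = np.random.default_rng(2)
for level in (1.0, 0.1, 0.01):      # input level swept over 40 dB
    x = rng.uniform(-level, level, 100_000)
    q = quantize(x) - x
    print(level, 10 * np.log10(np.var(x) / np.var(q)))
# SNR changes by only a few dB; a plain uniform quantizer would lose ~40 dB
```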

Page 16: Quantization

Adaptive quantization

Quantizer parameters (all decision levels and reconstruction levels) need to be adjusted with change in signal statistics:

Change in pdf – form of quantizer to be changed, e.g. non-uniform to uniform or vice-versa; needs computationally expensive redesigning.

Change in mean value – decision / reconstruction levels to be shifted by the same amount; so computationally simple.

Change in dynamic range – step sizes to be changed proportionately; decision / reconstruction levels also change proportionately.

Page 17: Quantization

Adaptive quantization

We discuss adaptation strategy for change in dynamic range.

Std. deviation changes proportionately with dynamic range.

So, kth step-size at nth sample instant is taken in proportion to the signal standard deviation at that instant of time.

Std. dev. at nth sample instant is estimated from knowledge of N future samples (AQF – adaptive quantization forward) or N past samples (AQB – adaptive quantization backward).

$\Delta_k(n) \propto \hat{\sigma}_X(n)$

Page 18: Quantization

Adaptive quantization forward

Estimated std. dev. (assuming zero mean):

$\hat{\sigma}_X^2(n) = \dfrac{1}{N} \sum_{i=0}^{N-1} x^2(n+i)$

Problems:

Delay – needs to wait for next N−1 samples to arrive.

Overhead – needs to transmit the estimated std. dev. value for adjustment of reconstruction levels in the receiver.

[Block diagram: x(n) → Buffer → Coder → Decoder → y(n); a Level Estimator at the coder computes σ̂X(n) from the buffered samples and sends it to the decoder.]
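A minimal AQF sketch (illustrative; the non-overlapping block scheme and the function name are assumptions, not from the slides):

```python
import numpy as np

def aqf_sigma(x, N):
    """AQF: one forward estimate per block of N upcoming samples (zero mean)."""
    blocks = x[: len(x) // N * N].reshape(-1, N)
    return np.sqrt((blocks ** 2).mean(axis=1))

# The k-th step size then tracks the estimate, delta_k(n) = c_k * sigma_hat(n),
# and sigma_hat must be sent to the receiver - the AQF overhead noted above.
```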

Page 19: Quantization

Adaptive quantization backward

Estimated std. dev. (assuming zero mean):

$\hat{\sigma}_X^2(n) = \dfrac{1}{N} \sum_{i=1}^{N} y^2(n-i)$

Quantized samples are used so that the same level estimation is also possible in the receiver – so, no overhead.

Problem – estimation based on quantized samples, not actual samples.

[Block diagram: x(n) → Coder → Decoder → y(n), with identical Buffer + Level Estimator blocks at both coder and decoder, each driven by past quantized samples.]
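A minimal AQB sketch (illustrative; the loading factor, seeding, and parameter values are assumptions). Because the step size is a function of past quantized outputs only, a decoder running the same loop derives identical step sizes:

```python
import numpy as np

def aqb_quantize(x, R=4, N=64, sigma0=1.0):
    """Backward-adaptive uniform quantizer (R, N, sigma0 are assumed values)."""
    L = 2 ** R
    hist = [sigma0] * N                     # seed the level estimator
    y = np.empty(len(x))
    for n, xn in enumerate(x):
        sigma = np.sqrt(np.mean(np.square(hist[-N:])))  # from y(n-1)..y(n-N)
        delta = 4 * sigma / L               # load the quantizer to +/- 2 sigma
        k = np.clip(np.floor(xn / delta), -L // 2, L // 2 - 1)
        y[n] = (k + 0.5) * delta
        hist.append(y[n])
    return y
```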

Page 20: Quantization

Predictive coding

Also called differential coding (DPCM).

Predicted sample: $\tilde{x}(n) = \sum_{k=1}^{N} h_k\, \hat{x}(n-k)$

Input to quantizer is the prediction error: $d(n) = x(n) - \tilde{x}(n)$

Quantized prediction error: $u(n) = d(n) + q(n)$

Reconstructed sample: $\hat{x}(n) = \tilde{x}(n) + u(n) = \tilde{x}(n) + d(n) + q(n) = x(n) + q(n)$
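The four relations above translate directly into an encoder loop; a minimal sketch (illustrative; `quantize` is any scalar quantizer, e.g. a uniform one):

```python
import numpy as np

def dpcm_encode(x, h, quantize):
    """DPCM encoder loop for the relations above. The predictor is driven by
    reconstructed samples x_hat, so encoder and decoder stay in sync."""
    N = len(h)
    x_hat = np.zeros(len(x) + N)            # reconstructions, zero initial history
    u = np.empty(len(x))
    for n, xn in enumerate(x):
        x_tilde = sum(h[k] * x_hat[N + n - 1 - k] for k in range(N))
        d = xn - x_tilde                    # prediction error: input to quantizer
        u[n] = quantize(d)                  # u(n) = d(n) + q(n)
        x_hat[N + n] = x_tilde + u[n]       # x_hat(n) = x(n) + q(n)
    return u

# e.g. quantize = lambda d: (np.floor(d / 0.05) + 0.5) * 0.05
# the decoder repeats the same predictor loop on u(n) to rebuild x_hat(n)
```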

Page 21: Quantization

Predictive coding system

[Block diagram of the DPCM system: at the encoder, x(n) minus the prediction x̃(n) gives d(n); d(n) passes through the Quantizer to give u(n); u(n) added back to x̃(n) gives x̂(n), which drives a Linear Predictor producing x̃(n). The decoder adds u(n) to the output x̃(n) of an identical Linear Predictor to recover x̂(n).]

Page 22: Quantization

Optimum prediction

Prediction gain: $G_p = \dfrac{\sigma_X^2}{\sigma_D^2}$

Need to maximize prediction gain, i.e. minimize prediction error.

Optimum choice of predictor coefficients $h_k$ needed.

Method:

Take approximate prediction: $\tilde{x}(n) \approx \sum_{k=1}^{N} h_k\, x(n-k)$

This assumption holds good for large bits/sample (less quantization error) or small N (less accumulated error).

Calculate prediction error variance.

Choose prediction coefficients so as to minimize this prediction error variance.

Page 23: Quantization

Optimum prediction coefficients

For N = 1: $h_{1,\text{opt}} = \dfrac{R_{XX}(1)}{R_{XX}(0)} = \rho_X(1)$

For any N:

$\begin{bmatrix} R_{XX}(0) & R_{XX}(1) & \cdots & R_{XX}(N-1) \\ R_{XX}(1) & R_{XX}(0) & \cdots & R_{XX}(N-2) \\ \vdots & \vdots & \ddots & \vdots \\ R_{XX}(N-1) & R_{XX}(N-2) & \cdots & R_{XX}(0) \end{bmatrix} \begin{bmatrix} h_{1,\text{opt}} \\ h_{2,\text{opt}} \\ \vdots \\ h_{N,\text{opt}} \end{bmatrix} = \begin{bmatrix} R_{XX}(1) \\ R_{XX}(2) \\ \vdots \\ R_{XX}(N) \end{bmatrix}$

This relation is known as (1) the normal equations, (2) the Yule-Walker equations, or (3) the Wiener-Hopf equations.
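A sketch of solving these normal equations from sample autocorrelations (illustrative; the synthetic AR(1) source is chosen because its optimum one-tap predictor is known to be h₁ = 0.9):

```python
import numpy as np

def predictor_coeffs(x, N):
    """Solve the normal equations R h = r using sample autocorrelations."""
    x = x - x.mean()
    def r(k):                                # autocorrelation estimate R_XX(k)
        return np.dot(x[: len(x) - k], x[k:])
    R = np.array([[r(abs(i - j)) for j in range(N)] for i in range(N)])
    rhs = np.array([r(k) for k in range(1, N + 1)])
    return np.linalg.solve(R, rhs)

rng = np.random.default_rng(3)
w = rng.standard_normal(20_000)
x = np.empty_like(w)                         # AR(1) source: x(n) = 0.9 x(n-1) + w(n)
x[0] = w[0]
for n in range(1, len(w)):
    x[n] = 0.9 * x[n - 1] + w[n]
print(predictor_coeffs(x, N=1))              # ~[0.9]
print(predictor_coeffs(x, N=2))              # ~[0.9, 0.0]
```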