1
Audio Signal Processing -- Quantization
Shyh-Kang Jeng
Department of Electrical Engineering / Graduate Institute of Communication Engineering
2
Overview
• Audio signals are typically continuous-time and continuous-amplitude in nature
• Sampling allows for a discrete-time representation of audio signals
• Amplitude quantization is also needed to complete the digitization process
• Quantization determines how much distortion is present in the digital signal
3
Binary Numbers
• Decimal notation
  – Symbols: 0, 1, 2, 3, 4, …, 9
  – e.g., 1999 = 1*10^3 + 9*10^2 + 9*10^1 + 9*10^0
• Binary notation
  – Symbols: 0, 1
  – e.g., [01100100] = 0*2^7 + 1*2^6 + 1*2^5 + 0*2^4 + 0*2^3 + 1*2^2 + 0*2^1 + 0*2^0 = 100
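The two positional expansions above can be checked with a short sketch (Python; the helper name `from_digits` is my own):

```python
def from_digits(digits, base):
    """Evaluate a digit list [d_(n-1), ..., d_0] in the given base."""
    value = 0
    for d in digits:
        value = value * base + d   # shift left one position, add next digit
    return value

# Decimal example from the slide: 1999 = 1*10^3 + 9*10^2 + 9*10^1 + 9*10^0
print(from_digits([1, 9, 9, 9], 10))              # 1999

# Binary example from the slide: [01100100] = 100
print(from_digits([0, 1, 1, 0, 0, 1, 0, 0], 2))   # 100
```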
4
Negative Numbers
• Folded binary
  – Uses the highest-order bit as an indicator of sign
• Two's complement
  – Follows the highest positive number with the lowest negative
  – e.g., with 3 bits: 3 = [011] and -4 = [100], since 4 - 2^3 = -4
• We use folded binary notation when we need to represent negative numbers
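A minimal sketch of the two notations (Python; both helper names are mine):

```python
def twos_complement(value, bits):
    """Two's complement bit pattern of a (possibly negative) integer."""
    # Masking with 2^bits - 1 wraps negatives around: -4 & 0b111 == 4 -> '100'
    return format(value & ((1 << bits) - 1), "0{}b".format(bits))

def folded_binary(value, bits):
    """Folded binary: sign bit followed by the magnitude in (bits-1) digits."""
    sign = "1" if value < 0 else "0"
    return sign + format(abs(value), "0{}b".format(bits - 1))

print(twos_complement(3, 3))    # '011'
print(twos_complement(-4, 3))   # '100' (the code right after the highest positive, 3)
print(folded_binary(-3, 3))     # '111' (sign bit 1, magnitude 11)
```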
5
Quantization Mapping
• Quantization Q(x): continuous values → binary codes
• Dequantization Q^(-1)(x): binary codes → continuous values
6
Quantization Mapping (cont.)
• Symmetric quantizers
  – Equal number of levels (codes) for positive and negative numbers
• Midrise and midtread quantizers
7
Uniform Quantization
• Equally sized ranges of input amplitudes are mapped onto each code
• Midrise or midtread
• Maximum non-overload input value: x_max
• Size of input range per R-bit code: Δ
  – Midrise: Δ = 2*x_max / 2^R
  – Midtread: Δ = 2*x_max / (2^R - 1)
• Let x_max = 1
8
2-Bit Uniform Midrise Quantizer
(Staircase input-output plot over inputs -1.0 to 1.0; output levels and codes:)
Code  Output level
 01      3/4
 00      1/4
 10     -1/4
 11     -3/4
9
Uniform Midrise Quantizer
• Quantize: code(number) = [s][|code|]
  – s = 0 when number ≥ 0; s = 1 when number < 0
  – |code| = 2^(R-1) - 1 when |number| = 1; int(2^(R-1)*|number|) elsewhere
• Dequantize: number(code) = sign*|number|
  – sign = +1 if s = 0; -1 if s = 1
  – |number| = (|code| + 0.5) / 2^(R-1)
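The midrise rules can be sketched as follows (Python; the function names are mine, and I guard the |number| = 1 edge case with >= so out-of-range inputs clip rather than overflow):

```python
def midrise_quantize(number, R):
    """Uniform midrise quantizer: number in [-1, 1] -> (sign bit, magnitude code)."""
    s = 0 if number >= 0 else 1
    mag = abs(number)
    half = 1 << (R - 1)                       # 2^(R-1) codes per sign
    code = half - 1 if mag >= 1 else int(half * mag)
    return s, code

def midrise_dequantize(s, code, R):
    """Reconstruct the midpoint of the code's input range."""
    sign = 1 if s == 0 else -1
    return sign * (code + 0.5) / (1 << (R - 1))

# 2-bit example from the earlier slide: 0.3 falls in the bin reconstructed as 1/4
s, c = midrise_quantize(0.3, 2)
print(s, c, midrise_dequantize(s, c, 2))   # 0 0 0.25
```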
10
2-Bit Uniform Midtread Quantizer
(Staircase input-output plot over inputs -1.0 to 1.0; output levels and codes:)
Code    Output level
 01        2/3
 00/10     0.0
 11       -2/3
11
Uniform Midtread Quantizer
• Quantize: code(number) = [s][|code|]
  – s = 0 when number ≥ 0; s = 1 when number < 0
  – |code| = 2^(R-1) - 1 when |number| = 1; int(((2^R - 1)*|number| + 1)/2) elsewhere
• Dequantize: number(code) = sign*|number|
  – sign = +1 if s = 0; -1 if s = 1
  – |number| = 2*|code| / (2^R - 1)
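The midtread rules, sketched in the same style (function names are mine; the |number| = 1 case is again guarded with >=):

```python
def midtread_quantize(number, R):
    """Uniform midtread quantizer: number in [-1, 1] -> (sign bit, magnitude code)."""
    s = 0 if number >= 0 else 1
    mag = abs(number)
    levels = (1 << R) - 1                     # 2^R - 1 output levels, one at zero
    code = (1 << (R - 1)) - 1 if mag >= 1 else int((levels * mag + 1) / 2)
    return s, code

def midtread_dequantize(s, code, R):
    sign = 1 if s == 0 else -1
    return sign * 2 * code / ((1 << R) - 1)

# 2-bit example from the earlier slide: small inputs land on the 0.0 level
s, c = midtread_quantize(0.2, 2)
print(midtread_dequantize(s, c, 2))   # 0.0
s, c = midtread_quantize(0.5, 2)
print(midtread_dequantize(s, c, 2))   # 2/3
```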
12
Two Quantization Methods
• Uniform quantization
  – Constant limit on absolute round-off error (Δ/2)
  – Poor SNR performance at low input power
• Floating point quantization
  – Some bits for an exponent, the rest for a mantissa
  – SNR is determined by the number of mantissa bits and remains roughly constant
  – Gives up accuracy for high signals but gains much greater accuracy for low signals
13
Floating Point Quantization
• Number of scale factor (exponent) bits: Rs
• Number of mantissa bits: Rm
• Low inputs
  – Roughly equivalent to uniform quantization with R = Rm + 2^Rs - 1
• High inputs
  – Roughly equivalent to uniform quantization with R = Rm + 1
14
Floating Point Quantization Example
• Rs = 3, Rm = 5
  [s0000000abcd] → scale=[000], mant=[sabcd] → [s0000000abcd]
  [s0000001abcd] → scale=[001], mant=[sabcd] → [s0000001abcd]
  [s000001abcde] → scale=[010], mant=[sabcd] → [s000001abcd1]
  [s1abcdefghij] → scale=[111], mant=[sabcd] → [s1abcd100000]
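One way to realize this splitting is to operate directly on the sign-magnitude bit string: count leading zeros (capped at 2^Rs - 1), keep the sign plus the next Rm - 1 magnitude bits, and on reconstruction fill the dropped low bits with 100…0 (the bin midpoint). This is my own sketch of the slide's Rs = 3, Rm = 5 scheme, not code from the lecture:

```python
def fp_quantize(code, Rs=3, Rm=5):
    """Split a sign-magnitude bit string [s][magnitude] into (scale, mantissa)."""
    s, mag = code[0], code[1:]
    max_scale = (1 << Rs) - 1
    zeros = len(mag) - len(mag.lstrip("0"))    # leading zeros in the magnitude
    z = min(zeros, max_scale)
    scale = max_scale - z                      # more leading zeros -> lower scale
    if scale == 0:
        mant = s + mag[z:z + Rm - 1]           # no guaranteed leading 1
    else:
        mant = s + mag[z + 1:z + Rm]           # skip the implicit leading 1
    return format(scale, "0{}b".format(Rs)), mant

def fp_dequantize(scale, mant, Rs=3, Rm=5, nbits=12):
    """Rebuild the bit string, filling dropped low bits with '1' then zeros."""
    s, frac = mant[0], mant[1:]
    max_scale = (1 << Rs) - 1
    z = max_scale - int(scale, 2)
    body = "0" * z + frac if z == max_scale else "0" * z + "1" + frac
    fill = nbits - 1 - len(body)
    return s + body + ("1" + "0" * (fill - 1) if fill > 0 else "")

print(fp_quantize("010110111001"))             # ('111', '00110')
print(fp_dequantize("111", "00110"))           # '010110100000'
```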
15
Quantization Error
• Main source of coder error
• Characterized by the average error power <q^2>
• A better measure: SNR = 10*log10(<x^2>/<q^2>)
• Does not reflect auditory perception
• Cannot describe how perceivable the errors are
• A satisfactory objective error measure that reflects auditory perception does not exist
16
Quantization Error (cont.)
• Round-off error
• Overload error
17
Round-Off Error
• Comes from mapping ranges of input amplitudes onto single codes
• Worse when the range of input amplitudes mapped onto a code is wider
• Assume that the error follows a uniform distribution
• Average error power:
  <q^2> = (1/Δ) ∫_{-Δ/2}^{Δ/2} q^2 dq = Δ^2/12
• For a uniform quantizer:
  <q^2> = x_max^2 / (3*2^(2R))
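The Δ^2/12 result can be checked numerically by averaging squared errors drawn uniformly from [-Δ/2, Δ/2] (a quick sketch, not part of the slides):

```python
import random

# Midrise step size for x_max = 1, R = 4 bits: Δ = 2/2^R
delta = 2 / 2**4

random.seed(0)
n = 200_000
# Monte Carlo estimate of the average error power <q^2>
avg = sum(random.uniform(-delta / 2, delta / 2) ** 2 for _ in range(n)) / n

print(avg, delta**2 / 12)   # the two agree to within a percent or so
```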
18
Round-Off Error (cont.)
SNR = 10*log10(<x^2>/<q^2>)
    = 10*log10(<x^2>/x_max^2) + 20*R*log10(2) + 10*log10(3)
    = 6.021*R + 4.771 + 10*log10(<x^2>/x_max^2)
(Plot: SNR (dB) vs. input power (dB) for 4-, 8-, and 16-bit quantizers)
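The constants come from 20*log10(2) ≈ 6.021 dB per bit and 10*log10(3) ≈ 4.771 dB; a small sketch (the function name is mine) evaluates the formula:

```python
import math

print(20 * math.log10(2))   # ~6.021 dB gained per extra bit
print(10 * math.log10(3))   # ~4.771 dB constant offset

def uniform_snr_db(R, input_power, x_max=1.0):
    """SNR of an R-bit uniform quantizer for mean input power <x^2> (round-off only)."""
    return 6.021 * R + 4.771 + 10 * math.log10(input_power / x_max**2)

print(uniform_snr_db(16, 0.5))   # ~98.1 dB for a strong input
```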
19
Overload Error
• Comes from signals where |x(t)| > x_max
• Depends on the probability distribution of signal values
• Reduced for high x_max
• High x_max implies wide levels Δ and therefore high round-off error
• Requires a balance between the need to reduce both errors
20
Entropy
• A measure of the uncertainty about the next code to come out of a coder
• Very low when we are pretty sure what code will come out
• High when we have little idea which symbol is coming
• Entropy = Σ_n p_n*log2(1/p_n)
• Shannon: this entropy equals the lowest possible bits per sample a coder could produce for this signal
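The definition translates directly into code (a sketch; the function name is mine):

```python
import math

def entropy(probs):
    """Shannon entropy in bits: sum of p*log2(1/p), skipping zero-probability symbols."""
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([0.5, 0.5]))    # 1.0 bit: a fair coin
print(entropy([1.0]))         # 0.0: no uncertainty at all
print(entropy([0.25] * 4))    # 2.0 bits = R for 2^R equal-probability symbols
```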
21
Entropy with 2-Code Symbols
• Entropy = p*log2(1/p) + (1-p)*log2(1/(1-p))
• When entropy falls below 1 bit, there exist lower bit rate ways to encode the codes than just using one bit for each code symbol
(Plot: entropy vs. p over [0, 1], peaking at 1 bit when p = 0.5)
22
Entropy with N-Code Symbols
• Entropy = Σ_n p_n*log2(1/p_n)
• Equals zero when one probability equals 1
• Any symbol with probability zero does not contribute to entropy
• Maximum when all probabilities are equal
• For 2^R equal-probability code symbols:
  Entropy = 2^R * (2^(-R)*log2(2^R)) = R
• Optimal coders only allocate bits to differentiate symbols with near-equal probabilities
23
Huffman Coding
• Create code symbols based on the probability of each symbol's occurrence
• Code length is variable
• Shorter codes for common symbols
• Longer codes for rare symbols
• Shannon: Entropy ≤ R_Huffman ≤ Entropy + 1
• Reduces bits over fixed-bit coding if the symbols are not evenly distributed
24
Huffman Coding (cont.)
• Depends on the probabilities of each symbol
• Created by recursively allocating bits to distinguish between the lowest-probability symbols until all symbols are accounted for
• To decode, we need to know how the bits were allocated
  – Recreate the allocation given the probabilities
  – Pass the allocation with the data
25
Example of Huffman Coding
• A 4-symbol case
  – Symbol:      00    01    10    11
  – Probability: 0.75  0.1   0.075 0.075
• Results
  – Symbol: 00  01  10   11
  – Code:   0   10  110  111
  – Average rate: R = 0.75*1 + 0.1*2 + 0.15*3 = 1.4 bits
(Figure: Huffman tree, built by repeatedly merging the two lowest-probability nodes)
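The recursive bit-allocation procedure can be sketched with a priority queue (my own implementation, not the lecture's; the exact 0/1 labels may be swapped relative to the slide, but the code lengths and average rate match):

```python
import heapq

def huffman_codes(probs):
    """Build Huffman codes by repeatedly merging the two lowest-probability nodes.

    probs maps symbol -> probability; returns symbol -> bit string.
    """
    # Heap entries: (probability, unique tiebreak index, symbols under this node)
    heap = [(p, i, [sym]) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    codes = {sym: "" for sym in probs}
    while len(heap) > 1:
        p1, _, syms1 = heapq.heappop(heap)
        p2, i2, syms2 = heapq.heappop(heap)
        for sym in syms1:                 # one branch gets a leading 0 ...
            codes[sym] = "0" + codes[sym]
        for sym in syms2:                 # ... the other a leading 1
            codes[sym] = "1" + codes[sym]
        heapq.heappush(heap, (p1 + p2, i2, syms1 + syms2))
    return codes

probs = {"00": 0.75, "01": 0.1, "10": 0.075, "11": 0.075}
codes = huffman_codes(probs)
avg = sum(probs[s] * len(codes[s]) for s in probs)
print(codes, avg)   # code lengths 1, 2, 3, 3 -> average 1.4 bits/symbol
```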
26
Example (cont.)
• Normally 2 bits/sample for 4 symbols
• Huffman coding requires 1.4 bits/sample on average
• Close to the minimum possible, since Entropy ≈ 1.2 bits
• 0 is a "comma code" here
  – Example: [01101011011110]
27
Another Example
• A 4-symbol case
  – Symbol:      00    01    10    11
  – Probability: 0.25  0.25  0.25  0.25
• Results
  – Symbol: 00  01  10  11
  – Code:   00  01  10  11
• Huffman coding adds nothing when symbol probabilities are roughly equal