

Codebook Organization to Enhance Maximum A

Posteriori Detection of Progressive Transmission of

Vector Quantized Images Over Noisy Channels

Ren-Yuh Wang Eve A. Riskin Richard Ladner


This work was supported by NSF Grants No. MIP-9110508 and CCR-9108314, an NSF Young

Investigator Award, and a Sloan Research Fellowship. Ren-Yuh Wang was with the Department

of Electrical Engineering, Box 352500, University of Washington, Seattle, WA 98195-2500. He is

now with FutureTel, Inc., 1092 E. Arques Avenue, Sunnyvale, CA 94086. Eve A. Riskin is with

the Department of Electrical Engineering, University of Washington, and Richard Ladner is with

the Department of Computer Science and Engineering, University of Washington. This work appeared in part in the 1993 Proceedings of the International Conference on Acoustics, Speech, and Signal Processing.


Abstract

We describe a new way to organize a full search vector quantization codebook so that images encoded with it can be sent progressively and have resilience to channel noise. The codebook organization guarantees that the most significant bits (MSB's) of the codeword index are most important to the overall image quality and are highly correlated. Simulations show that the effective channel error rates of the MSB's can be substantially lowered by implementing a Maximum A Posteriori (MAP) detector similar to one suggested by Phamdo and Farvardin [12]. The performance of the scheme is close to that of Pseudo-Gray [22] coding at lower bit error rates and outperforms it at higher error rates. No extra bits are used for channel error correction.


Contents

1 Introduction

2 Principal Component Partitioning

3 Distortion From Errors in Different Codeword Index Bits

4 Decreasing Average Distortion by Local Switching

5 MAP Detection for Images

5.1 Applying MAP detection to images coded with organized VQ codebooks

5.2 Prediction of Effective Error Rate

5.3 Fast MAP Decoding Scheme

6 Results

7 Conclusion And Future Work

8 Acknowledgements


1 Introduction

Vector quantization (VQ) [1] [5] [6] is a lossy compression technique that has been used

extensively for image compression. Full search VQ leads to higher image quality than is

given by the popular tree-structured VQ (TSVQ), but TSVQ is amenable to progressive

transmission due to its built-in successive approximation character. In a progressive image

transmission system [18], the decoder reconstructs increasingly better reproductions of the

transmitted image as bits arrive to allow for early recognition of the image. In [18], Tzou

describes TSVQ's suitability for progressive transmission. Due to the successive approxi-

mation nature of TSVQ, the farther down the tree the vector is encoded, the better the

reproduction. In [16], we organized a full search VQ so that images coded with it could be sent progressively as in TSVQ. Related work on ordered VQ codebooks includes that of Poggi and Cammorota, who reduce the bit rate of standard and progressive transmission VQ [2, 14], of Nasrabadi and Feng, who lower the bit rate in finite-state VQ [10], and our earlier work [15].

One problem with transmitting any VQ codeword index is that VQ is sensitive to channel errors. When VQ codeword indexes are transmitted over a noisy channel, channel errors can cause a significant increase in distortion if the VQ codebook is not organized for resilience to channel noise.

There has been significant previous research on the transmission of VQ indexes over noisy channels. Phamdo, Farvardin and Moriya [13] designed a channel-matched TSVQ scheme that does not need extra bits for error correction. They developed an iterative algorithm to design a set of codebooks that minimize a modified distortion for different channel error rates. Their experiments showed substantial improvements over ordinary TSVQ when the channel is very noisy, but because their scheme uses a different codebook for each channel error rate, a channel mismatch leaves the encoder with a codebook that is not best for the channel. Phamdo and Farvardin also implemented Maximum A Posteriori (MAP) detection


of Markov sources in which the correlation between successive VQ indexes is used to correct

channel errors [12].

In other work, Sayood and Borkenhagen [17] used residual redundancy to correct channel

errors in the design of a joint source-channel DPCM image coding system. Zeger and Gersho

[22] developed Pseudo-Gray coding to reassign the indexes of a VQ codebook to reduce the

distortion introduced by noisy channels. They rearrange the codebook so that codewords

with similar indexes lie near each other in the input space. They iteratively switch the

codewords when the switch lowers the distortion caused by channel noise. They obtain a

reordered codebook with distortion at a local minimum. Cheng and Kingsbury designed robust VQs using a codebook organization technique based on minimum cost perfect matching [3] and Knagenhjelm used a Kohonen method for designing VQs robust to channel noise [8]. Hung and Meng [7] adaptively change the codebook at the receiver, depending on the channel bit error rate, without suffering performance degradation for noiseless transmission. They also developed a modified annealing method to generate VQ codebooks that improve channel error robustness and have little performance degradation for noiseless transmission.

In this paper we extend our methods for using an organized fixed rate full search codebook for progressive transmission [16] to protect against channel errors. First, a full search progressive transmission tree is constructed using the method of Principal Component Partitioning (PCP) [16]. The result is an assignment of codeword indexes which allows for progressive transmission and at the same time gives protection against channel noise. Second, the full search progressive transmission tree is modified by switching nodes in a way reminiscent of the switching that is done in Pseudo-Gray Coding [22]. The switching improves the ability to protect against channel noise without affecting the progressive transmission performance over noiseless channels. Finally, because the PCP method causes the most significant bits of the codeword indexes to become highly correlated, we use a MAP detection scheme to further decrease distortion for noisy channels.

The remainder of the paper is organized as follows. In Section 2, we describe Principal


Component Partitioning (PCP), our codebook organization technique. A complete description of it can be found in [16]. In Section 3, we derive an equation which can be used to calculate the distortion introduced by errors in each bit position of the codeword indexes, and describe our switching algorithm in Section 4. In Section 5, we apply the MAP detector to correct channel errors in progressive image transmission, predict the effective and critical channel error rates of our MAP scheme, and propose a fast decoding scheme for the MAP detector. In Section 6, we present the results of using our MAP detector on images transmitted over simulated noisy channels. Finally, we conclude in Section 7.

2 Principal Component Partitioning

We developed Principal Component Partitioning (PCP) in [16] to build a full search progressive transmission tree that organizes a full search VQ codebook for progressive transmission. The tree gives full search VQ the successive approximation character that is built into TSVQ. The progressive transmission tree is a balanced tree whose terminal nodes or leaves are labeled by VQ codewords and whose internal nodes are labeled by intermediate codewords derived from the leaf codewords. The tree is used to reassign the original codeword indexes to new indexes that can be used for progressive transmission.

To build the tree, we initially find a hyperplane perpendicular to the first principal component of the training set used to design the full search codebook. The hyperplane partitions the codewords into two equal size sets. An iterative process is used to adjust the hyperplane [16] and the iteration terminates with two equal size sets of codewords which are used to build the next layer of the tree. The recursive application of PCP leads to a top-down construction of the full search progressive transmission tree. Wu and Zhang also used a method of principal components to build TSVQ's [21].
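The recursive splitting above can be sketched in a few lines. This is a simplified illustration rather than the paper's exact procedure: it takes the principal component of the codewords themselves (via the power method) and balances the two halves at the median projection, omitting the iterative hyperplane adjustment of [16]; all function names are our own.

```python
# Simplified sketch of Principal Component Partitioning (PCP).
# Assumption: we split on the first principal component of the codewords
# and balance by the median projection; the paper's iterative hyperplane
# adjustment against the training set is omitted.
def principal_axis(vectors):
    """First principal component via a few power-method iterations."""
    dim = len(vectors[0])
    mean = [sum(v[d] for v in vectors) / len(vectors) for d in range(dim)]
    centered = [[v[d] - mean[d] for d in range(dim)] for v in vectors]
    axis = [1.0] * dim
    for _ in range(50):
        # multiply the (unnormalized) covariance matrix by axis
        nxt = [0.0] * dim
        for x in centered:
            proj = sum(x[d] * axis[d] for d in range(dim))
            for d in range(dim):
                nxt[d] += proj * x[d]
        norm = sum(c * c for c in nxt) ** 0.5 or 1.0
        axis = [c / norm for c in nxt]
    return axis

def pcp_order(codewords):
    """Reorder codewords so the binary index reflects the PCP tree."""
    if len(codewords) <= 1:
        return list(codewords)
    axis = principal_axis(codewords)
    ranked = sorted(codewords,
                    key=lambda v: sum(a * x for a, x in zip(axis, v)))
    half = len(ranked) // 2          # equal-size split at the median
    return pcp_order(ranked[:half]) + pcp_order(ranked[half:])
```

After the reordering, the MSB of a codeword's position in the returned list corresponds to the first (most separating) split, which is what gives the index its progressive-transmission meaning.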

Fig. 1 (a) is a 256×256 magnetic resonance (MR) image displayed at 8 bits per pixel (bpp). The image is coded to 2 bpp in Fig. 1 (b) with a size 256 codebook with vector dimension


Figure 1: (a) The original 256×256 magnetic resonance image at 8 bpp (left) and (b) the coded image at 2 bpp with a GLA VQ codebook of size 256 and vector dimension 4 (right).


Figure 2: The intermediate images from 1 (top left) to 8 bpv (bottom right) coded by a 4

dimensional codebook of size 256 organized by PCP.


4 designed on a training set of 20 MR images with the generalized Lloyd algorithm (GLA)

[9]. We will use this MR image codebook throughout the paper. Fig. 2 is an example of the quality of intermediate progressive transmission images resulting from organizing the codebook with PCP. These images range from 1 bit per vector (bpv) to 8 bpv.

As a by-product of the PCP codebook organization, we have found that the PCP codebook indexes have natural resilience to channel noise. This is because the codewords whose indexes differ only in the least significant bits (LSB's) lie near each other in the full search progressive transmission tree. Thus, errors occurring in the LSB's of the codeword index produce little distortion. We shall examine this further in Section 3 and see that errors in differing bits of the codeword index have different effects on the final image. We shall see in Section 5 that they can be protected to a varying extent by a MAP detector.

3 Distortion From Errors in Different Codeword Index Bits

In this section, we derive an equation for calculating the distortions that are introduced by

errors in each bit position of the codeword index. The equation is then used to calculate the

distortion that is introduced by channel noise for our MR image codebook.

Given a size $N$ codebook $\{C_0, C_1, \ldots, C_{N-1}\}$ designed on a training set $T$, we define $T_i$ as the subset of $T$ whose nearest codeword is $C_i$, the centroid of $T_i$. We define $W_i$ as the number of vectors in $T_i$ and $D_i$ as the total distortion measured between $T_i$ and $C_i$. When there are channel errors such that every input in $T_i$ is mapped to $C_j$, we incur an increase in the total distortion of amount $(D_{j|i} - D_i)$ where

$$D_{j|i} = \sum_{X \in T_i} \|X - C_j\|^2 = \sum_{X \in T_i} \|(X - C_i) + (C_i - C_j)\|^2 = D_i + W_i \|C_i - C_j\|^2 \qquad (1)$$

where $\|C_i - C_j\|^2$ is the squared distance between $C_i$ and $C_j$. We define $\epsilon_i$ to be the probability that the $i$-th bit ($\epsilon_8$ = most significant bit (MSB), $\epsilon_1$ = least significant bit (LSB)) of the

codeword index is in error. We use the binary symmetric channel (BSC) and in the analysis

that follows, we assume that at most one bit of a codeword index is in error at a time. The

average distortion of a vector transmitted over a BSC channel is then

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i}\, D_{j|i} \qquad (2)$$

where $W = \sum_{j=0}^{N-1} W_j$ is the size of the training set. The quantity $Q_{j|i}$ is the conditional probability that the index of $C_j$ is received given that the index of $C_i$ was transmitted. By Equation 1,

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i} \left( D_i + W_i \|C_i - C_j\|^2 \right),$$

and

$$D_{av} = \frac{1}{W} \sum_{i=0}^{N-1} D_i \sum_{j=0}^{N-1} Q_{j|i} + \frac{1}{W} \sum_{i=0}^{N-1} \sum_{j=0}^{N-1} Q_{j|i}\, W_i \|C_i - C_j\|^2 \qquad (3)$$

Equation 3 is the same as Equation 7 in Farvardin [4]. Because $\sum_{j=0}^{N-1} Q_{j|i} = 1$, the first term of Equation 3 is exactly the average distortion when the codebook is used with a noiseless channel. We define this as $D = \frac{1}{W}\sum_{i=0}^{N-1} D_i$. For $i_k$, which we define to differ from index $i$ only in the $k$-th bit,

$$Q_{i_k|i} = Q\,\frac{\epsilon_k}{1 - \epsilon_k}, \qquad (4)$$

where $Q = \prod_{j=1}^{\log_2 N} (1 - \epsilon_j)$ is the probability that a codeword index is transmitted with no error. Since we assumed the probability of more than one error per codeword to be highly unlikely, we get

$$\sum_{j=0,\, j \neq i}^{N-1} Q_{j|i} \simeq \sum_{k=1}^{\log_2 N} Q_{i_k|i} \qquad (5)$$

Equation 3 then reduces to

$$D_{av} \simeq D + \frac{Q}{W} \sum_{k=1}^{\log_2 N} \frac{\epsilon_k}{1 - \epsilon_k} \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2 \qquad (6)$$

We organized the MR codebook from Section 2 with the PCP algorithm on the same MR

training set. For this codebook and data set, we calculate:

$$D_{av} \simeq D + A\left(135\,\frac{\epsilon_8}{1-\epsilon_8} + 57\,\frac{\epsilon_7}{1-\epsilon_7} + 16\,\frac{\epsilon_6}{1-\epsilon_6} + 5.6\,\frac{\epsilon_5}{1-\epsilon_5} + 3.8\,\frac{\epsilon_4}{1-\epsilon_4} + 3.1\,\frac{\epsilon_3}{1-\epsilon_3} + 1.6\,\frac{\epsilon_2}{1-\epsilon_2} + \frac{\epsilon_1}{1-\epsilon_1}\right) \qquad (7)$$

where

$$A = \frac{Q}{W} \sum_{i=0}^{N-1} W_i \|C_i - C_{i_1}\|^2 \qquad (8)$$

is normalized against the bit sensitivity of the LSB. Let us define the bit sensitivity for the $k$-th LSB to be

$$S_k = \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2 \qquad (9)$$

Equation 7 shows that for our data set,

$$S_8 = 135\, S_1.$$

This means that an error in the MSB will result in an increase in distortion that is 135 times

larger than that caused by an error in the LSB (if the channel error rates of the MSB and

LSB are the same). Fig. 3 shows the MR images at 2 bpp with 50% errors in each bit of the

codeword index. We can clearly see that errors occurring in the MSB of the codeword index

(top left) would make the images look much worse than errors in the LSB (bottom right).

Thus, the MSB's of the codeword index must be carefully protected against channel noise.
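Given an organized codebook, the bit sensitivities of Equation 9 can be computed directly, since the index $i_k$ is just $i$ with the $k$-th bit flipped. A minimal sketch, with names of our own; `weights[i]` plays the role of $W_i$:

```python
# Sketch: compute the bit sensitivities S_k of Equation 9 for a codebook
# whose indexes are already assigned (index i and i_k differ in bit k).
# `codewords[i]` is the codeword with index i; `weights[i]` is W_i, the
# number of training vectors mapped to it. Names are illustrative.
def bit_sensitivities(codewords, weights):
    n = len(codewords)
    bits = n.bit_length() - 1          # log2(N), N a power of two
    sens = []
    for k in range(1, bits + 1):       # k = 1 (LSB) .. log2 N (MSB)
        s_k = 0.0
        for i in range(n):
            i_k = i ^ (1 << (k - 1))   # flip the k-th bit of the index
            dist2 = sum((a - b) ** 2
                        for a, b in zip(codewords[i], codewords[i_k]))
            s_k += weights[i] * dist2
        sens.append(s_k)
    return sens                        # [S_1, ..., S_{log2 N}]
```

On a well-organized codebook the returned list grows sharply from the LSB entry to the MSB entry, mirroring the 1-to-135 spread of Equation 7.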

Generally speaking, using the PCP method to construct the full search progressive transmission tree will always have the effect that errors in the MSB's cause more distortion than errors in the LSB's. Due to the PCP, the MSB separates the codewords into two equal size sets which are fairly well separated. Thus, an error in the MSB will cause the decoded codeword to be far from the one that was transmitted. Each succeeding split separates codewords that are closer together. Thus channel errors occurring in the LSB's are less detrimental to the image quality than are errors occurring in the MSB's.

4 Decreasing Average Distortion by Local Switching

After a codebook is organized by PCP, the average distortion between the original image and the decoded image transmitted over noisy channels is expressed as Equation 6. The first


Figure 3: The coded images at 2 bpp with 50% errors occurring in the codeword index from

the MSB (top left) to LSB (bottom right) (codebook size 256).


Figure 4: An example of a size 4 codebook organized by PCP and the rate 0 ($\{C_{0123}\}$) and rate 1 ($\{C_{01}, C_{23}\}$) intermediate codebooks. [Tree diagram omitted: the leaves $C_0, C_1, C_2, C_3$ merge pairwise into $C_{01}$ and $C_{23}$, which merge into the root $C_{0123}$.]

term on the right hand side of Equation 6 is the average distortion for a noiseless channel and is fixed for a given codebook. In this section, we develop a method of switching nodes in the full search progressive transmission tree (related to Pseudo-Gray Coding [22]) which further decreases the second term of Equation 6, the distortion due to channel errors. The switching method does not affect the progressive transmission performance obtained from the PCP alone over noiseless channels.

We describe the switching method on a simple example of the size four codebook in Fig. 4. The four codewords $\{C_0, C_1, C_2, C_3\}$ are organized by the PCP and a progressive transmission tree is fit to the codebook to obtain intermediate codewords $\{C_{01}, C_{23}\}$ and $\{C_{0123}\}$. Two possible forms of arranging the codebook are shown in Fig. 5(a) and Fig. 5(b). We wish to select the codebook arrangement with the smaller $D_{av}$ in Equation 6. That is, we compare the values of

$$(W_0 + W_2)\|C_0 - C_2\|^2 + (W_1 + W_3)\|C_1 - C_3\|^2$$

and

$$(W_0 + W_3)\|C_0 - C_3\|^2 + (W_1 + W_2)\|C_1 - C_2\|^2.$$

The codebook arrangement in Fig. 5(b) is obtained by just switching $C_2$ and $C_3$ in Fig. 5(a); this clearly does not change the intermediate codeword $C_{23}$. Similarly, if we were to switch


Figure 5: Two possible codebook organizations by PCP. [Tree diagrams omitted: in (a) the leaves are $C_0, C_1, C_2, C_3$ under intermediate nodes $C_{01}$ and $C_{23}$; in (b) the leaves $C_2$ and $C_3$ are switched, giving $C_0, C_1, C_3, C_2$.]

codewords $C_0$ and $C_1$, the intermediate codeword $C_{01}$ would not change. Thus, the progressive transmission performance over noiseless channels is unchanged by this local switching (LS).

In the general case, there are $N$ codewords organized as leaves of a full search progressive transmission tree generated by the PCP. By a local switch, we mean the exchange of a left subtree with a right subtree as described in Fig. 6. A local switch has the effect that for some $i$ and $x$ where $1 \le i \le \log_2 N$ and $0 \le x < 2^{\log_2 N - i}$, and for all $y$, $0 \le y < 2^{i-1}$, the codewords with indexes $y + 2^i x$ and $y + 2^{i-1} + 2^i x$ are switched. We call $i$ the level of the local switch. The root is at level $\log_2 N$ and the leaves at level 0. By switching nodes in the full search progressive transmission tree in this manner, it is possible to further reduce $D_{av}$. Indeed, there is an optimal set of local switches which minimizes the average distortion $D_{av}$, but finding the optimal set of local switches requires searching exponentially many possibilities. We are left to explore alternative heuristics that lead to suboptimal solutions.
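The index arithmetic of a local switch is compact enough to state as code. A sketch, assuming the codebook is stored as a list indexed by the PCP-assigned codeword index; the function name is our own:

```python
# Sketch of one local switch (Fig. 6): at level `level`, for offset x,
# exchange the left and right subtrees, i.e. swap the codewords with
# indexes y + 2^i * x and y + 2^(i-1) + 2^i * x for all 0 <= y < 2^(i-1).
def local_switch(codewords, level, x):
    block = 1 << level            # 2^i, the span of the switched node
    half = block >> 1             # 2^(i-1), the size of each subtree
    out = list(codewords)
    for y in range(half):
        a = y + block * x
        b = y + half + block * x
        out[a], out[b] = out[b], out[a]
    return out
```

For example, a switch at level 1 with x = 1 exchanges the codewords at indexes 2 and 3, which is exactly the switch that turns the arrangement of Fig. 5(a) into that of Fig. 5(b).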


Recall from Section 3 that the bit sensitivity of the $k$-th bit is defined to be

$$S_k = \sum_{i=0}^{N-1} W_i \|C_i - C_{i_k}\|^2.$$

If we assume that the transmission error rate in all bit positions is identical, minimizing $D_{av}$ is equivalent to minimizing $\sum_{k=1}^{\log_2 N} S_k$ according to Equation 6 and Equation 9. Thus, the goal of our heuristics is to minimize this sum. As a practical matter, a local switch at level $\log_2 N$ does not change the value of $\sum_{k=1}^{\log_2 N} S_k$. More generally, a local switch at level $i$ does not change the value of $\sum_{k=i+1}^{\log_2 N} S_k$. Thus, to test whether a local switch at level $i$ decreases the average distortion, $D_{av}$, it is only necessary to test whether it decreases the value of $\sum_{k=1}^{i} S_k$.

There are a number of heuristics to apply LS to decrease $D_{av}$. Any heuristic we describe starts with a full search progressive transmission tree produced by the PCP as the initial tree. Listed below are four possibilities.

Simple Greedy Heuristic: In this approach, simply search the full search progressive transmission tree for the local switch which produces the largest decrease in $D_{av}$. Apply this local switch and repeat the process until no local switch reduces $D_{av}$.

Simple Greedy Heuristic with Simulated Annealing: In this approach, apply random switches to the initial tree without regard to decreasing $D_{av}$ (this is the annealing step). With the result of this annealing step, apply the simple greedy heuristic. Repeat this algorithm a number of times with different starting trees and choose the tree with the smallest $D_{av}$. Our experience is that the initial tree is a good starting point for the simple greedy heuristic, so the annealing step should only be performed on a small percentage of the nodes. A good choice appears to be randomly switching about 5% of the nodes of the initial tree. The number of times the annealing should be repeated depends on the number of nodes in the initial tree. For our trees with 256 leaves, 100 annealing trials were adequate.


Figure 6: A local switch at a node X switches its left and right subtrees. [Diagram omitted: two trees at level $i$ whose subtrees A and B under node X are exchanged.]

Top Down Greedy Heuristic: In this approach, simply search the tree level by level, starting with the root, to find any local switch which decreases $D_{av}$. Repeat this process until no local switch decreases $D_{av}$. Starting the search at the top of the tree has the advantage that usually, local switches nearer the root cause larger decreases in $D_{av}$ than switches lower down the tree, because more codewords are affected by the switch.

Top Down Greedy Heuristic with Simulated Annealing: This approach is similar to the Simple Greedy Heuristic with Simulated Annealing method, except that the top down greedy heuristic is used in place of the simple greedy heuristic.

All these heuristics, and most of the others we have tried, give very similar results. The simple greedy and top down greedy heuristics take about the same number of iterations before convergence. Adding simulated annealing increases the execution time by a large factor with only small decreases in the average distortion.
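As an illustration, the Simple Greedy Heuristic can be sketched as follows. This is a toy sketch with our own names, not the implementation used in the experiments: for brevity the weights $W_i$ are taken as 1 and all per-bit error rates as equal, so the cost to minimize reduces to $\sum_k S_k$.

```python
# A minimal sketch of the Simple Greedy Heuristic: repeatedly apply the
# local switch giving the largest decrease in the channel-noise term of
# D_av, until no switch helps. Weights W_i are taken as 1 and per-bit
# error rates as equal, so the cost is the sum of bit sensitivities S_k.
def sensitivity_sum(cw):
    """Sum of S_k over all bit levels, with W_i = 1 (Equation 9)."""
    n, bits = len(cw), len(cw).bit_length() - 1
    total = 0.0
    for k in range(1, bits + 1):
        for i in range(n):
            j = i ^ (1 << (k - 1))           # index differing in bit k
            total += sum((a - b) ** 2 for a, b in zip(cw[i], cw[j]))
    return total

def greedy_switch(cw):
    """Greedily apply distortion-reducing local switches to convergence."""
    bits = len(cw).bit_length() - 1
    cost = sensitivity_sum(cw)
    while True:
        best, best_cost = None, cost
        for level in range(1, bits + 1):     # every possible local switch
            block, half = 1 << level, 1 << (level - 1)
            for x in range(len(cw) >> level):
                cand = list(cw)
                for y in range(half):        # swap left and right subtrees
                    a, b = y + block * x, y + half + block * x
                    cand[a], cand[b] = cand[b], cand[a]
                c = sensitivity_sum(cand)
                if c < best_cost:
                    best, best_cost = cand, c
        if best is None:                     # no switch reduces the cost
            return cw
        cw, cost = best, best_cost
```

Note that each pass re-evaluates every (level, offset) pair, which is affordable because, as observed above, a switch at level $i$ only changes the terms $S_1, \ldots, S_i$.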

To obtain the best result, we used the Top Down Greedy Heuristic with Simulated An-

nealing method to further organize our MR codebook from Section 2. We calculate the new


average distortion over noisy channels by Equation 6:

$$D_{av} \simeq D + A\left(123\,\frac{\epsilon_8}{1-\epsilon_8} + 48\,\frac{\epsilon_7}{1-\epsilon_7} + 13\,\frac{\epsilon_6}{1-\epsilon_6} + 5.0\,\frac{\epsilon_5}{1-\epsilon_5} + 2.9\,\frac{\epsilon_4}{1-\epsilon_4} + 2.7\,\frac{\epsilon_3}{1-\epsilon_3} + 1.6\,\frac{\epsilon_2}{1-\epsilon_2} + \frac{\epsilon_1}{1-\epsilon_1}\right) \qquad (10)$$

Comparing Equation 10 and Equation 7, we see that errors introduce less distortion for our codebook once it has been further organized by the LS algorithm. If the error rates are the same on each bit, that is, $\epsilon_i = \epsilon$, $i = 1, 2, \ldots, 8$, Equation 7 becomes

$$D_{av} \simeq D + A\,\frac{223\,\epsilon}{1-\epsilon}$$

and Equation 10 becomes

$$D_{av} \simeq D + A\,\frac{197\,\epsilon}{1-\epsilon}.$$

We can see that the LS algorithm decreases the increase in distortion caused by channel errors by 11.6% in this case.

As we defined in Section 3, the bit sensitivity $S_k$ is the amount of increased distortion caused by channel errors occurring in the $k$-th LSB of the codeword index. For our medical image codebook, the $S_k$'s for a codebook organized with the PCP and the LS algorithm appear as the coefficients in Equation 10. As we can see, the $S_k$'s decrease monotonically from the MSB ($k = 8$) down to the LSB ($k = 1$) in a roughly exponential manner. This shows that the MSB's, whose bit sensitivities are larger, are more important to the image quality than the LSB's. While this monotonic effect is of course dependent on the data set and cannot be proved in general for real images, we prove a precise exponential relationship among the bit sensitivities for a 2-dimensional lattice VQ [19].


5 MAP Detection for Images

In Section 3, we showed that the MSB's of codeword indexes from a codebook organized by the PCP are much more important to the image quality than the LSB's. Another consequence of the full search progressive transmission tree is that it results in a high amount of correlation between the MSB's of the codeword indexes. This is because images vary slowly and neighboring input vectors are likely to be coded from the same region of the organized codebook. This is the same observation used by Neuhoff and Moayeri in their interblock noiseless coding for TSVQ [11].

Here we illustrate the high correlation between MSB's of the codeword indexes from our organized VQ. Fig. 7 is a display of the MSB (top left) to the LSB (bottom right) of the codeword indexes for a magnetic resonance brain image coded with our organized VQ (0 = black and 1 = white). The MSB is obviously highly correlated and is even a coarse approximation to the original image. An image coded with an unorganized codebook is not likely to display such high correlation. We see from Fig. 7 that the LSB is much more random but fortunately, this bit is not important to the image quality.

In this section, we use this correlation in the MSB's of the codeword index to apply MAP detection to images with channel errors. In addition, we predict the effective error rate, describe the fast MAP decoding scheme, and predict the critical channel error rate based on training data.

5.1 Applying MAP detection to images coded with organized VQ

codebooks

We send one bit plane of the codeword index at a time from the MSB to the LSB. Because the MSB's of the codeword indexes of an image are highly correlated, we can use the redundancy between them to correct channel errors for progressive transmission over noisy communication channels. To implement this, we use a variation of Phamdo and Farvardin's


Figure 7: MSB's (top left) to LSB's (bottom right) of image coded with organized full search

codebook of size 256 (0 = black, 1 = white). Notice that the MSB's are much more highly

correlated than the LSB's.


Figure 8: $T_i$ and $R_i$ are the transmitted and received bits over a BSC. [Diagram omitted: a 3×3 block of transmitted bits with $T_0$ at the center and its neighbors $T_1, \ldots, T_8$ around it, mapped by the channel to the received bits $R_0, \ldots, R_8$.]

MAP detector [12]. The received bit plane is simply scanned with a sliding 3×3 bit block and the middle bit is either changed or unchanged based on the channel bit error rate (BER) and conditional probabilities, calculated from training data, of the received 3×3 bit block.

In Fig. 8, $T_0$ and its eight neighboring bits $T_i$, $1 \le i \le 8$, are a block of bits from one bit plane of the codeword index. They are transmitted over a noisy BSC and the corresponding received bit of $T_i$ is the random variable $R_i$, which may be flipped by the channel noise. It is also helpful to recognize that $T_i$ is also a random variable; namely, the probability of the bit $T_i$, $0 \le i \le 8$, is statistically determined by the input images. Let $r_i$, $0 \le i \le 8$, be a fixed set of 9 received bits. Given knowledge of the error rate and the training set, if

$$\Pr(T_0 = r_0 \mid R_i = r_i,\; 0 \le i \le 8) \le \Pr(T_0 = \bar{r}_0 \mid R_i = r_i,\; 0 \le i \le 8), \qquad (11)$$

the received bit $r_0$ is decoded to $\bar{r}_0$; otherwise it is unchanged. In other words, we change the central bit of the $3 \times 3$ block if the probability that the transmitted central bit is the same as the received central bit, conditioned on the received $3 \times 3$ block, is less than 0.5.

The effect of the error rate on the probabilities in Equation 11 can be factored out so that the probabilities (which we estimate as relative frequencies) depend only on the training set. The resulting inequality has many terms but, if the error rate $\epsilon$ is much less than 1, the following inequality (where the symbol $\wedge$ represents a logical AND) can be used in its place with negligible loss in performance:

$$\Pr(T_i = r_i,\; 0 \le i \le 8)(1-\epsilon)^9 + \sum_{j=1}^{8} \Pr(T_j = \bar{r}_j \wedge T_i = r_i,\; i \neq j,\; 0 \le i \le 8)\,\epsilon(1-\epsilon)^8 \;\le\; \Pr(T_0 = \bar{r}_0 \wedge T_i = r_i,\; 1 \le i \le 8)\,\epsilon(1-\epsilon)^8 + \sum_{j=1}^{8} \Pr(T_0 = \bar{r}_0 \wedge T_j = \bar{r}_j \wedge T_i = r_i,\; i \neq j,\; 1 \le i \le 8)\,\epsilon^2(1-\epsilon)^7 \qquad (12)$$

The probabilities in Equation 12 depend only on the training data. As examples, Pr(T_i = r_i, 0 ≤ i ≤ 8) is simply the percentage of 3×3 blocks from the training set which are identical to the block containing exactly r_0, r_1, ..., r_8, and Pr(T_j = r̄_j ∧ T_i = r_i, i ≠ j, 0 ≤ i ≤ 8) is the percentage of 3×3 blocks from the training set which are identical to the block containing r_0, ..., r_{j−1}, r̄_j, r_{j+1}, ..., r_8. The first term on the left side of Equation 12 is the probability of transmitting T_i, 0 ≤ i ≤ 8, without channel errors. The second term on the left side is for when T_0 is transmitted correctly but there is one error in T_i, 1 ≤ i ≤ 8. In the first term on the right side, T_0 is transmitted incorrectly but all other bits are transmitted correctly. Finally, the second term on the right side is due to transmitting T_0 incorrectly along with one additional error in T_j, 1 ≤ j ≤ 8. Our simulation results show that Equation 12 is a good approximate decision criterion. Adding more terms to allow for more channel errors requires higher decoding complexity and gave little performance improvement.
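As an illustration, the decision rule of Equation 12 can be sketched in code. The following Python fragment (with hypothetical helper names, not from the paper) estimates the probabilities as relative frequencies of 3×3 patterns in a training bit plane and tests whether the central bit of a received block should be flipped:

```python
from collections import Counter

def train_block_counts(blocks):
    """Tally each 3x3 binary pattern, flattened to a 9-tuple with the
    central bit first, over a training bit plane."""
    return Counter(tuple(b) for b in blocks)

def map_flip_center(r, counts, total, eps):
    """Approximate MAP rule of Equation 12: return True if the central
    bit r[0] of the received block r should be flipped.  Probabilities
    are relative frequencies from the training-set counts."""
    pr = lambda b: counts[b] / total
    flip = lambda b, j: tuple(bit ^ int(i == j) for i, bit in enumerate(b))
    q = 1.0 - eps
    # Left side of Eq. 12: central bit correct, zero or one neighbor error.
    lhs = pr(r) * q**9 + sum(pr(flip(r, j)) for j in range(1, 9)) * eps * q**8
    # Right side: central bit incorrect, zero or one additional neighbor error.
    rb = flip(r, 0)
    rhs = pr(rb) * eps * q**8 + \
        sum(pr(flip(rb, j)) for j in range(1, 9)) * eps**2 * q**7
    return lhs < rhs
```

With a training set dominated by smooth blocks, an isolated central bit that contradicts its eight neighbors is flipped while a consistent block is left alone.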

5.2 Prediction of Effective Error Rate

The MAP decoder changes or does not change the received data based on Equation 12. Thus, it not only corrects errors but may also introduce errors. As a result, the effective error rate of our MAP scheme consists of two parts: one is due to the MAP decoder not correcting an error caused by channel noise, and the other is due to the MAP decoder introducing errors. When we design the MAP detector, we can calculate these two probabilities based on the training set and predict the effective error rate in advance. Our results show that the predicted effective error rates are very close to the simulated values.

Figure 9: Bit T_0 is transmitted over a noisy channel and received as R_0, and then decoded as D_0. [Block diagram: T_0 → Noisy Channel (ε_channel) → R_0 → MAP Decoder → D_0.]

For a bit T_0 transmitted over a BSC with channel error rate ε_channel and decoded with our MAP scheme as in Fig. 9, the received bit is R_0, and R_0 is decoded to D_0 by the MAP scheme. Because the effective error rate ε_effective is comprised of two terms, which depend only on ε_channel and the training set, it can be calculated ahead of time as follows:

ε_effective = Pr(D_0 ≠ T_0) = Pr(R_0 = T_0 ∧ D_0 ≠ R_0) + Pr(R_0 ≠ T_0 ∧ D_0 = R_0)    (13)

            = (1 − ε_channel) Pr(D_0 ≠ R_0 | R_0 = T_0) + ε_channel Pr(D_0 = R_0 | R_0 ≠ T_0).

For a fixed 3×3 vector r_0, r_1, ..., r_8 of bits, we define the integer

r = Σ_{i=0}^{8} r_i 2^i

to simply be the natural interpretation of the 3×3 binary vector as an integer. We also define r(j_1, j_2, ..., j_n) to differ from r in the j_1-th, j_2-th, ..., and j_n-th bits. Finally, we define the indicator function

I{r} = 1, if r_0 is changed by the MAP detector due to r being received;
I{r} = 0, otherwise.

For r = 0, 1, ..., 511, define Pr[r] = Pr(T_i = r_i, 0 ≤ i ≤ 8) to be the a priori probability of transmitting 3×3 binary block r. By considering the probabilities of all 512 patterns and their probabilities of being changed by the MAP detector, we get the following equation:

Pr(D_0 ≠ R_0 | R_0 = T_0) = Pr(R_0 is changed by MAP | R_0 = T_0)


[Figure 10: Simulated and estimated effective error rates over different channel error rates for Magnetic Resonance training data. Axes: channel error rate (10^−5 to 10^0) vs. decoded error rate / channel error rate; solid lines = simulated values; o = first bit (MSB), x = second bit, * = third bit, + = fourth bit.]

= Σ_{r=0}^{511} Pr[r] [ q^8 I{r} + Σ_{1 ≤ j_1 ≤ 8} ε q^7 I{r(j_1)} + Σ_{1 ≤ j_1 < j_2 ≤ 8} ε^2 q^6 I{r(j_1, j_2)} + ... + ε^8 I{r(j_1, j_2, ..., j_8)} ],

where q = 1 − ε. The first term on the right side of the above equation is the case in which R_1 to R_8 are all received correctly, and the second term is for when one bit is in error. The following terms are for when two through eight neighboring received bits are in error.
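As a sketch of this calculation, the following Python fragment (hypothetical names, assuming `prior[r]` and `indicator[r]` have been tabulated for all 512 patterns as described above) enumerates all neighbor-error patterns to predict the effective error rate of Equation 13:

```python
from itertools import combinations

def effective_error_rate(prior, indicator, eps):
    """Predict the effective error rate of the MAP decoder (Equation 13).

    prior     -- prior[r] = a priori probability of transmitting 3x3
                 pattern r (r encoded as an integer 0..511, bit i = r_i)
    indicator -- indicator[r] = 1 if the MAP detector flips the central
                 bit when pattern r is received, else 0
    eps       -- channel bit error rate
    """
    q = 1.0 - eps
    p_introduce = 0.0  # Pr(D0 != R0 | R0 = T0): MAP introduces an error
    p_miss = 0.0       # Pr(D0 = R0 | R0 != T0): MAP fails to correct
    for r in range(512):
        for k in range(9):  # k errors among the eight neighbor bits 1..8
            for errs in combinations(range(1, 9), k):
                mask = sum(1 << j for j in errs)
                w = prior[r] * eps**k * q**(8 - k)
                p_introduce += w * indicator[r ^ mask]
                # Central bit also flipped by the channel (bit 0):
                p_miss += w * (1 - indicator[r ^ mask ^ 1])
    return q * p_introduce + eps * p_miss
```

As a sanity check, a decoder that never flips leaves the effective rate at the channel rate, and one that always flips drives it to 1 − ε.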

The term Pr(D_0 = R_0 | R_0 ≠ T_0) in Equation 13 can also be obtained in the same manner, and thus we can directly calculate ε_effective. The calculated and simulated results for the first 4 MSB's from our medical image codebook are shown in Fig. 10. We see that the curves estimated from the training data fit the simulated results very closely. This result means that we can accurately predict the effective channel error rate from the training set and the channel error rate. We point out that the MAP detector works slightly worse here than on the test set; the reason is that two images from the test set of MR images include a significantly larger amount of black background than is included in the training set images. Errors in the black background are of course more easily corrected by the MAP detector than errors in other parts of the image.


5.3 Fast MAP Decoding Scheme

In this section, we show that each 3×3 pattern r has a channel error rate ε_min(r) below which the MAP decoder would not change the central bit of the block. For each r, if we know the channel error rate is below ε_min(r), then we would not apply the MAP decoder if pattern r is received. The estimate of the channel error rate can be derived from the signal power received at the decoder, the signal-to-noise ratio, or by using specific error correcting codes [7]. Here we show how to calculate ε_min(r) from the training set.

Equation 12, the decision equation, is quadratic in ε and therefore can be expressed as:

P_1(1 − ε)^2 + P_2 ε(1 − ε) < P_3 ε(1 − ε) + P_4 ε^2,    (14)

where the terms P_1, P_2, P_3, and P_4, whose meanings are clear from Equation 12, can be calculated from the training set for each specific 3×3 binary block r. Then for each r, ε_min(r) satisfies Equation 14 with equality. Thus, if the channel error rate is greater than ε_min(r), the central bit of block r would be flipped, and if it is below ε_min(r), it would not be changed. This simple rule makes MAP detection even faster.

If the channel error rate is less than ε_min(r) for all r, the MAP decoder will make no corrections at all for any data pattern. Thus, we define the critical channel error rate [12] as:

ε_critical = min_r ε_min(r).

Table 1 shows ε_critical for the 8 bit plane images. From the table, the MSB images, which have higher correlation, have lower values of ε_critical than the LSB images (except for the LSB). This means that the LSB's could be corrected only if the channel is very noisy. Fortunately, the LSB's are relatively unimportant, so the failure to correct errors in the LSB's does not severely affect the image quality.
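A sketch of how ε_min(r) and ε_critical might be computed by solving Equation 14 with equality (Python; P_1 through P_4 are the training-set terms for a given pattern r, and restricting to roots in (0, 1/2] is our assumption, since a BSC error rate above 1/2 is not meaningful):

```python
import math

def eps_min(P1, P2, P3, P4):
    """Smallest channel error rate at which Equation 14 holds with
    equality, i.e. at which the MAP decoder starts flipping the
    central bit of this pattern.  Returns None if no root in (0, 1/2].

    Solves f(eps) = P1(1-eps)^2 + (P2-P3) eps(1-eps) - P4 eps^2 = 0,
    written as a*eps^2 + b*eps + c = 0."""
    a = P1 - P2 + P3 - P4
    b = -2.0 * P1 + P2 - P3
    c = P1
    if abs(a) < 1e-15:           # degenerate case: linear in eps
        roots = [-c / b] if b else []
    else:
        disc = b * b - 4.0 * a * c
        if disc < 0:
            return None
        s = math.sqrt(disc)
        roots = [(-b - s) / (2 * a), (-b + s) / (2 * a)]
    roots = [x for x in roots if 0.0 < x <= 0.5]
    return min(roots) if roots else None

def eps_critical(pattern_terms):
    """eps_critical = min over all patterns r of eps_min(r)."""
    vals = [eps_min(*t) for t in pattern_terms]
    return min(v for v in vals if v is not None)
```

For example, with P_1 = 0.1, P_3 = 0.9 and P_2 = P_4 = 0, the quadratic factors as (1 − ε)(0.1 − ε), giving ε_min = 0.1.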


Bit-Image   ε_critical
MSB         0.001517
2nd MSB     0.003091
3rd MSB     0.009511
4th MSB     0.046284
5th MSB     0.103989
6th MSB     0.122357
7th MSB     0.197906
LSB         0.165561

Table 1: Critical error rates for the eight bit planes.

[Figure 11: The ratio of decoded error rates over channel error rates for MAP detection on the first four MSB's for the Magnetic Resonance test data. Axes: channel error rate (10^−5 to 10^0) vs. decoded error rate / channel error rate; o = first bit (MSB), x = second bit, * = third bit, + = fourth bit.]


6 Results

In this section, we apply our codebook organization techniques (PCP and LS, which we refer to as PCP-LS) and MAP detector to the magnetic resonance VQ codebook. In Fig. 11, we plot the average decoded error rate against the channel error rate for the four MSB's of the codeword index. This is for 50 simulations of a BSC and for a set of 5 MR images not included in the training set used to design the codebook and calculate the decision statistics. At a 1% channel bit error rate, we correct 67% (correct 75% and introduce 8%) of the errors in the MSB, correct 43% (correct 56% and introduce 13%) of the errors in the second MSB, correct 27% (correct 46% and introduce 19%) of the errors in the third MSB, and do not correct any errors in the fourth MSB. Having different error rates on each bit of the codeword index is related to work done by Farvardin [4], who obtained unequal error probabilities through the use of channel coding and modified his VQ design accordingly.

Simulations of progressive image transmission with a BER of 10%, without and with MAP detection, are shown in Fig. 12 and Fig. 13. We selected a BER of 10% to better illustrate the effects of channel errors and correction in the photographic reproductions. Here, the MAP detector is applied only to the first three MSB's. The image in Fig. 12 (a) is the decoded image from only the MSB of the codeword index with 10% errors. The images in Fig. 12 (b) to (h) are the decoded images from two MSB's to eight bits of the codeword index with 10% errors. The image in Fig. 13 (a) is the decoded image from only the MSB of the codeword index with 10% errors, corrected by the MAP decoder. We can see that the MAP decoder corrects most of the errors in the background and some of the errors in the head region. In the same manner, the images in Fig. 13 (b) and (c) are the decoded images from two and three MSB's of the codeword index with 10% errors, corrected by the MAP decoder. The images in Fig. 13 (d) through (h) are for four through eight bit planes received (the MAP detector is not applied to these bits).

Figure 12: Progressive transmission of a medical image from 1 bpv to 8 bpv with 10% BER without MAP detection.

We use the signal-to-noise ratio (SNR) to measure the performance of the various schemes, where the SNR in dB is:

SNR = 10 log_10 (E[X^2] / D).

E[X^2] is the average squared pixel intensity of the image and D is the mean-squared error. The SNR is 5.13 dB for the final image without MAP detection and 9.55 dB for the final image with MAP detection.
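For reference, the SNR computation can be sketched as follows (Python, over flattened pixel sequences; a hypothetical helper, not from the paper):

```python
import math

def snr_db(original, decoded):
    """SNR in dB: 10 log10(E[X^2] / D), where E[X^2] is the mean
    squared pixel intensity of the original image and D is the
    mean-squared error between original and decoded pixels."""
    n = len(original)
    ex2 = sum(x * x for x in original) / n
    d = sum((x - y) ** 2 for x, y in zip(original, decoded)) / n
    return 10.0 * math.log10(ex2 / d)
```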

Fig. 14 compares the SNRs for simulated transmission of the test set over noisy channels using different error protection schemes. These are the GLA followed by PCP-LS codebook organization, without and with the MAP detector (GLA/PCP-LS and GLA/PCP-LS+MAP), Pseudo-Gray coding [22], and a codebook that is randomly ordered. Pseudo-Gray coding slightly outperforms the GLA/PCP-LS. One difference between the two schemes is that Pseudo-Gray coding is affected somewhat evenly by errors in each codeword index bit, while


Figure 13: Progressive transmission of a medical image from 1 bpv to 8 bpv with 10% BER

with MAP detection.


[Figure 14: SNR of the decoded images over noisy channels for different error protection schemes. Axes: channel error rate (10^−6 to 10^0) vs. SNR (−5 to 25 dB); x = randomly ordered codebook, dashed line = GLA/PCP-LS+MAP, o = Pseudo-Gray, solid line = GLA/PCP-LS.]

the distortions caused by errors in the PCP-LS codeword index range from large for the MSB's to small for the LSB's. The slight difference in SNR's is likely attributable to the constraint of having a tree structure for the PCP-LS, whereas the Pseudo-Gray codebook is unconstrained. However, Pseudo-Gray coding requires about 120 times the amount of time required by the PCP-LS to organize the same codebook. The GLA/PCP-LS+MAP scheme outperforms Pseudo-Gray coding for channel BERs higher than 0.1%, with a 4.06 dB improvement at a 10% error rate. For error rates less than 0.1%, Pseudo-Gray coding has a slightly higher SNR. Pseudo-Gray coding does not require any computations by the decoder, while the GLA/PCP-LS+MAP requires the MAP detector.

In Fig. 14, we also see that the GLA/PCP-LS+MAP and GLA/PCP-LS converge to the same curve at error rates less than 10^−3, because the MAP detector does not work at low error rates. We show the performance of Pseudo-Gray coding, the GLA/PCP-LS, and the randomly ordered codebook for channel error rates of less than 10^−3 in Fig. 15 at a larger scale. We see that because the PCP-LS also has resilience to channel noise, its performance is much better than the randomly ordered codebook at low error rates and very close to that of Pseudo-Gray coding. The largest difference between Pseudo-Gray coding and the GLA/PCP-LS is 0.36 dB at a channel error rate of 10^−3.


[Figure 15: SNR of the decoded images over noisy channels for different error protection schemes, for channels of low error rates at which MAP does not work. Axes: channel error rate (10^−6 to 10^−3) vs. SNR (16 to 25 dB); o = Pseudo-Gray, solid line = GLA/PCP-LS, x = randomly ordered codebook.]

7 Conclusion and Future Work

We organized a full search VQ codebook for progressive transmission. We found that the MSB's of the codeword index were most important to the image quality and were also highly correlated. We designed a MAP scheme to correct channel errors in these bits. Simulations show that we can get performance close to Pseudo-Gray coding at low channel error rates and can outperform it at higher error rates while being able to send images progressively. Future work will include improving the MAP detector, adding channel coding to the system along the lines of the work of Farvardin [4], improving the performance at lower channel error rates by combining Pseudo-Gray coding with PCP-LS+MAP, determining bounds on the amount of distortion caused by errors in the different codeword index bits, and applying PCP-LS to other VQ problems [20].

8 Acknowledgements

We would like to thank the anonymous reviewers for their excellent comments on the manuscript and Professor Ken Zeger for the Pseudo-Gray code. We are particularly grateful to B. S. Srinivas for help with the final simulations.


References

[1] H. Abut, editor. Vector Quantization. IEEE Press, Piscataway, NJ, May 1990.

[2] E. Cammarota and G. Poggi. Address vector quantization with topology-preserving codebook ordering. In Proc. 13th GRETSI Symposium, pages 853-856, September 1991.

[3] N.-T. Cheng and N. K. Kingsbury. Robust zero-redundancy vector quantization for noisy channels. In Proceedings International Conference on Communications, pages 1338-1342, 1989.

[4] N. Farvardin. A study of vector quantization for noisy channels. IEEE Transactions on Information Theory, 36(4):799-809, July 1990.

[5] A. Gersho and R. M. Gray. Vector Quantization and Signal Compression. Kluwer Academic Publishers, Boston, 1992.

[6] R. M. Gray. Vector quantization. IEEE ASSP Magazine, 1:4-29, April 1984.

[7] A. Hung and T. Meng. Adaptive channel optimization of vector quantized data. In Proceedings Data Compression Conference, pages 282-291, March 1993.

[8] P. Knagenhjelm. A recursive design method for robust vector quantization. In Proceedings International Conference on Signal Processing Applications and Technology, pages 948-954, Boston, MA, November 1992.

[9] Y. Linde, A. Buzo, and R. M. Gray. An algorithm for vector quantizer design. IEEE Transactions on Communications, 28:84-95, January 1980.

[10] N. M. Nasrabadi and Y. Feng. Vector quantization of images based upon the Kohonen self-organizing feature maps. In Proceedings of 1990 ICASSP, pages 2261-2264, Albuquerque, NM, April 1990.

[11] D. L. Neuhoff and N. Moayeri. Tree searched vector quantization with interblock noiseless coding. In Proceedings of 1988 Conference on Information Sciences and Systems, pages 781-783, March 1988.

[12] N. Phamdo and N. Farvardin. Optimal detection of discrete Markov sources over discrete memoryless channels - applications to combined source-channel coding. IEEE Transactions on Information Theory, 40(1):186-193, January 1994.

[13] N. Phamdo, N. Farvardin, and T. Moriya. A unified approach to tree-structured and multi-stage vector quantization for noisy channels. IEEE Transactions on Information Theory, 39(3):835-850, May 1993.

[14] G. Poggi. Address-predictive vector quantization of images by topology-preserving codebook ordering. European Transactions on Telecommunications, 4:423-434, July-August 1993.

[15] E. A. Riskin, L. E. Atlas, and Shyh-Rong Lay. Ordered neural maps and their application to data compression. In Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, pages 543-551, 1991.

[16] E. A. Riskin, R. Ladner, R.-Y. Wang, and L. E. Atlas. Index assignment for progressive transmission of full search vector quantization. IEEE Transactions on Image Processing, 3(3):307-312, May 1994.

[17] K. Sayood and J. Borkenhagen. Use of residual redundancy in the design of joint source/channel coders. IEEE Transactions on Communications, 39(6):838-846, June 1991.

[18] K.-H. Tzou. Progressive image transmission: a review and comparison of techniques. Optical Engineering, 26(7):581-589, July 1987.

[19] R.-Y. Wang. Organization of Fixed Rate Vector Quantization Codebooks. PhD thesis, University of Washington, Seattle, WA, July 1994.

[20] R.-Y. Wang, E. A. Riskin, and R. Ladner. Codebook organization to enhance MAP detection of weighted universal vector quantized image transmission over noisy channels. In Proceedings Data Compression Conference, page 463, March 1994.

[21] X. Wu and K. Zhang. A better tree-structured vector quantizer. In Proceedings Data Compression Conference, pages 392-401, Snowbird, Utah, April 1991.

[22] K. Zeger and A. Gersho. Pseudo-Gray coding. IEEE Transactions on Communications, 38(12):2147-2158, December 1990.
