Signal Processing: Image Communication 15 (1999) 231–254

Speed versus quality in low bit-rate still image compression

Amir Averbuch^a,*, Moshe Israeli^b, Francois Meyer^c

^a Department of Computer Science, School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel
^b Faculty of Computer Science, Technion, Haifa 32000, Israel
^c Yale University School of Medicine, Department of Diagnostic Radiology, 333 Cedar Street, P.O. Box 208042, New Haven, CT 06250-8042, USA

Received 13 March 1998

Abstract

This paper presents a fast and modified version of the embedded zero-tree wavelet (EZW) coding algorithm that is based on [4,5,15,20] for low bit-rate still image compression applications. The paper presents the trade-off between the image compression algorithm speed and the reconstructed image quality measured in terms of PSNR. The measurements show a performance speedup by a factor of 6 on average over the EZW [20] and the algorithm presented in [5]. This speedup causes a PSNR degradation of the reconstructed image by 0.8–1.8 dB on average. Nevertheless, the reconstructed image looks 'fine' with no particular visible artifacts even with an average degradation of 1 dB in PSNR. The fast algorithm with its achieved speedup is based on consecutive application of three different techniques: (1) Geometric multiresolution wavelet decomposition [15]. It contributes a speedup factor of 4 in comparison to the symmetric wavelet decomposition in [5,20]. It has a perfect reconstruction property which does not degrade the quality. (2) A modified and reduced version of the EZW algorithm for zero-tree coefficient classification. It contributes a speedup factor of 6 in comparison to the full tree processing in [5,20]. It degrades the quality of the reconstructed image. This includes an efficient multiresolution data representation which enables fast and efficient traversal of the zero-tree. (3) Exact model coding of the zero-tree coefficients. It contributes a speedup factor of 15 in comparison to the adaptive modeling in [5]. This is a lossless step which reduces the compression rate because its entropy coder is sub-optimal. The paper presents a detailed description of the above algorithms accompanied by extensive performance analysis. It shows how each component contributes to the overall speedup. The fast compression has an option of manual allocation of compression bits which enables a better reconstruction quality at a pre-selected 'zoom' area. © 1999 Elsevier Science B.V. All rights reserved.

Keywords: Wavelet; Geometric decomposition; Compression; EZW

1. Introduction

* Corresponding author.

0923-5965/99/$ – see front matter © 1999 Elsevier Science B.V. All rights reserved.
PII: S0923-5965(98)00057-5

The embedded zero-tree wavelet (EZW) coding algorithm, which was first introduced by Shapiro [20], is a very effective technique for low bit-rate still image compression. Other related algorithms were presented in [4,5,12,15,18,19]. Nevertheless, it is common to all these methods that the computational requirements of still image compression present a challenge to the hardware architectures available in today's market. This paper presents the trade-off between the speed of a still image compression algorithm and the reconstructed image quality measured in PSNR. As we shall see, a significant performance speedup of the algorithm may be achieved. This speedup, though, is accompanied by degradation of the reconstructed image quality. In order to achieve faster compression, three main techniques were used consecutively:

1. Geometric wavelet decomposition [15]. This technique speeds up the multiresolution wavelet decomposition significantly without any effect on the reconstructed image quality. In fact, it provides a perfect reconstruction when used with a lossless coding algorithm. It enables performing direct two-dimensional convolution while substantially reducing the number of operations, with no need to transpose the image (matrix).

2. Modified and reduced version of the EZW algorithm for zero-tree coefficient classification. We describe a very efficient data structure in which the zero-tree is scanned. In addition, a shorter scan is performed in each quantization iteration: only a parent and its immediate children are considered. This technique degrades the reconstructed image quality since it affects the coding efficiency of how the wavelet coefficients are scanned and quantized. It trades the depth of the scan (how many levels in the tree), which would yield better correlated sequences, for speed.

3. A combination of exact model arithmetic coding and binary coding. This technique affects the obtained image quality because it is sub-optimal in comparison to the adaptive model arithmetic coding used in [5]. This means that the lossless coding process of the compression algorithm is less efficient in terms of the compression ratio (or PSNR).

Each component of the algorithm contributes to the overall speedup. In general, we see that a total speedup by a factor of 6 is achieved with a degradation of 0.8–1.8 dB in the reconstructed image PSNR in comparison to [5,20].

In addition to the above techniques, two complementary algorithms, which also play a part in the time versus quality trade-off, are presented:

1. A 'skip' option in which the image is sub-sampled one level during the wavelet decomposition and quantization. This technique reduces the amount of data in the image by a factor of 4. However, the reconstructed image quality is affected since only 25% of the original data is reproducible and the other 75% is reconstructed by interpolation.

2. A 'zoom' technique in which a pre-selected area in the image is compressed using more bits than the rest of the image. This enables the reconstruction of this area with a better quality than the rest of the image. It uses the tree structure of the reconstructed image in a very efficient way. This technique has a negligible additional computation time cost.

Comparative measurements were taken on a Silicon Graphics/Indigo workstation with a 100 MHz MIPS R4000 CPU. Each section in the paper contains a table of comparative performance of the algorithms in terms of the computation time and the PSNR of the reconstructed image.

We have chosen "ve gray-scale images for perfor-mance comparison. The conclusions of this paper,however, are applicable to color images as well. Theimages we used are described below. They areshown in Section 9.

1. Lenna512 – a 512×512 image with medium density of detail.

2. Barbara512 – a 512×512 image with high density of detail.

3. Barbara256 – a 256×256 image with high density of detail. This image was cut from the Barbara512 image.

4. Fabric512 – a 512×512 image which has texture and fine features.

5. Carl128 – a 128×128 image with medium density of detail. This image was taken from a video-phone sequence.

The paper is organized as follows. In Section 2 we review various compression techniques. Section 3 describes the multiresolution wavelet decomposition algorithms and the techniques which are used to improve their speed. In Section 4 the data structures which are used in the quantization part of the compression algorithm are described. Section 5


details the zero-tree classification and coding scheme. Section 6 describes the compression algorithms and Section 7 discusses the 'skip' option. Section 8 presents the 'zoom' technique and Section 9 demonstrates some performance comparisons.

2. Overview of compression techniques

There are many types of coding algorithms, such as statistical coding, pulse code modulation (PCM), predictive coding, transform coding, and interpolative and extrapolative coding [3,11,16,21,23]. Besides these categories, there are other schemes which are tailored for specific types of images. Each of these categories can be further divided based on whether the parameters of the coder are fixed or whether they change as a function of the type of data that is being coded (adaptive). In practice, a compression algorithm may include a mixture of these approaches in a compatible manner in order to achieve proper cost/performance trade-offs.

Statistical coding, which is also called entropy coding, is generally information-preserving (lossless). Statistical coding uses a probability model which estimates the entropy of the data being coded. Such coders are used in wavelet-based algorithms such as [4,5,12,14,18,19], in JPEG [17,22] and in the proposed algorithm.

The proposed algorithm is a transform coding [2]. The discrete cosine transform (DCT) [2] is part of the JPEG compression standard [17,22]. Despite all their advantages, window-based transform codings such as JPEG/DCT generate blocks at low bit-rates. The blocking effect is a natural consequence of the independent processing of each block. It is perceived in images as visible discontinuities across block boundaries. The authors used the local cosine transform (LCT) as basis functions that overlap adjacent blocks, thus reducing the blocking effect [1].

Quantization of the transformed images can be accomplished via scalar quantization, vector quantization [4,8,9,13,16], or more tailored methods that utilize the internal structure of the transform, such as [4,5,7,14,15,18–20], which use the properties of the multiresolution decomposition.

Fractal compression [6,10] uses a set of contractive transformations (fractal functions) that map from a defined rectangle of the real plane to smaller portions of that rectangle. These transformations are applied iteratively to the image.

3. Performance of the multiresolution wavelet decomposition

The multiresolution wavelet decomposition process involves recursive four-subband splitting of the image. The two-dimensional decomposition is achieved by convolution and decimation of the image with the filter, dimension by dimension (tensor product). The level (or depth) of decomposition is a parameter of the algorithm. We used a pair of symmetric, biorthogonal decomposition and reconstruction filters of length 9 and 7, respectively. In [4,5,15,20], this filter produced the best results in terms of the reconstructed image PSNR. We implemented the wavelet decomposition in two different methods: (1) side-by-side application of a symmetric two-step 1-D convolution and (2) geometric decomposition of the filter, which reduces the number of operations and performs a very efficient two-dimensional convolution [15]. Both methods are reversible in the sense that they enable a perfect reconstruction of the decomposed image. The difference between them lies in the computation speed: in method 1, we utilize the biorthogonal filter symmetry and pre-computation of the convolution indices, which enable almost free mirroring of the image at the borders, while in method 2 the filters are factored utilizing their internal structure. The 2-D convolution and decimation is done without the need to transpose the matrix (image). Although a significant speedup is achieved by method 2, the reconstructed image quality is not affected at all.
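The four-subband splitting above can be sketched in a few lines. The sketch below uses the Haar filter pair rather than the paper's 9/7 biorthogonal filters (an assumption made purely to keep the example short); it shows the tensor-product structure: a 1-D convolve-and-decimate applied first along rows, then along columns.

```python
# One level of four-subband splitting, illustrated with the Haar filter
# (NOT the paper's 9/7 biorthogonal filter; Haar keeps the sketch short).
# Input: a 2-D list with even dimensions. Output: (LL, LH, HL, HH).

def haar_split_1d(row):
    # Convolve and decimate: one low-pass and one high-pass sample
    # per pair of input samples.
    lo = [(row[2*i] + row[2*i+1]) / 2 for i in range(len(row)//2)]
    hi = [(row[2*i] - row[2*i+1]) / 2 for i in range(len(row)//2)]
    return lo, hi

def haar_split_2d(img):
    # Split every row, then every column of each half (tensor product).
    lo_rows, hi_rows = [], []
    for row in img:
        lo, hi = haar_split_1d(row)
        lo_rows.append(lo)
        hi_rows.append(hi)

    def split_cols(mat):
        cols = list(zip(*mat))            # transpose to reuse the 1-D split
        lo_cols, hi_cols = [], []
        for col in cols:
            lo, hi = haar_split_1d(list(col))
            lo_cols.append(lo)
            hi_cols.append(hi)
        # transpose back to row-major order
        return ([list(r) for r in zip(*lo_cols)],
                [list(r) for r in zip(*hi_cols)])

    LL, LH = split_cols(lo_rows)   # low-pass rows  -> LL and LH subbands
    HL, HH = split_cols(hi_rows)   # high-pass rows -> HL and HH subbands
    return LL, LH, HL, HH
```

Recursing on the returned LL block for N levels yields the multiresolution pyramid; the geometric method of [15] computes the same subbands with fewer operations and no transpose.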

Table 1 summarizes the time performance of the different decomposition algorithms as measured on a Silicon Graphics/Indigo workstation with a 100 MHz MIPS R4000 CPU.

In Table 1 we can see that there is a speedup factor of 4 in favor of the geometric decomposition in comparison to the symmetric decomposition.


Table 1
Comparison of the performance time in seconds of the geometric decomposition and symmetric decomposition algorithms. The images were decomposed into 6 levels using a symmetric biorthogonal filter of length 9

Image        Symmetric decomp. (s)   Geometric decomp. (s)
Lenna512     0.93                    0.21
Barbara512   0.93                    0.21
Fabric512    0.93                    0.21
Barbara256   0.23                    0.055
Carl128      0.04                    0.02

4. Multiresolution data structures

The fast modi"ed EZW algorithm uses a specialdata representation and data structures which en-able e$cient data access. In [5,20], a data repres-entation model which is called zero-tree was used.In the improved algorithm, a transformation of thezero-tree structure is performed. This transforma-tion reorders the zero-tree coe$cients and createsa data structure which is more suitable for theprocedures which follow the wavelet decomposi-tion, speci"cally it "ts the proposed scanning of thetree and the quantization technique.

4.1. The zero-tree representation

The wavelet decomposition generates a series of coefficients which represents the image after the application of the multi-level geometric two-dimensional convolution with the biorthogonal filter of length 9. The decomposition is performed with a predetermined number of levels (N). As depicted in Fig. 1, this series of coefficients is spatially mapped as either a tree or a forest, where the coarsest level LL block (LL[N]) is the tree root or the forest roots (in case LL[N] includes more than one coefficient). The order in which the wavelet coefficients emerge from the geometric multiresolution decomposition does not fit efficient physical scanning of the zero-tree (as we see in Fig. 1), which is a critical component of the quantization process. Therefore, they have to go through a major physical reorganization. In the following description of the algorithm, we shall refer to the LL[N] block as the tree root. The forest roots are a generalization of this notation. The tree root's sons are the LH[N], HL[N] and HH[N] block coefficients. The coefficients in blocks LH[i], HL[i] and HH[i], where 1 ≤ i ≤ N−1, represent the offspring of these blocks, respectively. This spatial representation is called the image zero-tree. In the zero-tree, the direct offspring of each coefficient correspond to the pixels of the same spatial orientation in the next finer level of the decomposition. In the zero-tree, each node has four direct offspring (sons), except for the leaves, which have none, and the roots, which have only three.
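The parent-offspring dependency of Fig. 1 can be written down directly. The sketch below (an illustration, using row/column coordinates within a subband) shows the standard 2×2 mapping between a coefficient and its four offspring in the same-orientation subband one level finer.

```python
def offspring(r, c):
    # The four direct offspring of the coefficient at (r, c) in a subband
    # occupy the 2x2 block starting at (2r, 2c) in the same-orientation
    # subband one decomposition level finer.
    return [(2*r, 2*c), (2*r, 2*c + 1), (2*r + 1, 2*c), (2*r + 1, 2*c + 1)]

def parent(r, c):
    # Inverse mapping: each coefficient's parent one level coarser.
    return (r // 2, c // 2)
```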

4.2. Zero-tree reordering

One of the key factors in the computation speedup is to have a data representation which enables traversing the zero-tree efficiently. The zero-tree coefficients are reordered to enable fast access to the coefficients during the classification and coding phases. In [5,20], an algorithm for coding the zero-tree coefficients was introduced. This algorithm requires extensive traversal of the zero-tree structure, up and down the whole tree, in order to identify the significant coefficients which are subsequently quantized. The depth of the search (how many levels we go) determines the quality and the compression rate. In order to speed up this process, the direct offspring of a coefficient in the zero-tree (its sons) are stored in consecutive memory locations. This is performed for all the levels of the zero-tree (levels of the wavelet decomposition). Thus, during the decomposition of level n and the creation of blocks LH[n+1], HL[n+1] and HH[n+1], the coefficients in these blocks are reordered. The LL[n+1] block is not reordered since it is subsequently decomposed into blocks of the next level.

Since this improved algorithm requires extensive breadth-first-search (BFS) traversal of the zero-tree, the reordering speeds up the required traversal and search. During the BFS traversal, the decomposition levels are scanned one after the other, from the coarsest level (the zero-tree root) to the finest level (the first decomposition level, or zero-tree leaves). The zero-tree reordering allows manipulation of and access to each father and its sons in the zero-tree in the most efficient way.

Fig. 1. Parent-offspring dependency in the zero-tree for three levels.
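A BFS over the reordered tree then needs no pointer chasing: with each level stored as a flat array and the four sons of the parent at index k held contiguously at indices 4k..4k+3 of the next level (the layout of Fig. 3), each parent-children visit is a single slice. This is a minimal sketch under that assumption; for brevity it ignores the special case of the roots, which have only three sons.

```python
def bfs_children(levels):
    # 'levels' is a list of flat arrays, one per decomposition level,
    # ordered coarse -> fine. After reordering, the four children of the
    # parent at index k in levels[j] sit contiguously at
    # levels[j+1][4*k : 4*k + 4].
    visits = []                    # (level, parent_index, children) tuples
    for j in range(len(levels) - 1):
        for k in range(len(levels[j])):
            children = levels[j + 1][4 * k : 4 * k + 4]
            visits.append((j, k, children))
    return visits
```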

4.2.1. Reordering from planar to hypercube blocks

The reordering of the zero-tree coefficients is demonstrated logically in Fig. 2 and the actual physical layout of the memory is given in Fig. 3. We assume we have an array of N (the depth of the decomposition) pointers, where each level j is accessed via pointer j in the array of pointers. A parent in row j sees its four children in row j+1 in the following way: if it occupies location k + offset 3 in level j, then its first child starts at a contiguous location in memory at the next level j+1 with offset 4k+3. Thus, if we have a parent at the kth location in row j, its four children are located contiguously in row j+1 starting at location 4k. The fast transformation that takes the wavelet coefficients from their locations after the convolution and decimation to the layout described by Fig. 3 is based on [24] and is described in detail hereafter.

In planar ordering, as used in the C programming language, the index of a sample I in an N-dimensional matrix located at index i_k along dimension k of length l_k is

index(I_{(i_0, i_1, …, i_{N−1})}) = \sum_{j=0}^{N−1} i_j \prod_{k=0}^{j−1} l_k = i_0 + l_0 (i_1 + l_1 (⋯ (i_{N−2} + l_{N−2} · i_{N−1}) ⋯)).

The length of each dimension can be split into a multiple of a power of 2, l_k = h_k 2^{b_k}. Hence, the matrix can be split into B = \prod_k h_k sub-matrices whose size along each dimension is a power of 2.

Fig. 2. Parent-offspring physical dependency in the reordered zero-tree.

We can now reindex each sample of the matrix within the sub-matrix it belongs to: I_{(i_0, i_1, …, i_{N−1})} = I_{(q_0, q_1, …, q_{N−1}), (r_0, r_1, …, r_{N−1})}, where q_k = i_k div h_k is the sub-matrix index along dimension k and r_k = i_k mod h_k is the index of the sample along dimension k within the sub-matrix.

We wish to reorder the original matrix into these sub-matrices in such a way that the sub-matrices are contiguous in memory and that the sub-matrix M_{(q_0, q_1, …, q_{N−1})} holding the sample I_{(q_0, …, q_{N−1}), (r_0, …, r_{N−1})} starts in memory at index

I_M = 2^{\sum_k b_k} \sum_{j=0}^{N−1} q_j \prod_{k=0}^{j−1} h_k.

The index of sample I_{(q_0, …, q_{N−1}), (r_0, …, r_{N−1})} within that sub-matrix is

index(I_{(q_0, …, q_{N−1}), (r_0, …, r_{N−1})}) = \sum_{j=0}^{N−1} r_j 2^{\sum_{k=0}^{j−1} b_k}.

Now the original matrix has been ordered into sub-matrices whose size along each dimension is a power of 2. Each sub-matrix can now be reordered from a planar ordering into a hypercube-like structure.

The index r_k along each dimension k of the sub-matrix can be seen in its binary representation r_k = \sum_{j=0}^{b_k−1} s_{k,j} 2^j, where s_{k,j} is the value of bit j. Let T_{k,j} be the number of dimensions i ≤ k in the sub-matrix whose size is ≥ 2^j:

T_{k,j} = |{b_i : i ≤ k, b_i ≥ j}|.

Fig. 3. The physical layout of parents and children in the zero-tree.

We can now compute the new index of I_{(q_0, …, q_{N−1}), (r_0, …, r_{N−1})} within the sub-matrix M_{(q_0, …, q_{N−1})} as

new index(I_{(q_0, …, q_{N−1}), (r_0, …, r_{N−1})}) = \sum_{k,j} 2^{T_{k,j} − 1 + \sum_{i<j} T_{N−1,i}} s_{k,j}.

For example, if we have an image of size 3·2^i × 5·2^j, then it is mapped by the above transformation to 15 blocks, where each block is of size 2^i × 2^j. Each block gets reordered in the following way: t_0 y_2 y_1 y_0 x_1 x_0 → y_2 y_1 x_1 t_0 y_0 x_0.
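The core of the planar-to-hypercube reorder is bit interleaving of the per-dimension indices. The following is a minimal 2-D sketch under the simplifying assumption of a square 2^bits × 2^bits sub-matrix (the h_k block factors and the block-index bits of the example above are omitted); it shows why the four children of a coarser-level 2×2 neighbourhood end up at consecutive new indices.

```python
def interleave_2d(r, c, bits):
    # Interleave the bits of row index r and column index c so that each
    # 2x2 neighbourhood maps to four consecutive indices.
    # Assumes a 2^bits x 2^bits sub-matrix (pure power-of-2 dimensions).
    out = 0
    for j in range(bits):
        out |= ((r >> j) & 1) << (2 * j + 1)   # bit j of r -> position 2j+1
        out |= ((c >> j) & 1) << (2 * j)       # bit j of c -> position 2j
    return out
```

For instance, the 2×2 block with rows 0..1 and columns 2..3 of a 4×4 sub-matrix lands on the contiguous index range 4..7.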

5. Zero-tree coding

As we shall see in the following sections, the zero-tree coefficients are coded and compressed during consecutive iterations of the coding algorithm. In each loop iteration, the coefficients are first coded using a classification algorithm and then entropy coded using binary and arithmetic coding methods. During the zero-tree reordering process, the coefficients are truncated to integer values.

Since the values are eventually quantized and coded using entropy coding, and since the quantization process produces values in integer resolution, this truncation has no impact on the reconstructed image quality. As part of the truncation, the zero-tree coefficients are stored in special data constructs. In these constructs, a separate field is maintained for each of the following:

1. Value – the coefficient absolute value.
2. Sign – the coefficient sign.
3. Type – the coefficient type (set during the classification process).

This memory construct enables usage of a positive integer threshold during the quantization and classification phases for all the wavelet coefficients.
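One possible shape for such a construct is sketched below; the field and function names are illustrative, not taken from the paper. Storing the truncated magnitude separately from the sign is what lets every threshold comparison work on positive integers.

```python
from dataclasses import dataclass

@dataclass
class Coefficient:
    # Separate fields, as described in the text: magnitude, sign, and the
    # classification tag filled in later. Names here are illustrative.
    value: int        # absolute value, truncated to an integer
    sign: int         # 0 = positive, 1 = negative
    type: str = "Z"   # classification type (POS, ZTR, ... ; Z until set)

def from_wavelet(x):
    # Truncate a real wavelet coefficient into the construct.
    return Coefficient(value=int(abs(x)), sign=1 if x < 0 else 0)
```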

The coding is accomplished via a special algorithm for coefficient classification. This classification algorithm takes advantage of the correlation between a parent coefficient and its offspring in the zero-tree. This close correlation evolves from the same spatial orientation of these coefficients with respect to the original image. The present algorithm is an improvement of [5,20]. The improvement in computation time lies mostly in the bottom-up classification, which avoids the need to traverse up and down the whole zero-tree in search of all the descendants of every coefficient. In the proposed algorithm, all the classification decisions are made between a parent and its immediate children only.

5.1. Coefficient classification

The classi"cation and coding are performed iter-atively until a stopping condition is met (usuallya prede"ned bit budget). In each iteration of theclassi"cation a threshold is used. This thresholdbecomes smaller with each iteration, thus re"ningthe quantization of the coe$cients which arecoded. A speedup of the classi"cation algorithm isachieved by selecting thresholds which are expo-nents of 2. This permits testing of a coe$cientagainst the threshold by performing a bitwise &and'operation. Also, if a threshold which is an exponentof 2 is initially selected, then all the subsequentthresholds would also hold this requirement sincefor every classi"cation iteration, the threshold isdecreased by a factor of 2 (or right-shifted).

Fig. 4 depicts a flow-chart of the classification and the coding process.

During the classi"cation, each wavelet coe$cientis tagged with one of the following types:

1. Positive (POS). A coefficient which is equal to or larger than the threshold, and which has not been coded in the previous iterations (it is smaller than threshold · 2). As we shall see in the classification example given below, the positive coefficients are later refined into negative (NEG) coefficients in case the original coefficient was negative (i.e. the sign field is set).

2. Zero (Z). A coefficient which is smaller than the threshold.

3. Zero-tree root (ZTR). A coefficient which is smaller than the threshold and which fulfills the following requirements:
3.1. All its immediate descendants (first degree only) in the zero-tree are smaller than the threshold.
3.2. Its immediate father in the zero-tree is not a ZTR.

Fig. 4. General compression flow-chart.

4. Positive ZTR (PZT). A coefficient which is equal to or larger than the threshold and which fulfills requirements 3.1 and 3.2 of the ZTR coefficient above.


In analogy to the positive coefficients above, the positive ZTR coefficients are later refined into negative ZTR (NZT) coefficients in case the original coefficient was negative (i.e. the sign field is set).

5. Isolated zero (IZ). A coefficient which is smaller than the threshold but has at least one significant immediate descendant.

The coefficients which are significant to the zero-tree reconstruction are the positive, negative, zero-tree root, positive ZTR, negative ZTR, and isolated zero (all except the zero coefficients). When a coefficient is classified as significant, a pointer to that coefficient is set. For this purpose, an array of pointers to the significant coefficients is built during each classification iteration. This array enables manipulation of only the significant coefficients in the following steps of the compression process, without having to traverse the zero-tree again in order to locate them.
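The per-coefficient decision between a parent and its immediate children can be sketched as below. This is a simplified illustration of the type definitions above, not the paper's full procedure: the refinement of previously coded coefficients and the propagated isolated-zero indication are omitted, and a coefficient inside an already-declared zero-tree is simply tagged Z here.

```python
def classify(mag, T, children_mags, father_is_ztr=False):
    # mag: integer magnitude of the coefficient;
    # T: current power-of-two threshold;
    # children_mags: magnitudes of the (up to four) immediate children.
    children_all_small = all(c < T for c in children_mags)
    if mag >= T:
        # Significant. PZT additionally requires all immediate children
        # to be insignificant (and the father not already a ZTR), which
        # saves coding four separate ZTR symbols for the sons.
        return "PZT" if children_all_small and not father_is_ztr else "POS"
    if children_all_small:
        return "ZTR" if not father_is_ztr else "Z"
    return "IZ"   # insignificant itself, but some child is significant
```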

Figs. 5 and 6 depict the flow-charts of the classification algorithm phases. These flow-charts refer to the classification of one decomposition level during one classification iteration. Thus, the 'sons' refer to the group of 4 sons (or 3 sons in the case of the zero-tree root) in the zero-tree structure. A 'father' refers to the mutual father of these sons in the next coarser level of the decomposition in the zero-tree.

The algorithm above presents the following improvements over the classification algorithm presented in [5,20]:

1. The bottom-up classi"cation algorithm avoidsthe need to traverse the zero-tree and to search

for all the descendants of every coe$cient. Ineach decomposition level, only the coe$cients ofthe current level and their mutual father in thecoarser decomposition level are processed. Thesons of a mutual father are ordered in conse-cutive memory locations (as in Fig. 3). This stepsubstantially reduces the complexity of thealgorithm.

2. The introduction of the PZT and NZT types enables more compression. It has been observed in many cases, especially in the finer decomposition levels that have larger blocks, that a POS or a NEG coefficient can also be a ZTR. Without the PZT and NZT types, such a situation would require coding the types of the coefficient and of its four sons in the zero-tree (POS, ZTR, ZTR, ZTR, ZTR). With the improved algorithm, only one type (PZT or NZT) is coded.

3. Integer arithmetic and the choice of thresholdswhich are powers of 2.

5.2. Classification example

In order to demonstrate the improvement of the above classification algorithm, we give an example of an 8×8 image. The zero-tree representation of this image after decomposition into three levels is given below. For this example we chose to plot the zero-tree as a tree rather than as a matrix for the convenience of the representation. The zero-tree is depicted as follows:


Fig. 5. Flow-chart of classification phase I. It is performed between two levels, on each parent and its immediate children.


Fig. 6. Flow-chart of classification phase II.


The decomposed image zero-tree is

The following is obtained after classification of the first level of the zero-tree (LH[1], HL[1] and HH[1]). The threshold is selected to be 128 since the largest coefficient is 152 (in absolute value).

After classi"cation of the second decompositionlevle (LH[2], HL[2] and HH[2]) with the samethreshold (128), the following is obtained:

We should note two phenomena at this stage:

1. A coefficient which was temporarily classified as isolated zero (IZ) during the classification of its sons was eventually classified as positive (POS). This coefficient is marked in bold in the above example. The temporary classification set during the classification of the sons is only an indication that at least one offspring of this coefficient is significant, and thus it cannot be a zero-tree root (or a negative ZTR or positive ZTR either). This indication propagates up the zero-tree during the classification of the upper levels until the root. This mechanism of propagating the sub-tree status eliminates the need to traverse up and down the zero-tree in search of significant coefficients.

2. A coe$cient was classi"ed as negative zero-treeroot (NZT). With the classi"cation in [5], weobtain a negative classi"ed coe$cient and fourzero-tree root sons, thus requiring compressionof "ve types (NEG and 4 ZTR-s). With theimproved classi"cation, only one type (NZT) iscompressed.

Finally, moving one more level up, we obtain the classification of the zero-tree root. The complete classification of the zero-tree with threshold 128 is

Since each classi"cation is based on the analysisbetween two neighboring levels we get &weaker'classi"cation and therefore we get less compres-sion because we do not utilize the fact that thecoe$cient classi"cation symbols (from Section 5.1)can describe the same &trend' even beyond twoneighboring levels. Thus, this &trend' can producebetter compact description of the trends in theclassi"cation tree but the computation complexityis substantially increased as we see in the nextsection.

5.3. Packing the symbols of the tree and bit-plane quantization of the wavelet values

After classi"cation phases I and II are completed(#ow-chart in Fig. 4), as it was described inSections 5.1 and 5.2, we packed the symbols thatdescribe the internal structure of the tree by arith-metic coding and quantize and entropy coding thevalues of the wavelet coe$cients via bit-plane cod-ing as described in Section 6.

5.4. Classification performance comparison

Table 2 summarizes the performance (in seconds) of the full classification algorithm described in [5,20] and the improved classification algorithm, as measured on a Silicon Graphics/Indigo workstation with a 100 MHz MIPS R4000 CPU.

In Table 2 we can see that we get a speedup factor between 5 and 8 when using the improved classification algorithm. For Barbara512, which is relatively dense in detail, the classification speedup is 5.5, while for Lenna512, with the same image size, we get a speedup of 7.3.

6. Entropy coding

The classification algorithm produces a stream of symbols with classification types (POS, NEG, ZTR, PZT, NZT, IZ and Z). These types represent the structure of the zero-tree at the current threshold

A. Averbuch et al. / Signal Processing: Image Communication 15 (1999) 231–254

Table 2
Comparison of the time performance in seconds between the full classification algorithm and the improved (new) classification algorithm. The classification in both cases was performed on 6 decomposition levels with the same 9-7 filters

Image        Comp. ratio   Full class. (in s)   New class. (in s)
Lenna512     40            3.43                 0.47
Barbara512   40            2.95                 0.53
Fabric512    40            3.02                 0.50
Barbara256   10            0.87                 0.12
Carl128      20            0.2                  0.04

quantization level. We can see, though, that the Z types are redundant, and the zero-tree structure can be obtained using only the non-Z, or significant, types [5,20]. Thus, only the significant types are passed to the coding algorithm. The coding algorithm uses lossless entropy coding of these types and packs the data efficiently. In addition to the significant types, the values of the POS, NEG, PZT and NZT coefficients are also passed to the coder (the values of the ZTR, IZ and Z coefficients are smaller than the threshold). These values are coded using binary coding.

The significant types are coded using arithmetic coding. There are three main variations of the arithmetic coding technique which can be used for the final phase of the compression process:

1. Fixed model. A fixed distribution model of the coded symbols is determined a priori. The model is uncorrelated with the coded stream. This technique is the most time efficient since it requires no adaptive model computation and no model updates.

2. Exact model. As in the fixed model, a constant model is pre-determined, but here it is computed a priori from the complete input stream. Exact model coding is generally more efficient than fixed model coding in terms of the compression ratio because the model is correlated with the coded stream.

3. Adaptive model. Makes no a priori assumptions on the coded stream and adapts the model on-line to the variations in the stream. Adaptive model coding is usually the most efficient in terms of the achieved compression ratio, but is also the least time efficient. It is used in [5].

In order to speed up the coding algorithm without sacrificing too much compression-ratio efficiency, an exact model is used for the arithmetic coder of the zero-tree coefficient types. In the full EZW algorithm in [5,20], both the types and the values of the coefficients are coded using adaptive arithmetic coding models.

The first step of the coding part of the algorithm is to compute the distribution (or the model) of the significant types of the zero-tree. We use an array of pointers which was set for the significant types during Phase II of the classification algorithm. While computing this model, two additional modifications are introduced:

1. The values of the POS, NEG, PZT and NZT coefficients are stored in a separate array. In the next coding iterations, with the corresponding smaller thresholds, these values will also be binary coded.

2. The values of the POS, NEG, PZT and NZT coefficients in the zero-tree are zeroed (they are stored in step 1 above). These coefficients will be classified as non-significant (ZTR, IZ or Z) in the coming classification iterations and will not require allocation of bits for their type.

The array of POS, NEG, PZT and NZT values is not initialized between the coding iterations. Rather, it is augmented in every iteration with the newly identified coefficients of that iteration. In the last iteration, all the coded values are stored in this array and are coded at the resolution of the last threshold.
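The iteration scheme in modifications 1 and 2 above can be sketched as a successive-threshold loop. The flat coefficient list and function name are illustrative simplifications, not the paper's data structures:

```python
# Simplified sketch of the successive-threshold loop: significant
# coefficients are moved into a persistent value array and zeroed in
# the tree, so later iterations classify them as insignificant
# without spending bits on their type.

def encode_iterations(coeffs, initial_threshold, n_iterations):
    coeffs = list(coeffs)
    significant_values = []    # grows across iterations, never reset
    t = initial_threshold
    for _ in range(n_iterations):
        for i, c in enumerate(coeffs):
            if abs(c) >= t:    # POS/NEG/PZT/NZT at this threshold
                significant_values.append(c)
                coeffs[i] = 0  # will classify as ZTR/IZ/Z later
        t //= 2
    return significant_values

# Thresholds 128, 64, 32 pick up coefficients in magnitude order.
vals = encode_iterations([200, -150, 90, 40, 10], 128, 3)
assert vals == [200, -150, 90, 40]
```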

6.1. Coding performance comparison

Table 3 summarizes the performance time in seconds of the adaptive model coding algorithm used in [5] and of the exact model/binary coding algorithm, as measured on a Silicon Graphics Indigo workstation with a 100 MHz MIPS R4000 CPU.

From this table we see that we get a speedup factor between 7 and 20 when using the exact


Table 3
Comparison of the performance time in seconds between the adaptive model and the exact model/binary coding algorithms

Image        Comp. ratio   Adaptive arithmetic   Exact arithmetic/binary
                           coding (in s)         coding (in s)
Lenna512     40            1.04                  0.05
Barbara512   40            1.02                  0.06
Fabric512    40            1.02                  0.06
Barbara256   10            0.39                  0.05
Carl128      20            0.08                  0.005

model/binary coder. The coder computation time is linear in the compression bit budget. This is not the case with the adaptive model coder.

7. Skipping the finest decomposition level

The finest decomposition level of the image, which consists of the LH[1], HL[1] and HH[1] blocks, represents the finest detail of the image. These blocks are the largest in the zero-tree structure and include 75% of the coefficients in the zero-tree. Usually, the coefficients in these blocks are relatively small in comparison to the coarser levels of the decomposition. This is due to the fact that the decomposition process is energy-compacting: as the decomposition becomes coarser, the image energy is contained in fewer coefficients, making their values higher on average. We experimented with sub-sampling the image by one decomposition level. The following summarizes the algorithmic implications of this option:

1. Decomposition: keep only LL[1] in the first level of the decomposition and ignore the rest.

2. Classification and compression: the leaves of the zero-tree are the LH[2], HL[2] and HH[2] block coefficients instead of LH[1], HL[1] and HH[1]. Only 25% of the coefficients in the full zero-tree are processed.

3. Decompression: the LH[1], HL[1] and HH[1] blocks are assumed to be zero. The LH[1], HL[1] and HH[1] blocks are interpolated, resulting in a smoothing which has de-noising and blurring effects on the reconstructed image.
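The savings behind the 'skip' option follow from simple coefficient bookkeeping. The helper below is an illustrative sketch that counts coefficients per level of a dyadic 2-D decomposition:

```python
# Dropping the finest level (LH[1], HL[1], HH[1]) removes 3/4 of the
# zero-tree coefficients, leaving only 25% to classify and code.

def zerotree_sizes(image_side, n_levels):
    """Coefficients per level for a dyadic 2-D decomposition."""
    sizes = {}
    side = image_side
    for level in range(1, n_levels + 1):
        side //= 2
        sizes[level] = 3 * side * side   # LH, HL and HH blocks
    sizes["LL"] = side * side            # coarsest approximation
    return sizes

sizes = zerotree_sizes(512, 6)
total = sum(sizes.values())
assert total == 512 * 512                # the transform is critically sampled
assert sizes[1] / total == 0.75          # finest level holds 75% of coefficients
```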

Tables 4–6 summarize the performance time in seconds of the different compression algorithms with

Table 4
Comparison of the performance time in seconds between the symmetric decomposition and geometric decomposition algorithms with and without the 'skip' option. The algorithms performed six levels of decomposition

Image        Symmetric decomp. (in s)   Geometric decomp. (in s)
             No skip    Skip            No skip    Skip
Lenna512     0.93       0.48            0.21       0.18
Barbara512   0.93       0.48            0.21       0.18
Barbara256   0.23       0.12            0.055      0.048

Table 5
Comparison of the performance time in seconds between the full classification and the improved classification algorithms with and without the 'skip' option. The algorithms performed six levels of decomposition

Image        Comp.   Full class. (in s)   New class. (in s)
             ratio   No skip    Skip      No skip    Skip
Lenna512     40      3.43       0.89      0.47       0.13
Barbara512   40      2.95       0.76      0.53       0.14
Barbara256   10      0.87       0.25      0.12       0.05

and without the 'skip' option. The algorithms performed six levels of decomposition.

We can see that in symmetric decomposition the 'skip' option provides a speedup by a factor of 2. With the geometric decomposition, the achieved speedup using the 'skip' option is only 1.2. The 'skip' option is impractical for small images (e.g., Carl128), since the loss of resolution due to the sub-sampling would result in a blurry image.


Table 6
Comparison of the performance time in seconds between the adaptive model coder and the exact model/binary coder with and without the 'skip' option

Image        Comp.   Adaptive arithmetic      Exact arithmetic/binary
             ratio   coding (in s)            coding (in s)
                     No skip    Skip          No skip    Skip
Lenna512     40      1.04       0.5           0.05       0.05
Barbara512   40      1.02       0.44          0.06       0.06
Barbara256   10      0.39       0.25          0.05       0.05

In Table 5 we can see that with both classification algorithms, the 'skip' option provides a speedup factor of about 3.

From Table 6 we can see that with the exact arithmetic/binary coder, the 'skip' option provides no speedup. This supports the observation that the computation time of the new coder is linear in the compression bit budget. With the adaptive arithmetic coder, a speedup by a factor of 2 is achieved.

8. Zooming

8.1. The zoom concept

In many compression applications, we would like to allocate different bit budgets to different areas of the image. In other words, we want to force the compression algorithm to allocate more bits to a pre-selected area (for example, a person's face). This motivation led us to define a parameterized 'zoom area'. The 'zoom area' parameters are:

1. Width of the selected area.
2. Height of the selected area.
3. Location (x, y) of the beginning of the selected area, which is an offset from the upper left corner of the image.
4. Zoom budget: the percent of the total compression budget which is allocated exclusively to the 'zoom area'.

8.2. Zoom area representation

The structure of the zero-tree is suitable for performing selective compression in a pre-selected area.

As with the entire image, the zoom area is also represented as a zero-tree (referred to as the zoom zero-tree). This zero-tree is a segment of the full zero-tree. For the construction of the zoom zero-tree, we exploit the spatial orientation of the zero-tree blocks, which places the zoom area in the same relative locations in all the levels of the zero-tree (Fig. 7). In order to obtain the zoom zero-tree, we first take the coefficients in the finest decomposition level which lie in the pre-defined zoom area. Then, we traverse up the zero-tree from these coefficients to their ancestors in the zero-tree. We stop

Fig. 7. Zoom zero-tree construction.


Fig. 8. Zoom compression flow-chart.


Table 7
The performance time in seconds of the improved classification algorithm with and without the 'zoom' option. The algorithm performed six levels of decomposition. The zoom area occupies 25% of the full image, at the image center. The bit budget allocated to the zoom area is 50% of the total compression budget

Image        Comp.   No skip            Skip
             ratio   No zoom   Zoom     No zoom   Zoom
Lenna512     40      0.47      0.5      0.13      0.13
Barbara512   40      0.53      0.63     0.14      0.16
Barbara256   10      0.12      0.17     0.05      0.05
Carl128      20      0.04      0.02     –         –

traversing the zero-tree when we either reach the zero-tree root (the LL[N] block) or are left with only one coefficient in either dimension of the zoom area. While traversing up the zero-tree, the zoom area defined by the traversed sub-tree may become larger than the initially defined zoom area. For instance, if we have three coefficients of the zoom zero-tree in a certain decomposition level, then in the next coarser decomposition level their father belongs to the zoom zero-tree. This actually means that the fourth son of this coefficient will also be included in the zoom zero-tree. This feature enables using the exact same classification and coding algorithms for the zoom area as were used for the entire image, because the zero-tree structure is preserved.
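The upward traversal can be sketched in one dimension for clarity. Halving indices per level is our simplifying assumption; the real construction tracks 2-D block coordinates:

```python
# Sketch of zoom zero-tree construction: starting from the zoom
# coefficients at the finest level, walk up to ancestors, stopping
# at the root or when one coefficient remains in the range.

def zoom_ancestors(first, last, n_levels):
    """Index range [first, last] covered by the zoom area at each
    level, from finest (level 1) to coarsest."""
    ranges = []
    for level in range(1, n_levels + 1):
        ranges.append((level, first, last))
        if last - first < 1:                  # one coefficient left: stop
            break
        first, last = first // 2, last // 2   # parent indices
    return ranges

# The covered range can only grow relative to the initial area:
# a parent drags all of its children into the zoom zero-tree.
assert zoom_ancestors(6, 9, 4) == [(1, 6, 9), (2, 3, 4), (3, 1, 2), (4, 0, 1)]
```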

Fig. 7 depicts the zoom zero-tree construction in the full zero-tree with two decomposition levels.

8.3. Zoom coding and compression

The zoom area is given a pre-defined bit budget which allows us to obtain better image quality in this area. This is achieved in the following way: first, we apply classification processes I and II on the entire image (including the zoom area). Then the entropy coder is applied on the zoom area with the predefined budget for this area, and afterwards on the image without the zoom area.

Prior to compression of the zoom area, the zoom zero-tree coefficients are copied into a separate data structure. In this structure, as with the entire image zero-tree, the sons of each coefficient are stored in consecutive memory locations. Thus a speedup of the classification and coding of the zoom zero-tree is achieved, as in the full zero-tree.

The zoom zero-tree coefficients are classified and coded using the exact same algorithms as are used with the full image. Fig. 8 depicts the flow-chart of the compression algorithm using the zoom technique.

8.4. Zoom performance comparison

Tables 7 and 8 summarize the performance time in seconds of the different fast compression algorithms. The measurements were taken with and without the 'zoom' option.

In the tables we can see that with the improved classification algorithm, the 'zoom' option has almost no effect on the computation speed. This is also true when both the 'zoom' and 'skip' options are applied.

We can see that with the exact arithmetic and binary coder, the 'zoom' option has almost no effect on the computation speed. This is also true when both the 'zoom' and 'skip' options are applied.

9. Performance comparison

We have studied some images which are typical of compression applications. The performance of the algorithms was measured both in


Table 8
The performance time in seconds of the exact model/binary coder with and without the 'zoom' option. The zoom area occupies 25% of the full image, at the image center. The bit budget allocated to the zoom area is 50% of the total compression budget

Image        Comp.   No skip            Skip
             ratio   No zoom   Zoom     No zoom   Zoom
Lenna512     40      0.05      0.05     0.05      0.04
Barbara512   40      0.06      0.05     0.06      0.05
Barbara256   10      0.05      0.05     0.05      0.04
Carl128      20      0.005     0.006    –         –

Table 10
Comparison of the performance time in seconds of the overall compression algorithms. The fast EZW includes the geometric decomposition, reduced classification, and exact arithmetic/binary coding. The algorithms performed image decomposition into six levels using a biorthogonal symmetric filter of length 9. Measurements were taken with and without the 'skip' and 'zoom' options. The 'zoom area' size is 25% of the full image, at the image center. The bit budget allocated to the 'zoom area' is 50% of the total compression budget

Image        Comp.   Fast EZW (in s)                      Full EZW (in s)
             ratio   No skip          Skip                No skip   Skip
                     No zoom  Zoom    No zoom  Zoom
Lenna512     40      0.73     0.76    0.36     0.35       5.4       1.87
Barbara512   40      0.8      0.89    0.38     0.39       4.9       1.68
Fabric512    40      0.8      –       –        –          5.0       –
Barbara256   10      0.225    0.275   0.148    0.138      1.49      0.62
Carl128      20      0.065    0.046   –        –          0.32      –

terms of the computation time and the PSNR of the reconstructed image, where applicable.

The peak signal-to-noise ratio (PSNR) in dB of the reconstructed image is defined as

    MSE = (1 / (SizeX · SizeY)) Σ_{i=1}^{SizeX} Σ_{j=1}^{SizeY} (New_{i,j} − Original_{i,j})²,

    PSNR = 20 log10(255 / √MSE),

where New is the reconstructed image, Original is the original image, and SizeX, SizeY are the height and width of the image.
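The PSNR definition translates directly into code. The sketch below uses plain lists of pixel values to stay self-contained:

```python
# A direct implementation of the PSNR definition for 8-bit images.

from math import log10, sqrt

def psnr(original, reconstructed):
    """Peak signal-to-noise ratio in dB between two same-size
    grayscale images given as 2-D lists of pixel values."""
    height, width = len(original), len(original[0])
    mse = sum(
        (reconstructed[i][j] - original[i][j]) ** 2
        for i in range(height)
        for j in range(width)
    ) / (height * width)
    return 20 * log10(255 / sqrt(mse))

orig = [[100, 100], [100, 100]]
recon = [[105, 95], [105, 95]]     # every pixel off by 5, so MSE = 25
assert abs(psnr(orig, recon) - 20 * log10(255 / 5)) < 1e-9
```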

Table 9 summarizes the overall image quality of the fast EZW algorithm in comparison to the full compression algorithm described in [5], in terms of the reconstructed image PSNR in dB.

Table 9
Comparison of the overall image quality between the fast EZW algorithm, the full compression algorithm [5,20] and the SPIHT [19] algorithm in terms of the reconstructed image PSNR in dB

Image        Comp. ratio   Fast EZW (PSNR)   Full EZW (PSNR)   SPIHT (PSNR)
Lenna512     40            29.6              31                30.8
Barbara512   42            25.6              26                26.4
Fabric512    40            30.5              32                32
Barbara256   10            29.0              30.5              30.1
Carl128      22            24.7              26.5              26.2


Fig. 9. Compression of Lenna 512×512.

The PSNR metric is not applicable when using the 'skip' or 'zoom' options, because these methods impose non-uniform bit allocation and thus the obtained image quality cannot be measured by the PSNR.

From the table we can see that for most images, the fast EZW degrades the reconstructed image PSNR by 0.8–1.8 dB. In images with a high density of detail, where the full compression algorithm performs relatively poorly (Barbara512), the fast EZW degrades the image PSNR by less than 1 dB.

Table 10 summarizes the performance time in seconds of the overall compression algorithms, as measured on a Silicon Graphics Indigo workstation with a 100 MHz MIPS processor.

We can see that without the 'skip' option, in larger images, the fast EZW algorithm achieves a speedup factor of 7. In smaller images, the


Fig. 10. Compression of Barbara 512×512.

speedup is 6. With the 'skip' option, the speedup decreases to 5 in larger images and 4 in smaller ones. The 'zoom' option has a negligible effect on the computation time.

Figs. 9–14 contain examples of the images which were tested with the fast compression algorithm. We present images which were generated using the 'skip' and 'zoom' options where applicable.

10. Conclusions

The proposed algorithm proved to be a robust still image compression method which produces a fully embedded bit stream. It generates high-quality gray-scale images at very low bit-rates. The computation speed of the algorithm is improved by using several techniques: (1) geometric


Fig. 11. Compression of Barbara 256×256.

Fig. 12. Compression of Fabric 512×512.


Fig. 13. Compression with JPEG.

Fig. 14. Compression of Carl 128×128.

decomposition, (2) reduced and improved zero-tree coding, (3) efficient data representation, and (4) a combination of exact arithmetic and binary coding algorithms. With these techniques, a computation speedup of 6 is obtained. The price of this speedup is a degradation of the reconstructed image PSNR by 0.8–1.8 dB.

References

[1] G. Aharoni, A. Averbuch, R. Coifman, M. Israeli, Local cosine transform – a method for the reduction of the blocking effect in JPEG, J. Math. Imaging Vision (Special issue on Wavelets) 3 (1993) 7–38.

[2] N. Ahmed, K.R. Rao, Orthogonal Transforms for Digital Signal Processing, Springer, New York, 1975.


[3] R.B. Arps, Binary image compression, in: Image Transmission Techniques, Academic Press, New York, 1979, pp. 219–276.

[4] A. Averbuch, D. Lazar, M. Israeli, Image compression using wavelet decomposition, IEEE Trans. Image Process. 5 (1) (January 1996) 4–15.

[5] A. Averbuch, R. Nir, Still image compression using coded multiresolution tree, Research Report, Tel Aviv University, 1995.

[6] M.F. Barnsley, L.P. Hurd, Fractal Image Compression, AK Peters, 1993.

[7] P.J. Burt, E.H. Adelson, The Laplacian pyramid as a compact image code, IEEE Trans. Commun. COM-31 (April 1983) 532–540.

[8] W.H. Equitz, Fast algorithms for vector quantization picture coding, M.S. Thesis, MIT, June 1984.

[9] A. Gersho, R. Gray, Vector Quantization and Signal Compression, Kluwer Academic Publishers, Dordrecht, 1991.

[10] A. Jacquin, Fractal image coding based on a theory of iterated contractive image transformations, Visual Comm. Image Process. SPIE-1360 (1990).

[11] N.S. Jayant, P. Noll, Digital Coding of Waveforms, Prentice-Hall, Englewood Cliffs, NJ, 1984, pp. 465–482.

[12] A.S. Lewis, G. Knowles, Image compression using the 2-D wavelet transform, IEEE Trans. Image Process. 1 (2) (1992) 244–250.

[13] Y. Linde, A. Buzo, R. Gray, An algorithm for vector quantizer design, IEEE Trans. Commun. COM-28 (January 1980) 84–95.

[14] S.G. Mallat, A theory for multiresolution signal decomposition: the wavelet representation, IEEE Trans. PAMI 11 (7) (July 1989) 674–693.

[15] F. Meyer, A.R. Averbuch, J.-O. Stromberg, Fast adaptive wavelet packet image compression, IEEE Trans. Image Process., to appear; appeared also in Data Compression Conf. – DCC98, Snowbird, Utah, 29 March–1 April 1998.

[16] A.N. Netravali, B.G. Haskell, Digital Pictures, Representation and Compression, Plenum Press, New York, 1991.

[17] W.B. Pennebaker, J.L. Mitchell, JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, New York, 1993.

[18] A. Said, W.A. Pearlman, Image compression using the spatial orientation tree, IEEE Internat. Symp. on Circuits and Systems, Chicago, IL, May 1993, pp. 279–282.

[19] A. Said, W.A. Pearlman, A new fast and efficient image codec based on set partitioning in hierarchical trees, IEEE Trans. Circuits Systems Video Technol. 6 (June 1996).

[20] J.M. Shapiro, Embedded image coding using zerotrees of wavelet coefficients, IEEE Trans. Signal Process. 41 (December 1993) 3445–3462.

[21] J.A. Storer, Data Compression: Methods and Theory, Computer Science Press, Rockville, MD, 1988.

[22] G.K. Wallace, The JPEG still picture compression standard, Comm. ACM 34 (4) (April 1991) 30–44.

[23] R. Williams, Adaptive Data Compression, Kluwer, Dordrecht, 1990.

[24] L. Woog, Private communication, 1997.
