
ADAPTIVE DISTRIBUTED SOURCE CODING

A DISSERTATION

SUBMITTED TO THE DEPARTMENT OF ELECTRICAL ENGINEERING

AND

THE COMMITTEE ON GRADUATE STUDIES

OF STANFORD UNIVERSITY

IN PARTIAL FULFILLMENT OF THE REQUIREMENTS

FOR THE DEGREE OF

DOCTOR OF PHILOSOPHY

David P. Varodayan

March 2010


© Copyright by David P. Varodayan 2010

All Rights Reserved


Abstract

Distributed source coding, the separate encoding and joint decoding of statistically dependent sources, has many potential applications ranging from lower complexity capsule endoscopy to higher throughput satellite imaging. This dissertation improves distributed source coding algorithms and the analysis of their coding performance to handle uncertainty in the statistical dependence among sources.

We construct sequences of rate-adaptive low-density parity-check (LDPC) codes that enable encoders to switch flexibly among coding rates in order to adapt to arbitrary degrees of statistical dependence. These code sequences operate close to the Slepian-Wolf bound at all rates. Rate-adaptive LDPC codes with well-designed source degree distributions outperform commonly used rate-adaptive turbo codes.

We then consider distributed source coding in the presence of hidden variables that parameterize the statistical dependence among sources. We derive performance bounds for binary and multilevel models of this problem and devise coding algorithms for both cases. Each encoder sends some portion of its source to the decoder uncoded as doping bits. The decoder uses the sum-product algorithm to simultaneously recover the source and the hidden statistical dependence variables. This system performs close to the derived bounds when an appropriate doping rate is selected.

We concurrently develop techniques based on density evolution to analyze our coding algorithms. Experiments show that our models closely approximate empirical coding performance. This property allows us to efficiently optimize parameters of the algorithms, such as source degree distributions and doping rates.

We finally demonstrate the application of these adaptive distributed source coding techniques to reduced-reference video quality monitoring, multiview coding and low-complexity video encoding.


Acknowledgments

There are far too many to thank individually, but the efforts of the following were crucial. Foremost is my advisor Prof. Bernd Girod, from whom I have learned the arts of making research compelling and engaging. Every facet of this work reflects his wisdom. Prof. Andrea Montanari and Prof. John Gill offered, not only thoughtful comments on this dissertation, but also their insights into the analysis of codes.

All my colleagues in the Image, Video and Multimedia Systems group were sources of inspiration and friendship. It was sheer enjoyment to learn from Anne Aaron, Shantanu Rane and David Rebollo-Monedero, pioneers of distributed source coding. Another mentor, Prof. Markus Flierl, showed me the ways of the academician. The quartet Aditya Mavlankar, Yao-Chung Lin, David Chen and Keiichi Chono were my key collaborators. We shared, not just code and ideas, but the daily ups and downs of research life. Kelly Yilmaz made everything administrative as easy as possible.

Since arriving at Stanford, I have had the companionship and encouragement of friends, old and new. There were Bay Area Night Game adventures with Prasad, Dan and Justin, my longtime friends from Australia. News from Toronto was shared with Adam, Alan, Hattie, Kris, Ming, Lin, Simon, Melissa, Andrew, Nat, Jimmy and Sam, and in a sporadic but ongoing conversation with Jenny. I befriended many wonderful people at Stanford, among them Erick, Joyce, Laura, Ocie, Girish, Rebecca, Yuan, Christine and Sean, and everyone in my GSBGEN 374 T-group.

My deepest gratitude is for my family. My parents Paul and Selvi raised me across continents to think across boundaries. My sister Florence and I share a lifelong dialogue that influences me as a brother, son and father. Grace, my wife, brings love and joy to my life in measures previously unimagined. I thank her for all her efforts in support of my success. Little Oscar, appearing recently, reminds me how we learn.


Contents

Abstract

Acknowledgments

1 Introduction

2 Distributed Source Coding Background
   2.1 Theory
      2.1.1 Lossless Coding
      2.1.2 Lossy Coding
   2.2 Techniques
      2.2.1 Slepian-Wolf Coding
      2.2.2 Wyner-Ziv Coding
   2.3 Selected Applications
      2.3.1 Low-Complexity Video Encoding
      2.3.2 Multiview Coding
      2.3.3 Reduced-Reference Video Quality Monitoring
   2.4 Advanced Techniques
      2.4.1 Rate Adaptation
      2.4.2 Side Information Adaptation
      2.4.3 Multilevel Distributed Source Coding
      2.4.4 Performance Analysis and Code Design
   2.5 Summary

3 Rate Adaptation
   3.1 Rationale for Rate-Adaptive LDPC Codes
   3.2 Construction of Rate-Adaptive LDPC Code Sequences
      3.2.1 Rate Adaptation by Syndrome Deletion
      3.2.2 Rate Adaptation by Syndrome Merging
      3.2.3 Rate-Adaptive Sequences by Syndrome Splitting
   3.3 Analysis of Rate Adaptation
   3.4 Rate-Adaptive LDPC Coding Experimental Results
      3.4.1 Performance Comparison with Other Codes
      3.4.2 Performance of Rate Adaptation Analysis
   3.5 Summary

4 Side Information Adaptation
   4.1 Distributed Random Dot Stereogram Coding
   4.2 Model for Source and Side Information
      4.2.1 Definition of the Block-Candidate Model
      4.2.2 Slepian-Wolf Bound
   4.3 Side-Information-Adaptive Codec
      4.3.1 Encoder
      4.3.2 Decoder
   4.4 Analysis of Side Information Adaptation
      4.4.1 Factor Graph Transformation
      4.4.2 Derivation of Degree Distributions
      4.4.3 Monte Carlo Simulation of Density Evolution
   4.5 Side-Information-Adaptive Coding Experimental Results
      4.5.1 Performance with and without Doping
      4.5.2 Performance under Different Block-Candidate Model Settings
   4.6 Summary

5 Multilevel Side Information Adaptation
   5.1 Challenges of Extension to Multilevel Coding
   5.2 Model for Multilevel Source and Side Information
      5.2.1 Multilevel Extension of the Block-Candidate Model
      5.2.2 Slepian-Wolf Bound
   5.3 Multilevel Side-Information-Adaptive Codec
      5.3.1 Whole Symbol Encoder
      5.3.2 Whole Symbol Decoder
   5.4 Analysis of Multilevel Side Information Adaptation
      5.4.1 Factor Graph Transformation
      5.4.2 Derivation of Degree Distributions
      5.4.3 Monte Carlo Simulation of Density Evolution
   5.5 Multilevel Coding Experimental Results
   5.6 Summary

6 Applications
   6.1 Reduced-Reference Video Quality Monitoring
      6.1.1 ITU-T J.240 Standard
      6.1.2 Maximum Likelihood PSNR Estimation
      6.1.3 Distributed Source Coding of J.240 Coefficients
   6.2 Multiview Coding
   6.3 Low-Complexity Video Encoding
   6.4 Summary

7 Conclusions

Bibliography


List of Tables

6.1 Codec settings for distributed source coding of J.240 coefficients
6.2 Camera array geometry for multiview data sets Xmas and Dog
6.3 Average PSNR and rate for multiview coding traces
6.4 Average luminance PSNR and rate for distributed video coding traces


List of Figures

1.1 Stereographic solar views captured by NASA’s STEREO satellites
1.2 Locations of STEREO satellites on July 13, 2007
2.1 Block diagrams of lossless distributed source coding
2.2 Slepian-Wolf rate region for lossless distributed source coding
3.1 Rate-adaptive distributed source coding with feedback
3.2 Fixed rate LDPC code encoding and decoding graphs
3.3 Factor graph produced by syndrome deletion
3.4 Creation of 4-cycles by adding a single edge to a factor graph
3.5 Factor graph produced by cumulative syndrome deletion
3.6 Rate-adaptive LDPC code performance
3.7 Distributions of rate for rate-adaptive LDPC codes
3.8 Rate-adaptive LDPC code performance modeled by density evolution
4.1 Stereogram consisting of a pair of random dot images
4.2 Superpositions of two random dot images
4.3 Asymmetric distributed source coding of random dot images
4.4 Block-candidate model of statistical dependence
4.5 Slepian-Wolf bounds for block-candidate model
4.6 Factor graph for side-information-adaptive decoder
4.7 Factor graph node combinations
4.8 Transformed factor graph equivalent in terms of convergence
4.9 Density evolution for side-information-adaptive decoder
4.10 Side-information-adaptive codec performance with and without doping
4.11 Side-information-adaptive codec performance with different block sizes
4.12 Side-information-adaptive codec performance with different numbers of candidates
5.1 Comparison of whole symbol and bit-plane-by-bit-plane statistics
5.2 Isomorphism of binary and Gray orderings of 2-bit symbols
5.3 Nonisomorphism of binary and Gray orderings of 3-bit symbols
5.4 Slepian-Wolf bounds for multilevel block-candidate model
5.5 Factor graph for multilevel decoder
5.6 Multilevel factor graph nodes
5.7 Transformed multilevel factor graph equivalent in terms of convergence
5.8 Density evolution for multilevel decoder
5.9 Multilevel codec performance with b = 8, c = 2 and m = 2
5.10 Multilevel codec performance with b = 32, c = 2 and m = 2
5.11 Multilevel codec performance with b = 64, c = 4 and m = 2
5.12 Multilevel codec performance with b = 64, c = 16 and m = 2
6.1 Video transmission with quality monitoring and channel tracking
6.2 Dimension reduction projection of ITU-T J.240 standard
6.3 Frame 0 of Foreman sequence, encoded and transcoded
6.4 PSNR estimation error using the J.240 standard
6.5 Analysis of distributed source coding of J.240 coefficients
6.6 Coding rate for distributed source coding of J.240 coefficients
6.7 Mean absolute PSNR estimation error versus estimation bit rate
6.8 Lossy transform-domain adaptive distributed source codec
6.9 Comparison of rate-distortion performance for multiview coding
6.10 PSNR and rate traces for multiview coding
6.11 Stereographic reconstructions of pairs of multiview images
6.12 Comparison of rate-distortion performance for distributed video coding
6.13 Luminance PSNR traces for distributed video coding
6.14 Rate traces for distributed video coding
6.15 Reconstructions of distributed video coding frames


Chapter 1

Introduction

Consider the source coding (or data compression) of the images of the sun in Fig. 1.1. These stereographic views were captured by a pair of satellites, part of NASA’s Solar Terrestrial Relations Observatory (STEREO), on July 13, 2007 [92]. The locations of the satellites, named STEREO Behind and STEREO Ahead, on that day are plotted in Fig. 1.2. At roughly 50 million km apart, the satellites did not communicate with each other and maintained limited communication with Earth. Therefore, the source coding of the images for their reconstruction on Earth could involve separate encoders and a joint decoder, a setting known as distributed source coding.

Figure 1.1: Stereographic solar views captured by NASA’s STEREO satellites [131].


Figure 1.2: Locations of STEREO Behind (B) and STEREO Ahead (A) satellites on July 13, 2007. The coordinates are Heliocentric Earth Ecliptic (HEE) and Geocentric Solar Ecliptic (GSE) with axes labeled in astronomical units [131].

Distributed source coding is proposed for many other practical applications [135]. An example is capsule endoscopy, the diagnostic imaging of the small intestine using a pill-sized wireless camera swallowed by the patient. Due to power and memory constraints, the capsule cannot encode consecutive frames of video jointly. The receiver, on the other hand, can decode the frames jointly because it is unconstrained in complexity, by virtue of being located outside the patient’s body.

This dissertation studies the distributed coding of statistically dependent sources under uncertainty about their statistical dependence. Our contributions are encompassed by the title adaptive distributed source coding.

• Rate adaptation is a technique for an encoder to adapt to uncertainty in the degree of statistical dependence. Our construction of rate-adaptive low-density parity-check (LDPC) codes allows an encoder to switch flexibly between encoding bit rates. If there is feedback from decoder to encoder, a rate-adaptive encoder need not know the degree of statistical dependence in advance.


• Side information adaptation is a technique for the coding system to adapt to hidden variables that parameterize the statistical dependence. We formulate models for both binary and multilevel source signals with hidden dependence and compute performance bounds for them. We develop encoders that send some portion of their sources uncoded as doping bits and decoders that discover the hidden variables (while decoding the source) using the sum-product algorithm.

• The analysis of rate and side information adaptation enables the designer to adapt the system to specific statistics or to make it robust against uncertain statistics. We provide density evolution techniques to estimate and optimize performance based on the statistical model and coding algorithm parameters, such as LDPC code degree distributions and doping rates.
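The feedback-driven rate adaptation described in the first bullet can be sketched in a few lines. In the toy simulation below, the encoder reveals one syndrome bit of a short binary source at a time, and the decoder, which holds correlated side information, requests more until its estimate is confirmed. The random parity checks, the brute-force coset search, and the direct comparison against the source (standing in for a CRC-style success test) are simplifications for illustration; the dissertation's actual construction uses rate-adaptive LDPC codes with belief propagation decoding.

```python
import itertools
import random

random.seed(7)

n, p = 12, 0.1                      # block length, BSC crossover probability
x = [random.randint(0, 1) for _ in range(n)]          # source at the encoder
y = [b ^ (random.random() < p) for b in x]            # side info at the decoder

def coset_decode(rows, syn):
    # Pick the sequence satisfying all revealed parity checks that is
    # closest in Hamming distance to the side information y.
    best = None
    for cand in itertools.product((0, 1), repeat=n):
        if all(sum(h & c for h, c in zip(row, cand)) % 2 == s
               for row, s in zip(rows, syn)):
            d = sum(c != b for c, b in zip(cand, y))
            if best is None or d < best[0]:
                best = (d, list(cand))
    return best[1]

rows, syn = [], []
while True:                         # feedback loop: one more syndrome bit
    row = [random.randint(0, 1) for _ in range(n)]
    rows.append(row)
    syn.append(sum(h & b for h, b in zip(row, x)) % 2)
    x_hat = coset_decode(rows, syn)
    if x_hat == x:                  # toy success test; real systems use a CRC
        break

print(f"{len(syn)} syndrome bits sufficed for {n} source bits")
```

Because the side information is a noisy copy of the source, the number of requested syndrome bits typically lands near n·H(X|Y), well below the n bits that separate decoding would require.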

The dissertation makes the first and second contributions in turn, while developing the third contribution concurrently. Chapter 2 reviews the theory, practice and applications of distributed source coding, with a focus on precursors to our work. Chapter 3 motivates the need for rate-adaptive LDPC codes, describes their construction and obtains the density evolution method for their analysis. We establish that these codes operate near established information-theoretic performance bounds and that density evolution models the performance of different codes accurately. Chapter 4 introduces side information adaptation using the simple example of random dot stereograms. We formalize a binary model for hidden statistical dependence and derive theoretical performance bounds. Coding algorithms and analysis methods are presented and their performance is evaluated with respect to each other and the bounds for various settings of the model. Chapter 5 extends side information adaptation to multilevel signals, starting by addressing the additional challenges. As in the preceding chapter, we continue our discussion with a model, performance bounds, coding algorithms, analysis methods and experimental validation. Chapter 6 applies the techniques developed in this dissertation to problems in reduced-reference video quality monitoring, multiview coding and low-complexity video encoding. We showcase a different aspect of performance for each application. Chapter 7 concludes the dissertation and offers avenues for further research.


Chapter 2

Distributed Source Coding Background

Distributed source coding is the data compression of two (or more) sources, or one (or more) sources with additional side information available only at the decoder, such that the encoders for each source are separate. A single joint decoder receives all the encodings, exploits statistical dependencies, and reconstructs the sources, if applicable, in conjunction with the side information. Fig. 2.1 shows block diagrams for lossless distributed source coding with two signals. The configuration in Fig. 2.1(b), called source coding with side information at the decoder, is a special case (often referred to as the asymmetric case) of the configuration in Fig. 2.1(a).

This chapter presents a survey of distributed source coding. Section 2.1 and Section 2.2 outline theoretical foundations and elementary practical techniques, respectively, for both lossless and lossy coding. The state-of-the-art for selected applications is described in Section 2.3. In Section 2.4, we address recent advances in distributed source coding techniques, upon which this dissertation makes novel and unifying contributions. Our treatment in this chapter draws, in part, from both [50] and [75].


Figure 2.1: Block diagrams of lossless distributed source coding with two signals: (a) separate encoding and joint decoding and (b) source coding with side information at the decoder, also known as the asymmetric case of (a).

2.1 Theory

2.1.1 Lossless Coding

Consider the configuration in Fig. 2.1(a), in which X and Y are each finite-alphabet random sequences of independent and identically distributed samples. Separate encoding and separate decoding enable compression of X and Y without loss to rates RX ≥ H(X) and RY ≥ H(Y), respectively [175]. Under separate encoding and joint decoding, though, the Slepian-Wolf theorem [177] shows that the rate region (tolerating an arbitrarily small error probability) expands to

RX ≥ H(X|Y), RY ≥ H(Y|X), RX + RY ≥ H(X,Y), (2.1)

as shown in Fig. 2.2. Remarkably, the sum rate boundary RX + RY = H(X,Y) is just as good as the achievable rate for centralized coding (i.e., joint encoding and joint decoding) of X and Y. At the corner points A and B in Fig. 2.2 the problem reduces to the asymmetric case of source coding with side information at the decoder, as


Figure 2.2: Slepian-Wolf rate region for lossless distributed source coding of two sources.

depicted in Fig. 2.1(b). This is because at point A, for example, the rate RY = H(Y) makes Y available to the decoder via conventional lossless coding.
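The entropies defining the region (2.1) are easy to evaluate numerically. The short check below uses an arbitrary toy joint pmf (binary X and Y that agree with probability 0.9) to compute the quantities at corner point A; the distribution is purely illustrative.

```python
from math import log2

# Toy joint pmf of (X, Y): uniform marginals, X = Y with probability 0.9.
pxy = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

def entropy(probs):
    return -sum(q * log2(q) for q in probs if q > 0)

H_xy = entropy(pxy.values())
H_x = entropy([sum(q for (a, _), q in pxy.items() if a == v) for v in (0, 1)])
H_y = entropy([sum(q for (_, b), q in pxy.items() if b == v) for v in (0, 1)])
H_x_given_y = H_xy - H_y        # chain rule: H(X|Y) = H(X,Y) - H(Y)
H_y_given_x = H_xy - H_x

# Corner point A: (R_X, R_Y) = (H(X|Y), H(Y)); the sum rate meets H(X,Y).
assert abs(H_x_given_y + H_y - H_xy) < 1e-12
print(f"H(X|Y) = {H_x_given_y:.3f} bits, H(X,Y) = {H_xy:.3f} bits")
```

For this pmf, H(X|Y) is about 0.469 bits, versus H(X) = 1 bit for separate decoding, so joint decoding at corner point A saves more than half the rate on X.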

Several variants of lossless distributed source coding admit information-theoretic bounds. In [12] and [94], only the recovery of X and X + Y modulo 2, respectively, is important. In [224], the decoder still recovers X and Y, but there is an additional separate encoder which sees yet another statistically dependent source. Another branch of study, zero-error distributed source coding, requires that the error probability be not just vanishing, but strictly zero [220, 96, 95, 176, 233, 200].

2.1.2 Lossy Coding

A straightforward achievable rate-distortion region for lossy distributed source coding of two sources, stated in [27, 201, 26] and independently in [87], results from combining quantization with the achievable rate region for lossless coding. The extension to more than two sources is in [83]. For memoryless Gaussian sources and mean squared error distortion, an outer bound [212, 211] shows that this region is tight [213]. In general, there are only loose outer bounds [72, 73].


In the special case of lossy source coding with side information at the decoder, the Wyner-Ziv theorem provides a single-letter characterization of the rate-distortion function [226, 225]. In particular, this theorem (and, several years later, the result in [213]) shows that, in the Gaussian source and mean squared error distortion case, there is no rate loss incurred by the encoder not knowing the side information.¹ The rate loss, for general memoryless sources and mean squared error distortion, is bounded below 1/2 bit/sample [239].
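For intuition, the jointly Gaussian, mean squared error case mentioned above can be written out explicitly. With σ²(X|Y) denoting the conditional variance of the source given the side information, the standard statement of the no-rate-loss result is that the Wyner-Ziv rate-distortion function equals the conditional rate-distortion function:

```latex
R_{\mathrm{WZ}}(D) \;=\; R_{X|Y}(D) \;=\; \frac{1}{2}\log_2^{+}\frac{\sigma^2_{X|Y}}{D},
\qquad \log_2^{+} t \triangleq \max\bigl(\log_2 t,\, 0\bigr),
```

so any distortion D ≥ σ²(X|Y) costs no rate, and each halving of D costs 1/2 bit per sample, whether or not the encoder observes Y.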

Extensions of the Wyner-Ziv problem consider successive refinement [182, 183] and the possible absence of side information [84]. Another extension called noisy Wyner-Ziv coding concerns encoding a noisy observation of the source, and decoding with the help of side information [232, 61, 62, 51, 54]. Some noisy Wyner-Ziv problems revert to noiseless cases when modified distortion measures are used [221, 114, 115].

2.2 Techniques

2.2.1 Slepian-Wolf Coding

Although the proof of the Slepian-Wolf theorem is nonconstructive, the method of proof by random binning (see also [45]) provides a connection to channel coding. Consider again two statistically dependent binary sources X and Y related through a hypothetical error channel. A linear channel code that achieves capacity for this channel also achieves the Slepian-Wolf bound [223]. In the asymmetric case (i.e., the source coding of X with side information Y at the decoder), the Slepian-Wolf encoder generates the syndrome of X with respect to the code. This encoding is a binning operation since the entire coset of X maps to the same syndrome. The decoder recovers X by selecting the coset member that is jointly typical with Y. A code used this way for distributed source coding is said to be in syndrome-generating form. An alternative usage of a channel code is called the parity-generating form, for which there is no guarantee that the distributed source coding performance matches the channel coding performance. In the asymmetric case, this approach encodes X into parity bits with respect to a systematic channel code. The decoder uses the parity bits to correct the hypothetical errors in Y, and so recovers X.

¹ The dirty-paper theorem for channel coding [44] is the dual of this result [184, 24, 137].
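The syndrome-generating form described above admits a minimal worked example. The sketch below uses the (7,4) Hamming code and assumes the source X and side information Y disagree in at most one of seven positions, so the 3-bit syndrome always suffices; the specific vectors are illustrative, and the dissertation's codes are LDPC codes over much longer blocks.

```python
# Parity-check matrix of the (7,4) Hamming code: column j (1-indexed)
# holds the binary digits of j, so every nonzero 3-bit pattern appears once.
H = [[(c >> r) & 1 for c in range(1, 8)] for r in range(3)]

def syndrome(v):
    return [sum(h & b for h, b in zip(row, v)) % 2 for row in H]

def sw_decode(s, y):
    # H(x XOR y) = Hx XOR Hy: the difference syndrome reads off the single
    # position (if any) where source and side information disagree.
    d = [a ^ b for a, b in zip(s, syndrome(y))]
    if d == [0, 0, 0]:
        return list(y)
    pos = sum(bit << r for r, bit in enumerate(d)) - 1   # 0-indexed column
    return [b ^ (i == pos) for i, b in enumerate(y)]

x = [1, 0, 1, 1, 0, 0, 1]          # source
y = [1, 0, 1, 0, 0, 0, 1]          # side information: x with one bit flipped
s = syndrome(x)                    # encoder sends 3 bits instead of 7
assert sw_decode(s, y) == x
print("syndrome", s, "-> recovered", sw_decode(s, y))
```

Note that the decoder never sees X: it corrects Y using the difference syndrome, which is exactly the channel-decoding view of binning, conveying seven source bits in three coded bits.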


The earliest application of channel coding techniques to source coding predates the Slepian-Wolf formulation [29] (cited in [85]). Despite such precursors, it was decades before the revival of the technique for distributed source coding in the asymmetric [139, 144, 215] and symmetric cases [140, 142, 141, 138] using convolutional/trellis codes (see [216]). Modern channel codes like turbo codes [28] and low-density parity-check (LDPC) codes [64] provide substantially better performance. Turbo codes are applied mostly in parity-generating form for both asymmetric [22, 128, 3] and symmetric coding [65, 68]. Although turbo codes in syndrome-generating form are possible, decoding the syndrome requires new trellis constructions [191, 190, 163]. In contrast, LDPC codes allow straightforward syndrome decoding for asymmetric [117, 118, 100] and symmetric coding [168, 169, 170, 43, 74]. Nevertheless, systematic LDPC codes are also applied in parity-generating form [166, 167]. Symmetric usage of LDPC codes is extended to multiple sources in [180, 181, 167].

We defer discussion of several additional capabilities, namely, rate adaptation, side information adaptation, multilevel distributed source coding, and performance analysis and code design, to Section 2.4, by which point their use will have been motivated by the applications in Section 2.3. Among advanced topics that we do not describe in detail are distributed zero-error coding [13, 242], distributed (quasi-)arithmetic coding [17, 77, 125] and joint source-channel coding using distributed source coding techniques [66, 249, 129, 69, 119].

2.2.2 Wyner-Ziv Coding

A simple construction of a Wyner-Ziv encoder is the concatenation of a vector quantizer (possibly having noncontiguous cells) and a Slepian-Wolf encoder. The proof of the converse in [226] suggests that this separation incurs no loss in performance with respect to the Wyner-Ziv bound in the limit of large block length.

Structured quantizers with noncontiguous cells are implemented using nested lattices [139, 173, 99, 228, 116] (based on information-theoretic results for jointly Gaussian source and side information [240, 241]) or trellis-coded quantization [235, 227]. Another approach generalizes the Lloyd algorithm [120] to find locally optimal vector quantizers [57, 34, 159, 58]. Globally optimal contiguous-cell quantizers are designed in [130], but are shown to be nonoptimal in general [56]. Pleasingly, for quantization at high rates, lattices (with contiguous cells) are optimal [154, 157]. Quantizer design for the noisy Wyner-Ziv problem is treated in [158, 155, 156, 153].
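The nested-quantization idea behind these constructions can be sketched with scalars. A fine uniform quantizer (step q) is nested inside a coarse one (step Mq); the encoder transmits only the fine index modulo M, and the decoder resolves the resulting ambiguity using the side information. The step size, modulus, and sample values below are illustrative choices, and a practical codec would follow the binning with Slepian-Wolf coding of the labels.

```python
def wz_encode(x, q=0.5, M=4):
    # Fine quantization with step q, then keep only the index modulo M:
    # the coset (bin) label costs log2(M) bits instead of the full index.
    return round(x / q) % M

def wz_decode(label, y, q=0.5, M=4):
    # Reconstruct at the fine-lattice point carrying the given label
    # that lies nearest the side information y.
    k = round((y / q - label) / M)
    return (k * M + label) * q

x, y = 3.1, 2.9              # correlated source and side information
label = wz_encode(x)
x_hat = wz_decode(label, y)
assert abs(x_hat - x) <= 0.25        # correct whenever |x - y| < M*q/2
print(label, x_hat)
```

The noncontiguous cells are visible here: every source value with the same fine index modulo M maps to one label, and decoding fails gracefully only when the side information strays farther than half the coarse step from the source.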

2.3 Selected Applications

The distributed source coding applications reviewed in this section motivate the technical contributions of this dissertation and are revisited in Chapter 6. Beyond our selection, there is much related work in hyperspectral image coding [103, 23, 192, 40, 41], array audio coding [124, 49, 165], error-resilient video coding [171, 172, 229], systematic lossy error protection of video [143, 6, 151, 214, 150, 152], biometric security [126, 52, 53, 186, 185], media authentication [108, 207, 105, 110, 111, 112] and media tampering localization [107, 109, 145, 106, 189, 203].

2.3.1 Low-Complexity Video Encoding

International standards for digital video coding (including ITU-T H.264/MPEG-4

AVC [217]) prescribe syntax for motion-compensated predictive coding. The en-

coder exploits the spatiotemporal dependencies of the video signal to produce a bit

stream as free of redundancy as possible. A small proportion of frames (called key

frames) are coded separately, while the remainder (called predictive frames) consist

of blocks that are either skipped, coded separately or coded as residuals with re-

spect to prediction blocks. A prediction block is selected by motion search within

one or more previously reconstructed frames. In implementations of predictive video

coding, the encoder is complex (with the bulk of the computation due to motion

search) and typically consumes five to ten times the resources needed by the decoder.

This unequal distribution of complexity is appropriate for traditional applications of

broadcast and storage in which video is encoded once and may be decoded thousands

or millions of times. But modern encoding devices, like capsule endoscopes, camera

phones and wireless surveillance cameras, are often severely resource-constrained and

consequently ill-suited to motion-compensated predictive coding.

Distributed source coding offers an alternative approach, in which the responsibil-

ity for exploiting temporal redundancy is transferred from encoder to decoder. The

key frames are still coded separately, but the remaining frames (called Wyner-Ziv


frames) are encoded separately and decoded using side information constructed from

previously reconstructed frames. In this way, the encoder avoids the complexity of

motion search between frames. The decoder, in contrast, takes on additional burden

in constructing statistically dependent side information (perhaps by motion search)

and decoding the Wyner-Ziv encoding. Amazingly, this idea, first described in [222],

predates the standardization of predictive video coding. Its independent rediscovery

in [11] and [147] is responsible for the renaissance in research into distributed source

coding. In the rest of this section, we outline the main ideas of three distributed video

codecs.

In the PRISM codec [147, 149, 148, 146], each block of a Wyner-Ziv frame is

either skipped, coded separately or Wyner-Ziv coded, depending on its correlation

with the colocated block in the previous reconstructed frame. Encoding a Wyner-Ziv

block produces a syndrome with respect to a Bose-Chaudhuri-Hocquenghem (BCH)

code [86, 30] and a cyclic redundancy check (CRC) checksum. For each Wyner-

Ziv block, the decoder constructs multiple candidates of side information from a

search range within the previous reconstructed frame. The syndrome is decoded with

reference to the side information candidates one-by-one until the CRC checksum of the

encoded and recovered blocks match. Block-by-block Wyner-Ziv coding necessitates

short length codes, which are far from optimal, and a significant amount of auxiliary

CRC data. To compensate for poor coding performance, some versions of PRISM

revert to a limited motion search at the encoder [146].

The Wyner-Ziv video codec [11, 8, 9, 7] is so called because it adheres more closely

to distributed source coding principles. All blocks of a Wyner-Ziv frame are encoded

together, without reference to other frames, as parity bits of long rate-compatible

turbo codes [164]. Such codes enable the encoder to increment the coding rate until

the decoder signals sufficiency via a feedback channel. The side information for a

Wyner-Ziv frame is constructed at the decoder by motion-compensated interpolation

of previously reconstructed key frames. The transmission of either auxiliary hashes [5]

or uncoded less significant bit planes [4] of the Wyner-Ziv blocks assists motion-

compensated extrapolation at the decoder, and therefore reduces the need for frequent

key frames. In one version of the codec, Wyner-Ziv encoding of the residual of a frame

with respect to the previous frame relaxes strict separate encoding [10]. This paper


also compares rate-compatible turbo codes with rate-adaptive LDPC codes [204, 205],

from Chapter 3 of this dissertation, and finds the rate-adaptive LDPC codes superior.

The DISCOVER codec follows the architecture of the Wyner-Ziv video codec but

refines its components to improve performance and practicality [16]. Particular ad-

vances are made in the construction of side information [18, 19, 132, 21], the modeling

of the dependence between source and side information [31, 32, 33] and the recon-

struction of the source [98]. It bears mentioning that the best DISCOVER video

coding performance is achieved using the rate-adaptive LDPC codes of this disserta-

tion, rather than rate-compatible turbo codes or a subsequent variant of rate-adaptive

LDPC codes [20].

Distributed video coding has yet to match the coding performance of motion-

compensated predictive coding. Towards this goal, the principal investigators of the

DISCOVER project identify “finding the best side information (or predictor) at the

decoder” as a key task [78]. Chapter 4 of this dissertation contributes the idea of

binary side information adaptation and Chapter 5 extends it to the multilevel case.

Using these techniques, the decoder of a distributed video codec learns the best

motion-compensated side information among a vast number of candidates [206].

2.3.2 Multiview Coding

Multiview images and video, captured by a network of cameras, arise in applications

like light field imaging [219, 218] and free viewpoint television [63, 127, 193, 194].

When the camera network is limited in communication, centralized source coding

(addressed in the multiview video coding extension [39] to H.264/AVC) is ineffective

because the uncoded views are transmitted to a joint encoder, creating a bottleneck

as the number of views grows. Under distributed source coding, in contrast, the views

are first coded by separate encoders attached to the cameras and then transmitted

to a joint decoder.

Distributed multiview image coding exploits view dependence in much the same

way as distributed single-view video coding does temporal dependence. For multiview

image coding, the main differences are that the image model is geometric (based on

the camera arrangement) and that the encoding is strictly separate. The Wyner-Ziv

video and PRISM codecs are modified accordingly in [250] and [198], respectively.


Distributed multiview video coding exploits both temporal and view dependence,

the former per view at each of the separate encoders and the latter at the joint decoder.

Unsurprisingly, there is a trade-off between the maximum possible temporal and

view coding gains [60]. Out of the potential view coding gain, the amount achieved

depends on the quality of the interpolated or extrapolated side information views at

the decoder. Side information for a certain time instant is constructed from other

views at the same instant based on disparity [59], affine [79] and homography [133, 15]

models. Camera geometry provides an additional epipolar constraint that reduces the

parameter space of these models [178, 179, 238]. The synthesis of a side information

view from a time duration of views produces even better coding results [237, 187, 55].

For two views, the distributed multiview video codec in [236] is the first to outperform

separate encoding and separate decoding using H.264/AVC.

Just as for low-complexity video encoding, this dissertation offers powerful adap-

tive distributed source coding tools for multiview coding [209, 210, 208, 35, 37, 36].

2.3.3 Reduced-Reference Video Quality Monitoring

In digital video broadcast systems, a content server typically transmits video to client

devices over imperfect links or insecure networks, while subject to stringent delay and

latency requirements. The first step towards ensuring quality of service is monitoring

the fidelity of the received video without significantly impacting the primary trans-

mission. No-reference methods estimate the video quality at each client based on the

received video alone [230]. Reduced-reference methods make use of a supplementary

low-rate bit stream for more accurate quality monitoring [231]. The ITU-T J.240

standard specifies this bit stream as a projection of the video signal after whitening

in both space and Walsh-Hadamard transform domains [2, 93]. A comparison of the

J.240 projections of the transmitted and received video yields an estimate of the peak

signal-to-noise ratio (PSNR).
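The PSNR estimate can be sketched in a few lines, under the assumption that the projection is approximately energy-preserving (as a Walsh-Hadamard-based projection nearly is), so that the mean squared difference of the projection coefficients tracks the pixel-domain MSE. The function name `estimate_psnr` and the 8-bit peak value are illustrative, not part of the J.240 standard:

```python
import math

def estimate_psnr(proj_tx, proj_rx, peak=255.0):
    """Estimate received-video PSNR from projection coefficients.

    Assumes the projection approximately preserves mean squared error,
    so the coefficient-domain MSE stands in for the pixel-domain MSE.
    """
    n = len(proj_tx)
    mse = sum((a - b) ** 2 for a, b in zip(proj_tx, proj_rx)) / n
    if mse == 0:
        return float("inf")  # identical projections: no measurable distortion
    return 10.0 * math.log10(peak ** 2 / mse)
```

In the distributed setting, `proj_tx` would be recovered by Slepian-Wolf decoding with the received video's projection as side information, rather than transmitted directly.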

Conventional source coding of the J.240 projection achieves only limited gains

because its whitened coefficients have large variance. In contrast, distributed source

coding of the transmitted video’s J.240 projection using the received video as side

information at the decoder is effective [42]. Similar work on distributed source coding

of random projections and projections based on perceptual metrics is reported in [202]


and [188], respectively. Once a client device deems its received video quality to be

unacceptable, it can request retransmission of packets from the server [104]. If the

client device is too resource-constrained to perform the decoding and analysis itself,

it can instead feed back to the server its distributed-source-coded J.240 projection for

decoding with side information consisting of the transmitted (or even the original)

video [113].

The rate-adaptive LDPC codes of Chapter 3 feature in most of these systems [42,

202, 113, 188]. In this dissertation, we further describe how side information adap-

tation of Chapter 4 and its multilevel extension of Chapter 5 add the capability

of channel tracking to reduced-reference video quality monitoring with distributed

source codes.

2.4 Advanced Techniques

The applications described in the previous section demand more from distributed

source codes than just good coding performance in the Slepian-Wolf and Wyner-Ziv

settings. We now motivate the contributions of this dissertation and place them in

the context of precursors and (if applicable) subsequent work.

2.4.1 Rate Adaptation

In practice, the degree of statistical dependence between source and side informa-

tion is varying in time and unknown in advance. This is the case, for example, for

J.240 projections of transmitted and received video. So, it is better to model the

bound on coding performance of asymmetric distributed source coding as the varying

conditional entropy rate H(X|Y) of source and side information sequences X and

Y, respectively, rather than the fixed conditional entropy H(X|Y ) of their samples.

Codes that can adapt their coding rates incrementally in response to varying statis-

tics are more practical. In parity-generating form, convolutional and turbo codes are

punctured to different rates with good performance in [81] and [164, 82], respectively.

But removing syndrome bits of LDPC codes in syndrome-generating form leads to

performance degradation at lower rates [80, 136, 174, 101, 190]. The rate-adaptive


sequences of syndrome-generating codes developed in [38] are based on product ac-

cumulate and extended Hamming accumulate codes [102, 89]. We construct the first

rate-adaptive LDPC codes in [204, 205]. Rate adaptation for convolutional and turbo

codes in syndrome-generating form is provided for the asymmetric case in [163] and

the symmetric case in [199]. A different approach towards rate adaptation for LDPC

codes uses a combination of syndrome and parity bits [90].

2.4.2 Side Information Adaptation

Realistic models of the statistical dependence between source and side information

often involve hidden variables. For example, the motion vectors that relate consecu-

tive frames of video are unknown a priori at the decoder of a distributed video codec.

Side information adaptation encompasses methods for learning the hidden variables

and adapting the side information accordingly, and doing so more efficiently than by

exhaustive search. In [70, 245, 71, 67, 247], the statistical dependence is through a

hidden Markov model, so side information adaptation with respect to these variables

is by the Baum-Welch algorithm [25]. We make use of the expectation maximiza-

tion (EM) generalization [47] to decode images related by hidden disparity [209, 208]

and motion [205], and (in work not reported in this dissertation) through contrast

and brightness adjustment [105, 106], cropping and resizing [110], and affine warp-

ing [111, 112].

2.4.3 Multilevel Distributed Source Coding

Much of distributed source coding research, as in channel coding, concerns binary

data, even though signals are frequently quantized to symbols of more than 2 levels.

One approach to multilevel distributed source coding uses codes on finite fields of

order greater than 2 [215, 3], but such codes are cumbersome to design for different

numbers of levels. Another method for coding a multilevel source is applying binary

distributed source coding bit-plane-by-bit-plane, using each decoded bit plane as ad-

ditional side information for the decoding of subsequent bit planes [7, 38]. Although

this technique allows side information adaptation per bit plane, statistical dependence

through hidden variables usually spans all bit planes. A third way is to encode all


the bit planes with a single binary code, but to decode them as symbols using soft

bit-to-symbol and symbol-to-bit mapping [243, 244, 246, 248]. We take this approach

in extending side information adaptation from binary to multilevel [210].

2.4.4 Performance Analysis and Code Design

The structure of LDPC codes, both global [123] and local [48], affects how closely

their performance approaches information-theoretic bounds. Assessing the codes em-

pirically slows down the design process because of the complexity of the iterative

message-passing decoding algorithm [64]. Faster methods for performance analy-

sis consider only parameters of the global structure [121, 197, 161]. Density evolu-

tion [161] summarizes an LDPC code by its degree distributions and evaluates the

convergence of message densities under message passing. This technique is used to

design high-performance LDPC codes with optimized irregular degree distributions

for both channel coding [160, 14] and distributed source coding [117, 180, 181]. In this

dissertation, we use density evolution to analyze and design, not just LDPC codes,

but also all the adaptive distributed source codes of our construction. In fact, density

evolution is applicable to all instances of the sum-product algorithm [97], known also

as loopy belief propagation [134].

2.5 Summary

This chapter reviews distributed source coding, starting with the theory and elemen-

tary techniques of both lossless and lossy coding. We then cover research progress

on three applications, namely low-complexity video encoding, multiview coding and

reduced-reference video quality monitoring. Finally, we describe recent advances in

technique, motivated by the needs of the applications, in the areas of rate adaptation,

side information adaptation, multilevel coding, and performance analysis and code

design. Our contributions in Chapters 3 to 5 of this dissertation build upon and unify

elements of this literature. In Chapter 6, we revisit the three applications reviewed

in this chapter and apply our novel techniques.


Chapter 3

Rate Adaptation

This chapter studies the construction of sequences of rate-adaptive LDPC codes for

distributed source coding.1 Each code in a rate-adaptive sequence operates at a

different rate, such that the following property holds: a code creates encoded bits

that form a subsequence of the encoded bits created by any other code of higher rate.

The encoder, using these codes, could therefore increase its coding rate on-the-fly

simply by sending an additional increment of encoded bits. In this way, the encoder

adapts its rate to handle any level of statistical dependence between the source and

the side information.

Section 3.1 motivates the pursuit of rate-adaptive LDPC codes with regard to

practical application of distributed source coding, the codes’ performance and their

amenability to analysis. In Section 3.2, we compare several approaches to construct-

ing sequences of such codes. We find that a novel construction based on syndrome

splitting offers coding performance close to the Slepian-Wolf bound over the entire

range of rates. In Section 3.3, we derive the degree distributions of these codes so that

the technique of density evolution can model their rate adaptation. The experimental

results in Section 3.4 confirm both the high performance of the rate-adaptive LDPC

codes and the accuracy of the density evolution analysis.

1In our early work [204, 205], we used the term low-density parity-check accumulate (LDPCA) codes. This was replaced by rate-adaptive LDPC codes to emphasize the function rather than the structure.


[Figure 3.1 block diagram: source X → Rate-Adaptive Encoder → encoded increments → Rate-Adaptive Decoder → reconstruction X̂, with side information Y at the decoder and rate control feedback from decoder to encoder]

Figure 3.1: Rate-adaptive distributed source coding with feedback. The encoder sends increments of encoded data to the decoder until the decoder signals that the coding rate is sufficient using the rate control feedback channel.

3.1 Rationale for Rate-Adaptive LDPC Codes

Rate-adaptive codes make distributed source coding practical for nonergodic sources,

while rate-adaptive LDPC codes, in particular, offer both very good coding perfor-

mance and a method of analysis through density evolution.

The source X and side information Y, obtained directly or through projection

from natural images or video, are almost always jointly nonergodic. The encoder’s

appropriate coding rate, which must exceed the conditional entropy rate H(X|Y),

may vary substantially and without prior notice. Codes with the rate-adaptive prop-

erty permit the encoder to flexibly increase its coding rate on-the-fly. Moreover, when

rate-adaptive codes are used in conjunction with a feedback channel from the decoder

as in Fig. 3.1, the encoder can determine the proper coding rate Radaptive as follows:

1. the encoder sends coded bits corresponding to the code of lowest rate Radaptive;

2. the decoder attempts decoding and signals via the feedback channel whether

decoding is successful;

3. if decoding is unsuccessful, the encoder increases rate Radaptive by sending an

additional increment of encoded bits;

4. Steps 2 and 3 repeat until decoding is successful.

In Step 3, we deem decoding to be successful if the decoded bits can be re-encoded

into a bit stream identical to that received from the encoder.
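The four-step rate search above can be sketched as a simple control loop. Here `request_rate` and the `try_decode` callback are hypothetical names for illustration; a real decoder would apply the re-encoding check just described to decide success:

```python
def request_rate(increments, try_decode):
    """Feedback-driven rate control (cf. Fig. 3.1).

    `increments` holds the encoded-bit increments, ordered so that the
    first corresponds to the code of lowest rate.  `try_decode` models
    the decoder plus feedback channel: it returns the decoded bits on
    success and None on failure.
    """
    sent = []
    for increment in increments:
        sent.append(increment)      # Steps 1 and 3: send one more increment
        decoded = try_decode(sent)  # Step 2: decoder attempts decoding
        if decoded is not None:     # Step 4: stop once decoding succeeds
            return decoded, len(sent)
    raise RuntimeError("decoding failed even at the highest rate")
```

Because the codes are rate adaptive, each later attempt reuses all previously sent bits, so the total rate paid equals that of the single code that finally succeeds.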


[Figure 3.2 graphs: syndrome nodes S1–S4 connected to (a) source nodes X1–X8 and (b) side information nodes Y1–Y8]

Figure 3.2: Fixed rate LDPC code (a) encoding bipartite and (b) decoding factor graphs.

Although turbo codes exist in rate-adaptive forms [164, 3, 82], LDPC codes offer

better fixed rate coding performance [117]. Moreover, the performance of LDPC

codes is easier to model via density evolution than that of turbo codes. For these

reasons, we develop a construction for sequences of rate-adaptive LDPC codes.

3.2 Construction of Rate-Adaptive LDPC Code Sequences

The encoder for a fixed rate LDPC code in syndrome-generating form performs bin-

ning in the following way. Using a bipartite graph like the one in Fig. 3.2(a), it encodes

the source X = (X1, X2, . . .) into syndrome bits denoted by the vector (S1, S2, . . .),

such that each syndrome bit is the modulo 2 sum of its neighbors. The decoder re-

covers the source from the syndrome bits and the side information Y = (Y1, Y2, . . .)

by running the sum-product algorithm on a factor graph like the one in Fig. 3.2(b).
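The binning step is just a matrix-vector product over GF(2). A minimal sketch follows, with an illustrative 4 × 8 parity-check matrix (not the actual edges of Fig. 3.2(a)):

```python
import numpy as np

# Illustrative syndrome-generating matrix: 4 syndrome bits, 8 source bits.
# A 1 in row i, column j means syndrome node S_{i+1} neighbors source node X_{j+1}.
H = np.array([[1, 1, 0, 1, 0, 0, 1, 0],
              [0, 1, 1, 0, 1, 0, 0, 1],
              [1, 0, 1, 0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 0, 1]], dtype=np.uint8)

def encode_syndrome(H, x):
    """Each syndrome bit is the modulo 2 sum of its source-node neighbors."""
    return (H @ x) % 2

x = np.array([1, 0, 1, 1, 0, 1, 0, 0], dtype=np.uint8)
s = encode_syndrome(H, x)  # 4 syndrome bits bin the 8 source bits
```

Every syndrome value indexes a coset (bin) of 2^4 source sequences; the decoder's task is to pick the member of the bin most consistent with the side information Y.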

In this section, we compare ways for the encoder to extend a mother LDPC code of

fixed rate into a sequence of rate-adaptive codes.

3.2.1 Rate Adaptation by Syndrome Deletion

Starting with a high rate code as a mother code, a naıve way to obtain lower rate

codes is by syndrome deletion. This technique is rate adaptive because, among any


[Figure 3.3 graphs: (a) syndrome nodes S1–S8; (b) remaining syndrome nodes S2, S4, S6, S8]

Figure 3.3: (a) Factor graph of a regular degree 3 LDPC code of rate 1. This graph contains no 4-cycles. (b) The factor graph of the rate 1/2 code produced by syndrome deletion. Some source nodes are unconnected or just singly connected, stifling the convergence of the sum-product algorithm.

pair of codes so created, the lower rate code’s syndrome is a subsequence of the higher

rate code’s syndrome. Unfortunately, deletion produces poor codes because it results

in the loss of edges in the graphs of the lower rate codes. The factor graphs of the

lowest rate codes ultimately contain unconnected or singly connected source nodes,

as shown in the example in Fig. 3.3. This poorly connected factor graph is unsuitable

for decoding using the sum-product algorithm.

To mitigate the loss of edges at lower rates, one might increase the number of

edges in the mother code. But more edges degrade the performance of this code

because they introduce shorter cycles. Short cycles in the factor graph allow unreliable

information to recirculate via the sum-product algorithm and become unduly believed.

For instance, the factor graph in Fig. 3.3(a) contains no 4-cycles. It can be shown by

inspection that adding any single edge to this graph creates a 4-cycle. Three examples

are shown in Fig. 3.4. Furthermore, increasing the number of edges in the mother

code increases the decoding complexity of the sum-product algorithm linearly.
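The 4-cycle condition can be checked mechanically: a bipartite factor graph contains a 4-cycle exactly when two syndrome nodes share at least two source-node neighbors. A small sketch with a hypothetical helper name:

```python
from itertools import combinations

def has_4cycle(syndrome_neighbors):
    """Detect 4-cycles in a bipartite factor graph.

    `syndrome_neighbors` lists, for each syndrome node, the indices of
    its source-node neighbors.  Two syndrome nodes sharing two or more
    neighbors close a length-4 cycle.
    """
    for a, b in combinations(syndrome_neighbors, 2):
        if len(set(a) & set(b)) >= 2:
            return True
    return False
```

This pairwise test is what "show by inspection" amounts to for the graph of Fig. 3.3(a): adding any edge gives some pair of syndrome nodes two common neighbors.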


Figure 3.4: Creation of 4-cycles by adding a single edge (marked in blue) to the factor graph in Fig. 3.3(a). Even though the original graph contains no 4-cycles, we can show by inspection that any single edge added to the graph creates a 4-cycle, completed by the edges marked in red.

3.2.2 Rate Adaptation by Syndrome Merging

The key idea in achieving superior rate adaptation is to reduce the number of syn-

drome nodes, not by deletion but by merging. Syndrome merging preserves edges in

the factor graph and thereby keeps the sum-product algorithm effective at lower rates.

We begin with a transformation of the encoding into an equivalent representa-

tion. Instead of sending the syndrome, the encoder sends the modulo 2 cumulative

syndrome2, denoted as the vector (C1, C2, . . .). There is a one-to-one correspondence

between the representations, since Ci = S1 + S2 + · · · + Si and Si = Ci + Ci−1 (with C0 = 0), where

all sums are modulo 2.
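The one-to-one correspondence is easy to verify in code; `accumulate` and `differentiate` are illustrative names for the two directions of the mapping:

```python
def accumulate(s):
    """Cumulative syndrome: C_i = S_1 + S_2 + ... + S_i (mod 2)."""
    c, acc = [], 0
    for bit in s:
        acc ^= bit          # XOR is addition modulo 2
        c.append(acc)
    return c

def differentiate(c):
    """Recover the syndrome: S_i = C_i + C_{i-1} (mod 2), with C_0 = 0."""
    return [ci ^ prev for ci, prev in zip(c, [0] + c[:-1])]
```

Since each mapping inverts the other, sending the cumulative syndrome conveys exactly the same information as sending the syndrome itself.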

The starting point of code design is once again a mother LDPC code of high rate.

But now lower rate codes are obtained by cumulative syndrome deletion, instead

of syndrome deletion. Rate adaptation is guaranteed because, among any pair of

codes so created, the lower rate code’s cumulative syndrome is a subsequence of the

higher rate code’s cumulative syndrome. In the following, we argue that deletion of

2The cumulative syndrome bits can be recast straightforwardly as the parity bits of an extended IrregularRepeat Accumulate (eIRA) channel code [234]. But in the channel coding scenario, the parity bits are subjectto a noisy channel and so the eIRA decoding graph represents them as degree 2 nodes. For this reason, weavoid conflating the concepts of rate-adaptive LDPC codes and eIRA codes.


[Figure 3.5 node labels: (a) S1 = C1, S2 = C2 + C1, S3 = C3 + C2, S4 = C4 + C3, S5 = C5 + C4, S6 = C6 + C5, S7 = C7 + C6, S8 = C8 + C7; (b) S1 + S2 = C2, S3 + S4 = C4 + C2, S5 + S6 = C6 + C4, S7 + S8 = C8 + C6]

Figure 3.5: (a) Factor graph of a regular degree 3 LDPC code of rate 1. (b) Factor graph of the rate 1/2 code produced by cumulative syndrome deletion at the encoder. Since syndrome nodes are merged at the decoder, not deleted, all edges have been preserved and the sum-product algorithm remains effective.

cumulative syndrome bits at the encoder is equivalent to merging syndrome nodes at

the decoder.

Deleting the cumulative syndrome value Ci from the vector means that the syn-

drome bits Si = Ci + Ci−1 and Si+1 = Ci+1 + Ci are lost at the decoder. But a new

syndrome bit Si+Si+1 = Ci+1 +Ci−1 is still recoverable. In the factor graph, this new

bit is represented as a merged syndrome node with neighbors consisting of the union

of the neighbors of the lost nodes minus their intersection, because the new bit is the

modulo 2 sum of the pair of lost bits. In general, deleting cumulative syndrome values

Ci, Ci+1, . . . , Cj−1 produces the merged syndrome value Si + Si+1 + · · · + Sj = Cj + Ci−1.

Fig. 3.5 shows an example in which merging syndrome nodes by deleting the odd-

numbered cumulative syndrome values preserves all the edges in the factor graph.
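Because the merged bit is the modulo 2 sum of the lost pair, its neighbor set is the union of the two neighbor sets minus their intersection, i.e. their symmetric difference: any source node shared by both lost bits contributes twice and cancels modulo 2. A one-line sketch with a hypothetical helper name:

```python
def merge_syndromes(neighbors_i, neighbors_j):
    """Neighbor set of the merged syndrome bit S_i + S_j.

    Shared neighbors appear in both sums and cancel modulo 2, so the
    merged node keeps exactly the symmetric difference of the two sets.
    """
    return set(neighbors_i) ^ set(neighbors_j)
```

When the two sets are disjoint, no edges are lost; a nonempty intersection is precisely the case, noted next, in which merging does lose edges.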

Note that sometimes merging a pair of syndrome nodes can lead to lost edges.

This is the case when the pair shares one or more common neighbors. Although these

cases are infrequent, it is difficult to design the mother code so that no edges are

lost over several merging steps. In the next section, we propose a code construction

that avoids edge loss by generating codes from lowest rate to highest via syndrome

splitting, the inverse operation of merging.


3.2.3 Rate-Adaptive Sequences by Syndrome Splitting

We propose the following recipe to construct a good sequence of codes, for which no

edges are lost across all factor graphs:

1. select the code length n;

2. select the encoded data increment size k to be a factor of n;

3. set a code counter t = 1 and build a low rate mother LDPC code of source

coding rate Radaptive = k/n with n source nodes and k syndrome nodes;

4. build a higher rate code of rate Radaptive = (t + 1)k/n by splitting the k largest

degree syndrome nodes of the code of rate Radaptive = tk/n into pairs of nodes with

equal or consecutive degree, and increment t;

5. repeat Step 4 until the syndrome-generating matrix has full rank n.

In Step 2, we choose k to be a factor of n so that the sequence includes a code of

rate 1.
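Step 4 can be sketched as follows. This toy `split_round` (a hypothetical name) halves each neighbor list in place, which partitions the edges and yields pair degrees that are equal or consecutive; the actual construction additionally schedules splits from largest to smallest degree over successive rounds and respects girth:

```python
def split_round(syndromes, k):
    """One splitting step: split the k largest-degree syndrome nodes.

    `syndromes` is a list of neighbor lists.  Each selected node is
    replaced by two nodes whose neighbor sets partition its own, so no
    edge is lost and the source degree distribution is unchanged.
    """
    order = sorted(range(len(syndromes)),
                   key=lambda i: len(syndromes[i]), reverse=True)
    to_split = set(order[:k])
    result = []
    for i, neighbors in enumerate(syndromes):
        if i in to_split:
            half = (len(neighbors) + 1) // 2  # equal or consecutive degrees
            result.append(neighbors[:half])
            result.append(neighbors[half:])
        else:
            result.append(neighbors)
    return result
```

Each call corresponds to transmitting one increment of k cumulative syndrome bits, raising the rate by k/n.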

Step 3 uses the progressive edge growth method of [88, 122] to build the mother

code of desired edge-perspective source degree distribution λ(ω).3 This method cre-

ates as few short cycles as possible and also concentrates the edge-perspective syn-

drome degree distribution ρ(ω) to a single degree or a pair of consecutive degrees.4

Every splitting of k nodes in Step 4 corresponds to the transmission of an increment

of k cumulative syndrome bits. We schedule the increments to split the syndrome

nodes from largest to smallest degree into pairs of nodes with equal or consecutive

degrees, in order to retain the concentration of the syndrome degree distribution

around a limited number of values. Creating higher rate codes by splitting syndrome

nodes offers a couple of advantages versus creating lower rate codes by syndrome

merging. No edges are lost in the factor graph as long as the set of neighbors of a

split node is partitioned into the sets of neighbors of the resulting pair of nodes, and so

the source degree distribution λ(ω) is guaranteed to be invariant. Secondly, splitting

3In the edge-perspective source degree distribution polynomial λ(ω), the coefficient of ω^(d−1) is the fraction of source-syndrome edges connected to source nodes of degree d.

4In the edge-perspective syndrome degree distribution polynomial ρ(ω), the coefficient of ω^(d−1) is the fraction of source-syndrome edges connected to syndrome nodes of degree d.


a syndrome node can only remove cycles from the factor graph, not introduce them.

Therefore, this construction also guarantees that there are no additional cycles in the

sequence of factor graphs beyond the ones designed by progressive edge growth in the

original high performance mother code.

The full rank condition of Step 5 ensures that the code of highest rate can always

be decoded using Gaussian elimination regardless of the side information. The highest

coding rate is at least 1 because the syndrome nodes must be at least as numerous as

the source nodes, but usually the code of rate 1 suffices.

3.3 Analysis of Rate Adaptation

The method of density evolution can determine whether an LDPC code converges

given the statistical dependency between the source and side information, using just

its source and syndrome degree distributions [161]. The idea in this extension is

to test the convergence of all codes in the rate-adaptive sequence, and estimate the

coding rate according to the lowest rate code that converges. In order to reap the

full complexity savings of density evolution, the degree distributions of all the codes

must be derived without actually generating any of their graphs. The mother code’s

edge-perspective source degree distribution λ(ω) is a design choice of the progressive

edge growth method. Then, for the codes of rate Radaptive = tk/n, we obtain the edge-perspective source and syndrome degree distributions, λt(ω) and ρt(ω), respectively.
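The rate-estimation procedure described above, testing every code in the rate-adaptive sequence and reporting the lowest rate whose density evolution converges, can be sketched as follows. This is an illustrative Python sketch, not the dissertation's implementation; `de_converges` is a hypothetical stand-in for an actual density evolution test of the (λt, ρt) pair at the operating crossover probability.

```python
# Sketch of rate estimation: test convergence of each code in the
# rate-adaptive sequence, lowest rate first, and report the first that
# converges. `de_converges(t)` is a hypothetical stand-in for a real
# density-evolution test of the rate t*k/n code.

def estimate_rate(n, k, de_converges):
    """Return the lowest rate t*k/n whose code converges, else 1.0."""
    for t in range(1, n // k + 1):
        if de_converges(t):
            return t * k / n
    return 1.0  # the rate-1 code always decodes by Gaussian elimination

# Toy stand-in: suppose density evolution succeeds once t*k/n >= 0.4.
rate = estimate_rate(n=512, k=4, de_converges=lambda t: t * 4 / 512 >= 0.4)
```

The rate-1 fallback reflects the full rank condition of Step 5: the code of highest rate can always be decoded by Gaussian elimination.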

Source Degree Distribution

The construction ensures that the edge-perspective source degree distribution is invariant,

λt(ω) = λ(ω). (3.1)


Syndrome Degree Distribution

The syndrome degree distributions are derived inductively. The mother code’s edge-

perspective syndrome degree distribution is concentrated on at most two consecutive

values, say d and d+ 1, so we can write

ρ1(ω) = ρ(ω) = (1 − α)ω^(d−1) + αω^d, where integer d > 0 and 0 ≤ α < 1, (3.2)

and derive the unique solution

d = ⌊d̄⌋, (3.3)

α = (d̄ − ⌊d̄⌋)(⌊d̄⌋ + 1)/d̄, (3.4)

using the equality ∫₀¹ ρ(ω)dω = (k/n) ∫₀¹ λ(ω)dω, where d̄ = ((k/n) ∫₀¹ λ(ω)dω)^(−1) is the average syndrome node degree [162].
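As a concrete check of (3.3)-(3.4), the following sketch computes d and α from a mother code's edge-perspective source degree distribution. The dict representation (degree → fraction of edges) and the function name are illustrative assumptions, not from the dissertation.

```python
from math import floor

def mother_syndrome_dist(lam, k_over_n):
    """Given the edge-perspective source degree distribution `lam`
    (dict: source degree -> fraction of edges) and the mother code's
    rate k/n, return (d, alpha) for Eq. (3.2):
    rho1(w) = (1 - alpha)*w**(d - 1) + alpha*w**d."""
    int_lam = sum(frac / deg for deg, frac in lam.items())  # integral of lambda
    d_bar = 1.0 / (k_over_n * int_lam)   # average syndrome node degree
    d = floor(d_bar + 1e-9)              # tolerance: d_bar may be an exact integer
    alpha = (d_bar - d) * (d + 1) / d_bar
    return d, alpha

# Regular degree-3 source nodes (lambda(w) = w**2) with k/n = 2/9:
# d_bar = 13.5, so the syndrome degrees concentrate on 13 and 14.
d, alpha = mother_syndrome_dist({3: 1.0}, 2 / 9)
```

A quick sanity check is that (1 − α)/d + α/(d + 1) must equal ∫₀¹ ρ(ω)dω = 1/d̄.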

The inductive step uses ρt(ω) to derive ρt+1(ω), the edge-perspective syndrome degree distributions of the codes of rate Radaptive = tk/n and Radaptive = (t + 1)k/n, respectively. In this case, it is easier to work with their node-perspective syndrome degree distributions, Rt(ω) and Rt+1(ω), which can be converted from and to edge perspective using the following formulas from [162].⁵

R(ω) = ∫₀^ω ρ(ψ)dψ / ∫₀¹ ρ(ψ)dψ (3.5)

ρ(ω) = R′(ω) / R′(1) (3.6)
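Acting on polynomial coefficients, the conversions (3.5)-(3.6) reduce to simple rescalings. A sketch under the assumption that distributions are stored as dicts mapping degree to coefficient (this representation is my own choice):

```python
def edge_to_node(rho):
    """Eq. (3.5): convert an edge-perspective distribution
    {degree d: coeff of w**(d-1)} to node perspective {d: coeff of w**d}."""
    norm = sum(c / d for d, c in rho.items())
    return {d: (c / d) / norm for d, c in rho.items()}

def node_to_edge(R):
    """Eq. (3.6): the inverse conversion, rho(w) = R'(w) / R'(1)."""
    norm = sum(d * c for d, c in R.items())
    return {d: d * c / norm for d, c in R.items()}

# Round trip on the mother-code form (1 - a)*w**(d-1) + a*w**d, d = 6, a = 0.3:
R = edge_to_node({6: 0.7, 7: 0.3})
rho = node_to_edge(R)
```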

Since the code construction splits the k syndrome nodes of largest degree out of the total tk syndrome nodes (that is, a fraction of 1/t) into pairs of equal or consecutive degree, we partition Rt(ω) = Ft(ω) + Gt(ω), where Ft(ω) and Gt(ω) are nonnegative polynomials, such that the maximum degree of the nonzero terms of Ft(ω) is less than or equal to the minimum degree of the nonzero terms of Gt(ω). In this way, Ft(ω) and Gt(ω) represent unnormalized node-perspective degree distributions of sets of low and high degree syndrome nodes, respectively. Setting Gt(1) = 1/t (and hence

⁵In the node-perspective syndrome degree distribution polynomial R(ω), the coefficient of ω^d is the fraction of syndrome nodes of source-syndrome degree d out of all syndrome nodes.


Ft(1) = 1 − 1/t) means that the 1/t fraction of highest degree syndrome nodes are now marked for splitting.

We further partition Gt(ω) = Gt^e(ω) + Gt^o(ω), where Gt^e(ω) and Gt^o(ω) are nonnegative polynomials, such that their nonzero terms have even and odd degree only, respectively. Splitting the even degree syndrome nodes represented by Gt^e(ω) results in twice as many half degree nodes: 2Gt^e(ω^(1/2)). Splitting the odd degree syndrome nodes represented by Gt^o(ω) results in equal numbers of nodes of just less than half degree and just greater than half degree: ω^(−1/2)Gt^o(ω^(1/2)) + ω^(1/2)Gt^o(ω^(1/2)).

Therefore, we can write the normalized node-perspective syndrome degree distribution for the code with (t + 1)k syndrome nodes as

Rt+1(ω) = [Ft(ω) + 2Gt^e(ω^(1/2)) + (ω^(−1/2) + ω^(1/2))Gt^o(ω^(1/2))] / [Ft(1) + 2Gt^e(1) + 2Gt^o(1)] (3.7)

= (t/(t + 1)) (Ft(ω) + 2Gt^e(ω^(1/2)) + (ω^(−1/2) + ω^(1/2))Gt^o(ω^(1/2))), (3.8)

since the normalization constant Ft(1) + 2Gt^e(1) + 2Gt^o(1) = Ft(1) + 2Gt(1) = 1 + 1/t.
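The whole splitting step (3.7)-(3.8) can be carried out numerically on node-perspective coefficient dicts. A sketch under assumptions of my own (the dict representation, the greedy top-down partition into Ft and Gt, and the function name); the even/odd handling mirrors the Gt^e/Gt^o discussion above.

```python
def split_syndromes(Rt, t):
    """One step of Eq. (3.7)-(3.8): given the node-perspective syndrome
    degree distribution Rt (dict: degree -> fraction of the t*k syndrome
    nodes) of the rate t*k/n code, return R_{t+1} for the rate
    (t+1)*k/n code, obtained by splitting the 1/t fraction of highest
    degree syndrome nodes."""
    # Partition Rt = Ft + Gt, with Gt the highest-degree 1/t fraction.
    G, remaining = {}, 1.0 / t
    for d in sorted(Rt, reverse=True):
        take = min(Rt[d], remaining)
        if take > 0:
            G[d] = take
            remaining -= take
    F = {d: Rt[d] - G.get(d, 0.0) for d in Rt}
    # Split each marked node into two nodes of (near-)half degree:
    # even degree d -> two nodes of d/2; odd d -> (d-1)/2 and (d+1)/2.
    out = dict(F)
    for d, c in G.items():
        for half in ((d // 2, d - d // 2) if d % 2 else (d // 2, d // 2)):
            out[half] = out.get(half, 0.0) + c
    # Renormalize: there are now (t+1)*k nodes instead of t*k.
    return {d: c * t / (t + 1) for d, c in out.items() if c > 0}

# Mother code (t = 1): all syndrome nodes have degree 6.
R2 = split_syndromes({6: 1.0}, t=1)
```

The total number of source-syndrome edges is invariant under splitting, which gives a convenient consistency check.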

3.4 Rate-Adaptive LDPC Coding Experimental Results

Our experiments evaluate the coding performance of rate-adaptive LDPC codes and

the accuracy of the rate adaptation analysis using density evolution. The source

X = (X1, X2, . . . , Xn) and side information Y = (Y1, Y2, . . . , Yn) are both random

binary vectors. Within each vector, the elements are independent and equiprobable.

Between the vectors, the statistical dependence is given by Y = X + N modulo

2, where N is a random binary vector with independent elements equal to 1 with

probability ε. Thus, the relationship between X and Y is binary symmetric with

crossover probability ε = P{Xi 6= Yi}. The Slepian-Wolf bound is the conditional

entropy rate H(X|Y) = H(ε) = −ε log2(ε)− (1− ε) log2(1− ε) bit/bit.
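For reference, the binary entropy function used throughout for the Slepian-Wolf bound can be written as a small Python helper (an illustrative sketch; the function name is my own):

```python
from math import log2

def binary_entropy(eps):
    """H(eps) = -eps*log2(eps) - (1 - eps)*log2(1 - eps), in bit/bit."""
    if eps in (0.0, 1.0):
        return 0.0  # limiting value: 0*log2(0) is taken as 0
    return -eps * log2(eps) - (1 - eps) * log2(1 - eps)

h = binary_entropy(0.11)  # roughly 0.5 bit/bit
```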

We construct both regular and irregular rate-adaptive LDPC codes. In the regular

codes, all the source nodes have degree 3; that is, the source degree distribution is λreg(ω) = ω^2. The irregular codes have source degree distribution given by λirreg(ω) = 0.1317ω + 0.2595ω^2 + 0.1868ω^6 + 0.1151ω^7 + 0.0792ω^18 + 0.2277ω^20, a choice from the optimized degree distributions in [14].


We consider codes of two lengths, n = 512 and 4096 bits. In both cases, we set the encoded data increment size k = n/128; that is, k = 4 and 32 bits, respectively. In this way, each sequence of codes achieves rates Radaptive ∈ {1/128, 2/128, . . . , 1}. Note that it is not possible to construct the 512-bit irregular code because the maximal source node degree of 21 exceeds the number of syndrome nodes k = 4 of the mother code. Hence, we construct regular codes of lengths 512 and 4096 bits and irregular codes of length 4096 bits only.

3.4.1 Performance Comparison with Other Codes

We compare the performance of the rate-adaptive LDPC codes with naïve syndrome-deleted LDPC codes and punctured turbo codes [164] of code lengths n = 512 and 4096 bits. Each sequence of syndrome-deleted LDPC codes is constructed from a regular degree 3 LDPC mother code of rate 1. Lower rate codes at multiples of code rate 1/128 are obtained by syndrome deletion, as described in Section 3.2.1. Each sequence of punctured turbo codes is generated using a pair of identical convolutional encoders with generator matrix [1, (1 + D + D^3 + D^4)/(1 + D^3 + D^4)], as proposed for distributed source coding in [7]. The puncturing of the parity bits is periodic so that the sequence takes code rates in {1/128, 2/128, . . . , 1}.

Fig. 3.6 plots the coding rates (averaged over 100 trials) required to compress

the source X with respect to the side information Y for varying conditional entropy

rate H(X|Y) = H(ε). The syndrome-deleted LDPC codes perform very poorly,

with coding rates far from the Slepian-Wolf bound when the entropy H(ε) is low.

The lower rate syndrome-deleted LDPC codes are never effective because syndrome

deletion severely degrades their factor graphs. The rate-adaptive LDPC codes and

punctured turbo codes, in contrast, compress close to the Slepian-Wolf bound for all

entropies H(ε). Note that the average performance of these codes is roughly the same

regardless of the length n = 512 or 4096 bits. The regular rate-adaptive LDPC codes

outperform the punctured turbo codes for H(ε) > 0.5, and the 4096-bit irregular

rate-adaptive LDPC codes outperform the punctured turbo codes for H(ε) > 0.2.

Fig. 3.7 plots the distributions of rate over the 100 trials of the experiments in

Fig. 3.6. For clarity, the rates are binned to the nearest multiple of 1/16. Observe that the 4096-bit codes have tighter distributions of their rates than their respective


[Figure: both panels plot rate (bit/bit) against H(ε) (bit/bit), with curves for SD LDPC, punctured turbo, RA LDPC (regular; also irregular in (b)) and the SW bound.]

Figure 3.6: Comparison of coding performance of rate-adaptive (RA) LDPC codes with syndrome-deleted (SD) LDPC codes and punctured turbo codes of lengths (a) n = 512 bits and (b) n = 4096 bits.

512-bit codes. Moreover, the variances of the rate of the 4096-bit codes are similar,

regardless of the particular code or the entropy H(ε). So, even though the 4096-bit

codes have similar average performance as their 512-bit counterparts, the 4096-bit

codes are more predictable.

3.4.2 Performance of Rate Adaptation Analysis

We now evaluate the accuracy of density evolution in modeling the average minimum coding rate. Our implementation is a Monte Carlo simulation using up to 2^14 samples [121].

Fig. 3.8 plots the modeled coding rates for both the regular and irregular rate-

adaptive LDPC codes of source degree distributions λreg(ω) and λirreg(ω), respectively.

The model very closely approximates the empirical results of the 4096-bit codes.


[Figure: each of four panels plots the probability mass function of rate against rate (bit/bit) for the SD LDPC, punctured turbo and RA LDPC codes at both lengths.]

Figure 3.7: Comparison of distributions of rate for rate-adaptive (RA) LDPC codes, syndrome-deleted (SD) LDPC codes and punctured turbo codes of lengths n = 512 and 4096 bits, operating at entropies (a) H(ε) = 0.2 bit/bit, (b) H(ε) = 0.4 bit/bit, (c) H(ε) = 0.6 bit/bit and (d) H(ε) = 0.8 bit/bit.


[Figure: both panels plot rate (bit/bit) against H(ε) (bit/bit) for the 4096-bit RA LDPC codes, the DE model and the SW bound.]

Figure 3.8: Comparison of empirical coding performance of 4096-bit rate-adaptive (RA) LDPC codes and coding performance modeled by density evolution (DE) for (a) regular codes and (b) irregular codes.

3.5 Summary

In this chapter, we show that rate-adaptive LDPC codes enable practical distributed

source coding with performance close to the Slepian-Wolf bound, which can be modeled

well by density evolution. We develop a construction for sequences of codes based

on the primitive of syndrome splitting. We derive the degree distributions of these

codes so that density evolution can be applied towards their analysis. Our experi-

mental results show that the rate-adaptive LDPC codes achieve performance close to

the Slepian-Wolf bound over the entire range of rates, and that density evolution is

accurate in modeling their empirical coding performance.


Chapter 4

Side Information Adaptation

This chapter considers distributed source coding in which each block of the source

at the encoder is associated with multiple candidates for side information at the

decoder, just one of which is statistically dependent on the source block. We argue

that the codec must compress and recover the source while simultaneously adapting

probabilities about which side information candidate best matches each source block.

We therefore call this topic side-information-adaptive distributed source coding.

In Section 4.1, we present distributed coding of random dot stereograms as a

motivating example and provide an overview of the operation of the side-information-

adaptive codec. Section 4.2 formalizes the statistical dependence between source and

side information as the block-candidate model and derives its conditional entropy

rate as the Slepian-Wolf bound. Section 4.3 describes the encoding of the source

into doping bits and cumulative syndrome bits, and the decoding as the sum-product

algorithm on a factor graph consisting of source, syndrome and side information

nodes. In Section 4.4, we analyze side information adaptation by transforming the

factor graph, deriving its degree distributions under varying rates of doping, and

applying a Monte Carlo simulation of density evolution. The experimental results in

Section 4.5 show that the proposed codec performs close to the Slepian-Wolf bound when

the encoder sends an appropriate number of doping bits. Moreover, density evolution

accurately models the performance and is therefore used to design the doping rate.

30



Figure 4.1: Stereogram consisting of a pair of random dot images. To view the images stereoscopically, cross your eyes to the point that the images completely overlap. A square shape in the center of the overlapped image appears to float out of the plane of the page.

4.1 Distributed Random Dot Stereogram Coding

Random dot stereograms represent perhaps the simplest possible model for stereo

images [91]. A random dot stereogram consists of a pair of statistically dependent

random dot images, like those shown in Fig. 4.1. By itself each image is devoid of

depth cues, but together they create a sensation of depth when viewed stereoscopi-

cally. Shapes appear to float in planes in front of or behind the actual image surface.

The optical illusion arises because statistically dependent regions of the pair of im-

ages are not necessarily located in the same position within their respective images,

as shown in the two superpositions in Fig. 4.2. Regions corresponding to floating

shapes are in fact shifted relative to each other and the perceived depth of the shape

scales with the amount of disparity. Colocated statistically dependent regions in the

pair of images, on the other hand, appear in the same plane as the surface.

The distributed source coding of the pair of images of the random dot stereogram is

a compelling problem because each image is by itself incompressible, but substantial

compression is possible by exploiting their joint statistics at the decoder. In the

lossless case, the total potential savings are available using asymmetric distributed



Figure 4.2: Superpositions of the two random dot images of Fig. 4.1, colored red and cyan, respectively, with displacements of (a) 0 and (b) 8 pixels. The superposition with displacement 0 pixels reveals statistical dependence at the periphery, while that at 8 pixels reveals statistical dependence in a central square region. These superposed images are anaglyphs and each could be viewed stereoscopically using red-cyan 3D glasses. As in Fig. 4.1, a square shape in the center would appear to float out of the plane of the page.

source coding, in which one image is the source and the other is the side information.

Fig. 4.3 depicts three lossless asymmetric codecs that can be applied to distributed

random dot stereogram coding.

Baseline Codec

The baseline codec in Fig. 4.3(a) applies the rate-adaptive LDPC codes of Chapter 3

without modification. This means it ignores the disparity between the source and side

information and tries to exploit the statistical dependence between colocated pixels.

But, in regions of nonzero disparity, the colocated pixels are in fact statistically

independent. Consequently, the baseline system usually compresses the source very

poorly, and not at all when there is nonzero disparity throughout the random dot

stereogram.


[Figure: three block diagrams; in each, an LDPC encoder sends encoded increments to an LDPC decoder with side information, rate control feedback and a reconstruction output. Diagram (b) adds an oracle at the decoder and (c) a side information adapter.]

Figure 4.3: Asymmetric distributed source coding of one random dot image with the other as side information: (a) baseline, (b) oracle and (c) side-information-adaptive codecs.


Oracle Codec

The oracle codec in Fig. 4.3(b) builds on the baseline codec by adding a disparity

oracle to the decoder. The oracle realigns the regions of nonzero disparity in the side

information so that they are colocated with the matching regions in the source image.

In this way, the rate-adaptive LDPC codes exploit the true statistical dependence

between the pair of images, yielding better coding performance. But a fully informed

oracle is impossible because it requires knowledge of the source at the beginning of

the process of decoding the source.

Side-Information-Adaptive Codec

The side-information-adaptive codec in Fig. 4.3(c) provides a practical way for the

decoder to simultaneously learn the disparity and recover the source, at a coding rate

very competitive with that of the impractical oracle system. We replace the oracle

with a module called the side information adapter. Whereas the oracle realigns the

side information through the correct disparity map, the adapter considers multiple

side information candidates realigned through all possible disparity maps. The overall

decoder alternates between iterations of the LDPC decoder and the adapter, both of

which exchange statistical estimates of the source with each other. Each iteration of

LDPC decoding refines the source estimate using the increments of cumulative syn-

drome available so far. Each iteration of side information adaptation refines the source

estimate by computing the likelihoods of all the side information candidates. When

the decoding loop terminates after a fixed number of iterations, either the source is

recovered or the decoder requests a further increment of cumulative syndrome for use

in another decoding attempt.

Our original treatment of distributed random dot stereogram coding [209] casts

the iterative side-information-adaptive decoding algorithm as expectation maximiza-

tion [47]. The decoder’s goal is to recover the reconstruction as the maximum likeli-

hood estimate of the source given both the side information and the received incre-

ments of cumulative syndrome, treating the disparity as hidden variables. The expec-

tation step, which runs within the side information adapter, fixes the source estimate

and estimates the disparity. The maximization step fixes the disparity estimate and


estimates the source using one belief propagation iteration of LDPC decoding. The

remainder of this chapter takes a more abstract view of the side-information-adaptive

decoder and poses it in its entirety as the sum-product algorithm on a single factor

graph. The advantage of this approach is the side information adaptation analysis of

this algorithm using density evolution. We commence by formalizing the statistical

relationship between the source and side information.

4.2 Model for Source and Side Information

4.2.1 Definition of the Block-Candidate Model

Define the source X to be an equiprobable random binary vector of length n and the

side information Y to be a random binary matrix of dimension n× c,

X = [X1, X2, . . . , Xn]ᵀ and Y = [ Y1,1 Y1,2 · · · Y1,c ; Y2,1 Y2,2 · · · Y2,c ; . . . ; Yn,1 Yn,2 · · · Yn,c ]. (4.1)

Assume that n is a multiple of the block size b. Then we further define blocks of X

and Y; namely, vectors x[i] of length b and matrices y[i] of dimension b× c,

x[i] = [X(i−1)b+1, X(i−1)b+2, . . . , Xib]ᵀ and y[i] = [ Y(i−1)b+1,1 · · · Y(i−1)b+1,c ; Y(i−1)b+2,1 · · · Y(i−1)b+2,c ; . . . ; Yib,1 · · · Yib,c ]. (4.2)

Finally, define a candidate y[i, j] to be the jth column of block y[i]; that is, y[i, j] = (Y(i−1)b+1,j, Y(i−1)b+2,j, . . . , Yib,j), a vector of length b.

The statistics of the block-candidate model are illustrated in Fig. 4.4. The dependence between X and Y is through the vector Z = (Z1, Z2, . . . , Zn/b) of hidden random variables, each of which is uniformly distributed over {1, 2, . . . , c}. Given Zi = zi, the block x[i] has a binary symmetric relationship of crossover probability ε


[Figure: the source X, a column of n bits, alongside the side information Y, with c candidates per block of b bits.]

Figure 4.4: Block-candidate model of statistical dependence. The source X is an equiprobable binary vector of length n bits and the side information Y is a binary matrix of dimension n × c. Binary values are shown as light/dark. For each block of b bits of X (among n/b such nonoverlapping blocks), the corresponding b × c block of Y contains exactly one candidate dependent on X (shown color-coded). The dependence is binary symmetric with crossover probability ε. All other candidates are independent of X.

with the candidate y[i, zi]. That is, y[i, zi] = x[i] + n[i] modulo 2, where n[i] is a random binary vector with independent elements equal to 1 with probability ε. All other candidates y[i, j], j ≠ zi, in this block are equiprobable random vectors, independent of x[i].
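A sampler for the block-candidate model follows directly from this description. The following Python sketch draws X, the hidden indices Z and the side information Y exactly as defined; the function name and the list-of-lists layout of Y are my own illustrative choices.

```python
import random

def sample_block_candidate(n, b, c, eps, seed=0):
    """Draw (X, Y, Z) from the block-candidate model: X is n equiprobable
    bits; within each of the n//b blocks, candidate Z[i] of Y is X's block
    passed through a BSC(eps), while the other c-1 candidates are fresh
    coin flips."""
    rng = random.Random(seed)
    X = [rng.randint(0, 1) for _ in range(n)]
    Z = [rng.randrange(c) for _ in range(n // b)]
    # Start with all candidates independent and equiprobable...
    Y = [[rng.randint(0, 1) for _ in range(c)] for _ in range(n)]
    # ...then overwrite the dependent candidate of each block.
    for i in range(n // b):
        for r in range(i * b, (i + 1) * b):
            noise = 1 if rng.random() < eps else 0
            Y[r][Z[i]] = X[r] ^ noise
    return X, Y, Z
```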

In the context of random dot stereograms, the blocks x[i] of X form a regular tiling

of the source image with each tile comprising b pixels. The candidates y[i, j] of a block

of Y represent the regions of the side information image that must be considered to

find the statistically dependent match with the source tile corresponding to x[i]. Thus,

the number of candidates c is the size of the search range in the side information

image for each tile in the source image. All three of the asymmetric distributed


source codecs of Fig. 4.3 recover the source X from its encoding in conjunction with

the side information Y, but they differ in their treatment of the hidden variables Z.

The baseline codec assumes that Z is fixed to some arbitrary value, the oracle codec

knows the true realization, and the side-information-adaptive codec infers it.

4.2.2 Slepian-Wolf Bound

The Slepian-Wolf bound for the block-candidate model is the conditional entropy rate

H(X|Y), which can be expressed as

H(X|Y) = H(X|Y,Z) +H(Z|Y)−H(Z|X,Y). (4.3)

The first term H(X|Y,Z) = H(ε) = −ε log2 ε − (1 − ε) log2(1 − ε) bit/bit, since

each block x[i] = y[i, zi] + n[i] modulo 2. Given Y and Z, the only randomness is

supplied by n[i], the random binary vectors with independent elements equal to 1

with probability ε.

The second term H(Z|Y) = H(Z) = (1/b)H(Zi) = (1/b) log2 c bit/bit, since no information about the hidden variables Z is revealed by the side information Y alone. Per block of b bits, each variable Zi is uniformly distributed over c values.

The third term H(Z|X,Y) can be computed exactly by enumerating all joint

realizations of blocks (x[i],y[i]) = (x, y) along with their probabilities P{x, y} and

entropy terms H(Zi|x, y).

H(Z|X,Y) = (1/b) H(Zi | x[i], y[i]) (4.4)

= (1/b) Σ_{x,y} P{x, y} H(Zi | x, y) (4.5)

= (1/b) Σ_y P{y | x = 0} H(Zi | x = 0, y) (4.6)

The final equality sets x to 0 because the probability and entropy terms are unchanged

by flipping any bit in x and the colocated bits in the candidates of y. Since the term

H(Zi|x = 0, y) only depends on the number of bits in each candidate equal to 1, the

calculation is tractable for small values of b and c.
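A sketch of that exact enumeration: since H(Zi|x = 0, y) depends only on the weight (number of 1s) of each candidate, it suffices to enumerate weight vectors with binomial multiplicities. The function name is an illustrative assumption, and the run time grows as (b+1)^c, so this is only for small b and c.

```python
from itertools import product
from math import comb, log2

def exact_cond_entropy_Z(b, c, eps):
    """Eq. (4.6): H(Z|X,Y) in bit/bit by enumerating the candidate
    weight vectors (w_1, ..., w_c), tractable for small b and c."""
    H = 0.0
    for w in product(range(b + 1), repeat=c):
        # Likelihood that candidate j is the statistically dependent one.
        lik = [eps ** wj * (1 - eps) ** (b - wj) for wj in w]
        s = sum(lik)
        if s == 0:
            continue  # zero-probability weight pattern
        mult = 1.0
        for wj in w:
            mult *= comb(b, wj)  # number of realizations y with these weights
        # P{y of this weight type | x = 0}: z uniform over c, and the
        # c-1 independent candidates are equiprobable (factor 2^(-b) each).
        p_y = mult * s / c * 2.0 ** (-b * (c - 1))
        h_z = -sum(l / s * log2(l / s) for l in lik if l > 0)
        H += p_y * h_z
    return H / b
```

At ε = 0.5 the source and side information are independent, so H(Z|X,Y) collapses to H(Z) = (1/b) log2 c.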


An alternative way to simplify the evaluation of H(Z|X,Y) is to assume that the realization y|x = 0 is typical; that is, the statistically dependent candidate contains εb bits of value 1 and all other candidates contain b/2 bits of value 1 each. This assumption of typicality is valid for large values of b, for which the asymptotic equipartition property [46] deems Σ_{y typical} P{y|x = 0} ≈ 1. Approximating (4.6) gives

H(Z|X,Y) ≈ (1/b) H(Zi | x = 0, y typical) (4.7)

= (1/b) (−pdep log2 pdep − (c − 1) pindep log2 pindep), (4.8)

where pdep = wdep / (wdep + (c − 1) windep) (4.9)

pindep = windep / (wdep + (c − 1) windep). (4.10)

Here, wdep = (1 − ε)^((1−ε)b) ε^(εb) and windep = (1 − ε)^(b/2) ε^(b/2) are the likelihood weights of the statistically dependent and independent candidates, respectively, being identified as the statistically dependent candidate. In this way, the expression in (4.8) computes the entropy of the index of the statistically dependent candidate.
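The typicality approximation (4.7)-(4.10) is a few lines of arithmetic; a hedged sketch (function name my own):

```python
from math import log2

def approx_cond_entropy_Z(b, c, eps):
    """Eq. (4.7)-(4.10): typicality approximation of H(Z|X,Y) in
    bit/bit, valid for large block size b."""
    w_dep = (1 - eps) ** ((1 - eps) * b) * eps ** (eps * b)
    w_indep = (1 - eps) ** (b / 2) * eps ** (b / 2)
    norm = w_dep + (c - 1) * w_indep
    p_dep = w_dep / norm
    p_indep = w_indep / norm
    H = -p_dep * log2(p_dep) if p_dep > 0 else 0.0
    if p_indep > 0:
        H -= (c - 1) * p_indep * log2(p_indep)
    return H / b
```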

Fig. 4.5 plots the Slepian-Wolf bounds for the block-candidate model as condi-

tional entropy rates H(X|Y), in exact form for tractable combinations of b and c

and approximated under the typicality assumption for b = 64. Note that the two

computations agree for the combination b = 64, c = 2. The Slepian-Wolf bound is

decreasing in block size b and increasing in both number of candidates c and H(ε).

4.3 Side-Information-Adaptive Codec

4.3.1 Encoder

The side-information-adaptive encoder’s objective is to code the source X to a rate

R > H(X|Y), the Slepian-Wolf bound. The bit stream comprises two segments. The

first segment, called the doping bits, is a sampling of the bits of X sent directly at

a fixed rate Rfixed bit/bit. The second segment consists of the cumulative syndrome

bits of a rate-adaptive LDPC code sent at a variable rate Radaptive bit/bit.


[Figure: four panels plot H(X|Y) (bit/bit) against H(ε) (bit/bit), with exact curves for b = 1, 4, 16, 64 where tractable, the b = 64 approximation, and H(ε).]

Figure 4.5: Slepian-Wolf bounds for the block-candidate model, shown as conditional entropy rates H(X|Y) for number of candidates c equal to (a) 2, (b) 4, (c) 8 and (d) 16. The exact form is shown for tractable values of block size b and the approximation (under the typicality assumption) is shown for b = 64. Note that the exact and approximate forms agree for the combination b = 64, c = 2.


The purpose of doping is to initialize the side-information-adaptive decoder with reliable information about X. The doping pattern is deterministic and regular, so that each block x[i] contributes either ⌊bRfixed⌋ or ⌈bRfixed⌉ doping bits. The rate-adaptive LDPC code is constructed as described in Section 3.2.3 with code length n, set to the length of X, and encoded data increment size k, set to a factor of n. Hence, the variable rate Radaptive = tk/n, where the code counter t is selected during operation using a rate control feedback channel. For convenience, Rfixed is chosen in advance to be a multiple of k/n as well.
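One regular allocation satisfying the per-block counts above can be sketched as follows. The dissertation fixes only that each block contributes ⌊bRfixed⌋ or ⌈bRfixed⌉ doping bits; giving the extra bits to the leading blocks, and the function name, are illustrative choices of my own.

```python
def doping_counts(n, b, r_fixed):
    """Spread n*r_fixed doping bits over the n//b blocks so that each
    block gets either floor(b*r_fixed) or ceil(b*r_fixed) of them."""
    total = round(n * r_fixed)
    blocks = n // b
    base, extra = divmod(total, blocks)
    return [base + (1 if i < extra else 0) for i in range(blocks)]

# Parameters of Fig. 4.6: n = 32, b = 8, Rfixed = 1/16 -> 2 doping bits.
counts = doping_counts(n=32, b=8, r_fixed=1 / 16)
```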

4.3.2 Decoder

The role of the side-information-adaptive decoder is to recover the source X from

the doping bits and the cumulative syndrome bits so far received from the encoder

in conjunction with the block-candidate side information Y. We denote the received

vector of doping bits as (D1, D2, . . . , DnRfixed). Recall that taking consecutive sums of

the received cumulative syndrome bits modulo 2 produces a vector of syndrome bits,

which we write as the vector (S1, S2, . . . , SnRadaptive) in this section.
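Recovering the syndrome bits from the cumulative syndrome bits is a matter of pairwise sums of consecutive received bits modulo 2; a minimal sketch (function name my own):

```python
def cumulative_to_syndrome(cum):
    """Consecutive mod-2 sums of the cumulative syndrome bits recover
    the syndrome bits: S_1 = C_1 and S_u = C_u xor C_(u-1) for u > 1."""
    return [c ^ (cum[u - 1] if u else 0) for u, c in enumerate(cum)]
```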

The decoder synthesizes all this information by applying the sum-product algo-

rithm on a factor graph structured like the one shown in Fig. 4.6. Each source node,

which represents a source bit, is connected to doping, syndrome and side information

nodes that bear some information about that source bit. The edges carry messages

that represent probabilities about the values of their attached source bits. The sum-

product algorithm iterates by message passing among the nodes so that ultimately

all the information is shared across the entire factor graph. The algorithm terminates

successfully when the source bit estimates, when thresholded, are consistent with the

vector of syndrome bits. If this condition is not reached within a maximum number of

iterations, the decoder increments Radaptive by requesting more cumulative syndrome

bits from the encoder. Then the vector of syndrome bits is regenerated and the sum-

product algorithm is applied to a new factor graph that includes the new syndrome

bits. This process repeats until successful termination.

The nodes in the graph accept input messages from and produce output messages

for their neighbors. These messages represent probabilities that the connected source


CHAPTER 4. SIDE INFORMATION ADAPTATION 41

[Figure: factor graph with source nodes in the center connected to syndrome nodes S_1–S_16, doping nodes D_1–D_2, and side information nodes y[1]–y[4].]

Figure 4.6: Factor graph for side-information-adaptive decoder with parameters: code length n = 32, block size b = 8, number of candidates c = 4, Rfixed = 1/16 and Radaptive = 1/2. The associated rate-adaptive LDPC code is regular with edge-perspective source and syndrome degree distributions λ_t(ω) = ω² and ρ_t(ω) = ω⁵, respectively, where the code counter t = (n/k)Radaptive.

bits are 0, and are denoted by vectors (p_1^in, p_2^in, ..., p_d^in) and (p_1^out, p_2^out, ..., p_d^out), respectively, where d is the degree of the node. By the sum-product rule, each output message p_u^out is a function of all input messages except p_u^in, the input message on the same edge. Each source node additionally produces an estimate p^est that its bit is 0, based on all the input messages. In the rest of this section, we detail the computation rules of the nodes or combinations of nodes shown in Fig. 4.7.


[Figure: four node-combination diagrams, each with input messages p_v^in, output messages p_v^out and, where applicable, the estimate p^est.]

Figure 4.7: Factor graph node combinations. Input and output messages and source bit estimate (if applicable) of (a) source node unattached to doping node, (b) source node attached to doping node, (c) syndrome node and (d) side information node.

Source Node Unattached to Doping Node

Fig. 4.7(a) shows a source node unattached to a doping node. There are two outcomes for the source bit random variable: it is 0 with likelihood weight ∏_{v=1}^{d} p_v^in or 1 with likelihood weight ∏_{v=1}^{d} (1 − p_v^in). Consequently,

p^est = ∏_{v=1}^{d} p_v^in / (∏_{v=1}^{d} p_v^in + ∏_{v=1}^{d} (1 − p_v^in)).    (4.11)


Ignoring the input message p_u^in, the weights are ∏_{v≠u} p_v^in and ∏_{v≠u} (1 − p_v^in), so

p_u^out = ∏_{v≠u} p_v^in / (∏_{v≠u} p_v^in + ∏_{v≠u} (1 − p_v^in)).    (4.12)
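A direct transcription of (4.11) and (4.12) into code may clarify the rule. The sketch below is illustrative only, not the dissertation's implementation: the estimate combines all inputs, while each output message excludes the input on its own edge.

```python
def source_node_messages(p_in):
    # p_in: incoming probabilities that the source bit is 0.
    # Returns (p_est, p_out) per (4.11) and (4.12).
    def combine(probs):
        w0 = w1 = 1.0          # likelihood weights of bit = 0 and bit = 1
        for p in probs:
            w0 *= p
            w1 *= 1.0 - p
        return w0 / (w0 + w1)

    p_est = combine(p_in)
    p_out = [combine(p_in[:u] + p_in[u + 1:]) for u in range(len(p_in))]
    return p_est, p_out
```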

Source Node Attached to Doping Node

Fig. 4.7(b) shows a source node attached to a doping node of binary value D. Recall that the doping bit specifies the value of the source bit, so the source bit estimate and the output messages are independent of the input messages:

p^est = p_u^out = 1 − D.    (4.13)

Syndrome Node

Fig. 4.7(c) shows a syndrome node of binary value S. Since the connected source bits have modulo 2 sum equal to S, the output message p_u^out is the probability that the modulo 2 sum of all the other connected source bits is equal to S. We argue by mathematical induction on d that

p_u^out = 1/2 + [(1 − 2S)/2] ∏_{v≠u} (2p_v^in − 1).    (4.14)
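The closed form in (4.14) can be checked numerically; the following sketch (an assumed helper, not from the text) implements the rule for a syndrome node of value S.

```python
def syndrome_node_messages(s, p_in):
    # s: syndrome bit (0 or 1); p_in: probabilities that each connected
    # source bit is 0. p_out[u] is the probability that the mod-2 sum of
    # all connected source bits other than u equals s, per (4.14).
    p_out = []
    for u in range(len(p_in)):
        prod = 1.0
        for v, p in enumerate(p_in):
            if v != u:
                prod *= 2.0 * p - 1.0
        p_out.append(0.5 + 0.5 * (1 - 2 * s) * prod)
    return p_out
```

Note that an input message equal to 1/2 (no information) zeroes the product and yields the uninformative output 1/2, while inputs of 1 leave the product unchanged, as used later in Section 4.4.1.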

Side Information Node

Fig. 4.7(d) shows a side information node of value y[i] consisting of b × c bits, all of which are labeled in Fig. 4.7(d), omitting the block index i for notational simplicity. Just as for the other types of node, the computation of the output message p_u^out depends on all input messages except p_u^in. But since the source bits and the bits of the dependent candidate are related through a crossover probability ε, we define the noisy input probability of a source bit being 0 by

p_v^noisy-in = (1 − ε)p_v^in + ε(1 − p_v^in).    (4.15)


In computing p_u^out, the likelihood weight of the candidate y[i, j] being the statistically dependent candidate equals the product of the likelihoods of that candidate's b − 1 bits excluding Y_{u,j},

w_{u,j} = ∏_{v≠u} (1[Y_{v,j} = 0] p_v^noisy-in + 1[Y_{v,j} = 1] (1 − p_v^noisy-in)),    (4.16)

where 1[·] is the indicator function. We finally marginalize p_u^out as the normalized sum of weights for which Y_{u,j} is 0, passed through crossover probability ε:

p_u^clean-out = ∑_{j=1}^{c} 1[Y_{u,j} = 0] w_{u,j} / ∑_{j=1}^{c} w_{u,j},    (4.17)

p_u^out = (1 − ε) p_u^clean-out + ε (1 − p_u^clean-out).    (4.18)
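The three steps (4.15)–(4.18) compose as follows; this sketch (illustrative only, with assumed names) weighs each candidate by how well it explains the other b − 1 source bits, then marginalizes the bit in question.

```python
def side_info_node_messages(Y, eps, p_in):
    # Y: b x c list of side information bits, Y[v][j].
    # eps: crossover probability to the dependent candidate.
    # p_in: length-b incoming probabilities that each source bit is 0.
    b, c = len(Y), len(Y[0])
    noisy = [(1 - eps) * p + eps * (1 - p) for p in p_in]          # (4.15)
    p_out = []
    for u in range(b):
        w = []                                                     # (4.16)
        for j in range(c):
            wj = 1.0
            for v in range(b):
                if v != u:
                    wj *= noisy[v] if Y[v][j] == 0 else 1.0 - noisy[v]
            w.append(wj)
        # Normalized sum of weights whose bit u is 0 ...            (4.17)
        clean = sum(wj for j, wj in enumerate(w) if Y[u][j] == 0) / sum(w)
        p_out.append((1 - eps) * clean + eps * (1 - clean))        # (4.18)
    return p_out
```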

4.4 Analysis of Side Information Adaptation

We use density evolution to determine whether the sum-product algorithm converges

on the proposed factor graph. Our approach first transforms the factor graph into

a simpler one that is equivalent with respect to convergence. Next we derive degree

distributions for this graph. Finally, we describe a Monte Carlo simulation of density

evolution for the side-information-adaptive decoder.

4.4.1 Factor Graph Transformation

The convergence of the sum-product algorithm is invariant under manipulations to

the source, side information, syndrome and doping bits and the factor graph itself as

long as the messages passed along the edges are preserved up to relabeling.

The first simplification is to reorder the candidates within each side information block so that the statistically dependent candidate is in the first position y[i, 1]. This shuffling has no effect on the messages.

We then replace each side information candidate y[i, j] with the modulo 2 sum of

itself and its corresponding source block x[i], and set all the source bits, syndrome


bits and doping bits to 0. The values of the messages would be unchanged if we relabeled each message to stand for the probability that the attached source bit (which is now 0) is equal to the original value of that source bit.

Finally, observe that any source node attached to a doping node always outputs

deterministic messages equal to 1, since the doping bit D is set to 0 in (4.13). We

therefore remove all instances of this node combination along with all their attached

edges from the factor graph. In total, a fraction Rfixed of the source nodes are removed.

Although some edges are removed at some syndrome nodes, no change is required to the syndrome node decoding rule, because ignoring input messages p_v^in = 1 does not change the term ∏_{v≠u} (2p_v^in − 1) in (4.14). In contrast, side information nodes with edges removed must substitute the missing input messages with 1 in (4.15) to (4.18).

Applying these three manipulations to the factor graph of Fig. 4.6 produces the

simpler factor graph, equivalent with respect to convergence, shown in Fig. 4.8, in

which the values 0 and 1 are denoted light and dark, respectively. The syndrome nodes

all have value 0, which is consistent with the source bits all being 0 as well. Only the

side information candidates in the first position y[i, 1] are statistically dependent with respect to the source bits. In particular, the bits of y[i, 1] are independently equal to 0 with probability 1 − ε, while the bits of y[i, j ≠ 1] are independently equiprobable. Note also the absence of combinations of source nodes linked to doping nodes.

4.4.2 Derivation of Degree Distributions

Density evolution runs, not on a factor graph itself, but using degree distributions

of that factor graph. Recall that degree distributions exist in two equivalent forms,

edge-perspective and node-perspective. In an edge-perspective degree distribution

polynomial, the coefficient of ωd−1 is the fraction of edges connected to a certain type

of node of degree d out of all edges connected to nodes of that type. In a node-

perspective degree distribution polynomial, the coefficient of ωd is the fraction of a

certain type of node of degree d out of all nodes of that type.

In total, we derive twelve degree distributions, six for each of two different graphs.

The source, syndrome and side information degree distributions of the factor graph

before transformation (like the one in Fig. 4.6) are respectively labeled λt(ω), ρt(ω)

and βt(ω) in edge perspective and Lt(ω), Rt(ω) and Bt(ω) in node perspective, where


[Figure: transformed factor graph with source nodes, syndrome nodes S_1–S_16 and side information nodes y[1]–y[4]; the values 0 and 1 are shown light and dark.]

Figure 4.8: Transformed factor graph equivalent to that of Fig. 4.6 in terms of convergence. After doping, the edge-perspective source and syndrome degree distributions of the associated rate-adaptive LDPC code are λ*_t(ω) = ω² and ρ*_t(ω) = (2/45)ω³ + (2/9)ω⁴ + (11/15)ω⁵. The edge-perspective side information degree distribution is β*_t(ω) = (7/15)ω⁶ + (8/15)ω⁷.

the code counter t = (n/k)Radaptive. Their expected counterparts in the factor graph after transformation (like the one in Fig. 4.8) are denoted λ*_t(ω), ρ*_t(ω) and β*_t(ω) in edge perspective and L*_t(ω), R*_t(ω) and B*_t(ω) in node perspective.

For source degree distributions λt(ω), Lt(ω), λ∗t (ω) and L∗t (ω), we count the source-

syndrome edges, but neither the source-side-information nor source-doping edges.

In this way, λt(ω), ρt(ω), Lt(ω) and Rt(ω) are consistent with the corresponding

definitions for rate-adaptive LDPC codes.


Source Degree Distributions

Recall from (3.1) that the edge-perspective source degree distribution λ_t(ω) is invariant with respect to t and equal to the designed source degree distribution λ(ω). During the factor graph transformation, a fraction Rfixed of the source nodes are selected for removal regardless of their degrees. Therefore, the expected source degree distributions are preserved:

λ*_t(ω) = λ_t(ω) = λ(ω),    (4.19)

L*_t(ω) = L_t(ω) = ∫_0^ω λ(ψ) dψ / ∫_0^1 λ(ψ) dψ,    (4.20)

using the edge-perspective to node-perspective conversion formula in [162].
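The conversion in (4.20) amounts to dividing each edge-perspective coefficient by its degree and renormalizing. A sketch, under the assumed representation of a polynomial as a dict mapping node degree d to the coefficient of ω^(d−1):

```python
from fractions import Fraction

def edge_to_node(lam):
    # lam[d] is the coefficient of w^(d-1) in the edge-perspective
    # polynomial; returns L with L[d] the coefficient of w^d.
    integral = {d: Fraction(c) / d for d, c in lam.items()}  # integrate termwise
    norm = sum(integral.values())                            # integral at w = 1
    return {d: c / norm for d, c in integral.items()}
```

For the regular distribution λ(ω) = ω², represented here as {3: 1}, the conversion returns {3: 1}, i.e. L(ω) = ω³: every source node has degree 3.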

Syndrome Degree Distributions

The edge-perspective syndrome degree distribution ρ_t(ω) is obtained in (3.2) to (3.8) by performing an inductive process on the node-perspective syndrome degree distribution R_t(ω). We derive R*_t(ω) from R_t(ω) as described below, and obtain ρ*_t(ω) by differentiating and normalizing R*_t(ω), using a formula analogous to (3.6).

Notice that the factor graph transformation, by removing a fraction Rfixed of the source nodes regardless of their degrees, removes the same fraction of source-syndrome edges in expectation. From the perspective of a syndrome node of original degree d, each edge is retained independently with probability 1 − Rfixed. Consequently, the chance that it has degree d* after factor graph transformation is the binomial probability C(d, d*) (1 − Rfixed)^{d*} (Rfixed)^{d−d*}. So if the node-perspective syndrome degree distribution before transformation is expressed as R_t(ω) = ∑_{d=1}^{dmax} A_d ω^d, then after transformation the expected node-perspective syndrome degree distribution is given by

R*_t(ω) = ∑_{d=1}^{dmax} [A_d / (1 − (Rfixed)^d)] ∑_{d*=1}^{d} C(d, d*) (1 − Rfixed)^{d*} (Rfixed)^{d−d*} ω^{d*},    (4.21)

where the normalization factor 1/(1 − (Rfixed)^d) accounts for the fact that degree d* = 0 syndrome nodes are not included in the degree distribution.
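Equation (4.21) is a binomial thinning of each degree term. The sketch below (illustrative only) applies it to a node-perspective distribution given as a degree → coefficient dict; the surviving coefficients still sum to 1 because the degree-0 mass is renormalized away.

```python
from math import comb

def thinned_syndrome_dist(R, r_fixed):
    # R[d] is the coefficient A_d of w^d in R_t(w). Each edge survives
    # with probability 1 - r_fixed; degree-0 nodes are excluded via the
    # 1/(1 - r_fixed**d) normalization of (4.21).
    out = {}
    for d, A in R.items():
        norm = 1.0 - r_fixed ** d
        for d_star in range(1, d + 1):
            p = comb(d, d_star) * (1 - r_fixed) ** d_star * r_fixed ** (d - d_star)
            out[d_star] = out.get(d_star, 0.0) + A * p / norm
    return out
```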


Side Information Degree Distributions

In the factor graph before transformation, all side information nodes have degree b, so the edge- and node-perspective side information degree distributions are

β_t(ω) = ω^{b−1},    (4.22)

B_t(ω) = ω^b.    (4.23)

Since the doping pattern is deterministic and regular at rate Rfixed, each side information node retains either b* or b* + 1 edges in the transformed graph, where b* = ⌊b(1 − Rfixed)⌋. With fractional part A = b(1 − Rfixed) − b*, the node- and edge-perspective side information degree distributions after transformation are

B*_t(ω) = (1 − A)ω^{b*} + Aω^{b*+1},    (4.24)

β*_t(ω) = [(1 − A)b* / (b* + A)] ω^{b*−1} + [A(b* + 1) / (b* + A)] ω^{b*},    (4.25)

where β*_t(ω) is obtained from B*_t(ω) by differentiation and normalization.
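As a check of (4.24) and (4.25), for the parameters of Fig. 4.6 (b = 8, Rfixed = 1/16) the formulas give β*_t(ω) = (7/15)ω⁶ + (8/15)ω⁷, matching the caption of Fig. 4.8. A sketch, with dicts mapping node degree d to the coefficient of ω^d (node perspective) or ω^(d−1) (edge perspective):

```python
from math import floor

def side_info_dists(b, r_fixed):
    # Each side information node retains b* or b* + 1 edges after doping.
    b_star = floor(b * (1 - r_fixed))
    A = b * (1 - r_fixed) - b_star            # fractional part
    B = {b_star: 1 - A, b_star + 1: A}        # node perspective (4.24)
    mean = b_star + A                         # average retained degree
    beta = {b_star: (1 - A) * b_star / mean,  # edge perspective (4.25)
            b_star + 1: A * (b_star + 1) / mean}
    return B, beta
```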

4.4.3 Monte Carlo Simulation of Density Evolution

We now use densities to represent the distributions of messages passed among classes

of nodes. The source-to-side-information, source-to-syndrome, syndrome-to-source

and side-information-to-source densities are denoted Qso-si, Qso-syn, Qsyn-so and Qsi-so,

respectively. Another density Qsource captures the distribution of source bit estimates.

Fig. 4.9 depicts a schematic of the density evolution process. The message den-

sities are passed among three stochastic nodes that represent the side information,

source and syndrome nodes. Inside the nodes are written the probabilities that the

values at those positions are 0. Observe that the source and syndrome stochastic

nodes are deterministically 0 and only the elements of the candidate in the first posi-

tion of the side information stochastic node are biased towards 0, in accordance with

the transformation in Section 4.4.1. Fig. 4.9 also shows the edge-perspective degree

distributions of the transformed factor graph beside the stochastic nodes. Since ev-

ery source node connects to exactly one side information node, the edge-perspective

source degree distribution with respect to the side information nodes is 1.


[Figure: three stochastic nodes exchanging the message densities Qso-si, Qso-syn, Qsyn-so and Qsi-so, with Qsource produced at the source node.]

Figure 4.9: Density evolution for side-information-adaptive decoder. The three stochastic nodes are representatives of the side information, source and syndrome nodes, respectively, of a transformed factor graph, like the one in Fig. 4.8. The quantities inside the nodes are the probabilities that the values at those positions are 0. Beside the nodes are written the expected edge-perspective degree distributions of the transformed factor graph. The arrow labels Qso-si, Qso-syn, Qsyn-so and Qsi-so are densities of messages passed in the transformed factor graph, and Qsource is the density of source bit estimates.

During density evolution, each message density is stochastically updated as a

function of the values and degree distributions associated with the stochastic node

from which it originates and the other message densities that arrive at that stochastic

node. After a fixed number of iterations of evolution, Qsource is evaluated. The sum-

product algorithm is deemed to converge for the factor graph in question if and only if

the density of source bit estimates Qsource, after thresholding, converges to the source

bit value 0.

The rest of this section provides the stochastic update rules for the densities in a Monte Carlo simulation of density evolution. For the simulation, each of the densities Qsource, Qso-si, Qso-syn, Qsyn-so and Qsi-so is defined to be a set of samples q, each of which is a probability that its associated source bit is 0. At initialization, all samples q of all densities are set to 1/2.

Source Bit Estimate Density

To compute each sample qout of Qsource, let a set Qin consist of 1 sample drawn

randomly from Qsi-so and δ samples drawn randomly from Qsyn-so. The random


degree δ is drawn equal to d with probability equal to the coefficient of ω^d in node-perspective L*_t(ω), since there is one actual source bit estimate per node. Then, according to (4.11),

q^out = ∏_{q^in ∈ Q^in} q^in / (∏_{q^in ∈ Q^in} q^in + ∏_{q^in ∈ Q^in} (1 − q^in)).    (4.26)
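One Monte Carlo sample of this update can be sketched as follows (assumed helper names; not the dissertation's code): draw δ from the node-perspective distribution, draw the required samples, and combine them with the product rule of (4.26).

```python
import random

def draw_degree(dist, rng):
    # Draw a degree from a degree->coefficient dict whose values sum to 1.
    r, acc = rng.random(), 0.0
    for d in sorted(dist):
        acc += dist[d]
        if r < acc:
            return d
    return max(dist)

def update_qsource(q_si_so, q_syn_so, L_node, rng):
    # One sample of Qsource per (4.26): 1 sample from Qsi-so plus
    # delta samples from Qsyn-so, delta drawn from node-perspective L*_t.
    delta = draw_degree(L_node, rng)
    q_in = [rng.choice(q_si_so)] + [rng.choice(q_syn_so) for _ in range(delta)]
    w0 = w1 = 1.0
    for q in q_in:
        w0 *= q
        w1 *= 1.0 - q
    return w0 / (w0 + w1)
```

The updates of Qso-si and Qso-syn reuse the same product rule with different sample counts, as described next.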

Source-to-Side-Information Message Density

To compute each updated sample qout of Qso-si, let a set Qin consist of δ samples drawn

randomly from Qsyn-so. The random degree δ is drawn equal to d with probability

equal to the coefficient of ωd in node-perspective L∗t (ω), since there is one actual

output message per node. Using (4.12), the update formula is the same as (4.26).

Source-to-Syndrome Message Density

To compute each updated sample qout of Qso-syn, let a set Qin consist of 1 sample

drawn randomly from Qsi-so and δ − 1 samples drawn randomly from Qsyn-so. The

random degree δ is drawn equal to d with probability equal to the coefficient of ωd−1

in edge-perspective λ∗t (ω), since there is one actual output message per edge. Using

(4.12), the update formula is the same as (4.26).

Syndrome-to-Source Message Density

To compute each updated sample qout of Qsyn-so, let a set Qin consist of δ− 1 samples

drawn randomly from Qso-syn. The random degree δ is drawn equal to d with prob-

ability equal to the coefficient of ωd−1 in edge-perspective ρ∗t (ω), since there is one

actual output message per edge. The syndrome value is deterministically 0. Then, according to (4.14),

q^out = 1/2 + (1/2) ∏_{q^in ∈ Q^in} (2q^in − 1).    (4.27)


Side-Information-to-Source Message Density

To compute each updated sample q^out of Qsi-so, let a set {q_v^in}_{v=1}^{b−1} consist of δ − 1 samples drawn randomly from Qso-si and b − δ samples equal to 1. The random degree δ is drawn equal to d with probability equal to the coefficient of ω^{d−1} in edge-perspective β*_t(ω), since there is one actual output message per edge. The samples set to 1 substitute for the messages on edges removed during factor graph transformation due to doping.

For each q^out, create also a realization of a b × c block of side information from the joint distribution induced by the side information stochastic node. That is, each element Y_{v,j} is independently biased towards 0, with probability 1 − ε if j = 1 or probability 1/2 if j ≠ 1.

Generate sets {q_v^noisy-in}_{v=1}^{b−1} and {w_j}_{j=1}^{c}, before finally updating sample q^out, following (4.15) to (4.18):

q_v^noisy-in = (1 − ε)q_v^in + ε(1 − q_v^in),    (4.28)

w_j = ∏_{v=1}^{b−1} (1[Y_{v,j} = 0] q_v^noisy-in + 1[Y_{v,j} = 1] (1 − q_v^noisy-in)),    (4.29)

q^clean-out = ∑_{j=1}^{c} 1[Y_{b,j} = 0] w_j / ∑_{j=1}^{c} w_j,    (4.30)

q^out = (1 − ε)q^clean-out + ε(1 − q^clean-out).    (4.31)
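The full side-information-to-source update, combining the degree draw, the substituted doping messages, the realized side information block and (4.28)–(4.31), can be sketched as below. This is an illustrative sample-level simulation, not the original code.

```python
import random

def update_qsi_so(q_so_si, b, c, eps, beta_edge, rng):
    # One sample of Qsi-so. beta_edge maps degree d to the coefficient
    # of w^(d-1) in edge-perspective beta*_t.
    r, acc, delta = rng.random(), 0.0, max(beta_edge)
    for d in sorted(beta_edge):               # draw delta from beta*_t
        acc += beta_edge[d]
        if r < acc:
            delta = d
            break
    # delta - 1 samples from Qso-si, plus b - delta messages fixed at 1
    # (edges removed by doping), giving b - 1 inputs in total.
    q_in = [rng.choice(q_so_si) for _ in range(delta - 1)] + [1.0] * (b - delta)
    noisy = [(1 - eps) * q + eps * (1 - q) for q in q_in]          # (4.28)
    # Realize a b x c side information block: first candidate biased
    # toward 0 with probability 1 - eps, the others equiprobable.
    Y = [[0 if rng.random() < ((1 - eps) if j == 0 else 0.5) else 1
          for j in range(c)] for _ in range(b)]
    w = []
    for j in range(c):                                             # (4.29)
        wj = 1.0
        for v in range(b - 1):
            wj *= noisy[v] if Y[v][j] == 0 else 1.0 - noisy[v]
        w.append(wj)
    clean = sum(wj for j, wj in enumerate(w) if Y[b - 1][j] == 0) / sum(w)  # (4.30)
    return (1 - eps) * clean + eps * (1 - clean)                   # (4.31)
```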

4.5 Side-Information-Adaptive Coding Experimental Results

We evaluate the coding performance of the side-information-adaptive codec and the

accuracy of its analysis using density evolution, for binary source X and binary side

information Y under a variety of settings of the block-candidate model. Our key

findings are that good choices for the doping rate Rfixed are required to achieve com-

pression close to the Slepian-Wolf bound, and that the density evolution model suc-

cessfully makes those choices.


[Figure: two panels, "Performance without Doping" and "Performance with Doping" for b = 64, c = 16, plotting rate (bit/bit) against H(ε) (bit/bit) for the SIA codec, DE model, oracle codec, SW bound, H(ε) and doping rate.]

Figure 4.10: Comparison of performance of side-information-adaptive (SIA) codec with doping rate Rfixed (a) set to 0 and (b) set optimally using density evolution (DE).

The side-information-adaptive codec, in all our experiments, uses rate-adaptive LDPC codes of length n = 4096 bits, encoded data increment size k = 32 bits, and regular source degree distribution λreg(ω) = ω². Hence, Radaptive ∈ {1/128, 2/128, ..., 1}. For convenience, we allow Rfixed ∈ {0, 1/128, 2/128, ..., 1/8}. Our side information adaptation analysis is implemented as a Monte Carlo simulation of density evolution using up to 2^14 samples.

4.5.1 Performance with and without Doping

Fig. 4.10 illustrates the importance of the doping rate Rfixed for the setting of the

block-candidate model with block size b = 64 bits and number of candidates c = 16.

We plot the overall coding rates (each averaged over 100 trials) of the side-information-

adaptive codec and the corresponding oracle codec as we vary the entropy H(ε) of the

noise between X and the statistically dependent candidates of Y. We also plot the

side-information-adaptive codec’s Slepian-Wolf bound H(X|Y) and its performance

modeled according to density evolution. Note that the oracle decoder knows a priori


which candidates of Y are the ones statistically dependent on X, and its own Slepian-

Wolf bound is H(ε).

In Fig. 4.10(a), setting Rfixed = 0, the performance of the side-information-adaptive

codec without doping is far from both the performance of the oracle codec and the

Slepian-Wolf bound, since the side-information-adaptive decoder is not initialized

with sufficient reliable information about X. Remarkably, density evolution models

the poor empirical performance with reasonably good accuracy.

Increasing the doping rate Rfixed initializes the decoder better, but increasing it too

much penalizes the overall side-information-adaptive codec rate. We therefore search through all values Rfixed ∈ {0, 1/128, 2/128, ..., 1/8} and let density evolution determine which minimizes the coding rate for each H(ε). Fig. 4.10(b) shows the performance

with these optimal doping rates Rfixed. The side-information-adaptive codec operates

close to both the Slepian-Wolf bound and the oracle codec’s performance, and is

approximated by density evolution even better than when Rfixed = 0 in Fig. 4.10(a).

4.5.2 Performance under Different Block-Candidate Model Settings

In Fig. 4.11, we fix the number of candidates of the block-candidate model at c = 2 and vary the block size b; in Fig. 4.12, we fix the block size at b = 64 bits and vary the number of candidates c. For each setting, we find the optimal doping rates Rfixed and plot the

side-information-adaptive codec’s empirical performance and performance modeled

by density evolution. The corresponding Slepian-Wolf bounds are computed exactly

in Fig. 4.11 and approximately in Fig. 4.12 using the derivations in Section 4.2.2.

These figures demonstrate that, with optimal doping, the performance of the side-

information-adaptive codec is close to the Slepian-Wolf bound and is modeled well

using density evolution, under a variety of settings. As the block size increases while

the number of candidates is fixed, the doping rate decreases, and as the number of

candidates increases while the block size is fixed, the doping rate increases. Large

and small H(ε) also require greater doping rates than intermediate values of H(ε).

Note that, at H(ε) = 0.9 bit/bit, the coding rate saturates at 1 bit/bit, so Rfixed = 0

suffices.


[Figure: four panels plotting rate (bit/bit) against H(ε) (bit/bit) for the SIA codec, DE model, SW bound, H(ε) and doping rate, for b = 8, 16, 32 and 64 with c = 2.]

Figure 4.11: Coding performance of side-information-adaptive (SIA) codec and performance modeled according to density evolution (DE), with optimal doping rate Rfixed, under block-candidate model with number of candidates c fixed to 2 and different block sizes b equal to (a) 8, (b) 16, (c) 32 and (d) 64 bits.


[Figure: four panels plotting rate (bit/bit) against H(ε) (bit/bit) for the SIA codec, DE model, SW bound, H(ε) and doping rate, for c = 2, 4, 8 and 16 with b = 64 bits.]

Figure 4.12: Coding performance of side-information-adaptive (SIA) codec and performance modeled according to density evolution (DE), with optimal doping rate Rfixed, under block-candidate model with block size b fixed to 64 bits and different numbers of candidates c equal to (a) 2, (b) 4, (c) 8 and (d) 16.


4.6 Summary

This chapter covers side-information-adaptive distributed source coding, in which

each block of the source is statistically dependent on just one of several candidates

of the side information. We provide a motivation for and an overview of this work

in our discussion on distributed source coding of random dot stereograms. The sta-

tistical relationship between the source and side information is formalized in terms

of the block-candidate model, which admits the derivation of the conditional en-

tropy rate as the Slepian-Wolf bound. The encoder of the proposed side-information-

adaptive codec encodes the source into two segments: a fixed rate of doping bits

and a variable rate of cumulative syndrome bits (using rate-adaptive LDPC codes).

The decoder recovers the source with reference to the side information by apply-

ing the sum-product algorithm to a factor graph consisting of source, syndrome and

side information nodes. We analyze the side information adaptation of this codec

by first transforming its factor graph into a simpler one that is equivalent with re-

spect to convergence. After obtaining degree distributions for the transformed graph,

we apply a Monte Carlo simulation of density evolution. Our experimental results

show that the side-information-adaptive codec compresses the source at rates close

to the Slepian-Wolf bound under a variety of settings of the block-candidate model

as long as the doping rate is set optimally. We use density evolution to find the best

choices of doping rate, since it accurately models the empirical performance of the

side-information-adaptive codec.


Chapter 5

Multilevel Side Information

Adaptation

This chapter extends the binary techniques of Chapter 4 to the adaptive distributed

source coding of a multilevel source with respect to multilevel side information.

In Section 5.1, we spell out the new challenges in both the coding and the anal-

ysis by density evolution. Section 5.2 extends the block-candidate statistical model

to the multilevel case and derives the Slepian-Wolf bound. In Section 5.3, we de-

scribe the whole symbol encoder, which applies either a binary or Gray symbol-to-bit

mapping before using the side-information-adaptive encoder, and the whole symbol

decoder, which augments the factor graph of the side-information-adaptive decoder

with symbol and mapping nodes. In Section 5.4, we analyze the multilevel extension

by transforming the augmented factor graph, deriving its degree distributions under

varying rates of doping, and applying a Monte Carlo simulation of density evolution.

The experimental results in Section 5.5 demonstrate that the codec’s performance is

close to the Slepian-Wolf bound and well modeled by the density evolution analy-

sis, for both binary and Gray mappings. We also show that the codec using Gray

mapping usually requires a lower doping rate than the one using binary mapping.



CHAPTER 5. MULTILEVEL SIDE INFORMATION ADAPTATION 58

5.1 Challenges of Extension to Multilevel Coding

Multilevel source and side information introduce new challenges for adaptive dis-

tributed source coding and its analysis. The codec must now apply whole symbol

coding to perform efficient learning of the hidden variables that relate the source

and side information. The density evolution analysis, in order to capture symbol

processing, requires certain symmetry conditions to hold.

Whole Symbol Coding

One way to apply a binary codec to multilevel symbols is bit-plane-by-bit-plane,

making sure to exploit the redundancy among bit planes. This approach works for

rate-adaptive distributed source coding if each bit plane decoded becomes additional

side information at the decoder for the coding of subsequent bit planes [7, 38]. But

the bit-plane-by-bit-plane extension does not work well for side-information adaptive

distributed source coding because each source bit plane contributes different informa-

tion about which candidates of the side information are the statistically dependent

ones. Fig. 5.1 shows an example for source and side information blocks, x[i] and

y[i], respectively, in which each symbol is represented by 2 bits. Either the first or

second bit plane alone suggests that either candidate y[i, 2] or y[i, 3], respectively,

is the closest match to x[i]. But both bit planes together reveal that y[i, 1] is in

fact the matching candidate. We therefore require whole symbol coding instead of

bit-plane-by-bit-plane coding.

Symmetry Conditions for Density Evolution Analysis

The method of density evolution makes the simplifying assumption that the source

symbols are all 0 without loss of generality. This assumption is valid if the probability

of a joint realization of source and side information blocks is unchanged if any bit of

any symbol in the source block is flipped and the colocated bits in the side information

candidates are also flipped.

In the binary block-candidate model, we therefore require each source block to

have a binary symmetric statistical dependence with its matching side information

candidate. We now generalize the symmetry conditions for multilevel symbols. The


Figure 5.1: Comparison of whole symbol and bit-plane-by-bit-plane statistics. The source block x[i] matches candidates y[i, 1], y[i, 2] and y[i, 3] in 6, 0 and 0 symbols, respectively. But the number of corresponding matches in the first bit plane are 7, 8 and 0 bits, and in the second bit plane are 7, 0 and 8 bits, reading bit planes from the left.
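The pitfall above can be reproduced numerically. The following sketch uses a hypothetical all-zero source block and three hand-picked candidates (not the exact data of Fig. 5.1): scoring each bit plane alone selects a different, wrong candidate, while whole-symbol scoring selects the truly dependent one.

```python
def bits(sym):
    """MSB-first 2-bit representation of a symbol in {0, 1, 2, 3}."""
    return ((sym >> 1) & 1, sym & 1)

def symbol_matches(x, y):
    """Number of whole-symbol agreements between two blocks."""
    return sum(a == b for a, b in zip(x, y))

def bitplane_matches(x, y, plane):
    """Number of agreements in one bit plane (0 = MSB, 1 = LSB)."""
    return sum(bits(a)[plane] == bits(b)[plane] for a, b in zip(x, y))

x = [0, 0, 0, 0]    # all-zero source block of 2-bit symbols
y1 = [0, 0, 3, 0]   # truly dependent candidate: best whole-symbol match
y2 = [1, 1, 1, 1]   # '01' symbols: agree with x on the MSB plane only
y3 = [2, 2, 2, 2]   # '10' symbols: agree with x on the LSB plane only

for name, y in (("y1", y1), ("y2", y2), ("y3", y3)):
    print(name, symbol_matches(x, y),
          bitplane_matches(x, y, 0), bitplane_matches(x, y, 1))
# y1 wins on whole symbols (3 matches), but the MSB plane alone picks y2
# (4 matches) and the LSB plane alone picks y3 (4 matches)
```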

symbol values must form a circular ordering, so that all values are in equivalent

positions. The distribution of a side information symbol that is statistically dependent

on its corresponding source symbol must be symmetric around the value of that source

symbol, so that any circular ordering is equivalent to the ordering reversed. Finally,

the symbol values must map to bit representations in such a way that, when the bits

of any subset of bit planes are flipped, the new circular ordering is isomorphic to the

original ordering. This isomorphism condition ensures that the symbol values retain

their relative positions after bit flipping.

The isomorphism condition holds for both binary and Gray mappings of 2-bit

symbols. Fig. 5.2(a) shows the binary mapping, and Fig. 5.2(b) to (d) show the

same mapping with least significant bit (LSB) plane flipped, most significant bit

(MSB) plane flipped and both bit planes flipped. All the resulting binary orderings,

depicted by the arrows, are isomorphic. The same is demonstrated for the 2-bit Gray

mapping in Fig. 5.2(e) to (h). But this property does not extend to 3-bit symbols.

Fig. 5.3(a) and (c) show the binary and Gray mappings and Fig. 5.3(b) and (d) show

the respective mappings with LSB plane flipped. In both cases, the new ordering

is not isomorphic to the original. In fact, it can be shown that the isomorphism

condition only holds for 2-bit symbols.
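The isomorphism condition can also be checked by brute force. In this sketch (`gray_encode`, `is_isomorphic` and `condition_holds` are our own helper names, not notation from the text), flipping a subset of bit planes XORs every codeword with a mask; the induced symbol permutation is isomorphic to the original circular ordering iff consecutive values around the circle differ by a constant +1 or −1 modulo 2^m.

```python
def gray_encode(l):
    return l ^ (l >> 1)

def gray_decode(g):
    l = 0
    while g:
        l ^= g
        g >>= 1
    return l

def is_isomorphic(seq):
    """True iff seq is a rotation, possibly reversed, of (0, 1, ..., n-1)."""
    n = len(seq)
    diffs = {(seq[(i + 1) % n] - seq[i]) % n for i in range(n)}
    return diffs == {1} or diffs == {n - 1}

def condition_holds(m, encode, decode):
    """Check every nonzero bit-plane flip mask for m-bit symbols."""
    n = 1 << m
    return all(is_isomorphic([decode(encode(l) ^ mask) for l in range(n)])
               for mask in range(1, n))

identity = lambda l: l
for m in (2, 3):
    print(m, condition_holds(m, identity, identity),        # binary mapping
          condition_holds(m, gray_encode, gray_decode))     # Gray mapping
# 2-bit: both mappings pass; 3-bit: both fail, as argued in the text
```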


Figure 5.2: Mappings of 2-bit symbols: (a) binary, (b) binary with LSB plane flipped, (c) binary with MSB plane flipped, (d) binary with both bit planes flipped, (e) Gray, (f) Gray with LSB plane flipped, (g) Gray with MSB plane flipped and (h) Gray with both bit planes flipped. The binary orderings in (a) to (d) are isomorphic and so are the Gray orderings in (e) to (h).

Figure 5.3: Mappings of 3-bit symbols: (a) binary, (b) binary with LSB plane flipped, (c) Gray and (d) Gray with LSB plane flipped. The binary orderings in (a) and (b) are not isomorphic, nor are the Gray orderings in (c) and (d).

Although these restrictive symmetry conditions are necessary for analysis of the

codec, they are not required for the codec itself. Hence, in the following section, we

extend the definition of the block-candidate model to multilevel symbols of 2 or more

bits.


5.2 Model for Multilevel Source and Side Information

5.2.1 Multilevel Extension of the Block-Candidate Model

The multilevel block-candidate model has much in common with its binary counter-

part defined in Section 4.2.1. The source X is a random vector of length n and the side

information Y is a random matrix of dimension n× c. They are composed of blocks

x[i] of length b and y[i] of dimension b × c, respectively, where b is a factor of n. The statistical dependence between X and Y is through the vector Z = (Z_1, Z_2, ..., Z_{n/b}) of hidden random variables, each of which is uniformly distributed over {1, 2, ..., c}.

In the multilevel extension, X consists of symbols of 2^m levels, uniformly distributed over {0, 1, ..., 2^m − 1}. Given Z_i = z_i, the candidate y[i, z_i] is statistically dependent on the block x[i] according to y[i, z_i] = x[i] + n[i] modulo 2^m, where the random vector n[i] has independent symbols equal to l ∈ {0, 1, ..., 2^m − 1} with probability ε_l. The symbols of all other candidates y[i, j ≠ z_i] in this block are uniformly distributed over {0, 1, ..., 2^m − 1} and independent of x[i].

For symmetry and convenience, we usually constrain the statistical dependence between x[i] and y[i, z_i] with a single parameter σ such that

$$\epsilon_l = \frac{\zeta_l}{\zeta_0 + \zeta_1 + \cdots + \zeta_{2^m-1}}, \qquad (5.1)$$

where

$$\zeta_l = \begin{cases} \exp\left(-\frac{l^2}{2\sigma^2}\right), & \text{if } 0 \le l < 2^{m-1}, \\[4pt] \exp\left(-\frac{(2^m-l)^2}{2\sigma^2}\right), & \text{if } 2^{m-1} \le l < 2^m, \end{cases} \qquad (5.2)$$

and define the entropy H(σ) = H(ε_0, ε_1, ..., ε_{2^m−1}) = −∑_{l=0}^{2^m−1} ε_l log₂ ε_l bit/symbol.
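The distribution (5.1)-(5.2) and the entropy H(σ) are straightforward to compute; a minimal sketch:

```python
import math

def epsilons(m, sigma):
    """(5.1)-(5.2): symbol-error distribution, a normalized Gaussian-like
    weight wrapped around 0 modulo 2^m."""
    n = 1 << m
    zeta = [math.exp(-l * l / (2 * sigma ** 2)) if l < n // 2
            else math.exp(-(n - l) ** 2 / (2 * sigma ** 2))
            for l in range(n)]
    total = sum(zeta)
    return [z / total for z in zeta]

def H(m, sigma):
    """Entropy H(sigma) = H(eps_0, ..., eps_{2^m - 1}) in bit/symbol."""
    return -sum(e * math.log2(e) for e in epsilons(m, sigma) if e > 0)

eps = epsilons(2, 1.0)
print([round(e, 4) for e in eps])  # symmetric around 0: eps_1 == eps_3
print(round(H(2, 1.0), 4))
```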

5.2.2 Slepian-Wolf Bound

The Slepian-Wolf bound for the multilevel block-candidate model is the conditional entropy rate H(X|Y), derived using arguments similar to those in Section 4.2.2.

The results (4.3) to (4.5) are the same with the exception that H(X|Y,Z) = H(σ),

instead of H(ε). Assuming the symmetry conditions of Section 5.1, we replace the

source symbols with 0 and thereby simplify H(Z|X,Y) to the expression in (4.6).


If we assume typicality in addition, the statistically dependent candidates contain ε_l b symbols of value l and all other candidates contain b/2^m symbols of value l, for all l ∈ {0, 1, ..., 2^m − 1}. We can thus approximate H(Z|X,Y) using (4.7) to (4.10) with different likelihood weights

$$w_{dep} = \prod_{l=0}^{2^m-1} \epsilon_l^{\,\epsilon_l b} \quad \text{and} \quad w_{indep} = \prod_{l=0}^{2^m-1} \epsilon_l^{\,b/2^m}.$$
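These weights are easy to evaluate numerically. A quick check with hypothetical parameter values (the ε below is roughly the m = 2, σ = 1 distribution from (5.1)-(5.2)) confirms that the dependent candidate receives the larger weight here:

```python
import math

def likelihood_weights(eps, b):
    """Typicality-based weights: w_dep = prod_l eps_l^(eps_l * b) and
    w_indep = prod_l eps_l^(b / 2^m), with 2^m = len(eps)."""
    levels = len(eps)
    w_dep = math.prod(e ** (e * b) for e in eps)
    w_indep = math.prod(e ** (b / levels) for e in eps)
    return w_dep, w_indep

eps = [0.4258, 0.2583, 0.0576, 0.2583]   # hypothetical m = 2 distribution
w_dep, w_indep = likelihood_weights(eps, b=64)
print(w_dep > w_indep)  # the dependent candidate gets the larger weight
```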

Fig. 5.4 plots the Slepian-Wolf bounds for the multilevel block-candidate model

with m = 2 as conditional entropy rates H(X|Y), in exact form for tractable combi-

nations of b and c and approximated under the typicality assumption for b = 64.

5.3 Multilevel Side-Information-Adaptive Codec

5.3.1 Whole Symbol Encoder

The multilevel encoder first maps the source X from n symbols of 2^m levels into n′ = mn bits, using an m-bit binary or Gray mapping. Then it codes the n′ bits

into two segments, like the encoder in Section 4.3.1. The doping bits are sampled

deterministically and regularly at a fixed rate Rfixed bit/symbol. The cumulative

syndrome bits are produced at a variable rate Rvariable bit/symbol by a rate-adaptive

LDPC code of code length n′ and encoded data increment size k, which is a factor

of n. In this way, R_variable = tk/n, where t is the code counter, is consistent with its definition in Chapters 3 and 4. As before, R_fixed is chosen in advance also to be a multiple of k/n, for convenience.

5.3.2 Whole Symbol Decoder

The multilevel decoder recovers the source X from different information about its bit representation and its symbol values. The doping bits (D_1, D_2, ..., D_{n′ R_fixed}) and the syndrome bits (S_1, S_2, ..., S_{n′ R_adaptive}), obtained from the cumulative syndrome bits so far received, pertain to the bit representation of X. In contrast, the multilevel side information Y contains information about the symbol values of X.

The decoder synthesizes all the bit and symbol information by applying the sum-

product algorithm on a factor graph structured like the one shown in Fig. 5.5. Like

the factor graph in Fig. 4.6, each source node represents a bit and is connected

to doping and syndrome nodes that bear information about that bit. Unlike the


Figure 5.4: Slepian-Wolf bounds for the multilevel block-candidate model with m = 2, shown as conditional entropy rates H(X|Y) for number of candidates c equal to (a) 2, (b) 4, (c) 8 and (d) 16. The exact form is shown for tractable values of block size b and the approximation (under the typicality assumption) is shown for b = 64. Note that the exact and approximate forms agree for the combination b = 64, c = 2.


Figure 5.5: Factor graph for multilevel decoder with parameters: number of bits per symbol m = 2, number of symbols n = 16, block size b = 4, number of candidates c = 4, R_fixed = 1/16 and R_adaptive = 1/2. The associated rate-adaptive LDPC code is regular with edge-perspective source and syndrome degree distributions λ_t(ω) = ω² and ρ_t(ω) = ω⁵, respectively, where the code counter t = (n′/k) R_adaptive.

factor graph in Fig. 4.6, there are also symbol nodes, each of which is connected

to the side information node that bears information about that symbol. Statistical

information is shared among each symbol node and its m respective source nodes

via a mapping node, which bears no information of its own. An edge incident to a

source node carries messages of the form p that represent the probability that the

source bit is 0. But an edge incident to a symbol node carries messages of the form

p = (p(0), p(1), ..., p(2^m − 1)) that represent the probabilities that the source symbol


Figure 5.6: Multilevel factor graph nodes. Input and output messages and symbol estimate (if applicable) of (a) symbol node, (b) mapping node and (c) side information node.

is 0, 1, ..., 2^m − 1, respectively. In addition, the source and symbol nodes produce

estimates of their bit and symbol values, respectively.

The computation rules of source nodes (unattached or attached to doping nodes)

and syndrome nodes are the same as described in Section 4.3.2. We now specify the

computation rules of symbol, mapping and side information nodes, with input and

output messages and source estimates as denoted in Fig. 5.6. Keep in mind that the

sum-product algorithm requires each output message to be a function of all input

messages except the one along the same edge as that output message.

Symbol Node

Fig. 5.6(a) shows a symbol node, necessarily of degree 2. There are 2^m outcomes for the symbol random variable: it is l with likelihood weight p_1^in(l) p_2^in(l), for all l ∈ {0, 1, ..., 2^m − 1}. Hence, the symbol estimate p^est = (p^est(0), p^est(1), ..., p^est(2^m − 1)) is given by

$$p^{est}(l) = \frac{p_1^{in}(l)\, p_2^{in}(l)}{p_1^{in}(0)\, p_2^{in}(0) + p_1^{in}(1)\, p_2^{in}(1) + \cdots + p_1^{in}(2^m-1)\, p_2^{in}(2^m-1)}. \qquad (5.3)$$


To calculate the output messages p_1^out and p_2^out, we ignore the input messages p_2^in and p_1^in, respectively. Doing so reduces (5.3) to

$$p_1^{out} = p_2^{in}, \qquad (5.4)$$

$$p_2^{out} = p_1^{in}. \qquad (5.5)$$

Mapping Node

Fig. 5.6(b) shows a mapping node, with one edge carrying multilevel probability messages p^in and p^out, and m edges carrying binary probability messages p_v^in and p_v^out for v ∈ {1, 2, ..., m}. The role of the mapping node is to marginalize the output message of the symbol or one of the bits based on all the other input symbol or bit messages, through the mapping used in the multilevel encoder, whether binary or Gray (or other). We represent the mapping by the function map(l, v), which returns the vth bit of the m-bit mapping of the symbol l. Then the output probability that the symbol is l is the product of the input probabilities of the vth bit being map(l, v) over all v ∈ {1, 2, ..., m},

$$p^{out}(l) = \prod_{v=1}^{m} \left( 1_{[map(l,v)=0]}\, p_v^{in} + 1_{[map(l,v)=1]}\, (1 - p_v^{in}) \right). \qquad (5.6)$$

To compute p_u^out, we first calculate likelihoods w_{u,l}^{map} of the symbol being l given the input symbol message and all the input bit messages except p_u^in, and then marginalize the likelihoods over l according to whether map(l, u) = 0, as follows:

$$w_{u,l}^{map} = p^{in}(l) \prod_{v \neq u} \left( 1_{[map(l,v)=0]}\, p_v^{in} + 1_{[map(l,v)=1]}\, (1 - p_v^{in}) \right), \qquad (5.7)$$

$$p_u^{out} = \frac{\sum_{l=0}^{2^m-1} 1_{[map(l,u)=0]}\, w_{u,l}^{map}}{\sum_{l=0}^{2^m-1} w_{u,l}^{map}}. \qquad (5.8)$$
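The mapping node rules (5.6) to (5.8) translate directly into code. This sketch assumes an MSB-first bit ordering for map(l, v) (the text does not pin down the convention) and supports both binary and Gray mappings:

```python
def map_bit(l, v, m, gray=False):
    """map(l, v): the v-th bit (v = 1..m, MSB first, an assumed convention)
    of the binary or Gray mapping of symbol l."""
    code = l ^ (l >> 1) if gray else l
    return (code >> (m - v)) & 1

def out_symbol(p_bits, m, gray=False):
    """(5.6): message toward the symbol node from the m input bit messages;
    p_bits[v-1] is the probability that bit v equals 0."""
    out = []
    for l in range(1 << m):
        w = 1.0
        for v in range(1, m + 1):
            pv = p_bits[v - 1]
            w *= pv if map_bit(l, v, m, gray) == 0 else 1.0 - pv
        out.append(w)
    return out

def out_bit(p_sym, p_bits, u, m, gray=False):
    """(5.7)-(5.8): message toward source bit u, using the input symbol
    message p_sym and all input bit messages except p_bits[u-1]."""
    w = []
    for l in range(1 << m):
        wl = p_sym[l]
        for v in range(1, m + 1):
            if v != u:
                pv = p_bits[v - 1]
                wl *= pv if map_bit(l, v, m, gray) == 0 else 1.0 - pv
        w.append(wl)
    num = sum(wl for l, wl in enumerate(w) if map_bit(l, u, m, gray) == 0)
    return num / sum(w)

# With uninformative bit messages and a symbol message favoring l = 2 ('10'),
# the MSB is probably 1, so the output Pr(MSB = 0) comes out low:
print(round(out_bit([0.1, 0.1, 0.7, 0.1], [0.5, 0.5], u=1, m=2), 3))  # 0.2
```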


Side Information Node

Fig. 5.6(c) shows a side information node of value y[i] consisting of b × c multilevel symbols, all of which are labeled in Fig. 5.6(c), omitting the block index i for notational simplicity. The derivation of an output message p_u^out extends the arguments in the derivation for the side information node in Section 4.3.2 from binary to multilevel. In the following, ⋆ denotes circular convolution:

$$p_v^{\text{noisy-in}} = p_v^{in} \star (\epsilon_0, \epsilon_1, \ldots, \epsilon_{2^m-1}), \qquad (5.9)$$

$$w_{u,j}^{si} = \prod_{v \neq u} \sum_{l=0}^{2^m-1} 1_{[Y_{v,j}=l]}\, p_v^{\text{noisy-in}}(l), \qquad (5.10)$$

$$p_u^{\text{clean-out}}(l) = \frac{\sum_{j=1}^{c} 1_{[Y_{u,j}=l]}\, w_{u,j}^{si}}{\sum_{j=1}^{c} w_{u,j}^{si}}, \qquad (5.11)$$

$$p_u^{out} = p_u^{\text{clean-out}} \star (\epsilon_0, \epsilon_1, \ldots, \epsilon_{2^m-1}). \qquad (5.12)$$
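The side information node rules (5.9) to (5.12) can be sketched as follows, with the circular convolution ⋆ implemented directly; the block, messages and noise distribution in the usage example are hypothetical:

```python
def circ_conv(p, q):
    """Circular convolution of two distributions over {0, ..., 2^m - 1}."""
    n = len(p)
    return [sum(p[k] * q[(l - k) % n] for k in range(n)) for l in range(n)]

def side_info_out(p_in, Y, u, eps):
    """(5.9)-(5.12): output message toward symbol u (0-indexed here).
    p_in[v] is the incoming symbol message for row v; Y is the b x c block."""
    b, c = len(Y), len(Y[0])
    noisy = [circ_conv(p, eps) for p in p_in]                          # (5.9)
    w = [1.0] * c
    for j in range(c):
        for v in range(b):
            if v != u:
                w[j] *= noisy[v][Y[v][j]]                              # (5.10)
    n = len(eps)
    clean = [sum(w[j] for j in range(c) if Y[u][j] == l) for l in range(n)]
    total = sum(clean)
    clean = [x / total for x in clean]                                 # (5.11)
    return circ_conv(clean, eps)                                       # (5.12)

# Hypothetical m = 2 example with b = 2 rows and c = 2 candidates; the first
# candidate agrees with the near-certain all-zero incoming messages.
eps = [0.7, 0.12, 0.06, 0.12]                 # symmetric noise distribution
p_in = [[0.97, 0.01, 0.01, 0.01]] * 2
Y = [[0, 3], [0, 1]]
out = side_info_out(p_in, Y, u=0, eps=eps)
print(out.index(max(out)))  # the output message favors symbol value 0
```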

5.4 Analysis of Multilevel Side Information Adaptation

Using a similar process to that of Section 4.4, we determine the convergence of the

multilevel decoder for different settings of symbol-to-bit mapping and doping rate

Rfixed at the multilevel encoder. Note that our analysis requires the symmetry condi-

tions of Section 5.1 to hold.

5.4.1 Factor Graph Transformation

We transform the multilevel factor graph shown in Fig. 5.5 into a graph equivalent in

terms of convergence, shown in Fig. 5.7, using similar manipulations to the ones in

Section 4.4.1. As before, the candidates in each side information block are reordered

so that the statistically dependent one is in the first position y[i, 1]. We then subtract

from each side information candidate y[i, j] the corresponding source block x[i] modulo 2^m, and set all the source symbols and bits, syndrome bits and doping bits to

0. The side information nodes therefore have the first candidate y[i, 1] statistically


Figure 5.7: Transformed multilevel factor graph equivalent to that of Fig. 5.5 in terms of convergence. After doping, the edge-perspective source and syndrome degree distributions of the associated rate-adaptive LDPC code are λ*_t(ω) = ω² and ρ*_t(ω) = (2/45)ω³ + (2/9)ω⁴ + (11/15)ω⁵. The edge-perspective side information and mapping degree distributions are β*_t(ω) = ω³ and μ*_t(ω) = 1/15 + (14/15)ω.

dependent with respect to 0 and all other candidates independent of 0. Finally, we

remove the doping nodes, their attached source nodes and the edges connected to

those source nodes from the graph. To compensate for the removed edges, the map-

ping nodes must substitute the missing messages with the value 1. But, just as in

Section 4.4.1, no change is required to the syndrome node decoding rule.


5.4.2 Derivation of Degree Distributions

The multilevel factor graphs before and after transformation are each characterized

by four types of degree distribution in both edge- and node-perspective. The source

degree distributions λt(ω), Lt(ω), λ∗t (ω) and L∗t (ω) and the syndrome degree dis-

tributions ρt(ω), Rt(ω), ρ∗t (ω) and R∗t (ω) are defined and derived identically as in

Section 4.4.2. The side information degree distributions β_t(ω), B_t(ω), β*_t(ω) and B*_t(ω) have the same definitions as before, but different derivations, because the multilevel side information nodes occupy a different position in the factor graph than their binary counterparts.

We define mapping degree distributions µt(ω), Mt(ω), µ∗t (ω) and M∗t (ω) to be the

degree distributions of the mapping nodes, counting only the mapping-source edges,

not the mapping-symbol edges.

Mapping Degree Distributions

Since the mapping nodes inhabit the same position in the multilevel factor graphs as the side information nodes do in the binary factor graphs, their degree distributions have analogous forms. Before transformation, the edge- and node-perspective mapping degree distributions are, respectively,

$$\mu_t(\omega) = \omega^{m-1}, \qquad (5.13)$$

$$M_t(\omega) = \omega^{m}, \qquad (5.14)$$

because the number of mapping-source edges per mapping node is m. After transformation, the node- and edge-perspective mapping degree distributions follow (4.24) and (4.25), as

$$M_t^*(\omega) = (1-A)\,\omega^{m^*} + A\,\omega^{m^*+1}, \qquad (5.15)$$

$$\mu_t^*(\omega) = \frac{(1-A)\,m^*}{m^* + A}\,\omega^{m^*-1} + \frac{A\,(m^*+1)}{m^* + A}\,\omega^{m^*}, \qquad (5.16)$$

where m* = ⌊m(1 − R_fixed)⌋ and A = m(1 − R_fixed) − m*. We assume a low doping rate R_fixed < (m − 1)/m, so that m* ≥ 1 and no mapping nodes are removed during the factor graph transformation.
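Equations (5.15) and (5.16) are easy to evaluate. As a sanity check, the sketch below reproduces the edge-perspective mapping degree distribution μ*_t(ω) = 1/15 + (14/15)ω quoted in the caption of Fig. 5.7 (m = 2, R_fixed = 1/16):

```python
import math

def mapping_degree_dists(m, R_fixed):
    """Node-perspective M*_t and edge-perspective mu*_t mapping degree
    distributions after transformation, per (5.15)-(5.16), returned as
    {exponent of omega: coefficient} dicts."""
    x = m * (1 - R_fixed)
    m_star = math.floor(x)     # m* = floor(m(1 - R_fixed))
    A = x - m_star
    M = {m_star: 1 - A, m_star + 1: A}                       # (5.15)
    mu = {m_star - 1: (1 - A) * m_star / (m_star + A),       # (5.16)
          m_star: A * (m_star + 1) / (m_star + A)}
    return M, mu

# Parameters of Fig. 5.5 / Fig. 5.7: m = 2, R_fixed = 1/16
M, mu = mapping_degree_dists(m=2, R_fixed=1/16)
print(M)                                        # {1: 0.125, 2: 0.875}
print({d: round(c, 4) for d, c in mu.items()})  # {0: 0.0667, 1: 0.9333}
```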


Figure 5.8: Density evolution for multilevel decoder. The five stochastic nodes are representatives of the side information, symbol, mapping, source and syndrome nodes, respectively, of a transformed multilevel factor graph, like the one in Fig. 5.7. The quantities inside the nodes are the distributions of the symbols or bits at those positions. Beside the nodes are written the expected edge-perspective degree distributions of the transformed multilevel factor graph. The arrow labels are densities of messages passed in the transformed factor graph, and Q_source is the density of source bit estimates.

Side Information Degree Distributions

The side information degree distributions are unchanged by transformation because

each side information node retains all b of its mapping node neighbors. So,

$$\beta_t^*(\omega) = \beta_t(\omega) = \omega^{b-1}, \qquad (5.17)$$

$$B_t^*(\omega) = B_t(\omega) = \omega^{b}. \qquad (5.18)$$

5.4.3 Monte Carlo Simulation of Density Evolution

The distributions of messages passed among classes of nodes are now represented as

message densities. The densities of the binary messages to or from source nodes are

like the ones in Section 4.4.3, but the densities of multilevel messages to or from

symbol nodes are multidimensional. The densities are labeled in the schematic in

Fig. 5.8 along with five stochastic nodes representing side information, symbol, map-

ping, source and syndrome nodes. Inside the nodes are written probability distri-

butions for the symbols or bits at those positions. The symbol distributions E, U

and Δ stand for (ε_0, ε_1, ..., ε_{2^m−1}), (1/2^m, 1/2^m, ..., 1/2^m) and (1, 0, ..., 0), respectively, over symbol values {0, 1, ..., 2^m − 1}. The binary distribution Δ₂ is (1, 0) over the

values {0, 1}. Beside the stochastic nodes are written the edge-perspective degree

distributions of the transformed factor graph.


Density evolution stochastically updates each density for a fixed number of itera-

tions. The sum-product algorithm is deemed to converge for the multilevel decoder

if and only if the source bit estimate density Qsource, after thresholding, converges to

the source bit value 0.

In our Monte Carlo simulation, we represent the densities of the binary and multilevel messages differently. A density of binary messages is a set of samples q, each of which is a probability that its associated source bit is 0, just as in Section 4.4.3. At initialization, all samples q are set to 1/2. A density of multilevel messages is a set of multidimensional samples q = (q(0), q(1), ..., q(2^m − 1)), each of which is a probability distribution of its associated source symbol over values {0, 1, ..., 2^m − 1}. At initialization, all samples q are set to (1/2^m, 1/2^m, ..., 1/2^m).

Several of the update rules for the densities of binary messages are similar to

those in Section 4.4.3. The rule for the syndrome-to-source message density Qsyn-so

is identical. The rules for the source bit estimate density Qsource and the source-to-

syndrome message density Qso-syn are the same as their counterparts in Section 4.4.3

with the replacement of the side-information-to-source message density Qsi-so with the

mapping-to-source message density Qmap-so. Likewise, the source-to-mapping message

density Qso-map is the same as the source-to-side-information message density Qso-si

in Section 4.4.3 with the same substitution. The update rules for the other densities

are now described.

Symbol-to-Side-Information and Symbol-to-Mapping Message Densities

Just as the symbol nodes relay messages without modification between the side in-

formation and mapping nodes in (5.4) and (5.5), the stochastic symbol node relays

densities between the side information and mapping stochastic nodes.

Qsym-si = Qmap-sym (5.19)

Qsym-map = Qsi-sym (5.20)

Side-Information-to-Symbol Message Density

To compute each updated sample q^out of Q_si-sym, let a set {q_v^in}_{v=1}^{b−1} consist of b − 1 samples drawn randomly from Q_sym-si. Create also a realization of a b × c block of


side information from the joint distribution induced by the side information stochastic

node. That is, each element Y_{v,j} is independently drawn from E if j = 1 or from U if j ≠ 1. Generate sets {q_v^noisy-in}_{v=1}^{b−1} and {w_j^si}_{j=1}^{c}, before finally updating sample q^out, following (5.9) to (5.12):

$$q_v^{\text{noisy-in}} = q_v^{in} \star (\epsilon_0, \epsilon_1, \ldots, \epsilon_{2^m-1}), \qquad (5.21)$$

$$w_j^{si} = \prod_{v=1}^{b-1} \sum_{l=0}^{2^m-1} 1_{[Y_{v,j}=l]}\, q_v^{\text{noisy-in}}(l), \qquad (5.22)$$

$$q^{\text{clean-out}}(l) = \frac{\sum_{j=1}^{c} 1_{[Y_{b,j}=l]}\, w_j^{si}}{\sum_{j=1}^{c} w_j^{si}}, \qquad (5.23)$$

$$q^{out} = q^{\text{clean-out}} \star (\epsilon_0, \epsilon_1, \ldots, \epsilon_{2^m-1}). \qquad (5.24)$$

Mapping-to-Symbol Message Density

To compute each updated sample q^out of Q_map-sym, let a set {q_v^in}_{v=1}^{m} consist of δ samples drawn randomly from Q_so-map and m − δ samples equal to 1. The random degree δ is drawn equal to d with probability equal to the coefficient of ω^d in the node-perspective M*_t(ω), since there is one actual output message per node. The samples set to 1 substitute for the messages on edges removed during factor graph transformation due to doping. Then, according to (5.6),

$$q^{out}(l) = \prod_{v=1}^{m} \left( 1_{[map(l,v)=0]}\, q_v^{in} + 1_{[map(l,v)=1]}\, (1 - q_v^{in}) \right). \qquad (5.25)$$

Mapping-to-Source Message Density

To compute each updated sample q^out of Q_map-so, let a set {q_v^in}_{v=1}^{m−1} consist of δ − 1 samples drawn randomly from Q_so-map and m − δ samples equal to 1. The random degree δ is drawn equal to d with probability equal to the coefficient of ω^{d−1} in the edge-perspective μ*_t(ω), since there is one actual output message per edge. Furthermore, let q^in be 1 sample drawn randomly from Q_sym-map and let u be an integer drawn


randomly from {1, 2, ..., m}. Generate a set {w_l^map}_{l=0}^{2^m−1}, before updating sample q^out, following (5.7) and (5.8):

$$w_l^{map} = q^{in}(l) \prod_{v \neq u} \left( 1_{[map(l,v)=0]}\, q_v^{in} + 1_{[map(l,v)=1]}\, (1 - q_v^{in}) \right), \qquad (5.26)$$

$$q^{out} = \frac{\sum_{l=0}^{2^m-1} 1_{[map(l,u)=0]}\, w_l^{map}}{\sum_{l=0}^{2^m-1} w_l^{map}}. \qquad (5.27)$$

5.5 Multilevel Coding Experimental Results

We evaluate the coding performance of the multilevel codec under both binary and

Gray symbol-to-bit mappings. In order to compare the performance to the analysis

using density evolution, our experimental settings satisfy the symmetry conditions in

Section 5.1; in particular, each symbol comprises m = 2 bits.1 Our key finding in this

section is that both binary and Gray mappings offer coding performance close to the

Slepian-Wolf bound and are well modeled by density evolution, but that the codec

using Gray mapping usually requires a lower doping rate than the one using binary

mapping.

The source, in all our experiments, consists of n = 2048 symbols so that the

multilevel codec uses rate-adaptive LDPC codes of length n′ = mn = 4096 bits,

encoded data increment size k = 32 bits, and regular source degree distribution

λ_reg(ω) = ω². Hence, R_adaptive ∈ {1/64, 2/64, ..., 2}. For convenience, we allow R_fixed ∈ {0, 1/64, 2/64, ..., 1/4}. The statistical dependence between source blocks and their matching side information candidates is parameterized by σ as in (5.1) and (5.2). The Monte Carlo simulation of density evolution uses up to 2^14 samples.

Fig. 5.9 to Fig. 5.12 fix the values of block size and number of candidates (b, c)

to the combinations (8, 2), (32, 2), (64, 4) and (64, 16), respectively. In each figure,

for both binary and Gray mappings, we determine the optimal doping rates Rfixed at

different values of H(σ), and plot the multilevel codec’s empirical performance and

performance modeled by density evolution. The corresponding Slepian-Wolf bounds

1For experiments using larger values of m, refer to the results in Chapter 6.


are computed exactly in Fig. 5.9 and Fig. 5.10 and approximately in Fig. 5.11 and

Fig. 5.12 using the derivations in Section 5.2.2.

These figures demonstrate that, for m = 2 and regardless of the choice of mapping

and the settings of b and c, the performance of the multilevel codec with optimal

doping is close to the Slepian-Wolf bound and is modeled well using density evolution.

Whether the binary or Gray mapping provides better compression depends on the

values of b, c and H(σ), but the Gray mapping usually uses a lower optimal doping

rate.

5.6 Summary

This chapter extends the algorithms and analysis of side-information-adaptive coding

from binary to multilevel source and side information. There are two new chal-

lenges: the need for whole symbol coding and the requirement that certain symmetry

conditions hold for density evolution to work. We also extend the block-candidate

statistical model to the multilevel case and derive the Slepian-Wolf bound. The whole

symbol encoder converts the multilevel source into bits using either a binary or Gray

mapping and then applies the binary side-information-adaptive encoder. The whole

symbol decoder recovers the multilevel symbols using the factor graph of the binary

side-information-adaptive decoder augmented with symbol and mapping nodes. We

analyze the multilevel extension by transforming the augmented factor graph into

one equivalent in terms of convergence, deriving its degree distributions under vary-

ing rates of doping, and applying a Monte Carlo simulation of density evolution. Our

experimental results under a variety of settings of the block-candidate model show

that both binary and Gray mappings offer coding performance close to the Slepian-

Wolf bound and well modeled by density evolution analysis, but that Gray mapping

usually requires a lower doping rate than binary mapping.


Figure 5.9: Coding performance of multilevel codec with optimal doping rate R_fixed, m = 2, b = 8, c = 2 and either (a) binary or (b) Gray mapping.

Figure 5.10: Coding performance of multilevel codec with optimal doping rate R_fixed, m = 2, b = 32, c = 2 and either (a) binary or (b) Gray mapping.


[Figure: two panels plotting rate (bit/symbol) versus H(σ) (bit/symbol), each showing the multilevel codec, DE model, SW bound, H(σ) and doping rate, for (a) binary and (b) Gray mapping.]

Figure 5.11: Coding performance of multilevel codec with optimal doping rate Rfixed, m = 2, b = 64, c = 4 and either (a) binary or (b) Gray mapping.

[Figure: two panels plotting rate (bit/symbol) versus H(σ) (bit/symbol), each showing the multilevel codec, DE model, SW bound, H(σ) and doping rate, for (a) binary and (b) Gray mapping.]

Figure 5.12: Coding performance of multilevel codec with optimal doping rate Rfixed, m = 2, b = 64, c = 16 and either (a) binary or (b) Gray mapping.


Chapter 6

Applications

This chapter applies the distributed source coding algorithms developed in this dissertation to reduced-reference video quality monitoring, multiview coding and low-complexity video encoding. These topics were introduced and their literature was surveyed in Section 2.3. All three applications make use of rate adaptation and side information adaptation with multilevel signals, but our experiments for each application showcase different aspects of performance.

In Section 6.1 on reduced-reference video quality monitoring, we apply density evolution analysis to design distributed source coding algorithms for the coding of video projection coefficients. Section 6.2 on multiview coding demonstrates that distributed source coding outperforms state-of-the-art multiview image coding based on separate encoding and decoding. In Section 6.3 on low-complexity video coding, we show that distributed video coding with side information adaptation for motion outperforms coding without side information adaptation whenever the video contains motion.

6.1 Reduced-Reference Video Quality Monitoring

End-to-end quality monitoring is an important service in the delivery of video over Internet Protocol (IP) networks. Fig. 6.1 depicts a video transmission system with an attached system for reduced-reference video quality monitoring and channel tracking. A server sends c channels of encoded video to an intermediary. The intermediary transcodes a single trajectory with channel changes through the video and forwards it to a display device.¹ We are interested in the attached monitoring and tracking system. The device encodes projection coefficients X of the video trajectory and sends them back to the server. A feedback channel from server to device is available for rate control. The server decodes and compares the coefficients with projection coefficients Y of the c channels of video, in order to estimate the transcoded video quality and track its channel change.

[Figure: block diagram: the server's Video Encoder sends c video channels to the intermediary's Video Transcoder, which forwards one video trajectory to the device's Video Decoder; J.240 Projection blocks produce Y at the server and X at the device; a Projection Encoder at the device sends encoded coefficients to the Projection Decoder and PSNR Estimator at the server, with rate control feedback from server to device.]

Figure 6.1: Video transmission with quality monitoring and channel tracking.

This section first describes the projection and peak signal-to-noise ratio (PSNR) estimation technique of the ITU-T J.240 standard [2] for reduced-reference video quality monitoring. We suggest an improved maximum likelihood PSNR estimation technique, applicable when the PSNR estimator is located at the server. The main contribution of this section is the design by density evolution analysis of distributed source coding systems for the projection coefficients. These coding systems significantly reduce the bit rate needed for quality monitoring and channel tracking.

¹An example of such an intermediary is the Slingbox made by Sling Media Inc.


[Figure: block diagram of the projection: multiplication by Maximum Length Sequence 1, 2D WHT, multiplication by Maximum Length Sequence 2, 2D IWHT, downsampling.]

Figure 6.2: Dimension reduction projection of ITU-T J.240 standard.

6.1.1 ITU-T J.240 Standard

The J.240 standard for reduced-reference video quality monitoring specifies both a dimension reduction projection for video signals and a function that compares two sets of coefficients to estimate PSNR [2].

The projection partitions the luminance channel of the video into blocks, sizes of 8 × 8 or 16 × 16 pixels being typical. From each block, a single coefficient is obtained by the process shown in Fig. 6.2. The block is multiplied by a maximum length sequence [76], transformed using the 2D Walsh-Hadamard transform (WHT), multiplied by another maximum length sequence, and inverse transformed using the 2D inverse Walsh-Hadamard transform (IWHT). Finally, one coefficient is sampled from the block.
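The per-block process can be sketched as follows. This is a hedged illustration: the random ±1 masks stand in for the maximum length sequences, and sampling position (0, 0) is an arbitrary choice; the standard fixes both differently.

```python
# Sketch of the J.240 dimension reduction projection for one 8x8 block:
# mask, 2D WHT, mask again, 2D IWHT, then sample one coefficient.
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def j240_coefficient(block, mask1, mask2):
    n = block.shape[0]
    H = hadamard(n)
    t = H @ (block * mask1) @ H / n   # 2D WHT (with 1/n normalization)
    t = H @ (t * mask2) @ H / n       # 2D IWHT after the second mask
    return t[0, 0]                    # downsample: keep one coefficient

rng = np.random.default_rng(0)
block = rng.integers(0, 256, (8, 8)).astype(float)
mask1 = rng.choice([-1.0, 1.0], size=(8, 8))
mask2 = rng.choice([-1.0, 1.0], size=(8, 8))
coeff = j240_coefficient(block, mask1, mask2)
```

With all-ones masks the WHT and IWHT cancel, so the sampled coefficient is just the original pixel, which is a useful sanity check on the transform pair.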

Suppose that χ and ϕ, each of length n, denote projection coefficient vectors of a transcoded trajectory and its matching encoded counterpart, respectively. The J.240 standard assumes that both vectors are uniformly quantized with step size Q into χ̄ and ϕ̄. The PSNR of the transcoded video with respect to the original encoded video is estimated as

$$\mathrm{MSE}_{\mathrm{J.240}} = \frac{Q^2}{n}\sum_{i=1}^{n}\left(\bar{\chi}_i - \bar{\varphi}_i\right)^2 \qquad (6.1)$$

$$\mathrm{PSNR}_{\mathrm{J.240}} = 10\log_{10}\frac{255^2}{\mathrm{MSE}_{\mathrm{J.240}}}. \qquad (6.2)$$
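A direct transcription of (6.1) and (6.2), with the quantized vectors taken as integer quantization indices (a minimal sketch; the variable names are ours):

```python
# J.240 PSNR estimation from quantized projection coefficient indices.
import numpy as np

def psnr_j240(chi_bar, phi_bar, Q):
    """Estimate PSNR from index vectors quantized with step size Q."""
    mse = (Q ** 2 / len(chi_bar)) * np.sum((chi_bar - phi_bar) ** 2.0)
    return 10.0 * np.log10(255.0 ** 2 / mse)
```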


6.1.2 Maximum Likelihood PSNR Estimation

When the PSNR estimator is located at the server (as in Fig. 6.1), it has access to the unquantized coefficient vector Y. We suggest the following maximum likelihood (ML) estimation formulas, which support nonuniform quantization of X, in [113].

$$\mathrm{MSE}_{\mathrm{ML}} = \mathbb{E}\left[\frac{1}{n}\sum_{i=1}^{n}\left(\chi_i - \varphi_i\right)^2 \,\middle|\, \bar{\chi}_i, \varphi_i\right] \qquad (6.3)$$

$$\mathrm{PSNR}_{\mathrm{ML}} = 10\log_{10}\frac{255^2}{\mathrm{MSE}_{\mathrm{ML}}} \qquad (6.4)$$
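The conditional expectation inside (6.3) can be evaluated numerically once a conditional density of χi given ϕi is fixed. The sketch below assumes, purely for illustration, that χi is Gaussian about ϕi and restricted to its quantization bin [lo, hi); the actual density would come from the system's measured statistics.

```python
# Per-coefficient term of the ML estimate: E[(chi - phi)^2 | chi in bin, phi],
# by numerical integration under an assumed Gaussian conditional density.
import numpy as np

def conditional_sq_error(phi, lo, hi, sigma, grid=2001):
    x = np.linspace(lo, hi, grid)
    w = np.exp(-0.5 * ((x - phi) / sigma) ** 2)    # unnormalized density on the bin
    return np.sum(w * (x - phi) ** 2) / np.sum(w)  # conditional mean square error
```

Averaging this term over all n coefficients and substituting into (6.4) yields the ML PSNR estimate.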

We now compare the performance of J.240 and maximum likelihood PSNR estimation for c = 8 channels of video (Foreman, Carphone, Mobile, Mother and Daughter, Table, News, Coastguard, Container) at resolution 176 × 144 and frame rate 30 Hz. The first 256 frames of each sequence are encoded at the server using H.264/AVC Baseline profile with quantization parameter set to 16 [217]. The intermediary transcodes each trajectory with JPEG using scaled versions of the quantization matrix specified in Annex K of the standard [1], with scaling factors 0.5, 1 and 2, respectively. Fig. 6.3 shows encoded and transcoded versions of frame 0 of the Foreman sequence. The difference in quality of the transcoded images is noticeable both visually and through their PSNR values of 36.9, 35.2 and 33.6 dB, respectively.

The number of J.240 coefficients per frame, denoted b, is either 99 or 396, depending on the block size 16 × 16 or 8 × 8, respectively. The coefficients are quantized to m bits, uniformly for estimation by the J.240 standard and nonuniformly for maximum likelihood estimation. The nonuniform quantization is designed to populate the quantization bins approximately equally with samples from the entire data set.
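Such an equal-population quantizer can be designed directly from the data; a minimal sketch, with 2^m bins whose interior edges sit at empirical quantiles:

```python
# Nonuniform quantizer design: place bin edges at empirical quantiles so
# each of the 2^m bins holds roughly the same number of samples.
import numpy as np

def quantile_edges(samples, m):
    """Interior bin edges for a 2^m-level equal-population quantizer."""
    probs = np.linspace(0.0, 1.0, 2 ** m + 1)[1:-1]
    return np.quantile(samples, probs)

rng = np.random.default_rng(1)
data = rng.normal(0.0, 1.0, 10000)
edges = quantile_edges(data, 2)          # 4 bins -> 3 interior edges
indices = np.searchsorted(edges, data)   # quantization index per sample
counts = np.bincount(indices, minlength=4)
```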

Fig. 6.4(a) and (b) plot the mean absolute PSNR estimation error versus the PSNR estimation group of pictures (GOP) size for b = 99 and 396, respectively. The PSNR estimation GOP size is the number of frames over which the estimation is performed. Maximum likelihood estimation of PSNR at just m = 1 or 2 bits outperforms J.240 estimation at m = 7 bits and, in some cases, even m = 9 bits. Observe also that maximum likelihood estimation improves significantly as the number of coefficients in the PSNR estimation GOP grows. Since low-resolution video supplies fewer coefficients per frame, this suggests that video quality monitoring for video at low resolution (such as 176 × 144) is more challenging than at higher resolution.


[Figure: four images of frame 0 of Foreman.]

Figure 6.3: Frame 0 of Foreman sequence at resolution 176 × 144, (a) encoded using H.264/AVC, and (b)–(d) transcoded using JPEG with quantization matrix scaling factors 0.5, 1 and 2, respectively, to PSNR of 36.9, 35.2 and 33.6 dB.

6.1.3 Distributed Source Coding of J.240 Coefficients

The J.240 standard takes for granted that the projection encoder in Fig. 6.1 uses simple fixed length coding. Conventional variable length coding produces limited gains because the coefficients X are approximately independent and identically distributed Gaussian random variables, due to the dimension reduction projection in Fig. 6.2. In contrast, distributed source coding of X offers better compression by exploiting the statistically related coefficients Y at the projection decoder. This side information adheres to the block-candidate model with block size equal to b, the number of J.240 coefficients per frame, and number of candidates equal to c, the number of channels. In the remainder of this section, we design and evaluate six adaptive distributed source coding systems for this task.

[Figure: two panels plotting mean absolute PSNR error (dB) versus PSNR estimation GOP size, with curves for ML estimation with m = 1 and 2 and the J.240 standard with (a) m = 7, 8, 9 and (b) m = 8, 9, 10.]

Figure 6.4: PSNR estimation error using the J.240 standard and maximum likelihood (ML) estimation with m bit quantization for number of coefficients per frame (a) b = 99 and (b) b = 396.

Table 6.1 presents settings for these codecs. The independent settings are m = 1 or 2 (with binary or Gray symbol-to-bit mapping for m = 2), b = 99 or 396 and c = 8. For each codec, the decoder assumes a value either ε or σ (depending on whether m = 1 or 2) that parameterizes the distribution between the J.240 coefficients of a transcoded frame and those of the respective encoded frame at the server. We choose ε and σ giving entropies H(ε) = 0.2 and H(σ) = 0.4 bit/coefficient, because these entropies exceed about 90% of the empirical conditional entropy rates. We also design the doping rates Rfixed by applying the analysis algorithms developed in Sections 4.4 and 5.4 as follows. The optimal coding and doping rates, as predicted by density evolution under the independent settings, are plotted in Fig. 6.5 for the entire ranges 0 ≤ H(ε) ≤ 1 or 0 ≤ H(σ) ≤ 2. The optimal doping rates are chosen from {0, m/132, 2m/132, ..., 16m/132}. For each codec, we choose the value of Rfixed as the maximum optimal doping rate for H(ε) ≤ 0.2 or H(σ) ≤ 0.4. In these ranges, observe that the coding rates are lower for binary mapping than Gray mapping, when m = 2.

    m   Mapping   b     c   Decoder ε   Decoder σ   Rfixed
    1   —         99    8   0.0311      —           0.0303
    1   —         396   8   0.0311      —           0.0152
    2   binary    99    8   —           0.3835      0.0455
    2   Gray      99    8   —           0.3835      0.0303
    2   binary    396   8   —           0.3835      0.0152
    2   Gray      396   8   —           0.3835      0.0152

Table 6.1: Codec settings for distributed source coding of J.240 coefficients.

All six systems are tested using the same video sequences, coding, transcoding and quantization that generate the maximum likelihood PSNR estimation curves with m = 1 and 2 and b = 99 and 396 in Fig. 6.4. Although the codecs are designed for c = 8 channels of video, we evaluate their performance with c = 1, 2, 4 and 8. In each trial, the coefficients X are obtained from a transcoded random trajectory through c of the channels. The coefficients Y in block-candidate form are obtained from the versions of the same c channels, encoded at the server. The codecs process 8 frames of coefficients at a time, using rate-adaptive LDPC codes with regular source degree distribution nreg(ω) = ω², length n′ = nm = 8bm bits and data increment size k = 8bm/132 bits. Consequently, Radaptive ∈ {m/132, 2m/132, ..., m}. Fig. 6.6 compares the average coding rates in bit/coefficient for the six systems when c = 1, 2, 4 and 8. Even though the codecs were designed for c = 8, they perform better or as well when c is less than 8. The coding rates for c = 8 agree with those predicted by density evolution. In particular, coding is more efficient for m = 2 bit binary mapping than m = 2 bit Gray mapping.
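The code-length arithmetic above can be checked with a few lines (an illustrative helper, not part of the codec):

```python
# Rate-adaptive LDPC parameters for 8 frames of b coefficients at m bits:
# length n' = 8bm, increment k = 8bm/132, and the grid of adaptive rates.

def ldpc_params(b, m):
    n_prime = 8 * b * m                                 # code length in bits
    k = n_prime // 132                                  # data increment size in bits
    r_adaptive = [j * m / 132 for j in range(1, 133)]   # m/132, 2m/132, ..., m
    return n_prime, k, r_adaptive

n_prime, k, r = ldpc_params(99, 2)   # b = 99, m = 2 gives n' = 1584, k = 12
```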

[Figure: four panels plotting rate versus conditional entropy: (a) and (b) rate (bit/bit) versus H(ε) (bit/bit) showing the DE model, SW bound, H(ε) and doping rate; (c) and (d) rate (bit/symbol) versus H(σ) (bit/symbol) showing binary and Gray DE models, SW bound, H(σ) and binary and Gray doping rates.]

Figure 6.5: Analysis of distributed source coding of J.240 coefficients using density evolution (DE) with optimal doping rate Rfixed, under the settings (a) m = 1, b = 99, c = 8, (b) m = 1, b = 396, c = 8, (c) m = 2, b = 99, c = 8 with binary and Gray mapping and (d) m = 2, b = 396, c = 8 with binary and Gray mapping.

[Figure: two panels of bar charts of coding rate (bit/coefficient) for the m = 1, binary m = 2 and Gray m = 2 codecs at c = 1, 2, 4 and 8.]

Figure 6.6: Coding rate for distributed source coding of J.240 coefficients, with settings as in Table 6.1, including number of J.240 coefficients per frame b set to (a) 99 and (b) 396.

Fig. 6.7 plots the mean absolute PSNR estimation error versus the estimation bit rate for different combinations of estimation and coding techniques, all with c = 8. The PSNR estimates are computed over a PSNR estimation GOP size of 256 frames. The estimation bit rate is the transmission rate from projection encoder to projection decoder in kbit/s, assuming the video channels have a frame rate of 30 Hz. In all trials, the transcoded trajectory is correctly tracked and the PSNR estimation errors are computed with respect to the matching trajectory at the server. The curve for J.240 estimation and fixed length coding is obtained by varying the number of quantization bits m from 1 to 10. The performance of maximum likelihood estimation and fixed length coding is shown only for m = 1 and 2. The combination of maximum likelihood estimation and distributed source coding is also evaluated for m = 1 and 2, the latter with binary symbol-to-bit mapping. The best combination that achieves PSNR estimation error around 0.5 dB requires an estimation rate of only 1.27 kbit/s; it uses maximum likelihood estimation and distributed source coding with b = 99, m = 2 and binary mapping. This performance is almost 20 times better than that of J.240 estimation and fixed length coding, which requires an estimation rate of 23.7 kbit/s for about 0.5 dB PSNR estimation error.
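The conversion between the coding rate in bit/coefficient and the estimation bit rate in kbit/s is simply rate × b × frame rate / 1000. As a check (our arithmetic, using the figures quoted above), fixed length coding at m = 8 bits with b = 99 gives 8 × 99 × 30 / 1000 = 23.76 ≈ 23.7 kbit/s:

```python
# Estimation bit rate in kbit/s from a per-coefficient coding rate,
# for b coefficients per frame at 30 frames per second.

def estimation_rate_kbps(rate_bit_per_coeff, b, fps=30):
    return rate_bit_per_coeff * b * fps / 1000.0

flc_rate = estimation_rate_kbps(8, 99)   # fixed length coding with m = 8
```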


[Figure: two panels plotting mean absolute PSNR error (dB) versus estimation rate (kbit/s), comparing J.240 estimation with FLC, ML estimation with FLC and ML estimation with DSC.]

Figure 6.7: Mean absolute PSNR estimation error versus estimation bit rate for different estimation and coding techniques, with number of J.240 coefficients per frame b set to (a) 99 and (b) 396. PSNR estimation is either by J.240 or maximum likelihood (ML) and coding is either fixed length coding (FLC) or distributed source coding (DSC).

6.2 Multiview Coding

Arrays of tens or hundreds of cameras capture correspondingly large amounts of raw data. Distributed source coding of these views at separate encoders, one for each camera, reduces the raw data without bringing all of it to a single encoder. The separately encoded bit streams are sent to a joint decoder, which reconstructs the views. Section 4.1 introduces a toy version of this problem, the adaptive distributed source coding of random dot stereograms [91]. We now extend that idea to the practical lossy transform-domain coding of real multiview images from arrays of more than one hundred cameras. We demonstrate that distributed multiview image coding outperforms codecs based on separate encoding and decoding.

Our system requires that one key view be conventionally coded and reconstructed. Each of the other views is coded as depicted in Fig. 6.8, where X and X̂ are the view and its reconstruction and Y is another already reconstructed view. At the encoder, the view X is first transformed using an 8 × 8 discrete cosine transform (DCT) and quantized using the matrix in Annex K of the JPEG standard, scaled by some factor [1]. The resulting m-bit quantization indices are compressed by the LDPC encoder, which is part of an adaptive distributed source codec. At the decoder, the overcomplete transform computes the DCT of all 8 × 8 blocks of the reconstructed view Y and supplies the side information adapter with block-candidate side information. The side information consists of c candidates for each block of b indices, each such block derived from an 8 × 8 block of X. The LDPC decoder and side information adapter iteratively decode the quantization indices, requesting increments of rate from the LDPC encoder using the feedback channel as necessary. The quantization coefficients are reconstructed to the centroids of their quantization intervals² and inverse transformed, producing the reconstructed view X̂.

[Figure: block diagram: Transform, Quantizer and LDPC Encoder at the encoder; LDPC Decoder and Side Information Adapter, fed by an Overcomplete Transform of Y, followed by Reconstruction and Inverse Transform at the decoder, with rate control feedback to the LDPC Encoder.]

Figure 6.8: Lossy transform-domain adaptive distributed source codec. The core is an adaptive distributed source codec that performs lossless coding of quantization indices.
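Centroid reconstruction can be sketched numerically as follows, assuming for illustration a zero-mean Laplacian density with known scale lam for the transform coefficient; the codec's exact density may differ:

```python
# Reconstruction of a quantized transform coefficient to the centroid of
# its quantization interval [lo, hi), under an assumed Laplacian density.
import numpy as np

def laplacian_centroid(lo, hi, lam, grid=2001):
    x = np.linspace(lo, hi, grid)
    w = np.exp(-np.abs(x) / lam)       # unnormalized zero-mean Laplacian
    return np.sum(w * x) / np.sum(w)   # E[X | X in [lo, hi)]
```

For an interval away from zero, the centroid is pulled toward zero relative to the interval midpoint, reflecting the peaked coefficient distribution.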

We apply this coding technique to two multiview image data sets Xmas and Dog with camera geometry as described in Table 6.2 [195, 196]. Each view is a luminance signal of resolution 320 × 240. View 0 is set as a key view and coded conventionally with JPEG. For each of the other views, view i is coded as X according to Fig. 6.8 using the reconstructed view i − 1 as Y. The views are quantized with one of four scaling factors 0.5, 0.76, 1.15 and 1.74, which make m = 8 bits sufficient to represent the indices. We choose a Gray mapping from symbols to bits to minimize the number of bit transitions between adjacent symbols. The side-information's block size b is fixed at 64 coefficients because of the 8 × 8 transforms at encoder and decoder. The linear horizontal camera arrangement means that the 8 × 8 candidates in Y for each 8 × 8 block of X lie in a horizontal search range. For these data sets, searching through integer shifts in [−5, 5] is enough, giving c = 11 candidates per block. We statistically model the difference between a source block and its matching side information candidate as zero-mean Laplacian distributions, one tuned for each of the b = 64 coefficients. In our experiments, we divide each view into 10 horizontal tiles of size 320 × 24 pixels and code each tile individually. The rate-adaptive LDPC codes therefore have length n = 61440 bits with data increment size k = 480 bits and regular source degree distribution nreg(ω) = ω². To determine the doping rate Rfixed, we do not perform analysis by density evolution since the source and side information do not satisfy all the required assumptions. For example, each block of the source is usually statistically dependent on all the candidates (instead of exactly one) in the side information block, especially in the low frequency coefficients. Instead, we find that Rfixed = 0 suffices, precisely due to the redundant side information.

²Although side-information-assisted reconstruction is useful when alternating views are key views [11], it degrades performance when there is just one key view.

    Data Set   Camera Array        Camera Alignment   Number of Cameras   Inter-Camera Distance (m)   Distance to Scene (m)
    Xmas       linear horizontal   parallel           101                 0.003                       0.3
    Dog        linear horizontal   converging         80                  0.05                        8.2

Table 6.2: Camera array geometry for multiview data sets Xmas and Dog [195, 196].
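The block-candidate construction and the per-coefficient Laplacian fit described above can be sketched as follows (a simplified illustration: dct_matrix is a plain orthonormal DCT-II and boundary handling of the shifts is ignored):

```python
# Build DCT-domain candidates for one 8x8 block of X from horizontal
# integer shifts of the reconstructed view Y, and fit per-coefficient
# Laplacian scales from matched source/candidate differences.
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C * np.sqrt(2.0 / n)

def candidates(Y, row, col, shifts=range(-5, 6)):
    """c = 11 DCT-domain candidates for the block of X at (row, col)."""
    C = dct_matrix()
    return [C @ Y[row:row + 8, col + s:col + s + 8] @ C.T for s in shifts]

def laplacian_scales(diffs):
    """ML scale of a zero-mean Laplacian per coefficient: mean |difference|.
    diffs has shape (num_blocks, 8, 8)."""
    return np.mean(np.abs(diffs), axis=0)
```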

Fig. 6.9 shows that the overall rate-distortion performance of the adaptive codec is superior to separate encoding and decoding of the views. In coding Xmas and Dog, it outperforms the intra mode of H.264/AVC Baseline by about 3 dB and 0.5 dB, respectively, and JPEG by about 6 dB and 4 dB, respectively. We also compare the performance of an oracle codec, which operates in almost the same way as the adaptive codec. The difference is that the side information adapter in Fig. 6.8 is replaced with an oracle that knows exactly which candidates of Y match the blocks of X. That the adaptive codec performs almost as well as the oracle codec supports our choice of Rfixed = 0.


[Figure: two panels plotting PSNR (dB) versus rate (bit/pixel), comparing the oracle, side information adaptation, H.264/AVC intra and JPEG codecs.]

Figure 6.9: Comparison of rate-distortion (RD) performance for multiview coding of the oracle, adaptive, H.264/AVC intra and JPEG codecs, for the data sets (a) Xmas and (b) Dog.

                       Xmas                           Dog
    Codec              PSNR (dB)   rate (bit/pixel)   PSNR (dB)   rate (bit/pixel)
    Adaptive           37.9        0.545              36.7        0.828
    H.264/AVC intra    35.1        0.549              36.1        0.838
    JPEG               31.5        0.542              32.9        0.834

Table 6.3: Average PSNR and rate for multiview coding traces in Fig. 6.10.

In Fig. 6.10, we plot PSNR and rate traces for the adaptive, H.264/AVC intra and JPEG codecs, when they operate at approximately the same rates, which are shown together with average PSNR in Table 6.3. The PSNR traces in Fig. 6.10(a) and (b) indicate that the adaptive codec consistently reconstructs views better than H.264/AVC intra and JPEG. The Dog data set has greater PSNR variability across views than Xmas because its converging camera geometry makes its images less uniform. In the rate traces in Fig. 6.10(c) and (d), the fact that the adaptive codec codes view 0 conventionally creates rate peaks at view 0. Even apart from that view, the adaptive codec has greater rate variability than the other two codecs because its rate depends on the statistics of pairs of views.

[Figure: four panels of traces versus view number: (a) PSNR (dB) for Xmas, (b) PSNR (dB) for Dog, (c) rate (bit/pixel) for Xmas and (d) rate (bit/pixel) for Dog, each comparing side information adaptation, H.264/AVC intra and JPEG.]

Figure 6.10: PSNR and rate traces for multiview coding. The adaptive, H.264/AVC intra and JPEG codecs operate at similar rates, which are shown together with average PSNR in Table 6.3. The traces are for (a) PSNR of Xmas, (b) PSNR of Dog, (c) rate of Xmas and (d) rate of Dog.

From the traces in Fig. 6.10, we select pairs of images, views 35 and 65 from Xmas and views 35 and 45 from Dog, for stereographic viewing in Fig. 6.11. The adaptive codec's reconstructions in Fig. 6.11(a) and (b) retain more detail than those of the H.264/AVC intra codec in Fig. 6.11(c) and (d), for example in the reins of the sleigh in Xmas and the creases of the curtain in Dog. They also have fewer block compression artifacts compared to the JPEG reconstructions in Fig. 6.11(e) and (f).

6.3 Low-Complexity Video Encoding

Motion-compensated hybrid coding of video relies on a computationally intensive encoder to exploit the redundancy among frames of video. Distributed source coding provides a way for the frames to be encoded separately, and thus at lower complexity, while having their joint redundancy exploited at the decoder. In this section, we describe a distributed video codec based on adaptive distributed source coding and similar to the multiview image codec in Section 6.2. The main result is that the adaptive codec (with side information adaptation) outperforms the nonadaptive codec (which does not adapt to motion) when there is motion in the video. When there is little motion, the adaptive codec performs just as well as the nonadaptive one.

[Figure: six image pairs for stereographic viewing.]

Figure 6.11: Stereographic reconstructions of pairs of multiview images, views 35 and 65 of Xmas and views 35 and 45 of Dog from the traces in Fig. 6.10, through coding by (a) and (b) adaptive, (c) and (d) H.264/AVC intra, and (e) and (f) JPEG codecs.

Our experiments use standard video test sequences Foreman, Carphone, Container and Hall (listed in order of decreasing motion activity) at resolution 352 × 288 and frame rate 30 Hz. The raw video is in YUV 4:2:0 format, which means that the luminance channel Y has resolution 352 × 288 and the two chrominance channels U and V have resolution 176 × 144. In keeping with video coding practice, every eighth frame starting from frame 0 is intra coded as a key frame, in our case using JPEG. For the remaining frames, the luminance channel of frame i is coded as X according to Fig. 6.8 with the luminance channel of the reconstructed frame i − 1 as Y. The chrominance channels are coded without side information adaptation. Instead, the motion vectors obtained from the luminance decoding are used to generate motion-compensated side information for the chrominance channels. All channels are transformed using the 8 × 8 DCT and quantized using a scaled version of the quantization matrix in Annex K of the JPEG standard with one of four scaling factors 0.5, 1, 2 and 4 [1]. As in the multiview image codec, quantization is to m = 8 bits with Gray symbol-to-bit mapping and the side-information's block size is b = 64 coefficients. The candidates in Y for each 8 × 8 block of X lie in a motion search range of [−5, 5] × [−5, 5], giving c = 121 candidates per block. Also like the multiview image codec, we model the difference between a source block and its matching side information candidate as tuned zero-mean Laplacian distributions, one for each coefficient. For the experiments, the luminance and two chrominance channels are divided into 16, 4 and 4 tiles, respectively, each of size 88 × 72 pixels, and each tile is coded individually. The rate-adaptive LDPC codes have length n = 50688 bits with data increment size k = 384 bits and regular source degree distribution nreg(ω) = ω². We do not perform density evolution analysis to determine the doping rate Rfixed, for the same reasons as for the multiview image codec. As before, we find that Rfixed = 0 is enough.

Fig. 6.12 compares the overall rate-distortion performance of four codecs in coding the first 16 frames of each sequence. All the codecs have similar encoding complexity because they use the same sets of transforms and quantization. As in Section 6.2, the oracle codec is the same as the adaptive codec, except that the side information adapter in Fig. 6.8 is replaced with an oracle that knows exactly which candidates of Y match the blocks of X. The nonadaptive codec is the same as the adaptive codec, except that the side information adapter is replaced with a module that always selects the zero motion candidate.
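Schematically, the three distributed codecs differ only in the side-information selection rule applied to each block. The function names below are illustrative sketches, not the dissertation's implementation:

```python
def oracle_select(candidates, true_index):
    """Oracle: told exactly which candidate of Y matches the block of X."""
    return candidates[true_index]

def nonadaptive_select(candidates, zero_motion_index=0):
    """Nonadaptive: always takes the zero-motion candidate."""
    return candidates[zero_motion_index]

def adaptive_select(candidates, weights):
    """Adaptive (schematic): the side information adapter weighs candidates
    by their estimated match to the source during sum-product decoding."""
    best = max(range(len(candidates)), key=lambda i: weights[i])
    return candidates[best]

blocks = ["cand_a", "cand_b", "cand_c"]
print(adaptive_select(blocks, [0.1, 0.7, 0.2]))   # "cand_b"
```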

The adaptive codec performs almost as well as the oracle codec for all four video sequences, again validating our choice of Rfixed = 0. For Foreman, the adaptive codec clearly outperforms its nonadaptive counterpart due to the significant motion in the sequence. With less motion in Carphone, the gap is reduced. When there is almost no motion, as in Container and Hall, the nonadaptive codec functions equivalently to the oracle codec, and so is marginally superior to the adaptive codec.

We plot luminance PSNR and rate traces in Figs. 6.13 and 6.14, respectively. The traces are selected so that the four codecs plotted in the same panel operate at the same PSNR. The average PSNR and rates for these traces are shown in Table 6.4. At frames 0 and 8, all the codecs perform identically, since every eighth frame is always coded by JPEG. These traces agree with the rate-distortion plots in Fig. 6.12.

Figure 6.12: Comparison of rate-distortion (RD) performance for distributed video coding of the oracle, adaptive, JPEG and nonadaptive codecs, for the sequences (a) Foreman, (b) Carphone, (c) Container and (d) Hall in YUV 4:2:0 format at resolution 352 × 288 and frame rate 30 Hz.

              Foreman                      Carphone
Codec         PSNR (dB)  rate (Mbit/s)    PSNR (dB)  rate (Mbit/s)
Oracle        34.4       2.48             35.4       2.04
Adaptive      34.4       2.99             35.4       2.20
Nonadaptive   34.4       4.51             35.4       2.81
JPEG          34.4       2.92             35.4       2.53

              Container                    Hall
Codec         PSNR (dB)  rate (Mbit/s)    PSNR (dB)  rate (Mbit/s)
Oracle        34.0       1.53             36.6       1.80
Adaptive      34.0       1.60             36.6       1.85
Nonadaptive   34.0       1.53             36.6       1.84
JPEG          34.0       3.03             36.6       2.68

Table 6.4: Average luminance PSNR and rate for distributed video coding traces in Figs. 6.13 and 6.14.

Fig. 6.15 depicts the reconstructions of frame 15 of the sequences Foreman, Carphone, Container and Hall, as coded by the adaptive codec in the PSNR and rate traces in Figs. 6.13 and 6.14. These reconstructions show that the PSNR measurements are matched by good visual quality.
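The luminance PSNR used throughout is the usual measure for 8-bit samples; a minimal reference computation (illustrative helper, not the dissertation's code):

```python
import math

def psnr(original, reconstruction, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between two equal-length sample lists."""
    mse = sum((a - b) ** 2 for a, b in zip(original, reconstruction)) / len(original)
    return float("inf") if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# A uniform error of 5 gray levels (MSE = 25) gives roughly 34 dB,
# which is the quality regime of the traces in Table 6.4.
print(psnr([100, 120, 140], [105, 125, 145]))
```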

6.4 Summary

This chapter demonstrates the potential of adaptive distributed source coding for three applications, while highlighting different facets of its performance. We design distributed source coding algorithms using density evolution analysis for the problem of reduced-reference video quality monitoring and channel tracking. We show that distributed multiview image coding outperforms state-of-the-art coding based on separate encoding and decoding. Finally, for low-complexity video encoding of sequences with motion, we demonstrate that coding with side information adaptation for motion delivers better performance than coding that does not adapt to motion.

Figure 6.13: Luminance PSNR traces for distributed video coding. The oracle, adaptive, JPEG and nonadaptive codecs operate at identical PSNR, shown together with average rates in Table 6.4. The PSNR traces are for (a) Foreman, (b) Carphone, (c) Container and (d) Hall, and correspond to the respective rate traces in Fig. 6.14.

Figure 6.14: Rate traces for distributed video coding. The oracle, adaptive, JPEG and nonadaptive codecs operate at similar PSNR, shown together with average rates in Table 6.4. The rate traces are for (a) Foreman, (b) Carphone, (c) Container and (d) Hall, and correspond to the respective PSNR traces in Fig. 6.13.

Figure 6.15: Reconstructions of distributed video coding frames, frame 15 from the traces in Fig. 6.13 and 6.14, of the sequences (a) Foreman, (b) Carphone, (c) Container and (d) Hall.

Chapter 7

Conclusions

Adaptive distributed source coding is the separate encoding and joint decoding of statistically dependent signals when there is uncertainty in the statistical dependence. This dissertation considers the encoder and decoder to have separate access to the source and side information, respectively, and addresses adaptive distributed source coding with three intertwining threads of study: algorithms, analysis and applications.

Algorithms

We develop coding algorithms for rate adaptation and side information adaptation for both binary and multilevel signals.

Rate adaptation is the existing idea that the encoder switches flexibly among coding rates. This capability, combined with a feedback channel from the decoder, means that the encoder need not know in advance the degree of statistical dependence between source and side information. Our contribution is a construction for rate-adaptive low-density parity-check (LDPC) codes that uses syndrome merging and splitting rather than naïve syndrome deletion. These codes perform close to the information-theoretic bounds and are able to outperform commonly used rate-adaptive turbo codes. Furthermore, the LDPC code construction facilitates decoding by the sum-product algorithm and so permits performance analysis by density evolution, which we discuss in more detail below.
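A toy example of the merging idea (not the dissertation's actual code construction): merging two syndrome bits by XOR is equivalent to summing the corresponding rows of the parity-check matrix over GF(2), so the merged syndrome is a valid syndrome of a lower-rate code.

```python
# Toy parity-check matrix H: 3 checks on 6 source bits (illustrative only).
H = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def syndrome(checks, x):
    """Syndrome s = Hx over GF(2)."""
    return [sum(h * b for h, b in zip(row, x)) % 2 for row in checks]

def merge_rows(row_a, row_b):
    """Summing two rows of H mirrors XOR-merging their syndrome bits."""
    return [(a + b) % 2 for a, b in zip(row_a, row_b)]

x = [1, 0, 1, 1, 0, 1]
s = syndrome(H, x)
merged = syndrome([merge_rows(H[0], H[1])], x)[0]
print(merged == s[0] ^ s[1])   # True: merged syndrome bit is the XOR of the originals
```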

Side information adaptation is our novel technique for the decoder to adapt to multiple candidates of side information without knowing in advance which candidate is most statistically dependent on the source. Our approach defines block-candidate models for side information, for which we compute tight information-theoretic coding bounds and devise decoders based on the sum-product algorithm. The main experimental finding is that the encoder usually sends a low rate of the source bits uncoded, as doping bits, in order to achieve coding performance close to the bounds. In the case of multilevel signals, we argue that side information adaptation requires the coding of whole symbols rather than coding bit-plane-by-bit-plane. We also observe that binary and Gray symbol-to-bit mappings yield different coding performance.

Future work in adaptive distributed source coding algorithms should continue adding functionality. An unsolved problem is the construction of layered rate-adaptive codes, for which some specified bits of the source are recovered at rates insufficient for complete decoding. Side information adaptation might be extended to more general block-candidate models, for example, ones symmetric in source and side information.

Analysis

The analysis of coding performance employs density evolution, a technique we apply to all the decoding algorithms in this dissertation. The idea is that testing the convergence of distributions (or densities) of sum-product messages is more efficient than testing the convergence of the messages themselves, because the former does not require complete knowledge of the decoder's factor graph.
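As a concrete instance of this idea in its simplest setting, density evolution for a regular (dv, dc) LDPC ensemble on the binary erasure channel tracks a single number, the erasure probability of the messages, instead of the messages themselves. A sketch using the standard update x ← ε(1 − (1 − x)^(dc−1))^(dv−1); this illustrates the technique, not the multilevel analysis of the dissertation:

```python
def bec_density_evolution(eps, dv, dc, iterations=200):
    """Track the message-erasure probability of a regular (dv, dc) LDPC
    ensemble on a BEC with channel erasure probability eps."""
    x = eps
    for _ in range(iterations):
        x = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
    return x

# The (3, 6) ensemble has threshold ~0.4294: below it decoding succeeds.
print(bec_density_evolution(0.40, 3, 6) < 1e-9)   # True: erasures die out
print(bec_density_evolution(0.45, 3, 6) > 0.1)    # True: stuck at a fixed point
```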

Experiments demonstrate that the analysis technique closely approximates empirical coding performance, and consequently enables tuning of parameters of the coding algorithms. In this way, we design the aforementioned rate-adaptive LDPC codes that outperform rate-adaptive turbo codes and select the optimal doping rates for side information adaptation.

Our analysis for multilevel signals requires that certain symmetry conditions hold. In particular, symbol values must map to bit representations in such a way that, whenever the bits of any subset of bit planes are flipped, the new circular ordering is isomorphic to the original ordering. We show that this isomorphism condition holds for 2-bit symbols with either binary or Gray mapping. Extending density evolution analysis to symbols of greater than 2 bits remains an open problem.
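The 2-bit case can be checked mechanically. Below is one concrete reading of the condition (flipping any nonempty subset of bit planes must permute the symbol values by a symmetry of the circular order 0-1-2-3-0); the dissertation's formal definition may differ in detail, but both the binary and Gray mappings pass this check:

```python
def flips_preserve_cycle(mapping):
    """mapping[v] gives the bit pattern of 2-bit symbol value v.
    Check that XOR-ing any nonempty bit-plane mask permutes the values by
    a symmetry of the cycle 0-1-2-3-0 (its edge set is preserved)."""
    n = len(mapping)
    inverse = {bits: v for v, bits in enumerate(mapping)}
    edges = {frozenset((v, (v + 1) % n)) for v in range(n)}
    for mask in range(1, n):
        perm = [inverse[mapping[v] ^ mask] for v in range(n)]
        image = {frozenset((perm[v], perm[(v + 1) % n])) for v in range(n)}
        if image != edges:
            return False
    return True

binary_mapping = [0, 1, 2, 3]   # natural binary symbol-to-bit mapping
gray_mapping = [0, 1, 3, 2]     # Gray symbol-to-bit mapping

print(flips_preserve_cycle(binary_mapping), flips_preserve_cycle(gray_mapping))
# True True
```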

Applications

We showcase this dissertation's algorithm and analysis contributions by demonstrating their use in different media applications.

End-to-end quality is a key concern in the delivery of video over best-effort networks. We devise a reduced-reference video quality monitoring and channel tracking system based on adaptive distributed source coding, and design it using density evolution analysis. The reduced-reference bit rate is almost 20 times lower than that of a comparable system based on the ITU-T J.240 standard.

We develop a lossy multiview coding system that adapts to uncertain statistics caused by unknown disparity among the views. Operating on image data of 101 views, our codec achieves a quality 3 dB higher in peak signal-to-noise ratio (PSNR) than that of intra coding with H.264/AVC Baseline at the same bit rate.

Finally, we apply our adaptive techniques to a classic distributed source coding problem: low-complexity video encoding. Our codec adapts to the motion in the video and, whenever there is motion, outperforms a codec that assumes zero motion.

These media systems demonstrate the potential of adaptive distributed source coding using readily accessible image and video data. In contrast, as suggested in the introduction to this dissertation, the most exciting applications, such as satellite imaging and capsule endoscopy, involve hard-to-obtain data. It is just this inaccessibility that fits them to the constraint of separate encoding and joint decoding and that continues to leave them open to future systems research.

Bibliography

[1] ITU-T Recommendation T.81: Digital compression and coding of continuous-tone still images, Sept. 1992.

[2] ITU-T Recommendation J.240: Framework for remote monitoring of transmitted picture signal-to-noise ratio using spread-spectrum and orthogonal transform, Jun. 2004.

[3] A. M. Aaron and B. Girod. Compression with side information using turbo codes. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2002.

[4] A. M. Aaron and B. Girod. Wyner-Ziv video coding with low-encoder complexity. In Proc. Picture Coding Symp., San Francisco, California, 2004.

[5] A. M. Aaron, S. Rane, and B. Girod. Wyner-Ziv video coding with hash-based motion compensation at the receiver. In Proc. IEEE Internat. Conf. Image Process., Singapore, 2004.

[6] A. M. Aaron, S. Rane, D. Rebollo-Monedero, and B. Girod. Systematic lossy forward error protection for video waveforms. In Proc. IEEE Internat. Conf. Image Process., Barcelona, Spain, 2003.

[7] A. M. Aaron, S. Rane, E. Setton, and B. Girod. Transform-domain Wyner-Ziv codec for video. In SPIE Visual Communications and Image Process. Conf., San Jose, California, 2004.

[8] A. M. Aaron, S. Rane, R. Zhang, and B. Girod. Wyner-Ziv coding for video: Applications to compression and error resilience. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2003.

[9] A. M. Aaron, E. Setton, and B. Girod. Towards practical Wyner-Ziv coding of video. In Proc. IEEE Internat. Conf. Image Process., Barcelona, Spain, 2003.

[10] A. M. Aaron, D. P. Varodayan, and B. Girod. Wyner-Ziv residual coding of video. In Proc. Picture Coding Symp., Beijing, China, 2006.

[11] A. M. Aaron, R. Zhang, and B. Girod. Wyner-Ziv coding of motion video. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2002.

[12] R. Ahlswede and J. Korner. Source coding with side information and a converse for degraded broadcast channels. IEEE Trans. Inform. Theory, 21(6):629–637, Nov. 1975.

[13] A. K. Al Jabri and S. Al-Issa. Zero-error codes for correlated information sources. In Proc. IMA Internat. Conf. Cryptog. Coding, Cirencester, United Kingdom, 1997.

[14] A. Amraoui. LTHC: LdpcOpt, visited on Nov. 15, 2005. Available online at http://lthcwww.epfl.ch/research/ldpcopt.

[15] X. Artigas, E. Angeli, and L. Torres. Side information generation for multiview distributed video coding using a fusion approach. In Proc. Nordic Signal Process. Symp., Reykjavik, Iceland, 2006.

[16] X. Artigas, J. Ascenso, M. Dalai, S. Klomp, D. Kubasov, and M. Ouaret. The DISCOVER codec: Architecture, techniques and evaluation. In Proc. Picture Coding Symp., Lisbon, Portugal, 2007.

[17] X. Artigas, S. Malinowski, C. Guillemot, and L. Torres. Overlapped quasi-arithmetic codes for distributed video coding. In Proc. IEEE Internat. Conf. Image Process., San Antonio, Texas, 2007.

[18] J. Ascenso, C. Brites, and F. Pereira. Improving frame interpolation with spatial motion smoothing for pixel domain distributed video coding. In Proc. EURASIP Conf. Speech Image Process., Multimedia Commun. Services, Smolenice, Slovak Republic, 2005.

[19] J. Ascenso, C. Brites, and F. Pereira. Content adaptive Wyner-Ziv video coding driven by motion activity. In Proc. IEEE Internat. Conf. Image Process., Atlanta, Georgia, 2006.

[20] J. Ascenso, C. Brites, and F. Pereira. Design and performance of a novel low-density parity-check code for distributed video coding. In Proc. IEEE Internat. Conf. Image Process., San Diego, California, 2008.

[21] J. Ascenso and F. Pereira. Adaptive hash-based side information exploitation for efficient Wyner-Ziv video coding. In Proc. IEEE Internat. Conf. Image Process., San Antonio, Texas, 2007.

[22] J. Bajcsy and P. Mitran. Coding for the Slepian-Wolf problem with turbo codes. In Proc. IEEE Global Commun. Conf., San Antonio, Texas, 2001.

[23] M. Barni, D. Papini, A. Abrardo, and E. Magli. Distributed source coding of hyperspectral images. In Proc. Internat. Geoscience Remote Sensing Symp., Seoul, South Korea, 2005.

[24] R. J. Barron, B. Chen, and G. W. Wornell. The duality between information embedding and source coding with side information and some applications. IEEE Trans. Inform. Theory, 49(5):1159–1180, May 2003.

[25] L. E. Baum, T. Petrie, G. Soules, and N. Weiss. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann. Math. Stat., 41(1):164–171, Oct. 1970.

[26] T. Berger. Multiterminal source coding. In Lecture Notes presented at CISM Summer School, 1977.

[27] T. Berger and S. Tung. Encoding of correlated analog sources. In Proc. IEEE-USSR Joint Workshop on Inform. Theory, Moscow, USSR, 1975.

[28] C. Berrou, A. Glavieux, and P. Thitimajshima. Near Shannon limit error-correcting coding and decoding: Turbo-codes. In Proc. Internat. Conf. Commun., Geneva, Switzerland, 1993.

[29] R. B. Blizard. Convolutional coding for data compression. Martin Marietta Corp., Denver Div., Rep. R-69-17, 1969.

[30] R. C. Bose and D. K. Ray-Chaudhuri. On a class of error correcting binary group codes. Inf. Control, 3(1):68–79, Mar. 1960.

[31] C. Brites, J. Ascenso, and F. Pereira. Modeling correlation noise statistics at decoder for pixel based Wyner-Ziv video coding. In Proc. Picture Coding Symp., Beijing, China, 2006.

[32] C. Brites, J. Ascenso, and F. Pereira. Studying temporal correlation noise modeling for pixel based Wyner-Ziv video coding. In Proc. IEEE Internat. Conf. Image Process., Atlanta, Georgia, 2006.

[33] C. Brites and F. Pereira. Correlation noise modeling for efficient pixel and transform domain Wyner-Ziv video coding. IEEE Trans. Circuits Syst. Video Technol., 18(9):1177–1190, Sept. 2008.

[34] J. Cardinal and G. Van Asche. Joint entropy-constrained multiterminal quantization. In Proc. Internat. Symp. Inform. Theory, Lausanne, Switzerland, 2002.

[35] D. M. Chen, D. P. Varodayan, M. Flierl, and B. Girod. Distributed stereo image coding with improved disparity and noise estimation. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Las Vegas, Nevada, 2008.

[36] D. M. Chen, D. P. Varodayan, M. Flierl, and B. Girod. Wyner-Ziv coding of multiview images with unsupervised learning of disparity and Gray code. In Proc. IEEE Internat. Conf. Image Process., San Diego, California, 2008.

[37] D. M. Chen, D. P. Varodayan, M. Flierl, and B. Girod. Wyner-Ziv coding of multiview images with unsupervised learning of two disparities. In Proc. IEEE Internat. Conf. Multimedia and Expo, Hannover, Germany, 2008.

[38] J. Chen, A. Khisti, D. M. Malioutov, and J. S. Yedidia. Distributed source coding using serially-concatenated-accumulate codes. In IEEE Inform. Theory Workshop, San Antonio, Texas, 2004.

[39] Y. Chen, Y.-K. Wang, K. Ugur, M. M. Hannuksela, J. Lainema, and M. Gabbouj. The emerging MVC standard for 3D video services. EURASIP Adv. Signal Process. J., 2009. doi:10.1155/2009/786015.

[40] N.-M. Cheung and A. Ortega. An efficient and highly parallel hyperspectral imagery compression scheme based on distributed source coding. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2006.

[41] N.-M. Cheung, C. Tang, A. Ortega, and C. S. Raghavendra. Efficient wavelet-based predictive Slepian-Wolf coding for hyperspectral imagery. EURASIP Signal Process. J., 86(11):3180–3195, Nov. 2006.

[42] K. Chono, Y.-C. Lin, D. P. Varodayan, and B. Girod. Reduced-reference image quality estimation using distributed source coding. In Proc. IEEE Internat. Conf. Multimedia and Expo, Hannover, Germany, Jun. 2008.

[43] T. P. Coleman, A. H. Lee, M. Medard, and M. Effros. On some new approaches to practical Slepian-Wolf compression inspired by channel coding. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2004.

[44] M. Costa. Writing on dirty paper. IEEE Trans. Inform. Theory, 29(3):439–441, May 1983.

[45] T. M. Cover. A proof of the data compression theorem of Slepian and Wolf for ergodic sources. IEEE Trans. Inform. Theory, 21(2):226–228, Oct. 1975.

[46] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., 1991.

[47] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Stat. Soc., Series B, 39(1):1–38, 1977.

[48] C. Di, D. Proietti, I. E. Telatar, T. J. Richardson, and R. L. Urbanke. Finite-length analysis of low-density parity-check codes on the binary erasure channel. IEEE Trans. Inform. Theory, 48(6):1570–1579, Jun. 2002.

[49] H. Dong, J. Lu, and Y. Sun. Distributed audio coding in wireless sensor networks. In Internat. Conf. Comput. Intelligence Security, Guangzhou, China, 2006.

[50] P. L. Dragotti and M. Gastpar. Distributed Source Coding. Elsevier, Inc., 2009.

[51] S. C. Draper. Successive structuring of source coding algorithms for data fusion, buffering, and distribution in networks. Ph.D. dissertation, MIT, 2002.

[52] S. C. Draper, A. Khisti, E. Martinian, A. Vetro, and J. S. Yedidia. Secure storage of fingerprint biometrics using Slepian-Wolf codes. In IEEE Inform. Theory Applic. Workshop, San Diego, California, 2007.

[53] S. C. Draper, A. Khisti, E. Martinian, A. Vetro, and J. S. Yedidia. Using distributed source coding to secure fingerprint biometrics. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Honolulu, Hawaii, 2007.

[54] S. C. Draper and G. W. Wornell. Side information aware coding strategies for sensor networks. IEEE J. Select. Areas Commun., 22(6):966–976, Aug. 2004.

[55] F. Dufaux, M. Ouaret, and T. Ebrahimi. Recent advances in multiview distributed video coding. In Proc. Mobile Multimedia/Image Process. Military Security Applic., Orlando, Florida, 2007.

[56] M. Effros and D. Muresan. Codecell contiguity in optimal fixed-rate and entropy-constrained network scalar quantizers. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2002.

[57] M. Fleming and M. Effros. Network vector quantization. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2001.

[58] M. Fleming, Q. Zhao, and M. Effros. Network vector quantization. IEEE Trans. Inform. Theory, 50(8):1584–1604, Aug. 2004.

[59] M. Flierl and B. Girod. Coding of multi-view image sequences with video sensors. In Proc. IEEE Internat. Conf. Image Process., Atlanta, Georgia, 2006.

[60] M. Flierl and P. Vandergheynst. Video coding with motion-compensated temporal transforms and side information. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Philadelphia, Pennsylvania, 2005.

[61] T. Flynn and R. Gray. Encoding of correlated observations. IEEE Trans. Inform. Theory, 33(6):773–787, Nov. 1987.

[62] T. Flynn and R. Gray. Correction to encoding of correlated observations. IEEE Trans. Inform. Theory, 37(3):699, May 1991.

[63] T. Fujii and M. Tanimoto. Free viewpoint TV system based on ray-space representation. In SPIE 3D TV Video Display Conf., Boston, Massachusetts, 2002.

[64] R. G. Gallager. Low-density parity-check codes. Ph.D. dissertation, MIT, 1963.

[65] J. García-Frías. Compression of correlated binary sources using turbo codes. IEEE Commun. Lett., 5(10):417–419, Oct. 2001.

[66] J. García-Frías. Joint source-channel decoding of correlated sources over noisy channels. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2001.

[67] J. García-Frías. Decoding of low-density parity check codes over finite-state binary Markov channels. IEEE Trans. Commun., 52(11):1840–1843, Nov. 2004.

[68] J. García-Frías and Y. Zhao. Data compression of unknown single and correlated binary sources using punctured turbo codes. In Proc. Allerton Conf. Commun., Contr. and Comput., Monticello, Illinois, 2001.

[69] J. García-Frías and Y. Zhao. Compression of binary memoryless sources using punctured turbo codes. IEEE Commun. Lett., 6(9):394–396, Sept. 2002.

[70] J. García-Frías and W. Zhong. LDPC codes for asymmetric compression of multi-terminal sources with hidden Markov correlation. In Proc. CTA Commun. Networks Symp., College Park, Maryland, 2003.

[71] J. García-Frías and W. Zhong. LDPC codes for compression of multi-terminal sources with hidden Markov correlation. IEEE Commun. Lett., 7(3):115–117, Mar. 2003.

[72] M. Gastpar. On the Wyner-Ziv problem with two sources. In Proc. Internat. Symp. Inform. Theory, Yokohama, Japan, 2003.

[73] M. Gastpar. On Wyner-Ziv networks. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2003.

[74] N. Gehrig and P. L. Dragotti. Symmetric and a-symmetric Slepian-Wolf codes with systematic and nonsystematic linear codes. IEEE Commun. Lett., 9(1):61–63, Jan. 2005.

[75] B. Girod, A. M. Aaron, S. Rane, and D. Rebollo-Monedero. Distributed video coding. Proceedings IEEE, 93(1):71–83, Jan. 2005.

[76] S. W. Golomb. Shift Register Sequences. Holden-Day, 1967.

[77] M. Grangetto, E. Magli, and G. Olmo. Distributed arithmetic coding. IEEE Commun. Lett., 11(11):883–885, Nov. 2007.

[78] C. Guillemot, F. Pereira, L. Torres, T. Ebrahimi, R. Leonardi, and J. Ostermann. Distributed monoview and multiview video coding. IEEE Signal Proc. Magazine, 24(5):67–76, Sept. 2007.

[79] X. Guo, Y. Lu, F. Wu, W. Gao, and S. Li. Distributed multi-view video coding. In SPIE Visual Communications and Image Process. Conf., San Jose, California, 2006.

[80] J. Ha and S. W. McLaughlin. Optimal puncturing of irregular low-density parity-check codes. In Proc. IEEE Internat. Conf. on Commun., Anchorage, Alaska, 2003.

[81] J. Hagenauer. Rate-compatible punctured convolutional codes (RCPC codes) and their applications. IEEE Trans. Commun., 36(4):389–400, Apr. 1988.

[82] J. Hagenauer, J. Barros, and A. Schaeffer. Lossless turbo source coding with decremental redundancy. In Proc. 5th ITG Conf. on Source and Channel Coding, Erlangen, Germany, 2004.

[83] T. S. Han and K. Kobayashi. A unified achievable rate region for a general class of multiterminal source coding systems. IEEE Trans. Inform. Theory, 26(3):277–288, May 1980.

[84] C. Heegard and T. Berger. Rate distortion when side information may be absent. IEEE Trans. Inform. Theory, 31(6):727–734, Nov. 1985.

[85] M. E. Hellman. Convolutional source encoding. IEEE Trans. Inform. Theory, 21(6):651–656, Nov. 1975.

[86] A. Hocquenghem. Codes correcteurs d'erreurs. Chiffres (Paris), 2:147–156, Sept. 1959.

[87] K. B. Housewright. Source coding studies for multiterminal systems. Ph.D. dissertation, University of California, Los Angeles, 1979.

[88] X.-Y. Hu, E. Eleftheriou, and D. Arnold. Regular and irregular progressive edge-growth Tanner graphs. IEEE Trans. Inform. Theory, 51(1):386–398, Jan. 2005.

[89] M. Isaka and M. P. C. Fossorier. High rate serially concatenated coding with extended Hamming codes. IEEE Commun. Lett., 9(2):160–162, Feb. 2005.

[90] J. Jiang, D. He, and A. Jagmohan. Rateless Slepian-Wolf coding based on rate adaptive low-density parity-check codes. In Proc. Internat. Symp. Inform. Theory, Nice, France, 2007.

[91] B. Julesz. Binocular depth perception of computer generated patterns. Bell Sys. Tech. J., 38:1001–1020, 1960.

[92] M. L. Kaiser. The STEREO mission: an overview. Advances Space Research, 36(8):1483–1488, Aug. 2005.

[93] R. Kawada, O. Sugimoto, A. Koike, M. Wada, and S. Matsumoto. Highly precise estimation scheme for remote video PSNR using spread spectrum and extraction of orthogonal transform coefficients. Electronics Commun. Japan (Part I), 89(6):51–62, Jun. 2006.

[94] J. Korner and K. Marton. How to encode the modulo-two sum of binary sources. IEEE Trans. Inform. Theory, 25(2):219–221, Mar. 1979.

[95] P. Koulgi, E. Tuncel, E. Regunathan, and K. Rose. On zero-error source coding of correlated sources. IEEE Trans. Inform. Theory, 49(11):2856–2873, Nov. 2003.

[96] P. Koulgi, E. Tuncel, E. Regunathan, and K. Rose. On zero-error source coding with decoder side information. IEEE Trans. Inform. Theory, 49(1):99–111, Jan. 2003.

[97] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Trans. Inform. Theory, 47(2):498–519, Feb. 2001.

[98] D. Kubasov, J. Nayak, and C. Guillemot. Optimal reconstruction in Wyner-Ziv video coding with multiple side information. In Proc. IEEE Internat. Workshop Multimedia Signal Process., Chania, Greece, 2007.

[99] J. Kusuma, L. Doherty, and K. Ramchandran. Distributed compression for sensor networks. In Proc. IEEE Internat. Conf. Image Process., Thessaloniki, Greece, 2001.

[100] C. F. Lan, A. Liveris, K. Narayanan, Z. Xiong, and C. Georghiades. Slepian-Wolf coding of multiple M-ary sources using LDPC codes. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2004.

[101] J. Li and H. Alqamzi. An optimal distributed and adaptive source coding strategy using rate-compatible punctured convolutional codes. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Philadelphia, Pennsylvania, 2005.

[102] J. Li, K. R. Narayanan, and C. N. Georghiades. Product accumulate codes: a class of codes with near-capacity performance and low decoding complexity. IEEE Trans. Inform. Theory, 50(1):31–46, Jan. 2004.

[103] X. Li. Distributed coding of multispectral images: a set theoretic approach. In Proc. IEEE Internat. Conf. Image Process., Singapore, 2004.


[104] Z. Li, Y.-C. Lin, D. P. Varodayan, P. Baccichet, and B. Girod. Distortion-aware retransmission and concealment of video packets using a Wyner-Ziv-coded thumbnail. In Proc. IEEE Internat. Workshop Multimedia Signal Process., Cairns, Australia, Oct. 2008.

[105] Y.-C. Lin, D. P. Varodayan, T. Fink, E. Bellers, and B. Girod. Authenticating contrast and brightness adjusted images using distributed source coding and expectation maximization. In Proc. IEEE Internat. Conf. Multimedia and Expo, Hannover, Germany, Jun. 2008.

[106] Y.-C. Lin, D. P. Varodayan, T. Fink, E. Bellers, and B. Girod. Localization of tampering in contrast and brightness adjusted images using distributed source coding and expectation maximization. In Proc. IEEE Internat. Conf. Image Process., San Diego, California, Oct. 2008.

[107] Y.-C. Lin, D. P. Varodayan, and B. Girod. Image authentication and tampering localization using distributed source coding. In Proc. IEEE Internat. Workshop Multimedia Signal Process., Chania, Greece, Oct. 2007.

[108] Y.-C. Lin, D. P. Varodayan, and B. Girod. Image authentication based on distributed source coding. In Proc. IEEE Internat. Conf. Image Process., San Antonio, Texas, Sept. 2007.

[109] Y.-C. Lin, D. P. Varodayan, and B. Girod. Spatial models for localization of image tampering using distributed source codes. In Proc. Picture Coding Symp., Lisbon, Portugal, Nov. 2007.

[110] Y.-C. Lin, D. P. Varodayan, and B. Girod. Authenticating cropped and resized images using distributed source coding and expectation maximization. In SPIE Electronic Imaging Media Forensics Security XI, San Jose, California, Jan. 2009.

[111] Y.-C. Lin, D. P. Varodayan, and B. Girod. Distributed source coding authentication of images with affine warping. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Taipei, Taiwan, Apr. 2009.


[112] Y.-C. Lin, D. P. Varodayan, and B. Girod. Distributed source coding authentication of images with contrast and brightness adjustment and affine warping. In Proc. Picture Coding Symp., Chicago, Illinois, May 2009.

[113] Y.-C. Lin, D. P. Varodayan, and B. Girod. Video quality monitoring for mobile multicast peers using distributed source coding. In Proc. Internat. Mobile Multimedia Commun. Conf., London, United Kingdom, Sept. 2009.

[114] T. Linder, R. Zamir, and K. Zeger. On source coding with side information for general distortion measures. In Proc. Internat. Symp. Inform. Theory, Cambridge, Massachusetts, 1998.

[115] T. Linder, R. Zamir, and K. Zeger. On source coding with side-information-dependent distortion measures. IEEE Trans. Inform. Theory, 46(7):2697–2704, Nov. 2000.

[116] Z. Liu, S. Cheng, A. Liveris, and Z. Xiong. Slepian-Wolf coded nested quantization (SWC-NQ) for Wyner-Ziv coding: performance analysis and code design. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2004.

[117] A. Liveris, Z. Xiong, and C. Georghiades. Compression of binary sources with side information at the decoder using LDPC codes. IEEE Commun. Lett., 6(10):440–442, Oct. 2002.

[118] A. Liveris, Z. Xiong, and C. Georghiades. Compression of binary sources with side information at the decoder using LDPC codes. In Proc. IEEE Global Commun. Conf., Taipei, Taiwan, 2002.

[119] A. Liveris, Z. Xiong, and C. Georghiades. Joint source-channel coding of binary sources with side information at the decoder using IRA codes. In Proc. IEEE Internat. Workshop Multimedia Signal Process., St. Thomas, U.S. Virgin Islands, 2002.

[120] S. P. Lloyd. Least squares quantization in PCM. IEEE Trans. Inform. Theory, 28(2):129–137, Mar. 1982.

[121] D. J. C. MacKay. Good error-correcting codes based on very sparse matrices. IEEE Trans. Inform. Theory, 45(2):399–431, Mar. 1999.


[122] D. J. C. MacKay. Source code for progressive edge growth parity-check matrix construction, 2004.

[123] D. J. C. MacKay, S. T. Wilson, and M. C. Davey. Comparison of constructions of irregular Gallager codes. IEEE Trans. Commun., 47(10):1449–1454, Oct. 1999.

[124] A. Majumdar, K. Ramchandran, and I. Kozintsev. Distributed coding for wireless audio sensors. In Proc. IEEE Applic. Signal Process. Audio Acoustics Workshop, New Paltz, New York, 2003.

[125] S. Malinowski, X. Artigas, C. Guillemot, and L. Torres. Distributed coding using punctured quasi-arithmetic codes for memory and memoryless sources. IEEE Trans. Signal Process. Accepted.

[126] E. Martinian, S. Yekhanin, and J. S. Yedidia. Secure biometrics via syndromes. In Proc. Allerton Conf. Commun., Contr. and Comput., Monticello, Illinois, 2005.

[127] W. Matusik and H. Pfister. 3D TV: a scalable system for real-time acquisition, transmission, and autostereoscopic display of dynamic scenes. ACM Trans. Graphics, 23(3):814–824, Aug. 2004.

[128] P. Mitran and J. Bajcsy. Near Shannon-limit coding for the Slepian-Wolf problem. In Proc. Biennial Symp. Commun., Kingston, Ontario, Canada, 2002.

[129] P. Mitran and J. Bajcsy. Turbo source coding: a noise-robust approach to data compression. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2002.

[130] D. Muresan and M. Effros. Quantization as histogram segmentation: globally optimal scalar quantizer design in network systems. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2002.

[131] NASA. NASA - STEREO Mission, visited on Mar. 9, 2010. Available online at http://www.nasa.gov/stereo.


[132] L. Natario, C. Brites, J. Ascenso, and F. Pereira. Extrapolating side information for low-delay pixel-domain distributed video coding. In Proc. Internat. Workshop Very Low Bitrate Video Coding, Sardinia, Italy, 2005.

[133] M. Ouaret, F. Dufaux, and T. Ebrahimi. Fusion-based multiview distributed video coding. In Proc. ACM Internat. Workshop Video Surveillance Sensor Networks, Santa Barbara, California, 2006.

[134] J. Pearl. Reverend Bayes on inference engines: A distributed hierarchical approach. In Proc. AAAI Conf. Artificial Intelligence, Pittsburgh, Pennsylvania, 1982.

[135] F. Pereira, L. Torres, C. Guillemot, T. Ebrahimi, R. Leonardi, and S. Klomp. Distributed video coding: selecting the most promising application scenarios. EURASIP Signal Process.: Image Commun. J., 23(5):339–352, Jun. 2008.

[136] H. Pishro-Nik and F. Fekri. Results on punctured LDPC codes. In IEEE Inform. Theory Workshop, San Antonio, Texas, 2004.

[137] S. S. Pradhan, J. Chou, and K. Ramchandran. Duality between source coding and channel coding and its extension to the side information case. IEEE Trans. Inform. Theory, 49(5):1181–1203, May 2003.

[138] S. S. Pradhan, J. Kusuma, and K. Ramchandran. Distributed compression in a dense microsensor network. IEEE Signal Process. Mag., 19(2):51–60, Mar. 2002.

[139] S. S. Pradhan and K. Ramchandran. Distributed source coding using syndromes (DISCUS): design and construction. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 1999.

[140] S. S. Pradhan and K. Ramchandran. Distributed source coding: symmetric rates and applications to sensor networks. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2000.

[141] S. S. Pradhan and K. Ramchandran. Geometric proof of rate-distortion function of Gaussian sources with side information at the decoder. In Proc. Internat. Symp. Inform. Theory, Sorrento, Italy, 2000.


[142] S. S. Pradhan and K. Ramchandran. Group-theoretic construction and analysis of generalized coset codes for symmetric/asymmetric distributed source coding. In Proc. Conf. Inform. Sciences Syst., Princeton, New Jersey, 2000.

[143] S. S. Pradhan and K. Ramchandran. Enhancing analog image transmission systems using digital side information: a new wavelet-based image coding paradigm. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2001.

[144] S. S. Pradhan and K. Ramchandran. Distributed source coding using syndromes (DISCUS): design and construction. IEEE Trans. Inform. Theory, 49(3):626–643, Mar. 2003.

[145] G. Prandi, G. Valenzise, M. Tagliasacchi, and A. Sarti. Detection and identification of sparse audio tampering using distributed source coding and compressive sensing techniques. In Proc. Internat. Conf. Digital Audio Effects, Espoo, Finland, Sept. 2008.

[146] R. Puri, A. Majumdar, and K. Ramchandran. PRISM: A video coding paradigm with motion estimation at the decoder. IEEE Trans. Image Process., 16(10):2436–2448, Oct. 2007.

[147] R. Puri and K. Ramchandran. PRISM: a new robust video coding architecture based on distributed compression principles. In Proc. Allerton Conf. Commun., Contr. and Comput., Monticello, Illinois, 2002.

[148] R. Puri and K. Ramchandran. PRISM: a ‘reversed’ multimedia coding paradigm. In Proc. IEEE Internat. Conf. Image Process., Barcelona, Spain, 2003.

[149] R. Puri and K. Ramchandran. PRISM: an uplink-friendly multimedia coding paradigm. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Hong Kong, China, 2003.

[150] S. Rane. Systematic lossy error protection of video signals. Ph.D. dissertation, Stanford University, 2007.


[151] S. Rane, A. M. Aaron, and B. Girod. Systematic lossy forward error protection for error resilient digital video broadcasting. In SPIE Visual Communications and Image Process. Conf., San Jose, California, 2004.

[152] S. Rane, P. Baccichet, and B. Girod. Systematic lossy error protection of video signals. IEEE Trans. Circuits Syst. Video Technol., 18(10):1347–1360, Nov. 2008.

[153] D. Rebollo-Monedero. Quantization and transforms for distributed source coding. Ph.D. dissertation, Stanford University, 2007.

[154] D. Rebollo-Monedero, A. M. Aaron, and B. Girod. Transforms for high-rate distributed source coding. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2003.

[155] D. Rebollo-Monedero and B. Girod. Design of optimal quantizers for distributed coding of noisy sources. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Philadelphia, Pennsylvania, 2005.

[156] D. Rebollo-Monedero and B. Girod. Network distributed quantization. In Proc. IEEE Inform. Theory Workshop, Tahoe City, California, 2007.

[157] D. Rebollo-Monedero, S. Rane, A. M. Aaron, and B. Girod. High-rate quantization and transform coding with side information at the decoder. EURASIP Signal Process. J., 86(11):3160–3179, Nov. 2006.

[158] D. Rebollo-Monedero, S. Rane, and B. Girod. Wyner-Ziv quantization and transform coding of noisy sources at high rates. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2004.

[159] D. Rebollo-Monedero, R. Zhang, and B. Girod. Design of optimal quantizers for distributed source coding. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2003.

[160] T. J. Richardson, A. Shokrollahi, and R. L. Urbanke. Design of capacity-approaching irregular low-density parity-check codes. IEEE Trans. Inform. Theory, 47(2):619–637, Feb. 2001.


[161] T. J. Richardson and R. L. Urbanke. The capacity of low-density parity-check codes under message-passing decoding. IEEE Trans. Inform. Theory, 47(2):599–618, Feb. 2001.

[162] T. J. Richardson and R. L. Urbanke. Modern Coding Theory. Cambridge University Press, 2008.

[163] A. Roumy, K. Lajnef, and C. Guillemot. Rate-adaptive turbo syndrome scheme for Slepian-Wolf coding. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2007.

[164] D. N. Rowitch and L. B. Milstein. On the performance of hybrid FEC/ARQ systems using rate compatible punctured turbo (RCPT) codes. IEEE Trans. Commun., 48(6):948–959, Jun. 2000.

[165] O. Roy and M. Vetterli. Distributed spatial audio coding in wireless hearing aids. In Proc. IEEE Applic. Signal Process. Audio Acoustics Workshop, New Paltz, New York, 2007.

[166] M. Sartipi and F. Fekri. Distributed source coding in wireless sensor networks using LDPC coding: the entire Slepian-Wolf rate region. In Proc. IEEE Wireless Commun. Networking Conf., New Orleans, Louisiana, 2005.

[167] M. Sartipi and F. Fekri. Distributed source coding using short to moderate length rate-compatible LDPC codes: the entire Slepian-Wolf rate region. IEEE Trans. Commun., 56(3):400–411, Mar. 2008.

[168] D. Schonberg, S. S. Pradhan, and K. Ramchandran. LDPC codes can achieve the Slepian-Wolf bound for general binary sources. In Proc. Allerton Conf. Commun., Contr. and Comput., Monticello, Illinois, 2002.

[169] D. Schonberg, S. S. Pradhan, and K. Ramchandran. Distributed code construction for the entire Slepian-Wolf rate region for arbitrarily correlated sources. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2003.


[170] D. Schonberg, S. S. Pradhan, and K. Ramchandran. Distributed code construction for the entire Slepian-Wolf rate region for arbitrarily correlated sources. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2004.

[171] A. Sehgal and N. Ahuja. Robust predictive coding and the Wyner-Ziv problem. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2003.

[172] A. Sehgal, A. Jagmohan, and N. Ahuja. A causal state-free video encoding paradigm. In Proc. IEEE Internat. Conf. Image Process., Barcelona, Spain, 2003.

[173] S. D. Servetto. Lattice quantization with side information. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2000.

[174] S. Sesia, G. Caire, and G. Vivier. Incremental redundancy hybrid ARQ schemes based on low-density parity-check codes. IEEE Trans. Commun., 52(8):1311–1321, Aug. 2004.

[175] C. E. Shannon. A mathematical theory of communication. Bell Syst. Tech. J., 27:379–423, 623–656, 1948.

[176] G. Simonyi. On Witsenhausen’s zero-error rate for multiple sources. IEEE Trans. Inform. Theory, 49(12):3258–3261, Dec. 2003.

[177] D. Slepian and J. K. Wolf. Noiseless coding of correlated information sources. IEEE Trans. Inform. Theory, 19(4):471–480, Jul. 1973.

[178] B. Song, O. Bursalioglu, A. Roy-Chowdhury, and E. Tuncel. Towards a multi-terminal video compression algorithm using epipolar geometry. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Toulouse, France, 2006.

[179] B. Song, E. Tuncel, and A. Roy-Chowdhury. Towards a multi-terminal video compression algorithm by integrating distributed source coding with geometrical constraints. J. Multimedia, 2(3):9–16, Jun. 2007.

[180] V. Stankovic, A. Liveris, Z. Xiong, and C. Georghiades. Design of Slepian-Wolf codes by channel code partitioning. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2004.


[181] V. Stankovic, A. Liveris, Z. Xiong, and C. Georghiades. On code design for the Slepian-Wolf problem and lossless multiterminal networks. IEEE Trans. Inform. Theory, 52(4):1495–1507, Apr. 2006.

[182] Y. Steinberg and N. Merhav. On successive refinement for the Wyner-Ziv problem. IEEE Trans. Inform. Theory, 50(8):1636–1654, Aug. 2004.

[183] Y. Steinberg and N. Merhav. On hierarchical joint source-channel coding with degraded side information. IEEE Trans. Inform. Theory, 52(3):886–903, Mar. 2006.

[184] J. K. Su, J. J. Eggers, and B. Girod. Illustration of the duality between channel coding and rate distortion with side information. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2000.

[185] Y. Sutcu, S. Rane, J. S. Yedidia, S. C. Draper, and A. Vetro. Feature extraction for a Slepian-Wolf biometric system using LDPC codes. In Proc. Internat. Symp. Inform. Theory, Toronto, Ontario, Canada, 2008.

[186] Y. Sutcu, S. Rane, J. S. Yedidia, S. C. Draper, and A. Vetro. Feature transformation for a Slepian-Wolf biometric system based on error correcting codes. In Proc. Comput. Vision Pattern Recog. Biometrics Workshop, Anchorage, Alaska, 2008.

[187] M. Tagliasacchi, G. Prandi, and S. Tubaro. Symmetric distributed coding of stereo video sequences. In Proc. IEEE Internat. Conf. Image Process., San Antonio, Texas, 2007.

[188] M. Tagliasacchi, G. Valenzise, M. Naccari, and S. Tubaro. Reduced-reference video quality assessment using distributed source coding. Springer Multimedia Tools Applic. Accepted.

[189] M. Tagliasacchi, G. Valenzise, and S. Tubaro. Localization of sparse image tampering via random projections. In Proc. IEEE Internat. Conf. Image Process., San Diego, California, Oct. 2008.


[190] P. Tan and J. Li. Enhancing the robustness of distributed compression using ideas from channel coding. In Proc. IEEE Global Commun. Conf., St. Louis, Missouri, 2005.

[191] P. Tan and J. Li. A practical and optimal symmetric Slepian-Wolf compression strategy using syndrome formers and inverse syndrome formers. In Proc. Allerton Conf. Commun., Contr. and Comput., Monticello, Illinois, 2005.

[192] C. Tang, N.-M. Cheung, A. Ortega, and C. S. Raghavendra. Efficient inter-band prediction and wavelet based compression for hyperspectral imagery: A distributed source coding approach. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2005.

[193] M. Tanimoto. Free viewpoint television – FTV. In Proc. Picture Coding Symp., San Francisco, California, 2004.

[194] M. Tanimoto. FTV (free viewpoint television) creating ray-based image engineering. In Proc. IEEE Internat. Conf. Image Process., Genoa, Italy, 2005.

[195] M. Tanimoto. Tanimoto Lab. Dept. of Info. Elec. Nagoya University, visited on Mar. 9, 2010. Available online at http://www.tanimoto.nuee.nagoya-u.ac.jp/english.

[196] M. Tanimoto, T. Fujii, and N. Fukushima. 1D parallel test sequences for MPEG-FTV (M15378). In ISO/IEC JTC1/SC29/WG11, Archamps, France, 2008.

[197] S. ten Brink. Designing iterative decoding schemes with the extrinsic information transfer chart. AEU Int. J. Electron. Commun., 54(6):389–398, Dec. 2000.

[198] G. Toffetti, M. Tagliasacchi, M. Marcon, A. Sarti, S. Tubaro, and K. Ramchandran. Image compression in a multi-camera system based on a distributed source coding approach. In Proc. Euro. Signal Process. Conf., Antalya, Turkey, 2005.

[199] V. Toto-Zarasoa, A. Roumy, and C. Guillemot. Rate-adaptive codes for the entire Slepian-Wolf region and arbitrarily correlated sources. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Las Vegas, Nevada, 2008.


[200] E. Tuncel. Kraft inequality and zero-error source coding with decoder side information. IEEE Trans. Inform. Theory, 53(12):4810–4816, Dec. 2007.

[201] S. Tung. Multiterminal source coding. Ph.D. dissertation, Cornell University, 1977.

[202] G. Valenzise, M. Naccari, M. Tagliasacchi, and S. Tubaro. Reduced-reference estimation of channel-induced video distortion using distributed source coding. In Proc. ACM Multimedia, Vancouver, British Columbia, Canada, Oct. 2008.

[203] G. Valenzise, G. Prandi, and M. Tagliasacchi. Identification of sparse audio tampering using distributed source coding and compressive sensing techniques. EURASIP Image Video Process. J., 2009. doi: 10.1155/2009/158982.

[204] D. P. Varodayan, A. M. Aaron, and B. Girod. Rate-adaptive distributed source coding using low-density parity-check codes. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2005.

[205] D. P. Varodayan, A. M. Aaron, and B. Girod. Rate-adaptive codes for distributed source coding. EURASIP Signal Process. J., 86(11):3123–3130, Nov. 2006.

[206] D. P. Varodayan, D. M. Chen, M. Flierl, and B. Girod. Wyner-Ziv coding of video with unsupervised motion vector learning. EURASIP Signal Process.: Image Commun. J., 23(5):369–378, Jun. 2008.

[207] D. P. Varodayan, Y.-C. Lin, and B. Girod. Audio authentication based on distributed source coding. In Proc. IEEE Internat. Conf. Acoustics Speech Signal Process., Las Vegas, Nevada, Apr. 2008.

[208] D. P. Varodayan, Y.-C. Lin, A. Mavlankar, M. Flierl, and B. Girod. Wyner-Ziv coding of stereo images with unsupervised learning of disparity. In Proc. Picture Coding Symp., Lisbon, Portugal, 2007.

[209] D. P. Varodayan, A. Mavlankar, M. Flierl, and B. Girod. Distributed coding of random dot stereograms with unsupervised learning of disparity. In Proc. IEEE Internat. Workshop Multimedia Signal Process., Victoria, British Columbia, Canada, 2006.

[210] D. P. Varodayan, A. Mavlankar, M. Flierl, and B. Girod. Distributed grayscale stereo image coding with unsupervised learning of disparity. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2007.

[211] A. Wagner and V. Anantharam. An infeasibility result for the multiterminal source-coding problem. IEEE Trans. Inform. Theory. Submitted.

[212] A. Wagner and V. Anantharam. An improved outer bound for the multiterminal source coding problem. In Proc. Internat. Symp. Inform. Theory, Adelaide, Australia, 2005.

[213] A. Wagner, S. Tavildar, and P. Viswanath. Rate region of the quadratic Gaussian two-encoder source-coding problem. IEEE Trans. Inform. Theory, 54(5):1938–1961, May 2008.

[214] J. Wang, A. Majumdar, and K. Ramchandran. Robust transmission over a lossy network using a distributed source coded auxiliary channel. In Proc. Picture Coding Symp., San Francisco, California, 2004.

[215] X. Wang and M. Orchard. Design of trellis codes for source coding with side information at the decoder. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2001.

[216] S. B. Wicker. Error Control Systems for Digital Communication and Storage. Prentice Hall, 1995.

[217] T. Wiegand, G. Sullivan, G. Bjøntegaard, and A. Luthra. Overview of the H.264/AVC video coding standard. IEEE Trans. Circuits Syst. Video Technol., 13(7):560–576, Jul. 2003.

[218] B. Wilburn, N. Joshi, V. Vaish, E.-V. Talvala, E. Antunez, A. Barth, A. Adams, M. Levoy, and M. Horowitz. High performance imaging using large camera arrays. ACM Trans. Graph., 24(3):765–776, Jul. 2005.


[219] B. Wilburn, M. Smulski, H.-H. Lee, and M. Horowitz. The light field video camera. In SPIE Media Processors Conf., San Jose, California, 2002.

[220] H. S. Witsenhausen. The zero-error side information problem and chromatic numbers. IEEE Trans. Inform. Theory, 22(5):592–593, Sept. 1976.

[221] H. S. Witsenhausen. Indirect rate-distortion problems. IEEE Trans. Inform. Theory, 26(5):518–521, Sept. 1980.

[222] H. S. Witsenhausen and A. D. Wyner. Interframe coder for video signals. US Patent 4191970, Tech. Rep., Nov. 1980.

[223] A. D. Wyner. Recent results in the Shannon theory. IEEE Trans. Inform. Theory, 20(1):2–10, Jan. 1974.

[224] A. D. Wyner. On source coding with side information at the decoder. IEEE Trans. Inform. Theory, 21(5):294–300, May 1975.

[225] A. D. Wyner. The rate-distortion function for source coding with side information at the decoder-II: general sources. Inf. Control, 38(1):60–80, Jul. 1978.

[226] A. D. Wyner and J. Ziv. The rate-distortion function for source coding with side information at the decoder. IEEE Trans. Inform. Theory, 22(1):1–10, Jan. 1976.

[227] Z. Xiong, A. Liveris, and S. Cheng. Distributed source coding for sensor networks. IEEE Signal Process. Mag., 21(5):80–94, Mar. 2004.

[228] Z. Xiong, A. Liveris, S. Cheng, and Z. Liu. Nested quantization and Slepian-Wolf coding: a Wyner-Ziv coding paradigm for i.i.d. sources. In Proc. IEEE Workshop on Statistical Signal Process., St. Louis, Missouri, 2003.

[229] Q. Xu and Z. Xiong. Layered Wyner-Ziv video coding. IEEE Trans. Image Process., 15(12):3791–3803, Dec. 2006.

[230] T. Yamada, Y. Miyamoto, and M. Serizawa. No-reference video quality estimation based on error-concealment effectiveness. In Proc. IEEE Packet Video Conf., Lausanne, Switzerland, Nov. 2007.


[231] T. Yamada, Y. Miyamoto, M. Serizawa, and H. Harasaki. Reduced-reference based video quality metrics using representative luminance values. In Proc. Internat. Video Process. Quality Metrics Consumer Electronics Workshop, Scottsdale, Arizona, Jan. 2007.

[232] H. Yamamoto and K. Itoh. Source coding theory for multiterminal communication systems with a remote source. Trans. IECE Japan, E63:700–706, Oct. 1980.

[233] Y. Yan and T. Berger. Zero-error instantaneous coding of correlated sources with length constraints is NP-complete. IEEE Trans. Inform. Theory, 52(4):1705–1708, Apr. 2006.

[234] M. Yang, W. E. Ryan, and Y. Li. Design of efficiently encodable moderate-length high-rate irregular LDPC codes. IEEE Trans. Commun., 52(4):564–571, Apr. 2004.

[235] Y. Yang, S. Cheng, Z. Xiong, and W. Zhao. Wyner-Ziv coding based on TCQ and LDPC codes. In Proc. Asilomar Conf. on Signals, Syst., Comput., Pacific Grove, California, 2003.

[236] Y. Yang, V. Stankovic, Z. Xiong, and W. Zhao. Two-terminal video coding. IEEE Trans. Image Process., 18(3):534–551, Mar. 2009.

[237] Y. Yang, V. Stankovic, W. Zhao, and Z. Xiong. Multiterminal video coding. In Proc. IEEE Internat. Conf. Image Process., San Antonio, Texas, 2007.

[238] C. Yeo and K. Ramchandran. Robust distributed multiview video compression for wireless camera networks. In SPIE Visual Communications and Image Process. Conf., San Jose, California, 2007.

[239] R. Zamir. The rate loss in the Wyner-Ziv problem. IEEE Trans. Inform. Theory, 42(6):2073–2084, Nov. 1996.

[240] R. Zamir and S. Shamai. Nested linear/lattice codes for Wyner-Ziv encoding. In IEEE Inform. Theory Workshop, Killarney, Ireland, 1998.


[241] R. Zamir, S. Shamai, and U. Erez. Nested linear/lattice codes for structured multiterminal binning. IEEE Trans. Inform. Theory, 48(6):1250–1276, Jun. 2002.

[242] Q. Zhao and M. Effros. Optimal code design for lossless and near lossless source coding in multiple access networks. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2001.

[243] Y. Zhao and J. García-Frías. Data compression of correlated non-binary sources using punctured turbo codes. In Proc. IEEE Data Compression Conf., Snowbird, Utah, 2002.

[244] Y. Zhao and J. García-Frías. Joint estimation and data compression of correlated non-binary sources using punctured turbo codes. In Proc. Conf. Inform. Sciences Syst., Princeton, New Jersey, 2002.

[245] Y. Zhao and J. García-Frías. Turbo codes for symmetric compression of correlated binary sources with hidden Markov correlation. In Proc. CTA Commun. Networks Symp., College Park, Maryland, 2003.

[246] Y. Zhao and J. García-Frías. Joint estimation and compression of correlated non-binary sources using punctured turbo codes. IEEE Trans. Commun., 53(3):385–390, Mar. 2005.

[247] Y. Zhao and J. García-Frías. Turbo compression/joint source channel coding of correlated binary sources with hidden Markov correlation. EURASIP Signal Process. J., 86(11):3115–3122, Nov. 2006.

[248] W. Zhong and J. García-Frías. Compression of non-binary sources using LDPC codes. In Proc. Conf. Inform. Sciences Syst., Baltimore, Maryland, 2005.

[249] G. Zhu and F. Alajaji. Turbo codes for nonuniform memoryless sources over noisy channels. IEEE Commun. Lett., 6(2):64–66, Feb. 2002.

[250] X. Zhu, A. M. Aaron, and B. Girod. Distributed compression for large camera arrays. In Proc. IEEE Workshop on Statistical Signal Process., St. Louis, Missouri, 2003.