
Communicating Correlated Sources over 2−user MAC

Arun Padakandla

Center for Science of Information, Purdue University

June 28, 2017

Communicating distributed correlated sources over channels, Part 2: Joint source-channel coding technique

Arun Padakandla

Center for Science of Information, Purdue University

June 28, 2017

Problem Setup : Transmitting Correlated sources over 2−user MAC

[Figure: correlated sources (S1, S2) ~ W_{S1S2}; Tx1 maps S1 to X1, Tx2 maps S2 to X2; the 2-user MAC W_{Y|X1X2} delivers Y to the receiver Rx, which must reconstruct (S1, S2).]

Shannon-theoretic study

?? Optimal coding tech. ??

?? Necessary and sufficient conditions in terms of W_{S1S2}, W_{Y|X1X2} ??

Plain Vanilla Lossless source-channel coding over 2-user MAC.

1. A new coding tech.

2. Characterize performance, derive new sufficient conditions.



Big picture : Where is this going?

General multi-terminal problems : Seems NO single-letter (S-L) expr. for capacity.

Example : Our favorite point-to-point channel (PTP).

[Figure: X → W_{Y|X} → Y]

Capacity [Shannon ’48] = sup_{p_X} I(X; Y),    (1)

and NOT

Capacity = sup_{n≥1} sup_{p_{X^n}} (1/n) I(X^n; Y^n).

sup_{p_X} and sup_{n≥1} sup_{p_{X^n}}: a world of difference.

Associated with (1) is a single-letter (S-L) coding tech.



Big picture : Where is this going?

What if this were not true? A general multi-terminal scenario

[Figure: a general multi-terminal scenario: m transmitters Tx1, ..., Txm with sources S1, ..., Sm and channel inputs X1, ..., Xm; channel W_{Y|X}; j receivers Rx1, ..., Rxj observe Y1, ..., Yj and reconstruct T1, ..., Tj.]

Capacity > sup_{p_{X|S}} α(p_{S1···Sm X1···Xm Y1···Yj}) for every functional α(·).

Admits no optimal S-L coding technique.

This work addresses the above difficulty.

Contribution : S-L expr. for multi-letter (M-L) coding techs.

A multi-letter (M-L) coding tech. built by carefully stitching together S-L techs.

Characterize (inner bound) ‘performance’ via S-L expr. Yes: sup_{p_X} α(p_{XY}). NOT: sup_{n≥1} sup_{p_{X^n}}.

For a general problem instance.

Derived S-L expr. strictly larger than all currently known inner bounds derived via S-L coding techs. (Examples.)

Derived S-L expr. subsumes the current largest known inner bounds, and strictly enlarges them for examples.

[Figures: the 2-user MAC setup ((S1, S2) ~ W_{S1S2}, inputs X1, X2, channel W_{Y|X1X2}, output Y at Rx) and the 2-user interference channel setup (2-IC, channel W_{Y1Y2|X1X2}, with Rx1 decoding S1 from Y1 and Rx2 decoding S2 from Y2).]




Multi-letter coding tech. : Why?? Necessary??

Prior work : Cover, El Gamal and Salehi (CES) technique [Nov. 1980]. Dueck’s example [March 1981].

(S1, S2) transmissible over MAC if

H(S1|S2) < I(X1; Y | X2, S2, U),    H(S2|S1) < I(X2; Y | X1, S1, U),

H(S1, S2 | K) < I(X1, X2; Y | U),    H(S1, S2) < I(X1, X2; Y)

for a valid pmf W_{S1S2} p_U p_{X1|U,S1} p_{X2|U,S2} W_{Y|X1X2}.

The Gács-Körner part has to be specially coded via a common conditional codebook.

Ignoring the GK part, treating it as soft correlation, is strictly sub-optimal.
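The GK part can be made concrete in code. Below is a minimal sketch (my own illustration, not from the talk; all names are mine) that computes the Gács-Körner common part of (S1, S2) from the support of their joint pmf: K labels the connected components of the bipartite graph on the support, and any tiny cross-component mass collapses K.

```python
from itertools import product

def gk_common_part(pmf, tol=1e-12):
    """Gacs-Korner common part of (S1, S2) given their joint pmf.

    pmf maps (s1, s2) -> probability. The GK common part is K = f(S1) = g(S2),
    where f, g label the connected components of the bipartite graph whose
    edges are the support of pmf. Returns {('s1', s1) or ('s2', s2): label}.
    """
    parent = {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent.setdefault(a, a)
        parent.setdefault(b, b)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb

    for (s1, s2), p in pmf.items():
        if p > tol:
            union(('s1', s1), ('s2', s2))

    labels, comp = {}, {}
    for x in parent:
        comp[x] = labels.setdefault(find(x), len(labels))
    return comp

# Support splits into blocks {0,1}x{0,1} and {2,3}x{2,3}: K carries 1 bit.
pmf = {(a, b): 0.125 for a, b in product(range(2), range(2))}
pmf.update({(a, b): 0.125 for a, b in product(range(2, 4), range(2, 4))})
comp = gk_common_part(pmf)

# A tiny cross-block leak merges the components: the GK part vanishes
# discontinuously, the fragility discussed on the following slides.
pmf_leak = dict(pmf)
pmf_leak[(0, 2)] = 1e-6  # illustrative, leaving pmf unnormalized
comp_leak = gk_common_part(pmf_leak)
```

With the leak, every symbol falls into a single component, so K becomes trivial even though the joint pmf moved only infinitesimally.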

Conditional coding + separation : NO S-L long Markov chain (LMC)

[Figure: BLOCK MAPPING at each encoder: the entire block (S1(1), ..., S1(n)) together with (U(1), ..., U(n)) determines (X1(1), ..., X1(n)); similarly (S2^n, U^n) determines X2^n.]

X1(i) = f_{1i}(S1^n) and X2(i) = f_{2i}(S2^n).

X1(i) − S1(i) − S2(i) − X2(i) does NOT hold.

In fact, X1 − U S1 − U S2 − X2 holds. Since U(i) = g_i(K^n), we potentially have

X1(i) − S1^n − S2^n − X2(i),

permitting no more reduction.

“Gács-Körner (GK) part permits correlation via an n-letter LMC, with n → ∞.”
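The failure of the single-letter chain under block maps can be checked numerically. The sketch below is my own toy example (not from the talk), assuming a doubly symmetric binary source: it builds the joint pmf of (X1(1), S1(1), S2(1), X2(1)) for block length n = 2 and tests whether X1 − S1 − S2 − X2 factorizes.

```python
from itertools import product

# Per-letter joint source: doubly symmetric binary source.
W = {(0, 0): 0.4, (1, 1): 0.4, (0, 1): 0.1, (1, 0): 0.1}

def joint_at_letter_1(f1, f2):
    """Joint pmf of (X1(1), S1(1), S2(1), X2(1)) for block length n = 2,
    where X1(1) = f1(S1(1), S1(2)) and X2(1) = f2(S2(1), S2(2))."""
    p = {}
    for (a1, b1), (a2, b2) in product(W, W):  # two iid source letters
        key = (f1(a1, a2), a1, b1, f2(b1, b2))
        p[key] = p.get(key, 0.0) + W[(a1, b1)] * W[(a2, b2)]
    return p

def is_long_markov(p, tol=1e-12):
    """Does p(x1, s1, s2, x2) = p(s1, s2) p(x1|s1) p(x2|s2) hold?"""
    ps1s2, px1s1, px2s2, ps1, ps2 = {}, {}, {}, {}, {}
    for (x1, s1, s2, x2), v in p.items():
        ps1s2[(s1, s2)] = ps1s2.get((s1, s2), 0.0) + v
        px1s1[(x1, s1)] = px1s1.get((x1, s1), 0.0) + v
        px2s2[(x2, s2)] = px2s2.get((x2, s2), 0.0) + v
        ps1[s1] = ps1.get(s1, 0.0) + v
        ps2[s2] = ps2.get(s2, 0.0) + v
    for x1, s1, s2, x2 in product([0, 1], repeat=4):
        rhs = (ps1s2.get((s1, s2), 0.0)
               * px1s1.get((x1, s1), 0.0) / ps1[s1]
               * px2s2.get((x2, s2), 0.0) / ps2[s2])
        if abs(p.get((x1, s1, s2, x2), 0.0) - rhs) > tol:
            return False
    return True

# Per-letter maps: the S-L long Markov chain holds.
per_letter = is_long_markov(joint_at_letter_1(lambda a1, a2: a1, lambda b1, b2: b1))
# Block maps (each input depends on the *other* source letter): it fails.
block = is_long_markov(joint_at_letter_1(lambda a1, a2: a2, lambda b1, b2: b2))
print(per_letter, block)  # True False
```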



Stripped Gács-Körner part ⇒ Constrained to S-L LMC

[Figure: sources S1, S2 with a perfect GK common part K.]

X1 − U S1 − U S2 − X2 and hence X1 − S1^n − S2^n − X2 permitted.

Stripped Gács-Körner part ⇒ Constrained to S-L LMC

[Figure: sources S1, S2 with near-common parts K1, K2.]

P(K1 ≠ K2) = ξ > 0 is very tiny.

Constrained to X1 − S1 − S2 − X2.

Discontinuous shrinkage of valid input PMFs.

Sub-optimality of the Cover-El Gamal-Salehi rate region.


Dueck’s ingenious example [March 1981]

[Figure: the 2-user MAC setup as before: (S1, S2) ~ W_{S1S2}, inputs X1, X2, channel W_{Y|X1X2}, output Y at Rx.]

Identifies a sequence of MACs W_{Y|X1X2} and source pairs W_{S1S2} such that

P(S1 ≠ S2) → 0 and, for all source-channel pairs sufficiently far along this sequence,

for any X1 − S1 − S2 − X2, we have I(X1, X2; Y) < H(S1, S2).

Yet the source pair is transmissible over the MAC.

Import of Dueck’s example

Necessary to induce correlation via a block of symbols onto channel inputs.

This necessitates a multi-letter coding.

Focus of this talk

1. Desired coding structure.

2. Overall coding tech.

3. Challenges.

4. Tools.

Conditional coding of GK part

⇓ Correlation induced via multi-letter LMC

Absence of GK part

⇓ Constrained to S-L LMC

Induce l−letter correlation via approximate conditional coding.

Approximate conditional coding of near, but not perfect, GK parts

[Figure: S^n → K^n → U^n; with a perfect GK part both encoders compute the same K^n and hence the same codeword U^n.]

[Figure: S1^n → K1^n → U1^n and S2^n → K2^n → U2^n; with imperfect GK parts the encoders’ codewords can differ.]

P(K1^n ≠ K2^n) = 1 − (1 − ξ)^n → 1 as n → ∞.

FIX the BLOCK-LENGTH of (inner) conditional code to l.
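The need to fix l can be seen in a one-line computation (the value ξ = 0.001 is purely illustrative):

```python
# P(K1^n != K2^n) = 1 - (1 - xi)^n when the encoders' common-part
# estimates disagree independently with probability xi per letter.
def block_disagreement(xi, n):
    return 1.0 - (1.0 - xi) ** n

xi = 0.001  # illustrative per-letter disagreement probability
for n in (10, 100, 10_000, 1_000_000):
    print(n, block_disagreement(xi, n))
# The probability drifts to 1 as n grows, so conditional coding over the
# whole block fails; with l held fixed, it stays of order xi * l.
```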

Fixed B-L (inner) conditional code ⇒ non-vanishing Prob. of error

[Figure: (K1(1), ..., K1(l)) encoded to (U1(1), ..., U1(l)).]

U1^l will be decoded with non-vanishing prob. of error.

[Figure: an m × l matrix of K1 symbols, rows K1(t, 1:l) for t = 1, ..., m, encoded row-by-row to an m × l matrix of U1 symbols, rows U1(t, 1:l).]

Code over many codewords of the inner conditional code.

Number of rows m → ∞.

Number of columns l remains constant.

Both encoders employ IDENTICAL inner codes and maps

[Figure: encoder 1 maps its m × l matrix K1 row-by-row to the matrix U1; encoder 2 maps K2 row-by-row to U2, using the same inner code and maps.]

Inner code error events

1. Cannot assign a U-codeword to every l-length vector in K^l.

- τ_l = prob. of an unassigned U-codeword.

2. Encoders do NOT agree on a fraction of chosen U-codewords. How do you handle this?

- Let ξ^[l] := P(K1^l ≠ K2^l) = 1 − (1 − ξ)^l.

3. Decoder INCORRECTLY decodes a fraction of codewords on which the encoders agree.

- Let g_l denote the prob. of error of the U-codebook on the U−Y channel.

Inner code prob. of error: φ := τ_l + ξ^[l] + g_l

How much more information needs to be sent?

Inner code decoded row by row. Let the m × l matrix G, with rows G(t, 1:l), denote the decoded version of the K1, K2 matrices.


Simple Fano-type upper bounding

Rows are IID. Treat as super-symbols.

H(S1^l | G^l) ≤ H(K1^l | G^l) + H(S1^l | G^l, K1^l)

≤ H(K1^l, 1{K1^l ≠ G^l} | G^l) + H(S1^l | K1^l)

≤ h_b(φ) + l φ log|K| + l H(S1|K1)

≤ l ( L_S(φ, |K|) + H(S1|K1) )

L_S(φ, |K|) = loss due to incorrect conditional coding.
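The per-symbol loss term can be evaluated directly. A small sketch (my own, with entropies in bits and illustrative parameter values), using L_S(φ, |K|) = h_b(φ)/l + φ log|K| so that l·L_S equals the Fano penalty h_b(φ) + lφ log|K|:

```python
from math import log2

def hb(p):
    """Binary entropy h_b(p) in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

def fano_loss(phi, K_size, l):
    """L_S(phi, |K|) = h_b(phi)/l + phi * log|K|: per-source-symbol
    rate loss caused by the inner code's error probability phi."""
    return hb(phi) / l + phi * log2(K_size)

# The loss vanishes as phi -> 0, so a good (but imperfect) inner code
# costs only a small additive rate penalty.
for phi in (0.1, 0.01, 0.001, 0.0):
    print(phi, fano_loss(phi, K_size=4, l=100))
```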


Questions, Challenges

1. How do you communicate the rest of the information?

- Superposition coding.

2. How do you superimpose on codewords that induce an l-letter pmf?

- Achievable region should be in terms of p_{X1|U}, not p_{X1^l|U^l}.

Superposition coding over a matrix

[Figure: the m × l matrix of U1 symbols, on which outer codewords are to be superimposed.]

Superposing on the matrix requires optimizing over p_{X1^l|U^l}.

Desire S-L characterization: sup_{p_{X1|U}}.

Extract IID sub-vectors from this matrix and superimpose separate codebooks on these sub-vectors.


The rows of Uj are independent and identically distributed.

[Figure: the m × l matrix of U1 symbols; the rows are iid.]

Each row has been obtained by mapping a separate sub-block of source symbols.

Interleaving [Shirani and Pradhan 2014]


The same argument holds for each sub-vector E(1, λ1(i)), ..., E(m, λm(i)), i = 1, ..., l.

We therefore have l iid sub-vectors E(1, λ1(i)), ..., E(m, λm(i)), i = 1, ..., l.

Randomly chosen co-ordinates are iid!!!

Co-ordinates with the same color are coded together within a block in the outer code. l such blocks.
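A minimal sketch of the interleaving step (my own rendering of the idea attributed to Shirani and Pradhan; function and variable names are mine): each row of the U matrix is permuted by an independent uniform permutation, so the i-th interleaved column collects one coordinate per iid row and the l columns are identically distributed, ready to carry separate outer codebooks.

```python
import random

def interleave_rows(U, rng):
    """Permute each row of the m x l matrix U by an independent uniformly
    random permutation. Returns the interleaved matrix and the permutations
    (which the decoder would need to undo the interleaving)."""
    out, perms = [], []
    for row in U:
        perm = list(range(len(row)))
        rng.shuffle(perm)
        perms.append(perm)
        out.append([row[j] for j in perm])
    return out, perms

rng = random.Random(0)
m, l = 6, 4
U = [[rng.randint(0, 1) for _ in range(l)] for _ in range(m)]
V, perms = interleave_rows(U, rng)

# The i-th column of V collects coordinates E(1, perm_1(i)), ..., E(m, perm_m(i)).
columns = [[V[t][i] for t in range(m)] for i in range(l)]
# Each interleaved row is a rearrangement of the original row:
print(all(sorted(V[t]) == sorted(U[t]) for t in range(m)))  # True
```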


Csiszar’s constant composition codes

Pick the test channel p_U to be a type of sequences of length l.

The identical U-codebook chosen at both encoders is a constant composition code of type p_U.

Interleaved vectors will be IID pU !!!!
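A sketch of the constant-composition construction (illustrative parameters; the helper name is mine): every codeword is a permutation of one base word, so the whole codebook lies in a single type class of type p_U, and two encoders building it identically obtain the same codebook.

```python
import random
from collections import Counter

def constant_composition_codebook(composition, num_codewords, seed=0):
    """Codebook in which every codeword has the same empirical type:
    each codeword is an independent random permutation of a base word
    realizing the composition {symbol: count}. Both encoders using the
    same seed obtain the IDENTICAL codebook."""
    rng = random.Random(seed)
    base = [sym for sym, cnt in sorted(composition.items()) for _ in range(cnt)]
    book = []
    for _ in range(num_codewords):
        word = base[:]
        rng.shuffle(word)
        book.append(tuple(word))
    return book

# Type p_U = (1/2, 1/4, 1/4) on {0, 1, 2} with inner block length l = 8.
book = constant_composition_codebook({0: 4, 1: 2, 2: 2}, num_codewords=5)
print(all(Counter(w) == Counter(book[0]) for w in book))  # True: one type class
```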

Upper bound the difference using the relationship between the chosen and actual pmfs

Let p_U p_{X1|U,S1} p_{X2|U,S2} be the chosen pmf.

p_{Uj} = p_U, since the symbols of C_U are chosen iid p_U.

P(U1 ≠ U2) ≤ β.

p_{Xj|Uj,Sj} = p_{Xj|U,Sj}: by choice of the coding technique.

One can derive an upper bound using these relations.


Joint source channel coding using fixed-block length

Kj(t, 1:l) is mapped to Uj(t, 1:l) via a fixed block-length code.

This induces a correlation between Kj and Uj.

Unable to characterize this correlation analytically.

Propose to destroy this correlation by coding across two m× l matrices.
