
Opportunistic Orthogonal Writing on Dirty Paper

Pramod Viswanath

Joint work with Tie Liu

University of Illinois at Urbana-Champaign

March 1, 2005

The Dirty Paper Channel

y(t) = x(t) + s(t) + w(t)

• w(t): white Gaussian noise with p.s.d. N0/2
• s(t): white Gaussian interference with p.s.d. Ns/2, known noncausally to the transmitter
• Input constraint: power P, bandwidth W

• Observation (Costa '83): the capacity of the channel is

  W log(1 + P / (N0 W))

  the same as if the interference s(t) were absent

• Achieved by an abstract binning scheme
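As a quick numerical aside (a sketch with hypothetical values for P and N0), the capacity formula above approaches the wideband limit P/N0 nats per second as W grows, which is why the wideband regime is naturally judged by energy per bit:

```python
import math

# Hypothetical values, for illustration only.
P, N0 = 1.0, 1.0   # power and noise p.s.d. level

def capacity(W):
    # C = W log(1 + P / (N0 W)), in nats per second
    return W * math.log(1.0 + P / (N0 * W))

# Capacity increases with bandwidth and approaches the wideband limit P/N0.
c = [capacity(W) for W in (1.0, 10.0, 100.0, 1e6)]
assert all(a < b for a, b in zip(c, c[1:]))   # monotone in W
assert abs(c[-1] - P / N0) < 1e-3             # close to P/N0 in the wideband limit
```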

Focus of this talk

y(t) = x(t) + s(t) + w(t)

• WGN w(t) with p.s.d. N0/2
• WG interference s(t) with p.s.d. Ns/2
• Power P, wideband regime (W → ∞)

• Wideband dirty paper channel

Main Result

• Setting: wideband channel
  – Criterion: minimum energy per reliable bit, Eb
• Main result:
  – An explicit binning scheme
  – Ramifications:
    ∗ Typical error events
    ∗ Low-rate VQ for the wideband Gaussian source
    ∗ Generalization to abstract alphabets

PPM and Wideband AWGN Channel

[Figure: a frame of M pulse positions]

• Each message corresponds to a pulse
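A minimal simulation sketch of PPM over AWGN (all parameters hypothetical, chosen only for illustration): the encoder places one pulse at the position indexed by the message, and the decoder picks the position of the largest received sample.

```python
import random

random.seed(0)
M = 16           # number of messages / pulse positions (hypothetical)
A = 6.0          # pulse amplitude (hypothetical)
sigma_w = 1.0    # noise standard deviation (hypothetical)

def ppm_encode(m):
    # One pulse of amplitude A at position m, zero elsewhere.
    return [A if i == m else 0.0 for i in range(M)]

def ppm_decode(y):
    # Pick the position where the received signal is largest.
    return max(range(M), key=lambda i: y[i])

errors = 0
trials = 500
for _ in range(trials):
    m = random.randrange(M)
    y = [x + random.gauss(0.0, sigma_w) for x in ppm_encode(m)]
    errors += (ppm_decode(y) != m)

print(f"PPM error rate: {errors / trials:.3f}")
```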

Opportunistic PPM

[Figure: M pulse positions, each split into K sub-pulses]

• Each message corresponds to K sub-pulses

Encoding Opportunistic PPM

[Figure: the interference waveform s(t) across the K sub-pulses of the chosen message]

• Transmit the sub-pulse (among K possible choices) where s(t) is largest

Decoding Opportunistic PPM

[Figure: the received waveform y(t)]

• Choose the sub-pulse (among MK possible choices) where y(t) is largest
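The encoding and decoding rules above can be sketched end to end in a toy simulation (hypothetical parameters, not the slides' asymptotic regime); the point is that the interference sample under the transmitted sub-pulse adds to the signal rather than fighting it:

```python
import random

random.seed(1)
M, K = 8, 16                   # messages and sub-pulses per message (hypothetical)
A = 4.0                        # transmit amplitude (hypothetical)
sigma_s, sigma_w = 2.0, 1.0    # interference and noise std devs (hypothetical)

def oppm_trial():
    m = random.randrange(M)
    s = [random.gauss(0.0, sigma_s) for _ in range(M * K)]
    w = [random.gauss(0.0, sigma_w) for _ in range(M * K)]
    # Encoder: among message m's K sub-pulses, transmit where s is largest.
    slots = range(m * K, (m + 1) * K)
    tx = max(slots, key=lambda i: s[i])
    y = [s[i] + w[i] + (A if i == tx else 0.0) for i in range(M * K)]
    # Decoder: the sub-pulse where y is largest; its pulse index is the message.
    return max(range(M * K), key=lambda i: y[i]) // K == m

trials = 1000
oppm_correct = sum(oppm_trial() for _ in range(trials))
print(f"OPPM success rate: {oppm_correct / trials:.3f}")
```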

Relieving the Tension

• Transmit signal amplitude

  √Es = √(Eb log M) + √(Ns log K)

• √(Ns log K) is the opportunistic amplitude gain
• Larger K means a larger signal amplitude
• But the decoder has more choices to get confused among (MK sub-pulses)

• Error probability

  P[Error] ≤ MK · P[s + w ≥ √Es]
           ≈ MK · P[s ≥ (Ns / (Ns + N0)) √Es] · P[w ≥ (N0 / (Ns + N0)) √Es]
           ?≈ M · P[w ≥ √(Eb log M)]

• If the last step is possible, then the desired result follows

Choice of K

• Transmit signal amplitude

  √Es = √(Eb log M) + √(Ns log K)

• Choose log K = (Ns / N0) log M
• Then

  (Ns / (Ns + N0)) √Es = √(Ns log K),   (N0 / (Ns + N0)) √Es = √(Eb log M)
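Both identities can be verified numerically. The check below (hypothetical values for N0, Ns, M) takes Eb = N0, the minimum energy per nat of the wideband AWGN channel, which is the operating point these slides target:

```python
import math

# Hypothetical parameter values, chosen only to illustrate the identities.
N0, Ns = 1.0, 3.0      # noise and interference p.s.d. levels
M = 2 ** 20            # number of messages
Eb = N0                # wideband AWGN minimum energy per nat

logM = math.log(M)
logK = (Ns / N0) * logM            # the slide's choice: log K = (Ns/N0) log M

sqrt_Es = math.sqrt(Eb * logM) + math.sqrt(Ns * logK)

# Identity 1: the interference share of the amplitude equals the opportunistic gain.
assert math.isclose(Ns / (Ns + N0) * sqrt_Es, math.sqrt(Ns * logK))
# Identity 2: the noise share equals the AWGN signal amplitude.
assert math.isclose(N0 / (Ns + N0) * sqrt_Es, math.sqrt(Eb * logM))
print("both amplitude-split identities hold")
```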

The AWGN Performance

• Transmit signal amplitude

  √Es = √(Eb log M) + √(Ns log K)

• Choose log K = (Ns / N0) log M, so that

  (Ns / (Ns + N0)) √Es = √(Ns log K),   (N0 / (Ns + N0)) √Es = √(Eb log M)

• Error probability

  P[Error] ≤ MK · P[s ≥ (Ns / (Ns + N0)) √Es] · P[w ≥ (N0 / (Ns + N0)) √Es]
           = MK · P[s ≥ √(Ns log K)] · P[w ≥ √(Eb log M)]
           ≈ M · 1 · P[w ≥ √(Eb log M)]

  (since K · P[s ≥ √(Ns log K)] ≈ 1)

• AWGN performance is achieved

Non-Gaussian Dirt

• Interference s(t) is not AWG
• Make the length N of each pulse long enough:

  (1/√N) ∑ᵢ₌₁ᴺ sᵢ ∼ Gaussian   (central limit theorem)

• Essentially converts the dirt into Gaussian
• Robustness: OPPM works over a (restricted) arbitrarily varying dirt
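The Gaussianization step is just the central limit theorem; here is a quick illustration with deliberately non-Gaussian (uniform) dirt and hypothetical parameters:

```python
import random
import statistics

random.seed(2)
N = 400           # samples summed per block (hypothetical)
trials = 4000

# Non-Gaussian "dirt": uniform on [-1, 1], variance 1/3.
def normalized_sum():
    return sum(random.uniform(-1.0, 1.0) for _ in range(N)) / N ** 0.5

samples = [normalized_sum() for _ in range(trials)]

mean = statistics.fmean(samples)
var = statistics.pvariance(samples)
# By the CLT the normalized sums look N(0, 1/3): near-zero mean, variance ≈ 1/3.
print(f"mean={mean:.3f}  var={var:.3f}")
```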

Story So Far

• OPPM is a simple, concrete scheme that achieves AWGN performance over the wideband dirty paper channel

Utility of OPPM

• OPPM is a simple, concrete scheme that achieves AWGN performance over the wideband dirty paper channel
  – A textbook example
• OPPM is also useful for proving new results

Typical Error Events

• Error occurs in two ways:
• The opportunistic gain is not large enough:

  P[ max_{k=1,...,K} s_k < √(Ns log K) ] ≈ exp(−exp(ε1 log K))

• The interference plus noise is too large:

  P[ s + w > √Es ] ≈ exp(−ε2 log K)

• The typical error event is the same as in the wideband AWGN channel

  The error exponent of the wideband dirty paper channel is the same as that of the AWGN channel
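The contrast between the two decay rates is easy to see numerically: with s_k taken i.i.d. N(0, Ns) and Ns = 1 (hypothetical convention), P[max_k s_k < √(Ns log K)] = (1 − Q(√(log K)))^K is doubly exponential in log K, while a single-sub-pulse tail is only exponential in log K:

```python
import math

def Q(x):
    # Standard normal tail probability.
    return 0.5 * math.erfc(x / math.sqrt(2.0))

K = 10 ** 6
logK = math.log(K)
t = math.sqrt(logK)        # threshold √(Ns log K) with Ns = 1 (hypothetical)

p_single = Q(t)                      # one sub-pulse tail: exponential in log K
p_no_gain = (1.0 - p_single) ** K    # no sub-pulse exceeds: doubly exponential

print(f"exponential tail:        {p_single:.3e}")
print(f"doubly exponential tail: {p_no_gain:.3e}")
```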

OPPM is Structured Binning

• Interpretation of Costa's binning scheme:
  – Each bin is a good quantizer
  – All bins put together form a good channel code
• OPPM fits this theme:
  – Overall it is PPM, a good channel code
• Where is the quantizer?

Pulse Position Quantization

[Figure: a source waveform and its reconstruction as a single pulse at the source's peak]

Optimality of PPQ

• Consider the performance criterion:

  max (Reduction in Distortion) / (Quantization Rate)

• An optimality property:

  PPQ is a best low-rate VQ of a wideband Gaussian source
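A toy sketch of pulse-position quantization (hypothetical parameters; the reconstruction amplitude is fit empirically rather than derived): the quantizer spends its rate on the position of the block's largest sample and reconstructs a single pulse there, buying a distortion reduction of roughly the squared peak:

```python
import random

random.seed(3)
N = 256            # source block length (hypothetical)
trials = 2000

# Fit the reconstruction amplitude c = E[max of the block] empirically.
c = sum(max(random.gauss(0.0, 1.0) for _ in range(N)) for _ in range(trials)) / trials

def ppq_distortion():
    x = [random.gauss(0.0, 1.0) for _ in range(N)]
    k = max(range(N), key=lambda i: x[i])        # position of the peak: the codeword
    xhat = [c if i == k else 0.0 for i in range(N)]
    return sum((a - b) ** 2 for a, b in zip(x, xhat))

avg = sum(ppq_distortion() for _ in range(trials)) / trials
# Zero-rate distortion is E||x||^2 = N; PPQ reduces it by roughly c^2.
print(f"PPQ distortion {avg:.1f} vs zero-rate {N}")
```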

A Deeper Understanding

• Opportunistic PPM is a combination of PPM (a good wideband channel code) and PPQ (a good wideband VQ)
• Generalization:
  – Abstract-alphabet Gelfand-Pinsker channel
  – Abstract cost measure over the input alphabet
  – Performance measure: capacity per unit cost of an abstract Gelfand-Pinsker channel

Gelfand-Pinsker Channel

[Figure: ENC → channel P^n(Y|X, S) → DEC, with the state sequence S^n available at the encoder; input X^n, output Y^n]

• Memoryless channel
• Alphabets: x ∈ X, s ∈ S, y ∈ Y
• Cost: b(x) for x ∈ X

Capacity per Unit Cost

• Largest rate of reliable communication per unit cost:

  max over P_{XU|S} of ( I(U; Y) − I(U; S) ) / E[b(X)]

• U is an auxiliary random variable
• Achieved by an abstract binning scheme
• Main result:

  Generalized Opportunistic PPM achieves the capacity per unit cost

• Need a zero-cost input letter

Generalized PPM

[Figure: M pulses, each of length N; the symbols x occupy one pulse and every other position carries the zero-cost letter 0]

• Achieves the capacity per unit cost of a classical channel

  S. Verdú, "On Channel Capacity per Unit Cost", IEEE Transactions on Information Theory, Vol. 36(5), pp. 1019-1030, September 1990.
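For a channel with a zero-cost input letter, Verdú's formula reduces the capacity per unit cost to max over x of D(P_{Y|x} ‖ P_{Y|0}) / b(x), which is easy to evaluate directly. A sketch for a hypothetical two-input channel:

```python
import math

def kl(p, q):
    # KL divergence D(p || q) in nats, for distributions over the same alphabet.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical binary-input, binary-output channel rows P(Y | X = x),
# with x = 0 the zero-cost letter.
channel = {
    0: [0.9, 0.1],   # P(Y | X = 0)
    1: [0.2, 0.8],   # P(Y | X = 1)
}
cost = {0: 0.0, 1: 1.0}

# With a zero-cost letter, the capacity per unit cost is
#   max over x of D(P_{Y|x} || P_{Y|0}) / b(x).
cpuc = max(kl(channel[x], channel[0]) / cost[x]
           for x in channel if cost[x] > 0)
print(f"capacity per unit cost: {cpuc:.4f} nats per unit cost")
```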

Generalized Opportunistic PPM

[Figure: M pulses, each split into K sub-pulses of length N]

• Each message corresponds to K sub-pulses

Encoding

[Figure: the state sequence S^n across the K sub-pulses; one sub-pulse has empirical type V]

• Transmit x = f(s) over the sub-pulse whose state has type V
• Deterministic function f: S → X
• Need K = exp(N D(V‖S)) sub-pulses
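The count K = exp(N D(V‖S)) comes from the method of types: the probability that N i.i.d. draws of S form a type-V sequence is exp(−N D(V‖S)) to first order in the exponent, so about exp(N D(V‖S)) sub-pulses are needed before one shows the preferred type. A quick Bernoulli check (hypothetical distributions; the two sides agree up to an O(log N / N) polynomial correction):

```python
import math

# Hypothetical: S ~ Bernoulli(0.5), preferred type V = Bernoulli(0.8), block length N.
N, pS, pV = 100, 0.5, 0.8
k = round(pV * N)   # number of ones in a type-V sequence

# Exact log-probability that an i.i.d. Bernoulli(pS) block has type V.
log_prob = (math.lgamma(N + 1) - math.lgamma(k + 1) - math.lgamma(N - k + 1)
            + k * math.log(pS) + (N - k) * math.log(1 - pS))

# Method-of-types exponent: D(V || S) for Bernoulli distributions.
D = pV * math.log(pV / pS) + (1 - pV) * math.log((1 - pV) / (1 - pS))

print(f"-log P(type V)/N = {-log_prob / N:.4f}, D(V||S) = {D:.4f}")
```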

Decoding

• Binary hypothesis test at each sub-pulse:

  H0: P_{Y|X=f(V),V}    vs    H1: P_{Y|X=0,S}

• Fix P[H0 → H1] = ε
• Stein's lemma (almost):

  P[H1 → H0] ≐ exp( −N D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) )

• Independent tests for each sub-pulse
  – MK tests in all
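Stein's lemma can be sanity-checked on a toy binary test (hypothetical distributions standing in for the two conditional laws): fixing the type-I error by thresholding at the H0 mean, the type-II error decays with exponent D(P0 ‖ P1):

```python
import math

# H0: Y ~ Bernoulli(p0), H1: Y ~ Bernoulli(p1); decide H0 when the number of
# ones is at least N/2. The type-I error stays bounded while the type-II
# error decays as exp(-N D(P0 || P1)).
p0, p1, N = 0.5, 0.1, 1000
D = p0 * math.log(p0 / p1) + (1 - p0) * math.log((1 - p0) / (1 - p1))

def log_binom_pmf(k, n, p):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

# log of the type-II error P[#ones >= N/2 under H1], summed in log space.
terms = [log_binom_pmf(k, N, p1) for k in range(N // 2, N + 1)]
m = max(terms)
log_beta = m + math.log(sum(math.exp(t - m) for t in terms))

print(f"-log(beta)/N = {-log_beta / N:.4f}, Stein exponent D = {D:.4f}")
```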

Rate per Unit Cost

• Error probability

  P[Error] ≤ M K exp( −N D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) )
           = M exp( N D(V‖S) ) exp( −N D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) )
           = M exp( −N ( D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) − D(V‖S) ) )

• Cost incurred

  N E[b(f(V))]

• Conclusion: achievable rate per unit cost

  log M / ( N E[b(f(V))] ) = ( D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) − D(V‖S) ) / E[b(f(V))]

Main Result

• Theorem: The capacity per unit cost is

  max over P_V ≪ P_S and f: S → X of ( D( P_{Y|X=f(V),V} ‖ P_{Y|X=0,S} ) − D(V‖S) ) / E[b(f(V))]

• The converse follows by calculus on

  max over P_{XU|S} of ( I(U; Y) − I(U; S) ) / E[b(X)]

• Key idea: identify V (the preferred type) with U (the auxiliary random variable)

Hindsight

• Return to the dirty paper channel:

  y[m] = x[m] + s[m] + w[m]

• Cost function b(x) = x²
• Choose X = x and V = (Ns/N0) x + S
• Then

  ( D( P_{Y|X=x,V} ‖ P_{Y|X=0,S} ) − D(V‖S) ) / x² = 1 / N0
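This evaluation is a short computation with the closed-form Gaussian KL divergence; the sketch below (hypothetical values for N0, Ns, x) uses the slides' convention that w and s have variances N0/2 and Ns/2:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    # D( N(m1, v1) || N(m2, v2) ) in nats.
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1.0)

# Hypothetical values; w and s have variances N0/2 and Ns/2, matching the
# p.s.d. convention used throughout the slides.
N0, Ns, x = 1.0, 3.0, 0.7

# V = (Ns/N0) x + S, so V ~ N((Ns/N0) x, Ns/2) while S ~ N(0, Ns/2).
D_VS = kl_gauss((Ns / N0) * x, Ns / 2, 0.0, Ns / 2)

# Y | X=x, V: Y = x + V + w ~ N(x (1 + Ns/N0), (Ns + N0)/2)
# Y | X=0, S: Y = S + w     ~ N(0,             (Ns + N0)/2)
D_Y = kl_gauss(x * (1 + Ns / N0), (Ns + N0) / 2, 0.0, (Ns + N0) / 2)

ratio = (D_Y - D_VS) / x ** 2
print(f"(D_Y - D_VS) / x^2 = {ratio:.6f}  (1/N0 = {1 / N0:.6f})")
```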

A Summary

• The capacity of Gelfand-Pinsker channels is achieved by a binning scheme
• Binning scheme:
  – Each bin is a good quantizer
  – Overall, a good channel code
• Consider cost-efficient quantization and channel coding
  – S. Verdú, "On Channel Capacity per Unit Cost"
• A combination of cost-efficient quantizers and channel codes achieves the capacity per unit cost of the Gelfand-Pinsker channel

Another Ramification

• Consider low-rate VQ of a wideband Gaussian source x1(t)
• The decoder has access to a correlated wideband Gaussian source x2(t)
• A type of Wyner-Ziv lossy compression
  – A binning scheme suggests no "rate loss"
• The ideas developed here suggest an explicit scheme: OPPQ

OPPQ: Encoding

[Figure: source waveform and its reconstruction without side information]

OPPQ: Reconstruction

[Figure: source waveform and its reconstruction with side information]
