Discrete memoryless channel and its capacity: tutorial
TRANSCRIPT
-
Outline
- Discrete memoryless channel
- Transmission rate over a noisy channel
- Capacity of DMC
Chapter 3: Discrete memoryless channel and its capacity
Vahid Meghdadi, University of Limoges
Vahid Meghdadi, University of Limoges. Chapter 3: Discrete memoryless channel and its capacity
-
Outline
- Discrete memoryless channel
- Transmission rate over a noisy channel
  - Repetition code
  - Transmission rate
- Capacity of DMC
  - Capacity of a noisy channel
  - Examples
-
Discrete memoryless channel (DMC)
[Figure: block diagram of a discrete memoryless channel, with input random variable X and output random variable Y]
- The input of a DMC is a RV (random variable) X which selects its value from a discrete finite set X.
- The cardinality of X is the number of points in the constellation used.
- In an ideal channel, the output is equal to the input.
- In a non-ideal channel, the output can differ from the input with a given probability.
-
Discrete memoryless channel
- All the transition probabilities from x_i to y_j are gathered in a transition matrix.
- The (i, j) entry of the matrix is P(Y = y_j | X = x_i), which is called the forward transition probability.
- In a DMC, the output of the channel depends only on the input of the channel at the same instant, not on earlier or later inputs.
-
DMC model
[Figure: transition diagram from input symbols 1, ..., i, ..., M of X to output symbols 1, ..., j, ..., M of Y, with transition probabilities p_11, ..., p_ij, ..., p_MM on the branches]
Define $p_i \triangleq P(X = i)$, $q_j \triangleq P(Y = j)$ and $p_{ij} \triangleq P(Y = j \mid X = i)$. Then,

$$q_j = \sum_{i=1}^{M} p_i \, p_{ij}$$
-
DMC probability model
The last equation can be written using matrix form:
$$Q_Y = \begin{bmatrix} q_1 \\ q_2 \\ \vdots \\ q_M \end{bmatrix}
      = \begin{bmatrix}
          p_{11} & p_{21} & \dots  & p_{M1} \\
          p_{12} & p_{22} & \dots  & p_{M2} \\
          \vdots & \vdots & \ddots & \vdots \\
          p_{1M} & p_{2M} & \dots  & p_{MM}
        \end{bmatrix}
        \begin{bmatrix} p_1 \\ p_2 \\ \vdots \\ p_M \end{bmatrix}$$
This equation can be compactly written as:
$$Q_Y = P_{Y|X} P_X$$
Note: $\sum_{j=1}^{M} p_{ij} = 1$, and the symbol error probability is

$$p_e = \sum_{i=1}^{M} p_i \sum_{\substack{j=1 \\ j \neq i}}^{M} p_{ij}$$
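As a quick numeric check, the relations above can be evaluated directly. The 3-symbol transition matrix and input distribution below are illustrative values of my own choosing, not from the slides:

```python
import numpy as np

# Illustrative 3-symbol DMC: row i of T holds p_ij = P(Y = j | X = i),
# so every row sums to 1.
T = np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
p = np.array([0.5, 0.3, 0.2])        # input distribution p_i = P(X = i)

# q_j = sum_i p_i p_ij; equivalently Q_Y = P_{Y|X} P_X with P_{Y|X} = T transposed
q = T.T @ p
print(q)                              # ≈ [0.45 0.31 0.24], sums to 1

# p_e = sum_i p_i sum_{j != i} p_ij (probability the symbol changes)
p_e = sum(p[i] * (1.0 - T[i, i]) for i in range(len(p)))
print(p_e)                            # ≈ 0.2 with these numbers
```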
-
Hard- and soft-decision
- Normally the constellation sizes at the input and at the output are the same, i.e., |X| = |Y|. In this case the receiver employs hard-decision decoding: the decoder makes a firm decision about the transmitted symbol.
- It is also possible that |X| ≠ |Y|. In this case the receiver employs soft-decision decoding.
-
Repetition code
Consider the channel above and say p = 0.1. Observing a one at the output, are we sure what had been sent? We can only say that X was one with probability 90%, which means p_e = 0.1. If we send a one three times and make a majority decision at the output, the probability of error becomes:

$$p_e = 1 - p_c = 1 - \left[(1-p)^3 + \binom{3}{2}(1-p)^2 p\right] = 0.028$$
This means that by reducing the rate, we can arbitrarily decrease the error probability.
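The majority-decision computation is easy to check numerically; a minimal sketch (the function name is mine):

```python
from math import comb

p = 0.1   # crossover probability of the channel

# Majority decoding of a 3-fold repetition succeeds iff at most one bit flips:
# (1-p)^3 + C(3,2)(1-p)^2 p, so it fails with probability 1 minus that.
p_c = (1 - p)**3 + comb(3, 2) * (1 - p)**2 * p
print(1 - p_c)                        # ≈ 0.028

# General odd-n repetition code under majority decoding: error iff > n/2 flips
def rep_error(p, n):
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(rep_error(0.1, 5))              # ≈ 0.00856: longer code, lower error, lower rate
```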
-
Transmission rate
H(X) is the amount of information per symbol at the input of the channel.
H(Y) is the amount of information per symbol at the output of the channel.
H(X|Y) is the amount of uncertainty remaining on X knowing Y.
The information transmission is given by:

$$I(X;Y) = H(X) - H(X|Y) \quad \text{bits/channel use}$$

For an ideal channel X = Y, there is no uncertainty over X when we observe Y, so all the information is transmitted on each channel use: I(X;Y) = H(X).
If the channel is too noisy, X and Y are independent, so the uncertainty over X remains the same whether or not Y is known, i.e., no information passes through the channel: I(X;Y) = 0.
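These quantities are straightforward to compute from the transition matrix. In this sketch (helper names are mine) I(X;Y) is evaluated as H(Y) - H(Y|X), which equals H(X) - H(X|Y):

```python
import numpy as np

def entropy(dist):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    d = np.asarray(dist, dtype=float)
    nz = d[d > 0]
    return float(-(nz * np.log2(nz)).sum())

def mutual_information(p_x, T):
    """I(X;Y) for input distribution p_x and row-stochastic T[i][j] = P(Y=j|X=i)."""
    p_x = np.asarray(p_x, dtype=float)
    q_y = p_x @ T                                   # output distribution
    h_y_given_x = sum(p_x[i] * entropy(T[i]) for i in range(len(p_x)))
    return entropy(q_y) - h_y_given_x

# Ideal channel: Y = X, so I(X;Y) = H(X)
print(mutual_information([0.5, 0.5], np.eye(2)))             # 1.0

# Useless channel: Y independent of X, so I(X;Y) = 0
print(mutual_information([0.5, 0.5], np.full((2, 2), 0.5)))  # 0.0
```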
-
Transmission over noisy channels
The question is: can we send information over a noisy channel without error but with a non-zero rate? The answer is yes, and the limit on the rate is given by Shannon.
Example: For the following channel, two bits are sent for each channel use. Using a uniform input variable X, the input entropy is H(X) = 2 bits per symbol. There is no way to transmit 2 bits per channel use through this channel. However, it is possible to realize an ideal channel if we reduce the rate.
-
Transmission scheme
From the previous example, we change the probability distribution of X to P(X = 0) = P(X = 2) = 0.5 and P(X = 1) = P(X = 3) = 0, so H(X) = 1 bit per symbol. At the receiver, we can detect the transmitted symbol exactly, so the channel has been transformed into an ideal channel with a non-zero rate.
This strategy gives an information transmission rate of 1 bit per symbol. Is it the maximum rate? The answer is yes, because the entropy of Y, i.e. H(Y), is at most 2: H(Y) ≤ 2. The conditional entropy H(Y|X) is the uncertainty over Y given X; but if X is known, there are only two possibilities for Y, giving H(Y|X) = 1. So [H(Y) - H(Y|X)] ≤ 2 - 1 = 1.
Therefore the information rate cannot be greater than 1, and the proposed scheme is optimal.
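The slide's figure is not reproduced here, but a channel consistent with the argument (four symbols, H(Y|X) = 1) is the "noisy typewriter", where input i is received as output i or i+1 (mod 4) with probability 1/2 each. Under that assumption, a sketch:

```python
import numpy as np

def entropy(dist):
    d = np.asarray(dist, dtype=float)
    nz = d[d > 0]
    return float(-(nz * np.log2(nz)).sum())

def mutual_information(p_x, T):
    p_x = np.asarray(p_x, dtype=float)
    h_y_given_x = sum(p_x[i] * entropy(T[i]) for i in range(len(p_x)))
    return entropy(p_x @ T) - h_y_given_x

# Assumed channel: input i is received as i or i+1 (mod 4), prob 1/2 each
T = np.zeros((4, 4))
for i in range(4):
    T[i, i] = T[i, (i + 1) % 4] = 0.5

# Restricting the input to {0, 2} makes the possible output sets {0,1} and
# {2,3} disjoint, so the receiver decodes without error at 1 bit per use.
print(mutual_information([0.5, 0.0, 0.5, 0.0], T))   # 1.0
```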
-
Capacity of a noisy channel
Definition: the channel capacity of a discrete memoryless channel is defined by:
$$C = \max_{p(x)} I(X;Y) = \max_{p(x)} \left[ H(Y) - H(Y|X) \right] = \max_{p(x)} \left[ H(X) - H(X|Y) \right]$$
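The maximization over p(x) rarely has a closed form, but it can be computed numerically with the Blahut-Arimoto algorithm. The code below is a minimal sketch of that algorithm, not a reference implementation:

```python
import numpy as np

def blahut_arimoto(T, iters=500):
    """Capacity in bits/use of a DMC with row-stochastic T[i][j] = P(Y=j|X=i)."""
    m = T.shape[0]
    r = np.full(m, 1.0 / m)              # current guess for the optimal input dist
    for _ in range(iters):
        q = r @ T                        # induced output distribution
        # c_i = 2^{D(T_i || q)}: relative-entropy score of each input symbol
        d = np.where(T > 0, T * np.log2(np.where(T > 0, T, 1.0) / q), 0.0).sum(axis=1)
        c = 2.0 ** d
        r = r * c / (r * c).sum()        # Blahut-Arimoto update
    return np.log2((r * c).sum()), r

p = 0.1
C, r_opt = blahut_arimoto(np.array([[1 - p, p], [p, 1 - p]]))
print(C)       # ≈ 0.531, matching the BSC capacity 1 - H(p, 1-p)
print(r_opt)   # ≈ [0.5, 0.5]
```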
-
Example: binary symmetric channel (BSC)
Choose P_X(x) to maximize I(X;Y).
Solution: the channel can be modeled using a new binary RV Z with:

$$Z = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1-p \end{cases}$$

Therefore we can write Y = X ⊕ Z. When X is given, the information on Y is the same as the information on Z, therefore H(Y|X) = H(Z), and H(Z) = H(p, 1-p). Since H(Y) ≤ 1, we can write I(X;Y) ≤ 1 - H(Z). The maximum of this quantity is obtained when H(Y) = 1, i.e., when P_X(0) = P_X(1) = 0.5. The capacity is C = 1 - H(p, 1-p).
[Figure: BSC transition diagram (0→0 and 1→1 with probability 1-p; 0→1 and 1→0 with probability p) and the equivalent model Y = X ⊕ Z]
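The claim that the uniform input maximizes I(X;Y) can also be checked by brute force over P_X(0); a sketch with helper names of my own:

```python
import numpy as np

def entropy(dist):
    d = np.asarray(dist, dtype=float)
    nz = d[d > 0]
    return float(-(nz * np.log2(nz)).sum())

def bsc_mi(a, p):
    """I(X;Y) = H(Y) - H(Z) for a BSC with crossover p and input P_X(0) = a."""
    q0 = a * (1 - p) + (1 - a) * p        # P(Y = 0)
    return entropy([q0, 1 - q0]) - entropy([p, 1 - p])

p = 0.1
grid = np.linspace(0.0, 1.0, 1001)
mi = [bsc_mi(a, p) for a in grid]
print(grid[int(np.argmax(mi))])           # 0.5: the uniform input wins
print(max(mi))                            # ≈ 0.531 = 1 - H(p, 1-p)
```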
-
Binary erasure channel (BEC)
The binary erasure channel arises when some bits are lost (rather than corrupted); here the receiver knows which bits have been erased. What is the capacity of the BEC?
$$C = \max_{p(x)} \left[ H(Y) - H(Y|X) \right] = \max_{p(x)} \left[ H(Y) - H(a, 1-a) \right]$$
By symmetry, the maximizing input is P(X = 0) = P(X = 1) = 1/2. Then Y has three possible values, with probabilities P(Y = 0) = P(Y = 1) = (1-a)/2 and P(Y = e) = a. So we can write:
$$C = H\!\left(\frac{1-a}{2}, \frac{1-a}{2}, a\right) - H(a, 1-a) = 1 - a \ \text{bit per channel use}$$
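A quick numeric check of this identity (the helper name is mine):

```python
import numpy as np

def entropy(dist):
    d = np.asarray(dist, dtype=float)
    nz = d[d > 0]
    return float(-(nz * np.log2(nz)).sum())

# C = H((1-a)/2, (1-a)/2, a) - H(a, 1-a) should equal 1 - a for any erasure prob a
for a in (0.0, 0.2, 0.5, 0.9):
    C = entropy([(1 - a) / 2, (1 - a) / 2, a]) - entropy([a, 1 - a])
    print(a, C)   # C ≈ 1 - a in every case
```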
-
Exercise
Another way to obtain the capacity of the BEC. Define the RV E for the BEC as

$$E = \begin{cases} 1 & \text{if erasure, with probability } a \\ 0 & \text{if no erasure, with probability } 1-a \end{cases}$$
Because C = max_{p(x)} [H(Y) - H(Y|X)] and H(Y|X) = H(a, 1-a), we calculate H(Y) using this new RV. Using the chain rule, H(Y,E) = H(E) + H(Y|E) = H(Y) + H(E|Y). Since H(E|Y) = 0 (E is a deterministic function of Y: E = 1 exactly when Y = e), we get H(Y) = H(E) + H(Y|E). Now H(E) = H(a, 1-a) and H(Y|E) = P(E = 0)H(Y|E = 0) + P(E = 1)H(Y|E = 1). On the other hand, H(Y|E = 1) = 0 and H(Y|E = 0) = H(X). So H(Y) = H(a, 1-a) + (1-a)H(X), which is maximized by H(X) = 1. The capacity is therefore H(a, 1-a) + (1-a) - H(a, 1-a) = 1 - a.
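The decomposition H(Y) = H(a, 1-a) + (1-a)H(X) can be verified numerically for an arbitrary input bias (the values below are illustrative):

```python
import numpy as np

def entropy(dist):
    d = np.asarray(dist, dtype=float)
    nz = d[d > 0]
    return float(-(nz * np.log2(nz)).sum())

a = 0.3                     # erasure probability (illustrative)
for b in (0.5, 0.2, 0.9):   # b = P(X = 0)
    # Y takes the values 0, 1 or e with these probabilities
    h_y = entropy([b * (1 - a), (1 - b) * (1 - a), a])
    decomp = entropy([a, 1 - a]) + (1 - a) * entropy([b, 1 - b])
    print(h_y, decomp)      # the two columns agree
```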
-
What is the transmission scheme?
We want to know how we can achieve this rate. Suppose a large packet of N symbols is transmitted and a return path is available. The receiver, knowing which symbols were not received, can ask the transmitter to retransmit the lost symbols. In each block, only N - aN symbols get through to the receiver, so the rate 1 - a is achieved.
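The retransmission argument can be illustrated with a small simulation (parameters are illustrative):

```python
import random

random.seed(1)
a = 0.25          # erasure probability
N = 100_000       # information symbols to deliver

uses = 0
pending = N
# Keep retransmitting the erased symbols until every symbol gets through.
while pending:
    uses += pending
    pending = sum(random.random() < a for _ in range(pending))

print(N / uses)   # empirical rate, close to 1 - a = 0.75
```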
Note: Feedback does not increase the capacity of DMC.