TRANSCRIPT
EDI042 – Error Control Coding (Kodningsteknik)
Chapter 2: Principles of Error Control Coding, Part II
Michael Lentmaier
November 10, 2014
Repetition
System model
- Binary-input AWGN channel
- Binary symmetric channel (BSC), hard decisions
- Coding gain and Shannon limit
Block codes: basic principles
- Length N, dimension K, rate R, code as subspace
- Hamming weight, Hamming distance, minimum distance
- Perfect codes, Hamming bound
- Linear block codes, generator matrix G, parity-check matrix H
- Systematic encoding, equivalent codes
Michael Lentmaier, Fall 2014 EDI042 – Error Control Coding: Chapter 2 1 / 19
Repetition
Example
- Repetition codes: K = 1, R = 1/N, dmin = N
- Single parity-check (SPC) codes: K = N - 1, R = (N-1)/N, dmin = 2
- Hamming codes: N = 2^m - 1, K = N - m, dmin = 3
Correcting capability
- Decoding radius t, detection vs. correction
- Coset decomposition, standard array
- Syndrome s, coset leaders, correctable error patterns e
Problem: How can we find good codes?
Constructing Codes from Other Codes
Code Extension / Lengthening
Theorem
A linear (N, K, dmin) code C with odd minimum distance dmin can be extended to an (N+1, K, dmin+1) code Cext by adding a parity-check symbol to each codeword, i.e.,

  vN+1 = v1 + v2 + · · · + vN

- Proof: The weight of a codeword cannot decrease after extension, and the codewords with odd minimum weight dmin result in codewords with even weight dmin + 1
- The parity-check matrix of an extended code can be written as

  Hext = [           0 ]
         [     H     : ]
         [           0 ]
         [ 1 1 ... 1 1 ]
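The extension theorem is easy to verify numerically. The following is a minimal Python sketch (not part of the slides; the helper names `encode`, `min_distance` and `extend` are ad hoc) that appends an overall parity bit to a (7,4,3) Hamming generator matrix and checks that the minimum distance grows from 3 to 4:

```python
import itertools

# Systematic generator matrix of a (7,4,3) Hamming code
G = [
    [1,0,0,0,0,1,1],
    [0,1,0,0,1,0,1],
    [0,0,1,0,1,1,0],
    [0,0,0,1,1,1,1],
]

def encode(u, G):
    # Codeword v = u G over GF(2)
    n = len(G[0])
    return [sum(u[k]*G[k][j] for k in range(len(G))) % 2 for j in range(n)]

def min_distance(G):
    # For a linear code, dmin equals the minimum nonzero codeword weight
    K = len(G)
    return min(sum(encode(list(u), G))
               for u in itertools.product([0,1], repeat=K) if any(u))

def extend(G):
    # Append v_{N+1} = v_1 + ... + v_N to each row; this suffices
    # because the overall parity bit is a linear function of the row
    return [row + [sum(row) % 2] for row in G]

assert min_distance(G) == 3
assert min_distance(extend(G)) == 4
```

Extending the generator matrix row-wise gives the extended code exactly because parity is linear: the parity of a sum of rows is the sum of the row parities mod 2.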
Code Shortening
Theorem
A linear (N, K, dmin) code, N, K > 1, with systematic generator matrix G can be shortened to an (N-1, K-1, d) code with d ≥ dmin

- Proof: Fix the first information symbol in u to zero and encode with G. Then remove the first symbol from the codeword v. The minimum distance of the code cannot decrease by this procedure.
- A generator matrix of the shortened code can be obtained by removing the first row and the first column from G
Example ((5,2,3) double-shortened Hamming code)
G = [ 1 0 0 0 0 1 1 ]      Gsh = [ 1 0 1 1 0 ]
    [ 0 1 0 0 1 0 1 ]            [ 0 1 1 1 1 ]
    [ 0 0 1 0 1 1 0 ]
    [ 0 0 0 1 1 1 1 ]
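The double shortening in this example can be reproduced mechanically. A short Python sketch (illustrative only): removing the first row and first column of the systematic generator matrix twice yields exactly the Gsh shown above.

```python
# Systematic generator matrix of the parent (7,4,3) Hamming code
G = [
    [1,0,0,0,0,1,1],
    [0,1,0,0,1,0,1],
    [0,0,1,0,1,1,0],
    [0,0,0,1,1,1,1],
]

def shorten(G):
    # Fixing the first information symbol to zero and dropping it
    # amounts to deleting the first row and first column of G
    return [row[1:] for row in G[1:]]

Gsh = shorten(shorten(G))  # double-shortened (5,2) code
assert Gsh == [[1,0,1,1,0],
               [0,1,1,1,1]]
```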
Code Puncturing
Theorem
A linear (N, K, dmin) code, N > 1, can be punctured to an (N-1, K, d) code with dmin - 1 ≤ d ≤ dmin

- Proof: Encode the input vector u with an arbitrary generator matrix G of the code. Then remove the symbol vi from the codeword v for some fixed position i ∈ {1, ..., N}. The number of codewords is preserved by this procedure.
- Puncturing increases the code rate to R = K/(N-1) > K/N
- Shortening reduces the code rate to R = (K-1)/(N-1) < K/N
Example ((5,2,3) code, R = 2/5 = 0.4, i = 1)

  Gsh = [ 1 0 1 1 0 ]
        [ 0 1 1 1 1 ]

Shortening: R = 1/4 = 0.25, dmin = 4
Puncturing: R = 2/4 = 0.5, dmin = 2
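The comparison in this example can be reproduced by a brute-force minimum-distance search. A small Python sketch (illustrative; helper names ad hoc):

```python
import itertools

Gsh = [[1,0,1,1,0],
       [0,1,1,1,1]]  # the (5,2,3) code from the example

def min_distance(G):
    # Minimum nonzero codeword weight of the linear code generated by G
    K, N = len(G), len(G[0])
    return min(sum(sum(u[k]*G[k][j] for k in range(K)) % 2 for j in range(N))
               for u in itertools.product([0,1], repeat=K) if any(u))

def puncture(G, i):
    # Delete coordinate i (0-based) from every row
    return [row[:i] + row[i+1:] for row in G]

def shorten(G):
    # Delete first row and first column of a systematic G
    return [row[1:] for row in G[1:]]

assert min_distance(Gsh) == 3
assert min_distance(puncture(Gsh, 0)) == 2   # (4,2,2): rate 2/4 = 0.5
assert min_distance(shorten(Gsh)) == 4       # (4,1,4): rate 1/4 = 0.25
```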
|v|v+w| Construction (Plotkin, 1960)
Definition
Consider two linear binary codes C1(N, K1, d1) and C2(N, K2, d2) of equal length N. A new code of length 2N can be formed by the |v|v+w| construction as follows:

  C′ = {(v, v+w) | v ∈ C1, w ∈ C2}
- Parameters of the resulting code C′(N′, K′, d′):
  length: N′ = 2N
  dimension: K′ = K1 + K2
  minimum distance: d′ = min{2 d1, d2}
- The generator matrix of C′ can be written as

  G′ = [ G1  G1 ]
       [ 0   G2 ]
- Example: C1(3,2,2), C2(3,1,3) ⇒ C′(6,3,3)
Recursive Construction of Codes
Using the |v|v+w| construction we can obtain a family of codes:
C(2,1,2)  C(2,2,1)
C(4,1,4)  C(4,3,2)  C(4,4,1)
C(8,1,8)  C(8,4,4)  C(8,7,2)  C(8,8,1)
C(16,1,16)  C(16,5,8)  C(16,11,4)  C(16,15,2)  C(16,16,1)
- For the leftmost codes choose repetition codes
- For the rightmost codes choose the full vector space F_2^N
- The other codes can be constructed recursively
- Reed-Muller codes can be constructed in this way
Reed-Muller Codes of First Order
Definition
A first order Reed-Muller code R(1,m) = C(2^m, m+1, 2^(m-1)) can be defined by a generator matrix G that is composed of an all-one row vector of length N = 2^m, followed by all possible distinct binary column vectors of length m.

- Example: R(1,3), N = 2^3 = 8, dmin = 2^2 = 4

  G = [ 1 1 1 1 1 1 1 1 ]
      [ 0 0 0 0 1 1 1 1 ]
      [ 0 0 1 1 0 0 1 1 ]
      [ 0 1 0 1 0 1 0 1 ]

- Recursive construction: C1 = R(1,m), C2 = (N,1,N) repetition code, i.e.,

  R(1,m+1) = {(v, v+w) | v ∈ R(1,m), w ∈ {(0...0), (1...1)}}
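Under the definition above, the generator matrix of R(1,m) can be built programmatically. A Python sketch (illustrative; the function name is ad hoc) that reproduces the R(1,3) example:

```python
def rm1_generator(m):
    # All-one row on top of a matrix whose columns are all distinct
    # m-bit vectors (column j is the binary expansion of j, MSB first)
    N = 2 ** m
    cols = [[(j >> (m - 1 - b)) & 1 for b in range(m)] for j in range(N)]
    return [[1] * N] + [[cols[j][b] for j in range(N)] for b in range(m)]

G = rm1_generator(3)
assert G == [
    [1,1,1,1,1,1,1,1],
    [0,0,0,0,1,1,1,1],
    [0,0,1,1,0,0,1,1],
    [0,1,0,1,0,1,0,1],
]
```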
Bounds on Achievable Code Parameters
Singleton Bound
- Recall: The Hamming bound is a lower bound on the required redundancy of a code with given dmin (perfect codes)
- Another very simple but important bound is given as follows:
Theorem (Singleton Bound)
The parameters of any linear code C(N, K, d) satisfy the relation

  N - K ≥ d - 1
- Proof: N - K is the maximum number of linearly independent columns in a parity-check matrix H, and any d - 1 columns of H must be linearly independent
- A code is called maximum distance separable (MDS) if N - K = d - 1
- Reed-Solomon codes are a class of MDS codes
- Only trivial binary MDS codes exist: repetition codes, single parity-check codes and codes with no redundancy
Gilbert-Varshamov Bound
- The Gilbert-Varshamov bound shows that there exist good codes
- There are two different variants of this bound:

Theorem (Varshamov Bound)
There exists a linear binary code C(N, K, d) that satisfies the relation

  2^(N-K) - 1 ≤ Σ_{i=0}^{d-2} (N-1 choose i)

Theorem (Gilbert Bound)
There exists a linear binary code C(N, K, d) that satisfies the relation

  2^(N-K) ≤ Σ_{i=0}^{d-1} (N choose i)
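Reading each theorem as a guarantee on the achievable minimum distance for given N and K, the smallest d satisfying the stated relation can be computed directly. A Python sketch (this interpretation and the function names are my own, not from the slides):

```python
from math import comb

def varshamov_d(N, K):
    # Smallest d with 2^(N-K) - 1 <= sum_{i=0}^{d-2} C(N-1, i)
    d = 1
    while sum(comb(N - 1, i) for i in range(d - 1)) < 2**(N - K) - 1:
        d += 1
    return d

def gilbert_d(N, K):
    # Smallest d with 2^(N-K) <= sum_{i=0}^{d-1} C(N, i)
    d = 1
    while sum(comb(N, i) for i in range(d)) < 2**(N - K):
        d += 1
    return d

# For N = 7, K = 4 the Varshamov bound already guarantees d = 3,
# which the Hamming code achieves; the Gilbert bound only gives d = 2.
assert varshamov_d(7, 4) == 3
assert gilbert_d(7, 4) == 2
```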
Cyclic Codes
Cyclic Codes

Definition
A linear block code C is called cyclic if every cyclic shift of a codeword v ∈ C is also a codeword of C:

  v = (v0, v1, ..., vN-1) ∈ C  ⇔  v^(i) = (vN-i, vN-i+1, ..., vN-1, v0, v1, ..., vN-i-1) ∈ C
- A cyclic code can be encoded by a K × N cyclic generator matrix G of the form:
  G = [ g0  g1  ...  gN-K    0     ...  0    ]
      [ 0   g0  ...  gN-K-1  gN-K  ...  0    ]
      [ :            .        .         :    ]
      [ 0   0   ...  g0      g1    ...  gN-K ]
- The first row g = (g0, g1, ..., gN-K, 0, ..., 0), g0 ≠ 0, gN-K ≠ 0, is a nonzero codeword. It follows that the K cyclic shifts in the rows are also codewords and linearly independent.
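The defining property can be checked by brute force on a small code. A Python sketch (illustrative) using the (7,4) cyclic Hamming code generated by g(X) = 1 + X + X^3, which appears as an example later in the deck:

```python
import itertools

g = [1,1,0,1,0,0,0]                        # g(X) = 1 + X + X^3, N = 7
G = [g[7-k:] + g[:7-k] for k in range(4)]  # K = 4 cyclic shifts of g as rows

# Enumerate all 2^K codewords
codebook = {tuple(sum(u[k]*G[k][j] for k in range(4)) % 2 for j in range(7))
            for u in itertools.product([0,1], repeat=4)}

assert len(codebook) == 16                 # the K rows are linearly independent
for v in codebook:
    for i in range(7):
        # Shift by i positions: (vN-i, ..., vN-1, v0, ..., vN-i-1)
        assert v[-i:] + v[:-i] in codebook
```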
Code Polynomials
- Codewords of a cyclic code can be described by polynomials

  v = (v0, v1, ..., vN-1)  ↔  v(X) = v0 + v1 X + · · · + vN-1 X^(N-1)
- A cyclic shift by i positions then becomes

  v^(i)  ↔  X^i v(X) = q(X)(X^N - 1) + v^(i)(X)
  X^i v(X) = v^(i)(X) mod (X^N - 1)
- For any code polynomial v(X) we have

  X v(X), X^2 v(X), ..., X^i v(X) mod (X^N - 1) ∈ C
- It follows from linearity that for any a(X) = a0 + a1 X + · · · we have

  a(X) v(X) mod (X^N - 1) ∈ C
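The closure property a(X) v(X) mod (X^N - 1) ∈ C can be spot-checked over GF(2). A Python sketch (illustrative; helper names ad hoc) using the (7,4) code generated by g(X) = 1 + X + X^3:

```python
def polymul_mod(a, v, N):
    # Multiply polynomials over GF(2) and reduce mod X^N - 1
    # (coefficient lists, lowest degree first; X^N = 1)
    out = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, vj in enumerate(v):
                out[(i + j) % N] ^= vj
    return out

def divides(g, v):
    # True iff g(X) divides v(X) over GF(2) (long division, zero remainder)
    v = v[:]
    dg = max(i for i, c in enumerate(g) if c)
    for i in range(len(v) - 1, dg - 1, -1):
        if v[i]:
            for j in range(dg + 1):
                v[i - dg + j] ^= g[j]
    return not any(v)

N = 7
g = [1,1,0,1,0,0,0]              # g(X) = 1 + X + X^3
v = polymul_mod([1,0,1], g, N)   # v(X) = (1 + X^2) g(X) is a codeword
w = polymul_mod([0,1,1], v, N)   # a(X) v(X) mod X^7 - 1, a(X) = X + X^2
assert divides(g, w)             # still a multiple of g(X), hence in C
```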
Generator Polynomial
- Encoding of a cyclic code C can be performed by means of a generator polynomial g(X) of degree N - K:

  v(X) = (u0 + u1 X + · · · + uK-1 X^(K-1)) g(X) = u(X) g(X)

Theorem
If g(X) is a polynomial of degree N - K and it is a factor of X^N - 1, then it generates an (N, K) cyclic code.

- A polynomial v(X) of degree N - 1 or less is a code polynomial of C if and only if it is a multiple of g(X).
- Possible realization:
where the division is stopped as soon as we obtain a remainder of degree less than N - K. That remainder is p(D).

Any cyclic code can also be defined via its parity polynomial h(D), which is related to the generator polynomial via

  g(D)h(D) = D^N - 1   (= 0 mod (D^N - 1)).   (27)

Note that this equation corresponds to the matrix equation GH^T = 0. The parity polynomial is a polynomial of degree deg(h(D)) = K, obtained from the generator polynomial via the division

  h(D) = (D^N - 1) / g(D)

Since v(D) = u(D)g(D), from (27) we immediately obtain that every code polynomial must satisfy

  v(D)h(D) = u(D)(D^N - 1)   (= 0 mod (D^N - 1))   (28)

which corresponds to the matrix equation vH^T = 0. We can rewrite (28) in an equivalent form

  v(D) = u(D)(D^N - 1) / h(D)   (29)

The equations (24), (25) and (29) specify three different encoding rules for the same code. These encoders can be realised as linear sequential circuits using delay elements and modulo-p adders, as specified in the next section.
4.3 Encoding of cyclic codes
The following three encoders realise mappings defined by the equations (24), (25) and (29), respectively. They are often referred to as time-domain encoders. Note that for all encoders, g0 = 1, h0 = 1.

1. Encoder 1: Non-systematic encoder that generates codewords according to v(D) = u(D)g(D), i.e., by multiplication of the information polynomial by the generator polynomial, as shown in Figure 4. This encoder is nothing else but an FIR filter (or a shift register) whose tap coefficients are the coefficients gk of the generator polynomial. The code symbols appear at the filter output during N clock cycles. Before the encoding begins, all delay elements are assumed to be in the zero state.
Figure 4: Encoder 1 (FIR filter with tap coefficients g1, g2, ..., gN-K; input ui, output vi).
2. Encoder 2: Systematic encoder that outputs information symbols as the first K code symbols and subsequently generates parity checks via long division by the generator polynomial, v(D) = u(D) + p(D), p(D) = -Rem(u(D)/g(D)), as shown in Figure 5. Before the encoding starts, all delay elements are in the zero state. During the first K clock cycles, the switches are in position (1), and during the next (N - K) clocks they are in position (2).
3. Encoder 3: Systematic encoder that generates the code symbols according to the division v(D) = u(D)(D^N - 1)/h(D), realised as a linear feedback shift register (LFSR) whose coefficients are the coefficients hk of the parity polynomial, as shown in Figure 6. We say that h(D) is the connection polynomial for this LFSR. Before the encoding starts, the K information symbols are loaded into the LFSR, such that they appear at the output during the first K clock cycles.
Figure 6: Encoder 3 (LFSR loaded with uK-1, ..., u1, u0; output vN-1, ..., v1, v0).
- Example: (23,12,7) Golay code (perfect binary code):
  g(X) = 1 + X^2 + X^4 + X^5 + X^6 + X^10 + X^11
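Per the theorem above, g(X) generates a (23,12) cyclic code exactly when it divides X^23 - 1. This can be checked in a few lines of Python (illustrative sketch; helper name ad hoc):

```python
def gf2_rem(num, den):
    # Remainder of polynomial long division over GF(2)
    # (coefficient lists, lowest degree first)
    num = num[:]
    dd = max(i for i, c in enumerate(den) if c)
    for i in range(len(num) - 1, dd - 1, -1):
        if num[i]:
            for j in range(dd + 1):
                num[i - dd + j] ^= den[j]
    return num[:dd]

# g(X) = 1 + X^2 + X^4 + X^5 + X^6 + X^10 + X^11 (Golay generator)
g = [1,0,1,0,1,1,1,0,0,0,1,1]
x23_plus_1 = [1] + [0]*22 + [1]          # X^23 + 1 = X^23 - 1 over GF(2)
assert not any(gf2_rem(x23_plus_1, g))   # g(X) divides X^23 + 1
```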
Parity Polynomial
- A cyclic code can also be represented by a degree-K parity polynomial h(X) = h0 + h1 X + · · · + hK X^K, defined by

  g(X) h(X) = X^N - 1
- Then it follows that each code polynomial v(X) ∈ C satisfies

  v(X) h(X) = u(X) g(X) h(X) = u(X)(X^N - 1) = 0 mod (X^N - 1)
- A parity-check matrix of the code C is then given by

  H = [ hK ... h0              ]     [ h̄(X)           ]
      [    hK ... h0           ]  ↔  [ X h̄(X)         ]
      [        ...             ]     [ ...             ]
      [        hK ... h0       ]     [ X^(N-K-1) h̄(X) ]
- Here h̄(X) denotes the reciprocal polynomial

  h̄(X) = X^K h(X^(-1)) = h0 X^K + h1 X^(K-1) + · · · + hK-1 X + 1
Example: Cyclic Hamming Code
- A (7,4) cyclic Hamming code is generated by the polynomial

  g(X) = 1 + X + X^3
- We obtain the parity polynomial as

  h(X) = (X^7 + 1) / (X^3 + X + 1) = 1 + X + X^2 + X^4
- This corresponds to the cyclic matrices
  G = [ 1 1 0 1 0 0 0 ]     [ g(X)     ]
      [ 0 1 1 0 1 0 0 ]  ↔  [ X g(X)   ]
      [ 0 0 1 1 0 1 0 ]     [ X^2 g(X) ]
      [ 0 0 0 1 1 0 1 ]     [ X^3 g(X) ]
  H = [ 1 0 1 1 1 0 0 ]     [ h̄(X)     ]
      [ 0 1 0 1 1 1 0 ]  ↔  [ X h̄(X)   ]
      [ 0 0 1 0 1 1 1 ]     [ X^2 h̄(X) ]
Systematic Encoding of Cyclic Codes
- Assume the following systematic codeword structure:

  v = (p0, p1, ..., pN-K-1, u0, u1, ..., uK-1)  ↔  v(X) = p(X) + X^(N-K) u(X)

- Since every codeword v(X) must be a multiple of g(X) we can write

  v(X) = p(X) + X^(N-K) u(X) = a(X) g(X)

  or, equivalently,

  X^(N-K) u(X) = a(X) g(X) - p(X)

- The parity polynomial p(X) can be computed as the remainder of a division of the shifted input polynomial X^(N-K) u(X) by g(X)
Example
Let g(X) = 1 + X + X^3 and u(X) = 1 + X^3 ↔ u = (1,0,0,1).
Dividing X^3 u(X) = X^3 + X^6 by g(X) = 1 + X + X^3 yields the remainder

  -p(X) = p(X) = X + X^2

Hence v(X) = X + X^2 + X^3 + X^6 ↔ v = (0,1,1,1,0,0,1).
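This worked example can be reproduced with a few lines of Python (illustrative sketch; helper name ad hoc):

```python
def gf2_rem(num, den):
    # Remainder of polynomial long division over GF(2)
    # (coefficient lists, lowest degree first)
    num = num[:]
    dd = max(i for i, c in enumerate(den) if c)
    for i in range(len(num) - 1, dd - 1, -1):
        if num[i]:
            for j in range(dd + 1):
                num[i - dd + j] ^= den[j]
    return num[:dd]

N, K = 7, 4
g = [1,1,0,1]                 # g(X) = 1 + X + X^3
u = [1,0,0,1]                 # u(X) = 1 + X^3
shifted = [0]*(N-K) + u       # X^(N-K) u(X) = X^3 + X^6
p = gf2_rem(shifted, g)       # remainder of division by g(X)
v = p + u                     # v = (p0, p1, p2, u0, u1, u2, u3)
assert p == [0,1,1]           # p(X) = X + X^2
assert v == [0,1,1,1,0,0,1]   # v(X) = X + X^2 + X^3 + X^6
```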