
Introduction to Reed-Solomon Coding (Part I)

L. J. Wang

Introduction

One of the most widely used error-control codes is the Reed-Solomon code. These codes were introduced by Reed and Solomon in June 1960, in the paper I. S. Reed and G. Solomon, "Polynomial Codes over Certain Finite Fields," Journal of the Society for Industrial and Applied Mathematics.

Reed-Solomon (RS) codes have many applications, such as compact discs (CD, VCD, DVD), deep-space exploration, HDTV, computer memory, and spread-spectrum systems.

In the decades since their discovery, RS codes have become the most frequently used digital error-control codes in the world.

Effect of Noise

Figure 1. Effect of noise on a digital signal: the digital data, the transmitted signal, the noise, the signal with noise, the sampling times, and the reconstructed data with the bits in error marked.

digital data:        0 1 0 1 1 0 0 1 1 0 0 1 0 0 0
reconstructed data:  0 1 0 1 1 0 0 0 1 0 0 1 0 1 0

Encoder: each information bit (k = 1) is encoded by appending r = 2 check bits, so 0 -> 000 and 1 -> 111; the codewords are 000 and 111, and the block length of the code is n = 3.

This is an (n, k) code with n = 3, k = 1, r = n - k = 3 - 1 = 2, and code rate p = k/n = 1/3.

encoder:  000 111 000 111 111 000 000
receiver: 000 101 000 111 111 010 001
decoder:  000 111 000 111 111 000 000

A single bit error in any received block is corrected by the decoder.
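
The following is a minimal Python sketch of the (3,1) repetition code above, assuming majority-vote decoding (which reproduces the decoder row shown):

    # (3,1) repetition code: each bit is sent three times and decoded by
    # majority vote, so any single bit error per block is corrected.
    def encode(bits):
        return [b for b in bits for _ in range(3)]        # 0 -> 000, 1 -> 111

    def decode(received):
        blocks = [received[i:i + 3] for i in range(0, len(received), 3)]
        return [1 if sum(block) >= 2 else 0 for block in blocks]  # majority vote

    data      = [0, 1, 0, 1, 1, 0, 0]
    sent      = encode(data)            # 000 111 000 111 111 000 000
    corrupted = sent.copy()
    corrupted[4] ^= 1                   # one bit error in the second block
    print(decode(corrupted) == data)    # True: the error is corrected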

A (7,4) Hamming Code

n = 7, k = 4, r = n - k = 7 - 4 = 3, p = k/n = 4/7.

Information blocks: I1 = 0101, I2 = 1100, I3 = 1001, I4 = 0000.

encoder:  0101 c1c2c3   1100 c1c2c3   1001 c1c2c3   ...
receiver: 0111 c1c2c3   1100 c1c2c3   1001 c1c2c3   ...
decoder:  0101 c1c2c3   1100 c1c2c3   1001 c1c2c3   ...

Each 4-bit information block is followed by 3 parity-check bits c1 c2 c3; the single bit error in the first received block is corrected by the decoder.
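
A minimal Python sketch of systematic (7,4) Hamming encoding follows; the particular parity-check coefficients used in H are an assumed (but standard) choice, since the slides do not specify them:

    # Systematic (7,4) Hamming encoding with an assumed parity-check matrix
    # H = [P | I3]; the check bits are chosen so every parity-check equation
    # sums to 0 mod 2.
    import numpy as np

    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0, 1, 0],
                  [1, 1, 0, 1, 0, 0, 1]])

    def encode(info_bits):
        """Append r = 3 check bits to k = 4 information bits."""
        a = np.array(info_bits)
        c = H[:, :4] @ a % 2          # check bits that close each equation
        return np.concatenate([a, c])

    for block in ([0, 1, 0, 1], [1, 1, 0, 0], [1, 0, 0, 1], [0, 0, 0, 0]):
        print(block, '->', encode(block))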

Let a1, a2, ..., ak be the k binary message digits.
Let c1, c2, ..., cr be the r parity-check bits.
An n-digit codeword can then be written as

a1 a2 a3 ... ak c1 c2 c3 ... cr   (n bits)

The check bits are chosen to satisfy the r = n - k equations

$$
\begin{aligned}
0 &= h_{11}a_1 \oplus h_{12}a_2 \oplus \cdots \oplus h_{1k}a_k \oplus c_1 \\
0 &= h_{21}a_1 \oplus h_{22}a_2 \oplus \cdots \oplus h_{2k}a_k \oplus c_2 \\
  &\;\;\vdots \\
0 &= h_{r1}a_1 \oplus h_{r2}a_2 \oplus \cdots \oplus h_{rk}a_k \oplus c_r
\end{aligned}
\tag{1}
$$

Equation (1) can be written in matrix notation as

$$
\underbrace{\begin{bmatrix}
h_{11} & h_{12} & \cdots & h_{1k} & 1 & 0 & \cdots & 0 \\
h_{21} & h_{22} & \cdots & h_{2k} & 0 & 1 & \cdots & 0 \\
\vdots &        &        &        &   &   &        & \vdots \\
h_{r1} & h_{r2} & \cdots & h_{rk} & 0 & 0 & \cdots & 1
\end{bmatrix}}_{H\ (r \times n)}
\underbrace{\begin{bmatrix}
a_1 \\ a_2 \\ \vdots \\ a_k \\ c_1 \\ c_2 \\ \vdots \\ c_r
\end{bmatrix}}_{T\ (n \times 1)}
=
\underbrace{\begin{bmatrix}
0 \\ 0 \\ \vdots \\ 0
\end{bmatrix}}_{(r \times 1)}
$$

that is, H·T = 0.

Let E be an n×1 error pattern with at least one error, say e_j = 1, that is

$$
E = \begin{bmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{bmatrix}
  = \begin{bmatrix} 0 \\ 0 \\ \vdots \\ e_j = 1 \\ \vdots \\ 0 \end{bmatrix}
$$

Also let R be the received codeword, that is

$$
R = \begin{bmatrix} r_1 \\ r_2 \\ \vdots \\ r_n \end{bmatrix}
  = T + E
  = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_k \\ c_1 \\ \vdots \\ c_r \end{bmatrix}
  + \begin{bmatrix} 0 \\ 0 \\ \vdots \\ e_j = 1 \\ \vdots \\ 0 \end{bmatrix}
$$

Thus

S = H·R = H·(T + E) = H·T + H·E = H·E

where S is an r×1 syndrome pattern.

Problem: for a given S, find E.

$$
\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_r \end{bmatrix}
= \begin{bmatrix} h_{11} \\ h_{21} \\ \vdots \\ h_{r1} \end{bmatrix} e_1
+ \begin{bmatrix} h_{12} \\ h_{22} \\ \vdots \\ h_{r2} \end{bmatrix} e_2
+ \cdots
+ \begin{bmatrix} 0 \\ 0 \\ \vdots \\ 1 \end{bmatrix} e_n
\tag{2}
$$

Assume e1 = 0, e2 = 1, e3 = 0, ..., en = 0. Then

$$
\begin{bmatrix} s_1 \\ s_2 \\ \vdots \\ s_r \end{bmatrix}
= \begin{bmatrix} h_{12} \\ h_{22} \\ \vdots \\ h_{r2} \end{bmatrix}
$$

The syndrome is equal to the second column of the parity-check matrix H. Thus, the second position of the received codeword is in error.
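
A minimal Python sketch of this single-error syndrome decoding follows, reusing the same assumed (7,4) parity-check matrix as in the encoding sketch above: the syndrome S = H·R (mod 2) is matched against the columns of H to locate and flip the erroneous bit.

    # Single-error syndrome decoding with the assumed (7,4) parity-check matrix.
    import numpy as np

    H = np.array([[1, 1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 1, 0, 1, 0],
                  [1, 1, 0, 1, 0, 0, 1]])

    def correct(received):
        r = np.array(received)
        s = H @ r % 2                     # syndrome S = H·R = H·E
        if s.any():                       # S = 0 means no detectable error
            for j in range(H.shape[1]):   # find the column of H equal to S
                if np.array_equal(H[:, j], s):
                    r[j] ^= 1             # flip the bit at the error location
                    break
        return r

    codeword = np.array([0, 1, 0, 1, 1, 0, 0])   # 0101 followed by its check bits
    received = codeword.copy()
    received[1] ^= 1                             # error in the second position
    print(correct(received))                     # recovers the codeword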

An (n, k) Hamming code has n = r + k = 2^r - 1, where k is the number of message bits and r = n - k is the number of parity-check bits. The rate of the Hamming code is given by

$$
R = \frac{k}{n} = \frac{2^r - 1 - r}{2^r - 1} = 1 - \frac{r}{2^r - 1}
$$

The Hamming code is a single-error-correcting code. In order to correct two or more errors, cyclic binary codes, BCH codes, and Reed-Solomon codes were developed to correct t errors, where t ≥ 1.
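
A short Python sketch evaluating these Hamming-code parameters for a few values of r:

    # Hamming-code parameters and rate R = k/n = (2^r - 1 - r)/(2^r - 1).
    for r in (3, 4, 5):
        n = 2**r - 1
        k = n - r
        print(f"r={r}: (n, k) = ({n}, {k}), rate = {k}/{n} = {k / n:.3f}")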

Single-error-correcting Binary BCH code

In GF(2^4), let p(x) = x^4 + x + 1 be a primitive irreducible polynomial over GF(2). Then the elements of GF(2^4), written as powers of a primitive element α (so that α^4 = α + 1), are

0                            0000
α^0  = 1                     0001
α^1  = α                     0010
α^2                          0100
α^3                          1000
α^4  = α + 1                 0011
α^5  = α^2 + α               0110
α^6  = α^3 + α^2             1100
α^7  = α^3 + α + 1           1011
α^8  = α^2 + 1               0101
α^9  = α^3 + α               1010
α^10 = α^2 + α + 1           0111
α^11 = α^3 + α^2 + α         1110
α^12 = α^3 + α^2 + α + 1     1111
α^13 = α^3 + α^2 + 1         1101
α^14 = α^3 + 1               1001
α^15 = 1
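
A minimal Python sketch that generates this table by repeatedly multiplying by α and reducing with p(x) = x^4 + x + 1 (field elements stored as 4-bit integers, bit i being the coefficient of α^i):

    # Generate the 15 nonzero elements of GF(2^4) as successive powers of α.
    def gf16_powers():
        elems = []
        x = 1                      # start from α^0 = 1
        for _ in range(15):
            elems.append(x)
            x <<= 1                # multiply by α
            if x & 0b10000:        # reduce modulo p(α): α^4 = α + 1
                x ^= 0b10011
        return elems

    for i, e in enumerate(gf16_powers()):
        print(f"α^{i:<2} = {e:04b}")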

The parity-check matrix of an (n = 15, k = 11) BCH code for correcting one error is

H = (α^14, α^13, α^12, α^11, ....., α^3, α^2, α^1, α^0)

where each power of α stands for its 4-bit column vector, so that H is a 4 × 15 binary matrix.

Encoder: Let the codeword of this code be

C14 C13 C12 C11 C10 C9 C8 C7 C6 C5 C4 | C3 C2 C1 C0
          information bits            | parity-check bits

Choosing the parity-check bits so that H·C = 0, one obtains, for the information bits (C14, ..., C4) = 11100101001,

(C3, C2, C1, C0) = α^14 + α^13 + α^12 + α^9 + α^7 + α^4 = 1001

The codeword is 11100101001 1001.

Decoder:

Let the received word be

R = C + E

where C is the codeword and E = (e14, e13, e12, ..., e1, e0) is the error pattern. Then

H·R^T = H·(C + E)^T = H·C^T + H·E^T = H·E^T

For example, let

R = C + E = (111001010011001) + (001000000000000) = (110001010011001)

The syndrome of the received word R = 110001010011001 is

S = H·R^T = α^14 + α^13 + α^9 + α^7 + α^4 + α^3 + α^0 = α^12

The syndrome equals the column of H at position 12, so location 12 of the received word is in error; correcting it recovers the codeword 111001010011001.

Let the information polynomial be

$$
I(x) = C_{14}x^{14} + C_{13}x^{13} + C_{12}x^{12} + C_{11}x^{11} + C_{10}x^{10} + C_9x^9 + C_8x^8 + C_7x^7 + C_6x^6 + C_5x^5 + C_4x^4
$$

The codeword polynomial is

$$
C(x) = \underbrace{C_{14}x^{14} + C_{13}x^{13} + \cdots + C_5x^5 + C_4x^4}_{\text{information polynomial } I(x)}
     + \underbrace{C_3x^3 + C_2x^2 + C_1x + C_0}_{\text{parity-check polynomial } R(x)}
$$

Note that

C(x) = Q(x)·g(x)

where g(x) is called the generator polynomial; C(x) is a codeword if and only if C(x) is a multiple of g(x).

For example, to encode a (15,11) BCH code, the generator polynomial is g(x) = x^4 + x + 1, the minimal polynomial of α, where α is an element of order 15 in GF(2^4).

Encoder

To encode, one needs to find C3, C2, C1, C0, i.e.

R(x) = C3·x^3 + C2·x^2 + C1·x + C0

such that C(x) = I(x) + R(x) satisfies

C(α) = I(α) + R(α) = Q(α)·g(α) = 0

To find R(x), divide I(x) by g(x):

I(x) = Q(x)·g(x) + R(x)

Since addition and subtraction coincide over GF(2), C(x) = I(x) + R(x) = Q(x)·g(x) is a multiple of g(x); hence C(x) = I(x) + R(x) is a codeword of the (15,11) BCH code.

Example:

I(x) = Q(x)·g(x) + R(x)

with

$$
\begin{aligned}
I(x) &= x^{14} + x^{13} + x^{12} + x^9 + x^7 + x^4 \\
g(x) &= x^4 + x + 1 \\
Q(x) &= x^{10} + x^9 + x^8 + x^7 + x^5 + x^2 + x + 1 \\
R(x) &= x^3 + 1
\end{aligned}
$$

so that

$$
C(x) = Q(x)\,g(x) = I(x) + R(x) = x^{14} + x^{13} + x^{12} + x^9 + x^7 + x^4 + x^3 + 1
$$

that is, C = 111001010011001.
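
A minimal Python sketch of this systematic encoding, representing GF(2) polynomials as integers (bit i = coefficient of x^i) and computing R(x) = I(x) mod g(x) by XOR-based long division:

    # Systematic (15,11) BCH encoding over GF(2) by polynomial division.
    def poly_mod(dividend, divisor):
        """Remainder of GF(2) polynomial division (bit i = coeff of x^i)."""
        dlen = divisor.bit_length()
        while dividend.bit_length() >= dlen:
            shift = dividend.bit_length() - dlen
            dividend ^= divisor << shift       # cancel the leading term
        return dividend

    g = 0b10011                                # g(x) = x^4 + x + 1
    I = (1 << 14) | (1 << 13) | (1 << 12) | (1 << 9) | (1 << 7) | (1 << 4)

    R = poly_mod(I, g)                         # R(x) = x^3 + 1
    C = I ^ R                                  # C(x) = I(x) + R(x) over GF(2)

    print(f"R(x) bits: {R:04b}")               # 1001
    print(f"codeword : {C:015b}")              # 111001010011001
    assert poly_mod(C, g) == 0                 # C(x) is a multiple of g(x)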

To decode, let the error polynomial be

E(x) = 0·x^14 + 0·x^13 + 1·x^12 + 0·x^11 + 0·x^10 + .... = x^12

The received word polynomial is

R'(x) = C(x) + E(x) = x^14 + x^13 + x^9 + x^7 + x^4 + x^3 + 1

The syndrome is

S = R'(α) = C(α) + E(α) = Q(α)·g(α) + E(α) = E(α)
  = α^14 + α^13 + α^9 + α^7 + α^4 + α^3 + 1
  = α^12

so x^12 is the error location in the received word.
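
A minimal Python sketch of this single-error decoding, reusing the GF(2^4) table built earlier: the received polynomial is evaluated at α, and the power of α equal to the syndrome gives the error location.

    # Single-error decoding of the (15,11) BCH code via the syndrome R'(α).
    def gf16_powers():
        elems, x = [], 1
        for _ in range(15):
            elems.append(x)
            x <<= 1
            if x & 0b10000:
                x ^= 0b10011               # reduce with p(x) = x^4 + x + 1
        return elems

    ALPHA = gf16_powers()                   # ALPHA[i] = α^i as a 4-bit integer

    def syndrome(received):
        """S = R'(α): XOR the powers of α at the 1-coefficients of R'(x)."""
        s = 0
        for i in range(15):
            if (received >> i) & 1:
                s ^= ALPHA[i]
        return s

    codeword = 0b111001010011001
    received = codeword ^ (1 << 12)         # inject the error E(x) = x^12

    s = syndrome(received)
    loc = ALPHA.index(s)                    # S = α^loc, so the error is at x^loc
    corrected = received ^ (1 << loc)
    print(f"syndrome = α^{loc}, corrected = {corrected:015b}")
    assert corrected == codeword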

Double-error-correcting Binary BCH code

To encode an (n = 15, k = 7) BCH code over GF(2^4), which can correct two errors, let

C(x) = K(x)·g1(x)·g2(x)

where g1(x) is the minimal polynomial of α, so that g1(α) = 0, and g2(x) is the minimal polynomial of α^3, so that g2(α^3) = 0.

The minimal polynomial of α is

$$
g_1(x) = (x + \alpha)(x + \alpha^2)(x + \alpha^{2^2})(x + \alpha^{2^3}) = x^4 + x + 1
$$

The minimal polynomial of α^3 is

$$
g_2(x) = (x + \alpha^3)(x + (\alpha^3)^2)(x + (\alpha^3)^4)(x + (\alpha^3)^8) = x^4 + x^3 + x^2 + x + 1
$$

The generator polynomial of the (15,7) BCH code is

$$
g(x) = g_1(x)\,g_2(x) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1
$$
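
A short Python sketch that multiplies the two minimal polynomials over GF(2) to confirm the (15,7) generator polynomial (coefficients stored as integer bits):

    # Carry-less (GF(2)) polynomial multiplication of g1(x) and g2(x).
    def poly_mul(a, b):
        result = 0
        while b:
            if b & 1:
                result ^= a
            a <<= 1
            b >>= 1
        return result

    g1 = 0b10011        # x^4 + x + 1             (minimal polynomial of α)
    g2 = 0b11111        # x^4 + x^3 + x^2 + x + 1 (minimal polynomial of α^3)

    g = poly_mul(g1, g2)
    print(f"g(x) coefficients: {g:09b}")   # 111010001 = x^8 + x^7 + x^6 + x^4 + 1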

Reed-Solomon (RS) code

An RS code is a cyclic symbol-error-correcting code. An RS codeword consists of I information or message symbols together with P parity or check symbols. The word length is N = I + P.

The symbols in an RS codeword are usually not binary, i.e., each symbol is represented by more than one bit. In fact, a favorite choice is 8-bit symbols. This is related to the fact that most computers have word lengths of 8 bits or multiples of 8 bits.

In order to be able to correct t symbol errors, the minimum distance D of the codewords must satisfy

D = 2t + 1.

If the minimum distance of an RS code is D and the word length is N, then the number of message symbols I in a word is given by

I = N - (D - 1)
P = D - 1.
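
A short Python sketch applying these relations; the (N = 255, t = 16) case is an assumed illustration, not taken from the slides.

    # RS parameters from the desired error-correcting capability t and length N.
    def rs_parameters(N, t):
        D = 2 * t + 1          # minimum distance needed to correct t errors
        P = D - 1              # parity symbols
        I = N - (D - 1)        # message symbols
        return D, P, I

    for N, t in [(15, 2), (255, 16)]:
        D, P, I = rs_parameters(N, t)
        print(f"N={N}, t={t} -> D={D}, P={P}, I={I}")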