Code design: Computer search


Upload: jonas-sanford

Post on 01-Jan-2016



Page 1: Code design: Computer search

1

Code design: Computer search

• Low rate:

• Represent code by its generator matrix

• Find one representative for each equivalence class of codes

• Permutation equivalences?

• Do not try several generator matrices for the same code?

• Avoid nonminimal matrices (and of course catastrophic ones)
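The search loop outlined above can be sketched in code. The following is a minimal illustration for rate-1/2 codes only, with generator polynomials stored as bit masks; the memory m = 2 and the restriction to generators of degree exactly m are illustrative simplifications, not from the slides. Permutation equivalence is handled by ordering the generator pair, and catastrophic encoders are rejected with a GF(2)[x] gcd test.

```python
import heapq

def gf2_mod(a, b):
    # Remainder of a modulo b in GF(2)[x]; polynomials stored as bit masks.
    while a and a.bit_length() >= b.bit_length():
        a ^= b << (a.bit_length() - b.bit_length())
    return a

def gf2_gcd(a, b):
    while b:
        a, b = b, gf2_mod(a, b)
    return a

def dfree(g1, g2, m):
    # Dijkstra over the code trellis: minimum weight of a path that leaves
    # the zero state and remerges with it (the free distance).
    mask = (1 << m) - 1

    def step(state, bit):
        reg = (state << 1) | bit                       # shift the new bit in
        w = (bin(reg & g1).count("1") & 1) + (bin(reg & g2).count("1") & 1)
        return reg & mask, w

    s0, w0 = step(0, 1)                                # force a diverging 1-bit
    dist, pq, best = {s0: w0}, [(w0, s0)], float("inf")
    while pq:
        w, s = heapq.heappop(pq)
        if w > dist.get(s, float("inf")):
            continue
        if s == 0:                                     # remerged: candidate dfree
            best = min(best, w)
            continue
        for bit in (0, 1):
            ns, bw = step(s, bit)
            if w + bw < dist.get(ns, float("inf")):
                dist[ns] = w + bw
                heapq.heappush(pq, (w + bw, ns))
    return best

m = 2                                                  # illustrative memory
best_d, best_pairs = 0, []
for g1 in range(1 << m, 1 << (m + 1)):                 # both taps of degree m
    for g2 in range(g1, 1 << (m + 1)):                 # g1 <= g2: one per class
        if not (g1 & 1 and g2 & 1):                    # drop delayed encoders
            continue
        if gf2_gcd(g1, g2) != 1:                       # catastrophic encoder
            continue
        d = dfree(g1, g2, m)
        if d > best_d:
            best_d, best_pairs = d, [(g1, g2)]
        elif d == best_d:
            best_pairs.append((g1, g2))
print(best_d, [(oct(a), oct(b)) for a, b in best_pairs])
```

For m = 2 the only surviving pair is the familiar (5, 7) code with dfree = 5; every other ordered pair is either delayed or fails the gcd test.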

Page 2: Code design: Computer search

2

Code design: Computer search (2)*

• High rate codes

• Represent code by its parity check matrix

• Find one representative for each equivalence class of codes

• Avoid nonminimal matrices

• Problem: Branch complexity of rate k/n codes

• One solution: Use a bit-oriented trellis

• Can limit the state complexity of the bit-level trellis to ν + min(n-k, k)

• Reference: E. Rosnes and Ø. Ytrehus, "Maximum length convolutional codes under a trellis complexity constraint", J. of Complexity 20 (2004) 372-403.

Page 3: Code design: Computer search

3

Code design / Bounds on codes*

• Heller bound: dfree ≤ the best minimum distance of any block code with the same parameters as any truncated/terminated version of the code

• McEliece: For rate (n-1)/n codes with dfree = 5, n ≤ 2^((ν+1)/2)

• Sphere packing bounds

• Apply to codes of odd dfree

• Similar to the Hamming/sphere packing bound for block codes

• Corollary: For rate (n-1)/n codes with dfree = 5, n ≤ 2^(ν/2)

• Reference: E. Rosnes and Ø. Ytrehus, "Sphere packing bounds for convolutional codes", IEEE Trans. on Information Theory, to appear (2004)

Page 4: Code design: Computer search

4

QLI codes

Skip this.

Page 5: Code design: Computer search

5

Code design: Construction from block codes

• Justesen:

• Constr 12.1 …skip

• Constr 12.2 …skip

• Other attempts

• Wyner code d=3

• Generalizations to d = 3 and d = 4. Can be shown to be optimal

• Reference: E. Rosnes and Ø. Ytrehus, "Maximum length convolutional codes under a trellis complexity constraint", J. of Complexity 20 (2004) 372-403.

• Other algebraic (”BCH-like”) constructions (not in book)

• Warning: Messy

Page 6: Code design: Computer search

6

Implementation issues

• Decoder memory

• Metrics dynamic range (normalization)*

• Path memory

• Decoder synchronization

• Receiver quantization
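The metric dynamic range point above rests on the fact that add-compare-select decisions depend only on metric differences, so the decoder may renormalize (e.g. subtract the current minimum) to keep fixed-point path metrics from overflowing. A minimal sketch, with invented metric values:

```python
def normalize(metrics):
    """Shift all path metrics so the smallest becomes zero.

    Survivor decisions are unchanged, because only metric
    differences enter the compare step.
    """
    m = min(metrics)
    return [x - m for x in metrics]

# Path metrics drifting toward the top of a fixed-point range (toy values).
metrics = [1037, 1042, 1039, 1051]
normalized = normalize(metrics)
print(normalized)   # differences between states are preserved
```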

Page 7: Code design: Computer search

7

Decoder memory

• Limit to how large the memory ν can be

• In practice ν is seldom more than 4 (16 states)

• There exists an implementation with ν = 14 (the Big Viterbi Decoder, BVD)

Page 8: Code design: Computer search

9

Path memory

• For large information length:

• Impractical to store the complete path before backtracing

• Requires a large amount of memory

• Imposes delay

• Practical solution:

• Select an integer τ

• After τ blocks: Start backtracing by either

• starting from the survivor in an arbitrary state, or from the survivor in the state with the best metric, or

• trying backtracing from all states, and selecting the k-bit block that occurs most frequently as the first block in these backtracings

• This determines the first k-bit information block. Subsequent information blocks are decoded successively in the same way
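The two backtracing variants above can be sketched as follows, assuming the decoder keeps a window of survivor pointers with `survivors[t][state] = (previous_state, input_bit)`; the 2-state table and metrics below are toy values invented for illustration, not real decoder output.

```python
from collections import Counter

def backtrace(survivors, state):
    """Follow survivor pointers from `state` back to time 0.

    Returns the oldest input bit on that survivor path.
    """
    for t in range(len(survivors) - 1, -1, -1):
        state, bit = survivors[t][state]
    return bit

def decide_best_state(survivors, metrics):
    # Variant 1: backtrace from the survivor with the best (smallest) metric.
    best = min(range(len(metrics)), key=metrics.__getitem__)
    return backtrace(survivors, best)

def decide_majority(survivors):
    # Variant 2: backtrace from every state and majority-vote the oldest bit.
    votes = Counter(backtrace(survivors, s) for s in range(len(survivors[0])))
    return votes.most_common(1)[0][0]

# Toy survivor window: 2 states, 3 trellis sections.
survivors = [
    [(0, 0), (0, 1)],
    [(0, 0), (0, 1)],
    [(1, 0), (1, 1)],
]
metrics = [2.0, 5.0]
first_bit_best = decide_best_state(survivors, metrics)
first_bit_vote = decide_majority(survivors)
print(first_bit_best, first_bit_vote)
```

After the decision, the decoder outputs the oldest block, slides the window forward one section, and repeats.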

Page 9: Code design: Computer search

10

Truncation of the decoder

• Not maximum likelihood

• The truncated decoder can make two kinds of errors:

• EML = the kind of error that an ML decoder would also make

• Associated with low-weight paths (paths of weight dfree, dfree+1, ...)

• P(EML) ≈ Adfree Pdfree, where Pd depends on the channel and decreases rapidly as the argument d increases

• ET = the kind of error that is due to truncation, and which an ML decoder would not make

• Why would an ML decoder not make those errors?

• Because if the decoding decision is allowed to "mature", the erroneous codewords will not be preferred over the correct one. However, the truncated decoder enforces an early decision and thus makes mistakes
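As a concrete instance of the claim that Pd falls off rapidly with d: on a binary symmetric channel with crossover probability p, the standard pairwise error probability between two codewords at Hamming distance d is the tail of a binomial, with ties at even d counted with weight 1/2. The value p = 0.01 below is an illustrative choice.

```python
from math import comb

def pairwise_error(d, p):
    """P(confusing two codewords at Hamming distance d) on a BSC(p)."""
    total = sum(comb(d, i) * p**i * (1 - p)**(d - i)
                for i in range(d // 2 + 1, d + 1))
    if d % 2 == 0:                 # half of the tied cases cause an error
        total += 0.5 * comb(d, d // 2) * (p * (1 - p))**(d // 2)
    return total

for d in (5, 6, 7):
    print(d, pairwise_error(d, 0.01))
```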

Page 10: Code design: Computer search

11

Truncation length

• Assume the all-zero word is the correct word

• P(ET) ≈ Ad(τ) Pd(τ), where d(τ) is the minimum weight of a path that leaves state zero at time 1 and that is still unmerged at time τ

• Want to select τ such that P(EML) >> P(ET)

• The minimum length τ such that d(τ) > dfree is called the truncation length. Often 4-5 times the memory m. Determined e.g. by a modified Viterbi algorithm
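The rule above can be checked numerically. For a rate-1/2 code with generator taps g1, g2 (bit i = tap on the input delayed i steps), d(τ) follows from dynamic programming over the trellis with the zero state forbidden after the initial divergence, which is the modified Viterbi algorithm in miniature. The (5, 7) code with memory m = 2 and dfree = 5 is an illustrative choice.

```python
def d_trunc(g1, g2, m, tau):
    """Minimum weight of a path leaving state 0 at time 1, unmerged at time tau."""
    nstates = 1 << m
    mask = nstates - 1
    INF = float("inf")

    def step(state, bit):
        reg = (state << 1) | bit
        w = (bin(reg & g1).count("1") & 1) + (bin(reg & g2).count("1") & 1)
        return reg & mask, w

    s0, w0 = step(0, 1)                # time 1: force the diverging 1-bit
    dist = [INF] * nstates
    dist[s0] = w0
    for _ in range(tau - 1):
        new = [INF] * nstates
        for s in range(1, nstates):    # state 0 is forbidden: still unmerged
            if dist[s] == INF:
                continue
            for bit in (0, 1):
                ns, w = step(s, bit)
                if ns != 0 and dist[s] + w < new[ns]:
                    new[ns] = dist[s] + w
        dist = new
    return min(dist)

dfree = 5                              # free distance of the (5, 7) code
tau = 1
while d_trunc(0b101, 0b111, 2, tau) <= dfree:
    tau += 1
print("truncation length:", tau)
```

For this code the loop stops at τ = 8 = 4m, consistent with the 4-5 times memory rule of thumb.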

Page 11: Code design: Computer search

12

Example

Page 12: Code design: Computer search

13

R=1/2, ν=4 code, varying path memory

Page 13: Code design: Computer search

14

Synchronization

• Symbol synchronization: Which n bits belong together as a branch label?

• Metrics evaluation: If metrics are consistently bad, this indicates loss of synchronization

• May be enhanced by code construction*

• May embed special synchronization patterns in the transmitted sequence

• What if the noise is bad?

• Decoder: Do we know that the encoder starts in the zero state?

• Not always

• Solution: Discard the first (5m) branches
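The metric-based synchronization check above can be sketched as a sliding-window monitor: if the best branch metrics stay consistently poor, declare loss of symbol synchronization and slip the frame alignment by one bit. The window length, threshold, and cost values below are all illustrative parameters, not from the slides.

```python
from collections import deque

class SyncMonitor:
    """Flag loss of symbol sync from consistently poor branch metrics."""

    def __init__(self, window=32, threshold=0.4):
        self.window = deque(maxlen=window)
        self.threshold = threshold   # tolerated average normalized branch cost

    def update(self, branch_cost):
        """Feed the best (smallest) normalized branch cost of one trellis section.

        Returns True when a full window averages above the threshold,
        i.e. when the decoder should try a different bit alignment.
        """
        self.window.append(branch_cost)
        full = len(self.window) == self.window.maxlen
        return full and sum(self.window) / len(self.window) > self.threshold

monitor = SyncMonitor()
flags_in_sync = [monitor.update(c) for c in [0.1] * 32]  # plausible aligned costs
flags_slipped = [monitor.update(c) for c in [0.5] * 32]  # costs after a slip
print(any(flags_in_sync), any(flags_slipped))
```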

Page 14: Code design: Computer search

15

Quantization

Skip this

Page 15: Code design: Computer search

16

Other complexity issues

• Branch complexity (of high rate encoders)

• Punctured codes

• Bit level encoders (equivalence to punctured codes)*

• Syndrome trellis decoding (S. Riedel)*

• Speedup:

• Parallel processing

• One processor per node

• Several time instants at a time / pipelining (Fettweis)*

• Differential Viterbi decoding

Page 16: Code design: Computer search

17

SISO decoding algorithms

• Two new elements

• Soft-in Soft-out (SISO)

• Allows non-uniform input probability

• The a priori information bit probabilities P(ul) may be different

• Applications: Turbo decoding

• Two most used algorithms:

• The Soft Output Viterbi Algorithm

• The BCJR algorithm (APP, MAP)

Page 17: Code design: Computer search

18

SOVA

Page 18: Code design: Computer search

19

SOVA (continued)

Page 19: Code design: Computer search

20

SOVA (continued)

Page 20: Code design: Computer search

21

SOVA (continued)

Page 21: Code design: Computer search

22

SOVA Update

Page 22: Code design: Computer search

23

SOVA

• Finite path memory: Straightforward

• Complexity vs. the Viterbi algorithm: Modest increase

• Storage complexity: Need to store reliability vectors in addition to metrics and survivor pointers

• Need to update the reliability vector

• Provides ML decoding (Minimizes WER) and reliability measure

• Not MAP: Does not minimize BER
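The reliability update summarized above can be sketched in isolation. When two paths merge in a state, the survivor wins by a metric difference delta; SOVA then lowers the survivor's stored reliability to min(old, delta) at every past position where the discarded competitor proposed a different bit (the standard min-type update). The bit sequences and delta below are toy values.

```python
def sova_update(reliab, surv_bits, comp_bits, delta):
    """Update survivor reliabilities after a merge with metric gap delta.

    Positions where the competitor agrees keep their old reliability;
    positions where it disagrees are capped at delta.
    """
    return [min(r, delta) if sb != cb else r
            for r, sb, cb in zip(reliab, surv_bits, comp_bits)]

reliab = [9.0, 4.0, 7.0, 8.0]   # current reliabilities along the survivor
surv   = [1, 0, 1, 1]
comp   = [1, 1, 0, 1]           # competitor disagrees in positions 1 and 2
updated = sova_update(reliab, surv, comp, 5.0)
print(updated)                  # position 2 drops to 5.0; position 1 stays 4.0
```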

Page 23: Code design: Computer search

24

Suggested exercises

• 12.6-12.26