Turbo coding (Ch 16) · INF244 Lecture 14, Universitetet i Bergen (eirik/inf244/lectures/lecture14.pdf) · Interleavers
Turbo coding (CH 16)
• Parallel concatenated codes
• Distance properties
  • Not exceptionally high minimum distance
  • But "few" codewords of "low" weight
• Trellis complexity
  • Usually extremely high trellis complexity
• Decoding
  • Suboptimum (but close to ML) iterative (turbo) decoding
• Performance
  • Low error probability at SNRs close to the Shannon limit
History
• Shannon (1948):
  • The channel's SNR (AWGN channel) determines the capacity C, in the sense that for code rates R < C we can have error-free transmission
  • For each code rate R we can compute the Shannon limit
• Difficult to approach the Shannon limit by classical methods
• But precursors existed: Gallager (1961, low-density parity-check codes) and Tanner (1981, code graphs)
• Berrou, Glavieux, and Thitimajshima invented turbo codes in 1993
Encoding
• Encode the information by a systematic encoder
  • Usually a recursive systematic rate ½ convolutional encoder
• Reorder the information bits (interleaving)
• Encode the permuted information bits again, using a recursive systematic encoder (it may be the same one). Delete the systematic bits this time
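The encoding steps above can be sketched in Python. This is a minimal illustration, not the lecture's exact encoder: the (7,5)₈ component code and the example interleaver are assumptions.

```python
def rsc_encode(u, fb=(1, 1), ff=(0, 1)):
    """Rate-1/2 recursive systematic convolutional encoder, memory 2.
    fb/ff are taps on the registers (D^1, D^2); the D^0 tap is implicit
    for both polynomials.  The defaults give the (7,5)_8 encoder:
    feedback 1+D+D^2, feedforward 1+D^2 (an assumed example)."""
    s = [0, 0]
    parity = []
    for bit in u:
        a = bit ^ (fb[0] & s[0]) ^ (fb[1] & s[1])   # feedback: register input
        p = a ^ (ff[0] & s[0]) ^ (ff[1] & s[1])     # parity output bit
        s = [a, s[0]]                               # shift the registers
        parity.append(p)
    return parity

def turbo_encode(u, perm):
    """Parallel concatenation: transmit the systematic bits, the parity
    of encoder 1, and the parity of encoder 2 applied to the interleaved
    input (the second systematic copy is deleted) -> overall rate ~1/3."""
    p1 = rsc_encode(u)
    p2 = rsc_encode([u[i] for i in perm])
    return list(u), p1, p2
```

Note that an input equal to the feedback polynomial (here 1+D+D², i.e. bits 1,1,1) drives the first encoder back to the all-zero state, which is exactly the kind of low-weight event the interleaver must break up.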
Example, more detailed (figure)
Remarks
• Starting with rate ½ component codes we get approximately rate 1/3
• Can be punctured (parity or information bits) to adjust the rate
• Can add more interleavers and component codes to lower the rate
• Large information blocks give
  • Better distance properties
  • A better-working decoding algorithm
• Simple component codes (ν = 4?) are best for moderate BERs
• Interleaver design is difficult, and there is no known technique to design the best one. Design criteria are:
  • Implementation complexity
  • Performance at low SNR (pseudorandom-like)
  • Performance at high SNR (high minimum distance)
• Disadvantage: delay in decoding
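As a sketch of the puncturing remark: a common way to go from rate 1/3 to rate 1/2 keeps every systematic bit and deletes parity bits alternately from the two component encoders. The specific pattern below is an assumption; many puncturing matrices are used in practice.

```python
def puncture(sys_bits, p1, p2):
    """Puncture a rate-1/3 turbo codeword to rate 1/2: for each
    information bit transmit the systematic bit plus one parity bit,
    taken alternately from encoder 1 (even positions) and encoder 2
    (odd positions).  One common pattern among many (assumption)."""
    out = []
    for i, s in enumerate(sys_bits):
        out.append(s)
        out.append(p1[i] if i % 2 == 0 else p2[i])
    return out
```

For K information bits the punctured codeword has 2K bits, i.e. rate 1/2, and the deleted parity positions are simply treated as erasures (zero LLRs) by the decoder.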
Example (figure): performance curve for ν = 4, K = 65536, showing the waterfall region and the error floor region
Distance properties of turbo codes
• The classical coding approach is to maximize the minimum distance
• New approach: few codewords with low weights
• Recall:
  • In a feedforward encoder, a low-weight codeword is usually generated by a low-weight input sequence
  • In a feedback encoder, a low-weight codeword is usually generated by an input sequence that is a multiple of the feedback polynomial. This often requires higher input weights
• Spectral thinning
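The "Recall" items can be checked numerically. With memory-2 rate-1/2 systematic encoders (the (7,5)₈ feedback taps below are an assumed example), a weight-1 input gives finite parity weight from a feedforward encoder, but from a feedback encoder the parity weight keeps growing with the block length; an input equal to the feedback polynomial again gives low weight.

```python
def conv_parity(u, taps_fb, taps_ff):
    """Parity sequence of a memory-2 rate-1/2 systematic encoder.
    Taps act on registers (D^1, D^2); taps_fb=(0,0) makes the encoder
    feedforward, taps_fb=(1,1) gives the (7,5)_8 feedback encoder."""
    s = [0, 0]
    out = []
    for bit in u:
        a = bit ^ (taps_fb[0] & s[0]) ^ (taps_fb[1] & s[1])
        p = a ^ (taps_ff[0] & s[0]) ^ (taps_ff[1] & s[1])
        s = [a, s[0]]
        out.append(p)
    return out

N = 100
impulse = [1] + [0] * (N - 1)
w_ff = sum(conv_parity(impulse, (0, 0), (1, 1)))   # feedforward: finite weight
w_fb = sum(conv_parity(impulse, (1, 1), (0, 1)))   # feedback: weight grows with N
poly = [1, 1, 1] + [0] * (N - 3)                   # input = feedback poly 1+D+D^2
w_poly = sum(conv_parity(poly, (1, 1), (0, 1)))    # feedback: low weight again
```

This is exactly why turbo codes use feedback component encoders: only a sparse set of inputs (multiples of the feedback polynomial) yields low-weight parity, and a good interleaver ensures such inputs do not stay "bad" for both encoders.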
Spectral thinning: Example (figures)
Spectral thinning: Remarks
• Requires a feedback encoder
  • A single-one input in a feedforward encoder has only a local weight-gain effect
  • A single-one input in a feedback encoder gains weight (at least) until the next input one is seen
• Requires an interleaver to make the code time-varying
• Stronger effect for longer block lengths; the weight spectrum becomes similar to that of random codes
• Moderate effect on the minimum distance
Interleavers for turbo codes
• Goal: input patterns which produce low-weight words in one component code should map through the interleaver to patterns which produce high-weight words in the other component code
• Interleavers with a "traditional" structure are usually bad for turbo codes
• Interleavers with a randomlike structure achieve the above goal to a larger extent
• Interleavers which are pseudorandom with constraints on spreading properties, and with additional constraints based on the particular component encoders, have provided good results
  • But such "randomlike" interleavers may be hard to implement efficiently
• Dithered relative prime (DRP) and quadratic permutation polynomial (QPP) interleavers are easy to implement and also have very good properties
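As an illustration of the QPP family mentioned in the last bullet: a QPP interleaver is simply π(x) = (f₁x + f₂x²) mod K. The parameters below are, to the author's recollection, those used for K = 40 in the LTE turbo code; treat them as an assumption and check the 3GPP tables before relying on them.

```python
def qpp_interleaver(K, f1, f2):
    """Quadratic permutation polynomial interleaver:
    pi(x) = (f1*x + f2*x^2) mod K.  For suitable (f1, f2) this is a
    permutation of {0, ..., K-1}; (f1, f2) = (3, 10) for K = 40 is
    assumed to match the LTE parameter table."""
    return [(f1 * x + f2 * x * x) % K for x in range(K)]

pi = qpp_interleaver(40, 3, 10)
```

The appeal is implementation: π(x) can be computed on the fly with two multiplications (or recursively with additions only), so no permutation table needs to be stored.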
Block interleaver: Example
• The critical input sequence is (1 + D^5)·D^l
Effects of block interleaver (figure)
Pseudorandom interleavers
• Your favourite (pseudo)random generator together with table lookup
• Quadratic congruence
  • c_m ≡ k·m(m+1)/2 (mod K), 0 ≤ m < K, with k an odd integer, generates an index mapping function c_m → c_{(m+1) mod K}
  • Example with K = 4 and k = 1: (c_m) = (0, 1, 3, 2), and the interleaver is defined by (1, 3, 0, 2). This pattern can also be shifted cyclically
  • The statistical properties are similar to those of random interleavers when K is a power of 2
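The quadratic-congruence construction above can be reproduced directly. A minimal sketch; it relies on (c_m) being a permutation of {0, …, K−1}, which holds when K is a power of 2 and k is odd.

```python
def quadratic_congruence_interleaver(K, k):
    """Build the sequence c_m = k*m*(m+1)/2 mod K and the interleaver
    defined by the index mapping c_m -> c_{(m+1) mod K}.
    Assumes (c_m) is a permutation (K a power of 2, k odd)."""
    c = [(k * m * (m + 1) // 2) % K for m in range(K)]
    pi = [0] * K
    for m in range(K):
        pi[c[m]] = c[(m + 1) % K]
    return c, pi

c, pi = quadratic_congruence_interleaver(4, 1)
```

Running the slide's example K = 4, k = 1 reproduces (c_m) = (0, 1, 3, 2) and the interleaver (1, 3, 0, 2).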
Turbo decoding (figure: channel and decoder block diagram)
Turbo decoding (continued)
• Channel reliability: L_c = 4E_s/N_0
• LLR computed by SISO 1: L^(1)(u_l) = ln( P(u_l = +1 | r_1, L_a^(1)) / P(u_l = -1 | r_1, L_a^(1)) )
• The extrinsic output of SISO 1 becomes the a priori input of SISO 2, and vice versa (figure: SISO 1 and SISO 2 exchanging extrinsic/a priori values)
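The channel reliability L_c = 4E_s/N_0 on the slide can be verified numerically: for BPSK amplitudes ±√E_s over AWGN with noise variance N_0/2, the exact channel LLR equals L_c times the amplitude-normalized received value. A small check, not part of the lecture.

```python
import math

def channel_llr(r, es, n0):
    """Exact log-likelihood ratio ln p(r|+1)/p(r|-1) for BPSK
    (+-sqrt(Es)) over an AWGN channel with noise variance N0/2.
    The Gaussian normalization constant cancels in the ratio,
    so only the exponents are kept."""
    sigma2 = n0 / 2.0
    def unnormalized_pdf(x, mean):
        return math.exp(-(x - mean) ** 2 / (2 * sigma2))
    return math.log(unnormalized_pdf(r, math.sqrt(es))
                    / unnormalized_pdf(r, -math.sqrt(es)))

es, n0 = 1.0, 0.5
lc = 4 * es / n0   # channel reliability Lc = 4 Es/N0
r = 0.3            # a received value (Es = 1, so already normalized)
```

For these numbers channel_llr(r, es, n0) agrees with lc * r to floating-point precision, which is why the SISO decoders can use the simple scaled inputs L_c·r_l instead of evaluating Gaussian densities.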