


IEEE COMMUNICATIONS LETTERS, VOL. 14, NO. 9, SEPTEMBER 2010 857

Adaptive Threshold Technique for Bit-Flipping Decoding of Low-Density Parity-Check Codes

Junho Cho, Member, IEEE, and Wonyong Sung, Senior Member, IEEE

Abstract—The bit-flipping (BF) algorithm for decoding of low-density parity-check (LDPC) codes requires less complex hardware than other decoding algorithms, but its error correcting performance needs to be improved. In this letter, a threshold adaptation scheme is applied to BF-algorithm-based decoding of LDPC codes. Our experiments show that the optimal threshold value for BF depends on the error conditions, and that threshold adjustment during decoding helps to prevent meaningless no-flipping iterations. Based on these statistical observations, an adaptive threshold technique is proposed to pursue both the maximum error correcting capability and the fastest decoding convergence.

Index Terms—Bit-flipping (BF) algorithm, low-density parity-check (LDPC) codes, adaptive threshold.

I. INTRODUCTION

THE bit-flipping (BF) algorithm requires the smallest computational complexity among various decoding algorithms for low-density parity-check (LDPC) codes [1], [2]. Although soft-decision decoding schemes such as the sum-product and min-sum algorithms show much better error performance than the BF algorithm, they need soft-decision input data as well as much higher decoding complexity to achieve their best error performance [3], [4]. This letter focuses on maximizing the error correcting capability of simple BF decoding when only hard-decision input data are available. Empirical results are presented to show how the flipping threshold affects the error performance of the BF algorithm, which in turn suggests the need for threshold adjustment during decoding iterations. We also demonstrate that the slow convergence problem of a conservative threshold adaptation can be relieved by finding the optimal initial threshold from the empirical data.

II. FLIPPING THRESHOLD OF THE BF ALGORITHM

In Gallager’s hard-decision decoding algorithms, a variable is flipped whenever 𝑏 or more of the parity-check constraints orthogonal on the variable are violated, where 𝑏 is a positive integer [2]. We thus call 𝑏 the flipping threshold. Denoting by 𝑑𝑣 and 𝑑𝑐 the column and row degrees of an LDPC code, respectively, the threshold 𝑏 can either be fixed at 𝑑𝑣 − 1 (algorithm A) or vary with ⌈𝑑𝑣/2⌉ ≤ 𝑏 ≤ 𝑑𝑣 − 1 during decoding iterations (algorithm B). In his original work, the

Manuscript received April 13, 2010. The associate editor coordinating the review of this letter and approving it for publication was M. Lentmaier.

This work was supported in part by the Ministry of Education, Science and Technology (MEST), Republic of Korea, under the Brain Korea 21 Project, and in part by the National Research Foundation (NRF) grant funded by MEST (No. 20090075770).

The authors are with the School of Electrical Engineering and Computer Science, Seoul National University, 599 Gwanangno, Gwanak-gu, Seoul, 151-744, Korea (e-mail: [email protected]; [email protected]).

Digital Object Identifier 10.1109/LCOMM.2010.072310.100599

[Figure 1 plots: pmf 𝑝(𝐸𝑛) versus 𝐸𝑛 for (a) number of errors = 32 and (b) number of errors = 52; 𝐸𝑛 ranges over 0–60 and 𝑝(𝐸𝑛) over 0–5 × 10⁻³.]

Fig. 1. Pmf of 𝐸𝑛 for correct (dashed line) and erroneous (solid line) variables when decoding the (4161, 3431) PG-LDPC code.

optimal choice of 𝑏 at iteration ℓ is obtained by the smallest integer that satisfies

(1 − 𝑝0)/𝑝0 ≤ [ (1 + (1 − 2𝑝ℓ)^(𝑑𝑐−1)) / (1 − (1 − 2𝑝ℓ)^(𝑑𝑐−1)) ]^(2𝑏−𝑑𝑣+1),   (1)

where 𝑝0 and 𝑝ℓ denote the crossover probability of a binary symmetric channel (BSC) and the bit error probability after the ℓ-th decoding iteration, respectively. The optimal 𝑏 is derived by assuming that the variable and check nodes are independent of each other. This assumption, however, does not hold for the BF algorithm, since the marginalization process of Gallager’s algorithm B, which excludes the messages once generated by a node and then passed back to itself [5], is generally not employed in the BF algorithm, for the sake of simpler computations. Therefore, (1) is not optimal for the BF algorithm. The finite lengths of the underlying LDPC codes also introduce correlation between nodes and break the independence assumption. Another problem is that the choice of 𝑏 for each iteration ℓ varies depending on the channel parameter 𝑝0. Regarding BF decoding, Kou et al. mentioned in [6] that its error performance might be improved by using an adaptive choice of 𝑏, but the details were not addressed.
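As an illustration of condition (1), the smallest qualifying 𝑏 can be computed numerically. The sketch below is not from the letter; the function name and the fallback to 𝑏 = 𝑑𝑣 − 1 when no 𝑏 in Gallager's range satisfies the inequality are our own assumptions.

```python
import math

def gallager_threshold(p0, p_ell, dv, dc):
    """Smallest integer b in [ceil(dv/2), dv - 1] satisfying condition (1).

    Hypothetical helper: p0 is the BSC crossover probability and p_ell the
    bit error probability after the previous iteration. If no b in Gallager's
    range satisfies the inequality, fall back to the most conservative
    b = dv - 1 (an assumption, not stated in the letter).
    """
    r = (1.0 - 2.0 * p_ell) ** (dc - 1)
    base = (1.0 + r) / (1.0 - r)              # > 1 whenever p_ell < 0.5
    lhs = (1.0 - p0) / p0
    for b in range(math.ceil(dv / 2), dv):    # b = ceil(dv/2), ..., dv - 1
        if base ** (2 * b - dv + 1) >= lhs:
            return b
    return dv - 1
```

For the (65, 65)-regular code of Section II (𝑑𝑣 = 𝑑𝑐 = 65), a crossover probability of 1% yields a threshold well below the conservative 𝑑𝑣 − 1 = 64, while a noisier channel pushes the result up to 64, matching the letter's point that the choice depends on 𝑝0.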

In order to find a good threshold 𝑏 for the BF algorithm regardless of the channel parameter, we first investigate empirically how the threshold affects the error performance. A known number of errors are injected into a legitimate codeword, and the number of unsatisfied parity-checks 𝐸𝑛 for each variable 𝑛 is assessed, where correct and erroneous variables are observed separately. For a non-negative integer 𝑥, we make all variables in an 𝑁-bit received word equally likely to be erroneous with probability 𝑝0 = 𝑥/𝑁. After generating the uniformly distributed random errors, the received word is included in the observation data if the number of erroneous variables is equal to 𝑥, and abandoned otherwise. In this way, 10,000 sample error patterns are observed for each 𝑥. Figure 1 depicts the resulting probability mass function (pmf) of 𝐸𝑛 at the first BF decoding iteration for

1089-7798/10$25.00 © 2010 IEEE


[Figure 2 plot: threshold (25–55) versus number of errors (30–70).]

Fig. 2. Range of 𝑏 with 𝐺(𝑏) ≥ 1 (solid line) and the optimal 𝑏 having the largest 𝐺(𝑏) (circle).

the (65, 65)-regular (4161, 3431) projective geometry (PG)-LDPC code with different numbers of injected errors. The pmf of 𝐸𝑛 is therefore estimated based on 10,000𝑁 ≈ 4.2 × 10⁷ variables. Distinguishing the correct and erroneous variables based on 𝐸𝑛 becomes increasingly difficult as the number of errors increases (compare Fig. 1(a) and (b), for example). This indicates that LDPC codes with a large 𝑑𝑣 may lead to good BF decoding performance, since 𝐸𝑛 ∈ [0, 𝑑𝑣]. It has also been stated in [7] that the existence of many redundant parity-check equations (i.e., a large 𝑑𝑣) can lead to good performance of BF-based decoding. In this work, therefore, geometric LDPC codes are used for performance simulations.

III. BF DECODING WITH THRESHOLD ADAPTATION

Let 𝒩𝑐 and 𝒩𝑒 denote the sets of correct and erroneous variables, respectively. Then, we can define the flipping gain of a flipping threshold 𝑏 by

𝐺(𝑏) = ∣{𝑛 : 𝐸𝑛 ≥ 𝑏 and 𝑛 ∈ 𝒩𝑒}∣ − ∣{𝑛 : 𝐸𝑛 ≥ 𝑏 and 𝑛 ∈ 𝒩𝑐}∣,   (2)

where the first term stands for the gain obtained from bit-flipping of erroneous variables and the second term is the loss caused by bit-flipping of correct variables. For a given number of errors, 𝐺(𝑏) provides an explicit metric for assessing the performance of 𝑏. When a threshold 𝑏 has 𝐺(𝑏) ≥ 1, the probability of error is expected to decrease after bit-flipping with 𝑏. Furthermore, the 𝑏 yielding the greatest 𝐺(𝑏) can be regarded as optimal in the sense that it leads to the fastest decoding convergence. The range of 𝑏 producing 𝐺(𝑏) ≥ 1 and the optimal 𝑏’s for the (4161, 3431) PG-LDPC code are depicted in Fig. 2, which shows that the choice of 𝑏 should vary with the number of errors that occurred. Suppose that the received word contains 32 errors and 𝑏 = 41 is used (which is optimal for the case of 72 errors); then more iterations are required to finish the decoding due to the small 𝐺(𝑏). On the other hand, if there are 72 errors and 𝑏 = ⌈𝑑𝑣/2⌉ = 33 (which is optimal for codewords with 32 errors) is used, the bit-flipping will generate almost one thousand errors. Therefore, applying a suitable 𝑏 is critically important for achieving fast convergence and good error performance with BF decoding.
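Equation (2) transcribes directly into code. The helper that picks the 𝑏 with the largest gain is our own addition; ties are broken toward the larger 𝑏 (an assumption of ours, since larger thresholds flip fewer correct variables).

```python
def flipping_gain(E, error_set, b):
    """G(b) of Eq. (2): count of erroneous variables with E_n >= b,
    minus the count of correct variables with E_n >= b."""
    gain = sum(1 for n in error_set if E[n] >= b)
    loss = sum(1 for n in range(len(E)) if n not in error_set and E[n] >= b)
    return gain - loss

def best_threshold(E, error_set, b_min, b_max):
    """The b in [b_min, b_max] with the largest flipping gain."""
    return max(range(b_min, b_max + 1),
               key=lambda b: (flipping_gain(E, error_set, b), b))
```

In simulation the true error set is known, so 𝐺(𝑏) can be evaluated exactly, as in Fig. 2; a real decoder, of course, never has access to it.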

The intermediate bit-flipping behaviors of the BF decoding can be categorized into two kinds: one results in either an increase or a decrease of the number of errors, and the other produces no change, since every 𝐸𝑛 incidentally becomes smaller than 𝑏. Encountering the latter condition, the BF decoding repeats meaningless operations until the assigned time is exhausted. A remedy to this situation is to lower 𝑏 correspondingly. Therefore, the BF decoding can be carried out with the adaptive threshold as follows:

Initialization) Set 𝑏 as the largest value of all optimal 𝑏’s.

Step 1) Compute the parity-check for every check node.

Step 2) If all parity-checks are zero, stop decoding with the output of the current decoded word; else if the given iteration limit is reached, declare a decoding failure and stop decoding with the output of the initially received word.

Step 3) Compute 𝐸𝑛 for every variable node by simply counting the unsatisfied parity-checks associated with 𝑛. Record the maximum number 𝐸max = max𝑛 𝐸𝑛.

Step 4) Flip all variables satisfying 𝐸𝑛 ≥ 𝑏. If no flipping occurs, reduce 𝑏 to 𝐸max and repeat Step 4. Otherwise, repeat from Step 1. If the iteration limit is reached, stop decoding.
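Steps 1∼4 above can be sketched as follows. This is our reading of the procedure, not the authors' code: the names are invented, and the lowered-𝑏 re-execution of Step 4 is folded into the same loop pass rather than counted as a separate iteration as in the letter's experiments.

```python
def bf_decode_adaptive(H, received, b_init, max_iters=20):
    """Adaptive-threshold bit-flipping decoding (a sketch of Steps 1-4).

    H: parity-check matrix as a list of 0/1 rows; received: hard-decision
    bits. Returns (word, success); on failure the initially received word
    is returned, as Step 2 prescribes.
    """
    M, N = len(H), len(H[0])
    word = list(received)
    b = b_init
    for _ in range(max_iters):
        # Step 1: evaluate every parity check.
        syndrome = [sum(H[m][n] & word[n] for n in range(N)) % 2
                    for m in range(M)]
        # Step 2: all checks satisfied -> decoding success.
        if not any(syndrome):
            return word, True
        # Step 3: E_n = unsatisfied checks touching variable n, and E_max.
        E = [sum(syndrome[m] for m in range(M) if H[m][n]) for n in range(N)]
        e_max = max(E)
        # Step 4: if no E_n reaches b, lower b to E_max so a flip occurs.
        if e_max < b:
            b = e_max
        for n in range(N):
            if E[n] >= b:
                word[n] ^= 1
    return list(received), False   # failure: output the received word
```

With the (7, 4) Hamming parity-check matrix and a single flipped bit, for instance, one pass of Steps 1∼4 already restores the codeword.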

The adaptive threshold scheme maximizes the probability of decoding success by maintaining 𝑏 near 𝐸max. It also enables fast decoding convergence by setting the initial 𝑏 as the optimal value for the correctable worst error condition rather than the most conservative value 𝑑𝑣 − 1 used in Gallager’s algorithm A. Such an aggressive initialization has a negligible negative effect on the achievable error performance, as will be shown in Section IV. Meanwhile, in case of a decoding failure, the decoder outputs the initially received word instead of the decoded word. This is because the inappropriate bit-flipping attempts that led to the decoding failure generally cause a disastrous increase of errors, as mentioned before. Note that repeating Step 4 without conducting Steps 1∼3 is also counted as one iteration in our experiments.

For LDPC codes with a small 𝑑𝑣, where there is not enough room to adapt 𝑏, we conjecture that the probabilistic BF [8] improves the decoding performance with a similar approach. When 𝑑𝑣 = 3, for example, 𝑏 = 2 may yield too many incorrect flips while 𝑏 = 3 results in too few correct flips; however, since a fractional 𝑏 such as 𝑏 = 2.5 is not allowed, the probabilistic BF algorithm flips variables using 𝑏 = 1, 2, and 3 with the probabilities 𝑝1, 𝑝2, and 𝑝3, respectively, where 𝑝1 < 𝑝2 < 𝑝3 ≤ 1.
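A minimal sketch of that flip rule, assuming a variable with 𝐸𝑛 = 𝑒 is flipped with probability 𝑝𝑒. The probability values shown are illustrative placeholders, not the optimized ones of [8].

```python
import random

def probabilistic_flip_mask(E, flip_prob, rng):
    """Per-variable flip decision of probabilistic BF: a variable with
    E_n unsatisfied checks is flipped with probability flip_prob[E_n]
    (0 if E_n is not listed). A table like {1: 0.05, 2: 0.3, 3: 1.0}
    for d_v = 3 is purely illustrative."""
    return [flip_prob.get(e, 0.0) > rng.random() for e in E]
```

Setting the probability for 𝐸𝑛 = 3 to 1 and for 𝐸𝑛 = 1 to 0 recovers deterministic thresholds as special cases of the same rule.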

IV. SIMULATION RESULTS

All simulations are performed assuming codewords are transmitted over an additive white Gaussian noise (AWGN) channel with zero mean and variance 𝑁0/2 via binary phase-shift keying (BPSK) signaling. This is equivalent to transmitting the codewords over a BSC with 𝑝0 = 𝑄(√(2/𝑁0)).
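The BSC crossover probability can be computed as below. Note that with unit-energy BPSK symbols, 𝐸𝑠 = 𝑅𝐸𝑏, so 𝑄(√(2/𝑁0)) equals 𝑄(√(2𝑅𝐸𝑏/𝑁0)) when parameterized by 𝐸𝑏/𝑁0. The helper names are our own.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bsc_crossover(ebn0_db, rate):
    """p0 = Q(sqrt(2 * R * Eb/N0)) for BPSK over AWGN; rate is the code
    rate (3431/4161 for the PG-LDPC code simulated in this letter)."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return q_func(math.sqrt(2.0 * rate * ebn0))
```

For example, uncoded BPSK (rate 1) at 𝐸𝑏/𝑁0 = 0 dB gives 𝑝0 = 𝑄(√2) ≈ 0.0786.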

Figure 3 shows the error performance of the (4161, 3431) PG-LDPC code with BF decoding as a function of the iteration number. The optimal 𝑏 fixed for the whole decoding iterations is 39, and it shows much better performance than 𝑏 = ⌈𝑑𝑣/2⌉ = 33. The two adaptive threshold schemes, one starting at 𝑏 = 𝑑𝑣 − 1 = 64 and the other at 𝑏 = 41 (which produces the largest flipping gain at the worst error condition), achieve better error performance than the fixed threshold schemes. Regarding the decoding time, the fixed


[Figure 3 plot: frame error rate (10⁻⁶–10⁰, log scale) versus iteration (0–30), for four schemes: 𝑏 = ⌈𝑑𝑣/2⌉ = 33, fixed; 𝑏 = 39, fixed; 𝑏 = 𝑑𝑣 − 1 = 64, adapting; 𝑏 = 41, adapting.]

Fig. 3. Evolution of the frame error rate through iterations when 𝐸𝑏/𝑁0 = 5 dB (dashed line) and 5.25 dB (solid line).

[Figure 4 plot: error rates (10⁻⁹–10⁰, log scale) versus 𝐸𝑏/𝑁0 (4–5.5 dB), for uncoded BPSK; 𝑏 = ⌈𝑑𝑣/2⌉ = 33, fixed; 𝑏 = 39, fixed; 𝑏 = 41, adapting; and the BCH (4095, 3375, 60) code.]

Fig. 4. Frame (solid line) and bit (dashed line) error rates of the (4161, 3431) PG-LDPC code under the various BF decoding schemes. The iteration limit is set to 20.

𝑏’s of 33 and 39 converge very fast, within 10 iterations, whereas the adaptive 𝑏’s require more than 20 iterations. By setting the initial 𝑏 as the optimal value for the worst error condition, the slow convergence problem of the adaptive scheme can be considerably alleviated, and as a result the iteration limit can be set to a small number with little performance loss; e.g., to attain the frame error rate (FER) of 10⁻⁴ with 𝐸𝑏/𝑁0 = 5.25 dB, the initial 𝑏 = 64 and 𝑏 = 41 require 21 and 11 iterations, respectively. The average number of iterations to reach decoding success is also much smaller when the initial 𝑏 = 41; e.g., when a sufficient iteration limit is given with 𝐸𝑏/𝑁0 = 5.25 dB, the initial 𝑏 = 64 and 𝑏 = 41 consume 10.3 and 2.1 iterations on average, respectively.

Figure 4 shows the frame and bit error rates (BERs) of the (4161, 3431) PG-LDPC code under BF decoding with various flipping thresholds. Also shown are those of the (4095, 3375, 60) BCH code under syndrome decoding, which has a comparable codeword length and code rate. All the BF decoding schemes are assigned 20 iterations. The adaptive threshold scheme achieves approximately a 0.11 dB signal-to-noise ratio (SNR) gain over the best fixed threshold scheme

[Figure 5 plot: error rates (10⁻⁹–10⁰, log scale) versus 𝐸𝑏/𝑁0 (3–6 dB), for uncoded BPSK; 𝑏 = ⌈𝑑𝑣/2⌉ = 17, fixed; 𝑏 = 20, fixed; 𝑏 = 24, adapting; and the BCH (1023, 783, 24) code.]

Fig. 5. Frame (solid line) and bit (dashed line) error rates of the (1057, 813) PG-LDPC code under the various BF decoding schemes. The iteration limit is set to 20.

and a 0.18 dB gain over the syndrome decoding of the equivalent BCH code at the BER of 10⁻⁶. Simulation results for the (1057, 813) PG-LDPC code are shown in Fig. 5, where the adaptive threshold BF decoding obtains 0.23 dB and 0.34 dB SNR gains over the fixed threshold scheme and the syndrome decoding of the BCH (1023, 783, 24) code, respectively.

V. CONCLUDING REMARKS

The proposed adaptive threshold technique tries to maximize the number of corrected errors at each decoding iteration by changing the threshold. The simulation results for decoding of geometric LDPC codes show that the proposed method outperforms the best fixed threshold BF decoding, and attains SNR gains of 0.11 ∼ 0.23 dB over the standard BF algorithm when the iteration limit is 20. The complexity overhead to implement the proposed adaptive threshold technique is almost negligible compared to the standard BF decoding.

REFERENCES

[1] R. G. Gallager, “Low-density parity-check codes,” IRE Trans. Inf. Theory, vol. IT-8, pp. 21–28, Jan. 1962.

[2] ——, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.

[3] F. R. Kschischang, B. J. Frey, and H.-A. Loeliger, “Factor graphs and the sum-product algorithm,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 498–519, Feb. 2001.

[4] J. Chen, A. Dholakia, E. Eleftheriou, M. P. C. Fossorier, and X.-Y. Hu, “Reduced-complexity decoding of LDPC codes,” IEEE Trans. Commun., vol. 53, no. 8, pp. 1288–1299, Aug. 2005.

[5] T. J. Richardson and R. L. Urbanke, “The capacity of low-density parity-check codes under message-passing decoding,” IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.

[6] Y. Kou, S. Lin, and M. P. C. Fossorier, “Low-density parity-check codes based on finite geometries: a rediscovery and new results,” IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2711–2736, Nov. 2001.

[7] R. Palanki, M. P. C. Fossorier, and J. S. Yedidia, “Iterative decoding of multiple-step majority logic decodable codes,” IEEE Trans. Commun., vol. 55, no. 6, pp. 1099–1102, June 2007.

[8] N. Miladinovic and M. P. C. Fossorier, “Improved bit-flipping decoding of low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 51, no. 4, pp. 1594–1606, Apr. 2005.