Research Article

A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

Yingxian Zhang, Xiaofei Pan, Kegang Pan, Zhan Ye, and Chao Gong

Laboratory of Satellite Communications, College of Communications Engineering, PLA University of Science and Technology, Nanjing 210007, China

Correspondence should be addressed to Yingxian Zhang; jenusxyz@gmail.com

Received 4 May 2014; Accepted 8 July 2014; Published 23 July 2014

Academic Editor: Lei Cao

Copyright © 2014 Yingxian Zhang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length.
1. Introduction
Due to their ability to achieve the Shannon capacity and their low encoding and decoding complexity, polar codes have received much attention in recent years [1-20]. However, compared to some original coding schemes such as LDPC and Turbo codes, polar codes have a remarkable drawback, that is, the performance of the codes in the finite-length regime is limited [2, 3]. Hence, researchers have proposed many decoding algorithms to improve the performance of the codes [4-19].
In [4, 5], a list successive-cancellation (SCL) decoding algorithm was proposed with consideration of $L$ successive-cancellation (SC) [1] decoding paths, and the results showed that the performance of SCL was very close to that of maximum-likelihood (ML) decoding. Then, in [6], another decoding algorithm derived from SC, called stack successive-cancellation (SCS), was introduced to decrease the time complexity of SCL. In particular, with CRC aided, SCL yielded better performance than that of some Turbo codes, as shown in [7]. However, due to the serial processing nature of SC, the algorithms in [4-7] suffered a low decoding throughput and high latency. Based on this observation, some improved versions of SC were proposed with the explicit aim to increase throughput and reduce the latency without sacrificing error-rate performance, such as simplified successive-cancellation (SSC) [8], maximum-likelihood SSC (ML-SSC) [9], and repetition single parity check ML-SSC (RSM-SSC) [10, 11]. Besides those SC-based algorithms, researchers have also investigated some other algorithms. In [12, 13], ML and maximum a posteriori (MAP) decoding were proposed for short polar codes, and in [14] a linear programming decoder was introduced for the binary erasure channels (BECs). With the factor graph representation of polar codes [15], the authors in [16, 17] showed that belief propagation (BP) polar decoding had particular advantages with respect to the decoding throughput, while the performance was better than that of SC and some improved SC decoding. What is more, with the minimal stopping set optimized, the results of [18, 19] showed that the error floor performance of polar codes was superior to that of LDPC codes.
Hindawi Publishing Corporation, The Scientific World Journal, Volume 2014, Article ID 895782, 14 pages, http://dx.doi.org/10.1155/2014/895782
Indeed, all the decoding algorithms in [4-19] can improve the performance of polar codes to a certain degree. However, for a capacity-achieving coding scheme, the results of those algorithms are disappointing. Hence, we cannot help wondering why the performance of polar codes with finite length is inferior to that of the existing coding schemes and how we can improve it. To answer these questions, we need to make a further analysis of the decoding algorithms in [4-19].
For the decoding algorithms with serial processing, there has been the problem of error propagation, in addition to the low decoding throughput and high latency [20, 21]. That is to say, errors which occurred in a previous node will lead to erroneous decoding of later nodes. However, none of the existing serial processing algorithms has considered this observation. Furthermore, it is noticed from the factor graph of polar codes in [15] that the degree of the check or variable nodes in the decoder is 2 or 3, which weakens the error-correcting capacity of the decoding, as compared to LDPC codes, whose average degree is usually greater than 3 [22, 23]. Hence, the performance of polar codes is inferior to that of LDPC codes with the same length [18, 19]. What is more, BP polar decoding needs more iterations than that of LDPC codes, as shown in [16, 17, 22, 23]. Therefore, in order to improve the performance of a decoding algorithm for polar codes, it is important to enhance the error-correcting capacity of the algorithm.
Motivated by the aforementioned observations, we propose a parallel decoding algorithm for short polar codes based on error checking and correcting in this paper. We first classify the nodes of the proposed decoder into two categories: information nodes and frozen nodes, the values of which are determined and independent of the decoding algorithm. Then, we introduce the method to check the errors in the input nodes of the decoder with the solutions of the error-checking equations generated based on the frozen nodes. To correct those checked errors, we modify the probability messages of the error nodes with constant values according to the maximization principle. Through delving into the solving of the error-checking equations, we find that there exist multiple solutions for those equations. Hence, so as to check the errors as accurately as possible, we further formulate a CRC-aided optimization problem of finding the optimal solution of the error-checking equations with three different target functions. Besides, we also use a parallel method based on the decoding tree representations of the nodes to calculate the probability messages, in order to increase the throughput of decoding. The main contributions of this paper can be summarized as follows.
(i) An error-checking algorithm for polar decoding based on solving the error-checking equations is introduced; furthermore, so as to enhance the accuracy of the error checking, a CRC-aided optimization problem of finding the optimal solution is formulated.

(ii) To correct the checked errors, we propose a method of modifying the probability messages of the error nodes according to the maximization principle.

(iii) In order to improve the throughput of the decoding, we propose a parallel probability message calculating method based on the decoding tree representation of the nodes.

(iv) The whole procedure of the proposed decoding algorithm is described in the form of pseudocode, and the complexity of the algorithm is also analyzed.
The findings of this paper suggest that, with the error checking and correcting, the error-correcting capacity of the decoding algorithm can be enhanced, which will yield a better performance at the cost of certain complexity. Specifically, with the parallel probability message calculating, the throughput of decoding is higher than that of the serial-process-based decoding algorithms. All of these results are finally proved by our simulation work.
The remainder of this paper is organized as follows. In Section 2, we explain some notations and introduce certain preliminary concepts used in the subsequent sections. In Section 3, the method of error checking for decoding based on the error-checking equations is described in detail. In Section 4, we introduce the methods of probability message calculating and error correcting, and, after the formulation of the CRC-aided optimization problem of finding the optimal solution, the proposed decoding algorithm is presented in the form of pseudocode. Then, the complexity of our algorithm is analyzed. Section 5 provides the simulation results for the complexity and bit error performance. Finally, we draw some conclusions in Section 6.
2. Preliminary
2.1. Notations. In this work, blackboard bold letters, such as $\mathbb{X}$, denote sets, and $|\mathbb{X}|$ denotes the number of elements in $\mathbb{X}$. The notation $u_0^{N-1}$ denotes an $N$-dimensional vector $(u_0, u_1, \ldots, u_{N-1})$, and $u_i^j$ indicates a subvector $(u_i, u_{i+1}, \ldots, u_{j-1}, u_j)$ of $u_0^{N-1}$, $0 \le i, j \le N-1$. When $i > j$, $u_i^j$ is an empty vector. Further, given a vector set $\mathbb{U}$, $u_i$ is the $i$th element of $\mathbb{U}$.

The matrixes in this work are denoted by bold letters. The subscript of a matrix indicates its size; for example, $\mathbf{A}_{N \times M}$ represents an $N \times M$ matrix $\mathbf{A}$. Specifically, square matrixes are written as $\mathbf{A}_N$, the size of which is $N \times N$, and $\mathbf{A}_N^{-1}$ is the inverse of $\mathbf{A}_N$. Furthermore, the Kronecker product of two matrixes $\mathbf{A}$ and $\mathbf{B}$ is written as $\mathbf{A} \otimes \mathbf{B}$, and the $n$th Kronecker power of $\mathbf{A}$ is $\mathbf{A}^{\otimes n}$.

During the procedure of encoding and decoding, we denote an intermediate node as $v(i, j)$, $0 \le i \le n$, $0 \le j \le N-1$, where $N = 2^n$ is the code length. Besides, we indicate the probability values of the intermediate node $v(i, j)$ being equal to 0 or 1 as $p_{v(i,j)}(0)$ or $p_{v(i,j)}(1)$.

Throughout this paper, "$\oplus$" denotes the modulo-two sum, and "$\sum_{i=0}^{M} \oplus\, x_i$" means "$x_0 \oplus x_1 \oplus \cdots \oplus x_M$".
2.2. Polar Encoding and Decoding. A polar coding scheme can be uniquely defined by three parameters: block length $N = 2^n$, code rate $R = K/N$, and an information set $\mathbb{I} \subset \mathbb{N} = \{0, 1, \ldots, N-1\}$, where $K = |\mathbb{I}|$. With these three parameters,
Figure 1: Construction of the polar encoding with length $N = 8$.
a source binary vector $u_0^{N-1}$ consisting of $K$ information bits and $N-K$ frozen bits can be mapped to a codeword $x_0^{N-1}$ by a linear matrix $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$, where $\mathbf{F}_2 = \left[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]$, $\mathbf{B}_N$ is a bit-reversal permutation matrix defined in [1], and $x_0^{N-1} = u_0^{N-1}\mathbf{G}_N$.
In practice, the polar encoding can be completed with the construction shown in Figure 1, where the gray circle nodes are the intermediate nodes. The nodes in the leftmost column are the input nodes of the encoder, the values of which are equal to the binary source vector, that is, $v(0, i) = u_i$, while the nodes in the rightmost column are the output nodes of the encoder, $v(n, i) = x_i$. Based on the construction, a codeword $x_0^7$ is generated by the recursive linear transformation of the nodes between adjacent columns.

After the procedure of the polar encoding, all the bits in the codeword $x_0^{N-1}$ are passed to the $N$ channels, which consist of $N$ independent channels of $W$, with a transition probability of $W(y_i \mid x_i)$, where $y_i$ is the $i$th element of the received vector $y_0^{N-1}$.
At the receiver, the decoder can output the estimated codeword $\hat{x}_0^{N-1}$ and the estimated source binary vector $\hat{u}_0^{N-1}$ with different decoding algorithms [1-19]. It is noticed from [1-19] that the construction of all the decoders is the same as that of the encoder; here, we make a strict proof of that with the mathematical formula in the following theorem.
Theorem 1. For the generation matrix of a polar code $\mathbf{G}_N$, there exists

$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \quad (1)$$

That is to say, for the decoding of polar codes, one will have

$$\hat{u}_0^{N-1} = \hat{x}_0^{N-1}\mathbf{G}_N^{-1} = \hat{x}_0^{N-1}\mathbf{G}_N, \quad (2)$$

where $\mathbf{G}_N^{-1}$ is the construction matrix of the decoder.

Proof. The proof of Theorem 1 is based on matrix transformation, which is shown in detail in Appendix A.
Hence, as for the polar encoder shown in Figure 1, there is

$$\mathbf{G}_8 = \mathbf{G}_8^{-1} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}. \quad (3)$$
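Theorem 1 is easy to check numerically for small code lengths. The following sketch (our own illustration; the helper name `polar_generator` is ours, not from the paper) builds $\mathbf{G}_N = \mathbf{B}_N\mathbf{F}_2^{\otimes n}$ and verifies (1) and the matrix (3) over $GF(2)$:

```python
import numpy as np

def polar_generator(n: int) -> np.ndarray:
    """Build G_N = B_N * F_2^{(kron) n} over GF(2), with N = 2^n."""
    F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = F2
    for _ in range(n - 1):
        G = np.kron(G, F2)
    N = 1 << n
    # B_N: bit-reversal permutation applied to the rows
    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return G[perm, :]

G8 = polar_generator(3)
# Row 1 of (3): (1 0 0 0 1 0 0 0)
assert G8[1].tolist() == [1, 0, 0, 0, 1, 0, 0, 0]
# (1): G_N is its own inverse over GF(2)
assert np.array_equal(G8.dot(G8) % 2, np.eye(8, dtype=np.uint8))
```

The check passes for any $n$, since $\mathbf{F}_2^2 = \mathbf{I}_2$ over $GF(2)$ and the bit-reversal permutation commutes with the Kronecker power.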
Furthermore, we have the construction of the decoder as shown in Figure 2(a), where the nodes in the rightmost column are the input nodes of the decoder, and the output nodes are the nodes in the leftmost column. During the procedure of the decoding, the probability messages of the received vector are recursively propagated from the rightmost column nodes to the leftmost column nodes. Then, the estimated source binary vector $\hat{u}_0^7$ can be decided by

$$\hat{u}_i = \begin{cases} 0, & p_{v(0,i)}(0) > p_{v(0,i)}(1), \\ 1, & \text{otherwise}. \end{cases} \quad (4)$$

In fact, the input probability messages of the decoder depend on the transition probability $W(y_i \mid x_i)$ and the received vector $y_0^7$; hence, there is

$$p_{v(n,i)}(0) = W\left(y_i \mid x_i = 0\right), \qquad p_{v(n,i)}(1) = W\left(y_i \mid x_i = 1\right). \quad (5)$$

For convenience of expression, we will write the input probability messages $W(y_i \mid x_i = 0)$ and $W(y_i \mid x_i = 1)$ as $q_i(0)$ and $q_i(1)$, respectively, in the rest of this paper. Therefore, we further have

$$p_{v(n,i)}(0) = q_i(0), \qquad p_{v(n,i)}(1) = q_i(1). \quad (6)$$
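As a concrete illustration of (5) and (6), suppose, purely as an assumed example (the paper does not fix a particular channel here), that $W$ is a binary symmetric channel with crossover probability $p$. Then the input messages $q_i(0)$, $q_i(1)$ follow directly from the transition probability:

```python
def bsc_input_messages(y, p):
    """Input messages of (5)-(6) for a BSC with crossover probability p:
    q_i(0) = W(y_i | x_i = 0), q_i(1) = W(y_i | x_i = 1)."""
    q0 = [1 - p if yi == 0 else p for yi in y]
    q1 = [p if yi == 0 else 1 - p for yi in y]
    return q0, q1

q0, q1 = bsc_input_messages([0, 1, 0, 0, 1, 1, 0, 1], p=0.1)
# A hard decision of the same form as (4), applied to the input messages,
# simply returns the received vector when p < 1/2
hard = [0 if a > b else 1 for a, b in zip(q0, q1)]
assert hard == [0, 1, 0, 0, 1, 1, 0, 1]
```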
2.3. Frozen and Information Nodes. In practice, due to the input of the frozen bits [1], the values of some nodes in the decoder are determined and independent of the decoding algorithm, as the red circle nodes illustrated in Figure 2(a) (the code construction method is the same as [1]). Based on this observation, we classify the nodes in the decoder into two categories: the nodes with determined values are called frozen nodes, and the other nodes are called information nodes, as the gray circle nodes shown in Figure 2(a). In addition, with the basic process units of the polar decoder shown in Figure 2(b), we have the following lemma.
Lemma 2. For the decoder of a polar code with rate $R < 1$, there must exist some frozen nodes, the number of which depends on the information set $\mathbb{I}$.

Proof. The proof of Lemma 2 can easily be finished based on the process units of the polar decoder shown in Figure 2(b), where $v(i, j_1)$, $v(i, j_2)$, $v(i+1, j_3)$, and $v(i+1, j_4)$ are the four nodes of the decoder.
Figure 2: (a) Construction of polar decoding with code length $N = 8$. (b) Basic process units of the polar decoder.
Lemma 2 has shown that, for a polar code with rate $R < 1$, the frozen nodes always exist; for example, the frozen nodes in Figure 2(a) are $v(0,0)$, $v(1,0)$, $v(0,1)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$. For convenience, we denote the frozen node set of a polar code as $\mathbb{V}_F$, and we assume that the default value of each frozen node is 0 in the subsequent sections.
2.4. Decoding Tree Representation. It can be found from the construction of the decoder in Figure 2(a) that the decoding of a node $v(i,j)$ can be equivalently represented as a binary decoding tree with some input nodes, where $v(i,j)$ is the root node of that tree and the input nodes are the leaf nodes. The height of a decoding tree is at most $\log_2 N$, and each of the intermediate nodes has one or two son nodes. As illustrated in Figure 3, the decoding trees for the frozen nodes $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ in Figure 2(a) are given.
During the decoding procedure, the probability messages of $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ will strictly depend on the probability messages of the leaf nodes, as the bottom nodes shown in Figure 3. In addition, based on (2), we further have

$$v(0,0) = \sum_{i=0}^{7} \oplus\, v(3,i),$$
$$v(1,0) = v(3,0) \oplus v(3,1) \oplus v(3,2) \oplus v(3,3),$$
$$v(0,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(1,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,2) = v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,4) = v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7). \quad (7)$$
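The four relations of (7) that involve the output nodes $v(0,j)$ can be verified directly from $\mathbf{G}_8$ in (3), since the decoder inputs $v(3,i)$ carry the codeword bits $x_i$ and $u = x\mathbf{G}_8$ by (2). A small self-check (our own illustration, not part of the paper's algorithm):

```python
import numpy as np
from functools import reduce

# G8 from (3)
G8 = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
], dtype=np.uint8)

xor = lambda bits: reduce(lambda a, b: a ^ b, bits)

rng = np.random.default_rng(0)
u = rng.integers(0, 2, size=8, dtype=np.uint8)   # random source vector
x = u.dot(G8) % 2                                # codeword: decoder inputs v(3, i)

assert u[0] == xor(x)                # v(0,0) = modulo-2 sum of all v(3,i)
assert u[1] == xor(x[[4, 5, 6, 7]])  # v(0,1)
assert u[2] == xor(x[[2, 3, 6, 7]])  # v(0,2)
assert u[4] == xor(x[[1, 3, 5, 7]])  # v(0,4)
```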
To generalize the decoding tree representation for the decoding, we introduce the following lemma.

Lemma 3. In the decoder of a polar code with length $N = 2^n$, there is a unique decoding tree for each node $v(i,j)$, the leaf node set of which is indicated as $\mathbb{V}^L_{v(i,j)}$. If $j \ne N-1$, the number of the leaf nodes is even; that is,

$$v(i,j) = \sum_{k=0}^{M-1} \oplus\, v(n, j_k), \quad 0 \le j_k \le N-1,\ v(n, j_k) \in \mathbb{V}^L_{v(i,j)}, \quad (8)$$

where $M = |\mathbb{V}^L_{v(i,j)}|$ and $(M \bmod 2) = 0$. While if $j = N-1$, $M$ is equal to 1, and it is true that

$$v(i, N-1) = v(n, N-1). \quad (9)$$
Proof. The proof of Lemma 3 is based on (2) and the construction of the generation matrix. It is easily proved that, except for the last column (with only one "1" element), there is an even number of "1" elements in all the other columns of $\mathbf{F}_2^{\otimes n}$. As $\mathbf{B}_N$ is a bit-reversal permutation matrix, which is generated by permutation of rows in $\mathbf{I}_N$, the generation matrix $\mathbf{G}_N$ has the same characteristic as $\mathbf{F}_2^{\otimes n}$ (see the proof of Theorem 1). Therefore, (8) and (9) can easily be proved by (2).

Lemma 3 has clearly shown the relationship between the input nodes and the other intermediate nodes of the decoder, which is useful for the error checking and probability message calculation introduced in the subsequent sections.
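The column-weight property used in the proof of Lemma 3 (every column of $\mathbf{F}_2^{\otimes n}$ except the last has an even number of "1" elements) can be confirmed directly for small $n$; the following sketch is our own check:

```python
import numpy as np

def kron_power(F, n):
    """n-th Kronecker power of a 0/1 matrix."""
    G = F
    for _ in range(n - 1):
        G = np.kron(G, F)
    return G

F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
for n in range(1, 7):
    G = kron_power(F2, n)
    weights = G.sum(axis=0)
    assert all(w % 2 == 0 for w in weights[:-1])  # all but the last column: even weight
    assert weights[-1] == 1                       # last column: a single "1"
```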
3. Error Checking for Decoding

As analyzed in Section 1, the key problem to improve the performance of polar codes is to enhance the error-correcting capacity of the decoding. In this section, we will show how to achieve this goal.

3.1. Error Checking by the Frozen Nodes. It is noticed from Section 2.3 that the values of the frozen nodes are determined. Hence, if the decoding is correct, the probability messages of any frozen node $v(i,j)$ must satisfy the condition $p_{v(i,j)}(0) > p_{v(i,j)}(1)$ (the default value of frozen nodes is 0), which is called the reliability condition throughout this
Figure 3: The decoding trees for the nodes $v(0,0)$, $v(0,1)$, $v(1,0)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$.
paper. While in practice, due to the noise of the received signal, there may exist some frozen nodes unsatisfying the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. Hence, it is exactly based on this observation that we can check the errors during the decoding. To describe this in further detail, a theorem is introduced to show the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.
Theorem 4. For any frozen node $v(i,j)$ with a leaf node set $\mathbb{V}^L_{v(i,j)}$, if the probability messages of $v(i,j)$ do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in $\mathbb{V}^L_{v(i,j)}$; otherwise, the number of errors will be even (including 0).

Proof. For the proof of Theorem 4, see Appendix B.
Theorem 4 has provided us with an effective method to detect the errors in the leaf node set of a frozen node. For example, if the probability messages of the frozen node $v(0,0)$ in Figure 2 do not satisfy the reliability condition, that is, $p_{v(0,0)}(0) \le p_{v(0,0)}(1)$, it can be confirmed that there must exist errors in the set of $\{v(3,0), v(3,1), \ldots, v(3,7)\}$, and the number of these errors may be 1, 3, 5, or 7. That is to say, through checking the reliability condition of the frozen nodes, we can confirm the existence of the errors in the input nodes of the decoder, which is further presented as a corollary.
Corollary 5. For a polar code with the frozen node set $\mathbb{V}_F$, if $\exists v(i,j) \in \mathbb{V}_F$ such that $v(i,j)$ does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.

Proof. The proof of Corollary 5 is easily completed based on Theorem 4.

Corollary 5 has clearly shown that, through checking the probability messages of each frozen node, errors in the input nodes of the decoder can be detected.
3.2. Error-Checking Equations. As aforementioned, errors in the input nodes can be found with the probability messages of the frozen nodes, but there still remains the problem of how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node $v(n,i)$, the error indicator is denoted as $c_i$, the value of which is given by

$$c_i = \begin{cases} 1, & v(n,i) \text{ is in error}, \\ 0, & \text{otherwise}. \end{cases} \quad (10)$$

That is to say, by the parameter of the error indicator, we can determine whether an input node is in error or not.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.
Corollary 6. For any frozen node $v(i,j)$ with a leaf node set $\mathbb{V}^L_{v(i,j)}$, there is

$$\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1), \\ 0, & \text{otherwise}, \end{cases} \quad (11)$$

where $M = |\mathbb{V}^L_{v(i,j)}|$, $v(n, i_k) \in \mathbb{V}^L_{v(i,j)}$, and $N = 2^n$ is the code length. Furthermore, under the field of $GF(2)$, (11) can be written as

$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1), \\ 0, & \text{otherwise}. \end{cases} \quad (12)$$

Proof. The proof of Corollary 6 is based on Lemma 3 and Theorem 4; here, we omit the detailed derivation process.
Corollary 6 has shown that the problem of obtaining the error indicators can be transformed into finding the solutions of (12) under the condition of (11). In order to introduce this more specifically, we will take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, there are equations as

$$\sum_{i=0}^{7} \oplus\, c_i = 1,$$
$$c_0 \oplus c_1 \oplus c_2 \oplus c_3 = 1,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_2 \oplus c_3 \oplus c_6 \oplus c_7 = 1,$$
$$c_1 \oplus c_3 \oplus c_5 \oplus c_7 = 1. \quad (13)$$
Furthermore, (13) can be written in matrix form, which is

$$\mathbf{E}_{6 \times 8}\left(c_0^7\right)^T =
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix}
= \left(\gamma_0^5\right)^T, \quad (14)$$

where $\gamma_0^5 = (1, 1, 0, 0, 1, 1)$ and $\mathbf{E}_{6 \times 8}$ is the coefficient matrix with size $6 \times 8$. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to further generalize the above example, we provide a lemma.
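The system (14) is small enough to solve by exhausting all $2^8$ indicator vectors over $GF(2)$; the sketch below (illustrative only) recovers the full solution set and shows the multiplicity of solutions that the analysis in the next subsection formalizes:

```python
import itertools
import numpy as np

# Coefficient matrix E_{6x8} and error-checking vector gamma from (14)
E = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
], dtype=np.uint8)
gamma = np.array([1, 1, 0, 0, 1, 1], dtype=np.uint8)

solutions = [c for c in itertools.product((0, 1), repeat=8)
             if np.array_equal(E.dot(c) % 2, gamma)]

assert len(solutions) == 16                   # rank(E) = 4, so 2^(8-4) solutions
assert (0, 0, 0, 1, 0, 0, 0, 0) in solutions  # a single error at input node 3
```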
Lemma 8. For a polar code with code length $N$, code rate $R = K/N$, and frozen node set $\mathbb{V}_F$, we have the error-checking equations as

$$\mathbf{E}_{M \times N}\left(c_0^{N-1}\right)^T = \left(\gamma_0^{M-1}\right)^T, \quad (15)$$

where $c_0^{N-1}$ is the error indicator vector and $M = |\mathbb{V}_F|$, $M \ge N-K$. $\mathbf{E}_{M \times N}$ is called the error-checking matrix, the elements of which are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in $\mathbb{V}_F$; that is, $\forall v_i \in \mathbb{V}_F$, $0 \le i \le M-1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that

$$\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1), \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases} \quad (16)$$

Proof. The proof of Lemma 8 is based on (10)-(14), Lemma 3, and Theorem 4, which will be omitted here.
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the positions of errors in the input nodes by the error-checking equations. It is still needed to investigate the existence of solutions of the error-checking equations.

Theorem 9. For a polar code with code length $N$ and code rate $R = K/N$, there is

$$\operatorname{rank}\left(\mathbf{E}_{M \times N}\right) = \operatorname{rank}\left(\left[\mathbf{E}_{M \times N} \mid \left(\gamma_0^{M-1}\right)^T\right]\right) = N - K, \quad (17)$$

where $\left[\mathbf{E}_{M \times N} \mid \left(\gamma_0^{M-1}\right)^T\right]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.

Proof. For the proof of Theorem 9, see Appendix C.

It is noticed from Theorem 9 that there must exist multiple solutions for the error-checking equations; therefore, we further investigate the general expression of the solutions of the error-checking equations, as shown in the following corollary.
Corollary 10. For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $\mathbf{P}_{N \times M}$ in the field of $GF(2)$ such that

$$\left[\mathbf{E}_{M \times N} \mid \left(\gamma_0^{M-1}\right)^T\right] \xrightarrow{\mathbf{P}_{N \times M}}
\begin{bmatrix}
\mathbf{I}_H & \mathbf{A}_{H \times K} & \left(\hat{\gamma}_0^{H-1}\right)^T \\
\mathbf{0}_{(M-H) \times H} & \mathbf{0}_{(M-H) \times K} & \mathbf{0}_{(M-H) \times 1}
\end{bmatrix}, \quad (18)$$

where $H = N - K$, $\mathbf{A}_{H \times K}$ is the submatrix of the transformation result of $\mathbf{E}_{M \times N}$, and $\hat{\gamma}_0^{H-1}$ is the subvector of the transformation result of $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by

$$\left(\hat{c}_0^{N-1}\right)^T = \begin{bmatrix} \left(\hat{c}_K^{N-1}\right)^T \\ \left(\hat{c}_0^{K-1}\right)^T \end{bmatrix}
= \begin{bmatrix} \mathbf{A}_{H \times K}\left(\hat{c}_0^{K-1}\right)^T \oplus \left(\hat{\gamma}_0^{H-1}\right)^T \\ \left(\hat{c}_0^{K-1}\right)^T \end{bmatrix}, \quad (19)$$

$$\left(c_0^{N-1}\right)^T = \mathbf{B}_N\left(\hat{c}_0^{N-1}\right)^T, \quad (20)$$

where $\hat{c}_i \in \{0, 1\}$ and $\mathbf{B}_N$ is an element-permutation matrix, which is determined by the matrix transformation of (18).

Proof. The proof of Corollary 10 is based on Theorem 9 and linear equation solving theory, which are omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors $\hat{\gamma}_0^{H-1}$ and $\hat{c}_0^{K-1}$, where $\hat{\gamma}_0^{H-1}$ is determined by the transformation matrix $\mathbf{P}_{N \times M}$ and the error-checking vector $\gamma_0^{M-1}$, and $\hat{c}_0^{K-1}$ is a random vector. In general, based on $\hat{c}_0^{K-1}$, the number of solutions for the error-checking equations may be up to $2^K$, which is a terrible number for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we further introduce a theorem.
Theorem 11. For a polar code with code length $N = 2^n$ and frozen node set $\mathbb{V}_F$, there exists a positive real number $\delta$ such that, $\forall v(i,j) \in \mathbb{V}_F$, if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, there is

$$\forall v(n, j_k) \in \mathbb{V}^L_{v(i,j)} \Longrightarrow c_{j_k} = 0, \quad (21)$$

where $\mathbb{V}^L_{v(i,j)}$ is the leaf node set of $v(i,j)$, $0 \le j_k \le N-1$, $0 \le k \le |\mathbb{V}^L_{v(i,j)}|-1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.

Proof. For the proof of Theorem 11, see Appendix D.
Theorem 11 has shown that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements in $\hat{c}_0^{K-1}$, by which the degrees of freedom of $\hat{c}_0^{K-1}$ will be further reduced. Correspondingly, the number of solutions of the error-checking equations will also be reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\hat{\gamma}_0^3 = (1, 1, 1, 0)$,

$$\mathbf{A}_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}, \quad (22)$$

$$\mathbf{B}_8 = \begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}. \quad (23)$$

By the element permutation of $\mathbf{B}_8$, we further have $\hat{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\hat{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $p_{v(0,1)}(0)/p_{v(0,1)}(1) \ge \delta$, with the checking of (21), there is $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of the solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of the solutions will depend on the probability messages of the frozen nodes and $\delta$.
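The algebra of this example can be replayed numerically. The sketch below takes $\mathbf{A}_4$ and $\hat{\gamma}_0^3 = (1,1,1,0)$ from (22) and the surrounding text and evaluates the dependent indicators for the two remaining candidates:

```python
import numpy as np

# A4 and the transformed checking vector, taken from (22) and the text
A4 = np.array([[1, 1, 1, 0],
               [1, 1, 0, 1],
               [1, 0, 1, 1],
               [0, 1, 1, 1]], dtype=np.uint8)
gamma_hat = np.array([1, 1, 1, 0], dtype=np.uint8)

def dependent_part(c3):
    """(c0, c1, c2, c4) from the free part (c3, c5, c6, c7) = (c3, 0, 0, 0),
    following the general solution (19); c5 = c6 = c7 = 0 is forced by (21)."""
    free = np.array([c3, 0, 0, 0], dtype=np.uint8)
    return (A4.dot(free) + gamma_hat) % 2

assert dependent_part(0).tolist() == [1, 1, 1, 0]  # candidate with c3 = 0
assert dependent_part(1).tolist() == [0, 0, 0, 0]  # candidate with c3 = 1
# c3 = 1 reproduces c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0): only input node 3 is in error
```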
As the summarization of this section, we give the complete process framework of error checking by solutions of the error-checking equations, which is shown in Algorithm 1.
4. Proposed Decoding Algorithm

In this section, we will introduce the proposed decoding algorithm in detail.

4.1. Probability Messages Calculating. Probability message calculating is an important aspect of a decoding algorithm. Our proposed algorithm is different from the SC and BP algorithms, because the probability messages are calculated based on the decoding tree representation of the nodes in the decoder; for an intermediate node $v(i,j)$ with only one son node $v(i+1, j_o)$, $0 \le j_o \le N-1$, there is

$$p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0), \qquad p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1). \quad (24)$$
Input: The frozen node set $\mathbb{V}_F$; the probability messages set of $\mathbb{V}_F$; the matrixes $\mathbf{P}_{N \times M}$, $\mathbf{A}_{H \times K}$, and $\mathbf{B}_N$.
Output: The error indicator vectors set $\mathbb{C}$.
(1) Get $\gamma_0^{M-1}$ with the probability messages set of $\mathbb{V}_F$.
(2) Get $\hat{\gamma}_0^{H-1}$ with $\gamma_0^{M-1}$ and $\mathbf{P}_{N \times M}$.
(3) for each $v(i,j) \in \mathbb{V}_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$ then
(5)     Set the error indicator for each leaf node of $v(i,j)$ to 0.
(6)   end if
(7) end for
(8) for each valid value of $\hat{c}_0^{K-1}$ do
(9)   Get $\hat{c}_K^{N-1}$ with $\mathbf{A}_{H \times K}$ and $\hat{\gamma}_0^{N-K-1}$.
(10)  if (11) is satisfied then
(11)    Get $c_0^{N-1} \in \mathbb{C}$ with $\mathbf{B}_N$.
(12)  else
(13)    Drop the solution and continue.
(14)  end if
(15) end for
(16) return $\mathbb{C}$.

Algorithm 1: Error checking for decoding.
While if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N-1$, we will have

$$p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(1),$$
$$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(0). \quad (25)$$
Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
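As an illustration of (24) and (25), the following sketch (our own minimal Python rendering, not the authors' implementation; the array layout and names are assumptions) combines the messages of pairs of son nodes, and a whole level of a decoding tree can be processed in one vectorized call:

```python
import numpy as np

def combine_two_sons(p_left, p_right):
    """Equation (25): parent messages of nodes with two son nodes.

    p_left, p_right: arrays of shape (..., 2) holding (p(0), p(1)).
    All parents of one tree level are computed in a single call, which
    is the parallelism exploited by the decoding-tree representation.
    """
    p0 = p_left[..., 0] * p_right[..., 0] + p_left[..., 1] * p_right[..., 1]
    p1 = p_left[..., 0] * p_right[..., 1] + p_left[..., 1] * p_right[..., 0]
    return np.stack([p0, p1], axis=-1)

def combine_one_son(p_son):
    """Equation (24): a node with a single son simply copies its messages."""
    return p_son.copy()

# One level of a small decoding tree: four leaves -> two parents at once.
leaves = np.array([[0.9, 0.1], [0.8, 0.2], [0.6, 0.4], [0.7, 0.3]])
parents = combine_two_sons(leaves[0::2], leaves[1::2])
```

Each parent keeps a normalized message pair, so the same call can be applied level by level up the tree.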
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we will consider how to correct these errors. To achieve the goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on the method, the new probability messages of an error node will be given by

$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1), \\ 1 - \lambda_0, & \text{otherwise}, \end{cases} \quad (26)$$
and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$ and $q_i'(1)$ are the new probability messages of the error node $v(n,i)$, and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we will get the new probability vectors of the input nodes as

$$q_0^{N-1}(0)' = (q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'),$$
$$q_0^{N-1}(1)' = (q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'), \quad (27)$$

where $q_i(0)' = q_i'(0)$ and $q_i(1)' = q_i'(1)$ if the input node $v(n,i)$ is in error; otherwise, $q_i(0)' = q_i(0)$ and $q_i(1)' = q_i(1)$. Then the probability messages of all the nodes in the decoder will be recalculated.
In fact, when there is only one error indicator vector output from Algorithm 1, that is, $|C| = 1$, after the error correcting and the probability messages recalculation, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes. While if $|C| > 1$, in order to minimize the decoding error probability, a further criterion is needed to choose the optimal error indicator vector.
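A minimal sketch of the correction rule (26)-(27), assuming the input messages are stored as an $N \times 2$ array and the error indicator is a 0/1 vector (the names and layout are our own, not from the paper):

```python
import numpy as np

def correct_errors(q, error_indicator, lam0=0.05):
    """Apply eq. (26)/(27): flip the decision of every flagged input node.

    q:               array of shape (N, 2) with rows (q_i(0), q_i(1)).
    error_indicator: length-N 0/1 array; 1 marks an error node.
    lam0:            small constant, 0 <= lam0 << 1.
    """
    q_new = q.copy()
    for i in np.flatnonzero(error_indicator):
        # Maximization principle: the old decision is assumed wrong, so
        # the new message puts almost all its mass on the other value.
        q_new[i, 0] = lam0 if q[i, 0] > q[i, 1] else 1.0 - lam0
        q_new[i, 1] = 1.0 - q_new[i, 0]
    return q_new

q = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])
q_prime = correct_errors(q, np.array([1, 0, 1]), lam0=0.05)
```

Unflagged nodes keep their original messages, matching the case split in (27).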
4.3. Reliability Degree. To find the optimal error indicator vector, we will introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by

$$\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1), \\[2mm] \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise}. \end{cases} \quad (28)$$
The reliability degree indicates the reliability of the node's decision value, and the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node $v(0,0)$ in Figure 2 are $p_{v(0,0)}(0) = 0.95$ and $p_{v(0,0)}(1) = 0.05$, there is $\zeta_{v(0,0)} = 0.95/0.05 = 19$; that is, the reliability degree of the decision $v(0,0) = 0$ is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
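Definition (28) is simply the likelihood ratio folded so that it is always at least 1; a one-line helper (our own, not from the paper) reproduces the worked example:

```python
def reliability_degree(p0, p1):
    """Eq. (28): ratio of the larger probability message to the smaller one."""
    return p0 / p1 if p0 > p1 else p1 / p0

# The worked example for node v(0, 0): p(0) = 0.95, p(1) = 0.05.
zeta = reliability_degree(0.95, 0.05)
```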
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of $|C| > 1$, one node in the decoder may correspondingly have multiple reliability degrees. We denote the $k$th reliability degree of the node $v(i,j)$ as $\zeta_{v(i,j)}^{k}$, the value of which depends on the $k$th element of $C$, that is, $c_k$. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.
The first method is based on the fact that, when there is no noise in the channel, the reliability degree of a node will approach infinity, that is, $\zeta_{v(i,j)} \to \infty$. Hence, the main consideration is to maximize the reliability degree of all the nodes in the decoder, and the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} \zeta_{v(i,j)}^{k}, \quad (29)$$

where $\hat{c}_k$ is the optimal error indicator vector.
To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degree of all the frozen nodes; hence, the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{v(i,j) \in V_F} \zeta_{v(i,j)}^{k}. \quad (30)$$
On the other hand, we take the maximization of the output nodes' reliability degree as the optimization target, the function of which will be given by

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{j=0}^{N-1} \zeta_{v(0,j)}^{k}. \quad (31)$$
Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with the CRC aided, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.

(1) Initialization: we first get $L$ candidates of the optimal error indicator vector, $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$, by formula (29), (30), or (31).

(2) CRC-checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$ by CRC-checking. If there is only one valid candidate after the CRC-checking, the optimal error indicator vector is output directly; otherwise, the remaining candidates are further processed in step (3).

(3) Determination: if there are multiple candidates with a correct CRC-checking, we further choose the optimal error indicator vector from the remaining candidates of step (2) with formula (29), (30), or (31).

Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity | Time complexity
(1) | $O(1)$ | $O(N)$
(2) | $O(N \log_2 N)$ | $O(N \log_2 N)$
(3) | $O(X_0)$ | $O(X_1)$
(4)-(7) | $O(T_0 N \log_2 N)$ | $O(T_0 N \log_2 N)$
(8) | $O(1)$ | $O(T_0 N \log_2 N)$ or $O(T_0 N)$
(9) | $O(1)$ | $O(N)$

Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1) | $O(1)$ | $O(M)$
(2) | $O(1)$ | $O(M)$
(3)-(7) | $O(1)$ | $O(MN)$
(8)-(15) | $O(1)$ | $O(T_1 (M-K) K) + O(T_1 M)$
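The candidate-selection steps above can be sketched as follows. This is our own illustrative Python, with a simple even-parity check standing in for the 1-bit CRC, and with each candidate carrying its decoded bits together with the frozen-node reliability degrees used by target (30); all names and data structures are assumptions:

```python
def parity_check_ok(bits):
    # Stand-in for the 1-bit CRC: the last bit equals the parity of the rest.
    return sum(bits[:-1]) % 2 == bits[-1]

def select_optimal(candidates):
    """candidates: list of (decoded_bits, frozen_zetas), one per indicator.

    Step (1) is assumed done (the candidate list itself); step (2) filters
    by the CRC; step (3) maximizes target function (30) over what is left.
    """
    survivors = [c for c in candidates if parity_check_ok(c[0])]
    if len(survivors) == 1:
        return survivors[0]          # unique valid candidate: output directly
    pool = survivors if survivors else candidates
    return max(pool, key=lambda c: sum(c[1]))

cands = [
    ([0, 1, 1, 0], [3.0, 2.0]),  # parity holds -> survives step (2)
    ([1, 0, 0, 0], [9.0, 9.0]),  # parity fails -> excluded
]
best = select_optimal(cands)
```

Swapping `sum(c[1])` for a sum over all nodes or over the output nodes would give targets (29) and (31) instead.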
So far, we have introduced the main steps of the proposed decoding algorithm in detail, and as the summarization of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, $O(X_0)$ and $O(X_1)$ are the space and time complexity of Algorithm 1, respectively, and $T_0$ is the number of error indicator vectors output by Algorithm 1, that is, $T_0 = |C|$. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

In Table 2, $M$ is the number of the frozen nodes, and $T_1$ is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as $O(X_0) = O(1)$ and $O(X_1) = 2O(M) + O(MN) + O(T_1(M-K)K) + O(T_1 M)$. Furthermore, we can get the space and time complexity of the proposed decoding algorithm as $O((T_0 + 1) N \log_2 N)$ and $O(2N) + O((2T_0 + 1) N \log_2 N) + O((T_1 + N + 2)M) + O(T_1 K(N-K))$. From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on $T_0$ and $T_1$, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: the received vector $y_0^{N-1}$
Output: the decoded codeword $\hat{u}_0^{N-1}$
(1) Getting the probability messages $q_0^{N-1}(0)$ and $q_0^{N-1}(1)$ with the received vector $y_0^{N-1}$
(2) Getting the probability messages of each frozen node in $V_F$
(3) Getting the error indicator vectors set $C$ with Algorithm 1
(4) for each $c_k \in C$ do
(5)   Correcting the errors indicated by $c_k$ with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword $\hat{u}_0^{N-1}$ by hard decision
(10) return $\hat{u}_0^{N-1}$

Algorithm 2: Decoding algorithm based on error checking and correcting.
5. Simulation Results

In this section, Monte Carlo simulation is provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is $N = 2^3 = 8$, the code rate $R$ is 0.5, and the indices of the information bits are the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and the proposed decoding algorithms, three optimization targets with 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly in the low signal to noise ratio (SNR) region, that is, low $E_b/N_0$, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is $10^{-3}$, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is $10^{-4}$, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of $\delta$ depends on the transition probability of the channel and the signal power, which will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different $\delta$ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of $\delta$ for $E_b/N_0 = 1$ dB, $E_b/N_0 = 3$ dB, $E_b/N_0 = 5$ dB, and $E_b/N_0 = 7$ dB are 2.5, 3.0, 5.0, and 5.5, respectively.
Figure 4: Performance comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). $\delta$ in Theorem 11 takes the value of 4.

Figure 5: Performance of the proposed decoding algorithm with different $\delta$.

Figure 6: Average numbers of the parameters $T_0$ and $T_1$ with $\delta = 4$.

5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$ and $T_1$ indicated in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with the increasing of the SNR, the average numbers of the parameters $T_0$ and $T_1$ decrease sharply. In particular, we can find that in the high SNR region both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N \log_2 N)$, and the time complexity will approach $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that in the high SNR region the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
Figure 7: Space and time complexity comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and Algorithm 1 ($\delta = 4$).

6. Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and through delving into the problem of solving these equations, we introduced the method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than that of the existing decoding algorithms, where the space and time complexity approach $O(N \log_2 N)$ and $O(NM)$ ($M$ is the number of frozen nodes) in the high signal to noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.
It is worth mentioning that we only investigated the error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to the research of long code lengths, in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of $F_2$ through the linear transformation of the matrix, that is, $F_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. Furthermore, we have

$$\left(F_2^{\otimes 2}\right)^{-1} = \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix}^{-1} = \begin{bmatrix} F_2^{-1} & 0_2 \\ -F_2^{-1} F_2 F_2^{-1} & F_2^{-1} \end{bmatrix} = \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix} = F_2^{\otimes 2}. \quad (A.1)$$

Based on mathematical induction, we will have

$$\left(F_2^{\otimes n}\right)^{-1} = F_2^{\otimes n}. \quad (A.2)$$

The inverse of $G_N$ can be expressed as

$$G_N^{-1} = \left(B_N F_2^{\otimes n}\right)^{-1} = \left(F_2^{\otimes n}\right)^{-1} B_N^{-1} = F_2^{\otimes n} B_N^{-1}. \quad (A.3)$$

Since $B_N$ is a bit-reversal permutation matrix, by elementary transformation of the matrix there is $B_N^{-1} = B_N$. Hence, we have

$$G_N^{-1} = F_2^{\otimes n} B_N. \quad (A.4)$$

It is noticed from Proposition 16 of [1] that $F_2^{\otimes n} B_N = B_N F_2^{\otimes n}$; therefore, there is

$$G_N^{-1} = G_N. \quad (A.5)$$
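The involution property (A.5) is easy to confirm numerically for a small code; the sketch below (our own check, following the standard construction $G_N = B_N F_2^{\otimes n}$ from [1]) verifies it over GF(2) for $N = 8$:

```python
import numpy as np

def polar_generator(n):
    """G_N = B_N * F^{(x) n} over GF(2), N = 2^n, with B_N the
    bit-reversal permutation applied to the rows."""
    F = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    Fn = F
    for _ in range(n - 1):
        Fn = np.kron(Fn, F)          # n-fold Kronecker power of F_2
    N = 1 << n
    # bit-reversal permutation of the row indices
    rev = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return Fn[rev, :] % 2

G = polar_generator(3)
# Theorem 1: G_N is its own inverse over GF(2).
involution = (G @ G) % 2
```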
B. Proof of Theorem 4

We assume the number of leaf nodes of the frozen node $v(i,j)$ is $Q$, that is, $Q = |V^L_{v(i,j)}|$. If $Q = 2$, based on (25), there is

$$p_{v(i,j)}(0) = p_{v_0}(0)\, p_{v_1}(0) + p_{v_0}(1)\, p_{v_1}(1),$$
$$p_{v(i,j)}(1) = p_{v_0}(0)\, p_{v_1}(1) + p_{v_0}(1)\, p_{v_1}(0), \quad (B.1)$$

where $v_0, v_1 \in V^L_{v(i,j)}$. Based on the above equations, we have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \quad (B.2)$$

Therefore, by mathematical induction, when $Q > 2$ we will have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} \left(p_{v_k}(0) - p_{v_k}(1)\right), \quad (B.3)$$

where $v_k \in V^L_{v(i,j)}$.

To prove Theorem 4, we assume that the values of all the nodes in $V^L_{v(i,j)}$ are set to 0, without loss of generality. That is to say, when the node $v_k \in V^L_{v(i,j)}$ is right, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LO}_{v(i,j)}|$ an odd number, such that

$$\forall v_k \in V^{LO}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.4)$$

While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LE}_{v(i,j)}|$ an even number, such that

$$\forall v_k \in V^{LE}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.5)$$

So the proof of Theorem 4 is completed.
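The product identity (B.3) is easy to check numerically; the sketch below (our own, with an arbitrary left-to-right combining order, which (B.2) makes irrelevant) folds $Q = 4$ leaf messages with the check-node rule (25):

```python
from functools import reduce

def check_combine(p, q):
    """Eq. (25) for a pair of son messages, each a (p(0), p(1)) tuple."""
    return (p[0] * q[0] + p[1] * q[1], p[0] * q[1] + p[1] * q[0])

leaves = [(0.9, 0.1), (0.7, 0.3), (0.6, 0.4), (0.55, 0.45)]
parent = reduce(check_combine, leaves)

# Key step (B.3) of Theorem 4: the decision gap p(0) - p(1) of the
# parent is the product of the leaf gaps.
gap = parent[0] - parent[1]
expected = 1.0
for p0, p1 in leaves:
    expected *= (p0 - p1)
```

Since each leaf gap has magnitude below 1, the parent gap shrinks with $Q$, which is why an odd number of flipped leaves (B.4) is exactly what makes it nonpositive.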
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of $G_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1, 1, 1, 1, 1, 1, 1, 1)^T$. Hence, based on the proof of Theorem 1, we have

$$\operatorname{rank}\left(E_{MN}\right) \ge N - K, \qquad \operatorname{rank}\left(\left[E_{MN} \;\middle|\; \left(\gamma_0^{M-1}\right)^T\right]\right) \ge N - K. \quad (C.1)$$

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, an error-checking equation generated by a frozen node in an intermediate column can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have

$$\operatorname{rank}\left(E_{MN}\right) \le N - K, \qquad \operatorname{rank}\left(\left[E_{MN} \;\middle|\; \left(\gamma_0^{M-1}\right)^T\right]\right) \le N - K. \quad (C.2)$$

Therefore, the proof of (17) is completed.
D. Proof of Theorem 11

To prove Theorem 11, we assume that the real values of all the input nodes are 0, without loss of generality. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can be easily proved that there exists a positive constant $\beta_0 > 1$ such that

$$\forall v(n,k) \in V_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \quad (D.1)$$

where $v(n,k)$ is an input node and $V_I$ is the set of the input nodes of the decoder. That is to say, for a frozen node $v(i,j)$ with a leaf nodes set $V^L_{v(i,j)}$, we have

$$\forall v_k \in V^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \quad (D.2)$$

Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as

$$p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \;\sum_{\forall \{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \;\prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m-1}\}}}^{Q-2m-1} p_{v_{k_r}}(0),$$

$$p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \;\sum_{\forall \{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \;\prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m}\}}}^{Q-2m-1} p_{v_{k_r}}(0), \quad (D.3)$$

where $v_{k_l}, v_{k_r} \in V^L_{v(i,j)}$. Hence, we further have

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1} \sum_{\forall \{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}{\sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}. \quad (D.4)$$

With the definition of the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, with $1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0$, the above equation will be written as

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = \frac{1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \quad (D.5)$$

To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions

$$h\left(\varphi_0, \ldots, \varphi_{Q-1}\right) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$
$$g\left(\varphi_0, \ldots, \varphi_{Q-1}\right) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \quad (D.6)$$

Then the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ with respect to $\varphi_k$ will be

$$\frac{\partial f}{\partial \varphi_k} = \frac{\left(\partial h/\partial \varphi_k\right) g - \left(\partial g/\partial \varphi_k\right) h}{g^2} = \frac{g_{\varphi_k=0}\, g - h_{\varphi_k=0}\, h}{g^2} = \frac{g_{\varphi_k=0}^2 - h_{\varphi_k=0}^2}{g^2}, \quad (D.7)$$

where $g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$; here we use the identities $\partial h/\partial \varphi_k = g_{\varphi_k=0}$, $\partial g/\partial \varphi_k = h_{\varphi_k=0}$, $g = \varphi_k h_{\varphi_k=0} + g_{\varphi_k=0}$, and $h = \varphi_k g_{\varphi_k=0} + h_{\varphi_k=0}$. Based on the solution of the equations $\partial f/\partial \varphi_0 = 0$, $\partial f/\partial \varphi_1 = 0$, $\ldots$, $\partial f/\partial \varphi_{Q-1} = 0$, we get the extreme value point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ as $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\ldots$, and $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\ldots$, and $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $V^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
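For $Q = 2$, (D.5) reduces to $f(\varphi_0, \varphi_1) = (1 + \varphi_0\varphi_1)/(\varphi_0 + \varphi_1)$, and the claim that the maximum over the box $[1/\beta_0, \beta_0]^Q$ equals $\delta = f(\beta_0, \ldots, \beta_0)$ can be checked numerically; the following is our own sanity check, not part of the proof:

```python
import numpy as np

def f2(phi0, phi1):
    """f for Q = 2: eq. (D.5) reduces to (1 + phi0*phi1) / (phi0 + phi1)."""
    return (1.0 + phi0 * phi1) / (phi0 + phi1)

beta0 = 2.0
grid = np.linspace(1.0 / beta0, beta0, 301)   # endpoints included
a, b = np.meshgrid(grid, grid)
vals = f2(a, b)

delta = f2(beta0, beta0)   # (1 + 4) / 4 = 1.25
```

The grid maximum is attained on the boundary of the box, in agreement with the saddle point of $f$ at $\varphi_0 = \varphi_1 = 1$.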
Conflict of Interests

The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.

Acknowledgment

The authors would like to thank all the reviewers for their comments and suggestions.
References

[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493-1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1-5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100-3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668-1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725-728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967-1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1-5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488-1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11-14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188-194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919-929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583-587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21-28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1-5, San Francisco, Calif, USA, December 2006.
Indeed, all the decoding algorithms in [4-19] can improve the performance of polar codes to a certain degree. However, for a capacity-achieving coding scheme, the results of those algorithms are disappointing. Hence, we cannot help wondering why the performance of polar codes with finite length is inferior to that of the existing coding schemes and how we can improve it. To answer these questions, we need to make a further analysis of those decoding algorithms in [4-19].
For the decoding algorithms with serial processing, there has been the problem of error propagation, besides the low decoding throughput and high latency [20, 21]. That is to say, errors which occurred in a previous node will lead to the erroneous decoding of later nodes. However, none of the existing serial processing algorithms has considered this observation. Furthermore, it is noticed from the factor graph of polar codes in [15] that the degree of the check or variable nodes in the decoder is 2 or 3, which will weaken the error-correcting capacity of the decoding, as compared to LDPC codes, whose average degree is usually greater than 3 [22, 23]. Hence, the performance of polar codes is inferior to that of LDPC codes with the same length [18, 19]. What is more, BP polar decoding needs more iterations than that of LDPC codes, as shown in [16, 17, 22, 23]. Therefore, in order to improve the performance of a decoding algorithm for polar codes, it is important to enhance the error-correcting capacity of the algorithm.
Motivated by the aforementioned observations, we propose a parallel decoding algorithm for short polar codes based on error checking and correcting in this paper. We first classify the nodes of the proposed decoder into two categories: information nodes and frozen nodes, the values of which are determined and independent of the decoding algorithm. Then, we introduce the method to check the errors in the input nodes of the decoder with the solutions of the error-checking equations generated based on the frozen nodes. To correct those checked errors, we modify the probability messages of the error nodes with constant values according to the maximization principle. Through delving into the problem of solving the error-checking equations, we find that there exist multiple solutions for those equations. Hence, in order to check the errors as accurately as possible, we further formulate a CRC-aided optimization problem of finding the optimal solution of the error-checking equations with three different target functions. Besides, we also use a parallel method based on the decoding tree representations of the nodes to calculate probability messages in order to increase the throughput of decoding. The main contributions of this paper can be summarized as follows.
(i) An error-checking algorithm for polar decoding based on solving the error-checking equations is introduced; furthermore, to enhance the accuracy of the error checking, a CRC-aided optimization problem of finding the optimal solution is formulated.
(ii) To correct the checked errors, we propose a method of modifying the probability messages of the error nodes according to the maximization principle.
(iii) In order to improve the throughput of the decoding, we propose a parallel probability messages calculating method based on the decoding tree representation of the nodes.
(iv) The whole procedure of the proposed decoding algorithm is described in the form of pseudocode, and the complexity of the algorithm is also analyzed.
The findings of this paper suggest that, with error checking and correcting, the error-correcting capacity of the decoding algorithm can be enhanced, which yields better performance at the cost of certain complexity. Specifically, with the parallel probability messages calculating, the throughput of decoding is higher than that of serial-process-based decoding algorithms. All of these results are finally verified by our simulation work.
The remainder of this paper is organized as follows. In Section 2, we explain some notations and introduce certain preliminary concepts used in the subsequent sections. In Section 3, the method of error checking for decoding based on the error-checking equations is described in detail. In Section 4, we introduce the methods of probability messages calculating and error correcting, and, after the formulation of the CRC-aided optimization problem of finding the optimal solution, the proposed decoding algorithm is presented in the form of pseudocode. Then, the complexity of our algorithm is analyzed. Section 5 provides the simulation results for the complexity and bit error performance. Finally, we draw some conclusions in Section 6.
2. Preliminary
2.1. Notations. In this work, blackboard bold letters, such as $\mathbb{X}$, denote sets, and $|\mathbb{X}|$ denotes the number of elements in $\mathbb{X}$. The notation $u_0^{N-1}$ denotes an $N$-dimensional vector $(u_0, u_1, \ldots, u_{N-1})$, and $u_i^j$ indicates a subvector $(u_i, u_{i+1}, \ldots, u_{j-1}, u_j)$ of $u_0^{N-1}$, $0 \le i, j \le N-1$. When $i > j$, $u_i^j$ is an empty vector. Further, given a vector set $\mathbb{U}$, the vector $u_i$ is the $i$th element of $\mathbb{U}$.
The matrices in this work are denoted by bold letters. The subscript of a matrix indicates its size; for example, $\mathbf{A}_{N \times M}$ represents an $N \times M$ matrix $\mathbf{A}$. Specifically, square matrices are written as $\mathbf{A}_N$, the size of which is $N \times N$, and $\mathbf{A}_N^{-1}$ is the inverse of $\mathbf{A}_N$. Furthermore, the Kronecker product of two matrices $\mathbf{A}$ and $\mathbf{B}$ is written as $\mathbf{A} \otimes \mathbf{B}$, and the $n$th Kronecker power of $\mathbf{A}$ is $\mathbf{A}^{\otimes n}$.
During the procedure of encoding and decoding, we denote the intermediate node as $v(i, j)$, $0 \le i \le n$, $0 \le j \le N-1$, where $N = 2^n$ is the code length. Besides, we indicate the probability values of the intermediate node $v(i, j)$ being equal to 0 or 1 as $p_{v(i,j)}(0)$ or $p_{v(i,j)}(1)$.
Throughout this paper, "$\oplus$" denotes the modulo-two sum, and "$\sum_{i=0}^{M} \oplus\, x_i$" means "$x_0 \oplus x_1 \oplus \cdots \oplus x_M$".
2.2. Polar Encoding and Decoding. A polar coding scheme can be uniquely defined by three parameters: block length $N = 2^n$, code rate $R = K/N$, and an information set $\mathbb{I} \subset \mathbb{N} = \{0, 1, \ldots, N-1\}$, where $K = |\mathbb{I}|$. With these three parameters,
[Figure 1: Construction of the polar encoding with length $N = 8$.]
a source binary vector $u_0^{N-1}$ consisting of $K$ information bits and $N-K$ frozen bits can be mapped to a codeword $x_0^{N-1}$ by a linear matrix $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$, where $\mathbf{F}_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$, $\mathbf{B}_N$ is a bit-reversal permutation matrix defined in [1], and $x_0^{N-1} = u_0^{N-1} \mathbf{G}_N$.
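To make the mapping concrete, here is a small pure-Python sketch (illustrative, not the authors' implementation) that builds $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$ and encodes a source vector; `kron`, `polar_generator`, and `polar_encode` are helper names introduced here for illustration:

```python
def kron(A, B):
    """Kronecker product of two 0/1 matrices given as lists of lists."""
    return [[A[i][j] * B[k][l]
             for j in range(len(A[0])) for l in range(len(B[0]))]
            for i in range(len(A)) for k in range(len(B))]

def polar_generator(n):
    """G_N = B_N F_2^{(x)n}: n-th Kronecker power of F_2 = [[1,0],[1,1]]
    with rows reordered by the bit-reversal permutation B_N of [1]."""
    F = [[1]]
    for _ in range(n):
        F = kron(F, [[1, 0], [1, 1]])
    bitrev = lambda i: int(format(i, "0{}b".format(n))[::-1], 2)
    return [F[bitrev(i)] for i in range(1 << n)]

def polar_encode(u, G):
    """Codeword x = u G over GF(2)."""
    return [sum(u[k] & G[k][j] for k in range(len(u))) % 2
            for j in range(len(u))]
```

For $n = 3$, `polar_generator(3)` reproduces the $8 \times 8$ matrix given later in (3).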
In practice, the polar encoding can be completed with the construction shown in Figure 1, where the gray circle nodes are the intermediate nodes. The nodes in the leftmost column are the input nodes of the encoder, the values of which are equal to the binary source vector, that is, $v(0, i) = u_i$, while the nodes in the rightmost column are the output nodes of the encoder, $v(n, i) = x_i$. Based on this construction, a codeword $x_0^7$ is generated by the recursive linear transformation of the nodes between adjacent columns.

After the procedure of the polar encoding, all the bits in the codeword $x_0^{N-1}$ are passed to the $N$-channels, which consist of $N$ independent copies of a channel $W$ with transition probability $W(y_i \mid x_i)$, where $y_i$ is the $i$th element of the received vector $y_0^{N-1}$.
At the receiver, the decoder can output the estimated codeword $\hat{x}_0^{N-1}$ and the estimated source binary vector $\hat{u}_0^{N-1}$ with different decoding algorithms [1–19]. It is noticed from [1–19] that the construction of all the decoders is the same as that of the encoder; here, we give a strict proof of this with the mathematical formula in the following theorem.
Theorem 1. For the generation matrix of a polar code, $\mathbf{G}_N$, there exists

$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \tag{1}$$

That is to say, for the decoding of polar codes, one will have

$$\hat{u}_0^{N-1} = \hat{x}_0^{N-1} \mathbf{G}_N^{-1} = \hat{x}_0^{N-1} \mathbf{G}_N, \tag{2}$$

where $\mathbf{G}_N^{-1}$ is the construction matrix of the decoder.
Proof. The proof of Theorem 1 is based on matrix transformation, which is shown in detail in Appendix A.
Hence, for the polar encoder shown in Figure 1, there is

$$\mathbf{G}_8 = \mathbf{G}_8^{-1} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}. \tag{3}$$
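Theorem 1 can be spot-checked numerically. The following sketch multiplies the matrix of (3) by itself over $GF(2)$ and confirms that the product is the identity, i.e., that $\mathbf{G}_8$ is its own inverse:

```python
# G8 as given in (3)
G8 = [
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
]

def matmul_gf2(A, B):
    """Matrix product over GF(2)."""
    n = len(A)
    return [[sum(A[i][k] & B[k][j] for k in range(n)) % 2
             for j in range(n)] for i in range(n)]

I8 = [[int(i == j) for j in range(8)] for i in range(8)]
assert matmul_gf2(G8, G8) == I8  # G8 * G8 = I over GF(2)
```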
Furthermore, we have the construction of the decoder as shown in Figure 2(a), where the nodes in the rightmost column are the input nodes of the decoder, and the output nodes are the nodes in the leftmost column. During the decoding procedure, the probability messages of the received vector are recursively propagated from the rightmost column nodes to the leftmost column nodes. Then, the estimated source binary vector $\hat{u}_0^7$ can be decided by

$$\hat{u}_i = \begin{cases} 0, & p_{v(0,i)}(0) > p_{v(0,i)}(1) \\ 1, & \text{otherwise.} \end{cases} \tag{4}$$
In fact, the input probability messages of the decoder depend on the transition probability $W(y_i \mid x_i)$ and the received vector $y_0^7$; hence, there is

$$p_{v(n,i)}(0) = W(y_i \mid x_i = 0), \qquad p_{v(n,i)}(1) = W(y_i \mid x_i = 1). \tag{5}$$

For convenience of expression, we will write the input probability messages $W(y_i \mid x_i = 0)$ and $W(y_i \mid x_i = 1)$ as $q_i(0)$ and $q_i(1)$, respectively, in the rest of this paper. Therefore, we further have

$$p_{v(n,i)}(0) = q_i(0), \qquad p_{v(n,i)}(1) = q_i(1). \tag{6}$$
2.3. Frozen and Information Nodes. In practice, due to the input of the frozen bits [1], the values of some nodes in the decoder are determined and independent of the decoding algorithm, as the red circle nodes illustrated in Figure 2(a) (the code construction method is the same as in [1]). Based on this observation, we classify the nodes of the decoder into two categories: nodes with determined values are called frozen nodes, and the other nodes are called information nodes, as the gray circle nodes shown in Figure 2(a). In addition, with the basic process units of the polar decoder shown in Figure 2(b), we have the following lemma.
Lemma 2. For the decoder of a polar code with rate $R < 1$, there must exist some frozen nodes, the number of which depends on the information set $\mathbb{I}$.

Proof. The proof of Lemma 2 can easily be finished based on the process units of the polar decoder shown in Figure 2(b), where $v(i, j_1)$, $v(i, j_2)$, $v(i+1, j_3)$, and $v(i+1, j_4)$ are the four nodes of the decoder.
[Figure 2: (a) Construction of polar decoding with code length $N = 8$. (b) Basic process units of the polar decoder.]
Lemma 2 has shown that, for a polar code with rate $R < 1$, frozen nodes always exist; for example, the frozen nodes in Figure 2(a) are $v(0,0)$, $v(1,0)$, $v(0,1)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$. For convenience, we denote the frozen node set of a polar code as $\mathbb{V}_F$, and we assume that the default value of each frozen node is 0 in the subsequent sections.
2.4. Decoding Tree Representation. It can be found from the construction of the decoder in Figure 2(a) that the decoding of a node $v(i, j)$ can be equivalently represented as a binary decoding tree with some input nodes, where $v(i, j)$ is the root node of the tree and the input nodes are the leaf nodes. The height of a decoding tree is at most $\log_2 N$, and each intermediate node has one or two son nodes. As illustrated in Figure 3, the decoding trees for the frozen nodes $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ in Figure 2(a) are given.
During the decoding procedure, the probability messages of $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ strictly depend on the probability messages of their leaf nodes, shown as the bottom nodes in Figure 3. In addition, based on (2), we further have

$$v(0,0) = \sum_{i=0}^{7} \oplus\, v(3,i),$$
$$v(1,0) = v(3,0) \oplus v(3,1) \oplus v(3,2) \oplus v(3,3),$$
$$v(0,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(1,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,2) = v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,4) = v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7). \tag{7}$$
To generalize the decoding tree representation for the decoding, we introduce the following lemma.
Lemma 3. In the decoder of a polar code with length $N = 2^n$, there is a unique decoding tree for each node $v(i, j)$, the leaf node set of which is indicated as $\mathbb{V}^L_{v(i,j)}$. If $j \ne N-1$, the number of leaf nodes is even, that is,

$$v(i, j) = \sum_{k=0}^{M-1} \oplus\, v(n, j_k), \quad 0 \le j_k \le N-1, \; v(n, j_k) \in \mathbb{V}^L_{v(i,j)}, \tag{8}$$

where $M = |\mathbb{V}^L_{v(i,j)}|$ and $(M \bmod 2) = 0$. While if $j = N-1$, $M$ is equal to 1, and it is true that

$$v(i, N-1) = v(n, N-1). \tag{9}$$
Proof. The proof of Lemma 3 is based on (2) and the construction of the generation matrix. It is easily proved that, except for the last column (with only one "1" element), there is an even number of "1" elements in every column of $\mathbf{F}_2^{\otimes n}$. As $\mathbf{B}_N$ is a bit-reversal permutation matrix, generated by a permutation of the rows of $\mathbf{I}_N$, the generation matrix $\mathbf{G}_N$ has the same characteristic as $\mathbf{F}_2^{\otimes n}$ (see the proof of Theorem 1). Therefore, (8) and (9) can easily be proved by (2).
Lemma 3 has clearly shown the relationship between the input nodes and the other intermediate nodes of the decoder, which is useful for the error checking and probability messages calculation introduced in the subsequent sections.
3. Error Checking for Decoding
As analyzed in Section 1, the key problem in improving the performance of polar codes is to enhance the error-correcting capacity of the decoding. In this section, we will show how to achieve this goal.
3.1. Error Checking by the Frozen Nodes. It is noticed from Section 2.3 that the values of the frozen nodes are determined. Hence, if the decoding is correct, the probability messages of any frozen node $v(i,j)$ must satisfy the condition $p_{v(i,j)}(0) > p_{v(i,j)}(1)$ (the default value of frozen nodes is 0), which is called the reliability condition throughout this paper. In practice, however, due to the noise of the received signal, there may exist some frozen nodes that do not satisfy the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. Hence, it is exactly based on this observation that we can check for errors during the decoding. To describe this in more detail, a theorem is introduced to show the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.

[Figure 3: The decoding trees for the nodes $v(0,0)$, $v(0,1)$, $v(1,0)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$.]
Theorem 4. For any frozen node $v(i,j)$ with a leaf node set $\mathbb{V}^L_{v(i,j)}$, if the probability messages of $v(i,j)$ do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in $\mathbb{V}^L_{v(i,j)}$; otherwise, the number of errors will be even (including 0).

Proof. For the proof of Theorem 4, see Appendix B.
Theorem 4 has provided us with an effective method to detect errors in the leaf node set of a frozen node. For example, if the probability messages of the frozen node $v(0,0)$ in Figure 2 do not satisfy the reliability condition, that is, $p_{v(0,0)}(0) \le p_{v(0,0)}(1)$, it can be confirmed that there must exist errors in the set $\{v(3,0), v(3,1), \ldots, v(3,7)\}$, and the number of these errors may be 1, 3, 5, or 7. That is to say, through checking the reliability condition of the frozen nodes, we can confirm the existence of errors in the input nodes of the decoder, which is further presented as a corollary.
Corollary 5. For a polar code with the frozen node set $\mathbb{V}_F$, if $\exists v(i,j) \in \mathbb{V}_F$ such that $v(i,j)$ does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.

Proof. The proof of Corollary 5 is easily completed based on Theorem 4.
Corollary 5 has clearly shown that, through checking the probability messages of each frozen node, errors in the input nodes of the decoder can be detected.
3.2. Error-Checking Equations. As aforementioned, errors in the input nodes can be found with the probability messages of the frozen nodes, but there still remains the problem of how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node $v(n,i)$, the error indicator is denoted as $c_i$, the value of which is given by

$$c_i = \begin{cases} 1, & v(n,i) \text{ is in error} \\ 0, & \text{otherwise.} \end{cases} \tag{10}$$

That is to say, by the parameter of the error indicator, we can determine whether an input node is in error or not.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.
Corollary 6. For any frozen node $v(i,j)$ with a leaf node set $\mathbb{V}^L_{v(i,j)}$, there is

$$\Bigl( \sum_{k=0}^{M-1} c_{i_k} \Bigr) \bmod 2 = 1, \quad p_{v(i,j)}(0) \le p_{v(i,j)}(1),$$
$$\Bigl( \sum_{k=0}^{M-1} c_{i_k} \Bigr) \bmod 2 = 0, \quad \text{otherwise,} \tag{11}$$

where $M = |\mathbb{V}^L_{v(i,j)}|$, $v(n, i_k) \in \mathbb{V}^L_{v(i,j)}$, and $N = 2^n$ is the code length. Furthermore, over the field $GF(2)$, (11) can be written as

$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = 1, \quad p_{v(i,j)}(0) \le p_{v(i,j)}(1),$$
$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = 0, \quad \text{otherwise.} \tag{12}$$

Proof. The proof of Corollary 6 is based on Lemma 3 and Theorem 4; here, we omit the detailed derivation process.
Corollary 6 has shown that the problem of obtaining the error indicators can be transformed into finding solutions of (12) under the condition of (11). To introduce this more specifically, we will take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, there are equations

$$\sum_{i=0}^{7} \oplus\, c_i = 1,$$
$$c_0 \oplus c_1 \oplus c_2 \oplus c_3 = 1,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_2 \oplus c_3 \oplus c_6 \oplus c_7 = 1,$$
$$c_1 \oplus c_3 \oplus c_5 \oplus c_7 = 1. \tag{13}$$

Furthermore, (13) can be written in matrix form as

$$\mathbf{E}_{6 \times 8} (c_0^7)^T =
\begin{bmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\
0 & 1 & 0 & 1 & 0 & 1 & 0 & 1
\end{bmatrix}
\begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix}
= (\gamma_0^5)^T, \tag{14}$$

where $\gamma_0^5 = (1, 1, 0, 0, 1, 1)$ and $\mathbf{E}_{6 \times 8}$ is the coefficient matrix with size $6 \times 8$. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to further generalize the above example, we provide a lemma.
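As a quick sanity check on (14): the single-error pattern $c_0^7 = (0,0,0,1,0,0,0,0)$, which Section 3.3 later identifies as the exact solution, does satisfy this system over $GF(2)$:

```python
# E_{6x8} and gamma from (14)
E = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]
gamma = [1, 1, 0, 0, 1, 1]
c = [0, 0, 0, 1, 0, 0, 0, 0]  # a single error at input node v(3, 3)

# left-hand side of (14) over GF(2)
lhs = [sum(E[r][j] & c[j] for j in range(8)) % 2 for r in range(6)]
assert lhs == gamma
```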
Lemma 8. For a polar code with code length $N$, code rate $R = K/N$, and frozen node set $\mathbb{V}_F$, we have the error-checking equations

$$\mathbf{E}_{M \times N} (c_0^{N-1})^T = (\gamma_0^{M-1})^T, \tag{15}$$

where $c_0^{N-1}$ is the error indicator vector and $M = |\mathbb{V}_F|$, $M \ge N - K$. $\mathbf{E}_{M \times N}$ is called the error-checking matrix, the elements of which are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in $\mathbb{V}_F$; that is, $\forall v_i \in \mathbb{V}_F$, $0 \le i \le M-1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that

$$\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1) \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases} \tag{16}$$

Proof. The proof of Lemma 8 is based on (10)–(14), Lemma 3, and Theorem 4, and is omitted here.
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the position of errors in the input nodes by the error-checking equations. It is still necessary to investigate the existence of solutions of the error-checking equations.

Theorem 9. For a polar code with code length $N$ and code rate $R = K/N$, there is

$$\operatorname{rank}(\mathbf{E}_{M \times N}) = \operatorname{rank}\bigl(\bigl[\, \mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T \,\bigr]\bigr) = N - K, \tag{17}$$

where $[\, \mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T \,]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.

Proof. For the proof of Theorem 9, see Appendix C.
It is noticed from Theorem 9 that there must exist multiple solutions of the error-checking equations; therefore, we further investigate the general expression of the solutions, as shown in the following corollary.
Corollary 10. For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $\mathbf{P}_{N \times M}$ in the field $GF(2)$ such that

$$\bigl[\, \mathbf{E}_{M \times N} \mid (\gamma_0^{M-1})^T \,\bigr]
\xrightarrow{\mathbf{P}_{N \times M}}
\begin{bmatrix}
\mathbf{I}_H & \mathbf{A}_{H \times K} & (\gamma_0^{H-1})^T \\
\mathbf{0}_{(M-H) \times H} & \mathbf{0}_{(M-H) \times K} & \mathbf{0}_{(M-H) \times 1}
\end{bmatrix}, \tag{18}$$

where $H = N - K$, $\mathbf{A}_{H \times K}$ is the submatrix of the transformation result of $\mathbf{E}_{M \times N}$, and $\gamma_0^{H-1}$ is the subvector of the transformation result of $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by

$$(\tilde{c}_0^{N-1})^T =
\begin{bmatrix} (\tilde{c}_K^{N-1})^T \\ (\tilde{c}_0^{K-1})^T \end{bmatrix}
=
\begin{bmatrix} \mathbf{A}_{H \times K} (\tilde{c}_0^{K-1})^T \oplus (\gamma_0^{H-1})^T \\ (\tilde{c}_0^{K-1})^T \end{bmatrix}, \tag{19}$$

$$(c_0^{N-1})^T = \mathbf{B}_N (\tilde{c}_0^{N-1})^T, \tag{20}$$

where $\tilde{c}_i \in \{0, 1\}$ and $\mathbf{B}_N$ is an element-permutation matrix, which is determined by the matrix transformation of (18).

Proof. The proof of Corollary 10 is based on Theorem 9 and linear equation solving theory, and is omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors $\gamma_0^{H-1}$ and $\tilde{c}_0^{K-1}$, where $\gamma_0^{H-1}$ is determined by the transformation matrix $\mathbf{P}_{N \times M}$ and the error-checking vector $\gamma_0^{M-1}$, and $\tilde{c}_0^{K-1}$ is a random vector. In general, based on $\tilde{c}_0^{K-1}$, the number of solutions of the error-checking equations may be up to $2^K$, which is a terrible number for decoding. Although the number of solutions can be reduced through the checking of (11), it is still necessary to reduce it further in order to increase the efficiency of error checking. To achieve this goal, we introduce another theorem.
Theorem 11. For a polar code with code length $N = 2^n$ and frozen node set $\mathbb{V}_F$, there exists a positive real number $\delta$ such that, $\forall v(i,j) \in \mathbb{V}_F$, if $(p_{v(i,j)}(0)/p_{v(i,j)}(1)) \ge \delta$, there is

$$\forall v(n, j_k) \in \mathbb{V}^L_{v(i,j)} \Longrightarrow c_{j_k} = 0, \tag{21}$$

where $\mathbb{V}^L_{v(i,j)}$ is the leaf node set of $v(i,j)$, $0 \le j_k \le N-1$, $0 \le k \le |\mathbb{V}^L_{v(i,j)}| - 1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.

Proof. For the proof of Theorem 11, see Appendix D.
Theorem 11 has shown that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements of $\tilde{c}_0^{K-1}$, by which the degree of freedom of $\tilde{c}_0^{K-1}$ is further reduced. Correspondingly, the number of solutions of the error-checking equations is also reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\gamma_0^3 = (1, 1, 1, 0)$,

$$\mathbf{A}_4 =
\begin{bmatrix}
1 & 1 & 1 & 0 \\
1 & 1 & 0 & 1 \\
1 & 0 & 1 & 1 \\
0 & 1 & 1 & 1
\end{bmatrix}, \tag{22}$$

$$\mathbf{B}_8 =
\begin{bmatrix}
0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0
\end{bmatrix}. \tag{23}$$

By the element permutation of $\mathbf{B}_8$, we further have $\tilde{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\tilde{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $(p_{v(0,1)}(0)/p_{v(0,1)}(1)) \ge \delta$, with the checking of (21), there is $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions will depend on the probability messages of the frozen nodes and $\delta$.
As the summarization of this section, we give the complete process framework of error checking by the solutions of the error-checking equations, which is shown in Algorithm 1.
4. Proposed Decoding Algorithm

In this section, we will introduce the proposed decoding algorithm in detail.

4.1. Probability Messages Calculating. Probability messages calculating is an important aspect of a decoding algorithm. Our proposed algorithm differs from the SC and BP algorithms in that the probability messages are calculated based on the decoding tree representation of the nodes in the decoder; for an intermediate node $v(i,j)$ with only one son node $v(i+1, j_o)$, $0 \le j_o \le N-1$, there is

$$p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0), \qquad p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1). \tag{24}$$
Input: The frozen node set $\mathbb{V}_F$; the probability messages of $\mathbb{V}_F$; the matrices $\mathbf{P}_{N \times M}$, $\mathbf{A}_{H \times K}$, and $\mathbf{B}_N$.
Output: The error indicator vector set $\mathbb{C}$.
(1) Get $\gamma_0^{M-1}$ with the probability messages of $\mathbb{V}_F$.
(2) Get $\gamma_0^{H-1}$ with $\gamma_0^{M-1}$ and $\mathbf{P}_{N \times M}$.
(3) for each $v(i,j) \in \mathbb{V}_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) > \delta$ then
(5)     Set the error indicator of each leaf node of $v(i,j)$ to 0.
(6)   end if
(7) end for
(8) for each valid $\tilde{c}_0^{K-1}$ do
(9)   Get $\tilde{c}_K^{N-1}$ with $\mathbf{A}_{H \times K}$ and $\gamma_0^{N-K-1}$.
(10)  if (11) is satisfied then
(11)    Get $c_0^{N-1} \in \mathbb{C}$ with $\mathbf{B}_N$.
(12)  else
(13)    Drop the solution and continue.
(14)  end if
(15) end for
(16) return $\mathbb{C}$.

Algorithm 1: Error checking for decoding.
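The core of Algorithm 1 is solving (15) subject to the constraints of (11) and (21). As a minimal illustration (a brute-force sketch rather than the transformation-based solver of Corollary 10; `check_errors` and `forced_zero` are names introduced here), the following enumerates candidate indicator vectors for the system of Example 7, with the positions fixed to 0 by the $\delta$-threshold check passed in explicitly:

```python
from itertools import product

def check_errors(E, gamma, n, forced_zero=()):
    """Brute-force stand-in for Algorithm 1: enumerate every error
    indicator vector c of length n with E c^T = gamma^T over GF(2),
    skipping vectors that violate the forced-zero positions of (21)."""
    solutions = []
    for c in product((0, 1), repeat=n):
        if any(c[i] for i in forced_zero):
            continue  # excluded by the delta-threshold check
        if all(sum(E[r][j] & c[j] for j in range(n)) % 2 == gamma[r]
               for r in range(len(E))):
            solutions.append(list(c))
    return solutions

# The system of Example 7: matrix E_{6x8} and gamma from (14)
E = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1]]
gamma = [1, 1, 0, 0, 1, 1]
# v(0,1) passes the threshold check, so its leaves c4..c7 are fixed to 0:
sols = check_errors(E, gamma, 8, forced_zero=(4, 5, 6, 7))
```

This reproduces the two candidate solutions of Section 3.3, $(1,1,1,0,0,0,0,0)$ and $(0,0,0,1,0,0,0,0)$; the brute force is exponential in $N$ and is meant only to make the constraint structure concrete.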
While if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N-1$, we will have

$$p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(1),$$
$$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(0). \tag{25}$$

Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
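Equations (24) and (25) can be sketched as a single update routine; every node of one decoding-tree level can apply it independently, which is what enables the parallel calculation (`node_probs` is a name introduced here for illustration):

```python
def node_probs(p_left, p_right=None):
    """Probability messages of an intermediate node from its son
    node(s): (24) for a single son, (25) for two sons.
    Each argument is a pair (p(0), p(1))."""
    if p_right is None:
        return p_left  # one son: messages are copied, as in (24)
    # two sons: check-type combination of (25)
    p0 = p_left[0] * p_right[0] + p_left[1] * p_right[1]
    p1 = p_left[0] * p_right[1] + p_left[1] * p_right[0]
    return (p0, p1)
```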
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we will consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node will be given by

$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1) \\ 1 - \lambda_0, & \text{otherwise,} \end{cases} \tag{26}$$

and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$, $q_i'(1)$ are the new probability messages of the error node $v(n,i)$, and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we will get the new probability vectors of the input nodes as

$$q_0^{N-1}(0)' = (q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'),$$
$$q_0^{N-1}(1)' = (q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'), \tag{27}$$

where $q_i(0)' = q_i'(0)$ and $q_i(1)' = q_i'(1)$ if the input node $v(n,i)$ is in error; otherwise, $q_i(0)' = q_i(0)$ and $q_i(1)' = q_i(1)$. Then, the probability messages of all the nodes in the decoder will be recalculated.
In fact, when there is only one error indicator vector output from Algorithm 1, that is, $|\mathbb{C}| = 1$, after the error correcting and the probability messages recalculation, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes. While if $|\mathbb{C}| > 1$, in order to minimize the decoding error probability, further work is needed on how to get the optimal error indicator vector.
4.3. Reliability Degree. To find the optimal error indicator vector, we will introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by

$$\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1) \\[2ex] \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise.} \end{cases} \tag{28}$$

The reliability degree indicates the reliability of the node's decision value, and the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node $v(0,0)$ in Figure 2 are $p_{v(0,0)}(0) = 0.95$ and $p_{v(0,0)}(1) = 0.05$, there is $\zeta_{v(0,0)} = 0.95/0.05 = 19$; that is, the reliability degree of the decision $v(0,0) = 0$ is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
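Definition (28) reduces to a one-line ratio; the sketch below (`reliability_degree` is a name introduced here) reproduces the worked example above:

```python
def reliability_degree(p0, p1):
    """Reliability degree (28): ratio of the larger probability
    message to the smaller one."""
    return p0 / p1 if p0 > p1 else p1 / p0
```

For the node of the example, `reliability_degree(0.95, 0.05)` evaluates to 19, up to floating-point rounding.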
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of $|\mathbb{C}| > 1$, one node in the decoder may correspondingly have multiple reliability degrees. We denote the $k$th reliability degree of node $v(i,j)$ as $\zeta^k_{v(i,j)}$, the value of which depends on the $k$th element of $\mathbb{C}$, that is, $c_k$. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.

The first method is based on the fact that, when there is no noise in the channel, the reliability degree of a node approaches infinity, that is, $\zeta_{v(i,j)} \to \infty$. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N} \zeta^k_{v(i,j)}, \tag{29}$$

where $\hat{c}_k$ is the optimal error indicator vector.

To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degrees of all the frozen nodes; hence, the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{v(i,j) \in \mathbb{V}_F} \zeta^k_{v(i,j)}. \tag{30}$$

On the other hand, we take the maximization of the output nodes' reliability degrees as the optimization target, the function of which is given by

$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{j=0}^{N-1} \zeta^k_{v(0,j)}. \tag{31}$$

Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with CRC aided, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.
Hence the problem of getting the optimal error indicatorvector can be formulated as an optimization problemwith theabove three target functions What is more is that with theCRC aided the accuracy of the optimal error indicator vectorcan be enhanced Based on these observations the findingof the optimal error indicator vector will be divided into thefollowing steps
(1) Initialization we first get number 119871 candidates of theoptimal error indicator vector 119888
1198960 1198881198961 119888
119896119871minus1 by the
formulas of (29) or (30) or (31)(2) CRC-checking in order to get the optimal error indi-
cator vector correctly we further exclude some can-didates from 119888
1198960 1198881198961 119888
119896119871minus1by the CRC-checking
If there is only one valid candidate after the CRC-checking the optimal error indicator vector will beoutput directly otherwise the remaining candidateswill further be processed in step 3
Table 1: The space and time complexity of each step in Algorithm 2.

| Step number in Algorithm 2 | Space complexity | Time complexity |
| --- | --- | --- |
| (1) | \(O(1)\) | \(O(N)\) |
| (2) | \(O(N\log_2 N)\) | \(O(N\log_2 N)\) |
| (3) | \(O(X_0)\) | \(O(X_1)\) |
| (4)-(7) | \(O(T_0 N\log_2 N)\) | \(O(T_0 N\log_2 N)\) |
| (8) | \(O(1)\) | \(O(T_0 N\log_2 N)\) or \(O(T_0 N)\) |
| (9) | \(O(1)\) | \(O(N)\) |
Table 2: The space and time complexity of each step in Algorithm 1.

| Step number in Algorithm 1 | Space complexity | Time complexity |
| --- | --- | --- |
| (1) | \(O(1)\) | \(O(M)\) |
| (2) | \(O(1)\) | \(O(M)\) |
| (3)-(7) | \(O(1)\) | \(O(MN)\) |
| (8)-(15) | \(O(1)\) | \(O(T_1(M-K)K) + O(T_1 M)\) |
(3) Determination: if there are multiple candidates with a correct CRC check, we further choose the optimal error indicator vector from the remaining candidates of step (2) with formula (29), (30), or (31).
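The three-step selection above can be sketched as follows; this is our own illustrative code, in which a single even-parity bit stands in for the paper's 1-bit CRC, and `candidates`, `score`, and `select_optimal` are hypothetical names.

```python
def parity_ok(bits):
    # CRC-1 / even parity: the check passes when the bits XOR to 0
    return sum(bits) % 2 == 0

def select_optimal(candidates, score):
    """candidates: decoded bit tuples, one per error indicator vector c_k.
    score: one of the target functions (29)-(31), as a callable.

    Step (1) is assumed done (the candidates are given); step (2) keeps
    those passing the CRC; step (3) breaks any remaining tie with the
    same target function."""
    passed = [c for c in candidates if parity_ok(c)]
    if len(passed) == 1:            # unique survivor of the CRC check
        return passed[0]
    pool = passed if passed else candidates
    return max(pool, key=score)     # fall back to the target function
```

For example, `select_optimal([(1, 0, 0), (1, 1, 0), (1, 1, 1)], score=sum)` keeps only the middle candidate, since it is the only one with even parity.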
So far, we have introduced the main steps of the proposed decoding algorithm in detail. As a summarization of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.
In Table 1, \(O(X_0)\) and \(O(X_1)\) are the space and time complexity of Algorithm 1, respectively, and \(T_0\) is the number of error indicator vectors output by Algorithm 1, that is, \(T_0 = |C|\). It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.
In Table 2, \(M\) is the number of the frozen nodes, and \(T_1\) is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as \(O(X_0) = O(1)\) and \(O(X_1) = 2O(M) + O(MN) + O(T_1(M-K)K) + O(T_1 M)\). Furthermore, we can get the space and time complexity of the proposed decoding algorithm as \(O((T_0 + 1)N\log_2 N)\) and \(O(2N) + O((2T_0 + 1)N\log_2 N) + O((T_1 + N + 2)M) + O(T_1 K(N-K))\). From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on \(T_0\) and \(T_1\), the values of which depend on the channel condition, as illustrated in our simulation work.
10 The Scientific World Journal
Input: the received vector \(y_0^{N-1}\).
Output: the decoded codeword \(\hat{u}_0^{N-1}\).

(1) Getting the probability messages \(q_0^{N-1}(0)\) and \(q_0^{N-1}(1)\) with the received vector \(y_0^{N-1}\).
(2) Getting the probability messages of each frozen node in \(V_F\).
(3) Getting the error indicator vector set \(C\) with Algorithm 1.
(4) for each \(c_k \in C\) do
(5)  Correcting the errors indicated by \(c_k\) with (26).
(6)  Recalculating the probability messages of all the nodes of the decoder.
(7) end for
(8) Getting the optimal error indicator vector for the decoding.
(9) Getting the decoded codeword \(\hat{u}_0^{N-1}\) by hard decision.
(10) return \(\hat{u}_0^{N-1}\).

Algorithm 2: Decoding algorithm based on error checking and correcting.
5. Simulation Results

In this section, Monte Carlo simulation results are provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is \(N = 2^3 = 8\), the code rate \(R\) is 0.5, and the index set of the information bits is the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and the proposed decoding algorithms, three optimization targets with a 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.
As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly, in the low region of signal-to-noise ratio (SNR), that is, low \(E_b/N_0\), the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is \(10^{-3}\), Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is \(10^{-4}\), the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of \(\delta\), which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different \(\delta\) and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of \(\delta\) for \(E_b/N_0 = 1\) dB, 3 dB, 5 dB, and 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.
5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters \(T_0\)
Figure 4: Performance comparison of SC, SCL (\(L = 4\)), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). \(\delta\) in Theorem 11 takes the value of 4.
and \(T_1\) indicated in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with the increase of the SNR, the average numbers of the parameters \(T_0\) and \(T_1\) decrease sharply. In particular, we can find that, in the high SNR region, both \(T_0\) and \(T_1\) approach a number less than 1. In this case, the space complexity of the algorithm will be \(O(N\log_2 N)\), and the time complexity approaches \(O(NM)\). In addition, we further compare the space and time complexity of Algorithm 1 (\(\delta = 4\)) with those of the SC, SCL (\(L = 4\)), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high
Figure 5: Performance of the proposed decoding algorithm with different \(\delta\).
Figure 6: Average numbers of the parameters \(T_0\) and \(T_1\) with \(\delta = 4\).
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to \(O(NM)\). All of the above results suggest the effectiveness of our proposed decoding algorithm.
6. Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
Figure 7: Space and time complexity comparison of SC, SCL (\(L = 4\)), BP (iteration number is 60), and Algorithm 1 (\(\delta = 4\)).
performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, through delving into the problem of solving these equations, we introduced the method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than that of the existing decoding algorithms, with space and time complexity approaching \(O(N\log_2 N)\) and \(O(NM)\) (\(M\) is the number of frozen nodes) in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of \(F_2\) through the linear transformation of the matrix, that is, \(F_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\). Furthermore, we have
\[
\left(F_2^{\otimes 2}\right)^{-1}
= \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix}^{-1}
= \begin{bmatrix} F_2^{-1} & 0_2 \\ -F_2^{-1} F_2 F_2^{-1} & F_2^{-1} \end{bmatrix}
= \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix}
= F_2^{\otimes 2}. \tag{A.1}
\]
Based on mathematical induction, we will have
\[
\left(F_2^{\otimes n}\right)^{-1} = F_2^{\otimes n}. \tag{A.2}
\]
The inverse of \(G_N\) can be expressed as
\[
G_N^{-1} = \left(B_N F_2^{\otimes n}\right)^{-1} = \left(F_2^{\otimes n}\right)^{-1} B_N^{-1} = F_2^{\otimes n} B_N^{-1}. \tag{A.3}
\]
Since \(B_N\) is a bit-reversal permutation matrix, by elementary transformation of the matrix, there is \(B_N^{-1} = B_N\). Hence, we have
\[
G_N^{-1} = F_2^{\otimes n} B_N. \tag{A.4}
\]
It is noticed from Proposition 16 of [1] that \(F_2^{\otimes n} B_N = B_N F_2^{\otimes n}\); therefore, there is
\[
G_N^{-1} = G_N. \tag{A.5}
\]
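As a numerical aside (our own sketch, not part of the original proof), the self-inverse property (A.2) is easy to confirm by computing the Kronecker powers of \(F_2\) and squaring them over GF(2):

```python
import numpy as np

F2 = np.array([[1, 0], [1, 1]], dtype=int)

def kron_power(F, n):
    """n-fold Kronecker product F^{(x) n}."""
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def is_self_inverse_gf2(A):
    # A is its own inverse over GF(2) iff A @ A == I (mod 2)
    return np.array_equal((A @ A) % 2, np.eye(A.shape[0], dtype=int))
```

Running `is_self_inverse_gf2(kron_power(F2, n))` for small \(n\) returns `True`, matching (A.2).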
B. Proof of Theorem 4

We assume that the number of leaf nodes of the frozen node \(v(i,j)\) is \(Q\), that is, \(Q = |V^L_{v(i,j)}|\). If \(Q = 2\), based on (25), there is
\[
p_{v(i,j)}(0) = p_{v_0}(0)\, p_{v_1}(0) + p_{v_0}(1)\, p_{v_1}(1),
\]
\[
p_{v(i,j)}(1) = p_{v_0}(0)\, p_{v_1}(1) + p_{v_0}(1)\, p_{v_1}(0), \tag{B.1}
\]
where \(v_0, v_1 \in V^L_{v(i,j)}\). Based on the above equations, we have
\[
p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \tag{B.2}
\]
Therefore, by mathematical induction, when \(Q > 2\), we will have
\[
p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} \left(p_{v_k}(0) - p_{v_k}(1)\right), \tag{B.3}
\]
where \(v_k \in V^L_{v(i,j)}\).

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in \(V^L_{v(i,j)}\) are set to 0. That is to say, when the node \(v_k \in V^L_{v(i,j)}\) is right, there is \(p_{v_k}(0) > p_{v_k}(1)\). Hence, based on the above equation, when the probability messages of \(v(i,j)\) do not satisfy the reliability condition, that is, \(p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0\), there must exist a subset \(V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}\), with \(|V^{LO}_{v(i,j)}|\) an odd number, such that
\[
\forall v_k \in V^{LO}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.4}
\]
While if \(p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0\), there must exist a subset \(V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}\), with \(|V^{LE}_{v(i,j)}|\) an even number, such that
\[
\forall v_k \in V^{LE}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.5}
\]
So the proof of Theorem 4 is completed.
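A quick numeric check of (B.3) can be made by folding arbitrary leaf probabilities pairwise through the rule (B.1); the leaf values below are our own illustrative numbers.

```python
from functools import reduce
from math import prod

def combine(a, b):
    """(B.1): combine two probability pairs (p(0), p(1)) of XOR-ed nodes."""
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

# One "unreliable" leaf (0.2, 0.8), i.e. an odd-sized subset as in (B.4)
leaves = [(0.9, 0.1), (0.6, 0.4), (0.2, 0.8), (0.7, 0.3)]

root = reduce(combine, leaves)                    # fold the Q leaves into the root
diff_root = root[0] - root[1]                     # left-hand side of (B.3)
diff_prod = prod(p0 - p1 for p0, p1 in leaves)    # right-hand side of (B.3)
```

Since exactly one leaf has \(p(0) \le p(1)\) (an odd number), the root difference comes out nonpositive, as (B.4) predicts.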
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of \(G_N^{-1}\), denoted as \(g_i\), \(0 \le i \le N-1\). For example, the coefficient vector of the error-checking equation generated by \(v(0,0)\) is equal to \(g_1 = (1,1,1,1,1,1,1,1)^T\). Hence, based on the proof of Theorem 1, we have
\[
\operatorname{rank}\left(E_{M \times N}\right) \ge N - K,
\qquad
\operatorname{rank}\left(\begin{bmatrix} E_{M \times N} \\ \left(\gamma_0^{M-1}\right)^T \end{bmatrix}\right) \ge N - K. \tag{C.1}
\]
In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by the linear transformation of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have
\[
\operatorname{rank}\left(E_{M \times N}\right) \le N - K,
\qquad
\operatorname{rank}\left(\begin{bmatrix} E_{M \times N} \\ \left(\gamma_0^{M-1}\right)^T \end{bmatrix}\right) \le N - K. \tag{C.2}
\]
Therefore, the proof of (17) is completed.
D. Proof of Theorem 11

To prove Theorem 11, we assume, without loss of generality, that the real values of all the input nodes are 0. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can be easily proved that there exists a positive constant \(\beta_0 > 1\) such that
\[
\forall v(n,k) \in V_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \tag{D.1}
\]
where \(v(n,k)\) is an input node and \(V_I\) is the set of input nodes of the decoder. That is to say, for a frozen node \(v(i,j)\) with a leaf node set \(V^L_{v(i,j)}\), we have
\[
\forall v_k \in V^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \tag{D.2}
\]
Based on (25) and the decoding tree of \(v(i,j)\), we have the probability messages of \(v(i,j)\) as
\[
p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \; \sum_{\forall\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m-1}\}}}^{Q-2m-1} p_{v_{k_r}}(0),
\]
\[
p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \; \sum_{\forall\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m}\}}}^{Q-2m-1} p_{v_{k_r}}(0), \tag{D.3}
\]
where \(v_{k_l}, v_{k_r} \in V^L_{v(i,j)}\). Hence, we further have
\[
\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}
= \frac{1 + \sum_{m=1}^{Q/2-1} \sum_{\forall\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} \left( p_{v_{k_l}}(0)/p_{v_{k_l}}(1) \right)}
{\sum_{m=0}^{Q/2-1} \sum_{\forall\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m} \left( p_{v_{k_l}}(0)/p_{v_{k_l}}(1) \right)}. \tag{D.4}
\]
With the definition of the variables \(\varphi_0 = p_{v_0}(0)/p_{v_0}(1)\), \(\varphi_1 = p_{v_1}(0)/p_{v_1}(1)\), ..., \(\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)\), with \(1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0\), the above equation can be written as
\[
\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right)
= \frac{1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}
{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \tag{D.5}
\]
To take the derivative of \(f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})\), we further define the functions
\[
h\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,
\]
\[
g\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \tag{D.6}
\]
Then the derivative of \(f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})\) with respect to \(\varphi_k\) will be
\[
\frac{\partial f}{\partial \varphi_k}
= \frac{\left(\partial h/\partial \varphi_k\right) g - \left(\partial g/\partial \varphi_k\right) h}{g^2}
= \frac{g\, g_{\varphi_k=0} - h\, h_{\varphi_k=0}}{g^2}
= \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \tag{D.7}
\]
where \(g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})\) and \(h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})\). Based on the solution of the equations \(\partial f/\partial \varphi_0 = 0\), \(\partial f/\partial \varphi_1 = 0\), ..., \(\partial f/\partial \varphi_{Q-1} = 0\), we get the extreme value point of \(f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})\) as \(\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1\). Based on the analysis of the monotonicity of \(f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})\), we can get the maximum value as \(\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})\). What is more, we can also get that, when \(f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta\), there is \(\varphi_0 > 1\), \(\varphi_1 > 1\), ..., and \(\varphi_{Q-1} > 1\). That is to say, when \(p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta\), we will have \(p_{v_0}(0) > p_{v_0}(1)\), \(p_{v_1}(0) > p_{v_1}(1)\), ..., and \(p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)\); that is, there is no error in \(V^L_{v(i,j)}\). So the proof of Theorem 11 is completed.
Conflict of Interests

The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.

Acknowledgment

The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493-1495, June-July 2009.
[3] S. B. Korada, E. Şaşoğlu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1-5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100-3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668-1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725-728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967-1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1-5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488-1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11-14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188-194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919-929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583-587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21-28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1-5, San Francisco, Calif, USA, December 2006.
Figure 1: Construction of the polar encoding with length \(N = 8\).
a source binary vector \(u_0^{N-1}\), consisting of \(K\) information bits and \(N-K\) frozen bits, can be mapped to a codeword \(x_0^{N-1}\) by a linear matrix \(G_N = B_N F_2^{\otimes n}\), where \(F_2 = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}\), \(B_N\) is a bit-reversal permutation matrix defined in [1], and \(x_0^{N-1} = u_0^{N-1} G_N\).

In practice, the polar encoding can be completed with the construction shown in Figure 1, where the gray circle nodes are the intermediate nodes. The nodes in the leftmost column are the input nodes of the encoder, the values of which are equal to the binary source vector, that is, \(v(0,i) = u_i\), while the nodes in the rightmost column are the output nodes of the encoder, \(v(n,i) = x_i\). Based on this construction, a codeword \(x_0^7\) is generated by the recursive linear transformation of the nodes between adjacent columns.

After the procedure of the polar encoding, all the bits in the codeword \(x_0^{N-1}\) are passed to the \(N\)-channels, which consist of \(N\) independent copies of the channel \(W\) with transition probability \(W(y_i \mid x_i)\), where \(y_i\) is the \(i\)th element of the received vector \(y_0^{N-1}\).

At the receiver, the decoder can output the estimated codeword \(\hat{x}_0^{N-1}\) and the estimated source binary vector \(\hat{u}_0^{N-1}\) with different decoding algorithms [1-19]. It is noticed from [1-19] that the construction of all the decoders is the same as that of the encoder; here we give a strict proof of that fact with the mathematical formulation in the following theorem.
Theorem 1. For the generator matrix \(G_N\) of a polar code, there exists
\[
G_N^{-1} = G_N. \tag{1}
\]
That is to say, for the decoding of polar codes, one will have
\[
\hat{u}_0^{N-1} = \hat{x}_0^{N-1} G_N^{-1} = \hat{x}_0^{N-1} G_N, \tag{2}
\]
where \(G_N^{-1}\) is the construction matrix of the decoder.

Proof. The proof of Theorem 1 is based on matrix transformations and is shown in detail in Appendix A.
Hence, for the polar encoder shown in Figure 1, there is
\[
G_8 = G_8^{-1} =
\begin{bmatrix}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1
\end{bmatrix}. \tag{3}
\]
Furthermore, we have the construction of the decoder as shown in Figure 2(a), where the nodes in the rightmost column are the input nodes of the decoder, and the output nodes are the nodes in the leftmost column. During the procedure of the decoding, the probability messages of the received vector are recursively propagated from the rightmost column nodes to the leftmost column nodes. Then the estimated source binary vector \(\hat{u}_0^7\) can be decided by
\[
\hat{u}_i = \begin{cases} 0, & p_{v(0,i)}(0) > p_{v(0,i)}(1), \\ 1, & \text{otherwise}. \end{cases} \tag{4}
\]
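The decision rule (4) translates directly to code; in this sketch of ours, `p0` and `p1` stand for the output-node message arrays \(p_{v(0,i)}(0)\) and \(p_{v(0,i)}(1)\).

```python
def hard_decision(p0, p1):
    """Rule (4): decide 0 where p_{v(0,i)}(0) > p_{v(0,i)}(1), else 1."""
    return [0 if a > b else 1 for a, b in zip(p0, p1)]
```

Note that a tie \(p(0) = p(1)\) falls into the "otherwise" branch and decodes to 1.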
In fact, the input probability messages of the decoder depend on the transition probability \(W(y_i \mid x_i)\) and the received vector \(y_0^7\); hence there is
\[
p_{v(n,i)}(0) = W\left(y_i \mid x_i = 0\right),
\qquad
p_{v(n,i)}(1) = W\left(y_i \mid x_i = 1\right). \tag{5}
\]
For convenience of expression, we will write the input probability messages \(W(y_i \mid x_i = 0)\) and \(W(y_i \mid x_i = 1)\) as \(q_i(0)\) and \(q_i(1)\), respectively, in the rest of this paper. Therefore, we further have
\[
p_{v(n,i)}(0) = q_i(0),
\qquad
p_{v(n,i)}(1) = q_i(1). \tag{6}
\]
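For the BPSK/AWGN setting used later in the simulations, the likelihoods (5)-(6) can be sketched as below. This is our own illustration: the mapping \(x = 0 \to +1\), \(x = 1 \to -1\) and the noise standard deviation `sigma` are assumptions, not taken from the paper.

```python
import math

def channel_likelihoods(y, sigma):
    """Return (q_i(0), q_i(1)) for each received sample y_i, where
    q_i(b) = W(y_i | x_i = b) is a Gaussian density centered at +/-1."""
    def pdf(y_i, mean):
        return math.exp(-((y_i - mean) ** 2) / (2 * sigma ** 2)) / (
            sigma * math.sqrt(2 * math.pi))
    return [(pdf(y_i, +1.0), pdf(y_i, -1.0)) for y_i in y]
```

A sample near \(+1\) then yields \(q(0) > q(1)\), and a sample near \(-1\) the opposite, as expected.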
2.3. Frozen and Information Nodes. In practice, due to the input of frozen bits [1], the values of some nodes in the decoder are determined, independent of the decoding algorithm, as the red circle nodes illustrated in Figure 2(a) (the code construction method is the same as in [1]). Based on this observation, we classify the nodes in the decoder into two categories: the nodes with determined values are called frozen nodes, and the other nodes are called information nodes, as the gray circle nodes shown in Figure 2(a). In addition, with the basic process units of the polar decoder shown in Figure 2(b), we have the following lemma.

Lemma 2. For the decoder of a polar code with rate \(R < 1\), there must exist some frozen nodes, the number of which depends on the information set \(I\).

Proof. The proof of Lemma 2 can be easily finished based on the process units of the polar decoder, as shown in Figure 2(b), where \(v(i,j_1)\), \(v(i,j_2)\), \(v(i+1,j_3)\), and \(v(i+1,j_4)\) are four nodes of the decoder.
Figure 2: (a) Construction of polar decoding with code length \(N = 8\). (b) Basic process units of the polar decoder.
Lemma 2 has shown that, for a polar code with rate \(R < 1\), frozen nodes always exist; for example, the frozen nodes in Figure 2(a) are \(v(0,0)\), \(v(1,0)\), \(v(0,1)\), \(v(1,1)\), \(v(0,2)\), and \(v(0,4)\). For convenience, we denote the frozen node set of a polar code as \(V_F\), and we assume that the default value of each frozen node is 0 in the subsequent sections.

2.4. Decoding Tree Representation. It can be found from the construction of the decoder in Figure 2(a) that the decoding of a node \(v(i,j)\) can be equivalently represented as a binary decoding tree with some input nodes, where \(v(i,j)\) is the root node of that tree and the input nodes are the leaf nodes. The height of a decoding tree is at most \(\log_2 N\), and each of the intermediate nodes has one or two son nodes. As illustrated in Figure 3, the decoding trees for the frozen nodes \(v(0,0)\), \(v(0,1)\), \(v(0,2)\), \(v(0,4)\), \(v(1,0)\), and \(v(1,1)\) in Figure 2(a) are given.
During the decoding procedure, the probability messages of \(v(0,0)\), \(v(0,1)\), \(v(0,2)\), \(v(0,4)\), \(v(1,0)\), and \(v(1,1)\) strictly depend on the probability messages of the leaf nodes, as the bottom nodes shown in Figure 3. In addition, based on (2), we further have
\[
v(0,0) = \bigoplus_{i=0}^{7} v(3,i),
\]
\[
v(1,0) = v(3,0) \oplus v(3,1) \oplus v(3,2) \oplus v(3,3),
\]
\[
v(0,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),
\]
\[
v(1,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),
\]
\[
v(0,2) = v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7),
\]
\[
v(0,4) = v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7). \tag{7}
\]
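The XOR relations in (7) can be checked mechanically (an aside of ours, not part of the paper): since \(\hat{u} = \hat{x} G_8\) over GF(2) by Theorem 1, each \(u_i\) is the XOR of the input nodes \(v(3,j)\) selected by column \(i\) of \(G_8\) from (3).

```python
import numpy as np

# G_8 from equation (3)
G8 = np.array([
    [1, 0, 0, 0, 0, 0, 0, 0],
    [1, 0, 0, 0, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0, 0, 0],
    [1, 0, 1, 0, 1, 0, 1, 0],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [1, 1, 0, 0, 1, 1, 0, 0],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
], dtype=int)

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=8)   # arbitrary input-node values v(3, j)
u = (x @ G8) % 2                 # u_i = XOR of the x_j selected by column i
```

With this in hand, `u[0]` equals the XOR of all eight input nodes, `u[2]` equals \(v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7)\), and `u[4]` equals \(v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7)\), matching (7).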
To generalize the decoding tree representation for thedecoding we introduce the following Lemma
Lemma 3. In the decoder of a polar code with length N = 2^n, there is a unique decoding tree for each node v(i, j), the leaf node set of which is denoted as V^L_{v(i,j)}. If j ≠ N − 1, the number of leaf nodes is even; that is,

v(i, j) = ⊕∑_{k=0}^{M−1} v(n, j_k), 0 ≤ j_k ≤ N − 1, v(n, j_k) ∈ V^L_{v(i,j)},
(8)

where M = |V^L_{v(i,j)}| and (M mod 2) = 0. While if j = N − 1, M is equal to 1, and it is true that

v(i, N − 1) = v(n, N − 1).
(9)
Proof. The proof of Lemma 3 is based on (2) and the construction of the generator matrix. It is easily proved that, except for the last column (which has only one "1" element), there is an even number of "1" elements in every other column of F_2^{⊗n}. As B_N is a bit-reversal permutation matrix, generated by a permutation of the rows of I_N, the generator matrix G_N has the same characteristic as F_2^{⊗n} (see the proof of Theorem 1). Therefore, (8) and (9) can easily be proved by (2).
Lemma 3 has clearly shown the relationship between the input nodes and the other intermediate nodes of the decoder, which is useful for the error checking and probability message calculation introduced in the subsequent sections.
3 Error Checking for Decoding
As analyzed in Section 1, the key problem in improving the performance of polar codes is to enhance the error-correcting capacity of the decoding. In this section, we show how to achieve this goal.
3.1. Error Checking by the Frozen Nodes. It is noticed from Section 2.3 that the values of the frozen nodes are determined. Hence, if the decoding is correct, the probability messages of any frozen node v(i, j) must satisfy the condition p_{v(i,j)}(0) > p_{v(i,j)}(1) (the default value of the frozen nodes is 0), which is called the reliability condition throughout this
Figure 3: The decoding trees for the nodes v(0, 0), v(0, 1), v(1, 0), v(1, 1), v(0, 2), and v(0, 4).
paper. In practice, however, due to the noise of the received signal, some frozen nodes may not satisfy the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. Hence, it is exactly based on this observation that we can check for errors during the decoding. To describe this in more detail, a theorem is introduced to show the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.
Theorem 4. For any frozen node v(i, j) with a leaf node set V^L_{v(i,j)}, if the probability messages of v(i, j) do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in V^L_{v(i,j)}; otherwise, the number of errors will be even (including 0).
Proof. For the proof of Theorem 4, see Appendix B for details.
Theorem 4 provides an effective method to detect errors in the leaf node set of a frozen node. For example, if the probability messages of the frozen node v(0, 0) in Figure 2 do not satisfy the reliability condition, that is, p_{v(0,0)}(0) ≤ p_{v(0,0)}(1), it can be confirmed that there must exist errors in the set {v(3, 0), v(3, 1), ..., v(3, 7)}, and the number of these errors may be 1, 3, 5, or 7. That is to say, by checking the reliability condition of the frozen nodes, we can confirm the existence of errors in the input nodes of the decoder, which is further presented as a corollary.
Corollary 5. For a polar code with the frozen node set V_F, if ∃v(i, j) ∈ V_F such that v(i, j) does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.
Proof. The proof of Corollary 5 is easily completed based on Theorem 4.
Corollary 5 has clearly shown that, by checking the probability messages of each frozen node, errors in the input nodes of the decoder can be detected.
3.2. Error-Checking Equations. As aforementioned, errors in the input nodes can be found with the probability messages of the frozen nodes, but there still remains the problem of how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node v(n, i), the error indicator is denoted as c_i, the value of which is given by

c_i = 1, if v(n, i) is in error; c_i = 0, otherwise.
(10)

That is to say, by the error indicator, we can determine whether an input node is in error or not.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.
Corollary 6. For any frozen node v(i, j) with a leaf node set V^L_{v(i,j)}, there is

(∑_{k=0}^{M−1} c_{i_k}) mod 2 = 1, if p_{v(i,j)}(0) ≤ p_{v(i,j)}(1);
(∑_{k=0}^{M−1} c_{i_k}) mod 2 = 0, otherwise,
(11)

where M = |V^L_{v(i,j)}|, v(n, i_k) ∈ V^L_{v(i,j)}, and N = 2^n is the code length. Furthermore, over the field GF(2), (11) can be written as

⊕∑_{k=0}^{M−1} c_{i_k} = 1, if p_{v(i,j)}(0) ≤ p_{v(i,j)}(1);
⊕∑_{k=0}^{M−1} c_{i_k} = 0, otherwise.
(12)
Proof. The proof of Corollary 6 is based on Lemma 3 and Theorem 4; the detailed derivation is omitted here.
Corollary 6 has shown that the problem of obtaining the error indicators can be transformed into finding the solutions of (12) under the condition of (11). To illustrate this more specifically, we take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes v(0, 0), v(1, 0), v(0, 2), and v(0, 4) do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, we have the equations

c_0 ⊕ c_1 ⊕ c_2 ⊕ c_3 ⊕ c_4 ⊕ c_5 ⊕ c_6 ⊕ c_7 = 1,
c_0 ⊕ c_1 ⊕ c_2 ⊕ c_3 = 1,
c_4 ⊕ c_5 ⊕ c_6 ⊕ c_7 = 0,
c_4 ⊕ c_5 ⊕ c_6 ⊕ c_7 = 0,
c_2 ⊕ c_3 ⊕ c_6 ⊕ c_7 = 1,
c_1 ⊕ c_3 ⊕ c_5 ⊕ c_7 = 1.
(13)
Furthermore, (13) can be written in matrix form as

E_{6×8} (c_0^7)^T =
[ 1 1 1 1 1 1 1 1
  1 1 1 1 0 0 0 0
  0 0 0 0 1 1 1 1
  0 0 0 0 1 1 1 1
  0 0 1 1 0 0 1 1
  0 1 0 1 0 1 0 1 ] (c_0, c_1, c_2, c_3, c_4, c_5, c_6, c_7)^T = (γ_0^5)^T,
(14)

where γ_0^5 = (1 1 0 0 1 1) and E_{6×8} is the coefficient matrix with size 6 × 8. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to further generalize the above example, we provide a lemma.
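Because N = 8 is tiny, (14) can also be solved by exhaustive search; the sketch below enumerates all 2^8 candidate error indicator vectors (the variable names are ours, chosen for illustration).

```python
# Brute-force check of (14): keep every c with E c^T = gamma^T over GF(2).
E = [
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
]
gamma = [1, 1, 0, 0, 1, 1]

def satisfies(c):
    # One GF(2) inner product per row of E.
    return all(sum(e * ci for e, ci in zip(row, c)) % 2 == g
               for row, g in zip(E, gamma))

candidates = [[(m >> i) & 1 for i in range(8)] for m in range(256)]
solutions = [c for c in candidates if satisfies(c)]

print(len(solutions))                          # 16 = 2^(N - rank(E))
assert [0, 0, 0, 1, 0, 0, 0, 0] in solutions   # the single-error case c_3 = 1
```

The count 16 = 2^{8−4} anticipates Theorem 9 below: rank(E_{6×8}) = N − K = 4, so the system alone cannot pin down a unique error pattern.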
Lemma 8. For a polar code with code length N, code rate R = K/N, and frozen node set V_F, we have the error-checking equations

E_{M×N} (c_0^{N−1})^T = (γ_0^{M−1})^T,
(15)

where c_0^{N−1} is the error indicator vector and M = |V_F|, M ≥ N − K. E_{M×N} is called the error-checking matrix, the elements of which are determined by the code construction method, and γ_0^{M−1} is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in V_F; that is, ∀v_i ∈ V_F, 0 ≤ i ≤ M − 1, there is a unique γ_i ∈ γ_0^{M−1} such that

γ_i = 1, if p_{v_i}(0) ≤ p_{v_i}(1); γ_i = 0, if p_{v_i}(0) > p_{v_i}(1).
(16)

Proof. The proof of Lemma 8 is based on (10)–(14), Lemma 3, and Theorem 4 and is omitted here.
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the positions of the errors in the input nodes by the error-checking equations. It still remains to investigate the existence of solutions of the error-checking equations.
Theorem 9. For a polar code with code length N and code rate R = K/N, there is

rank(E_{M×N}) = rank([E_{M×N} (γ_0^{M−1})^T]) = N − K,
(17)

where [E_{M×N} (γ_0^{M−1})^T] is the augmented matrix of (15) and rank(X) is the rank of the matrix X.

Proof. For the proof of Theorem 9, see Appendix C for details.
It is noticed from Theorem 9 that there must exist multiple solutions of the error-checking equations; therefore, we further investigate the general expression of the solutions, as shown in the following corollary.
Corollary 10. For a polar code with code length N and code rate R = K/N, there exists a transformation matrix P_{N×M} in the field GF(2) such that

[E_{M×N} (γ_0^{M−1})^T] →(P_{N×M}) [ I_H  A_{H×K}  (γ̂_0^{H−1})^T ; 0_{(M−H)×H}  0_{(M−H)×K}  0_{(M−H)×1} ],
(18)

where H = N − K, A_{H×K} is the submatrix of the transformation result of E_{M×N}, and γ̂_0^{H−1} is the subvector of the transformation result of γ_0^{M−1}. Based on (18), the general solutions of the error-checking equations can be obtained by

(ĉ_0^{N−1})^T = [ (ĉ_K^{N−1})^T ; (ĉ_0^{K−1})^T ] = [ A_{H×K} (ĉ_0^{K−1})^T ⊕ (γ̂_0^{H−1})^T ; (ĉ_0^{K−1})^T ],
(19)

(c_0^{N−1})^T = B̂_N (ĉ_0^{N−1})^T,
(20)

where ĉ_i ∈ {0, 1} and B̂_N is an element-permutation matrix, which is determined by the matrix transformation of (18).

Proof. The proof of Corollary 10 is based on Theorem 9 and the theory of solving linear equations; it is omitted here.
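One way to realize the transformation of (18)-(19) in practice is ordinary Gauss-Jordan elimination over GF(2). The sketch below is our own helper, not the paper's P_{N×M}: it returns a particular solution and a null-space basis in the original coordinate order, so the permutation B̂_N is not needed. It assumes the system is consistent, which Theorem 9 guarantees for the error-checking equations.

```python
def gf2_general_solution(E, gamma):
    """Row-reduce [E | gamma] over GF(2); return (particular, null_basis).
    Every solution is particular XOR a GF(2) combination of basis vectors."""
    rows = [row[:] + [g] for row, g in zip(E, gamma)]
    n = len(E[0])
    pivots = []                       # (row index, pivot column) pairs
    r = 0
    for col in range(n):              # reduce only the coefficient columns
        piv = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):    # clear the column everywhere else
            if i != r and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append((r, col))
        r += 1
    pivot_cols = {c for _, c in pivots}
    free = [c for c in range(n) if c not in pivot_cols]
    part = [0] * n                    # particular solution: free variables = 0
    for i, c in pivots:
        part[c] = rows[i][n]
    basis = []                        # one null-space vector per free variable
    for f in free:
        v = [0] * n
        v[f] = 1
        for i, c in pivots:
            v[c] = rows[i][f]
        basis.append(v)
    return part, basis

E = [[1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1], [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1, 0, 1]]
gamma = [1, 1, 0, 0, 1, 1]
part, basis = gf2_general_solution(E, gamma)
```

For the E and γ of (14) this yields a 4-dimensional null space, hence 2^4 = 16 solutions, matching Theorem 9.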
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors γ̂_0^{H−1} and ĉ_0^{K−1}, where γ̂_0^{H−1} is determined by the transformation matrix P_{N×M} and the error-checking vector γ_0^{M−1}, and ĉ_0^{K−1} is a free vector. In general, based on ĉ_0^{K−1}, the number of solutions of the error-checking equations may be up to 2^K, which is a prohibitively large number for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we introduce another theorem.
Theorem 11. For a polar code with code length N = 2^n and frozen node set V_F, there exists a positive real number δ such that, ∀v(i, j) ∈ V_F, if (p_{v(i,j)}(0)/p_{v(i,j)}(1)) ≥ δ, there is

∀v(n, j_k) ∈ V^L_{v(i,j)} ⇒ c_{j_k} = 0,
(21)

where V^L_{v(i,j)} is the leaf node set of v(i, j), 0 ≤ j_k ≤ N − 1, 0 ≤ k ≤ |V^L_{v(i,j)}| − 1, and the value of δ is related to the transition probability of the channel and the signal power.

Proof. For the proof of Theorem 11, see Appendix D for details.
Theorem 11 has shown that, with the probability messages of the frozen nodes and δ, we can quickly determine the values of some elements in ĉ_0^{K−1}, by which the degree of freedom of ĉ_0^{K−1} is further reduced. Correspondingly, the number of solutions of the error-checking equations will also be reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have γ̂_0^3 = (1 1 1 0),

A_4 =
[ 1 1 1 0
  1 1 0 1
  1 0 1 1
  0 1 1 1 ],
(22)

B̂_8 =
[ 0 0 0 1 0 0 0 0
  0 0 1 0 0 0 0 0
  0 1 0 0 0 0 0 0
  0 0 0 0 0 0 0 1
  1 0 0 0 0 0 0 0
  0 0 0 0 0 0 1 0
  0 0 0 0 0 1 0 0
  0 0 0 0 1 0 0 0 ].
(23)

By the element permutation of B̂_8, we further have ĉ_0^3 = (c_3, c_5, c_6, c_7) and ĉ_4^7 = (c_0, c_1, c_2, c_4). If (p_{v(0,1)}(0)/p_{v(0,1)}(1)) ≥ δ, with the checking of (21), there is (c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0) and (c_0, c_1, c_2, c_4) = (c_3 ⊕ 1, c_3 ⊕ 1, c_3 ⊕ 1, 0), which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0); that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions will depend on the probability messages of the frozen nodes and δ.
As the summarization of this section, we give the complete process framework of error checking by the solutions of the error-checking equations, which is shown in Algorithm 1.
4 Proposed Decoding Algorithm
In this section we will introduce the proposed decodingalgorithm in detail
4.1. Probability Messages Calculating. Probability message calculation is an important aspect of a decoding algorithm. Our proposed algorithm is different from the SC and BP algorithms, because the probability messages are calculated based on the decoding tree representation of the nodes in the decoder. For an intermediate node v(i, j) with only one son node v(i + 1, j_o), 0 ≤ j_o ≤ N − 1, there is

p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0),
p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1).
(24)
Input: The frozen node set V_F;
       the probability messages set of V_F;
       the matrices P_{N×M}, A_{H×K}, and B̂_N.
Output: The error indicator vectors set C.
(1) Getting γ_0^{M−1} with the probability messages set of V_F
(2) Getting γ̂_0^{H−1} with γ_0^{M−1} and P_{N×M}
(3) for each v(i, j) ∈ V_F do
(4)   if p_{v(i,j)}(0)/p_{v(i,j)}(1) > δ then
(5)     Setting the error indicator for each leaf node of v(i, j) to 0
(6)   end if
(7) end for
(8) for each valid value of ĉ_0^{K−1} do
(9)   Getting ĉ_K^{N−1} with A_{H×K} and γ̂_0^{N−K−1}
(10)  if (11) is satisfied then
(11)    Getting c_0^{N−1} ∈ C with B̂_N
(12)  else
(13)    Dropping the solution and continuing
(14)  end if
(15) end for
(16) return C

Algorithm 1: Error checking for decoding.
While if v(i, j) has two son nodes v(i + 1, j_l) and v(i + 1, j_r), 0 ≤ j_l, j_r ≤ N − 1, we will have

p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(1),
p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(0).
(25)

Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
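The two rules (24) and (25) can be transcribed directly into code; the sketch below uses our own function names for the one-son and two-son cases.

```python
def single_son(p_son):
    # (24): with one son node, the messages pass through unchanged.
    return p_son

def two_sons(p_left, p_right):
    # (25): with two son nodes, the parent bit is the XOR of the sons' bits,
    # so the probabilities combine like a check node.
    p0 = p_left[0] * p_right[0] + p_left[1] * p_right[1]
    p1 = p_left[0] * p_right[1] + p_left[1] * p_right[0]
    return (p0, p1)

p_parent = two_sons((0.9, 0.1), (0.8, 0.2))   # approximately (0.74, 0.26)
```

Since every node depends only on its sons, all nodes of one column can be evaluated independently, which is what makes the per-column computation parallelizable.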
4.2. Error Correcting. Algorithm 1 in Section 3.3 provides an effective method to detect errors in the input nodes of the decoder, and now we will consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node will be given by

q'_i(0) = λ_0, if q_i(0) > q_i(1);
q'_i(0) = 1 − λ_0, otherwise,
(26)

and q'_i(1) = 1 − q'_i(0), where q'_i(0), q'_i(1) are the new probability messages of the error node v(n, i), and λ_0 is a small nonnegative constant, that is, 0 ≤ λ_0 ≪ 1. Furthermore, we will get the new probability vectors of the input nodes as

q_0^{N−1}(0)' = (q_0(0)', q_1(0)', ..., q_{N−1}(0)'),
q_0^{N−1}(1)' = (q_0(1)', q_1(1)', ..., q_{N−1}(1)'),
(27)

where q_i(0)' = q'_i(0) and q_i(1)' = q'_i(1) if the input node v(n, i) is in error; otherwise, q_i(0)' = q_i(0) and q_i(1)' = q_i(1). Then the probability messages of all the nodes in the decoder will be recalculated.
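A minimal sketch of the correction rule (26)-(27); the value λ_0 = 0.01 is an arbitrary choice for illustration, not a value prescribed by the paper.

```python
LAMBDA0 = 0.01   # lambda_0 in (26): small nonnegative constant, 0 <= lambda_0 << 1

def correct_node(q):
    # (26): push an error node's messages toward the opposite decision.
    q0_new = LAMBDA0 if q[0] > q[1] else 1.0 - LAMBDA0
    return (q0_new, 1.0 - q0_new)

def correct_inputs(q_vec, c):
    # (27): apply (26) only where the error indicator c_i = 1.
    return [correct_node(q) if ci else q for q, ci in zip(q_vec, c)]

# The single-error example of Section 3.3: only node 3 is corrected;
# its message is flipped from favoring 0 to strongly favoring 1.
q_new = correct_inputs([(0.9, 0.1)] * 8, [0, 0, 0, 1, 0, 0, 0, 0])
```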
In fact, when there is only one error indicator vector output from Algorithm 1, that is, |C| = 1, after the error correcting and the recalculation of the probability messages, the estimated source binary vector û_0^{N−1} can be output directly by the hard decision of the output nodes. While if |C| > 1, in order to minimize the decoding error probability, it needs to be further investigated how to get the optimal error indicator vector.
4.3. Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node v(i, j), the reliability degree ζ_{v(i,j)} is given by

ζ_{v(i,j)} = p_{v(i,j)}(0)/p_{v(i,j)}(1), if p_{v(i,j)}(0) > p_{v(i,j)}(1);
ζ_{v(i,j)} = p_{v(i,j)}(1)/p_{v(i,j)}(0), otherwise.
(28)

The reliability degree indicates the reliability of the node's decision value, and the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node v(0, 0) in Figure 2 are p_{v(0,0)}(0) = 0.95 and p_{v(0,0)}(1) = 0.05, there is ζ_{v(0,0)} = 0.95/0.05 = 19; that is, the reliability degree of the decision v(0, 0) = 0 is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
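Rule (28) is a one-line ratio in code; a small sketch with our own function name:

```python
def reliability_degree(p):
    # (28): the ratio of the larger probability message to the smaller one.
    p0, p1 = p
    return p0 / p1 if p0 > p1 else p1 / p0

zeta = reliability_degree((0.95, 0.05))   # the v(0,0) example: about 19
```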
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of |C| > 1, one node in the decoder may correspondingly have multiple reliability degrees. We denote the kth reliability degree of the node v(i, j) as ζ^k_{v(i,j)}, the value of which depends on the kth element of C, that is, c_k. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.

The first method is based on the fact that, when there is no noise in the channel, the reliability degree of a node will approach infinity, that is, ζ_{v(i,j)} → ∞. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as

ĉ_k = argmax_{c_k ∈ C} ∑_{i=0}^{log2 N} ∑_{j=0}^{N−1} ζ^k_{v(i,j)},
(29)

where ĉ_k is the optimal error indicator vector.
To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degrees of all the frozen nodes; hence the target function can be written as

ĉ_k = argmax_{c_k ∈ C} ∑_{v(i,j) ∈ V_F} ζ^k_{v(i,j)}.
(30)

On the other hand, we take the maximization of the reliability degrees of the output nodes as the optimization target, the function of which will be given by

ĉ_k = argmax_{c_k ∈ C} ∑_{j=0}^{N−1} ζ^k_{v(0,j)}.
(31)
Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with the aid of the CRC, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.

(1) Initialization: we first get L candidates of the optimal error indicator vector, ĉ_{k_0}, ĉ_{k_1}, ..., ĉ_{k_{L−1}}, by the formulas of (29), (30), or (31).

(2) CRC-checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from ĉ_{k_0}, ĉ_{k_1}, ..., ĉ_{k_{L−1}} by CRC-checking. If there is only one valid candidate after the CRC-checking, the optimal error indicator vector is output directly; otherwise, the remaining candidates are further processed in step (3).
Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity  | Time complexity
(1)                        | O(1)              | O(N)
(2)                        | O(N log2 N)       | O(N log2 N)
(3)                        | O(X_0)            | O(X_1)
(4)–(7)                    | O(T_0 N log2 N)   | O(T_0 N log2 N)
(8)                        | O(1)              | O(T_0 N log2 N) or O(T_0 N)
(9)                        | O(1)              | O(N)
Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1)                        | O(1)             | O(M)
(2)                        | O(1)             | O(M)
(3)–(7)                    | O(1)             | O(MN)
(8)–(15)                   | O(1)             | O(T_1(M − K)K) + O(T_1 M)
(3) Determination: if there are multiple candidates with a correct CRC-checking, we will further choose the optimal error indicator vector from the remaining candidates of step (2) with the formulas of (29), (30), or (31).
So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summarization of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
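The three-step selection above can be sketched as follows. This is a toy illustration: the 1-bit CRC is modeled as even parity over the decided bits, and the candidate format, scoring, and all names here are our illustrative assumptions, not the paper's implementation.

```python
def toy_crc_ok(bits):
    # Stand-in for the 1-bit CRC check: even parity over the decided bits.
    return sum(bits) % 2 == 0

def select_optimal(candidates):
    """candidates: list of (decided_bits, output_reliability_degrees) pairs,
    one per error indicator vector in C; the score implements target (31)."""
    score = lambda cand: sum(cand[1])
    valid = [c for c in candidates if toy_crc_ok(c[0])]   # step (2): CRC filter
    pool = valid if valid else candidates                 # step (3): fall back if none pass
    return max(pool, key=score)                           # maximize (31)

cands = [([0, 1, 1, 0], [19.0, 12.0, 8.0, 15.0]),
         ([0, 1, 0, 0], [10.0, 9.0, 7.0, 11.0])]
best = select_optimal(cands)      # the parity-valid candidate survives the filter
```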
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, O(X_0), O(X_1) are the space and time complexity of Algorithm 1, respectively, and T_0 is the number of error indicator vectors output by Algorithm 1, that is, T_0 = |C|. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.
In Table 2, M is the number of the frozen nodes and T_1 is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as O(X_0) = O(1) and O(X_1) = 2O(M) + O(MN) + O(T_1(M − K)K) + O(T_1 M). Furthermore, we can get the space and time complexity of the proposed decoding algorithm as O((T_0 + 1)N log2 N) and O(2N) + O((2T_0 + 1)N log2 N) + O((T_1 + N + 2)M) + O(T_1 K(N − K)). From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on T_0 and T_1, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: The received vector y_0^{N−1}.
Output: The decoded codeword û_0^{N−1}.
(1) Getting the probability messages q_0^{N−1}(0) and q_0^{N−1}(1) with the received vector y_0^{N−1}
(2) Getting the probability messages of each frozen node in V_F
(3) Getting the error indicator vectors set C with Algorithm 1
(4) for each c_k ∈ C do
(5)   Correcting the errors indicated by c_k with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword û_0^{N−1} by hard decision
(10) return û_0^{N−1}

Algorithm 2: Decoding algorithm based on error checking and correcting.
5. Simulation Results

In this section, Monte Carlo simulation results are provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is N = 2^3 = 8, the code rate R is 0.5, and the indices of the information bits are the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, three optimization targets with a 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.
As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly, in the low region of signal to noise ratio (SNR), that is, E_b/N_0, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is 10^−3, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is 10^−4, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of δ, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different δ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of δ for E_b/N_0 = 1 dB, E_b/N_0 = 3 dB, E_b/N_0 = 5 dB, and E_b/N_0 = 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.
5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters T_0
Figure 4: Performance comparison of SC, SCL (L = 4), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). δ in Theorem 11 takes the value of 4.
and T_1 indicated in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with the increase of the SNR, the average numbers of the parameters T_0 and T_1 decrease sharply. In particular, we can find that, in the high SNR region, both T_0 and T_1 approach a number less than 1. In this case, the space complexity of the algorithm will be O(N log2 N), and the time complexity approaches O(NM). In addition, we further compare the space and time complexity of Algorithm 1 (δ = 4) with those of the SC, SCL (L = 4), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high
Figure 5: Performance of the proposed decoding algorithm with different δ.
Figure 6: Average number of the parameters T_0 and T_1 with δ = 4.
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to O(NM). All of the above results suggest the effectiveness of our proposed decoding algorithm.
6 Conclusion
In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
Figure 7: Space and time complexity comparison of SC, SCL (L = 4), BP (iteration number is 60), and Algorithm 1 (δ = 4).
performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, through delving into the problem of solving these equations, we introduced the method of checking the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with space and time complexity approaching O(N log2 N) and O(NM) (M is the number of frozen nodes) in the high signal to noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to the research of long code lengths, in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of F_2 through linear transformation of the matrix; that is, F_2^{−1} = [1 0; 1 1]. Furthermore, we have

(F_2^{⊗2})^{−1} = [F_2 0_2; F_2 F_2]^{−1} = [F_2^{−1} 0_2; −F_2^{−1} F_2 F_2^{−1}  F_2^{−1}] = [F_2 0_2; F_2 F_2] = F_2^{⊗2}.
(A.1)

Based on mathematical induction, we will have

(F_2^{⊗n})^{−1} = F_2^{⊗n}.
(A.2)

The inverse of G_N can be expressed as

G_N^{−1} = (B_N F_2^{⊗n})^{−1} = (F_2^{⊗n})^{−1} B_N^{−1} = F_2^{⊗n} B_N^{−1}.
(A.3)

Since B_N is a bit-reversal permutation matrix, by elementary transformation of the matrix, there is B_N^{−1} = B_N. Hence, we have

G_N^{−1} = F_2^{⊗n} B_N.
(A.4)

It is noticed from Proposition 16 of [1] that F_2^{⊗n} B_N = B_N F_2^{⊗n}; therefore, there is

G_N^{−1} = G_N.
(A.5)
B Proof of Theorem 4
We assume the number of leaf nodes of the frozen node $v(i,j)$ is $Q$; that is, $Q = |V^L_{v(i,j)}|$. If $Q = 2$, based on (25), there is

$$p_{v(i,j)}(0) = p_{v_0}(0)p_{v_1}(0) + p_{v_0}(1)p_{v_1}(1),$$
$$p_{v(i,j)}(1) = p_{v_0}(0)p_{v_1}(1) + p_{v_0}(1)p_{v_1}(0), \quad (B1)$$

where $v_0, v_1 \in V^L_{v(i,j)}$. Based on the above equations, we have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = (p_{v_0}(0) - p_{v_0}(1))(p_{v_1}(0) - p_{v_1}(1)). \quad (B2)$$

Therefore, by mathematical induction, when $Q > 2$ we will have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} (p_{v_k}(0) - p_{v_k}(1)), \quad (B3)$$

where $v_k \in V^L_{v(i,j)}$.

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in $V^L_{v(i,j)}$ are set to 0. That is to say, when the node $v_k \in V^L_{v(i,j)}$ is right, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LO}_{v(i,j)}|$ an odd number, such that

$$\forall v_k \in V^{LO}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B4)$$

While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LE}_{v(i,j)}|$ an even number, such that

$$\forall v_k \in V^{LE}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B5)$$

So the proof of Theorem 4 is completed.
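The identity (B3) is easy to confirm numerically: combining leaves pairwise with the check-node rule of (25) multiplies the differences $p(0)-p(1)$. A small Python check with illustrative random leaf probabilities:

```python
import random

def combine(p0a, p1a, p0b, p1b):
    # check-node combination rule of (25)
    return p0a * p0b + p1a * p1b, p0a * p1b + p1a * p0b

random.seed(1)
Q = 5
leaves = []
for _ in range(Q):
    p0 = random.random()
    leaves.append((p0, 1.0 - p0))

# fold all Q leaves through the XOR/check-node rule
p0, p1 = leaves[0]
for q0, q1 in leaves[1:]:
    p0, p1 = combine(p0, p1, q0, q1)

# product of the per-leaf differences, as in (B3)
prod = 1.0
for q0, q1 in leaves:
    prod *= (q0 - q1)

assert abs((p0 - p1) - prod) < 1e-12
```

A single leaf with $p(0) \le p(1)$ flips the sign of the product, which is exactly the odd/even error-count dichotomy of (B4) and (B5).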
C. Proof of Theorem 9
It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of $G_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1,1,1,1,1,1,1,1)^T$. Hence, based on the proof of Theorem 1, we have

$$\operatorname{rank}(E_{M,N}) \ge N - K, \qquad \operatorname{rank}\left(\left[E_{M,N} \mid (\gamma_0^{M-1})^T\right]\right) \ge N - K. \quad (C1)$$

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by the linear transformation of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have

$$\operatorname{rank}(E_{M,N}) \le N - K, \qquad \operatorname{rank}\left(\left[E_{M,N} \mid (\gamma_0^{M-1})^T\right]\right) \le N - K. \quad (C2)$$

Therefore, the proof of (17) is completed.
D. Proof of Theorem 11
To prove Theorem 11, we assume, without loss of generality, that the real values of all the input nodes are 0. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can be easily proved that there exists a positive constant $\beta_0 > 1$ such that

$$\forall v(n,k) \in V_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \quad (D1)$$

where $v(n,k)$ is an input node and $V_I$ is the input nodes set of the decoder. That is to say, for a frozen node $v(i,j)$ with a leaf nodes set $V^L_{v(i,j)}$, we have

$$\forall v_k \in V^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \quad (D2)$$
Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as

$$p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \; \sum_{\forall\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0\le k_r\le Q-1 \\ k_r\notin\{k_0,\ldots,k_{2m-1}\}}}^{Q-2m-1} p_{v_{k_r}}(0),$$

$$p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \; \sum_{\forall\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{r=0,\; 0\le k_r\le Q-1 \\ k_r\notin\{k_0,\ldots,k_{2m}\}}}^{Q-2m-2} p_{v_{k_r}}(0), \quad (D3)$$

where $v_{k_l}, v_{k_r} \in V^L_{v(i,j)}$. Hence, we further have

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1}\sum_{\forall\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}{\sum_{m=0}^{Q/2-1}\sum_{\forall\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}} \prod_{l=0}^{2m} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}. \quad (D4)$$
With the definition of the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, where $1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0$, the above equation can be written as

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = \frac{1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \quad (D5)$$

To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions

$$h(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$

$$g(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \quad (D6)$$

Then the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ with respect to $\varphi_k$ will be

$$\frac{\partial f}{\partial \varphi_k} = \frac{(\partial h/\partial \varphi_k)\,g - (\partial g/\partial \varphi_k)\,h}{g^2} = \frac{g\,g_{\varphi_k=0} - h\,h_{\varphi_k=0}}{g^2} = \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \quad (D7)$$
where $g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$. Based on the solution of the equations $\partial f/\partial \varphi_0 = 0$, $\partial f/\partial \varphi_1 = 0$, $\ldots$, $\partial f/\partial \varphi_{Q-1} = 0$, we get the extreme value point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ as $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\ldots$, and $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\ldots$, and $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $V^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
Figure 2: (a) Construction of polar decoding with code length $N = 8$; (b) basic process units of the polar decoder.
Lemma 2 has shown that, for a polar code with rate $R < 1$, the frozen nodes always exist; for example, the frozen nodes in Figure 2(a) are $v(0,0)$, $v(1,0)$, $v(0,1)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$. For convenience, we denote the frozen node set of a polar code as $V_F$, and we assume that the default value of each frozen node is 0 in the subsequent sections.
2.4. Decoding Tree Representation. It can be found from the construction of the decoder in Figure 2(a) that the decoding of a node $v(i,j)$ can be equivalently represented as a binary decoding tree with some input nodes, where $v(i,j)$ is the root node of that tree and the input nodes are the leaf nodes. The height of a decoding tree is at most $\log_2 N$, and each of the intermediate nodes has one or two son nodes. As illustrated in Figure 3, the decoding trees for the frozen nodes $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ in Figure 2(a) are given.
During the decoding procedure, the probability messages of $v(0,0)$, $v(0,1)$, $v(0,2)$, $v(0,4)$, $v(1,0)$, and $v(1,1)$ will strictly depend on the probability messages of the leaf nodes, as the bottom nodes shown in Figure 3. In addition, based on (2), we further have

$$v(0,0) = \sum_{i=0}^{7} \oplus\, v(3,i),$$
$$v(1,0) = v(3,0) \oplus v(3,1) \oplus v(3,2) \oplus v(3,3),$$
$$v(0,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(1,1) = v(3,4) \oplus v(3,5) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,2) = v(3,2) \oplus v(3,3) \oplus v(3,6) \oplus v(3,7),$$
$$v(0,4) = v(3,1) \oplus v(3,3) \oplus v(3,5) \oplus v(3,7). \quad (7)$$
To generalize the decoding tree representation for the decoding, we introduce the following lemma.

Lemma 3. In the decoder of a polar code with length $N = 2^n$, there is a unique decoding tree for each node $v(i,j)$, the leaf nodes set of which is indicated as $V^L_{v(i,j)}$. And if $j \neq N-1$, the number of the leaf nodes is even; that is,

$$v(i,j) = \sum_{k=0}^{M-1} \oplus\, v(n, j_k), \qquad 0 \le j_k \le N-1, \; v(n,j_k) \in V^L_{v(i,j)}, \quad (8)$$

where $M = |V^L_{v(i,j)}|$ and $(M \bmod 2) = 0$. While if $j = N-1$, $M$ is equal to 1, and it is true that

$$v(i, N-1) = v(n, N-1). \quad (9)$$
Proof. The proof of Lemma 3 is based on (2) and the construction of the generator matrix. It is easily proved that, except for the last column (with only one "1" element), there is an even number of "1" elements in all the other columns of $F_2^{\otimes n}$. As $B_N$ is a bit-reversal permutation matrix, which is generated by permutation of the rows in $I_N$, the generator matrix $G_N$ has the same characteristic as $F_2^{\otimes n}$ (see the proof of Theorem 1). Therefore, (8) and (9) can be easily proved by (2).
Lemma 3 has clearly shown the relationship between the input nodes and the other intermediate nodes of the decoder, which is useful for the error checking and the probability messages calculation introduced in the subsequent sections.

3. Error Checking for Decoding

As analyzed in Section 1, the key problem in improving the performance of polar codes is to enhance the error-correcting capacity of the decoding. In this section, we will show how to achieve this goal.

3.1. Error Checking by the Frozen Nodes. It is noticed from Section 2.3 that the values of the frozen nodes are determined. Hence, if the decoding is correct, the probability messages of any frozen node $v(i,j)$ must satisfy the condition $p_{v(i,j)}(0) > p_{v(i,j)}(1)$ (the default value of frozen nodes is 0), which is called the reliability condition throughout this
Figure 3: The decoding trees for the nodes $v(0,0)$, $v(0,1)$, $v(1,0)$, $v(1,1)$, $v(0,2)$, and $v(0,4)$.
paper. While in practice, due to the noise of the received signal, there may exist some frozen nodes unsatisfying the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. Hence, it is exactly based on this observation that we can check the errors during the decoding. To describe this in further detail, a theorem is introduced to show the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.
Theorem 4. For any frozen node $v(i,j)$ with a leaf node set $V^L_{v(i,j)}$, if the probability messages of $v(i,j)$ do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in $V^L_{v(i,j)}$; otherwise, the error number will be even (including 0).

Proof. For the proof of Theorem 4, see Appendix B.
Theorem 4 has provided us with an effective method to detect errors in the leaf nodes set of a frozen node. For example, if the probability messages of the frozen node $v(0,0)$ in Figure 2 do not satisfy the reliability condition, that is, $p_{v(0,0)}(0) \le p_{v(0,0)}(1)$, it can be confirmed that there must exist errors in the set $\{v(3,0), v(3,1), \ldots, v(3,7)\}$, and the number of these errors may be 1, 3, 5, or 7. That is to say, through checking the reliability condition of the frozen nodes, we can confirm the existence of errors in the input nodes of the decoder, which is further presented as a corollary.

Corollary 5. For a polar code with the frozen node set $V_F$, if $\exists v(i,j) \in V_F$ such that $v(i,j)$ does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.

Proof. The proof of Corollary 5 is easily completed based on Theorem 4.

Corollary 5 has clearly shown that, through checking the probability messages of each frozen node, errors in the input nodes of the decoder can be detected.
3.2. Error-Checking Equations. As aforementioned, errors in the input nodes can be found with the probability messages of the frozen nodes, but there still is a problem: how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node $v(n,i)$, the error indicator is denoted as $c_i$, the value of which is given by

$$c_i = \begin{cases} 1, & v(n,i) \text{ is error}, \\ 0, & \text{otherwise}. \end{cases} \quad (10)$$

That is to say, by the parameter of the error indicator, we can determine whether an input node is in error or not.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.

Corollary 6. For any frozen node $v(i,j)$ with a leaf node set $V^L_{v(i,j)}$, there is

$$\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = 1, \quad p_{v(i,j)}(0) \le p_{v(i,j)}(1),$$
$$\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = 0, \quad \text{otherwise}, \quad (11)$$

where $M = |V^L_{v(i,j)}|$, $v(n, i_k) \in V^L_{v(i,j)}$, and $N = 2^n$ is the code length. Furthermore, under the field of GF(2), (11) can be written as

$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = 1, \quad p_{v(i,j)}(0) \le p_{v(i,j)}(1),$$
$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = 0, \quad \text{otherwise}. \quad (12)$$

Proof. The proof of Corollary 6 is based on Lemma 3 and Theorem 4, and here we omit the detailed derivation process.

Corollary 6 has shown that the problem of obtaining the error indicators can be transformed into finding solutions of (12) under the condition of (11). To introduce this more specifically, here we will take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, there are equations as

$$\sum_{i=0}^{7} \oplus\, c_i = 1,$$
$$c_0 \oplus c_1 \oplus c_2 \oplus c_3 = 1,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_2 \oplus c_3 \oplus c_6 \oplus c_7 = 1,$$
$$c_1 \oplus c_3 \oplus c_5 \oplus c_7 = 1. \quad (13)$$
Furthermore, (13) can be written in matrix form, which is

$$E_{6,8}\,(c_0^7)^T = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix} = (\gamma_0^5)^T, \quad (14)$$

where $\gamma_0^5 = (1,1,0,0,1,1)$ and $E_{6,8}$ is the coefficient matrix with size $6 \times 8$. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to further generalize the above example, we provide a lemma.
Lemma 8. For a polar code with the code length $N$, code rate $R = K/N$, and frozen node set $V_F$, we have the error-checking equations as

$$E_{M,N}\,(c_0^{N-1})^T = (\gamma_0^{M-1})^T, \quad (15)$$

where $c_0^{N-1}$ is the error indicator vector and $M = |V_F|$, $M \ge N-K$. $E_{M,N}$ is called the error-checking matrix, the elements of which are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in $V_F$; that is, $\forall v_i \in V_F$, $0 \le i \le M-1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that

$$\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1), \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases} \quad (16)$$

Proof. The proof of Lemma 8 is based on (10)–(14), Lemma 3, and Theorem 4, which will be omitted here.
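For intuition, the system of Example 7 is small enough to solve exhaustively; the sketch below (illustrative helper code, not the paper's algorithm) enumerates all $2^8$ candidate indicator vectors and keeps those satisfying (14):

```python
import itertools

import numpy as np

# coefficient matrix E_{6,8} and error-checking vector gamma_0^5 of (14)
E = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
])
gamma = np.array([1, 1, 0, 0, 1, 1])

# brute-force search over GF(2): keep every c with E c = gamma (mod 2)
solutions = [c for c in itertools.product([0, 1], repeat=8)
             if np.array_equal(E @ np.array(c) % 2, gamma)]

# the single-error solution (4th input node, c_3 = 1) is among them
assert (0, 0, 0, 1, 0, 0, 0, 0) in solutions
```

Since the matrix has rank $N-K = 4$, the search returns $2^{8-4} = 16$ solutions, which is exactly why the text goes on to prune them with (11) and (21).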
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the position of errors in the input nodes by the error-checking equations. It is still needed to investigate the existence of solutions of the error-checking equations.

Theorem 9. For a polar code with code length $N$ and code rate $R = K/N$, there is

$$\operatorname{rank}(E_{M,N}) = \operatorname{rank}\left(\left[E_{M,N} \mid (\gamma_0^{M-1})^T\right]\right) = N - K, \quad (17)$$

where $[E_{M,N} \mid (\gamma_0^{M-1})^T]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.

Proof. For the proof of Theorem 9, see Appendix C.
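Theorem 9 can be spot-checked on the matrix $E_{6,8}$ of (14); the following sketch computes the GF(2) rank by Gaussian elimination (an illustrative check, not the paper's implementation):

```python
import numpy as np

def gf2_rank(M):
    # Gaussian elimination over GF(2)
    M = M.copy() % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, col]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]   # move pivot row up
        for r in range(rows):
            if r != rank and M[r, col]:
                M[r] ^= M[rank]               # eliminate the column
        rank += 1
    return rank

E = np.array([
    [1, 1, 1, 1, 1, 1, 1, 1],
    [1, 1, 1, 1, 0, 0, 0, 0],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 0, 0, 1, 1, 1, 1],
    [0, 0, 1, 1, 0, 0, 1, 1],
    [0, 1, 0, 1, 0, 1, 0, 1],
])
assert gf2_rank(E) == 4  # N - K = 8 - 4 for the code of Figure 2
```

The duplicated row and the row that equals the XOR of two others drop out during elimination, leaving exactly $N-K$ independent checks.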
It is noticed from Theorem 9 that there must exist multiple solutions for the error-checking equations; therefore, we further investigate the general expression of the solutions of the error-checking equations, as shown in the following corollary.

Corollary 10. For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $P_{N,M}$ in the field of GF(2) such that

$$\left[E_{M,N} \mid (\gamma_0^{M-1})^T\right] \stackrel{P_{N,M}}{\longrightarrow} \begin{bmatrix} I_H & A_{H,K} & (\gamma_0^{H-1})^T \\ 0_{(M-H)\times H} & 0_{(M-H)\times K} & 0_{(M-H)\times 1} \end{bmatrix}, \quad (18)$$

where $H = N-K$, $A_{H,K}$ is the submatrix of the transformation result of $E_{M,N}$, and $\gamma_0^{H-1}$ is the subvector of the transformation result of $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by

$$(\tilde{c}_0^{N-1})^T = \begin{bmatrix} (\tilde{c}_K^{N-1})^T \\ (\tilde{c}_0^{K-1})^T \end{bmatrix} = \begin{bmatrix} A_{H,K}(\tilde{c}_0^{K-1})^T \oplus (\gamma_0^{H-1})^T \\ (\tilde{c}_0^{K-1})^T \end{bmatrix}, \quad (19)$$

$$(c_0^{N-1})^T = B_N\,(\tilde{c}_0^{N-1})^T, \quad (20)$$

where $c_i \in \{0,1\}$ and $B_N$ is an element-permutation matrix, which is determined by the matrix transformation of (18).

Proof. The proof of Corollary 10 is based on Theorem 9 and linear equation solving theory, which are omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors $\gamma_0^{H-1}$ and $\tilde{c}_0^{K-1}$, where $\gamma_0^{H-1}$ is determined by the transformation matrix $P_{N,M}$ and the error-checking vector $\gamma_0^{M-1}$, and $\tilde{c}_0^{K-1}$ is a free vector. In general, based on $\tilde{c}_0^{K-1}$, the number of solutions of the error-checking equations may be up to $2^K$, which is a terrible number for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we further introduce a theorem.
Theorem 11. For a polar code with code length $N = 2^n$ and frozen node set $V_F$, there exists a positive real number $\delta$ such that, $\forall v(i,j) \in V_F$, if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, there is

$$\forall v(n, j_k) \in V^L_{v(i,j)} \Longrightarrow c_{j_k} = 0, \quad (21)$$

where $V^L_{v(i,j)}$ is the leaf nodes set of $v(i,j)$, $0 \le j_k \le N-1$, $0 \le k \le |V^L_{v(i,j)}|-1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.

Proof. For the proof of Theorem 11, see Appendix D.

Theorem 11 has shown that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements in $\tilde{c}_0^{K-1}$, by which the degree of freedom of $\tilde{c}_0^{K-1}$ will be further reduced. Correspondingly, the number of solutions of the error-checking equations will also be reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\gamma_0^3 = (1,1,1,0)$,

$$A_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}, \quad (22)$$

$$B_8 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}. \quad (23)$$

By the element permutation of $B_8$, we further have $\tilde{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\tilde{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $p_{v(0,1)}(0)/p_{v(0,1)}(1) \ge \delta$, with the checking of (21), there is $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. And, of course, the final number of solutions will depend on the probability messages of the frozen nodes and $\delta$.
As a summarization of this section, we give the complete process framework of error checking by solutions of the error-checking equations, which is shown in Algorithm 1.
4. Proposed Decoding Algorithm

In this section, we will introduce the proposed decoding algorithm in detail.

4.1. Probability Messages Calculating. Probability messages calculating is an important aspect of a decoding algorithm. Our proposed algorithm is different from the SC and BP algorithms, because the probability messages are calculated based on the decoding tree representation of the nodes in the decoder, and, for an intermediate node $v(i,j)$ with only one son node $v(i+1, j_o)$, $0 \le j_o \le N-1$, there is

$$p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0), \qquad p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1). \quad (24)$$
Input: the frozen nodes set $V_F$; the probability messages set of $V_F$; the matrices $P_{N,M}$, $A_{H,K}$, and $B_N$.
Output: the error indicator vectors set $C$.
(1) Get $\gamma_0^{M-1}$ with the probability messages set of $V_F$.
(2) Get $\gamma_0^{H-1}$ with $\gamma_0^{M-1}$ and $P_{N,M}$.
(3) for each $v(i,j) \in V_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$ then
(5)     Set the error indicator of each leaf node of $v(i,j)$ to 0.
(6)   end if
(7) end for
(8) for each valid value of $c_0^{K-1}$ do
(9)   Get $c_K^{N-1}$ with $A_{H,K}$ and $\gamma_0^{N-K-1}$.
(10)  if (11) is satisfied then
(11)    Get $c_0^{N-1} \in C$ with $B_N$.
(12)  else
(13)    Drop the solution and continue.
(14)  end if
(15) end for
(16) return $C$

Algorithm 1: Error checking for decoding.
While if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N-1$, we will have

$$p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(1),$$
$$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(0). \quad (25)$$

Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which will be beneficial to the decoding throughput.
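The two update rules can be sketched as follows (a minimal illustration; storing each node as a `(p0, p1)` pair is an assumption of this sketch, not the paper's data structure):

```python
def one_son(p_son):
    # (24): a node with a single son node copies its son's messages
    return p_son

def two_sons(p_left, p_right):
    # (25): a node with two son nodes combines their messages
    p0 = p_left[0] * p_right[0] + p_left[1] * p_right[1]
    p1 = p_left[0] * p_right[1] + p_left[1] * p_right[0]
    return (p0, p1)

# nodes in the same column have no mutual dependence, so a whole
# column can be evaluated in parallel (e.g. with a worker pool)
p = two_sons((0.9, 0.1), (0.8, 0.2))
assert abs(p[0] - 0.74) < 1e-12
```

Because every node depends only on the column to its right, the decoder can sweep column by column, updating all nodes of a column concurrently, which is the parallelism the text refers to.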
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we will consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node will be given by

$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1), \\ 1 - \lambda_0, & \text{otherwise}, \end{cases} \quad (26)$$

and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$ and $q_i'(1)$ are the new probability messages of the error node $v(n,i)$, and $\lambda_0$ is a small nonnegative constant; that is, $0 \le \lambda_0 \ll 1$. Furthermore, we will get the new probability vectors of the input nodes as

$$q_0^{N-1}(0)' = \left(q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'\right),$$
$$q_0^{N-1}(1)' = \left(q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'\right), \quad (27)$$
where 119902119894(0)
1015840= 119902
1015840
119894(0) and 119902
119894(1)
1015840= 119902
1015840
119894(1) if the input node
V(119899 119894) is error otherwise 119902119894(0)
1015840= 119902
119894(0) and 119902
119894(1)
1015840= 119902
119894(1)
Then the probability messages of all the nodes in the decoder will be recalculated.
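A minimal sketch of this correction rule, under the assumption that the input messages are stored as (q_i(0), q_i(1)) pairs; the default lambda0 = 1e-3 is an arbitrary choice satisfying 0 <= lambda_0 << 1:

```python
def correct_errors(q, indicator, lambda0=1e-3):
    """Apply (26)-(27): rebias the messages of the nodes flagged by
    the error indicator vector; leave the other nodes untouched."""
    out = []
    for (q0, q1), err in zip(q, indicator):
        if err:
            # (26): flip the bias of a flagged node toward the
            # opposite decision, with a small constant lambda0
            q0_new = lambda0 if q0 > q1 else 1.0 - lambda0
            out.append((q0_new, 1.0 - q0_new))  # q'(1) = 1 - q'(0)
        else:
            out.append((q0, q1))
    return out
```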
In fact, when there is only one error indicator vector output by Algorithm 1, that is, |\mathbf{C}| = 1, after the error correcting and the recalculation of the probability messages, the estimated source binary vector \hat{u}_0^{N-1} can be output directly by the hard decision of the output nodes. If |\mathbf{C}| > 1, in order to minimize the decoding error probability, further work is needed on how to get the optimal error indicator vector.
4.3 Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node v(i, j), the reliability degree \zeta_{v(i,j)} is given by

    \zeta_{v(i,j)} = p_{v(i,j)}(0) / p_{v(i,j)}(1),   if p_{v(i,j)}(0) > p_{v(i,j)}(1),
    \zeta_{v(i,j)} = p_{v(i,j)}(1) / p_{v(i,j)}(0),   otherwise.    (28)
The Scientific World Journal

The reliability degree indicates the reliability of the node's decision value: the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node v(0, 0) in Figure 2 are p_{v(0,0)}(0) = 0.95 and p_{v(0,0)}(1) = 0.05, then \zeta_{v(0,0)} = 0.95/0.05 = 19; that is, the reliability degree of the decision v(0, 0) = 0 is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which is introduced in the following subsection.
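In code, the reliability degree (28) is just the ratio of the larger probability message to the smaller one (an illustrative sketch, not the authors' implementation):

```python
def reliability_degree(p0, p1):
    """Reliability degree per (28): ratio of the larger of the two
    probability messages to the smaller."""
    return p0 / p1 if p0 > p1 else p1 / p0
```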
4.4 Optimal Error Indicator Vector. As aforementioned, when |\mathbf{C}| > 1, one node in the decoder may have multiple reliability degrees. We denote the kth reliability degree of node v(i, j) as \zeta^k_{v(i,j)}, the value of which depends on the kth element of \mathbf{C}, that is, c_k. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.
The first method is based on the fact that, when there is no noise in the channel, the reliability degree of each node approaches infinity, that is, \zeta_{v(i,j)} \to \infty. Hence the main consideration is to maximize the reliability degree of all the nodes in the decoder, and the target function can be written as

    \hat{c}_k = \arg\max_{c_k \in \mathbf{C}} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} \zeta^k_{v(i,j)},    (29)

where \hat{c}_k is the optimal error indicator vector.
To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degree of all the frozen nodes; hence the target function can be written as

    \hat{c}_k = \arg\max_{c_k \in \mathbf{C}} \sum_{v(i,j) \in V_F} \zeta^k_{v(i,j)}.    (30)
On the other hand, we take the maximization of the output nodes' reliability degree as the optimization target, the function of which is given by

    \hat{c}_k = \arg\max_{c_k \in \mathbf{C}} \sum_{j=0}^{N-1} \zeta^k_{v(0,j)}.    (31)
Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. Moreover, with the aid of CRC, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.
(1) Initialization: we first get L candidates for the optimal error indicator vector, c_{k_0}, c_{k_1}, ..., c_{k_{L-1}}, by formula (29), (30), or (31).

(2) CRC checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from c_{k_0}, c_{k_1}, ..., c_{k_{L-1}} by CRC checking. If there is only one valid candidate after the CRC checking, the optimal error indicator vector is output directly; otherwise, the remaining candidates are further processed in step (3).
Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity  | Time complexity
(1)                        | O(1)              | O(N)
(2)                        | O(N log_2 N)      | O(N log_2 N)
(3)                        | O(X_0)            | O(X_1)
(4)-(7)                    | O(T_0 N log_2 N)  | O(T_0 N log_2 N)
(8)                        | O(1)              | O(T_0 N log_2 N) or O(T_0 N)
(9)                        | O(1)              | O(N)
Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1)                        | O(1)             | O(M)
(2)                        | O(1)             | O(M)
(3)-(7)                    | O(1)             | O(MN)
(8)-(15)                   | O(1)             | O(T_1 (M-K) K) + O(T_1 M)
(3) Determination: if there are multiple candidates with a correct CRC checking, we further choose the optimal error indicator vector from the remaining candidates of step (2) with formula (29), (30), or (31).
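The three steps above can be sketched as a single selection routine. Here `score` stands for whichever of the target functions (29)-(31) is used and `crc_ok` for the CRC check; both are assumed helpers, and the fall-back to the full candidate list when no candidate passes the CRC is a design choice of this sketch, not specified by the paper:

```python
def select_indicator(candidates, score, crc_ok):
    """Steps (1)-(3): keep the CRC-valid candidates, then take the one
    with the largest target-function value; fall back to all candidates
    if none passes the CRC (illustrative choice)."""
    valid = [c for c in candidates if crc_ok(c)]
    pool = valid if valid else candidates
    return max(pool, key=score)
```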
So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summary of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
4.5 Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.
In Table 1, O(X_0) and O(X_1) are the space and time complexity of Algorithm 1, respectively, and T_0 is the number of error indicator vectors output by Algorithm 1, that is, T_0 = |\mathbf{C}|. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.
In Table 2, M is the number of the frozen nodes and T_1 is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as O(X_0) = O(1) and O(X_1) = 2O(M) + O(MN) + O(T_1 (M-K) K) + O(T_1 M). Furthermore, we get the space and time complexity of the proposed decoding algorithm as O((T_0 + 1) N log_2 N) and O(2N) + O((2T_0 + 1) N log_2 N) + O((T_1 + N + 2) M) + O(T_1 K (N-K)). From these results, we find that the complexity of the proposed decoding algorithm mainly depends on T_0 and T_1, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: the received vector y_0^{N-1}.
Output: the decoded codeword \hat{u}_0^{N-1}.
(1) Getting the probability messages q_0^{N-1}(0) and q_0^{N-1}(1) with the received vector y_0^{N-1}
(2) Getting the probability messages of each frozen node in V_F
(3) Getting the set of error indicator vectors \mathbf{C} with Algorithm 1
(4) for each c_k \in \mathbf{C} do
(5)   Correcting the errors indicated by c_k with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword \hat{u}_0^{N-1} by hard decision
(10) return \hat{u}_0^{N-1}

Algorithm 2: Decoding algorithm based on error checking and correcting.
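The control flow of Algorithm 2 can be sketched as below. All helpers are injected as parameters because the paper defines them elsewhere (Algorithm 1 for `error_check`, (26)-(27) for `correct`, and so on), so this is a structural sketch only, with hypothetical function names:

```python
def decode(y, channel_probs, frozen_probs, error_check, correct,
           recompute, pick_best, hard_decision):
    """Structural sketch of Algorithm 2; every helper is assumed."""
    q = channel_probs(y)                       # step (1)
    frozen_probs(q)                            # step (2)
    candidates = error_check(q)                # step (3), Algorithm 1
    runs = []
    for c in candidates:                       # steps (4)-(7)
        runs.append((c, recompute(correct(q, c))))
    best = pick_best(runs)                     # step (8)
    return hard_decision(best)                 # steps (9)-(10)
```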
5 Simulation Results
In this section, Monte Carlo simulation results are provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is N = 2^3 = 8, the code rate R is 0.5, and the indices of the information bits are the same as in [1].
5.1 Performance. To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, three optimization targets with 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.
As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly, in the low signal-to-noise ratio (SNR) region, that is, low E_b/N_0, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is 10^{-3}, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is 10^{-4}, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of \delta, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of the proposed decoding algorithm with different \delta and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of \delta for E_b/N_0 = 1 dB, 3 dB, 5 dB, and 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.
Figure 4: Performance comparison of SC, SCL (L = 4), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). \delta in Theorem 11 takes the value of 4.

5.2 Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters T_0 and T_1 indicated in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with increasing SNR, the average numbers of the parameters T_0 and T_1 decrease sharply. In particular, we can find that, in the high SNR region, both T_0 and T_1 approach a number less than 1. In this case, the space complexity of the algorithm is O(N log_2 N) and the time complexity approaches O(NM). In addition, we further compare the space and time complexity of Algorithm 1 (\delta = 4) with those of the SC, SCL (L = 4), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high
Figure 5: Performance of the proposed decoding algorithm with different \delta.
Figure 6: Average number of the parameters T_0 and T_1 with \delta = 4.
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm is close to O(NM). All of the above results suggest the effectiveness of the proposed decoding algorithm.
Figure 7: Space and time complexity comparison of SC, SCL (L = 4), BP (iteration number is 60), and Algorithm 1 (\delta = 4).

6 Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
performance of the short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, by delving into the problem of solving these equations, we introduced the method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with the space and time complexity approaching O(N log_2 N) and O(NM) (M is the number of frozen nodes) in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in future work, we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of F_2 through elementary transformations of the matrix, that is, F_2^{-1} = [1 0; 1 1]. Furthermore, we have

    (F_2^{\otimes 2})^{-1} = [F_2 0_2; F_2 F_2]^{-1} = [F_2^{-1} 0_2; -F_2^{-1} F_2 F_2^{-1}  F_2^{-1}] = [F_2 0_2; F_2 F_2] = F_2^{\otimes 2},    (A.1)

since -F_2^{-1} = F_2^{-1} over GF(2). Based on mathematical induction, we will have

    (F_2^{\otimes n})^{-1} = F_2^{\otimes n}.    (A.2)

The inverse of G_N can be expressed as

    G_N^{-1} = (B_N F_2^{\otimes n})^{-1} = (F_2^{\otimes n})^{-1} B_N^{-1} = F_2^{\otimes n} B_N^{-1}.    (A.3)

Since B_N is a bit-reversal permutation matrix, by elementary transformation of the matrix there is B_N^{-1} = B_N. Hence, we have

    G_N^{-1} = F_2^{\otimes n} B_N.    (A.4)

It is noticed from Proposition 16 of [1] that F_2^{\otimes n} B_N = B_N F_2^{\otimes n}; therefore,

    G_N^{-1} = G_N.    (A.5)
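The claim G_N^{-1} = G_N can be spot-checked numerically for N = 8 with a short pure-Python script written from the definitions above (not from the authors' code):

```python
def kron(a, b):
    """Kronecker product of two matrices given as lists of lists."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def matmul_gf2(a, b):
    """Matrix product over GF(2)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) % 2
             for j in range(len(b[0]))] for i in range(len(a))]

n = 3
N = 1 << n
F2 = [[1, 0], [1, 1]]
F = [[1]]
for _ in range(n):
    F = kron(F, F2)                  # F_2 to the n-fold Kronecker power

# bit-reversal permutation matrix B_N
B = [[0] * N for _ in range(N)]
for i in range(N):
    j = int(format(i, '0%db' % n)[::-1], 2)
    B[i][j] = 1

G = matmul_gf2(B, F)                 # G_N = B_N F_2^{kron n}
I = [[int(i == j) for j in range(N)] for i in range(N)]
assert matmul_gf2(G, G) == I         # Theorem 1: G_N is self-inverse
```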
B. Proof of Theorem 4

We assume the number of leaf nodes of the frozen node v(i, j) is Q, that is, Q = |V^L_{v(i,j)}|. If Q = 2, based on (25) there is

    p_{v(i,j)}(0) = p_{v_0}(0) p_{v_1}(0) + p_{v_0}(1) p_{v_1}(1),
    p_{v(i,j)}(1) = p_{v_0}(0) p_{v_1}(1) + p_{v_0}(1) p_{v_1}(0),    (B.1)

where v_0, v_1 \in V^L_{v(i,j)}. Based on the above equations, we have

    p_{v(i,j)}(0) - p_{v(i,j)}(1) = (p_{v_0}(0) - p_{v_0}(1)) (p_{v_1}(0) - p_{v_1}(1)).    (B.2)

Therefore, by mathematical induction, when Q > 2 we will have

    p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} (p_{v_k}(0) - p_{v_k}(1)),    (B.3)

where v_k \in V^L_{v(i,j)}.

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in V^L_{v(i,j)} are set to 0. That is to say, when the node v_k \in V^L_{v(i,j)} is correct, there is p_{v_k}(0) > p_{v_k}(1). Hence, based on the above equation, when the probability messages of v(i, j) do not satisfy the reliability condition, that is, p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0, there must exist a subset V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}, with |V^{LO}_{v(i,j)}| an odd number, such that

    \forall v_k \in V^{LO}_{v(i,j)} \to p_{v_k}(0) \le p_{v_k}(1).    (B.4)

While if p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0, there must exist a subset V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}, with |V^{LE}_{v(i,j)}| an even number, such that

    \forall v_k \in V^{LE}_{v(i,j)} \to p_{v_k}(0) \le p_{v_k}(1).    (B.5)

So the proof of Theorem 4 is completed.
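The identity (B.3) is easy to verify numerically by enumerating the subsets of flipped leaves; a sketch, with arbitrary test values for the leaf probabilities:

```python
from itertools import combinations
from math import prod, isclose

def parent_probs(leaves):
    """Probability messages of a node whose value is the modulo-2 sum
    of its leaves: an even number of 1-valued leaves gives 0."""
    q = len(leaves)
    p_even = p_odd = 0.0
    for m in range(q + 1):
        for flipped in combinations(range(q), m):
            term = prod(leaves[k][1] if k in flipped else leaves[k][0]
                        for k in range(q))
            if m % 2 == 0:
                p_even += term
            else:
                p_odd += term
    return p_even, p_odd

leaves = [(0.9, 0.1), (0.8, 0.2), (0.6, 0.4)]
p0, p1 = parent_probs(leaves)
# identity (B.3): p0 - p1 equals the product of the per-leaf differences
assert isclose(p0 - p1, prod(a - b for a, b in leaves))
```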
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of G_N^{-1}, denoted as g_i, 0 \le i \le N-1. For example, the coefficient vector of the error-checking equation generated by v(0, 0) is equal to g_1 = (1, 1, 1, 1, 1, 1, 1, 1)^T. Hence, based on the proof of Theorem 1, we have

    rank(E_{M,N}) \ge N - K,
    rank([E_{M,N} | (\gamma_0^{M-1})^T]) \ge N - K.    (C.1)

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, an error-checking equation generated by a frozen node in an intermediate column can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have

    rank(E_{M,N}) \le N - K,
    rank([E_{M,N} | (\gamma_0^{M-1})^T]) \le N - K.    (C.2)

Therefore, the proof of (17) is completed.
D. Proof of Theorem 11

To prove Theorem 11, we assume, without loss of generality, that the real values of all the input nodes are 0. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can easily be proved that there exists a positive constant \beta_0 > 1 such that

    \forall v(n, k) \in V_I \Rightarrow 1/\beta_0 \le p_{v(n,k)}(0) / p_{v(n,k)}(1) \le \beta_0,    (D.1)

where v(n, k) is an input node and V_I is the input node set of the decoder. That is to say, for a frozen node v(i, j) with a leaf node set V^L_{v(i,j)}, we have

    \forall v_k \in V^L_{v(i,j)} \Rightarrow 1/\beta_0 \le p_{v_k}(0) / p_{v_k}(1) \le \beta_0.    (D.2)

Based on (25) and the decoding tree of v(i, j), we have the probability messages of v(i, j) as

    p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0, ..., k_{2m-1}\} \subseteq \{0, ..., Q-1\}} \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{r=0, 0 \le k_r \le Q-1, k_r \notin \{k_0, ..., k_{2m-1}\}}^{Q-2m-1} p_{v_{k_r}}(0),

    p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0, ..., k_{2m}\} \subseteq \{0, ..., Q-1\}} \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{r=0, 0 \le k_r \le Q-1, k_r \notin \{k_0, ..., k_{2m}\}}^{Q-2m-2} p_{v_{k_r}}(0),    (D.3)

where v_{k_l}, v_{k_r} \in V^L_{v(i,j)}. Hence, we further have

    p_{v(i,j)}(0) / p_{v(i,j)}(1)
    = (1 + \sum_{m=1}^{Q/2-1} \sum_{\forall \{k_0, ..., k_{2m-1}\} \subseteq \{0, ..., Q-1\}} \prod_{l=0}^{2m-1} (p_{v_{k_l}}(0) / p_{v_{k_l}}(1)))
      / (\sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0, ..., k_{2m}\} \subseteq \{0, ..., Q-1\}} \prod_{l=0}^{2m} (p_{v_{k_l}}(0) / p_{v_{k_l}}(1))).    (D.4)

With the definition of the variables \varphi_0 = p_{v_0}(0)/p_{v_0}(1), \varphi_1 = p_{v_1}(0)/p_{v_1}(1), ..., \varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1), where 1/\beta_0 \le \varphi_0, \varphi_1, ..., \varphi_{Q-1} \le \beta_0, the above equation can be written as

    p_{v(i,j)}(0) / p_{v(i,j)}(1) = f(\varphi_0, \varphi_1, ..., \varphi_{Q-1})
    = (1 + \varphi_0\varphi_1 + ... + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + ... + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + ...)
      / (\varphi_0 + ... + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + ... + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + ...).    (D.5)

To take the derivative of f(\varphi_0, \varphi_1, ..., \varphi_{Q-1}), we further define the functions

    h(\varphi_0, \varphi_1, ..., \varphi_{Q-1}) = 1 + \varphi_0\varphi_1 + ... + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + ... + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + ...,
    g(\varphi_0, \varphi_1, ..., \varphi_{Q-1}) = \varphi_0 + ... + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + ... + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + ....    (D.6)

Then the derivative of f(\varphi_0, \varphi_1, ..., \varphi_{Q-1}) with respect to \varphi_k will be

    \partial f / \partial \varphi_k = ((\partial h / \partial \varphi_k) g - (\partial g / \partial \varphi_k) h) / g^2
    = (g_{\varphi_k=0} g - h_{\varphi_k=0} h) / g^2 = (g^2_{\varphi_k=0} - h^2_{\varphi_k=0}) / g^2,    (D.7)

where g_{\varphi_k=0} = g(\varphi_0, ..., \varphi_{k-1}, 0, \varphi_{k+1}, ..., \varphi_{Q-1}) and h_{\varphi_k=0} = h(\varphi_0, ..., \varphi_{k-1}, 0, \varphi_{k+1}, ..., \varphi_{Q-1}). Based on the solution of the equations \partial f / \partial \varphi_0 = 0, \partial f / \partial \varphi_1 = 0, ..., \partial f / \partial \varphi_{Q-1} = 0, we get the extreme point of f(\varphi_0, \varphi_1, ..., \varphi_{Q-1}) as \varphi_0 = \varphi_1 = ... = \varphi_{Q-1} = 1. Based on the analysis of the monotonicity of f(\varphi_0, \varphi_1, ..., \varphi_{Q-1}), we get the maximum value as \delta = f(\beta_0, \beta_0, ..., \beta_0) (with Q arguments). What is more, we can also get that, when f(\varphi_0, \varphi_1, ..., \varphi_{Q-1}) \ge \delta, there is \varphi_0 > 1, \varphi_1 > 1, ..., \varphi_{Q-1} > 1. That is to say, when p_{v(i,j)}(0) / p_{v(i,j)}(1) \ge \delta, we will have p_{v_0}(0) > p_{v_0}(1), p_{v_1}(0) > p_{v_1}(1), ..., p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1); that is, there is no error in V^L_{v(i,j)}. So the proof of Theorem 11 is completed.
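The extremal behavior of f can be spot-checked numerically for a small Q; the sketch below uses an arbitrary \beta_0 = 2 and Q = 2, evaluating f as the ratio of even-size to odd-size subset products from (D.5)-(D.6):

```python
from itertools import product
from math import isclose

def f(phis):
    """f from (D.5): ratio of the sum over even-size subset products
    of the phi variables to the sum over odd-size subset products."""
    h = g = 0.0
    q = len(phis)
    for mask in range(1 << q):
        term = 1.0
        ones = 0
        for k in range(q):
            if mask >> k & 1:
                term *= phis[k]
                ones += 1
        if ones % 2 == 0:
            h += term
        else:
            g += term
    return h / g

beta0 = 2.0
grid = product([1 / beta0, 1.0, beta0], repeat=2)
# the maximum over the admissible grid is attained at phi_k = beta0
assert isclose(max(f(p) for p in grid), f((beta0, beta0)))
```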
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493-1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1-5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100-3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668-1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725-728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967-1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1-5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488-1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11-14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188-194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919-929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583-587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21-28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1-5, San Francisco, Calif, USA, December 2006.
Figure 3: The decoding trees for the nodes v(0, 0), v(0, 1), v(1, 0), v(1, 1), v(0, 2), and v(0, 4).
paper. In practice, however, due to the noise of the received signal, there may exist some frozen nodes that do not satisfy the reliability condition, which indicates that there must exist errors in the input nodes of the decoder. Hence, it is exactly based on this observation that we can check for errors during the decoding. To describe this in more detail, a theorem is introduced to show the relationship between the reliability condition of the frozen nodes and the errors in the input nodes of the decoder.
Theorem 4. For any frozen node v(i, j) with a leaf node set V^L_{v(i,j)}, if the probability messages of v(i, j) do not satisfy the reliability condition during the decoding procedure, there must exist an odd number of error nodes in V^L_{v(i,j)}; otherwise, the error number will be even (including 0).

Proof. For the proof of Theorem 4, see Appendix B.
Theorem 4 provides an effective method to detect errors in the leaf node set of a frozen node. For example, if the probability messages of the frozen node v(0, 0) in Figure 2 do not satisfy the reliability condition, that is, p_{v(0,0)}(0) \le p_{v(0,0)}(1), it can be confirmed that there must exist errors in the set {v(3, 0), v(3, 1), ..., v(3, 7)}, and the number of these errors may be 1, 3, 5, or 7. That is to say, by checking the reliability condition of the frozen nodes, we can confirm the existence of errors in the input nodes of the decoder, which is further presented as a corollary.
Corollary 5. For a polar code with the frozen node set V_F, if \exists v(i, j) \in V_F such that v(i, j) does not satisfy the reliability condition, there must exist errors in the input nodes of the decoder.

Proof. The proof of Corollary 5 follows directly from Theorem 4.

Corollary 5 clearly shows that, by checking the probability messages of each frozen node, errors in the input nodes of the decoder can be detected.
3.2 Error-Checking Equations. As aforementioned, errors in the input nodes can be found with the probability messages of the frozen nodes, but there still remains the problem of how to locate the exact position of each error. To solve this problem, a parameter called the error indicator is defined for each input node of the decoder. For the input node v(n, i), the error indicator is denoted as c_i, the value of which is given by

    c_i = 1,   if v(n, i) is in error,
    c_i = 0,   otherwise.    (10)

That is to say, with the error indicator, we can determine whether an input node is in error or not.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.
Corollary 6. For any frozen node $v(i,j)$ with a leaf node set $V^L_{v(i,j)}$, there is

$$\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1), \\ 0, & \text{otherwise,} \end{cases} \quad (11)$$

where $M = |V^L_{v(i,j)}|$, $v(n, i_k) \in V^L_{v(i,j)}$, and $N = 2^n$ is the code length. Furthermore, over the field $GF(2)$, (11) can be written as

$$\bigoplus_{k=0}^{M-1} c_{i_k} = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1), \\ 0, & \text{otherwise.} \end{cases} \quad (12)$$
Proof. The proof of Corollary 6 follows from Lemma 3 and Theorem 4; we omit the detailed derivation here.
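The parity relation of (11)/(12) can be checked numerically: when the leaf messages of a frozen node are combined with the XOR-style rule of (25), the frozen node violates the reliability condition exactly when an odd number of its leaves are in error. A minimal sketch in Python, assuming an all-zero codeword and illustrative message values $0.9/0.1$ (not from the paper):

```python
from functools import reduce

def combine(a, b):
    # XOR-style message combination of (25): the frozen node value is
    # the GF(2) sum of its leaf node values
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

def frozen_message(leaf_msgs):
    # fold all leaf messages into the frozen node's message pair
    return reduce(combine, leaf_msgs)

GOOD, BAD = (0.9, 0.1), (0.1, 0.9)   # correct leaf vs. errored leaf

for n_err in range(9):
    leaves = [BAD] * n_err + [GOOD] * (8 - n_err)
    p0, p1 = frozen_message(leaves)
    gamma = 1 if p0 <= p1 else 0     # reliability-condition flag
    assert gamma == n_err % 2        # parity relation of (11)/(12)
```

The assertion holds because, by (B.3) in the appendix, $p(0) - p(1)$ is the product of the per-leaf differences, whose sign flips with each errored leaf.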
Corollary 6 shows that the problem of obtaining the error indicators can be transformed into finding solutions of (12) under the condition of (11). To introduce the idea more concretely, we take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, we have the equations

$$\begin{aligned}
c_0 \oplus c_1 \oplus c_2 \oplus c_3 \oplus c_4 \oplus c_5 \oplus c_6 \oplus c_7 &= 1, \\
c_0 \oplus c_1 \oplus c_2 \oplus c_3 &= 1, \\
c_4 \oplus c_5 \oplus c_6 \oplus c_7 &= 0, \\
c_4 \oplus c_5 \oplus c_6 \oplus c_7 &= 0, \\
c_2 \oplus c_3 \oplus c_6 \oplus c_7 &= 1, \\
c_1 \oplus c_3 \oplus c_5 \oplus c_7 &= 1.
\end{aligned} \quad (13)$$
Furthermore, (13) can be written in matrix form as

$$\mathbf{E}_{6,8}\left(c_0^7\right)^T = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix} = \left(\gamma_0^5\right)^T, \quad (14)$$

where $\gamma_0^5 = (1, 1, 0, 0, 1, 1)$ and $\mathbf{E}_{6,8}$ is the coefficient matrix of size $6 \times 8$. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to generalize the above example, we provide a lemma.
Lemma 8. For a polar code with code length $N$, code rate $R = K/N$, and frozen node set $V_F$, the error-checking equations are

$$\mathbf{E}_{M,N}\left(c_0^{N-1}\right)^T = \left(\gamma_0^{M-1}\right)^T, \quad (15)$$

where $c_0^{N-1}$ is the error indicator vector and $M = |V_F|$, $M \ge N - K$. $\mathbf{E}_{M,N}$ is called the error-checking matrix, the elements of which are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in $V_F$; that is, $\forall v_i \in V_F$, $0 \le i \le M - 1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that

$$\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1), \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases} \quad (16)$$
Proof. The proof of Lemma 8 follows from (10)–(14), Lemma 3, and Theorem 4 and is omitted here.
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the positions of errors in the input nodes through the error-checking equations. It still remains to investigate the existence of solutions of these equations.
Theorem 9. For a polar code with code length $N$ and code rate $R = K/N$, there is

$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) = \operatorname{rank}\left(\left[\mathbf{E}_{M,N} \mid \left(\gamma_0^{M-1}\right)^T\right]\right) = N - K, \quad (17)$$

where $[\mathbf{E}_{M,N} \mid (\gamma_0^{M-1})^T]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.
Proof. For the proof of Theorem 9, see Appendix C.
It is noticed from Theorem 9 that there must exist multiple solutions of the error-checking equations; therefore, we further investigate the general expression of these solutions, as shown in the following corollary.
Corollary 10. For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $\mathbf{P}_{N,M}$ in the field $GF(2)$ such that

$$\left[\mathbf{E}_{M,N} \mid \left(\gamma_0^{M-1}\right)^T\right] \xrightarrow{\mathbf{P}_{N,M}} \begin{bmatrix} \mathbf{I}_H & \mathbf{A}_{H,K} & \left(\gamma_0^{H-1}\right)^T \\ \mathbf{0}_{(M-H),H} & \mathbf{0}_{(M-H),K} & \mathbf{0}_{(M-H)\times 1} \end{bmatrix}, \quad (18)$$

where $H = N - K$, $\mathbf{A}_{H,K}$ is the submatrix of the transformation result of $\mathbf{E}_{M,N}$, and $\gamma_0^{H-1}$ is the subvector of the transformation result of $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by

$$\left(\tilde{c}_0^{N-1}\right)^T = \begin{bmatrix} \left(\tilde{c}_K^{N-1}\right)^T \\ \left(\tilde{c}_0^{K-1}\right)^T \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{H,K}\left(\tilde{c}_0^{K-1}\right)^T \oplus \left(\gamma_0^{H-1}\right)^T \\ \left(\tilde{c}_0^{K-1}\right)^T \end{bmatrix}, \quad (19)$$

$$\left(c_0^{N-1}\right)^T = \mathbf{B}_N\left(\tilde{c}_0^{N-1}\right)^T, \quad (20)$$

where $\tilde{c}_0^{N-1}$ is the solution in the permuted index order, $\tilde{c}_i \in \{0, 1\}$, and $\mathbf{B}_N$ is an element-permutation matrix, which is determined by the matrix transformation of (18).
Proof. The proof of Corollary 10 follows from Theorem 9 and the theory of linear equations and is omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations depend on the two vectors $\gamma_0^{H-1}$ and $c_0^{K-1}$, where $\gamma_0^{H-1}$ is determined by the transformation matrix $\mathbf{P}_{N,M}$ and the error-checking vector $\gamma_0^{M-1}$, while $c_0^{K-1}$ is a free vector. In general, owing to $c_0^{K-1}$, the number of solutions of the error-checking equations may be up to $2^K$, which is prohibitively large for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we introduce another theorem.
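For the worked example of (14), the rank property behind Theorem 9 and the $2^K$ solution count can be reproduced with a small Gaussian-elimination routine over $GF(2)$; the following is a sketch in pure Python, with the matrix copied from (14):

```python
def gf2_rank(matrix):
    # Gaussian elimination over GF(2); each row is a list of bits
    rows = [r[:] for r in matrix]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# error-checking matrix E_{6,8} of the worked example (14)
E = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1]]

r = gf2_rank(E)
print(r, 2 ** (8 - r))   # rank N - K = 4, so 2^(N - rank) = 16 candidates
```

Two rows are duplicated and one is the sum of two others, so the rank is $N - K = 4$ and the solution space has $2^{8-4} = 16$ elements before any checking.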
Theorem 11. For a polar code with code length $N = 2^n$ and frozen node set $V_F$, there exists a positive real number $\delta$ such that, $\forall v(i,j) \in V_F$, if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, there is

$$\forall v(n, j_k) \in V^L_{v(i,j)} \Longrightarrow c_{j_k} = 0, \quad (21)$$

where $V^L_{v(i,j)}$ is the leaf node set of $v(i,j)$, $0 \le j_k \le N - 1$, $0 \le k \le |V^L_{v(i,j)}| - 1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.
Proof. For the proof of Theorem 11, see Appendix D.
Theorem 11 shows that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements of $c_0^{K-1}$, by which the degrees of freedom of $c_0^{K-1}$ are further reduced. Correspondingly, the number of solutions of the error-checking equations is also reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\gamma_0^3 = (1, 1, 1, 0)$,

$$\mathbf{A}_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}, \quad (22)$$

$$\mathbf{B}_8 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}. \quad (23)$$

By the element permutation of $\mathbf{B}_8$, we further have $\tilde{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\tilde{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $p_{v(0,1)}(0)/p_{v(0,1)}(1) \ge \delta$, then, with the checking of (21), there is $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions is 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions depends on the probability messages of the frozen nodes and $\delta$.
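The reduction chain of this example can be replayed by brute force: enumerate all $2^8$ candidate vectors, keep those satisfying (14), and then apply the constraint $c_5 = c_6 = c_7 = 0$ obtained from Theorem 11. A sketch:

```python
from itertools import product

E = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1]]
gamma = [1, 1, 0, 0, 1, 1]

def satisfies(c):
    # check E c = gamma over GF(2), i.e., system (14)
    return all(sum(e * x for e, x in zip(row, c)) % 2 == g
               for row, g in zip(E, gamma))

sols = [c for c in product([0, 1], repeat=8) if satisfies(c)]
# Theorem 11: leaves of a highly reliable frozen node are error-free,
# which in this example fixes c5 = c6 = c7 = 0
reduced = [c for c in sols if c[5] == c[6] == c[7] == 0]
print(len(sols), len(reduced))        # 16 candidates shrink to 2
```

Of the two remaining candidates, $(0,0,0,1,0,0,0,0)$ is the one retained by the subsequent checking, matching the exact solution of the example.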
As a summary of this section, we give the complete procedure of error checking through the solutions of the error-checking equations, as shown in Algorithm 1.
4. Proposed Decoding Algorithm
In this section we will introduce the proposed decodingalgorithm in detail
4.1. Probability Messages Calculating. Probability message calculation is an important aspect of a decoding algorithm. Our proposed algorithm differs from the SC and BP algorithms in that the probability messages are calculated based on the decoding tree representation of the nodes in the decoder. For an intermediate node $v(i,j)$ with only one son node $v(i+1, j_o)$, $0 \le j_o \le N - 1$, there is

$$\begin{aligned} p_{v(i,j)}(0) &= p_{v(i+1,j_o)}(0), \\ p_{v(i,j)}(1) &= p_{v(i+1,j_o)}(1). \end{aligned} \quad (24)$$
Input: the frozen node set $V_F$; the probability messages of the nodes in $V_F$; the matrices $\mathbf{P}_{N,M}$, $\mathbf{A}_{H,K}$, and $\mathbf{B}_N$.
Output: the set of error indicator vectors $C$.

(1) Get $\gamma_0^{M-1}$ from the probability messages of $V_F$.
(2) Get $\gamma_0^{H-1}$ from $\gamma_0^{M-1}$ and $\mathbf{P}_{N,M}$.
(3) for each $v(i,j) \in V_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) > \delta$ then
(5)     Set the error indicator of each leaf node of $v(i,j)$ to 0.
(6)   end if
(7) end for
(8) for each valid value of $c_0^{K-1}$ do
(9)   Get $c_K^{N-1}$ with $\mathbf{A}_{H,K}$ and $\gamma_0^{H-1}$ by (19).
(10)  if (11) is satisfied then
(11)    Get $c_0^{N-1} \in C$ with $\mathbf{B}_N$.
(12)  else
(13)    Drop the solution and continue.
(14)  end if
(15) end for
(16) return $C$.

Algorithm 1: Error checking for decoding.
While, if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N - 1$, we will have

$$\begin{aligned} p_{v(i,j)}(0) &= p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(1), \\ p_{v(i,j)}(1) &= p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(0). \end{aligned} \quad (25)$$
Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
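As an illustration of (24) and (25), every node of one column depends on a disjoint set of nodes in the next column, so a whole column can be evaluated in parallel; a sketch with made-up channel messages:

```python
def one_son(p):
    # (24): a node with a single son copies the son's messages
    return p

def two_sons(pl, pr):
    # (25): a node with two sons combines them XOR-style
    return (pl[0] * pr[0] + pl[1] * pr[1],
            pl[0] * pr[1] + pl[1] * pr[0])

# hypothetical messages of one column, as pairs (p(0), p(1))
layer = [(0.9, 0.1), (0.8, 0.2), (0.6, 0.4), (0.7, 0.3)]

# each output node reads a disjoint pair of inputs, so this
# comprehension could run fully in parallel across the column
next_layer = [two_sons(layer[2 * i], layer[2 * i + 1])
              for i in range(len(layer) // 2)]
```

Each combined pair remains a normalized probability pair, so the same step can be repeated column by column toward the frozen nodes.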
4.2. Error Correcting. Algorithm 1 in Section 3.3 provides an effective method to detect errors in the input nodes of the decoder, and now we consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node are given by

$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1), \\ 1 - \lambda_0, & \text{otherwise,} \end{cases} \quad (26)$$

and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$ and $q_i'(1)$ are the new probability messages of the error node $v(n,i)$ and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we get the new probability vectors of the input nodes as

$$\begin{aligned} q_0^{N-1}(0)' &= \left(q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'\right), \\ q_0^{N-1}(1)' &= \left(q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'\right), \end{aligned} \quad (27)$$

where $q_i(0)' = q_i'(0)$ and $q_i(1)' = q_i'(1)$ if the input node $v(n,i)$ is in error; otherwise, $q_i(0)' = q_i(0)$ and $q_i(1)' = q_i(1)$. Then the probability messages of all the nodes in the decoder are recalculated.
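The modification rule (26)/(27) amounts to flipping the decision of each node flagged by the error indicator vector while leaving the other nodes untouched. A sketch, where $\lambda_0 = 10^{-3}$ and the message values are illustrative assumptions:

```python
LAMBDA0 = 1e-3   # small constant, 0 <= lambda0 << 1

def correct(q, is_error):
    # (26)/(27): force a flagged node's message toward the opposite
    # decision; leave unflagged nodes unchanged
    if not is_error:
        return q
    q0 = LAMBDA0 if q[0] > q[1] else 1.0 - LAMBDA0
    return (q0, 1.0 - q0)

# hypothetical input-node messages and an error indicator vector
q_in = [(0.9, 0.1), (0.2, 0.8), (0.7, 0.3)]
c    = [0, 1, 0]
q_new = [correct(q, ci == 1) for q, ci in zip(q_in, c)]
```

Here the flagged node, which originally decided 1, is pushed to decide 0 after correction, and the decoder messages would then be recalculated from `q_new`.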
In fact, when there is only one error indicator vector output from Algorithm 1, that is, $|C| = 1$, after the error correcting and the recalculation of the probability messages, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes. While, if $|C| > 1$, in order to minimize the decoding error probability, further work is needed on how to get the optimal error indicator vector.
4.3. Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by

$$\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1), \\[2mm] \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise.} \end{cases} \quad (28)$$
The reliability degree indicates the reliability of the node's decision value: the larger the reliability degree, the higher
the reliability of that value. For example, if the probability messages of the node $v(0,0)$ in Figure 2 are $p_{v(0,0)}(0) = 0.95$ and $p_{v(0,0)}(1) = 0.05$, there is $\zeta_{v(0,0)} = 0.95/0.05 = 19$; that is, the reliability degree of the decision $v(0,0) = 0$ is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which is introduced in the following subsection.
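The computation of (28), reproducing the numerical example above:

```python
def reliability_degree(p0, p1):
    # (28): ratio of the larger probability message to the smaller one
    return p0 / p1 if p0 > p1 else p1 / p0

zeta = reliability_degree(0.95, 0.05)   # the example node v(0,0)
```

By construction $\zeta \ge 1$, and a node whose two messages are nearly equal has a reliability degree close to 1.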
4.4. Optimal Error Indicator Vector. As aforementioned, since $|C| > 1$ is possible, one node in the decoder may correspondingly have multiple reliability degrees. We denote the $k$th reliability degree of node $v(i,j)$ as $\zeta_{v(i,j)}^k$, the value of which depends on the $k$th element of $C$, that is, $c_k$. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.
The first method is based on the fact that, when there is no noise in the channel, the reliability degree of each node approaches infinity, that is, $\zeta_{v(i,j)} \to \infty$. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} \zeta_{v(i,j)}^k, \quad (29)$$

where $\hat{c}_k$ is the optimal error indicator vector.
To reduce the complexity, we further introduce two simplified versions of the above method. On the one hand, we can just maximize the reliability degrees of all the frozen nodes; hence, the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{v(i,j) \in V_F} \zeta_{v(i,j)}^k. \quad (30)$$
On the other hand, we can take the maximization of the reliability degrees of the output nodes as the optimization target, the function of which is given by

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{j=0}^{N-1} \zeta_{v(0,j)}^k. \quad (31)$$
Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with the aid of a CRC, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.

(1) Initialization: we first get $L$ candidates $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$ of the optimal error indicator vector by the formulas of (29), (30), or (31).

(2) CRC-checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$ by CRC-checking. If there is only one valid candidate after the CRC-checking, the optimal error indicator vector is output directly; otherwise, the remaining candidates are further processed in step (3).

(3) Determination: if there are multiple candidates with a correct CRC-checking, we further choose the optimal error indicator vector from the remaining candidates of step (2) with the formulas of (29), (30), or (31).

Table 1: The space and time complexity of each step in Algorithm 2.

  Step number in Algorithm 2 | Space complexity       | Time complexity
  (1)                        | O(1)                   | O(N)
  (2)                        | O(N log2 N)            | O(N log2 N)
  (3)                        | O(X0)                  | O(X1)
  (4)-(7)                    | O(T0 N log2 N)         | O(T0 N log2 N)
  (8)                        | O(1)                   | O(T0 N log2 N) or O(T0 N)
  (9)                        | O(1)                   | O(N)

Table 2: The space and time complexity of each step in Algorithm 1.

  Step number in Algorithm 1 | Space complexity | Time complexity
  (1)                        | O(1)             | O(M)
  (2)                        | O(1)             | O(M)
  (3)-(7)                    | O(1)             | O(MN)
  (8)-(15)                   | O(1)             | O(T1 (M - K) K) + O(T1 M)
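The determination step can be sketched as a plain argmax over the surviving candidates; in the sketch below, the per-candidate reliability sums are made-up stand-ins for the frozen-node degrees that the decoder would recompute after correcting with each candidate:

```python
def pick_optimal(candidates, degrees_of):
    # target function (30): keep the candidate error indicator vector
    # that maximizes the summed reliability degree of the frozen nodes
    return max(candidates, key=lambda c: sum(degrees_of(c)))

# hypothetical candidates surviving the CRC check, mapped to the
# frozen-node reliability degrees each one would produce
degrees = {(0, 1, 0): [19.0, 6.0], (1, 0, 0): [2.0, 2.5]}
best = pick_optimal(list(degrees), lambda c: degrees[c])
```

Swapping in the sums of (29) or (31) only changes which nodes contribute to `degrees_of`.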
So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summary of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, $O(X_0)$ and $O(X_1)$ are the space and time complexity of Algorithm 1, respectively, and $T_0$ is the number of error indicator vectors output by Algorithm 1, that is, $T_0 = |C|$. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

In Table 2, $M$ is the number of the frozen nodes and $T_1$ is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as $O(X_0) = O(1)$ and $O(X_1) = 2O(M) + O(MN) + O(T_1(M - K)K) + O(T_1 M)$. Furthermore, the space and time complexity of the proposed decoding algorithm are $O((T_0 + 1)N\log_2 N)$ and $O(2N) + O((2T_0 + 1)N\log_2 N) + O((T_1 + N + 2)M) + O(T_1 K(N - K))$, respectively. From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on $T_0$ and $T_1$, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: the received vector $y_0^{N-1}$.
Output: the decoded codeword $\hat{u}_0^{N-1}$.

(1) Get the probability messages $q_0^{N-1}(0)$ and $q_0^{N-1}(1)$ from the received vector $y_0^{N-1}$.
(2) Get the probability messages of each frozen node in $V_F$.
(3) Get the set of error indicator vectors $C$ with Algorithm 1.
(4) for each $c_k \in C$ do
(5)   Correct the errors indicated by $c_k$ with (26).
(6)   Recalculate the probability messages of all the nodes of the decoder.
(7) end for
(8) Get the optimal error indicator vector for the decoding.
(9) Get the decoded codeword $\hat{u}_0^{N-1}$ by hard decision.
(10) return $\hat{u}_0^{N-1}$.

Algorithm 2: Decoding algorithm based on error checking and correcting.
5. Simulation Results
In this section, Monte Carlo simulation results are provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is $N = 2^3 = 8$, the code rate $R$ is 0.5, and the indices of the information bits are the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, three optimization targets with a 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly in the low signal to noise ratio (SNR, $E_b/N_0$) region, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is $10^{-3}$, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is $10^{-4}$, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of $\delta$, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different $\delta$ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of $\delta$ for $E_b/N_0 = 1$ dB, 3 dB, 5 dB, and 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.
5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$
Figure 4: Performance comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). $\delta$ in Theorem 11 takes the value of 4.
and $T_1$ defined in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with the increase of the SNR, the average numbers of the parameters $T_0$ and $T_1$ decrease sharply. In particular, we can find that, in the high-SNR region, both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N\log_2 N)$, and the time complexity approaches $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high-
Figure 5: Performance of the proposed decoding algorithm with different $\delta$.
Figure 6: Average number of parameters $T_0$ and $T_1$ with $\delta = 4$.
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
6. Conclusion
In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
Figure 7: Space and time complexity comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and Algorithm 1 ($\delta = 4$).
performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, through studying the problem of solving these equations, we introduced a method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with space and time complexity approaching $O(N\log_2 N)$ and $O(NM)$ ($M$ is the number of frozen nodes) in the high signal to noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in future work we will extend the idea of error correcting in this paper to long code lengths in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of $\mathbf{F}_2$ through the linear transformation of matrices; that is, $\mathbf{F}_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. Furthermore, we have

$$\left(\mathbf{F}_2^{\otimes 2}\right)^{-1} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{F}_2^{-1} & \mathbf{0}_2 \\ -\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} & \mathbf{F}_2^{-1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix} = \mathbf{F}_2^{\otimes 2}. \quad (A.1)$$

By mathematical induction, we will have

$$\left(\mathbf{F}_2^{\otimes n}\right)^{-1} = \mathbf{F}_2^{\otimes n}. \quad (A.2)$$

The inverse of $\mathbf{G}_N$ can be expressed as

$$\mathbf{G}_N^{-1} = \left(\mathbf{B}_N \mathbf{F}_2^{\otimes n}\right)^{-1} = \left(\mathbf{F}_2^{\otimes n}\right)^{-1}\mathbf{B}_N^{-1} = \mathbf{F}_2^{\otimes n}\mathbf{B}_N^{-1}. \quad (A.3)$$

Since $\mathbf{B}_N$ is a bit-reversal permutation matrix, by elementary matrix transformations there is $\mathbf{B}_N^{-1} = \mathbf{B}_N$. Hence, we have

$$\mathbf{G}_N^{-1} = \mathbf{F}_2^{\otimes n}\mathbf{B}_N. \quad (A.4)$$

It is noticed from Proposition 16 of [1] that $\mathbf{F}_2^{\otimes n}\mathbf{B}_N = \mathbf{B}_N\mathbf{F}_2^{\otimes n}$; therefore, there is

$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \quad (A.5)$$
B. Proof of Theorem 4

We assume the number of leaf nodes of the frozen node $v(i,j)$ is $Q$; that is, $Q = |V^L_{v(i,j)}|$. If $Q = 2$, based on (25), there is

$$\begin{aligned} p_{v(i,j)}(0) &= p_{v_0}(0)\,p_{v_1}(0) + p_{v_0}(1)\,p_{v_1}(1), \\ p_{v(i,j)}(1) &= p_{v_0}(0)\,p_{v_1}(1) + p_{v_0}(1)\,p_{v_1}(0), \end{aligned} \quad (B.1)$$

where $v_0, v_1 \in V^L_{v(i,j)}$. Based on the above equations, we have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \quad (B.2)$$

Therefore, by mathematical induction, when $Q > 2$ we will have

$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1}\left(p_{v_k}(0) - p_{v_k}(1)\right), \quad (B.3)$$

where $v_k \in V^L_{v(i,j)}$.

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in $V^L_{v(i,j)}$ are 0. That is to say, when a node $v_k \in V^L_{v(i,j)}$ is correct, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LO}_{v(i,j)}|$ an odd number, such that

$$\forall v_k \in V^{LO}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.4)$$

While, if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LE}_{v(i,j)}|$ an even number, such that

$$\forall v_k \in V^{LE}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.5)$$

So the proof of Theorem 4 is completed.
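Identity (B.3) can be spot-checked numerically by folding random leaf messages through the combination rule (25):

```python
import random

def combine(a, b):
    # the XOR-style combination (25)
    return (a[0] * b[0] + a[1] * b[1], a[0] * b[1] + a[1] * b[0])

random.seed(7)
leaves = [(p, 1.0 - p) for p in (random.random() for _ in range(8))]

msg = leaves[0]
for leaf in leaves[1:]:
    msg = combine(msg, leaf)      # fold all Q = 8 leaves

lhs = msg[0] - msg[1]
rhs = 1.0
for p0, p1 in leaves:
    rhs *= p0 - p1                # product of per-leaf differences

assert abs(lhs - rhs) < 1e-12    # (B.3): the difference factorizes
```

Since the fold is associative for this rule, the same identity holds for any shape of the decoding tree over the leaves.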
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of $\mathbf{G}_N^{-1}$, denoted as $g_i$, $0 \le i \le N - 1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1, 1, 1, 1, 1, 1, 1, 1)^T$. Hence, based on the proof of Theorem 1, we have

$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \ge N - K, \qquad \operatorname{rank}\left(\left[\mathbf{E}_{M,N} \mid \left(\gamma_0^{M-1}\right)^T\right]\right) \ge N - K. \quad (C.1)$$

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, an error-checking equation generated by a frozen node in an intermediate column can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have

$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \le N - K, \qquad \operatorname{rank}\left(\left[\mathbf{E}_{M,N} \mid \left(\gamma_0^{M-1}\right)^T\right]\right) \le N - K. \quad (C.2)$$

Therefore, the proof of (17) is completed.
D Proof of Theorem 11
To proveTheorem 11 we assume that the real values of all theinput nodes are 0 without generality Given the conditionsof transition probability of the channel and constraint of thesignal power it can be easily proved that there exists a positiveconstant 120573
0gt 1 such that
forallV (119899 119896) isin V119868997904rArr
1
1205730
le119901V(119899119896) (0)
119901V(119899119896) (1)le 120573
0 (D1)
where V(119899 119896) is an input node and V119868is the input nodes set of
the decoder That is to say for a frozen node V(119894 119895) with a leafnodes set V119871
V(119894119895) we have
forallV119896isin V
119871
V(119894119895) 997904rArr1
1205730
le
119901V119896 (0)
119901V119896 (1)le 120573
0 (D2)
The Scientific World Journal 13
Based on (25) and the decoding tree of V(119894 119895) we have theprobability messages of V(119894 119895) as
119901V(119894119895) (0)
=
1198762minus1
sum
119898=0
sum
forall1198960 1198962119898minus1sube0119876minus1
2119898minus1
prod
119897=0
119901V119896119897(1)
119876minus2119898minus1
prod
119903=00le119896119903le119876minus1
119896119903notin1198960 1198962119898minus1
119901V119896119903(0)
119901V(119894119895) (1)
=
1198762minus1
sum
119898=0
sum
forall1198960 1198962119898sube0119876minus1
2119898
prod
119897=0
119901V119896119897(1)
119876minus2119898minus1
prod
119903=00le119896119903le119876minus1
119896119903notin1198960 1198962119898
119901V119896119903(0)
(D3)
where V119896119897 V
119896119903isin V119871
V(119894119895) Hence we further have
119901V(119894119895) (0)
119901V(119894119895) (1)
=
1 + sum1198762minus1
119898=1sumforall1198960 1198962119898minus1
sube0119876minus1
prod2119898minus1
119897=0(119901V119896119897
(0) 119901V119896l(1))
sum1198762minus1
119898=0sumforall1198960 1198962119898
sube0119876minus1
prod2119898
119897=0(119901V119896119897
(0) 119901V119896119897(1))
(D4)
With definition of variables 1205930
= 119901V0(0)119901V0(1)1205931
= 119901V1(0)119901V1(1) 120593119876minus1
= 119901V119876minus1(0)119901V119876minus1(1) 11205730
le
1205930 120593
1 120593
119876minus1le 120573
0 the above equation will be written as
119901V(119894119895) (0)
119901V(119894119895) (1)
= 119891 (1205930 120593
1 120593
119876minus1)
= (1 + sdot sdot sdot + 120593119876minus2
120593119876minus1
+ 1205930120593112059321205933
+ sdot sdot sdot + 120593119876minus4
120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot )
times (1205930+ sdot sdot sdot + 120593
119876minus1+ 120593
012059311205932
+ sdot sdot sdot + 120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot )minus1
(D5)
To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions

$$h(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$
$$g(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \quad \text{(D6)}$$
Then the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ with respect to $\varphi_k$ will be

$$\frac{\partial f}{\partial \varphi_k} = \frac{(\partial h/\partial\varphi_k)\,g - (\partial g/\partial\varphi_k)\,h}{g^2} = \frac{g\,g|_{\varphi_k=0} - h\,h|_{\varphi_k=0}}{g^2} = \frac{g^2|_{\varphi_k=0} - h^2|_{\varphi_k=0}}{g^2}, \quad \text{(D7)}$$
where $g|_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h|_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$. By solving the equations $\partial f/\partial\varphi_0 = 0$, $\partial f/\partial\varphi_1 = 0$, ..., $\partial f/\partial\varphi_{Q-1} = 0$, we get the extreme point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ at $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, ..., $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, ..., $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $V^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
14 The Scientific World Journal
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
Hence, the above problem can be transformed into how to obtain the error indicator of each input node. Motivated by this observation, we introduce another corollary of Theorem 4.
Corollary 6. For any frozen node $v(i,j)$ with a leaf node set $V^L_{v(i,j)}$, there is

$$\left(\sum_{k=0}^{M-1} c_{i_k}\right) \bmod 2 = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1) \\ 0, & \text{otherwise,} \end{cases} \quad \text{(11)}$$

where $M = |V^L_{v(i,j)}|$, $v(n, i_k) \in V^L_{v(i,j)}$, and $N = 2^n$ is the code length.
Furthermore, in the field $GF(2)$, (11) can be written as

$$\sum_{k=0}^{M-1} \oplus\, c_{i_k} = \begin{cases} 1, & p_{v(i,j)}(0) \le p_{v(i,j)}(1) \\ 0, & \text{otherwise.} \end{cases} \quad \text{(12)}$$
Proof. The proof of Corollary 6 is based on Lemma 3 and Theorem 4; the detailed derivation process is omitted here.
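As a minimal sanity check of this equivalence (our own illustration, not part of the paper), the mod-2 sum in (11) and the XOR accumulation in (12) agree for any indicator bits:

```python
from functools import reduce
from operator import xor
import random

random.seed(0)
for _ in range(100):
    c = [random.randint(0, 1) for _ in range(8)]
    # (11) computes a mod-2 sum; (12) folds the same bits with XOR over GF(2)
    assert sum(c) % 2 == reduce(xor, c)
```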
Corollary 6 shows that the problem of obtaining the error indicator can be transformed into finding the solutions of (12) under the condition of (11). To introduce this more specifically, we take an example based on the decoder in Figure 2(a).
Example 7. We assume that the frozen nodes $v(0,0)$, $v(1,0)$, $v(0,2)$, and $v(0,4)$ do not satisfy the reliability condition; hence, based on Theorem 4 and Corollary 6, there are the equations

$$\sum_{i=0}^{7} \oplus\, c_i = 1, \qquad c_0 \oplus c_1 \oplus c_2 \oplus c_3 = 1, \qquad c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0,$$
$$c_4 \oplus c_5 \oplus c_6 \oplus c_7 = 0, \qquad c_2 \oplus c_3 \oplus c_6 \oplus c_7 = 1, \qquad c_1 \oplus c_3 \oplus c_5 \oplus c_7 = 1. \quad \text{(13)}$$
Furthermore, (13) can be written in matrix form as

$$\mathbf{E}_{6\times 8}\left(c_0^7\right)^T = \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \\ c_6 \\ c_7 \end{bmatrix} = \left(\gamma_0^5\right)^T, \quad \text{(14)}$$

where $\gamma_0^5 = (1, 1, 0, 0, 1, 1)$ and $\mathbf{E}_{6\times 8}$ is the coefficient matrix with size $6 \times 8$. Therefore, by solving (14), we will get the error indicator vector of the input nodes in Figure 2. In order to further generalize the above example, we provide a lemma.
Lemma 8. For a polar code with code length $N$, code rate $R = K/N$, and frozen node set $V_F$, we have the error-checking equations

$$\mathbf{E}_{M\times N}\left(c_0^{N-1}\right)^T = \left(\gamma_0^{M-1}\right)^T, \quad \text{(15)}$$

where $c_0^{N-1}$ is the error indicator vector and $M = |V_F|$, $M \ge N-K$. $\mathbf{E}_{M\times N}$ is called the error-checking matrix, the elements of which are determined by the code construction method, and $\gamma_0^{M-1}$ is called the error-checking vector, the elements of which depend on the probability messages of the frozen nodes in $V_F$; that is, $\forall v_i \in V_F$, $0 \le i \le M-1$, there is a unique $\gamma_i \in \gamma_0^{M-1}$ such that

$$\gamma_i = \begin{cases} 1, & p_{v_i}(0) \le p_{v_i}(1) \\ 0, & p_{v_i}(0) > p_{v_i}(1). \end{cases} \quad \text{(16)}$$
Proof. The proof of Lemma 8 is based on (10)–(14), Lemma 3, and Theorem 4, and is omitted here.
3.3. Solutions of Error-Checking Equations. Lemma 8 provides a general method to determine the positions of errors in the input nodes by the error-checking equations. It still remains to investigate the existence of solutions of the error-checking equations.
Theorem 9. For a polar code with code length $N$ and code rate $R = K/N$, there is

$$\operatorname{rank}\left(\mathbf{E}_{M\times N}\right) = \operatorname{rank}\left(\left[\mathbf{E}_{M\times N}\ \left(\gamma_0^{M-1}\right)^T\right]\right) = N - K, \quad \text{(17)}$$

where $[\mathbf{E}_{M\times N}\ (\gamma_0^{M-1})^T]$ is the augmented matrix of (15) and $\operatorname{rank}(\mathbf{X})$ is the rank of matrix $\mathbf{X}$.
Proof. For the proof of Theorem 9, see Appendix C for details.
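Theorem 9 can be checked directly on the example system (14). The sketch below is our own (the helper `gf2_rank` is not from the paper): it row-reduces $\mathbf{E}_{6\times 8}$ and its augmented matrix over $GF(2)$ and confirms that both ranks equal $N - K = 4$.

```python
def gf2_rank(rows):
    """Rank over GF(2); each row is a list of 0/1 bits."""
    rows = [r[:] for r in rows]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

E = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1]]
gamma = [1, 1, 0, 0, 1, 1]
aug = [row + [g] for row, g in zip(E, gamma)]
assert gf2_rank(E) == gf2_rank(aug) == 4   # N - K with N = 8, K = 4
```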
It is noticed from Theorem 9 that there must exist multiple solutions of the error-checking equations; therefore, we further investigate the general expression of the solutions, as shown in the following corollary.
Corollary 10. For a polar code with code length $N$ and code rate $R = K/N$, there exists a transformation matrix $\mathbf{P}_{N\times M}$ in the field $GF(2)$ such that

$$\left[\mathbf{E}_{M\times N}\ \left(\gamma_0^{M-1}\right)^T\right] \xrightarrow{\ \mathbf{P}_{N\times M}\ } \begin{bmatrix} \mathbf{I}_H & \mathbf{A}_{H\times K} & \left(\gamma_0^{H-1}\right)^T \\ \mathbf{0}_{(M-H)\times H} & \mathbf{0}_{(M-H)\times K} & \mathbf{0}_{(M-H)\times 1} \end{bmatrix}, \quad \text{(18)}$$

where $H = N-K$, $\mathbf{A}_{H\times K}$ is the submatrix of the transformation result of $\mathbf{E}_{M\times N}$, and $\gamma_0^{H-1}$ is the subvector of the transformation result of $\gamma_0^{M-1}$. Based on (18), the general solutions of the error-checking equations can be obtained by

$$\left(\tilde{c}_0^{N-1}\right)^T = \begin{bmatrix} \left(\tilde{c}_K^{N-1}\right)^T \\ \left(\tilde{c}_0^{K-1}\right)^T \end{bmatrix} = \begin{bmatrix} \mathbf{A}_{H\times K}\left(\tilde{c}_0^{K-1}\right)^T \oplus \left(\gamma_0^{H-1}\right)^T \\ \left(\tilde{c}_0^{K-1}\right)^T \end{bmatrix}, \quad \text{(19)}$$

$$\left(c_0^{N-1}\right)^T = \mathbf{B}_N\left(\tilde{c}_0^{N-1}\right)^T, \quad \text{(20)}$$

where $\tilde{c}_i \in \{0, 1\}$ and $\mathbf{B}_N$ is an element-permutation matrix, which is determined by the matrix transformation of (18).
Proof. The proof of Corollary 10 is based on Theorem 9 and the theory of solving linear equations, and is omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors $\gamma_0^{H-1}$ and $\tilde{c}_0^{K-1}$, where $\gamma_0^{H-1}$ is determined by the transformation matrix $\mathbf{P}_{N\times M}$ and the error-checking vector $\gamma_0^{M-1}$, and $\tilde{c}_0^{K-1}$ is a free vector. In general, based on $\tilde{c}_0^{K-1}$, the number of solutions of the error-checking equations may be up to $2^K$, which is prohibitive for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we introduce another theorem.
Theorem 11. For a polar code with code length $N = 2^n$ and frozen node set $V_F$, there exists a positive real number $\delta$ such that, $\forall v(i,j) \in V_F$, if $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, there is

$$\forall v(n, j_k) \in V^L_{v(i,j)} \Longrightarrow c_{j_k} = 0, \quad \text{(21)}$$

where $V^L_{v(i,j)}$ is the leaf node set of $v(i,j)$, $0 \le j_k \le N-1$, $0 \le k \le |V^L_{v(i,j)}| - 1$, and the value of $\delta$ is related to the transition probability of the channel and the signal power.
Proof. For the proof of Theorem 11, see Appendix D for details.
Theorem 11 shows that, with the probability messages of the frozen nodes and $\delta$, we can quickly determine the values of some elements of $\tilde{c}_0^{K-1}$, by which the degree of freedom of $\tilde{c}_0^{K-1}$ is further reduced. Correspondingly, the number of solutions of the error-checking equations is also reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have $\gamma_0^3 = (1, 1, 1, 0)$,

$$\mathbf{A}_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}, \quad \text{(22)}$$

$$\mathbf{B}_8 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}. \quad \text{(23)}$$

By the element permutation of $\mathbf{B}_8$, we further have $\tilde{c}_0^3 = (c_3, c_5, c_6, c_7)$ and $\tilde{c}_4^7 = (c_0, c_1, c_2, c_4)$. If $p_{v(0,1)}(0)/p_{v(0,1)}(1) \ge \delta$, with the checking of (21) there is $(c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0)$ and $(c_0, c_1, c_2, c_4) = (c_3 \oplus 1, c_3 \oplus 1, c_3 \oplus 1, 0)$, which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution $c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0)$; that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions will depend on the probability messages of the frozen nodes and $\delta$.
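Since $N = 8$ is tiny, the whole example can be reproduced by brute force. The sketch below is our own check, not the authors' implementation: it enumerates all $c \in \{0,1\}^8$, keeps the solutions of (14), applies the constraint of (21) that $c_5 = c_6 = c_7 = 0$ (from which $c_4 = 0$ follows through the equations), and confirms that the exact solution of the example survives.

```python
from itertools import product

E = [[1, 1, 1, 1, 1, 1, 1, 1],
     [1, 1, 1, 1, 0, 0, 0, 0],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 0, 0, 1, 1, 1, 1],
     [0, 0, 1, 1, 0, 0, 1, 1],
     [0, 1, 0, 1, 0, 1, 0, 1]]
gamma = [1, 1, 0, 0, 1, 1]

def satisfies(c):
    """Check E c = gamma over GF(2), i.e. the system (14)."""
    return all(sum(e * x for e, x in zip(row, c)) % 2 == g
               for row, g in zip(E, gamma))

sols = [c for c in product((0, 1), repeat=8) if satisfies(c)]
assert len(sols) == 16                      # 2^K solutions, K = 4
pruned = [c for c in sols if c[5] == c[6] == c[7] == 0]
assert len(pruned) == 2                     # after the delta-check of (21)
assert all(c[4] == 0 for c in pruned)       # c_4 = 0 follows from the equations
assert (0, 0, 0, 1, 0, 0, 0, 0) in pruned   # the exact solution of the example
```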
As a summarization of this section, we give the complete process framework of error checking by solutions of the error-checking equations, which is shown in Algorithm 1.
4. Proposed Decoding Algorithm

In this section, we will introduce the proposed decoding algorithm in detail.
4.1. Probability Messages Calculating. Probability message calculating is an important aspect of a decoding algorithm. Our proposed algorithm differs from the SC and BP algorithms in that the probability messages are calculated based on the decoding tree representation of the nodes in the decoder: for an intermediate node $v(i,j)$ with only one son node $v(i+1, j_o)$, $0 \le j_o \le N-1$, there is

$$p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0), \qquad p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1). \quad \text{(24)}$$
Input: the frozen node set $V_F$; the probability messages of the nodes in $V_F$; the matrices $\mathbf{P}_{N\times M}$, $\mathbf{A}_{H\times K}$, and $\mathbf{B}_N$.
Output: the error indicator vector set $C$.
(1) Get $\gamma_0^{M-1}$ with the probability messages of $V_F$.
(2) Get $\gamma_0^{H-1}$ with $\gamma_0^{M-1}$ and $\mathbf{P}_{N\times M}$.
(3) for each $v(i,j) \in V_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) > \delta$ then
(5)     Set the error indicator of each leaf node of $v(i,j)$ to 0.
(6)   end if
(7) end for
(8) for each valid value of $\tilde{c}_0^{K-1}$ do
(9)   Get $\tilde{c}_K^{N-1}$ with $\mathbf{A}_{H\times K}$ and $\gamma_0^{N-K-1}$.
(10)  if (11) is satisfied then
(11)    Get $c_0^{N-1} \in C$ with $\mathbf{B}_N$.
(12)  else
(13)    Drop the solution and continue.
(14)  end if
(15) end for
(16) return $C$

Algorithm 1: Error checking for decoding.
While if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N-1$, we will have

$$p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(1),$$
$$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0)\, p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\, p_{v(i+1,j_r)}(0). \quad \text{(25)}$$

Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
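A minimal sketch of this calculation (ours; the function names are illustrative) represents each message as a pair $(p(0), p(1))$, passes a single son through unchanged as in (24), and combines two sons as in (25). Folding more than two sons pairwise is our own extension for larger leaf sets.

```python
def combine(left, right):
    """Combine two son messages as in (25)."""
    p0 = left[0] * right[0] + left[1] * right[1]
    p1 = left[0] * right[1] + left[1] * right[0]
    return (p0, p1)

def node_prob(sons):
    """Message of a node from its son list: pass-through for one son (24),
    pairwise combination otherwise (a fold over (25))."""
    msg = sons[0]
    for s in sons[1:]:
        msg = combine(msg, s)
    return msg

# two certain-0 sons give a certain parity of 0
assert node_prob([(1.0, 0.0), (1.0, 0.0)]) == (1.0, 0.0)
# a certain-0 and a certain-1 son give a certain parity of 1
assert node_prob([(1.0, 0.0), (0.0, 1.0)]) == (0.0, 1.0)
```

Since each node depends only on its own sons, all nodes of one level can be evaluated independently, which is the parallelism the text refers to.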
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node are given by

$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1) \\ 1 - \lambda_0, & \text{otherwise,} \end{cases} \quad \text{(26)}$$

and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$ and $q_i'(1)$ are the new probability messages of the error node $v(n,i)$ and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we will get the new probability vectors of the input nodes as

$$q_0^{N-1}(0)' = \left(q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'\right),$$
$$q_0^{N-1}(1)' = \left(q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'\right), \quad \text{(27)}$$
where $q_i(0)' = q_i'(0)$ and $q_i(1)' = q_i'(1)$ if the input node $v(n,i)$ is in error; otherwise, $q_i(0)' = q_i(0)$ and $q_i(1)' = q_i(1)$. Then the probability messages of all the nodes in the decoder will be recalculated.
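The correction rule (26)-(27) can be sketched as follows (our own illustration; `LAMBDA0` and the function name are ours): the currently larger probability of each flagged node is forced down to $\lambda_0$, flipping its hard decision.

```python
LAMBDA0 = 1e-3  # small nonnegative constant, 0 <= lambda_0 << 1

def correct(q, error_flags):
    """Apply (26)-(27): overwrite the messages of flagged input nodes."""
    out = []
    for (q0, q1), err in zip(q, error_flags):
        if not err:
            out.append((q0, q1))                  # unchanged, as in (27)
        elif q0 > q1:
            out.append((LAMBDA0, 1.0 - LAMBDA0))  # was deciding 0: flip to 1
        else:
            out.append((1.0 - LAMBDA0, LAMBDA0))  # was deciding 1: flip to 0
    return out

q = [(0.9, 0.1), (0.2, 0.8)]
new_q = correct(q, [1, 0])
assert new_q[0] == (LAMBDA0, 1.0 - LAMBDA0)  # flagged node flipped
assert new_q[1] == (0.2, 0.8)                # unflagged node untouched
```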
In fact, when there is only one error indicator vector output from Algorithm 1, that is, $|C| = 1$, after the error correcting and the recalculation of the probability messages, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes. While if $|C| > 1$, in order to minimize the decoding error probability, further work is needed on how to get the optimal error indicator vector.
4.3. Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by

$$\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1) \\[2mm] \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise.} \end{cases} \quad \text{(28)}$$
The reliability degree indicates the reliability of the node's decision value: the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node $v(0,0)$ in Figure 2 are $p_{v(0,0)}(0) = 0.95$ and $p_{v(0,0)}(1) = 0.05$, there is $\zeta_{v(0,0)} = 0.95/0.05 = 19$; that is, the reliability degree of the decision $v(0,0) = 0$ is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of $|C| > 1$, one node in the decoder may correspondingly have multiple reliability degrees. We denote the $k$th reliability degree of node $v(i,j)$ as $\zeta_{v(i,j)}^{k}$, the value of which depends on the $k$th element of $C$, that is, $c_k$. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.
The first method is based on the fact that, when there is no noise in the channel, the reliability degree of each node approaches infinity, that is, $\zeta_{v(i,j)} \to \infty$. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} \zeta_{v(i,j)}^{k}, \quad \text{(29)}$$

where $\hat{c}_k$ is the optimal error indicator vector.
To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degrees of all the frozen nodes; hence, the target function can be written as

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{v(i,j) \in V_F} \zeta_{v(i,j)}^{k}. \quad \text{(30)}$$
On the other hand, we take the maximization of the output nodes' reliability degrees as the optimization target, the function of which is given by

$$\hat{c}_k = \arg\max_{c_k \in C} \sum_{j=0}^{N-1} \zeta_{v(0,j)}^{k}. \quad \text{(31)}$$
Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with the CRC aided, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.

(1) Initialization: we first get $L$ candidates of the optimal error indicator vector, $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$, by the formula (29), (30), or (31).

(2) CRC-checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$ by CRC-checking. If there is only one valid candidate after the CRC-checking, the optimal error indicator vector is output directly; otherwise, the remaining candidates are further processed in step (3).
Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity | Time complexity
(1) | $O(1)$ | $O(N)$
(2) | $O(N\log_2 N)$ | $O(N\log_2 N)$
(3) | $O(X_0)$ | $O(X_1)$
(4)–(7) | $O(T_0 N\log_2 N)$ | $O(T_0 N\log_2 N)$
(8) | $O(1)$ | $O(T_0 N\log_2 N)$ or $O(T_0 N)$
(9) | $O(1)$ | $O(N)$
Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1) | $O(1)$ | $O(M)$
(2) | $O(1)$ | $O(M)$
(3)–(7) | $O(1)$ | $O(MN)$
(8)–(15) | $O(1)$ | $O(T_1(M-K)K) + O(T_1 M)$
(3) Determination: if there are multiple candidates with a correct CRC-checking, we further choose the optimal error indicator vector from the remaining candidates of step (2) with the formula (29), (30), or (31).
So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summarization of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
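The three steps can be sketched generically as follows (our own illustration; `score` stands for any of the target sums (29)-(31) and `crc_ok` for the CRC check). Falling back to the full candidate list when the CRC rejects every candidate is one possible policy of ours, not necessarily the authors'.

```python
def pick_optimal(candidates, score, crc_ok):
    """CRC-aided selection: filter by CRC first, then maximize the score."""
    valid = [c for c in candidates if crc_ok(c)]
    pool = valid if valid else candidates   # fallback policy (assumption)
    return max(pool, key=score)

# toy run: the CRC rejects the highest-scoring candidate "b",
# so the best remaining candidate "c" is chosen
scores = {"a": 5.0, "b": 9.0, "c": 7.0}
assert pick_optimal(["a", "b", "c"], scores.get, lambda c: c != "b") == "c"
```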
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, $O(X_0)$ and $O(X_1)$ are the space and time complexity of Algorithm 1, respectively, and $T_0$ is the number of error indicator vectors output by Algorithm 1, that is, $T_0 = |C|$. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

In Table 2, $M$ is the number of the frozen nodes and $T_1$ is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as $O(X_0) = O(1)$ and $O(X_1) = 2O(M) + O(MN) + O(T_1(M-K)K) + O(T_1 M)$. Furthermore, we can get the space and time complexity of the proposed decoding algorithm as $O((T_0+1)N\log_2 N)$ and $O(2N) + O((2T_0+1)N\log_2 N) + O((T_1+N+2)M) + O(T_1 K(N-K))$. From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on $T_0$ and $T_1$, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: the received vector $y_0^{N-1}$.
Output: the decoded codeword $\hat{u}_0^{N-1}$.
(1) Get the probability messages $q_0^{N-1}(0)$ and $q_0^{N-1}(1)$ with the received vector $y_0^{N-1}$.
(2) Get the probability messages of each frozen node in $V_F$.
(3) Get the error indicator vector set $C$ with Algorithm 1.
(4) for each $c_k \in C$ do
(5)   Correct the errors indicated by $c_k$ with (26).
(6)   Recalculate the probability messages of all the nodes of the decoder.
(7) end for
(8) Get the optimal error indicator vector for the decoding.
(9) Get the decoded codeword $\hat{u}_0^{N-1}$ by hard decision.
(10) return $\hat{u}_0^{N-1}$

Algorithm 2: Decoding algorithm based on error checking and correcting.
5. Simulation Results

In this section, Monte Carlo simulation results are provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is $N = 2^3 = 8$, the code rate $R$ is 0.5, and the indices of the information bits are the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, three optimization targets with 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly, in the low region of the signal-to-noise ratio (SNR), that is, $E_b/N_0$, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is $10^{-3}$, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is $10^{-4}$, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of $\delta$, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different $\delta$ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of $\delta$ for $E_b/N_0 = 1$ dB, $E_b/N_0 = 3$ dB, $E_b/N_0 = 5$ dB, and $E_b/N_0 = 7$ dB are 2.5, 3.0, 5.0, and 5.5, respectively.
Figure 4: Performance comparison of SC, SCL ($L = 4$), BP (60 iterations), and the proposed decoding algorithm. Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31); $\delta$ in Theorem 11 takes the value of 4.

5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$
and $T_1$ indicated in Section 4.5 are counted and shown in Figure 6.

Figure 5: Performance of the proposed decoding algorithm with different $\delta$.

Figure 6: Average numbers of the parameters $T_0$ and $T_1$ with $\delta = 4$.

It is noticed from Figure 6 that, with the increasing of the SNR, the average numbers of the parameters $T_0$ and $T_1$ decrease sharply. In particular, we can find that, in the high SNR region, both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N\log_2 N)$, and the time complexity approaches $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
Figure 7: Space and time complexity comparison of SC, SCL ($L = 4$), BP (60 iterations), and Algorithm 1 ($\delta = 4$).

6. Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
performance of the short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and through delving into the problem of solving these equations, we introduced the method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with the space and time complexity approaching $O(N\log_2 N)$ and $O(NM)$ ($M$ is the number of frozen nodes) in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.
It is worth mentioning that we only investigated error correcting for short polar codes, while for long code lengths the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
12 The Scientific World Journal
Appendix
A Proof of Theorem 1
We can get the inverse of F_2 through the linear transformation of the matrix, that is,

F_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}.

Furthermore, we have

(F_2^{\otimes 2})^{-1} = \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix}^{-1} = \begin{bmatrix} F_2^{-1} & 0_2 \\ -F_2^{-1} F_2 F_2^{-1} & F_2^{-1} \end{bmatrix} = \begin{bmatrix} F_2 & 0_2 \\ F_2 & F_2 \end{bmatrix} = F_2^{\otimes 2}.  (A1)

Based on mathematical induction, we will have

(F_2^{\otimes n})^{-1} = F_2^{\otimes n}.  (A2)

The inverse of G_N can be expressed as

G_N^{-1} = (B_N F_2^{\otimes n})^{-1} = (F_2^{\otimes n})^{-1} B_N^{-1} = F_2^{\otimes n} B_N^{-1}.  (A3)

Since B_N is a bit-reversal permutation matrix, by elementary transformation of the matrix there is B_N^{-1} = B_N. Hence, we have

G_N^{-1} = F_2^{\otimes n} B_N.  (A4)

It is noticed from Proposition 16 of [1] that F_2^{\otimes n} B_N = B_N F_2^{\otimes n}; therefore, there is

G_N^{-1} = G_N.  (A5)
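As a quick numerical check of (A5), the following sketch (ours, not part of the paper's algorithms; all helper names are ours) builds G_8 = B_8 F_2^{⊗3} over GF(2) with plain Python lists and verifies that it is its own inverse.

```python
# Numerical check of (A5) for N = 8: G_8 * G_8 = I over GF(2).

def kron(a, b):
    """Kronecker product of two 0/1 matrices."""
    return [[x * y for x in ra for y in rb] for ra in a for rb in b]

def matmul_gf2(a, b):
    """Matrix product over GF(2)."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b))) % 2
             for j in range(len(b[0]))] for i in range(len(a))]

n = 3
N = 2 ** n
F2 = [[1, 0], [1, 1]]

F = F2
for _ in range(n - 1):          # F = F_2^{kron n}
    F = kron(F2, F)

def bitrev(i):
    """Reverse the n-bit representation of i (bit-reversal permutation)."""
    return int(format(i, '0%db' % n)[::-1], 2)

B = [[1 if j == bitrev(i) else 0 for j in range(N)] for i in range(N)]
G = matmul_gf2(B, F)            # G_N = B_N * F_2^{kron n}

identity = [[1 if i == j else 0 for j in range(N)] for i in range(N)]
assert matmul_gf2(G, G) == identity   # hence G_N^{-1} = G_N
```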
B Proof of Theorem 4
We assume that the number of leaf nodes of the frozen node v(i, j) is Q, that is, Q = |V^L_{v(i,j)}|. If Q = 2, based on (25), there is

p_{v(i,j)}(0) = p_{v_0}(0) p_{v_1}(0) + p_{v_0}(1) p_{v_1}(1)
p_{v(i,j)}(1) = p_{v_0}(0) p_{v_1}(1) + p_{v_0}(1) p_{v_1}(0)  (B1)

where v_0, v_1 ∈ V^L_{v(i,j)}. Based on the above equations, we have

p_{v(i,j)}(0) - p_{v(i,j)}(1) = (p_{v_0}(0) - p_{v_0}(1)) (p_{v_1}(0) - p_{v_1}(1)).  (B2)

Therefore, by mathematical induction, when Q > 2 we will have

p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} (p_{v_k}(0) - p_{v_k}(1))  (B3)

where v_k ∈ V^L_{v(i,j)}.

To prove Theorem 4, we assume that the values of all the nodes in V^L_{v(i,j)} are set to 0, without loss of generality. That is to say, when the node v_k ∈ V^L_{v(i,j)} is right, there is p_{v_k}(0) > p_{v_k}(1). Hence, based on the above equation, when the probability messages of v(i, j) do not satisfy the reliability condition, that is, p_{v(i,j)}(0) - p_{v(i,j)}(1) ≤ 0, there must exist a subset V^{LO}_{v(i,j)} ⊆ V^L_{v(i,j)}, with |V^{LO}_{v(i,j)}| an odd number, such that

∀v_k ∈ V^{LO}_{v(i,j)} → p_{v_k}(0) ≤ p_{v_k}(1).  (B4)

While if p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0, there must exist a subset V^{LE}_{v(i,j)} ⊆ V^L_{v(i,j)}, with |V^{LE}_{v(i,j)}| an even number, such that

∀v_k ∈ V^{LE}_{v(i,j)} → p_{v_k}(0) ≤ p_{v_k}(1).  (B5)

So the proof of Theorem 4 is completed.
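The identity (B3) can be checked by brute force for a small Q; the sketch below (ours, with arbitrarily chosen leaf probabilities) enumerates all leaf configurations of a Q = 4 parity constraint, where the frozen node takes value 0 iff the XOR of its leaves is 0.

```python
# Brute-force check of (B3) for Q = 4 leaves.
from itertools import product

p0 = [0.9, 0.8, 0.6, 0.7]            # p_{v_k}(0) for each leaf
p1 = [1 - x for x in p0]             # p_{v_k}(1)

node0 = node1 = 0.0
for bits in product([0, 1], repeat=len(p0)):
    prob = 1.0
    for k, b in enumerate(bits):
        prob *= p1[k] if b else p0[k]
    if sum(bits) % 2 == 0:           # even number of ones -> node value 0
        node0 += prob
    else:
        node1 += prob

lhs = node0 - node1
rhs = 1.0
for k in range(len(p0)):
    rhs *= p0[k] - p1[k]             # right-hand side of (B3)
assert abs(lhs - rhs) < 1e-9
```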
C Proof of Theorem 9
It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of G_N^{-1}, denoted as g_i, 0 ≤ i ≤ N - 1. For example, the coefficient vector of the error-checking equation generated by v(0, 0) is equal to g_1 = (1, 1, 1, 1, 1, 1, 1, 1)^T. Hence, based on the proof of Theorem 1, we have

rank(E_{M,N}) ≥ N - K
rank([E_{M,N} | (γ_0^{M-1})^T]) ≥ N - K.  (C1)

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, an error-checking equation generated by the frozen nodes in an intermediate column can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have

rank(E_{M,N}) ≤ N - K
rank([E_{M,N} | (γ_0^{M-1})^T]) ≤ N - K.  (C2)

Therefore, the proof of (17) is completed.
D Proof of Theorem 11
To prove Theorem 11, we assume that the real values of all the input nodes are 0, without loss of generality. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can easily be proved that there exists a positive constant β_0 > 1 such that

∀v(n, k) ∈ V_I ⟹ 1/β_0 ≤ p_{v(n,k)}(0)/p_{v(n,k)}(1) ≤ β_0  (D1)

where v(n, k) is an input node and V_I is the input node set of the decoder. That is to say, for a frozen node v(i, j) with a leaf node set V^L_{v(i,j)}, we have

∀v_k ∈ V^L_{v(i,j)} ⟹ 1/β_0 ≤ p_{v_k}(0)/p_{v_k}(1) ≤ β_0.  (D2)
Based on (25) and the decoding tree of v(i, j), we have the probability messages of v(i, j) as

p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{k_r \in \{0,\ldots,Q-1\} \setminus \{k_0,\ldots,k_{2m-1}\}} p_{v_{k_r}}(0)

p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{k_r \in \{0,\ldots,Q-1\} \setminus \{k_0,\ldots,k_{2m}\}} p_{v_{k_r}}(0)  (D3)

where v_{k_l}, v_{k_r} ∈ V^L_{v(i,j)}. Hence, we further have

\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} \bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}{\sum_{m=0}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m} \bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}.  (D4)
With the definition of the variables φ_0 = p_{v_0}(0)/p_{v_0}(1), φ_1 = p_{v_1}(0)/p_{v_1}(1), ..., φ_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1), where 1/β_0 ≤ φ_0, φ_1, ..., φ_{Q-1} ≤ β_0, the above equation can be written as

\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f(φ_0, φ_1, \ldots, φ_{Q-1}) = \frac{1 + φ_0 φ_1 + \cdots + φ_{Q-2} φ_{Q-1} + φ_0 φ_1 φ_2 φ_3 + \cdots + φ_{Q-4} φ_{Q-3} φ_{Q-2} φ_{Q-1} + \cdots}{φ_0 + \cdots + φ_{Q-1} + φ_0 φ_1 φ_2 + \cdots + φ_{Q-3} φ_{Q-2} φ_{Q-1} + \cdots}.  (D5)

To take the derivative of f(φ_0, φ_1, ..., φ_{Q-1}), we further define the functions

h(φ_0, φ_1, \ldots, φ_{Q-1}) = 1 + φ_0 φ_1 + \cdots + φ_{Q-2} φ_{Q-1} + φ_0 φ_1 φ_2 φ_3 + \cdots + φ_{Q-4} φ_{Q-3} φ_{Q-2} φ_{Q-1} + \cdots
g(φ_0, φ_1, \ldots, φ_{Q-1}) = φ_0 + \cdots + φ_{Q-1} + φ_0 φ_1 φ_2 + \cdots + φ_{Q-3} φ_{Q-2} φ_{Q-1} + \cdots  (D6)

Then the derivative of f(φ_0, φ_1, ..., φ_{Q-1}) with respect to φ_k will be

\frac{\partial f}{\partial φ_k} = \frac{(\partial h/\partial φ_k) g - (\partial g/\partial φ_k) h}{g^2} = \frac{g_{φ_k=0}\, g - h_{φ_k=0}\, h}{g^2} = \frac{g_{φ_k=0}^2 - h_{φ_k=0}^2}{g^2}  (D7)

where g_{φ_k=0} = g(φ_0, ..., φ_{k-1}, 0, φ_{k+1}, ..., φ_{Q-1}) and h_{φ_k=0} = h(φ_0, ..., φ_{k-1}, 0, φ_{k+1}, ..., φ_{Q-1}). Based on the solution of the equations ∂f/∂φ_0 = 0, ∂f/∂φ_1 = 0, ..., and ∂f/∂φ_{Q-1} = 0, we get the extreme value point of f(φ_0, φ_1, ..., φ_{Q-1}) as φ_0 = φ_1 = ⋯ = φ_{Q-1} = 1. Based on the analysis of the monotonicity of f(φ_0, φ_1, ..., φ_{Q-1}), we can get the maximum value as δ = f(β_0, β_0, ..., β_0) (with Q arguments). What is more, we can also get that, when f(φ_0, φ_1, ..., φ_{Q-1}) ≥ δ, there is φ_0 > 1, φ_1 > 1, ..., and φ_{Q-1} > 1. That is to say, when p_{v(i,j)}(0)/p_{v(i,j)}(1) ≥ δ, we will have p_{v_0}(0) > p_{v_0}(1), p_{v_1}(0) > p_{v_1}(1), ..., and p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1); that is, there is no error in V^L_{v(i,j)}. So the proof of Theorem 11 is completed.
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IRE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
Corollary 10. For a polar code with code length N and code rate R = K/N, there exists a transformation matrix P_{N,M} in the field GF(2) such that

\left[ E_{M,N} \mid (γ_0^{M-1})^T \right] \xrightarrow{P_{N,M}} \begin{bmatrix} I_H & A_{H,K} & (γ_0^{H-1})^T \\ 0_{(M-H) \times H} & 0_{(M-H) \times K} & 0_{(M-H) \times 1} \end{bmatrix}  (18)

where H = N - K, A_{H,K} is the submatrix of the transformation result of E_{M,N}, and γ_0^{H-1} is the subvector of the transformation result of γ_0^{M-1}. Based on (18), the general solutions of the error-checking equations can be obtained by

(c̃_0^{N-1})^T = \begin{bmatrix} (c̃_K^{N-1})^T \\ (c̃_0^{K-1})^T \end{bmatrix} = \begin{bmatrix} A_{H,K} (c̃_0^{K-1})^T \oplus (γ_0^{H-1})^T \\ (c̃_0^{K-1})^T \end{bmatrix}  (19)

(c_0^{N-1})^T = B̃_N (c̃_0^{N-1})^T  (20)

where c̃_i ∈ {0, 1} and B̃_N is an element-permutation matrix which is determined by the matrix transformation of (18).
Proof. The proof of Corollary 10 is based on Theorem 9 and the theory of solving linear equations, and it is omitted here.
It is noticed from (18) and (19) that the solutions of the error-checking equations tightly depend on the two vectors γ_0^{H-1} and c̃_0^{K-1}, where γ_0^{H-1} is determined by the transformation matrix P_{N,M} and the error-checking vector γ_0^{M-1}, and c̃_0^{K-1} is a random vector. In general, based on c̃_0^{K-1}, the number of solutions of the error-checking equations may be up to 2^K, which is a prohibitively large number for decoding. Although the number of solutions can be reduced through the checking of (11), it still needs to be reduced further in order to increase the efficiency of error checking. To achieve this goal, we further introduce a theorem.
Theorem 11. For a polar code with code length N = 2^n and frozen node set V_F, there exists a positive real number δ such that, ∀v(i, j) ∈ V_F, if p_{v(i,j)}(0)/p_{v(i,j)}(1) ≥ δ, there is

∀v(n, j_k) ∈ V^L_{v(i,j)} ⟹ c_{j_k} = 0  (21)

where V^L_{v(i,j)} is the leaf node set of v(i, j), 0 ≤ j_k ≤ N - 1, 0 ≤ k ≤ |V^L_{v(i,j)}| - 1, and the value of δ is related to the transition probability of the channel and the signal power.
Proof. For the proof of Theorem 11, see Appendix D.
Theorem 11 has shown that, with the probability messages of the frozen nodes and δ, we can quickly determine the values of some elements of c̃_0^{K-1}, by which the degree of freedom of c̃_0^{K-1} will be further reduced. Correspondingly, the number of solutions of the error-checking equations will also be reduced.
Based on the above results, we take (14) as an example to show the detailed process of solving the error-checking equations. Through the linear transformation of (18), we have γ_0^3 = (1, 1, 1, 0),

A_4 = \begin{bmatrix} 1 & 1 & 1 & 0 \\ 1 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1 \end{bmatrix}  (22)

B̃_8 = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}  (23)

By the element permutation of B̃_8, we further have c̃_0^3 = (c_3, c_5, c_6, c_7) and c̃_4^7 = (c_0, c_1, c_2, c_4). If p_{v(0,1)}(0)/p_{v(0,1)}(1) ≥ δ, with the checking of (21) there is (c_3, c_5, c_6, c_7) = (c_3, 0, 0, 0) and (c_0, c_1, c_2, c_4) = (c_3 ⊕ 1, c_3 ⊕ 1, c_3 ⊕ 1, 0), which implies that the number of solutions will be 2. Furthermore, with the checking of (11), we obtain the exact solution c_0^7 = (0, 0, 0, 1, 0, 0, 0, 0); that is, the 4th input node is in error.

It is noticed clearly from the above example that, with the checking of (11) and (21), the number of solutions can be greatly reduced, which makes the error checking more efficient. Of course, the final number of solutions will depend on the probability messages of the frozen nodes and δ.
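The worked example above can be reproduced in a few lines; the following sketch (ours, not the full Algorithm 1) fixes (c_5, c_6, c_7) = (0, 0, 0) as dictated by (21), sweeps the remaining free bit c_3, and applies (19) together with the permutation read off from (23).

```python
# Reproduces the worked example: gamma_0^3 = (1, 1, 1, 0) and A_4 from (22).

A4 = [[1, 1, 1, 0],
      [1, 1, 0, 1],
      [1, 0, 1, 1],
      [0, 1, 1, 1]]
gamma = [1, 1, 1, 0]

candidates = []
for c3 in (0, 1):
    free = [c3, 0, 0, 0]                       # (c3, c5, c6, c7)
    dep = [(sum(a * f for a, f in zip(row, free)) + g) % 2
           for row, g in zip(A4, gamma)]       # (c0, c1, c2, c4) by (19)
    c = [0] * 8
    for idx, val in zip((3, 5, 6, 7), free):   # permutation read off (23)
        c[idx] = val
    for idx, val in zip((0, 1, 2, 4), dep):
        c[idx] = val
    candidates.append(c)

# two candidates remain; the checking of (11) keeps the second one
assert candidates == [[1, 1, 1, 0, 0, 0, 0, 0],
                      [0, 0, 0, 1, 0, 0, 0, 0]]
```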
As a summarization of this section, we give the complete process framework of error checking by solutions of the error-checking equations, as shown in Algorithm 1.
4 Proposed Decoding Algorithm
In this section, we will introduce the proposed decoding algorithm in detail.
4.1. Probability Messages Calculating. Probability messages calculating is an important aspect of a decoding algorithm. Our proposed algorithm is different from the SC and BP algorithms, because the probability messages are calculated based on the decoding tree representation of the nodes in the decoder: for an intermediate node v(i, j) with only one son node v(i+1, j_o), 0 ≤ j_o ≤ N - 1, there is

p_{v(i,j)}(0) = p_{v(i+1,j_o)}(0)
p_{v(i,j)}(1) = p_{v(i+1,j_o)}(1)  (24)
Input: the frozen node set V_F; the probability message set of V_F; the matrices P_{N,M}, A_{H,K}, and B̃_N.
Output: the error indicator vector set C.
(1) Getting γ_0^{M-1} with the probability message set of V_F
(2) Getting γ_0^{H-1} with γ_0^{M-1} and P_{N,M}
(3) for each v(i, j) ∈ V_F do
(4)   if p_{v(i,j)}(0)/p_{v(i,j)}(1) > δ then
(5)     Setting the error indicator for each leaf node of v(i, j) to 0
(6)   end if
(7) end for
(8) for each valid value of c̃_0^{K-1} do
(9)   Getting c̃_K^{N-1} with A_{H,K} and γ_0^{N-K-1}
(10)  if (11) is satisfied then
(11)    Getting c_0^{N-1} ∈ C with B̃_N
(12)  else
(13)    Dropping the solution and continuing
(14)  end if
(15) end for
(16) return C

Algorithm 1: Error checking for decoding.
While if v(i, j) has two son nodes v(i+1, j_l) and v(i+1, j_r), 0 ≤ j_l, j_r ≤ N - 1, we will have

p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(1)
p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0) p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1) p_{v(i+1,j_r)}(0)  (25)
Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
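A minimal sketch of the two update rules (function names are ours): (24) simply copies the single son's messages, while (25) combines two sons. Since each parent depends only on its own sons, all the nodes of one level of the decoding tree can be updated independently.

```python
# Sketch of the message updates (24) and (25).

def copy_son(son):
    """(24): a parent with a single son node inherits its messages."""
    return (son[0], son[1])

def combine(left, right):
    """(25): a parent with two son nodes."""
    p0 = left[0] * right[0] + left[1] * right[1]
    p1 = left[0] * right[1] + left[1] * right[0]
    return (p0, p1)

# example: two sons with P(0) = 0.9 and P(0) = 0.8
parent = combine((0.9, 0.1), (0.8, 0.2))
assert abs(parent[0] - 0.74) < 1e-12 and abs(parent[1] - 0.26) < 1e-12
```

With a thread pool (e.g. `concurrent.futures`), one `combine` call per parent of a level can be issued at once, which is the parallelism referred to above.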
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we will consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node will be given by

q'_i(0) = λ_0, if q_i(0) > q_i(1)
q'_i(0) = 1 - λ_0, otherwise  (26)

and q'_i(1) = 1 - q'_i(0), where q'_i(0), q'_i(1) are the new probability messages of the error node v(n, i) and λ_0 is a small nonnegative constant, that is, 0 ≤ λ_0 ≪ 1. Furthermore, we will get the new probability vectors of the input nodes as

q_0^{N-1}(0)' = (q_0(0)', q_1(0)', ..., q_{N-1}(0)')
q_0^{N-1}(1)' = (q_0(1)', q_1(1)', ..., q_{N-1}(1)')  (27)

where q_i(0)' = q'_i(0) and q_i(1)' = q'_i(1) if the input node v(n, i) is in error; otherwise, q_i(0)' = q_i(0) and q_i(1)' = q_i(1). Then the probability messages of all the nodes in the decoder will be recalculated.
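A sketch of the correction rule (26)-(27); the function names and the sample value of λ_0 are ours. A node flagged by the error indicator vector has its message forced toward the opposite decision, per the maximization principle.

```python
# Sketch of the correction step (26)-(27).

LAMBDA0 = 1e-3                     # 0 <= lambda_0 << 1 (sample value, ours)

def correct(q, error_indicator):
    """q: list of (q_i(0), q_i(1)); returns the modified vector of (27)."""
    out = []
    for (q0, q1), is_err in zip(q, error_indicator):
        if is_err:
            new_q0 = LAMBDA0 if q0 > q1 else 1 - LAMBDA0   # (26)
            out.append((new_q0, 1 - new_q0))
        else:
            out.append((q0, q1))   # unflagged nodes keep their messages
    return out

q_in = [(0.9, 0.1), (0.2, 0.8)]
assert correct(q_in, [1, 0])[0][0] == LAMBDA0   # flagged decision flipped
assert correct(q_in, [1, 0])[1] == (0.2, 0.8)   # unflagged node unchanged
```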
In fact, when there is only one error indicator vector output from Algorithm 1, that is, |C| = 1, after the error correcting and the recalculation of the probability messages, the estimated source binary vector û_0^{N-1} can be output directly by the hard decision of the output nodes. While if |C| > 1, in order to minimize the decoding error probability, further research is needed on how to get the optimal error indicator vector.
4.3. Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node v(i, j), the reliability degree ζ_{v(i,j)} is given by

ζ_{v(i,j)} = p_{v(i,j)}(0)/p_{v(i,j)}(1), if p_{v(i,j)}(0) > p_{v(i,j)}(1)
ζ_{v(i,j)} = p_{v(i,j)}(1)/p_{v(i,j)}(0), otherwise  (28)

The reliability degree indicates the reliability of the node's decision value, and the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node v(0, 0) in Figure 2 are p_{v(0,0)}(0) = 0.95 and p_{v(0,0)}(1) = 0.05, there is ζ_{v(0,0)} = 0.95/0.05 = 19; that is, the reliability degree of the decision v(0, 0) = 0 is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
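The definition (28) amounts to the larger of the two message ratios; a one-line sketch (the function name is ours) reproducing the v(0, 0) example:

```python
# Reliability degree (28): the larger of the two message ratios.

def reliability_degree(p0, p1):
    return p0 / p1 if p0 > p1 else p1 / p0

assert abs(reliability_degree(0.95, 0.05) - 19.0) < 1e-6
```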
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of |C| > 1, one node in the decoder may correspondingly have multiple reliability degrees. We denote the kth reliability degree of the node v(i, j) as ζ^k_{v(i,j)}, the value of which depends on the kth element of C, that is, c_k. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.

The first method is based on the fact that, when there is no noise in the channel, the reliability degree of a node will approach infinity, that is, ζ_{v(i,j)} → ∞. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as

ĉ_k = argmax_{c_k ∈ C} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} ζ^k_{v(i,j)}  (29)

where ĉ_k is the optimal error indicator vector.

To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degrees of all the frozen nodes; hence, the target function can be written as

ĉ_k = argmax_{c_k ∈ C} \sum_{v(i,j) ∈ V_F} ζ^k_{v(i,j)}  (30)

On the other hand, we take the maximization of the output nodes' reliability degrees as the optimization target, the function of which will be given by

ĉ_k = argmax_{c_k ∈ C} \sum_{j=0}^{N-1} ζ^k_{v(0,j)}  (31)
Hence the problem of getting the optimal error indicatorvector can be formulated as an optimization problemwith theabove three target functions What is more is that with theCRC aided the accuracy of the optimal error indicator vectorcan be enhanced Based on these observations the findingof the optimal error indicator vector will be divided into thefollowing steps
(1) Initialization we first get number 119871 candidates of theoptimal error indicator vector 119888
1198960 1198881198961 119888
119896119871minus1 by the
formulas of (29) or (30) or (31)(2) CRC-checking in order to get the optimal error indi-
cator vector correctly we further exclude some can-didates from 119888
1198960 1198881198961 119888
119896119871minus1by the CRC-checking
If there is only one valid candidate after the CRC-checking the optimal error indicator vector will beoutput directly otherwise the remaining candidateswill further be processed in step 3
Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity | Time complexity
(1) | O(1) | O(N)
(2) | O(N log_2 N) | O(N log_2 N)
(3) | O(X_0) | O(X_1)
(4)–(7) | O(T_0 N log_2 N) | O(T_0 N log_2 N)
(8) | O(1) | O(T_0 N log_2 N) or O(T_0 N)
(9) | O(1) | O(N)

Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1) | O(1) | O(M)
(2) | O(1) | O(M)
(3)–(7) | O(1) | O(MN)
(8)–(15) | O(1) | O(T_1 (M - K) K) + O(T_1 M)
(3) Determination: if there are multiple candidates with a correct CRC-checking, we further choose the optimal error indicator vector from the remaining candidates of step (2) with the formulas of (29) or (30) or (31).
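The three steps can be sketched as follows, here with the output-node target (31); `crc_ok` is a hypothetical CRC-check callback, the per-candidate reliability degrees are assumed to be precomputed, and all names are ours.

```python
# Sketch of steps (1)-(3) with target function (31).

def select_optimal(candidates, output_reliability, crc_ok, L=4):
    """candidates: error indicator vectors; output_reliability[k]: the
    output-node reliability degrees after correcting with candidates[k]."""
    def score(k):
        return sum(output_reliability[k])          # target function (31)
    # (1) initialization: keep the L best-scoring candidates
    top = sorted(range(len(candidates)), key=score, reverse=True)[:L]
    # (2) CRC-checking: exclude candidates failing the CRC
    valid = [k for k in top if crc_ok(candidates[k])]
    # (3) determination: highest score among the survivors
    pool = valid if valid else top
    return candidates[max(pool, key=score)]

cands = [[0, 0, 0, 1], [1, 0, 0, 0]]
rel = [[9.0, 8.0], [2.0, 1.0]]
assert select_optimal(cands, rel, crc_ok=lambda c: True) == [0, 0, 0, 1]
```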
So far, we have introduced the main steps of the proposed decoding algorithm in detail, and as a summarization of these results, we now provide the whole decoding procedure in the form of pseudocode, as shown in Algorithm 2.
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, O(X_0) and O(X_1) are the space and time complexity of Algorithm 1, respectively, and T_0 is the number of error indicator vectors output by Algorithm 1, that is, T_0 = |C|. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence, we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

In Table 2, M is the number of the frozen nodes, and T_1 is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as O(X_0) = O(1) and O(X_1) = 2O(M) + O(MN) + O(T_1 (M - K) K) + O(T_1 M). Furthermore, we can get the space and time complexity of the proposed decoding algorithm as O((T_0 + 1) N log_2 N) and O(2N) + O((2T_0 + 1) N log_2 N) + O((T_1 + N + 2) M) + O(T_1 K (N - K)). From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on T_0 and T_1, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: the received vector y_0^{N-1}.
Output: the decoded codeword û_0^{N-1}.
(1) Getting the probability messages q_0^{N-1}(0) and q_0^{N-1}(1) with the received vector y_0^{N-1}
(2) Getting the probability messages of each frozen node in V_F
(3) Getting the error indicator vector set C with Algorithm 1
(4) for each c_k ∈ C do
(5)   Correcting the errors indicated by c_k with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword û_0^{N-1} by hard decision
(10) return û_0^{N-1}

Algorithm 2: Decoding algorithm based on error checking and correcting.
5 Simulation Results
In this section, Monte Carlo simulation is provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is N = 2^3 = 8, the code rate R is 0.5, and the indices of the information bits are the same as in [1].
5.1. Performance. To compare the performance of the SC, SCL, BP, and proposed decoding algorithms, three optimization targets with a 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly in the low signal to noise ratio (SNR, E_b/N_0) region, the proposed algorithm provides a higher SNR advantage: for example, when the bit error rate (BER) is 10^-3, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is 10^-4, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of δ, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different δ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of δ for E_b/N_0 = 1 dB, E_b/N_0 = 3 dB, E_b/N_0 = 5 dB, and E_b/N_0 = 7 dB are 2.5, 3.0, 5.0, and 5.5, respectively.
Figure 4: Performance comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and the proposed decoding algorithm, plotted as BER versus $E_b/N_0$ (dB). Algorithm 1 means that the target function to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31). $\delta$ in Theorem 11 takes the value of 4.

5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$ and $T_1$ indicated in Section 4.5 are counted and shown in Figure 6. It is noticed from Figure 6 that, with the increase of the SNR, the average numbers of the parameters $T_0$ and $T_1$ decrease sharply. In particular, we can find that, in the high-SNR region, both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N\log_2 N)$ and the time complexity approaches $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high-
The Scientific World Journal 11
Figure 5: Performance of the proposed decoding algorithm with different $\delta$ (BER versus $\delta$, for $E_b/N_0 = 1$, 3, 5, and 7 dB).

Figure 6: Average numbers of the parameters $T_0$ and $T_1$ with $\delta = 4$ (average number versus $E_b/N_0$ in dB, for Algorithms 1, 2, and 3).
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
Figure 7: Space and time complexity comparison of SC, SCL ($L = 4$), BP (iteration number is 60), and Algorithm 1 ($\delta = 4$), plotted versus $E_b/N_0$ (dB).

6. Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, by delving into the problem of solving these equations, we introduced a method to check the errors in the input nodes from the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with the space and time complexity approaching $O(N\log_2 N)$ and $O(NM)$ ($M$ is the number of frozen nodes), respectively, in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in future work we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix

A. Proof of Theorem 1

We can get the inverse of $\mathbf{F}_2$ through the linear transformation of the matrix, that is, $\mathbf{F}_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} = \mathbf{F}_2$. Furthermore, noting that the arithmetic is over GF(2), so that $-\mathbf{F}_2 = \mathbf{F}_2$, we have
$$(\mathbf{F}_2^{\otimes 2})^{-1} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{F}_2^{-1} & \mathbf{0}_2 \\ -\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} & \mathbf{F}_2^{-1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix} = \mathbf{F}_2^{\otimes 2}. \quad (A.1)$$
Based on mathematical induction, we will have
$$(\mathbf{F}_2^{\otimes n})^{-1} = \mathbf{F}_2^{\otimes n}. \quad (A.2)$$
The inverse of $\mathbf{G}_N$ can be expressed as
$$\mathbf{G}_N^{-1} = (\mathbf{B}_N \mathbf{F}_2^{\otimes n})^{-1} = (\mathbf{F}_2^{\otimes n})^{-1}\mathbf{B}_N^{-1} = \mathbf{F}_2^{\otimes n}\mathbf{B}_N^{-1}. \quad (A.3)$$
Since $\mathbf{B}_N$ is a bit-reversal permutation matrix, by elementary transformation of the matrix there is $\mathbf{B}_N^{-1} = \mathbf{B}_N$. Hence we have
$$\mathbf{G}_N^{-1} = \mathbf{F}_2^{\otimes n}\mathbf{B}_N. \quad (A.4)$$
It is noticed from Proposition 16 of [1] that $\mathbf{F}_2^{\otimes n}\mathbf{B}_N = \mathbf{B}_N\mathbf{F}_2^{\otimes n}$; therefore,
$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \quad (A.5)$$
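As an illustrative sanity check (not part of the original paper, which contains no code), the chain (A.2)–(A.5) can be verified numerically for $N = 8$ in Python; all matrix arithmetic is reduced mod 2 to model GF(2):

```python
import numpy as np

def kron_power(F, n):
    """n-fold Kronecker power of F."""
    G = np.array([[1]], dtype=int)
    for _ in range(n):
        G = np.kron(G, F)
    return G

def bit_reversal_permutation(n):
    """Permutation matrix B_N sending row i to the bit-reversed index of i."""
    N = 1 << n
    B = np.zeros((N, N), dtype=int)
    for i in range(N):
        j = int(format(i, '0{}b'.format(n))[::-1], 2)  # reverse the n-bit index
        B[i, j] = 1
    return B

n = 3
F2 = np.array([[1, 0], [1, 1]], dtype=int)
Fn = kron_power(F2, n)                 # F_2^{(x)n}
B = bit_reversal_permutation(n)        # B_N
G = (B @ Fn) % 2                       # G_N = B_N F_2^{(x)n}

# (A.2): F_2^{(x)n} is its own inverse over GF(2)
assert np.array_equal((Fn @ Fn) % 2, np.eye(1 << n, dtype=int))
# Proposition 16 of [1]: F_2^{(x)n} B_N = B_N F_2^{(x)n}
assert np.array_equal((Fn @ B) % 2, (B @ Fn) % 2)
# (A.5): G_N^{-1} = G_N
assert np.array_equal((G @ G) % 2, np.eye(1 << n, dtype=int))
```

The final assertion confirms $\mathbf{G}_N\mathbf{G}_N = \mathbf{I}$ over GF(2), which is exactly (A.5).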
B. Proof of Theorem 4

We assume that the number of leaf nodes of the frozen node $v(i,j)$ is $Q$, that is, $Q = |\mathbb{V}^L_{v(i,j)}|$. If $Q = 2$, based on (25) there is
$$p_{v(i,j)}(0) = p_{v_0}(0)\,p_{v_1}(0) + p_{v_0}(1)\,p_{v_1}(1),$$
$$p_{v(i,j)}(1) = p_{v_0}(0)\,p_{v_1}(1) + p_{v_0}(1)\,p_{v_1}(0), \quad (B.1)$$
where $v_0, v_1 \in \mathbb{V}^L_{v(i,j)}$. Based on the above equations, we have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \bigl(p_{v_0}(0) - p_{v_0}(1)\bigr)\bigl(p_{v_1}(0) - p_{v_1}(1)\bigr). \quad (B.2)$$
Therefore, by mathematical induction, when $Q > 2$ we will have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1}\bigl(p_{v_k}(0) - p_{v_k}(1)\bigr), \quad (B.3)$$
where $v_k \in \mathbb{V}^L_{v(i,j)}$.

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in $\mathbb{V}^L_{v(i,j)}$ are set to 0. That is to say, when a node $v_k \in \mathbb{V}^L_{v(i,j)}$ is right, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $\mathbb{V}^{LO}_{v(i,j)} \subseteq \mathbb{V}^L_{v(i,j)}$, with $|\mathbb{V}^{LO}_{v(i,j)}|$ an odd number, such that
$$\forall v_k \in \mathbb{V}^{LO}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.4)$$
While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $\mathbb{V}^{LE}_{v(i,j)} \subseteq \mathbb{V}^L_{v(i,j)}$, with $|\mathbb{V}^{LE}_{v(i,j)}|$ an even number, such that
$$\forall v_k \in \mathbb{V}^{LE}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.5)$$
So the proof of Theorem 4 is completed.
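The product identity (B.3) is easy to check numerically. The following Python sketch (illustrative only; the leaf probabilities are randomly generated) combines $Q$ leaf pairs with the check-node rule (B.1) and compares the resulting difference against the product of the leaf differences:

```python
import random
from functools import reduce

def combine(pa, pb):
    """Check-node combination of two probability pairs, as in (B.1)/(25)."""
    p0 = pa[0] * pb[0] + pa[1] * pb[1]
    p1 = pa[0] * pb[1] + pa[1] * pb[0]
    return (p0, p1)

random.seed(0)
Q = 5
leaves = []
for _ in range(Q):
    p0 = random.random()
    leaves.append((p0, 1.0 - p0))       # each leaf carries a pair (p(0), p(1))

p = reduce(combine, leaves)             # probability pair of v(i, j) over Q leaves
prod = 1.0
for p0, p1 in leaves:
    prod *= (p0 - p1)

assert abs((p[0] - p[1]) - prod) < 1e-12    # identity (B.3)
```

Since each pairwise combination multiplies the differences, the identity holds for any association order, which is exactly the induction step of the proof.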
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of $\mathbf{G}_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1,1,1,1,1,1,1,1)^T$. Hence, based on the proof of Theorem 1, we have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \ge N - K, \qquad \operatorname{rank}\left(\begin{bmatrix} \mathbf{E}_{M,N} \\ (\gamma_0^{M-1})^T \end{bmatrix}\right) \ge N - K. \quad (C.1)$$

In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, an error-checking equation generated by a frozen node in an intermediate column can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence we further have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \le N - K, \qquad \operatorname{rank}\left(\begin{bmatrix} \mathbf{E}_{M,N} \\ (\gamma_0^{M-1})^T \end{bmatrix}\right) \le N - K. \quad (C.2)$$
Therefore, the proof of (17) is completed.
D. Proof of Theorem 11

To prove Theorem 11, we assume, without loss of generality, that the real values of all the input nodes are 0. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can be easily proved that there exists a positive constant $\beta_0 > 1$ such that
$$\forall v(n,k) \in \mathbb{V}_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \quad (D.1)$$
where $v(n,k)$ is an input node and $\mathbb{V}_I$ is the input nodes set of the decoder. That is to say, for a frozen node $v(i,j)$ with a leaf nodes set $\mathbb{V}^L_{v(i,j)}$, we have
$$\forall v_k \in \mathbb{V}^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \quad (D.2)$$

Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as
$$p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1}\ \sum_{\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}}\ \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{0\le k_r\le Q-1 \\ k_r\notin\{k_0,\ldots,k_{2m-1}\}}} p_{v_{k_r}}(0),$$
$$p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1}\ \sum_{\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}}\ \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{0\le k_r\le Q-1 \\ k_r\notin\{k_0,\ldots,k_{2m}\}}} p_{v_{k_r}}(0), \quad (D.3)$$
where $v_{k_l}, v_{k_r} \in \mathbb{V}^L_{v(i,j)}$. Hence we further have
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1}\sum_{\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}}\prod_{l=0}^{2m-1}\bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}{\sum_{m=0}^{Q/2-1}\sum_{\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}}\prod_{l=0}^{2m}\bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}. \quad (D.4)$$

With the definition of the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, where $1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0$, the above equation will be written as
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = \frac{1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \quad (D.5)$$

To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions
$$h(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$
$$g(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \quad (D.6)$$

Then the derivative of $f$ with respect to $\varphi_k$ will be
$$\frac{\partial f}{\partial \varphi_k} = \frac{(\partial h/\partial\varphi_k)\,g - (\partial g/\partial\varphi_k)\,h}{g^2} = \frac{g\,g_{\varphi_k=0} - h\,h_{\varphi_k=0}}{g^2} = \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \quad (D.7)$$
where $g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$. Based on the solution of the equations $\partial f/\partial\varphi_0 = 0$, $\partial f/\partial\varphi_1 = 0$, $\ldots$, $\partial f/\partial\varphi_{Q-1} = 0$, we get the extreme value point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ as $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\ldots$, $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\ldots$, $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $\mathbb{V}^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
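The expansion (D.3)–(D.5) and the role of $\delta$ can be checked numerically for a small even $Q$. The Python sketch below (illustrative only; $\beta_0 = 3$ and $Q = 4$ are arbitrary choices, not values from the paper) verifies that the even/odd-subset expansion of $f$ agrees with the recursive combination (25), and that $\delta = f(\beta_0, \ldots, \beta_0)$ upper-bounds $f$ on the box $[1/\beta_0, \beta_0]^Q$:

```python
import itertools
import random
from functools import reduce

def combine(pa, pb):
    """Check-node combination (25) of two probability pairs."""
    return (pa[0]*pb[0] + pa[1]*pb[1], pa[0]*pb[1] + pa[1]*pb[0])

def f(phis):
    """Ratio p_{v(i,j)}(0)/p_{v(i,j)}(1) from the leaf ratios phi_k."""
    pairs = [(phi / (1 + phi), 1 / (1 + phi)) for phi in phis]
    p0, p1 = reduce(combine, pairs)
    return p0 / p1

def f_parity(phis):
    """Direct even/odd-subset expansion (D.5): h / g."""
    Q, h, g = len(phis), 0.0, 0.0
    for r in range(Q + 1):
        for subset in itertools.combinations(range(Q), r):
            term = 1.0
            for k in subset:
                term *= phis[k]
            if r % 2 == 0:
                h += term       # even-size products form the numerator h
            else:
                g += term       # odd-size products form the denominator g
    return h / g

random.seed(1)
beta, Q = 3.0, 4                # Q is even for a frozen node's leaf set
phis = [random.uniform(1/beta, beta) for _ in range(Q)]
assert abs(f(phis) - f_parity(phis)) < 1e-9     # (D.5) matches (25)

delta = f([beta] * Q)           # delta = f(beta_0, ..., beta_0)
for _ in range(200):
    sample = [random.uniform(1/beta, beta) for _ in range(Q)]
    assert f(sample) <= delta + 1e-9            # delta bounds f on the box
```

Because $h$ and $g$ are multilinear, $f$ is monotone in each $\varphi_k$ separately, so its maximum over the box is attained at a corner, consistent with the sampled bound above.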
Conflict of Interests

The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.

Acknowledgment

The authors would like to thank all the reviewers for their comments and suggestions.
References

[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051-3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493-1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253-6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1-5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100-3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668-1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378-1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725-728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946-957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967-1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1-5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447-449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488-1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11-14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188-194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919-929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583-587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21-28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1-5, San Francisco, Calif, USA, December 2006.
Input: The frozen nodes set $\mathbb{V}_F$; the probability messages set of $\mathbb{V}_F$; the matrices $\mathbf{P}_{N,M}$, $\mathbf{A}_{H,K}$, and $\mathbf{B}_N$.
Output: The error indicator vectors set $\mathbb{C}$.
(1) Getting $\gamma_0^{M-1}$ with the probability messages set of $\mathbb{V}_F$
(2) Getting $\gamma_0^{H-1}$ with $\gamma_0^{M-1}$ and $\mathbf{P}_{N,M}$
(3) for each $v(i,j) \in \mathbb{V}_F$ do
(4)   if $p_{v(i,j)}(0)/p_{v(i,j)}(1) > \delta$ then
(5)     Setting the error indicator for each leaf node of $v(i,j)$ to 0
(6)   end if
(7) end for
(8) for each valid value of $c_0^{K-1}$ do
(9)   Getting $c_K^{N-1}$ with $\mathbf{A}_{H,K}$ and $\gamma_0^{N-K-1}$
(10)  if (11) is satisfied then
(11)    Getting $c_0^{N-1} \in \mathbb{C}$ with $\mathbf{B}_N$
(12)  else
(13)    Dropping the solution and continuing
(14)  end if
(15) end for
(16) return $\mathbb{C}$

Algorithm 1: Error checking for decoding.
While if $v(i,j)$ has two son nodes $v(i+1, j_l)$ and $v(i+1, j_r)$, $0 \le j_l, j_r \le N-1$, we will have
$$p_{v(i,j)}(0) = p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(0) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(1),$$
$$p_{v(i,j)}(1) = p_{v(i+1,j_l)}(0)\,p_{v(i+1,j_r)}(1) + p_{v(i+1,j_l)}(1)\,p_{v(i+1,j_r)}(0). \quad (25)$$
Based on (24) and (25), the probability messages of all the variable nodes can be calculated in parallel, which is beneficial to the decoding throughput.
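Because (25) couples each parent only to its own pair of son nodes, a whole layer of the decoder can be combined in one vectorized operation. The following NumPy sketch (illustrative only; the son-node probability pairs are randomly generated) computes all $N/2$ parent pairs of one layer at once:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
# Probability pairs (p(0), p(1)) of the left and right son nodes of a layer,
# stored as arrays so every parent in the layer is combined in one operation.
p_left = rng.random((N // 2, 2))
p_left /= p_left.sum(axis=1, keepdims=True)
p_right = rng.random((N // 2, 2))
p_right /= p_right.sum(axis=1, keepdims=True)

# Equation (25) applied to all parent nodes of the layer simultaneously.
p0 = p_left[:, 0] * p_right[:, 0] + p_left[:, 1] * p_right[:, 1]
p1 = p_left[:, 0] * p_right[:, 1] + p_left[:, 1] * p_right[:, 0]
parents = np.stack([p0, p1], axis=1)

# Each parent pair is still a probability distribution: p(0) + p(1) = 1.
assert np.allclose(parents.sum(axis=1), 1.0)
```

The same pattern applies layer by layer, which is the parallelism the text refers to.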
4.2. Error Correcting. Algorithm 1 in Section 3.3 has provided an effective method to detect errors in the input nodes of the decoder, and now we consider how to correct these errors. To achieve this goal, we propose a method based on modifying the probability messages of the error nodes with constant values according to the maximization principle. Based on this method, the new probability messages of an error node will be given by
$$q_i'(0) = \begin{cases} \lambda_0, & q_i(0) > q_i(1), \\ 1 - \lambda_0, & \text{otherwise}, \end{cases} \quad (26)$$
and $q_i'(1) = 1 - q_i'(0)$, where $q_i'(0)$, $q_i'(1)$ are the new probability messages of the error node $v(n,i)$, and $\lambda_0$ is a small nonnegative constant, that is, $0 \le \lambda_0 \ll 1$. Furthermore, we will get the new probability vectors of the input nodes as
$$q_0^{N-1}(0)' = \bigl(q_0(0)', q_1(0)', \ldots, q_{N-1}(0)'\bigr),$$
$$q_0^{N-1}(1)' = \bigl(q_0(1)', q_1(1)', \ldots, q_{N-1}(1)'\bigr), \quad (27)$$
where $q_i(0)' = q_i'(0)$ and $q_i(1)' = q_i'(1)$ if the input node $v(n,i)$ is in error; otherwise $q_i(0)' = q_i(0)$ and $q_i(1)' = q_i(1)$. Then the probability messages of all the nodes in the decoder will be recalculated.

In fact, when there is only one error indicator vector output from Algorithm 1, that is, $|\mathbb{C}| = 1$, after the error correcting and the recalculation of the probability messages, the estimated source binary vector $\hat{u}_0^{N-1}$ can be output directly by the hard decision of the output nodes. While if $|\mathbb{C}| > 1$, in order to minimize the decoding error probability, further work is needed on how to get the optimal error indicator vector.
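A minimal sketch of the correction rule (26)–(27), assuming the error indicator vector is given (the array values below are illustrative, not from the paper):

```python
import numpy as np

LAMBDA0 = 1e-3       # small constant, 0 <= lambda_0 << 1, as in (26)

def correct_errors(q0, q1, error_indicator):
    """Modify the probability pairs of the input nodes flagged by the error
    indicator vector, following (26)-(27): an error node whose current
    decision favors 0 is pushed toward 1, and vice versa."""
    q0n, q1n = q0.copy(), q1.copy()
    for i in np.flatnonzero(error_indicator):
        if q0[i] > q1[i]:
            q0n[i] = LAMBDA0          # flip the decision toward 1
        else:
            q0n[i] = 1.0 - LAMBDA0    # flip the decision toward 0
        q1n[i] = 1.0 - q0n[i]
    return q0n, q1n

q0 = np.array([0.9, 0.2, 0.6, 0.7])
q1 = 1.0 - q0
c = np.array([0, 1, 1, 0])            # input nodes 1 and 2 flagged as errors
q0n, q1n = correct_errors(q0, q1, c)

assert q0n[1] > q1n[1] and q0n[2] < q1n[2]   # both flagged decisions flipped
assert q0n[0] == q0[0]                       # unflagged nodes are untouched
```

After this modification the decoder recomputes all node messages, as the text describes.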
4.3. Reliability Degree. To find the optimal error indicator vector, we introduce a parameter called the reliability degree for each node in the decoder. For a node $v(i,j)$, the reliability degree $\zeta_{v(i,j)}$ is given by
$$\zeta_{v(i,j)} = \begin{cases} \dfrac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}, & p_{v(i,j)}(0) > p_{v(i,j)}(1), \\[2mm] \dfrac{p_{v(i,j)}(1)}{p_{v(i,j)}(0)}, & \text{otherwise}. \end{cases} \quad (28)$$
The reliability degree indicates the reliability of the node's decision value: the larger the reliability degree, the higher the reliability of that value. For example, if the probability messages of the node $v(0,0)$ in Figure 2 are $p_{v(0,0)}(0) = 0.95$ and $p_{v(0,0)}(1) = 0.05$, there is $\zeta_{v(0,0)} = 0.95/0.05 = 19$; that is, the reliability degree of the decision $v(0,0) = 0$ is 19. In fact, the reliability degree is an important reference parameter for the choice of the optimal error indicator vector, which will be introduced in the following subsection.
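The rule (28) reduces to a one-line function; the assertion below reproduces the $v(0,0)$ example from the text:

```python
def reliability_degree(p0, p1):
    """zeta_{v(i,j)} from (28): ratio of the larger probability to the smaller."""
    return p0 / p1 if p0 > p1 else p1 / p0

# The v(0,0) example: p(0) = 0.95 and p(1) = 0.05 give a reliability degree of 19.
assert abs(reliability_degree(0.95, 0.05) - 19.0) < 1e-9
```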
4.4. Optimal Error Indicator Vector. As aforementioned, due to the existence of $|\mathbb{C}| > 1$, one node in the decoder may correspondingly have multiple reliability degrees. We denote the $k$th reliability degree of node $v(i,j)$ as $\zeta^k_{v(i,j)}$, the value of which depends on the $k$th element of $\mathbb{C}$, that is, $c_k$. Based on the definition of the reliability degree, we introduce three methods to get the optimal error indicator vector.

The first method is based on the fact that, when there is no noise in the channel, the reliability degree of a node approaches infinity, that is, $\zeta_{v(i,j)} \to \infty$. Hence, the main consideration is to maximize the reliability degrees of all the nodes in the decoder, and the target function can be written as
$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{i=0}^{\log_2 N} \sum_{j=0}^{N-1} \zeta^k_{v(i,j)}, \quad (29)$$
where $\hat{c}_k$ is the optimal error indicator vector.

To reduce the complexity, we further introduce two simplified versions of the above method. On one hand, we just maximize the reliability degrees of all the frozen nodes; hence the target function can be written as
$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{v(i,j) \in \mathbb{V}_F} \zeta^k_{v(i,j)}. \quad (30)$$
On the other hand, we take the maximization of the output nodes' reliability degrees as the optimization target, the function of which will be given by
$$\hat{c}_k = \arg\max_{c_k \in \mathbb{C}} \sum_{j=0}^{N-1} \zeta^k_{v(0,j)}. \quad (31)$$

Hence, the problem of getting the optimal error indicator vector can be formulated as an optimization problem with the above three target functions. What is more, with the aid of the CRC, the accuracy of the optimal error indicator vector can be enhanced. Based on these observations, the finding of the optimal error indicator vector is divided into the following steps.

(1) Initialization: we first get $L$ candidates for the optimal error indicator vector, $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$, by the formulas of (29), (30), or (31).

(2) CRC-checking: in order to get the optimal error indicator vector correctly, we further exclude some candidates from $\hat{c}_{k_0}, \hat{c}_{k_1}, \ldots, \hat{c}_{k_{L-1}}$ by CRC-checking. If there is only one valid candidate after the CRC-checking, the optimal error indicator vector is output directly; otherwise the remaining candidates are further processed in step (3).
Table 1: The space and time complexity of each step in Algorithm 2.

Step number in Algorithm 2 | Space complexity | Time complexity
(1) | $O(1)$ | $O(N)$
(2) | $O(N\log_2 N)$ | $O(N\log_2 N)$
(3) | $O(X_0)$ | $O(X_1)$
(4)-(7) | $O(T_0 N\log_2 N)$ | $O(T_0 N\log_2 N)$
(8) | $O(1)$ | $O(T_0 N\log_2 N)$ or $O(T_0 N)$
(9) | $O(1)$ | $O(N)$

Table 2: The space and time complexity of each step in Algorithm 1.

Step number in Algorithm 1 | Space complexity | Time complexity
(1) | $O(1)$ | $O(M)$
(2) | $O(1)$ | $O(M)$
(3)-(7) | $O(1)$ | $O(MN)$
(8)-(15) | $O(1)$ | $O(T_1(M-K)K) + O(T_1 M)$
(3) Determination: if there are multiple candidates with a correct CRC-check, we further choose the optimal error indicator vector from the remaining candidates of step (2) with the formulas of (29), (30), or (31).

So far, we have introduced the main steps of the proposed decoding algorithm in detail; as a summarization of these results, we now provide the whole decoding procedure in pseudocode form, as shown in Algorithm 2.
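For concreteness, the simplest target (31) can be sketched as follows, assuming the per-candidate reliability degrees of the output nodes have already been computed (the function name and the numbers below are illustrative, not from the paper):

```python
def target_output_nodes(zeta_output):
    """Target (31): choose the candidate c_k whose output-node reliability
    degrees zeta^k_{v(0,j)} sum to the largest value.
    zeta_output[k][j] is the reliability degree of output node v(0, j)
    under candidate k."""
    totals = [sum(row) for row in zeta_output]
    return max(range(len(totals)), key=totals.__getitem__)

# Two hypothetical candidates for N = 4 output nodes.
zetas = [[19.0, 2.0, 5.0, 3.0],    # candidate 0
         [19.0, 8.0, 5.0, 3.0]]    # candidate 1
assert target_output_nodes(zetas) == 1   # candidate 1 has the larger total
```

Targets (29) and (30) differ only in which nodes contribute to the sum: all nodes of the decoder, or the frozen nodes only.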
4.5. Complexity Analysis. In this section, the complexity of the proposed decoding algorithm is considered. We first investigate the space and time complexity of each step in Algorithm 2, as shown in Table 1.

In Table 1, $O(X_0)$ and $O(X_1)$ are the space and time complexity of Algorithm 1, respectively, and $T_0$ is the number of error indicator vectors output by Algorithm 1, that is, $T_0 = |\mathbb{C}|$. It is noticed that the complexity of Algorithm 1 has a great influence on the complexity of the proposed decoding algorithm; hence we further analyze the complexity of each step of Algorithm 1, and the results are shown in Table 2.

In Table 2, $M$ is the number of the frozen nodes and $T_1$ is the number of valid solutions of the error-checking equations after the checking of (21). Hence, we get the space and time complexity of Algorithm 1 as $O(X_0) = O(1)$ and $O(X_1) = 2O(M) + O(MN) + O(T_1(M-K)K) + O(T_1 M)$. Furthermore, we can get the space and time complexity of the proposed decoding algorithm as $O((T_0+1)N\log_2 N)$ and $O(2N) + O((2T_0+1)N\log_2 N) + O((T_1+N+2)M) + O(T_1 K(N-K))$. From these results, we can find that the complexity of the proposed decoding algorithm mainly depends on $T_0$ and $T_1$, the values of which depend on the channel condition, as illustrated in our simulation work.
Input: The received vector $y_0^{N-1}$.
Output: The decoded codeword $\hat{u}_0^{N-1}$.
(1) Getting the probability messages $q_0^{N-1}(0)$ and $q_0^{N-1}(1)$ with the received vector $y_0^{N-1}$
(2) Getting the probability messages of each frozen node in $\mathbb{V}_F$
(3) Getting the error indicator vectors set $\mathbb{C}$ with Algorithm 1
(4) for each $c_k \in \mathbb{C}$ do
(5)   Correcting the errors indicated by $c_k$ with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword $\hat{u}_0^{N-1}$ by hard decision
(10) return $\hat{u}_0^{N-1}$

Algorithm 2: Decoding algorithm based on error checking and correcting.
5 Simulation Results
In this section Monte Carlo simulation is provided to showthe performance and complexity of the proposed decodingalgorithm In the simulation the BPSK modulation and theadditive white Gaussian noise (AWGN) channel are assumedThe code length is 119873 = 2
3= 8 code rate 119877 is 05 and the
index of the information bits is the same as [1]
51 Performance To compare the performance of SC SCLBP and the proposed decoding algorithms three optimiza-tion targets with 1 bit CRC are used to get the optimal errorindicator vector in our simulation and the results are shownin Figure 4
As it is shown from Algorithms 1 2 and 3 in Figure 4the proposed decoding algorithm almost yields the sameperformance with the three different optimization targetsFurthermore we can find that compared with the SCSCL and BP decoding algorithms the proposed decodingalgorithm achieves better performance Particularly in thelow region of signal to noise ratio (SNR) that is 119864
119887119873
0 the
proposed algorithm provides a higher SNR advantage forexample when the bit error rate (BER) is 10
minus3 Algorithm1 provides SNR advantages of 13 dB 06 dB and 14 dBand when the BER is 10
minus4 the SNR advantages are 11 dB05 dB and 10 dB respectively Hence we can conclude thatperformance of short polar codes could be improved with theproposed decoding algorithm
In addition it is noted fromTheorem 11 that the value of120575 depended on the transition probability of the channel andthe signal power will affect the performance of the proposeddecoding algorithmHence based onAlgorithm 1 in Figure 4the performance of our proposed decoding algorithm withdifferent 120575 and SNR is also simulated and the results areshown in Figure 5 It is noticed that the optimal values of 120575according to 119864
119887119873
0= 1 dB 119864
119887119873
0= 3 dB 119864
119887119873
0= 5 dB
and 119864119887119873
0= 7 dB are 25 30 50 and 55 respectively
Figure 4: Performance comparison of SC(3, 8), SCL(3, 8) ($L = 4$), BP(3, 8) (iteration number is 60), and the proposed decoding algorithm (Algorithms 1, 2, and 3, each (4, 8) with 1-bit CRC). Algorithm 1 means that the target function used to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31); $\delta$ in Theorem 11 takes the value of 4.

5.2. Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$ and $T_1$ introduced in Section 4.5 are counted, and the results are shown in Figure 6. It is noticed from Figure 6 that, with increasing SNR, the average numbers of the parameters $T_0$ and $T_1$ are
sharply decreasing. In particular, we can find that, in the high SNR region, both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N \log_2 N)$, and the time complexity approaches $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high
The Scientific World Journal 11
Figure 5: Performance of the proposed decoding algorithm with different $\delta$, for $E_b/N_0$ = 1, 3, 5, and 7 dB.
Figure 6: Average numbers of the parameters $T_0$ and $T_1$ with $\delta = 4$, for Algorithms 1, 2, and 3.
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm is close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
6. Conclusion
In this paper we proposed a parallel decoding algorithmbased on error checking and correcting to improve the
Figure 7: Space and time complexity comparison of SC(3, 8), SCL(3, 8) ($L = 4$), BP(3, 8) (iteration number is 60), and Algorithm 1 ($\delta = 4$).
performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes and, by delving into the problem of solving these equations, introduced a method to check the errors in the input nodes through the solutions of the equations. To further correct the checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution, with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with space and time complexity approaching $O(N \log_2 N)$ and $O(NM)$, respectively ($M$ is the number of frozen nodes), in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.
It is worth mentioning that we investigated error correcting only for short polar codes; for long codes, the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix
A. Proof of Theorem 1
We can get the inverse of $\mathbf{F}_2$ through the linear transformation of matrices; that is, $\mathbf{F}_2^{-1} = \left[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]$. Furthermore, we have
$$
\left(\mathbf{F}_2^{\otimes 2}\right)^{-1} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{F}_2^{-1} & \mathbf{0}_2 \\ -\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} & \mathbf{F}_2^{-1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix} = \mathbf{F}_2^{\otimes 2}, \tag{A.1}
$$
where all operations are over GF(2), so that the minus sign can be dropped and $\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} = \mathbf{F}_2^{-1} = \mathbf{F}_2$. Based on mathematical induction, we will have
$$
\left(\mathbf{F}_2^{\otimes n}\right)^{-1} = \mathbf{F}_2^{\otimes n}. \tag{A.2}
$$
The inverse of $\mathbf{G}_N$ can then be expressed as
$$
\mathbf{G}_N^{-1} = \left(\mathbf{B}_N \mathbf{F}_2^{\otimes n}\right)^{-1} = \left(\mathbf{F}_2^{\otimes n}\right)^{-1} \mathbf{B}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N^{-1}. \tag{A.3}
$$
Since $\mathbf{B}_N$ is a bit-reversal permutation matrix, by elementary matrix transformations there is $\mathbf{B}_N^{-1} = \mathbf{B}_N$. Hence we have
$$
\mathbf{G}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N. \tag{A.4}
$$
It is noticed from Proposition 16 of [1] that $\mathbf{F}_2^{\otimes n} \mathbf{B}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$; therefore, there is
$$
\mathbf{G}_N^{-1} = \mathbf{G}_N. \tag{A.5}
$$
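For a small block length, the chain (A.1)–(A.5) can be verified directly. The following sketch (helper names are ours, not from the paper) builds $\mathbf{F}_2^{\otimes n}$ and the bit-reversal matrix $\mathbf{B}_N$ for $N = 8$ and checks that both $\mathbf{F}_2^{\otimes n}$ and $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$ are self-inverse over GF(2).

```python
import numpy as np

def polar_kernel_power(n):
    """F_2^{(x)n}: n-fold Kronecker power of F_2 = [[1, 0], [1, 1]] over GF(2)."""
    F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    F = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        F = np.kron(F, F2) % 2
    return F

def bit_reversal_matrix(n):
    """B_N: permutation matrix reversing the n-bit representation of each index."""
    N = 1 << n
    B = np.zeros((N, N), dtype=np.uint8)
    for i in range(N):
        B[i, int(format(i, f'0{n}b')[::-1], 2)] = 1
    return B

n = 3
N = 1 << n
F = polar_kernel_power(n)
B = bit_reversal_matrix(n)
G = (B @ F) % 2                               # G_N = B_N F_2^{(x)n}

# (A.2): F_2^{(x)n} is its own inverse over GF(2)
assert np.array_equal((F @ F) % 2, np.eye(N, dtype=np.uint8))
# (A.5): G_N^{-1} = G_N, i.e., G_N G_N = I over GF(2)
assert np.array_equal((G @ G) % 2, np.eye(N, dtype=np.uint8))
```

The second assertion also exercises (A.4) implicitly, since it relies on $\mathbf{B}_N$ being an involution and on the commutation property quoted from [1].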
B. Proof of Theorem 4

We assume that the number of leaf nodes of the frozen node $v(i,j)$ is $Q$; that is, $Q = |\mathbb{V}^{L}_{v(i,j)}|$. If $Q = 2$, based on (25), there is
$$
p_{v(i,j)}(0) = p_{v_0}(0)\, p_{v_1}(0) + p_{v_0}(1)\, p_{v_1}(1),
$$
$$
p_{v(i,j)}(1) = p_{v_0}(0)\, p_{v_1}(1) + p_{v_0}(1)\, p_{v_1}(0), \tag{B.1}
$$
where $v_0, v_1 \in \mathbb{V}^{L}_{v(i,j)}$. Based on the above equations, we have
$$
p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \tag{B.2}
$$
Therefore, by mathematical induction, when $Q > 2$ we will have
$$
p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} \left(p_{v_k}(0) - p_{v_k}(1)\right), \tag{B.3}
$$
where $v_k \in \mathbb{V}^{L}_{v(i,j)}$.

To prove Theorem 4, we assume, without loss of generality, that the values of all the nodes in $\mathbb{V}^{L}_{v(i,j)}$ are 0. That is to say, when the node $v_k \in \mathbb{V}^{L}_{v(i,j)}$ is correct, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $\mathbb{V}^{LO}_{v(i,j)} \subseteq \mathbb{V}^{L}_{v(i,j)}$, with $|\mathbb{V}^{LO}_{v(i,j)}|$ an odd number, such that
$$
\forall v_k \in \mathbb{V}^{LO}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.4}
$$
While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $\mathbb{V}^{LE}_{v(i,j)} \subseteq \mathbb{V}^{L}_{v(i,j)}$, with $|\mathbb{V}^{LE}_{v(i,j)}|$ an even number, such that
$$
\forall v_k \in \mathbb{V}^{LE}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.5}
$$
So the proof of Theorem 4 is completed.
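The product identity (B.3) and the sign argument above can be checked numerically. The sketch below is ours (the helper name and the example leaf probabilities are illustrative, not from the paper); it combines leaf probabilities under the even-parity constraint of (25) and verifies that an odd number of unreliable leaves makes the frozen node's difference non-positive.

```python
import numpy as np
from itertools import combinations

def frozen_node_probs(p0):
    """Even-parity combination of Q leaf probabilities, as in (25):
    p(0) sums products over even-size sets of leaves taking value 1,
    p(1) over odd-size sets."""
    Q = len(p0)
    p1 = 1.0 - p0
    pv0 = pv1 = 0.0
    for m in range(Q + 1):
        for ones in combinations(range(Q), m):
            prod = 1.0
            for k in range(Q):
                prod *= p1[k] if k in ones else p0[k]
            if m % 2 == 0:
                pv0 += prod
            else:
                pv1 += prod
    return pv0, pv1

# Example leaf probabilities p_{v_k}(0); the third leaf is "unreliable"
p0 = np.array([0.9, 0.8, 0.3, 0.95, 0.7])
pv0, pv1 = frozen_node_probs(p0)

# Identity (B.3): p(0) - p(1) equals the product of the leaf differences
assert np.isclose(pv0 - pv1, np.prod(2.0 * p0 - 1.0))
# Theorem 4's sign rule: an odd number of unreliable leaves makes the
# frozen node's difference negative
assert pv0 - pv1 < 0
```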
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one of the column vectors of $\mathbf{G}_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)^{T}$. Hence, based on the proof of Theorem 1, we have
$$
\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \ge N - K,
\qquad
\operatorname{rank}\left(\left[\mathbf{E}_{M,N}\ \left(\gamma_0^{M-1}\right)^{T}\right]\right) \ge N - K. \tag{C.1}
$$
In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence we further have
$$
\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \le N - K,
\qquad
\operatorname{rank}\left(\left[\mathbf{E}_{M,N}\ \left(\gamma_0^{M-1}\right)^{T}\right]\right) \le N - K. \tag{C.2}
$$
Therefore, the proof of (17) is completed.
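The rank argument can be made concrete with a small GF(2) computation. In the sketch below (our own construction; the (8, 4) frozen index set $\{0, 1, 2, 4\}$ is an assumption for illustration), coefficient vectors are taken from columns of $\mathbf{G}_8^{-1} = \mathbf{G}_8$, rows standing in for the intermediate-column equations are formed as linear combinations of them, and the rank stays $N - K$.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over GF(2) via Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    nrows, ncols = A.shape
    for c in range(ncols):
        pivot = next((r for r in range(rank, nrows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move the pivot row up
        for r in range(nrows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # clear column c elsewhere
        rank += 1
    return rank

# G_8 = B_8 F_2^{(x)3}, built as in Appendix A (self-inverse by Theorem 1)
F2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
F = np.kron(np.kron(F2, F2), F2) % 2
perm = [int(format(i, '03b')[::-1], 2) for i in range(8)]
G = F[perm, :]                                # row bit-reversal = B_8 F

# Leftmost-column equations: columns of G_8^{-1} = G_8 at the frozen positions
rows = [G[:, i].copy() for i in (0, 1, 2, 4)]
rows.append(rows[0] ^ rows[1])                # stand-ins for intermediate-column
rows.append(rows[1] ^ rows[2] ^ rows[3])      # equations: linear combinations
E = np.array(rows)
assert gf2_rank(E) == 4                       # rank stays N - K = 4
```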
D. Proof of Theorem 11

To prove Theorem 11, we assume, without loss of generality, that the real values of all the input nodes are 0. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can easily be proved that there exists a positive constant $\beta_0 > 1$ such that
$$
\forall v(n,k) \in \mathbb{V}_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \tag{D.1}
$$
where $v(n,k)$ is an input node and $\mathbb{V}_I$ is the set of input nodes of the decoder. That is to say, for a frozen node $v(i,j)$ with leaf-node set $\mathbb{V}^{L}_{v(i,j)}$, we have
$$
\forall v_k \in \mathbb{V}^{L}_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \tag{D.2}
$$
Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as
$$
p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \; \sum_{\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m-1}\}}} p_{v_{k_r}}(0),
$$
$$
p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \; \sum_{\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \; \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\ldots,k_{2m}\}}} p_{v_{k_r}}(0), \tag{D.3}
$$
where $v_{k_l}, v_{k_r} \in \mathbb{V}^{L}_{v(i,j)}$. Hence we further have
$$
\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m-1}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m-1} \bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}{\sum_{m=0}^{Q/2-1} \sum_{\{k_0,\ldots,k_{2m}\} \subseteq \{0,\ldots,Q-1\}} \prod_{l=0}^{2m} \bigl(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\bigr)}. \tag{D.4}
$$
With the definition of the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, with $1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0$, the above equation can be written as
$$
\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = \frac{1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \tag{D.5}
$$
To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions
$$
h\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,
$$
$$
g\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \tag{D.6}
$$
Then the derivative of $f$ with respect to $\varphi_k$ will be
$$
\frac{\partial f}{\partial \varphi_k} = \frac{\left(\partial h/\partial \varphi_k\right) g - \left(\partial g/\partial \varphi_k\right) h}{g^2} = \frac{g_{\varphi_k=0}\, g - h_{\varphi_k=0}\, h}{g^2} = \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \tag{D.7}
$$
where $g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$. Based on the solution of the equations $\partial f/\partial\varphi_0 = 0$, $\partial f/\partial\varphi_1 = 0$, $\ldots$, $\partial f/\partial\varphi_{Q-1} = 0$, we get the extreme point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ as $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\ldots$, and $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\ldots$, and $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $\mathbb{V}^{L}_{v(i,j)}$. So the proof of Theorem 11 is completed.
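The behavior of $f$ and the threshold $\delta$ can be explored numerically. In the sketch below (the function names and the values $\beta_0 = 2.5$, $Q = 4$ are our illustrative assumptions), $f$ is evaluated by folding the pairwise parity combination of likelihood ratios, and the extreme-point and maximum claims are checked by random sampling over the box $[1/\beta_0, \beta_0]^Q$.

```python
import numpy as np
from functools import reduce

def combine(phi_a, phi_b):
    """Even-parity combination of two likelihood ratios phi = p(0)/p(1):
    p(0) = pa0*pb0 + pa1*pb1 and p(1) = pa0*pb1 + pa1*pb0 give
    the ratio (phi_a*phi_b + 1) / (phi_a + phi_b)."""
    return (phi_a * phi_b + 1.0) / (phi_a + phi_b)

def f(phis):
    """Frozen-node ratio p_v(0)/p_v(1) from the leaf ratios, as in (D.5);
    the parity combination is associative, so a left fold suffices."""
    return reduce(combine, phis)

beta0, Q = 2.5, 4                     # beta0 > 1 bounds the leaf ratios, (D.2)
delta = f([beta0] * Q)                # delta = f(beta0, ..., beta0)

# f(1, ..., 1) = 1 is the interior extreme point found from (D.7)
assert np.isclose(f([1.0] * Q), 1.0)

# delta is the maximum of f over the box [1/beta0, beta0]^Q
rng = np.random.default_rng(0)
for _ in range(2000):
    phis = rng.uniform(1.0 / beta0, beta0, size=Q)
    assert f(phis) <= delta + 1e-12
```

The maximality check also follows from (B.3): writing $d_k = p_{v_k}(0) - p_{v_k}(1)$, the frozen-node ratio is $(1 + \prod_k d_k)/(1 - \prod_k d_k)$, which is largest when every $|d_k|$ is at its bound.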
Conflict of Interests
The authors declare that they do not have any commercialor associative interests that represent a conflict of interests inconnection with the work submitted
Acknowledgment
The authors would like to thank all the reviewers for theircomments and suggestions
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozgur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IRE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
rank (E119872119873
) le 119873 minus 119870
rank ([ E119872119873
(120574119872minus1
0)119879
]) le 119873 minus 119870
(C2)
Therefore the proof of (17) is completed
D Proof of Theorem 11
To proveTheorem 11 we assume that the real values of all theinput nodes are 0 without generality Given the conditionsof transition probability of the channel and constraint of thesignal power it can be easily proved that there exists a positiveconstant 120573
0gt 1 such that
forallV (119899 119896) isin V119868997904rArr
1
1205730
le119901V(119899119896) (0)
119901V(119899119896) (1)le 120573
0 (D1)
where V(119899 119896) is an input node and V119868is the input nodes set of
the decoder That is to say for a frozen node V(119894 119895) with a leafnodes set V119871
V(119894119895) we have
forallV119896isin V
119871
V(119894119895) 997904rArr1
1205730
le
119901V119896 (0)
119901V119896 (1)le 120573
0 (D2)
The Scientific World Journal 13
Based on (25) and the decoding tree of V(119894 119895) we have theprobability messages of V(119894 119895) as
119901V(119894119895) (0)
=
1198762minus1
sum
119898=0
sum
forall1198960 1198962119898minus1sube0119876minus1
2119898minus1
prod
119897=0
119901V119896119897(1)
119876minus2119898minus1
prod
119903=00le119896119903le119876minus1
119896119903notin1198960 1198962119898minus1
119901V119896119903(0)
119901V(119894119895) (1)
=
1198762minus1
sum
119898=0
sum
forall1198960 1198962119898sube0119876minus1
2119898
prod
119897=0
119901V119896119897(1)
119876minus2119898minus1
prod
119903=00le119896119903le119876minus1
119896119903notin1198960 1198962119898
119901V119896119903(0)
(D3)
where V119896119897 V
119896119903isin V119871
V(119894119895) Hence we further have
119901V(119894119895) (0)
119901V(119894119895) (1)
=
1 + sum1198762minus1
119898=1sumforall1198960 1198962119898minus1
sube0119876minus1
prod2119898minus1
119897=0(119901V119896119897
(0) 119901V119896l(1))
sum1198762minus1
119898=0sumforall1198960 1198962119898
sube0119876minus1
prod2119898
119897=0(119901V119896119897
(0) 119901V119896119897(1))
(D4)
With definition of variables 1205930
= 119901V0(0)119901V0(1)1205931
= 119901V1(0)119901V1(1) 120593119876minus1
= 119901V119876minus1(0)119901V119876minus1(1) 11205730
le
1205930 120593
1 120593
119876minus1le 120573
0 the above equation will be written as
119901V(119894119895) (0)
119901V(119894119895) (1)
= 119891 (1205930 120593
1 120593
119876minus1)
= (1 + sdot sdot sdot + 120593119876minus2
120593119876minus1
+ 1205930120593112059321205933
+ sdot sdot sdot + 120593119876minus4
120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot )
times (1205930+ sdot sdot sdot + 120593
119876minus1+ 120593
012059311205932
+ sdot sdot sdot + 120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot )minus1
(D5)
To take the derivative of 119891(1205930 120593
1 120593
119876minus1) we further
define functions as
ℎ (1205930 120593
1 120593
119876minus1)
= 1 + 12059301205931+ sdot sdot sdot + 120593
119876minus2120593119876minus1
+ 1205930120593112059321205933+ sdot sdot sdot + 120593
119876minus4120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot
119892 (1205930 120593
1 120593
119876minus1)
= 1205930+ sdot sdot sdot + 120593
119876minus1+ 120593
012059311205932+ sdot sdot sdot
+ 120593119876minus3
120593119876minus2
120593119876minus1
+ sdot sdot sdot
(D6)
Then the derivative of 119891(1205930 120593
1 120593
119876minus1) with respect to 120593
119896
will be
120597119891
120597120593119896
=(120597ℎ120597120593
119896) 119892 minus (120597119892120597120593
119896) ℎ
1198922
=
119892119892120593119896=0
minus ℎℎ120593119896=0
1198922=
1198922
120593119896=0minus ℎ
2
120593119896=0
1198922
(D7)
where 119892120593119896=0
= 119892(1205930 120593
119896minus1 0 120593
119896+1 120593
119876minus1) and ℎ
120593119896=0=
ℎ(1205930 120593
119896minus1 0 120593
119896+1 120593
119876minus1) Based on solution of the
equations (1205971198911205971205930) = 0 (120597119891120597120593
1) = 0 and (120597119891120597120593
119876minus1) =
0 we get the extreme value point of 119891(1205930 120593
1 120593
119876minus1) as
1205930
= 1205931
= sdot sdot sdot = 120593119876minus1
= 1 Based on the analysis of themonotonicity of119891(120593
0 120593
1 120593
119876minus1) we can get themaximum
value as 120575 = 119891(1205730 120573
0 120573
0⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟⏟
119876
) What is more we can also get
that when 119891(1205930 120593
1 120593
119876minus1) ge 120575 there is 120593
0gt 1 120593
1gt 1
and120593119876minus1
gt 1That is to say when (119901V(119894119895)(0)119901V(119894119895)(1)) ge 120575 wewill have 119901V0(0) gt 119901V0(1) 119901V1(0) gt 119901V1(1) and 119901V119876minus1(0) gt
119901V119876minus1(1) that is there is no error in V119871
V(119894119895) So the proof ofTheorem 11 is completed
Conflict of Interests
The authors declare that they do not have any commercialor associative interests that represent a conflict of interests inconnection with the work submitted
Acknowledgment
The authors would like to thank all the reviewers for theircomments and suggestions
References
[1] E Arikan ldquoChannel polarization a method for constructingcapacity-achieving codes for symmetric binary-input memory-less channelsrdquo IEEE Transactions on InformationTheory vol 55no 7 pp 3051ndash3073 2009
[2] EArikan andE Telatar ldquoOn the rate of channel polarizationrdquo inProceedings of the IEEE International Symposiumon InformationTheory (ISIT rsquo09) pp 1493ndash1495 June-July 2009
[3] S B Korada E Sas oglu and R Urbanke ldquoPolar codescharacterization of exponent bounds and constructionsrdquo IEEETransactions on Information Theory vol 56 no 12 pp 6253ndash6264 2010
[4] I Tal andAVardy ldquoList decoding of polar codesrdquo inProceedingsof the IEEE International Symposium on Information TheoryProceedings (ISIT rsquo11) pp 1ndash5 St Petersburg Russia August2011
[5] I Tal and A Vardy ldquoList decoding of polar codesrdquo httparxivorgabs12060050
[6] K Chen K Niu and J-R Lin ldquoImproved successive cancella-tion decoding of polar codesrdquo IEEE Transactions on Communi-cations vol 61 no 8 pp 3100ndash3107 2013
[7] KNiu andKChen ldquoCRC-aided decoding of polar codesrdquo IEEECommunications Letters vol 16 no 10 pp 1668ndash1671 2012
14 The Scientific World Journal
[8] A Alamdar-Yazdi and F R Kschischang ldquoA simplified succes-sive-cancellation decoder for polar codesrdquo IEEE Communica-tions Letters vol 15 no 12 pp 1378ndash1380 2011
[9] G Sarkis and W J Gross ldquoIncreasing the throughput of polardecodersrdquo IEEE Communications Letters vol 17 no 4 pp 725ndash728 2013
[10] G Sarkis P Giard A Vardy CThibeault andW J Gross ldquoFastpolar decoders algorithm and implementationrdquo IEEE Journalon Selected Areas in Communications vol 32 no 5 pp 946ndash9572014
[11] P Giard G Sarkis CThibeault andW J Gross ldquoA fast softwarepolar decoderrdquo httparxivorgabs13066311
[12] E Arikan H Kim G Markarian U Ozur and E PoyrazldquoPerformance of short polar codes under ML decodingrdquo inProceedings of the ICT-Mobile Summit Conference June 2009
[13] S Kahraman andM E Celebi ldquoCode based efficientmaximum-likelihood decoding of short polar codesrdquo in Proceedings of theIEEE International Symposium on InformationTheory (ISIT 12)pp 1967ndash1971 Cambridge Mass USA July 2012
[14] N Goela S B Korada and M Gastpar ldquoOn LP decoding ofpolar codesrdquo in Proceedings of the IEEE Information TheoryWorkshop (ITW rsquo10) pp 1ndash5 Dublin Ireland September 2010
[15] E Arikan ldquoA performance comparison of polar codes and reed-muller codesrdquo IEEE Communications Letters vol 12 no 6 pp447ndash449 2008
[16] N Hussami S B Korada and R Urbanke ldquoPerformance ofpolar codes for channel and source codingrdquo in Proceedings ofthe IEEE International Symposium on Information Theory (ISITrsquo09) pp 1488ndash1492 July 2009
[17] E Arikan ldquoPolar codes a pipelined implementationrdquo in Pro-ceedings of the 4th International Symposium on BroadbandCommunication (ISBC rsquo10) pp 11ndash14 July 2010
[18] A Eslami and H Pishro-Nik ldquoOn bit error rate performanceof polar codes in finite regimerdquo in Proceedings of the 48thAnnual Allerton Conference on Communication Control andComputing (Allerton rsquo10) pp 188ndash194 October 2010
[19] A Eslami and H Pishro-Nik ldquoOn finite-length performance ofpolar codes stopping sets error floor and concatenated designrdquoIEEE Transactions on Communications vol 61 no 3 pp 919ndash929 2013
[20] E Arikan ldquoSystematic polar codingrdquo IEEE CommunicationsLetters vol 15 no 8 pp 860ndash862 2011
[21] J L Massey ldquoCatastrophic error-propagation in convolutionalcodesrdquo in Proceedings of the 11th Midwest Symposium on CircuitTheory pp 583ndash587 January 1968
[22] R G Gallager ldquoLow-density parity-check codesrdquo IEEE Trans-actions on Information Theory vol 8 pp 21ndash28 1962
[23] D Divsalar and C Jones ldquoCTH08-4 protograph LDPC codeswith node degrees at least 3rdquo in Proceedings of the IEEE GlobalTelecommunications Conference (GLOBECOM rsquo06) pp 1ndash5 SanFrancisco Calif USA December 2006
Input: the received vector $y_0^{N-1}$
Output: the decoded codeword $\hat{u}_0^{N-1}$
(1) Getting the probability messages $q_0^{N-1}(0)$ and $q_0^{N-1}(1)$ with the received vector $y_0^{N-1}$
(2) Getting the probability messages of each frozen node $v_F$
(3) Getting the error indicator vectors set $C$ with Algorithm 1
(4) for each $c_k \in C$ do
(5)   Correcting the errors indicated by $c_k$ with (26)
(6)   Recalculating the probability messages of all the nodes of the decoder
(7) end for
(8) Getting the optimal error indicator vector for the decoding
(9) Getting the decoded codeword $\hat{u}_0^{N-1}$ by hard decision
(10) return $\hat{u}_0^{N-1}$

Algorithm 2: Decoding algorithm based on error checking and correcting.
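The candidate-testing loop of steps (4)–(9) above can be sketched in code. This is a toy rendering only: the XOR-based correction and the parity score are simplified stand-ins for the message modification of (26) and the recalculation of probability messages, and `decode_with_candidates` is a hypothetical helper name, not the paper's implementation.

```python
# Toy rendering of the Algorithm 2 control flow: try each candidate error
# indicator vector, apply its correction, re-score the result, and keep
# the best candidate.  The XOR correction and the parity-based score are
# simplified stand-ins for steps (5), (6), and (8) of the listing above.
def decode_with_candidates(received, candidates, score):
    best, best_score = None, float("-inf")
    for c in candidates:                        # steps (4)-(7)
        corrected = [b ^ f for b, f in zip(received, c)]
        s = score(corrected)                    # stand-in for recomputed messages
        if s > best_score:                      # step (8): keep the optimal vector
            best, best_score = corrected, s
    return best                                 # step (9): already a hard decision

# even-parity check as a stand-in target function
best = decode_with_candidates(
    [1, 0, 1, 1],
    [[0, 0, 0, 0], [0, 0, 0, 1]],
    lambda w: -(sum(w) % 2),
)
```

Here the second candidate flips the last bit and restores even parity, so it is the one kept.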
5 Simulation Results
In this section, Monte Carlo simulation is provided to show the performance and complexity of the proposed decoding algorithm. In the simulation, BPSK modulation and the additive white Gaussian noise (AWGN) channel are assumed. The code length is $N = 2^3 = 8$, the code rate $R$ is 0.5, and the indices of the information bits are the same as in [1].
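The channel setup described above (BPSK over AWGN at rate $R = 0.5$) can be sketched as follows; `channel_messages` is a hypothetical helper, assumed here, that turns an $E_b/N_0$ value in dB into the per-bit probability messages $q(0)$, $q(1)$ consumed by the decoder.

```python
import math, random

# Hedged sketch of the simulation setup: BPSK over AWGN; converts
# Eb/N0 (dB) into normalized per-bit channel probability messages.
def channel_messages(bits, ebn0_db, rate=0.5):
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * rate * ebn0))      # noise std for BPSK with Es = 1
    msgs = []
    for b in bits:
        y = (1 - 2 * b) + random.gauss(0, sigma)  # BPSK mapping: 0 -> +1, 1 -> -1
        p0 = math.exp(-(y - 1) ** 2 / (2 * sigma ** 2))
        p1 = math.exp(-(y + 1) ** 2 / (2 * sigma ** 2))
        msgs.append((p0 / (p0 + p1), p1 / (p0 + p1)))
    return msgs

random.seed(0)
q = channel_messages([0] * 8, ebn0_db=3.0)        # N = 8 all-zero codeword
```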
5.1 Performance. To compare the performance of the SC, SCL, BP, and the proposed decoding algorithms, three optimization targets with a 1-bit CRC are used to get the optimal error indicator vector in our simulation, and the results are shown in Figure 4.

As shown by Algorithms 1, 2, and 3 in Figure 4, the proposed decoding algorithm yields almost the same performance with the three different optimization targets. Furthermore, we can find that, compared with the SC, SCL, and BP decoding algorithms, the proposed decoding algorithm achieves better performance. Particularly, in the low signal-to-noise-ratio (SNR) region, that is, low $E_b/N_0$, the proposed algorithm provides a higher SNR advantage; for example, when the bit error rate (BER) is $10^{-3}$, Algorithm 1 provides SNR advantages of 1.3 dB, 0.6 dB, and 1.4 dB, and when the BER is $10^{-4}$, the SNR advantages are 1.1 dB, 0.5 dB, and 1.0 dB, respectively. Hence, we can conclude that the performance of short polar codes can be improved with the proposed decoding algorithm.
In addition, it is noted from Theorem 11 that the value of $\delta$, which depends on the transition probability of the channel and the signal power, will affect the performance of the proposed decoding algorithm. Hence, based on Algorithm 1 in Figure 4, the performance of our proposed decoding algorithm with different $\delta$ and SNR is also simulated, and the results are shown in Figure 5. It is noticed that the optimal values of $\delta$ for $E_b/N_0 = 1$ dB, $E_b/N_0 = 3$ dB, $E_b/N_0 = 5$ dB, and $E_b/N_0 = 7$ dB are 2.5, 3.0, 5.0, and 5.5, respectively.
Figure 4: Performance comparison of the SC, SCL ($L = 4$), BP (iteration number is 60), and the proposed decoding algorithms. Algorithm 1 means that the target function used to get the optimal error indicator vector is (29), Algorithm 2 means that the target function is (30), and Algorithm 3 means that the target function is (31); $\delta$ in Theorem 11 takes the value of 4.

Figure 5: Performance of the proposed decoding algorithm with different $\delta$.

Figure 6: Average numbers of the parameters $T_0$ and $T_1$ with $\delta = 4$.

5.2 Complexity. To estimate the complexity of the proposed decoding algorithm, the average numbers of the parameters $T_0$ and $T_1$ indicated in Section 4.5 are counted and shown in Figure 6.

It is noticed from Figure 6 that, with the increase of the SNR, the average numbers of the parameters $T_0$ and $T_1$ decrease sharply. In particular, we can find that, in the high SNR region, both $T_0$ and $T_1$ approach a number less than 1. In this case, the space complexity of the algorithm will be $O(N \log_2 N)$, and the time complexity approaches $O(NM)$. In addition, we further compare the space and time complexity of Algorithm 1 ($\delta = 4$) with those of the SC, SCL ($L = 4$), and BP decoding algorithms, the results of which are shown in Figure 7. It is noticed that, in the high SNR region, the space complexity of the proposed algorithm is almost the same as those of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm will be close to $O(NM)$. All of the above results suggest the effectiveness of our proposed decoding algorithm.
Figure 7: Space and time complexity comparison of the SC, SCL ($L = 4$), BP (iteration number is 60), and Algorithm 1 ($\delta = 4$) decoding algorithms.

6 Conclusion

In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the
performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, by delving into the problem of solving these equations, we introduced a method to check the errors in the input nodes through the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with space and time complexity approaching $O(N \log_2 N)$ and $O(NM)$ ($M$ is the number of frozen nodes) in the high signal-to-noise-ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

It is worth mentioning that we only investigated error correcting for short polar codes, while for long codes the method in this paper will yield higher complexity. Hence, in the future, we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix
A Proof of Theorem 1
We can get the inverse of $\mathbf{F}_2$ through the linear transformation of matrices; that is, $\mathbf{F}_2^{-1} = \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix}$. Furthermore, we have
$$\left(\mathbf{F}_2^{\otimes 2}\right)^{-1} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{F}_2^{-1} & \mathbf{0}_2 \\ -\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} & \mathbf{F}_2^{-1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix} = \mathbf{F}_2^{\otimes 2}. \quad (A.1)$$
Based on mathematical induction, we will have
$$\left(\mathbf{F}_2^{\otimes n}\right)^{-1} = \mathbf{F}_2^{\otimes n}. \quad (A.2)$$
The inverse of $\mathbf{G}_N$ can be expressed as
$$\mathbf{G}_N^{-1} = \left(\mathbf{B}_N \mathbf{F}_2^{\otimes n}\right)^{-1} = \left(\mathbf{F}_2^{\otimes n}\right)^{-1} \mathbf{B}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N^{-1}. \quad (A.3)$$
Since $\mathbf{B}_N$ is a bit-reversal permutation matrix, by elementary transformation of matrices there is $\mathbf{B}_N^{-1} = \mathbf{B}_N$. Hence, we have
$$\mathbf{G}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N. \quad (A.4)$$
It is noticed from Proposition 16 of [1] that $\mathbf{F}_2^{\otimes n} \mathbf{B}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$; therefore, there is
$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \quad (A.5)$$
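The chain (A.1)–(A.5) can be spot-checked numerically for $n = 3$ ($N = 8$): building $\mathbf{G}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$ with the bit-reversal permutation of [1] and working mod 2, the product $\mathbf{G}_N \mathbf{G}_N$ should be the identity.

```python
import numpy as np

# Numerical check of Theorem 1 (G_N^{-1} = G_N over GF(2)) for n = 3, N = 8:
# G_N = B_N F^{(x)n} with F = [[1,0],[1,1]] and B_N the bit-reversal
# permutation, so G_N @ G_N mod 2 should equal the identity matrix.
def polar_G(n):
    F = np.array([[1, 0], [1, 1]], dtype=int)
    Fn = np.array([[1]], dtype=int)
    for _ in range(n):
        Fn = np.kron(Fn, F)                       # F^{(x)n} by repeated Kronecker product
    N = 2 ** n
    B = np.zeros((N, N), dtype=int)
    for i in range(N):
        j = int(format(i, f"0{n}b")[::-1], 2)     # bit-reversed row index
        B[i, j] = 1
    return (B @ Fn) % 2

G = polar_G(3)
```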
B Proof of Theorem 4
We assume the number of leaf nodes of the frozen node $v(i,j)$ is $Q$; that is, $Q = |V^L_{v(i,j)}|$. If $Q = 2$, based on (25) there is
$$p_{v(i,j)}(0) = p_{v_0}(0)\,p_{v_1}(0) + p_{v_0}(1)\,p_{v_1}(1),$$
$$p_{v(i,j)}(1) = p_{v_0}(0)\,p_{v_1}(1) + p_{v_0}(1)\,p_{v_1}(0), \quad (B.1)$$
where $v_0, v_1 \in V^L_{v(i,j)}$. Based on the above equations, we have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \quad (B.2)$$
Therefore, by mathematical induction, when $Q > 2$ we will have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} \left(p_{v_k}(0) - p_{v_k}(1)\right), \quad (B.3)$$
where $v_k \in V^L_{v(i,j)}$.

To prove Theorem 4, we assume that the values of all the nodes in $V^L_{v(i,j)}$ are set to 0, without loss of generality. That is to say, when a node $v_k \in V^L_{v(i,j)}$ is right, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on the above equation, when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $V^{LO}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LO}_{v(i,j)}|$ an odd number, such that
$$\forall v_k \in V^{LO}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.4)$$
While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $V^{LE}_{v(i,j)} \subseteq V^L_{v(i,j)}$, with $|V^{LE}_{v(i,j)}|$ an even number, such that
$$\forall v_k \in V^{LE}_{v(i,j)} \Longrightarrow p_{v_k}(0) \le p_{v_k}(1). \quad (B.5)$$
So the proof of Theorem 4 is completed.
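Identity (B.3) can be spot-checked numerically. The `combine` function below is our rendering of the pairwise rule in (B.1) (an assumption about the form of (25)), applied iteratively to $Q = 6$ random leaf distributions; the resulting difference $p(0) - p(1)$ should equal the product of the per-leaf differences.

```python
import random

# Numerical check of (B.3): combining leaf probability pairs with the
# parity convolution of (B.1) gives a difference p(0) - p(1) equal to the
# product of the per-leaf differences p_vk(0) - p_vk(1).
def combine(p, q):
    return (p[0] * q[0] + p[1] * q[1], p[0] * q[1] + p[1] * q[0])

random.seed(7)
leaves = []
for _ in range(6):                    # Q = 6 leaf nodes
    a = random.random()
    leaves.append((a, 1.0 - a))

acc = leaves[0]
for p in leaves[1:]:
    acc = combine(acc, p)             # fold the convolution over all leaves

prod = 1.0
for p0, p1 in leaves:
    prod *= p0 - p1                   # right-hand side of (B.3)
```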
C Proof of Theorem 9
It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to one column vector of $\mathbf{G}_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1\ 1\ 1\ 1\ 1\ 1\ 1\ 1)^T$. Hence, based on the proof of Theorem 1, we have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \ge N - K,$$
$$\operatorname{rank}\left(\begin{bmatrix} \mathbf{E}_{M,N} \\ \left(\gamma_0^{M-1}\right)^T \end{bmatrix}\right) \ge N - K. \quad (C.1)$$
In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \le N - K,$$
$$\operatorname{rank}\left(\begin{bmatrix} \mathbf{E}_{M,N} \\ \left(\gamma_0^{M-1}\right)^T \end{bmatrix}\right) \le N - K. \quad (C.2)$$
Therefore, the proof of (17) is completed.
D Proof of Theorem 11
To prove Theorem 11, we assume that the real values of all the input nodes are 0, without loss of generality. Given the conditions of the transition probability of the channel and the constraint of the signal power, it can be easily proved that there exists a positive constant $\beta_0 > 1$ such that
$$\forall v(n,k) \in V_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \quad (D.1)$$
where $v(n,k)$ is an input node and $V_I$ is the input nodes set of the decoder. That is to say, for a frozen node $v(i,j)$ with a leaf nodes set $V^L_{v(i,j)}$, we have
$$\forall v_k \in V^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \quad (D.2)$$
Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as
$$p_{v(i,j)}(0) = \sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0,\dots,k_{2m-1}\} \subseteq \{0,\dots,Q-1\}} \prod_{l=0}^{2m-1} p_{v_{k_l}}(1) \prod_{\substack{r=0,\ 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\dots,k_{2m-1}\}}}^{Q-2m-1} p_{v_{k_r}}(0),$$
$$p_{v(i,j)}(1) = \sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0,\dots,k_{2m}\} \subseteq \{0,\dots,Q-1\}} \prod_{l=0}^{2m} p_{v_{k_l}}(1) \prod_{\substack{r=0,\ 0 \le k_r \le Q-1 \\ k_r \notin \{k_0,\dots,k_{2m}\}}}^{Q-2m-1} p_{v_{k_r}}(0), \quad (D.3)$$
where $v_{k_l}, v_{k_r} \in V^L_{v(i,j)}$. Hence, we further have
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{1 + \sum_{m=1}^{Q/2-1} \sum_{\forall \{k_0,\dots,k_{2m-1}\} \subseteq \{0,\dots,Q-1\}} \prod_{l=0}^{2m-1} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}{\sum_{m=0}^{Q/2-1} \sum_{\forall \{k_0,\dots,k_{2m}\} \subseteq \{0,\dots,Q-1\}} \prod_{l=0}^{2m} \left(p_{v_{k_l}}(0)/p_{v_{k_l}}(1)\right)}. \quad (D.4)$$
With the definition of the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\dots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, with $1/\beta_0 \le \varphi_0, \varphi_1, \dots, \varphi_{Q-1} \le \beta_0$, the above equation can be written as
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f\left(\varphi_0, \varphi_1, \dots, \varphi_{Q-1}\right) = \frac{1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}{\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots}. \quad (D.5)$$
To take the derivative of $f(\varphi_0, \varphi_1, \dots, \varphi_{Q-1})$, we further define the functions
$$h\left(\varphi_0, \varphi_1, \dots, \varphi_{Q-1}\right) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$
$$g\left(\varphi_0, \varphi_1, \dots, \varphi_{Q-1}\right) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots. \quad (D.6)$$
Then the derivative of $f(\varphi_0, \varphi_1, \dots, \varphi_{Q-1})$ with respect to $\varphi_k$ will be
$$\frac{\partial f}{\partial \varphi_k} = \frac{\left(\partial h/\partial \varphi_k\right) g - \left(\partial g/\partial \varphi_k\right) h}{g^2} = \frac{g\, g_{\varphi_k=0} - h\, h_{\varphi_k=0}}{g^2} = \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \quad (D.7)$$
where $g_{\varphi_k=0} = g(\varphi_0, \dots, \varphi_{k-1}, 0, \varphi_{k+1}, \dots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \dots, \varphi_{k-1}, 0, \varphi_{k+1}, \dots, \varphi_{Q-1})$. Based on the solution of the equations $\partial f/\partial \varphi_0 = 0$, $\partial f/\partial \varphi_1 = 0$, $\dots$, and $\partial f/\partial \varphi_{Q-1} = 0$, we get the extreme value point of $f(\varphi_0, \varphi_1, \dots, \varphi_{Q-1})$ as $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \dots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \dots, \beta_0}_{Q})$. What is more, we can also get that, when $f(\varphi_0, \varphi_1, \dots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\dots$, and $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\dots$, and $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $V^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
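For $Q = 2$, $f$ reduces to $f(\varphi_0, \varphi_1) = (1 + \varphi_0\varphi_1)/(\varphi_0 + \varphi_1)$, and the claim that $\delta = f(\beta_0, \dots, \beta_0)$ upper-bounds $f$ over the box $[1/\beta_0, \beta_0]^Q$ can be spot-checked on a grid. This is a numerical sketch of the monotonicity analysis under an assumed $\beta_0 = 2$, not a proof.

```python
# Grid spot-check of the maximum value in the proof above for Q = 2:
# delta = f(beta0, beta0) = (1 + beta0^2) / (2 * beta0) should upper-bound
# f over the whole box [1/beta0, beta0]^2, with the interior extreme
# point at phi0 = phi1 = 1 where f = 1.
beta0 = 2.0
delta = (1 + beta0 ** 2) / (2 * beta0)

def f(phi0, phi1):
    return (1 + phi0 * phi1) / (phi0 + phi1)

steps = 200
lo, hi = 1 / beta0, beta0
grid = [lo + k * (hi - lo) / steps for k in range(steps + 1)]
max_f = max(f(a, b) for a in grid for b in grid)
```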
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
Figure 5: Performance of the proposed decoding algorithm with different δ (BER versus δ, for Eb/N0 = 1, 3, 5, and 7 dB).
Figure 6: Average number of parameters T0 and T1 with δ = 4 (versus Eb/N0, for Algorithms 1, 2, and 3).
SNR region, the space complexity of the proposed algorithm is almost the same as that of the SC, SCL, and BP decoding algorithms, and the time complexity of the proposed algorithm is close to O(NM). All of the above results suggest the effectiveness of our proposed decoding algorithm.
6 Conclusion
In this paper, we proposed a parallel decoding algorithm based on error checking and correcting to improve the performance of short polar codes. To enhance the error-correcting capacity of the decoding algorithm, we derived the error-checking equations generated on the basis of the frozen nodes, and, through delving into the problem of solving these equations, we introduced a method to check the errors in the input nodes by the solutions of the equations. To further correct those checked errors, we adopted the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulated a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we used a parallel method based on the decoding tree to calculate the probability messages of all the nodes in the decoder. Numerical results showed that the proposed decoding algorithm achieves better performance than the existing decoding algorithms, with the space and time complexity approaching O(N log₂ N) and O(NM), respectively (M is the number of frozen nodes), in the high signal-to-noise ratio (SNR) region, which suggests the effectiveness of the proposed decoding algorithm.

Figure 7: Space and time complexity comparison of SC(3,8), SCL(3,8) (L = 4), BP(3,8) (60 iterations), and Algorithm 1(4,8) with CRC-1 (δ = 4).

It is worth mentioning that we investigated error correcting only for short polar codes; for long code lengths, the method in this paper will yield higher complexity. Hence, in future work we will extend the idea of error correcting in this paper to long code lengths, in order to further improve the performance of polar codes.
Appendix
A. Proof of Theorem 1

We can obtain the inverse of $\mathbf{F}_2$ through the elementary transformation of the matrix, that is, $\mathbf{F}_2^{-1} = \left[\begin{smallmatrix} 1 & 0 \\ 1 & 1 \end{smallmatrix}\right]$. Furthermore, we have
$$\left(\mathbf{F}_2^{\otimes 2}\right)^{-1} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix}^{-1} = \begin{bmatrix} \mathbf{F}_2^{-1} & \mathbf{0}_2 \\ -\mathbf{F}_2^{-1}\mathbf{F}_2\mathbf{F}_2^{-1} & \mathbf{F}_2^{-1} \end{bmatrix} = \begin{bmatrix} \mathbf{F}_2 & \mathbf{0}_2 \\ \mathbf{F}_2 & \mathbf{F}_2 \end{bmatrix} = \mathbf{F}_2^{\otimes 2}, \tag{A.1}$$
where the last step holds because $\mathbf{F}_2^{-1} = \mathbf{F}_2$ and $-\mathbf{F}_2 = \mathbf{F}_2$ over GF(2). Based on mathematical induction, we will have
$$\left(\mathbf{F}_2^{\otimes n}\right)^{-1} = \mathbf{F}_2^{\otimes n}. \tag{A.2}$$
The inverse of $\mathbf{G}_N$ can be expressed as
$$\mathbf{G}_N^{-1} = \left(\mathbf{B}_N \mathbf{F}_2^{\otimes n}\right)^{-1} = \left(\mathbf{F}_2^{\otimes n}\right)^{-1} \mathbf{B}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N^{-1}. \tag{A.3}$$
Since $\mathbf{B}_N$ is a bit-reversal permutation matrix, by elementary transformation of the matrix there is $\mathbf{B}_N^{-1} = \mathbf{B}_N$. Hence, we have
$$\mathbf{G}_N^{-1} = \mathbf{F}_2^{\otimes n} \mathbf{B}_N. \tag{A.4}$$
It is noticed from Proposition 16 of [1] that $\mathbf{F}_2^{\otimes n} \mathbf{B}_N = \mathbf{B}_N \mathbf{F}_2^{\otimes n}$; therefore,
$$\mathbf{G}_N^{-1} = \mathbf{G}_N. \tag{A.5}$$
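The self-inverse property (A.5) can be verified numerically. The following Python sketch (an illustrative check, not part of the original derivation) constructs G_N = B_N F₂^⊗n over GF(2) for N = 8 and confirms G_N G_N = I:

```python
def kron(A, B):
    # Kronecker product of two 0/1 matrices given as lists of lists
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def mat_mul_gf2(A, B):
    # Matrix product over GF(2)
    m, p = len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) % 2 for j in range(p)]
            for i in range(len(A))]

def bit_reverse_perm(n):
    # Bit-reversal permutation matrix B_N for N = 2**n
    N = 1 << n
    B = [[0] * N for _ in range(N)]
    for i in range(N):
        j = int(format(i, "0{}b".format(n))[::-1], 2)  # reverse the n-bit index
        B[i][j] = 1
    return B

def polar_generator(n):
    # G_N = B_N * F2^{kron n} over GF(2)
    F2 = [[1, 0], [1, 1]]
    F = F2
    for _ in range(n - 1):
        F = kron(F, F2)
    return mat_mul_gf2(bit_reverse_perm(n), F)

G = polar_generator(3)
I8 = [[1 if i == j else 0 for j in range(8)] for i in range(8)]
assert mat_mul_gf2(G, G) == I8  # Theorem 1: G_N^{-1} = G_N over GF(2)
```

Since B_N and F₂^⊗n are both involutions over GF(2) and commute by Proposition 16 of [1], the product G_N G_N collapses to the identity, which is exactly what the assertion checks.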
B. Proof of Theorem 4

We assume that the number of leaf nodes of the frozen node $v(i,j)$ is $Q$, that is, $Q = |\mathbf{V}^L_{v(i,j)}|$. If $Q = 2$, based on (25), there is
$$p_{v(i,j)}(0) = p_{v_0}(0)\,p_{v_1}(0) + p_{v_0}(1)\,p_{v_1}(1), \qquad p_{v(i,j)}(1) = p_{v_0}(0)\,p_{v_1}(1) + p_{v_0}(1)\,p_{v_1}(0), \tag{B.1}$$
where $v_0, v_1 \in \mathbf{V}^L_{v(i,j)}$. Based on the above equations, we have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \left(p_{v_0}(0) - p_{v_0}(1)\right)\left(p_{v_1}(0) - p_{v_1}(1)\right). \tag{B.2}$$
Therefore, by mathematical induction, when $Q > 2$ we will have
$$p_{v(i,j)}(0) - p_{v(i,j)}(1) = \prod_{k=0}^{Q-1} \left(p_{v_k}(0) - p_{v_k}(1)\right), \tag{B.3}$$
where $v_k \in \mathbf{V}^L_{v(i,j)}$.

To prove Theorem 4, we assume without loss of generality that the values of all the nodes in $\mathbf{V}^L_{v(i,j)}$ are set to 0. That is to say, when a node $v_k \in \mathbf{V}^L_{v(i,j)}$ is correct, there is $p_{v_k}(0) > p_{v_k}(1)$. Hence, based on (B.3), when the probability messages of $v(i,j)$ do not satisfy the reliability condition, that is, $p_{v(i,j)}(0) - p_{v(i,j)}(1) \le 0$, there must exist a subset $\mathbf{V}^{LO}_{v(i,j)} \subseteq \mathbf{V}^L_{v(i,j)}$ with $|\mathbf{V}^{LO}_{v(i,j)}|$ odd such that
$$\forall v_k \in \mathbf{V}^{LO}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.4}$$
While if $p_{v(i,j)}(0) - p_{v(i,j)}(1) > 0$, there must exist a subset $\mathbf{V}^{LE}_{v(i,j)} \subseteq \mathbf{V}^L_{v(i,j)}$ with $|\mathbf{V}^{LE}_{v(i,j)}|$ even such that
$$\forall v_k \in \mathbf{V}^{LE}_{v(i,j)} \longrightarrow p_{v_k}(0) \le p_{v_k}(1). \tag{B.5}$$
So the proof of Theorem 4 is completed.
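The product identity (B.3) can also be checked by brute force. The sketch below assumes, as in the decoding tree, that the frozen node's value is the XOR of its Q independent leaf values; the leaf probabilities here are random and purely illustrative:

```python
from itertools import product
import random

def frozen_node_probs(leaf_p0):
    """Probability that the XOR of independent binary leaves is 0 or 1.

    leaf_p0[k] = p_{v_k}(0); p_{v_k}(1) = 1 - leaf_p0[k].
    The brute-force sum over leaf patterns matches repeated application of (B.1).
    """
    p0 = p1 = 0.0
    for bits in product([0, 1], repeat=len(leaf_p0)):
        prob = 1.0
        for b, p in zip(bits, leaf_p0):
            prob *= p if b == 0 else 1.0 - p
        if sum(bits) % 2 == 0:  # even number of 1s -> XOR is 0
            p0 += prob
        else:
            p1 += prob
    return p0, p1

random.seed(7)
leaves = [random.random() for _ in range(4)]  # Q = 4 leaf nodes
p0, p1 = frozen_node_probs(leaves)
prod_diff = 1.0
for p in leaves:
    prod_diff *= p - (1.0 - p)
# Identity (B.3): p(0) - p(1) equals the product of the leaf differences
assert abs((p0 - p1) - prod_diff) < 1e-12
```

The assertion is exactly (B.3): an odd number of flipped leaves flips the frozen node, so the signed sum telescopes into the product of the per-leaf differences.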
C. Proof of Theorem 9

It is noticed from (1) that the coefficient vector of the error-checking equation generated by a frozen node in the leftmost column is equal to a column vector of $\mathbf{G}_N^{-1}$, denoted as $g_i$, $0 \le i \le N-1$. For example, the coefficient vector of the error-checking equation generated by $v(0,0)$ is equal to $g_1 = (1,1,1,1,1,1,1,1)^T$. Hence, based on the proof of Theorem 1, we have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \ge N-K, \qquad \operatorname{rank}\left(\left[\mathbf{E}_{M,N}\ \left(\gamma_0^{M-1}\right)^T\right]\right) \ge N-K. \tag{C.1}$$
In view of the process of polar encoding, we can find that the frozen nodes in the intermediate columns are generated by linear transformations of the frozen nodes in the leftmost column. That is to say, the error-checking equations generated by the frozen nodes in the intermediate columns can be linearly expressed by the error-checking equations generated by the frozen nodes in the leftmost column. Hence, we further have
$$\operatorname{rank}\left(\mathbf{E}_{M,N}\right) \le N-K, \qquad \operatorname{rank}\left(\left[\mathbf{E}_{M,N}\ \left(\gamma_0^{M-1}\right)^T\right]\right) \le N-K. \tag{C.2}$$
Therefore, the proof of (17) is completed.
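As a concrete (hypothetical) instance of Theorem 9, the sketch below takes N = 8, K = 4, and an assumed frozen set {0, 1, 2, 4} chosen for illustration only; the rows of E are the frozen-indexed columns of G₈⁻¹ = G₈ (Theorem 1), so its GF(2) rank must equal N − K:

```python
def polar_G(n):
    """G_N = B_N F2^{kron n} over GF(2).

    Row i of F2^{kron n} has a 1 in column j iff j is a bit-submask of i;
    B_N bit-reverses the row index.
    """
    N = 1 << n
    def bitrev(i):
        return int(format(i, "0{}b".format(n))[::-1], 2)
    return [[1 if (j & bitrev(i)) == j else 0 for j in range(N)] for i in range(N)]

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over GF(2), eliminating on integer-packed rows."""
    rows = [int("".join(map(str, row)), 2) for row in matrix]
    rank = 0
    while rows:
        pivot = max(rows)
        if pivot == 0:
            break
        rows.remove(pivot)
        msb = pivot.bit_length() - 1
        # clear the pivot's leading bit from every remaining row
        rows = [r ^ pivot if (r >> msb) & 1 else r for r in rows]
        rank += 1
    return rank

n, N, K = 3, 8, 4
frozen = [0, 1, 2, 4]  # hypothetical frozen positions for an (8, 4) code
G = polar_G(n)         # by Theorem 1, G is its own inverse over GF(2)
E = [[G[r][i] for r in range(N)] for i in frozen]  # rows = frozen columns of G^{-1}
assert gf2_rank(E) == N - K  # Theorem 9: rank(E_{M,N}) = N - K
```

The rank conclusion does not depend on which N − K positions are frozen: distinct columns of an invertible matrix are always linearly independent over GF(2).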
D. Proof of Theorem 11

To prove Theorem 11, we assume without loss of generality that the real values of all the input nodes are 0. Given the transition probability of the channel and the constraint on the signal power, it can easily be proved that there exists a constant $\beta_0 > 1$ such that
$$\forall v(n,k) \in \mathbf{V}_I \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v(n,k)}(0)}{p_{v(n,k)}(1)} \le \beta_0, \tag{D.1}$$
where $v(n,k)$ is an input node and $\mathbf{V}_I$ is the set of input nodes of the decoder. That is to say, for a frozen node $v(i,j)$ with a leaf node set $\mathbf{V}^L_{v(i,j)}$, we have
$$\forall v_k \in \mathbf{V}^L_{v(i,j)} \Longrightarrow \frac{1}{\beta_0} \le \frac{p_{v_k}(0)}{p_{v_k}(1)} \le \beta_0. \tag{D.2}$$
Based on (25) and the decoding tree of $v(i,j)$, the probability messages of $v(i,j)$ are obtained by summing over the even- and odd-parity patterns of the leaf values:
$$p_{v(i,j)}(0) = \sum_{\substack{S \subseteq \{0,\ldots,Q-1\} \\ |S|\ \mathrm{even}}} \prod_{l \in S} p_{v_l}(1) \prod_{r \notin S} p_{v_r}(0), \qquad
p_{v(i,j)}(1) = \sum_{\substack{S \subseteq \{0,\ldots,Q-1\} \\ |S|\ \mathrm{odd}}} \prod_{l \in S} p_{v_l}(1) \prod_{r \notin S} p_{v_r}(0), \tag{D.3}$$
where $v_l, v_r \in \mathbf{V}^L_{v(i,j)}$. Hence, dividing both sums by $\prod_{k=0}^{Q-1} p_{v_k}(1)$, we further have
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = \frac{\sum_{|T|\ \mathrm{even}} \prod_{k \in T} \varphi_k}{\sum_{|T|\ \mathrm{odd}} \prod_{k \in T} \varphi_k}, \qquad T \subseteq \{0, 1, \ldots, Q-1\}, \tag{D.4}$$
with the variables $\varphi_0 = p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1 = p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1} = p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, where $1/\beta_0 \le \varphi_0, \varphi_1, \ldots, \varphi_{Q-1} \le \beta_0$ by (D.2). The above equation can be written as
$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)} = f\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right)
= \left(1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots\right)
\times \left(\varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots\right)^{-1}. \tag{D.5}$$
To take the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we further define the functions
$$h\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = 1 + \varphi_0\varphi_1 + \cdots + \varphi_{Q-2}\varphi_{Q-1} + \varphi_0\varphi_1\varphi_2\varphi_3 + \cdots + \varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots,$$
$$g\left(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}\right) = \varphi_0 + \cdots + \varphi_{Q-1} + \varphi_0\varphi_1\varphi_2 + \cdots + \varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1} + \cdots, \tag{D.6}$$
so that $f = h/g$. Then the derivative of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ with respect to $\varphi_k$ will be
$$\frac{\partial f}{\partial \varphi_k} = \frac{\left(\partial h / \partial \varphi_k\right) g - \left(\partial g / \partial \varphi_k\right) h}{g^2} = \frac{g_{\varphi_k=0}\, g - h_{\varphi_k=0}\, h}{g^2} = \frac{g^2_{\varphi_k=0} - h^2_{\varphi_k=0}}{g^2}, \tag{D.7}$$
where $g_{\varphi_k=0} = g(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$ and $h_{\varphi_k=0} = h(\varphi_0, \ldots, \varphi_{k-1}, 0, \varphi_{k+1}, \ldots, \varphi_{Q-1})$. Based on the solution of the equations $\partial f / \partial \varphi_0 = 0$, $\partial f / \partial \varphi_1 = 0$, $\ldots$, $\partial f / \partial \varphi_{Q-1} = 0$, we get the extreme point of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$ at $\varphi_0 = \varphi_1 = \cdots = \varphi_{Q-1} = 1$. Based on the analysis of the monotonicity of $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1})$, we can get the maximum value as $\delta = f(\underbrace{\beta_0, \beta_0, \ldots, \beta_0}_{Q})$. What is more, we can also get that when $f(\varphi_0, \varphi_1, \ldots, \varphi_{Q-1}) \ge \delta$, there is $\varphi_0 > 1$, $\varphi_1 > 1$, $\ldots$, $\varphi_{Q-1} > 1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1) \ge \delta$, we will have $p_{v_0}(0) > p_{v_0}(1)$, $p_{v_1}(0) > p_{v_1}(1)$, $\ldots$, $p_{v_{Q-1}}(0) > p_{v_{Q-1}}(1)$; that is, there is no error in $\mathbf{V}^L_{v(i,j)}$. So the proof of Theorem 11 is completed.
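The threshold δ can be checked numerically. The sketch below is an illustration with arbitrarily assumed values Q = 4 and β₀ = 3; it evaluates f via the even/odd subset sums of (D.5) and verifies that f never exceeds δ = f(β₀, …, β₀) on the box [1/β₀, β₀]^Q:

```python
from itertools import product
import random

def f_ratio(phis):
    """f(phi_0, ..., phi_{Q-1}) from (D.5):
    sum of products over even-size subsets divided by that over odd-size subsets."""
    h = g = 0.0
    for mask in product([0, 1], repeat=len(phis)):
        term = 1.0
        for pick, phi in zip(mask, phis):
            if pick:
                term *= phi
        if sum(mask) % 2 == 0:
            h += term  # even-size subset -> numerator h
        else:
            g += term  # odd-size subset -> denominator g
    return h / g

beta0, Q = 3.0, 4  # assumed channel bound and leaf count (illustrative only)
delta = f_ratio([beta0] * Q)
random.seed(3)
for _ in range(1000):
    phis = [random.uniform(1.0 / beta0, beta0) for _ in range(Q)]
    assert f_ratio(phis) <= delta + 1e-9  # delta is the maximum of f on the box
# the extreme point of the conditions (D.7) is phi_k = 1 for all k, where f = 1
assert abs(f_ratio([1.0] * Q) - 1.0) < 1e-12
```

For these assumed values, δ = 136/120 ≈ 1.133; the random sampling is only a spot check of the maximality argument, not a proof.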
Conflict of Interests
The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June–July 2009.
[3] S. B. Korada, E. Şaşoğlu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IRE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.
Journal ofEngineeringVolume 2014
Submit your manuscripts athttpwwwhindawicom
VLSI Design
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Shock and Vibration
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Civil EngineeringAdvances in
Acoustics and VibrationAdvances in
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Electrical and Computer Engineering
Journal of
Advances inOptoElectronics
Hindawi Publishing Corporation httpwwwhindawicom
Volume 2014
The Scientific World JournalHindawi Publishing Corporation httpwwwhindawicom Volume 2014
SensorsJournal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Modelling amp Simulation in EngineeringHindawi Publishing Corporation httpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Chemical EngineeringInternational Journal of Antennas and
Propagation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
Navigation and Observation
International Journal of
Hindawi Publishing Corporationhttpwwwhindawicom Volume 2014
DistributedSensor Networks
International Journal of
The Scientific World Journal 13
Based on (25) and the decoding tree of $v(i,j)$, we have the probability messages of $v(i,j)$ as

$$p_{v(i,j)}(0)=\sum_{m=0}^{Q/2-1}\;\sum_{\forall\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}}\;\prod_{l=0}^{2m-1}p_{v_{k_l}}(1)\prod_{\substack{k_r\in\{0,\ldots,Q-1\}\\ k_r\notin\{k_0,\ldots,k_{2m-1}\}}}p_{v_{k_r}}(0),$$

$$p_{v(i,j)}(1)=\sum_{m=0}^{Q/2-1}\;\sum_{\forall\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}}\;\prod_{l=0}^{2m}p_{v_{k_l}}(1)\prod_{\substack{k_r\in\{0,\ldots,Q-1\}\\ k_r\notin\{k_0,\ldots,k_{2m}\}}}p_{v_{k_r}}(0), \eqno{(D3)}$$

where $v_{k_l},v_{k_r}\in V_L^{v(i,j)}$. Hence we further have

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}=\frac{\displaystyle 1+\sum_{m=1}^{Q/2-1}\;\sum_{\forall\{k_0,\ldots,k_{2m-1}\}\subseteq\{0,\ldots,Q-1\}}\;\prod_{l=0}^{2m-1}\frac{p_{v_{k_l}}(0)}{p_{v_{k_l}}(1)}}{\displaystyle\sum_{m=0}^{Q/2-1}\;\sum_{\forall\{k_0,\ldots,k_{2m}\}\subseteq\{0,\ldots,Q-1\}}\;\prod_{l=0}^{2m}\frac{p_{v_{k_l}}(0)}{p_{v_{k_l}}(1)}}. \eqno{(D4)}$$
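The subset sums of (D3) pair each configuration of leaf values with its product of probabilities: configurations with an even number of leaves equal to 1 contribute to $p_{v(i,j)}(0)$, odd ones to $p_{v(i,j)}(1)$. A brute-force sketch for small $Q$ (the helper name `node_probs` is hypothetical; it assumes independent leaves and sums over every subset, even sizes to $p(0)$ and odd sizes to $p(1)$):

```python
from itertools import combinations

def node_probs(leaf_probs):
    """Probability pair (p(0), p(1)) of a decoding-tree node from the
    (p(0), p(1)) pairs of its Q leaves, by enumerating every subset of
    leaves that take value 1: even-sized subsets contribute to p(0),
    odd-sized subsets to p(1), mirroring the sums in (D3)."""
    Q = len(leaf_probs)
    p0 = p1 = 0.0
    for m in range(Q + 1):                    # m = number of leaves equal to 1
        for ones in combinations(range(Q), m):
            members = set(ones)
            prod = 1.0
            for k in range(Q):
                prod *= leaf_probs[k][1] if k in members else leaf_probs[k][0]
            if m % 2 == 0:
                p0 += prod
            else:
                p1 += prod
    return p0, p1

# Example: four leaves, each given as a (p(0), p(1)) pair.
leaves = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.6, 0.4)]
p0, p1 = node_probs(leaves)
```

Since every subset falls into exactly one of the two sums, normalized leaves give $p_0+p_1=\prod_k\big(p_{v_k}(0)+p_{v_k}(1)\big)=1$, a quick sanity check; the same pair is obtained by folding the leaves two at a time with the XOR rule $(a_0b_0+a_1b_1,\;a_0b_1+a_1b_0)$.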
With the definitions of the variables $\varphi_0=p_{v_0}(0)/p_{v_0}(1)$, $\varphi_1=p_{v_1}(0)/p_{v_1}(1)$, $\ldots$, $\varphi_{Q-1}=p_{v_{Q-1}}(0)/p_{v_{Q-1}}(1)$, where $1/\beta_0\le\varphi_0,\varphi_1,\ldots,\varphi_{Q-1}\le\beta_0$, the above equation can be written as

$$\frac{p_{v(i,j)}(0)}{p_{v(i,j)}(1)}=f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})=\frac{1+\varphi_0\varphi_1+\cdots+\varphi_{Q-2}\varphi_{Q-1}+\varphi_0\varphi_1\varphi_2\varphi_3+\cdots+\varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1}+\cdots}{\varphi_0+\cdots+\varphi_{Q-1}+\varphi_0\varphi_1\varphi_2+\cdots+\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1}+\cdots}. \eqno{(D5)}$$

To take the derivative of $f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})$, we further define the functions

$$h(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})=1+\varphi_0\varphi_1+\cdots+\varphi_{Q-2}\varphi_{Q-1}+\varphi_0\varphi_1\varphi_2\varphi_3+\cdots+\varphi_{Q-4}\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1}+\cdots,$$

$$g(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})=\varphi_0+\cdots+\varphi_{Q-1}+\varphi_0\varphi_1\varphi_2+\cdots+\varphi_{Q-3}\varphi_{Q-2}\varphi_{Q-1}+\cdots. \eqno{(D6)}$$
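Reading $h$ as the sum of products over all even-sized subsets of $\{\varphi_0,\ldots,\varphi_{Q-1}\}$ and $g$ over all odd-sized subsets (a brute-force reading of (D5)/(D6); the name `f_ratio` is hypothetical), the ratio $f$ and the threshold $\delta$ of Theorem 11 can be evaluated directly for small $Q$:

```python
from itertools import combinations

def f_ratio(phi):
    """f = h/g as in (D5): sum of even-subset products of phi divided
    by the sum of odd-subset products, enumerated by brute force."""
    num = den = 0.0
    for m in range(len(phi) + 1):
        for S in combinations(range(len(phi)), m):
            prod = 1.0
            for k in S:
                prod *= phi[k]
            if m % 2 == 0:
                num += prod
            else:
                den += prod
    return num / den

Q, beta0 = 4, 4.0
print(f_ratio([1.0] * Q))      # extreme point phi_k = 1: prints 1.0
delta = f_ratio([beta0] * Q)   # threshold delta = f(beta0, ..., beta0)
```

For $Q=4$ and $\beta_0=4$ this gives $\delta=353/272\approx 1.2978$, the value against which the node's probability ratio would be compared when checking the leaf set for errors.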
Then the derivative of $f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})$ with respect to $\varphi_k$ is

$$\frac{\partial f}{\partial\varphi_k}=\frac{(\partial h/\partial\varphi_k)\,g-(\partial g/\partial\varphi_k)\,h}{g^2}=\frac{g_{\varphi_k=0}\,g-h_{\varphi_k=0}\,h}{g^2}=\frac{g_{\varphi_k=0}^2-h_{\varphi_k=0}^2}{g^2}, \eqno{(D7)}$$

where $g_{\varphi_k=0}=g(\varphi_0,\ldots,\varphi_{k-1},0,\varphi_{k+1},\ldots,\varphi_{Q-1})$ and $h_{\varphi_k=0}=h(\varphi_0,\ldots,\varphi_{k-1},0,\varphi_{k+1},\ldots,\varphi_{Q-1})$. The second equality holds because $h$ collects the even-subset products and $g$ the odd-subset ones, so $\partial h/\partial\varphi_k=g_{\varphi_k=0}$ and $\partial g/\partial\varphi_k=h_{\varphi_k=0}$; the third follows by substituting $g=g_{\varphi_k=0}+\varphi_k h_{\varphi_k=0}$ and $h=h_{\varphi_k=0}+\varphi_k g_{\varphi_k=0}$. By solving the equations $\partial f/\partial\varphi_0=0$, $\partial f/\partial\varphi_1=0$, $\ldots$, $\partial f/\partial\varphi_{Q-1}=0$, we obtain the extreme point of $f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})$ at $\varphi_0=\varphi_1=\cdots=\varphi_{Q-1}=1$. From the analysis of the monotonicity of $f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})$, we get the maximum value $\delta=f(\underbrace{\beta_0,\beta_0,\ldots,\beta_0}_{Q})$. What is more, when $f(\varphi_0,\varphi_1,\ldots,\varphi_{Q-1})\ge\delta$, we have $\varphi_0>1$, $\varphi_1>1$, $\ldots$, $\varphi_{Q-1}>1$. That is to say, when $p_{v(i,j)}(0)/p_{v(i,j)}(1)\ge\delta$, we have $p_{v_0}(0)>p_{v_0}(1)$, $p_{v_1}(0)>p_{v_1}(1)$, $\ldots$, $p_{v_{Q-1}}(0)>p_{v_{Q-1}}(1)$; that is, there is no error in $V_L^{v(i,j)}$. The proof of Theorem 11 is thus completed.
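The closed form in (D7) can be checked numerically for small $Q$: with $h$ taken as the even-subset products and $g$ as the odd-subset products over all subsets, the analytic derivative should agree with a finite-difference estimate, and the gradient should vanish at the all-ones point. A sketch under those assumptions (the helper names `h_g`, `df_analytic`, and `df_numeric` are hypothetical):

```python
from itertools import combinations

def h_g(phi):
    """Even-subset (h) and odd-subset (g) products of phi, as in (D6)."""
    h = g = 0.0
    for m in range(len(phi) + 1):
        for S in combinations(range(len(phi)), m):
            prod = 1.0
            for k in S:
                prod *= phi[k]
            if m % 2 == 0:
                h += prod
            else:
                g += prod
    return h, g

def df_analytic(phi, k):
    """(D7): df/dphi_k = (g_{phi_k=0}^2 - h_{phi_k=0}^2) / g^2."""
    _, g = h_g(phi)
    zeroed = list(phi)
    zeroed[k] = 0.0
    h0, g0 = h_g(zeroed)
    return (g0 * g0 - h0 * h0) / (g * g)

def df_numeric(phi, k, eps=1e-6):
    """Central finite difference of f = h/g with respect to phi_k."""
    lo, hi = list(phi), list(phi)
    lo[k] -= eps
    hi[k] += eps
    (h1, g1), (h2, g2) = h_g(lo), h_g(hi)
    return (h2 / g2 - h1 / g1) / (2 * eps)

phi = [2.0, 0.5, 3.0, 1.5]   # an arbitrary test point with Q = 4
```

At `phi = [1.0, 1.0, 1.0, 1.0]` both `h_{phi_k=0}` and `g_{phi_k=0}` equal $2^{Q-2}$, so (D7) gives a zero gradient, consistent with the extreme point found in the proof.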
Conflict of Interests
The authors declare that they do not have any commercial or associative interests that represent a conflict of interests in connection with the work submitted.
Acknowledgment
The authors would like to thank all the reviewers for their comments and suggestions.
References
[1] E. Arikan, "Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
[2] E. Arikan and E. Telatar, "On the rate of channel polarization," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1493–1495, June-July 2009.
[3] S. B. Korada, E. Sasoglu, and R. Urbanke, "Polar codes: characterization of exponent, bounds, and constructions," IEEE Transactions on Information Theory, vol. 56, no. 12, pp. 6253–6264, 2010.
[4] I. Tal and A. Vardy, "List decoding of polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '11), pp. 1–5, St. Petersburg, Russia, August 2011.
[5] I. Tal and A. Vardy, "List decoding of polar codes," http://arxiv.org/abs/1206.0050.
[6] K. Chen, K. Niu, and J.-R. Lin, "Improved successive cancellation decoding of polar codes," IEEE Transactions on Communications, vol. 61, no. 8, pp. 3100–3107, 2013.
[7] K. Niu and K. Chen, "CRC-aided decoding of polar codes," IEEE Communications Letters, vol. 16, no. 10, pp. 1668–1671, 2012.
[8] A. Alamdar-Yazdi and F. R. Kschischang, "A simplified successive-cancellation decoder for polar codes," IEEE Communications Letters, vol. 15, no. 12, pp. 1378–1380, 2011.
[9] G. Sarkis and W. J. Gross, "Increasing the throughput of polar decoders," IEEE Communications Letters, vol. 17, no. 4, pp. 725–728, 2013.
[10] G. Sarkis, P. Giard, A. Vardy, C. Thibeault, and W. J. Gross, "Fast polar decoders: algorithm and implementation," IEEE Journal on Selected Areas in Communications, vol. 32, no. 5, pp. 946–957, 2014.
[11] P. Giard, G. Sarkis, C. Thibeault, and W. J. Gross, "A fast software polar decoder," http://arxiv.org/abs/1306.6311.
[12] E. Arikan, H. Kim, G. Markarian, U. Ozur, and E. Poyraz, "Performance of short polar codes under ML decoding," in Proceedings of the ICT-Mobile Summit Conference, June 2009.
[13] S. Kahraman and M. E. Celebi, "Code based efficient maximum-likelihood decoding of short polar codes," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '12), pp. 1967–1971, Cambridge, Mass, USA, July 2012.
[14] N. Goela, S. B. Korada, and M. Gastpar, "On LP decoding of polar codes," in Proceedings of the IEEE Information Theory Workshop (ITW '10), pp. 1–5, Dublin, Ireland, September 2010.
[15] E. Arikan, "A performance comparison of polar codes and Reed-Muller codes," IEEE Communications Letters, vol. 12, no. 6, pp. 447–449, 2008.
[16] N. Hussami, S. B. Korada, and R. Urbanke, "Performance of polar codes for channel and source coding," in Proceedings of the IEEE International Symposium on Information Theory (ISIT '09), pp. 1488–1492, July 2009.
[17] E. Arikan, "Polar codes: a pipelined implementation," in Proceedings of the 4th International Symposium on Broadband Communication (ISBC '10), pp. 11–14, July 2010.
[18] A. Eslami and H. Pishro-Nik, "On bit error rate performance of polar codes in finite regime," in Proceedings of the 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton '10), pp. 188–194, October 2010.
[19] A. Eslami and H. Pishro-Nik, "On finite-length performance of polar codes: stopping sets, error floor, and concatenated design," IEEE Transactions on Communications, vol. 61, no. 3, pp. 919–929, 2013.
[20] E. Arikan, "Systematic polar coding," IEEE Communications Letters, vol. 15, no. 8, pp. 860–862, 2011.
[21] J. L. Massey, "Catastrophic error-propagation in convolutional codes," in Proceedings of the 11th Midwest Symposium on Circuit Theory, pp. 583–587, January 1968.
[22] R. G. Gallager, "Low-density parity-check codes," IEEE Transactions on Information Theory, vol. 8, pp. 21–28, 1962.
[23] D. Divsalar and C. Jones, "CTH08-4: protograph LDPC codes with node degrees at least 3," in Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM '06), pp. 1–5, San Francisco, Calif, USA, December 2006.