
IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, VOL. 8, NO. 1, JANUARY 2009

Large-SNR Error Probability Analysis of BICM with Uniform Interleaving in Fading Channels

Alfonso Martinez, Member, IEEE, and Albert Guillén i Fàbregas, Member, IEEE

Abstract—This paper studies the average error probability of bit-interleaved coded modulation with uniform interleaving in fully-interleaved fading channels. At large signal-to-noise ratio, the dominant pairwise error events are mapped into symbols with Hamming weight larger than one, causing a flattening of the error probability. Closed-form expressions for the error probability with general modulations are provided. For interleavers of practical length, the flattening is noticeable only at very low values of the error probability.

Index Terms—Bit-interleaved coded modulation, error probability, higher-order modulation, saddlepoint approximation, binary phase-shift keying (BPSK), quaternary phase-shift keying (QPSK).

    I. MOTIVATION AND SUMMARY

WHILST QPSK is equivalent to two parallel independent BPSK channels in the Gaussian channel, the equivalence fails in fading channels because of the statistical dependence between the quadrature components introduced by the fading coefficients. This dependence ensures that the error probability is dominated by the number of QPSK symbols with Hamming weight two. The asymptotic slope of the error probability is reduced and an error floor results, a phenomenon similar in nature to the floor appearing in turbo codes [1], [2].

More generally, bit-interleaved coded modulation (BICM) [3] is affected by the same phenomenon. For large signal-to-noise ratio (SNR), the error probability is determined by error events with a high diversity, so that the minimum Hamming weight of the code is distributed across the largest possible number of modulation symbols. Although worse error events with smaller diversity are present, they are weighted by a low probability, and thus remain hidden for most practical purposes. We analyze this behaviour by first studying the union bound to the error probability for QPSK modulation with Gray labeling and fully-interleaved fading, and then extending the results to general constellations. The saddlepoint approximation allows us to derive closed-form expressions for the pairwise error probability, which highlight the aforementioned floor effect. Finally, we estimate the threshold SNR at which the error probability changes slope.

Manuscript received November 20, 2007; revised May 14, 2008 and September 25, 2008; accepted November 4, 2008. The associate editor coordinating the review of this paper and approving it for publication was H. Nguyen. This work was supported by the International Incoming Short Visits Scheme 2007/R2 of the Royal Society.

A. Martinez is with Centrum Wiskunde & Informatica (CWI), Amsterdam, The Netherlands (e-mail: alfonso.martinez@ieee.org).

A. Guillén i Fàbregas is with the Department of Engineering, University of Cambridge, Cambridge, UK (e-mail: guillen@ieee.org).

    Digital Object Identifier 10.1109/T-WC.2009.071300

    II. CHANNEL MODEL

At the transmitter, a linear binary code of rate $R_c$ is used to generate codewords $\boldsymbol c = (c_1, c_2, \ldots, c_\ell)$ of length $\ell$, which are interleaved before modulation. Then, consecutive groups of $m$ interleaved bits $(c_{m(k-1)+1}, \ldots, c_{mk})$, for $k = 1, \ldots, n$, are mapped onto a modulation symbol $x_k$ using some mapping rule, such as binary reflected Gray labeling. We denote the modulation signal set by $\mathcal X$, its cardinality by $|\mathcal X|$, and the number of bits per symbol $m = \log_2 |\mathcal X|$. The signal set is normalized to unit average energy. Subindices r and i respectively denote real and imaginary parts. We also assume that $n = \ell/m$ is an integer. We denote the inverse mapping function for labeling position $j$ as $c_j : \mathcal X \to \{0, 1\}$, namely $c_j(x)$ is the $j$-th bit of symbol $x$. The sets $\mathcal X^{j_1,\ldots,j_v}_{c_1,\ldots,c_v}$ contain the symbols with bit labels in positions $j_1, \ldots, j_v$ equal to $c_1, \ldots, c_v$.

For each input symbol, the corresponding channel output $y_k$ is given by

$$y_k = h_k \sqrt{\mathrm{SNR}}\, x_k + z_k, \qquad k = 1, \ldots, n, \qquad (1)$$

where SNR is the average received signal-to-noise ratio, $z_k$ are independent samples of circularly-symmetric Gaussian noise of variance 1, and $h_k$ are i.i.d. fading coefficients. We assume that the fading coefficient is known at the receiver, which implies that the phase of $h_k$ is irrelevant thanks to the circular symmetry of the noise. We also assume that the fading coefficients $h_k$ are drawn from a Nakagami distribution of parameter $m_f > 0$ [4]. This distribution encompasses the Rayleigh and AWGN channels, and approximates Rician fading.
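A minimal simulation sketch of the channel in Eq. (1) can be written as follows, assuming that the Nakagami-$m_f$ magnitude fading is generated through its squared value, a Gamma variate of unit mean; the QPSK example and the function names are illustrative, not part of the paper's material.

```python
import numpy as np

# Minimal sketch of the channel model in Eq. (1). The Nakagami-m_f magnitude
# fading is generated through |h_k|^2 ~ Gamma(m_f, 1/m_f) (unit average power);
# the noise is circularly symmetric with unit variance. Names are illustrative.
rng = np.random.default_rng(0)

def channel(x, snr, mf, rng=rng):
    """Return y_k = h_k * sqrt(SNR) * x_k + z_k and the fading magnitudes."""
    n = len(x)
    h = np.sqrt(rng.gamma(shape=mf, scale=1.0 / mf, size=n))   # |h_k|, E|h|^2 = 1
    z = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    return h * np.sqrt(snr) * x + z, h

# Unit-energy QPSK symbols over a Rayleigh channel (m_f = 1) at 10 dB:
bits_i = 2 * rng.integers(0, 2, 8) - 1
bits_q = 2 * rng.integers(0, 2, 8) - 1
x = (bits_i + 1j * bits_q) / np.sqrt(2)
y, h = channel(x, snr=10.0, mf=1.0)
```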

III. UNION BOUND AND AVERAGE PAIRWISE ERROR PROBABILITY FOR BICM

At the receiver, we consider the maximum-metric BICM decoder, which selects the codeword with largest metric $\prod_{k=1}^{n} q(x_k, y_k)$ [3]. The real-valued metric function $q(x_k, y_k)$ is computed by the demodulator for each symbol according to the formula

$$q(x_k, y_k) = \prod_{j=1}^{m} q_j\bigl(c_j(x_k), y_k\bigr), \qquad (2)$$

where the bit metric $q_j\bigl(c_j(x) = c, y\bigr)$ is given by

$$q_j\bigl(c_j(x) = c, y\bigr) = \sum_{x' \in \mathcal X^j_c} p(y|x', h), \qquad (3)$$

where $p(y|x, h) = \frac{1}{\pi} e^{-|y - h\sqrt{\mathrm{SNR}}\, x|^2}$ is the channel transition probability density.
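The bit metrics of Eqs. (2) and (3) can be evaluated numerically as in the following sketch, assuming a Gray-labeled QPSK signal set; the particular labeling convention and helper names are illustrative rather than those fixed by the paper.

```python
import numpy as np

# Sketch of the bit metrics in Eqs. (2)-(3) for a Gray-labeled QPSK signal set
# (m = 2). Illustrative convention: label bit 0 selects the in-phase sign and
# label bit 1 the quadrature sign.
QPSK = np.array([1 + 1j, -1 + 1j, 1 - 1j, -1 - 1j]) / np.sqrt(2)
LABELS = np.array([[0, 0], [1, 0], [0, 1], [1, 1]])     # c_j(x) for each symbol

def p_y_given_x(y, x, h, snr):
    return np.exp(-np.abs(y - h * np.sqrt(snr) * x) ** 2) / np.pi

def bit_metric(j, c, y, h, snr):
    """q_j(c, y): sum of p(y | x, h) over the symbols whose j-th label bit is c."""
    subset = QPSK[LABELS[:, j] == c]
    return sum(p_y_given_x(y, x, h, snr) for x in subset)

def symbol_metric(k, y, h, snr):
    """q(x_k, y) = prod_j q_j(c_j(x_k), y), the metric maximized by the decoder."""
    return np.prod([bit_metric(j, LABELS[k, j], y, h, snr) for j in range(2)])

print([symbol_metric(k, y=0.5 + 0.2j, h=1.0, snr=4.0) for k in range(4)])
```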

The word error rate, denoted by $P_e$, is the probability of selecting at the decoder a codeword different from the



transmitted one. Similarly, the bit error rate $P_b$ is the average number of input bits in error out of the possible $\ell R_c$ corresponding to a codeword of length $\ell$. Exact expressions for $P_e$ or $P_b$ are difficult to obtain and one often resorts to bounding, such as the union bound [4]. In the union bound, the probability of an error event is bounded by the sum of the probabilities of all possible pairwise error events, where a codeword $\boldsymbol c'$ other than the transmitted $\boldsymbol c$ has a larger metric. We define the pairwise score as

$$\Xi_{\mathrm{pw}} \triangleq \sum_{k=1}^{n}\sum_{j=1}^{m} \log\frac{q_j\bigl(c'_{m(k-1)+j},\, y_k\bigr)}{q_j\bigl(c_{m(k-1)+j},\, y_k\bigr)}. \qquad (4)$$

The pairwise error probability $\mathrm{PEP}(\boldsymbol c, \boldsymbol c')$ between a reference codeword $\boldsymbol c$ and the competitor codeword $\boldsymbol c'$ is given by $\mathrm{PEP}(\boldsymbol c, \boldsymbol c') = \Pr\{\Xi_{\mathrm{pw}} > 0\}$. By construction, the pairwise score is given by the sum of $n$ symbol scores,

$$\Xi^{\mathrm s}_{k} \triangleq \sum_{j=1}^{m} \log\frac{q_j\bigl(c'_{m(k-1)+j},\, y_k\bigr)}{q_j\bigl(c_{m(k-1)+j},\, y_k\bigr)}. \qquad (5)$$

If the codewords have Hamming distance $d$, at most $d$ symbol scores are non-zero. Further, each symbol score is in turn given by the sum of $m$ bit scores

$$\Xi^{\mathrm b}_{k,j} \triangleq \log\frac{q_j\bigl(c'_{m(k-1)+j},\, y_k\bigr)}{q_j\bigl(c_{m(k-1)+j},\, y_k\bigr)}. \qquad (6)$$

Clearly, we only need to consider the non-zero bit and symbol scores. These scores are random variables whose density function depends on all the random elements in the channel, as well as on the transmitted bits, their position in the symbol and the bit pattern. In order to avoid this dependence, and as done by Yeh et al. [5], a uniform interleaver [2] is added between the binary code and the mapper, so that the pairwise error probability is averaged over all possible ways of placing the $d$ bits in the $n$ modulation symbols. We distinguish these alternative placements by counting the number of symbols with weight $v$, where $0 \le v \le w$ and $w = \min(m, d)$. Denoting the number of symbols of weight $v$ by $n_v$, the symbol pattern $\boldsymbol n$ is given by $\boldsymbol n = (n_0, \ldots, n_w)$. We also have that $\sum_{v=0}^{w} n_v = n$ and $d = \sum_{v=1}^{w} v\, n_v$.

For finite-length interleaving, the conditional pairwise error probability $\mathrm{PEP}(d, \boldsymbol n)$ varies for every possible pattern. As done by Yeh et al. [5], averaging over all possible ways of choosing $d$ locations in a codeword we have the following union bounds¹

$$P_e \le \sum_{d} A_d \sum_{\boldsymbol n} P(\boldsymbol n)\, \mathrm{PEP}(d, \boldsymbol n), \qquad (7)$$

$$P_b \le \sum_{d} A'_d \sum_{\boldsymbol n} P(\boldsymbol n)\, \mathrm{PEP}(d, \boldsymbol n), \qquad (8)$$

where $A_d$ is the number of binary codewords of Hamming weight $d$, $A'_d = \sum_j \frac{j}{\ell R_c} A_{j,d}$, with $A_{j,d}$ being the number of codewords of Hamming weight $d$ generated with an input message of weight $j$, and $P(\boldsymbol n)$ is the probability of a particular pattern $\boldsymbol n$. A counting argument [5] gives the probability of the pattern $\boldsymbol n$, $P(\boldsymbol n) \triangleq \Pr\bigl(\boldsymbol n = (n_0, \ldots, n_w)\bigr)$, as

$$P(\boldsymbol n) = \frac{\binom{m}{1}^{n_1}\binom{m}{2}^{n_2}\cdots\binom{m}{w}^{n_w}}{\binom{mn}{d}}\cdot\frac{n!}{n_0!\, n_1!\, n_2!\cdots n_w!}. \qquad (9)$$

¹If a more detailed code spectrum were known, namely $A_{d,\boldsymbol n}$ and $A_{j,d,\boldsymbol n}$, respectively denoting the number of codewords with Hamming weight $d$ mapped onto the pattern $\boldsymbol n$ and the number of such codewords with input weight $j$, similar expressions to those in Eq. (7) could be written without the averaging operation.
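The pattern probability of Eq. (9) is straightforward to evaluate; the sketch below, with illustrative helper names, also enumerates the admissible patterns for the 8-PSK example used later ($m = 3$, $n = 30$, $d = 4$) and checks that the probabilities sum to one.

```python
import math
from itertools import product

# Sketch of the pattern probability P(n) of Eq. (9); helper names are illustrative.
def pattern_prob(nvec, m, n):
    """P(n) for n = (n_0, ..., n_w); d is implied by sum_v v * n_v."""
    d = sum(v * nv for v, nv in enumerate(nvec))
    num = math.prod(math.comb(m, v) ** nv for v, nv in enumerate(nvec) if v > 0)
    multinomial = math.factorial(n) // math.prod(math.factorial(nv) for nv in nvec)
    return num * multinomial / math.comb(m * n, d)

def all_patterns(m, n, d):
    """All (n_0, ..., n_w) with w = min(m, d), sum_v n_v = n and sum_v v*n_v = d."""
    w = min(m, d)
    for tail in product(range(d + 1), repeat=w):
        if sum(v * nv for v, nv in enumerate(tail, start=1)) == d and sum(tail) <= n:
            yield (n - sum(tail),) + tail

probs = {p: pattern_prob(p, m=3, n=30) for p in all_patterns(3, 30, 4)}
assert abs(sum(probs.values()) - 1.0) < 1e-12
print(probs[(26, 4, 0, 0)])      # ~0.8688, the all-weight-one pattern (cf. Table I)
```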

We remove the dependence of the pairwise error probability on the specific choice of modulation symbols by averaging over all possible such choices. This method consists of adding to every transmitted codeword $\boldsymbol c \in \mathcal C$ a random binary word $\boldsymbol d \in \{0, 1\}^{mn}$ known by the receiver. This is equivalent to scrambling the output of the encoder by a sequence known at the receiver. Scrambling guarantees that the symbols corresponding to two $m$-bit sequences $(c_1, \ldots, c_m)$ and $(c'_1, \ldots, c'_m)$ are mapped to all possible pairs of symbols differing in a given Hamming weight, hence making the channel symmetric. In [3], [5], the scrambler role was played by a random choice between a mapping rule and its complement with probability 1/2 at every channel use. Scrambling is the natural extension of this random choice to weights larger than 1.

The cumulant transform [6] is an equivalent representation of the probability distribution of a random variable; the distribution can be recovered by an inverse Fourier transform. Consider a non-zero symbol score $\Xi^{\mathrm s}$ of Hamming weight $1 \le v \le m$, i.e., the Hamming weight between the binary labels of the reference and competitor symbols is $v$. The cumulant transform of $\Xi^{\mathrm s}$ is

$$\kappa_v(s) \triangleq \log E\bigl[e^{s\,\Xi^{\mathrm s}}\bigr] = \log\Biggl(\frac{1}{\binom{m}{v}}\sum_{\boldsymbol j}\frac{1}{2^v}\sum_{\boldsymbol c \in \{0,1\}^v} E\Biggl[\frac{\prod_{i=1}^{v} q_{j_i}(\bar c_{j_i}, Y)^{s}}{\prod_{i=1}^{v} q_{j_i}(c_{j_i}, Y)^{s}}\Biggr]\Biggr), \qquad (10)$$

where $\boldsymbol j = (j_1, \ldots, j_v)$ is a sequence of $v$ bit indices, the bit $\bar c$ is the binary complement of $c$, and $Y$ is the channel output with the bit $v$-tuple $\boldsymbol c$ transmitted at the positions in $\boldsymbol j$. The cumulant transform of the pairwise score $\Xi_{\mathrm{pw}}(\boldsymbol n)$ is given by

$$\kappa_{\mathrm{pw}}(s, \boldsymbol n) \triangleq \log E\bigl[e^{s\,\Xi_{\mathrm{pw}}(\boldsymbol n)}\bigr] = \sum_{v=1}^{w} n_v\, \kappa_v(s). \qquad (11)$$

The expectation in (10) is carried out according to $p_{\boldsymbol j}(y|\boldsymbol c) = \frac{1}{2^{m-v}}\sum_{x \in \mathcal X^{j_1,\ldots,j_v}_{c_1,\ldots,c_v}} p(y|x, h)$.

An important feature of the cumulant transform is that the tail probability is to a great extent determined by the cumulant transform around the saddlepoint $\hat s$, defined as the value of $s$ for which $\kappa'(s) \triangleq \frac{d\kappa(s)}{ds} = 0$. Indeed, in our notation the Chernoff bound is given by $\mathrm{PEP}(d, \boldsymbol n) \le e^{\kappa_{\mathrm{pw}}(\hat s, \boldsymbol n)}$. Then the saddlepoint approximation to $\mathrm{PEP}(d, \boldsymbol n)$ is given by

$$\mathrm{PEP}(d, \boldsymbol n) \simeq \frac{1}{\hat s\sqrt{2\pi\, \kappa''_{\mathrm{pw}}(\hat s, \boldsymbol n)}}\, e^{\kappa_{\mathrm{pw}}(\hat s, \boldsymbol n)}, \qquad (12)$$

where the saddlepoint $\hat s$ is the root of the equation $\kappa'_{\mathrm{pw}}(s, \boldsymbol n) = 0$.
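The saddlepoint machinery of Eq. (12) can be exercised numerically as in the following sketch, which locates $\hat s$ by bisection on a finite-difference derivative and then applies (12); the example cumulant transform is the all-weight-one Nakagami transform that appears later in Eq. (15), and the function names are illustrative.

```python
import numpy as np

# Numerical sketch of the saddlepoint approximation (12): find s_hat with
# kappa'(s_hat) = 0 by bisection on a finite-difference derivative, then
# evaluate exp(kappa(s_hat)) / (s_hat * sqrt(2*pi*kappa''(s_hat))).
def d1(f, s, eps=1e-5):
    return (f(s + eps) - f(s - eps)) / (2 * eps)

def d2(f, s, eps=1e-5):
    return (f(s + eps) - 2 * f(s) + f(s - eps)) / eps ** 2

def saddlepoint_pep(kappa, lo=1e-3, hi=1 - 1e-3, iters=60):
    a, b = lo, hi
    for _ in range(iters):                      # bisection on kappa'(s) = 0
        mid = 0.5 * (a + b)
        if d1(kappa, a) * d1(kappa, mid) <= 0:
            b = mid
        else:
            a = mid
    s_hat = 0.5 * (a + b)
    return np.exp(kappa(s_hat)) / (s_hat * np.sqrt(2 * np.pi * d2(kappa, s_hat)))

snr, mf, d = 10.0, 1.0, 5

def kappa(s):
    # all bits on distinct symbols: kappa_pw(s, n2 = 0) from Eq. (15)
    return -mf * d * np.log(1 + 2 * (s - s ** 2) * snr / mf)

print(saddlepoint_pep(kappa))                   # the saddlepoint lands at s_hat = 1/2 here
```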

A particularly important case arises when $v = 1$, i.e., when the binary labels of the reference and competitor symbols in the symbol pairwise score differ only by a single bit and all $d$ different bits are mapped onto different constellation


[Figure 1: bit error rate $P_b$ versus $E_b/N_0$ (dB).]
Fig. 1. Error probability of QPSK and the $(5,7)_8$ convolutional code with a uniform interleaver of length $\ell = 40, 200$ in a fully-interleaved fading channel with $m_f = 0.5$. Diamonds correspond to bit-error rate simulation, the solid line corresponds to the union bound, dashed lines correspond to the union bound for $n_2 = \max\{0, d - n\}$ (the upper one corresponds to $\ell = 200$), and dotted lines correspond to the union bound for $n_2 = n_2^*$.

symbols. This is the case for interleavers of practical length. As noticed in [3], [7], this simplifies the analysis significantly. The cumulant transform of the symbol score with $v = 1$, denoted by $\kappa_1$, is given by

$$\kappa_1(s) \triangleq \log E\bigl[e^{s\,\Xi^{\mathrm b}}\bigr] = \log\Biggl(\frac{1}{m}\sum_{j=1}^{m}\frac{1}{2}\sum_{c \in \{0,1\}} E\Biggl[\frac{q_j(\bar c, Y)^{s}}{q_j(c, Y)^{s}}\Biggr]\Biggr). \qquad (13)$$

We denote the corresponding pairwise error probability by $\mathrm{PEP}_1(d)$. As noticed in [7], this expression is related to, but not identical to, the ones appearing in [3], [5]. These authors consider an approximation to the bit decoding metric $q_j(c, y)$ whereby only one of the summands in Eq. (3) is kept. Even though this approximation slightly simplifies the analysis, it leads to large inaccuracies for mappings other than Gray mapping. In this paper, we consider the full bit metric.

As shown in Appendix A, the probability that all bit scores are independent approaches 1 as $1 - \frac{d(d-1)}{2\ell}(m-1)$ for large interleaver lengths. In general, the effect of the dependence across the bits belonging to the same symbol must be taken into account.

IV. INEQUIVALENCE BETWEEN BPSK AND QPSK WITH GRAY LABELING

Obtaining closed-form expressions for the cumulant transforms can be difficult, and numerical methods are needed. Notable exceptions are BPSK and QPSK with Gray labeling, where the pattern $\boldsymbol n$ is uniquely determined by the numbers of QPSK symbols whose bits are at Hamming distance 1 and 2, respectively denoted by $n_1$ and $n_2$. Then $n_1 + 2 n_2 = d$. It is also clear that $\max\{0, d - n\} \le n_2 \le \lfloor d/2\rfloor$. We denote the pairwise error probability by $\mathrm{PEP}(d, n_2)$. Reference [8] gave the union bounds to the average error probability for block-fading channels, of which QPSK can be seen as a particular case. In any case, having $n_2 \neq 0$ implies that we have $n_2$ symbols with Hamming weight two that fade with the same fading coefficient, and therefore, QPSK behaves differently from two independent BPSK channels.

A non-zero symbol score is the result of a Hamming distance of either 1 or 2 bits between the symbol labels. In the first case, it is easy to see that it has a Gaussian distribution of mean $-2\,\mathrm{SNR}\,|h_k|^2$ and variance $4\,\mathrm{SNR}\,|h_k|^2$. This score is equal to that of BPSK modulation with effective signal-to-noise ratio $\frac12\mathrm{SNR}\,|h_k|^2$. When both bits are different, the score is a real-valued random variable with a Gaussian distribution of mean $-4\,\mathrm{SNR}\,|h_k|^2$ and variance $8\,\mathrm{SNR}\,|h_k|^2$. This score coincides with that of a BPSK modulation with signal-to-noise ratio $\mathrm{SNR}\,|h_k|^2$. Using the expression for $\kappa_1(s)$ from [7], we conclude that

$$\kappa_{\mathrm{pw}}(s, n_2) = \log\Bigl(E\bigl[e^{s\,\Xi_1}\bigr]^{d - 2 n_2}\, E\bigl[e^{s\,\Xi_2}\bigr]^{n_2}\Bigr) \qquad (14)$$
$$= \log\Biggl(\Bigl(1 + \frac{2 s\,\mathrm{SNR}}{m_f} - \frac{2 s^2\,\mathrm{SNR}}{m_f}\Bigr)^{-m_f (d - 2 n_2)}\Bigl(1 + \frac{4 s\,\mathrm{SNR}}{m_f} - \frac{4 s^2\,\mathrm{SNR}}{m_f}\Bigr)^{-m_f n_2}\Biggr). \qquad (15)$$

Direct calculation shows that the saddlepoint is located at $\hat s = \frac12$, and we finally approximate the conditional pairwise error probability $\mathrm{PEP}(d, n_2)$ by the saddlepoint approximation

$$\mathrm{PEP}(d, n_2) \simeq \frac12\sqrt{\frac{(2 m_f + \mathrm{SNR})(m_f + \mathrm{SNR})}{\pi\, m_f\, \mathrm{SNR}\,\bigl(d\, m_f + d\,\mathrm{SNR} - n_2\,\mathrm{SNR}\bigr)}}\;\Bigl(1 + \frac{\mathrm{SNR}}{2 m_f}\Bigr)^{-m_f (d - 2 n_2)}\Bigl(1 + \frac{\mathrm{SNR}}{m_f}\Bigr)^{-m_f n_2}. \qquad (16)$$
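Eq. (16) is immediate to evaluate; a short sketch (with an illustrative function name) that can be used to reproduce union-bound curves such as those of Fig. 1:

```python
import numpy as np

# Direct evaluation of the closed-form saddlepoint approximation (16).
def pep_qpsk(d, n2, snr, mf):
    pref = 0.5 * np.sqrt((2 * mf + snr) * (mf + snr)
                         / (np.pi * mf * snr * (d * mf + d * snr - n2 * snr)))
    return (pref * (1 + snr / (2 * mf)) ** (-mf * (d - 2 * n2))
                 * (1 + snr / mf) ** (-mf * n2))

snr = 10 ** (20 / 10)                                # 20 dB
print(pep_qpsk(d=5, n2=0, snr=snr, mf=0.5))          # all bits on distinct symbols
print(pep_qpsk(d=5, n2=2, snr=snr, mf=0.5))          # maximal concentration, n2 = floor(d/2)
```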

In the limit $m_f \to \infty$, i.e., for the AWGN channel, this formula becomes

$$\mathrm{PEP}(d, n_2) \simeq \frac{1}{\sqrt{2\pi\, d\,\mathrm{SNR}}}\; e^{-\frac12 d\,\mathrm{SNR}}, \qquad (17)$$

which is independent of $n_2$, as expected. The error performance approaches that of two codewords at distance $d$ transmitted over BPSK with signal-to-noise ratio $\frac12\mathrm{SNR}$.

Zummo et al. gave a similar analysis in their study of block-fading channels [8] and of BICM [5]. Our use of the saddlepoint approximation gives a simple and tight closed-form approximation to the pairwise error probability. Figure 1 depicts the bit-error probability of QPSK with the $(5,7)_8$ convolutional code for $\ell = 40, 200$ and $m_f = 0.5$. In all cases, the saddlepoint approximation to the union bound (solid lines) is very accurate for moderate-to-large SNR. In particular, we observe the change in slope with respect to the standard union bound, which assumes that all bits in which the codewords differ are mapped onto different symbols (independent binary channels). The approximated threshold (20), computed with the minimum distance, is $\mathrm{SNR}_{\mathrm{th}} = 22$ dB for $\ell = 40$ and $\mathrm{SNR}_{\mathrm{th}} = 36$ dB for $\ell = 200$. This floor is absent for independent binary parallel channels. In the next section, we prove that the floor appears at very low error rates (i.e., at very high SNR) for practical codes.


[Figure 2: approximate $P_b$ at the threshold SNR versus minimum Hamming distance $d$.]
Fig. 2. Approximation to the bit error probability of QPSK at the threshold signal-to-noise ratio $\mathrm{SNR}_{\mathrm{th}}$, $P_b = 2\,\mathrm{PEP}_1(d)$, as a function of the minimum Hamming distance $d$ for several values of $m_f$ and $\ell$. In solid line, $m_f = 0.5$, $\ell = 100$; in dashed line, $m_f = 0.5$, $\ell = 1000$; in dash-dotted line, $m_f = 3$, $\ell = 100$; and in dotted line, $m_f = 3$, $\ell = 1000$.

    A. Asymptotic Analysis

In this section, we concentrate on the asymptotic study of the pairwise error probability averaged over all possible interleavers of length $\ell = 2n$, using Eq. (9). It seems clear that the best (i.e., steepest) conditional pairwise error probability $\mathrm{PEP}(d, n_2)$ is attained for $n_2 = 0$, which corresponds to all bits in the pairwise score belonging to different symbols. Similarly, the worst (i.e., flattest) conditional pairwise error probability $\mathrm{PEP}(d, n_2)$ is attained for $n_2 = n_2^* \triangleq \lfloor d/2\rfloor$, which corresponds to the bits having maximal concentration in the minimum possible number of QPSK symbols. For these two cases, we show in Appendix B that the respective probabilities $P(n_2)$ admit the following asymptotic (in $n$) approximations

$$P(n_2 = 0) \approx 1 - \frac{d(d-1)}{2\ell}$$

and

$$P(n_2 = n_2^*) \approx \begin{cases} \sqrt{2}\,\Bigl(\dfrac{d}{\ell e}\Bigr)^{\frac12 d}, & d \text{ even},\\[2mm] \dfrac{\sqrt{2}\; d^{\,d + \frac12}}{e\,(\ell e)^{\frac12 (d-1)}\,(d-1)^{\frac12 d}}, & d \text{ odd}.\end{cases} \qquad (18)$$

We determine the position at which the respective pairwise probabilities cross by solving

$$P(n_2 = 0)\,\mathrm{PEP}(d, 0) = P(n_2 = n_2^*)\,\mathrm{PEP}(d, n_2^*). \qquad (19)$$

Under the assumptions that SNR and $\ell$ are large, with the aid of Eq. (18), and keeping only the term at minimum distance $d_{\min}$ (which we denote without the subscript, to remove clutter from the equations), Eq. (19) is easily solved to obtain (20). For large $d$, the threshold is approximately given by $\mathrm{SNR}_{\mathrm{th}} \approx 4 m_f\bigl(\frac{e\ell}{d}\bigr)^{\frac{1}{m_f}}$ in both cases. For the special case of Rayleigh fading, $m_f = 1$, $\mathrm{SNR}_{\mathrm{th}}$ grows linearly with the interleaver length and is inversely proportional to the Hamming distance $d$ for very large $d$.

Figure 2 depicts the approximate value of $P_b$, coarsely approximated as $2\,\mathrm{PEP}_1(d)$, at the threshold SNR for several values of $m_f$ and $\ell$, as a function of the minimum Hamming distance $d$. As can be observed from Figure 2, the error probability quickly becomes very small, at values typically below the operating point of common communication systems. For short packets using weak codes transmitted over channels with significant fading, the threshold may be of importance.

    V. HIGH-ORDER MODULATIONS: ASYMPTOTIC ANALYSIS

In this section, we extend the analysis presented in the previous section for QPSK to general constellations and mappings, and estimate the signal-to-noise ratio at which the slope of the error probability changes. Since we are interested in large values of SNR, we shall be working with an asymptotic approximation to the error probability. In particular, our analysis is based on an extension of the results in [3], [7] for the asymptotic behaviour of the error probability in the Rayleigh-fading channel. We will provide such asymptotic expressions for the symbol scores of varying weight and use the approach presented for QPSK in the previous section to determine the value of SNR at which the various approximations to the error probability cross.

In [3], Caire et al. considered the Rayleigh fading channel at large SNR, and derived the following approximation to the cumulant transform of the bit score,

$$\kappa_1(\hat s) \approx -\log\Bigl(\frac{d_h^2}{4}\,\mathrm{SNR}\Bigr), \qquad (21)$$

where $d_h^2$ is a harmonic distance given by

$$d_h^2 = \Biggl(\frac{1}{m 2^m}\sum_{c=0}^{1}\sum_{j=1}^{m}\sum_{x \in \mathcal X^j_c}\frac{1}{|x - \hat x|^2}\Biggr)^{-1}, \qquad (22)$$

where $\hat x$ is the symbol in $\mathcal X^j_{\bar c}$ closest to $x$. Eq. (21) may be used in the Chernoff bound to give an approximation to the error probability at large SNR. It was found in [3] that this gives a good approximation. Using that $\hat s = \frac12$, that $\lim_{\mathrm{SNR}\to\infty}\kappa''_1(\hat s) = 8 m_f$ [7], and the saddlepoint approximation in Eq. (12), we obtain the large-SNR heuristic approximation

$$\mathrm{PEP}_1(d) \simeq \frac{1}{2\sqrt{\pi d}}\Bigl(\frac{4}{d_h^2\,\mathrm{SNR}}\Bigr)^{d}. \qquad (23)$$
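The harmonic distance of Eq. (22) can be computed by direct enumeration; the sketch below assumes a binary-reflected Gray labeling of 8-PSK (an assumption, since the paper does not fix the labeling in code form) and reproduces the value $d_h^2(1) = 0.7664$ quoted below for 8-PSK.

```python
import numpy as np

# Sketch of the harmonic distance of Eq. (22) for a Gray-labeled 8-PSK set.
M, m = 8, 3
X = np.exp(2j * np.pi * np.arange(M) / M)          # unit-energy 8-PSK
labels = [k ^ (k >> 1) for k in range(M)]          # binary-reflected Gray code

def bit(label, j):
    return (label >> (m - 1 - j)) & 1

def harmonic_distance_sq():
    acc = 0.0
    for j in range(m):
        for k, x in enumerate(X):
            c = bit(labels[k], j)
            rivals = [X[i] for i in range(M) if bit(labels[i], j) != c]
            x_hat = min(rivals, key=lambda r: abs(x - r))   # nearest in the complement set
            acc += 1.0 / abs(x - x_hat) ** 2
    return 1.0 / (acc / (m * 2 ** m))

print(harmonic_distance_sq())                      # ~0.7664
```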

In Appendix C we extend the analysis in [3] to the cumulant transform of the BICM symbol score of weight $v$ for Nakagami-$m_f$ fading, and obtain the following limit

$$\kappa_v(\hat s) \to -m_f \log\Bigl(\frac{d_h^2(v)}{4 m_f}\,\mathrm{SNR}\Bigr), \qquad (24)$$

where $d_h^2(v)$ is a generalization of the harmonic distance, given by

$$d_h^2(v) = \Biggl(\frac{1}{\binom{m}{v} 2^m}\sum_{\boldsymbol c}\sum_{\boldsymbol j}\sum_{x \in \mathcal X^{\boldsymbol j}_{\boldsymbol c}}\Biggl(\frac{\bigl|\sum_{i=1}^{v}(x - \hat x_i)\bigr|}{\sum_{i=1}^{v}|x - \hat x_i|^2}\Biggr)^{2 m_f}\Biggr)^{-\frac{1}{m_f}}. \qquad (25)$$

For a given $x$, $\hat x_i$ is the $i$-th symbol in the sequence of $v$ symbols $(\hat x_1, \ldots, \hat x_v)$ which have binary label $\bar c_{j_i}$ at position $j_i$ and for which the ratio $\bigl|\sum_{i=1}^{v}(x - \hat x_i)\bigr| \big/ \sum_{i=1}^{v}|x - \hat x_i|^2$ is minimum among all possible such sequences.


$$\mathrm{SNR}_{\mathrm{th}} = \begin{cases} 4 m_f\Bigl(\dfrac{e\ell}{8^{1/d}\, d}\Bigr)^{\frac{1}{m_f}}, & d \text{ even},\\[3mm] 4 m_f\,(e\ell)^{\frac{1}{m_f}}\Bigl(\dfrac{e}{2}\Bigr)^{\frac{2}{m_f(d-1)}}(d-1)^{\frac{d}{m_f(d-1)}}\,(d+1)^{\frac{1}{m_f(d-1)}}\, d^{-\frac{2(d+1)}{m_f(d-1)}}, & d \text{ odd}.\end{cases} \qquad (20)$$
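A sketch evaluating the d-odd branch of Eq. (20) as reconstructed above; with the Fig. 1 parameters ($(5,7)_8$ code, $d = d_{\min} = 5$, $m_f = 0.5$) it reproduces the thresholds of roughly 22 dB ($\ell = 40$) and 36 dB ($\ell = 200$) quoted in Section IV. The function name is illustrative.

```python
import math

# Evaluation of the d-odd branch of Eq. (20).
def snr_th_qpsk_odd(d, ell, mf):
    return (4 * mf * (math.e * ell) ** (1 / mf)
            * (math.e / 2) ** (2 / (mf * (d - 1)))
            * (d - 1) ** (d / (mf * (d - 1)))
            * (d + 1) ** (1 / (mf * (d - 1)))
            * d ** (-2 * (d + 1) / (mf * (d - 1))))

for ell in (40, 200):
    print(ell, round(10 * math.log10(snr_th_qpsk_odd(5, ell, 0.5)), 1))   # ~22.1, ~36.1 dB
```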

For $m_f = 1$ and $v = 1$ we recover the harmonic distance $d_h^2$ above.

As it happened with the bit score and $\mathrm{PEP}_1(d)$, Eq. (24) may be used in the Chernoff bound or in the saddlepoint approximation in Eq. (12), to obtain a heuristic approximation to the pairwise error probability for large SNR, namely

$$\mathrm{PEP}_{\mathrm H}(d, \boldsymbol n) \simeq \frac{1}{2\sqrt{\pi\, m_f \sum_{v\ge1} n_v}}\;\prod_{v=1}^{m}\Bigl(\frac{4 m_f}{d_h^2(v)\,\mathrm{SNR}}\Bigr)^{n_v m_f}. \qquad (26)$$

For the sake of simplicity, we disregard the effect of the coefficient $\frac{1}{\hat s\sqrt{2\pi\kappa''_{\mathrm{pw}}(\hat s)}}$ in our analysis of the threshold SNR.

For sufficiently large signal-to-noise ratio, the error probability is determined by the worst possible distribution pattern $\boldsymbol n$ of the $d$ bits onto the $n$ symbols, that with the largest tail probability for the pairwise score. Then, since $d = \sum_{v=1}^{w} v\, n_v$ by construction, we can view the pattern $\boldsymbol n = (n_0, n_1, \ldots, n_w)$ as a (non-unique) representation of the integer $d$ as a weighted sum of the integers $\{0, 1, 2, \ldots, w\}$. By construction, the sum $\sum_{v\ge1} n_v$ is the number of symbols of non-zero Hamming weight in the candidate codeword. Clearly, the lowest $\sum_{v\ge1} n_v$ gives the worst (flattest) pairwise error probability in the presence of fading. We obtain an equation for $\mathrm{SNR}_{\mathrm{th}}$ similar to Eq. (19) for QPSK in the fully-interleaved fading channel,

$$P(0)\Bigl(\frac{4 m_f}{d_h^2(1)\,\mathrm{SNR}_{\mathrm{th}}}\Bigr)^{d\, m_f} = \frac{1}{\mathrm{SNR}_{\mathrm{th}}^{\,m_f\sum_v n_v}}\sum_{\boldsymbol n:\,\min\sum_v n_v} P(\boldsymbol n)\prod_{v=1}^{m}\Bigl(\frac{4 m_f}{d_h^2(v)}\Bigr)^{n_v m_f}. \qquad (27)$$

The left-hand side corresponds to the steepest pairwise error probability, namely $\mathrm{PEP}_1(d)$, weighted by the probability that all bit scores are independent, denoted by $P(0)$. The right-hand side corresponds to the largest pairwise error probability with the smallest number of non-zero symbol scores, that is, among all possible patterns $\boldsymbol n$ with minimum $\sum_v n_v$. Note that the exponent of SNR is $m_f\sum_v n_v$, and thus has the lowest possible diversity, as it should.

From Eq. (27), we can extract the value of $\mathrm{SNR}_{\mathrm{th}}$ as

$$\mathrm{SNR}_{\mathrm{th}} \approx 4 m_f\Biggl(\frac{P(0)\,\bigl(d_h^2(1)\bigr)^{-d\, m_f}}{\sum_{\boldsymbol n:\,\min\sum_v n_v} P(\boldsymbol n)\,\prod_{v}\bigl(d_h^2(v)\bigr)^{-n_v m_f}}\Biggr)^{\frac{1}{m_f\bigl(d - \sum_v n_v\bigr)}}. \qquad (28)$$
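Eq. (28) can be evaluated directly from the pattern probabilities of Eq. (9) and the generalized harmonic distances; the sketch below (helper names are illustrative) uses the 8-PSK values quoted later in this section and, for $\ell = 90$ and Rayleigh fading, reproduces the 16.0 dB entry of Table I. The $E_b/N_0$ conversion assumes the rate-2/3 code with 3 bits per symbol.

```python
import math

# Sketch evaluating Eq. (28) for 8-PSK, 8-state rate-2/3 code (d = 4), m_f = 1.
def pattern_prob(nvec, m, n):
    d = sum(v * nv for v, nv in enumerate(nvec))
    num = math.prod(math.comb(m, v) ** nv for v, nv in enumerate(nvec) if v > 0)
    mult = math.factorial(n) // math.prod(math.factorial(nv) for nv in nvec)
    return num * mult / math.comb(m * n, d)

m, n, d, mf = 3, 30, 4, 1.0                      # interleaver length 90 (n = 30)
dh2 = {1: 0.7664, 2: 1.7175, 3: 2.4278}          # harmonic distances quoted in the text
p0 = pattern_prob((n - 4, 4, 0, 0), m, n)        # all-weight-one pattern, P(0)
worst = [(n - 3, 2, 1, 0)]                       # patterns with minimum sum_v n_v

num = p0 * dh2[1] ** (-d * mf)
den = sum(pattern_prob(p, m, n)
          * math.prod(dh2[v] ** (-p[v] * mf) for v in range(1, m + 1))
          for p in worst)
sigma = sum(worst[0][1:])
snr_th = 4 * mf * (num / den) ** (1 / (mf * (d - sigma)))
print(round(10 * math.log10(snr_th / (m * 2 / 3)), 1))   # ~16.0 dB (Eb/N0)
```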

As expected, for the specific case of QPSK with Gray mapping, computation of this value of $\mathrm{SNR}_{\mathrm{th}}$ ($d_h^2(1) = 2$, $d_h^2(2) = 4$) gives a result which is consistent with the result derived in Section IV-A, namely Eq. (20), with the minor difference that we now use the Chernoff bound whereas the saddlepoint approximation was used in the QPSK case.

[Figure 3: bit error rate $P_b$ versus $E_b/N_0$ (dB) for interleaver lengths $\ell = 90, 300, 3000$ and $\ell = \infty$.]
Fig. 3. Bit error probability union bounds and bit-error rate simulations of 8-PSK with the 8-state rate-2/3 convolutional code in a fully-interleaved Rayleigh fading channel. Interleaver length $\ell = 90$ (circles) and $\ell = 3000$ (diamonds). In solid lines, the saddlepoint approximation union bounds for $\ell = 90$, $\ell = 300$, $\ell = 3000$ and for infinite interleaving, with $\mathrm{PEP}_1(d)$. In dashed, dash-dotted, and dotted lines, the heuristic approximations with weight $v = 1, 2, 3$, respectively.

As we observe in Figure 3, the slope change present for QPSK is also present for 8-PSK with Rayleigh fading and Gray labeling; the code is the optimum 8-state rate-2/3 convolutional code, with $d_{\min} = 4$. Again, this effect is due to the probability of having symbol scores of Hamming weight larger than 1. The figure depicts simulation results (for interleaver sizes $\ell = 90, 3000$) together with the saddlepoint approximations for finite ($\ell = 90, 300, 3000$) and infinite interleaving (with $\mathrm{PEP}_1(d)$), and with the heuristic approximation $\mathrm{PEP}_{\mathrm H}(d, \boldsymbol n)$ (only for $\ell = 90$ and $d = 4$). For 8-PSK with Gray mapping, evaluation of Eq. (25) gives $d_h^2(1) = 0.7664$, $d_h^2(2) = 1.7175$, and $d_h^2(3) = 2.4278$. Table I gives the values of $P(\boldsymbol n)$ for the various patterns $\boldsymbol n$. Table I also gives the threshold $\mathrm{SNR}_{\mathrm{th}}$ given in Eq. (28) for all possible values of $\sum_{v\ge1} n_v$, not only for the worst case. We observe that the main flattening of the error probability takes place at high SNR. This effect essentially disappears for interleavers of practical length: for $\ell = 300$ (resp. $\ell = 3000$) the error probability at the first threshold is about $10^{-8}$ (resp. $10^{-12}$). The saddlepoint approximation is remarkably precise; the heuristic approximation $\mathrm{PEP}_{\mathrm H}(d_{\min}, \boldsymbol n)$ also gives very good results.

    VI. CONCLUSIONS

We have studied the large-SNR behavior of the error probability of BICM over fully-interleaved fading channels. Our


TABLE I
ASYMPTOTIC ANALYSIS FOR 8-PSK WITH VARYING INTERLEAVER LENGTH ℓ = 3n AND MINIMUM DISTANCE d = 4.

Pattern n           ℓ       P(n)        Threshold Eb/N0 (dB)
(n-4, 4, 0, 0)      90      0.8688      N/A
                    300     0.9602      N/A
                    3000    0.9960      N/A
(n-3, 2, 1, 0)      90      0.1287      16.0
                    300     0.0396      21.5
                    3000    0.0040      31.6
(n-2, 0, 2, 0),     90      0.0015      20.5
(n-1, 1, 0, 1)      300     0.0002      26.0
                    3000    2 x 10^-6   39.1

analysis reveals that the pairwise error probability is asymptotically dominated by the number of pairwise error symbols with Hamming weight larger than one, yielding an error floor. We have derived closed-form approximations to this error probability. For practical code lengths, the error floor appears at very low error rates.

APPENDIX A
PROBABILITY OF THE ALL-ONE SEQUENCE

We apply Stirling's approximation to the factorial, $n! \approx n^n e^{-n}\sqrt{2\pi n}$, to Eq. (9) to obtain

$$P_{\mathrm{ind}} \triangleq P(n_1 = d,\, n_2 = 0,\, \ldots,\, n_w = 0) = \frac{m^d\, n^{\,n+\frac12}\,(\ell - d)^{\ell - d + \frac12}}{(n - d)^{\,n - d + \frac12}\;\ell^{\,\ell + \frac12}}, \qquad (29)$$

with the obvious simplifications and combinations. Extracting a factor $\ell$ in $(\ell - d)$ and a factor $n$ in $(n - d)$, and cancelling common powers in numerator and denominator, we get

$$P_{\mathrm{ind}} \approx \Bigl(1 - \frac{d}{n}\Bigr)^{-n + d - \frac12}\Bigl(1 - \frac{d}{\ell}\Bigr)^{\ell - d + \frac12}. \qquad (30)$$

We now take logarithms, and use Taylor's expansion of the logarithm, $\log(1 + t) \approx t - \frac12 t^2$, in the right-hand side of Eq. (30). Discarding all higher-order terms in $1/\ell$, and combining common terms, we obtain

$$\log P_{\mathrm{ind}} \approx -\frac{d^2}{2n} + \frac{d}{2n} + \frac{d^2}{2\ell} - \frac{d}{2\ell} = -\frac{d(d-1)}{2\ell}(m - 1). \qquad (31)$$

Finally, recovering the exponential, $P_{\mathrm{ind}} \approx e^{-\frac{d(d-1)}{2\ell}(m-1)}$.
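A quick numerical check of this approximation against the exact probability that the $d$ bits fall into $d$ distinct symbols (helper name illustrative):

```python
import math

# Check of P_ind ~= exp(-d(d-1)(m-1)/(2*ell)) against the exact probability.
def p_ind_exact(m, n, d):
    return math.comb(m, 1) ** d * math.comb(n, d) / math.comb(m * n, d)

m, n, d = 3, 100, 4
ell = m * n
print(p_ind_exact(m, n, d), math.exp(-d * (d - 1) * (m - 1) / (2 * ell)))
```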

APPENDIX B
ASYMPTOTICS OF P(n2*)

Assume that $d$ is even, so $n_2^* = \frac12 d$ and $n_1 = 0$. Then

$$P(n_2^*) = \frac{d!\,(2n - d)!\, n!}{(2n)!\,\bigl(n - \frac12 d\bigr)!\,\bigl(\frac12 d\bigr)!}. \qquad (32)$$

Using Stirling's approximation to the factorial, and after some simplifications, we have that

$$P(n_2^*) \approx \sqrt{2}\,\Bigl(\frac{2n - d}{2n}\Bigr)^{n}\Bigl(\frac{d}{2n - d}\Bigr)^{\frac12 d}. \qquad (33)$$

In the limit of large $n$, using that $\lim_{n\to\infty}\bigl(1 + \frac{a}{n}\bigr)^{n} = e^{a}$, and $2n \gg d$, we have

$$P(n_2^*) \approx \sqrt{2}\, e^{-\frac12 d}\Bigl(\frac{d}{2n}\Bigr)^{\frac12 d} = \sqrt{2}\,\Bigl(\frac{d}{2ne}\Bigr)^{\frac12 d}. \qquad (34)$$
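The quality of the d-even asymptotic (34) can be checked against the exact expression (32); here $n$ is the number of QPSK symbols, so the interleaver length is $2n$. Helper names are illustrative.

```python
import math

# Check of the d-even asymptotic (34) against the exact expression (32).
def p_n2star_exact(n, d):
    # identical to Eq. (32): d!(2n-d)!n! / ((2n)! (n-d/2)! (d/2)!)
    return math.comb(n, d // 2) / math.comb(2 * n, d)

def p_n2star_asym(n, d):
    return math.sqrt(2) * (d / (2 * n * math.e)) ** (d / 2)

n, d = 100, 6
print(p_n2star_exact(n, d), p_n2star_asym(n, d))   # agree to within a few percent
```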

If $d$ is odd, then $n_2^* = \frac12(d - 1)$ and $n_1 = 1$. Then

$$P(n_2^*) = \frac{2\, d!\,(2n - d)!\, n!}{(2n)!\,\bigl(n - \frac12(d + 1)\bigr)!\,\bigl(\frac12(d - 1)\bigr)!}. \qquad (35)$$

Using again Stirling's approximation, and after some simplifications, we have

$$P(n_2^*) \approx \frac{\sqrt{2}}{e}\Bigl(\frac{2n - d}{2n}\Bigr)^{2n}\Bigl(\frac{n}{n - \frac12(d + 1)}\Bigr)^{n}\,\frac{d^{\,d + \frac12}\,\bigl(2n - (d + 1)\bigr)^{\frac12 d}}{(2n - d)^{\,d - \frac12}\,(d - 1)^{\frac12 d}}. \qquad (36)$$

Again for large $n$, using that $\lim_{n\to\infty}\bigl(1 + \frac{a}{n}\bigr)^{n} = e^{a}$, and $2n \gg d$, we have

$$P(n_2^*) \approx \frac{\sqrt{2}}{e}\,\frac{e^{-d}\, e^{\frac12(d + 1)}\, d^{\,d + \frac12}}{(2n)^{\frac12(d - 1)}\,(d - 1)^{\frac12 d}} = \frac{\sqrt{2}\, d^{\,d + \frac12}}{e\,(2ne)^{\frac12(d - 1)}\,(d - 1)^{\frac12 d}}. \qquad (37)$$

APPENDIX C
ASYMPTOTIC ANALYSIS WITH FULLY-INTERLEAVED NAKAGAMI FADING

We wish to compute the limit $\alpha_v(s) \triangleq \lim_{\mathrm{SNR}\to\infty} e^{\kappa_v(s)}\,\mathrm{SNR}^{m_f}$, given by

$$\alpha_v(s) = \lim_{\mathrm{SNR}\to\infty} \frac{\mathrm{SNR}^{m_f}}{2^v\binom{m}{v}}\sum_{\boldsymbol j,\boldsymbol c} E\Biggl[\frac{\prod_{i=1}^{v} q_{j_i}(\bar c_{j_i}, Y)^{s}}{\prod_{i=1}^{v} q_{j_i}(c_{j_i}, Y)^{s}}\Biggr]. \qquad (38)$$

We can rewrite the denominator as

$$\prod_{i=1}^{v} q_{j_i}(c_{j_i}, Y) = \prod_{i=1}^{v}\Biggl(\sum_{x' \in \mathcal X^{j_i}_{c_{j_i}}} e^{-|H\sqrt{\mathrm{SNR}}(X - x') + Z|^2}\Biggr) = \sum_{\boldsymbol x'}\prod_{i=1}^{v} e^{-|H\sqrt{\mathrm{SNR}}(X - x'_i) + Z|^2}, \qquad (39)$$

where $\boldsymbol x'$ is one of all possible sequences of $v$ modulation symbols, with the symbol at index $j_i$ drawn from the set $\mathcal X^{j_i}_{c_{j_i}}$. A similar formula holds for the numerator, now with symbols drawn from the sets $\mathcal X^{j_i}_{\bar c_{j_i}}$. Expanding the exponent in Eq. (39), we obtain

$$\prod_{i=1}^{v} q_{j_i}(c_{j_i}, Y) = \sum_{\boldsymbol x'} e^{-|H|^2\mathrm{SNR}\sum_{i=1}^{v}|X - x'_i|^2 - 2\sqrt{\mathrm{SNR}}\,\mathrm{Re}\bigl(\sum_{i=1}^{v} H(X - x'_i)Z^*\bigr) - v|Z|^2}. \qquad (40)$$

As done in [3], [7], we keep only the dominant summand in the bit scores $q_{j_i}(\cdot, Y)$ appearing in numerator and denominator. For a given $x$, this summand corresponds to


the sequence $\hat{\boldsymbol x}$ having the smallest possible value of the ratio $\bigl|\sum_{i=1}^{v}(x - \hat x_i)\bigr| \big/ \sum_{i=1}^{v}|x - \hat x_i|^2$. In particular, in the denominator all the symbols in the sequence coincide with $x$. We now carry out the expectation over $Z$. Completing squares, and using the formula for the density of the Gaussian noise, we have that

$$\frac{1}{\pi}\int e^{-|z|^2}\, e^{-\sum_{i=1}^{v} s\bigl(\mathrm{SNR}\,|H|^2|X - \hat X_i|^2 + 2\sqrt{\mathrm{SNR}}\,\mathrm{Re}(H(X - \hat X_i)z^*)\bigr)}\, dz = e^{-\mathrm{SNR}\,|H|^2\bigl(s\sum_{i=1}^{v}|X - \hat X_i|^2 - s^2\bigl|\sum_{i=1}^{v}(X - \hat X_i)\bigr|^2\bigr)}. \qquad (41)$$

In turn, the expectation over $h$ of this quantity yields [9]

$$\Biggl(1 + \Bigl(s\sum_{i=1}^{v}|X - \hat X_i|^2 - s^2\Bigl|\sum_{i=1}^{v}(X - \hat X_i)\Bigr|^2\Bigr)\frac{\mathrm{SNR}}{m_f}\Biggr)^{-m_f}. \qquad (42)$$

We next turn back to the limit of large SNR. We have that

$$\alpha_v(s) = \frac{m_f^{m_f}}{2^m\binom{m}{v}}\sum_{\boldsymbol j,\boldsymbol c}\sum_{x \in \mathcal X^{\boldsymbol j}_{\boldsymbol c}}\Biggl(s\sum_{i=1}^{v}|x - \hat x_i|^2 - s^2\Bigl|\sum_{i=1}^{v}(x - \hat x_i)\Bigr|^2\Biggr)^{-m_f}. \qquad (43)$$

For each summand, the optimizing $s$ is readily computed to be $\hat s = \frac{\sum_{i=1}^{v}|x - \hat x_i|^2}{2\,\bigl|\sum_{i=1}^{v}(x - \hat x_i)\bigr|^2}$, which gives

$$\alpha_v(\hat s) = \frac{1}{2^m\binom{m}{v}}\sum_{\boldsymbol j,\boldsymbol c}\sum_{x \in \mathcal X^{\boldsymbol j}_{\boldsymbol c}}\Biggl(\frac{4 m_f\,\bigl|\sum_{i=1}^{v}(x - \hat x_i)\bigr|^2}{\bigl(\sum_{i=1}^{v}|x - \hat x_i|^2\bigr)^2}\Biggr)^{m_f}. \qquad (44)$$

REFERENCES

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo-codes," in Proc. IEEE Int. Conf. Commun., Geneva, vol. 2, 1993.
[2] S. Benedetto and G. Montorsi, "Unveiling turbo codes: some results on parallel concatenated coding schemes," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 409-428, 1996.
[3] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inf. Theory, vol. 44, no. 3, pp. 927-946, 1998.
[4] J. G. Proakis, Digital Communications, 4th ed. McGraw-Hill, 2001.
[5] P. Yeh, S. Zummo, and W. Stark, "Error probability of bit-interleaved coded modulation in wireless environments," IEEE Trans. Veh. Technol., vol. 55, no. 2, pp. 722-728, 2006.
[6] R. W. Butler, Saddlepoint Approximations with Applications. Cambridge University Press, 2007.
[7] A. Martinez, A. Guillén i Fàbregas, and G. Caire, "Error probability analysis of bit-interleaved coded modulation," IEEE Trans. Inf. Theory, vol. 52, no. 1, pp. 262-271, Jan. 2006.
[8] S. Zummo, P. Yeh, and W. Stark, "A union bound on the error probability of binary codes over block fading channels," IEEE Trans. Veh. Technol., vol. 54, no. 6, pp. 2085-2093, 2005.
[9] A. Martinez, A. Guillén i Fàbregas, and G. Caire, "A closed-form approximation for the error probability of BPSK fading channels," IEEE Trans. Wireless Commun., vol. 6, no. 6, pp. 2051-2056, June 2007.
