The Performance of Turbo Codes for Wireless Communication Systems

Grace Oletu, Predrag Rapajic
Department of Computer and Communication Systems, University of Greenwich, Chatham, United Kingdom

Abstract—Turbo codes play an important role in making communications systems more efficient and reliable. This paper describes two turbo decoding algorithms: the soft-output Viterbi algorithm (SOVA) and the logarithmic maximum a posteriori (Log-MAP) algorithm, the two main candidates for decoding turbo codes. A soft-input soft-output (SISO) turbo decoder is based on either SOVA or the logarithmic version of the MAP algorithm, namely the Log-MAP decoding algorithm. The bit error rate (BER) performances of these algorithms are compared. Simulation results for constraint length K = 3 over an AWGN channel show an improvement of 0.4 dB for Log-MAP over SOVA at a BER of 10^-4.

Keywords: turbo codes, iterative decoding

I. INTRODUCTION

The near-Shannon-limit error-correction performance of turbo codes [1] and parallel concatenated convolutional codes [2] has raised great interest in the research community in finding practical decoding algorithms for these codes. Demand for turbo codes in wireless communication systems has been growing since they were first introduced by Berrou et al. in the early 1990s [1]. Systems such as 3GPP, HSDPA and WiMAX have already adopted turbo codes in their standards because of their large coding gain. In [3], it has also been shown that turbo codes can be applied to other wireless communication systems used for satellite and deep-space applications.

MAP decoding, also known as the BCJR algorithm [4], is not practical for implementation in real systems: it is computationally complex, sensitive to SNR mismatch and to inaccurate estimation of the noise variance [5], and impractical to implement in a chip. The logarithmic version of the MAP algorithm [6-8] and the soft-output Viterbi algorithm (SOVA) [9-10] are the practical decoding algorithms for such systems. Of these, SOVA has the least computational complexity and the worst bit error rate (BER) performance, while the Log-MAP algorithm [6] has the best BER performance but high computational complexity.

The paper is arranged as follows. Section II presents the channel model, Section III describes decoding of turbo codes, and Section IV covers the Log-MAP algorithm. Principles of iterative decoding are given in Section V, Section VI compares the simulation results and performance of both algorithms for different block lengths, and Section VII concludes.

II. CHANNEL MODEL

The transmitted symbols +1/-1, corresponding to the code bits 1/0, pass through an additive white Gaussian noise (AWGN) channel. By scaling random numbers of distribution N(0, 1) with the standard deviation σ, AWGN noise of distribution N(0, σ²) is obtained. This noise is added to each symbol to emulate the noisy channel.

III. DECODING TURBO CODES

Let the binary logical elements 1 and 0 be represented electronically by the voltages +1 and -1, respectively. The variable d is used to represent the transmitted data bit, as shown in Figure 1, whether it appears as a voltage or as a logical element; sometimes one format is more convenient than the other. Let binary 0 (the voltage value -1) be the null element under addition.

Figure 1. Recursive systematic convolutional encoder with memory two, rate R = 1/2 and generators G = [7 5].

For signal transmission over an AWGN channel, a well-known hard-decision rule, maximum likelihood (ML), is to choose the data value dk = +1 or dk = -1 associated with the larger of the two likelihoods. For each data bit at time k, this is tantamount to deciding dk = +1 if the received value xk falls on the right side of the decision line, and dk = -1 otherwise.

A similar decision rule, maximum a posteriori (MAP), which can be shown to be a minimum-probability-of-error rule, also takes into account the a priori probabilities of the data.

978-1-61284-840-2/11/$26.00 ©2011 IEEE
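The channel model of Section II and the hard-decision ML rule of Section III can be sketched in a few lines of Python. This is an illustrative sketch, not code from the paper; the helper names (`awgn_channel`, `ml_hard_decision`) and the Eb/No-to-σ mapping for a rate-R code are our assumptions.

```python
import numpy as np

def awgn_channel(bits, ebno_db, rate=0.5, seed=0):
    """Map code bits {0, 1} to symbols {-1, +1} and add N(0, sigma^2) noise.

    sigma is derived from Eb/No and the code rate R via
    sigma^2 = 1 / (2 * R * Eb/No), a common convention assumed here.
    """
    rng = np.random.default_rng(seed)
    symbols = 2.0 * np.asarray(bits, dtype=float) - 1.0   # 0 -> -1, 1 -> +1
    ebno = 10.0 ** (ebno_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebno))
    x = symbols + sigma * rng.standard_normal(symbols.shape)
    return x, sigma

def ml_hard_decision(x):
    """Hard-decision ML rule of Section III: decide dk = +1 (bit 1) if
    xk > 0, else dk = -1 (bit 0); returned as code bits {1, 0}."""
    return np.where(x > 0, 1, 0)
```

At high Eb/No the noise rarely pushes a symbol across the decision line, so nearly all hard decisions recover the transmitted bits.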
The general expression for the MAP rule in terms of the a posteriori probabilities (APPs) is:

    P(d = +1 | x)  ≷  P(d = -1 | x)                                         (1)

Equation (1) states that one should choose hypothesis H1 (d = +1) if the APP P(d = +1 | x) is greater than the APP P(d = -1 | x), and otherwise choose hypothesis H2 (d = -1). Using Bayes' theorem, the APPs in Equation (1) can be replaced by their equivalent expressions, yielding:

    P(x | d = +1) P(d = +1)  ≷  P(x | d = -1) P(d = -1)                     (2)

Equation (2) is generally expressed as a ratio, yielding the so-called likelihood ratio test:

    P(x | d = +1) / P(x | d = -1)  ≷  P(d = -1) / P(d = +1),  or
    [P(x | d = +1) P(d = +1)] / [P(x | d = -1) P(d = -1)]  ≷  1             (3)

Taking the logarithm of the likelihood ratio yields a useful metric called the log-likelihood ratio (LLR). It is a real number representing the soft-decision output of a detector, designated as follows:

    L(d | x) = log [P(d = +1 | x) / P(d = -1 | x)]
             = log [(P(x | d = +1) P(d = +1)) / (P(x | d = -1) P(d = -1))]  (4)

    L(d | x) = log [P(x | d = +1) / P(x | d = -1)]
             + log [P(d = +1) / P(d = -1)]                                  (5)

    L(d | x) = L(x | d) + L(d)                                              (6)

To simplify the notation, Equation (6) is rewritten as

    L(d̂) = Lc(x) + L(d)                                                    (7)

where the notation Lc(x) emphasizes that this LLR term is the result of a channel measurement made at the receiver. The equations above were developed with only a data detector in mind; introducing a decoder will typically yield further decision-making benefits. For a systematic code, it can be shown that the LLR (soft output) L(d̂) out of the decoder is equal to

    L(d̂) = L(d′) + Le(d̂)                                                  (8)

where L(d′) is the LLR of a data bit out of the demodulator (the input to the decoder), and Le(d̂), called the extrinsic LLR, represents extra knowledge gleaned from the decoding process. The output sequence of a systematic decoder is made up of values representing data bits and parity bits. From Equations (7) and (8), the output LLR L(d̂) of the decoder is now written as

    L(d̂) = Lc(x) + L(d) + Le(d̂)                                           (9)

Equation (9) shows that the output LLR of a systematic decoder can be represented as having three LLR elements: a channel measurement, a priori knowledge of the data, and an extrinsic LLR stemming solely from the decoder. Because the three terms are statistically independent, the individual LLRs can simply be added, as shown in Equation (9), to yield the final L(d̂). This soft decoder output L(d̂) is a real number that provides both a hard decision and the reliability of that decision. The sign of L(d̂) denotes the hard decision: for positive values of L(d̂) decide that d = +1, and for negative values decide that d = -1. The magnitude of L(d̂) denotes the reliability of that decision. Often the value Le(d̂) due to the decoding has the same sign as Lc(x) + L(d), and therefore acts to improve the reliability of L(d̂).

IV. LOG-MAP ALGORITHM

This algorithm, called the Log-MAP algorithm [11-15], gives the same error performance as the MAP algorithm but is easier to implement. The Log-MAP algorithm computes the MAP parameters by utilizing a correction function to compute the logarithm of a sum of numbers. More precisely, with Ã = ln A and B̃ = ln B, for A1 = A + B,

    Ã1 = ln (A + B) = max (Ã, B̃) + fc(|Ã − B̃|)                            (11)

where fc(|Ã − B̃|) is the correction function. fc(|Ã − B̃|) can be computed using either a look-up table [8] or simply a threshold detector [12] that performs similarly to the look-up table. The simple equation for the threshold detector is

    fc(|Ã − B̃|) = 0.375 if |Ã − B̃| ≤ 2, and 0 otherwise.

The operation can be extended recursively. If A2 = A1 + C = A + B + C, then

    Ã2 = ln (A1 + C) = max (Ã1, C̃) + fc(|Ã1 − C̃|)                         (12)

This recursive operation is especially needed for the computation of the soft-output decoded bits. At each step, the logarithm of the sum of two values is handled by a maximization operation plus an additional correction value, provided by the look-up table or threshold detector. The Log-MAP parameters are very close approximations of the MAP parameters, and therefore the Log-MAP BER performance is close to that of the MAP algorithm.

V. PRINCIPLES OF ITERATIVE DECODING

In a typical communications receiver, a demodulator is often designed to produce soft decisions, which are then transferred to a decoder. The improvement in error performance of systems utilizing such soft decisions, compared with hard decisions, is typically approximated as 2 dB in AWGN. Such a decoder could be called a soft-input/hard-output decoder, because the final decoding process out of the decoder must terminate in bits (hard decisions). With turbo codes, however, two or more component codes are used, and decoding involves feeding the outputs of one decoder to the inputs of other decoders iteratively.
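The max-plus-correction operation of Equations (11)-(12) can be checked numerically. The sketch below (helper names are our own) compares the exact Jacobian logarithm ln(e^a + e^b), the look-up-free threshold correction described above, and chaining per Equation (12); the 0.375/2 threshold values follow the paper's description of [12].

```python
import math

def max_star_exact(a, b):
    """Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def fc_threshold(delta):
    """Threshold-detector correction: 0.375 if |a - b| <= 2, else 0."""
    return 0.375 if delta <= 2.0 else 0.0

def max_star_approx(a, b):
    """Equation (11) with the threshold correction in place of a table."""
    return max(a, b) + fc_threshold(abs(a - b))
```

The exact correction ln(1 + e^-|a-b|) never exceeds ln 2 ≈ 0.693, and the coarse threshold rule stays within about 0.32 of it, which is why Log-MAP with a cheap correction still tracks MAP closely; dropping fc entirely gives the still cheaper but weaker Max-Log-MAP.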
In such an iterative arrangement, a hard-output decoder would not be suitable, because a hard decision into a decoder degrades system performance compared with soft decisions. Hence, what is needed for the decoding of turbo codes is a soft-input/soft-output decoder. For the first decoding iteration of such a decoder, we generally assume the binary data to be equally likely, yielding an initial a priori LLR value of L(d) = 0. The channel LLR value, Lc(x), is measured by forming the logarithm of the ratio of the likelihood values for a particular observation, which appears as the first term of Equation (5). The output L(d̂) of the decoder in Figure 3 is made up of the LLR from the detector, L′(d̂), and the extrinsic LLR output, Le(d̂), representing knowledge gleaned from the decoding process. As illustrated in Figure 2, for iterative decoding the extrinsic likelihood is fed back to the decoder input, to serve as a refinement of the a priori probability of the data for the next iteration.

VI. SIMULATION RESULTS

The simulation curves presented show the influence of iteration number, block length, code rate and code generator. Rate-1/2 codes are obtained from their rate-1/3 counterparts by alternately puncturing the parity bits of the constituent encoders. A rate R = 1/2 encoder with constraint length 3 and generators G1 = 7, G2 = 5 is used. The BER has been computed after each decoding iteration as a function of the signal-to-noise ratio Eb/No.

In Figures 3-6, BER curves for SOVA and Log-MAP as a function of Eb/No are shown for constituent codes of constraint length three and code rate 1/2. Eight decoding iterations were performed for block lengths of 1024 and 4096. From these figures it can be observed that a larger block length corresponds to a lower BER, and that the improvement achieved when the block length is increased from 1024 to 4096 holds for both algorithms. In Figure 5, Log-MAP shows better performance than SOVA for constraint length three, for block lengths of 1024 and 4096 respectively.

VII. CONCLUSIONS

Our simulation results show that Log-MAP performs better than SOVA across block lengths, and it is thus more suitable for wireless communication.

REFERENCES

[1] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo codes," Proc. IEEE ICC '93, pp. 1064-1070, 1993.
[2] S. Benedetto and G. Montorsi, "Design of parallel concatenated convolutional codes," IEEE Trans. Commun., vol. 44, no. 5, May 1996.
[3] C. Berrou, "The ten-year-old turbo codes are entering into service," IEEE Commun. Mag., vol. 41, no. 8, pp. 110-116, Aug. 2003.
[4] L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. Inf. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[5] T. A. Summers and S. G. Wilson, "SNR mismatch and online estimation in turbo decoding," IEEE Trans. Commun., vol. 46, no. 4, pp. 421-424, Apr. 1998.
[6] P. Robertson, P. Hoeher, and E. Villebrun, "Optimal and sub-optimal maximum a posteriori algorithms suitable for turbo decoding," European Trans. Telecommun., vol. 8, no. 2, pp. 119-126, Mar.-Apr. 1997.
[7] P. Robertson, E. Villebrun, and P. Hoeher, "A comparison of optimal and sub-optimal MAP decoding algorithms operating in the log domain," Proc. Int. Conf. Communications, pp. 1009-1013, June 1995.
[8] S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Soft-output decoding algorithms in iterative decoding of turbo codes," TDA Progress Report 42-124, pp. 63-87, Feb. 15, 1996.
[9] J. Hagenauer and P. Hoeher, "A Viterbi algorithm with soft-decision outputs and its applications," Proc. GLOBECOM, pp. 1680-1686, Nov. 1989.
[10] J. Hagenauer, "Source-controlled channel decoding," IEEE Trans. Commun., vol. 43, no. 9, pp. 2449-2457, Sept. 1995.
[11] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 429-445, Mar. 1996.
[12] W. J. Gross and P. G. Gulak, "Simplified MAP algorithm suitable for implementation of turbo decoders," Electronics Letters, vol. 34, no. 16, Aug. 6, 1998.
[13] J. Hagenauer and L. Papke, "Decoding turbo codes with the soft output Viterbi algorithm (SOVA)," Proc. Int. Symp. Information Theory, p. 164, Norway, June 1994.
[14] J. Hagenauer, P. Robertson, and L. Papke, "Iterative decoding of systematic convolutional codes with the MAP and SOVA algorithms," ITG Conf., Frankfurt, Germany, pp. 1-9, Oct. 1994.
[15] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of block and convolutional codes," IEEE Trans. Inf. Theory, vol. 42, no. 2, pp. 429-445, Mar. 1996.
[Figures 3-6: BER versus Eb/No (dB) plots; only the captions are recoverable here.]

Figure 3. BER of the K = 4096 turbo code with SOVA decoding in an AWGN channel, for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.

Figure 4. BER of the K = 1024 turbo code with Log-MAP decoding in an AWGN channel, for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.

Figure 5. BER of the K = 4096 turbo code with Log-MAP and SOVA decoding after 8 decoder iterations in an AWGN channel: (1) SOVA, (2) Log-MAP.

Figure 6. BER of the K = 4096 turbo code with Log-MAP decoding in an AWGN channel, for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.
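BER-versus-Eb/No curves like those in Figures 3-6 come from a Monte-Carlo loop of roughly the following shape. A full SOVA or Log-MAP turbo decoder is far too long to reproduce here, so this hedged sketch substitutes uncoded BPSK with hard decisions for the encode/decode stages; only the harness structure (count bit errors per Eb/No point, divide by bits sent) mirrors the paper's experiment, and the function name `ber_curve` is our own.

```python
import numpy as np

def ber_curve(ebno_db_points, n_bits=20000, seed=1):
    """Monte-Carlo BER estimate per Eb/No point (uncoded BPSK stand-in).

    In the paper's setup the transmit/receive pair would instead be the
    rate-1/2 turbo encoder and an 8-iteration SOVA or Log-MAP decoder.
    """
    rng = np.random.default_rng(seed)
    bers = []
    for ebno_db in ebno_db_points:
        bits = rng.integers(0, 2, n_bits)
        symbols = 2.0 * bits - 1.0                      # 0 -> -1, 1 -> +1
        ebno = 10.0 ** (ebno_db / 10.0)
        sigma = np.sqrt(1.0 / (2.0 * ebno))             # rate 1 (uncoded)
        x = symbols + sigma * rng.standard_normal(n_bits)
        decided = (x > 0).astype(int)                   # hard decision
        bers.append(float(np.mean(decided != bits)))
    return bers
```

Sweeping a few Eb/No points shows the expected monotonic fall in BER; with a real turbo decoder in the loop, the curve would also shift left with each additional iteration, as in Figures 3, 4 and 6.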