The Performance of Turbo Codes for Wireless Communication Systems

Grace Oletu
Department of Computer and Communication Systems, University of Greenwich, Chatham, United Kingdom
Og14@gre.ac.uk

Predrag Rapajic
Department of Computer and Communication Systems, University of Greenwich, Chatham, United Kingdom
p.rapajic@gre.ac.uk
Abstract—Turbo codes play an important role in making communication systems more efficient and reliable. This paper describes two turbo decoding algorithms: the soft-output Viterbi algorithm (SOVA) and the logarithmic maximum a posteriori (Log-MAP) algorithm, the two main candidates for decoding turbo codes. Both are used as soft-input soft-output (SISO) component decoders. The bit error rate (BER) performances of these algorithms are compared. Simulation results for BER performance with constraint length K = 3 over an AWGN channel show an improvement of 0.4 dB for Log-MAP over SOVA at a BER of 10^-4.

Keywords—Turbo codes, iterative decoding

I. INTRODUCTION

The near Shannon-limit error-correction performance of turbo codes [1] and parallel concatenated convolutional codes [2] has raised a lot of interest in the research community in finding practical decoding algorithms for implementing these codes. The demand for turbo codes in wireless communication systems has been increasing since they were first introduced by Berrou et al. in the early 1990s [1]. Various systems such as 3GPP, HSDPA and WiMAX have already adopted turbo codes in their standards due to their large coding gain. In [3], it has also been shown that turbo codes can be applied to other wireless communication systems used for satellite and deep-space applications.

The MAP decoding algorithm, also known as the BCJR algorithm [4], is not practical for implementation in real systems: it is computationally complex and sensitive to SNR mismatch and inaccurate estimation of the noise variance [5], which makes it impractical to implement in a chip. The logarithmic version of the MAP algorithm [6-8] and the soft-output Viterbi algorithm (SOVA) [9-10] are the practical decoding algorithms for implementation in such systems.

This paper describes these two turbo decoding algorithms. SOVA has the least computational complexity but the worst bit error rate (BER) performance, while the Log-MAP algorithm [6] has the best BER performance but high computational complexity. The paper is arranged as follows: Section II presents the channel model, the decoding of turbo codes is described in Section III, and Section IV covers the Log-MAP algorithm. Principles of iterative decoding are given in Section V, and Section VI compares the simulation results and performance of both algorithms for different block lengths, before concluding in Section VII.

II. CHANNEL MODEL

The transmitted symbols +1/-1, corresponding to the code bits 1/0, pass through an additive white Gaussian noise (AWGN) channel. By scaling a random number of distribution N(0, 1) by the standard deviation σ, AWGN noise of distribution N(0, σ²) is obtained. This is added to the symbol to emulate the noisy channel effect.

III. DECODING TURBO CODES

Let the binary logical elements 1 and 0 be represented electronically by voltages +1 and -1, respectively. The variable d is used to represent the transmitted data bit, as shown in Figure 1, whether it appears as a voltage or as a logical element; sometimes one format is more convenient than the other. Let the binary 0 (or the voltage value -1) be the null element under addition.

Consider signal transmission over an AWGN channel. A well-known hard-decision rule, maximum likelihood (ML), is to choose the data dk = +1 or dk = -1 associated with the larger of the two likelihood values. For each data bit at time k, this is tantamount to deciding that dk = +1 if xk falls on the right side of the decision line, and otherwise deciding that dk = -1.

[Figure 1 (diagram not reproduced): Recursive systematic convolutional encoder with memory two, rate R = 1/2 and generators G = [7 5].]

978-1-61284-840-2/11/$26.00 ©2011 IEEE
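
To make Sections II and III concrete, the following Python/NumPy sketch (ours, not the authors' code) implements a memory-two RSC encoder with octal generators (7, 5) and the AWGN channel of Section II. The feedback/feedforward tap assignment and the Eb/N0-based noise scaling are assumptions chosen for illustration.

import numpy as np

def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder with memory two and
    octal generators (7, 5): feedback 1 + D + D^2, feedforward 1 + D^2.
    (One common convention for Figure 1; the exact tap assignment is assumed.)
    Trellis termination is omitted for brevity."""
    s1, s2 = 0, 0                      # shift-register contents (D, D^2)
    sys_out, par_out = [], []
    for d in bits:
        a = d ^ s1 ^ s2                # feedback sum, polynomial 1 + D + D^2
        p = a ^ s2                     # parity bit, polynomial 1 + D^2
        sys_out.append(d)              # systematic output is the data bit itself
        par_out.append(p)
        s1, s2 = a, s1                 # shift the register
    return np.array(sys_out), np.array(par_out)

def awgn_channel(code_bits, ebn0_db, rate=0.5):
    """Map code bits {0,1} to BPSK symbols {-1,+1} (Section II) and add
    N(0, sigma^2) noise, with sigma derived from Eb/N0 and the code rate."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    sigma = np.sqrt(1.0 / (2.0 * rate * ebn0))
    symbols = 2.0 * np.asarray(code_bits, dtype=float) - 1.0
    return symbols + sigma * np.random.randn(len(code_bits)), sigma

# Encode a short random block and pass it through the noisy channel.
data = np.random.randint(0, 2, 8)
sys_bits, par_bits = rsc_encode(data)
received, sigma = awgn_channel(np.concatenate([sys_bits, par_bits]), ebn0_db=1.0)

The two returned streams correspond to the systematic and parity outputs of the encoder in Figure 1; a full turbo encoder would add a second, interleaved RSC encoder of the same kind.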
A similar decision rule, known as maximum a posteriori (MAP), which can be shown to be a minimum-probability-of-error rule, takes into account the a priori probabilities of the data. The general expression for the MAP rule in terms of a posteriori probabilities (APPs) is as follows:

              H1
P(d = +1|x)  ≷  P(d = -1|x)                                              (1)
              H2

Equation (1) states that one should choose hypothesis H1, (d = +1), if the APP P(d = +1|x) is greater than the APP P(d = -1|x); otherwise, one should choose hypothesis H2, (d = -1). Using Bayes' theorem, the APPs in Equation (1) can be replaced by their equivalent expressions, yielding the following:

                        H1
P(x|d = +1) P(d = +1)  ≷  P(x|d = -1) P(d = -1)                          (2)
                        H2

Equation (2) is generally expressed in terms of a ratio, yielding the so-called likelihood ratio test (with the same H1/H2 convention as above):

P(x|d = +1) / P(x|d = -1)  ≷  P(d = -1) / P(d = +1),   or equivalently
[P(x|d = +1) P(d = +1)] / [P(x|d = -1) P(d = -1)]  ≷  1                  (3)

By taking the logarithm of the likelihood ratio, we obtain a useful metric called the log-likelihood ratio (LLR). It is a real number representing a soft-decision output of a detector, designated as follows:

L(d|x) = log [P(d = +1|x) / P(d = -1|x)] = log [(P(x|d = +1) P(d = +1)) / (P(x|d = -1) P(d = -1))]   (4)

L(d|x) = log [P(x|d = +1) / P(x|d = -1)] + log [P(d = +1) / P(d = -1)]   (5)

L(d|x) = L(x|d) + L(d)                                                   (6)

To simplify the notation, Equation (6) is rewritten as follows:

L'(d̂) = Lc(x) + L(d)                                                     (7)

where the notation Lc(x) emphasizes that this LLR term is the result of a channel measurement made at the receiver. The equations above were developed with only a data detector in mind. Next, the introduction of a decoder will typically yield decision-making benefits. For a systematic code, it can be shown that the LLR (soft output) L(d̂) out of the decoder is equal to Equation (8):

L(d̂) = L'(d̂) + Le(d̂)                                                    (8)

where L'(d̂) is the LLR of a data bit out of the demodulator (input to the decoder), and Le(d̂), called the extrinsic LLR, represents extra knowledge gleaned from the decoding process. The output sequence of a systematic decoder is made up of values representing data bits and parity bits. From Equations (7) and (8), the output LLR L(d̂) of the decoder is now written as follows:

L(d̂) = Lc(x) + L(d) + Le(d̂)                                             (9)

Equation (9) shows that the output LLR of a systematic decoder can be represented as having three LLR elements: a channel measurement, a priori knowledge of the data, and an extrinsic LLR stemming solely from the decoder. To yield the final L(d̂), the individual LLRs can simply be added, as shown in Equation (9), because the three terms are statistically independent. This soft decoder output L(d̂) is a real number that provides a hard decision as well as the reliability of that decision. The sign of L(d̂) denotes the hard decision; that is, for positive values of L(d̂) decide that d = +1, and for negative values decide that d = -1. The magnitude of L(d̂) denotes the reliability of that decision. Often, the value of Le(d̂) due to the decoding has the same sign as Lc(x) + L(d), and therefore acts to improve the reliability of L(d̂).
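
As a numerical illustration of the LLR decomposition in Equations (5) to (9), the short Python sketch below combines the three terms for a single received sample. The channel LLR formula Lc(x) = 2x/σ² for BPSK over AWGN, and the specific numbers used, are assumptions for illustration; they are not taken from the paper.

def channel_llr(x, sigma):
    """Channel LLR Lc(x) for BPSK (+1/-1) over AWGN with noise variance sigma^2.
    Lc(x) = 2*x / sigma**2 is the standard result for this mapping
    (assumed here; the paper does not derive it)."""
    return 2.0 * x / sigma ** 2

# One received systematic sample and illustrative (hypothetical) decoder terms.
x = 0.8            # noisy observation of a transmitted +1
sigma = 0.7        # channel noise standard deviation
L_apriori = 0.0    # first iteration: data assumed equally likely (Section V)
L_extrinsic = 1.3  # placeholder for what a component decoder might return

# Equation (9): soft output = channel LLR + a priori LLR + extrinsic LLR.
L_out = channel_llr(x, sigma) + L_apriori + L_extrinsic
hard_decision = +1 if L_out >= 0 else -1  # sign of L(d) gives the hard decision
reliability = abs(L_out)                  # magnitude gives its reliability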
IV. LOG-MAP ALGORITHM

This algorithm, called the Log-MAP algorithm [11-15], gives the same error performance as the MAP algorithm but is easier to implement. The Log-MAP algorithm computes the MAP parameters by utilizing a correction function to compute the logarithm of a sum of numbers. More precisely, for A1 = A + B, with Ã = ln A and B̃ = ln B,

Ã1 = ln(A + B) = max(Ã, B̃) + fc(|Ã − B̃|)                                (11)

where fc(|Ã − B̃|) is the correction function. fc(|Ã − B̃|) can be computed using either a look-up table [8] or simply a threshold detector [12] that performs similarly to the look-up table. The simple equation for the threshold detector is

fc(|Ã − B̃|) = 0.375 if |Ã − B̃| ≤ 2, and 0 otherwise.

This operation can be extended recursively. If A2 = A + B + C, then

Ã2 = ln(A1 + C) = max(Ã1, C̃) + fc(|Ã1 − C̃|)                             (12)

This recursive operation is especially needed for the computation of the soft-output decoded bits. At each step, the logarithm of the sum of two values is obtained by a maximization operation plus an additional correction value, which is provided by a look-up table or a threshold detector in the Log-MAP algorithm. The Log-MAP parameters are very close approximations of the MAP parameters and, therefore, the Log-MAP BER performance is close to that of the MAP algorithm.
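
The correction-function idea behind Equations (11) and (12) translates directly into code. The sketch below is a minimal illustration, not the authors' implementation: it shows the exact Jacobian logarithm next to the threshold approximation with the 0.375 constant quoted above, and a Log-MAP decoder would apply this operation repeatedly inside its forward, backward and LLR recursions.

import math

def max_star_exact(a, b):
    """Exact Jacobian logarithm: ln(e^a + e^b) = max(a, b) + ln(1 + exp(-|a-b|))."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_threshold(a, b):
    """Threshold-detector correction of Eq. (11):
    fc(|a-b|) = 0.375 if |a-b| <= 2, and 0 otherwise [12]."""
    fc = 0.375 if abs(a - b) <= 2.0 else 0.0
    return max(a, b) + fc

def max_star_list(values, op=max_star_threshold):
    """Recursive extension, as in Eq. (12): ln(e^a + e^b + e^c + ...)."""
    acc = values[0]
    for v in values[1:]:
        acc = op(acc, v)
    return acc

print(max_star_exact(1.0, 0.5), max_star_threshold(1.0, 0.5))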
V. PRINCIPLES OF ITERATIVE DECODING

In a typical communications receiver, a demodulator is often designed to produce soft decisions, which are then transferred to a decoder. The improvement in error performance of systems utilizing such soft decisions is typically approximated as 2 dB, compared to hard decisions in AWGN. Such a decoder could be called a soft-input/hard-output decoder, because the final decoding process out of the decoder must terminate in bits (hard decisions). With turbo codes, where two or more component codes are used and decoding involves feeding outputs from one decoder to the inputs of other decoders in an iterative fashion, a hard-output decoder would not be suitable, because a hard decision into a decoder degrades system performance compared to soft decisions.

Hence, what is needed for the decoding of turbo codes is a soft-input/soft-output decoder. For the first decoding iteration of such a decoder, we generally assume the binary data to be equally likely, yielding an initial a priori LLR value of L(d) = 0. The channel LLR value, Lc(x), is measured by forming the logarithm of the ratio of the likelihood values for a particular observation, which appears as the second term in Equation (5). The output L(d̂) of the decoder in Figure 3 is made up of the LLR from the detector, L'(d̂), and the extrinsic LLR output, Le(d̂), representing knowledge gleaned from the decoding process. As illustrated in Figure 2, for iterative decoding, the extrinsic likelihood is fed back to the decoder input, to serve as a refinement of the a priori probability of the data for the next iteration.
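
The feedback structure just described can be summarised in Python. The sketch below is a structural outline only, not the paper's simulator: siso_decode stands for any SISO component decoder such as SOVA or Log-MAP, interleave and deinterleave are the turbo interleaver and its inverse, and all three are assumed to be supplied by the caller.

import numpy as np

def turbo_decode(Lc_sys, Lc_par1, Lc_par2, interleave, deinterleave,
                 siso_decode, n_iter=8):
    """Structural sketch of iterative turbo decoding.
    siso_decode(Lc_sys, Lc_par, L_apriori) is any SISO component decoder
    returning extrinsic LLRs; it is assumed here, not defined by the paper."""
    L_ext2 = np.zeros_like(Lc_sys)           # first iteration: a priori L(d) = 0
    for _ in range(n_iter):
        # Decoder 1 uses de-interleaved extrinsic info from decoder 2 as a priori.
        L_ext1 = siso_decode(Lc_sys, Lc_par1, L_ext2)
        # Decoder 2 works on the interleaved systematic sequence.
        L_ext2_i = siso_decode(interleave(Lc_sys), Lc_par2, interleave(L_ext1))
        L_ext2 = deinterleave(L_ext2_i)
    # Final soft output, Eq. (9): channel + a priori + extrinsic terms.
    L_out = Lc_sys + L_ext1 + L_ext2
    return (L_out >= 0).astype(int)          # hard decisions from the sign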
VI. SIMULATION RESULTS

The simulation curves presented show the influence of iteration number, block length, code rate and code generator. Rate-1/2 codes are obtained from their rate-1/3 counterparts by alternately puncturing the parity bits of the constituent encoders. Each constituent encoder is a rate R = 1/2 encoder with constraint length 3 and generators G1 = 7, G2 = 5. The BER has been computed after each decoding iteration as a function of the signal-to-noise ratio Eb/No.
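
The puncturing step mentioned above can be illustrated with a few lines of Python. This is an illustrative sketch consistent with the description in the text; the exact pattern, i.e. which encoder's parity bit is kept at even and odd positions, is our assumption rather than something stated in the paper.

import numpy as np

def puncture_to_rate_half(sys_bits, par1, par2):
    """Puncture a rate-1/3 turbo codeword (systematic plus two parity streams)
    to rate 1/2 by alternately keeping parity bits from encoder 1 and encoder 2."""
    out = []
    for k in range(len(sys_bits)):
        out.append(sys_bits[k])                          # systematic bit always sent
        out.append(par1[k] if k % 2 == 0 else par2[k])   # alternate parity streams
    return np.array(out)

# Two code bits are transmitted per data bit, i.e. rate R = 1/2.
assert len(puncture_to_rate_half(np.zeros(4), np.ones(4), np.ones(4))) == 8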
In Figures 3-6, BER curves for SOVA and Log-MAP as a function of Eb/No are shown for constituent codes of constraint length three and code rate 1/2. Eight decoding iterations were performed for block lengths of 1024 and 4096. From these figures it can be observed that a larger block length corresponds to a lower BER, and the improvement achieved when the block length is increased from 1024 to 4096 holds for both algorithms. Figure 5 shows that Log-MAP gives better performance than SOVA for a constraint length of three and block lengths of 1024 and 4096, respectively.
[Figure 3 (plot not reproduced): BER versus Eb/No (dB) of the K = 4096 turbo code with SOVA decoding in an AWGN channel for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.]

[Figure 4 (plot not reproduced): BER versus Eb/No (dB) of the K = 1024 turbo code with Log-MAP decoding in an AWGN channel for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.]

[Figure 5 (plot not reproduced): BER versus Eb/No (dB) of the K = 4096 turbo code with Log-MAP and SOVA decoding after 8 decoder iterations in an AWGN channel: (1) SOVA, (2) Log-MAP.]

[Figure 6 (plot not reproduced): BER versus Eb/No (dB) of the K = 4096 turbo code with Log-MAP decoding in an AWGN channel for various numbers of iterations: (1) iteration 1, (2) iteration 3, (3) iteration 6, (4) iteration 8.]

VII. CONCLUSIONS

Our simulation results show that Log-MAP performs better than SOVA for the block lengths considered, and it is therefore more suitable for wireless communication.

REFERENCES

[1]  C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," Proc. IEEE ICC '93, pp. 1064-1070.
[2]  S. Benedetto and G. Montorsi, "Design of Parallel Concatenated Convolutional Codes," IEEE Trans. on Communications, vol. 44, no. 5, May 1996.
[3]  C. Berrou, "The Ten-Year-Old Turbo Codes are Entering into Service," IEEE Commun. Mag., vol. 41, no. 8, pp. 110-116, Aug. 2003.
[4]  L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Trans. on Inf. Theory, vol. IT-20, pp. 284-287, Mar. 1974.
[5]  T. A. Summers and S. G. Wilson, "SNR Mismatch and Online Estimation in Turbo Decoding," IEEE Trans. on Commun., vol. 46, no. 4, pp. 421-424, Apr. 1998.
[6]  P. Robertson, P. Hoeher, and E. Villebrun, "Optimal and Sub-Optimal Maximum A Posteriori Algorithms Suitable for Turbo Decoding," European Trans. on Telecomm., vol. 8, no. 2, pp. 119-126, Mar.-Apr. 1997.
[7]  P. Robertson, E. Villebrun, and P. Hoeher, "A Comparison of Optimal and Sub-optimal MAP Decoding Algorithms Operating in the Log Domain," Proc. International Conference on Communications, pp. 1009-1013, June 1995.
[8]  S. Benedetto, G. Montorsi, D. Divsalar, and F. Pollara, "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," TDA Progress Report 42-124, pp. 63-87, February 15, 1996.
[9]  J. Hagenauer and P. Hoeher, "A Viterbi Algorithm with Soft-Decision Outputs and Its Applications," Proc. GLOBECOM, pp. 1680-1686, November 1989.
[10] J. Hagenauer, "Source-Controlled Channel Decoding," IEEE Trans. on Communications, vol. 43, no. 9, pp. 2449-2457, September 1995.
[11] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Binary Block and Convolutional Codes," IEEE Trans. on Inform. Theory, vol. 42, no. 2, pp. 429-445, March 1996.
[12] W. J. Gross and P. G. Gulak, "Simplified MAP algorithm suitable for implementation of turbo decoders," Electronics Letters, vol. 34, no. 16, August 6, 1998.
[13] J. Hagenauer and L. Papke, "Decoding Turbo Codes With the Soft Output Viterbi Algorithm (SOVA)," Proc. Int. Symp. on Information Theory, p. 164, Norway, June 1994.
[14] J. Hagenauer, P. Robertson, and L. Papke, "Iterative Decoding of Systematic Convolutional Codes With the MAP and SOVA Algorithms," ITG Conf., Frankfurt, Germany, pp. 1-9, Oct. 1994.
[15] J. Hagenauer, E. Offer, and L. Papke, "Iterative Decoding of Block and Convolutional Codes," IEEE Trans. on Inform. Theory, vol. IT-42, no. 2, pp. 429-445, March 1996.
