Turbo code



Technical Seminar Report on
TURBO CODE

A technical seminar report submitted in partial fulfillment of the requirement for the degree of Bachelor of Engineering under BPUT

SUBMITTED BY:
PRASANTA KUMAR BARIK
REGISTRATION NO: 0701106246
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
COLLEGE OF ENGINEERING AND TECHNOLOGY
Techno Campus, Kalinga Nagar, Ghatikia, Bhubaneswar-751003

CERTIFICATE

This is to certify that PRASANTA KUMAR BARIK, a student of 8th semester B.Tech Computer Science and Engineering at the College of Engineering & Technology, with registration number 0701106246, batch 2007-2011, has taken active interest in preparing the report on "TURBO CODE".

This is in partial fulfillment of the requirement for the Bachelor of Technology degree in Computer Science, under Biju Pattnaik University of Technology, Orissa. This report is verified and attested by

Prof. Jibitesh Mishra
HOD, Department of CSE
College of Engineering & Technology, Bhubaneswar

ACKNOWLEDGEMENT

Many people have contributed to the success of this work. Although a single sentence hardly suffices, I would like to thank God for blessing me with His grace.
I am profoundly indebted to my seminar guide Er. Sarita Tripathy for innumerable acts of timely advice and encouragement, and I sincerely express my gratitude to her.

I express my immense pleasure and thankfulness to all the teachers and staff of the Department of Computer Science, College of Engineering & Technology, for their cooperation and support.

Last but not least, I thank all others, and especially my classmates, who in one way or another helped me in the successful completion of this work.

Prasanta Kumar Barik
Regd. no.: 0701106246
8th Semester, CSE

ABSTRACT

During the transmission of data from transmitter to receiver, there is loss of information in the communication channel due to noise. This loss is measured in terms of bit error rate (BER), and several decoding algorithms and modulation techniques are used to minimize it. Turbo codes are one of the most powerful types of error control codes currently available, able to achieve low BERs at signal-to-noise ratios (SNR) very close to the Shannon limit. Nevertheless, the specific performance of the code depends strongly on the particular decoding algorithm used at the receiver. In this sense, the choice of the decoding algorithm involves a trade-off between the gain introduced by the code and the complexity of the decoding process.

Prasanta Kumar Barik
CSE-0701106246

INDEX

I. Introduction
II. Channel Coding
    Backward Error Correction
    Forward Error Correction
    Need for Better Codes
III. Turbo Code
    Encoding with Interleaving
    Recursive Systematic Convolutional Encoder
    Decoding
    Performance
    Example
IV. Conclusion
V.
References

Turbo Codes

1. Introduction

Concatenated coding schemes were first proposed by Forney as a method for achieving large coding gains by combining two or more relatively simple building-block or component codes (sometimes called constituent codes). The resulting codes had the error-correction capability of much longer codes, and they were endowed with a structure that permitted relatively easy to moderately complex decoding. A serial concatenation of codes is most often used for power-limited systems such as transmitters on deep-space probes. The most popular of these schemes consists of a Reed-Solomon outer code (applied first, removed last) followed by a convolutional inner code (applied last, removed first).

A turbo code can be thought of as a refinement of the concatenated encoding structure plus an iterative algorithm for decoding the associated code sequence. Turbo codes were first introduced in 1993 by Berrou, Glavieux, and Thitimajshima, who described a scheme that achieves a bit-error probability of 10−5 using a rate-1/2 code over an additive white Gaussian noise channel, with modulation at an Eb/N0 of 0.7 dB. The codes are constructed by using two or more component codes on different interleaved versions of the same information sequence. Whereas, for conventional codes, the final step at the decoder yields hard-decision decoded bits (or, more generally, decoded symbols), for a concatenated scheme such as a turbo code to work properly, the decoding algorithm should not limit itself to passing hard decisions among the decoders. To best exploit the information learned from each decoder, the decoding algorithm must effect an exchange of soft decisions rather than hard decisions.
For a system with two component codes, the concept behind turbo decoding is to pass soft decisions from the output of one decoder to the input of the other decoder, and to iterate this process several times so as to produce more reliable decisions.

2. Channel Coding

The task of channel coding is to encode the information sent over a communication channel in such a way that, in the presence of channel noise, errors can be detected and/or corrected. We distinguish between two coding methods:

• Backward error correction (BEC) requires only error detection: if an error is detected, the sender is requested to retransmit the message. While this method is simple and sets lower requirements on the code's error-correcting properties, it requires duplex communication and causes undesirable delays in transmission.

• Forward error correction (FEC) requires that the decoder should also be capable of correcting a certain number of errors, i.e. it should be capable of locating the positions where the errors occurred. Since FEC codes require only simplex communication, they are especially attractive in wireless communication systems, helping to improve the energy efficiency of the system. In the rest of this paper we deal with binary FEC codes only.

Next, we briefly recall the concept of conventional convolutional codes. Convolutional codes differ from block codes in the sense that they do not break the message stream into fixed-size blocks. Instead, redundancy is added continuously to the whole stream. The encoder keeps M previous input bits in memory.

[Figure 1: A convolutional encoder]
Each output bit of the encoder then depends on the current input bit as well as the M stored bits. Figure 1 depicts a sample convolutional encoder. The encoder produces two output bits per every input bit, defined by the equations (with addition modulo 2)

y1,i = xi + xi−1 + xi−3,
y2,i = xi + xi−2 + xi−3.

For this encoder, M = 3, since the ith bits of output depend on input bit i, as well as the three previous bits i − 1, i − 2, i − 3. The encoder is nonsystematic, since the input bits do not appear explicitly in its output.

An important parameter of a channel code is the code rate. If the input size (or message size) of the encoder is k bits and the output size (the code word size) is n bits, then the ratio k/n is called the code rate r. Since our sample convolutional encoder produces two output bits for every input bit, its rate is 1/2. The code rate expresses the amount of redundancy in the code: the lower the rate, the more redundant the code.

Finally, the Hamming weight, or simply the weight, of a code word is the number of non-zero symbols in the code word. In the case of binary codes, dealt with in this paper, the weight of a code word is the number of ones in the word.

3. A Need for Better Codes

Designing a channel code is always a tradeoff between energy efficiency and bandwidth efficiency. Codes with lower rate (i.e. more redundancy) can usually correct more errors. If more errors can be corrected, the communication system can operate with a lower transmit power, transmit over longer distances, tolerate more interference, use smaller antennas and transmit at a higher data rate. These properties make the code energy efficient. On the other hand, low-rate codes have a large overhead and hence consume more bandwidth.
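As an aside, the sample encoder of Figure 1 is easy to simulate. The sketch below is illustrative only (the function name and the representation of bits as a Python list are our own, not from the report); it implements the two generator equations using XOR as modulo-2 addition:

```python
def conv_encode(bits):
    """Nonsystematic rate-1/2 convolutional encoder of Figure 1.

    Implements, with XOR as modulo-2 addition:
        y1,i = xi + x(i-1) + x(i-3)
        y2,i = xi + x(i-2) + x(i-3)
    """
    x1 = x2 = x3 = 0              # the M = 3 memory elements, initially zero
    out = []
    for x in bits:
        out.append(x ^ x1 ^ x3)   # y1,i
        out.append(x ^ x2 ^ x3)   # y2,i
        x1, x2, x3 = x, x1, x2    # shift the register
    return out

# Two output bits per input bit, hence rate k/n = 1/2:
print(conv_encode([1, 0, 1, 1]))  # -> [1, 1, 1, 0, 1, 0, 1, 0]
```

Note that the output contains twice as many bits as the input, matching the rate-1/2 claim in the text.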
Also, decoding complexity grows exponentially with code length, and long (low-rate) codes set high computational requirements for conventional decoders. According to Viterbi, this is the central problem of channel coding: encoding is easy but decoding is hard.

For every combination of bandwidth (W), channel type, signal power (S) and received noise power (N), there is a theoretical upper limit on the data transmission rate R for which error-free data transmission is possible. This limit is called channel capacity, or Shannon capacity (after Claude Shannon, who introduced the notion in 1948). For additive white Gaussian noise channels, the formula is

R < W log2(1 + S/N) [bits/second]

In practical settings, there is of course no such thing as an ideal error-free channel. Instead, error-free data transmission is interpreted in the sense that the bit error probability can be brought down to an arbitrarily small constant. The bit error probability, or bit error rate (BER), used in benchmarking is often chosen to be 10−5 or 10−6. Now, if the transmission rate, the bandwidth and the noise power are fixed, we get a lower bound on the amount of energy that must be expended to convey one bit of information. Hence, Shannon capacity sets a limit to the energy efficiency of a code.

Although Shannon developed his theory already in the 1940s, several decades later code designs were still unable to come close to the theoretical bound. Even at the beginning of the 1990s, the gap between this theoretical bound and practical implementations was still at best about 3 dB. This means that practical codes required about twice as much energy as the theoretically predicted minimum.* Hence, new codes were sought that would allow for easier decoding. One way of making the task of the decoder easier is using a code with mostly high-weight code words. High-weight code words, i.e.
code words containing more ones and fewer zeros, can be distinguished more easily.

Another strategy involves combining simple codes in a parallel fashion, so that each part of the code can be decoded separately with less complex decoders, and each decoder can gain from information exchange with the others. This is called the divide-and-conquer strategy. Keeping these design methods in mind, we are now ready to introduce the concept of turbo codes.

4. Turbo Codes: Encoding with Interleaving

The first turbo code, based on convolutional encoding, was introduced in 1993 by Berrou et al. Since then, several schemes have been proposed and the term "turbo codes" has been generalized to cover block codes as well as convolutional codes. Simply put, a turbo code is formed from the parallel concatenation of two codes separated by an interleaver.

The generic design of a turbo code is depicted in Figure 2. Although the general concept allows for free choice of the encoders and the interleaver, most designs follow the ideas presented in the original paper:

• The two encoders used are normally identical;
• The code is in a systematic form, i.e. the input bits also occur in the output;
• The interleaver reads the bits in a pseudo-random order.

[Figure 2: The generic turbo encoder. The input Xi feeds Encoder 1 directly and Encoder 2 through the interleaver; the outputs are the systematic stream, Output 1 and Output 2.]

* A decibel is a relative measure. If E is the actual energy and Eref is the theoretical lower bound, then the relative energy increase in decibels is 10 log10(E/Eref). Since log10 2 ≈ 0.3, a twofold relative energy increase equals 3 dB.

The choice of the interleaver is a crucial part of the turbo code design. The task of the interleaver is to "scramble" bits in a (pseudo-)random, albeit predetermined, fashion. This serves two purposes.
Firstly, if the input to the second encoder is interleaved, its output is usually quite different from the output of the first encoder. This means that even if one of the output code words has low weight, the other usually does not, and there is a smaller chance of producing an output with very low weight. Higher weight, as we saw above, is beneficial for the performance of the decoder. Secondly, since the code is a parallel concatenation of two codes, the divide-and-conquer strategy can be employed for decoding. If the input to the second encoder is scrambled, its output will also be different from, or "uncorrelated" with, the output of the first encoder. This means that the corresponding two decoders will gain more from information exchange.

We now briefly review some interleaver design ideas, stressing that the list is by no means complete. The first three designs are illustrated in Figure 3 with a sample input size of 15 bits.

1. A "row-column" interleaver: data is written row-wise and read column-wise. While very simple, it also provides little randomness.

2. A "helical" interleaver: data is written row-wise and read diagonally.

3. An "odd-even" interleaver: first, the bits are left uninterleaved and encoded, but only the odd-positioned coded bits are stored. Then, the bits are scrambled and encoded, but now only the even-positioned coded bits are stored. Odd-even interleavers can be used when the second encoder produces one output bit per input bit.

4.
A pseudo-random interleaver defined by a pseudo-random number generator or a look-up table.

[Figure 3: Interleaver designs]
Input:                         X1 X2 X3 X4 X5 X6 X7 X8 X9 X10 X11 X12 X13 X14 X15
Row-column interleaver output: X1 X6 X11 X2 X7 X12 X3 X8 X13 X4 X9 X14 X5 X10 X15
Helical interleaver output:    X11 X7 X3 X14 X10 X1 X12 X8 X4 X15 X6 X2 X13 X9 X5
Odd-even interleaver output:
  Encoder output without interleaving:         Y1 - Y3 - Y5 - Y7 - Y9 - Y11 - Y13 - Y15
  Encoder output with row-column interleaving: - Z6 - Z2 - Z12 - Z8 - Z4 - Z14 - Z10 -
  Final output of the encoder:                 Y1 Z6 Y3 Z2 Y5 Z12 Y7 Z8 Y9 Z4 Y11 Z14 Y13 Z10 Y15

There is no such thing as a universally best interleaver. For short block sizes, the odd-even interleaver has been found to outperform the pseudo-random interleaver, and vice versa for long block sizes. The choice of the interleaver plays a key part in the success of the code, and the best choice depends on the code design. For further reading, several articles on interleaver design can be found in the literature.

5. Recursive Systematic Convolutional (RSC) Encoder

The recursive systematic convolutional (RSC) encoder is obtained from the nonrecursive nonsystematic (conventional) convolutional encoder by feeding back one of its encoded outputs to its input. Figure 4.1 shows a conventional convolutional encoder.

[Figure 4.1: Conventional convolutional encoder]

The conventional convolutional encoder is represented by the generator sequences g1 = [111] and g2 = [101], written more compactly as G = [g1, g2]. The RSC encoder obtained from this conventional convolutional encoder is represented as G = [1, g2/g1], where the first output (represented by g1) is fed back to the input. In this representation, 1 denotes the systematic output, g2 denotes the feedforward output, and g1 is the feedback to the input of the RSC encoder.
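Under the same conventions as the earlier sketch for Figure 1, the RSC encoder G = [1, g2/g1] can be sketched as follows (illustrative only; the names and bit-list representation are our own). Note how the input bits reappear unchanged as the systematic stream:

```python
def rsc_encode(bits):
    """RSC encoder G = [1, g2/g1] with g1 = [111] (feedback) and
    g2 = [101] (feedforward), derived from the encoder of Figure 4.1."""
    s1 = s2 = 0                    # the two delay elements (D, D)
    systematic, parity = [], []
    for x in bits:
        a = x ^ s1 ^ s2            # feedback sum per g1 = [111]
        systematic.append(x)       # systematic output: the input itself
        parity.append(a ^ s2)      # feedforward output per g2 = [101]
        s1, s2 = a, s1             # shift the register
    return systematic, parity

sys_out, par_out = rsc_encode([1, 0, 0, 0])
print(sys_out, par_out)  # -> [1, 0, 0, 0] [1, 1, 1, 0]
```

The systematic output simply repeats the input, which is exactly the property that makes soft-decision decoding of the parallel concatenation easier.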
Figure 4.2 shows the resulting RSC encoder.

[Figure 4.2: The RSC encoder obtained from the encoder of Figure 4.1]

6. Turbo Codes: Some Notes on Decoding

In the traditional decoding approach, the demodulator makes a "hard" decision on the received symbol, and passes to the error control decoder a discrete value, either a 0 or a 1. The disadvantage of this approach is that while the value of some bits is determined with greater certainty than that of others, the decoder cannot make use of this information.

A soft-in soft-out (SISO) decoder receives as input a "soft" (i.e. real) value of the signal. The decoder then outputs for each data bit an estimate expressing the probability that the transmitted data bit was equal to one. In the case of turbo codes, there are two decoders, one for the output of each encoder. Both decoders provide estimates of the same set of data bits, albeit in a different order. If all intermediate values in the decoding process are soft values, the decoders can gain greatly from exchanging information, after appropriate reordering of values. Information exchange can be iterated a number of times to enhance performance. At each round, decoders re-evaluate their estimates using information from the other decoder, and only in the final stage are hard decisions made, i.e. each bit is assigned the value 1 or 0. Such decoders, although more difficult to implement, are essential in the design of turbo codes.

7. Working of Turbo Codes: An Example

[Figure 5.1: Encoding]

[Figure 5.2: Decoding]

8. Turbo Codes: Performance

We have seen that conventional codes left a 3 dB gap between theory and practice.
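The decibel formula from the earlier footnote makes this gap concrete; a quick check in Python (illustrative, using the relation E/Eref = 10^(dB/10)):

```python
def db_to_energy_ratio(db):
    """Relative energy increase E/Eref corresponding to a decibel figure,
    per the footnote: dB = 10 * log10(E / Eref)."""
    return 10 ** (db / 10)

# The 3 dB gap of conventional codes: about twice the theoretical minimum energy.
print(round(db_to_energy_ratio(3.0), 2))   # -> 2.0
# The 0.7 dB gap of the first turbo code: a less than 1.2-fold overhead.
print(round(db_to_energy_ratio(0.7), 3))   # -> 1.175
```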
After bringing out the arguments for the efficiency of turbo codes, one clearly wants to ask: how efficient are they? Already the first rate-1/3 code proposed in 1993 made a huge improvement: the gap between Shannon's limit and implementation practice was only 0.7 dB, giving a less than 1.2-fold overhead. (In the authors' measurements, the allowed bit error rate BER was 10−5.) In [2], a thorough comparison between convolutional codes and turbo codes is given. In practice, the code rate usually varies between 1/2 and 1/6. Let the allowed bit error rate be 10−6. For code rate 1/2, the relative increase in energy consumption is then 4.80 dB for convolutional codes, and 0.98 dB for turbo codes. For code rate 1/6, the respective numbers are 4.28 dB and −0.12 dB. It can also be noticed that turbo codes gain significantly more from lowering the code rate than conventional convolutional codes do.

[Figure 6: Performance]

9. The UMTS Turbo Code

[Figure 7: The UMTS turbo encoder]

The UMTS turbo encoder closely follows the design ideas presented in the original 1993 paper. The starting building block of the encoder is the simple convolutional encoder depicted in Figure 1. This encoder is used twice, once without interleaving and once with the use of an interleaver, exactly as described above. In order to obtain a systematic code, desirable for better decoding, the following modifications are made to the design. Firstly, a systematic output is added to the encoder. Secondly, the second output from each of the two encoders is fed back to the corresponding encoder's input. The resulting turbo encoder is a rate-1/3 encoder, since for each input bit it produces one systematic output bit and two parity bits.
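The parallel structure described above, one systematic stream plus two parity streams with the second computed on interleaved input, can be sketched end-to-end. This is an illustrative sketch: the parity function reuses the earlier G = [1, g2/g1] RSC example, and the permutation passed as `interleaver` is a toy stand-in for the actual UMTS interleaver, not its real design.

```python
def turbo_encode(bits, interleaver):
    """Rate-1/3 parallel turbo encoder sketch: a systematic stream plus
    parity from two identical RSC encoders (g1 = [111] feedback,
    g2 = [101] feedforward), the second encoder fed the interleaved input."""
    def rsc_parity(xs):
        s1 = s2 = 0
        out = []
        for x in xs:
            a = x ^ s1 ^ s2        # feedback sum
            out.append(a ^ s2)     # parity bit
            s1, s2 = a, s1
        return out

    scrambled = [bits[i] for i in interleaver]  # interleaved copy of the input
    # One systematic bit and two parity bits per input bit: rate 1/3.
    return bits, rsc_parity(bits), rsc_parity(scrambled)

# A toy 4-bit permutation stands in for the real interleaver:
sys_out, p1, p2 = turbo_encode([1, 0, 1, 0], [0, 2, 1, 3])
print(sys_out, p1, p2)  # -> [1, 0, 1, 0] [1, 1, 0, 1] [1, 0, 0, 1]
```

Even in this toy case the two parity streams differ, illustrating why the interleaver makes it unlikely that both component codes simultaneously produce low-weight outputs.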
Details on the interleaver design can be found in the corresponding specification. (Although the relative value quoted above is negative, it does not actually violate Shannon's limit: the negative value is due to the fact that we allow for a small error, whereas Shannon's capacity applies to perfect error-free transmission.)

As a comparison, the GSM system uses conventional convolutional encoding in combination with block codes. The code rate varies with the type of input; in the case of a speech signal it is 260/456, i.e. 260 speech bits are encoded into 456 coded bits.

10. Conclusions

Turbo codes are a recent development in the field of forward error correction channel coding. The codes make use of three simple ideas: parallel concatenation of codes to allow simpler decoding; interleaving to provide a better weight distribution; and soft decoding to enhance decoder decisions and maximize the gain from decoder interaction.

While earlier, conventional codes performed, in terms of energy efficiency or, equivalently, channel capacity, at least twice as badly as the theoretical bound suggested, turbo codes immediately achieved performance results in the near range of the theoretically best values, giving a less than 1.2-fold overhead. Since the first proposed design in 1993, research in the field of turbo codes has produced even better results. Nowadays, turbo codes are used in many commercial applications, including both of the third-generation cellular systems, UMTS and cdma2000.

11. References

[1] University of South Australia, Institute for Telecommunications Research, Turbo coding research group. http://www.itr.unisa.edu.au/~steven/turbo/.

[2] S.A. Barbulescu and S.S. Pietrobon. Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part I: Code structures and interleaver design. J. Elec. and Electron. Eng., Australia, 19:129–142, September 1999.

[3] S.A. Barbulescu and S.S. Pietrobon.
Turbo codes: A tutorial on a new class of powerful error correction coding schemes. Part II: Decoder design and performance. J. Elec. and Electron. Eng., Australia, 19:143–152, September 1999.