Finite-Precision Analysis of Demappers and Decoders for LDPC-Coded M-QAM Systems



IEEE TRANSACTIONS ON BROADCASTING, VOL. 55, NO. 2, JUNE 2009

Marco Baldi, Franco Chiaraluce, Member, IEEE, and Giovanni Cancellieri

Abstract—LDPC codes are state-of-the-art error correcting codes, included in several standards for broadcast transmissions. Iterative soft-decision decoding algorithms for LDPC codes reach excellent error correction capability; their performance, however, is strongly affected by finite-precision issues in the representation of inner variables. Great attention has been paid, in recent literature, to the topic of quantization for LDPC decoders, but mostly focusing on binary modulations and analyzing finite-precision effects in a disaggregated manner, i.e., considering each block of the receiver separately. Modern telecommunication standards, instead, often adopt high-order modulation schemes, e.g., M-QAM, with the aim of achieving large spectral efficiency. This poses additional quantization problems, which have been little debated in previous literature. This paper discusses the choice of suitable quantization characteristics for both the decoder messages and the received samples in LDPC-coded systems using M-QAM schemes. The analysis also involves the demapper block, which provides the initial likelihood values for the decoder, by relating its quantization strategy to that of the decoder. A new demapper version, based on approximate expressions, is also presented; it introduces a slight deviation from the ideal case but yields a low-complexity hardware implementation.

Index Terms—Demodulation, digital communication, error correction codes, fixed point arithmetic, quantization.

I. INTRODUCTION

THE current scenario of error correcting codes is dominated by schemes using Soft-Input Soft-Output (SISO) decoding.
Among them, an important role is played by Low-Density Parity-Check (LDPC) codes, which make it possible to approach the theoretical Shannon limit [1], [2], while ensuring reduced complexity. For this reason, these codes have been included in some recent telecommunication standards [3]–[5]. The second generation of Digital Video Broadcasting (DVB) standards, in particular, considers LDPC codes in place of the more conventional concatenated schemes formed by Reed-Solomon and convolutional codes that were adopted in first-generation DVB standards. Similarly, the second version of the satellite DVB (DVB-S2) standard includes LDPC codes in conjunction with BCH codes [3]. LDPC codes will probably also be adopted in the upcoming second generation of the terrestrial DVB (DVB-T2) standard, which will soon replace its present version [6]. Possible technologies to be included in such a new standard are currently under evaluation [7].

Manuscript received May 15, 2008; revised February 09, 2009. First published April 28, 2009; current version published May 22, 2009. The authors are with the Department of Biomedical Engineering, Electronics and Telecommunications, Polytechnic University of Marche, 60131 Ancona, Italy. Digital Object Identifier 10.1109/TBC.2009.2016498

Fig. 1. Block diagram of the LDPC-coded M-QAM system.

Based on the above considerations, a relevant issue concerns the comparison between the error rate performance achievable by using LDPC codes and that ensured by other schemes employing SISO decoding. An example of such a comparison will be given in Section II for the important case of the Digital Video Broadcasting - Return Channel Satellite (DVB-RCS) standard [8]. Moreover, modern broadcast communications are characterized by increasing throughput requirements; this is true, for example, for the DVB-T2 standard, which must support High Definition Television (HDTV) services.
So, there is the need for large spectral efficiencies, which is usually satisfied by employing high-order modulation schemes [9], [10]. The DVB-T standard adopts QPSK, 16-QAM and 64-QAM schemes in conjunction with OFDM, and the same will probably hold for DVB-T2. Another issue in broadcast transmissions concerns the complexity of the decoder implementation, which can be somewhat reduced by introducing suitable approximations [11]. In particular, in SISO decoders, complexity is strongly affected by the finite-precision representation of the inner variables.

The aim of this paper is to study finite-precision effects on an LDPC-coded M-QAM system of the type depicted in Fig. 1; it employs binary LDPC codes in conjunction with high-order modulation schemes [12]. The meaning of the various blocks and quantities involved in Fig. 1 will be explained in detail in Sections IV and V.

0018-9316/$25.00 © 2009 IEEE
Authorized licensed use limited to: TAGORE ENGINEERING COLLEGE. Downloaded on June 13, 2009 at 01:50 from IEEE Xplore. Restrictions apply.

This topic has already been discussed in previous literature, but most previous works were limited to binary modulation. Higher-order modulation schemes, like M-QAM, whose adoption is justified by the need to increase spectral efficiency, pose a number of additional problems. In particular,
they require modeling the effect of the demapper block (i.e., the symbol-to-metric calculator) and refining the optimization procedure so as to save quantization bits without incurring significant performance losses. This can suggest, in particular, the adoption of suitable non-uniform quantization schemes, able to efficiently counter the clipping effect. If not controlled, this effect can cause the appearance of remarkable and unexpected error floors.

After having derived, through examples, a quantitative evaluation of the quantization and clipping effects for the proposed scenario, we discuss a non-uniform quantization law that represents a good trade-off for both waterfall and error floor performance. Differently from previous proposals, this non-uniform quantization scheme is specifically targeted at overcoming the clipping issues arising in M-QAM systems, while keeping the number of quantization bits reasonably small. This solution is obtained through a simple compander-like approach, and can be implemented by exploiting uniform quantization hardware. We also discuss the relationship that should exist between the number of bits used in the quantization of the received signals and of the extrinsic messages, in such a way as to ensure comparable quantization errors. This analysis is based on theoretical arguments on the demapper functions. Finally, we propose a low-complexity receiver scheme that, requiring Look-Up Tables of reduced size, can be convenient in a hardware implementation.

The organization of the paper is as follows. In Section II we present a comparison between the performance of LDPC codes and standard turbo codes for the DVB-RCS application. In Section III we provide a short overview of previous works on the quantization problem, which is the main issue of the paper. In Section IV we describe the system model.
In Section V we discuss the choice of the quantization law for the decoder messages. In Section VI we find the relationship that should exist between the quantization of the input signals and that of the decoder messages in order to have comparable accuracies. In Section VII we develop an approximate analysis of the receiver that permits expressing directly the number of quantization bits and, most of all, can be used to conceive a more efficient implementation. Finally, Section VIII concludes the paper.

II. EXAMPLES OF TURBO-LIKE CODES IN DVB

The introduction of turbo codes has substantially changed the scenario of forward error correction, and started a revision process of traditional coding schemes in practical applications. Turbo codes are able to achieve very good correction performance and to approach the Shannon capacity limit. This is due to the adoption of soft-decision decoding algorithms implementing the so-called "turbo principle", which consists in an iterated exchange and update of inner messages estimating the reliability of each received bit. A very similar decoding approach characterizes LDPC codes, first introduced by Gallager [13] and then, recently, rediscovered by the scientific community. It can be shown that turbo decoding is an instance of Pearl's "Belief Propagation" algorithm, already implemented in LDPC decoders [14]; so, both turbo and LDPC codes can be included in the wider class of "turbo-like" codes.

Due to their excellent performance, turbo-like codes are being adopted in an increasing number of telecommunication standards and applications, with a special focus on Digital Video Broadcasting. The DVB-S2 standard, in particular, makes use of semi-random LDPC codes, characterized by a parity-check matrix obtained through the concatenation of a non-structured block and a staircase lower triangular block (which facilitates systematic encoding).
LDPC codes with structured parity-check matrices can reduce both the encoding and decoding complexity; among them, an important class is represented by Quasi-Cyclic LDPC (QC-LDPC) codes, which can be encoded through very simple circuits based on barrel shift registers.

The DVB-RCS standard, which deals with the implementation of interactive channels for satellite applications, includes instead a turbo code for error correction. Its turbo encoder uses a double binary circular recursive systematic convolutional code, an optimized two-level interleaver and a puncturing map to deal with variable rates [15]. The information block lengths are also variable, ranging from 12 bytes to 216 bytes.

The performance of double binary turbo codes has been compared with that of structured LDPC codes in [16]. The authors conclude that, for high code length and rate, LDPC codes often outperform turbo codes (they observed that the two schemes achieve comparable performance for rate 3/4 and block length 1152, while, for higher rate and block length, LDPC codes can be better than turbo codes). So, it seems interesting to investigate the applicability of structured LDPC codes in those applications where rather long and high-rate codes are needed. As a first example, we have considered the DVB-RCS standard turbo code mentioned above, with MPEG2 information block size (that is, 188 bytes [17]) and code rate 4/5.

For comparison, we have simulated two LDPC codes designed with different approaches. Both of them have dimension k = 1504 (i.e., 188 bytes) and length n = 1880, coincident with those of the turbo code. The first LDPC code has been designed by means of the Progressive Edge Growth (PEG) algorithm [18], which aims at maximizing the girth length within the Tanner graph. The associated parity-check matrix is non-structured; it has column weight 3 and row weight ranging between 14 and 16.
The second LDPC code is structured: it is a QC-LDPC code designed through the "Random Difference Families" (RDF) approach [19]. It is characterized by a parity-check matrix H formed by a row of 5 binary circulant blocks, i.e., H = [H_1 | H_2 | H_3 | H_4 | H_5]. Each block has size 376 × 376 and row/column weight 5, 4, 5, 4 and 5, respectively. This implies that matrix H has average column weight 4.6 and row weight 23. By assuming, without loss of generality, that the block H_5 is non-singular, a very simple systematic form for the generator matrix G of the considered QC-LDPC code is as follows:

G = [ I | (H_5^{-1} · [H_1 | H_2 | H_3 | H_4])^T ]    (1)

where the superscripts -1 and T denote inversion and transposition, respectively. Matrix G is formed by a 1504 × 1504 identity
matrix followed by a column of 4 binary circulant blocks (each of them circulant, since obtained as a product of circulant blocks). So, the encoder implementation is very simple, and basically consists in translating the last column of blocks into circuits based on barrel shift registers.

With the purpose of extending the comparison to higher-rate codes, we have also considered another QC-LDPC code, still designed through the RDF approach, with the same dimension as the previous one, but rate 8/9 (i.e., length n = 1692). Its parity-check matrix is a row of nine circulant blocks with size 188 × 188, each having row/column weight equal to 4 or 5. The row weight of the whole matrix, and its average column weight, follow from the weights of the single blocks.

Looking at the DVB-RCS turbo code, it should be noted that code rate 8/9 is higher than those considered in the standard. However, the optimized two-level interleaver can be used also for this rate, while the puncturing rule can be easily extended, thus giving us the opportunity to make a comparison at a higher rate. Strictly speaking, however, it is quite evident that this code cannot be considered a standard code; for this reason, we call it a "DVB-RCS-like" turbo code with rate 8/9.

Fig. 2 reports the simulated performance of the considered codes over the AWGN channel, using BPSK modulation and in the absence of quantization. In simulating the turbo codes, we have used 8 iterations for rate 3/4, and 15 iterations for rate 8/9; these choices are adequate for achieving satisfactory convergence of the decoding algorithm. From the figure, we observe that the performance of turbo codes and LDPC codes is similar at both rates. The turbo codes exhibit a slightly earlier waterfall, which yields an initial coding gain over the LDPC codes for small signal-to-noise ratios and high error rates.
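As an illustration of the systematic QC-LDPC construction of (1), the following Python toy example builds the generator matrix from a row of circulant blocks and checks that encoded words satisfy all the parity checks. The circulant size is p = 5 instead of 376, and the block first rows are arbitrary illustrative choices, not the RDF design:

```python
import numpy as np

def circulant(first_row):
    """Binary circulant matrix: row i is the first row cyclically shifted by i."""
    return np.array([np.roll(first_row, i) for i in range(len(first_row))],
                    dtype=np.uint8)

def gf2_inv(A):
    """Invert a square binary matrix over GF(2) via Gauss-Jordan elimination."""
    n = A.shape[0]
    M = np.hstack([A % 2, np.eye(n, dtype=np.uint8)])
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])  # raises if singular
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]                               # XOR = GF(2) add
    return M[:, n:]

# Toy parity-check matrix: a row of 5 circulant blocks of size p x p
# (p = 5 here instead of 376; block weights are illustrative only).
p = 5
first_rows = [
    [1, 0, 1, 0, 0],
    [0, 1, 1, 0, 0],
    [1, 0, 0, 1, 0],
    [0, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],   # last block: chosen to be non-singular over GF(2)
]
H_blocks = [circulant(np.array(r, dtype=np.uint8)) for r in first_rows]
H = np.hstack(H_blocks)                                  # p x 5p

# Systematic generator matrix: G = [ I | (H5^{-1} [H1|H2|H3|H4])^T ] (mod 2)
H5_inv = gf2_inv(H_blocks[-1])
P = ((H5_inv @ np.hstack(H_blocks[:-1])) % 2).T.astype(np.uint8)  # 4p x p
G = np.hstack([np.eye(4 * p, dtype=np.uint8), P])

# Every codeword c = u*G must satisfy the parity checks: H * c^T = 0 (mod 2)
u = np.random.randint(0, 2, 4 * p)
c = u @ G % 2
assert not (H @ c % 2).any()
```

The assertion holds because H·cᵀ = [H_1..H_4]uᵀ + H_5·(H_5⁻¹[H_1..H_4])uᵀ, and the two terms cancel modulo 2; in the real encoder the multiplication by the circulant column is what the barrel shift registers implement.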
For smaller error rates, however, the curves of the LDPC codes show a more favorable slope, and intersect those of the turbo codes. This means that, coherently with the conclusions in [16], the LDPC codes can provide a valid alternative to the turbo codes at high signal-to-noise ratios. If we focus on the FER curves for the codes with rate 8/9, in particular, an error floor effect appears in the turbo code performance, while the LDPC code has no evident floor, at least in the explored region of FER values. So, well-designed high-rate LDPC codes are less exposed than turbo codes to floors at error rates of practical interest.

The appearance of an error floor in the performance of LDPC codes is even more rare when adopting the high-order modulations that are widely used in modern telecommunication standards. In such a case, however, "artificial" floors may arise when implementing quantized versions of the LDPC decoder, due to approximation and clipping of the intrinsic messages. This motivates our work; in the following sections, we will study quantization effects for the considered high-rate QC-LDPC code, in conjunction with high-order modulation schemes.

III. OVERVIEW OF PREVIOUS WORK ON QUANTIZATION

Fig. 2. Comparison of turbo and LDPC codes for DVB-RCS: (a) bit error rate (BER) and (b) frame error rate (FER) versus the signal-to-noise ratio per bit (E_b/N_0).

The existence of finite-precision issues in LDPC decoders is well consolidated: in [20] the "Parity Likelihood Ratio" approach, rather than the more conventional "Log-Likelihood Ratio" (LLR) approach, is proposed to overcome some quantization problems that appear when the Sum-Product decoding algorithm is applied. In [2] it is clearly stated that adaptive quantization schemes (unfeasible in many practical applications) can better exploit the channel capacity.
Besides quantization of the decoder messages, in [21] quantization of the received samples is considered, concluding that a 4-bit representation is a good trade-off between performance and complexity. This conclusion, however, is established only for binary (BPSK) modulation, neglecting the impact of the demapper block. On the other hand, in that paper the authors study the decoder structure and propose non-uniform quantization to implement the hyperbolic functions. A similar analysis is developed in [22], where low-complexity versions of the logarithmic Sum-Product Algorithm (LLR-SPA) are presented. The authors show that the core hyperbolic functions of the LLR-SPA decoder can be effectively implemented through a uniform quantization or a piece-wise linear approximation, in the latter case with negligible performance loss. The relevant issue of an optimal trade-off between resolution and dynamic range for decoding non-binary LDPC
codes when used with BPSK modulation is instead addressed in [23].

Many authors suggest the adoption of 6-bit quantization for the decoder messages as the best trade-off between performance and complexity in coded binary modulation [21], [24]–[26]. The same choice can be adopted for low-complexity versions of the Sum-Product Algorithm, like the Min-Sum variant [27]. But, in this case, it has also been shown that using fewer quantization bits in the implementation of a Min-Sum LDPC decoder can yield a slight performance degradation [28].

When considering higher-order modulation schemes, more bits are necessary to represent both the received samples and the decoder messages without incurring significant performance loss. Only a few papers are devoted to studying this more involved situation. An example is [29], where the authors consider only uniform quantization schemes. Moreover, quantization is applied there to the decoder messages (with the peculiarities of M-ary systems), while that of the received samples is neglected. Even the several proposals of non-uniform quantization schemes are generally addressed to binary systems [30]; on the other hand, a valuable alternative to non-uniform quantization consists in the Soft-Bit decoding approach presented in [31].

The references above evidence the need for deepening the study in the case of M-ary modulation schemes. An improved analysis should take into account the joint effects of the decoder and demapper blocks. Actually, this is one of the targets of the present paper, and our proposed solutions will be discussed in the next sections.

IV. SYSTEM MODEL

The analysis we have developed is quite general and can be applied, with some distinctions, to any value of M. However, for better evidence, in the following we will mainly refer to the specific case of a 16-QAM constellation.
For any M equal to an even power of 2, a Gray labeling can be adopted to match a sequence of log2(M) encoded bits to each symbol. An example of Gray labeling for M = 16 is shown in Fig. 3; we will refer to it in the subsequent analysis. Attention will be focused on the high-rate QC-LDPC code described in Section II. It has length n = 1692 and dimension k = 1504, the latter coincident with the size of an MPEG2 Transport Stream (TS) packet [17]. The code rate is 8/9; so, by assuming M = 16, the spectral efficiency is about 3.6 bit/s/Hz, which is a large enough value for most broadcast applications.

Let us look at Fig. 1. The LDPC encoder maps each k-bit word produced by the source into an n-bit LDPC codeword. Each codeword is then passed to the mapper and modulator block, which transforms each group of log2(M) code bits into a symbol of the bi-dimensional M-QAM constellation. The modulated signal is then transmitted over an Additive White Gaussian Noise (AWGN) channel. At the receiver side, the demapper block is a maximum a posteriori (MAP) symbol-to-bit metric calculator, able to produce an initial likelihood value for each received bit (such values are denoted as intrinsic or channel messages). These messages serve as input for the Sum-Product Algorithm (SPA), which starts iterating and, at each iteration, produces updated versions of the extrinsic and a posteriori messages [32].

Fig. 3. Gray labeling for 16-QAM.

The former are used as input for the subsequent iteration (if needed), while the latter represent the decoder output, and serve to obtain an estimated codeword that is subject to the hard decision and the parity-check test. The efficiency of this scheme, which is very simple to implement, has been tested also in comparison with more involved solutions, like those based on multilevel coding formats, showing everywhere excellent error rate performance [12]; therefore, it is often preferred in practical applications. Simulations have been carried out over the AWGN channel.
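A minimal sketch of the MAP symbol-to-bit metric calculation performed by the demapper for 16-QAM follows. Since Fig. 3 is not reproduced here, the Gray bit-to-symbol map below is an illustrative assumption, as are the noise variance and amplitude levels:

```python
import numpy as np
from itertools import product

# Hypothetical Gray labeling: two bits select the in-phase level, two the
# quadrature level; adjacent levels differ in one bit.
gray_1d = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}
constellation = {}                       # 4-bit label -> complex symbol
for b in product((0, 1), repeat=4):
    constellation[b] = complex(gray_1d[b[0], b[1]], gray_1d[b[2], b[3]])

def map_llrs(r, sigma2):
    """Exact MAP bit LLRs for one received sample r: for each bit position,
    log of the ratio between the total metric of symbols labeled 0 and the
    total metric of symbols labeled 1 (sigma2 = per-component noise variance)."""
    llrs = []
    for i in range(4):
        num = sum(np.exp(-abs(r - s) ** 2 / (2 * sigma2))
                  for b, s in constellation.items() if b[i] == 0)
        den = sum(np.exp(-abs(r - s) ** 2 / (2 * sigma2))
                  for b, s in constellation.items() if b[i] == 1)
        llrs.append(np.log(num / den))
    return llrs
```

For a noise-free sample located on a constellation point, each LLR takes a large magnitude whose sign matches the corresponding label bit, which is the "firm belief" condition discussed later for quantization.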
As the QAM constellation is not geometrically uniform, the simulated information patterns cannot be fixed (the all-zero sequence would be the canonical choice) but are generated by a random, uncorrelated source.

V. QUANTIZATION OF DECODER MESSAGES

A. Outline of the Decoding Algorithm

Let r_I and r_Q be the in-phase and quadrature components, respectively, of each received pass-band signal. The latter, denoted by r = s + n, is the sum of a symbol s of the constellation and a sample n of white Gaussian noise, whose components n_I and n_Q are independent Gaussian random variables with zero mean and variance σ². Moreover, let us denote by b_i the i-th code bit associated with the symbol, by S_i^0 the subset of signals whose label has b_i = 0, and by S_i^1 the subset of signals whose label has b_i = 1. The LLR of the coded bit b_i, given the received signal r, can be expressed as:

Λ_i(r) = ln [ Σ_{s ∈ S_i^0} exp(−|r − s|² / (2σ²)) / Σ_{s ∈ S_i^1} exp(−|r − s|² / (2σ²)) ]    (2)

The values (2), calculated for all the bits of a codeword, are the intrinsic messages given as input to the belief propagation algorithm. They serve to initialize the extrinsic messages, which are then updated through the iterated exchange of messages between
variable and check nodes in the Tanner graph representing the code. At the end of each iteration, a posteriori messages are calculated and, based on their sign, an estimate of the transmitted codeword is derived. The procedure stops when all the parity-check equations are satisfied or when the maximum number of iterations, fixed a priori, is reached.

A detailed description of the SPA algorithm for LDPC decoding can be found in several books and papers (see [33] and [34], for example) and is omitted here for the sake of brevity.

B. Uniform Midtread Quantization of the Decoder Messages

As stated in the Introduction, in a practical implementation all the decoder messages are quantized, resulting in a performance degradation compared with the ideal behavior, which is obtained by assuming real (double-precision floating point) variables for the involved quantities. In principle, we consider uniform midtread quantization, which converts the real value Λ into a word of q bits. The corresponding law is reported in (3), where T is the saturation threshold, Δ is the quantization step (dependent on the number of bits q) and Λ and Λ^u are the exact and the uniformly quantized values, respectively:

Λ^u = sign(Λ) · min( Δ · ⌊|Λ|/Δ + 1/2⌋ , T )    (3)

In this expression, ⌊·⌋ represents the floor function, which gives the largest integer smaller than, or equal to, its argument. When considering uniform midtread quantization, two equivalent approaches are possible: direct fixed point representation and integer rescaling. In direct fixed point representation, f bits of each word are reserved for the fractional part (this format is often denoted as Qm.f, with m integer bits) and Λ is quantized by converting it into its nearest representable value. In this case, the quantization step is Δ = 2^(−f) and the saturation threshold is determined by the number of remaining integer bits.
In the integer rescaling approach, instead, the saturation threshold is fixed in advance and the dynamic range is divided into uniform intervals of equal amplitude. In this case, a quantized value can be denoted through the interval index it is associated with, or through its fixed point value, coincident with the product of the interval index by the interval amplitude (whose fixed point representation must be suitably chosen). The set of all the possible quantized values can be stored in an index-addressed Look-Up Table (LUT) or calculated, each time, through a suitable multiplier circuit.

The integer rescaling approach requires an extra step for reconstructing the quantized values and, depending on the threshold choice, can yield a non-optimal use of the fixed point representation. However, these drawbacks are overcome when the decoder involves only linear operations. For example, the Min-Sum approximate version of the LLR-SPA decoder requires additions for variable node updates, minimum search operations for check node updates, and sign operations for estimating each bit when the decoder stops iterating. In this case, all the quantities involved in the decoding process can be scaled by the interval amplitude, and the whole decoder can work on integer values, without the need of a fixed point representation.

Fig. 4. Maximum intrinsic message amplitude versus E_b/N_0 for different bit positions.

Furthermore, the intrinsic messages can be normalized into a fixed range; for example, if the demapper output is divided by its maximum amplitude (which, in a practical implementation, cannot diverge), the input LLRs are normalized into the range [-1, 1]. This way, the dynamic range of the decoder messages and their quantization threshold become independent of the signal-to-noise ratio. In particular, the choice of a unitary threshold implies clipping of the updated messages but not of the initial messages, and this occurs independently of the signal-to-noise ratio. For this reason, we adopt the integer rescaling approach.
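As a minimal sketch (with illustrative parameter values) of the uniform midtread law (3) and of the integer-rescaling view adopted above:

```python
import numpy as np

def midtread_quantize(x, step, threshold):
    """Uniform midtread quantization with saturation, in the spirit of (3)."""
    q = step * np.floor(np.abs(x) / step + 0.5)        # round to nearest level
    return np.sign(x) * np.minimum(q, threshold)       # clip at +/- threshold

def to_index(x, step, threshold):
    """Integer-rescaling view: represent the quantized value by its signed
    interval index, so that a linear decoder can work on integers only."""
    return int(np.sign(x) * min(np.floor(abs(x) / step + 0.5), threshold / step))
```

For example, `midtread_quantize(0.26, 0.25, 1.0)` returns the level 0.25 and `to_index(0.26, 0.25, 1.0)` returns index 1, while inputs beyond the threshold saturate at ±1.0 (index ±4); reconstructing the fixed point value from the index is the "extra step" mentioned in the text.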
On the other hand, when using the Min-Sum approximate version of the decoder, the amount of memory required to store the extrinsic information can be reduced through other strategies ([35], [36]), due to the fact that, at each iteration, the extrinsic messages associated with a check node can assume only two distinct values. This is not the case for the SPA decoder, in which the extrinsic information can assume arbitrary values.

C. Effect of Quantization and Clipping

Because of the inherent complexity of the decoding process, an analytical approach able to express the impact of the quantization/clipping effect would be very difficult to pursue. Moreover, theoretical arguments permit obtaining only asymptotic results [2], which could be quite distant from practical cases. Thus, we resort to numerical simulations.

We consider, for the decoder messages (that is, intrinsic, extrinsic and a posteriori messages), the quantization characteristic of (3); the values of its parameters can be optimized through a series of numerical simulations. As regards the threshold, in particular, a preemptive analysis is possible based on the intrinsic messages. If we limit the Gauss plane to a finite square area around the signal constellation, it is possible to calculate, through (2), the maximum intrinsic message amplitude as a function of the average signal-to-noise ratio per bit, E_b/N_0, and of the bit position. This is shown in Fig. 4 for the constellation of Fig. 3.

Fig. 4 shows that, in the considered range of signal-to-noise ratios, the input LLR can assume very high values. As we expect the clipping effect to have a negative impact on the performance of the decoder, according to this figure the threshold should be set very large. It is interesting to observe that the problem is
emphasized by the need to use rather high signal-to-noise ratios because of the adoption of the M-ary modulation. In the case of BPSK, which is a more conventional choice, the problem would be much less dramatic. This is because, for a given code and desired error rate, the signal-to-noise ratios for BPSK are much smaller, and the required value of the threshold can be reduced accordingly.

Fig. 5. Performance of the considered LDPC code for uniform and non-uniform midtread quantization of the decoder messages: (a) BER versus E_b/N_0; (b) FER versus E_b/N_0.

The negative effect of clipping on the initial messages has been confirmed through numerical simulations, whose results are reported in Fig. 5. In running the simulations, we have adopted the LLR-SPA, with a maximum number of iterations equal to 100. The same holds for the other performance curves shown in the sequel. From Fig. 5, we see that the BER and FER curves corresponding to a small threshold show a significant error floor; even if the resolution is increased, the error floor remains. This confirms that the error-floor behavior, in these cases, is mainly due to the effect of clipping the intrinsic messages.

On the other hand, if the clipping effect is avoided, for example by increasing the dynamic range while maintaining a unitary step, the error floor is mitigated (this is evident from the FER curve). Better and better performance can be achieved by also increasing the quantization resolution, which ensures, in fact, excellent performance. However, the resolution and, most of all, the threshold required to obtain the best performance, when employing the quantization characteristic described by (3), can become prohibitively high.
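The preemptive threshold analysis behind Fig. 4 can be sketched as follows: scan a square region of the Gauss plane and record the largest intrinsic LLR magnitude. The constellation, the labeling and the scanned region below are illustrative assumptions, not the exact settings of the figure:

```python
import numpy as np
from itertools import product

# Hypothetical 16-QAM constellation and bit labeling (Fig. 3 is not
# reproduced here); region size and grid density are also assumptions.
levels = (-3, -1, 1, 3)
symbols = [complex(a, b) for a, b in product(levels, repeat=2)]
labels = list(product((0, 1), repeat=4))     # one 4-bit label per symbol

def max_intrinsic_amplitude(sigma2, half_side=6.0, points=41):
    """Largest |LLR| over a square of side 2*half_side around the origin."""
    axis = np.linspace(-half_side, half_side, points)
    worst = 0.0
    for x in axis:
        for y in axis:
            metrics = [np.exp(-abs(complex(x, y) - s) ** 2 / (2 * sigma2))
                       for s in symbols]
            for i in range(4):
                num = sum(m for m, b in zip(metrics, labels) if b[i] == 0)
                den = sum(m for m, b in zip(metrics, labels) if b[i] == 1)
                worst = max(worst, abs(np.log(num / den)))
    return worst
```

Decreasing the noise variance (i.e., raising the signal-to-noise ratio) pushes the maximum LLR up roughly in proportion to 1/σ², which is why the clipping threshold needed by M-ary systems grows with the operating E_b/N_0.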
Therefore, in the next subsection we introduce a non-uniform quantization characteristic, which is logarithmic in the quantization interval amplitudes.

D. Proposal of a New Non-Uniform Quantization Function

Given the real value Λ and a positive real number f, which we call the logarithmic "factor", the proposed non-uniform quantization characteristic is as follows:

(4)

where Λ^nu is the non-uniform quantized version of Λ. This new characteristic has denser quantization levels for small input values and sparser quantization levels for high input values, in line with the observation that nearly-zero LLRs (which are responsible for the decoder's most uncertain condition) are more sensitive to quantization effects than high LLRs (which represent a firm belief condition).

Non-uniform quantization, according to (4), is obtained through a classic compander approach based on uniform midtread quantization. Such a choice, however, implies a
more involved hardware realization when (even linear) operations must be performed on the quantized values; so, an accurate complexity assessment must be done when considering this solution.

The logarithmic factor can be chosen so that the quantization characteristic expressed by (4) has its minimum interval amplitude coincident with the lowest step already considered for uniform quantization. We have applied the non-uniform quantization with this choice of the parameters, and the simulated performance is also shown in Fig. 5. We see that, by reducing the impact of the clipping effect, the logarithmic characteristic avoids the presence of the error floor. More specifically, the BER and FER curves relative to non-uniform quantization are only a small fraction of a dB away from those corresponding to uniform quantization at the highest resolution, despite the former system adopting a smaller number of quantization bits. In conclusion, law (4), although more involved to implement, seems quite suitable in the region of low error rates.

VI. QUANTIZATION OF THE RECEIVED SIGNALS

The effect of the quantization of the input received samples can be related, through a simple analytical approach, to the decoder messages quantization. An estimate of the number of quantization bits for the input signals can easily be found that is compatible with the resolution adopted for the messages, thus avoiding the introduction of further performance degradation.

A. Estimate of the Maximum Quantization Error

The sub-system processing the received samples should implement (2): once the in-phase and quadrature components have been obtained as the results of an analog-to-digital conversion, these values are used to calculate the LLRs for each set of codeword bits (four, in the considered 16-QAM example). Coherently with the approach followed in Section V, the output of the demapper block is then quantized.
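Since the exact law (4) is not reproduced in this copy, the following sketch assumes a generic logarithmic compressor with factor f, built on uniform midtread quantization as described above; it only illustrates the compander principle (fine steps near zero, coarse steps for large inputs), not the paper's exact characteristic:

```python
import numpy as np

def nonuniform_quantize(x, f, step, threshold):
    """Compander-based non-uniform quantization: compress logarithmically,
    apply a uniform midtread quantizer with saturation, then expand back."""
    c = np.sign(x) * np.log1p(np.abs(x) / f)          # compress
    q = step * np.floor(np.abs(c) / step + 0.5)       # uniform midtread
    q = np.sign(c) * np.minimum(q, threshold)         # saturate
    return np.sign(q) * f * np.expm1(np.abs(q))       # expand back

x = np.array([0.05, 0.5, 5.0, 50.0])
y = nonuniform_quantize(x, f=1.0, step=0.25, threshold=4.0)
# Reconstruction levels are finely spaced for small |x| and coarsely spaced
# for large |x|, so the absolute error grows with the input magnitude.
```

Because the compressor and expander wrap a plain uniform quantizer, the scheme can reuse uniform quantization hardware, which is the point made in Section I; the price is the extra log/exp stages when arithmetic must be done on the quantized values.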
Denoting by the dynamic range of the inputs and (for example, in Fig. 3) and by the number of quantization bits adopted, under the hypothesis of using uniform midrise quantization (which is preferable, at the input, for a number of practical reasons [37]), the quantization step is . The maximum quantization error at the input, for and respectively, is , and it reflects in a maximum error on the LLR of the -th bit. Obviously, this propagated error depends on the value of , and a suitable design criterion consists in choosing an that satisfies the condition: (5) In (5), represents the constant interval amplitude in the case of uniform LLR quantization, while it can be replaced by the minimum interval amplitude when non-uniform LLR quantization is adopted. If (5) is verified, the signal quantization has no impact on the decoder message quantization, and the BER performance is exactly the same achievable with unquantized input samples. can be approximated through the following expression: (6) The partial derivatives appearing in (6) can be easily computed starting from (2); the final result is: (7) In this formula, and are implicit in ; on the other hand, the noise variance is present in (7) and it influences the result.

Fig. 6. Estimated number of quantization bits for the received signals.

B. Optimization of the Signal Quantization Parameters

By computing through (7) and inserting it in condition (5), we are able to find pairs of values that, regardless of and , ensure that the error on the LLRs, as induced by the quantization of the received samples, is not larger than that permitted for extrinsic message quantization. Denoting by the distance between adjacent symbols in the 16-QAM constellation ( in Fig. 3), the following relationship holds: SNR (8) where SNR is the ratio between the average signal power and the noise power. Therefore, , for fixed , depends on the average signal-to-noise ratio per bit.
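The design criterion can also be explored numerically. The sketch below evaluates an exact per-axis LLR in the spirit of (2) for an assumed Gray labeling, replaces the partial derivatives of (6)-(7) with numeric differences, and returns the smallest number of midrise input bits whose worst-case propagated LLR error stays within half an assumed message quantization interval (one plausible reading of condition (5)). The constellation, dynamic range and grid density are illustrative.

```python
import numpy as np

# One axis of a Gray-mapped 16-QAM constellation: amplitude levels and
# the two label bits carried by each level (an assumed mapping; the
# labeling of the paper's Fig. 3 may differ).
AMPS = np.array([-3.0, -1.0, 1.0, 3.0])
LABELS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])

def llr(r, bit, sigma2):
    """Exact per-axis LLR in the spirit of Eq. (2), AWGN variance sigma2."""
    e = -(r - AMPS) ** 2 / (2.0 * sigma2)
    return (np.logaddexp.reduce(e[LABELS[:, bit] == 0])
            - np.logaddexp.reduce(e[LABELS[:, bit] == 1]))

def required_bits(delta, sigma2, D=8.0, h=1e-5):
    """Smallest midrise bit count n such that the worst-case LLR error
    propagated from input quantization stays below delta / 2, with the
    LLR sensitivity (the partial derivatives of (6)-(7)) estimated by
    numeric differences over a grid of received values."""
    grid = np.linspace(-D / 2, D / 2, 201)
    slope = max(abs(llr(r + h, b, sigma2) - llr(r - h, b, sigma2)) / (2 * h)
                for r in grid for b in (0, 1))
    for n in range(1, 16):
        eps = (D / 2 ** n) / 2      # max midrise quantization error
        if slope * eps <= delta / 2:
            return n
    return None
```

Consistent with (7), a higher SNR (smaller sigma2) steepens the LLR and raises the required number of input bits.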
A plot of versus , based on (5) (where has been considered) and (6), is shown in the "exact" plots of Fig. 6 for the first two bits. The approximate points must not be considered in this phase; their meaning will be described in Section VII. The analysis for the third and fourth bits provides identical results, with the position
Fig. 7. Performance of the considered LDPC code for uniform midrise sample quantization and uniform midtread decoder message quantization: (a) BER versus Eb/N0; (b) FER versus Eb/N0.

, because of the intrinsic symmetry of the constellation; this will be further discussed in Section VII. The required value of , for each bit, is a step-wise increasing function of . Clearly, in order to satisfy condition (5) in a given range of values and for all the bit positions, it is necessary to assume the greatest (i.e., most stringent) value of . As an example, for (which implies for the considered code and constellation), the suggested value is . This estimate can be used to forecast the actual performance. For the sake of verification, we have considered uniform quantization of the decoder messages (which is the most critical case, having constant resolution) and repeated, in Fig. 7, the simulation of Fig. 5, but now also considering the quantization of the received samples for different numbers of quantization bits . Consistent with the theory, the curve with is exactly superimposed on the unquantized one. However, we also see that the simulated performance degradation for a lower can be very small; even with it remains below 0.2 dB. This result is not surprising: the value of obtained by imposing (5) is quite conservative; it aims to ensure that the error on the received samples is never greater than that on the decoder messages. When such a condition is not satisfied, it is not realistic to think that performance immediately becomes bad: first of all, the threshold at the right-hand side of (5) could be exceeded for a small fraction of time and by a limited amount; secondly, the sensitivity of the decoding algorithm to the initial condition should be taken into account, so that it is not certain that any excess translates into an additional error.
Although affordable in principle (the former in analytical terms, by using the probability density functions of the received samples; the latter by using empirical rules drawn from simulation), this further study is rather involved and does not allow general conclusions to be drawn. For this reason, the value of calculated by means of (5) only represents a "sufficient" condition to obtain the desired good performance. On the other hand, one could object that such an overestimate (in the specified sense) of the value of forces operation with an unacceptably high number of quantization bits. However, it should be noticed that the value of only affects the demapper, not the decoder (whose registers are involved in the message passing algorithm), and a simple solution can be adopted to reduce the complexity of this block. This new proposal is described in the following section.

VII. DEMAPPER BASED ON APPROXIMATE EXPRESSIONS

A. Second Order Approximation

When the value of SNR (and then of ) is sufficiently high, (7) can be greatly simplified by considering, in each sum, the leading term only. This dominant contribution is due to the signals and that, for
Fig. 8. Comparison between the exact and approximate LLRs for the first two bits, as a function of x (fixed y), at Eb/N0 = 0 dB.

Fig. 9. Comparison between the exact and approximate LLRs for the first two bits, as a function of x (fixed y), at Eb/N0 = 8 dB.

Fig. 10. Circuit for the evaluation of L(b).

each , are at minimum distance from the received sample . This technique coincides with the log-sum approximation and has been successfully applied to both product codes [38] and convolutional codes [39]. Actually, by imposing this simplification and taking into account (8), (7) becomes: SNR (9) This relationship is very simple and more expressive than (7): first of all, we notice a linear dependence on the SNR (such a dependence is necessarily more involved in the rigorous expression). Moreover, in general, it can be further simplified. For example, looking at the 16-QAM constellation in Fig. 3, it is easy to see that and always have in common either the in-phase component (i.e., ) or the quadrature component (i.e., ), and that the maximum difference between the unequal components is . By substituting (9) into (5), together with the highlighted maximum value, simple algebra yields: (10) where is the smallest integer greater than . This result is shown in the "approximate" plot of Fig. 6, as a function of , and compared to the exact one (for bits 1 and 2). Both the exact and approximate curves exhibit, as expected, a staircase behavior. Small regions usually exist, for low/medium signal-to-noise ratios, where the approximate formula can provide a value of one bit higher than that given by the exact formula. Actually, these regions are practically indistinguishable, in the range considered, for the first bit, whilst they are evident for the second bit.
This is due to the fact that, when the second bit is considered, the maximum difference between the dominant contributions in is smaller than . So, in principle, an adaptive quantization could be conceived, which varies the value of according to the bit position. However, it is clear that such a procedure would be difficult to manage in a practical implementation. The same simplification used in (9) can also be introduced in the LLR expression (2). This corresponds to the classic max-log approximation. Under the same hypotheses, (2) becomes: (11) The residual difference between and , due to the approximation, is appreciable for small signal-to-noise ratios. An example is shown in Fig. 8, for , where and are plotted as a function of , for an arbitrary . The difference becomes smaller and smaller for increasing signal-to-noise ratios and, at the values of of interest (i.e., those required to have low error rates), it is usually acceptable for all bits. An example is shown in Fig. 9 for ; in this case the exact and approximate curves are almost overlaid. In comparison with Fig. 8, it is interesting to observe the very different LLR dynamics.
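The behavior in Figs. 8 and 9 can be reproduced with a small per-axis model: the exact LLR in the spirit of (2) against its max-log counterpart in the spirit of (11). The Gray labeling below is an assumed one (the mapping of the paper's Fig. 3 may differ).

```python
import numpy as np

AMPS = np.array([-3.0, -1.0, 1.0, 3.0])              # one 16-QAM axis
LABELS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # assumed Gray labels

def llr_exact(r, bit, sigma2):
    """Exact per-axis LLR in the spirit of Eq. (2)."""
    e = -(r - AMPS) ** 2 / (2.0 * sigma2)
    return (np.logaddexp.reduce(e[LABELS[:, bit] == 0])
            - np.logaddexp.reduce(e[LABELS[:, bit] == 1]))

def llr_maxlog(r, bit, sigma2):
    """Max-log counterpart in the spirit of Eq. (11): keep only the
    nearest symbol of each bit subset; the result is linear in 1/sigma2."""
    d2 = (r - AMPS) ** 2
    return (d2[LABELS[:, bit] == 1].min()
            - d2[LABELS[:, bit] == 0].min()) / (2.0 * sigma2)
```

At low SNR (large sigma2) the two differ visibly, while at high SNR they agree to numerical precision, consistent with Figs. 8 and 9.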
B. Simplified Demapper

The acceptability of the approximation suggests a simple solution to considerably reduce the complexity of the demapper block. The exact expression for , in fact, requires the implementation of a processor able to calculate , given its inputs. An alternative solution would be to store the values of in a LUT indexed on , , (i.e., the quantized versions of , , , respectively). Looking at (11), instead, a smarter solution is possible. Due to the linearity in the SNR, the -bit level indexes for the quantized version of can be stored in the LUT, in place of those of . This way, the dependence on the SNR is eliminated, and the -bit output words depend only on the -bit input words, regardless of the channel. To reconstruct the value of from each -bit value, if needed, the circuit shown in Fig. 10 can be adopted. It multiplies each level index by the fixed-point representation of . This circuit uses an SNR value that is continuously estimated at the receiver side, for example by using the signal-to-mean-square-error (S/MSE) ratio. When the multiplication is performed, it is easy to show that, if is the number of bits used to represent (an always positive quantity) and the -bit index includes one sign bit, the output value of can be represented through at most bits. However, as stated in Section V, when the decoder involves only linear operations, it can be normalized in such a way as to be independent of the signal-to-noise ratio. In this case the demapper does not perform the multiplication step, and the LUT output is the initial extrinsic message. The proposed solution makes it possible to implement a single LUT that contains the quantized values of and has -bit addresses, where is the greatest value obtained by applying the analysis shown in the previous section to the considered SNR range.

C. Reduction of the LUT Size

The LUT size is (12) i.e., it consists of bits in the 16-QAM case.
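The LUT-based scheme of Fig. 10 can be sketched as follows: the table stores fixed-point level indexes of the SNR-independent max-log metric, and a single multiplication by the estimated SNR factor reconstructs the LLR. The Gray labeling, the bit widths and the midrise indexing are assumptions for illustration, not the paper's exact design.

```python
import numpy as np

AMPS = np.array([-3.0, -1.0, 1.0, 3.0])              # one 16-QAM axis
LABELS = np.array([[0, 0], [0, 1], [1, 1], [1, 0]])  # assumed Gray labels

def metric(r, bit):
    """SNR-independent max-log metric (the distance term of Eq. (11))."""
    d2 = (r - AMPS) ** 2
    return d2[LABELS[:, bit] == 1].min() - d2[LABELS[:, bit] == 0].min()

def build_lut(n, m, D=8.0):
    """m-bit level indexes addressed by the n-bit midrise-quantized
    component; the SNR dependence is factored out, as in Fig. 10."""
    q = D / 2 ** n
    centers = (np.arange(2 ** n) - 2 ** n / 2 + 0.5) * q   # midrise levels
    vals = np.array([[metric(c, b) for b in (0, 1)] for c in centers])
    step = np.abs(vals).max() / (2 ** (m - 1) - 1)         # fixed-point step
    return np.round(vals / step).astype(int), step

def demap(r, lut, step, inv_two_sigma2, n=6, D=8.0):
    """One LUT access plus one multiply by the SNR-dependent factor."""
    idx = int(np.clip(np.floor((r + D / 2) / (D / 2 ** n)), 0, 2 ** n - 1))
    return lut[idx] * step * inv_two_sigma2
```

When the decoder is normalized to be SNR-independent, the multiplication is skipped and the stored indexes are used directly as initial messages, as noted above.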
This value can be further reduced by taking into account the following considerations.

Fig. 11. Subsets A (diamond markers) and B (square markers) for the 16-QAM constellation of Fig. 3: (a) A and B ; (b) A and B ; (c) A and B ; (d) A and B .

Fig. 12. Circuit employing the 16-QAM demapper LUT with reduced size.

Fig. 11 shows the subsets and for the 16-QAM constellation of Fig. 3, calculated for all the bit positions . From Figs. 11(a) and 11(b), we notice that the values of and depend only on the quadrature component, as in their expressions we have . Similarly, from Figs. 11(c) and 11(d), it is evident that and depend only on the in-phase component, as in their expressions we have . Moreover, we notice that the two subsets and coincide with and , respectively, when an axial symmetry around the bisector of the first and third quadrants is applied. Therefore, the values of coincide with those of , for . Similarly, and coincide with and under the same transformation, so the values of coincide with those of , for . Therefore, the same LUT can be used to obtain the values of and , as well as those of and . Hence, the address word length can be halved simply by adding an "input selector" block before the LUT, able to forward only the right component of each input signal on the basis of the bit position. The corresponding circuit is plotted in Fig. 12. In this case, only the values of and are stored in the LUT. As previously shown, such values depend only on (therefore they can be calculated for an arbitrary value of ) and coincide with those of and for . Hence, when the switch in Fig. 12 is in position "A", the quadrature component of the input signal is used as the index for the LUT, and the values of and are available at its outputs. On the contrary, when the switch is in position "B", the in-phase component of the input signal is used as the index, and the values of and are available at the two outputs. Hence, the LUT shown in Fig.
12 consists of bits, and it is times smaller than that in Fig. 10. The same arguments hold for any Gray-labeled constellation of signals, with even. In all these cases, the demapper
block can be implemented by means of a LUT with -bit addresses and -bit outputs, that is, with size (13) However, it should be noted that, for demapping each received sample, the circuit in Fig. 12 requires two LUT accesses, while that in Fig. 10 requires only one. Therefore, two optimized circuits should be used in order to obtain the same latency as the original scheme. Nevertheless, if we consider two implementations of the optimized circuit and compare their size with that of the original one, we obtain a size gain equal to (14) The value of depends on the number of quantization bits used for the received samples, , as expected. As shown in the previous sections, this number can be quite high (up to 10), thus yielding a considerable size gain when adopting the optimized circuit.

VIII. CONCLUSION

Modern telecommunications require more and more reliable and spectrally efficient transmissions. Reliability can be achieved by using LDPC codes, while spectral efficiency requires the adoption of high-order modulation schemes, like -QAM. Practical implementation of these solutions requires reconsidering many of the conclusions already drawn for the more classic LDPC-coded binary modulations. The larger signal-to-noise ratio required, as a counterpart to the improved spectral efficiency, makes the -ary modulated scheme much more sensitive to the clipping effect, to the point that unexpected error floors can appear if the system parameters are not correctly designed. In principle, the number of quantization bits needed can become very large, thus making the system quite unfeasible. To solve this problem, attractive solutions are the adoption of non-uniform quantization and an in-depth analysis of the demapper block functionalities.
By exploiting symmetry properties and taking into account the peculiarities of the quantities involved in the decision process, efficient demapping can be achieved with minimum-size LUTs. The role of the quantization of the incoming signals can also be controlled in such a way as to avoid altering the trade-off found in the quantization of the decoder messages. We have studied these aspects for the case of DVB-compatible LDPC codes, in conjunction with -QAM modulation. For the sake of clarity, the results presented in this paper have referred to the specific case of 16-QAM, but most of the analysis and the proposed new ideas can be easily extended to higher order constellations and, in principle, to -ary systems with different modulations.

ACKNOWLEDGMENT

The authors wish to thank Giambattista Di Donna and Sergio Bianchi, at Siemens, for their contribution and helpful discussion.

REFERENCES

[1] D. MacKay and R. Neal, "Near Shannon limit performance of low density parity check codes," Electronics Letters, vol. 33, no. 6, pp. 457–458, Mar. 1997.
[2] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
[3] Digital Video Broadcasting (DVB); Second Generation Framing Structure, Channel Coding and Modulation Systems for Broadcasting, Interactive Services, News Gathering and Other Broadband Satellite Applications, ETSI EN Std. 302 307 (v1.1.2), Jun. 2006, Rev. 1.1.1.
[4] IEEE P802.11 Wireless LANs WWiSE Proposal: High throughput extension to the 802.11 Standard, IEEE Std. 11-04-0886-00-000n, Aug. 2004.
[5] IEEE Standard for Local and Metropolitan Area Networks - Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems - Amendment for Physical and Medium Access Control Layers for Combined Fixed and Mobile Operation in Licensed Bands, IEEE Std. 802.16e-2005, Dec. 2005.
[6] Digital Video Broadcasting (DVB); Framing Structure, Channel Coding and Modulation for Digital Terrestrial Television, ETSI EN Std. 300 744 (v1.5.1), Nov. 2004.
[7] "DVB-T2 call for technologies," Digital Video Broadcasting Project, Tech. Rep. SB 1644r1, Apr. 2007.
[8] Digital Video Broadcasting (DVB); Interaction Channel for Satellite Distribution Systems, ETSI EN Std. 301 790 (v1.4.1), Sep. 2005.
[9] N. H. Tran and H. H. Nguyen, "Signal mappings of 8-ary constellations for bit interleaved coded modulation with iterative decoding," IEEE Trans. Broadcast., vol. 52, no. 1, pp. 92–99, Mar. 2006.
[10] B. Rong, T. Jiang, X. Li, and M. R. Soleymani, "Combine LDPC codes over GF(q) with q-ary modulations for bandwidth efficient transmission," IEEE Trans. Broadcast., vol. 54, no. 1, pp. 78–84, Mar. 2008.
[11] S. Papaharalabos, M. Papaleo, P. T. Mathiopoulos, M. Neri, A. Vanelli-Coralli, and G. E. Corazza, "DVB-S2 LDPC decoding using robust check node update approximations," IEEE Trans. Broadcast., vol. 54, no. 1, pp. 120–126, Mar. 2008.
[12] Y. Li and W. Ryan, "Bit-reliability mapping in LDPC-coded modulation systems," IEEE Commun. Lett., vol. 9, no. 1, pp. 1–3, Jan. 2005.
[13] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[14] R. J. McEliece, D. J. C. MacKay, and J.-F. Cheng, "Turbo decoding as an instance of Pearl's "belief propagation" algorithm," IEEE J. Select. Areas Commun., vol. 16, no. 2, pp. 140–152, Feb. 1998.
[15] C. Douillard, M. Jézéquel, C. Berrou, N. Brengarth, J. Tousch, and N. Pham, "The turbo code standard for DVB-RCS," in Proc. Second International Symposium on Turbo Codes, Brest, France, Sep. 2000, pp. 535–538.
[16] T. Lestable, E. Zimmerman, M.-H. Hamon, and S. Stiglmayr, "Block-LDPC codes vs duo-binary turbo-codes for European next generation wireless systems," in Proc. IEEE VTC-2006 Fall, Montréal, Canada, Sep. 2006, pp. 1–5.
[17] Information Technology - Generic Coding of Moving Pictures and Associated Audio Information - Part 1: System, ISO/IEC Std. 13 818-1, 1996.
[18] X. Y. Hu and E. Eleftheriou, "Progressive edge-growth Tanner graphs," in Proc. IEEE Global Telecommunications Conference (GLOBECOM'01), San Antonio, TX, Nov. 2001, vol. 2, pp. 995–1001.
[19] M. Baldi and F. Chiaraluce, "Cryptanalysis of a new instance of McEliece cryptosystem based on QC-LDPC codes," in Proc. IEEE ISIT 2007, Nice, France, Jun. 2007, pp. 2591–2595.
[20] L. Ping and W. Leung, "Decoding low density parity check codes with finite quantization bits," IEEE Commun. Lett., vol. 4, no. 2, pp. 62–64, Feb. 2000.
[21] T. Zhang, Z. Wang, and K. Parhi, "On finite precision implementation of low density parity check codes decoder," in Proc. IEEE International Symposium on Circuits and Systems (ISCAS 2001), Sydney, NSW, May 2001, vol. 4, pp. 202–205.
[22] X.-Y. Hu, E. Eleftheriou, D.-M. Arnold, and A. Dholakia, "Efficient implementations of the sum-product algorithm for decoding LDPC codes," in Proc. IEEE Global Telecommunications Conference (GLOBECOM'01), San Antonio, TX, Nov. 2001, vol. 2, pp. 1036E–1036E.
[23] H. Wymeersch, H. Steendam, and M. Moeneclaey, "Computational complexity and quantization effects of decoding algorithms for non-binary LDPC codes," in Proc. IEEE Int. Conf. on Acoustic, Speech and Signal Processing (ICASSP 2004), Montreal, Canada, May 2004, vol. 4, pp. 669–672.
[24] S. Kim, G. Sobelman, and J. Moon, "Parallel VLSI architectures for a class of LDPC codes," in Proc. IEEE ISCAS 2002, Scottsdale, AZ, May 2002, vol. 2, pp. II-93–II-96.
[25] S. L. Howard, C. Schlegel, and V. C. Gaudet, "Degree-matched check node decoding for regular and irregular LDPCs," IEEE Trans. Circuits Syst. II, vol. 53, no. 10, pp. 1054–1058, Oct. 2006.
[26] L. Yang, H. Liu, and C.-J. Richard Shi, "Code construction and FPGA implementation of a low-error-floor multi-rate low-density parity-check code decoder," IEEE Trans. Circuits Syst. I, vol. 53, no. 4, pp. 892–904, Apr. 2006.
[27] D. Oh and K. K. Parhi, "Performance of quantized min-sum decoding algorithms for irregular LDPC codes," in Proc. IEEE ISCAS 2007, New Orleans, LA, May 2007, pp. 2758–2761.
[28] Z. Cui and Z. Wang, "Efficient message passing architecture for high throughput LDPC decoder," in Proc. IEEE ISCAS 2007, New Orleans, LA, May 2007, pp. 917–920.
[29] M. Shen, H. Niu, H. Liu, and J. Ritcey, "Finite precision implementation of LDPC coded M-ary modulation over wireless channels," in Proc. Asilomar Conference on Signals, Systems and Computers, Nov. 2003, vol. 1, pp. 114–118.
[30] Z. Cui and Z. Wang, "A 170 Mbps (8176, 7156) quasi-cyclic LDPC decoder implementation with FPGA," in Proc. IEEE ISCAS 2006, Kos, Greece, May 2006, pp. 5095–5098.
[31] S. Howard, V. Gaudet, and C. Schlegel, "Soft-bit decoding of regular low-density parity-check codes," IEEE Trans. Circuits Syst. II, vol. 52, no. 10, pp. 646–650, Oct. 2005.
[32] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429–445, Mar. 1996.
[33] S. Lin and D. J. Costello, Error Control Coding, 2nd ed. Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 2004.
[34] D. J. C. MacKay, "Good error correcting codes based on very sparse matrices," IEEE Trans. Inform.
Theory, vol. 45, no. 2, pp. 399–432, Mar. 1999.
[35] A. Hunt, "Hyper-Codes: High-Performance Low-Complexity Error-Correcting Codes," Master's thesis, Carleton University, Ottawa, Canada, 1998.
[36] A. Hunt, J. Lodge, and S. Crozier, "Method of Enhanced Max-Log-a Posteriori Probability Processing," U.S. Patent 6145114, Nov. 2000.
[37] Private communication, Siemens, Cassina de' Pecchi, Italy, Apr. 2006.
[38] R. Pyndiah, A. Picart, and A. Glavieux, "Performance of block turbo coded 16-QAM and 64-QAM modulations," in Proc. IEEE Global Telecommunications Conference (GLOBECOM'95), Singapore, Nov. 1995, vol. 2, pp. 1039–1043.
[39] F. Tosato and P. Bisaglia, "Simplified soft-output demapper for binary interleaved COFDM with application to HIPERLAN/2," in Proc. IEEE ICC 2002, New York, May 2002, vol. 2, pp. 664–668.

Marco Baldi was born in Macerata, Italy, in 1979. He received the "Laurea" degree (summa cum laude) in Electronics Engineering in 2003, and the Doctoral degree in Electronics, Informatics and Telecommunications Engineering in 2006 from the Polytechnic University of Marche, Ancona, Italy. At present, he is a post-doctoral researcher and contract Professor at the same university. His main research activity is in channel coding, with particular interest in linear block codes for symmetric and asymmetric channels, low-density parity-check (LDPC) codes and their application in cryptography.

Franco Chiaraluce (M'06) was born in Ancona, Italy, in 1960. He received the "Laurea in Ingegneria Elettronica" (summa cum laude) from the University of Ancona in 1985. In 1987 he joined the Department of Electronics and Automatics of the same university. At present, he is an Associate Professor at the Polytechnic University of Marche. His main research interests involve various aspects of communication systems theory and design, with special emphasis on error correcting codes, sensor networks, cryptography and multiple access techniques.
He is co-author of more than 180 papers and two books. He is a member of IEEE and IEICE.

Giovanni Cancellieri was born in Florence, Italy, in 1952. He received degrees in Electronic Engineering and in Physics from the University of Bologna. Since 1986 he has been Full Professor of Telecommunications at the Polytechnic University of Marche. His main research activities are focused on optical fibers, radio communications and wireless systems, with special emphasis on channel coding and modulation systems. He is co-author of about one hundred and fifty papers, five books of scientific content, and two international patents. Since 2003 he has been president of CReSM (Centro Radioelettrico Sperimentale G. Marconi).