In satellite communication, deep-space missions are the most challenging, since the system must operate at very low Eb/No. Concatenated codes are the ideal choice for such deep-space missions. The paper describes the simulation of Turbo codes in SIMULINK. The performance of a Turbo code depends on various factors; in this paper, we consider the impact of interleaver design on the performance of the Turbo code. A detailed simulation is presented and the performance of different interleaver designs is compared.
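In a parallel turbo encoder, the interleaver permutes the information bits before the second constituent encoder. As an illustration only (the paper's specific interleaver designs are not reproduced here), a pseudo-random interleaver and its inverse can be sketched in Python:

```python
import random

def make_interleaver(n, seed=0):
    """Return a fixed pseudo-random permutation of block length n."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def interleave(bits, perm):
    # Position i of the output takes bit perm[i] of the input.
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    # Undo the permutation: bit i came from original position perm[i].
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

perm = make_interleaver(8, seed=42)
data = [1, 0, 1, 1, 0, 0, 1, 0]
scrambled = interleave(data, perm)
restored = deinterleave(scrambled, perm)
```

Both encoder and decoder must use the same permutation; the round trip recovers the original block exactly.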
Performance of Concatenated LDPC-Based STBC-OFDM System and MRC Receivers (IJECEIAES)
This paper presents the bit error rate performance of low-density parity-check (LDPC) coding concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a 3/4-rate convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, the maximum ratio combining (MRC) channel equalization technique has been implemented to extract transmitted symbols without enhancing noise power.
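Maximum ratio combining weights each receive branch by the conjugate of its channel gain and normalizes by the total channel power. A minimal sketch (illustrative only, not the paper's implementation) is:

```python
def mrc_combine(received, channel_gains):
    """Combine per-branch samples r_i = h_i * s + n_i into one estimate of s."""
    num = sum(h.conjugate() * r for h, r in zip(channel_gains, received))
    den = sum(abs(h) ** 2 for h in channel_gains)
    return num / den

# Noiseless check: each branch carries s scaled by its own gain,
# so the combined estimate recovers s exactly.
s = 1 - 1j
h = [0.8 + 0.2j, 0.3 - 0.5j]
r = [gain * s for gain in h]
est = mrc_combine(r, h)
```

Because the weights are matched to the channel, branches with stronger gains contribute more, which maximizes the output SNR without amplifying noise on weak branches.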
A new efficient way based on special stabilizer multiplier permutations to at... (IJECEIAES)
BCH codes represent an important class of cyclic error-correcting codes; their minimum distances are known only in some cases, and determining them remains an open NP-hard problem in coding theory, especially for large lengths. This paper presents an efficient scheme, ZSSMP (Zimmermann Special Stabilizer Multiplier Permutation), to find the true value of the minimum distance for many large BCH codes. The proposed method consists of searching for a minimum-weight codeword with the Zimmermann algorithm in the subcodes fixed by special stabilizer multiplier permutations. These few subcodes have very small dimensions compared to the dimension of the considered code itself, so the search for a codeword of globally minimum weight is simplified in terms of run-time complexity. ZSSMP is validated on all BCH codes of length 255, for which it gives the exact value of the minimum distance. For BCH codes of length 511, the proposed technique considerably outperforms the well-known powerful scheme of Canteaut and Chabaud used to attack code-based public-key cryptosystems. ZSSMP is very fast and catches the smallest-weight codewords in a few seconds. By exploiting the efficiency and speed of ZSSMP, the true minimum distances, and consequently the error-correcting capability, of the whole set of 165 BCH codes of length up to 1023 are determined, except for the two cases of the BCH(511,148) and BCH(511,259) codes. The comparison of ZSSMP with other powerful methods proves its quality for attacking the hardness of the minimum-weight search problem, at least for the codes studied in this paper.
PERFORMANCE OF ITERATIVE LDPC-BASED SPACE-TIME TRELLIS CODED MIMO-OFDM SYSTEM... (ijcseit)
This paper presents the bit error rate (BER) performance of the low-density parity-check (LDPC) based space-time trellis coded 2×2 multiple-input multiple-output orthogonal frequency-division multiplexing (STTC-MIMO-OFDM) system on text message transmission. The system under investigation incorporates a 1/2-rate LDPC encoding scheme under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels for two transmit and two receive antennas. At the receiving section of the simulated system, the Minimum Mean-Square-Error (MMSE) channel equalization technique has been implemented to extract transmitted symbols without enhancing the noise power level. The effectiveness of the proposed system is analyzed in terms of BER versus signal-to-noise ratio (SNR). It is observable from the MATLAB-based simulation study that the proposed system performs best with BPSK as compared to other digital modulation schemes at relatively low SNRs under AWGN, Rayleigh and Rician fading channels. The transmitted text message is found to be retrieved effectively at the receiver under the iterative sum-product LDPC decoding algorithm. It has also been observed that the performance of the LDPC-based STTC-MIMO-OFDM system degrades with increasing noise power.
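The sum-product decoder mentioned above exchanges probabilistic messages on the code's Tanner graph. As a much simpler stand-in that still shows the iterative, syndrome-driven principle, here is a hard-decision bit-flipping decoder on a toy parity-check matrix (the matrix and the bit-flipping algorithm are illustrative choices, not taken from the paper):

```python
# Toy (7, 4) parity-check matrix; bit-flipping decoding is used here
# as a simpler stand-in for the sum-product algorithm.
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(H, word):
    """One entry per check: 0 if satisfied, 1 if violated."""
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]

def bit_flip_decode(H, word, max_iters=10):
    word = list(word)
    for _ in range(max_iters):
        s = syndrome(H, word)
        if not any(s):
            return word
        # Flip the bit involved in the most unsatisfied checks.
        counts = [sum(H[j][i] for j in range(len(H)) if s[j])
                  for i in range(len(word))]
        word[counts.index(max(counts))] ^= 1
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]     # all-zero word satisfies every check
received = list(codeword)
received[2] ^= 1                     # inject a single bit error
decoded = bit_flip_decode(H, received)
```

The full sum-product algorithm replaces these hard votes with log-likelihood messages but iterates over the same graph structure.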
Performance analysis and implementation for nonbinary quasi cyclic ldpc decod... (ijwmn)
Non-binary low-density parity-check (NB-LDPC) codes are an extension of binary LDPC codes with significantly better performance. Although various kinds of low-complexity iterative decoding algorithms have been proposed, the VLSI implementation of NB-LDPC decoders remains a big challenge due to their high complexity and long latency. In this brief, a highly efficient check node processing scheme that greatly reduces processing delay, comprising a Min-Max decoding algorithm and a check node unit, is proposed. Compared with previous works, the latency of the check node unit is reduced by up to 52%. In addition, the efficiency of the presented techniques is demonstrated on a design of the (620, 310) NB-QC-LDPC decoder.
International Journal of Engineering Research and Development is an international premier peer reviewed open access engineering and technology journal promoting the discovery, innovation, advancement and dissemination of basic and transitional knowledge in engineering, technology and related disciplines.
Hardware implementation of (63, 51) bch encoder and decoder for wban using lf... (ijitjournal)
Error-correcting codes are required for reliable communication through a medium that has an unacceptable bit error rate and a low signal-to-noise ratio. In the IEEE 802.15.6 2.4 GHz Wireless Body Area Network (WBAN), data get corrupted during transmission and reception due to noise and interference. Ultra-low-power operation is crucial to prolong the life of implantable devices, so simple block codes like BCH(63, 51, 2) can be employed in the transceiver design of the 802.15.6 Narrowband PHY. In this paper, the implementation of a BCH(63, 51, t = 2) encoder and decoder in VHDL is discussed. The incoming 51 bits are encoded into a 63-bit codeword by the (63, 51) BCH encoder, which can detect and correct up to 2 random errors. The encoder is implemented using a Linear Feedback Shift Register (LFSR) for polynomial division, and the decoder design is based on a syndrome calculator, the inversion-less Berlekamp-Massey algorithm (BMA) and the Chien search algorithm. Synthesis and simulation were carried out using Xilinx ISE 14.2 and ModelSim 10.1c. The designs are implemented on a Virtex-4 FPGA device and tested on a DN8000K10PCIE logic emulation board. To the best of our knowledge, this is the first reported implementation of a (63, 51) BCH encoder and decoder.
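Systematic BCH encoding amounts to dividing the shifted message polynomial by the generator polynomial, which is exactly what the LFSR does in hardware. A software sketch of that polynomial division follows. The generator used below is an assumption, computed as the product of the minimal polynomials of α and α³ over GF(2⁶) with primitive polynomial x⁶ + x + 1; the paper does not list its generator:

```python
# Systematic (63, 51) BCH encoding as LFSR-style polynomial division.
# Assumed generator g(x) = (x^6 + x + 1)(x^6 + x^4 + x^2 + x + 1)
#                        = x^12 + x^10 + x^8 + x^5 + x^4 + x^3 + 1.
N, K = 63, 51
GEN = [1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 1]   # coefficients, MSB first

def bch_encode(msg):
    """Divide msg(x) * x^(N-K) by g(x) and append the remainder as parity."""
    assert len(msg) == K
    reg = list(msg) + [0] * (N - K)
    for i in range(K):
        if reg[i]:
            for j, g in enumerate(GEN):
                reg[i + j] ^= g
    return list(msg) + reg[K:]          # systematic: message, then parity

def poly_remainder(word):
    """Remainder of word(x) mod g(x); word given MSB first."""
    reg = list(word)
    for i in range(len(word) - (N - K)):
        if reg[i]:
            for j, g in enumerate(GEN):
                reg[i + j] ^= g
    return reg[len(word) - (N - K):]

msg = [1, 0] * 25 + [1]                 # any 51-bit message
codeword = bch_encode(msg)
```

By construction every valid codeword polynomial is divisible by g(x), which is the property the decoder's syndrome calculator exploits.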
Hardware Architecture of Complex K-best MIMO Decoder (CSCJournals)
This paper presents a hardware architecture for a complex K-best multiple-input multiple-output (MIMO) decoder that reduces the complexity of the maximum likelihood (ML) detector. We develop a novel low-power VLSI design of a complex K-best decoder for MIMO with the 64-QAM modulation scheme. The use of Schnorr-Euchner (SE) enumeration and a new parameter, Rlimit, reduces the complexity of calculating the K best nodes to a certain level with increased performance. A total word length of only 16 bits has been adopted for the hardware design, limiting the bit error rate (BER) degradation to 0.3 dB with list size K and Rlimit equal to 4. The proposed VLSI architecture is modeled in Verilog HDL using Xilinx tools and synthesized with Synopsys Design Vision in 45 nm CMOS technology. According to the synthesis results, it achieves 1090.8 Mbps throughput with a power consumption of 782 mW and a latency of 0.33 µs. The maximum frequency of the proposed design is 181.8 MHz.
Chaos Encryption and Coding for Image Transmission over Noisy Channels (iosrjce)
IOSR Journal of Computer Engineering (IOSR-JCE) is a double blind peer reviewed International Journal that provides rapid publication (within a month) of articles in all areas of computer engineering and its applications. The journal welcomes publications of high quality papers on theoretical developments and practical applications in computer technology. Original research papers, state-of-the-art reviews, and high quality technical notes are invited for publications.
Performance Comparison of Coded and Un-Coded OFDM for Different Fic Code (IJERA Editor)
Error correction and detection in digital communication are used to compensate for the bit errors introduced during data transmission. In this paper, the performance of several error-detecting and error-correcting coding algorithms for OFDM systems is investigated. Convolutional-code-, RS-code- and linear-block-code-based OFDM systems have been implemented, studied and analyzed. Simulation is performed in the MATLAB environment.
A new channel coding technique to approach the channel capacity (ijwmn)
Since Shannon’s 1948 channel coding theorem, we have witnessed many channel coding techniques developed to approach the Shannon limit. A wide range of channel codes is available, with different complexity levels and error-correction performance. Many powerful coding schemes have been deployed on the power-limited additive white Gaussian noise (AWGN) channel. However, it seems we have arrived at the end of the advancement path for most existing channel codes. This article introduces a new coding technique that can be used either as the last coding stage of a concatenated coding scheme or in a parallel configuration with other powerful channel codes, to achieve reliable error performance with moderately complex decoding. We walk through an example to explain the overall approach of the proposed coding technique, and finally we look at some simulation results over an AWGN channel to demonstrate its potential.
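The Shannon limit referenced above can be computed directly. For a rate-R code, using the textbook capacity relation C = log2(1 + SNR) with SNR = R·Eb/No (a standard result, not part of this article), the minimum Eb/No for reliable communication is:

```python
import math

def shannon_min_ebno_db(rate):
    """Minimum Eb/No (dB) for reliable rate-`rate` coding, from
    C = log2(1 + SNR) with SNR = rate * Eb/No, setting C = rate."""
    ebno = (2 ** rate - 1) / rate
    return 10 * math.log10(ebno)

# As rate -> 0 this approaches the ultimate limit of about -1.59 dB (ln 2).
ultimate = shannon_min_ebno_db(1e-6)
half_rate = shannon_min_ebno_db(0.5)
```

Plotting a code's BER waterfall against this bound is the usual way to state how far a scheme sits from capacity.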
An Efficient Fault Tolerance System Design for Cmos/Nanodevice Digital Memories (IJERA Editor)
Targeting future fault-prone hybrid CMOS/nanodevice digital memories, this paper presents two fault-tolerance design approaches that integrally address the tolerance of defects and transient faults. These two approaches share several key features, including the use of a group of Bose-Chaudhuri-Hocquenghem (BCH) codes for both defect tolerance and transient fault tolerance, and the integration of BCH code selection with dynamic logical-to-physical address mapping. Thus, a new BCH decoder model is proposed to reduce area and simplify the computational scheduling of both the syndrome and Chien search blocks without parallelism, leading to high throughput. The goal of fault-tolerant computing is to improve the dependability of systems, where dependability can be defined as the ability of a system to deliver service at an acceptable level of confidence in either the presence or absence of faults. The results of simulation and implementation using Xilinx ISE software and the LCD screen on the FPGA board are shown at last.
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb... (TELKOMNIKA JOURNAL)
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm has been proposed for Block Turbo Codes (BTCs). The algorithm ingeniously adapts the RLL based decoding algorithm for the constituent block codes, which is a soft-input hard-output algorithm. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft input to the iterative turbo decoding process. RLL based decoding of the constituent codes estimates the optimal transmitted codeword through a directed minimal search. The proposed RLL based decoder for the constituent code replaces the Chase-2 based constituent decoder in the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear performance advantage over the conventional Chase-2 based SISO decoding scheme, with reduced decoding latency at lower noise levels.
International Journal of Engineering Research and Applications (IJERA) is an open access online peer reviewed international journal that publishes research and review articles in the fields of Computer Science, Neural Networks, Electrical Engineering, Software Engineering, Information Technology, Mechanical Engineering, Chemical Engineering, Plastic Engineering, Food Technology, Textile Engineering, Nano Technology & science, Power Electronics, Electronics & Communication Engineering, Computational mathematics, Image processing, Civil Engineering, Structural Engineering, Environmental Engineering, VLSI Testing & Low Power VLSI Design etc.
Master's Degree Thesis on High Order Modulation and Coding Schemes for Satellite Transmitters
Advisors
Prof.dr. Roberto Garello
Eng.dr. Domenico Giancristofaro
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer reviewed journal. For more detail or to submit your article, please visit www.ijera.com
FPGA Implementation of Efficient Viterbi Decoder for Multi-Carrier Systems (IJMER)
In this paper, we are concerned with designing and implementing a convolutional encoder and an Adaptive Viterbi Decoder (AVD), which are essential blocks in a digital communication system, using FPGA technology. Convolutional coding is a coding scheme used in communication systems for error correction, employed in applications such as deep-space and wireless communications. It provides an alternative to block codes for transmission over a noisy channel: block codes can be applied only to blocks of data, whereas convolutional coding can be applied both to continuous data streams and to blocks of data. The Viterbi decoder uses a PNPH (Permutation Network based Path History) management unit, a special path management unit for faster decoding with less routing area. The proposed architecture is realized as an Adaptive Viterbi Decoder with constraint length K = 3 and code rate (k/n) of 1/2 in Verilog HDL. Simulation is done using Xilinx ISE 12.4i design software, targeting a Xilinx Virtex-5 XC5VLX110T FPGA.
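For the K = 3, rate-1/2 configuration mentioned above, the classic generator pair is (7, 5) in octal. The following is a hard-decision software sketch of encoding and Viterbi decoding over that code; it illustrates the trellis search only and does not model the paper's adaptive pruning or PNPH path-history hardware:

```python
# K=3, rate-1/2 convolutional code with generators (7, 5) octal.
G = (0b111, 0b101)

def encode(bits):
    state = 0                          # the two previous input bits
    out = []
    for b in bits:
        reg = (b << 2) | state         # 3-bit window: current + history
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(symbols, n_bits):
    """Hard-decision Viterbi over the 4-state trellis, Hamming metric."""
    INF = float("inf")
    metrics = [0, INF, INF, INF]       # start in state 0
    paths = [[], [], [], []]
    for t in range(n_bits):
        r = symbols[2 * t: 2 * t + 2]
        new_metrics = [INF] * 4
        new_paths = [[] for _ in range(4)]
        for state in range(4):
            if metrics[state] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | state
                expect = [bin(reg & g).count("1") % 2 for g in G]
                dist = (expect[0] != r[0]) + (expect[1] != r[1])
                ns = reg >> 1          # next state
                m = metrics[state] + dist
                if m < new_metrics[ns]:
                    new_metrics[ns] = m
                    new_paths[ns] = paths[state] + [b]
        metrics, paths = new_metrics, new_paths
    return paths[metrics.index(min(metrics))]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
enc = encode(msg)
rx = list(enc)
rx[3] ^= 1                             # single channel bit error
decoded = viterbi_decode(rx, len(msg))
```

With free distance 5, this code corrects the injected single-bit error; a hardware AVD keeps only the best surviving paths per step instead of all four states.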
Study of the operational SNR while constructing polar codes (IJECEIAES)
Channel coding is commonly based on protecting the information to be communicated across an unreliable medium by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to correct, or at least detect, bit errors in digital communication systems. In this paper we study an original FECC known as polar coding, which has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. Hence, we investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates while varying the blocklength. We find in our extensive simulations that the BER becomes more sensitive to the operational SNR (OSNR) as we increase the blocklength and code rate. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate changes the OSNR domain. This trade-off must be taken into consideration when designing polar codes for high-throughput applications.
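The polar encoder itself multiplies the input vector by the n-fold Kronecker power of the 2×2 kernel [[1,0],[1,1]] over GF(2). A recursive butterfly sketch of that transform (illustrative, independent of the paper's code construction and frozen-bit selection) is:

```python
def polar_transform(u):
    """Compute x = u * F_n over GF(2), where F_n is the n-fold
    Kronecker power of the kernel [[1, 0], [1, 1]]."""
    n = len(u)                # must be a power of two
    if n == 1:
        return list(u)
    half = n // 2
    # Combine step: upper half is u1 XOR u2, lower half is u2.
    top = polar_transform([u[i] ^ u[i + half] for i in range(half)])
    bot = polar_transform(u[half:])
    return top + bot

u = [1, 0, 1, 1, 0, 1, 0, 0]
x = polar_transform(u)
```

A convenient property over GF(2) is that the transform is its own inverse, so applying it twice recovers the input.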
A couple of years ago I started sharing my stories, projects and ideas online, and I continue to do so to this day. I had to make use of every social tool I wanted to work with, and go wherever the people I wanted to invite were. I have run my own public pages and helped run others'. I have figured a few things out and am ready to share what I know.
That is why I am publishing this document.
Its working title is "Your Own VKontakte Public Page: Everything I Wish I Had Known a Year Ago to Save Time and Nerves" :))
A NOVEL APPROACH FOR LOWER POWER DESIGN IN TURBO CODING SYSTEM (VLSICS Design)
Low power is an extremely important issue for future mobile communication systems; the focus of this paper is the implementation of turbo codes for low-power solutions. The effect on performance of variations in parameters such as frame length, number of iterations, type of encoding scheme and type of interleaver in the presence of additive white Gaussian noise is studied with a floating-point model. In order to obtain the effect of quantization and word-length variation, a fixed-point model of the application is also developed. The application performance measure, namely the bit error rate (BER), is used as a design constraint while optimizing for power and area coverage. Low-power optimization is performed at the implementation level through voltage scaling. With these techniques, power is reduced by 98.5%, area (LUTs) by 57%, and the speed grade is increased. This type of power manager is proposed and implemented based on the timing details of the turbo decoder in the VHDL model.
New Structure of Channel Coding: Serial Concatenation of Polar Codesijwmn
In this paper, we introduce a new coding and decoding structure for enhancing the reliability and
performance of polar codes, specifically at low error rates. We achieve this by concatenating two polar
codes in series to create robust error-correcting codes. The primary objective here is to optimize the
behavior of individual elementary codes within polar codes. In this structure, we incorporate interleaving,
a technique that rearranges bits to maximize the separation between originally neighboring symbols. This
rearrangement is instrumental in converting error clusters into distributed errors across the entire
sequence. To evaluate their performance, we proposed to model a communication system with seven
components: an information source, a channel encoder, a modulator, a channel, a demodulator, a channel
decoder, and a destination. This work focuses on evaluating the bit error rate (BER) of codes for different
block lengths and code rates. Next, we compare the bit error rate (BER) performance between our
proposed method and polar codes.
Turbo codes are error-correcting codes whose performance is close to the Shannon theoretical limit. The motivation for using turbo codes is that they combine a random-like appearance on the channel with a physically realizable decoding structure. Communication systems face the problems of latency, fast switching, and reliable data transfer. The objective of this research paper is to design a turbo encoder and decoder hardware chip and analyze its performance. In the turbo encoder, two convolutional codes are concatenated in parallel and separated by an interleaver, or permuter. The data received from the channel is decoded iteratively using the two constituent decoders: in each cycle, the soft (probabilistic) information about each bit of the decoded sequence is passed from one elementary decoder to the other, and this information is updated regularly.
The performance of the chip is also verified using the maximum a posteriori
(MAP) method in the decoder chip. The performance of field-programmable
gate array (FPGA) hardware is evaluated using hardware and timing
parameters extracted from Xilinx ISE 14.7. The parallel concatenation offers
a better global rate for the same component code performance, and reduced
delay, low hardware complexity, and higher frequency support.
CODING SCHEMES FOR ENERGY CONSTRAINED IOT DEVICESijmnct
This paper investigates the application of advanced forward error correction techniques, mainly low-density parity-check (LDPC) codes and polar codes, for IoT networks. These codes are under consideration for 5G systems. Different code parameters, such as code rate and number of decoding iterations, are used to show their effect on the performance of the network. LDPC performed better than polar codes over the IoT network scenario considered in this work, for the same coding rate and number of decoding iterations. Considering bit error rate (BER) performance, LDPC with rate 1/3 provided an improvement of up to 2.6 dB for the additive white Gaussian noise (AWGN) channel, and 2 dB for SUI-3 (a frequency-selective fading channel model). LDPC codes give an improvement in throughput of about 12% as compared to polar codes with a coding rate of 2/3 over the AWGN channel; the corresponding value over the SUI-3 channel is about 10%. Finally, in comparison with LDPC, polar codes show better energy saving for a large number of decoding iterations and high coding rates.
PERFORMANCE OF WIMAX PHYSICAL LAYER WITH VARIATIONS IN CHANNEL CODING AND DIG...ijistjournal
The aim of this paper is to analyze the bit error rate (BER) performance of the WiMAX physical layer with the implementation of different concatenated channel coding schemes under QAM and 16-QAM digital modulations over realistic channel conditions (i.e. noise and multipath fading). In concatenated channel coding, the WiMAX system incorporates a CRC-CC (Cyclic Redundancy Check and Convolutional) or RS-CC (Reed-Solomon and Convolutional) encoder over an additive white Gaussian noise (AWGN) channel and other multipath fading (Rayleigh and Rician) channels. A segment of synthetic data is used for the analysis. Computer simulation results based on BER and signal-to-noise ratio (SNR) demonstrate that the performance of the concatenated CRC-CC coded WiMAX system under QAM modulation is better than the RS-CC coded system over noisy and fading environments.
BER Performance for Convolutional Code with Soft & Hard Viterbi DecodingIJMER
Viterbi decoding has a fixed decoding time and is well suited to hardware decoders. Here we propose a Viterbi algorithm with decoding rate 1/3, which dynamically improves the performance of the channel.
An Energy-Efficient Lut-Log-Bcjr Architecture Using Constant Log Bcjr AlgorithmIJERA Editor
Error correcting codes are used to recover data from signals corrupted by noise and interference. There are many error correcting codes; among them, turbo codes are considered the best because their performance is very close to the Shannon theoretical limit. The MAP algorithm is commonly used in the turbo decoder. Among the different versions of the MAP algorithm, the constant-log-BCJR algorithm has lower complexity and good error performance. The constant-log-BCJR algorithm can be easily designed using a look-up table, which reduces memory consumption. The proposed constant-log-BCJR decoder is designed to decode two blocks of data at a time, which increases the throughput. The complexity of the decoder is further reduced by the use of add-compare-select (ACS) units and registers. The proposed decoder is simulated using Xilinx ISE and synthesized on a Spartan-3 FPGA, and it is found that the constant-log-BCJR decoder utilizes less memory and power than the LUT-log-BCJR decoder.
An efficient reconfigurable code rate cooperative low-density parity check co...IJECEIAES
In recent days, extensive digital communication has been performed. Overheads such as signal attenuation and code rate fluctuations during the digital communication process, as well as the maintenance of authentication, can be minimized and optimized by adopting parallel encoder and decoder operations. To overcome the above-mentioned drawbacks, a reconfigurable code rate cooperative (RCRC) low-density parity check (LDPC) method is proposed. The proposed RCRC-LDPC is capable of operating on gigabits/sec data; it effectively performs linear encoding in dual-diagonal form, widens the range of code rates, and provides an optimal degree distribution of the LDPC mother code. The proposed method optimizes the transmission rate and is capable of operating at a 0.98 code rate, the highest upper-bounded code rate as compared to the existing methods. The proposed method's implementation has been carried out using MATLAB, and as per the simulation results, the proposed method is capable of reaching a throughput efficiency greater than 8.2 (1.9) gigabits per second with a clock frequency of 160 MHz.
Space time block coding is a technique used in wireless communication to transmit multiple copies of a data stream across a number of antennas and to exploit the various received versions of the data to improve the reliability of data transfer. The fact that the transmitted signal must traverse a potentially difficult environment with scattering, reflection, refraction and so on and may then be further corrupted by thermal noise in the receiver means that some of the received copies of the data may be closer to the original signal than others. This redundancy results in a higher chance of being able to use one or more of the received copies to correctly decode the received signal. In fact, space–time coding combines all the copies of the received signal in an optimal way to extract as much information from each of them as possible.
Chandan Mishra, Int. Journal of Engineering Research and Applications (IJERA), www.ijera.com, ISSN: 2248-9622, Vol. 5, Issue 4 (Part 7), April 2015, pp. 121-124
Simulation of Turbo Convolutional Codes for Deep Space Mission
Chandan Mishra & Aashish Hiradhar
Dr. C. V. Raman University, Kargi Road, Kota, Bilaspur (C.G.)
Abstract
In satellite communication, deep space missions are the most challenging, since the system has to work at very low Eb/No. Concatenated codes are the ideal choice for such deep space missions. This paper describes the simulation of Turbo codes in SIMULINK. The performance of a Turbo code depends on various factors; in this paper, we consider the impact of interleaver design on the performance of the Turbo code. A detailed simulation is presented and the performance is compared for different interleaver designs.
Keywords: Eb/No, PCCC, HCCC, SCCC, SNR, AWGN
I. Introduction
The usefulness of concatenated codes was first
noticed by Forney in [1]. In general, the
concatenation of convolutional codes can be
classified into three categories, i.e., PCCC, SCCC, and
hybrid concatenated convolutional codes (HCCC).
The constituent convolutional codes (CCs) used in
each scheme fall into several classes of systematic,
nonsystematic, recursive and non-recursive schemes.
Systematic convolutional codes have their inputs
appear directly at the output, while non systematic
convolutional codes do not have this property. A non-
recursive encoder does not have any feedback
connection while a recursive encoder does. In
general, nonsystematic non-recursive CCs perform
almost the same as equivalent systematic recursive
CCs since they exhibit the same distance spectrum. In
the original turbo code, two identical recursive
systematic convolutional (RSC) codes were used.
Several other authors have explored the use of
nonsystematic recursive CCs as the constituent codes,
e.g., Massey and Costello [2, 3]. In [4, 5], Benedetto
et al. and Perez et al. showed that recursive CCs can
produce higher weight output codewords compared
to nonrecursive CCs, even when the input
information weight is low. This is a major advantage
in a PCCC system since low input weight codewords
dominate the error events. In addition, PCCC requires
a long information block in order to perform well in
the low SNR region. In this case, recursive CCs can
provide an additional interleaving gain that is
proportional to the length of the interleaver while
nonrecursive CCs cannot. Therefore, RSCs are
preferable in practice as the constituent code for a
PCCC or the inner code for an SCCC or HCCC.
Detailed treatments of the constituent CC encoder
can be found in Lin and Costello [6] and many
excellent references within, e.g., [4, 7]. In the
following sections, we will examine the structure for
each scheme. We assume that these systems consist
of only two CCs. Extension to multiple CCs is straightforward and has been investigated in a
number of references [8, 9].
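As a concrete illustration of the recursive systematic structure discussed above, the sketch below implements a memory-2, rate-1/2 RSC encoder with the classic (1, 5/7) octal generator pair; this generator choice is a textbook example for illustration only, not necessarily the constituent code of the CCSDS system considered later.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic convolutional encoder,
    memory 2, generators (1, g2/g1) with g1 = 7, g2 = 5 in octal."""
    s1 = s2 = 0                 # two-bit shift-register state
    out = []
    for u in bits:
        a = u ^ s1 ^ s2         # feedback bit (g1 = 1 + D + D^2)
        p = a ^ s2              # parity bit   (g2 = 1 + D^2)
        out.append((u, p))      # systematic bit appears directly at the output
        s1, s2 = a, s1          # shift the register
    return out

print(rsc_encode([1, 0, 0, 0]))  # [(1, 1), (0, 1), (0, 1), (0, 0)]
```

Note that, because of the feedback, the single-1 input above keeps producing nonzero parity bits: this is exactly the property by which recursive CCs turn low-weight inputs into higher-weight codewords.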
II. Parallel Concatenated Codes
Parallel-Concatenated Convolutional Codes
(PCCC), known as turbo codes, allow structure through concatenation and randomness through
interleaving. The introduction of turbo codes has
increased the interest in the coding area since these
codes give most of the gain promised by the channel-
coding theorem.
The CCSDS Telemetry Channel Coding
Recommendation [1] establishes a common
framework and provides a standardized basis for the
coding schemes used by CCSDS Agencies for space
telemetry data communications. This standard
traditionally provides the benchmark for new and emerging coding technologies. Turbo codes have an
astonishing performance of bit error rate (BER) at
relatively low Eb/N0. Turbo codes were chosen as a
new option for this standard in 1999, only 6 years
since their official presentation to the international
community: this was the first international standard
including turbo codes. The reason was the significant
improvement in terms of power efficiency assured by
turbo codes over the old codes of the standard.
Figure 1 shows the complete SIMULINK model of the CCSDS-compliant turbo encoder and decoder.
Figure 1. CCSDS-Compliant Turbo Encoder and Decoder
2.1 Turbo Encoder and Decoder
In this case, two RSCs of rates Ri = 1/ni, i ∈ {1, 2}, are connected in parallel. The interleaver π interleaves the uncoded message u = {u0, u1, . . . , uN−1} of length N before it enters the second encoder. If the constituent encoders are RSC codes, the systematic bit is transmitted only once, and no termination of the constituent codes is performed, the overall code rate R for this PCCC scheme is

R = 1 / (n1 + n2 − 1)     (1)

Clearly, the code rate R in (1) is less than the individual code rate Ri of each constituent encoder.
For example, the PCCC in [10] uses two identical
constituent RSC encoders of rate 1/2 each. Since the
two RSC encoders produce the same original
message u at the output (one interleaved and one not
interleaved), one of them is therefore deleted.
Applying (1), the overall code rate is equal to 1/3.
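The rate computation in this example is easy to check with a few lines of code; `pccc_rate` below is an illustrative helper written for this paper's Eq. (1), not part of the SIMULINK model.

```python
from fractions import Fraction

def pccc_rate(n1, n2):
    """Overall rate of a PCCC built from RSCs of rates 1/n1 and 1/n2,
    with the duplicated systematic output of the second encoder deleted:
    R = 1 / (n1 + n2 - 1), as in Eq. (1)."""
    return Fraction(1, n1 + n2 - 1)

print(pccc_rate(2, 2))  # two rate-1/2 RSC encoders -> overall rate 1/3
```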
This low rate system offers very strong
protection to the transmitted message. Generally, the
lower the code rate, the higher the protection to the
transmitted data. In practice, a low code rate system
is used when the SNR is low or the bandwidth is
large. However, a low code rate is inefficient in a
bandwidth limited system due to the extra
redundancy in the coded message. A higher coding
rate is necessary for achieving higher bandwidth
efficiency.
Optimally, a high code rate concatenated system
should use high rate constituent CCs with the largest
effective distance deff, where deff is the smallest
Hamming weight of codewords with input weight
two [11]. However, due to the constraints of the
decoder, i.e., the trellis branch complexity increasing
almost exponentially with respect to the input into the
encoder, these systems are normally not implemented in practice. In the past, several authors
have tried to use the dual code [12, 13] to design very
high rate turbo codes with low decoder complexity.
However, the implementation of the decoder is not
easy because the estimation of the branch and state
metrics in the decoder requires a very high level of
accuracy.
A simple technique to obtain a higher code rate
using the same low rate constituent code is called
puncturing. Referring to Figure 2, certain parity bits
(C11 and C22) are deleted from the encoded
sequence before going into the multiplexer. The
advantage of this puncturing technique is that it
requires no changes in the decoder, i.e., the same rate
1/2 decoders can be used for different higher code
rates. This is especially useful in an adaptive system
where code rates need to be varied depending on the
channel conditions. The penalty to pay for puncturing
a low rate encoder to a higher rate encoder is that the
system performance is degraded in comparison to a
similar high rate encoder without puncturing. This is
due to a lower deff or a larger number of effective
nearest neighbours Neff of the punctured code. In
addition, when RSC codes are used, there are two choices: deletion of the parity bits or deletion of the systematic bits. However, in general, deletion of parity bits is preferred over deletion of systematic bits. This restricts us from choosing an optimal
puncturing matrix for a very high code rate, e.g., k/(k
+ 1), since many parity bits are required. In this paper, we will compare the performance of both types of puncturing structure.
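A puncturing stage of this kind can be sketched in a few lines; the helpers below are illustrative only, and the period-6 pattern [1 0 1 1 1 0] matches the puncturing matrix quoted later in the paper for the parity-punctured configuration.

```python
def puncture(coded, pattern):
    """Keep only the symbols where the (cyclically repeated) pattern is 1."""
    return [c for i, c in enumerate(coded) if pattern[i % len(pattern)]]

def depuncture(received, pattern, total_len, erasure=None):
    """Reinsert erasures at the deleted positions before decoding,
    so the same rate-1/2 decoder can be reused unchanged."""
    it = iter(received)
    return [next(it) if pattern[i % len(pattern)] else erasure
            for i in range(total_len)]

pattern = [1, 0, 1, 1, 1, 0]      # delete 2 out of every 6 coded symbols
coded = list(range(12))           # stand-in for a rate-1/2 coded stream
kept = puncture(coded, pattern)
print(len(kept))                  # 8: the rate is raised from 1/2 to 3/4
print(depuncture(kept, pattern, 12))
```

The depunctured stream carries erasures (here `None`) at the deleted positions, which is why no change to the decoder itself is needed.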
Figure 2. CCSDS-Compliant Turbo Encoder
To investigate the “goodness” of turbo code
performances, it is useful to compare them against
the channel coding theoretical limits. For a fixed
code-rate k/n and a specific constellation, the ideal
spectral efficiency η (measured in bps/Hz) is
computed by referring to ideal Nyquist base-band
filters. For 4-PSK constellations, which carry two coded bits per symbol, the following expression results:

η = 2 · (k/n) bps/Hz
Consider the transmission of a binary turbo code
over the AWGN channel by a Gray labelled 4-PSK.
At very high signal-to-noise ratios (SNR), that is very
low error rates, the code performance practically
coincides with the union bound, truncated to the
contribution of the minimum distance. The FER and BER code performance can then be approximated by:

FER ≈ (Amin / 2) · erfc(√(dmin · R · Eb/N0))     (2)

BER ≈ (wmin / (2N)) · erfc(√(dmin · R · Eb/N0))     (3)

where Amin is the code dominant multiplicity (the number of codewords with weight dmin), and wmin is the code dominant information multiplicity (the sum of the Hamming weights of the Amin information frames generating the codewords with weight dmin). When
comparing the simulated curves with Eq. 2 and 3 a
small fixed penalty (usually less than 0.25 dB for
turbo codes) must be also taken into account, due to
the sub-optimality of iterative decoding.
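The truncated union bound of Eqs. (2) and (3) can be evaluated directly with the complementary error function; the sketch below is illustrative, and the distance-spectrum values passed in at the bottom are hypothetical placeholders, not the spectrum of the CCSDS code.

```python
from math import erfc, sqrt

def fer_floor(ebno_db, dmin, Amin, R):
    """Truncated union bound on FER, in the form of Eq. (2)."""
    ebno = 10 ** (ebno_db / 10)          # Eb/N0 from dB to linear
    return 0.5 * Amin * erfc(sqrt(dmin * R * ebno))

def ber_floor(ebno_db, dmin, wmin, N, R):
    """Truncated union bound on BER, in the form of Eq. (3);
    N is the information frame length."""
    ebno = 10 ** (ebno_db / 10)
    return 0.5 * (wmin / N) * erfc(sqrt(dmin * R * ebno))

# hypothetical spectrum values, for illustration only
for snr_db in (2.0, 4.0, 6.0):
    print(snr_db, fer_floor(snr_db, dmin=15, Amin=3, R=1/3))
```

As expected from the bound, the predicted error floor drops steeply as Eb/N0 grows; remember the small fixed penalty for iterative decoding noted above when comparing against simulated curves.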
Figure 3 shows a SIMULINK model of the CCSDS-compliant Turbo decoder; here puncturing is done on the parity bits, and the puncturing matrix in this case is [1 0 1 1 1 0]. Figure 4 shows a SIMULINK model of a turbo decoder where the systematic bits are not sent from the encoder side, so no puncturing and depuncturing are involved. Figure 5 shows the comparative performance of the two cases. The results show that deletion of parity bits is preferred over deletion of systematic bits.
Figure 3. CCSDS-Compliant Turbo Decoder
Figure 4. Turbo Decoder without transmission of
systematic bits
[Figure 5 plots BER versus Eb/No (dB) from 0 to 18 dB for the uncoded system, the rate-1/2 Turbo code with puncturing of parity bits, and the rate-1/2 Turbo code with puncturing of systematic bits, showing a coding gain of 9 dB.]
Figure 5. Comparative performance analysis of different turbo codecs
III. Effect of Interleaver Design on Turbo Code
The performance of a Turbo code depends heavily on the interleaver design. The error floor problem of Turbo codes can be solved by using a proper interleaver design. Figure 6 shows the performance of the turbo code with different interleavers. The code rate for all cases is 1/2 and the total number of decoding iterations is set to 6. The frame length is set to 1784 bits. Table 1 shows the error-free performance analysis of the Turbo codec with respect to different interleavers. It can be easily seen that the performance of the random-interleaved Turbo codec is superior compared to the other interleaved Turbo codecs. Hence the random interleaver is the suitable choice for the Turbo codec.
Table 1: Packet size: 1784 bits, BER = 10^-6

S.No | Interleaver Type | 1st iteration (Eb/No) | 2nd iteration (Eb/No) | 3rd iteration (Eb/No)
1 | Pseudo random | 4.4 | 3.3 | 2.1
2 | Matrix | 5.5 | 4.8 | 3.9
3 | Helical | 5.2 | 4.7 | 3.8
4 | Circular | 5.6 | 5.0 | 4.3
5 | Algebraic | 5.1 | 4.2 | 3.5
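The pseudo-random interleaver that performs best above can be sketched as a fixed, seeded permutation shared by encoder and decoder; the helper names and the seed below are illustrative choices, with only the 1784-bit frame length taken from the paper.

```python
import random

def make_interleaver(n, seed=2015):
    """Pseudo-random interleaver: a fixed, seeded permutation of n positions,
    known to both the encoder and the decoder."""
    rng = random.Random(seed)
    perm = list(range(n))
    rng.shuffle(perm)
    return perm

def interleave(bits, perm):
    """Position i of the output carries original bit perm[i]."""
    return [bits[p] for p in perm]

def deinterleave(bits, perm):
    """Invert the permutation, restoring the original order."""
    out = [None] * len(perm)
    for i, p in enumerate(perm):
        out[p] = bits[i]
    return out

perm = make_interleaver(1784)                 # frame length used in the paper
frame = [i % 2 for i in range(1784)]          # an arbitrary test frame
print(deinterleave(interleave(frame, perm), perm) == frame)  # True
```

Because a channel burst hits consecutive interleaved positions, the errors land at scattered original positions after deinterleaving, which is what converts error clusters into isolated errors the constituent decoders can handle.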
IV. Conclusion
Detailed simulation results are presented for a Turbo concatenated code structure for deep space missions. The simulation results show that, for an identical code rate, the performance of the parity-punctured Turbo code is superior compared to the data-punctured code structure. The simulation results also show that the random interleaver is the ideal choice for the Turbo concatenated code structure.
References
[1] G. D. Forney, Jr., Concatenated Codes. Cambridge, MA: MIT Press, 1966.
[2] P. C. Massey and D. J. Costello, Jr., "Turbo codes with recursive nonsystematic quick-look-in constituent codes," IEEE Int. Symp. Inform. Theory, Washington, USA, pp. 141-145, June 2001.
[3] A. Banerjee, F. Vatta, B. Scanavino, and D. J. Costello, Jr., "Nonsystematic turbo codes," IEEE Trans. Commun., vol. 53, pp. 1841-1849, Nov. 2005.
[4] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: Performance analysis, design and iterative decoding," IEEE Trans. Inform. Theory, vol. 44, pp. 909-926, May 1998.
[5] L. C. Perez, J. Seghers, and D. J. Costello, Jr., "A distance spectrum interpretation of turbo codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1698-1709, Nov. 1996.
[6] S. Lin and D. J. Costello, Jr., Error Control Coding. New Jersey: Pearson Prentice Hall, 2004.
[7] S. Benedetto, R. Garello, G. Montorsi, C. Berrou, C. Douillard, D. Giancristofaro, A. Ginesi, L. Giugno, and M. Luise, "MHOMS: High speed ACM modem for satellite applications," IEEE Trans. Wireless Commun., vol. 12, pp. 66-77, Apr. 2005.
[8] D. Divsalar and F. Pollara, "Multiple turbo codes," IEEE Military Commun. Conf., San Diego, USA, pp. 279-285, Nov. 1995.
[9] S. Huettinger and J. Huber, "Design of multiple-turbo-codes with transfer characteristics of component codes," Conf. on Inform. Sciences and Systems, Princeton, USA, Mar. 2002.
[10] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, no. 2, pp. 429-445, March 1996.
[11] S. Dolinar and D. Divsalar, "Weight distributions for turbo codes using random and nonrandom permutations," JPL TDA Progress Report 42-122, pp. 56-65, Aug. 1995.
[12] A. Graell i Amat, G. Montorsi, and S. Benedetto, "A new approach to the construction of high-rate convolutional codes," IEEE Commun. Letters, vol. 5, pp. 453-455, Nov. 2001.
[13] S. Sudharshan and S. S. Pietrobon, "A new scheme to reduce complexity of APP decoders working on the dual code," IEEE Veh. Technol. Conf., Melbourne, Australia, vol. 3, pp. 1377-1381, May 2006.