Energy-Efficient LDPC Decoder using DVFS for Binary Sources (IDES Editor)
This paper deals with reducing transmission power usage in wireless sensor networks. A system with FEC can provide a target reliability using less power than a system without FEC. We propose to study LDPC codes to provide reliable communication while saving power in sensor networks. As shown later, LDPC codes are more energy efficient than BCH codes. Another way to reduce the transmission cost is to compress the correlated data among a number of sensor nodes before transmission: a suitable source encoder that removes the redundant information bits can save transmission power. Such a system requires distributed source coding. We propose to apply LDPC codes to both distributed source coding and source-channel coding to obtain a two-fold energy saving. Source and channel coding with LDPC codes for two correlated nodes over an AWGN channel is implemented in this paper. An iterative decoding algorithm is used for decoding the data, and its efficiency is compared with a newer layered decoding algorithm based on the offset min-sum algorithm. Using the layered decoding algorithm and adaptive LDPC decoding for the AWGN channel reduces the decoding complexity and the number of iterations, so power is saved and the decoder can be implemented in hardware.
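The offset min-sum check-node update mentioned above can be sketched in a few lines. This is a generic illustration with our own variable names and an assumed offset of 0.5, not the implementation from the paper:

```python
def offset_min_sum_check_update(llrs, offset=0.5):
    """One check-node update: for each incoming LLR, return the product of
    the signs of the other LLRs times max(min |other LLR| - offset, 0)."""
    out = []
    for i in range(len(llrs)):
        others = llrs[:i] + llrs[i + 1:]
        sign = 1.0
        for v in others:
            sign = -sign if v < 0 else sign
        magnitude = max(min(abs(v) for v in others) - offset, 0.0)
        out.append(sign * magnitude)
    return out
```

The offset term compensates for the overestimation of the plain min-sum approximation relative to the exact sum-product update; a layered schedule would apply this update one check-row at a time, refreshing the posterior LLRs between rows.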
A second important technique in error-control coding is convolutional coding. In this type of coding the encoder output is not in block form, but is an encoded sequence generated from an input information sequence. Convolutional encoding is designed so that decoding can be performed in a structured and simplified way. One design assumption that simplifies decoding is linearity of the code; for this reason, linear convolutional codes are preferred. The source alphabet is taken from a finite field, or Galois field, GF(q).
Convolutional coding is a popular error-correcting method used in digital communications. The convolution operation encodes redundant information into the transmitted signal, thereby improving the reliability of transmission over the channel. Convolutional encoding with Viterbi decoding is a powerful FEC technique that is particularly suited to channels in which the transmitted signal is corrupted mainly by AWGN. It is simple, performs well, and has a low implementation cost.
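As a concrete illustration of how an encoded sequence is generated from an input information sequence, here is a minimal rate-1/2 convolutional encoder with constraint length K = 3 and the textbook generators (7, 5) in octal; this is a generic example, not an encoder taken from any of the papers listed here:

```python
def conv_encode(bits, g0=0b111, g1=0b101):
    """Rate-1/2 convolutional encoder, K = 3, generators (7, 5) octal.
    Emits two coded bits per information bit."""
    state = 0  # 3-bit shift register: current bit plus two previous bits
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & g0).count("1") % 2)  # parity over taps of g0
        out.append(bin(state & g1).count("1") % 2)  # parity over taps of g1
    return out
```

Each output pair depends on the current input bit and the two preceding bits, which is exactly the "memory" that distinguishes convolutional codes from block codes.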
A cornerstone of our digital society stems from the explosive growth of multimedia traffic in general and video in particular. Video already accounts for over 50% of internet traffic today, and mobile video traffic is expected to grow by a factor of more than 20 in the next five years. This massive volume of data has resulted in a strong demand for highly efficient approaches to video transmission. Error-correcting codes for reliable communication have been studied for several decades. However, ideal coding techniques for video streaming are fundamentally different from classical error-correction codes: to be optimized they must operate under low-latency, sequential encoding and decoding constraints, and as such they must inherently have a convolutional structure. Such unique constraints lead to fascinating new open problems in the design of error-correction codes. In this talk we look at these problems from a system-theoretic perspective. In particular we propose the use of convolutional codes.
Design and Performance Analysis of Convolutional Encoder and Viterbi Decoder ... (IJERA Editor)
In digital communication, forward error correction methods have great practical importance when the channel is noisy. Convolutional error-correcting codes can correct both random and burst errors. Convolutional encoding has been used in digital communication systems including deep-space and wireless communication. The error-correction capability of a convolutional code depends on its code rate and constraint length: a low code rate and a high constraint length give more error-correction capability, but they also introduce large overhead. This paper introduces convolutional encoders for various constraint lengths; by increasing the constraint length the error-correction capability can be increased. Performance and error correction also depend on the choice of generator polynomial, and this paper also introduces a generator polynomial with high performance and error-correction capability.
Introduction to Convolutional Codes
Convolutional Encoder Structure
Convolutional Encoder Representations (Vector, Polynomial, State Diagram and Trellis)
Maximum Likelihood Decoder
Viterbi Algorithm
MATLAB Simulation
Hard and Soft Decisions
Bit Error Rate Tradeoff
Consumed Time Tradeoff
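The maximum-likelihood (Viterbi) decoding step from the outline above can be sketched for the same textbook rate-1/2, K = 3, (7, 5) code. This is a hard-decision version with Hamming branch metrics, intended only as an illustration of the algorithm, not any specific design from these papers:

```python
def viterbi_decode(received, g0=0b111, g1=0b101):
    """Hard-decision Viterbi decoder for the rate-1/2, K = 3, (7, 5) code.
    received: flat list of hard bits, two per information bit."""
    n_states = 4  # 2^(K-1) states of the two-bit register
    INF = float("inf")
    metric = [0.0] + [INF] * (n_states - 1)  # encoder starts in state 0
    paths = [[] for _ in range(n_states)]    # survivor path per state
    for t in range(0, len(received), 2):
        r0, r1 = received[t], received[t + 1]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                full = ((s << 1) | b) & 0b111          # 3-bit register content
                c0 = bin(full & g0).count("1") % 2     # expected coded bits
                c1 = bin(full & g1).count("1") % 2
                ns = full & 0b011                       # next state: last two bits
                m = metric[s] + (c0 != r0) + (c1 != r1)  # Hamming branch metric
                if m < new_metric[ns]:                  # keep best survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(n_states), key=lambda s: metric[s])
    return paths[best]
```

On a noiseless channel the survivor with metric zero is the transmitted information sequence; with noise, the decoder returns the sequence closest in Hamming distance, which is the maximum-likelihood estimate for a BSC.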
BCH codes, a subclass of cyclic codes, are very powerful error-correcting codes widely used in information coding. This presentation explains these codes with an example.
15-bit NOVEL Hamming Codec using HSPICE 22nm CMOS Technology based on GDI Tec... (theijes)
The GDI (Gate Diffusion Input) technique allows low power consumption, low propagation delay, and a minimum transistor count (low chip area) in logic design. In this paper a 15-bit novel Hamming codec is proposed. The codec has been simulated in HSPICE using 22nm CMOS technology with several design methodologies, namely TG technology, pass-transistor logic, and the GDI technique, and each design is compared with a 15-bit simple Hamming codec built with the same methodology. The GDI technique provides excellent results in terms of power consumption, chip area, and propagation delay, and the novel Hamming codec also requires fewer transistors than the general Hamming codec.
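To see why a 15-bit Hamming codec can correct any single-bit error, the classic syndrome computation can be sketched as follows. The bit layout here is the standard powers-of-two convention, which may differ from the circuit-level codec in the paper:

```python
def hamming15_syndrome(code):
    """code: 15 bits at 1-indexed positions 1..15. For a valid codeword the
    XOR of the positions of all set bits is 0; after a single bit error it
    equals the error position."""
    syndrome = 0
    for pos, bit in enumerate(code, start=1):
        if bit:
            syndrome ^= pos
    return syndrome

def hamming15_correct(code):
    """Return a copy of code with the single indicated error flipped back."""
    pos = hamming15_syndrome(code)
    fixed = list(code)
    if pos:
        fixed[pos - 1] ^= 1
    return fixed
```

The parity bits sit at positions 1, 2, 4 and 8, so any valid codeword has syndrome 0; flipping one bit perturbs the syndrome by exactly that bit's position.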
International Journal of Engineering Research and Development (IJERD Editor)
Reliability Level List Based Iterative SISO Decoding Algorithm for Block Turb... (TELKOMNIKA JOURNAL)
An iterative Reliability Level List (RLL) based soft-input soft-output (SISO) decoding algorithm is proposed for Block Turbo Codes (BTCs). The algorithm adapts the RLL-based decoding algorithm, a soft-input hard-output algorithm, for the constituent block codes. The extrinsic information is calculated using the reliability of these hard-output decisions and is passed as soft input to the iterative turbo decoding process. RLL-based decoding of the constituent codes estimates the optimal transmitted codeword through a directed minimal search. The proposed RLL-based constituent decoder replaces the Chase-2 based constituent decoder of the conventional SISO scheme. Simulation results show that the proposed algorithm has a clear performance advantage over the conventional Chase-2 based SISO decoding scheme, with reduced decoding latency at lower noise levels.
Study of the Operational SNR while Constructing Polar Codes (IJECEIAES)
Channel coding protects information communicated across an unreliable medium by adding patterns of redundancy into the transmission path. Also referred to as forward error control coding (FECC), the technique is widely used to correct, or at least detect, bit errors in digital communication systems. In this paper we study polar coding, an FECC that has proven to meet the typical use cases of the next-generation mobile standard. This work is motivated by the suitability of polar codes for the coming wireless era. We investigate the performance of polar codes in terms of bit error rate (BER) for several codeword lengths and code rates. We first perform a discrete search to find the best operational signal-to-noise ratio (SNR) at two different code rates while varying the blocklength. Our extensive simulations show that the BER becomes more sensitive to the operational SNR (OSNR) as the blocklength and code rate increase. Finally, we note that increasing the blocklength achieves an SNR gain, while increasing the code rate changes the OSNR domain. This trade-off must be taken into consideration when designing polar codes for high-throughput applications.
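For readers new to polar codes, the encoding itself is a recursive application of the Arikan kernel F = [[1, 0], [1, 1]] over GF(2). The sketch below uses one common recursion, ignores the bit-reversal permutation, and omits frozen-bit selection and SC decoding, which a BER study such as the one in the abstract would also need:

```python
def polar_transform(u):
    """Apply the n-fold Kronecker power of the Arikan kernel to a
    length-2^n binary vector (bit-reversal permutation ignored)."""
    n = len(u)
    if n == 1:
        return u[:]
    half = n // 2
    top = [u[i] ^ u[i + half] for i in range(half)]  # u1 + u2 over GF(2)
    bottom = u[half:]                                # u2 passed through
    return polar_transform(top) + polar_transform(bottom)
```

Since F squared is the identity over GF(2), the transform is its own inverse, which makes a convenient sanity check.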
Performance Analysis of Steepest Descent Decoding Algorithm for LDPC Codes (idescitation)
Among the various hard-decision Bit Flipping (BF) algorithms for decoding Low-Density Parity-Check (LDPC) codes, such as Weighted Bit Flipping (WBF) and Improved Reliability Ratio Weighted Bit Flipping (IRRWBF), the Steepest Descent Bit Flipping (SDBF) algorithm achieves better error performance. In this paper, the performance of the steepest descent algorithm is analysed in both single and multi steepest descent modes, and the performance of the IEEE 802.16e standard is analysed using SDBF decoding. SDBF requires fewer check-node and variable-node operations than the Sum-Product Algorithm (SPA) and the Min-Sum Algorithm (MSA). The multi-mode SDBF achieves a coding gain of 0.1 to 0.2 dB over single-SDBF without requiring complex log and exponential operations.
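The hard-decision bit-flipping family that WBF, IRRWBF and SDBF belong to shares a simple skeleton. The sketch below is plain Gallager-style bit flipping, with a toy parity-check set of our own choosing; the SDBF variant discussed in the paper replaces the flip criterion with a steepest-descent objective:

```python
def bit_flip_decode(H, bits, max_iters=50):
    """H: list of parity checks, each a list of bit indices.
    bits: hard-decision channel output. Flips the bit involved in the
    most failed checks until all checks pass or iterations run out."""
    bits = list(bits)
    for _ in range(max_iters):
        failed = [chk for chk in H if sum(bits[i] for i in chk) % 2]
        if not failed:
            return bits  # all parity checks satisfied
        votes = [0] * len(bits)
        for chk in failed:          # count failed checks touching each bit
            for i in chk:
                votes[i] += 1
        worst = max(range(len(bits)), key=lambda i: votes[i])
        bits[worst] ^= 1            # flip the most-suspect bit
    return bits
```

Each iteration touches only the bits of failed checks, which is why BF-style decoders need far fewer operations per iteration than SPA or MSA.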
Design and Implementation of Encoder for (15, k) Binary BCH Code Using VHDL a... (IOSR Journals)
Abstract: In this paper we have designed and implemented a (15, k) BCH encoder on FPGA using VHDL for reliable data transfer over an AWGN channel with multiple-error correction control. The digital logic implementation of binary encoding for the multiple-error-correcting (15, k) BCH code of length n = 15 over GF(2^4), with irreducible primitive polynomial x^4 + x + 1, is organized into shift-register circuits. Using the cyclic property of the code, the remainder b(x) can be obtained in a linear (15-k)-stage shift register with feedback connections corresponding to the coefficients of the generator polynomial. Three encoders are designed in VHDL to encode the single-, double- and triple-error-correcting (15, k) BCH codes, each corresponding to the coefficients of its generator polynomial. The information bits are transmitted unchanged for the first k clock cycles, during which the parity bits are calculated in the LFSR; the parity bits are then transmitted from clock cycle k+1 to 15. In total, 15-k parity bits and k information bits are transmitted in the 15-bit codeword. We have implemented the (15, 5, 3), (15, 7, 2) and (15, 11, 1) BCH encoders on a Xilinx Spartan 3 FPGA using VHDL, with simulation and synthesis done in Xilinx ISE 13.3. BCH encoders are conventionally implemented with a linear feedback shift register architecture. Encoders for long BCH codes may suffer from large fan-out, which can reduce the achievable clock speed, and the data-rate requirements of optical applications call for parallel implementations of BCH encoders. A comparative performance analysis based on FPGA synthesis and simulation is also presented. Keywords: BCH, BCH encoder, FPGA, VHDL, error correction, AWGN, LFSR, cyclic redundancy checking, fan-out.
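The LFSR encoding described above amounts to polynomial division over GF(2). A behavioural sketch of the (15, 11, 1) single-error-correcting case, whose generator g(x) = x^4 + x + 1 coincides with the primitive polynomial, might look like this in Python; the actual design in the paper is in VHDL:

```python
def gf2_mod(bits, g=0b10011, deg=4):
    """Remainder of the GF(2) polynomial given by `bits` (MSB first)
    divided by g(x) = x^4 + x + 1, computed bit-serially like an LFSR."""
    reg = 0
    for bit in bits:
        reg = (reg << 1) | bit
        if (reg >> deg) & 1:   # degree-4 term present: subtract g(x)
            reg ^= g
    return reg

def bch_15_11_encode(data):
    """data: 11 message bits, MSB first; returns the 15-bit systematic
    codeword m(x) * x^4 + (m(x) * x^4 mod g(x))."""
    rem = gf2_mod(list(data) + [0, 0, 0, 0])  # multiply by x^4, then divide
    parity = [(rem >> i) & 1 for i in (3, 2, 1, 0)]
    return list(data) + parity
```

Appending the four zeros before division mirrors the hardware behaviour of clocking the message through the (15-k)-stage register: the final register contents are exactly the parity bits.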
FPGA Implementation of Efficient Viterbi Decoder for Multi-Carrier Systems (IJMER)
In this paper we are concerned with designing and implementing a convolutional encoder and an Adaptive Viterbi Decoder (AVD), essential blocks in a digital communication system, using FPGA technology. Convolutional coding is a coding scheme used in communication systems for error correction, employed in applications such as deep-space and wireless communications. It provides an alternative to block codes for transmission over a noisy channel: block codes can be applied only to blocks of data, whereas convolutional coding can be applied to both continuous data streams and blocks of data. The Viterbi decoder uses a PNPH (Permutation Network based Path History) management unit, a special path-management unit that gives faster decoding speed with less routing area. The proposed architecture is realized as an Adaptive Viterbi Decoder with constraint length K = 3 and code rate (k/n) of 1/2 using Verilog HDL. Simulation is done using the Xilinx ISE 12.4i design software, targeting a Xilinx Virtex-5 XC5VLX110T FPGA.
Belief Propagation Decoder for LDPC Codes Based on VLSI Implementation (inventionjournals)
Analysis and Implementation of Hard-Decision Viterbi Decoding In Wireless Com... (IJERA Editor)
Turbo codes, which are built from convolutional codes, are known for their error-correction capability. They have even been called super product codes, because they have largely replaced backward error correction codes. Turbo codes are much more efficient than earlier backward error correction schemes because they are forward error correction (FEC) codes: no feedback link is needed to request retransmission when bits are corrupted in the channel. A Viterbi decoder decodes a stream of digital data bits that has been encoded by a convolutional encoder. In this paper we introduce an RSC (Recursive Systematic Convolutional) encoder with a constraint length of 2 and a code rate of 1/3. The RSC encoder and Viterbi decoder are both worked out on paper as well as in MATLAB, and simulation results obtained with MATLAB are presented.
ITERATIVE METHOD FOR IMPROVEMENT OF CODING AND DECRYPTION (IJNSA Journal)
Cryptographic check values (digital signatures, MACs and H-MACs) are useful only if they are free of errors. For that reason, all errors in cryptographic check values should be corrected after transmission over a noisy channel, before verification is performed. Soft Input Decryption is a method that combines SISO convolutional decoding with decryption of cryptographic check values to improve the correction of errors in them. If Soft Input Decryption is successful, i.e. all wrong bits of a cryptographic check value are corrected, these bits are fed back to the channel decoder for the next iteration. The bits of the next iteration are corrected by channel decoding followed by another Soft Input Decryption. Iterative Soft Input Decryption uses interleaved blocks: if one block can be corrected by Soft Input Decryption, the decoding of the interleaved block is improved (serial scheme); if Soft Input Decryption is applied to both blocks and one of them can be corrected, the corrected block is used to improve the decoding of the other (parallel scheme). Both schemes show significant coding gains compared to convolutional decoding without iterative Soft Input Decryption.
An Overview of the ATSC 3.0 Physical Layer Specification (Alwin Poulose)
ATSC 3.0 Physical Layer Specification, IEEE Transactions on Broadcasting, Vol. 62, No. 1, March 2016.
Luke Fay, Lachlan Michael, David Gómez-Barquero, Nejib Ammar, and M. Winston Caldwell
1. ITW2003, Paris, France, March 31 – April 4, 2003
Performance estimation for concatenated coding schemes
Simon Huettinger and Johannes Huber
Chair of Information Transmission, University Erlangen-N¨ rnberg, Germany
u
e-mail: huettinger,huber @LNT.de
Abstract — Asymptotical analysis of concatenated codes U [i], which is the a–posteriori probability taking the received
with EXIT charts [tB99] or the AMCA [HH02b] is proven vector Y and code constraints into account, i.e.,
to be a powerful tool for the design of power–efficient com-
def
munication systems. But, usually the result of the asymp- V [i] = Pr U [i] = 0 Y . (1)
totical analysis is a binary decision, whether convergence
of iterative decoding is possible at the chosen signal–to– ˆ
Estimated symbols U [i] can be obtained from the vector of
noise ratio, or not. soft–output values V .
In this paper it is shown how to obtain the Information
Processing Characteristic (IPC) introduced in [HHJF01] II. I NFORMATION P ROCESSING C HARACTERISTICS
for concatenated coding schemes. If asymptotical anal- The Information Processing Characteristic [HHJF01] for
ysis is performed under the assumption of infinite inter- symbol–by–symbol decoding and Interleaving
leaving and infinitely many iterations, this IPC will be a
K
lower bound. Furthermore, it also is possible to estimate def ¯ def 1
the performance of realistic coding schemes by restricting IPCI (C) = I(U ; V ) = I(U [i]; V [i]) (2)
K i=1
the number of iterations.
Finally, the IPC can be used to estimate the resulting characterizes a coding scheme w.r.t. soft–output, i.e. IPCI is
bit error ratio for the concatenated coding scheme. As an the capacity of the memoryless end–to–end channel from U to
upper and a lower bound on the bit error ratio for a given soft–output V .
IPC exist, we are able to lower bound the performance of In [HHFJ02] we proved by information theoretic bounding
any concatenated coding scheme and give an achievability that the IPC of any coding scheme can be upper bounded by:
bound, i.e. it is possible to determine a performance that
can surely be achieved if sufficiently many iterations are IPCI (C) min (C/R, 1) . (3)
performed and a large interleaver is used.
A coding scheme fulfilling (3) with equality is called ideal
I. S YSTEM M ODEL coding scheme.
In the following we analyze the properties of a digital com- The IPCI is important for two reasons. Firstly, the charac-
munications system consisting of a binary Bernoulli source, terization w.r.t. soft–output is very helpful for the analysis and
a channel coder, a channel, a decoder and a sink. Without comparison of coding schemes, which will be used as com-
loss of generality we assume, that the source emits a block ponents of concatenated codes [HHJF01]. Secondly, the IPCI
of K binary information symbols U [i], i ¾ 1, 2, ¡ ¡ ¡ , K . can be a result of a convergence analysis performed with EXIT
The encoder maps the information vector U to a codeword charts [tB99] or the AMCA [HH02b].
X which consists of N symbols X[n], n ¾ 1, 2, ¡ ¡ ¡ , N . In the following we will show how the IPCI can be obtained
The rate of the code, which is supposed to be time–invariant, for concatenated coding schemes and a relationship between
is R = K/N measured in bit per channel symbol. The code- ˆ
the IPCI and the bit error ratio of the hard–output U [i] will be
word X is transmitted over a memoryless channel that corrupts derived.
the message by substitution errors, e.g., the binary symmetric
channel (BSC) or the additive white Gaussian noise channel III. I NFORMATION P ROCESSING C HARACTERISTIC AS
(AWGN Channel). Modulator and demodulator are consid- R ESULT OF A SYMPTOTICAL A NALYSIS
ered as being part of the channel. To obtain the Information Processing Characteristic for
Additionally we introduce an (theoretically infinite) inter- symbol–by–symbol decoding and interleaving IPCI (C),
leaver π½ before encoding that converts the end–to–end chan- firstly we have to determine the mutual information between
nel between U and V to a memoryless channel. the source symbols U and the post-decoding soft–output V of
the decoder using EXIT charts or the AMCA. It is possible
to obtain both, a lower bound achieved by infinite interleav-
ing and infinitely many iterations as well as estimations of the
mutual information after an arbitrary number of iterations. As
long as thereby the number of iterations is restricted such that
Figure 1: System model. the cycles in the graph of the code do not dominate the de-
coding performance the result will be close to bit error per-
The corrupted received sequence Y is processed by the de- formance that can be measured if the whole coding scheme is
coder. The decoder output is the soft–output w.r.t. symbol simulated.
Results or intermediate results of EXIT charts and the AMCA are the mutual information between the source symbols U in parallel concatenation, or the encoded symbols of the outer encoder X in serial concatenation, and the respective extrinsic soft–output at the decoder side, Z resp. Q. The post–decoding information V, which is the final result at the output of an iterative decoder, is created by maximum ratio combining [Bre59] of the extrinsic informations of all constituent decoders on a symbol basis. This can be modelled statistically by information combining [HH02c].
For serial concatenation we also have to assume systematic encoding of the outer code, to ensure that the post–decoding mutual information w.r.t. the info bits U is the same as the post–decoding mutual information w.r.t. the code bits X of the outer encoder.
As an example, the IPCI (C) for the serial concatenation of Fig. 2 will be determined in the following.

Figure 2: Encoder for serial concatenation of a rate–1/2 MFD convolutional code of memory ν = 1 with a ν = 2 scrambler (Gr = 07, G = 01).

As the concatenation is of extremely low complexity, it can be assumed that even in practical implementations the limit of an infinite number of iterations will be closely approximated. Hence, we first determine the intersection points of the transfer characteristics within EXIT charts for the range of signal–to–noise ratios. Then the post–decoding mutual information is calculated using information combining.

Figure 3: EXIT charts for the concatenation of Fig. 2 (I(U; E) = I(X; Y) versus I(U; Z) = I(X; Q), in bit per symbol, for 10 log10(Es/N0) = −1.5 dB, −2.5 dB and −3 dB).

Fig. 3 shows some EXIT charts used to determine the IPCI (C) for the concatenation of Fig. 2. Circles mark the intersection points of the transfer characteristics. Assuming infinitely many iterations, the decoding process gets stuck exactly at these points. Hence, from the abscissa and ordinate values of these points a lower bound on the IPCI (C) can be obtained. This asymptotical IPCI (C) is shown in Fig. 4.

Figure 4: IPCI (C) [bit per source symbol] versus C [bit per channel symbol] for the concatenation of Fig. 2 ("Ser. Concat."), obtained by EXIT charts, for 10 log10(Es/N0) = −1.5 dB, −2.5 dB and −3 dB; the curves of ideal coding schemes are indicated. For comparison also the IPCI (C) of a ν = 8 convolutional code ("ν = 8 CC") is given.

For a realistic coding scheme 25 iterations are sufficient to closely approximate this behavior. In every iteration the outer decoder decodes a 2–state trellis of length K, and the inner one visits 4 states in every of the 2K trellis segments. Hence, the decoding complexity then is 25 · (2 + 2 · 4) = 250 visited states per decoded bit, which is approximately the complexity of decoding a memory ν = 8 convolutional code. For comparison, the IPCI (C) of a ν = 8 convolutional code is also plotted in Fig. 4. This IPCI (C) is directly obtained via Monte Carlo simulation.
Obviously, there is a substantial difference in the behavior of the two coding schemes. The concatenation shows the turbo–cliff, which is typical for iteratively decoded concatenations, whereas the convolutional code shows a more constant improvement of its output as the channel capacity is raised. Hence, if high mutual information (e.g., I(U; V) > 0.999) is required, which corresponds to a low bit error ratio, the concatenation outperforms the convolutional code of equal complexity.

IV. BOUNDING BIT ERROR PROBABILITY BY CAPACITY

Although there is no direct relationship between the probability of bit error and the capacity of a channel, an upper and a lower bound can be given. Furthermore, as channels are known which satisfy these bounds with equality, they are tight.
We consider memoryless symmetric channels with binary input U ∈ {0, 1} and discrete or continuous output alphabet V. The capacity

I(U; V) = H(U) − H(U|V)   (4)

is achieved by equiprobable signaling, i.e. H(U) = 1.
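For a concrete instance of (4), the mutual information of a binary symmetric channel with equiprobable input can be evaluated directly from the joint distribution and checked against the closed form 1 − e2(eps). The following sketch is an illustration added here, not part of the original analysis; the crossover probability eps = 0.11 and the helper names are arbitrary choices.

```python
import math

def e2(x):
    # binary entropy function, in bits
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def mutual_information_bsc(eps):
    """I(U;V) for a BSC with crossover eps and equiprobable input,
    computed directly from the joint distribution p(u, v)."""
    joint = {(u, v): 0.5 * (eps if u != v else 1 - eps)
             for u in (0, 1) for v in (0, 1)}
    pv = {v: joint[(0, v)] + joint[(1, v)] for v in (0, 1)}
    # I(U;V) = sum p(u,v) log2( p(u,v) / (p(u) p(v)) ), with p(u) = 1/2
    return sum(p * math.log2(p / (0.5 * pv[v]))
               for (u, v), p in joint.items() if p > 0)

eps = 0.11
print(mutual_information_bsc(eps))  # ≈ 0.5001
print(1 - e2(eps))                  # closed form, same value
```

Since H(U) = 1 for equiprobable input and H(U|V) = e2(eps) for the BSC, the two computations agree exactly.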
With Fano's inequality [Fan61], which reads

e2(BER) ≥ H(U|V) = 1 − I(U; V),   (5)

we have a lower bound on the probability of error, BER ≥ e2^(−1)(1 − I(U; V)). Here, e2(·) denotes the binary entropy function

e2(x) := −x log2(x) − (1 − x) log2(1 − x),   (6)

x ∈ (0, 1), and e2^(−1) is its inverse for x ∈ (0, 1/2).
This minimum bit error ratio is achieved by a Binary Symmetric Channel (BSC). Hence, all channels with a larger output alphabet have no lower probability of error at the same capacity.
To derive a lower bound on the capacity [HR70] we need the a–posteriori probability of U = 0 having received V = v:

p = Pr(U = 0 | V = v)   (7)

and the hard decision

Û = 1 for p ≤ 0.5,  Û = 0 for p > 0.5.   (8)

There is an equivalent binary symmetric channel from U to Û. The crossover probability of this channel is equal to the a–posteriori probability p, as uniform signaling is assumed. Depending on the actually received V = v and the actual hard decision, which deterministically depends on v, a bit error occurs with probability

Pb = Pr(U = 0 | V = v) for Û = 1, and Pr(U = 1 | V = v) for Û = 0
   = p for Û = 1, and 1 − p for Û = 0
   = p for p ≤ 0.5, and 1 − p for p > 0.5
   = min[p, 1 − p].   (9)

The channel's bit error ratio BER is given by the expectation over the bit error probability Pb of the actual channels:

BER = E{Pb} = E{min[p, 1 − p]}.   (10)

(10) is a fundamental result for simulations. If, instead of counting the actually occurring error events during a simulation of the transmission, which is the classical method to determine the bit error probability of a coding scheme or a channel, (10) is evaluated for every transmitted symbol, the variance of the estimated bit error ratio is significantly smaller [HLS00]. Furthermore, (10) can be used as a test to determine whether the reliability estimation of an algorithm yields the true a–posteriori probability [HR90]. If the BER determined by the two different methods does not coincide, the soft–output of the investigated algorithm is a suboptimum reliability measure.
(10) describes two line segments. For any p ∈ [0, 1] the expression min[p, 1 − p] can be upper bounded by (1/2) e2(p), as e2(p) is a strictly concave function and the line segments touch (1/2) e2(p) at p = 0, 1/2, 1, where (1/2) e2(p) = 0, 1/2, 0, see Fig. 5:

min[p, 1 − p] ≤ (1/2) e2(p).   (11)

Figure 5: Graphs of min[p, 1 − p] and (1/2) e2(p) over p ∈ [0, 1].

For any given channel output V = v the entropy of the binary variable U is given by

e2(p) = H(U | V = v),   (12)

as U = 0 occurs with probability p and U = 1 occurs with probability 1 − p. Hence, the average entropy of U given V can be expressed as the expectation over the binary entropy function of the a–posteriori probability p:

H(U|V) = E{H(U | V = v)}.   (13)

Inserting into (4) yields

I(U; V) = 1 − H(U|V)
        = 1 − E{H(U | V = v)}
        = 1 − E{e2(p)}
        ≤ 1 − 2 E{min[p, 1 − p]}
        = 1 − 2 · BER.   (14)

Hence, the bit error ratio can be upper bounded by

BER ≤ (1/2) (1 − I(U; V))   (15)

or equivalently

I(U; V) ≤ 1 − 2 · BER.   (16)

(16) is satisfied with equality for a Binary Erasure Channel (BEC): on average an erasure results in half a bit error, ER/2 = BER. Hence, the BEC is the worst case channel, as it maximizes the probability of error for a given capacity.
Both bounds are shown in Fig. 6. As they directly correspond to a BSC resp. a BEC, they are tight in the whole range of I(U; V) ∈ [0, 1].

V. ESTIMATION OF BIT ERROR PROBABILITY OF CONCATENATED CODING SCHEMES

After having determined the IPCI (C) for the concatenations, we are able to lower bound the achievable bit error ratio using (5). Furthermore, it is possible to give an achievability bound by (15). If sufficiently large interleavers are used and sufficiently many iterations are performed, it can be expected that the simulation results for the concatenation lie between the bounds.
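The two bounds are easy to evaluate numerically: the lower bound follows from (5) as BER ≥ e2^(−1)(1 − I(U; V)), inverting the binary entropy function on [0, 1/2], and the upper bound from (15) as BER ≤ (1 − I(U; V))/2. The sketch below is an added illustration, not code from the paper; the bisection tolerance and the example value I(U; V) = 0.99 are arbitrary choices.

```python
import math

def e2(x):
    # binary entropy function, in bits
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def e2_inv(y, tol=1e-12):
    """Inverse of e2 on [0, 1/2], found by bisection
    (e2 is strictly increasing on this interval)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if e2(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def ber_bounds(mutual_info):
    """Lower bound (BSC case, via Fano) and upper bound (BEC case)
    on the bit error ratio for a given mutual information I(U;V)."""
    lower = e2_inv(1.0 - mutual_info)        # from (5)
    upper = 0.5 * (1.0 - mutual_info)        # from (15)
    return lower, upper

lo, up = ber_bounds(0.99)
print(lo, up)  # lower ≈ 8.6e-4, upper = 5.0e-3
```

As in Fig. 6, the gap between the two bounds shrinks as I(U; V) approaches 1, which is why the estimate is most useful in the turbo-cliff region.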
Figure 6: Upper and lower bound on the bit error probability of a binary input channel (BER versus I(U; V) [bit per symbol]).

Fig. 7 shows a comparison of bit error performance simulation results with the bounds for the investigated concatenation of Fig. 2. A block length of K = 100000 resp. N = 200000 has been chosen. 25 iterations are sufficient; more iterations would not significantly improve the bit error performance.

Figure 7: Comparison of the BER obtained via simulation and the estimation from the IPC for the concatenation of Fig. 2 (BER versus 10 log10(Eb/N0) [dB]; upper bound, simulation, lower bound).

There are two main observations. Firstly, the predictions of the bit error ratio via asymptotical analysis, the IPCI (C) and the bounds on the bit error probability of memoryless channels, are very close to the bit error ratios observed in simulations, which are much more complex to perform. Secondly, the more the turbo–cliff is pronounced by the concatenation, the closer the bounds become, and hence the technique becomes the more valuable for concatenations that are difficult to simulate.
But Fig. 7 also shows that even for constituent codes of small memory, which have a relatively small decoding horizon, quite large interleavers are needed to approximate infinite interleaving. A block length of K = 100000 is not sufficient to be below the achievability bound for all signal–to–noise ratios.

VI. CONCLUSIONS

The information processing characteristic IPCI (C) of concatenated coding schemes can be directly obtained from asymptotical analysis. Without simulation of the iterative decoding process, which is quite complex, it is possible to entirely characterize the behavior of a coding scheme.
As the bit error ratio, which is the important performance measure for applications, can also be approximately determined from the IPCI (C), this analysis is sufficient to decide whether a coding scheme is appropriate for the intended application.
For theoretical considerations the IPCI (C) has further advantages. As it characterizes a coding scheme w.r.t. soft–output, and has a scaling that magnifies differences between coding schemes operated below capacity, resulting in bit error ratios close to 50%, it gives much more insight than a bit error ratio curve.

REFERENCES

[Bre59] D. G. Brennan. Linear diversity combining techniques. In Proceedings of the IRE, vol. 47, pp. 1075–1102, Jun. 1959.
[tB99] S. ten Brink. Convergence of iterative decoding. IEE Electronics Letters, vol. 35, no. 10, pp. 806–808, May 1999.
[tB00b] S. ten Brink. Iterative Decoding Trajectories of Parallel Concatenated Codes. In Proceedings of the 3rd ITG Conference on Source and Channel Coding, pp. 75–80, Munich, Germany, Jan. 2000.
[Fan61] R. M. Fano. Transmission of Information: A Statistical Theory of Communication. John Wiley & Sons, Inc., New York, 1961.
[HR70] M. E. Hellman and J. Raviv. Probability of Error, Equivocation, and the Chernoff Bound. IEEE Transactions on Information Theory, vol. 16, no. 4, pp. 368–372, Jul. 1970.
[HLS00] P. Hoeher, I. Land and U. Sorger. Log-Likelihood Values and Monte Carlo Simulation – Some Fundamental Results. In Proceedings of the International Symposium on Turbo Codes, pp. 43–46, Brest, France, Sept. 2000.
[HR90] J. B. Huber and A. Rueppel. Zuverlässigkeitsschätzung für die Ausgangssymbole von Trellis-Decodern [in German]. AEÜ Int. J. Electron. Commun., no. 1, pp. 8–21, Jan. 1990.
[HHJF01] S. Huettinger, J. B. Huber, R. Johannesson and R. Fischer. Information Processing in Soft-Output Decoding. In Proceedings of the 39th Allerton Conference on Communications, Control and Computing, Oct. 2001.
[HHFJ02] S. Huettinger, J. Huber, R. Fischer and R. Johannesson. Soft-Output-Decoding: Some Aspects From Information Theory. In Proceedings of the 4th ITG Conference on Source and Channel Coding, pp. 81–89, Berlin, Jan. 2002.
[HH02b] S. Huettinger and J. Huber. Design of "Multiple-Turbo-Codes" with Transfer Characteristics of Component Codes. In Proceedings of the Conference on Information Sciences and Systems (CISS 2002), Princeton, Mar. 2002.
[HH02c] S. Huettinger and J. Huber. Extrinsic and Intrinsic Information in Systematic Coding. In Proceedings of the International Symposium on Information Theory 2002, Lausanne, Jul. 2002.
[WFH99] U. Wachsmann, R. Fischer and J. B. Huber. Multilevel codes: Theoretical concepts and practical design rules. IEEE Transactions on Information Theory, vol. IT-45, pp. 1361–1391, Jul. 1999.