Comparative Analysis of Error Correcting Code Algorithms used in WiMax Communications
William R. Chipman Jr., Member, IEEE
wrchipman@gmail.com
CS530DL Spring 2010 Term Paper, Colorado State University
Abstract—Wireless communications have become an integral part of modern computing. These communication methods range from the 802.11 connection between a computer and a router to deep-space data transmissions from exploratory spacecraft. For all of these transmission methods, fault tolerance is extremely important. For short-distance transmissions, error detection is enough, but for long-distance transmissions, error detection and correction are required.
Error-correcting codes are the means by which we compensate for the corruption that occurs in communication over imperfect channels [17]. In a basic system, the message to be sent is passed through an encoder; the encoded message is referred to as a codeword. The system then transmits the codeword. Due to noise on the channel, the received codeword may differ from the sent codeword. The received word is passed through a decoder, and the redundant symbols that are part of the codeword but not part of the original message are used by the receiver to ‘guess’ the correct original message.
The most recent mobile broadband medium is the standard described by IEEE 802.16: Worldwide Interoperability for Microwave Access (WiMAX). WiMAX is a standard similar to Wi-Fi but on a much larger scale. While the WiMAX standard is designed for both mobile and fixed stations, error-correcting code (ECC) is most relevant to the implementation of wireless metropolitan area networks (MANs) in major cities that allow subscribers high-speed, reliable access to the Internet anywhere.
I. INTRODUCTION
Wireless communications have become an integral part of modern computing. These communication methods range from the 802.11 connection between a computer and a router to deep-space data transmissions from exploratory spacecraft. For all of these transmission methods, fault tolerance is extremely important. For short-distance transmissions, error detection is enough, but for extremely long-distance transmissions, error detection and correction are required. Mobile broadband connections have become an important part of business and science. The most recent mobile broadband medium is the standard described by IEEE 802.16: Worldwide Interoperability for Microwave Access (WiMAX). WiMAX is a standard similar to Wi-Fi but on a much larger scale. While the WiMAX standard is designed for both mobile and fixed stations, error-correcting code (ECC) is most relevant to the implementation of wireless metropolitan area networks (MANs) in major cities that allow subscribers high-speed, reliable access to the Internet anywhere.
Error-correcting codes are the means by which we compensate for the corruption that occurs in communication over imperfect channels [17]. In a basic system, the message to be sent is passed through an encoder; the encoded message is referred to as a codeword. The system then transmits the codeword. Due to noise on the channel, the received codeword may differ from the sent codeword. The received word is passed through a decoder, and the redundant symbols that are part of the codeword but not part of the original message are used by the receiver to ‘guess’ the correct original message.
This paper will focus on three major error-correcting code (ECC) algorithms: Reed-Solomon error correction, Turbo codes, and low-density parity-check (LDPC) codes. The analysis of these ECC algorithms will include a thorough analysis of the mechanisms used to produce the codes. The comparative performance evaluation will be in reference to each ECC algorithm's performance in a WiMAX environment.
Section II will be a detailed description of the three
ECC algorithms. Section III will be a comparative analysis
of the performance of the three algorithms in a simulated
WiMax environment. Section IV will be the conclusion
with opinions on the effectiveness of the algorithms.
II. DESCRIPTION OF ALGORITHMS
A. Reed Solomon Codes
Reed Solomon (RS) codes were invented by Irving S. Reed and Gustave Solomon in 1960. Digital technology was not sufficiently advanced to put the concept into practice until the early 1980s. RS codes were first used in the error-correcting features of compact discs and were carried forward into DVD technology. RS codes have also proven useful in many different data transmission areas, including deep-space transmissions such as those from the Voyager space probes and Earth-bound satellite transmissions.
Reed Solomon codes work by creating a polynomial representation of the data to be transmitted and oversampling the polynomial so that more data points are transmitted than are in the original data. The receiver can then reconstruct the original data as long as enough extra data is sent to offset the number of errors. RS codes are block codes, meaning that fixed blocks of data are processed and output. The most common block size is a (255, 223) symbol block with each symbol being eight bits; the eight-bit symbol size maps naturally onto byte-oriented digital systems. In a (255, 223) encoding, 223 symbols are encoded into 255 symbols, and up to 16 symbol errors can be corrected.
The definition of a Reed Solomon code is based on the block size, represented as (n, k). For integers 1 ≤ k ≤ n, a field F of size |F| ≥ n, and a set S = {α_1, α_2, …, α_n} ⊆ F, the RS code is

    RS_{F,S}[n, k] = { (p(α_1), p(α_2), …, p(α_n)) ∈ F^n : p ∈ F[X] is a polynomial of degree ≤ k − 1 } [7].

Based on this definition, to encode a message m = (m_0, m_1, …, m_{k−1}) ∈ F^k, the polynomial will be

    p(X) = m_0 + m_1·X + … + m_{k−1}·X^{k−1} ∈ F[X] [7].

The polynomial is then evaluated at the points α_1, α_2, …, α_n to generate the codeword that corresponds to m. Equivalently, the codeword is obtained by multiplying the message vector m by the n × k Vandermonde matrix

    G = | 1  α_1  α_1^2  …  α_1^{k−1} |
        | 1  α_2  α_2^2  …  α_2^{k−1} |
        | ⋮                           |
        | 1  α_n  α_n^2  …  α_n^{k−1} |

The matrix G is the generator matrix for RS_{F,S}[n, k] [7].
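The evaluation-map encoding above can be sketched in a few lines of Python. This is an illustrative toy, not a production encoder: it works over the prime field F_929 for readability (real WiMAX RS codes use GF(2^8) symbol arithmetic), and the field, block size, and evaluation set S = {1, …, n} are assumptions of the sketch.

```python
# Toy Reed-Solomon encoder over a prime field F_p (illustrative only;
# practical RS systems use GF(2^8) arithmetic, which is assumed away here).
P = 929  # a prime, so the integers mod P form a field

def rs_encode(message, n, p=P):
    """Evaluate the message polynomial p(X) = m0 + m1*X + ... at n points."""
    k = len(message)
    assert 1 <= k <= n <= p
    alphas = range(1, n + 1)              # evaluation set S = {1, 2, ..., n}
    def poly_eval(x):
        acc = 0
        for coeff in reversed(message):   # Horner's rule, mod p
            acc = (acc * x + coeff) % p
        return acc
    return [poly_eval(a) for a in alphas]

msg = [3, 2, 1]                 # k = 3 message symbols: p(X) = 3 + 2X + X^2
codeword = rs_encode(msg, n=7)  # n = 7 codeword symbols
t = (7 - len(msg)) // 2         # error-correcting capability: t = (n - k) // 2
print(codeword, "can correct up to", t, "symbol errors")
```

The same t = (n − k)/2 arithmetic gives the 16-symbol correction capability of the (255, 223) block cited above.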
The error-correcting ability of a Reed Solomon code is measured by the redundancy in the block, n − k. There are two problems that RS codes can address: erasures and errors. Erasures are errors whose locations are known, typically because extra information about the link identifies likely error positions in advance. Errors are corruptions of the data that occur at random, unknown positions. RS codes can correct twice as many erasures as errors. Additionally, because RS codes operate on symbols instead of bits, any number of error bits counts as a single error so long as they all fall within one symbol. This makes Reed Solomon codes extremely useful in environments with bursts of errors rather than random single-bit errors.
Modern Reed Solomon applications in long-distance data transmission often pair the RS code with Viterbi-decoded convolutional coding, because the Viterbi decoder tends to produce errors in small bursts. While this ECC has been in wide use for several decades and remains so, it is now being replaced by Turbo codes in many applications that do not require immediate decoding.
B. Turbo Codes
The next generation of ECC took hold in 1993 with the development of Turbo codes, first introduced by Berrou, Glavieux and Thitimajshima. While many different techniques have been developed in the years since the 1993 introduction, all share the same general characteristics. The first common characteristic is a composite structure: the information bits are encoded with multiple component codes [6]. The second is interleaving of the components: the components are reordered prior to transmission. The third is soft iterative decoding: the components are decoded several times, and each time the results are compared until a consensus is reached on the correct value of the bits.

Fig. 1 Parallel RSC Turbo encoder.
A classic Turbo code system consists of a parallel hardware-based encoder (fig. 1) and a serial hardware-based decoder (fig. 2). The encoder generates three blocks of bits from the original data. The first block is the m-bit block of payload data: the subset of the total data being encoded. The second block is n/2 parity bits for the payload data, computed using a recursive systematic convolutional (RSC) code. The third block is another n/2 parity bits for a known permutation of the payload data [6]. The three sub-blocks combine into a block of size m + n, which the interleaver assembles into the final data package for transmission.
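The encoder structure above (systematic bits plus two RSC parity blocks, one computed over an interleaved copy of the data) can be sketched as follows. The RSC generator polynomials (7, 5 in octal) and the seeded pseudo-random interleaver are illustrative choices for the sketch, not the constituent code specified for WiMAX.

```python
# Sketch of a parallel turbo encoder: one m-bit systematic block plus two
# parity blocks from identical RSC encoders, the second fed a permuted
# (interleaved) copy of the payload.  Polynomials and interleaver are
# illustrative assumptions, not the WiMAX-specified ones.
import random

def rsc_parity(bits):
    """Recursive systematic convolutional code: feedback 1+D+D^2 (7 octal),
    feed-forward 1+D^2 (5 octal), two-bit shift register."""
    r1 = r2 = 0
    parity = []
    for u in bits:
        fb = u ^ r1 ^ r2          # recursive feedback into the register
        parity.append(fb ^ r2)    # feed-forward parity output
        r1, r2 = fb, r1           # shift the register
    return parity

def turbo_encode(data, seed=42):
    rng = random.Random(seed)
    perm = list(range(len(data)))
    rng.shuffle(perm)                          # toy pseudo-random interleaver
    interleaved = [data[i] for i in perm]
    return data, rsc_parity(data), rsc_parity(interleaved)

sys_bits, p1, p2 = turbo_encode([1, 0, 1, 1, 0, 0, 1, 0])
# Overall rate ~ 1/3: m systematic bits plus two m-bit parity blocks.
```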
On the receiving side, the decoder is built in a complementary fashion to the encoder. It consists of multiple decoders interconnected in a serial fashion with interleavers. The decoders operate on the incoming data to issue soft decisions on what the value of each bit should be. The data is compared and, if possible, a hard decision is made. If a hard decision is not possible, the soft decisions are fed back to the decoders and further soft decisions are made. This recursive cycle of soft decisions, comparisons, and either a hard decision or re-evaluation continues until a clear hard decision can be made.

Fig. 2 Serial Turbo decoder corresponding to fig. 1 encoder
Turbo codes perform well against many different error patterns and were the first practical ECC to approach the Shannon limit. The ability to achieve this performance in a full hardware implementation has led to the widespread deployment of Turbo codes in many telecommunication venues. While Turbo codes have been popular for many years, low-density parity-check codes are gaining ground in popularity.
C. Low Density Parity Check Codes
Low Density Parity Check (LDPC) codes were first developed by Robert Gallager in 1963. Because they were impractical to implement at the time, they were essentially forgotten. In the late 1990s, implementation became feasible on modern systems and LDPC codes were essentially rediscovered. Although LDPC codes could now be implemented, Turbo codes remained the encoding of choice in most long-distance communication channels for most of the 1990s.
A low-density parity-check code is an ECC that is decodable in near-linear time. Because of this near-linear-time decoding, LDPC codes allow correction in systems where the noise level approaches the Shannon limit. LDPC codes are defined by a parity-check matrix that is often randomly generated; for example, an LDPC code with codeword size N = 8 and rate 1/2 can be specified by a 4 × 8 parity-check matrix. The main idea of the parity check equation is that, for a valid codeword, the modulo-2 sum of the adjacent bits of every check node must be zero [11]. In graph notation (Fig. 3), a bipartite (Tanner) graph can be used to represent the parity-check matrix.
Fig. 3 Bipartite Graph of LDPC Decoder Structure
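The check-node constraint just described (every check's modulo-2 sum over its adjacent bits must be zero) can be illustrated with a small example. The 4 × 8 matrix H below is invented for this sketch of an N = 8, rate-1/2 code; it is not the matrix used in [11].

```python
# Illustrative rate-1/2 parity-check matrix for N = 8 (4 checks, 8 bits).
# This H is made up for the sketch, not the one from [11].
import itertools

H = [
    [1, 1, 0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1, 1],
]

def is_codeword(bits, H=H):
    """Valid iff every check node's modulo-2 sum over its adjacent bits is 0."""
    return all(sum(h * b for h, b in zip(row, bits)) % 2 == 0 for row in H)

# Brute force over all 2^8 words: the valid set has size 2^(n - rank(H)).
codewords = [c for c in itertools.product((0, 1), repeat=8) if is_codeword(c)]
print(len(codewords), "valid codewords")
```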
The decoder’s purpose is to decide the values of the transmitted bits. As seen in figure 3, bit nodes and check nodes communicate to make those decisions. The check nodes use parity check equations to update the bit node information and send it back. At this point, the bit nodes perform a soft majority vote and, if possible, a hard decision is made. If all the bits satisfy the parity check equations, the result is a valid codeword; otherwise the bit nodes continue the soft majority voting, with the results sent back to the check nodes.
There are four main steps in the LDPC decoding sequence: initialization, check node update, bit node update, and hard decision making. For initialization, let x represent the transmitted binary phase shift keying (BPSK) symbol and let y represent the noisy received symbol. With z a Gaussian random variable with zero mean,

    y = x + z.

Assume that x = +1 when the bit is 0 and x = −1 when the bit is 1.
Let

    u = log [ p(x = +1 | y) / p(x = −1 | y) ]

denote the log-likelihood ratio for the transmitted bit. The sign of u signals the hard decision on the transmitted bit, while the magnitude indicates the reliability of that decision. Decoding begins by assigning a likelihood value to all the outgoing edges of every bit node [5].
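For BPSK over an additive white Gaussian noise channel with noise variance σ² and equal priors, the likelihood ratio above reduces to the closed form u = 2y/σ². A minimal sketch, assuming an illustrative noise variance:

```python
def bpsk_llr(y, sigma2):
    """LLR u = log[p(x=+1|y)/p(x=-1|y)]; for BPSK over AWGN with equal
    priors this reduces to 2*y/sigma^2 (sigma^2 is the noise variance)."""
    return 2.0 * y / sigma2

# Illustrative received samples and noise variance, not from the cited papers.
for y in (0.9, -1.1, 0.05):
    u = bpsk_llr(y, sigma2=0.5)
    hard = 0 if u >= 0 else 1           # sign of u gives the hard decision
    print(f"y={y:+.2f}  u={u:+.2f}  bit={hard}  reliability={abs(u):.2f}")
```

Note how the marginal sample near zero yields a small-magnitude (unreliable) LLR, which is exactly the soft information the message-passing decoder exploits.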
During the check node update phase, each outgoing message from a check node to its adjacent bit nodes is computed. This computation is essentially a hard decision based on the modulo-2 sum of all the bits that participate in the same parity check equation. Because of this calculation, the hard decision can only be as reliable as the least reliable bit in the modulo-2 sum [5].
The bit node update phase is when the soft decision is made at the bit node, based on the messages from the adjacent check nodes. If the outgoing messages of bit node n are denoted v_{n→k_1}, v_{n→k_2}, …, v_{n→k_{d_v}}, then each is computed as

    v_{n→k_i} = u_n + Σ_{j≠i} w_{k_j→n}.

Therefore the soft majority vote on the value of bit n is based on all available information except w_{k_i→n} [5].
Once all of the updates described above are computed and passed back to the nodes, the hard decision for each bit n is made by looking at the sign of v_{n→k_i} + w_{k_i→n} for any k_i [5]. If the hard decision meets all parity check requirements, a valid codeword has been found and processing stops. If the codeword is not valid, the cycle repeats. This continues until a valid codeword is found or a set number of attempts has been made. If no valid codeword is found, the most recent invalid codeword is output and a failure is declared.
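The four steps above (initialization, check node update, bit node update, hard decision) can be sketched as a message-passing loop. The sketch uses the min-sum approximation of the check node update (sign product and minimum magnitude), a common simplification of the sum-product rule in [5]; the 4 × 8 parity-check matrix, the channel LLRs, and the iteration cap are all invented for illustration.

```python
# Min-sum LDPC decoding sketch: initialization, check-node update,
# bit-node update, hard decision.  H and the channel LLRs are
# illustrative assumptions, not the code or data from [5] or [11].
H = [
    [1, 1, 0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1, 1],
]

def decode(u, H=H, max_iters=20):
    checks = [[n for n, h in enumerate(row) if h] for row in H]       # bits per check
    nbr = [[k for k, row in enumerate(checks) if n in row] for n in range(len(u))]
    w = {(k, n): 0.0 for k, row in enumerate(checks) for n in row}    # check -> bit msgs
    bits = [0 if x >= 0 else 1 for x in u]
    for _ in range(max_iters):
        # Bit node update: v[n->k] = u[n] + sum of incoming w[j->n], j != k.
        v = {(k, n): u[n] + sum(w[j, n] for j in nbr[n] if j != k)
             for k, row in enumerate(checks) for n in row}
        # Check node update (min-sum): sign product and minimum magnitude
        # over the *other* bits in the same parity check equation.
        for k, row in enumerate(checks):
            for n in row:
                others = [v[k, m] for m in row if m != n]
                sign = -1.0 if sum(x < 0.0 for x in others) % 2 else 1.0
                w[k, n] = sign * min(abs(x) for x in others)
        # Hard decision on the total LLR; stop when every check is satisfied.
        total = [u[n] + sum(w[k, n] for k in nbr[n]) for n in range(len(u))]
        bits = [0 if t >= 0 else 1 for t in total]
        if all(sum(bits[n] for n in row) % 2 == 0 for row in checks):
            return bits, True
    return bits, False

# All-zeros codeword sent as +1s; one sample pushed negative by noise.
llrs = [4.0, 4.0, 4.0, 4.0, -1.0, 4.0, 4.0, 4.0]
bits, ok = decode(llrs)
```

In this toy run the unreliable fourth sample is outvoted by the two parity checks it participates in, and the decoder converges to the all-zeros codeword.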
The code rate of an LDPC code determines the number of message bits relative to the total number of bits transmitted, designated as the ratio k/n. For example, a (6, 3) code encodes 3 message bits into a 6-bit codeword. Code rates usually range from 1/4 up to 9/10. The code rate chosen depends on multiple factors, including bandwidth utilization requirements and the decoding complexity that can be tolerated.
III. COMPARATIVE ANALYSIS OVER WIMAX
Mobile broadband is high-speed, high-throughput wireless network connectivity that offers portability and mobility. The IEEE 802.16 Wireless Metropolitan Area Network (MAN) standard describes Worldwide Interoperability for Microwave Access (WiMAX) and is designed to provide standardized access for both fixed and mobile broadband applications [12]. WiMAX is specifically targeted toward high-data-rate network access in urban regions. While WiMAX offers high data rates to the end user, it must also provide quality of service (QoS), and it struggles here because of the increased noise inherent in a wireless system compared with a wired one.

Table 1. Latency Measurements of RS Code for Different values of n, k, m with its error correcting capabilities [12]
Errors in a wireless network, and specifically in WiMAX, occur in bursts due to the fading nature of the link, so ECCs that are burst-error correcting by nature are best suited. Reed-Solomon ECC is extremely well suited to burst error correction and is well established as a reliable, hardware-implementable solution.
Logeshwaran tested the suitability of RS ECC for use in WiMAX. The latency measurements can be seen in table 1; these numbers were generated using the Chien search algorithm to determine the error locations [12]. Figure 4 shows latency vs. code rate (CR): the latency increases roughly linearly with increasing code rate until the CR approaches the upper eighties, after which the latency levels off. This indicates that the code-rate-to-latency trade-off in this test set is maximized when the code rate is approximately 90%.
Figure 4. Latency vs. Code-Rate
Table 2. Errors vs. Decoder Latency [12]
In table 2, Logeshwaran’s data shows that as the number of bit errors increases, the decoding latency remains stable for each encoding symbol size. This indicates that Reed Solomon codes are efficient at decoding error-prone data streams such as those found in 802.16 implementations.
Table 3 shows the mandatory channel coding in IEEE 802.16 OFDM systems. Changlong used mode 3 to compare bit error rate (BER) vs. signal-to-noise ratio (SNR) for multiple numbers of test positions [3]. Figure 5 shows that, as expected, BER vs. SNR performance improves as the number of test positions increases, with the best results occurring when the most test positions are used. These results were generated using a Chase II decoding algorithm [3].
In WiMAX systems, the block error rate (BLER) measure is often used instead of the BER. BLER vs. SNR for modes 1 through 6 can be seen in figure 6. The BLER performance degrades in relation to the modulation of the signal. In addition, as the code rate decreases, performance improves even at the same modulation level.

Table 3. Mandatory Channel Coding in IEEE 802.16 OFDM Systems

Figure 5. Chase II performance for RS(64, 48, 8) with variable test positions

Figure 6. BLER performance comparison of RS ECC under the Chase II algorithm
Turbo coding in WiMAX is also an efficient alternative. Because WiMAX supports double-binary turbo codes [15], a better BER can be achieved than with single-binary turbo codes and other ECCs. Turbo decoding is extremely computationally intensive, so hardware implementations are preferred over software implementations for efficiency. The test results presented here were generated with the WiMAX decoder shown in figure 7; the state transitions for this design can be seen in figure 8 [15].
Figure 7. WiMAX Turbo Decoder
Figure 8. WiMAX turbo code trellis diagram
The trellis diagram shows the possible state transitions available to the system. In a typical turbo decoder, between 4 and 10 iterations are performed, with more iterations yielding a lower BER. In addition, this design uses tail-biting when encoding, which is more efficient than flush bits [15]. The algorithm is optimized by removing bit calculations once hard bit decisions are made. With these optimizations, the BER performance (fig. 9) relative to Eb/No is significantly better than the BLER vs. SNR achieved by the measured RS ECC. While SNR is an analog measure of signal power per unit of noise and Eb/No is a digital measure of bit energy per unit of noise, they are closely enough related to conclude that, in these testing scenarios, the WiMAX parallel turbo decoder attained significantly higher performance per unit of signal power.
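The relation between the two measures used in the comparison above can be made concrete. Under the common assumption that the symbol rate equals the occupied bandwidth, SNR(dB) = Eb/No(dB) + 10·log10(information bits per channel symbol); the QPSK, rate-1/2 parameters below are illustrative assumptions, not values from the cited tests.

```python
import math

def ebn0_to_snr_db(ebn0_db, bits_per_symbol, code_rate):
    """SNR(dB) = Eb/N0(dB) + 10*log10(info bits carried per channel symbol),
    assuming the symbol rate equals the occupied bandwidth."""
    return ebn0_db + 10.0 * math.log10(bits_per_symbol * code_rate)

# Example: QPSK (2 bits/symbol) with a rate-1/2 code carries exactly one
# information bit per symbol, so SNR and Eb/N0 coincide.
print(ebn0_to_snr_db(5.0, bits_per_symbol=2, code_rate=0.5))  # -> 5.0
```

For higher-order modulation or higher code rates the SNR sits above Eb/No, which is why comparing BER-vs-Eb/No curves with BLER-vs-SNR curves is only approximate.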
Figure 9. BER performance of the parallel turbo decoder
Figure 10. Proposed WiMAX LDPC implementation
Like turbo decoders, LDPC-based decoders can be implemented in hardware and are parallelizable. In a WiMAX testing scenario, LDPC codes are effective when using a belief-propagation algorithm. Using a Viterbi-based convolutional encoder and a hardware-based parallel LDPC decoder system (figure 10), Lin was able to achieve a BER of 10^-5 at an SNR below 13 dB (figure 11) [11]. This is a significant improvement over the results attained in a similar system using the RS ECC decoder. Both LDPC and RS ECC as tested present lower performance per unit of signal power than the parallel turbo decoder implementation.
Figure 11. BER Performance of the parallel LDPC
decoder
IV. CONCLUSION
This paper described the algorithmic design of three error-correcting codes and compared the performance of three independent implementations of those algorithms. The BER performance comparison clearly shows that the parallel turbo code performs best at similar power levels, but its hardware implementation is significantly more complicated than that of the RS ECC, which performs the worst of the three tested.
REFERENCES
[1] Abematsu, D., Ohtsuki, T., Kashima, T., Jarot, S.P., “LDPC Codes
for High Data Rate Multiband OFDM Systems over 1Gbps,"
Communications, Computers and Signal Processing, 2007. PacRim
2007. IEEE Pacific Rim Conference on, pp.338-341, 22-24 Aug.
2007
[2] Bo Zhou, Li Zhang, Jingyu Kang, Qin Huang, Tai, Y.Y., Shu Lin, Meina Xu, "Non-binary LDPC codes vs. Reed-Solomon codes," Information Theory and Applications Workshop, 2008, pp.175-184, Jan. 27 2008-Feb. 1 2008
[3] Changlong Xu, “Soft Decoding Algorithm for RS-CC Concatenated
Codes in WiMAX System," Vehicular Technology Conference,
2007. VTC2007-Spring. IEEE 65th , vol., no., pp.740-742, 22-25
April 2007
[4] Dan-Feng Zhao, Yu-Ping Wu, Ning-Ning Tong, "The Applied
Research of Convolutional Turbo Code Based on WiMAX
Protocol," Wireless Communications, Networking and Mobile
Computing, 2008. WiCOM '08. 4th International Conference on ,
pp.1-3, 12-14 Oct. 2008
[5] Eroz, M.,Sun, F.,Lee, L. “DVB-S2 Low Density Parity Check
Codes with Near Shannon Limit Performance”, Hughes Network
Systems White Paper
[6] Gracie, K., Hamon, M.-H., "Turbo and Turbo-Like Codes:
Principles and Applications in Telecommunications," Proceedings
of the IEEE , vol.95, no.6, pp.1228-1254, June 2007
[7] Guruswami, V., “Notes 6: Reed-Solomon, BCH, Reed-Muller, and
concatenated codes”, Carnegie Mellon University, Introduction to
Coding Theory, Feb 2010
[8] Junbin Chen, Lin Wang, Yong Li, "Performance comparison between non-binary LDPC codes and Reed-Solomon codes over noise bursts channels," Communications, Circuits and Systems, 2005. Proceedings. 2005 International Conference on, vol.1, pp. 1-4, 27-30 May 2005
[9] Kuhn, V., "Evaluating the performance of turbo codes and turbo-
coded modulation in a DS-CDMA environment," Selected Areas in
Communications, IEEE Journal on , vol.17, no.12, pp.2138-2147,
Dec 1999
[10] Lee, L.-N., Hammons, A.R., Jr., Feng-Wen Sun, Eroz, M.,
"Application and standardization of turbo codes in third-generation
high-speed wireless data services," Vehicular Technology, IEEE
Transactions on , vol.49, no.6, pp.2198-2207, Nov 2000
[11] Lin, L.-H., Wen, K.-A., "A Novel Application of LDPC-Based
Decoder for WiMAX Dual-mode Inner Encoder," Wireless
Technology, 2006. The 9th European Conference on , pp.178-181,
10-12 Sept. 2006
[12] Logeshwaran, R., Paul, I.J.L., "Performance study on the suitability
of Reed Solomon Codes in WiMAX," Wireless Communication
and Sensor Computing, 2010. ICWCSC 2010. International
Conference on , pp.1-4, 2-4 Jan. 2010
[13] Palanisamy, P., Sreedhar, T.V.S., "Performance analysis of Raptor
codes in Wi-Max systems over fading channel," TENCON 2008 -
2008 IEEE Region 10 Conference , pp.1-5, 19-21 Nov. 2008
[14] Palanisamy, P., Thilagavathy, R., Kumar, M.R., Srihari, A.,
"Efficient Realization of CORDIC based LDPC Decoder for
WiMax System," Signal Processing, Communications and
Networking, 2008. ICSCN '08. International Conference on , pp.41-
45, 4-6 Jan. 2008
[15] Roth, J., Manjikian, N., Sudharsanan, S., “Performance
optimization and parallelization of turbo decoding for software-
defined radio," Electrical and Computer Engineering, 2009.
CCECE '09. Canadian Conference on , pp.804-809, 3-6 May 2009
[16] Sartipi, M., Fekri, F., "Source and channel coding in wireless sensor
networks using LDPC codes," Sensor and Ad Hoc Communications
and Networks, 2004. IEEE SECON 2004. 2004 First Annual IEEE
Communications Society Conference on , pp. 309- 316, 4-7 Oct.
2004
[17] Sivasubramanian, B., Leib, H., “Fixed-Rate Raptor Code
Performance Over Correlated Rayleigh Fading Channels,"
Electrical and Computer Engineering, 2007. CCECE 2007.
Canadian Conference on , pp.912-915, 22-26 April 2007
[18] Spielman, D., “The Complexity of Error-Correcting Codes,”
Fundamentals of Computation Theory, Springer Berlin /
Heidelberg, Apr 2006
[19] Vilaipornsawai, U., Soleymani, M.R., “Turbo codes for satellite and
wireless ATM," Information Technology: Coding and Computing,
2001. Proceedings. International Conference on , pp.120-124, Apr
2001